Diffstat (limited to 'docs/testing'): 57 files changed, 2187 insertions, 474 deletions
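Because every file touched by this change is reStructuredText documentation, one way to review it is to render the docs locally and read the generated HTML. A minimal sketch, assuming a plain Sphinx setup (the exact builder entry point and theme used by the project may differ)::

    # from the root of the yardstick repository
    pip install sphinx sphinx_rtd_theme
    sphinx-build -b html docs/ _build/html
    # open _build/html/index.html in a browser to check the rendered pages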
diff --git a/docs/testing/developer/devguide/devguide.rst b/docs/testing/developer/devguide/devguide.rst
index 4fe01c12b..2065f6e0d 100755
--- a/docs/testing/developer/devguide/devguide.rst
+++ b/docs/testing/developer/devguide/devguide.rst
@@ -47,14 +47,14 @@ your field of interest is.
 Where can I find some help to start?
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-.. _`user guide`: http://artifacts.opnfv.org/yardstick/danube/1.0/docs/stesting_user_userguide/index.html
+.. _`user guide`: https://artifacts.opnfv.org/yardstick/docs/testing_user_userguide/index.html
 .. _`wiki page`: https://wiki.opnfv.org/display/yardstick/
 This guide is made for you. You can have a look at the `user guide`_. There
 are also references on documentation, video tutorials, tips in the
-project `wiki page`_. You can also directly contact us by mail with [Yardstick]
-prefix in the subject at opnfv-tech-discuss@lists.opnfv.org or on the IRC chan
-#opnfv-yardstick.
+project `wiki page`_. You can also directly contact us by mail with
+``#yardstick`` or ``[yardstick]`` prefix in the subject at
+``opnfv-tech-discuss@lists.opnfv.org`` or on the IRC channel ``#opnfv-yardstick``.
 Yardstick developer areas
@@ -401,7 +401,7 @@ Gerrit & JIRA introduction
 ++++++++++++++++++++++++++
 .. _Gerrit: https://www.gerritcodereview.com/
-.. _`OPNFV Gerrit`: http://gerrit.opnfv.org/
+.. _`OPNFV Gerrit`: http://gerrit.opnfv.org/gerrit
 .. _link: https://identity.linuxfoundation.org/
 .. _JIRA: https://jira.opnfv.org/secure/Dashboard.jspa
@@ -485,7 +485,7 @@ Git repository::
 JIRA: YARDSTICK-XXX
-.. _`this document`: http://chris.beams.io/posts/git-commit/
+.. _`this document`: https://chris.beams.io/posts/git-commit/
 The message that is required for the commit should follow a specific set of
 rules. This practice allows to standardize the description messages attached
@@ -510,8 +510,8 @@ Yardstick committers and contributors to review your codes.
    :alt: Gerrit for code review
 You can find a list Yardstick people
-`here <https://wiki.opnfv.org/display/yardstick/People>`_, or use the
-``yardstick-reviewers`` and ``yardstick-committers`` groups in gerrit.
+`here <https://wiki.opnfv.org/display/yardstick/Yardstick+People>`_, or use
+the ``yardstick-reviewers`` and ``yardstick-committers`` groups in gerrit.
 Modify the code under review in Gerrit
 ++++++++++++++++++++++++++++++++++++++
diff --git a/docs/testing/developer/devguide/devguide_nsb_prox.rst b/docs/testing/developer/devguide/devguide_nsb_prox.rst
index 79990055a..44a216b75 100755
--- a/docs/testing/developer/devguide/devguide_nsb_prox.rst
+++ b/docs/testing/developer/devguide/devguide_nsb_prox.rst
@@ -8,14 +8,17 @@
 for Telco customers, investigate the impact of new Intel compute,
 network and storage technologies, characterize performance, and develop
 optimal system architectures and configurations.
+NSB PROX Supports Baremetal, Openstack and standalone configuration.
+
 .. contents::
 Prerequisites
 =============
-In order to integrate PROX tests into NSB, the following prerequisites are required.
+In order to integrate PROX tests into NSB, the following prerequisites are
+required.
-.. _`dpdk wiki page`: http://dpdk.org/
+.. _`dpdk wiki page`: https://www.dpdk.org/
 .. _`yardstick wiki page`: https://wiki.opnfv.org/display/yardstick/
 .. _`Prox documentation`: https://01.org/intel-data-plane-performance-demonstrators/documentation/prox-documentation
 ..
_`openstack wiki page`: https://wiki.openstack.org/wiki/Main_Page @@ -159,11 +162,13 @@ A NSB Prox test is composed of the following components :- ``tc_prox_heat_context_vpe-4.yaml``. This file describes the components of the test, in the case of openstack the network description and server descriptions, in the case of baremetal the hardware - description location. It also contains the name of the Traffic Generator, the SUT config file - and the traffic profile description, all described below. See nsb-test-description-label_ + description location. It also contains the name of the Traffic Generator, + the SUT config file and the traffic profile description, all described below. + See `Test Description File`_ -* Traffic Profile file. Example ``prox_binsearch.yaml``. This describes the packet size, tolerated - loss, initial line rate to start traffic at, test interval etc See nsb-traffic-profile-label_ +* Traffic Profile file. Example ``prox_binsearch.yaml``. This describes the + packet size, tolerated loss, initial line rate to start traffic at, test + interval etc See `Traffic Profile File`_ * Traffic Generator Config file. Usually called ``gen_<test>-<ports>.cfg``. @@ -176,7 +181,7 @@ A NSB Prox test is composed of the following components :- under test. Example traffic generator config file ``gen_l2fwd-4.cfg`` - See nsb-traffic-generator-label_ + See `Traffic Generator Config file`_ * SUT Config file. Usually called ``handle_<test>-<ports>.cfg``. @@ -190,7 +195,7 @@ A NSB Prox test is composed of the following components :- the ports to the Traffic Verifier tasks of the Traffic Generator. Example traffic generator config file ``handle_l2fwd-4.cfg`` - See nsb-sut-generator-label_ + See `SUT Config File`_ * NSB PROX Baremetal Configuration file. Usually called ``prox-baremetal-<ports>.yaml`` @@ -199,12 +204,12 @@ A NSB Prox test is composed of the following components :- This is required for baremetal only. This describes hardware, NICs, IP addresses, Network drivers, usernames and passwords. - See baremetal-config-label_ + See `Baremetal Configuration File`_ * Grafana Dashboard. Usually called ``Prox_<context>_<test>-<port>-<DateAndTime>.json`` where - * <context> Is either ``BM`` or ``heat`` + * <context> Is ``BM``,``heat``,``ovs_dpdk`` or ``sriov`` * <test> Is the a one or 2 word description of the test. * <port> is the number of ports used express as ``2Port`` or ``4Port`` * <DateAndTime> is the Date and Time expressed as a string. @@ -214,15 +219,15 @@ A NSB Prox test is composed of the following components :- Other files may be required. These are test specific files and will be covered later. -.. _nsb-test-description-label: -**Test Description File** +Test Description File +--------------------- -Here we will discuss the test description for both -baremetal and openstack. +Here we will discuss the test description for +baremetal, openstack and standalone. -*Test Description File for Baremetal* -------------------------------------- +Test Description File for Baremetal +----------------------------------- This section will introduce the meaning of the Test case description file. We will use ``tc_prox_baremetal_l2fwd-2.yaml`` as an example to @@ -235,7 +240,8 @@ show you how to understand the test description file. Now let's examine the components of the file in detail 1. ``traffic_profile`` - This specifies the traffic profile for the - test. In this case ``prox_binsearch.yaml`` is used. See nsb-traffic-profile-label_ + test. In this case ``prox_binsearch.yaml`` is used. 
See + `Traffic Profile File`_ 2. ``topology`` - This is either ``prox-tg-topology-1.yaml`` or ``prox-tg-topology-2.yaml`` or ``prox-tg-topology-4.yaml`` @@ -247,10 +253,13 @@ Now let's examine the components of the file in detail 4. ``interface_speed_gbps`` - This is an optional parameter. If not present the system defaults to 10Gbps. This defines the speed of the interfaces. -5. ``prox_path`` - Location of the Prox executable on the traffic +5. ``collectd`` - (Optional) This specifies we want to collect NFVI statistics + like CPU Utilization, + +6. ``prox_path`` - Location of the Prox executable on the traffic generator (Either baremetal or Openstack Virtual Machine) -6. ``prox_config`` - This is the ``SUT Config File``. +7. ``prox_config`` - This is the ``SUT Config File``. In this case it is ``handle_l2fwd-2.cfg`` A number of additional parameters can be added. This example @@ -259,6 +268,14 @@ Now let's examine the components of the file in detail options: interface_speed_gbps: 10 + traffic_config: + tolerated_loss: 0.01 + test_precision: 0.01 + packet_sizes: [64] + duration: 30 + lower_bound: 0.0 + upper_bound: 100.0 + vnf__0: prox_path: /opt/nsb_bin/prox prox_config: ``configs/handle_vpe-4.cfg`` @@ -272,55 +289,60 @@ Now let's examine the components of the file in detail ``configs/vpe_rules.lua`` : ```` prox_generate_parameter: True - ``interface_speed_gbps`` - this specifies the speed of the interface - in Gigabits Per Second. This is used to calculate pps(packets per second). - If the interfaces are of different speeds, then this specifies the speed - of the slowest interface. This parameter is optional. If omitted the - interface speed defaults to 10Gbps. + ``interface_speed_gbps`` - this specifies the speed of the interface + in Gigabits Per Second. This is used to calculate pps(packets per second). + If the interfaces are of different speeds, then this specifies the speed + of the slowest interface. This parameter is optional. If omitted the + interface speed defaults to 10Gbps. - ``prox_files`` - this specified that a number of addition files - need to be provided for the test to run correctly. This files - could provide routing information,hashing information or a - hashing algorithm and ip/mac information. + ``traffic_config`` - This allows the values here to override the values in + in the traffic_profile file. e.g. "prox_binsearch.yaml". Values provided + here override values provided in the "traffic_profile" section of the + traffic_profile file. Some, all or none of the values can be provided here. - ``prox_generate_parameter`` - this specifies that the NSB application - is required to provide information to the nsb Prox in the form - of a file called ``parameters.lua``, which contains information - retrieved from either the hardware or the openstack configuration. + The values describes the packet size, tolerated loss, initial line rate + to start traffic at, test interval etc See `Traffic Profile File`_ -7. ``prox_args`` - this specifies the command line arguments to start - prox. See `prox command line`_. + ``prox_files`` - this specified that a number of addition files + need to be provided for the test to run correctly. This files + could provide routing information,hashing information or a + hashing algorithm and ip/mac information. 
+ + ``prox_generate_parameter`` - this specifies that the NSB application + is required to provide information to the nsb Prox in the form + of a file called ``parameters.lua``, which contains information + retrieved from either the hardware or the openstack configuration. -8. ``prox_config`` - This specifies the Traffic Generator config file. +8. ``prox_args`` - this specifies the command line arguments to start + prox. See `prox command line`_. -9. ``runner`` - This is set to ``ProxDuration`` - This specifies that the - test runs for a set duration. Other runner types are available - but it is recommend to use ``ProxDuration`` +9. ``prox_config`` - This specifies the Traffic Generator config file. - The following parrameters are supported +10. ``runner`` - This is set to ``ProxDuration`` - This specifies that the + test runs for a set duration. Other runner types are available + but it is recommend to use ``ProxDuration``. The following parameters + are supported - ``interval`` - (optional) - This specifies the sampling interval. - Default is 1 sec + ``interval`` - (optional) - This specifies the sampling interval. + Default is 1 sec - ``sampled`` - (optional) - This specifies if sampling information is - required. Default ``no`` + ``sampled`` - (optional) - This specifies if sampling information is + required. Default ``no`` - ``duration`` - This is the length of the test in seconds. Default - is 60 seconds. + ``duration`` - This is the length of the test in seconds. Default + is 60 seconds. - ``confirmation`` - This specifies the number of confirmation retests to - be made before deciding to increase or decrease line speed. Default 0. + ``confirmation`` - This specifies the number of confirmation retests to + be made before deciding to increase or decrease line speed. Default 0. -10. ``context`` - This is ``context`` for a 2 port Baremetal configuration. +11. ``context`` - This is ``context`` for a 2 port Baremetal configuration. If a 4 port configuration was required then file ``prox-baremetal-4.yaml`` would be used. This is the NSB Prox baremetal configuration file. -.. _nsb-traffic-profile-label: - -*Traffic Profile file* ----------------------- +Traffic Profile File +-------------------- This describes the details of the traffic flow. In this case ``prox_binsearch.yaml`` is used. @@ -330,11 +352,12 @@ This describes the details of the traffic flow. In this case :alt: NSB PROX Traffic Profile -1. ``name`` - The name of the traffic profile. This name should match the name specified in the - ``traffic_profile`` field in the Test Description File. +1. ``name`` - The name of the traffic profile. This name should match the + name specified in the ``traffic_profile`` field in the Test + Description File. -2. ``traffic_type`` - This specifies the type of traffic pattern generated, This name matches - class name of the traffic generator See:: +2. ``traffic_type`` - This specifies the type of traffic pattern generated, + This name matches class name of the traffic generator. See:: network_services/traffic_profile/prox_binsearch.py class ProxBinSearchProfile(ProxProfile) @@ -392,19 +415,20 @@ See this prox_vpe.yaml as example:: lower_bound: 0.0 upper_bound: 100.0 -*Test Description File for Openstack* -------------------------------------- +Test Description File for Openstack +----------------------------------- We will use ``tc_prox_heat_context_l2fwd-2.yaml`` as a example to show you how to understand the test description file. -.. 
image:: images/PROX_Test_HEAT_Script1.png - :width: 800px - :alt: NSB PROX Test Description File - Part 1 + .. image:: images/PROX_Test_HEAT_Script1.png + :width: 800px + :alt: NSB PROX Test Description File - Part 1 -.. image:: images/PROX_Test_HEAT_Script2.png - :width: 800px - :alt: NSB PROX Test Description File - Part 2 + + .. image:: images/PROX_Test_HEAT_Script2.png + :width: 800px + :alt: NSB PROX Test Description File - Part 2 Now lets examine the components of the file in detail @@ -460,10 +484,64 @@ F. ``networks`` - is composed of a management network labeled ``mgmt`` port_security_enabled: False enable_dhcp: 'false' -.. _nsb-traffic-generator-label: +Test Description File for Standalone +------------------------------------ + +We will use ``tc_prox_ovs-dpdk_l2fwd-2.yaml`` as a example to show +you how to understand the test description file. + + .. image:: images/PROX_Test_ovs_dpdk_Script_1.png + :width: 800px + :alt: NSB PROX Test Standalone Description File - Part 1 -*Traffic Generator Config file* -------------------------------- + .. image:: images/PROX_Test_ovs_dpdk_Script_2.png + :width: 800px + :alt: NSB PROX Test Standalone Description File - Part 2 + +Now lets examine the components of the file in detail + +Sections 1 to 9 are exactly the same in Baremetal and in Heat. Section +``10`` is replaced with sections A to F. Section 10 was for a baremetal +configuration file. This has no place in a heat configuration. + +A. ``file`` - Pod file for Baremetal Traffic Generator configuration: + IP Address, User/Password & Interfaces + +B. ``type`` - This defines the type of standalone configuration. + Possible values are ``StandaloneOvsDpdk`` or ``StandaloneSriov`` + +C. ``file`` - Pod file for Standalone host configuration: + IP Address, User/Password & Interfaces + +D. ``vm_deploy`` - Deploy a new VM or use an existing VM + +E. ``ovs_properties`` - OVS Version, DPDK Version and configuration + to use. + +F. ``flavor``- NSB image generated when installing NSB using ansible-playbook:: + + ram- Configurable RAM for SUT VM + extra_specs + hw:cpu_sockets - Configurable number of Sockets for SUT VM + hw:cpu_cores - Configurable number of Cores for SUT VM + hw:cpu_threads- Configurable number of Threads for SUT VM + +G. ``mgmt`` - Management port of the SUT VM. Preconfig needed on TG & SUT host machines. + is the system under test. + + +H. ``xe0`` - Upline Network port + +I. ``xe1`` - Downline Network port + +J. ``uplink_0`` - Uplink Phy port of the NIC on the host. This will be used to create + the Virtual Functions. + +K. ``downlink_0`` - Downlink Phy port of the NIC on the host. This will be used to + create the Virtual Functions. + +Traffic Generator Config file +----------------------------- This section will describe the traffic generator config file. This is the same for both baremetal and heat. See this example @@ -704,23 +782,29 @@ Now let's examine the components of the file in detail physical core improves performance, however sometimes it is optimal to move task to a separate core. This is best decided by checking performance. - c. ``mode=lat`` - Specifies the action carried out by this task on this core. Supported modes are: acl, - classify, drop, gredecap, greencap, ipv6_decap, ipv6_encap, l2fwd, lbnetwork, lbpos, lbqinq, nop, - police, qinqdecapv4, qinqencapv4, qos, routing, impair, lb5tuple, mirror, unmpls, tagmpls, - nat, decapnsh, encapnsh, gen, genl4 and lat. This task(0) per core(3) receives packets on port. - d. 
``rx port=p0`` - The port to receive packets on ``Port 0``. Core 4 will receive packets on ``Port 1``. - e. ``lat pos=42`` - Describes where to put a 4-byte timestamp in the packet. Note that the packet length should - be longer than ``lat pos`` + 4 bytes to avoid truncation of the timestamp. It defines where the timestamp is - to be read from. Note that the SUT workload might cause the position of the timestamp to change - (i.e. due to encapsulation). - -.. _nsb-sut-generator-label: - -*SUT Config file* -------------------------------- + c. ``mode=lat`` - Specifies the action carried out by this task on this + core. + Supported modes are: ``acl``, ``classify``, ``drop``, ``gredecap``, + ``greencap``, ``ipv6_decap``, ``ipv6_encap``, ``l2fwd``, ``lbnetwork``, + ``lbpos``, ``lbqinq``, ``nop``, ``police``, ``qinqdecapv4``, + ``qinqencapv4``, ``qos``, ``routing``, ``impair``, ``lb5tuple``, + ``mirror``, ``unmpls``, ``tagmpls``, ``nat``, ``decapnsh``, ``encapnsh``, + ``gen``, ``genl4`` and ``lat``. This task(0) per core(3) receives packets + on port. + d. ``rx port=p0`` - The port to receive packets on ``Port 0``. Core 4 will + receive packets on ``Port 1``. + e. ``lat pos=42`` - Describes where to put a 4-byte timestamp in the packet. + Note that the packet length should be longer than ``lat pos`` + 4 bytes + to avoid truncation of the timestamp. It defines where the timestamp is + to be read from. Note that the SUT workload might cause the position of + the timestamp to change (i.e. due to encapsulation). + +SUT Config File +--------------- This section will describes the SUT(VNF) config file. This is the same for both -baremetal and heat. See this example of ``handle_l2fwd_multiflow-2.cfg`` to explain the options. +baremetal and heat. See this example of ``handle_l2fwd_multiflow-2.cfg`` to +explain the options. .. image:: images/PROX_Handle_2port_cfg.png :width: 1400px @@ -730,13 +814,15 @@ See `prox options`_ for details Now let's examine the components of the file in detail -1. ``[eal options]`` - same as the Generator config file. This specified the EAL (Environmental Abstraction Layer) - options. These are default values and are not changed. - See `dpdk wiki page`_. +1. ``[eal options]`` - same as the Generator config file. This specified the + EAL (Environmental Abstraction Layer) options. These are default values and + are not changed. See `dpdk wiki page`_. -2. ``[port 0]`` - This section describes the DPDK Port. The number following the keyword ``port`` usually refers to the DPDK Port Id. usually starting from ``0``. - Because you can have multiple ports this entry usually repeated. Eg. For a 2 port setup ``[port0]`` and ``[port 1]`` and for a 4 port setup ``[port 0]``, ``[port 1]``, - ``[port 2]`` and ``[port 3]``:: +2. ``[port 0]`` - This section describes the DPDK Port. The number following + the keyword ``port`` usually refers to the DPDK Port Id. usually starting + from ``0``. Because you can have multiple ports this entry usually + repeated. E.g. For a 2 port setup ``[port0]`` and ``[port 1]`` and for a 4 + port setup ``[port 0]``, ``[port 1]``, ``[port 2]`` and ``[port 3]``:: [port 0] name=if0 @@ -745,10 +831,14 @@ Now let's examine the components of the file in detail tx desc=2048 promiscuous=yes - a. In this example ``name =if0`` assigned the name ``if0`` to the port. Any name can be assigned to a port. - b. ``mac=hardware`` sets the MAC address assigned by the hardware to data from this port. - c. 
``rx desc=2048`` sets the number of available descriptors to allocate for receive packets. This can be changed and can effect performance. - d. ``tx desc=2048`` sets the number of available descriptors to allocate for transmit packets. This can be changed and can effect performance. + a. In this example ``name =if0`` assigned the name ``if0`` to the port. Any + name can be assigned to a port. + b. ``mac=hardware`` sets the MAC address assigned by the hardware to data + from this port. + c. ``rx desc=2048`` sets the number of available descriptors to allocate + for receive packets. This can be changed and can effect performance. + d. ``tx desc=2048`` sets the number of available descriptors to allocate + for transmit packets. This can be changed and can effect performance. e. ``promiscuous=yes`` this enables promiscuous mode for this port. 3. ``[defaults]`` - Here default operations and settings can be over written.:: @@ -757,35 +847,46 @@ Now let's examine the components of the file in detail mempool size=8K memcache size=512 - a. In this example ``mempool size=8K`` the number of mbufs per task is altered. Altering this value could effect performance. See `prox options`_ for details. - b. ``memcache size=512`` - number of mbufs cached per core, default is 256 this is the cache_size. Altering this value could effect performance. + a. In this example ``mempool size=8K`` the number of mbufs per task is + altered. Altering this value could effect performance. See + `prox options`_ for details. + b. ``memcache size=512`` - number of mbufs cached per core, default is 256 + this is the cache_size. Altering this value could affect performance. -4. ``[global]`` - Here application wide setting are supported. Things like application name, start time, duration and memory configurations can be set here. +4. ``[global]`` - Here application wide setting are supported. Things like + application name, start time, duration and memory configurations can be set + here. In this example.:: [global] start time=5 name=Basic Gen - a. ``start time=5`` Time is seconds after which average stats will be started. + a. ``start time=5`` Time is seconds after which average stats will be + started. b. ``name=Handle L2FWD Multiflow (2x)`` Name of the configuration. -5. ``[core 0]`` - This core is designated the master core. Every Prox application must have a master core. The master mode must be assigned to +5. ``[core 0]`` - This core is designated the master core. Every Prox + application must have a master core. The master mode must be assigned to exactly one task, running alone on one core.:: [core 0] mode=master -6. ``[core 1]`` - This describes the activity on core 1. Cores can be configured by means of a set of [core #] sections, where # represents either: +6. ``[core 1]`` - This describes the activity on core 1. Cores can be + configured by means of a set of [core #] sections, where # represents + either: - a. an absolute core number: e.g. on a 10-core, dual socket system with hyper-threading, - cores are numbered from 0 to 39. + a. an absolute core number: e.g. on a 10-core, dual socket system with + hyper-threading, cores are numbered from 0 to 39. - b. PROX allows a core to be identified by a core number, the letter 's', and a socket number. - However NSB PROX is hardware agnostic (physical and virtual configurations are the same) it - is advisable no to use physical core numbering. + b. PROX allows a core to be identified by a core number, the letter 's', + and a socket number. 
However NSB PROX is hardware agnostic (physical and + virtual configurations are the same) it is advisable no to use physical + core numbering. - Each core can be assigned with a set of tasks, each running one of the implemented packet processing modes.:: + Each core can be assigned with a set of tasks, each running one of the + implemented packet processing modes.:: [core 1] name=none @@ -796,20 +897,33 @@ Now let's examine the components of the file in detail tx port=if1 a. ``name=none`` - No name assigned to the core. - b. ``task=0`` - Each core can run a set of tasks. Starting with ``0``. Task 1 can be defined later in this core or - can be defined in another ``[core 1]`` section with ``task=1`` later in configuration file. Sometimes running - multiple task related to the same packet on the same physical core improves performance, however sometimes it - is optimal to move task to a separate core. This is best decided by checking performance. - c. ``mode=l2fwd`` - Specifies the action carried out by this task on this core. Supported modes are: acl, - classify, drop, gredecap, greencap, ipv6_decap, ipv6_encap, l2fwd, lbnetwork, lbpos, lbqinq, nop, - police, qinqdecapv4, qinqencapv4, qos, routing, impair, lb5tuple, mirror, unmpls, tagmpls, - nat, decapnsh, encapnsh, gen, genl4 and lat. This code does ``l2fwd`` .. ie it does the L2FWD. - - d. ``dst mac=@@tester_mac1`` - The destination mac address of the packet will be set to the MAC address of ``Port 1`` of destination device. (The Traffic Generator/Verifier) - e. ``rx port=if0`` - This specifies that the packets are received from ``Port 0`` called if0 - f. ``tx port=if1`` - This specifies that the packets are transmitted to ``Port 1`` called if1 - - If this example we receive a packet on core on a port, carry out operation on the packet on the core and transmit it on on another port still using the same task on the same core. + b. ``task=0`` - Each core can run a set of tasks. Starting with ``0``. + Task 1 can be defined later in this core or can be defined in another + ``[core 1]`` section with ``task=1`` later in configuration file. + Sometimes running multiple task related to the same packet on the same + physical core improves performance, however sometimes it is optimal to + move task to a separate core. This is best decided by checking + performance. + c. ``mode=l2fwd`` - Specifies the action carried out by this task on this + core. Supported modes are: ``acl``, ``classify``, ``drop``, + ``gredecap``, ``greencap``, ``ipv6_decap``, ``ipv6_encap``, ``l2fwd``, + ``lbnetwork``, ``lbpos``, ``lbqinq``, ``nop``, ``police``, + ``qinqdecapv4``, ``qinqencapv4``, ``qos``, ``routing``, ``impair``, + ``lb5tuple``, ``mirror``, ``unmpls``, ``tagmpls``, ``nat``, + ``decapnsh``, ``encapnsh``, ``gen``, ``genl4`` and ``lat``. This code + does ``l2fwd``. i.e. it does the L2FWD. + + d. ``dst mac=@@tester_mac1`` - The destination mac address of the packet + will be set to the MAC address of ``Port 1`` of destination device. + (The Traffic Generator/Verifier) + e. ``rx port=if0`` - This specifies that the packets are received from + ``Port 0`` called if0 + f. ``tx port=if1`` - This specifies that the packets are transmitted to + ``Port 1`` called if1 + + In this example we receive a packet on core on a port, carry out operation + on the packet on the core and transmit it on on another port still using + the same task on the same core. 
On some implementation you may wish to use multiple tasks, like this.:: @@ -829,59 +943,76 @@ Now let's examine the components of the file in detail tx port=if0 drop=no - In this example you can see Core 1/Task 0 called ``rx_task`` receives the packet from if0 and perform the l2fwd. However instead of sending the packet to a - port it sends it to a core see ``tx cores=1t1``. In this case it sends it to Core 1/Task 1. + In this example you can see Core 1/Task 0 called ``rx_task`` receives the + packet from if0 and perform the l2fwd. However instead of sending the + packet to a port it sends it to a core see ``tx cores=1t1``. In this case it + sends it to Core 1/Task 1. - Core 1/Task 1 called ``l2fwd_if0``, receives the packet, not from a port but from the ring. See ``rx ring=yes``. It does not perform any operation on the packet See ``mode=none`` - and sends the packets to ``if0`` see ``tx port=if0``. + Core 1/Task 1 called ``l2fwd_if0``, receives the packet, not from a port but + from the ring. See ``rx ring=yes``. It does not perform any operation on the + packet See ``mode=none`` and sends the packets to ``if0`` see + ``tx port=if0``. - It is also possible to implement more complex operations be chaining multiple operations in sequence and using rings to pass packets from one core to another. + It is also possible to implement more complex operations by chaining + multiple operations in sequence and using rings to pass packets from one + core to another. - In thus example we show a Broadband Network Gateway (BNG) with Quality of Service (QoS). Communication from task to task is via rings. + In this example, we show a Broadband Network Gateway (BNG) with Quality of + Service (QoS). Communication from task to task is via rings. .. image:: images/PROX_BNG_QOS.png :width: 1000px :alt: NSB PROX Config File for BNG_QOS -*Baremetal Configuration file* ------------------------------- - -.. _baremetal-config-label: +Baremetal Configuration File +---------------------------- -This is required for baremetal testing. It describes the IP address of the various ports, the Network devices drivers and MAC addresses and the network +This is required for baremetal testing. It describes the IP address of the +various ports, the Network devices drivers and MAC addresses and the network configuration. -In this example we will describe a 2 port configuration. This file is the same for all 2 port NSB Prox tests on the same platforms/configuration. +In this example we will describe a 2 port configuration. This file is the same +for all 2 port NSB Prox tests on the same platforms/configuration. .. image:: images/PROX_Baremetal_config.png :width: 1000px :alt: NSB PROX Yardstick Config -Now lets describe the sections of the file. - - 1. ``TrafficGen`` - This section describes the Traffic Generator node of the test configuration. The name of the node ``trafficgen_1`` must match the node name - in the ``Test Description File for Baremetal`` mentioned earlier. The password attribute of the test needs to be configured. All other parameters - can remain as default settings. - 2. ``interfaces`` - This defines the DPDK interfaces on the Traffic Generator. - 3. ``xe0`` is DPDK Port 0. ``lspci`` and `` ./dpdk-devbind.py -s`` can be used to provide the interface information. ``netmask`` and ``local_ip`` should not be changed - 4. ``xe1`` is DPDK Port 1. If more than 2 ports are required then ``xe1`` section needs to be repeated and modified accordingly. - 5. 
``vnf`` - This section describes the SUT of the test configuration. The name of the node ``vnf`` must match the node name in the - ``Test Description File for Baremetal`` mentioned earlier. The password attribute of the test needs to be configured. All other parameters - can remain as default settings +Now let's describe the sections of the file. + + 1. ``TrafficGen`` - This section describes the Traffic Generator node of the + test configuration. The name of the node ``trafficgen_1`` must match the + node name in the ``Test Description File for Baremetal`` mentioned + earlier. The password attribute of the test needs to be configured. All + other parameters can remain as default settings. + 2. ``interfaces`` - This defines the DPDK interfaces on the Traffic + Generator. + 3. ``xe0`` is DPDK Port 0. ``lspci`` and ``./dpdk-devbind.py -s`` can be used + to provide the interface information. ``netmask`` and ``local_ip`` should + not be changed + 4. ``xe1`` is DPDK Port 1. If more than 2 ports are required then ``xe1`` + section needs to be repeated and modified accordingly. + 5. ``vnf`` - This section describes the SUT of the test configuration. The + name of the node ``vnf`` must match the node name in the + ``Test Description File for Baremetal`` mentioned earlier. The password + attribute of the test needs to be configured. All other parameters can + remain as default settings 6. ``interfaces`` - This defines the DPDK interfaces on the SUT 7. ``xe0`` - Same as 3 but for the ``SUT``. 8. ``xe1`` - Same as 4 but for the ``SUT`` also. 9. ``routing_table`` - All parameters should remain unchanged. 10. ``nd_route_tbl`` - All parameters should remain unchanged. -*Grafana Dashboard* -------------------- +Grafana Dashboard +----------------- -The grafana dashboard visually displays the results of the tests. The steps required to produce a grafana dashboard are described here. +The grafana dashboard visually displays the results of the tests. The steps +required to produce a grafana dashboard are described here. .. _yardstick-config-label: - a. Configure ``yardstick`` to use influxDB to store test results. See file ``/etc/yardstick/yardstick.conf``. + a. Configure ``yardstick`` to use influxDB to store test results. See file + ``/etc/yardstick/yardstick.conf``. .. image:: images/PROX_Yardstick_config.png :width: 1000px @@ -890,10 +1021,12 @@ The grafana dashboard visually displays the results of the tests. The steps requ 1. Specify the dispatcher to use influxDB to store results. 2. "target = .. " - Specify location of influxDB to store results. "db_name = yardstick" - name of database. Do not change - "username = root" - username to use to store result. (Many tests are run as root) + "username = root" - username to use to store result. (Many tests are + run as root) "password = ... " - Please set to root user password - b. Deploy InfludDB & Grafana. See how to Deploy InfluxDB & Grafana. See `grafana deployment`_. + b. Deploy InfludDB & Grafana. See how to Deploy InfluxDB & Grafana. See + `grafana deployment`_. c. Generate the test data. Run the tests as follows .:: yardstick --debug task start tc_prox_<context>_<test>-ports.yaml @@ -910,7 +1043,8 @@ How to run NSB Prox Test on an baremetal environment In order to run the NSB PROX test. - 1. Install NSB on Traffic Generator node and Prox in SUT. See `NSB Installation`_ + 1. Install NSB on Traffic Generator node and Prox in SUT. See + `NSB Installation`_ 2. To enter container:: @@ -922,8 +1056,8 @@ In order to run the NSB PROX test. 
cd /home/opnfv/repos/yardstick/samples/vnf_samples/nsut/prox - b. Install prox-baremetal-2.yam and prox-baremetal-4.yaml for that topology - into this directory as per baremetal-config-label_ + b. Install prox-baremetal-2.yam and prox-baremetal-4.yaml for that + topology into this directory as per `Baremetal Configuration File`_ c. Install and configure ``yardstick.conf`` :: @@ -968,12 +1102,11 @@ Frequently Asked Questions Here is a list of frequently asked questions. -*NSB Prox does not work on Baremetal, How do I resolve this?* -------------------------------------------------------------- +NSB Prox does not work on Baremetal, How do I resolve this? +----------------------------------------------------------- -If PROX NSB does not work on baremetal, problem is either in network configuration or test file. - -*Solution* +If PROX NSB does not work on baremetal, problem is either in network +configuration or test file. 1. Verify network configuration. Execute existing baremetal test.:: @@ -1011,13 +1144,11 @@ If PROX NSB does not work on baremetal, problem is either in network configurati See ``Link detected`` if ``yes`` .... Cable is good. If ``no`` you have an issue with your cable/port. -2. If existing baremetal works then issue is with your test. Check the traffic generator gen_<test>-<ports>.cfg to ensure - it is producing a valid packet. - -*How do I debug NSB Prox on Baremetal?* ---------------------------------------- +2. If existing baremetal works then issue is with your test. Check the traffic + generator gen_<test>-<ports>.cfg to ensure it is producing a valid packet. -*Solution* +How do I debug NSB Prox on Baremetal? +------------------------------------- 1. Execute the test as follows:: @@ -1033,7 +1164,8 @@ If PROX NSB does not work on baremetal, problem is either in network configurati cd /opt/nsb_bin/prox -f /tmp/handle_<test>-<ports>.cfg -4. Now let's examine the Generator Output. In this case the output of gen_l2fwd-4.cfg. +4. Now let's examine the Generator Output. In this case the output of + ``gen_l2fwd-4.cfg``. .. image:: images/PROX_Gen_GUI.png :width: 1000px @@ -1048,10 +1180,12 @@ If PROX NSB does not work on baremetal, problem is either in network configurati It appears what is transmitted is received. .. Caution:: - The number of packets MAY not exactly match because the ports are read in sequence. + The number of packets MAY not exactly match because the ports are read in + sequence. .. Caution:: - What is transmitted on PORT X may not always be received on same port. Please check the Test scenario. + What is transmitted on PORT X may not always be received on same port. + Please check the Test scenario. 5. Now lets examine the SUT Output @@ -1080,26 +1214,24 @@ If PROX NSB does not work on baremetal, problem is either in network configurati dump_tx 1 0 1 -*NSB Prox works on Baremetal but not in Openstack. How do I resolve this?* --------------------------------------------------------------------------- +NSB Prox works on Baremetal but not in Openstack. How do I resolve this? +------------------------------------------------------------------------ -NSB Prox on Baremetal is a lot more forgiving than NSB Prox on Openstack. A badly -formed packed may still work with PROX on Baremetal. However on +NSB Prox on Baremetal is a lot more forgiving than NSB Prox on Openstack. A +badly formed packed may still work with PROX on Baremetal. However on Openstack the packet must be correct and all fields of the header correct. 
-Eg A packet with an invalid Protocol ID would still work in Baremetal -but this packet would be rejected by openstack. +E.g. A packet with an invalid Protocol ID would still work in Baremetal but +this packet would be rejected by openstack. -*Solution* 1. Check the validity of the packet. 2. Use a known good packet in your test - 3. If using ``Random`` fields in the traffic generator, disable them and retry. - + 3. If using ``Random`` fields in the traffic generator, disable them and + retry. -*How do I debug NSB Prox on Openstack?* ---------------------------------------- -*Solution* +How do I debug NSB Prox on Openstack? +------------------------------------- 1. Execute the test as follows:: @@ -1111,7 +1243,8 @@ but this packet would be rejected by openstack. 3. Install openstack credentials. - Depending on your openstack deployment, the location of these credentials may vary. + Depending on your openstack deployment, the location of these credentials + may vary. On this platform I do this via:: scp root@10.237.222.55:/etc/kolla/admin-openrc.sh . @@ -1127,8 +1260,8 @@ but this packet would be rejected by openstack. b. Get the Floating IP of the Traffic Generator & SUT - This generates a lot of information. Please not the floating IP of the VNF and - the Traffic Generator. + This generates a lot of information. Please note the floating IP of the + VNF and the Traffic Generator. .. image:: images/PROX_Openstack_stack_show_a.png :width: 1000px @@ -1169,10 +1302,8 @@ but this packet would be rejected by openstack. Now continue as baremetal. -*How do I resolve "Quota exceeded for resources"* -------------------------------------------------- - -*Solution* +How do I resolve "Quota exceeded for resources" +----------------------------------------------- This usually occurs due to 2 reasons when executing an openstack test. @@ -1206,16 +1337,15 @@ This usually occurs due to 2 reasons when executing an openstack test. openstack quota set <resource> -*Openstack Cli fails or hangs. How do I resolve this?* ------------------------------------------------------- - -*Solution* +Openstack CLI fails or hangs. How do I resolve this? +---------------------------------------------------- If it fails due to :: Missing value auth-url required for auth plugin password -Check your shell environment for Openstack variables. One of them should contain the authentication URL :: +Check your shell environment for Openstack variables. One of them should +contain the authentication URL :: OS_AUTH_URL=``https://192.168.72.41:5000/v3`` @@ -1239,16 +1369,16 @@ Result :: and visible. -If the Openstack Cli appears to hang, then verify the proxys and no_proxy are set correctly. -They should be similar to :: +If the Openstack CLI appears to hang, then verify the proxys and ``no_proxy`` +are set correctly. 
They should be similar to :: - FTP_PROXY="http://proxy.ir.intel.com:911/" - HTTPS_PROXY="http://proxy.ir.intel.com:911/" - HTTP_PROXY="http://proxy.ir.intel.com:911/" + FTP_PROXY="http://<your_proxy>:<port>/" + HTTPS_PROXY="http://<your_proxy>:<port>/" + HTTP_PROXY="http://<your_proxy>:<port>/" NO_PROXY="localhost,127.0.0.1,10.237.222.55,10.237.223.80,10.237.222.134,.ir.intel.com" - ftp_proxy="http://proxy.ir.intel.com:911/" - http_proxy="http://proxy.ir.intel.com:911/" - https_proxy="http://proxy.ir.intel.com:911/" + ftp_proxy="http://<your_proxy>:<port>/" + http_proxy="http://<your_proxy>:<port>/" + https_proxy="http://<your_proxy>:<port>/" no_proxy="localhost,127.0.0.1,10.237.222.55,10.237.223.80,10.237.222.134,.ir.intel.com" Where @@ -1256,11 +1386,9 @@ Where 1) 10.237.222.55 = IP Address of deployment node 2) 10.237.223.80 = IP Address of Controller node 3) 10.237.222.134 = IP Address of Compute Node - 4) ir.intel.com = local no proxy - -*How to Understand the Grafana output?* ---------------------------------------- +How to Understand the Grafana output? +------------------------------------- .. image:: images/PROX_Grafana_1.png :width: 1000px @@ -1278,50 +1406,75 @@ Where :width: 1000px :alt: NSB PROX Grafana_4 -A. Test Parameters - Test interval, Duartion, Tolerated Loss and Test Precision + .. image:: images/PROX_Grafana_5.png + :width: 1000px + :alt: NSB PROX Grafana_5 + + .. image:: images/PROX_Grafana_6.png + :width: 1000px + :alt: NSB PROX Grafana_6 -B. Overall No of packets send and received during test +A. Test Parameters - Test interval, Duration, Tolerated Loss and Test Precision -C. Generator Stats - packets sent, received and attempted by Generator +B. No. of packets send and received during test -D. Packets Size +C. Generator Stats - Average Throughput per step (Step Duration is specified by + "Duration" field in A above) -E. No of packets received by SUT +D. Packet size -F. No of packets forwarded by SUT +E. No. of packets sent by the generator per second per interface in millions + of packets per second. -G. This is the number of packets sent by the generator per port, for each interval. +F. No. of packets recieved by the generator per second per interface in millions + of packets per second. -H. This is the number of packets received by the generator per port, for each interval. +G. No. of packets received by the SUT from the generator in millions of packets + per second. -I. This is the number of packets send and received by the generator and lost by the SUT - that meet the success criteria +H. No. of packets sent by the the SUT to the generator in millions of packets + per second. -J. This is the changes the Percentage of Line Rate used over a test, The MAX and the - MIN should converge to within the interval specified as the ``test-precision``. +I. No. of packets sent by the Generator to the SUT per step per interface + in millions of packets per second. -K. This is the packets Size supported during test. If "N/A" appears in any field the result has not been decided. +J. No. of packets received by the Generator from the SUT per step per interface + in millions of packets per second. -L. This is the calculated throughput in MPPS(Million Packets Per second) for this line rate. +K. No. of packets sent and received by the generator and lost by the SUT that + meet the success criteria -M. This is the actual No, of packets sent by the generator in MPPS +L. 
The change in the Percentage of Line Rate used over a test, The MAX and the + MIN should converge to within the interval specified as the + ``test-precision``. -N. This is the actual No. of packets received by the generator in MPPS +M. Packet size supported during test. If *N/A* appears in any field the + result has not been decided. -O. This is the total No. of packets sent by SUT. +N. The Theretical Maximum no. of packets per second that can be sent for this + packet size. -P. This is the total No. of packets received by the SUT +O. No. of packets sent by the generator in MPPS -Q. This is the total No. of packets dropped. (These packets were sent by the generator but not - received back by the generator, these may be dropped by the SUT or the Generator) +P. No. of packets received by the generator in MPPS -R. This is the tolerated no of packets that can be dropped. +Q. No. of packets sent by SUT. -S. This is the test Throughput in Gbps +R. No. of packets received by the SUT -T. This is the Latencey per Port +S. Total no. of dropped packets -- Packets sent but not received back by the + generator, these may be dropped by the SUT or the generator. -U. This is the CPU Utilization +T. The tolerated no. of dropped packets. +U. Test throughput in Gbps +V. Latencey per Port + * Va - Port XE0 + * Vb - Port XE1 + * Vc - Port XE0 + * Vd - Port XE0 +W. CPU Utilization + * Wa - CPU Utilization of the Generator + * Wb - CPU Utilization of the SUT diff --git a/docs/testing/developer/devguide/images/PROX_Grafana_1.png b/docs/testing/developer/devguide/images/PROX_Grafana_1.png Binary files differindex d272edcf3..144000bb8 100644 --- a/docs/testing/developer/devguide/images/PROX_Grafana_1.png +++ b/docs/testing/developer/devguide/images/PROX_Grafana_1.png diff --git a/docs/testing/developer/devguide/images/PROX_Grafana_2.png b/docs/testing/developer/devguide/images/PROX_Grafana_2.png Binary files differindex 4f7fd4cf5..af1ebb315 100644 --- a/docs/testing/developer/devguide/images/PROX_Grafana_2.png +++ b/docs/testing/developer/devguide/images/PROX_Grafana_2.png diff --git a/docs/testing/developer/devguide/images/PROX_Grafana_3.png b/docs/testing/developer/devguide/images/PROX_Grafana_3.png Binary files differindex 5ae967698..a287670c9 100644 --- a/docs/testing/developer/devguide/images/PROX_Grafana_3.png +++ b/docs/testing/developer/devguide/images/PROX_Grafana_3.png diff --git a/docs/testing/developer/devguide/images/PROX_Grafana_4.png b/docs/testing/developer/devguide/images/PROX_Grafana_4.png Binary files differindex 5353d1c7e..6752dc324 100644 --- a/docs/testing/developer/devguide/images/PROX_Grafana_4.png +++ b/docs/testing/developer/devguide/images/PROX_Grafana_4.png diff --git a/docs/testing/developer/devguide/images/PROX_Grafana_5.png b/docs/testing/developer/devguide/images/PROX_Grafana_5.png Binary files differnew file mode 100644 index 000000000..45746d86b --- /dev/null +++ b/docs/testing/developer/devguide/images/PROX_Grafana_5.png diff --git a/docs/testing/developer/devguide/images/PROX_Grafana_6.png b/docs/testing/developer/devguide/images/PROX_Grafana_6.png Binary files differnew file mode 100644 index 000000000..5bdbcf26a --- /dev/null +++ b/docs/testing/developer/devguide/images/PROX_Grafana_6.png diff --git a/docs/testing/developer/devguide/images/PROX_Test_BM_Script.png b/docs/testing/developer/devguide/images/PROX_Test_BM_Script.png Binary files differindex c09f7bb1b..84e640a20 100644 --- a/docs/testing/developer/devguide/images/PROX_Test_BM_Script.png +++ 
b/docs/testing/developer/devguide/images/PROX_Test_BM_Script.png diff --git a/docs/testing/developer/devguide/images/PROX_Test_ovs_dpdk_Script_1.png b/docs/testing/developer/devguide/images/PROX_Test_ovs_dpdk_Script_1.png Binary files differnew file mode 100644 index 000000000..73f6f2920 --- /dev/null +++ b/docs/testing/developer/devguide/images/PROX_Test_ovs_dpdk_Script_1.png diff --git a/docs/testing/developer/devguide/images/PROX_Test_ovs_dpdk_Script_2.png b/docs/testing/developer/devguide/images/PROX_Test_ovs_dpdk_Script_2.png Binary files differnew file mode 100644 index 000000000..ad7d22822 --- /dev/null +++ b/docs/testing/developer/devguide/images/PROX_Test_ovs_dpdk_Script_2.png diff --git a/docs/testing/developer/devguide/images/vPE_Diagram.png b/docs/testing/developer/devguide/images/vPE_Diagram.png Binary files differnew file mode 100644 index 000000000..c3985b7d9 --- /dev/null +++ b/docs/testing/developer/devguide/images/vPE_Diagram.png diff --git a/docs/testing/developer/devguide/index.rst b/docs/testing/developer/devguide/index.rst index 9a76a32f1..194099a27 100644 --- a/docs/testing/developer/devguide/index.rst +++ b/docs/testing/developer/devguide/index.rst @@ -11,7 +11,6 @@ Yardstick Developer Guide .. toctree:: :maxdepth: 4 - :numbered: devguide devguide_nsb_prox diff --git a/docs/testing/user/userguide/01-introduction.rst b/docs/testing/user/userguide/01-introduction.rst index 494b1ef3d..5fc2e8d0f 100755 --- a/docs/testing/user/userguide/01-introduction.rst +++ b/docs/testing/user/userguide/01-introduction.rst @@ -9,8 +9,8 @@ Introduction **Welcome to Yardstick's documentation !** -.. _Pharos: https://wiki.opnfv.org/pharos -.. _Yardstick: https://wiki.opnfv.org/yardstick +.. _Pharos: https://wiki.opnfv.org/display/pharos +.. _Yardstick: https://wiki.opnfv.org/display/yardstick .. _Presentation: https://wiki.opnfv.org/download/attachments/2925202/opnfv_summit_-_yardstick_project.pdf?version=1&modificationDate=1458848320000&api=v2 Yardstick_ is an OPNFV Project. @@ -70,7 +70,7 @@ This document consists of the following chapters: Yardstick - Network service benchmarking to test real world usecase for a given VNF. -* Chapter :doc:`13-nsb_installation` provides instructions to install +* Chapter :doc:`13-nsb-installation` provides instructions to install *Yardstick - Network Service Benchmarking (NSB) testing*. * Chapter :doc:`14-nsb-operation` provides information on running *NSB* @@ -83,4 +83,4 @@ Contact Yardstick Feedback? `Contact us`_ -.. _Contact us: mailto:opnfv-users@lists.opnfv.org&subject="[yardstick]" +.. _Contact us: mailto:opnfv-users@lists.opnfv.org&subject="#yardstick" diff --git a/docs/testing/user/userguide/03-architecture.rst b/docs/testing/user/userguide/03-architecture.rst index 886631510..62250d6a3 100755 --- a/docs/testing/user/userguide/03-architecture.rst +++ b/docs/testing/user/userguide/03-architecture.rst @@ -243,26 +243,27 @@ Yardstick Directory structure with support for different installers. *docs/* - All documentation is stored here, such as configuration guides, - user guides and Yardstick descriptions. + user guides and Yardstick test case descriptions. *etc/* - Used for test cases requiring specific POD configurations. *samples/* - test case samples are stored here, most of all scenario and - feature's samples are shown in this directory. + feature samples are shown in this directory. -*tests/* - Here both Yardstick internal tests (*functional/* and *unit/*) as - well as the test cases run to verify the NFVI (*opnfv/*) are stored. 
- Also configurations of what to run daily and weekly at the different - PODs is located here. +*tests/* - The test cases run to verify the NFVI (*opnfv/*) are stored here. + The configurations of what to run daily and weekly at the different + PODs are also located here. -*tools/* - Currently contains tools to build image for VMs which are deployed - by Heat. Currently contains how to build the yardstick-trusty-server - image with the different tools that are needed from within the - image. +*tools/* - Contains tools to build image for VMs which are deployed by Heat. + Currently contains how to build the yardstick-image with the + different tools that are needed from within the image. *plugin/* - Plug-in configuration files are stored here. -*yardstick/* - Contains the internals of Yardstick: Runners, Scenario, Contexts, - CLI parsing, keys, plotting tools, dispatcher, plugin +*yardstick/* - Contains the internals of Yardstick: :term:`Runners <runner>`, + :term:`Scenarios <scenario>`, :term:`Contexts <context>`, CLI + parsing, keys, plotting tools, dispatcher, plugin install/remove scripts and so on. +*yardstick/tests* - The Yardstick internal tests (*functional/* and *unit/*) + are stored here. diff --git a/docs/testing/user/userguide/04-installation.rst b/docs/testing/user/userguide/04-installation.rst index d97078909..3ba312ce7 100644 --- a/docs/testing/user/userguide/04-installation.rst +++ b/docs/testing/user/userguide/04-installation.rst @@ -3,6 +3,17 @@ .. http://creativecommons.org/licenses/by/4.0 .. (c) OPNFV, Ericsson AB, Huawei Technologies Co.,Ltd and others. +.. + Convention for heading levels in Yardstick documentation: + + ======= Heading 0 (reserved for the title in a document) + ------- Heading 1 + ~~~~~~~ Heading 2 + +++++++ Heading 3 + ''''''' Heading 4 + + Avoid deeper levels because they do not render well. + ====================== Yardstick Installation ====================== @@ -564,17 +575,17 @@ Grafana to display data in the following sections. Automatic deployment of InfluxDB and Grafana containers (**recommended**) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ -Firstly, enter the Yardstick container:: +1. Enter the Yardstick container:: - sudo -EH docker exec -it yardstick /bin/bash + sudo -EH docker exec -it yardstick /bin/bash -Secondly, create InfluxDB container and configure with the following command:: +2. Create InfluxDB container and configure with the following command:: - yardstick env influxdb + yardstick env influxdb -Thirdly, create and configure Grafana container:: +3. Create and configure Grafana container:: - yardstick env grafana + yardstick env grafana Then you can run a test case and visit http://host_ip:1948 (``admin``/``admin``) to see the results. 
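As an illustration of the workflow described above (the sample test case name below is only an example; any test case will do), a quick smoke run that pushes results into InfluxDB could look like::

    sudo -EH docker exec -it yardstick /bin/bash
    yardstick task start samples/ping.yaml
    # when the task finishes, the results can be viewed in Grafana at http://<host_ip>:1948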
@@ -602,21 +613,21 @@ Run influxDB:: sudo -EH docker run -d --name influxdb \ -p 8083:8083 -p 8086:8086 --expose 8090 --expose 8099 \ tutum/influxdb - docker exec -it influxdb bash Configure influxDB:: - influx - >CREATE USER root WITH PASSWORD 'root' WITH ALL PRIVILEGES - >CREATE DATABASE yardstick; - >use yardstick; - >show MEASUREMENTS; + docker exec -it influxdb influx + > CREATE USER root WITH PASSWORD 'root' WITH ALL PRIVILEGES + > CREATE DATABASE yardstick; + > use yardstick; + > show MEASUREMENTS; + > exit Run Grafana:: sudo -EH docker run -d --name grafana -p 1948:3000 grafana/grafana -Log on http://{YOUR_IP_HERE}:1948 using ``admin``/``admin`` and configure +Log on to ``http://{YOUR_IP_HERE}:1948`` using ``admin``/``admin`` and configure database resource to be ``{YOUR_IP_HERE}:8086``. .. image:: images/Grafana_config.png @@ -629,7 +640,7 @@ Configure ``yardstick.conf``:: sudo cp etc/yardstick/yardstick.conf.sample /etc/yardstick/yardstick.conf sudo vi /etc/yardstick/yardstick.conf -Modify ``yardstick.conf``:: +Modify ``yardstick.conf`` to add the ``influxdb`` dispatcher:: [DEFAULT] debug = True @@ -642,7 +653,7 @@ Modify ``yardstick.conf``:: username = root password = root -Now you can run Yardstick test cases and store the results in influxDB. +Now Yardstick will store results in InfluxDB when you run a testcase. Deploy InfluxDB and Grafana directly in Ubuntu (**Todo**) diff --git a/docs/testing/user/userguide/05-operation.rst b/docs/testing/user/userguide/05-operation.rst index f390d1643..82539c97f 100644 --- a/docs/testing/user/userguide/05-operation.rst +++ b/docs/testing/user/userguide/05-operation.rst @@ -183,7 +183,7 @@ Combining these elements together, a sample Heat context config looks like: .. literalinclude:: ../../../../yardstick/tests/integration/dummy-scenario-heat-context.yaml :start-after: --- - :empahsise-lines: 14- + :emphasize-lines: 14- Using exisiting HOT Templates ''''''''''''''''''''''''''''' diff --git a/docs/testing/user/userguide/08-grafana.rst b/docs/testing/user/userguide/08-grafana.rst index 29bc23a08..020a08a65 100644 --- a/docs/testing/user/userguide/08-grafana.rst +++ b/docs/testing/user/userguide/08-grafana.rst @@ -36,7 +36,7 @@ of TC002. .. image:: images/TC002.png :width: 800px - :alt:TC002 dashboard + :alt: TC002 dashboard For each test case dashboard. On the top left, we have a dashboard selection, you can switch to different test cases using this pull-down menu. diff --git a/docs/testing/user/userguide/09-api.rst b/docs/testing/user/userguide/09-api.rst index f0ae3980b..1a896699b 100644 --- a/docs/testing/user/userguide/09-api.rst +++ b/docs/testing/user/userguide/09-api.rst @@ -433,7 +433,7 @@ Example:: /api/v2/yardstick/tasks/<task_id> --------------------------------- +--------------------------------- Description: This API is used to do some work related to yardstick tasks. For Euphrates, it supports: diff --git a/docs/testing/user/userguide/10-yardstick-user-interface.rst b/docs/testing/user/userguide/10-yardstick-user-interface.rst index cadec78ef..b3056ec99 100644 --- a/docs/testing/user/userguide/10-yardstick-user-interface.rst +++ b/docs/testing/user/userguide/10-yardstick-user-interface.rst @@ -2,29 +2,49 @@ Yardstick User Interface ======================== -This interface provides a user to view the test result -in table format and also values pinned on to a graph. +This chapter describes how to generate HTML reports, used to view, store, share +or publish test results in table and graph formats. 
+The following layouts are available: -Command -======= -:: +* The compact HTML report layout is suitable for testcases producing a few + metrics over a short period of time. All metrics for all timestamps are + displayed in the data table and on the graph. + +* The dynamic HTML report layout consists of a wider data table, a graph, and + a tree that allows selecting the metrics to be displayed. This layout is + suitable for testcases, such as NSB ones, producing a lot of metrics over + a longer period of time. + + +Commands +======== + +To generate the compact HTML report, run:: yardstick report generate <task-ID> <testcase-filename> +To generate the dynamic HTML report, run:: + + yardstick report generate-nsb <task-ID> <testcase-filename> + Description =========== -1. When the command is triggered using the task-id and the testcase -name provided the respective values are retrieved from the -database (influxdb in this particular case). +1. When the command is triggered, the relevant values for the + provided task-id and testcase name are retrieved from the + database (`InfluxDB`_ in this particular case). -2. The values are then formatted and then provided to the html -template framed with complete html body using Django Framework. +2. The values are then formatted and provided to the html + template to be rendered using `Jinja2`_. -3. Then the whole template is written into a html file. +3. Then the rendered template is written into a html file. The graph is framed with Timestamp on x-axis and output values (differ from testcase to testcase) on y-axis with the help of -"Highcharts". +`Chart.js`_. + +.. _InfluxDB: https://www.influxdata.com/time-series-platform/influxdb/ +.. _Jinja2: http://jinja.pocoo.org/docs/2.10/ +.. _Chart.js: https://www.chartjs.org/ diff --git a/docs/testing/user/userguide/12-nsb-overview.rst b/docs/testing/user/userguide/12-nsb-overview.rst index 71a5c1130..7b0d46804 100644 --- a/docs/testing/user/userguide/12-nsb-overview.rst +++ b/docs/testing/user/userguide/12-nsb-overview.rst @@ -10,7 +10,7 @@ Network Services Benchmarking (NSB) Abstract ======== -.. _Yardstick: https://wiki.opnfv.org/yardstick +.. _Yardstick: https://wiki.opnfv.org/display/yardstick This chapter provides an overview of the NSB, a contribution to OPNFV Yardstick_ from Intel. diff --git a/docs/testing/user/userguide/13-nsb-installation.rst b/docs/testing/user/userguide/13-nsb-installation.rst index 0b76cdd30..973d56628 100644 --- a/docs/testing/user/userguide/13-nsb-installation.rst +++ b/docs/testing/user/userguide/13-nsb-installation.rst @@ -1,14 +1,25 @@ .. This work is licensed under a Creative Commons Attribution 4.0 International .. License. .. http://creativecommons.org/licenses/by/4.0 -.. (c) OPNFV, 2016-2017 Intel Corporation. +.. (c) OPNFV, 2016-2018 Intel Corporation. + +.. + Convention for heading levels in Yardstick documentation: + + ======= Heading 0 (reserved for the title in a document) + ------- Heading 1 + ~~~~~~~ Heading 2 + +++++++ Heading 3 + ''''''' Heading 4 + + Avoid deeper levels because they do not render well. 
===================================== Yardstick - NSB Testing -Installation ===================================== Abstract -======== +-------- The Network Service Benchmarking (NSB) extends the yardstick framework to do VNF characterization and benchmarking in three different execution @@ -27,7 +38,7 @@ The steps needed to run Yardstick with NSB testing are: Prerequisites -============= +------------- Refer chapter Yardstick Installation for more information on yardstick prerequisites @@ -46,7 +57,7 @@ Several prerequisites are needed for Yardstick (VNF testing): * intel-cmt-cat Hardware & Software Ingredients -------------------------------- +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ SUT requirements: @@ -85,7 +96,7 @@ Boot and BIOS settings: Install Yardstick (NSB Testing) -=============================== +------------------------------- Download the source code and install Yardstick from it @@ -172,8 +183,8 @@ Another way to execute an installation for a Bare-Metal or a Standalone context is to use ansible script ``install.yaml``. Refer chapter :doc:`04-installation` for more details. -System Topology: -================ +System Topology +--------------- .. code-block:: console @@ -188,10 +199,10 @@ System Topology: Environment parameters and credentials -====================================== +-------------------------------------- Config yardstick conf ---------------------- +~~~~~~~~~~~~~~~~~~~~~ If user did not run 'yardstick env influxdb' inside the container, which will generate correct ``yardstick.conf``, then create the config file manually (run @@ -222,11 +233,11 @@ Add trex_path, trex_client_lib and bin_path in 'nsb' section. trex_client_lib=/opt/nsb_bin/trex_client/stl Run Yardstick - Network Service Testcases -========================================= +----------------------------------------- NS testing - using yardstick CLI --------------------------------- +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ See :doc:`04-installation` @@ -239,13 +250,13 @@ NS testing - using yardstick CLI yardstick --debug task start yardstick/samples/vnf_samples/nsut/<vnf>/<test case> Network Service Benchmarking - Bare-Metal -========================================= +----------------------------------------- Bare-Metal Config pod.yaml describing Topology ----------------------------------------------- +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Bare-Metal 2-Node setup -^^^^^^^^^^^^^^^^^^^^^^^ ++++++++++++++++++++++++ .. code-block:: console +----------+ +----------+ @@ -258,7 +269,7 @@ Bare-Metal 2-Node setup trafficgen_1 vnf Bare-Metal 3-Node setup - Correlated Traffic -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +++++++++++++++++++++++++++++++++++++++++++++ .. code-block:: console +----------+ +----------+ +------------+ @@ -273,7 +284,7 @@ Bare-Metal 3-Node setup - Correlated Traffic Bare-Metal Config pod.yaml --------------------------- +~~~~~~~~~~~~~~~~~~~~~~~~~~ Before executing Yardstick test cases, make sure that pod.yaml reflects the topology and update all the required fields.:: @@ -348,13 +359,13 @@ topology and update all the required fields.:: Network Service Benchmarking - Standalone Virtualization -======================================================== +-------------------------------------------------------- SR-IOV ------- +~~~~~~ SR-IOV Pre-requisites -^^^^^^^^^^^^^^^^^^^^^ ++++++++++++++++++++++ On Host, where VM is created: a) Create and configure a bridge named ``br-int`` for VM to connect to external network. 
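As a hedged illustration of step a) above, a minimal sketch of creating the
``br-int`` bridge with iproute2 is shown below (the physical interface name
``eno2`` is a placeholder assumption; adapt it to the host NIC that provides
external connectivity):

.. code-block:: bash

   # Create the br-int Linux bridge, bring it up, and attach the host
   # interface used for external connectivity (eno2 is a placeholder name).
   sudo ip link add name br-int type bridge
   sudo ip link set dev br-int up
   sudo ip link set dev eno2 master br-int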
@@ -425,10 +436,10 @@ On Host, where VM is created: SR-IOV Config pod.yaml describing Topology -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +++++++++++++++++++++++++++++++++++++++++++ -SR-IOV 2-Node setup: -^^^^^^^^^^^^^^^^^^^^ +SR-IOV 2-Node setup ++++++++++++++++++++ .. code-block:: console +--------------------+ @@ -456,7 +467,7 @@ SR-IOV 2-Node setup: SR-IOV 3-Node setup - Correlated Traffic -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +++++++++++++++++++++++++++++++++++++++++ .. code-block:: console +--------------------+ @@ -492,7 +503,7 @@ topology and update all the required fields. .. note:: Update all the required fields like ip, user, password, pcis, etc... SR-IOV Config pod_trex.yaml -^^^^^^^^^^^^^^^^^^^^^^^^^^^ ++++++++++++++++++++++++++++ .. code-block:: YAML @@ -521,7 +532,7 @@ SR-IOV Config pod_trex.yaml local_mac: "00:00.00:00:00:02" SR-IOV Config host_sriov.yaml -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ++++++++++++++++++++++++++++++ .. code-block:: YAML @@ -537,7 +548,7 @@ SR-IOV testcase update: ``<yardstick>/samples/vnf_samples/nsut/vfw/tc_sriov_rfc2544_ipv4_1rule_1flow_64B_trex.yaml`` Update "contexts" section -""""""""""""""""""""""""" +''''''''''''''''''''''''' .. code-block:: YAML @@ -582,10 +593,10 @@ Update "contexts" section OVS-DPDK --------- +~~~~~~~~ OVS-DPDK Pre-requisites -^^^^^^^^^^^^^^^^^^^^^^^ +~~~~~~~~~~~~~~~~~~~~~~~ On Host, where VM is created: a) Create and configure a bridge named ``br-int`` for VM to connect to external network. @@ -659,11 +670,10 @@ On Host, where VM is created: OVS-DPDK Config pod.yaml describing Topology -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +++++++++++++++++++++++++++++++++++++++++++++ OVS-DPDK 2-Node setup -^^^^^^^^^^^^^^^^^^^^^ - ++++++++++++++++++++++ .. code-block:: console @@ -693,7 +703,7 @@ OVS-DPDK 2-Node setup OVS-DPDK 3-Node setup - Correlated Traffic -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +++++++++++++++++++++++++++++++++++++++++++ .. code-block:: console @@ -733,7 +743,7 @@ topology and update all the required fields. .. note:: Update all the required fields like ip, user, password, pcis, etc... OVS-DPDK Config pod_trex.yaml -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ++++++++++++++++++++++++++++++ .. code-block:: YAML @@ -761,7 +771,7 @@ OVS-DPDK Config pod_trex.yaml local_mac: "00:00.00:00:00:02" OVS-DPDK Config host_ovs.yaml -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ++++++++++++++++++++++++++++++ .. code-block:: YAML @@ -777,7 +787,7 @@ ovs_dpdk testcase update: ``<yardstick>/samples/vnf_samples/nsut/vfw/tc_ovs_rfc2544_ipv4_1rule_1flow_64B_trex.yaml`` Update "contexts" section -""""""""""""""""""""""""" +''''''''''''''''''''''''' .. code-block:: YAML @@ -832,7 +842,7 @@ Update "contexts" section Network Service Benchmarking - OpenStack with SR-IOV support -============================================================ +------------------------------------------------------------ This section describes how to run a Sample VNF test case, using Heat context, with SR-IOV. It also covers how to install OpenStack in Ubuntu 16.04, using @@ -840,7 +850,7 @@ DevStack, with SR-IOV support. Single node OpenStack setup with external TG --------------------------------------------- +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. code-block:: console @@ -871,7 +881,7 @@ Single node OpenStack setup with external TG Host pre-configuration -^^^^^^^^^^^^^^^^^^^^^^ +++++++++++++++++++++++ .. warning:: The following configuration requires sudo access to the system. Make sure that your user have the access. @@ -926,7 +936,7 @@ Setup system proxy (if needed). 
Add the following configuration into the ``/etc/environment`` file: .. note:: The proxy server name/port and IPs should be changed according to - actuall/current proxy configuration in the lab. + actual/current proxy configuration in the lab. .. code:: bash @@ -971,7 +981,7 @@ Setup SR-IOV ports on the host: DevStack installation -^^^^^^^^^^^^^^^^^^^^^ ++++++++++++++++++++++ Use official `Devstack <https://docs.openstack.org/devstack/pike/>`_ documentation to install OpenStack on a host. Please note, that stable @@ -993,7 +1003,7 @@ Start the devstack installation on a host. TG host configuration -^^^^^^^^^^^^^^^^^^^^^ ++++++++++++++++++++++ Yardstick automatically install and configure Trex traffic generator on TG host based on provided POD file (see below). Anyway, it's recommended to check @@ -1002,7 +1012,7 @@ the manual at https://trex-tgn.cisco.com/trex/doc/trex_manual.html. Run the Sample VNF test case -^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +++++++++++++++++++++++++++++ There is an example of Sample VNF test case ready to be executed in an OpenStack environment with SR-IOV support: ``samples/vnf_samples/nsut/vfw/ @@ -1027,7 +1037,7 @@ context using steps described in `NS testing - using yardstick CLI`_ section. Multi node OpenStack TG and VNF setup (two nodes) -------------------------------------------------- +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. code-block:: console @@ -1058,14 +1068,14 @@ Multi node OpenStack TG and VNF setup (two nodes) Controller/Compute pre-configuration -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +++++++++++++++++++++++++++++++++++++ Pre-configuration of the controller and compute hosts are the same as described in `Host pre-configuration`_ section. Follow the steps in the section. DevStack configuration -^^^^^^^^^^^^^^^^^^^^^^ +++++++++++++++++++++++ Use official `Devstack <https://docs.openstack.org/devstack/pike/>`_ documentation to install OpenStack on a host. Please note, that stable @@ -1092,7 +1102,7 @@ Start the devstack installation on the controller and compute hosts. Run the sample vFW TC -^^^^^^^^^^^^^^^^^^^^^ ++++++++++++++++++++++ Install yardstick using `Install Yardstick (NSB Testing)`_ steps for OpenStack context. @@ -1109,10 +1119,10 @@ and the following yardtick command line arguments: Enabling other Traffic generator -================================ +-------------------------------- IxLoad -^^^^^^ +~~~~~~ 1. Software needed: IxLoadAPI ``<IxLoadTclApi verson>Linux64.bin.tgz`` and ``<IxOS version>Linux64.bin.tar.gz`` (Download from ixia support site) @@ -1153,7 +1163,7 @@ IxLoad ``<repo>/samples/vnf_samples/nsut/vfw/tc_baremetal_http_ixload_1b_Requests-65000_Concurrency.yaml`` IxNetwork ---------- +~~~~~~~~~ IxNetwork testcases use IxNetwork API Python Bindings module, which is installed as part of the requirements of the project. @@ -1182,3 +1192,52 @@ installed as part of the requirements of the project. 3. Execute testcase in samplevnf folder e.g. ``<repo>/samples/vnf_samples/nsut/vfw/tc_baremetal_rfc2544_ipv4_1rule_1flow_64B_ixia.yaml`` + +Spirent Landslide +----------------- + +In order to use Spirent Landslide for vEPC testcases, some dependencies have +to be preinstalled and properly configured. + +- Java + + 32-bit Java installation is required for the Spirent Landslide TCL API. + + | ``$ sudo apt-get install openjdk-8-jdk:i386`` + + .. important:: + Make sure ``LD_LIBRARY_PATH`` is pointing to 32-bit JRE. 
For more details + check `Linux Troubleshooting <http://TAS_HOST_IP/tclapiinstall.html#trouble>` + section of installation instructions. + +- LsApi (Tcl API module) + + Follow Landslide documentation for detailed instructions on Linux + installation of Tcl API and its dependencies + ``http://TAS_HOST_IP/tclapiinstall.html``. + For working with LsApi Python wrapper only steps 1-5 are required. + + .. note:: After installation make sure your API home path is included in + ``PYTHONPATH`` environment variable. + + .. important:: + The current version of LsApi module has an issue with reading LD_LIBRARY_PATH. + For LsApi module to initialize correctly following lines (184-186) in + lsapi.py + + .. code-block:: python + + ldpath = os.environ.get('LD_LIBRARY_PATH', '') + if ldpath == '': + environ['LD_LIBRARY_PATH'] = environ['LD_LIBRARY_PATH'] + ':' + ldpath + + should be changed to: + + .. code-block:: python + + ldpath = os.environ.get('LD_LIBRARY_PATH', '') + if not ldpath == '': + environ['LD_LIBRARY_PATH'] = environ['LD_LIBRARY_PATH'] + ':' + ldpath + +.. note:: The Spirent landslide TCL software package needs to be updated in case + the user upgrades to a new version of Spirent landslide software. diff --git a/docs/testing/user/userguide/14-nsb-operation.rst b/docs/testing/user/userguide/14-nsb-operation.rst index a5f3a0cf6..72b1c4bbc 100644 --- a/docs/testing/user/userguide/14-nsb-operation.rst +++ b/docs/testing/user/userguide/14-nsb-operation.rst @@ -1,7 +1,7 @@ .. This work is licensed under a Creative Commons Attribution 4.0 International .. License. .. http://creativecommons.org/licenses/by/4.0 -.. (c) OPNFV, 2016-2017 Intel Corporation. +.. (c) OPNFV, 2016-2018 Intel Corporation. Yardstick - NSB Testing - Operation =================================== @@ -256,7 +256,7 @@ to the VNF. An example scale-up Heat testcase is: -.. literalinclude:: /submodules/yardstick/samples/vnf_samples/nsut/vfw/tc_heat_rfc2544_ipv4_1rule_1flow_64B_trex_scale-up.yaml +.. literalinclude:: /../samples/vnf_samples/nsut/vfw/tc_heat_rfc2544_ipv4_1rule_1flow_64B_trex_scale-up.yaml :language: yaml This testcase template requires specifying the number of VCPUs, Memory and Ports. @@ -271,7 +271,7 @@ In order to support ports scale-up, traffic and topology templates need to be us A example topology template is: -.. literalinclude:: /submodules/yardstick/samples/vnf_samples/nsut/vfw/vfw-tg-topology-scale-up.yaml +.. literalinclude:: /../samples/vnf_samples/nsut/vfw/vfw-tg-topology-scale-up.yaml :language: yaml This template has ``vports`` as an argument. To pass this argument it needs to @@ -293,7 +293,7 @@ For example: A example traffic profile template is: -.. literalinclude:: /submodules/yardstick/samples/vnf_samples/traffic_profiles/ipv4_throughput-scale-up.yaml +.. literalinclude:: /../samples/vnf_samples/traffic_profiles/ipv4_throughput-scale-up.yaml :language: yaml There is an option to provide predefined config for SampleVNFs. Path to config @@ -434,6 +434,43 @@ There two types of Standalone contexts available: OVS-DPDK and SRIOV. OVS-DPDK uses OVS network with DPDK drivers. SRIOV enables network traffic to bypass the software switch layer of the Hyper-V stack. +Emulated machine type +^^^^^^^^^^^^^^^^^^^^^ + +For better performance test results of emulated VM spawned by Yardstick SA +context (OvS-DPDK/SRIOV), it may be important to control the emulated machine +type used by QEMU emulator. This attribute can be configured via TC definition +in ``contexts`` section under ``extra_specs`` configuration. 
+ +For example: + +.. code-block:: yaml + + contexts: + ... + - type: StandaloneSriov + ... + flavor: + ... + extra_specs: + ... + machine_type: pc-i440fx-bionic + +Where, ``machine_type`` can be set to one of the emulated machine type +supported by QEMU running on SUT platform. To get full list of supported +emulated machine types, the following command can be used on the target SUT +host. + +.. code-block:: yaml + + # qemu-system-x86_64 -machine ? + +By default, the ``machine_type`` option is set to ``pc-i440fx-xenial`` which is +suitable for running Ubuntu 16.04 VM image. So, if this type is not supported +by the target platform or another VM image is used for stand alone (SA) context +VM (e.g.: ``bionic`` image for Ubuntu 18.04), this configuration should be +changed accordingly. + Standalone with OVS-DPDK ^^^^^^^^^^^^^^^^^^^^^^^^ @@ -457,5 +494,140 @@ Sample test case file 4. Modify ``networks/phy_port`` accordingly to the baremetal setup. 5. Run test from: -.. literalinclude:: /submodules/yardstick/samples/vnf_samples/nsut/acl/tc_ovs_rfc2544_ipv4_1rule_1flow_64B_trex.yaml +.. literalinclude:: /../samples/vnf_samples/nsut/acl/tc_ovs_rfc2544_ipv4_1rule_1flow_64B_trex.yaml :language: yaml + +Preparing test run of vEPC test case +------------------------------------ + +Provided vEPC test cases are examples of emulation of vEPC infrastructure +components, such as UE, eNodeB, MME, SGW, PGW. + +Location of vEPC test cases: ``samples/vnf_samples/nsut/vepc/``. + +Before running a specific vEPC test case using NSB, some preconfiguration +needs to be done. + +Update Spirent Landslide TG configuration in pod file +===================================================== + +Examples of ``pod.yaml`` files could be found in +:file:`etc/yardstick/nodes/standalone`. +The name of related pod file could be checked in the context section of NSB +test case. + +The ``pod.yaml`` related to vEPC test case uses some sub-structures that hold the +details of accessing the Spirent Landslide traffic generator. +These subsections and the changes to be done in provided example pod file are +described below. + +1. ``tas_manager``: data under this key holds the information required to +access Landslide TAS (Test Administration Server) and perform needed +configurations on it. + + * ``ip``: IP address of TAS Manager node; should be updated according to test + setup used + * ``super_user``: superuser name; could be retrieved from Landslide documentation + * ``super_user_password``: superuser password; could be retrieved from + Landslide documentation + * ``cfguser_password``: password of predefined user named 'cfguser'; default + password could be retrieved from Landslide documentation + * ``test_user``: username to be used during test run as a Landslide library + name; to be defined by test run operator + * ``test_user_password``: password of test user; to be defined by test run + operator + * ``proto``: *http* or *https*; to be defined by test run operator + * ``license``: Landslide license number installed on TAS + +2. The ``config`` section holds information about test servers (TSs) and +systems under test (SUTs). Data is represented as a list of entries. 
+Each such entry contains: + + * ``test_server``: this subsection represents data related to test server + configuration, such as: + + * ``name``: test server name; unique custom name to be defined by test + operator + * ``role``: this value is used as a key to bind specific Test Server and + TestCase; should be set to one of test types supported by TAS license + * ``ip``: Test Server IP address + * ``thread_model``: parameter related to Test Server performance mode. + The value should be one of the following: "Legacy" | "Max" | "Fireball". + Refer to Landslide documentation for details. + * ``phySubnets``: a structure used to specify IP ranges reservations on + specific network interfaces of related Test Server. Structure fields are: + + * ``base``: start of IP address range + * ``mask``: IP range mask in CIDR format + * ``name``: network interface name, e.g. *eth1* + * ``numIps``: size of IP address range + + * ``preResolvedArpAddress``: a structure used to specify the range of IP + addresses for which the ARP responses will be emulated + + * ``StartingAddress``: IP address specifying the start of IP address range + * ``NumNodes``: size of the IP address range + + * ``suts``: a structure that contains definitions of each specific SUT + (represents a vEPC component). SUT structure contains following key/value + pairs: + + * ``name``: unique custom string specifying SUT name + * ``role``: string value corresponding with an SUT role specified in the + session profile (test session template) file + * ``managementIp``: SUT management IP adress + * ``phy``: network interface name, e.g. *eth1* + * ``ip``: vEPC component IP address used in test case topology + * ``nextHop``: next hop IP address, to allow for vEPC inter-node communication + +Update NSB test case definitions +================================ +NSB test case file designated for vEPC testing contains an example of specific +test scenario configuration. +Test operator may change these definitions as required for the use case that +requires testing. +Specifically, following subsections of the vEPC test case (section **scenarios**) +may be changed. + +1. Subsection ``options``: contains custom parameters used for vEPC testing + + * subsection ``dmf``: may contain one or more parameters specified in + ``traffic_profile`` template file + * subsection ``test_cases``: contains re-definitions of parameters specified + in ``session_profile`` template file + + .. note:: All parameters in ``session_profile``, value of which is a + placeholder, needs to be re-defined to construct a valid test session. + +2. Subsection ``runner``: specifies the test duration and the interval of +TG and VNF side KPIs polling. For more details, refer to :doc:`03-architecture`. + +Preparing test run of vPE test case +----------------------------------- +The vPE (Provider Edge Router) is a :term: `VNF` approximation +serving as an Edge Router. The vPE is approximated using the +``ip_pipeline`` dpdk application. + + .. image:: images/vPE_Diagram.png + :width: 800px + :alt: NSB vPE Diagram + +The ``vpe_config`` file must be passed as it is not auto generated. +The ``vpe_script`` defines the rules applied to each of the pipelines. This can be +auto generated or a file can be passed using the ``script_file`` option in +``vnf_config`` as shown below. The ``full_tm_profile_file`` option must be +used if a traffic manager is defined in ``vpe_config``. + +.. 
code-block:: yaml + + vnf_config: { file: './vpe_config/vpe_config_2_ports', + action_bulk_file: './vpe_config/action_bulk_512.txt', + full_tm_profile_file: './vpe_config/full_tm_profile_10G.cfg', + script_file: './vpe_config/vpe_script_sample' } + +Testcases for vPE can be found in the ``vnf_samples/nsut/vpe`` directory. +A testcase can be started with the following command as an example: + +.. code-block:: bash + + yardstick task start /yardstick/samples/vnf_samples/nsut/vpe/tc_baremetal_rfc2544_ipv4_1flow_64B_ixia.yaml diff --git a/docs/testing/user/userguide/15-list-of-tcs.rst b/docs/testing/user/userguide/15-list-of-tcs.rst index 0efecebd1..8990800c1 100644 --- a/docs/testing/user/userguide/15-list-of-tcs.rst +++ b/docs/testing/user/userguide/15-list-of-tcs.rst @@ -29,6 +29,7 @@ Generic NFVI Test Case Descriptions opnfv_yardstick_tc002.rst opnfv_yardstick_tc004.rst opnfv_yardstick_tc005.rst + opnfv_yardstick_tc006.rst opnfv_yardstick_tc008.rst opnfv_yardstick_tc009.rst opnfv_yardstick_tc010.rst @@ -57,6 +58,7 @@ Generic NFVI Test Case Descriptions opnfv_yardstick_tc080.rst opnfv_yardstick_tc081.rst opnfv_yardstick_tc083.rst + opnfv_yardstick_tc084.rst OPNFV Feature Test Cases ======================== @@ -83,6 +85,10 @@ H A opnfv_yardstick_tc057.rst opnfv_yardstick_tc058.rst opnfv_yardstick_tc087.rst + opnfv_yardstick_tc088.rst + opnfv_yardstick_tc089.rst + opnfv_yardstick_tc090.rst + opnfv_yardstick_tc091.rst opnfv_yardstick_tc092.rst opnfv_yardstick_tc093.rst @@ -118,17 +124,6 @@ StorPerf opnfv_yardstick_tc074.rst -virtual Traffic Classifier --------------------------- - -.. toctree:: - :maxdepth: 1 - - opnfv_yardstick_tc006.rst - opnfv_yardstick_tc007.rst - opnfv_yardstick_tc020.rst - opnfv_yardstick_tc021.rst - Templates ========= diff --git a/docs/testing/user/userguide/comp-intro.rst b/docs/testing/user/userguide/comp-intro.rst index ad354b66d..bab6e60da 100644 --- a/docs/testing/user/userguide/comp-intro.rst +++ b/docs/testing/user/userguide/comp-intro.rst @@ -7,10 +7,10 @@ Yardstick ========= -.. _Yardstick: https://wiki.opnfv.org/yardstick +.. _Yardstick: https://wiki.opnfv.org/display/yardstick .. _Presentation: https://wiki.opnfv.org/_media/opnfv_summit_-_yardstick_project.pdf .. _NFV-TST001: https://docbox.etsi.org/ISG/NFV/Open/Drafts/TST001_-_Pre-deployment_Validation/ -.. _Yardsticktst: https://wiki.opnfv.org/_media/opnfv_summit_-_bridging_opnfv_and_etsi.pdf +.. _Yardsticktst: http://events17.linuxfoundation.org/sites/events/files/slides/OPNFV%20Summit%20-%20bridging_opnfv_and_etsi.pdf The project's goal is to verify infrastructure compliance, from the perspective of a Virtual Network Function (VNF). diff --git a/docs/testing/user/userguide/index.rst b/docs/testing/user/userguide/index.rst index 1cbd0858f..ff0bb6f5d 100644 --- a/docs/testing/user/userguide/index.rst +++ b/docs/testing/user/userguide/index.rst @@ -11,7 +11,6 @@ Yardstick User Guide .. 
toctree:: :maxdepth: 4 - :numbered: 01-introduction 02-methodology diff --git a/docs/testing/user/userguide/nsb/nsb-list-of-tcs.rst b/docs/testing/user/userguide/nsb/nsb-list-of-tcs.rst index 895837283..6c18c7d89 100644 --- a/docs/testing/user/userguide/nsb/nsb-list-of-tcs.rst +++ b/docs/testing/user/userguide/nsb/nsb-list-of-tcs.rst @@ -27,4 +27,12 @@ NSB PROX Test Case Descriptions tc_prox_context_buffering_port tc_prox_context_load_balancer_port tc_prox_context_vpe_port - tc_prox_context_lw_after_port + tc_prox_context_lw_aftr_port + tc_epc_default_bearer_landslide + tc_epc_dedicated_bearer_landslide + tc_epc_saegw_tput_relocation_landslide + tc_epc_network_service_request_landslide + tc_epc_ue_service_request_landslide + tc_vfw_rfc2544 + tc_vfw_rfc2544_correlated + tc_vfw_rfc3511 diff --git a/docs/testing/user/userguide/nsb/tc_epc_dedicated_bearer_landslide.rst b/docs/testing/user/userguide/nsb/tc_epc_dedicated_bearer_landslide.rst new file mode 100644 index 000000000..c8865ed93 --- /dev/null +++ b/docs/testing/user/userguide/nsb/tc_epc_dedicated_bearer_landslide.rst @@ -0,0 +1,156 @@ +.. This work is licensed under a Creative Commons Attribution 4.0 International +.. License. +.. http://creativecommons.org/licenses/by/4.0 +.. (c) OPNFV, 2018 Intel Corporation. + +********************************************************* +Yardstick Test Case Description: NSB EPC DEDICATED BEARER +********************************************************* + ++-----------------------------------------------------------------------------+ +|NSB EPC dedicated bearer test case | +| | ++--------------+--------------------------------------------------------------+ +|test case id | tc_epc_{initiator}_dedicated_bearer_landslide | +| | | +| | * initiator: dedicated bearer creation initiator side could | +| | be UE (ue) or Network (network). | +| | | ++--------------+--------------------------------------------------------------+ +|metric | All metrics provided by Spirent Landslide traffic generator | +| | | ++--------------+--------------------------------------------------------------+ +|test purpose | The Spirent Landslide product provides one box solution which| +| | allows to fully emulate all EPC network nodes including | +| | mobile users, network host and generate control and data | +| | plane traffic. | +| | | +| | This test allows to check processing capability under | +| | different levels of load (number of subscriber, generated | +| | traffic throughput, etc.) for case when default and dedicated| +| | bearers are creating and using for traffic transferring. | +| | | +| | It's easy to replace emulated node or multiple nodes in test | +| | topology with real node or corresponding vEPC VNF as DUT and | +| | check it's processing capabilities under specific test case | +| | load conditions. 
| +| | | ++--------------+--------------------------------------------------------------+ +|configuration | The EPC dedicated bearer test cases are listed below: | +| | | +| | * tc_epc_ue_dedicated_bearer_create_landslide.yaml | +| | * tc_epc_network_dedicated_bearer_create_landslide.yaml | +| | | +| | Test duration: | +| | | +| | * is set as 60sec (specified in test session profile); | +| | | +| | Traffic type: | +| | | +| | * UDP; | +| | | +| | Packet sizes: | +| | | +| | * 512 bytes; | +| | | +| | Traffic transaction rate: | +| | | +| | * 5 trans/s.; | +| | | +| | Number of mobile subscribers: | +| | | +| | * 20000; | +| | | +| | Number of default bearers per subscriber: | +| | | +| | * 1; | +| | | +| | Number of dedicated bearers per default bearer: | +| | | +| | * 1. | +| | | +| | The above fields and values are the main options used for the| +| | test case. Other configurable options could be found in test | +| | session profile yaml file. All these options have default | +| | values which can be overwritten in test case file. | +| | | ++--------------+--------------------------------------------------------------+ +|test tool | Spirent Landslide | +| | | +| | The Spirent Landslide is a tool for functional and | +| | performance testing of different types of mobile networks. | +| | It emulates real-world control and data traffic of mobile | +| | subscribers moving through virtualized EPC network. | +| | Detailed description of Spirent Landslide product could be | +| | found here: https://www.spirent.com/Products/Landslide | +| | | ++--------------+--------------------------------------------------------------+ +|applicability | This EPC DEDICATED BEARER test cases can be configured with | +| | different: | +| | | +| | * packet sizes; | +| | * traffic transaction rate; | +| | * number of subscribers sessions; | +| | * number of default bearers per subscriber; | +| | * number of dedicated bearers per default; | +| | * subscribers connection rate; | +| | * subscribers disconnection rate; | +| | * dedicated bearers activation timeout; | +| | * DMF (traffic profile); | +| | * enable/disable Fireball DMF threading model that provides | +| | optimized performance; | +| | | +| | Default values exist. | +| | | ++--------------+--------------------------------------------------------------+ +|references | ETSI-NFV-TST001 | +| | | +| | 3GPP TS 32.455 | +| | | ++--------------+--------------------------------------------------------------+ +| pre-test | * All Spirent Landslide dependencies need to be installed. | +| conditions | The steps are described in NSB installation chapter for the| +| | Spirent Landslide vEPC tests; | +| | | +| | * The pod.yaml file contains all necessary information (TAS | +| | VM IP address, NICs, emulated SUTs and Test Nodes | +| | parameters (names, types, ip addresses, etc.). | +| | | ++--------------+--------------------------------------------------------------+ +|test sequence | description and expected result | +| | | ++--------------+--------------------------------------------------------------+ +|step 1 | Spirent Landslide components are running on the hosts | +| | specified in the pod file. | +| | | ++--------------+--------------------------------------------------------------+ +|step 2 | Yardstick is connected with Spirent Landslide Test | +| | Administrator Server (TAS) by TCL and REST API. The test | +| | will resolve the topology and instantiate all emulated EPC | +| | network nodes. 
| +| | | ++--------------+--------------------------------------------------------------+ +|step 3 | Test scenarios run, which performs the following steps: | +| | | +| | * Start the emulated EPC network nodes; | +| | * Establish the subscribers connections to EPC network | +| | (default bearers); | +| | * Establish the number of dedicated bearers as per per | +| | default bearer for each subscriber; | +| | * Create the sessions and transmit traffic through EPC | +| | network nodes during the specified traffic duration time; | +| | * Disconnect dedicated bearers; | +| | * Disconnect subscribers at the end of the test. | +| | | ++--------------+--------------------------------------------------------------+ +|step 4 | During test run, all the metrics provided by Spirent | +| | Landslide are stored in the yardstick dispatcher. | +| | | ++--------------+--------------------------------------------------------------+ +|test verdict | The test case will create the test session in Spirent | +| | Landslide with the test case parameters and store the results| +| | in the database for benchmarking purposes. The aim is only | +| | to collect all the metrics that are provided by Spirent | +| | Landslide product for each test specific scenario. | +| | | ++--------------+--------------------------------------------------------------+ diff --git a/docs/testing/user/userguide/nsb/tc_epc_default_bearer_landslide.rst b/docs/testing/user/userguide/nsb/tc_epc_default_bearer_landslide.rst new file mode 100644 index 000000000..9e6d77825 --- /dev/null +++ b/docs/testing/user/userguide/nsb/tc_epc_default_bearer_landslide.rst @@ -0,0 +1,149 @@ +.. This work is licensed under a Creative Commons Attribution 4.0 International +.. License. +.. http://creativecommons.org/licenses/by/4.0 +.. (c) OPNFV, 2018 Intel Corporation. + +******************************************************* +Yardstick Test Case Description: NSB EPC DEFAULT BEARER +******************************************************* + ++-----------------------------------------------------------------------------+ +|NSB EPC default bearer test case | +| | ++--------------+--------------------------------------------------------------+ +|test case id | tc_epc_default_bearer_landslide_{dmf_setup} | +| | | +| | * dmf_setup: single or multi dmf test session setup; | +| | | ++--------------+--------------------------------------------------------------+ +|metric | All metrics provided by Spirent Landslide traffic generator | +| | | ++--------------+--------------------------------------------------------------+ +|test purpose | The Spirent Landslide product provides one box solution which| +| | allows to fully emulate all EPC network nodes including | +| | mobile users, network host and generate control and data | +| | plane traffic. | +| | | +| | This test allows to check processing capability of EPC under | +| | different levels of load (number of subscriber, generated | +| | traffic throughput) for case when only one default bearer is | +| | using for transferring traffic from UE to Network. | +| | | +| | It's easy to replace emulated node or multiple nodes in test | +| | topology with real node or corresponding vEPC VNF as DUT and | +| | check it's processing capabilities under specific test case | +| | load conditions. 
| +| | | ++--------------+--------------------------------------------------------------+ +|configuration | The EPC default bearer test cases are listed below: | +| | | +| | * tc_epc_default_bearer_create_landslide.yaml | +| | * tc_epc_default_bearer_create_landslide_multi_dmf.yaml | +| | | +| | Test duration: | +| | | +| | * is set as 60sec (specified in test session profile); | +| | | +| | Traffic type: | +| | | +| | * UDP - for single DMF test case; | +| | * UDP and TCP - for multi DMF test case; | +| | | +| | Packet sizes: | +| | | +| | * 512 bytes for UDP packets; | +| | * 1518 bytes for TCP packets; | +| | | +| | Traffic transaction rate: | +| | | +| | * 5 trans/s.; | +| | | +| | Number of mobile subscribers: | +| | | +| | * 20000; | +| | | +| | Number of default bearers per subscriber: | +| | | +| | * 1. | +| | | +| | The above fields and values are the main options used for the| +| | test case. Other configurable options could be found in test | +| | session profile yaml file. All these options have default | +| | values which can be overwritten in test case file. | +| | | ++--------------+--------------------------------------------------------------+ +|test tool | Spirent Landslide | +| | | +| | The Spirent Landslide is a tool for functional & performance | +| | testing of different types of mobile networks. It emulates | +| | real-world control and data traffic of mobile subscribers | +| | moving through virtualized EPC network. | +| | Detailed description of Spirent Landslide product could be | +| | found here: https://www.spirent.com/Products/Landslide | +| | | ++--------------+--------------------------------------------------------------+ +|applicability | This EPC DEFAULT BEARER test cases can be configured with | +| | different: | +| | | +| | * packet sizes; | +| | * traffic transaction rate; | +| | * number of subscribers sessions; | +| | * number of default bearers per subscriber; | +| | * subscribers connection rate; | +| | * subscribers disconnection rate; | +| | * DMF (traffic profile); | +| | * enable/disable Fireball DMF threading model that provides | +| | optimized performance; | +| | | +| | Default values exist. | +| | | ++--------------+--------------------------------------------------------------+ +|references | ETSI-NFV-TST001 | +| | | +| | 3GPP TS 32.455 | +| | | ++--------------+--------------------------------------------------------------+ +| pre-test | * All Spirent Landslide dependencies are installed (detailed | +| conditions | installation steps are described in Chapter 13- | +| | nsb-installation.rst and 14-nsb-operation.rst file for NSB | +| | Spirent Landslide vEPC tests; | +| | | +| | * The pod.yaml file contains all necessary information | +| | (TAS VM IP address, NICs, emulated SUTs and Test Nodes | +| | parameters (names, types, ip addresses, etc.). | +| | | ++--------------+--------------------------------------------------------------+ +|test sequence | description and expected result | +| | | ++--------------+--------------------------------------------------------------+ +|step 1 | Spirent Landslide components are running on the hosts | +| | specified in the pod file. | +| | | ++--------------+--------------------------------------------------------------+ +|step 2 | Yardstick is connected with Spirent Landslide Test | +| | Administration Server (TAS) by TCL and REST API. The test | +| | will resolve the topology and instantiate all emulated EPC | +| | network nodes. 
| +| | | ++--------------+--------------------------------------------------------------+ +|step 3 | Test scenarios run, which performs the following steps: | +| | | +| | * Start emulated EPC network nodes; | +| | * Establish subscribers connections to EPC network (only | +| | default bearers are established); | +| | * Create the sessions and transmit traffic through EPC | +| | network nodes during the specified traffic duration time; | +| | * Disconnect subscribers at the end of the test. | +| | | ++--------------+--------------------------------------------------------------+ +|step 4 | During test run, all the metrics provided by Spirent | +| | Landslide are stored in the yardstick dispatcher. | +| | | ++--------------+--------------------------------------------------------------+ +|test verdict | The test case will create the test session in Spirent | +| | Landslide with the test case parameters and store the | +| | results in the database for benchmarking purposes. The aim | +| | is only to collect all the metrics that are provided by | +| | Spirent Landslide product for each test specific scenario. | +| | | ++--------------+--------------------------------------------------------------+ diff --git a/docs/testing/user/userguide/nsb/tc_epc_network_service_request_landslide.rst b/docs/testing/user/userguide/nsb/tc_epc_network_service_request_landslide.rst new file mode 100644 index 000000000..85e6ce11a --- /dev/null +++ b/docs/testing/user/userguide/nsb/tc_epc_network_service_request_landslide.rst @@ -0,0 +1,159 @@ +.. This work is licensed under a Creative Commons Attribution 4.0 International +.. License. +.. http://creativecommons.org/licenses/by/4.0 +.. (c) OPNFV, 2018 Intel Corporation. + +**************************************************************** +Yardstick Test Case Description: NSB EPC NETWORK SERVICE REQUEST +**************************************************************** + ++-----------------------------------------------------------------------------+ +|NSB EPC network service request test case | +| | ++--------------+--------------------------------------------------------------+ +|test case id | tc_epc_network_service_request_landslide | +| | | +| | * initiator: service request initiator side could be UE (ue) | +| | or Network (network). | +| | | ++--------------+--------------------------------------------------------------+ +|metric | All metrics provided by Spirent Landslide traffic generator | +| | | ++--------------+--------------------------------------------------------------+ +|test purpose | The Spirent Landslide product provides one box solution which| +| | allows to fully emulate all EPC network nodes including | +| | mobile users, network host and generate control and data | +| | plane traffic. | +| | | +| | This test covers case of network initiated service request & | +| | allows to check processing capabilities of EPC handling high | +| | amount of continuous Downlink Data Notification messages from| +| | network to UEs which are in Idle state. | +| | | +| | It's easy to replace emulated node or multiple nodes in test | +| | topology with real node or corresponding vEPC VNF as DUT and | +| | check it's processing capabilities under specific test case | +| | load conditions. 
| +| | | ++--------------+--------------------------------------------------------------+ +|configuration | The EPC network service request test cases are listed below: | +| | | +| | * tc_epc_network_service_request_landslide.yaml | +| | | +| | Test duration: | +| | | +| | * is set as 60sec (specified in test session profile); | +| | | +| | Traffic type: | +| | | +| | * UDP; | +| | | +| | Packet sizes: | +| | | +| | * 512 bytes; | +| | | +| | Traffic transaction rate: | +| | | +| | * 0.1 trans/s.; | +| | | +| | Number of mobile subscribers: | +| | | +| | * 20000; | +| | | +| | Number of default bearers per subscriber: | +| | | +| | * 1; | +| | | +| | Idle entry time (timeout after which UE goes to Idle state): | +| | | +| | * 5s; | +| | | +| | Traffic start delay: | +| | | +| | * 1000ms. | +| | | +| | The above fields and values are the main options used for the| +| | test case. Other configurable options could be found in test | +| | session profile yaml file. All these options have default | +| | values which can be overwritten in test case file. | +| | | ++--------------+--------------------------------------------------------------+ +|test tool | Spirent Landslide | +| | | +| | The Spirent Landslide is a tool for functional & performance | +| | testing of different types of mobile networks. It emulates | +| | real-world control and data traffic of mobile subscribers | +| | moving through virtualized EPC network. | +| | Detailed description of Spirent Landslide product could be | +| | found here: https://www.spirent.com/Products/Landslide | +| | | ++--------------+--------------------------------------------------------------+ +|applicability | This EPC NETWORK SERVICE REQUEST test case can be configured | +| | with different: | +| | | +| | * packet sizes; | +| | * traffic transaction rate; | +| | * number of subscribers sessions; | +| | * number of default bearers per subscriber; | +| | * subscribers connection rate; | +| | * subscribers disconnection rate; | +| | * timeout after which UE goes to Idle state; | +| | * Traffic start delay; | +| | | +| | Default values exist. | +| | | ++--------------+--------------------------------------------------------------+ +|references | ETSI-NFV-TST001 | +| | | +| | 3GPP TS 32.455 | +| | | ++--------------+--------------------------------------------------------------+ +| pre-test | * All Spirent Landslide dependencies are installed (detailed | +| conditions | installation steps are described in Chapter 13- | +| | nsb-installation.rst and 14-nsb-operation.rst file for NSB | +| | Spirent Landslide vEPC tests; | +| | | +| | * The pod.yaml file contains all necessary information | +| | (TAS VM IP address, NICs, emulated SUTs and Test Nodes | +| | parameters (names, types, ip addresses, etc.). | +| | | ++--------------+--------------------------------------------------------------+ +|test sequence | description and expected result | +| | | ++--------------+--------------------------------------------------------------+ +|step 1 | Spirent Landslide components are running on the hosts | +| | specified in the pod file. | +| | | ++--------------+--------------------------------------------------------------+ +|step 2 | Yardstick is connected with Spirent Landslide Test | +| | Administration Server (TAS) by TCL and REST API. The test | +| | will resolve the topology and instantiate all emulated EPC | +| | network nodes. 
| +| | | ++--------------+--------------------------------------------------------------+ +|step 3 | Test scenarios run, which performs the following steps: | +| | | +| | * Start emulated EPC network nodes; | +| | * Establish subscribers connections to EPC network (default | +| | bearers); | +| | * Switch UE to Idle state after specified in test case | +| | timeout; | +| | * Send Downlink Data Notification from network to UE, that | +| | will return UE to active state. This process is continuous | +| | and during whole test run UEs will be going to Idle state | +| | and will be switched back to active state after Downlink | +| | Data Notification was received; | +| | * Disconnect subscribers at the end of the test. | +| | | ++--------------+--------------------------------------------------------------+ +|step 4 | During test run, all the metrics provided by Spirent | +| | Landslide are stored in the yardstick dispatcher. | +| | | ++--------------+--------------------------------------------------------------+ +|test verdict | The test case will create the test session in Spirent | +| | Landslide with the test case parameters and store the | +| | results in the database for benchmarking purposes. The aim | +| | is only to collect all the metrics that are provided by | +| | Spirent Landslide product for each test specific scenario. | +| | | ++--------------+--------------------------------------------------------------+ diff --git a/docs/testing/user/userguide/nsb/tc_epc_saegw_tput_relocation_landslide.rst b/docs/testing/user/userguide/nsb/tc_epc_saegw_tput_relocation_landslide.rst new file mode 100644 index 000000000..102517562 --- /dev/null +++ b/docs/testing/user/userguide/nsb/tc_epc_saegw_tput_relocation_landslide.rst @@ -0,0 +1,167 @@ +.. This work is licensed under a Creative Commons Attribution 4.0 International +.. License. +.. http://creativecommons.org/licenses/by/4.0 +.. (c) OPNFV, 2018 Intel Corporation. + +********************************************************* +Yardstick Test Case Description: NSB EPC SAEGW RELOCATION +********************************************************* + ++-----------------------------------------------------------------------------+ +|NSB EPC SAEGW throughput with relocation test case | +| | ++--------------+--------------------------------------------------------------+ +|test case id | tc_epc_saegw_tput_relocation_landslide | +| | | +| | | ++--------------+--------------------------------------------------------------+ +|metric | All metrics provided by Spirent Landslide traffic generator | +| | | ++--------------+--------------------------------------------------------------+ +|test purpose | The Spirent Landslide product provides one box solution which| +| | allows to fully emulate all EPC network nodes including | +| | mobile users, network host and generate control and data | +| | plane traffic. | +| | | +| | This test allows to check processing capability of EPC | +| | handling large amount of subscribers X2 handovers between | +| | different eNBs while UEs are sending traffic. | +| | | +| | It's easy to replace emulated node or multiple nodes in test | +| | topology with real node or corresponding vEPC VNF as DUT and | +| | check it's processing capabilities under specific test case | +| | load conditions. 
| +| | | ++--------------+--------------------------------------------------------------+ +|configuration | The EPC SAEGW throughput with relocation tests are listed | +| | below: | +| | | +| | * tc_epc_saegw_tput_relocation_landslide.yaml | +| | | +| | Test duration: | +| | | +| | * is set as 60sec (specified in test session profile); | +| | | +| | Traffic type: | +| | | +| | * UDP; | +| | | +| | Packet sizes: | +| | | +| | * 512 bytes; | +| | | +| | Traffic transaction rate: | +| | | +| | * 5 trans/s.; | +| | | +| | Number of mobile subscribers: | +| | | +| | * 20000; | +| | | +| | Number of default bearers per subscriber: | +| | | +| | * 1; | +| | | +| | Handover type: | +| | | +| | * X2 handover; | +| | | +| | Mobility time (timeout after sessions were established after | +| | which handover will start): | +| | | +| | * 10000ms; | +| | | +| | Handover start type: | +| | | +| | * When all sessions started; | +| | | +| | Mobility mode: | +| | | +| | * Single handoff; | +| | | +| | Mobility Rate: | +| | | +| | * 120 subscribers/s. | +| | | +| | The above fields and values are the main options used for the| +| | test case. Other configurable options could be found in test | +| | session profile yaml file. All these options have default | +| | values which can be overwritten in test case file. | +| | | ++--------------+--------------------------------------------------------------+ +|test tool | Spirent Landslide | +| | | +| | The Spirent Landslide is a tool for functional & performance | +| | testing of different types of mobile networks. It emulates | +| | real-world control and data traffic of mobile subscribers | +| | moving through virtualized EPC network. | +| | Detailed description of Spirent Landslide product could be | +| | found here: https://www.spirent.com/Products/Landslide | +| | | ++--------------+--------------------------------------------------------------+ +|applicability | This EPC UE SERVICE REQUEST test cases can be configured with| +| | different: | +| | | +| | * packet sizes; | +| | * traffic transaction rate; | +| | * number of subscribers sessions; | +| | * handover type; | +| | * mobility rate; | +| | * mobility time; | +| | * mobility mode; | +| | * handover start condition; | +| | * subscribers disconnection rate; | +| | | +| | Default values exist. | +| | | ++--------------+--------------------------------------------------------------+ +|references | ETSI-NFV-TST001 | +| | | +| | 3GPP TS 32.455 | +| | | ++--------------+--------------------------------------------------------------+ +| pre-test | * All Spirent Landslide dependencies are installed (detailed | +| conditions | installation steps are described in Chapter 13- | +| | nsb-installation.rst and 14-nsb-operation.rst file for NSB | +| | Spirent Landslide vEPC tests; | +| | | +| | * The pod.yaml file contains all necessary information | +| | (TAS VM IP address, NICs, emulated SUTs and Test Nodes | +| | parameters (names, types, ip addresses, etc.). | +| | | ++--------------+--------------------------------------------------------------+ +|test sequence | description and expected result | +| | | ++--------------+--------------------------------------------------------------+ +|step 1 | Spirent Landslide components are running on the hosts | +| | specified in the pod file. | +| | | ++--------------+--------------------------------------------------------------+ +|step 2 | Yardstick is connected with Spirent Landslide Test | +| | Administration Server (TAS) by TCL and REST API. 
The test | +| | will resolve the topology and instantiate all emulated EPC | +| | network nodes. | +| | | ++--------------+--------------------------------------------------------------+ +|step 3 | Test scenarios run, which performs the following steps: | +| | | +| | * Start emulated EPC network nodes; | +| | * Establish subscribers connections to EPC network (default | +| | bearers); | +| | * Start run traffic; | +| | * After specified in test case mobility timeout, start | +| | handover process on specified mobility rate; | +| | * Disconnect subscribers at the end of the test. | +| | | ++--------------+--------------------------------------------------------------+ +|step 4 | During test run, all the metrics provided by Spirent | +| | Landslide are stored in the yardstick dispatcher. | +| | | ++--------------+--------------------------------------------------------------+ +|test verdict | The test case will create the test session in Spirent | +| | Landslide with the test case parameters and store the | +| | results in the database for benchmarking purposes. The aim | +| | is only to collect all the metrics that are provided by | +| | Spirent Landslide product for each test specific scenario. | +| | | ++--------------+--------------------------------------------------------------+ diff --git a/docs/testing/user/userguide/nsb/tc_epc_ue_service_request_landslide.rst b/docs/testing/user/userguide/nsb/tc_epc_ue_service_request_landslide.rst new file mode 100644 index 000000000..0711a0ce3 --- /dev/null +++ b/docs/testing/user/userguide/nsb/tc_epc_ue_service_request_landslide.rst @@ -0,0 +1,174 @@ +.. This work is licensed under a Creative Commons Attribution 4.0 International +.. License. +.. http://creativecommons.org/licenses/by/4.0 +.. (c) OPNFV, 2018 Intel Corporation. + +*********************************************************** +Yardstick Test Case Description: NSB EPC UE SERVICE REQUEST +*********************************************************** + ++-----------------------------------------------------------------------------+ +|NSB EPC UE service request test case | +| | ++--------------+--------------------------------------------------------------+ +|test case id | tc_epc_{initiator}_service_request_landslide | +| | | +| | * initiator: service request initiator side could be UE (ue) | +| | or Network (nw). | +| | | ++--------------+--------------------------------------------------------------+ +|metric | All metrics provided by Spirent Landslide traffic generator | +| | | ++--------------+--------------------------------------------------------------+ +|test purpose | The Spirent Landslide product provides one box solution which| +| | allows to fully emulate all EPC network nodes including | +| | mobile users, network host and generate control and data | +| | plane traffic. | +| | | +| | This test allows to check processing capabilities of EPC | +| | under high user connections rate and traffic load for case | +| | when UEs initiates service request (UE initiates bearer | +| | modification request to provide dedicated bearer for new | +| | type of traffic) | +| | | +| | It's easy to replace emulated node or multiple nodes in test | +| | topology with real node or corresponding vEPC VNF as DUT and | +| | check it's processing capabilities under specific test case | +| | load conditions. 
| +| | | ++--------------+--------------------------------------------------------------+ +|configuration | The EPC ue service request test cases are listed below: | +| | | +| | * tc_epc_ue_service_request_landslide.yaml | +| | | +| | Test duration: | +| | | +| | * is set as 60sec (specified in test session profile); | +| | | +| | Traffic type: | +| | | +| | * UDP; | +| | | +| | Packet sizes: | +| | | +| | * 512 bytes; | +| | | +| | Traffic transaction rate: | +| | | +| | * 5 trans/s.; | +| | | +| | Number of mobile subscribers: | +| | | +| | * 20000; | +| | | +| | Number of default bearers per subscriber: | +| | | +| | * 1; | +| | | +| | Number of dedicated bearers per default bearer: | +| | | +| | * 1. | +| | | +| | TFT settings for dedicated bearers: | +| | | +| | * TFT configured to filter TCP traffic (Protocol ID 6) | +| | | +| | Modified TFT settings: | +| | | +| | * Create new TFT to filter UDP traffic (Protocol ID 17) from | +| | 2002 local port and 2003 remote port; | +| | | +| | Modified QoS settings: | +| | | +| | * Set QCI 5 for dedicated bearers; | +| | | +| | The above fields and values are the main options used for the| +| | test case. Other configurable options could be found in test | +| | session profile yaml file. All these options have default | +| | values which can be overwritten in test case file. | +| | | ++--------------+--------------------------------------------------------------+ +|test tool | Spirent Landslide | +| | | +| | The Spirent Landslide is a tool for functional & performance | +| | testing of different types of mobile networks. It emulates | +| | real-world control and data traffic of mobile subscribers | +| | moving through virtualized EPC network. | +| | Detailed description of Spirent Landslide product could be | +| | found here: https://www.spirent.com/Products/Landslide | +| | | ++--------------+--------------------------------------------------------------+ +|applicability | This EPC UE SERVICE REQUEST test case can be configured with | +| | different: | +| | | +| | * packet sizes; | +| | * traffic transaction rate; | +| | * number of subscribers sessions; | +| | * number of default bearers per subscriber; | +| | * number of dedicated bearers per default; | +| | * subscribers connection rate; | +| | * subscribers disconnection rate; | +| | * dedicated bearers activation timeout; | +| | * DMF (traffic profile); | +| | * enable/disable Fireball DMF threading model that provides | +| | optimized performance; | +| | * Starting TFT settings for dedicated bearers; | +| | * Modified TFT settings for dedicated bearers; | +| | * Modified QoS settings for dedicated bearers; | +| | | +| | Default values exist. | +| | | ++--------------+--------------------------------------------------------------+ +|references | ETSI-NFV-TST001 | +| | | +| | 3GPP TS 32.455 | +| | | ++--------------+--------------------------------------------------------------+ +| pre-test | * All Spirent Landslide dependencies are installed (detailed | +| conditions | installation steps are described in Chapter 13- | +| | nsb-installation.rst and 14-nsb-operation.rst file for NSB | +| | Spirent Landslide vEPC tests; | +| | | +| | * The pod.yaml file contains all necessary information | +| | (TAS VM IP address, NICs, emulated SUTs and Test Nodes | +| | parameters (names, types, ip addresses, etc.). 
| +| | | ++--------------+--------------------------------------------------------------+ +|test sequence | description and expected result | +| | | ++--------------+--------------------------------------------------------------+ +|step 1 | Spirent Landslide components are running on the hosts | +| | specified in the pod file. | +| | | ++--------------+--------------------------------------------------------------+ +|step 2 | Yardstick is connected with Spirent Landslide Test | +| | Administration Server (TAS) by TCL and REST API. The test | +| | will resolve the topology and instantiate all emulated EPC | +| | network nodes. | +| | | ++--------------+--------------------------------------------------------------+ +|step 3 | Test scenarios run, which performs the following steps: | +| | | +| | * Start emulated EPC network nodes; | +| | * Establish subscribers connections to EPC network (default | +| | bearers); | +| | * Establish the number of dedicated bearer as specified in | +| | the test case as per default bearer for each subscriber; | +| | * start run users traffic through EPC network nodes; | +| | * During traffic is running, send bearer modification request| +| | after specified in test case timeout; | +| | * Disconnect dedicated bearers; | +| | * Disconnect subscribers at the end of the test. | +| | | ++--------------+--------------------------------------------------------------+ +|step 4 | During test run, all the metrics provided by Spirent | +| | Landslide are stored in the yardstick dispatcher. | +| | | ++--------------+--------------------------------------------------------------+ +|test verdict | The test case will create the test session in Spirent | +| | Landslide with the test case parameters and store the | +| | results in the database for benchmarking purposes. The aim | +| | is only to collect all the metrics that are provided by | +| | Spirent Landslide product for each test specific scenario. | +| | | ++--------------+--------------------------------------------------------------+ diff --git a/docs/testing/user/userguide/nsb/tc_vfw_rfc2544.rst b/docs/testing/user/userguide/nsb/tc_vfw_rfc2544.rst new file mode 100644 index 000000000..139990bc3 --- /dev/null +++ b/docs/testing/user/userguide/nsb/tc_vfw_rfc2544.rst @@ -0,0 +1,189 @@ +.. This work is licensed under a Creative Commons Attribution 4.0 International +.. License. +.. http://creativecommons.org/licenses/by/4.0 +.. (c) OPNFV, 2018 Intel Corporation. 
+ +************************************************ +Yardstick Test Case Description: NSB vFW RFC2544 +************************************************ + ++------------------------------------------------------------------------------+ +| NSB vFW test for VNF characterization | +| | ++---------------+--------------------------------------------------------------+ +| test case id | tc_{context}_rfc2544_ipv4_1rule_1flow_{pkt_size}_{tg_type} | +| | | +| | * context = baremetal, heat, heat_external, ovs, sriov | +| | heat_sriov_external contexts; | +| | * tg_type = ixia (context != heat,heat_sriov_external), | +| | trex; | +| | * pkt_size = 64B - all contexts; | +| | 128B, 256B, 512B, 1024B, 1280B, 1518B - | +| | (context = heat, tg_type = ixia) | +| | | ++---------------+--------------------------------------------------------------+ +| metric | * Network Throughput; | +| | * TG Packets Out; | +| | * TG Packets In; | +| | * TG Latency; | +| | * VNF Packets Out; | +| | * VNF Packets In; | +| | * VNF Packets Fwd; | +| | * Dropped packets; | +| | | ++---------------+--------------------------------------------------------------+ +| test purpose | The vFW RFC2544 tests measure performance characteristics of | +| | the SUT (multiple ports) and send UDP bidirectional traffic | +| | from all TG ports to the SampleVNF vFW application. The | +| | application forwards received traffic based on rules | +| | provided by the user in the TC configuration and default | +| | rules created by vFW to send traffic from uplink ports to | +| | downlink and vice versa. | +| | | ++---------------+--------------------------------------------------------------+ +| configuration | The 2 ports RFC2544 test cases are listed below: | +| | | +| | * tc_baremetal_rfc2544_ipv4_1rule_1flow_64B_ixia.yaml | +| | * tc_baremetal_rfc2544_ipv4_1rule_1flow_64B_trex.yaml | +| | * tc_heat_external_rfc2544_ipv4_1rule_1flow_1024B_ixia.yaml | +| | * tc_heat_external_rfc2544_ipv4_1rule_1flow_1280B_ixia.yaml | +| | * tc_heat_external_rfc2544_ipv4_1rule_1flow_128B_ixia.yaml | +| | * tc_heat_external_rfc2544_ipv4_1rule_1flow_1518B_ixia.yaml | +| | * tc_heat_external_rfc2544_ipv4_1rule_1flow_256B_ixia.yaml | +| | * tc_heat_external_rfc2544_ipv4_1rule_1flow_512B_ixia.yaml | +| | * tc_heat_external_rfc2544_ipv4_1rule_1flow_64B_ixia.yaml | +| | * tc_heat_external_rfc2544_ipv4_1rule_1flow_64B_trex.yaml | +| | * tc_heat_sriov_external_rfc2544_ipv4_1rule_1flow_64B_trex. | +| | yaml | +| | * tc_heat_rfc2544_ipv4_1rule_1flow_64B_trex.yaml | +| | * tc_ovs_rfc2544_ipv4_1rule_1flow_64B_ixia.yaml | +| | * tc_ovs_rfc2544_ipv4_1rule_1flow_64B_trex.yaml | +| | * tc_sriov_rfc2544_ipv4_1rule_1flow_64B_ixia.yaml | +| | * tc_sriov_rfc2544_ipv4_1rule_1flow_64B_trex.yaml | +| | | +| | The 4 ports RFC2544 test cases are listed below: | +| | | +| | * tc_baremetal_rfc2544_ipv4_1rule_1flow_64B_ixia_4port.yaml | +| | * tc_tc_baremetal_rfc2544_ipv4_1rule_1flow_64B_trex_4port. | +| | yaml | +| | * tc_tc_heat_external_rfc2544_ipv4_1rule_1flow_64B_trex_4 | +| | port.yaml | +| | * tc_tc_heat_rfc2544_ipv4_1rule_1flow_64B_trex_4port.yaml | +| | | +| | The scale-up RFC2544 test cases are listed below: | +| | | +| | * tc_tc_heat_rfc2544_ipv4_1rule_1flow_64B_trex_scale-up.yaml | +| | | +| | The scale-out RFC2544 test cases are listed below: | +| | | +| | * tc_heat_rfc2544_ipv4_1rule_1flow_64B_trex_scale_out.yaml | +| | | +| | Test duration is set as 30 sec for each test and a default | +| | number of rules is applied; both can be configured in the | +| | test case file. |
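+| | |
+| | The file names above follow the test case id template. As |
+| | an illustration only (this sketch is not part of the test |
+| | suite), the full naming matrix can be expanded as follows; |
+| | note that only the combinations listed above ship as test |
+| | case files:: |
+| | |
+| |   from itertools import product |
+| | |
+| |   contexts = ["baremetal", "heat", "heat_external", |
+| |               "heat_sriov_external", "ovs", "sriov"] |
+| |   tg_types = ["ixia", "trex"] |
+| |   pkt_sizes = ["64B", "128B", "256B", "512B", "1024B", |
+| |                "1280B", "1518B"] |
+| | |
+| |   # Print every possible test case name from the template. |
+| |   for ctx, pkt, tg in product(contexts, pkt_sizes, tg_types): |
+| |       print("tc_{}_rfc2544_ipv4_1rule_1flow_{}_{}.yaml" |
+| |             .format(ctx, pkt, tg)) |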
+| | | ++---------------+--------------------------------------------------------------+ +| test tool | The vFW is a DPDK application that performs basic filtering | +| | for malformed packets and dynamic packet filtering of | +| | incoming packets using the connection tracker library. | +| | | ++---------------+--------------------------------------------------------------+ +| applicability | The vFW RFC2544 test cases can be configured with different: | +| | | +| | * packet sizes; | +| | * test duration; | +| | * tolerated loss; | +| | * traffic flows; | +| | * rules; | +| | | +| | Default values exist. | +| | | ++---------------+--------------------------------------------------------------+ +| pre-test | For OpenStack test cases the image (yardstick-samplevnf) | +| conditions | needs to be installed into Glance with vFW and DPDK | +| | included in it (NSB install). | +| | | +| | For Baremetal test cases vFW and DPDK must be installed on | +| | the hosts where the test is executed. The pod.yaml file must | +| | have the necessary system and NIC information. | +| | | +| | For standalone (SA) SRIOV/OvS test cases the | +| | yardstick-samplevnf image needs to be installed on hosts and | +| | the pod.yaml file must be provided with the necessary system | +| | and NIC information. | +| | | ++---------------+--------------------------------------------------------------+ +| test sequence | Description and expected result | +| | | ++---------------+--------------------------------------------------------------+ +| step 1 | For Baremetal test: The TG (except IXIA) and VNF are started | +| | on the hosts based on the pod file. | +| | | +| | For Heat test: Two host VMs are booted, as Traffic Generator | +| | and VNF (vFW), based on the test flavor. In case of a | +| | scale-out scenario multiple VNF VMs will be started. | +| | | +| | For Heat external test: The vFW VM is booted and the TG | +| | (except IXIA) is started on the external host based on the | +| | pod file. In case of a scale-out scenario multiple VNF VMs | +| | will be deployed. | +| | | +| | For Heat SRIOV external test: The vFW VM is booted with | +| | network interfaces of `direct` type which are mapped to VFs | +| | that are available to OpenStack. The TG (except IXIA) is | +| | started on the external host based on the pod file. In case | +| | of a scale-out scenario multiple VNF VMs will be deployed. | +| | | +| | For SRIOV test: VF ports are created on the host's PFs | +| | specified in the TC file and a VM is booted using those | +| | ports and the image provided in the configuration. The TG | +| | (except IXIA) is started on another host connected to the | +| | VNF machine based on the pod file. The vFW is started in the | +| | booted VM. In case of a scale-out scenario multiple VNF VMs | +| | will be created. | +| | | +| | For OvS-DPDK test: The OvS DPDK switch is started and | +| | bridges are created with the ports specified in the TC file. | +| | DPDK vHost ports are added to the corresponding bridge and a | +| | VM is booted using those ports and the image provided in the | +| | configuration. The TG (except IXIA) is started on another | +| | host connected to the VNF machine based on the pod file. The | +| | vFW is started in the booted VM. In case of a scale-out | +| | scenario multiple VNF VMs will be deployed. | +| | | ++---------------+--------------------------------------------------------------+ +| step 2 | Yardstick is connected with the TG and VNF by using ssh (in | +| | case of IXIA the TG is connected via the TCL interface).
The test | +| | will resolve the topology and instantiate all VNFs | +| | and TG and collect the KPIs/metrics. | +| | | ++---------------+--------------------------------------------------------------+ +| step 3 | The TG will send packets to the VNFs. If the number of | +| | dropped packets is more than the tolerated loss, the line | +| | rate or throughput is halved. This is done until the dropped | +| | packets are within an acceptable tolerated loss. | +| | | +| | The KPI is the number of packets per second for different | +| | packet sizes with an accepted minimal packet loss for the | +| | default configuration. | +| | | ++---------------+--------------------------------------------------------------+ +| step 4 | In Baremetal test: The test quits the application and | +| | unbinds the DPDK ports. | +| | | +| | In Heat test: All VNF VMs and TG are deleted on test | +| | completion. | +| | | +| | In SRIOV test: The deployed VM with vFW is destroyed on the | +| | host and the TG (except IXIA) is stopped. | +| | | +| | In Heat SRIOV test: The deployed VM with vFW is destroyed, | +| | VFs are released and the TG (except IXIA) is stopped. | +| | | +| | In OvS test: The deployed VM with vFW is destroyed on the | +| | host, the OvS DPDK switch is stopped and the ports are | +| | unbound. The TG (except IXIA) is stopped. | +| | | ++---------------+--------------------------------------------------------------+ +| test verdict | The test case will achieve a Throughput with an accepted | +| | minimal tolerated packet loss. | ++---------------+--------------------------------------------------------------+ + diff --git a/docs/testing/user/userguide/nsb/tc_vfw_rfc2544_correlated.rst b/docs/testing/user/userguide/nsb/tc_vfw_rfc2544_correlated.rst new file mode 100644 index 000000000..de490900d --- /dev/null +++ b/docs/testing/user/userguide/nsb/tc_vfw_rfc2544_correlated.rst @@ -0,0 +1,130 @@ +.. This work is licensed under a Creative Commons Attribution 4.0 International +.. License. +.. http://creativecommons.org/licenses/by/4.0 +.. (c) OPNFV, 2018 Intel Corporation. + +************************************************************* +Yardstick Test Case Description: NSB vFW RFC2544 (correlated) +************************************************************* + ++------------------------------------------------------------------------------+ +| NSB vFW test for VNF characterization using correlated traffic | +| | ++---------------+--------------------------------------------------------------+ +| test case id | tc_{context}_rfc2544_ipv4_1rule_1flow_64B_trex_corelated | +| | | +| | * context = baremetal, heat | +| | | ++---------------+--------------------------------------------------------------+ +| metric | * Network Throughput; | +| | * TG Packets Out; | +| | * TG Packets In; | +| | * TG Latency; | +| | * VNF Packets Out; | +| | * VNF Packets In; | +| | * VNF Packets Fwd; | +| | * Dropped packets; | +| | | +| | NOTE: For correlated TCs the TG metrics are available on | +| | uplink ports. | +| | | ++---------------+--------------------------------------------------------------+ +| test purpose | The vFW RFC2544 correlated tests measure performance | +| | characteristics of the SUT (multiple ports) and send UDP | +| | traffic from uplink TG ports to the SampleVNF vFW | +| | application. The application forwards received traffic from | +| | uplink ports to downlink ports based on rules provided by | +| | the user in the TC configuration and default rules created | +| | by vFW.
The VNF downlink traffic is received by another | +| | UDPReplay VNF and it is mirrored back to the VNF on the same | +| | port. Finally, the traffic is received back on the TG uplink | +| | port. | +| | | ++---------------+--------------------------------------------------------------+ +| configuration | The 2 ports RFC2544 correlated test cases are listed below: | +| | | +| | * tc_baremetal_rfc2544_ipv4_1rule_1flow_64B_trex_corelated | +| | _traffic.yaml | +| | | +| | Multiple VNF (2, 4, 10) RFC2544 correlated test cases are | +| | listed below: | +| | | +| | * tc_heat_rfc2544_ipv4_1rule_1flow_64B_trex_correlated | +| | _scale_10.yaml | +| | * tc_heat_rfc2544_ipv4_1rule_1flow_64B_trex_correlated_scale | +| | _2.yaml | +| | * tc_heat_rfc2544_ipv4_1rule_1flow_64B_trex_correlated_scale | +| | _4.yaml | +| | | +| | The scale-out RFC2544 test cases are listed below: | +| | | +| | * tc_heat_rfc2544_ipv4_1rule_1flow_64B_trex_correlated_scale | +| | _out.yaml | +| | | +| | Test duration is set as 30 sec for each test and a default | +| | number of rules is applied; both can be configured in the | +| | test case file. | +| | | ++---------------+--------------------------------------------------------------+ +| test tool | The vFW is a DPDK application that performs basic filtering | +| | for malformed packets and dynamic packet filtering of | +| | incoming packets using the connection tracker library. | +| | | ++---------------+--------------------------------------------------------------+ +| applicability | The vFW RFC2544 test cases can be configured with different: | +| | | +| | * packet sizes; | +| | * test duration; | +| | * tolerated loss; | +| | * traffic flows; | +| | * rules; | +| | | +| | Default values exist. | +| | | ++---------------+--------------------------------------------------------------+ +| pre-test | For OpenStack test cases the image (yardstick-samplevnf) | +| conditions | needs to be installed into Glance with vFW and DPDK | +| | included in it (NSB install). | +| | | +| | For Baremetal test cases vFW and DPDK must be installed on | +| | the hosts where the test is executed. The pod.yaml file must | +| | have the necessary system and NIC information. | +| | | ++---------------+--------------------------------------------------------------+ +| test sequence | Description and expected result | +| | | ++---------------+--------------------------------------------------------------+ +| step 1 | For Baremetal test: The TG (except IXIA), vFW and UDPReplay | +| | VNFs are started on the hosts based on the pod file. | +| | | +| | For Heat test: Three host VMs are booted, as Traffic | +| | Generator, vFW and UDPReplay VNF, based on the test flavor. | +| | In case of a scale-out scenario multiple vFW VNF VMs will be | +| | started. | +| | | ++---------------+--------------------------------------------------------------+ +| step 2 | Yardstick is connected with the TG, vFW and UDPReplay VNF by | +| | using ssh (in case of IXIA the TG is connected via the TCL | +| | interface). The test will resolve the topology and | +| | instantiate all VNFs and TG and collect the KPIs/metrics. | +| | | ++---------------+--------------------------------------------------------------+ +| step 3 | The TG will send packets to the VNFs. If the number of | +| | dropped packets is more than the tolerated loss, the line | +| | rate or throughput is halved. This is done until the dropped | +| | packets are within an acceptable tolerated loss. |
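+| | |
+| | As an illustration only (not the traffic profile shipped |
+| | with Yardstick, which may narrow the rate with a finer |
+| | binary search), this halving logic can be sketched as |
+| | below, where ``measure`` is a hypothetical callback |
+| | representing one TG trial run that returns the observed |
+| | drop ratio:: |
+| | |
+| |   def find_throughput(measure, line_rate, tolerated_loss): |
+| |       # Halve the offered rate until the measured drop |
+| |       # ratio falls within the tolerated loss. |
+| |       rate = line_rate |
+| |       while rate > 0.01 and measure(rate) > tolerated_loss: |
+| |           rate = rate / 2.0 |
+| |       return rate |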
+| | | +| | The KPI is the number of packets per second for 64B packet | +| | size with an accepted minimal packet loss for the default | +| | configuration. | +| | | ++---------------+--------------------------------------------------------------+ +| step 4 | In Baremetal test: The test quits the application and | +| | unbinds the DPDK ports. | +| | | +| | In Heat test: All VNF VMs and TG are deleted on test | +| | completion. | +| | | ++---------------+--------------------------------------------------------------+ +| test verdict | The test case will achieve a Throughput with an accepted | +| | minimal tolerated packet loss. | ++---------------+--------------------------------------------------------------+ + diff --git a/docs/testing/user/userguide/nsb/tc_vfw_rfc3511.rst b/docs/testing/user/userguide/nsb/tc_vfw_rfc3511.rst new file mode 100644 index 000000000..9051fc4df --- /dev/null +++ b/docs/testing/user/userguide/nsb/tc_vfw_rfc3511.rst @@ -0,0 +1,133 @@ +.. This work is licensed under a Creative Commons Attribution 4.0 International +.. License. +.. http://creativecommons.org/licenses/by/4.0 +.. (c) OPNFV, 2018 Intel Corporation. + +******************************************************* +Yardstick Test Case Description: NSB vFW RFC3511 (HTTP) +******************************************************* + ++------------------------------------------------------------------------------+ +| NSB vFW test for VNF characterization based on RFC3511 and IXIA | +| | ++---------------+--------------------------------------------------------------+ +| test case id | tc_{context}_http_ixload_{http_size}_Requests-65000_{type} | +| | | +| | * context = baremetal, heat_external | +| | * http_size = 1b, 4k, 64k, 256k, 512k, 1024k payload size | +| | * type = Concurrency, Connections, Throughput | +| | | ++---------------+--------------------------------------------------------------+ +| metric | * HTTP Total Throughput (Kbps); | +| | * HTTP Simulated Users; | +| | * HTTP Concurrent Connections; | +| | * HTTP Connection Rate; | +| | * HTTP Transaction Rate | +| | | ++---------------+--------------------------------------------------------------+ +| test purpose | The vFW RFC3511 tests measure performance characteristics of | +| | the SUT by sending HTTP traffic from uplink to downlink TG | +| | ports through the vFW VNF. The application forwards received | +| | traffic based on rules provided by the user in the TC | +| | configuration and default rules created by vFW to send | +| | traffic from uplink ports to downlink and vice versa.
| +| | | ++---------------+--------------------------------------------------------------+ +| configuration | The 2 ports RFC3511 test cases are listed below: | +| | | +| | * tc_baremetal_http_ixload_1024k_Requests-65000 | +| | _Concurrency.yaml | +| | * tc_baremetal_http_ixload_1b_Requests-65000 | +| | _Concurrency.yaml | +| | * tc_baremetal_http_ixload_256k_Requests-65000 | +| | _Concurrency.yaml | +| | * tc_baremetal_http_ixload_4k_Requests-65000 | +| | _Concurrency.yaml | +| | * tc_baremetal_http_ixload_512k_Requests-65000 | +| | _Concurrency.yaml | +| | * tc_baremetal_http_ixload_64k_Requests-65000 | +| | _Concurrency.yaml | +| | * tc_heat_external_http_ixload_1b_Requests-10Gbps | +| | _Throughput.yaml | +| | * tc_heat_external_http_ixload_1b_Requests-65000 | +| | _Concurrency.yaml | +| | * tc_heat_external_http_ixload_1b_Requests-65000 | +| | _Connections.yaml | +| | | +| | The 4 ports RFC3511 test cases are listed below: | +| | | +| | * tc_baremetal_http_ixload_1b_Requests-65000 | +| | _Concurrency_4port.yaml | +| | | ++---------------+--------------------------------------------------------------+ +| test tool | The vFW is a DPDK application that performs basic filtering | +| | for malformed packets and dynamic packet filtering of | +| | incoming packets using the connection tracker library. | +| | | ++---------------+--------------------------------------------------------------+ +| applicability | The vFW RFC3511 test cases can be configured with different: | +| | | +| | * http payload sizes; | +| | * traffic flows; | +| | * rules; | +| | | +| | Default values exist. | +| | | ++---------------+--------------------------------------------------------------+ +| pre-test | For OpenStack test case image (yardstick-samplevnf) needs | +| conditions | to be installed into Glance with vFW and DPDK included in | +| | it (NSB install). | +| | | +| | For Baremetal tests cases vFW and DPDK must be installed on | +| | the hosts where the test is executed. The pod.yaml file must | +| | have the necessary system and NIC information. | +| | | ++---------------+--------------------------------------------------------------+ +| test sequence | Description and expected result | +| | | ++---------------+--------------------------------------------------------------+ +| step 1 | For Baremetal test: The vFW VNF is started on the hosts | +| | based on the pod file. | +| | | +| | For Heat external test: The vFW VM are deployed and booted. | +| | | ++---------------+--------------------------------------------------------------+ +| step 2 | Yardstick is connected with the TG (IxLoad) via IxLoad API | +| | and VNF by using ssh. The test will resolve the topology and | +| | instantiate all VNFs and TG and collect the KPI's/metrics. | +| | | ++---------------+--------------------------------------------------------------+ +| step 3 | The TG simulates HTTP traffic based on selected type of TC. | +| | | +| | Concurrency: | +| | The TC attempts to simulate some number of human users. | +| | The simulated users are gradually brought online until 64K | +| | users is met (the Ramp-Up phase), then taken offline (the | +| | Ramp Down phase). | +| | | +| | Connections: | +| | The TC creates some number of HTTP connections per second. | +| | It will attempt to generate the 64K of HTTP connections | +| | per second. 
| +| | | +| | Throughput: | +| | TC simultaneously transmits and receives TCP payload | +| | (bytes) at a certain rate measured in Megabits per second | +| | (Mbps), Kilobits per second (Kbps), or Gigabits per | +| | second. The 10 Gbits is default throughput. | +| | | +| | At the end of the TC, the KPIs are collected and stored | +| | (depends on the selected dispatcher). | +| | | ++---------------+--------------------------------------------------------------+ +| step 4 | In Baremetal test: The test quits the application and | +| | unbinds the DPDK ports. | +| | | +| | In Heat test: All VNF VMs are deleted and connections to TG | +| | are terminated. | +| | | ++---------------+--------------------------------------------------------------+ +| test verdict | The test case will try to achieve the configured HTTP | +| | Concurrency/Throughput/Connections. | ++---------------+--------------------------------------------------------------+ + diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc010.rst b/docs/testing/user/userguide/opnfv_yardstick_tc010.rst index 202307de6..19cc80e30 100644 --- a/docs/testing/user/userguide/opnfv_yardstick_tc010.rst +++ b/docs/testing/user/userguide/opnfv_yardstick_tc010.rst @@ -34,6 +34,7 @@ Yardstick Test Case Description TC010 | | | | | Lmbench is a suite of operating system microbenchmarks. This | | | test uses lat_mem_rd tool from that suite including: | +| | | | | * Context switching | | | * Networking: connection establishment, pipe, TCP, UDP, and | | | RPC hot potato | @@ -55,7 +56,7 @@ Yardstick Test Case Description TC010 | | The benchmark runs as two nested loops. The outer loop is | | | the stride size. The inner loop is the array size. For each | | | array size, the benchmark creates a ring of pointers that | -| | point backward one stride.Traversing the array is done by: | +| | point backward one stride. Traversing the array is done by:: | | | | | | p = (char **)*p; | | | | diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc011.rst b/docs/testing/user/userguide/opnfv_yardstick_tc011.rst index 48bdef497..cbb1db91f 100644 --- a/docs/testing/user/userguide/opnfv_yardstick_tc011.rst +++ b/docs/testing/user/userguide/opnfv_yardstick_tc011.rst @@ -60,14 +60,14 @@ Yardstick Test Case Description TC011 | | | | | * options: | | | protocol: udp # The protocol used by iperf3 tools | -| | bandwidth: 20m # It will send the given number of packets | -| | without pausing | +| | # Send the given number of packets without pausing: | +| | bandwidth: 20m | | | * runner: | | | duration: 30 # Total test duration 30 seconds. | | | | | | * SLA (optional): | | | jitter: 10 (ms) # The maximum amount of jitter that is | -| | accepted. | +| | accepted. | | | | +--------------+--------------------------------------------------------------+ |applicability | Test can be configured with different: | diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc012.rst b/docs/testing/user/userguide/opnfv_yardstick_tc012.rst index b56e829f5..2502f5d94 100644 --- a/docs/testing/user/userguide/opnfv_yardstick_tc012.rst +++ b/docs/testing/user/userguide/opnfv_yardstick_tc012.rst @@ -34,6 +34,7 @@ Yardstick Test Case Description TC012 | | | | | LMbench is a suite of operating system microbenchmarks. 
| | | This test uses bw_mem tool from that suite including: | +| | | | | * Cached file read | | | * Memory copy (bcopy) | | | * Memory read | diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc019.rst b/docs/testing/user/userguide/opnfv_yardstick_tc019.rst index 8d79e011a..d27b201c5 100644 --- a/docs/testing/user/userguide/opnfv_yardstick_tc019.rst +++ b/docs/testing/user/userguide/opnfv_yardstick_tc019.rst @@ -43,20 +43,24 @@ Yardstick Test Case Description TC019 | | | +--------------+--------------------------------------------------------------+ |monitors | In this test case, two kinds of monitor are needed: | +| | | | | 1. the "openstack-cmd" monitor constantly request a specific | | | Openstack command, which needs two parameters: | -| | 1) monitor_type: which is used for finding the monitor class | -| | and related scritps. It should be always set to | -| | "openstack-cmd" for this monitor. | -| | 2) command_name: which is the command name used for request | +| | | +| | 1. monitor_type: which is used for finding the monitor | +| | class and related scritps. It should be always set to | +| | "openstack-cmd" for this monitor. | +| | 2. command_name: which is the command name used for | +| | request | | | | | | 2. the "process" monitor check whether a process is running | | | on a specific node, which needs three parameters: | -| | 1) monitor_type: which used for finding the monitor class | -| | and related scritps. It should be always set to "process" | -| | for this monitor. | -| | 2) process_name: which is the process name for monitor | -| | 3) host: which is the name of the node runing the process | +| | | +| | 1. monitor_type: which used for finding the monitor class | +| | and related scritps. It should be always set to | +| | "process" for this monitor. | +| | 2. process_name: which is the process name for monitor | +| | 3. host: which is the name of the node runing the process | | | | | | e.g. | | | monitor1: | diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc025.rst b/docs/testing/user/userguide/opnfv_yardstick_tc025.rst index 0e2e9a5f8..f3f9ea6bf 100644 --- a/docs/testing/user/userguide/opnfv_yardstick_tc025.rst +++ b/docs/testing/user/userguide/opnfv_yardstick_tc025.rst @@ -39,12 +39,15 @@ Yardstick Test Case Description TC025 | | | +--------------+--------------------------------------------------------------+ |monitors | In this test case, one kind of monitor are needed: | +| | | | | 1. the "openstack-cmd" monitor constantly request a specific | | | Openstack command, which needs two parameters | -| | 1) monitor_type: which is used for finding the monitor class | -| | and related scritps. It should be always set to | -| | "openstack-cmd" for this monitor. | -| | 2) command_name: which is the command name used for request | +| | | +| | 1) monitor_type: which is used for finding the monitor | +| | class and related scripts. It should be always set to | +| | "openstack-cmd" for this monitor. | +| | 2) command_name: which is the command name used for | +| | request | | | | | | There are four instance of the "openstack-cmd" monitor: | | | monitor1: | diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc027.rst b/docs/testing/user/userguide/opnfv_yardstick_tc027.rst index 125fd59fa..90790e2e3 100644 --- a/docs/testing/user/userguide/opnfv_yardstick_tc027.rst +++ b/docs/testing/user/userguide/opnfv_yardstick_tc027.rst @@ -7,7 +7,7 @@ Yardstick Test Case Description TC027 ************************************* -.. 
_ipv6: https://wiki.opnfv.org/ipv6_opnfv_project +.. _ipv6: https://wiki.opnfv.org/display/ipv6 +-----------------------------------------------------------------------------+ |IPv6 connectivity between nodes on the tenant network | diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc040.rst b/docs/testing/user/userguide/opnfv_yardstick_tc040.rst index d62fbf787..4c73c9677 100644 --- a/docs/testing/user/userguide/opnfv_yardstick_tc040.rst +++ b/docs/testing/user/userguide/opnfv_yardstick_tc040.rst @@ -7,7 +7,7 @@ Yardstick Test Case Description TC040 ************************************* -.. _Parser: https://wiki.opnfv.org/parser +.. _Parser: https://wiki.opnfv.org/display/parser +-----------------------------------------------------------------------------+ |Verify Parser Yang-to-Tosca | diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc042.rst b/docs/testing/user/userguide/opnfv_yardstick_tc042.rst index a0c487c7b..23b98c8f4 100644 --- a/docs/testing/user/userguide/opnfv_yardstick_tc042.rst +++ b/docs/testing/user/userguide/opnfv_yardstick_tc042.rst @@ -9,7 +9,7 @@ Yardstick Test Case Description TC042 .. _DPDK: http://dpdk.org/doc/guides/index.html .. _Testpmd: http://dpdk.org/doc/guides/testpmd_app_ug/index.html -.. _Pktgen-dpdk: http://pktgen.readthedocs.io/en/latest/index.html +.. _Pktgen-dpdk: https://pktgen-dpdk.readthedocs.io/en/latest/index.html +-----------------------------------------------------------------------------+ |Network Performance | diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc050.rst b/docs/testing/user/userguide/opnfv_yardstick_tc050.rst index 82a491b72..7d01cb99a 100644 --- a/docs/testing/user/userguide/opnfv_yardstick_tc050.rst +++ b/docs/testing/user/userguide/opnfv_yardstick_tc050.rst @@ -35,18 +35,18 @@ Yardstick Test Case Description TC050 | | 3) interface: the network interface to be turned off. | | | | | | The interface to be closed by the attacker can be set by the | -| | variable of "{{ interface_name }}" | +| | variable of "{{ interface_name }}":: | | | | -| | attackers: | -| | - | -| | fault_type: "general-attacker" | -| | host: {{ attack_host }} | -| | key: "close-br-public" | -| | attack_key: "close-interface" | -| | action_parameter: | -| | interface: {{ interface_name }} | -| | rollback_parameter: | -| | interface: {{ interface_name }} | +| | attackers: | +| | - | +| | fault_type: "general-attacker" | +| | host: {{ attack_host }} | +| | key: "close-br-public" | +| | attack_key: "close-interface" | +| | action_parameter: | +| | interface: {{ interface_name }} | +| | rollback_parameter: | +| | interface: {{ interface_name }} | | | | +--------------+--------------------------------------------------------------+ |monitors | In this test case, the monitor named "openstack-cmd" is | @@ -56,19 +56,20 @@ Yardstick Test Case Description TC050 | | "openstack-cmd" for this monitor. 
| | | 2) command_name: which is the command name used for request | | | | -| | There are four instance of the "openstack-cmd" monitor: | -| | monitor1: | -| | - monitor_type: "openstack-cmd" | -| | - command_name: "nova image-list" | -| | monitor2: | -| | - monitor_type: "openstack-cmd" | -| | - command_name: "neutron router-list" | -| | monitor3: | -| | - monitor_type: "openstack-cmd" | -| | - command_name: "heat stack-list" | -| | monitor4: | -| | - monitor_type: "openstack-cmd" | -| | - command_name: "cinder list" | +| | There are four instance of the "openstack-cmd" monitor:: | +| | | +| | monitor1: | +| | - monitor_type: "openstack-cmd" | +| | - command_name: "nova image-list" | +| | monitor2: | +| | - monitor_type: "openstack-cmd" | +| | - command_name: "neutron router-list" | +| | monitor3: | +| | - monitor_type: "openstack-cmd" | +| | - command_name: "heat stack-list" | +| | monitor4: | +| | - monitor_type: "openstack-cmd" | +| | - command_name: "cinder list" | +--------------+--------------------------------------------------------------+ |metrics | In this test case, there is one metric: | | | 1)service_outage_time: which indicates the maximum outage | diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc052.rst b/docs/testing/user/userguide/opnfv_yardstick_tc052.rst index 9514b6819..7f2be6e7d 100644 --- a/docs/testing/user/userguide/opnfv_yardstick_tc052.rst +++ b/docs/testing/user/userguide/opnfv_yardstick_tc052.rst @@ -65,15 +65,16 @@ Yardstick Test Case Description TC052 | | | | | In this case, the "operation" adds a flavor and the "result | | | checker" checks whether ths flavor is created. Their | -| | parameters show as follows: | -| | operation: | -| | -operation_type: "nova-create-flavor" | -| | -action_parameter: | -| | flavorconfig: "test-001 test-001 100 1 1" | -| | result checker: | -| | -checker_type: "check-flavor" | -| | -expectedValue: "test-001" | -| | -condition: "in" | +| | parameters show as follows:: | +| | | +| | operation: | +| | -operation_type: "nova-create-flavor" | +| | -action_parameter: | +| | flavorconfig: "test-001 test-001 100 1 1" | +| | result checker: | +| | -checker_type: "check-flavor" | +| | -expectedValue: "test-001" | +| | -condition: "in" | +--------------+--------------------------------------------------------------+ |metrics | In this test case, there is one metric: | | | 1)service_outage_time: which indicates the maximum outage | diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc055.rst b/docs/testing/user/userguide/opnfv_yardstick_tc055.rst index c861ca90c..25703d3fb 100644 --- a/docs/testing/user/userguide/opnfv_yardstick_tc055.rst +++ b/docs/testing/user/userguide/opnfv_yardstick_tc055.rst @@ -7,7 +7,7 @@ Yardstick Test Case Description TC055 ************************************* -.. _/proc/cpuinfo: http://www.linfo.org/proc_cpuinfo.html +.. _`/proc/cpuinfo`: http://www.linfo.org/proc_cpuinfo.html +-----------------------------------------------------------------------------+ |Compute Capacity | @@ -41,7 +41,7 @@ Yardstick Test Case Description TC055 | | capacity output. 
| | | | +--------------+--------------------------------------------------------------+ -|references | /proc/cpuinfo_ | +|references | `/proc/cpuinfo`_ | | | | | | ETSI-NFV-TST001 | | | | diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc057.rst b/docs/testing/user/userguide/opnfv_yardstick_tc057.rst index 1bb43c9e7..245a58e08 100644 --- a/docs/testing/user/userguide/opnfv_yardstick_tc057.rst +++ b/docs/testing/user/userguide/opnfv_yardstick_tc057.rst @@ -49,12 +49,15 @@ Yardstick Test Case Description TC057 | | -host: node1 | +--------------+--------------------------------------------------------------+ |monitors | In this test case, a kind of monitor is needed: | +| | | | | 1. the "openstack-cmd" monitor constantly request a specific | | | Openstack command, which needs two parameters: | -| | 1) monitor_type: which is used for finding the monitor class | -| | and related scripts. It should be always set to | -| | "openstack-cmd" for this monitor. | -| | 2) command_name: which is the command name used for request | +| | | +| | 1. monitor_type: which is used for finding the monitor | +| | class and related scripts. It should be always set to | +| | "openstack-cmd" for this monitor. | +| | 2. command_name: which is the command name used for | +| | request | | | | | | In this case, the command_name of monitor1 should be | | | services that are managed by the cluster manager. | diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc063.rst b/docs/testing/user/userguide/opnfv_yardstick_tc063.rst index a77653aa5..7b8ee06c7 100644 --- a/docs/testing/user/userguide/opnfv_yardstick_tc063.rst +++ b/docs/testing/user/userguide/opnfv_yardstick_tc063.rst @@ -58,6 +58,7 @@ Yardstick Test Case Description TC063 | | * count: 15 - how many times to stat disk utilization | | | type: int | | | unit: na | +| | | | | There are default values for each above-mentioned option. | | | Run in background with other test cases. | | | | diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc069.rst b/docs/testing/user/userguide/opnfv_yardstick_tc069.rst index af0e64fbf..e1bfd5399 100644 --- a/docs/testing/user/userguide/opnfv_yardstick_tc069.rst +++ b/docs/testing/user/userguide/opnfv_yardstick_tc069.rst @@ -9,9 +9,6 @@ Yardstick Test Case Description TC069 .. _RAMspeed: http://alasir.com/software/ramspeed/ -.. table:: - :class: longtable - +-----------------------------------------------------------------------------+ |Memory Bandwidth | | | @@ -41,7 +38,8 @@ Yardstick Test Case Description TC069 | | * SLA (optional): 7000 (MBps) min_bandwidth: The minimum | | | amount of memory bandwidth that is accepted. | | | * type_id: 1 - runs a specified benchmark | -| | (by an ID number): | +| | (by an ID number):: | +| | | | | 1 -- INTmark [writing] 4 -- FLOATmark [writing] | | | 2 -- INTmark [reading] 5 -- FLOATmark [reading] | | | 3 -- INTmem 6 -- FLOATmem | diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc073.rst b/docs/testing/user/userguide/opnfv_yardstick_tc073.rst index ad4526405..873c5c99e 100644 --- a/docs/testing/user/userguide/opnfv_yardstick_tc073.rst +++ b/docs/testing/user/userguide/opnfv_yardstick_tc073.rst @@ -7,7 +7,7 @@ Yardstick Test Case Description TC073 ************************************* -.. _netperf: http://www.netperf.org/netperf/training/Netperf.html +.. 
_netperf: https://hewlettpackard.github.io/netperf/ +-----------------------------------------------------------------------------+ |Throughput per NFVI node test | diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc074.rst b/docs/testing/user/userguide/opnfv_yardstick_tc074.rst index d6beeaff9..8d025eecf 100644 --- a/docs/testing/user/userguide/opnfv_yardstick_tc074.rst +++ b/docs/testing/user/userguide/opnfv_yardstick_tc074.rst @@ -91,12 +91,15 @@ Yardstick Test Case Description TC074 | | * workload=[workload module] | | | If not specified, the default is to run all workloads. The | | | workload types are: | +| | | | | - rs: 100% Read, sequential data | | | - ws: 100% Write, sequential data | | | - rr: 100% Read, random access | | | - wr: 100% Write, random access | | | - rw: 70% Read / 30% write, random access | +| | | | | measurements. | +| | | | | * workloads={json maps} | | | This parameter supercedes the workload and calls the V2.0 | | | API in StorPerf. It allows for greater control of the | @@ -131,11 +134,13 @@ Yardstick Test Case Description TC074 | | | | | Storperf is required to be installed in the environment. | | | There are two possible methods for Storperf installation: | -| | Run container on Jump Host | -| | Run container in a VM | +| | | +| | - Run container on Jump Host | +| | - Run container in a VM | | | | | | Running StorPerf on Jump Host | | | Requirements: | +| | | | | - Docker must be installed | | | - Jump Host must have access to the OpenStack Controller | | | API | @@ -146,6 +151,7 @@ Yardstick Test Case Description TC074 | | | | | Running StorPerf in a VM | | | Requirements: | +| | | | | - VM has docker installed | | | - VM has OpenStack Controller credentials and can | | | communicate with the Controller API | diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc081.rst b/docs/testing/user/userguide/opnfv_yardstick_tc081.rst index 793c3fdd5..df2192313 100644 --- a/docs/testing/user/userguide/opnfv_yardstick_tc081.rst +++ b/docs/testing/user/userguide/opnfv_yardstick_tc081.rst @@ -14,8 +14,8 @@ Yardstick Test Case Description TC081 |Network Latency | | | +--------------+--------------------------------------------------------------+ -|test case id | OPNFV_YARDSTICK_TC081_NETWORK_LATENCY_BETWEEN_CONTAINER_AND_ | -| | VM | +|test case id | OPNFV_YARDSTICK_TC081_NETWORK_LATENCY_BETWEEN_CONTAINER_AND | +| | _VM | | | | +--------------+--------------------------------------------------------------+ |metric | RTT (Round Trip Time) | diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc084.rst b/docs/testing/user/userguide/opnfv_yardstick_tc084.rst index 2e7b28e25..b3d44c4bf 100644 --- a/docs/testing/user/userguide/opnfv_yardstick_tc084.rst +++ b/docs/testing/user/userguide/opnfv_yardstick_tc084.rst @@ -92,18 +92,19 @@ Yardstick Test Case Description TC084 +--------------+--------------------------------------------------------------+ |pre-test | To run and install SPEC CPU 2006, the following are | |conditions | required: | -| | * For SPECint 2006: Both C99 and C++98 compilers are | -| | installed in VM images; | -| | * For SPECfp 2006: All three of C99, C++98 and Fortran-95 | -| | compilers installed in VM images; | -| | * At least 4GB of disk space availabile on VM. | -| | | -| | gcc 4.8.* and g++ 4.8.* version have been tested in Ubuntu | -| | 14.04, Ubuntu 16.04 and Redhat Enterprise Linux 7.4 image. | -| | Higher gcc and g++ version may cause compiling error. 
| -| | | -| | For more SPEC CPU 2006 dependencies please visit | -| | (https://www.spec.org/cpu2006/Docs/techsupport.html) | +| | | +| | * For SPECint 2006: Both C99 and C++98 compilers are | +| | installed in VM images; | +| | * For SPECfp 2006: All three of C99, C++98 and Fortran-95 | +| | compilers installed in VM images; | +| | * At least 4GB of disk space availabile on VM. | +| | | +| | gcc 4.8.* and g++ 4.8.* version have been tested in Ubuntu | +| | 14.04, Ubuntu 16.04 and Redhat Enterprise Linux 7.4 image. | +| | Higher gcc and g++ version may cause compiling error. | +| | | +| | For more SPEC CPU 2006 dependencies please visit | +| | (https://www.spec.org/cpu2006/Docs/techsupport.html) | | | | +--------------+--------------------------------------------------------------+ |test sequence | description and expected result | diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc087.rst b/docs/testing/user/userguide/opnfv_yardstick_tc087.rst index 99bfeebfc..c11252606 100644 --- a/docs/testing/user/userguide/opnfv_yardstick_tc087.rst +++ b/docs/testing/user/userguide/opnfv_yardstick_tc087.rst @@ -41,6 +41,7 @@ Yardstick Test Case Description TC087 +--------------+--------------------------------------------------------------+ |attackers | In this test case, an attacker called “kill-process” is | | | needed. This attacker includes three parameters: | +| | | | | 1. fault_type: which is used for finding the attacker's | | | scripts. It should be set to 'kill-process' in this test | | | | @@ -58,6 +59,7 @@ Yardstick Test Case Description TC087 |monitors | This test case utilizes two monitors of type "ip-status" | | | and one monitor of type "process" to track the following | | | conditions: | +| | | | | 1. "ping_same_network_l2": monitor ICMP traffic between | | | VMs in the same Neutron network | | | | @@ -74,11 +76,13 @@ Yardstick Test Case Description TC087 | | | +--------------+--------------------------------------------------------------+ |operations | In this test case, the following operations are needed: | +| | | | | 1. "nova-create-instance-in_network": create a VM instance | | | in one of the existing Neutron network. | | | | +--------------+--------------------------------------------------------------+ |metrics | In this test case, there are two metrics: | +| | | | | 1. process_recover_time: which indicates the maximun | | | time (seconds) from the process being killed to | | | recovered | @@ -95,7 +99,9 @@ Yardstick Test Case Description TC087 | | | +--------------+--------------------------------------------------------------+ |configuration | This test case needs two configuration files: | +| | | | | 1. test case file: opnfv_yardstick_tc087.yaml | +| | | | | - Attackers: see above “attackers” discription | | | - waiting_time: which is the time (seconds) from the | | | process being killed to stoping monitors the monitors | @@ -126,7 +132,7 @@ Yardstick Test Case Description TC087 | | Neutron network. | | | | | | 2. Check connectivity from one VM to an external host on | -| | the Internet to verify SNAT functionality. +| | the Internet to verify SNAT functionality. | | | | | | Result: The monitor info will be collected. | | | | @@ -171,11 +177,14 @@ Yardstick Test Case Description TC087 |test verdict | This test fails if the SLAs are not met or if there is a | | | test case execution problem. 
The SLAs are define as follows | | | for this test: | +| | | | | * SDN Controller recovery | +| | | | | * process_recover_time <= 30 sec | | | | | | * no impact on data plane connectivity during SDN | | | controller failure and recovery. | +| | | | | * packet_drop == 0 | | | | +--------------+--------------------------------------------------------------+ diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc092.rst b/docs/testing/user/userguide/opnfv_yardstick_tc092.rst index 895074a85..9c833fa23 100644 --- a/docs/testing/user/userguide/opnfv_yardstick_tc092.rst +++ b/docs/testing/user/userguide/opnfv_yardstick_tc092.rst @@ -43,6 +43,7 @@ Yardstick Test Case Description TC092 +--------------+--------------------------------------------------------------+ |attackers | In this test case, an attacker called “kill-process” is | | | needed. This attacker includes three parameters: | +| | | | | 1. ``fault_type``: which is used for finding the attacker's | | | scripts. It should be set to 'kill-process' in this test | | | | @@ -92,17 +93,20 @@ Yardstick Test Case Description TC092 | | | +--------------+--------------------------------------------------------------+ |configuration | This test case needs two configuration files: | -| | 1. test case file: opnfv_yardstick_tc092.yaml | -| | - Attackers: see above “attackers” discription | -| | - Monitors: see above “monitors” discription | -| | - waiting_time: which is the time (seconds) from the | -| | process being killed to stoping monitors the | -| | monitors | -| | - SLA: see above “metrics” discription | +| | 1. test case file: opnfv_yardstick_tc092.yaml | +| | | +| | - Attackers: see above “attackers” discription | +| | - Monitors: see above “monitors” discription | +| | | +| | - waiting_time: which is the time (seconds) from the | +| | process being killed to stoping monitors the | +| | monitors | | | | -| | 2. POD file: pod.yaml The POD configuration should record | -| | on pod.yaml first. the “host” item in this test case | -| | will use the node name in the pod.yaml. | +| | - SLA: see above “metrics” discription | +| | | +| | 2. POD file: pod.yaml The POD configuration should record | +| | on pod.yaml first. the “host” item in this test case | +| | will use the node name in the pod.yaml. | | | | +--------------+--------------------------------------------------------------+ |test sequence | Description and expected result | @@ -168,11 +172,12 @@ Yardstick Test Case Description TC092 | | | +--------------+--------------------------------------------------------------+ |step 8 | Start IP connectivity monitors for the new VM: | -| | 1. Check the L2 connectivity from the existing VMs to the | -| | new VM in the Neutron network. | | | | -| | 2. Check connectivity from one VM to an external host on | -| | the Internet to verify SNAT functionality. | +| | 1. Check the L2 connectivity from the existing VMs to the | +| | new VM in the Neutron network. | +| | | +| | 2. Check connectivity from one VM to an external host on | +| | the Internet to verify SNAT functionality. | | | | | | Result: The monitor info will be collected. 
| | | | diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc093.rst b/docs/testing/user/userguide/opnfv_yardstick_tc093.rst index 31fa5d3d3..4e22e8bf3 100644 --- a/docs/testing/user/userguide/opnfv_yardstick_tc093.rst +++ b/docs/testing/user/userguide/opnfv_yardstick_tc093.rst @@ -43,14 +43,15 @@ Yardstick Test Case Description TC093 +--------------+--------------------------------------------------------------+ |attackers | In this test case, two attackers called “kill-process” are | | | needed. These attackers include three parameters: | -| | 1. fault_type: which is used for finding the attacker's | -| | scripts. It should be set to 'kill-process' in this test | | | | -| | 2. process_name: should be set to the name of the Vswitch | -| | process | +| | 1. fault_type: which is used for finding the attacker's | +| | scripts. It should be set to 'kill-process' in this test | | | | -| | 3. host: which is the name of the compute node where the | -| | Vswitch process is running | +| | 2. process_name: should be set to the name of the Vswitch | +| | process | +| | | +| | 3. host: which is the name of the compute node where the | +| | Vswitch process is running | | | | | | e.g. -fault_type: "kill-process" | | | -process_name: "openvswitch" | @@ -60,16 +61,17 @@ Yardstick Test Case Description TC093 |monitors | This test case utilizes two monitors of type "ip-status" | | | and one monitor of type "process" to track the following | | | conditions: | -| | 1. "ping_same_network_l2": monitor ICMP traffic between | -| | VMs in the same Neutron network | | | | -| | 2. "ping_external_snat": monitor ICMP traffic from VMs to | -| | an external host on the Internet to verify SNAT | -| | functionality. | +| | 1. "ping_same_network_l2": monitor ICMP traffic between | +| | VMs in the same Neutron network | +| | | +| | 2. "ping_external_snat": monitor ICMP traffic from VMs to | +| | an external host on the Internet to verify SNAT | +| | functionality. | | | | -| | 3. "Vswitch process monitor": a monitor checking the | -| | state of the specified Vswitch process. It measures | -| | the recovery time of the given process. | +| | 3. "Vswitch process monitor": a monitor checking the | +| | state of the specified Vswitch process. It measures | +| | the recovery time of the given process. | | | | | | Monitors of type "ip-status" use the "ping" utility to | | | verify reachability of a given target IP. | @@ -99,6 +101,7 @@ Yardstick Test Case Description TC093 +--------------+--------------------------------------------------------------+ |configuration | This test case needs two configuration files: | | | 1. test case file: opnfv_yardstick_tc093.yaml | +| | | | | - Attackers: see above “attackers” description | | | - monitor_time: which is the time (seconds) from | | | starting to stoping the monitors | @@ -173,12 +176,14 @@ Yardstick Test Case Description TC093 |test verdict | This test fails if the SLAs are not met or if there is a | | | test case execution problem. The SLAs are define as follows | | | for this test: | -| | * SDN Vswitch recovery | -| | * process_recover_time <= 30 sec | +| | * SDN Vswitch recovery | +| | | +| | * process_recover_time <= 30 sec | +| | | +| | * no impact on data plane connectivity during SDN | +| | Vswitch failure and recovery. | | | | -| | * no impact on data plane connectivity during SDN | -| | Vswitch failure and recovery. 
| -| | * packet_drop == 0 | +| | * packet_drop == 0 | | | | +--------------+--------------------------------------------------------------+ diff --git a/docs/testing/user/userguide/references.rst b/docs/testing/user/userguide/references.rst index 3e18c96e9..e6bc719fd 100644 --- a/docs/testing/user/userguide/references.rst +++ b/docs/testing/user/userguide/references.rst @@ -11,12 +11,12 @@ References OPNFV ===== -* Parser wiki: https://wiki.opnfv.org/parser -* Pharos wiki: https://wiki.opnfv.org/pharos +* Parser wiki: https://wiki.opnfv.org/display/parser +* Pharos wiki: https://wiki.opnfv.org/display/pharos * Yardstick CI: https://build.opnfv.org/ci/view/yardstick/ * Yardstick and ETSI TST001 presentation: https://wiki.opnfv.org/display/yardstick/Yardstick?preview=%2F2925202%2F2925205%2Fopnfv_summit_-_bridging_opnfv_and_etsi.pdf * Yardstick Project presentation: https://wiki.opnfv.org/display/yardstick/Yardstick?preview=%2F2925202%2F2925208%2Fopnfv_summit_-_yardstick_project.pdf -* Yardstick wiki: https://wiki.opnfv.org/yardstick +* Yardstick wiki: https://wiki.opnfv.org/display/yardstick References used in Test Cases ============================= @@ -25,22 +25,22 @@ References used in Test Cases * cirros-image: https://download.cirros-cloud.net * cyclictest: https://rt.wiki.kernel.org/index.php/Cyclictest * DPDKpktgen: https://github.com/Pktgen/Pktgen-DPDK/ -* DPDK supported NICs: http://dpdk.org/doc/nics +* DPDK supported NICs: http://core.dpdk.org/supported/ * fdisk: http://www.tldp.org/HOWTO/Partition/fdisk_partitioning.html -* fio: http://www.bluestop.org/fio/HOWTO.txt +* fio: https://bluestop.org/files/fio/HOWTO.txt * free: http://manpages.ubuntu.com/manpages/trusty/en/man1/free.1.html * iperf3: https://iperf.fr/ -* iostat: http://linux.die.net/man/1/iostat +* iostat: https://linux.die.net/man/1/iostat * Lmbench man-pages: http://manpages.ubuntu.com/manpages/trusty/lat_mem_rd.8.html * Memory bandwidth man-pages: http://manpages.ubuntu.com/manpages/trusty/bw_mem.8.html * mpstat man-pages: http://manpages.ubuntu.com/manpages/trusty/man1/mpstat.1.html -* netperf: http://www.netperf.org/netperf/training/Netperf.html +* netperf: https://hewlettpackard.github.io/netperf/ * pktgen: https://www.kernel.org/doc/Documentation/networking/pktgen.txt * RAMspeed: http://alasir.com/software/ramspeed/ -* sar: http://linux.die.net/man/1/sar +* sar: https://linux.die.net/man/1/sar * SR-IOV: https://wiki.openstack.org/wiki/SR-IOV-Passthrough-For-Networking * Storperf: https://wiki.opnfv.org/display/storperf/Storperf -* unixbench: https://github.com/kdlucas/byte-unixbench/blob/master/UnixBench +* unixbench: https://github.com/kdlucas/byte-unixbench/tree/master/UnixBench Research @@ -53,7 +53,7 @@ Research Standards ========= -* ETSI NFV: http://www.etsi.org/technologies-clusters/technologies/nfv -* ETSI GS-NFV TST 001: http://www.etsi.org/deliver/etsi_gs/NFV-TST/001_099/001/01.01.01_60/gs_NFV-TST001v010101p.pdf +* ETSI NFV: https://www.etsi.org/technologies-clusters/technologies/nfv +* ETSI GS-NFV TST 001: https://www.etsi.org/deliver/etsi_gs/NFV-TST/001_099/001/01.01.01_60/gs_NFV-TST001v010101p.pdf * RFC2544: https://www.ietf.org/rfc/rfc2544.txt |