21 files changed, 1086 insertions, 284 deletions
diff --git a/docs/testing/developer/devguide/devguide.rst b/docs/testing/developer/devguide/devguide.rst index 76ed7c651..2065f6e0d 100755 --- a/docs/testing/developer/devguide/devguide.rst +++ b/docs/testing/developer/devguide/devguide.rst @@ -52,9 +52,9 @@ Where can I find some help to start? This guide is made for you. You can have a look at the `user guide`_. There are also references on documentation, video tutorials, tips in the -project `wiki page`_. You can also directly contact us by mail with [Yardstick] -prefix in the subject at opnfv-tech-discuss@lists.opnfv.org or on the IRC chan -#opnfv-yardstick. +project `wiki page`_. You can also directly contact us by mail with +``#yardstick`` or ``[yardstick]`` prefix in the subject at +``opnfv-tech-discuss@lists.opnfv.org`` or on the IRC channel ``#opnfv-yardstick``. Yardstick developer areas diff --git a/docs/testing/developer/devguide/devguide_nsb_prox.rst b/docs/testing/developer/devguide/devguide_nsb_prox.rst index 582668bc5..be2b5be61 100755 --- a/docs/testing/developer/devguide/devguide_nsb_prox.rst +++ b/docs/testing/developer/devguide/devguide_nsb_prox.rst @@ -13,7 +13,8 @@ optimal system architectures and configurations. Prerequisites ============= -In order to integrate PROX tests into NSB, the following prerequisites are required. +In order to integrate PROX tests into NSB, the following prerequisites are +required. .. _`dpdk wiki page`: https://www.dpdk.org/ .. _`yardstick wiki page`: https://wiki.opnfv.org/display/yardstick/ @@ -159,11 +160,13 @@ A NSB Prox test is composed of the following components :- ``tc_prox_heat_context_vpe-4.yaml``. This file describes the components of the test, in the case of openstack the network description and server descriptions, in the case of baremetal the hardware - description location. It also contains the name of the Traffic Generator, the SUT config file - and the traffic profile description, all described below. See nsb-test-description-label_ + description location. It also contains the name of the Traffic Generator, + the SUT config file and the traffic profile description, all described below. + See nsb-test-description-label_ -* Traffic Profile file. Example ``prox_binsearch.yaml``. This describes the packet size, tolerated - loss, initial line rate to start traffic at, test interval etc See nsb-traffic-profile-label_ +* Traffic Profile file. Example ``prox_binsearch.yaml``. This describes the + packet size, tolerated loss, initial line rate to start traffic at, test + interval etc See nsb-traffic-profile-label_ * Traffic Generator Config file. Usually called ``gen_<test>-<ports>.cfg``. @@ -235,7 +238,8 @@ show you how to understand the test description file. Now let's examine the components of the file in detail 1. ``traffic_profile`` - This specifies the traffic profile for the - test. In this case ``prox_binsearch.yaml`` is used. See nsb-traffic-profile-label_ + test. In this case ``prox_binsearch.yaml`` is used. See + nsb-traffic-profile-label_ 2. ``topology`` - This is either ``prox-tg-topology-1.yaml`` or ``prox-tg-topology-2.yaml`` or ``prox-tg-topology-4.yaml`` @@ -330,11 +334,11 @@ This describes the details of the traffic flow. In this case :alt: NSB PROX Traffic Profile -1. ``name`` - The name of the traffic profile. This name should match the name specified in the - ``traffic_profile`` field in the Test Description File. +1. ``name`` - The name of the traffic profile. 
This name should match the name + specified in the ``traffic_profile`` field in the Test Description File. -2. ``traffic_type`` - This specifies the type of traffic pattern generated, This name matches - class name of the traffic generator See:: +2. ``traffic_type`` - This specifies the type of traffic pattern generated, + This name matches class name of the traffic generator. See:: network_services/traffic_profile/prox_binsearch.py class ProxBinSearchProfile(ProxProfile) @@ -704,15 +708,22 @@ Now let's examine the components of the file in detail physical core improves performance, however sometimes it is optimal to move task to a separate core. This is best decided by checking performance. - c. ``mode=lat`` - Specifies the action carried out by this task on this core. Supported modes are: acl, - classify, drop, gredecap, greencap, ipv6_decap, ipv6_encap, l2fwd, lbnetwork, lbpos, lbqinq, nop, - police, qinqdecapv4, qinqencapv4, qos, routing, impair, lb5tuple, mirror, unmpls, tagmpls, - nat, decapnsh, encapnsh, gen, genl4 and lat. This task(0) per core(3) receives packets on port. - d. ``rx port=p0`` - The port to receive packets on ``Port 0``. Core 4 will receive packets on ``Port 1``. - e. ``lat pos=42`` - Describes where to put a 4-byte timestamp in the packet. Note that the packet length should - be longer than ``lat pos`` + 4 bytes to avoid truncation of the timestamp. It defines where the timestamp is - to be read from. Note that the SUT workload might cause the position of the timestamp to change - (i.e. due to encapsulation). + c. ``mode=lat`` - Specifies the action carried out by this task on this + core. + Supported modes are: ``acl``, ``classify``, ``drop``, ``gredecap``, + ``greencap``, ``ipv6_decap``, ``ipv6_encap``, ``l2fwd``, ``lbnetwork``, + ``lbpos``, ``lbqinq``, ``nop``, ``police``, ``qinqdecapv4``, + ``qinqencapv4``, ``qos``, ``routing``, ``impair``, ``lb5tuple``, + ``mirror``, ``unmpls``, ``tagmpls``, ``nat``, ``decapnsh``, ``encapnsh``, + ``gen``, ``genl4`` and ``lat``. This task(0) per core(3) receives packets + on port. + d. ``rx port=p0`` - The port to receive packets on ``Port 0``. Core 4 will + receive packets on ``Port 1``. + e. ``lat pos=42`` - Describes where to put a 4-byte timestamp in the packet. + Note that the packet length should be longer than ``lat pos`` + 4 bytes + to avoid truncation of the timestamp. It defines where the timestamp is + to be read from. Note that the SUT workload might cause the position of + the timestamp to change (i.e. due to encapsulation). .. _nsb-sut-generator-label: @@ -720,7 +731,8 @@ Now let's examine the components of the file in detail ------------------------------- This section will describes the SUT(VNF) config file. This is the same for both -baremetal and heat. See this example of ``handle_l2fwd_multiflow-2.cfg`` to explain the options. +baremetal and heat. See this example of ``handle_l2fwd_multiflow-2.cfg`` to +explain the options. .. image:: images/PROX_Handle_2port_cfg.png :width: 1400px @@ -730,13 +742,15 @@ See `prox options`_ for details Now let's examine the components of the file in detail -1. ``[eal options]`` - same as the Generator config file. This specified the EAL (Environmental Abstraction Layer) - options. These are default values and are not changed. - See `dpdk wiki page`_. +1. ``[eal options]`` - same as the Generator config file. This specified the + EAL (Environmental Abstraction Layer) options. These are default values and + are not changed. See `dpdk wiki page`_. -2. 
``[port 0]`` - This section describes the DPDK Port. The number following the keyword ``port`` usually refers to the DPDK Port Id. usually starting from ``0``. - Because you can have multiple ports this entry usually repeated. Eg. For a 2 port setup ``[port0]`` and ``[port 1]`` and for a 4 port setup ``[port 0]``, ``[port 1]``, - ``[port 2]`` and ``[port 3]``:: +2. ``[port 0]`` - This section describes the DPDK Port. The number following + the keyword ``port`` usually refers to the DPDK Port Id. usually starting + from ``0``. Because you can have multiple ports this entry usually + repeated. E.g. For a 2 port setup ``[port0]`` and ``[port 1]`` and for a 4 + port setup ``[port 0]``, ``[port 1]``, ``[port 2]`` and ``[port 3]``:: [port 0] name=if0 @@ -745,10 +759,14 @@ Now let's examine the components of the file in detail tx desc=2048 promiscuous=yes - a. In this example ``name =if0`` assigned the name ``if0`` to the port. Any name can be assigned to a port. - b. ``mac=hardware`` sets the MAC address assigned by the hardware to data from this port. - c. ``rx desc=2048`` sets the number of available descriptors to allocate for receive packets. This can be changed and can effect performance. - d. ``tx desc=2048`` sets the number of available descriptors to allocate for transmit packets. This can be changed and can effect performance. + a. In this example ``name =if0`` assigned the name ``if0`` to the port. Any + name can be assigned to a port. + b. ``mac=hardware`` sets the MAC address assigned by the hardware to data + from this port. + c. ``rx desc=2048`` sets the number of available descriptors to allocate + for receive packets. This can be changed and can effect performance. + d. ``tx desc=2048`` sets the number of available descriptors to allocate + for transmit packets. This can be changed and can effect performance. e. ``promiscuous=yes`` this enables promiscuous mode for this port. 3. ``[defaults]`` - Here default operations and settings can be over written.:: @@ -757,35 +775,46 @@ Now let's examine the components of the file in detail mempool size=8K memcache size=512 - a. In this example ``mempool size=8K`` the number of mbufs per task is altered. Altering this value could effect performance. See `prox options`_ for details. - b. ``memcache size=512`` - number of mbufs cached per core, default is 256 this is the cache_size. Altering this value could effect performance. + a. In this example ``mempool size=8K`` the number of mbufs per task is + altered. Altering this value could effect performance. See + `prox options`_ for details. + b. ``memcache size=512`` - number of mbufs cached per core, default is 256 + this is the cache_size. Altering this value could affect performance. -4. ``[global]`` - Here application wide setting are supported. Things like application name, start time, duration and memory configurations can be set here. +4. ``[global]`` - Here application wide setting are supported. Things like + application name, start time, duration and memory configurations can be set + here. In this example.:: [global] start time=5 name=Basic Gen - a. ``start time=5`` Time is seconds after which average stats will be started. + a. ``start time=5`` Time is seconds after which average stats will be + started. b. ``name=Handle L2FWD Multiflow (2x)`` Name of the configuration. -5. ``[core 0]`` - This core is designated the master core. Every Prox application must have a master core. The master mode must be assigned to +5. ``[core 0]`` - This core is designated the master core. 
Every Prox + application must have a master core. The master mode must be assigned to exactly one task, running alone on one core.:: [core 0] mode=master -6. ``[core 1]`` - This describes the activity on core 1. Cores can be configured by means of a set of [core #] sections, where # represents either: +6. ``[core 1]`` - This describes the activity on core 1. Cores can be + configured by means of a set of [core #] sections, where # represents + either: - a. an absolute core number: e.g. on a 10-core, dual socket system with hyper-threading, - cores are numbered from 0 to 39. + a. an absolute core number: e.g. on a 10-core, dual socket system with + hyper-threading, cores are numbered from 0 to 39. - b. PROX allows a core to be identified by a core number, the letter 's', and a socket number. - However NSB PROX is hardware agnostic (physical and virtual configurations are the same) it - is advisable no to use physical core numbering. + b. PROX allows a core to be identified by a core number, the letter 's', + and a socket number. However NSB PROX is hardware agnostic (physical and + virtual configurations are the same) it is advisable no to use physical + core numbering. - Each core can be assigned with a set of tasks, each running one of the implemented packet processing modes.:: + Each core can be assigned with a set of tasks, each running one of the + implemented packet processing modes.:: [core 1] name=none @@ -796,20 +825,33 @@ Now let's examine the components of the file in detail tx port=if1 a. ``name=none`` - No name assigned to the core. - b. ``task=0`` - Each core can run a set of tasks. Starting with ``0``. Task 1 can be defined later in this core or - can be defined in another ``[core 1]`` section with ``task=1`` later in configuration file. Sometimes running - multiple task related to the same packet on the same physical core improves performance, however sometimes it - is optimal to move task to a separate core. This is best decided by checking performance. - c. ``mode=l2fwd`` - Specifies the action carried out by this task on this core. Supported modes are: acl, - classify, drop, gredecap, greencap, ipv6_decap, ipv6_encap, l2fwd, lbnetwork, lbpos, lbqinq, nop, - police, qinqdecapv4, qinqencapv4, qos, routing, impair, lb5tuple, mirror, unmpls, tagmpls, - nat, decapnsh, encapnsh, gen, genl4 and lat. This code does ``l2fwd`` .. ie it does the L2FWD. - - d. ``dst mac=@@tester_mac1`` - The destination mac address of the packet will be set to the MAC address of ``Port 1`` of destination device. (The Traffic Generator/Verifier) - e. ``rx port=if0`` - This specifies that the packets are received from ``Port 0`` called if0 - f. ``tx port=if1`` - This specifies that the packets are transmitted to ``Port 1`` called if1 - - If this example we receive a packet on core on a port, carry out operation on the packet on the core and transmit it on on another port still using the same task on the same core. + b. ``task=0`` - Each core can run a set of tasks. Starting with ``0``. + Task 1 can be defined later in this core or can be defined in another + ``[core 1]`` section with ``task=1`` later in configuration file. + Sometimes running multiple task related to the same packet on the same + physical core improves performance, however sometimes it is optimal to + move task to a separate core. This is best decided by checking + performance. + c. ``mode=l2fwd`` - Specifies the action carried out by this task on this + core. 
Supported modes are: ``acl``, ``classify``, ``drop``, + ``gredecap``, ``greencap``, ``ipv6_decap``, ``ipv6_encap``, ``l2fwd``, + ``lbnetwork``, ``lbpos``, ``lbqinq``, ``nop``, ``police``, + ``qinqdecapv4``, ``qinqencapv4``, ``qos``, ``routing``, ``impair``, + ``lb5tuple``, ``mirror``, ``unmpls``, ``tagmpls``, ``nat``, + ``decapnsh``, ``encapnsh``, ``gen``, ``genl4`` and ``lat``. This code + does ``l2fwd``. i.e. it does the L2FWD. + + d. ``dst mac=@@tester_mac1`` - The destination mac address of the packet + will be set to the MAC address of ``Port 1`` of destination device. + (The Traffic Generator/Verifier) + e. ``rx port=if0`` - This specifies that the packets are received from + ``Port 0`` called if0 + f. ``tx port=if1`` - This specifies that the packets are transmitted to + ``Port 1`` called if1 + + In this example we receive a packet on core on a port, carry out operation + on the packet on the core and transmit it on on another port still using + the same task on the same core. On some implementation you may wish to use multiple tasks, like this.:: @@ -829,15 +871,22 @@ Now let's examine the components of the file in detail tx port=if0 drop=no - In this example you can see Core 1/Task 0 called ``rx_task`` receives the packet from if0 and perform the l2fwd. However instead of sending the packet to a - port it sends it to a core see ``tx cores=1t1``. In this case it sends it to Core 1/Task 1. + In this example you can see Core 1/Task 0 called ``rx_task`` receives the + packet from if0 and perform the l2fwd. However instead of sending the + packet to a port it sends it to a core see ``tx cores=1t1``. In this case it + sends it to Core 1/Task 1. - Core 1/Task 1 called ``l2fwd_if0``, receives the packet, not from a port but from the ring. See ``rx ring=yes``. It does not perform any operation on the packet See ``mode=none`` - and sends the packets to ``if0`` see ``tx port=if0``. + Core 1/Task 1 called ``l2fwd_if0``, receives the packet, not from a port but + from the ring. See ``rx ring=yes``. It does not perform any operation on the + packet See ``mode=none`` and sends the packets to ``if0`` see + ``tx port=if0``. - It is also possible to implement more complex operations be chaining multiple operations in sequence and using rings to pass packets from one core to another. + It is also possible to implement more complex operations by chaining + multiple operations in sequence and using rings to pass packets from one + core to another. - In thus example we show a Broadband Network Gateway (BNG) with Quality of Service (QoS). Communication from task to task is via rings. + In this example, we show a Broadband Network Gateway (BNG) with Quality of + Service (QoS). Communication from task to task is via rings. .. image:: images/PROX_BNG_QOS.png :width: 1000px @@ -848,26 +897,36 @@ Now let's examine the components of the file in detail .. _baremetal-config-label: -This is required for baremetal testing. It describes the IP address of the various ports, the Network devices drivers and MAC addresses and the network +This is required for baremetal testing. It describes the IP address of the +various ports, the Network devices drivers and MAC addresses and the network configuration. -In this example we will describe a 2 port configuration. This file is the same for all 2 port NSB Prox tests on the same platforms/configuration. +In this example we will describe a 2 port configuration. This file is the same +for all 2 port NSB Prox tests on the same platforms/configuration. .. 
image:: images/PROX_Baremetal_config.png :width: 1000px :alt: NSB PROX Yardstick Config -Now lets describe the sections of the file. - - 1. ``TrafficGen`` - This section describes the Traffic Generator node of the test configuration. The name of the node ``trafficgen_1`` must match the node name - in the ``Test Description File for Baremetal`` mentioned earlier. The password attribute of the test needs to be configured. All other parameters - can remain as default settings. - 2. ``interfaces`` - This defines the DPDK interfaces on the Traffic Generator. - 3. ``xe0`` is DPDK Port 0. ``lspci`` and `` ./dpdk-devbind.py -s`` can be used to provide the interface information. ``netmask`` and ``local_ip`` should not be changed - 4. ``xe1`` is DPDK Port 1. If more than 2 ports are required then ``xe1`` section needs to be repeated and modified accordingly. - 5. ``vnf`` - This section describes the SUT of the test configuration. The name of the node ``vnf`` must match the node name in the - ``Test Description File for Baremetal`` mentioned earlier. The password attribute of the test needs to be configured. All other parameters - can remain as default settings +Now let's describe the sections of the file. + + 1. ``TrafficGen`` - This section describes the Traffic Generator node of the + test configuration. The name of the node ``trafficgen_1`` must match the + node name in the ``Test Description File for Baremetal`` mentioned + earlier. The password attribute of the test needs to be configured. All + other parameters can remain as default settings. + 2. ``interfaces`` - This defines the DPDK interfaces on the Traffic + Generator. + 3. ``xe0`` is DPDK Port 0. ``lspci`` and ``./dpdk-devbind.py -s`` can be used + to provide the interface information. ``netmask`` and ``local_ip`` should + not be changed + 4. ``xe1`` is DPDK Port 1. If more than 2 ports are required then ``xe1`` + section needs to be repeated and modified accordingly. + 5. ``vnf`` - This section describes the SUT of the test configuration. The + name of the node ``vnf`` must match the node name in the + ``Test Description File for Baremetal`` mentioned earlier. The password + attribute of the test needs to be configured. All other parameters can + remain as default settings 6. ``interfaces`` - This defines the DPDK interfaces on the SUT 7. ``xe0`` - Same as 3 but for the ``SUT``. 8. ``xe1`` - Same as 4 but for the ``SUT`` also. @@ -877,11 +936,13 @@ Now lets describe the sections of the file. *Grafana Dashboard* ------------------- -The grafana dashboard visually displays the results of the tests. The steps required to produce a grafana dashboard are described here. +The grafana dashboard visually displays the results of the tests. The steps +required to produce a grafana dashboard are described here. .. _yardstick-config-label: - a. Configure ``yardstick`` to use influxDB to store test results. See file ``/etc/yardstick/yardstick.conf``. + a. Configure ``yardstick`` to use influxDB to store test results. See file + ``/etc/yardstick/yardstick.conf``. .. image:: images/PROX_Yardstick_config.png :width: 1000px @@ -890,10 +951,12 @@ The grafana dashboard visually displays the results of the tests. The steps requ 1. Specify the dispatcher to use influxDB to store results. 2. "target = .. " - Specify location of influxDB to store results. "db_name = yardstick" - name of database. Do not change - "username = root" - username to use to store result. (Many tests are run as root) + "username = root" - username to use to store result. 
(Many tests are + run as root) "password = ... " - Please set to root user password - b. Deploy InfludDB & Grafana. See how to Deploy InfluxDB & Grafana. See `grafana deployment`_. + b. Deploy InfludDB & Grafana. See how to Deploy InfluxDB & Grafana. See + `grafana deployment`_. c. Generate the test data. Run the tests as follows .:: yardstick --debug task start tc_prox_<context>_<test>-ports.yaml @@ -910,7 +973,8 @@ How to run NSB Prox Test on an baremetal environment In order to run the NSB PROX test. - 1. Install NSB on Traffic Generator node and Prox in SUT. See `NSB Installation`_ + 1. Install NSB on Traffic Generator node and Prox in SUT. See + `NSB Installation`_ 2. To enter container:: @@ -922,8 +986,8 @@ In order to run the NSB PROX test. cd /home/opnfv/repos/yardstick/samples/vnf_samples/nsut/prox - b. Install prox-baremetal-2.yam and prox-baremetal-4.yaml for that topology - into this directory as per baremetal-config-label_ + b. Install prox-baremetal-2.yam and prox-baremetal-4.yaml for that + topology into this directory as per baremetal-config-label_ c. Install and configure ``yardstick.conf`` :: @@ -971,7 +1035,8 @@ Here is a list of frequently asked questions. *NSB Prox does not work on Baremetal, How do I resolve this?* ------------------------------------------------------------- -If PROX NSB does not work on baremetal, problem is either in network configuration or test file. +If PROX NSB does not work on baremetal, problem is either in network +configuration or test file. *Solution* @@ -1011,8 +1076,8 @@ If PROX NSB does not work on baremetal, problem is either in network configurati See ``Link detected`` if ``yes`` .... Cable is good. If ``no`` you have an issue with your cable/port. -2. If existing baremetal works then issue is with your test. Check the traffic generator gen_<test>-<ports>.cfg to ensure - it is producing a valid packet. +2. If existing baremetal works then issue is with your test. Check the traffic + generator gen_<test>-<ports>.cfg to ensure it is producing a valid packet. *How do I debug NSB Prox on Baremetal?* --------------------------------------- @@ -1033,7 +1098,8 @@ If PROX NSB does not work on baremetal, problem is either in network configurati cd /opt/nsb_bin/prox -f /tmp/handle_<test>-<ports>.cfg -4. Now let's examine the Generator Output. In this case the output of gen_l2fwd-4.cfg. +4. Now let's examine the Generator Output. In this case the output of + ``gen_l2fwd-4.cfg``. .. image:: images/PROX_Gen_GUI.png :width: 1000px @@ -1048,10 +1114,12 @@ If PROX NSB does not work on baremetal, problem is either in network configurati It appears what is transmitted is received. .. Caution:: - The number of packets MAY not exactly match because the ports are read in sequence. + The number of packets MAY not exactly match because the ports are read in + sequence. .. Caution:: - What is transmitted on PORT X may not always be received on same port. Please check the Test scenario. + What is transmitted on PORT X may not always be received on same port. + Please check the Test scenario. 5. Now lets examine the SUT Output @@ -1083,17 +1151,18 @@ If PROX NSB does not work on baremetal, problem is either in network configurati *NSB Prox works on Baremetal but not in Openstack. How do I resolve this?* -------------------------------------------------------------------------- -NSB Prox on Baremetal is a lot more forgiving than NSB Prox on Openstack. A badly -formed packed may still work with PROX on Baremetal. 
However on +NSB Prox on Baremetal is a lot more forgiving than NSB Prox on Openstack. A +badly formed packed may still work with PROX on Baremetal. However on Openstack the packet must be correct and all fields of the header correct. -Eg A packet with an invalid Protocol ID would still work in Baremetal -but this packet would be rejected by openstack. +E.g. A packet with an invalid Protocol ID would still work in Baremetal but +this packet would be rejected by openstack. *Solution* 1. Check the validity of the packet. 2. Use a known good packet in your test - 3. If using ``Random`` fields in the traffic generator, disable them and retry. + 3. If using ``Random`` fields in the traffic generator, disable them and + retry. *How do I debug NSB Prox on Openstack?* @@ -1111,7 +1180,8 @@ but this packet would be rejected by openstack. 3. Install openstack credentials. - Depending on your openstack deployment, the location of these credentials may vary. + Depending on your openstack deployment, the location of these credentials + may vary. On this platform I do this via:: scp root@10.237.222.55:/etc/kolla/admin-openrc.sh . @@ -1127,8 +1197,8 @@ but this packet would be rejected by openstack. b. Get the Floating IP of the Traffic Generator & SUT - This generates a lot of information. Please not the floating IP of the VNF and - the Traffic Generator. + This generates a lot of information. Please note the floating IP of the + VNF and the Traffic Generator. .. image:: images/PROX_Openstack_stack_show_a.png :width: 1000px @@ -1215,7 +1285,8 @@ If it fails due to :: Missing value auth-url required for auth plugin password -Check your shell environment for Openstack variables. One of them should contain the authentication URL :: +Check your shell environment for Openstack variables. One of them should +contain the authentication URL :: OS_AUTH_URL=``https://192.168.72.41:5000/v3`` @@ -1239,16 +1310,16 @@ Result :: and visible. -If the Openstack Cli appears to hang, then verify the proxys and no_proxy are set correctly. -They should be similar to :: +If the Openstack ClI appears to hang, then verify the proxys and ``no_proxy`` +are set correctly. They should be similar to :: - FTP_PROXY="http://proxy.ir.intel.com:911/" - HTTPS_PROXY="http://proxy.ir.intel.com:911/" - HTTP_PROXY="http://proxy.ir.intel.com:911/" + FTP_PROXY="http://<your_proxy>:<port>/" + HTTPS_PROXY="http://<your_proxy>:<port>/" + HTTP_PROXY="http://<your_proxy>:<port>/" NO_PROXY="localhost,127.0.0.1,10.237.222.55,10.237.223.80,10.237.222.134,.ir.intel.com" - ftp_proxy="http://proxy.ir.intel.com:911/" - http_proxy="http://proxy.ir.intel.com:911/" - https_proxy="http://proxy.ir.intel.com:911/" + ftp_proxy="http://<your_proxy>:<port>/" + http_proxy="http://<your_proxy>:<port>/" + https_proxy="http://<your_proxy>:<port>/" no_proxy="localhost,127.0.0.1,10.237.222.55,10.237.223.80,10.237.222.134,.ir.intel.com" Where @@ -1256,8 +1327,6 @@ Where 1) 10.237.222.55 = IP Address of deployment node 2) 10.237.223.80 = IP Address of Controller node 3) 10.237.222.134 = IP Address of Compute Node - 4) ir.intel.com = local no proxy - *How to Understand the Grafana output?* --------------------------------------- @@ -1280,48 +1349,48 @@ Where A. Test Parameters - Test interval, Duartion, Tolerated Loss and Test Precision -B. Overall No of packets send and received during test +B. No. of packets send and received during test C. Generator Stats - packets sent, received and attempted by Generator -D. Packets Size - -E. No of packets received by SUT - -F. 
No of packets forwarded by SUT - -G. This is the number of packets sent by the generator per port, for each interval. +D. Packet size -H. This is the number of packets received by the generator per port, for each interval. +E. No. of packets received by SUT -I. This is the number of packets send and received by the generator and lost by the SUT - that meet the success criteria +F. No. of packets forwarded by SUT -J. This is the changes the Percentage of Line Rate used over a test, The MAX and the - MIN should converge to within the interval specified as the ``test-precision``. +G. No. of packets sent by the generator per port, for each interval. -K. This is the packets Size supported during test. If "N/A" appears in any field the result has not been decided. +H. No. of packets received by the generator per port, for each interval. -L. This is the calculated throughput in MPPS(Million Packets Per second) for this line rate. +I. No. of packets sent and received by the generator and lost by the SUT that + meet the success criteria -M. This is the actual No, of packets sent by the generator in MPPS +J. The change in the Percentage of Line Rate used over a test, The MAX and the + MIN should converge to within the interval specified as the + ``test-precision``. -N. This is the actual No. of packets received by the generator in MPPS +K. Packet size supported during test. If *N/A* appears in any field the + result has not been decided. -O. This is the total No. of packets sent by SUT. +L. Calculated throughput in MPPS (Million Packets Per second) for this line + rate. -P. This is the total No. of packets received by the SUT +M. No. of packets sent by the generator in MPPS -Q. This is the total No. of packets dropped. (These packets were sent by the generator but not - received back by the generator, these may be dropped by the SUT or the Generator) +N. No. of packets received by the generator in MPPS -R. This is the tolerated no of packets that can be dropped. +O. No. of packets sent by SUT. -S. This is the test Throughput in Gbps +P. No. of packets received by the SUT -T. This is the Latencey per Port +Q. Total no. of dropped packets -- Packets sent but not received back by the + generator, these may be dropped by the SUT or the generator. -U. This is the CPU Utilization +R. The tolerated no. of dropped packets. +S. Test throughput in Gbps +T. Latencey per Port +U. CPU Utilization diff --git a/docs/testing/user/userguide/01-introduction.rst b/docs/testing/user/userguide/01-introduction.rst index 74e752d63..5fc2e8d0f 100755 --- a/docs/testing/user/userguide/01-introduction.rst +++ b/docs/testing/user/userguide/01-introduction.rst @@ -83,4 +83,4 @@ Contact Yardstick Feedback? `Contact us`_ -.. _Contact us: mailto:opnfv-users@lists.opnfv.org&subject="[yardstick]" +.. _Contact us: mailto:opnfv-users@lists.opnfv.org&subject="#yardstick" diff --git a/docs/testing/user/userguide/03-architecture.rst b/docs/testing/user/userguide/03-architecture.rst index 886631510..62250d6a3 100755 --- a/docs/testing/user/userguide/03-architecture.rst +++ b/docs/testing/user/userguide/03-architecture.rst @@ -243,26 +243,27 @@ Yardstick Directory structure with support for different installers. *docs/* - All documentation is stored here, such as configuration guides, - user guides and Yardstick descriptions. + user guides and Yardstick test case descriptions. *etc/* - Used for test cases requiring specific POD configurations. 
*samples/* - test case samples are stored here, most of all scenario and - feature's samples are shown in this directory. + feature samples are shown in this directory. -*tests/* - Here both Yardstick internal tests (*functional/* and *unit/*) as - well as the test cases run to verify the NFVI (*opnfv/*) are stored. - Also configurations of what to run daily and weekly at the different - PODs is located here. +*tests/* - The test cases run to verify the NFVI (*opnfv/*) are stored here. + The configurations of what to run daily and weekly at the different + PODs are also located here. -*tools/* - Currently contains tools to build image for VMs which are deployed - by Heat. Currently contains how to build the yardstick-trusty-server - image with the different tools that are needed from within the - image. +*tools/* - Contains tools to build image for VMs which are deployed by Heat. + Currently contains how to build the yardstick-image with the + different tools that are needed from within the image. *plugin/* - Plug-in configuration files are stored here. -*yardstick/* - Contains the internals of Yardstick: Runners, Scenario, Contexts, - CLI parsing, keys, plotting tools, dispatcher, plugin +*yardstick/* - Contains the internals of Yardstick: :term:`Runners <runner>`, + :term:`Scenarios <scenario>`, :term:`Contexts <context>`, CLI + parsing, keys, plotting tools, dispatcher, plugin install/remove scripts and so on. +*yardstick/tests* - The Yardstick internal tests (*functional/* and *unit/*) + are stored here. diff --git a/docs/testing/user/userguide/04-installation.rst b/docs/testing/user/userguide/04-installation.rst index 6b3259299..2f8175c25 100644 --- a/docs/testing/user/userguide/04-installation.rst +++ b/docs/testing/user/userguide/04-installation.rst @@ -575,17 +575,17 @@ Grafana to display data in the following sections. Automatic deployment of InfluxDB and Grafana containers (**recommended**) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ -Firstly, enter the Yardstick container:: +1. Enter the Yardstick container:: - sudo -EH docker exec -it yardstick /bin/bash + sudo -EH docker exec -it yardstick /bin/bash -Secondly, create InfluxDB container and configure with the following command:: +2. Create InfluxDB container and configure with the following command:: - yardstick env influxdb + yardstick env influxdb -Thirdly, create and configure Grafana container:: +3. Create and configure Grafana container:: - yardstick env grafana + yardstick env grafana Then you can run a test case and visit http://host_ip:1948 (``admin``/``admin``) to see the results. @@ -613,21 +613,21 @@ Run influxDB:: sudo -EH docker run -d --name influxdb \ -p 8083:8083 -p 8086:8086 --expose 8090 --expose 8099 \ tutum/influxdb - docker exec -it influxdb bash + docker exec -it influxdb influx Configure influxDB:: - influx - >CREATE USER root WITH PASSWORD 'root' WITH ALL PRIVILEGES - >CREATE DATABASE yardstick; - >use yardstick; - >show MEASUREMENTS; + > CREATE USER root WITH PASSWORD 'root' WITH ALL PRIVILEGES + > CREATE DATABASE yardstick; + > use yardstick; + > show MEASUREMENTS; + > quit Run Grafana:: sudo -EH docker run -d --name grafana -p 1948:3000 grafana/grafana -Log on http://{YOUR_IP_HERE}:1948 using ``admin``/``admin`` and configure +Log on to ``http://{YOUR_IP_HERE}:1948`` using ``admin``/``admin`` and configure database resource to be ``{YOUR_IP_HERE}:8086``. .. 
image:: images/Grafana_config.png @@ -640,7 +640,7 @@ Configure ``yardstick.conf``:: sudo cp etc/yardstick/yardstick.conf.sample /etc/yardstick/yardstick.conf sudo vi /etc/yardstick/yardstick.conf -Modify ``yardstick.conf``:: +Modify ``yardstick.conf`` to add the ``influxdb`` dispatcher:: [DEFAULT] debug = True @@ -653,7 +653,7 @@ Modify ``yardstick.conf``:: username = root password = root -Now you can run Yardstick test cases and store the results in influxDB. +Now Yardstick will store results in InfluxDB when you run a testcase. Deploy InfluxDB and Grafana directly in Ubuntu (**Todo**) diff --git a/docs/testing/user/userguide/13-nsb-installation.rst b/docs/testing/user/userguide/13-nsb-installation.rst index 363ad4852..973d56628 100644 --- a/docs/testing/user/userguide/13-nsb-installation.rst +++ b/docs/testing/user/userguide/13-nsb-installation.rst @@ -1,7 +1,7 @@ .. This work is licensed under a Creative Commons Attribution 4.0 International .. License. .. http://creativecommons.org/licenses/by/4.0 -.. (c) OPNFV, 2016-2017 Intel Corporation. +.. (c) OPNFV, 2016-2018 Intel Corporation. .. Convention for heading levels in Yardstick documentation: @@ -936,7 +936,7 @@ Setup system proxy (if needed). Add the following configuration into the ``/etc/environment`` file: .. note:: The proxy server name/port and IPs should be changed according to - actuall/current proxy configuration in the lab. + actual/current proxy configuration in the lab. .. code:: bash @@ -1192,3 +1192,52 @@ installed as part of the requirements of the project. 3. Execute testcase in samplevnf folder e.g. ``<repo>/samples/vnf_samples/nsut/vfw/tc_baremetal_rfc2544_ipv4_1rule_1flow_64B_ixia.yaml`` + +Spirent Landslide +----------------- + +In order to use Spirent Landslide for vEPC testcases, some dependencies have +to be preinstalled and properly configured. + +- Java + + 32-bit Java installation is required for the Spirent Landslide TCL API. + + | ``$ sudo apt-get install openjdk-8-jdk:i386`` + + .. important:: + Make sure ``LD_LIBRARY_PATH`` is pointing to 32-bit JRE. For more details + check `Linux Troubleshooting <http://TAS_HOST_IP/tclapiinstall.html#trouble>` + section of installation instructions. + +- LsApi (Tcl API module) + + Follow Landslide documentation for detailed instructions on Linux + installation of Tcl API and its dependencies + ``http://TAS_HOST_IP/tclapiinstall.html``. + For working with LsApi Python wrapper only steps 1-5 are required. + + .. note:: After installation make sure your API home path is included in + ``PYTHONPATH`` environment variable. + + .. important:: + The current version of LsApi module has an issue with reading LD_LIBRARY_PATH. + For LsApi module to initialize correctly following lines (184-186) in + lsapi.py + + .. code-block:: python + + ldpath = os.environ.get('LD_LIBRARY_PATH', '') + if ldpath == '': + environ['LD_LIBRARY_PATH'] = environ['LD_LIBRARY_PATH'] + ':' + ldpath + + should be changed to: + + .. code-block:: python + + ldpath = os.environ.get('LD_LIBRARY_PATH', '') + if not ldpath == '': + environ['LD_LIBRARY_PATH'] = environ['LD_LIBRARY_PATH'] + ':' + ldpath + +.. note:: The Spirent landslide TCL software package needs to be updated in case + the user upgrades to a new version of Spirent landslide software. 
diff --git a/docs/testing/user/userguide/14-nsb-operation.rst b/docs/testing/user/userguide/14-nsb-operation.rst index b4adf7855..c96155804 100644 --- a/docs/testing/user/userguide/14-nsb-operation.rst +++ b/docs/testing/user/userguide/14-nsb-operation.rst @@ -1,7 +1,7 @@ .. This work is licensed under a Creative Commons Attribution 4.0 International .. License. .. http://creativecommons.org/licenses/by/4.0 -.. (c) OPNFV, 2016-2017 Intel Corporation. +.. (c) OPNFV, 2016-2018 Intel Corporation. Yardstick - NSB Testing - Operation =================================== @@ -459,3 +459,108 @@ Sample test case file .. literalinclude:: /../samples/vnf_samples/nsut/acl/tc_ovs_rfc2544_ipv4_1rule_1flow_64B_trex.yaml :language: yaml + +Preparing test run of vEPC test case +------------------------------------ + +Provided vEPC test cases are examples of emulation of vEPC infrastructure +components, such as UE, eNodeB, MME, SGW, PGW. + +Location of vEPC test cases: ``samples/vnf_samples/nsut/vepc/``. + +Before running a specific vEPC test case using NSB, some preconfiguration +needs to be done. + +Update Spirent Landslide TG configuration in pod file +===================================================== + +Examples of ``pod.yaml`` files could be found in +:file:`etc/yardstick/nodes/standalone`. +The name of related pod file could be checked in the context section of NSB +test case. + +The ``pod.yaml`` related to vEPC test case uses some sub-structures that hold the +details of accessing the Spirent Landslide traffic generator. +These subsections and the changes to be done in provided example pod file are +described below. + +1. ``tas_manager``: data under this key holds the information required to +access Landslide TAS (Test Administration Server) and perform needed +configurations on it. + + * ``ip``: IP address of TAS Manager node; should be updated according to test + setup used + * ``super_user``: superuser name; could be retrieved from Landslide documentation + * ``super_user_password``: superuser password; could be retrieved from + Landslide documentation + * ``cfguser_password``: password of predefined user named 'cfguser'; default + password could be retrieved from Landslide documentation + * ``test_user``: username to be used during test run as a Landslide library + name; to be defined by test run operator + * ``test_user_password``: password of test user; to be defined by test run + operator + * ``proto``: *http* or *https*; to be defined by test run operator + * ``license``: Landslide license number installed on TAS + +2. The ``config`` section holds information about test servers (TSs) and +systems under test (SUTs). Data is represented as a list of entries. +Each such entry contains: + + * ``test_server``: this subsection represents data related to test server + configuration, such as: + + * ``name``: test server name; unique custom name to be defined by test + operator + * ``role``: this value is used as a key to bind specific Test Server and + TestCase; should be set to one of test types supported by TAS license + * ``ip``: Test Server IP address + * ``thread_model``: parameter related to Test Server performance mode. + The value should be one of the following: "Legacy" | "Max" | "Fireball". + Refer to Landslide documentation for details. + * ``phySubnets``: a structure used to specify IP ranges reservations on + specific network interfaces of related Test Server. 
Structure fields are: + + * ``base``: start of IP address range + * ``mask``: IP range mask in CIDR format + * ``name``: network interface name, e.g. *eth1* + * ``numIps``: size of IP address range + + * ``preResolvedArpAddress``: a structure used to specify the range of IP + addresses for which the ARP responses will be emulated + + * ``StartingAddress``: IP address specifying the start of IP address range + * ``NumNodes``: size of the IP address range + + * ``suts``: a structure that contains definitions of each specific SUT + (represents a vEPC component). SUT structure contains following key/value + pairs: + + * ``name``: unique custom string specifying SUT name + * ``role``: string value corresponding with an SUT role specified in the + session profile (test session template) file + * ``managementIp``: SUT management IP adress + * ``phy``: network interface name, e.g. *eth1* + * ``ip``: vEPC component IP address used in test case topology + * ``nextHop``: next hop IP address, to allow for vEPC inter-node communication + +Update NSB test case definitions +================================ +NSB test case file designated for vEPC testing contains an example of specific +test scenario configuration. +Test operator may change these definitions as required for the use case that +requires testing. +Specifically, following subsections of the vEPC test case (section **scenarios**) +may be changed. + +1. Subsection ``options``: contains custom parameters used for vEPC testing + + * subsection ``dmf``: may contain one or more parameters specified in + ``traffic_profile`` template file + * subsection ``test_cases``: contains re-definitions of parameters specified + in ``session_profile`` template file + + .. note:: All parameters in ``session_profile``, value of which is a + placeholder, needs to be re-defined to construct a valid test session. + +2. Subsection ``runner``: specifies the test duration and the interval of +TG and VNF side KPIs polling. For more details, refer to :doc:`03-architecture`. diff --git a/samples/vnf_samples/nsut/agnostic/HTTP_requests_concurrency.yaml b/samples/vnf_samples/nsut/agnostic/HTTP_requests_concurrency.yaml new file mode 100755 index 000000000..1e9b1e8a0 --- /dev/null +++ b/samples/vnf_samples/nsut/agnostic/HTTP_requests_concurrency.yaml @@ -0,0 +1,56 @@ +# Copyright (c) 2018 Intel Corporation
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+schema: "nsb:traffic_profile:0.1"
+
+name: TrafficProfileGenericHTTP
+description: Traffic profile to run HTTP test
+traffic_profile:
+  traffic_type: TrafficProfileGenericHTTP
+
+uplink_0:
+  ip:
+    address: "152.16.100.32" # must be in same subnet with gateway
+    subnet_prefix: 24 # subnet prefix
+    mac: "Auto" # port mac addr or auto to generate automatically
+    gateway: <GATEWAY_ADDR> # will be taken from pod file
+
+  http_client:
+    simulated_users: {{ get(simulated_users, 'simulated_users.uplink_0', '65000') }} # number of threads to be run
+    page_object: {{ get(page_object, 'page_object.uplink_0', '/1b.html') }} # http locator to be read
+
+downlink_0:
+  ip:
+    address: "152.40.40.32" # must be in same subnet with gateway
+    subnet_prefix: 24 # subnet prefix
+    mac: "Auto" # port mac addr or auto to generate automatically
+    gateway: <GATEWAY_ADDR> # will be taken from pod file
+
+uplink_1:
+  ip:
+    address: "12.12.12.32"
+    subnet_prefix: 24
+    mac: "00:00:00:00:00:01"
+    gateway: <GATEWAY_ADDR>
+
+  http_client:
+    simulated_users: {{ get(simulated_users, 'simulated_users.uplink_1', '65000') }} # number of threads to be run
+    page_object: {{ get(page_object, 'page_object.uplink_1', '/1b.html') }} # http locator to be read
+
+downlink_1:
+  ip:
+    address: "13.13.13.32"
+    subnet_prefix: 24
+    mac: "00:00:00:00:00:02"
+    gateway: <GATEWAY_ADDR>
\ No newline at end of file diff --git a/samples/vnf_samples/nsut/agnostic/agnostic_vnf_topology_ixia_8ports.yaml b/samples/vnf_samples/nsut/agnostic/agnostic_vnf_topology_ixia_8ports.yaml new file mode 100644 index 000000000..88ddf6ccd --- /dev/null +++ b/samples/vnf_samples/nsut/agnostic/agnostic_vnf_topology_ixia_8ports.yaml @@ -0,0 +1,114 @@ +# Copyright (c) 2018 Intel Corporation +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +nsd:nsd-catalog: + nsd: + - id: agnostic-topology + name: agnostic-topology + short-name: agnostic-topology + description: agnostic-topology + constituent-vnfd: + - member-vnf-index: '1' + vnfd-id-ref: tg__0 + VNF model: ../../vnf_descriptors/ixia_rfc2544_tpl.yaml #TG type + - member-vnf-index: '2' + vnfd-id-ref: vnf__0 + VNF model: ../../vnf_descriptors/agnostic_vnf.yaml #VNF type + + vld: + - id: uplink_0 + name: tg__0 to vnf__0 link 1 + type: ELAN + vnfd-connection-point-ref: + - member-vnf-index-ref: '1' + vnfd-connection-point-ref: xe0 + vnfd-id-ref: tg__0 + - member-vnf-index-ref: '2' + vnfd-connection-point-ref: xe0 + vnfd-id-ref: vnf__0 + + - id: downlink_0 + name: vnf__0 to tg__0 link 2 + type: ELAN + vnfd-connection-point-ref: + - member-vnf-index-ref: '2' + vnfd-connection-point-ref: xe1 + vnfd-id-ref: vnf__0 + - member-vnf-index-ref: '1' + vnfd-connection-point-ref: xe1 + vnfd-id-ref: tg__0 + + - id: uplink_1 + name: tg__0 to vnf__0 link 3 + type: ELAN + vnfd-connection-point-ref: + - member-vnf-index-ref: '1' + vnfd-connection-point-ref: xe2 + vnfd-id-ref: tg__0 + - member-vnf-index-ref: '2' + vnfd-connection-point-ref: xe2 + vnfd-id-ref: vnf__0 + + - id: downlink_1 + name: vnf__0 to tg__0 link 4 + type: ELAN + vnfd-connection-point-ref: + - member-vnf-index-ref: '2' + vnfd-connection-point-ref: xe3 + vnfd-id-ref: vnf__0 + - member-vnf-index-ref: '1' + vnfd-connection-point-ref: xe3 + vnfd-id-ref: tg__0 + - id: uplink_2 + name: tg__0 to vnf__0 link 5 + type: ELAN + vnfd-connection-point-ref: + - member-vnf-index-ref: '1' + vnfd-connection-point-ref: xe4 + vnfd-id-ref: tg__0 + - member-vnf-index-ref: '2' + vnfd-connection-point-ref: xe4 + vnfd-id-ref: vnf__0 + + - id: downlink_2 + name: vnf__0 to tg__0 link 6 + type: ELAN + vnfd-connection-point-ref: + - member-vnf-index-ref: '2' + vnfd-connection-point-ref: xe5 + vnfd-id-ref: vnf__0 + - member-vnf-index-ref: '1' + vnfd-connection-point-ref: xe5 + vnfd-id-ref: tg__0 + + - id: uplink_3 + name: tg__0 to vnf__0 link 7 + type: ELAN + vnfd-connection-point-ref: + - member-vnf-index-ref: '1' + vnfd-connection-point-ref: xe6 + vnfd-id-ref: tg__0 + - member-vnf-index-ref: '2' + vnfd-connection-point-ref: xe6 + vnfd-id-ref: vnf__0 + + - id: downlink_3 + name: vnf__0 to tg__0 link 8 + type: ELAN + vnfd-connection-point-ref: + - member-vnf-index-ref: '2' + vnfd-connection-point-ref: xe7 + vnfd-id-ref: vnf__0 + - member-vnf-index-ref: '1' + vnfd-connection-point-ref: xe7 + vnfd-id-ref: tg__0 diff --git a/samples/vnf_samples/nsut/agnostic/agnostic_vnf_topology_ixload_2ports.yaml 
b/samples/vnf_samples/nsut/agnostic/agnostic_vnf_topology_ixload_2ports.yaml new file mode 100755 index 000000000..80f6dcf67 --- /dev/null +++ b/samples/vnf_samples/nsut/agnostic/agnostic_vnf_topology_ixload_2ports.yaml @@ -0,0 +1,50 @@ +# Copyright (c) 2018 Intel Corporation +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +nsd:nsd-catalog: + nsd: + - id: agnostic-topology + name: agnostic-topology + short-name: agnostic-topology + description: scenario with HTTP and Agnostic VNF + constituent-vnfd: + - member-vnf-index: '1' + vnfd-id-ref: tg__0 + VNF model: ../../vnf_descriptors/tg_ixload.yaml + - member-vnf-index: '2' + vnfd-id-ref: vnf__0 + VNF model: ../../vnf_descriptors/agnostic_vnf.yaml + + vld: + - id: uplink_0 + name: tg__0 to vnf__0 link 1 + type: ELAN + vnfd-connection-point-ref: + - member-vnf-index-ref: '1' + vnfd-connection-point-ref: xe0 + vnfd-id-ref: tg__0 # HTTP Client + - member-vnf-index-ref: '2' + vnfd-connection-point-ref: xe0 + vnfd-id-ref: vnf__0 # VNF + + - id: downlink_0 + name: vnf__0 to tg__0 link 2 + type: ELAN + vnfd-connection-point-ref: + - member-vnf-index-ref: '2' + vnfd-connection-point-ref: xe1 + vnfd-id-ref: vnf__0 # HTTP Server + - member-vnf-index-ref: '1' + vnfd-connection-point-ref: xe1 + vnfd-id-ref: tg__0 # VNF diff --git a/samples/vnf_samples/nsut/agnostic/tc_baremetal_http_ixload__Requests_Concurrency_suite.yaml b/samples/vnf_samples/nsut/agnostic/tc_baremetal_http_ixload__Requests_Concurrency_suite.yaml new file mode 100755 index 000000000..d3c75eb25 --- /dev/null +++ b/samples/vnf_samples/nsut/agnostic/tc_baremetal_http_ixload__Requests_Concurrency_suite.yaml @@ -0,0 +1,27 @@ +# Copyright (c) 2018 Intel Corporation
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+schema: "yardstick:suite:0.1"
+
+name: "http test suite"
+test_cases_dir: "samples/"
+test_cases:
+-
+  file_name: vnf_samples/nsut/agnostic/tc_baremetal_http_ixload__Requests_Concurrency_template.yaml
+  task_args:
+    default: '{"page": "/1b.html", "users" : "5000"}'
+-
+  file_name: vnf_samples/nsut/agnostic/tc_baremetal_http_ixload__Requests_Concurrency_template.yaml
+  task_args:
+    default: '{"page": "/1b.html", "users" : "6000"}'
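
The suite above runs the same templated test case twice, overriding the Jinja
defaults through ``task_args``. A minimal stand-alone sketch of that rendering
step, using plain Jinja2 rather than Yardstick's own task-template machinery
(the file name and argument values are taken from the suite entries above)::

    import json
    from jinja2 import Template

    # task_args as given in the first suite entry above
    task_args = json.loads('{"page": "/1b.html", "users": "5000"}')

    with open('tc_baremetal_http_ixload__Requests_Concurrency_template.yaml') as f:
        rendered = Template(f.read()).render(**task_args)

    # The template's "{% set users = users or "10000" %}" lines fall back to
    # their defaults only when no value is supplied for that variable.
    print(rendered)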
diff --git a/samples/vnf_samples/nsut/agnostic/tc_baremetal_http_ixload__Requests_Concurrency_template.yaml b/samples/vnf_samples/nsut/agnostic/tc_baremetal_http_ixload__Requests_Concurrency_template.yaml
new file mode 100755
index 000000000..de2a779f8
--- /dev/null
+++ b/samples/vnf_samples/nsut/agnostic/tc_baremetal_http_ixload__Requests_Concurrency_template.yaml
@@ -0,0 +1,40 @@
+# Copyright (c) 2018 Intel Corporation
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+---
+schema: yardstick:task:0.1
+{% set users = users or "10000" %}
+{% set page = page or "/1b.html" %}
+scenarios:
+- type: NSPerf
+  traffic_profile: "HTTP_requests_concurrency.yaml"
+  topology: agnostic_vnf_topology_ixload_2ports.yaml
+  nodes:
+    tg__0: trafficgen_1.yardstick
+    vnf__0: vnf.yardstick
+  options:
+    simulated_users:
+      uplink: [{{users}}]
+    page_object:
+      uplink: [{{page}}]
+    vnf__0: []
+  runner:
+    type: Duration
+    duration: 2
+  ixia_profile: ../../traffic_profiles/vfw/HTTP-vFW_IPv4_2Ports_Concurrency.rxf # Need vlan update
+context:
+  type: Node
+  name: yardstick
+  nfvi_type: baremetal
+  file: /etc/yardstick/nodes/pod_ixia.yaml
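
The ``options`` section of the test case above feeds the ``get(...)`` lookups
used in ``HTTP_requests_concurrency.yaml``. A purely illustrative helper (not
Yardstick's actual implementation) showing the intended fallback behaviour of
such a dotted-path lookup, with hypothetical option values::

    def get(data, path, default):
        """Walk a nested dict by a dotted key path, falling back to default."""
        for key in path.split('.'):
            if not isinstance(data, dict) or key not in data:
                return default
            data = data[key]
        return data

    # Hypothetical options as they might arrive from the scenario definition
    options = {'simulated_users': {'uplink_0': '5000'}}

    print(get(options, 'simulated_users.uplink_0', '65000'))  # -> '5000'
    print(get(options, 'page_object.uplink_0', '/1b.html'))   # -> '/1b.html'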
diff --git a/yardstick/benchmark/contexts/standalone/model.py b/yardstick/benchmark/contexts/standalone/model.py index 1004c62d1..ab5fec012 100644 --- a/yardstick/benchmark/contexts/standalone/model.py +++ b/yardstick/benchmark/contexts/standalone/model.py @@ -107,7 +107,7 @@ version: 2 ethernets: ens3: match: - mac_address: {mac_address} + macaddress: {mac_address} addresses: - {ip_address} EOF @@ -570,6 +570,8 @@ class StandaloneContextHelper(object): # Update image with public key key_filename = node.get('key_filename') ip_netmask = "{0}/{1}".format(node.get('ip'), node.get('netmask')) + ip_netmask = "{0}/{1}".format(node.get('ip'), + IPNetwork(ip_netmask).prefixlen) Libvirt.gen_cdrom_image(connection, cdrom_img, vm_name, user_name, key_filename, mac, ip_netmask) return node diff --git a/yardstick/network_services/constants.py b/yardstick/network_services/constants.py index 0064b4fc5..5a186be42 100644 --- a/yardstick/network_services/constants.py +++ b/yardstick/network_services/constants.py @@ -1,4 +1,4 @@ -# Copyright (c) 2016-2017 Intel Corporation +# Copyright (c) 2016-2018 Intel Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. @@ -17,3 +17,4 @@ DEFAULT_VNF_TIMEOUT = 3600 PROCESS_JOIN_TIMEOUT = 3 ONE_GIGABIT_IN_BITS = 1000000000 NIC_GBPS_DEFAULT = 10 +RETRY_TIMEOUT = 5 diff --git a/yardstick/network_services/traffic_profile/http_ixload.py b/yardstick/network_services/traffic_profile/http_ixload.py index 3ccec637d..9210f3c6d 100644 --- a/yardstick/network_services/traffic_profile/http_ixload.py +++ b/yardstick/network_services/traffic_profile/http_ixload.py @@ -16,6 +16,14 @@ import sys import os import logging import collections +import subprocess +try: + libs = subprocess.check_output( + 'python -c "import site; print(site.getsitepackages())"', shell=True) + + sys.path.extend(libs[1:-1].replace("'", "").split(',')) +except subprocess.CalledProcessError: + pass # ixload uses its own py2. So importing jsonutils fails. So adding below # workaround to support call from yardstick @@ -24,7 +32,7 @@ try: except ImportError: import json as jsonutils -from yardstick.common import exceptions +from yardstick.common import exceptions #pylint: disable=wrong-import-position try: from IxLoad import IxLoad, StatCollectorUtils diff --git a/yardstick/network_services/traffic_profile/prox_profile.py b/yardstick/network_services/traffic_profile/prox_profile.py index de4b3f9a0..be450c9f7 100644 --- a/yardstick/network_services/traffic_profile/prox_profile.py +++ b/yardstick/network_services/traffic_profile/prox_profile.py @@ -1,4 +1,4 @@ -# Copyright (c) 2016-2017 Intel Corporation +# Copyright (c) 2016-2018 Intel Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
@@ -17,6 +17,7 @@ from __future__ import absolute_import import logging import multiprocessing +import time from yardstick.network_services.traffic_profile.base import TrafficProfile from yardstick.network_services.vnf_generic.vnf.prox_helpers import ProxProfileHelper @@ -117,6 +118,7 @@ class ProxProfile(TrafficProfile): try: pkt_size = next(self.pkt_size_iterator) except StopIteration: + time.sleep(5) self.done.set() return diff --git a/yardstick/network_services/vnf_generic/vnf/prox_helpers.py b/yardstick/network_services/vnf_generic/vnf/prox_helpers.py index 321c05779..8d721c045 100644 --- a/yardstick/network_services/vnf_generic/vnf/prox_helpers.py +++ b/yardstick/network_services/vnf_generic/vnf/prox_helpers.py @@ -1,4 +1,4 @@ -# Copyright (c) 2017 Intel Corporation +# Copyright (c) 2018 Intel Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. @@ -21,6 +21,7 @@ import re import select import socket import time + from collections import OrderedDict, namedtuple from contextlib import contextmanager from itertools import repeat, chain @@ -325,6 +326,27 @@ class ProxSocketHelper(object): return ret_str, False + def get_string(self, pkt_dump_only=False, timeout=0.01): + + def is_ready_string(): + # recv() is blocking, so avoid calling it when no data is waiting. + ready = select.select([self._sock], [], [], timeout) + return bool(ready[0]) + + status = False + ret_str = "" + while status is False: + for status in iter(is_ready_string, False): + decoded_data = self._sock.recv(256).decode('utf-8') + ret_str, done = self._parse_socket_data(decoded_data, + pkt_dump_only) + if (done): + status = True + break + + LOG.debug("Received data from socket: [%s]", ret_str) + return status, ret_str + def get_data(self, pkt_dump_only=False, timeout=0.01): """ read data from the socket """ @@ -394,7 +416,6 @@ class ProxSocketHelper(object): """ stop all cores on the remote instance """ LOG.debug("Stop all") self.put_command("stop all\n") - time.sleep(3) def stop(self, cores, task=''): """ stop specific cores on the remote instance """ @@ -406,7 +427,6 @@ class ProxSocketHelper(object): LOG.debug("Stopping cores %s", tmpcores) self.put_command("stop {} {}\n".format(join_non_strings(',', tmpcores), task)) - time.sleep(3) def start_all(self): """ start all cores on the remote instance """ @@ -423,13 +443,11 @@ class ProxSocketHelper(object): LOG.debug("Starting cores %s", tmpcores) self.put_command("start {}\n".format(join_non_strings(',', tmpcores))) - time.sleep(3) def reset_stats(self): """ reset the statistics on the remote instance """ LOG.debug("Reset stats") self.put_command("reset stats\n") - time.sleep(1) def _run_template_over_cores(self, template, cores, *args): for core in cores: @@ -440,7 +458,6 @@ class ProxSocketHelper(object): LOG.debug("Set packet size for core(s) %s to %d", cores, pkt_size) pkt_size -= 4 self._run_template_over_cores("pkt_size {} 0 {}\n", cores, pkt_size) - time.sleep(1) def set_value(self, cores, offset, value, length): """ set value on the remote instance """ @@ -545,49 +562,44 @@ class ProxSocketHelper(object): return rx, tx, drop, tsc def multi_port_stats(self, ports): - """get counter values from all ports port""" - - ports_str = "" - for port in ports: - ports_str = ports_str + str(port) + "," - ports_str = ports_str[:-1] + """get counter values from all ports at once""" + ports_str = ",".join(map(str, ports)) ports_all_data = [] tot_result = [0] * len(ports) - 
retry_counter = 0 port_index = 0 - while (len(ports) is not len(ports_all_data)) and (retry_counter < 10): + while (len(ports) is not len(ports_all_data)): self.put_command("multi port stats {}\n".format(ports_str)) - ports_all_data = self.get_data().split(";") + status, ports_all_data_str = self.get_string() + + if not status: + return False, [] + + ports_all_data = ports_all_data_str.split(";") if len(ports) is len(ports_all_data): for port_data_str in ports_all_data: + tmpdata = [] try: - tot_result[port_index] = [try_int(s, 0) for s in port_data_str.split(",")] + tmpdata = [try_int(s, 0) for s in port_data_str.split(",")] except (IndexError, TypeError): - LOG.error("Port Index error %d %s - retrying ", port_index, port_data_str) - - if (len(tot_result[port_index]) is not 6) or \ - tot_result[port_index][0] is not ports[port_index]: - ports_all_data = [] - tot_result = [0] * len(ports) - port_index = 0 - time.sleep(0.1) + LOG.error("Unpacking data error %s", port_data_str) + return False, [] + + if (len(tmpdata) < 6) or tmpdata[0] not in ports: LOG.error("Corrupted PACKET %s - retrying", port_data_str) - break + return False, [] else: + tot_result[port_index] = tmpdata port_index = port_index + 1 else: LOG.error("Empty / too much data - retry -%s-", ports_all_data) - ports_all_data = [] - tot_result = [0] * len(ports) - port_index = 0 - time.sleep(0.1) + return False, [] - retry_counter = retry_counter + 1 - return tot_result + LOG.debug("Multi port packet ..OK.. %s", tot_result) + return True, tot_result def port_stats(self, ports): """get counter values from a specific port""" @@ -1070,41 +1082,70 @@ class ProxDataHelper(object): def totals_and_pps(self): if self._totals_and_pps is None: rx_total = tx_total = 0 - all_ports = self.sut.multi_port_stats(range(self.port_count)) - for port in all_ports: - rx_total = rx_total + port[1] - tx_total = tx_total + port[2] - requested_pps = self.value / 100.0 * self.line_rate_to_pps() - self._totals_and_pps = rx_total, tx_total, requested_pps + ok = False + timeout = time.time() + constants.RETRY_TIMEOUT + while not ok: + ok, all_ports = self.sut.multi_port_stats([ + self.vnfd_helper.port_num(port_name) + for port_name in self.vnfd_helper.port_pairs.all_ports]) + if time.time() > timeout: + break + if ok: + for port in all_ports: + rx_total = rx_total + port[1] + tx_total = tx_total + port[2] + requested_pps = self.value / 100.0 * self.line_rate_to_pps() + self._totals_and_pps = rx_total, tx_total, requested_pps return self._totals_and_pps @property def rx_total(self): - return self.totals_and_pps[0] + try: + ret_val = self.totals_and_pps[0] + except (AttributeError, ValueError, TypeError, LookupError): + ret_val = 0 + return ret_val @property def tx_total(self): - return self.totals_and_pps[1] + try: + ret_val = self.totals_and_pps[1] + except (AttributeError, ValueError, TypeError, LookupError): + ret_val = 0 + return ret_val @property def requested_pps(self): - return self.totals_and_pps[2] + try: + ret_val = self.totals_and_pps[2] + except (AttributeError, ValueError, TypeError, LookupError): + ret_val = 0 + return ret_val @property def samples(self): samples = {} ports = [] - port_names = [] + port_names = {} for port_name, port_num in self.vnfd_helper.ports_iter(): ports.append(port_num) - port_names.append(port_name) - - results = self.sut.multi_port_stats(ports) - for result in results: - port_num = result[0] - samples[port_names[port_num]] = { - "in_packets": result[1], - "out_packets": result[2]} + port_names[port_num] = port_name + + 
ok = False + timeout = time.time() + constants.RETRY_TIMEOUT + while not ok: + ok, results = self.sut.multi_port_stats(ports) + if time.time() > timeout: + break + if ok: + for result in results: + port_num = result[0] + try: + samples[port_names[port_num]] = { + "in_packets": result[1], + "out_packets": result[2]} + except (IndexError, KeyError): + pass return samples def __enter__(self): diff --git a/yardstick/network_services/vnf_generic/vnf/prox_vnf.py b/yardstick/network_services/vnf_generic/vnf/prox_vnf.py index 839f30967..c3b50369b 100644 --- a/yardstick/network_services/vnf_generic/vnf/prox_vnf.py +++ b/yardstick/network_services/vnf_generic/vnf/prox_vnf.py @@ -1,4 +1,4 @@ -# Copyright (c) 2017 Intel Corporation +# Copyright (c) 2018 Intel Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. @@ -15,6 +15,7 @@ import errno import logging import datetime +import time from yardstick.common.process import check_if_process_failed from yardstick.network_services.vnf_generic.vnf.prox_helpers import ProxDpdkVnfSetupEnvHelper @@ -81,6 +82,8 @@ class ProxApproxVnf(SampleVNF): "packets_in": 0, "packets_dropped": 0, "packets_fwd": 0, + "curr_packets_in": 0, + "curr_packets_fwd": 0, "collect_stats": {"core": {}}, }) return result @@ -97,15 +100,26 @@ class ProxApproxVnf(SampleVNF): raise RuntimeError("Failed ..Invalid no of ports .. " "1, 2 or 4 ports only supported at this time") - all_port_stats = self.vnf_execute('multi_port_stats', range(port_count)) - rx_total = tx_total = tsc = 0 - try: - for single_port_stats in all_port_stats: - rx_total = rx_total + single_port_stats[1] - tx_total = tx_total + single_port_stats[2] - tsc = tsc + single_port_stats[5] - except (TypeError, IndexError): - LOG.error("Invalid data ...") + tmpPorts = [self.vnfd_helper.port_num(port_name) + for port_name in self.vnfd_helper.port_pairs.all_ports] + ok = False + timeout = time.time() + constants.RETRY_TIMEOUT + while not ok: + ok, all_port_stats = self.vnf_execute('multi_port_stats', tmpPorts) + if time.time() > timeout: + break + + if ok: + rx_total = tx_total = tsc = 0 + try: + for single_port_stats in all_port_stats: + rx_total = rx_total + single_port_stats[1] + tx_total = tx_total + single_port_stats[2] + tsc = tsc + single_port_stats[5] + except (TypeError, IndexError): + LOG.error("Invalid data ...") + return {} + else: return {} tsc = tsc / port_count diff --git a/yardstick/tests/unit/benchmark/contexts/standalone/test_model.py b/yardstick/tests/unit/benchmark/contexts/standalone/test_model.py index 98d2b1836..10e1e3ba0 100644 --- a/yardstick/tests/unit/benchmark/contexts/standalone/test_model.py +++ b/yardstick/tests/unit/benchmark/contexts/standalone/test_model.py @@ -17,6 +17,7 @@ import os import uuid import mock +import netaddr import unittest from xml.etree import ElementTree @@ -292,6 +293,7 @@ class ModelLibvirtTestCase(unittest.TestCase): hostname = root.find('name').text mac = "00:11:22:33:44:55" ip = "{0}/{1}".format(node.get('ip'), node.get('netmask')) + ip = "{0}/{1}".format(node.get('ip'), netaddr.IPNetwork(ip).prefixlen) model.StandaloneContextHelper.check_update_key(self.mock_ssh, node, hostname, id_name, cdrom_img, mac) mock_gen_cdrom_image.assert_called_once_with(self.mock_ssh, cdrom_img, hostname, diff --git a/yardstick/tests/unit/network_services/vnf_generic/vnf/test_prox_helpers.py b/yardstick/tests/unit/network_services/vnf_generic/vnf/test_prox_helpers.py index 3d6ebb25b..894b16e13 100644 
--- a/yardstick/tests/unit/network_services/vnf_generic/vnf/test_prox_helpers.py +++ b/yardstick/tests/unit/network_services/vnf_generic/vnf/test_prox_helpers.py @@ -1,4 +1,4 @@ -# Copyright (c) 2016-2017 Intel Corporation +# Copyright (c) 2016-2018 Intel Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. @@ -320,7 +320,8 @@ class TestProxSocketHelper(unittest.TestCase): self.assertEqual(len(prox._pkt_dumps), 0) mock_select.select.reset_mock() - mock_select.select.side_effect = chain([['a'], ['']], repeat([1], 3)) + mock_select.select.side_effect = chain([['a'], ['']], + repeat([1], 3)) mock_recv.decode.return_value = PACKET_DUMP_1 ret = prox.get_data() self.assertEqual(mock_select.select.call_count, 2) @@ -328,13 +329,54 @@ class TestProxSocketHelper(unittest.TestCase): self.assertEqual(len(prox._pkt_dumps), 1) mock_select.select.reset_mock() - mock_select.select.side_effect = chain([[object()], [None]], repeat([1], 3)) + mock_select.select.side_effect = chain([[object()], [None]], + repeat([1], 3)) mock_recv.decode.return_value = PACKET_DUMP_2 ret = prox.get_data() self.assertEqual(mock_select.select.call_count, 1) self.assertEqual(ret, 'jumped over') self.assertEqual(len(prox._pkt_dumps), 3) + @mock.patch.object(prox_helpers, 'select') + def test_get_string(self, mock_select): + mock_select.select.side_effect = [[1], [0]] + mock_socket = mock.MagicMock() + mock_recv = mock_socket.recv() + mock_recv.decode.return_value = "" + prox = prox_helpers.ProxSocketHelper(mock_socket) + status, ret = prox.get_string() + self.assertEqual(ret, "") + self.assertTrue(status) + self.assertEqual(len(prox._pkt_dumps), 0) + + @mock.patch.object(prox_helpers, 'select') + def test_get_string2(self, mock_select): + mock_select.select.side_effect = chain([['a'], ['']], + repeat([1], 3)) + mock_socket = mock.MagicMock() + mock_recv = mock_socket.recv() + mock_recv.decode.return_value = PACKET_DUMP_1 + prox = prox_helpers.ProxSocketHelper(mock_socket) + status, ret = prox.get_string() + self.assertEqual(mock_select.select.call_count, 2) + self.assertEqual(ret, 'pktdump,3,11') + self.assertTrue(status) + self.assertEqual(len(prox._pkt_dumps), 1) + + @mock.patch.object(prox_helpers, 'select') + def test_get_string3(self, mock_select): + mock_select.select.side_effect = chain([[object()], [None]], + repeat([1], 3)) + mock_socket = mock.MagicMock() + mock_recv = mock_socket.recv() + mock_recv.decode.return_value = PACKET_DUMP_2 + prox = prox_helpers.ProxSocketHelper(mock_socket) + status, ret = prox.get_string() + self.assertTrue(status) + self.assertTrue(mock_select.select.assert_called_once) + self.assertEqual(ret, 'jumped over') + self.assertEqual(len(prox._pkt_dumps), 2) + def test__parse_socket_data_mixed_data(self): prox = prox_helpers.ProxSocketHelper(mock.MagicMock()) ret, _ = prox._parse_socket_data(PACKET_DUMP_NON_1, False) @@ -551,26 +593,60 @@ class TestProxSocketHelper(unittest.TestCase): def test_multi_port_stats(self, *args): mock_socket = mock.MagicMock() prox = prox_helpers.ProxSocketHelper(mock_socket) - prox.get_data = mock.MagicMock(return_value='0,1,2,3,4,5;1,1,2,3,4,5') + prox.get_string = mock.MagicMock(return_value=(True, '0,1,2,3,4,5;1,1,2,3,4,5')) expected = [[0, 1, 2, 3, 4, 5], [1, 1, 2, 3, 4, 5]] - result = prox.multi_port_stats([0, 1]) + status, result = prox.multi_port_stats([0, 1]) self.assertEqual(result, expected) - - prox.get_data = mock.MagicMock(return_value='0,1,2,3,4,5;1,1,2,3,4,5') 
- result = prox.multi_port_stats([0]) - expected = [0] - self.assertEqual(result, expected) - - prox.get_data = mock.MagicMock(return_value='0,1,2,3;1,1,2,3,4,5') - result = prox.multi_port_stats([0, 1]) - expected = [0] * 2 - self.assertEqual(result, expected) - - prox.get_data = mock.MagicMock(return_value='99,1,2,3,4,5;1,1,2,3,4,5') - expected = [0] * 2 - result = prox.multi_port_stats([0, 1]) + self.assertEqual(status, True) + + prox.get_string = mock.MagicMock( + return_value=(True, '0,1,2,3,4,5;1,1,2,3,4,5')) + status, result = prox.multi_port_stats([0]) + self.assertEqual(status, False) + + prox.get_string = mock.MagicMock( + return_value=(True, '0,1,2,3,4,5;1,1,2,3,4,5')) + status, result = prox.multi_port_stats([0, 1, 2]) + self.assertEqual(status, False) + + prox.get_string = mock.MagicMock( + return_value=(True, '0,1,2,3;1,1,2,3,4,5')) + status, result = prox.multi_port_stats([0, 1]) + self.assertEqual(status, False) + + prox.get_string = mock.MagicMock( + return_value=(True, '99,1,2,3,4,5;1,1,2,3,4,5')) + status, result = prox.multi_port_stats([0, 1]) + self.assertEqual(status, False) + + prox.get_string = mock.MagicMock( + return_value=(True, '99,1,2,3,4,5;1,1,2,3,4,5')) + status, result = prox.multi_port_stats([99, 1]) + expected = [[99, 1, 2, 3, 4, 5], [1, 1, 2, 3, 4, 5]] + self.assertEqual(status, True) self.assertEqual(result, expected) + prox.get_string = mock.MagicMock( + return_value=(True, + '2,21,22,23,24,25;1,11,12,13,14,15;0,1,2,3,4,5')) + + sample1 = [0, 1, 2, 3, 4, 5] + sample2 = [1, 11, 12, 13, 14, 15] + sample3 = [2, 21, 22, 23, 24, 25] + expected = [sample3, sample2, sample1] + status, result = prox.multi_port_stats([1, 2, 0]) + self.assertTrue(status) + self.assertListEqual(result, expected) + + prox.get_string = mock.MagicMock( + return_value=(True, '6,21,22,23,24,25;1,11,12,13,14,15;0,1,2,3,4,5')) + ok, result = prox.multi_port_stats([1, 6, 0]) + sample1 = [6, 21, 22, 23, 24, 25] + sample2 = [1, 11, 12, 13, 14, 15] + sample3 = [0, 1, 2, 3, 4, 5] + expected = [sample1, sample2, sample3] + self.assertListEqual(result, expected) + self.assertTrue(ok) def test_port_stats(self): port_stats = [ @@ -1584,8 +1660,9 @@ class TestProxDataHelper(unittest.TestCase): vnfd_helper.port_pairs.all_ports = list(range(4)) sut = mock.MagicMock() - sut.multi_port_stats.return_value = [[0, 1, 2, 3, 4, 5], [1, 1, 2, 3, 4, 5], - [2, 1, 2, 3, 4, 5], [3, 1, 2, 3, 4, 5]] + sut.multi_port_stats.return_value = (True, + [[0, 1, 2, 3, 4, 5], [1, 1, 2, 3, 4, 5], + [2, 1, 2, 3, 4, 5], [3, 1, 2, 3, 4, 5]]) data_helper = prox_helpers.ProxDataHelper( vnfd_helper, sut, pkt_size, 25, None, @@ -1593,14 +1670,77 @@ class TestProxDataHelper(unittest.TestCase): self.assertEqual(data_helper.rx_total, 4) self.assertEqual(data_helper.tx_total, 8) - self.assertEqual(data_helper.requested_pps, 6.25e6) + self.assertEqual(data_helper.requested_pps, 6250000.0) + + vnfd_helper = mock.MagicMock() + vnfd_helper.port_pairs.all_ports = [3, 4] + + sut = mock.MagicMock() + sut.multi_port_stats.return_value = (True, + [[3, 1, 2, 3, 4, 5], [4, 1, 2, 3, 4, 5]]) + + data_helper = prox_helpers.ProxDataHelper( + vnfd_helper, sut, pkt_size, 25, None, + constants.NIC_GBPS_DEFAULT * constants.ONE_GIGABIT_IN_BITS) + + self.assertEqual(data_helper.rx_total, 2) + self.assertEqual(data_helper.tx_total, 4) + self.assertEqual(data_helper.requested_pps, 3125000.0) + + vnfd_helper = mock.MagicMock() + vnfd_helper.port_pairs.all_ports = [0, 1, 2, 3, 4, 6, 7] + + sut = mock.MagicMock() + sut.multi_port_stats.return_value = (True, + 
[[8, 1, 2, 3, 4, 5], [9, 1, 2, 3, 4, 5]]) + + data_helper = prox_helpers.ProxDataHelper( + vnfd_helper, sut, pkt_size, 25, None, + constants.NIC_GBPS_DEFAULT * constants.ONE_GIGABIT_IN_BITS) + + self.assertEqual(data_helper.rx_total, 2) + self.assertEqual(data_helper.tx_total, 4) + self.assertEqual(data_helper.requested_pps, 10937500.0) + + vnfd_helper = mock.MagicMock() + vnfd_helper.port_pairs.all_ports = [] + + sut = mock.MagicMock() + sut.multi_port_stats.return_value = (True, + [[8, 1, 2, 3, 4, 5], [9, 1, 2, 3, 4, 5]]) + + data_helper = prox_helpers.ProxDataHelper( + vnfd_helper, sut, pkt_size, 25, None, + constants.NIC_GBPS_DEFAULT * constants.ONE_GIGABIT_IN_BITS) + + self.assertEqual(data_helper.rx_total, 2) + self.assertEqual(data_helper.tx_total, 4) + self.assertEqual(data_helper.requested_pps, 0.0) + + def test_totals_and_pps2(self): + pkt_size = 180 + vnfd_helper = mock.MagicMock() + vnfd_helper.port_pairs.all_ports = list(range(4)) + + sut = mock.MagicMock() + sut.multi_port_stats.return_value = (True, + [[0, 'A', 2, 3, 4, 5], [1, 'B', 'C', 3, 4, 5], + ['D', 1, 2, 3, 4, 5], [3, 1, 2, 3, 4, 'F']]) + + data_helper = prox_helpers.ProxDataHelper( + vnfd_helper, sut, pkt_size, 25, None, + constants.NIC_GBPS_DEFAULT * constants.ONE_GIGABIT_IN_BITS) + + self.assertEqual(data_helper.rx_total, 0) + self.assertEqual(data_helper.tx_total, 0) + self.assertEqual(data_helper.requested_pps, 0) def test_samples(self): vnfd_helper = mock.MagicMock() vnfd_helper.ports_iter.return_value = [('xe0', 0), ('xe1', 1)] sut = mock.MagicMock() - sut.multi_port_stats.return_value = [[0, 1, 2, 3, 4, 5], [1, 11, 12, 3, 4, 5]] + sut.multi_port_stats.return_value = (True, [[0, 1, 2, 3, 4, 5], [1, 11, 12, 3, 4, 5]]) data_helper = prox_helpers.ProxDataHelper( vnfd_helper, sut, None, None, None, None) @@ -1618,13 +1758,35 @@ class TestProxDataHelper(unittest.TestCase): result = data_helper.samples self.assertDictEqual(result, expected) + def test_samples2(self): + vnfd_helper = mock.MagicMock() + vnfd_helper.ports_iter.return_value = [('xe1', 3), ('xe2', 7)] + + sut = mock.MagicMock() + sut.multi_port_stats.return_value = (True, [[3, 1, 2, 3, 4, 5], [7, 11, 12, 3, 4, 5]]) + + data_helper = prox_helpers.ProxDataHelper( + vnfd_helper, sut, None, None, None, None) + + expected = { + 'xe1': { + 'in_packets': 1, + 'out_packets': 2, + }, + 'xe2': { + 'in_packets': 11, + 'out_packets': 12, + }, + } + result = data_helper.samples + self.assertDictEqual(result, expected) + def test___enter__(self): vnfd_helper = mock.MagicMock() vnfd_helper.port_pairs.all_ports = list(range(4)) vnfd_helper.ports_iter.return_value = [('xe1', 3), ('xe2', 7)] sut = mock.MagicMock() - sut.port_stats.return_value = list(range(10)) data_helper = prox_helpers.ProxDataHelper(vnfd_helper, sut, None, None, 5.4, constants.NIC_GBPS_DEFAULT * constants.ONE_GIGABIT_IN_BITS) @@ -1978,7 +2140,6 @@ class TestProxProfileHelper(unittest.TestCase): client = mock.MagicMock() client.hz.return_value = 2 - client.port_stats.return_value = tuple(range(12)) helper.client = client helper.get_latency = mock.MagicMock(return_value=[3.3, 3.6, 3.8]) @@ -1988,18 +2149,20 @@ class TestProxProfileHelper(unittest.TestCase): with helper.traffic_context(64, 1): pass - @mock.patch('yardstick.network_services.vnf_generic.vnf.prox_helpers.time') - def test_run_test(self, _): + def test_run_test(self, *args): resource_helper = mock.MagicMock() resource_helper.step_delta = 0.4 resource_helper.vnfd_helper.port_pairs.all_ports = list(range(2)) - 
resource_helper.sut.port_stats.return_value = list(range(10)) + resource_helper.sut.multi_port_stats.return_value = (True, [[0, 1, 1, 2, 4, 5], + [1, 1, 2, 3, 4, 5]]) helper = prox_helpers.ProxProfileHelper(resource_helper) - helper.run_test(120, 5, 6.5, - constants.NIC_GBPS_DEFAULT * constants.ONE_GIGABIT_IN_BITS) - + helper.run_test(pkt_size=120, duration=5, value=6.5, tolerated_loss=0.0, + line_speed=constants.NIC_GBPS_DEFAULT * constants.ONE_GIGABIT_IN_BITS) + self.assertTrue(resource_helper.sut.multi_port_stats.called) + self.assertTrue(resource_helper.sut.stop_all.called) + self.assertTrue(resource_helper.sut.reset_stats.called) class TestProxMplsProfileHelper(unittest.TestCase): @@ -2135,22 +2298,30 @@ class TestProxBngProfileHelper(unittest.TestCase): self.assertEqual(helper.arp_task_cores, expected_arp_task) self.assertEqual(helper._cores_tuple, expected_combined) - @mock.patch('yardstick.network_services.vnf_generic.vnf.prox_helpers.time') - def test_run_test(self, _): + def test_run_test(self, *args): resource_helper = mock.MagicMock() resource_helper.step_delta = 0.4 resource_helper.vnfd_helper.port_pairs.all_ports = list(range(2)) - resource_helper.sut.port_stats.return_value = list(range(10)) + resource_helper.sut.multi_port_stats.return_value = (True, [[0, 1, 1, 2, 4, 5], + [1, 1, 2, 3, 4, 5]]) helper = prox_helpers.ProxBngProfileHelper(resource_helper) - helper.run_test(120, 5, 6.5, - constants.NIC_GBPS_DEFAULT * constants.ONE_GIGABIT_IN_BITS) + helper.run_test(pkt_size=120, duration=5, value=6.5, tolerated_loss=0.0, + line_speed=constants.NIC_GBPS_DEFAULT * constants.ONE_GIGABIT_IN_BITS) + self.assertTrue(resource_helper.sut.multi_port_stats.called) + self.assertTrue(resource_helper.sut.stop_all.called) + self.assertTrue(resource_helper.sut.reset_stats.called) + + resource_helper.reset_mock() # negative pkt_size is the only way to make ratio > 1 - helper.run_test(-1000, 5, 6.5, - constants.NIC_GBPS_DEFAULT * constants.ONE_GIGABIT_IN_BITS) + helper.run_test(pkt_size=-1000, duration=5, value=6.5, tolerated_loss=0.0, + line_speed=constants.NIC_GBPS_DEFAULT * constants.ONE_GIGABIT_IN_BITS) + self.assertTrue(resource_helper.sut.multi_port_stats.called) + self.assertTrue(resource_helper.sut.stop_all.called) + self.assertTrue(resource_helper.sut.reset_stats.called) class TestProxVpeProfileHelper(unittest.TestCase): @@ -2253,18 +2424,21 @@ class TestProxVpeProfileHelper(unittest.TestCase): self.assertEqual(helper.inet_ports, expected_inet) self.assertEqual(helper._ports_tuple, expected_combined) - @mock.patch('yardstick.network_services.vnf_generic.vnf.prox_helpers.time') - def test_run_test(self, _): + def test_run_test(self, *args): resource_helper = mock.MagicMock() resource_helper.step_delta = 0.4 resource_helper.vnfd_helper.port_pairs.all_ports = list(range(2)) - resource_helper.sut.port_stats.return_value = list(range(10)) + resource_helper.sut.multi_port_stats.return_value = (True, [[0, 1, 1, 2, 4, 5], + [1, 1, 2, 3, 4, 5]]) helper = prox_helpers.ProxVpeProfileHelper(resource_helper) - helper.run_test(120, 5, 6.5) - helper.run_test(-1000, 5, 6.5) # negative pkt_size is the only way to make ratio > 1 + helper.run_test(pkt_size=120, duration=5, value=6.5, tolerated_loss=0.0, + line_speed=constants.NIC_GBPS_DEFAULT * constants.ONE_GIGABIT_IN_BITS) + # negative pkt_size is the only way to make ratio > 1 + helper.run_test(pkt_size=-1000, duration=5, value=6.5, tolerated_loss=0.0, + line_speed=constants.NIC_GBPS_DEFAULT * constants.ONE_GIGABIT_IN_BITS) class 
TestProxlwAFTRProfileHelper(unittest.TestCase): @@ -2367,14 +2541,18 @@ class TestProxlwAFTRProfileHelper(unittest.TestCase): self.assertEqual(helper.inet_ports, expected_inet) self.assertEqual(helper._ports_tuple, expected_combined) - @mock.patch('yardstick.network_services.vnf_generic.vnf.prox_helpers.time') - def test_run_test(self, _): + def test_run_test(self, *args): resource_helper = mock.MagicMock() resource_helper.step_delta = 0.4 resource_helper.vnfd_helper.port_pairs.all_ports = list(range(2)) - resource_helper.sut.port_stats.return_value = list(range(10)) + resource_helper.sut.multi_port_stats.return_value = (True, [[0, 1, 2, 4, 6, 5], + [1, 1, 2, 3, 4, 5]]) helper = prox_helpers.ProxlwAFTRProfileHelper(resource_helper) - helper.run_test(120, 5, 6.5) - helper.run_test(-1000, 5, 6.5) # negative pkt_size is the only way to make ratio > 1 + helper.run_test(pkt_size=120, duration=5, value=6.5, tolerated_loss=0.0, + line_speed=constants.NIC_GBPS_DEFAULT * constants.ONE_GIGABIT_IN_BITS) + + # negative pkt_size is the only way to make ratio > 1 + helper.run_test(pkt_size=-1000, duration=5, value=6.5, tolerated_loss=0.0, + line_speed=constants.NIC_GBPS_DEFAULT * constants.ONE_GIGABIT_IN_BITS) diff --git a/yardstick/tests/unit/network_services/vnf_generic/vnf/test_prox_vnf.py b/yardstick/tests/unit/network_services/vnf_generic/vnf/test_prox_vnf.py index f144e8c42..62cbea0bb 100644 --- a/yardstick/tests/unit/network_services/vnf_generic/vnf/test_prox_vnf.py +++ b/yardstick/tests/unit/network_services/vnf_generic/vnf/test_prox_vnf.py @@ -335,6 +335,8 @@ class TestProxApproxVnf(unittest.TestCase): 'packets_in': 0, 'packets_dropped': 0, 'packets_fwd': 0, + 'curr_packets_in': 0, + 'curr_packets_fwd': 0, 'collect_stats': {'core': {}} } result = prox_approx_vnf.collect_kpi() @@ -346,8 +348,8 @@ class TestProxApproxVnf(unittest.TestCase): mock_ssh(ssh) resource_helper = mock.MagicMock() - resource_helper.execute.return_value = [[0, 1, 2, 3, 4, 5], [1, 1, 2, 3, 4, 5], - [2, 1, 2, 3, 4, 5], [3, 1, 2, 3, 4, 5]] + resource_helper.execute.return_value = (True, + [[0, 1, 2, 3, 4, 5], [1, 1, 2, 3, 4, 5]]) resource_helper.collect_collectd_kpi.return_value = {'core': {'result': 234}} prox_approx_vnf = prox_vnf.ProxApproxVnf(NAME, self.VNFD0, 'task_id') @@ -355,20 +357,61 @@ class TestProxApproxVnf(unittest.TestCase): 'nodes': {prox_approx_vnf.name: "mock"} } prox_approx_vnf.resource_helper = resource_helper + prox_approx_vnf.tsc_hz = 1000 expected = { + 'curr_packets_in': 200, + 'curr_packets_fwd': 400, 'physical_node': 'mock_node', - 'packets_in': 4, - 'packets_dropped': 4, - 'packets_fwd': 8, + 'packets_in': 2, + 'packets_dropped': 2, + 'packets_fwd': 4, 'collect_stats': {'core': {'result': 234}}, } result = prox_approx_vnf.collect_kpi() self.assertEqual(result['packets_in'], expected['packets_in']) self.assertEqual(result['packets_dropped'], expected['packets_dropped']) self.assertEqual(result['packets_fwd'], expected['packets_fwd']) - self.assertNotEqual(result['packets_fwd'], 0) - self.assertNotEqual(result['packets_fwd'], 0) + self.assertEqual(result['curr_packets_in'], expected['curr_packets_in']) + self.assertEqual(result['curr_packets_fwd'], expected['curr_packets_fwd']) + + @mock.patch.object(ctx_base.Context, 'get_physical_node_from_server', return_value='mock_node') + @mock.patch(SSH_HELPER) + def test_collect_kpi_bad_input(self, ssh, *args): + mock_ssh(ssh) + + resource_helper = mock.MagicMock() + resource_helper.execute.return_value = (True, + [[0, 'A', 'B', 'C', 'D', 'E'], + ['F', 1, 2, 
3, 4, 5]]) + + prox_approx_vnf = prox_vnf.ProxApproxVnf(NAME, self.VNFD0, 'task_id') + prox_approx_vnf.scenario_helper.scenario_cfg = { + 'nodes': {prox_approx_vnf.name: "mock"} + } + prox_approx_vnf.resource_helper = resource_helper + + result = prox_approx_vnf.collect_kpi() + self.assertDictEqual(result, {}) + + @mock.patch.object(ctx_base.Context, 'get_physical_node_from_server', return_value='mock_node') + @mock.patch(SSH_HELPER) + def test_collect_kpi_bad_input2(self, ssh, *args): + mock_ssh(ssh) + + resource_helper = mock.MagicMock() + resource_helper.execute.return_value = (False, + [[0, 'A', 'B', 'C', 'D', 'E'], + ['F', 1, 2, 3, 4, 5]]) + + prox_approx_vnf = prox_vnf.ProxApproxVnf(NAME, self.VNFD0, 'task_id') + prox_approx_vnf.scenario_helper.scenario_cfg = { + 'nodes': {prox_approx_vnf.name: "mock"} + } + prox_approx_vnf.resource_helper = resource_helper + + result = prox_approx_vnf.collect_kpi() + self.assertDictEqual(result, {}) @mock.patch.object(ctx_base.Context, 'get_physical_node_from_server', return_value='mock_node') @mock.patch(SSH_HELPER) |