Diffstat (limited to 'docs/testing/developer/devguide/devguide_nsb_prox.rst')
-rwxr-xr-x  docs/testing/developer/devguide/devguide_nsb_prox.rst  601
1 files changed, 377 insertions, 224 deletions
diff --git a/docs/testing/developer/devguide/devguide_nsb_prox.rst b/docs/testing/developer/devguide/devguide_nsb_prox.rst
index 79990055a..44a216b75 100755
--- a/docs/testing/developer/devguide/devguide_nsb_prox.rst
+++ b/docs/testing/developer/devguide/devguide_nsb_prox.rst
@@ -8,14 +8,17 @@ for Telco customers, investigate the impact of new Intel compute,
network and storage technologies, characterize performance, and develop
optimal system architectures and configurations.
+NSB PROX supports baremetal, Openstack and standalone configurations.
+
.. contents::
Prerequisites
=============
-In order to integrate PROX tests into NSB, the following prerequisites are required.
+In order to integrate PROX tests into NSB, the following prerequisites are
+required.
-.. _`dpdk wiki page`: http://dpdk.org/
+.. _`dpdk wiki page`: https://www.dpdk.org/
.. _`yardstick wiki page`: https://wiki.opnfv.org/display/yardstick/
.. _`Prox documentation`: https://01.org/intel-data-plane-performance-demonstrators/documentation/prox-documentation
.. _`openstack wiki page`: https://wiki.openstack.org/wiki/Main_Page
@@ -159,11 +162,13 @@ A NSB Prox test is composed of the following components :-
``tc_prox_heat_context_vpe-4.yaml``. This file describes the components
of the test, in the case of openstack the network description and
server descriptions, in the case of baremetal the hardware
- description location. It also contains the name of the Traffic Generator, the SUT config file
- and the traffic profile description, all described below. See nsb-test-description-label_
+ description location. It also contains the name of the Traffic Generator,
+ the SUT config file and the traffic profile description, all described below.
+ See `Test Description File`_
-* Traffic Profile file. Example ``prox_binsearch.yaml``. This describes the packet size, tolerated
- loss, initial line rate to start traffic at, test interval etc See nsb-traffic-profile-label_
+* Traffic Profile file. Example ``prox_binsearch.yaml``. This describes the
+ packet size, tolerated loss, initial line rate to start traffic at, test
+ interval etc. See `Traffic Profile File`_
* Traffic Generator Config file. Usually called ``gen_<test>-<ports>.cfg``.
@@ -176,7 +181,7 @@ A NSB Prox test is composed of the following components :-
under test.
Example traffic generator config file ``gen_l2fwd-4.cfg``
- See nsb-traffic-generator-label_
+ See `Traffic Generator Config file`_
* SUT Config file. Usually called ``handle_<test>-<ports>.cfg``.
@@ -190,7 +195,7 @@ A NSB Prox test is composed of the following components :-
the ports to the Traffic Verifier tasks of the Traffic Generator.
Example traffic generator config file ``handle_l2fwd-4.cfg``
- See nsb-sut-generator-label_
+ See `SUT Config File`_
* NSB PROX Baremetal Configuration file. Usually called
``prox-baremetal-<ports>.yaml``
@@ -199,12 +204,12 @@ A NSB Prox test is composed of the following components :-
This is required for baremetal only. This describes hardware, NICs,
IP addresses, Network drivers, usernames and passwords.
- See baremetal-config-label_
+ See `Baremetal Configuration File`_
* Grafana Dashboard. Usually called
``Prox_<context>_<test>-<port>-<DateAndTime>.json`` where
- * <context> Is either ``BM`` or ``heat``
+ * <context> Is ``BM``, ``heat``, ``ovs_dpdk`` or ``sriov``
* <test> Is a one or 2 word description of the test.
* <port> is the number of ports used expressed as ``2Port`` or ``4Port``
* <DateAndTime> is the Date and Time expressed as a string.
@@ -214,15 +219,15 @@ A NSB Prox test is composed of the following components :-
Other files may be required. These are test specific files and will be
covered later.
-.. _nsb-test-description-label:
-**Test Description File**
+Test Description File
+---------------------
-Here we will discuss the test description for both
-baremetal and openstack.
+Here we will discuss the test description for
+baremetal, openstack and standalone.
-*Test Description File for Baremetal*
--------------------------------------
+Test Description File for Baremetal
+-----------------------------------
This section will introduce the meaning of the Test case description
file. We will use ``tc_prox_baremetal_l2fwd-2.yaml`` as an example to
@@ -235,7 +240,8 @@ show you how to understand the test description file.
Now let's examine the components of the file in detail
1. ``traffic_profile`` - This specifies the traffic profile for the
- test. In this case ``prox_binsearch.yaml`` is used. See nsb-traffic-profile-label_
+ test. In this case ``prox_binsearch.yaml`` is used. See
+ `Traffic Profile File`_
2. ``topology`` - This is either ``prox-tg-topology-1.yaml`` or
``prox-tg-topology-2.yaml`` or ``prox-tg-topology-4.yaml``
@@ -247,10 +253,13 @@ Now let's examine the components of the file in detail
4. ``interface_speed_gbps`` - This is an optional parameter. If not present
the system defaults to 10Gbps. This defines the speed of the interfaces.
-5. ``prox_path`` - Location of the Prox executable on the traffic
+5. ``collectd`` - (Optional) This specifies that we want to collect NFVI
+ statistics such as CPU utilization.
+
+6. ``prox_path`` - Location of the Prox executable on the traffic
generator (Either baremetal or Openstack Virtual Machine)
-6. ``prox_config`` - This is the ``SUT Config File``.
+7. ``prox_config`` - This is the ``SUT Config File``.
In this case it is ``handle_l2fwd-2.cfg``
A number of additional parameters can be added. This example
@@ -259,6 +268,14 @@ Now let's examine the components of the file in detail
options:
interface_speed_gbps: 10
+ traffic_config:
+ tolerated_loss: 0.01
+ test_precision: 0.01
+ packet_sizes: [64]
+ duration: 30
+ lower_bound: 0.0
+ upper_bound: 100.0
+
vnf__0:
prox_path: /opt/nsb_bin/prox
prox_config: ``configs/handle_vpe-4.cfg``
@@ -272,55 +289,60 @@ Now let's examine the components of the file in detail
``configs/vpe_rules.lua`` : ````
prox_generate_parameter: True
- ``interface_speed_gbps`` - this specifies the speed of the interface
- in Gigabits Per Second. This is used to calculate pps(packets per second).
- If the interfaces are of different speeds, then this specifies the speed
- of the slowest interface. This parameter is optional. If omitted the
- interface speed defaults to 10Gbps.
+ ``interface_speed_gbps`` - this specifies the speed of the interface
+ in Gigabits Per Second. This is used to calculate pps (packets per second).
+ If the interfaces are of different speeds, then this specifies the speed
+ of the slowest interface. This parameter is optional. If omitted the
+ interface speed defaults to 10Gbps.
- ``prox_files`` - this specified that a number of addition files
- need to be provided for the test to run correctly. This files
- could provide routing information,hashing information or a
- hashing algorithm and ip/mac information.
+ ``traffic_config`` - This allows the values here to override the values in
+ the traffic_profile file, e.g. "prox_binsearch.yaml". Values provided
+ here override values provided in the "traffic_profile" section of the
+ traffic_profile file. Some, all or none of the values can be provided here.
- ``prox_generate_parameter`` - this specifies that the NSB application
- is required to provide information to the nsb Prox in the form
- of a file called ``parameters.lua``, which contains information
- retrieved from either the hardware or the openstack configuration.
+ The values describe the packet size, tolerated loss, initial line rate
+ to start traffic at, test interval etc. See `Traffic Profile File`_
-7. ``prox_args`` - this specifies the command line arguments to start
- prox. See `prox command line`_.
+ ``prox_files`` - this specifies that a number of additional files
+ need to be provided for the test to run correctly. These files
+ could provide routing information, hashing information or a
+ hashing algorithm and IP/MAC information.
+
+ ``prox_generate_parameter`` - this specifies that the NSB application
+ is required to provide information to NSB Prox in the form
+ of a file called ``parameters.lua``, which contains information
+ retrieved from either the hardware or the openstack configuration.
-8. ``prox_config`` - This specifies the Traffic Generator config file.
+8. ``prox_args`` - this specifies the command line arguments to start
+ prox. See `prox command line`_.
-9. ``runner`` - This is set to ``ProxDuration`` - This specifies that the
- test runs for a set duration. Other runner types are available
- but it is recommend to use ``ProxDuration``
+9. ``prox_config`` - This specifies the Traffic Generator config file.
- The following parrameters are supported
+10. ``runner`` - This is set to ``ProxDuration``. This specifies that the
+ test runs for a set duration. Other runner types are available
+ but it is recommended to use ``ProxDuration``. The following parameters
+ are supported (a sketch of a runner section follows this list)
- ``interval`` - (optional) - This specifies the sampling interval.
- Default is 1 sec
+ ``interval`` - (optional) - This specifies the sampling interval.
+ Default is 1 sec
- ``sampled`` - (optional) - This specifies if sampling information is
- required. Default ``no``
+ ``sampled`` - (optional) - This specifies if sampling information is
+ required. Default ``no``
- ``duration`` - This is the length of the test in seconds. Default
- is 60 seconds.
+ ``duration`` - This is the length of the test in seconds. Default
+ is 60 seconds.
- ``confirmation`` - This specifies the number of confirmation retests to
- be made before deciding to increase or decrease line speed. Default 0.
+ ``confirmation`` - This specifies the number of confirmation retests to
+ be made before deciding to increase or decrease line speed. Default 0.
-10. ``context`` - This is ``context`` for a 2 port Baremetal configuration.
+11. ``context`` - This is ``context`` for a 2 port Baremetal configuration.
If a 4 port configuration was required then file
``prox-baremetal-4.yaml`` would be used. This is the NSB Prox
baremetal configuration file.
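+
+A minimal sketch of a ``ProxDuration`` runner section, using the parameters
+described in item 10 above (the values shown are purely illustrative)::
+
+  runner:
+    type: ProxDuration
+    interval: 1
+    sampled: "yes"
+    duration: 600
+    confirmation: 1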
-.. _nsb-traffic-profile-label:
-
-*Traffic Profile file*
-----------------------
+Traffic Profile File
+--------------------
This describes the details of the traffic flow. In this case
``prox_binsearch.yaml`` is used.
@@ -330,11 +352,12 @@ This describes the details of the traffic flow. In this case
:alt: NSB PROX Traffic Profile
-1. ``name`` - The name of the traffic profile. This name should match the name specified in the
- ``traffic_profile`` field in the Test Description File.
+1. ``name`` - The name of the traffic profile. This name should match the
+ name specified in the ``traffic_profile`` field in the Test
+ Description File.
-2. ``traffic_type`` - This specifies the type of traffic pattern generated, This name matches
- class name of the traffic generator See::
+2. ``traffic_type`` - This specifies the type of traffic pattern generated.
+ This name matches the class name of the traffic generator. See::
network_services/traffic_profile/prox_binsearch.py class ProxBinSearchProfile(ProxProfile)
@@ -392,19 +415,20 @@ See this prox_vpe.yaml as example::
lower_bound: 0.0
upper_bound: 100.0
-*Test Description File for Openstack*
--------------------------------------
+Test Description File for Openstack
+-----------------------------------
We will use ``tc_prox_heat_context_l2fwd-2.yaml`` as an example to show
you how to understand the test description file.
-.. image:: images/PROX_Test_HEAT_Script1.png
- :width: 800px
- :alt: NSB PROX Test Description File - Part 1
+ .. image:: images/PROX_Test_HEAT_Script1.png
+ :width: 800px
+ :alt: NSB PROX Test Description File - Part 1
-.. image:: images/PROX_Test_HEAT_Script2.png
- :width: 800px
- :alt: NSB PROX Test Description File - Part 2
+
+ .. image:: images/PROX_Test_HEAT_Script2.png
+ :width: 800px
+ :alt: NSB PROX Test Description File - Part 2
Now let's examine the components of the file in detail
@@ -460,10 +484,64 @@ F. ``networks`` - is composed of a management network labeled ``mgmt``
port_security_enabled: False
enable_dhcp: 'false'
-.. _nsb-traffic-generator-label:
+Test Description File for Standalone
+------------------------------------
+
+We will use ``tc_prox_ovs-dpdk_l2fwd-2.yaml`` as an example to show
+you how to understand the test description file.
+
+ .. image:: images/PROX_Test_ovs_dpdk_Script_1.png
+ :width: 800px
+ :alt: NSB PROX Test Standalone Description File - Part 1
-*Traffic Generator Config file*
--------------------------------
+ .. image:: images/PROX_Test_ovs_dpdk_Script_2.png
+ :width: 800px
+ :alt: NSB PROX Test Standalone Description File - Part 2
+
+Now let's examine the components of the file in detail
+
+Sections 1 to 9 are exactly the same as in Baremetal and in Heat. Section
+``10`` is replaced with sections A to K. Section 10 was for a baremetal
+configuration file. This has no place in a standalone configuration. A
+condensed sketch of some of these sections follows the list below.
+
+A. ``file`` - Pod file for Baremetal Traffic Generator configuration:
+ IP Address, User/Password & Interfaces
+
+B. ``type`` - This defines the type of standalone configuration.
+ Possible values are ``StandaloneOvsDpdk`` or ``StandaloneSriov``
+
+C. ``file`` - Pod file for Standalone host configuration:
+ IP Address, User/Password & Interfaces
+
+D. ``vm_deploy`` - Deploy a new VM or use an existing VM
+
+E. ``ovs_properties`` - OVS Version, DPDK Version and configuration
+ to use.
+
+F. ``flavor`` - NSB image generated when installing NSB using ansible-playbook::
+
+ ram - Configurable RAM for SUT VM
+ extra_specs
+ hw:cpu_sockets - Configurable number of Sockets for SUT VM
+ hw:cpu_cores - Configurable number of Cores for SUT VM
+ hw:cpu_threads - Configurable number of Threads for SUT VM
+
+G. ``mgmt`` - Management port of the SUT VM. Preconfiguration is needed on
+ the TG & SUT host machines.
+
+
+H. ``xe0`` - Uplink network port
+
+I. ``xe1`` - Downlink network port
+
+J. ``uplink_0`` - Uplink Phy port of the NIC on the host. This will be used to create
+ the Virtual Functions.
+
+K. ``downlink_0`` - Downlink Phy port of the NIC on the host. This will be used to
+ create the Virtual Functions.
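+
+A condensed sketch of how some of these sections (B to F, J and K) can look in
+the context part of the test description file is shown below. The exact layout
+should be checked against the sample ``tc_prox_ovs-dpdk_l2fwd-2.yaml``; every
+value here is a placeholder::
+
+  contexts:
+    - type: StandaloneOvsDpdk            # B. or StandaloneSriov
+      file: /etc/yardstick/nodes/standalone/host_ovs.yaml   # C.
+      vm_deploy: True                    # D.
+      ovs_properties:                    # E.
+        version:
+          ovs: 2.8.0
+          dpdk: 17.05.2
+      flavor:                            # F.
+        ram: 16384
+        extra_specs:
+          hw:cpu_sockets: 1
+          hw:cpu_cores: 10
+          hw:cpu_threads: 2
+      networks:
+        uplink_0:                        # J.
+          phy_port: "0000:05:00.0"
+        downlink_0:                      # K.
+          phy_port: "0000:05:00.1"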
+
+Traffic Generator Config file
+-----------------------------
This section will describe the traffic generator config file.
This is the same for both baremetal and heat. See this example
@@ -704,23 +782,29 @@ Now let's examine the components of the file in detail
physical core improves performance, however sometimes it
is optimal to move task to a separate core. This is
best decided by checking performance.
- c. ``mode=lat`` - Specifies the action carried out by this task on this core. Supported modes are: acl,
- classify, drop, gredecap, greencap, ipv6_decap, ipv6_encap, l2fwd, lbnetwork, lbpos, lbqinq, nop,
- police, qinqdecapv4, qinqencapv4, qos, routing, impair, lb5tuple, mirror, unmpls, tagmpls,
- nat, decapnsh, encapnsh, gen, genl4 and lat. This task(0) per core(3) receives packets on port.
- d. ``rx port=p0`` - The port to receive packets on ``Port 0``. Core 4 will receive packets on ``Port 1``.
- e. ``lat pos=42`` - Describes where to put a 4-byte timestamp in the packet. Note that the packet length should
- be longer than ``lat pos`` + 4 bytes to avoid truncation of the timestamp. It defines where the timestamp is
- to be read from. Note that the SUT workload might cause the position of the timestamp to change
- (i.e. due to encapsulation).
-
-.. _nsb-sut-generator-label:
-
-*SUT Config file*
--------------------------------
+ c. ``mode=lat`` - Specifies the action carried out by this task on this
+ core.
+ Supported modes are: ``acl``, ``classify``, ``drop``, ``gredecap``,
+ ``greencap``, ``ipv6_decap``, ``ipv6_encap``, ``l2fwd``, ``lbnetwork``,
+ ``lbpos``, ``lbqinq``, ``nop``, ``police``, ``qinqdecapv4``,
+ ``qinqencapv4``, ``qos``, ``routing``, ``impair``, ``lb5tuple``,
+ ``mirror``, ``unmpls``, ``tagmpls``, ``nat``, ``decapnsh``, ``encapnsh``,
+ ``gen``, ``genl4`` and ``lat``. This task (0) on core (3) receives packets
+ on a port. A combined sketch of items c to e follows this list.
+ d. ``rx port=p0`` - The port to receive packets on ``Port 0``. Core 4 will
+ receive packets on ``Port 1``.
+ e. ``lat pos=42`` - Describes where to put a 4-byte timestamp in the packet.
+ Note that the packet length should be longer than ``lat pos`` + 4 bytes
+ to avoid truncation of the timestamp. It defines where the timestamp is
+ to be read from. Note that the SUT workload might cause the position of
+ the timestamp to change (i.e. due to encapsulation).
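+
+As a rough sketch, the latency-measuring task described in items c to e above
+might appear in the generator config file as follows (core, task and port
+numbers are illustrative)::
+
+  [core 3]
+  task=0
+  mode=lat
+  rx port=p0
+  lat pos=42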
+
+SUT Config File
+---------------
This section will describe the SUT (VNF) config file. This is the same for both
-baremetal and heat. See this example of ``handle_l2fwd_multiflow-2.cfg`` to explain the options.
+baremetal and heat. See this example of ``handle_l2fwd_multiflow-2.cfg`` to
+explain the options.
.. image:: images/PROX_Handle_2port_cfg.png
:width: 1400px
@@ -730,13 +814,15 @@ See `prox options`_ for details
Now let's examine the components of the file in detail
-1. ``[eal options]`` - same as the Generator config file. This specified the EAL (Environmental Abstraction Layer)
- options. These are default values and are not changed.
- See `dpdk wiki page`_.
+1. ``[eal options]`` - same as the Generator config file. This specifies the
+ EAL (Environment Abstraction Layer) options. These are default values and
+ are not changed. See `dpdk wiki page`_.
-2. ``[port 0]`` - This section describes the DPDK Port. The number following the keyword ``port`` usually refers to the DPDK Port Id. usually starting from ``0``.
- Because you can have multiple ports this entry usually repeated. Eg. For a 2 port setup ``[port0]`` and ``[port 1]`` and for a 4 port setup ``[port 0]``, ``[port 1]``,
- ``[port 2]`` and ``[port 3]``::
+2. ``[port 0]`` - This section describes the DPDK Port. The number following
+ the keyword ``port`` usually refers to the DPDK Port Id, usually starting
+ from ``0``. Because you can have multiple ports, this entry is usually
+ repeated. E.g. for a 2 port setup ``[port 0]`` and ``[port 1]``, and for a 4
+ port setup ``[port 0]``, ``[port 1]``, ``[port 2]`` and ``[port 3]``::
[port 0]
name=if0
@@ -745,10 +831,14 @@ Now let's examine the components of the file in detail
tx desc=2048
promiscuous=yes
- a. In this example ``name =if0`` assigned the name ``if0`` to the port. Any name can be assigned to a port.
- b. ``mac=hardware`` sets the MAC address assigned by the hardware to data from this port.
- c. ``rx desc=2048`` sets the number of available descriptors to allocate for receive packets. This can be changed and can effect performance.
- d. ``tx desc=2048`` sets the number of available descriptors to allocate for transmit packets. This can be changed and can effect performance.
+ a. In this example ``name=if0`` assigns the name ``if0`` to the port. Any
+ name can be assigned to a port.
+ b. ``mac=hardware`` sets the MAC address assigned by the hardware to data
+ from this port.
+ c. ``rx desc=2048`` sets the number of available descriptors to allocate
+ for receive packets. This can be changed and can affect performance.
+ d. ``tx desc=2048`` sets the number of available descriptors to allocate
+ for transmit packets. This can be changed and can affect performance.
e. ``promiscuous=yes`` this enables promiscuous mode for this port.
3. ``[defaults]`` - Here default operations and settings can be overwritten::
@@ -757,35 +847,46 @@ Now let's examine the components of the file in detail
mempool size=8K
memcache size=512
- a. In this example ``mempool size=8K`` the number of mbufs per task is altered. Altering this value could effect performance. See `prox options`_ for details.
- b. ``memcache size=512`` - number of mbufs cached per core, default is 256 this is the cache_size. Altering this value could effect performance.
+ a. In this example ``mempool size=8K`` the number of mbufs per task is
+ altered. Altering this value could affect performance. See
+ `prox options`_ for details.
+ b. ``memcache size=512`` - number of mbufs cached per core, default is 256;
+ this is the cache_size. Altering this value could affect performance.
-4. ``[global]`` - Here application wide setting are supported. Things like application name, start time, duration and memory configurations can be set here.
+4. ``[global]`` - Here application-wide settings are supported. Things like
+ application name, start time, duration and memory configurations can be set
+ here.
In this example.::
[global]
start time=5
name=Basic Gen
- a. ``start time=5`` Time is seconds after which average stats will be started.
+ a. ``start time=5`` Time in seconds after which average stats will be
+ started.
b. ``name=Handle L2FWD Multiflow (2x)`` Name of the configuration.
-5. ``[core 0]`` - This core is designated the master core. Every Prox application must have a master core. The master mode must be assigned to
+5. ``[core 0]`` - This core is designated the master core. Every Prox
+ application must have a master core. The master mode must be assigned to
exactly one task, running alone on one core.::
[core 0]
mode=master
-6. ``[core 1]`` - This describes the activity on core 1. Cores can be configured by means of a set of [core #] sections, where # represents either:
+6. ``[core 1]`` - This describes the activity on core 1. Cores can be
+ configured by means of a set of [core #] sections, where # represents
+ either:
- a. an absolute core number: e.g. on a 10-core, dual socket system with hyper-threading,
- cores are numbered from 0 to 39.
+ a. an absolute core number: e.g. on a 10-core, dual socket system with
+ hyper-threading, cores are numbered from 0 to 39.
- b. PROX allows a core to be identified by a core number, the letter 's', and a socket number.
- However NSB PROX is hardware agnostic (physical and virtual configurations are the same) it
- is advisable no to use physical core numbering.
+ b. PROX allows a core to be identified by a core number, the letter 's',
+ and a socket number. However, since NSB PROX is hardware agnostic (physical
+ and virtual configurations are the same), it is advisable not to use
+ physical core numbering.
- Each core can be assigned with a set of tasks, each running one of the implemented packet processing modes.::
+ Each core can be assigned a set of tasks, each running one of the
+ implemented packet processing modes::
[core 1]
name=none
@@ -796,20 +897,33 @@ Now let's examine the components of the file in detail
tx port=if1
a. ``name=none`` - No name assigned to the core.
- b. ``task=0`` - Each core can run a set of tasks. Starting with ``0``. Task 1 can be defined later in this core or
- can be defined in another ``[core 1]`` section with ``task=1`` later in configuration file. Sometimes running
- multiple task related to the same packet on the same physical core improves performance, however sometimes it
- is optimal to move task to a separate core. This is best decided by checking performance.
- c. ``mode=l2fwd`` - Specifies the action carried out by this task on this core. Supported modes are: acl,
- classify, drop, gredecap, greencap, ipv6_decap, ipv6_encap, l2fwd, lbnetwork, lbpos, lbqinq, nop,
- police, qinqdecapv4, qinqencapv4, qos, routing, impair, lb5tuple, mirror, unmpls, tagmpls,
- nat, decapnsh, encapnsh, gen, genl4 and lat. This code does ``l2fwd`` .. ie it does the L2FWD.
-
- d. ``dst mac=@@tester_mac1`` - The destination mac address of the packet will be set to the MAC address of ``Port 1`` of destination device. (The Traffic Generator/Verifier)
- e. ``rx port=if0`` - This specifies that the packets are received from ``Port 0`` called if0
- f. ``tx port=if1`` - This specifies that the packets are transmitted to ``Port 1`` called if1
-
- If this example we receive a packet on core on a port, carry out operation on the packet on the core and transmit it on on another port still using the same task on the same core.
+ b. ``task=0`` - Each core can run a set of tasks, starting with ``0``.
+ Task 1 can be defined later in this core or can be defined in another
+ ``[core 1]`` section with ``task=1`` later in the configuration file.
+ Sometimes running multiple tasks related to the same packet on the same
+ physical core improves performance, however sometimes it is optimal to
+ move a task to a separate core. This is best decided by checking
+ performance.
+ c. ``mode=l2fwd`` - Specifies the action carried out by this task on this
+ core. Supported modes are: ``acl``, ``classify``, ``drop``,
+ ``gredecap``, ``greencap``, ``ipv6_decap``, ``ipv6_encap``, ``l2fwd``,
+ ``lbnetwork``, ``lbpos``, ``lbqinq``, ``nop``, ``police``,
+ ``qinqdecapv4``, ``qinqencapv4``, ``qos``, ``routing``, ``impair``,
+ ``lb5tuple``, ``mirror``, ``unmpls``, ``tagmpls``, ``nat``,
+ ``decapnsh``, ``encapnsh``, ``gen``, ``genl4`` and ``lat``. This task
+ does ``l2fwd``, i.e. it does the L2FWD.
+
+ d. ``dst mac=@@tester_mac1`` - The destination MAC address of the packet
+ will be set to the MAC address of ``Port 1`` of the destination device
+ (the Traffic Generator/Verifier).
+ e. ``rx port=if0`` - This specifies that the packets are received from
+ ``Port 0``, called ``if0``.
+ f. ``tx port=if1`` - This specifies that the packets are transmitted to
+ ``Port 1``, called ``if1``.
+
+ In this example we receive a packet on a port, carry out an operation
+ on the packet on the core and transmit it on another port, still using
+ the same task on the same core.
On some implementations you may wish to use multiple tasks, like this::
@@ -829,59 +943,76 @@ Now let's examine the components of the file in detail
tx port=if0
drop=no
- In this example you can see Core 1/Task 0 called ``rx_task`` receives the packet from if0 and perform the l2fwd. However instead of sending the packet to a
- port it sends it to a core see ``tx cores=1t1``. In this case it sends it to Core 1/Task 1.
+ In this example you can see Core 1/Task 0 called ``rx_task`` receives the
+ packet from if0 and performs the l2fwd. However instead of sending the
+ packet to a port it sends it to a core, see ``tx cores=1t1``. In this case
+ it sends it to Core 1/Task 1.
- Core 1/Task 1 called ``l2fwd_if0``, receives the packet, not from a port but from the ring. See ``rx ring=yes``. It does not perform any operation on the packet See ``mode=none``
- and sends the packets to ``if0`` see ``tx port=if0``.
+ Core 1/Task 1, called ``l2fwd_if0``, receives the packet, not from a port
+ but from the ring (see ``rx ring=yes``). It does not perform any operation
+ on the packet (see ``mode=none``) and sends the packets to ``if0`` (see
+ ``tx port=if0``).
- It is also possible to implement more complex operations be chaining multiple operations in sequence and using rings to pass packets from one core to another.
+ It is also possible to implement more complex operations by chaining
+ multiple operations in sequence and using rings to pass packets from one
+ core to another.
- In thus example we show a Broadband Network Gateway (BNG) with Quality of Service (QoS). Communication from task to task is via rings.
+ In this example, we show a Broadband Network Gateway (BNG) with Quality of
+ Service (QoS). Communication from task to task is via rings.
.. image:: images/PROX_BNG_QOS.png
:width: 1000px
:alt: NSB PROX Config File for BNG_QOS
-*Baremetal Configuration file*
-------------------------------
-
-.. _baremetal-config-label:
+Baremetal Configuration File
+----------------------------
-This is required for baremetal testing. It describes the IP address of the various ports, the Network devices drivers and MAC addresses and the network
+This is required for baremetal testing. It describes the IP address of the
+various ports, the network device drivers and MAC addresses, and the network
configuration.
-In this example we will describe a 2 port configuration. This file is the same for all 2 port NSB Prox tests on the same platforms/configuration.
+In this example we will describe a 2 port configuration. This file is the same
+for all 2 port NSB Prox tests on the same platforms/configuration. A trimmed
+sketch of such a file follows the numbered list below.
.. image:: images/PROX_Baremetal_config.png
:width: 1000px
:alt: NSB PROX Yardstick Config
-Now lets describe the sections of the file.
-
- 1. ``TrafficGen`` - This section describes the Traffic Generator node of the test configuration. The name of the node ``trafficgen_1`` must match the node name
- in the ``Test Description File for Baremetal`` mentioned earlier. The password attribute of the test needs to be configured. All other parameters
- can remain as default settings.
- 2. ``interfaces`` - This defines the DPDK interfaces on the Traffic Generator.
- 3. ``xe0`` is DPDK Port 0. ``lspci`` and `` ./dpdk-devbind.py -s`` can be used to provide the interface information. ``netmask`` and ``local_ip`` should not be changed
- 4. ``xe1`` is DPDK Port 1. If more than 2 ports are required then ``xe1`` section needs to be repeated and modified accordingly.
- 5. ``vnf`` - This section describes the SUT of the test configuration. The name of the node ``vnf`` must match the node name in the
- ``Test Description File for Baremetal`` mentioned earlier. The password attribute of the test needs to be configured. All other parameters
- can remain as default settings
+Now let's describe the sections of the file.
+
+ 1. ``TrafficGen`` - This section describes the Traffic Generator node of the
+ test configuration. The name of the node ``trafficgen_1`` must match the
+ node name in the ``Test Description File for Baremetal`` mentioned
+ earlier. The password attribute of the test needs to be configured. All
+ other parameters can remain as default settings.
+ 2. ``interfaces`` - This defines the DPDK interfaces on the Traffic
+ Generator.
+ 3. ``xe0`` is DPDK Port 0. ``lspci`` and ``./dpdk-devbind.py -s`` can be used
+ to provide the interface information. ``netmask`` and ``local_ip`` should
+ not be changed
+ 4. ``xe1`` is DPDK Port 1. If more than 2 ports are required then ``xe1``
+ section needs to be repeated and modified accordingly.
+ 5. ``vnf`` - This section describes the SUT of the test configuration. The
+ name of the node ``vnf`` must match the node name in the
+ ``Test Description File for Baremetal`` mentioned earlier. The password
+ attribute of the test needs to be configured. All other parameters can
+ remain as default settings
6. ``interfaces`` - This defines the DPDK interfaces on the SUT
7. ``xe0`` - Same as 3 but for the ``SUT``.
8. ``xe1`` - Same as 4 but for the ``SUT`` also.
9. ``routing_table`` - All parameters should remain unchanged.
10. ``nd_route_tbl`` - All parameters should remain unchanged.
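+
+A heavily trimmed sketch of such a baremetal configuration file is shown
+below. All addresses, MACs and PCI IDs are placeholders, only ``xe0`` of the
+Traffic Generator is expanded, and the ``vnf`` node, ``routing_table`` and
+``nd_route_tbl`` sections follow the same pattern as the shipped
+``prox-baremetal-2.yaml``::
+
+  nodes:
+    - name: trafficgen_1
+      role: TrafficGen
+      ip: 10.10.10.10
+      user: root
+      password: <password>
+      interfaces:
+        xe0:
+          vpci: "0000:05:00.0"
+          driver: i40e
+          dpdk_port_num: 0
+          local_ip: "152.16.100.19"
+          netmask: "255.255.255.0"
+          local_mac: "00:00:00:00:00:01"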
-*Grafana Dashboard*
--------------------
+Grafana Dashboard
+-----------------
-The grafana dashboard visually displays the results of the tests. The steps required to produce a grafana dashboard are described here.
+The grafana dashboard visually displays the results of the tests. The steps
+required to produce a grafana dashboard are described here.
.. _yardstick-config-label:
- a. Configure ``yardstick`` to use influxDB to store test results. See file ``/etc/yardstick/yardstick.conf``.
+ a. Configure ``yardstick`` to use influxDB to store test results. See file
+ ``/etc/yardstick/yardstick.conf``. A sketch of the relevant entries follows
+ this list.
.. image:: images/PROX_Yardstick_config.png
:width: 1000px
@@ -890,10 +1021,12 @@ The grafana dashboard visually displays the results of the tests. The steps requ
1. Specify the dispatcher to use influxDB to store results.
2. "target = .. " - Specify location of influxDB to store results.
"db_name = yardstick" - name of database. Do not change
- "username = root" - username to use to store result. (Many tests are run as root)
+ "username = root" - username to use to store result. (Many tests are
+ run as root)
"password = ... " - Please set to root user password
- b. Deploy InfludDB & Grafana. See how to Deploy InfluxDB & Grafana. See `grafana deployment`_.
+ b. Deploy InfluxDB & Grafana. See how to deploy InfluxDB & Grafana in
+ `grafana deployment`_.
c. Generate the test data. Run the tests as follows .::
yardstick --debug task start tc_prox_<context>_<test>-ports.yaml
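+
+A minimal sketch of the ``yardstick.conf`` entries referred to in step a above
+(the influxDB address and password are placeholders)::
+
+  [DEFAULT]
+  dispatcher = influxdb
+
+  [dispatcher_influxdb]
+  target = http://<influxdb_ip>:8086
+  db_name = yardstick
+  username = root
+  password = <root_password>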
@@ -910,7 +1043,8 @@ How to run NSB Prox Test on an baremetal environment
In order to run the NSB PROX test.
- 1. Install NSB on Traffic Generator node and Prox in SUT. See `NSB Installation`_
+ 1. Install NSB on Traffic Generator node and Prox in SUT. See
+ `NSB Installation`_
2. To enter container::
@@ -922,8 +1056,8 @@ In order to run the NSB PROX test.
cd /home/opnfv/repos/yardstick/samples/vnf_samples/nsut/prox
- b. Install prox-baremetal-2.yam and prox-baremetal-4.yaml for that topology
- into this directory as per baremetal-config-label_
+ b. Install prox-baremetal-2.yaml and prox-baremetal-4.yaml for that
+ topology into this directory as per `Baremetal Configuration File`_
c. Install and configure ``yardstick.conf`` ::
@@ -968,12 +1102,11 @@ Frequently Asked Questions
Here is a list of frequently asked questions.
-*NSB Prox does not work on Baremetal, How do I resolve this?*
--------------------------------------------------------------
+NSB Prox does not work on Baremetal, How do I resolve this?
+-----------------------------------------------------------
-If PROX NSB does not work on baremetal, problem is either in network configuration or test file.
-
-*Solution*
+If PROX NSB does not work on baremetal, the problem is either in the network
+configuration or the test file.
1. Verify network configuration. Execute existing baremetal test.::
@@ -1011,13 +1144,11 @@ If PROX NSB does not work on baremetal, problem is either in network configurati
See ``Link detected`` if ``yes`` .... Cable is good. If ``no`` you have an issue with your cable/port.
-2. If existing baremetal works then issue is with your test. Check the traffic generator gen_<test>-<ports>.cfg to ensure
- it is producing a valid packet.
-
-*How do I debug NSB Prox on Baremetal?*
----------------------------------------
+2. If the existing baremetal test works then the issue is with your test.
+ Check the traffic generator gen_<test>-<ports>.cfg to ensure it is
+ producing a valid packet.
-*Solution*
+How do I debug NSB Prox on Baremetal?
+-------------------------------------
1. Execute the test as follows::
@@ -1033,7 +1164,8 @@ If PROX NSB does not work on baremetal, problem is either in network configurati
cd
/opt/nsb_bin/prox -f /tmp/handle_<test>-<ports>.cfg
-4. Now let's examine the Generator Output. In this case the output of gen_l2fwd-4.cfg.
+4. Now let's examine the Generator Output. In this case the output of
+ ``gen_l2fwd-4.cfg``.
.. image:: images/PROX_Gen_GUI.png
:width: 1000px
@@ -1048,10 +1180,12 @@ If PROX NSB does not work on baremetal, problem is either in network configurati
It appears what is transmitted is received.
.. Caution::
- The number of packets MAY not exactly match because the ports are read in sequence.
+ The number of packets MAY not exactly match because the ports are read in
+ sequence.
.. Caution::
- What is transmitted on PORT X may not always be received on same port. Please check the Test scenario.
+ What is transmitted on PORT X may not always be received on same port.
+ Please check the Test scenario.
5. Now let's examine the SUT Output
@@ -1080,26 +1214,24 @@ If PROX NSB does not work on baremetal, problem is either in network configurati
dump_tx 1 0 1
-*NSB Prox works on Baremetal but not in Openstack. How do I resolve this?*
---------------------------------------------------------------------------
+NSB Prox works on Baremetal but not in Openstack. How do I resolve this?
+------------------------------------------------------------------------
-NSB Prox on Baremetal is a lot more forgiving than NSB Prox on Openstack. A badly
-formed packed may still work with PROX on Baremetal. However on
+NSB Prox on Baremetal is a lot more forgiving than NSB Prox on Openstack. A
+badly formed packet may still work with PROX on Baremetal. However on
Openstack the packet must be correct and all fields of the header correct.
-Eg A packet with an invalid Protocol ID would still work in Baremetal
-but this packet would be rejected by openstack.
+E.g. A packet with an invalid Protocol ID would still work in Baremetal but
+this packet would be rejected by openstack.
-*Solution*
1. Check the validity of the packet.
2. Use a known good packet in your test
- 3. If using ``Random`` fields in the traffic generator, disable them and retry.
-
+ 3. If using ``Random`` fields in the traffic generator, disable them and
+ retry.
-*How do I debug NSB Prox on Openstack?*
----------------------------------------
-*Solution*
+How do I debug NSB Prox on Openstack?
+-------------------------------------
1. Execute the test as follows::
@@ -1111,7 +1243,8 @@ but this packet would be rejected by openstack.
3. Install openstack credentials.
- Depending on your openstack deployment, the location of these credentials may vary.
+ Depending on your openstack deployment, the location of these credentials
+ may vary.
On this platform I do this via::
scp root@10.237.222.55:/etc/kolla/admin-openrc.sh .
@@ -1127,8 +1260,8 @@ but this packet would be rejected by openstack.
b. Get the Floating IP of the Traffic Generator & SUT
- This generates a lot of information. Please not the floating IP of the VNF and
- the Traffic Generator.
+ This generates a lot of information. Please note the floating IP of the
+ VNF and the Traffic Generator.
.. image:: images/PROX_Openstack_stack_show_a.png
:width: 1000px
@@ -1169,10 +1302,8 @@ but this packet would be rejected by openstack.
Now continue as baremetal.
-*How do I resolve "Quota exceeded for resources"*
--------------------------------------------------
-
-*Solution*
+How do I resolve "Quota exceeded for resources"?
+------------------------------------------------
This usually occurs due to 2 reasons when executing an openstack test.
@@ -1206,16 +1337,15 @@ This usually occurs due to 2 reasons when executing an openstack test.
openstack quota set <resource>
-*Openstack Cli fails or hangs. How do I resolve this?*
-------------------------------------------------------
-
-*Solution*
+Openstack CLI fails or hangs. How do I resolve this?
+----------------------------------------------------
If it fails due to ::
Missing value auth-url required for auth plugin password
-Check your shell environment for Openstack variables. One of them should contain the authentication URL ::
+Check your shell environment for Openstack variables. One of them should
+contain the authentication URL ::
OS_AUTH_URL=``https://192.168.72.41:5000/v3``
@@ -1239,16 +1369,16 @@ Result ::
and visible.
-If the Openstack Cli appears to hang, then verify the proxys and no_proxy are set correctly.
-They should be similar to ::
+If the Openstack CLI appears to hang, then verify the proxies and ``no_proxy``
+are set correctly. They should be similar to ::
- FTP_PROXY="http://proxy.ir.intel.com:911/"
- HTTPS_PROXY="http://proxy.ir.intel.com:911/"
- HTTP_PROXY="http://proxy.ir.intel.com:911/"
+ FTP_PROXY="http://<your_proxy>:<port>/"
+ HTTPS_PROXY="http://<your_proxy>:<port>/"
+ HTTP_PROXY="http://<your_proxy>:<port>/"
NO_PROXY="localhost,127.0.0.1,10.237.222.55,10.237.223.80,10.237.222.134,.ir.intel.com"
- ftp_proxy="http://proxy.ir.intel.com:911/"
- http_proxy="http://proxy.ir.intel.com:911/"
- https_proxy="http://proxy.ir.intel.com:911/"
+ ftp_proxy="http://<your_proxy>:<port>/"
+ http_proxy="http://<your_proxy>:<port>/"
+ https_proxy="http://<your_proxy>:<port>/"
no_proxy="localhost,127.0.0.1,10.237.222.55,10.237.223.80,10.237.222.134,.ir.intel.com"
Where
@@ -1256,11 +1386,9 @@ Where
1) 10.237.222.55 = IP Address of deployment node
2) 10.237.223.80 = IP Address of Controller node
3) 10.237.222.134 = IP Address of Compute Node
- 4) ir.intel.com = local no proxy
-
-*How to Understand the Grafana output?*
----------------------------------------
+How to Understand the Grafana output?
+-------------------------------------
.. image:: images/PROX_Grafana_1.png
:width: 1000px
@@ -1278,50 +1406,75 @@ Where
:width: 1000px
:alt: NSB PROX Grafana_4
-A. Test Parameters - Test interval, Duartion, Tolerated Loss and Test Precision
+ .. image:: images/PROX_Grafana_5.png
+ :width: 1000px
+ :alt: NSB PROX Grafana_5
+
+ .. image:: images/PROX_Grafana_6.png
+ :width: 1000px
+ :alt: NSB PROX Grafana_6
-B. Overall No of packets send and received during test
+A. Test Parameters - Test interval, Duration, Tolerated Loss and Test Precision
-C. Generator Stats - packets sent, received and attempted by Generator
+B. No. of packets sent and received during test
-D. Packets Size
+C. Generator Stats - Average Throughput per step (the Step Duration is
+ specified by the "Duration" field in A above)
-E. No of packets received by SUT
+D. Packet size
-F. No of packets forwarded by SUT
+E. No. of packets sent by the generator per second per interface in millions
+ of packets per second.
-G. This is the number of packets sent by the generator per port, for each interval.
+F. No. of packets received by the generator per second per interface in millions
+ of packets per second.
-H. This is the number of packets received by the generator per port, for each interval.
+G. No. of packets received by the SUT from the generator in millions of packets
+ per second.
-I. This is the number of packets send and received by the generator and lost by the SUT
- that meet the success criteria
+H. No. of packets sent by the SUT to the generator in millions of packets
+ per second.
-J. This is the changes the Percentage of Line Rate used over a test, The MAX and the
- MIN should converge to within the interval specified as the ``test-precision``.
+I. No. of packets sent by the Generator to the SUT per step per interface
+ in millions of packets per second.
-K. This is the packets Size supported during test. If "N/A" appears in any field the result has not been decided.
+J. No. of packets received by the Generator from the SUT per step per interface
+ in millions of packets per second.
-L. This is the calculated throughput in MPPS(Million Packets Per second) for this line rate.
+K. No. of packets sent and received by the generator and lost by the SUT that
+ meet the success criteria
-M. This is the actual No, of packets sent by the generator in MPPS
+L. The change in the Percentage of Line Rate used over a test. The MAX and the
+ MIN should converge to within the interval specified as the
+ ``test-precision``.
-N. This is the actual No. of packets received by the generator in MPPS
+M. Packet size supported during test. If *N/A* appears in any field the
+ result has not been decided.
-O. This is the total No. of packets sent by SUT.
+N. The theoretical maximum no. of packets per second that can be sent for this
+ packet size.
-P. This is the total No. of packets received by the SUT
+O. No. of packets sent by the generator in MPPS
-Q. This is the total No. of packets dropped. (These packets were sent by the generator but not
- received back by the generator, these may be dropped by the SUT or the Generator)
+P. No. of packets received by the generator in MPPS
-R. This is the tolerated no of packets that can be dropped.
+Q. No. of packets sent by SUT.
-S. This is the test Throughput in Gbps
+R. No. of packets received by the SUT
-T. This is the Latencey per Port
+S. Total no. of dropped packets, i.e. packets sent but not received back by
+ the generator; these may be dropped by the SUT or the generator.
-U. This is the CPU Utilization
+T. The tolerated no. of dropped packets.
+U. Test throughput in Gbps
+V. Latency per Port
+ * Va - Port XE0
+ * Vb - Port XE1
+ * Vc - Port XE0
+ * Vd - Port XE0
+W. CPU Utilization
+ * Wa - CPU Utilization of the Generator
+ * Wb - CPU Utilization of the SUT