Diffstat (limited to 'docs/testing')
 docs/testing/developer/design/LICENSE | 2
 docs/testing/developer/design/factory_and_loader.png | Bin 0 -> 25586 bytes
 docs/testing/developer/design/traffic_controller.png | Bin 0 -> 57868 bytes
 docs/testing/developer/design/trafficgen_integration_guide.rst | 238
 docs/testing/developer/design/vsperf.png | Bin 0 -> 93029 bytes
 docs/testing/developer/design/vswitchperf_design.rst | 871
 docs/testing/developer/index.rst | 74
 docs/testing/developer/requirements/LICENSE | 2
 docs/testing/developer/requirements/ietf_draft/LICENSE | 12
 docs/testing/developer/requirements/ietf_draft/draft-ietf-bmwg-vswitch-opnfv-00.xml | 1016
 docs/testing/developer/requirements/ietf_draft/draft-ietf-bmwg-vswitch-opnfv-01.xml | 1027
 docs/testing/developer/requirements/ietf_draft/draft-vsperf-bmwg-vswitch-opnfv-00.xml | 964
 docs/testing/developer/requirements/ietf_draft/draft-vsperf-bmwg-vswitch-opnfv-01.xml | 1016
 docs/testing/developer/requirements/ietf_draft/draft-vsperf-bmwg-vswitch-opnfv-02.xml | 1016
 docs/testing/developer/requirements/vm2vm_alternative_benchmark.png | Bin 0 -> 104244 bytes
 docs/testing/developer/requirements/vm2vm_benchmark.png | Bin 0 -> 80797 bytes
 docs/testing/developer/requirements/vm2vm_hypervisor_benchmark.png | Bin 0 -> 122975 bytes
 docs/testing/developer/requirements/vm2vm_virtual_interface_benchmark.png | Bin 0 -> 99544 bytes
 docs/testing/developer/requirements/vswitchperf_ltd.rst | 1712
 docs/testing/developer/requirements/vswitchperf_ltp.rst | 1344
 docs/testing/developer/results/results.rst | 38
 docs/testing/developer/results/scenario.rst | 47
 docs/testing/user/configguide/LICENSE | 2
 docs/testing/user/configguide/TCLServerProperties.png | Bin 0 -> 11667 bytes
 docs/testing/user/configguide/index.rst | 51
 docs/testing/user/configguide/installation.rst | 310
 docs/testing/user/configguide/trafficgen.rst | 671
 docs/testing/user/configguide/upgrade.rst | 183
 docs/testing/user/userguide/index.rst | 10
 docs/testing/user/userguide/integration.rst | 504
 docs/testing/user/userguide/teststeps.rst | 667
 docs/testing/user/userguide/testusage.rst | 848
 docs/testing/user/userguide/yardstick.rst | 250
 33 files changed, 12875 insertions(+), 0 deletions(-)
diff --git a/docs/testing/developer/design/LICENSE b/docs/testing/developer/design/LICENSE
new file mode 100644
index 00000000..7bc572ce
--- /dev/null
+++ b/docs/testing/developer/design/LICENSE
@@ -0,0 +1,2 @@
+This work is licensed under a Creative Commons Attribution 4.0 International License.
+http://creativecommons.org/licenses/by/4.0
diff --git a/docs/testing/developer/design/factory_and_loader.png b/docs/testing/developer/design/factory_and_loader.png
new file mode 100644
index 00000000..290c0af6
--- /dev/null
+++ b/docs/testing/developer/design/factory_and_loader.png
Binary files differ
diff --git a/docs/testing/developer/design/traffic_controller.png b/docs/testing/developer/design/traffic_controller.png
new file mode 100644
index 00000000..598296ec
--- /dev/null
+++ b/docs/testing/developer/design/traffic_controller.png
Binary files differ
diff --git a/docs/testing/developer/design/trafficgen_integration_guide.rst b/docs/testing/developer/design/trafficgen_integration_guide.rst
new file mode 100644
index 00000000..382cedcb
--- /dev/null
+++ b/docs/testing/developer/design/trafficgen_integration_guide.rst
@@ -0,0 +1,238 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, Intel Corporation, AT&T and others.
+
+===================================
+Traffic Generator Integration Guide
+===================================
+
+Intended Audience
+=================
+
+This document is intended to aid those who want to integrate a new traffic
+generator into the vsperf code. It is expected that the reader has already
+read the generic part of :ref:`vsperf-design`.
+
+Let us create a sample traffic generator called **sample_tg**, step by step.
+
+Step 1 - create a directory
+===========================
+
+Implementations of traffic generators are located in the tools/pkt_gen/
+directory, where every implementation has its own dedicated sub-directory.
+A new directory must be created for each new traffic generator
+implementation.
+
+E.g.
+
+.. code-block:: console
+
+ $ mkdir tools/pkt_gen/sample_tg
+
+Step 2 - create a trafficgen module
+===================================
+
+Every traffic generator class must inherit from the generic **ITrafficGenerator**
+interface class. During initialization, VSPERF scans the content of the pkt_gen
+directory for all python modules that inherit from **ITrafficGenerator**. These
+modules are automatically added to the list of supported traffic generators.
+
+Example:
+
+Let us create a draft of the tools/pkt_gen/sample_tg/sample_tg.py module.
+
+.. code-block:: python
+
+ from tools.pkt_gen import trafficgen
+
+ class SampleTG(trafficgen.ITrafficGenerator):
+ """
+ A sample traffic generator implementation
+ """
+ pass
+
+VSPERF is immediately aware of the new class:
+
+.. code-block:: console
+
+ $ ./vsperf --list-trafficgen
+
+Output should look like:
+
+.. code-block:: console
+
+ Classes derived from: ITrafficGenerator
+ ======
+
+ * Ixia: A wrapper around the IXIA traffic generator.
+
+ * IxNet: A wrapper around IXIA IxNetwork applications.
+
+ * Dummy: A dummy traffic generator whose data is generated by the user.
+
+ * SampleTG: A sample traffic generator implementation
+
+ * TestCenter: Spirent TestCenter
+
+
+Step 3 - configuration
+======================
+
+All configuration values required for correct traffic generator operation are passed
+from VSPERF to the traffic generator in a dictionary. Default values shared among
+all traffic generators are defined in **conf/03_traffic.conf** within the **TRAFFIC**
+dictionary. Default values are loaded by the **ITrafficGenerator** interface class
+automatically, so there is no need to load them explicitly. Any traffic generator
+specific default values should be set within the class specific **__init__**
+function.
+
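+As an illustration, a minimal sketch of generator specific defaults set in
+**__init__** (the configuration key **TRAFFICGEN_SAMPLE_TG_HOST** is hypothetical
+and would have to be defined in a configuration file first):
+
+.. code-block:: python
+
+    import logging
+
+    from conf import settings
+    from tools.pkt_gen import trafficgen
+
+    class SampleTG(trafficgen.ITrafficGenerator):
+        def __init__(self):
+            super(SampleTG, self).__init__()
+            self._logger = logging.getLogger(__name__)
+            # hypothetical, generator specific configuration key; it would
+            # have to be defined in a configuration file first
+            self._host = settings.getValue('TRAFFICGEN_SAMPLE_TG_HOST')
+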
+VSPERF passes the test specific configuration within the **traffic** dictionary to
+every start and send function, so the implementation of these functions must
+ensure that the default values are updated with the testcase specific values.
+A proper merge of values is assured by calling the **merge_spec** function from
+the **conf** module.
+
+Example of **merge_spec** usage in **tools/pkt_gen/sample_tg/sample_tg.py** module:
+
+.. code-block:: python
+
+ from conf import merge_spec
+
+ def start_rfc2544_throughput(self, traffic=None, duration=30):
+ self._params = {}
+ self._params['traffic'] = self.traffic_defaults.copy()
+ if traffic:
+ self._params['traffic'] = merge_spec(
+ self._params['traffic'], traffic)
+
+
+Step 4 - generic functions
+==========================
+
+There are some generic functions which every traffic generator should provide.
+Although these functions are mostly optional, at least an empty implementation must
+be provided. This requirement ensures that developers are explicitly aware of
+these functions.
+
+The **connect** function is called by the traffic generator controller from its
+**__enter__** method. This function should ensure proper initialization of the
+connection between the DUT and the traffic generator. If no such implementation
+is needed, an empty implementation is required.
+
+The **disconnect** function should clean up any connection specific
+actions performed by the **connect** function.
+
+Example in **tools/pkt_gen/sample_tg/sample_tg.py** module:
+
+.. code-block:: python
+
+ def connect(self):
+ pass
+
+ def disconnect(self):
+ pass
+
+.. _step-5-supported-traffic-types:
+
+Step 5 - supported traffic types
+================================
+
+Currently VSPERF supports three different types of tests for traffic generators.
+They are identified in vsperf through the traffic type and include:
+
+  * RFC2544 throughput - Send fixed size packets at different rates, using the
+    traffic configuration, until the minimum rate at which no packet loss is
+    detected is found. Methods implementing it have the suffix
+    **_rfc2544_throughput**.
+
+  * RFC2544 back2back - Send fixed size packets at a fixed rate, using the traffic
+    configuration, for a specified time interval. Methods implementing it
+    have the suffix **_rfc2544_back2back**.
+
+  * continuous flow - Send fixed size packets at a given framerate, using the
+    traffic configuration, for a specified time interval. Methods implementing
+    it have the suffix **_cont_traffic**.
+
+In general, both synchronous and asynchronous interfaces must be implemented
+for each traffic type. Synchronous functions start with the prefix **send_**.
+Asynchronous functions use the prefixes **start_** and **wait_** for the throughput
+and back2back traffic types, and **start_** and **stop_** for the continuous traffic type.
+
+Example of synchronous interfaces:
+
+.. code-block:: python
+
+ def send_rfc2544_throughput(self, traffic=None, tests=1, duration=20,
+ lossrate=0.0):
+ def send_rfc2544_back2back(self, traffic=None, tests=1, duration=20,
+ lossrate=0.0):
+ def send_cont_traffic(self, traffic=None, duration=20):
+
+Example of asynchronous interfaces:
+
+.. code-block:: python
+
+ def start_rfc2544_throughput(self, traffic=None, tests=1, duration=20,
+ lossrate=0.0):
+ def wait_rfc2544_throughput(self):
+
+ def start_rfc2544_back2back(self, traffic=None, tests=1, duration=20,
+ lossrate=0.0):
+ def wait_rfc2544_back2back(self):
+
+ def start_cont_traffic(self, traffic=None, duration=20):
+ def stop_cont_traffic(self):
+
+Description of the parameters used by the **send**, **start**, **wait** and **stop**
+functions follows; an illustrative sketch of reading these parameters is shown
+after the list:
+
+  * param **traffic**: A dictionary with a detailed definition of the traffic
+    pattern. It contains the following parameters to be handled by the
+    traffic generator.
+
+    Note: The traffic dictionary also contains virtual switch related parameters,
+    which are not listed below.
+
+    Note: There are parameters specific to testing of tunnelling protocols,
+    which are discussed in detail in the :ref:`integration-tests` userguide.
+
+    * param **traffic_type**: One of the supported traffic types,
+      e.g. **rfc2544_throughput**, **rfc2544_continuous**
+      or **rfc2544_back2back**.
+    * param **frame_rate**: Defines the desired percentage of frame
+      rate used during continuous stream tests.
+    * param **bidir**: Specifies if generated traffic will be full-duplex
+      (true) or half-duplex (false).
+    * param **multistream**: Defines the number of flows simulated by the traffic
+      generator. Value 0 disables the MultiStream feature.
+    * param **stream_type**: Stream Type defines the ISO OSI network layer
+      used for simulation of multiple streams.
+      Supported values:
+
+      * **L2** - iteration of destination MAC address
+      * **L3** - iteration of destination IP address
+      * **L4** - iteration of destination port of selected transport protocol
+
+    * param **l2**: A dictionary with data link layer details, e.g. **srcmac**,
+      **dstmac** and **framesize**.
+    * param **l3**: A dictionary with network layer details, e.g. **srcip**,
+      **dstip** and **proto**.
+    * param **l4**: A dictionary with transport layer details, e.g. **srcport**
+      and **dstport**.
+    * param **vlan**: A dictionary with vlan specific parameters,
+      e.g. **priority**, **cfi**, **id** and the vlan on/off switch **enabled**.
+
+  * param **tests**: Number of times the test is executed.
+  * param **duration**: Duration of a continuous test or the per iteration duration
+    in case of RFC2544 throughput or back2back traffic types.
+  * param **lossrate**: Acceptable lossrate percentage.
+
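+To make the parameter handling concrete, the following is a minimal,
+illustrative sketch (not part of VSPERF itself) of how a **send_cont_traffic**
+implementation might read a few of the fields described above after merging
+the defaults:
+
+.. code-block:: python
+
+    from conf import merge_spec
+
+    def send_cont_traffic(self, traffic=None, duration=20):
+        self._params = {}
+        self._params['traffic'] = self.traffic_defaults.copy()
+        if traffic:
+            self._params['traffic'] = merge_spec(
+                self._params['traffic'], traffic)
+
+        params = self._params['traffic']
+        frame_rate = params['frame_rate']       # percentage of line rate
+        frame_size = params['l2']['framesize']  # set from TRAFFICGEN_PKT_SIZES
+        src_ip = params['l3']['srcip']
+        dst_port = params['l4']['dstport']
+        # configure the generator accordingly and run for `duration` seconds
+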
+Step 6 - passing back results
+=============================
+
+It is expected that the methods **send**, **wait** and **stop** will return
+the values measured by the traffic generator within a dictionary. The dictionary
+keys are defined in **ResultsConstants**, implemented in
+**core/results/results_constants.py**. Please check the sections for RFC2544
+Throughput & Continuous and for Back2Back. The same key names should
+be used by all traffic generator implementations.
+
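+As an illustration only (the key names below are examples and should be verified
+against **core/results/results_constants.py**), a result dictionary might be
+populated like this:
+
+.. code-block:: python
+
+    from collections import OrderedDict
+
+    from core.results.results_constants import ResultsConstants
+
+    def wait_rfc2544_throughput(self):
+        # values below are placeholders for data read from the generator
+        result = OrderedDict()
+        result[ResultsConstants.THROUGHPUT_RX_FPS] = 14880952
+        result[ResultsConstants.THROUGHPUT_RX_MBPS] = 10000
+        result[ResultsConstants.THROUGHPUT_RX_PERCENT] = 100.0
+        return result
+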
diff --git a/docs/testing/developer/design/vsperf.png b/docs/testing/developer/design/vsperf.png
new file mode 100644
index 00000000..4af2ac62
--- /dev/null
+++ b/docs/testing/developer/design/vsperf.png
Binary files differ
diff --git a/docs/testing/developer/design/vswitchperf_design.rst b/docs/testing/developer/design/vswitchperf_design.rst
new file mode 100644
index 00000000..aa7fb342
--- /dev/null
+++ b/docs/testing/developer/design/vswitchperf_design.rst
@@ -0,0 +1,871 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, Intel Corporation, AT&T and others.
+
+.. _vsperf-design:
+
+======================
+VSPERF Design Document
+======================
+
+Intended Audience
+=================
+
+This document is intended to aid those who want to modify the vsperf code or
+to extend it - for example to add support for new traffic generators,
+deployment scenarios and so on.
+
+Usage
+=====
+
+Example Connectivity to DUT
+---------------------------
+
+Establish connectivity to the VSPERF DUT Linux host. If this host is in an OPNFV
+lab, follow the steps provided by `Pharos <https://www.opnfv.org/community/projects/pharos>`_
+to `access the POD <https://wiki.opnfv.org/display/pharos/Pharos+Lab+Support>`_.
+
+The following steps establish the VSPERF environment.
+
+Example Command Lines
+---------------------
+
+List all the cli options:
+
+.. code-block:: console
+
+ $ ./vsperf -h
+
+Run all tests that have ``tput`` in their name - ``phy2phy_tput``, ``pvp_tput`` etc.:
+
+.. code-block:: console
+
+ $ ./vsperf --tests 'tput'
+
+As above, but override the default configuration with settings in '10_custom.conf'.
+This is useful because modifying the configuration directly in the configuration
+files in ``conf/NN_*.conf`` shows up as changes under git source control:
+
+.. code-block:: console
+
+ $ ./vsperf --conf-file=<path_to_custom_conf>/10_custom.conf --tests 'tput'
+
+Override specific test parameters. Useful for shortening the duration of tests
+for development purposes:
+
+.. code-block:: console
+
+ $ ./vsperf --test-params 'TRAFFICGEN_DURATION=10;TRAFFICGEN_RFC2544_TESTS=1;' \
+ 'TRAFFICGEN_PKT_SIZES=(64,)' pvp_tput
+
+Typical Test Sequence
+=====================
+
+This is a typical flow of control for a test.
+
+.. image:: vsperf.png
+
+.. _design-configuration:
+
+Configuration
+=============
+
+The conf package contains the configuration files (``*.conf``) for all system
+components. It also provides a ``settings`` object that exposes all of these
+settings.
+
+Settings are not passed from component to component. Rather they are available
+globally to all components once they import the conf package.
+
+.. code-block:: python
+
+ from conf import settings
+ ...
+ log_file = settings.getValue('LOG_FILE_DEFAULT')
+
+Settings files (``*.conf``) are valid python code, so settings can be set to complex
+types such as lists and dictionaries as well as scalar types:
+
+.. code-block:: python
+
+ first_packet_size = settings.getValue('PACKET_SIZE_LIST')[0]
+
+Configuration Procedure and Precedence
+--------------------------------------
+
+Configuration files follow a strict naming convention that allows them to be
+processed in a specific order. All the .conf files are named ``NN_name.conf``,
+where NN is a decimal number. The files are processed in order from 00_name.conf
+to 99_name.conf, so if the same setting is given in both a lower and a higher
+numbered conf file, the value from the higher numbered file is effective, as it
+is processed after the setting in the lower numbered file.
+
+The values in the file specified by ``--conf-file`` take precedence over all
+the other configuration files, and the file does not have to follow the naming
+convention.
+
+.. _paths-documentation:
+
+Configuration of PATHS dictionary
+---------------------------------
+
+VSPERF uses external tools such as Open vSwitch and QEMU for the execution of testcases.
+These tools may be downloaded and built automatically (see :ref:`vsperf-installation-script`)
+or installed manually by the user from binary packages. It is also possible to use a combination
+of both approaches, but it is essential to correctly set the paths to all required tools.
+These paths are stored within the PATHS dictionary, which is evaluated before the execution
+of each testcase in order to set up a testcase specific environment. Values selected for testcase
+execution are internally stored inside the TOOLS dictionary, which is used by VSPERF to execute
+external tools, load kernel modules, etc.
+
+The default configuration of the PATHS dictionary is spread among three different configuration
+files to follow the logical grouping of configuration options. A basic description of the PATHS
+dictionary is placed inside ``conf/00_common.conf``. The configuration specific to DPDK and vswitches
+is located in ``conf/02_vswitch.conf``. The last part, related to QEMU, is defined inside
+``conf/04_vnf.conf``. The default configuration values can be used if all required
+tools were downloaded and built automatically by vsperf itself. If some of the
+tools were installed manually from binary packages, then it will be necessary to modify
+the content of the PATHS dictionary accordingly.
+
+The dictionary has a specific section of configuration options for every tool type:
+
+  * ``PATHS['vswitch']`` - contains a separate dictionary for each of the vswitches supported by VSPERF
+
+ Example:
+
+ .. code-block:: python
+
+ PATHS['vswitch'] = {
+ 'OvsDpdkVhost': { ... },
+ 'OvsVanilla' : { ... },
+ ...
+ }
+
+ * ``PATHS['dpdk']`` - contains paths to the dpdk sources, kernel modules and tools (e.g. testpmd)
+
+ Example:
+
+ .. code-block:: python
+
+ PATHS['dpdk'] = {
+ 'type' : 'src',
+ 'src': {
+ 'path': os.path.join(ROOT_DIR, 'src/dpdk/dpdk/'),
+ 'modules' : ['uio', os.path.join(RTE_TARGET, 'kmod/igb_uio.ko')],
+ 'bind-tool': 'tools/dpdk*bind.py',
+ 'testpmd': os.path.join(RTE_TARGET, 'app', 'testpmd'),
+ },
+ ...
+ }
+
+ * ``PATHS['qemu']`` - contains paths to the qemu sources and executable file
+
+ Example:
+
+ .. code-block:: python
+
+ PATHS['qemu'] = {
+ 'type' : 'bin',
+ 'bin': {
+ 'qemu-system': 'qemu-system-x86_64'
+ },
+ ...
+ }
+
+Every section specific to a particular vswitch, dpdk or qemu may contain the following types
+of configuration options:
+
+ * option ``type`` - is a string, which defines the type of configured paths ('src' or 'bin')
+ to be selected for a given section:
+
+ * value ``src`` means, that VSPERF will use vswitch, DPDK or QEMU built from sources
+ e.g. by execution of ``systems/build_base_machine.sh`` script during VSPERF
+ installation
+
+ * value ``bin`` means, that VSPERF will use vswitch, DPDK or QEMU binaries installed
+ directly in the operating system, e.g. via OS specific packaging system
+
+ * option ``path`` - is a string with a valid system path; Its content is checked for
+ existence, prefixed with section name and stored into TOOLS for later use
+ e.g. ``TOOLS['dpdk_src']`` or ``TOOLS['vswitch_src']``
+
+ * option ``modules`` - is list of strings with names of kernel modules; Every module name
+ from given list is checked for a '.ko' suffix. In case that it matches and if it is not
+ an absolute path to the module, then module name is prefixed with value of ``path``
+ option defined for the same section
+
+ Example:
+
+ .. code-block:: python
+
+ """
+ snippet of PATHS definition from the configuration file:
+ """
+ PATHS['vswitch'] = {
+ 'OvsVanilla' = {
+ 'type' : 'src',
+ 'src': {
+ 'path': '/tmp/vsperf/src_vanilla/ovs/ovs/',
+ 'modules' : ['datapath/linux/openvswitch.ko'],
+ ...
+ },
+ ...
+ }
+ ...
+ }
+
+ """
+ Final content of TOOLS dictionary used during runtime:
+ """
+ TOOLS['vswitch_modules'] = ['/tmp/vsperf/src_vanilla/ovs/ovs/datapath/linux/openvswitch.ko']
+
+  * all other options are strings with names and paths to specific tools; if a given string
+    contains a relative path and the option ``path`` is defined for the given section, then the
+    string content will be prefixed with the content of ``path``. Otherwise the name of the tool
+    will be searched for within standard system directories. In case the filename contains OS
+    specific wildcards, they will be expanded to the real path. At the end of the processing,
+    every absolute path will be checked for its existence. If a temporary path (i.e. a path with
+    a ``_tmp`` suffix) does not exist, then a log message will be written and vsperf will continue.
+    If any other path does not exist, vsperf execution will be terminated with a runtime error.
+
+ Example:
+
+ .. code-block:: python
+
+ """
+ snippet of PATHS definition from the configuration file:
+ """
+ PATHS['vswitch'] = {
+ 'OvsDpdkVhost': {
+ 'type' : 'src',
+ 'src': {
+ 'path': '/tmp/vsperf/src_vanilla/ovs/ovs/',
+ 'ovs-vswitchd': 'vswitchd/ovs-vswitchd',
+ 'ovsdb-server': 'ovsdb/ovsdb-server',
+ ...
+ }
+ ...
+ }
+ ...
+ }
+
+ """
+ Final content of TOOLS dictionary used during runtime:
+ """
+ TOOLS['ovs-vswitchd'] = '/tmp/vsperf/src_vanilla/ovs/ovs/vswitchd/ovs-vswitchd'
+ TOOLS['ovsdb-server'] = '/tmp/vsperf/src_vanilla/ovs/ovs/ovsdb/ovsdb-server'
+
+Note: In case the ``bin`` type is set for DPDK, ``TOOLS['dpdk_src']`` will still be set to
+the value of ``PATHS['dpdk']['src']['path']``. The reason is that VSPERF uses the downloaded
+DPDK sources to copy DPDK and testpmd into the GUEST, where testpmd is built. If the
+DPDK sources are not available, then vsperf will continue with test execution,
+but testpmd can't be used as a guest loopback. This is not a problem if other guest
+loopback applications (e.g. buildin or l2fwd) are used.
+
+Note: In case of RHEL 7.3 OS usage, a binary package configuration is required
+for Vanilla OVS tests. With the installation of a supported rpm for OVS, there is
+a section in the ``conf/10_custom.conf`` file that can be used.
+
+.. _configuration-of-traffic-dictionary:
+
+Configuration of TRAFFIC dictionary
+-----------------------------------
+
+The TRAFFIC dictionary is used for the configuration of the traffic generator. Default
+values can be found in the configuration file ``conf/03_traffic.conf``. These default
+values can be modified by (the first option has the highest priority):
+
+  1. the ``Parameters`` section of the testcase definition
+  2. command line options specified by the ``--test-params`` argument
+  3. a custom configuration file
+
+Note that in case of options 1 and 2, it is possible to specify only the
+values which should be changed. In case of a custom configuration file, it is
+required to specify the whole ``TRAFFIC`` dictionary with all its values or to
+explicitly call the update() method of the ``TRAFFIC`` dictionary.
+
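+For example, a custom configuration file passed via ``--conf-file`` could update
+only selected items of the ``TRAFFIC`` dictionary (an illustrative sketch; see the
+item descriptions below):
+
+.. code-block:: python
+
+    # partial update of TRAFFIC from a custom configuration file;
+    # only the listed keys are changed, all other defaults are kept
+    TRAFFIC.update({
+        'traffic_type' : 'rfc2544_continuous',
+        'frame_rate' : 50,
+        'multistream' : 1000,
+    })
+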
+Detailed description of ``TRAFFIC`` dictionary items follows:
+
+.. code-block:: console
+
+ 'traffic_type' - One of the supported traffic types.
+ E.g. rfc2544_throughput, rfc2544_back2back
+ or rfc2544_continuous
+ Data type: str
+ Default value: "rfc2544_throughput".
+ 'bidir' - Specifies if generated traffic will be full-duplex (True)
+ or half-duplex (False)
+ Data type: str
+ Supported values: "True", "False"
+ Default value: "False".
+ 'frame_rate' - Defines desired percentage of frame rate used during
+ continuous stream tests.
+ Data type: int
+ Default value: 100.
+ 'multistream' - Defines number of flows simulated by traffic generator.
+ Value 0 disables multistream feature
+ Data type: int
+ Supported values: 0-65536 for 'L4' stream type
+ unlimited for 'L2' and 'L3' stream types
+ Default value: 0.
+ 'stream_type' - Stream type is an extension of the "multistream" feature.
+ If multistream is disabled, then stream type will be
+ ignored. Stream type defines ISO OSI network layer used
+ for simulation of multiple streams.
+ Data type: str
+ Supported values:
+ "L2" - iteration of destination MAC address
+ "L3" - iteration of destination IP address
+ "L4" - iteration of destination port
+ of selected transport protocol
+ Default value: "L4".
+ 'pre_installed_flows'
+ - Pre-installed flows is an extension of the "multistream"
+ feature. If enabled, it will implicitly insert a flow
+ for each stream. If multistream is disabled, then
+ pre-installed flows will be ignored.
+ Note: It is supported only for p2p deployment scenario.
+ Data type: str
+ Supported values:
+ "Yes" - flows will be inserted into OVS
+ "No" - flows won't be inserted into OVS
+ Default value: "No".
+ 'flow_type' - Defines flows complexity.
+ Data type: str
+ Supported values:
+ "port" - flow is defined by ingress ports
+ "IP" - flow is defined by ingress ports
+ and src and dst IP addresses
+ Default value: "port"
+ 'l2' - A dictionary with l2 network layer details. Supported
+ values are:
+ 'srcmac' - Specifies source MAC address filled by traffic generator.
+ NOTE: It can be modified by vsperf in some scenarios.
+ Data type: str
+ Default value: "00:00:00:00:00:00".
+ 'dstmac' - Specifies destination MAC address filled by traffic generator.
+ NOTE: It can be modified by vsperf in some scenarios.
+ Data type: str
+ Default value: "00:00:00:00:00:00".
+ 'framesize' - Specifies default frame size. This value should not be
+ changed directly. It will be overridden during testcase
+ execution by values specified by list TRAFFICGEN_PKT_SIZES.
+ Data type: int
+ Default value: 64
+ 'l3' - A dictionary with l3 network layer details. Supported
+ values are:
+        'srcip'     - Specifies source IP address filled by traffic generator.
+                      NOTE: It can be modified by vsperf in some scenarios.
+                      Data type: str
+                      Default value: "1.1.1.1".
+        'dstip'     - Specifies destination IP address filled by traffic generator.
+                      NOTE: It can be modified by vsperf in some scenarios.
+                      Data type: str
+                      Default value: "90.90.90.90".
+        'proto'     - Specifies the default protocol type.
+                      Please check the particular traffic generator implementation
+                      for supported protocol types.
+                      Data type: str
+                      Default value: "udp".
+ 'l4' - A dictionary with l4 network layer details. Supported
+ values are:
+ 'srcport' - Specifies source port of selected transport protocol.
+ NOTE: It can be modified by vsperf in some scenarios.
+ Data type: int
+ Default value: 3000
+ 'dstport' - Specifies destination port of selected transport protocol.
+ NOTE: It can be modified by vsperf in some scenarios.
+ Data type: int
+ Default value: 3001
+ 'vlan' - A dictionary with vlan encapsulation details. Supported
+ values are:
+ 'enabled' - Specifies if vlan encapsulation should be enabled or
+ disabled.
+ Data type: bool
+ Default value: False
+ 'id' - Specifies vlan id.
+ Data type: int (NOTE: must fit to 12 bits)
+ Default value: 0
+ 'priority' - Specifies a vlan priority (PCP header field).
+ Data type: int (NOTE: must fit to 3 bits)
+ Default value: 0
+ 'cfi' - Specifies if frames can or cannot be dropped during
+ congestion (DEI header field).
+ Data type: int (NOTE: must fit to 1 bit)
+ Default value: 0
+
+.. _configuration-of-guest-options:
+
+Configuration of GUEST options
+------------------------------
+
+VSPERF is able to set up scenarios involving a number of VMs in series or in parallel.
+All configuration options related to a particular VM instance are defined as
+lists and prefixed with the ``GUEST_`` label. It is essential that there are enough
+items in all ``GUEST_`` options to cover all VM instances involved in the test.
+In case there are not enough items, VSPERF will use the first item of
+the particular ``GUEST_`` option to expand the list to the required length.
+
+Example of option expansion for 4 VMs:
+
+ .. code-block:: python
+
+ """
+ Original values:
+ """
+ GUEST_SMP = ['2']
+ GUEST_MEMORY = ['2048', '4096']
+
+ """
+ Values after automatic expansion:
+ """
+ GUEST_SMP = ['2', '2', '2', '2']
+ GUEST_MEMORY = ['2048', '4096', '2048', '2048']
+
+
+The first item of an option can contain macros starting with ``#`` to generate VM
+specific values. These macros can be used only for options of ``list`` or ``str``
+types with the ``GUEST_`` prefix.
+
+Example of macros and their expansion for 2 VMs:
+
+ .. code-block:: python
+
+ """
+ Original values:
+ """
+ GUEST_SHARE_DIR = ['/tmp/qemu#VMINDEX_share']
+ GUEST_BRIDGE_IP = ['#IP(1.1.1.5)/16']
+
+ """
+ Values after automatic expansion:
+ """
+ GUEST_SHARE_DIR = ['/tmp/qemu0_share', '/tmp/qemu1_share']
+ GUEST_BRIDGE_IP = ['1.1.1.5/16', '1.1.1.6/16']
+
+Additional examples are available at ``04_vnf.conf``.
+
+Note: In case a macro is detected in the first item of the list, then
+all other items are ignored and the list content is created automatically.
+
+Multiple macros can be used inside one configuration option definition, but macros
+cannot be used inside other macros. The only exception is the macro ``#VMINDEX``, which
+is expanded first and thus can be used inside other macros.
+
+Following macros are supported:
+
+ * ``#VMINDEX`` - it is replaced by index of VM being executed; This macro
+ is expanded first, so it can be used inside other macros.
+
+ Example:
+
+ .. code-block:: python
+
+ GUEST_SHARE_DIR = ['/tmp/qemu#VMINDEX_share']
+
+ * ``#MAC(mac_address[, step])`` - it will iterate given ``mac_address``
+ with optional ``step``. In case that step is not defined, then it is set to 1.
+ It means, that first VM will use the value of ``mac_address``, second VM
+ value of ``mac_address`` increased by ``step``, etc.
+
+ Example:
+
+ .. code-block:: python
+
+ GUEST_NICS = [[{'mac' : '#MAC(00:00:00:00:00:01,2)'}]]
+
+ * ``#IP(ip_address[, step])`` - it will iterate given ``ip_address``
+ with optional ``step``. In case that step is not defined, then it is set to 1.
+ It means, that first VM will use the value of ``ip_address``, second VM
+ value of ``ip_address`` increased by ``step``, etc.
+
+ Example:
+
+ .. code-block:: python
+
+ GUEST_BRIDGE_IP = ['#IP(1.1.1.5)/16']
+
+ * ``#EVAL(expression)`` - it will evaluate given ``expression`` as python code;
+ Only simple expressions should be used. Call of the functions is not supported.
+
+ Example:
+
+ .. code-block:: python
+
+ GUEST_CORE_BINDING = [('#EVAL(6+2*#VMINDEX)', '#EVAL(7+2*#VMINDEX)')]
+
+Other Configuration
+-------------------
+
+``conf.settings`` also loads configuration from the command line and from the environment.
+
+.. _pxp-deployment:
+
+PXP Deployment
+==============
+
+Every testcase uses one of the supported deployment scenarios to set up the test environment.
+The controller responsible for a given scenario configures flows in the vswitch to route
+traffic among the physical interfaces connected to the traffic generator and the virtual
+machines. VSPERF supports several deployments, including the PXP deployment, which can
+set up various scenarios with multiple VMs.
+
+These scenarios are realized by the VswitchControllerPXP class, which can configure and
+execute a given number of VMs in serial or parallel configurations. Every VM can be
+configured with just one or an even number of interfaces. In case a VM has more than
+2 interfaces, traffic is properly routed among pairs of interfaces.
+
+Example of traffic routing for VM with 4 NICs in serial configuration:
+
+.. code-block:: console
+
+ +------------------------------------------+
+ | VM with 4 NICs |
+ | +---------------+ +---------------+ |
+ | | Application | | Application | |
+ | +---------------+ +---------------+ |
+ | ^ | ^ | |
+ | | v | v |
+ | +---------------+ +---------------+ |
+ | | logical ports | | logical ports | |
+ | | 0 1 | | 2 3 | |
+ +--+---------------+----+---------------+--+
+ ^ : ^ :
+ | | | |
+ : v : v
+ +-----------+---------------+----+---------------+----------+
+ | vSwitch | 0 1 | | 2 3 | |
+ | | logical ports | | logical ports | |
+ | previous +---------------+ +---------------+ next |
+ | VM or PHY ^ | ^ | VM or PHY|
+ | port -----+ +------------+ +---> port |
+ +-----------------------------------------------------------+
+
+It is also possible to define a different number of interfaces for each VM to better
+simulate real scenarios.
+
+Example of traffic routing for 2 VMs in serial configuration, where 1st VM has
+4 NICs and 2nd VM 2 NICs:
+
+.. code-block:: console
+
+ +------------------------------------------+ +---------------------+
+ | 1st VM with 4 NICs | | 2nd VM with 2 NICs |
+ | +---------------+ +---------------+ | | +---------------+ |
+ | | Application | | Application | | | | Application | |
+ | +---------------+ +---------------+ | | +---------------+ |
+ | ^ | ^ | | | ^ | |
+ | | v | v | | | v |
+ | +---------------+ +---------------+ | | +---------------+ |
+ | | logical ports | | logical ports | | | | logical ports | |
+ | | 0 1 | | 2 3 | | | | 0 1 | |
+ +--+---------------+----+---------------+--+ +--+---------------+--+
+ ^ : ^ : ^ :
+ | | | | | |
+ : v : v : v
+ +-----------+---------------+----+---------------+-------+---------------+----------+
+ | vSwitch | 0 1 | | 2 3 | | 4 5 | |
+ | | logical ports | | logical ports | | logical ports | |
+ | previous +---------------+ +---------------+ +---------------+ next |
+ | VM or PHY ^ | ^ | ^ | VM or PHY|
+ | port -----+ +------------+ +---------------+ +----> port |
+ +-----------------------------------------------------------------------------------+
+
+The number of VMs involved in the test and the type of their connection are defined
+by the deployment name as follows:
+
+ * ``pvvp[number]`` - configures scenario with VMs connected in series with
+ optional ``number`` of VMs. In case that ``number`` is not specified, then
+ 2 VMs will be used.
+
+ Example of 2 VMs in a serial configuration:
+
+ .. code-block:: console
+
+ +----------------------+ +----------------------+
+ | 1st VM | | 2nd VM |
+ | +---------------+ | | +---------------+ |
+ | | Application | | | | Application | |
+ | +---------------+ | | +---------------+ |
+ | ^ | | | ^ | |
+ | | v | | | v |
+ | +---------------+ | | +---------------+ |
+ | | logical ports | | | | logical ports | |
+ | | 0 1 | | | | 0 1 | |
+ +---+---------------+--+ +---+---------------+--+
+ ^ : ^ :
+ | | | |
+ : v : v
+ +---+---------------+---------+---------------+--+
+ | | 0 1 | | 3 4 | |
+ | | logical ports | vSwitch | logical ports | |
+ | +---------------+ +---------------+ |
+ | ^ | ^ | |
+ | | +-----------------+ v |
+ | +----------------------------------------+ |
+ | | physical ports | |
+ | | 0 1 | |
+ +---+----------------------------------------+---+
+ ^ :
+ | |
+ : v
+ +------------------------------------------------+
+ | |
+ | traffic generator |
+ | |
+ +------------------------------------------------+
+
+  * ``pvpv[number]`` - configures a scenario with VMs connected in parallel, with
+    an optional ``number`` of VMs. In case ``number`` is not specified,
+    2 VMs will be used. The multistream feature is used to route traffic to particular
+    VMs (or NIC pairs of every VM). It means that VSPERF will enable the multistream
+    feature and set the number of streams to the number of VMs and their NIC
+    pairs. Traffic will be dispatched based on the Stream Type, i.e. by UDP port,
+    IP address or MAC address.
+
+ Example of 2 VMs in a parallel configuration, where traffic is dispatched
+ based on the UDP port.
+
+ .. code-block:: console
+
+ +----------------------+ +----------------------+
+ | 1st VM | | 2nd VM |
+ | +---------------+ | | +---------------+ |
+ | | Application | | | | Application | |
+ | +---------------+ | | +---------------+ |
+ | ^ | | | ^ | |
+ | | v | | | v |
+ | +---------------+ | | +---------------+ |
+ | | logical ports | | | | logical ports | |
+ | | 0 1 | | | | 0 1 | |
+ +---+---------------+--+ +---+---------------+--+
+ ^ : ^ :
+ | | | |
+ : v : v
+ +---+---------------+---------+---------------+--+
+ | | 0 1 | | 3 4 | |
+ | | logical ports | vSwitch | logical ports | |
+ | +---------------+ +---------------+ |
+ | ^ | ^ : |
+ | | ......................: : |
+ | UDP | UDP : | : |
+ | port| port: +--------------------+ : |
+ | 0 | 1 : | : |
+ | | : v v |
+ | +----------------------------------------+ |
+ | | physical ports | |
+ | | 0 1 | |
+ +---+----------------------------------------+---+
+ ^ :
+ | |
+ : v
+ +------------------------------------------------+
+ | |
+ | traffic generator |
+ | |
+ +------------------------------------------------+
+
+
+PXP deployment is backward compatible with PVP deployment, where ``pvp`` is
+an alias for ``pvvp1`` and it executes just one VM.
+
+The number of interfaces used by VMs is defined by configuration option
+``GUEST_NICS_NR``. In case that more than one pair of interfaces is defined
+for VM, then:
+
+ * for ``pvvp`` (serial) scenario every NIC pair is connected in serial
+ before connection to next VM is created
+ * for ``pvpv`` (parallel) scenario every NIC pair is directly connected
+ to the physical ports and unique traffic stream is assigned to it
+
+Examples (an illustrative testcase sketch follows this list):
+
+  * Deployment ``pvvp10`` will start 10 VMs and connect them in series
+  * Deployment ``pvpv4`` will start 4 VMs and connect them in parallel
+  * Deployment ``pvpv1`` and GUEST_NICS_NR = [4] will start 1 VM with
+    4 interfaces, and every NIC pair is directly connected to the
+    physical ports
+  * Deployment ``pvvp`` and GUEST_NICS_NR = [2, 4] will start 2 VMs;
+    the 1st VM will have 2 interfaces and the 2nd VM 4 interfaces. These interfaces
+    will be connected in series, i.e. traffic will flow as follows:
+    PHY1 -> VM1_1 -> VM1_2 -> VM2_1 -> VM2_2 -> VM2_3 -> VM2_4 -> PHY2
+
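+As an illustrative sketch only (field names follow the common testcase schema
+and should be verified against the testcase definitions in ``conf/``), a testcase
+entry selecting a PXP deployment together with ``GUEST_NICS_NR`` might look like:
+
+.. code-block:: python
+
+    {
+        "Name": "pvvp2_tput_example",
+        "Deployment": "pvvp2",
+        "Description": "RFC2544 throughput with 2 VMs in series",
+        "Parameters": {
+            "GUEST_NICS_NR": [2, 4],
+        },
+    },
+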
+Note: In case only 1 or more than 2 NICs are configured for a VM,
+``testpmd`` should be used as the forwarding application inside the VM,
+as it is able to forward traffic between multiple VM NIC pairs.
+
+Note: In case of ``linux_bridge``, all NICs are connected to the same
+bridge inside the VM.
+
+VM, vSwitch, Traffic Generator Independence
+===========================================
+
+VSPERF supports different vSwitches, Traffic Generators, VNFs
+and Forwarding Applications by using standard object-oriented polymorphism:
+
+ * Support for vSwitches is implemented by a class inheriting from IVSwitch.
+ * Support for Traffic Generators is implemented by a class inheriting from
+ ITrafficGenerator.
+ * Support for VNF is implemented by a class inheriting from IVNF.
+ * Support for Forwarding Applications is implemented by a class inheriting
+ from IPktFwd.
+
+By dealing only with the abstract interfaces the core framework can support
+many implementations of different vSwitches, Traffic Generators, VNFs
+and Forwarding Applications.
+
+IVSwitch
+--------
+
+.. code-block:: python
+
+ class IVSwitch:
+ start(self)
+ stop(self)
+ add_switch(switch_name)
+ del_switch(switch_name)
+ add_phy_port(switch_name)
+ add_vport(switch_name)
+ get_ports(switch_name)
+ del_port(switch_name, port_name)
+ add_flow(switch_name, flow)
+ del_flow(switch_name, flow=None)
+
+ITrafficGenerator
+-----------------
+
+.. code-block:: python
+
+    class ITrafficGenerator:
+        connect()
+        disconnect()
+
+        send_burst_traffic(traffic, numpkts, time, framerate)
+
+        send_cont_traffic(traffic, time, framerate)
+        start_cont_traffic(traffic, time, framerate)
+        stop_cont_traffic()
+
+        send_rfc2544_throughput(traffic, tests, duration, lossrate)
+        start_rfc2544_throughput(traffic, tests, duration, lossrate)
+        wait_rfc2544_throughput()
+
+        send_rfc2544_back2back(traffic, tests, duration, lossrate)
+        start_rfc2544_back2back(traffic, tests, duration, lossrate)
+        wait_rfc2544_back2back()
+
+Note: ``send_xxx()`` blocks, whereas ``start_xxx()`` does not and must be followed by a
+subsequent call to ``wait_xxx()`` (or ``stop_xxx()`` in the case of continuous traffic).
+
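+A brief, illustrative usage of the asynchronous pattern (``gen`` and ``traffic``
+are placeholders for a traffic generator instance and a traffic dictionary):
+
+.. code-block:: python
+
+    # illustrative only: drive a traffic generator asynchronously
+    gen.connect()
+    gen.start_rfc2544_throughput(traffic, tests=1, duration=20, lossrate=0.0)
+    # other orchestration steps may run while traffic is flowing
+    results = gen.wait_rfc2544_throughput()   # blocks until the test finishes
+    gen.disconnect()
+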
+IVnf
+----
+
+.. code-block:: python
+
+    class IVnf:
+        start(memory, cpus,
+              monitor_path, shared_path_host,
+              shared_path_guest, guest_prompt)
+        stop()
+        execute(command)
+        wait(guest_prompt)
+        execute_and_wait(command)
+
+IPktFwd
+--------
+
+ .. code-block:: python
+
+ class IPktFwd:
+ start()
+ stop()
+
+
+Controllers
+-----------
+
+Controllers are used in conjunction with abstract interfaces as a way
+of decoupling the control of vSwitches, VNFs, Traffic Generators
+and Forwarding Applications from other components.
+
+The controlled classes provide basic primitive operations. The Controllers
+sequence and co-ordinate these primitive operations into useful actions. For
+instance the vswitch_controller_p2p can be used to bring any vSwitch (that
+implements the primitives defined in IVSwitch) into the configuration required
+by the Phy-to-Phy Deployment Scenario.
+
+In order to support a new vSwitch, only a new implementation of IVSwitch needs
+to be created for the new vSwitch to be capable of fulfilling all the Deployment
+Scenarios provided for by existing or future vSwitch Controllers.
+
+Similarly, if a new Deployment Scenario is required, it only needs to be written
+once as a new vSwitch Controller, and it will immediately be capable of
+bringing all existing and future vSwitches into that Deployment Scenario.
+
+Similarly, the Traffic Controllers can be used to co-ordinate the basic operations
+provided by implementers of ITrafficGenerator to provide useful tests. Traffic
+generators generally already implement full test cases, i.e. they both
+generate suitable traffic and analyse returned traffic in order to implement a
+test which has typically been predefined in an RFC document. However, the
+Traffic Controller class allows for the possibility of further enhancement -
+such as iterating over tests for various packet sizes or creating new tests.
+
+Traffic Controller's Role
+-------------------------
+
+.. image:: traffic_controller.png
+
+
+Loader & Component Factory
+--------------------------
+
+The diagram below shows the working of the Loader package (which is responsible
+for *finding* arbitrary classes based on configuration data) and the Component
+Factory, which is responsible for *choosing* the correct class for a particular
+situation, e.g. a Deployment Scenario.
+
+.. image:: factory_and_loader.png
+
+Routing Tables
+==============
+
+VSPERF uses a standard set of routing tables in order to allow tests to easily
+mix and match Deployment Scenarios (PVP, P2P topology), Tuple Matching and
+Frame Modification requirements.
+
+.. code-block:: console
+
+ +--------------+
+ | |
+ | Table 0 | table#0 - Match table. Flows designed to force 5 & 10
+ | | tuple matches go here.
+ | |
+ +--------------+
+ |
+ |
+ v
+ +--------------+ table#1 - Routing table. Flow entries to forward
+ | | packets between ports goes here.
+ | Table 1 | The chosen port is communicated to subsequent tables by
+ | | setting the metadata value to the egress port number.
+ | | Generally this table is set-up by by the
+ +--------------+ vSwitchController.
+ |
+ |
+ v
+ +--------------+ table#2 - Frame modification table. Frame modification
+ | | flow rules are isolated in this table so that they can
+ | Table 2 | be turned on or off without affecting the routing or
+ | | tuple-matching flow rules. This allows the frame
+ | | modification and tuple matching required by the tests
+ | | in the VSWITCH PERFORMANCE FOR TELCO NFV test
+ +--------------+ specification to be independent of the Deployment
+ | Scenario set up by the vSwitchController.
+ |
+ v
+ +--------------+
+ | |
+ | Table 3 | table#3 - Egress table. Egress packets on the ports
+ | | setup in Table 1.
+ +--------------+
+
+
diff --git a/docs/testing/developer/index.rst b/docs/testing/developer/index.rst
new file mode 100644
index 00000000..c89f27fc
--- /dev/null
+++ b/docs/testing/developer/index.rst
@@ -0,0 +1,74 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, Intel Corporation, AT&T, Red Hat, Spirent, Ixia and others.
+
+.. OPNFV VSPERF Documentation master file.
+
+======
+VSPERF
+======
+
+VSPERF is an OPNFV testing project.
+
+VSPERF provides an automated test-framework and comprehensive test suite based on
+industry standards for measuring data-plane performance of Telco NFV switching
+technologies as well as physical and virtual network interfaces (NFVI). The VSPERF
+architecture is switch and traffic generator agnostic and provides full control of
+software component versions and configurations as well as test-case customization.
+
+The Danube release of VSPERF includes improvements in documentation and capabilities.
+This includes additional test-cases such as RFC 5481 Latency test and RFC-2889
+address-learning-rate test. Hardware traffic generator support is now provided for
+Spirent and Xena in addition to Ixia. The Moongen software traffic generator is also
+now fully supported. VSPERF can be used in a variety of modes for configuration and
+setup of the network and/or for control of the test-generator and test execution.
+
+* Wiki: https://wiki.opnfv.org/characterize_vswitch_performance_for_telco_nfv_use_cases
+* Repository: https://git.opnfv.org/vswitchperf
+* Artifacts: https://artifacts.opnfv.org/vswitchperf.html
+* Continuous Integration status: https://build.opnfv.org/ci/view/vswitchperf/
+
+
+****************************
+VSPERF Developer Guide
+****************************
+
+.. toctree::
+ :caption: VSPERF Developer Guide
+ :maxdepth: 5
+ :numbered: 5
+
+ ./design/trafficgen_integration_guide.rst
+ ./design/vswitchperf_design.rst
+
+ ./requirements/vswitchperf_ltd.rst
+ ./requirements/vswitchperf_ltp.rst
+
+*****************************
+VSPERF - IETF Internet Draft
+*****************************
+
+.. toctree::
+ :caption: IETF Internet Draft
+ :maxdepth: 5
+ :numbered: 5
+
+`Benchmarking Virtual Switches in OPNFV <https://tools.ietf.org/html/draft-ietf-bmwg-vswitch-opnfv-01>`_
+
+Drafts (xml) are maintained in the IETF repository: https://tools.ietf.org/html/
+
+********************************
+VSPERF Scenarios and CI Results
+********************************
+
+.. toctree::
+ :caption: VSPERF Scenarios & Results
+ :maxdepth: 5
+ :numbered: 5
+
+ ./results/scenario.rst
+ ./results/results.rst
+
+Indices
+=======
+* :ref:`search`
diff --git a/docs/testing/developer/requirements/LICENSE b/docs/testing/developer/requirements/LICENSE
new file mode 100644
index 00000000..7bc572ce
--- /dev/null
+++ b/docs/testing/developer/requirements/LICENSE
@@ -0,0 +1,2 @@
+This work is licensed under a Creative Commons Attribution 4.0 International License.
+http://creativecommons.org/licenses/by/4.0
diff --git a/docs/testing/developer/requirements/ietf_draft/LICENSE b/docs/testing/developer/requirements/ietf_draft/LICENSE
new file mode 100644
index 00000000..7fc9ae14
--- /dev/null
+++ b/docs/testing/developer/requirements/ietf_draft/LICENSE
@@ -0,0 +1,12 @@
+Copyright (c) 2016 IETF Trust and the persons identified as the
+document authors. All rights reserved.
+
+This document is subject to BCP 78 and the IETF Trust's Legal
+Provisions Relating to IETF Documents
+(http://trustee.ietf.org/license-info) in effect on the date of
+publication of this document. Please review these documents
+carefully, as they describe your rights and restrictions with respect
+to this document. Code Components extracted from this document must
+include Simplified BSD License text as described in Section 4.e of
+the Trust Legal Provisions and are provided without warranty as
+described in the Simplified BSD License.
diff --git a/docs/testing/developer/requirements/ietf_draft/draft-ietf-bmwg-vswitch-opnfv-00.xml b/docs/testing/developer/requirements/ietf_draft/draft-ietf-bmwg-vswitch-opnfv-00.xml
new file mode 100644
index 00000000..2259b23c
--- /dev/null
+++ b/docs/testing/developer/requirements/ietf_draft/draft-ietf-bmwg-vswitch-opnfv-00.xml
@@ -0,0 +1,1016 @@
+<?xml version="1.0" encoding="US-ASCII"?>
+<!DOCTYPE rfc SYSTEM "rfc2629.dtd">
+<?rfc toc="yes"?>
+<?rfc tocompact="yes"?>
+<?rfc tocdepth="3"?>
+<?rfc tocindent="yes"?>
+<?rfc symrefs="yes"?>
+<?rfc sortrefs="yes"?>
+<?rfc comments="yes"?>
+<?rfc inline="yes"?>
+<?rfc compact="yes"?>
+<?rfc subcompact="no"?>
+<rfc category="info" docName="draft-ietf-bmwg-vswitch-opnfv-00"
+ ipr="trust200902">
+ <front>
+ <title abbrev="Benchmarking vSwitches">Benchmarking Virtual Switches in
+ OPNFV</title>
+
+ <author fullname="Maryam Tahhan" initials="M." surname="Tahhan">
+ <organization>Intel</organization>
+
+ <address>
+ <postal>
+ <street/>
+
+ <city/>
+
+ <region/>
+
+ <code/>
+
+ <country/>
+ </postal>
+
+ <phone/>
+
+ <facsimile/>
+
+ <email>maryam.tahhan@intel.com</email>
+
+ <uri/>
+ </address>
+ </author>
+
+ <author fullname="Billy O'Mahony" initials="B." surname="O'Mahony">
+ <organization>Intel</organization>
+
+ <address>
+ <postal>
+ <street/>
+
+ <city/>
+
+ <region/>
+
+ <code/>
+
+ <country/>
+ </postal>
+
+ <phone/>
+
+ <facsimile/>
+
+ <email>billy.o.mahony@intel.com</email>
+
+ <uri/>
+ </address>
+ </author>
+
+ <author fullname="Al Morton" initials="A." surname="Morton">
+ <organization>AT&amp;T Labs</organization>
+
+ <address>
+ <postal>
+ <street>200 Laurel Avenue South</street>
+
+ <city>Middletown,</city>
+
+ <region>NJ</region>
+
+ <code>07748</code>
+
+ <country>USA</country>
+ </postal>
+
+ <phone>+1 732 420 1571</phone>
+
+ <facsimile>+1 732 368 1192</facsimile>
+
+ <email>acmorton@att.com</email>
+
+ <uri>http://home.comcast.net/~acmacm/</uri>
+ </address>
+ </author>
+
+ <date day="8" month="July" year="2016"/>
+
+ <abstract>
+ <t>This memo describes the progress of the Open Platform for NFV (OPNFV)
+ project on virtual switch performance "VSWITCHPERF". This project
+ intends to build on the current and completed work of the Benchmarking
+ Methodology Working Group in IETF, by referencing existing literature.
+ The Benchmarking Methodology Working Group has traditionally conducted
+ laboratory characterization of dedicated physical implementations of
+ internetworking functions. Therefore, this memo begins to describe the
+ additional considerations when virtual switches are implemented in
+ general-purpose hardware. The expanded tests and benchmarks are also
+ influenced by the OPNFV mission to support virtualization of the "telco"
+ infrastructure.</t>
+ </abstract>
+
+ <note title="Requirements Language">
+ <t>The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
+ "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
+ document are to be interpreted as described in <xref
+ target="RFC2119">RFC 2119</xref>.</t>
+
+ <t/>
+ </note>
+ </front>
+
+ <middle>
+ <section title="Introduction">
+ <t>Benchmarking Methodology Working Group (BMWG) has traditionally
+ conducted laboratory characterization of dedicated physical
+ implementations of internetworking functions. The Black-box Benchmarks
+ of Throughput, Latency, Forwarding Rates and others have served our
+ industry for many years. Now, Network Function Virtualization (NFV) has
+ the goal to transform how internetwork functions are implemented, and
+ therefore has garnered much attention.</t>
+
+ <t>This memo summarizes the progress of the Open Platform for NFV
+ (OPNFV) project on virtual switch performance characterization,
+ "VSWITCHPERF", through the Brahmaputra (second) release <xref
+ target="BrahRel"/>. This project intends to build on the current and
+ completed work of the Benchmarking Methodology Working Group in IETF, by
+ referencing existing literature. For example, currently the most often
+ referenced RFC is <xref target="RFC2544"/> (which depends on <xref
+ target="RFC1242"/>) and foundation of the benchmarking work in OPNFV is
+ common and strong.</t>
+
+ <t>See
+ https://wiki.opnfv.org/characterize_vswitch_performance_for_telco_nfv_use_cases
+ for more background, and the OPNFV website for general information:
+ https://www.opnfv.org/</t>
+
+ <t>The authors note that OPNFV distinguishes itself from other open
+ source compute and networking projects through its emphasis on existing
+ "telco" services as opposed to cloud-computing. There are many ways in
+ which telco requirements have different emphasis on performance
+ dimensions when compared to cloud computing: support for and transfer of
+ isochronous media streams is one example.</t>
+
+ <t>Note also that the move to NFV Infrastructure has resulted in many
+ new benchmarking initiatives across the industry. The authors are
+ currently doing their best to maintain alignment with many other
+ projects, and this Internet Draft is one part of the efforts. We
+ acknowledge the early work in <xref
+ target="I-D.huang-bmwg-virtual-network-performance"/>, and useful
+ discussion with the authors.</t>
+ </section>
+
+ <section title="Scope">
+ <t>The primary purpose and scope of the memo is to inform the industry
+ of work-in-progress that builds on the body of extensive BMWG literature
+ and experience, and describe the extensions needed for benchmarking
+      virtual switches. Initial feedback indicates that many of these
+ extensions may be applicable beyond the current scope (to hardware
+ switches in the NFV Infrastructure and to virtual routers, for example).
+ Additionally, this memo serves as a vehicle to include more detail and
+ commentary from BMWG and other Open Source communities, under BMWG's
+ chartered work to characterize the NFV Infrastructure (a virtual switch
+ is an important aspect of that infrastructure).</t>
+ </section>
+
+ <section title="Benchmarking Considerations">
+ <t>This section highlights some specific considerations (from <xref
+ target="I-D.ietf-bmwg-virtual-net"/>)related to Benchmarks for virtual
+ switches. The OPNFV project is sharing its present view on these areas,
+ as they develop their specifications in the Level Test Design (LTD)
+ document.</t>
+
+ <section title="Comparison with Physical Network Functions">
+ <t>To compare the performance of virtual designs and implementations
+ with their physical counterparts, identical benchmarks are needed.
+ BMWG has developed specifications for many network functions this memo
+ re-uses existing benchmarks through references, and expands them
+ during development of new methods. A key configuration aspect is the
+ number of parallel cores required to achieve comparable performance
+ with a given physical device, or whether some limit of scale was
+ reached before the cores could achieve the comparable level.</t>
+
+ <t>It's unlikely that the virtual switch will be the only application
+ running on the SUT, so CPU utilization, Cache utilization, and Memory
+ footprint should also be recorded for the virtual implementations of
+ internetworking functions.</t>
+ </section>
+
+ <section title="Continued Emphasis on Black-Box Benchmarks">
+ <t>External observations remain essential as the basis for Benchmarks.
+ Internal observations with fixed specification and interpretation will
+ be provided in parallel to assist the development of operations
+ procedures when the technology is deployed.</t>
+ </section>
+
+ <section title="New Configuration Parameters">
+ <t>A key consideration when conducting any sort of benchmark is trying
+ to ensure the consistency and repeatability of test results. When
+ benchmarking the performance of a vSwitch there are many factors that
+ can affect the consistency of results, one key factor is matching the
+ various hardware and software details of the SUT. This section lists
+ some of the many new parameters which this project believes are
+ critical to report in order to achieve repeatability.</t>
+
+ <t>Hardware details including:</t>
+
+ <t><list style="symbols">
+ <t>Platform details</t>
+
+ <t>Processor details</t>
+
+ <t>Memory information (type and size)</t>
+
+ <t>Number of enabled cores</t>
+
+ <t>Number of cores used for the test</t>
+
+ <t>Number of physical NICs, as well as their details
+ (manufacturer, versions, type and the PCI slot they are plugged
+ into)</t>
+
+ <t>NIC interrupt configuration</t>
+
+ <t>BIOS version, release date and any configurations that were
+ modified</t>
+
+ <t>CPU microcode level</t>
+
+ <t>Memory DIMM configurations (quad rank performance may not be
+ the same as dual rank) in size, freq and slot locations</t>
+
+ <t>PCI configuration parameters (payload size, early ack
+ option...)</t>
+
+ <t>Power management at all levels (ACPI sleep states, processor
+ package, OS...)</t>
+ </list>Software details including:</t>
+
+ <t><list style="symbols">
+ <t>OS parameters and behavior (text vs graphical no one typing at
+ the console on one system)</t>
+
+ <t>OS version (for host and VNF)</t>
+
+ <t>Kernel version (for host and VNF)</t>
+
+ <t>GRUB boot parameters (for host and VNF)</t>
+
+ <t>Hypervisor details (Type and version)</t>
+
+ <t>Selected vSwitch, version number or commit id used</t>
+
+ <t>vSwitch launch command line if it has been parameterised</t>
+
+ <t>Memory allocation to the vSwitch</t>
+
+ <t>which NUMA node it is using, and how many memory channels</t>
+
+ <t>DPDK or any other SW dependency version number or commit id
+ used</t>
+
+          <t>Memory allocation to a VM - if it's from Hugepages/elsewhere</t>
+
+ <t>VM storage type: snapshot/independent persistent/independent
+ non-persistent</t>
+
+ <t>Number of VMs</t>
+
+ <t>Number of Virtual NICs (vNICs), versions, type and driver</t>
+
+ <t>Number of virtual CPUs and their core affinity on the host</t>
+
+          <t>vNIC interrupt configuration</t>
+
+ <t>Thread affinitization for the applications (including the
+ vSwitch itself) on the host</t>
+
+          <t>Details of resource isolation, such as CPUs designated for
+          Host/Kernel (isolcpu) and CPUs designated for specific processes
+          (taskset)</t>
+
+          <t>Test duration</t>
+
+          <t>Number of flows</t>
+ </list></t>
+
+ <t>Test Traffic Information:<list style="symbols">
+ <t>Traffic type - UDP, TCP, IMIX / Other</t>
+
+ <t>Packet Sizes</t>
+
+ <t>Deployment Scenario</t>
+ </list></t>
+
+ <t/>
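+
+        <t>As an informal illustration (not part of the LTD specifications),
+        the sketch below records a small subset of the parameters listed
+        above into a JSON report so that results can later be matched to the
+        environment that produced them. The file paths are standard Linux
+        locations, the use of pkg-config to query the DPDK version is only an
+        example, and the selection of fields is deliberately incomplete.</t>
+
+        <t><figure>
+            <preamble>Example sketch: recording a subset of SUT
+            details</preamble>
+
+            <artwork><![CDATA[
+# Illustrative sketch only: record a few of the SUT details listed above
+# as JSON.  Field selection is deliberately incomplete; /proc/cmdline is
+# the standard Linux location for kernel boot parameters.
+import json
+import os
+import platform
+import subprocess
+
+def collect_sut_details():
+    details = {
+        "kernel_version": platform.release(),
+        "machine": platform.machine(),
+        "enabled_cpu_count": os.cpu_count(),
+    }
+    try:  # GRUB/kernel boot parameters (host side)
+        with open("/proc/cmdline") as f:
+            details["boot_parameters"] = f.read().strip()
+    except OSError:
+        details["boot_parameters"] = None
+    try:  # DPDK version, if registered with pkg-config
+        out = subprocess.run(["pkg-config", "--modversion", "libdpdk"],
+                             capture_output=True, text=True, check=True)
+        details["dpdk_version"] = out.stdout.strip()
+    except (OSError, subprocess.CalledProcessError):
+        details["dpdk_version"] = None
+    return details
+
+if __name__ == "__main__":
+    print(json.dumps(collect_sut_details(), indent=2))
+]]></artwork>
+          </figure></t>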
+ </section>
+
+ <section title="Flow classification">
+ <t>Virtual switches group packets into flows by processing and
+ matching particular packet or frame header information, or by matching
+        packets based on the input ports. Thus a flow can be thought of as a
+ sequence of packets that have the same set of header field values
+ (5-tuple) or have arrived on the same port. Performance results can
+ vary based on the parameters the vSwitch uses to match for a flow. The
+ recommended flow classification parameters for any vSwitch performance
+ tests are: the input port, the source IP address, the destination IP
+ address and the Ethernet protocol type field. It is essential to
+ increase the flow timeout time on a vSwitch before conducting any
+ performance tests that do not measure the flow setup time. Normally
+ the first packet of a particular stream will install the flow in the
+ virtual switch which adds an additional latency, subsequent packets of
+ the same flow are not subject to this latency if the flow is already
+ installed on the vSwitch.</t>
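+
+        <t>As an informal illustration, the sketch below groups packets into
+        flows using exactly the classification parameters recommended above
+        (input port, source IP address, destination IP address and Ethernet
+        protocol type). Packets are represented as plain dictionaries for
+        clarity; a real test harness would parse these fields from the wire
+        format.</t>
+
+        <t><figure>
+            <preamble>Example sketch: flow classification key</preamble>
+
+            <artwork><![CDATA[
+# Illustrative sketch only: group packets into flows by the recommended
+# classification parameters (input port, src IP, dst IP, Ethertype).
+from collections import defaultdict
+
+def flow_key(packet):
+    """Return the flow identifier for one packet."""
+    return (packet["in_port"], packet["src_ip"],
+            packet["dst_ip"], packet["eth_type"])
+
+def count_per_flow(packets):
+    counts = defaultdict(int)
+    for packet in packets:
+        counts[flow_key(packet)] += 1
+    return dict(counts)
+
+if __name__ == "__main__":
+    stream = [
+        {"in_port": 1, "src_ip": "10.0.0.1", "dst_ip": "10.0.0.2",
+         "eth_type": 0x0800},
+        {"in_port": 1, "src_ip": "10.0.0.1", "dst_ip": "10.0.0.2",
+         "eth_type": 0x0800},
+        {"in_port": 2, "src_ip": "10.0.0.3", "dst_ip": "10.0.0.4",
+         "eth_type": 0x0800},
+    ]
+    print(count_per_flow(stream))  # two flows: 2 packets and 1 packet
+]]></artwork>
+          </figure></t>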
+ </section>
+
+ <section title="Benchmarks using Baselines with Resource Isolation">
+ <t>This outline describes measurement of baseline with isolated
+ resources at a high level, which is the intended approach at this
+ time.</t>
+
+ <t><list style="numbers">
+ <t>Baselines: <list style="symbols">
+ <t>Optional: Benchmark platform forwarding capability without
+ a vswitch or VNF for at least 72 hours (serves as a means of
+ platform validation and a means to obtain the base performance
+ for the platform in terms of its maximum forwarding rate and
+ latency). <figure>
+ <preamble>Benchmark platform forwarding
+ capability</preamble>
+
+ <artwork align="right"><![CDATA[ __
+ +--------------------------------------------------+ |
+ | +------------------------------------------+ | |
+ | | | | |
+ | | Simple Forwarding App | | Host
+ | | | | |
+ | +------------------------------------------+ | |
+ | | NIC | | |
+ +---+------------------------------------------+---+ __|
+ ^ :
+ | |
+ : v
+ +--------------------------------------------------+
+ | |
+ | traffic generator |
+ | |
+ +--------------------------------------------------+]]></artwork>
+
+ <postamble/>
+ </figure></t>
+
+ <t>Benchmark VNF forwarding capability with direct
+ connectivity (vSwitch bypass, e.g., SR/IOV) for at least 72
+ hours (serves as a means of VNF validation and a means to
+ obtain the base performance for the VNF in terms of its
+ maximum forwarding rate and latency). The metrics gathered
+ from this test will serve as a key comparison point for
+ vSwitch bypass technologies performance and vSwitch
+ performance. <figure align="right">
+ <preamble>Benchmark VNF forwarding capability</preamble>
+
+ <artwork><![CDATA[ __
+ +--------------------------------------------------+ |
+ | +------------------------------------------+ | |
+ | | | | |
+ | | VNF | | |
+ | | | | |
+ | +------------------------------------------+ | |
+ | | Passthrough/SR-IOV | | Host
+ | +------------------------------------------+ | |
+ | | NIC | | |
+ +---+------------------------------------------+---+ __|
+ ^ :
+ | |
+ : v
+ +--------------------------------------------------+
+ | |
+ | traffic generator |
+ | |
+ +--------------------------------------------------+]]></artwork>
+
+ <postamble/>
+ </figure></t>
+
+            <t>Benchmarking with isolated resources alone, with other
+            resources (both HW&amp;SW) disabled. Example: the vSwitch and
+            VM are the SUT.</t>
+
+ <t>Benchmarking with isolated resources alone, leaving some
+ resources unused</t>
+
+ <t>Benchmark with isolated resources and all resources
+ occupied</t>
+ </list></t>
+
+ <t>Next Steps<list style="symbols">
+ <t>Limited sharing</t>
+
+ <t>Production scenarios</t>
+
+ <t>Stressful scenarios</t>
+ </list></t>
+ </list></t>
+ </section>
+ </section>
+
+ <section title="VSWITCHPERF Specification Summary">
+ <t>The overall specification in preparation is referred to as a Level
+ Test Design (LTD) document, which will contain a suite of performance
+ tests. The base performance tests in the LTD are based on the
+ pre-existing specifications developed by BMWG to test the performance of
+ physical switches. These specifications include:</t>
+
+ <t><list style="symbols">
+ <t><xref target="RFC2544"/> Benchmarking Methodology for Network
+ Interconnect Devices</t>
+
+ <t><xref target="RFC2889"/> Benchmarking Methodology for LAN
+ Switching</t>
+
+ <t><xref target="RFC6201"/> Device Reset Characterization</t>
+
+ <t><xref target="RFC5481"/> Packet Delay Variation Applicability
+ Statement</t>
+ </list></t>
+
+ <t>Some of the above/newer RFCs are being applied in benchmarking for
+ the first time, and represent a development challenge for test equipment
+ developers. Fortunately, many members of the testing system community
+ have engaged on the VSPERF project, including an open source test
+ system.</t>
+
+ <t>In addition to this, the LTD also re-uses the terminology defined
+ by:</t>
+
+ <t><list style="symbols">
+ <t><xref target="RFC2285"/> Benchmarking Terminology for LAN
+ Switching Devices</t>
+
+ <t><xref target="RFC5481"/> Packet Delay Variation Applicability
+ Statement</t>
+ </list></t>
+
+ <t/>
+
+ <t>Specifications to be included in future updates of the LTD
+ include:<list style="symbols">
+ <t><xref target="RFC3918"/> Methodology for IP Multicast
+ Benchmarking</t>
+
+ <t><xref target="RFC4737"/> Packet Reordering Metrics</t>
+ </list></t>
+
+ <t>As one might expect, the most fundamental internetworking
+ characteristics of Throughput and Latency remain important when the
+ switch is virtualized, and these benchmarks figure prominently in the
+ specification.</t>
+
+ <t>When considering characteristics important to "telco" network
+ functions, we must begin to consider additional performance metrics. In
+ this case, the project specifications have referenced metrics from the
+ IETF IP Performance Metrics (IPPM) literature. This means that the <xref
+ target="RFC2544"/> test of Latency is replaced by measurement of a
+ metric derived from IPPM's <xref target="RFC2679"/>, where a set of
+ statistical summaries will be provided (mean, max, min, etc.). Further
+ metrics planned to be benchmarked include packet delay variation as
+      defined by <xref target="RFC5481"/>, reordering, burst behaviour, DUT
+ availability, DUT capacity and packet loss in long term testing at
+      Throughput level, where some low level of background loss may be present
+ and characterized.</t>
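+
+      <t>As an informal illustration, the sketch below shows the kind of
+      statistical summary of one-way delay samples (derived from
+      <xref target="RFC2679"/>-style singleton measurements) that replaces a
+      single <xref target="RFC2544"/> Latency value; the chosen percentile
+      and the sample values are examples only.</t>
+
+      <t><figure>
+          <preamble>Example sketch: statistical summary of delay
+          samples</preamble>
+
+          <artwork><![CDATA[
+# Illustrative sketch only: summarize one-way delay samples (seconds)
+# with the statistics (min, mean, max, ...) mentioned above.
+import statistics
+
+def summarize_delays(delays):
+    ordered = sorted(delays)
+    index_999 = min(len(ordered) - 1, int(0.999 * len(ordered)))
+    return {
+        "min": ordered[0],
+        "max": ordered[-1],
+        "mean": statistics.mean(ordered),
+        "median": statistics.median(ordered),
+        "p99.9": ordered[index_999],
+        "samples": len(ordered),
+    }
+
+if __name__ == "__main__":
+    example = [105e-6, 110e-6, 98e-6, 102e-6, 250e-6]  # seconds
+    print(summarize_delays(example))
+]]></artwork>
+        </figure></t>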
+
+ <t>Tests have been (or will be) designed to collect the metrics
+ below:</t>
+
+ <t><list style="symbols">
+        <t>Throughput Tests to measure the maximum forwarding rate (in
+        frames per second or fps) and bit rate (in Mbps) for a constant load
+        (as defined by <xref target="RFC1242"/>) without traffic loss. An
+        illustrative search procedure is sketched after this list.</t>
+
+ <t>Packet and Frame Delay Distribution Tests to measure average, min
+ and max packet and frame delay for constant loads.</t>
+
+ <t>Packet Delay Tests to understand latency distribution for
+ different packet sizes and over an extended test run to uncover
+ outliers.</t>
+
+        <t>Scalability Tests to understand how the virtual switch performs
+        as the number of flows, active ports, complexity of the forwarding
+        logic&rsquo;s configuration, etc., that it has to deal with
+        increases.</t>
+
+ <t>Stream Performance Tests (TCP, UDP) to measure bulk data transfer
+ performance, i.e. how fast systems can send and receive data through
+ the switch.</t>
+
+ <t>Control Path and Datapath Coupling Tests, to understand how
+ closely coupled the datapath and the control path are as well as the
+ effect of this coupling on the performance of the DUT (example:
+ delay of the initial packet of a flow).</t>
+
+ <t>CPU and Memory Consumption Tests to understand the virtual
+ switch&rsquo;s footprint on the system, usually conducted as
+ auxiliary measurements with benchmarks above. They include: CPU
+ utilization, Cache utilization and Memory footprint.</t>
+
+ <t>The so-called "Soak" tests, where the selected test is conducted
+ over a long period of time (with an ideal duration of 24 hours, and
+ at least 6 hours). The purpose of soak tests is to capture transient
+ changes in performance which may occur due to infrequent processes
+ or the low probability coincidence of two or more processes. The
+ performance must be evaluated periodically during continuous
+ testing, and this results in use of <xref target="RFC2889"/> Frame
+ Rate metrics instead of <xref target="RFC2544"/> Throughput (which
+ requires stopping traffic to allow time for all traffic to exit
+ internal queues).</t>
+ </list></t>
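+
+      <t>To make the Throughput Tests above more concrete, the sketch below
+      outlines a simplified binary search for the highest constant load with
+      zero observed loss. The measure_loss() callback is hypothetical and
+      stands in for integration with a traffic generator; real searches also
+      need trial durations, repeat counts and loss verification, which are
+      omitted here.</t>
+
+      <t><figure>
+          <preamble>Example sketch: simplified zero-loss throughput
+          search</preamble>
+
+          <artwork><![CDATA[
+# Simplified, illustrative sketch of a zero-loss throughput search.
+# measure_loss(rate_fps) is a hypothetical callback that offers traffic
+# at the given frame rate for one trial and returns the loss ratio.
+def find_throughput(measure_loss, line_rate_fps, resolution_fps=1000.0):
+    low, high = 0.0, float(line_rate_fps)
+    best = 0.0
+    while (high - low) > resolution_fps:
+        rate = (low + high) / 2.0
+        if measure_loss(rate) == 0.0:   # no frames lost at this rate
+            best = rate
+            low = rate
+        else:
+            high = rate
+    return best                         # highest zero-loss rate found
+
+if __name__ == "__main__":
+    # Toy loss model for demonstration: loss appears above 7.2 Mfps.
+    toy = lambda rate: 0.0 if rate <= 7.2e6 else 0.01
+    print("Throughput (fps): %.0f" % find_throughput(toy, 14.88e6))
+]]></artwork>
+        </figure></t>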
+
+ <t>Future/planned test specs include:<list style="symbols">
+ <t>Request/Response Performance Tests (TCP, UDP) which measure the
+ transaction rate through the switch.</t>
+
+ <t>Noisy Neighbour Tests, to understand the effects of resource
+ sharing on the performance of a virtual switch.</t>
+
+ <t>Tests derived from examination of ETSI NFV Draft GS IFA003
+ requirements <xref target="IFA003"/> on characterization of
+ acceleration technologies applied to vswitches.</t>
+ </list>The flexibility of deployment of a virtual switch within a
+      network means that the existing IETF BMWG literature needs to be used to
+ characterize the performance of a switch in various deployment
+ scenarios. The deployment scenarios under consideration include:</t>
+
+ <t><figure>
+ <preamble>Physical port to virtual switch to physical
+ port</preamble>
+
+ <artwork><![CDATA[ __
+ +--------------------------------------------------+ |
+ | +--------------------+ | |
+ | | | | |
+ | | v | | Host
+ | +--------------+ +--------------+ | |
+ | | phy port | vSwitch | phy port | | |
+ +---+--------------+------------+--------------+---+ __|
+ ^ :
+ | |
+ : v
+ +--------------------------------------------------+
+ | |
+ | traffic generator |
+ | |
+ +--------------------------------------------------+]]></artwork>
+ </figure></t>
+
+ <t><figure>
+ <preamble>Physical port to virtual switch to VNF to virtual switch
+ to physical port</preamble>
+
+ <artwork><![CDATA[ __
+ +---------------------------------------------------+ |
+ | | |
+ | +-------------------------------------------+ | |
+ | | Application | | |
+ | +-------------------------------------------+ | |
+ | ^ : | |
+ | | | | | Guest
+ | : v | |
+ | +---------------+ +---------------+ | |
+ | | logical port 0| | logical port 1| | |
+ +---+---------------+-----------+---------------+---+ __|
+ ^ :
+ | |
+ : v __
+ +---+---------------+----------+---------------+---+ |
+ | | logical port 0| | logical port 1| | |
+ | +---------------+ +---------------+ | |
+ | ^ : | |
+ | | | | | Host
+ | : v | |
+ | +--------------+ +--------------+ | |
+ | | phy port | vSwitch | phy port | | |
+ +---+--------------+------------+--------------+---+ __|
+ ^ :
+ | |
+ : v
+ +--------------------------------------------------+
+ | |
+ | traffic generator |
+ | |
+ +--------------------------------------------------+]]></artwork>
+ </figure><figure>
+ <preamble>Physical port to virtual switch to VNF to virtual switch
+ to VNF to virtual switch to physical port</preamble>
+
+ <artwork><![CDATA[ __
+ +----------------------+ +----------------------+ |
+ | Guest 1 | | Guest 2 | |
+ | +---------------+ | | +---------------+ | |
+ | | Application | | | | Application | | |
+ | +---------------+ | | +---------------+ | |
+ | ^ | | | ^ | | |
+ | | v | | | v | | Guests
+ | +---------------+ | | +---------------+ | |
+ | | logical ports | | | | logical ports | | |
+ | | 0 1 | | | | 0 1 | | |
+ +---+---------------+--+ +---+---------------+--+__|
+ ^ : ^ :
+ | | | |
+ : v : v _
+ +---+---------------+---------+---------------+--+ |
+ | | 0 1 | | 3 4 | | |
+ | | logical ports | | logical ports | | |
+ | +---------------+ +---------------+ | |
+ | ^ | ^ | | | Host
+ | | |-----------------| v | |
+ | +--------------+ +--------------+ | |
+ | | phy ports | vSwitch | phy ports | | |
+ +---+--------------+----------+--------------+---+_|
+ ^ :
+ | |
+ : v
+ +--------------------------------------------------+
+ | |
+ | traffic generator |
+ | |
+ +--------------------------------------------------+]]></artwork>
+ </figure><figure>
+ <preamble>Physical port to virtual switch to VNF</preamble>
+
+ <artwork><![CDATA[ __
+ +---------------------------------------------------+ |
+ | | |
+ | +-------------------------------------------+ | |
+ | | Application | | |
+ | +-------------------------------------------+ | |
+ | ^ | |
+ | | | | Guest
+ | : | |
+ | +---------------+ | |
+ | | logical port 0| | |
+ +---+---------------+-------------------------------+ __|
+ ^
+ |
+ : __
+ +---+---------------+------------------------------+ |
+ | | logical port 0| | |
+ | +---------------+ | |
+ | ^ | |
+ | | | | Host
+ | : | |
+ | +--------------+ | |
+ | | phy port | vSwitch | |
+ +---+--------------+------------ -------------- ---+ __|
+ ^
+ |
+ :
+ +--------------------------------------------------+
+ | |
+ | traffic generator |
+ | |
+ +--------------------------------------------------+]]></artwork>
+ </figure><figure>
+ <preamble>VNF to virtual switch to physical port</preamble>
+
+ <artwork><![CDATA[ __
+ +---------------------------------------------------+ |
+ | | |
+ | +-------------------------------------------+ | |
+ | | Application | | |
+ | +-------------------------------------------+ | |
+ | : | |
+ | | | | Guest
+ | v | |
+ | +---------------+ | |
+ | | logical port | | |
+ +-------------------------------+---------------+---+ __|
+ :
+ |
+ v __
+ +------------------------------+---------------+---+ |
+ | | logical port | | |
+ | +---------------+ | |
+ | : | |
+ | | | | Host
+ | v | |
+ | +--------------+ | |
+ | vSwitch | phy port | | |
+ +-------------------------------+--------------+---+ __|
+ :
+ |
+ v
+ +--------------------------------------------------+
+ | |
+ | traffic generator |
+ | |
+ +--------------------------------------------------+]]></artwork>
+ </figure><figure>
+ <preamble>VNF to virtual switch to VNF</preamble>
+
+ <artwork><![CDATA[ __
+ +----------------------+ +----------------------+ |
+ | Guest 1 | | Guest 2 | |
+ | +---------------+ | | +---------------+ | |
+ | | Application | | | | Application | | |
+ | +---------------+ | | +---------------+ | |
+ | | | | ^ | |
+ | v | | | | | Guests
+ | +---------------+ | | +---------------+ | |
+ | | logical ports | | | | logical ports | | |
+ | | 0 | | | | 0 | | |
+ +---+---------------+--+ +---+---------------+--+__|
+ : ^
+ | |
+ v : _
+ +---+---------------+---------+---------------+--+ |
+ | | 1 | | 1 | | |
+ | | logical ports | | logical ports | | |
+ | +---------------+ +---------------+ | |
+ | | ^ | | Host
+ | L-----------------+ | |
+ | | |
+ | vSwitch | |
+ +------------------------------------------------+_|]]></artwork>
+ </figure></t>
+
+ <t>A set of Deployment Scenario figures is available on the VSPERF Test
+ Methodology Wiki page <xref target="TestTopo"/>.</t>
+ </section>
+
+ <section title="3x3 Matrix Coverage">
+ <t>This section organizes the many existing test specifications into the
+ "3x3" matrix (introduced in <xref target="I-D.ietf-bmwg-virtual-net"/>).
+ Because the LTD specification ID names are quite long, this section is
+ organized into lists for each occupied cell of the matrix (not all are
+      occupied; also, the matrix has grown to 3x4 to accommodate scale metrics
+ when displaying the coverage of many metrics/benchmarks). The current
+ version of the LTD specification is available <xref target="LTD"/>.</t>
+
+ <t>The tests listed below assess the activation of paths in the data
+ plane, rather than the control plane.</t>
+
+ <t>A complete list of tests with short summaries is available on the
+ VSPERF "LTD Test Spec Overview" Wiki page <xref target="LTDoverV"/>.</t>
+
+ <section title="Speed of Activation">
+ <t><list style="symbols">
+ <t>Activation.RFC2889.AddressLearningRate</t>
+
+ <t>PacketLatency.InitialPacketProcessingLatency</t>
+ </list></t>
+ </section>
+
+ <section title="Accuracy of Activation section">
+ <t><list style="symbols">
+ <t>CPDP.Coupling.Flow.Addition</t>
+ </list></t>
+ </section>
+
+ <section title="Reliability of Activation">
+ <t><list style="symbols">
+ <t>Throughput.RFC2544.SystemRecoveryTime</t>
+
+ <t>Throughput.RFC2544.ResetTime</t>
+ </list></t>
+ </section>
+
+ <section title="Scale of Activation">
+ <t><list style="symbols">
+ <t>Activation.RFC2889.AddressCachingCapacity</t>
+ </list></t>
+ </section>
+
+ <section title="Speed of Operation">
+ <t><list style="symbols">
+ <t>Throughput.RFC2544.PacketLossRate</t>
+
+ <t>CPU.RFC2544.0PacketLoss</t>
+
+ <t>Throughput.RFC2544.PacketLossRateFrameModification</t>
+
+ <t>Throughput.RFC2544.BackToBackFrames</t>
+
+ <t>Throughput.RFC2889.MaxForwardingRate</t>
+
+ <t>Throughput.RFC2889.ForwardPressure</t>
+
+ <t>Throughput.RFC2889.BroadcastFrameForwarding</t>
+ </list></t>
+ </section>
+
+ <section title="Accuracy of Operation">
+ <t><list style="symbols">
+ <t>Throughput.RFC2889.ErrorFramesFiltering</t>
+
+ <t>Throughput.RFC2544.Profile</t>
+ </list></t>
+ </section>
+
+ <section title="Reliability of Operation">
+ <t><list style="symbols">
+ <t>Throughput.RFC2889.Soak</t>
+
+ <t>Throughput.RFC2889.SoakFrameModification</t>
+
+ <t>PacketDelayVariation.RFC3393.Soak</t>
+ </list></t>
+ </section>
+
+ <section title="Scalability of Operation">
+ <t><list style="symbols">
+ <t>Scalability.RFC2544.0PacketLoss</t>
+
+ <t>MemoryBandwidth.RFC2544.0PacketLoss.Scalability</t>
+ </list></t>
+ </section>
+
+ <section title="Summary">
+ <t><figure>
+ <artwork><![CDATA[|------------------------------------------------------------------------|
+| | | | | |
+| | SPEED | ACCURACY | RELIABILITY | SCALE |
+| | | | | |
+|------------------------------------------------------------------------|
+| | | | | |
+| Activation | X | X | X | X |
+| | | | | |
+|------------------------------------------------------------------------|
+| | | | | |
+| Operation | X | X | X | X |
+| | | | | |
+|------------------------------------------------------------------------|
+| | | | | |
+| De-activation | | | | |
+| | | | | |
+|------------------------------------------------------------------------|]]></artwork>
+ </figure></t>
+ </section>
+ </section>
+
+ <section title="Security Considerations">
+ <t>Benchmarking activities as described in this memo are limited to
+ technology characterization of a Device Under Test/System Under Test
+ (DUT/SUT) using controlled stimuli in a laboratory environment, with
+ dedicated address space and the constraints specified in the sections
+ above.</t>
+
+ <t>The benchmarking network topology will be an independent test setup
+ and MUST NOT be connected to devices that may forward the test traffic
+ into a production network, or misroute traffic to the test management
+ network.</t>
+
+ <t>Further, benchmarking is performed on a "black-box" basis, relying
+ solely on measurements observable external to the DUT/SUT.</t>
+
+ <t>Special capabilities SHOULD NOT exist in the DUT/SUT specifically for
+ benchmarking purposes. Any implications for network security arising
+ from the DUT/SUT SHOULD be identical in the lab and in production
+ networks.</t>
+ </section>
+
+ <section anchor="IANA" title="IANA Considerations">
+ <t>No IANA Action is requested at this time.</t>
+ </section>
+
+ <section title="Acknowledgements">
+ <t>The authors appreciate and acknowledge comments from Scott Bradner,
+ Marius Georgescu, Ramki Krishnan, Doug Montgomery, Martin Klozik,
+ Christian Trautman, and others for their reviews.</t>
+ </section>
+ </middle>
+
+ <back>
+ <references title="Normative References">
+ <?rfc ?>
+
+ <?rfc include="reference.RFC.2119"?>
+
+ <?rfc ?>
+
+ <?rfc include="reference.RFC.2330"?>
+
+ <?rfc include='reference.RFC.2544'?>
+
+ <?rfc include="reference.RFC.2679"?>
+
+ <?rfc include='reference.RFC.2680'?>
+
+ <?rfc include='reference.RFC.3393'?>
+
+ <?rfc include='reference.RFC.3432'?>
+
+ <?rfc include='reference.RFC.2681'?>
+
+ <?rfc include='reference.RFC.5905'?>
+
+ <?rfc include='reference.RFC.4689'?>
+
+ <?rfc include='reference.RFC.4737'?>
+
+ <?rfc include='reference.RFC.5357'?>
+
+ <?rfc include='reference.RFC.2889'?>
+
+ <?rfc include='reference.RFC.3918'?>
+
+ <?rfc include='reference.RFC.6201'?>
+
+ <?rfc include='reference.RFC.2285'?>
+
+ <reference anchor="NFV.PER001">
+ <front>
+ <title>Network Function Virtualization: Performance and Portability
+ Best Practices</title>
+
+ <author fullname="ETSI NFV" initials="" surname="">
+ <organization/>
+ </author>
+
+ <date month="June" year="2014"/>
+ </front>
+
+ <seriesInfo name="Group Specification"
+ value="ETSI GS NFV-PER 001 V1.1.1 (2014-06)"/>
+
+ <format type="PDF"/>
+ </reference>
+ </references>
+
+ <references title="Informative References">
+ <?rfc include='reference.RFC.1242'?>
+
+ <?rfc include='reference.RFC.5481'?>
+
+ <?rfc include='reference.RFC.6049'?>
+
+ <?rfc include='reference.RFC.6248'?>
+
+ <?rfc include='reference.RFC.6390'?>
+
+ <?rfc include='reference.I-D.ietf-bmwg-virtual-net'?>
+
+ <?rfc include='reference.I-D.huang-bmwg-virtual-network-performance'?>
+
+ <reference anchor="TestTopo">
+ <front>
+ <title>Test Topologies
+ https://wiki.opnfv.org/vsperf/test_methodology</title>
+
+ <author>
+ <organization/>
+ </author>
+
+ <date/>
+ </front>
+ </reference>
+
+ <reference anchor="LTDoverV">
+ <front>
+ <title>LTD Test Spec Overview
+ https://wiki.opnfv.org/wiki/vswitchperf_test_spec_review</title>
+
+ <author>
+ <organization/>
+ </author>
+
+ <date/>
+ </front>
+ </reference>
+
+ <reference anchor="LTD">
+ <front>
+ <title>LTD Test Specification
+ http://artifacts.opnfv.org/vswitchperf/docs/requirements/index.html</title>
+
+ <author>
+ <organization/>
+ </author>
+
+ <date/>
+ </front>
+ </reference>
+
+ <reference anchor="BrahRel">
+ <front>
+ <title>Brahmaputra, Second OPNFV Release
+ https://www.opnfv.org/brahmaputra</title>
+
+ <author>
+ <organization/>
+ </author>
+
+ <date/>
+ </front>
+ </reference>
+
+ <reference anchor="IFA003">
+ <front>
+ <title>https://docbox.etsi.org/ISG/NFV/Open/Drafts/IFA003_Acceleration_-_vSwitch_Spec/</title>
+
+ <author>
+ <organization/>
+ </author>
+
+ <date/>
+ </front>
+ </reference>
+ </references>
+ </back>
+</rfc>
diff --git a/docs/testing/developer/requirements/ietf_draft/draft-ietf-bmwg-vswitch-opnfv-01.xml b/docs/testing/developer/requirements/ietf_draft/draft-ietf-bmwg-vswitch-opnfv-01.xml
new file mode 100644
index 00000000..c8a3d99b
--- /dev/null
+++ b/docs/testing/developer/requirements/ietf_draft/draft-ietf-bmwg-vswitch-opnfv-01.xml
@@ -0,0 +1,1027 @@
+<?xml version="1.0" encoding="US-ASCII"?>
+<!DOCTYPE rfc SYSTEM "rfc2629.dtd">
+<?rfc toc="yes"?>
+<?rfc tocompact="yes"?>
+<?rfc tocdepth="3"?>
+<?rfc tocindent="yes"?>
+<?rfc symrefs="yes"?>
+<?rfc sortrefs="yes"?>
+<?rfc comments="yes"?>
+<?rfc inline="yes"?>
+<?rfc compact="yes"?>
+<?rfc subcompact="no"?>
+<rfc category="info" docName="draft-ietf-bmwg-vswitch-opnfv-01"
+ ipr="trust200902">
+ <front>
+ <title abbrev="Benchmarking vSwitches">Benchmarking Virtual Switches in
+ OPNFV</title>
+
+ <author fullname="Maryam Tahhan" initials="M." surname="Tahhan">
+ <organization>Intel</organization>
+
+ <address>
+ <postal>
+ <street/>
+
+ <city/>
+
+ <region/>
+
+ <code/>
+
+ <country/>
+ </postal>
+
+ <phone/>
+
+ <facsimile/>
+
+ <email>maryam.tahhan@intel.com</email>
+
+ <uri/>
+ </address>
+ </author>
+
+ <author fullname="Billy O'Mahony" initials="B." surname="O'Mahony">
+ <organization>Intel</organization>
+
+ <address>
+ <postal>
+ <street/>
+
+ <city/>
+
+ <region/>
+
+ <code/>
+
+ <country/>
+ </postal>
+
+ <phone/>
+
+ <facsimile/>
+
+ <email>billy.o.mahony@intel.com</email>
+
+ <uri/>
+ </address>
+ </author>
+
+ <author fullname="Al Morton" initials="A." surname="Morton">
+ <organization>AT&amp;T Labs</organization>
+
+ <address>
+ <postal>
+ <street>200 Laurel Avenue South</street>
+
+ <city>Middletown,</city>
+
+ <region>NJ</region>
+
+ <code>07748</code>
+
+ <country>USA</country>
+ </postal>
+
+ <phone>+1 732 420 1571</phone>
+
+ <facsimile>+1 732 368 1192</facsimile>
+
+ <email>acmorton@att.com</email>
+
+ <uri>http://home.comcast.net/~acmacm/</uri>
+ </address>
+ </author>
+
+ <date day="10" month="October" year="2016"/>
+
+ <abstract>
+ <t>This memo describes the progress of the Open Platform for NFV (OPNFV)
+ project on virtual switch performance "VSWITCHPERF". This project
+ intends to build on the current and completed work of the Benchmarking
+ Methodology Working Group in IETF, by referencing existing literature.
+ The Benchmarking Methodology Working Group has traditionally conducted
+ laboratory characterization of dedicated physical implementations of
+ internetworking functions. Therefore, this memo begins to describe the
+ additional considerations when virtual switches are implemented in
+ general-purpose hardware. The expanded tests and benchmarks are also
+ influenced by the OPNFV mission to support virtualization of the "telco"
+ infrastructure.</t>
+ </abstract>
+
+ <note title="Requirements Language">
+ <t>The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
+ "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
+ document are to be interpreted as described in <xref
+ target="RFC2119">RFC 2119</xref>.</t>
+
+ <t/>
+ </note>
+ </front>
+
+ <middle>
+ <section title="Introduction">
+ <t>Benchmarking Methodology Working Group (BMWG) has traditionally
+ conducted laboratory characterization of dedicated physical
+ implementations of internetworking functions. The Black-box Benchmarks
+ of Throughput, Latency, Forwarding Rates and others have served our
+ industry for many years. Now, Network Function Virtualization (NFV) has
+ the goal to transform how internetwork functions are implemented, and
+ therefore has garnered much attention.</t>
+
+ <t>This memo summarizes the progress of the Open Platform for NFV
+ (OPNFV) project on virtual switch performance characterization,
+ "VSWITCHPERF", through the Brahmaputra (second) release <xref
+ target="BrahRel"/>. This project intends to build on the current and
+ completed work of the Benchmarking Methodology Working Group in IETF, by
+ referencing existing literature. For example, currently the most often
+ referenced RFC is <xref target="RFC2544"/> (which depends on <xref
+ target="RFC1242"/>) and foundation of the benchmarking work in OPNFV is
+ common and strong.</t>
+
+ <t>See
+ https://wiki.opnfv.org/characterize_vswitch_performance_for_telco_nfv_use_cases
+ for more background, and the OPNFV website for general information:
+ https://www.opnfv.org/</t>
+
+ <t>The authors note that OPNFV distinguishes itself from other open
+ source compute and networking projects through its emphasis on existing
+ "telco" services as opposed to cloud-computing. There are many ways in
+ which telco requirements have different emphasis on performance
+ dimensions when compared to cloud computing: support for and transfer of
+ isochronous media streams is one example.</t>
+
+ <t>Note also that the move to NFV Infrastructure has resulted in many
+ new benchmarking initiatives across the industry. The authors are
+ currently doing their best to maintain alignment with many other
+ projects, and this Internet Draft is one part of the efforts. We
+ acknowledge the early work in <xref
+ target="I-D.huang-bmwg-virtual-network-performance"/>, and useful
+ discussion with the authors.</t>
+ </section>
+
+ <section title="Scope">
+ <t>The primary purpose and scope of the memo is to inform the industry
+ of work-in-progress that builds on the body of extensive BMWG literature
+ and experience, and describe the extensions needed for benchmarking
+      virtual switches. Initial feedback indicates that many of these
+ extensions may be applicable beyond the current scope (to hardware
+ switches in the NFV Infrastructure and to virtual routers, for example).
+ Additionally, this memo serves as a vehicle to include more detail and
+ commentary from BMWG and other Open Source communities, under BMWG's
+ chartered work to characterize the NFV Infrastructure (a virtual switch
+ is an important aspect of that infrastructure).</t>
+
+ <t>The benchmarking covered in this memo should be applicable to many
+      types of vswitches, and remain vswitch-agnostic to a great degree. There
+ has been no attempt to track and test all features of any specific
+ vswitch implementation.</t>
+ </section>
+
+ <section title="Benchmarking Considerations">
+ <t>This section highlights some specific considerations (from <xref
+      target="I-D.ietf-bmwg-virtual-net"/>) related to Benchmarks for virtual
+ switches. The OPNFV project is sharing its present view on these areas,
+ as they develop their specifications in the Level Test Design (LTD)
+ document.</t>
+
+ <section title="Comparison with Physical Network Functions">
+ <t>To compare the performance of virtual designs and implementations
+ with their physical counterparts, identical benchmarks are needed.
+        BMWG has developed specifications for many network functions; this memo
+ re-uses existing benchmarks through references, and expands them
+ during development of new methods. A key configuration aspect is the
+ number of parallel cores required to achieve comparable performance
+ with a given physical device, or whether some limit of scale was
+ reached before the cores could achieve the comparable level.</t>
+
+ <t>It's unlikely that the virtual switch will be the only application
+ running on the SUT, so CPU utilization, Cache utilization, and Memory
+ footprint should also be recorded for the virtual implementations of
+ internetworking functions.</t>
+ </section>
+
+ <section title="Continued Emphasis on Black-Box Benchmarks">
+ <t>External observations remain essential as the basis for Benchmarks.
+ Internal observations with fixed specification and interpretation will
+ be provided in parallel to assist the development of operations
+ procedures when the technology is deployed.</t>
+ </section>
+
+ <section title="New Configuration Parameters">
+ <t>A key consideration when conducting any sort of benchmark is trying
+ to ensure the consistency and repeatability of test results. When
+ benchmarking the performance of a vSwitch there are many factors that
+        can affect the consistency of results; one key factor is matching the
+ various hardware and software details of the SUT. This section lists
+ some of the many new parameters which this project believes are
+ critical to report in order to achieve repeatability.</t>
+
+ <t>Hardware details including:</t>
+
+ <t><list style="symbols">
+ <t>Platform details</t>
+
+ <t>Processor details</t>
+
+ <t>Memory information (type and size)</t>
+
+ <t>Number of enabled cores</t>
+
+ <t>Number of cores used for the test</t>
+
+ <t>Number of physical NICs, as well as their details
+ (manufacturer, versions, type and the PCI slot they are plugged
+ into)</t>
+
+ <t>NIC interrupt configuration</t>
+
+ <t>BIOS version, release date and any configurations that were
+ modified</t>
+
+ <t>CPU microcode level</t>
+
+ <t>Memory DIMM configurations (quad rank performance may not be
+          the same as dual rank) in size, frequency and slot locations</t>
+
+ <t>PCI configuration parameters (payload size, early ack
+ option...)</t>
+
+ <t>Power management at all levels (ACPI sleep states, processor
+ package, OS...)</t>
+ </list>Software details including:</t>
+
+ <t><list style="symbols">
+          <t>OS parameters and behavior (text vs. graphical; no one typing at
+ the console on one system)</t>
+
+ <t>OS version (for host and VNF)</t>
+
+ <t>Kernel version (for host and VNF)</t>
+
+ <t>GRUB boot parameters (for host and VNF)</t>
+
+ <t>Hypervisor details (Type and version)</t>
+
+ <t>Selected vSwitch, version number or commit id used</t>
+
+ <t>vSwitch launch command line if it has been parameterised</t>
+
+ <t>Memory allocation to the vSwitch</t>
+
+          <t>Which NUMA node it is using, and how many memory channels</t>
+
+ <t>DPDK or any other SW dependency version number or commit id
+ used</t>
+
+          <t>Memory allocation to a VM - if it's from Hugepages/elsewhere</t>
+
+ <t>VM storage type: snapshot/independent persistent/independent
+ non-persistent</t>
+
+ <t>Number of VMs</t>
+
+ <t>Number of Virtual NICs (vNICs), versions, type and driver</t>
+
+ <t>Number of virtual CPUs and their core affinity on the host</t>
+
+          <t>vNIC interrupt configuration</t>
+
+ <t>Thread affinitization for the applications (including the
+ vSwitch itself) on the host</t>
+
+          <t>Details of resource isolation, such as CPUs designated for
+          Host/Kernel (isolcpu) and CPUs designated for specific processes
+          (taskset)</t>
+
+          <t>Test duration</t>
+
+          <t>Number of flows</t>
+ </list></t>
+
+ <t>Test Traffic Information:<list style="symbols">
+ <t>Traffic type - UDP, TCP, IMIX / Other</t>
+
+ <t>Packet Sizes</t>
+
+ <t>Deployment Scenario</t>
+ </list></t>
+
+ <t/>
+ </section>
+
+ <section title="Flow classification">
+ <t>Virtual switches group packets into flows by processing and
+ matching particular packet or frame header information, or by matching
+        packets based on the input ports. Thus a flow can be thought of as a
+ sequence of packets that have the same set of header field values
+ (5-tuple) or have arrived on the same port. Performance results can
+ vary based on the parameters the vSwitch uses to match for a flow. The
+ recommended flow classification parameters for any vSwitch performance
+ tests are: the input port, the source IP address, the destination IP
+ address and the Ethernet protocol type field. It is essential to
+ increase the flow timeout time on a vSwitch before conducting any
+ performance tests that do not measure the flow setup time. Normally
+ the first packet of a particular stream will install the flow in the
+ virtual switch which adds an additional latency, subsequent packets of
+ the same flow are not subject to this latency if the flow is already
+ installed on the vSwitch.</t>
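+
+        <t>As one concrete, non-normative example, with Open vSwitch the idle
+        timeout of datapath flows can be raised through its
+        other_config:max-idle setting (a value in milliseconds); other
+        virtual switches provide different mechanisms. The sketch below
+        assumes the ovs-vsctl utility is available on the SUT and that the
+        chosen value exceeds the duration of a single trial.</t>
+
+        <t><figure>
+            <preamble>Example sketch: raising the flow idle timeout (Open
+            vSwitch)</preamble>
+
+            <artwork><![CDATA[
+# Non-normative example: raise the datapath flow idle timeout on an
+# Open vSwitch SUT before a test that does not measure flow setup time.
+# Assumes ovs-vsctl is installed and that the other_config:max-idle
+# setting (milliseconds) is honoured by the deployed OVS version.
+import subprocess
+
+def set_ovs_flow_idle_timeout(milliseconds=60000):
+    subprocess.run(
+        ["ovs-vsctl", "set", "Open_vSwitch", ".",
+         "other_config:max-idle=%d" % milliseconds],
+        check=True)
+
+if __name__ == "__main__":
+    set_ovs_flow_idle_timeout()  # e.g. 60 s, longer than a single trial
+]]></artwork>
+          </figure></t>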
+ </section>
+
+ <section title="Benchmarks using Baselines with Resource Isolation">
+ <t>This outline describes measurement of baseline with isolated
+ resources at a high level, which is the intended approach at this
+ time.</t>
+
+ <t><list style="numbers">
+ <t>Baselines: <list style="symbols">
+ <t>Optional: Benchmark platform forwarding capability without
+ a vswitch or VNF for at least 72 hours (serves as a means of
+ platform validation and a means to obtain the base performance
+ for the platform in terms of its maximum forwarding rate and
+ latency). <figure>
+ <preamble>Benchmark platform forwarding
+ capability</preamble>
+
+ <artwork align="right"><![CDATA[ __
+ +--------------------------------------------------+ |
+ | +------------------------------------------+ | |
+ | | | | |
+ | | Simple Forwarding App | | Host
+ | | | | |
+ | +------------------------------------------+ | |
+ | | NIC | | |
+ +---+------------------------------------------+---+ __|
+ ^ :
+ | |
+ : v
+ +--------------------------------------------------+
+ | |
+ | traffic generator |
+ | |
+ +--------------------------------------------------+]]></artwork>
+
+ <postamble/>
+ </figure></t>
+
+ <t>Benchmark VNF forwarding capability with direct
+ connectivity (vSwitch bypass, e.g., SR/IOV) for at least 72
+ hours (serves as a means of VNF validation and a means to
+ obtain the base performance for the VNF in terms of its
+ maximum forwarding rate and latency). The metrics gathered
+ from this test will serve as a key comparison point for
+ vSwitch bypass technologies performance and vSwitch
+ performance. <figure align="right">
+ <preamble>Benchmark VNF forwarding capability</preamble>
+
+ <artwork><![CDATA[ __
+ +--------------------------------------------------+ |
+ | +------------------------------------------+ | |
+ | | | | |
+ | | VNF | | |
+ | | | | |
+ | +------------------------------------------+ | |
+ | | Passthrough/SR-IOV | | Host
+ | +------------------------------------------+ | |
+ | | NIC | | |
+ +---+------------------------------------------+---+ __|
+ ^ :
+ | |
+ : v
+ +--------------------------------------------------+
+ | |
+ | traffic generator |
+ | |
+ +--------------------------------------------------+]]></artwork>
+
+ <postamble/>
+ </figure></t>
+
+            <t>Benchmarking with isolated resources alone, with other
+            resources (both HW&amp;SW) disabled. Example: the vSwitch and
+            VM are the SUT.</t>
+
+ <t>Benchmarking with isolated resources alone, leaving some
+ resources unused</t>
+
+ <t>Benchmark with isolated resources and all resources
+ occupied</t>
+ </list></t>
+
+ <t>Next Steps<list style="symbols">
+ <t>Limited sharing</t>
+
+ <t>Production scenarios</t>
+
+ <t>Stressful scenarios</t>
+ </list></t>
+ </list></t>
+ </section>
+ </section>
+
+ <section title="VSWITCHPERF Specification Summary">
+ <t>The overall specification in preparation is referred to as a Level
+ Test Design (LTD) document, which will contain a suite of performance
+ tests. The base performance tests in the LTD are based on the
+ pre-existing specifications developed by BMWG to test the performance of
+ physical switches. These specifications include:</t>
+
+ <t><list style="symbols">
+ <t><xref target="RFC2544"/> Benchmarking Methodology for Network
+ Interconnect Devices</t>
+
+ <t><xref target="RFC2889"/> Benchmarking Methodology for LAN
+ Switching</t>
+
+ <t><xref target="RFC6201"/> Device Reset Characterization</t>
+
+ <t><xref target="RFC5481"/> Packet Delay Variation Applicability
+ Statement</t>
+ </list></t>
+
+ <t>Some of the above/newer RFCs are being applied in benchmarking for
+ the first time, and represent a development challenge for test equipment
+ developers. Fortunately, many members of the testing system community
+ have engaged on the VSPERF project, including an open source test
+ system.</t>
+
+ <t>In addition to this, the LTD also re-uses the terminology defined
+ by:</t>
+
+ <t><list style="symbols">
+ <t><xref target="RFC2285"/> Benchmarking Terminology for LAN
+ Switching Devices</t>
+
+ <t><xref target="RFC5481"/> Packet Delay Variation Applicability
+ Statement</t>
+ </list></t>
+
+ <t/>
+
+ <t>Specifications to be included in future updates of the LTD
+ include:<list style="symbols">
+ <t><xref target="RFC3918"/> Methodology for IP Multicast
+ Benchmarking</t>
+
+ <t><xref target="RFC4737"/> Packet Reordering Metrics</t>
+ </list></t>
+
+ <t>As one might expect, the most fundamental internetworking
+ characteristics of Throughput and Latency remain important when the
+ switch is virtualized, and these benchmarks figure prominently in the
+ specification.</t>
+
+ <t>When considering characteristics important to "telco" network
+ functions, we must begin to consider additional performance metrics. In
+ this case, the project specifications have referenced metrics from the
+ IETF IP Performance Metrics (IPPM) literature. This means that the <xref
+ target="RFC2544"/> test of Latency is replaced by measurement of a
+ metric derived from IPPM's <xref target="RFC2679"/>, where a set of
+ statistical summaries will be provided (mean, max, min, etc.). Further
+ metrics planned to be benchmarked include packet delay variation as
+      defined by <xref target="RFC5481"/>, reordering, burst behaviour, DUT
+ availability, DUT capacity and packet loss in long term testing at
+      Throughput level, where some low level of background loss may be present
+ and characterized.</t>
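+
+      <t>As an informal illustration, the sketch below computes the two delay
+      variation forms described in <xref target="RFC5481"/> from a list of
+      one-way delay samples in arrival order; the sample values are examples
+      only.</t>
+
+      <t><figure>
+          <preamble>Example sketch: inter-packet and packet delay
+          variation</preamble>
+
+          <artwork><![CDATA[
+# Illustrative sketch only: IPDV is the delay difference between
+# consecutive packets; PDV is each packet's delay minus the minimum
+# delay observed in the stream (delays in seconds, arrival order).
+def ipdv(delays):
+    return [b - a for a, b in zip(delays, delays[1:])]
+
+def pdv(delays):
+    d_min = min(delays)
+    return [d - d_min for d in delays]
+
+if __name__ == "__main__":
+    samples = [105e-6, 110e-6, 98e-6, 102e-6]
+    print("IPDV:", ipdv(samples))
+    print("PDV :", pdv(samples))
+]]></artwork>
+        </figure></t>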
+
+ <t>Tests have been (or will be) designed to collect the metrics
+ below:</t>
+
+ <t><list style="symbols">
+ <t>Throughput Tests to measure the maximum forwarding rate (in
+ frames per second or fps) and bit rate (in Mbps) for a constant load
+ (as defined by <xref target="RFC1242"/>) without traffic loss.</t>
+
+ <t>Packet and Frame Delay Distribution Tests to measure average, min
+ and max packet and frame delay for constant loads.</t>
+
+ <t>Packet Delay Tests to understand latency distribution for
+ different packet sizes and over an extended test run to uncover
+ outliers.</t>
+
+        <t>Scalability Tests to understand how the virtual switch performs
+        as the number of flows, active ports, complexity of the forwarding
+        logic&rsquo;s configuration, etc., that it has to deal with
+        increases.</t>
+
+ <t>Stream Performance Tests (TCP, UDP) to measure bulk data transfer
+ performance, i.e. how fast systems can send and receive data through
+ the switch.</t>
+
+ <t>Control Path and Datapath Coupling Tests, to understand how
+ closely coupled the datapath and the control path are as well as the
+ effect of this coupling on the performance of the DUT (example:
+ delay of the initial packet of a flow).</t>
+
+ <t>CPU and Memory Consumption Tests to understand the virtual
+ switch&rsquo;s footprint on the system, usually conducted as
+ auxiliary measurements with benchmarks above. They include: CPU
+ utilization, Cache utilization and Memory footprint.</t>
+
+ <t>The so-called "Soak" tests, where the selected test is conducted
+ over a long period of time (with an ideal duration of 24 hours, but
+ only long enough to determine that stability issues exist when
+ found; there is no requirement to continue a test when a DUT
+ exhibits instability over time). The key performance characteristics
+ and benchmarks for a DUT are determined (using short duration tests)
+ prior to conducting soak tests. The purpose of soak tests is to
+ capture transient changes in performance which may occur due to
+ infrequent processes, memory leaks, or the low probability
+ coincidence of two or more processes. The stability of the DUT is
+ the paramount consideration, so performance must be evaluated
+ periodically during continuous testing, and this results in use of
+ <xref target="RFC2889"/> Frame Rate metrics instead of <xref
+ target="RFC2544"/> Throughput (which requires stopping traffic to
+ allow time for all traffic to exit internal queues), for
+ example.</t>
+ </list></t>
+
+ <t>Future/planned test specs include:<list style="symbols">
+ <t>Request/Response Performance Tests (TCP, UDP) which measure the
+ transaction rate through the switch.</t>
+
+ <t>Noisy Neighbour Tests, to understand the effects of resource
+ sharing on the performance of a virtual switch.</t>
+
+ <t>Tests derived from examination of ETSI NFV Draft GS IFA003
+ requirements <xref target="IFA003"/> on characterization of
+ acceleration technologies applied to vswitches.</t>
+ </list>The flexibility of deployment of a virtual switch within a
+      network means that the existing IETF BMWG literature needs to be used to
+ characterize the performance of a switch in various deployment
+ scenarios. The deployment scenarios under consideration include:</t>
+
+ <t><figure>
+ <preamble>Physical port to virtual switch to physical
+ port</preamble>
+
+ <artwork><![CDATA[ __
+ +--------------------------------------------------+ |
+ | +--------------------+ | |
+ | | | | |
+ | | v | | Host
+ | +--------------+ +--------------+ | |
+ | | phy port | vSwitch | phy port | | |
+ +---+--------------+------------+--------------+---+ __|
+ ^ :
+ | |
+ : v
+ +--------------------------------------------------+
+ | |
+ | traffic generator |
+ | |
+ +--------------------------------------------------+]]></artwork>
+ </figure></t>
+
+ <t><figure>
+ <preamble>Physical port to virtual switch to VNF to virtual switch
+ to physical port</preamble>
+
+ <artwork><![CDATA[ __
+ +---------------------------------------------------+ |
+ | | |
+ | +-------------------------------------------+ | |
+ | | Application | | |
+ | +-------------------------------------------+ | |
+ | ^ : | |
+ | | | | | Guest
+ | : v | |
+ | +---------------+ +---------------+ | |
+ | | logical port 0| | logical port 1| | |
+ +---+---------------+-----------+---------------+---+ __|
+ ^ :
+ | |
+ : v __
+ +---+---------------+----------+---------------+---+ |
+ | | logical port 0| | logical port 1| | |
+ | +---------------+ +---------------+ | |
+ | ^ : | |
+ | | | | | Host
+ | : v | |
+ | +--------------+ +--------------+ | |
+ | | phy port | vSwitch | phy port | | |
+ +---+--------------+------------+--------------+---+ __|
+ ^ :
+ | |
+ : v
+ +--------------------------------------------------+
+ | |
+ | traffic generator |
+ | |
+ +--------------------------------------------------+]]></artwork>
+ </figure><figure>
+ <preamble>Physical port to virtual switch to VNF to virtual switch
+ to VNF to virtual switch to physical port</preamble>
+
+ <artwork><![CDATA[ __
+ +----------------------+ +----------------------+ |
+ | Guest 1 | | Guest 2 | |
+ | +---------------+ | | +---------------+ | |
+ | | Application | | | | Application | | |
+ | +---------------+ | | +---------------+ | |
+ | ^ | | | ^ | | |
+ | | v | | | v | | Guests
+ | +---------------+ | | +---------------+ | |
+ | | logical ports | | | | logical ports | | |
+ | | 0 1 | | | | 0 1 | | |
+ +---+---------------+--+ +---+---------------+--+__|
+ ^ : ^ :
+ | | | |
+ : v : v _
+ +---+---------------+---------+---------------+--+ |
+ | | 0 1 | | 3 4 | | |
+ | | logical ports | | logical ports | | |
+ | +---------------+ +---------------+ | |
+ | ^ | ^ | | | Host
+ | | |-----------------| v | |
+ | +--------------+ +--------------+ | |
+ | | phy ports | vSwitch | phy ports | | |
+ +---+--------------+----------+--------------+---+_|
+ ^ :
+ | |
+ : v
+ +--------------------------------------------------+
+ | |
+ | traffic generator |
+ | |
+ +--------------------------------------------------+]]></artwork>
+ </figure><figure>
+ <preamble>Physical port to virtual switch to VNF</preamble>
+
+ <artwork><![CDATA[ __
+ +---------------------------------------------------+ |
+ | | |
+ | +-------------------------------------------+ | |
+ | | Application | | |
+ | +-------------------------------------------+ | |
+ | ^ | |
+ | | | | Guest
+ | : | |
+ | +---------------+ | |
+ | | logical port 0| | |
+ +---+---------------+-------------------------------+ __|
+ ^
+ |
+ : __
+ +---+---------------+------------------------------+ |
+ | | logical port 0| | |
+ | +---------------+ | |
+ | ^ | |
+ | | | | Host
+ | : | |
+ | +--------------+ | |
+ | | phy port | vSwitch | |
+ +---+--------------+------------ -------------- ---+ __|
+ ^
+ |
+ :
+ +--------------------------------------------------+
+ | |
+ | traffic generator |
+ | |
+ +--------------------------------------------------+]]></artwork>
+ </figure><figure>
+ <preamble>VNF to virtual switch to physical port</preamble>
+
+ <artwork><![CDATA[ __
+ +---------------------------------------------------+ |
+ | | |
+ | +-------------------------------------------+ | |
+ | | Application | | |
+ | +-------------------------------------------+ | |
+ | : | |
+ | | | | Guest
+ | v | |
+ | +---------------+ | |
+ | | logical port | | |
+ +-------------------------------+---------------+---+ __|
+ :
+ |
+ v __
+ +------------------------------+---------------+---+ |
+ | | logical port | | |
+ | +---------------+ | |
+ | : | |
+ | | | | Host
+ | v | |
+ | +--------------+ | |
+ | vSwitch | phy port | | |
+ +-------------------------------+--------------+---+ __|
+ :
+ |
+ v
+ +--------------------------------------------------+
+ | |
+ | traffic generator |
+ | |
+ +--------------------------------------------------+]]></artwork>
+ </figure><figure>
+ <preamble>VNF to virtual switch to VNF</preamble>
+
+ <artwork><![CDATA[ __
+ +----------------------+ +----------------------+ |
+ | Guest 1 | | Guest 2 | |
+ | +---------------+ | | +---------------+ | |
+ | | Application | | | | Application | | |
+ | +---------------+ | | +---------------+ | |
+ | | | | ^ | |
+ | v | | | | | Guests
+ | +---------------+ | | +---------------+ | |
+ | | logical ports | | | | logical ports | | |
+ | | 0 | | | | 0 | | |
+ +---+---------------+--+ +---+---------------+--+__|
+ : ^
+ | |
+ v : _
+ +---+---------------+---------+---------------+--+ |
+ | | 1 | | 1 | | |
+ | | logical ports | | logical ports | | |
+ | +---------------+ +---------------+ | |
+ | | ^ | | Host
+ | L-----------------+ | |
+ | | |
+ | vSwitch | |
+ +------------------------------------------------+_|]]></artwork>
+ </figure></t>
+
+ <t>A set of Deployment Scenario figures is available on the VSPERF Test
+ Methodology Wiki page <xref target="TestTopo"/>.</t>
+ </section>
+
+ <section title="3x3 Matrix Coverage">
+ <t>This section organizes the many existing test specifications into the
+ "3x3" matrix (introduced in <xref target="I-D.ietf-bmwg-virtual-net"/>).
+ Because the LTD specification ID names are quite long, this section is
+ organized into lists for each occupied cell of the matrix (not all are
+      occupied; also, the matrix has grown to 3x4 to accommodate scale metrics
+ when displaying the coverage of many metrics/benchmarks). The current
+ version of the LTD specification is available <xref target="LTD"/>.</t>
+
+ <t>The tests listed below assess the activation of paths in the data
+ plane, rather than the control plane.</t>
+
+ <t>A complete list of tests with short summaries is available on the
+ VSPERF "LTD Test Spec Overview" Wiki page <xref target="LTDoverV"/>.</t>
+
+ <section title="Speed of Activation">
+ <t><list style="symbols">
+ <t>Activation.RFC2889.AddressLearningRate</t>
+
+ <t>PacketLatency.InitialPacketProcessingLatency</t>
+ </list></t>
+ </section>
+
+ <section title="Accuracy of Activation section">
+ <t><list style="symbols">
+ <t>CPDP.Coupling.Flow.Addition</t>
+ </list></t>
+ </section>
+
+ <section title="Reliability of Activation">
+ <t><list style="symbols">
+ <t>Throughput.RFC2544.SystemRecoveryTime</t>
+
+ <t>Throughput.RFC2544.ResetTime</t>
+ </list></t>
+ </section>
+
+ <section title="Scale of Activation">
+ <t><list style="symbols">
+ <t>Activation.RFC2889.AddressCachingCapacity</t>
+ </list></t>
+ </section>
+
+ <section title="Speed of Operation">
+ <t><list style="symbols">
+ <t>Throughput.RFC2544.PacketLossRate</t>
+
+ <t>CPU.RFC2544.0PacketLoss</t>
+
+ <t>Throughput.RFC2544.PacketLossRateFrameModification</t>
+
+ <t>Throughput.RFC2544.BackToBackFrames</t>
+
+ <t>Throughput.RFC2889.MaxForwardingRate</t>
+
+ <t>Throughput.RFC2889.ForwardPressure</t>
+
+ <t>Throughput.RFC2889.BroadcastFrameForwarding</t>
+ </list></t>
+ </section>
+
+ <section title="Accuracy of Operation">
+ <t><list style="symbols">
+ <t>Throughput.RFC2889.ErrorFramesFiltering</t>
+
+ <t>Throughput.RFC2544.Profile</t>
+ </list></t>
+ </section>
+
+ <section title="Reliability of Operation">
+ <t><list style="symbols">
+ <t>Throughput.RFC2889.Soak</t>
+
+ <t>Throughput.RFC2889.SoakFrameModification</t>
+
+ <t>PacketDelayVariation.RFC3393.Soak</t>
+ </list></t>
+ </section>
+
+ <section title="Scalability of Operation">
+ <t><list style="symbols">
+ <t>Scalability.RFC2544.0PacketLoss</t>
+
+ <t>MemoryBandwidth.RFC2544.0PacketLoss.Scalability</t>
+ </list></t>
+ </section>
+
+ <section title="Summary">
+ <t><figure>
+ <artwork><![CDATA[|------------------------------------------------------------------------|
+| | | | | |
+| | SPEED | ACCURACY | RELIABILITY | SCALE |
+| | | | | |
+|------------------------------------------------------------------------|
+| | | | | |
+| Activation | X | X | X | X |
+| | | | | |
+|------------------------------------------------------------------------|
+| | | | | |
+| Operation | X | X | X | X |
+| | | | | |
+|------------------------------------------------------------------------|
+| | | | | |
+| De-activation | | | | |
+| | | | | |
+|------------------------------------------------------------------------|]]></artwork>
+ </figure></t>
+ </section>
+ </section>
+
+ <section title="Security Considerations">
+ <t>Benchmarking activities as described in this memo are limited to
+ technology characterization of a Device Under Test/System Under Test
+ (DUT/SUT) using controlled stimuli in a laboratory environment, with
+ dedicated address space and the constraints specified in the sections
+ above.</t>
+
+ <t>The benchmarking network topology will be an independent test setup
+ and MUST NOT be connected to devices that may forward the test traffic
+ into a production network, or misroute traffic to the test management
+ network.</t>
+
+ <t>Further, benchmarking is performed on a "black-box" basis, relying
+ solely on measurements observable external to the DUT/SUT.</t>
+
+ <t>Special capabilities SHOULD NOT exist in the DUT/SUT specifically for
+ benchmarking purposes. Any implications for network security arising
+ from the DUT/SUT SHOULD be identical in the lab and in production
+ networks.</t>
+ </section>
+
+ <section anchor="IANA" title="IANA Considerations">
+ <t>No IANA Action is requested at this time.</t>
+ </section>
+
+ <section title="Acknowledgements">
+ <t>The authors appreciate and acknowledge comments from Scott Bradner,
+ Marius Georgescu, Ramki Krishnan, Doug Montgomery, Martin Klozik,
+ Christian Trautman, and others for their reviews.</t>
+ </section>
+ </middle>
+
+ <back>
+ <references title="Normative References">
+ <?rfc ?>
+
+ <?rfc include="reference.RFC.2119"?>
+
+ <?rfc ?>
+
+ <?rfc include="reference.RFC.2330"?>
+
+ <?rfc include='reference.RFC.2544'?>
+
+ <?rfc include="reference.RFC.2679"?>
+
+ <?rfc include='reference.RFC.2680'?>
+
+ <?rfc include='reference.RFC.3393'?>
+
+ <?rfc include='reference.RFC.3432'?>
+
+ <?rfc include='reference.RFC.2681'?>
+
+ <?rfc include='reference.RFC.5905'?>
+
+ <?rfc include='reference.RFC.4689'?>
+
+ <?rfc include='reference.RFC.4737'?>
+
+ <?rfc include='reference.RFC.5357'?>
+
+ <?rfc include='reference.RFC.2889'?>
+
+ <?rfc include='reference.RFC.3918'?>
+
+ <?rfc include='reference.RFC.6201'?>
+
+ <?rfc include='reference.RFC.2285'?>
+
+ <reference anchor="NFV.PER001">
+ <front>
+ <title>Network Function Virtualization: Performance and Portability
+ Best Practices</title>
+
+ <author fullname="ETSI NFV" initials="" surname="">
+ <organization/>
+ </author>
+
+ <date month="June" year="2014"/>
+ </front>
+
+ <seriesInfo name="Group Specification"
+ value="ETSI GS NFV-PER 001 V1.1.1 (2014-06)"/>
+
+ <format type="PDF"/>
+ </reference>
+ </references>
+
+ <references title="Informative References">
+ <?rfc include='reference.RFC.1242'?>
+
+ <?rfc include='reference.RFC.5481'?>
+
+ <?rfc include='reference.RFC.6049'?>
+
+ <?rfc include='reference.RFC.6248'?>
+
+ <?rfc include='reference.RFC.6390'?>
+
+ <?rfc include='reference.I-D.ietf-bmwg-virtual-net'?>
+
+ <?rfc include='reference.I-D.huang-bmwg-virtual-network-performance'?>
+
+ <reference anchor="TestTopo">
+ <front>
+ <title>Test Topologies
+ https://wiki.opnfv.org/vsperf/test_methodology</title>
+
+ <author>
+ <organization/>
+ </author>
+
+ <date/>
+ </front>
+ </reference>
+
+ <reference anchor="LTDoverV">
+ <front>
+ <title>LTD Test Spec Overview
+ https://wiki.opnfv.org/wiki/vswitchperf_test_spec_review</title>
+
+ <author>
+ <organization/>
+ </author>
+
+ <date/>
+ </front>
+ </reference>
+
+ <reference anchor="LTD">
+ <front>
+ <title>LTD Test Specification
+ http://artifacts.opnfv.org/vswitchperf/brahmaputra/docs/requirements/index.html</title>
+
+ <author>
+ <organization/>
+ </author>
+
+ <date/>
+ </front>
+ </reference>
+
+ <reference anchor="BrahRel">
+ <front>
+ <title>Brahmaputra, Second OPNFV Release
+ https://www.opnfv.org/brahmaputra</title>
+
+ <author>
+ <organization/>
+ </author>
+
+ <date/>
+ </front>
+ </reference>
+
+ <reference anchor="IFA003">
+ <front>
+ <title>https://docbox.etsi.org/ISG/NFV/Open/Drafts/IFA003_Acceleration_-_vSwitch_Spec/</title>
+
+ <author>
+ <organization/>
+ </author>
+
+ <date/>
+ </front>
+ </reference>
+ </references>
+ </back>
+</rfc>
diff --git a/docs/testing/developer/requirements/ietf_draft/draft-vsperf-bmwg-vswitch-opnfv-00.xml b/docs/testing/developer/requirements/ietf_draft/draft-vsperf-bmwg-vswitch-opnfv-00.xml
new file mode 100644
index 00000000..b5f7f833
--- /dev/null
+++ b/docs/testing/developer/requirements/ietf_draft/draft-vsperf-bmwg-vswitch-opnfv-00.xml
@@ -0,0 +1,964 @@
+<?xml version="1.0" encoding="US-ASCII"?>
+<!DOCTYPE rfc SYSTEM "rfc2629.dtd">
+<?rfc toc="yes"?>
+<?rfc tocompact="yes"?>
+<?rfc tocdepth="3"?>
+<?rfc tocindent="yes"?>
+<?rfc symrefs="yes"?>
+<?rfc sortrefs="yes"?>
+<?rfc comments="yes"?>
+<?rfc inline="yes"?>
+<?rfc compact="yes"?>
+<?rfc subcompact="no"?>
+<rfc category="info" docName="draft-vsperf-bmwg-vswitch-opnfv-01"
+ ipr="trust200902">
+ <front>
+ <title abbrev="Benchmarking vSwitches">Benchmarking Virtual Switches in
+ OPNFV</title>
+
+ <author fullname="Maryam Tahhan" initials="M." surname="Tahhan">
+ <organization>Intel</organization>
+
+ <address>
+ <postal>
+ <street/>
+
+ <city/>
+
+ <region/>
+
+ <code/>
+
+ <country/>
+ </postal>
+
+ <phone/>
+
+ <facsimile/>
+
+ <email>maryam.tahhan@intel.com</email>
+
+ <uri/>
+ </address>
+ </author>
+
+ <author fullname="Billy O'Mahony" initials="B." surname="O'Mahony">
+ <organization>Intel</organization>
+
+ <address>
+ <postal>
+ <street/>
+
+ <city/>
+
+ <region/>
+
+ <code/>
+
+ <country/>
+ </postal>
+
+ <phone/>
+
+ <facsimile/>
+
+ <email>billy.o.mahony@intel.com</email>
+
+ <uri/>
+ </address>
+ </author>
+
+ <author fullname="Al Morton" initials="A." surname="Morton">
+ <organization>AT&amp;T Labs</organization>
+
+ <address>
+ <postal>
+ <street>200 Laurel Avenue South</street>
+
+ <city>Middletown,</city>
+
+ <region>NJ</region>
+
+ <code>07748</code>
+
+ <country>USA</country>
+ </postal>
+
+ <phone>+1 732 420 1571</phone>
+
+ <facsimile>+1 732 368 1192</facsimile>
+
+ <email>acmorton@att.com</email>
+
+ <uri>http://home.comcast.net/~acmacm/</uri>
+ </address>
+ </author>
+
+ <date day="14" month="October" year="2015"/>
+
+ <abstract>
+ <t>This memo describes the progress of the Open Platform for NFV (OPNFV)
+ project on virtual switch performance "VSWITCHPERF". This project
+ intends to build on the current and completed work of the Benchmarking
+ Methodology Working Group in IETF, by referencing existing literature.
+ The Benchmarking Methodology Working Group has traditionally conducted
+ laboratory characterization of dedicated physical implementations of
+ internetworking functions. Therefore, this memo begins to describe the
+ additional considerations when virtual switches are implemented in
+ general-purpose hardware. The expanded tests and benchmarks are also
+ influenced by the OPNFV mission to support virtualization of the "telco"
+ infrastructure.</t>
+ </abstract>
+
+ <note title="Requirements Language">
+ <t>The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
+ "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
+ document are to be interpreted as described in <xref
+ target="RFC2119">RFC 2119</xref>.</t>
+
+ <t/>
+ </note>
+ </front>
+
+ <middle>
+ <section title="Introduction">
+ <t>Benchmarking Methodology Working Group (BMWG) has traditionally
+ conducted laboratory characterization of dedicated physical
+ implementations of internetworking functions. The Black-box Benchmarks
+ of Throughput, Latency, Forwarding Rates and others have served our
+ industry for many years. Now, Network Function Virtualization (NFV) has
+ the goal to transform how internetwork functions are implemented, and
+ therefore has garnered much attention.</t>
+
+ <t>This memo describes the progress of the Open Platform for NFV (OPNFV)
+ project on virtual switch performance characterization, "VSWITCHPERF".
+ This project intends to build on the current and completed work of the
+ Benchmarking Methodology Working Group in IETF, by referencing existing
+ literature. For example, currently the most often referenced RFC is
+ <xref target="RFC2544"/> (which depends on <xref target="RFC1242"/>) and
+ the foundation of the benchmarking work in OPNFV is common and strong.</t>
+
+ <t>See
+ https://wiki.opnfv.org/characterize_vswitch_performance_for_telco_nfv_use_cases
+ for more background, and the OPNFV website for general information:
+ https://www.opnfv.org/</t>
+
+ <t>The authors note that OPNFV distinguishes itself from other open
+ source compute and networking projects through its emphasis on existing
+ "telco" services as opposed to cloud-computing. There are many ways in
+ which telco requirements have different emphasis on performance
+ dimensions when compared to cloud computing: support for and transfer of
+ isochronous media streams is one example.</t>
+
+ <t>Note also that the move to NFV Infrastructure has resulted in many
+ new benchmarking initiatives across the industry, and the authors are
+ currently doing their best to maintain alignment with many other
+ projects, and this Internet Draft is evidence of the efforts.</t>
+ </section>
+
+ <section title="Scope">
+ <t>The primary purpose and scope of the memo is to inform BMWG of
+ work-in-progress that builds on the body of extensive literature and
+ experience. Additionally, once the initial information conveyed here is
+ received, this memo may be expanded to include more detail and
+ commentary from both BMWG and OPNFV communities, under BMWG's chartered
+ work to characterize the NFV Infrastructure (a virtual switch is an
+ important aspect of that infrastructure).</t>
+ </section>
+
+ <section title="Benchmarking Considerations">
+ <t>This section highlights some specific considerations (from <xref
+ target="I-D.ietf-bmwg-virtual-net"/>) related to Benchmarks for virtual
+ switches. The OPNFV project is sharing its present view on these areas,
+ as they develop their specifications in the Level Test Design (LTD)
+ document.</t>
+
+ <section title="Comparison with Physical Network Functions">
+ <t>To compare the performance of virtual designs and implementations
+ with their physical counterparts, identical benchmarks are needed.
+ BMWG has developed specifications for many network functions this memo
+ re-uses existing benchmarks through references, and expands them
+ during development of new methods. A key configuration aspect is the
+ number of parallel cores required to achieve comparable performance
+ with a given physical device, or whether some limit of scale was
+ reached before the cores could achieve the comparable level.</t>
+
+ <t>It's unlikely that the virtual switch will be the only application
+ running on the SUT, so CPU utilization, Cache utilization, and Memory
+ footprint should also be recorded for the virtual implementations of
+ internetworking functions.</t>
+ </section>
+
+ <section title="Continued Emphasis on Black-Box Benchmarks">
+ <t>External observations remain essential as the basis for Benchmarks.
+ Internal observations with fixed specification and interpretation will
+ be provided in parallel to assist the development of operations
+ procedures when the technology is deployed.</t>
+ </section>
+
+ <section title="New Configuration Parameters">
+ <t>A key consideration when conducting any sort of benchmark is trying
+ to ensure the consistency and repeatability of test results. When
+ benchmarking the performance of a vSwitch, there are many factors that
+ can affect the consistency of results; one key factor is matching the
+ various hardware and software details of the SUT. This section lists
+ some of the many new parameters which this project believes are
+ critical to report in order to achieve repeatability.</t>
+
+ <t>Hardware details including:</t>
+
+ <t><list style="symbols">
+ <t>Platform details</t>
+
+ <t>Processor details</t>
+
+ <t>Memory information (type and size)</t>
+
+ <t>Number of enabled cores</t>
+
+ <t>Number of cores used for the test</t>
+
+ <t>Number of physical NICs, as well as their details
+ (manufacturer, versions, type and the PCI slot they are plugged
+ into)</t>
+
+ <t>NIC interrupt configuration</t>
+
+ <t>BIOS version, release date and any configurations that were
+ modified</t>
+
+ <t>CPU microcode level</t>
+
+ <t>Memory DIMM configurations (quad rank performance may not be
+ the same as dual rank) in size, frequency and slot locations</t>
+
+ <t>PCI configuration parameters (payload size, early ack
+ option...)</t>
+
+ <t>Power management at all levels (ACPI sleep states, processor
+ package, OS...)</t>
+ </list>Software details including:</t>
+
+ <t><list style="symbols">
+ <t>OS parameters and behavior (text vs. graphical; no one typing at
+ the console on one system)</t>
+
+ <t>OS version (for host and VNF)</t>
+
+ <t>Kernel version (for host and VNF)</t>
+
+ <t>GRUB boot parameters (for host and VNF)</t>
+
+ <t>Hypervisor details (Type and version)</t>
+
+ <t>Selected vSwitch, version number or commit id used</t>
+
+ <t>vSwitch launch command line if it has been parameterised</t>
+
+ <t>Memory allocation to the vSwitch</t>
+
+ <t>Which NUMA node it is using, and how many memory channels</t>
+
+ <t>DPDK or any other SW dependency version number or commit id
+ used</t>
+
+ <t>Memory allocation to a VM - if it's from Hugepages/elsewhere</t>
+
+ <t>VM storage type: snapshot/independent persistent/independent
+ non-persistent</t>
+
+ <t>Number of VMs</t>
+
+ <t>Number of Virtual NICs (vNICs), versions, type and driver</t>
+
+ <t>Number of virtual CPUs and their core affinity on the host</t>
+
+ <t>vNIC interrupt configuration</t>
+
+ <t>Thread affinitization for the applications (including the
+ vSwitch itself) on the host</t>
+
+ <t>Details of Resource isolation, such as CPUs designated for
+ Host/Kernel (isolcpus) and CPUs designated for specific processes
+ (taskset)</t>
+
+ <t>Test duration</t>
+
+ <t>Number of flows</t>
+ </list></t>
+
+ <t>Test Traffic Information:<list style="symbols">
+ <t>Traffic type - UDP, TCP, IMIX / Other</t>
+
+ <t>Packet Sizes</t>
+
+ <t>Deployment Scenario</t>
+ </list></t>
+
+ <t/>
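+
+ <t>For illustration only, the following minimal sketch (not part of the
+ LTD) shows how a subset of the parameters listed above could be captured
+ automatically. It assumes a Linux-based host and an Open vSwitch DUT; the
+ file paths and commands shown are assumptions of this example rather than
+ requirements.</t>
+
+ <figure>
+ <preamble>Sketch: capturing a subset of SUT details (Python)</preamble>
+
+ <artwork><![CDATA[
+# Illustrative only: gather a few of the SUT details listed above on a
+# Linux host. Paths and commands are assumptions, not LTD requirements.
+import json
+import platform
+import subprocess
+
+def read_first_match(path, key):
+    """Return the value of the first 'key : value' line in a /proc file."""
+    try:
+        with open(path) as f:
+            for line in f:
+                if line.startswith(key):
+                    return line.split(":", 1)[1].strip()
+    except OSError:
+        pass
+    return "unknown"
+
+report = {
+    "kernel_version": platform.release(),
+    "processor": read_first_match("/proc/cpuinfo", "model name"),
+    "cpu_microcode": read_first_match("/proc/cpuinfo", "microcode"),
+    "memory_total": read_first_match("/proc/meminfo", "MemTotal"),
+    "hugepages_total": read_first_match("/proc/meminfo", "HugePages_Total"),
+}
+
+# GRUB boot parameters (isolcpus, hugepages, ...) as seen by the kernel
+try:
+    with open("/proc/cmdline") as f:
+        report["boot_cmdline"] = f.read().strip()
+except OSError:
+    report["boot_cmdline"] = "unknown"
+
+# Selected vSwitch version; assumes Open vSwitch is the vSwitch under test
+try:
+    out = subprocess.run(["ovs-vsctl", "--version"],
+                         capture_output=True, text=True, check=True)
+    report["vswitch_version"] = out.stdout.splitlines()[0]
+except (OSError, subprocess.CalledProcessError, IndexError):
+    report["vswitch_version"] = "unknown"
+
+print(json.dumps(report, indent=2))
+]]></artwork>
+ </figure>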
+ </section>
+
+ <section title="Flow classification">
+ <t>Virtual switches group packets into flows by processing and
+ matching particular packet or frame header information, or by matching
+ packets based on the input ports. Thus a flow can be thought of as a
+ sequence of packets that have the same set of header field values or
+ have arrived on the same port. Performance results can vary based on
+ the parameters the vSwitch uses to match for a flow. The recommended
+ flow classification parameters for any vSwitch performance tests are:
+ the input port, the source IP address, the destination IP address and
+ the Ethernet protocol type field. It is essential to increase the flow
+ timeout time on a vSwitch before conducting any performance tests that
+ do not measure the flow setup time. Normally the first packet of a
+ particular stream will install the flow in the virtual switch, which
+ adds an additional latency; subsequent packets of the same flow are
+ not subject to this latency if the flow is already installed on the
+ vSwitch.</t>
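+
+ <t>For illustration only (the LTD does not mandate a particular vSwitch),
+ the sketch below assumes an Open vSwitch DUT managed with ovs-ofctl and
+ installs a flow that matches on the recommended classification fields
+ with no idle timeout, so that flow expiry does not perturb tests which
+ are not measuring flow setup time. The bridge name, port numbers and
+ addresses are placeholders.</t>
+
+ <figure>
+ <preamble>Sketch: pre-installing a non-expiring flow (Python)</preamble>
+
+ <artwork><![CDATA[
+# Illustrative only: assumes an Open vSwitch DUT managed via ovs-ofctl.
+# The flow matches the recommended classification fields (input port,
+# source IP, destination IP, Ethernet protocol type) and uses
+# idle_timeout=0 so it never expires during steady-state tests.
+import subprocess
+
+BRIDGE = "br0"          # placeholder bridge name
+flow = (
+    "in_port=1,"        # input port
+    "dl_type=0x0800,"   # Ethernet protocol type (IPv4)
+    "nw_src=10.0.0.1,"  # source IP address
+    "nw_dst=10.0.0.2,"  # destination IP address
+    "idle_timeout=0,"   # 0 = permanent flow entry
+    "actions=output:2"
+)
+
+subprocess.run(["ovs-ofctl", "add-flow", BRIDGE, flow], check=True)
+]]></artwork>
+ </figure>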
+ </section>
+
+ <section title="Benchmarks using Baselines with Resource Isolation">
+ <t>This outline describes measurement of baseline with isolated
+ resources at a high level, which is the intended approach at this
+ time.</t>
+
+ <t><list style="numbers">
+ <t>Baselines: <list style="symbols">
+ <t>Optional: Benchmark platform forwarding capability without
+ a vswitch or VNF for at least 72 hours (serves as a means of
+ platform validation and a means to obtain the base performance
+ for the platform in terms of its maximum forwarding rate and
+ latency). <figure>
+ <preamble>Benchmark platform forwarding
+ capability</preamble>
+
+ <artwork align="right"><![CDATA[ __
+ +--------------------------------------------------+ |
+ | +------------------------------------------+ | |
+ | | | | |
+ | | Simple Forwarding App | | Host
+ | | | | |
+ | +------------------------------------------+ | |
+ | | NIC | | |
+ +---+------------------------------------------+---+ __|
+ ^ :
+ | |
+ : v
+ +--------------------------------------------------+
+ | |
+ | traffic generator |
+ | |
+ +--------------------------------------------------+]]></artwork>
+
+ <postamble/>
+ </figure></t>
+
+ <t>Benchmark VNF forwarding capability with direct
+ connectivity (vSwitch bypass, e.g., SR/IOV) for at least 72
+ hours (serves as a means of VNF validation and a means to
+ obtain the base performance for the VNF in terms of its
+ maximum forwarding rate and latency). The metrics gathered
+ from this test will serve as a key comparison point for
+ vSwitch bypass technologies performance and vSwitch
+ performance. <figure align="right">
+ <preamble>Benchmark VNF forwarding capability</preamble>
+
+ <artwork><![CDATA[ __
+ +--------------------------------------------------+ |
+ | +------------------------------------------+ | |
+ | | | | |
+ | | VNF | | |
+ | | | | |
+ | +------------------------------------------+ | |
+ | | Passthrough/SR-IOV | | Host
+ | +------------------------------------------+ | |
+ | | NIC | | |
+ +---+------------------------------------------+---+ __|
+ ^ :
+ | |
+ : v
+ +--------------------------------------------------+
+ | |
+ | traffic generator |
+ | |
+ +--------------------------------------------------+]]></artwork>
+
+ <postamble/>
+ </figure></t>
+
+ <t>Benchmarking with isolated resources alone, with other
+ resources (both HW&amp;SW) disabled. Example: the vSwitch and VM are
+ the SUT</t>
+
+ <t>Benchmarking with isolated resources alone, leaving some
+ resources unused</t>
+
+ <t>Benchmark with isolated resources and all resources
+ occupied</t>
+ </list></t>
+
+ <t>Next Steps<list style="symbols">
+ <t>Limited sharing</t>
+
+ <t>Production scenarios</t>
+
+ <t>Stressful scenarios</t>
+ </list></t>
+ </list></t>
+ </section>
+ </section>
+
+ <section title="VSWITCHPERF Specification Summary">
+ <t>The overall specification in preparation is referred to as a Level
+ Test Design (LTD) document, which will contain a suite of performance
+ tests. The base performance tests in the LTD are based on the
+ pre-existing specifications developed by BMWG to test the performance of
+ physical switches. These specifications include:</t>
+
+ <t><list style="symbols">
+ <t><xref target="RFC2544"/> Benchmarking Methodology for Network
+ Interconnect Devices</t>
+
+ <t><xref target="RFC2889"/> Benchmarking Methodology for LAN
+ Switching</t>
+
+ <t><xref target="RFC6201"/> Device Reset Characterization</t>
+
+ <t><xref target="RFC5481"/> Packet Delay Variation Applicability
+ Statement</t>
+ </list></t>
+
+ <t>Some of the above/newer RFCs are being applied in benchmarking for
+ the first time, and represent a development challenge for test equipment
+ developers. Fortunately, many members of the testing system community
+ have engaged on the VSPERF project, including an open source test
+ system.</t>
+
+ <t>In addition to this, the LTD also re-uses the terminology defined
+ by:</t>
+
+ <t><list style="symbols">
+ <t><xref target="RFC2285"/> Benchmarking Terminology for LAN
+ Switching Devices</t>
+
+ <t><xref target="RFC5481"/> Packet Delay Variation Applicability
+ Statement</t>
+ </list></t>
+
+ <t/>
+
+ <t>Specifications to be included in future updates of the LTD
+ include:<list style="symbols">
+ <t><xref target="RFC3918"/> Methodology for IP Multicast
+ Benchmarking</t>
+
+ <t><xref target="RFC4737"/> Packet Reordering Metrics</t>
+ </list></t>
+
+ <t>As one might expect, the most fundamental internetworking
+ characteristics of Throughput and Latency remain important when the
+ switch is virtualized, and these benchmarks figure prominently in the
+ specification.</t>
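+
+ <t>To make the Throughput procedure concrete, the sketch below shows an
+ RFC 2544-style binary search for the highest offered load with zero
+ frame loss. The send_and_count() function is a placeholder for the
+ traffic generator's API and is an assumption of this illustration, not
+ part of the specification.</t>
+
+ <figure>
+ <preamble>Sketch: RFC 2544-style zero-loss throughput search
+ (Python)</preamble>
+
+ <artwork><![CDATA[
+# Illustrative RFC 2544-style binary search for zero-loss throughput.
+# send_and_count(rate_fps, duration) is a placeholder for the traffic
+# generator's API; it offers traffic at rate_fps for 'duration' seconds
+# and returns the number of frames received back from the DUT.
+def rfc2544_throughput(send_and_count, line_rate_fps,
+                       duration=60, resolution_fps=1000):
+    lo, hi = 0.0, float(line_rate_fps)
+    best = 0.0
+    while hi - lo > resolution_fps:
+        rate = (lo + hi) / 2.0
+        sent = int(rate * duration)
+        received = send_and_count(rate_fps=rate, duration=duration)
+        if received == sent:        # zero frame loss at this rate
+            best, lo = rate, rate   # remember it and search higher
+        else:
+            hi = rate               # loss observed: search lower
+    return best                     # maximum zero-loss rate found (fps)
+]]></artwork>
+ </figure>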
+
+ <t>When considering characteristics important to "telco" network
+ functions, we must begin to consider additional performance metrics. In
+ this case, the project specifications have referenced metrics from the
+ IETF IP Performance Metrics (IPPM) literature. This means that the <xref
+ target="RFC2544"/> test of Latency is replaced by measurement of a
+ metric derived from IPPM's <xref target="RFC2679"/>, where a set of
+ statistical summaries will be provided (mean, max, min, etc.). Further
+ metrics planned to be benchmarked include packet delay variation as
+ defined by <xref target="RFC5481"/>, reordering, burst behaviour, DUT
+ availability, DUT capacity and packet loss in long term testing at
+ Throughput level, where some low-level of background loss may be present
+ and characterized.</t>
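+
+ <t>The statistical summaries and the packet delay variation metric can be
+ illustrated with the short sketch below. It assumes a list of one-way
+ delay samples (in seconds) has already been collected from a test run;
+ following the <xref target="RFC5481"/> PDV form, variation is computed
+ relative to the minimum delay of the sample.</t>
+
+ <figure>
+ <preamble>Sketch: delay summaries and PDV (Python)</preamble>
+
+ <artwork><![CDATA[
+# Illustrative summary of one-way delay samples and RFC 5481-style PDV
+# (each packet's delay minus the minimum delay observed in the sample).
+import statistics
+
+def delay_summary(delays):
+    """delays: non-empty list of one-way delay samples in seconds."""
+    d_min = min(delays)
+    pdv = sorted(d - d_min for d in delays)       # PDV series
+    p99 = pdv[int(0.99 * (len(pdv) - 1))]         # nearest-rank 99th pct.
+    return {
+        "mean": statistics.mean(delays),
+        "min": d_min,
+        "max": max(delays),
+        "pdv_p99": p99,
+    }
+]]></artwork>
+ </figure>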
+
+ <t>Tests have been (or will be) designed to collect the metrics
+ below:</t>
+
+ <t><list style="symbols">
+ <t>Throughput Tests to measure the maximum forwarding rate (in
+ frames per second or fps) and bit rate (in Mbps) for a constant load
+ (as defined by RFC1242) without traffic loss.</t>
+
+ <t>Packet and Frame Delay Distribution Tests to measure average, min
+ and max packet and frame delay for constant loads.</t>
+
+ <t>Packet Delay Tests to understand latency distribution for
+ different packet sizes and over an extended test run to uncover
+ outliers.</t>
+
+ <t>Scalability Tests to understand how the virtual switch performs
+ as the number of flows, active ports, and the complexity of the
+ forwarding logic&rsquo;s configuration it has to deal with
+ increase.</t>
+
+ <t>Stream Performance Tests (TCP, UDP) to measure bulk data transfer
+ performance, i.e. how fast systems can send and receive data through
+ the switch.</t>
+
+ <t>Control Path and Datapath Coupling Tests, to understand how
+ closely coupled the datapath and the control path are as well as the
+ effect of this coupling on the performance of the DUT (example:
+ delay of the initial packet of a flow).</t>
+
+ <t>CPU and Memory Consumption Tests to understand the virtual
+ switch&rsquo;s footprint on the system, usually conducted as
+ auxiliary measurements with benchmarks above. They include: CPU
+ utilization, Cache utilization and Memory footprint.</t>
+ </list></t>
+
+ <t>Future/planned test specs include:<list style="symbols">
+ <t>Request/Response Performance Tests (TCP, UDP) which measure the
+ transaction rate through the switch.</t>
+
+ <t>Noisy Neighbour Tests, to understand the effects of resource
+ sharing on the performance of a virtual switch.</t>
+
+ <t>Tests derived from examination of ETSI NFV Draft GS IFA003
+ requirements <xref target="IFA003"/> on characterization of
+ acceleration technologies applied to vswitches.</t>
+ </list>The flexibility of deployment of a virtual switch within a
+ network means that the existing BMWG IETF literature needs to be used to
+ characterize the performance of a switch in various deployment
+ scenarios. The deployment scenarios under consideration include:</t>
+
+ <t><figure>
+ <preamble>Physical port to virtual switch to physical
+ port</preamble>
+
+ <artwork><![CDATA[ __
+ +--------------------------------------------------+ |
+ | +--------------------+ | |
+ | | | | |
+ | | v | | Host
+ | +--------------+ +--------------+ | |
+ | | phy port | vSwitch | phy port | | |
+ +---+--------------+------------+--------------+---+ __|
+ ^ :
+ | |
+ : v
+ +--------------------------------------------------+
+ | |
+ | traffic generator |
+ | |
+ +--------------------------------------------------+]]></artwork>
+ </figure></t>
+
+ <t><figure>
+ <preamble>Physical port to virtual switch to VNF to virtual switch
+ to physical port</preamble>
+
+ <artwork><![CDATA[ __
+ +---------------------------------------------------+ |
+ | | |
+ | +-------------------------------------------+ | |
+ | | Application | | |
+ | +-------------------------------------------+ | |
+ | ^ : | |
+ | | | | | Guest
+ | : v | |
+ | +---------------+ +---------------+ | |
+ | | logical port 0| | logical port 1| | |
+ +---+---------------+-----------+---------------+---+ __|
+ ^ :
+ | |
+ : v __
+ +---+---------------+----------+---------------+---+ |
+ | | logical port 0| | logical port 1| | |
+ | +---------------+ +---------------+ | |
+ | ^ : | |
+ | | | | | Host
+ | : v | |
+ | +--------------+ +--------------+ | |
+ | | phy port | vSwitch | phy port | | |
+ +---+--------------+------------+--------------+---+ __|
+ ^ :
+ | |
+ : v
+ +--------------------------------------------------+
+ | |
+ | traffic generator |
+ | |
+ +--------------------------------------------------+]]></artwork>
+ </figure><figure>
+ <preamble>Physical port to virtual switch to VNF to virtual switch
+ to VNF to virtual switch to physical port</preamble>
+
+ <artwork><![CDATA[ __
+ +----------------------+ +----------------------+ |
+ | Guest 1 | | Guest 2 | |
+ | +---------------+ | | +---------------+ | |
+ | | Application | | | | Application | | |
+ | +---------------+ | | +---------------+ | |
+ | ^ | | | ^ | | |
+ | | v | | | v | | Guests
+ | +---------------+ | | +---------------+ | |
+ | | logical ports | | | | logical ports | | |
+ | | 0 1 | | | | 0 1 | | |
+ +---+---------------+--+ +---+---------------+--+__|
+ ^ : ^ :
+ | | | |
+ : v : v _
+ +---+---------------+---------+---------------+--+ |
+ | | 0 1 | | 3 4 | | |
+ | | logical ports | | logical ports | | |
+ | +---------------+ +---------------+ | |
+ | ^ | ^ | | | Host
+ | | |-----------------| v | |
+ | +--------------+ +--------------+ | |
+ | | phy ports | vSwitch | phy ports | | |
+ +---+--------------+----------+--------------+---+_|
+ ^ :
+ | |
+ : v
+ +--------------------------------------------------+
+ | |
+ | traffic generator |
+ | |
+ +--------------------------------------------------+]]></artwork>
+ </figure><figure>
+ <preamble>Physical port to virtual switch to VNF</preamble>
+
+ <artwork><![CDATA[ __
+ +---------------------------------------------------+ |
+ | | |
+ | +-------------------------------------------+ | |
+ | | Application | | |
+ | +-------------------------------------------+ | |
+ | ^ | |
+ | | | | Guest
+ | : | |
+ | +---------------+ | |
+ | | logical port 0| | |
+ +---+---------------+-------------------------------+ __|
+ ^
+ |
+ : __
+ +---+---------------+------------------------------+ |
+ | | logical port 0| | |
+ | +---------------+ | |
+ | ^ | |
+ | | | | Host
+ | : | |
+ | +--------------+ | |
+ | | phy port | vSwitch | |
+ +---+--------------+------------ -------------- ---+ __|
+ ^
+ |
+ :
+ +--------------------------------------------------+
+ | |
+ | traffic generator |
+ | |
+ +--------------------------------------------------+]]></artwork>
+ </figure><figure>
+ <preamble>VNF to virtual switch to physical port</preamble>
+
+ <artwork><![CDATA[ __
+ +---------------------------------------------------+ |
+ | | |
+ | +-------------------------------------------+ | |
+ | | Application | | |
+ | +-------------------------------------------+ | |
+ | : | |
+ | | | | Guest
+ | v | |
+ | +---------------+ | |
+ | | logical port | | |
+ +-------------------------------+---------------+---+ __|
+ :
+ |
+ v __
+ +------------------------------+---------------+---+ |
+ | | logical port | | |
+ | +---------------+ | |
+ | : | |
+ | | | | Host
+ | v | |
+ | +--------------+ | |
+ | vSwitch | phy port | | |
+ +-------------------------------+--------------+---+ __|
+ :
+ |
+ v
+ +--------------------------------------------------+
+ | |
+ | traffic generator |
+ | |
+ +--------------------------------------------------+]]></artwork>
+ </figure><figure>
+ <preamble>VNF to virtual switch to VNF</preamble>
+
+ <artwork><![CDATA[ __
+ +----------------------+ +----------------------+ |
+ | Guest 1 | | Guest 2 | |
+ | +---------------+ | | +---------------+ | |
+ | | Application | | | | Application | | |
+ | +---------------+ | | +---------------+ | |
+ | | | | ^ | |
+ | v | | | | | Guests
+ | +---------------+ | | +---------------+ | |
+ | | logical ports | | | | logical ports | | |
+ | | 0 | | | | 0 | | |
+ +---+---------------+--+ +---+---------------+--+__|
+ : ^
+ | |
+ v : _
+ +---+---------------+---------+---------------+--+ |
+ | | 1 | | 1 | | |
+ | | logical ports | | logical ports | | |
+ | +---------------+ +---------------+ | |
+ | | ^ | | Host
+ | L-----------------+ | |
+ | | |
+ | vSwitch | |
+ +------------------------------------------------+_|]]></artwork>
+ </figure></t>
+
+ <t>A set of Deployment Scenario figures is available on the VSPERF Test
+ Methodology Wiki page <xref target="TestTopo"/>. </t>
+ </section>
+
+ <section title="3x3 Matrix Coverage">
+ <t>This section organizes the many existing test specifications into the
+ "3x3" matrix (introduced in <xref target="I-D.ietf-bmwg-virtual-net"/>).
+ Because the LTD specification ID names are quite long, this section is
+ organized into lists for each occupied cell of the matrix (not all are
+ occupied, also the matrix has grown to 3x4 to accommodate scale metrics
+ when displaying the coverage of many metrics/benchmarks).</t>
+
+ <t>The tests listed below assess the activation of paths in the data
+ plane, rather than the control plane.</t>
+
+ <t>A complete list of tests with short summaries is available on the
+ VSPERF "LTD Test Spec Overview" Wiki page <xref target="LTDoverV"/>.</t>
+
+ <section title="Speed of Activation">
+ <t><list style="symbols">
+ <t>Activation.RFC2889.AddressLearningRate</t>
+
+ <t>PacketLatency.InitialPacketProcessingLatency</t>
+ </list></t>
+ </section>
+
+ <section title="Accuracy of Activation section">
+ <t><list style="symbols">
+ <t>CPDP.Coupling.Flow.Addition</t>
+ </list></t>
+ </section>
+
+ <section title="Reliability of Activation">
+ <t><list style="symbols">
+ <t>Throughput.RFC2544.SystemRecoveryTime</t>
+
+ <t>Throughput.RFC2544.ResetTime</t>
+ </list></t>
+ </section>
+
+ <section title="Scale of Activation">
+ <t><list style="symbols">
+ <t>Activation.RFC2889.AddressCachingCapacity</t>
+ </list></t>
+ </section>
+
+ <section title="Speed of Operation">
+ <t><list style="symbols">
+ <t>Throughput.RFC2544.PacketLossRate</t>
+
+ <t>CPU.RFC2544.0PacketLoss</t>
+
+ <t>Throughput.RFC2544.PacketLossRateFrameModification</t>
+
+ <t>Throughput.RFC2544.BackToBackFrames</t>
+
+ <t>Throughput.RFC2889.MaxForwardingRate</t>
+
+ <t>Throughput.RFC2889.ForwardPressure</t>
+
+ <t>Throughput.RFC2889.BroadcastFrameForwarding</t>
+ </list></t>
+ </section>
+
+ <section title="Accuracy of Operation">
+ <t><list style="symbols">
+ <t>Throughput.RFC2889.ErrorFramesFiltering</t>
+
+ <t>Throughput.RFC2544.Profile</t>
+ </list></t>
+ </section>
+
+ <section title="Reliability of Operation">
+ <t><list style="symbols">
+ <t>Throughput.RFC2889.Soak</t>
+
+ <t>Throughput.RFC2889.SoakFrameModification</t>
+
+ <t>PacketDelayVariation.RFC3393.Soak</t>
+ </list></t>
+ </section>
+
+ <section title="Scalability of Operation">
+ <t><list style="symbols">
+ <t>Scalability.RFC2544.0PacketLoss</t>
+
+ <t>MemoryBandwidth.RFC2544.0PacketLoss.Scalability</t>
+ </list></t>
+ </section>
+
+ <section title="Summary">
+ <t><figure>
+ <artwork><![CDATA[|------------------------------------------------------------------------|
+| | | | | |
+| | SPEED | ACCURACY | RELIABILITY | SCALE |
+| | | | | |
+|------------------------------------------------------------------------|
+| | | | | |
+| Activation | X | X | X | X |
+| | | | | |
+|------------------------------------------------------------------------|
+| | | | | |
+| Operation | X | X | X | X |
+| | | | | |
+|------------------------------------------------------------------------|
+| | | | | |
+| De-activation | | | | |
+| | | | | |
+|------------------------------------------------------------------------|]]></artwork>
+ </figure></t>
+ </section>
+ </section>
+
+ <section title="Security Considerations">
+ <t>Benchmarking activities as described in this memo are limited to
+ technology characterization of a Device Under Test/System Under Test
+ (DUT/SUT) using controlled stimuli in a laboratory environment, with
+ dedicated address space and the constraints specified in the sections
+ above.</t>
+
+ <t>The benchmarking network topology will be an independent test setup
+ and MUST NOT be connected to devices that may forward the test traffic
+ into a production network, or misroute traffic to the test management
+ network.</t>
+
+ <t>Further, benchmarking is performed on a "black-box" basis, relying
+ solely on measurements observable external to the DUT/SUT.</t>
+
+ <t>Special capabilities SHOULD NOT exist in the DUT/SUT specifically for
+ benchmarking purposes. Any implications for network security arising
+ from the DUT/SUT SHOULD be identical in the lab and in production
+ networks.</t>
+ </section>
+
+ <section anchor="IANA" title="IANA Considerations">
+ <t>No IANA Action is requested at this time.</t>
+ </section>
+
+ <section title="Acknowledgements">
+ <t>The authors acknowledge</t>
+ </section>
+ </middle>
+
+ <back>
+ <references title="Normative References">
+ <?rfc ?>
+
+ <?rfc include="reference.RFC.2119"?>
+
+ <?rfc include="reference.RFC.2330"?>
+
+ <?rfc include='reference.RFC.2544'?>
+
+ <?rfc include="reference.RFC.2679"?>
+
+ <?rfc include='reference.RFC.2680'?>
+
+ <?rfc include='reference.RFC.3393'?>
+
+ <?rfc include='reference.RFC.3432'?>
+
+ <?rfc include='reference.RFC.2681'?>
+
+ <?rfc include='reference.RFC.5905'?>
+
+ <?rfc include='reference.RFC.4689'?>
+
+ <?rfc include='reference.RFC.4737'?>
+
+ <?rfc include='reference.RFC.5357'?>
+
+ <?rfc include='reference.RFC.2889'?>
+
+ <?rfc include='reference.RFC.3918'?>
+
+ <?rfc include='reference.RFC.6201'?>
+
+ <?rfc include='reference.RFC.2285'?>
+
+ <reference anchor="NFV.PER001">
+ <front>
+ <title>Network Function Virtualization: Performance and Portability
+ Best Practices</title>
+
+ <author fullname="ETSI NFV" initials="" surname="">
+ <organization/>
+ </author>
+
+ <date month="June" year="2014"/>
+ </front>
+
+ <seriesInfo name="Group Specification"
+ value="ETSI GS NFV-PER 001 V1.1.1 (2014-06)"/>
+
+ <format type="PDF"/>
+ </reference>
+ </references>
+
+ <references title="Informative References">
+ <?rfc include='reference.RFC.1242'?>
+
+ <?rfc include='reference.RFC.5481'?>
+
+ <?rfc include='reference.RFC.6049'?>
+
+ <?rfc include='reference.RFC.6248'?>
+
+ <?rfc include='reference.RFC.6390'?>
+
+ <?rfc include='reference.I-D.ietf-bmwg-virtual-net'?>
+
+ <reference anchor="TestTopo">
+ <front>
+ <title>Test Topologies
+ https://wiki.opnfv.org/vsperf/test_methodology</title>
+
+ <author>
+ <organization/>
+ </author>
+
+ <date/>
+ </front>
+ </reference>
+
+ <reference anchor="LTDoverV">
+ <front>
+ <title>LTD Test Spec Overview
+ https://wiki.opnfv.org/wiki/vswitchperf_test_spec_review </title>
+
+ <author>
+ <organization/>
+ </author>
+
+ <date/>
+ </front>
+ </reference>
+
+ <reference anchor="IFA003">
+ <front>
+ <title>https://docbox.etsi.org/ISG/NFV/Open/Drafts/IFA003_Acceleration_-_vSwitch_Spec/</title>
+
+ <author>
+ <organization/>
+ </author>
+
+ <date/>
+ </front>
+ </reference>
+ </references>
+ </back>
+</rfc>
diff --git a/docs/testing/developer/requirements/ietf_draft/draft-vsperf-bmwg-vswitch-opnfv-01.xml b/docs/testing/developer/requirements/ietf_draft/draft-vsperf-bmwg-vswitch-opnfv-01.xml
new file mode 100644
index 00000000..a9405a77
--- /dev/null
+++ b/docs/testing/developer/requirements/ietf_draft/draft-vsperf-bmwg-vswitch-opnfv-01.xml
@@ -0,0 +1,1016 @@
+<?xml version="1.0" encoding="US-ASCII"?>
+<!DOCTYPE rfc SYSTEM "rfc2629.dtd">
+<?rfc toc="yes"?>
+<?rfc tocompact="yes"?>
+<?rfc tocdepth="3"?>
+<?rfc tocindent="yes"?>
+<?rfc symrefs="yes"?>
+<?rfc sortrefs="yes"?>
+<?rfc comments="yes"?>
+<?rfc inline="yes"?>
+<?rfc compact="yes"?>
+<?rfc subcompact="no"?>
+<rfc category="info" docName="draft-vsperf-bmwg-vswitch-opnfv-02"
+ ipr="trust200902">
+ <front>
+ <title abbrev="Benchmarking vSwitches">Benchmarking Virtual Switches in
+ OPNFV</title>
+
+ <author fullname="Maryam Tahhan" initials="M." surname="Tahhan">
+ <organization>Intel</organization>
+
+ <address>
+ <postal>
+ <street/>
+
+ <city/>
+
+ <region/>
+
+ <code/>
+
+ <country/>
+ </postal>
+
+ <phone/>
+
+ <facsimile/>
+
+ <email>maryam.tahhan@intel.com</email>
+
+ <uri/>
+ </address>
+ </author>
+
+ <author fullname="Billy O'Mahony" initials="B." surname="O'Mahony">
+ <organization>Intel</organization>
+
+ <address>
+ <postal>
+ <street/>
+
+ <city/>
+
+ <region/>
+
+ <code/>
+
+ <country/>
+ </postal>
+
+ <phone/>
+
+ <facsimile/>
+
+ <email>billy.o.mahony@intel.com</email>
+
+ <uri/>
+ </address>
+ </author>
+
+ <author fullname="Al Morton" initials="A." surname="Morton">
+ <organization>AT&amp;T Labs</organization>
+
+ <address>
+ <postal>
+ <street>200 Laurel Avenue South</street>
+
+ <city>Middletown,</city>
+
+ <region>NJ</region>
+
+ <code>07748</code>
+
+ <country>USA</country>
+ </postal>
+
+ <phone>+1 732 420 1571</phone>
+
+ <facsimile>+1 732 368 1192</facsimile>
+
+ <email>acmorton@att.com</email>
+
+ <uri>http://home.comcast.net/~acmacm/</uri>
+ </address>
+ </author>
+
+ <date day="20" month="March" year="2016"/>
+
+ <abstract>
+ <t>This memo describes the progress of the Open Platform for NFV (OPNFV)
+ project on virtual switch performance "VSWITCHPERF". This project
+ intends to build on the current and completed work of the Benchmarking
+ Methodology Working Group in IETF, by referencing existing literature.
+ The Benchmarking Methodology Working Group has traditionally conducted
+ laboratory characterization of dedicated physical implementations of
+ internetworking functions. Therefore, this memo begins to describe the
+ additional considerations when virtual switches are implemented in
+ general-purpose hardware. The expanded tests and benchmarks are also
+ influenced by the OPNFV mission to support virtualization of the "telco"
+ infrastructure.</t>
+ </abstract>
+
+ <note title="Requirements Language">
+ <t>The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
+ "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
+ document are to be interpreted as described in <xref
+ target="RFC2119">RFC 2119</xref>.</t>
+
+ <t/>
+ </note>
+ </front>
+
+ <middle>
+ <section title="Introduction">
+ <t>Benchmarking Methodology Working Group (BMWG) has traditionally
+ conducted laboratory characterization of dedicated physical
+ implementations of internetworking functions. The Black-box Benchmarks
+ of Throughput, Latency, Forwarding Rates and others have served our
+ industry for many years. Now, Network Function Virtualization (NFV) has
+ the goal to transform how internetwork functions are implemented, and
+ therefore has garnered much attention.</t>
+
+ <t>This memo summarizes the progress of the Open Platform for NFV
+ (OPNFV) project on virtual switch performance characterization,
+ "VSWITCHPERF", through the Brahmaputra (second) release <xref
+ target="BrahRel"/>. This project intends to build on the current and
+ completed work of the Benchmarking Methodology Working Group in IETF, by
+ referencing existing literature. For example, currently the most often
+ referenced RFC is <xref target="RFC2544"/> (which depends on <xref
+ target="RFC1242"/>) and foundation of the benchmarking work in OPNFV is
+ common and strong.</t>
+
+ <t>See
+ https://wiki.opnfv.org/characterize_vswitch_performance_for_telco_nfv_use_cases
+ for more background, and the OPNFV website for general information:
+ https://www.opnfv.org/</t>
+
+ <t>The authors note that OPNFV distinguishes itself from other open
+ source compute and networking projects through its emphasis on existing
+ "telco" services as opposed to cloud-computing. There are many ways in
+ which telco requirements have different emphasis on performance
+ dimensions when compared to cloud computing: support for and transfer of
+ isochronous media streams is one example.</t>
+
+ <t>Note also that the move to NFV Infrastructure has resulted in many
+ new benchmarking initiatives across the industry. The authors are
+ currently doing their best to maintain alignment with many other
+ projects, and this Internet Draft is one part of the efforts. We
+ acknowledge the early work in <xref
+ target="I-D.huang-bmwg-virtual-network-performance"/>, and useful
+ discussion with the authors.</t>
+ </section>
+
+ <section title="Scope">
+ <t>The primary purpose and scope of the memo is to inform the industry
+ of work-in-progress that builds on the body of extensive BMWG literature
+ and experience, and describe the extensions needed for benchmarking
+ virtual switches. Initial feedback indicates that many of these
+ extensions may be applicable beyond the current scope (to hardware
+ switches in the NFV Infrastructure and to virtual routers, for example).
+ Additionally, this memo serves as a vehicle to include more detail and
+ commentary from BMWG and other Open Source communities, under BMWG's
+ chartered work to characterize the NFV Infrastructure (a virtual switch
+ is an important aspect of that infrastructure).</t>
+ </section>
+
+ <section title="Benchmarking Considerations">
+ <t>This section highlights some specific considerations (from <xref
+ target="I-D.ietf-bmwg-virtual-net"/>) related to Benchmarks for virtual
+ switches. The OPNFV project is sharing its present view on these areas,
+ as they develop their specifications in the Level Test Design (LTD)
+ document.</t>
+
+ <section title="Comparison with Physical Network Functions">
+ <t>To compare the performance of virtual designs and implementations
+ with their physical counterparts, identical benchmarks are needed.
+ BMWG has developed specifications for many network functions; this memo
+ re-uses existing benchmarks through references, and expands them
+ during development of new methods. A key configuration aspect is the
+ number of parallel cores required to achieve comparable performance
+ with a given physical device, or whether some limit of scale was
+ reached before the cores could achieve the comparable level.</t>
+
+ <t>It's unlikely that the virtual switch will be the only application
+ running on the SUT, so CPU utilization, Cache utilization, and Memory
+ footprint should also be recorded for the virtual implementations of
+ internetworking functions.</t>
+ </section>
+
+ <section title="Continued Emphasis on Black-Box Benchmarks">
+ <t>External observations remain essential as the basis for Benchmarks.
+ Internal observations with fixed specification and interpretation will
+ be provided in parallel to assist the development of operations
+ procedures when the technology is deployed.</t>
+ </section>
+
+ <section title="New Configuration Parameters">
+ <t>A key consideration when conducting any sort of benchmark is trying
+ to ensure the consistency and repeatability of test results. When
+ benchmarking the performance of a vSwitch, there are many factors that
+ can affect the consistency of results; one key factor is matching the
+ various hardware and software details of the SUT. This section lists
+ some of the many new parameters which this project believes are
+ critical to report in order to achieve repeatability.</t>
+
+ <t>Hardware details including:</t>
+
+ <t><list style="symbols">
+ <t>Platform details</t>
+
+ <t>Processor details</t>
+
+ <t>Memory information (type and size)</t>
+
+ <t>Number of enabled cores</t>
+
+ <t>Number of cores used for the test</t>
+
+ <t>Number of physical NICs, as well as their details
+ (manufacturer, versions, type and the PCI slot they are plugged
+ into)</t>
+
+ <t>NIC interrupt configuration</t>
+
+ <t>BIOS version, release date and any configurations that were
+ modified</t>
+
+ <t>CPU microcode level</t>
+
+ <t>Memory DIMM configurations (quad rank performance may not be
+ the same as dual rank) in size, frequency and slot locations</t>
+
+ <t>PCI configuration parameters (payload size, early ack
+ option...)</t>
+
+ <t>Power management at all levels (ACPI sleep states, processor
+ package, OS...)</t>
+ </list>Software details including:</t>
+
+ <t><list style="symbols">
+ <t>OS parameters and behavior (text vs. graphical; no one typing at
+ the console on one system)</t>
+
+ <t>OS version (for host and VNF)</t>
+
+ <t>Kernel version (for host and VNF)</t>
+
+ <t>GRUB boot parameters (for host and VNF)</t>
+
+ <t>Hypervisor details (Type and version)</t>
+
+ <t>Selected vSwitch, version number or commit id used</t>
+
+ <t>vSwitch launch command line if it has been parameterised</t>
+
+ <t>Memory allocation to the vSwitch</t>
+
+ <t>Which NUMA node it is using, and how many memory channels</t>
+
+ <t>DPDK or any other SW dependency version number or commit id
+ used</t>
+
+ <t>Memory allocation to a VM - if it's from Hugepages/elsewhere</t>
+
+ <t>VM storage type: snapshot/independent persistent/independent
+ non-persistent</t>
+
+ <t>Number of VMs</t>
+
+ <t>Number of Virtual NICs (vNICs), versions, type and driver</t>
+
+ <t>Number of virtual CPUs and their core affinity on the host</t>
+
+ <t>vNIC interrupt configuration</t>
+
+ <t>Thread affinitization for the applications (including the
+ vSwitch itself) on the host</t>
+
+ <t>Details of Resource isolation, such as CPUs designated for
+ Host/Kernel (isolcpus) and CPUs designated for specific processes
+ (taskset)</t>
+
+ <t>Test duration</t>
+
+ <t>Number of flows</t>
+ </list></t>
+
+ <t>Test Traffic Information:<list style="symbols">
+ <t>Traffic type - UDP, TCP, IMIX / Other</t>
+
+ <t>Packet Sizes</t>
+
+ <t>Deployment Scenario</t>
+ </list></t>
+
+ <t/>
+ </section>
+
+ <section title="Flow classification">
+ <t>Virtual switches group packets into flows by processing and
+ matching particular packet or frame header information, or by matching
+ packets based on the input ports. Thus a flow can be thought of as a
+ sequence of packets that have the same set of header field values or
+ have arrived on the same port. Performance results can vary based on
+ the parameters the vSwitch uses to match for a flow. The recommended
+ flow classification parameters for any vSwitch performance tests are:
+ the input port, the source IP address, the destination IP address and
+ the Ethernet protocol type field. It is essential to increase the flow
+ timeout time on a vSwitch before conducting any performance tests that
+ do not measure the flow setup time. Normally the first packet of a
+ particular stream will install the flow in the virtual switch, which
+ adds an additional latency; subsequent packets of the same flow are
+ not subject to this latency if the flow is already installed on the
+ vSwitch.</t>
+ </section>
+
+ <section title="Benchmarks using Baselines with Resource Isolation">
+ <t>This outline describes measurement of baseline with isolated
+ resources at a high level, which is the intended approach at this
+ time.</t>
+
+ <t><list style="numbers">
+ <t>Baselines: <list style="symbols">
+ <t>Optional: Benchmark platform forwarding capability without
+ a vswitch or VNF for at least 72 hours (serves as a means of
+ platform validation and a means to obtain the base performance
+ for the platform in terms of its maximum forwarding rate and
+ latency). <figure>
+ <preamble>Benchmark platform forwarding
+ capability</preamble>
+
+ <artwork align="right"><![CDATA[ __
+ +--------------------------------------------------+ |
+ | +------------------------------------------+ | |
+ | | | | |
+ | | Simple Forwarding App | | Host
+ | | | | |
+ | +------------------------------------------+ | |
+ | | NIC | | |
+ +---+------------------------------------------+---+ __|
+ ^ :
+ | |
+ : v
+ +--------------------------------------------------+
+ | |
+ | traffic generator |
+ | |
+ +--------------------------------------------------+]]></artwork>
+
+ <postamble/>
+ </figure></t>
+
+ <t>Benchmark VNF forwarding capability with direct
+ connectivity (vSwitch bypass, e.g., SR/IOV) for at least 72
+ hours (serves as a means of VNF validation and a means to
+ obtain the base performance for the VNF in terms of its
+ maximum forwarding rate and latency). The metrics gathered
+ from this test will serve as a key comparison point for
+ vSwitch bypass technologies performance and vSwitch
+ performance. <figure align="right">
+ <preamble>Benchmark VNF forwarding capability</preamble>
+
+ <artwork><![CDATA[ __
+ +--------------------------------------------------+ |
+ | +------------------------------------------+ | |
+ | | | | |
+ | | VNF | | |
+ | | | | |
+ | +------------------------------------------+ | |
+ | | Passthrough/SR-IOV | | Host
+ | +------------------------------------------+ | |
+ | | NIC | | |
+ +---+------------------------------------------+---+ __|
+ ^ :
+ | |
+ : v
+ +--------------------------------------------------+
+ | |
+ | traffic generator |
+ | |
+ +--------------------------------------------------+]]></artwork>
+
+ <postamble/>
+ </figure></t>
+
+ <t>Benchmarking with isolated resources alone, with other
+ resources (both HW&amp;SW) disabled. Example: the vSwitch and VM are
+ the SUT</t>
+
+ <t>Benchmarking with isolated resources alone, leaving some
+ resources unused</t>
+
+ <t>Benchmark with isolated resources and all resources
+ occupied</t>
+ </list></t>
+
+ <t>Next Steps<list style="symbols">
+ <t>Limited sharing</t>
+
+ <t>Production scenarios</t>
+
+ <t>Stressful scenarios</t>
+ </list></t>
+ </list></t>
+ </section>
+ </section>
+
+ <section title="VSWITCHPERF Specification Summary">
+ <t>The overall specification in preparation is referred to as a Level
+ Test Design (LTD) document, which will contain a suite of performance
+ tests. The base performance tests in the LTD are based on the
+ pre-existing specifications developed by BMWG to test the performance of
+ physical switches. These specifications include:</t>
+
+ <t><list style="symbols">
+ <t><xref target="RFC2544"/> Benchmarking Methodology for Network
+ Interconnect Devices</t>
+
+ <t><xref target="RFC2889"/> Benchmarking Methodology for LAN
+ Switching</t>
+
+ <t><xref target="RFC6201"/> Device Reset Characterization</t>
+
+ <t><xref target="RFC5481"/> Packet Delay Variation Applicability
+ Statement</t>
+ </list></t>
+
+ <t>Some of the above/newer RFCs are being applied in benchmarking for
+ the first time, and represent a development challenge for test equipment
+ developers. Fortunately, many members of the testing system community
+ have engaged on the VSPERF project, including an open source test
+ system.</t>
+
+ <t>In addition to this, the LTD also re-uses the terminology defined
+ by:</t>
+
+ <t><list style="symbols">
+ <t><xref target="RFC2285"/> Benchmarking Terminology for LAN
+ Switching Devices</t>
+
+ <t><xref target="RFC5481"/> Packet Delay Variation Applicability
+ Statement</t>
+ </list></t>
+
+ <t/>
+
+ <t>Specifications to be included in future updates of the LTD
+ include:<list style="symbols">
+ <t><xref target="RFC3918"/> Methodology for IP Multicast
+ Benchmarking</t>
+
+ <t><xref target="RFC4737"/> Packet Reordering Metrics</t>
+ </list></t>
+
+ <t>As one might expect, the most fundamental internetworking
+ characteristics of Throughput and Latency remain important when the
+ switch is virtualized, and these benchmarks figure prominently in the
+ specification.</t>
+
+ <t>When considering characteristics important to "telco" network
+ functions, we must begin to consider additional performance metrics. In
+ this case, the project specifications have referenced metrics from the
+ IETF IP Performance Metrics (IPPM) literature. This means that the <xref
+ target="RFC2544"/> test of Latency is replaced by measurement of a
+ metric derived from IPPM's <xref target="RFC2679"/>, where a set of
+ statistical summaries will be provided (mean, max, min, etc.). Further
+ metrics planned to be benchmarked include packet delay variation as
+ defined by <xref target="RFC5481"/>, reordering, burst behaviour, DUT
+ availability, DUT capacity and packet loss in long term testing at
+ Throughput level, where some low-level of background loss may be present
+ and characterized.</t>
+
+ <t>Tests have been (or will be) designed to collect the metrics
+ below:</t>
+
+ <t><list style="symbols">
+ <t>Throughput Tests to measure the maximum forwarding rate (in
+ frames per second or fps) and bit rate (in Mbps) for a constant load
+ (as defined by <xref target="RFC1242"/>) without traffic loss.</t>
+
+ <t>Packet and Frame Delay Distribution Tests to measure average, min
+ and max packet and frame delay for constant loads.</t>
+
+ <t>Packet Delay Tests to understand latency distribution for
+ different packet sizes and over an extended test run to uncover
+ outliers.</t>
+
+ <t>Scalability Tests to understand how the virtual switch performs
+ as the number of flows, active ports, and the complexity of the
+ forwarding logic&rsquo;s configuration it has to deal with
+ increase.</t>
+
+ <t>Stream Performance Tests (TCP, UDP) to measure bulk data transfer
+ performance, i.e. how fast systems can send and receive data through
+ the switch.</t>
+
+ <t>Control Path and Datapath Coupling Tests, to understand how
+ closely coupled the datapath and the control path are as well as the
+ effect of this coupling on the performance of the DUT (example:
+ delay of the initial packet of a flow).</t>
+
+ <t>CPU and Memory Consumption Tests to understand the virtual
+ switch&rsquo;s footprint on the system, usually conducted as
+ auxiliary measurements with benchmarks above. They include: CPU
+ utilization, Cache utilization and Memory footprint.</t>
+
+ <t>The so-called "Soak" tests, where the selected test is conducted
+ over a long period of time (with an ideal duration of 24 hours, and
+ at least 6 hours). The purpose of soak tests is to capture transient
+ changes in performance which may occur due to infrequent processes
+ or the low probability coincidence of two or more processes. The
+ performance must be evaluated periodically during continuous
+ testing, and this results in use of <xref target="RFC2889"/> Frame
+ Rate metrics instead of <xref target="RFC2544"/> Throughput (which
+ requires stopping traffic to allow time for all traffic to exit
+ internal queues). A periodic-sampling sketch follows this list.</t>
+ </list></t>
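+
+ <t>As a rough illustration of the periodic evaluation described in the
+ "Soak" item above, the sketch below samples a frame-rate style metric at
+ fixed intervals over a long run. The read_frame_counters() function is a
+ placeholder for however the tester retrieves cumulative forwarded frame
+ counts, and the defaults simply reflect the durations mentioned above;
+ none of this is part of the specification.</t>
+
+ <figure>
+ <preamble>Sketch: periodic sampling during a soak test (Python)</preamble>
+
+ <artwork><![CDATA[
+# Illustrative soak-test loop: sample the forwarding rate periodically
+# instead of using the RFC 2544 Throughput procedure (which would stop
+# traffic). read_frame_counters() is a placeholder returning the
+# cumulative number of frames forwarded by the DUT so far.
+import time
+
+def soak_forwarding_rate(read_frame_counters,
+                         duration_s=6 * 3600,   # at least 6 hours
+                         interval_s=60):
+    samples = []
+    start = time.time()
+    prev = read_frame_counters()
+    while time.time() - start < duration_s:
+        time.sleep(interval_s)
+        cur = read_frame_counters()
+        samples.append((cur - prev) / interval_s)   # frames per second
+        prev = cur
+    return samples   # time series to inspect for transient degradation
+]]></artwork>
+ </figure>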
+
+ <t>Future/planned test specs include:<list style="symbols">
+ <t>Request/Response Performance Tests (TCP, UDP) which measure the
+ transaction rate through the switch.</t>
+
+ <t>Noisy Neighbour Tests, to understand the effects of resource
+ sharing on the performance of a virtual switch.</t>
+
+ <t>Tests derived from examination of ETSI NFV Draft GS IFA003
+ requirements <xref target="IFA003"/> on characterization of
+ acceleration technologies applied to vswitches.</t>
+ </list>The flexibility of deployment of a virtual switch within a
+ network means that the existing BMWG IETF literature needs to be used to
+ characterize the performance of a switch in various deployment
+ scenarios. The deployment scenarios under consideration include:</t>
+
+ <t><figure>
+ <preamble>Physical port to virtual switch to physical
+ port</preamble>
+
+ <artwork><![CDATA[ __
+ +--------------------------------------------------+ |
+ | +--------------------+ | |
+ | | | | |
+ | | v | | Host
+ | +--------------+ +--------------+ | |
+ | | phy port | vSwitch | phy port | | |
+ +---+--------------+------------+--------------+---+ __|
+ ^ :
+ | |
+ : v
+ +--------------------------------------------------+
+ | |
+ | traffic generator |
+ | |
+ +--------------------------------------------------+]]></artwork>
+ </figure></t>
+
+ <t><figure>
+ <preamble>Physical port to virtual switch to VNF to virtual switch
+ to physical port</preamble>
+
+ <artwork><![CDATA[ __
+ +---------------------------------------------------+ |
+ | | |
+ | +-------------------------------------------+ | |
+ | | Application | | |
+ | +-------------------------------------------+ | |
+ | ^ : | |
+ | | | | | Guest
+ | : v | |
+ | +---------------+ +---------------+ | |
+ | | logical port 0| | logical port 1| | |
+ +---+---------------+-----------+---------------+---+ __|
+ ^ :
+ | |
+ : v __
+ +---+---------------+----------+---------------+---+ |
+ | | logical port 0| | logical port 1| | |
+ | +---------------+ +---------------+ | |
+ | ^ : | |
+ | | | | | Host
+ | : v | |
+ | +--------------+ +--------------+ | |
+ | | phy port | vSwitch | phy port | | |
+ +---+--------------+------------+--------------+---+ __|
+ ^ :
+ | |
+ : v
+ +--------------------------------------------------+
+ | |
+ | traffic generator |
+ | |
+ +--------------------------------------------------+]]></artwork>
+ </figure><figure>
+ <preamble>Physical port to virtual switch to VNF to virtual switch
+ to VNF to virtual switch to physical port</preamble>
+
+ <artwork><![CDATA[ __
+ +----------------------+ +----------------------+ |
+ | Guest 1 | | Guest 2 | |
+ | +---------------+ | | +---------------+ | |
+ | | Application | | | | Application | | |
+ | +---------------+ | | +---------------+ | |
+ | ^ | | | ^ | | |
+ | | v | | | v | | Guests
+ | +---------------+ | | +---------------+ | |
+ | | logical ports | | | | logical ports | | |
+ | | 0 1 | | | | 0 1 | | |
+ +---+---------------+--+ +---+---------------+--+__|
+ ^ : ^ :
+ | | | |
+ : v : v _
+ +---+---------------+---------+---------------+--+ |
+ | | 0 1 | | 3 4 | | |
+ | | logical ports | | logical ports | | |
+ | +---------------+ +---------------+ | |
+ | ^ | ^ | | | Host
+ | | |-----------------| v | |
+ | +--------------+ +--------------+ | |
+ | | phy ports | vSwitch | phy ports | | |
+ +---+--------------+----------+--------------+---+_|
+ ^ :
+ | |
+ : v
+ +--------------------------------------------------+
+ | |
+ | traffic generator |
+ | |
+ +--------------------------------------------------+]]></artwork>
+ </figure><figure>
+ <preamble>Physical port to virtual switch to VNF</preamble>
+
+ <artwork><![CDATA[ __
+ +---------------------------------------------------+ |
+ | | |
+ | +-------------------------------------------+ | |
+ | | Application | | |
+ | +-------------------------------------------+ | |
+ | ^ | |
+ | | | | Guest
+ | : | |
+ | +---------------+ | |
+ | | logical port 0| | |
+ +---+---------------+-------------------------------+ __|
+ ^
+ |
+ : __
+ +---+---------------+------------------------------+ |
+ | | logical port 0| | |
+ | +---------------+ | |
+ | ^ | |
+ | | | | Host
+ | : | |
+ | +--------------+ | |
+ | | phy port | vSwitch | |
+ +---+--------------+------------ -------------- ---+ __|
+ ^
+ |
+ :
+ +--------------------------------------------------+
+ | |
+ | traffic generator |
+ | |
+ +--------------------------------------------------+]]></artwork>
+ </figure><figure>
+ <preamble>VNF to virtual switch to physical port</preamble>
+
+ <artwork><![CDATA[ __
+ +---------------------------------------------------+ |
+ | | |
+ | +-------------------------------------------+ | |
+ | | Application | | |
+ | +-------------------------------------------+ | |
+ | : | |
+ | | | | Guest
+ | v | |
+ | +---------------+ | |
+ | | logical port | | |
+ +-------------------------------+---------------+---+ __|
+ :
+ |
+ v __
+ +------------------------------+---------------+---+ |
+ | | logical port | | |
+ | +---------------+ | |
+ | : | |
+ | | | | Host
+ | v | |
+ | +--------------+ | |
+ | vSwitch | phy port | | |
+ +-------------------------------+--------------+---+ __|
+ :
+ |
+ v
+ +--------------------------------------------------+
+ | |
+ | traffic generator |
+ | |
+ +--------------------------------------------------+]]></artwork>
+ </figure><figure>
+ <preamble>VNF to virtual switch to VNF</preamble>
+
+ <artwork><![CDATA[ __
+ +----------------------+ +----------------------+ |
+ | Guest 1 | | Guest 2 | |
+ | +---------------+ | | +---------------+ | |
+ | | Application | | | | Application | | |
+ | +---------------+ | | +---------------+ | |
+ | | | | ^ | |
+ | v | | | | | Guests
+ | +---------------+ | | +---------------+ | |
+ | | logical ports | | | | logical ports | | |
+ | | 0 | | | | 0 | | |
+ +---+---------------+--+ +---+---------------+--+__|
+ : ^
+ | |
+ v : _
+ +---+---------------+---------+---------------+--+ |
+ | | 1 | | 1 | | |
+ | | logical ports | | logical ports | | |
+ | +---------------+ +---------------+ | |
+ | | ^ | | Host
+ | L-----------------+ | |
+ | | |
+ | vSwitch | |
+ +------------------------------------------------+_|]]></artwork>
+ </figure></t>
+
+ <t>A set of Deployment Scenario figures is available on the VSPERF Test
+ Methodology Wiki page <xref target="TestTopo"/>.</t>
+ </section>
+
+ <section title="3x3 Matrix Coverage">
+ <t>This section organizes the many existing test specifications into the
+ "3x3" matrix (introduced in <xref target="I-D.ietf-bmwg-virtual-net"/>).
+ Because the LTD specification ID names are quite long, this section is
+ organized into lists for each occupied cell of the matrix (not all are
+ occupied, also the matrix has grown to 3x4 to accommodate scale metrics
+ when displaying the coverage of many metrics/benchmarks). The current
+ version of the LTD specification is available <xref target="LTD"/>.</t>
+
+ <t>The tests listed below assess the activation of paths in the data
+ plane, rather than the control plane.</t>
+
+ <t>A complete list of tests with short summaries is available on the
+ VSPERF "LTD Test Spec Overview" Wiki page <xref target="LTDoverV"/>.</t>
+
+ <section title="Speed of Activation">
+ <t><list style="symbols">
+ <t>Activation.RFC2889.AddressLearningRate</t>
+
+ <t>PacketLatency.InitialPacketProcessingLatency</t>
+ </list></t>
+ </section>
+
+ <section title="Accuracy of Activation section">
+ <t><list style="symbols">
+ <t>CPDP.Coupling.Flow.Addition</t>
+ </list></t>
+ </section>
+
+ <section title="Reliability of Activation">
+ <t><list style="symbols">
+ <t>Throughput.RFC2544.SystemRecoveryTime</t>
+
+ <t>Throughput.RFC2544.ResetTime</t>
+ </list></t>
+ </section>
+
+ <section title="Scale of Activation">
+ <t><list style="symbols">
+ <t>Activation.RFC2889.AddressCachingCapacity</t>
+ </list></t>
+ </section>
+
+ <section title="Speed of Operation">
+ <t><list style="symbols">
+ <t>Throughput.RFC2544.PacketLossRate</t>
+
+ <t>CPU.RFC2544.0PacketLoss</t>
+
+ <t>Throughput.RFC2544.PacketLossRateFrameModification</t>
+
+ <t>Throughput.RFC2544.BackToBackFrames</t>
+
+ <t>Throughput.RFC2889.MaxForwardingRate</t>
+
+ <t>Throughput.RFC2889.ForwardPressure</t>
+
+ <t>Throughput.RFC2889.BroadcastFrameForwarding</t>
+ </list></t>
+ </section>
+
+ <section title="Accuracy of Operation">
+ <t><list style="symbols">
+ <t>Throughput.RFC2889.ErrorFramesFiltering</t>
+
+ <t>Throughput.RFC2544.Profile</t>
+ </list></t>
+ </section>
+
+ <section title="Reliability of Operation">
+ <t><list style="symbols">
+ <t>Throughput.RFC2889.Soak</t>
+
+ <t>Throughput.RFC2889.SoakFrameModification</t>
+
+ <t>PacketDelayVariation.RFC3393.Soak</t>
+ </list></t>
+ </section>
+
+ <section title="Scalability of Operation">
+ <t><list style="symbols">
+ <t>Scalability.RFC2544.0PacketLoss</t>
+
+ <t>MemoryBandwidth.RFC2544.0PacketLoss.Scalability</t>
+ </list></t>
+ </section>
+
+ <section title="Summary">
+ <t><figure>
+ <artwork><![CDATA[|------------------------------------------------------------------------|
+| | | | | |
+| | SPEED | ACCURACY | RELIABILITY | SCALE |
+| | | | | |
+|------------------------------------------------------------------------|
+| | | | | |
+| Activation | X | X | X | X |
+| | | | | |
+|------------------------------------------------------------------------|
+| | | | | |
+| Operation | X | X | X | X |
+| | | | | |
+|------------------------------------------------------------------------|
+| | | | | |
+| De-activation | | | | |
+| | | | | |
+|------------------------------------------------------------------------|]]></artwork>
+ </figure></t>
+ </section>
+ </section>
+
+ <section title="Security Considerations">
+ <t>Benchmarking activities as described in this memo are limited to
+ technology characterization of a Device Under Test/System Under Test
+ (DUT/SUT) using controlled stimuli in a laboratory environment, with
+ dedicated address space and the constraints specified in the sections
+ above.</t>
+
+ <t>The benchmarking network topology will be an independent test setup
+ and MUST NOT be connected to devices that may forward the test traffic
+ into a production network, or misroute traffic to the test management
+ network.</t>
+
+ <t>Further, benchmarking is performed on a "black-box" basis, relying
+ solely on measurements observable external to the DUT/SUT.</t>
+
+ <t>Special capabilities SHOULD NOT exist in the DUT/SUT specifically for
+ benchmarking purposes. Any implications for network security arising
+ from the DUT/SUT SHOULD be identical in the lab and in production
+ networks.</t>
+ </section>
+
+ <section anchor="IANA" title="IANA Considerations">
+ <t>No IANA Action is requested at this time.</t>
+ </section>
+
+ <section title="Acknowledgements">
+ <t>The authors appreciate and acknowledge comments from Scott Bradner,
+      Marius Georgescu, Ramki Krishnan, Doug Montgomery, and others for
+ their reviews.</t>
+ </section>
+ </middle>
+
+ <back>
+ <references title="Normative References">
+ <?rfc ?>
+
+ <?rfc include="reference.RFC.2119"?>
+
+ <?rfc ?>
+
+ <?rfc include="reference.RFC.2330"?>
+
+ <?rfc include='reference.RFC.2544'?>
+
+ <?rfc include="reference.RFC.2679"?>
+
+ <?rfc include='reference.RFC.2680'?>
+
+ <?rfc include='reference.RFC.3393'?>
+
+ <?rfc include='reference.RFC.3432'?>
+
+ <?rfc include='reference.RFC.2681'?>
+
+ <?rfc include='reference.RFC.5905'?>
+
+ <?rfc include='reference.RFC.4689'?>
+
+ <?rfc include='reference.RFC.4737'?>
+
+ <?rfc include='reference.RFC.5357'?>
+
+ <?rfc include='reference.RFC.2889'?>
+
+ <?rfc include='reference.RFC.3918'?>
+
+ <?rfc include='reference.RFC.6201'?>
+
+ <?rfc include='reference.RFC.2285'?>
+
+ <reference anchor="NFV.PER001">
+ <front>
+ <title>Network Function Virtualization: Performance and Portability
+ Best Practices</title>
+
+ <author fullname="ETSI NFV" initials="" surname="">
+ <organization/>
+ </author>
+
+ <date month="June" year="2014"/>
+ </front>
+
+ <seriesInfo name="Group Specification"
+ value="ETSI GS NFV-PER 001 V1.1.1 (2014-06)"/>
+
+ <format type="PDF"/>
+ </reference>
+ </references>
+
+ <references title="Informative References">
+ <?rfc include='reference.RFC.1242'?>
+
+ <?rfc include='reference.RFC.5481'?>
+
+ <?rfc include='reference.RFC.6049'?>
+
+ <?rfc include='reference.RFC.6248'?>
+
+ <?rfc include='reference.RFC.6390'?>
+
+ <?rfc include='reference.I-D.ietf-bmwg-virtual-net'?>
+
+ <?rfc include='reference.I-D.huang-bmwg-virtual-network-performance'?>
+
+ <reference anchor="TestTopo">
+ <front>
+ <title>Test Topologies
+ https://wiki.opnfv.org/vsperf/test_methodology</title>
+
+ <author>
+ <organization/>
+ </author>
+
+ <date/>
+ </front>
+ </reference>
+
+ <reference anchor="LTDoverV">
+ <front>
+ <title>LTD Test Spec Overview
+ https://wiki.opnfv.org/wiki/vswitchperf_test_spec_review</title>
+
+ <author>
+ <organization/>
+ </author>
+
+ <date/>
+ </front>
+ </reference>
+
+ <reference anchor="LTD">
+ <front>
+ <title>LTD Test Specification
+ http://artifacts.opnfv.org/vswitchperf/docs/requirements/index.html</title>
+
+ <author>
+ <organization/>
+ </author>
+
+ <date/>
+ </front>
+ </reference>
+
+ <reference anchor="BrahRel">
+ <front>
+ <title>Brahmaputra, Second OPNFV Release
+ https://www.opnfv.org/brahmaputra</title>
+
+ <author>
+ <organization/>
+ </author>
+
+ <date/>
+ </front>
+ </reference>
+
+ <reference anchor="IFA003">
+ <front>
+ <title>https://docbox.etsi.org/ISG/NFV/Open/Drafts/IFA003_Acceleration_-_vSwitch_Spec/</title>
+
+ <author>
+ <organization/>
+ </author>
+
+ <date/>
+ </front>
+ </reference>
+ </references>
+ </back>
+</rfc>
diff --git a/docs/testing/developer/requirements/ietf_draft/draft-vsperf-bmwg-vswitch-opnfv-02.xml b/docs/testing/developer/requirements/ietf_draft/draft-vsperf-bmwg-vswitch-opnfv-02.xml
new file mode 100644
index 00000000..9157763e
--- /dev/null
+++ b/docs/testing/developer/requirements/ietf_draft/draft-vsperf-bmwg-vswitch-opnfv-02.xml
@@ -0,0 +1,1016 @@
+<?xml version="1.0" encoding="US-ASCII"?>
+<!DOCTYPE rfc SYSTEM "rfc2629.dtd">
+<?rfc toc="yes"?>
+<?rfc tocompact="yes"?>
+<?rfc tocdepth="3"?>
+<?rfc tocindent="yes"?>
+<?rfc symrefs="yes"?>
+<?rfc sortrefs="yes"?>
+<?rfc comments="yes"?>
+<?rfc inline="yes"?>
+<?rfc compact="yes"?>
+<?rfc subcompact="no"?>
+<rfc category="info" docName="draft-vsperf-bmwg-vswitch-opnfv-02"
+ ipr="trust200902">
+ <front>
+ <title abbrev="Benchmarking vSwitches">Benchmarking Virtual Switches in
+ OPNFV</title>
+
+ <author fullname="Maryam Tahhan" initials="M." surname="Tahhan">
+ <organization>Intel</organization>
+
+ <address>
+ <postal>
+ <street/>
+
+ <city/>
+
+ <region/>
+
+ <code/>
+
+ <country/>
+ </postal>
+
+ <phone/>
+
+ <facsimile/>
+
+ <email>maryam.tahhan@intel.com</email>
+
+ <uri/>
+ </address>
+ </author>
+
+ <author fullname="Billy O'Mahony" initials="B." surname="O'Mahony">
+ <organization>Intel</organization>
+
+ <address>
+ <postal>
+ <street/>
+
+ <city/>
+
+ <region/>
+
+ <code/>
+
+ <country/>
+ </postal>
+
+ <phone/>
+
+ <facsimile/>
+
+ <email>billy.o.mahony@intel.com</email>
+
+ <uri/>
+ </address>
+ </author>
+
+ <author fullname="Al Morton" initials="A." surname="Morton">
+ <organization>AT&amp;T Labs</organization>
+
+ <address>
+ <postal>
+ <street>200 Laurel Avenue South</street>
+
+ <city>Middletown,</city>
+
+ <region>NJ</region>
+
+ <code>07748</code>
+
+ <country>USA</country>
+ </postal>
+
+ <phone>+1 732 420 1571</phone>
+
+ <facsimile>+1 732 368 1192</facsimile>
+
+ <email>acmorton@att.com</email>
+
+ <uri>http://home.comcast.net/~acmacm/</uri>
+ </address>
+ </author>
+
+ <date day="21" month="March" year="2016"/>
+
+ <abstract>
+ <t>This memo describes the progress of the Open Platform for NFV (OPNFV)
+ project on virtual switch performance "VSWITCHPERF". This project
+ intends to build on the current and completed work of the Benchmarking
+ Methodology Working Group in IETF, by referencing existing literature.
+ The Benchmarking Methodology Working Group has traditionally conducted
+ laboratory characterization of dedicated physical implementations of
+ internetworking functions. Therefore, this memo begins to describe the
+ additional considerations when virtual switches are implemented in
+ general-purpose hardware. The expanded tests and benchmarks are also
+ influenced by the OPNFV mission to support virtualization of the "telco"
+ infrastructure.</t>
+ </abstract>
+
+ <note title="Requirements Language">
+ <t>The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
+ "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
+ document are to be interpreted as described in <xref
+ target="RFC2119">RFC 2119</xref>.</t>
+
+ <t/>
+ </note>
+ </front>
+
+ <middle>
+ <section title="Introduction">
+ <t>Benchmarking Methodology Working Group (BMWG) has traditionally
+ conducted laboratory characterization of dedicated physical
+ implementations of internetworking functions. The Black-box Benchmarks
+ of Throughput, Latency, Forwarding Rates and others have served our
+ industry for many years. Now, Network Function Virtualization (NFV) has
+ the goal to transform how internetwork functions are implemented, and
+ therefore has garnered much attention.</t>
+
+ <t>This memo summarizes the progress of the Open Platform for NFV
+ (OPNFV) project on virtual switch performance characterization,
+ "VSWITCHPERF", through the Brahmaputra (second) release <xref
+ target="BrahRel"/>. This project intends to build on the current and
+ completed work of the Benchmarking Methodology Working Group in IETF, by
+ referencing existing literature. For example, currently the most often
+ referenced RFC is <xref target="RFC2544"/> (which depends on <xref
+ target="RFC1242"/>) and foundation of the benchmarking work in OPNFV is
+
+ <t>See
+ https://wiki.opnfv.org/characterize_vswitch_performance_for_telco_nfv_use_cases
+ for more background, and the OPNFV website for general information:
+ https://www.opnfv.org/</t>
+
+ <t>The authors note that OPNFV distinguishes itself from other open
+ source compute and networking projects through its emphasis on existing
+ "telco" services as opposed to cloud-computing. There are many ways in
+ which telco requirements have different emphasis on performance
+ dimensions when compared to cloud computing: support for and transfer of
+ isochronous media streams is one example.</t>
+
+ <t>Note also that the move to NFV Infrastructure has resulted in many
+ new benchmarking initiatives across the industry. The authors are
+ currently doing their best to maintain alignment with many other
+ projects, and this Internet Draft is one part of the efforts. We
+ acknowledge the early work in <xref
+ target="I-D.huang-bmwg-virtual-network-performance"/>, and useful
+ discussion with the authors.</t>
+ </section>
+
+ <section title="Scope">
+ <t>The primary purpose and scope of the memo is to inform the industry
+ of work-in-progress that builds on the body of extensive BMWG literature
+ and experience, and describe the extensions needed for benchmarking
+      virtual switches. Initial feedback indicates that many of these
+ extensions may be applicable beyond the current scope (to hardware
+ switches in the NFV Infrastructure and to virtual routers, for example).
+ Additionally, this memo serves as a vehicle to include more detail and
+ commentary from BMWG and other Open Source communities, under BMWG's
+ chartered work to characterize the NFV Infrastructure (a virtual switch
+ is an important aspect of that infrastructure).</t>
+ </section>
+
+ <section title="Benchmarking Considerations">
+ <t>This section highlights some specific considerations (from <xref
+ target="I-D.ietf-bmwg-virtual-net"/>)related to Benchmarks for virtual
+ switches. The OPNFV project is sharing its present view on these areas,
+ as they develop their specifications in the Level Test Design (LTD)
+ document.</t>
+
+ <section title="Comparison with Physical Network Functions">
+ <t>To compare the performance of virtual designs and implementations
+ with their physical counterparts, identical benchmarks are needed.
+ BMWG has developed specifications for many network functions this memo
+ re-uses existing benchmarks through references, and expands them
+ during development of new methods. A key configuration aspect is the
+ number of parallel cores required to achieve comparable performance
+ with a given physical device, or whether some limit of scale was
+ reached before the cores could achieve the comparable level.</t>
+
+ <t>It's unlikely that the virtual switch will be the only application
+ running on the SUT, so CPU utilization, Cache utilization, and Memory
+ footprint should also be recorded for the virtual implementations of
+ internetworking functions.</t>
+ </section>
+
+ <section title="Continued Emphasis on Black-Box Benchmarks">
+ <t>External observations remain essential as the basis for Benchmarks.
+ Internal observations with fixed specification and interpretation will
+ be provided in parallel to assist the development of operations
+ procedures when the technology is deployed.</t>
+ </section>
+
+ <section title="New Configuration Parameters">
+ <t>A key consideration when conducting any sort of benchmark is trying
+ to ensure the consistency and repeatability of test results. When
+ benchmarking the performance of a vSwitch there are many factors that
+        can affect the consistency of results; one key factor is matching the
+ various hardware and software details of the SUT. This section lists
+ some of the many new parameters which this project believes are
+ critical to report in order to achieve repeatability.</t>
+
+ <t>Hardware details including:</t>
+
+ <t><list style="symbols">
+ <t>Platform details</t>
+
+ <t>Processor details</t>
+
+ <t>Memory information (type and size)</t>
+
+ <t>Number of enabled cores</t>
+
+ <t>Number of cores used for the test</t>
+
+ <t>Number of physical NICs, as well as their details
+ (manufacturer, versions, type and the PCI slot they are plugged
+ into)</t>
+
+ <t>NIC interrupt configuration</t>
+
+ <t>BIOS version, release date and any configurations that were
+ modified</t>
+
+ <t>CPU microcode level</t>
+
+ <t>Memory DIMM configurations (quad rank performance may not be
+ the same as dual rank) in size, freq and slot locations</t>
+
+ <t>PCI configuration parameters (payload size, early ack
+ option...)</t>
+
+ <t>Power management at all levels (ACPI sleep states, processor
+ package, OS...)</t>
+ </list>Software details including:</t>
+
+ <t><list style="symbols">
+            <t>OS parameters and behavior (text vs graphical; no one typing at
+ the console on one system)</t>
+
+ <t>OS version (for host and VNF)</t>
+
+ <t>Kernel version (for host and VNF)</t>
+
+ <t>GRUB boot parameters (for host and VNF)</t>
+
+ <t>Hypervisor details (Type and version)</t>
+
+ <t>Selected vSwitch, version number or commit id used</t>
+
+ <t>vSwitch launch command line if it has been parameterised</t>
+
+ <t>Memory allocation to the vSwitch</t>
+
+ <t>which NUMA node it is using, and how many memory channels</t>
+
+ <t>DPDK or any other SW dependency version number or commit id
+ used</t>
+
+            <t>Memory allocation to a VM - if it's from Hugepages/elsewhere</t>
+
+ <t>VM storage type: snapshot/independent persistent/independent
+ non-persistent</t>
+
+ <t>Number of VMs</t>
+
+ <t>Number of Virtual NICs (vNICs), versions, type and driver</t>
+
+ <t>Number of virtual CPUs and their core affinity on the host</t>
+
+            <t>vNIC interrupt configuration</t>
+
+ <t>Thread affinitization for the applications (including the
+ vSwitch itself) on the host</t>
+
+            <t>Details of Resource isolation, such as CPUs designated for
+            Host/Kernel (isolcpu) and CPUs designated for specific processes
+            (taskset)</t>
+
+            <t>Test duration</t>
+
+            <t>Number of flows</t>
+ </list></t>
+
+ <t>Test Traffic Information:<list style="symbols">
+ <t>Traffic type - UDP, TCP, IMIX / Other</t>
+
+ <t>Packet Sizes</t>
+
+ <t>Deployment Scenario</t>
+ </list></t>
+
+ <t/>
+ </section>
+
+ <section title="Flow classification">
+ <t>Virtual switches group packets into flows by processing and
+ matching particular packet or frame header information, or by matching
+        packets based on the input ports. Thus a flow can be thought of as a
+ sequence of packets that have the same set of header field values or
+ have arrived on the same port. Performance results can vary based on
+ the parameters the vSwitch uses to match for a flow. The recommended
+ flow classification parameters for any vSwitch performance tests are:
+ the input port, the source IP address, the destination IP address and
+ the Ethernet protocol type field. It is essential to increase the flow
+ timeout time on a vSwitch before conducting any performance tests that
+ do not measure the flow setup time. Normally the first packet of a
+        particular stream will install the flow in the virtual switch, which
+        adds additional latency; subsequent packets of the same flow are
+ not subject to this latency if the flow is already installed on the
+ vSwitch.</t>
+ </section>
+
+ <section title="Benchmarks using Baselines with Resource Isolation">
+ <t>This outline describes measurement of baseline with isolated
+ resources at a high level, which is the intended approach at this
+ time.</t>
+
+ <t><list style="numbers">
+ <t>Baselines: <list style="symbols">
+ <t>Optional: Benchmark platform forwarding capability without
+ a vswitch or VNF for at least 72 hours (serves as a means of
+ platform validation and a means to obtain the base performance
+ for the platform in terms of its maximum forwarding rate and
+ latency). <figure>
+ <preamble>Benchmark platform forwarding
+ capability</preamble>
+
+ <artwork align="right"><![CDATA[ __
+ +--------------------------------------------------+ |
+ | +------------------------------------------+ | |
+ | | | | |
+ | | Simple Forwarding App | | Host
+ | | | | |
+ | +------------------------------------------+ | |
+ | | NIC | | |
+ +---+------------------------------------------+---+ __|
+ ^ :
+ | |
+ : v
+ +--------------------------------------------------+
+ | |
+ | traffic generator |
+ | |
+ +--------------------------------------------------+]]></artwork>
+
+ <postamble/>
+ </figure></t>
+
+ <t>Benchmark VNF forwarding capability with direct
+            connectivity (vSwitch bypass, e.g., SR-IOV) for at least 72
+ hours (serves as a means of VNF validation and a means to
+ obtain the base performance for the VNF in terms of its
+ maximum forwarding rate and latency). The metrics gathered
+ from this test will serve as a key comparison point for
+ vSwitch bypass technologies performance and vSwitch
+ performance. <figure align="right">
+ <preamble>Benchmark VNF forwarding capability</preamble>
+
+ <artwork><![CDATA[ __
+ +--------------------------------------------------+ |
+ | +------------------------------------------+ | |
+ | | | | |
+ | | VNF | | |
+ | | | | |
+ | +------------------------------------------+ | |
+ | | Passthrough/SR-IOV | | Host
+ | +------------------------------------------+ | |
+ | | NIC | | |
+ +---+------------------------------------------+---+ __|
+ ^ :
+ | |
+ : v
+ +--------------------------------------------------+
+ | |
+ | traffic generator |
+ | |
+ +--------------------------------------------------+]]></artwork>
+
+ <postamble/>
+ </figure></t>
+
+            <t>Benchmarking with isolated resources alone, with other
+            resources (both HW&amp;SW) disabled. For example, the vSwitch
+            and VM are the SUT.</t>
+
+ <t>Benchmarking with isolated resources alone, leaving some
+ resources unused</t>
+
+ <t>Benchmark with isolated resources and all resources
+ occupied</t>
+ </list></t>
+
+ <t>Next Steps<list style="symbols">
+ <t>Limited sharing</t>
+
+ <t>Production scenarios</t>
+
+ <t>Stressful scenarios</t>
+ </list></t>
+ </list></t>
+ </section>
+ </section>
+
+ <section title="VSWITCHPERF Specification Summary">
+ <t>The overall specification in preparation is referred to as a Level
+ Test Design (LTD) document, which will contain a suite of performance
+ tests. The base performance tests in the LTD are based on the
+ pre-existing specifications developed by BMWG to test the performance of
+ physical switches. These specifications include:</t>
+
+ <t><list style="symbols">
+ <t><xref target="RFC2544"/> Benchmarking Methodology for Network
+ Interconnect Devices</t>
+
+ <t><xref target="RFC2889"/> Benchmarking Methodology for LAN
+ Switching</t>
+
+ <t><xref target="RFC6201"/> Device Reset Characterization</t>
+
+ <t><xref target="RFC5481"/> Packet Delay Variation Applicability
+ Statement</t>
+ </list></t>
+
+ <t>Some of the above/newer RFCs are being applied in benchmarking for
+ the first time, and represent a development challenge for test equipment
+ developers. Fortunately, many members of the testing system community
+ have engaged on the VSPERF project, including an open source test
+ system.</t>
+
+ <t>In addition to this, the LTD also re-uses the terminology defined
+ by:</t>
+
+ <t><list style="symbols">
+ <t><xref target="RFC2285"/> Benchmarking Terminology for LAN
+ Switching Devices</t>
+
+ <t><xref target="RFC5481"/> Packet Delay Variation Applicability
+ Statement</t>
+ </list></t>
+
+ <t/>
+
+ <t>Specifications to be included in future updates of the LTD
+ include:<list style="symbols">
+ <t><xref target="RFC3918"/> Methodology for IP Multicast
+ Benchmarking</t>
+
+ <t><xref target="RFC4737"/> Packet Reordering Metrics</t>
+ </list></t>
+
+ <t>As one might expect, the most fundamental internetworking
+ characteristics of Throughput and Latency remain important when the
+ switch is virtualized, and these benchmarks figure prominently in the
+ specification.</t>
+
+ <t>When considering characteristics important to "telco" network
+ functions, we must begin to consider additional performance metrics. In
+ this case, the project specifications have referenced metrics from the
+ IETF IP Performance Metrics (IPPM) literature. This means that the <xref
+ target="RFC2544"/> test of Latency is replaced by measurement of a
+ metric derived from IPPM's <xref target="RFC2679"/>, where a set of
+ statistical summaries will be provided (mean, max, min, etc.). Further
+ metrics planned to be benchmarked include packet delay variation as
+ defined by <xref target="RFC5481"/> , reordering, burst behaviour, DUT
+ availability, DUT capacity and packet loss in long term testing at
+ Throughput level, where some low-level of background loss may be present
+ and characterized.</t>
+
+ <t>Tests have been (or will be) designed to collect the metrics
+ below:</t>
+
+ <t><list style="symbols">
+ <t>Throughput Tests to measure the maximum forwarding rate (in
+ frames per second or fps) and bit rate (in Mbps) for a constant load
+ (as defined by <xref target="RFC1242"/>) without traffic loss.</t>
+
+ <t>Packet and Frame Delay Distribution Tests to measure average, min
+ and max packet and frame delay for constant loads.</t>
+
+ <t>Packet Delay Tests to understand latency distribution for
+ different packet sizes and over an extended test run to uncover
+ outliers.</t>
+
+          <t>Scalability Tests to understand how the virtual switch performs
+          as the number of flows, active ports, and the complexity of the
+          forwarding logic&rsquo;s configuration it has to deal with
+          increase.</t>
+
+ <t>Stream Performance Tests (TCP, UDP) to measure bulk data transfer
+ performance, i.e. how fast systems can send and receive data through
+ the switch.</t>
+
+ <t>Control Path and Datapath Coupling Tests, to understand how
+ closely coupled the datapath and the control path are as well as the
+ effect of this coupling on the performance of the DUT (example:
+ delay of the initial packet of a flow).</t>
+
+ <t>CPU and Memory Consumption Tests to understand the virtual
+ switch&rsquo;s footprint on the system, usually conducted as
+ auxiliary measurements with benchmarks above. They include: CPU
+ utilization, Cache utilization and Memory footprint.</t>
+
+ <t>The so-called "Soak" tests, where the selected test is conducted
+ over a long period of time (with an ideal duration of 24 hours, and
+ at least 6 hours). The purpose of soak tests is to capture transient
+ changes in performance which may occur due to infrequent processes
+ or the low probability coincidence of two or more processes. The
+ performance must be evaluated periodically during continuous
+ testing, and this results in use of <xref target="RFC2889"/> Frame
+ Rate metrics instead of <xref target="RFC2544"/> Throughput (which
+ requires stopping traffic to allow time for all traffic to exit
+ internal queues).</t>
+ </list></t>
+
+ <t>Future/planned test specs include:<list style="symbols">
+ <t>Request/Response Performance Tests (TCP, UDP) which measure the
+ transaction rate through the switch.</t>
+
+ <t>Noisy Neighbour Tests, to understand the effects of resource
+ sharing on the performance of a virtual switch.</t>
+
+ <t>Tests derived from examination of ETSI NFV Draft GS IFA003
+ requirements <xref target="IFA003"/> on characterization of
+ acceleration technologies applied to vswitches.</t>
+ </list>The flexibility of deployment of a virtual switch within a
+        network means that the existing BMWG literature needs to be used to
+ characterize the performance of a switch in various deployment
+ scenarios. The deployment scenarios under consideration include:</t>
+
+ <t><figure>
+ <preamble>Physical port to virtual switch to physical
+ port</preamble>
+
+ <artwork><![CDATA[ __
+ +--------------------------------------------------+ |
+ | +--------------------+ | |
+ | | | | |
+ | | v | | Host
+ | +--------------+ +--------------+ | |
+ | | phy port | vSwitch | phy port | | |
+ +---+--------------+------------+--------------+---+ __|
+ ^ :
+ | |
+ : v
+ +--------------------------------------------------+
+ | |
+ | traffic generator |
+ | |
+ +--------------------------------------------------+]]></artwork>
+ </figure></t>
+
+ <t><figure>
+ <preamble>Physical port to virtual switch to VNF to virtual switch
+ to physical port</preamble>
+
+ <artwork><![CDATA[ __
+ +---------------------------------------------------+ |
+ | | |
+ | +-------------------------------------------+ | |
+ | | Application | | |
+ | +-------------------------------------------+ | |
+ | ^ : | |
+ | | | | | Guest
+ | : v | |
+ | +---------------+ +---------------+ | |
+ | | logical port 0| | logical port 1| | |
+ +---+---------------+-----------+---------------+---+ __|
+ ^ :
+ | |
+ : v __
+ +---+---------------+----------+---------------+---+ |
+ | | logical port 0| | logical port 1| | |
+ | +---------------+ +---------------+ | |
+ | ^ : | |
+ | | | | | Host
+ | : v | |
+ | +--------------+ +--------------+ | |
+ | | phy port | vSwitch | phy port | | |
+ +---+--------------+------------+--------------+---+ __|
+ ^ :
+ | |
+ : v
+ +--------------------------------------------------+
+ | |
+ | traffic generator |
+ | |
+ +--------------------------------------------------+]]></artwork>
+ </figure><figure>
+ <preamble>Physical port to virtual switch to VNF to virtual switch
+ to VNF to virtual switch to physical port</preamble>
+
+ <artwork><![CDATA[ __
+ +----------------------+ +----------------------+ |
+ | Guest 1 | | Guest 2 | |
+ | +---------------+ | | +---------------+ | |
+ | | Application | | | | Application | | |
+ | +---------------+ | | +---------------+ | |
+ | ^ | | | ^ | | |
+ | | v | | | v | | Guests
+ | +---------------+ | | +---------------+ | |
+ | | logical ports | | | | logical ports | | |
+ | | 0 1 | | | | 0 1 | | |
+ +---+---------------+--+ +---+---------------+--+__|
+ ^ : ^ :
+ | | | |
+ : v : v _
+ +---+---------------+---------+---------------+--+ |
+ | | 0 1 | | 3 4 | | |
+ | | logical ports | | logical ports | | |
+ | +---------------+ +---------------+ | |
+ | ^ | ^ | | | Host
+ | | |-----------------| v | |
+ | +--------------+ +--------------+ | |
+ | | phy ports | vSwitch | phy ports | | |
+ +---+--------------+----------+--------------+---+_|
+ ^ :
+ | |
+ : v
+ +--------------------------------------------------+
+ | |
+ | traffic generator |
+ | |
+ +--------------------------------------------------+]]></artwork>
+ </figure><figure>
+ <preamble>Physical port to virtual switch to VNF</preamble>
+
+ <artwork><![CDATA[ __
+ +---------------------------------------------------+ |
+ | | |
+ | +-------------------------------------------+ | |
+ | | Application | | |
+ | +-------------------------------------------+ | |
+ | ^ | |
+ | | | | Guest
+ | : | |
+ | +---------------+ | |
+ | | logical port 0| | |
+ +---+---------------+-------------------------------+ __|
+ ^
+ |
+ : __
+ +---+---------------+------------------------------+ |
+ | | logical port 0| | |
+ | +---------------+ | |
+ | ^ | |
+ | | | | Host
+ | : | |
+ | +--------------+ | |
+ | | phy port | vSwitch | |
+ +---+--------------+------------ -------------- ---+ __|
+ ^
+ |
+ :
+ +--------------------------------------------------+
+ | |
+ | traffic generator |
+ | |
+ +--------------------------------------------------+]]></artwork>
+ </figure><figure>
+ <preamble>VNF to virtual switch to physical port</preamble>
+
+ <artwork><![CDATA[ __
+ +---------------------------------------------------+ |
+ | | |
+ | +-------------------------------------------+ | |
+ | | Application | | |
+ | +-------------------------------------------+ | |
+ | : | |
+ | | | | Guest
+ | v | |
+ | +---------------+ | |
+ | | logical port | | |
+ +-------------------------------+---------------+---+ __|
+ :
+ |
+ v __
+ +------------------------------+---------------+---+ |
+ | | logical port | | |
+ | +---------------+ | |
+ | : | |
+ | | | | Host
+ | v | |
+ | +--------------+ | |
+ | vSwitch | phy port | | |
+ +-------------------------------+--------------+---+ __|
+ :
+ |
+ v
+ +--------------------------------------------------+
+ | |
+ | traffic generator |
+ | |
+ +--------------------------------------------------+]]></artwork>
+ </figure><figure>
+ <preamble>VNF to virtual switch to VNF</preamble>
+
+ <artwork><![CDATA[ __
+ +----------------------+ +----------------------+ |
+ | Guest 1 | | Guest 2 | |
+ | +---------------+ | | +---------------+ | |
+ | | Application | | | | Application | | |
+ | +---------------+ | | +---------------+ | |
+ | | | | ^ | |
+ | v | | | | | Guests
+ | +---------------+ | | +---------------+ | |
+ | | logical ports | | | | logical ports | | |
+ | | 0 | | | | 0 | | |
+ +---+---------------+--+ +---+---------------+--+__|
+ : ^
+ | |
+ v : _
+ +---+---------------+---------+---------------+--+ |
+ | | 1 | | 1 | | |
+ | | logical ports | | logical ports | | |
+ | +---------------+ +---------------+ | |
+ | | ^ | | Host
+ | L-----------------+ | |
+ | | |
+ | vSwitch | |
+ +------------------------------------------------+_|]]></artwork>
+ </figure></t>
+
+ <t>A set of Deployment Scenario figures is available on the VSPERF Test
+ Methodology Wiki page <xref target="TestTopo"/>.</t>
+ </section>
+
+ <section title="3x3 Matrix Coverage">
+ <t>This section organizes the many existing test specifications into the
+ "3x3" matrix (introduced in <xref target="I-D.ietf-bmwg-virtual-net"/>).
+ Because the LTD specification ID names are quite long, this section is
+ organized into lists for each occupied cell of the matrix (not all are
+      occupied; also, the matrix has grown to 3x4 to accommodate scale metrics
+ when displaying the coverage of many metrics/benchmarks). The current
+ version of the LTD specification is available <xref target="LTD"/>.</t>
+
+ <t>The tests listed below assess the activation of paths in the data
+ plane, rather than the control plane.</t>
+
+ <t>A complete list of tests with short summaries is available on the
+ VSPERF "LTD Test Spec Overview" Wiki page <xref target="LTDoverV"/>.</t>
+
+ <section title="Speed of Activation">
+ <t><list style="symbols">
+ <t>Activation.RFC2889.AddressLearningRate</t>
+
+ <t>PacketLatency.InitialPacketProcessingLatency</t>
+ </list></t>
+ </section>
+
+ <section title="Accuracy of Activation section">
+ <t><list style="symbols">
+ <t>CPDP.Coupling.Flow.Addition</t>
+ </list></t>
+ </section>
+
+ <section title="Reliability of Activation">
+ <t><list style="symbols">
+ <t>Throughput.RFC2544.SystemRecoveryTime</t>
+
+ <t>Throughput.RFC2544.ResetTime</t>
+ </list></t>
+ </section>
+
+ <section title="Scale of Activation">
+ <t><list style="symbols">
+ <t>Activation.RFC2889.AddressCachingCapacity</t>
+ </list></t>
+ </section>
+
+ <section title="Speed of Operation">
+ <t><list style="symbols">
+ <t>Throughput.RFC2544.PacketLossRate</t>
+
+ <t>CPU.RFC2544.0PacketLoss</t>
+
+ <t>Throughput.RFC2544.PacketLossRateFrameModification</t>
+
+ <t>Throughput.RFC2544.BackToBackFrames</t>
+
+ <t>Throughput.RFC2889.MaxForwardingRate</t>
+
+ <t>Throughput.RFC2889.ForwardPressure</t>
+
+ <t>Throughput.RFC2889.BroadcastFrameForwarding</t>
+ </list></t>
+ </section>
+
+ <section title="Accuracy of Operation">
+ <t><list style="symbols">
+ <t>Throughput.RFC2889.ErrorFramesFiltering</t>
+
+ <t>Throughput.RFC2544.Profile</t>
+ </list></t>
+ </section>
+
+ <section title="Reliability of Operation">
+ <t><list style="symbols">
+ <t>Throughput.RFC2889.Soak</t>
+
+ <t>Throughput.RFC2889.SoakFrameModification</t>
+
+ <t>PacketDelayVariation.RFC3393.Soak</t>
+ </list></t>
+ </section>
+
+ <section title="Scalability of Operation">
+ <t><list style="symbols">
+ <t>Scalability.RFC2544.0PacketLoss</t>
+
+ <t>MemoryBandwidth.RFC2544.0PacketLoss.Scalability</t>
+ </list></t>
+ </section>
+
+ <section title="Summary">
+ <t><figure>
+ <artwork><![CDATA[|------------------------------------------------------------------------|
+| | | | | |
+| | SPEED | ACCURACY | RELIABILITY | SCALE |
+| | | | | |
+|------------------------------------------------------------------------|
+| | | | | |
+| Activation | X | X | X | X |
+| | | | | |
+|------------------------------------------------------------------------|
+| | | | | |
+| Operation | X | X | X | X |
+| | | | | |
+|------------------------------------------------------------------------|
+| | | | | |
+| De-activation | | | | |
+| | | | | |
+|------------------------------------------------------------------------|]]></artwork>
+ </figure></t>
+ </section>
+ </section>
+
+ <section title="Security Considerations">
+ <t>Benchmarking activities as described in this memo are limited to
+ technology characterization of a Device Under Test/System Under Test
+ (DUT/SUT) using controlled stimuli in a laboratory environment, with
+ dedicated address space and the constraints specified in the sections
+ above.</t>
+
+ <t>The benchmarking network topology will be an independent test setup
+ and MUST NOT be connected to devices that may forward the test traffic
+ into a production network, or misroute traffic to the test management
+ network.</t>
+
+ <t>Further, benchmarking is performed on a "black-box" basis, relying
+ solely on measurements observable external to the DUT/SUT.</t>
+
+ <t>Special capabilities SHOULD NOT exist in the DUT/SUT specifically for
+ benchmarking purposes. Any implications for network security arising
+ from the DUT/SUT SHOULD be identical in the lab and in production
+ networks.</t>
+ </section>
+
+ <section anchor="IANA" title="IANA Considerations">
+ <t>No IANA Action is requested at this time.</t>
+ </section>
+
+ <section title="Acknowledgements">
+ <t>The authors appreciate and acknowledge comments from Scott Bradner,
+ Marius Georgescu, Ramki Krishnan, Doug Montgomery, Martin Klozik,
+ Christian Trautman, and others for their reviews.</t>
+ </section>
+ </middle>
+
+ <back>
+ <references title="Normative References">
+ <?rfc ?>
+
+ <?rfc include="reference.RFC.2119"?>
+
+ <?rfc ?>
+
+ <?rfc include="reference.RFC.2330"?>
+
+ <?rfc include='reference.RFC.2544'?>
+
+ <?rfc include="reference.RFC.2679"?>
+
+ <?rfc include='reference.RFC.2680'?>
+
+ <?rfc include='reference.RFC.3393'?>
+
+ <?rfc include='reference.RFC.3432'?>
+
+ <?rfc include='reference.RFC.2681'?>
+
+ <?rfc include='reference.RFC.5905'?>
+
+ <?rfc include='reference.RFC.4689'?>
+
+ <?rfc include='reference.RFC.4737'?>
+
+ <?rfc include='reference.RFC.5357'?>
+
+ <?rfc include='reference.RFC.2889'?>
+
+ <?rfc include='reference.RFC.3918'?>
+
+ <?rfc include='reference.RFC.6201'?>
+
+ <?rfc include='reference.RFC.2285'?>
+
+ <reference anchor="NFV.PER001">
+ <front>
+ <title>Network Function Virtualization: Performance and Portability
+ Best Practices</title>
+
+ <author fullname="ETSI NFV" initials="" surname="">
+ <organization/>
+ </author>
+
+ <date month="June" year="2014"/>
+ </front>
+
+ <seriesInfo name="Group Specification"
+ value="ETSI GS NFV-PER 001 V1.1.1 (2014-06)"/>
+
+ <format type="PDF"/>
+ </reference>
+ </references>
+
+ <references title="Informative References">
+ <?rfc include='reference.RFC.1242'?>
+
+ <?rfc include='reference.RFC.5481'?>
+
+ <?rfc include='reference.RFC.6049'?>
+
+ <?rfc include='reference.RFC.6248'?>
+
+ <?rfc include='reference.RFC.6390'?>
+
+ <?rfc include='reference.I-D.ietf-bmwg-virtual-net'?>
+
+ <?rfc include='reference.I-D.huang-bmwg-virtual-network-performance'?>
+
+ <reference anchor="TestTopo">
+ <front>
+ <title>Test Topologies
+ https://wiki.opnfv.org/vsperf/test_methodology</title>
+
+ <author>
+ <organization/>
+ </author>
+
+ <date/>
+ </front>
+ </reference>
+
+ <reference anchor="LTDoverV">
+ <front>
+ <title>LTD Test Spec Overview
+ https://wiki.opnfv.org/wiki/vswitchperf_test_spec_review</title>
+
+ <author>
+ <organization/>
+ </author>
+
+ <date/>
+ </front>
+ </reference>
+
+ <reference anchor="LTD">
+ <front>
+ <title>LTD Test Specification
+ http://artifacts.opnfv.org/vswitchperf/docs/requirements/index.html</title>
+
+ <author>
+ <organization/>
+ </author>
+
+ <date/>
+ </front>
+ </reference>
+
+ <reference anchor="BrahRel">
+ <front>
+ <title>Brahmaputra, Second OPNFV Release
+ https://www.opnfv.org/brahmaputra</title>
+
+ <author>
+ <organization/>
+ </author>
+
+ <date/>
+ </front>
+ </reference>
+
+ <reference anchor="IFA003">
+ <front>
+ <title>https://docbox.etsi.org/ISG/NFV/Open/Drafts/IFA003_Acceleration_-_vSwitch_Spec/</title>
+
+ <author>
+ <organization/>
+ </author>
+
+ <date/>
+ </front>
+ </reference>
+ </references>
+ </back>
+</rfc>
diff --git a/docs/testing/developer/requirements/vm2vm_alternative_benchmark.png b/docs/testing/developer/requirements/vm2vm_alternative_benchmark.png
new file mode 100644
index 00000000..d21334ba
--- /dev/null
+++ b/docs/testing/developer/requirements/vm2vm_alternative_benchmark.png
Binary files differ
diff --git a/docs/testing/developer/requirements/vm2vm_benchmark.png b/docs/testing/developer/requirements/vm2vm_benchmark.png
new file mode 100644
index 00000000..3a85e51f
--- /dev/null
+++ b/docs/testing/developer/requirements/vm2vm_benchmark.png
Binary files differ
diff --git a/docs/testing/developer/requirements/vm2vm_hypervisor_benchmark.png b/docs/testing/developer/requirements/vm2vm_hypervisor_benchmark.png
new file mode 100644
index 00000000..b5b76e8a
--- /dev/null
+++ b/docs/testing/developer/requirements/vm2vm_hypervisor_benchmark.png
Binary files differ
diff --git a/docs/testing/developer/requirements/vm2vm_virtual_interface_benchmark.png b/docs/testing/developer/requirements/vm2vm_virtual_interface_benchmark.png
new file mode 100644
index 00000000..55294af6
--- /dev/null
+++ b/docs/testing/developer/requirements/vm2vm_virtual_interface_benchmark.png
Binary files differ
diff --git a/docs/testing/developer/requirements/vswitchperf_ltd.rst b/docs/testing/developer/requirements/vswitchperf_ltd.rst
new file mode 100644
index 00000000..e1372520
--- /dev/null
+++ b/docs/testing/developer/requirements/vswitchperf_ltd.rst
@@ -0,0 +1,1712 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, Intel Corporation, AT&T and others.
+
+******************************
+VSPERF LEVEL TEST DESIGN (LTD)
+******************************
+
+.. 3.1
+
+============
+Introduction
+============
+
+The intention of this Level Test Design (LTD) document is to specify the set of
+tests to carry out in order to objectively measure the current characteristics
+of a virtual switch in the Network Function Virtualization Infrastructure
+(NFVI) as well as the test pass criteria. The detailed test cases will be
+defined in details-of-LTD_, preceded by the doc-id-of-LTD_ and the scope-of-LTD_.
+
+This document is currently in draft form.
+
+.. 3.1.1
+
+
+.. _doc-id-of-LTD:
+
+Document identifier
+===================
+
+The document id will be used to uniquely
+identify versions of the LTD. The format for the document id will be:
+OPNFV\_vswitchperf\_LTD\_REL\_STATUS, where the
+status is one of: draft, reviewed, corrected or final. The document id
+for this version of the LTD is:
+OPNFV\_vswitchperf\_LTD\_Brahmaputra\_REVIEWED.
+
+.. 3.1.2
+
+.. _scope-of-LTD:
+
+Scope
+=====
+
+The main purpose of this project is to specify a suite of
+performance tests in order to objectively measure the current packet
+transfer characteristics of a virtual switch in the NFVI. The intent of
+the project is to facilitate testing of any virtual switch. Thus, a
+generic suite of tests shall be developed, with no hard dependencies to
+a single implementation. In addition, the test case suite shall be
+architecture independent.
+
+The test cases developed in this project shall not form part of a
+separate test framework; all of these tests may be inserted into the
+Continuous Integration Test Framework and/or the Platform Functionality
+Test Framework - if a vSwitch becomes a standard component of an OPNFV
+release.
+
+.. 3.1.3
+
+References
+==========
+
+* `RFC 1242 Benchmarking Terminology for Network Interconnection
+ Devices <http://www.ietf.org/rfc/rfc1242.txt>`__
+* `RFC 2544 Benchmarking Methodology for Network Interconnect
+ Devices <http://www.ietf.org/rfc/rfc2544.txt>`__
+* `RFC 2285 Benchmarking Terminology for LAN Switching
+ Devices <http://www.ietf.org/rfc/rfc2285.txt>`__
+* `RFC 2889 Benchmarking Methodology for LAN Switching
+ Devices <http://www.ietf.org/rfc/rfc2889.txt>`__
+* `RFC 3918 Methodology for IP Multicast
+ Benchmarking <http://www.ietf.org/rfc/rfc3918.txt>`__
+* `RFC 4737 Packet Reordering
+ Metrics <http://www.ietf.org/rfc/rfc4737.txt>`__
+* `RFC 5481 Packet Delay Variation Applicability
+ Statement <http://www.ietf.org/rfc/rfc5481.txt>`__
+* `RFC 6201 Device Reset
+ Characterization <http://tools.ietf.org/html/rfc6201>`__
+
+.. 3.2
+
+.. _details-of-LTD:
+
+================================
+Details of the Level Test Design
+================================
+
+This section describes the features to be tested (FeaturesToBeTested-of-LTD_), and
+identifies the sets of test cases or scenarios (TestIdentification-of-LTD_).
+
+.. 3.2.1
+
+.. _FeaturesToBeTested-of-LTD:
+
+Features to be tested
+=====================
+
+Characterizing virtual switches (i.e. Device Under Test (DUT) in this document)
+includes measuring the following performance metrics:
+
+- Throughput
+- Packet delay
+- Packet delay variation
+- Packet loss
+- Burst behaviour
+- Packet re-ordering
+- Packet correctness
+- Availability and capacity of the DUT
+
+.. 3.2.2
+
+.. _TestIdentification-of-LTD:
+
+Test identification
+===================
+
+.. 3.2.2.1
+
+Throughput tests
+----------------
+
+The following tests aim to determine the maximum forwarding rate that
+can be achieved with a virtual switch. The list is not exhaustive but
+should indicate the type of tests that should be required. It is
+expected that more will be added.
+
+.. 3.2.2.1.1
+
+.. _PacketLossRatio:
+
+Test ID: LTD.Throughput.RFC2544.PacketLossRatio
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+ **Title**: RFC 2544 X% packet loss ratio Throughput and Latency Test
+
+ **Prerequisite Test**: N/A
+
+ **Priority**:
+
+ **Description**:
+
+ This test determines the DUT's maximum forwarding rate with X% traffic
+ loss for a constant load (fixed length frames at a fixed interval time).
+    The default loss percentages to be tested are:
+
+    - X = 0%
+    - X = 10^-7%
+
+ Note: Other values can be tested if required by the user.
+
+ The selected frame sizes are those previously defined under
+ :ref:`default-test-parameters`.
+ The test can also be used to determine the average latency of the traffic.
+
+ Under the `RFC2544 <https://www.rfc-editor.org/rfc/rfc2544.txt>`__
+ test methodology, the test duration will
+ include a number of trials; each trial should run for a minimum period
+ of 60 seconds. A binary search methodology must be applied for each
+ trial to obtain the final result.
+
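+    As an illustration only (not part of the normative procedure), a
+    minimal sketch of the binary search over the offered load, assuming a
+    hypothetical ``run_trial()`` helper that returns the measured loss
+    ratio for one 60 second trial at a given offered load:
+
+    .. code-block:: python
+
+        def rfc2544_throughput(line_rate_fps, max_loss_ratio, run_trial,
+                               resolution_fps=1000):
+            """Binary search for the highest offered load (in fps) whose
+            measured loss ratio does not exceed max_loss_ratio."""
+            low, high, best = 0.0, float(line_rate_fps), 0.0
+            while high - low > resolution_fps:
+                offered = (low + high) / 2
+                if run_trial(offered) <= max_loss_ratio:  # trial passed
+                    best, low = offered, offered          # search higher
+                else:
+                    high = offered                        # search lower
+            return best
+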
+ **Expected Result**: At the end of each trial, the presence or absence
+ of loss determines the modification of offered load for the next trial,
+ converging on a maximum rate, or
+ `RFC2544 <https://www.rfc-editor.org/rfc/rfc2544.txt>`__ Throughput with X%
+ loss.
+ The Throughput load is re-used in related
+ `RFC2544 <https://www.rfc-editor.org/rfc/rfc2544.txt>`__ tests and other
+ tests.
+
+ **Metrics Collected**:
+
+ The following are the metrics collected for this test:
+
+ - The maximum forwarding rate in Frames Per Second (FPS) and Mbps of
+ the DUT for each frame size with X% packet loss.
+ - The average latency of the traffic flow when passing through the DUT
+ (if testing for latency, note that this average is different from the
+ test specified in Section 26.3 of
+ `RFC2544 <https://www.rfc-editor.org/rfc/rfc2544.txt>`__).
+ - CPU and memory utilization may also be collected as part of this
+ test, to determine the vSwitch's performance footprint on the system.
+
+.. 3.2.2.1.2
+
+.. _PacketLossRatioFrameModification:
+
+Test ID: LTD.Throughput.RFC2544.PacketLossRatioFrameModification
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+ **Title**: RFC 2544 X% packet loss Throughput and Latency Test with
+ packet modification
+
+ **Prerequisite Test**: N/A
+
+ **Priority**:
+
+ **Description**:
+
+ This test determines the DUT's maximum forwarding rate with X% traffic
+ loss for a constant load (fixed length frames at a fixed interval time).
+    The default loss percentages to be tested are:
+
+    - X = 0%
+    - X = 10^-7%
+
+ Note: Other values can be tested if required by the user.
+
+ The selected frame sizes are those previously defined under
+ :ref:`default-test-parameters`.
+ The test can also be used to determine the average latency of the traffic.
+
+ Under the `RFC2544 <https://www.rfc-editor.org/rfc/rfc2544.txt>`__
+ test methodology, the test duration will
+ include a number of trials; each trial should run for a minimum period
+ of 60 seconds. A binary search methodology must be applied for each
+ trial to obtain the final result.
+
+ During this test, the DUT must perform the following operations on the
+ traffic flow:
+
+ - Perform packet parsing on the DUT's ingress port.
+ - Perform any relevant address look-ups on the DUT's ingress ports.
+ - Modify the packet header before forwarding the packet to the DUT's
+ egress port. Packet modifications include:
+
+ - Modifying the Ethernet source or destination MAC address.
+ - Modifying/adding a VLAN tag. (**Recommended**).
+ - Modifying/adding a MPLS tag.
+ - Modifying the source or destination ip address.
+ - Modifying the TOS/DSCP field.
+ - Modifying the source or destination ports for UDP/TCP/SCTP.
+ - Modifying the TTL.
+
+ **Expected Result**: The Packet parsing/modifications require some
+ additional degree of processing resource, therefore the
+ `RFC2544 <https://www.rfc-editor.org/rfc/rfc2544.txt>`__
+ Throughput is expected to be somewhat lower than the Throughput level
+ measured without additional steps. The reduction is expected to be
+ greatest on tests with the smallest packet sizes (greatest header
+ processing rates).
+
+ **Metrics Collected**:
+
+ The following are the metrics collected for this test:
+
+ - The maximum forwarding rate in Frames Per Second (FPS) and Mbps of
+ the DUT for each frame size with X% packet loss and packet
+ modification operations being performed by the DUT.
+ - The average latency of the traffic flow when passing through the DUT
+ (if testing for latency, note that this average is different from the
+ test specified in Section 26.3 of
+ `RFC2544 <https://www.rfc-editor.org/rfc/rfc2544.txt>`__).
+ - The `RFC5481 <https://www.rfc-editor.org/rfc/rfc5481.txt>`__
+ PDV form of delay variation on the traffic flow,
+ using the 99th percentile.
+ - CPU and memory utilization may also be collected as part of this
+ test, to determine the vSwitch's performance footprint on the system.
+
+.. 3.2.2.1.3
+
+Test ID: LTD.Throughput.RFC2544.Profile
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+ **Title**: RFC 2544 Throughput and Latency Profile
+
+ **Prerequisite Test**: N/A
+
+ **Priority**:
+
+ **Description**:
+
+ This test reveals how throughput and latency degrades as the offered
+ rate varies in the region of the DUT's maximum forwarding rate as
+ determined by LTD.Throughput.RFC2544.PacketLossRatio (0% Packet Loss).
+ For example it can be used to determine if the degradation of throughput
+ and latency as the offered rate increases is slow and graceful or sudden
+ and severe.
+
+ The selected frame sizes are those previously defined under
+ :ref:`default-test-parameters`.
+
+ The offered traffic rate is described as a percentage delta with respect
+ to the DUT's RFC 2544 Throughput as determined by
+    LTD.Throughput.RFC2544.PacketLossRatio (0% Packet Loss case). A delta
+ of 0% is equivalent to an offered traffic rate equal to the RFC 2544
+ Maximum Throughput; A delta of +50% indicates an offered rate half-way
+ between the Maximum RFC2544 Throughput and line-rate, whereas a delta of
+ -50% indicates an offered rate of half the RFC 2544 Maximum Throughput.
+    Therefore the range of the delta figure is naturally bounded at -100%
+ (zero offered traffic) and +100% (traffic offered at line rate).
+
+ The following deltas to the maximum forwarding rate should be applied:
+
+ - -50%, -10%, 0%, +10% & +50%
+
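+    As a worked example of the delta calculation (figures are assumed for
+    illustration: a 10GbE port carrying 64 byte frames, i.e. a line rate
+    of 14.88 Mfps, and an RFC 2544 Maximum Throughput of 8.0 Mfps):
+
+    .. code-block:: python
+
+        LINE_RATE_FPS = 14.88e6  # assumed line rate (64B frames, 10GbE)
+        MAX_TPUT_FPS = 8.0e6     # assumed RFC 2544 0% loss Throughput
+
+        def offered_rate(delta_pct):
+            """Map a delta (in percent) onto an offered rate in fps."""
+            if delta_pct >= 0:
+                # positive deltas move towards line rate
+                span = LINE_RATE_FPS - MAX_TPUT_FPS
+                return MAX_TPUT_FPS + (delta_pct / 100.0) * span
+            # negative deltas move towards zero offered traffic
+            return MAX_TPUT_FPS * (1 + delta_pct / 100.0)
+
+        for delta in (-50, -10, 0, 10, 50):
+            # -50% -> 4.00 Mfps, 0% -> 8.00 Mfps, +50% -> 11.44 Mfps
+            print("%+d%% -> %.2f Mfps" % (delta, offered_rate(delta) / 1e6))
+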
+ **Expected Result**: For each packet size a profile should be produced
+ of how throughput and latency vary with offered rate.
+
+ **Metrics Collected**:
+
+ The following are the metrics collected for this test:
+
+ - The forwarding rate in Frames Per Second (FPS) and Mbps of the DUT
+ for each delta to the maximum forwarding rate and for each frame
+ size.
+ - The average latency for each delta to the maximum forwarding rate and
+ for each frame size.
+ - CPU and memory utilization may also be collected as part of this
+ test, to determine the vSwitch's performance footprint on the system.
+ - Any failures experienced (for example if the vSwitch crashes, stops
+ processing packets, restarts or becomes unresponsive to commands)
+ when the offered load is above Maximum Throughput MUST be recorded
+ and reported with the results.
+
+.. 3.2.2.1.4
+
+Test ID: LTD.Throughput.RFC2544.SystemRecoveryTime
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+ **Title**: RFC 2544 System Recovery Time Test
+
+    **Prerequisite Test**: LTD.Throughput.RFC2544.PacketLossRatio
+
+ **Priority**:
+
+ **Description**:
+
+ The aim of this test is to determine the length of time it takes the DUT
+ to recover from an overload condition for a constant load (fixed length
+ frames at a fixed interval time). The selected frame sizes are those
+ previously defined under :ref:`default-test-parameters`,
+ traffic should be sent to the DUT under normal conditions. During the
+ duration of the test and while the traffic flows are passing though the
+ DUT, at least one situation leading to an overload condition for the DUT
+ should occur. The time from the end of the overload condition to when
+ the DUT returns to normal operations should be measured to determine
+ recovery time. Prior to overloading the DUT, one should record the
+ average latency for 10,000 packets forwarded through the DUT.
+
+ The overload condition SHOULD be to transmit traffic at a very high
+ frame rate to the DUT (150% of the maximum 0% packet loss rate as
+ determined by LTD.Throughput.RFC2544.PacketLossRatio or line-rate
+ whichever is lower), for at least 60 seconds, then reduce the frame rate
+    to 75% of the maximum 0% packet loss rate. A number of time-stamps
+    should be recorded:
+
+    - Record the time-stamp at which the frame rate was reduced and
+      record a second time-stamp at the time of the last frame lost.
+      The recovery time is the difference between the two timestamps.
+    - Record the average latency for 10,000 frames after the last frame
+      loss and continue to record average latency measurements for every
+      10,000 frames; when latency returns to within 10% of pre-overload
+      levels, record the time-stamp.
+
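+    A minimal sketch of the derivation of the two recovery metrics from
+    the recorded time-stamps (the argument names below are illustrative,
+    not prescribed by this test):
+
+    .. code-block:: python
+
+        def recovery_metrics(t_rate_reduced, t_last_frame_lost,
+                             t_latency_recovered):
+            """All time-stamps in seconds from a common clock."""
+            recovery_time = t_last_frame_lost - t_rate_reduced
+            # latency recovery measured here from the end of the overload
+            latency_recovery_time = t_latency_recovered - t_rate_reduced
+            return recovery_time, latency_recovery_time
+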
+ **Expected Result**:
+
+ **Metrics collected**
+
+ The following are the metrics collected for this test:
+
+ - The length of time it takes the DUT to recover from an overload
+ condition.
+ - The length of time it takes the DUT to recover the average latency to
+ pre-overload conditions.
+
+ **Deployment scenario**:
+
+ - Physical → virtual switch → physical.
+
+.. 3.2.2.1.5
+
+.. _BackToBackFrames:
+
+Test ID: LTD.Throughput.RFC2544.BackToBackFrames
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+ **Title**: RFC2544 Back To Back Frames Test
+
+    **Prerequisite Test**: N/A
+
+ **Priority**:
+
+ **Description**:
+
+ The aim of this test is to characterize the ability of the DUT to
+ process back-to-back frames. For each frame size previously defined
+ under :ref:`default-test-parameters`, a burst of traffic
+ is sent to the DUT with the minimum inter-frame gap between each frame.
+ If the number of received frames equals the number of frames that were
+ transmitted, the burst size should be increased and traffic is sent to
+ the DUT again. The value measured is the back-to-back value, that is the
+ maximum burst size the DUT can handle without any frame loss. Please note
+ a trial must run for a minimum of 2 seconds and should be repeated 50
+ times (at a minimum).
+
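+    A minimal sketch of this search (``send_burst()`` is a hypothetical
+    helper that transmits a burst at minimum inter-frame gap and returns
+    the number of frames received back from the DUT):
+
+    .. code-block:: python
+
+        def max_back_to_back(send_burst, step, upper_bound):
+            """Largest burst forwarded without loss (linear search)."""
+            best, burst = 0, step
+            while burst <= upper_bound:
+                if send_burst(burst) == burst:  # every frame came back
+                    best = burst
+                    burst += step               # grow the burst and retry
+                else:
+                    break                       # first loss ends the search
+            return best
+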
+ **Expected Result**:
+
+ Tests of back-to-back frames with physical devices have produced
+ unstable results in some cases. All tests should be repeated in multiple
+ test sessions and results stability should be examined.
+
+ **Metrics collected**
+
+ The following are the metrics collected for this test:
+
+ - The average back-to-back value across the trials, which is
+ the number of frames in the longest burst that the DUT will
+ handle without the loss of any frames.
+ - CPU and memory utilization may also be collected as part of this
+ test, to determine the vSwitch's performance footprint on the system.
+
+ **Deployment scenario**:
+
+ - Physical → virtual switch → physical.
+
+.. 3.2.2.1.6
+
+Test ID: LTD.Throughput.RFC2889.MaxForwardingRateSoak
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+ **Title**: RFC 2889 X% packet loss Max Forwarding Rate Soak Test
+
+    **Prerequisite Test**: LTD.Throughput.RFC2544.PacketLossRatio
+
+ **Priority**:
+
+ **Description**:
+
+ The aim of this test is to understand the Max Forwarding Rate stability
+ over an extended test duration in order to uncover any outliers. To allow
+ for an extended test duration, the test should ideally run for 24 hours
+ or, if this is not possible, for at least 6 hours. For this test, each frame
+ size must be sent at the highest Throughput rate with X% packet loss, as
+    determined in the prerequisite test. The default loss percentages to be
+    tested are:
+
+    - X = 0%
+    - X = 10^-7%
+
+ Note: Other values can be tested if required by the user.
+
+ **Expected Result**:
+
+ **Metrics Collected**:
+
+ The following are the metrics collected for this test:
+
+ - Max Forwarding Rate stability of the DUT.
+
+ - This means reporting the number of packets lost per time interval
+ and reporting any time intervals with packet loss. The
+ `RFC2889 <https://www.rfc-editor.org/rfc/rfc2889.txt>`__
+ Forwarding Rate shall be measured in each interval.
+ An interval of 60s is suggested.
+
+ - CPU and memory utilization may also be collected as part of this
+ test, to determine the vSwitch's performance footprint on the system.
+ - The `RFC5481 <https://www.rfc-editor.org/rfc/rfc5481.txt>`__
+ PDV form of delay variation on the traffic flow,
+ using the 99th percentile.
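+
+ As an illustration of the RFC5481 PDV metric listed above, the following
+ minimal sketch shows one common way to compute the 99th percentile PDV
+ for a measurement interval from per-packet one-way delays; a measurement
+ tool may use a different quantile definition:
+
+ .. code-block:: python
+
+    def pdv_99th(one_way_delays):
+        """RFC5481 PDV: each packet's delay minus the minimum delay of the
+        stream; report the 99th percentile of that distribution."""
+        d_min = min(one_way_delays)
+        pdv = sorted(d - d_min for d in one_way_delays)
+        rank = int(round(0.99 * (len(pdv) - 1)))   # nearest-rank style quantile
+        return pdv[rank]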
+
+.. 3.2.2.1.7
+
+Test ID: LTD.Throughput.RFC2889.MaxForwardingRateSoakFrameModification
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+ **Title**: RFC 2889 Max Forwarding Rate Soak Test with Frame Modification
+
+ **Prerequisite Test**:
+ LTD.Throughput.RFC2544.PacketLossRatioFrameModification (0% Packet Loss)
+
+ **Priority**:
+
+ **Description**:
+
+ The aim of this test is to understand the Max Forwarding Rate stability over an
+ extended test duration in order to uncover any outliers. To allow for an
+ extended test duration, the test should ideally run for 24 hours or, if
+ this is not possible, for at least 6 hours. For this test, each frame
+ size must be sent at the highest Throughput rate with 0% packet loss, as
+ determined in the prerequisite test.
+
+ During this test, the DUT must perform the following operations on the
+ traffic flow:
+
+ - Perform packet parsing on the DUT's ingress port.
+ - Perform any relevant address look-ups on the DUT's ingress ports.
+ - Modify the packet header before forwarding the packet to the DUT's
+ egress port. Packet modifications include:
+
+ - Modifying the Ethernet source or destination MAC address.
+ - Modifying/adding a VLAN tag (**Recommended**).
+ - Modifying/adding an MPLS tag.
+ - Modifying the source or destination IP address.
+ - Modifying the TOS/DSCP field.
+ - Modifying the source or destination ports for UDP/TCP/SCTP.
+ - Modifying the TTL.
+
+ **Expected Result**:
+
+ **Metrics Collected**:
+
+ The following are the metrics collected for this test:
+
+ - Max Forwarding Rate stability of the DUT.
+
+ - This means reporting the number of packets lost per time interval
+ and reporting any time intervals with packet loss. The
+ `RFC2889 <https://www.rfc-editor.org/rfc/rfc2889.txt>`__
+ Forwarding Rate shall be measured in each interval.
+ An interval of 60s is suggested.
+
+ - CPU and memory utilization may also be collected as part of this
+ test, to determine the vSwitch's performance footprint on the system.
+ - The `RFC5481 <https://www.rfc-editor.org/rfc/rfc5481.txt>`__
+ PDV form of delay variation on the traffic flow, using the 99th
+ percentile.
+
+.. 3.2.2.1.8
+
+Test ID: LTD.Throughput.RFC6201.ResetTime
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+ **Title**: RFC 6201 Reset Time Test
+
+ **Prerequisite Test**: N/A
+
+ **Priority**:
+
+ **Description**:
+
+ The aim of this test is to determine the length of time it takes the DUT
+ to recover from a reset.
+
+ Two reset methods are defined - planned and unplanned. A planned reset
+ requires stopping and restarting the virtual switch by the usual
+ 'graceful' method defined by its documentation. An unplanned reset
+ requires simulating a fatal internal fault in the virtual switch - for
+ example by using kill -SIGKILL on a Linux environment.
+
+ Both reset methods SHOULD be exercised.
+
+ For each frame size previously defined under :ref:`default-test-parameters`,
+ traffic should be sent to the DUT under
+ normal conditions. During the duration of the test and while the traffic
+ flows are passing through the DUT, the DUT should be reset and the Reset
+ time measured. The Reset time is the total time that a device is
+ determined to be out of operation and includes the time to perform the
+ reset and the time to recover from it (cf. `RFC6201
+ <https://www.rfc-editor.org/rfc/rfc6201.txt>`__).
+
+ `RFC6201 <https://www.rfc-editor.org/rfc/rfc6201.txt>`__ defines two methods
+ to measure the Reset time:
+
+ - Frame-Loss Method: which requires the monitoring of the number of
+ lost frames and calculates the Reset time based on the number of
+ frames lost and the offered rate according to the following
+ formula:
+
+ .. code-block:: console
+
+ Frames_lost (packets)
+ Reset_time = -------------------------------------
+ Offered_rate (packets per second)
+
+ - Timestamp Method: which measures the time from which the last frame
+ is forwarded from the DUT to the time the first frame is forwarded
+ after the reset. This involves time-stamping all transmitted frames
+ and recording the timestamp of the last frame that was received prior
+ to the reset and also measuring the timestamp of the first frame that
+ is received after the reset. The Reset time is the difference between
+ these two timestamps.
+
+ According to `RFC6201 <https://www.rfc-editor.org/rfc/rfc6201.txt>`__ the
+ choice of method depends on the test tool's capability; the Frame-Loss
+ method SHOULD be used if the test tool supports:
+
+ * Counting the number of lost frames per stream.
+ * Transmitting test frames regardless of the physical link status.
+
+ whereas the Timestamp method SHOULD be used if the test tool supports:
+
+ * Timestamping each frame.
+ * Monitoring each received frame's timestamp.
+ * Transmitting frames only if the physical link status is up.
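+
+ To make the two calculation methods concrete, a minimal sketch follows;
+ the example figures are hypothetical and 14,880,952 fps is simply the
+ 64 byte frame line rate of a 10GbE link:
+
+ .. code-block:: python
+
+    def reset_time_frame_loss(frames_lost, offered_rate_pps):
+        """Frame-Loss method: Reset_time = Frames_lost / Offered_rate (s)."""
+        return frames_lost / float(offered_rate_pps)
+
+    def reset_time_timestamp(last_rx_before_reset, first_rx_after_reset):
+        """Timestamp method: difference between the two receive timestamps."""
+        return first_rx_after_reset - last_rx_before_reset
+
+    # e.g. 1,500,000 frames lost at an offered rate of 14,880,952 fps -> ~0.1 s
+    print(reset_time_frame_loss(1500000, 14880952))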
+
+ **Expected Result**:
+
+ **Metrics collected**
+
+ The following are the metrics collected for this test:
+
+ * Average Reset Time over the number of trials performed.
+
+ Results of this test should include the following information:
+
+ * The reset method used.
+ * Throughput in Fps and Mbps.
+ * Average Frame Loss over the number of trials performed.
+ * Average Reset Time in milliseconds over the number of trials performed.
+ * Number of trials performed.
+ * Protocol: IPv4, IPv6, MPLS, etc.
+ * Frame Size in Octets.
+ * Port Media: Ethernet, Gigabit Ethernet (GbE), etc.
+ * Port Speed: 10 Gbps, 40 Gbps etc.
+ * Interface Encapsulation: Ethernet, Ethernet VLAN, etc.
+
+ **Deployment scenario**:
+
+ * Physical → virtual switch → physical.
+
+.. 3.2.2.1.9
+
+Test ID: LTD.Throughput.RFC2889.MaxForwardingRate
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+ **Title**: RFC2889 Forwarding Rate Test
+
+ **Prerequisite Test**: LTD.Throughput.RFC2544.PacketLossRatio
+
+ **Priority**:
+
+ **Description**:
+
+ This test measures the DUT's Max Forwarding Rate when the Offered Load
+ is varied between the throughput and the Maximum Offered Load for fixed
+ length frames at a fixed time interval. The selected frame sizes are
+ those previously defined under :ref:`default-test-parameters`.
+ The throughput is the maximum offered
+ load with 0% frame loss (measured by the prerequisite test), and the
+ Maximum Offered Load (as defined by
+ `RFC2285 <https://www.rfc-editor.org/rfc/rfc2285.txt>`__) is *"the highest
+ number of frames per second that an external source can transmit to a
+ DUT/SUT for forwarding to a specified output interface or interfaces"*.
+
+ Traffic should be sent to the DUT at a particular rate (TX rate)
+ starting with TX rate equal to the throughput rate. The rate of
+ successfully received frames at the destination is counted (in FPS). If the
+ RX rate is equal to the TX rate, the TX rate should be increased by a
+ fixed step size and the RX rate measured again until the Max Forwarding
+ Rate is found.
+
+ The trial duration for each iteration should last for the period of time
+ needed for the system to reach steady state for the frame size being
+ tested. Under the `RFC2889 <https://www.rfc-editor.org/rfc/rfc2889.txt>`__
+ (Sec. 5.6.3.1) test methodology, the test
+ duration should run for a minimum period of 30 seconds, regardless of
+ whether the system reaches steady state before the minimum duration
+ ends.
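+
+ A minimal sketch of the rate step-up procedure described above;
+ ``run_trial`` and the step size are placeholders for the tester's own
+ tooling and configuration:
+
+ .. code-block:: python
+
+    def max_forwarding_rate(run_trial, throughput_fps, max_offered_fps, step_fps):
+        """Step the TX rate up from the Throughput towards the Maximum Offered
+        Load; run_trial(tx_fps) must return the measured RX rate in fps."""
+        best_rx = 0.0
+        tx = throughput_fps
+        while tx <= max_offered_fps:
+            rx = run_trial(tx)
+            best_rx = max(best_rx, rx)
+            if rx < tx:      # the DUT can no longer keep up with the offered load
+                break
+            tx += step_fps
+        return best_rx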
+
+ **Expected Result**: According to
+ `RFC2889 <https://www.rfc-editor.org/rfc/rfc2889.txt>`__, the Max Forwarding
+ Rate is the highest forwarding rate of a DUT taken from an iterative set of
+ forwarding rate measurements. The iterative set of forwarding rate measurements
+ is made by setting the intended load transmitted from an external source and
+ measuring the offered load (i.e. what the DUT is capable of forwarding). If the
+ Throughput == the Maximum Offered Load, it follows that Max Forwarding Rate is
+ equal to the Maximum Offered Load.
+
+ **Metrics Collected**:
+
+ The following are the metrics collected for this test:
+
+ - The Max Forwarding Rate for the DUT for each packet size.
+ - CPU and memory utilization may also be collected as part of this
+ test, to determine the vSwitch's performance footprint on the system.
+
+ **Deployment scenario**:
+
+ - Physical → virtual switch → physical. Note: Full mesh tests with
+ multiple ingress and egress ports are a key aspect of RFC 2889
+ benchmarks, and scenarios with both 2 and 4 ports should be tested.
+ In any case, the number of ports used must be reported.
+
+.. 3.2.2.1.10
+
+Test ID: LTD.Throughput.RFC2889.ForwardPressure
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+ **Title**: RFC2889 Forward Pressure Test
+
+ **Prerequisite Test**: LTD.Throughput.RFC2889.MaxForwardingRate
+
+ **Priority**:
+
+ **Description**:
+
+ The aim of this test is to determine if the DUT transmits frames with an
+ inter-frame gap that is less than 12 bytes. This test overloads the DUT
+ and measures the output for forward pressure. Traffic should be
+ transmitted to the DUT with an inter-frame gap of 11 bytes; this will
+ overload the DUT by 1 byte per frame. The forwarding rate of the DUT
+ should be measured.
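+
+ As a rough illustration of the overload this creates, the theoretical
+ frame rates for the standard 12 byte gap and the reduced 11 byte gap can
+ be compared; a 10GbE link and 64 byte frames are assumed here, and other
+ link speeds and frame sizes scale accordingly:
+
+ .. code-block:: python
+
+    LINK_BPS = 10e9      # assumed 10GbE link
+    PREAMBLE_SFD = 8     # bytes on the wire ahead of every frame
+    FRAME = 64           # smallest Ethernet frame size in bytes
+
+    def max_fps(ifg_bytes):
+        """Theoretical maximum frame rate for a given inter-frame gap."""
+        return LINK_BPS / ((PREAMBLE_SFD + FRAME + ifg_bytes) * 8)
+
+    print(max_fps(12))   # ~14,880,952 fps with the standard minimum gap
+    print(max_fps(11))   # ~15,060,241 fps with the overloaded input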
+
+ **Expected Result**: The forwarding rate should not exceed the maximum
+ forwarding rate of the DUT collected by
+ LTD.Throughput.RFC2889.MaxForwardingRate.
+
+ **Metrics collected**
+
+ The following are the metrics collected for this test:
+
+ - Forwarding rate of the DUT in FPS or Mbps.
+ - CPU and memory utilization may also be collected as part of this
+ test, to determine the vSwitch's performance footprint on the system.
+
+ **Deployment scenario**:
+
+ - Physical → virtual switch → physical.
+
+.. 3.2.2.1.11
+
+Test ID: LTD.Throughput.RFC2889.ErrorFramesFiltering
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+ **Title**: RFC2889 Error Frames Filtering Test
+
+ **Prerequisite Test**: N/A
+
+ **Priority**:
+
+ **Description**:
+
+ The aim of this test is to determine whether the DUT will propagate any
+ erroneous frames it receives or whether it is capable of filtering out
+ the erroneous frames. Traffic should be sent with erroneous frames
+ included within the flow at random intervals. Illegal frames that must
+ be tested include:
+
+ - Oversize Frames.
+ - Undersize Frames.
+ - CRC Errored Frames.
+ - Dribble Bit Errored Frames.
+ - Alignment Errored Frames.
+
+ The traffic flow exiting the DUT should be recorded and checked to
+ determine if the erroneous frames were passed through the DUT.
+
+ **Expected Result**: Broken frames are not passed!
+
+ **Metrics collected**
+
+ No metrics are collected in this test; instead it determines:
+
+ - Whether the DUT will propagate erroneous frames.
+ - Or whether the DUT will correctly filter out any erroneous frames
+   from the traffic flow without removing correct frames.
+
+ **Deployment scenario**:
+
+ - Physical → virtual switch → physical.
+
+.. 3.2.2.1.12
+
+Test ID: LTD.Throughput.RFC2889.BroadcastFrameForwarding
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+ **Title**: RFC2889 Broadcast Frame Forwarding Test
+
+ **Prerequisite Test**: N
+
+ **Priority**:
+
+ **Description**:
+
+ The aim of this test is to determine the maximum forwarding rate of the
+ DUT when forwarding broadcast traffic. For each frame size previously defined
+ under :ref:`default-test-parameters`, the traffic should
+ be set up as broadcast traffic. The traffic throughput of the DUT should
+ be measured.
+
+ The test should be conducted with at least 4 physical ports on the DUT.
+ The number of ports used MUST be recorded.
+
+ As broadcast involves forwarding a single incoming packet to several
+ destinations, the latency of a single packet is defined as the average
+ of the latencies for each of the broadcast destinations.
+
+ The incoming packet is transmitted on each of the other physical ports;
+ it is not transmitted on the port on which it was received. The test MAY
+ be conducted using different broadcasting ports to uncover any
+ performance differences.
+
+ **Expected Result**:
+
+ **Metrics collected**:
+
+ The following are the metrics collected for this test:
+
+ - The forwarding rate of the DUT when forwarding broadcast traffic.
+ - The minimum, average & maximum packets latencies observed.
+
+ **Deployment scenario**:
+
+ - Physical → virtual switch → 3x physical. In the Broadcast rate testing,
+   four test ports are required. One of the ports is connected to the test
+   device, so it can send broadcast frames and listen for mis-routed frames.
+
+.. 3.2.2.1.13
+
+Test ID: LTD.Throughput.RFC2544.WorstN-BestN
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+ **Title**: Modified RFC 2544 X% packet loss ratio Throughput and Latency Test
+
+ **Prerequisite Test**: N/A
+
+ **Priority**:
+
+ **Description**:
+
+ This test determines the DUT's maximum forwarding rate with X% traffic
+ loss for a constant load (fixed length frames at a fixed interval time).
+ The default loss percentages to be tested are: X = 0%, X = 10^-7%
+
+ Modified RFC 2544 throughput benchmarking methodology aims to quantify
+ the throughput measurement variations observed during standard RFC 2544
+ benchmarking measurements of virtual switches and VNFs. The RFC2544
+ binary search algorithm is modified to use more samples per test trial
+ to drive the binary search and yield statistically more meaningful
+ results. This keeps the heart of the RFC2544 methodology, still relying
+ on the binary search of throughput at specified loss tolerance, while
+ providing more useful information about the range of results seen in
+ testing. Instead of using a single traffic trial per iteration step,
+ each traffic trial is repeated N times and the success/failure of the
+ iteration step is based on these N traffic trials. Two types of revised
+ tests are defined - *Worst-of-N* and *Best-of-N*.
+
+ **Worst-of-N**
+
+ *Worst-of-N* indicates the lowest expected maximum throughput for
+ (packet size, loss tolerance) when repeating the test.
+
+ 1. Repeat the same test run N times at a set packet rate, record each
+ result.
+ 2. Take the WORST result (highest packet loss) out of N result samples,
+ called the Worst-of-N sample.
+ 3. If Worst-of-N sample has loss less than the set loss tolerance, then
+ the step is successful - increase the test traffic rate.
+ 4. If Worst-of-N sample has loss greater than the set loss tolerance
+ then the step failed - decrease the test traffic rate.
+ 5. Go to step 1.
+
+ **Best-of-N**
+
+ *Best-of-N* indicates the highest expected maximum throughput for
+ (packet size, loss tolerance) when repeating the test.
+
+ 1. Repeat the same traffic run N times at a set packet rate, record
+ each result.
+ 2. Take the BEST result (least packet loss) out of N result samples,
+ called the Best-of-N sample.
+ 3. If Best-of-N sample has loss less than the set loss tolerance, then
+ the step is successful - increase the test traffic rate.
+ 4. If Best-of-N sample has loss greater than the set loss tolerance,
+ then the step failed - decrease the test traffic rate.
+ 5. Go to step 1.
+
+ Performing both Worst-of-N and Best-of-N benchmark tests yields lower
+ and upper bounds of expected maximum throughput under the operating
+ conditions, giving a very good indication to the user of the
+ deterministic performance range for the tested setup.
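+
+ A minimal sketch of the modified binary search described above;
+ ``run_trial`` is a placeholder for the traffic generator interface and
+ rates are expressed as a percentage of line rate:
+
+ .. code-block:: python
+
+    def step_passes(loss_ratios, loss_tolerance, mode="worst"):
+        """Judge one search step from N trial loss ratios: Worst-of-N uses the
+        highest loss observed, Best-of-N the lowest."""
+        sample = max(loss_ratios) if mode == "worst" else min(loss_ratios)
+        return sample <= loss_tolerance
+
+    def modified_search(run_trial, n_trials, loss_tolerance, mode="worst",
+                        low=0.0, high=100.0, resolution=0.1):
+        """Binary search on offered load; run_trial(rate) returns a loss ratio."""
+        passed = low
+        while high - low > resolution:
+            rate = (low + high) / 2.0
+            losses = [run_trial(rate) for _ in range(n_trials)]
+            if step_passes(losses, loss_tolerance, mode):
+                passed, low = rate, rate    # step succeeded: raise the rate
+            else:
+                high = rate                 # step failed: lower the rate
+        return passed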
+
+ **Expected Result**: At the end of each trial series, the presence or
+ absence of loss determines the modification of offered load for the
+ next trial series, converging on a maximum rate, or
+ `RFC2544 <https://www.rfc-editor.org/rfc/rfc2544.txt>`__ Throughput
+ with X% loss.
+ The Throughput load is re-used in related
+ `RFC2544 <https://www.rfc-editor.org/rfc/rfc2544.txt>`__ tests and other
+ tests.
+
+ **Metrics Collected**:
+
+ The following are the metrics collected for this test:
+
+ - The maximum forwarding rate in Frames Per Second (FPS) and Mbps of
+ the DUT for each frame size with X% packet loss.
+ - The average latency of the traffic flow when passing through the DUT
+ (if testing for latency, note that this average is different from the
+ test specified in Section 26.3 of
+ `RFC2544 <https://www.rfc-editor.org/rfc/rfc2544.txt>`__).
+ - The following may also be collected as part of this test, to determine
+ the vSwitch's performance footprint on the system:
+
+ - CPU core utilization.
+ - CPU cache utilization.
+ - Memory footprint.
+ - System bus (QPI, PCI, ...) utilization.
+ - CPU cycles consumed per packet.
+
+.. 3.2.2.1.14
+
+Test ID: LTD.Throughput.Overlay.Network.<tech>.RFC2544.PacketLossRatio
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+ **Title**: <tech> Overlay Network RFC 2544 X% packet loss ratio Throughput and Latency Test
+
+
+ NOTE: Throughout this test, four interchangeable overlay technologies are covered by the
+ same test description. They are: VXLAN, GRE, NVGRE and GENEVE.
+
+ **Prerequisite Test**: N/A
+
+ **Priority**:
+
+ **Description**:
+ This test evaluates standard switch performance benchmarks for the scenario where an
+ Overlay Network is deployed for all paths through the vSwitch. Overlay Technologies covered
+ (replacing <tech> in the test name) include:
+
+ - VXLAN
+ - GRE
+ - NVGRE
+ - GENEVE
+
+ Performance will be assessed for each of the following overlay network functions:
+
+ - Encapsulation only
+ - De-encapsulation only
+ - Both Encapsulation and De-encapsulation
+
+ For each native packet, the DUT must perform the following operations:
+
+ - Examine the packet and classify its correct overlay net (tunnel) assignment
+ - Encapsulate the packet
+ - Switch the packet to the correct port
+
+ For each encapsulated packet, the DUT must perform the following operations:
+
+ - Examine the packet and classify its correct native network assignment
+ - De-encapsulate the packet, if required
+ - Switch the packet to the correct port
+
+ The selected frame sizes are those previously defined under
+ :ref:`default-test-parameters`.
+
+ Thus, each test comprises an overlay technology, a network function,
+ and a packet size *with* overlay network overhead included
+ (but see also the discussion at
+ https://etherpad.opnfv.org/p/vSwitchTestsDrafts ).
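+
+ For orientation only, typical *minimum* per-frame overheads added by these
+ technologies (assuming IPv4 outer headers, no outer VLAN tag and no
+ optional header fields; actual figures vary with the options used) are
+ sketched below:
+
+ .. code-block:: python
+
+    OUTER_ETH, OUTER_IPV4, OUTER_UDP = 14, 20, 8
+
+    MIN_OVERHEAD_BYTES = {
+        "VXLAN":  OUTER_ETH + OUTER_IPV4 + OUTER_UDP + 8,  # 8 byte VXLAN header -> 50
+        "GRE":    OUTER_ETH + OUTER_IPV4 + 4,              # base GRE header     -> 38
+        "NVGRE":  OUTER_ETH + OUTER_IPV4 + 8,              # GRE with key field  -> 42
+        "GENEVE": OUTER_ETH + OUTER_IPV4 + OUTER_UDP + 8,  # base Geneve header  -> 50
+    }
+
+    def wire_frame_size(native_frame_bytes, tech):
+        """Encapsulated frame size for a given native frame and overlay."""
+        return native_frame_bytes + MIN_OVERHEAD_BYTES[tech]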
+
+ The test can also be used to determine the average latency of the traffic.
+
+ Under the `RFC2544 <https://www.rfc-editor.org/rfc/rfc2544.txt>`__
+ test methodology, the test duration will
+ include a number of trials; each trial should run for a minimum period
+ of 60 seconds. A binary search methodology must be applied for each
+ trial to obtain the final result for Throughput.
+
+ **Expected Result**: At the end of each trial, the presence or absence
+ of loss determines the modification of offered load for the next trial,
+ converging on a maximum rate, or
+ `RFC2544 <https://www.rfc-editor.org/rfc/rfc2544.txt>`__ Throughput with X%
+ loss (where the value of X is typically equal to zero).
+ The Throughput load is re-used in related
+ `RFC2544 <https://www.rfc-editor.org/rfc/rfc2544.txt>`__ tests and other
+ tests.
+
+ **Metrics Collected**:
+ The following are the metrics collected for this test:
+
+ - The maximum Throughput in Frames Per Second (FPS) and Mbps of
+ the DUT for each frame size with X% packet loss.
+ - The average latency of the traffic flow when passing through the DUT
+ and VNFs (if testing for latency, note that this average is different from the
+ test specified in Section 26.3 of
+ `RFC2544 <https://www.rfc-editor.org/rfc/rfc2544.txt>`__).
+ - CPU and memory utilization may also be collected as part of this
+ test, to determine the vSwitch's performance footprint on the system.
+
+.. 3.2.3.1.15
+
+Test ID: LTD.Throughput.RFC2544.MatchAction.PacketLossRatio
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+ **Title**: RFC 2544 X% packet loss ratio match action Throughput and Latency Test
+
+ **Prerequisite Test**: LTD.Throughput.RFC2544.PacketLossRatio
+
+ **Priority**:
+
+ **Description**:
+
+ The aim of this test is to determine the cost of carrying out match
+ action(s) on the DUT’s RFC2544 Throughput with X% traffic loss for
+ a constant load (fixed length frames at a fixed interval time).
+
+ Each test case requires:
+
+ * selection of a specific match action(s),
+ * specifying a percentage of total traffic that is eligible
+ for the match action,
+ * determination of the specific test configuration (number
+ of flows, number of test ports, presence of an external
+ controller, etc.), and
+ * measurement of the RFC 2544 Throughput level with X% packet
+ loss: Traffic shall be bi-directional and symmetric.
+
+ Note: It would be ideal to verify that all match action-eligible
+ traffic was forwarded to the correct port, and if forwarded to
+ an unintended port it should be considered lost.
+
+ A match action is an action that is typically carried out on a frame
+ or packet that matches a set of flow classification parameters
+ (typically frame/packet header fields). A match action may or may
+ not modify a packet/frame. Match actions include [1]:
+
+ * output: Outputs a packet to a particular port.
+ * normal: Subjects the packet to traditional L2/L3 processing
+ (MAC learning).
+ * flood: Outputs the packet on all switch physical ports
+ other than the port on which it was received and any ports
+ on which flooding is disabled.
+ * all: Outputs the packet on all switch physical ports other
+ than the port on which it was received.
+ * local: Outputs the packet on the ``local port``, which
+ corresponds to the network device that has the same name as
+ the bridge.
+ * in_port: Outputs the packet on the port from which it was
+ received.
+ * Controller: Sends the packet and its metadata to the
+ OpenFlow controller as a ``packet in`` message.
+ * enqueue: Enqueues the packet on the specified queue
+ within port.
+ * drop: Discards the packet.
+
+ Modifications include [1]:
+
+ * mod vlan: covered by LTD.Throughput.RFC2544.PacketLossRatioFrameModification
+ * mod_dl_src: Sets the source Ethernet address.
+ * mod_dl_dst: Sets the destination Ethernet address.
+ * mod_nw_src: Sets the IPv4 source address.
+ * mod_nw_dst: Sets the IPv4 destination address.
+ * mod_tp_src: Sets the TCP or UDP or SCTP source port.
+ * mod_tp_dst: Sets the TCP or UDP or SCTP destination port.
+ * mod_nw_tos: Sets the DSCP bits in the IPv4 ToS/DSCP or
+ IPv6 traffic class field.
+ * mod_nw_ecn: Sets the ECN bits in the appropriate IPv4 or
+ IPv6 field.
+ * mod_nw_ttl: Sets the IPv4 TTL or IPv6 hop limit field.
+
+ Note: This comprehensive list requires extensive traffic generator
+ capabilities.
+
+ The match action(s) that were applied as part of the test should be
+ reported in the final test report.
+
+ During this test, the DUT must perform the following operations on
+ the traffic flow:
+
+ * Perform packet parsing on the DUT’s ingress port.
+ * Perform any relevant address look-ups on the DUT’s ingress
+ ports.
+ * Carry out one or more of the match actions specified above.
+
+ The default loss percentages to be tested are X = 0% and X = 10^-7%.
+ Other values can be tested if required by the user. The selected
+ frame sizes are those previously defined under
+ :ref:`default-test-parameters`.
+
+ The test can also be used to determine the average latency of the
+ traffic when a match action is applied to packets in a flow. Under
+ the RFC2544 test methodology, the test duration will include a
+ number of trials; each trial should run for a minimum period of 60
+ seconds. A binary search methodology must be applied for each
+ trial to obtain the final result.
+
+ **Expected Result:**
+
+ At the end of each trial, the presence or absence of loss
+ determines the modification of offered load for the next trial,
+ converging on a maximum rate, or RFC2544 Throughput with X% loss.
+ The Throughput load is re-used in related RFC2544 tests and other
+ tests.
+
+ **Metrics Collected:**
+
+ The following are the metrics collected for this test:
+
+ * The RFC 2544 Throughput in Frames Per Second (FPS) and Mbps
+ of the DUT for each frame size with X% packet loss.
+ * The average latency of the traffic flow when passing through
+ the DUT (if testing for latency, note that this average is
+ different from the test specified in Section 26.3 of RFC2544).
+ * CPU and memory utilization may also be collected as part of
+ this test, to determine the vSwitch’s performance footprint
+ on the system.
+
+ The metrics collected can be compared to those of the prerequisite
+ test to determine the cost of the match action(s) in the pipeline.
+
+ **Deployment scenario**:
+
+ - Physical → virtual switch → physical (and others are possible)
+
+ [1] ovs-ofctl - administer OpenFlow switches
+ [http://openvswitch.org/support/dist-docs/ovs-ofctl.8.txt ]
+
+
+.. 3.2.2.2
+
+Packet Latency tests
+--------------------
+
+These tests will measure the store and forward latency as well as the packet
+delay variation for various packet types through the virtual switch. The
+following list is not exhaustive but should indicate the type of tests
+that should be required. It is expected that more will be added.
+
+.. 3.2.2.2.1
+
+Test ID: LTD.PacketLatency.InitialPacketProcessingLatency
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+ **Title**: Initial Packet Processing Latency
+
+ **Prerequisite Test**: N/A
+
+ **Priority**:
+
+ **Description**:
+
+ In some virtual switch architectures, the first packets of a flow will
+ take the system longer to process than subsequent packets in the flow.
+ This test determines the latency for these packets. The test will
+ measure the latency of the packets as they are processed by the
+ flow-setup-path of the DUT. There are two methods for this test: a
+ recommended method and an alternative method that can be used if it is
+ possible to disable the fastpath of the virtual switch.
+
+ Recommended method: This test will send 64,000 packets to the DUT, each
+ belonging to a different flow. Average packet latency will be determined
+ over the 64,000 packets.
+
+ Alternative method: This test will send a single packet to the DUT after
+ a fixed interval of time. The time interval will be equivalent to the
+ amount of time it takes for a flow to time out in the virtual switch
+ plus 10%. Average packet latency will be determined over 1,000,000
+ packets.
+
+ This test is intended only for non-learning virtual switches; for learning
+ virtual switches, use RFC2889.
+
+ For this test, only unidirectional traffic is required.
+
+ **Expected Result**: The average latency for the initial packet of all
+ flows should be greater than the latency of subsequent traffic.
+
+ **Metrics Collected**:
+
+ The following are the metrics collected for this test:
+
+ - Average latency of the initial packets of all flows that are
+ processed by the DUT.
+
+ **Deployment scenario**:
+
+ - Physical → Virtual Switch → Physical.
+
+.. 3.2.2.2.2
+
+Test ID: LTD.PacketDelayVariation.RFC3393.Soak
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+ **Title**: Packet Delay Variation Soak Test
+
+ **Prerequisite Tests**: LTD.Throughput.RFC2544.PacketLossRatio (0% Packet Loss)
+
+ **Priority**:
+
+ **Description**:
+
+ The aim of this test is to understand the distribution of packet delay
+ variation for different frame sizes over an extended test duration and
+ to determine if there are any outliers. To allow for an extended test
+ duration, the test should ideally run for 24 hours or, if this is not
+ possible, for at least 6 hours. For this test, each frame size must be
+ sent at the highest possible throughput with 0% packet loss, as
+ determined in the prerequisite test.
+
+ **Expected Result**:
+
+ **Metrics Collected**:
+
+ The following are the metrics collected for this test:
+
+ - The packet delay variation value for traffic passing through the DUT.
+ - The `RFC5481 <https://www.rfc-editor.org/rfc/rfc5481.txt>`__
+ PDV form of delay variation on the traffic flow,
+ using the 99th percentile, for each 60s interval during the test.
+ - CPU and memory utilization may also be collected as part of this
+ test, to determine the vSwitch's performance footprint on the system.
+
+.. 3.2.2.3
+
+Scalability tests
+-----------------
+
+The general aim of these tests is to understand the impact of large flow
+table size and flow lookups on throughput. The following list is not
+exhaustive but should indicate the type of tests that should be required.
+It is expected that more will be added.
+
+.. 3.2.2.3.1
+
+.. _Scalability0PacketLoss:
+
+Test ID: LTD.Scalability.Flows.RFC2544.0PacketLoss
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+ **Title**: RFC 2544 0% loss Flow Scalability throughput test
+
+ **Prerequisite Test**: LTD.Throughput.RFC2544.PacketLossRatio, IF the
+ delta Throughput between the single-flow RFC2544 test and this test with
+ a variable number of flows is desired.
+
+ **Priority**:
+
+ **Description**:
+
+ The aim of this test is to measure how throughput changes as the number
+ of flows in the DUT increases. The test will measure the throughput
+ through the fastpath, as such the flows need to be installed on the DUT
+ before passing traffic.
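+
+ One simple way to produce a configurable number of distinct flows is to
+ vary a single header field per flow, for example the destination IP
+ address; a minimal sketch follows (the base address is purely
+ illustrative):
+
+ .. code-block:: python
+
+    import ipaddress
+
+    def flow_destinations(n_flows, base="10.0.0.1"):
+        """Return n_flows distinct destination IPs, one per flow, which can be
+        used both to pre-install the flows and to generate matching traffic."""
+        start = int(ipaddress.IPv4Address(base))
+        return [str(ipaddress.IPv4Address(start + i)) for i in range(n_flows)]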
+
+ For each frame size previously defined under :ref:`default-test-parameters`
+ and for each of the following number of flows:
+
+ - 1,000
+ - 2,000
+ - 4,000
+ - 8,000
+ - 16,000
+ - 32,000
+ - 64,000
+ - Max supported number of flows.
+
+ This test will be conducted under two conditions following the
+ establishment of all flows as required by RFC 2544, regarding the flow
+ expiration time-out:
+
+ 1) The time-out never expires during each trial.
+
+ 2) The time-out expires for all flows periodically. This would require a
+ short time-out compared with flow re-appearance for a small number of
+ flows, and may not be possible for all flow conditions.
+
+ The maximum 0% packet loss Throughput should be determined in a manner
+ identical to LTD.Throughput.RFC2544.PacketLossRatio.
+
+ **Expected Result**:
+
+ **Metrics Collected**:
+
+ The following are the metrics collected for this test:
+
+ - The maximum number of frames per second that can be forwarded at the
+ specified number of flows and the specified frame size, with zero
+ packet loss.
+
+.. 3.2.2.3.2
+
+Test ID: LTD.MemoryBandwidth.RFC2544.0PacketLoss.Scalability
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+ **Title**: RFC 2544 0% loss Memory Bandwidth Scalability test
+
+ **Prerequisite Tests**: LTD.Throughput.RFC2544.PacketLossRatio, IF the
+ delta Throughput between an undisturbed RFC2544 test and this test with
+ the Throughput affected by cache and memory bandwidth contention is desired.
+
+ **Priority**:
+
+ **Description**:
+
+ The aim of this test is to understand how the DUT's performance is
+ affected by cache sharing and memory bandwidth between processes.
+
+ During the test all cores not used by the vSwitch should be running a
+ memory intensive application. This application should read and write
+ random data to random addresses in unused physical memory. The random
+ nature of the data and addresses is intended to consume cache, exercise
+ main memory access (as opposed to cache) and exercise all memory buses
+ equally. Furthermore:
+
+ - the ratio of reads to writes should be recorded. A ratio of 1:1
+ SHOULD be used.
+ - the reads and writes MUST be of cache-line size and be cache-line aligned.
+ - in NUMA architectures memory access SHOULD be local to the core's node.
+ Whether only local memory or a mix of local and remote memory is used
+ MUST be recorded.
+ - the memory bandwidth (reads plus writes) used per-core MUST be recorded;
+ the test MUST be run with a per-core memory bandwidth equal to half the
+ maximum system memory bandwidth divided by the number of cores. The test
+ MAY be run with other values for the per-core memory bandwidth.
+ - the test MAY also be run with the memory intensive application running
+ on all cores.
+
+ Under these conditions the DUT's 0% packet loss throughput is determined
+ as per LTD.Throughput.RFC2544.PacketLossRatio.
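+
+ A minimal sketch of the mandated per-core memory bandwidth target
+ described above; the platform figures in the example are hypothetical:
+
+ .. code-block:: python
+
+    def per_core_bw_target(max_system_bw, n_cores):
+        """Per-core read+write bandwidth for the stress application: half the
+        maximum system memory bandwidth divided by the number of cores."""
+        return (max_system_bw / 2.0) / n_cores
+
+    # e.g. 100 GB/s peak system bandwidth spread over 10 cores -> 5 GB/s per core
+    print(per_core_bw_target(100.0, 10))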
+
+ **Expected Result**:
+
+ **Metrics Collected**:
+
+ The following are the metrics collected for this test:
+
+ - The DUT's 0% packet loss throughput in the presence of cache sharing and
+ memory bandwidth between processes.
+
+.. 3.2.2.3.3
+
+Test ID: LTD.Scalability.VNF.RFC2544.PacketLossRatio
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+ **Title**: VNF Scalability RFC 2544 X% packet loss ratio Throughput and
+ Latency Test
+
+ **Prerequisite Test**: N/A
+
+ **Priority**:
+
+ **Description**:
+
+ This test determines the DUT's throughput rate with X% traffic loss for
+ a constant load (fixed length frames at a fixed interval time) when the
+ number of VNFs on the DUT increases. The default loss percentages
+ to be tested are X = 0% and X = 10^-7%. The minimum number of
+ VNFs to be tested is 3.
+
+ Flow classification should be conducted with L2, L3 and L4 matching
+ to understand the matching and scaling capability of the vSwitch. The
+ matching fields which were used as part of the test should be reported
+ as part of the benchmark report.
+
+ The vSwitch is responsible for forwarding frames between the VNFs.
+
+ The SUT (vSwitch and VNF daisy chain) operation should be validated
+ before running the test. This may be completed by running a burst or
+ continuous stream of traffic through the SUT to ensure proper operation
+ before a test.
+
+ **Note**: The traffic rate used to validate SUT operation should be low
+ enough not to stress the SUT.
+
+ **Note**: Other values can be tested if required by the user.
+
+ **Note**: The same VNF should be used in the "daisy chain" formation.
+ Each addition of a VNF should be conducted in a new test setup (The DUT
+ is brought down, then the DUT is brought up again). An alternative approach
+ would be to continue to add VNFs without bringing down the DUT. The
+ approach used needs to be documented as part of the test report.
+
+ The selected frame sizes are those previously defined under
+ :ref:`default-test-parameters`.
+ The test can also be used to determine the average latency of the traffic.
+
+ Under the `RFC2544 <https://www.rfc-editor.org/rfc/rfc2544.txt>`__
+ test methodology, the test duration will
+ include a number of trials; each trial should run for a minimum period
+ of 60 seconds. A binary search methodology must be applied for each
+ trial to obtain the final result for Throughput.
+
+ **Expected Result**: At the end of each trial, the presence or absence
+ of loss determines the modification of offered load for the next trial,
+ converging on a maximum rate, or
+ `RFC2544 <https://www.rfc-editor.org/rfc/rfc2544.txt>`__ Throughput with X%
+ loss.
+ The Throughput load is re-used in related
+ `RFC2544 <https://www.rfc-editor.org/rfc/rfc2544.txt>`__ tests and other
+ tests.
+
+ If the test VNFs are rather light-weight in terms of processing, the test
+ provides a view of multiple passes through the vswitch on logical
+ interfaces. In other words, the test produces an optimistic count of
+ daisy-chained VNFs, but the cumulative effect of traffic on the vSwitch is
+ "real" (assuming that the vSwitch has some dedicated resources, and the
+ effects on shared resources are understood).
+
+
+ **Metrics Collected**:
+ The following are the metrics collected for this test:
+
+ - The maximum Throughput in Frames Per Second (FPS) and Mbps of
+ the DUT for each frame size with X% packet loss.
+ - The average latency of the traffic flow when passing through the DUT
+ and VNFs (if testing for latency, note that this average is different from the
+ test specified in Section 26.3 of
+ `RFC2544 <https://www.rfc-editor.org/rfc/rfc2544.txt>`__).
+ - CPU and memory utilization may also be collected as part of this
+ test, to determine the vSwitch's performance footprint on the system.
+
+.. 3.2.2.3.4
+
+Test ID: LTD.Scalability.VNF.RFC2544.PacketLossProfile
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+ **Title**: VNF Scalability RFC 2544 Throughput and Latency Profile
+
+ **Prerequisite Test**: N/A
+
+ **Priority**:
+
+ **Description**:
+
+ This test reveals how throughput and latency degrade as the number
+ of VNFs increases and offered rate varies in the region of the DUT's
+ maximum forwarding rate as determined by
+ LTD.Throughput.RFC2544.PacketLossRatio (0% Packet Loss).
+ For example it can be used to determine if the degradation of throughput
+ and latency as the number of VNFs and offered rate increases is slow
+ and graceful, or sudden and severe. The minimum number of VNFs to
+ be tested is 3.
+
+ The selected frame sizes are those previously defined under
+ :ref:`default-test-parameters`.
+
+ The offered traffic rate is described as a percentage delta with respect
+ to the DUT's RFC 2544 Throughput as determined by
+ LTD.Throughput.RFC2544.PacketLossRatio (0% Packet Loss case). A delta
+ of 0% is equivalent to an offered traffic rate equal to the RFC 2544
+ Throughput; a delta of +50% indicates an offered rate half-way
+ between the Throughput and line-rate, whereas a delta of
+ -50% indicates an offered rate of half the Throughput. Therefore the
+ range of the delta figure is naturally bounded at -100% (zero offered
+ traffic) and +100% (traffic offered at line rate).
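+
+ One consistent reading of this mapping is sketched below (rates in fps;
+ the function is illustrative rather than part of the specification):
+
+ .. code-block:: python
+
+    def offered_rate(delta_pct, throughput_fps, line_rate_fps):
+        """Map a delta (%) to an offered load: positive deltas move
+        proportionally from the Throughput towards line rate, negative deltas
+        scale the Throughput down towards zero offered traffic."""
+        if delta_pct >= 0:
+            return throughput_fps + (line_rate_fps - throughput_fps) * delta_pct / 100.0
+        return throughput_fps * (1.0 + delta_pct / 100.0)
+
+    # delta of 0% -> Throughput; +100% -> line rate; -100% -> no traffic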
+
+ The following deltas to the maximum forwarding rate should be applied:
+
+ - -50%, -10%, 0%, +10% & +50%
+
+ **Note**: Other values can be tested if required by the user.
+
+ **Note**: The same VNF should be used in the "daisy chain" formation.
+ Each addition of a VNF should be conducted in a new test setup (The DUT
+ is brought down, then the DUT is brought up again). An alternative approach
+ would be to continue to add VNFs without bringing down the DUT. The
+ approach used needs to be documented as part of the test report.
+
+ Flow classification should be conducted with L2, L3 and L4 matching
+ to understand the matching and scaling capability of the vSwitch. The
+ matching fields which were used as part of the test should be reported
+ as part of the benchmark report.
+
+ The SUT (vSwitch and VNF daisy chain) operation should be validated
+ before running the test. This may be completed by running a burst or
+ continuous stream of traffic through the SUT to ensure proper operation
+ before a test.
+
+ **Note**: The traffic rate used to validate SUT operation should be low
+ enough not to stress the SUT.
+
+ **Expected Result**: For each packet size a profile should be produced
+ of how throughput and latency vary with offered rate.
+
+ **Metrics Collected**:
+
+ The following are the metrics collected for this test:
+
+ - The forwarding rate in Frames Per Second (FPS) and Mbps of the DUT
+ for each delta to the maximum forwarding rate and for each frame
+ size.
+ - The average latency for each delta to the maximum forwarding rate and
+ for each frame size.
+ - CPU and memory utilization may also be collected as part of this
+ test, to determine the vSwitch's performance footprint on the system.
+ - Any failures experienced (for example if the vSwitch crashes, stops
+ processing packets, restarts or becomes unresponsive to commands)
+ when the offered load is above Maximum Throughput MUST be recorded
+ and reported with the results.
+
+.. 3.2.2.4
+
+Activation tests
+----------------
+
+The general aim of these tests is to understand the capacity of the vswitch
+and the speed with which it can accommodate new flows.
+
+.. 3.2.2.4.1
+
+Test ID: LTD.Activation.RFC2889.AddressCachingCapacity
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+ **Title**: RFC2889 Address Caching Capacity Test
+
+ **Prerequisite Test**: N/A
+
+ **Priority**:
+
+ **Description**:
+
+ Please note this test is only applicable to virtual switches that are capable of
+ MAC learning. The aim of this test is to determine the address caching
+ capacity of the DUT for a constant load (fixed length frames at a fixed
+ interval time). The selected frame sizes are those previously defined
+ under :ref:`default-test-parameters`.
+
+ In order to run this test, the aging time (that is, the maximum time the
+ DUT will keep a learned address in its flow table) and a set of initial
+ addresses, whose number should be >= 1 and <= the maximum number supported
+ by the implementation, must be known. Please note that if the aging time is
+ configurable, it must be longer than the time necessary to produce frames
+ from the external source at the specified rate. If the aging time is
+ fixed, the frame rate must be brought down to a value that the external
+ source can produce in a time that is less than the aging time.
+
+ Learning Frames should be sent from an external source to the DUT to
+ install a number of flows. The Learning Frames must have a fixed
+ destination address and must vary the source address of the frames. The
+ DUT should install flows in its flow table based on the varying source
+ addresses. Frames should then be transmitted from an external source at
+ a suitable frame rate to see if the DUT has properly learned all of the
+ addresses. If there is no frame loss and no flooding, the number of
+ addresses sent to the DUT should be increased and the test is repeated
+ until the maximum number of cached addresses supported by the DUT is
+ determined.
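+
+ By way of illustration only, the Learning Frames could be generated as in
+ the following Scapy-based sketch; in practice a hardware traffic generator
+ would normally be used, and the interface name, destination address and
+ MAC layout are purely examples:
+
+ .. code-block:: python
+
+    from scapy.all import Ether, IP, UDP, sendp
+
+    def send_learning_frames(iface, n_addresses, dst_mac="00:00:00:00:00:01"):
+        """Send frames with a fixed destination MAC and an incrementing
+        (locally administered) source MAC so the DUT can learn each address."""
+        for i in range(n_addresses):
+            src = "02:00:00:%02x:%02x:%02x" % ((i >> 16) & 0xff, (i >> 8) & 0xff, i & 0xff)
+            frame = Ether(src=src, dst=dst_mac) / IP(dst="192.0.2.1") / UDP()
+            sendp(frame, iface=iface, verbose=False)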
+
+ **Expected Result**:
+
+ **Metrics collected**:
+
+ The following are the metrics collected for this test:
+
+ - Number of cached addresses supported by the DUT.
+ - CPU and memory utilization may also be collected as part of this
+ test, to determine the vSwitch's performance footprint on the system.
+
+ **Deployment scenario**:
+
+ - Physical → virtual switch → 2 x physical (one receiving, one listening).
+
+.. 3.2.2.4.2
+
+Test ID: LTD.Activation.RFC2889.AddressLearningRate
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+ **Title**: RFC2889 Address Learning Rate Test
+
+ **Prerequisite Test**: LTD.Activation.RFC2889.AddressCachingCapacity
+
+ **Priority**:
+
+ **Description**:
+
+ Please note this test is only applicable to virtual switches that are capable of
+ MAC learning. The aim of this test is to determine the rate of address
+ learning of the DUT for a constant load (fixed length frames at a fixed
+ interval time). The selected frame sizes are those previously defined
+ under :ref:`default-test-parameters`; traffic should be
+ sent with each IPv4/IPv6 address incremented by one. The rate at which
+ the DUT learns a new address should be measured. The maximum caching
+ capacity from LTD.Activation.RFC2889.AddressCachingCapacity should be taken
+ into consideration as the maximum number of addresses for which the
+ learning rate can be obtained.
+
+ **Expected Result**: It may be worthwhile to report the behaviour when
+ operating beyond address capacity - some DUTs may be more friendly to
+ new addresses than others.
+
+ **Metrics collected**:
+
+ The following are the metrics collected for this test:
+
+ - The address learning rate of the DUT.
+
+ **Deployment scenario**:
+
+ - Physical → virtual switch → 2 x physical (one receiving, one listening).
+
+.. 3.2.2.5
+
+Coupling between control path and datapath Tests
+------------------------------------------------
+
+The following tests aim to determine how tightly coupled the datapath
+and the control path are within a virtual switch. The following list
+is not exhaustive but should indicate the type of tests that should be
+required. It is expected that more will be added.
+
+.. 3.2.2.5.1
+
+Test ID: LTD.CPDPCouplingFlowAddition
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+ **Title**: Control Path and Datapath Coupling
+
+ **Prerequisite Test**:
+
+ **Priority**:
+
+ **Description**:
+
+ The aim of this test is to understand how exercising the DUT's control
+ path affects datapath performance.
+
+ Initially a certain number of flow table entries are installed in the
+ vSwitch. Then over the duration of an RFC2544 throughput test
+ flow-entries are added and removed at the rates specified below. No
+ traffic is 'hitting' these flow-entries, they are simply added and
+ removed.
+
+ The test MUST be repeated with the following initial number of
+ flow-entries installed:
+
+ - < 10
+ - 1000
+ - 100,000
+ - 10,000,000 (or the maximum supported number of flow-entries)
+
+ The test MUST be repeated with the following rates of flow-entry
+ addition and deletion per second:
+
+ - 0
+ - 1 (i.e. 1 addition plus 1 deletion)
+ - 100
+ - 10,000
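+
+ A minimal sketch of how the flow-entry churn could be paced during the
+ throughput trial; ``add_flow`` and ``del_flow`` are placeholders for
+ whatever interface the DUT exposes (for example its CLI or an OpenFlow
+ controller), and a rate of 0 simply means no churn at all:
+
+ .. code-block:: python
+
+    import time
+
+    def churn_flows(add_flow, del_flow, ops_per_second, duration_s):
+        """Add and delete one non-matching flow entry per cycle at the
+        requested rate for the duration of the trial."""
+        if ops_per_second <= 0:
+            return
+        interval = 1.0 / ops_per_second
+        deadline = time.time() + duration_s
+        i = 0
+        while time.time() < deadline:
+            add_flow(i)      # entry never hit by the test traffic
+            del_flow(i)
+            i += 1
+            time.sleep(interval)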
+
+ **Expected Result**:
+
+ **Metrics Collected**:
+
+ The following are the metrics collected for this test:
+
+ - The maximum forwarding rate in Frames Per Second (FPS) and Mbps of
+ the DUT.
+ - The average latency of the traffic flow when passing through the DUT
+ (if testing for latency, note that this average is different from the
+ test specified in Section 26.3 of
+ `RFC2544 <https://www.rfc-editor.org/rfc/rfc2544.txt>`__).
+ - CPU and memory utilization may also be collected as part of this
+ test, to determine the vSwitch's performance footprint on the system.
+
+ **Deployment scenario**:
+
+ - Physical → virtual switch → physical.
+
+.. 3.2.2.6
+
+CPU and memory consumption
+--------------------------
+
+The following tests will profile a virtual switch's CPU and memory
+utilization under various loads and circumstances. The following
+list is not exhaustive but should indicate the type of tests that
+should be required. It is expected that more will be added.
+
+.. 3.2.2.6.1
+
+.. _CPU0PacketLoss:
+
+Test ID: LTD.Stress.RFC2544.0PacketLoss
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+ **Title**: RFC 2544 0% Loss CPU OR Memory Stress Test
+
+ **Prerequisite Test**:
+
+ **Priority**:
+
+ **Description**:
+
+ The aim of this test is to understand the overall performance of the
+ system when a CPU or Memory intensive application is run on the same DUT as
+ the Virtual Switch. For each frame size, an
+ LTD.Throughput.RFC2544.PacketLossRatio (0% Packet Loss) test should be
+ performed. Throughout the entire test a CPU or Memory intensive application
+ should be run on all cores on the system not in use by the Virtual Switch.
+ For NUMA systems, only cores on the same NUMA node are loaded.
+
+ It is recommended that stress-ng be used for loading the non-Virtual
+ Switch cores but any stress tool MAY be used.
+
+ **Expected Result**:
+
+ **Metrics Collected**:
+
+ The following are the metrics collected for this test:
+
+ - Memory and CPU utilization of the cores running the Virtual Switch.
+ - The number and identity of the cores allocated to the Virtual Switch.
+ - The configuration of the stress tool (for example the command line
+   parameters used to start it).
+
+ **Note:** Stress in the test ID can be replaced with the name of the
+ component being stressed, when reporting the results:
+ LTD.CPU.RFC2544.0PacketLoss or LTD.Memory.RFC2544.0PacketLoss
+
+.. 3.2.2.7
+
+Summary List of Tests
+---------------------
+
+1. Throughput tests
+
+ - Test ID: LTD.Throughput.RFC2544.PacketLossRatio
+ - Test ID: LTD.Throughput.RFC2544.PacketLossRatioFrameModification
+ - Test ID: LTD.Throughput.RFC2544.Profile
+ - Test ID: LTD.Throughput.RFC2544.SystemRecoveryTime
+ - Test ID: LTD.Throughput.RFC2544.BackToBackFrames
+ - Test ID: LTD.Throughput.RFC2889.MaxForwardingRateSoak
+ - Test ID: LTD.Throughput.RFC2889.MaxForwardingRateSoakFrameModification
+ - Test ID: LTD.Throughput.RFC6201.ResetTime
+ - Test ID: LTD.Throughput.RFC2889.MaxForwardingRate
+ - Test ID: LTD.Throughput.RFC2889.ForwardPressure
+ - Test ID: LTD.Throughput.RFC2889.ErrorFramesFiltering
+ - Test ID: LTD.Throughput.RFC2889.BroadcastFrameForwarding
+ - Test ID: LTD.Throughput.RFC2544.WorstN-BestN
+ - Test ID: LTD.Throughput.Overlay.Network.<tech>.RFC2544.PacketLossRatio
+ - Test ID: LTD.Throughput.RFC2544.MatchAction.PacketLossRatio
+
+2. Packet Latency tests
+
+ - Test ID: LTD.PacketLatency.InitialPacketProcessingLatency
+ - Test ID: LTD.PacketDelayVariation.RFC3393.Soak
+
+3. Scalability tests
+
+ - Test ID: LTD.Scalability.Flows.RFC2544.0PacketLoss
+ - Test ID: LTD.MemoryBandwidth.RFC2544.0PacketLoss.Scalability
+ - Test ID: LTD.Scalability.VNF.RFC2544.PacketLossProfile
+ - Test ID: LTD.Scalability.VNF.RFC2544.PacketLossRatio
+
+4. Activation tests
+
+ - Test ID: LTD.Activation.RFC2889.AddressCachingCapacity
+ - Test ID: LTD.Activation.RFC2889.AddressLearningRate
+
+5. Coupling between control path and datapath Tests
+
+ - Test ID: LTD.CPDPCouplingFlowAddition
+
+6. CPU and memory consumption
+
+ - Test ID: LTD.Stress.RFC2544.0PacketLoss
diff --git a/docs/testing/developer/requirements/vswitchperf_ltp.rst b/docs/testing/developer/requirements/vswitchperf_ltp.rst
new file mode 100644
index 00000000..e5147bea
--- /dev/null
+++ b/docs/testing/developer/requirements/vswitchperf_ltp.rst
@@ -0,0 +1,1344 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, Intel Corporation, AT&T and others.
+
+.. 3.1
+
+*****************************
+VSPERF LEVEL TEST PLAN (LTP)
+*****************************
+
+===============
+Introduction
+===============
+
+The objective of the OPNFV project titled
+**Characterize vSwitch Performance for Telco NFV Use Cases**, is to
+evaluate the performance of virtual switches to identify their suitability for a
+Telco Network Function Virtualization (NFV) environment. The intention of this
+Level Test Plan (LTP) document is to specify the scope, approach, resources,
+and schedule of the virtual switch performance benchmarking activities in
+OPNFV. The test cases will be identified in a separate document called the
+Level Test Design (LTD) document.
+
+This document is currently in draft form.
+
+.. 3.1.1
+
+
+.. _doc-id:
+
+Document identifier
+=========================
+
+The document id will be used to uniquely identify versions of the LTP. The
+format for the document id will be: OPNFV\_vswitchperf\_LTP\_REL\_STATUS, whereby
+the status is one of: draft, reviewed, corrected or final. The document id
+for this version of the LTP is: OPNFV\_vswitchperf\_LTP\_Colorado\_REVIEWED.
+
+.. 3.1.2
+
+.. _scope:
+
+Scope
+==========
+
+The main purpose of this project is to specify a suite of
+performance tests in order to objectively measure the current packet
+transfer characteristics of a virtual switch in the NFVI. The intent of
+the project is to facilitate the performance testing of any virtual switch.
+Thus, a generic suite of tests shall be developed, with no hard dependencies to
+a single implementation. In addition, the test case suite shall be
+architecture independent.
+
+The test cases developed in this project shall not form part of a
+separate test framework; all of these tests may be inserted into the
+Continuous Integration Test Framework and/or the Platform Functionality
+Test Framework - if a vSwitch becomes a standard component of an OPNFV
+release.
+
+.. 3.1.3
+
+References
+===============
+
+* `RFC 1242 Benchmarking Terminology for Network Interconnection
+ Devices <http://www.ietf.org/rfc/rfc1242.txt>`__
+* `RFC 2544 Benchmarking Methodology for Network Interconnect
+ Devices <http://www.ietf.org/rfc/rfc2544.txt>`__
+* `RFC 2285 Benchmarking Terminology for LAN Switching
+ Devices <http://www.ietf.org/rfc/rfc2285.txt>`__
+* `RFC 2889 Benchmarking Methodology for LAN Switching
+ Devices <http://www.ietf.org/rfc/rfc2889.txt>`__
+* `RFC 3918 Methodology for IP Multicast
+ Benchmarking <http://www.ietf.org/rfc/rfc3918.txt>`__
+* `RFC 4737 Packet Reordering
+ Metrics <http://www.ietf.org/rfc/rfc4737.txt>`__
+* `RFC 5481 Packet Delay Variation Applicability
+ Statement <http://www.ietf.org/rfc/rfc5481.txt>`__
+* `RFC 6201 Device Reset
+ Characterization <http://tools.ietf.org/html/rfc6201>`__
+
+.. 3.1.4
+
+Level in the overall sequence
+===============================
+The level of testing conducted by vswitchperf in the overall testing sequence (among
+all the testing projects in OPNFV) is the performance benchmarking of a
+specific component (the vswitch) in the OPNFV platform. It's expected that this
+testing will follow on from the functional and integration testing conducted by
+other testing projects in OPNFV, namely Functest and Yardstick.
+
+.. 3.1.5
+
+Test classes and overall test conditions
+=========================================
+A benchmark is defined by the IETF as: A standardized test that serves as a
+basis for performance evaluation and comparison. It's important to note that
+benchmarks are not Functional tests. They do not provide PASS/FAIL criteria,
+and most importantly ARE NOT performed on live networks, or performed with live
+network traffic.
+
+In order to determine the packet transfer characteristics of a virtual switch,
+the benchmarking tests will be broken down into the following categories:
+
+- **Throughput Tests** to measure the maximum forwarding rate (in
+ frames per second or fps) and bit rate (in Mbps) for a constant load
+ (as defined by `RFC1242 <https://www.rfc-editor.org/rfc/rfc1242.txt>`__)
+ without traffic loss.
+- **Packet and Frame Delay Tests** to measure average, min and max
+ packet and frame delay for constant loads.
+- **Stream Performance Tests** (TCP, UDP) to measure bulk data transfer
+ performance, i.e. how fast systems can send and receive data through
+ the virtual switch.
+- **Request/Response Performance Tests** (TCP, UDP) to measure the
+ transaction rate through the virtual switch.
+- **Packet Delay Tests** to understand latency distribution for
+ different packet sizes and over an extended test run to uncover
+ outliers.
+- **Scalability Tests** to understand how the virtual switch performs
+ as the number of flows, active ports, and the complexity of the
+ forwarding logic's configuration it has to deal with increase.
+- **Control Path and Datapath Coupling** Tests, to understand how
+ closely coupled the datapath and the control path are as well as the
+ effect of this coupling on the performance of the DUT.
+- **CPU and Memory Consumption Tests** to understand the virtual
+ switch’s footprint on the system, this includes:
+
+ * CPU core utilization.
+ * CPU cache utilization.
+ * Memory footprint.
+ * System bus (QPI, PCI, ...) utilization.
+ * Memory lanes utilization.
+ * CPU cycles consumed per packet.
+ * Time To Establish Flows Tests.
+
+- **Noisy Neighbour Tests**, to understand the effects of resource
+ sharing on the performance of a virtual switch.
+
+**Note:** some of the tests above can be conducted simultaneously where
+the combined results would be insightful, for example Packet/Frame Delay
+and Scalability.
+
+
+
+.. 3.2
+
+.. _details-of-LTP:
+
+===================================
+Details of the Level Test Plan
+===================================
+
+This section describes the following items:
+
+* Test items and their identifiers (TestItems_)
+* Test Traceability Matrix (TestMatrix_)
+* Features to be tested (FeaturesToBeTested_)
+* Features not to be tested (FeaturesNotToBeTested_)
+* Approach (Approach_)
+* Item pass/fail criteria (PassFailCriteria_)
+* Suspension criteria and resumption requirements (SuspensionResumptionReqs_)
+
+.. 3.2.1
+
+.. _TestItems:
+
+Test items and their identifiers
+==================================
+The test items/applications vsperf is trying to test are virtual switches and, in
+particular, their performance in an NFV environment. vsperf will first try to
+measure the maximum achievable performance of a virtual switch and then it will
+focus on use cases that are as close to real-life deployment scenarios as
+possible.
+
+.. 3.2.2
+
+.. _TestMatrix:
+
+Test Traceability Matrix
+==========================
+vswitchperf leverages the "3x3" matrix (introduced in
+https://tools.ietf.org/html/draft-ietf-bmwg-virtual-net-02) to achieve test
+traceability. The matrix was expanded to 3x4 to accommodate scale metrics when
+displaying the coverage of many metrics/benchmarks. Test case coverage in the
+LTD is tracked using the following categories:
+
+
++---------------+-------------+------------+---------------+-------------+
+| | | | | |
+| | SPEED | ACCURACY | RELIABILITY | SCALE |
+| | | | | |
++---------------+-------------+------------+---------------+-------------+
+| | | | | |
+| Activation | X | X | X | X |
+| | | | | |
++---------------+-------------+------------+---------------+-------------+
+| | | | | |
+| Operation | X | X | X | X |
+| | | | | |
++---------------+-------------+------------+---------------+-------------+
+| | | | | |
+| De-activation | | | | |
+| | | | | |
++---------------+-------------+------------+---------------+-------------+
+
+X denotes a test category that has 1 or more test cases defined.
+
+.. 3.2.3
+
+.. _FeaturesToBeTested:
+
+Features to be tested
+==========================
+
+Characterizing virtual switches (i.e. Device Under Test (DUT) in this document)
+includes measuring the following performance metrics:
+
+- **Throughput** as defined by `RFC1242
+ <https://www.rfc-editor.org/rfc/rfc1242.txt>`__: The maximum rate at which
+ **none** of the offered frames are dropped by the DUT. The maximum frame
+ rate and bit rate that can be transmitted by the DUT without any error
+ should be recorded. Note there is an equivalent bit rate and a specific
+ layer at which the payloads contribute to the bits. Errors and
+ improperly formed frames or packets are dropped.
+- **Packet delay** introduced by the DUT and its cumulative effect on
+ E2E networks. Frame delay can be measured equivalently.
+- **Packet delay variation**: measured from the perspective of the
+ VNF/application. Packet delay variation is sometimes called "jitter".
+ However, we will avoid the term "jitter" as the term holds different
+ meaning to different groups of people. In this document we will
+ simply use the term packet delay variation. The preferred form for this
+ metric is the PDV form of delay variation defined in `RFC5481
+ <https://www.rfc-editor.org/rfc/rfc5481.txt>`__. The most relevant
+ measurement of PDV considers the delay variation of a single user flow,
+ as this will be relevant to the size of end-system buffers to compensate
+ for delay variation. The measurement system's ability to store the
+ delays of individual packets in the flow of interest is a key factor
+ that determines the specific measurement method. At the outset, it is
+ ideal to view the complete PDV distribution. Systems that can capture
+ and store packets and their delays have the freedom to calculate the
+ reference minimum delay and to determine various quantiles of the PDV
+ distribution accurately (in post-measurement processing routines).
+ Systems without storage must apply algorithms to calculate delay and
+  statistical measurements on the fly. For example, a system may store
+  temporary estimates of the minimum delay and the set of (100) packets
+  with the longest delays during measurement (to calculate a high quantile),
+  and update these sets with new values periodically.
+ In some cases, a limited number of delay histogram bins will be
+ available, and the bin limits will need to be set using results from
+ repeated experiments. See section 8 of `RFC5481
+ <https://www.rfc-editor.org/rfc/rfc5481.txt>`__.
+- **Packet loss** (within a configured waiting time at the receiver): All
+ packets sent to the DUT should be accounted for.
+- **Burst behaviour**: measures the ability of the DUT to buffer packets.
+- **Packet re-ordering**: measures the ability of the device under test to
+ maintain sending order throughout transfer to the destination.
+- **Packet correctness**: packets or Frames must be well-formed, in that
+ they include all required fields, conform to length requirements, pass
+ integrity checks, etc.
+- **Availability and capacity** of the DUT, i.e. when the DUT is fully “up”
+  and connected, the following measurements should be captured for the
+  DUT without any network packet load:
+
+  - Includes average power consumption of the CPUs (in various power states)
+    and of the system over a specified period of time. The time period should
+    not be less than 60 seconds.
+  - Includes average per-core CPU utilization over a specified period of time.
+    The time period should not be less than 60 seconds.
+  - Includes the number of NIC interfaces supported.
+  - Includes headroom of VM workload processing cores (i.e. available
+    for applications).
+
+.. 3.2.4
+
+.. _FeaturesNotToBeTested:
+
+Features not to be tested
+==========================
+vsperf doesn't intend to define or perform any functional tests. The aim is to
+focus on performance.
+
+.. 3.2.5
+
+.. _Approach:
+
+Approach
+==============
+The testing approach adopted by the vswitchperf project is black box testing,
+meaning the test inputs can be generated and the outputs captured and
+completely evaluated from outside of the System Under Test. Some metrics
+can be collected on the SUT, such as CPU or memory utilization, if the
+collection has no or minimal impact on the benchmark.
+This section will look at the deployment scenarios and the general methodology
+used by vswitchperf. In addition, this section will also specify the details of
+the Test Report that must be collected for each of the test cases.
+
+.. 3.2.5.1
+
+Deployment Scenarios
+--------------------------
+The following represents possible deployment test scenarios which can
+help to determine the performance of both the virtual switch and the
+datapaths to physical ports (to NICs) and to logical ports (to VNFs):
+
+.. 3.2.5.1.1
+
+.. _Phy2Phy:
+
+Physical port → vSwitch → physical port
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+.. code-block:: console
+
+ _
+ +--------------------------------------------------+ |
+ | +--------------------+ | |
+ | | | | |
+ | | v | | Host
+ | +--------------+ +--------------+ | |
+ | | phy port | vSwitch | phy port | | |
+ +---+--------------+------------+--------------+---+ _|
+ ^ :
+ | |
+ : v
+ +--------------------------------------------------+
+ | |
+ | traffic generator |
+ | |
+ +--------------------------------------------------+
+
+.. 3.2.5.1.2
+
+.. _PVP:
+
+Physical port → vSwitch → VNF → vSwitch → physical port
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+.. code-block:: console
+
+ _
+ +---------------------------------------------------+ |
+ | | |
+ | +-------------------------------------------+ | |
+ | | Application | | |
+ | +-------------------------------------------+ | |
+ | ^ : | |
+ | | | | | Guest
+ | : v | |
+ | +---------------+ +---------------+ | |
+ | | logical port 0| | logical port 1| | |
+ +---+---------------+-----------+---------------+---+ _|
+ ^ :
+ | |
+ : v _
+ +---+---------------+----------+---------------+---+ |
+ | | logical port 0| | logical port 1| | |
+ | +---------------+ +---------------+ | |
+ | ^ : | |
+ | | | | | Host
+ | : v | |
+ | +--------------+ +--------------+ | |
+ | | phy port | vSwitch | phy port | | |
+ +---+--------------+------------+--------------+---+ _|
+ ^ :
+ | |
+ : v
+ +--------------------------------------------------+
+ | |
+ | traffic generator |
+ | |
+ +--------------------------------------------------+
+
+.. 3.2.5.1.3
+
+.. _PVVP:
+
+Physical port → vSwitch → VNF → vSwitch → VNF → vSwitch → physical port
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+.. code-block:: console
+
+ _
+ +----------------------+ +----------------------+ |
+ | Guest 1 | | Guest 2 | |
+ | +---------------+ | | +---------------+ | |
+ | | Application | | | | Application | | |
+ | +---------------+ | | +---------------+ | |
+ | ^ | | | ^ | | |
+ | | v | | | v | | Guests
+ | +---------------+ | | +---------------+ | |
+ | | logical ports | | | | logical ports | | |
+ | | 0 1 | | | | 0 1 | | |
+ +---+---------------+--+ +---+---------------+--+ _|
+ ^ : ^ :
+ | | | |
+ : v : v _
+ +---+---------------+---------+---------------+--+ |
+ | | 0 1 | | 3 4 | | |
+ | | logical ports | | logical ports | | |
+ | +---------------+ +---------------+ | |
+ | ^ | ^ | | | Host
+ | | L-----------------+ v | |
+ | +--------------+ +--------------+ | |
+ | | phy ports | vSwitch | phy ports | | |
+ +---+--------------+----------+--------------+---+ _|
+ ^ ^ : :
+ | | | |
+ : : v v
+ +--------------------------------------------------+
+ | |
+ | traffic generator |
+ | |
+ +--------------------------------------------------+
+
+.. 3.2.5.1.4
+
+Physical port → VNF → vSwitch → VNF → physical port
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+.. code-block:: console
+
+ _
+ +----------------------+ +----------------------+ |
+ | Guest 1 | | Guest 2 | |
+ |+-------------------+ | | +-------------------+| |
+ || Application | | | | Application || |
+ |+-------------------+ | | +-------------------+| |
+ | ^ | | | ^ | | | Guests
+ | | v | | | v | |
+ |+-------------------+ | | +-------------------+| |
+ || logical ports | | | | logical ports || |
+ || 0 1 | | | | 0 1 || |
+ ++--------------------++ ++--------------------++ _|
+ ^ : ^ :
+ (PCI passthrough) | | (PCI passthrough)
+ | v : | _
+ +--------++------------+-+------------++---------+ |
+ | | || 0 | | 1 || | | |
+ | | ||logical port| |logical port|| | | |
+ | | |+------------+ +------------+| | | |
+ | | | | ^ | | | |
+ | | | L-----------------+ | | | |
+ | | | | | | | Host
+ | | | vSwitch | | | |
+ | | +-----------------------------+ | | |
+ | | | | |
+ | | v | |
+ | +--------------+ +--------------+ | |
+ | | phy port/VF | | phy port/VF | | |
+ +-+--------------+--------------+--------------+-+ _|
+ ^ :
+ | |
+ : v
+ +--------------------------------------------------+
+ | |
+ | traffic generator |
+ | |
+ +--------------------------------------------------+
+
+.. 3.2.5.1.5
+
+Physical port → vSwitch → VNF
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+.. code-block:: console
+
+ _
+ +---------------------------------------------------+ |
+ | | |
+ | +-------------------------------------------+ | |
+ | | Application | | |
+ | +-------------------------------------------+ | |
+ | ^ | |
+ | | | | Guest
+ | : | |
+ | +---------------+ | |
+ | | logical port 0| | |
+ +---+---------------+-------------------------------+ _|
+ ^
+ |
+ : _
+ +---+---------------+------------------------------+ |
+ | | logical port 0| | |
+ | +---------------+ | |
+ | ^ | |
+ | | | | Host
+ | : | |
+ | +--------------+ | |
+ | | phy port | vSwitch | |
+ +---+--------------+------------ -------------- ---+ _|
+ ^
+ |
+ :
+ +--------------------------------------------------+
+ | |
+ | traffic generator |
+ | |
+ +--------------------------------------------------+
+
+.. 3.2.5.1.6
+
+VNF → vSwitch → physical port
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+.. code-block:: console
+
+ _
+ +---------------------------------------------------+ |
+ | | |
+ | +-------------------------------------------+ | |
+ | | Application | | |
+ | +-------------------------------------------+ | |
+ | : | |
+ | | | | Guest
+ | v | |
+ | +---------------+ | |
+ | | logical port | | |
+ +-------------------------------+---------------+---+ _|
+ :
+ |
+ v _
+ +------------------------------+---------------+---+ |
+ | | logical port | | |
+ | +---------------+ | |
+ | : | |
+ | | | | Host
+ | v | |
+ | +--------------+ | |
+ | vSwitch | phy port | | |
+ +-------------------------------+--------------+---+ _|
+ :
+ |
+ v
+ +--------------------------------------------------+
+ | |
+ | traffic generator |
+ | |
+ +--------------------------------------------------+
+
+.. 3.2.5.1.7
+
+VNF → vSwitch → VNF → vSwitch
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+.. code-block:: console
+
+ _
+ +-------------------------+ +-------------------------+ |
+ | Guest 1 | | Guest 2 | |
+ | +-----------------+ | | +-----------------+ | |
+ | | Application | | | | Application | | |
+ | +-----------------+ | | +-----------------+ | |
+ | : | | ^ | |
+ | | | | | | | Guest
+ | v | | : | |
+ | +---------------+ | | +---------------+ | |
+ | | logical port 0| | | | logical port 0| | |
+ +-----+---------------+---+ +---+---------------+-----+ _|
+ : ^
+ | |
+ v : _
+ +----+---------------+------------+---------------+-----+ |
+ | | port 0 | | port 1 | | |
+ | +---------------+ +---------------+ | |
+ | : ^ | |
+ | | | | | Host
+ | +--------------------+ | |
+ | | |
+ | vswitch | |
+ +-------------------------------------------------------+ _|
+
+.. 3.2.5.1.8
+
+HOST 1(Physical port → virtual switch → VNF → virtual switch → Physical port)
+→ HOST 2(Physical port → virtual switch → VNF → virtual switch → Physical port)
+
+HOST 1 (PVP) → HOST 2 (PVP)
+~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+.. code-block:: console
+
+ _
+ +----------------------+ +----------------------+ |
+ | Guest 1 | | Guest 2 | |
+ | +---------------+ | | +---------------+ | |
+ | | Application | | | | Application | | |
+ | +---------------+ | | +---------------+ | |
+ | ^ | | | ^ | | |
+ | | v | | | v | | Guests
+ | +---------------+ | | +---------------+ | |
+ | | logical ports | | | | logical ports | | |
+ | | 0 1 | | | | 0 1 | | |
+ +---+---------------+--+ +---+---------------+--+ _|
+ ^ : ^ :
+ | | | |
+ : v : v _
+ +---+---------------+--+ +---+---------------+--+ |
+ | | 0 1 | | | | 3 4 | | |
+ | | logical ports | | | | logical ports | | |
+ | +---------------+ | | +---------------+ | |
+ | ^ | | | ^ | | | Hosts
+ | | v | | | v | |
+ | +--------------+ | | +--------------+ | |
+ | | phy ports | | | | phy ports | | |
+ +---+--------------+---+ +---+--------------+---+ _|
+ ^ : : :
+ | +-----------------+ |
+ : v
+ +--------------------------------------------------+
+ | |
+ | traffic generator |
+ | |
+ +--------------------------------------------------+
+
+
+
+**Note:** For tests where the traffic generator and/or measurement
+receiver are implemented on VM and connected to the virtual switch
+through vNIC, the issues of shared resources and interactions between
+the measurement devices and the device under test must be considered.
+
+**Note:** Some RFC 2889 tests require a full-mesh sending and receiving
+pattern involving more than two ports. This possibility is illustrated in the
+Physical port → vSwitch → VNF → vSwitch → VNF → vSwitch → physical port
+diagram above (with 2 sending and 2 receiving ports, though all ports
+could be used bi-directionally).
+
+**Note:** When Deployment Scenarios are used in RFC 2889 address learning
+or cache capacity testing, an additional port from the vSwitch must be
+connected to the test device. This port is used to listen for flooded
+frames.
+
+.. 3.2.5.2
+
+General Methodology:
+--------------------------
+To establish the baseline performance of the virtual switch, tests would
+initially be run with a simple workload in the VNF (the recommended
+simple workload VNF would be `DPDK <http://www.dpdk.org/>`__'s testpmd
+application forwarding packets in a VM, or vloop\_vnf, a simple kernel
+module that forwards traffic between two network interfaces inside the
+virtualized environment while bypassing the networking stack).
+Subsequently, the tests would also be executed with a real Telco
+workload running in the VNF, which would exercise the virtual switch in
+the context of higher level Telco NFV use cases, and prove that its
+underlying characteristics and behaviour can be measured and validated.
+Suitable real Telco workload VNFs are yet to be identified.
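+
+For illustration only, the following is a minimal sketch of launching testpmd
+as the simple loopback workload inside the guest. The core list and
+memory-channel count are placeholders that depend on the VM configuration,
+and this is not a prescribed invocation:
+
+.. code-block:: console
+
+    # inside the guest (example values only)
+    $ ./testpmd -l 0,1,2 -n 4 -- -i
+    testpmd> set fwd mac
+    testpmd> start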
+
+.. 3.2.5.2.1
+
+.. _default-test-parameters:
+
+Default Test Parameters
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The following list identifies the default parameters for the suite of
+tests:
+
+- Reference application: Simple forwarding or Open Source VNF.
+- Frame size (bytes): 64, 128, 256, 512, 1024, 1280, 1518, 2K, 4K OR
+  packet size based on use-case (e.g. RTP 64B, 256B) OR a mix of packet sizes as
+  maintained by the `Functest project <https://wiki.opnfv.org/traffic_profile_management>`_.
+- Reordering check: Tests should confirm that packets within a flow are
+ not reordered.
+- Duplex: Unidirectional / Bidirectional. Default: Full duplex with
+ traffic transmitting in both directions, as network traffic generally
+ does not flow in a single direction. By default the data rate of
+ transmitted traffic should be the same in both directions, please
+ note that asymmetric traffic (e.g. downlink-heavy) tests will be
+ mentioned explicitly for the relevant test cases.
+- Number of Flows: Default for non-scalability tests is a single flow.
+  For scalability tests the goal is to test with the maximum number of
+  supported flows, but where possible tests will scale up to 10 million
+  flows. Start with a single flow and scale up. By default flows should
+  be added sequentially; tests that add flows simultaneously will
+  explicitly call out their flow addition behaviour. Packets are
+  generated across the flows uniformly with no burstiness. Multi-core
+  tests should choose the number of packet flows based on the
+  vSwitch/VNF multi-thread implementation and behavior.
+
+- Traffic Types: UDP, SCTP, RTP and GTP traffic.
+- Deployment scenarios are:
+
+  - Physical → virtual switch → physical.
+  - Physical → virtual switch → VNF → virtual switch → physical.
+  - Physical → virtual switch → VNF → virtual switch → VNF → virtual
+    switch → physical.
+  - Physical → VNF → virtual switch → VNF → physical.
+  - Physical → virtual switch → VNF.
+  - VNF → virtual switch → Physical.
+  - VNF → virtual switch → VNF.
+
+Tests MUST have these parameters unless otherwise stated. **Test cases
+with non-default parameters will be stated explicitly**.
+
+**Note**: For throughput tests, unless stated otherwise, test
+configurations should ensure that traffic traverses the installed flows
+through the virtual switch, i.e. flows are installed and have an appropriate
+timeout that doesn't expire before packet transmission starts.
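+
+As a hedged illustration only, the default frame sizes can be overridden for a
+single vsperf run via the ``--test-params`` option; the parameter name below
+follows the vsperf CI configuration referenced later in this document and the
+test case name is an example:
+
+.. code-block:: console
+
+    $ ./vsperf --test-params "TRAFFICGEN_PKT_SIZES=(64,128,512,1024,1518)" phy2phy_tput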
+
+.. 3.2.5.2.2
+
+Flow Classification
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Virtual switches classify packets into flows by processing and matching
+particular header fields in the packet/frame and/or the input port where
+the packets/frames arrived. The vSwitch then carries out an action on
+the group of packets that match the classification parameters. Thus a
+flow is considered to be a sequence of packets that have a shared set of
+header field values or have arrived on the same port and have the same
+action applied to them. Performance results can vary based on the
+parameters the vSwitch uses to match for a flow. The recommended flow
+classification parameters for L3 vSwitch performance tests are: the
+input port, the source IP address, the destination IP address and the
+Ethernet protocol type field. It is essential to increase the flow
+time-out time on a vSwitch before conducting any performance tests that
+do not measure the flow set-up time. Normally the first packet of a
+particular flow will install the flow in the vSwitch which adds an
+additional latency, subsequent packets of the same flow are not subject
+to this latency if the flow is already installed on the vSwitch.
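+
+As an illustration only, assuming an Open vSwitch based DUT with a bridge named
+``br0``, a permanent flow (``idle_timeout=0``) matching the recommended L3
+classification fields could be pre-installed so that flow set-up time does not
+affect the measurement; the port numbers and addresses are placeholders:
+
+.. code-block:: console
+
+    $ ovs-ofctl add-flow br0 \
+      "in_port=1,dl_type=0x0800,nw_src=10.0.0.1,nw_dst=10.0.0.2,idle_timeout=0,actions=output:2"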
+
+.. 3.2.5.2.3
+
+Test Priority
+~~~~~~~~~~~~~~~~~~~~~
+
+Tests will be assigned a priority in order to determine which tests
+should be implemented immediately and which test implementations
+can be deferred.
+
+Priority can be one of the following types:
+
+- Urgent: Must be implemented immediately.
+- High: Must be implemented in the next release.
+- Medium: May be implemented after the release.
+- Low: May or may not be implemented at all.
+
+.. 3.2.5.2.4
+
+SUT Setup
+~~~~~~~~~~~~~~~~~~
+
+The SUT should be configured to its "default" state. The
+SUT's configuration or set-up must not change between tests in any way
+other than what is required to do the test. All supported protocols must
+be configured and enabled for each test set up.
+
+.. 3.2.5.2.5
+
+Port Configuration
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The DUT should be configured with n ports where
+n is a multiple of 2. Half of the ports on the DUT should be used as
+ingress ports and the other half of the ports on the DUT should be used
+as egress ports. Where a DUT has more than 2 ports, the ingress data
+streams should be set-up so that they transmit packets to the egress
+ports in sequence so that there is an even distribution of traffic
+across ports. For example, if a DUT has 4 ports 0(ingress), 1(ingress),
+2(egress) and 3(egress), the traffic stream directed at port 0 should
+output a packet to port 2 followed by a packet to port 3. The traffic
+stream directed at port 1 should also output a packet to port 2 followed
+by a packet to port 3.
+
+.. 3.2.5.2.6
+
+Frame Formats
+~~~~~~~~~~~~~~~~~~~~~
+
+**Frame formats Layer 2 (data link layer) protocols**
+
+- Ethernet II
+
+.. code-block:: console
+
+    +-----------------+---------+-----------+
+    | Ethernet Header | Payload | Check Sum |
+    +-----------------+---------+-----------+
+    |_________________|_________|___________|
+      14 Bytes        46 - 1500   4 Bytes
+                        Bytes
+
+
+**Layer 3 (network layer) protocols**
+
+- IPv4
+
+.. code-block:: console
+
+ +-----------------+-----------+---------+-----------+
+ | Ethernet Header | IP Header | Payload | Checksum |
+ +-----------------+-----------+---------+-----------+
+ |_________________|___________|_________|___________|
+ 14 Bytes 20 bytes 26 - 1480 4 Bytes
+ Bytes
+
+- IPv6
+
+.. code-block:: console
+
+ +-----------------+-----------+---------+-----------+
+ | Ethernet Header | IP Header | Payload | Checksum |
+ +-----------------+-----------+---------+-----------+
+ |_________________|___________|_________|___________|
+ 14 Bytes 40 bytes 26 - 1460 4 Bytes
+ Bytes
+
+**Layer 4 (transport layer) protocols**
+
+ - TCP
+ - UDP
+ - SCTP
+
+.. code-block:: console
+
+ +-----------------+-----------+-----------------+---------+-----------+
+ | Ethernet Header | IP Header | Layer 4 Header | Payload | Checksum |
+ +-----------------+-----------+-----------------+---------+-----------+
+ |_________________|___________|_________________|_________|___________|
+ 14 Bytes 40 bytes 20 Bytes 6 - 1460 4 Bytes
+ Bytes
+
+
+**Layer 5 (application layer) protocols**
+
+ - RTP
+ - GTP
+
+.. code-block:: console
+
+ +-----------------+-----------+-----------------+---------+-----------+
+ | Ethernet Header | IP Header | Layer 4 Header | Payload | Checksum |
+ +-----------------+-----------+-----------------+---------+-----------+
+ |_________________|___________|_________________|_________|___________|
+ 14 Bytes 20 bytes 20 Bytes >= 6 Bytes 4 Bytes
+
+.. 3.2.5.2.7
+
+Packet Throughput
+~~~~~~~~~~~~~~~~~~~~~~~~~
+There is a difference between an Ethernet frame,
+an IP packet, and a UDP datagram. In the seven-layer OSI model of
+computer networking, packet refers to a data unit at layer 3 (network
+layer). The correct term for a data unit at layer 2 (data link layer) is
+a frame, and at layer 4 (transport layer) is a segment or datagram.
+
+Important concepts related to 10GbE performance are frame rate and
+throughput. The MAC bit rate of 10GbE, defined in the IEEE standard
+802.3ae, is 10 billion bits per second. Frame rate is based on the bit rate
+and frame format definitions. Throughput, defined in IETF RFC 1242, is
+the highest rate at which the system under test can forward the offered
+load, without loss.
+
+The frame rate for 10GbE is determined by a formula that divides the 10
+billion bits per second by the preamble + frame length + inter-frame
+gap.
+
+The maximum frame rate is calculated using the minimum values of the
+following parameters, as described in the IEEE 802.3ae standard:
+
+- Preamble: 8 bytes \* 8 = 64 bits
+- Frame Length: 64 bytes (minimum) \* 8 = 512 bits
+- Inter-frame Gap: 12 bytes (minimum) \* 8 = 96 bits
+
+Therefore, Maximum Frame Rate (64B Frames)
+= MAC Transmit Bit Rate / (Preamble + Frame Length + Inter-frame Gap)
+= 10,000,000,000 / (64 + 512 + 96)
+= 10,000,000,000 / 672
+= 14,880,952.38 frames per second (fps)
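+
+The same calculation can be reproduced with simple shell (bash) arithmetic,
+for example for the 64B and 1518B frame sizes (integer division):
+
+.. code-block:: console
+
+    $ echo $(( 10**10 / ((8 + 64 + 12) * 8) ))      # 64B frames
+    14880952
+    $ echo $(( 10**10 / ((8 + 1518 + 12) * 8) ))    # 1518B frames
+    812743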
+
+.. 3.2.5.3
+
+RFCs for testing virtual switch performance
+--------------------------------------------------
+
+The starting point for defining the suite of tests for benchmarking the
+performance of a virtual switch is to take existing RFCs and standards
+that were designed to test their physical counterparts and adapt them
+for testing virtual switches. The rationale behind this is to establish
+a fair comparison between the performance of virtual and physical
+switches. This section outlines the RFCs that are used by this
+specification.
+
+.. 3.2.5.3.1
+
+RFC 1242 Benchmarking Terminology for Network Interconnection Devices
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+RFC 1242 defines the terminology that is used in describing
+performance benchmarking tests and their results. Definitions and
+discussions covered include: Back-to-back, bridge, bridge/router,
+constant load, data link frame size, frame loss rate, inter frame gap,
+latency, and many more.
+
+.. 3.2.5.3.2
+
+RFC 2544 Benchmarking Methodology for Network Interconnect Devices
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+RFC 2544 outlines a benchmarking methodology for network Interconnect
+Devices. The methodology results in performance metrics such as latency,
+frame loss percentage, and maximum data throughput.
+
+In this document network “throughput” (measured in millions of frames
+per second) is based on RFC 2544, unless otherwise noted. Frame size
+refers to Ethernet frames ranging from smallest frames of 64 bytes to
+largest frames of 9K bytes.
+
+Types of tests are:
+
+1. Throughput test defines the maximum number of frames per second
+ that can be transmitted without any error, or 0% loss ratio.
+ In some Throughput tests (and those tests with long duration),
+ evaluation of an additional frame loss ratio is suggested. The
+ current ratio (10^-7 %) is based on understanding the typical
+ user-to-user packet loss ratio needed for good application
+ performance and recognizing that a single transfer through a
+ vswitch must contribute a tiny fraction of user-to-user loss.
+ Further, the ratio 10^-7 % also recognizes practical limitations
+ when measuring loss ratio.
+
+2. Latency test measures the time required for a frame to travel from
+ the originating device through the network to the destination device.
+ Please note that RFC2544 Latency measurement will be superseded with
+ a measurement of average latency over all successfully transferred
+ packets or frames.
+
+3. Frame loss test measures the network’s
+ response in overload conditions - a critical indicator of the
+ network’s ability to support real-time applications in which a
+ large amount of frame loss will rapidly degrade service quality.
+
+4. Burst test assesses the buffering capability of a virtual switch. It
+ measures the maximum number of frames received at full line rate
+ before a frame is lost. In carrier Ethernet networks, this
+ measurement validates the excess information rate (EIR) as defined in
+ many SLAs.
+
+5. System recovery to characterize speed of recovery from an overload
+ condition.
+
+6. Reset to characterize speed of recovery from device or software
+ reset. This type of test has been updated by `RFC6201
+ <https://www.rfc-editor.org/rfc/rfc6201.txt>`__ as such,
+ the methodology defined by this specification will be that of RFC 6201.
+
+Although not included in the defined RFC 2544 standard, another crucial
+measurement in Ethernet networking is packet delay variation. The
+definition set out by this specification comes from
+`RFC5481 <https://www.rfc-editor.org/rfc/rfc5481.txt>`__.
+
+.. 3.2.5.3.3
+
+RFC 2285 Benchmarking Terminology for LAN Switching Devices
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+RFC 2285 defines the terminology that is used for benchmarking LAN
+switching devices. It extends RFC 1242 and defines: DUTs, SUTs,
+Traffic orientation and distribution, bursts, loads, forwarding
+rates, etc.
+
+.. 3.2.5.3.4
+
+RFC 2889 Benchmarking Methodology for LAN Switching
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+RFC 2889 outlines a benchmarking methodology for LAN switching; it
+extends RFC 2544. The outlined methodology gathers performance
+metrics for forwarding, congestion control, latency, address handling
+and finally filtering.
+
+.. 3.2.5.3.5
+
+RFC 3918 Methodology for IP Multicast Benchmarking
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+RFC 3918 outlines a methodology for IP Multicast benchmarking.
+
+.. 3.2.5.3.6
+
+RFC 4737 Packet Reordering Metrics
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+RFC 4737 describes metrics for identifying and counting re-ordered
+packets within a stream, and metrics to measure the extent each
+packet has been re-ordered.
+
+.. 3.2.5.3.7
+
+RFC 5481 Packet Delay Variation Applicability Statement
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+RFC 5481 defines two common but different forms of delay variation
+metrics, and compares the metrics over a range of networking
+circumstances and tasks. The most suitable form for vSwitch
+benchmarking is the "PDV" form.
+
+.. 3.2.5.3.8
+
+RFC 6201 Device Reset Characterization
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+RFC 6201 extends the methodology for characterizing the speed of
+recovery of the DUT from device or software reset described in RFC
+2544.
+
+.. 3.2.6:
+
+.. _PassFailCriteria:
+
+Item pass/fail criteria
+=========================
+
+vswitchperf does not specify Pass/Fail criteria for the tests in terms of a
+threshold, as benchmarks do not (and should not) do this. The results/metrics
+for a test are simply reported. If it had to be defined, a test is considered
+to have passed if it successfully completed and a relevant metric was
+recorded/reported for the SUT.
+
+.. 3.2.7:
+
+.. _SuspensionResumptionReqs:
+
+Suspension criteria and resumption requirements
+================================================
+In the case of a throughput test, a test should be suspended if a virtual
+switch is failing to forward any traffic. A test should be restarted from a
+clean state if the intention is to carry out the test again.
+
+.. 3.2.8:
+
+.. _TestDeliverables:
+
+Test deliverables
+==================
+Each test should produce a test report that details SUT information as well as
+the test results. There are a number of parameters related to the system, DUT
+and tests that can affect the repeatability of test results and should be
+recorded. In order to minimise the variation in the results of a test,
+it is recommended that the test report includes the following information:
+
+- Hardware details including:
+
+ - Platform details.
+ - Processor details.
+ - Memory information (see below)
+ - Number of enabled cores.
+ - Number of cores used for the test.
+ - Number of physical NICs, as well as their details (manufacturer,
+ versions, type and the PCI slot they are plugged into).
+ - NIC interrupt configuration.
+ - BIOS version, release date and any configurations that were
+ modified.
+
+- Software details including:
+
+ - OS version (for host and VNF)
+ - Kernel version (for host and VNF)
+ - GRUB boot parameters (for host and VNF).
+ - Hypervisor details (Type and version).
+ - Selected vSwitch, version number or commit id used.
+ - vSwitch launch command line if it has been parameterised.
+ - Memory allocation to the vSwitch – which NUMA node it is using,
+ and how many memory channels.
+ - Where the vswitch is built from source: compiler details including
+ versions and the flags that were used to compile the vSwitch.
+ - DPDK or any other SW dependency version number or commit id used.
+  - Memory allocation to a VM - if it's from Hugepages/elsewhere.
+ - VM storage type: snapshot/independent persistent/independent
+ non-persistent.
+ - Number of VMs.
+ - Number of Virtual NICs (vNICs), versions, type and driver.
+ - Number of virtual CPUs and their core affinity on the host.
+ - Number vNIC interrupt configuration.
+ - Thread affinitization for the applications (including the vSwitch
+ itself) on the host.
+ - Details of Resource isolation, such as CPUs designated for
+ Host/Kernel (isolcpu) and CPUs designated for specific processes
+ (taskset).
+
+- Memory Details
+
+ - Total memory
+ - Type of memory
+ - Used memory
+ - Active memory
+ - Inactive memory
+ - Free memory
+ - Buffer memory
+ - Swap cache
+ - Total swap
+ - Used swap
+ - Free swap
+
+- Test duration.
+- Number of flows.
+- Traffic Information:
+
+ - Traffic type - UDP, TCP, IMIX / Other.
+ - Packet Sizes.
+
+- Deployment Scenario.
+
+**Note**: Tests that require additional parameters to be recorded will
+explicitly specify this.
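+
+As a hedged illustration only, some of the hardware and software details listed
+above can be captured with standard Linux commands such as the following; the
+exact set of commands depends on the SUT, and the last command assumes an Open
+vSwitch based deployment:
+
+.. code-block:: console
+
+    $ lscpu                       # processor details and enabled cores
+    $ cat /proc/meminfo           # memory information
+    $ lspci | grep -i ethernet    # physical NICs and their PCI addresses
+    $ uname -r                    # kernel version
+    $ cat /proc/cmdline           # boot parameters in use
+    $ ovs-vswitchd --version      # vSwitch version (Open vSwitch example)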
+
+
+.. 3.3:
+
+.. _TestManagement:
+
+Test management
+=================
+This section will detail the test activities that will be conducted by vsperf
+as well as the infrastructure that will be used to complete the tests in OPNFV.
+
+.. 3.3.1:
+
+Planned activities and tasks; test progression
+=================================================
+A key consideration when conducting any sort of benchmark is trying to
+ensure the consistency and repeatability of test results between runs.
+When benchmarking the performance of a virtual switch there are many
+factors that can affect the consistency of results. This section
+describes these factors and the measures that can be taken to limit
+their effects. In addition, this section will outline some system tests
+to validate the platform and the VNF before conducting any vSwitch
+benchmarking tests.
+
+**System Isolation:**
+
+When conducting a benchmarking test on any SUT, it is essential to limit
+(and if reasonable, eliminate) any noise that may interfere with the
+accuracy of the metrics collected by the test. This noise may be
+introduced by other hardware or software (OS, other applications), and
+can result in significantly varying performance metrics being collected
+between consecutive runs of the same test. In the case of characterizing
+the performance of a virtual switch, there are a number of configuration
+parameters that can help increase the repeatability and stability of
+test results, including:
+
+- OS/GRUB configuration:
+
+ - maxcpus = n where n >= 0; limits the kernel to using 'n'
+ processors. Only use exactly what you need.
+ - isolcpus: Isolate CPUs from the general scheduler. Isolate all
+ CPUs bar one which will be used by the OS.
+ - use taskset to affinitize the forwarding application and the VNFs
+ onto isolated cores. VNFs and the vSwitch should be allocated
+ their own cores, i.e. must not share the same cores. vCPUs for the
+ VNF should be affinitized to individual cores also.
+ - Limit the amount of background applications that are running and
+ set OS to boot to runlevel 3. Make sure to kill any unnecessary
+ system processes/daemons.
+ - Only enable hardware that you need to use for your test – to
+ ensure there are no other interrupts on the system.
+ - Configure NIC interrupts to only use the cores that are not
+ allocated to any other process (VNF/vSwitch).
+
+- NUMA configuration: Any unused sockets in a multi-socket system
+ should be disabled.
+- CPU pinning: The vSwitch and the VNF should each be affinitized to
+  separate logical cores using a combination of maxcpus, isolcpus and
+  taskset (an illustrative sketch is shown after this list).
+- BIOS configuration: BIOS should be configured for performance where
+  an explicit option exists; sleep states should be disabled; any
+  virtualization optimization technologies should be enabled;
+  hyperthreading should also be enabled; turbo boost and overclocking
+  should be disabled.
+
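+As an illustrative sketch only, assuming an Open vSwitch based deployment, the
+GRUB, taskset and CPU pinning items above could translate to the following;
+core numbers are placeholders and ``my_forwarding_app`` is a hypothetical
+application name:
+
+.. code-block:: console
+
+    # kernel command line fragment (e.g. appended to GRUB_CMDLINE_LINUX)
+    maxcpus=8 isolcpus=1-7
+
+    # pin the running vSwitch onto isolated cores
+    $ sudo taskset -pc 1,2 $(pidof ovs-vswitchd)
+
+    # launch a forwarding application on other isolated cores
+    $ sudo taskset -c 3,4 ./my_forwarding_app
+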
+**System Validation:**
+
+System validation is broken down into two sub-categories: Platform
+validation and VNF validation. The validation test itself involves
+verifying the forwarding capability and stability for the sub-system
+under test. The rationale behind system validation is twofold. Firstly
+to give a tester confidence in the stability of the platform or VNF that
+is being tested; and secondly to provide base performance comparison
+points to understand the overhead introduced by the virtual switch.
+
+* Benchmark platform forwarding capability: This is an OPTIONAL test
+ used to verify the platform and measure the base performance (maximum
+ forwarding rate in fps and latency) that can be achieved by the
+ platform without a vSwitch or a VNF. The following diagram outlines
+ the set-up for benchmarking Platform forwarding capability:
+
+ .. code-block:: console
+
+ __
+ +--------------------------------------------------+ |
+ | +------------------------------------------+ | |
+ | | | | |
+ | | l2fw or DPDK L2FWD app | | Host
+ | | | | |
+ | +------------------------------------------+ | |
+ | | NIC | | |
+ +---+------------------------------------------+---+ __|
+ ^ :
+ | |
+ : v
+ +--------------------------------------------------+
+ | |
+ | traffic generator |
+ | |
+ +--------------------------------------------------+
+
+* Benchmark VNF forwarding capability: This test is used to verify
+ the VNF and measure the base performance (maximum forwarding rate in
+ fps and latency) that can be achieved by the VNF without a vSwitch.
+ The performance metrics collected by this test will serve as a key
+ comparison point for NIC passthrough technologies and vSwitches. VNF
+ in this context refers to the hypervisor and the VM. The following
+ diagram outlines the set-up for benchmarking VNF forwarding
+ capability:
+
+ .. code-block:: console
+
+ __
+ +--------------------------------------------------+ |
+ | +------------------------------------------+ | |
+ | | | | |
+ | | VNF | | |
+ | | | | |
+ | +------------------------------------------+ | |
+ | | Passthrough/SR-IOV | | Host
+ | +------------------------------------------+ | |
+ | | NIC | | |
+ +---+------------------------------------------+---+ __|
+ ^ :
+ | |
+ : v
+ +--------------------------------------------------+
+ | |
+ | traffic generator |
+ | |
+ +--------------------------------------------------+
+
+
+**Methodology to benchmark Platform/VNF forwarding capability**
+
+
+The recommended methodology for the platform/VNF validation and
+benchmark is:
+
+- Run the `RFC2889 <https://www.rfc-editor.org/rfc/rfc2889.txt>`__
+  Maximum Forwarding Rate test. This test will produce maximum
+  forwarding rate and latency results that will serve as the
+  expected values. These expected values can be used in
+  subsequent steps or compared with in subsequent validation tests.
+- Transmit bidirectional traffic at line rate/max forwarding rate
+  (whichever is higher) for at least 72 hours, measure throughput (fps)
+  and latency. Note: Traffic should be bidirectional.
+- Establish a baseline forwarding rate for what the platform can
+  achieve.
+- Additional validation: After the test has completed for 72 hours, run
+  bidirectional traffic at the maximum forwarding rate once more to see
+  if the system is still functional and measure throughput (fps) and
+  latency. Compare the newly obtained values with the expected values.
+
+**NOTE 1**: How the Platform is configured for its forwarding capability
+test (BIOS settings, GRUB configuration, runlevel...) is how the
+platform should be configured for every test after this.
+
+**NOTE 2**: How the VNF is configured for its forwarding capability test
+(# of vCPUs, vNICs, Memory, affinitization…) is how it should be
+configured for every test that uses a VNF after this.
+
+**Methodology to benchmark the VNF to vSwitch to VNF deployment scenario**
+
+vsperf has identified the following concerns when benchmarking the VNF to
+vSwitch to VNF deployment scenario:
+
+* The accuracy of the timing synchronization between VNFs/VMs.
+* The clock accuracy of a VNF/VM if they were to be used as traffic generators.
+* VNF traffic generator/receiver may be using resources of the system under
+ test, causing at least three forms of workload to increase as the traffic
+ load increases (generation, switching, receiving).
+
+The recommendation from vsperf is that tests for this scenario must
+include an external HW traffic generator to act as the tester/traffic transmitter
+and receiver. The prescribed methodology to benchmark this deployment scenario with
+an external tester involves the following three steps:
+
+#. Determine the forwarding capability and latency through the virtual
+   interface connected to the VNF/VM.
+
+.. Figure:: vm2vm_virtual_interface_benchmark.png
+
+ Virtual interfaces performance benchmark
+
+#. Determine the forwarding capability and latency through the VNF/hypervisor.
+
+.. Figure:: vm2vm_hypervisor_benchmark.png
+
+ Hypervisor performance benchmark
+
+#. Determine the forwarding capability and latency for the VNF to vSwitch to VNF
+ taking the information from the previous two steps into account.
+
+.. Figure:: vm2vm_benchmark.png
+
+ VNF to vSwitch to VNF performance benchmark
+
+vsperf also identified an alternative configuration for the final step:
+
+.. Figure:: vm2vm_alternative_benchmark.png
+
+ VNF to vSwitch to VNF alternative performance benchmark
+
+.. 3.3.2:
+
+Environment/infrastructure
+============================
+VSPERF CI jobs are run using the OPNFV lab infrastructure as described by the
+`Pharos Project <https://www.opnfv.org/community/projects/pharos>`_.
+A VSPERF POD is described here: https://wiki.opnfv.org/display/pharos/VSPERF+in+Intel+Pharos+Lab+-+Pod+12
+
+vsperf CI
+---------
+vsperf CI jobs are broken down into:
+
+ * Daily job:
+
+ * Runs everyday takes about 10 hours to complete.
+ * TESTCASES_DAILY='phy2phy_tput back2back phy2phy_tput_mod_vlan
+ phy2phy_scalability pvp_tput pvp_back2back pvvp_tput pvvp_back2back'.
+ * TESTPARAM_DAILY='--test-params TRAFFICGEN_PKT_SIZES=(64,128,512,1024,1518)'.
+
+ * Merge job:
+
+ * Runs whenever patches are merged to master.
+ * Runs a basic Sanity test.
+
+ * Verify job:
+
+ * Runs every time a patch is pushed to gerrit.
+ * Builds documentation.
+
+Scripts:
+--------
+There are 2 scripts that are part of VSPERF's CI:
+
+  * build-vsperf.sh: Lives in the VSPERF repository in the ci/ directory and is
+    used to run vsperf with the appropriate CLI parameters.
+  * vswitchperf.yml: YAML description of our Jenkins job. It lives in the RELENG
+    repository.
+
+More info on vsperf CI can be found here:
+https://wiki.opnfv.org/display/vsperf/VSPERF+CI
+
+.. 3.3.3:
+
+Responsibilities and authority
+===============================
+The group responsible for managing, designing, preparing and executing the
+tests listed in the LTD is the vsperf committers and contributors. The vsperf
+committers and contributors should work with the relevant OPNFV projects to
+ensure that the infrastructure is in place for testing vswitches, and that the
+results are published to a common end point (a results database).
+
diff --git a/docs/testing/developer/results/results.rst b/docs/testing/developer/results/results.rst
new file mode 100644
index 00000000..df9c52cb
--- /dev/null
+++ b/docs/testing/developer/results/results.rst
@@ -0,0 +1,38 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, Intel Corporation, AT&T and others.
+
+OPNFV Test Results
+=========================
+VSPERF CI jobs are run daily and sample results can be found at
+https://wiki.opnfv.org/display/vsperf/Vsperf+Results
+
+The following example maps the results in the test dashboard to the appropriate
+test case in the VSPERF Framework and specifies the metric the vertical/Y axis
+is plotting. **Please note**, the presence of dpdk within a test name signifies
+that the vswitch under test was OVS with DPDK, while its absence indicates that
+the vswitch under test was stock OVS.
+
+===================== ===================== ================== ===============
+ Dashboard Test Framework Test Metric Guest Interface
+===================== ===================== ================== ===============
+tput_ovsdpdk phy2phy_tput Throughput (FPS) N/A
+tput_ovs phy2phy_tput Throughput (FPS) N/A
+b2b_ovsdpdk back2back Back-to-back value N/A
+b2b_ovs back2back Back-to-back value N/A
+tput_mod_vlan_ovs phy2phy_tput_mod_vlan Throughput (FPS) N/A
+tput_mod_vlan_ovsdpdk phy2phy_tput_mod_vlan Throughput (FPS) N/A
+scalability_ovs phy2phy_scalability Throughput (FPS) N/A
+scalability_ovsdpdk phy2phy_scalability Throughput (FPS) N/A
+pvp_tput_ovsdpdkuser pvp_tput Throughput (FPS) vhost-user
+pvp_tput_ovsvirtio pvp_tput Throughput (FPS) virtio-net
+pvp_b2b_ovsdpdkuser pvp_back2back Back-to-back value vhost-user
+pvp_b2b_ovsvirtio pvp_back2back Back-to-back value virtio-net
+pvvp_tput_ovsdpdkuser pvvp_tput Throughput (FPS) vhost-user
+pvvp_tput_ovsvirtio pvvp_tput Throughput (FPS) virtio-net
+pvvp_b2b_ovsdpdkuser pvvp_back2back Throughput (FPS) vhost-user
+pvvp_b2b_ovsvirtio pvvp_back2back Throughput (FPS) virtio-net
+===================== ===================== ================== ===============
+
+The loopback application used in the VNF for the PVP and PVVP scenarios was DPDK
+testpmd.
diff --git a/docs/testing/developer/results/scenario.rst b/docs/testing/developer/results/scenario.rst
new file mode 100644
index 00000000..a57d6a03
--- /dev/null
+++ b/docs/testing/developer/results/scenario.rst
@@ -0,0 +1,47 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, Intel Corporation, AT&T and others.
+
+VSPERF Test Scenarios
+=====================
+
+Predefined Tests run with CI:
+
+===================== ===========================================================
+ Test Definition
+===================== ===========================================================
+phy2phy_tput :ref:`PacketLossRatio <PacketLossRatio>` for :ref:`Phy2Phy <Phy2Phy>`
+back2back :ref:`BackToBackFrames <BackToBackFrames>` for :ref:`Phy2Phy <Phy2Phy>`
+phy2phy_tput_mod_vlan :ref:`PacketLossRatioFrameModification <PacketLossRatioFrameModification>` for :ref:`Phy2Phy <Phy2Phy>`
+phy2phy_cont :ref:`Phy2Phy <Phy2Phy>` blast vswitch at x% TX rate and measure throughput
+pvp_cont :ref:`PVP <PVP>` blast vswitch at x% TX rate and measure throughput
+pvvp_cont :ref:`PVVP <PVVP>` blast vswitch at x% TX rate and measure throughput
+phy2phy_scalability :ref:`Scalability0PacketLoss <Scalability0PacketLoss>` for :ref:`Phy2Phy <Phy2Phy>`
+pvp_tput :ref:`PacketLossRatio <PacketLossRatio>` for :ref:`PVP <PVP>`
+pvp_back2back :ref:`BackToBackFrames <BackToBackFrames>` for :ref:`PVP <PVP>`
+pvvp_tput :ref:`PacketLossRatio <PacketLossRatio>` for :ref:`PVVP <PVVP>`
+pvvp_back2back :ref:`BackToBackFrames <BackToBackFrames>` for :ref:`PVVP <PVVP>`
+phy2phy_cpu_load :ref:`CPU0PacketLoss <CPU0PacketLoss>` for :ref:`Phy2Phy <Phy2Phy>`
+phy2phy_mem_load Same as :ref:`CPU0PacketLoss <CPU0PacketLoss>` but using a memory intensive app
+===================== ===========================================================
+
+Deployment topologies:
+
+* :ref:`Phy2Phy <Phy2Phy>`: Physical port -> vSwitch -> Physical port.
+* :ref:`PVP <PVP>`: Physical port -> vSwitch -> VNF -> vSwitch -> Physical port.
+* :ref:`PVVP <PVVP>`: Physical port -> vSwitch -> VNF -> vSwitch -> VNF -> vSwitch ->
+ Physical port.
+
+Loopback applications in the Guest:
+
+* `DPDK testpmd <http://dpdk.org/doc/guides/testpmd_app_ug/index.html>`_.
+* Linux Bridge.
+* :ref:`l2fwd-module`
+
+Supported traffic generators:
+
+* Spirent Testcenter
+* Ixia: IxOS and IxNet.
+* Xena
+* MoonGen
+* Dummy
diff --git a/docs/testing/user/configguide/LICENSE b/docs/testing/user/configguide/LICENSE
new file mode 100644
index 00000000..7bc572ce
--- /dev/null
+++ b/docs/testing/user/configguide/LICENSE
@@ -0,0 +1,2 @@
+This work is licensed under a Creative Commons Attribution 4.0 International License.
+http://creativecommons.org/licenses/by/4.0
diff --git a/docs/testing/user/configguide/TCLServerProperties.png b/docs/testing/user/configguide/TCLServerProperties.png
new file mode 100644
index 00000000..682de7c5
--- /dev/null
+++ b/docs/testing/user/configguide/TCLServerProperties.png
Binary files differ
diff --git a/docs/testing/user/configguide/index.rst b/docs/testing/user/configguide/index.rst
new file mode 100644
index 00000000..4e082261
--- /dev/null
+++ b/docs/testing/user/configguide/index.rst
@@ -0,0 +1,51 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, Intel Corporation, AT&T, Red Hat, Spirent, Ixia and others.
+
+.. OPNFV VSPERF Documentation master file.
+
+======
+VSPERF
+======
+
+VSPERF is an OPNFV testing project.
+
+VSPERF provides an automated test-framework and comprehensive test suite based on
+industry standards for measuring data-plane performance of Telco NFV switching
+technologies as well as physical and virtual network interfaces (NFVI). The VSPERF
+architecture is switch and traffic generator agnostic and provides full control of
+software component versions and configurations as well as test-case customization.
+
+The Danube release of VSPERF includes improvements in documentation and capabilities.
+This includes additional test-cases such as RFC 5481 Latency test and RFC-2889
+address-learning-rate test. Hardware traffic generator support is now provided for
+Spirent and Xena in addition to Ixia. The Moongen software traffic generator is also
+now fully supported. VSPERF can be used in a variety of modes for configuration and
+setup of the network and/or for control of the test-generator and test execution.
+
+* Wiki: https://wiki.opnfv.org/characterize_vswitch_performance_for_telco_nfv_use_cases
+* Repository: https://git.opnfv.org/vswitchperf
+* Artifacts: https://artifacts.opnfv.org/vswitchperf.html
+* Continuous Integration status: https://build.opnfv.org/ci/view/vswitchperf/
+
+******************************
+VSPERF User Guide
+******************************
+
+.. toctree::
+ :caption: VSPERF User Guide
+ :maxdepth: 5
+ :numbered: 5
+
+ ./installation.rst
+ ./upgrade.rst
+ ./trafficgen.rst
+
+ ../userguide/testusage.rst
+ ../userguide/teststeps.rst
+ ../userguide/integration.rst
+ ../userguide/yardstick.rst
+
+Indices
+=======
+* :ref:`search`
diff --git a/docs/testing/user/configguide/installation.rst b/docs/testing/user/configguide/installation.rst
new file mode 100644
index 00000000..1965a8f5
--- /dev/null
+++ b/docs/testing/user/configguide/installation.rst
@@ -0,0 +1,310 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, Intel Corporation, AT&T and others.
+
+.. _vsperf-installation:
+
+======================
+Installing vswitchperf
+======================
+
+Downloading vswitchperf
+-----------------------
+
+Vswitchperf can be downloaded from its official git repository, which is
+hosted by OPNFV. It is necessary to install ``git`` on your DUT before downloading
+vswitchperf. Installation of ``git`` is specific to the packaging system used by
+the Linux OS installed on the DUT.
+
+Example installation of the ``git`` package and its dependencies:
+
+* in case of OS based on RedHat Linux:
+
+ .. code:: bash
+
+ sudo yum install git
+
+
+* in case of Ubuntu or Debian:
+
+ .. code:: bash
+
+ sudo apt-get install git
+
+After ``git`` is successfully installed on the DUT, vswitchperf can be downloaded
+as follows:
+
+.. code:: bash
+
+ git clone http://git.opnfv.org/vswitchperf
+
+The last command will create a directory ``vswitchperf`` with a local copy of the
+vswitchperf repository.
+
+Supported Operating Systems
+---------------------------
+
+* CentOS 7.3
+* Fedora 24 (kernel 4.8 requires DPDK 16.11 and newer)
+* Fedora 25 (kernel 4.9 requires DPDK 16.11 and newer)
+* openSUSE 42.2
+* RedHat 7.2 Enterprise Linux
+* RedHat 7.3 Enterprise Linux
+* Ubuntu 14.04
+* Ubuntu 16.04
+* Ubuntu 16.10 (kernel 4.8 requires DPDK 16.11 and newer)
+
+Supported vSwitches
+-------------------
+
+The vSwitch must support OpenFlow 1.3 or greater.
+
+* Open vSwitch
+* Open vSwitch with DPDK support
+* TestPMD application from DPDK (supports p2p and pvp scenarios)
+
+Supported Hypervisors
+---------------------
+
+* Qemu version 2.3 or greater (version 2.5.0 is recommended)
+
+Supported VNFs
+--------------
+
+In theory, it is possible to use any VNF image which is compatible
+with a supported hypervisor. However, such a VNF must ensure that an appropriate
+number of network interfaces is configured and that traffic is properly
+forwarded among them. New vswitchperf users are recommended to start
+with the official vloop-vnf_ image, which is maintained by the vswitchperf community.
+
+.. _vloop-vnf:
+
+vloop-vnf
+=========
+
+The official VM image is called vloop-vnf and it is available for free download
+from the OPNFV artifactory. This image is based on the Ubuntu Linux distribution and
+supports the following applications for traffic forwarding:
+
+* DPDK testpmd
+* Linux Bridge
+* Custom l2fwd module
+
+The vloop-vnf image can be downloaded to the DUT, for example by ``wget``:
+
+ .. code:: bash
+
+ wget http://artifacts.opnfv.org/vswitchperf/vnf/vloop-vnf-ubuntu-14.04_20160823.qcow2
+
+**NOTE:** In case ``wget`` is not installed on your DUT, you can install it on an RPM
+based system by ``sudo yum install wget`` or on a DEB based system by ``sudo apt-get install
+wget``.
+
+Changelog of vloop-vnf:
+
+ * `vloop-vnf-ubuntu-14.04_20160823`_
+
+ * ethtool installed
+ * only 1 NIC is configured by default to speed up boot with 1 NIC setup
+ * security updates applied
+
+ * `vloop-vnf-ubuntu-14.04_20160804`_
+
+ * Linux kernel 4.4.0 installed
+ * libnuma-dev installed
+ * security updates applied
+
+ * `vloop-vnf-ubuntu-14.04_20160303`_
+
+ * snmpd service is disabled by default to avoid error messages during VM boot
+ * security updates applied
+
+ * `vloop-vnf-ubuntu-14.04_20151216`_
+
+ * version with development tools required for build of DPDK and l2fwd
+
+.. _vsperf-installation-script:
+
+Installation
+------------
+
+The test suite requires Python 3.3 or newer and relies on a number of other
+system and python packages. These need to be installed for the test suite
+to function.
+
+Installation of the required packages, preparation of the Python 3 virtual
+environment and compilation of OVS, DPDK and QEMU are performed by the
+script **systems/build_base_machine.sh**. It should be executed under the
+user account which will be used for vsperf execution.
+
+**NOTE:** Password-less sudo access must be configured for the given
+user account before the script is executed.
+
+.. code:: bash
+
+ $ cd systems
+ $ ./build_base_machine.sh
+
+**NOTE:** you don't need to go into any of the systems subdirectories,
+simply run the top level **build_base_machine.sh**, your OS will be detected
+automatically.
+
+The script **build_base_machine.sh** will install all the vsperf dependencies
+in terms of system packages, Python 3.x and required Python modules.
+In case of CentOS 7 or RHEL it will install Python 3.3 from an additional
+repository provided by Software Collections (`a link`_). The installation script
+will also use `virtualenv`_ to create a vsperf virtual environment, which is
+isolated from the default Python environment. This environment will reside in a
+directory called **vsperfenv** in $HOME. It ensures that the system wide Python
+installation is not modified or broken by the VSPERF installation. The complete list
+of Python packages installed inside the virtualenv can be found in the file
+``requirements.txt``, which is located in the vswitchperf repository.
+
+**NOTE:** For RHEL 7.3 Enterprise and CentOS 7.3 OVS Vanilla is not
+built from upstream source due to kernel incompatibilities. Please see the
+instructions in the vswitchperf_design document for details on configuring
+OVS Vanilla for binary package usage.
+
+.. _vpp-installation:
+
+VPP installation
+================
+
+Currently vswitchperf installation scripts do not support automatic build
+of VPP. In order to execute tests with VPP, it is required to install it
+manually. Please refer to the official documentation of `fd.io`_ project to
+install VPP from `packages`_ or from the `sources`_.
+
+See details about :ref:`vpp-test`.
+
+.. _fd.io: https://fd.io/
+.. _packages: https://wiki.fd.io/view/VPP/Installing_VPP_binaries_from_packages
+.. _sources: https://wiki.fd.io/view/VPP/Build,_install,_and_test_images
+
+Using vswitchperf
+-----------------
+
+You will need to activate the virtual environment every time you start a
+new shell session. Its activation is specific to your OS:
+
+* CentOS 7 and RHEL
+
+ .. code:: bash
+
+ $ scl enable python33 bash
+ $ source $HOME/vsperfenv/bin/activate
+
+* Fedora and Ubuntu
+
+ .. code:: bash
+
+ $ source $HOME/vsperfenv/bin/activate
+
+After the virtual environment is configured, VSPERF can be used.
+For example:
+
+ .. code:: bash
+
+ (vsperfenv) $ cd vswitchperf
+ (vsperfenv) $ ./vsperf --help
+
+Gotcha
+======
+
+In case you see the following error during environment activation:
+
+.. code:: bash
+
+ $ source $HOME/vsperfenv/bin/activate
+ Badly placed ()'s.
+
+then check what type of shell you are using:
+
+.. code:: bash
+
+ $ echo $SHELL
+ /bin/tcsh
+
+See what scripts are available in $HOME/vsperfenv/bin
+
+.. code:: bash
+
+ $ ls $HOME/vsperfenv/bin/
+ activate activate.csh activate.fish activate_this.py
+
+source the appropriate script
+
+.. code:: bash
+
+ $ source bin/activate.csh
+
+Working Behind a Proxy
+======================
+
+If you're behind a proxy, you'll likely want to configure this before
+running any of the above. For example:
+
+ .. code:: bash
+
+ export http_proxy=proxy.mycompany.com:123
+ export https_proxy=proxy.mycompany.com:123
+
+.. _a link: http://www.softwarecollections.org/en/scls/rhscl/python33/
+.. _virtualenv: https://virtualenv.readthedocs.org/en/latest/
+.. _vloop-vnf-ubuntu-14.04_20160823: http://artifacts.opnfv.org/vswitchperf/vnf/vloop-vnf-ubuntu-14.04_20160823.qcow2
+.. _vloop-vnf-ubuntu-14.04_20160804: http://artifacts.opnfv.org/vswitchperf/vnf/vloop-vnf-ubuntu-14.04_20160804.qcow2
+.. _vloop-vnf-ubuntu-14.04_20160303: http://artifacts.opnfv.org/vswitchperf/vnf/vloop-vnf-ubuntu-14.04_20160303.qcow2
+.. _vloop-vnf-ubuntu-14.04_20151216: http://artifacts.opnfv.org/vswitchperf/vnf/vloop-vnf-ubuntu-14.04_20151216.qcow2
+
+Hugepage Configuration
+----------------------
+
+Systems running vsperf with DPDK and/or with guest-based tests must configure
+enough hugepages to support these configurations. It is recommended
+to configure 1GB hugepages as the pagesize.
+
+The amount of hugepages needed depends on your configuration files in vsperf.
+Each guest image requires 2048 MB by default according to the default settings
+in the ``04_vnf.conf`` file.
+
+.. code:: bash
+
+ GUEST_MEMORY = ['2048']
+
+The dpdk startup parameters also require an amount of hugepages depending on
+your configuration in the ``02_vswitch.conf`` file.
+
+.. code:: bash
+
+ DPDK_SOCKET_MEM = ['1024', '0']
+
+**NOTE:** The ``DPDK_SOCKET_MEM`` option is used by all vSwitches with DPDK
+support, i.e. Open vSwitch, VPP and TestPMD.
+
+VSPerf will verify that enough free hugepages are available before executing a
+test environment. If not enough free hugepages are available, test
+initialization will fail and testing will stop.
+
+**NOTE:** In some instances, after a test failure, dpdk may not release the
+hugepages used in its configuration. It is recommended to configure a
+few extra hugepages to prevent a false detection by VSPerf that not enough free
+hugepages are available to execute the test environment. Normally dpdk would reuse
+previously allocated hugepages upon initialization.
+
+Depending on your OS selection, the configuration of hugepages may vary. Please
+refer to your OS documentation to set hugepages correctly. It is recommended to
+set the required amount of hugepages to be allocated by default on reboots.
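+
+For example, on many Linux distributions persistent 1GB hugepages can be
+allocated at boot time via kernel command line parameters similar to the ones
+below. This is only an illustrative sketch; the number of pages and the way
+the boot loader configuration is updated depend on your distribution and on
+the memory requirements of your tests:
+
+.. code:: bash
+
+    default_hugepagesz=1G hugepagesz=1G hugepages=8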
+
+Information on hugepage requirements for dpdk can be found at
+http://dpdk.org/doc/guides/linux_gsg/sys_reqs.html
+
+You can review your hugepage amounts by executing the following command:
+
+.. code:: bash
+
+ cat /proc/meminfo | grep Huge
+
+If no hugepages are available, vsperf will try to allocate some automatically.
+Allocation is controlled by the ``HUGEPAGE_RAM_ALLOCATION`` configuration parameter
+in the ``02_vswitch.conf`` file. The default is 2GB, resulting in either two 1GB
+hugepages or 1024 2MB hugepages.
diff --git a/docs/testing/user/configguide/trafficgen.rst b/docs/testing/user/configguide/trafficgen.rst
new file mode 100644
index 00000000..4e42b2be
--- /dev/null
+++ b/docs/testing/user/configguide/trafficgen.rst
@@ -0,0 +1,671 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, Intel Corporation, AT&T and others.
+
+.. _trafficgen-installation:
+
+===========================
+'vsperf' Traffic Gen Guide
+===========================
+
+Overview
+--------
+
+VSPERF supports the following traffic generators:
+
+ * Dummy_ (DEFAULT)
+ * Ixia_
+ * `Spirent TestCenter`_
+ * `Xena Networks`_
+ * MoonGen_
+
+To see the list of traffic gens from the cli:
+
+.. code-block:: console
+
+ $ ./vsperf --list-trafficgens
+
+This guide provides the details of how to install
+and configure the various traffic generators.
+
+Background Information
+----------------------
+The traffic default configuration can be found in **conf/03_traffic.conf**,
+and is configured as follows:
+
+.. code-block:: console
+
+ TRAFFIC = {
+ 'traffic_type' : 'rfc2544_throughput',
+ 'frame_rate' : 100,
+ 'bidir' : 'True', # will be passed as string in title format to tgen
+ 'multistream' : 0,
+ 'stream_type' : 'L4',
+ 'pre_installed_flows' : 'No', # used by vswitch implementation
+ 'flow_type' : 'port', # used by vswitch implementation
+
+ 'l2': {
+ 'framesize': 64,
+ 'srcmac': '00:00:00:00:00:00',
+ 'dstmac': '00:00:00:00:00:00',
+ },
+ 'l3': {
+ 'proto': 'udp',
+ 'srcip': '1.1.1.1',
+ 'dstip': '90.90.90.90',
+ },
+ 'l4': {
+ 'srcport': 3000,
+ 'dstport': 3001,
+ },
+ 'vlan': {
+ 'enabled': False,
+ 'id': 0,
+ 'priority': 0,
+ 'cfi': 0,
+ },
+ }
+
+The framesize parameter can be overridden from the configuration
+files by adding the following to your custom configuration file
+``10_custom.conf``:
+
+.. code-block:: console
+
+ TRAFFICGEN_PKT_SIZES = (64, 128,)
+
+OR from the commandline:
+
+.. code-block:: console
+
+ $ ./vsperf --test-params "TRAFFICGEN_PKT_SIZES=(x,y)" $TESTNAME
+
+You can also modify the traffic transmission duration and the number
+of tests run by the traffic generator by extending the example
+commandline above to:
+
+.. code-block:: console
+
+ $ ./vsperf --test-params "TRAFFICGEN_PKT_SIZES=(x,y);TRAFFICGEN_DURATION=10;" \
+ "TRAFFICGEN_RFC2544_TESTS=1" $TESTNAME
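+
+Both values can also be set permanently in your custom configuration file,
+e.g. ``10_custom.conf``. The values below are illustrative only:
+
+.. code-block:: console
+
+    TRAFFICGEN_DURATION = 30
+    TRAFFICGEN_RFC2544_TESTS = 3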
+
+.. _trafficgen-dummy:
+
+Dummy
+-----
+
+The Dummy traffic generator can be used to test the VSPERF installation or
+to demonstrate VSPERF functionality at the DUT without a connection
+to a real traffic generator.
+
+You can also use the Dummy generator in case your external
+traffic generator is not supported by VSPERF. In that case you can
+use VSPERF to set up your test scenario and then transmit the traffic yourself.
+After the transmission is completed you can enter values for all
+collected metrics and VSPERF will use them to generate the final reports.
+
+Setup
+~~~~~
+
+To select the Dummy generator please add the following to your
+custom configuration file ``10_custom.conf``.
+
+.. code-block:: console
+
+ TRAFFICGEN = 'Dummy'
+
+OR run ``vsperf`` with the ``--trafficgen`` argument
+
+.. code-block:: console
+
+ $ ./vsperf --trafficgen Dummy $TESTNAME
+
+Where $TESTNAME is the name of the vsperf test you would like to run.
+This will set up the vSwitch and the VNF (if one is part of your test),
+print the traffic configuration and prompt you to transmit traffic
+when the setup is complete.
+
+.. code-block:: console
+
+ Please send 'continuous' traffic with the following stream config:
+ 30mS, 90mpps, multistream False
+ and the following flow config:
+ {
+ "flow_type": "port",
+ "l3": {
+ "srcip": "1.1.1.1",
+ "proto": "tcp",
+ "dstip": "90.90.90.90"
+ },
+ "traffic_type": "rfc2544_continuous",
+ "multistream": 0,
+ "bidir": "True",
+ "vlan": {
+ "cfi": 0,
+ "priority": 0,
+ "id": 0,
+ "enabled": false
+ },
+ "frame_rate": 90,
+ "l2": {
+ "dstport": 3001,
+ "srcport": 3000,
+ "dstmac": "00:00:00:00:00:00",
+ "srcmac": "00:00:00:00:00:00",
+ "framesize": 64
+ }
+ }
+ What was the result for 'frames tx'?
+
+When your traffic generator has completed traffic transmission and provided
+the results, please input these at the VSPERF prompt. VSPERF will ask you
+to confirm the input:
+
+.. code-block:: console
+
+ Is '$input_value' correct?
+
+Please answer with y OR n.
+
+VSPERF will ask you to provide a value for every collected metric. The list
+of metrics can be found at traffic-type-metrics_.
+Finally vsperf will print out the results for your test and generate the
+appropriate logs and report files.
+
+.. _traffic-type-metrics:
+
+Metrics collected for supported traffic types
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Below is a list of metrics collected by VSPERF for each of the supported
+traffic types.
+
+RFC2544 Throughput and Continuous:
+
+ * frames tx
+ * frames rx
+ * min latency
+ * max latency
+ * avg latency
+ * frameloss
+
+RFC2544 Back2back:
+
+ * b2b frames
+ * b2b frame loss %
+
+Dummy result pre-configuration
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+In case of the Dummy traffic generator it is possible to pre-configure the test
+results. This is useful for the creation of demo testcases, which do not require
+a real traffic generator. Such a testcase can be run by any user and it will still
+generate all reports and result files.
+
+Result values can be specified within the ``TRAFFICGEN_DUMMY_RESULTS`` dictionary,
+where every collected metric must be properly defined. Please check the list
+of traffic-type-metrics_.
+
+Dictionary with dummy results can be passed by CLI argument ``--test-params``
+or specified in ``Parameters`` section of testcase definition.
+
+Example of testcase execution with dummy results defined by CLI argument:
+
+.. code-block:: console
+
+ $ ./vsperf back2back --trafficgen Dummy --test-params \
+ "TRAFFICGEN_DUMMY_RESULTS={'b2b frames':'3000','b2b frame loss %':'0.0'}"
+
+Example of testcase definition with pre-configured dummy results:
+
+.. code-block:: python
+
+ {
+ "Name": "back2back",
+ "Traffic Type": "rfc2544_back2back",
+ "Deployment": "p2p",
+ "biDirectional": "True",
+ "Description": "LTD.Throughput.RFC2544.BackToBackFrames",
+ "Parameters" : {
+ 'TRAFFICGEN_DUMMY_RESULTS' : {'b2b frames':'3000','b2b frame loss %':'0.0'}
+ },
+ },
+
+**NOTE:** Pre-configured results will be used only when the Dummy traffic
+generator is selected. Otherwise the option
+``TRAFFICGEN_DUMMY_RESULTS`` will be ignored.
+
+.. _Ixia:
+
+Ixia
+----
+
+VSPERF can use both IxNetwork and IxExplorer TCL servers to control an Ixia chassis.
+However, usage of the IxNetwork TCL server is the preferred option. The following
+sections describe the installation and configuration of the IxNetwork components
+used by VSPERF.
+
+Installation
+~~~~~~~~~~~~
+
+On the system under test you need to install IxNetworkTclClient$(VER\_NUM)Linux.bin.tgz.
+
+On the IXIA client software system you need to install IxNetwork TCL server. After its
+installation you should configure it as follows:
+
+ 1. Find the IxNetwork TCL server app (start -> All Programs -> IXIA ->
+ IxNetwork -> IxNetwork\_$(VER\_NUM) -> IxNetwork TCL Server)
+ 2. Right click on IxNetwork TCL Server, select properties - Under shortcut tab in
+ the Target dialogue box make sure there is the argument "-tclport xxxx"
+ where xxxx is your port number (take note of this port number as you will
+ need it for the 10\_custom.conf file).
+
+ .. image:: TCLServerProperties.png
+
+ 3. Hit Ok and start the TCL server application
+
+VSPERF configuration
+~~~~~~~~~~~~~~~~~~~~
+
+There are several configuration options specific to the IxNetwork traffic generator
+from IXIA. It is essential to set them correctly before VSPERF is executed
+for the first time.
+
+A detailed description of the options follows, together with an illustrative
+example configuration after the list:
+
+ * ``TRAFFICGEN_IXNET_MACHINE`` - IP address of server, where IxNetwork TCL Server is running
+ * ``TRAFFICGEN_IXNET_PORT`` - PORT, where IxNetwork TCL Server is accepting connections from
+ TCL clients
+ * ``TRAFFICGEN_IXNET_USER`` - username, which will be used during communication with IxNetwork
+ TCL Server and IXIA chassis
+ * ``TRAFFICGEN_IXIA_HOST`` - IP address of IXIA traffic generator chassis
+ * ``TRAFFICGEN_IXIA_CARD`` - identification of card with dedicated ports at IXIA chassis
+ * ``TRAFFICGEN_IXIA_PORT1`` - identification of the first dedicated port at ``TRAFFICGEN_IXIA_CARD``
+ at IXIA chassis; VSPERF uses two separated ports for traffic generation. In case of
+ unidirectional traffic, it is essential to correctly connect 1st IXIA port to the 1st NIC
+ at DUT, i.e. to the first PCI handle from ``WHITELIST_NICS`` list. Otherwise traffic may not
+ be able to pass through the vSwitch.
+ * ``TRAFFICGEN_IXIA_PORT2`` - identification of the second dedicated port at ``TRAFFICGEN_IXIA_CARD``
+ at IXIA chassis; VSPERF uses two separated ports for traffic generation. In case of
+ unidirectional traffic, it is essential to correctly connect 2nd IXIA port to the 2nd NIC
+ at DUT, i.e. to the second PCI handle from ``WHITELIST_NICS`` list. Otherwise traffic may not
+ be able to pass through the vSwitch.
+ * ``TRAFFICGEN_IXNET_LIB_PATH`` - path to the DUT specific installation of IxNetwork TCL API
+ * ``TRAFFICGEN_IXNET_TCL_SCRIPT`` - name of the TCL script, which VSPERF will use for
+ communication with IXIA TCL server
+ * ``TRAFFICGEN_IXNET_TESTER_RESULT_DIR`` - folder accessible from IxNetwork TCL server,
+ where test results are stored, e.g. ``c:/ixia_results``; see test-results-share_
+ * ``TRAFFICGEN_IXNET_DUT_RESULT_DIR`` - directory accessible from the DUT, where test
+ results from IxNetwork TCL server are stored, e.g. ``/mnt/ixia_results``; see
+ test-results-share_
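+
+An example of the IXIA related part of a custom configuration file follows.
+All values below are illustrative only and must be replaced by values valid
+for your environment; the traffic generator name can be verified via
+``./vsperf --list-trafficgens``:
+
+.. code-block:: python
+
+    TRAFFICGEN = 'IxNet'
+    TRAFFICGEN_IXNET_MACHINE = '10.10.10.10'   # host running IxNetwork TCL Server
+    TRAFFICGEN_IXNET_PORT = '8009'             # the -tclport value configured above
+    TRAFFICGEN_IXNET_USER = 'vsperf_user'
+    TRAFFICGEN_IXIA_HOST = '10.10.10.20'       # IXIA chassis
+    TRAFFICGEN_IXIA_CARD = '1'
+    TRAFFICGEN_IXIA_PORT1 = '1'
+    TRAFFICGEN_IXIA_PORT2 = '2'
+    TRAFFICGEN_IXNET_TESTER_RESULT_DIR = 'c:/ixia_results'
+    TRAFFICGEN_IXNET_DUT_RESULT_DIR = '/mnt/ixia_results'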
+
+.. _test-results-share:
+
+Test results share
+~~~~~~~~~~~~~~~~~~
+
+VSPERF is not able to retrieve test results via the TCL API directly. Instead, all test
+results are stored on the IxNetwork TCL server, in the folder defined by the
+``TRAFFICGEN_IXNET_TESTER_RESULT_DIR`` configuration parameter. The content of this
+folder must be shared (e.g. via the samba protocol) between the TCL Server and the DUT,
+where VSPERF is executed. VSPERF expects that test results will be available in the
+directory configured by the ``TRAFFICGEN_IXNET_DUT_RESULT_DIR`` configuration parameter.
+
+Example of sharing configuration:
+
+ * Create a new folder at IxNetwork TCL server machine, e.g. ``c:\ixia_results``
+ * Modify sharing options of ``ixia_results`` folder to share it with everybody
+ * Create a new directory at DUT, where shared directory with results
+ will be mounted, e.g. ``/mnt/ixia_results``
+ * Update your custom VSPERF configuration file as follows:
+
+ .. code-block:: python
+
+ TRAFFICGEN_IXNET_TESTER_RESULT_DIR = 'c:/ixia_results'
+ TRAFFICGEN_IXNET_DUT_RESULT_DIR = '/mnt/ixia_results'
+
+ **NOTE:** It is essential to use slashes '/' also in path
+ configured by ``TRAFFICGEN_IXNET_TESTER_RESULT_DIR`` parameter.
+ * Install cifs-utils package.
+
+ e.g. at rpm based Linux distribution:
+
+ .. code-block:: console
+
+ yum install cifs-utils
+
+ * Mount shared directory, so VSPERF can access test results.
+
+    e.g. by executing the following ``mount`` command manually:
+
+ .. code-block:: console
+
+        mount -t cifs //_TCL_SERVER_IP_OR_FQDN_/ixia_results /mnt/ixia_results \
+              -o file_mode=0777,dir_mode=0777,nounix
+
+It is recommended to verify that any new file created in the ``c:/ixia_results`` folder
+is visible on the DUT inside the ``/mnt/ixia_results`` directory.
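+
+If a persistent mount is preferred, an equivalent ``/etc/fstab`` record might look
+roughly as follows. This is an illustrative sketch only; adjust the share name,
+mount point and mount options to your environment:
+
+.. code-block:: console
+
+    //_TCL_SERVER_IP_OR_FQDN_/ixia_results /mnt/ixia_results cifs file_mode=0777,dir_mode=0777,nounix 0 0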
+
+.. _`Spirent TestCenter`:
+
+Spirent Setup
+-------------
+
+Spirent installation files and instructions are available on the
+Spirent support website at:
+
+http://support.spirent.com
+
+Select a version of Spirent TestCenter software to utilize. This guide
+uses Spirent TestCenter v4.57 as an example. Substitute the appropriate
+version in place of 'v4.57' in the examples below.
+
+On the CentOS 7 System
+~~~~~~~~~~~~~~~~~~~~~~
+
+Download and install the following:
+
+Spirent TestCenter Application, v4.57 for 64-bit Linux Client
+
+Spirent Virtual Deployment Service (VDS)
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Spirent VDS is required for both TestCenter hardware and virtual
+chassis in the vsperf environment. For installation, select the version
+that matches the Spirent TestCenter Application version. For v4.57,
+the matching VDS version is 1.0.55. Download either the ova (VMware)
+or qcow2 (QEMU) image and create a VM with it. Initialize the VM
+according to Spirent installation instructions.
+
+Using Spirent TestCenter Virtual (STCv)
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+STCv is available in both ova (VMware) and qcow2 (QEMU) formats. For
+VMware, download:
+
+Spirent TestCenter Virtual Machine for VMware, v4.57 for Hypervisor - VMware ESX.ESXi
+
+Virtual test port performance is affected by the hypervisor configuration. For
+best practice results in deploying STCv, the following is suggested:
+
+- Create a single VM with two test ports rather than two VMs with one port each
+- Set STCv in DPDK mode
+- Give STCv 2*n + 1 cores, where n = the number of ports. For vsperf, cores = 5.
+- Turning off hyperthreading and pinning these cores will improve performance
+- Give STCv 2 GB of RAM
+
+To get the highest performance and accuracy, Spirent TestCenter hardware is
+recommended. vsperf can run with either type of test port.
+
+Using STC REST Client
+~~~~~~~~~~~~~~~~~~~~~
+The stcrestclient package provides the stchttp.py ReST API wrapper module.
+This allows simple function calls, nearly identical to those provided by
+StcPython.py, to be used to access TestCenter server sessions via the
+STC ReST API. Basic ReST functionality is provided by the resthttp module,
+and may be used for writing ReST clients independent of STC.
+
+- Project page: <https://github.com/Spirent/py-stcrestclient>
+- Package download: <http://pypi.python.org/pypi/stcrestclient>
+
+To use the REST interface, follow the instructions on the project page to
+install the package. Once installed, the scripts whose names contain the 'rest'
+keyword can be used. For example, testcenter-rfc2544-rest.py can be used to run
+RFC 2544 tests using the REST interface.
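+
+For instance, inside the vsperf virtual environment the package can typically be
+installed from PyPI (assuming ``pip`` has access to the package index; see the
+project page for the authoritative installation steps):
+
+.. code-block:: console
+
+    (vsperfenv) $ pip install stcrestclient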
+
+Configuration:
+~~~~~~~~~~~~~~
+
+1. The Labserver and license server addresses. These parameters apply to
+   all tests and are mandatory.
+
+.. code-block:: console
+
+ TRAFFICGEN_STC_LAB_SERVER_ADDR = " "
+ TRAFFICGEN_STC_LICENSE_SERVER_ADDR = " "
+ TRAFFICGEN_STC_PYTHON2_PATH = " "
+ TRAFFICGEN_STC_TESTCENTER_PATH = " "
+ TRAFFICGEN_STC_TEST_SESSION_NAME = " "
+ TRAFFICGEN_STC_CSV_RESULTS_FILE_PREFIX = " "
+
+2. For RFC2544 tests, the following parameters are mandatory
+
+.. code-block:: console
+
+ TRAFFICGEN_STC_EAST_CHASSIS_ADDR = " "
+ TRAFFICGEN_STC_EAST_SLOT_NUM = " "
+ TRAFFICGEN_STC_EAST_PORT_NUM = " "
+ TRAFFICGEN_STC_EAST_INTF_ADDR = " "
+ TRAFFICGEN_STC_EAST_INTF_GATEWAY_ADDR = " "
+ TRAFFICGEN_STC_WEST_CHASSIS_ADDR = ""
+ TRAFFICGEN_STC_WEST_SLOT_NUM = " "
+ TRAFFICGEN_STC_WEST_PORT_NUM = " "
+ TRAFFICGEN_STC_WEST_INTF_ADDR = " "
+ TRAFFICGEN_STC_WEST_INTF_GATEWAY_ADDR = " "
+ TRAFFICGEN_STC_RFC2544_TPUT_TEST_FILE_NAME
+
+3. RFC2889 tests: Currently, the forwarding, address-caching, and
+ address-learning-rate tests of RFC2889 are supported.
+ The testcenter-rfc2889-rest.py script implements the rfc2889 tests.
+ The configuration for RFC2889 involves test-case definition, and parameter
+ definition, as described below. New results-constants, as shown below, are
+ added to support these tests.
+
+Example of testcase definition for RFC2889 tests:
+
+.. code-block:: python
+
+ {
+ "Name": "phy2phy_forwarding",
+ "Deployment": "p2p",
+ "Description": "LTD.Forwarding.RFC2889.MaxForwardingRate",
+ "Parameters" : {
+ "TRAFFIC" : {
+ "traffic_type" : "rfc2889_forwarding",
+ },
+ },
+ }
+
+For RFC2889 tests, specifying the locations for the monitoring ports is mandatory.
+Necessary parameters are:
+
+.. code-block:: console
+
+ TRAFFICGEN_STC_RFC2889_TEST_FILE_NAME
+ TRAFFICGEN_STC_EAST_CHASSIS_ADDR = " "
+ TRAFFICGEN_STC_EAST_SLOT_NUM = " "
+ TRAFFICGEN_STC_EAST_PORT_NUM = " "
+ TRAFFICGEN_STC_EAST_INTF_ADDR = " "
+ TRAFFICGEN_STC_EAST_INTF_GATEWAY_ADDR = " "
+ TRAFFICGEN_STC_WEST_CHASSIS_ADDR = ""
+ TRAFFICGEN_STC_WEST_SLOT_NUM = " "
+ TRAFFICGEN_STC_WEST_PORT_NUM = " "
+ TRAFFICGEN_STC_WEST_INTF_ADDR = " "
+ TRAFFICGEN_STC_WEST_INTF_GATEWAY_ADDR = " "
+ TRAFFICGEN_STC_VERBOSE = "True"
+ TRAFFICGEN_STC_RFC2889_LOCATIONS="//10.1.1.1/1/1,//10.1.1.1/2/2"
+
+Other Configurations are :
+
+.. code-block:: console
+
+ TRAFFICGEN_STC_RFC2889_MIN_LR = 1488
+ TRAFFICGEN_STC_RFC2889_MAX_LR = 14880
+ TRAFFICGEN_STC_RFC2889_MIN_ADDRS = 1000
+ TRAFFICGEN_STC_RFC2889_MAX_ADDRS = 65536
+ TRAFFICGEN_STC_RFC2889_AC_LR = 1000
+
+The first two values are for the address-learning test, whereas the other three
+values are for the address-caching capacity test. LR: Learning Rate. AC: Address
+Caching. The maximum value for addresses is 16777216, whereas the maximum for LR
+is 4294967295.
+
+Results for RFC2889 Tests: The forwarding test outputs the following values:
+
+.. code-block:: console
+
+ TX_RATE_FPS : "Transmission Rate in Frames/sec"
+ THROUGHPUT_RX_FPS: "Received Throughput Frames/sec"
+ TX_RATE_MBPS : " Transmission rate in MBPS"
+ THROUGHPUT_RX_MBPS: "Received Throughput in MBPS"
+ TX_RATE_PERCENT: "Transmission Rate in Percentage"
+ FRAME_LOSS_PERCENT: "Frame loss in Percentage"
+ FORWARDING_RATE_FPS: " Maximum Forwarding Rate in FPS"
+
+
+The address-caching test outputs the following values:
+
+.. code-block:: console
+
+ CACHING_CAPACITY_ADDRS = 'Number of address it can cache'
+ ADDR_LEARNED_PERCENT = 'Percentage of address successfully learned'
+
+and the address-learning test outputs just a single value:
+
+.. code-block:: console
+
+ OPTIMAL_LEARNING_RATE_FPS = 'Optimal learning rate in fps'
+
+Note that 'FORWARDING_RATE_FPS', 'CACHING_CAPACITY_ADDRS',
+'ADDR_LEARNED_PERCENT' and 'OPTIMAL_LEARNING_RATE_FPS' are the new
+result-constants added to support RFC2889 tests.
+
+.. _`Xena Networks`:
+
+Xena Networks
+-------------
+
+Installation
+~~~~~~~~~~~~
+
+The Xena Networks traffic generator requires specific files and packages to be
+installed. It is assumed the user has access to the Xena2544.exe file, which
+must be placed in the VSPerf installation location under the tools/pkt_gen/xena
+folder. Contact Xena Networks for the latest version of this file. The user
+can also visit www.xenanetworks.com/downloads to obtain the file with a valid
+support contract.
+
+**NOTE:** VSPerf has been fully tested with version v2.43 of Xena2544.exe.
+
+To execute the Xena2544.exe file under Linux distributions the mono-complete
+package must be installed. To install this package follow the instructions
+below. Further information can be obtained from
+http://www.mono-project.com/docs/getting-started/install/linux/
+
+.. code-block:: console
+
+ rpm --import "http://keyserver.ubuntu.com/pks/lookup?op=get&search=0x3FA7E0328081BFF6A14DA29AA6A19B38D3D831EF"
+ yum-config-manager --add-repo http://download.mono-project.com/repo/centos/
+ yum -y install mono-complete
+
+To prevent gpg errors on future yum installations of packages, the mono-project
+repo should be disabled once installed.
+
+.. code-block:: console
+
+ yum-config-manager --disable download.mono-project.com_repo_centos_
+
+Configuration
+~~~~~~~~~~~~~
+
+Connection information for your Xena Chassis must be supplied inside the
+``10_custom.conf`` or ``03_custom.conf`` file. The following parameters must be
+set to allow for proper connections to the chassis.
+
+.. code-block:: console
+
+ TRAFFICGEN_XENA_IP = ''
+ TRAFFICGEN_XENA_PORT1 = ''
+ TRAFFICGEN_XENA_PORT2 = ''
+ TRAFFICGEN_XENA_USER = ''
+ TRAFFICGEN_XENA_PASSWORD = ''
+ TRAFFICGEN_XENA_MODULE1 = ''
+ TRAFFICGEN_XENA_MODULE2 = ''
+
+RFC2544 Throughput Testing
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Xena traffic generator testing for RFC2544 throughput can be modified for
+different behaviors if needed. The default values of the following options are
+optimized for best results.
+
+.. code-block:: console
+
+ TRAFFICGEN_XENA_2544_TPUT_INIT_VALUE = '10.0'
+ TRAFFICGEN_XENA_2544_TPUT_MIN_VALUE = '0.1'
+ TRAFFICGEN_XENA_2544_TPUT_MAX_VALUE = '100.0'
+ TRAFFICGEN_XENA_2544_TPUT_VALUE_RESOLUTION = '0.5'
+ TRAFFICGEN_XENA_2544_TPUT_USEPASS_THRESHHOLD = 'false'
+ TRAFFICGEN_XENA_2544_TPUT_PASS_THRESHHOLD = '0.0'
+
+Each value modifies the behavior of RFC 2544 throughput testing. Refer to your
+Xena documentation to understand the behavior changes when modifying these
+values.
+
+Continuous Traffic Testing
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+By default, Xena continuous traffic performs a 3 second learning preemption to
+allow the DUT to receive learning packets before a continuous test is performed.
+If a custom test case requires this learning to be disabled, you can disable the
+option or modify the length of the learning via the following settings.
+
+.. code-block:: console
+
+ TRAFFICGEN_XENA_CONT_PORT_LEARNING_ENABLED = False
+ TRAFFICGEN_XENA_CONT_PORT_LEARNING_DURATION = 3
+
+MoonGen
+-------
+
+Installation
+~~~~~~~~~~~~
+
+MoonGen architecture overview and general installation instructions
+can be found here:
+
+https://github.com/emmericp/MoonGen
+
+* Note: Today, MoonGen with VSPERF only supports 10Gbps line speeds.
+
+For VSPERF use, MoonGen should be cloned from the following repository (as
+opposed to the previously mentioned GitHub project):
+
+.. code-block:: console
+
+    git clone https://github.com/atheurer/lua-trafficgen
+
+and use the master branch:
+
+.. code-block:: console
+
+    git checkout master
+
+VSPERF uses a particular Lua script within the MoonGen project:
+
+``trafficgen.lua``
+
+Follow the MoonGen set up and execution instructions here:
+
+https://github.com/atheurer/lua-trafficgen/blob/master/README.md
+
+Note that password-less ssh login must be set up between the server
+running MoonGen and the device under test (running the VSPERF test
+infrastructure). This is because VSPERF on one server uses 'ssh' to
+configure and run MoonGen on the other server.
+
+One can set up this ssh access by doing the following on both servers:
+
+.. code-block:: console
+
+ ssh-keygen -b 2048 -t rsa
+ ssh-copy-id <other server>
+
+Configuration
+~~~~~~~~~~~~~
+
+Connection information for MoonGen must be supplied inside the
+``10_custom.conf`` or ``03_custom.conf`` file. The following parameters must be
+set to allow for proper connections to the host with MoonGen.
+
+.. code-block:: console
+
+ TRAFFICGEN_MOONGEN_HOST_IP_ADDR = ""
+ TRAFFICGEN_MOONGEN_USER = ""
+ TRAFFICGEN_MOONGEN_BASE_DIR = ""
+ TRAFFICGEN_MOONGEN_PORTS = ""
+ TRAFFICGEN_MOONGEN_LINE_SPEED_GBPS = ""
diff --git a/docs/testing/user/configguide/upgrade.rst b/docs/testing/user/configguide/upgrade.rst
new file mode 100644
index 00000000..cf92572c
--- /dev/null
+++ b/docs/testing/user/configguide/upgrade.rst
@@ -0,0 +1,183 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, Intel Corporation, AT&T and others.
+
+=====================
+Upgrading vswitchperf
+=====================
+
+Generic
+-------
+
+In case VSPERF was cloned from the git repository, it is easy to
+upgrade it to the newest stable version or to the development version.
+
+You can get a list of stable releases via the ``git`` command. It is necessary
+to update the local git repository first.
+
+**NOTE:** Git commands must be executed from the directory where the VSPERF
+repository was cloned, e.g. ``vswitchperf``.
+
+Update of local git repository:
+
+.. code:: bash
+
+ $ git pull
+
+List of stable releases:
+
+.. code:: bash
+
+ $ git tag
+
+ brahmaputra.1.0
+ colorado.1.0
+ colorado.2.0
+ colorado.3.0
+ danube.1.0
+
+You can select which stable release should be used. For example, select ``danube.1.0``:
+
+.. code:: bash
+
+ $ git checkout danube.1.0
+
+
+The development version of VSPERF can be selected by:
+
+.. code:: bash
+
+ $ git checkout master
+
+Colorado to Danube upgrade notes
+--------------------------------
+
+Obsoleted features
+~~~~~~~~~~~~~~~~~~
+
+Support for the vHost Cuse interface has been removed in the Danube release. This
+means that it is no longer possible to select ``QemuDpdkVhostCuse`` as a VNF. The
+``QemuDpdkVhostUser`` option should be used instead. Please check your configuration
+files and testcase definitions for any occurrence of:
+
+.. code:: python
+
+ VNF = "QemuDpdkVhostCuse"
+
+or
+
+.. code:: python
+
+ "VNF" : "QemuDpdkVhostCuse"
+
+If ``QemuDpdkVhostCuse`` is found, it must be changed to ``QemuDpdkVhostUser``.
+
+**NOTE:** In case the execution of VSPERF is automated by scripts (e.g. for
+CI purposes), then these scripts must be checked and updated too. This means
+that any occurrence of:
+
+.. code:: bash
+
+ ./vsperf --vnf QemuDpdkVhostCuse
+
+must be updated to:
+
+.. code:: bash
+
+ ./vsperf --vnf QemuDpdkVhostUser
+
+Configuration
+~~~~~~~~~~~~~
+
+Several configuration changes were introduced during the Danube release. The most
+important changes are discussed below.
+
+Paths to DPDK, OVS and QEMU
+===========================
+
+VSPERF uses external tools for proper testcase execution. Thus it is important
+to properly configure paths to these tools. In case the tools were installed
+by the installation scripts and are located inside the ``./src`` directory of the
+VSPERF home, then no changes are needed. On the other hand, if the path settings
+were changed by a custom configuration file, then the configuration must be updated
+accordingly. Please check your configuration files for the following configuration
+options:
+
+.. code:: bash
+
+ OVS_DIR
+ OVS_DIR_VANILLA
+ OVS_DIR_USER
+ OVS_DIR_CUSE
+
+ RTE_SDK_USER
+ RTE_SDK_CUSE
+
+ QEMU_DIR
+ QEMU_DIR_USER
+ QEMU_DIR_CUSE
+ QEMU_BIN
+
+If any of these options is defined, then the configuration must be updated.
+All paths to the tools are now stored inside the ``PATHS`` dictionary. Please
+refer to the :ref:`paths-documentation` and update your configuration where necessary.
+
+Configuration change via CLI
+============================
+
+In previous releases it was possible to modify selected configuration options
+(mostly VNF specific) via the command line interface, i.e. by the ``--test-params``
+argument. This concept has been generalized in the Danube release and it is now
+possible to modify any configuration parameter via the CLI or via the **Parameters**
+section of the testcase definition. The old configuration options are obsolete
+and the configuration parameter name must be specified in the same form
+as it is defined inside the configuration file, i.e. in uppercase. Please
+refer to the :ref:`overriding-parameters-documentation` for additional details.
+
+**NOTE:** In case the execution of VSPERF is automated by scripts (e.g. for
+CI purposes), then these scripts must be checked and updated too. This means
+that any occurrence of
+
+.. code:: bash
+
+ guest_loopback
+ vanilla_tgen_port1_ip
+ vanilla_tgen_port1_mac
+ vanilla_tgen_port2_ip
+ vanilla_tgen_port2_mac
+ tunnel_type
+
+shall be changed to the uppercase form, and the data type of the entered values
+must match the data types of the original values from the configuration files.
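+
+For example, a call which previously used the lowercase ``guest_loopback`` option
+now passes the uppercase parameter with a value matching the data type of the
+original configuration option (``GUEST_LOOPBACK`` is a list in the default VSPERF
+configuration). The sketch below is illustrative and ``$TESTNAME`` is a
+placeholder for a real testcase name:
+
+.. code:: bash
+
+    # Colorado style (obsolete)
+    ./vsperf --test-params "guest_loopback=testpmd" $TESTNAME
+
+    # Danube style
+    ./vsperf --test-params "GUEST_LOOPBACK=['testpmd']" $TESTNAME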
+
+In case ``guest_nic1_name`` or ``guest_nic2_name`` needs to be changed,
+then the new ``GUEST_NICS`` dictionary must be modified accordingly.
+Please see :ref:`configuration-of-guest-options` and ``conf/04_vnf.conf`` for additional
+details.
+
+Traffic configuration via CLI
+=============================
+
+In previous releases it was possible to modify selected attributes of the generated
+traffic via the command line interface. This concept has been enhanced in the Danube
+release and it is now possible to modify all traffic specific options via the
+CLI or by the ``TRAFFIC`` dictionary in the configuration file. A detailed description
+is available in the :ref:`configuration-of-traffic-dictionary` section of the documentation.
+
+Please check your automated scripts for VSPERF execution for the following CLI
+parameters and update them according to the documentation:
+
+.. code:: bash
+
+ bidir
+ duration
+ frame_rate
+ iload
+ lossrate
+ multistream
+ pkt_sizes
+ pre-installed_flows
+ rfc2544_tests
+ stream_type
+ traffic_type
+
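+For example, attributes which were previously passed as standalone lowercase
+options can now be expressed as a partial ``TRAFFIC`` dictionary. The sketch
+below is illustrative only and ``$TESTNAME`` is a placeholder; please consult
+the :ref:`configuration-of-traffic-dictionary` section for the authoritative
+syntax:
+
+.. code:: bash
+
+    # Colorado style (obsolete)
+    ./vsperf --test-params "multistream=1000" $TESTNAME
+
+    # Danube style
+    ./vsperf --test-params "TRAFFIC={'multistream':1000}" $TESTNAME
+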
diff --git a/docs/testing/user/userguide/index.rst b/docs/testing/user/userguide/index.rst
new file mode 100644
index 00000000..5c6886bf
--- /dev/null
+++ b/docs/testing/user/userguide/index.rst
@@ -0,0 +1,10 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, Intel Corporation, AT&T, Red Hat, Spirent, Ixia and others.
+
+.. OPNFV VSPERF Documentation master file.
+
+
+Indices
+=======
+* :ref:`search`
diff --git a/docs/testing/user/userguide/integration.rst b/docs/testing/user/userguide/integration.rst
new file mode 100644
index 00000000..83b29da6
--- /dev/null
+++ b/docs/testing/user/userguide/integration.rst
@@ -0,0 +1,504 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, Intel Corporation, AT&T and others.
+
+.. _integration-tests:
+
+Integration tests
+=================
+
+VSPERF includes a set of integration tests defined in conf/integration.
+These tests can be run by specifying --integration as a parameter to vsperf.
+The current tests in conf/integration include switch functionality and overlay
+tests.
+
+Tests in conf/integration can be used to test the scaling of different switch
+configurations by adding steps into the test case.
+
+For the overlay tests VSPERF supports the VXLAN, GRE and GENEVE tunneling protocols.
+Testing of these protocols is limited to unidirectional traffic and
+P2P (Physical to Physical) scenarios.
+
+NOTE: The configuration for overlay tests provided in this guide is for
+unidirectional traffic only.
+
+Executing Integration Tests
+---------------------------
+
+To execute integration tests VSPERF is run with the integration parameter. To
+view the current test list simply execute the following command:
+
+.. code-block:: console
+
+ ./vsperf --integration --list
+
+The standard tests included are defined inside the
+``conf/integration/01_testcases.conf`` file.
+
+Executing Tunnel encapsulation tests
+------------------------------------
+
+The VXLAN OVS DPDK encapsulation tests require IPs, MAC addresses,
+bridge names and WHITELIST_NICS for DPDK.
+
+NOTE: Only Ixia traffic generators currently support the execution of the tunnel
+encapsulation tests. Support for other traffic generators may come in a future
+release.
+
+Default values are already provided. To customize for your environment, override
+the following variables in your user_settings.py file:
+
+ .. code-block:: python
+
+ # Variables defined in conf/integration/02_vswitch.conf
+ # Tunnel endpoint for Overlay P2P deployment scenario
+ # used for br0
+ VTEP_IP1 = '192.168.0.1/24'
+
+ # Used as remote_ip in adding OVS tunnel port and
+ # to set ARP entry in OVS (e.g. tnl/arp/set br-ext 192.168.240.10 02:00:00:00:00:02
+ VTEP_IP2 = '192.168.240.10'
+
+ # Network to use when adding a route for inner frame data
+ VTEP_IP2_SUBNET = '192.168.240.0/24'
+
+ # Bridge names
+ TUNNEL_INTEGRATION_BRIDGE = 'br0'
+ TUNNEL_EXTERNAL_BRIDGE = 'br-ext'
+
+ # IP of br-ext
+ TUNNEL_EXTERNAL_BRIDGE_IP = '192.168.240.1/24'
+
+ # vxlan|gre|geneve
+ TUNNEL_TYPE = 'vxlan'
+
+ # Variables defined conf/integration/03_traffic.conf
+ # For OP2P deployment scenario
+ TRAFFICGEN_PORT1_MAC = '02:00:00:00:00:01'
+ TRAFFICGEN_PORT2_MAC = '02:00:00:00:00:02'
+ TRAFFICGEN_PORT1_IP = '1.1.1.1'
+ TRAFFICGEN_PORT2_IP = '192.168.240.10'
+
+To run VXLAN encapsulation tests:
+
+ .. code-block:: console
+
+ ./vsperf --conf-file user_settings.py --integration \
+ --test-params 'TUNNEL_TYPE=vxlan' overlay_p2p_tput
+
+To run GRE encapsulation tests:
+
+ .. code-block:: console
+
+ ./vsperf --conf-file user_settings.py --integration \
+ --test-params 'TUNNEL_TYPE=gre' overlay_p2p_tput
+
+To run GENEVE encapsulation tests:
+
+ .. code-block:: console
+
+ ./vsperf --conf-file user_settings.py --integration \
+ --test-params 'TUNNEL_TYPE=geneve' overlay_p2p_tput
+
+To run OVS NATIVE tunnel tests (VXLAN/GRE/GENEVE):
+
+1. Install the OVS kernel modules
+
+ .. code:: console
+
+ cd src/ovs/ovs
+ sudo -E make modules_install
+
+2. Set the following variables:
+
+ .. code-block:: python
+
+ VSWITCH = 'OvsVanilla'
+ # Specify vport_* kernel module to test.
+ PATHS['vswitch']['OvsVanilla']['src']['modules'] = [
+ 'vport_vxlan',
+ 'vport_gre',
+ 'vport_geneve',
+ 'datapath/linux/openvswitch.ko',
+ ]
+
+   **NOTE:** In case Vanilla OVS is installed from a binary package, please
+   set ``PATHS['vswitch']['OvsVanilla']['bin']['modules']`` instead.
+
+3. Run tests:
+
+ .. code-block:: console
+
+ ./vsperf --conf-file user_settings.py --integration \
+ --test-params 'TUNNEL_TYPE=vxlan' overlay_p2p_tput
+
+
+Executing VXLAN decapsulation tests
+------------------------------------
+
+To run VXLAN decapsulation tests:
+
+1. Set the variables used in "Executing Tunnel encapsulation tests"
+
+2. Set dstmac of DUT_NIC2_MAC to the MAC address of the 2nd NIC of your DUT
+
+ .. code-block:: python
+
+ DUT_NIC2_MAC = '<DUT NIC2 MAC>'
+
+3. Run test:
+
+ .. code-block:: console
+
+ ./vsperf --conf-file user_settings.py --integration overlay_p2p_decap_cont
+
+If you want to use different values for your VXLAN frame, you may set:
+
+ .. code-block:: python
+
+ VXLAN_FRAME_L3 = {'proto': 'udp',
+ 'packetsize': 64,
+ 'srcip': TRAFFICGEN_PORT1_IP,
+ 'dstip': '192.168.240.1',
+ }
+ VXLAN_FRAME_L4 = {'srcport': 4789,
+ 'dstport': 4789,
+ 'vni': VXLAN_VNI,
+ 'inner_srcmac': '01:02:03:04:05:06',
+ 'inner_dstmac': '06:05:04:03:02:01',
+ 'inner_srcip': '192.168.0.10',
+ 'inner_dstip': '192.168.240.9',
+ 'inner_proto': 'udp',
+ 'inner_srcport': 3000,
+ 'inner_dstport': 3001,
+ }
+
+
+Executing GRE decapsulation tests
+---------------------------------
+
+To run GRE decapsulation tests:
+
+1. Set the variables used in "Executing Tunnel encapsulation tests"
+
+2. Set dstmac of DUT_NIC2_MAC to the MAC address of the 2nd NIC of your DUT
+
+ .. code-block:: python
+
+ DUT_NIC2_MAC = '<DUT NIC2 MAC>'
+
+3. Run test:
+
+ .. code-block:: console
+
+ ./vsperf --conf-file user_settings.py --test-params 'TUNNEL_TYPE=gre' \
+ --integration overlay_p2p_decap_cont
+
+
+If you want to use different values for your GRE frame, you may set:
+
+ .. code-block:: python
+
+ GRE_FRAME_L3 = {'proto': 'gre',
+ 'packetsize': 64,
+ 'srcip': TRAFFICGEN_PORT1_IP,
+ 'dstip': '192.168.240.1',
+ }
+
+ GRE_FRAME_L4 = {'srcport': 0,
+                     'dstport': 0,
+ 'inner_srcmac': '01:02:03:04:05:06',
+ 'inner_dstmac': '06:05:04:03:02:01',
+ 'inner_srcip': '192.168.0.10',
+ 'inner_dstip': '192.168.240.9',
+ 'inner_proto': 'udp',
+ 'inner_srcport': 3000,
+ 'inner_dstport': 3001,
+ }
+
+
+Executing GENEVE decapsulation tests
+------------------------------------
+
+IxNet 7.3X does not have native support for the GENEVE protocol. The
+template, GeneveIxNetTemplate.xml_ClearText.xml, should be imported
+into IxNET for this testcase to work.
+
+To import the template do:
+
+1. Run the IxNetwork TCL Server
+2. Click on the Traffic menu
+3. Click on the Traffic actions and click Edit Packet Templates
+4. On the Template editor window, click Import. Select the template
+ located at ``3rd_party/ixia/GeneveIxNetTemplate.xml_ClearText.xml``
+ and click import.
+5. Restart the TCL Server.
+
+To run GENEVE decapsulation tests:
+
+1. Set the variables used in "Executing Tunnel encapsulation tests"
+
+2. Set dstmac of DUT_NIC2_MAC to the MAC address of the 2nd NIC of your DUT
+
+ .. code-block:: python
+
+ DUT_NIC2_MAC = '<DUT NIC2 MAC>'
+
+3. Run test:
+
+ .. code-block:: console
+
+      ./vsperf --conf-file user_settings.py --test-params 'TUNNEL_TYPE=geneve' \
+ --integration overlay_p2p_decap_cont
+
+
+If you want to use different values for your GENEVE frame, you may set:
+
+ .. code-block:: python
+
+ GENEVE_FRAME_L3 = {'proto': 'udp',
+ 'packetsize': 64,
+ 'srcip': TRAFFICGEN_PORT1_IP,
+ 'dstip': '192.168.240.1',
+ }
+
+ GENEVE_FRAME_L4 = {'srcport': 6081,
+ 'dstport': 6081,
+ 'geneve_vni': 0,
+ 'inner_srcmac': '01:02:03:04:05:06',
+ 'inner_dstmac': '06:05:04:03:02:01',
+ 'inner_srcip': '192.168.0.10',
+ 'inner_dstip': '192.168.240.9',
+ 'inner_proto': 'udp',
+ 'inner_srcport': 3000,
+ 'inner_dstport': 3001,
+ }
+
+
+Executing Native/Vanilla OVS VXLAN decapsulation tests
+------------------------------------------------------
+
+To run VXLAN decapsulation tests:
+
+1. Set the following variables in your user_settings.py file:
+
+ .. code-block:: python
+
+ PATHS['vswitch']['OvsVanilla']['src']['modules'] = [
+ 'vport_vxlan',
+ 'datapath/linux/openvswitch.ko',
+ ]
+
+ DUT_NIC1_MAC = '<DUT NIC1 MAC ADDRESS>'
+
+ TRAFFICGEN_PORT1_IP = '172.16.1.2'
+ TRAFFICGEN_PORT2_IP = '192.168.1.11'
+
+ VTEP_IP1 = '172.16.1.2/24'
+ VTEP_IP2 = '192.168.1.1'
+ VTEP_IP2_SUBNET = '192.168.1.0/24'
+ TUNNEL_EXTERNAL_BRIDGE_IP = '172.16.1.1/24'
+ TUNNEL_INT_BRIDGE_IP = '192.168.1.1'
+
+ VXLAN_FRAME_L2 = {'srcmac':
+ '01:02:03:04:05:06',
+ 'dstmac': DUT_NIC1_MAC
+ }
+
+ VXLAN_FRAME_L3 = {'proto': 'udp',
+ 'packetsize': 64,
+ 'srcip': TRAFFICGEN_PORT1_IP,
+ 'dstip': '172.16.1.1',
+ }
+
+ VXLAN_FRAME_L4 = {
+ 'srcport': 4789,
+ 'dstport': 4789,
+ 'protocolpad': 'true',
+ 'vni': 99,
+ 'inner_srcmac': '01:02:03:04:05:06',
+ 'inner_dstmac': '06:05:04:03:02:01',
+ 'inner_srcip': '192.168.1.2',
+ 'inner_dstip': TRAFFICGEN_PORT2_IP,
+ 'inner_proto': 'udp',
+ 'inner_srcport': 3000,
+ 'inner_dstport': 3001,
+ }
+
+   **NOTE:** In case Vanilla OVS is installed from a binary package, please
+   set ``PATHS['vswitch']['OvsVanilla']['bin']['modules']`` instead.
+
+2. Run test:
+
+ .. code-block:: console
+
+ ./vsperf --conf-file user_settings.py --integration \
+               --test-params 'TUNNEL_TYPE=vxlan' overlay_p2p_decap_cont
+
+Executing Native/Vanilla OVS GRE decapsulation tests
+----------------------------------------------------
+
+To run GRE decapsulation tests:
+
+1. Set the following variables in your user_settings.py file:
+
+ .. code-block:: python
+
+ PATHS['vswitch']['OvsVanilla']['src']['modules'] = [
+ 'vport_gre',
+ 'datapath/linux/openvswitch.ko',
+ ]
+
+ DUT_NIC1_MAC = '<DUT NIC1 MAC ADDRESS>'
+
+ TRAFFICGEN_PORT1_IP = '172.16.1.2'
+ TRAFFICGEN_PORT2_IP = '192.168.1.11'
+
+ VTEP_IP1 = '172.16.1.2/24'
+ VTEP_IP2 = '192.168.1.1'
+ VTEP_IP2_SUBNET = '192.168.1.0/24'
+ TUNNEL_EXTERNAL_BRIDGE_IP = '172.16.1.1/24'
+ TUNNEL_INT_BRIDGE_IP = '192.168.1.1'
+
+ GRE_FRAME_L2 = {'srcmac':
+ '01:02:03:04:05:06',
+ 'dstmac': DUT_NIC1_MAC
+ }
+
+ GRE_FRAME_L3 = {'proto': 'udp',
+ 'packetsize': 64,
+ 'srcip': TRAFFICGEN_PORT1_IP,
+ 'dstip': '172.16.1.1',
+ }
+
+ GRE_FRAME_L4 = {
+ 'srcport': 4789,
+ 'dstport': 4789,
+ 'protocolpad': 'true',
+ 'inner_srcmac': '01:02:03:04:05:06',
+ 'inner_dstmac': '06:05:04:03:02:01',
+ 'inner_srcip': '192.168.1.2',
+ 'inner_dstip': TRAFFICGEN_PORT2_IP,
+ 'inner_proto': 'udp',
+ 'inner_srcport': 3000,
+ 'inner_dstport': 3001,
+ }
+
+   **NOTE:** In case Vanilla OVS is installed from a binary package, please
+   set ``PATHS['vswitch']['OvsVanilla']['bin']['modules']`` instead.
+
+2. Run test:
+
+ .. code-block:: console
+
+ ./vsperf --conf-file user_settings.py --integration \
+               --test-params 'TUNNEL_TYPE=gre' overlay_p2p_decap_cont
+
+Executing Native/Vanilla OVS GENEVE decapsulation tests
+-------------------------------------------------------
+
+To run GENEVE decapsulation tests:
+
+1. Set the following variables in your user_settings.py file:
+
+ .. code-block:: python
+
+ PATHS['vswitch']['OvsVanilla']['src']['modules'] = [
+ 'vport_geneve',
+ 'datapath/linux/openvswitch.ko',
+ ]
+
+ DUT_NIC1_MAC = '<DUT NIC1 MAC ADDRESS>'
+
+ TRAFFICGEN_PORT1_IP = '172.16.1.2'
+ TRAFFICGEN_PORT2_IP = '192.168.1.11'
+
+ VTEP_IP1 = '172.16.1.2/24'
+ VTEP_IP2 = '192.168.1.1'
+ VTEP_IP2_SUBNET = '192.168.1.0/24'
+ TUNNEL_EXTERNAL_BRIDGE_IP = '172.16.1.1/24'
+ TUNNEL_INT_BRIDGE_IP = '192.168.1.1'
+
+ GENEVE_FRAME_L2 = {'srcmac':
+ '01:02:03:04:05:06',
+ 'dstmac': DUT_NIC1_MAC
+ }
+
+ GENEVE_FRAME_L3 = {'proto': 'udp',
+ 'packetsize': 64,
+ 'srcip': TRAFFICGEN_PORT1_IP,
+ 'dstip': '172.16.1.1',
+ }
+
+ GENEVE_FRAME_L4 = {'srcport': 6081,
+ 'dstport': 6081,
+ 'protocolpad': 'true',
+ 'geneve_vni': 0,
+ 'inner_srcmac': '01:02:03:04:05:06',
+ 'inner_dstmac': '06:05:04:03:02:01',
+ 'inner_srcip': '192.168.1.2',
+ 'inner_dstip': TRAFFICGEN_PORT2_IP,
+ 'inner_proto': 'udp',
+ 'inner_srcport': 3000,
+ 'inner_dstport': 3001,
+ }
+
+   **NOTE:** In case Vanilla OVS is installed from a binary package, please
+   set ``PATHS['vswitch']['OvsVanilla']['bin']['modules']`` instead.
+
+2. Run test:
+
+ .. code-block:: console
+
+ ./vsperf --conf-file user_settings.py --integration \
+               --test-params 'TUNNEL_TYPE=geneve' overlay_p2p_decap_cont
+
+
+Executing Tunnel encapsulation+decapsulation tests
+--------------------------------------------------
+
+The OVS DPDK encapsulation_decapsulation tests require IPs, MAC addresses,
+bridge names and WHITELIST_NICS for DPDK.
+
+In contrast to the above test cases, these test cases exercise tunneling encap and
+decap without using any ingress overlay traffic. To achieve this, OVS is
+configured to perform encap and decap in series on the same traffic stream, as
+shown below.
+
+TRAFFIC-IN --> [ENCAP] --> [MOD-PKT] --> [DECAP] --> TRAFFIC-OUT
+
+
+Default values are already provided. To customize for your environment, override
+the following variables in your user_settings.py file:
+
+ .. code-block:: python
+
+ # Variables defined in conf/integration/02_vswitch.conf
+
+ # Bridge names
+ TUNNEL_EXTERNAL_BRIDGE1 = 'br-phy1'
+ TUNNEL_EXTERNAL_BRIDGE2 = 'br-phy2'
+ TUNNEL_MODIFY_BRIDGE1 = 'br-mod1'
+ TUNNEL_MODIFY_BRIDGE2 = 'br-mod2'
+
+ # IP of br-mod1
+ TUNNEL_MODIFY_BRIDGE_IP1 = '10.0.0.1/24'
+
+ # Mac of br-mod1
+ TUNNEL_MODIFY_BRIDGE_MAC1 = '00:00:10:00:00:01'
+
+ # IP of br-mod2
+ TUNNEL_MODIFY_BRIDGE_IP2 = '20.0.0.1/24'
+
+ #Mac of br-mod2
+ TUNNEL_MODIFY_BRIDGE_MAC2 = '00:00:20:00:00:01'
+
+ # vxlan|gre|geneve, Only VXLAN is supported for now.
+ TUNNEL_TYPE = 'vxlan'
+
+To run VXLAN encapsulation+decapsulation tests:
+
+ .. code-block:: console
+
+ ./vsperf --conf-file user_settings.py --integration \
+ overlay_p2p_mod_tput
diff --git a/docs/testing/user/userguide/teststeps.rst b/docs/testing/user/userguide/teststeps.rst
new file mode 100644
index 00000000..870c3d80
--- /dev/null
+++ b/docs/testing/user/userguide/teststeps.rst
@@ -0,0 +1,667 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, Intel Corporation, AT&T and others.
+
+.. _step-driven-tests:
+
+Step driven tests
+=================
+
+In general, test scenarios are defined by the ``deployment`` used in the particular
+test case definition. The chosen deployment scenario will take care of the vSwitch
+configuration and the deployment of VNFs, and it can also affect the configuration
+of the traffic generator. In order to allow a more flexible way of testcase scripting,
+VSPERF supports a detailed step driven testcase definition. It can be used to configure
+and program the vSwitch, deploy and terminate VNFs, execute a traffic generator,
+modify the VSPERF configuration, execute external commands, etc.
+
+Execution of step driven tests is done in a step by step work flow starting
+with step 0 as defined inside the test case. Each step of the test increments
+the step number by one, which is indicated in the log.
+
+.. code-block:: console
+
+ (testcases.integration) - Step 0 'vswitch add_vport ['br0']' start
+
+Step driven tests can be used for both performance and integration testing.
+In case of an integration test, each step in the test case is validated. If a step
+does not pass validation, the test will fail and terminate. The test will continue
+until a failure is detected or all steps pass. A csv report file is generated after
+a test completes, with an OK or FAIL result.
+
+In case of a performance test, the validation of steps is not performed and
+standard output files with the results from the traffic generator and underlying OS
+details are generated by vsperf.
+
+Step driven testcases can be used in two different ways:
+
+    #. description of a full testcase - in this case the ``clean`` deployment is used
+       to indicate that vsperf should neither configure the vSwitch nor deploy any VNF.
+       The test shall perform all required vSwitch configuration and programming and
+       deploy the required number of VNFs.
+
+    #. modification of an existing deployment - in this case, any of the supported
+       deployments can be used to perform the initial vSwitch configuration and
+       deployment of VNFs. Additional actions defined by TestSteps can be used
+       to alter the vSwitch configuration or to deploy additional VNFs. After the last
+       step is processed, the test execution will continue with traffic execution.
+
+Test objects and their functions
+--------------------------------
+
+Every test step can call a function of one of the supported test objects. The list
+of supported objects and their most common functions follows:
+
+ * ``vswitch`` - provides functions for vSwitch configuration
+
+ List of supported functions:
+
+ * ``add_switch br_name`` - creates a new switch (bridge) with given ``br_name``
+ * ``del_switch br_name`` - deletes switch (bridge) with given ``br_name``
+ * ``add_phy_port br_name`` - adds a physical port into bridge specified by ``br_name``
+ * ``add_vport br_name`` - adds a virtual port into bridge specified by ``br_name``
+ * ``del_port br_name port_name`` - removes physical or virtual port specified by
+ ``port_name`` from bridge ``br_name``
+ * ``add_flow br_name flow`` - adds flow specified by ``flow`` dictionary into
+ the bridge ``br_name``; Content of flow dictionary will be passed to the vSwitch.
+ In case of Open vSwitch it will be passed to the ``ovs-ofctl add-flow`` command.
+ Please see Open vSwitch documentation for the list of supported flow parameters.
+ * ``del_flow br_name [flow]`` - deletes flow specified by ``flow`` dictionary from
+ bridge ``br_name``; In case that optional parameter ``flow`` is not specified
+ or set to an empty dictionary ``{}``, then all flows from bridge ``br_name``
+ will be deleted.
+ * ``dump_flows br_name`` - dumps all flows from bridge specified by ``br_name``
+ * ``enable_stp br_name`` - enables Spanning Tree Protocol for bridge ``br_name``
+ * ``disable_stp br_name`` - disables Spanning Tree Protocol for bridge ``br_name``
+ * ``enable_rstp br_name`` - enables Rapid Spanning Tree Protocol for bridge ``br_name``
+ * ``disable_rstp br_name`` - disables Rapid Spanning Tree Protocol for bridge ``br_name``
+
+ Examples:
+
+ .. code-block:: python
+
+ ['vswitch', 'add_switch', 'int_br0']
+
+ ['vswitch', 'del_switch', 'int_br0']
+
+ ['vswitch', 'add_phy_port', 'int_br0']
+
+ ['vswitch', 'del_port', 'int_br0', '#STEP[2][0]']
+
+ ['vswitch', 'add_flow', 'int_br0', {'in_port': '1', 'actions': ['output:2'],
+ 'idle_timeout': '0'}],
+
+ ['vswitch', 'enable_rstp', 'int_br0']
+
+ * ``vnf[ID]`` - provides functions for deployment and termination of VNFs; Optional
+    alphanumeric ``ID`` is used for VNF identification in case the testcase
+ deploys multiple VNFs.
+
+ List of supported functions:
+
+ * ``start`` - starts a VNF based on VSPERF configuration
+ * ``stop`` - gracefully terminates given VNF
+
+ Examples:
+
+ .. code-block:: python
+
+ ['vnf1', 'start']
+ ['vnf2', 'start']
+ ['vnf2', 'stop']
+ ['vnf1', 'stop']
+
+ * ``trafficgen`` - triggers traffic generation
+
+ List of supported functions:
+
+ * ``send_traffic traffic`` - starts a traffic based on the vsperf configuration
+ and given ``traffic`` dictionary. More details about ``traffic`` dictionary
+ and its possible values are available at :ref:`Traffic Generator Integration Guide
+ <step-5-supported-traffic-types>`
+
+ Examples:
+
+ .. code-block:: python
+
+ ['trafficgen', 'send_traffic', {'traffic_type' : 'rfc2544_throughput'}]
+
+ ['trafficgen', 'send_traffic', {'traffic_type' : 'rfc2544_back2back', 'bidir' : 'True'}]
+
+ * ``settings`` - reads or modifies VSPERF configuration
+
+ List of supported functions:
+
+ * ``getValue param`` - returns value of given ``param``
+ * ``setValue param value`` - sets value of ``param`` to given ``value``
+
+ Examples:
+
+ .. code-block:: python
+
+ ['settings', 'getValue', 'TOOLS']
+
+ ['settings', 'setValue', 'GUEST_USERNAME', ['root']]
+
+ * ``namespace`` - creates or modifies network namespaces
+
+ List of supported functions:
+
+ * ``create_namespace name`` - creates new namespace with given ``name``
+ * ``delete_namespace name`` - deletes namespace specified by its ``name``
+ * ``assign_port_to_namespace port name [port_up]`` - assigns NIC specified by ``port``
+ into given namespace ``name``; If optional parameter ``port_up`` is set to ``True``,
+ then port will be brought up.
+ * ``add_ip_to_namespace_eth port name addr cidr`` - assigns an IP address ``addr``/``cidr``
+ to the NIC specified by ``port`` within namespace ``name``
+ * ``reset_port_to_root port name`` - returns given ``port`` from namespace ``name`` back
+ to the root namespace
+
+ Examples:
+
+ .. code-block:: python
+
+ ['namespace', 'create_namespace', 'testns']
+
+ ['namespace', 'assign_port_to_namespace', 'eth0', 'testns']
+
+ * ``veth`` - manipulates with eth and veth devices
+
+ List of supported functions:
+
+ * ``add_veth_port port peer_port`` - adds a pair of veth ports named ``port`` and
+ ``peer_port``
+ * ``del_veth_port port peer_port`` - deletes a veth port pair specified by ``port``
+ and ``peer_port``
+ * ``bring_up_eth_port eth_port [namespace]`` - brings up ``eth_port`` in (optional)
+ ``namespace``
+
+ Examples:
+
+ .. code-block:: python
+
+ ['veth', 'add_veth_port', 'veth', 'veth1']
+
+ ['veth', 'bring_up_eth_port', 'eth1']
+
+ * ``tools`` - provides a set of helper functions
+
+ List of supported functions:
+
+ * ``Assert condition`` - evaluates given ``condition`` and raises ``AssertionError``
+ in case that condition is not ``True``
+ * ``Eval expression`` - evaluates given expression as a python code and returns
+ its result
+ * ``Exec command [regex]`` - executes a shell command and filters its output by
+ (optional) regular expression
+
+ Examples:
+
+ .. code-block:: python
+
+ ['tools', 'exec', 'numactl -H', 'available: ([0-9]+)']
+ ['tools', 'assert', '#STEP[-1][0]>1']
+
+ * ``wait`` - is used for test case interruption. This object doesn't have
+    any functions. Once reached, vsperf will pause the test execution and wait
+    for the ``Enter`` key to be pressed. It can be used during testcase design
+    for debugging purposes.
+
+ Examples:
+
+ .. code-block:: python
+
+ ['wait']
+
+Test Macros
+-----------
+
+Test profiles can include macros as part of a test step. Each step in the
+profile may return a value, such as a port name. Recall macros use #STEP to
+indicate the recalled value inside the return structure. If the method the
+test step calls returns a value, it can be recalled later, for example:
+
+.. code-block:: python
+
+ {
+ "Name": "vswitch_add_del_vport",
+ "Deployment": "clean",
+ "Description": "vSwitch - add and delete virtual port",
+ "TestSteps": [
+ ['vswitch', 'add_switch', 'int_br0'], # STEP 0
+ ['vswitch', 'add_vport', 'int_br0'], # STEP 1
+ ['vswitch', 'del_port', 'int_br0', '#STEP[1][0]'], # STEP 2
+ ['vswitch', 'del_switch', 'int_br0'], # STEP 3
+ ]
+ }
+
+This test profile uses the vswitch add_vport method, which returns a string
+value with the name of the port that was added. This name is later used by the
+del_port method, which references the result of step 1.
+
+It is also possible to use negative indexes in step macros. In that case
+``#STEP[-1]`` will refer to the result of the previous step, ``#STEP[-2]``
+will refer to the result of the step before the previous one, etc. This means
+that you could change ``STEP 2`` from the previous example to achieve the same
+functionality:
+
+.. code-block:: python
+
+ ['vswitch', 'del_port', 'int_br0', '#STEP[-1][0]'], # STEP 2
+
+Commonly used steps can also be created as a separate profile.
+
+.. code-block:: python
+
+ STEP_VSWITCH_PVP_INIT = [
+ ['vswitch', 'add_switch', 'int_br0'], # STEP 0
+ ['vswitch', 'add_phy_port', 'int_br0'], # STEP 1
+ ['vswitch', 'add_phy_port', 'int_br0'], # STEP 2
+ ['vswitch', 'add_vport', 'int_br0'], # STEP 3
+ ['vswitch', 'add_vport', 'int_br0'], # STEP 4
+ ]
+
+This profile can then be used inside other testcases:
+
+.. code-block:: python
+
+ {
+ "Name": "vswitch_pvp",
+ "Deployment": "clean",
+ "Description": "vSwitch - configure switch and one vnf",
+ "TestSteps": STEP_VSWITCH_PVP_INIT +
+ [
+ ['vnf', 'start'],
+ ['vnf', 'stop'],
+ ] +
+ STEP_VSWITCH_PVP_FINIT
+ }
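+
+The ``STEP_VSWITCH_PVP_FINIT`` profile referenced above is not shown here. A
+possible teardown counterpart, reusing the ``del_port`` and ``del_switch``
+functions and the step macros described earlier, might look roughly like the
+following sketch (illustrative only, not the exact definition shipped with
+VSPERF):
+
+.. code-block:: python
+
+    STEP_VSWITCH_PVP_FINIT = [
+        ['vswitch', 'del_port', 'int_br0', '#STEP[1][0]'],   # physical ports
+        ['vswitch', 'del_port', 'int_br0', '#STEP[2][0]'],
+        ['vswitch', 'del_port', 'int_br0', '#STEP[3][0]'],   # virtual ports
+        ['vswitch', 'del_port', 'int_br0', '#STEP[4][0]'],
+        ['vswitch', 'del_switch', 'int_br0'],
+    ]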
+
+HelloWorld and other basic Testcases
+------------------------------------
+
+The following examples are for demonstration purposes.
+You can run them by copying and pasting into the
+conf/integration/01_testcases.conf file.
+A command-line instruction is shown at the end of each
+example.
+
+HelloWorld
+^^^^^^^^^^
+
+The first example is a HelloWorld testcase.
+It simply creates a bridge with 2 physical ports, then sets up a flow to drop
+incoming packets from the port that was instantiated at STEP #1.
+There is no interaction with the traffic generator.
+Then the flow, the 2 ports and the bridge are deleted.
+The 'add_phy_port' method creates a 'dpdk' type interface that will manage the
+physical port. The string value returned is the port name that will be referred
+to by 'del_port' later on.
+
+.. code-block:: python
+
+ {
+ "Name": "HelloWorld",
+ "Description": "My first testcase",
+ "Deployment": "clean",
+ "TestSteps": [
+ ['vswitch', 'add_switch', 'int_br0'], # STEP 0
+ ['vswitch', 'add_phy_port', 'int_br0'], # STEP 1
+ ['vswitch', 'add_phy_port', 'int_br0'], # STEP 2
+ ['vswitch', 'add_flow', 'int_br0', {'in_port': '#STEP[1][1]', \
+ 'actions': ['drop'], 'idle_timeout': '0'}],
+ ['vswitch', 'del_flow', 'int_br0'],
+ ['vswitch', 'del_port', 'int_br0', '#STEP[1][0]'],
+ ['vswitch', 'del_port', 'int_br0', '#STEP[2][0]'],
+ ['vswitch', 'del_switch', 'int_br0'],
+ ]
+
+ },
+
+To run the HelloWorld test:
+
+ .. code-block:: console
+
+ ./vsperf --conf-file user_settings.py --integration HelloWorld
+
+Specify a Flow by the IP address
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+The next example shows how to explicitly set up a flow by specifying a
+destination IP address.
+All packets received from the port created at STEP #1 that have a destination
+IP address of 90.90.90.90 will be forwarded to the port created at STEP #2.
+
+.. code-block:: python
+
+ {
+ "Name": "p2p_rule_l3da",
+ "Description": "Phy2Phy with rule on L3 Dest Addr",
+ "Deployment": "clean",
+ "biDirectional": "False",
+ "TestSteps": [
+ ['vswitch', 'add_switch', 'int_br0'], # STEP 0
+ ['vswitch', 'add_phy_port', 'int_br0'], # STEP 1
+ ['vswitch', 'add_phy_port', 'int_br0'], # STEP 2
+ ['vswitch', 'add_flow', 'int_br0', {'in_port': '#STEP[1][1]', \
+ 'dl_type': '0x0800', 'nw_dst': '90.90.90.90', \
+ 'actions': ['output:#STEP[2][1]'], 'idle_timeout': '0'}],
+ ['trafficgen', 'send_traffic', \
+ {'traffic_type' : 'rfc2544_continuous'}],
+ ['vswitch', 'dump_flows', 'int_br0'], # STEP 5
+ ['vswitch', 'del_flow', 'int_br0'], # STEP 7 == del-flows
+ ['vswitch', 'del_port', 'int_br0', '#STEP[1][0]'],
+ ['vswitch', 'del_port', 'int_br0', '#STEP[2][0]'],
+ ['vswitch', 'del_switch', 'int_br0'],
+ ]
+ },
+
+To run the test:
+
+ .. code-block:: console
+
+ ./vsperf --conf-file user_settings.py --integration p2p_rule_l3da
+
+Multistream feature
+^^^^^^^^^^^^^^^^^^^
+
+The next testcase uses the multistream feature.
+The traffic generator will send packets with different UDP ports.
+That is accomplished by the "multistream" and "stream_type" keywords of the
+``TRAFFIC`` parameter.
+Four different flows are set up to forward all incoming packets.
+
+.. code-block:: python
+
+ {
+ "Name": "multistream_l4",
+ "Description": "Multistream on UDP ports",
+ "Deployment": "clean",
+ "Parameters": {
+ 'TRAFFIC' : {
+ "multistream": 4,
+ "stream_type": "L4",
+ },
+ },
+ "TestSteps": [
+ ['vswitch', 'add_switch', 'int_br0'], # STEP 0
+ ['vswitch', 'add_phy_port', 'int_br0'], # STEP 1
+ ['vswitch', 'add_phy_port', 'int_br0'], # STEP 2
+ # Setup Flows
+ ['vswitch', 'add_flow', 'int_br0', {'in_port': '#STEP[1][1]', \
+ 'dl_type': '0x0800', 'nw_proto': '17', 'udp_dst': '0', \
+ 'actions': ['output:#STEP[2][1]'], 'idle_timeout': '0'}],
+ ['vswitch', 'add_flow', 'int_br0', {'in_port': '#STEP[1][1]', \
+ 'dl_type': '0x0800', 'nw_proto': '17', 'udp_dst': '1', \
+ 'actions': ['output:#STEP[2][1]'], 'idle_timeout': '0'}],
+ ['vswitch', 'add_flow', 'int_br0', {'in_port': '#STEP[1][1]', \
+ 'dl_type': '0x0800', 'nw_proto': '17', 'udp_dst': '2', \
+ 'actions': ['output:#STEP[2][1]'], 'idle_timeout': '0'}],
+ ['vswitch', 'add_flow', 'int_br0', {'in_port': '#STEP[1][1]', \
+ 'dl_type': '0x0800', 'nw_proto': '17', 'udp_dst': '3', \
+ 'actions': ['output:#STEP[2][1]'], 'idle_timeout': '0'}],
+ # Send mono-dir traffic
+ ['trafficgen', 'send_traffic', \
+ {'traffic_type' : 'rfc2544_continuous', \
+ 'bidir' : 'False'}],
+ # Clean up
+ ['vswitch', 'del_flow', 'int_br0'],
+ ['vswitch', 'del_port', 'int_br0', '#STEP[1][0]'],
+ ['vswitch', 'del_port', 'int_br0', '#STEP[2][0]'],
+ ['vswitch', 'del_switch', 'int_br0'],
+ ]
+ },
+
+To run the test:
+
+ .. code-block:: console
+
+ ./vsperf --conf-file user_settings.py --integration multistream_l4
+
+PVP with a VM Replacement
+^^^^^^^^^^^^^^^^^^^^^^^^^
+
+This example launches a first VM in a PVP topology, which is then replaced
+by another VM.
+When the VNF setup parameter in ./conf/04_vnf.conf is "QemuDpdkVhostUser", the
+'add_vport' method creates a 'dpdkvhostuser' type port to connect a VM.
+
+.. code-block:: python
+
+ {
+ "Name": "ex_replace_vm",
+ "Description": "PVP with VM replacement",
+ "Deployment": "clean",
+ "TestSteps": [
+ ['vswitch', 'add_switch', 'int_br0'], # STEP 0
+ ['vswitch', 'add_phy_port', 'int_br0'], # STEP 1
+ ['vswitch', 'add_phy_port', 'int_br0'], # STEP 2
+ ['vswitch', 'add_vport', 'int_br0'], # STEP 3 vm1
+ ['vswitch', 'add_vport', 'int_br0'], # STEP 4
+
+ # Setup Flows
+ ['vswitch', 'add_flow', 'int_br0', {'in_port': '#STEP[1][1]', \
+ 'actions': ['output:#STEP[3][1]'], 'idle_timeout': '0'}],
+ ['vswitch', 'add_flow', 'int_br0', {'in_port': '#STEP[4][1]', \
+ 'actions': ['output:#STEP[2][1]'], 'idle_timeout': '0'}],
+ ['vswitch', 'add_flow', 'int_br0', {'in_port': '#STEP[2][1]', \
+ 'actions': ['output:#STEP[4][1]'], 'idle_timeout': '0'}],
+ ['vswitch', 'add_flow', 'int_br0', {'in_port': '#STEP[3][1]', \
+ 'actions': ['output:#STEP[1][1]'], 'idle_timeout': '0'}],
+
+ # Start VM 1
+ ['vnf1', 'start'],
+ # Now we want to replace VM 1 with another VM
+ ['vnf1', 'stop'],
+
+ ['vswitch', 'add_vport', 'int_br0'], # STEP 11 vm2
+ ['vswitch', 'add_vport', 'int_br0'], # STEP 12
+ ['vswitch', 'del_flow', 'int_br0'],
+ ['vswitch', 'add_flow', 'int_br0', {'in_port': '#STEP[1][1]', \
+ 'actions': ['output:#STEP[11][1]'], 'idle_timeout': '0'}],
+ ['vswitch', 'add_flow', 'int_br0', {'in_port': '#STEP[12][1]', \
+ 'actions': ['output:#STEP[2][1]'], 'idle_timeout': '0'}],
+
+ # Start VM 2
+ ['vnf2', 'start'],
+ ['vnf2', 'stop'],
+ ['vswitch', 'dump_flows', 'int_br0'],
+
+ # Clean up
+ ['vswitch', 'del_flow', 'int_br0'],
+ ['vswitch', 'del_port', 'int_br0', '#STEP[1][0]'],
+ ['vswitch', 'del_port', 'int_br0', '#STEP[2][0]'],
+ ['vswitch', 'del_port', 'int_br0', '#STEP[3][0]'], # vm1
+ ['vswitch', 'del_port', 'int_br0', '#STEP[4][0]'],
+ ['vswitch', 'del_port', 'int_br0', '#STEP[11][0]'], # vm2
+ ['vswitch', 'del_port', 'int_br0', '#STEP[12][0]'],
+ ['vswitch', 'del_switch', 'int_br0'],
+ ]
+ },
+
+To run the test:
+
+ .. code-block:: console
+
+ ./vsperf --conf-file user_settings.py --integration ex_replace_vm
+
+VM with a Linux bridge
+^^^^^^^^^^^^^^^^^^^^^^
+
+This example sets up a PVP topology and routes traffic to the VM based on
+the destination IP address. A command-line parameter is used to select a Linux
+bridge as the guest loopback application. It is also possible to select a guest
+loopback application via the configuration option ``GUEST_LOOPBACK``.
+
+.. code-block:: python
+
+ {
+ "Name": "ex_pvp_rule_l3da",
+ "Description": "PVP with flow on L3 Dest Addr",
+ "Deployment": "clean",
+ "TestSteps": [
+ ['vswitch', 'add_switch', 'int_br0'], # STEP 0
+ ['vswitch', 'add_phy_port', 'int_br0'], # STEP 1
+ ['vswitch', 'add_phy_port', 'int_br0'], # STEP 2
+ ['vswitch', 'add_vport', 'int_br0'], # STEP 3 vm1
+ ['vswitch', 'add_vport', 'int_br0'], # STEP 4
+ # Setup Flows
+ ['vswitch', 'add_flow', 'int_br0', {'in_port': '#STEP[1][1]', \
+ 'dl_type': '0x0800', 'nw_dst': '90.90.90.90', \
+ 'actions': ['output:#STEP[3][1]'], 'idle_timeout': '0'}],
+ # Each pkt from the VM is forwarded to the 2nd dpdk port
+ ['vswitch', 'add_flow', 'int_br0', {'in_port': '#STEP[4][1]', \
+ 'actions': ['output:#STEP[2][1]'], 'idle_timeout': '0'}],
+ # Start VMs
+ ['vnf1', 'start'],
+ ['trafficgen', 'send_traffic', \
+ {'traffic_type' : 'rfc2544_continuous', \
+ 'bidir' : 'False'}],
+ ['vnf1', 'stop'],
+ # Clean up
+ ['vswitch', 'dump_flows', 'int_br0'], # STEP 10
+ ['vswitch', 'del_flow', 'int_br0'], # STEP 11
+ ['vswitch', 'del_port', 'int_br0', '#STEP[1][0]'],
+ ['vswitch', 'del_port', 'int_br0', '#STEP[2][0]'],
+ ['vswitch', 'del_port', 'int_br0', '#STEP[3][0]'], # vm1 ports
+ ['vswitch', 'del_port', 'int_br0', '#STEP[4][0]'],
+ ['vswitch', 'del_switch', 'int_br0'],
+ ]
+ },
+
+To run the test:
+
+ .. code-block:: console
+
+ ./vsperf --conf-file user_settings.py --test-params \
+ "GUEST_LOOPBACK=['linux_bridge']" --integration ex_pvp_rule_l3da
+
+Forward packets based on UDP port
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+This example launches 2 VMs connected in parallel.
+Incoming packets will be forwarded to one specific VM depending on the
+destination UDP port.
+
+.. code-block:: python
+
+ {
+ "Name": "ex_2pvp_rule_l4dp",
+ "Description": "2 PVP with flows on L4 Dest Port",
+ "Deployment": "clean",
+ "Parameters": {
+ 'TRAFFIC' : {
+ "multistream": 2,
+ "stream_type": "L4",
+ },
+ },
+ "TestSteps": [
+ ['vswitch', 'add_switch', 'int_br0'], # STEP 0
+ ['vswitch', 'add_phy_port', 'int_br0'], # STEP 1
+ ['vswitch', 'add_phy_port', 'int_br0'], # STEP 2
+ ['vswitch', 'add_vport', 'int_br0'], # STEP 3 vm1
+ ['vswitch', 'add_vport', 'int_br0'], # STEP 4
+ ['vswitch', 'add_vport', 'int_br0'], # STEP 5 vm2
+ ['vswitch', 'add_vport', 'int_br0'], # STEP 6
+ # Setup Flows to reply ICMPv6 and similar packets, so to
+ # avoid flooding internal port with their re-transmissions
+ ['vswitch', 'add_flow', 'int_br0', \
+ {'priority': '1', 'dl_src': '00:00:00:00:00:01', \
+ 'actions': ['output:#STEP[3][1]'], 'idle_timeout': '0'}],
+ ['vswitch', 'add_flow', 'int_br0', \
+ {'priority': '1', 'dl_src': '00:00:00:00:00:02', \
+ 'actions': ['output:#STEP[4][1]'], 'idle_timeout': '0'}],
+ ['vswitch', 'add_flow', 'int_br0', \
+ {'priority': '1', 'dl_src': '00:00:00:00:00:03', \
+ 'actions': ['output:#STEP[5][1]'], 'idle_timeout': '0'}],
+ ['vswitch', 'add_flow', 'int_br0', \
+ {'priority': '1', 'dl_src': '00:00:00:00:00:04', \
+ 'actions': ['output:#STEP[6][1]'], 'idle_timeout': '0'}],
+ # Forward UDP packets depending on dest port
+ ['vswitch', 'add_flow', 'int_br0', {'in_port': '#STEP[1][1]', \
+ 'dl_type': '0x0800', 'nw_proto': '17', 'udp_dst': '0', \
+ 'actions': ['output:#STEP[3][1]'], 'idle_timeout': '0'}],
+ ['vswitch', 'add_flow', 'int_br0', {'in_port': '#STEP[1][1]', \
+ 'dl_type': '0x0800', 'nw_proto': '17', 'udp_dst': '1', \
+ 'actions': ['output:#STEP[5][1]'], 'idle_timeout': '0'}],
+ # Send VM output to phy port #2
+ ['vswitch', 'add_flow', 'int_br0', {'in_port': '#STEP[4][1]', \
+ 'actions': ['output:#STEP[2][1]'], 'idle_timeout': '0'}],
+ ['vswitch', 'add_flow', 'int_br0', {'in_port': '#STEP[6][1]', \
+ 'actions': ['output:#STEP[2][1]'], 'idle_timeout': '0'}],
+ # Start VMs
+ ['vnf1', 'start'], # STEP 16
+ ['vnf2', 'start'], # STEP 17
+ ['trafficgen', 'send_traffic', \
+ {'traffic_type' : 'rfc2544_continuous', \
+ 'bidir' : 'False'}],
+ ['vnf1', 'stop'],
+ ['vnf2', 'stop'],
+ ['vswitch', 'dump_flows', 'int_br0'],
+ # Clean up
+ ['vswitch', 'del_flow', 'int_br0'],
+ ['vswitch', 'del_port', 'int_br0', '#STEP[1][0]'],
+ ['vswitch', 'del_port', 'int_br0', '#STEP[2][0]'],
+ ['vswitch', 'del_port', 'int_br0', '#STEP[3][0]'], # vm1 ports
+ ['vswitch', 'del_port', 'int_br0', '#STEP[4][0]'],
+ ['vswitch', 'del_port', 'int_br0', '#STEP[5][0]'], # vm2 ports
+ ['vswitch', 'del_port', 'int_br0', '#STEP[6][0]'],
+ ['vswitch', 'del_switch', 'int_br0'],
+ ]
+ },
+
+To run the test:
+
+ .. code-block:: console
+
+ ./vsperf --conf-file user_settings.py --integration ex_2pvp_rule_l4dp
+
+Modification of existing PVVP deployment
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+This is an example of modifying a standard deployment scenario with additional TestSteps.
+The standard PVVP scenario is used to configure a vSwitch and to deploy two VNFs connected
+in series. Additional TestSteps deploy a 3rd VNF and connect it in parallel to the
+already configured VNFs. The traffic generator is instructed (via the Multistream feature)
+to send two separate traffic streams. One stream will be sent to the standalone VNF and
+the second to the two chained VNFs.
+
+If the test is defined as a performance test, traffic results will be collected
+and made available in both csv and rst report files.
+
+.. code-block:: python
+
+ {
+ "Name": "pvvp_pvp_cont",
+ "Deployment": "pvvp",
+ "Description": "PVVP and PVP in parallel with Continuous Stream",
+ "Parameters" : {
+ "TRAFFIC" : {
+ "traffic_type" : "rfc2544_continuous",
+ "multistream": 2,
+ },
+ },
+ "TestSteps": [
+ ['vswitch', 'add_vport', 'br0'],
+ ['vswitch', 'add_vport', 'br0'],
+ # priority must be higher than default 32768, otherwise flows won't match
+ ['vswitch', 'add_flow', 'br0',
+ {'in_port': '1', 'actions': ['output:#STEP[-2][1]'], 'idle_timeout': '0', 'dl_type':'0x0800',
+ 'nw_proto':'17', 'tp_dst':'0', 'priority': '33000'}],
+ ['vswitch', 'add_flow', 'br0',
+ {'in_port': '2', 'actions': ['output:#STEP[-2][1]'], 'idle_timeout': '0', 'dl_type':'0x0800',
+ 'nw_proto':'17', 'tp_dst':'0', 'priority': '33000'}],
+ ['vswitch', 'add_flow', 'br0', {'in_port': '#STEP[-4][1]', 'actions': ['output:1'],
+ 'idle_timeout': '0'}],
+ ['vswitch', 'add_flow', 'br0', {'in_port': '#STEP[-4][1]', 'actions': ['output:2'],
+ 'idle_timeout': '0'}],
+ ['vswitch', 'dump_flows', 'br0'],
+ ['vnf1', 'start'],
+ ]
+ },
+
+To run the test:
+
+ .. code-block:: console
+
+ ./vsperf --conf-file user_settings.py pvvp_pvp_cont
+
diff --git a/docs/testing/user/userguide/testusage.rst b/docs/testing/user/userguide/testusage.rst
new file mode 100644
index 00000000..c6037aaf
--- /dev/null
+++ b/docs/testing/user/userguide/testusage.rst
@@ -0,0 +1,848 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, Intel Corporation, AT&T and others.
+
+vSwitchPerf test suites userguide
+---------------------------------
+
+General
+^^^^^^^
+
+VSPERF requires a traffic generator to run tests. Automated traffic generator
+support in VSPERF includes:
+
+- IXIA traffic generator (IxNetwork hardware) and a machine that runs the IXIA
+ client software.
+- Spirent traffic generator (TestCenter hardware chassis or TestCenter virtual
+ in a VM) and a VM to run the Spirent Virtual Deployment Service image,
+ formerly known as "Spirent LabServer".
+- Xena Network traffic generator (Xena hardware chassis) that houses the Xena
+ Traffic generator modules.
+- Moongen software traffic generator. Requires a separate machine running
+ moongen to execute packet generation.
+
+If you want to use another traffic generator, please select the :ref:`trafficgen-dummy`
+generator.
+
+VSPERF Installation
+^^^^^^^^^^^^^^^^^^^
+
+To see the supported Operating Systems, vSwitches and system requirements,
+please follow the :ref:`installation instructions <vsperf-installation>`.
+
+Traffic Generator Setup
+^^^^^^^^^^^^^^^^^^^^^^^
+
+Follow the :ref:`traffic generator instructions <trafficgen-installation>` to
+install and configure a suitable traffic generator.
+
+Cloning and building src dependencies
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+In order to run VSPERF, you will need to download DPDK and OVS. You can
+do this manually and build them in a preferred location, or you can
+use vswitchperf/src. The vswitchperf/src directory contains makefiles
+that will allow you to clone and build the libraries that VSPERF depends
+on, such as DPDK and OVS. To clone and build, simply run:
+
+.. code-block:: console
+
+ $ cd src
+ $ make
+
+VSPERF can be used with stock OVS (without DPDK support). When the build
+is finished, the libraries are stored in the src_vanilla directory.
+
+The 'make' builds all options in src:
+
+* Vanilla OVS
+* OVS with vhost_user as the guest access method (with DPDK support)
+
+The vhost_user build will reside in src/ovs/.
+The Vanilla OVS build will reside in vswitchperf/src_vanilla.
+
+To delete a src subdirectory and its contents so that it can be re-cloned,
+simply use:
+
+.. code-block:: console
+
+ $ make clobber
+
+Configure the ``./conf/10_custom.conf`` file
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+The ``10_custom.conf`` file is the configuration file that overrides
+default configurations in all the other configuration files in ``./conf``.
+The supplied ``10_custom.conf`` file **MUST** be modified, as it contains
+configuration items for which there are no reasonable default values.
+
+The configuration items that can be added are not limited to the initial
+contents. Any configuration item mentioned in any .conf file in the
+``./conf`` directory can be added and that item will be overridden by
+the custom configuration value.
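+
+As an illustration, a minimal ``10_custom.conf`` could override just a couple of
+items; the values below are hypothetical and must be adapted to your setup:
+
+.. code-block:: python
+
+    # any parameter defined in the ./conf/*.conf files can be overridden here
+    TRAFFICGEN = 'Dummy'            # example: use the dummy traffic generator
+    TRAFFICGEN_DURATION = 30        # example: seconds per traffic run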
+
+Further details about configuration file evaluation and the special behaviour
+of options with the ``GUEST_`` prefix can be found in the :ref:`design document
+<design-configuration>`.
+
+Using a custom settings file
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+If your ``10_custom.conf`` doesn't reside in the ``./conf`` directory
+or if you want to use an alternative configuration file, the file can
+be passed to ``vsperf`` via the ``--conf-file`` argument.
+
+.. code-block:: console
+
+ $ ./vsperf --conf-file <path_to_custom_conf> ...
+
+Note that configuration passed in via the environment (``--load-env``)
+or via another command line argument will override both the default and
+your custom configuration files. This "priority hierarchy" can be
+described like so (1 = max priority; an example follows the list):
+
+1. Testcase definition section ``Parameters``
+2. Command line arguments
+3. Environment variables
+4. Configuration file(s)
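+
+For example (hypothetical values), assuming ``user_settings.py`` defines
+``TRAFFICGEN_DURATION = 60``, the command line below lowers the duration to 10
+for this run; a ``Parameters`` entry for the same item inside the executed
+testcase would in turn take precedence over the command line value:
+
+.. code-block:: console
+
+    $ ./vsperf --conf-file user_settings.py \
+        --test-params "TRAFFICGEN_DURATION=10" phy2phy_tput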
+
+Further details about configuration file evaluation and the special behaviour
+of options with the ``GUEST_`` prefix can be found in the :ref:`design document
+<design-configuration>`.
+
+.. _overriding-parameters-documentation:
+
+Overriding values defined in configuration files
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+The configuration items can be overridden by the command line argument
+``--test-params``. In this case, the configuration items and
+their values should be passed in the form of ``item=value`` and separated
+by semicolons.
+
+Example:
+
+.. code:: console
+
+ $ ./vsperf --test-params "TRAFFICGEN_DURATION=10;TRAFFICGEN_PKT_SIZES=(128,);" \
+ "GUEST_LOOPBACK=['testpmd','l2fwd']" pvvp_tput
+
+The second option is to override configuration items via the ``Parameters`` section
+of the test case definition. The configuration items can be added into the ``Parameters``
+dictionary with their new values. These values will override values defined in
+configuration files or specified by the ``--test-params`` command line argument.
+
+Example:
+
+.. code:: python
+
+ "Parameters" : {'TRAFFICGEN_PKT_SIZES' : (128,),
+ 'TRAFFICGEN_DURATION' : 10,
+ 'GUEST_LOOPBACK' : ['testpmd','l2fwd'],
+ }
+
+**NOTE:** In both cases, configuration item names and their values must be specified
+in the same form as they are defined inside the configuration files. Parameter names
+must be specified in uppercase and the data types of the original and new values must
+match. Python syntax rules related to data types and structures must be followed.
+For example, the parameter ``TRAFFICGEN_PKT_SIZES`` above is defined as a tuple
+with a single value ``128``. In this case the trailing comma is mandatory, otherwise
+the value would be wrongly interpreted as a number instead of a tuple and vsperf
+execution would fail. Please check the configuration files for default values and their
+types and use them as a basis for any customized values. In case of any doubt, please
+check the official python documentation related to data structures like tuples, lists
+and dictionaries.
+
+**NOTE:** Vsperf execution will terminate with a runtime error if an unknown
+parameter name is passed via the ``--test-params`` CLI argument or defined in the
+``Parameters`` section of a test case definition. It is also forbidden to redefine
+the value of the ``TEST_PARAMS`` configuration item via the CLI or the ``Parameters`` section.
+
+vloop_vnf
+^^^^^^^^^
+
+VSPERF uses a VM image called vloop_vnf for looping traffic in the deployment
+scenarios involving VMs. The image can be downloaded from
+`<http://artifacts.opnfv.org/>`__.
+
+Please see the installation instructions for information on :ref:`vloop-vnf`
+images.
+
+.. _l2fwd-module:
+
+l2fwd Kernel Module
+^^^^^^^^^^^^^^^^^^^
+
+A kernel module that provides OSI Layer 2 IPv4 termination or forwarding with
+support for Destination Network Address Translation (DNAT) of both the MAC and
+IP addresses. l2fwd can be found in <vswitchperf_dir>/src/l2fwd.
+
+Executing tests
+^^^^^^^^^^^^^^^
+
+All examples inside these docs assume that the user is inside the VSPERF
+directory. VSPERF can, however, be executed from any directory.
+
+Before running any tests make sure you have root permissions by adding
+the following line to /etc/sudoers:
+
+.. code-block:: console
+
+ username ALL=(ALL) NOPASSWD: ALL
+
+The username in the example above should be replaced with a real username.
+
+To list the available tests:
+
+.. code-block:: console
+
+ $ ./vsperf --list
+
+To run a single test:
+
+.. code-block:: console
+
+ $ ./vsperf $TESTNAME
+
+Where $TESTNAME is the name of the vsperf test you would like to run.
+
+To run a group of tests, for example all tests with a name containing
+'RFC2544':
+
+.. code-block:: console
+
+ $ ./vsperf --conf-file=<path_to_custom_conf>/10_custom.conf --tests="RFC2544"
+
+To run all tests:
+
+.. code-block:: console
+
+ $ ./vsperf --conf-file=<path_to_custom_conf>/10_custom.conf
+
+Some tests allow for configurable parameters, including test duration
+(in seconds) as well as packet sizes (in bytes).
+
+.. code:: bash
+
+ $ ./vsperf --conf-file user_settings.py \
+ --tests RFC2544Tput \
+ --test-params "TRAFFICGEN_DURATION=10;TRAFFICGEN_PKT_SIZES=(128,)"
+
+For all available options, check out the help dialog:
+
+.. code-block:: console
+
+ $ ./vsperf --help
+
+Executing Vanilla OVS tests
+^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+1. If needed, recompile src for all OVS variants
+
+ .. code-block:: console
+
+ $ cd src
+ $ make distclean
+ $ make
+
+2. Update your ``10_custom.conf`` file to use Vanilla OVS:
+
+ .. code-block:: python
+
+ VSWITCH = 'OvsVanilla'
+
+3. Run test:
+
+ .. code-block:: console
+
+ $ ./vsperf --conf-file=<path_to_custom_conf>
+
+ Please note if you don't want to configure Vanilla OVS through the
+ configuration file, you can pass it as a CLI argument.
+
+ .. code-block:: console
+
+ $ ./vsperf --vswitch OvsVanilla
+
+
+Executing tests with VMs
+^^^^^^^^^^^^^^^^^^^^^^^^
+
+To run tests using vhost-user as guest access method:
+
+1. Set ``VSWITCH`` and ``VNF`` in your settings file to:
+
+ .. code-block:: python
+
+ VSWITCH = 'OvsDpdkVhost'
+ VNF = 'QemuDpdkVhost'
+
+2. If needed, recompile src for all OVS variants
+
+ .. code-block:: console
+
+ $ cd src
+ $ make distclean
+ $ make
+
+3. Run test:
+
+ .. code-block:: console
+
+ $ ./vsperf --conf-file=<path_to_custom_conf>/10_custom.conf
+
+Executing tests with VMs using Vanilla OVS
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+To run tests using Vanilla OVS:
+
+1. Set the following variables:
+
+ .. code-block:: python
+
+ VSWITCH = 'OvsVanilla'
+ VNF = 'QemuVirtioNet'
+
+ VANILLA_TGEN_PORT1_IP = n.n.n.n
+ VANILLA_TGEN_PORT1_MAC = nn:nn:nn:nn:nn:nn
+
+ VANILLA_TGEN_PORT2_IP = n.n.n.n
+ VANILLA_TGEN_PORT2_MAC = nn:nn:nn:nn:nn:nn
+
+ VANILLA_BRIDGE_IP = n.n.n.n
+
+ or use ``--test-params`` option
+
+ .. code-block:: console
+
+ $ ./vsperf --conf-file=<path_to_custom_conf>/10_custom.conf \
+ --test-params "VANILLA_TGEN_PORT1_IP=n.n.n.n;" \
+ "VANILLA_TGEN_PORT1_MAC=nn:nn:nn:nn:nn:nn;" \
+ "VANILLA_TGEN_PORT2_IP=n.n.n.n;" \
+ "VANILLA_TGEN_PORT2_MAC=nn:nn:nn:nn:nn:nn"
+
+2. If needed, recompile src for all OVS variants
+
+ .. code-block:: console
+
+ $ cd src
+ $ make distclean
+ $ make
+
+3. Run test:
+
+ .. code-block:: console
+
+     $ ./vsperf --conf-file=<path_to_custom_conf>/10_custom.conf
+
+.. _vpp-test:
+
+Executing VPP tests
+^^^^^^^^^^^^^^^^^^^
+
+Currently it is not possible to use standard scenario deployments for the execution of
+tests with VPP. This means that the deployments ``p2p``, ``pvp``, ``pvvp`` and in general any
+:ref:`pxp-deployment` won't work with VPP. However, it is possible to use VPP in
+:ref:`step-driven-tests`. A basic set of VPP testcases covering ``phy2phy``, ``pvp``
+and ``pvvp`` tests is already prepared.
+
+The list of performance tests with VPP support follows:
+
+* phy2phy_tput_vpp: VPP: LTD.Throughput.RFC2544.PacketLossRatio
+* phy2phy_cont_vpp: VPP: Phy2Phy Continuous Stream
+* phy2phy_back2back_vpp: VPP: LTD.Throughput.RFC2544.BackToBackFrames
+* pvp_tput_vpp: VPP: LTD.Throughput.RFC2544.PacketLossRatio
+* pvp_cont_vpp: VPP: PVP Continuous Stream
+* pvp_back2back_vpp: VPP: LTD.Throughput.RFC2544.BackToBackFrames
+* pvvp_tput_vpp: VPP: LTD.Throughput.RFC2544.PacketLossRatio
+* pvvp_cont_vpp: VPP: PVP Continuous Stream
+* pvvp_back2back_vpp: VPP: LTD.Throughput.RFC2544.BackToBackFrames
+
+In order to execute testcases with VPP it is required to:
+
+* install VPP manually, see :ref:`vpp-installation`
+* configure ``WHITELIST_NICS``, with two physical NICs connected to the traffic generator
+* configure traffic generator, see :ref:`trafficgen-installation`
+
+After that it is possible to execute VPP testcases listed above.
+
+For example:
+
+.. code-block:: console
+
+ $ ./vsperf --conf-file=<path_to_custom_conf> phy2phy_tput_vpp
+
+.. _vfio-pci:
+
+Using vfio_pci with DPDK
+^^^^^^^^^^^^^^^^^^^^^^^^^
+
+To use vfio with DPDK instead of igb_uio add into your custom configuration
+file the following parameter:
+
+.. code-block:: python
+
+ PATHS['dpdk']['src']['modules'] = ['uio', 'vfio-pci']
+
+
+**NOTE:** If DPDK is installed from a binary package, then please
+set ``PATHS['dpdk']['bin']['modules']`` instead.
+
+**NOTE:** Please ensure that Intel VT-d is enabled in BIOS.
+
+**NOTE:** Please ensure your boot/grub parameters include
+the following:
+
+.. code-block:: console
+
+ iommu=pt intel_iommu=on
+
+To check that IOMMU is enabled on your platform:
+
+.. code-block:: console
+
+ $ dmesg | grep IOMMU
+ [ 0.000000] Intel-IOMMU: enabled
+ [ 0.139882] dmar: IOMMU 0: reg_base_addr fbffe000 ver 1:0 cap d2078c106f0466 ecap f020de
+ [ 0.139888] dmar: IOMMU 1: reg_base_addr ebffc000 ver 1:0 cap d2078c106f0466 ecap f020de
+ [ 0.139893] IOAPIC id 2 under DRHD base 0xfbffe000 IOMMU 0
+ [ 0.139894] IOAPIC id 0 under DRHD base 0xebffc000 IOMMU 1
+ [ 0.139895] IOAPIC id 1 under DRHD base 0xebffc000 IOMMU 1
+ [ 3.335744] IOMMU: dmar0 using Queued invalidation
+ [ 3.335746] IOMMU: dmar1 using Queued invalidation
+ ....
+
+.. _SRIOV-support:
+
+Using SRIOV support
+^^^^^^^^^^^^^^^^^^^
+
+To use virtual functions of NIC with SRIOV support, use extended form
+of NIC PCI slot definition:
+
+.. code-block:: python
+
+ WHITELIST_NICS = ['0000:05:00.0|vf0', '0000:05:00.1|vf3']
+
+Where 'vf' indicates virtual function usage and the following
+number defines the VF to be used. If VF usage is detected,
+vswitchperf will enable SRIOV support for the given card and it will
+detect the PCI slot numbers of the selected VFs.
+
+So in the example above, one VF will be configured for NIC '0000:05:00.0'
+and four VFs will be configured for NIC '0000:05:00.1'. Vswitchperf
+will detect the PCI addresses of the selected VFs and use them during
+test execution.
+
+At the end of vswitchperf execution, SRIOV support will be disabled.
+
+SRIOV support is generic and it can be used in different testing scenarios,
+for example (see the sketch after this list):
+
+* vSwitch tests with or without DPDK support, to verify the impact
+  of VF usage on vSwitch performance
+* tests without a vSwitch, where traffic is forwarded directly
+  between VF interfaces by a packet forwarder (e.g. the testpmd application)
+* tests without a vSwitch, where the VM accesses VF interfaces directly
+  by PCI-passthrough_ to measure raw VM throughput performance.
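+
+For instance, the second scenario (vSwitch disabled, testpmd forwarding between
+two VFs) could be sketched as follows; the NIC addresses below are illustrative
+only:
+
+.. code-block:: console
+
+    $ ./vsperf --conf-file user_settings.py --vswitch none --fwdapp TestPMD \
+          --test-params "WHITELIST_NICS=['0000:05:00.0|vf0','0000:05:00.1|vf0']" \
+          phy2phy_tput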
+
+.. _PCI-passthrough:
+
+Using QEMU with PCI passthrough support
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Raw virtual machine throughput performance can be measured by executing a PVP
+test with direct access to NICs by PCI passthrough. To execute a VM with direct
+access to PCI devices, enable vfio-pci_. In order to use virtual functions,
+SRIOV-support_ must be enabled.
+
+Execution of test with PCI passthrough with vswitch disabled:
+
+.. code-block:: console
+
+ $ ./vsperf --conf-file=<path_to_custom_conf>/10_custom.conf \
+ --vswitch none --vnf QemuPciPassthrough pvp_tput
+
+Any of supported guest-loopback-application_ can be used inside VM with
+PCI passthrough support.
+
+Note: QEMU with PCI passthrough support can be used only with the PVP test
+deployment.
+
+.. _guest-loopback-application:
+
+Selection of loopback application for tests with VMs
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+To select the loopback applications which will forward packets inside VMs,
+the following parameter should be configured:
+
+.. code-block:: python
+
+ GUEST_LOOPBACK = ['testpmd']
+
+or use ``--test-params`` CLI argument:
+
+.. code-block:: console
+
+ $ ./vsperf --conf-file=<path_to_custom_conf>/10_custom.conf \
+ --test-params "GUEST_LOOPBACK=['testpmd']"
+
+Supported loopback applications are:
+
+.. code-block:: console
+
+ 'testpmd' - testpmd from dpdk will be built and used
+ 'l2fwd' - l2fwd module provided by Huawei will be built and used
+ 'linux_bridge' - linux bridge will be configured
+ 'buildin' - nothing will be configured by vsperf; VM image must
+ ensure traffic forwarding between its interfaces
+
+A guest loopback application must be configured, otherwise traffic
+will not be forwarded by the VM and testcases with VM related deployments
+will fail. The guest loopback application is set to 'testpmd' by default.
+
+**NOTE:** If only 1 or more than 2 NICs are configured for the VM,
+then 'testpmd' should be used, as it is able to forward traffic between
+multiple VM NIC pairs.
+
+**NOTE:** In case of linux_bridge, all guest NICs are connected to the same
+bridge inside the guest.
+
+Mergeable Buffers Options with QEMU
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Mergeable buffers can be disabled with VSPerf within QEMU. This option can
+increase performance significantly when not using jumbo frame sized packets.
+By default VSPerf disables mergeable buffers. If you wish to enable them, you
+can modify the setting in a custom conf file.
+
+.. code-block:: python
+
+ GUEST_NIC_MERGE_BUFFERS_DISABLE = [False]
+
+Then execute using the custom conf file.
+
+.. code-block:: console
+
+ $ ./vsperf --conf-file=<path_to_custom_conf>/10_custom.conf
+
+Alternatively, you can just pass the parameter during execution:
+
+.. code-block:: console
+
+ $ ./vsperf --test-params "GUEST_NIC_MERGE_BUFFERS_DISABLE=[False]"
+
+
+Selection of dpdk binding driver for tests with VMs
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+To select the DPDK binding driver, which specifies the driver the VM NICs will
+use for DPDK binding, the following configuration parameter should be configured:
+
+.. code-block:: python
+
+ GUEST_DPDK_BIND_DRIVER = ['igb_uio_from_src']
+
+The supported dpdk guest bind drivers are:
+
+.. code-block:: console
+
+ 'uio_pci_generic' - Use uio_pci_generic driver
+ 'igb_uio_from_src' - Build and use the igb_uio driver from the dpdk src
+ files
+ 'vfio_no_iommu' - Use vfio with no iommu option. This requires custom
+ guest images that support this option. The default
+ vloop image does not support this driver.
+
+Note: uio_pci_generic does not support SR-IOV testcases with guests attached.
+This is because uio_pci_generic only supports legacy interrupts. If
+uio_pci_generic is selected with the VNF set to QemuPciPassthrough, it will be
+modified to use igb_uio_from_src instead.
+
+Note: vfio_no_iommu requires kernels equal to or greater than 4.5 and dpdk
+16.04 or greater. Using this option will also taint the kernel.
+
+Please refer to the dpdk documents at http://dpdk.org/doc/guides for more
+information on these drivers.
+
+Multi-Queue Configuration
+^^^^^^^^^^^^^^^^^^^^^^^^^
+
+VSPerf currently supports multi-queue with the following limitations:
+
+1. Requires QEMU 2.5 or greater and any OVS version higher than 2.5. The
+   default upstream package versions installed by VSPerf satisfy this
+   requirement.
+
+2. Guest image must have ethtool utility installed if using l2fwd or linux
+ bridge inside guest for loopback.
+
+3. If using OVS version 2.5.0 or less, enable old style multi-queue as shown
+   in the ``02_vswitch.conf`` file.
+
+ .. code-block:: python
+
+ OVS_OLD_STYLE_MQ = True
+
+To enable multi-queue for DPDK, modify the ``02_vswitch.conf`` file.
+
+.. code-block:: python
+
+ VSWITCH_DPDK_MULTI_QUEUES = 2
+
+**NOTE:** You should consider using switch affinity to set a PMD CPU mask
+that can optimize your performance. Consider the NUMA node of the NIC in use,
+if applicable, by checking /sys/class/net/<eth_name>/device/numa_node and
+setting an appropriate mask to create PMD threads on the same NUMA node.
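+
+A sketch of that check and a corresponding override follows; the interface
+name, the mask value and the assumption that your configuration exposes a
+``VSWITCH_PMD_CPU_MASK`` item are illustrative only:
+
+.. code-block:: console
+
+    $ cat /sys/class/net/eth1/device/numa_node
+    0
+
+.. code-block:: python
+
+    # pin PMD threads to cores 4 and 5, assumed to be on NUMA node 0
+    VSWITCH_PMD_CPU_MASK = '0x30'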
+
+When multi-queue is enabled, each dpdk or dpdkvhostuser port that is created
+on the switch will set the option for multiple queues. If old style multi-queue
+has been enabled, a global option for multi-queue will be used instead of the
+port by port option.
+
+To enable multi-queue on the guest, modify the ``04_vnf.conf`` file.
+
+.. code-block:: python
+
+ GUEST_NIC_QUEUES = [2]
+
+Enabling multi-queue at the guest will add multiple queues to each NIC port when
+qemu launches the guest.
+
+In case of Vanilla OVS, multi-queue is enabled on the tuntap ports and NIC
+queues will be enabled inside the guest with ethtool. Simply enabling
+multi-queue on the guest is sufficient for Vanilla OVS multi-queue.
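+
+Conceptually, the in-guest step is similar to the following command; the
+interface name and queue count are illustrative and vsperf performs this step
+automatically:
+
+.. code-block:: console
+
+    # enable 2 combined rx/tx queue channels on the guest NIC
+    $ ethtool -L eth0 combined 2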
+
+Testpmd should be configured to take advantage of multi-queue on the guest if
+using DPDKVhostUser. This can be done by modifying the ``04_vnf.conf`` file.
+
+.. code-block:: python
+
+ GUEST_TESTPMD_PARAMS = ['-l 0,1,2,3,4 -n 4 --socket-mem 512 -- '
+ '--burst=64 -i --txqflags=0xf00 '
+ '--nb-cores=4 --rxq=2 --txq=2 '
+ '--disable-hw-vlan']
+
+**NOTE:** The guest SMP cores must be configured to allow for testpmd to use the
+optimal number of cores to take advantage of the multiple guest queues.
+
+In case of using Vanilla OVS and qemu virtio-net, you can increase performance
+by binding vhost-net threads to CPUs. This can be done by enabling the affinity
+in the ``04_vnf.conf`` file. This can also be done for non multi-queue enabled
+configurations, as there will be 2 vhost-net threads.
+
+.. code-block:: python
+
+ VSWITCH_VHOST_NET_AFFINITIZATION = True
+
+ VSWITCH_VHOST_CPU_MAP = [4,5,8,11]
+
+**NOTE:** This method of binding would require a custom script in a real
+environment.
+
+**NOTE:** For optimal performance, guest SMPs and/or vhost-net threads should be
+on the same NUMA node as the NIC in use, if possible/applicable. Testpmd should be
+assigned at least (nb_cores + 1) total cores with the CPU mask.
+
+Executing Packet Forwarding tests
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+To select the applications which will forward packets,
+the following parameters should be configured:
+
+.. code-block:: python
+
+ VSWITCH = 'none'
+ PKTFWD = 'TestPMD'
+
+or use ``--vswitch`` and ``--fwdapp`` CLI arguments:
+
+.. code-block:: console
+
+ $ ./vsperf phy2phy_cont --conf-file user_settings.py \
+ --vswitch none \
+ --fwdapp TestPMD
+
+Supported Packet Forwarding applications are:
+
+.. code-block:: console
+
+ 'testpmd' - testpmd from dpdk
+
+
+1. Update your ``10_custom.conf`` file to use the appropriate variables
+   for the selected Packet Forwarder:
+
+ .. code-block:: python
+
+ # testpmd configuration
+ TESTPMD_ARGS = []
+ # packet forwarding mode supported by testpmd; Please see DPDK documentation
+ # for comprehensive list of modes supported by your version.
+ # e.g. io|mac|mac_retry|macswap|flowgen|rxonly|txonly|csum|icmpecho|...
+ # Note: Option "mac_retry" has been changed to "mac retry" since DPDK v16.07
+ TESTPMD_FWD_MODE = 'csum'
+ # checksum calculation layer: ip|udp|tcp|sctp|outer-ip
+ TESTPMD_CSUM_LAYER = 'ip'
+ # checksum calculation place: hw (hardware) | sw (software)
+ TESTPMD_CSUM_CALC = 'sw'
+ # recognize tunnel headers: on|off
+ TESTPMD_CSUM_PARSE_TUNNEL = 'off'
+
+2. Run test:
+
+ .. code-block:: console
+
+ $ ./vsperf phy2phy_tput --conf-file <path_to_settings_py>
+
+Executing Packet Forwarding tests with one guest
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+TestPMD with DPDK 16.11 or greater can be used to forward packets as a switch to a single
+guest using the TestPMD vdev option. To set up this configuration, the following parameters
+should be used.
+
+ .. code-block:: python
+
+ VSWITCH = 'none'
+ PKTFWD = 'TestPMD'
+
+or use ``--vswitch`` and ``--fwdapp`` CLI arguments:
+
+ .. code-block:: console
+
+ $ ./vsperf pvp_tput --conf-file user_settings.py \
+ --vswitch none \
+ --fwdapp TestPMD
+
+Guest forwarding application only supports TestPMD in this configuration.
+
+ .. code-block:: python
+
+ GUEST_LOOPBACK = ['testpmd']
+
+For optimal performance, one CPU per port plus one additional CPU should be used for
+TestPMD. Also set additional parameters so that the packet forwarding application uses
+the correct number of nb-cores.
+
+ .. code-block:: python
+
+ DPDK_SOCKET_MEM = ['1024', '0']
+ VSWITCHD_DPDK_ARGS = ['-l', '46,44,42,40,38', '-n', '4']
+ TESTPMD_ARGS = ['--nb-cores=4', '--txq=1', '--rxq=1']
+
+For the guest TestPMD, 3 vCPUs should be assigned, with the following TestPMD parameters:
+
+ .. code-block:: python
+
+ GUEST_TESTPMD_PARAMS = ['-l 0,1,2 -n 4 --socket-mem 1024 -- '
+ '--burst=64 -i --txqflags=0xf00 '
+ '--disable-hw-vlan --nb-cores=2 --txq=1 --rxq=1']
+
+The test using TestPMD can then be run with the following command line:
+
+ .. code-block:: console
+
+ ./vsperf pvp_tput --vswitch=none --fwdapp=TestPMD --conf-file <path_to_settings_py>
+
+**NOTE:** To achieve the best 0% loss numbers with RFC2544 throughput testing, other tunings
+should be applied to the host and guest, such as tuned profiles and CPU tunings, to prevent
+possible interrupts to worker threads.
+
+VSPERF modes of operation
+^^^^^^^^^^^^^^^^^^^^^^^^^
+
+VSPERF can be run in different modes. By default it will configure the vSwitch,
+the traffic generator and the VNF. However, it can be used just for the configuration
+and execution of the traffic generator. Another option is the execution of all
+components except the traffic generator itself.
+
+The mode of operation is driven by the configuration parameter -m or --mode:
+
+.. code-block:: console
+
+ -m MODE, --mode MODE vsperf mode of operation;
+ Values:
+ "normal" - execute vSwitch, VNF and traffic generator
+ "trafficgen" - execute only traffic generator
+ "trafficgen-off" - execute vSwitch and VNF
+ "trafficgen-pause" - execute vSwitch and VNF but wait before traffic transmission
+
+If VSPERF is executed in "trafficgen" mode, then the configuration of the
+traffic generator can be modified through the ``TRAFFIC`` dictionary passed to the
+``--test-params`` option. It is not necessary to specify all values of the ``TRAFFIC``
+dictionary; it is sufficient to specify only the values which should be changed.
+A detailed description of the ``TRAFFIC`` dictionary can be found at
+:ref:`configuration-of-traffic-dictionary`.
+
+Example of execution of VSPERF in "trafficgen" mode:
+
+.. code-block:: console
+
+ $ ./vsperf -m trafficgen --trafficgen IxNet --conf-file vsperf.conf \
+ --test-params "TRAFFIC={'traffic_type':'rfc2544_continuous','bidir':'False','framerate':60}"
+
+Code change verification by pylint
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Every developer participating in the VSPERF project should run
+pylint before their Python code is submitted for review. A project
+specific configuration for pylint is available in 'pylintrc'.
+
+Example of manual pylint invocation:
+
+.. code-block:: console
+
+ $ pylint --rcfile ./pylintrc ./vsperf
+
+GOTCHAs:
+^^^^^^^^
+
+Custom image fails to boot
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Custom VM images may fail to boot within VSPerf pxp testing because of
+the drive boot and shared drive type, which could be caused by a missing SCSI
+driver inside the image. In case of issues you can try changing the drive
+types to ide.
+
+.. code-block:: python
+
+ GUEST_BOOT_DRIVE_TYPE = ['ide']
+ GUEST_SHARED_DRIVE_TYPE = ['ide']
+
+OVS with DPDK and QEMU
+~~~~~~~~~~~~~~~~~~~~~~~
+
+If you encounter the following error: "before (last 100 chars):
+'-path=/dev/hugepages,share=on: unable to map backing store for
+hugepages: Cannot allocate memory\r\n\r\n" during qemu initialization,
+check the amount of hugepages on your system:
+
+.. code-block:: console
+
+ $ cat /proc/meminfo | grep HugePages
+
+
+By default the vswitchd is launched with 1GB of memory. To change
+this, modify the --socket-mem parameter in conf/02_vswitch.conf to allocate
+an appropriate amount of memory:
+
+.. code-block:: python
+
+ DPDK_SOCKET_MEM = ['1024', '0']
+ VSWITCHD_DPDK_ARGS = ['-c', '0x4', '-n', '4']
+ VSWITCHD_DPDK_CONFIG = {
+ 'dpdk-init' : 'true',
+ 'dpdk-lcore-mask' : '0x4',
+ 'dpdk-socket-mem' : '1024,0',
+ }
+
+Note: The option ``VSWITCHD_DPDK_ARGS`` is used for vswitchd versions which support the
+``--dpdk`` parameter. In recent vswitchd versions, the option ``VSWITCHD_DPDK_CONFIG`` will
+be used to configure vswitchd via ``ovs-vsctl`` calls.
+
+
+More information
+^^^^^^^^^^^^^^^^
+
+For more information and details refer to the rest of the vSwitchPerf user documentation.
+
diff --git a/docs/testing/user/userguide/yardstick.rst b/docs/testing/user/userguide/yardstick.rst
new file mode 100644
index 00000000..b5e5c72d
--- /dev/null
+++ b/docs/testing/user/userguide/yardstick.rst
@@ -0,0 +1,250 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, Intel Corporation, AT&T and others.
+
+Execution of vswitchperf testcases by Yardstick
+-----------------------------------------------
+
+General
+^^^^^^^
+
+Yardstick is a generic framework for test execution, which is used for
+validation of the OPNFV platform installation. In the future, Yardstick will
+support two options for vswitchperf testcase execution:
+
+- plugin mode, which will execute native vswitchperf testcases; tests will
+  be executed natively by vsperf, and test results will be processed and
+  reported by yardstick.
+- traffic generator mode, which will run vswitchperf in **trafficgen**
+  mode only; the Yardstick framework will be used to launch VNFs and to configure
+  flows to ensure that traffic is properly routed. This mode will allow testing
+  of OVS performance in real world scenarios.
+
+In the Colorado release, only the traffic generator mode is supported.
+
+Yardstick Installation
+^^^^^^^^^^^^^^^^^^^^^^
+
+In order to run Yardstick testcases, you will need to prepare your test
+environment. Please follow the `installation instructions
+<http://artifacts.opnfv.org/yardstick/docs/user_guides_framework/index.html>`__
+to install yardstick.
+
+Please note that yardstick uses OpenStack for the execution of testcases.
+OpenStack must be installed with the Heat and Neutron services, otherwise
+vswitchperf testcases cannot be executed.
+
+VM image with vswitchperf
+^^^^^^^^^^^^^^^^^^^^^^^^^
+
+A special VM image is required for the execution of vswitchperf specific testcases
+by yardstick. It is possible to use the sample VM image available at the OPNFV
+artifactory or to build a customized image.
+
+Sample VM image with vswitchperf
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+A sample VM image is available in the vswitchperf section of the OPNFV
+artifactory for free download:
+
+.. code-block:: console
+
+ $ wget http://artifacts.opnfv.org/vswitchperf/vnf/vsperf-yardstick-image.qcow2
+
+This image can be used for the execution of sample testcases with the dummy traffic
+generator.
+
+**NOTE:** Traffic generators might require an installation of client software.
+This software is not included in the sample image and must be installed by the user.
+
+**NOTE:** This image will be updated only when new features related
+to the yardstick integration are added to vswitchperf.
+
+Preparation of custom VM image
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+In general, any Linux distribution supported by vswitchperf can be used as
+a base image for vswitchperf. One of the possibilities is to modify the vloop-vnf
+image, which can be downloaded from `<http://artifacts.opnfv.org/vswitchperf.html/>`__
+(see :ref:`vloop-vnf`).
+
+Please follow the :ref:`vsperf-installation` instructions to
+install vswitchperf inside the vloop-vnf image. As vswitchperf will be run in
+trafficgen mode, it is possible to skip the installation and compilation of OVS,
+QEMU and DPDK to keep the image size smaller.
+
+In case the selected traffic generator requires the installation of additional
+client software, please follow the appropriate documentation. For example in case
+of IXIA, you would need to install IxOS and the IxNetwork TCL API.
+
+VM image usage
+~~~~~~~~~~~~~~
+
+The image with vswitchperf must be uploaded into the glance service and a
+vswitchperf specific flavor configured, e.g.:
+
+.. code-block:: console
+
+ $ glance --os-username admin --os-image-api-version 1 image-create --name \
+ vsperf --is-public true --disk-format qcow2 --container-format bare --file \
+ vsperf-yardstick-image.qcow2
+
+ $ nova --os-username admin flavor-create vsperf-flavor 100 2048 25 1
+
+Testcase execution
+^^^^^^^^^^^^^^^^^^
+
+After installation, yardstick is available as a python package within the yardstick
+specific virtual environment. This means that the yardstick environment must be
+enabled before the test execution, e.g.:
+
+.. code-block:: console
+
+ source ~/yardstick_venv/bin/activate
+
+
+The next step is the configuration of the OpenStack environment, e.g. in case of devstack:
+
+.. code-block:: console
+
+ source /opt/openstack/devstack/openrc
+ export EXTERNAL_NETWORK=public
+
+Vswitchperf testcases executable by yardstick are located in the vswitchperf
+repository inside the ``yardstick/tests`` directory. An example of their download
+and execution follows:
+
+.. code-block:: console
+
+ git clone https://gerrit.opnfv.org/gerrit/vswitchperf
+ cd vswitchperf
+
+ yardstick -d task start yardstick/tests/rfc2544_throughput_dummy.yaml
+
+**NOTE:** Optional argument ``-d`` shows debug output.
+
+Testcase customization
+^^^^^^^^^^^^^^^^^^^^^^
+
+Yardstick testcases are described by YAML files. Vswitchperf specific testcases
+are part of the vswitchperf repository and their yaml files can be found in the
+``yardstick/tests`` directory. For a detailed description of the yaml file structure,
+please see the yardstick documentation and testcase samples. Only vswitchperf specific
+parts will be discussed here.
+
+Example of yaml file:
+
+.. code-block:: yaml
+
+ ...
+ scenarios:
+ -
+ type: Vsperf
+ options:
+ testname: 'p2p_rfc2544_throughput'
+ trafficgen_port1: 'eth1'
+ trafficgen_port2: 'eth3'
+ external_bridge: 'br-ex'
+ test_params: 'TRAFFICGEN_DURATION=30;TRAFFIC={'traffic_type':'rfc2544_throughput}'
+ conf_file: '~/vsperf-yardstick.conf'
+
+ host: vsperf.demo
+
+ runner:
+ type: Sequence
+ scenario_option_name: frame_size
+ sequence:
+ - 64
+ - 128
+ - 512
+ - 1024
+ - 1518
+ sla:
+ metrics: 'throughput_rx_fps'
+ throughput_rx_fps: 500000
+ action: monitor
+
+ context:
+ ...
+
+Section option
+~~~~~~~~~~~~~~
+
+The **option** section defines the details of the vswitchperf test scenario. Many options
+are identical to the vswitchperf parameters passed through the ``--test-params``
+argument. The following options are supported:
+
+- **frame_size** - a packet size for which test should be executed;
+ Multiple packet sizes can be tested by modification of Sequence runner
+ section inside YAML definition. Default: '64'
+- **conf_file** - sets path to the vswitchperf configuration file, which will be
+ uploaded to VM; Default: '~/vsperf-yardstick.conf'
+- **setup_script** - sets path to the setup script, which will be executed
+ during setup and teardown phases
+- **trafficgen_port1** - specifies device name of 1st interface connected to
+ the trafficgen
+- **trafficgen_port2** - specifies device name of 2nd interface connected to
+ the trafficgen
+- **external_bridge** - specifies name of external bridge configured in OVS;
+ Default: 'br-ex'
+- **test_params** - specifies a string with a list of vsperf configuration
+ parameters, which will be passed to the ``--test-params`` CLI argument;
+ Parameters should be stated in the form of ``param=value`` and separated
+ by a semicolon. Configuration of traffic generator is driven by ``TRAFFIC``
+ dictionary, which can be also updated by values defined by ``test_params``.
+ Please check VSPERF documentation for details about available configuration
+ parameters and their data types.
+ In case that both **test_params** and **conf_file** are specified,
+ then values from **test_params** will override values defined
+ in the configuration file.
+
+If **trafficgen_port1** and/or **trafficgen_port2** are defined, then
+these interfaces will be inserted into the **external_bridge** of OVS. It is
+expected that OVS runs on the same node where the testcase is executed. In case
+of a more complex OpenStack installation or a need for additional OVS configuration,
+**setup_script** can be used.
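+
+Conceptually, the insertion of the trafficgen ports corresponds to commands like
+the following; the interface names are taken from the example above and the
+scenario performs this step automatically:
+
+.. code-block:: console
+
+    $ sudo ovs-vsctl add-port br-ex eth1
+    $ sudo ovs-vsctl add-port br-ex eth3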
+
+**NOTE** It is essential to specify a configuration for the selected traffic generator.
+If a standalone testcase is created, then the traffic generator can be
+selected and configured directly in the YAML file by **test_params**. On the other
+hand, if multiple testcases should be executed with the same traffic generator
+settings, then a customized configuration file should be prepared and its name
+passed by the **conf_file** option.
+
+Section runner
+~~~~~~~~~~~~~~
+
+Yardstick supports several `runner types
+<http://artifacts.opnfv.org/yardstick/docs/userguide/architecture.html#runner-types>`__.
+In case of vswitchperf specific TCs, the **Sequence** runner type can be used to
+execute the testcase for a given list of frame sizes.
+
+
+Section sla
+~~~~~~~~~~~
+
+If the sla section is not defined, then the testcase will always be
+considered successful. On the other hand, it is possible to define a set of
+test metrics and their minimal values to evaluate test success. Any numeric
+value reported by vswitchperf inside the CSV result file can be used.
+Multiple metrics can be defined as a comma separated list of items. A minimal
+value must be set separately for each metric.
+
+e.g.:
+
+.. code-block:: yaml
+
+ sla:
+ metrics: 'throughput_rx_fps,throughput_rx_mbps'
+ throughput_rx_fps: 500000
+ throughput_rx_mbps: 1000
+
+If any of the defined metrics is lower than its defined value, then the
+testcase will be marked as failed. Based on the ``action`` policy, yardstick
+will either stop the test execution (value ``assert``) or run the next test
+(value ``monitor``).
+
+**NOTE** The throughput SLA (or any other SLA) cannot be set to a meaningful
+value without knowledge of the server and networking environment, possibly
+including prior testing in that environment to establish a baseline SLA level
+under well-understood circumstances.