author     Maryam Tahhan <maryam.tahhan@intel.com>  2015-08-24 14:05:15 +0100
committer  Maryam Tahhan <maryam.tahhan@intel.com>  2015-08-24 14:08:03 +0100
commit     400c276d192d280cf74f09b2e8b2234057b56de3 (patch)
tree       190ba32a0d01b53d9911b77b17676095eefd9f90 /docs
parent     9d5095b7a5f99ce4cacc49f8d494bdd9baba4a12 (diff)
docs: Migrate Docs to RST format and new dir docs/

Migrate all existing VSPERF documentation to a new directory called docs/
and convert it to reStructuredText format. It is recommended that any
future doc changes are tested with: http://rst.ninjs.org/

Change-Id: I18aa574b1259986502bde4ceef1fae7c6a5c1f33
JIRA: VSPERF-60
Signed-off-by: Maryam Tahhan <maryam.tahhan@intel.com>
Reviewed-by: Al Morton <acmorton@att.com>
Reviewed-by: Eugene Snider <Eugene.Snider@huawei.com>
Reviewed-by: Gurpreet Singh <gurpreet.singh@spirent.com>
Reviewed-by: Tv Rao <tv.rao@freescale.com>
Diffstat (limited to 'docs')
-rw-r--r--   docs/NEWS.md                                               |   51
-rw-r--r--   docs/NEWS.rst                                              |   55
-rw-r--r--   docs/ietf_draft/draft-vsperf-bmwg-vswitch-opnfv-00.xml     |  907
-rw-r--r--   docs/images/TCLServerProperties.png (renamed from docs/TCLServerProperties.png) | bin 11667 -> 11667 bytes
-rwxr-xr-x   docs/installation.md                                       |   61
-rw-r--r--   docs/installation.rst                                      |   68
-rwxr-xr-x   docs/quickstart.md                                         |  118
-rw-r--r--   docs/quickstart.rst                                        |  160
-rw-r--r--   docs/vswitchperf_ltd.rst                                   | 1931
9 files changed, 3121 insertions, 230 deletions
diff --git a/docs/NEWS.md b/docs/NEWS.md
deleted file mode 100644
index 604328ab..00000000
--- a/docs/NEWS.md
+++ /dev/null
@@ -1,51 +0,0 @@
-#August 2015
-
-## New
-
-* Backport and enhancement of reporting
-
-
-#July 2015
-
-## New
-
-* PVP deployment scenario testing using vhost-user as guest access method
- * Verified on CentOS7 and Fedora 20
- * Requires QEMU 2.2.0 and DPDK 2.0
-
-
-#May 2015
-
-This is the initial release of a re-designed version of the software based on
-community feedback. This initial release supports only the Phy2Phy deployment
-scenario and the LTD.Throughput.RFC2544.PacketLossRatio test - both described
-in the OPNFV vswitchperf 'CHARACTERIZE VSWITCH PERFORMANCE FOR TELCO NFV USE
-CASES LEVEL TEST DESIGN'. The intention is that more test cases will follow
-once the community has digested the initial release.
-
-## New
-
-* Performance testing with continuous stream
-* Vanilla OVS support added.
- * Support for non-DPDK OVS build.
- * Build and installation support through Makefile will be added via
- next patch(Currently it is possible to manually build ovs and
- setting it in vsperf configuration files).
- * PvP scenario is not yet implemented.
-* CentOS7 support
- * Verified on CentOS7
- * Install & Quickstart documentation
-
-* Implementation of LTD.Scalability.RFC2544.0PacketLoss testcase
-* Better support for mixing tests types with Deployment Scenarios
-* Re-work based on community feedback of TOIT
- * Framework support for other vSwitches
- * Framework support for non-Ixia traffic generators
- * Framework support for different VNFs
-* Python3
-* Support for biDirectional functionality for ixnet interface
-
-## Missing
-
-* xmlunit output is currently disabled
-* VNF support.
diff --git a/docs/NEWS.rst b/docs/NEWS.rst
new file mode 100644
index 00000000..8c7ecaaa
--- /dev/null
+++ b/docs/NEWS.rst
@@ -0,0 +1,55 @@
+August 2015
+===========
+
+New
+---
+
+- Backport and enhancement of reporting
+
+July 2015
+=========
+
+New
+---
+
+- PVP deployment scenario testing using vhost-user as guest access method
+
+  - Verified on CentOS7 and Fedora 20
+  - Requires QEMU 2.2.0 and DPDK 2.0
+
+May 2015
+========
+
+This is the initial release of a re-designed version of the software
+based on community feedback. This initial release supports only the
+Phy2Phy deployment scenario and the
+LTD.Throughput.RFC2544.PacketLossRatio test - both described in the
+OPNFV vswitchperf 'CHARACTERIZE VSWITCH PERFORMANCE FOR TELCO NFV USE
+CASES LEVEL TEST DESIGN'. The intention is that more test cases will
+follow once the community has digested the initial release.
+
+New
+---
+
+- Performance testing with continuous stream
+- Vanilla OVS support added.
+
+ - Support for non-DPDK OVS build.
+  - Build and installation support through the Makefile will be added
+    in a follow-up patch (currently OVS can be built manually and its
+    location set in the vsperf configuration files).
+ - PvP scenario is not yet implemented.
+
+- CentOS7 support
+
+  - Verified on CentOS7
+  - Install & Quickstart documentation
+
+- Implementation of LTD.Scalability.RFC2544.0PacketLoss testcase
+- Better support for mixing test types with Deployment Scenarios
+- Re-work based on community feedback of TOIT
+
+  - Framework support for other vSwitches
+  - Framework support for non-Ixia traffic generators
+  - Framework support for different VNFs
+
+- Python3
+- Support for bidirectional functionality for the ixnet interface
+
+Missing
+-------
+
+- Report generation is currently disabled
+- xmlunit output is currently disabled
+- VNF support.
diff --git a/docs/ietf_draft/draft-vsperf-bmwg-vswitch-opnfv-00.xml b/docs/ietf_draft/draft-vsperf-bmwg-vswitch-opnfv-00.xml
new file mode 100644
index 00000000..d8351957
--- /dev/null
+++ b/docs/ietf_draft/draft-vsperf-bmwg-vswitch-opnfv-00.xml
@@ -0,0 +1,907 @@
+<?xml version="1.0" encoding="US-ASCII"?>
+<!DOCTYPE rfc SYSTEM "rfc2629.dtd">
+<?rfc toc="yes"?>
+<?rfc tocompact="yes"?>
+<?rfc tocdepth="3"?>
+<?rfc tocindent="yes"?>
+<?rfc symrefs="yes"?>
+<?rfc sortrefs="yes"?>
+<?rfc comments="yes"?>
+<?rfc inline="yes"?>
+<?rfc compact="yes"?>
+<?rfc subcompact="no"?>
+<rfc category="info" docName="draft-vsperf-bmwg-vswitch-opnfv-00"
+ ipr="trust200902">
+ <front>
+ <title abbrev="Benchmarking vSwitches">Benchmarking Virtual Switches in
+ OPNFV</title>
+
+ <author fullname="Maryam Tahhan" initials="M." surname="Tahhan">
+ <organization>Intel</organization>
+
+ <address>
+ <postal>
+ <street/>
+
+ <city/>
+
+ <region/>
+
+ <code/>
+
+ <country/>
+ </postal>
+
+ <phone/>
+
+ <facsimile/>
+
+ <email>maryam.tahhan@intel.com</email>
+
+ <uri/>
+ </address>
+ </author>
+
+ <author fullname="Billy O'Mahony" initials="B." surname="O'Mahony">
+ <organization>Intel</organization>
+
+ <address>
+ <postal>
+ <street/>
+
+ <city/>
+
+ <region/>
+
+ <code/>
+
+ <country/>
+ </postal>
+
+ <phone/>
+
+ <facsimile/>
+
+ <email>billy.o.mahony@intel.com</email>
+
+ <uri/>
+ </address>
+ </author>
+
+ <author fullname="Al Morton" initials="A." surname="Morton">
+ <organization>AT&amp;T Labs</organization>
+
+ <address>
+ <postal>
+ <street>200 Laurel Avenue South</street>
+
+ <city>Middletown,</city>
+
+ <region>NJ</region>
+
+ <code>07748</code>
+
+ <country>USA</country>
+ </postal>
+
+ <phone>+1 732 420 1571</phone>
+
+ <facsimile>+1 732 368 1192</facsimile>
+
+ <email>acmorton@att.com</email>
+
+ <uri>http://home.comcast.net/~acmacm/</uri>
+ </address>
+ </author>
+
+ <date day="3" month="July" year="2015"/>
+
+ <abstract>
+ <t>This memo describes the progress of the Open Platform for NFV (OPNFV)
+ project on virtual switch performance "VSWITCHPERF". This project
+ intends to build on the current and completed work of the Benchmarking
+ Methodology Working Group in IETF, by referencing existing literature.
+ The Benchmarking Methodology Working Group has traditionally conducted
+ laboratory characterization of dedicated physical implementations of
+ internetworking functions. Therefore, this memo begins to describe the
+ additional considerations when virtual switches are implemented in
+ general-purpose hardware. The expanded tests and benchmarks are also
+ influenced by the OPNFV mission to support virtualization of the "telco"
+ infrastructure.</t>
+ </abstract>
+
+ <note title="Requirements Language">
+ <t>The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
+ "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
+ document are to be interpreted as described in <xref
+ target="RFC2119">RFC 2119</xref>.</t>
+
+ <t/>
+ </note>
+ </front>
+
+ <middle>
+ <section title="Introduction">
+ <t>Benchmarking Methodology Working Group (BMWG) has traditionally
+ conducted laboratory characterization of dedicated physical
+ implementations of internetworking functions. The Black-box Benchmarks
+ of Throughput, Latency, Forwarding Rates and others have served our
+ industry for many years. Now, Network Function Virtualization (NFV) has
+ the goal to transform how internetwork functions are implemented, and
+ therefore has garnered much attention.</t>
+
+ <t>This memo describes the progress of the Open Platform for NFV (OPNFV)
+ project on virtual switch performance characterization, "VSWITCHPERF".
+ This project intends to build on the current and completed work of the
+ Benchmarking Methodology Working Group in IETF, by referencing existing
+ literature. For example, currently the most referenced RFC is <xref
+ target="RFC2544"/> (which depends on <xref target="RFC1242"/>), and so
+ the foundation of the benchmarking work in OPNFV is common and strong.</t>
+
+ <t>See
+ https://wiki.opnfv.org/characterize_vswitch_performance_for_telco_nfv_use_cases
+ for more background, and the OPNFV website for general information:
+ https://www.opnfv.org/</t>
+
+ <t>The authors note that OPNFV distinguishes itself from other open
+ source compute and networking projects through its emphasis on existing
+ "telco" services as opposed to cloud-computing. There are many ways in
+ which telco requirements have different emphasis on performance
+ dimensions when compared to cloud computing: support for and transfer of
+ isochronous media streams is one example.</t>
+
+ <t>Note also that the move to NFV Infrastructure has resulted in many
+ new benchmarking initiatives across the industry, and the authors are
+ currently doing their best to maintain alignment with many other
+ projects, and this Internet Draft is evidence of the efforts.</t>
+ </section>
+
+ <section title="Scope">
+ <t>The primary purpose and scope of the memo is to inform BMWG of
+ work-in-progress that builds on the body of extensive literature and
+ experience. Additionally, once the initial information conveyed here is
+ received, this memo may be expanded to include more detail and
+ commentary from both BMWG and OPNFV communities, under BMWG's chartered
+ work to characterize the NFV Infrastructure (a virtual switch is an
+ important aspect of that infrastructure).</t>
+ </section>
+
+ <section title="Benchmarking Considerations">
+ <t>This section highlights some specific considerations (from <xref
+ target="I-D.ietf-bmwg-virtual-net"/>) related to Benchmarks for virtual
+ switches. The OPNFV project is sharing its present view on these areas,
+ as they develop their specifications in the Level Test Design (LTD)
+ document.</t>
+
+ <section title="Comparison with Physical Network Functions">
+ <t>To compare the performance of virtual designs and implementations
+ with their physical counterparts, identical benchmarks are needed.
+ BMWG has developed specifications for many network functions; this memo
+ re-uses existing benchmarks through references, and expands them
+ during development of new methods. A key configuration aspect is the
+ number of parallel cores required to achieve comparable performance
+ with a given physical device, or whether some limit of scale was
+ reached before the cores could achieve the comparable level.</t>
+
+ <t>It's unlikely that the virtual switch will be the only application
+ running on the SUT, so CPU utilization, Cache utilization, and Memory
+ footprint should also be recorded for the virtual implementations of
+ internetworking functions.</t>
+ </section>
+
+ <section title="Continued Emphasis on Black-Box Benchmarks">
+ <t>External observations remain essential as the basis for Benchmarks.
+ Internal observations with fixed specification and interpretation will
+ be provided in parallel to assist the development of operations
+ procedures when the technology is deployed.</t>
+ </section>
+
+ <section title="New Configuration Parameters">
+ <t>A key consideration when conducting any sort of benchmark is trying
+ to ensure the consistency and repeatability of test results. When
+ benchmarking the performance of a vSwitch there are many factors that
+ can affect the consistency of results, one key factor is matching the
+ various hardware and software details of the SUT. This section lists
+ some of the many new parameters which this project believes are
+ critical to report in order to achieve repeatability.</t>
+
+ <t>Hardware details including:</t>
+
+ <t><list style="symbols">
+ <t>Platform details</t>
+
+ <t>Processor details</t>
+
+ <t>Memory information (type and size)</t>
+
+ <t>Number of enabled cores</t>
+
+ <t>Number of cores used for the test</t>
+
+ <t>Number of physical NICs, as well as their details
+ (manufacturer, versions, type and the PCI slot they are plugged
+ into)</t>
+
+ <t>NIC interrupt configuration</t>
+
+ <t>BIOS version, release date and any configurations that were
+ modified</t>
+
+ <t>CPU microcode level</t>
+
+ <t>Memory DIMM configurations (quad rank performance may not be
+ the same as dual rank) in size, freq and slot locations</t>
+
+ <t>PCI configuration parameters (payload size, early ack
+ option...)</t>
+
+ <t>Power management at all levels (ACPI sleep states, processor
+ package, OS...)</t>
+ </list>Software details including:</t>
+
+ <t><list style="symbols">
+ <t>OS parameters and behavior (text vs. graphical; no one typing
+ at the console on one system)</t>
+
+ <t>OS version (for host and VNF)</t>
+
+ <t>Kernel version (for host and VNF)</t>
+
+ <t>GRUB boot parameters (for host and VNF)</t>
+
+ <t>Hypervisor details (Type and version)</t>
+
+ <t>Selected vSwitch, version number or commit id used</t>
+
+ <t>vSwitch launch command line if it has been parameterised</t>
+
+ <t>Memory allocation to the vSwitch</t>
+
+ <t>which NUMA node it is using, and how many memory channels</t>
+
+ <t>DPDK or any other SW dependency version number or commit id
+ used</t>
+
+ <t>Memory allocation to a VM - if it's from Hugepages/elsewhere</t>
+
+ <t>VM storage type: snapshot/independent persistent/independent
+ non-persistent</t>
+
+ <t>Number of VMs</t>
+
+ <t>Number of Virtual NICs (vNICs), versions, type and driver</t>
+
+ <t>Number of virtual CPUs and their core affinity on the host</t>
+
+ <t>vNIC interrupt configuration</t>
+
+ <t>Thread affinitization for the applications (including the
+ vSwitch itself) on the host</t>
+
+ <t>Details of Resource isolation, such as CPUs designated for
+ Host/Kernel (isolcpu) and CPUs designated for specific processes
+ (taskset).</t>
+
+ <t>Test duration.</t>
+
+ <t>Number of flows.</t>
+ </list></t>
+
+ <t>Test Traffic Information:<list style="symbols">
+ <t>Traffic type - UDP, TCP, IMIX / Other</t>
+
+ <t>Packet Sizes</t>
+
+ <t>Deployment Scenario</t>
+ </list></t>
+
+ <t/>
+ </section>
+
+ <section title="Flow classification">
+ <t>Virtual switches group packets into flows by processing and
+ matching particular packet or frame header information, or by matching
+ packets based on the input ports. Thus a flow can be thought of as a
+ sequence of packets that have the same set of header field values or
+ have arrived on the same port. Performance results can vary based on
+ the parameters the vSwitch uses to match for a flow. The recommended
+ flow classification parameters for any vSwitch performance tests are:
+ the input port, the source IP address, the destination IP address and
+ the ethernet protocol type field. It is essential to increase the flow
+ timeout time on a vSwitch before conducting any performance tests that
+ do not measure the flow setup time. Normally the first packet of a
+ particular stream will install the flow in the virtual switch which
+ adds an additional latency, subsequent packets of the same flow are
+ not subject to this latency if the flow is already installed on the
+ vSwitch.</t>
+ </section>
+
+ <section title="Benchmarks using Baselines with Resource Isolation">
+ <t>This outline describes measurement of baseline with isolated
+ resources at a high level, which is the intended approach at this
+ time.</t>
+
+ <t><list style="numbers">
+ <t>Baselines: <list style="symbols">
+ <t>Optional: Benchmark platform forwarding capability without
+ a vswitch or VNF for at least 72 hours (serves as a means of
+ platform validation and a means to obtain the base performance
+ for the platform in terms of its maximum forwarding rate and
+ latency). <figure>
+ <preamble>Benchmark platform forwarding
+ capability</preamble>
+
+ <artwork align="right"><![CDATA[ __
+ +--------------------------------------------------+ |
+ | +------------------------------------------+ | |
+ | | | | |
+ | | Simple Forwarding App | | Host
+ | | | | |
+ | +------------------------------------------+ | |
+ | | NIC | | |
+ +---+------------------------------------------+---+ __|
+ ^ :
+ | |
+ : v
+ +--------------------------------------------------+
+ | |
+ | traffic generator |
+ | |
+ +--------------------------------------------------+]]></artwork>
+
+ <postamble/>
+ </figure></t>
+
+ <t>Benchmark VNF forwarding capability with direct
+ connectivity (vSwitch bypass, e.g., SR-IOV) for at least 72
+ hours (serves as a means of VNF validation and a means to
+ obtain the base performance for the VNF in terms of its
+ maximum forwarding rate and latency). The metrics gathered
+ from this test will serve as a key comparison point for
+ vSwitch bypass technologies performance and vSwitch
+ performance. <figure align="right">
+ <preamble>Benchmark VNF forwarding capability</preamble>
+
+ <artwork><![CDATA[ __
+ +--------------------------------------------------+ |
+ | +------------------------------------------+ | |
+ | | | | |
+ | | VNF | | |
+ | | | | |
+ | +------------------------------------------+ | |
+ | | Passthrough/SR-IOV | | Host
+ | +------------------------------------------+ | |
+ | | NIC | | |
+ +---+------------------------------------------+---+ __|
+ ^ :
+ | |
+ : v
+ +--------------------------------------------------+
+ | |
+ | traffic generator |
+ | |
+ +--------------------------------------------------+]]></artwork>
+
+ <postamble/>
+ </figure></t>
+
+ <t>Benchmarking with isolated resources alone, with other
+ resources (both HW&amp;SW) disabled. Example: vSwitch and VM are
+ the SUT</t>
+
+ <t>Benchmarking with isolated resources alone, leaving some
+ resources unused</t>
+
+ <t>Benchmark with isolated resources and all resources
+ occupied</t>
+ </list></t>
+
+ <t>Next Steps<list style="symbols">
+ <t>Limited sharing</t>
+
+ <t>Production scenarios</t>
+
+ <t>Stressful scenarios</t>
+ </list></t>
+ </list></t>
+ </section>
+ </section>
+
+ <section title="VSWITCHPERF Specification Summary">
+ <t>The overall specification in preparation is referred to as a Level
+ Test Design (LTD) document, which will contain a suite of performance
+ tests. The base performance tests in the LTD are based on the
+ pre-existing specifications developed by BMWG to test the performance of
+ physical switches. These specifications include:</t>
+
+ <t><list style="symbols">
+ <t><xref target="RFC2544"/> Benchmarking Methodology for Network
+ Interconnect Devices</t>
+
+ <t><xref target="RFC2889"/> Benchmarking Methodology for LAN
+ Switching</t>
+
+ <t><xref target="RFC6201"/> Device Reset Characterization</t>
+
+ <t><xref target="RFC5481"/> Packet Delay Variation Applicability
+ Statement</t>
+ </list></t>
+
+ <t/>
+
+ <t>In addition to this, the LTD also re-uses the terminology defined
+ by:</t>
+
+ <t><list style="symbols">
+ <t><xref target="RFC2285"/> Benchmarking Terminology for LAN
+ Switching Devices</t>
+
+ <t><xref target="RFC5481"/> Packet Delay Variation Applicability
+ Statement</t>
+ </list></t>
+
+ <t/>
+
+ <t>Specifications to be included in future updates of the LTD
+ include:<list style="symbols">
+ <t><xref target="RFC3918"/> Methodology for IP Multicast
+ Benchmarking</t>
+
+ <t><xref target="RFC4737"/> Packet Reordering Metrics</t>
+ </list></t>
+
+ <t>As one might expect, the most fundamental internetworking
+ characteristics of Throughput and Latency remain important when the
+ switch is virtualized, and these benchmarks figure prominently in the
+ specification.</t>
+
+ <t>When considering characteristics important to "telco" network
+ functions, we must begin to consider additional performance metrics. In
+ this case, the project specifications have referenced metrics from the
+ IETF IP Performance Metrics (IPPM) literature. This means that the <xref
+ target="RFC2544"/> test of Latency is replaced by measurement of a
+ metric derived from IPPM's <xref target="RFC2679"/>, where a set of
+ statistical summaries will be provided (mean, max, min, etc.). Further
+ metrics planned to be benchmarked include packet delay variation as
+ defined by <xref target="RFC5481"/>, reordering, burst behaviour, DUT
+ availability, DUT capacity and packet loss in long term testing at
+ Throughput level, where some low-level of background loss may be present
+ and characterized.</t>
+
+ <t>Tests have been (or will be) designed to collect the metrics
+ below:</t>
+
+ <t><list style="symbols">
+ <t>Throughput Tests to measure the maximum forwarding rate (in
+ frames per second or fps) and bit rate (in Mbps) for a constant load
+ (as defined by RFC1242) without traffic loss.</t>
+
+ <t>Packet and Frame Delay Distribution Tests to measure average, min
+ and max packet and frame delay for constant loads.</t>
+
+ <t>Packet Delay Tests to understand latency distribution for
+ different packet sizes and over an extended test run to uncover
+ outliers.</t>
+
+ <t>Scalability Tests to understand how the virtual switch performs
+ as the number of flows, the number of active ports, and the complexity
+ of the forwarding logic&rsquo;s configuration it has to deal with
+ increase.</t>
+
+ <t>Stream Performance Tests (TCP, UDP) to measure bulk data transfer
+ performance, i.e. how fast systems can send and receive data through
+ the switch.</t>
+
+ <t>Control Path and Datapath Coupling Tests, to understand how
+ closely coupled the datapath and the control path are as well as the
+ effect of this coupling on the performance of the DUT (example:
+ delay of the initial packet of a flow).</t>
+
+ <t>CPU and Memory Consumption Tests to understand the virtual
+ switch&rsquo;s footprint on the system, usually conducted as
+ auxiliary measurements with benchmarks above. They include: CPU
+ utilization, Cache utilization and Memory footprint.</t>
+ </list></t>
+
+ <t>Future/planned test specs include:<list style="symbols">
+ <t>Request/Response Performance Tests (TCP, UDP) which measure the
+ transaction rate through the switch.</t>
+
+ <t>Noisy Neighbour Tests, to understand the effects of resource
+ sharing on the performance of a virtual switch.</t>
+ </list>The flexibility of deployment of a virtual switch within a
+ network means that the existing IETF BMWG literature needs to be used to
+ characterize the performance of a switch in various deployment
+ scenarios. The deployment scenarios under consideration include:</t>
+
+ <t><figure>
+ <preamble>Physical port to virtual switch to physical
+ port</preamble>
+
+ <artwork><![CDATA[ __
+ +--------------------------------------------------+ |
+ | +--------------------+ | |
+ | | | | |
+ | | v | | Host
+ | +--------------+ +--------------+ | |
+ | | phy port | vSwitch | phy port | | |
+ +---+--------------+------------+--------------+---+ __|
+ ^ :
+ | |
+ : v
+ +--------------------------------------------------+
+ | |
+ | traffic generator |
+ | |
+ +--------------------------------------------------+]]></artwork>
+ </figure></t>
+
+ <t><figure>
+ <preamble>Physical port to virtual switch to VNF to virtual switch
+ to physical port</preamble>
+
+ <artwork><![CDATA[ __
+ +---------------------------------------------------+ |
+ | | |
+ | +-------------------------------------------+ | |
+ | | Application | | |
+ | +-------------------------------------------+ | |
+ | ^ : | |
+ | | | | | Guest
+ | : v | |
+ | +---------------+ +---------------+ | |
+ | | logical port 0| | logical port 1| | |
+ +---+---------------+-----------+---------------+---+ __|
+ ^ :
+ | |
+ : v __
+ +---+---------------+----------+---------------+---+ |
+ | | logical port 0| | logical port 1| | |
+ | +---------------+ +---------------+ | |
+ | ^ : | |
+ | | | | | Host
+ | : v | |
+ | +--------------+ +--------------+ | |
+ | | phy port | vSwitch | phy port | | |
+ +---+--------------+------------+--------------+---+ __|
+ ^ :
+ | |
+ : v
+ +--------------------------------------------------+
+ | |
+ | traffic generator |
+ | |
+ +--------------------------------------------------+]]></artwork>
+ </figure><figure>
+ <preamble>Physical port to virtual switch to VNF to virtual switch
+ to VNF to virtual switch to physical port</preamble>
+
+ <artwork><![CDATA[ __
+ +----------------------+ +----------------------+ |
+ | Guest 1 | | Guest 2 | |
+ | +---------------+ | | +---------------+ | |
+ | | Application | | | | Application | | |
+ | +---------------+ | | +---------------+ | |
+ | ^ | | | ^ | | |
+ | | v | | | v | | Guests
+ | +---------------+ | | +---------------+ | |
+ | | logical ports | | | | logical ports | | |
+ | | 0 1 | | | | 0 1 | | |
+ +---+---------------+--+ +---+---------------+--+__|
+ ^ : ^ :
+ | | | |
+ : v : v _
+ +---+---------------+---------+---------------+--+ |
+ | | 0 1 | | 3 4 | | |
+ | | logical ports | | logical ports | | |
+ | +---------------+ +---------------+ | |
+ | ^ | ^ | | | Host
+ | | |-----------------| v | |
+ | +--------------+ +--------------+ | |
+ | | phy ports | vSwitch | phy ports | | |
+ +---+--------------+----------+--------------+---+_|
+ ^ :
+ | |
+ : v
+ +--------------------------------------------------+
+ | |
+ | traffic generator |
+ | |
+ +--------------------------------------------------+]]></artwork>
+ </figure><figure>
+ <preamble>Physical port to virtual switch to VNF</preamble>
+
+ <artwork><![CDATA[ __
+ +---------------------------------------------------+ |
+ | | |
+ | +-------------------------------------------+ | |
+ | | Application | | |
+ | +-------------------------------------------+ | |
+ | ^ | |
+ | | | | Guest
+ | : | |
+ | +---------------+ | |
+ | | logical port 0| | |
+ +---+---------------+-------------------------------+ __|
+ ^
+ |
+ : __
+ +---+---------------+------------------------------+ |
+ | | logical port 0| | |
+ | +---------------+ | |
+ | ^ | |
+ | | | | Host
+ | : | |
+ | +--------------+ | |
+ | | phy port | vSwitch | |
+ +---+--------------+------------ -------------- ---+ __|
+ ^
+ |
+ :
+ +--------------------------------------------------+
+ | |
+ | traffic generator |
+ | |
+ +--------------------------------------------------+]]></artwork>
+ </figure><figure>
+ <preamble>VNF to virtual switch to physical port</preamble>
+
+ <artwork><![CDATA[ __
+ +---------------------------------------------------+ |
+ | | |
+ | +-------------------------------------------+ | |
+ | | Application | | |
+ | +-------------------------------------------+ | |
+ | : | |
+ | | | | Guest
+ | v | |
+ | +---------------+ | |
+ | | logical port | | |
+ +-------------------------------+---------------+---+ __|
+ :
+ |
+ v __
+ +------------------------------+---------------+---+ |
+ | | logical port | | |
+ | +---------------+ | |
+ | : | |
+ | | | | Host
+ | v | |
+ | +--------------+ | |
+ | vSwitch | phy port | | |
+ +-------------------------------+--------------+---+ __|
+ :
+ |
+ v
+ +--------------------------------------------------+
+ | |
+ | traffic generator |
+ | |
+ +--------------------------------------------------+]]></artwork>
+ </figure><figure>
+ <preamble>VNF to virtual switch to VNF</preamble>
+
+ <artwork><![CDATA[ __
+ +----------------------+ +----------------------+ |
+ | Guest 1 | | Guest 2 | |
+ | +---------------+ | | +---------------+ | |
+ | | Application | | | | Application | | |
+ | +---------------+ | | +---------------+ | |
+ | | | | ^ | |
+ | v | | | | | Guests
+ | +---------------+ | | +---------------+ | |
+ | | logical ports | | | | logical ports | | |
+ | | 0 | | | | 0 | | |
+ +---+---------------+--+ +---+---------------+--+__|
+ : ^
+ | |
+ v : _
+ +---+---------------+---------+---------------+--+ |
+ | | 1 | | 1 | | |
+ | | logical ports | | logical ports | | |
+ | +---------------+ +---------------+ | |
+ | | ^ | | Host
+ | L-----------------+ | |
+ | | |
+ | vSwitch | |
+ +------------------------------------------------+_|]]></artwork>
+ </figure></t>
+ </section>
+
+ <section title="3x3 Matrix Coverage">
+ <t>This section organizes the many existing test specifications into the
+ "3x3" matrix (introduced in <xref target="I-D.ietf-bmwg-virtual-net"/>).
+ Because the LTD specification ID names are quite long, this section is
+ organized into lists for each occupied cell of the matrix (not all are
+ occupied, also the matrix has grown to 3x4 to accommodate scale
+ metrics).</t>
+
+ <t>The tests listed below assess the activation of paths in the data
+ plane, rather than the control plane.</t>
+
+ <t>(Editor's Note: a complete list of tests is available here:
+ https://wiki.opnfv.org/wiki/vswitchperf_test_spec_review )</t>
+
+ <section title="Speed of Activation">
+ <t><list style="symbols">
+ <t>Throughput.RFC2889.AddressLearningRate</t>
+
+ <t>Throughput.RFC2889.AddressCachingCapacity</t>
+
+ <t>PacketLatency.InitialPacketProcessingLatency</t>
+
+ <t/>
+ </list></t>
+ </section>
+
+ <section title="Reliability of Activation">
+ <t><list style="symbols">
+ <t>Throughput.RFC2544.SystemRecoveryTime</t>
+
+ <t>Throughput.RFC2544.ResetTime</t>
+ </list></t>
+ </section>
+
+ <section title="Scale of Activation">
+ <t><list style="symbols">
+ <t>Throughput.RFC2889.AddressCachingCapacity</t>
+
+ <t/>
+ </list></t>
+ </section>
+
+ <section title="Speed of Operation">
+ <t><list style="symbols">
+ <t>Throughput.RFC2544.PacketLossRate</t>
+
+ <t>Throughput.RFC2544.PacketLossRateFrameModification</t>
+
+ <t>Throughput.RFC2544.BackToBackFrames</t>
+
+ <t>Throughput.RFC2889.ForwardingRate</t>
+
+ <t>Throughput.RFC2889.ForwardPressure</t>
+
+ <t>Throughput.RFC2889.BroadcastFrameForwarding</t>
+
+ <t>RFC2889 Broadcast Frame Latency test</t>
+ </list></t>
+ </section>
+
+ <section title="Accuracy of Operation">
+ <t><list style="symbols">
+ <t>Throughput.RFC2889.ErrorFramesFiltering</t>
+
+ <t/>
+ </list></t>
+ </section>
+
+ <section title="Reliability of Operation">
+ <t><list style="symbols">
+ <t>Throughput.RFC2544.Soak</t>
+
+ <t>Throughput.RFC2544.SoakFrameModification</t>
+
+ <t/>
+ </list></t>
+ </section>
+
+ <section title="Summary">
+ <t><figure>
+ <artwork><![CDATA[|------------------------------------------------------------------------|
+| | | | | |
+| | SPEED | ACCURACY | RELIABILITY | SCALE |
+| | | | | |
+|------------------------------------------------------------------------|
+| | | | | |
+| Activation | X | | X | X |
+| | | | | |
+|------------------------------------------------------------------------|
+| | | | | |
+| Operation | X | X | X | |
+| | | | | |
+|------------------------------------------------------------------------|
+| | | | | |
+| De-activation | | | | |
+| | | | | |
+|------------------------------------------------------------------------|]]></artwork>
+ </figure></t>
+ </section>
+ </section>
+
+ <section title="Security Considerations">
+ <t>Benchmarking activities as described in this memo are limited to
+ technology characterization of a Device Under Test/System Under Test
+ (DUT/SUT) using controlled stimuli in a laboratory environment, with
+ dedicated address space and the constraints specified in the sections
+ above.</t>
+
+ <t>The benchmarking network topology will be an independent test setup
+ and MUST NOT be connected to devices that may forward the test traffic
+ into a production network, or misroute traffic to the test management
+ network.</t>
+
+ <t>Further, benchmarking is performed on a "black-box" basis, relying
+ solely on measurements observable external to the DUT/SUT.</t>
+
+ <t>Special capabilities SHOULD NOT exist in the DUT/SUT specifically for
+ benchmarking purposes. Any implications for network security arising
+ from the DUT/SUT SHOULD be identical in the lab and in production
+ networks.</t>
+ </section>
+
+ <section anchor="IANA" title="IANA Considerations">
+ <t>No IANA Action is requested at this time.</t>
+ </section>
+
+ <section title="Acknowledgements">
+ <t>The authors acknowledge</t>
+ </section>
+ </middle>
+
+ <back>
+ <references title="Normative References">
+ <?rfc ?>
+
+ <?rfc include="reference.RFC.2119"?>
+
+ <?rfc include="reference.RFC.2330"?>
+
+ <?rfc include='reference.RFC.2544'?>
+
+ <?rfc include="reference.RFC.2679"?>
+
+ <?rfc include='reference.RFC.2680'?>
+
+ <?rfc include='reference.RFC.3393'?>
+
+ <?rfc include='reference.RFC.3432'?>
+
+ <?rfc include='reference.RFC.2681'?>
+
+ <?rfc include='reference.RFC.5905'?>
+
+ <?rfc include='reference.RFC.4689'?>
+
+ <?rfc include='reference.RFC.4737'?>
+
+ <?rfc include='reference.RFC.5357'?>
+
+ <?rfc include='reference.RFC.2889'?>
+
+ <?rfc include='reference.RFC.3918'?>
+
+ <?rfc include='reference.RFC.6201'?>
+
+ <?rfc include='reference.RFC.2285'?>
+
+ <reference anchor="NFV.PER001">
+ <front>
+ <title>Network Function Virtualization: Performance and Portability
+ Best Practices</title>
+
+ <author fullname="ETSI NFV" initials="" surname="">
+ <organization/>
+ </author>
+
+ <date month="June" year="2014"/>
+ </front>
+
+ <seriesInfo name="Group Specification"
+ value="ETSI GS NFV-PER 001 V1.1.1 (2014-06)"/>
+
+ <format type="PDF"/>
+ </reference>
+ </references>
+
+ <references title="Informative References">
+ <?rfc include='reference.RFC.1242'?>
+
+ <?rfc include='reference.RFC.5481'?>
+
+ <?rfc include='reference.RFC.6049'?>
+
+ <?rfc include='reference.RFC.6248'?>
+
+ <?rfc include='reference.RFC.6390'?>
+
+ <?rfc include='reference.I-D.ietf-bmwg-virtual-net'?>
+ </references>
+ </back>
+</rfc>
diff --git a/docs/TCLServerProperties.png b/docs/images/TCLServerProperties.png
index 682de7c5..682de7c5 100644
--- a/docs/TCLServerProperties.png
+++ b/docs/images/TCLServerProperties.png
Binary files differ
diff --git a/docs/installation.md b/docs/installation.md
deleted file mode 100755
index bc3b7113..00000000
--- a/docs/installation.md
+++ /dev/null
@@ -1,61 +0,0 @@
-# Installing toit
-
-The test suite requires Python 3.3 and relies on a number of other packages. These need to be installed for the test suite to function.
-To install Python 3.3 in CentOS 7, an additional repository, Software Collections (see https://www.softwarecollections.org/en/scls/rhscl/python33) should be enabled.
-
-Install the requirements as specified below.
-
----
-## Enable Software Collections (SCL)
-
-```bash
-yum -y install scl-utils
-yum -y install https://www.softwarecollections.org/en/scls/rhscl/python33/epel-7-x86_64/download/rhscl-python33-epel-7-x86_64.noarch.rpm
-```
-
-## System packages
-
-There are a number of packages that must be installed using `yum`. These packages are listed in packages.txt and can be installed like so:
-
-```bash
-yum -y --exclude=python33-mod_wsgi* install $(cat packages.txt)
-```
-
----
-
-## Python 3 Packages
-
-To avoid file permission errors and Python version issues, use virtualenv to create an isolated environment with Python3.
-The required Python 3 packages can be found in the `requirements.txt` file in the root of the test suite.
-They can be installed in your virtual environment like so:
-
-```bash
-scl enable python33 bash
-# Create virtual environment
-virtualenv vsperfenv
-cd vsperfenv
-source bin/activate
-pip install -r requirements.txt
-```
-
-You need to activate the virtual environment every time you start a new shell session.
-To activate, simple run:
-
-```bash
-scl enable python33 bash
-cd vsperfenv
-source bin/activate
-```
-
----
-
-# Working Behind a Proxy
-
-If you're behind a proxy, you'll likely want to configure this before running any of the above. For example:
-
-```bash
-export http_proxy=proxy.mycompany.com:123
-export https_proxy=proxy.mycompany.com:123
-```
-
----
diff --git a/docs/installation.rst b/docs/installation.rst
new file mode 100644
index 00000000..e9a3d115
--- /dev/null
+++ b/docs/installation.rst
@@ -0,0 +1,68 @@
+Installing toit
+===============
+
+The test suite requires Python 3.3 and relies on a number of other
+packages. These need to be installed for the test suite to function. To
+install Python 3.3 in CentOS 7, an additional repository, Software
+Collections (see
+https://www.softwarecollections.org/en/scls/rhscl/python33) should be
+enabled.
+
+Install the requirements as specified below.
+
+Enable Software Collections (SCL)
+---------------------------------
+
+ .. code-block:: console
+
+ yum -y install scl-utils
+ yum -y install https://www.softwarecollections.org/en/scls/rhscl/python33/epel-7-x86_64/download/rhscl-python33-epel-7-x86_64.noarch.rpm
+
+System packages
+---------------
+
+There are a number of packages that must be installed using ``yum``. These
+can be installed like so:
+
+ .. code-block:: console
+
+ yum -y --exclude=python33-mod_wsgi* install python33-* pciutils
+
+
+Python 3 Packages
+-----------------
+
+To avoid file permission errors and Python version issues, use
+virtualenv to create an isolated environment with Python3. The required
+Python 3 packages can be found in the ``requirements.txt`` file in the
+root of the test suite. They can be installed in your virtual
+environment like so:
+
+ .. code-block:: bash
+
+ scl enable python33 bash
+ # Create virtual environment
+ virtualenv vsperfenv
+ cd vsperfenv
+ source bin/activate
+ pip install -r requirements.txt
+
+You need to activate the virtual environment every time you start a new
+shell session. To activate, simply run:
+
+.. code:: bash
+
+ scl enable python33 bash
+ cd vsperfenv
+ source bin/activate
+
+--------------
+
+Working Behind a Proxy
+======================
+
+If you're behind a proxy, you'll likely want to configure this before
+running any of the above. For example:
+
+ .. code:: bash
+
+ export http_proxy=proxy.mycompany.com:123
+ export https_proxy=proxy.mycompany.com:123
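+
+If ``yum`` cannot reach the internet through the shell environment alone,
+the same proxy can usually also be configured in ``/etc/yum.conf``:
+
+ .. code-block:: console
+
+    # add to the [main] section of /etc/yum.conf
+    proxy=http://proxy.mycompany.com:123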
diff --git a/docs/quickstart.md b/docs/quickstart.md
deleted file mode 100755
index b6cc3242..00000000
--- a/docs/quickstart.md
+++ /dev/null
@@ -1,118 +0,0 @@
-# Getting Started with 'vsperf'
-
-## Hardware Requirements
-VSPERF requires the following hardware to run tests: IXIA traffic generator (IxNetwork), a machine that runs the IXIA client software and a CentOS Linux release 7.1.1503 (Core) host.
-
-## vSwitch Requirements
-The vSwitch must support Open Flow 1.3 or greater.
-
-## Installation
-
-Follow the [installation instructions] to install.
-
-## IXIA Setup
-###On the CentOS 7 system
-You need to install IxNetworkTclClient$(VER_NUM)Linux.bin.tgz.
-
-### On the IXIA client software system
-Find the IxNetwork TCL server app (start -> All Programs -> IXIA -> IxNetwork -> IxNetwork_$(VER_NUM) -> IxNetwork TCL Server)
- - Right click on IxNetwork TCL Server, select properties
- - Under shortcut tab in the Target dialogue box make sure there is the argument "-tclport xxxx" where xxxx is your port number (take note of this port number you will need it for the 10_custom.conf file).
- ![Alt text](TCLServerProperties.png)
- - Hit Ok and start the TCL server application
-
-## Cloning and building src dependencies
-In order to run VSPERF, you will need to download DPDK and OVS. You can do this manually and build them in a preferred location, or you could use vswitchperf/src. The vswitchperf/src directory contains makefiles that will allow you to clone and build the libraries that VSPERF depends on, such as DPDK and OVS. To clone and build simply:
-
-```bash
-cd src
-make
-```
-
-VSPERF can be used with OVS without DPDK support. In this case you have to specify path to the kernel sources by WITH_LINUX parameter:
-
-```bash
-cd src
-make WITH_LINUX=/lib/modules/`uname -r`/build
-```
-
-To build DPDK and OVS for PVP testing, use:
-
-```bash
-make VHOST_USER=y
-```
-
-To delete a src subdirectory and its contents to allow you to re-clone simply use:
-
-```bash
-make cleanse
-```
-
-## Configure the `./conf/10_custom.conf` file
-
-The supplied `10_custom.conf` file must be modified, as it contains configuration items for which there are no reasonable default values.
-
-The configuration items that can be added is not limited to the initial contents. Any configuration item mentioned in any .conf file in `./conf` directory can be added and that item will be overridden by the custom
-configuration value.
-
-## Using a custom settings file
-
-Alternatively a custom settings file can be passed to `vsperf` via the `--conf-file` argument.
-
-```bash
-./vsperf --conf-file <path_to_settings_py> ...
-```
-
-Note that configuration passed in via the environment (`--load-env`) or via another command line argument will override both the default and your custom configuration files. This "priority hierarchy" can be described like so (1 = max priority):
-
-1. Command line arguments
-2. Environment variables
-3. Configuration file(s)
-
----
-
-## Executing tests
-Before running any tests make sure you have root permissions by adding the following line to /etc/sudoers:
-```
-username ALL=(ALL) NOPASSWD: ALL
-```
-username in the example above should be replaced with a real username.
-
-To list the available tests:
-
-```bash
-./vsperf --list-tests
-```
-
-To run a group of tests, for example all tests with a name containing
-'RFC2544':
-
-```bash
-./vsperf --conf-file=user_settings.py --tests="RFC2544"
-```
-
-To run all tests:
-
-```bash
-./vsperf --conf-file=user_settings.py
-```
-
-Some tests allow for configurable parameters, including test duration (in
-seconds) as well as packet sizes (in bytes).
-
-```bash
-./vsperf --conf-file user_settings.py
- --tests RFC2544Tput
- --test-param "rfc2544_duration=10;packet_sizes=128"
-```
-
-For all available options, check out the help dialog:
-
-```bash
-./vsperf --help
-```
-
----
-
-[installation instructions]: installation.md
-
diff --git a/docs/quickstart.rst b/docs/quickstart.rst
new file mode 100644
index 00000000..fe04cb7d
--- /dev/null
+++ b/docs/quickstart.rst
@@ -0,0 +1,160 @@
+Getting Started with 'vsperf'
+=============================
+
+Hardware Requirements
+---------------------
+
+VSPERF requires the following hardware to run tests: IXIA traffic
+generator (IxNetwork), a machine that runs the IXIA client software and
+a CentOS Linux release 7.1.1503 (Core) host.
+
+vSwitch Requirements
+--------------------
+
+The vSwitch must support Open Flow 1.3 or greater.
+
+Installation
+------------
+
+Follow the `installation instructions <installation.rst>`__ to install.
+
+IXIA Setup
+----------
+
+On the CentOS 7 system
+~~~~~~~~~~~~~~~~~~~~~~
+
+You need to install IxNetworkTclClient$(VER\_NUM)Linux.bin.tgz.
+
+On the IXIA client software system
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Find the IxNetwork TCL server app (start -> All Programs -> IXIA ->
+IxNetwork -> IxNetwork\_$(VER\_NUM) -> IxNetwork TCL Server)
+
+- Right click on IxNetwork TCL Server and select properties.
+- Under the shortcut tab, in the Target dialogue box, make sure the
+  argument "-tclport xxxx" is present, where xxxx is your port number
+  (take note of this port number, you will need it for the
+  10\_custom.conf file).
+
+  |Alt text|
+
+- Hit Ok and start the TCL server application.
+
+Cloning and building src dependencies
+-------------------------------------
+
+In order to run VSPERF, you will need to download DPDK and OVS. You can
+do this manually and build them in a preferred location, or you could
+use vswitchperf/src. The vswitchperf/src directory contains makefiles
+that will allow you to clone and build the libraries that VSPERF depends
+on, such as DPDK and OVS. To clone and build simply:
+
+ .. code-block:: console
+
+ cd src
+ make
+
+VSPERF can be used with OVS without DPDK support. In this case you have
+to specify the path to the kernel sources using the WITH\_LINUX parameter:
+
+ .. code-block:: console
+
+ cd src
+ make WITH_LINUX=/lib/modules/`uname -r`/build
+
+To build DPDK and OVS for PVP testing with vhost_user as the guest access
+method, use:
+
+ .. code-block:: console
+
+ make VHOST_USER=y
+
+To delete a src subdirectory and its contents to allow you to re-clone simply
+use:
+
+ .. code-block:: console
+
+ make clobber
+
+Configure the ``./conf/10_custom.conf`` file
+--------------------------------------------
+
+The supplied ``10_custom.conf`` file must be modified, as it contains
+configuration items for which there are no reasonable default values.
+
+The configuration items that can be added are not limited to the initial
+contents. Any configuration item mentioned in any .conf file in the
+``./conf`` directory can be added, and that item will be overridden by
+the custom configuration value.
+
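+For illustration only, a minimal override in ``10_custom.conf`` might look
+like the sketch below; the parameter names here are placeholders, so copy
+the real names from the .conf files under ``./conf`` (for example, the
+setting that carries the IXIA "-tclport" number noted earlier):
+
+ .. code-block:: python
+
+    # Placeholder names for illustration only -- use the names from ./conf
+    TRAFFICGEN_TCL_PORT = 17555          # the "-tclport xxxx" value
+    TRAFFICGEN_CLIENT_IP = '10.0.0.10'   # machine running the IXIA client
+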
+Using a custom settings file
+----------------------------
+
+Alternatively a custom settings file can be passed to ``vsperf`` via the
+``--conf-file`` argument.
+
+ .. code-block:: console
+
+ ./vsperf --conf-file <path_to_settings_py> ...
+
+Note that configuration passed in via the environment (``--load-env``)
+or via another command line argument will override both the default and
+your custom configuration files. This "priority hierarchy" can be
+described like so (1 = max priority):
+
+1. Command line arguments
+2. Environment variables
+3. Configuration file(s)
+
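+For example, when the same setting is supplied both in ``user_settings.py``
+and on the command line, the command-line value takes precedence (a sketch
+reusing the options shown later in this guide):
+
+ .. code-block:: console
+
+    # the --test-param value overrides the value from the conf file
+    ./vsperf --conf-file user_settings.py --test-param "rfc2544_duration=10"
+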
+--------------
+
+Executing tests
+---------------
+
+Before running any tests make sure you have root permissions by adding
+the following line to /etc/sudoers:
+
+ .. code-block:: console
+
+ username ALL=(ALL) NOPASSWD: ALL
+
+username in the example above should be replaced with a real username.
+
+To list the available tests:
+
+ .. code-block:: console
+
+ ./vsperf --list-tests
+
+To run a group of tests, for example all tests with a name containing
+'RFC2544':
+
+ .. code-block:: console
+
+ ./vsperf --conf-file=user_settings.py --tests="RFC2544"
+
+To run all tests:
+
+ .. code-block:: console
+
+ ./vsperf --conf-file=user_settings.py
+
+Some tests allow for configurable parameters, including test duration
+(in seconds) as well as packet sizes (in bytes).
+
+.. code:: bash
+
+ ./vsperf --conf-file user_settings.py \
+     --tests RFC2544Tput \
+     --test-param "rfc2544_duration=10;packet_sizes=128"
+
+For all available options, check out the help dialog:
+
+ .. code-block:: console
+
+ ./vsperf --help
+
+--------------
+
+.. |Alt text| image:: images/TCLServerProperties.png
diff --git a/docs/vswitchperf_ltd.rst b/docs/vswitchperf_ltd.rst
new file mode 100644
index 00000000..7eaba009
--- /dev/null
+++ b/docs/vswitchperf_ltd.rst
@@ -0,0 +1,1931 @@
+CHARACTERIZE VSWITCH PERFORMANCE FOR TELCO NFV USE CASES LEVEL TEST DESIGN
+==========================================================================
+
+.. contents:: Table of Contents
+
+1. Introduction
+===============
+
+The objective of the OPNFV project titled
+**“Characterize vSwitch Performance for Telco NFV Use Cases”**, is to
+evaluate a virtual switch to identify its suitability for a Telco
+Network Function Virtualization (NFV) environment. The intention of this
+Level Test Design (LTD) document is to specify the set of tests to carry
+out in order to objectively measure the current characteristics of a
+virtual switch in the Network Function Virtualization Infrastructure
+(NFVI) as well as the test pass criteria. The detailed test cases will
+be defined in `Section 2 <#DetailsOfTheLevelTestDesign>`__, preceded by
+the `Document identifier <#DocId>`__ and the `Scope <#Scope>`__.
+
+This document is currently in draft form.
+
+1.1. Document identifier
+------------------------
+
+The document id will be used to uniquely
+identify versions of the LTD. The format for the document id will be:
+OPNFV\_vswitchperf\_LTD\_ver\_NUM\_MONTH\_YEAR\_STATUS, where by the
+status is one of: draft, reviewed, corrected or final. The document id
+for this version of the LTD is:
+OPNFV\_vswitchperf\_LTD\_ver\_1.6\_Jan\_15\_DRAFT.
+
+1.2. Scope
+----------
+
+The main purpose of this project is to specify a suite of
+performance tests in order to objectively measure the current packet
+transfer characteristics of a virtual switch in the NFVI. The intent of
+the project is to facilitate testing of any virtual switch. Thus, a
+generic suite of tests shall be developed, with no hard dependencies to
+a single implementation. In addition, the test case suite shall be
+architecture independent.
+
+The test cases developed in this project shall not form part of a
+separate test framework; all of these tests may be inserted into the
+Continuous Integration Test Framework and/or the Platform Functionality
+Test Framework - if a vSwitch becomes a standard component of an OPNFV
+release.
+
+1.3. References
+---------------
+
+* `RFC 1242 Benchmarking Terminology for Network Interconnection
+ Devices <http://www.ietf.org/rfc/rfc1242.txt>`__
+* `RFC 2544 Benchmarking Methodology for Network Interconnect
+ Devices <http://www.ietf.org/rfc/rfc2544.txt>`__
+* `RFC 2285 Benchmarking Terminology for LAN Switching
+ Devices <http://www.ietf.org/rfc/rfc2285.txt>`__
+* `RFC 2889 Benchmarking Methodology for LAN Switching
+ Devices <http://www.ietf.org/rfc/rfc2889.txt>`__
+* `RFC 3918 Methodology for IP Multicast
+ Benchmarking <http://www.ietf.org/rfc/rfc3918.txt>`__
+* `RFC 4737 Packet Reordering
+ Metrics <http://www.ietf.org/rfc/rfc4737.txt>`__
+* `RFC 5481 Packet Delay Variation Applicability
+ Statement <http://www.ietf.org/rfc/rfc5481.txt>`__
+* `RFC 6201 Device Reset
+ Characterization <http://tools.ietf.org/html/rfc6201>`__
+
+2. Details of the Level Test Design
+===================================
+
+This section describes the features to be tested (`cf. 2.1
+<#FeaturesToBeTested>`__), the test approach (`cf. 2.2 <#Approach>`__);
+it also identifies the sets of test cases or scenarios (`cf. 2.3
+<#TestIdentification>`__) along with the pass/fail criteria (`cf. 2.4
+<#PassFail>`__) and the test deliverables (`cf. 2.5 <#TestDeliverables>`__).
+
+2.1. Features to be tested
+--------------------------
+
+Characterizing virtual switches (i.e. Device Under Test (DUT) in this document)
+includes measuring the following performance metrics:
+
+- **Throughput** as defined by `RFC1242
+ <https://www.rfc-editor.org/rfc/rfc1242.txt>`__: The maximum rate at which
+ **none** of the offered frames are dropped by the DUT. The maximum frame
+ rate and bit rate that can be transmitted by the DUT without any error
+ should be recorded. Note there is an equivalent bit rate and a specific
+  layer at which the payloads contribute to the bits. Errors and
+  improperly formed frames or packets are dropped (a worked example
+  relating line rate to frame rate follows this list).
+- **Packet delay** introduced by the DUT and its cumulative effect on
+ E2E networks. Frame delay can be measured equivalently.
+- **Packet delay variation**: measured from the perspective of the
+ VNF/application. Packet delay variation is sometimes called "jitter".
+ However, we will avoid the term "jitter" as the term holds different
+ meaning to different groups of people. In this document we will
+ simply use the term packet delay variation. The preferred form for this
+ metric is the PDV form of delay variation defined in `RFC5481
+ <https://www.rfc-editor.org/rfc/rfc5481.txt>`__.
+- **Packet loss** (within a configured waiting time at the receiver): All
+ packets sent to the DUT should be accounted for.
+- **Burst behaviour**: measures the ability of the DUT to buffer packets.
+- **Packet re-ordering**: measures the ability of the device under test to
+ maintain sending order throughout transfer to the destination.
+- **Packet correctness**: packets or Frames must be well-formed, in that
+ they include all required fields, conform to length requirements, pass
+ integrity checks, etc.
+- **Availability and capacity** of the DUT i.e. when the DUT is fully “up”
+ and connected:
+
+ - Includes power consumption of the CPU (in various power states) and
+ system.
+ - Includes CPU utilization.
+ - Includes the number of NIC interfaces supported.
+ - Includes headroom of VM workload processing cores (i.e. available
+ for applications).
+
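+As a point of reference (an illustrative calculation, not part of the test
+specification), line rate and frame rate are related through the frame size
+plus the 8-byte preamble and 12-byte inter-frame gap, so for 64-byte frames
+on a 10 Gb/s link the theoretical maximum frame rate is:
+
+ .. math::
+
+    \frac{10 \times 10^{9}}{(64 + 20) \times 8} \approx 14{,}880{,}952 \ \text{frames per second}
+
+Measured Throughput for that configuration can therefore never exceed
+roughly 14.88 million frames per second.
+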
+
+2.2. Approach
+--------------
+
+In order to determine the packet transfer characteristics of a virtual
+switch, the tests will be broken down into the following categories:
+
+2.2.1 Test Categories
+----------------------
+- **Throughput Tests** to measure the maximum forwarding rate (in
+ frames per second or fps) and bit rate (in Mbps) for a constant load
+ (as defined by `RFC1242 <https://www.rfc-editor.org/rfc/rfc1242.txt>`__)
+ without traffic loss.
+- **Packet and Frame Delay Tests** to measure average, min and max
+ packet and frame delay for constant loads.
+- **Stream Performance Tests** (TCP, UDP) to measure bulk data transfer
+ performance, i.e. how fast systems can send and receive data through
+ the switch.
+- **Request/Response Performance Tests** (TCP, UDP) to measure the
+ transaction rate through the switch.
+- **Packet Delay Tests** to understand latency distribution for
+ different packet sizes and over an extended test run to uncover
+ outliers.
+- **Scalability Tests** to understand how the virtual switch performs
+  as the number of flows, the number of active ports, and the complexity
+  of the forwarding logic's configuration it has to deal with increase.
+- **Control Path and Datapath Coupling** Tests, to understand how
+ closely coupled the datapath and the control path are as well as the
+ effect of this coupling on the performance of the DUT.
+- **CPU and Memory Consumption Tests** to understand the virtual
+  switch’s footprint on the system; this includes:
+
+ * CPU utilization
+ * Cache utilization
+ * Memory footprint
+ * Time To Establish Flows Tests.
+
+- **Noisy Neighbour Tests**, to understand the effects of resource
+ sharing on the performance of a virtual switch.
+
+**Note:** some of the tests above can be conducted simultaneously where
+the combined results would be insightful, for example Packet/Frame Delay
+and Scalability.
+
+2.2.2 Deployment Scenarios
+--------------------------
+The following represents possible deployments which can help to
+determine the performance of both the virtual switch and the datapath
+into the VNF:
+
+Physical port → vSwitch → physical port
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ .. code-block:: console
+
+ _
+ +--------------------------------------------------+ |
+ | +--------------------+ | |
+ | | | | |
+ | | v | | Host
+ | +--------------+ +--------------+ | |
+ | | phy port | vSwitch | phy port | | |
+ +---+--------------+------------+--------------+---+ _|
+ ^ :
+ | |
+ : v
+ +--------------------------------------------------+
+ | |
+ | traffic generator |
+ | |
+ +--------------------------------------------------+
+
+
+Physical port → vSwitch → VNF → vSwitch → physical port
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ .. code-block:: console
+
+ _
+ +---------------------------------------------------+ |
+ | | |
+ | +-------------------------------------------+ | |
+ | | Application | | |
+ | +-------------------------------------------+ | |
+ | ^ : | |
+ | | | | | Guest
+ | : v | |
+ | +---------------+ +---------------+ | |
+ | | logical port 0| | logical port 1| | |
+ +---+---------------+-----------+---------------+---+ _|
+ ^ :
+ | |
+ : v _
+ +---+---------------+----------+---------------+---+ |
+ | | logical port 0| | logical port 1| | |
+ | +---------------+ +---------------+ | |
+ | ^ : | |
+ | | | | | Host
+ | : v | |
+ | +--------------+ +--------------+ | |
+ | | phy port | vSwitch | phy port | | |
+ +---+--------------+------------+--------------+---+ _|
+ ^ :
+ | |
+ : v
+ +--------------------------------------------------+
+ | |
+ | traffic generator |
+ | |
+ +--------------------------------------------------+
+
+
+Physical port → vSwitch → VNF → vSwitch → VNF → vSwitch → physical port
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+ .. code-block:: console
+
+ _
+ +----------------------+ +----------------------+ |
+ | Guest 1 | | Guest 2 | |
+ | +---------------+ | | +---------------+ | |
+ | | Application | | | | Application | | |
+ | +---------------+ | | +---------------+ | |
+ | ^ | | | ^ | | |
+ | | v | | | v | | Guests
+ | +---------------+ | | +---------------+ | |
+ | | logical ports | | | | logical ports | | |
+ | | 0 1 | | | | 0 1 | | |
+ +---+---------------+--+ +---+---------------+--+ _|
+ ^ : ^ :
+ | | | |
+ : v : v _
+ +---+---------------+---------+---------------+--+ |
+ | | 0 1 | | 3 4 | | |
+ | | logical ports | | logical ports | | |
+ | +---------------+ +---------------+ | |
+ | ^ | ^ | | | Host
+ | | L-----------------+ v | |
+ | +--------------+ +--------------+ | |
+ | | phy ports | vSwitch | phy ports | | |
+ +---+--------------+----------+--------------+---+ _|
+ ^ : ^ :
+ | | | |
+ : v : v
+ +--------------------------------------------------+
+ | |
+ | traffic generator |
+ | |
+ +--------------------------------------------------+
+
+
+Physical port → vSwitch → VNF
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+ .. code-block:: console
+
+ _
+ +---------------------------------------------------+ |
+ | | |
+ | +-------------------------------------------+ | |
+ | | Application | | |
+ | +-------------------------------------------+ | |
+ | ^ | |
+ | | | | Guest
+ | : | |
+ | +---------------+ | |
+ | | logical port 0| | |
+ +---+---------------+-------------------------------+ _|
+ ^
+ |
+ : _
+ +---+---------------+------------------------------+ |
+ | | logical port 0| | |
+ | +---------------+ | |
+ | ^ | |
+ | | | | Host
+ | : | |
+ | +--------------+ | |
+ | | phy port | vSwitch | |
+ +---+--------------+------------ -------------- ---+ _|
+ ^
+ |
+ :
+ +--------------------------------------------------+
+ | |
+ | traffic generator |
+ | |
+ +--------------------------------------------------+
+
+VNF → vSwitch → physical port
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+ .. code-block:: console
+
+ _
+ +---------------------------------------------------+ |
+ | | |
+ | +-------------------------------------------+ | |
+ | | Application | | |
+ | +-------------------------------------------+ | |
+ | : | |
+ | | | | Guest
+ | v | |
+ | +---------------+ | |
+ | | logical port | | |
+ +-------------------------------+---------------+---+ _|
+ :
+ |
+ v _
+ +------------------------------+---------------+---+ |
+ | | logical port | | |
+ | +---------------+ | |
+ | : | |
+ | | | | Host
+ | v | |
+ | +--------------+ | |
+ | vSwitch | phy port | | |
+ +-------------------------------+--------------+---+ _|
+ :
+ |
+ v
+ +--------------------------------------------------+
+ | |
+ | traffic generator |
+ | |
+ +--------------------------------------------------+
+
+VNF → vSwitch → VNF → vSwitch
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+ .. code-block:: console
+
+ _
+ +-------------------------+ +-------------------------+ |
+ | Guest 1 | | Guest 2 | |
+ | +-----------------+ | | +-----------------+ | |
+ | | Application | | | | Application | | |
+ | +-----------------+ | | +-----------------+ | |
+ | : | | ^ | |
+ | | | | | | | Guest
+ | v | | : | |
+ | +---------------+ | | +---------------+ | |
+ | | logical port 0| | | | logical port 0| | |
+ +-----+---------------+---+ +---+---------------+-----+ _|
+ : ^
+ | |
+ v : _
+ +----+---------------+------------+---------------+-----+ |
+ | | port 0 | | port 1 | | |
+ | +---------------+ +---------------+ | |
+ | : ^ | |
+ | | | | | Host
+ | +--------------------+ | |
+ | | |
+ | vswitch | |
+ +-------------------------------------------------------+ _|
+
+HOST 1(Physical port → virtual switch → VNF → virtual switch → Physical port) → HOST 2(Physical port → virtual switch → VNF → virtual switch → Physical port)
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+ .. code-block:: console
+
+
+ +--------------------------------------------+ +------------------------------------------+
+ | +---------------------------------+ | | +--------------------------------+ |
+ | | Application | | | | Application | |
+ | +----------------------------+----+ | | +-------------------------+------+ |
+ | ^ | | | ^ | |
+ | | v | | | v |
+ | +------------------+ +------------------+ | | +------------------+ +----------------+ |
+ | | Logical port 0 | | Logical port 1 | | | | Logical port 0 | | Logical port 1 | |
+ +-+------------------+--+------------------+-+ +-+------------------+--+----------------+-+
+ ^ | ^ |
+ | | | |
+ | v | v
+ +-+------------------+--+------------------+-+ +-+------------------+--+----------------+-+
+ | | Logical port 0 | | Logical port 1 | | | | Logical port 0 | | Logical port 1 | |
+ | +------------------+ +------------------+ | | +------------------+ +----------------+ |
+ | ^ | | | ^ | |
+ | | | | | | | |
+ | | vswitch v | | | vswitch v |
+ | +--------+---------+ +------------------+ | | +------------------+ +----------------+ |
+ | | phy port | | phy port | | | | phy port | | phy port | |
+ +-+------------------+--+------------------+-+ +-+------------------+--+----------------+-+
+ ^ | ^ |
+ | | | |
+ | ------------------------ v
+ +-------------------------------------------------------------------------------------------+
+ | |
+ | traffic generator |
+ | |
+ +-------------------------------------------------------------------------------------------+
+
+**Note:** For tests where the traffic generator and/or measurement
+receiver are implemented on VM and connected to the virtual switch
+through vNIC, the issues of shared resources and interactions between
+the measurement devices and the device under test must be considered.
+
+2.2.3 General Methodology
+--------------------------
+To establish the baseline performance of the virtual switch, tests would
+initially be run with a simple workload in the VNF (the recommended
+simple workload VNF would be `DPDK <http://www.dpdk.org/>`__'s testpmd
+application forwarding packets in a VM, or vloop\_vnf, a simple kernel
+module that forwards traffic between two network interfaces inside the
+virtualized environment while bypassing the networking stack).
+Subsequently, the tests would also be executed with a real Telco
+workload running in the VNF, which would exercise the virtual switch in
+the context of higher level Telco NFV use cases, and prove that its
+underlying characteristics and behaviour can be measured and validated.
+Suitable real Telco workload VNFs are yet to be identified.
+
+2.2.3.1 Default Test Parameters
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The following list identifies the default parameters for the suite of
+tests:
+
+- Reference application: Simple forwarding or Open Source VNF.
+- Frame size (bytes): 64, 128, 256, 512, 1024, 1280, 1518, 2K, 4K, or
+  packet size based on use-case (e.g. RTP 64B, 256B).
+- Reordering check: Tests should confirm that packets within a flow are
+ not reordered.
+- Duplex: Unidirectional / Bidirectional. Default: Full duplex with
+ traffic transmitting in both directions, as network traffic generally
+ does not flow in a single direction. By default the data rate of
+ transmitted traffic should be the same in both directions, please
+ note that asymmetric traffic (e.g. downlink-heavy) tests will be
+ mentioned explicitly for the relevant test cases.
+- Number of Flows: Default for non-scalability tests is a single flow.
+ For scalability tests the goal is to test with maximum supported
+ flows but where possible will test up to 10 Million flows. Start with
+ a single flow and scale up. By default flows should be added
+ sequentially, tests that add flows simultaneously will explicitly
+ call out their flow addition behaviour. Packets are generated across
+ the flows uniformly with no burstiness.
+- Traffic Types: UDP, SCTP, RTP and GTP traffic.
+- Deployment scenarios are:
+
+  - Physical → virtual switch → physical.
+  - Physical → virtual switch → VNF → virtual switch → physical.
+  - Physical → virtual switch → VNF → virtual switch → VNF → virtual
+    switch → physical.
+  - Physical → virtual switch → VNF.
+  - VNF → virtual switch → Physical.
+  - VNF → virtual switch → VNF.
+
+Tests MUST have these parameters unless otherwise stated. **Test cases
+with non default parameters will be stated explicitly**.
+
+**Note**: For throughput tests unless stated otherwise, test
+configurations should ensure that traffic traverses the installed flows
+through the switch, i.e. flows are installed and have an appropriate
+time out that doesn't expire before packet transmission starts.
+
+2.2.3.2 Flow Classification
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Virtual switches classify packets into flows by processing and matching
+particular header fields in the packet/frame and/or the input port where
+the packets/frames arrived. The vSwitch then carries out an action on
+the group of packets that match the classification parameters. Thus a
+flow is considered to be a sequence of packets that have a shared set of
+header field values or have arrived on the same port and have the same
+action applied to them. Performance results can vary based on the
+parameters the vSwitch uses to match for a flow. The recommended flow
+classification parameters for L3 vSwitch performance tests are: the
+input port, the source IP address, the destination IP address and the
+Ethernet protocol type field. It is essential to increase the flow
+time-out time on a vSwitch before conducting any performance tests that
+do not measure the flow set-up time. Normally the first packet of a
+particular flow will install the flow in the vSwitch, which adds
+additional latency; subsequent packets of the same flow are not subject
+to this latency if the flow is already installed on the vSwitch.
+
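+The sketch below (illustrative only; the structure and names are
+placeholders, not part of any particular vSwitch implementation) models a
+flow table keyed on the recommended L3 classification parameters, and
+shows why only the first packet of a flow takes the slower set-up path:
+
+.. code-block:: python
+
+    # Illustrative sketch: a minimal flow table keyed on the recommended
+    # L3 classification parameters. All names are hypothetical.
+    from collections import namedtuple
+
+    FlowKey = namedtuple('FlowKey', 'in_port ip_src ip_dst eth_type')
+    flow_table = {}
+
+    def lookup_or_install(in_port, ip_src, ip_dst, eth_type, action):
+        """Return the action for a packet, installing the flow on a miss."""
+        key = FlowKey(in_port, ip_src, ip_dst, eth_type)
+        if key not in flow_table:     # flow set-up path (adds latency)
+            flow_table[key] = action
+        return flow_table[key]        # fast path for established flows
+
+    # Two packets of the same flow: only the first installs the entry.
+    lookup_or_install(1, '10.0.0.1', '10.0.0.2', 0x0800, 'output:2')
+    lookup_or_install(1, '10.0.0.1', '10.0.0.2', 0x0800, 'output:2')
+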
+2.2.3.3 Test Priority
+~~~~~~~~~~~~~~~~~~~~~
+
+Tests will be assigned a priority in order to determine which tests
+should be implemented immediately and which test implementations
+can be deferred.
+
+Priority can be of the following types:
+
+- Urgent: Must be implemented immediately.
+- High: Must be implemented in the next release.
+- Medium: May be implemented after the release.
+- Low: May or may not be implemented at all.
+
+2.2.3.4 SUT Setup
+~~~~~~~~~~~~~~~~~
+
+The SUT should be configured to its "default" state. The
+SUT's configuration or set-up must not change between tests in any way
+other than what is required to do the test. All supported protocols must
+be configured and enabled for each test set up.
+
+2.2.3.4.1 Port Configuration
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+The DUT should be configured with n ports where
+n is a multiple of 2. Half of the ports on the DUT should be used as
+ingress ports and the other half of the ports on the DUT should be used
+as egress ports. Where a DUT has more than 2 ports, the ingress data
+streams should be set-up so that they transmit packets to the egress
+ports in sequence so that there is an even distribution of traffic
+across ports. For example, if a DUT has 4 ports 0(ingress), 1(ingress),
+2(egress) and 3(egress), the traffic stream directed at port 0 should
+output a packet to port 2 followed by a packet to port 3. The traffic
+stream directed at port 1 should also output a packet to port 2 followed
+by a packet to port 3.
+
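+As an illustration of this distribution, the hypothetical helper below
+(not part of any test tool) builds the egress sequence for each ingress
+stream on a DUT with n ports:
+
+.. code-block:: python
+
+    # Illustrative sketch: the first half of the ports is ingress, the
+    # second half egress; every ingress stream cycles through all egress
+    # ports in sequence. Names and port numbers are placeholders.
+    def egress_sequence(ports, frames_per_stream):
+        assert len(ports) % 2 == 0, "the DUT needs a multiple of 2 ports"
+        half = len(ports) // 2
+        ingress, egress = ports[:half], ports[half:]
+        return {i: [egress[n % len(egress)] for n in range(frames_per_stream)]
+                for i in ingress}
+
+    # 4-port example from the text: streams on ports 0 and 1 alternate
+    # between egress ports 2 and 3.
+    print(egress_sequence([0, 1, 2, 3], frames_per_stream=4))
+    # {0: [2, 3, 2, 3], 1: [2, 3, 2, 3]}
+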
+2.2.3.4.2 Frame Formats
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Layer 2 (data link layer) protocols
+++++++++++++++++++++++++++++++++++++++++++++++++++
+- Ethernet II
+
+ .. code-block:: console
+
+ +---------------------+--------------------+-----------+
+ | Ethernet Header | Payload | Check Sum |
+ +---------------------+--------------------+-----------+
+ |_____________________|____________________|___________|
+ 14 Bytes 46 - 1500 Bytes 4 Bytes
+
+Layer 3 (network layer) protocols
+++++++++++++++++++++++++++++++++++
+
+- IPv4
+
+ .. code-block:: console
+
+ +---------------------+--------------------+--------------------+-----------+
+ | Ethernet Header | IP Header | Payload | Check Sum |
+ +---------------------+--------------------+--------------------+-----------+
+ |_____________________|____________________|____________________|___________|
+ 14 Bytes 20 bytes 26 - 1480 Bytes 4 Bytes
+
+- IPv6
+
+ .. code-block:: console
+
+ +---------------------+--------------------+--------------------+-----------+
+ | Ethernet Header | IP Header | Payload | Check Sum |
+ +---------------------+--------------------+--------------------+-----------+
+ |_____________________|____________________|____________________|___________|
+ 14 Bytes 40 bytes 26 - 1460 Bytes 4 Bytes
+
+Layer 4 (transport layer) protocols
+++++++++++++++++++++++++++++++++++++
+ - TCP
+ - UDP
+ - SCTP
+
+ .. code-block:: console
+
+ +---------------------+--------------------+-----------------+--------------------+-----------+
+ | Ethernet Header | IP Header | Layer 4 Header | Payload | Check Sum |
+ +---------------------+--------------------+-----------------+--------------------+-----------+
+ |_____________________|____________________|_________________|____________________|___________|
+ 14 Bytes 40 bytes 20 Bytes 6 - 1460 Bytes 4 Bytes
+
+Layer 5 (application layer) protocols
++++++++++++++++++++++++++++++++++++++
+ - RTP
+ - GTP
+
+ .. code-block:: console
+
+ +---------------------+--------------------+-----------------+--------------------+-----------+
+ | Ethernet Header | IP Header | Layer 4 Header | Payload | Check Sum |
+ +---------------------+--------------------+-----------------+--------------------+-----------+
+ |_____________________|____________________|_________________|____________________|___________|
+ 14 Bytes 20 bytes 20 Bytes Min 6 Bytes 4 Bytes
+
+
+2.2.3.4.3 Packet Throughput
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+There is a difference between an Ethernet frame,
+an IP packet, and a UDP datagram. In the seven-layer OSI model of
+computer networking, packet refers to a data unit at layer 3 (network
+layer). The correct term for a data unit at layer 2 (data link layer) is
+a frame, and at layer 4 (transport layer) is a segment or datagram.
+
+Important concepts related to 10GbE performance are frame rate and
+throughput. The MAC bit rate of 10GbE, defined in the IEEE standard
+802.3ae, is 10 billion bits per second. Frame rate is based on the bit rate
+and frame format definitions. Throughput, defined in IETF RFC 1242, is
+the highest rate at which the system under test can forward the offered
+load, without loss.
+
+The frame rate for 10GbE is determined by a formula that divides the 10
+billion bits per second by the preamble + frame length + inter-frame
+gap.
+
+The maximum frame rate is calculated using the minimum values of the
+following parameters, as described in the IEEE 802.3ae standard:
+
+- Preamble: 8 bytes \* 8 = 64 bits
+- Frame Length: 64 bytes (minimum) \* 8 = 512 bits
+- Inter-frame Gap: 12 bytes (minimum) \* 8 = 96 bits
+
+Therefore, Maximum Frame Rate (64B Frames)
+= MAC Transmit Bit Rate / (Preamble + Frame Length + Inter-frame Gap)
+= 10,000,000,000 / (64 + 512 + 96)
+= 10,000,000,000 / 672
+= 14,880,952.38 frames per second (fps)
+
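+The same calculation can be generalised to any frame size. The short
+sketch below is illustrative only; it reproduces the 64B figure above and
+computes the equivalent theoretical rates for the other default frame
+sizes:
+
+.. code-block:: python
+
+    # Illustrative sketch: theoretical maximum frame rate on a 10GbE link.
+    PREAMBLE = 8   # bytes
+    IFG = 12       # bytes, minimum inter-frame gap
+    BIT_RATE = 10 * 10**9   # 10GbE MAC bit rate, bits per second
+
+    def max_frame_rate(frame_bytes, bit_rate=BIT_RATE):
+        """Frames per second = bit rate / bits on the wire per frame."""
+        wire_bits = (PREAMBLE + frame_bytes + IFG) * 8
+        return bit_rate / float(wire_bits)
+
+    for size in (64, 128, 256, 512, 1024, 1280, 1518):
+        print("%5d B: %.2f fps" % (size, max_frame_rate(size)))
+    # 64 B gives 14880952.38 fps, matching the figure above.
+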
+2.2.3.4.4 System isolation and validation
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+A key consideration when conducting any sort of benchmark is trying to
+ensure the consistency and repeatability of test results between runs.
+When benchmarking the performance of a virtual switch there are many
+factors that can affect the consistency of results. This section
+describes these factors and the measures that can be taken to limit
+their effects. In addition, this section will outline some system tests
+to validate the platform and the VNF before conducting any vSwitch
+benchmarking tests.
+
+System Isolation
+++++++++++++++++
+When conducting a benchmarking test on any SUT, it is essential to limit
+(and if reasonable, eliminate) any noise that may interfere with the
+accuracy of the metrics collected by the test. This noise may be
+introduced by other hardware or software (OS, other applications), and
+can result in significantly varying performance metrics being collected
+between consecutive runs of the same test. In the case of characterizing
+the performance of a virtual switch, there are a number of configuration
+parameters that can help increase the repeatability and stability of
+test results, including:
+
+- OS/GRUB configuration:
+
+ - maxcpus = n where n >= 0; limits the kernel to using 'n'
+ processors. Only use exactly what you need.
+ - isolcpus: Isolate CPUs from the general scheduler. Isolate all
+ CPUs bar one which will be used by the OS.
+ - use taskset to affinitize the forwarding application and the VNFs
+ onto isolated cores. VNFs and the vSwitch should be allocated
+ their own cores, i.e. must not share the same cores. vCPUs for the
+ VNF should be affinitized to individual cores also.
+ - Limit the amount of background applications that are running and
+ set OS to boot to runlevel 3. Make sure to kill any unnecessary
+ system processes/daemons.
+ - Only enable hardware that you need to use for your test – to
+ ensure there are no other interrupts on the system.
+ - Configure NIC interrupts to only use the cores that are not
+ allocated to any other process (VNF/vSwitch).
+
+- NUMA configuration: Any unused sockets in a multi-socket system
+ should be disabled.
+- CPU pinning: The vSwitch and the VNF should each be affinitized to
+  separate logical cores using a combination of maxcpus, isolcpus and
+  taskset (see the sketch below).
+- BIOS configuration: BIOS should be configured for performance where
+ an explicit option exists, sleep states should be disabled, any
+ virtualization optimization technologies should be enabled, and
+ hyperthreading should also be enabled.
+
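+As an example of the effect that taskset achieves, the sketch below pins
+the calling process to a set of isolated cores from Python (Linux only;
+the core numbers are arbitrary placeholders, not a recommendation):
+
+.. code-block:: python
+
+    # Illustrative sketch only: pin the current process to isolated cores,
+    # the same effect 'taskset' provides from the command line (Linux).
+    import os
+
+    ISOLATED_CORES = {2, 3}      # e.g. cores listed in isolcpus=2,3
+
+    os.sched_setaffinity(0, ISOLATED_CORES)        # 0 == calling process
+    print("now pinned to cores:", os.sched_getaffinity(0))
+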
+System Validation
++++++++++++++++++
+System validation is broken down into two sub-categories: Platform
+validation and VNF validation. The validation test itself involves
+verifying the forwarding capability and stability for the sub-system
+under test. The rationale behind system validation is twofold: firstly,
+to give a tester confidence in the stability of the platform or VNF that
+is being tested; and secondly, to provide base performance comparison
+points to understand the overhead introduced by the virtual switch.
+
+* Benchmark platform forwarding capability: This is an OPTIONAL test
+ used to verify the platform and measure the base performance (maximum
+ forwarding rate in fps and latency) that can be achieved by the
+ platform without a vSwitch or a VNF. The following diagram outlines
+ the set-up for benchmarking Platform forwarding capability:
+
+ .. code-block:: console
+
+ __
+ +--------------------------------------------------+ |
+ | +------------------------------------------+ | |
+ | | | | |
+ | | l2fw or DPDK L2FWD app | | Host
+ | | | | |
+ | +------------------------------------------+ | |
+ | | NIC | | |
+ +---+------------------------------------------+---+ __|
+ ^ :
+ | |
+ : v
+ +--------------------------------------------------+
+ | |
+ | traffic generator |
+ | |
+ +--------------------------------------------------+
+
+* Benchmark VNF forwarding capability: This test is used to verify
+ the VNF and measure the base performance (maximum forwarding rate in
+ fps and latency) that can be achieved by the VNF without a vSwitch.
+ The performance metrics collected by this test will serve as a key
+ comparison point for NIC passthrough technologies and vSwitches. VNF
+ in this context refers to the hypervisor and the VM. The following
+ diagram outlines the set-up for benchmarking VNF forwarding
+ capability:
+
+ .. code-block:: console
+
+ __
+ +--------------------------------------------------+ |
+ | +------------------------------------------+ | |
+ | | | | |
+ | | VNF | | |
+ | | | | |
+ | +------------------------------------------+ | |
+ | | Passthrough/SR-IOV | | Host
+ | +------------------------------------------+ | |
+ | | NIC | | |
+ +---+------------------------------------------+---+ __|
+ ^ :
+ | |
+ : v
+ +--------------------------------------------------+
+ | |
+ | traffic generator |
+ | |
+ +--------------------------------------------------+
+
+
+Methodology to benchmark Platform/VNF forwarding capability
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
+
+The recommended methodology for the platform/VNF validation and
+benchmark is:
+
+- Run the `RFC2889 <https://www.rfc-editor.org/rfc/rfc2889.txt>`__
+  Maximum Forwarding Rate test. This test will produce maximum
+  forwarding rate and latency results that will serve as the expected
+  values. These expected values can be used in subsequent steps or
+  compared with in subsequent validation tests.
+- Transmit bidirectional traffic at line rate/max forwarding rate
+  (whichever is higher) for at least 72 hours, measure throughput (fps)
+  and latency.
+- Note: Traffic should be bidirectional.
+- Establish a baseline forwarding rate for what the platform can
+  achieve.
+- Additional validation: After the test has completed for 72 hours run
+  bidirectional traffic at the maximum forwarding rate once more to see
+  if the system is still functional and measure throughput (fps) and
+  latency. Compare the newly obtained values with the expected values.
+
+**NOTE 1**: How the Platform is configured for its forwarding capability
+test (BIOS settings, GRUB configuration, runlevel...) is how the
+platform should be configured for every test after this
+
+**NOTE 2**: How the VNF is configured for its forwarding capability test
+(# of vCPUs, vNICs, Memory, affinitization…) is how it should be
+configured for every test that uses a VNF after this.
+
+2.2.4 RFCs for testing switch performance
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The starting point for defining the suite of tests for benchmarking the
+performance of a virtual switch is to take existing RFCs and standards
+that were designed to test their physical counterparts and adapting them
+for testing virtual switches. The rationale behind this is to establish
+a fair comparison between the performance of virtual and physical
+switches. This section outlines the RFCs that are used by this
+specification.
+
+RFC 1242 Benchmarking Terminology for Network Interconnection Devices
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+RFC 1242 defines the terminology that is used in describing
+performance benchmarking tests and their results. Definitions and
+discussions covered include: Back-to-back, bridge, bridge/router,
+constant load, data link frame size, frame loss rate, inter frame gap,
+latency, and many more.
+
+RFC 2544 Benchmarking Methodology for Network Interconnect Devices
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+RFC 2544 outlines a benchmarking methodology for network Interconnect
+Devices. The methodology results in performance metrics such as latency,
+frame loss percentage, and maximum data throughput.
+
+In this document network “throughput” (measured in millions of frames
+per second) is based on RFC 2544, unless otherwise noted. Frame size
+refers to Ethernet frames ranging from smallest frames of 64 bytes to
+largest frames of 4K bytes.
+
+Types of tests are:
+
+1. Throughput test defines the maximum number of frames per second
+ that can be transmitted without any error.
+
+2. Latency test measures the time required for a frame to travel from
+ the originating device through the network to the destination device.
+ Please note that RFC2544 Latency measurement will be superseded with
+ a measurement of average latency over all successfully transferred
+ packets or frames.
+
+3. Frame loss test measures the network’s
+ response in overload conditions - a critical indicator of the
+ network’s ability to support real-time applications in which a
+ large amount of frame loss will rapidly degrade service quality.
+
+4. Burst test assesses the buffering capability of a switch. It
+ measures the maximum number of frames received at full line rate
+ before a frame is lost. In carrier Ethernet networks, this
+ measurement validates the excess information rate (EIR) as defined in
+ many SLAs.
+
+5. System recovery to characterize speed of recovery from an overload
+ condition.
+
+6. Reset to characterize speed of recovery from device or software
+ reset. This type of test has been updated by `RFC6201 <https://www.rfc-editor.org/rfc/rfc6201.txt>`__ as such,
+ the methodology defined by this specification will be that of RFC 6201.
+
+Although not included in the defined RFC 2544 standard, another crucial
+measurement in Ethernet networking is packet delay variation. The
+definition set out by this specification comes from
+`RFC5481 <https://www.rfc-editor.org/rfc/rfc5481.txt>`__.
+
+RFC 2285 Benchmarking Terminology for LAN Switching Devices
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+RFC 2285 defines the terminology that is used to describe the
+terminology for benchmarking a LAN switching device. It extends RFC
+1242 and defines: DUTs, SUTs, Traffic orientation and distribution,
+bursts, loads, forwarding rates, etc.
+
+RFC 2889 Benchmarking Methodology for LAN Switching
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+RFC 2889 outlines a benchmarking methodology for LAN switching, it
+extends RFC 2544. The outlined methodology gathers performance
+metrics for forwarding, congestion control, latency, address handling
+and finally filtering.
+
+RFC 3918 Methodology for IP Multicast Benchmarking
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+RFC 3918 outlines a methodology for IP Multicast benchmarking.
+
+RFC 4737 Packet Reordering Metrics
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+RFC 4737 describes metrics for identifying and counting re-ordered
+packets within a stream, and metrics to measure the extent each
+packet has been re-ordered.
+
+RFC 5481 Packet Delay Variation Applicability Statement
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+RFC 5481 defines two common but different forms of delay variation
+metrics, and compares the metrics over a range of networking
+circumstances and tasks. The most suitable form for vSwitch
+benchmarking is the "PDV" form.
+
+RFC 6201 Device Reset Characterization
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+RFC 6201 extends the methodology for characterizing the speed of
+recovery of the DUT from device or software reset described in RFC
+2544.
+
+2.2.5 Details of the Test Report
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+There are a number of parameters related to the system, DUT and tests
+that can affect the repeatability of test results and should be
+recorded. In order to minimise the variation in the results of a test,
+it is recommended that the test report includes the following information:
+
+- Hardware details including:
+
+ - Platform details.
+ - Processor details.
+ - Memory information (see below)
+ - Number of enabled cores.
+ - Number of cores used for the test.
+ - Number of physical NICs, as well as their details (manufacturer,
+ versions, type and the PCI slot they are plugged into).
+ - NIC interrupt configuration.
+ - BIOS version, release date and any configurations that were
+ modified.
+
+- Software details including:
+
+ - OS version (for host and VNF)
+ - Kernel version (for host and VNF)
+ - GRUB boot parameters (for host and VNF).
+ - Hypervisor details (Type and version).
+ - Selected vSwitch, version number or commit id used.
+ - vSwitch launch command line if it has been parameterised.
+ - Memory allocation to the vSwitch – which NUMA node it is using,
+ and how many memory channels.
+ - Where the vswitch is built from source: compiler details including
+ versions and the flags that were used to compile the vSwitch.
+ - DPDK or any other SW dependency version number or commit id used.
+  - Memory allocation to a VM - if it's from Hugepages/elsewhere.
+ - VM storage type: snapshot/independent persistent/independent
+ non-persistent.
+ - Number of VMs.
+ - Number of Virtual NICs (vNICs), versions, type and driver.
+ - Number of virtual CPUs and their core affinity on the host.
+  - vNIC interrupt configuration.
+ - Thread affinitization for the applications (including the vSwitch
+ itself) on the host.
+ - Details of Resource isolation, such as CPUs designated for
+ Host/Kernel (isolcpu) and CPUs designated for specific processes
+ (taskset).
+
+- Memory Details
+
+ - Total memory
+ - Type of memory
+ - Used memory
+ - Active memory
+ - Inactive memory
+ - Free memory
+ - Buffer memory
+ - Swap cache
+ - Total swap
+ - Used swap
+ - Free swap
+
+- Test duration.
+- Number of flows.
+- Traffic Information:
+
+ - Traffic type - UDP, TCP, IMIX / Other.
+ - Packet Sizes.
+
+- Deployment Scenario.
+
+**Note**: Tests that require additional parameters to be recorded will
+explicitly specify this.
+
+2.3. Test identification
+------------------------
+2.3.1 Throughput tests
+~~~~~~~~~~~~~~~~~~~~~~
+The following tests aim to determine the maximum forwarding rate that
+can be achieved with a virtual switch. The list is not exhaustive but
+should indicate the type of tests that should be required. It is
+expected that more will be added.
+
+Test ID: LTD.Throughput.RFC2544.PacketLossRatio
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ **Title**: RFC 2544 X% packet loss ratio Throughput and Latency Test
+
+ **Prerequisite Test**: N/A
+
+ **Priority**:
+
+ **Description**:
+
+ This test determines the DUT's maximum forwarding rate with X% traffic
+ loss for a constant load (fixed length frames at a fixed interval time).
+  The default loss percentages to be tested are:
+
+  - X = 0%
+  - X = 10^-7%
+
+ Note: Other values can be tested if required by the user.
+
+ The selected frame sizes are those previously defined under `Default
+ Test Parameters <#DefaultParams>`__. The test can also be used to
+ determine the average latency of the traffic.
+
+ Under the `RFC2544 <https://www.rfc-editor.org/rfc/rfc2544.txt>`__
+ test methodology, the test duration will
+ include a number of trials; each trial should run for a minimum period
+ of 60 seconds. A binary search methodology must be applied for each
+ trial to obtain the final result.
+
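+  A minimal sketch of such a binary search, assuming a hypothetical
+  ``run_trial`` helper that drives the traffic generator for one 60 second
+  trial and returns the measured loss ratio, is shown below; it is
+  illustrative only and not part of this specification:
+
+  .. code-block:: python
+
+     # Illustrative sketch of an RFC 2544 style binary search.
+     # 'run_trial(rate_fps)' is a hypothetical placeholder returning the
+     # loss ratio observed during a 60 second trial at the offered rate.
+     def rfc2544_binary_search(run_trial, max_loss=0.0,
+                               line_rate_fps=14880952, resolution_fps=1000):
+         low, high = 0, line_rate_fps
+         best = 0
+         while high - low > resolution_fps:
+             rate = (low + high) // 2          # offered load for this trial
+             if run_trial(rate) <= max_loss:   # loss acceptable: search higher
+                 best, low = rate, rate
+             else:                             # too much loss: search lower
+                 high = rate
+         return best                           # RFC 2544 Throughput estimate
+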
+ **Expected Result**: At the end of each trial, the presence or absence
+ of loss determines the modification of offered load for the next trial,
+ converging on a maximum rate, or
+ `RFC2544 <https://www.rfc-editor.org/rfc/rfc2544.txt>`__ Throughput with X% loss.
+ The Throughput load is re-used in related
+ `RFC2544 <https://www.rfc-editor.org/rfc/rfc2544.txt>`__ tests and other
+ tests.
+
+ **Metrics Collected**:
+
+ The following are the metrics collected for this test:
+
+ - The maximum forwarding rate in Frames Per Second (FPS) and Mbps of
+ the DUT for each frame size with X% packet loss.
+ - The average latency of the traffic flow when passing through the DUT
+ (if testing for latency, note that this average is different from the
+ test specified in Section 26.3 of
+ `RFC2544 <https://www.rfc-editor.org/rfc/rfc2544.txt>`__).
+ - CPU and memory utilization may also be collected as part of this
+ test, to determine the vSwitch's performance footprint on the system.
+
+Test ID: LTD.Throughput.RFC2544.PacketLossRatioFrameModification
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ **Title**: RFC 2544 X% packet loss Throughput and Latency Test with
+ packet modification
+
+ **Prerequisite Test**: N/A
+
+ **Priority**:
+
+ **Description**:
+
+ This test determines the DUT's maximum forwarding rate with X% traffic
+ loss for a constant load (fixed length frames at a fixed interval time).
+  The default loss percentages to be tested are:
+
+  - X = 0%
+  - X = 10^-7%
+
+ Note: Other values can be tested if required by the user.
+
+ The selected frame sizes are those previously defined under `Default
+ Test Parameters <#DefaultParams>`__. The test can also be used to
+ determine the average latency of the traffic.
+
+ Under the `RFC2544 <https://www.rfc-editor.org/rfc/rfc2544.txt>`__
+ test methodology, the test duration will
+ include a number of trials; each trial should run for a minimum period
+ of 60 seconds. A binary search methodology must be applied for each
+ trial to obtain the final result.
+
+ During this test, the DUT must perform the following operations on the
+ traffic flow:
+
+ - Perform packet parsing on the DUT's ingress port.
+ - Perform any relevant address look-ups on the DUT's ingress ports.
+ - Modify the packet header before forwarding the packet to the DUT's
+ egress port. Packet modifications include:
+
+ - Modifying the Ethernet source or destination MAC address.
+ - Modifying/adding a VLAN tag. (**Recommended**).
+ - Modifying/adding a MPLS tag.
+ - Modifying the source or destination ip address.
+ - Modifying the TOS/DSCP field.
+ - Modifying the source or destination ports for UDP/TCP/SCTP.
+ - Modifying the TTL.
+
+ **Expected Result**: The Packet parsing/modifications require some
+ additional degree of processing resource, therefore the
+ `RFC2544 <https://www.rfc-editor.org/rfc/rfc2544.txt>`__
+ Throughput is expected to be somewhat lower than the Throughput level
+ measured without additional steps. The reduction is expected to be
+ greatest on tests with the smallest packet sizes (greatest header
+ processing rates).
+
+ **Metrics Collected**:
+
+ The following are the metrics collected for this test:
+
+ - The maximum forwarding rate in Frames Per Second (FPS) and Mbps of
+ the DUT for each frame size with X% packet loss and packet
+ modification operations being performed by the DUT.
+ - The average latency of the traffic flow when passing through the DUT
+ (if testing for latency, note that this average is different from the
+ test specified in Section 26.3 of
+ `RFC2544 <https://www.rfc-editor.org/rfc/rfc2544.txt>`__).
+ - The `RFC5481 <https://www.rfc-editor.org/rfc/rfc5481.txt>`__
+ PDV form of delay variation on the traffic flow,
+ using the 99th percentile.
+ - CPU and memory utilization may also be collected as part of this
+ test, to determine the vSwitch's performance footprint on the system.
+
+Test ID: LTD.Throughput.RFC2544.Profile
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ **Title**: RFC 2544 Throughput and Latency Profile
+
+ **Prerequisite Test**: N/A
+
+ **Priority**:
+
+ **Description**:
+
+  This test reveals how throughput and latency degrade as the offered
+ rate varies in the region of the DUT's maximum forwarding rate as
+ determined by LTD.Throughput.RFC2544.PacketLossRatio (0% Packet Loss).
+ For example it can be used to determine if the degradation of throughput
+ and latency as the offered rate increases is slow and graceful or sudden
+ and severe.
+
+ The selected frame sizes are those previously defined under `Default
+ Test Parameters <#DefaultParams>`__.
+
+  The offered traffic rate is described as a percentage delta with respect
+  to the DUT's maximum forwarding rate as determined by
+  LTD.Throughput.RFC2544.PacketLossRatio (0% Packet Loss case). A delta
+  of 0% is equivalent to an offered traffic rate equal to the maximum
+  forwarding rate; a delta of +50% indicates an offered rate half-way
+  between the maximum forwarding rate and line-rate, whereas a delta of
+  -50% indicates an offered rate of half the maximum rate (see the sketch
+  below). Therefore the range of the delta figure is naturally bounded at
+  -100% (zero offered traffic) and +100% (traffic offered at line rate).
+
+ The following deltas to the maximum forwarding rate should be applied:
+
+ - -50%, -10%, 0%, +10% & +50%
+
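+  A minimal sketch of this mapping (illustrative only; the rates used in
+  the example are placeholders) is:
+
+  .. code-block:: python
+
+     # Illustrative sketch: convert a percentage delta into an offered rate.
+     def offered_rate(delta_pct, max_fwd_rate, line_rate):
+         """0% -> max forwarding rate, +100% -> line rate, -100% -> zero."""
+         if delta_pct >= 0:
+             return max_fwd_rate + (delta_pct / 100.0) * (line_rate - max_fwd_rate)
+         return max_fwd_rate * (1 + delta_pct / 100.0)
+
+     # Example with hypothetical rates (fps):
+     for delta in (-50, -10, 0, 10, 50):
+         print(delta, offered_rate(delta, max_fwd_rate=8.0e6, line_rate=14.88e6))
+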
+ **Expected Result**: For each packet size a profile should be produced
+ of how throughput and latency vary with offered rate.
+
+ **Metrics Collected**:
+
+ The following are the metrics collected for this test:
+
+ - The forwarding rate in Frames Per Second (FPS) and Mbps of the DUT
+ for each delta to the maximum forwarding rate and for each frame
+ size.
+ - The average latency for each delta to the maximum forwarding rate and
+ for each frame size.
+ - CPU and memory utilization may also be collected as part of this
+ test, to determine the vSwitch's performance footprint on the system.
+ - Any failures experienced (for example if the vSwitch crashes, stops
+ processing packets, restarts or becomes unresponsive to commands)
+ when the offered load is above Maximum Throughput MUST be recorded
+ and reported with the results.
+
+Test ID: LTD.Throughput.RFC2544.SystemRecoveryTime
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ **Title**: RFC 2544 System Recovery Time Test
+
+ **Prerequisite Test** LTD.Throughput.RFC2544.PacketLossRatio
+
+ **Priority**:
+
+ **Description**:
+
+ The aim of this test is to determine the length of time it takes the DUT
+ to recover from an overload condition for a constant load (fixed length
+ frames at a fixed interval time). The selected frame sizes are those
+ previously defined under `Default Test Parameters <#DefaultParams>`__,
+ traffic should be sent to the DUT under normal conditions. During the
+  duration of the test and while the traffic flows are passing through the
+ DUT, at least one situation leading to an overload condition for the DUT
+ should occur. The time from the end of the overload condition to when
+ the DUT returns to normal operations should be measured to determine
+ recovery time. Prior to overloading the DUT, one should record the
+ average latency for 10,000 packets forwarded through the DUT.
+
+ The overload condition SHOULD be to transmit traffic at a very high
+ frame rate to the DUT (150% of the maximum 0% packet loss rate as
+ determined by LTD.Throughput.RFC2544.PacketLossRatio or line-rate
+ whichever is lower), for at least 60 seconds, then reduce the frame rate
+  to 75% of the maximum 0% packet loss rate. A number of time-stamps
+  should be recorded:
+
+  - Record the time-stamp at which the frame rate was reduced and record
+    a second time-stamp at the time of the last frame lost. The recovery
+    time is the difference between the two timestamps.
+  - Record the average latency for 10,000 frames after the last frame
+    loss and continue to record average latency measurements for every
+    10,000 frames, when latency returns to within 10% of pre-overload
+    levels record the time-stamp.
+
+ **Expected Result**:
+
+ **Metrics collected**
+
+ The following are the metrics collected for this test:
+
+ - The length of time it takes the DUT to recover from an overload
+ condition.
+ - The length of time it takes the DUT to recover the average latency to
+ pre-overload conditions.
+
+ **Deployment scenario**:
+
+ - Physical → virtual switch → physical.
+
+Test ID: LTD.Throughput.RFC2544.BackToBackFrames
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ **Title**: RFC2544 Back To Back Frames Test
+
+ **Prerequisite Test**: N
+
+ **Priority**:
+
+ **Description**:
+
+ The aim of this test is to characterize the ability of the DUT to
+ process back-to-back frames. For each frame size previously defined
+ under `Default Test Parameters <#DefaultParams>`__, a burst of traffic
+ is sent to the DUT with the minimum inter-frame gap between each frame.
+ If the number of received frames equals the number of frames that were
+ transmitted, the burst size should be increased and traffic is sent to
+ the DUT again. The value measured is the back-to-back value, that is the
+ maximum burst size the DUT can handle without any frame loss.
+
+ **Expected Result**:
+
+ Tests of back-to-back frames with physical devices have produced
+ unstable results in some cases. All tests should be repeated in multiple
+ test sessions and results stability should be examined.
+
+ **Metrics collected**
+
+ The following are the metrics collected for this test:
+
+  - The back-to-back value, which is the number of frames in the
+ longest burst that the DUT will handle without the loss of any
+ frames.
+ - CPU and memory utilization may also be collected as part of this
+ test, to determine the vSwitch's performance footprint on the system.
+
+ **Deployment scenario**:
+
+ - Physical → virtual switch → physical.
+
+Test ID: LTD.Throughput.RFC2889.Soak
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ **Title**: RFC 2889 X% packet loss Throughput Soak Test
+
+ **Prerequisite Test** LTD.Throughput.RFC2544.PacketLossRatio
+
+ **Priority**:
+
+ **Description**:
+
+ The aim of this test is to understand the Throughput stability over an
+ extended test duration in order to uncover any outliers. To allow for an
+ extended test duration, the test should ideally run for 24 hours or, if
+ this is not possible, for at least 6 hours. For this test, each frame
+ size must be sent at the highest Throughput with X% packet loss, as
+  determined in the prerequisite test. The default loss percentages to be
+  tested are:
+
+  - X = 0%
+  - X = 10^-7%
+
+ Note: Other values can be tested if required by the user.
+
+ **Expected Result**:
+
+ **Metrics Collected**:
+
+ The following are the metrics collected for this test:
+
+ - Throughput stability of the DUT.
+
+ - This means reporting the number of packets lost per time interval
+ and reporting any time intervals with packet loss. The
+      `RFC2889 <https://www.rfc-editor.org/rfc/rfc2889.txt>`__
+ Forwarding Rate shall be measured in each interval.
+ An interval of 60s is suggested.
+
+ - CPU and memory utilization may also be collected as part of this
+ test, to determine the vSwitch's performance footprint on the system.
+ - The `RFC5481 <https://www.rfc-editor.org/rfc/rfc5481.txt>`__
+ PDV form of delay variation on the traffic flow,
+ using the 99th percentile.
+
+Test ID: LTD.Throughput.RFC2889.SoakFrameModification
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ **Title**: RFC 2889 Throughput Soak Test with Frame Modification
+
+ **Prerequisite Test**: LTD.Throughput.RFC2544.PacketLossRatioFrameModification (0% Packet Loss)
+
+ **Priority**:
+
+ **Description**:
+
+ The aim of this test is to understand the throughput stability over an
+ extended test duration in order to uncover any outliers. To allow for an
+ extended test duration, the test should ideally run for 24 hours or, if
+  this is not possible, for at least 6 hours. For this test, each frame
+ size must be sent at the highest Throughput with 0% packet loss, as
+ determined in the prerequisite test.
+
+ During this test, the DUT must perform the following operations on the
+ traffic flow:
+
+ - Perform packet parsing on the DUT's ingress port.
+ - Perform any relevant address look-ups on the DUT's ingress ports.
+ - Modify the packet header before forwarding the packet to the DUT's
+ egress port. Packet modifications include:
+
+ - Modifying the Ethernet source or destination MAC address.
+ - Modifying/adding a VLAN tag (**Recommended**).
+ - Modifying/adding a MPLS tag.
+ - Modifying the source or destination ip address.
+ - Modifying the TOS/DSCP field.
+ - Modifying the source or destination ports for UDP/TCP/SCTP.
+ - Modifying the TTL.
+
+ **Expected Result**:
+
+ **Metrics Collected**:
+
+ The following are the metrics collected for this test:
+
+ - Throughput stability of the DUT.
+
+ - This means reporting the number of packets lost per time interval
+ and reporting any time intervals with packet loss. The
+      `RFC2889 <https://www.rfc-editor.org/rfc/rfc2889.txt>`__
+ Forwarding Rate shall be measured in each interval.
+ An interval of 60s is suggested.
+
+ - CPU and memory utilization may also be collected as part of this
+ test, to determine the vSwitch's performance footprint on the system.
+ - The `RFC5481 <https://www.rfc-editor.org/rfc/rfc5481.txt>`__ PDV form of delay variation on the traffic flow,
+ using the 99th percentile.
+
+Test ID: LTD.Throughput.RFC6201.ResetTime
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ **Title**: RFC 6201 Reset Time Test
+
+ **Prerequisite Test**: N/A
+
+ **Priority**:
+
+ **Description**:
+
+ The aim of this test is to determine the length of time it takes the DUT
+ to recover from a reset.
+
+ Two reset methods are defined - planned and unplanned. A planned reset
+ requires stopping and restarting the virtual switch by the usual
+  'graceful' method defined by its documentation. An unplanned reset
+ requires simulating a fatal internal fault in the virtual switch - for
+ example by using kill -SIGKILL on a Linux environment.
+
+ Both reset methods SHOULD be exercised.
+
+ For each frame size previously defined under `Default Test
+ Parameters <#DefaultParams>`__, traffic should be sent to the DUT under
+ normal conditions. During the duration of the test and while the traffic
+ flows are passing through the DUT, the DUT should be reset and the Reset
+ time measured. The Reset time is the total time that a device is
+ determined to be out of operation and includes the time to perform the
+ reset and the time to recover from it (cf. `RFC6201 <https://www.rfc-editor.org/rfc/rfc6201.txt>`__).
+
+  `RFC6201 <https://www.rfc-editor.org/rfc/rfc6201.txt>`__ defines two
+  methods to measure the Reset time (both are sketched below):
+
+  - Frame-Loss Method: which requires the monitoring of the number of
+    lost frames and calculates the Reset time based on the number of
+    frames lost and the offered rate according to the following
+    formula:
+
+ .. code-block:: console
+
+ Frames_lost (packets)
+ Reset_time = -------------------------------------
+ Offered_rate (packets per second)
+
+ - Timestamp Method: which measures the time from which the last frame
+ is forwarded from the DUT to the time the first frame is forwarded
+ after the reset. This involves time-stamping all transmitted frames
+ and recording the timestamp of the last frame that was received prior
+ to the reset and also measuring the timestamp of the first frame that
+ is received after the reset. The Reset time is the difference between
+ these two timestamps.
+
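+  Both calculations are straightforward; the following is an illustrative
+  sketch only (all variable names are placeholders):
+
+  .. code-block:: python
+
+     # Illustrative sketches of the two RFC 6201 Reset time calculations.
+     def reset_time_frame_loss(frames_lost, offered_rate_pps):
+         """Frame-Loss Method: lost frames divided by the offered rate."""
+         return frames_lost / float(offered_rate_pps)
+
+     def reset_time_timestamp(last_before_reset, first_after_reset):
+         """Timestamp Method: gap between the last frame forwarded before
+         the reset and the first one forwarded after it (seconds)."""
+         return first_after_reset - last_before_reset
+
+     # e.g. 1,488,095 frames lost at 14,880,952 pps is roughly a 0.1 s reset.
+     print(reset_time_frame_loss(1488095, 14880952))
+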
+  According to `RFC6201 <https://www.rfc-editor.org/rfc/rfc6201.txt>`__ the choice of method depends on the test
+  tool's capability; the Frame-Loss method SHOULD be used if the test tool
+  supports:
+
+  - Counting the number of lost frames per stream.
+  - Transmitting test frames despite the physical link status.
+
+  whereas the Timestamp method SHOULD be used if the test tool supports:
+
+  - Timestamping each frame.
+  - Monitoring received frame's timestamp.
+  - Transmitting frames only if the physical link status is up.
+
+ **Expected Result**:
+
+ **Metrics collected**
+
+  The following are the metrics collected for this test:
+
+  - Average Reset Time over the number of trials performed.
+
+  Results of this test should include the following information:
+
+  - The reset method used.
+  - Throughput in Fps and Mbps.
+  - Average Frame Loss over the number of trials performed.
+  - Average Reset Time in milliseconds over the number of trials
+    performed.
+  - Number of trials performed.
+  - Protocol: IPv4, IPv6, MPLS, etc.
+  - Frame Size in Octets.
+  - Port Media: Ethernet, Gigabit Ethernet (GbE), etc.
+  - Port Speed: 10 Gbps, 40 Gbps etc.
+  - Interface Encapsulation: Ethernet, Ethernet VLAN, etc.
+
+ **Deployment scenario**:
+
+ - Physical → virtual switch → physical.
+
+Test ID: LTD.Throughput.RFC2889.MaxForwardingRate
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ **Title**: RFC2889 Forwarding Rate Test
+
+ **Prerequisite Test**: LTD.Throughput.RFC2544.PacketLossRatio
+
+ **Priority**:
+
+ **Description**:
+
+ This test measures the DUT's Max Forwarding Rate when the Offered Load
+ is varied between the throughput and the Maximum Offered Load for fixed
+ length frames at a fixed time interval. The selected frame sizes are
+ those previously defined under `Default Test
+ Parameters <#DefaultParams>`__. The throughput is the maximum offered
+ load with 0% frame loss (measured by the prerequisite test), and the
+ Maximum Offered Load (as defined by
+ `RFC2285 <https://www.rfc-editor.org/rfc/rfc2285.txt>`__) is *"the highest
+ number of frames per second that an external source can transmit to a
+ DUT/SUT for forwarding to a specified output interface or interfaces"*.
+
+ Traffic should be sent to the DUT at a particular rate (TX rate)
+  starting with TX rate equal to the throughput rate. The rate of
+  successfully received frames at the destination should be counted (in
+  FPS). If the RX rate is equal to the TX rate, the TX rate should be
+  increased by a fixed step size and the RX rate measured again until the
+  Max Forwarding Rate is found (as sketched below).
+
+ The trial duration for each iteration should last for the period of time
+ needed for the system to reach steady state for the frame size being
+  tested. Under `RFC2889 <https://www.rfc-editor.org/rfc/rfc2889.txt>`__
+  (Sec. 5.6.3.1) test methodology, the test
+  duration should run for a minimum period of 30 seconds, regardless of
+  whether the system reaches steady state before the minimum duration
+ ends.
+
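+  A sketch of the rate stepping loop described above is given below; the
+  ``measure_rx`` helper is a hypothetical placeholder for a trial on the
+  traffic generator and the step size is only an example:
+
+  .. code-block:: python
+
+     # Illustrative sketch of stepping the TX rate from the throughput rate
+     # towards the Maximum Offered Load. 'measure_rx(tx_fps)' is a
+     # hypothetical placeholder returning the received rate for one trial.
+     def max_forwarding_rate(measure_rx, throughput_fps, max_offered_fps,
+                             step_fps=10000):
+         tx = throughput_fps
+         best_rx = 0
+         while tx <= max_offered_fps:
+             rx = measure_rx(tx)
+             best_rx = max(best_rx, rx)   # highest forwarding rate seen so far
+             if rx < tx:                  # DUT can no longer keep up
+                 break
+             tx += step_fps
+         return best_rx
+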
+  **Expected Result**: According to
+  `RFC2889 <https://www.rfc-editor.org/rfc/rfc2889.txt>`__, the Max Forwarding Rate
+ is the highest forwarding rate of a DUT taken from an iterative set of
+ forwarding rate measurements. The iterative set of forwarding rate
+ measurements are made by setting the intended load transmitted from an
+ external source and measuring the offered load (i.e what the DUT is
+ capable of forwarding). If the Throughput == the Maximum Offered Load,
+ it follows that Max Forwarding Rate is equal to the Maximum Offered
+ Load.
+
+ **Metrics Collected**:
+
+ The following are the metrics collected for this test:
+
+ - The Max Forwarding Rate for the DUT for each packet size.
+ - CPU and memory utilization may also be collected as part of this
+ test, to determine the vSwitch's performance footprint on the system.
+
+Test ID: LTD.Throughput.RFC2889.ForwardPressure
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ **Title**: RFC2889 Forward Pressure Test
+
+ **Prerequisite Test**: LTD.Throughput.RFC2889.MaxForwardingRate
+
+ **Priority**:
+
+ **Description**:
+
+ The aim of this test is to determine if the DUT transmits frames with an
+ inter-frame gap that is less than 12 bytes. This test overloads the DUT
+ and measures the output for forward pressure. Traffic should be
+ transmitted to the DUT with an inter-frame gap of 11 bytes, this will
+ overload the DUT by 1 byte per frame. The forwarding rate of the DUT
+ should be measured.
+
+ **Expected Result**: The forwarding rate should not exceed the maximum
+ forwarding rate of the DUT collected by
+ LTD.Throughput.RFC2889.MaxForwardingRate.
+
+ **Metrics collected**
+
+ The following are the metrics collected for this test:
+
+ - Forwarding rate of the DUT in FPS or Mbps.
+ - CPU and memory utilization may also be collected as part of this
+ test, to determine the vSwitch's performance footprint on the system.
+
+ **Deployment scenario**:
+
+ - Physical → virtual switch → physical.
+
+Test ID: LTD.Throughput.RFC2889.AddressCachingCapacity
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ **Title**: RFC2889 Address Caching Capacity Test
+
+ **Prerequisite Test**: N/A
+
+ **Priority**:
+
+ **Description**:
+
+ Please note this test is only applicable to switches that are capable of
+ MAC learning. The aim of this test is to determine the address caching
+ capacity of the DUT for a constant load (fixed length frames at a fixed
+ interval time). The selected frame sizes are those previously defined
+ under `Default Test Parameters <#DefaultParams>`__.
+
+  In order to run this test the aging time, that is the maximum time the
+  DUT will keep a learned address in its flow table, and a set of initial
+  addresses, whose number should be >= 1 and <= the max number supported
+  by the implementation, must be known. Please note that if the aging time is
+ configurable it must be longer than the time necessary to produce frames
+ from the external source at the specified rate. If the aging time is
+ fixed the frame rate must be brought down to a value that the external
+ source can produce in a time that is less than the aging time.
+
+ Learning Frames should be sent from an external source to the DUT to
+ install a number of flows. The Learning Frames must have a fixed
+ destination address and must vary the source address of the frames. The
+ DUT should install flows in its flow table based on the varying source
+ addresses. Frames should then be transmitted from an external source at
+ a suitable frame rate to see if the DUT has properly learned all of the
+ addresses. If there is no frame loss and no flooding, the number of
+ addresses sent to the DUT should be increased and the test is repeated
+  until the max number of cached addresses supported by the DUT is
+  determined.
+
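+  A rough sketch of the iterative search loop is shown below; the frame
+  generation and the loss/flooding check are hypothetical placeholders and
+  the growth strategy is only an example:
+
+  .. code-block:: python
+
+     # Illustrative sketch of the caching capacity search. The two helpers
+     # are hypothetical placeholders for the traffic generator actions.
+     def caching_capacity(send_learning_frames, no_loss_or_flooding,
+                          start=1, limit=1024 * 1024):
+         n, capacity = start, 0
+         while n <= limit:
+             send_learning_frames(count=n)   # vary the source address per frame
+             if not no_loss_or_flooding():   # DUT ran out of table space
+                 break
+             capacity = n
+             n *= 2                          # grow the address set and retry
+         return capacity
+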
+ **Expected Result**:
+
+ **Metrics collected**:
+
+ The following are the metrics collected for this test:
+
+ - Number of cached addresses supported by the DUT.
+ - CPU and memory utilization may also be collected as part of this
+ test, to determine the vSwitch's performance footprint on the system.
+
+ **Deployment scenario**:
+
+ - Physical → virtual switch → physical.
+
+Test ID: LTD.Throughput.RFC2889.AddressLearningRate
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ **Title**: RFC2889 Address Learning Rate Test
+
+ **Prerequisite Test**: LTD.Memory.RFC2889.AddressCachingCapacity
+
+ **Priority**:
+
+ **Description**:
+
+ Please note this test is only applicable to switches that are capable of
+ MAC learning. The aim of this test is to determine the rate of address
+ learning of the DUT for a constant load (fixed length frames at a fixed
+ interval time). The selected frame sizes are those previously defined
+ under `Default Test Parameters <#DefaultParams>`__, traffic should be
+ sent with each IPv4/IPv6 address incremented by one. The rate at which
+ the DUT learns a new address should be measured. The maximum caching
+ capacity from LTD.Memory.RFC2889.AddressCachingCapacity should be taken
+ into consideration as the maximum number of addresses for which the
+ learning rate can be obtained.
+
+ **Expected Result**: It may be worthwhile to report the behaviour when
+ operating beyond address capacity - some DUTS may be more friendly to
+ new addresses than others.
+
+ **Metrics collected**:
+
+ The following are the metrics collected for this test:
+
+ - The address learning rate of the DUT.
+
+ **Deployment scenario**:
+
+ - Physical → virtual switch → physical.
+
+Test ID: LTD.Throughput.RFC2889.ErrorFramesFiltering
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ **Title**: RFC2889 Error Frames Filtering Test
+
+ **Prerequisite Test**: N/A
+
+ **Priority**:
+
+ **Description**:
+
+ The aim of this test is to determine whether the DUT will propagate any
+ erroneous frames it receives or whether it is capable of filtering out
+  the erroneous frames. Traffic should be sent with erroneous frames
+  included within the flow at random intervals. Illegal frames that must
+  be tested include:
+
+  - Oversize Frames.
+  - Undersize Frames.
+  - CRC Errored Frames.
+  - Dribble Bit Errored Frames.
+  - Alignment Errored Frames.
+
+ The traffic flow exiting the DUT should be recorded and checked to
+  determine if the erroneous frames were passed through the DUT.
+
+ **Expected Result**: Broken frames are not passed!
+
+ **Metrics collected**
+
+ No Metrics are collected in this test, instead it determines:
+
+ - Whether the DUT will propagate erroneous frames.
+ - Or whether the DUT will correctly filter out any erroneous frames
+    from the traffic flow without removing correct frames.
+
+ **Deployment scenario**:
+
+ - Physical → virtual switch → physical.
+
+Test ID: LTD.Throughput.RFC2889.BroadcastFrameForwarding
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ **Title**: RFC2889 Broadcast Frame Forwarding Test
+
+ **Prerequisite Test**: N
+
+ **Priority**:
+
+ **Description**:
+
+ The aim of this test is to determine the maximum forwarding rate of the
+  DUT when forwarding broadcast traffic. For each frame size previously
+  defined under `Default Test Parameters <#DefaultParams>`__, the traffic
+  should be set up as broadcast traffic. The traffic throughput of the DUT
+  should be measured.
+
+ The test should be conducted with at least 4 physical ports on the DUT.
+ The number of ports used MUST be recorded.
+
+ As broadcast involves forwarding a single incoming packet to several
+ destinations, the latency of a single packet is defined as the average
+ of the latencies for each of the broadcast destinations.
+
+ The incoming packet is transmitted on each of the other physical ports,
+ it is not transmitted on the port on which it was received. The test MAY
+ be conducted using different broadcasting ports to uncover any
+ performance differences.
+
+ **Expected Result**:
+
+ **Metrics collected**:
+
+ The following are the metrics collected for this test:
+
+ - The forwarding rate of the DUT when forwarding broadcast traffic.
+ - The minimum, average & maximum packets latencies observed.
+
+ **Deployment scenario**:
+
+  - Physical → virtual switch → 3x physical.
+
+2.3.2 Packet Latency tests
+~~~~~~~~~~~~~~~~~~~~~~~~~~~
+These tests will measure the store and forward latency as well as the packet
+delay variation for various packet types through the virtual switch. The
+following list is not exhaustive but should indicate the type of tests
+that should be required. It is expected that more will be added.
+
+Test ID: LTD.PacketLatency.InitialPacketProcessingLatency
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ **Title**: Initial Packet Processing Latency
+
+ **Prerequisite Test**: N/A
+
+ **Priority**:
+
+ **Description**:
+
+ In some virtual switch architectures, the first packets of a flow will
+ take the system longer to process than subsequent packets in the flow.
+ This test determines the latency for these packets. The test will
+ measure the latency of the packets as they are processed by the
+   flow-setup-path of the DUT. There are two methods for this test: a
+   recommended method and an alternative method that can be used if it is
+   possible to disable the fastpath of the virtual switch.
+
+ Recommended method: This test will send 64,000 packets to the DUT, each
+ belonging to a different flow. Average packet latency will be determined
+ over the 64,000 packets.
+
+ Alternative method: This test will send a single packet to the DUT after
+ a fixed interval of time. The time interval will be equivalent to the
+ amount of time it takes for a flow to time out in the virtual switch
+ plus 10%. Average packet latency will be determined over 1,000,000
+ packets.
+
+   This test is intended only for non-learning switches; for learning
+   switches, use RFC2889.
+
+ For this test, only unidirectional traffic is required.
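+
+   The following is a minimal sketch of how the recommended method's
+   64,000 single-packet flows could be enumerated, by giving every
+   packet a unique destination address so that each one exercises the
+   flow-setup path. The address ranges used are illustrative
+   assumptions.
+
+   .. code-block:: python
+
+      import ipaddress
+
+      def unique_flow_headers(flow_count=64000, base="10.0.0.0"):
+          """Yield (src_ip, dst_ip) pairs, one pair per flow.
+
+          Each destination address is unique, so on a flow-caching
+          vSwitch every packet should miss the cache and be handled by
+          the flow-setup path.
+          """
+          src = "192.168.0.1"
+          start = int(ipaddress.IPv4Address(base))
+          for i in range(flow_count):
+              yield src, str(ipaddress.IPv4Address(start + i))
+
+      # One packet is sent per generated flow; the average latency is
+      # then taken over all 64,000 packets.
+      flows = list(unique_flow_headers())
+      assert len(flows) == 64000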
+
+ **Expected Result**: The average latency for the initial packet of all
+ flows should be greater than the latency of subsequent traffic.
+
+ **Metrics Collected**:
+
+ The following are the metrics collected for this test:
+
+ - Average latency of the initial packets of all flows that are
+ processed by the DUT.
+
+ **Deployment scenario**:
+
+ - Physical → Virtual Switch → Physical.
+
+Test ID: LTD.PacketDelayVariation.RFC3393.Soak
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ **Title**: Packet Delay Variation Soak Test
+
+ **Prerequisite Tests**: LTD.Throughput.RFC2544.PacketLossRatio (0% Packet Loss)
+
+ **Priority**:
+
+ **Description**:
+
+ The aim of this test is to understand the distribution of packet delay
+ variation for different frame sizes over an extended test duration and
+ to determine if there are any outliers. To allow for an extended test
+ duration, the test should ideally run for 24 hours or, if this is not
+   possible, for at least 6 hours. For this test, each frame size must be
+ sent at the highest possible throughput with 0% packet loss, as
+ determined in the prerequisite test.
+
+ **Expected Result**:
+
+ **Metrics Collected**:
+
+ The following are the metrics collected for this test:
+
+ - The packet delay variation value for traffic passing through the DUT.
+   - The `RFC5481 <https://www.rfc-editor.org/rfc/rfc5481.txt>`__
+     PDV form of delay variation on the traffic flow, using the 99th
+     percentile, for each 60s interval during the test (a computation
+     sketch follows this list).
+ - CPU and memory utilization may also be collected as part of this
+ test, to determine the vSwitch's performance footprint on the system.
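+
+   A computation sketch for the RFC5481 PDV metric listed above is
+   given below: within each 60s interval, the PDV of a packet is its
+   one-way delay minus the minimum delay seen in that interval, and the
+   99th percentile of those values is reported. It assumes timestamped
+   delay samples are available from the traffic generator; the
+   nearest-rank percentile method is an assumption.
+
+   .. code-block:: python
+
+      def pdv_99th_per_interval(samples, interval_s=60.0):
+          """Compute RFC5481 PDV (99th percentile) per 60s interval.
+
+          samples is an iterable of (timestamp_s, one_way_delay_s)
+          tuples.  Returns a mapping from interval start time to the
+          99th percentile PDV value of that interval.
+          """
+          buckets = {}
+          for ts, delay in samples:
+              buckets.setdefault(int(ts // interval_s), []).append(delay)
+
+          results = {}
+          for idx, delays in sorted(buckets.items()):
+              base = min(delays)
+              pdv = sorted(d - base for d in delays)
+              rank = int(round(0.99 * (len(pdv) - 1)))
+              results[idx * interval_s] = pdv[rank]
+          return results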
+
+2.3.3 Scalability tests
+~~~~~~~~~~~~~~~~~~~~~~~~
+The general aim of these tests is to understand the impact of large flow
+table size and flow lookups on throughput. The following list is not
+exhaustive but should indicate the type of tests that should be required.
+It is expected that more will be added.
+
+Test ID: LTD.Scalability.RFC2544.0PacketLoss
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ **Title**: RFC 2544 0% loss Scalability throughput test
+
+ **Prerequisite Test**:
+
+ **Priority**:
+
+ **Description**:
+
+ The aim of this test is to measure how throughput changes as the number
+ of flows in the DUT increases. The test will measure the throughput
+   through the fastpath; as such, the flows need to be installed on the DUT
+   before traffic is passed.
+
+ For each frame size previously defined under `Default Test
+   Parameters <#DefaultParams>`__ and for each of the following numbers of
+   flows:
+
+ - 1,000
+ - 2,000
+ - 4,000
+ - 8,000
+ - 16,000
+ - 32,000
+ - 64,000
+ - Max supported number of flows.
+
+ The maximum 0% packet loss throughput should be determined in a manner
+ identical to LTD.Throughput.RFC2544.PacketLossRatio.
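+
+   A minimal sketch of how the flow entries might be pre-installed
+   before the throughput search starts is shown below. It assumes the
+   DUT is an Open vSwitch bridge named ``br0`` with the egress port
+   numbered 2; the bridge name, port number, address range and match
+   fields are all illustrative assumptions.
+
+   .. code-block:: python
+
+      import ipaddress
+      import subprocess
+
+      def preinstall_flows(flow_count, bridge="br0", out_port=2):
+          """Install flow_count exact-match IP flows before traffic starts.
+
+          Destination addresses are enumerated from 10.1.0.0 so that
+          each installed entry corresponds to exactly one flow generated
+          by the traffic generator.
+          """
+          base = int(ipaddress.IPv4Address("10.1.0.0"))
+          for i in range(flow_count):
+              dst = str(ipaddress.IPv4Address(base + i))
+              subprocess.check_call(
+                  ["ovs-ofctl", "add-flow", bridge,
+                   "ip,nw_dst={},actions=output:{}".format(dst, out_port)])
+
+      # Example: the smallest step of the scalability sweep.
+      # preinstall_flows(1000)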
+
+ **Expected Result**:
+
+ **Metrics Collected**:
+
+ The following are the metrics collected for this test:
+
+ - The maximum number of frames per second that can be forwarded at the
+ specified number of flows and the specified frame size, with zero
+ packet loss.
+
+Test ID: LTD.MemoryBandwidth.RFC2544.0PacketLoss.Scalability
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ **Title**: RFC 2544 0% loss Memory Bandwidth Scalability test
+
+ **Prerequisite Tests**:
+
+ **Priority**:
+
+ **Description**:
+
+   The aim of this test is to understand how the DUT's performance is
+   affected by cache sharing and memory bandwidth contention between
+   processes.
+
+   During the test all cores not used by the vSwitch should be running a
+   memory intensive application. This application should read and write
+   random data to random addresses in unused physical memory. The random
+   nature of the data and addresses is intended to consume cache, exercise
+   main memory access (as opposed to cache) and exercise all memory buses
+   equally. Furthermore:
+
+   - The ratio of reads to writes should be recorded. A ratio of 1:1
+     SHOULD be used.
+   - The reads and writes MUST be of cache-line size and be cache-line
+     aligned.
+   - In NUMA architectures memory access SHOULD be local to the core's
+     node. Whether only local memory or a mix of local and remote memory
+     is used MUST be recorded.
+   - The memory bandwidth (reads plus writes) used per-core MUST be
+     recorded; the test MUST be run with a per-core memory bandwidth equal
+     to half the maximum system memory bandwidth divided by the number of
+     cores (a worked example follows this description). The test MAY be
+     run with other values for the per-core memory bandwidth.
+   - The test MAY also be run with the memory intensive application
+     running on all cores.
+
+ Under these conditions the DUT's 0% packet loss throughput is determined
+ as per LTD.Throughput.RFC2544.PacketLossRatio.
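+
+   The per-core bandwidth requirement above can be illustrated with a
+   small worked example; the 68 GB/s maximum system memory bandwidth and
+   the 14 loaded cores are purely illustrative assumptions.
+
+   .. code-block:: python
+
+      # Illustrative platform figures (assumptions, not requirements).
+      max_system_bw_gbps = 68.0   # maximum system memory bandwidth, GB/s
+      core_count = 14             # cores running the memory intensive app
+
+      # Required target: half the maximum system memory bandwidth,
+      # divided by the number of cores (reads plus writes, per core).
+      per_core_bw_gbps = 0.5 * max_system_bw_gbps / core_count
+      print("target per-core bandwidth: {:.2f} GB/s".format(per_core_bw_gbps))
+      # -> target per-core bandwidth: 2.43 GB/s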
+
+ **Expected Result**:
+
+ **Metrics Collected**:
+
+ The following are the metrics collected for this test:
+
+   - The DUT's 0% packet loss throughput in the presence of cache sharing
+     and memory bandwidth contention between processes.
+
+2.3.4 Coupling between control path and datapath tests
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+The following tests aim to determine how tightly coupled the datapath
+and the control path are within a virtual switch. The following list
+is not exhaustive but should indicate the type of tests that should be
+required. It is expected that more will be added.
+
+Test ID: LTD.CPDPCouplingFlowAddition
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ **Title**: Control Path and Datapath Coupling
+
+ **Prerequisite Test**:
+
+ **Priority**:
+
+ **Description**:
+
+ The aim of this test is to understand how exercising the DUT's control
+ path affects datapath performance.
+
+   Initially a certain number of flow table entries are installed in the
+   vSwitch. Then, over the duration of an RFC2544 throughput test,
+   flow-entries are added and removed at the rates specified below. No
+   traffic is 'hitting' these flow-entries; they are simply added and
+   removed.
+
+   The test MUST be repeated with the following initial numbers of
+   flow-entries installed:
+
+   - < 10
+   - 1,000
+   - 100,000
+   - 10,000,000 (or the maximum supported number of flow-entries)
+
+   The test MUST be repeated with the following rates of flow-entry
+   addition and deletion per second:
+
+   - 0
+   - 1 (i.e. 1 addition plus 1 deletion)
+   - 100
+   - 10,000
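+
+   A minimal driver sketch for the flow churn described above is given
+   below; it assumes an Open vSwitch DUT managed through ``ovs-ofctl``,
+   a bridge named ``br0``, and churned entries that match an address
+   range carrying no test traffic. The bridge name, match fields and
+   pacing approach are illustrative assumptions; at the higher rates a
+   real harness would need to batch requests rather than spawn one
+   process per operation.
+
+   .. code-block:: python
+
+      import subprocess
+      import time
+
+      def churn_flows(rate_per_s, duration_s, bridge="br0"):
+          """Add and delete one un-hit flow entry rate_per_s times a second.
+
+          Each cycle installs an exact-match drop rule for an address
+          that carries no traffic and removes it again, so only the
+          control path is exercised.
+          """
+          if rate_per_s == 0:
+              time.sleep(duration_s)
+              return
+          period = 1.0 / rate_per_s
+          deadline = time.time() + duration_s
+          i = 0
+          while time.time() < deadline:
+              dst = "172.16.{}.{}".format((i // 250) % 250, i % 250 + 1)
+              match = "ip,nw_dst={}".format(dst)
+              subprocess.check_call(["ovs-ofctl", "add-flow", bridge,
+                                     match + ",actions=drop"])
+              subprocess.check_call(["ovs-ofctl", "del-flows", bridge, match])
+              i += 1
+              time.sleep(period)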
+
+ **Expected Result**:
+
+ **Metrics Collected**:
+
+ The following are the metrics collected for this test:
+
+ - The maximum forwarding rate in Frames Per Second (FPS) and Mbps of
+ the DUT.
+ - The average latency of the traffic flow when passing through the DUT
+ (if testing for latency, note that this average is different from the
+ test specified in Section 26.3 of
+ `RFC2544 <https://www.rfc-editor.org/rfc/rfc2544.txt>`__).
+ - CPU and memory utilization may also be collected as part of this
+ test, to determine the vSwitch's performance footprint on the system.
+
+ **Deployment scenario**:
+
+ - Physical → virtual switch → physical.
+
+2.3.5 CPU and memory consumption
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+The following tests will profile a virtual switch's CPU and memory
+utilization under various loads and circumstances. The following
+list is not exhaustive but should indicate the type of tests that
+should be required. It is expected that more will be added.
+
+Test ID: LTD.CPU.RFC2544.0PacketLoss
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ **Title**: RFC 2544 0% Loss Compute Test
+
+ **Prerequisite Test**:
+
+ **Priority**:
+
+ **Description**:
+
+ The aim of this test is to understand the overall performance of the
+ system when a CPU intensive application is run on the same DUT as the
+ Virtual Switch. For each frame size, an
+ LTD.Throughput.RFC2544.PacketLossRatio (0% Packet Loss) test should be
+ performed. Throughout the entire test a CPU intensive application should
+ be run on all cores on the system not in use by the Virtual Switch. For
+   NUMA systems, only cores on the same NUMA node are loaded.
+
+ It is recommended that stress-ng be used for loading the non-Virtual
+   Switch cores, but any stress tool MAY be used.
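+
+   A minimal sketch of pinning such a load to the non-Virtual Switch
+   cores is shown below; it assumes the vSwitch runs on cores 0-3 and
+   that cores 4-15 on the same NUMA node are to be loaded for one hour.
+   The core numbers, worker count and duration are illustrative
+   assumptions.
+
+   .. code-block:: python
+
+      import subprocess
+
+      # Assumed layout: vSwitch on cores 0-3; load cores 4-15 on the
+      # same NUMA node for the duration of the RFC 2544 run.
+      loaded_cores = "4-15"
+      workers = 12
+
+      subprocess.check_call(
+          ["taskset", "-c", loaded_cores,
+           "stress-ng", "--cpu", str(workers), "--timeout", "3600s"])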
+
+ **Expected Result**:
+
+ **Metrics Collected**:
+
+ The following are the metrics collected for this test:
+
+ - CPU utilization of the cores running the Virtual Switch.
+   - The number and identity of the cores allocated to the Virtual Switch.
+   - The configuration of the stress tool (for example, the command line
+     parameters used to start it).
+
+2.3.6 Summary List of Tests
+~~~~~~~~~~~~~~~~~~~~~~~~~~~
+1. Throughput tests
+
+ - Test ID: LTD.Throughput.RFC2544.PacketLossRatio
+ - Test ID: LTD.Throughput.RFC2544.PacketLossRatioFrameModification
+ - Test ID: LTD.Throughput.RFC2544.Profile
+ - Test ID: LTD.Throughput.RFC2544.SystemRecoveryTime
+ - Test ID: LTD.Throughput.RFC2544.BackToBackFrames
+ - Test ID: LTD.Throughput.RFC2889.Soak
+ - Test ID: LTD.Throughput.RFC2889.SoakFrameModification
+ - Test ID: LTD.Throughput.RFC6201.ResetTime
+ - Test ID: LTD.Throughput.RFC2889.MaxForwardingRate
+ - Test ID: LTD.Throughput.RFC2889.ForwardPressure
+ - Test ID: LTD.Throughput.RFC2889.AddressCachingCapacity
+ - Test ID: LTD.Throughput.RFC2889.AddressLearningRate
+ - Test ID: LTD.Throughput.RFC2889.ErrorFramesFiltering
+ - Test ID: LTD.Throughput.RFC2889.BroadcastFrameForwarding
+
+2. Packet Latency tests
+
+ - Test ID: LTD.PacketLatency.InitialPacketProcessingLatency
+ - Test ID: LTD.PacketDelayVariation.RFC3393.Soak
+
+3. Scalability tests
+
+ - Test ID: LTD.Scalability.RFC2544.0PacketLoss
+ - Test ID: LTD.MemoryBandwidth.RFC2544.0PacketLoss.Scalability
+
+4. Coupling between control path and datapath Tests
+
+ - Test ID: LTD.CPDPCouplingFlowAddition
+
+5. CPU and memory consumption
+
+ - Test ID: LTD.CPU.RFC2544.0PacketLoss