Diffstat (limited to 'docs/developer')
 docs/developer/building.rst            |  77
 docs/developer/design/design.rst       |  79
 docs/developer/design/index.rst        |  17
 docs/developer/design/ndrpdr.rst       |  83
 docs/developer/design/overview.rst     |  24
 docs/developer/design/traffic_desc.rst |  85
 docs/developer/design/versioning.rst   |  16
 docs/developer/index.rst               |  16
 docs/developer/nfvbenchvm.rst          | 365
 docs/developer/testing-nfvbench.rst    |  91
 10 files changed, 853 insertions(+), 0 deletions(-)
diff --git a/docs/developer/building.rst b/docs/developer/building.rst
new file mode 100644
index 0000000..00b8654
--- /dev/null
+++ b/docs/developer/building.rst
@@ -0,0 +1,77 @@
+
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) Cisco Systems, Inc
+
+Building Containers and VM Images
+=================================
+
+NFVbench is delivered as a Docker container, built using the Dockerfile under the docker directory.
+This container includes the following parts:
+
+- TRex traffic generator
+- NFVbench orchestration
+- NFVbench test VM (qcow2)
+
+.. _nfvbench-artefact-versioning:
+
+Versioning
+----------
+These 3 parts are versioned independently, and the Dockerfile determines the combination of versions
+that is packaged in the container version associated with that Dockerfile.
+
+The NFVbench version is controlled by the git tag, which conforms to the semver format (e.g. "3.3.0").
+This tag controls the version of the Dockerfile used for building the container.
+
+The TRex version is controlled by the TREX_VER variable in the Dockerfile (e.g. ENV TREX_VER "v2.56").
+TRex is installed in the container from https://github.com/cisco-system-traffic-generator/trex-core/releases.
+
+The Test VM version is controlled by the VM_IMAGE_VER variable in the Dockerfile (e.g. ENV VM_IMAGE_VER "0.8").
+The VM image is downloaded from Google cloud storage (http://artifacts.opnfv.org).
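+
+For illustration, these two version pins in docker/Dockerfile amount to the following lines (the values shown are the examples quoted above):
+
+.. code-block:: bash
+
+    ENV TREX_VER "v2.56"
+    ENV VM_IMAGE_VER "0.8"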
+
+Updating the VM image
+---------------------
+
+When the VM image is changed, its version must be increased in order to distinguish it from previous image versions.
+The version strings to change are located in 2 files:
+
+- docker/Dockerfile
+- nfvbench/nfvbenchvm/dib/build-image.sh
+
+Building and uploading the VM image
+-----------------------------------
+The VM image is built by the gerrit verify job when the image is not present in Google cloud storage.
+At that stage, it is not yet uploaded to Google cloud storage.
+
+The build and upload of the new VM image are done after the review is merged.
+
+For details on how this is done, refer to ./jjb/nfvbench/nfvbench.yaml in the opnfv releng repository.
+
+Building a new NFVbench container image
+---------------------------------------
+A new container image can be built and published to Dockerhub by CI/CD by applying a new semver tag to the
+nfvbench repository.
+
+
+Workflow summary
+----------------
+
+When the NFVbench code has changed:
+
+- commit with gerrit
+- apply a new semver tag to trigger the container image build/publication
+
+When the VM code has changed:
+
+- update VM version in the 2 locations
+- commit VM changes with gerrit to trigger VM build and publication to google storage
+- IMPORTANT! wait for the VM image to be pushed to google storage before going to the next step
+ (otherwise the container build will fail as it will not find the VM image)
+- apply a new semver tag to trigger the container image build/publication
+
+To increase the TRex version:
+
+- change the TRex version in the Dockerfile
+- commit with gerrit
+- apply a new semver tag to trigger the container image build/publication
diff --git a/docs/developer/design/design.rst b/docs/developer/design/design.rst
new file mode 100644
index 0000000..43011ad
--- /dev/null
+++ b/docs/developer/design/design.rst
@@ -0,0 +1,79 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) Cisco Systems, Inc
+
+
+*******************
+NFVbench components
+*******************
+
+NFVbench can be decomposed in the following components:
+
+- Configuration
+- Orchestration:
+
+ - Staging
+ - Traffic generation
+ - Results analysis
+
+Configuration
+=============
+This component is in charge of getting the configuration options from the user and consolidating them with
+the default configuration into a running configuration.
+
+default configuration + user configuration options = running configuration
+
+User configuration can come from:
+
+- CLI configuration shortcut arguments (e.g. --frame-size)
+- CLI configuration file (--config [file])
+- CLI configuration string (--config [string])
+- REST request body
+- custom platform plugin
+
+The precedence order for configuration is (from highest to lowest precedence):
+
+- CLI configuration or REST configuration
+- custom platform plugin
+- default configuration
+
+The custom platform plugin is an optional Python class that can be used to override default configuration options
+with platform-specific default options, which can be either hardcoded or calculated at runtime from platform-specific sources
+(such as platform deployment configuration files).
+A custom platform plugin class is a child of the parent class nfvbench.config_plugin.ConfigPlugin.
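+
+As a minimal sketch, the precedence rules amount to successive overrides (plain dicts are used here for illustration; the option names are examples, not the full nfvbench configuration):
+
+.. code-block:: python
+
+    # Illustration of configuration precedence (lowest applied first)
+    default_config = {'frame_sizes': ['64'], 'rate': 'ndr_pdr'}
+    plugin_options = {'rate': 'pdr'}              # custom platform plugin
+    user_options = {'frame_sizes': ['IMIX']}      # CLI or REST request
+
+    running_config = dict(default_config)
+    running_config.update(plugin_options)
+    running_config.update(user_options)           # highest precedence wins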
+
+Orchestration
+=============
+Once the configuration is settled, benchmark orchestration is managed by the ChainRunner class (nfvbench.chain_runner.ChainRunner).
+The chain runner will take care of orchestrating the staging, traffic generation and results analysis.
+
+
+Staging
+-------
+The staging component is in charge of staging the OpenStack resources that are used for the requested packet path.
+For example, for a PVP packet path, this module will create 2 Neutron networks and one VM instance connected to these 2 networks.
+Multi-chaining and VM placement are also handled by this module.
+
+Main class: nfvbench.chaining.ChainManager
+
+Traffic Generation
+------------------
+The traffic generation component is in charge of controlling the TRex traffic generator using its Python API.
+It includes tasks such as:
+
+- end-to-end traffic check to make sure the packet path is clear in both directions before starting a benchmark
+- programming the TRex traffic flows based on requested parameters
+- fixed rate control
+- NDR/PDR binary search
+
+Main class: nfvbench.traffic_client.TrafficClient
+
+
+Traffic Generator Results Analysis
+----------------------------------
+At the end of a traffic generation session, this component collects the results from TRex and packages them in a format that
+is suitable for the various output formats (JSON, REST, file, fluentd).
+In the case of multi-chaining, it handles aggregation of results across chains.
+
+Main class: nfvbench.stats_manager.StatsManager
diff --git a/docs/developer/design/index.rst b/docs/developer/design/index.rst
new file mode 100644
index 0000000..4c52f98
--- /dev/null
+++ b/docs/developer/design/index.rst
@@ -0,0 +1,17 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) Cisco Systems, Inc
+
+=====================
+NFVbench Design Notes
+=====================
+
+.. toctree::
+ :maxdepth: 2
+
+ overview
+ design
+ versioning
+ traffic_desc
+ ndrpdr
diff --git a/docs/developer/design/ndrpdr.rst b/docs/developer/design/ndrpdr.rst
new file mode 100644
index 0000000..dd769c0
--- /dev/null
+++ b/docs/developer/design/ndrpdr.rst
@@ -0,0 +1,83 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) Cisco Systems, Inc
+
+NDR/PDR Binary Search
+=====================
+
+The NDR/PDR binary search algorithm used by NFVbench is based on the algorithm used by the
+FD.io CSIT project, with some additional optimizations.
+
+Algorithm Outline
+-----------------
+
+The ServiceChain class (nfvbench/service_chain.py) is responsible for calculating the NDR/PDR
+for all frame sizes requested in the configuration.
+The calculation for one frame size is delegated to the TrafficClient class (nfvbench/traffic_client.py).
+
+Call chain for calculating the NDR-PDR for a list of frame sizes:
+
+- ServiceChain.run()
+ - ServiceChain._get_chain_results()
+ - for every frame size:
+ - ServiceChain.__get_result_per_frame_size()
+ - TrafficClient.get_ndr_pdr()
+ - TrafficClient.__range_search() recursive binary search
+
+The search range is delimited by a left and right rate (expressed as a % of line rate per direction).
+The search always starts at line rate per port, e.g. in the case of 2x10Gbps, the first iteration
+will send 10Gbps of traffic on each port.
+
+The load_epsilon configuration parameter defines the accuracy of the result as a % of line rate.
+The default value of 0.1 means, for example, that the measured NDR and PDR are within 0.1% of line rate of the
+actual NDR/PDR (e.g. 0.1% of 10Gbps is 10Mbps). It also determines how small the search range must be for the binary search to stop.
+Smaller values of load_epsilon will result in more iterations and will take more time, but may not
+always be beneficial if the absolute value falls below the precision level of the measurement.
+For example, a value of 0.01% would translate to an absolute value of 1Mbps (for a 10Gbps port) or
+around 2kpps (at 64 byte frame size), which might be too fine grained.
+
+The recursion narrows down the range by half and stops when:
+
+- the range is smaller than the configured load_epsilon value
+- or when the search hits 100% or 0% of line rate
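+
+The recursion can be sketched as follows (a simplified illustration; the actual TrafficClient.__range_search() also handles multiple drop rate targets and the actual vs. requested Tx rate):
+
+.. code-block:: python
+
+    # Simplified sketch of the NDR/PDR range search (illustration only).
+    # Rates are in % of line rate; measure_drop_rate(rate) is assumed to
+    # send traffic at the given rate and return the measured drop rate in %.
+    def range_search(left, right, target_drop_rate, load_epsilon,
+                     measure_drop_rate):
+        if right - left <= load_epsilon:
+            return left  # range is narrow enough: left is a safe rate
+        middle = (left + right) / 2
+        if measure_drop_rate(middle) <= target_drop_rate:
+            # drops within target at middle: the answer is in the upper half
+            return range_search(middle, right, target_drop_rate, load_epsilon,
+                                measure_drop_rate)
+        # too many drops at middle: the answer is in the lower half
+        return range_search(left, middle, target_drop_rate, load_epsilon,
+                            measure_drop_rate)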
+
+Optimization
+------------
+
+Binary search algorithms assume that the drop rate curve is monotonically increasing with the Tx rate.
+To save time, the algorithm used by NFVbench is capable of calculating the optimal Tx rate for an
+arbitrary list of target maximum drop rates in one pass, instead of the usual one pass per target maximum drop rate.
+This saves time proportionally to the number of target drop rates.
+For example, a typical NDR/PDR search will have 2 target maximum drop rates:
+
+- NDR = 0.001%
+- PDR = 0.1%
+
+The binary search will then start with a sorted list of 2 target drop rates: [0.1, 0.001].
+The first part of the binary search will then focus on finding the optimal rate for the first target
+drop rate (0.1%). When found, the current target drop rate is removed from the list and
+iteration continues with the next target drop rate in the list but this time
+starting from the upper/lower range of the previous target drop rate, which saves significant time.
+The binary search continues until the target maximum drop rate list is empty.
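+
+The single-pass strategy can be sketched as follows (illustration only, reusing the range_search() sketch above):
+
+.. code-block:: python
+
+    # Sketch of the single-pass search over several target drop rates,
+    # e.g. targets = [0.001, 0.1] for an NDR/PDR run (values in %).
+    def multi_target_search(targets, load_epsilon, measure_drop_rate):
+        results = {}
+        left, right = 0.0, 100.0
+        for target in sorted(targets, reverse=True):   # PDR first, then NDR
+            rate = range_search(left, right, target, load_epsilon,
+                                measure_drop_rate)
+            results[target] = rate
+            # a smaller drop rate target cannot need a higher Tx rate, so the
+            # next iteration can start from the range found for this target
+            right = rate
+        return results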
+
+Results Granularity
+-------------------
+The binary search results contain per direction stats (forward and reverse).
+In the case of multi-chaining, results contain per chain stats.
+The current code only reports aggregated stats (forward + reverse for all chains) but could be enhanced
+to report per chain stats.
+
+
+CPU Limitations
+---------------
+One particularity of using a software traffic generator is that the requested Tx rate may not always be met due to
+resource limitations (e.g. CPU is not fast enough to generate a very high load). The algorithm should take this into
+consideration:
+
+- always monitor the actual Tx rate achieved as reported back by the traffic generator
+- actual Tx rate is always <= requested Tx rate
+- the measured drop rate should always be relative to the actual Tx rate
+- if the actual Tx rate is < requested Tx rate and the measured drop rate is already within the threshold
+ (< NDR/PDR threshold), then the binary search must stop with a proper warning, because the actual NDR/PDR
+ is likely higher than the reported values
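+
+The last condition can be expressed as a simple predicate (an illustrative sketch, not the actual code):
+
+.. code-block:: python
+
+    # Sketch of the CPU-bound early-stop condition (illustration only).
+    # Tx rates are in % of line rate, drop rates in %.
+    def must_stop_search(requested_tx, actual_tx, drop_rate, threshold):
+        # the generator is CPU-bound (cannot reach the requested rate) while
+        # drops are already within the threshold: the real NDR/PDR is likely
+        # higher than what can be measured, so stop and warn
+        return actual_tx < requested_tx and drop_rate <= threshold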
diff --git a/docs/developer/design/overview.rst b/docs/developer/design/overview.rst
new file mode 100644
index 0000000..9876d62
--- /dev/null
+++ b/docs/developer/design/overview.rst
@@ -0,0 +1,24 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) Cisco Systems, Inc
+
+Overview
+--------
+
+NFVbench is a Python application that is designed to run in a compact and portable format inside a container and on production pods.
+As such it only uses open source software with minimal hardware requirements (just a DPDK-compatible NIC).
+Traffic generation is handled by TRex on 2 physical ports (2x10G or higher) forming traffic loops up to VNF level and following
+a path that is common to all NFV applications: external source to top of rack switch(es) to compute node(s) to vswitch (if applicable)
+to VNF(s) and back.
+
+Benchmarks are configured through a YAML configuration file and command line arguments.
+
+Results are available in different formats:
+
+- text output with tabular results
+- JSON result in a file or in the REST reply (most detailed)
+
+Logging is available in a log file.
+
+Benchmark results and logs can optionally be sent to one or more remote fluentd aggregators in JSON format.
diff --git a/docs/developer/design/traffic_desc.rst b/docs/developer/design/traffic_desc.rst
new file mode 100644
index 0000000..d6bbb6b
--- /dev/null
+++ b/docs/developer/design/traffic_desc.rst
@@ -0,0 +1,85 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) Cisco Systems, Inc
+
+Traffic Description
+===================
+
+The general packet path model followed by NFVbench requires injecting traffic into an arbitrary
+number of service chains, where each service chain is identified by 2 edge networks (left and right).
+In the current multi-chaining model:
+
+- all service chains can either share the same left and right edge networks or have their own dedicated edge networks
+- each traffic generator port is dedicated to sending traffic to one side of the edge networks
+
+If VLAN encapsulation is used, all traffic sent to a port will either have the same VLAN id (shared networks) or distinct VLAN ids (dedicated edge networks).
+
+Basic Packet Description
+------------------------
+
+The code to create the UDP packet is located in TRex.create_pkt() (nfvbench/traffic_gen/trex.py).
+
+NFVbench always generates UDP packets (even when doing L2 forwarding).
+The final size of the frame containing each UDP packet will be based on the requested L2 frame size.
+When taking into account the minimum payload size requirements from the traffic generator for
+the latency streams, the minimum L2 frame size is 64 bytes.
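+
+For illustration, such a frame could be built with scapy roughly as follows (a minimal sketch, not the actual TRex.create_pkt() code; addresses are placeholders):
+
+.. code-block:: python
+
+    # Minimal sketch of UDP frame construction (illustration only)
+    from scapy.all import Ether, IP, UDP
+
+    def build_udp_frame(l2_frame_size, src_mac, dst_mac, src_ip, dst_ip):
+        # the NIC appends the 4-byte Ethernet FCS, so the packet built in
+        # software is 4 bytes shorter than the requested L2 frame size
+        pkt = Ether(src=src_mac, dst=dst_mac) / \
+              IP(src=src_ip, dst=dst_ip) / \
+              UDP(sport=53, dport=53)
+        payload_len = l2_frame_size - 4 - len(pkt)
+        return pkt / ('x' * max(0, payload_len))
+
+    frame = build_udp_frame(64, 'fa:16:3e:00:00:01', 'fa:16:3e:00:00:02',
+                            '10.0.0.1', '20.0.0.1')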
+
+Flows Specification
+-------------------
+
+MAC Addresses
+.............
+The source MAC address is always the local port MAC address (for each port).
+The destination MAC address is based on the configuration and can be:
+
+- the traffic generator peer port MAC address in the case of L2 loopback at the switch level
+ or when using a loopback cable
+- the dest MAC as specified by the configuration file (EXT chain with ARP disabled)
+- the dest MAC as discovered by ARP (EXT chain)
+- the router MAC as discovered from Neutron API (PVPL3 chain)
+- the VM MAC as discovered from Neutron API (PVP, PVVP chains)
+
+NFVbench does not currently apply ranging to the MAC addresses.
+
+IP addresses
+............
+The source IP address is fixed per chain.
+The destination IP address is variable within a distinct range per chain.
+
+UDP ports
+.........
+The source and destination ports are fixed for all packets and can be set in the configuration
+file (default is 53).
+
+Payload User Data
+.................
+The length of the user data is based on the requested L2 frame size and takes into account the
+size of the L2 header - including the VLAN tag if applicable.
+
+
+IMIX Support
+------------
+In the case of IMIX, each direction is made of 4 streams:
+
+- 1 latency stream
+- 1 stream for each IMIX frame size
+
+The IMIX ratio is encoded into the number of consecutive packets sent by each stream in turn.
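+
+For illustration, assuming the classic 7:4:1 IMIX over 64/594/1518 byte frames (the actual sizes and ratio come from the configuration):
+
+.. code-block:: python
+
+    # Sketch of how an IMIX ratio maps to consecutive packets per stream
+    imix = [(64, 7), (594, 4), (1518, 1)]   # (frame size, packets in a row)
+    total = sum(count for _, count in imix)
+    for frame_size, count in imix:
+        print('%4d byte stream: %d consecutive packets (%.0f%% of packets)'
+              % (frame_size, count, 100.0 * count / total))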
+
+Service Chains and Streams
+--------------------------
+A stream identifies one set of packets sharing the same characteristics, such as rate and destination address.
+NFVbench will create 2 streams per service chain per direction:
+
+- 1 latency stream set to 1000pps
+- 1 main traffic stream set to the requested Tx rate less the latency stream rate (1000pps)
+
+For example, a benchmark with 1 chain (fixed rate) will result in a total of 4 streams.
+A benchmark with 20 chains will result in a total of 80 streams (fixed rate; there are more with IMIX).
+
+The overall flows are split equally between the number of chains by using the appropriate destination
+MAC address.
+
+For example, in the case of 10 chains, 1M flows and fixed rate, there will be a total of 40 streams.
+Each of the 20 non-latency streams will generate packets corresponding to 50,000 flows (unique src/dest address tuples).
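+
+The arithmetic of this example can be summarized as follows:
+
+.. code-block:: python
+
+    # Stream and flow arithmetic for the example above (fixed rate, no IMIX)
+    chains = 10
+    flows = 1000000
+
+    streams = chains * 2 * 2                   # 2 directions x 2 streams = 40
+    main_streams = chains * 2                  # non-latency streams = 20
+    flows_per_stream = flows // main_streams   # 50,000 unique address tuples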
diff --git a/docs/developer/design/versioning.rst b/docs/developer/design/versioning.rst
new file mode 100644
index 0000000..40e70f2
--- /dev/null
+++ b/docs/developer/design/versioning.rst
@@ -0,0 +1,16 @@
+
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) Cisco Systems, Inc
+
+Versioning
+==========
+
+NFVbench uses semver-compatible git tags such as "1.0.0". These tags are also called project tags and are applied exclusively to important commits on the master branch.
+Rules for the version numbers follow the semver 2.0 specification (https://semver.org).
+These git tags are applied independently of the OPNFV release tags, which are applied only on the stable release branches (e.g. "opnfv-5.0.0").
+
+In general it is recommended to always have a project git version tag associated with any OPNFV release tag content obtained from a sync from master.
+
+NFVbench Docker containers will be versioned based on the NFVbench project tags.
diff --git a/docs/developer/index.rst b/docs/developer/index.rst
new file mode 100644
index 0000000..d9bf844
--- /dev/null
+++ b/docs/developer/index.rst
@@ -0,0 +1,16 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) Cisco Systems, Inc
+
+************************
+NFVbench Developer Guide
+************************
+
+.. toctree::
+ :maxdepth: 3
+
+ building
+ nfvbenchvm
+ testing-nfvbench
+ design/index
diff --git a/docs/developer/nfvbenchvm.rst b/docs/developer/nfvbenchvm.rst
new file mode 100644
index 0000000..5d6166f
--- /dev/null
+++ b/docs/developer/nfvbenchvm.rst
@@ -0,0 +1,365 @@
+.. Copyright 2016 - 2023, Cisco Systems, Inc. and the NFVbench project contributors
+.. SPDX-License-Identifier: CC-BY-4.0
+
+NFVBENCH VM IMAGES FOR OPENSTACK
+++++++++++++++++++++++++++++++++
+
+This repo will build two CentOS 7 images with:
+ - testpmd and VPP installed for loop VM use case
+ - NFVbench and TRex installed for generator VM use case
+
+These VMs will come with a pre-canned user/password: nfvbench/nfvbench
+
+BUILD INSTRUCTIONS
+==================
+
+Pre-requisites
+--------------
+- must run on Linux
+- the following packages must be installed prior to using this script:
+ - python3 (+ python3-venv on Ubuntu)
+ - python3-pip
+ - git
+ - qemu-img (CentOS) or qemu-utils (Ubuntu)
+ - kpartx
+
+.. note:: The image build process is based on `diskimage-builder
+ <https://docs.openstack.org/diskimage-builder/latest/index.html>`_
+ that will be installed in a Python virtual environment by the nfvbenchvm
+ build script build-image.sh.
+
+.. note:: build-image.sh uses the `gsutil <https://pypi.org/project/gsutil/>`_
+ tool to interact with Google cloud storage (to check if the images
+ exist and to upload the images). This is normally only needed in the
+ context of OPNFV build infrastructure, and build-image.sh can be used
+ without that tool in development environments.
+
+Build the image
+---------------
+- cd dib
+- update the version number for the image (if needed) by modifying __version__ in build-image.sh
+- setup your http_proxy if needed
+- run ``build-image.sh`` to build the images. A few examples:
+
+ - to build all the images and publish them to Google cloud storage:
+ - ``bash build-image.sh``
+ - to build and publish only the loop VM:
+ - ``bash build-image.sh -l``
+ - to build and publish only the generator VM:
+ - ``bash build-image.sh -g``
+ - to build the generator VM without publishing it:
+ - ``bash build-image.sh -gv``
+
+.. note:: Run ``bash build-image.sh -h`` to see all available options.
+
+.. note:: By default, the generator VM image embeds the latest nfvbench code
+ found at the time of the build on the master branch of NFVbench Git
+ repository on OPNFV Gerrit instance (latest commit).
+
+ During development phases, it is also possible to build the image with
+ all the committed changes found in the current working copy of
+ nfvbench (local code). To do that, run the image build with the ``-s``
+ option, for instance: ``bash build-image.sh -gvs``.
+
+ In that case, the version of the generator VM image will be extended
+ with nfvbench development version number to be able to distinguish the
+ development images from the latest published image.
+
+LOOP VM IMAGE INSTANCE AND CONFIG
+=================================
+
+Interface Requirements
+----------------------
+The instance must be launched using OpenStack with 2 network interfaces.
+For best performance, it should use a flavor with:
+
+- 2 vCPU
+- 4 GB RAM
+- cpu pinning set to exclusive
+
+Auto-configuration
+------------------
+The nfvbench VM will automatically find the two virtual interfaces to use, and will use the forwarder specified in the config file.
+
+In case testpmd is used, testpmd will be launched in MAC forwarding mode, with the destination MACs rewritten according to the config file.
+
+In case VPP is used, VPP will set up an L3 router and forward traffic from one port to the other.
+
+nfvbenchvm Config
+-----------------
+nfvbenchvm config file is located at ``/etc/nfvbenchvm.conf``.
+
+.. code-block:: bash
+
+ FORWARDER=testpmd
+ INTF_MAC1=FA:16:3E:A2:30:41
+ INTF_MAC2=FA:16:3E:10:DA:10
+ TG_MAC1=00:10:94:00:0A:00
+ TG_MAC2=00:11:94:00:0A:00
+ VNF_GATEWAY1_CIDR=1.1.0.2/8
+ VNF_GATEWAY2_CIDR=2.2.0.2/8
+ TG_NET1=10.0.0.0/8
+ TG_NET2=20.0.0.0/8
+ TG_GATEWAY1_IP=1.1.0.100
+ TG_GATEWAY2_IP=2.2.0.100
+
+
+Launching nfvbenchvm VM
+-----------------------
+
+Normally this image will be used together with NFVbench, and the required configuration will be automatically generated and pushed to the VM by NFVbench. If launched manually, no forwarder will be run. Users will have full control to run either testpmd or VPP via the VNC console.
+
+To check if testpmd is running, you can run this command in VNC console:
+
+.. code-block:: bash
+
+ sudo screen -r testpmd
+
+To check if VPP is running, you can run this command in VNC console:
+
+.. code-block:: bash
+
+ service vpp status
+
+
+Hardcoded Username and Password
+--------------------------------
+- Username: nfvbench
+- Password: nfvbench
+
+
+GENERATOR IMAGE INSTANCE AND CONFIG
+===================================
+
+Pre-requisites
+--------------
+To use the OpenStack APIs, the NFVbench generator VM reads the OpenStack clouds configuration from a `clouds.yaml` file.
+The `clouds.yaml` file must be in one of the following paths:
+
+- ~/.config/openstack
+- /etc/openstack
+
+Example of `clouds.yaml`:
+
+.. code-block:: yaml
+
+ clouds:
+ devstack:
+ auth:
+ auth_url: http://192.168.122.10:35357/
+ project_name: demo
+ username: demo
+ password: 0penstack
+ region_name: RegionOne
+
+.. note:: Set the `CLOUD_DETAIL` parameter in ``/etc/nfvbenchvm.conf`` to the name of the cloud to use from your OpenStack configuration (`devstack` in the above example).
+
+Interface Requirements
+----------------------
+The instance must be launched using OpenStack with 2 network interfaces for dataplane traffic (using SR-IOV functions) and 1 management interface to control nfvbench.
+For best performance, the dataplane interfaces should have `vnic_type` set to `direct-physical` (or `direct` if a physical function is not possible)
+and the instance should use a flavor with:
+
+- 6 vCPU
+- 8 GB RAM
+- cpu pinning set to exclusive
+
+.. note:: For the management interface: any interface type can be used. This interface requires a routable IP (through a floating IP or directly) and access to the OpenStack APIs.
+.. note:: CPU pinning: 1 core is dedicated to the guest OS and the NFVbench process; the other cores are used by TRex.
+
+Template of a generator profile using CPU pinning:
+
+.. code-block:: yaml
+
+ generator_profile:
+ - name: {{name}}
+ tool: {{tool}}
+ ip: {{ip}}
+ zmq_pub_port: {{zmq_pub_port}}
+ zmq_rpc_port: {{zmq_rpc_port}}
+ software_mode: {{software_mode}}
+ cores: {{CORES}}
+ platform:
+ master_thread_id: '0'
+ latency_thread_id: '1'
+ dual_if:
+ - socket: 0
+ threads: [{{CORE_THREADS}}]
+
+ interfaces:
+ - port: 0
+ pci: "{{PCI_ADDRESS_1}}"
+ switch:
+ - port: 1
+ pci: "{{PCI_ADDRESS_2}}"
+ switch:
+ intf_speed:
+
+.. note:: The `CORE_THREADS` value is determined automatically based on the cores available on the VM, from core 2 up to the last available worker core.
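+
+For instance, the worker thread list for the recommended 6 vCPU flavor could be derived as in this sketch (core 0 = master, core 1 = latency, remaining cores = workers):
+
+.. code-block:: python
+
+    # Sketch: derive CORE_THREADS from the vCPU count (illustration only)
+    vcpus = 6                     # the recommended 6 vCPU flavor
+    workers = range(2, vcpus)     # cores 2..5 are TRex worker cores
+    core_threads = ','.join(str(core) for core in workers)   # "2,3,4,5"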
+
+Auto-configuration
+------------------
+The nfvbench VM will automatically find the two virtual interfaces to use for the dataplane, based on MAC addresses or OpenStack port names (see the config section below).
+This applies to the management interface as well.
+
+nfvbenchvm Config
+-----------------
+nfvbenchvm config file is located at ``/etc/nfvbenchvm.conf``.
+
+Example of configuration:
+
+.. code-block:: bash
+
+ ACTION=e2e
+ LOOPBACK_INTF_MAC1=FA:16:3E:A2:30:41
+ LOOPBACK_INTF_MAC2=FA:16:3E:10:DA:10
+ E2E_INTF_MAC1=FA:16:3E:B0:E2:43
+ E2E_INTF_MAC2=FA:16:3E:D3:6A:FC
+
+.. note:: The `ACTION` parameter is not mandatory but allows starting NFVbench with the proper set of ports (loopback or e2e).
+.. note:: The two sets of MAC parameters cannot be used in parallel, as only one NFVbench/TRex process is running.
+.. note:: Switching from the `loopback` to the `e2e` action can be done manually using `/nfvbench/start-nfvbench.sh <action>` with the appropriate keyword as the `action` parameter. This script will restart NFVbench with the matching set of MAC addresses.
+
+nfvbenchvm config file with management interface:
+
+.. code-block:: bash
+
+ ACTION=e2e
+ LOOPBACK_INTF_MAC1=FA:16:3E:A2:30:41
+ LOOPBACK_INTF_MAC2=FA:16:3E:10:DA:10
+ INTF_MAC_MGMT=FA:16:3E:06:11:8A
+ INTF_MGMT_CIDR=172.20.56.228/2
+ INTF_MGMT_IP_GW=172.20.56.225
+ INTF_MGMT_MTU=1500
+
+.. note:: The `INTF_MGMT_IP_GW` and `INTF_MGMT_CIDR` parameters are used by the VM to automatically configure the virtual interface and the route that allow external access through SSH.
+
+.. note:: ``INTF_MGMT_MTU`` allows specifying the MTU of the management
+ interface in bytes.
+
+ If ``INTF_MGMT_MTU`` is not specified, the MTU will be configured to
+ the conservative value of 1500: this will reduce the risk of getting an
+ unmanageable VM.
+
+ ``INTF_MGMT_MTU`` can also be set to the special value ``auto``: in
+ that case, the MTU will not be configured and it will keep the value
+ set by the hypervisor (default nfvbench behavior up to version
+ 5.0.3).
+
+When using pre-created direct-physical ports on OpenStack, MAC address values are only known once the VM is deployed. In this case, you can pass the port names in the config:
+
+.. code-block:: bash
+
+ LOOPBACK_PORT_NAME1=nfvbench-pf1
+ LOOPBACK_PORT_NAME2=nfvbench-pf2
+ E2E_PORT_NAME1=nfvbench-pf1
+ E2E_PORT_NAME2=nfvbench-pf3
+ INTF_MAC_MGMT=FA:16:3E:06:11:8A
+ INTF_MGMT_CIDR=172.20.56.228/2
+ INTF_MGMT_IP_GW=172.20.56.225
+ DNS_SERVERS=8.8.8.8,dns.server.com
+
+.. note:: A management interface is required to automatically find the virtual interfaces to use, according to the MAC addresses provided (see the `INTF_MAC_MGMT` parameter).
+.. note:: The NFVbench VM will call the OpenStack API through the management interface to retrieve the MAC addresses of these ports.
+.. note:: If reaching the OpenStack API requires host name resolution, add the DNS_SERVERS parameter with DNS server IPs or names (multiple servers can be added, separated by a `,`).
+
+Control nfvbenchvm VM and run test
+----------------------------------
+
+By default, NFVbench is started in server mode (`--server`) and acts as an API server.
+
+NFVbench VM will be accessible through SSH or HTTP using the management interface IP.
+
+The NFVbench API endpoint is: `http://<management_ip>:<port>`
+
+.. note:: By default, the port value is 7555.
+
+Get NFVbench status
+^^^^^^^^^^^^^^^^^^^
+
+To check that NFVbench is up and running, use this REST request:
+
+.. code-block:: bash
+
+ curl -XGET '<management_ip>:<port>/status'
+
+Example answer:
+
+.. code-block:: json
+
+ {
+ "error_message": "nfvbench run still pending",
+ "status": "PENDING"
+ }
+
+Start NFVbench test
+^^^^^^^^^^^^^^^^^^^
+
+To start a test run using NFVbench API use this type of REST request:
+
+.. code-block:: bash
+
+ curl -XPOST '<management_ip>:<port>/start_run' -H "Content-Type: application/json" -d @nfvbenchconfig.json
+
+Example response when the submission is successful:
+
+.. code-block:: json
+
+ {
+ "error_message": "NFVbench run still pending",
+ "request_id": "42cccb7effdc43caa47f722f0ca8ec96",
+ "status": "PENDING"
+ }
+
+
+Start NFVbench test using Xtesting
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+To start a test run using the Xtesting Python library and the NFVbench API, use this type of command on the VM:
+
+.. code-block:: bash
+
+ run_tests -t nfvbench-demo
+
+.. note:: The `-t` option determines which test case is run by Xtesting
+ (see the `xtesting/testcases.yaml` file content for the list of available test cases)
+
+
+Connect to the VM using SSH keypair
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+If a key is provided at VM creation, you can use it to log in to the VM with the `cloud-user` username:
+
+.. code-block:: bash
+
+ ssh -i key.pem cloud-user@<management_ip>
+
+
+Connect to VM using SSH username/password
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+The VM is accessible over SSH using the hardcoded username and password (see below):
+
+.. code-block:: bash
+
+ ssh nfvbench@<management_ip>
+
+
+Launching nfvbenchvm VM
+-----------------------
+
+Normally this image will be deployed using an Ansible role, and the required configuration will be automatically generated and pushed to the VM by Ansible.
+If launched manually, users will have full control to configure and run NFVbench via the VNC console.
+
+To check if NFVbench is running, you can run this command in VNC console:
+
+.. code-block:: bash
+
+ sudo screen -r nfvbench
+
+
+Hardcoded Username and Password
+--------------------------------
+- Username: nfvbench
+- Password: nfvbench
+
diff --git a/docs/developer/testing-nfvbench.rst b/docs/developer/testing-nfvbench.rst
new file mode 100644
index 0000000..fd6c6f7
--- /dev/null
+++ b/docs/developer/testing-nfvbench.rst
@@ -0,0 +1,91 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International License.
+.. SPDX-License-Identifier: CC-BY-4.0
+
+================
+Testing NFVbench
+================
+
+tox
+===
+
+The NFVbench project uses `tox`_ to orchestrate the testing of the code base:
+
+* run unit tests
+* check code style
+* run linter
+* check links in the docs
+
+In addition to testing, tox is also used to generate the documentation in HTML
+format.
+
+What tox should do is specified in a ``tox.ini`` file located at the project root.
+
+tox is used in continuous integration: all the actions performed by tox must
+succeed before a patchset can be merged. As a developer, it is also useful to
+run tox locally to detect and fix the issues before pushing the code for review.
+
+.. _tox: https://tox.wiki/en/latest/
+
+
+
+Using tox on a developer's machine
+==================================
+
+Requirement: |python-version|
+-----------------------------
+
+.. |python-version| replace:: Python 3.8
+
+The current version of Python used by NFVbench is |python-version|. In
+particular, this means that |python-version| is used:
+
+* by tox in CI
+* in nfvbench Docker image
+* in nfvbench traffic generator VM image
+
+|python-version| is needed to be able to run tox locally. If it is not
+available through the package manager, it can be installed using `pyenv`_. In
+that case, it will also be necessary to install the `pyenv-virtualenv`_ plugin.
+Refer to the documentation of those projects for installation instructions.
+
+.. _pyenv: https://github.com/pyenv/pyenv
+.. _pyenv-virtualenv: https://github.com/pyenv/pyenv-virtualenv
+
+
+tox installation
+----------------
+
+Install tox with::
+
+ $ pip install tox==3.21.4
+
+.. note:: tox 3.21.4 is the version that comes with Ubuntu 22.04 and that can be
+ found on gerrit.opnfv.org build servers.
+
+
+Running tox
+-----------
+
+In the nfvbench root directory, simply run tox with::
+
+ $ tox
+
+If all goes well, tox shows a green summary such as::
+
+ py38: commands succeeded
+ pep8: commands succeeded
+ lint: commands succeeded
+ docs: commands succeeded
+ docs-linkcheck: commands succeeded
+ congratulations :)
+
+It is possible to run only a subset of tox *environments* with the ``-e``
+command line option. For instance, to check the code style only, do::
+
+ $ tox -e pep8
+
+Each tox *environment* uses a dedicated python virtual environment. The
+``-r`` command line option can be used to force the recreation of the virtual
+environment(s). For instance::
+
+ $ tox -r