Diffstat (limited to 'docs/testing')
-rw-r--r--              docs/testing/user/userguide/04-installation.rst                                       116
-rw-r--r--              docs/testing/user/userguide/12-nsb-overview.rst                                        21
-rw-r--r--              docs/testing/user/userguide/13-nsb-installation.rst                                   477
-rw-r--r--              docs/testing/user/userguide/14-nsb-operation.rst                                       96
-rwxr-xr-x [-rw-r--r--] docs/testing/user/userguide/nsb/nsb-list-of-tcs.rst                                     3
-rw-r--r--              docs/testing/user/userguide/nsb/tc_bng_pppoe_rfc2544_ixia.rst                         177
-rw-r--r--              docs/testing/user/userguide/nsb/tc_bng_pppoe_rfc2544_ixia_8ports_1port_congested.rst  179
-rwxr-xr-x              docs/testing/user/userguide/nsb/tc_pktgen_k8s_vcmts.rst                               102
-rw-r--r--              docs/testing/user/userguide/nsb/tc_prox_context_vpe_port.rst                            6
-rw-r--r--              docs/testing/user/userguide/nsb/tc_vims_context_sipp.rst                               96
-rw-r--r--              docs/testing/user/userguide/nsb/tc_vpp_baremetal_crypto_ipsec.rst                     113
11 files changed, 1226 insertions, 160 deletions
diff --git a/docs/testing/user/userguide/04-installation.rst b/docs/testing/user/userguide/04-installation.rst
index 2dff80ef9..6fe42232d 100644
--- a/docs/testing/user/userguide/04-installation.rst
+++ b/docs/testing/user/userguide/04-installation.rst
@@ -72,6 +72,7 @@ Several prerequisites are needed for Yardstick:
the end of this document. That section details some tips/tricks which *may*
be of help in a proxified environment.
+.. _Install Yardstick using Docker:
Install Yardstick using Docker (first option) (**recommended**)
---------------------------------------------------------------
@@ -463,112 +464,111 @@ These configuration files can be found in the ``samples`` directory.
Default location for the output is ``/tmp/yardstick.out``.
-Automatic installation of Yardstick using ansible
--------------------------------------------------
+Automatic installation of Yardstick
+-----------------------------------
-Automatic installation can be used as an alternative to the manual.
-Yardstick can be installed on the bare metal and to the container. Yardstick
+Automatic installation can be used as an alternative to the manual installation
+by providing parameters for the ansible script ``install.yaml`` in the
+``nsb_setup.sh`` file. Yardstick can be installed on bare metal or in a container. The Yardstick
container can be either pulled or built.
Bare metal installation
^^^^^^^^^^^^^^^^^^^^^^^
-Use ansible script ``install.yaml`` to install Yardstick on Ubuntu server:
+Modify the ``install.yaml`` parameters in the ``nsb_setup.sh`` file to install
+Yardstick on an Ubuntu server:
.. code-block:: console
ansible-playbook -i install-inventory.ini install.yaml \
+ -e IMAGE_PROPERTY='none' \
-e YARDSTICK_DIR=<path to Yardstick folder>
.. note:: By default ``INSTALLATION_MODE`` is ``baremetal``.
-.. note:: By default Ubuntu 16.04 is chosen (xenial). It can be changed to
- Ubuntu 18.04 (bionic) by passing ``-e OS_RELEASE=bionic`` parameter.
+.. note:: No modification in ``install-inventory.ini`` is needed for Yardstick
+ installation.
.. note:: To install Yardstick in virtual environment pass parameter
``-e VIRTUAL_ENVIRONMENT=True``.
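+
+For example, a bare metal installation into a virtual environment might look
+like this (an illustrative combination of the parameters described above):
+
+.. code-block:: console
+
+   ansible-playbook -i install-inventory.ini install.yaml \
+   -e IMAGE_PROPERTY='none' \
+   -e VIRTUAL_ENVIRONMENT=True \
+   -e YARDSTICK_DIR=<path to Yardstick folder>
+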
-To build Yardstick NSB image pass ``IMG_PROPERTY=nsb`` as input parameter:
-
-.. code-block:: console
-
- ansible-playbook -i install-inventory.ini install.yaml \
- -e IMAGE_PROPERTY=nsb \
- -e YARDSTICK_DIR=<path to Yardstick folder>
-
-.. note:: In this ``INSTALLATION_MODE`` mode either Yardstick image or SampleVNF
- images will be built. Image type is defined by parameter ``IMAGE_PROPERTY``.
- By default Yardstick image will be built.
-
Container installation
^^^^^^^^^^^^^^^^^^^^^^
-Use ansible script ``install.yaml`` to pull or build Yardstick
-container. To pull Yardstick image and start container run:
+Modify ``install.yaml`` parameters in ``nsb_setup.sh`` file to pull or build
+Yardstick container. To pull Yardstick image and start container run:
.. code-block:: console
ansible-playbook -i install-inventory.ini install.yaml \
- -e YARDSTICK_DIR=<path to Yardstick folder> \
+ -e IMAGE_PROPERTY='none' \
-e INSTALLATION_MODE=container_pull
-.. note:: In this ``INSTALLATION_MODE`` mode either Yardstick image or SampleVNF
- images will be built. Image type is defined by variable ``IMG_PROPERTY`` in
- file ``ansible/group_vars/all.yml``. By default Yardstick image will be
- built.
-
-.. note:: Open question: How to know if Docker image is built on Ubuntu 16.04 and 18.04?
- Do we need separate tag to be used?
+.. note:: The Yardstick docker image is available for both Ubuntu 16.04 and
+   Ubuntu 18.04. By default the Ubuntu 16.04 based docker image is used. To
+   use the Ubuntu 18.04 based docker image, pass the
+   ``-i opnfv/yardstick-ubuntu-18.04`` parameter to ``nsb_setup.sh``.
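+
+For example, to pull and start the Ubuntu 18.04 based image:
+
+.. code-block:: console
+
+   ./nsb_setup.sh -i opnfv/yardstick-ubuntu-18.04:latest
+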
-To build Yardstick image run:
+To build the Yardstick image, modify the Dockerfile as per the comments in it
+and run:
.. code-block:: console
- ansible-playbook -i install-inventory.ini install.yaml \
- -e YARDSTICK_DIR=<path to Yardstick folder> \
- -e INSTALLATION_MODE=container
+ cd yardstick
+ docker build -f docker/Dockerfile -t opnfv/yardstick:<tag> .
-.. note:: In this ``INSTALLATION_MODE`` mode neither Yardstick image nor SampleVNF
- image will be built.
+.. note:: By default a Yardstick docker image based on Ubuntu 16.04 is built.
+   Pass ``-f docker/Dockerfile_ubuntu18`` to build a Yardstick docker image
+   based on Ubuntu 18.04.
-.. note:: By default Ubuntu 16.04 is chosen (xenial). It can be changed to
- Ubuntu 18.04 (bionic) by passing ``-e OS_RELEASE=bionic`` parameter.
+.. note:: Add ``--build-arg http_proxy=http://<proxy_host>:<proxy_port>`` to
+   the build command if the server is behind a proxy.
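+
+For example, a build of the Ubuntu 18.04 based image behind a proxy might
+combine both options (the image tag below is illustrative):
+
+.. code-block:: console
+
+   docker build -f docker/Dockerfile_ubuntu18 \
+     --build-arg http_proxy=http://<proxy_host>:<proxy_port> \
+     -t opnfv/yardstick:<tag> .
+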
Parameters for ``install.yaml``
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-Description of the parameters used with ``install.yaml`` script
+Description of the parameters used with ``install.yaml``:
+-------------------------+-------------------------------------------------+
| Parameters | Detail |
+=========================+=================================================+
- | -i install-inventory.ini| Installs package dependency to remote servers |
- | | Mandatory parameter |
- | | By default no remote servers are provided |
- | | Needed packages will be installed on localhost |
+ | -i install-inventory.ini|| Installs package dependency to remote servers |
+ | || and localhost |
+ | || Mandatory parameter |
+ | || By default no remote servers are provided |
+-------------------------+-------------------------------------------------+
- | -e YARDSTICK_DIR | Path to Yardstick folder |
- | | Mandatory parameter |
+ | -e YARDSTICK_DIR || Path to Yardstick folder |
+ | || Mandatory parameter for Yardstick bare metal |
+ | || installation |
+-------------------------+-------------------------------------------------+
- | -e INSTALLATION_MODE | baremetal: Yardstick is installed to the bare |
- | | metal |
- | | Default parameter |
+ | -e INSTALLATION_MODE || baremetal: Yardstick is installed to the bare |
+ | | metal |
+ | || Default parameter |
| +-------------------------------------------------+
- | | container: Yardstick is installed in container |
- | | Container is built from Dockerfile |
+ | || container: Yardstick is installed in container |
+ | || Container is built from Dockerfile |
| +-------------------------------------------------+
- | | container_pull: Yardstick is installed in |
- | | container |
- | | Container is pulled from docker hub |
+ | || container_pull: Yardstick is installed in |
+ | || container |
+ | || Container is pulled from docker hub |
+-------------------------+-------------------------------------------------+
- | -e OS_RELEASE | xenial or bionic: Ubuntu version to be used |
- | | Default is Ubuntu 16.04 (xenial) |
+ | -e OS_RELEASE || xenial or bionic: Ubuntu version to be used for|
+ | || VM image (nsb or normal) |
+ | || Default is Ubuntu 16.04, xenial |
+ +-------------------------+-------------------------------------------------+
+ | -e IMAGE_PROPERTY || nsb: Build Yardstick NSB VM image |
+ | || Used to run Yardstick NSB tests on sample VNF |
+ | || Default parameter |
+ | +-------------------------------------------------+
+ | || normal: Build VM image to run ping test in |
+ | || OpenStack |
+ | +-------------------------------------------------+
+ | || none: don't build a VM image. |
+-------------------------+-------------------------------------------------+
- | -e IMAGE_PROPERTY | normal or nsb: Type of the VM image to be built |
- | | Default image is Yardstick |
+ | -e VIRTUAL_ENVIRONMENT || False or True: Whether to install in virtualenv |
+ | || Default is False |
+-------------------------+-------------------------------------------------+
- | -e VIRTUAL_ENVIRONMENT | False or True: Whether install in virtualenv |
- | | Default is False |
+ | -e YARD_IMAGE_ARCH || CPU architecture on servers |
+ | || Default is 'amd64' |
+-------------------------+-------------------------------------------------+
diff --git a/docs/testing/user/userguide/12-nsb-overview.rst b/docs/testing/user/userguide/12-nsb-overview.rst
index 70aba1e37..45b087a47 100644
--- a/docs/testing/user/userguide/12-nsb-overview.rst
+++ b/docs/testing/user/userguide/12-nsb-overview.rst
@@ -1,7 +1,7 @@
.. This work is licensed under a Creative Commons Attribution 4.0 International
.. License.
.. http://creativecommons.org/licenses/by/4.0
-.. (c) OPNFV, 2016-2017 Intel Corporation.
+.. (c) OPNFV, 2016-2019 Intel Corporation.
.. Convention for heading levels in Yardstick documentation:
@@ -55,8 +55,8 @@ traffic according to user defined profiles.
NSB extension includes:
* Generic data models of Network Services, based on ETSI spec
- `ETSI GS NFV-TST 001`_
-* Standalone :term:`context` for VNF testing SRIOV, OVS, OVS-DPDK, etc
+ `ETSI GS NFV-TST001`_
+* Standalone :term:`context` for VNF testing SRIOV, OVS-DPDK, etc
* Generic VNF configuration models and metrics implemented with Python
classes
* Traffic generator features and traffic profiles
@@ -65,6 +65,14 @@ NSB extension includes:
* L4-L7 state-full traffic profiles
* Tunneling protocol/network overlay support
+* Scenarios that handle NSB test cases execution
+
+ * NSPerf - scenario that handles generic NSB test case execution
+ (setup and init tg/vnf, trigger traffic on tg, collect kpi)
+ * NSPerf-RFC2544 - scenario that allows repeatable triggering of traffic on
+   traffic generators until the test case acceptance criteria are met
+ (for example RFC2544 binary search)
+
* Test case samples
* Ping
@@ -167,8 +175,8 @@ Supported testcases scenarios:
* Correlated UDP traffic using TREX traffic generator and replay VNF.
- * using different IMIX configuration like pure voice, pure video traffic etc
- * using different number IP flows e.g. 1, 1K, 16K, 64K, 256K, 1M flows
+ * Using different IMIX configurations like pure voice, pure video traffic etc
+ * Using different numbers of IP flows e.g. 1, 1K, 16K, 64K, 256K, 1M flows
* Using different number of rules configured e.g. 1, 1K, 10K rules
For UDP correlated traffic following Key Performance Indicators are collected
@@ -186,6 +194,7 @@ There is already some reporting in NSB available, but NSB collects all KPIs for
analytics to process.
Below is an example list of basic KPIs:
+
* Throughput
* Latency
* Packet delay variation
@@ -206,7 +215,7 @@ the following collectd plug-ins are enabled for NSB testcases:
* RAM
* CPU usage
* Intel® PMU
-* Intel(r) RDT
+* Intel® RDT
Graphical Overview
------------------
diff --git a/docs/testing/user/userguide/13-nsb-installation.rst b/docs/testing/user/userguide/13-nsb-installation.rst
index 71ced43ea..35f67b92f 100644
--- a/docs/testing/user/userguide/13-nsb-installation.rst
+++ b/docs/testing/user/userguide/13-nsb-installation.rst
@@ -1,7 +1,7 @@
.. This work is licensed under a Creative Commons Attribution 4.0 International
.. License.
.. http://creativecommons.org/licenses/by/4.0
-.. (c) OPNFV, 2016-2018 Intel Corporation.
+.. (c) OPNFV, 2016-2019 Intel Corporation.
..
Convention for heading levels in Yardstick documentation:
@@ -21,6 +21,7 @@ NSB Installation
.. _OVS-DPDK: http://docs.openvswitch.org/en/latest/intro/install/dpdk/
.. _devstack: https://docs.openstack.org/devstack/pike/>
+.. _OVS-DPDK-versions: http://docs.openvswitch.org/en/latest/faq/releases/
Abstract
--------
@@ -28,7 +29,7 @@ Abstract
The steps needed to run Yardstick with NSB testing are:
* Install Yardstick (NSB Testing).
-* Setup/reference ``pod.yaml`` describing Test topology
+* Setup/reference ``pod.yaml`` describing Test topology.
* Create/reference the test configuration yaml file.
* Run the test case.
@@ -89,79 +90,218 @@ Boot and BIOS settings:
Install Yardstick (NSB Testing)
-------------------------------
-Download the source code and check out the latest stable branch::
+Yardstick with NSB can be installed using ``nsb_setup.sh``.
+The ``nsb_setup.sh`` script allows you to:
-.. code-block:: console
-
- git clone https://gerrit.opnfv.org/gerrit/yardstick
- cd yardstick
- # Switch to latest stable branch
- git checkout stable/gambia
-
-Configure the network proxy, either using the environment variables or setting
-the global environment file.
+1. Install Yardstick in the specified mode: bare metal or container.
+   Refer to :doc:`04-installation`.
+2. Install package dependencies on the remote servers used as traffic
+   generator or sample VNF, i.e. install DPDK, sample VNFs, TREX and collectd.
+   Add such servers to the ``install-inventory.ini`` file, to either the
+   ``yardstick-standalone`` or the ``yardstick-baremetal`` server group.
+   This step also configures IOMMU, hugepages, open file limits, CPU
+   isolation, etc.
+3. Build a VM image, either nsb or normal. The nsb VM image is used to run
+   Yardstick sample VNF tests, like vFW, vACL, vCGNAPT, etc.
+   The normal VM image is used to run Yardstick ping tests in the OpenStack
+   context.
+4. Add the nsb or normal VM image to OpenStack together with the OpenStack
+   variables.
-* Set environment
+Firstly, configure the network proxy, either using the environment variables or
+setting the global environment file.
-.. code-block::
+Set environment in the file::
http_proxy='http://proxy.company.com:port'
https_proxy='http://proxy.company.com:port'
+Set environment variables:
+
.. code-block:: console
export http_proxy='http://proxy.company.com:port'
export https_proxy='http://proxy.company.com:port'
-Modify the Yardstick installation inventory, used by Ansible::
+Download the source code and check out the latest stable branch:
+
+.. code-block:: console
+
+ git clone https://gerrit.opnfv.org/gerrit/yardstick
+ cd yardstick
+ # Switch to latest stable branch
+ git checkout stable/gambia
+
+Modify the Yardstick installation inventory used by Ansible:
+
+.. code-block:: ini
cat ./ansible/install-inventory.ini
[jumphost]
localhost ansible_connection=local
- [yardstick-standalone]
- yardstick-standalone-node ansible_host=192.168.1.2
- yardstick-standalone-node-2 ansible_host=192.168.1.3
-
# section below is only due backward compatibility.
# it will be removed later
[yardstick:children]
jumphost
+ [yardstick-baremetal]
+ baremetal ansible_host=192.168.2.51 ansible_connection=ssh
+
+ [yardstick-standalone]
+ standalone ansible_host=192.168.2.52 ansible_connection=ssh
+
[all:vars]
- ansible_user=root
- ansible_pass=root
+ # Uncomment credentials below if needed
+ ansible_user=root
+ ansible_ssh_pass=root
+ # ansible_ssh_private_key_file=/root/.ssh/id_rsa
+    # When IMAGE_PROPERTY is set to neither normal nor nsb, set
+    # "path_to_img=/path/to/image" to add the image to OpenStack
+    # path_to_img=/tmp/workspace/yardstick-image.img
+
+ # List of CPUs to be isolated (not used by default)
+ # Grub line will be extended with:
+    # "isolcpus=<ISOL_CPUS> nohz=on nohz_full=<ISOL_CPUS> rcu_nocbs=<ISOL_CPUS>"
+ # ISOL_CPUS=2-27,30-55 # physical cpu's for all NUMA nodes, four cpu's reserved
+
+.. warning::
+
+ Before running ``nsb_setup.sh`` make sure python is installed on servers
+ added to ``yardstick-standalone`` and ``yardstick-baremetal`` groups.
.. note::
SSH access without password needs to be configured for all your nodes
- defined in ``yardstick-install-inventory.ini`` file.
+ defined in ``install-inventory.ini`` file.
If you want to use password authentication you need to install ``sshpass``::
sudo -EH apt-get install sshpass
-To execute an installation for a BareMetal or a Standalone context::
- ./nsb_setup.sh
+.. note::
+
+ A VM image built by other means than Yardstick can be added to OpenStack.
+ Uncomment and set correct path to the VM image in the
+ ``install-inventory.ini`` file::
+
+ path_to_img=/tmp/workspace/yardstick-image.img
+
+
+.. note::
+
+   CPU isolation can be applied to the remote servers, e.g.:
+   ``ISOL_CPUS=2-27,30-55``. Uncomment and modify it accordingly in the
+   ``install-inventory.ini`` file.
+By default ``nsb_setup.sh`` pulls the Yardstick image based on Ubuntu 16.04
+from docker hub and starts the container, builds the NSB VM image based on
+Ubuntu 16.04, and installs packages on the servers given in the
+``yardstick-standalone`` and ``yardstick-baremetal`` host groups.
-To execute an installation for an OpenStack context::
+To pull a Yardstick image based on Ubuntu 18.04 run::
+
+ ./nsb_setup.sh -i opnfv/yardstick-ubuntu-18.04:latest
+
+To change the default behavior, modify the parameters for ``install.yaml``
+in the ``nsb_setup.sh`` file.
+
+Refer to chapter :doc:`04-installation` for more details on ``install.yaml``
+parameters.
+
+To execute an installation for a **BareMetal** or a **Standalone context**::
+
+ ./nsb_setup.sh
+
+To execute an installation for an **OpenStack** context::
./nsb_setup.sh <path to admin-openrc.sh>
+.. note::
+
+   Yardstick may not be operational after a Linux distribution kernel update
+   if it was installed before the update. Run ``nsb_setup.sh`` again to
+   resolve this.
+
+.. warning::
+
+ The Yardstick VM image (NSB or normal) cannot be built inside a VM.
+
+.. warning::
+
+   The ``nsb_setup.sh`` script configures huge pages, CPU isolation and IOMMU
+   in grub. A reboot of the servers from the ``yardstick-standalone`` or
+   ``yardstick-baremetal`` groups in the file ``install-inventory.ini`` is
+   required to apply those changes.
+
The above commands will set up Docker with the latest Yardstick code. To
execute::
docker exec -it yardstick bash
+.. note::
+
+   It may be necessary to configure the tty in the docker container to extend
+   the command line character length, for example::
+
+      stty rows 58 cols 234
+
It will also automatically download all the packages needed for NSB Testing
-setup. Refer chapter :doc:`04-installation` for more on Docker
+setup. Refer to chapter :doc:`04-installation` for more on Docker:
+:ref:`Install Yardstick using Docker`.
-**Install Yardstick using Docker (recommended)**
+Bare Metal context example
+^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Let's assume there are three servers acting as TG, sample VNF DUT and jump host.
+
+Perform the following steps to install NSB:
+
+1. Clone the Yardstick repo to the jump host.
+2. Add the TG and DUT servers to the ``yardstick-baremetal`` group in the
+   ``install-inventory.ini`` file to install NSB and dependencies. Install
+   python on the servers.
+3. Start deployment using the docker image based on Ubuntu 16.04:
+
+.. code-block:: console
+
+ ./nsb_setup.sh
+
+4. Reboot the bare metal servers.
+5. Enter the yardstick container, modify the pod yaml file and run tests, as
+   shown in the example below.
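+
+For example (the test case path below is illustrative, any NSB test case may
+be used):
+
+.. code-block:: console
+
+   docker exec -it yardstick bash
+   vi /etc/yardstick/nodes/pod.yaml
+   yardstick task start samples/vnf_samples/nsut/vfw/tc_baremetal_rfc2544_ipv4_trex_scale_up.yaml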
+
+Standalone context example for Ubuntu 18
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Let's assume there are three servers acting as TG, sample VNF DUT and jump host.
+Ubuntu 18 is installed on all servers.
+
+Perform the following steps to install NSB:
+
+1. Clone the Yardstick repo to the jump host.
+2. Add the TG server to the ``yardstick-baremetal`` group in the
+   ``install-inventory.ini`` file to install NSB and dependencies.
+   Add the server where the VM with the sample VNF will be deployed to the
+   ``yardstick-standalone`` group in the ``install-inventory.ini`` file.
+   The target VM image named ``yardstick-nsb-image.img`` will be placed in
+   ``/var/lib/libvirt/images/``.
+   Install python on the servers.
+3. Modify ``nsb_setup.sh`` on the jump host:
+
+.. code-block:: console
+
+ ansible-playbook \
+ -e IMAGE_PROPERTY='nsb' \
+ -e OS_RELEASE='bionic' \
+ -e INSTALLATION_MODE='container_pull' \
+ -e YARD_IMAGE_ARCH='amd64' ${extra_args} \
+ -i install-inventory.ini install.yaml
+
+4. Start deployment with the Yardstick docker image based on Ubuntu 18.04:
+
+.. code-block:: console
+
+ ./nsb_setup.sh -i opnfv/yardstick-ubuntu-18.04:latest -o <openrc_file>
+
+5. Reboot the servers.
+6. Enter the yardstick container, modify the pod yaml file and run tests.
-Another way to execute an installation for a Bare-Metal or a Standalone context
-is to use ansible script ``install.yaml``. Refer chapter :doc:`04-installation`
-for more details.
System Topology
---------------
@@ -175,7 +315,7 @@ System Topology
| | | |
| | (1)<-----(1) | |
+----------+ +----------+
- trafficgen_1 vnf
+ trafficgen_0 vnf
Environment parameters and credentials
@@ -185,14 +325,16 @@ Configure yardstick.conf
^^^^^^^^^^^^^^^^^^^^^^^^
If you did not run ``yardstick env influxdb`` inside the container to generate
- ``yardstick.conf``, then create the config file manually (run inside the
+``yardstick.conf``, then create the config file manually (run inside the
container)::
cp ./etc/yardstick/yardstick.conf.sample /etc/yardstick/yardstick.conf
vi /etc/yardstick/yardstick.conf
Add ``trex_path``, ``trex_client_lib`` and ``bin_path`` to the ``nsb``
-section::
+section:
+
+.. code-block:: ini
[DEFAULT]
debug = True
@@ -223,9 +365,10 @@ Connect to the Yardstick container::
docker exec -it yardstick /bin/bash
If you're running ``heat`` testcases and ``nsb_setup.sh`` was not used::
+
source /etc/yardstick/openstack.creds
-In addition to the above, you need to se the ``EXTERNAL_NETWORK`` for
+In addition to the above, you need to set the ``EXTERNAL_NETWORK`` for
OpenStack::
export EXTERNAL_NETWORK="<openstack public network>"
@@ -251,7 +394,7 @@ Bare-Metal 2-Node setup
| | | |
| | (n)<-----(n) | |
+----------+ +----------+
- trafficgen_1 vnf
+ trafficgen_0 vnf
Bare-Metal 3-Node setup - Correlated Traffic
++++++++++++++++++++++++++++++++++++++++++++
@@ -265,7 +408,7 @@ Bare-Metal 3-Node setup - Correlated Traffic
| | | | | |
| | | |(1)<---->(0)| |
+----------+ +----------+ +------------+
- trafficgen_1 vnf trafficgen_2
+ trafficgen_0 vnf trafficgen_1
Bare-Metal Config pod.yaml
@@ -273,13 +416,13 @@ Bare-Metal Config pod.yaml
Before executing Yardstick test cases, make sure that ``pod.yaml`` reflects the
topology and update all the required fields.::
- cp /etc/yardstick/nodes/pod.yaml.nsb.sample /etc/yardstick/nodes/pod.yaml
+ cp <yardstick>/etc/yardstick/nodes/pod.yaml.nsb.sample /etc/yardstick/nodes/pod.yaml
.. code-block:: YAML
nodes:
-
- name: trafficgen_1
+ name: trafficgen_0
role: TrafficGen
ip: 1.1.1.1
user: root
@@ -298,7 +441,7 @@ topology and update all the required fields.::
dpdk_port_num: 1
local_ip: "152.16.40.20"
netmask: "255.255.255.0"
- local_mac: "00:00.00:00:00:02"
+ local_mac: "00:00:00:00:00:02"
-
name: vnf
@@ -346,6 +489,15 @@ topology and update all the required fields.::
Standalone Virtualization
-------------------------
+A VM can be deployed manually or by Yardstick. If the parameter *vm_deploy* is
+set to ``True``, the VM will be deployed by Yardstick. Otherwise the VM should
+be deployed manually. Test case example, context section::
+
+ contexts:
+ ...
+ vm_deploy: True
+
+
SR-IOV
^^^^^^
@@ -353,7 +505,7 @@ SR-IOV Pre-requisites
+++++++++++++++++++++
On Host, where VM is created:
- a) Create and configure a bridge named ``br-int`` for VM to connect to
+ 1. Create and configure a bridge named ``br-int`` for VM to connect to
external network. Currently this can be done using VXLAN tunnel.
Execute the following on host, where VM is created::
@@ -382,18 +534,18 @@ On Host, where VM is created:
.. note:: Host and jump host are different baremetal servers.
- b) Modify test case management CIDR.
+ 2. Modify test case management CIDR.
IP addresses IP#1, IP#2 and CIDR must be in the same network.
.. code-block:: YAML
servers:
- vnf:
+ vnf_0:
network_ports:
mgmt:
cidr: '1.1.1.7/24'
- c) Build guest image for VNF to run.
+ 3. Build guest image for VNF to run.
Most of the sample test cases in Yardstick are using a guest image called
``yardstick-nsb-image`` which deviates from an Ubuntu Cloud Server image
Yardstick has a tool for building this custom image with SampleVNF.
@@ -413,8 +565,6 @@ On Host, where VM is created:
For instructions on generating a cloud image using Ansible, refer to
:doc:`04-installation`.
- for more details refer to chapter :doc:`04-installation`
-
.. note:: VM should be built with static IP and be accessible from the
Yardstick host.
@@ -446,7 +596,7 @@ SR-IOV 2-Node setup
| | (n)<----->(n) | ----------------- |
| | | |
+----------+ +-------------------------+
- trafficgen_1 host
+ trafficgen_0 host
@@ -474,7 +624,7 @@ SR-IOV 3-Node setup - Correlated Traffic
| | | | | | |
| | (n)<----->(n) | -----| (n)<-->(n) | |
+----------+ +---------------------+ +--------------+
- trafficgen_1 host trafficgen_2
+ trafficgen_0 host trafficgen_1
Before executing Yardstick test cases, make sure that ``pod.yaml`` reflects the
topology and update all the required fields.
@@ -493,7 +643,7 @@ SR-IOV Config pod_trex.yaml
nodes:
-
- name: trafficgen_1
+ name: trafficgen_0
role: TrafficGen
ip: 1.1.1.1
user: root
@@ -513,7 +663,7 @@ SR-IOV Config pod_trex.yaml
dpdk_port_num: 1
local_ip: "152.16.40.20"
netmask: "255.255.255.0"
- local_mac: "00:00.00:00:00:02"
+ local_mac: "00:00:00:00:00:02"
SR-IOV Config host_sriov.yaml
+++++++++++++++++++++++++++++
@@ -554,7 +704,7 @@ Update contexts section
user: "" # update VM username
password: "" # update password
servers:
- vnf:
+ vnf_0:
network_ports:
mgmt:
cidr: '1.1.1.61/24' # Update VM IP address, if static, <ip>/<mask> or if dynamic, <start of ip>/<mask>
@@ -575,6 +725,93 @@ Update contexts section
gateway_ip: '152.16.100.20'
+SRIOV configuration options
++++++++++++++++++++++++++++
+
+The only configuration option available for SRIOV is *vpci*. It is used as the
+base address for the VFs that are created during SRIOV test case execution.
+
+ .. code-block:: yaml+jinja
+
+ networks:
+ uplink_0:
+ phy_port: "0000:05:00.0"
+ vpci: "0000:00:07.0"
+ cidr: '152.16.100.10/24'
+ gateway_ip: '152.16.100.20'
+ downlink_0:
+ phy_port: "0000:05:00.1"
+ vpci: "0000:00:08.0"
+ cidr: '152.16.40.10/24'
+ gateway_ip: '152.16.100.20'
+
+.. _`VM image properties label`:
+
+VM image properties
+'''''''''''''''''''
+
+VM image properties example under *flavor* section:
+
+ .. code-block:: console
+
+ flavor:
+ images: <path>
+ ram: 8192
+ extra_specs:
+ machine_type: 'pc-i440fx-xenial'
+ hw:cpu_sockets: 1
+ hw:cpu_cores: 6
+ hw:cpu_threads: 2
+ hw_socket: 0
+ cputune: |
+ <cputune>
+ <vcpupin vcpu="0" cpuset="7"/>
+ <vcpupin vcpu="1" cpuset="8"/>
+ ...
+ <vcpupin vcpu="11" cpuset="18"/>
+ <emulatorpin cpuset="11"/>
+ </cputune>
+ user: ""
+ password: ""
+
+VM image properties description:
+
+ +-------------------------+-------------------------------------------------+
+ | Parameters | Detail |
+ +=========================+=================================================+
+ | images || Path to the VM image generated by |
+ | | ``nsb_setup.sh`` |
+ | || Default path is ``/var/lib/libvirt/images/`` |
+ | || Default file name ``yardstick-nsb-image.img`` |
+ | | or ``yardstick-image.img`` |
+ +-------------------------+-------------------------------------------------+
+ | ram || Amount of RAM to be used for VM |
+ | || Default is 4096 MB |
+ +-------------------------+-------------------------------------------------+
+ | hw:cpu_sockets || Number of sockets provided to the guest VM |
+ | || Default is 1 |
+ +-------------------------+-------------------------------------------------+
+ | hw:cpu_cores || Number of cores provided to the guest VM |
+ | || Default is 2 |
+ +-------------------------+-------------------------------------------------+
+ | hw:cpu_threads || Number of threads provided to the guest VM |
+ | || Default is 2 |
+ +-------------------------+-------------------------------------------------+
+ | hw_socket || Generate vcpu cpuset from given HW socket |
+ | || Default is 0 |
+ +-------------------------+-------------------------------------------------+
+ | cputune || Maps virtual cpu with logical cpu |
+ +-------------------------+-------------------------------------------------+
+ | machine_type || Machine type to be emulated in VM |
+ | || Default is 'pc-i440fx-xenial' |
+ +-------------------------+-------------------------------------------------+
+ | user || User name to access the VM |
+ | || Default value is 'root' |
+ +-------------------------+-------------------------------------------------+
+ | password || Password to access the VM |
+ +-------------------------+-------------------------------------------------+
+
+
OVS-DPDK
^^^^^^^^
@@ -582,7 +819,7 @@ OVS-DPDK Pre-requisites
+++++++++++++++++++++++
On Host, where VM is created:
- a) Create and configure a bridge named ``br-int`` for VM to connect to
+ 1. Create and configure a bridge named ``br-int`` for VM to connect to
external network. Currently this can be done using VXLAN tunnel.
Execute the following on host, where VM is created:
@@ -613,18 +850,18 @@ On Host, where VM is created:
.. note:: Host and jump host are different baremetal servers.
- b) Modify test case management CIDR.
+ 2. Modify test case management CIDR.
IP addresses IP#1, IP#2 and CIDR must be in the same network.
.. code-block:: YAML
servers:
- vnf:
+ vnf_0:
network_ports:
mgmt:
cidr: '1.1.1.7/24'
- c) Build guest image for VNF to run.
+ 3. Build guest image for VNF to run.
Most of the sample test cases in Yardstick are using a guest image called
``yardstick-nsb-image`` which deviates from an Ubuntu Cloud Server image
Yardstick has a tool for building this custom image with SampleVNF.
@@ -647,11 +884,11 @@ On Host, where VM is created:
.. note:: VM should be built with static IP and should be accessible from
yardstick host.
-3. OVS & DPDK version.
- * OVS 2.7 and DPDK 16.11.1 above version is supported
+4. OVS & DPDK version:
-4. Setup `OVS-DPDK`_ on host.
+   * OVS 2.7 and DPDK 16.11.1 and above versions are supported
+
+Refer to the setup instructions at `OVS-DPDK`_ to set it up on the host.
OVS-DPDK Config pod.yaml describing Topology
++++++++++++++++++++++++++++++++++++++++++++
@@ -683,7 +920,7 @@ OVS-DPDK 2-Node setup
| | | (ovs-dpdk) | |
| | (n)<----->(n) |------------------ |
+----------+ +-------------------------+
- trafficgen_1 host
+ trafficgen_0 host
OVS-DPDK 3-Node setup - Correlated Traffic
@@ -713,7 +950,7 @@ OVS-DPDK 3-Node setup - Correlated Traffic
| | | (ovs-dpdk) | | | |
| | (n)<----->(n) | ------ |(n)<-->(n)| |
+----------+ +-------------------------+ +------------+
- trafficgen_1 host trafficgen_2
+ trafficgen_0 host trafficgen_1
Before executing Yardstick test cases, make sure that the ``pod.yaml`` reflects
@@ -731,7 +968,7 @@ OVS-DPDK Config pod_trex.yaml
nodes:
-
- name: trafficgen_1
+ name: trafficgen_0
role: TrafficGen
ip: 1.1.1.1
user: root
@@ -750,7 +987,7 @@ OVS-DPDK Config pod_trex.yaml
dpdk_port_num: 1
local_ip: "152.16.40.20"
netmask: "255.255.255.0"
- local_mac: "00:00.00:00:00:02"
+ local_mac: "00:00:00:00:00:02"
OVS-DPDK Config host_ovs.yaml
+++++++++++++++++++++++++++++
@@ -802,7 +1039,7 @@ Update contexts section
user: "" # update VM username
password: "" # update password
servers:
- vnf:
+ vnf_0:
network_ports:
mgmt:
cidr: '1.1.1.61/24' # Update VM IP address, if static, <ip>/<mask> or if dynamic, <start of ip>/<mask>
@@ -822,6 +1059,92 @@ Update contexts section
cidr: '152.16.40.10/24'
gateway_ip: '152.16.100.20'
+OVS-DPDK configuration options
+++++++++++++++++++++++++++++++
+
+There are a number of configuration options available for the OVS-DPDK context
+in a test case. Mostly they are used for performance tuning.
+
+OVS-DPDK properties:
+''''''''''''''''''''
+
+OVS-DPDK properties example under *ovs_properties* section:
+
+ .. code-block:: console
+
+ ovs_properties:
+ version:
+ ovs: 2.8.1
+ dpdk: 17.05.2
+ pmd_threads: 4
+ pmd_cpu_mask: "0x3c"
+ ram:
+ socket_0: 2048
+ socket_1: 2048
+ queues: 2
+ vpath: "/usr/local"
+ max_idle: 30000
+ lcore_mask: 0x02
+ dpdk_pmd-rxq-affinity:
+ 0: "0:2,1:2"
+ 1: "0:2,1:2"
+ 2: "0:3,1:3"
+ 3: "0:3,1:3"
+ vhost_pmd-rxq-affinity:
+ 0: "0:3,1:3"
+ 1: "0:3,1:3"
+ 2: "0:4,1:4"
+ 3: "0:4,1:4"
+
+OVS-DPDK properties description:
+
+ +-------------------------+-------------------------------------------------+
+ | Parameters | Detail |
+ +=========================+=================================================+
+ | version || Version of OVS and DPDK to be installed |
+ | || There is a relation between OVS and DPDK |
+ | | version which can be found at |
+ | | `OVS-DPDK-versions`_ |
+ | || By default OVS: 2.6.0, DPDK: 16.07.2 |
+ +-------------------------+-------------------------------------------------+
+ | lcore_mask || Core bitmask used during DPDK initialization |
+ | | where the non-datapath OVS-DPDK threads such |
+ | | as handler and revalidator threads run |
+ +-------------------------+-------------------------------------------------+
+ | pmd_cpu_mask || Core bitmask that sets which cores are used by |
+ | || OVS-DPDK for datapath packet processing |
+ +-------------------------+-------------------------------------------------+
+ | pmd_threads || Number of PMD threads used by OVS-DPDK for |
+ | | datapath |
+ | || This core mask is evaluated in Yardstick |
+ | || It will be used if pmd_cpu_mask is not given |
+ | || Default is 2 |
+ +-------------------------+-------------------------------------------------+
+ | ram || Amount of RAM to be used for each socket, MB |
+ | || Default is 2048 MB |
+ +-------------------------+-------------------------------------------------+
+ | queues || Number of RX queues used for DPDK physical |
+ | | interface |
+ +-------------------------+-------------------------------------------------+
+ | dpdk_pmd-rxq-affinity || RX queue assignment to PMD threads for DPDK |
+ | || e.g.: <port number> : <queue-id>:<core-id> |
+ +-------------------------+-------------------------------------------------+
+ | vhost_pmd-rxq-affinity || RX queue assignment to PMD threads for vhost |
+ | || e.g.: <port number> : <queue-id>:<core-id> |
+ +-------------------------+-------------------------------------------------+
+ | vpath || User path for openvswitch files |
+ | || Default is ``/usr/local`` |
+ +-------------------------+-------------------------------------------------+
+ | max_idle || The maximum time that idle flows will remain |
+ | | cached in the datapath, ms |
+ +-------------------------+-------------------------------------------------+
+
+
+VM image properties
+'''''''''''''''''''
+
+VM image properties are the same as for SRIOV, see
+:ref:`VM image properties label`.
+
OpenStack with SR-IOV support
-----------------------------
@@ -859,7 +1182,7 @@ Single node OpenStack with external TG
| | (PF1)<----->(PF1) +--------------------+ |
| | | |
+----------+ +----------------------------+
- trafficgen_1 host
+ trafficgen_0 host
Host pre-configuration
@@ -962,7 +1285,7 @@ DevStack installation
If you want to try out NSB, but don't have OpenStack set-up, you can use
`Devstack`_ to install OpenStack on a host. Please note, that the
``stable/pike`` branch of devstack repo should be used during the installation.
-The required ``local.conf`` configuration file are described below.
+The required ``local.conf`` configuration file is described below.
DevStack configuration file:
@@ -973,7 +1296,7 @@ DevStack configuration file:
commands to get device and vendor id of the virtual function (VF).
.. literalinclude:: code/single-devstack-local.conf
- :language: console
+ :language: ini
Start the devstack installation on a host.
@@ -990,7 +1313,7 @@ Run the Sample VNF test case
There is an example of Sample VNF test case ready to be executed in an
OpenStack environment with SR-IOV support: ``samples/vnf_samples/nsut/vfw/
-tc_heat_sriov_external_rfc2544_ipv4_1rule_1flow_64B_trex.yaml``.
+tc_heat_sriov_external_rfc2544_ipv4_1rule_1flow_trex.yaml``.
Install Yardstick using `Install Yardstick (NSB Testing)`_ steps for OpenStack
context.
@@ -1003,7 +1326,7 @@ container:
command to get the PF PCI address for ``vpci`` field.
.. literalinclude:: code/single-yardstick-pod.conf
- :language: console
+ :language: ini
Run the Sample vFW RFC2544 SR-IOV TC (``samples/vnf_samples/nsut/vfw/
tc_heat_sriov_external_rfc2544_ipv4_1rule_1flow_64B_trex.yaml``) in the heat
@@ -1011,7 +1334,7 @@ context using steps described in `NS testing - using yardstick CLI`_ section.
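+
+For example, from inside the Yardstick container:
+
+.. code-block:: console
+
+   yardstick task start samples/vnf_samples/nsut/vfw/tc_heat_sriov_external_rfc2544_ipv4_1rule_1flow_trex.yaml
+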
Multi node OpenStack TG and VNF setup (two nodes)
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. code-block:: console
@@ -1022,7 +1345,7 @@ Multi node OpenStack TG and VNF setup (two nodes)
| |sample-VNF VM | | | |sample-VNF VM | |
| | | | | | | |
| | TG | | | | DUT | |
- | | trafficgen_1 | | | | (VNF) | |
+ | | trafficgen_0 | | | | (VNF) | |
| | | | | | | |
| +--------+ +--------+ | | +--------+ +--------+ |
| | VF NIC | | VF NIC | | | | VF NIC | | VF NIC | |
@@ -1063,12 +1386,12 @@ devstack repo should be used during the installation.
DevStack configuration file for controller host:
.. literalinclude:: code/multi-devstack-controller-local.conf
- :language: console
+ :language: ini
DevStack configuration file for compute host:
.. literalinclude:: code/multi-devstack-compute-local.conf
- :language: console
+ :language: ini
Start the devstack installation on the controller and compute hosts.
@@ -1094,7 +1417,7 @@ Enabling other Traffic generators
---------------------------------
IxLoad
-~~~~~~
+^^^^^^
1. Software needed: IxLoadAPI ``<IxLoadTclApi verson>Linux64.bin.tgz`` and
``<IxOS version>Linux64.bin.tar.gz`` (Download from ixia support site)
@@ -1116,7 +1439,7 @@ IxLoad
Config ``pod_ixia.yaml``
.. literalinclude:: code/pod_ixia.yaml
- :language: console
+ :language: yaml
for sriov/ovs_dpdk pod files, please refer to `Standalone Virtualization`_
for ovs-dpdk/sriov configuration
@@ -1152,7 +1475,7 @@ installed as part of the requirements of the project.
Configure ``pod_ixia.yaml``
.. literalinclude:: code/pod_ixia.yaml
- :language: console
+ :language: yaml
for sriov/ovs_dpdk pod files, please refer to above
`Standalone Virtualization`_ for ovs-dpdk/sriov configuration
@@ -1197,9 +1520,9 @@ to be preinstalled and properly configured.
``PYTHONPATH`` environment variable.
.. important::
- The current version of LsApi module has an issue with reading LD_LIBRARY_PATH.
- For LsApi module to initialize correctly following lines (184-186) in
- lsapi.py
+ The current version of LsApi module has an issue with reading LD_LIBRARY_PATH.
+ For LsApi module to initialize correctly following lines (184-186) in
+ lsapi.py
.. code-block:: python
diff --git a/docs/testing/user/userguide/14-nsb-operation.rst b/docs/testing/user/userguide/14-nsb-operation.rst
index 12e269187..1f9e4d4c6 100644
--- a/docs/testing/user/userguide/14-nsb-operation.rst
+++ b/docs/testing/user/userguide/14-nsb-operation.rst
@@ -1,7 +1,7 @@
.. This work is licensed under a Creative Commons Attribution 4.0 International
.. License.
.. http://creativecommons.org/licenses/by/4.0
-.. (c) OPNFV, 2016-2018 Intel Corporation.
+.. (c) OPNFV, 2016-2019 Intel Corporation.
..
Convention for heading levels in Yardstick documentation:
@@ -136,7 +136,7 @@ case, please follow the instructions below.
image: yardstick-samplevnfs
...
servers:
- vnf__0:
+ vnf_0:
...
availability_zone: <AZ_NAME>
...
@@ -265,8 +265,8 @@ to the VNF.
An example scale-up Heat testcase is:
-.. literalinclude:: /../samples/vnf_samples/nsut/vfw/tc_heat_rfc2544_ipv4_1rule_1flow_64B_trex_scale-up.yaml
- :language: yaml
+.. literalinclude:: /../samples/vnf_samples/nsut/vfw/tc_heat_rfc2544_ipv4_1rule_1flow_trex_scale-up.yaml
+ :language: yaml+jinja
This testcase template requires specifying the number of VCPUs, Memory and Ports.
We set the VCPUs and memory using the ``--task-args`` options
@@ -281,7 +281,7 @@ In order to support ports scale-up, traffic and topology templates need to be us
An example topology template is:
.. literalinclude:: /../samples/vnf_samples/nsut/vfw/vfw-tg-topology-scale-up.yaml
- :language: yaml
+ :language: yaml+jinja
This template has ``vports`` as an argument. To pass this argument it needs to
be configured in ``extra_args`` scenario definition. Please note that more
@@ -303,7 +303,7 @@ For example:
An example traffic profile template is:
.. literalinclude:: /../samples/vnf_samples/traffic_profiles/ipv4_throughput-scale-up.yaml
- :language: yaml
+ :language: yaml+jinja
There is an option to provide predefined config for SampleVNFs. Path to config
file may be specified in ``vnf_config`` scenario section.
@@ -319,11 +319,10 @@ Baremetal
^^^^^^^^^
1. Follow above traffic generator section to setup.
2. Edit num of threads in
- ``<repo>/samples/vnf_samples/nsut/vfw/tc_baremetal_rfc2544_ipv4_1rule_1flow_64B_trex_scale_up.yaml``
+ ``<repo>/samples/vnf_samples/nsut/vfw/tc_baremetal_rfc2544_ipv4_trex_scale_up.yaml``
   e.g. 6 threads for the given VNF
-.. code-block:: yaml
-
+.. code-block:: yaml+jinja
schema: yardstick:task:0.1
scenarios:
@@ -332,8 +331,8 @@ Baremetal
traffic_profile: ../../traffic_profiles/ipv4_throughput.yaml
topology: vfw-tg-topology.yaml
nodes:
- tg__0: trafficgen_1.yardstick
- vnf__0: vnf.yardstick
+ tg__0: trafficgen_0.yardstick
+ vnf__0: vnf_0.yardstick
options:
framesize:
uplink: {64B: 100}
@@ -382,7 +381,7 @@ Scale-out not supported on Baremetal.
.. code-block:: console
cd <repo>/ansible
- trex: standalone_ovs_scale_out_trex_test.yaml or standalone_sriov_scale_out_trex_test.yaml
+ trex: standalone_ovs_scale_out_test.yaml or standalone_sriov_scale_out_test.yaml
ixia: standalone_ovs_scale_out_ixia_test.yaml or standalone_sriov_scale_out_ixia_test.yaml
ixia_correlated: standalone_ovs_scale_out_ixia_correlated_test.yaml or standalone_sriov_scale_out_ixia_correlated_test.yaml
@@ -427,7 +426,7 @@ options section.
scenarios:
- type: NSPerf
nodes:
- tg__0: tg_0.yardstick
+ tg__0: trafficgen_0.yardstick
options:
tg_0:
@@ -503,8 +502,8 @@ Sample test case file
4. Modify ``networks/phy_port`` accordingly to the baremetal setup.
5. Run test from:
-.. literalinclude:: /../samples/vnf_samples/nsut/acl/tc_ovs_rfc2544_ipv4_1rule_1flow_64B_trex.yaml
- :language: yaml
+.. literalinclude:: /../samples/vnf_samples/nsut/acl/tc_ovs_rfc2544_ipv4_1rule_1flow_trex.yaml
+ :language: yaml+jinja
Preparing test run of vEPC test case
------------------------------------
@@ -617,7 +616,7 @@ The vPE (Provider Edge Router) is a :term: `VNF` approximation
serving as an Edge Router. The vPE is approximated using the
``ip_pipeline`` dpdk application.
- .. image:: images/vPE_Diagram.png
+ .. image:: /../docs/testing/developer/devguide/images/vPE_Diagram.png
:width: 800px
:alt: NSB vPE Diagram
@@ -640,3 +639,68 @@ A testcase can be started with the following command as an example:
.. code-block:: bash
yardstick task start /yardstick/samples/vnf_samples/nsut/vpe/tc_baremetal_rfc2544_ipv4_1flow_64B_ixia.yaml
+
+Preparing test run of vIPSEC test case
+--------------------------------------
+
+Location of vIPSEC test cases: ``samples/vnf_samples/nsut/ipsec/``.
+
+Before running a specific vIPSEC test case using NSB, some dependencies have
+to be preinstalled and properly configured.
+
+- VPP
+
+.. code-block:: console
+
+ export UBUNTU="xenial"
+ export RELEASE=".stable.1810"
+ sudo rm /etc/apt/sources.list.d/99fd.io.list
+ echo "deb [trusted=yes] https://nexus.fd.io/content/repositories/fd.io$RELEASE.ubuntu.$UBUNTU.main/ ./" | sudo tee -a /etc/apt/sources.list.d/99fd.io.list
+ sudo apt-get update
+ sudo apt-get install vpp vpp-lib vpp-plugin vpp-dbg vpp-dev vpp-api-java vpp-api-python vpp-api-lua
+
+- VAT templates
+
+   VAT templates are required for the VPP API.
+
+.. code-block:: console
+
+ mkdir -p /opt/nsb_bin/vpp/templates/
+ echo 'exec trace add dpdk-input 50' > /opt/nsb_bin/vpp/templates/enable_dpdk_traces.vat
+ echo 'exec trace add vhost-user-input 50' > /opt/nsb_bin/vpp/templates/enable_vhost_user_traces.vat
+ echo 'exec trace add memif-input 50' > /opt/nsb_bin/vpp/templates/enable_memif_traces.vat
+ cat > /opt/nsb_bin/vpp/templates/dump_interfaces.vat << EOL
+ sw_interface_dump
+ dump_interface_table
+ quit
+ EOL
+
+
+Preparing test run of vCMTS test case
+-------------------------------------
+
+Location of vCMTS test cases: ``samples/vnf_samples/nsut/cmts/``.
+
+Before running a specific vCMTS test case using NSB, some changes must be
+made to the original vCMTS package.
+
+Allow SSH access to the docker images
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+Follow the documentation at ``https://docs.docker.com/engine/examples/running_ssh_service/``
+to allow SSH access to the Pktgen/vcmts-d containers located at:
+
+* ``$VCMTS_ROOT/pktgen/docker/docker-image-pktgen/Dockerfile`` and
+* ``$VCMTS_ROOT/vcmtsd/docker/docker-image-vcmtsd/Dockerfile``
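+
+A minimal sketch of what such a Dockerfile extension might look like,
+following the docker documentation referenced above (the openssh-server
+package and the root password handling below are illustrative assumptions):
+
+.. code-block:: console
+
+   # install an SSH server and allow root login (example only, set a real password)
+   RUN apt-get update && apt-get install -y openssh-server
+   RUN mkdir /var/run/sshd
+   RUN echo 'root:<password>' | chpasswd
+   RUN sed -i 's/#PermitRootLogin prohibit-password/PermitRootLogin yes/' /etc/ssh/sshd_config
+   EXPOSE 22
+   CMD ["/usr/sbin/sshd", "-D"]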
+
+
+Deploy the ConfigMaps for Pktgen and vCMTSd
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+.. code-block:: bash
+
+ cd $VCMTS_ROOT/kubernetes/helm/pktgen
+ helm template . -x templates/pktgen-configmap.yaml > configmap.yaml
+ kubectl create -f configmap.yaml
+
+ cd $VCMTS_ROOT/kubernetes/helm/vcmtsd
+ helm template . -x templates/vcmts-configmap.yaml > configmap.yaml
+ kubectl create -f configmap.yaml
+
diff --git a/docs/testing/user/userguide/nsb/nsb-list-of-tcs.rst b/docs/testing/user/userguide/nsb/nsb-list-of-tcs.rst
index 6c18c7d89..562c80ff7 100644..100755
--- a/docs/testing/user/userguide/nsb/nsb-list-of-tcs.rst
+++ b/docs/testing/user/userguide/nsb/nsb-list-of-tcs.rst
@@ -36,3 +36,6 @@ NSB PROX Test Case Descriptions
tc_vfw_rfc2544
tc_vfw_rfc2544_correlated
tc_vfw_rfc3511
+ tc_vpp_baremetal_crypto_ipsec
+ tc_vims_context_sipp
+ tc_pktgen_k8s_vcmts
diff --git a/docs/testing/user/userguide/nsb/tc_bng_pppoe_rfc2544_ixia.rst b/docs/testing/user/userguide/nsb/tc_bng_pppoe_rfc2544_ixia.rst
new file mode 100644
index 000000000..ffe4f6c19
--- /dev/null
+++ b/docs/testing/user/userguide/nsb/tc_bng_pppoe_rfc2544_ixia.rst
@@ -0,0 +1,177 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, 2019 Intel Corporation.
+
+***************************************************************
+Yardstick Test Case Description: NSB vBNG RFC2544 QoS TEST CASE
+***************************************************************
+
++-----------------------------------------------------------------------------+
+|NSB vBNG RFC2544 QoS base line test case without link congestion |
+| |
++--------------+--------------------------------------------------------------+
+|test case id | tc_bng_pppoe_rfc2544_ixia_IMIX_scale_up |
+| | |
++--------------+--------------------------------------------------------------+
+| metric | Network metrics: |
+| | * TxThroughput |
+| | * RxThroughput |
+| | * TG packets in |
+| | * TG packets out |
+| | * Max Latency |
+| | * Min Latency |
+| | * Average Latency |
+| | * Packets drop percentage |
+| | |
+| | PPPoE subscribers metrics: |
+| | * Sessions up |
+| | * Sessions down |
+| | * Sessions Not Started |
+| | * Sessions Total |
+| | |
+| | NOTE: the same network metrics are collected: |
+| | * summary for all ports |
+| | * per port |
+| | * per priority flows summary on all ports |
+| | |
++--------------+--------------------------------------------------------------+
+|test purpose | This test allows measuring the performance of a BNG |
+| | network device according to the RFC2544 testing methodology. |
+| | The test case creates PPPoE subscriber connections to the |
+| | BNG, runs prioritized traffic at maximum throughput on all |
+| | ports and collects network and PPPoE subscriber metrics. |
+| | |
++--------------+--------------------------------------------------------------+
+|configuration | The BNG QoS RFC2544 test cases are listed below: |
+| | |
+| | * tc_bng_pppoe_rfc2544_ixia_IMIX_scale_up.yaml |
+| | |
+| | The test case above is a template; the number of ports in |
+| | the setup can be passed using cli arguments, e.g.: |
+| | |
+| | yardstick -d task start --task-args='{vports: 8}' <tc_yaml> |
+| | |
+| | By default, vports=2. |
+| | |
+| | Test duration: |
+| | * set as 30sec; |
+| | |
+| | Traffic type: |
+| | * IPv4; |
+| | |
+| | Packet sizes: |
+| | * IMIX. The following default IMIX distribution is used: |
+| | |
+| | uplink: 70B - 33%, 940B - 33%, 1470B - 34% |
+| | downlink: 68B - 3%, 932B - 1%, 1470B - 96% |
+| | |
+| | VLAN settings: |
+| | * QinQ on access ports; |
+| | * VLAN on core ports; |
+| | |
+| | Number of PPPoE subscribers: |
+| | * 4000 per access port; |
+| | * 1000 per SVLAN; |
+| | |
+| | Default ToS bits settings: |
+| | * 0 - (000) Routine |
+| | * 4 - (100) Flash Override |
+| | * 7 - (111) Network Control. |
+| | |
+| | The above fields are the main options used for the test case |
+| | and can be configured using cli options at test run or |
+| | directly in the test case yaml file. |
+| | |
++--------------+--------------------------------------------------------------+
+|test tool | IXIA IxNetwork |
+| | |
+| | IXIA IxNetwork is used to emulate PPPoE sessions, generate |
+| | L2-L3 traffic, analyze traffic flows and collect network |
+| | metrics during the test run. |
+| | |
++--------------+--------------------------------------------------------------+
+|applicability | The BNG QoS RFC2544 test case above can be configured with |
+| | different: |
+| | |
+| | * Number of PPPoE subscribers sessions; |
+| | * Setup ports number; |
+| | * IP Priority type; |
+| | * Packet size; |
+| | * Enable/disable BGP protocol on core ports; |
+| | |
+| | Default values exist. |
+| | |
++--------------+--------------------------------------------------------------+
+|references | RFC2544 |
+| | |
++--------------+--------------------------------------------------------------+
+| pre-test | 1. BNG is up and running and has configured: |
+| conditions | * access ports with QinQ tagging; |
+| | * core ports with configured IP addresses and VLAN; |
+| | * PPPoE subscribers authorization settings (no auth or |
+| | Radius server, PAP auth protocol); |
+| | * QoS settings; |
+| | |
+| | 2. IxNetwork API server is running on the TCL port |
+| | specified in the pod.yaml file; |
+| | |
+| | 3. BNG ports are connected to IXIA ports (IXIA uplink |
+| | ports are connected to BNG access ports and IXIA |
+| | downlink ports are connected to BNG core ports); |
+| | |
+| | 4. The pod.yaml file contains all necessary information |
+| | (BNG access and core ports settings, core ports IP |
+| | address, NICs, IxNetwork TCL port, IXIA uplink/downlink |
+| | ports, etc). |
+| | |
++--------------+--------------------------------------------------------------+
+|test sequence | description and expected result |
+| | |
++--------------+--------------------------------------------------------------+
+|step 1 | Yardstick resolves the topology and connects to IxNetwork |
+| | API server by TCL. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 2 | Test scenarios run, which performs the following steps: |
+| | |
+| | 1. Create access network topologies (these topologies are |
+| | based on IXIA ports which are connected to BNG access |
+| | ports); |
+| | 2. Configure access network topologies with multiple device |
+| | groups. Each device group represents a single SVLAN with |
+| | PPPoE subscriber sessions (the number of SVLANs and |
+| | subscribers created per port depends on the options |
+| | specified in the test case file); |
+| | 3. Create core network topologies (these topologies are |
+| | based on IXIA ports which are connected to BNG core |
+| | ports); |
+| | 4. Configure core network topologies with single device |
+| | group which represents one connection with configured |
+| | VLAN and BGP protocol; |
+| | 5. Establish PPPoE subscribers connections to BNG; |
+| | 6. Create traffic flows between access and core ports |
+| | (traffic flows are created between access-core port |
+| | pairs, traffic is bi-directional); |
+| | 7. Configure each traffic flow with the options specified |
+| | in the traffic profile; |
+| | 8. Run traffic for the duration specified in the test case |
+| | file; |
+| | 9. Collect network metrics after the traffic is stopped; |
+| | 10. In case the drop percentage rate is higher than |
+| | expected, reduce the traffic line rate and repeat steps |
+| | 7-10; |
+| | 11. In case the drop percentage rate is as expected, or the |
+| | maximum number of iterations from step 10 is reached, |
+| | disconnect PPPoE subscribers and stop traffic; |
+| | 12. Stop test. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 3 | During each iteration interval in the test run, all specified|
+| | metrics are retrieved from IxNetwork and stored in the |
+| | yardstick dispatcher. |
+| | |
++--------------+--------------------------------------------------------------+
+|test verdict | The vBNG RFC2544 test case will achieve maximum traffic line |
+| | rate with zero packet loss (or other non-zero allowed |
+| | partial drop rate). |
+| | |
++--------------+--------------------------------------------------------------+
diff --git a/docs/testing/user/userguide/nsb/tc_bng_pppoe_rfc2544_ixia_8ports_1port_congested.rst b/docs/testing/user/userguide/nsb/tc_bng_pppoe_rfc2544_ixia_8ports_1port_congested.rst
new file mode 100644
index 000000000..889ba2410
--- /dev/null
+++ b/docs/testing/user/userguide/nsb/tc_bng_pppoe_rfc2544_ixia_8ports_1port_congested.rst
@@ -0,0 +1,179 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, 2019 Intel Corporation.
+
+***************************************************************
+Yardstick Test Case Description: NSB vBNG RFC2544 QoS TEST CASE
+***************************************************************
+
++-----------------------------------------------------------------------------+
+|NSB vBNG RFC2544 QoS base line test case with link congestion |
+| |
++--------------+--------------------------------------------------------------+
+|test case id | tc_bng_pppoe_rfc2544_ixia_8ports_1port_congested_IMIX |
+| | |
++--------------+--------------------------------------------------------------+
+| metric | Network metrics: |
+| | * TxThroughput |
+| | * RxThroughput |
+| | * TG packets in |
+| | * TG packets out |
+| | * Max Latency |
+| | * Min Latency |
+| | * Average Latency |
+| | * Packets drop percentage |
+| | |
+| | PPPoE subscribers metrics: |
+| | * Sessions up |
+| | * Sessions down |
+| | * Sessions Not Started |
+| | * Sessions Total |
+| | |
+| | NOTE: the same network metrics are collected: |
+| | * summary for all ports |
+| | * per port |
+| | * per priority flows summary on all ports |
+| | |
++--------------+--------------------------------------------------------------+
+|test purpose | This test allows measuring the performance of a BNG |
+| | network device according to the RFC2544 testing methodology. |
+| | The test case creates PPPoE subscriber connections to the |
+| | BNG, runs prioritized traffic causing congestion of an |
+| | access port (port xe0) and collects network and PPPoE |
+| | subscriber metrics. |
+| | |
++--------------+--------------------------------------------------------------+
+|configuration | The BNG QoS RFC2544 test cases are listed below: |
+| | |
+| | * tc_bng_pppoe_rfc2544_ixia_8ports_1port_congested_IMIX.yaml |
+| | |
+| | Number of ports: |
+| | * 8 ports |
+| | |
+| | Test duration: |
+| | * set as 30sec; |
+| | |
+| | Traffic type: |
+| | * IPv4; |
+| | |
+| | Packet sizes: |
+| | * IMIX. The following default IMIX distribution is used: |
+| | |
+| | uplink: 70B - 33%, 940B - 33%, 1470B - 34% |
+| | downlink: 68B - 3%, 932B - 1%, 1470B - 96% |
+| | |
+| | VLAN settings: |
+| | * QinQ on access ports; |
+| | * VLAN on core ports; |
+| | |
+| | Number of PPPoE subscribers: |
+| | * 4000 per access port; |
+| | * 1000 per SVLAN; |
+| | |
+| | Default ToS bits settings: |
+| | * 0 - (000) Routine |
+| | * 4 - (100) Flash Override |
+| | * 7 - (111) Network Control. |
+| | |
+| | The above fields are the main options used for the test case |
+| | and can be configured using cli options at test run or |
+| | directly in the test case yaml file. |
+| | |
+| | NOTE: the only parameter that cannot be changed is the |
+| | number of ports. To run the test with another number of |
+| | ports, the traffic profile should be updated. |
+| | |
++--------------+--------------------------------------------------------------+
+|test tool | IXIA IxNetwork |
+| | |
+| | IXIA IxNetwork is used to emulate PPPoE sessions, generate |
+| | L2-L3 traffic, analyze traffic flows and collect network |
+| | metrics during the test run. |
+| | |
++--------------+--------------------------------------------------------------+
+|applicability | The BNG QoS RFC2544 test case above can be configured with |
+| | different: |
+| | |
+| | * Number of PPPoE subscribers sessions; |
+| | * IP Priority type; |
+| | * Packet size; |
+| | * enable/disable BGP protocol on core ports; |
+| | |
+| | Default values exist. |
+| | |
++--------------+--------------------------------------------------------------+
+|references | RFC2544 |
+| | |
++--------------+--------------------------------------------------------------+
+| pre-test     | 1. BNG is up and running and has the following configured:   |
+| conditions | * access ports with QinQ tagging; |
+| | * core ports with configured IP addresses and VLAN; |
+| | * PPPoE subscribers authorization settings (no auth or |
+| | Radius server, PAP auth protocol); |
+| | * QoS settings; |
+| | |
+|              | 2. IxNetwork API server is running on the TCL port specified |
+|              |    in the pod.yaml file;                                     |
+| | |
+| | 3. BNG ports are connected to IXIA ports (IXIA uplink |
+| | ports are connected to BNG access ports and IXIA |
+|              |    downlink ports are connected to BNG core ports);          |
+| | |
+| | 4. The pod.yaml file contains all necessary information |
+| | (BNG access and core ports settings, core ports IP |
+| | address, NICs, IxNetwork TCL port, IXIA uplink/downlink |
+| | ports, etc). |
+| | |
++--------------+--------------------------------------------------------------+
+|test sequence | description and expected result |
+| | |
++--------------+--------------------------------------------------------------+
+|step 1        | Yardstick resolves the topology and connects to the          |
+|              | IxNetwork API server via TCL.                                |
+| | |
++--------------+--------------------------------------------------------------+
+|step 2        | The test scenario runs, performing the following steps:      |
+| | |
+|              | 1. Create access network topologies (these topologies are    |
+|              |    based on IXIA ports which are connected to BNG access     |
+|              |    ports);                                                   |
+|              | 2. Configure access network topologies with multiple device  |
+|              |    groups. Each device group represents a single SVLAN with  |
+|              |    PPPoE subscriber sessions (the number of SVLANs and       |
+|              |    subscribers created on a port depends on the options      |
+|              |    specified in the test case file);                         |
+|              | 3. Create core network topologies (these topologies are      |
+|              |    based on IXIA ports which are connected to BNG core       |
+|              |    ports);                                                   |
+|              | 4. Configure core network topologies with a single device    |
+|              |    group which represents one connection with configured     |
+|              |    VLAN and BGP protocol;                                    |
+|              | 5. Establish PPPoE subscriber connections to the BNG;        |
+|              | 6. Create traffic flows between access and core ports.       |
+|              |    Since the test covers the access port congestion case,    |
+|              |    flows between ports are created in the following way:     |
+|              |    traffic from two core ports goes to one access port,      |
+|              |    causing port congestion, and traffic from the other two   |
+|              |    core ports is split between the remaining three access    |
+|              |    ports;                                                    |
+|              | 7. Configure each traffic flow with the options specified    |
+|              |    in the traffic profile;                                   |
+|              | 8. Run traffic for the duration specified in the test case   |
+|              |    file;                                                     |
+|              | 9. Collect network metrics after the traffic is stopped;     |
+|              | 10. Measure the drop percentage of different priority        |
+|              |     packets on the congested port. It is expected that all   |
+|              |     high and medium priority packets are forwarded and only  |
+|              |     low priority packets are dropped;                        |
+|              | 11. Disconnect PPPoE subscribers and stop the test.          |
+| | |
++--------------+--------------------------------------------------------------+
+|step 3        | During the test run, at the end of each iteration, all       |
+|              | metrics specified in this document are retrieved from        |
+|              | IxNetwork and stored in the Yardstick dispatcher.            |
+| | |
++--------------+--------------------------------------------------------------+
+|test verdict  | The test case is successful if all high and medium priority  |
+|              | packets on the congested port were forwarded and only low    |
+|              | priority packets were dropped.                               |
+| | |
++--------------+--------------------------------------------------------------+
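+
+The main options can be overridden from the command line at run time. Below is
+a minimal sketch, assuming the test case file exposes a ``duration`` template
+variable (option names must match the variables actually defined in the test
+case yaml file):
+
+.. code-block:: console
+
+   yardstick task start \
+       tc_bng_pppoe_rfc2544_ixia_8ports_1port_congested_IMIX.yaml \
+       --task-args '{"duration": 60}'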
diff --git a/docs/testing/user/userguide/nsb/tc_pktgen_k8s_vcmts.rst b/docs/testing/user/userguide/nsb/tc_pktgen_k8s_vcmts.rst
new file mode 100755
index 000000000..56f5c27ed
--- /dev/null
+++ b/docs/testing/user/userguide/nsb/tc_pktgen_k8s_vcmts.rst
@@ -0,0 +1,102 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, 2019 Viosoft Corporation.
+
+***********************************************
+Yardstick Test Case Description: NSB vCMTS
+***********************************************
+
++------------------------------------------------------------------------------+
+|NSB Pktgen test for vCMTS characterization |
+| |
++--------------+---------------------------------------------------------------+
+|test case id | tc_vcmts_k8s_pktgen |
+| | |
++--------------+---------------------------------------------------------------+
+|metric | * Upstream Processing (Per Service Group); |
+| | * Downstream Processing (Per Service Group); |
+| | * Upstream Throughput; |
+| | * Downstream Throughput; |
+| | * Platform Metrics; |
+| | * Power Consumption; |
+| | * Upstream Throughput Time Series; |
+| | * Downstream Throughput Time Series; |
+| | * System Summary; |
+| | |
++--------------+---------------------------------------------------------------+
+|test purpose  | * The vCMTS test handles the setup of service group and      |
+|              |   packet generation containers, and metrics collection.      |
+|              |                                                               |
+|              | * The vCMTS test case is implemented to run in a Kubernetes  |
+|              |   environment with vCMTS pre-installed.                      |
++--------------+---------------------------------------------------------------+
+|configuration | The vCMTS test case configurable values are listed below:    |
+| | |
+| | * num_sg: Number of service groups (Upstream/Downstream |
+| | container pairs). |
+| | * num_tg: Number of Pktgen containers. |
+| | * vcmtsd_image: vCMTS container image (feat/perf). |
+| | * qat_on: QAT status (true/false). |
+| | |
+|              | The num_sg and num_tg values should be configured in both the|
+|              | test case file and the topology file (see the sketch after   |
+|              | this table).                                                  |
+| | |
++--------------+---------------------------------------------------------------+
+|test tool | Intel vCMTS Reference Dataplane |
+| | Reference implementation of a DPDK-based vCMTS (DOCSIS MAC) |
+| | dataplane in a Kubernetes-orchestrated Linux Container |
+| | environment. |
+| | |
++--------------+---------------------------------------------------------------+
+|applicability | This test case can be configured with different:              |
+| | |
+| | * Number of service groups |
+| | * Number of Pktgen instances |
+| | * QAT offloading |
+| | * Feat/Perf Images for performance or features (more data |
+| | collection) |
+| | |
+| | Default values exist. |
+| | |
++--------------+---------------------------------------------------------------+
+|pre-test      | Intel vCMTS Reference Dataplane should be installed and      |
+|conditions    | runnable on a 2-node Kubernetes environment, with            |
+|              | modifications to the containers to allow Yardstick ssh       |
+|              | access, and with the ConfigMaps from the original vCMTS      |
+|              | package deployed.                                            |
+| | |
++--------------+---------------------------------------------------------------+
+|test sequence | description and expected result |
+| | |
++--------------+---------------------------------------------------------------+
+|step 1 | Yardstick is connected to the Kubernetes Master node using |
+| | the configuration file in /etc/kubernetes/admin.yaml |
+| | |
++--------------+---------------------------------------------------------------+
+|step 2        | The TG containers are created and started on the traffic     |
+|              | generator server (Master node), while the VNF containers are |
+|              | created and started on the data plane server.                |
+| | |
++--------------+---------------------------------------------------------------+
+|step 3        | Yardstick connects to the TG and VNF using ssh to start      |
+|              | vCMTS-d and Pktgen.                                          |
+| | |
++--------------+---------------------------------------------------------------+
+|step 4 | Yardstick connects to the running Pktgen instances to start |
+| | generating traffic using the configurations from: |
+| | /etc/yardstick/pktgen_values.yaml |
+| | |
+| | and connects to the vCMTS-d containers to start the upstream |
+| | and downstream processing using the configurations from: |
+| | /etc/yardstick/vcmtsd_values.yaml |
+| | |
++--------------+---------------------------------------------------------------+
+|step 5 | Yardstick copies vCMTS metrics regularly from the remote |
+| | InfluxDB (deployed by the vCMTS Package) to the local |
+| | Yardstick InfluxDB as configured in the options section in |
+| | the test case file. |
+| | |
++--------------+---------------------------------------------------------------+
+|test verdict  | None. The test case will collect the KPIs and plot them on   |
+|              | Grafana.                                                     |
++--------------+---------------------------------------------------------------+ \ No newline at end of file
diff --git a/docs/testing/user/userguide/nsb/tc_prox_context_vpe_port.rst b/docs/testing/user/userguide/nsb/tc_prox_context_vpe_port.rst
index 6827b0525..3beb5303f 100644
--- a/docs/testing/user/userguide/nsb/tc_prox_context_vpe_port.rst
+++ b/docs/testing/user/userguide/nsb/tc_prox_context_vpe_port.rst
@@ -3,9 +3,9 @@
.. http://creativecommons.org/licenses/by/4.0
.. (c) OPNFV, 2017 Intel Corporation.
-**********************************************
-Yardstick Test Case Description: NSB PROXi VPE
-**********************************************
+*********************************************
+Yardstick Test Case Description: NSB PROX VPE
+*********************************************
+-----------------------------------------------------------------------------+
|NSB PROX test for NFVI characterization |
diff --git a/docs/testing/user/userguide/nsb/tc_vims_context_sipp.rst b/docs/testing/user/userguide/nsb/tc_vims_context_sipp.rst
new file mode 100644
index 000000000..6df4ab880
--- /dev/null
+++ b/docs/testing/user/userguide/nsb/tc_vims_context_sipp.rst
@@ -0,0 +1,96 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) 2019 Viosoft Corporation.
+
+**********************************************
+Yardstick Test Case Description: NSB VIMS
+**********************************************
+
++-----------------------------------------------------------------------------+
+|NSB VIMS test for vIMS characterization |
+| |
++--------------+--------------------------------------------------------------+
+|test case id | tc_vims_{context}_sipp |
+| | |
+| | * context = baremetal or heat; |
+| | |
++--------------+--------------------------------------------------------------+
+|metric | * Successful registrations per second; |
+| | * Total number of active registrations per server; |
+| | * Successful de-registrations per second; |
+| | * Successful session establishments per second; |
+| | * Total number of active sessions per server; |
+| | * Mean session setup time; |
+| | * Successful re-registrations per second; |
+| | |
++--------------+--------------------------------------------------------------+
+|test purpose  | The vIMS test measures the registration rate, call rate,     |
+|              | round trip delay, and message statistics of the vIMS system. |
+| | |
+|              | The vIMS test cases are implemented to run in the baremetal  |
+|              | and heat contexts with the default configuration.            |
+| | |
++--------------+--------------------------------------------------------------+
+|configuration | The vIMS test cases are listed below: |
+| | |
+| | * tc_vims_baremetal_sipp.yaml |
+| | * tc_vims_heat_sipp.yaml |
+| | |
+| | Each test runs one time and collects all the KPIs. |
+| | The configuration of vIMS and SIPp can be changed in each |
+| | test. |
++--------------+--------------------------------------------------------------+
+|test tool | SIPp |
+| | |
+|              | SIPp is an application that can simulate SIP scenarios and   |
+|              | generate RTP traffic; it is used for vIMS characterization.  |
+| | |
++--------------+--------------------------------------------------------------+
+|applicability | The SIPp test cases can be configured with different: |
+| | |
+| | * number of accounts; |
+|              | * calls per second (cps) of the SIP test;                    |
+|              | * the holding time;                                          |
+|              | * RTP configuration;                                         |
+| | |
++--------------+--------------------------------------------------------------+
+|pre-test      | For the OpenStack test case, only vIMS is deployed by an     |
+|conditions    | external heat template; SIPp needs a pod.yaml file with the  |
+|              | necessary system and NIC information (see the sketch after   |
+|              | this table).                                                 |
+|              |                                                              |
+|              | For baremetal test cases, SIPp and vIMS must be installed on |
+|              | the hosts where the test is executed. The pod.yaml file must |
+|              | contain the necessary system and NIC information.            |
+| | |
++--------------+--------------------------------------------------------------+
+|test sequence | description and expected result |
+| | |
++--------------+--------------------------------------------------------------+
+|step 1 | For Baremetal test: The TG and VNF are started on the hosts |
+| | based on the pod file. |
+| | |
+|              | For Heat test: One host VM for vIMS is booted, based on      |
+|              | the test flavor. Another host for SIPp is booted as the      |
+|              | traffic generator, based on the pod.yaml file.               |
+| | |
++--------------+--------------------------------------------------------------+
+|step 2        | Yardstick connects to the vIMS and SIPp via ssh. The test    |
+|              | resolves the topology, instantiates the vIMS and SIPp, and   |
+|              | collects the KPIs/metrics.                                   |
+| | |
++--------------+--------------------------------------------------------------+
+|step 3        | SIPp runs the scenario tests with the parameters configured  |
+|              | in the test case files (tc_vims_baremetal_sipp.yaml and      |
+|              | tc_vims_heat_sipp.yaml). This is done until the KPIs of SIPp |
+|              | are within an acceptable threshold.                          |
+| | |
++--------------+--------------------------------------------------------------+
+|step 4 | In Baremetal test: The test quits the application. |
+| | |
+| | In Heat test: The host VM of vIMS is deleted on test |
+| | completion. |
++--------------+--------------------------------------------------------------+
+|test verdict  | The test case will collect the KPIs and plot them on Grafana.|
++--------------+--------------------------------------------------------------+
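+
+A minimal pod.yaml sketch for the baremetal case (node names, roles,
+addresses and credentials below are placeholders and must be adapted to the
+actual test bed; the full file also carries the NIC information mentioned
+above):
+
+.. code-block:: yaml
+
+   nodes:
+   -
+     name: sipp               # traffic generator host
+     role: TrafficGen
+     ip: 10.10.10.10          # placeholder management address
+     user: root
+     password: r00t
+   -
+     name: vims               # system under test
+     role: vnf
+     ip: 10.10.10.11          # placeholder management address
+     user: root
+     password: r00t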
diff --git a/docs/testing/user/userguide/nsb/tc_vpp_baremetal_crypto_ipsec.rst b/docs/testing/user/userguide/nsb/tc_vpp_baremetal_crypto_ipsec.rst
new file mode 100644
index 000000000..6a4a37697
--- /dev/null
+++ b/docs/testing/user/userguide/nsb/tc_vpp_baremetal_crypto_ipsec.rst
@@ -0,0 +1,113 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, 2019 Viosoft Corporation.
+
+***********************************************
+Yardstick Test Case Description: NSB VPP IPSEC
+***********************************************
+
++------------------------------------------------------------------------------+
+|NSB VPP test for vIPSEC characterization |
+| |
++--------------+---------------------------------------------------------------+
+|test case id | tc_baremetal_rfc2544_ipv4_{crypto_dev}_{crypto_alg} |
+| | |
+| | * crypto_dev = HW_cryptodev or SW_cryptodev; |
+| | * crypto_alg = aes-gcm or cbc-sha1; |
+| | |
++--------------+---------------------------------------------------------------+
+|metric | * Network Throughput NDR or PDR; |
+| | * Connections Per Second (CPS); |
+| | * Latency; |
+| | * Number of tunnels; |
+| | * TG Packets Out; |
+| | * TG Packets In; |
+| | * VNF Packets Out; |
+| | * VNF Packets In; |
+| | * Dropped packets; |
+| | |
++--------------+---------------------------------------------------------------+
+|test purpose | IPv4 IPsec tunnel mode performance test: |
+| | |
+| | * Finds and reports throughput NDR (Non Drop Rate) with zero |
+| | packet loss tolerance or throughput PDR (Partial Drop Rate) |
+| | with non-zero packet loss tolerance (LT) expressed in |
+| | number of packets transmitted. |
+| | |
+|              | * The IPSEC test cases are implemented to run in a baremetal |
+|              |   environment.                                               |
+| | |
++--------------+---------------------------------------------------------------+
+|configuration | The IPSEC test cases are listed below: |
+| | |
+| | * tc_baremetal_rfc2544_ipv4_hw_aesgcm_IMIX_trex.yaml |
+| | * tc_baremetal_rfc2544_ipv4_hw_aesgcm_trex.yaml |
+| | * tc_baremetal_rfc2544_ipv4_hw_cbcsha1_IMIX_trex.yaml |
+| | * tc_baremetal_rfc2544_ipv4_hw_cbcsha1_trex.yaml |
+| | * tc_baremetal_rfc2544_ipv4_sw_aesgcm_IMIX_trex.yaml |
+| | * tc_baremetal_rfc2544_ipv4_sw_aesgcm_trex.yaml |
+| | * tc_baremetal_rfc2544_ipv4_sw_cbcsha1_IMIX_trex.yaml |
+| | * tc_baremetal_rfc2544_ipv4_sw_cbcsha1_trex.yaml |
+| | |
+|              | Test duration is set to 500 sec for each test.               |
+|              | Packet size is set to 64 bytes or higher.                    |
+|              | The number of tunnels is set to 1 or higher.                 |
+|              | The number of connections is set to 1 or higher.             |
+|              | These values can be configured in the test case file or      |
+|              | overridden from the command line (see the example after      |
+|              | this table).                                                 |
+| | |
++--------------+---------------------------------------------------------------+
+|test tool     | Vector Packet Processing (VPP)                                |
+|              |                                                               |
+|              | The VPP platform is an extensible framework that provides    |
+|              | out-of-the-box production quality switch/router              |
+|              | functionality. It offers high performance, proven technology,|
+|              | modularity, flexibility and a rich feature set.              |
+| | |
++--------------+---------------------------------------------------------------+
+|applicability | The VPP IPSEC test cases can be configured with different:   |
+| | |
+| | * packet sizes; |
+| | * test durations; |
+| | * tolerated loss; |
+| | * crypto device type; |
+| | * number of physical cores; |
+| | * number of tunnels; |
+| | * number of connections; |
+| | * encryption algorithms - integrity algorithm; |
+| | |
+| | Default values exist. |
+| | |
++--------------+---------------------------------------------------------------+
+|pre-test      | For baremetal test cases, VPP and DPDK must be installed on  |
+|conditions    | the hosts where the test is executed. The pod.yaml file must |
+|              | contain the necessary system and NIC information.            |
+| | |
++--------------+---------------------------------------------------------------+
+|test sequence | description and expected result |
+| | |
++--------------+---------------------------------------------------------------+
+|step 1 | For Baremetal test: The TG and VNF are started on the hosts |
+| | based on the pod file. |
+| | |
++--------------+---------------------------------------------------------------+
+|step 2        | Yardstick connects to the TG and VNF using ssh. The test     |
+|              | resolves the topology, instantiates the VNF and TG, and      |
+|              | collects the KPIs/metrics.                                   |
+| | |
++--------------+---------------------------------------------------------------+
+|step 3        | Test packets are generated by the TG on links to the DUTs.   |
+|              | If the number of dropped packets is higher than the          |
+|              | tolerated loss, the line rate or throughput is halved. This  |
+|              | is repeated until the dropped packets are within the         |
+|              | tolerated loss.                                              |
+| | |
+| | The KPI is the number of packets per second for a packet size |
+| | specified in the test case with an accepted minimal packet |
+| | loss for the default configuration. |
+| | |
++--------------+---------------------------------------------------------------+
+|step 4        | In Baremetal test: The test quits the application and        |
+|              | unbinds the DPDK ports.                                      |
+| | |
++--------------+---------------------------------------------------------------+
+|test verdict  | The test case will achieve a throughput with an accepted     |
+|              | minimal tolerated packet loss.                               |
++--------------+---------------------------------------------------------------+ \ No newline at end of file