Diffstat (limited to 'docs/testing')
-rw-r--r--  docs/testing/user/userguide/04-installation.rst                  115
-rw-r--r--  docs/testing/user/userguide/12-nsb-overview.rst                    8
-rw-r--r--  docs/testing/user/userguide/13-nsb-installation.rst              375
-rw-r--r--  docs/testing/user/userguide/14-nsb-operation.rst                  73
-rwxr-xr-x [-rw-r--r--]  docs/testing/user/userguide/nsb/nsb-list-of-tcs.rst   3
-rw-r--r--  docs/testing/user/userguide/nsb/tc_bng_pppoe_rfc2544_ixia.rst    177
-rw-r--r--  docs/testing/user/userguide/nsb/tc_bng_pppoe_rfc2544_ixia_8ports_1port_congested.rst  179
-rwxr-xr-x  docs/testing/user/userguide/nsb/tc_pktgen_k8s_vcmts.rst          102
-rw-r--r--  docs/testing/user/userguide/nsb/tc_vims_context_sipp.rst          96
-rw-r--r--  docs/testing/user/userguide/nsb/tc_vpp_baremetal_crypto_ipsec.rst 113
10 files changed, 1129 insertions, 112 deletions
diff --git a/docs/testing/user/userguide/04-installation.rst b/docs/testing/user/userguide/04-installation.rst
index 2dff80ef9..213821798 100644
--- a/docs/testing/user/userguide/04-installation.rst
+++ b/docs/testing/user/userguide/04-installation.rst
@@ -463,112 +463,111 @@ These configuration files can be found in the ``samples`` directory.
Default location for the output is ``/tmp/yardstick.out``.
-Automatic installation of Yardstick using ansible
--------------------------------------------------
+Automatic installation of Yardstick
+-----------------------------------
-Automatic installation can be used as an alternative to the manual.
-Yardstick can be installed on the bare metal and to the container. Yardstick
+Automatic installation can be used as an alternative to manual installation by
+providing parameters for the Ansible script ``install.yaml`` in the
+``nsb_setup.sh`` file. Yardstick can be installed on bare metal or in a container. The Yardstick
container can be either pulled or built.
Bare metal installation
^^^^^^^^^^^^^^^^^^^^^^^
-Use ansible script ``install.yaml`` to install Yardstick on Ubuntu server:
+Modify the ``install.yaml`` parameters in the ``nsb_setup.sh`` file to install
+Yardstick on an Ubuntu server:
.. code-block:: console
ansible-playbook -i install-inventory.ini install.yaml \
+ -e IMAGE_PROPERTY='none' \
-e YARDSTICK_DIR=<path to Yardstick folder>
.. note:: By default ``INSTALLATION_MODE`` is ``baremetal``.
-.. note:: By default Ubuntu 16.04 is chosen (xenial). It can be changed to
- Ubuntu 18.04 (bionic) by passing ``-e OS_RELEASE=bionic`` parameter.
+.. note:: No modification of ``install-inventory.ini`` is needed for Yardstick
+   installation.
.. note:: To install Yardstick in a virtual environment, pass the parameter
``-e VIRTUAL_ENVIRONMENT=True``.
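+
+A minimal sketch of a virtual environment install, combining the parameters
+above (the ``YARDSTICK_DIR`` value is a placeholder):
+
+.. code-block:: console
+
+   ansible-playbook -i install-inventory.ini install.yaml \
+   -e IMAGE_PROPERTY='none' \
+   -e VIRTUAL_ENVIRONMENT=True \
+   -e YARDSTICK_DIR=<path to Yardstick folder>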
-To build Yardstick NSB image pass ``IMG_PROPERTY=nsb`` as input parameter:
-
-.. code-block:: console
-
- ansible-playbook -i install-inventory.ini install.yaml \
- -e IMAGE_PROPERTY=nsb \
- -e YARDSTICK_DIR=<path to Yardstick folder>
-
-.. note:: In this ``INSTALLATION_MODE`` mode either Yardstick image or SampleVNF
- images will be built. Image type is defined by parameter ``IMAGE_PROPERTY``.
- By default Yardstick image will be built.
-
Container installation
^^^^^^^^^^^^^^^^^^^^^^
-Use ansible script ``install.yaml`` to pull or build Yardstick
-container. To pull Yardstick image and start container run:
+Modify the ``install.yaml`` parameters in the ``nsb_setup.sh`` file to pull or
+build the Yardstick container. To pull the Yardstick image and start the
+container, run:
.. code-block:: console
ansible-playbook -i install-inventory.ini install.yaml \
- -e YARDSTICK_DIR=<path to Yardstick folder> \
+ -e IMAGE_PROPERTY='none' \
-e INSTALLATION_MODE=container_pull
-.. note:: In this ``INSTALLATION_MODE`` mode either Yardstick image or SampleVNF
- images will be built. Image type is defined by variable ``IMG_PROPERTY`` in
- file ``ansible/group_vars/all.yml``. By default Yardstick image will be
- built.
-
-.. note:: Open question: How to know if Docker image is built on Ubuntu 16.04 and 18.04?
- Do we need separate tag to be used?
+.. note:: The Yardstick docker image is available for both Ubuntu 16.04 and
+   Ubuntu 18.04. By default the Ubuntu 16.04 based docker image is used. To
+   use the Ubuntu 18.04 based docker image, pass the
+   ``-i opnfv/yardstick-ubuntu-18.04`` parameter to ``nsb_setup.sh``.
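+
+For example, to pull the Ubuntu 18.04 based image:
+
+.. code-block:: console
+
+   ./nsb_setup.sh -i opnfv/yardstick-ubuntu-18.04:latest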
-To build Yardstick image run:
+To build the Yardstick image, modify the Dockerfile as per the comments in it
+and run:
.. code-block:: console
- ansible-playbook -i install-inventory.ini install.yaml \
- -e YARDSTICK_DIR=<path to Yardstick folder> \
- -e INSTALLATION_MODE=container
+ cd yardstick
+ docker build -f docker/Dockerfile -t opnfv/yardstick:<tag> .
-.. note:: In this ``INSTALLATION_MODE`` mode neither Yardstick image nor SampleVNF
- image will be built.
+.. note:: By default a Yardstick docker image based on Ubuntu 16.04 will be
+   built. Pass ``-f docker/Dockerfile_ubuntu18`` to build a Yardstick docker
+   image based on Ubuntu 18.04.
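+
+A sketch of the Ubuntu 18.04 based build (the tag is a placeholder):
+
+.. code-block:: console
+
+   cd yardstick
+   docker build -f docker/Dockerfile_ubuntu18 -t opnfv/yardstick-ubuntu-18.04:<tag> .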
-.. note:: By default Ubuntu 16.04 is chosen (xenial). It can be changed to
- Ubuntu 18.04 (bionic) by passing ``-e OS_RELEASE=bionic`` parameter.
+.. note:: Add ``--build-arg http_proxy=http://<proxy_host>:<proxy_port>`` to
+   the build command if the server is behind a proxy.
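+
+For example, a build behind a proxy might look like (proxy host and port are
+placeholders):
+
+.. code-block:: console
+
+   docker build --build-arg http_proxy=http://<proxy_host>:<proxy_port> \
+       -f docker/Dockerfile -t opnfv/yardstick:<tag> .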
Parameters for ``install.yaml``
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-Description of the parameters used with ``install.yaml`` script
+Description of the parameters used with ``install.yaml``:
+-------------------------+-------------------------------------------------+
| Parameters | Detail |
+=========================+=================================================+
- | -i install-inventory.ini| Installs package dependency to remote servers |
- | | Mandatory parameter |
- | | By default no remote servers are provided |
- | | Needed packages will be installed on localhost |
+ | -i install-inventory.ini|| Installs package dependencies on remote servers |
+ | || and localhost |
+ | || Mandatory parameter |
+ | || By default no remote servers are provided |
+-------------------------+-------------------------------------------------+
- | -e YARDSTICK_DIR | Path to Yardstick folder |
- | | Mandatory parameter |
+ | -e YARDSTICK_DIR || Path to Yardstick folder |
+ | || Mandatory parameter for Yardstick bare metal |
+ | || installation |
+-------------------------+-------------------------------------------------+
- | -e INSTALLATION_MODE | baremetal: Yardstick is installed to the bare |
- | | metal |
- | | Default parameter |
+ | -e INSTALLATION_MODE || baremetal: Yardstick is installed to the bare |
+ | | metal |
+ | || Default parameter |
| +-------------------------------------------------+
- | | container: Yardstick is installed in container |
- | | Container is built from Dockerfile |
+ | || container: Yardstick is installed in container |
+ | || Container is built from Dockerfile |
| +-------------------------------------------------+
- | | container_pull: Yardstick is installed in |
- | | container |
- | | Container is pulled from docker hub |
+ | || container_pull: Yardstick is installed in |
+ | || container |
+ | || Container is pulled from docker hub |
+-------------------------+-------------------------------------------------+
- | -e OS_RELEASE | xenial or bionic: Ubuntu version to be used |
- | | Default is Ubuntu 16.04 (xenial) |
+ | -e OS_RELEASE || xenial or bionic: Ubuntu version to be used for|
+ | || VM image (nsb or normal) |
+ | || Default is Ubuntu 16.04, xenial |
+ +-------------------------+-------------------------------------------------+
+ | -e IMAGE_PROPERTY || nsb: Build Yardstick NSB VM image |
+ | || Used to run Yardstick NSB tests on sample VNF |
+ | || Default parameter |
+ | +-------------------------------------------------+
+ | || normal: Build VM image to run ping test in |
+ | || OpenStack |
+ | +-------------------------------------------------+
+ | || none: don't build a VM image. |
+-------------------------+-------------------------------------------------+
- | -e IMAGE_PROPERTY | normal or nsb: Type of the VM image to be built |
- | | Default image is Yardstick |
+ | -e VIRTUAL_ENVIRONMENT || False or True: Whether to install in virtualenv |
+ | || Default is False |
+-------------------------+-------------------------------------------------+
- | -e VIRTUAL_ENVIRONMENT | False or True: Whether install in virtualenv |
- | | Default is False |
+ | -e YARD_IMAGE_ARCH || CPU architecture on servers |
+ | || Default is 'amd64' |
+-------------------------+-------------------------------------------------+
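+
+For example, several of the above parameters can be combined as follows (the
+values shown are placeholders based on the table above):
+
+.. code-block:: console
+
+   ansible-playbook -i install-inventory.ini install.yaml \
+       -e INSTALLATION_MODE=container_pull \
+       -e IMAGE_PROPERTY='nsb' \
+       -e OS_RELEASE='bionic' \
+       -e YARD_IMAGE_ARCH='amd64'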
diff --git a/docs/testing/user/userguide/12-nsb-overview.rst b/docs/testing/user/userguide/12-nsb-overview.rst
index 70aba1e37..012ea6c3d 100644
--- a/docs/testing/user/userguide/12-nsb-overview.rst
+++ b/docs/testing/user/userguide/12-nsb-overview.rst
@@ -65,6 +65,14 @@ NSB extension includes:
* L4-L7 state-full traffic profiles
* Tunneling protocol/network overlay support
+* Scenarios that handle NSB test case execution
+
+  * NSPerf - scenario that handles generic NSB test case execution
+    (setup and init tg/vnf, trigger traffic on tg, collect kpi)
+  * NSPerf-RFC2544 - scenario that allows repeatable triggering of traffic on
+    traffic generators until the test case acceptance criteria are met
+    (for example RFC2544 binary search)
+
* Test case samples
* Ping
diff --git a/docs/testing/user/userguide/13-nsb-installation.rst b/docs/testing/user/userguide/13-nsb-installation.rst
index 80e42685f..3a06be648 100644
--- a/docs/testing/user/userguide/13-nsb-installation.rst
+++ b/docs/testing/user/userguide/13-nsb-installation.rst
@@ -21,6 +21,7 @@ NSB Installation
.. _OVS-DPDK: http://docs.openvswitch.org/en/latest/intro/install/dpdk/
.. _devstack: https://docs.openstack.org/devstack/pike/>
+.. _OVS-DPDK-versions: http://docs.openvswitch.org/en/latest/faq/releases/
Abstract
--------
@@ -28,7 +29,7 @@ Abstract
The steps needed to run Yardstick with NSB testing are:
* Install Yardstick (NSB Testing).
-* Setup/reference ``pod.yaml`` describing Test topology
+* Setup/reference ``pod.yaml`` describing Test topology.
* Create/reference the test configuration yaml file.
* Run the test case.
@@ -89,21 +90,25 @@ Boot and BIOS settings:
Install Yardstick (NSB Testing)
-------------------------------
-Download the source code and check out the latest stable branch::
+Yardstick with NSB can be installed using ``nsb_setup.sh``.
+The ``nsb_setup.sh`` script allows you to:
-.. code-block:: console
-
- git clone https://gerrit.opnfv.org/gerrit/yardstick
- cd yardstick
- # Switch to latest stable branch
- git checkout stable/gambia
+1. Install Yardstick in the specified mode: bare metal or container.
+   Refer to :doc:`04-installation`.
+2. Install package dependencies on the remote servers used as traffic
+   generators or sample VNFs. Install DPDK, sample VNFs, TREX and collectd.
+   Add such servers to the ``install-inventory.ini`` file, to either the
+   ``yardstick-standalone`` or the ``yardstick-baremetal`` server group.
+   This step also configures IOMMU, hugepages, open file limits, CPU
+   isolation, etc.
+3. Build a VM image, either nsb or normal. The nsb VM image is used to run
+   Yardstick sample VNF tests, like vFW, vACL, vCGNAPT, etc.
+   The normal VM image is used to run Yardstick ping tests in an OpenStack
+   context.
+4. Add the nsb or normal VM image to OpenStack together with OpenStack
+   variables.
-Configure the network proxy, either using the environment variables or setting
-the global environment file.
+Firstly, configure the network proxy, either using the environment variables or
+setting the global environment file.
-* Set environment
-
-.. code-block::
+Set environment::
http_proxy='http://proxy.company.com:port'
https_proxy='http://proxy.company.com:port'
@@ -113,55 +118,187 @@ the global environment file.
export http_proxy='http://proxy.company.com:port'
export https_proxy='http://proxy.company.com:port'
-Modify the Yardstick installation inventory, used by Ansible::
+Download the source code and check out the latest stable branch
+
+.. code-block:: console
+
+ git clone https://gerrit.opnfv.org/gerrit/yardstick
+ cd yardstick
+ # Switch to latest stable branch
+ git checkout stable/gambia
+
+Modify the Yardstick installation inventory used by Ansible::
cat ./ansible/install-inventory.ini
[jumphost]
localhost ansible_connection=local
- [yardstick-standalone]
- yardstick-standalone-node ansible_host=192.168.1.2
- yardstick-standalone-node-2 ansible_host=192.168.1.3
-
   # section below is only due to backward compatibility.
# it will be removed later
[yardstick:children]
jumphost
+ [yardstick-baremetal]
+ baremetal ansible_host=192.168.2.51 ansible_connection=ssh
+
+ [yardstick-standalone]
+ standalone ansible_host=192.168.2.52 ansible_connection=ssh
+
[all:vars]
- ansible_user=root
- ansible_pass=root
+   # Update the credentials below if needed
+ ansible_user=root
+ ansible_ssh_pass=root
+ # ansible_ssh_private_key_file=/root/.ssh/id_rsa
+   # When IMAGE_PROPERTY is set to neither normal nor nsb, set
+   # "path_to_img=/path/to/image" to add an existing image to OpenStack
+   # path_to_img=/tmp/workspace/yardstick-image.img
+
+ # List of CPUs to be isolated (not used by default)
+ # Grub line will be extended with:
+ # "isolcpus=<ISOL_CPUS> nohz=on nohz_full=<ISOL_CPUS> rcu_nocbs=1<ISOL_CPUS>"
+ # ISOL_CPUS=2-27,30-55 # physical cpu's for all NUMA nodes, four cpu's reserved
+
+.. warning::
+
+   Before running ``nsb_setup.sh`` make sure Python is installed on the
+   servers added to the ``yardstick-standalone`` or ``yardstick-baremetal``
+   groups.
.. note::
SSH access without password needs to be configured for all your nodes
- defined in ``yardstick-install-inventory.ini`` file.
+ defined in ``install-inventory.ini`` file.
If you want to use password authentication you need to install ``sshpass``::
sudo -EH apt-get install sshpass
-To execute an installation for a BareMetal or a Standalone context::
- ./nsb_setup.sh
+.. note::
+
+   A VM image built by means other than Yardstick can be added to OpenStack.
+   Uncomment and set the correct path to the VM image in the
+   ``install-inventory.ini`` file::
+
+ path_to_img=/tmp/workspace/yardstick-image.img
+
+
+.. note::
+
+   CPU isolation can be applied to the remote servers, like:
+   ISOL_CPUS=2-27,30-55. Uncomment and modify it accordingly in the
+   ``install-inventory.ini`` file.
+
+By default ``nsb_setup.sh`` pulls the Ubuntu 16.04 based Yardstick image from
+docker hub and starts the container, builds the Ubuntu 16.04 based NSB VM
+image, and installs packages on the servers given in the
+``yardstick-standalone`` and ``yardstick-baremetal`` host groups.
+To pull a Yardstick image based on Ubuntu 18.04, run::
-To execute an installation for an OpenStack context::
+ ./nsb_setup.sh -i opnfv/yardstick-ubuntu-18.04:latest
+
+To change the default behavior, modify the parameters for ``install.yaml`` in
+the ``nsb_setup.sh`` file.
+
+Refer to chapter :doc:`04-installation` for more details on ``install.yaml``
+parameters.
+
+To execute an installation for a **BareMetal** or a **Standalone context**::
+
+ ./nsb_setup.sh
+
+To execute an installation for an **OpenStack** context::
./nsb_setup.sh <path to admin-openrc.sh>
+.. note::
+
+   Yardstick may not be operational after a distribution Linux kernel update
+   if it was installed before the update. Run ``nsb_setup.sh`` again to
+   resolve this.
+
+.. warning::
+
+ The Yardstick VM image (NSB or normal) cannot be built inside a VM.
+
+.. warning::
+
+   The ``nsb_setup.sh`` script configures hugepages, CPU isolation and IOMMU
+   in the grub configuration. A reboot of the servers in the
+   ``yardstick-standalone`` or ``yardstick-baremetal`` groups of the
+   ``install-inventory.ini`` file is required to apply those changes.
+
The above commands will set up Docker with the latest Yardstick code. To
execute::
docker exec -it yardstick bash
+.. note::
+
+   It may be necessary to configure the tty in the docker container to extend
+   the command line character length, for example::
+
+      stty rows 58 cols 234
+
It will also automatically download all the packages needed for NSB Testing
-setup. Refer chapter :doc:`04-installation` for more on Docker
+setup. Refer to chapter :doc:`04-installation` for more on Docker.
**Install Yardstick using Docker (recommended)**
-Another way to execute an installation for a Bare-Metal or a Standalone context
-is to use ansible script ``install.yaml``. Refer chapter :doc:`04-installation`
-for more details.
+Bare Metal context example
+^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Let's assume there are three servers acting as TG, sample VNF DUT and jump host.
+
+Perform the following steps to install NSB:
+
+1. Clone the Yardstick repo to the jump host.
+2. Add the TG and DUT servers to the ``yardstick-baremetal`` group in the
+   ``install-inventory.ini`` file to install NSB and its dependencies. Install
+   Python on the servers.
+3. Start the deployment using the docker image based on Ubuntu 16.04:
+
+.. code-block:: console
+
+ ./nsb_setup.sh
+
+4. Reboot the bare metal servers.
+5. Enter the yardstick container, modify the pod yaml file and run tests, as
+   sketched below.
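+
+A minimal sketch of step 5 (the exact test case yaml is a placeholder; any
+NSB sample test case can be used):
+
+.. code-block:: console
+
+   docker exec -it yardstick bash
+   yardstick task start samples/vnf_samples/nsut/<vnf>/<test case yaml>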
+
+Standalone context example for Ubuntu 18
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Let's assume there are three servers acting as TG, sample VNF DUT and jump host.
+Ubuntu 18.04 is installed on all servers.
+
+Perform the following steps to install NSB:
+
+1. Clone the Yardstick repo to the jump host.
+2. Add the TG server to the ``yardstick-baremetal`` group in the
+   ``install-inventory.ini`` file to install NSB and its dependencies.
+   Add the server where the VM with the sample VNF will be deployed to the
+   ``yardstick-standalone`` group in the ``install-inventory.ini`` file.
+   The target VM image named ``yardstick-nsb-image.img`` will be placed in
+   ``/var/lib/libvirt/images/``.
+   Install Python on the servers.
+3. Modify ``nsb_setup.sh`` on the jump host:
+
+.. code-block:: console
+
+ ansible-playbook \
+ -e IMAGE_PROPERTY='nsb' \
+ -e OS_RELEASE='bionic' \
+ -e INSTALLATION_MODE='container_pull' \
+ -e YARD_IMAGE_ARCH='amd64' ${extra_args} \
+ -i install-inventory.ini install.yaml
+
+4. Start the deployment with the Yardstick docker image based on Ubuntu 18.04:
+
+.. code-block:: console
+
+ ./nsb_setup.sh -i opnfv/yardstick-ubuntu-18.04:latest -o <openrc_file>
+
+5. Reboot the servers.
+6. Enter the yardstick container, modify the pod yaml file and run tests. The
+   VM image placement from step 2 can be verified as sketched below.
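+
+A sketch of the image check on the standalone server (path and file name are
+the defaults mentioned in step 2):
+
+.. code-block:: console
+
+   ls -l /var/lib/libvirt/images/yardstick-nsb-image.img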
+
System Topology
---------------
@@ -175,7 +312,7 @@ System Topology
| | | |
| | (1)<-----(1) | |
+----------+ +----------+
- trafficgen_1 vnf
+ trafficgen_0 vnf
Environment parameters and credentials
@@ -185,7 +322,7 @@ Configure yardstick.conf
^^^^^^^^^^^^^^^^^^^^^^^^
If you did not run ``yardstick env influxdb`` inside the container to generate
- ``yardstick.conf``, then create the config file manually (run inside the
+``yardstick.conf``, then create the config file manually (run inside the
container)::
cp ./etc/yardstick/yardstick.conf.sample /etc/yardstick/yardstick.conf
@@ -251,7 +388,7 @@ Bare-Metal 2-Node setup
| | | |
| | (n)<-----(n) | |
+----------+ +----------+
- trafficgen_1 vnf
+ trafficgen_0 vnf
Bare-Metal 3-Node setup - Correlated Traffic
++++++++++++++++++++++++++++++++++++++++++++
@@ -265,7 +402,7 @@ Bare-Metal 3-Node setup - Correlated Traffic
| | | | | |
| | | |(1)<---->(0)| |
+----------+ +----------+ +------------+
- trafficgen_1 vnf trafficgen_2
+ trafficgen_0 vnf trafficgen_1
Bare-Metal Config pod.yaml
@@ -279,7 +416,7 @@ topology and update all the required fields.::
nodes:
-
- name: trafficgen_1
+ name: trafficgen_0
role: TrafficGen
ip: 1.1.1.1
user: root
@@ -388,7 +525,7 @@ On Host, where VM is created:
.. code-block:: YAML
servers:
- vnf:
+ vnf_0:
network_ports:
mgmt:
cidr: '1.1.1.7/24'
@@ -446,7 +583,7 @@ SR-IOV 2-Node setup
| | (n)<----->(n) | ----------------- |
| | | |
+----------+ +-------------------------+
- trafficgen_1 host
+ trafficgen_0 host
@@ -474,7 +611,7 @@ SR-IOV 3-Node setup - Correlated Traffic
| | | | | | |
| | (n)<----->(n) | -----| (n)<-->(n) | |
+----------+ +---------------------+ +--------------+
- trafficgen_1 host trafficgen_2
+ trafficgen_0 host trafficgen_1
Before executing Yardstick test cases, make sure that ``pod.yaml`` reflects the
topology and update all the required fields.
@@ -493,7 +630,7 @@ SR-IOV Config pod_trex.yaml
nodes:
-
- name: trafficgen_1
+ name: trafficgen_0
role: TrafficGen
ip: 1.1.1.1
user: root
@@ -554,7 +691,7 @@ Update contexts section
user: "" # update VM username
password: "" # update password
servers:
- vnf:
+ vnf_0:
network_ports:
mgmt:
cidr: '1.1.1.61/24' # Update VM IP address, if static, <ip>/<mask> or if dynamic, <start of ip>/<mask>
@@ -619,7 +756,7 @@ On Host, where VM is created:
.. code-block:: YAML
servers:
- vnf:
+ vnf_0:
network_ports:
mgmt:
cidr: '1.1.1.7/24'
@@ -683,7 +820,7 @@ OVS-DPDK 2-Node setup
| | | (ovs-dpdk) | |
| | (n)<----->(n) |------------------ |
+----------+ +-------------------------+
- trafficgen_1 host
+ trafficgen_0 host
OVS-DPDK 3-Node setup - Correlated Traffic
@@ -713,7 +850,7 @@ OVS-DPDK 3-Node setup - Correlated Traffic
| | | (ovs-dpdk) | | | |
| | (n)<----->(n) | ------ |(n)<-->(n)| |
+----------+ +-------------------------+ +------------+
- trafficgen_1 host trafficgen_2
+ trafficgen_0 host trafficgen_1
Before executing Yardstick test cases, make sure that the ``pod.yaml`` reflects
@@ -731,7 +868,7 @@ OVS-DPDK Config pod_trex.yaml
nodes:
-
- name: trafficgen_1
+ name: trafficgen_0
role: TrafficGen
ip: 1.1.1.1
user: root
@@ -802,7 +939,7 @@ Update contexts section
user: "" # update VM username
password: "" # update password
servers:
- vnf:
+ vnf_0:
network_ports:
mgmt:
cidr: '1.1.1.61/24' # Update VM IP address, if static, <ip>/<mask> or if dynamic, <start of ip>/<mask>
@@ -822,6 +959,144 @@ Update contexts section
cidr: '152.16.40.10/24'
gateway_ip: '152.16.100.20'
+OVS-DPDK configuration options
+++++++++++++++++++++++++++++++
+
+There are a number of configuration options available for the OVS-DPDK context
+in a test case. They are mostly used for performance tuning.
+
+OVS-DPDK properties:
+''''''''''''''''''''
+
+OVS-DPDK properties example under *ovs_properties* section:
+
+ .. code-block:: console
+
+ ovs_properties:
+ version:
+ ovs: 2.8.1
+ dpdk: 17.05.2
+ pmd_threads: 4
+ pmd_cpu_mask: "0x3c"
+ ram:
+ socket_0: 2048
+ socket_1: 2048
+ queues: 2
+ vpath: "/usr/local"
+ max_idle: 30000
+ lcore_mask: 0x02
+ dpdk_pmd-rxq-affinity:
+ 0: "0:2,1:2"
+ 1: "0:2,1:2"
+ 2: "0:3,1:3"
+ 3: "0:3,1:3"
+ vhost_pmd-rxq-affinity:
+ 0: "0:3,1:3"
+ 1: "0:3,1:3"
+ 2: "0:4,1:4"
+ 3: "0:4,1:4"
+
+OVS-DPDK properties description:
+
+ +-------------------------+-------------------------------------------------+
+ | Parameters | Detail |
+ +=========================+=================================================+
+ | version || Version of OVS and DPDK to be installed |
+ | || There is a relation between OVS and DPDK |
+ | | version which can be found at |
+ | | `OVS-DPDK-versions`_ |
+ | || By default OVS: 2.6.0, DPDK: 16.07.2 |
+ +-------------------------+-------------------------------------------------+
+ | lcore_mask || Core bitmask used during DPDK initialization |
+ | | where the non-datapath OVS-DPDK threads such |
+ | | as handler and revalidator threads run |
+ +-------------------------+-------------------------------------------------+
+ | pmd_cpu_mask || Core bitmask that sets which cores are used by |
+ | || OVS-DPDK for datapath packet processing |
+ +-------------------------+-------------------------------------------------+
+ | pmd_threads || Number of PMD threads used by OVS-DPDK for |
+ | | datapath |
+ | || This core mask is evaluated in Yardstick |
+ | || It will be used if pmd_cpu_mask is not given |
+ | || Default is 2 |
+ +-------------------------+-------------------------------------------------+
+ | ram || Amount of RAM to be used for each socket, MB |
+ | || Default is 2048 MB |
+ +-------------------------+-------------------------------------------------+
+ | queues || Number of RX queues used for DPDK physical |
+ | | interface |
+ +-------------------------+-------------------------------------------------+
+ | dpdk_pmd-rxq-affinity || RX queue assignment to PMD threads for DPDK |
+ | || e.g.: <port number> : <queue-id>:<core-id> |
+ +-------------------------+-------------------------------------------------+
+ | vhost_pmd-rxq-affinity || RX queue assignment to PMD threads for vhost |
+ | || e.g.: <port number> : <queue-id>:<core-id> |
+ +-------------------------+-------------------------------------------------+
+ | vpath || User path for openvswitch files |
+ | || Default is ``/usr/local`` |
+ +-------------------------+-------------------------------------------------+
+ | max_idle || The maximum time that idle flows will remain |
+ | | cached in the datapath, ms |
+ +-------------------------+-------------------------------------------------+
+
+
+VM image properties
+'''''''''''''''''''
+
+VM image properties example under *flavor* section:
+
+ .. code-block:: console
+
+ flavor:
+ images: <path>
+ ram: 8192
+ extra_specs:
+ machine_type: 'pc-i440fx-xenial'
+ hw:cpu_sockets: 1
+ hw:cpu_cores: 6
+ hw:cpu_threads: 2
+ hw_socket: 0
+ cputune: |
+ <cputune>
+ <vcpupin vcpu="0" cpuset="7"/>
+ <vcpupin vcpu="1" cpuset="8"/>
+ ...
+ <vcpupin vcpu="11" cpuset="18"/>
+ <emulatorpin cpuset="11"/>
+ </cputune>
+
+VM image properties description:
+
+ +-------------------------+-------------------------------------------------+
+ | Parameters | Detail |
+ +=========================+=================================================+
+ | images || Path to the VM image generated by |
+ | | ``nsb_setup.sh`` |
+ | || Default path is ``/var/lib/libvirt/images/`` |
+ | || Default file name ``yardstick-nsb-image.img`` |
+ | | or ``yardstick-image.img`` |
+ +-------------------------+-------------------------------------------------+
+ | ram || Amount of RAM to be used for VM |
+ | || Default is 4096 MB |
+ +-------------------------+-------------------------------------------------+
+ | hw:cpu_sockets || Number of sockets provided to the guest VM |
+ | || Default is 1 |
+ +-------------------------+-------------------------------------------------+
+ | hw:cpu_cores || Number of cores provided to the guest VM |
+ | || Default is 2 |
+ +-------------------------+-------------------------------------------------+
+ | hw:cpu_threads || Number of threads provided to the guest VM |
+ | || Default is 2 |
+ +-------------------------+-------------------------------------------------+
+ | hw_socket || Generate vcpu cpuset from given HW socket |
+ | || Default is 0 |
+ +-------------------------+-------------------------------------------------+
+ | cputune || Maps virtual cpu with logical cpu |
+ +-------------------------+-------------------------------------------------+
+ | machine_type || Machine type to be emulated in VM |
+ | || Default is 'pc-i440fx-xenial' |
+ +-------------------------+-------------------------------------------------+
+
OpenStack with SR-IOV support
-----------------------------
@@ -859,7 +1134,7 @@ Single node OpenStack with external TG
| | (PF1)<----->(PF1) +--------------------+ |
| | | |
+----------+ +----------------------------+
- trafficgen_1 host
+ trafficgen_0 host
Host pre-configuration
@@ -1011,7 +1286,7 @@ context using steps described in `NS testing - using yardstick CLI`_ section.
Multi node OpenStack TG and VNF setup (two nodes)
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. code-block:: console
@@ -1022,7 +1297,7 @@ Multi node OpenStack TG and VNF setup (two nodes)
| |sample-VNF VM | | | |sample-VNF VM | |
| | | | | | | |
| | TG | | | | DUT | |
- | | trafficgen_1 | | | | (VNF) | |
+ | | trafficgen_0 | | | | (VNF) | |
| | | | | | | |
| +--------+ +--------+ | | +--------+ +--------+ |
| | VF NIC | | VF NIC | | | | VF NIC | | VF NIC | |
@@ -1094,7 +1369,7 @@ Enabling other Traffic generators
---------------------------------
IxLoad
-~~~~~~
+^^^^^^
1. Software needed: IxLoadAPI ``<IxLoadTclApi version>Linux64.bin.tgz`` and
``<IxOS version>Linux64.bin.tar.gz`` (Download from ixia support site)
@@ -1197,9 +1472,9 @@ to be preinstalled and properly configured.
``PYTHONPATH`` environment variable.
.. important::
- The current version of LsApi module has an issue with reading LD_LIBRARY_PATH.
- For LsApi module to initialize correctly following lines (184-186) in
- lsapi.py
+ The current version of LsApi module has an issue with reading LD_LIBRARY_PATH.
+ For LsApi module to initialize correctly following lines (184-186) in
+ lsapi.py
.. code-block:: python
diff --git a/docs/testing/user/userguide/14-nsb-operation.rst b/docs/testing/user/userguide/14-nsb-operation.rst
index 12e269187..8d9a1108a 100644
--- a/docs/testing/user/userguide/14-nsb-operation.rst
+++ b/docs/testing/user/userguide/14-nsb-operation.rst
@@ -136,7 +136,7 @@ case, please follow the instructions below.
image: yardstick-samplevnfs
...
servers:
- vnf__0:
+ vnf_0:
...
availability_zone: <AZ_NAME>
...
@@ -332,8 +332,8 @@ Baremetal
traffic_profile: ../../traffic_profiles/ipv4_throughput.yaml
topology: vfw-tg-topology.yaml
nodes:
- tg__0: trafficgen_1.yardstick
- vnf__0: vnf.yardstick
+ tg__0: trafficgen_0.yardstick
+ vnf__0: vnf_0.yardstick
options:
framesize:
uplink: {64B: 100}
@@ -427,7 +427,7 @@ options section.
scenarios:
- type: NSPerf
nodes:
- tg__0: tg_0.yardstick
+ tg__0: trafficgen_0.yardstick
options:
tg_0:
@@ -640,3 +640,68 @@ A testcase can be started with the following command as an example:
.. code-block:: bash
yardstick task start /yardstick/samples/vnf_samples/nsut/vpe/tc_baremetal_rfc2544_ipv4_1flow_64B_ixia.yaml
+
+Preparing test run of vIPSEC test case
+--------------------------------------
+
+Location of vIPSEC test cases: ``samples/vnf_samples/nsut/ipsec/``.
+
+Before running a specific vIPSEC test case using NSB, some dependencies have
+to be preinstalled and properly configured.
+
+- VPP
+
+.. code-block:: console
+
+ export UBUNTU="xenial"
+ export RELEASE=".stable.1810"
+ sudo rm /etc/apt/sources.list.d/99fd.io.list
+ echo "deb [trusted=yes] https://nexus.fd.io/content/repositories/fd.io$RELEASE.ubuntu.$UBUNTU.main/ ./" | sudo tee -a /etc/apt/sources.list.d/99fd.io.list
+ sudo apt-get update
+ sudo apt-get install vpp vpp-lib vpp-plugin vpp-dbg vpp-dev vpp-api-java vpp-api-python vpp-api-lua
+
+- VAT templates
+
+  VAT templates are required for the VPP API.
+
+.. code-block:: console
+
+ mkdir -p /opt/nsb_bin/vpp/templates/
+ echo 'exec trace add dpdk-input 50' > /opt/nsb_bin/vpp/templates/enable_dpdk_traces.vat
+ echo 'exec trace add vhost-user-input 50' > /opt/nsb_bin/vpp/templates/enable_vhost_user_traces.vat
+ echo 'exec trace add memif-input 50' > /opt/nsb_bin/vpp/templates/enable_memif_traces.vat
+ cat > /opt/nsb_bin/vpp/templates/dump_interfaces.vat << EOL
+ sw_interface_dump
+ dump_interface_table
+ quit
+ EOL
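+
+With the dependencies in place, a vIPSEC test case can be started from the
+location given above, for example:
+
+.. code-block:: console
+
+   yardstick task start samples/vnf_samples/nsut/ipsec/tc_baremetal_rfc2544_ipv4_hw_aesgcm_trex.yaml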
+
+
+Preparing test run of vCMTS test case
+-------------------------------------
+
+Location of vCMTS test cases: ``samples/vnf_samples/nsut/cmts/``.
+
+Before running a specific vCMTS test case using NSB, some changes must be
+made to the original vCMTS package.
+
+Allow SSH access to the docker images
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+Follow the documentation at ``https://docs.docker.com/engine/examples/running_ssh_service/``
+to allow SSH access to the Pktgen/vcmts-d containers located at:
+
+* ``$VCMTS_ROOT/pktgen/docker/docker-image-pktgen/Dockerfile`` and
+* ``$VCMTS_ROOT/vcmtsd/docker/docker-image-vcmtsd/Dockerfile``
+
+
+Deploy the ConfigMaps for Pktgen and vCMTSd
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+.. code-block:: bash
+
+ cd $VCMTS_ROOT/kubernetes/helm/pktgen
+ helm template . -x templates/pktgen-configmap.yaml > configmap.yaml
+ kubectl create -f configmap.yaml
+
+ cd $VCMTS_ROOT/kubernetes/helm/vcmtsd
+ helm template . -x templates/vcmts-configmap.yaml > configmap.yaml
+ kubectl create -f configmap.yaml
+
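+With the ConfigMaps deployed, a vCMTS test case can be started; the exact
+test case yaml under ``samples/vnf_samples/nsut/cmts/`` is left as a
+placeholder here:
+
+.. code-block:: bash
+
+   yardstick task start samples/vnf_samples/nsut/cmts/<test case yaml>
+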
diff --git a/docs/testing/user/userguide/nsb/nsb-list-of-tcs.rst b/docs/testing/user/userguide/nsb/nsb-list-of-tcs.rst
index 6c18c7d89..562c80ff7 100644..100755
--- a/docs/testing/user/userguide/nsb/nsb-list-of-tcs.rst
+++ b/docs/testing/user/userguide/nsb/nsb-list-of-tcs.rst
@@ -36,3 +36,6 @@ NSB PROX Test Case Descriptions
tc_vfw_rfc2544
tc_vfw_rfc2544_correlated
tc_vfw_rfc3511
+ tc_vpp_baremetal_crypto_ipsec
+ tc_vims_context_sipp
+ tc_pktgen_k8s_vcmts
diff --git a/docs/testing/user/userguide/nsb/tc_bng_pppoe_rfc2544_ixia.rst b/docs/testing/user/userguide/nsb/tc_bng_pppoe_rfc2544_ixia.rst
new file mode 100644
index 000000000..ffe4f6c19
--- /dev/null
+++ b/docs/testing/user/userguide/nsb/tc_bng_pppoe_rfc2544_ixia.rst
@@ -0,0 +1,177 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, 2019 Intel Corporation.
+
+***************************************************************
+Yardstick Test Case Description: NSB vBNG RFC2544 QoS TEST CASE
+***************************************************************
+
++-----------------------------------------------------------------------------+
+|NSB vBNG RFC2544 QoS base line test case without link congestion |
+| |
++--------------+--------------------------------------------------------------+
+|test case id | tc_bng_pppoe_rfc2544_ixia_IMIX_scale_up |
+| | |
++--------------+--------------------------------------------------------------+
+| metric | Network metrics: |
+| | * TxThroughput |
+| | * RxThroughput |
+| | * TG packets in |
+| | * TG packets out |
+| | * Max Latency |
+| | * Min Latency |
+| | * Average Latency |
+| | * Packets drop percentage |
+| | |
+| | PPPoE subscribers metrics: |
+| | * Sessions up |
+| | * Sessions down |
+| | * Sessions Not Started |
+| | * Sessions Total |
+| | |
+| | NOTE: the same network metrics list is collected: |
+| | * summary for all ports |
+| | * per port |
+| | * per priority flows summary on all ports |
+| | |
++--------------+--------------------------------------------------------------+
+|test purpose | This test measures the performance of a BNG network device |
+| | according to the RFC2544 testing methodology. The test case |
+| | creates PPPoE subscriber connections to the BNG, runs |
+| | prioritized traffic at maximum throughput on all ports and |
+| | collects network and PPPoE subscriber metrics. |
+| | |
++--------------+--------------------------------------------------------------+
+|configuration | The BNG QoS RFC2544 test cases are listed below: |
+| | |
+| | * tc_bng_pppoe_rfc2544_ixia_IMIX_scale_up.yaml |
+| | |
+| | The mentioned test case is a template; the number of ports |
+| | in the setup can be passed using cli arguments, e.g.: |
+| | |
+| | yardstick -d task start --task-args='{vports: 8}' <tc_yaml> |
+| | |
+| | By default, vports=2. |
+| | |
+| | Test duration: |
+| | * set as 30sec; |
+| | |
+| | Traffic type: |
+| | * IPv4; |
+| | |
+| | Packet sizes: |
+| | * IMIX. The following default IMIX distribution is used: |
+| | |
+| | uplink: 70B - 33%, 940B - 33%, 1470B - 34% |
+| | downlink: 68B - 3%, 932B - 1%, 1470B - 96% |
+| | |
+| | VLAN settings: |
+| | * QinQ on access ports; |
+| | * VLAN on core ports; |
+| | |
+| | Number of PPPoE subscribers: |
+| | * 4000 per access port; |
+| | * 1000 per SVLAN; |
+| | |
+| | Default ToS bits settings: |
+| | * 0 - (000) Routine |
+| | * 4 - (100) Flash Override |
+| | * 7 - (111) Network Control. |
+| | |
+| | The above fields are the main options used for the test case |
+| | and can be configured using cli options at test run or |
+| | directly in the test case yaml file. |
+| | |
++--------------+--------------------------------------------------------------+
+|test tool | IXIA IxNetwork |
+| | |
+| | IXIA IxNetwork is used to emulate PPPoE sessions, generate |
+| | L2-L3 traffic, analyze traffic flows and collect network |
+| | metrics during the test run. |
+| | |
++--------------+--------------------------------------------------------------+
+|applicability | Mentioned BNG QoS RFC2544 test case can be configured with |
+| | different: |
+| | |
+| | * Number of PPPoE subscribers sessions; |
+| | * Setup ports number; |
+| | * IP Priority type; |
+| | * Packet size; |
+| | * Enable/disable BGP protocol on core ports; |
+| | |
+| | Default values exist. |
+| | |
++--------------+--------------------------------------------------------------+
+|references | RFC2544 |
+| | |
++--------------+--------------------------------------------------------------+
+| pre-test | 1. BNG is up and running and has configured: |
+| conditions | * access ports with QinQ tagging; |
+| | * core ports with configured IP addresses and VLAN; |
+| | * PPPoE subscribers authorization settings (no auth or |
+| | Radius server, PAP auth protocol); |
+| | * QoS settings; |
+| | |
+| | 2. IxNetwork API server is running on the TCL port specified |
+| | in the pod.yaml file; |
+| | |
+| | 3. BNG ports are connected to IXIA ports (IXIA uplink |
+| | ports are connected to BNG access ports and IXIA |
+| | downlink ports are connected to BNG core ports); |
+| | |
+| | 4. The pod.yaml file contains all necessary information |
+| | (BNG access and core ports settings, core ports IP |
+| | address, NICs, IxNetwork TCL port, IXIA uplink/downlink |
+| | ports, etc). |
+| | |
++--------------+--------------------------------------------------------------+
+|test sequence | description and expected result |
+| | |
++--------------+--------------------------------------------------------------+
+|step 1 | Yardstick resolves the topology and connects to IxNetwork |
+| | API server by TCL. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 2 | The test scenario runs, performing the following steps: |
+| | |
+| | 1. Create access network topologies (these topologies are |
+| | based on IXIA ports which are connected to BNG access |
+| | ports); |
+| | 2. Configure access network topologies with multiple device |
+| | groups. Each device group represents a single SVLAN with |
+| | PPPoE subscriber sessions (the number of SVLANs and |
+| | subscribers created per port depends on the options |
+| | specified in the test case file); |
+| | 3. Create core network topologies (these topologies are |
+| | based on IXIA ports which are connected to BNG core |
+| | ports); |
+| | 4. Configure core network topologies with a single device |
+| | group which represents one connection with configured |
+| | VLAN and BGP protocol; |
+| | 5. Establish PPPoE subscriber connections to BNG; |
+| | 6. Create traffic flows between access and core ports |
+| | (traffic flows are created between access-core port |
+| | pairs, traffic is bi-directional); |
+| | 7. Configure each traffic flow with the options specified |
+| | in the traffic profile; |
+| | 8. Run traffic for the duration specified in the test case |
+| | file; |
+| | 9. Collect network metrics after the traffic is stopped; |
+| | 10. If the drop percentage rate is higher than expected, |
+| | reduce the traffic line rate and repeat steps 7-10; |
+| | 11. If the drop percentage rate is as expected, or the |
+| | maximum number of iterations from step 10 is reached, |
+| | disconnect PPPoE subscribers and stop traffic; |
+| | 12. Stop the test. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 3 | During each iteration interval in the test run, all specified|
+| | metrics are retrieved from IxNetwork and stored in the |
+| | yardstick dispatcher. |
+| | |
++--------------+--------------------------------------------------------------+
+|test verdict | The vBNG RFC2544 test case will achieve maximum traffic line |
+| | rate with zero packet loss (or other non-zero allowed |
+| | partial drop rate). |
+| | |
++--------------+--------------------------------------------------------------+
diff --git a/docs/testing/user/userguide/nsb/tc_bng_pppoe_rfc2544_ixia_8ports_1port_congested.rst b/docs/testing/user/userguide/nsb/tc_bng_pppoe_rfc2544_ixia_8ports_1port_congested.rst
new file mode 100644
index 000000000..889ba2410
--- /dev/null
+++ b/docs/testing/user/userguide/nsb/tc_bng_pppoe_rfc2544_ixia_8ports_1port_congested.rst
@@ -0,0 +1,179 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, 2019 Intel Corporation.
+
+***************************************************************
+Yardstick Test Case Description: NSB vBNG RFC2544 QoS TEST CASE
+***************************************************************
+
++-----------------------------------------------------------------------------+
+|NSB vBNG RFC2544 QoS base line test case with link congestion |
+| |
++--------------+--------------------------------------------------------------+
+|test case id | tc_bng_pppoe_rfc2544_ixia_8ports_1port_congested_IMIX |
+| | |
++--------------+--------------------------------------------------------------+
+| metric | Network metrics: |
+| | * TxThroughput |
+| | * RxThroughput |
+| | * TG packets in |
+| | * TG packets out |
+| | * Max Latency |
+| | * Min Latency |
+| | * Average Latency |
+| | * Packets drop percentage |
+| | |
+| | PPPoE subscribers metrics: |
+| | * Sessions up |
+| | * Sessions down |
+| | * Sessions Not Started |
+| | * Sessions Total |
+| | |
+| | NOTE: the same network metrics list is collected: |
+| | * summary for all ports |
+| | * per port |
+| | * per priority flows summary on all ports |
+| | |
++--------------+--------------------------------------------------------------+
+|test purpose | This test measures the performance of a BNG network device |
+| | according to the RFC2544 testing methodology. The test case |
+| | creates PPPoE subscriber connections to the BNG, runs |
+| | prioritized traffic causing congestion of an access port |
+| | (port xe0) and collects network and PPPoE subscriber metrics.|
+| | |
++--------------+--------------------------------------------------------------+
+|configuration | The BNG QoS RFC2544 test cases are listed below: |
+| | |
+| | * tc_bng_pppoe_rfc2544_ixia_8ports_1port_congested_IMIX.yaml |
+| | |
+| | Number of ports: |
+| | * 8 ports |
+| | |
+| | Test duration: |
+| | * set as 30sec; |
+| | |
+| | Traffic type: |
+| | * IPv4; |
+| | |
+| | Packet sizes: |
+| | * IMIX. The following default IMIX distribution is used: |
+| | |
+| | uplink: 70B - 33%, 940B - 33%, 1470B - 34% |
+| | downlink: 68B - 3%, 932B - 1%, 1470B - 96% |
+| | |
+| | VLAN settings: |
+| | * QinQ on access ports; |
+| | * VLAN on core ports; |
+| | |
+| | Number of PPPoE subscribers: |
+| | * 4000 per access port; |
+| | * 1000 per SVLAN; |
+| | |
+| | Default ToS bits settings: |
+| | * 0 - (000) Routine |
+| | * 4 - (100) Flash Override |
+| | * 7 - (111) Network Control. |
+| | |
+| | The above fields are the main options used for the test case |
+| | and can be configured using cli options at test run or |
+| | directly in the test case yaml file. |
+| | |
+| | NOTE: the only parameter that cannot be changed is the |
+| | number of ports. To run the test with a different number |
+| | of ports, the traffic profile should be updated. |
+| | |
++--------------+--------------------------------------------------------------+
+|test tool | IXIA IxNetwork |
+| | |
+| | IXIA IxNetwork is used to emulate PPPoE sessions, generate |
+| | L2-L3 traffic, analyze traffic flows and collect network |
+| | metrics during the test run. |
+| | |
++--------------+--------------------------------------------------------------+
+|applicability | Mentioned BNG QoS RFC2544 test cases can be configured with |
+| | different: |
+| | |
+| | * Number of PPPoE subscribers sessions; |
+| | * IP Priority type; |
+| | * Packet size; |
+| | * enable/disable BGP protocol on core ports; |
+| | |
+| | Default values exist. |
+| | |
++--------------+--------------------------------------------------------------+
+|references | RFC2544 |
+| | |
++--------------+--------------------------------------------------------------+
+| pre-test | 1. BNG is up and running and has configured: |
+| conditions | * access ports with QinQ tagging; |
+| | * core ports with configured IP addresses and VLAN; |
+| | * PPPoE subscribers authorization settings (no auth or |
+| | Radius server, PAP auth protocol); |
+| | * QoS settings; |
+| | |
+| | 2. IxNetwork API server is running on the TCL port specified |
+| | in the pod.yaml file; |
+| | |
+| | 3. BNG ports are connected to IXIA ports (IXIA uplink |
+| | ports are connected to BNG access ports and IXIA |
+| | downlink ports are connected to BNG core ports); |
+| | |
+| | 4. The pod.yaml file contains all necessary information |
+| | (BNG access and core ports settings, core ports IP |
+| | address, NICs, IxNetwork TCL port, IXIA uplink/downlink |
+| | ports, etc). |
+| | |
++--------------+--------------------------------------------------------------+
+|test sequence | description and expected result |
+| | |
++--------------+--------------------------------------------------------------+
+|step 1 | Yardstick resolves the topology and connects to IxNetwork |
+| | API server by TCL. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 2 | The test scenario runs, performing the following steps: |
+| | |
+| | 1. Create access network topologies (these topologies are |
+| | based on IXIA ports which are connected to BNG access |
+| | ports); |
+| | 2. Configure access network topologies with multiple device |
+| | groups. Each device group represents a single SVLAN with |
+| | PPPoE subscriber sessions (the number of SVLANs and |
+| | subscribers created per port depends on the options |
+| | specified in the test case file); |
+| | 3. Create core network topologies (these topologies are |
+| | based on IXIA ports which are connected to BNG core |
+| | ports); |
+| | 4. Configure core network topologies with a single device |
+| | group which represents one connection with configured |
+| | VLAN and BGP protocol; |
+| | 5. Establish PPPoE subscriber connections to BNG; |
+| | 6. Create traffic flows between access and core ports. |
+| | As the test covers the access port congestion case, |
+| | flows between ports are created in the following way: |
+| | traffic from two core ports goes to one access port, |
+| | causing port congestion, and traffic from the other two |
+| | core ports is split between the remaining three access |
+| | ports; |
+| | 7. Configure each traffic flow with the options specified |
+| | in the traffic profile; |
+| | 8. Run traffic for the duration specified in the test case |
+| | file; |
+| | 9. Collect network metrics after the traffic is stopped; |
+| | 10. Measure the drop percentage rate of different priority |
+| | packets on the congested port. It is expected that all |
+| | high and medium priority packets are forwarded and only |
+| | low priority packets have drops. |
+| | 11. Disconnect PPPoE subscribers and stop the test. |
++--------------+--------------------------------------------------------------+
+|step 3 | During the test run, at the end of each iteration, all the |
+| | metrics specified in the document are retrieved from |
+| | IxNetwork and stored in the yardstick dispatcher. |
+| | |
++--------------+--------------------------------------------------------------+
+|test verdict | The test case is successful if all high and medium priority |
+| | packets on the congested port were forwarded and only low |
+| | priority packets had drops. |
+| | |
++--------------+--------------------------------------------------------------+
diff --git a/docs/testing/user/userguide/nsb/tc_pktgen_k8s_vcmts.rst b/docs/testing/user/userguide/nsb/tc_pktgen_k8s_vcmts.rst
new file mode 100755
index 000000000..56f5c27ed
--- /dev/null
+++ b/docs/testing/user/userguide/nsb/tc_pktgen_k8s_vcmts.rst
@@ -0,0 +1,102 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, 2019 Viosoft Corporation.
+
+***********************************************
+Yardstick Test Case Description: NSB vCMTS
+***********************************************
+
++------------------------------------------------------------------------------+
+|NSB Pktgen test for vCMTS characterization |
+| |
++--------------+---------------------------------------------------------------+
+|test case id | tc_vcmts_k8s_pktgen |
+| | |
++--------------+---------------------------------------------------------------+
+|metric | * Upstream Processing (Per Service Group); |
+| | * Downstream Processing (Per Service Group); |
+| | * Upstream Throughput; |
+| | * Downstream Throughput; |
+| | * Platform Metrics; |
+| | * Power Consumption; |
+| | * Upstream Throughput Time Series; |
+| | * Downstream Throughput Time Series; |
+| | * System Summary; |
+| | |
++--------------+---------------------------------------------------------------+
+|test purpose | * The vCMTS test handles service group and packet generation |
+| | container setup, and metrics collection. |
+| | |
+| | * The vCMTS test case is implemented to run in a Kubernetes |
+| | environment with vCMTS pre-installed. |
++--------------+---------------------------------------------------------------+
+|configuration | The vCMTS test case configurable values are listed below |
+| | |
+| | * num_sg: Number of service groups (Upstream/Downstream |
+| | container pairs). |
+| | * num_tg: Number of Pktgen containers. |
+| | * vcmtsd_image: vCMTS container image (feat/perf). |
+| | * qat_on: QAT status (true/false). |
+| | |
+| | num_sg and num_tg values should be configured in the test |
+| | case file and in the topology file. |
+| | |
++--------------+---------------------------------------------------------------+
+|test tool | Intel vCMTS Reference Dataplane |
+| | Reference implementation of a DPDK-based vCMTS (DOCSIS MAC) |
+| | dataplane in a Kubernetes-orchestrated Linux Container |
+| | environment. |
+| | |
++--------------+---------------------------------------------------------------+
+|applicability | This test case can be configured with different: |
+| | |
+| | * Number of service groups |
+| | * Number of Pktgen instances |
+| | * QAT offloading |
+| | * Feat/Perf Images for performance or features (more data |
+| | collection) |
+| | |
+| | Default values exist. |
+| | |
++--------------+---------------------------------------------------------------+
+|pre-test | Intel vCMTS Reference Dataplane should be installed and |
+|conditions | runnable on a 2-node Kubernetes environment with |
+| | modifications to the containers to allow yardstick ssh |
+| | access, and the ConfigMaps from the original vCMTS package |
+| | deployed. |
+| | |
++--------------+---------------------------------------------------------------+
+|test sequence | description and expected result |
+| | |
++--------------+---------------------------------------------------------------+
+|step 1 | Yardstick is connected to the Kubernetes Master node using |
+| | the configuration file in /etc/kubernetes/admin.yaml |
+| | |
++--------------+---------------------------------------------------------------+
+|step 2 | The TG containers are created and started on the traffic |
+| | generator server (Master node), while the VNF containers are |
+| | created and started on the data plane server. |
+| | |
++--------------+---------------------------------------------------------------+
+|step 3 | Yardstick connects to the TG and VNF using ssh to start |
+| | vCMTS-d and Pktgen. |
+| | |
++--------------+---------------------------------------------------------------+
+|step 4 | Yardstick connects to the running Pktgen instances to start |
+| | generating traffic using the configurations from: |
+| | /etc/yardstick/pktgen_values.yaml |
+| | |
+| | and connects to the vCMTS-d containers to start the upstream |
+| | and downstream processing using the configurations from: |
+| | /etc/yardstick/vcmtsd_values.yaml |
+| | |
++--------------+---------------------------------------------------------------+
+|step 5 | Yardstick copies vCMTS metrics regularly from the remote |
+| | InfluxDB (deployed by the vCMTS Package) to the local |
+| | Yardstick InfluxDB as configured in the options section in |
+| | the test case file. |
+| | |
++--------------+---------------------------------------------------------------+
+|test verdict | None. The test case will collect the KPIs and plot on |
+| | Grafana. |
++--------------+---------------------------------------------------------------+
\ No newline at end of file
diff --git a/docs/testing/user/userguide/nsb/tc_vims_context_sipp.rst b/docs/testing/user/userguide/nsb/tc_vims_context_sipp.rst
new file mode 100644
index 000000000..6df4ab880
--- /dev/null
+++ b/docs/testing/user/userguide/nsb/tc_vims_context_sipp.rst
@@ -0,0 +1,96 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) 2019 Viosoft Corporation.
+
+**********************************************
+Yardstick Test Case Description: NSB VIMS
+**********************************************
+
++-----------------------------------------------------------------------------+
+|NSB VIMS test for vIMS characterization |
+| |
++--------------+--------------------------------------------------------------+
+|test case id | tc_vims_{context}_sipp |
+| | |
+| | * context = baremetal or heat; |
+| | |
++--------------+--------------------------------------------------------------+
+|metric | * Successful registrations per second; |
+| | * Total number of active registrations per server; |
+| | * Successful de-registrations per second; |
+| | * Successful session establishments per second; |
+| | * Total number of active sessions per server; |
+| | * Mean session setup time; |
+| | * Successful re-registrations per second; |
+| | |
++--------------+--------------------------------------------------------------+
+|test purpose | The vIMS test handles the registration rate, call rate, |
+| | round trip delay, and message statistics of the vIMS system. |
+| | |
+| | The vIMS test cases are implemented to run in the baremetal |
+| | and heat contexts with the default configuration. |
+| | |
++--------------+--------------------------------------------------------------+
+|configuration | The vIMS test cases are listed below: |
+| | |
+| | * tc_vims_baremetal_sipp.yaml |
+| | * tc_vims_heat_sipp.yaml |
+| | |
+| | Each test runs one time and collects all the KPIs. |
+| | The configuration of vIMS and SIPp can be changed in each |
+| | test. |
++--------------+--------------------------------------------------------------+
+|test tool | SIPp |
+| | |
+| | SIPp is an application that can simulate SIP scenarios and |
+| | generate RTP traffic, and is used for vIMS characterization. |
+| | |
++--------------+--------------------------------------------------------------+
+|applicability | The SIPp test cases can be configured with different: |
+| | |
+| | * number of accounts; |
+| | * the calls per second (cps) of the SIP test; |
+| | * the holding time; |
+| | * RTP configuration; |
+| | |
++--------------+--------------------------------------------------------------+
+|pre-test | For the OpenStack test case, only vIMS is deployed by an |
+|conditions | external heat template; SIPp needs a pod.yaml file with the |
+| | necessary system and NIC information. |
+| | |
+| | For baremetal test cases, SIPp and vIMS must be installed on |
+| | the hosts where the test is executed. The pod.yaml file must |
+| | contain the necessary system and NIC information. |
+| | |
++--------------+--------------------------------------------------------------+
+|test sequence | description and expected result |
+| | |
++--------------+--------------------------------------------------------------+
+|step 1 | For Baremetal test: The TG and VNF are started on the hosts |
+| | based on the pod file. |
+| | |
+| | For Heat test: One host VM for vIMS is booted, based on |
+| | the test flavor. Another host for SIPp is booted as the |
+| | traffic generator, based on the pod.yaml file. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 2 | Yardstick is connected with the vIMS and SIPp via ssh. |
+| | The test will resolve the topology, instantiate the vIMS and |
+| | SIPp and collect the KPIs/metrics. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 3 | SIPp will run scenario tests with the parameters configured |
+| | in the test case files (tc_vims_baremetal_sipp.yaml and |
+| | tc_vims_heat_sipp.yaml). |
+| | This is done until the KPIs of SIPp are within an acceptable |
+| | threshold. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 4 | In Baremetal test: The test quits the application. |
+| | |
+| | In Heat test: The host VM of vIMS is deleted on test |
+| | completion. |
++--------------+--------------------------------------------------------------+
+|test verdict | The test case will collect the KPIs and plot on Grafana. |
++--------------+--------------------------------------------------------------+
diff --git a/docs/testing/user/userguide/nsb/tc_vpp_baremetal_crypto_ipsec.rst b/docs/testing/user/userguide/nsb/tc_vpp_baremetal_crypto_ipsec.rst
new file mode 100644
index 000000000..6a4a37697
--- /dev/null
+++ b/docs/testing/user/userguide/nsb/tc_vpp_baremetal_crypto_ipsec.rst
@@ -0,0 +1,113 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, 2019 Viosoft Corporation.
+
+***********************************************
+Yardstick Test Case Description: NSB VPP IPSEC
+***********************************************
+
++------------------------------------------------------------------------------+
+|NSB VPP test for vIPSEC characterization |
+| |
++--------------+---------------------------------------------------------------+
+|test case id | tc_baremetal_rfc2544_ipv4_{crypto_dev}_{crypto_alg} |
+| | |
+| | * crypto_dev = HW_cryptodev or SW_cryptodev; |
+| | * crypto_alg = aes-gcm or cbc-sha1; |
+| | |
++--------------+---------------------------------------------------------------+
+|metric | * Network Throughput NDR or PDR; |
+| | * Connections Per Second (CPS); |
+| | * Latency; |
+| | * Number of tunnels; |
+| | * TG Packets Out; |
+| | * TG Packets In; |
+| | * VNF Packets Out; |
+| | * VNF Packets In; |
+| | * Dropped packets; |
+| | |
++--------------+---------------------------------------------------------------+
+|test purpose | IPv4 IPsec tunnel mode performance test: |
+| | |
+| | * Finds and reports throughput NDR (Non Drop Rate) with zero |
+| | packet loss tolerance or throughput PDR (Partial Drop Rate) |
+| | with non-zero packet loss tolerance (LT) expressed in |
+| | number of packets transmitted. |
+| | |
+| | * The IPSEC test cases are implemented to run in baremetal. |
+| | |
++--------------+---------------------------------------------------------------+
+|configuration | The IPSEC test cases are listed below: |
+| | |
+| | * tc_baremetal_rfc2544_ipv4_hw_aesgcm_IMIX_trex.yaml |
+| | * tc_baremetal_rfc2544_ipv4_hw_aesgcm_trex.yaml |
+| | * tc_baremetal_rfc2544_ipv4_hw_cbcsha1_IMIX_trex.yaml |
+| | * tc_baremetal_rfc2544_ipv4_hw_cbcsha1_trex.yaml |
+| | * tc_baremetal_rfc2544_ipv4_sw_aesgcm_IMIX_trex.yaml |
+| | * tc_baremetal_rfc2544_ipv4_sw_aesgcm_trex.yaml |
+| | * tc_baremetal_rfc2544_ipv4_sw_cbcsha1_IMIX_trex.yaml |
+| | * tc_baremetal_rfc2544_ipv4_sw_cbcsha1_trex.yaml |
+| | |
+| | Test duration is set to 500 sec for each test. |
+| | Packet size is set to 64 bytes or higher. |
+| | The number of tunnels is set to 1 or higher. |
+| | The number of connections is set to 1 or higher. |
+| | All of these values can be configured. |
+| | |
++--------------+---------------------------------------------------------------+
+|test tool | Vector Packet Processing (VPP) |
+| | The VPP platform is an extensible framework that provides |
+| | out-of-the-box production quality switch/router functionality.|
+| | It offers high performance, proven technology, modularity, |
+| | flexibility and a rich feature set. |
+| | |
++--------------+---------------------------------------------------------------+
+|applicability | The VPP IPSEC test cases can be configured with different: |
+| | |
+| | * packet sizes; |
+| | * test durations; |
+| | * tolerated loss; |
+| | * crypto device type; |
+| | * number of physical cores; |
+| | * number of tunnels; |
+| | * number of connections; |
+| | * encryption algorithms - integrity algorithm; |
+| | |
+| | Default values exist. |
+| | |
++--------------+---------------------------------------------------------------+
+|pre-test | For baremetal test cases, VPP and DPDK must be installed on |
+|conditions | the hosts where the test is executed. The pod.yaml file must |
+| | contain the necessary system and NIC information. |
+| | |
++--------------+---------------------------------------------------------------+
+|test sequence | description and expected result |
+| | |
++--------------+---------------------------------------------------------------+
+|step 1 | For Baremetal test: The TG and VNF are started on the hosts |
+| | based on the pod file. |
+| | |
++--------------+---------------------------------------------------------------+
+|step 2 | Yardstick connects to the TG and VNF using ssh. |
+| | The test will resolve the topology, instantiate the VNF |
+| | and TG, and collect the KPIs/metrics. |
+| | |
++--------------+---------------------------------------------------------------+
+|step 3 | Test packets are generated by TG on links to DUTs. If the |
+| | number of dropped packets is more than the tolerated loss |
+| | the line rate or throughput is halved. This is done until |
+| | the dropped packets are within an acceptable tolerated loss. |
+| | |
+| | The KPI is the number of packets per second for a packet size |
+| | specified in the test case with an accepted minimal packet |
+| | loss for the default configuration. |
+| | |
++--------------+---------------------------------------------------------------+
+|step 4 | In Baremetal test: The test quits the application and |
+| | unbinds the DPDK ports. |
+| | |
++--------------+---------------------------------------------------------------+
+|test verdict | The test case will achieve a Throughput with an accepted |
+| | minimal tolerated packet loss. |
++--------------+---------------------------------------------------------------+
\ No newline at end of file