Diffstat (limited to 'docs/testing/user/userguide')
-rwxr-xr-x  docs/testing/user/userguide/01-introduction.rst | 30
-rw-r--r--  docs/testing/user/userguide/02-methodology.rst | 27
-rwxr-xr-x  docs/testing/user/userguide/03-architecture.rst | 73
-rw-r--r--  docs/testing/user/userguide/04-installation.rst | 182
-rw-r--r--  docs/testing/user/userguide/05-operation.rst | 2
-rw-r--r--  docs/testing/user/userguide/06-yardstick-plugin.rst | 26
-rw-r--r--  docs/testing/user/userguide/07-result-store-InfluxDB.rst | 19
-rw-r--r--  docs/testing/user/userguide/08-grafana.rst | 26
-rw-r--r--  docs/testing/user/userguide/09-api.rst | 63
-rw-r--r--  docs/testing/user/userguide/10-yardstick-user-interface.rst | 60
-rw-r--r--  docs/testing/user/userguide/11-vtc-overview.rst | 128
-rw-r--r--  docs/testing/user/userguide/12-nsb-overview.rst | 306
-rw-r--r--  docs/testing/user/userguide/13-nsb-installation.rst | 1050
-rw-r--r--  docs/testing/user/userguide/14-nsb-operation.rst | 433
-rw-r--r--  docs/testing/user/userguide/15-list-of-tcs.rst | 275
-rw-r--r--  docs/testing/user/userguide/code/pod_ixia.yaml | 31
-rw-r--r--  docs/testing/user/userguide/comp-intro.rst | 4
-rw-r--r--  docs/testing/user/userguide/glossary.rst | 136
-rw-r--r--  docs/testing/user/userguide/index.rst | 2
-rwxr-xr-x [-rw-r--r--]  docs/testing/user/userguide/nsb/nsb-list-of-tcs.rst | 13
-rw-r--r--  docs/testing/user/userguide/nsb/tc_bng_pppoe_rfc2544_ixia.rst | 177
-rw-r--r--  docs/testing/user/userguide/nsb/tc_bng_pppoe_rfc2544_ixia_8ports_1port_congested.rst | 179
-rw-r--r--  docs/testing/user/userguide/nsb/tc_epc_dedicated_bearer_landslide.rst | 156
-rw-r--r--  docs/testing/user/userguide/nsb/tc_epc_default_bearer_landslide.rst | 149
-rw-r--r--  docs/testing/user/userguide/nsb/tc_epc_network_service_request_landslide.rst | 159
-rw-r--r--  docs/testing/user/userguide/nsb/tc_epc_saegw_tput_relocation_landslide.rst | 167
-rw-r--r--  docs/testing/user/userguide/nsb/tc_epc_ue_service_request_landslide.rst | 174
-rwxr-xr-x  docs/testing/user/userguide/nsb/tc_pktgen_k8s_vcmts.rst | 102
-rw-r--r--  docs/testing/user/userguide/nsb/tc_prox_context_vpe_port.rst | 6
-rw-r--r--  docs/testing/user/userguide/nsb/tc_vfw_rfc2544.rst | 189
-rw-r--r--  docs/testing/user/userguide/nsb/tc_vfw_rfc2544_correlated.rst | 130
-rw-r--r--  docs/testing/user/userguide/nsb/tc_vfw_rfc3511.rst | 133
-rw-r--r--  docs/testing/user/userguide/nsb/tc_vims_context_sipp.rst | 96
-rw-r--r--  docs/testing/user/userguide/nsb/tc_vpp_baremetal_crypto_ipsec.rst | 113
-rw-r--r--  docs/testing/user/userguide/opnfv_yardstick_tc010.rst | 3
-rw-r--r--  docs/testing/user/userguide/opnfv_yardstick_tc011.rst | 6
-rw-r--r--  docs/testing/user/userguide/opnfv_yardstick_tc012.rst | 1
-rwxr-xr-x  docs/testing/user/userguide/opnfv_yardstick_tc015.rst | 141
-rw-r--r--  docs/testing/user/userguide/opnfv_yardstick_tc019.rst | 22
-rw-r--r--  docs/testing/user/userguide/opnfv_yardstick_tc025.rst | 11
-rw-r--r--  docs/testing/user/userguide/opnfv_yardstick_tc027.rst | 2
-rw-r--r--  docs/testing/user/userguide/opnfv_yardstick_tc040.rst | 2
-rw-r--r--  docs/testing/user/userguide/opnfv_yardstick_tc042.rst | 2
-rw-r--r--  docs/testing/user/userguide/opnfv_yardstick_tc050.rst | 49
-rw-r--r--  docs/testing/user/userguide/opnfv_yardstick_tc052.rst | 19
-rw-r--r--  docs/testing/user/userguide/opnfv_yardstick_tc055.rst | 4
-rw-r--r--  docs/testing/user/userguide/opnfv_yardstick_tc057.rst | 11
-rw-r--r--  docs/testing/user/userguide/opnfv_yardstick_tc063.rst | 1
-rw-r--r--  docs/testing/user/userguide/opnfv_yardstick_tc069.rst | 6
-rw-r--r--  docs/testing/user/userguide/opnfv_yardstick_tc073.rst | 2
-rw-r--r--  docs/testing/user/userguide/opnfv_yardstick_tc074.rst | 93
-rw-r--r--  docs/testing/user/userguide/opnfv_yardstick_tc081.rst | 4
-rw-r--r--  docs/testing/user/userguide/opnfv_yardstick_tc084.rst | 25
-rw-r--r--  docs/testing/user/userguide/opnfv_yardstick_tc087.rst | 11
-rw-r--r--  docs/testing/user/userguide/opnfv_yardstick_tc088.rst | 129
-rw-r--r--  docs/testing/user/userguide/opnfv_yardstick_tc089.rst | 129
-rw-r--r--  docs/testing/user/userguide/opnfv_yardstick_tc092.rst | 201
-rw-r--r--  docs/testing/user/userguide/opnfv_yardstick_tc093.rst | 189
-rw-r--r--  docs/testing/user/userguide/references.rst | 23
59 files changed, 4876 insertions, 1026 deletions
diff --git a/docs/testing/user/userguide/01-introduction.rst b/docs/testing/user/userguide/01-introduction.rst
index d846e759c..2a3ab4ea7 100755
--- a/docs/testing/user/userguide/01-introduction.rst
+++ b/docs/testing/user/userguide/01-introduction.rst
@@ -1,7 +1,17 @@
.. This work is licensed under a Creative Commons Attribution 4.0 International
-.. License.
-.. http://creativecommons.org/licenses/by/4.0
-.. (c) OPNFV, Ericsson AB and others.
+ License.
+ http://creativecommons.org/licenses/by/4.0
+ (c) OPNFV, Ericsson AB and others.
+
+ Convention for heading levels in Yardstick documentation:
+
+ ======= Heading 0 (reserved for the title in a document)
+ ------- Heading 1
+ ^^^^^^^ Heading 2
+ +++++++ Heading 3
+ ''''''' Heading 4
+
+ Avoid deeper levels because they do not render well.
============
Introduction
@@ -9,8 +19,8 @@ Introduction
**Welcome to Yardstick's documentation!**
-.. _Pharos: https://wiki.opnfv.org/pharos
-.. _Yardstick: https://wiki.opnfv.org/yardstick
+.. _Pharos: https://wiki.opnfv.org/display/pharos
+.. _Yardstick: https://wiki.opnfv.org/display/yardstick
.. _Presentation: https://wiki.opnfv.org/download/attachments/2925202/opnfv_summit_-_yardstick_project.pdf?version=1&modificationDate=1458848320000&api=v2
Yardstick_ is an OPNFV Project.
@@ -32,7 +42,7 @@ independent.
About This Document
-===================
+-------------------
This document consists of the following chapters:
@@ -66,13 +76,11 @@ This document consists of the following chapters:
yardstick report CLI to view the test result in table format and also values
pinned on to a graph
-* Chapter :doc:`11-vtc-overview` provides information on the :term:`VTC`.
-
* Chapter :doc:`12-nsb-overview` describes the methodology implemented by the
Yardstick - Network service benchmarking to test real world use cases for a
given VNF.
-* Chapter :doc:`13-nsb_installation` provides instructions to install
+* Chapter :doc:`13-nsb-installation` provides instructions to install
*Yardstick - Network Service Benchmarking (NSB) testing*.
* Chapter :doc:`14-nsb-operation` provides information on running *NSB*
@@ -81,8 +89,8 @@ This document consists of the following chapters:
cases.
Contact Yardstick
-=================
+-----------------
Feedback? `Contact us`_
-.. _Contact us: mailto:opnfv-users@lists.opnfv.org&subject="[yardstick]"
+.. _Contact us: mailto:opnfv-users@lists.opnfv.org&subject="#yardstick"
diff --git a/docs/testing/user/userguide/02-methodology.rst b/docs/testing/user/userguide/02-methodology.rst
index 34d271095..bb490c404 100644
--- a/docs/testing/user/userguide/02-methodology.rst
+++ b/docs/testing/user/userguide/02-methodology.rst
@@ -1,20 +1,30 @@
.. This work is licensed under a Creative Commons Attribution 4.0 International
-.. License.
-.. http://creativecommons.org/licenses/by/4.0
-.. (c) OPNFV, Ericsson AB and others.
+ License.
+ http://creativecommons.org/licenses/by/4.0
+ (c) OPNFV, Ericsson AB and others.
+
+ Convention for heading levels in Yardstick documentation:
+
+ ======= Heading 0 (reserved for the title in a document)
+ ------- Heading 1
+ ^^^^^^^ Heading 2
+ +++++++ Heading 3
+ ''''''' Heading 4
+
+ Avoid deeper levels because they do not render well.
===========
Methodology
===========
Abstract
-========
+--------
This chapter describes the methodology implemented by the Yardstick project for
verifying the :term:`NFVI` from the perspective of a :term:`VNF`.
ETSI-NFV
-========
+--------
.. _NFV-TST001: http://www.etsi.org/deliver/etsi_gs/NFV-TST/001_099/001/01.01.01_60/gs_NFV-TST001v010101p.pdf
.. _Yardsticktst: https://wiki.opnfv.org/download/attachments/2925202/opnfv_summit_-_bridging_opnfv_and_etsi.pdf?version=1&modificationDate=1458848320000&api=v2
@@ -53,7 +63,7 @@ The methodology includes five steps:
.. seealso:: Yardsticktst_ for material on the alignment of ETSI TST001 and Yardstick.
Metrics
-=======
+-------
The metrics, as defined by ETSI GS NFV-TST001, are shown in
:ref:`Table1 <table2_1>`, :ref:`Table2 <table2_2>` and
@@ -98,6 +108,10 @@ options).
| | * Latency for storage read/write operations |
| | * Throughput for storage read/write operations |
| | |
++---------+-------------------------------------------------------------------+
+| Energy | * Energy consumption in Watts (transversal to all others |
+| | scenario) |
+| | |
+---------+-------------------------------------------------------------------+
.. _table2_2:
@@ -173,6 +187,7 @@ options).
| | TC010 | TC024 | |
| | TC012 | TC055 | |
| | TC014 | | |
+| | TC015 | | |
| | TC069 | | |
+---------+-------------------+----------------+------------------------------+
| Network | TC001 | TC044 | TC016 [1]_ |
diff --git a/docs/testing/user/userguide/03-architecture.rst b/docs/testing/user/userguide/03-architecture.rst
index 622002ee4..94081b0e1 100755
--- a/docs/testing/user/userguide/03-architecture.rst
+++ b/docs/testing/user/userguide/03-architecture.rst
@@ -1,23 +1,34 @@
.. This work is licensed under a Creative Commons Attribution 4.0 International
-.. License.
-.. http://creativecommons.org/licenses/by/4.0
-.. (c) 2016 Huawei Technologies Co.,Ltd and others
+ License.
+ http://creativecommons.org/licenses/by/4.0
+ (c) 2016 Huawei Technologies Co.,Ltd and others
+
+ Convention for heading levels in Yardstick documentation:
+
+ ======= Heading 0 (reserved for the title in a document)
+ ------- Heading 1
+ ^^^^^^^ Heading 2
+ +++++++ Heading 3
+ ''''''' Heading 4
+
+ Avoid deeper levels because they do not render well.
============
Architecture
============
Abstract
-========
-This chapter describes the yardstick framework software architecture. We will
-introduce it from Use-Case View, Logical View, Process View and Deployment
+--------
+
+This chapter describes the Yardstick framework software architecture. We will
+introduce it from Use Case View, Logical View, Process View and Deployment
View. More technical details will be introduced in this chapter.
Overview
-========
+--------
Architecture overview
----------------------
+^^^^^^^^^^^^^^^^^^^^^
Yardstick is mainly written in Python, and test configurations are made
in YAML. Documentation is written in reStructuredText format, i.e. .rst
files. Yardstick is inspired by Rally. Yardstick is intended to run on a
@@ -34,7 +45,8 @@ the test result will be shown with grafana.
Concept
--------
+^^^^^^^
+
**Benchmark** - assess the relative performance of something
**Benchmark** configuration file - describes a single test case in yaml format
@@ -62,7 +74,7 @@ configuration file and evaluated by the runner.
Runner types
-------------
+^^^^^^^^^^^^
There exist several predefined runner types to choose between when designing
a test scenario:
@@ -129,7 +141,8 @@ Snippet of an Iteration runner configuration:
Use-Case View
-=============
+-------------
+
Yardstick Use-Case View shows two kinds of users. One is the Tester who will
do testing in the cloud, the other is the User who is more concerned with test
results and result analysis.
@@ -158,7 +171,8 @@ on OPNFV testing dashboard which use MongoDB as backend.
:alt: Yardstick Use-Case View
Logical View
-============
+------------
+
Yardstick Logical View describes the most important classes, their
organization, and the most important use-case realizations.
@@ -195,7 +209,8 @@ finished.
:alt: Yardstick framework architecture in Danube
Process View (Test execution flow)
-==================================
+----------------------------------
+
Yardstick process view shows how yardstick runs a test case. Below is the
sequence graph about the test execution flow using heat context, and each
object represents one module in yardstick:
@@ -222,7 +237,8 @@ will call "Openstack" to undeploy the heat stack. Once the stack is
undeployed, the whole test ends.
Deployment View
-===============
+---------------
+
Yardstick deployment view shows how the yardstick tool can be deployed into the
underlying platform. Generally, the yardstick tool is installed on a JumpServer
(see `07-installation` for detailed installation steps), and the JumpServer is
@@ -235,7 +251,7 @@ result for better showing.
:alt: Yardstick Deployment View
Yardstick Directory structure
-=============================
+-----------------------------
**yardstick/** - Yardstick main directory.
@@ -243,28 +259,27 @@ Yardstick Directory structure
with support for different installers.
*docs/* - All documentation is stored here, such as configuration guides,
- user guides and Yardstick descriptions.
+ user guides and Yardstick test case descriptions.
*etc/* - Used for test cases requiring specific POD configurations.
*samples/* - test case samples are stored here, most of all scenario and
- feature's samples are shown in this directory.
+ feature samples are shown in this directory.
-*tests/* - Here both Yardstick internal tests (*functional/* and *unit/*) as
- well as the test cases run to verify the NFVI (*opnfv/*) are stored.
- Also configurations of what to run daily and weekly at the different
- PODs is located here.
+*tests/* - The test cases run to verify the NFVI (*opnfv/*) are stored here.
+ The configurations of what to run daily and weekly at the different
+ PODs are also located here.
-*tools/* - Currently contains tools to build image for VMs which are deployed
- by Heat. Currently contains how to build the yardstick-trusty-server
- image with the different tools that are needed from within the
- image.
+*tools/* - Contains tools to build the image for VMs which are deployed by Heat.
+ Currently contains instructions on how to build the yardstick-image
+ with the different tools that are needed from within the image.
*plugin/* - Plug-in configuration files are stored here.
-*vTC/* - Contains the files for running the virtual Traffic Classifier tests.
-
-*yardstick/* - Contains the internals of Yardstick: Runners, Scenario, Contexts,
- CLI parsing, keys, plotting tools, dispatcher, plugin
+*yardstick/* - Contains the internals of Yardstick: :term:`Runners <runner>`,
+ :term:`Scenarios <scenario>`, :term:`Contexts <context>`, CLI
+ parsing, keys, plotting tools, dispatcher, plugin
install/remove scripts and so on.
+*yardstick/tests* - The Yardstick internal tests (*functional/* and *unit/*)
+ are stored here.
diff --git a/docs/testing/user/userguide/04-installation.rst b/docs/testing/user/userguide/04-installation.rst
index a4846230e..6fe42232d 100644
--- a/docs/testing/user/userguide/04-installation.rst
+++ b/docs/testing/user/userguide/04-installation.rst
@@ -1,13 +1,33 @@
.. This work is licensed under a Creative Commons Attribution 4.0 International
-.. License.
-.. http://creativecommons.org/licenses/by/4.0
-.. (c) OPNFV, Ericsson AB, Huawei Technologies Co.,Ltd and others.
+ License.
+ http://creativecommons.org/licenses/by/4.0
+ (c) OPNFV, Ericsson AB, Huawei Technologies Co.,Ltd and others.
+
+
+ Convention for heading levels in Yardstick documentation:
+ ======= Heading 0 (reserved for the title in a document)
+ ------- Heading 1
+ ^^^^^^^ Heading 2
+ +++++++ Heading 3
+ ''''''' Heading 4
+
+ Avoid deeper levels because they do not render well.
+
======================
Yardstick Installation
======================
-
Yardstick supports installation by Docker or directly in Ubuntu. The
installation procedure for Docker and direct installation are detailed in
the sections below.
@@ -52,6 +72,7 @@ Several prerequisites are needed for Yardstick:
the end of this document. That section details some tips/tricks which *may*
be of help in a proxified environment.
+.. _Install Yardstick using Docker:
Install Yardstick using Docker (first option) (**recommended**)
---------------------------------------------------------------
@@ -128,7 +149,7 @@ in the following sections. Before that, access the Yardstick container::
and then configure Yardstick environments in the Yardstick container.
Using the CLI command ``env prepare`` (first way) (**recommended**)
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
In the Yardstick container, the Yardstick repository is located in the
``/home/opnfv/repos`` directory. Yardstick provides a CLI to prepare OpenStack
@@ -160,10 +181,10 @@ terminal window and execute the following command::
Manually exporting the env variables and initializing OpenStack (second way)
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Export OpenStack environment variables
-######################################
+''''''''''''''''''''''''''''''''''''''
Before running Yardstick it is necessary to export OpenStack environment
variables::
@@ -189,7 +210,7 @@ A sample ``openrc`` file may look like this::
Manual creation of Yardstick flavor and guest images
-####################################################
+''''''''''''''''''''''''''''''''''''''''''''''''''''
Before executing Yardstick test cases, make sure that Yardstick flavor and
guest image are available in OpenStack. Detailed steps about creating the
@@ -246,14 +267,14 @@ image. Add Cirros and Ubuntu images to OpenStack::
Automatic initialization of OpenStack (third way)
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
++++++++++++++++++++++++++++++++++++++++++++++++++
Similar to the second way, the first step is also to
`Export OpenStack environment variables`_. Then the following steps should be
done.
Automatic creation of Yardstick flavor and guest images
-#######################################################
+'''''''''''''''''''''''''''''''''''''''''''''''''''''''
Yardstick has a script for automatically creating Yardstick flavor and building
Yardstick guest images. This script is mainly used for CI and can be also used
@@ -329,7 +350,6 @@ For installing Yardstick directly in Ubuntu, the ``yardstick env`` command is
not available. You need to prepare OpenStack environment variables and create
Yardstick flavor and guest images manually.
-
Uninstall Yardstick
^^^^^^^^^^^^^^^^^^^
@@ -444,6 +464,114 @@ These configuration files can be found in the ``samples`` directory.
Default location for the output is ``/tmp/yardstick.out``.
+Automatic installation of Yardstick
+-----------------------------------
+
+Automatic installation can be used as an alternative to the manual
+installation by providing parameters for the ansible script ``install.yaml``
+in the ``nsb_setup.sh`` file. Yardstick can be installed on bare metal or in
+a container. The Yardstick container can be either pulled or built.
+
+Bare metal installation
+^^^^^^^^^^^^^^^^^^^^^^^
+
+Modify the ``install.yaml`` parameters in the ``nsb_setup.sh`` file to install
+Yardstick on an Ubuntu server:
+
+.. code-block:: console
+
+ ansible-playbook -i install-inventory.ini install.yaml \
+ -e IMAGE_PROPERTY='none' \
+ -e YARDSTICK_DIR=<path to Yardstick folder>
+
+.. note:: By default ``INSTALLATION_MODE`` is ``baremetal``.
+
+.. note:: No modification in ``install-inventory.ini`` is needed for Yardstick
+ installation.
+
+.. note:: To install Yardstick in a virtual environment, pass the parameter
+   ``-e VIRTUAL_ENVIRONMENT=True``.
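+
+For example, a bare metal installation into a Python virtual environment might
+look as follows (the ``YARDSTICK_DIR`` path is illustrative):
+
+.. code-block:: console
+
+   ansible-playbook -i install-inventory.ini install.yaml \
+   -e IMAGE_PROPERTY='none' \
+   -e VIRTUAL_ENVIRONMENT=True \
+   -e YARDSTICK_DIR=/home/opnfv/yardstick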
+
+Container installation
+^^^^^^^^^^^^^^^^^^^^^^
+
+Modify the ``install.yaml`` parameters in the ``nsb_setup.sh`` file to pull or
+build the Yardstick container. To pull the Yardstick image and start the
+container, run:
+
+.. code-block:: console
+
+ ansible-playbook -i install-inventory.ini install.yaml \
+ -e IMAGE_PROPERTY='none' \
+ -e INSTALLATION_MODE=container_pull
+
+.. note:: The Yardstick docker image is available for both Ubuntu 16.04 and
+   Ubuntu 18.04. By default, the Ubuntu 16.04 based docker image is used. To
+   use the Ubuntu 18.04 based docker image, pass the
+   ``-i opnfv/yardstick-ubuntu-18.04`` parameter to ``nsb_setup.sh``.
+
+To build the Yardstick image, modify the Dockerfile as per the comments in it
+and run:
+
+.. code-block:: console
+
+ cd yardstick
+ docker build -f docker/Dockerfile -t opnfv/yardstick:<tag> .
+
+.. note:: By default, a Yardstick docker image based on Ubuntu 16.04 is built.
+   Pass ``-f docker/Dockerfile_ubuntu18`` to build a Yardstick docker image
+   based on Ubuntu 18.04.
+
+.. note:: Add ``--build-arg http_proxy=http://<proxy_host>:<proxy_port>`` to
+   build the docker image if the server is behind a proxy.
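+
+Once built, the container can be started in the same way as a pulled image; a
+minimal sketch, mirroring the Docker installation section earlier in this
+guide (port mapping and flags are illustrative):
+
+.. code-block:: console
+
+   docker run -itd --privileged -v /var/run/docker.sock:/var/run/docker.sock \
+   -p 8888:5000 --name yardstick opnfv/yardstick:<tag>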
+
+Parameters for ``install.yaml``
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Description of the parameters used with ``install.yaml`` (a combined example
+follows the table):
+
+ +-------------------------+-------------------------------------------------+
+ | Parameters | Detail |
+ +=========================+=================================================+
| -i install-inventory.ini|| Installs package dependencies to remote servers |
+ | || and localhost |
+ | || Mandatory parameter |
+ | || By default no remote servers are provided |
+ +-------------------------+-------------------------------------------------+
+ | -e YARDSTICK_DIR || Path to Yardstick folder |
+ | || Mandatory parameter for Yardstick bare metal |
+ | || installation |
+ +-------------------------+-------------------------------------------------+
+ | -e INSTALLATION_MODE || baremetal: Yardstick is installed to the bare |
+ | | metal |
+ | || Default parameter |
+ | +-------------------------------------------------+
+ | || container: Yardstick is installed in container |
+ | || Container is built from Dockerfile |
+ | +-------------------------------------------------+
+ | || container_pull: Yardstick is installed in |
+ | || container |
+ | || Container is pulled from docker hub |
+ +-------------------------+-------------------------------------------------+
+ | -e OS_RELEASE || xenial or bionic: Ubuntu version to be used for|
+ | || VM image (nsb or normal) |
+ | || Default is Ubuntu 16.04, xenial |
+ +-------------------------+-------------------------------------------------+
+ | -e IMAGE_PROPERTY || nsb: Build Yardstick NSB VM image |
+ | || Used to run Yardstick NSB tests on sample VNF |
+ | || Default parameter |
+ | +-------------------------------------------------+
+ | || normal: Build VM image to run ping test in |
+ | || OpenStack |
+ | +-------------------------------------------------+
+ | || none: don't build a VM image. |
+ +-------------------------+-------------------------------------------------+
| -e VIRTUAL_ENVIRONMENT || False or True: Whether to install in virtualenv |
+ | || Default is False |
+ +-------------------------+-------------------------------------------------+
+ | -e YARD_IMAGE_ARCH || CPU architecture on servers |
+ | || Default is 'amd64' |
+ +-------------------------+-------------------------------------------------+
+
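+For example, to pull the Yardstick container and also build the NSB VM image
+based on Ubuntu 18.04 (bionic), the documented parameters can be combined:
+
+.. code-block:: console
+
+   ansible-playbook -i install-inventory.ini install.yaml \
+   -e INSTALLATION_MODE=container_pull \
+   -e IMAGE_PROPERTY='nsb' \
+   -e OS_RELEASE=bionic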
+
Deploy InfluxDB and Grafana using Docker
----------------------------------------
@@ -455,17 +583,17 @@ Grafana to display data in the following sections.
Automatic deployment of InfluxDB and Grafana containers (**recommended**)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-Firstly, enter the Yardstick container::
+1. Enter the Yardstick container::
- sudo -EH docker exec -it yardstick /bin/bash
+ sudo -EH docker exec -it yardstick /bin/bash
-Secondly, create InfluxDB container and configure with the following command::
+2. Create InfluxDB container and configure with the following command::
- yardstick env influxdb
+ yardstick env influxdb
-Thirdly, create and configure Grafana container::
+3. Create and configure Grafana container::
- yardstick env grafana
+ yardstick env grafana
Then you can run a test case and visit http://host_ip:1948
(``admin``/``admin``) to see the results.
@@ -493,21 +621,21 @@ Run influxDB::
sudo -EH docker run -d --name influxdb \
-p 8083:8083 -p 8086:8086 --expose 8090 --expose 8099 \
tutum/influxdb
- docker exec -it influxdb bash
Configure influxDB::
- influx
- >CREATE USER root WITH PASSWORD 'root' WITH ALL PRIVILEGES
- >CREATE DATABASE yardstick;
- >use yardstick;
- >show MEASUREMENTS;
+ docker exec -it influxdb influx
+ > CREATE USER root WITH PASSWORD 'root' WITH ALL PRIVILEGES
+ > CREATE DATABASE yardstick;
+ > use yardstick;
+ > show MEASUREMENTS;
+ > exit
Run Grafana::
sudo -EH docker run -d --name grafana -p 1948:3000 grafana/grafana
-Log on http://{YOUR_IP_HERE}:1948 using ``admin``/``admin`` and configure
+Log on to ``http://{YOUR_IP_HERE}:1948`` using ``admin``/``admin`` and configure
database resource to be ``{YOUR_IP_HERE}:8086``.
.. image:: images/Grafana_config.png
@@ -520,7 +648,7 @@ Configure ``yardstick.conf``::
sudo cp etc/yardstick/yardstick.conf.sample /etc/yardstick/yardstick.conf
sudo vi /etc/yardstick/yardstick.conf
-Modify ``yardstick.conf``::
+Modify ``yardstick.conf`` to add the ``influxdb`` dispatcher::
[DEFAULT]
debug = True
@@ -533,13 +661,11 @@ Modify ``yardstick.conf``::
username = root
password = root
-Now you can run Yardstick test cases and store the results in influxDB.
-
+Now Yardstick will store results in InfluxDB when you run a testcase.
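+
+To verify that results are arriving, the stored measurements can be inspected
+from the InfluxDB container; the measurement name below is hypothetical and
+follows the test case being run::
+
+   docker exec -it influxdb influx -database yardstick
+   > SHOW MEASUREMENTS
+   > SELECT * FROM "opnfv_yardstick_tc002" LIMIT 5
+   > exit
+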
Deploy InfluxDB and Grafana directly in Ubuntu (**Todo**)
---------------------------------------------------------
-
Proxy Support
-------------
diff --git a/docs/testing/user/userguide/05-operation.rst b/docs/testing/user/userguide/05-operation.rst
index f390d1643..82539c97f 100644
--- a/docs/testing/user/userguide/05-operation.rst
+++ b/docs/testing/user/userguide/05-operation.rst
@@ -183,7 +183,7 @@ Combining these elements together, a sample Heat context config looks like:
.. literalinclude::
../../../../yardstick/tests/integration/dummy-scenario-heat-context.yaml
:start-after: ---
- :empahsise-lines: 14-
+ :emphasize-lines: 14-
Using existing HOT Templates
'''''''''''''''''''''''''''''
diff --git a/docs/testing/user/userguide/06-yardstick-plugin.rst b/docs/testing/user/userguide/06-yardstick-plugin.rst
index bc35e239d..a5d890b14 100644
--- a/docs/testing/user/userguide/06-yardstick-plugin.rst
+++ b/docs/testing/user/userguide/06-yardstick-plugin.rst
@@ -3,13 +3,23 @@
.. http://creativecommons.org/licenses/by/4.0
.. (c) OPNFV, Ericsson AB, Huawei Technologies Co.,Ltd and others.
+.. Convention for heading levels in Yardstick documentation:
+
+ ======= Heading 0 (reserved for the title in a document)
+ ------- Heading 1
+ ^^^^^^^ Heading 2
+ +++++++ Heading 3
+ ''''''' Heading 4
+
+ Avoid deeper levels because they do not render well.
+
===================================
Installing a plug-in into Yardstick
===================================
Abstract
-========
+--------
Yardstick provides a ``plugin`` CLI command to support integration with other
OPNFV testing projects. Below is an example invocation of Yardstick plugin
@@ -17,7 +27,7 @@ command and Storperf plug-in sample.
Installing Storperf into Yardstick
-==================================
+----------------------------------
Storperf is delivered as a Docker container from
https://hub.docker.com/r/opnfv/storperf/tags/.
In this introduction we will install Storperf on the Jump Host.
Step 0: Environment preparation
--------------------------------
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Running Storperf on Jump Host
Requirements:
@@ -100,7 +110,7 @@ container. You may need to copy it to the root directory of the Storperf
deployed host.
Step 1: Plug-in configuration file preparation
-----------------------------------------------
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
To install a plug-in, first you need to prepare a plug-in configuration file in
YAML format and store it in the "plugin" directory. The plugin configuration
@@ -125,7 +135,7 @@ Here the Storperf will be installed on IP 192.168.23.2 which is the Jump Host
in my local environment.
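+
+A minimal sketch of such a plug-in configuration file, assuming the
+Euphrates-era plug-in schema, could look like::
+
+  schema: "yardstick:plugin:0.1"
+
+  plugins:
+    name: storperf
+
+  deployment:
+    ip: 192.168.23.2
+    user: root
+    password: root
+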
Step 2: Plug-in install/remove scripts preparation
---------------------------------------------------
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
In ``yardstick/resource/scripts`` directory, there are two folders: an
``install`` folder and a ``remove`` folder. You need to store the plug-in
@@ -139,15 +149,15 @@ For example, the install and remove scripts for Storperf are both named
``storperf.bash``.
Step 3: Install and remove Storperf
------------------------------------
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
To install Storperf, simply execute the following command::
# Install Storperf
yardstick plugin install plugin/storperf.yaml
-Removing Storperf from yardstick
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+Removing Storperf from Yardstick
+++++++++++++++++++++++++++++++++
To remove Storperf, simply execute the following command::
diff --git a/docs/testing/user/userguide/07-result-store-InfluxDB.rst b/docs/testing/user/userguide/07-result-store-InfluxDB.rst
index cde931376..8a9196b1b 100644
--- a/docs/testing/user/userguide/07-result-store-InfluxDB.rst
+++ b/docs/testing/user/userguide/07-result-store-InfluxDB.rst
@@ -1,14 +1,23 @@
.. This work is licensed under a Creative Commons Attribution 4.0 International
-.. License.
-.. http://creativecommons.org/licenses/by/4.0
-.. (c) OPNFV, 2016 Huawei Technologies Co.,Ltd and others.
+ License.
+ http://creativecommons.org/licenses/by/4.0
+ (c) OPNFV, 2016 Huawei Technologies Co.,Ltd and others.
+ Convention for heading levels in Yardstick documentation:
+
+ ======= Heading 0 (reserved for the title in a document)
+ ------- Heading 1
+ ^^^^^^^ Heading 2
+ +++++++ Heading 3
+ ''''''' Heading 4
+
+ Avoid deeper levels because they do not render well.
==============================================
Store Other Project's Test Results in InfluxDB
==============================================
Abstract
-========
+--------
.. _Framework: https://wiki.opnfv.org/download/attachments/6827660/wiki.png?version=1&modificationDate=1470298075000&api=v2
@@ -21,7 +30,7 @@ into community's InfluxDB. The framework is shown in Framework_.
:alt: Store Other Project's Test Results in InfluxDB
Store Storperf Test Results into Community's InfluxDB
-=====================================================
+-----------------------------------------------------
.. _Influxdb: https://git.opnfv.org/cgit/yardstick/tree/yardstick/dispatcher/influxdb.py
.. _Mingjiang: mailto:limingjiang@huawei.com
diff --git a/docs/testing/user/userguide/08-grafana.rst b/docs/testing/user/userguide/08-grafana.rst
index 29bc23a08..ebe9f570d 100644
--- a/docs/testing/user/userguide/08-grafana.rst
+++ b/docs/testing/user/userguide/08-grafana.rst
@@ -3,13 +3,23 @@
.. http://creativecommons.org/licenses/by/4.0
.. (c) 2016 Huawei Technologies Co.,Ltd and others
+.. Convention for heading levels in Yardstick documentation:
+
+ ======= Heading 0 (reserved for the title in a document)
+ ------- Heading 1
+ ^^^^^^^ Heading 2
+ +++++++ Heading 3
+ ''''''' Heading 4
+
+ Avoid deeper levels because they do not render well.
+
=================
Grafana dashboard
=================
Abstract
-========
+--------
This chapter describes the Yardstick grafana dashboard. The Yardstick grafana
dashboard can be found here: http://testresults.opnfv.org/grafana/
@@ -21,14 +31,14 @@ dashboard can be found here: http://testresults.opnfv.org/grafana/
Public access
-=============
+-------------
Yardstick provides a public account for accessing the dashboard. The username
and password are both set to ‘opnfv’.
Testcase dashboard
-==================
+------------------
For each test case, there is a dedicated dashboard. Shown here is the dashboard
of TC002.
@@ -36,7 +46,7 @@ of TC002.
.. image:: images/TC002.png
:width: 800px
- :alt:TC002 dashboard
+ :alt: TC002 dashboard
For each test case dashboard, on the top left we have a dashboard selection;
you can switch to different test cases using this pull-down menu.
@@ -56,7 +66,7 @@ zoom out the chart.
Administration access
-=====================
+---------------------
For a user with administration rights it is easy to update and save any
dashboard configuration. Saved updates immediately take effect and become live.
@@ -72,11 +82,11 @@ This may cause issues like:
Any change made by an administrator should be made carefully.
-Add a dashboard into yardstick grafana
-======================================
+Add a dashboard into Yardstick Grafana
+--------------------------------------
Due to security concerns, users using the public opnfv account are not able
-to edit the yardstick grafana directly.It takes a few more steps for a
+to edit the yardstick grafana directly. It takes a few more steps for a
non-yardstick user to add a custom dashboard into yardstick grafana.
There are 6 steps to go.
diff --git a/docs/testing/user/userguide/09-api.rst b/docs/testing/user/userguide/09-api.rst
index f0ae3980b..f227878ae 100644
--- a/docs/testing/user/userguide/09-api.rst
+++ b/docs/testing/user/userguide/09-api.rst
@@ -2,6 +2,15 @@
.. License.
.. http://creativecommons.org/licenses/by/4.0
.. (c) OPNFV, Huawei Technologies Co.,Ltd and others.
+.. Convention for heading levels in Yardstick documentation:
+
+ ======= Heading 0 (reserved for the title in a document)
+ ------- Heading 1
+ ^^^^^^^ Heading 2
+ +++++++ Heading 3
+ ''''''' Heading 4
+
+ Avoid deeper levels because they do not render well.
=====================
Yardstick Restful API
@@ -9,16 +18,16 @@ Yardstick Restful API
Abstract
-========
+--------
Yardstick has supported a RESTful API since Danube.
Available API
-=============
+-------------
/yardstick/env/action
----------------------
+^^^^^^^^^^^^^^^^^^^^^
Description: This API is used to prepare Yardstick test environment.
For Euphrates, it supports:
@@ -69,7 +78,7 @@ get the task result.
/yardstick/asynctask
---------------------
+^^^^^^^^^^^^^^^^^^^^
Description: This API is used to get the status of asynchronous tasks
@@ -91,7 +100,7 @@ NOTE::
/yardstick/testcases
---------------------
+^^^^^^^^^^^^^^^^^^^^
Description: This API is used to list all released Yardstick test cases.
@@ -106,7 +115,7 @@ Example::
/yardstick/testcases/release/action
------------------------------------
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Description: This API is used to run a Yardstick released test case.
@@ -130,7 +139,7 @@ result.
/yardstick/testcases/samples/action
------------------------------------
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Description: This API is used to run a Yardstick sample test case.
@@ -154,7 +163,7 @@ the result.
/yardstick/testcases/<testcase_name>/docs
------------------------------------------
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Description: This API is used to get the documentation of a certain released test
case.
@@ -170,7 +179,7 @@ Example::
/yardstick/testsuites/action
-----------------------------
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Description: This API is used to run a Yardstick test suite.
@@ -194,7 +203,7 @@ result.
/yardstick/tasks/<task_id>/log
-------------------------------
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Description: This API is used to get the real time log of test case execution.
@@ -209,7 +218,7 @@ Example::
/yardstick/results
-------------------
+^^^^^^^^^^^^^^^^^^
Description: This API is used to get the test results of tasks. If you call
/yardstick/testcases/samples/action API, it will return a task id. You can use
This API will return a list of test case results.
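+
+For example, assuming the API service is exposed on host port 8888 (as in the
+Docker installation section of this guide), the results could be fetched
+with::
+
+   curl http://<SERVER_IP>:8888/yardstick/results?task_id=<task_id>
+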
/api/v2/yardstick/openrcs
--------------------------
+^^^^^^^^^^^^^^^^^^^^^^^^^
Description: This API provides functionality of handling OpenStack credential
file (openrc). For Euphrates, it supports:
@@ -282,7 +291,7 @@ Example::
/api/v2/yardstick/openrcs/<openrc_id>
--------------------------------------
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Description: This API provides functionality of handling OpenStack credential file (openrc). For Euphrates, it supports:
@@ -308,7 +317,7 @@ Example::
/api/v2/yardstick/pods
-----------------------
+^^^^^^^^^^^^^^^^^^^^^^
Description: This API provides functionality of handling Yardstick pod file
(pod.yaml). For Euphrates, it supports:
@@ -334,7 +343,7 @@ Example::
/api/v2/yardstick/pods/<pod_id>
--------------------------------
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Description: This API provides functionality of handling Yardstick pod file (pod.yaml). For Euphrates, it supports:
@@ -358,7 +367,7 @@ Example::
/api/v2/yardstick/images
-------------------------
+^^^^^^^^^^^^^^^^^^^^^^^^
Description: This API is used to do some work related to Yardstick VM images.
For Euphrates, it supports:
@@ -383,7 +392,7 @@ Example::
/api/v2/yardstick/images/<image_id>
------------------------------------
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Description: This API is used to do some work related to Yardstick VM images. For Euphrates, it supports:
@@ -407,7 +416,7 @@ Example::
/api/v2/yardstick/tasks
------------------------
+^^^^^^^^^^^^^^^^^^^^^^^
Description: This API is used to do some work related to yardstick tasks. For
Euphrates, it supports:
@@ -433,7 +442,7 @@ Example::
/api/v2/yardstick/tasks/<task_id>
---------------------------------
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Description: This API is used to do some work related to yardstick tasks. For Euphrates, it supports:
@@ -518,7 +527,7 @@ Example::
/api/v2/yardstick/testcases
----------------------------
+^^^^^^^^^^^^^^^^^^^^^^^^^^^
Description: This API is used to do some work related to Yardstick testcases.
For Euphrates, it supports:
@@ -553,7 +562,7 @@ Example::
/api/v2/yardstick/testcases/<case_name>
----------------------------------------
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Description: This API is used to do some work related to yardstick testcases. For Euphrates, it supports:
@@ -579,7 +588,7 @@ Example::
/api/v2/yardstick/testsuites
-----------------------------
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Description: This API is used to do some work related to yardstick test suites.
For Euphrates, it supports:
@@ -617,7 +626,7 @@ Example::
/api/v2/yardstick/testsuites
-----------------------------
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Description: This API is used to do some work related to yardstick test suites. For Euphrates, it supports:
@@ -643,7 +652,7 @@ Example::
/api/v2/yardstick/projects
---------------------------
+^^^^^^^^^^^^^^^^^^^^^^^^^^
Description: This API is used to do some work related to Yardstick test
projects. For Euphrates, it supports:
@@ -678,7 +687,7 @@ Example::
/api/v2/yardstick/projects
---------------------------
+^^^^^^^^^^^^^^^^^^^^^^^^^^
Description: This API is used to do some work related to yardstick test projects. For Euphrates, it supports:
@@ -704,7 +713,7 @@ Example::
/api/v2/yardstick/containers
-----------------------------
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Description: This API is used to do some work related to Docker containers.
For Euphrates, it supports:
@@ -744,7 +753,7 @@ Example::
/api/v2/yardstick/containers/<container_id>
--------------------------------------------
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Description: This API is used to do some work related to Docker containers. For Euphrates, it supports:
diff --git a/docs/testing/user/userguide/10-yardstick-user-interface.rst b/docs/testing/user/userguide/10-yardstick-user-interface.rst
index cadec78ef..246e1b1df 100644
--- a/docs/testing/user/userguide/10-yardstick-user-interface.rst
+++ b/docs/testing/user/userguide/10-yardstick-user-interface.rst
@@ -1,30 +1,64 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+
+.. Convention for heading levels in Yardstick documentation:
+
+ ======= Heading 0 (reserved for the title in a document)
+ ------- Heading 1
+ ^^^^^^^ Heading 2
+ +++++++ Heading 3
+ ''''''' Heading 4
+
+ Avoid deeper levels because they do not render well.
+
========================
Yardstick User Interface
========================
-This interface provides a user to view the test result
-in table format and also values pinned on to a graph.
+This chapter describes how to generate HTML reports, used to view, store, share
+or publish test results in table and graph formats.
+
+The following layouts are available:
+
+* The compact HTML report layout is suitable for testcases producing a few
+ metrics over a short period of time. All metrics for all timestamps are
+ displayed in the data table and on the graph.
+* The dynamic HTML report layout consists of a wider data table, a graph, and
+ a tree that allows selecting the metrics to be displayed. This layout is
+ suitable for testcases, such as NSB ones, producing a lot of metrics over
+ a longer period of time.
-Command
-=======
-::
+
+Commands
+--------
+
+To generate the compact HTML report, run::
yardstick report generate <task-ID> <testcase-filename>
+To generate the dynamic HTML report, run::
+
+ yardstick report generate-nsb <task-ID> <testcase-filename>
+
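+For example, to generate the compact report for a finished TC002 run (the task
+ID below is hypothetical; the real one is printed when the task starts)::
+
+   yardstick report generate 26a56ed4-0f5a-4bf4-9a44-c1d1f02c5392 \
+   opnfv_yardstick_tc002.yaml
+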
Description
-===========
+-----------
-1. When the command is triggered using the task-id and the testcase
-name provided the respective values are retrieved from the
-database (influxdb in this particular case).
+1. When the command is triggered, the relevant values for the
+ provided task-id and testcase name are retrieved from the
+ database (`InfluxDB`_ in this particular case).
-2. The values are then formatted and then provided to the html
-template framed with complete html body using Django Framework.
+2. The values are then formatted and provided to the html
+ template to be rendered using `Jinja2`_.
-3. Then the whole template is written into a html file.
+3. Then the rendered template is written into an HTML file.
The graph is framed with the timestamp on the x-axis and output values
(which differ from test case to test case) on the y-axis with the help of
-"Highcharts".
+`Chart.js`_.
+
+.. _InfluxDB: https://www.influxdata.com/time-series-platform/influxdb/
+.. _Jinja2: http://jinja.pocoo.org/docs/2.10/
+.. _Chart.js: https://www.chartjs.org/
diff --git a/docs/testing/user/userguide/11-vtc-overview.rst b/docs/testing/user/userguide/11-vtc-overview.rst
deleted file mode 100644
index 47582358c..000000000
--- a/docs/testing/user/userguide/11-vtc-overview.rst
+++ /dev/null
@@ -1,128 +0,0 @@
-.. This work is licensed under a Creative Commons Attribution 4.0 International
-.. License.
-.. http://creativecommons.org/licenses/by/4.0
-.. (c) OPNFV, National Center of Scientific Research "Demokritos" and others.
-
-==========================
-Virtual Traffic Classifier
-==========================
-
-Abstract
-========
-
-.. _TNOVA: http://www.t-nova.eu/
-.. _TNOVAresults: http://www.t-nova.eu/results/
-.. _Yardstick: https://wiki.opnfv.org/yardstick
-
-This chapter provides an overview of the virtual Traffic Classifier, a
-contribution to OPNFV Yardstick_ from the EU Project TNOVA_.
-Additional documentation is available in TNOVAresults_.
-
-Overview
-========
-
-The virtual Traffic Classifier (:term:`VTC`) :term:`VNF`, comprises of a
-Virtual Network Function Component (:term:`VNFC`). The :term:`VNFC` contains
-both the Traffic Inspection module, and the Traffic forwarding module, needed
-to run the :term:`VNF`. The exploitation of Deep Packet Inspection
-(:term:`DPI`) methods for traffic classification is built around two basic
-assumptions:
-
-* third parties unaffiliated with either source or recipient are able to
- inspect each IP packet's payload
-
-* the classifier knows the relevant syntax of each application's packet
- payloads (protocol signatures, data patterns, etc.).
-
-The proposed :term:`DPI` based approach will only use an indicative, small
-number of the initial packets from each flow in order to identify the content
-and not inspect each packet.
-
-In this respect it follows the Packet Based per Flow State (term:`PBFS`). This
-method uses a table to track each session based on the 5-tuples (src address,
-dest address, src port,dest port, transport protocol) that is maintained for
-each flow.
-
-Concepts
-========
-
-* *Traffic Inspection*: The process of packet analysis and application
- identification of network traffic that passes through the :term:`VTC`.
-
-* *Traffic Forwarding*: The process of packet forwarding from an incoming
- network interface to a pre-defined outgoing network interface.
-
-* *Traffic Rule Application*: The process of packet tagging, based on a
- predefined set of rules. Packet tagging may include e.g. Type of Service
- (:term:`ToS`) field modification.
-
-Architecture
-============
-
-The Traffic Inspection module is the most computationally intensive component
-of the :term:`VNF`. It implements filtering and packet matching algorithms in
-order to support the enhanced traffic forwarding capability of the :term:`VNF`.
-The component supports a flow table (exploiting hashing algorithms for fast
-indexing of flows) and an inspection engine for traffic classification.
-
-The implementation used for these experiments exploits the nDPI library.
-The packet capturing mechanism is implemented using libpcap. When the
-:term:`DPI` engine identifies a new flow, the flow register is updated with the
-appropriate information and transmitted across the Traffic Forwarding module,
-which then applies any required policy updates.
-
-The Traffic Forwarding moudle is responsible for routing and packet forwarding.
-It accepts incoming network traffic, consults the flow table for classification
-information for each incoming flow and then applies pre-defined policies
-marking e.g. :term:`ToS`/Differentiated Services Code Point (:term:`DSCP`)
-multimedia traffic for Quality of Service (:term:`QoS`) enablement on the
-forwarded traffic.
-It is assumed that the traffic is forwarded using the default policy until it
-is identified and new policies are enforced.
-
-The expected response delay is considered to be negligible, as only a small
-number of packets are required to identify each flow.
-
-Graphical Overview
-==================
-
-.. code-block:: console
-
- +----------------------------+
- | |
- | Virtual Traffic Classifier |
- | |
- | Analysing/Forwarding |
- | ------------> |
- | ethA ethB |
- | |
- +----------------------------+
- | ^
- | |
- v |
- +----------------------------+
- | |
- | Virtual Switch |
- | |
- +----------------------------+
-
-Install
-=======
-
-run the vTC/build.sh with root privileges
-
-Run
-===
-
-::
-
- sudo ./pfbridge -a eth1 -b eth2
-
-
-.. note:: Virtual Traffic Classifier is not support in OPNFV Danube release.
-
-
-Development Environment
-=======================
-
-Ubuntu 14.04 Ubuntu 16.04
diff --git a/docs/testing/user/userguide/12-nsb-overview.rst b/docs/testing/user/userguide/12-nsb-overview.rst
index 71a5c1130..45b087a47 100644
--- a/docs/testing/user/userguide/12-nsb-overview.rst
+++ b/docs/testing/user/userguide/12-nsb-overview.rst
@@ -1,77 +1,98 @@
.. This work is licensed under a Creative Commons Attribution 4.0 International
.. License.
.. http://creativecommons.org/licenses/by/4.0
-.. (c) OPNFV, 2016-2017 Intel Corporation.
+.. (c) OPNFV, 2016-2019 Intel Corporation.
+
+.. Convention for heading levels in Yardstick documentation:
+
+ ======= Heading 0 (reserved for the title in a document)
+ ------- Heading 1
+ ^^^^^^^ Heading 2
+ +++++++ Heading 3
+ ''''''' Heading 4
+
+ Avoid deeper levels because they do not render well.
===================================
Network Services Benchmarking (NSB)
===================================
-Abstract
-========
+.. _Yardstick: https://wiki.opnfv.org/display/yardstick
+.. _`ETSI GS NFV-TST001`: http://www.etsi.org/deliver/etsi_gs/NFV-TST/001_099/001/01.01.01_60/gs_nfv-tst001v010101p.pdf
-.. _Yardstick: https://wiki.opnfv.org/yardstick
+Abstract
+--------
This chapter provides an overview of the NSB, a contribution to OPNFV
Yardstick_ from Intel.
Overview
-========
-
-The goal of NSB is to Extend Yardstick to perform real world VNFs and NFVi
-Characterization and benchmarking with repeatable and deterministic methods.
-
-The Network Service Benchmarking (NSB) extends the yardstick framework to do
-VNF characterization and benchmarking in three different execution
-environments - bare metal i.e. native Linux environment, standalone virtual
-environment and managed virtualized environment (e.g. Open stack etc.).
-It also brings in the capability to interact with external traffic generators
-both hardware & software based for triggering and validating the traffic
-according to user defined profiles.
+--------
+
+Network Services Benchmarking (:term:`NSB`) uses the :term:`Yardstick`
+framework for performing :term:`VNF` and :term:`NFVI` characterisation in an
+:term:`NFV` environment.
+
+For VNF characterisation, NSB will onboard a VNF, source and sink traffic to it
+via traffic generators, and collect a variety of key performance indicators
+(:term:`KPI`) during VNF execution. The stream of KPI data is stored in a
+database, and it is visualized in a performance-visualization dashboard.
+
+For NFVI characterisation, a fixed test VNF, called :term:`PROX` is used.
+PROX implements a suite of test cases and visualizes the output data of the
+test suite. The PROX test cases implement various execution kernels found in
+real-world VNFs, and the output of the test cases provides an indication of
+the fitness of the infrastructure for running NFV services, in addition to
+indicating potential performance optimizations for the NFVI.
+
+NSB extends the Yardstick framework to do VNF characterization in three
+different execution environments - bare metal i.e. native Linux environment,
+standalone virtual environment and managed virtualized environment (e.g.
+OpenStack). It also brings in the capability to interact with external traffic
+generators, both hardware and software based, for triggering and validating the
+traffic according to user defined profiles.
NSB extension includes:
- - Generic data models of Network Services, based on ETSI spec `ETSI GS NFV-TST 001 <http://www.etsi.org/deliver/etsi_gs/NFV-TST/001_099/001/01.01.01_60/gs_nfv-tst001v010101p.pdf>`_
-
- - New Standalone context for VNF testing like SRIOV, OVS, OVS-DPDK etc
-
- - Generic VNF configuration models and metrics implemented with Python
- classes
-
- - Traffic generator features and traffic profiles
-
- - L1-L3 state-less traffic profiles
+* Generic data models of Network Services, based on ETSI spec
+ `ETSI GS NFV-TST001`_
+* Standalone :term:`context` for VNF testing, like SRIOV, OVS-DPDK, etc.
+* Generic VNF configuration models and metrics implemented with Python
+ classes
+* Traffic generator features and traffic profiles
- - L4-L7 state-full traffic profiles
+ * L1-L3 stateless traffic profiles
+ * L4-L7 stateful traffic profiles
+ * Tunneling protocol/network overlay support
- - Tunneling protocol / network overlay support
+* Scenarios that handle NSB test cases execution
- - Test case samples
+ * NSPerf - scenario that handles generic NSB test case execution
+ (setup and init tg/vnf, trigger traffic on tg, collect kpi)
+ * NSPerf-RFC2544 - scenario that allows repeatable triggering of traffic on
+ traffic generators until the test case acceptance criteria are met
+ (for example RFC2544 binary search)
- - Ping
+* Test case samples
- - Trex
+ * Ping
+ * Trex
+ * vPE, vCGNAT, vFirewall etc - ipv4 throughput, latency etc
- - vPE,vCGNAT, vFirewall etc - ipv4 throughput, latency etc
+* Traffic generators, e.g. Trex, ab/nginx, ixia, iperf, etc.
+* KPIs for a given use case:
- - Traffic generators like Trex, ab/nginx, ixia, iperf etc
+ * System agent support for collecting NFVi KPI. This includes:
- - KPIs for a given use case:
+ * CPU statistic
+ * Memory BW
+ * OVS-DPDK Stats
- - System agent support for collecting NFVi KPI. This includes:
-
- - CPU statistic
-
- - Memory BW
-
- - OVS-DPDK Stats
-
- - Network KPIs, e.g., inpackets, outpackets, thoughput, latency etc
-
- - VNF KPIs, e.g., packet_in, packet_drop, packet_fwd etc
+ * Network KPIs e.g. inpackets, outpackets, throughput, latency
+ * VNF KPIs e.g. packet_in, packet_drop, packet_fwd
Architecture
-============
+------------
The Network Service (NS) defines a set of Virtual Network Functions (VNF)
connected together using NFV infrastructure.
@@ -83,124 +104,155 @@ performed network functionality. The part of the data model is a set of the
configuration parameters, number of connection points used and flavor including
core and memory amount.
-The ETSI defines a Network Service as a set of configurable VNFs working in
-some NFV Infrastructure connecting each other using Virtual Links available
-through Connection Points. The ETSI MANO specification defines a set of
-management entities called Network Service Descriptors (NSD) and
-VNF Descriptors (VNFD) that define real Network Service. The picture below
-makes an example how the real Network Operator use-case can map into ETSI
-Network service definition
+ETSI defines a Network Service as a set of configurable VNFs working in some
+NFV Infrastructure connecting each other using Virtual Links available through
+Connection Points. The ETSI MANO specification defines a set of management
+entities called Network Service Descriptors (NSD) and VNF Descriptors (VNFD)
+that define a real Network Service. The picture below gives an example of how
+a real Network Operator use-case can map into the ETSI Network Service
+definition.
+
+Network Service framework performs the necessary test steps. It may involve:
+
+* Interacting with traffic generator and providing the inputs on traffic
+ type / packet structure to generate the required traffic as per the
+ test case. Traffic profiles will be used for this.
+* Executing the commands required for the test procedure and analysing the
+  command output to confirm whether the command executed correctly,
+  e.g. as per the test case, run the traffic for the given time period and
+  wait for the necessary time delay.
+* Verify the test result.
+* Validate the traffic flow from SUT.
+* Fetch the data from SUT and verify the value as per the test case.
+* Upload the logs from SUT onto the Test Harness server
+* Retrieve the KPIs provided by a particular VNF
-Network Service framework performs the necessary test steps. It may involve
-
- - Interacting with traffic generator and providing the inputs on traffic
- type / packet structure to generate the required traffic as per the
- test case. Traffic profiles will be used for this.
+Components of Network Service
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
- - Executing the commands required for the test procedure and analyses the
- command output for confirming whether the command got executed correctly
- or not. E.g. As per the test case, run the traffic for the given
- time period / wait for the necessary time delay
+.. TODO: provide a list of components in this section and describe them in
+ later sub-sections
- - Verify the test result.
+.. Components are the methodology, TGs, framework extensions, KPI collection,
+ Testcases, SampleVNFs
+.. Framework extentions include: VNF models, NSPerf Scenario, contexts
- - Validate the traffic flow from SUT
+* *Models for Network Service benchmarking*: The Network Service benchmarking
+  requires a proper modelling approach. NSB provides models using Python
+  files defining NSDs and VNFDs.
- - Fetch the table / data from SUT and verify the value as per the test case
+The benchmark control application, being a part of OPNFV Yardstick, can call
+those Python models to instantiate and configure the VNFs. Depending on the
+infrastructure type (bare metal or fully virtualized), those calls can be
+made directly or using a MANO system.
- - Upload the logs from SUT onto the Test Harness server
+* *Traffic generators in NSB*: Any benchmark application requires a set of
+ traffic generator and traffic profiles defining the method in which traffic
+ is generated.
- - Read the KPI's provided by particular VNF
+The Network Service benchmarking model extends the Network Service
+definition with a set of Traffic Generators (TG) that are treated the
+same way as other VNFs that are part of the benchmarked network service.
+Like other VNFs, the traffic generators are instantiated and terminated.
-Components of Network Service
------------------------------
+Every traffic generator has its own configuration, defined as a traffic
+profile, and a set of supported KPIs. The Python models for TGs are extended
+by specific calls to listen to and generate traffic.
- * *Models for Network Service benchmarking*: The Network Service benchmarking
- requires the proper modelling approach. The NSB provides models using Python
- files and defining of NSDs and VNFDs.
+* *The stateless TREX traffic generator*: The main traffic generator used as
+ Network Service stimulus is the open source TREX tool.
- The benchmark control application being a part of OPNFV yardstick can call
- that python models to instantiate and configure the VNFs. Depending on
- infrastructure type (bare-metal or fully virtualized) that calls could be
- made directly or using MANO system.
+The TREX tool can generate any kind of stateless traffic.
- * *Traffic generators in NSB*: Any benchmark application requires a set of
- traffic generator and traffic profiles defining the method in which traffic
- is generated.
-
- The Network Service benchmarking model extends the Network Service
- definition with a set of Traffic Generators (TG) that are treated
- same way as other VNFs being a part of benchmarked network service.
- Same as other VNFs the traffic generator are instantiated and terminated.
+.. code-block:: console
- Every traffic generator has own configuration defined as a traffic profile
- and a set of KPIs supported. The python models for TG is extended by
- specific calls to listen and generate traffic.
+ +--------+ +-------+ +--------+
+ | | | | | |
+ | Trex | ---> | VNF | ---> | Trex |
+ | | | | | |
+ +--------+ +-------+ +--------+
- * *The stateless TREX traffic generator*: The main traffic generator used as
- Network Service stimulus is open source TREX tool.
+Supported test case scenarios:
- The TREX tool can generate any kind of stateless traffic.
+* Correlated UDP traffic using TREX traffic generator and replay VNF.
- .. code-block:: console
+  * Using different IMIX configurations, e.g. pure voice, pure video traffic
+  * Using different numbers of IP flows, e.g. 1, 1K, 16K, 64K, 256K, 1M flows
+  * Using different numbers of configured rules, e.g. 1, 1K, 10K rules
- +--------+ +-------+ +--------+
- | | | | | |
- | Trex | ---> | VNF | ---> | Trex |
- | | | | | |
- +--------+ +-------+ +--------+
+For UDP correlated traffic the following Key Performance Indicators are
+collected for every combination of test case parameters:
- Supported testcases scenarios:
+* RFC2544 throughput for the various loss rates defined (1% is the default)
- - Correlated UDP traffic using TREX traffic generator and replay VNF.
+KPI Collection
+^^^^^^^^^^^^^^
- - using different IMIX configuration like pure voice, pure video traffic etc
+KPI collection is the process of sampling KPIs at multiple intervals to allow
+for investigation into anomalies during runtime. Some KPI intervals are
+adjustable. KPIs are collected from the traffic generators and the NFVI for
+the SUT. Some reporting is already available in NSB, but NSB collects all
+KPIs for analytics tools to process.
- - using different number IP flows like 1 flow, 1K, 16K, 64K, 256K, 1M flows
+Below is an example list of basic KPIs:
- - Using different number of rules configured like 1 rule, 1K, 10K rules
+* Throughput
+* Latency
+* Packet delay variation
+* Maximum establishment rate
+* Maximum tear-down rate
+* Maximum simultaneous number of sessions
- For UDP correlated traffic following Key Performance Indicators are collected
- for every combination of test case parameters:
+Of course, there can be many other KPIs that will be relevant for a specific
+NFVI, but in most cases these KPIs are enough to give you a basic picture of
+the SUT. NSB also uses :term:`collectd` in order to collect the KPIs. Currently
+the following collectd plug-ins are enabled for NSB testcases:
- - RFC2544 throughput for various loss rate defined (1% is a default)
+* Libvirt
+* Interface stats
+* OvS events
+* vSwitch stats
+* Huge Pages
+* RAM
+* CPU usage
+* Intel® PMU
+* Intel® RDT
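+
+As a quick sanity check, you can verify which plug-ins are enabled on a target
+node by inspecting its collectd configuration. This is a minimal sketch; the
+configuration path shown is the common Debian/Ubuntu default and may differ
+for an NSB-installed collectd:
+
+.. code-block:: console
+
+    # list the plug-ins loaded by collectd (path may vary per installation)
+    grep LoadPlugin /etc/collectd/collectd.conf
+    # check that the collectd service is running
+    systemctl status collectd
+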
Graphical Overview
-==================
+------------------
-NSB Testing with yardstick framework facilitate performance testing of various
+NSB testing with the Yardstick framework facilitates performance testing of the various
VNFs provided.
.. code-block:: console
+-----------+
- | | +-----------+
- | vPE | ->|TGen Port 0|
- | TestCase | | +-----------+
- | | |
- +-----------+ +------------------+ +-------+ |
- | | -- API --> | VNF | <--->
- +-----------+ | Yardstick | +-------+ |
- | Test Case | --> | NSB Testing | |
- +-----------+ | | |
- | | | |
- | +------------------+ |
- +-----------+ | +-----------+
- | Traffic | ->|TGen Port 1|
- | patterns | +-----------+
+ | | +-------------+
+ | vPE | -->| TGen Port 0 |
+ | TestCase | | +-------------+
+ | | |
+ +-----------+ +---------------+ +-------+ |
+ | | ---> | VNF | <--->
+ +-----------+ | Yardstick | +-------+ |
+ | Test Case | --> | NSB Testing | |
+ +-----------+ | | |
+ | | | |
+ | +---------------+ |
+ +-----------+ | +-------------+
+ | Traffic | -->| TGen Port 1 |
+ | patterns | +-------------+
+-----------+
Figure 1: Network Service - 2 server configuration
-VNFs supported for chracterization:
------------------------------------
+VNFs supported for characterization
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
1. CGNAPT - Carrier Grade Network Address and port Translation
2. vFW - Virtual Firewall
3. vACL - Access Control List
-4. Prox - Packet pROcessing eXecution engine:
- - VNF can act as Drop, Basic Forwarding (no touch),
- L2 Forwarding (change MAC), GRE encap/decap, Load balance based on
- packet fields, Symmetric load balancing
- - QinQ encap/decap IPv4/IPv6, ARP, QoS, Routing, Unmpls, Policing, ACL
+4. PROX - Packet pROcessing eXecution engine:
+ * VNF can act as Drop, Basic Forwarding (no touch),
+ L2 Forwarding (change MAC), GRE encap/decap, Load balance based on
+ packet fields, Symmetric load balancing
+ * QinQ encap/decap IPv4/IPv6, ARP, QoS, Routing, Unmpls, Policing, ACL
5. UDP_Replay
diff --git a/docs/testing/user/userguide/13-nsb-installation.rst b/docs/testing/user/userguide/13-nsb-installation.rst
index 00f8cfd97..35f67b92f 100644
--- a/docs/testing/user/userguide/13-nsb-installation.rst
+++ b/docs/testing/user/userguide/13-nsb-installation.rst
@@ -1,36 +1,43 @@
.. This work is licensed under a Creative Commons Attribution 4.0 International
.. License.
.. http://creativecommons.org/licenses/by/4.0
-.. (c) OPNFV, 2016-2017 Intel Corporation.
+.. (c) OPNFV, 2016-2019 Intel Corporation.
-=====================================
-Yardstick - NSB Testing -Installation
-=====================================
+..
+ Convention for heading levels in Yardstick documentation:
-Abstract
-========
+ ======= Heading 0 (reserved for the title in a document)
+ ------- Heading 1
+ ^^^^^^^ Heading 2
+ +++++++ Heading 3
+ ''''''' Heading 4
+
+ Avoid deeper levels because they do not render well.
-The Network Service Benchmarking (NSB) extends the yardstick framework to do
-VNF characterization and benchmarking in three different execution
-environments viz., bare metal i.e. native Linux environment, standalone virtual
-environment and managed virtualized environment (e.g. Open stack etc.).
-It also brings in the capability to interact with external traffic generators
-both hardware & software based for triggering and validating the traffic
-according to user defined profiles.
+
+================
+NSB Installation
+================
+
+.. _OVS-DPDK: http://docs.openvswitch.org/en/latest/intro/install/dpdk/
+.. _devstack: https://docs.openstack.org/devstack/pike/>
+.. _OVS-DPDK-versions: http://docs.openvswitch.org/en/latest/faq/releases/
+
+Abstract
+--------
The steps needed to run Yardstick with NSB testing are:
* Install Yardstick (NSB Testing).
-* Setup/Reference pod.yaml describing Test topology
-* Create/Reference the test configuration yaml file.
+* Set up/reference a ``pod.yaml`` describing the test topology.
+* Create/reference the test configuration yaml file.
* Run the test case.
-
Prerequisites
-=============
+-------------
-Refer chapter Yardstick Installation for more information on yardstick
-prerequisites
+Refer to :doc:`04-installation` for more information on Yardstick
+prerequisites.
Several prerequisites are needed for Yardstick (VNF testing):
@@ -46,11 +53,10 @@ Several prerequisites are needed for Yardstick (VNF testing):
* intel-cmt-cat
Hardware & Software Ingredients
--------------------------------
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
SUT requirements:
-
======= ===================
Item Description
======= ===================
@@ -63,7 +69,6 @@ SUT requirements:
Boot and BIOS settings:
-
============= =================================================
Boot settings default_hugepagesz=1G hugepagesz=1G hugepages=16
hugepagesz=2M hugepages=2048 isolcpus=1-11,22-33
@@ -82,85 +87,224 @@ Boot and BIOS settings:
Turbo Boost Disabled
============= =================================================
-
-
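+
+The boot settings above are applied through the kernel command line. A minimal
+sketch of how this is typically done with GRUB is shown below; the parameter
+values are the ones from the table and should be adjusted to your hardware:
+
+.. code-block:: console
+
+    # in /etc/default/grub, append the boot settings from the table above to
+    # the kernel command line (a single line), e.g.:
+    #   GRUB_CMDLINE_LINUX="default_hugepagesz=1G hugepagesz=1G hugepages=16 hugepagesz=2M hugepages=2048 isolcpus=1-11,22-33"
+    sudo update-grub
+    sudo reboot
+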
Install Yardstick (NSB Testing)
-===============================
-
-Download the source code and install Yardstick from it
-
-.. code-block:: console
-
- git clone https://gerrit.opnfv.org/gerrit/yardstick
+-------------------------------
- cd yardstick
+Yardstick with NSB can be installed using ``nsb_setup.sh``.
+The ``nsb_setup.sh`` script allows you to:
- # Switch to latest stable branch
- # git checkout <tag or stable branch>
- git checkout stable/euphrates
+1. Install Yardstick in the specified mode: bare metal or container.
+   Refer to :doc:`04-installation`.
+2. Install package dependencies on the remote servers used as traffic
+   generators or sample VNFs: DPDK, sample VNFs, TREX, collectd.
+   Add such servers to the ``install-inventory.ini`` file, in either the
+   ``yardstick-standalone`` or the ``yardstick-baremetal`` server group.
+   The script also configures IOMMU, hugepages, open file limits, CPU
+   isolation, etc. on these servers.
+3. Build the VM image, either ``nsb`` or ``normal``. The ``nsb`` VM image is
+   used to run Yardstick sample VNF tests, e.g. vFW, vACL, vCGNAPT.
+   The ``normal`` VM image is used to run Yardstick ping tests in an
+   OpenStack context.
+4. Add the ``nsb`` or ``normal`` VM image to OpenStack, together with the
+   OpenStack variables.
-Configure the network proxy, either using the environment variables or setting
-the global environment file:
+First, configure the network proxy, either by using the environment variables
+or by setting the global environment file.
-.. code-block:: ini
+Set the proxy in the ``/etc/environment`` file::
- cat /etc/environment
http_proxy='http://proxy.company.com:port'
https_proxy='http://proxy.company.com:port'
+Set environment variables:
+
.. code-block:: console
export http_proxy='http://proxy.company.com:port'
export https_proxy='http://proxy.company.com:port'
-The last step is to modify the Yardstick installation inventory, used by
-Ansible:
+Download the source code and check out the latest stable branch:
+
+.. code-block:: console
+
+ git clone https://gerrit.opnfv.org/gerrit/yardstick
+ cd yardstick
+ # Switch to latest stable branch
+ git checkout stable/gambia
+
+Modify the Yardstick installation inventory used by Ansible:
.. code-block:: ini
- cat ./ansible/yardstick-install-inventory.ini
+ cat ./ansible/install-inventory.ini
[jumphost]
- localhost ansible_connection=local
-
- [yardstick-standalone]
- yardstick-standalone-node ansible_host=192.168.1.2
- yardstick-standalone-node-2 ansible_host=192.168.1.3
+ localhost ansible_connection=local
# section below is only due backward compatibility.
# it will be removed later
[yardstick:children]
jumphost
+ [yardstick-baremetal]
+ baremetal ansible_host=192.168.2.51 ansible_connection=ssh
+
+ [yardstick-standalone]
+ standalone ansible_host=192.168.2.52 ansible_connection=ssh
+
[all:vars]
- ansible_user=root
- ansible_pass=root
+ # Uncomment credentials below if needed
+ ansible_user=root
+ ansible_ssh_pass=root
+ # ansible_ssh_private_key_file=/root/.ssh/id_rsa
+ # When IMG_PROPERTY is passed neither normal nor nsb set
+ # "path_to_vm=/path/to/image" to add it to OpenStack
+ # path_to_img=/tmp/workspace/yardstick-image.img
+ # List of CPUs to be isolated (not used by default)
+ # Grub line will be extended with:
+ # "isolcpus=<ISOL_CPUS> nohz=on nohz_full=<ISOL_CPUS> rcu_nocbs=1<ISOL_CPUS>"
+ # ISOL_CPUS=2-27,30-55 # physical cpu's for all NUMA nodes, four cpu's reserved
-To execute an installation for a Bare-Metal or a Standalone context:
+.. warning::
-.. code-block:: console
+   Before running ``nsb_setup.sh`` make sure Python is installed on all
+   servers added to the ``yardstick-standalone`` and ``yardstick-baremetal``
+   groups.
- ./nsb_setup.sh
+.. note::
+   SSH access without a password needs to be configured for all nodes defined
+   in the ``install-inventory.ini`` file.
+   If you want to use password authentication, you need to install ``sshpass``::
-To execute an installation for an OpenStack context:
+ sudo -EH apt-get install sshpass
-.. code-block:: console
+
+.. note::
+
+ A VM image built by other means than Yardstick can be added to OpenStack.
+   Uncomment and set the correct path to the VM image in the
+   ``install-inventory.ini`` file::
+
+ path_to_img=/tmp/workspace/yardstick-image.img
+
+
+.. note::
+
+   CPU isolation can be applied to the remote servers, e.g.:
+   ``ISOL_CPUS=2-27,30-55``. Uncomment and modify accordingly in the
+   ``install-inventory.ini`` file.
+
+By default ``nsb_setup.sh`` pulls the Yardstick image based on Ubuntu 16.04
+from Docker Hub and starts a container, builds the NSB VM image based on
+Ubuntu 16.04, and installs packages on the servers given in the
+``yardstick-standalone`` and ``yardstick-baremetal`` host groups.
+
+To pull the Yardstick image based on Ubuntu 18.04, run::
+
+ ./nsb_setup.sh -i opnfv/yardstick-ubuntu-18.04:latest
+
+To change the default behaviour, modify the parameters for ``install.yaml``
+in the ``nsb_setup.sh`` file.
+
+Refer to chapter :doc:`04-installation` for more details on ``install.yaml``
+parameters.
+
+To execute an installation for a **Bare-Metal** or a **Standalone** context::
+
+ ./nsb_setup.sh
+
+To execute an installation for an **OpenStack** context::
./nsb_setup.sh <path to admin-openrc.sh>
-Above command setup docker with latest yardstick code. To execute
+.. note::
-.. code-block:: console
+   Yardstick may not be operational after a Linux kernel update of the
+   distribution if Yardstick was installed before the update. Run
+   ``nsb_setup.sh`` again to resolve this.
+
+.. warning::
+
+ The Yardstick VM image (NSB or normal) cannot be built inside a VM.
+
+.. warning::
+
+   The ``nsb_setup.sh`` script configures hugepages, CPU isolation and IOMMU
+   in the grub configuration. A reboot of the servers in the
+   ``yardstick-standalone`` or ``yardstick-baremetal`` groups in the
+   ``install-inventory.ini`` file is required to apply those changes.
+
+The above commands will set up Docker with the latest Yardstick code. To
+execute::
docker exec -it yardstick bash
+.. note::
+
+   You may need to configure the tty in the Docker container to extend the
+   command-line character length, for example::
+
+      stty size rows 58 cols 234
+
It will also automatically download all the packages needed for NSB Testing
-setup. Refer chapter :doc:`04-installation` for more on docker
-**Install Yardstick using Docker (recommended)**
+setup. Refer to chapter :doc:`04-installation` for more on Docker:
+:ref:`Install Yardstick using Docker`
-System Topology:
-================
+Bare Metal context example
+^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Let's assume there are three servers acting as TG, sample VNF DUT and jump host.
+
+Perform the following steps to install NSB:
+
+1. Clone Yardstick repo to jump host.
+2. Add the TG and DUT servers to the ``yardstick-baremetal`` group in the
+   ``install-inventory.ini`` file to install NSB and its dependencies.
+   Install Python on the servers.
+3. Start the deployment using the Docker image based on Ubuntu 16.04:
+
+.. code-block:: console
+
+ ./nsb_setup.sh
+
+4. Reboot the bare metal servers.
+5. Enter the Yardstick container, modify the pod yaml file and run tests
+   (see the sketch below).
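+
+A minimal sketch of step 5, using the sample pod file and an example test case
+path from this guide:
+
+.. code-block:: console
+
+    docker exec -it yardstick bash
+    # inside the container: create and edit the pod file
+    cp <yardstick>/etc/yardstick/nodes/pod.yaml.nsb.sample /etc/yardstick/nodes/pod.yaml
+    vi /etc/yardstick/nodes/pod.yaml   # update ip, user, password, pcis, etc.
+    # run a test case
+    yardstick --debug task start yardstick/samples/vnf_samples/nsut/<vnf>/<test case>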
+
+Standalone context example for Ubuntu 18
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Let's assume there are three servers acting as TG, sample VNF DUT and jump host.
+Ubuntu 18 is installed on all servers.
+
+Perform the following steps to install NSB:
+
+1. Clone Yardstick repo to jump host.
+2. Add the TG server to the ``yardstick-baremetal`` group in the
+   ``install-inventory.ini`` file to install NSB and its dependencies.
+   Add the server where the VM with the sample VNF will be deployed to the
+   ``yardstick-standalone`` group in the ``install-inventory.ini`` file.
+   The target VM image named ``yardstick-nsb-image.img`` will be placed in
+   ``/var/lib/libvirt/images/``.
+   Install Python on the servers.
+3. Modify ``nsb_setup.sh`` on jump host:
+
+.. code-block:: console
+
+ ansible-playbook \
+ -e IMAGE_PROPERTY='nsb' \
+ -e OS_RELEASE='bionic' \
+ -e INSTALLATION_MODE='container_pull' \
+ -e YARD_IMAGE_ARCH='amd64' ${extra_args} \
+ -i install-inventory.ini install.yaml
+
+4. Start the deployment with the Yardstick Docker image based on Ubuntu 18.04:
+
+.. code-block:: console
+
+ ./nsb_setup.sh -i opnfv/yardstick-ubuntu-18.04:latest -o <openrc_file>
+
+5. Reboot the servers.
+6. Enter the Yardstick container, modify the pod yaml file and run tests
+   (see the sketch below).
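+
+A minimal sketch of step 6; the image check simply confirms that the
+deployment placed the VM image where this guide expects it:
+
+.. code-block:: console
+
+    # on the yardstick-standalone server: verify the target VM image
+    ls -lh /var/lib/libvirt/images/yardstick-nsb-image.img
+    # on the jump host: enter the container and run tests
+    docker exec -it yardstick bash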
+
+
+System Topology
+---------------
.. code-block:: console
@@ -171,30 +315,30 @@ System Topology:
| | | |
| | (1)<-----(1) | |
+----------+ +----------+
- trafficgen_1 vnf
+ trafficgen_0 vnf
Environment parameters and credentials
-======================================
+--------------------------------------
-Config yardstick conf
----------------------
+Configure yardstick.conf
+^^^^^^^^^^^^^^^^^^^^^^^^
-If user did not run 'yardstick env influxdb' inside the container, which will
-generate correct ``yardstick.conf``, then create the config file manually (run
-inside the container):
-::
+If you did not run ``yardstick env influxdb`` inside the container to generate
+``yardstick.conf``, then create the config file manually (run inside the
+container)::
cp ./etc/yardstick/yardstick.conf.sample /etc/yardstick/yardstick.conf
vi /etc/yardstick/yardstick.conf
-Add trex_path, trex_client_lib and bin_path in 'nsb' section.
+Add ``trex_path``, ``trex_client_lib`` and ``bin_path`` to the ``nsb``
+section:
-::
+.. code-block:: ini
[DEFAULT]
debug = True
- dispatcher = file, influxdb
+ dispatcher = influxdb
[dispatcher_influxdb]
timeout = 5
@@ -209,30 +353,38 @@ Add trex_path, trex_client_lib and bin_path in 'nsb' section.
trex_client_lib=/opt/nsb_bin/trex_client/stl
Run Yardstick - Network Service Testcases
-=========================================
-
+-----------------------------------------
NS testing - using yardstick CLI
---------------------------------
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
See :doc:`04-installation`
-.. code-block:: console
-
+Connect to the Yardstick container::
docker exec -it yardstick /bin/bash
- source /etc/yardstick/openstack.creds (only for heat TC if nsb_setup.sh was NOT used)
- export EXTERNAL_NETWORK="<openstack public network>" (only for heat TC)
+
+If you're running ``heat`` testcases and ``nsb_setup.sh`` was not used::
+
+ source /etc/yardstick/openstack.creds
+
+In addition to the above, you need to set the ``EXTERNAL_NETWORK`` for
+OpenStack::
+
+ export EXTERNAL_NETWORK="<openstack public network>"
+
+Finally, you should be able to run the testcase::
+
yardstick --debug task start yardstick/samples/vnf_samples/nsut/<vnf>/<test case>
Network Service Benchmarking - Bare-Metal
-=========================================
+-----------------------------------------
Bare-Metal Config pod.yaml describing Topology
-----------------------------------------------
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Bare-Metal 2-Node setup
-^^^^^^^^^^^^^^^^^^^^^^^
++++++++++++++++++++++++
.. code-block:: console
+----------+ +----------+
@@ -242,10 +394,10 @@ Bare-Metal 2-Node setup
| | | |
| | (n)<-----(n) | |
+----------+ +----------+
- trafficgen_1 vnf
+ trafficgen_0 vnf
Bare-Metal 3-Node setup - Correlated Traffic
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+++++++++++++++++++++++++++++++++++++++++++++
.. code-block:: console
+----------+ +----------+ +------------+
@@ -256,21 +408,21 @@ Bare-Metal 3-Node setup - Correlated Traffic
| | | | | |
| | | |(1)<---->(0)| |
+----------+ +----------+ +------------+
- trafficgen_1 vnf trafficgen_2
+ trafficgen_0 vnf trafficgen_1
Bare-Metal Config pod.yaml
---------------------------
-Before executing Yardstick test cases, make sure that pod.yaml reflects the
+^^^^^^^^^^^^^^^^^^^^^^^^^^
+Before executing Yardstick test cases, make sure that ``pod.yaml`` reflects the
topology and update all the required fields::
- cp /etc/yardstick/nodes/pod.yaml.nsb.sample /etc/yardstick/nodes/pod.yaml
+ cp <yardstick>/etc/yardstick/nodes/pod.yaml.nsb.sample /etc/yardstick/nodes/pod.yaml
.. code-block:: YAML
nodes:
-
- name: trafficgen_1
+ name: trafficgen_0
role: TrafficGen
ip: 1.1.1.1
user: root
@@ -289,7 +441,7 @@ topology and update all the required fields.::
dpdk_port_num: 1
local_ip: "152.16.40.20"
netmask: "255.255.255.0"
- local_mac: "00:00.00:00:00:02"
+ local_mac: "00:00:00:00:00:02"
-
name: vnf
@@ -334,53 +486,94 @@ topology and update all the required fields.::
if: "xe1"
-Network Service Benchmarking - Standalone Virtualization
-========================================================
+Standalone Virtualization
+-------------------------
+
+The VM can be deployed manually or by Yardstick. If the parameter *vm_deploy*
+is set to ``True``, the VM will be deployed by Yardstick; otherwise the VM
+should be deployed manually. Test case example, context section::
+
+ contexts:
+ ...
+ vm_deploy: True
+
SR-IOV
-------
+^^^^^^
SR-IOV Pre-requisites
-^^^^^^^^^^^^^^^^^^^^^
++++++++++++++++++++++
-On Host:
- a) Create a bridge for VM to connect to external network
+On the host where the VM is created:
+ 1. Create and configure a bridge named ``br-int`` for the VM to connect to
+    the external network. Currently this can be done using a VXLAN tunnel.
- .. code-block:: console
+    Execute the following on the host where the VM is created::
+ ip link add type vxlan remote <Jumphost IP> local <DUT IP> id <ID: 10> dstport 4789
brctl addbr br-int
- brctl addif br-int <interface_name> #This interface is connected to internet
+ brctl addif br-int vxlan0
+ ip link set dev vxlan0 up
+ ip addr add <IP#1, like: 172.20.2.1/24> dev br-int
+ ip link set dev br-int up
- b) Build guest image for VNF to run.
- Most of the sample test cases in Yardstick are using a guest image called
- ``yardstick-image`` which deviates from an Ubuntu Cloud Server image
- Yardstick has a tool for building this custom image with samplevnf.
- It is necessary to have ``sudo`` rights to use this tool.
+    .. note:: You may need to add extra rules to iptables to forward traffic.
- Also you may need to install several additional packages to use this tool, by
- following the commands below::
+ .. code-block:: console
- sudo apt-get update && sudo apt-get install -y qemu-utils kpartx
+ iptables -A FORWARD -i br-int -s <network ip address>/<netmask> -j ACCEPT
+ iptables -A FORWARD -o br-int -d <network ip address>/<netmask> -j ACCEPT
- This image can be built using the following command in the directory where Yardstick is installed
+ Execute the following on a jump host:
- .. code-block:: console
+ .. code-block:: console
- export YARD_IMG_ARCH='amd64'
- sudo echo "Defaults env_keep += \'YARD_IMG_ARCH\'" >> /etc/sudoers
+ ip link add type vxlan remote <DUT IP> local <Jumphost IP> id <ID: 10> dstport 4789
+ ip addr add <IP#2, like: 172.20.2.2/24> dev vxlan0
+ ip link set dev vxlan0 up
- Please use ansible script to generate a cloud image refer to :doc:`04-installation`
+    .. note:: The host and the jump host are different bare metal servers.
+
+ 2. Modify the test case management CIDR.
+    The IP addresses IP#1, IP#2 and the CIDR must be in the same network.
+
+ .. code-block:: YAML
+
+ servers:
+ vnf_0:
+ network_ports:
+ mgmt:
+ cidr: '1.1.1.7/24'
+
+ 3. Build the guest image for the VNF to run.
+    Most of the sample test cases in Yardstick use a guest image called
+    ``yardstick-nsb-image`` which deviates from an Ubuntu Cloud Server image.
+    Yardstick has a tool for building this custom image with SampleVNF.
+    It is necessary to have ``sudo`` rights to use this tool.
- for more details refer to chapter :doc:`04-installation`
+    You may also need to install several additional packages to use this
+    tool, by following the commands below::
- .. note:: VM should be build with static IP and should be accessible from yardstick host.
+ sudo apt-get update && sudo apt-get install -y qemu-utils kpartx
+
+ This image can be built using the following command in the directory where
+ Yardstick is installed::
+
+ export YARD_IMG_ARCH='amd64'
+ sudo echo "Defaults env_keep += \'YARD_IMG_ARCH\'" >> /etc/sudoers
+
+ For instructions on generating a cloud image using Ansible, refer to
+ :doc:`04-installation`.
+
+    .. note:: The VM should be built with a static IP and be accessible from
+       the Yardstick host.
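+
+As a sanity check before running SR-IOV test cases, you can confirm that the
+physical function (PF) supports virtual functions (VFs) using the standard
+kernel sysfs interface. This is a sketch; ``enp24s0f0`` is an example PF name:
+
+.. code-block:: console
+
+    # maximum number of VFs the PF supports
+    cat /sys/class/net/enp24s0f0/device/sriov_totalvfs
+    # create two VFs on the PF
+    echo 2 | sudo tee /sys/class/net/enp24s0f0/device/sriov_numvfs
+    # the new VFs should appear on the PCI bus
+    lspci | grep -i "virtual function"
+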
SR-IOV Config pod.yaml describing Topology
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+++++++++++++++++++++++++++++++++++++++++++
-SR-IOV 2-Node setup:
-^^^^^^^^^^^^^^^^^^^^
+SR-IOV 2-Node setup
++++++++++++++++++++
.. code-block:: console
+--------------------+
@@ -398,42 +591,42 @@ SR-IOV 2-Node setup:
+----------+ +-------------------------+
| | | ^ ^ |
| | | | | |
- | | (0)<----->(0) | ------ | |
- | TG1 | | SUT | |
- | | | | |
- | | (n)<----->(n) |------------------ |
+ | | (0)<----->(0) | ------ SUT | |
+ | TG1 | | | |
+ | | (n)<----->(n) | ----------------- |
+ | | | |
+----------+ +-------------------------+
- trafficgen_1 host
+ trafficgen_0 host
SR-IOV 3-Node setup - Correlated Traffic
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+++++++++++++++++++++++++++++++++++++++++
.. code-block:: console
- +--------------------+
- | |
- | |
- | DUT |
- | (VNF) |
- | |
- +--------------------+
- | VF NIC | | VF NIC |
- +--------+ +--------+
- ^ ^
- | |
- | |
- +----------+ +-------------------------+ +--------------+
- | | | ^ ^ | | |
- | | | | | | | |
- | | (0)<----->(0) | ------ | | | TG2 |
- | TG1 | | SUT | | | (UDP Replay) |
- | | | | | | |
- | | (n)<----->(n) | ------ | (n)<-->(n) | |
- +----------+ +-------------------------+ +--------------+
- trafficgen_1 host trafficgen_2
-
-Before executing Yardstick test cases, make sure that pod.yaml reflects the
+ +--------------------+
+ | |
+ | |
+ | DUT |
+ | (VNF) |
+ | |
+ +--------------------+
+ | VF NIC | | VF NIC |
+ +--------+ +--------+
+ ^ ^
+ | |
+ | |
+ +----------+ +---------------------+ +--------------+
+ | | | ^ ^ | | |
+ | | | | | | | |
+ | | (0)<----->(0) |----- | | | TG2 |
+ | TG1 | | SUT | | | (UDP Replay) |
+ | | | | | | |
+ | | (n)<----->(n) | -----| (n)<-->(n) | |
+ +----------+ +---------------------+ +--------------+
+ trafficgen_0 host trafficgen_1
+
+Before executing Yardstick test cases, make sure that ``pod.yaml`` reflects the
topology and update all the required fields.
.. code-block:: console
@@ -444,13 +637,13 @@ topology and update all the required fields.
.. note:: Update all the required fields like ip, user, password, pcis, etc...
SR-IOV Config pod_trex.yaml
-^^^^^^^^^^^^^^^^^^^^^^^^^^^
++++++++++++++++++++++++++++
.. code-block:: YAML
nodes:
-
- name: trafficgen_1
+ name: trafficgen_0
role: TrafficGen
ip: 1.1.1.1
user: root
@@ -470,10 +663,10 @@ SR-IOV Config pod_trex.yaml
dpdk_port_num: 1
local_ip: "152.16.40.20"
netmask: "255.255.255.0"
- local_mac: "00:00.00:00:00:02"
+ local_mac: "00:00:00:00:00:02"
SR-IOV Config host_sriov.yaml
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
++++++++++++++++++++++++++++++
.. code-block:: YAML
@@ -488,8 +681,8 @@ SR-IOV Config host_sriov.yaml
SR-IOV testcase update:
``<yardstick>/samples/vnf_samples/nsut/vfw/tc_sriov_rfc2544_ipv4_1rule_1flow_64B_trex.yaml``
-Update "contexts" section
-"""""""""""""""""""""""""
+Update contexts section
+'''''''''''''''''''''''
.. code-block:: YAML
@@ -511,7 +704,7 @@ Update "contexts" section
user: "" # update VM username
password: "" # update password
servers:
- vnf:
+ vnf_0:
network_ports:
mgmt:
cidr: '1.1.1.61/24' # Update VM IP address, if static, <ip>/<mask> or if dynamic, <start of ip>/<mask>
@@ -532,55 +725,176 @@ Update "contexts" section
gateway_ip: '152.16.100.20'
+SR-IOV configuration options
++++++++++++++++++++++++++++++
+
+The only configuration option available for SR-IOV is *vpci*. It is used as
+the base address for the VFs that are created during the SR-IOV test case
+execution.
+
+ .. code-block:: yaml+jinja
+
+ networks:
+ uplink_0:
+ phy_port: "0000:05:00.0"
+ vpci: "0000:00:07.0"
+ cidr: '152.16.100.10/24'
+ gateway_ip: '152.16.100.20'
+ downlink_0:
+ phy_port: "0000:05:00.1"
+ vpci: "0000:00:08.0"
+ cidr: '152.16.40.10/24'
+ gateway_ip: '152.16.100.20'
+
+.. _`VM image properties label`:
+
+VM image properties
+'''''''''''''''''''
+
+VM image properties example under *flavor* section:
+
+ .. code-block:: console
+
+ flavor:
+ images: <path>
+ ram: 8192
+ extra_specs:
+ machine_type: 'pc-i440fx-xenial'
+ hw:cpu_sockets: 1
+ hw:cpu_cores: 6
+ hw:cpu_threads: 2
+ hw_socket: 0
+ cputune: |
+ <cputune>
+ <vcpupin vcpu="0" cpuset="7"/>
+ <vcpupin vcpu="1" cpuset="8"/>
+ ...
+ <vcpupin vcpu="11" cpuset="18"/>
+ <emulatorpin cpuset="11"/>
+ </cputune>
+ user: ""
+ password: ""
+
+VM image properties description:
+
+ +-------------------------+-------------------------------------------------+
+ | Parameters | Detail |
+ +=========================+=================================================+
+ | images || Path to the VM image generated by |
+ | | ``nsb_setup.sh`` |
+ | || Default path is ``/var/lib/libvirt/images/`` |
+ | || Default file name ``yardstick-nsb-image.img`` |
+ | | or ``yardstick-image.img`` |
+ +-------------------------+-------------------------------------------------+
+ | ram || Amount of RAM to be used for VM |
+ | || Default is 4096 MB |
+ +-------------------------+-------------------------------------------------+
+ | hw:cpu_sockets || Number of sockets provided to the guest VM |
+ | || Default is 1 |
+ +-------------------------+-------------------------------------------------+
+ | hw:cpu_cores || Number of cores provided to the guest VM |
+ | || Default is 2 |
+ +-------------------------+-------------------------------------------------+
+ | hw:cpu_threads || Number of threads provided to the guest VM |
+ | || Default is 2 |
+ +-------------------------+-------------------------------------------------+
+ | hw_socket || Generate vcpu cpuset from given HW socket |
+ | || Default is 0 |
+ +-------------------------+-------------------------------------------------+
+ | cputune || Maps virtual cpu with logical cpu |
+ +-------------------------+-------------------------------------------------+
+ | machine_type || Machine type to be emulated in VM |
+ | || Default is 'pc-i440fx-xenial' |
+ +-------------------------+-------------------------------------------------+
+ | user || User name to access the VM |
+ | || Default value is 'root' |
+ +-------------------------+-------------------------------------------------+
+ | password || Password to access the VM |
+ +-------------------------+-------------------------------------------------+
+
OVS-DPDK
---------
+^^^^^^^^
OVS-DPDK Pre-requisites
-^^^^^^^^^^^^^^^^^^^^^^^
++++++++++++++++++++++++
+
+On the host where the VM is created:
+ 1. Create and configure a bridge named ``br-int`` for the VM to connect to
+    the external network. Currently this can be done using a VXLAN tunnel.
-On Host:
- a) Create a bridge for VM to connect to external network
+    Execute the following on the host where the VM is created:
.. code-block:: console
+ ip link add type vxlan remote <Jumphost IP> local <DUT IP> id <ID: 10> dstport 4789
brctl addbr br-int
- brctl addif br-int <interface_name> #This interface is connected to internet
+ brctl addif br-int vxlan0
+ ip link set dev vxlan0 up
+ ip addr add <IP#1, like: 172.20.2.1/24> dev br-int
+ ip link set dev br-int up
- b) Build guest image for VNF to run.
+    .. note:: You may need to add extra rules to iptables to forward traffic.
+
+ .. code-block:: console
+
+ iptables -A FORWARD -i br-int -s <network ip address>/<netmask> -j ACCEPT
+ iptables -A FORWARD -o br-int -d <network ip address>/<netmask> -j ACCEPT
+
+ Execute the following on a jump host:
+
+ .. code-block:: console
+
+ ip link add type vxlan remote <DUT IP> local <Jumphost IP> id <ID: 10> dstport 4789
+ ip addr add <IP#2, like: 172.20.2.2/24> dev vxlan0
+ ip link set dev vxlan0 up
+
+    .. note:: The host and the jump host are different bare metal servers.
+
+ 2. Modify the test case management CIDR.
+    The IP addresses IP#1, IP#2 and the CIDR must be in the same network.
+
+ .. code-block:: YAML
+
+ servers:
+ vnf_0:
+ network_ports:
+ mgmt:
+ cidr: '1.1.1.7/24'
+
+ 3. Build the guest image for the VNF to run.
Most of the sample test cases in Yardstick are using a guest image called
- ``yardstick-image`` which deviates from an Ubuntu Cloud Server image
- Yardstick has a tool for building this custom image with samplevnf.
+    ``yardstick-nsb-image`` which deviates from an Ubuntu Cloud Server image.
+    Yardstick has a tool for building this custom image with SampleVNF.
It is necessary to have ``sudo`` rights to use this tool.
- Also you may need to install several additional packages to use this tool, by
- following the commands below::
+ You may need to install several additional packages to use this tool, by
+ following the commands below::
- sudo apt-get update && sudo apt-get install -y qemu-utils kpartx
+ sudo apt-get update && sudo apt-get install -y qemu-utils kpartx
- This image can be built using the following command in the directory where Yardstick is installed::
+ This image can be built using the following command in the directory where
+ Yardstick is installed::
- export YARD_IMG_ARCH='amd64'
- sudo echo "Defaults env_keep += \'YARD_IMG_ARCH\'" >> /etc/sudoers
- sudo tools/yardstick-img-dpdk-modify tools/ubuntu-server-cloudimg-samplevnf-modify.sh
+ export YARD_IMG_ARCH='amd64'
+ sudo echo "Defaults env_keep += \'YARD_IMG_ARCH\'" >> /etc/sudoers
+ sudo tools/yardstick-img-dpdk-modify tools/ubuntu-server-cloudimg-samplevnf-modify.sh
- for more details refer to chapter :doc:`04-installation`
+ for more details refer to chapter :doc:`04-installation`
- .. note:: VM should be build with static IP and should be accessible from yardstick host.
+    .. note:: The VM should be built with a static IP and should be
+       accessible from the Yardstick host.
- c) OVS & DPDK version.
- - OVS 2.7 and DPDK 16.11.1 above version is supported
+4. OVS & DPDK version:
- d) Setup OVS/DPDK on host.
- Please refer to below link on how to setup `OVS-DPDK <http://docs.openvswitch.org/en/latest/intro/install/dpdk/>`_
+   * OVS 2.7 and DPDK 16.11.1 or above are supported
+Refer to the setup instructions at `OVS-DPDK`_ to set up OVS-DPDK on the host.
OVS-DPDK Config pod.yaml describing Topology
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+++++++++++++++++++++++++++++++++++++++++++++
OVS-DPDK 2-Node setup
-^^^^^^^^^^^^^^^^^^^^^
-
++++++++++++++++++++++
.. code-block:: console
@@ -606,11 +920,11 @@ OVS-DPDK 2-Node setup
| | | (ovs-dpdk) | |
| | (n)<----->(n) |------------------ |
+----------+ +-------------------------+
- trafficgen_1 host
+ trafficgen_0 host
OVS-DPDK 3-Node setup - Correlated Traffic
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+++++++++++++++++++++++++++++++++++++++++++
.. code-block:: console
@@ -636,13 +950,11 @@ OVS-DPDK 3-Node setup - Correlated Traffic
| | | (ovs-dpdk) | | | |
| | (n)<----->(n) | ------ |(n)<-->(n)| |
+----------+ +-------------------------+ +------------+
- trafficgen_1 host trafficgen_2
+ trafficgen_0 host trafficgen_1
-Before executing Yardstick test cases, make sure that pod.yaml reflects the
-topology and update all the required fields.
-
-.. code-block:: console
+Before executing Yardstick test cases, make sure that the ``pod.yaml`` reflects
+the topology and update all the required fields::
cp <yardstick>/etc/yardstick/nodes/standalone/trex_bm.yaml.sample /etc/yardstick/nodes/standalone/pod_trex.yaml
cp <yardstick>/etc/yardstick/nodes/standalone/host_ovs.yaml /etc/yardstick/nodes/standalone/host_ovs.yaml
@@ -650,13 +962,13 @@ topology and update all the required fields.
.. note:: Update all the required fields like ip, user, password, pcis, etc...
OVS-DPDK Config pod_trex.yaml
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
++++++++++++++++++++++++++++++
.. code-block:: YAML
nodes:
-
- name: trafficgen_1
+ name: trafficgen_0
role: TrafficGen
ip: 1.1.1.1
user: root
@@ -675,10 +987,10 @@ OVS-DPDK Config pod_trex.yaml
dpdk_port_num: 1
local_ip: "152.16.40.20"
netmask: "255.255.255.0"
- local_mac: "00:00.00:00:00:02"
+ local_mac: "00:00:00:00:00:02"
OVS-DPDK Config host_ovs.yaml
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
++++++++++++++++++++++++++++++
.. code-block:: YAML
@@ -693,8 +1005,8 @@ OVS-DPDK Config host_ovs.yaml
ovs_dpdk testcase update:
``<yardstick>/samples/vnf_samples/nsut/vfw/tc_ovs_rfc2544_ipv4_1rule_1flow_64B_trex.yaml``
-Update "contexts" section
-"""""""""""""""""""""""""
+Update contexts section
+'''''''''''''''''''''''
.. code-block:: YAML
@@ -727,7 +1039,7 @@ Update "contexts" section
user: "" # update VM username
password: "" # update password
servers:
- vnf:
+ vnf_0:
network_ports:
mgmt:
cidr: '1.1.1.61/24' # Update VM IP address, if static, <ip>/<mask> or if dynamic, <start of ip>/<mask>
@@ -747,17 +1059,103 @@ Update "contexts" section
cidr: '152.16.40.10/24'
gateway_ip: '152.16.100.20'
+OVS-DPDK configuration options
+++++++++++++++++++++++++++++++
-Network Service Benchmarking - OpenStack with SR-IOV support
-============================================================
+There are a number of configuration options available for the OVS-DPDK
+context in a test case. They are mostly used for performance tuning.
+
+OVS-DPDK properties
+'''''''''''''''''''
+
+OVS-DPDK properties example under *ovs_properties* section:
+
+ .. code-block:: console
+
+ ovs_properties:
+ version:
+ ovs: 2.8.1
+ dpdk: 17.05.2
+ pmd_threads: 4
+ pmd_cpu_mask: "0x3c"
+ ram:
+ socket_0: 2048
+ socket_1: 2048
+ queues: 2
+ vpath: "/usr/local"
+ max_idle: 30000
+ lcore_mask: 0x02
+ dpdk_pmd-rxq-affinity:
+ 0: "0:2,1:2"
+ 1: "0:2,1:2"
+ 2: "0:3,1:3"
+ 3: "0:3,1:3"
+ vhost_pmd-rxq-affinity:
+ 0: "0:3,1:3"
+ 1: "0:3,1:3"
+ 2: "0:4,1:4"
+ 3: "0:4,1:4"
+
+OVS-DPDK properties description:
+
+ +-------------------------+-------------------------------------------------+
+ | Parameters | Detail |
+ +=========================+=================================================+
+ | version || Version of OVS and DPDK to be installed |
+ | || There is a relation between OVS and DPDK |
+ | | version which can be found at |
+ | | `OVS-DPDK-versions`_ |
+ | || By default OVS: 2.6.0, DPDK: 16.07.2 |
+ +-------------------------+-------------------------------------------------+
+ | lcore_mask || Core bitmask used during DPDK initialization |
+ | | where the non-datapath OVS-DPDK threads such |
+ | | as handler and revalidator threads run |
+ +-------------------------+-------------------------------------------------+
+ | pmd_cpu_mask || Core bitmask that sets which cores are used by |
+ | || OVS-DPDK for datapath packet processing |
+ +-------------------------+-------------------------------------------------+
+ | pmd_threads || Number of PMD threads used by OVS-DPDK for |
+ | | datapath |
+ | || This core mask is evaluated in Yardstick |
+ | || It will be used if pmd_cpu_mask is not given |
+ | || Default is 2 |
+ +-------------------------+-------------------------------------------------+
+ | ram || Amount of RAM to be used for each socket, MB |
+ | || Default is 2048 MB |
+ +-------------------------+-------------------------------------------------+
+ | queues || Number of RX queues used for DPDK physical |
+ | | interface |
+ +-------------------------+-------------------------------------------------+
+ | dpdk_pmd-rxq-affinity || RX queue assignment to PMD threads for DPDK |
+ | || e.g.: <port number> : <queue-id>:<core-id> |
+ +-------------------------+-------------------------------------------------+
+ | vhost_pmd-rxq-affinity || RX queue assignment to PMD threads for vhost |
+ | || e.g.: <port number> : <queue-id>:<core-id> |
+ +-------------------------+-------------------------------------------------+
+ | vpath || User path for openvswitch files |
+ | || Default is ``/usr/local`` |
+ +-------------------------+-------------------------------------------------+
+ | max_idle || The maximum time that idle flows will remain |
+ | | cached in the datapath, ms |
+ +-------------------------+-------------------------------------------------+
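+
+Several of these properties (e.g. ``pmd_cpu_mask`` and the rxq affinities) map
+onto standard OVS ``other_config`` options. As an illustration (a sketch,
+assuming the OVS tools are available under ``vpath``), you can inspect what
+was applied on the host after a test run:
+
+.. code-block:: console
+
+    # show the DPDK-related options applied to OVS
+    ovs-vsctl get Open_vSwitch . other_config
+    # show the RX queue to PMD thread assignment
+    ovs-appctl dpif-netdev/pmd-rxq-show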
+
+
+VM image properties
+'''''''''''''''''''
+
+VM image properties are the same as for SR-IOV; see :ref:`VM image properties label`.
+
+
+OpenStack with SR-IOV support
+-----------------------------
This section describes how to run a Sample VNF test case, using Heat context,
with SR-IOV. It also covers how to install OpenStack in Ubuntu 16.04, using
DevStack, with SR-IOV support.
-Single node OpenStack setup with external TG
---------------------------------------------
+Single node OpenStack with external TG
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. code-block:: console
@@ -784,32 +1182,28 @@ Single node OpenStack setup with external TG
| | (PF1)<----->(PF1) +--------------------+ |
| | | |
+----------+ +----------------------------+
- trafficgen_1 host
+ trafficgen_0 host
Host pre-configuration
-^^^^^^^^^^^^^^^^^^^^^^
+++++++++++++++++++++++
-.. warning:: The following configuration requires sudo access to the system. Make
- sure that your user have the access.
+.. warning:: The following configuration requires sudo access to the system.
+   Make sure that your user has access.
-Enable the Intel VT-d or AMD-Vi extension in the BIOS. Some system manufacturers
-disable this extension by default.
+Enable the Intel VT-d or AMD-Vi extension in the BIOS. Some system
+manufacturers disable this extension by default.
Activate the Intel VT-d or AMD-Vi extension in the kernel by modifying the GRUB
config file ``/etc/default/grub``.
-For the Intel platform:
-
-.. code:: bash
+For the Intel platform::
...
GRUB_CMDLINE_LINUX_DEFAULT="intel_iommu=on"
...
-For the AMD platform:
-
-.. code:: bash
+For the AMD platform::
...
GRUB_CMDLINE_LINUX_DEFAULT="amd_iommu=on"
@@ -824,9 +1218,7 @@ Update the grub configuration file and restart the system:
sudo update-grub
sudo reboot
-Make sure the extension has been enabled:
-
-.. code:: bash
+Make sure the extension has been enabled::
sudo journalctl -b 0 | grep -e IOMMU -e DMAR
@@ -839,11 +1231,13 @@ Make sure the extension has been enabled:
Feb 06 14:50:14 hostname kernel: DMAR: dmar1: reg_base_addr e0ffc000 ver 1:0 cap 8d2078c106f0466 ecap f020de
Feb 06 14:50:14 hostname kernel: DMAR: DRHD base: 0x000000ee7fc000 flags: 0x0
+.. TODO: Refer to the yardstick installation guide for proxy set up
+
Setup system proxy (if needed). Add the following configuration into the
``/etc/environment`` file:
.. note:: The proxy server name/port and IPs should be changed according to
- actuall/current proxy configuration in the lab.
+ actual/current proxy configuration in the lab.
.. code:: bash
@@ -861,17 +1255,15 @@ Upgrade the system:
sudo -EH apt-get upgrade
sudo -EH apt-get dist-upgrade
-Install dependencies needed for the DevStack
+Install the dependencies needed for DevStack:
.. code:: bash
- sudo -EH apt-get install python
- sudo -EH apt-get install python-dev
- sudo -EH apt-get install python-pip
+ sudo -EH apt-get install python python-dev python-pip
Setup SR-IOV ports on the host:
-.. note:: The ``enp24s0f0``, ``enp24s0f0`` are physical function (PF) interfaces
+.. note:: The ``enp24s0f0``, ``enp24s0f1`` are physical function (PF) interfaces
on a host and ``enp24s0f3`` is a public interface used in OpenStack, so the
interface names should be changed according to the HW environment used for
testing.
@@ -888,12 +1280,12 @@ Setup SR-IOV ports on the host:
DevStack installation
-^^^^^^^^^^^^^^^^^^^^^
++++++++++++++++++++++
-Use official `Devstack <https://docs.openstack.org/devstack/pike/>`_
-documentation to install OpenStack on a host. Please note, that stable
-``pike`` branch of devstack repo should be used during the installation.
-The required `local.conf`` configuration file are described below.
+If you want to try out NSB but do not have OpenStack set up, you can use
+`Devstack`_ to install OpenStack on a host. Please note that the
+``stable/pike`` branch of the devstack repo should be used during the
+installation. The required ``local.conf`` configuration file is described
+below.
DevStack configuration file:
@@ -904,28 +1296,26 @@ DevStack configuration file:
commands to get device and vendor id of the virtual function (VF).
.. literalinclude:: code/single-devstack-local.conf
- :language: console
+ :language: ini
Start the devstack installation on a host.
-
TG host configuration
-^^^^^^^^^^^^^^^^^^^^^
++++++++++++++++++++++
-Yardstick automatically install and configure Trex traffic generator on TG
+Yardstick automatically installs and configures the Trex traffic generator on the TG
host based on the provided POD file (see below). However, it is recommended to check
-the compatibility of the installed NIC on the TG server with software Trex using
-the manual at https://trex-tgn.cisco.com/trex/doc/trex_manual.html.
-
+the compatibility of the installed NIC on the TG server with software Trex
+using the `manual <https://trex-tgn.cisco.com/trex/doc/trex_manual.html>`_.
Run the Sample VNF test case
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+++++++++++++++++++++++++++++
There is an example of Sample VNF test case ready to be executed in an
OpenStack environment with SR-IOV support: ``samples/vnf_samples/nsut/vfw/
-tc_heat_sriov_external_rfc2544_ipv4_1rule_1flow_64B_trex.yaml``.
+tc_heat_sriov_external_rfc2544_ipv4_1rule_1flow_trex.yaml``.
-Install yardstick using `Install Yardstick (NSB Testing)`_ steps for OpenStack
+Install Yardstick using `Install Yardstick (NSB Testing)`_ steps for OpenStack
context.
Create pod file for TG in the yardstick repo folder located in the yardstick
@@ -936,7 +1326,7 @@ container:
command to get the PF PCI address for ``vpci`` field.
.. literalinclude:: code/single-yardstick-pod.conf
- :language: console
+ :language: ini
Run the Sample vFW RFC2544 SR-IOV TC (``samples/vnf_samples/nsut/vfw/
tc_heat_sriov_external_rfc2544_ipv4_1rule_1flow_64B_trex.yaml``) in the heat
@@ -944,7 +1334,7 @@ context using steps described in `NS testing - using yardstick CLI`_ section.
Multi node OpenStack TG and VNF setup (two nodes)
--------------------------------------------------
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. code-block:: console
@@ -955,7 +1345,7 @@ Multi node OpenStack TG and VNF setup (two nodes)
| |sample-VNF VM | | | |sample-VNF VM | |
| | | | | | | |
| | TG | | | | DUT | |
- | | trafficgen_1 | | | | (VNF) | |
+ | | trafficgen_0 | | | | (VNF) | |
| | | | | | | |
| +--------+ +--------+ | | +--------+ +--------+ |
| | VF NIC | | VF NIC | | | | VF NIC | | VF NIC | |
@@ -975,19 +1365,17 @@ Multi node OpenStack TG and VNF setup (two nodes)
Controller/Compute pre-configuration
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+++++++++++++++++++++++++++++++++++++
Pre-configuration of the controller and compute hosts are the same as
-described in `Host pre-configuration`_ section. Follow the steps in the section.
-
+described in the `Host pre-configuration`_ section.
DevStack configuration
-^^^^^^^^^^^^^^^^^^^^^^
+++++++++++++++++++++++
-Use official `Devstack <https://docs.openstack.org/devstack/pike/>`_
-documentation to install OpenStack on a host. Please note, that stable
-``pike`` branch of devstack repo should be used during the installation.
-The required `local.conf`` configuration file are described below.
+A reference ``local.conf`` for deploying OpenStack in a multi-host environment
+using `Devstack`_ is shown in this section. The ``stable/pike`` branch of
+the devstack repo should be used during the installation.
.. note:: Update the devstack configuration files by replacing the angle
   brackets and the short description inside them with the actual values.
@@ -998,26 +1386,26 @@ The required `local.conf`` configuration file are described below.
DevStack configuration file for controller host:
.. literalinclude:: code/multi-devstack-controller-local.conf
- :language: console
+ :language: ini
DevStack configuration file for compute host:
.. literalinclude:: code/multi-devstack-compute-local.conf
- :language: console
+ :language: ini
Start the devstack installation on the controller and compute hosts.
-
Run the sample vFW TC
-^^^^^^^^^^^^^^^^^^^^^
++++++++++++++++++++++
-Install yardstick using `Install Yardstick (NSB Testing)`_ steps for OpenStack
+Install Yardstick using `Install Yardstick (NSB Testing)`_ steps for OpenStack
context.
-Run sample vFW RFC2544 SR-IOV TC (``samples/vnf_samples/nsut/vfw/
-tc_heat_rfc2544_ipv4_1rule_1flow_64B_trex.yaml``) in the heat
-context using steps described in `NS testing - using yardstick CLI`_ section
-and the following yardtick command line arguments:
+Run the sample vFW RFC2544 SR-IOV test case
+(``samples/vnf_samples/nsut/vfw/tc_heat_rfc2544_ipv4_1rule_1flow_64B_trex.yaml``)
+in the heat context using steps described in
+`NS testing - using yardstick CLI`_ section and the following Yardstick command
+line arguments:
.. code:: bash
@@ -1025,8 +1413,8 @@ and the following yardtick command line arguments:
samples/vnf_samples/nsut/vfw/tc_heat_rfc2544_ipv4_1rule_1flow_64B_trex.yaml
-Enabling other Traffic generator
-================================
+Enabling other Traffic generators
+---------------------------------
IxLoad
^^^^^^
@@ -1045,46 +1433,16 @@ IxLoad
.. code-block:: console
- cp <repo>/etc/yardstick/nodes/pod.yaml.nsb.sample.ixia etc/yardstick/nodes/pod_ixia.yaml
+ cp <repo>/etc/yardstick/nodes/pod.yaml.nsb.sample.ixia \
+ etc/yardstick/nodes/pod_ixia.yaml
Config ``pod_ixia.yaml``
- .. code-block:: yaml
-
- nodes:
- -
- name: trafficgen_1
- role: IxNet
- ip: 1.2.1.1 #ixia machine ip
- user: user
- password: r00t
- key_filename: /root/.ssh/id_rsa
- tg_config:
- ixchassis: "1.2.1.7" #ixia chassis ip
- tcl_port: "8009" # tcl server port
- lib_path: "/opt/ixia/ixos-api/8.01.0.2/lib/ixTcl1.0"
- root_dir: "/opt/ixia/ixos-api/8.01.0.2/"
- py_bin_path: "/opt/ixia/ixload/8.01.106.3/bin/"
- py_lib_path: "/opt/ixia/ixnetwork/8.01.1029.14/lib/PythonApi"
- dut_result_dir: "/mnt/ixia"
- version: 8.1
- interfaces:
- xe0: # logical name from topology.yaml and vnfd.yaml
- vpci: "2:5" # Card:port
- driver: "none"
- dpdk_port_num: 0
- local_ip: "152.16.100.20"
- netmask: "255.255.0.0"
- local_mac: "00:98:10:64:14:00"
- xe1: # logical name from topology.yaml and vnfd.yaml
- vpci: "2:6" # [(Card, port)]
- driver: "none"
- dpdk_port_num: 1
- local_ip: "152.40.40.20"
- netmask: "255.255.0.0"
- local_mac: "00:98:28:28:14:00"
-
- for sriov/ovs_dpdk pod files, please refer to above Standalone Virtualization for ovs-dpdk/sriov configuration
+ .. literalinclude:: code/pod_ixia.yaml
+ :language: yaml
+
+   For SR-IOV/OVS-DPDK pod files, please refer to `Standalone Virtualization`_
+   for the OVS-DPDK/SR-IOV configuration.
3. Start IxOS TCL Server (Install 'Ixia IxExplorer IxOS <version>')
You will also need to configure the IxLoad machine to start the IXIA
@@ -1094,7 +1452,7 @@ IxLoad
* Go to:
``Start->Programs->Ixia->IxOS->IxOS 8.01-GA-Patch1->Ixia Tcl Server IxOS 8.01-GA-Patch1``
or
- ``"C:\Program Files (x86)\Ixia\IxOS\8.01-GA-Patch1\ixTclServer.exe"``
+ ``C:\Program Files (x86)\Ixia\IxOS\8.01-GA-Patch1\ixTclServer.exe``
4. Create a folder ``Results`` in c:\ and share the folder on the network.
@@ -1102,57 +1460,27 @@ IxLoad
``<repo>/samples/vnf_samples/nsut/vfw/tc_baremetal_http_ixload_1b_Requests-65000_Concurrency.yaml``
IxNetwork
----------
+^^^^^^^^^
+
+IxNetwork testcases use the IxNetwork API Python Bindings module, which is
+installed as part of the requirements of the project.
-1. Software needed: ``IxNetworkAPI<ixnetwork verson>Linux64.bin.tgz``
- (Download from ixia support site)
- Install - ``IxNetworkAPI<ixnetwork verson>Linux64.bin.tgz``
-2. Update pod_ixia.yaml file with ixia details.
+1. Update the ``pod_ixia.yaml`` file with the Ixia details.
.. code-block:: console
- cp <repo>/etc/yardstick/nodes/pod.yaml.nsb.sample.ixia etc/yardstick/nodes/pod_ixia.yaml
-
- Config pod_ixia.yaml
-
- .. code-block:: yaml
-
- nodes:
- -
- name: trafficgen_1
- role: IxNet
- ip: 1.2.1.1 #ixia machine ip
- user: user
- password: r00t
- key_filename: /root/.ssh/id_rsa
- tg_config:
- ixchassis: "1.2.1.7" #ixia chassis ip
- tcl_port: "8009" # tcl server port
- lib_path: "/opt/ixia/ixos-api/8.01.0.2/lib/ixTcl1.0"
- root_dir: "/opt/ixia/ixos-api/8.01.0.2/"
- py_bin_path: "/opt/ixia/ixload/8.01.106.3/bin/"
- py_lib_path: "/opt/ixia/ixnetwork/8.01.1029.14/lib/PythonApi"
- dut_result_dir: "/mnt/ixia"
- version: 8.1
- interfaces:
- xe0: # logical name from topology.yaml and vnfd.yaml
- vpci: "2:5" # Card:port
- driver: "none"
- dpdk_port_num: 0
- local_ip: "152.16.100.20"
- netmask: "255.255.0.0"
- local_mac: "00:98:10:64:14:00"
- xe1: # logical name from topology.yaml and vnfd.yaml
- vpci: "2:6" # [(Card, port)]
- driver: "none"
- dpdk_port_num: 1
- local_ip: "152.40.40.20"
- netmask: "255.255.0.0"
- local_mac: "00:98:28:28:14:00"
-
- for sriov/ovs_dpdk pod files, please refer to above Standalone Virtualization for ovs-dpdk/sriov configuration
-
-3. Start IxNetwork TCL Server
+ cp <repo>/etc/yardstick/nodes/pod.yaml.nsb.sample.ixia \
+ etc/yardstick/nodes/pod_ixia.yaml
+
+ Configure ``pod_ixia.yaml``
+
+ .. literalinclude:: code/pod_ixia.yaml
+ :language: yaml
+
+   For SR-IOV/OVS-DPDK pod files, please refer to the
+   `Standalone Virtualization`_ section above for the OVS-DPDK/SR-IOV
+   configuration.
+
+2. Start IxNetwork TCL Server
You will also need to configure the IxNetwork machine to start the IXIA
IxNetworkTclServer. This can be started like so:
@@ -1161,6 +1489,54 @@ IxNetwork
``Start->Programs->Ixia->IxNetwork->IxNetwork 7.21.893.14 GA->IxNetworkTclServer``
(or ``IxNetworkApiServer``)
-4. Execute testcase in samplevnf folder e.g.
+3. Execute the testcase in the samplevnf folder, e.g.
``<repo>/samples/vnf_samples/nsut/vfw/tc_baremetal_rfc2544_ipv4_1rule_1flow_64B_ixia.yaml``
+Spirent Landslide
+-----------------
+
+In order to use Spirent Landslide for vEPC testcases, some dependencies have
+to be preinstalled and properly configured.
+
+- Java
+
+  A 32-bit Java installation is required for the Spirent Landslide TCL API.
+
+ | ``$ sudo apt-get install openjdk-8-jdk:i386``
+
+ .. important::
+      Make sure ``LD_LIBRARY_PATH`` is pointing to the 32-bit JRE. For more
+      details check the `Linux Troubleshooting
+      <http://TAS_HOST_IP/tclapiinstall.html#trouble>`_ section of the
+      installation instructions.
+
+- LsApi (Tcl API module)
+
+  Follow the Landslide documentation for detailed instructions on the Linux
+  installation of the Tcl API and its dependencies:
+  ``http://TAS_HOST_IP/tclapiinstall.html``.
+  For working with the LsApi Python wrapper, only steps 1-5 are required.
+
+  .. note:: After installation make sure your API home path is included in
+     the ``PYTHONPATH`` environment variable (see the example below).
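+
+  For example, assuming the API was installed under ``/opt/lsapi`` (an
+  illustrative path)::
+
+      export PYTHONPATH=$PYTHONPATH:/opt/lsapi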
+
+ .. important::
+      The current version of the LsApi module has an issue with reading
+      ``LD_LIBRARY_PATH``. For the LsApi module to initialize correctly, the
+      following lines (184-186) in lsapi.py
+
+ .. code-block:: python
+
+ ldpath = os.environ.get('LD_LIBRARY_PATH', '')
+ if ldpath == '':
+ environ['LD_LIBRARY_PATH'] = environ['LD_LIBRARY_PATH'] + ':' + ldpath
+
+ should be changed to:
+
+ .. code-block:: python
+
+ ldpath = os.environ.get('LD_LIBRARY_PATH', '')
+ if not ldpath == '':
+ environ['LD_LIBRARY_PATH'] = environ['LD_LIBRARY_PATH'] + ':' + ldpath
+
+.. note:: The Spirent Landslide TCL software package needs to be updated if
+   the user upgrades to a new version of the Spirent Landslide software.
diff --git a/docs/testing/user/userguide/14-nsb-operation.rst b/docs/testing/user/userguide/14-nsb-operation.rst
index 2e741822e..1f9e4d4c6 100644
--- a/docs/testing/user/userguide/14-nsb-operation.rst
+++ b/docs/testing/user/userguide/14-nsb-operation.rst
@@ -1,7 +1,17 @@
.. This work is licensed under a Creative Commons Attribution 4.0 International
.. License.
.. http://creativecommons.org/licenses/by/4.0
-.. (c) OPNFV, 2016-2017 Intel Corporation.
+.. (c) OPNFV, 2016-2019 Intel Corporation.
+..
+ Convention for heading levels in Yardstick documentation:
+
+ ======= Heading 0 (reserved for the title in a document)
+ ------- Heading 1
+ ^^^^^^^ Heading 2
+ +++++++ Heading 3
+ ''''''' Heading 4
+
+ Avoid deeper levels because they do not render well.
Yardstick - NSB Testing - Operation
===================================
@@ -84,11 +94,121 @@ In this example we have ``TRex xe0 <-> xe0 VNF xe1 <-> xe0 UDP_Replay``
downlink_0:
- xe0
+
+Availability zone
+^^^^^^^^^^^^^^^^^
+
+The configuration of the availability zone is required in cases where the
+location of an exact compute host or group of compute hosts needs to be
+specified for a :term:`SampleVNF` or traffic generator in the heat test case.
+If this is the case, please follow the instructions below.
+
+.. _`Create a host aggregate`:
+
+1. Create a host aggregate in the OpenStack and add the available compute hosts
+ into the aggregate group.
+
+   .. note:: Change the ``<AZ_NAME>`` (availability zone name), ``<AGG_NAME>``
+      (host aggregate name) and ``<HOST>`` (host name of one of the compute
+      hosts) in the commands below.
+
+ .. code-block:: bash
+
+ # create host aggregate
+ openstack aggregate create --zone <AZ_NAME> \
+ --property availability_zone=<AZ_NAME> <AGG_NAME>
+ # show available hosts
+ openstack compute service list --service nova-compute
+ # add selected host into the host aggregate
+ openstack aggregate add host <AGG_NAME> <HOST>
+
+2. To specify the OpenStack location (the exact compute host or group of
+   hosts) of the SampleVNF or traffic generator in the heat test case, use
+   the ``availability_zone`` server configuration option. For example:
+
+ .. note:: The ``<AZ_NAME>`` (availability zone name) should be changed according
+ to the name used during the host aggregate creation steps above.
+
+ .. code-block:: yaml
+
+ context:
+ name: yardstick
+ image: yardstick-samplevnfs
+ ...
+ servers:
+ vnf_0:
+ ...
+ availability_zone: <AZ_NAME>
+ ...
+ tg__0:
+ ...
+ availability_zone: <AZ_NAME>
+ ...
+ networks:
+ ...
+
+There are two examples of SampleVNF scale-out test cases which use the
+``availability_zone`` feature to specify the exact location of the scaled
+VNFs and traffic generators.
+
+Those are:
+
+.. code-block:: console
+
+ <repo>/samples/vnf_samples/nsut/prox/tc_prox_heat_context_l2fwd_multiflow-2-scale-out.yaml
+ <repo>/samples/vnf_samples/nsut/vfw/tc_heat_rfc2544_ipv4_1rule_1flow_64B_trex_scale_out.yaml
+
+.. note:: This section describes the PROX scale-out testcase, but the same
+ procedure is used for the vFW test case.
+
+1. Before running the scale-out test case, make sure the host aggregates are
+ configured in the OpenStack environment. To check this, run the following
+ command:
+
+ .. code-block:: console
+
+ # show configured host aggregates (example)
+ openstack aggregate list
+ +----+------+-------------------+
+ | ID | Name | Availability Zone |
+ +----+------+-------------------+
+ | 4 | agg0 | AZ_NAME_0 |
+ | 5 | agg1 | AZ_NAME_1 |
+ +----+------+-------------------+
+
+2. If no host aggregates are configured, please follow the instructions to
+   `Create a host aggregate`_.
+
+
+3. Run the SampleVNF PROX scale-out test case, specifying the
+ ``availability zone`` of each VNF and traffic generator as task arguments.
+
+   .. note:: The ``az_0`` and ``az_1`` should be changed according to the
+      host aggregates created in OpenStack.
+
+ .. code-block:: console
+
+ yardstick -d task start \
+ <repo>/samples/vnf_samples/nsut/prox/tc_prox_heat_context_l2fwd_multiflow-2-scale-out.yaml\
+ --task-args='{
+ "num_vnfs": 4, "availability_zone": {
+ "vnf_0": "az_0", "tg_0": "az_1",
+ "vnf_1": "az_0", "tg_1": "az_1",
+ "vnf_2": "az_0", "tg_2": "az_1",
+ "vnf_3": "az_0", "tg_3": "az_1"
+ }
+ }'
+
+   ``num_vnfs`` specifies how many VNFs are going to be deployed in the
+   ``heat`` contexts. The ``vnf_X`` and ``tg_X`` arguments configure the
+   availability zone where each VNF and traffic generator is going to be
+   deployed.
+
+
Collectd KPIs
-------------
 NSB can collect KPIs from collectd. We have support for various plugins
-enabled by the Barometer project.
+enabled by the :term:`Barometer` project.
The default yardstick-samplevnf has collectd installed. This allows for
collecting KPIs from the VNF.
@@ -98,12 +218,11 @@ We assume that collectd is not installed on the compute nodes.
 To collect KPIs from the NFVi compute nodes:
-
* install_collectd on the compute nodes
* create pod.yaml for the compute nodes
* enable specific plugins depending on the vswitch and DPDK
- example pod.yaml section for Compute node running collectd.
+Example ``pod.yaml`` section for a compute node running collectd:
.. code-block:: yaml
@@ -146,23 +265,23 @@ to the VNF.
An example scale-up Heat testcase is:
-.. literalinclude:: /submodules/yardstick/samples/vnf_samples/nsut/vfw/tc_heat_rfc2544_ipv4_1rule_1flow_64B_trex_scale-up.yaml
- :language: yaml
+.. literalinclude:: /../samples/vnf_samples/nsut/vfw/tc_heat_rfc2544_ipv4_1rule_1flow_trex_scale-up.yaml
+ :language: yaml+jinja
This testcase template requires specifying the number of VCPUs, Memory and Ports.
We set the VCPUs and memory using the ``--task-args`` options
.. code-block:: console
- yardstick task start --task-args='{"mem": 10480, "vcpus": 4, "ports": 2}' \
+ yardstick task start --task-args='{"mem": 10480, "vcpus": 4, "vports": 2}' \
 samples/vnf_samples/nsut/vfw/tc_heat_rfc2544_ipv4_1rule_1flow_trex_scale-up.yaml
In order to support ports scale-up, traffic and topology templates need to be used in testcase.
 An example topology template is:
-.. literalinclude:: /submodules/yardstick/samples/vnf_samples/nsut/vfw/vfw-tg-topology-scale-up.yaml
- :language: yaml
+.. literalinclude:: /../samples/vnf_samples/nsut/vfw/vfw-tg-topology-scale-up.yaml
+ :language: yaml+jinja
This template has ``vports`` as an argument. To pass this argument it needs to
be configured in ``extra_args`` scenario definition. Please note that more
@@ -183,8 +302,8 @@ For example:
 An example traffic profile template is:
-.. literalinclude:: /submodules/yardstick/samples/vnf_samples/traffic_profiles/ipv4_throughput-scale-up.yaml
- :language: yaml
+.. literalinclude:: /../samples/vnf_samples/traffic_profiles/ipv4_throughput-scale-up.yaml
+ :language: yaml+jinja
 There is an option to provide a predefined config for SampleVNFs. The path to
 the config file may be specified in the ``vnf_config`` scenario section.
@@ -200,11 +319,10 @@ Baremetal
^^^^^^^^^
1. Follow above traffic generator section to setup.
2. Edit num of threads in
- ``<repo>/samples/vnf_samples/nsut/vfw/tc_baremetal_rfc2544_ipv4_1rule_1flow_64B_trex_scale_up.yaml``
+ ``<repo>/samples/vnf_samples/nsut/vfw/tc_baremetal_rfc2544_ipv4_trex_scale_up.yaml``
   e.g. 6 threads for a given VNF
-.. code-block:: yaml
-
+.. code-block:: yaml+jinja
schema: yardstick:task:0.1
scenarios:
@@ -213,8 +331,8 @@ Baremetal
traffic_profile: ../../traffic_profiles/ipv4_throughput.yaml
topology: vfw-tg-topology.yaml
nodes:
- tg__0: trafficgen_1.yardstick
- vnf__0: vnf.yardstick
+ tg__0: trafficgen_0.yardstick
+ vnf__0: vnf_0.yardstick
options:
framesize:
uplink: {64B: 100}
@@ -242,12 +360,12 @@ Baremetal
file: /etc/yardstick/nodes/pod.yaml
Scale-Out
---------------------
+---------
VNFs performance data with scale-out helps
- * in capacity planning to meet the given network node requirements
- * in comparison between different VNF vendor offerings
+ * capacity planning to meet the given network node requirements
+ * comparison between different VNF vendor offerings
  * the better the scale-out index, the greater the flexibility in meeting
    future capacity requirements
@@ -263,7 +381,7 @@ Scale-out not supported on Baremetal.
.. code-block:: console
cd <repo>/ansible
- trex: standalone_ovs_scale_out_trex_test.yaml or standalone_sriov_scale_out_trex_test.yaml
+ trex: standalone_ovs_scale_out_test.yaml or standalone_sriov_scale_out_test.yaml
ixia: standalone_ovs_scale_out_ixia_test.yaml or standalone_sriov_scale_out_ixia_test.yaml
ixia_correlated: standalone_ovs_scale_out_ixia_correlated_test.yaml or standalone_sriov_scale_out_ixia_correlated_test.yaml
@@ -308,8 +426,281 @@ options section.
scenarios:
- type: NSPerf
nodes:
- tg__0: tg_0.yardstick
+ tg__0: trafficgen_0.yardstick
options:
tg_0:
queues_per_port: 2
+
+
+Standalone configuration
+------------------------
+
+NSB supports certain Standalone deployment configurations.
+Standalone supports provisioning a VM in a standalone virtualized environment
+using kvm/qemu. There are two types of Standalone contexts available:
+OVS-DPDK and SRIOV. OVS-DPDK uses an OVS network with DPDK drivers. SRIOV
+enables network traffic to bypass the software switch layer of the hypervisor
+stack.
+
+Emulated machine type
+^^^^^^^^^^^^^^^^^^^^^
+
+For better performance test results of an emulated VM spawned by the
+Yardstick SA context (OvS-DPDK/SRIOV), it may be important to control the
+emulated machine type used by the QEMU emulator. This attribute can be
+configured via the TC definition, in the ``contexts`` section under the
+``extra_specs`` configuration.
+
+For example:
+
+.. code-block:: yaml
+
+ contexts:
+ ...
+ - type: StandaloneSriov
+ ...
+ flavor:
+ ...
+ extra_specs:
+ ...
+ machine_type: pc-i440fx-bionic
+
+Here, ``machine_type`` can be set to one of the emulated machine types
+supported by QEMU running on the SUT platform. To get the full list of
+supported emulated machine types, the following command can be used on the
+target SUT host.
+
+.. code-block:: console
+
+ # qemu-system-x86_64 -machine ?
+
+By default, the ``machine_type`` option is set to ``pc-i440fx-xenial``, which
+is suitable for running the Ubuntu 16.04 VM image. If this type is not
+supported by the target platform, or another VM image is used for the
+standalone (SA) context VM (e.g. a ``bionic`` image for Ubuntu 18.04), this
+configuration should be changed accordingly.
+
+Standalone with OVS-DPDK
+^^^^^^^^^^^^^^^^^^^^^^^^
+
+The SampleVNF image is spawned in a VM on a baremetal server.
+OVS with DPDK is installed on the baremetal server.
+
+.. note:: Ubuntu 17.10 requires DPDK v.17.05 or higher, and DPDK v.17.05
+   requires OVS v.2.8.0.
+
+Default values for OVS-DPDK:
+
+ * queues: 4
+ * lcore_mask: ""
+ * pmd_cpu_mask: "0x6"
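+
+As a sketch of where these values live (assuming the ``ovs_properties``
+layout used by the sample standalone OvS-DPDK test cases), the defaults
+could be overridden in the context definition like this:
+
+.. code-block:: yaml
+
+   contexts:
+     - type: StandaloneOvsDpdk
+       ...
+       ovs_properties:
+         version:
+           ovs: 2.8.0
+           dpdk: 17.05
+         queues: 4
+         lcore_mask: ""
+         pmd_cpu_mask: "0x6"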
+
+Sample test case file
+^^^^^^^^^^^^^^^^^^^^^
+
+1. Prepare SampleVNF image and copy it to ``flavor/images``.
+2. Prepare context files for TREX and SampleVNF under ``contexts/file``.
+3. Add bridge named ``br-int`` to the baremetal where SampleVNF image is deployed.
+4. Modify ``networks/phy_port`` according to the baremetal setup.
+5. Run test from:
+
+.. literalinclude:: /../samples/vnf_samples/nsut/acl/tc_ovs_rfc2544_ipv4_1rule_1flow_trex.yaml
+ :language: yaml+jinja
+
+Preparing test run of vEPC test case
+------------------------------------
+
+The provided vEPC test cases are examples of the emulation of vEPC
+infrastructure components, such as UE, eNodeB, MME, SGW, PGW.
+
+Location of vEPC test cases: ``samples/vnf_samples/nsut/vepc/``.
+
+Before running a specific vEPC test case using NSB, some preconfiguration
+needs to be done.
+
+Update Spirent Landslide TG configuration in pod file
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Examples of ``pod.yaml`` files can be found in
+:file:`etc/yardstick/nodes/standalone`.
+The name of the related pod file can be checked in the context section of the
+NSB test case.
+
+The ``pod.yaml`` file related to the vEPC test case uses some sub-structures
+that hold the details for accessing the Spirent Landslide traffic generator.
+These subsections, and the changes to be made in the provided example pod
+file, are described below.
+
+1. ``tas_manager``: data under this key holds the information required to
+access the Landslide TAS (Test Administration Server) and perform the needed
+configurations on it.
+
+  * ``ip``: IP address of the TAS Manager node; should be updated according
+    to the test setup used
+ * ``super_user``: superuser name; could be retrieved from Landslide documentation
+ * ``super_user_password``: superuser password; could be retrieved from
+ Landslide documentation
+ * ``cfguser_password``: password of predefined user named 'cfguser'; default
+ password could be retrieved from Landslide documentation
+ * ``test_user``: username to be used during test run as a Landslide library
+ name; to be defined by test run operator
+ * ``test_user_password``: password of test user; to be defined by test run
+ operator
+ * ``proto``: *http* or *https*; to be defined by test run operator
+ * ``license``: Landslide license number installed on TAS
+
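+For illustration, a filled-in ``tas_manager`` section might look like the
+sketch below. All values are placeholders for the items described above, not
+defaults:
+
+.. code-block:: yaml
+
+   tas_manager:
+     ip: "192.168.100.10"
+     super_user: <SUPERUSER_NAME>
+     super_user_password: <SUPERUSER_PASSWORD>
+     cfguser_password: <CFGUSER_PASSWORD>
+     test_user: yardstick_lib
+     test_user_password: <TEST_USER_PASSWORD>
+     proto: https
+     license: <LICENSE_NUMBER>
+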
+2. The ``config`` section holds information about test servers (TSs) and
+systems under test (SUTs). Data is represented as a list of entries.
+Each such entry contains:
+
+ * ``test_server``: this subsection represents data related to test server
+ configuration, such as:
+
+ * ``name``: test server name; unique custom name to be defined by test
+ operator
+ * ``role``: this value is used as a key to bind specific Test Server and
+ TestCase; should be set to one of test types supported by TAS license
+ * ``ip``: Test Server IP address
+ * ``thread_model``: parameter related to Test Server performance mode.
+ The value should be one of the following: "Legacy" | "Max" | "Fireball".
+ Refer to Landslide documentation for details.
+ * ``phySubnets``: a structure used to specify IP ranges reservations on
+ specific network interfaces of related Test Server. Structure fields are:
+
+ * ``base``: start of IP address range
+ * ``mask``: IP range mask in CIDR format
+ * ``name``: network interface name, e.g. *eth1*
+ * ``numIps``: size of IP address range
+
+ * ``preResolvedArpAddress``: a structure used to specify the range of IP
+ addresses for which the ARP responses will be emulated
+
+ * ``StartingAddress``: IP address specifying the start of IP address range
+ * ``NumNodes``: size of the IP address range
+
+ * ``suts``: a structure that contains definitions of each specific SUT
+ (represents a vEPC component). SUT structure contains following key/value
+ pairs:
+
+ * ``name``: unique custom string specifying SUT name
+ * ``role``: string value corresponding with an SUT role specified in the
+ session profile (test session template) file
+    * ``managementIp``: SUT management IP address
+ * ``phy``: network interface name, e.g. *eth1*
+ * ``ip``: vEPC component IP address used in test case topology
+ * ``nextHop``: next hop IP address, to allow for vEPC inter-node communication
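+
+A sketch of a single ``config`` list entry, tying these pieces together
+(interface names, IP ranges and roles below are illustrative placeholders,
+not defaults):
+
+.. code-block:: yaml
+
+   config:
+     - test_server:
+         name: TS1
+         role: SGW_Node
+         ip: "192.168.10.5"
+         thread_model: "Legacy"
+         phySubnets:
+           - base: "10.10.0.1"
+             mask: "/24"
+             name: "eth1"
+             numIps: 100
+       suts:
+         - name: MME1
+           role: "MME"
+           managementIp: "192.168.10.20"
+           phy: "eth1"
+           ip: "10.10.0.10"
+           nextHop: "10.10.0.1"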
+
+Update NSB test case definitions
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+The NSB test case file designated for vEPC testing contains an example of a
+specific test scenario configuration.
+The test operator may change these definitions as required for the use case
+under test.
+Specifically, the following subsections of the vEPC test case (section
+**scenarios**) may be changed:
+
+1. Subsection ``options``: contains custom parameters used for vEPC testing
+
+ * subsection ``dmf``: may contain one or more parameters specified in
+ ``traffic_profile`` template file
+ * subsection ``test_cases``: contains re-definitions of parameters specified
+ in ``session_profile`` template file
+
+   .. note:: All parameters in ``session_profile`` whose value is a
+      placeholder need to be re-defined to construct a valid test session.
+
+2. Subsection ``runner``: specifies the test duration and the polling
+interval of TG- and VNF-side KPIs. For more details, refer to
+:doc:`03-architecture`.
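+
+For example, a minimal sketch of these two subsections in a vEPC scenario
+(the ``dmf`` and ``test_cases`` keys shown are placeholders; the ``Duration``
+runner with ``duration`` and ``interval`` is a standard Yardstick runner):
+
+.. code-block:: yaml
+
+   scenarios:
+     - type: NSPerf
+       ...
+       options:
+         dmf:
+           transactionRate: 5        # placeholder; overrides traffic_profile value
+         test_cases:
+           - type: SGW_Node          # placeholder; matches session_profile
+             ...
+       runner:
+         type: Duration
+         duration: 60                # total test duration, seconds
+         interval: 5                 # KPI polling interval, seconds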
+
+Preparing test run of vPE test case
+-----------------------------------
+The vPE (Provider Edge Router) is a :term:`VNF` approximation
+serving as an Edge Router. The vPE is approximated using the
+``ip_pipeline`` DPDK application.
+
+ .. image:: /../docs/testing/developer/devguide/images/vPE_Diagram.png
+ :width: 800px
+ :alt: NSB vPE Diagram
+
+The ``vpe_config`` file must be passed, as it is not auto-generated.
+The ``vpe_script`` defines the rules applied to each of the pipelines. It can
+be auto-generated, or a file can be passed using the ``script_file`` option in
+``vnf_config`` as shown below. The ``full_tm_profile_file`` option must be
+used if a traffic manager is defined in ``vpe_config``.
+
+.. code-block:: yaml
+
+ vnf_config: { file: './vpe_config/vpe_config_2_ports',
+ action_bulk_file: './vpe_config/action_bulk_512.txt',
+ full_tm_profile_file: './vpe_config/full_tm_profile_10G.cfg',
+ script_file: './vpe_config/vpe_script_sample' }
+
+Testcases for vPE can be found in the ``vnf_samples/nsut/vpe`` directory.
+A testcase can be started with the following command, for example:
+
+.. code-block:: bash
+
+ yardstick task start /yardstick/samples/vnf_samples/nsut/vpe/tc_baremetal_rfc2544_ipv4_1flow_64B_ixia.yaml
+
+Preparing test run of vIPSEC test case
+--------------------------------------
+
+Location of vIPSEC test cases: ``samples/vnf_samples/nsut/ipsec/``.
+
+Before running a specific vIPSEC test case using NSB, some dependencies have
+to be preinstalled and properly configured.
+
+- VPP
+
+.. code-block:: console
+
+ export UBUNTU="xenial"
+ export RELEASE=".stable.1810"
+ sudo rm /etc/apt/sources.list.d/99fd.io.list
+ echo "deb [trusted=yes] https://nexus.fd.io/content/repositories/fd.io$RELEASE.ubuntu.$UBUNTU.main/ ./" | sudo tee -a /etc/apt/sources.list.d/99fd.io.list
+ sudo apt-get update
+ sudo apt-get install vpp vpp-lib vpp-plugin vpp-dbg vpp-dev vpp-api-java vpp-api-python vpp-api-lua
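+
+The installation can then be sanity-checked, e.g. with standard VPP commands:
+
+.. code-block:: console
+
+   sudo service vpp status
+   sudo vppctl show version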
+
+- VAT templates
+
+  VAT templates are required for the VPP API.
+
+.. code-block:: console
+
+ mkdir -p /opt/nsb_bin/vpp/templates/
+ echo 'exec trace add dpdk-input 50' > /opt/nsb_bin/vpp/templates/enable_dpdk_traces.vat
+ echo 'exec trace add vhost-user-input 50' > /opt/nsb_bin/vpp/templates/enable_vhost_user_traces.vat
+ echo 'exec trace add memif-input 50' > /opt/nsb_bin/vpp/templates/enable_memif_traces.vat
+ cat > /opt/nsb_bin/vpp/templates/dump_interfaces.vat << EOL
+ sw_interface_dump
+ dump_interface_table
+ quit
+ EOL
+
+
+Preparing test run of vCMTS test case
+-------------------------------------
+
+Location of vCMTS test cases: ``samples/vnf_samples/nsut/cmts/``.
+
+Before running a specific vCMTS test case using NSB, some changes must be
+made to the original vCMTS package.
+
+Allow SSH access to the docker images
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+Follow the documentation at ``https://docs.docker.com/engine/examples/running_ssh_service/``
+to allow SSH access to the Pktgen/vcmts-d containers located at:
+
+* ``$VCMTS_ROOT/pktgen/docker/docker-image-pktgen/Dockerfile`` and
+* ``$VCMTS_ROOT/vcmtsd/docker/docker-image-vcmtsd/Dockerfile``
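+
+Following that Docker guide, the additions involved look roughly like the
+sketch below (a hedged illustration, not the exact vCMTS Dockerfile contents;
+the password is a placeholder):
+
+.. code-block:: docker
+
+   RUN apt-get update && apt-get install -y openssh-server
+   RUN mkdir /var/run/sshd
+   RUN echo 'root:<PASSWORD>' | chpasswd
+   EXPOSE 22
+   CMD ["/usr/sbin/sshd", "-D"]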
+
+
+Deploy the ConfigMaps for Pktgen and vCMTSd
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+.. code-block:: bash
+
+ cd $VCMTS_ROOT/kubernetes/helm/pktgen
+ helm template . -x templates/pktgen-configmap.yaml > configmap.yaml
+ kubectl create -f configmap.yaml
+
+ cd $VCMTS_ROOT/kubernetes/helm/vcmtsd
+ helm template . -x templates/vcmts-configmap.yaml > configmap.yaml
+ kubectl create -f configmap.yaml
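+
+The created ConfigMaps can then be verified with standard ``kubectl`` usage,
+for example:
+
+.. code-block:: bash
+
+   kubectl get configmaps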
+
diff --git a/docs/testing/user/userguide/15-list-of-tcs.rst b/docs/testing/user/userguide/15-list-of-tcs.rst
index 678f0f9a9..b727aa3c9 100644
--- a/docs/testing/user/userguide/15-list-of-tcs.rst
+++ b/docs/testing/user/userguide/15-list-of-tcs.rst
@@ -1,139 +1,136 @@
-.. This work is licensed under a Creative Commons Attribution 4.0 International
-.. License.
-.. http://creativecommons.org/licenses/by/4.0
-.. (c) OPNFV, Ericsson AB and others.
-
-====================
-Yardstick Test Cases
-====================
-
-Abstract
-========
-
-This chapter lists available Yardstick test cases.
-Yardstick test cases are divided in two main categories:
-
-* *Generic NFVI Test Cases* - Test Cases developed to realize the methodology
- described in :doc:`02-methodology`
-
-* *OPNFV Feature Test Cases* - Test Cases developed to verify one or more
- aspect of a feature delivered by an OPNFV Project, including the test cases
- developed for the :term:`VTC`.
-
-Generic NFVI Test Case Descriptions
-===================================
-
-.. toctree::
- :maxdepth: 1
-
- opnfv_yardstick_tc001.rst
- opnfv_yardstick_tc002.rst
- opnfv_yardstick_tc004.rst
- opnfv_yardstick_tc005.rst
- opnfv_yardstick_tc008.rst
- opnfv_yardstick_tc009.rst
- opnfv_yardstick_tc010.rst
- opnfv_yardstick_tc011.rst
- opnfv_yardstick_tc012.rst
- opnfv_yardstick_tc014.rst
- opnfv_yardstick_tc024.rst
- opnfv_yardstick_tc037.rst
- opnfv_yardstick_tc038.rst
- opnfv_yardstick_tc042.rst
- opnfv_yardstick_tc043.rst
- opnfv_yardstick_tc044.rst
- opnfv_yardstick_tc055.rst
- opnfv_yardstick_tc061.rst
- opnfv_yardstick_tc063.rst
- opnfv_yardstick_tc069.rst
- opnfv_yardstick_tc070.rst
- opnfv_yardstick_tc071.rst
- opnfv_yardstick_tc072.rst
- opnfv_yardstick_tc073.rst
- opnfv_yardstick_tc074.rst
- opnfv_yardstick_tc075.rst
- opnfv_yardstick_tc076.rst
- opnfv_yardstick_tc078.rst
- opnfv_yardstick_tc079.rst
- opnfv_yardstick_tc080.rst
- opnfv_yardstick_tc081.rst
- opnfv_yardstick_tc083.rst
-
-OPNFV Feature Test Cases
-========================
-
-H A
----
-
-.. toctree::
- :maxdepth: 1
-
- opnfv_yardstick_tc019.rst
- opnfv_yardstick_tc025.rst
- opnfv_yardstick_tc045.rst
- opnfv_yardstick_tc046.rst
- opnfv_yardstick_tc047.rst
- opnfv_yardstick_tc048.rst
- opnfv_yardstick_tc049.rst
- opnfv_yardstick_tc050.rst
- opnfv_yardstick_tc051.rst
- opnfv_yardstick_tc052.rst
- opnfv_yardstick_tc053.rst
- opnfv_yardstick_tc054.rst
- opnfv_yardstick_tc056.rst
- opnfv_yardstick_tc057.rst
- opnfv_yardstick_tc058.rst
- opnfv_yardstick_tc087.rst
-
-IPv6
-----
-
-.. toctree::
- :maxdepth: 1
-
- opnfv_yardstick_tc027.rst
-
-KVM
----
-
-.. toctree::
- :maxdepth: 1
-
- opnfv_yardstick_tc028.rst
-
-Parser
-------
-
-.. toctree::
- :maxdepth: 1
-
- opnfv_yardstick_tc040.rst
-
-StorPerf
---------
-
-.. toctree::
- :maxdepth: 1
-
- opnfv_yardstick_tc074.rst
-
-virtual Traffic Classifier
---------------------------
-
-.. toctree::
- :maxdepth: 1
-
- opnfv_yardstick_tc006.rst
- opnfv_yardstick_tc007.rst
- opnfv_yardstick_tc020.rst
- opnfv_yardstick_tc021.rst
-
-Templates
-=========
-
-.. toctree::
- :maxdepth: 1
-
- testcase_description_v2_template
- Yardstick_task_templates
-
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, Ericsson AB and others.
+
+====================
+Yardstick Test Cases
+====================
+
+Abstract
+========
+
+This chapter lists available Yardstick test cases.
+Yardstick test cases are divided in two main categories:
+
+* *Generic NFVI Test Cases* - Test Cases developed to realize the methodology
+ described in :doc:`02-methodology`
+
+* *OPNFV Feature Test Cases* - Test Cases developed to verify one or more
+  aspects of a feature delivered by an OPNFV Project.
+
+Generic NFVI Test Case Descriptions
+===================================
+
+.. toctree::
+ :maxdepth: 1
+
+ opnfv_yardstick_tc001.rst
+ opnfv_yardstick_tc002.rst
+ opnfv_yardstick_tc004.rst
+ opnfv_yardstick_tc005.rst
+ opnfv_yardstick_tc006.rst
+ opnfv_yardstick_tc008.rst
+ opnfv_yardstick_tc009.rst
+ opnfv_yardstick_tc010.rst
+ opnfv_yardstick_tc011.rst
+ opnfv_yardstick_tc012.rst
+ opnfv_yardstick_tc014.rst
+ opnfv_yardstick_tc015.rst
+ opnfv_yardstick_tc024.rst
+ opnfv_yardstick_tc037.rst
+ opnfv_yardstick_tc038.rst
+ opnfv_yardstick_tc042.rst
+ opnfv_yardstick_tc043.rst
+ opnfv_yardstick_tc044.rst
+ opnfv_yardstick_tc055.rst
+ opnfv_yardstick_tc061.rst
+ opnfv_yardstick_tc063.rst
+ opnfv_yardstick_tc069.rst
+ opnfv_yardstick_tc070.rst
+ opnfv_yardstick_tc071.rst
+ opnfv_yardstick_tc072.rst
+ opnfv_yardstick_tc073.rst
+ opnfv_yardstick_tc074.rst
+ opnfv_yardstick_tc075.rst
+ opnfv_yardstick_tc076.rst
+ opnfv_yardstick_tc078.rst
+ opnfv_yardstick_tc079.rst
+ opnfv_yardstick_tc080.rst
+ opnfv_yardstick_tc081.rst
+ opnfv_yardstick_tc083.rst
+ opnfv_yardstick_tc084.rst
+
+OPNFV Feature Test Cases
+========================
+
+H A
+---
+
+.. toctree::
+ :maxdepth: 1
+
+ opnfv_yardstick_tc019.rst
+ opnfv_yardstick_tc025.rst
+ opnfv_yardstick_tc045.rst
+ opnfv_yardstick_tc046.rst
+ opnfv_yardstick_tc047.rst
+ opnfv_yardstick_tc048.rst
+ opnfv_yardstick_tc049.rst
+ opnfv_yardstick_tc050.rst
+ opnfv_yardstick_tc051.rst
+ opnfv_yardstick_tc052.rst
+ opnfv_yardstick_tc053.rst
+ opnfv_yardstick_tc054.rst
+ opnfv_yardstick_tc056.rst
+ opnfv_yardstick_tc057.rst
+ opnfv_yardstick_tc058.rst
+ opnfv_yardstick_tc087.rst
+ opnfv_yardstick_tc088.rst
+ opnfv_yardstick_tc089.rst
+ opnfv_yardstick_tc090.rst
+ opnfv_yardstick_tc091.rst
+ opnfv_yardstick_tc092.rst
+ opnfv_yardstick_tc093.rst
+
+IPv6
+----
+
+.. toctree::
+ :maxdepth: 1
+
+ opnfv_yardstick_tc027.rst
+
+KVM
+---
+
+.. toctree::
+ :maxdepth: 1
+
+ opnfv_yardstick_tc028.rst
+
+Parser
+------
+
+.. toctree::
+ :maxdepth: 1
+
+ opnfv_yardstick_tc040.rst
+
+StorPerf
+--------
+
+.. toctree::
+ :maxdepth: 1
+
+ opnfv_yardstick_tc074.rst
+
+Templates
+=========
+
+.. toctree::
+ :maxdepth: 1
+
+ testcase_description_v2_template
+ Yardstick_task_templates
+
diff --git a/docs/testing/user/userguide/code/pod_ixia.yaml b/docs/testing/user/userguide/code/pod_ixia.yaml
new file mode 100644
index 000000000..4ab56fe4e
--- /dev/null
+++ b/docs/testing/user/userguide/code/pod_ixia.yaml
@@ -0,0 +1,31 @@
+nodes:
+-
+ name: trafficgen_1
+ role: IxNet
+ ip: 1.2.1.1 #ixia machine ip
+ user: user
+ password: r00t
+ key_filename: /root/.ssh/id_rsa
+ tg_config:
+ ixchassis: "1.2.1.7" #ixia chassis ip
+ tcl_port: "8009" # tcl server port
+ lib_path: "/opt/ixia/ixos-api/8.01.0.2/lib/ixTcl1.0"
+ root_dir: "/opt/ixia/ixos-api/8.01.0.2/"
+ py_bin_path: "/opt/ixia/ixload/8.01.106.3/bin/"
+ dut_result_dir: "/mnt/ixia"
+ version: 8.1
+ interfaces:
+ xe0: # logical name from topology.yaml and vnfd.yaml
+ vpci: "2:5" # Card:port
+ driver: "none"
+ dpdk_port_num: 0
+ local_ip: "152.16.100.20"
+ netmask: "255.255.0.0"
+ local_mac: "00:98:10:64:14:00"
+ xe1: # logical name from topology.yaml and vnfd.yaml
+ vpci: "2:6" # [(Card, port)]
+ driver: "none"
+ dpdk_port_num: 1
+ local_ip: "152.40.40.20"
+ netmask: "255.255.0.0"
+ local_mac: "00:98:28:28:14:00"
diff --git a/docs/testing/user/userguide/comp-intro.rst b/docs/testing/user/userguide/comp-intro.rst
index ad354b66d..bab6e60da 100644
--- a/docs/testing/user/userguide/comp-intro.rst
+++ b/docs/testing/user/userguide/comp-intro.rst
@@ -7,10 +7,10 @@
Yardstick
=========
-.. _Yardstick: https://wiki.opnfv.org/yardstick
+.. _Yardstick: https://wiki.opnfv.org/display/yardstick
.. _Presentation: https://wiki.opnfv.org/_media/opnfv_summit_-_yardstick_project.pdf
.. _NFV-TST001: https://docbox.etsi.org/ISG/NFV/Open/Drafts/TST001_-_Pre-deployment_Validation/
-.. _Yardsticktst: https://wiki.opnfv.org/_media/opnfv_summit_-_bridging_opnfv_and_etsi.pdf
+.. _Yardsticktst: http://events17.linuxfoundation.org/sites/events/files/slides/OPNFV%20Summit%20-%20bridging_opnfv_and_etsi.pdf
The project's goal is to verify infrastructure compliance, from the perspective
of a Virtual Network Function (VNF).
diff --git a/docs/testing/user/userguide/glossary.rst b/docs/testing/user/userguide/glossary.rst
index f8ff41887..cef9b69a5 100644
--- a/docs/testing/user/userguide/glossary.rst
+++ b/docs/testing/user/userguide/glossary.rst
@@ -13,53 +13,153 @@ Glossary
API
Application Programming Interface
- DPI
- Deep Packet Inspection
+ Barometer
+ OPNFV NFVi Service Assurance project. Barometer upstreams changes to
+     collectd, OpenStack, etc. to improve features related to NFVi monitoring
+ and service assurance.
+ More info on: https://opnfv-barometer.readthedocs.io/en/latest/
+
+ collectd
+ collectd is a system statistics collection daemon.
+ More info on: https://collectd.org/
+
+ context
+ A context describes the environment in which a yardstick testcase will
+ be run. It can refer to a pre-provisioned environment, or an environment
+ that will be set up using OpenStack or Kubernetes.
+
+ Docker
+ Docker provisions and manages containers. Yardstick and many other OPNFV
+ projects are deployed in containers. Docker is required to launch the
+ containerized versions of these projects.
DPDK
Data Plane Development Kit
+ DPI
+ Deep Packet Inspection
+
DSCP
Differentiated Services Code Point
+ flavor
+ A specification of virtual resources used by OpenStack in the creation
+ of a VM instance.
+
+ Grafana
+ A visualization tool, used in Yardstick to retrieve test data from
+ InfluxDB and display it. Grafana works by defining dashboards, which are
+ combinations of visualization panes (e.g. line charts and gauges) and
+ forms that assist the user in formulating SQL-like queries for InfluxDB.
+ More info on: https://grafana.com/
+
IGMP
Internet Group Management Protocol
+ InfluxDB
+ One of the Dispatchers supported by Yardstick, it allows test results to
+ be reported to a time-series database.
+ More info on: https://www.influxdata.com/
+
IOPS
Input/Output Operations Per Second
+ A performance measurement used to benchmark storage devices.
+
+ KPI
+ Key Performance Indicator
+
+ Kubernetes
+ k8s
+ Kubernetes is an open-source container-orchestration system for automating
+ deployment, scaling and management of containerized applications.
+ It is one of the contexts supported in Yardstick.
+
+ MPLS
+ Multiprotocol Label Switching
+
+ NFV
+ Network Function Virtualization
+ NFV is an initiative to take network services which were traditionally run
+ on proprietary, dedicated hardware, and virtualize them to run on general
+ purpose hardware.
+
+ NFVI
+ Network Function Virtualization Infrastructure
+     The servers, routers, switches, etc. on which the NFV system runs.
NIC
Network Interface Controller
+ NSB
+ Network Services Benchmarking. A subset of Yardstick features concerned
+ with NFVI and VNF characterization.
+
+ OpenStack
+ OpenStack is a cloud operating system that controls pools of compute,
+ storage, and networking resources. OpenStack is an open source project
+ licensed under the Apache License 2.0.
+
PBFS
Packet Based per Flow State
+ PROX
+ Packet pROcessing eXecution engine
+
QoS
Quality of Service
+ The ability to guarantee certain network or storage requirements to
+ satisfy a Service Level Agreement (SLA) between an application provider
+ and end users.
+ Typically includes performance requirements like networking bandwidth,
+ latency, jitter correction, and reliability as well as storage
+ performance in Input/Output Operations Per Second (IOPS), throttling
+     agreements, and performance expectations at peak load.
+
+ runner
+ The part of a Yardstick testcase that determines how the test will be run
+ (e.g. for x iterations, y seconds or until state z is reached). The runner
+ also determines when the metrics are collected/reported.
+
+ SampleVNF
+ OPNFV project providing a repository of reference VNFs.
+ More info on: https://opnfv-samplevnf.readthedocs.io/en/latest/
+
+ scenario
+ The part of a Yardstick testcase that describes each test step.
+
+ SLA
+ Service Level Agreement
+ An SLA is an agreement between a service provider and a customer to
+ provide a certain level of service/performance.
+
+ SR-IOV
+ Single Root IO Virtualization
+ A specification that, when implemented by a physical PCIe
+ device, enables it to appear as multiple separate PCIe devices. This
+ enables multiple virtualized guests to share direct access to the
+ physical device.
+
+ SUT
+ System Under Test
+
+ testcase
+ A task in Yardstick; the yaml file that is read by Yardstick to
+ determine how to run a test.
+
+ ToS
+ Type of Service
VLAN
- Virtual LAN
+ Virtual LAN (Local Area Network)
VM
Virtual Machine
+ An operating system instance that runs on top of a hypervisor.
+ Multiple VMs can run at the same time on the same physical
+ host.
VNF
Virtual Network Function
VNFC
Virtual Network Function Component
-
- NFVI
- Network Function Virtualization Infrastructure
-
- SR-IOV
- Single Root IO Virtualization
-
- SUT
- System Under Test
-
- ToS
- Type of Service
-
- VTC
- Virtual Traffic Classifier
diff --git a/docs/testing/user/userguide/index.rst b/docs/testing/user/userguide/index.rst
index b936e723d..ff0bb6f5d 100644
--- a/docs/testing/user/userguide/index.rst
+++ b/docs/testing/user/userguide/index.rst
@@ -11,7 +11,6 @@ Yardstick User Guide
.. toctree::
:maxdepth: 4
- :numbered:
01-introduction
02-methodology
@@ -23,7 +22,6 @@ Yardstick User Guide
08-grafana
09-api
10-yardstick-user-interface
- 11-vtc-overview
12-nsb-overview
13-nsb-installation
14-nsb-operation
diff --git a/docs/testing/user/userguide/nsb/nsb-list-of-tcs.rst b/docs/testing/user/userguide/nsb/nsb-list-of-tcs.rst
index 895837283..562c80ff7 100644..100755
--- a/docs/testing/user/userguide/nsb/nsb-list-of-tcs.rst
+++ b/docs/testing/user/userguide/nsb/nsb-list-of-tcs.rst
@@ -27,4 +27,15 @@ NSB PROX Test Case Descriptions
tc_prox_context_buffering_port
tc_prox_context_load_balancer_port
tc_prox_context_vpe_port
- tc_prox_context_lw_after_port
+ tc_prox_context_lw_aftr_port
+ tc_epc_default_bearer_landslide
+ tc_epc_dedicated_bearer_landslide
+ tc_epc_saegw_tput_relocation_landslide
+ tc_epc_network_service_request_landslide
+ tc_epc_ue_service_request_landslide
+ tc_vfw_rfc2544
+ tc_vfw_rfc2544_correlated
+ tc_vfw_rfc3511
+ tc_vpp_baremetal_crypto_ipsec
+ tc_vims_context_sipp
+ tc_pktgen_k8s_vcmts
diff --git a/docs/testing/user/userguide/nsb/tc_bng_pppoe_rfc2544_ixia.rst b/docs/testing/user/userguide/nsb/tc_bng_pppoe_rfc2544_ixia.rst
new file mode 100644
index 000000000..ffe4f6c19
--- /dev/null
+++ b/docs/testing/user/userguide/nsb/tc_bng_pppoe_rfc2544_ixia.rst
@@ -0,0 +1,177 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, 2019 Intel Corporation.
+
+***************************************************************
+Yardstick Test Case Description: NSB vBNG RFC2544 QoS TEST CASE
+***************************************************************
+
++-----------------------------------------------------------------------------+
+|NSB vBNG RFC2544 QoS base line test case without link congestion |
+| |
++--------------+--------------------------------------------------------------+
+|test case id | tc_bng_pppoe_rfc2544_ixia_IMIX_scale_up |
+| | |
++--------------+--------------------------------------------------------------+
+| metric | Network metrics: |
+| | * TxThroughput |
+| | * RxThroughput |
+| | * TG packets in |
+| | * TG packets out |
+| | * Max Latency |
+| | * Min Latency |
+| | * Average Latency |
+| | * Packets drop percentage |
+| | |
+| | PPPoE subscribers metrics: |
+| | * Sessions up |
+| | * Sessions down |
+| | * Sessions Not Started |
+| | * Sessions Total |
+| | |
+|              | NOTE: the same network metrics list is collected:            |
+| | * summary for all ports |
+| | * per port |
+| | * per priority flows summary on all ports |
+| | |
++--------------+--------------------------------------------------------------+
+|test purpose  | This test measures the performance of a BNG network device   |
+|              | according to the RFC2544 testing methodology. The test case  |
+|              | creates PPPoE subscriber connections to the BNG, runs        |
+|              | prioritized traffic at maximum throughput on all ports and   |
+|              | collects network and PPPoE subscriber metrics.               |
+| | |
++--------------+--------------------------------------------------------------+
+|configuration | The BNG QoS RFC2544 test cases are listed below: |
+| | |
+| | * tc_bng_pppoe_rfc2544_ixia_IMIX_scale_up.yaml |
+| | |
+|              | This test case is a template; the number of ports in the    |
+|              | setup can be passed using CLI arguments, e.g.:               |
+| | |
+| | yardstick -d task start --task-args='{vports: 8}' <tc_yaml> |
+| | |
+| | By default, vports=2. |
+| | |
+| | Test duration: |
+| | * set as 30sec; |
+| | |
+| | Traffic type: |
+| | * IPv4; |
+| | |
+| | Packet sizes: |
+|              | * IMIX. The following default IMIX distribution is used:     |
+| | |
+| | uplink: 70B - 33%, 940B - 33%, 1470B - 34% |
+| | downlink: 68B - 3%, 932B - 1%, 1470B - 96% |
+| | |
+| | VLAN settings: |
+| | * QinQ on access ports; |
+| | * VLAN on core ports; |
+| | |
+| | Number of PPPoE subscribers: |
+| | * 4000 per access port; |
+| | * 1000 per SVLAN; |
+| | |
+| | Default ToS bits settings: |
+| | * 0 - (000) Routine |
+| | * 4 - (100) Flash Override |
+| | * 7 - (111) Network Control. |
+| | |
+|              | The above fields are the main options used for the test case |
+|              | and can be configured using CLI options at test run or       |
+|              | directly in the test case yaml file.                         |
+| | |
++--------------+--------------------------------------------------------------+
+|test tool | IXIA IxNetwork |
+| | |
+|              | IXIA IxNetwork is used to emulate PPPoE sessions, generate   |
+|              | L2-L3 traffic, analyze traffic flows and collect network     |
+|              | metrics during the test run.                                 |
+| | |
++--------------+--------------------------------------------------------------+
+|applicability | This BNG QoS RFC2544 test case can be configured with        |
+| | different: |
+| | |
+| | * Number of PPPoE subscribers sessions; |
+| | * Setup ports number; |
+| | * IP Priority type; |
+| | * Packet size; |
+| | * Enable/disable BGP protocol on core ports; |
+| | |
+| | Default values exist. |
+| | |
++--------------+--------------------------------------------------------------+
+|references | RFC2544 |
+| | |
++--------------+--------------------------------------------------------------+
+| pre-test | 1. BNG is up and running and has configured: |
+| conditions | * access ports with QinQ tagging; |
+| | * core ports with configured IP addresses and VLAN; |
+| | * PPPoE subscribers authorization settings (no auth or |
+| | Radius server, PAP auth protocol); |
+| | * QoS settings; |
+| | |
+| | 2. IxNetwork API server is running on specified in pod.yaml |
+| | file TCL port; |
+| | |
+| | 3. BNG ports are connected to IXIA ports (IXIA uplink |
+| | ports are connected to BNG access ports and IXIA |
+| | downlink ports are connected to BNG core ports; |
+| | |
+| | 4. The pod.yaml file contains all necessary information |
+| | (BNG access and core ports settings, core ports IP |
+| | address, NICs, IxNetwork TCL port, IXIA uplink/downlink |
+| | ports, etc). |
+| | |
++--------------+--------------------------------------------------------------+
+|test sequence | description and expected result |
+| | |
++--------------+--------------------------------------------------------------+
+|step 1 | Yardstick resolves the topology and connects to IxNetwork |
+| | API server by TCL. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 2 | Test scenarios run, which performs the following steps: |
+| | |
+|              | 1. Create access network topologies (these topologies are    |
+|              |    based on IXIA ports which are connected to BNG access     |
+|              |    ports);                                                   |
+|              | 2. Configure access network topologies with multiple device  |
+|              |    groups. Each device group represents a single SVLAN with  |
+|              |    PPPoE subscriber sessions (the number of SVLANs and       |
+|              |    subscribers created on a port depends on the options      |
+|              |    specified in the test case file);                         |
+|              | 3. Create core network topologies (these topologies are      |
+|              |    based on IXIA ports which are connected to BNG core       |
+|              |    ports);                                                   |
+| | 4. Configure core network topologies with single device |
+| | group which represents one connection with configured |
+| | VLAN and BGP protocol; |
+| | 5. Establish PPPoE subscribers connections to BNG; |
+| | 6. Create traffic flows between access and core ports |
+| | (traffic flows are creating between access-core ports |
+| | pairs, traffic is bi-directional); |
+| | 7. Configure each traffic flow with specified in traffic |
+| | profile options; |
+| | 8. Run traffic with specified in test case file duration; |
+| | 9. Collect network metrics after traffic was stopped; |
+|              | 10. If the drop percentage rate is higher than expected,     |
+|              |     reduce the traffic line rate and repeat steps 7-10;      |
+|              | 11. If the drop percentage rate is as expected, or the       |
+|              |     maximum number of iterations from step 10 is reached,    |
+|              |     disconnect PPPoE subscribers and stop traffic;           |
+|              | 12. Stop the test.                                           |
+| | |
++--------------+--------------------------------------------------------------+
+|step 3 | During each iteration interval in the test run, all specified|
+| | metrics are retrieved from IxNetwork and stored in the |
+| | yardstick dispatcher. |
+| | |
++--------------+--------------------------------------------------------------+
+|test verdict | The vBNG RFC2544 test case will achieve maximum traffic line |
+| | rate with zero packet loss (or other non-zero allowed |
+| | partial drop rate). |
+| | |
++--------------+--------------------------------------------------------------+
diff --git a/docs/testing/user/userguide/nsb/tc_bng_pppoe_rfc2544_ixia_8ports_1port_congested.rst b/docs/testing/user/userguide/nsb/tc_bng_pppoe_rfc2544_ixia_8ports_1port_congested.rst
new file mode 100644
index 000000000..889ba2410
--- /dev/null
+++ b/docs/testing/user/userguide/nsb/tc_bng_pppoe_rfc2544_ixia_8ports_1port_congested.rst
@@ -0,0 +1,179 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, 2019 Intel Corporation.
+
+***************************************************************
+Yardstick Test Case Description: NSB vBNG RFC2544 QoS TEST CASE
+***************************************************************
+
++-----------------------------------------------------------------------------+
+|NSB vBNG RFC2544 QoS base line test case with link congestion |
+| |
++--------------+--------------------------------------------------------------+
+|test case id | tc_bng_pppoe_rfc2544_ixia_8ports_1port_congested_IMIX |
+| | |
++--------------+--------------------------------------------------------------+
+| metric | Network metrics: |
+| | * TxThroughput |
+| | * RxThroughput |
+| | * TG packets in |
+| | * TG packets out |
+| | * Max Latency |
+| | * Min Latency |
+| | * Average Latency |
+| | * Packets drop percentage |
+| | |
+| | PPPoE subscribers metrics: |
+| | * Sessions up |
+| | * Sessions down |
+| | * Sessions Not Started |
+| | * Sessions Total |
+| | |
+|              | NOTE: the same network metrics list is collected:            |
+| | * summary for all ports |
+| | * per port |
+| | * per priority flows summary on all ports |
+| | |
++--------------+--------------------------------------------------------------+
+|test purpose  | This test measures the performance of a BNG network device   |
+|              | according to the RFC2544 testing methodology. The test case  |
+|              | creates PPPoE subscriber connections to the BNG, runs        |
+|              | prioritized traffic causing congestion of an access port     |
+|              | (port xe0), and collects network and PPPoE subscriber        |
+|              | metrics.                                                     |
+| | |
++--------------+--------------------------------------------------------------+
+|configuration | The BNG QoS RFC2544 test cases are listed below: |
+| | |
+| | * tc_bng_pppoe_rfc2544_ixia_8ports_1port_congested_IMIX.yaml |
+| | |
+| | Number of ports: |
+| | * 8 ports |
+| | |
+| | Test duration: |
+| | * set as 30sec; |
+| | |
+| | Traffic type: |
+| | * IPv4; |
+| | |
+| | Packet sizes: |
+|              | * IMIX. The following default IMIX distribution is used:     |
+| | |
+| | uplink: 70B - 33%, 940B - 33%, 1470B - 34% |
+| | downlink: 68B - 3%, 932B - 1%, 1470B - 96% |
+| | |
+| | VLAN settings: |
+| | * QinQ on access ports; |
+| | * VLAN on core ports; |
+| | |
+| | Number of PPPoE subscribers: |
+| | * 4000 per access port; |
+| | * 1000 per SVLAN; |
+| | |
+| | Default ToS bits settings: |
+| | * 0 - (000) Routine |
+| | * 4 - (100) Flash Override |
+| | * 7 - (111) Network Control. |
+| | |
+|              | The above fields are the main options used for the test case |
+|              | and can be configured using CLI options at test run or       |
+|              | directly in the test case yaml file.                         |
+| | |
+|              | NOTE: the only parameter that cannot be changed is the       |
+|              | number of ports. To run the test with another number of      |
+|              | ports, the traffic profile should be updated.                |
+| | |
++--------------+--------------------------------------------------------------+
+|test tool | IXIA IxNetwork |
+| | |
+|              | IXIA IxNetwork is used to emulate PPPoE sessions, generate   |
+|              | L2-L3 traffic, analyze traffic flows and collect network     |
+|              | metrics during the test run.                                 |
+| | |
++--------------+--------------------------------------------------------------+
+|applicability | These BNG QoS RFC2544 test cases can be configured with      |
+| | different: |
+| | |
+| | * Number of PPPoE subscribers sessions; |
+| | * IP Priority type; |
+| | * Packet size; |
+| | * enable/disable BGP protocol on core ports; |
+| | |
+| | Default values exist. |
+| | |
++--------------+--------------------------------------------------------------+
+|references | RFC2544 |
+| | |
++--------------+--------------------------------------------------------------+
+| pre-test | 1. BNG is up and running and has configured: |
+| conditions | * access ports with QinQ tagging; |
+| | * core ports with configured IP addresses and VLAN; |
+| | * PPPoE subscribers authorization settings (no auth or |
+| | Radius server, PAP auth protocol); |
+| | * QoS settings; |
+| | |
+| | 2. IxNetwork API server is running on specified in pod.yaml |
+| | file TCL port; |
+| | |
+| | 3. BNG ports are connected to IXIA ports (IXIA uplink |
+| | ports are connected to BNG access ports and IXIA |
+| | downlink ports are connected to BNG core ports; |
+| | |
+| | 4. The pod.yaml file contains all necessary information |
+| | (BNG access and core ports settings, core ports IP |
+| | address, NICs, IxNetwork TCL port, IXIA uplink/downlink |
+| | ports, etc). |
+| | |
++--------------+--------------------------------------------------------------+
+|test sequence | description and expected result |
+| | |
++--------------+--------------------------------------------------------------+
+|step 1        | Yardstick resolves the topology and connects to the IxNetwork|
+| | API server by TCL. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 2 | Test scenarios run, which performs the following steps: |
+| | |
+|              | 1. Create access network topologies (these topologies are    |
+|              |    based on IXIA ports which are connected to BNG access     |
+|              |    ports);                                                   |
+|              | 2. Configure access network topologies with multiple device  |
+|              |    groups. Each device group represents a single SVLAN with  |
+|              |    PPPoE subscriber sessions (the number of SVLANs and       |
+|              |    subscribers created on a port depends on the options      |
+|              |    specified in the test case file);                         |
+|              | 3. Create core network topologies (these topologies are      |
+|              |    based on IXIA ports which are connected to BNG core       |
+|              |    ports);                                                   |
+| | 4. Configure core network topologies with single device |
+| | group which represents one connection with configured |
+| | VLAN and BGP protocol; |
+| | 5. Establish PPPoE subscribers connections to BNG; |
+|              | 6. Create traffic flows between access and core ports.       |
+|              |    Since the test covers the case with access port           |
+|              |    congestion, flows between ports will be created in the    |
+|              |    following way: traffic from two core ports goes to one    |
+|              |    access port, causing port congestion, and traffic from    |
+|              |    the other two core ports is split between the remaining   |
+|              |    three access ports;                                       |
+| | 7. Configure each traffic flow with specified in traffic |
+| | profile options; |
+| | 8. Run traffic with specified in test case file duration; |
+| | 9. Collect network metrics after traffic was stopped; |
+|              | 10. Measure the drop percentage rate of different priority   |
+|              |     packets on the congested port. It is expected that all   |
+|              |     high and medium priority packets are forwarded and only  |
+|              |     low priority packets are dropped.                        |
+|              | 11. Disconnect PPPoE subscribers and stop the test.          |
+| | |
++--------------+--------------------------------------------------------------+
+|step 3        | During the test run, at the end of each iteration, all       |
+|              | specified metrics are retrieved from IxNetwork and stored    |
+|              | in the yardstick dispatcher.                                 |
+| | |
++--------------+--------------------------------------------------------------+
+|test verdict  | The test case is successful if all high and medium priority  |
+|              | packets on the congested port were forwarded and only low    |
+|              | priority packets were dropped.                               |
+| | |
++--------------+--------------------------------------------------------------+
diff --git a/docs/testing/user/userguide/nsb/tc_epc_dedicated_bearer_landslide.rst b/docs/testing/user/userguide/nsb/tc_epc_dedicated_bearer_landslide.rst
new file mode 100644
index 000000000..c8865ed93
--- /dev/null
+++ b/docs/testing/user/userguide/nsb/tc_epc_dedicated_bearer_landslide.rst
@@ -0,0 +1,156 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, 2018 Intel Corporation.
+
+*********************************************************
+Yardstick Test Case Description: NSB EPC DEDICATED BEARER
+*********************************************************
+
++-----------------------------------------------------------------------------+
+|NSB EPC dedicated bearer test case |
+| |
++--------------+--------------------------------------------------------------+
+|test case id | tc_epc_{initiator}_dedicated_bearer_landslide |
+| | |
+| | * initiator: dedicated bearer creation initiator side could |
+| | be UE (ue) or Network (network). |
+| | |
++--------------+--------------------------------------------------------------+
+|metric | All metrics provided by Spirent Landslide traffic generator |
+| | |
++--------------+--------------------------------------------------------------+
+|test purpose  | The Spirent Landslide product provides a one-box solution    |
+|              | which fully emulates all EPC network nodes, including mobile |
+|              | users and network hosts, and generates control and data      |
+|              | plane traffic.                                               |
+| | |
+|              | This test allows checking the processing capability under    |
+|              | different levels of load (number of subscribers, generated   |
+|              | traffic throughput, etc.) for the case when default and      |
+|              | dedicated bearers are created and used for traffic transfer. |
+| | |
+|              | It is easy to replace an emulated node or multiple nodes in  |
+|              | the test topology with a real node or the corresponding vEPC |
+|              | VNF as DUT and check its processing capabilities under       |
+|              | specific test case load conditions.                          |
+| | |
++--------------+--------------------------------------------------------------+
+|configuration | The EPC dedicated bearer test cases are listed below: |
+| | |
+| | * tc_epc_ue_dedicated_bearer_create_landslide.yaml |
+| | * tc_epc_network_dedicated_bearer_create_landslide.yaml |
+| | |
+| | Test duration: |
+| | |
+| | * is set as 60sec (specified in test session profile); |
+| | |
+| | Traffic type: |
+| | |
+| | * UDP; |
+| | |
+| | Packet sizes: |
+| | |
+| | * 512 bytes; |
+| | |
+| | Traffic transaction rate: |
+| | |
+| | * 5 trans/s.; |
+| | |
+| | Number of mobile subscribers: |
+| | |
+| | * 20000; |
+| | |
+| | Number of default bearers per subscriber: |
+| | |
+| | * 1; |
+| | |
+| | Number of dedicated bearers per default bearer: |
+| | |
+| | * 1. |
+| | |
+| | The above fields and values are the main options used for the|
+| | test case. Other configurable options could be found in test |
+| | session profile yaml file. All these options have default |
+| | values which can be overwritten in test case file. |
+| | |
++--------------+--------------------------------------------------------------+
+|test tool | Spirent Landslide |
+| | |
+| | The Spirent Landslide is a tool for functional and |
+| | performance testing of different types of mobile networks. |
+| | It emulates real-world control and data traffic of mobile |
+| | subscribers moving through virtualized EPC network. |
+| | Detailed description of Spirent Landslide product could be |
+| | found here: https://www.spirent.com/Products/Landslide |
+| | |
++--------------+--------------------------------------------------------------+
+|applicability | These EPC DEDICATED BEARER test cases can be configured with |
+| | different: |
+| | |
+| | * packet sizes; |
+| | * traffic transaction rate; |
+| | * number of subscribers sessions; |
+| | * number of default bearers per subscriber; |
+| | * number of dedicated bearers per default; |
+| | * subscribers connection rate; |
+| | * subscribers disconnection rate; |
+| | * dedicated bearers activation timeout; |
+| | * DMF (traffic profile); |
+| | * enable/disable Fireball DMF threading model that provides |
+| | optimized performance; |
+| | |
+| | Default values exist. |
+| | |
++--------------+--------------------------------------------------------------+
+|references | ETSI-NFV-TST001 |
+| | |
+| | 3GPP TS 32.455 |
+| | |
++--------------+--------------------------------------------------------------+
+| pre-test | * All Spirent Landslide dependencies need to be installed. |
+| conditions | The steps are described in NSB installation chapter for the|
+| | Spirent Landslide vEPC tests; |
+| | |
+| | * The pod.yaml file contains all necessary information (TAS |
+| | VM IP address, NICs, emulated SUTs and Test Nodes |
+| | parameters (names, types, ip addresses, etc.). |
+| | |
++--------------+--------------------------------------------------------------+
+|test sequence | description and expected result |
+| | |
++--------------+--------------------------------------------------------------+
+|step 1 | Spirent Landslide components are running on the hosts |
+| | specified in the pod file. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 2 | Yardstick is connected with Spirent Landslide Test |
+|              | Administration Server (TAS) by TCL and REST API. The test    |
+| | will resolve the topology and instantiate all emulated EPC |
+| | network nodes. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 3 | Test scenarios run, which performs the following steps: |
+| | |
+| | * Start the emulated EPC network nodes; |
+| | * Establish the subscribers connections to EPC network |
+| | (default bearers); |
+|              | * Establish the configured number of dedicated bearers per   |
+|              |   default bearer for each subscriber;                        |
+| | * Create the sessions and transmit traffic through EPC |
+| | network nodes during the specified traffic duration time; |
+| | * Disconnect dedicated bearers; |
+| | * Disconnect subscribers at the end of the test. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 4 | During test run, all the metrics provided by Spirent |
+| | Landslide are stored in the yardstick dispatcher. |
+| | |
++--------------+--------------------------------------------------------------+
+|test verdict | The test case will create the test session in Spirent |
+| | Landslide with the test case parameters and store the results|
+| | in the database for benchmarking purposes. The aim is only |
+| | to collect all the metrics that are provided by Spirent |
+| | Landslide product for each test specific scenario. |
+| | |
++--------------+--------------------------------------------------------------+
diff --git a/docs/testing/user/userguide/nsb/tc_epc_default_bearer_landslide.rst b/docs/testing/user/userguide/nsb/tc_epc_default_bearer_landslide.rst
new file mode 100644
index 000000000..9e6d77825
--- /dev/null
+++ b/docs/testing/user/userguide/nsb/tc_epc_default_bearer_landslide.rst
@@ -0,0 +1,149 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, 2018 Intel Corporation.
+
+*******************************************************
+Yardstick Test Case Description: NSB EPC DEFAULT BEARER
+*******************************************************
+
++-----------------------------------------------------------------------------+
+|NSB EPC default bearer test case |
+| |
++--------------+--------------------------------------------------------------+
+|test case id | tc_epc_default_bearer_landslide_{dmf_setup} |
+| | |
+| | * dmf_setup: single or multi dmf test session setup; |
+| | |
++--------------+--------------------------------------------------------------+
+|metric | All metrics provided by Spirent Landslide traffic generator |
+| | |
++--------------+--------------------------------------------------------------+
+|test purpose  | The Spirent Landslide product provides a one-box solution    |
+|              | which fully emulates all EPC network nodes, including mobile |
+|              | users and network hosts, and generates control and data      |
+|              | plane traffic.                                               |
+| | |
+|              | This test allows checking the processing capability of the   |
+|              | EPC under different levels of load (number of subscribers,   |
+|              | generated traffic throughput) for the case when only one     |
+|              | default bearer is used for transferring traffic from UE to   |
+|              | the Network.                                                 |
+| | |
+|              | It is easy to replace an emulated node or multiple nodes in  |
+|              | the test topology with a real node or the corresponding vEPC |
+|              | VNF as DUT and check its processing capabilities under       |
+|              | specific test case load conditions.                          |
+| | |
++--------------+--------------------------------------------------------------+
+|configuration | The EPC default bearer test cases are listed below: |
+| | |
+| | * tc_epc_default_bearer_create_landslide.yaml |
+| | * tc_epc_default_bearer_create_landslide_multi_dmf.yaml |
+| | |
+| | Test duration: |
+| | |
+| | * is set as 60sec (specified in test session profile); |
+| | |
+| | Traffic type: |
+| | |
+| | * UDP - for single DMF test case; |
+| | * UDP and TCP - for multi DMF test case; |
+| | |
+| | Packet sizes: |
+| | |
+| | * 512 bytes for UDP packets; |
+| | * 1518 bytes for TCP packets; |
+| | |
+| | Traffic transaction rate: |
+| | |
+| | * 5 trans/s.; |
+| | |
+| | Number of mobile subscribers: |
+| | |
+| | * 20000; |
+| | |
+| | Number of default bearers per subscriber: |
+| | |
+| | * 1. |
+| | |
+| | The above fields and values are the main options used for the|
+| | test case. Other configurable options could be found in test |
+| | session profile yaml file. All these options have default |
+| | values which can be overwritten in test case file. |
+| | |
++--------------+--------------------------------------------------------------+
+|test tool | Spirent Landslide |
+| | |
+|              | Spirent Landslide is a tool for functional and performance  |
+|              | testing of different types of mobile networks. It emulates  |
+|              | the real-world control and data traffic of mobile           |
+|              | subscribers moving through a virtualized EPC network. A     |
+|              | detailed description of the Spirent Landslide product can   |
+|              | be found here: https://www.spirent.com/Products/Landslide   |
+| | |
++--------------+--------------------------------------------------------------+
+|applicability | These EPC DEFAULT BEARER test cases can be configured with  |
+|              | different:                                                   |
+| | |
+| | * packet sizes; |
+| | * traffic transaction rate; |
+|              | * number of subscriber sessions;                             |
+| | * number of default bearers per subscriber; |
+| | * subscribers connection rate; |
+| | * subscribers disconnection rate; |
+| | * DMF (traffic profile); |
+| | * enable/disable Fireball DMF threading model that provides |
+| | optimized performance; |
+| | |
+| | Default values exist. |
+| | |
++--------------+--------------------------------------------------------------+
+|references | ETSI-NFV-TST001 |
+| | |
+| | 3GPP TS 32.455 |
+| | |
++--------------+--------------------------------------------------------------+
+| pre-test     | * All Spirent Landslide dependencies are installed (the     |
+| conditions   |   detailed installation steps are described in chapters     |
+|              |   13-nsb-installation.rst and 14-nsb-operation.rst for the  |
+|              |   NSB Spirent Landslide vEPC tests);                        |
+|              |                                                              |
+|              | * The pod.yaml file contains all the necessary information  |
+|              |   (TAS VM IP address, NICs, emulated SUTs and Test Nodes    |
+|              |   parameters (names, types, IP addresses, etc.)).           |
+| | |
++--------------+--------------------------------------------------------------+
+|test sequence | description and expected result |
+| | |
++--------------+--------------------------------------------------------------+
+|step 1 | Spirent Landslide components are running on the hosts |
+| | specified in the pod file. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 2 | Yardstick is connected with Spirent Landslide Test |
+|              | Administration Server (TAS) via TCL and REST API. The test  |
+| | will resolve the topology and instantiate all emulated EPC |
+| | network nodes. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 3        | The test scenario runs and performs the following steps:    |
+| | |
+| | * Start emulated EPC network nodes; |
+| | * Establish subscribers connections to EPC network (only |
+| | default bearers are established); |
+|              | * Create the sessions and transmit traffic through the EPC  |
+|              |   network nodes for the specified traffic duration;         |
+| | * Disconnect subscribers at the end of the test. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 4        | During the test run, all the metrics provided by Spirent    |
+|              | Landslide are stored in the Yardstick dispatcher.           |
+| | |
++--------------+--------------------------------------------------------------+
+|test verdict | The test case will create the test session in Spirent |
+| | Landslide with the test case parameters and store the |
+| | results in the database for benchmarking purposes. The aim |
+|              | is only to collect all the metrics that are provided by the |
+|              | Spirent Landslide product for each specific test scenario.  |
+| | |
++--------------+--------------------------------------------------------------+
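+
+Below is a hedged sketch of how the main options listed above might be
+overridden from the test case file. The key names and structure are
+illustrative assumptions only; the authoritative option names are
+defined in the test session profile YAML file shipped with Yardstick.
+
+.. code-block:: yaml
+
+   # Illustrative sketch (key names are assumptions, not the
+   # authoritative Landslide schema).
+   scenarios:
+   - type: NSPerf
+     options:
+       dmf:
+         transaction_rate: 5     # transactions per second
+         packet_size: 512        # bytes, UDP traffic
+       session_profile:
+         duration: 60            # seconds
+         subscribers: 20000      # number of mobile subscribers
+         default_bearers: 1      # default bearers per subscriber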
diff --git a/docs/testing/user/userguide/nsb/tc_epc_network_service_request_landslide.rst b/docs/testing/user/userguide/nsb/tc_epc_network_service_request_landslide.rst
new file mode 100644
index 000000000..85e6ce11a
--- /dev/null
+++ b/docs/testing/user/userguide/nsb/tc_epc_network_service_request_landslide.rst
@@ -0,0 +1,159 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, 2018 Intel Corporation.
+
+****************************************************************
+Yardstick Test Case Description: NSB EPC NETWORK SERVICE REQUEST
+****************************************************************
+
++-----------------------------------------------------------------------------+
+|NSB EPC network service request test case |
+| |
++--------------+--------------------------------------------------------------+
+|test case id | tc_epc_network_service_request_landslide |
+| | |
+|              | * initiator: the service request initiator side can be UE   |
+|              |   (ue) or Network (network).                                 |
+| | |
++--------------+--------------------------------------------------------------+
+|metric | All metrics provided by Spirent Landslide traffic generator |
+| | |
++--------------+--------------------------------------------------------------+
+|test purpose  | The Spirent Landslide product provides a one-box solution   |
+|              | that fully emulates all EPC network nodes, including mobile |
+|              | users and network hosts, and generates control and data     |
+|              | plane traffic.                                               |
+|              |                                                              |
+|              | This test covers the case of a network-initiated service    |
+|              | request and checks the processing capability of the EPC     |
+|              | when handling a high amount of continuous Downlink Data     |
+|              | Notification messages sent from the network to UEs that are |
+|              | in Idle state.                                               |
+|              |                                                              |
+|              | An emulated node, or several nodes, in the test topology    |
+|              | can easily be replaced with a real node or the              |
+|              | corresponding vEPC VNF as DUT to check its processing       |
+|              | capabilities under the specific test case load conditions.  |
+| | |
++--------------+--------------------------------------------------------------+
+|configuration | The EPC network service request test cases are listed below: |
+| | |
+| | * tc_epc_network_service_request_landslide.yaml |
+| | |
+| | Test duration: |
+| | |
+|              | * is set to 60 sec (specified in the test session profile); |
+| | |
+| | Traffic type: |
+| | |
+| | * UDP; |
+| | |
+| | Packet sizes: |
+| | |
+| | * 512 bytes; |
+| | |
+| | Traffic transaction rate: |
+| | |
+| | * 0.1 trans/s.; |
+| | |
+| | Number of mobile subscribers: |
+| | |
+| | * 20000; |
+| | |
+| | Number of default bearers per subscriber: |
+| | |
+| | * 1; |
+| | |
+| | Idle entry time (timeout after which UE goes to Idle state): |
+| | |
+| | * 5s; |
+| | |
+| | Traffic start delay: |
+| | |
+| | * 1000ms. |
+| | |
+|              | The fields and values above are the main options used for   |
+|              | the test case. Other configurable options can be found in   |
+|              | the test session profile YAML file. All these options have  |
+|              | default values, which can be overridden in the test case    |
+|              | file.                                                        |
+| | |
++--------------+--------------------------------------------------------------+
+|test tool | Spirent Landslide |
+| | |
+|              | Spirent Landslide is a tool for functional and performance  |
+|              | testing of different types of mobile networks. It emulates  |
+|              | the real-world control and data traffic of mobile           |
+|              | subscribers moving through a virtualized EPC network. A     |
+|              | detailed description of the Spirent Landslide product can   |
+|              | be found here: https://www.spirent.com/Products/Landslide   |
+| | |
++--------------+--------------------------------------------------------------+
+|applicability | This EPC NETWORK SERVICE REQUEST test case can be configured |
+| | with different: |
+| | |
+| | * packet sizes; |
+| | * traffic transaction rate; |
+|              | * number of subscriber sessions;                             |
+| | * number of default bearers per subscriber; |
+| | * subscribers connection rate; |
+| | * subscribers disconnection rate; |
+| | * timeout after which UE goes to Idle state; |
+|              | * traffic start delay;                                       |
+| | |
+| | Default values exist. |
+| | |
++--------------+--------------------------------------------------------------+
+|references | ETSI-NFV-TST001 |
+| | |
+| | 3GPP TS 32.455 |
+| | |
++--------------+--------------------------------------------------------------+
+| pre-test     | * All Spirent Landslide dependencies are installed (the     |
+| conditions   |   detailed installation steps are described in chapters     |
+|              |   13-nsb-installation.rst and 14-nsb-operation.rst for the  |
+|              |   NSB Spirent Landslide vEPC tests);                        |
+|              |                                                              |
+|              | * The pod.yaml file contains all the necessary information  |
+|              |   (TAS VM IP address, NICs, emulated SUTs and Test Nodes    |
+|              |   parameters (names, types, IP addresses, etc.)); a hedged  |
+|              |   pod.yaml sketch is given below this table.                |
+| | |
++--------------+--------------------------------------------------------------+
+|test sequence | description and expected result |
+| | |
++--------------+--------------------------------------------------------------+
+|step 1 | Spirent Landslide components are running on the hosts |
+| | specified in the pod file. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 2 | Yardstick is connected with Spirent Landslide Test |
+|              | Administration Server (TAS) via TCL and REST API. The test  |
+| | will resolve the topology and instantiate all emulated EPC |
+| | network nodes. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 3        | The test scenario runs and performs the following steps:    |
+| | |
+| | * Start emulated EPC network nodes; |
+| | * Establish subscribers connections to EPC network (default |
+| | bearers); |
+|              | * Switch UEs to Idle state after the timeout specified in   |
+|              |   the test case;                                             |
+|              | * Send a Downlink Data Notification from the network to the |
+|              |   UE, which returns the UE to active state. This process is |
+|              |   continuous: during the whole test run, UEs keep going to  |
+|              |   Idle state and are switched back to active state once a   |
+|              |   Downlink Data Notification is received;                   |
+| | * Disconnect subscribers at the end of the test. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 4        | During the test run, all the metrics provided by Spirent    |
+|              | Landslide are stored in the Yardstick dispatcher.           |
+| | |
++--------------+--------------------------------------------------------------+
+|test verdict | The test case will create the test session in Spirent |
+| | Landslide with the test case parameters and store the |
+| | results in the database for benchmarking purposes. The aim |
+|              | is only to collect all the metrics that are provided by the |
+|              | Spirent Landslide product for each specific test scenario.  |
+| | |
++--------------+--------------------------------------------------------------+
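+
+The pre-test conditions above reference a pod.yaml file; a hedged
+sketch of such a file is shown below. The node names, roles and
+addresses are hypothetical placeholders, and the exact fields required
+for the Landslide TAS and test servers are described in the NSB
+installation chapters.
+
+.. code-block:: yaml
+
+   # Hypothetical pod.yaml fragment (all values are placeholders).
+   nodes:
+   - name: tas
+     role: TrafficGen
+     ip: 192.168.1.10            # TAS VM IP address
+     user: user
+     password: password
+   - name: test_server_1
+     role: tg__0
+     ip: 192.168.1.11            # test node / emulated SUT address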
diff --git a/docs/testing/user/userguide/nsb/tc_epc_saegw_tput_relocation_landslide.rst b/docs/testing/user/userguide/nsb/tc_epc_saegw_tput_relocation_landslide.rst
new file mode 100644
index 000000000..102517562
--- /dev/null
+++ b/docs/testing/user/userguide/nsb/tc_epc_saegw_tput_relocation_landslide.rst
@@ -0,0 +1,167 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, 2018 Intel Corporation.
+
+*********************************************************
+Yardstick Test Case Description: NSB EPC SAEGW RELOCATION
+*********************************************************
+
++-----------------------------------------------------------------------------+
+|NSB EPC SAEGW throughput with relocation test case |
+| |
++--------------+--------------------------------------------------------------+
+|test case id | tc_epc_saegw_tput_relocation_landslide |
+| | |
+| | |
++--------------+--------------------------------------------------------------+
+|metric | All metrics provided by Spirent Landslide traffic generator |
+| | |
++--------------+--------------------------------------------------------------+
+|test purpose  | The Spirent Landslide product provides a one-box solution   |
+|              | that fully emulates all EPC network nodes, including mobile |
+|              | users and network hosts, and generates control and data     |
+|              | plane traffic.                                               |
+|              |                                                              |
+|              | This test checks the processing capability of an EPC that   |
+|              | handles a large number of subscriber X2 handovers between   |
+|              | different eNBs while the UEs are sending traffic.           |
+|              |                                                              |
+|              | An emulated node, or several nodes, in the test topology    |
+|              | can easily be replaced with a real node or the              |
+|              | corresponding vEPC VNF as DUT to check its processing       |
+|              | capabilities under the specific test case load conditions.  |
+| | |
++--------------+--------------------------------------------------------------+
+|configuration | The EPC SAEGW throughput with relocation tests are listed |
+| | below: |
+| | |
+| | * tc_epc_saegw_tput_relocation_landslide.yaml |
+| | |
+| | Test duration: |
+| | |
+|              | * is set to 60 sec (specified in the test session profile); |
+| | |
+| | Traffic type: |
+| | |
+| | * UDP; |
+| | |
+| | Packet sizes: |
+| | |
+| | * 512 bytes; |
+| | |
+| | Traffic transaction rate: |
+| | |
+| | * 5 trans/s.; |
+| | |
+| | Number of mobile subscribers: |
+| | |
+| | * 20000; |
+| | |
+| | Number of default bearers per subscriber: |
+| | |
+| | * 1; |
+| | |
+| | Handover type: |
+| | |
+| | * X2 handover; |
+| | |
+|              | Mobility time (timeout after session establishment, after   |
+|              | which the handover starts):                                  |
+| | |
+| | * 10000ms; |
+| | |
+| | Handover start type: |
+| | |
+| | * When all sessions started; |
+| | |
+| | Mobility mode: |
+| | |
+| | * Single handoff; |
+| | |
+| | Mobility Rate: |
+| | |
+| | * 120 subscribers/s. |
+| | |
+|              | The fields and values above are the main options used for   |
+|              | the test case. Other configurable options can be found in   |
+|              | the test session profile YAML file. All these options have  |
+|              | default values, which can be overridden in the test case    |
+|              | file; a hedged configuration sketch is given below this     |
+|              | table.                                                       |
+| | |
++--------------+--------------------------------------------------------------+
+|test tool | Spirent Landslide |
+| | |
+|              | Spirent Landslide is a tool for functional and performance  |
+|              | testing of different types of mobile networks. It emulates  |
+|              | the real-world control and data traffic of mobile           |
+|              | subscribers moving through a virtualized EPC network. A     |
+|              | detailed description of the Spirent Landslide product can   |
+|              | be found here: https://www.spirent.com/Products/Landslide   |
+| | |
++--------------+--------------------------------------------------------------+
+|applicability | These EPC SAEGW RELOCATION test cases can be configured     |
+|              | with different:                                              |
+|              |                                                              |
+|              | * packet sizes;                                              |
+|              | * traffic transaction rate;                                  |
+|              | * number of subscriber sessions;                             |
+| | * handover type; |
+| | * mobility rate; |
+| | * mobility time; |
+| | * mobility mode; |
+| | * handover start condition; |
+| | * subscribers disconnection rate; |
+| | |
+| | Default values exist. |
+| | |
++--------------+--------------------------------------------------------------+
+|references | ETSI-NFV-TST001 |
+| | |
+| | 3GPP TS 32.455 |
+| | |
++--------------+--------------------------------------------------------------+
+| pre-test     | * All Spirent Landslide dependencies are installed (the     |
+| conditions   |   detailed installation steps are described in chapters     |
+|              |   13-nsb-installation.rst and 14-nsb-operation.rst for the  |
+|              |   NSB Spirent Landslide vEPC tests);                        |
+|              |                                                              |
+|              | * The pod.yaml file contains all the necessary information  |
+|              |   (TAS VM IP address, NICs, emulated SUTs and Test Nodes    |
+|              |   parameters (names, types, IP addresses, etc.)).           |
+| | |
++--------------+--------------------------------------------------------------+
+|test sequence | description and expected result |
+| | |
++--------------+--------------------------------------------------------------+
+|step 1 | Spirent Landslide components are running on the hosts |
+| | specified in the pod file. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 2 | Yardstick is connected with Spirent Landslide Test |
+|              | Administration Server (TAS) via TCL and REST API. The test  |
+| | will resolve the topology and instantiate all emulated EPC |
+| | network nodes. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 3        | The test scenario runs and performs the following steps:    |
+| | |
+| | * Start emulated EPC network nodes; |
+| | * Establish subscribers connections to EPC network (default |
+| | bearers); |
+|              | * Start running traffic;                                     |
+|              | * After the mobility timeout specified in the test case,    |
+|              |   start the handover process at the specified mobility rate;|
+| | * Disconnect subscribers at the end of the test. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 4        | During the test run, all the metrics provided by Spirent    |
+|              | Landslide are stored in the Yardstick dispatcher.           |
+| | |
++--------------+--------------------------------------------------------------+
+|test verdict | The test case will create the test session in Spirent |
+| | Landslide with the test case parameters and store the |
+| | results in the database for benchmarking purposes. The aim |
+|              | is only to collect all the metrics that are provided by the |
+|              | Spirent Landslide product for each specific test scenario.  |
+| | |
++--------------+--------------------------------------------------------------+
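+
+A hedged sketch of the relocation-specific options from the
+configuration section is shown below. The key names are assumptions;
+the authoritative names are defined in the test session profile YAML
+file.
+
+.. code-block:: yaml
+
+   # Illustrative X2 handover settings (key names are assumptions).
+   mobility:
+     handover_type: X2
+     mobility_time: 10000        # ms after session establishment
+     handover_start: all_sessions_started
+     mobility_mode: single_handoff
+     mobility_rate: 120          # subscribers per second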
diff --git a/docs/testing/user/userguide/nsb/tc_epc_ue_service_request_landslide.rst b/docs/testing/user/userguide/nsb/tc_epc_ue_service_request_landslide.rst
new file mode 100644
index 000000000..0711a0ce3
--- /dev/null
+++ b/docs/testing/user/userguide/nsb/tc_epc_ue_service_request_landslide.rst
@@ -0,0 +1,174 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, 2018 Intel Corporation.
+
+***********************************************************
+Yardstick Test Case Description: NSB EPC UE SERVICE REQUEST
+***********************************************************
+
++-----------------------------------------------------------------------------+
+|NSB EPC UE service request test case |
+| |
++--------------+--------------------------------------------------------------+
+|test case id | tc_epc_{initiator}_service_request_landslide |
+| | |
+|              | * initiator: the service request initiator side can be UE   |
+|              |   (ue) or Network (nw).                                      |
+| | |
++--------------+--------------------------------------------------------------+
+|metric | All metrics provided by Spirent Landslide traffic generator |
+| | |
++--------------+--------------------------------------------------------------+
+|test purpose  | The Spirent Landslide product provides a one-box solution   |
+|              | that fully emulates all EPC network nodes, including mobile |
+|              | users and network hosts, and generates control and data     |
+|              | plane traffic.                                               |
+|              |                                                              |
+|              | This test checks the processing capability of the EPC under |
+|              | a high user connection rate and traffic load for the case   |
+|              | when the UE initiates a service request (a bearer           |
+|              | modification request that provides a dedicated bearer for a |
+|              | new type of traffic).                                        |
+|              |                                                              |
+|              | An emulated node, or several nodes, in the test topology    |
+|              | can easily be replaced with a real node or the              |
+|              | corresponding vEPC VNF as DUT to check its processing       |
+|              | capabilities under the specific test case load conditions.  |
+| | |
++--------------+--------------------------------------------------------------+
+|configuration | The EPC UE service request test cases are listed below:     |
+| | |
+| | * tc_epc_ue_service_request_landslide.yaml |
+| | |
+| | Test duration: |
+| | |
+|              | * is set to 60 sec (specified in the test session profile); |
+| | |
+| | Traffic type: |
+| | |
+| | * UDP; |
+| | |
+| | Packet sizes: |
+| | |
+| | * 512 bytes; |
+| | |
+| | Traffic transaction rate: |
+| | |
+| | * 5 trans/s.; |
+| | |
+| | Number of mobile subscribers: |
+| | |
+| | * 20000; |
+| | |
+| | Number of default bearers per subscriber: |
+| | |
+| | * 1; |
+| | |
+| | Number of dedicated bearers per default bearer: |
+| | |
+| | * 1. |
+| | |
+|              | TFT settings for dedicated bearers:                          |
+|              |                                                              |
+|              | * TFT configured to filter TCP traffic (Protocol ID 6);     |
+|              |                                                              |
+|              | Modified TFT settings:                                       |
+|              |                                                              |
+|              | * Create a new TFT to filter UDP traffic (Protocol ID 17)   |
+|              |   on local port 2002 and remote port 2003;                  |
+| | |
+| | Modified QoS settings: |
+| | |
+| | * Set QCI 5 for dedicated bearers; |
+| | |
+|              | The fields and values above are the main options used for   |
+|              | the test case. Other configurable options can be found in   |
+|              | the test session profile YAML file. All these options have  |
+|              | default values, which can be overridden in the test case    |
+|              | file; a hedged configuration sketch is given below this     |
+|              | table.                                                       |
+| | |
++--------------+--------------------------------------------------------------+
+|test tool | Spirent Landslide |
+| | |
+|              | Spirent Landslide is a tool for functional and performance  |
+|              | testing of different types of mobile networks. It emulates  |
+|              | the real-world control and data traffic of mobile           |
+|              | subscribers moving through a virtualized EPC network. A     |
+|              | detailed description of the Spirent Landslide product can   |
+|              | be found here: https://www.spirent.com/Products/Landslide   |
+| | |
++--------------+--------------------------------------------------------------+
+|applicability | This EPC UE SERVICE REQUEST test case can be configured with |
+| | different: |
+| | |
+| | * packet sizes; |
+| | * traffic transaction rate; |
+|              | * number of subscriber sessions;                             |
+|              | * number of default bearers per subscriber;                  |
+|              | * number of dedicated bearers per default bearer;            |
+| | * subscribers connection rate; |
+| | * subscribers disconnection rate; |
+| | * dedicated bearers activation timeout; |
+| | * DMF (traffic profile); |
+| | * enable/disable Fireball DMF threading model that provides |
+| | optimized performance; |
+| | * Starting TFT settings for dedicated bearers; |
+| | * Modified TFT settings for dedicated bearers; |
+| | * Modified QoS settings for dedicated bearers; |
+| | |
+| | Default values exist. |
+| | |
++--------------+--------------------------------------------------------------+
+|references | ETSI-NFV-TST001 |
+| | |
+| | 3GPP TS 32.455 |
+| | |
++--------------+--------------------------------------------------------------+
+| pre-test     | * All Spirent Landslide dependencies are installed (the     |
+| conditions   |   detailed installation steps are described in chapters     |
+|              |   13-nsb-installation.rst and 14-nsb-operation.rst for the  |
+|              |   NSB Spirent Landslide vEPC tests);                        |
+|              |                                                              |
+|              | * The pod.yaml file contains all the necessary information  |
+|              |   (TAS VM IP address, NICs, emulated SUTs and Test Nodes    |
+|              |   parameters (names, types, IP addresses, etc.)).           |
+| | |
++--------------+--------------------------------------------------------------+
+|test sequence | description and expected result |
+| | |
++--------------+--------------------------------------------------------------+
+|step 1 | Spirent Landslide components are running on the hosts |
+| | specified in the pod file. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 2 | Yardstick is connected with Spirent Landslide Test |
+|              | Administration Server (TAS) via TCL and REST API. The test  |
+| | will resolve the topology and instantiate all emulated EPC |
+| | network nodes. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 3        | The test scenario runs and performs the following steps:    |
+| | |
+| | * Start emulated EPC network nodes; |
+| | * Establish subscribers connections to EPC network (default |
+| | bearers); |
+|              | * Establish the number of dedicated bearers per default     |
+|              |   bearer for each subscriber, as specified in the test case;|
+|              | * Start running user traffic through the EPC network nodes; |
+|              | * While traffic is running, send a bearer modification      |
+|              |   request after the timeout specified in the test case;     |
+|              | * Disconnect dedicated bearers;                              |
+|              | * Disconnect subscribers at the end of the test.            |
+| | |
++--------------+--------------------------------------------------------------+
+|step 4        | During the test run, all the metrics provided by Spirent    |
+|              | Landslide are stored in the Yardstick dispatcher.           |
+| | |
++--------------+--------------------------------------------------------------+
+|test verdict | The test case will create the test session in Spirent |
+| | Landslide with the test case parameters and store the |
+| | results in the database for benchmarking purposes. The aim |
+|              | is only to collect all the metrics that are provided by the |
+|              | Spirent Landslide product for each specific test scenario.  |
+| | |
++--------------+--------------------------------------------------------------+
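+
+A hedged sketch of the dedicated-bearer TFT and QoS options from the
+configuration section is shown below. The key names are assumptions;
+the authoritative names are defined in the test session profile YAML
+file.
+
+.. code-block:: yaml
+
+   # Illustrative dedicated-bearer settings (key names are assumptions).
+   dedicated_bearers:
+     per_default_bearer: 1
+     tft:
+       protocol_id: 6            # initial filter: TCP
+     modified_tft:
+       protocol_id: 17           # new filter: UDP
+       local_port: 2002
+       remote_port: 2003
+     modified_qos:
+       qci: 5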
diff --git a/docs/testing/user/userguide/nsb/tc_pktgen_k8s_vcmts.rst b/docs/testing/user/userguide/nsb/tc_pktgen_k8s_vcmts.rst
new file mode 100755
index 000000000..56f5c27ed
--- /dev/null
+++ b/docs/testing/user/userguide/nsb/tc_pktgen_k8s_vcmts.rst
@@ -0,0 +1,102 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, 2019 Viosoft Corporation.
+
+***********************************************
+Yardstick Test Case Description: NSB vCMTS
+***********************************************
+
++------------------------------------------------------------------------------+
+|NSB Pktgen test for vCMTS characterization |
+| |
++--------------+---------------------------------------------------------------+
+|test case id | tc_vcmts_k8s_pktgen |
+| | |
++--------------+---------------------------------------------------------------+
+|metric | * Upstream Processing (Per Service Group); |
+| | * Downstream Processing (Per Service Group); |
+| | * Upstream Throughput; |
+| | * Downstream Throughput; |
+| | * Platform Metrics; |
+| | * Power Consumption; |
+| | * Upstream Throughput Time Series; |
+| | * Downstream Throughput Time Series; |
+| | * System Summary; |
+| | |
++--------------+---------------------------------------------------------------+
+|test purpose  | * The vCMTS test handles the setup of service groups and     |
+|              |   packet generation containers, and the collection of        |
+|              |   metrics.                                                    |
+|              |                                                               |
+|              | * The vCMTS test case is implemented to run in a Kubernetes   |
+|              |   environment with vCMTS pre-installed.                       |
++--------------+---------------------------------------------------------------+
+|configuration | The vCMTS test case configurable values are listed below:     |
+| | |
+| | * num_sg: Number of service groups (Upstream/Downstream |
+| | container pairs). |
+| | * num_tg: Number of Pktgen containers. |
+| | * vcmtsd_image: vCMTS container image (feat/perf). |
+| | * qat_on: QAT status (true/false). |
+| | |
+|              | The num_sg and num_tg values should be configured in the      |
+|              | test case file and in the topology file; a hedged options     |
+|              | sketch is given below this table.                             |
+| | |
++--------------+---------------------------------------------------------------+
+|test tool | Intel vCMTS Reference Dataplane |
+| | Reference implementation of a DPDK-based vCMTS (DOCSIS MAC) |
+| | dataplane in a Kubernetes-orchestrated Linux Container |
+| | environment. |
+| | |
++--------------+---------------------------------------------------------------+
+|applicability | These test cases can be configured with different:            |
+| | |
+| | * Number of service groups |
+| | * Number of Pktgen instances |
+| | * QAT offloading |
+| | * Feat/Perf Images for performance or features (more data |
+| | collection) |
+| | |
+| | Default values exist. |
+| | |
++--------------+---------------------------------------------------------------+
+|pre-test      | Intel vCMTS Reference Dataplane should be installed and       |
+|conditions    | runnable on a 2-node Kubernetes environment, with             |
+|              | modifications to the containers to allow Yardstick SSH        |
+|              | access, and with the ConfigMaps from the original vCMTS       |
+|              | package deployed.                                             |
+| | |
++--------------+---------------------------------------------------------------+
+|test sequence | description and expected result |
+| | |
++--------------+---------------------------------------------------------------+
+|step 1 | Yardstick is connected to the Kubernetes Master node using |
+| | the configuration file in /etc/kubernetes/admin.yaml |
+| | |
++--------------+---------------------------------------------------------------+
+|step 2        | The TG containers are created and started on the traffic      |
+|              | generator server (Master node), while the VNF containers are  |
+|              | created and started on the data plane server.                 |
+| | |
++--------------+---------------------------------------------------------------+
+|step 3        | Yardstick connects to the TG and VNF using ssh to start       |
+|              | vCMTS-d and Pktgen.                                           |
+| | |
++--------------+---------------------------------------------------------------+
+|step 4 | Yardstick connects to the running Pktgen instances to start |
+| | generating traffic using the configurations from: |
+| | /etc/yardstick/pktgen_values.yaml |
+| | |
+| | and connects to the vCMTS-d containers to start the upstream |
+| | and downstream processing using the configurations from: |
+| | /etc/yardstick/vcmtsd_values.yaml |
+| | |
++--------------+---------------------------------------------------------------+
+|step 5 | Yardstick copies vCMTS metrics regularly from the remote |
+| | InfluxDB (deployed by the vCMTS Package) to the local |
+| | Yardstick InfluxDB as configured in the options section in |
+| | the test case file. |
+| | |
++--------------+---------------------------------------------------------------+
+|test verdict  | None. The test case will collect the KPIs and plot them on   |
+|              | Grafana.                                                      |
++--------------+---------------------------------------------------------------+
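+
+A hedged sketch of the configurable values listed above, as they might
+appear in the test case file, is shown below. The four keys come from
+the configuration section; the surrounding structure is an assumption.
+
+.. code-block:: yaml
+
+   # Illustrative vCMTS test case options (structure is an assumption).
+   options:
+     num_sg: 4                   # service groups (US/DS container pairs)
+     num_tg: 4                   # Pktgen containers
+     vcmtsd_image: perf          # or "feat" for extra data collection
+     qat_on: false               # QAT offloading status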
diff --git a/docs/testing/user/userguide/nsb/tc_prox_context_vpe_port.rst b/docs/testing/user/userguide/nsb/tc_prox_context_vpe_port.rst
index 6827b0525..3beb5303f 100644
--- a/docs/testing/user/userguide/nsb/tc_prox_context_vpe_port.rst
+++ b/docs/testing/user/userguide/nsb/tc_prox_context_vpe_port.rst
@@ -3,9 +3,9 @@
.. http://creativecommons.org/licenses/by/4.0
.. (c) OPNFV, 2017 Intel Corporation.
-**********************************************
-Yardstick Test Case Description: NSB PROXi VPE
-**********************************************
+*********************************************
+Yardstick Test Case Description: NSB PROX VPE
+*********************************************
+-----------------------------------------------------------------------------+
|NSB PROX test for NFVI characterization |
diff --git a/docs/testing/user/userguide/nsb/tc_vfw_rfc2544.rst b/docs/testing/user/userguide/nsb/tc_vfw_rfc2544.rst
new file mode 100644
index 000000000..139990bc3
--- /dev/null
+++ b/docs/testing/user/userguide/nsb/tc_vfw_rfc2544.rst
@@ -0,0 +1,189 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, 2018 Intel Corporation.
+
+************************************************
+Yardstick Test Case Description: NSB vFW RFC2544
+************************************************
+
++------------------------------------------------------------------------------+
+| NSB vFW test for VNF characterization |
+| |
++---------------+--------------------------------------------------------------+
+| test case id | tc_{context}_rfc2544_ipv4_1rule_1flow_{pkt_size}_{tg_type} |
+| | |
+| | * context = baremetal, heat, heat_external, ovs, sriov |
+| | heat_sriov_external contexts; |
+| | * tg_type = ixia (context != heat,heat_sriov_external), |
+| | trex; |
+| | * pkt_size = 64B - all contexts; |
+| | 128B, 256B, 512B, 1024B, 1280B, 1518B - |
+| | (context = heat, tg_type = ixia) |
+| | |
++---------------+--------------------------------------------------------------+
+| metric | * Network Throughput; |
+| | * TG Packets Out; |
+| | * TG Packets In; |
+| | * TG Latency; |
+| | * VNF Packets Out; |
+| | * VNF Packets In; |
+| | * VNF Packets Fwd; |
+| | * Dropped packets; |
+| | |
++---------------+--------------------------------------------------------------+
+| test purpose  | The vFW RFC2544 tests measure performance characteristics   |
+|               | of the SUT (multiple ports) by sending bidirectional UDP    |
+|               | traffic from all TG ports to the SampleVNF vFW application. |
+|               | The application forwards the received traffic based on      |
+|               | rules provided by the user in the TC configuration and on   |
+|               | default rules created by the vFW to send traffic from       |
+|               | uplink ports to downlink ports and vice versa.              |
+| | |
++---------------+--------------------------------------------------------------+
+| configuration | The 2 ports RFC2544 test cases are listed below: |
+| | |
+| | * tc_baremetal_rfc2544_ipv4_1rule_1flow_64B_ixia.yaml |
+| | * tc_baremetal_rfc2544_ipv4_1rule_1flow_64B_trex.yaml |
+| | * tc_heat_external_rfc2544_ipv4_1rule_1flow_1024B_ixia.yaml |
+| | * tc_heat_external_rfc2544_ipv4_1rule_1flow_1280B_ixia.yaml |
+| | * tc_heat_external_rfc2544_ipv4_1rule_1flow_128B_ixia.yaml |
+| | * tc_heat_external_rfc2544_ipv4_1rule_1flow_1518B_ixia.yaml |
+| | * tc_heat_external_rfc2544_ipv4_1rule_1flow_256B_ixia.yaml |
+| | * tc_heat_external_rfc2544_ipv4_1rule_1flow_512B_ixia.yaml |
+| | * tc_heat_external_rfc2544_ipv4_1rule_1flow_64B_ixia.yaml |
+| | * tc_heat_external_rfc2544_ipv4_1rule_1flow_64B_trex.yaml |
+| | * tc_heat_sriov_external_rfc2544_ipv4_1rule_1flow_64B_trex. |
+| | yaml |
+| | * tc_heat_rfc2544_ipv4_1rule_1flow_64B_trex.yaml |
+| | * tc_ovs_rfc2544_ipv4_1rule_1flow_64B_ixia.yaml |
+| | * tc_ovs_rfc2544_ipv4_1rule_1flow_64B_trex.yaml |
+| | * tc_sriov_rfc2544_ipv4_1rule_1flow_64B_ixia.yaml |
+| | * tc_sriov_rfc2544_ipv4_1rule_1flow_64B_trex.yaml |
+| | |
+| | The 4 ports RFC2544 test cases are listed below: |
+| | |
+| | * tc_baremetal_rfc2544_ipv4_1rule_1flow_64B_ixia_4port.yaml |
+|               | * tc_baremetal_rfc2544_ipv4_1rule_1flow_64B_trex_4port.yaml |
+|               | * tc_heat_external_rfc2544_ipv4_1rule_1flow_64B_trex_4port. |
+|               |   yaml                                                      |
+|               | * tc_heat_rfc2544_ipv4_1rule_1flow_64B_trex_4port.yaml      |
+| | |
+| | The scale-up RFC2544 test cases are listed below: |
+| | |
+|               | * tc_heat_rfc2544_ipv4_1rule_1flow_64B_trex_scale-up.yaml   |
+| | |
+| | The scale-out RFC2544 test cases are listed below: |
+| | |
+| | * tc_heat_rfc2544_ipv4_1rule_1flow_64B_trex_scale_out.yaml |
+| | |
+|               | Test duration is set to 30 sec for each test and the        |
+|               | default number of rules is applied. Both can be configured; |
+|               | a hedged options sketch is given below this table.          |
+| | |
++---------------+--------------------------------------------------------------+
+| test tool | The vFW is a DPDK application that performs basic filtering |
+| | for malformed packets and dynamic packet filtering of |
+| | incoming packets using the connection tracker library. |
+| | |
++---------------+--------------------------------------------------------------+
+| applicability | The vFW RFC2544 test cases can be configured with different: |
+| | |
+| | * packet sizes; |
+| | * test duration; |
+| | * tolerated loss; |
+| | * traffic flows; |
+| | * rules; |
+| | |
+| | Default values exist. |
+| | |
++---------------+--------------------------------------------------------------+
+| pre-test      | For OpenStack test cases, the image (yardstick-samplevnf)   |
+| conditions    | with vFW and DPDK included in it needs to be installed into |
+|               | Glance (NSB install).                                       |
+|               |                                                              |
+|               | For Baremetal test cases, vFW and DPDK must be installed on |
+|               | the hosts where the test is executed. The pod.yaml file     |
+|               | must have the necessary system and NIC information.         |
+|               |                                                              |
+|               | For standalone (SA) SRIOV/OvS test cases, the               |
+|               | yardstick-samplevnf image needs to be installed on the      |
+|               | hosts and the pod.yaml file must be provided with the       |
+|               | necessary system and NIC information.                       |
+| | |
++---------------+--------------------------------------------------------------+
+| test sequence | Description and expected result |
+| | |
++---------------+--------------------------------------------------------------+
+| step 1        | For Baremetal test: The TG (except IXIA) and VNF are        |
+|               | started on the hosts based on the pod file.                 |
+|               |                                                              |
+|               | For Heat test: Two host VMs are booted, as traffic          |
+|               | generator and VNF (vFW), based on the test flavor. In case  |
+|               | of a scale-out scenario, multiple VNF VMs will be started.  |
+|               |                                                              |
+|               | For Heat external test: The vFW VM is booted and the TG     |
+|               | (except IXIA) is started on the external host based on the  |
+|               | pod file. In case of a scale-out scenario, multiple VNF VMs |
+|               | will be deployed.                                           |
+|               |                                                              |
+|               | For Heat SRIOV external test: The vFW VM is booted with     |
+|               | network interfaces of `direct` type which are mapped to VFs |
+|               | that are available to OpenStack. The TG (except IXIA) is    |
+|               | started on the external host based on the pod file. In case |
+|               | of a scale-out scenario, multiple VNF VMs will be deployed. |
+|               |                                                              |
+|               | For SRIOV test: VF ports are created on the host's PFs      |
+|               | specified in the TC file and the VM is booted using those   |
+|               | ports and the image provided in the configuration. The TG   |
+|               | (except IXIA) is started on another host connected to the   |
+|               | VNF machine based on the pod file. The vFW is started in    |
+|               | the booted VM. In case of a scale-out scenario, multiple    |
+|               | VNF VMs will be created.                                    |
+|               |                                                              |
+|               | For OvS-DPDK test: The OvS DPDK switch is started and       |
+|               | bridges are created with the ports specified in the TC      |
+|               | file. DPDK vHost ports are added to the corresponding       |
+|               | bridge and the VM is booted using those ports and the image |
+|               | provided in the configuration. The TG (except IXIA) is      |
+|               | started on another host connected to the VNF machine based  |
+|               | on the pod file. The vFW is started in the booted VM. In    |
+|               | case of a scale-out scenario, multiple VNF VMs will be      |
+|               | deployed.                                                   |
+| | |
++---------------+--------------------------------------------------------------+
+| step 2        | Yardstick is connected with the TG and VNF by using ssh (in |
+|               | case of IXIA, the TG is connected via the TCL interface).   |
+|               | The test will resolve the topology, instantiate all VNFs    |
+|               | and the TG, and collect the KPIs/metrics.                   |
+| | |
++---------------+--------------------------------------------------------------+
+| step 3        | The TG will send packets to the VNFs. If the number of      |
+|               | dropped packets is more than the tolerated loss, the line   |
+|               | rate or throughput is halved. This is repeated until the    |
+|               | number of dropped packets is within the tolerated loss.     |
+| | |
+| | The KPI is the number of packets per second for different |
+| | packet size with an accepted minimal packet loss for the |
+| | default configuration. |
+| | |
++---------------+--------------------------------------------------------------+
+| step 4        | In Baremetal test: The test quits the application and       |
+|               | unbinds the DPDK ports.                                     |
+|               |                                                              |
+|               | In Heat test: All VNF VMs and the TG are deleted on test    |
+|               | completion.                                                 |
+|               |                                                              |
+|               | In SRIOV test: The deployed VM with vFW is destroyed on the |
+|               | host and the TG (except IXIA) is stopped.                   |
+|               |                                                              |
+|               | In Heat SRIOV test: The deployed VM with vFW is destroyed,  |
+|               | the VFs are released and the TG (except IXIA) is stopped.   |
+|               |                                                              |
+|               | In OvS test: The deployed VM with vFW is destroyed on the   |
+|               | host, the OvS DPDK switch is stopped and its ports are      |
+|               | unbound. The TG (except IXIA) is stopped.                   |
+| | |
++---------------+--------------------------------------------------------------+
+| test verdict  | The test case will achieve a throughput with an accepted    |
+|               | minimal tolerated packet loss.                              |
++---------------+--------------------------------------------------------------+
+
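+The test duration and tolerated loss mentioned above can be overridden
+from the test case file. The sketch below follows the general layout of
+the NSB RFC2544 test case files; treat the exact keys as assumptions
+and check the shipped TC files for the authoritative layout.
+
+.. code-block:: yaml
+
+   # Hedged sketch of RFC2544 options in a vFW test case file.
+   scenarios:
+   - type: NSPerf
+     options:
+       framesize:
+         uplink: {64B: 100}
+         downlink: {64B: 100}
+       rfc2544:
+         allowed_drop_rate: 0.0001 - 0.0001   # tolerated loss
+       duration: 30                           # seconds per test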
diff --git a/docs/testing/user/userguide/nsb/tc_vfw_rfc2544_correlated.rst b/docs/testing/user/userguide/nsb/tc_vfw_rfc2544_correlated.rst
new file mode 100644
index 000000000..de490900d
--- /dev/null
+++ b/docs/testing/user/userguide/nsb/tc_vfw_rfc2544_correlated.rst
@@ -0,0 +1,130 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, 2018 Intel Corporation.
+
+*************************************************************
+Yardstick Test Case Description: NSB vFW RFC2544 (correlated)
+*************************************************************
+
++------------------------------------------------------------------------------+
+| NSB vFW test for VNF characterization using correlated traffic |
+| |
++---------------+--------------------------------------------------------------+
+| test case id | tc_{context}_rfc2544_ipv4_1rule_1flow_64B_trex_corelated |
+| | |
+| | * context = baremetal, heat |
+| | |
++---------------+--------------------------------------------------------------+
+| metric | * Network Throughput; |
+| | * TG Packets Out; |
+| | * TG Packets In; |
+| | * TG Latency; |
+| | * VNF Packets Out; |
+| | * VNF Packets In; |
+| | * VNF Packets Fwd; |
+| | * Dropped packets; |
+| | |
+| | NOTE: For correlated TCs the TG metrics are available on |
+| | uplink ports. |
+| | |
++---------------+--------------------------------------------------------------+
+| test purpose  | The vFW RFC2544 correlated tests measure performance        |
+|               | characteristics of the SUT (multiple ports) by sending UDP  |
+|               | traffic from the uplink TG ports to the SampleVNF vFW       |
+|               | application. The application forwards the received traffic  |
+|               | from uplink ports to downlink ports based on rules provided |
+|               | by the user in the TC configuration and default rules       |
+|               | created by the vFW. The VNF downlink traffic is received by |
+|               | a UDPReplay VNF and mirrored back to the vFW on the same    |
+|               | port. Finally, the traffic is received back on the TG       |
+|               | uplink port.                                                |
+| | |
++---------------+--------------------------------------------------------------+
+| configuration | The 2 ports RFC2544 correlated test cases are listed below: |
+| | |
+| | * tc_baremetal_rfc2544_ipv4_1rule_1flow_64B_trex_corelated |
+| | _traffic.yaml |
+| | |
+| | Multiple VNF (2, 4, 10) RFC2544 correlated test cases are |
+| | listed below: |
+| | |
+| | * tc_heat_rfc2544_ipv4_1rule_1flow_64B_trex_correlated |
+| | _scale_10.yaml |
+| | * tc_heat_rfc2544_ipv4_1rule_1flow_64B_trex_correlated_scale |
+| | _2.yaml |
+| | * tc_heat_rfc2544_ipv4_1rule_1flow_64B_trex_correlated_scale |
+| | _4.yaml |
+| | |
+| | The scale-out RFC2544 test cases are listed below: |
+| | |
+| | * tc_heat_rfc2544_ipv4_1rule_1flow_64B_trex_correlated_scale |
+| | _out.yaml |
+| | |
+|               | Test duration is set to 30 sec for each test and the        |
+|               | default number of rules is applied. Both can be configured. |
+| | |
++---------------+--------------------------------------------------------------+
+| test tool | The vFW is a DPDK application that performs basic filtering |
+| | for malformed packets and dynamic packet filtering of |
+| | incoming packets using the connection tracker library. |
+| | |
++---------------+--------------------------------------------------------------+
+| applicability | The vFW RFC2544 test cases can be configured with different: |
+| | |
+| | * packet sizes; |
+| | * test duration; |
+| | * tolerated loss; |
+| | * traffic flows; |
+| | * rules; |
+| | |
+| | Default values exist. |
+| | |
++---------------+--------------------------------------------------------------+
+| pre-test      | For OpenStack test cases, the image (yardstick-samplevnf)   |
+| conditions    | with vFW and DPDK included in it needs to be installed into |
+|               | Glance (NSB install).                                       |
+|               |                                                              |
+|               | For Baremetal test cases, vFW and DPDK must be installed on |
+|               | the hosts where the test is executed. The pod.yaml file     |
+|               | must have the necessary system and NIC information.         |
+| | |
++---------------+--------------------------------------------------------------+
+| test sequence | Description and expected result |
+| | |
++---------------+--------------------------------------------------------------+
+| step 1        | For Baremetal test: The TG (except IXIA), vFW and UDPReplay |
+|               | VNFs are started on the hosts based on the pod file.        |
+|               |                                                              |
+|               | For Heat test: Three host VMs are booted, as traffic        |
+|               | generator, vFW and UDPReplay VNF, based on the test flavor. |
+|               | In case of a scale-out scenario, multiple vFW VNF VMs will  |
+|               | be started.                                                 |
+| | |
++---------------+--------------------------------------------------------------+
+| step 2        | Yardstick is connected with the TG, vFW and UDPReplay VNF   |
+|               | by using ssh (in case of IXIA, the TG is connected via the  |
+|               | TCL interface). The test will resolve the topology,         |
+|               | instantiate all VNFs and the TG, and collect the            |
+|               | KPIs/metrics.                                               |
+| | |
++---------------+--------------------------------------------------------------+
+| step 3        | The TG will send packets to the VNFs. If the number of      |
+|               | dropped packets is more than the tolerated loss, the line   |
+|               | rate or throughput is halved. This is repeated until the    |
+|               | number of dropped packets is within the tolerated loss.     |
+| | |
+| | The KPI is the number of packets per second for 64B packet |
+| | size with an accepted minimal packet loss for the default |
+| | configuration. |
+| | |
++---------------+--------------------------------------------------------------+
+| step 4        | In Baremetal test: The test quits the application and       |
+|               | unbinds the DPDK ports.                                     |
+|               |                                                              |
+|               | In Heat test: All VNF VMs and the TG are deleted on test    |
+|               | completion.                                                 |
+| | |
++---------------+--------------------------------------------------------------+
+| test verdict  | The test case will achieve a throughput with an accepted    |
+|               | minimal tolerated packet loss.                              |
++---------------+--------------------------------------------------------------+
+
diff --git a/docs/testing/user/userguide/nsb/tc_vfw_rfc3511.rst b/docs/testing/user/userguide/nsb/tc_vfw_rfc3511.rst
new file mode 100644
index 000000000..9051fc4df
--- /dev/null
+++ b/docs/testing/user/userguide/nsb/tc_vfw_rfc3511.rst
@@ -0,0 +1,133 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, 2018 Intel Corporation.
+
+*******************************************************
+Yardstick Test Case Description: NSB vFW RFC3511 (HTTP)
+*******************************************************
+
++------------------------------------------------------------------------------+
+| NSB vFW test for VNF characterization based on RFC3511 and IXIA |
+| |
++---------------+--------------------------------------------------------------+
+| test case id | tc_{context}_http_ixload_{http_size}_Requests-65000_{type} |
+| | |
+| | * context = baremetal, heat_external |
+| | * http_size = 1b, 4k, 64k, 256k, 512k, 1024k payload size |
+| | * type = Concurrency, Connections, Throughput |
+| | |
++---------------+--------------------------------------------------------------+
+| metric | * HTTP Total Throughput (Kbps); |
+| | * HTTP Simulated Users; |
+| | * HTTP Concurrent Connections; |
+| | * HTTP Connection Rate; |
+| | * HTTP Transaction Rate |
+| | |
++---------------+--------------------------------------------------------------+
+| test purpose  | The vFW RFC3511 tests measure performance characteristics   |
+|               | of the SUT by sending HTTP traffic from the uplink to the   |
+|               | downlink TG ports through the vFW VNF. The application      |
+|               | forwards the received traffic based on rules provided by    |
+|               | the user in the TC configuration and default rules created  |
+|               | by the vFW to send traffic from uplink ports to downlink    |
+|               | ports and vice versa.                                       |
+| | |
++---------------+--------------------------------------------------------------+
+| configuration | The 2 ports RFC3511 test cases are listed below: |
+| | |
+| | * tc_baremetal_http_ixload_1024k_Requests-65000 |
+| | _Concurrency.yaml |
+| | * tc_baremetal_http_ixload_1b_Requests-65000 |
+| | _Concurrency.yaml |
+| | * tc_baremetal_http_ixload_256k_Requests-65000 |
+| | _Concurrency.yaml |
+| | * tc_baremetal_http_ixload_4k_Requests-65000 |
+| | _Concurrency.yaml |
+| | * tc_baremetal_http_ixload_512k_Requests-65000 |
+| | _Concurrency.yaml |
+| | * tc_baremetal_http_ixload_64k_Requests-65000 |
+| | _Concurrency.yaml |
+| | * tc_heat_external_http_ixload_1b_Requests-10Gbps |
+| | _Throughput.yaml |
+| | * tc_heat_external_http_ixload_1b_Requests-65000 |
+| | _Concurrency.yaml |
+| | * tc_heat_external_http_ixload_1b_Requests-65000 |
+| | _Connections.yaml |
+| | |
+| | The 4 ports RFC3511 test cases are listed below: |
+| | |
+| | * tc_baremetal_http_ixload_1b_Requests-65000 |
+| | _Concurrency_4port.yaml |
+| | |
++---------------+--------------------------------------------------------------+
+| test tool | The vFW is a DPDK application that performs basic filtering |
+| | for malformed packets and dynamic packet filtering of |
+| | incoming packets using the connection tracker library. |
+| | |
++---------------+--------------------------------------------------------------+
+| applicability | The vFW RFC3511 test cases can be configured with different: |
+| | |
+| | * http payload sizes; |
+| | * traffic flows; |
+| | * rules; |
+| | |
+| | Default values exist. |
+| | |
++---------------+--------------------------------------------------------------+
+| pre-test      | For OpenStack test cases, the image (yardstick-samplevnf)   |
+| conditions    | with vFW and DPDK included in it needs to be installed into |
+|               | Glance (NSB install).                                       |
+|               |                                                              |
+|               | For Baremetal test cases, vFW and DPDK must be installed on |
+|               | the hosts where the test is executed. The pod.yaml file     |
+|               | must have the necessary system and NIC information.         |
+| | |
++---------------+--------------------------------------------------------------+
+| test sequence | Description and expected result |
+| | |
++---------------+--------------------------------------------------------------+
+| step 1 | For Baremetal test: The vFW VNF is started on the hosts |
+| | based on the pod file. |
+| | |
+|               | For Heat external test: The vFW VM is deployed and booted.  |
+| | |
++---------------+--------------------------------------------------------------+
+| step 2        | Yardstick is connected with the TG (IxLoad) via the IxLoad  |
+|               | API and with the VNF by using ssh. The test will resolve    |
+|               | the topology, instantiate all VNFs and the TG, and collect  |
+|               | the KPIs/metrics.                                           |
+| | |
++---------------+--------------------------------------------------------------+
+| step 3        | The TG simulates HTTP traffic based on the selected type of |
+|               | TC.                                                          |
+|               |                                                              |
+|               | Concurrency:                                                 |
+|               |   The TC attempts to simulate some number of human users.   |
+|               |   The simulated users are gradually brought online until    |
+|               |   64K users are reached (the ramp-up phase), then taken     |
+|               |   offline (the ramp-down phase).                            |
+|               |                                                              |
+|               | Connections:                                                 |
+|               |   The TC creates some number of HTTP connections per        |
+|               |   second. It will attempt to generate 64K HTTP connections  |
+|               |   per second.                                               |
+|               |                                                              |
+|               | Throughput:                                                  |
+|               |   The TC simultaneously transmits and receives TCP payload  |
+|               |   (bytes) at a certain rate measured in megabits per second |
+|               |   (Mbps), kilobits per second (Kbps) or gigabits per second |
+|               |   (Gbps). The default throughput is 10 Gbps.                |
+|               |                                                              |
+|               | At the end of the TC, the KPIs are collected and stored     |
+|               | (depending on the selected dispatcher).                     |
+| | |
++---------------+--------------------------------------------------------------+
+| step 4 | In Baremetal test: The test quits the application and |
+| | unbinds the DPDK ports. |
+| | |
+|               | In Heat test: All VNF VMs are deleted and the connections   |
+|               | to the TG are terminated.                                   |
+| | |
++---------------+--------------------------------------------------------------+
+| test verdict | The test case will try to achieve the configured HTTP |
+| | Concurrency/Throughput/Connections. |
++---------------+--------------------------------------------------------------+
+
diff --git a/docs/testing/user/userguide/nsb/tc_vims_context_sipp.rst b/docs/testing/user/userguide/nsb/tc_vims_context_sipp.rst
new file mode 100644
index 000000000..6df4ab880
--- /dev/null
+++ b/docs/testing/user/userguide/nsb/tc_vims_context_sipp.rst
@@ -0,0 +1,96 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) 2019 Viosoft Corporation.
+
+**********************************************
+Yardstick Test Case Description: NSB VIMS
+**********************************************
+
++-----------------------------------------------------------------------------+
+|NSB VIMS test for vIMS characterization |
+| |
++--------------+--------------------------------------------------------------+
+|test case id | tc_vims_{context}_sipp |
+| | |
+| | * context = baremetal or heat; |
+| | |
++--------------+--------------------------------------------------------------+
+|metric | * Successful registrations per second; |
+| | * Total number of active registrations per server; |
+| | * Successful de-registrations per second; |
+| | * Successful session establishments per second; |
+| | * Total number of active sessions per server; |
+| | * Mean session setup time; |
+| | * Successful re-registrations per second; |
+| | |
++--------------+--------------------------------------------------------------+
+|test purpose  | The vIMS test handles the registration rate, call rate,     |
+|              | round-trip delay, and message statistics of the vIMS        |
+|              | system.                                                      |
+|              |                                                              |
+|              | The vIMS test cases are implemented to run in the baremetal |
+|              | and heat contexts with the default configuration.           |
+| | |
++--------------+--------------------------------------------------------------+
+|configuration | The vIMS test cases are listed below: |
+| | |
+| | * tc_vims_baremetal_sipp.yaml |
+| | * tc_vims_heat_sipp.yaml |
+| | |
+| | Each test runs one time and collects all the KPIs. |
+| | The configuration of vIMS and SIPp can be changed in each |
+| | test. |
++--------------+--------------------------------------------------------------+
+|test tool | SIPp |
+| | |
+|              | SIPp is an application that can simulate SIP scenarios and  |
+|              | generate RTP traffic; it is used for vIMS characterization. |
+| | |
++--------------+--------------------------------------------------------------+
+|applicability | The SIPp test cases can be configured with different: |
+| | |
+|              | * number of accounts;                                       |
+|              | * the calls per second (cps) of the SIP test;               |
+|              | * the holding time;                                         |
+|              | * the RTP configuration;                                    |
+|              |                                                              |
+|              | A hedged options sketch is given below this table.          |
+| | |
++--------------+--------------------------------------------------------------+
+|pre-test      | For the OpenStack test case, only the vIMS is deployed, by  |
+|conditions    | an external Heat template; SIPp needs a pod.yaml file with  |
+|              | the necessary system and NIC information.                   |
+|              |                                                              |
+|              | For Baremetal test cases, SIPp and vIMS must be installed   |
+|              | on the hosts where the test is executed. The pod.yaml file  |
+|              | must have the necessary system and NIC information.         |
+| | |
++--------------+--------------------------------------------------------------+
+|test sequence | description and expected result |
+| | |
++--------------+--------------------------------------------------------------+
+|step 1 | For Baremetal test: The TG and VNF are started on the hosts |
+| | based on the pod file. |
+| | |
+|              | For Heat test: One host VM for vIMS is booted, based on     |
+|              | the test flavor. Another host for SIPp is booted as the     |
+|              | traffic generator, based on the pod.yaml file.              |
+| | |
++--------------+--------------------------------------------------------------+
+|step 2 | Yardstick is connected with the vIMS and SIPp via ssh. |
+| | The test will resolve the topology, instantiate the vIMS and |
+| | SIPp and collect the KPIs/metrics. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 3 | The SIPp will run scenario tests with parameters configured |
+| | in test case files (tc_vims_baremetal_sipp.yaml and |
+| | tc_vims_heat_sipp.yaml files). |
+| | This is done until the KPIs of SIPp are within an acceptable |
+| | threshold. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 4 | In Baremetal test: The test quits the application. |
+| | |
+| | In Heat test: The host VM of vIMS is deleted on test |
+| | completion. |
++--------------+--------------------------------------------------------------+
+|test verdict | The test case will collect the KPIs and plot on Grafana. |
++--------------+--------------------------------------------------------------+
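+
+A hedged sketch of the SIPp-related options referenced above is shown
+below. The key names are illustrative assumptions; the authoritative
+ones are in tc_vims_baremetal_sipp.yaml and tc_vims_heat_sipp.yaml.
+
+.. code-block:: yaml
+
+   # Illustrative SIPp options (key names are assumptions).
+   options:
+     accounts: 10000             # number of accounts
+     cps: 10                     # calls per second of the SIP test
+     holding_time: 10            # seconds
+     rtp_enabled: true           # RTP configuration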
diff --git a/docs/testing/user/userguide/nsb/tc_vpp_baremetal_crypto_ipsec.rst b/docs/testing/user/userguide/nsb/tc_vpp_baremetal_crypto_ipsec.rst
new file mode 100644
index 000000000..6a4a37697
--- /dev/null
+++ b/docs/testing/user/userguide/nsb/tc_vpp_baremetal_crypto_ipsec.rst
@@ -0,0 +1,113 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, 2019 Viosoft Corporation.
+
+***********************************************
+Yardstick Test Case Description: NSB VPP IPSEC
+***********************************************
+
++------------------------------------------------------------------------------+
+|NSB VPP test for vIPSEC characterization |
+| |
++--------------+---------------------------------------------------------------+
+|test case id | tc_baremetal_rfc2544_ipv4_{crypto_dev}_{crypto_alg} |
+| | |
+| | * crypto_dev = HW_cryptodev or SW_cryptodev; |
+| | * crypto_alg = aes-gcm or cbc-sha1; |
+| | |
++--------------+---------------------------------------------------------------+
+|metric | * Network Throughput NDR or PDR; |
+| | * Connections Per Second (CPS); |
+| | * Latency; |
+| | * Number of tunnels; |
+| | * TG Packets Out; |
+| | * TG Packets In; |
+| | * VNF Packets Out; |
+| | * VNF Packets In; |
+| | * Dropped packets; |
+| | |
++--------------+---------------------------------------------------------------+
+|test purpose | IPv4 IPsec tunnel mode performance test: |
+| | |
+| | * Finds and reports throughput NDR (Non Drop Rate) with zero |
+| | packet loss tolerance or throughput PDR (Partial Drop Rate) |
+| | with non-zero packet loss tolerance (LT) expressed in |
+| | number of packets transmitted. |
+| | |
+| | * The IPSEC test cases are implemented to run in a |
+| | baremetal environment. |
+| | |
++--------------+---------------------------------------------------------------+
+|configuration | The IPSEC test cases are listed below: |
+| | |
+| | * tc_baremetal_rfc2544_ipv4_hw_aesgcm_IMIX_trex.yaml |
+| | * tc_baremetal_rfc2544_ipv4_hw_aesgcm_trex.yaml |
+| | * tc_baremetal_rfc2544_ipv4_hw_cbcsha1_IMIX_trex.yaml |
+| | * tc_baremetal_rfc2544_ipv4_hw_cbcsha1_trex.yaml |
+| | * tc_baremetal_rfc2544_ipv4_sw_aesgcm_IMIX_trex.yaml |
+| | * tc_baremetal_rfc2544_ipv4_sw_aesgcm_trex.yaml |
+| | * tc_baremetal_rfc2544_ipv4_sw_cbcsha1_IMIX_trex.yaml |
+| | * tc_baremetal_rfc2544_ipv4_sw_cbcsha1_trex.yaml |
+| | |
+| | Test duration is set to 500 seconds for each test. |
+| | Packet size is set to 64 bytes or higher. |
+| | The number of tunnels is set to 1 or higher. |
+| | The number of connections is set to 1 or higher. |
+| | All of these can be configured, as sketched below. |
+| | |
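+| | A hypothetical override of these values in the test case |
+| | options (the key names below are illustrative assumptions, |
+| | not the shipped defaults):: |
+| | |
+| |   options: |
+| |     duration: 500        # test duration (seconds) |
+| |     framesize: 64        # packet size (bytes) |
+| |     vpp_config: |
+| |       tunnels: 1         # number of IPsec tunnels |
+| |       connections: 1     # number of connections |
+| | |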
++--------------+---------------------------------------------------------------+
+|test tool | Vector Packet Processing (VPP) |
+| | The VPP platform is an extensible framework that provides |
+| | out-of-the-box production quality switch/router functionality.|
+| | It combines high performance, proven technology, modularity, |
+| | flexibility and a rich feature set. |
+| | |
++--------------+---------------------------------------------------------------+
+|applicability | The VPP IPSEC test cases can be configured with different: |
+| | |
+| | * packet sizes; |
+| | * test durations; |
+| | * tolerated loss; |
+| | * crypto device type; |
+| | * number of physical cores; |
+| | * number of tunnels; |
+| | * number of connections; |
+| | * encryption algorithms - integrity algorithm; |
+| | |
+| | Default values exist. |
+| | |
++--------------+---------------------------------------------------------------+
+|pre-test | For Baremetal test cases, VPP and DPDK must be installed on |
+|conditions | the hosts where the test is executed. The pod.yaml file must |
+| | have the necessary system and NIC information. |
+| | |
++--------------+---------------------------------------------------------------+
+|test sequence | description and expected result |
+| | |
++--------------+---------------------------------------------------------------+
+|step 1 | For Baremetal test: The TG and VNF are started on the hosts |
+| | based on the pod file. |
+| | |
++--------------+---------------------------------------------------------------+
+|step 2 | Yardstick is connected with the TG and VNF by using ssh. |
+| | The test will resolve the topology and instantiate the VNF |
+| | and TG, and collect the KPIs/metrics. |
+| | |
++--------------+---------------------------------------------------------------+
+|step 3 | Test packets are generated by TG on links to DUTs. If the |
+| | number of dropped packets is more than the tolerated loss, |
+| | the line rate or throughput is halved. This is done until |
+| | the dropped packets are within an acceptable tolerated loss. |
+| | |
+| | The KPI is the number of packets per second for a packet size |
+| | specified in the test case with an accepted minimal packet |
+| | loss for the default configuration. |
+| | |
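+| | A sketch of how the search bounds and tolerated loss might |
+| | be expressed in the traffic profile options (the key names |
+| | below are assumptions, not the shipped profile):: |
+| | |
+| |   traffic_profile: |
+| |     lower_bound: 0.0       # % of line rate |
+| |     upper_bound: 100.0     # % of line rate |
+| |     tolerated_loss: 0.005  # % packet loss (PDR) |
+| | |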
++--------------+---------------------------------------------------------------+
+|step 4 | In Baremetal test: The test quits the application and |
+| | unbinds the DPDK ports. |
+| | |
++--------------+---------------------------------------------------------------+
+|test verdict | The test case will achieve a throughput with an accepted |
+| | minimal tolerated packet loss. |
++--------------+---------------------------------------------------------------+ \ No newline at end of file
diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc010.rst b/docs/testing/user/userguide/opnfv_yardstick_tc010.rst
index 202307de6..19cc80e30 100644
--- a/docs/testing/user/userguide/opnfv_yardstick_tc010.rst
+++ b/docs/testing/user/userguide/opnfv_yardstick_tc010.rst
@@ -34,6 +34,7 @@ Yardstick Test Case Description TC010
| | |
| | Lmbench is a suite of operating system microbenchmarks. This |
| | test uses lat_mem_rd tool from that suite including: |
+| | |
| | * Context switching |
| | * Networking: connection establishment, pipe, TCP, UDP, and |
| | RPC hot potato |
@@ -55,7 +56,7 @@ Yardstick Test Case Description TC010
| | The benchmark runs as two nested loops. The outer loop is |
| | the stride size. The inner loop is the array size. For each |
| | array size, the benchmark creates a ring of pointers that |
-| | point backward one stride.Traversing the array is done by: |
+| | point backward one stride. Traversing the array is done by:: |
| | |
| | p = (char **)*p; |
| | |
diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc011.rst b/docs/testing/user/userguide/opnfv_yardstick_tc011.rst
index 48bdef497..cbb1db91f 100644
--- a/docs/testing/user/userguide/opnfv_yardstick_tc011.rst
+++ b/docs/testing/user/userguide/opnfv_yardstick_tc011.rst
@@ -60,14 +60,14 @@ Yardstick Test Case Description TC011
| | |
| | * options: |
| | protocol: udp # The protocol used by iperf3 tools |
-| | bandwidth: 20m # It will send the given number of packets |
-| | without pausing |
+| | # Send the given number of packets without pausing: |
+| | bandwidth: 20m |
| | * runner: |
| | duration: 30 # Total test duration 30 seconds. |
| | |
| | * SLA (optional): |
| | jitter: 10 (ms) # The maximum amount of jitter that is |
-| | accepted. |
+| | accepted. |
| | |
+--------------+--------------------------------------------------------------+
|applicability | Test can be configured with different: |
diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc012.rst b/docs/testing/user/userguide/opnfv_yardstick_tc012.rst
index b56e829f5..2502f5d94 100644
--- a/docs/testing/user/userguide/opnfv_yardstick_tc012.rst
+++ b/docs/testing/user/userguide/opnfv_yardstick_tc012.rst
@@ -34,6 +34,7 @@ Yardstick Test Case Description TC012
| | |
| | LMbench is a suite of operating system microbenchmarks. |
| | This test uses bw_mem tool from that suite including: |
+| | |
| | * Cached file read |
| | * Memory copy (bcopy) |
| | * Memory read |
diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc015.rst b/docs/testing/user/userguide/opnfv_yardstick_tc015.rst
new file mode 100755
index 000000000..277614ad4
--- /dev/null
+++ b/docs/testing/user/userguide/opnfv_yardstick_tc015.rst
@@ -0,0 +1,141 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, Orange and others.
+
+*************************************
+Yardstick Test Case Description TC015
+*************************************
+
+.. _unixbench: https://github.com/kdlucas/byte-unixbench/blob/master/UnixBench
+
++-----------------------------------------------------------------------------+
+| Processing speed with impact on energy consumption and CPU load |
+| |
++--------------+--------------------------------------------------------------+
+|test case id | OPNFV_YARDSTICK_TC015_PROCESSING_SPEED |
+| | |
++--------------+--------------------------------------------------------------+
+|metric | score of single CPU running, |
+| | score of parallel running, |
+| | energy consumption, |
+| | CPU load |
+| | |
++--------------+--------------------------------------------------------------+
+|test purpose | The purpose of TC015 is to evaluate the IaaS compute |
+| | performance with regards to CPU processing speed and its |
+| | impact on the energy consumption. |
+| | It measures the score of single CPU running and parallel |
+| | running. Energy consumption and CPU load are monitored |
+| | while the CPU test is running. |
+| | |
+| | The purpose is also to be able to spot the trends. |
+| | Test results, graphs and similar shall be stored for |
+| | comparison reasons and product evolution understanding |
+| | between different OPNFV versions and/or configurations, |
+| | different server types. |
+| | |
++--------------+--------------------------------------------------------------+
+|test tool | UnixBench |
+| | |
+| | UnixBench is a widely used CPU benchmarking tool. |
+| | It can measure the performance of bash scripts, CPUs in |
+| | multithreading and single threading. It can also measure the |
+| | performance for parallel tasks. Also, specific disk IO for |
+| | small and large files is performed. It can be used to |
+| | measure both Linux dedicated servers and Linux VPS servers, |
+| | running CentOS, Debian, Ubuntu, Fedora and other distros. |
+| | |
+| | (UnixBench is not always part of a Linux distribution, hence |
+| | it needs to be installed. As an example see the |
+| | /yardstick/tools/ directory for how to generate a Linux |
+| | image with UnixBench included.) |
+| | |
+| | Redfish API |
+| | This HTTPS interface is provided by the BMC of every telco |
+| | grade server. It is a standard interface. |
+| | |
++--------------+--------------------------------------------------------------+
+|test | The UnixBench runs system benchmarks on a compute, getting |
+|description | information on the CPUs in the system. If the system has |
+| | more than one CPU, the tests will be run twice -- once with |
+| | a single copy of each test running at once, and once with |
+| | N copies, where N is the number of CPUs. |
+| | |
+| | UnixBench will process a set of results from a single test |
+| | by averaging the individual pass results into a single final |
+| | value. |
+| | |
+| | While the CPU test is running, the Energy scenario runs in |
+| | the background to monitor the number of watts consumed by |
+| | the compute server on the fly. The same is done using the |
+| | Cpuload scenario to monitor the overall percentage of CPU |
+| | used on the fly. This makes it possible to balance the CPU |
+| | score against its impact on energy consumption. Synchronized |
+| | measurements make it possible to look at any relation |
+| | between CPU load and energy consumption. |
+| | |
++--------------+--------------------------------------------------------------+
+|configuration | file: opnfv_yardstick_tc015.yaml |
+| | |
+| | run_mode: |
+| | Run Energy and Cpuload in background |
+| | Run UnixBench in quiet mode or verbose mode |
+| | test_type: dhry2reg, whetstone and so on |
+| | |
+| | Duration and Interval are set globally for Energy and |
+| | Cpuload, aligned with duration of UnixBench test. |
+| | SLA can be set for each scenario type. Default is NA. |
+| | For SLA with single_score and parallel_score, both can be |
+| | set by user, default is NA. |
+| | |
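+| | A minimal sketch of the scenario section, assuming the |
+| | option names described above (host and runner values are |
+| | placeholders):: |
+| | |
+| |   scenarios: |
+| |   - type: UnixBench |
+| |     options: |
+| |       run_mode: 'quiet'      # or 'verbose' |
+| |       test_type: 'dhry2reg'  # or 'whetstone' |
+| |     host: node1.LF           # node name from pod.yaml |
+| |     runner: |
+| |       type: Iteration |
+| |       iterations: 1 |
+| | |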
++--------------+--------------------------------------------------------------+
+|applicability | Test shall be applied to node context only. |
+| | It can be configured with different: |
+| | |
+| | * test types: dhry2reg, whetstone |
+| | |
+| | Default values exist. |
+| | |
+| | SLA (optional): min_score: The minimum UnixBench score that |
+| | is accepted. |
+| | |
++--------------+--------------------------------------------------------------+
+|usability | This test case is one of Yardstick's generic test cases. |
+| | Thus it is runnable on most of the scenarios. |
+| | |
++--------------+--------------------------------------------------------------+
+|references | unixbench_ |
+| | |
+| | ETSI-NFV-TST001 |
+| | |
++--------------+--------------------------------------------------------------+
+|pre-test | The target shall have unixbench installed on it. |
+|conditions | |
+| | |
++--------------+--------------------------------------------------------------+
+|test sequence | description and expected result |
+| | |
++--------------+--------------------------------------------------------------+
+|step 1 | Yardstick is connected with the target node using ssh. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 2 | Energy and Cpuload are launched silently in the background, |
+| | one after the other. |
+| | Then UnixBench is invoked. All the tests are executed using |
+| | the "Run" script in the top level of the UnixBench |
+| | directory. |
+| | The "Run" script will run a standard "index" test, and save |
+| | the report in the "results" directory. Then the report is |
+| | processed by "unixbench_benchmark" and checked against the |
+| | SLA. |
+| | While UnixBench runs, energy and CPU load are captured |
+| | periodically according to the interval value. |
+| | |
+| | Result: Logs are stored. |
+| | |
++--------------+--------------------------------------------------------------+
+|test verdict | Fails only if SLA is not passed, or if there is a test case |
+| | execution problem. |
+| | |
++--------------+--------------------------------------------------------------+
diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc019.rst b/docs/testing/user/userguide/opnfv_yardstick_tc019.rst
index 8d79e011a..d27b201c5 100644
--- a/docs/testing/user/userguide/opnfv_yardstick_tc019.rst
+++ b/docs/testing/user/userguide/opnfv_yardstick_tc019.rst
@@ -43,20 +43,24 @@ Yardstick Test Case Description TC019
| | |
+--------------+--------------------------------------------------------------+
|monitors | In this test case, two kinds of monitor are needed: |
+| | |
| | 1. the "openstack-cmd" monitor constantly request a specific |
| | Openstack command, which needs two parameters: |
-| | 1) monitor_type: which is used for finding the monitor class |
-| | and related scritps. It should be always set to |
-| | "openstack-cmd" for this monitor. |
-| | 2) command_name: which is the command name used for request |
+| | |
+| | 1. monitor_type: which is used for finding the monitor |
+| | class and related scripts. It should always be set to |
+| | "openstack-cmd" for this monitor. |
+| | 2. command_name: which is the command name used for |
+| | request |
| | |
| | 2. the "process" monitor check whether a process is running |
| | on a specific node, which needs three parameters: |
-| | 1) monitor_type: which used for finding the monitor class |
-| | and related scritps. It should be always set to "process" |
-| | for this monitor. |
-| | 2) process_name: which is the process name for monitor |
-| | 3) host: which is the name of the node runing the process |
+| | |
+| | 1. monitor_type: which is used for finding the monitor |
+| | class and related scripts. It should always be set to |
+| | "process" for this monitor. |
+| | 2. process_name: which is the process name for monitor |
+| | 3. host: which is the name of the node running the process |
| | |
| | e.g. |
| | monitor1: |
diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc025.rst b/docs/testing/user/userguide/opnfv_yardstick_tc025.rst
index 0e2e9a5f8..f3f9ea6bf 100644
--- a/docs/testing/user/userguide/opnfv_yardstick_tc025.rst
+++ b/docs/testing/user/userguide/opnfv_yardstick_tc025.rst
@@ -39,12 +39,15 @@ Yardstick Test Case Description TC025
| | |
+--------------+--------------------------------------------------------------+
|monitors | In this test case, one kind of monitor are needed: |
+| | |
| | 1. the "openstack-cmd" monitor constantly request a specific |
| | Openstack command, which needs two parameters |
-| | 1) monitor_type: which is used for finding the monitor class |
-| | and related scritps. It should be always set to |
-| | "openstack-cmd" for this monitor. |
-| | 2) command_name: which is the command name used for request |
+| | |
+| | 1) monitor_type: which is used for finding the monitor |
+| | class and related scripts. It should always be set to |
+| | "openstack-cmd" for this monitor. |
+| | 2) command_name: which is the command name used for |
+| | request |
| | |
| | There are four instance of the "openstack-cmd" monitor: |
| | monitor1: |
diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc027.rst b/docs/testing/user/userguide/opnfv_yardstick_tc027.rst
index 125fd59fa..90790e2e3 100644
--- a/docs/testing/user/userguide/opnfv_yardstick_tc027.rst
+++ b/docs/testing/user/userguide/opnfv_yardstick_tc027.rst
@@ -7,7 +7,7 @@
Yardstick Test Case Description TC027
*************************************
-.. _ipv6: https://wiki.opnfv.org/ipv6_opnfv_project
+.. _ipv6: https://wiki.opnfv.org/display/ipv6
+-----------------------------------------------------------------------------+
|IPv6 connectivity between nodes on the tenant network |
diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc040.rst b/docs/testing/user/userguide/opnfv_yardstick_tc040.rst
index d62fbf787..4c73c9677 100644
--- a/docs/testing/user/userguide/opnfv_yardstick_tc040.rst
+++ b/docs/testing/user/userguide/opnfv_yardstick_tc040.rst
@@ -7,7 +7,7 @@
Yardstick Test Case Description TC040
*************************************
-.. _Parser: https://wiki.opnfv.org/parser
+.. _Parser: https://wiki.opnfv.org/display/parser
+-----------------------------------------------------------------------------+
|Verify Parser Yang-to-Tosca |
diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc042.rst b/docs/testing/user/userguide/opnfv_yardstick_tc042.rst
index a0c487c7b..23b98c8f4 100644
--- a/docs/testing/user/userguide/opnfv_yardstick_tc042.rst
+++ b/docs/testing/user/userguide/opnfv_yardstick_tc042.rst
@@ -9,7 +9,7 @@ Yardstick Test Case Description TC042
.. _DPDK: http://dpdk.org/doc/guides/index.html
.. _Testpmd: http://dpdk.org/doc/guides/testpmd_app_ug/index.html
-.. _Pktgen-dpdk: http://pktgen.readthedocs.io/en/latest/index.html
+.. _Pktgen-dpdk: https://pktgen-dpdk.readthedocs.io/en/latest/index.html
+-----------------------------------------------------------------------------+
|Network Performance |
diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc050.rst b/docs/testing/user/userguide/opnfv_yardstick_tc050.rst
index 82a491b72..7d01cb99a 100644
--- a/docs/testing/user/userguide/opnfv_yardstick_tc050.rst
+++ b/docs/testing/user/userguide/opnfv_yardstick_tc050.rst
@@ -35,18 +35,18 @@ Yardstick Test Case Description TC050
| | 3) interface: the network interface to be turned off. |
| | |
| | The interface to be closed by the attacker can be set by the |
-| | variable of "{{ interface_name }}" |
+| | variable of "{{ interface_name }}":: |
| | |
-| | attackers: |
-| | - |
-| | fault_type: "general-attacker" |
-| | host: {{ attack_host }} |
-| | key: "close-br-public" |
-| | attack_key: "close-interface" |
-| | action_parameter: |
-| | interface: {{ interface_name }} |
-| | rollback_parameter: |
-| | interface: {{ interface_name }} |
+| | attackers: |
+| | - |
+| | fault_type: "general-attacker" |
+| | host: {{ attack_host }} |
+| | key: "close-br-public" |
+| | attack_key: "close-interface" |
+| | action_parameter: |
+| | interface: {{ interface_name }} |
+| | rollback_parameter: |
+| | interface: {{ interface_name }} |
| | |
+--------------+--------------------------------------------------------------+
|monitors | In this test case, the monitor named "openstack-cmd" is |
@@ -56,19 +56,20 @@ Yardstick Test Case Description TC050
| | "openstack-cmd" for this monitor. |
| | 2) command_name: which is the command name used for request |
| | |
-| | There are four instance of the "openstack-cmd" monitor: |
-| | monitor1: |
-| | - monitor_type: "openstack-cmd" |
-| | - command_name: "nova image-list" |
-| | monitor2: |
-| | - monitor_type: "openstack-cmd" |
-| | - command_name: "neutron router-list" |
-| | monitor3: |
-| | - monitor_type: "openstack-cmd" |
-| | - command_name: "heat stack-list" |
-| | monitor4: |
-| | - monitor_type: "openstack-cmd" |
-| | - command_name: "cinder list" |
+| | There are four instances of the "openstack-cmd" monitor:: |
+| | |
+| | monitor1: |
+| | - monitor_type: "openstack-cmd" |
+| | - command_name: "nova image-list" |
+| | monitor2: |
+| | - monitor_type: "openstack-cmd" |
+| | - command_name: "neutron router-list" |
+| | monitor3: |
+| | - monitor_type: "openstack-cmd" |
+| | - command_name: "heat stack-list" |
+| | monitor4: |
+| | - monitor_type: "openstack-cmd" |
+| | - command_name: "cinder list" |
+--------------+--------------------------------------------------------------+
|metrics | In this test case, there is one metric: |
| | 1)service_outage_time: which indicates the maximum outage |
diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc052.rst b/docs/testing/user/userguide/opnfv_yardstick_tc052.rst
index 9514b6819..7f2be6e7d 100644
--- a/docs/testing/user/userguide/opnfv_yardstick_tc052.rst
+++ b/docs/testing/user/userguide/opnfv_yardstick_tc052.rst
@@ -65,15 +65,16 @@ Yardstick Test Case Description TC052
| | |
| | In this case, the "operation" adds a flavor and the "result |
| | checker" checks whether ths flavor is created. Their |
-| | parameters show as follows: |
-| | operation: |
-| | -operation_type: "nova-create-flavor" |
-| | -action_parameter: |
-| | flavorconfig: "test-001 test-001 100 1 1" |
-| | result checker: |
-| | -checker_type: "check-flavor" |
-| | -expectedValue: "test-001" |
-| | -condition: "in" |
+| | parameters are shown as follows:: |
+| | |
+| | operation: |
+| | -operation_type: "nova-create-flavor" |
+| | -action_parameter: |
+| | flavorconfig: "test-001 test-001 100 1 1" |
+| | result checker: |
+| | -checker_type: "check-flavor" |
+| | -expectedValue: "test-001" |
+| | -condition: "in" |
+--------------+--------------------------------------------------------------+
|metrics | In this test case, there is one metric: |
| | 1)service_outage_time: which indicates the maximum outage |
diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc055.rst b/docs/testing/user/userguide/opnfv_yardstick_tc055.rst
index c861ca90c..25703d3fb 100644
--- a/docs/testing/user/userguide/opnfv_yardstick_tc055.rst
+++ b/docs/testing/user/userguide/opnfv_yardstick_tc055.rst
@@ -7,7 +7,7 @@
Yardstick Test Case Description TC055
*************************************
-.. _/proc/cpuinfo: http://www.linfo.org/proc_cpuinfo.html
+.. _`/proc/cpuinfo`: http://www.linfo.org/proc_cpuinfo.html
+-----------------------------------------------------------------------------+
|Compute Capacity |
@@ -41,7 +41,7 @@ Yardstick Test Case Description TC055
| | capacity output. |
| | |
+--------------+--------------------------------------------------------------+
-|references | /proc/cpuinfo_ |
+|references | `/proc/cpuinfo`_ |
| | |
| | ETSI-NFV-TST001 |
| | |
diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc057.rst b/docs/testing/user/userguide/opnfv_yardstick_tc057.rst
index 1bb43c9e7..245a58e08 100644
--- a/docs/testing/user/userguide/opnfv_yardstick_tc057.rst
+++ b/docs/testing/user/userguide/opnfv_yardstick_tc057.rst
@@ -49,12 +49,15 @@ Yardstick Test Case Description TC057
| | -host: node1 |
+--------------+--------------------------------------------------------------+
|monitors | In this test case, a kind of monitor is needed: |
+| | |
| | 1. the "openstack-cmd" monitor constantly request a specific |
| | Openstack command, which needs two parameters: |
-| | 1) monitor_type: which is used for finding the monitor class |
-| | and related scripts. It should be always set to |
-| | "openstack-cmd" for this monitor. |
-| | 2) command_name: which is the command name used for request |
+| | |
+| | 1. monitor_type: which is used for finding the monitor |
+| | class and related scripts. It should be always set to |
+| | "openstack-cmd" for this monitor. |
+| | 2. command_name: which is the command name used for |
+| | request |
| | |
| | In this case, the command_name of monitor1 should be |
| | services that are managed by the cluster manager. |
diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc063.rst b/docs/testing/user/userguide/opnfv_yardstick_tc063.rst
index a77653aa5..7b8ee06c7 100644
--- a/docs/testing/user/userguide/opnfv_yardstick_tc063.rst
+++ b/docs/testing/user/userguide/opnfv_yardstick_tc063.rst
@@ -58,6 +58,7 @@ Yardstick Test Case Description TC063
| | * count: 15 - how many times to stat disk utilization |
| | type: int |
| | unit: na |
+| | |
| | There are default values for each above-mentioned option. |
| | Run in background with other test cases. |
| | |
diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc069.rst b/docs/testing/user/userguide/opnfv_yardstick_tc069.rst
index af0e64fbf..e1bfd5399 100644
--- a/docs/testing/user/userguide/opnfv_yardstick_tc069.rst
+++ b/docs/testing/user/userguide/opnfv_yardstick_tc069.rst
@@ -9,9 +9,6 @@ Yardstick Test Case Description TC069
.. _RAMspeed: http://alasir.com/software/ramspeed/
-.. table::
- :class: longtable
-
+-----------------------------------------------------------------------------+
|Memory Bandwidth |
| |
@@ -41,7 +38,8 @@ Yardstick Test Case Description TC069
| | * SLA (optional): 7000 (MBps) min_bandwidth: The minimum |
| | amount of memory bandwidth that is accepted. |
| | * type_id: 1 - runs a specified benchmark |
-| | (by an ID number): |
+| | (by an ID number):: |
+| | |
| | 1 -- INTmark [writing] 4 -- FLOATmark [writing] |
| | 2 -- INTmark [reading] 5 -- FLOATmark [reading] |
| | 3 -- INTmem 6 -- FLOATmem |
diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc073.rst b/docs/testing/user/userguide/opnfv_yardstick_tc073.rst
index ad4526405..873c5c99e 100644
--- a/docs/testing/user/userguide/opnfv_yardstick_tc073.rst
+++ b/docs/testing/user/userguide/opnfv_yardstick_tc073.rst
@@ -7,7 +7,7 @@
Yardstick Test Case Description TC073
*************************************
-.. _netperf: http://www.netperf.org/netperf/training/Netperf.html
+.. _netperf: https://hewlettpackard.github.io/netperf/
+-----------------------------------------------------------------------------+
|Throughput per NFVI node test |
diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc074.rst b/docs/testing/user/userguide/opnfv_yardstick_tc074.rst
index 92cd51439..8d025eecf 100644
--- a/docs/testing/user/userguide/opnfv_yardstick_tc074.rst
+++ b/docs/testing/user/userguide/opnfv_yardstick_tc074.rst
@@ -19,16 +19,27 @@ Yardstick Test Case Description TC074
|metric | Storage performance |
| | |
+--------------+--------------------------------------------------------------+
-|test purpose | Storperf integration with yardstick. The purpose of StorPerf |
-| | is to provide a tool to measure block and object storage |
-| | performance in an NFVI. When complemented with a |
-| | characterization of typical VF storage performance |
-| | requirements, it can provide pass/fail thresholds for test, |
-| | staging, and production NFVI environments. |
-| | |
-| | The benchmarks developed for block and object storage will |
-| | be sufficiently varied to provide a good preview of expected |
-| | storage performance behavior for any type of VNF workload. |
+|test purpose | To evaluate and report on the Cinder volume performance. |
+| | |
+| | This testcase integrates with OPNFV StorPerf to measure |
+| | block performance of the underlying Cinder drivers. Many |
+| | options are supported, and even the root disk (Glance |
+| | ephemeral storage) can be profiled. |
+| | |
+| | The fundamental concept of the test case is to first fill |
+| | the volumes with random data to ensure reported metrics |
+| | are indicative of continued usage and not skewed by |
+| | transitional performance while the underlying storage |
+| | driver allocates blocks. |
+| | The metrics for filling the volumes with random data |
+| | are not reported in the final results. The test also |
+| | ensures the volumes are performing at a consistent level |
+| | of performance by measuring metrics every minute, and |
+| | comparing the trend of the metrics over the run. By |
+| | evaluating the min and max values, as well as the slope of |
+| | the trend, it can make the determination that the metrics |
+| | are stable, and not fluctuating beyond industry standard |
+| | norms. |
| | |
+--------------+--------------------------------------------------------------+
|configuration | file: opnfv_yardstick_tc074.yaml |
@@ -38,7 +49,8 @@ Yardstick Test Case Description TC074
| | * public_network: "ext-net" - name of public network |
| | * volume_size: 2 - cinder volume size |
| | * block_sizes: "4096" - data block size |
-| | * queue_depths: "4" |
+| | * queue_depths: "4" - the number of simultaneous I/Os |
+| | to perform at all times |
| | * StorPerf_ip: "192.168.200.2" |
| | * query_interval: 10 - state query interval |
| | * timeout: 600 - maximum allowed job time |
@@ -50,7 +62,11 @@ Yardstick Test Case Description TC074
| | performance in an NFVI. |
| | |
| | StorPerf is delivered as a Docker container from |
-| | https://hub.docker.com/r/opnfv/storperf/tags/. |
+| | https://hub.docker.com/r/opnfv/storperf-master/tags/. |
+| | |
+| | The underlying tool used is FIO, and StorPerf supports |
+| | any FIO option in order to tailor the test to the exact |
+| | workload needed. |
| | |
+--------------+--------------------------------------------------------------+
|references | Storperf_ |
@@ -75,33 +91,56 @@ Yardstick Test Case Description TC074
| | * workload=[workload module] |
| | If not specified, the default is to run all workloads. The |
| | workload types are: |
+| | |
| | - rs: 100% Read, sequential data |
| | - ws: 100% Write, sequential data |
| | - rr: 100% Read, random access |
| | - wr: 100% Write, random access |
| | - rw: 70% Read / 30% write, random access |
-| | * nossd: Do not perform SSD style preconditioning. |
-| | * nowarm: Do not perform a warmup prior to |
-| | measurements. |
+| | |
+| | * workloads={json maps} |
+| | This parameter supersedes the workload and calls the V2.0 |
+| | API in StorPerf. It allows for greater control of the |
+| | parameters to be passed to FIO. For example, running a |
+| | random read/write with a mix of 90% read and 10% write |
+| | would be expressed as follows: |
+| | {"9010randrw": {"rw":"randrw","rwmixread": "90"}} |
+| | Note: This must be passed in as a string, so remember to |
+| | escape or otherwise properly deal with the quotes (see |
+| | the sketch after this list). |
+| | |
| | * report= [job_id] |
| | Query the status of the supplied job_id and report on |
| | metrics. If a workload is supplied, will report on only |
| | that subset. |
+| | * availability_zone: Specify the availability zone which |
+| | the stack will use to create instances. |
+| | * volume_type: |
+| | Cinder volumes can have different types, for example |
+| | encrypted vs. not encrypted. This makes it possible to |
+| | profile the difference between the two. |
+| | * subnet_CIDR: Specify the subnet CIDR of the private |
+| | network |
+| | * stack_name: Specify the name of the stack that will be |
+| | created; the default is "StorperfAgentGroup" |
+| | * volume_count: Specify the number of volumes per |
+| | virtual machine |
| | |
| | There are default values for each above-mentioned option. |
| | |
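+| | A sketch of how the workloads map might be quoted in the |
+| | test case options (the placement under "options" is an |
+| | assumption):: |
+| | |
+| |   options: |
+| |     # JSON map passed as one quoted string |
+| |     workloads: '{"9010randrw": |
+| |       {"rw": "randrw", "rwmixread": "90"}}' |
+| | |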
+--------------+--------------------------------------------------------------+
|pre-test | If you do not have an Ubuntu 14.04 image in Glance, you will |
-|conditions | need to add one. A key pair for launching agents is also |
-| | required. |
+|conditions | need to add one. |
| | |
| | Storperf is required to be installed in the environment. |
| | There are two possible methods for Storperf installation: |
-| | Run container on Jump Host |
-| | Run container in a VM |
+| | |
+| | - Run container on Jump Host |
+| | - Run container in a VM |
| | |
| | Running StorPerf on Jump Host |
| | Requirements: |
+| | |
| | - Docker must be installed |
| | - Jump Host must have access to the OpenStack Controller |
| | API |
@@ -112,6 +151,7 @@ Yardstick Test Case Description TC074
| | |
| | Running StorPerf in a VM |
| | Requirements: |
+| | |
| | - VM has docker installed |
| | - VM has OpenStack Controller credentials and can |
| | communicate with the Controller API |
@@ -126,10 +166,21 @@ Yardstick Test Case Description TC074
|test sequence | description and expected result |
| | |
+--------------+--------------------------------------------------------------+
-|step 1 | The Storperf is installed and Ubuntu 14.04 image is stored |
-| | in glance. TC is invoked and logs are produced and stored. |
+|step 1 | Yardstick calls StorPerf to create the heat stack with the |
+| | number of VMs and size of Cinder volumes specified. The |
+| | VMs will be on their own private subnet, and take floating |
+| | IP addresses from the specified public network. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 2 | Yardstick calls StorPerf to fill all the volumes with |
+| | random data. |
| | |
-| | Result: Logs are stored. |
++--------------+--------------------------------------------------------------+
+|step 3 | Yardstick calls StorPerf to perform the series of tests |
+| | specified by the workload, queue depths and block sizes. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 4 | Yardstick calls StorPerf to delete the stack it created. |
| | |
+--------------+--------------------------------------------------------------+
|test verdict | None. Storage performance results are fetched and stored. |
diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc081.rst b/docs/testing/user/userguide/opnfv_yardstick_tc081.rst
index 793c3fdd5..df2192313 100644
--- a/docs/testing/user/userguide/opnfv_yardstick_tc081.rst
+++ b/docs/testing/user/userguide/opnfv_yardstick_tc081.rst
@@ -14,8 +14,8 @@ Yardstick Test Case Description TC081
|Network Latency |
| |
+--------------+--------------------------------------------------------------+
-|test case id | OPNFV_YARDSTICK_TC081_NETWORK_LATENCY_BETWEEN_CONTAINER_AND_ |
-| | VM |
+|test case id | OPNFV_YARDSTICK_TC081_NETWORK_LATENCY_BETWEEN_CONTAINER_AND |
+| | _VM |
| | |
+--------------+--------------------------------------------------------------+
|metric | RTT (Round Trip Time) |
diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc084.rst b/docs/testing/user/userguide/opnfv_yardstick_tc084.rst
index 2e7b28e25..b3d44c4bf 100644
--- a/docs/testing/user/userguide/opnfv_yardstick_tc084.rst
+++ b/docs/testing/user/userguide/opnfv_yardstick_tc084.rst
@@ -92,18 +92,19 @@ Yardstick Test Case Description TC084
+--------------+--------------------------------------------------------------+
|pre-test | To run and install SPEC CPU 2006, the following are |
|conditions | required: |
-| | * For SPECint 2006: Both C99 and C++98 compilers are |
-| | installed in VM images; |
-| | * For SPECfp 2006: All three of C99, C++98 and Fortran-95 |
-| | compilers installed in VM images; |
-| | * At least 4GB of disk space availabile on VM. |
-| | |
-| | gcc 4.8.* and g++ 4.8.* version have been tested in Ubuntu |
-| | 14.04, Ubuntu 16.04 and Redhat Enterprise Linux 7.4 image. |
-| | Higher gcc and g++ version may cause compiling error. |
-| | |
-| | For more SPEC CPU 2006 dependencies please visit |
-| | (https://www.spec.org/cpu2006/Docs/techsupport.html) |
+| | |
+| | * For SPECint 2006: Both C99 and C++98 compilers are |
+| | installed in VM images; |
+| | * For SPECfp 2006: All three of C99, C++98 and Fortran-95 |
+| | compilers installed in VM images; |
+| | * At least 4GB of disk space availabile on VM. |
+| | |
+| | gcc 4.8.* and g++ 4.8.* version have been tested in Ubuntu |
+| | 14.04, Ubuntu 16.04 and Redhat Enterprise Linux 7.4 image. |
+| | Higher gcc and g++ version may cause compiling error. |
+| | |
+| | For more SPEC CPU 2006 dependencies please visit |
+| | (https://www.spec.org/cpu2006/Docs/techsupport.html) |
| | |
+--------------+--------------------------------------------------------------+
|test sequence | description and expected result |
diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc087.rst b/docs/testing/user/userguide/opnfv_yardstick_tc087.rst
index 99bfeebfc..c11252606 100644
--- a/docs/testing/user/userguide/opnfv_yardstick_tc087.rst
+++ b/docs/testing/user/userguide/opnfv_yardstick_tc087.rst
@@ -41,6 +41,7 @@ Yardstick Test Case Description TC087
+--------------+--------------------------------------------------------------+
|attackers | In this test case, an attacker called “kill-process” is |
| | needed. This attacker includes three parameters: |
+| | |
| | 1. fault_type: which is used for finding the attacker's |
| | scripts. It should be set to 'kill-process' in this test |
| | |
@@ -58,6 +59,7 @@ Yardstick Test Case Description TC087
|monitors | This test case utilizes two monitors of type "ip-status" |
| | and one monitor of type "process" to track the following |
| | conditions: |
+| | |
| | 1. "ping_same_network_l2": monitor ICMP traffic between |
| | VMs in the same Neutron network |
| | |
@@ -74,11 +76,13 @@ Yardstick Test Case Description TC087
| | |
+--------------+--------------------------------------------------------------+
|operations | In this test case, the following operations are needed: |
+| | |
| | 1. "nova-create-instance-in_network": create a VM instance |
| | in one of the existing Neutron network. |
| | |
+--------------+--------------------------------------------------------------+
|metrics | In this test case, there are two metrics: |
+| | |
| | 1. process_recover_time: which indicates the maximun |
| | time (seconds) from the process being killed to |
| | recovered |
@@ -95,7 +99,9 @@ Yardstick Test Case Description TC087
| | |
+--------------+--------------------------------------------------------------+
|configuration | This test case needs two configuration files: |
+| | |
| | 1. test case file: opnfv_yardstick_tc087.yaml |
+| | |
| | - Attackers: see above “attackers” discription |
| | - waiting_time: which is the time (seconds) from the |
| | process being killed to stoping monitors the monitors |
@@ -126,7 +132,7 @@ Yardstick Test Case Description TC087
| | Neutron network. |
| | |
| | 2. Check connectivity from one VM to an external host on |
-| | the Internet to verify SNAT functionality.
+| | the Internet to verify SNAT functionality. |
| | |
| | Result: The monitor info will be collected. |
| | |
@@ -171,11 +177,14 @@ Yardstick Test Case Description TC087
|test verdict | This test fails if the SLAs are not met or if there is a |
| | test case execution problem. The SLAs are define as follows |
| | for this test: |
+| | |
| | * SDN Controller recovery |
+| | |
| | * process_recover_time <= 30 sec |
| | |
| | * no impact on data plane connectivity during SDN |
| | controller failure and recovery. |
+| | |
| | * packet_drop == 0 |
| | |
+--------------+--------------------------------------------------------------+
diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc088.rst b/docs/testing/user/userguide/opnfv_yardstick_tc088.rst
new file mode 100644
index 000000000..2423a6b31
--- /dev/null
+++ b/docs/testing/user/userguide/opnfv_yardstick_tc088.rst
@@ -0,0 +1,129 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, Yin Kanglin and others.
+.. 14_ykl@tongji.edu.cn
+
+*************************************
+Yardstick Test Case Description TC088
+*************************************
+
++-----------------------------------------------------------------------------+
+|Control Node Openstack Service High Availability - Nova Scheduler |
+| |
++--------------+--------------------------------------------------------------+
+|test case id | OPNFV_YARDSTICK_TC088: Control node Openstack service down - |
+| | nova scheduler |
++--------------+--------------------------------------------------------------+
+|test purpose | This test case will verify the high availability of the |
+| | compute scheduler service provided by OpenStack (nova- |
+| | scheduler) on control node. |
+| | |
++--------------+--------------------------------------------------------------+
+|test method | This test case kills the processes of nova-scheduler service |
+| | on a selected control node, then checks whether the request |
+| | of the related OpenStack command is OK and the killed |
+| | processes are recovered. |
+| | |
++--------------+--------------------------------------------------------------+
+|attackers | In this test case, an attacker called "kill-process" is |
+| | needed. This attacker includes three parameters: |
+| | 1) fault_type: which is used for finding the attacker's |
+| | scripts. It should be always set to "kill-process" in this |
+| | test case. |
+| | 2) process_name: which is the process name of the specified |
+| | OpenStack service. If there are multiple processes using |
+| | the same name on the host, all of them are killed by this |
+| | attacker. In this case, this parameter should always be |
+| | set to "nova-scheduler". |
+| | 3) host: which is the name of a control node being attacked. |
+| | |
+| | e.g. |
+| | -fault_type: "kill-process" |
+| | -process_name: "nova-scheduler" |
+| | -host: node1 |
+| | |
++--------------+--------------------------------------------------------------+
+|monitors | In this test case, one kind of monitor is needed: |
+| | 1. the "process" monitor check whether a process is running |
+| | on a specific node, which needs three parameters: |
+| | 1) monitor_type: which used for finding the monitor class and|
+| | related scripts. It should be always set to "process" |
+| | for this monitor. |
+| | 2) process_name: which is the process name for monitor |
+| | 3) host: which is the name of the node running the process |
+| | |
+| | e.g. |
+| | monitor: |
+| | -monitor_type: "process" |
+| | -process_name: "nova-scheduler" |
+| | -host: node1 |
+| | |
++--------------+--------------------------------------------------------------+
+|operations | In this test case, the following operations are needed: |
+| | 1. "nova-create-instance": create a VM instance to check |
+| | whether the nova-scheduler works normally. |
+| | |
++--------------+--------------------------------------------------------------+
+|metrics | In this test case, there is one metric: |
+| | 1) process_recover_time: which indicates the maximum time |
+| | (seconds) from the process being killed to recovered |
+| | |
++--------------+--------------------------------------------------------------+
+|test tool | Developed by the project. Please see folder: |
+| | "yardstick/benchmark/scenarios/availability/ha_tools" |
+| | |
++--------------+--------------------------------------------------------------+
+|references | ETSI NFV REL001 |
+| | |
++--------------+--------------------------------------------------------------+
+|configuration | This test case needs two configuration files: |
+| | 1) test case file: opnfv_yardstick_tc088.yaml |
+| | -Attackers: see above "attackers" description |
+| | -waiting_time: which is the time (seconds) from the process |
+| | being killed to stopping the monitors |
+| | -Monitors: see above "monitors" description |
+| | -SLA: see above "metrics" description |
+| | |
+| | 2)POD file: pod.yaml |
+| | The POD configuration should be recorded in pod.yaml first. |
+| | The "host" item in this test case will use the node name in |
+| | the pod.yaml. |
+| | |
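+| | A combined sketch of these settings, based on the fragments |
+| | above (the SLA key name and value are assumptions):: |
+| | |
+| |   attackers: |
+| |   - fault_type: "kill-process" |
+| |     process_name: "nova-scheduler" |
+| |     host: node1 |
+| |   monitors: |
+| |   - monitor_type: "process" |
+| |     process_name: "nova-scheduler" |
+| |     host: node1 |
+| |     sla: |
+| |       max_recover_time: 30   # seconds (assumed) |
+| | |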
++--------------+--------------------------------------------------------------+
+|test sequence | description and expected result |
+| | |
++--------------+--------------------------------------------------------------+
+|step 1 | do attacker: connect the host through SSH, and then execute |
+| | the kill process script with param value specified by |
+| | "process_name" |
+| | |
+| | Result: Process will be killed. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 2 | start monitors: |
+| | each monitor will run in an independent process |
+| | |
+| | Result: The monitor info will be collected. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 3 | create a new instance to check whether the nova scheduler |
+| | works normally. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 4 | stop the monitor after a period of time specified by |
+| | "waiting_time" |
+| | |
+| | Result: The monitor info will be aggregated. |
+| | |
++--------------+--------------------------------------------------------------+
+|post-action | It is the action when the test case exits. It will check the |
+| | status of the specified process on the host, and restart the |
+| | process if it is not running, for the next test cases. |
+| | |
++--------------+--------------------------------------------------------------+
+|test verdict | Fails only if SLA is not passed, or if there is a test case |
+| | execution problem. |
+| | |
++--------------+--------------------------------------------------------------+
diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc089.rst b/docs/testing/user/userguide/opnfv_yardstick_tc089.rst
new file mode 100644
index 000000000..0a8b2570b
--- /dev/null
+++ b/docs/testing/user/userguide/opnfv_yardstick_tc089.rst
@@ -0,0 +1,129 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, Yin Kanglin and others.
+.. 14_ykl@tongji.edu.cn
+
+*************************************
+Yardstick Test Case Description TC089
+*************************************
+
++-----------------------------------------------------------------------------+
+|Control Node Openstack Service High Availability - Nova Conductor |
+| |
++--------------+--------------------------------------------------------------+
+|test case id | OPNFV_YARDSTICK_TC089: Control node Openstack service down - |
+| | nova conductor |
++--------------+--------------------------------------------------------------+
+|test purpose | This test case will verify the high availability of the |
+| | compute database proxy service provided by OpenStack (nova- |
+| | conductor) on control node. |
+| | |
++--------------+--------------------------------------------------------------+
+|test method | This test case kills the processes of nova-conductor service |
+| | on a selected control node, then checks whether the request |
+| | of the related OpenStack command is OK and the killed |
+| | processes are recovered. |
+| | |
++--------------+--------------------------------------------------------------+
+|attackers | In this test case, an attacker called "kill-process" is |
+| | needed. This attacker includes three parameters: |
+| | 1) fault_type: which is used for finding the attacker's |
+| | scripts. It should be always set to "kill-process" in this |
+| | test case. |
+| | 2) process_name: which is the process name of the specified |
+| | OpenStack service. If there are multiple processes using |
+| | the same name on the host, all of them are killed by this |
+| | attacker. In this case, this parameter should always be |
+| | set to "nova-conductor". |
+| | 3) host: which is the name of a control node being attacked. |
+| | |
+| | e.g. |
+| | -fault_type: "kill-process" |
+| | -process_name: "nova-conductor" |
+| | -host: node1 |
+| | |
++--------------+--------------------------------------------------------------+
+|monitors | In this test case, one kind of monitor is needed: |
+| | 1. the "process" monitor check whether a process is running |
+| | on a specific node, which needs three parameters: |
+| | 1) monitor_type: which used for finding the monitor class and|
+| | related scripts. It should be always set to "process" |
+| | for this monitor. |
+| | 2) process_name: which is the process name for monitor |
+| | 3) host: which is the name of the node running the process |
+| | |
+| | e.g. |
+| | monitor: |
+| | -monitor_type: "process" |
+| | -process_name: "nova-conductor" |
+| | -host: node1 |
+| | |
++--------------+--------------------------------------------------------------+
+|operations | In this test case, the following operations are needed: |
+| | 1. "nova-create-instance": create a VM instance to check |
+| | whether the nova-conductor works normally. |
+| | |
++--------------+--------------------------------------------------------------+
+|metrics | In this test case, there is one metric: |
+| | 1) process_recover_time: which indicates the maximum time |
+| | (seconds) from the process being killed to recovered |
+| | |
++--------------+--------------------------------------------------------------+
+|test tool | Developed by the project. Please see folder: |
+| | "yardstick/benchmark/scenarios/availability/ha_tools" |
+| | |
++--------------+--------------------------------------------------------------+
+|references | ETSI NFV REL001 |
+| | |
++--------------+--------------------------------------------------------------+
+|configuration | This test case needs two configuration files: |
+| | 1) test case file: opnfv_yardstick_tc089.yaml |
+| | -Attackers: see above "attackers" description |
+| | -waiting_time: which is the time (seconds) from the process |
+| | being killed to stopping the monitors |
+| | -Monitors: see above "monitors" description |
+| | -SLA: see above "metrics" description |
+| | |
+| | 2)POD file: pod.yaml |
+| | The POD configuration should be recorded in pod.yaml first. |
+| | The "host" item in this test case will use the node name in |
+| | the pod.yaml. |
+| | |
++--------------+--------------------------------------------------------------+
+|test sequence | description and expected result |
+| | |
++--------------+--------------------------------------------------------------+
+|step 1 | do attacker: connect the host through SSH, and then execute |
+| | the kill process script with param value specified by |
+| | "process_name" |
+| | |
+| | Result: Process will be killed. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 2 | start monitors: |
+| | each monitor will run in an independent process |
+| | |
+| | Result: The monitor info will be collected. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 3 | create a new instance to check whether the nova conductor |
+| | works normally. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 4 | stop the monitor after a period of time specified by |
+| | "waiting_time" |
+| | |
+| | Result: The monitor info will be aggregated. |
+| | |
++--------------+--------------------------------------------------------------+
+|post-action | It is the action when the test case exits. It will check the |
+| | status of the specified process on the host, and restart the |
+| | process if it is not running, for the next test cases. |
+| | |
++--------------+--------------------------------------------------------------+
+|test verdict | Fails only if SLA is not passed, or if there is a test case |
+| | execution problem. |
+| | |
++--------------+--------------------------------------------------------------+
diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc092.rst b/docs/testing/user/userguide/opnfv_yardstick_tc092.rst
new file mode 100644
index 000000000..9c833fa23
--- /dev/null
+++ b/docs/testing/user/userguide/opnfv_yardstick_tc092.rst
@@ -0,0 +1,201 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, Ericsson and others.
+
+*************************************
+Yardstick Test Case Description TC092
+*************************************
+
++-----------------------------------------------------------------------------+
+|SDN Controller resilience in HA configuration |
+| |
++--------------+--------------------------------------------------------------+
+|test case id | OPNFV_YARDSTICK_TC092: SDN controller resilience and high |
+| | availability HA configuration |
+| | |
++--------------+--------------------------------------------------------------+
+|test purpose | This test validates SDN controller node high availability by |
+| | verifying there is no impact on the data plane connectivity |
+| | when one SDN controller fails in a HA configuration, |
+| | i.e. all existing configured network services (DHCP, ARP, |
+| | L2, L3VPN, Security Groups) should continue to operate |
+| | between the existing VMs while one SDN controller instance |
+| | is offline and rebooting. |
+| | |
+| | The test also validates that network service operations such |
+| | as creating a new VM in an existing or new L2 network |
+| | remain operational while one instance of the |
+| | SDN controller is offline and recovers from the failure. |
+| | |
++--------------+--------------------------------------------------------------+
+|test method | This test case: |
+| | 1. fails one instance of a SDN controller cluster running |
+| | in a HA configuration on the OpenStack controller node |
+| | |
+| | 2. checks if already configured L2 connectivity between |
+| | existing VMs is not impacted |
+| | |
+| | 3. verifies that the system never loses the ability to |
+| | execute virtual network operations, even when the |
+| | failed SDN Controller is still recovering |
+| | |
++--------------+--------------------------------------------------------------+
+|attackers | In this test case, an attacker called “kill-process” is |
+| | needed. This attacker includes three parameters: |
+| | |
+| | 1. ``fault_type``: which is used for finding the attacker's |
+| | scripts. It should be set to 'kill-process' in this test |
+| | |
+| | 2. ``process_name``: should be set to the SDN controller |
+| | process |
+| | |
+| | 3. ``host``: which is the name of a control node where |
+| | opendaylight process is running |
+| | |
+| | example: |
+| | - ``fault_type``: “kill-process” |
+| | - ``process_name``: “opendaylight-karaf” (TBD) |
+| | - ``host``: node1 |
+| | |
++--------------+--------------------------------------------------------------+
+|monitors | In this test case, the following monitors are needed: |
+| | 1. ``ping_same_network_l2``: monitor pinging traffic |
+| | between the VMs in the same neutron network |
+| | |
+| | 2. ``ping_external_snat``: monitor ping traffic from VMs to |
+| | external destinations (e.g. google.com) |
+| | |
+| | 3. ``SDN controller process monitor``: a monitor checking |
+| | the state of a specified SDN controller process. It |
+| | measures the recovery time of the given process. |
+| | |
++--------------+--------------------------------------------------------------+
+|operations | In this test case, the following operations are needed: |
+| | 1. "nova-create-instance-in_network": create a VM instance |
+| | in one of the existing neutron networks. |
+| | |
++--------------+--------------------------------------------------------------+
+|metrics | In this test case, there are two metrics: |
+| | 1. process_recover_time: which indicates the maximun |
+| | time (seconds) from the process being killed to |
+| | recovered |
+| | |
+| | 2. packet_drop: measure the packets that have been dropped |
+| | by the monitors using pktgen. |
+| | |
++--------------+--------------------------------------------------------------+
+|test tool | Developed by the project. Please see folder: |
+| | "yardstick/benchmark/scenarios/availability/ha_tools" |
+| | |
++--------------+--------------------------------------------------------------+
+|references | TBD |
+| | |
++--------------+--------------------------------------------------------------+
+|configuration | This test case needs two configuration files: |
+| | 1. test case file: opnfv_yardstick_tc092.yaml |
+| | |
+| | - Attackers: see above “attackers” description |
+| | - Monitors: see above “monitors” description |
+| | |
+| | - waiting_time: which is the time (seconds) from the |
+| | process being killed to stopping the monitors |
+| | |
+| | - SLA: see above “metrics” description |
+| | |
+| | 2. POD file: pod.yaml. The POD configuration should be |
+| | recorded in pod.yaml first. The “host” item in this |
+| | test case will use the node name in the pod.yaml. |
+| | |
++--------------+--------------------------------------------------------------+
+|test sequence | Description and expected result |
+| | |
++--------------+--------------------------------------------------------------+
+|pre-action | 1. The OpenStack cluster is set up with an SDN controller |
+| | running in a three node cluster configuration. |
+| | |
+| | 2. One or more neutron networks are created with two or |
+| | more VMs attached to each of the neutron networks. |
+| | |
+| | 3. The neutron networks are attached to a neutron router |
+| | which is attached to an external network towards the |
+| | DCGW. |
+| | |
+| | 4. The master node of the SDN controller cluster is known. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 1 | Start IP connectivity monitors: |
+| | 1. Check the L2 connectivity between the VMs in the same |
+| | neutron network. |
+| | |
+| | 2. Check the external connectivity of the VMs. |
+| | |
+| | Each monitor runs in an independent process. |
+| | |
+| | Result: The monitor info will be collected. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 2 | Start attacker: |
+| | SSH to the VIM node and kill the SDN controller process |
+| | on the master node identified in the pre-action. |
+| | |
+| | Result: One SDN controller service will be shut down. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 3 | Restart the SDN controller. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 4 | Create a new VM in the existing Neutron network while the |
+| | SDN controller is offline or still recovering. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 5 | Stop IP connectivity monitors after a period of time |
+| | specified by “waiting_time” |
+| | |
+| | Result: The monitor info will be aggregated |
+| | |
++--------------+--------------------------------------------------------------+
+|step 6 | Verify the IP connectivity monitor result |
+| | |
+| | Result: IP connectivity monitor should not have any packet |
+| | drop failures reported |
+| | |
++--------------+--------------------------------------------------------------+
+|step 7 | Verify process_recover_time, which indicates the maximum |
+| | time (seconds) from the process being killed until it is |
+| | recovered, is within the SLA. This step blocks until either |
+| | the process has recovered or a timeout occurs. |
+| | |
+| | Result: process_recover_time is within SLA limits; if not, |
+| | the test case fails and stops. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 8 | Start IP connectivity monitors for the new VM: |
+| | |
+| | 1. Check the L2 connectivity from the existing VMs to the |
+| | new VM in the Neutron network. |
+| | |
+| | 2. Check connectivity from one VM to an external host on |
+| | the Internet to verify SNAT functionality. |
+| | |
+| | Result: The monitor info will be collected. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 9 | Stop IP connectivity monitors after a period of time |
+| | specified by “waiting_time” |
+| | |
+| | Result: The monitor info will be aggregated |
+| | |
++--------------+--------------------------------------------------------------+
+|step 10 | Verify the IP connectivity monitor result |
+| | |
+| | Result: IP connectivity monitor should not have any packet |
+| | drop failures reported |
+| | |
++--------------+--------------------------------------------------------------+
+|test verdict | Fails only if the SLA is not met, or if there is a test |
+| | case execution problem. |
+| | |
++--------------+--------------------------------------------------------------+
+
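+As a rough illustration, the attacker, monitors and SLA described above
+could be expressed in the test case YAML along the following lines. This
+is a minimal sketch only: the scenario type and field names such as
+``monitor_type`` follow the generic Yardstick HA task layout and are not
+copied from the actual opnfv_yardstick_tc092.yaml::
+
+    schema: "yardstick:task:0.1"
+    scenarios:
+      -
+        type: ServiceHA
+        options:
+          attackers:
+            - fault_type: "kill-process"
+              process_name: "opendaylight-karaf"  # SDN controller process (TBD)
+              host: node1                         # node name from pod.yaml
+          monitors:
+            - monitor_type: "ip-status"           # ping_same_network_l2
+              host: vm1                           # hypothetical VM name
+              monitor_time: 10
+            - monitor_type: "ip-status"           # ping_external_snat
+              host: vm1
+              monitor_time: 10
+            - monitor_type: "process"             # SDN controller process monitor
+              process_name: "opendaylight-karaf"
+              host: node1
+              monitor_time: 30
+        nodes:
+          node1: node1.LF                         # hypothetical pod name
+        runner:
+          type: Iteration
+          iterations: 1
+        sla:
+          process_recover_time: 30                # illustrative threshold (seconds)
+          action: monitor
+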
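+Likewise, a pod.yaml node entry that the ``host`` item above would refer
+to might look as follows (name, role, address and credentials are
+placeholders)::
+
+    nodes:
+    -
+      name: node1
+      role: Controller
+      ip: 10.1.0.50
+      user: root
+      key_filename: /root/.ssh/id_rsa
+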
diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc093.rst b/docs/testing/user/userguide/opnfv_yardstick_tc093.rst
new file mode 100644
index 000000000..4e22e8bf3
--- /dev/null
+++ b/docs/testing/user/userguide/opnfv_yardstick_tc093.rst
@@ -0,0 +1,189 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, Intracom Telecom and others.
+.. mardim@intracom-telecom.com
+
+*************************************
+Yardstick Test Case Description TC093
+*************************************
+
++-----------------------------------------------------------------------------+
+|SDN Vswitch resilience in non-HA or HA configuration |
+| |
++--------------+--------------------------------------------------------------+
+|test case id | OPNFV_YARDSTICK_TC093: SDN Vswitch resilience in |
+| | non-HA or HA configuration |
++--------------+--------------------------------------------------------------+
+|test purpose | This test validates that network data plane services are |
+| | resilient in the event of Virtual Switch failure |
+| | in compute nodes. Specifically, the test verifies that |
+| | existing data plane connectivity is not permanently |
+| | impacted, i.e. all configured network services such as |
+| | DHCP, ARP, L2, L3 and Security Groups continue to operate |
+| | between the existing VMs once the Virtual Switches have |
+| | finished rebooting. |
+| | |
+| | The test also validates that new network service operations |
+| | (creating a new VM in the existing L2/L3 network or in a new |
+| | network, etc.) are operational after the Virtual Switches |
+| | have recovered from a failure. |
+| | |
++--------------+--------------------------------------------------------------+
+|test method | This test case first checks that the already configured |
+| | DHCP/ARP/L2/L3/SNAT connectivity is working properly. It |
+| | then fails and restarts the Vswitch services running on |
+| | both OpenStack compute nodes, and checks that the already |
+| | configured DHCP/ARP/L2/L3/SNAT connectivity is not |
+| | permanently impacted (even if there are some packet loss |
+| | events) between VMs, and that the system is able to |
+| | execute new virtual network operations once the Vswitch |
+| | services have been restarted and fully recovered. |
+| | |
++--------------+--------------------------------------------------------------+
+|attackers | In this test case, two attackers called “kill-process” are |
+| | needed. These attackers include three parameters: |
+| | |
+| | 1. fault_type: which is used for finding the attacker's |
+| | scripts. It should be set to 'kill-process' in this test |
+| | |
+| | 2. process_name: should be set to the name of the Vswitch |
+| | process |
+| | |
+| | 3. host: which is the name of the compute node where the |
+| | Vswitch process is running |
+| | |
+| | e.g. - fault_type: "kill-process" |
+| | - process_name: "openvswitch" |
+| | - host: node1 |
+| | |
++--------------+--------------------------------------------------------------+
+|monitors | This test case utilizes two monitors of type "ip-status" |
+| | and one monitor of type "process" to track the following |
+| | conditions: |
+| | |
+| | 1. "ping_same_network_l2": monitor ICMP traffic between |
+| | VMs in the same Neutron network |
+| | |
+| | 2. "ping_external_snat": monitor ICMP traffic from VMs to |
+| | an external host on the Internet to verify SNAT |
+| | functionality. |
+| | |
+| | 3. "Vswitch process monitor": a monitor checking the |
+| | state of the specified Vswitch process. It measures |
+| | the recovery time of the given process. |
+| | |
+| | Monitors of type "ip-status" use the "ping" utility to |
+| | verify reachability of a given target IP. |
+| | |
++--------------+--------------------------------------------------------------+
+|operations | In this test case, the following operations are needed: |
+| | 1. "nova-create-instance-in_network": create a VM instance |
+| | in one of the existing Neutron network. |
+| | |
++--------------+--------------------------------------------------------------+
+|metrics | In this test case, there are two metrics: |
+| | 1. process_recover_time: the maximum time (seconds) |
+| | from the process being killed until it is |
+| | recovered |
+| | |
+| | 2. outage_time: measures the total time in which |
+| | monitors were failing in their tasks (e.g. total time of |
+| | Ping failure) |
+| | |
++--------------+--------------------------------------------------------------+
+|test tool | Developed by the project. Please see folder: |
+| | "yardstick/benchmark/scenarios/availability/ha_tools" |
+| | |
++--------------+--------------------------------------------------------------+
+|references | none |
+| | |
++--------------+--------------------------------------------------------------+
+|configuration | This test case needs two configuration files: |
+| | 1. test case file: opnfv_yardstick_tc093.yaml |
+| | |
+| | - Attackers: see above “attackers” description |
+| | - monitor_time: the time (seconds) from starting |
+| | the monitors until they are stopped |
+| | - Monitors: see above “monitors” description |
+| | - SLA: see above “metrics” description |
+| | |
+| | 2. POD file: pod.yaml. The POD configuration should be |
+| | recorded in pod.yaml first; the “host” item in this |
+| | test case uses the node name from pod.yaml (see the |
+| | sketch after this table). |
+| | |
++--------------+--------------------------------------------------------------+
+|test sequence | Description and expected result |
+| | |
++--------------+--------------------------------------------------------------+
+|pre-action | 1. The Vswitches are set up on both compute nodes. |
+| | |
+| | 2. One or more Neutron networks are created with two or |
+| | more VMs attached to each of the Neutron networks. |
+| | |
+| | 3. The Neutron networks are attached to a Neutron router |
+| | which is attached to an external network towards the |
+| | DCGW. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 1 | Start IP connectivity monitors: |
+| | 1. Check the L2 connectivity between the VMs in the same |
+| | Neutron network. |
+| | |
+| | 2. Check connectivity from one VM to an external host on |
+| | the Internet to verify SNAT functionality. |
+| | |
+| | Result: The monitor info will be collected. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 2 | Start attackers: |
+| | SSH to the VIM compute nodes and kill the Vswitch |
+| | processes. |
+| | |
+| | Result: The SDN Vswitch services will be shut down. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 3 | Verify the results of the IP connectivity monitors. |
+| | |
+| | Result: The outage_time metric reported by the monitors |
+| | is not greater than the max_outage_time. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 4 | Restart the SDN Vswitch services. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 5 | Create a new VM in the existing Neutron network |
+| | |
++--------------+--------------------------------------------------------------+
+|step 6 | Verify connectivity between VMs as follows: |
+| | 1. Check the L2 connectivity between the previously |
+| | existing VM and the newly created VM on the same |
+| | Neutron network by sending ICMP messages |
+| | |
++--------------+--------------------------------------------------------------+
+|step 7 | Stop IP connectivity monitors after a period of time |
+| | specified by “monitor_time” |
+| | |
+| | Result: The monitor info will be aggregated |
+| | |
++--------------+--------------------------------------------------------------+
+|step 8 | Verify the IP connectivity monitor results |
+| | |
+| | Result: IP connectivity monitor should not have any packet |
+| | drop failures reported |
+| | |
++--------------+--------------------------------------------------------------+
+|test verdict | This test fails if the SLAs are not met or if there is a |
+| | test case execution problem. The SLAs are defined as |
+| | follows for this test: |
+| | * SDN Vswitch recovery |
+| | |
+| | * process_recover_time <= 30 sec |
+| | |
+| | * no impact on data plane connectivity during SDN |
+| | Vswitch failure and recovery. |
+| | |
+| | * packet_drop == 0 |
+| | |
++--------------+--------------------------------------------------------------+
+
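+As with TC092 above, the two attackers, the monitors and the SLAs
+described in this test case could be sketched in YAML roughly as
+follows. Again, this is illustrative only: the scenario type and field
+names follow the generic Yardstick HA task layout, and the compute node
+names are hypothetical::
+
+    schema: "yardstick:task:0.1"
+    scenarios:
+      -
+        type: GeneralHA
+        options:
+          attackers:
+            - fault_type: "kill-process"
+              process_name: "openvswitch"  # Vswitch process on compute node 1
+              host: node4                  # hypothetical compute node name
+            - fault_type: "kill-process"
+              process_name: "openvswitch"  # Vswitch process on compute node 2
+              host: node5                  # hypothetical compute node name
+          monitors:
+            - monitor_type: "ip-status"    # ping_same_network_l2
+              host: vm1                    # hypothetical VM name
+              monitor_time: 180            # "monitor_time" from the configuration
+            - monitor_type: "ip-status"    # ping_external_snat
+              host: vm1
+              monitor_time: 180
+            - monitor_type: "process"      # Vswitch process monitor
+              process_name: "openvswitch"
+              host: node4
+              monitor_time: 180
+        sla:
+          process_recover_time: 30         # seconds, per the test verdict
+          packet_drop: 0                   # no data plane impact allowed
+          action: monitor
+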
diff --git a/docs/testing/user/userguide/references.rst b/docs/testing/user/userguide/references.rst
index 05729ba75..e6bc719fd 100644
--- a/docs/testing/user/userguide/references.rst
+++ b/docs/testing/user/userguide/references.rst
@@ -11,13 +11,12 @@ References
OPNFV
=====
-* Parser wiki: https://wiki.opnfv.org/parser
-* Pharos wiki: https://wiki.opnfv.org/pharos
-* VTC: https://wiki.opnfv.org/vtc
+* Parser wiki: https://wiki.opnfv.org/display/parser
+* Pharos wiki: https://wiki.opnfv.org/display/pharos
* Yardstick CI: https://build.opnfv.org/ci/view/yardstick/
* Yardstick and ETSI TST001 presentation: https://wiki.opnfv.org/display/yardstick/Yardstick?preview=%2F2925202%2F2925205%2Fopnfv_summit_-_bridging_opnfv_and_etsi.pdf
* Yardstick Project presentation: https://wiki.opnfv.org/display/yardstick/Yardstick?preview=%2F2925202%2F2925208%2Fopnfv_summit_-_yardstick_project.pdf
-* Yardstick wiki: https://wiki.opnfv.org/yardstick
+* Yardstick wiki: https://wiki.opnfv.org/display/yardstick
References used in Test Cases
=============================
@@ -26,22 +25,22 @@ References used in Test Cases
* cirros-image: https://download.cirros-cloud.net
* cyclictest: https://rt.wiki.kernel.org/index.php/Cyclictest
* DPDKpktgen: https://github.com/Pktgen/Pktgen-DPDK/
-* DPDK supported NICs: http://dpdk.org/doc/nics
+* DPDK supported NICs: http://core.dpdk.org/supported/
* fdisk: http://www.tldp.org/HOWTO/Partition/fdisk_partitioning.html
-* fio: http://www.bluestop.org/fio/HOWTO.txt
+* fio: https://bluestop.org/files/fio/HOWTO.txt
* free: http://manpages.ubuntu.com/manpages/trusty/en/man1/free.1.html
* iperf3: https://iperf.fr/
-* iostat: http://linux.die.net/man/1/iostat
+* iostat: https://linux.die.net/man/1/iostat
* Lmbench man-pages: http://manpages.ubuntu.com/manpages/trusty/lat_mem_rd.8.html
* Memory bandwidth man-pages: http://manpages.ubuntu.com/manpages/trusty/bw_mem.8.html
* mpstat man-pages: http://manpages.ubuntu.com/manpages/trusty/man1/mpstat.1.html
-* netperf: http://www.netperf.org/netperf/training/Netperf.html
+* netperf: https://hewlettpackard.github.io/netperf/
* pktgen: https://www.kernel.org/doc/Documentation/networking/pktgen.txt
* RAMspeed: http://alasir.com/software/ramspeed/
-* sar: http://linux.die.net/man/1/sar
+* sar: https://linux.die.net/man/1/sar
* SR-IOV: https://wiki.openstack.org/wiki/SR-IOV-Passthrough-For-Networking
* Storperf: https://wiki.opnfv.org/display/storperf/Storperf
-* unixbench: https://github.com/kdlucas/byte-unixbench/blob/master/UnixBench
+* unixbench: https://github.com/kdlucas/byte-unixbench/tree/master/UnixBench
Research
@@ -54,7 +53,7 @@ Research
Standards
=========
-* ETSI NFV: http://www.etsi.org/technologies-clusters/technologies/nfv
-* ETSI GS-NFV TST 001: http://www.etsi.org/deliver/etsi_gs/NFV-TST/001_099/001/01.01.01_60/gs_NFV-TST001v010101p.pdf
+* ETSI NFV: https://www.etsi.org/technologies-clusters/technologies/nfv
+* ETSI GS-NFV TST 001: https://www.etsi.org/deliver/etsi_gs/NFV-TST/001_099/001/01.01.01_60/gs_NFV-TST001v010101p.pdf
* RFC2544: https://www.ietf.org/rfc/rfc2544.txt