Diffstat (limited to 'docs')
-rw-r--r--  docs/release/release-notes/release-notes.rst                                         |  42
-rw-r--r--  docs/testing/user/userguide/04-installation.rst                                      | 115
-rw-r--r--  docs/testing/user/userguide/12-nsb-overview.rst                                      |  49
-rw-r--r--  docs/testing/user/userguide/13-nsb-installation.rst                                  | 375
-rw-r--r--  docs/testing/user/userguide/nsb/tc_bng_pppoe_rfc2544_ixia.rst                        | 177
-rw-r--r--  docs/testing/user/userguide/nsb/tc_bng_pppoe_rfc2544_ixia_8ports_1port_congested.rst | 179
6 files changed, 800 insertions(+), 137 deletions(-)
diff --git a/docs/release/release-notes/release-notes.rst b/docs/release/release-notes/release-notes.rst
index 914daa3a4..920e90fbc 100644
--- a/docs/release/release-notes/release-notes.rst
+++ b/docs/release/release-notes/release-notes.rst
@@ -36,6 +36,9 @@ Version History
| December 14, 2018 | 7.1.0 | Yardstick for Gambia release |
| | | |
+-------------------+-----------+---------------------------------+
+| January 25, 2019 | 7.2.0 | Yardstick for Gambia release |
+| | | |
++-------------------+-----------+---------------------------------+
Important Notes
@@ -114,19 +117,19 @@ Release Data
| **Project** | Yardstick |
| | |
+--------------------------------+-----------------------+
-| **Repo/tag** | yardstick/opnfv-7.1.0 |
+| **Repo/tag** | yardstick/opnfv-7.2.0 |
| | |
+--------------------------------+-----------------------+
-| **Yardstick Docker image tag** | opnfv-7.1.0 |
+| **Yardstick Docker image tag** | opnfv-7.2.0 |
| | |
+--------------------------------+-----------------------+
-| **Release designation** | Gambia 7.1 |
+| **Release designation** | Gambia 7.2 |
| | |
+--------------------------------+-----------------------+
-| **Release date** | December 14, 2018 |
+| **Release date** | January 25, 2019 |
| | |
+--------------------------------+-----------------------+
-| **Purpose of the delivery** | OPNFV Gambia 7.1.0 |
+| **Purpose of the delivery** | OPNFV Gambia 7.2.0 |
| | |
+--------------------------------+-----------------------+
@@ -272,7 +275,7 @@ List of Scenarios
New Test cases
--------------
-.. note:: Yardstick Gambia 7.1.0 adds no new test cases.
+.. note:: Yardstick Gambia 7.2.0 adds no new test cases.
* Generic NFVI test cases
@@ -329,7 +332,7 @@ Feature additions
Scenario Matrix
===============
-For Gambia 7.1.0, Yardstick was tested on the following scenarios:
+For Gambia 7.2.0, Yardstick was tested on the following scenarios:
+-------------------------+------+---------+----------+------+
| Scenario | Apex | Compass | Fuel-arm | Fuel |
@@ -373,33 +376,16 @@ Known Issues/Faults
Corrected Faults
----------------
-Gambia 7.1.0:
+Gambia 7.2.0:
+--------------------+--------------------------------------------------------------------------+
| **JIRA REFERENCE** | **DESCRIPTION** |
+====================+==========================================================================+
-| YARDSTICK-1241 | Update NSB PROX devguide. |
-+--------------------+--------------------------------------------------------------------------+
-| YARDSTICK-1458 | NSB NFVi PROX Should report realtime port activity not historical data. |
-+--------------------+--------------------------------------------------------------------------+
-| YARDSTICK-1471 | Add Testcase Prox Standalone SRIOV. |
-+--------------------+--------------------------------------------------------------------------+
-| YARDSTICK-1475 | Adding Testcase for Prox Stanalone OvS-DPDK. |
-+--------------------+--------------------------------------------------------------------------+
-| YARDSTICK-1500 | Adding Testcase for Prox L2FWD PktTouch Stanalone OvS-DPDK. |
-+--------------------+--------------------------------------------------------------------------+
-| YARDSTICK-1517 | Missing opnfv "os-ovn-nofeature-ha" scenario test suite. |
-+--------------------+--------------------------------------------------------------------------+
-| YARDSTICK-l526 | Run testcase 074 result overridden by job status. |
-+--------------------+--------------------------------------------------------------------------+
-| YARDSTICK-1547 | Adding scale up test case for l3fwd OvS-DPDK. |
-+--------------------+--------------------------------------------------------------------------+
-| YARDSTICK-1560 | Fix pip environment. |
-+--------------------+--------------------------------------------------------------------------+
-| YARDSTICK-1561 | L3FWD Gradana Dashboards Out-of-date and incorrect. |
+| YARDSTICK-1512 | [dovetail] split the sla check results into process recovery and service |
+| | recovery for HA test cases. |
+--------------------+--------------------------------------------------------------------------+
-Gambia 7.1.0 known restrictions/issues
+Gambia 7.2.0 known restrictions/issues
======================================
diff --git a/docs/testing/user/userguide/04-installation.rst b/docs/testing/user/userguide/04-installation.rst
index 2dff80ef9..213821798 100644
--- a/docs/testing/user/userguide/04-installation.rst
+++ b/docs/testing/user/userguide/04-installation.rst
@@ -463,112 +463,111 @@ These configuration files can be found in the ``samples`` directory.
Default location for the output is ``/tmp/yardstick.out``.
-Automatic installation of Yardstick using ansible
--------------------------------------------------
+Automatic installation of Yardstick
+-----------------------------------
-Automatic installation can be used as an alternative to the manual.
-Yardstick can be installed on the bare metal and to the container. Yardstick
+Automatic installation can be used as an alternative to the manual one by
+providing parameters for the ansible script ``install.yaml`` in the
+``nsb_setup.sh`` file. Yardstick can be installed on bare metal or in a
+container. The Yardstick
container can be either pulled or built.
Bare metal installation
^^^^^^^^^^^^^^^^^^^^^^^
-Use ansible script ``install.yaml`` to install Yardstick on Ubuntu server:
+Modify the ``install.yaml`` parameters in the ``nsb_setup.sh`` file to install
+Yardstick on an Ubuntu server:
.. code-block:: console
ansible-playbook -i install-inventory.ini install.yaml \
+ -e IMAGE_PROPERTY='none' \
-e YARDSTICK_DIR=<path to Yardstick folder>
.. note:: By default ``INSTALLATION_MODE`` is ``baremetal``.
-.. note:: By default Ubuntu 16.04 is chosen (xenial). It can be changed to
- Ubuntu 18.04 (bionic) by passing ``-e OS_RELEASE=bionic`` parameter.
+.. note:: No modification in ``install-inventory.ini`` is needed for Yardstick
+ installation.
.. note:: To install Yardstick in virtual environment pass parameter
``-e VIRTUAL_ENVIRONMENT=True``.
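+For example, a bare metal installation into a virtual environment could be
+started as follows (a sketch combining the parameters described above; the
+Yardstick path is a placeholder):
+
+.. code-block:: console
+
+   ansible-playbook -i install-inventory.ini install.yaml \
+   -e IMAGE_PROPERTY='none' \
+   -e VIRTUAL_ENVIRONMENT=True \
+   -e YARDSTICK_DIR=<path to Yardstick folder>
+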
-To build Yardstick NSB image pass ``IMG_PROPERTY=nsb`` as input parameter:
-
-.. code-block:: console
-
- ansible-playbook -i install-inventory.ini install.yaml \
- -e IMAGE_PROPERTY=nsb \
- -e YARDSTICK_DIR=<path to Yardstick folder>
-
-.. note:: In this ``INSTALLATION_MODE`` mode either Yardstick image or SampleVNF
- images will be built. Image type is defined by parameter ``IMAGE_PROPERTY``.
- By default Yardstick image will be built.
-
Container installation
^^^^^^^^^^^^^^^^^^^^^^
-Use ansible script ``install.yaml`` to pull or build Yardstick
-container. To pull Yardstick image and start container run:
+Modify the ``install.yaml`` parameters in the ``nsb_setup.sh`` file to pull or
+build the Yardstick container. To pull the Yardstick image and start the
+container run:
.. code-block:: console
ansible-playbook -i install-inventory.ini install.yaml \
- -e YARDSTICK_DIR=<path to Yardstick folder> \
+ -e IMAGE_PROPERTY='none' \
-e INSTALLATION_MODE=container_pull
-.. note:: In this ``INSTALLATION_MODE`` mode either Yardstick image or SampleVNF
- images will be built. Image type is defined by variable ``IMG_PROPERTY`` in
- file ``ansible/group_vars/all.yml``. By default Yardstick image will be
- built.
-
-.. note:: Open question: How to know if Docker image is built on Ubuntu 16.04 and 18.04?
- Do we need separate tag to be used?
+.. note:: The Yardstick docker image is available for both Ubuntu 16.04 and
+ Ubuntu 18.04. By default the Ubuntu 16.04 based docker image is used. To use
+ the Ubuntu 18.04 based docker image, pass the
+ ``-i opnfv/yardstick-ubuntu-18.04`` parameter to ``nsb_setup.sh``.
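+For example::
+
+   ./nsb_setup.sh -i opnfv/yardstick-ubuntu-18.04:latest
+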
-To build Yardstick image run:
+To build the Yardstick image, modify the Dockerfile as per the comments in it
+and run:
.. code-block:: console
- ansible-playbook -i install-inventory.ini install.yaml \
- -e YARDSTICK_DIR=<path to Yardstick folder> \
- -e INSTALLATION_MODE=container
+ cd yardstick
+ docker build -f docker/Dockerfile -t opnfv/yardstick:<tag> .
-.. note:: In this ``INSTALLATION_MODE`` mode neither Yardstick image nor SampleVNF
- image will be built.
+.. note:: By default a Yardstick docker image based on Ubuntu 16.04 is built.
+ Pass ``-f docker/Dockerfile_ubuntu18`` to build a Yardstick docker image
+ based on Ubuntu 18.04.
-.. note:: By default Ubuntu 16.04 is chosen (xenial). It can be changed to
- Ubuntu 18.04 (bionic) by passing ``-e OS_RELEASE=bionic`` parameter.
+.. note:: Add ``--build-arg http_proxy=http://<proxy_host>:<proxy_port>`` to
+ build docker image if server is behind the proxy.
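+For example, to build an Ubuntu 18.04 based image behind a proxy, the two
+options above can be combined (proxy host, port and image tag are
+placeholders):
+
+.. code-block:: console
+
+   docker build -f docker/Dockerfile_ubuntu18 \
+       --build-arg http_proxy=http://<proxy_host>:<proxy_port> \
+       -t opnfv/yardstick:<tag> .
+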
Parameters for ``install.yaml``
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-Description of the parameters used with ``install.yaml`` script
+Description of the parameters used with ``install.yaml``:
+-------------------------+-------------------------------------------------+
| Parameters | Detail |
+=========================+=================================================+
- | -i install-inventory.ini| Installs package dependency to remote servers |
- | | Mandatory parameter |
- | | By default no remote servers are provided |
- | | Needed packages will be installed on localhost |
+ | -i install-inventory.ini|| Installs package dependency to remote servers |
+ | || and localhost |
+ | || Mandatory parameter |
+ | || By default no remote servers are provided |
+-------------------------+-------------------------------------------------+
- | -e YARDSTICK_DIR | Path to Yardstick folder |
- | | Mandatory parameter |
+ | -e YARDSTICK_DIR || Path to Yardstick folder |
+ | || Mandatory parameter for Yardstick bare metal |
+ | || installation |
+-------------------------+-------------------------------------------------+
- | -e INSTALLATION_MODE | baremetal: Yardstick is installed to the bare |
- | | metal |
- | | Default parameter |
+ | -e INSTALLATION_MODE || baremetal: Yardstick is installed to the bare |
+ | | metal |
+ | || Default parameter |
| +-------------------------------------------------+
- | | container: Yardstick is installed in container |
- | | Container is built from Dockerfile |
+ | || container: Yardstick is installed in container |
+ | || Container is built from Dockerfile |
| +-------------------------------------------------+
- | | container_pull: Yardstick is installed in |
- | | container |
- | | Container is pulled from docker hub |
+ | || container_pull: Yardstick is installed in |
+ | || container |
+ | || Container is pulled from docker hub |
+-------------------------+-------------------------------------------------+
- | -e OS_RELEASE | xenial or bionic: Ubuntu version to be used |
- | | Default is Ubuntu 16.04 (xenial) |
+ | -e OS_RELEASE || xenial or bionic: Ubuntu version to be used for|
+ | || VM image (nsb or normal) |
+ | || Default is Ubuntu 16.04, xenial |
+ +-------------------------+-------------------------------------------------+
+ | -e IMAGE_PROPERTY || nsb: Build Yardstick NSB VM image |
+ | || Used to run Yardstick NSB tests on sample VNF |
+ | || Default parameter |
+ | +-------------------------------------------------+
+ | || normal: Build VM image to run ping test in |
+ | || OpenStack |
+ | +-------------------------------------------------+
+ | || none: don't build a VM image. |
+-------------------------+-------------------------------------------------+
- | -e IMAGE_PROPERTY | normal or nsb: Type of the VM image to be built |
- | | Default image is Yardstick |
+ | -e VIRTUAL_ENVIRONMENT || False or True: whether to install in a |
+ | || virtualenv |
+ | || Default is False |
+-------------------------+-------------------------------------------------+
- | -e VIRTUAL_ENVIRONMENT | False or True: Whether install in virtualenv |
- | | Default is False |
+ | -e YARD_IMAGE_ARCH || CPU architecture on servers |
+ | || Default is 'amd64' |
+-------------------------+-------------------------------------------------+
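+Putting several parameters together, a bare metal installation that builds an
+nsb VM image based on Ubuntu 18.04 might look like this (a sketch based on the
+table above; the Yardstick path is a placeholder):
+
+.. code-block:: console
+
+   ansible-playbook -i install-inventory.ini install.yaml \
+   -e IMAGE_PROPERTY='nsb' \
+   -e OS_RELEASE='bionic' \
+   -e YARDSTICK_DIR=<path to Yardstick folder>
+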
diff --git a/docs/testing/user/userguide/12-nsb-overview.rst b/docs/testing/user/userguide/12-nsb-overview.rst
index ec4df1cae..012ea6c3d 100644
--- a/docs/testing/user/userguide/12-nsb-overview.rst
+++ b/docs/testing/user/userguide/12-nsb-overview.rst
@@ -56,7 +56,7 @@ NSB extension includes:
* Generic data models of Network Services, based on ETSI spec
`ETSI GS NFV-TST 001`_
-* Standalone :term:`context` for VNF testing with SRIOV, OVS, OVS-DPDK, etc
+* Standalone :term:`context` for VNF testing: SRIOV, OVS, OVS-DPDK, etc
* Generic VNF configuration models and metrics implemented with Python
classes
* Traffic generator features and traffic profiles
@@ -65,6 +65,14 @@ NSB extension includes:
* L4-L7 state-full traffic profiles
* Tunneling protocol/network overlay support
+* Scenarios that handle NSB test cases execution
+
+ * NSPerf - scenario that handles generic NSB test case execution
+ (setup and init tg/vnf, trigger traffic on tg, collect kpi)
+ * NSPerf-RFC2544 - scenario that allows repeatable triggering of traffic on
+ traffic generators until the test case acceptance criteria are met
+ (for example an RFC2544 binary search)
+
* Test case samples
* Ping
@@ -121,6 +129,13 @@ Network Service framework performs the necessary test steps. It may involve:
Components of Network Service
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+.. TODO: provide a list of components in this section and describe them in
+ later sub-sections
+
+.. Components are the methodology, TGs, framework extensions, KPI collection,
+ Testcases, SampleVNFs
+.. Framework extentions include: VNF models, NSPerf Scenario, contexts
+
* *Models for Network Service benchmarking*: The Network Service benchmarking
requires the proper modelling approach. The NSB provides models using Python
files and defining of NSDs and VNFDs.
@@ -169,6 +184,38 @@ for every combination of test case parameters:
* RFC2544 throughput for various loss rate defined (1% is a default)
+KPI Collection
+^^^^^^^^^^^^^^
+
+KPI collection is the process of sampling KPIs at multiple intervals to allow
+for investigation into anomalies during runtime. Some KPI intervals are
+adjustable. KPIs are collected from traffic generators and NFVI for the SUT.
+Some reporting is already available in NSB, but NSB also collects all KPIs
+for analytics to process.
+
+Below is an example list of basic KPIs:
+
+* Throughput
+* Latency
+* Packet delay variation
+* Maximum establishment rate
+* Maximum tear-down rate
+* Maximum simultaneous number of sessions
+
+Of course, there can be many other KPIs that are relevant for a specific
+NFVI, but in most cases these KPIs are enough to give you a basic picture of
+the SUT. NSB also uses :term:`collectd` to collect the KPIs. Currently
+the following collectd plug-ins are enabled for NSB test cases:
+
+* Libvirt
+* Interface stats
+* OvS events
+* vSwitch stats
+* Huge Pages
+* RAM
+* CPU usage
+* Intel® PMU
+* Intel® RDT
+
Graphical Overview
------------------
diff --git a/docs/testing/user/userguide/13-nsb-installation.rst b/docs/testing/user/userguide/13-nsb-installation.rst
index 71ced43ea..694521d2b 100644
--- a/docs/testing/user/userguide/13-nsb-installation.rst
+++ b/docs/testing/user/userguide/13-nsb-installation.rst
@@ -21,6 +21,7 @@ NSB Installation
.. _OVS-DPDK: http://docs.openvswitch.org/en/latest/intro/install/dpdk/
.. _devstack: https://docs.openstack.org/devstack/pike/>
+.. _OVS-DPDK-versions: http://docs.openvswitch.org/en/latest/faq/releases/
Abstract
--------
@@ -28,7 +29,7 @@ Abstract
The steps needed to run Yardstick with NSB testing are:
* Install Yardstick (NSB Testing).
-* Setup/reference ``pod.yaml`` describing Test topology
+* Setup/reference ``pod.yaml`` describing Test topology.
* Create/reference the test configuration yaml file.
* Run the test case.
@@ -89,21 +90,25 @@ Boot and BIOS settings:
Install Yardstick (NSB Testing)
-------------------------------
-Download the source code and check out the latest stable branch::
+Yardstick with NSB can be installed using ``nsb_setup.sh``.
+The ``nsb_setup.sh`` script allows you to:
-.. code-block:: console
-
- git clone https://gerrit.opnfv.org/gerrit/yardstick
- cd yardstick
- # Switch to latest stable branch
- git checkout stable/gambia
+1. Install Yardstick in the specified mode: bare metal or container.
+ Refer to :doc:`04-installation`.
+2. Install package dependencies on the remote servers used as traffic
+ generators or sample VNFs: DPDK, sample VNFs, TREX, collectd.
+ Add such servers to the ``install-inventory.ini`` file, to either the
+ ``yardstick-standalone`` or the ``yardstick-baremetal`` server group.
+ The script also configures IOMMU, hugepages, open file limits, CPU
+ isolation, etc.
+3. Build a VM image, either nsb or normal. The nsb VM image is used to run
+ Yardstick sample VNF tests, like vFW, vACL, vCGNAPT, etc.
+ The normal VM image is used to run Yardstick ping tests in the OpenStack
+ context.
+4. Add the nsb or normal VM image to OpenStack, together with the OpenStack
+ variables.
-Configure the network proxy, either using the environment variables or setting
-the global environment file.
+First, configure the network proxy, either using the environment variables or
+by setting the global environment file.
-* Set environment
-
-.. code-block::
+Set environment::
http_proxy='http://proxy.company.com:port'
https_proxy='http://proxy.company.com:port'
@@ -113,55 +118,187 @@ the global environment file.
export http_proxy='http://proxy.company.com:port'
export https_proxy='http://proxy.company.com:port'
-Modify the Yardstick installation inventory, used by Ansible::
+Download the source code and check out the latest stable branch
+
+.. code-block:: console
+
+ git clone https://gerrit.opnfv.org/gerrit/yardstick
+ cd yardstick
+ # Switch to latest stable branch
+ git checkout stable/gambia
+
+Modify the Yardstick installation inventory used by Ansible::
cat ./ansible/install-inventory.ini
[jumphost]
localhost ansible_connection=local
- [yardstick-standalone]
- yardstick-standalone-node ansible_host=192.168.1.2
- yardstick-standalone-node-2 ansible_host=192.168.1.3
-
# section below is only due backward compatibility.
# it will be removed later
[yardstick:children]
jumphost
+ [yardstick-baremetal]
+ baremetal ansible_host=192.168.2.51 ansible_connection=ssh
+
+ [yardstick-standalone]
+ standalone ansible_host=192.168.2.52 ansible_connection=ssh
+
[all:vars]
- ansible_user=root
- ansible_pass=root
+ # Uncomment credentials below if needed
+ ansible_user=root
+ ansible_ssh_pass=root
+ # ansible_ssh_private_key_file=/root/.ssh/id_rsa
+ # When IMAGE_PROPERTY is neither 'normal' nor 'nsb', set
+ # "path_to_img=/path/to/image" to add the image to OpenStack
+ # path_to_img=/tmp/workspace/yardstick-image.img
+
+ # List of CPUs to be isolated (not used by default)
+ # Grub line will be extended with:
+ # "isolcpus=<ISOL_CPUS> nohz=on nohz_full=<ISOL_CPUS> rcu_nocbs=1<ISOL_CPUS>"
+ # ISOL_CPUS=2-27,30-55 # physical cpu's for all NUMA nodes, four cpu's reserved
+
+.. warning::
+
+ Before running ``nsb_setup.sh`` make sure Python is installed on the servers
+ added to the ``yardstick-standalone`` or ``yardstick-baremetal`` groups.
.. note::
SSH access without password needs to be configured for all your nodes
- defined in ``yardstick-install-inventory.ini`` file.
+ defined in ``install-inventory.ini`` file.
If you want to use password authentication you need to install ``sshpass``::
sudo -EH apt-get install sshpass
-To execute an installation for a BareMetal or a Standalone context::
- ./nsb_setup.sh
+.. note::
+
+ A VM image built by other means than Yardstick can be added to OpenStack.
+ Uncomment and set correct path to the VM image in the
+ ``install-inventory.ini`` file::
+
+ path_to_img=/tmp/workspace/yardstick-image.img
+
+
+.. note::
+
+ CPU isolation can be applied to the remote servers, e.g.:
+ ISOL_CPUS=2-27,30-55. Uncomment and modify it accordingly in the
+ ``install-inventory.ini`` file.
+
+By default ``nsb_setup.sh`` pulls the Yardstick image based on Ubuntu 16.04
+from docker hub and starts the container, builds the NSB VM image based on
+Ubuntu 16.04, and installs packages on the servers given in the
+``yardstick-standalone`` and ``yardstick-baremetal`` host groups.
+To pull the Yardstick image based on Ubuntu 18.04 run::
-To execute an installation for an OpenStack context::
+ ./nsb_setup.sh -i opnfv/yardstick-ubuntu-18.04:latest
+
+To change the default behavior, modify the parameters for ``install.yaml``
+in the ``nsb_setup.sh`` file.
+
+Refer to chapter :doc:`04-installation` for more details on the
+``install.yaml`` parameters.
+
+To execute an installation for a **BareMetal** or a **Standalone context**::
+
+ ./nsb_setup.sh
+
+To execute an installation for an **OpenStack** context::
./nsb_setup.sh <path to admin-openrc.sh>
+.. note::
+
+ Yardstick may not be operational after a Linux kernel update on the host
+ distribution if Yardstick was installed before the update. Run
+ ``nsb_setup.sh`` again to resolve this.
+
+.. warning::
+
+ The Yardstick VM image (NSB or normal) cannot be built inside a VM.
+
+.. warning::
+
+ The ``nsb_setup.sh`` script configures hugepages, CPU isolation and IOMMU
+ via grub. A reboot of the servers from the ``yardstick-standalone`` or
+ ``yardstick-baremetal`` groups in the file ``install-inventory.ini`` is
+ required to apply those changes.
+
The above commands will set up Docker with the latest Yardstick code. To
execute::
docker exec -it yardstick bash
+.. note::
+
+ It may be necessary to configure the tty in the docker container to extend
+ the command line character length, for example::
+
+ stty rows 58 cols 234
+
It will also automatically download all the packages needed for NSB Testing
-setup. Refer chapter :doc:`04-installation` for more on Docker
+setup. Refer to chapter :doc:`04-installation` for more on Docker.
**Install Yardstick using Docker (recommended)**
-Another way to execute an installation for a Bare-Metal or a Standalone context
-is to use ansible script ``install.yaml``. Refer chapter :doc:`04-installation`
-for more details.
+Bare Metal context example
+^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Let's assume there are three servers acting as TG, sample VNF DUT and jump host.
+
+Perform the following steps to install NSB:
+
+1. Clone Yardstick repo to jump host.
+2. Add the TG and DUT servers to the ``yardstick-baremetal`` group in the
+ ``install-inventory.ini`` file to install NSB and dependencies. Install
+ Python on the servers.
+3. Start the deployment using the docker image based on Ubuntu 16.04:
+
+.. code-block:: console
+
+ ./nsb_setup.sh
+
+4. Reboot the bare metal servers.
+5. Enter the yardstick container, modify the pod yaml file and run tests
+ (see the example below).
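+For example, entering the container and starting a test case could look like
+this (the test case path is a placeholder):
+
+.. code-block:: console
+
+   docker exec -it yardstick bash
+   yardstick -d task start <path to test case yaml>
+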
+
+Standalone context example for Ubuntu 18
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Let's assume there are three servers acting as TG, sample VNF DUT and jump host.
+Ubuntu 18.04 is installed on all servers.
+
+Perform the following steps to install NSB:
+
+1. Clone Yardstick repo to jump host.
+2. Add the TG server to the ``yardstick-baremetal`` group in the
+ ``install-inventory.ini`` file to install NSB and dependencies.
+ Add the server where the VM with the sample VNF will be deployed to the
+ ``yardstick-standalone`` group in the ``install-inventory.ini`` file.
+ The target VM image named ``yardstick-nsb-image.img`` will be placed in
+ ``/var/lib/libvirt/images/`` (this can be verified after deployment, see
+ the check below).
+ Install Python on the servers.
+3. Modify ``nsb_setup.sh`` on the jump host:
+
+.. code-block:: console
+
+ ansible-playbook \
+ -e IMAGE_PROPERTY='nsb' \
+ -e OS_RELEASE='bionic' \
+ -e INSTALLATION_MODE='container_pull' \
+ -e YARD_IMAGE_ARCH='amd64' ${extra_args} \
+ -i install-inventory.ini install.yaml
+
+4. Start deployment with Yardstick docker images based on Ubuntu 18:
+
+.. code-block:: console
+
+ ./nsb_setup.sh -i opnfv/yardstick-ubuntu-18.04:latest -o <openrc_file>
+
+5. Reboot the servers.
+6. Enter the yardstick container, modify the pod yaml file and run tests.
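+Assuming the deployment went well, the target VM image from step 2 can be
+verified on the standalone server::
+
+   ls -la /var/lib/libvirt/images/yardstick-nsb-image.img
+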
+
System Topology
---------------
@@ -175,7 +312,7 @@ System Topology
| | | |
| | (1)<-----(1) | |
+----------+ +----------+
- trafficgen_1 vnf
+ trafficgen_0 vnf
Environment parameters and credentials
@@ -185,7 +322,7 @@ Configure yardstick.conf
^^^^^^^^^^^^^^^^^^^^^^^^
If you did not run ``yardstick env influxdb`` inside the container to generate
- ``yardstick.conf``, then create the config file manually (run inside the
+``yardstick.conf``, then create the config file manually (run inside the
container)::
cp ./etc/yardstick/yardstick.conf.sample /etc/yardstick/yardstick.conf
@@ -251,7 +388,7 @@ Bare-Metal 2-Node setup
| | | |
| | (n)<-----(n) | |
+----------+ +----------+
- trafficgen_1 vnf
+ trafficgen_0 vnf
Bare-Metal 3-Node setup - Correlated Traffic
++++++++++++++++++++++++++++++++++++++++++++
@@ -265,7 +402,7 @@ Bare-Metal 3-Node setup - Correlated Traffic
| | | | | |
| | | |(1)<---->(0)| |
+----------+ +----------+ +------------+
- trafficgen_1 vnf trafficgen_2
+ trafficgen_0 vnf trafficgen_1
Bare-Metal Config pod.yaml
@@ -279,7 +416,7 @@ topology and update all the required fields.::
nodes:
-
- name: trafficgen_1
+ name: trafficgen_0
role: TrafficGen
ip: 1.1.1.1
user: root
@@ -388,7 +525,7 @@ On Host, where VM is created:
.. code-block:: YAML
servers:
- vnf:
+ vnf_0:
network_ports:
mgmt:
cidr: '1.1.1.7/24'
@@ -446,7 +583,7 @@ SR-IOV 2-Node setup
| | (n)<----->(n) | ----------------- |
| | | |
+----------+ +-------------------------+
- trafficgen_1 host
+ trafficgen_0 host
@@ -474,7 +611,7 @@ SR-IOV 3-Node setup - Correlated Traffic
| | | | | | |
| | (n)<----->(n) | -----| (n)<-->(n) | |
+----------+ +---------------------+ +--------------+
- trafficgen_1 host trafficgen_2
+ trafficgen_0 host trafficgen_1
Before executing Yardstick test cases, make sure that ``pod.yaml`` reflects the
topology and update all the required fields.
@@ -493,7 +630,7 @@ SR-IOV Config pod_trex.yaml
nodes:
-
- name: trafficgen_1
+ name: trafficgen_0
role: TrafficGen
ip: 1.1.1.1
user: root
@@ -554,7 +691,7 @@ Update contexts section
user: "" # update VM username
password: "" # update password
servers:
- vnf:
+ vnf_0:
network_ports:
mgmt:
cidr: '1.1.1.61/24' # Update VM IP address, if static, <ip>/<mask> or if dynamic, <start of ip>/<mask>
@@ -619,7 +756,7 @@ On Host, where VM is created:
.. code-block:: YAML
servers:
- vnf:
+ vnf_0:
network_ports:
mgmt:
cidr: '1.1.1.7/24'
@@ -683,7 +820,7 @@ OVS-DPDK 2-Node setup
| | | (ovs-dpdk) | |
| | (n)<----->(n) |------------------ |
+----------+ +-------------------------+
- trafficgen_1 host
+ trafficgen_0 host
OVS-DPDK 3-Node setup - Correlated Traffic
@@ -713,7 +850,7 @@ OVS-DPDK 3-Node setup - Correlated Traffic
| | | (ovs-dpdk) | | | |
| | (n)<----->(n) | ------ |(n)<-->(n)| |
+----------+ +-------------------------+ +------------+
- trafficgen_1 host trafficgen_2
+ trafficgen_0 host trafficgen_1
Before executing Yardstick test cases, make sure that the ``pod.yaml`` reflects
@@ -731,7 +868,7 @@ OVS-DPDK Config pod_trex.yaml
nodes:
-
- name: trafficgen_1
+ name: trafficgen_0
role: TrafficGen
ip: 1.1.1.1
user: root
@@ -802,7 +939,7 @@ Update contexts section
user: "" # update VM username
password: "" # update password
servers:
- vnf:
+ vnf_0:
network_ports:
mgmt:
cidr: '1.1.1.61/24' # Update VM IP address, if static, <ip>/<mask> or if dynamic, <start of ip>/<mask>
@@ -822,6 +959,144 @@ Update contexts section
cidr: '152.16.40.10/24'
gateway_ip: '152.16.100.20'
+OVS-DPDK configuration options
+++++++++++++++++++++++++++++++
+
+There are a number of configuration options available for the OVS-DPDK
+context in a test case. They are mostly used for performance tuning.
+
+OVS-DPDK properties
+'''''''''''''''''''
+
+OVS-DPDK properties example under *ovs_properties* section:
+
+ .. code-block:: console
+
+ ovs_properties:
+ version:
+ ovs: 2.8.1
+ dpdk: 17.05.2
+ pmd_threads: 4
+ pmd_cpu_mask: "0x3c"
+ ram:
+ socket_0: 2048
+ socket_1: 2048
+ queues: 2
+ vpath: "/usr/local"
+ max_idle: 30000
+ lcore_mask: 0x02
+ dpdk_pmd-rxq-affinity:
+ 0: "0:2,1:2"
+ 1: "0:2,1:2"
+ 2: "0:3,1:3"
+ 3: "0:3,1:3"
+ vhost_pmd-rxq-affinity:
+ 0: "0:3,1:3"
+ 1: "0:3,1:3"
+ 2: "0:4,1:4"
+ 3: "0:4,1:4"
+
+OVS-DPDK properties description:
+
+ +-------------------------+-------------------------------------------------+
+ | Parameters | Detail |
+ +=========================+=================================================+
+ | version || Version of OVS and DPDK to be installed |
+ | || There is a relation between OVS and DPDK |
+ | | version which can be found at |
+ | | `OVS-DPDK-versions`_ |
+ | || By default OVS: 2.6.0, DPDK: 16.07.2 |
+ +-------------------------+-------------------------------------------------+
+ | lcore_mask || Core bitmask used during DPDK initialization |
+ | | where the non-datapath OVS-DPDK threads such |
+ | | as handler and revalidator threads run |
+ +-------------------------+-------------------------------------------------+
+ | pmd_cpu_mask || Core bitmask that sets which cores are used by |
+ | || OVS-DPDK for datapath packet processing |
+ +-------------------------+-------------------------------------------------+
+ | pmd_threads || Number of PMD threads used by OVS-DPDK for |
+ | | datapath |
+ | || The core mask is computed by Yardstick from |
+ | || this value if pmd_cpu_mask is not given |
+ | || Default is 2 |
+ +-------------------------+-------------------------------------------------+
+ | ram || Amount of RAM to be used for each socket, MB |
+ | || Default is 2048 MB |
+ +-------------------------+-------------------------------------------------+
+ | queues || Number of RX queues used for DPDK physical |
+ | | interface |
+ +-------------------------+-------------------------------------------------+
+ | dpdk_pmd-rxq-affinity || RX queue assignment to PMD threads for DPDK |
+ | || e.g.: <port number> : <queue-id>:<core-id> |
+ +-------------------------+-------------------------------------------------+
+ | vhost_pmd-rxq-affinity || RX queue assignment to PMD threads for vhost |
+ | || e.g.: <port number> : <queue-id>:<core-id> |
+ +-------------------------+-------------------------------------------------+
+ | vpath || User path for openvswitch files |
+ | || Default is ``/usr/local`` |
+ +-------------------------+-------------------------------------------------+
+ | max_idle || The maximum time that idle flows will remain |
+ | | cached in the datapath, ms |
+ +-------------------------+-------------------------------------------------+
+
+
+VM image properties
+'''''''''''''''''''
+
+VM image properties example under *flavor* section:
+
+ .. code-block:: console
+
+ flavor:
+ images: <path>
+ ram: 8192
+ extra_specs:
+ machine_type: 'pc-i440fx-xenial'
+ hw:cpu_sockets: 1
+ hw:cpu_cores: 6
+ hw:cpu_threads: 2
+ hw_socket: 0
+ cputune: |
+ <cputune>
+ <vcpupin vcpu="0" cpuset="7"/>
+ <vcpupin vcpu="1" cpuset="8"/>
+ ...
+ <vcpupin vcpu="11" cpuset="18"/>
+ <emulatorpin cpuset="11"/>
+ </cputune>
+
+VM image properties description:
+
+ +-------------------------+-------------------------------------------------+
+ | Parameters | Detail |
+ +=========================+=================================================+
+ | images || Path to the VM image generated by |
+ | | ``nsb_setup.sh`` |
+ | || Default path is ``/var/lib/libvirt/images/`` |
+ | || Default file name ``yardstick-nsb-image.img`` |
+ | | or ``yardstick-image.img`` |
+ +-------------------------+-------------------------------------------------+
+ | ram || Amount of RAM to be used for VM |
+ | || Default is 4096 MB |
+ +-------------------------+-------------------------------------------------+
+ | hw:cpu_sockets || Number of sockets provided to the guest VM |
+ | || Default is 1 |
+ +-------------------------+-------------------------------------------------+
+ | hw:cpu_cores || Number of cores provided to the guest VM |
+ | || Default is 2 |
+ +-------------------------+-------------------------------------------------+
+ | hw:cpu_threads || Number of threads provided to the guest VM |
+ | || Default is 2 |
+ +-------------------------+-------------------------------------------------+
+ | hw_socket || Generate vcpu cpuset from given HW socket |
+ | || Default is 0 |
+ +-------------------------+-------------------------------------------------+
+ | cputune || Maps virtual cpu with logical cpu |
+ +-------------------------+-------------------------------------------------+
+ | machine_type || Machine type to be emulated in VM |
+ | || Default is 'pc-i440fx-xenial' |
+ +-------------------------+-------------------------------------------------+
+
OpenStack with SR-IOV support
-----------------------------
@@ -859,7 +1134,7 @@ Single node OpenStack with external TG
| | (PF1)<----->(PF1) +--------------------+ |
| | | |
+----------+ +----------------------------+
- trafficgen_1 host
+ trafficgen_0 host
Host pre-configuration
@@ -1011,7 +1286,7 @@ context using steps described in `NS testing - using yardstick CLI`_ section.
Multi node OpenStack TG and VNF setup (two nodes)
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. code-block:: console
@@ -1022,7 +1297,7 @@ Multi node OpenStack TG and VNF setup (two nodes)
| |sample-VNF VM | | | |sample-VNF VM | |
| | | | | | | |
| | TG | | | | DUT | |
- | | trafficgen_1 | | | | (VNF) | |
+ | | trafficgen_0 | | | | (VNF) | |
| | | | | | | |
| +--------+ +--------+ | | +--------+ +--------+ |
| | VF NIC | | VF NIC | | | | VF NIC | | VF NIC | |
@@ -1094,7 +1369,7 @@ Enabling other Traffic generators
---------------------------------
IxLoad
-~~~~~~
+^^^^^^
1. Software needed: IxLoadAPI ``<IxLoadTclApi verson>Linux64.bin.tgz`` and
``<IxOS version>Linux64.bin.tar.gz`` (Download from ixia support site)
@@ -1197,9 +1472,9 @@ to be preinstalled and properly configured.
``PYTHONPATH`` environment variable.
.. important::
- The current version of LsApi module has an issue with reading LD_LIBRARY_PATH.
- For LsApi module to initialize correctly following lines (184-186) in
- lsapi.py
+ The current version of LsApi module has an issue with reading LD_LIBRARY_PATH.
+ For LsApi module to initialize correctly following lines (184-186) in
+ lsapi.py
.. code-block:: python
diff --git a/docs/testing/user/userguide/nsb/tc_bng_pppoe_rfc2544_ixia.rst b/docs/testing/user/userguide/nsb/tc_bng_pppoe_rfc2544_ixia.rst
new file mode 100644
index 000000000..ffe4f6c19
--- /dev/null
+++ b/docs/testing/user/userguide/nsb/tc_bng_pppoe_rfc2544_ixia.rst
@@ -0,0 +1,177 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, 2019 Intel Corporation.
+
+***************************************************************
+Yardstick Test Case Description: NSB vBNG RFC2544 QoS TEST CASE
+***************************************************************
+
++-----------------------------------------------------------------------------+
+|NSB vBNG RFC2544 QoS base line test case without link congestion |
+| |
++--------------+--------------------------------------------------------------+
+|test case id | tc_bng_pppoe_rfc2544_ixia_IMIX_scale_up |
+| | |
++--------------+--------------------------------------------------------------+
+| metric | Network metrics: |
+| | * TxThroughput |
+| | * RxThroughput |
+| | * TG packets in |
+| | * TG packets out |
+| | * Max Latency |
+| | * Min Latency |
+| | * Average Latency |
+| | * Packets drop percentage |
+| | |
+| | PPPoE subscribers metrics: |
+| | * Sessions up |
+| | * Sessions down |
+| | * Sessions Not Started |
+| | * Sessions Total |
+| | |
+| | NOTE: the same list of network metrics is collected: |
+| | * summary for all ports |
+| | * per port |
+| | * per priority flows summary on all ports |
+| | |
++--------------+--------------------------------------------------------------+
+|test purpose | This test measures the performance of a BNG network device |
+| | according to the RFC2544 testing methodology. The test case |
+| | creates PPPoE subscriber connections to the BNG, runs |
+| | prioritized traffic at maximum throughput on all ports and |
+| | collects network and PPPoE subscriber metrics. |
+| | |
++--------------+--------------------------------------------------------------+
+|configuration | The BNG QoS RFC2544 test cases are listed below: |
+| | |
+| | * tc_bng_pppoe_rfc2544_ixia_IMIX_scale_up.yaml |
+| | |
+| | The mentioned test case is a template; the number of ports |
+| | in the setup can be passed using cli arguments, e.g.: |
+| | |
+| | yardstick -d task start --task-args='{vports: 8}' <tc_yaml> |
+| | |
+| | By default, vports=2. |
+| | |
+| | Test duration: |
+| | * set as 30sec; |
+| | |
+| | Traffic type: |
+| | * IPv4; |
+| | |
+| | Packet sizes: |
+| | * IMIX. The following default IMIX distribution is used: |
+| | |
+| | uplink: 70B - 33%, 940B - 33%, 1470B - 34% |
+| | downlink: 68B - 3%, 932B - 1%, 1470B - 96% |
+| | |
+| | VLAN settings: |
+| | * QinQ on access ports; |
+| | * VLAN on core ports; |
+| | |
+| | Number of PPPoE subscribers: |
+| | * 4000 per access port; |
+| | * 1000 per SVLAN; |
+| | |
+| | Default ToS bits settings: |
+| | * 0 - (000) Routine |
+| | * 4 - (100) Flash Override |
+| | * 7 - (111) Network Control. |
+| | |
+| | The above fields are the main options used for the test case |
+| | and can be configured using cli options at test run or |
+| | directly in the test case yaml file. |
+| | |
++--------------+--------------------------------------------------------------+
+|test tool | IXIA IxNetwork |
+| | |
+| | IXIA IxNetwork is used to emulate PPPoE sessions, generate |
+| | L2-L3 traffic, analyze traffic flows and collect network |
+| | metrics during test run. |
+| | |
++--------------+--------------------------------------------------------------+
+|applicability | The mentioned BNG QoS RFC2544 test case can be configured |
+| | with different: |
+| | |
+| | * Number of PPPoE subscribers sessions; |
+| | * Setup ports number; |
+| | * IP Priority type; |
+| | * Packet size; |
+| | * Enable/disable BGP protocol on core ports; |
+| | |
+| | Default values exist. |
+| | |
++--------------+--------------------------------------------------------------+
+|references | RFC2544 |
+| | |
++--------------+--------------------------------------------------------------+
+| pre-test | 1. BNG is up and running and has configured: |
+| conditions | * access ports with QinQ tagging; |
+| | * core ports with configured IP addresses and VLAN; |
+| | * PPPoE subscribers authorization settings (no auth or |
+| | Radius server, PAP auth protocol); |
+| | * QoS settings; |
+| | |
+| | 2. IxNetwork API server is running on the TCL port |
+| | specified in the pod.yaml file; |
+| | |
+| | 3. BNG ports are connected to IXIA ports (IXIA uplink |
+| | ports are connected to BNG access ports and IXIA |
+| | downlink ports are connected to BNG core ports); |
+| | |
+| | 4. The pod.yaml file contains all necessary information |
+| | (BNG access and core ports settings, core ports IP |
+| | address, NICs, IxNetwork TCL port, IXIA uplink/downlink |
+| | ports, etc). |
+| | |
++--------------+--------------------------------------------------------------+
+|test sequence | description and expected result |
+| | |
++--------------+--------------------------------------------------------------+
+|step 1 | Yardstick resolves the topology and connects to IxNetwork |
+| | API server by TCL. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 2 | Test scenarios run, which performs the following steps: |
+| | |
+| | 1. Create access network topologies (these topologies are |
+| | based on the IXIA ports which are connected to BNG |
+| | access ports); |
+| | 2. Configure access network topologies with multiple device |
+| | groups. Each device group represents a single SVLAN with |
+| | PPPoE subscriber sessions (the number of SVLANs and |
+| | subscribers created on a port depends on the options |
+| | specified in the test case file); |
+| | 3. Create core network topologies (these topologies are |
+| | based on the IXIA ports which are connected to BNG core |
+| | ports); |
+| | 4. Configure core network topologies with a single device |
+| | group which represents one connection with configured |
+| | VLAN and BGP protocol; |
+| | 5. Establish PPPoE subscribers connections to BNG; |
+| | 6. Create traffic flows between access and core ports |
+| | (traffic flows are created between access-core port |
+| | pairs, traffic is bi-directional); |
+| | 7. Configure each traffic flow with the options specified |
+| | in the traffic profile; |
+| | 8. Run traffic for the duration specified in the test case |
+| | file; |
+| | 9. Collect network metrics after the traffic is stopped; |
+| | 10. In case the drop percentage rate is higher than |
+| | expected, reduce the traffic line rate and repeat |
+| | steps 7-10; |
+| | 11. In case the drop percentage rate is as expected, or |
+| | the maximum number of iterations in step 10 is |
+| | reached, disconnect PPPoE subscribers and stop |
+| | traffic; |
+| | 12. Stop the test. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 3 | During each iteration interval in the test run, all specified|
+| | metrics are retrieved from IxNetwork and stored in the |
+| | yardstick dispatcher. |
+| | |
++--------------+--------------------------------------------------------------+
+|test verdict | The vBNG RFC2544 test case will achieve maximum traffic line |
+| | rate with zero packet loss (or other non-zero allowed |
+| | partial drop rate). |
+| | |
++--------------+--------------------------------------------------------------+
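+As a usage illustration, the template above could be started from the
+Yardstick container with a custom number of ports as follows (the path to the
+test case yaml is a placeholder)::
+
+   yardstick -d task start --task-args='{vports: 8}' \
+       <path to tc_bng_pppoe_rfc2544_ixia_IMIX_scale_up.yaml>
+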
diff --git a/docs/testing/user/userguide/nsb/tc_bng_pppoe_rfc2544_ixia_8ports_1port_congested.rst b/docs/testing/user/userguide/nsb/tc_bng_pppoe_rfc2544_ixia_8ports_1port_congested.rst
new file mode 100644
index 000000000..889ba2410
--- /dev/null
+++ b/docs/testing/user/userguide/nsb/tc_bng_pppoe_rfc2544_ixia_8ports_1port_congested.rst
@@ -0,0 +1,179 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, 2019 Intel Corporation.
+
+***************************************************************
+Yardstick Test Case Description: NSB vBNG RFC2544 QoS TEST CASE
+***************************************************************
+
++-----------------------------------------------------------------------------+
+|NSB vBNG RFC2544 QoS base line test case with link congestion |
+| |
++--------------+--------------------------------------------------------------+
+|test case id | tc_bng_pppoe_rfc2544_ixia_8ports_1port_congested_IMIX |
+| | |
++--------------+--------------------------------------------------------------+
+| metric | Network metrics: |
+| | * TxThroughput |
+| | * RxThroughput |
+| | * TG packets in |
+| | * TG packets out |
+| | * Max Latency |
+| | * Min Latency |
+| | * Average Latency |
+| | * Packets drop percentage |
+| | |
+| | PPPoE subscribers metrics: |
+| | * Sessions up |
+| | * Sessions down |
+| | * Sessions Not Started |
+| | * Sessions Total |
+| | |
+| | NOTE: the same list of network metrics is collected: |
+| | * summary for all ports |
+| | * per port |
+| | * per priority flows summary on all ports |
+| | |
++--------------+--------------------------------------------------------------+
+|test purpose | This test measures the performance of a BNG network device |
+| | according to the RFC2544 testing methodology. The test case |
+| | creates PPPoE subscriber connections to the BNG, runs |
+| | prioritized traffic causing congestion of an access port |
+| | (port xe0) and collects network and PPPoE subscriber |
+| | metrics. |
+| | |
++--------------+--------------------------------------------------------------+
+|configuration | The BNG QoS RFC2544 test cases are listed below: |
+| | |
+| | * tc_bng_pppoe_rfc2544_ixia_8ports_1port_congested_IMIX.yaml |
+| | |
+| | Number of ports: |
+| | * 8 ports |
+| | |
+| | Test duration: |
+| | * set as 30sec; |
+| | |
+| | Traffic type: |
+| | * IPv4; |
+| | |
+| | Packet sizes: |
+| | * IMIX. The following default IMIX distribution is used: |
+| | |
+| | uplink: 70B - 33%, 940B - 33%, 1470B - 34% |
+| | downlink: 68B - 3%, 932B - 1%, 1470B - 96% |
+| | |
+| | VLAN settings: |
+| | * QinQ on access ports; |
+| | * VLAN on core ports; |
+| | |
+| | Number of PPPoE subscribers: |
+| | * 4000 per access port; |
+| | * 1000 per SVLAN; |
+| | |
+| | Default ToS bits settings: |
+| | * 0 - (000) Routine |
+| | * 4 - (100) Flash Override |
+| | * 7 - (111) Network Control. |
+| | |
+| | The above fields are the main options used for the test case |
+| | and can be configured using cli options at test run or |
+| | directly in the test case yaml file. |
+| | |
+| | NOTE: the only parameter that can't be changed is the |
+| | number of ports. To run the test with another number of |
+| | ports, the traffic profile should be updated. |
+| | |
++--------------+--------------------------------------------------------------+
+|test tool | IXIA IxNetwork |
+| | |
+| | IXIA IxNetwork is used to emulate PPPoE sessions, generate |
+| | L2-L3 traffic, analyze traffic flows and collect network |
+| | metrics during test run. |
+| | |
++--------------+--------------------------------------------------------------+
+|applicability | The mentioned BNG QoS RFC2544 test case can be configured |
+| | with different: |
+| | |
+| | * Number of PPPoE subscribers sessions; |
+| | * IP Priority type; |
+| | * Packet size; |
+| | * Enable/disable BGP protocol on core ports; |
+| | |
+| | Default values exist. |
+| | |
++--------------+--------------------------------------------------------------+
+|references | RFC2544 |
+| | |
++--------------+--------------------------------------------------------------+
+| pre-test | 1. BNG is up and running and has configured: |
+| conditions | * access ports with QinQ tagging; |
+| | * core ports with configured IP addresses and VLAN; |
+| | * PPPoE subscribers authorization settings (no auth or |
+| | Radius server, PAP auth protocol); |
+| | * QoS settings; |
+| | |
+| | 2. IxNetwork API server is running on the TCL port |
+| | specified in the pod.yaml file; |
+| | |
+| | 3. BNG ports are connected to IXIA ports (IXIA uplink |
+| | ports are connected to BNG access ports and IXIA |
+| | downlink ports are connected to BNG core ports); |
+| | |
+| | 4. The pod.yaml file contains all necessary information |
+| | (BNG access and core ports settings, core ports IP |
+| | address, NICs, IxNetwork TCL port, IXIA uplink/downlink |
+| | ports, etc). |
+| | |
++--------------+--------------------------------------------------------------+
+|test sequence | description and expected result |
+| | |
++--------------+--------------------------------------------------------------+
+|step 1 | Yardstick resolves the topology and connects to IxNetwork |
+| | API server by TCL. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 2 | Test scenarios run, which performs the following steps: |
+| | |
+| | 1. Create access network topologies (these topologies are |
+| | based on the IXIA ports which are connected to BNG |
+| | access ports); |
+| | 2. Configure access network topologies with multiple device |
+| | groups. Each device group represents a single SVLAN with |
+| | PPPoE subscriber sessions (the number of SVLANs and |
+| | subscribers created on a port depends on the options |
+| | specified in the test case file); |
+| | 3. Create core network topologies (these topologies are |
+| | based on the IXIA ports which are connected to BNG core |
+| | ports); |
+| | 4. Configure core network topologies with a single device |
+| | group which represents one connection with configured |
+| | VLAN and BGP protocol; |
+| | 5. Establish PPPoE subscribers connections to BNG; |
+| | 6. Create traffic flows between access and core ports. |
+| | Since the test covers the case of access port |
+| | congestion, flows between ports are created in the |
+| | following way: traffic from two core ports goes to one |
+| | access port, causing port congestion, and traffic from |
+| | the other two core ports is split between the remaining |
+| | three access ports; |
+| | 7. Configure each traffic flow with the options specified |
+| | in the traffic profile; |
+| | 8. Run traffic for the duration specified in the test case |
+| | file; |
+| | 9. Collect network metrics after the traffic is stopped; |
+| | 10. Measure the drop percentage rate of different priority |
+| | packets on the congested port. It is expected that all |
+| | high and medium priority packets are forwarded and |
+| | only low priority packets have drops. |
+| | 11. Disconnect PPPoE subscribers and stop test. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 3 | During the test run, at the end of each iteration all |
+| | metrics specified in the document are retrieved from |
+| | IxNetwork and stored in the yardstick dispatcher. |
+| | |
++--------------+--------------------------------------------------------------+
+|test verdict | The test case is successful if all high and medium priority |
+| | packets on the congested port were forwarded and only low |
+| | priority packets have drops. |
+| | |
++--------------+--------------------------------------------------------------+