From 2347f3977823bfd0c4b1fa832d9046b8a625596b Mon Sep 17 00:00:00 2001 From: JingLu5 Date: Tue, 28 Mar 2017 01:43:21 +0000 Subject: Refine documentation Change-Id: I0d3387a520e3decf51bb0f7db53996f148a611d0 Signed-off-by: JingLu5 --- docs/testing/user/userguide/01-introduction.rst | 32 +- docs/testing/user/userguide/03-architecture.rst | 2 +- docs/testing/user/userguide/04-installation.rst | 442 ++++++++++----------- .../testing/user/userguide/05-yardstick_plugin.rst | 69 ++-- .../user/userguide/06-result-store-InfluxDB.rst | 6 +- docs/testing/user/userguide/08-api.rst | 177 +++++++++ docs/testing/user/userguide/08-vtc-overview.rst | 125 ------ .../user/userguide/09-apexlake_installation.rst | 302 -------------- docs/testing/user/userguide/09-vtc-overview.rst | 128 ++++++ docs/testing/user/userguide/10-apexlake_api.rst | 89 ----- .../user/userguide/10-apexlake_installation.rst | 302 ++++++++++++++ docs/testing/user/userguide/11-apexlake_api.rst | 89 +++++ docs/testing/user/userguide/11-nsb-overview.rst | 213 ---------- docs/testing/user/userguide/12-nsb-overview.rst | 194 +++++++++ .../testing/user/userguide/12-nsb_installation.rst | 268 ------------- docs/testing/user/userguide/13-list-of-tcs.rst | 129 ------ .../testing/user/userguide/13-nsb_installation.rst | 238 +++++++++++ docs/testing/user/userguide/14-list-of-tcs.rst | 129 ++++++ docs/testing/user/userguide/index.rst | 20 +- .../user/userguide/opnfv_yardstick_tc001.rst | 2 +- .../user/userguide/opnfv_yardstick_tc005.rst | 2 +- 21 files changed, 1547 insertions(+), 1411 deletions(-) create mode 100644 docs/testing/user/userguide/08-api.rst delete mode 100644 docs/testing/user/userguide/08-vtc-overview.rst delete mode 100644 docs/testing/user/userguide/09-apexlake_installation.rst create mode 100644 docs/testing/user/userguide/09-vtc-overview.rst delete mode 100644 docs/testing/user/userguide/10-apexlake_api.rst create mode 100644 docs/testing/user/userguide/10-apexlake_installation.rst create mode 100644 docs/testing/user/userguide/11-apexlake_api.rst delete mode 100644 docs/testing/user/userguide/11-nsb-overview.rst create mode 100644 docs/testing/user/userguide/12-nsb-overview.rst delete mode 100644 docs/testing/user/userguide/12-nsb_installation.rst delete mode 100644 docs/testing/user/userguide/13-list-of-tcs.rst create mode 100644 docs/testing/user/userguide/13-nsb_installation.rst create mode 100644 docs/testing/user/userguide/14-list-of-tcs.rst (limited to 'docs/testing/user') diff --git a/docs/testing/user/userguide/01-introduction.rst b/docs/testing/user/userguide/01-introduction.rst index 2aa870c2a..4fc94ac62 100755 --- a/docs/testing/user/userguide/01-introduction.rst +++ b/docs/testing/user/userguide/01-introduction.rst @@ -12,6 +12,7 @@ Introduction .. _Pharos: https://wiki.opnfv.org/pharos .. _Yardstick: https://wiki.opnfv.org/yardstick .. _Presentation: https://wiki.opnfv.org/download/attachments/2925202/opnfv_summit_-_yardstick_project.pdf?version=1&modificationDate=1458848320000&api=v2 + Yardstick_ is an OPNFV Project. The project's goal is to verify infrastructure compliance, from the perspective @@ -37,14 +38,14 @@ About This Document This document consists of the following chapters: -* Chapter :doc:`01-introduction` provides a brief introduction to yardstick - project's goal and scope and gives the structure of this document. +* Chapter :doc:`01-introduction` provides a brief introduction to *Yardstick* + project's background and describes the structure of this document. 
* Chapter :doc:`02-methodology` describes the methodology implemented by the
-  Yardstick Project for :term:`NFVI` verification.
+  *Yardstick* Project for :term:`NFVI` verification.

* Chapter :doc:`03-architecture` provides information on the software architecture
-  of yardstick.
+  of *Yardstick*.

* Chapter :doc:`04-installation` provides instructions to install *Yardstick*.

@@ -54,22 +55,28 @@ This document consists of the following chapters:

-* Chapter :doc:`06-result-store-InfluxDB` provides inforamtion on how to run
+* Chapter :doc:`06-result-store-InfluxDB` provides information on how to run
  plug-in test cases and store test results into community's InfluxDB.

-* Chapter :doc:`07-vtc-overview` provides information on the :term:`VTC`.
+* Chapter :doc:`07-grafana` provides information on the *Yardstick* Grafana
+  dashboard and how to add a new dashboard to it.
+
+* Chapter :doc:`08-api` provides information on the *Yardstick* ReST API and
+  how to use it.

-* Chapter :doc:`08-apexlake_installation` provides instructions to install the
+* Chapter :doc:`09-vtc-overview` provides information on the :term:`VTC`.
+
+* Chapter :doc:`10-apexlake_installation` provides instructions to install the
  experimental framework *ApexLake*

-* Chapter :doc:`09-apexlake_api` explains how this framework is integrated in
+* Chapter :doc:`11-apexlake_api` explains how this framework is integrated in
  *Yardstick*.

-* Chapter :doc:`10-nsb-overview` describes the methodology implemented by the
-  yardstick - Network service benchmarking to test real world usecase for a
-  given VNF
+* Chapter :doc:`12-nsb-overview` describes the methodology implemented by the
+  Yardstick - Network service benchmarking to test real-world use cases for a
+  given VNF.

-* Chapter :doc:`11-nsb_installation` provides instructions to install
+* Chapter :doc:`13-nsb_installation` provides instructions to install
  *Yardstick - Network service benchmarking testing*.

-* Chapter :doc:`12-list-of-tcs` includes a list of available Yardstick test
+* Chapter :doc:`14-list-of-tcs` includes a list of available *Yardstick* test
  cases.


Contact Yardstick
=================

Feedback? `Contact us`_

.. _Contact us: opnfv-users@lists.opnfv.org
-
diff --git a/docs/testing/user/userguide/03-architecture.rst b/docs/testing/user/userguide/03-architecture.rst
index 95fe050e8..8336b609d 100755
--- a/docs/testing/user/userguide/03-architecture.rst
+++ b/docs/testing/user/userguide/03-architecture.rst
@@ -236,7 +236,7 @@ Yardstick Directory structure

**yardstick/** - Yardstick main directory.

-*/tests/ci/* - Used for continuous integration of Yardstick at different PODs and
+*tests/ci/* - Used for continuous integration of Yardstick at different PODs and
with support for different installers.

*docs/* - All documentation is stored here, such as configuration guides,
diff --git a/docs/testing/user/userguide/04-installation.rst b/docs/testing/user/userguide/04-installation.rst
index 64955c782..cb7b76714 100644
--- a/docs/testing/user/userguide/04-installation.rst
+++ b/docs/testing/user/userguide/04-installation.rst
@@ -12,7 +12,7 @@ Abstract

Yardstick supports installation by Docker or directly in Ubuntu. The
installation procedure for Docker and direct installation are detailed in
-the section below.
+the sections below.

To use Yardstick you should have access to an OpenStack environment, with at
least Nova, Neutron, Glance, Keystone and Heat installed.

@@ -21,56 +21,50 @@ The steps needed to run Yardstick are:

1. Install Yardstick.
2. Load OpenStack environment variables.
-3. Create a Neutron external network.
-4. Build Yardstick flavor and a guest image.
-5. Load the guest image into the OpenStack environment.
-6. Create the test configuration .yaml file.
-7. Run the test case.
+#. Create Yardstick flavor.
+#. Build a guest image and load it into the OpenStack environment.
+#. Create the test configuration ``.yaml`` file and run the test case/suite.


Prerequisites
-------------

-The OPNFV deployment is out of the scope of this document but it can be
-found in http://artifacts.opnfv.org/opnfvdocs/colorado/docs/configguide/index.html.
-The OPNFV platform is considered as the System Under Test (SUT) in this
-document.
+The OPNFV deployment is out of the scope of this document and can be found `here <http://artifacts.opnfv.org/opnfvdocs/colorado/docs/configguide/index.html>`_. The OPNFV platform is considered as the System Under Test (SUT) in this document.

Several prerequisites are needed for Yardstick:

-    #. A Jumphost to run Yardstick on
-    #. A Docker daemon shall be installed on the Jumphost
-    #. A public/external network created on the SUT
-    #. Connectivity from the Jumphost to the SUT public/external network
+#. A Jumphost to run Yardstick on
+#. A Docker daemon or a virtual environment installed on the Jumphost
+#. A public/external network created on the SUT
+#. Connectivity from the Jumphost to the SUT public/external network

-WARNING: Connectivity from Jumphost is essential and it is of paramount
+**NOTE:** *Jumphost* refers to any server which meets the previous
+requirements. Normally it is the same server from where the OPNFV
+deployment has been triggered.

+**WARNING:** Connectivity from Jumphost is essential and it is of paramount
importance to make sure it is working before even considering to install
and run Yardstick. Make also sure you understand how your networking is
designed to work.

-NOTE: **Jumphost** refers to any server which meets the previous
-requirements. Normally it is the same server from where the OPNFV
-deployment has been triggered previously.
-
-NOTE: If your Jumphost is operating behind a company http proxy and/or
-Firewall, please consult first the section `Proxy Support`_, towards
-the end of this document. The section details some tips/tricks which
+**NOTE:** If your Jumphost is operating behind a company HTTP proxy and/or
+firewall, please consult first the section `Proxy Support (**Todo**)`_, towards
+the end of this document. That section details some tips/tricks which
*may* be of help in a proxified environment.

-Installing Yardstick using Docker
----------------------------------
+Install Yardstick using Docker (**recommended**)
+---------------------------------------------------

-Yardstick has a Docker image,
-**It is recommended to use this Docker image to run Yardstick test**.
+Yardstick has a Docker image. It is recommended to use this Docker image to run Yardstick tests.

-Pulling the Yardstick Docker image
+Prepare the Yardstick container
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

..
_dockerhub: https://hub.docker.com/r/opnfv/yardstick/ -Pull the Yardstick Docker image (**opnfv/yardstick**) from the public dockerhub -registry under the OPNFV account: [dockerhub_], with the following docker +Pull the Yardstick Docker image (``opnfv/yardstick``) from the public dockerhub +registry under the OPNFV account: dockerhub_, with the following docker command:: docker pull opnfv/yardstick:stable @@ -82,14 +76,11 @@ following docker command:: REPOSITORY TAG IMAGE ID CREATED SIZE opnfv/yardstick stable a4501714757a 1 day ago 915.4 MB -Run the Docker image to get a Yardstick container -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ - -:: +Run the Docker image to get a Yardstick container:: docker run -itd --privileged -v /var/run/docker.sock:/var/run/docker.sock -p 8888:5000 -e INSTALLER_IP=192.168.200.2 -e INSTALLER_TYPE=compass --name yardstick opnfv/yardstick:stable -note: +Note: +----------------------------------------------+------------------------------+ | parameters | Detail | @@ -102,34 +93,34 @@ note: | | background. | +----------------------------------------------+------------------------------+ | --privileged | If you want to build | -| | yardstick-image in yardstick | -| | container, this parameter is | -| | needed. | +| | ``yardstick-image`` in | +| | Yardstick container, this | +| | parameter is needed. | +----------------------------------------------+------------------------------+ | -e INSTALLER_IP=192.168.200.2 | If you want to use yardstick | | | env prepare command(or | | -e INSTALLER_TYPE=compass | related API) to load the | -| | images that yardstick needs, | +| | images that Yardstick needs, | | | these parameters should be | | | provided. | | | The INSTALLER_IP and | | | INSTALLER_TYPE are depending | -| | on your OpenStack installer, | -| | currently apex, compass, | -| | fuel and joid are supported. | +| | on your OpenStack installer. | +| | Currently Apex, Compass, | +| | Fuel and Joid are supported. | | | If you use other installers, | | | such as devstack, these | | | parameters can be ignores. | +----------------------------------------------+------------------------------+ | -p 8888:5000 | If you want to call | -| | yardstick API out of | -| | yardstick container, this | +| | Yardstick API out of | +| | Yardstick container, this | | | parameter is needed. | +----------------------------------------------+------------------------------+ | -v /var/run/docker.sock:/var/run/docker.sock | If you want to use yardstick | | | env grafana/influxdb to | | | create a grafana/influxdb | -| | container out of yardstick | +| | container out of Yardstick | | | container, this parameter is | | | needed. | +----------------------------------------------+------------------------------+ @@ -138,158 +129,92 @@ note: | | defined by the user. | +----------------------------------------------+------------------------------+ -Enter Yardstick container -^^^^^^^^^^^^^^^^^^^^^^^^^ +Configure the Yardstick container environment +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ -:: +There are three ways to configure environments for running Yardstick, which will be shown in the following sections. Before that, enter the Yardstick container:: docker exec -it yardstick /bin/bash -In the container, the Yardstick repository is located in the /home/opnfv/repos -directory. - -In Danube release, we have improved the Yardstick installation steps. 
-Now Yardstick provides a CLI to prepare openstack environment variables and
-load yardstick images::
-
-   yardstick env prepare
+and then configure the Yardstick environment in the Yardstick container.

-If you ues this command. you can skip the following sections about how to
-prepare openstack environment variables, load yardstick images and load
-yardstick flavor manually.
+The first way (**recommended**)
+###################################

+In the Yardstick container, the Yardstick repository is located in the ``/home/opnfv/repos`` directory. Yardstick provides a CLI to prepare OpenStack environment variables and create Yardstick flavor and guest images automatically::

-Installing Yardstick directly in Ubuntu
----------------------------------------
-
-.. _install-framework:
-
-Alternatively you can install Yardstick framework directly in Ubuntu or in an Ubuntu Docker
-image. No matter which way you choose to install Yardstick framework, the
-following installation steps are identical.
-
-If you choose to use the Ubuntu Docker image, You can pull the Ubuntu
-Docker image from Docker hub:
-
-::
-
-   docker pull ubuntu:16.04
-
-
-Installing Yardstick framework
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-Download source code and install Yardstick framework:
-
-::
+   yardstick env prepare

-   git clone https://gerrit.opnfv.org/gerrit/yardstick
-   cd yardstick
-   ./install.sh
+**NOTE**: The above command only works with the four OPNFV installers -- **Apex**, **Compass**, **Fuel** and **Joid**.

-For installing yardstick directly in Ubuntu, the **yardstick env command** is not available.
-You need to prepare openstack environment variables, load yardstick images and load
-yardstick flavor manually.
+The second way
+################

-OpenStack parameters and credentials
-------------------------------------
+Export OpenStack environment variables
+>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>

-Environment variables
-^^^^^^^^^^^^^^^^^^^^^
-Before running Yardstick it is necessary to export OpenStack environment variables
-from the OpenStack *openrc* file (using the ``source`` command) and export the
-external network name ``export EXTERNAL_NETWORK="external-network-name"``,
-the default name for the external network is ``net04_ext``.
+Before running Yardstick it is necessary to export OpenStack environment variables::

-Credential environment variables in the *openrc* file have to include at least:
+   source openrc

-* OS_AUTH_URL
-* OS_USERNAME
-* OS_PASSWORD
-* OS_TENANT_NAME
+Environment variables in the ``openrc`` file have to include at least:

-A sample openrc file may look like this:
+* ``OS_AUTH_URL``
+* ``OS_USERNAME``
+* ``OS_PASSWORD``
+* ``OS_TENANT_NAME``
+* ``EXTERNAL_NETWORK``

-* export OS_PASSWORD=console
-* export OS_TENANT_NAME=admin
-* export OS_AUTH_URL=http://172.16.1.222:35357/v2.0
-* export OS_USERNAME=admin
-* export OS_VOLUME_API_VERSION=2
-* export EXTERNAL_NETWORK=net04_ext
+A sample ``openrc`` file may look like this::

+   export OS_PASSWORD=console
+   export OS_TENANT_NAME=admin
+   export OS_AUTH_URL=http://172.16.1.222:35357/v2.0
+   export OS_USERNAME=admin
+   export OS_VOLUME_API_VERSION=2
+   export EXTERNAL_NETWORK=net04_ext

-Yardstick falvor and guest images
----------------------------------
+Manually create Yardstick flavor and guest images
+>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>

-Before executing Yardstick test cases, make sure that yardstick guest image and
-yardstick flavor are available in OpenStack.
-Detailed steps about creating yardstick flavor and building yardstick-trusty-server
-image can be found below.
+Before executing Yardstick test cases, make sure that the Yardstick flavor and guest image are available in OpenStack. Detailed steps about creating the Yardstick flavor and building the Yardstick guest image can be found below.

-Yardstick-flavor
-^^^^^^^^^^^^^^^^
Most of the sample test cases in Yardstick are using an OpenStack flavor called
-*yardstick-flavor* which deviates from the OpenStack standard m1.tiny flavor by the
-disk size - instead of 1GB it has 3GB. Other parameters are the same as in m1.tiny.
-
-Create yardstick-flavor:
+``yardstick-flavor`` which deviates from the OpenStack standard ``m1.tiny`` flavor by the disk size - instead of 1GB it has 3GB. Other parameters are the same as in ``m1.tiny``.

-::
+Create ``yardstick-flavor``::

   nova flavor-create yardstick-flavor 100 512 3 1

-
-.. _guest-image:
-
-Building a guest image
-^^^^^^^^^^^^^^^^^^^^^^
Most of the sample test cases in Yardstick are using a guest image called
-*yardstick-trusty-server* which deviates from an Ubuntu Cloud Server image
+``yardstick-image`` which deviates from an Ubuntu Cloud Server image
containing all the required tools to run test cases supported by Yardstick.
Yardstick has a tool for building this custom image. It is necessary to have
-sudo rights to use this tool.
+``sudo`` rights to use this tool.

Also you may need install several additional packages to use this tool, by
-follwing the commands below:
-
-::
+following the commands below::

-   apt-get update && apt-get install -y \
-      qemu-utils \
-      kpartx
+   sudo apt-get update && sudo apt-get install -y qemu-utils kpartx

-This image can be built using the following command while in the directory where
-Yardstick is installed (``~/yardstick`` if the framework is installed
-by following the commands above):
-
-::
+This image can be built using the following command in the directory where Yardstick is installed::

-   sudo ./tools/yardstick-img-modify tools/ubuntu-server-cloudimg-modify.sh
+   sudo tools/yardstick-img-modify tools/ubuntu-server-cloudimg-modify.sh

-**Warning:** the script will create files by default in:
-``/tmp/workspace/yardstick`` and the files will be owned by root!
+**Warning:** Before building the guest image inside the Yardstick container, make sure the container was started with the ``--privileged`` option. The script will create files by default in ``/tmp/workspace/yardstick`` and the files will be owned by root!

-If you are building this guest image in inside a docker container make sure the
-container is granted with privilege.
+The created image can be added to OpenStack using the ``glance image-create`` or via the OpenStack Dashboard. Example command is::

-The created image can be added to OpenStack using the ``glance image-create`` or
-via the OpenStack Dashboard.
-
-Example command:
-
-::

   glance --os-image-api-version 1 image-create \
   --name yardstick-image --is-public true \
   --disk-format qcow2 --container-format bare \
   --file /tmp/workspace/yardstick/yardstick-image.img

-Some Yardstick test cases use a Cirros image and a Ubuntu 14.04 image, you can find one at
-http://download.cirros-cloud.net/0.3.5/cirros-0.3.5-x86_64-disk.img, https://cloud-images.ubuntu.com/trusty/current/trusty-server-cloudimg-amd64-disk1.img
+.. _`Cirros 0.3.5`: http://download.cirros-cloud.net/0.3.5/cirros-0.3.5-x86_64-disk.img
+.. _`Ubuntu 14.04`: https://cloud-images.ubuntu.com/trusty/current/trusty-server-cloudimg-amd64-disk1.img

-Add cirros and ubuntu image to OpenStack:
-
-::
+Some Yardstick test cases use a `Cirros 0.3.5`_ image and/or an `Ubuntu 14.04`_ image. Add the Cirros and Ubuntu images to OpenStack::

   openstack image create \
   --disk-format qcow2 \
   --container-format bare \
@@ -303,89 +228,146 @@ Add cirros and ubuntu image to OpenStack:
   --file $ubuntu_image_file \
   Ubuntu-14.04

-Automatic flavor and image creation
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-Yardstick has a script for automatic creating yardstick flavor and building
-guest images. This script is mainly used in CI, but you can still use it in
-your local environment.
+The third way
+################

-Example command:
+Similar to the second way, the first step is also to `Export OpenStack environment variables`_. Then the following steps should be done.

-::
+Automatically create Yardstick flavor and guest images
+>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
+
+Yardstick has a script for automatically creating the Yardstick flavor and building
+Yardstick guest images. This script is mainly used for CI and can also be used in the local environment::

   source $YARDSTICK_REPO_DIR/tests/ci/load_images.sh

-Examples and verifying the install
-----------------------------------
+Delete the Yardstick container
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+If you want to uninstall Yardstick, just delete the Yardstick container::
+
+   docker stop yardstick && docker rm yardstick
+
+
+Install Yardstick directly in Ubuntu
+---------------------------------------
+
+.. _install-framework:
+
+Alternatively you can install Yardstick framework directly in Ubuntu or in an Ubuntu Docker image. No matter which way you choose to install Yardstick, the following installation steps are identical.
+
+If you choose to use the Ubuntu Docker image, you can pull the Ubuntu
+Docker image from Docker hub::
+
+   docker pull ubuntu:16.04
+
+
+Install Yardstick
+^^^^^^^^^^^^^^^^^^^^^
+
+Create a virtual environment::
+
+   virtualenv ~/yardstick_venv
+   source ~/yardstick_venv/bin/activate
+
+Download the source code and install Yardstick from it::
+
+   git clone https://gerrit.opnfv.org/gerrit/yardstick
+   cd yardstick
+   ./install.sh
+
+
+Configure the Yardstick environment (**Todo**)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+For installing Yardstick directly in Ubuntu, the ``yardstick env`` command is not available. You need to prepare OpenStack environment variables and create Yardstick flavor and guest images manually.
+
+
+Uninstall Yardstick
+^^^^^^^^^^^^^^^^^^^^^^
+
+For uninstalling Yardstick, just delete the virtual environment::
+
+   rm -rf ~/yardstick_venv
+
+
+Verify the installation
+-----------------------------

It is recommended to verify that Yardstick was installed successfully
-by executing some simple commands and test samples. Before executing yardstick
-test cases make sure yardstick flavor and building yardstick-trusty-server
-image can be found in glance and openrc file is sourced. Below is an example
-invocation of yardstick help command and ping.py test sample:
-::
+by executing some simple commands and test samples. Before executing Yardstick
+test cases make sure ``yardstick-flavor`` and ``yardstick-image`` can be found in OpenStack and the ``openrc`` file is sourced.
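+
+As a quick sanity check (a sketch using the standard OpenStack CLI; adjust the
+names if you created the flavor or image differently), you can confirm that
+both resources exist::
+
+   openstack flavor list | grep yardstick-flavor
+   openstack image list | grep yardstick-image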
+Below is an example
+invocation of the Yardstick ``help`` command and the ``ping.py`` test sample::

   yardstick -h
   yardstick task start samples/ping.yaml

+**NOTE:** The above commands can be run both in the Yardstick container and directly on the Ubuntu host.
+
Each testing tool supported by Yardstick has a sample configuration file.
-These configuration files can be found in the **samples** directory.
+These configuration files can be found in the ``samples`` directory.

Default location for the output is ``/tmp/yardstick.out``.


-Deploy InfluxDB and Grafana locally
------------------------------------
+Deploy InfluxDB and Grafana using Docker
+-------------------------------------------

-The 'yardstick env' command can also help you to build influxDB and Grafana in
-your local environment.
+Without InfluxDB, Yardstick stores results for running test cases in the file
+``/tmp/yardstick.out``. However, it's inconvenient to retrieve and display
+test results. So we will show how to use InfluxDB to store data and use
+Grafana to display data in the following sections.

-Create InfluxDB container and config with the following command::
+Automatically deploy InfluxDB and Grafana containers (**recommended**)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

-   yardstick env influxdb
+Firstly, enter the Yardstick container::

+   docker exec -it yardstick /bin/bash

-Create Grafana container and config::
+Secondly, create the InfluxDB container and configure it with the following command::

-   yardstick env grafana
+   yardstick env influxdb
+
+Thirdly, create and configure the Grafana container::

-Then you can run a test case and visit http://host_ip:3000(user:admin,passwd:admin) to see the results.
+   yardstick env grafana

-note: Using **yardstick env** command to deploy InfluxDB and Grafana requires
-Jump Server's docker API version => 1.24. You can use the following command to
-check the docker API version:
+Then you can run a test case and visit http://host_ip:3000 (``admin``/``admin``) to see the results.

-::
+**NOTE:** Executing the ``yardstick env`` command to deploy InfluxDB and Grafana requires the Jumphost's docker API version >= 1.24. Run the following command to check the docker API version on the Jumphost::

   docker version
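+
+Once the containers are up and a test case has completed, you can confirm that
+results were written by querying the InfluxDB HTTP API (a sketch, assuming the
+default ``8086`` port mapping shown above; add ``-u root:root`` if
+authentication is enabled)::
+
+   curl -G 'http://localhost:8086/query?db=yardstick' \
+       --data-urlencode 'q=SHOW MEASUREMENTS'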
-The following sections describe how to deploy influxDB and Grafana manually.
+Manually deploy InfluxDB and Grafana containers
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+You could also deploy the influxDB and Grafana containers manually on the Jumphost.
+The following sections show how to do so.

.. pull docker images

Pull docker images
-
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+####################

::

   docker pull tutum/influxdb
   docker pull grafana/grafana

-Run influxdb and config
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-Run influxdb
-::
+Run and configure influxDB
+###############################

+Run influxDB::

   docker run -d --name influxdb \
   -p 8083:8083 -p 8086:8086 --expose 8090 --expose 8099 \
   tutum/influxdb
   docker exec -it influxdb bash

-Config influxdb
-::
+Configure influxDB::

   influx
   >CREATE USER root WITH PASSWORD 'root' WITH ALL PRIVILEGES
   >CREATE DATABASE yardstick;
   >use yardstick;
   >show MEASUREMENTS;

-Run grafana and config
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-Run grafana
-::
+Run and configure Grafana
+###############################

-   docker run -d --name grafana -p 3000:3000 grafana/grafana
+Run Grafana::

-Config grafana
-::
+   docker run -d --name grafana -p 3000:3000 grafana/grafana

-   http://{YOUR_IP_HERE}:3000
-   log on using admin/admin and config database resource to be {YOUR_IP_HERE}:8086
+Log on to http://{YOUR_IP_HERE}:3000 using ``admin``/``admin`` and configure the database resource to be ``{YOUR_IP_HERE}:8086``.

.. image:: images/Grafana_config.png
   :width: 800px
   :alt: Grafana data source configration

-Config yardstick conf
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-cp ./etc/yardstick/yardstick.conf.sample /etc/yardstick/yardstick.conf
+Configure ``yardstick.conf``
+##############################

-vi /etc/yardstick/yardstick.conf
-Config yardstick.conf
-::
+::

+   docker exec -it yardstick /bin/bash
+   cp etc/yardstick/yardstick.conf.sample /etc/yardstick/yardstick.conf
+   vi /etc/yardstick/yardstick.conf
+
+Modify ``yardstick.conf``::

   [DEFAULT]
   debug = True
   dispatcher = influxdb

   [dispatcher_influxdb]
   timeout = 5
   target = http://{YOUR_IP_HERE}:8086
   db_name = yardstick
   username = root
   password = root

-Now you can run yardstick test cases and store the results in influxdb
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+Now you can run Yardstick test cases and store the results in influxDB.
+
+Deploy InfluxDB and Grafana directly in Ubuntu (**Todo**)
+-----------------------------------------------------------
+

Create a test suite for Yardstick
------------------------------------

A test suite in yardstick is a yaml file which include one or more test cases.
-Yardstick is able to support running test suite task, so you can customize you
+Yardstick is able to support running test suite task, so you can customize your
own test suite and run it in one task.

-"tests/opnfv/test_suites" is where yardstick put ci test-suite. A typical test
-suite is like below:
-
-fuel_test_suite.yaml
-
-::
+``tests/opnfv/test_suites`` is the folder where Yardstick puts its CI test suites. A typical test suite is like below (the ``fuel_test_suite.yaml`` example)::

   ---
   # Fuel integration test task suite

   schema: "yardstick:suite:0.1"

   name: "fuel_test_suite"
   test_cases_dir: "samples/"
   test_cases:
   -
     file_name: ping.yaml
   -
     file_name: iperf3.yaml

-As you can see, there are two test cases in fuel_test_suite, the syntax is simple
-here, you must specify the schema and the name, then you just need to list the
-test cases in the tag "test_cases" and also mark their relative directory in the
-tag "test_cases_dir".
-
-Yardstick test suite also support constraints and task args for each test case.
-Here is another sample to show this, which is digested from one big test suite.
+As you can see, there are two test cases in the ``fuel_test_suite.yaml``. The
+``schema`` and the ``name`` must be specified. The test cases should be listed
+via the tag ``test_cases`` and their relative path is also marked via the tag
+``test_cases_dir``.
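+
+As an aside, such a suite can then be run as a single task (a sketch, assuming
+the suite file shown above is saved under ``tests/opnfv/test_suites``)::
+
+   yardstick task start --suite tests/opnfv/test_suites/fuel_test_suite.yaml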
-os-nosdn-nofeature-ha.yaml - -:: +Yardstick test suite also supports constraints and task args for each test +case. Here is another sample (the ``os-nosdn-nofeature-ha.yaml`` example) to +show this, which is digested from one big test suite:: --- @@ -492,9 +468,15 @@ os-nosdn-nofeature-ha.yaml huawei-pod1: '{"pod_info": "etc/yardstick/.../pod.yaml", "host": "node4.LF","target": "node5.LF"}' -As you can see in test case "opnfv_yardstick_tc043.yaml", there are two tags, "constraint" and -"task_args". "constraint" is where you can specify which installer or pod it can be run in -the ci environment. "task_args" is where you can specify the task arguments for each pod. +As you can see in test case ``opnfv_yardstick_tc043.yaml``, there are two +tags, ``constraint`` and ``task_args``. ``constraint`` is to specify which +installer or pod it can be run in the CI environment. ``task_args`` is to +specify the task arguments for each pod. + +All in all, to create a test suite in Yardstick, you just need to create a +yaml file and add test cases, constraint or task arguments if necessary. + + +Proxy Support (**Todo**) +--------------------------- -All in all, to create a test suite in yardstick, you just need to create a suite yaml file -and add test cases and constraint or task arguments if necessary. diff --git a/docs/testing/user/userguide/05-yardstick_plugin.rst b/docs/testing/user/userguide/05-yardstick_plugin.rst index b724b361b..ec0b49ff1 100644 --- a/docs/testing/user/userguide/05-yardstick_plugin.rst +++ b/docs/testing/user/userguide/05-yardstick_plugin.rst @@ -4,18 +4,19 @@ .. (c) OPNFV, Ericsson AB, Huawei Technologies Co.,Ltd and others. =================================== -Installing a plug-in into yardstick +Installing a plug-in into Yardstick =================================== + Abstract ======== -Yardstick currently provides a ``plugin`` CLI command to support integration -with other OPNFV testing projects. Below is an example invocation of yardstick -plugin command and Storperf plug-in sample. +Yardstick provides a ``plugin`` CLI command to support integration with other +OPNFV testing projects. Below is an example invocation of Yardstick plugin +command and Storperf plug-in sample. -Installing Storperf into yardstick +Installing Storperf into Yardstick ================================== Storperf is delivered as a Docker container from @@ -55,36 +56,49 @@ environment and other dependencies: should include credential environment variables at least: * OS_AUTH_URL +* OS_USERNAME +* OS_PASSWORD * OS_TENANT_ID * OS_TENANT_NAME * OS_PROJECT_NAME -* OS_USERNAME -* OS_PASSWORD -* OS_REGION_NAME +* OS_PROJECT_ID +* OS_USER_DOMAIN_ID + +*Yardstick* has a "prepare_storperf_admin-rc.sh" script which can be used to +generate the "storperf_admin-rc" file, this script is located at +test/ci/prepare_storperf_admin-rc.sh -For this storperf_admin-rc file, during environment preparation a "prepare_storperf_admin-rc.sh" -script can be used to generate it. :: #!/bin/bash + # Prepare storperf_admin-rc for StorPerf. 
    AUTH_URL=${OS_AUTH_URL}
    USERNAME=${OS_USERNAME:-admin}
    PASSWORD=${OS_PASSWORD:-console}
+
+    TENANT_NAME=${OS_TENANT_NAME:-admin}
-    VOLUME_API_VERSION=${OS_VOLUME_API_VERSION:-2}
+    TENANT_ID=`openstack project show admin|grep '\bid\b' |awk -F '|' '{print $3}'|sed -e 's/^[[:space:]]*//'`
    PROJECT_NAME=${OS_PROJECT_NAME:-$TENANT_NAME}
-    TENANT_ID=`keystone tenant-get admin|grep 'id'|awk -F '|' '{print $3}'|sed -e 's/^[[:space:]]*//'`
+    PROJECT_ID=`openstack project show admin|grep '\bid\b' |awk -F '|' '{print $3}'|sed -e 's/^[[:space:]]*//'`
+    USER_DOMAIN_ID=${OS_USER_DOMAIN_ID:-default}
+
    rm -f ~/storperf_admin-rc
    touch ~/storperf_admin-rc
+
    echo "OS_AUTH_URL="$AUTH_URL >> ~/storperf_admin-rc
    echo "OS_USERNAME="$USERNAME >> ~/storperf_admin-rc
    echo "OS_PASSWORD="$PASSWORD >> ~/storperf_admin-rc
-    echo "OS_TENANT_NAME="$TENANT_NAME >> ~/storperf_admin-rc
-    echo "OS_VOLUME_API_VERSION="$VOLUME_API_VERSION >> ~/storperf_admin-rc
    echo "OS_PROJECT_NAME="$PROJECT_NAME >> ~/storperf_admin-rc
+    echo "OS_PROJECT_ID="$PROJECT_ID >> ~/storperf_admin-rc
+    echo "OS_TENANT_NAME="$TENANT_NAME >> ~/storperf_admin-rc
    echo "OS_TENANT_ID="$TENANT_ID >> ~/storperf_admin-rc
+    echo "OS_USER_DOMAIN_ID="$USER_DOMAIN_ID >> ~/storperf_admin-rc

-The generated "storperf_admin-rc" file will be stored under the root directory. If you installed Yardstick using Docker, this file will be located in the container. You may need to copy it to the root directory of the deployed host.
+
+The generated "storperf_admin-rc" file will be stored in the root directory. If
+you installed *Yardstick* using Docker, this file will be located in the
+container. You may need to copy it to the root directory of the Storperf
+deployed host.

Step 1: Plug-in configuration file preparation
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
@@ -114,21 +128,21 @@ in my local environment.
Step 2: Plug-in install/remove scripts preparation
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>

-Under "yardstick/resource/scripts directory", there are two folders: a "install"
-folder and a "remove" folder. You need to store the plug-in install/remove script
-in these two folders respectively.
+In the "yardstick/resource/scripts" directory, there are two folders: an "install"
+folder and a "remove" folder. You need to store the plug-in install/remove
+scripts in these two folders respectively.

-The detailed installation or remove operation should de defined in these two scripts.
-The name of both install and remove scripts should match the plugin-in name that you
-specified in the plug-in configuration file.
-For example, the install and remove scripts for Storperf are both named to "storperf.bash".
+The detailed install or remove operations should be defined in these two
+scripts. The name of both install and remove scripts should match the plug-in
+name that you specified in the plug-in configuration file.
+For example, the install and remove scripts for Storperf are both named
+"storperf.bash".
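+
+Putting steps 1 and 2 together, a minimal Storperf plug-in configuration file
+might look like the following sketch (the deployment values are illustrative
+placeholders for your own environment)::
+
+    ---
+    # StorPerf plug-in configuration file (sample)
+
+    schema: "yardstick:plugin:0.1"
+
+    plugins:
+      name: storperf
+
+    deployment:
+      ip: 192.168.23.2
+      user: root
+      password: root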
Step 3: Install and remove Storperf
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>

-To install Storperf, simply execute the following command
-::
+To install Storperf, simply execute the following command::

   # Install Storperf
   yardstick plugin install plugin/storperf.yaml

removing Storperf from yardstick
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

-To remove Storperf, simply execute the following command
-::
+To remove Storperf, simply execute the following command::

   # Remove Storperf
   yardstick plugin remove plugin/storperf.yaml

-What yardstick plugin command does is using the username and password to log into the deployment target and then execute the corresponding install or remove script.
+What the ``yardstick plugin`` command does is use the username and password to
+log into the deployment target and then execute the corresponding install or
+remove script.
diff --git a/docs/testing/user/userguide/06-result-store-InfluxDB.rst b/docs/testing/user/userguide/06-result-store-InfluxDB.rst
index a0bb48a80..747927889 100644
--- a/docs/testing/user/userguide/06-result-store-InfluxDB.rst
+++ b/docs/testing/user/userguide/06-result-store-InfluxDB.rst
@@ -31,9 +31,9 @@ Store Storperf Test Results into Community's InfluxDB
As shown in Framework_, there are two ways to store Storperf test results
into community's InfluxDB:

-1. Yardstick asks Storperf to run the test case. After the test case is
-   completed, Yardstick reads test results via ReST API from Storperf and
-   posts test data to the influxDB.
+1. Yardstick executes the Storperf test case (TC074), posting a test job to the
+   Storperf container via the ReST API. After the test job is completed,
+   Yardstick reads test results via the ReST API from Storperf and posts the
+   test data to the influxDB.

2. Additionally, Storperf can run tests by itself and post the test result
   directly to the InfluxDB. The method for posting data directly to influxDB
diff --git a/docs/testing/user/userguide/08-api.rst b/docs/testing/user/userguide/08-api.rst
new file mode 100644
index 000000000..1d9ea6d64
--- /dev/null
+++ b/docs/testing/user/userguide/08-api.rst
@@ -0,0 +1,177 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, Huawei Technologies Co.,Ltd and others.
+
+Yardstick Restful API
+======================
+
+
+Abstract
+--------
+
+Yardstick supports a RESTful API since Danube.
+
+
+Available API
+-------------
+
+/yardstick/env/action
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Description: This API is used to do some work related to the environment. For now, we support:
+
+1. Prepare the Yardstick environment (including fetching the openrc file, getting the external network and loading images)
+2. Start an InfluxDB docker container and configure Yardstick to output to InfluxDB.
+3. Start a Grafana docker container and configure it with the InfluxDB.
+
+Which action is executed depends on the parameters.
+
+
+Method: POST
+
+
+Prepare Yardstick Environment
+Example::
+
+    {
+        'action': 'prepareYardstickEnv'
+    }
+
+This is an asynchronous API. You need to call the /yardstick/asynctask API to get the task result.
+
+
+Start and Config InfluxDB docker container
+Example::
+
+    {
+        'action': 'createInfluxDBContainer'
+    }
+
+This is an asynchronous API. You need to call the /yardstick/asynctask API to get the task result.
+
+
+Start and Config Grafana docker container
+Example::
+
+    {
+        'action': 'createGrafanaContainer'
+    }
+
+This is an asynchronous API. You need to call the /yardstick/asynctask API to get the task result.
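+
+As an illustration (a sketch -- it assumes the container was started with the
+``-p 8888:5000`` port mapping shown in the installation chapter), an
+environment action can be triggered with ``curl``::
+
+    curl -X POST -H "Content-Type: application/json" \
+        -d '{"action": "prepareYardstickEnv"}' \
+        http://localhost:8888/yardstick/env/action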
+
+
+/yardstick/asynctask
+^^^^^^^^^^^^^^^^^^^^
+
+Description: This API is used to get the status of an asynchronous task
+
+
+Method: GET
+
+
+Get the status of an asynchronous task
+Example::
+
+    http://localhost:8888/yardstick/asynctask?task_id=3f3f5e03-972a-4847-a5f8-154f1b31db8c
+
+The returned status will be 0 (running), 1 (finished) or 2 (failed).
+
+
+/yardstick/testcases
+^^^^^^^^^^^^^^^^^^^^
+
+Description: This API is used to list all release test cases currently available in Yardstick.
+
+
+Method: GET
+
+
+Get a list of release test cases
+Example::
+
+    http://localhost:8888/yardstick/testcases
+
+
+/yardstick/testcases/release/action
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Description: This API is used to run a yardstick release test case.
+
+
+Method: POST
+
+
+Run a release test case
+Example::
+
+    {
+        'action': 'runTestCase',
+        'args': {
+            'opts': {},
+            'testcase': 'tc002'
+        }
+    }
+
+This is an asynchronous API. You need to call /yardstick/results to get the result.
+
+
+/yardstick/testcases/samples/action
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Description: This API is used to run a yardstick sample test case.
+
+
+Method: POST
+
+
+Run a sample test case
+Example::
+
+    {
+        'action': 'runTestCase',
+        'args': {
+            'opts': {},
+            'testcase': 'ping'
+        }
+    }
+
+This is an asynchronous API. You need to call /yardstick/results to get the result.
+
+
+/yardstick/testsuites/action
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Description: This API is used to run a yardstick test suite.
+
+
+Method: POST
+
+
+Run a test suite
+Example::
+
+    {
+        'action': 'runTestSuite',
+        'args': {
+            'opts': {},
+            'testcase': 'smoke'
+        }
+    }
+
+This is an asynchronous API. You need to call /yardstick/results to get the result.
+
+
+/yardstick/results
+^^^^^^^^^^^^^^^^^^
+
+
+Description: This API is used to get the test results of a certain task. If you call the /yardstick/testcases/samples/action API, it will return a task id. You can use the returned task id to get the results by using this API.
+
+
+Get test results of one task
+Example::
+
+    http://localhost:8888/yardstick/results?task_id=3f3f5e03-972a-4847-a5f8-154f1b31db8c
+
+This API will return a list of test case results.
diff --git a/docs/testing/user/userguide/08-vtc-overview.rst b/docs/testing/user/userguide/08-vtc-overview.rst
deleted file mode 100644
index f30bf7cc5..000000000
--- a/docs/testing/user/userguide/08-vtc-overview.rst
+++ /dev/null
@@ -1,125 +0,0 @@
-.. This work is licensed under a Creative Commons Attribution 4.0 International
-.. License.
-.. http://creativecommons.org/licenses/by/4.0
-.. (c) OPNFV, National Center of Scientific Research "Demokritos" and others.
-
-==========================
-Virtual Traffic Classifier
-==========================
-
-Abstract
-========
-
-.. _TNOVA: http://www.t-nova.eu/
-.. _TNOVAresults: http://www.t-nova.eu/results/
-.. _Yardstick: https://wiki.opnfv.org/yardstick
-
-This chapter provides an overview of the virtual Traffic Classifier, a
-contribution to OPNFV Yardstick_ from the EU Project TNOVA_.
-Additional documentation is available in TNOVAresults_.
-
-Overview
-========
-
-The virtual Traffic Classifier (:term:`VTC`) :term:`VNF`, comprises of a
-Virtual Network Function Component (:term:`VNFC`). The :term:`VNFC` contains
-both the Traffic Inspection module, and the Traffic forwarding module, needed
-to run the :term:`VNF`.
The exploitation of Deep Packet Inspection -(:term:`DPI`) methods for traffic classification is built around two basic -assumptions: - -* third parties unaffiliated with either source or recipient are able to -inspect each IP packet’s payload - -* the classifier knows the relevant syntax of each application’s packet -payloads (protocol signatures, data patterns, etc.). - -The proposed :term:`DPI` based approach will only use an indicative, small -number of the initial packets from each flow in order to identify the content -and not inspect each packet. - -In this respect it follows the Packet Based per Flow State (term:`PBFS`). This -method uses a table to track each session based on the 5-tuples (src address, -dest address, src port,dest port, transport protocol) that is maintained for -each flow. - -Concepts -======== - -* *Traffic Inspection*: The process of packet analysis and application -identification of network traffic that passes through the :term:`VTC`. - -* *Traffic Forwarding*: The process of packet forwarding from an incoming -network interface to a pre-defined outgoing network interface. - -* *Traffic Rule Application*: The process of packet tagging, based on a -predefined set of rules. Packet tagging may include e.g. Type of Service -(:term:`ToS`) field modification. - -Architecture -============ - -The Traffic Inspection module is the most computationally intensive component -of the :term:`VNF`. It implements filtering and packet matching algorithms in -order to support the enhanced traffic forwarding capability of the :term:`VNF`. -The component supports a flow table (exploiting hashing algorithms for fast -indexing of flows) and an inspection engine for traffic classification. - -The implementation used for these experiments exploits the nDPI library. -The packet capturing mechanism is implemented using libpcap. When the -:term:`DPI` engine identifies a new flow, the flow register is updated with the -appropriate information and transmitted across the Traffic Forwarding module, -which then applies any required policy updates. - -The Traffic Forwarding moudle is responsible for routing and packet forwarding. -It accepts incoming network traffic, consults the flow table for classification -information for each incoming flow and then applies pre-defined policies -marking e.g. :term:`ToS`/Differentiated Services Code Point (:term:`DSCP`) -multimedia traffic for Quality of Service (:term:`QoS`) enablement on the -forwarded traffic. -It is assumed that the traffic is forwarded using the default policy until it -is identified and new policies are enforced. - -The expected response delay is considered to be negligible, as only a small -number of packets are required to identify each flow. - -Graphical Overview -================== - -.. 
code-block:: console - - +----------------------------+ - | | - | Virtual Traffic Classifier | - | | - | Analysing/Forwarding | - | ------------> | - | ethA ethB | - | | - +----------------------------+ - | ^ - | | - v | - +----------------------------+ - | | - | Virtual Switch | - | | - +----------------------------+ - -Install -======= - -run the build.sh with root privileges - -Run -=== - -:: - - sudo ./pfbridge -a eth1 -b eth2 - - -Development Environment -======================= - -Ubuntu 14.04 Ubuntu 16.04 diff --git a/docs/testing/user/userguide/09-apexlake_installation.rst b/docs/testing/user/userguide/09-apexlake_installation.rst deleted file mode 100644 index 0d8ef143f..000000000 --- a/docs/testing/user/userguide/09-apexlake_installation.rst +++ /dev/null @@ -1,302 +0,0 @@ -.. This work is licensed under a Creative Commons Attribution 4.0 International -.. License. -.. http://creativecommons.org/licenses/by/4.0 -.. (c) OPNFV, Intel Corporation and others. - - -.. _DPDK: http://dpdk.org/doc/nics -.. _DPDK-pktgen: https://github.com/Pktgen/Pktgen-DPDK/ -.. _SRIOV: https://wiki.openstack.org/wiki/SR-IOV-Passthrough-For-Networking -.. _PORTSEC: https://wiki.openstack.org/wiki/Neutron/ML2PortSecurityExtensionDriver -.. _here: https://wiki.opnfv.org/vtc - - -============================ -Apexlake Installation Guide -============================ - -Abstract --------- - -ApexLake is a framework that provides automatic execution of experiments and -related data collection to enable a user validate infrastructure from the -perspective of a Virtual Network Function (:term:`VNF`). - -In the context of Yardstick, a virtual Traffic Classifier (:term:`VTC`) network -function is utilized. - - -Framework Hardware Dependencies -=============================== - -In order to run the framework there are some hardware related dependencies for -ApexLake. - -The framework needs to be installed on the same physical node where DPDK-pktgen_ -is installed. - -The installation requires the physical node hosting the packet generator must -have 2 NICs which are DPDK_ compatible. - -The 2 NICs will be connected to the switch where the OpenStack VM -network is managed. - -The switch used must support multicast traffic and :term:`IGMP` snooping. -Further details about the configuration are provided at the following here_. - -The corresponding ports to which the cables are connected need to be configured -as VLAN trunks using two of the VLAN IDs available for Neutron. -Note the VLAN IDs used as they will be required in later configuration steps. - - -Framework Software Dependencies -=============================== -Before starting the framework, a number of dependencies must first be installed. -The following describes the set of instructions to be executed via the Linux -shell in order to install and configure the required dependencies. - -1. Install Dependencies. - -To support the framework dependencies the following packages must be installed. -The example provided is based on Ubuntu and needs to be executed in root mode. - -:: - - apt-get install python-dev - apt-get install python-pip - apt-get install python-mock - apt-get install tcpreplay - apt-get install libpcap-dev - -2. Source OpenStack openrc file. - -:: - - source openrc - -3. Configure Openstack Neutron - -In order to support traffic generation and management by the virtual -Traffic Classifier, the configuration of the port security driver -extension is required for Neutron. 
- -For further details please follow the following link: PORTSEC_ -This step can be skipped in case the target OpenStack is Juno or Kilo release, -but it is required to support Liberty. -It is therefore required to indicate the release version in the configuration -file located in ./yardstick/vTC/apexlake/apexlake.conf - - -4. Create Two Networks based on VLANs in Neutron. - -To enable network communications between the packet generator and the compute -node, two networks must be created via Neutron and mapped to the VLAN IDs -that were previously used in the configuration of the physical switch. -The following shows the typical set of commands required to configure Neutron -correctly. -The physical switches need to be configured accordingly. - -:: - - VLAN_1=2032 - VLAN_2=2033 - PHYSNET=physnet2 - neutron net-create apexlake_inbound_network \ - --provider:network_type vlan \ - --provider:segmentation_id $VLAN_1 \ - --provider:physical_network $PHYSNET - - neutron subnet-create apexlake_inbound_network \ - 192.168.0.0/24 --name apexlake_inbound_subnet - - neutron net-create apexlake_outbound_network \ - --provider:network_type vlan \ - --provider:segmentation_id $VLAN_2 \ - --provider:physical_network $PHYSNET - - neutron subnet-create apexlake_outbound_network 192.168.1.0/24 \ - --name apexlake_outbound_subnet - - -5. Download Ubuntu Cloud Image and load it on Glance - -The virtual Traffic Classifier is supported on top of Ubuntu 14.04 cloud image. -The image can be downloaded on the local machine and loaded on Glance -using the following commands: - -:: - - wget cloud-images.ubuntu.com/trusty/current/trusty-server-cloudimg-amd64-disk1.img - glance image-create \ - --name ubuntu1404 \ - --is-public true \ - --disk-format qcow \ - --container-format bare \ - --file trusty-server-cloudimg-amd64-disk1.img - - - -6. Configure the Test Cases - -The VLAN tags must also be included in the test case Yardstick yaml file -as parameters for the following test cases: - - * :doc:`opnfv_yardstick_tc006` - - * :doc:`opnfv_yardstick_tc007` - - * :doc:`opnfv_yardstick_tc020` - - * :doc:`opnfv_yardstick_tc021` - - -Install and Configure DPDK Pktgen -+++++++++++++++++++++++++++++++++ - -Execution of the framework is based on DPDK Pktgen. -If DPDK Pktgen has not installed, it is necessary to download, install, compile -and configure it. -The user can create a directory and download the dpdk packet generator source -code: - -:: - - cd experimental_framework/libraries - mkdir dpdk_pktgen - git clone https://github.com/pktgen/Pktgen-DPDK.git - -For instructions on the installation and configuration of DPDK and DPDK Pktgen -please follow the official DPDK Pktgen README file. -Once the installation is completed, it is necessary to load the DPDK kernel -driver, as follow: - -:: - - insmod uio - insmod DPDK_DIR/x86_64-native-linuxapp-gcc/kmod/igb_uio.ko - -It is necessary to set the configuration file to support the desired Pktgen -configuration. -A description of the required configuration parameters and supporting examples -is provided in the following: - -:: - - [PacketGen] - packet_generator = dpdk_pktgen - - # This is the directory where the packet generator is installed - # (if the user previously installed dpdk-pktgen, - # it is required to provide the director where it is installed). 
- pktgen_directory = /home/user/software/dpdk_pktgen/dpdk/examples/pktgen/ - - # This is the directory where DPDK is installed - dpdk_directory = /home/user/apexlake/experimental_framework/libraries/Pktgen-DPDK/dpdk/ - - # Name of the dpdk-pktgen program that starts the packet generator - program_name = app/app/x86_64-native-linuxapp-gcc/pktgen - - # DPDK coremask (see DPDK-Pktgen readme) - coremask = 1f - - # DPDK memory channels (see DPDK-Pktgen readme) - memory_channels = 3 - - # Name of the interface of the pktgen to be used to send traffic (vlan_sender) - name_if_1 = p1p1 - - # Name of the interface of the pktgen to be used to receive traffic (vlan_receiver) - name_if_2 = p1p2 - - # PCI bus address correspondent to if_1 - bus_slot_nic_1 = 01:00.0 - - # PCI bus address correspondent to if_2 - bus_slot_nic_2 = 01:00.1 - - -To find the parameters related to names of the NICs and the addresses of the PCI buses -the user may find it useful to run the :term:`DPDK` tool nic_bind as follows: - -:: - - DPDK_DIR/tools/dpdk_nic_bind.py --status - -Lists the NICs available on the system, and shows the available drivers and bus addresses for each interface. -Please make sure to select NICs which are :term:`DPDK` compatible. - -Installation and Configuration of smcroute -++++++++++++++++++++++++++++++++++++++++++ - -The user is required to install smcroute which is used by the framework to -support multicast communications. - -The following is the list of commands required to download and install smroute. - -:: - - cd ~ - git clone https://github.com/troglobit/smcroute.git - cd smcroute - git reset --hard c3f5c56 - sed -i 's/aclocal-1.11/aclocal/g' ./autogen.sh - sed -i 's/automake-1.11/automake/g' ./autogen.sh - ./autogen.sh - ./configure - make - sudo make install - cd .. - -It is required to do the reset to the specified commit ID. -It is also requires the creation a configuration file using the following -command: - -:: - - SMCROUTE_NIC=(name of the nic) - -where name of the nic is the name used previously for the variable "name_if_2". -For example: - -:: - - SMCROUTE_NIC=p1p2 - -Then create the smcroute configuration file /etc/smcroute.conf - -:: - - echo mgroup from $SMCROUTE_NIC group 224.192.16.1 > /etc/smcroute.conf - - -At the end of this procedure it will be necessary to perform the following -actions to add the user to the sudoers: - -:: - - adduser USERNAME sudo - echo "user ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers - - -Experiment using SR-IOV Configuration on the Compute Node -+++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - -To enable :term:`SR-IOV` interfaces on the physical NIC of the compute node, a -compatible NIC is required. -NIC configuration depends on model and vendor. After proper configuration to -support :term:`SR-IOV`, a proper configuration of OpenStack is required. -For further information, please refer to the SRIOV_ configuration guide - -Finalize installation the framework on the system -================================================= - -The installation of the framework on the system requires the setup of the project. -After entering into the apexlake directory, it is sufficient to run the following -command. - -:: - - python setup.py install - -Since some elements are copied into the /tmp directory (see configuration file) -it could be necessary to repeat this step after a reboot of the host. 
diff --git a/docs/testing/user/userguide/09-vtc-overview.rst b/docs/testing/user/userguide/09-vtc-overview.rst
new file mode 100644
index 000000000..8ed17873d
--- /dev/null
+++ b/docs/testing/user/userguide/09-vtc-overview.rst
@@ -0,0 +1,128 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, National Center of Scientific Research "Demokritos" and others.
+
+==========================
+Virtual Traffic Classifier
+==========================
+
+Abstract
+========
+
+.. _TNOVA: http://www.t-nova.eu/
+.. _TNOVAresults: http://www.t-nova.eu/results/
+.. _Yardstick: https://wiki.opnfv.org/yardstick
+
+This chapter provides an overview of the virtual Traffic Classifier, a
+contribution to OPNFV Yardstick_ from the EU Project TNOVA_.
+Additional documentation is available in TNOVAresults_.
+
+Overview
+========
+
+The virtual Traffic Classifier (:term:`VTC`) :term:`VNF` comprises a
+Virtual Network Function Component (:term:`VNFC`). The :term:`VNFC` contains
+both the Traffic Inspection module, and the Traffic forwarding module, needed
+to run the :term:`VNF`. The exploitation of Deep Packet Inspection
+(:term:`DPI`) methods for traffic classification is built around two basic
+assumptions:
+
+* third parties unaffiliated with either source or recipient are able to
+  inspect each IP packet’s payload
+
+* the classifier knows the relevant syntax of each application’s packet
+  payloads (protocol signatures, data patterns, etc.).
+
+The proposed :term:`DPI` based approach will only use an indicative, small
+number of the initial packets from each flow in order to identify the content
+and not inspect each packet.
+
+In this respect it follows the Packet Based per Flow State (:term:`PBFS`). This
+method uses a table to track each session based on the 5-tuples (src address,
+dest address, src port, dest port, transport protocol) that is maintained for
+each flow.
+
+Concepts
+========
+
+* *Traffic Inspection*: The process of packet analysis and application
+  identification of network traffic that passes through the :term:`VTC`.
+
+* *Traffic Forwarding*: The process of packet forwarding from an incoming
+  network interface to a pre-defined outgoing network interface.
+
+* *Traffic Rule Application*: The process of packet tagging, based on a
+  predefined set of rules. Packet tagging may include e.g. Type of Service
+  (:term:`ToS`) field modification.
+
+Architecture
+============
+
+The Traffic Inspection module is the most computationally intensive component
+of the :term:`VNF`. It implements filtering and packet matching algorithms in
+order to support the enhanced traffic forwarding capability of the :term:`VNF`.
+The component supports a flow table (exploiting hashing algorithms for fast
+indexing of flows) and an inspection engine for traffic classification.
+
+The implementation used for these experiments exploits the nDPI library.
+The packet capturing mechanism is implemented using libpcap. When the
+:term:`DPI` engine identifies a new flow, the flow register is updated with the
+appropriate information and transmitted across the Traffic Forwarding module,
+which then applies any required policy updates.
+
+The Traffic Forwarding module is responsible for routing and packet forwarding.
+It accepts incoming network traffic, consults the flow table for classification
+information for each incoming flow and then applies pre-defined policies
:term:`ToS`/Differentiated Services Code Point (:term:`DSCP`)
+multimedia traffic for Quality of Service (:term:`QoS`) enablement on the
+forwarded traffic.
+It is assumed that the traffic is forwarded using the default policy until it
+is identified and new policies are enforced.
+
+The expected response delay is considered to be negligible, as only a small
+number of packets are required to identify each flow.
+
+Graphical Overview
+==================
+
+.. code-block:: console
+
+    +----------------------------+
+    |                            |
+    | Virtual Traffic Classifier |
+    |                            |
+    |    Analysing/Forwarding    |
+    |        ------------>       |
+    |   ethA              ethB   |
+    |                            |
+    +----------------------------+
+         |                  ^
+         |                  |
+         v                  |
+    +----------------------------+
+    |                            |
+    |       Virtual Switch       |
+    |                            |
+    +----------------------------+
+
+Install
+=======
+
+Run the vTC/build.sh script with root privileges.
+
+Run
+===
+
+::
+
+    sudo ./pfbridge -a eth1 -b eth2
+
+
+.. note:: The Virtual Traffic Classifier is not supported in the OPNFV Danube release.
+
+
+Development Environment
+=======================
+
+Ubuntu 14.04, Ubuntu 16.04
diff --git a/docs/testing/user/userguide/10-apexlake_api.rst b/docs/testing/user/userguide/10-apexlake_api.rst
deleted file mode 100644
index 35a1dbe3e..000000000
--- a/docs/testing/user/userguide/10-apexlake_api.rst
+++ /dev/null
@@ -1,89 +0,0 @@
-.. This work is licensed under a Creative Commons Attribution 4.0 International
-.. License.
-.. http://creativecommons.org/licenses/by/4.0
-.. (c) OPNFV, Intel Corporation and others.
-
-
-=================================
-Apexlake API Interface Definition
-=================================
-
-Abstract
---------
-
-The API interface provided by the framework to enable the execution of test
-cases is defined as follows.
-
-
-init
-----
-
-**static init()**
-
-    Initializes the Framework
-
-    **Returns** None
-
-
-execute_framework
------------------
-
-**static execute_framework** (test_cases,
-
-                              iterations,
-
-                              heat_template,
-
-                              heat_template_parameters,
-
-                              deployment_configuration,
-
-                              openstack_credentials)
-
-    Executes the framework according the specified inputs
-
-    **Parameters**
-
-        - **test_cases**
-
-            Test cases to be run with the workload (dict() of dict())
-
-            Example:
-                test_case = dict()
-
-                test_case[’name’] = ‘module.Class’
-
-                test_case[’params’] = dict()
-
-                test_case[’params’][’throughput’] = ‘1’
-
-                test_case[’params’][’vlan_sender’] = ‘1000’
-
-                test_case[’params’][’vlan_receiver’] = ‘1001’
-
-                test_cases = [test_case]
-
-        - **iterations**
-            Number of test cycles to be executed (int)
-
-        - **heat_template**
-            (string) File name of the heat template corresponding to the workload to be deployed.
-            It contains the parameters to be evaluated in the form of #parameter_name.
-            (See heat_templates/vTC.yaml as example).
-
-        - **heat_template_parameters**
-            (dict) Parameters to be provided as input to the
-            heat template. See http://docs.openstack.org/developer/heat/
-            template_guide/hot_guide.html section “Template input parameters” for further info.
-
-        - **deployment_configuration**
-            ( dict[string] = list(strings) ) ) Dictionary of parameters
-            representing the deployment configuration of the workload.
-
-            The key is a string corresponding to the name of the parameter,
-            the value is a list of strings representing the value to be
-            assumed by a specific param. The parameters are user defined:
-            they have to correspond to the place holders (#parameter_name)
-            specified in the heat template.
-
-    **Returns** dict() containing results
diff --git a/docs/testing/user/userguide/10-apexlake_installation.rst b/docs/testing/user/userguide/10-apexlake_installation.rst
new file mode 100644
index 000000000..0d8ef143f
--- /dev/null
+++ b/docs/testing/user/userguide/10-apexlake_installation.rst
@@ -0,0 +1,302 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, Intel Corporation and others.
+
+
+.. _DPDK: http://dpdk.org/doc/nics
+.. _DPDK-pktgen: https://github.com/Pktgen/Pktgen-DPDK/
+.. _SRIOV: https://wiki.openstack.org/wiki/SR-IOV-Passthrough-For-Networking
+.. _PORTSEC: https://wiki.openstack.org/wiki/Neutron/ML2PortSecurityExtensionDriver
+.. _here: https://wiki.opnfv.org/vtc
+
+
+============================
+Apexlake Installation Guide
+============================
+
+Abstract
+--------
+
+ApexLake is a framework that provides automatic execution of experiments and
+related data collection to enable a user to validate infrastructure from the
+perspective of a Virtual Network Function (:term:`VNF`).
+
+In the context of Yardstick, a virtual Traffic Classifier (:term:`VTC`) network
+function is utilized.
+
+
+Framework Hardware Dependencies
+===============================
+
+In order to run the framework there are some hardware related dependencies for
+ApexLake.
+
+The framework needs to be installed on the same physical node where DPDK-pktgen_
+is installed.
+
+The installation requires that the physical node hosting the packet generator
+has 2 NICs which are DPDK_ compatible.
+
+The 2 NICs will be connected to the switch where the OpenStack VM
+network is managed.
+
+The switch used must support multicast traffic and :term:`IGMP` snooping.
+Further details about the configuration are provided here_.
+
+The corresponding ports to which the cables are connected need to be configured
+as VLAN trunks using two of the VLAN IDs available for Neutron.
+Note the VLAN IDs used as they will be required in later configuration steps.
+
+
+Framework Software Dependencies
+===============================
+Before starting the framework, a number of dependencies must first be installed.
+The following describes the set of instructions to be executed via the Linux
+shell in order to install and configure the required dependencies.
+
+1. Install Dependencies.
+
+To support the framework dependencies the following packages must be installed.
+The example provided is based on Ubuntu and needs to be executed in root mode.
+
+::
+
+    apt-get install python-dev
+    apt-get install python-pip
+    apt-get install python-mock
+    apt-get install tcpreplay
+    apt-get install libpcap-dev
+
+2. Source the OpenStack openrc file.
+
+::
+
+    source openrc
+
+3. Configure OpenStack Neutron.
+
+In order to support traffic generation and management by the virtual
+Traffic Classifier, the configuration of the port security driver
+extension is required for Neutron.
+
+For further details please refer to the following link: PORTSEC_
+This step can be skipped in case the target OpenStack is a Juno or Kilo release,
+but it is required to support Liberty.
+It is therefore required to indicate the release version in the configuration
+file located in ./yardstick/vTC/apexlake/apexlake.conf
+
+
+4. Create Two Networks based on VLANs in Neutron.
+
+To enable network communications between the packet generator and the compute
+node, two networks must be created via Neutron and mapped to the VLAN IDs
+that were previously used in the configuration of the physical switch.
+The following shows the typical set of commands required to configure Neutron
+correctly.
+The physical switches need to be configured accordingly.
+
+::
+
+    VLAN_1=2032
+    VLAN_2=2033
+    PHYSNET=physnet2
+    neutron net-create apexlake_inbound_network \
+            --provider:network_type vlan \
+            --provider:segmentation_id $VLAN_1 \
+            --provider:physical_network $PHYSNET
+
+    neutron subnet-create apexlake_inbound_network \
+            192.168.0.0/24 --name apexlake_inbound_subnet
+
+    neutron net-create apexlake_outbound_network \
+            --provider:network_type vlan \
+            --provider:segmentation_id $VLAN_2 \
+            --provider:physical_network $PHYSNET
+
+    neutron subnet-create apexlake_outbound_network 192.168.1.0/24 \
+            --name apexlake_outbound_subnet
+
+
+5. Download the Ubuntu Cloud Image and load it on Glance.
+
+The virtual Traffic Classifier is supported on top of the Ubuntu 14.04 cloud image.
+The image can be downloaded to the local machine and loaded on Glance
+using the following commands:
+
+::
+
+    wget cloud-images.ubuntu.com/trusty/current/trusty-server-cloudimg-amd64-disk1.img
+    glance image-create \
+            --name ubuntu1404 \
+            --is-public true \
+            --disk-format qcow2 \
+            --container-format bare \
+            --file trusty-server-cloudimg-amd64-disk1.img
+
+
+
+6. Configure the Test Cases.
+
+The VLAN tags must also be included in the test case Yardstick yaml file
+as parameters for the following test cases:
+
+    * :doc:`opnfv_yardstick_tc006`
+
+    * :doc:`opnfv_yardstick_tc007`
+
+    * :doc:`opnfv_yardstick_tc020`
+
+    * :doc:`opnfv_yardstick_tc021`
+
+
+Install and Configure DPDK Pktgen
++++++++++++++++++++++++++++++++++
+
+Execution of the framework is based on DPDK Pktgen.
+If DPDK Pktgen has not been installed, it is necessary to download, install,
+compile and configure it.
+The user can create a directory and download the dpdk packet generator source
+code:
+
+::
+
+    cd experimental_framework/libraries
+    mkdir dpdk_pktgen
+    git clone https://github.com/pktgen/Pktgen-DPDK.git
+
+For instructions on the installation and configuration of DPDK and DPDK Pktgen
+please follow the official DPDK Pktgen README file.
+Once the installation is completed, it is necessary to load the DPDK kernel
+driver, as follows:
+
+::
+
+    modprobe uio
+    insmod DPDK_DIR/x86_64-native-linuxapp-gcc/kmod/igb_uio.ko
+
+It is necessary to set the configuration file to support the desired Pktgen
+configuration.
+A description of the required configuration parameters and supporting examples
+is provided in the following:
+
+::
+
+    [PacketGen]
+    packet_generator = dpdk_pktgen
+
+    # This is the directory where the packet generator is installed
+    # (if the user previously installed dpdk-pktgen,
+    # it is required to provide the directory where it is installed).
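+    # NOTE: the directory values below are illustrative examples, not
+    # defaults; adjust them to the local installation and always use
+    # absolute paths.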
+    pktgen_directory = /home/user/software/dpdk_pktgen/dpdk/examples/pktgen/
+
+    # This is the directory where DPDK is installed
+    dpdk_directory = /home/user/apexlake/experimental_framework/libraries/Pktgen-DPDK/dpdk/
+
+    # Name of the dpdk-pktgen program that starts the packet generator
+    program_name = app/app/x86_64-native-linuxapp-gcc/pktgen
+
+    # DPDK coremask (see DPDK-Pktgen readme)
+    coremask = 1f
+
+    # DPDK memory channels (see DPDK-Pktgen readme)
+    memory_channels = 3
+
+    # Name of the interface of the pktgen to be used to send traffic (vlan_sender)
+    name_if_1 = p1p1
+
+    # Name of the interface of the pktgen to be used to receive traffic (vlan_receiver)
+    name_if_2 = p1p2
+
+    # PCI bus address corresponding to if_1
+    bus_slot_nic_1 = 01:00.0
+
+    # PCI bus address corresponding to if_2
+    bus_slot_nic_2 = 01:00.1
+
+
+To find the parameters related to the names of the NICs and the addresses of
+the PCI buses, the user may find it useful to run the :term:`DPDK` tool
+nic_bind as follows:
+
+::
+
+    DPDK_DIR/tools/dpdk_nic_bind.py --status
+
+This lists the NICs available on the system, and shows the available drivers
+and bus addresses for each interface.
+Please make sure to select NICs which are :term:`DPDK` compatible.
+
+Installation and Configuration of smcroute
+++++++++++++++++++++++++++++++++++++++++++
+
+The user is required to install smcroute, which is used by the framework to
+support multicast communications.
+
+The following is the list of commands required to download and install smcroute.
+
+::
+
+    cd ~
+    git clone https://github.com/troglobit/smcroute.git
+    cd smcroute
+    git reset --hard c3f5c56
+    sed -i 's/aclocal-1.11/aclocal/g' ./autogen.sh
+    sed -i 's/automake-1.11/automake/g' ./autogen.sh
+    ./autogen.sh
+    ./configure
+    make
+    sudo make install
+    cd ..
+
+It is required to do the reset to the specified commit ID.
+It is also required to define the NIC to be used, via the following
+command:
+
+::
+
+    SMCROUTE_NIC=(name of the nic)
+
+where "name of the nic" is the name used previously for the variable "name_if_2".
+For example:
+
+::
+
+    SMCROUTE_NIC=p1p2
+
+Then create the smcroute configuration file /etc/smcroute.conf:
+
+::
+
+    echo mgroup from $SMCROUTE_NIC group 224.192.16.1 > /etc/smcroute.conf
+
+
+At the end of this procedure it will be necessary to perform the following
+actions to add the user to the sudoers:
+
+::
+
+    adduser USERNAME sudo
+    echo "USERNAME ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers
+
+
+Experiment using SR-IOV Configuration on the Compute Node
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
+
+To enable :term:`SR-IOV` interfaces on the physical NIC of the compute node, a
+compatible NIC is required.
+NIC configuration depends on model and vendor. After proper configuration to
+support :term:`SR-IOV`, a proper configuration of OpenStack is required.
+For further information, please refer to the SRIOV_ configuration guide.
+
+Finalize the installation of the framework on the system
+=========================================================
+
+The installation of the framework on the system requires the setup of the project.
+After entering the apexlake directory, it is sufficient to run the following
+command.
+
+::
+
+    python setup.py install
+
+Since some elements are copied into the /tmp directory (see configuration file)
+it could be necessary to repeat this step after a reboot of the host.
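+
+As a quick sanity check that the setup completed, the framework can be
+imported from Python. The snippet below is only a sketch: the module path
+``experimental_framework.api`` and the ``FrameworkApi`` class are assumptions
+based on the ApexLake API chapter and may differ in your checkout; adapt the
+import to the actual package layout.
+
+.. code-block:: python
+
+    # Minimal post-install check (assumed module/class names; see the
+    # ApexLake API chapter). The import fails if "python setup.py install"
+    # did not complete successfully.
+    from experimental_framework import api
+
+    # init() initializes the framework and returns None, per the API chapter.
+    api.FrameworkApi.init()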
diff --git a/docs/testing/user/userguide/11-apexlake_api.rst b/docs/testing/user/userguide/11-apexlake_api.rst new file mode 100644 index 000000000..35a1dbe3e --- /dev/null +++ b/docs/testing/user/userguide/11-apexlake_api.rst @@ -0,0 +1,89 @@ +.. This work is licensed under a Creative Commons Attribution 4.0 International +.. License. +.. http://creativecommons.org/licenses/by/4.0 +.. (c) OPNFV, Intel Corporation and others. + + +================================= +Apexlake API Interface Definition +================================= + +Abstract +-------- + +The API interface provided by the framework to enable the execution of test +cases is defined as follows. + + +init +---- + +**static init()** + + Initializes the Framework + + **Returns** None + + +execute_framework +----------------- + +**static execute_framework** (test_cases, + + iterations, + + heat_template, + + heat_template_parameters, + + deployment_configuration, + + openstack_credentials) + + Executes the framework according the specified inputs + + **Parameters** + + - **test_cases** + + Test cases to be run with the workload (dict() of dict()) + + Example: + test_case = dict() + + test_case[’name’] = ‘module.Class’ + + test_case[’params’] = dict() + + test_case[’params’][’throughput’] = ‘1’ + + test_case[’params’][’vlan_sender’] = ‘1000’ + + test_case[’params’][’vlan_receiver’] = ‘1001’ + + test_cases = [test_case] + + - **iterations** + Number of test cycles to be executed (int) + + - **heat_template** + (string) File name of the heat template corresponding to the workload to be deployed. + It contains the parameters to be evaluated in the form of #parameter_name. + (See heat_templates/vTC.yaml as example). + + - **heat_template_parameters** + (dict) Parameters to be provided as input to the + heat template. See http://docs.openstack.org/developer/heat/ template_guide/hot_guide.html + section “Template input parameters” for further info. + + - **deployment_configuration** + ( dict[string] = list(strings) ) ) Dictionary of parameters + representing the deployment configuration of the workload. + + The key is a string corresponding to the name of the parameter, + the value is a list of strings representing the value to be + assumed by a specific param. The parameters are user defined: + they have to correspond to the place holders (#parameter_name) + specified in the heat template. + + **Returns** dict() containing results diff --git a/docs/testing/user/userguide/11-nsb-overview.rst b/docs/testing/user/userguide/11-nsb-overview.rst deleted file mode 100644 index 6dfa521d1..000000000 --- a/docs/testing/user/userguide/11-nsb-overview.rst +++ /dev/null @@ -1,213 +0,0 @@ -.. This work is licensed under a Creative Commons Attribution 4.0 International -.. License. -.. http://creativecommons.org/licenses/by/4.0 -.. (c) OPNFV, 2016-2017 Intel Corporation. - -===================================== -Network Services Benchmarking (NSB) -===================================== - -Abstract -======== - -.. _Yardstick: https://wiki.opnfv.org/yardstick - -This chapter provides an overview of the NSB, a contribution to OPNFV -Yardstick_ from Intel. - -Overview -======== - -GOAL: Extend Yardstick to perform real world VNFs and NFVi Characterization and -benchmarking with repeatable and deterministic methods. - -The Network Service Benchmarking (NSB) extends the yardstick framework to do -VNF characterization and benchmarking in three different execution -environments - bare metal i.e. 
native Linux environment, standalone virtual -environment and managed virtualized environment (e.g. Open stack etc.). -It also brings in the capability to interact with external traffic generators -both hardware & software based for triggering and validating the traffic -according to user defined profiles. - -NSB extension includes: - - - Generic data models of Network Services, based on ETSI spec (ETSI GS NFV-TST 001) - .. _ETSI GS NFV-TST 001: http://www.etsi.org/deliver/etsi_gs/NFV-TST/001_099/001/01.01.01_60/gs_nfv-tst001v010101p.pdf - - - New Standalone context for VNF testing like SRIOV, OVS, OVS-DPDK etc - - - Generic VNF configuration models and metrics implemented with Python - classes - - - Traffic generator features and traffic profiles - - - L1-L3 state-less traffic profiles - - - L4-L7 state-full traffic profiles - - - Tunneling protocol / network overlay support - - - Test case samples - - - Ping - - - Trex - - - vPE,vCGNAT, vFirewall etc - ipv4 throughput, latency etc - - - Traffic generators like Trex, ab/nginx, ixia, iperf etc - - - KPIs for a given use case: - - - System agent support for collecting NFVi KPI. This includes: - - - CPU statistic - - - Memory BW - - - OVS-DPDK Stats - - - Network KPIs, e.g., inpackets, outpackets, thoughput, latency etc - - - VNF KPIs, e.g., packet_in, packet_drop, packet_fwd etc - -Architecture -============ -The Network Service (NS) defines a set of Virtual Network Functions (VNF) -connected together using NFV infrastructure. - -The Yardstick NSB extension can support multiple VNFs created by different -vendors including traffic generators. Every VNF being tested has its -own data model. The Network service defines a VNF modelling on base of performed -network functionality. The part of the data model is a set of the configuration -parameters, number of connection points used and flavor including core and -memory amount. - -The ETSI defines a Network Service as a set of configurable VNFs working in -some NFV Infrastructure connecting each other using Virtual Links available -through Connection Points. The ETSI MANO specification defines a set of -management entities called Network Service Descriptors (NSD) and -VNF Descriptors (VNFD) that define real Network Service. The picture below -makes an example how the real Network Operator use-case can map into ETSI -Network service definition - -Network Service framework performs the necessary test steps. It may involve - - - Interacting with traffic generator and providing the inputs on traffic - type / packet structure to generate the required traffic as per the - test case. Traffic profiles will be used for this. - - - Executing the commands required for the test procedure and analyses the - command output for confirming whether the command got executed correctly - or not. E.g. As per the test case, run the traffic for the given - time period / wait for the necessary time delay - - - Verify the test result. - - - Validate the traffic flow from SUT - - - Fetch the table / data from SUT and verify the value as per the test case - - - Upload the logs from SUT onto the Test Harness server - - - Read the KPI's provided by particular VNF - -Components of Network Service ------------------------------- - -* *Models for Network Service benchmarking*: The Network Service benchmarking - requires the proper modelling approach. The NSB provides models using Python - files and defining of NSDs and VNFDs. 
- -The benchmark control application being a part of OPNFV yardstick can call -that python models to instantiate and configure the VNFs. Depending on -infrastructure type (bare-metal or fully virtualized) that calls could be -made directly or using MANO system. - -* *Traffic generators in NSB*: Any benchmark application requires a set of - traffic generator and traffic profiles defining the method in which traffic - is generated. - -The Network Service benchmarking model extends the Network Service -definition with a set of Traffic Generators (TG) that are treated -same way as other VNFs being a part of benchmarked network service. -Same as other VNFs the traffic generator are instantiated and terminated. - -Every traffic generator has own configuration defined as a traffic profile and -a set of KPIs supported. The python models for TG is extended by specific calls -to listen and generate traffic. - -* *The stateless TREX traffic generator*: The main traffic generator used as - Network Service stimulus is open source TREX tool. - -The TREX tool can generate any kind of stateless traffic. - -.. code-block:: console - - +--------+ +-------+ +--------+ - | | | | | | - | Trex | ---> | VNF | ---> | Trex | - | | | | | | - +--------+ +-------+ +--------+ - -Supported testcases scenarios: - - - Correlated UDP traffic using TREX traffic generator and replay VNF. - - - using different IMIX configuration like pure voice, pure video traffic etc - - - using different number IP flows like 1 flow, 1K, 16K, 64K, 256K, 1M flows - - - Using different number of rules configured like 1 rule, 1K, 10K rules - -For UDP correlated traffic following Key Performance Indicators are collected -for every combination of test case parameters: - - - RFC2544 throughput for various loss rate defined (1% is a default) - -Graphical Overview -================== - -NSB Testing with yardstick framework facilitate performance testing of various -VNFs provided. - -.. code-block:: console - - +-----------+ - | | +-----------+ - | vPE | ->|TGen Port 0| - | TestCase | | +-----------+ - | | | - +-----------+ +------------------+ +-------+ | - | | -- API --> | VNF | <---> - +-----------+ | Yardstick | +-------+ | - | Test Case | --> | NSB Testing | | - +-----------+ | | | - | | | | - | +------------------+ | - +-----------+ | +-----------+ - | Traffic | ->|TGen Port 1| - | patterns | +-----------+ - +-----------+ - - Figure 1: Network Service - 2 server configuration - - -Install -======= - -run the nsb_install.sh with root privileges - -Run -=== - -:: - - source ~/.bash_profile - cd /yardstick/cmd - sudo -E ./NSBperf.py --vnf vpe --test tc_baremetal_rfc2544_ipv4_1flow_64B.yaml - -Development Environment -======================= - -Ubuntu 14.04, Ubuntu 16.04 diff --git a/docs/testing/user/userguide/12-nsb-overview.rst b/docs/testing/user/userguide/12-nsb-overview.rst new file mode 100644 index 000000000..faac61f08 --- /dev/null +++ b/docs/testing/user/userguide/12-nsb-overview.rst @@ -0,0 +1,194 @@ +.. This work is licensed under a Creative Commons Attribution 4.0 International +.. License. +.. http://creativecommons.org/licenses/by/4.0 +.. (c) OPNFV, 2016-2017 Intel Corporation. + +===================================== +Network Services Benchmarking (NSB) +===================================== + +Abstract +======== + +.. _Yardstick: https://wiki.opnfv.org/yardstick + +This chapter provides an overview of the NSB, a contribution to OPNFV +Yardstick_ from Intel. 
+
+Overview
+========
+
+GOAL: Extend Yardstick to perform real world VNF and NFVi characterization and
+benchmarking with repeatable and deterministic methods.
+
+The Network Service Benchmarking (NSB) extends the yardstick framework to do
+VNF characterization and benchmarking in three different execution
+environments - bare metal i.e. native Linux environment, standalone virtual
+environment and managed virtualized environment (e.g. OpenStack etc.).
+It also brings in the capability to interact with external traffic generators,
+both hardware and software based, for triggering and validating the traffic
+according to user defined profiles.
+
+The NSB extension includes:
+
+  - Generic data models of Network Services, based on the ETSI spec (ETSI GS NFV-TST 001)
+    .. _ETSI GS NFV-TST 001: http://www.etsi.org/deliver/etsi_gs/NFV-TST/001_099/001/01.01.01_60/gs_nfv-tst001v010101p.pdf
+
+  - New Standalone context for VNF testing like SRIOV, OVS, OVS-DPDK etc.
+
+  - Generic VNF configuration models and metrics implemented with Python
+    classes
+
+  - Traffic generator features and traffic profiles
+
+      - L1-L3 stateless traffic profiles
+
+      - L4-L7 stateful traffic profiles
+
+      - Tunneling protocol / network overlay support
+
+  - Test case samples
+
+      - Ping
+
+      - Trex
+
+      - vPE, vCGNAT, vFirewall etc. - ipv4 throughput, latency etc.
+
+  - Traffic generators like Trex, ab/nginx, ixia, iperf etc.
+
+  - KPIs for a given use case:
+
+      - System agent support for collecting NFVi KPIs. This includes:
+
+          - CPU statistics
+
+          - Memory BW
+
+          - OVS-DPDK Stats
+
+      - Network KPIs, e.g., inpackets, outpackets, throughput, latency etc.
+
+      - VNF KPIs, e.g., packet_in, packet_drop, packet_fwd etc.
+
+Architecture
+============
+The Network Service (NS) defines a set of Virtual Network Functions (VNF)
+connected together using NFV infrastructure.
+
+The Yardstick NSB extension can support multiple VNFs created by different
+vendors, including traffic generators. Every VNF being tested has its
+own data model. The Network Service defines the VNF modelling on the basis of
+the network functionality performed. Part of the data model is a set of
+configuration parameters, the number of connection points used and the flavor,
+including core and memory amount.
+
+ETSI defines a Network Service as a set of configurable VNFs working in
+some NFV Infrastructure, connecting to each other using Virtual Links available
+through Connection Points. The ETSI MANO specification defines a set of
+management entities called Network Service Descriptors (NSD) and
+VNF Descriptors (VNFD) that define a real Network Service. The figure below
+gives an example of how a real Network Operator use-case can map onto the ETSI
+Network Service definition.
+
+The Network Service framework performs the necessary test steps. These may involve:
+
+  - Interacting with the traffic generator and providing the inputs on traffic
+    type / packet structure to generate the required traffic as per the
+    test case. Traffic profiles will be used for this.
+
+  - Executing the commands required for the test procedure and analysing the
+    command output to confirm whether the command was executed correctly
+    or not, e.g. as per the test case, run the traffic for the given
+    time period / wait for the necessary time delay.
+
+  - Verify the test result.
+
+  - Validate the traffic flow from the SUT.
+
+  - Fetch the table / data from the SUT and verify the value as per the test case.
+
+  - Upload the logs from the SUT onto the Test Harness server.
+
+  - Read the KPIs provided by the particular VNF.
+
+Components of Network Service
+-----------------------------
+
+* *Models for Network Service benchmarking*: Network Service benchmarking
+  requires a proper modelling approach. The NSB provides models using Python
+  files and the definition of NSDs and VNFDs.
+
+The benchmark control application, being a part of OPNFV Yardstick, can call
+these Python models to instantiate and configure the VNFs. Depending on the
+infrastructure type (bare-metal or fully virtualized) those calls could be
+made directly or using a MANO system.
+
+* *Traffic generators in NSB*: Any benchmark application requires a set of
+  traffic generators and traffic profiles defining the method in which traffic
+  is generated.
+
+The Network Service benchmarking model extends the Network Service
+definition with a set of Traffic Generators (TG) that are treated the
+same way as the other VNFs being part of the benchmarked network service.
+Like the other VNFs, the traffic generators are instantiated and terminated.
+
+Every traffic generator has its own configuration defined as a traffic profile
+and a set of supported KPIs. The Python models for TGs are extended by specific
+calls to listen for and generate traffic.
+
+* *The stateless TREX traffic generator*: The main traffic generator used as
+  Network Service stimulus is the open source TREX tool.
+
+The TREX tool can generate any kind of stateless traffic.
+
+.. code-block:: console
+
+    +--------+      +-------+      +--------+
+    |        |      |       |      |        |
+    |  Trex  | ---> |  VNF  | ---> |  Trex  |
+    |        |      |       |      |        |
+    +--------+      +-------+      +--------+
+
+Supported test case scenarios:
+
+  - Correlated UDP traffic using the TREX traffic generator and the replay VNF:
+
+      - using different IMIX configurations like pure voice, pure video traffic etc.
+
+      - using a different number of IP flows like 1 flow, 1K, 16K, 64K, 256K, 1M flows
+
+      - using a different number of configured rules like 1 rule, 1K, 10K rules
+
+For UDP correlated traffic the following Key Performance Indicators are collected
+for every combination of test case parameters:
+
+  - RFC2544 throughput for the various loss rates defined (1% is the default)
+
+Graphical Overview
+==================
+
+NSB testing with the Yardstick framework facilitates performance testing of
+the various VNFs provided.
+
+.. code-block:: console
+
+    +-----------+
+    |           |                                    +-----------+
+    |   vPE     |                                  ->|TGen Port 0|
+    | TestCase  |                                  | +-----------+
+    |           |                                  |
+    +-----------+    +------------------+ +-------+ |
+                     |                  | -- API --> | VNF | <--->
+    +-----------+    |    Yardstick     | +-------+ |
+    | Test Case | -->|   NSB Testing    |           |
+    +-----------+    |                  |           |
+          |          |                  |           |
+          |          +------------------+           |
+    +-----------+                                   | +-----------+
+    | Traffic   |                                  ->|TGen Port 1|
+    | patterns  |                                    +-----------+
+    +-----------+
+
+               Figure 1: Network Service - 2 server configuration
+
diff --git a/docs/testing/user/userguide/12-nsb_installation.rst b/docs/testing/user/userguide/12-nsb_installation.rst
deleted file mode 100644
index 0b0840029..000000000
--- a/docs/testing/user/userguide/12-nsb_installation.rst
+++ /dev/null
@@ -1,268 +0,0 @@
-.. This work is licensed under a Creative Commons Attribution 4.0 International
-.. License.
-.. http://creativecommons.org/licenses/by/4.0
-.. (c) OPNFV, 2016-2017 Intel Corporation.
- -Yardstick - NSB Testing -Installation -===================================== - -Abstract --------- - -The Network Service Benchmarking (NSB) extends the yardstick framework to do -VNF characterization and benchmarking in three different execution -environments viz., bare metal i.e. native Linux environment, standalone virtual -environment and managed virtualized environment (e.g. Open stack etc.). -It also brings in the capability to interact with external traffic generators -both hardware & software based for triggering and validating the traffic -according to user defined profiles. - -The steps needed to run Yardstick with NSB testing are: - -* Install Yardstick (NSB Testing). -* Setup pod.yaml describing Test topology -* Create the test configuration yaml file. -* Run the test case. - - -Prerequisites -------------- - -Refer chapter Yardstick Instalaltion for more information on yardstick -prerequisites - -Several prerequisites are needed for Yardstick(VNF testing): - -- Python Modules: pyzmq, pika. - -- flex - -- bison - -- build-essential - -- automake - -- libtool - -- librabbitmq-dev - -- rabbitmq-server - -- collectd - -- intel-cmt-cat - -Installing Yardstick on Ubuntu 14.04 ------------------------------------- - -.. _install-framework: - -You can install Yardstick framework directly on Ubuntu 14.04 or in an Ubuntu -14.04 Docker image. No matter which way you choose to install Yardstick -framework, the following installation steps are identical. - -If you choose to use the Ubuntu 14.04 Docker image, You can pull the Ubuntu -14.04 Docker image from Docker hub: - -:: - - docker pull ubuntu:14.04 - -Installing Yardstick framework -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ -Download source code and install Yardstick framework: - -:: - - git clone https://gerrit.opnfv.org/gerrit/yardstick - cd yardstick - ./nsb_setup.sh - -It will also automatically download all the packages needed for NSB Testing setup. - -System Topology: ------------------ - -.. code-block:: console - - +----------+ +----------+ - | | | | - | | (0)----->(0) | Ping/ | - | TG1 | | vPE/ | - | | | 2Trex | - | | (1)<-----(1) | | - +----------+ +----------+ - trafficgen_1 vnf - - -OpenStack parameters and credentials ------------------------------------- - -Environment variables -^^^^^^^^^^^^^^^^^^^^^ - -Before running Yardstick (NSB Testing) it is necessary to export traffic -generator libraries. - -:: - - source ~/.bash_profile - -Config yardstick conf -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ - -:: - - cp ./etc/yardstick/yardstick.conf.sample /etc/yardstick/yardstick.conf - vi /etc/yardstick/yardstick.conf - -Add trex_path and bin_path in 'nsb' section. - -:: - - [DEFAULT] - debug = True - dispatcher = influxdb - - [dispatcher_influxdb] - timeout = 5 - target = http://{YOUR_IP_HERE}:8086 - db_name = yardstick - username = root - password = root - - [nsb] - trex_path=/opt/nsb_bin/trex/scripts - bin_path=/opt/nsb_bin - - -Config pod.yaml describing Topology -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ - -Before executing Yardstick test cases, make sure that pod.yaml reflects the -topology and update all the required fields. 
- -:: - - cp /etc/yardstick/nodes/pod.yaml.nsb.sample /etc/yardstick/nodes/pod.yaml - -Config pod.yaml -:: - nodes: - - - name: trafficgen_1 - role: TrafficGen - ip: 1.1.1.1 - user: root - password: r00t - interfaces: - xe0: # logical name from topology.yaml and vnfd.yaml - vpci: "0000:07:00.0" - driver: i40e # default kernel driver - dpdk_port_num: 0 - local_ip: "152.16.100.20" - netmask: "255.255.255.0" - local_mac: "00:00:00:00:00:01" - xe1: # logical name from topology.yaml and vnfd.yaml - vpci: "0000:07:00.1" - driver: i40e # default kernel driver - dpdk_port_num: 1 - local_ip: "152.16.40.20" - netmask: "255.255.255.0" - local_mac: "00:00.00:00:00:02" - - - - name: vnf - role: vnf - ip: 1.1.1.2 - user: root - password: r00t - host: 1.1.1.2 #BM - host == ip, virtualized env - Host - compute node - interfaces: - xe0: # logical name from topology.yaml and vnfd.yaml - vpci: "0000:07:00.0" - driver: i40e # default kernel driver - dpdk_port_num: 0 - local_ip: "152.16.100.19" - netmask: "255.255.255.0" - local_mac: "00:00:00:00:00:03" - - xe1: # logical name from topology.yaml and vnfd.yaml - vpci: "0000:07:00.1" - driver: i40e # default kernel driver - dpdk_port_num: 1 - local_ip: "152.16.40.19" - netmask: "255.255.255.0" - local_mac: "00:00:00:00:00:04" - routing_table: - - network: "152.16.100.20" - netmask: "255.255.255.0" - gateway: "152.16.100.20" - if: "xe0" - - network: "152.16.40.20" - netmask: "255.255.255.0" - gateway: "152.16.40.20" - if: "xe1" - nd_route_tbl: - - network: "0064:ff9b:0:0:0:0:9810:6414" - netmask: "112" - gateway: "0064:ff9b:0:0:0:0:9810:6414" - if: "xe0" - - network: "0064:ff9b:0:0:0:0:9810:2814" - netmask: "112" - gateway: "0064:ff9b:0:0:0:0:9810:2814" - if: "xe1" - -Enable yardstick virtual environment -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ - -Before executing yardstick test cases, make sure to activate yardstick -python virtual environment - -:: - source /opt/nsb_bin/yardstick_venv/bin/activate - - -Examples and verifying the install ----------------------------------- - -It is recommended to verify that Yardstick was installed successfully -by executing some simple commands and test samples. Before executing yardstick -test cases make sure yardstick flavor and building yardstick-trusty-server -image can be found in glance and openrc file is sourced. Below is an example -invocation of yardstick help command and ping.py test sample: -:: - - yardstick -h - yardstick task start samples/ping.yaml - -Each testing tool supported by Yardstick has a sample configuration file. -These configuration files can be found in the **samples** directory. - -Default location for the output is ``/tmp/yardstick.out``. - - -Run Yardstick - Network Service Testcases ------------------------------------------ - -NS testing - using NSBperf CLI -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ -:: - - source /opt/nsb_setup/yardstick_venv/bin/activate - PYTHONPATH: ". ~/.bash_profile" - cd /yardstick/cmd - Execute command: ./NSPerf.py -h - ./NSBperf.py --vnf --test - eg: ./NSBperf.py --vnf vpe --test tc_baremetal_rfc2544_ipv4_1flow_64B.yaml - -NS testing - using yardstick CLI -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ -:: - - source /opt/nsb_setup/yardstick_venv/bin/activate - PYTHONPATH: ". ~/.bash_profile" - Go to test case forlder type we want to execute. - e.g. 
/samples/vnf_samples/nsut// - run: yardstick --debug task start diff --git a/docs/testing/user/userguide/13-list-of-tcs.rst b/docs/testing/user/userguide/13-list-of-tcs.rst deleted file mode 100644 index 1b5806cd9..000000000 --- a/docs/testing/user/userguide/13-list-of-tcs.rst +++ /dev/null @@ -1,129 +0,0 @@ -.. This work is licensed under a Creative Commons Attribution 4.0 International -.. License. -.. http://creativecommons.org/licenses/by/4.0 -.. (c) OPNFV, Ericsson AB and others. - -==================== -Yardstick Test Cases -==================== - -Abstract -======== - -This chapter lists available Yardstick test cases. -Yardstick test cases are divided in two main categories: - -* *Generic NFVI Test Cases* - Test Cases developed to realize the methodology -described in :doc:`02-methodology` - -* *OPNFV Feature Test Cases* - Test Cases developed to verify one or more -aspect of a feature delivered by an OPNFV Project, including the test cases -developed for the :term:`VTC`. - -Generic NFVI Test Case Descriptions -=================================== - -.. toctree:: - :maxdepth: 1 - - opnfv_yardstick_tc001.rst - opnfv_yardstick_tc002.rst - opnfv_yardstick_tc004.rst - opnfv_yardstick_tc005.rst - opnfv_yardstick_tc008.rst - opnfv_yardstick_tc009.rst - opnfv_yardstick_tc010.rst - opnfv_yardstick_tc011.rst - opnfv_yardstick_tc012.rst - opnfv_yardstick_tc014.rst - opnfv_yardstick_tc024.rst - opnfv_yardstick_tc037.rst - opnfv_yardstick_tc038.rst - opnfv_yardstick_tc042.rst - opnfv_yardstick_tc043.rst - opnfv_yardstick_tc044.rst - opnfv_yardstick_tc055.rst - opnfv_yardstick_tc061.rst - opnfv_yardstick_tc063.rst - opnfv_yardstick_tc069.rst - opnfv_yardstick_tc070.rst - opnfv_yardstick_tc071.rst - opnfv_yardstick_tc072.rst - opnfv_yardstick_tc073.rst - opnfv_yardstick_tc075.rst - opnfv_yardstick_tc076.rst - -OPNFV Feature Test Cases -======================== - -H A ---- - -.. toctree:: - :maxdepth: 1 - - opnfv_yardstick_tc019.rst - opnfv_yardstick_tc025.rst - opnfv_yardstick_tc045.rst - opnfv_yardstick_tc046.rst - opnfv_yardstick_tc047.rst - opnfv_yardstick_tc048.rst - opnfv_yardstick_tc049.rst - opnfv_yardstick_tc050.rst - opnfv_yardstick_tc051.rst - opnfv_yardstick_tc052.rst - opnfv_yardstick_tc053.rst - opnfv_yardstick_tc054.rst - -IPv6 ----- - -.. toctree:: - :maxdepth: 1 - - opnfv_yardstick_tc027.rst - -KVM ---- - -.. toctree:: - :maxdepth: 1 - - opnfv_yardstick_tc028.rst - -Parser ------- - -.. toctree:: - :maxdepth: 1 - - opnfv_yardstick_tc040.rst - - StorPerf ------------ - -.. toctree:: - :maxdepth: 1 - - opnfv_yardstick_tc074.rst - -virtual Traffic Classifier --------------------------- - -.. toctree:: - :maxdepth: 1 - - opnfv_yardstick_tc006.rst - opnfv_yardstick_tc007.rst - opnfv_yardstick_tc020.rst - opnfv_yardstick_tc021.rst - -Templates -========= - -.. toctree:: - :maxdepth: 1 - - testcase_description_v2_template - Yardstick_task_templates - diff --git a/docs/testing/user/userguide/13-nsb_installation.rst b/docs/testing/user/userguide/13-nsb_installation.rst new file mode 100644 index 000000000..3eb17bbca --- /dev/null +++ b/docs/testing/user/userguide/13-nsb_installation.rst @@ -0,0 +1,238 @@ +.. This work is licensed under a Creative Commons Attribution 4.0 International +.. License. +.. http://creativecommons.org/licenses/by/4.0 +.. (c) OPNFV, 2016-2017 Intel Corporation. 
+
+Yardstick - NSB Testing - Installation
+======================================
+
+Abstract
+--------
+
+The Network Service Benchmarking (NSB) extends the yardstick framework to do
+VNF characterization and benchmarking in three different execution
+environments, viz. bare metal i.e. native Linux environment, standalone virtual
+environment and managed virtualized environment (e.g. OpenStack etc.).
+It also brings in the capability to interact with external traffic generators,
+both hardware and software based, for triggering and validating the traffic
+according to user defined profiles.
+
+The steps needed to run Yardstick with NSB testing are:
+
+* Install Yardstick (NSB Testing).
+* Setup pod.yaml describing the test topology.
+* Create the test configuration yaml file.
+* Run the test case.
+
+
+Prerequisites
+-------------
+
+Refer to the chapter :doc:`04-installation` for more information on Yardstick
+prerequisites.
+
+Several prerequisites are needed for Yardstick (VNF testing):
+
+- Python Modules: pyzmq, pika.
+
+- flex
+
+- bison
+
+- build-essential
+
+- automake
+
+- libtool
+
+- librabbitmq-dev
+
+- rabbitmq-server
+
+- collectd
+
+- intel-cmt-cat
+
+Install Yardstick (NSB Testing)
+-------------------------------
+
+Refer to the chapter :doc:`04-installation` for more information on installing
+*Yardstick*.
+
+After *Yardstick* is installed, execute the "nsb_setup.sh" script to set up
+NSB testing:
+
+::
+
+    ./nsb_setup.sh
+
+It will also automatically download all the packages needed for the NSB Testing setup.
+
+System Topology
+---------------
+
+.. code-block:: console
+
+    +----------+              +----------+
+    |          |              |          |
+    |          | (0)----->(0) |  Ping/   |
+    |    TG1   |              |  vPE/    |
+    |          |              |  2Trex   |
+    |          | (1)<-----(1) |          |
+    +----------+              +----------+
+    trafficgen_1                   vnf
+
+
+OpenStack parameters and credentials
+------------------------------------
+
+Environment variables
+^^^^^^^^^^^^^^^^^^^^^
+
+Before running Yardstick (NSB Testing) it is necessary to export the traffic
+generator libraries:
+
+::
+
+    source ~/.bash_profile
+
+Config yardstick conf
+^^^^^^^^^^^^^^^^^^^^^
+
+::
+
+    cp ./etc/yardstick/yardstick.conf.sample /etc/yardstick/yardstick.conf
+    vi /etc/yardstick/yardstick.conf
+
+Add trex_path and bin_path in the 'nsb' section.
+
+::
+
+    [DEFAULT]
+    debug = True
+    dispatcher = influxdb
+
+    [dispatcher_influxdb]
+    timeout = 5
+    target = http://{YOUR_IP_HERE}:8086
+    db_name = yardstick
+    username = root
+    password = root
+
+    [nsb]
+    trex_path=/opt/nsb_bin/trex/scripts
+    bin_path=/opt/nsb_bin
+
+
+Config pod.yaml describing Topology
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Before executing Yardstick test cases, make sure that pod.yaml reflects the
+topology and update all the required fields.
+
+::
+
+    cp /etc/yardstick/nodes/pod.yaml.nsb.sample /etc/yardstick/nodes/pod.yaml
+
+Config pod.yaml:
+
+::
+
+    nodes:
+    -
+        name: trafficgen_1
+        role: TrafficGen
+        ip: 1.1.1.1
+        user: root
+        password: r00t
+        interfaces:
+            xe0:  # logical name from topology.yaml and vnfd.yaml
+                vpci: "0000:07:00.0"
+                driver: i40e # default kernel driver
+                dpdk_port_num: 0
+                local_ip: "152.16.100.20"
+                netmask: "255.255.255.0"
+                local_mac: "00:00:00:00:00:01"
+            xe1:  # logical name from topology.yaml and vnfd.yaml
+                vpci: "0000:07:00.1"
+                driver: i40e # default kernel driver
+                dpdk_port_num: 1
+                local_ip: "152.16.40.20"
+                netmask: "255.255.255.0"
+                local_mac: "00:00:00:00:00:02"
+
+    -
+        name: vnf
+        role: vnf
+        ip: 1.1.1.2
+        user: root
+        password: r00t
+        host: 1.1.1.2 #BM - host == ip, virtualized env - Host - compute node
+        interfaces:
+            xe0:  # logical name from topology.yaml and vnfd.yaml
+                vpci: "0000:07:00.0"
+                driver: i40e # default kernel driver
+                dpdk_port_num: 0
+                local_ip: "152.16.100.19"
+                netmask: "255.255.255.0"
+                local_mac: "00:00:00:00:00:03"
+
+            xe1:  # logical name from topology.yaml and vnfd.yaml
+                vpci: "0000:07:00.1"
+                driver: i40e # default kernel driver
+                dpdk_port_num: 1
+                local_ip: "152.16.40.19"
+                netmask: "255.255.255.0"
+                local_mac: "00:00:00:00:00:04"
+        routing_table:
+        - network: "152.16.100.20"
+          netmask: "255.255.255.0"
+          gateway: "152.16.100.20"
+          if: "xe0"
+        - network: "152.16.40.20"
+          netmask: "255.255.255.0"
+          gateway: "152.16.40.20"
+          if: "xe1"
+        nd_route_tbl:
+        - network: "0064:ff9b:0:0:0:0:9810:6414"
+          netmask: "112"
+          gateway: "0064:ff9b:0:0:0:0:9810:6414"
+          if: "xe0"
+        - network: "0064:ff9b:0:0:0:0:9810:2814"
+          netmask: "112"
+          gateway: "0064:ff9b:0:0:0:0:9810:2814"
+          if: "xe1"
+
+Enable yardstick virtual environment
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Before executing yardstick test cases, make sure to activate the yardstick
+Python virtual environment:
+
+::
+
+    source /opt/nsb_bin/yardstick_venv/bin/activate
+
+
+Run Yardstick - Network Service Testcases
+-----------------------------------------
+
+NS testing - using NSBperf CLI
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+::
+
+    source /opt/nsb_setup/yardstick_venv/bin/activate
+    PYTHONPATH: ". ~/.bash_profile"
+    cd /yardstick/cmd
+
+    Execute command: ./NSBperf.py -h
+        ./NSBperf.py --vnf <vnf> --test <test case yaml>
+    eg: ./NSBperf.py --vnf vpe --test tc_baremetal_rfc2544_ipv4_1flow_64B.yaml
+
+NS testing - using yardstick CLI
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+::
+
+    source /opt/nsb_setup/yardstick_venv/bin/activate
+    PYTHONPATH: ". ~/.bash_profile"
+
+Go to the folder of the test case type you want to execute,
+      e.g. <yardstick repo>/samples/vnf_samples/nsut/<vnf>/, and
+      run: yardstick --debug task start <test case yaml>
diff --git a/docs/testing/user/userguide/14-list-of-tcs.rst b/docs/testing/user/userguide/14-list-of-tcs.rst
new file mode 100644
index 000000000..1b5806cd9
--- /dev/null
+++ b/docs/testing/user/userguide/14-list-of-tcs.rst
@@ -0,0 +1,129 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, Ericsson AB and others.
+
+====================
+Yardstick Test Cases
+====================
+
+Abstract
+========
+
+This chapter lists available Yardstick test cases.
+Yardstick test cases are divided into two main categories:
+
+* *Generic NFVI Test Cases* - Test Cases developed to realize the methodology
+  described in :doc:`02-methodology`
+
+* *OPNFV Feature Test Cases* - Test Cases developed to verify one or more
+  aspects of a feature delivered by an OPNFV Project, including the test cases
+  developed for the :term:`VTC`.
+
+Generic NFVI Test Case Descriptions
+===================================
+
+.. toctree::
+   :maxdepth: 1
+
+   opnfv_yardstick_tc001.rst
+   opnfv_yardstick_tc002.rst
+   opnfv_yardstick_tc004.rst
+   opnfv_yardstick_tc005.rst
+   opnfv_yardstick_tc008.rst
+   opnfv_yardstick_tc009.rst
+   opnfv_yardstick_tc010.rst
+   opnfv_yardstick_tc011.rst
+   opnfv_yardstick_tc012.rst
+   opnfv_yardstick_tc014.rst
+   opnfv_yardstick_tc024.rst
+   opnfv_yardstick_tc037.rst
+   opnfv_yardstick_tc038.rst
+   opnfv_yardstick_tc042.rst
+   opnfv_yardstick_tc043.rst
+   opnfv_yardstick_tc044.rst
+   opnfv_yardstick_tc055.rst
+   opnfv_yardstick_tc061.rst
+   opnfv_yardstick_tc063.rst
+   opnfv_yardstick_tc069.rst
+   opnfv_yardstick_tc070.rst
+   opnfv_yardstick_tc071.rst
+   opnfv_yardstick_tc072.rst
+   opnfv_yardstick_tc073.rst
+   opnfv_yardstick_tc075.rst
+   opnfv_yardstick_tc076.rst
+
+OPNFV Feature Test Cases
+========================
+
+H A
+---
+
+.. toctree::
+   :maxdepth: 1
+
+   opnfv_yardstick_tc019.rst
+   opnfv_yardstick_tc025.rst
+   opnfv_yardstick_tc045.rst
+   opnfv_yardstick_tc046.rst
+   opnfv_yardstick_tc047.rst
+   opnfv_yardstick_tc048.rst
+   opnfv_yardstick_tc049.rst
+   opnfv_yardstick_tc050.rst
+   opnfv_yardstick_tc051.rst
+   opnfv_yardstick_tc052.rst
+   opnfv_yardstick_tc053.rst
+   opnfv_yardstick_tc054.rst
+
+IPv6
+----
+
+.. toctree::
+   :maxdepth: 1
+
+   opnfv_yardstick_tc027.rst
+
+KVM
+---
+
+.. toctree::
+   :maxdepth: 1
+
+   opnfv_yardstick_tc028.rst
+
+Parser
+------
+
+.. toctree::
+   :maxdepth: 1
+
+   opnfv_yardstick_tc040.rst
+
+StorPerf
+--------
+
+.. toctree::
+   :maxdepth: 1
+
+   opnfv_yardstick_tc074.rst
+
+virtual Traffic Classifier
+--------------------------
+
+.. toctree::
+   :maxdepth: 1
+
+   opnfv_yardstick_tc006.rst
+   opnfv_yardstick_tc007.rst
+   opnfv_yardstick_tc020.rst
+   opnfv_yardstick_tc021.rst
+
+Templates
+=========
+
+.. toctree::
+   :maxdepth: 1
+
+   testcase_description_v2_template
+   Yardstick_task_templates
+
diff --git a/docs/testing/user/userguide/index.rst b/docs/testing/user/userguide/index.rst
index 58a2a4d43..f99d868e9 100644
--- a/docs/testing/user/userguide/index.rst
+++ b/docs/testing/user/userguide/index.rst
@@ -5,12 +5,13 @@
 .. http://creativecommons.org/licenses/by/4.0
 .. (c) OPNFV, Ericsson AB and others.
 
-==================
+===========================================
 Performance Testing User Guide (Yardstick)
-==================
+===========================================
 
 .. 
toctree:: - :maxdepth: 2 + :maxdepth: 4 + :numbered: 01-introduction 02-methodology @@ -19,11 +20,12 @@ Performance Testing User Guide (Yardstick) 05-yardstick_plugin 06-result-store-InfluxDB 07-grafana - 08-vtc-overview - 09-apexlake_installation - 10-apexlake_api - 11-nsb-overview - 12-nsb_installation - 13-list-of-tcs + 08-api + 09-vtc-overview + 10-apexlake_installation + 11-apexlake_api + 12-nsb-overview + 13-nsb_installation + 14-list-of-tcs glossary references diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc001.rst b/docs/testing/user/userguide/opnfv_yardstick_tc001.rst index b53c508a6..ef2382d4f 100644 --- a/docs/testing/user/userguide/opnfv_yardstick_tc001.rst +++ b/docs/testing/user/userguide/opnfv_yardstick_tc001.rst @@ -1,4 +1,4 @@ -s work is licensed under a Creative Commons Attribution 4.0 International +.. This work is licensed under a Creative Commons Attribution 4.0 International .. License. .. http://creativecommons.org/licenses/by/4.0 .. (c) OPNFV, Ericsson AB and others. diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc005.rst b/docs/testing/user/userguide/opnfv_yardstick_tc005.rst index 1c2d71d81..fc75c0da0 100644 --- a/docs/testing/user/userguide/opnfv_yardstick_tc005.rst +++ b/docs/testing/user/userguide/opnfv_yardstick_tc005.rst @@ -1,4 +1,4 @@ -. This work is licensed under a Creative Commons Attribution 4.0 International +.. This work is licensed under a Creative Commons Attribution 4.0 International .. License. .. http://creativecommons.org/licenses/by/4.0 .. (c) OPNFV, Huawei Technologies Co.,Ltd and others. -- cgit 1.2.3-korg