Diffstat (limited to 'docs/testing/user')
20 files changed, 990 insertions, 590 deletions
diff --git a/docs/testing/user/userguide/01-introduction.rst b/docs/testing/user/userguide/01-introduction.rst index 0e0eea002..63d0d9883 100755 --- a/docs/testing/user/userguide/01-introduction.rst +++ b/docs/testing/user/userguide/01-introduction.rst @@ -12,6 +12,7 @@ Introduction .. _Pharos: https://wiki.opnfv.org/pharos .. _Yardstick: https://wiki.opnfv.org/yardstick .. _Presentation: https://wiki.opnfv.org/download/attachments/2925202/opnfv_summit_-_yardstick_project.pdf?version=1&modificationDate=1458848320000&api=v2 + Yardstick_ is an OPNFV Project. The project's goal is to verify infrastructure compliance, from the perspective @@ -37,36 +38,49 @@ About This Document This document consists of the following chapters: +* Chapter :doc:`01-introduction` provides a brief introduction to the *Yardstick* + project's background and describes the structure of this document. + * Chapter :doc:`02-methodology` describes the methodology implemented by the - Yardstick Project for :term:`NFVI` verification. + *Yardstick* Project for :term:`NFVI` verification. * Chapter :doc:`03-architecture` provides information on the software architecture - of yardstick. + of *Yardstick*. -* Chapter :doc:`04-vtc-overview` provides information on the :term:`VTC`. +* Chapter :doc:`04-installation` provides instructions to install *Yardstick*. -* Chapter :doc:`05-apexlake_installation` provides instructions to install the - experimental framework *ApexLake* +* Chapter :doc:`05-yardstick_plugin` provides information on how to integrate + other OPNFV testing projects into *Yardstick*. -* Chapter :doc:`06-apexlake_api` explains how this framework is integrated in - *Yardstick*. +* Chapter :doc:`06-result-store-InfluxDB` provides information on how to run + plug-in test cases and store test results into the community's InfluxDB. -* Chapter :doc:`07-nsb-overview` describes the methodology implemented by the - yardstick - Network service benchmarking to test real world usecase for a - given VNF +* Chapter :doc:`07-grafana` provides information on the *Yardstick* Grafana dashboard + and how to add a new dashboard to it. -* Chapter :doc:`08-nsb_installation` provides instructions to install - *Yardstick - Network service benchmarking testing*. +* Chapter :doc:`08-api` provides information on the *Yardstick* ReST API and how to + use it. -* Chapter :doc:`09-installation` provides instructions to install *Yardstick*. +* Chapter :doc:`09-yardstick_user_interface` provides information on how to use + the yardstick report CLI to view test results in table format and plotted + on a graph. -* Chapter :doc:`10-yardstick_plugin` provides information on how to integrate - other OPNFV testing projects into *Yardstick*. +* Chapter :doc:`10-vtc-overview` provides information on the :term:`VTC`. -* Chapter :doc:`11-result-store-InfluxDB` provides inforamtion on how to run - plug-in test cases and store test results into community's InfluxDB. +* Chapter :doc:`11-apexlake_installation` provides instructions to install the + experimental framework *ApexLake*. + +* Chapter :doc:`12-apexlake_api` explains how this framework is integrated in + *Yardstick*. -* Chapter :doc:`12-list-of-tcs` includes a list of available Yardstick test +* Chapter :doc:`13-nsb-overview` describes the methodology implemented by the + Yardstick - Network service benchmarking to test a real world use case for a + given VNF.
+ +* Chapter :doc:`14-nsb_installation` provides instructions to install + *Yardstick - Network service benchmarking testing*. + +* Chapter :doc:`15-list-of-tcs` includes a list of available *Yardstick* test cases. @@ -76,4 +90,3 @@ Contact Yardstick Feedback? `Contact us`_ .. _Contact us: opnfv-users@lists.opnfv.org - diff --git a/docs/testing/user/userguide/03-architecture.rst b/docs/testing/user/userguide/03-architecture.rst index 03bf00f58..8336b609d 100755 --- a/docs/testing/user/userguide/03-architecture.rst +++ b/docs/testing/user/userguide/03-architecture.rst @@ -187,9 +187,9 @@ run test measurement scripts through the ssh tunnel. After all TestScenaio is finished, TaskCommands will undeploy the heat stack. Then the whole test is finished. -.. image:: images/Logical_view.png +.. image:: images/Yardstick_framework_architecture_in_D.png :width: 800px - :alt: Yardstick Logical View + :alt: Yardstick framework architecture in Danube Process View (Test execution flow) ================================== @@ -236,7 +236,7 @@ Yardstick Directory structure **yardstick/** - Yardstick main directory. -*ci/* - Used for continuous integration of Yardstick at different PODs and +*tests/ci/* - Used for continuous integration of Yardstick at different PODs and with support for different installers. *docs/* - All documentation is stored here, such as configuration guides, diff --git a/docs/testing/user/userguide/04-installation.rst b/docs/testing/user/userguide/04-installation.rst new file mode 100644 index 000000000..0c2bb58cf --- /dev/null +++ b/docs/testing/user/userguide/04-installation.rst @@ -0,0 +1,508 @@ +.. This work is licensed under a Creative Commons Attribution 4.0 International +.. License. +.. http://creativecommons.org/licenses/by/4.0 +.. (c) OPNFV, Ericsson AB, Huawei Technologies Co.,Ltd and others. + +Yardstick Installation +====================== + + +Abstract +-------- + +Yardstick supports installation by Docker or directly in Ubuntu. The +installation procedure for Docker and direct installation are detailed in +the sections below. + +To use Yardstick you should have access to an OpenStack environment, with at +least Nova, Neutron, Glance, Keystone and Heat installed. + +The steps needed to run Yardstick are: + +1. Install Yardstick. +2. Load OpenStack environment variables. +#. Create Yardstick flavor. +#. Build a guest image and load it into the OpenStack environment. +#. Create the test configuration ``.yaml`` file and run the test case/suite. + + +Prerequisites +------------- + +The OPNFV deployment is out of the scope of this document and can be found `here <http://artifacts.opnfv.org/opnfvdocs/colorado/docs/configguide/index.html>`_. The OPNFV platform is considered as the System Under Test (SUT) in this document. + +Several prerequisites are needed for Yardstick: + +#. A Jumphost to run Yardstick on +#. A Docker daemon or a virtual environment installed on the Jumphost +#. A public/external network created on the SUT +#. Connectivity from the Jumphost to the SUT public/external network + +**NOTE:** *Jumphost* refers to any server which meets the previous +requirements. Normally it is the same server from where the OPNFV +deployment has been triggered. + +**WARNING:** Connectivity from Jumphost is essential and it is of paramount +importance to make sure it is working before even considering to install +and run Yardstick. Make also sure you understand how your networking is +designed to work. 
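**NOTE:** A simple way to verify this connectivity is to reach an address on the
SUT public/external network from the Jumphost before starting the installation.
The address below is a placeholder that must be replaced with a value from your
own deployment; it is not defined by this guide::

    # e.g. the gateway of the SUT public/external network
    ping -c 3 <SUT-external-network-address>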
+ +**NOTE:** If your Jumphost is operating behind a company http proxy and/or +Firewall, please consult first the section `Proxy Support (**Todo**)`_, towards +the end of this document. That section details some tips/tricks which +*may* be of help in a proxified environment. + + +Install Yardstick using Docker (**recommended**) +--------------------------------------------------- + +Yardstick has a Docker image. It is recommended to use this Docker image to run Yardstick test. + +Prepare the Yardstick container +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +.. _dockerhub: https://hub.docker.com/r/opnfv/yardstick/ + +Install docker on your guest system with the following command, if not done yet:: + + wget -qO- https://get.docker.com/ | sh + +Pull the Yardstick Docker image (``opnfv/yardstick``) from the public dockerhub +registry under the OPNFV account: dockerhub_, with the following docker +command:: + + docker pull opnfv/yardstick:stable + +After pulling the Docker image, check that it is available with the +following docker command:: + + [yardsticker@jumphost ~]$ docker images + REPOSITORY TAG IMAGE ID CREATED SIZE + opnfv/yardstick stable a4501714757a 1 day ago 915.4 MB + +Run the Docker image to get a Yardstick container:: + + docker run -itd --privileged -v /var/run/docker.sock:/var/run/docker.sock -p 8888:5000 -e INSTALLER_IP=192.168.200.2 -e INSTALLER_TYPE=compass --name yardstick opnfv/yardstick:stable + +Note: + ++----------------------------------------------+------------------------------+ +| parameters | Detail | ++==============================================+==============================+ +| -itd | -i: interactive, Keep STDIN | +| | open even if not attached. | +| | -t: allocate a pseudo-TTY. | +| | -d: run container in | +| | detached mode, in the | +| | background. | ++----------------------------------------------+------------------------------+ +| --privileged | If you want to build | +| | ``yardstick-image`` in | +| | Yardstick container, this | +| | parameter is needed. | ++----------------------------------------------+------------------------------+ +| -e INSTALLER_IP=192.168.200.2 | If you want to use yardstick | +| | env prepare command(or | +| -e INSTALLER_TYPE=compass | related API) to load the | +| | images that Yardstick needs, | +| | these parameters should be | +| | provided. | +| | The INSTALLER_IP and | +| | INSTALLER_TYPE are depending | +| | on your OpenStack installer. | +| | Currently Apex, Compass, | +| | Fuel and Joid are supported. | +| | If you use other installers, | +| | such as devstack, these | +| | parameters can be ignores. | ++----------------------------------------------+------------------------------+ +| -p 8888:5000 | If you want to call | +| | Yardstick API out of | +| | Yardstick container, this | +| | parameter is needed. | ++----------------------------------------------+------------------------------+ +| -v /var/run/docker.sock:/var/run/docker.sock | If you want to use yardstick | +| | env grafana/influxdb to | +| | create a grafana/influxdb | +| | container out of Yardstick | +| | container, this parameter is | +| | needed. | ++----------------------------------------------+------------------------------+ +| --name yardstick | The name for this container, | +| | not needed and can be | +| | defined by the user. 
| ++----------------------------------------------+------------------------------+ + +Configure the Yardstick container environment +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +There are three ways to configure environments for running Yardstick, which will be shown in the following sections. Before that, enter the Yardstick container:: + + docker exec -it yardstick /bin/bash + +and then configure Yardstick environments in the Yardstick container. + +The first way (**recommended**) +################################### + +In the Yardstick container, the Yardstick repository is located in the ``/home/opnfv/repos`` directory. Yardstick provides a CLI to prepare OpenStack environment variables and create Yardstick flavor and guest images automatically:: + + yardstick env prepare + +**NOTE**: The above command only works for the four OPNFV installers -- **Apex**, **Compass**, **Fuel** and **Joid**. +The env prepare command may take up to 6-8 minutes to finish building +yardstick-image and other environment preparation. Meanwhile, if you wish to +monitor the env prepare process, you can enter the Yardstick container in a new +terminal window and execute the following command:: + + tail -f /var/log/yardstick/uwsgi.log + + +The second way +################ + +Export OpenStack environment variables +>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> + +Before running Yardstick it is necessary to export OpenStack environment variables:: + + source openrc + +Environment variables in the ``openrc`` file have to include at least: + +* ``OS_AUTH_URL`` +* ``OS_USERNAME`` +* ``OS_PASSWORD`` +* ``OS_TENANT_NAME`` +* ``EXTERNAL_NETWORK`` + +A sample ``openrc`` file may look like this:: + + export OS_PASSWORD=console + export OS_TENANT_NAME=admin + export OS_AUTH_URL=http://172.16.1.222:35357/v2.0 + export OS_USERNAME=admin + export OS_VOLUME_API_VERSION=2 + export EXTERNAL_NETWORK=net04_ext + +Manually create Yardstick flavor and guest images +>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> + +Before executing Yardstick test cases, make sure that Yardstick flavor and guest image are available in OpenStack. Detailed steps about creating the Yardstick flavor and building the Yardstick guest image can be found below. + +Most of the sample test cases in Yardstick are using an OpenStack flavor called +``yardstick-flavor`` which deviates from the OpenStack standard ``m1.tiny`` flavor by the disk size - instead of 1GB it has 3GB. Other parameters are the same as in ``m1.tiny``. + +Create ``yardstick-flavor``:: + + nova flavor-create yardstick-flavor 100 512 3 1 + +Most of the sample test cases in Yardstick are using a guest image called +``yardstick-image``, which is based on an Ubuntu Cloud Server image and +contains all the required tools to run test cases supported by Yardstick. +Yardstick has a tool for building this custom image. It is necessary to have +``sudo`` rights to use this tool. + +You may also need to install several additional packages to use this tool, by +following the command below:: + + sudo apt-get update && sudo apt-get install -y qemu-utils kpartx + +This image can be built using the following command in the directory where Yardstick is installed:: + + sudo tools/yardstick-img-modify tools/ubuntu-server-cloudimg-modify.sh + +**Warning:** Before building the guest image inside the Yardstick container, make sure the container has been started with the ``--privileged`` option. The script will create files by default in ``/tmp/workspace/yardstick`` and the files will be owned by root!
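If the build succeeds, the image file is written to
``/tmp/workspace/yardstick/yardstick-image.img`` (the same path used by the
upload command below). As an optional sanity check, not an official
installation step, you can inspect it with ``qemu-img`` from the
``qemu-utils`` package installed above::

    sudo qemu-img info /tmp/workspace/yardstick/yardstick-image.img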
+ +The created image can be added to OpenStack using the ``glance image-create`` or via the OpenStack Dashboard. Example command is:: + + glance --os-image-api-version 1 image-create \ + --name yardstick-image --is-public true \ + --disk-format qcow2 --container-format bare \ + --file /tmp/workspace/yardstick/yardstick-image.img + +.. _`Cirros 0.3.5`: http://download.cirros-cloud.net/0.3.5/cirros-0.3.5-x86_64-disk.img +.. _`Ubuntu 16.04`: https://cloud-images.ubuntu.com/xenial/current/xenial-server-cloudimg-amd64-disk1.img + +Some Yardstick test cases use a `Cirros 0.3.5`_ image and/or an `Ubuntu 16.04`_ image. Add Cirros and Ubuntu images to OpenStack:: + + openstack image create \ + --disk-format qcow2 \ + --container-format bare \ + --file $cirros_image_file \ + cirros-0.3.5 + + openstack image create \ + --disk-format qcow2 \ + --container-format bare \ + --file $ubuntu_image_file \ + Ubuntu-16.04 + + +The third way +################ + +Similar to the second way, the first step is also to `Export OpenStack environment variables`_. Then the following steps should be done. + +Automatically create Yardstick flavor and guest images +>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> + +Yardstick has a script for automatically creating Yardstick flavor and building +Yardstick guest images. This script is mainly used for CI and can also be used in a local environment:: + + source $YARDSTICK_REPO_DIR/tests/ci/load_images.sh + + +Delete the Yardstick container +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +If you want to uninstall Yardstick, just delete the Yardstick container:: + + docker stop yardstick && docker rm yardstick + + +Install Yardstick directly in Ubuntu +--------------------------------------- + +.. _install-framework: + +Alternatively, you can install the Yardstick framework directly in Ubuntu or in an Ubuntu Docker image. No matter which way you choose to install Yardstick, the following installation steps are identical. + +If you choose to use the Ubuntu Docker image, you can pull the Ubuntu +Docker image from Docker hub:: + + docker pull ubuntu:16.04 + + +Install Yardstick +^^^^^^^^^^^^^^^^^^^^^ + +Prerequisite preparation:: + + apt-get update && apt-get install -y git python-setuptools python-pip + easy_install -U setuptools==30.0.0 + pip install appdirs==1.4.0 + pip install virtualenv + +Create a virtual environment:: + + virtualenv ~/yardstick_venv + export YARDSTICK_VENV=~/yardstick_venv + source ~/yardstick_venv/bin/activate + +Download the source code and install Yardstick from it:: + + git clone https://gerrit.opnfv.org/gerrit/yardstick + export YARDSTICK_REPO_DIR=~/yardstick + cd yardstick + ./install.sh + + +Configure the Yardstick environment (**Todo**) +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +For installing Yardstick directly in Ubuntu, the ``yardstick env`` command is not available. You need to prepare OpenStack environment variables and create Yardstick flavor and guest images manually. + + +Uninstall Yardstick +^^^^^^^^^^^^^^^^^^^^^^ + +To uninstall Yardstick, just delete the virtual environment:: + + rm -rf ~/yardstick_venv + + +Verify the installation +----------------------------- + +It is recommended to verify that Yardstick was installed successfully +by executing some simple commands and test samples. Before executing Yardstick +test cases, make sure ``yardstick-flavor`` and ``yardstick-image`` can be found in OpenStack and the ``openrc`` file is sourced.
Below is an example +invocation of the Yardstick ``help`` command and the ``ping.py`` test sample:: + + yardstick -h + yardstick task start samples/ping.yaml + +**NOTE:** The above commands can be run both in the Yardstick container and directly in Ubuntu. + +Each testing tool supported by Yardstick has a sample configuration file. +These configuration files can be found in the ``samples`` directory. + +Default location for the output is ``/tmp/yardstick.out``. + + +Deploy InfluxDB and Grafana using Docker +------------------------------------------- + +Without InfluxDB, Yardstick stores results for running test cases in the file +``/tmp/yardstick.out``. However, it is inconvenient to retrieve and display +test results this way. The following sections show how to use InfluxDB to store the data and +Grafana to display it. + +Automatically deploy InfluxDB and Grafana containers (**recommended**) +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +Firstly, enter the Yardstick container:: + + docker exec -it yardstick /bin/bash + +Secondly, create the InfluxDB container and configure it with the following command:: + + yardstick env influxdb + +Thirdly, create and configure the Grafana container:: + + yardstick env grafana + +Then you can run a test case and visit http://host_ip:3000 (``admin``/``admin``) to see the results. + +**NOTE:** Executing the ``yardstick env`` command to deploy InfluxDB and Grafana requires the Jumphost's docker API version to be >= 1.24. Run the following command to check the docker API version on the Jumphost:: + + docker version + +Manually deploy InfluxDB and Grafana containers +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +You could also deploy influxDB and Grafana containers manually on the Jumphost. +The following sections show how to do this. + +.. pull docker images + +Pull docker images +#################### + +:: + + docker pull tutum/influxdb + docker pull grafana/grafana + +Run and configure influxDB +############################### + +Run influxDB:: + + docker run -d --name influxdb \ + -p 8083:8083 -p 8086:8086 --expose 8090 --expose 8099 \ + tutum/influxdb + docker exec -it influxdb bash + +Configure influxDB:: + + influx + >CREATE USER root WITH PASSWORD 'root' WITH ALL PRIVILEGES + >CREATE DATABASE yardstick; + >use yardstick; + >show MEASUREMENTS; + +Run and configure Grafana +############################### + +Run Grafana:: + + docker run -d --name grafana -p 3000:3000 grafana/grafana + +Log in to http://{YOUR_IP_HERE}:3000 using ``admin``/``admin`` and configure the database resource to be ``{YOUR_IP_HERE}:8086``. + +.. image:: images/Grafana_config.png + :width: 800px + :alt: Grafana data source configuration + +Configure ``yardstick.conf`` +############################## + +:: + + docker exec -it yardstick /bin/bash + cp etc/yardstick/yardstick.conf.sample /etc/yardstick/yardstick.conf + vi /etc/yardstick/yardstick.conf + +Modify ``yardstick.conf``:: + + [DEFAULT] + debug = True + dispatcher = influxdb + + [dispatcher_influxdb] + timeout = 5 + target = http://{YOUR_IP_HERE}:8086 + db_name = yardstick + username = root + password = root + +Now you can run Yardstick test cases and store the results in influxDB. + + +Deploy InfluxDB and Grafana directly in Ubuntu (**Todo**) +----------------------------------------------------------- + + +Run Yardstick in a local environment +------------------------------------ + +We also have a guide about how to run Yardstick in a local environment. +This work is contributed by Tapio Tallgren.
+You can find this guide at `here <https://wiki.opnfv.org/display/yardstick/How+to+run+Yardstick+in+a+local+environment>`_. + + +Create a test suite for Yardstick +------------------------------------ + +A test suite in yardstick is a yaml file which include one or more test cases. +Yardstick is able to support running test suite task, so you can customize your +own test suite and run it in one task. + +``tests/opnfv/test_suites`` is the folder where Yardstick puts CI test suite. A typical test suite is like below (the ``fuel_test_suite.yaml`` example):: + + --- + # Fuel integration test task suite + + schema: "yardstick:suite:0.1" + + name: "fuel_test_suite" + test_cases_dir: "samples/" + test_cases: + - + file_name: ping.yaml + - + file_name: iperf3.yaml + +As you can see, there are two test cases in the ``fuel_test_suite.yaml``. The +``schema`` and the ``name`` must be specified. The test cases should be listed +via the tag ``test_cases`` and their relative path is also marked via the tag +``test_cases_dir``. + +Yardstick test suite also supports constraints and task args for each test +case. Here is another sample (the ``os-nosdn-nofeature-ha.yaml`` example) to +show this, which is digested from one big test suite:: + + --- + + schema: "yardstick:suite:0.1" + + name: "os-nosdn-nofeature-ha" + test_cases_dir: "tests/opnfv/test_cases/" + test_cases: + - + file_name: opnfv_yardstick_tc002.yaml + - + file_name: opnfv_yardstick_tc005.yaml + - + file_name: opnfv_yardstick_tc043.yaml + constraint: + installer: compass + pod: huawei-pod1 + task_args: + huawei-pod1: '{"pod_info": "etc/yardstick/.../pod.yaml", + "host": "node4.LF","target": "node5.LF"}' + +As you can see in test case ``opnfv_yardstick_tc043.yaml``, there are two +tags, ``constraint`` and ``task_args``. ``constraint`` is to specify which +installer or pod it can be run in the CI environment. ``task_args`` is to +specify the task arguments for each pod. + +All in all, to create a test suite in Yardstick, you just need to create a +yaml file and add test cases, constraint or task arguments if necessary. + + +Proxy Support (**Todo**) +--------------------------- diff --git a/docs/testing/user/userguide/10-yardstick_plugin.rst b/docs/testing/user/userguide/05-yardstick_plugin.rst index f16dedd02..ec0b49ff1 100644 --- a/docs/testing/user/userguide/10-yardstick_plugin.rst +++ b/docs/testing/user/userguide/05-yardstick_plugin.rst @@ -4,18 +4,19 @@ .. (c) OPNFV, Ericsson AB, Huawei Technologies Co.,Ltd and others. =================================== -Installing a plug-in into yardstick +Installing a plug-in into Yardstick =================================== + Abstract ======== -Yardstick currently provides a ``plugin`` CLI command to support integration -with other OPNFV testing projects. Below is an example invocation of yardstick -plugin command and Storperf plug-in sample. +Yardstick provides a ``plugin`` CLI command to support integration with other +OPNFV testing projects. Below is an example invocation of Yardstick plugin +command and Storperf plug-in sample. 
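For orientation, these are the two invocations that the Storperf walk-through
below builds up to; every step leading to them is explained in the rest of this
chapter::

    # Install Storperf
    yardstick plugin install plugin/storperf.yaml

    # Remove Storperf
    yardstick plugin remove plugin/storperf.yaml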
-Installing Storperf into yardstick +Installing Storperf into Yardstick ================================== Storperf is delivered as a Docker container from @@ -55,38 +56,52 @@ environment and other dependencies: should include credential environment variables at least: * OS_AUTH_URL +* OS_USERNAME +* OS_PASSWORD * OS_TENANT_ID * OS_TENANT_NAME * OS_PROJECT_NAME -* OS_USERNAME -* OS_PASSWORD -* OS_REGION_NAME +* OS_PROJECT_ID +* OS_USER_DOMAIN_ID + +*Yardstick* has a "prepare_storperf_admin-rc.sh" script which can be used to +generate the "storperf_admin-rc" file, this script is located at +test/ci/prepare_storperf_admin-rc.sh -For this storperf_admin-rc file, during environment preparation a "prepare_storperf_admin-rc.sh" -script can be used to generate it. :: #!/bin/bash + # Prepare storperf_admin-rc for StorPerf. AUTH_URL=${OS_AUTH_URL} USERNAME=${OS_USERNAME:-admin} PASSWORD=${OS_PASSWORD:-console} + TENANT_NAME=${OS_TENANT_NAME:-admin} - VOLUME_API_VERSION=${OS_VOLUME_API_VERSION:-2} + TENANT_ID=`openstack project show admin|grep '\bid\b' |awk -F '|' '{print $3}'|sed -e 's/^[[:space:]]*//'` PROJECT_NAME=${OS_PROJECT_NAME:-$TENANT_NAME} - TENANT_ID=`keystone tenant-get admin|grep 'id'|awk -F '|' '{print $3}'|sed -e 's/^[[:space:]]*//'` + PROJECT_ID=`openstack project show admin|grep '\bid\b' |awk -F '|' '{print $3}'|sed -e 's/^[[:space:]]*//'` + USER_DOMAIN_ID=${OS_USER_DOMAIN_ID:-default} + rm -f ~/storperf_admin-rc touch ~/storperf_admin-rc + echo "OS_AUTH_URL="$AUTH_URL >> ~/storperf_admin-rc echo "OS_USERNAME="$USERNAME >> ~/storperf_admin-rc echo "OS_PASSWORD="$PASSWORD >> ~/storperf_admin-rc - echo "OS_TENANT_NAME="$TENANT_NAME >> ~/storperf_admin-rc - echo "OS_VOLUME_API_VERSION="$VOLUME_API_VERSION >> ~/storperf_admin-rc echo "OS_PROJECT_NAME="$PROJECT_NAME >> ~/storperf_admin-rc + echo "OS_PROJECT_ID="$PROJECT_ID >> ~/storperf_admin-rc + echo "OS_TENANT_NAME="$TENANT_NAME >> ~/storperf_admin-rc echo "OS_TENANT_ID="$TENANT_ID >> ~/storperf_admin-rc + echo "OS_USER_DOMAIN_ID="$USER_DOMAIN_ID >> ~/storperf_admin-rc +The generated "storperf_admin-rc" file will be stored in the root directory. If +you installed *Yardstick* using Docker, this file will be located in the +container. You may need to copy it to the root directory of the Storperf +deployed host. + Step 1: Plug-in configuration file preparation -++++++++++++++++++++++++++++++++++++++++++++++ +>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> To install a plug-in, first you need to prepare a plug-in configuration file in YAML format and store it in the "plugin" directory. The plugin configration file @@ -111,23 +126,23 @@ Here the Storperf will be installed on IP 192.168.23.2 which is the Jump Host in my local environment. Step 2: Plug-in install/remove scripts preparation -++++++++++++++++++++++++++++++++++++++++++++++++++ +>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> -Under "yardstick/resource/scripts directory", there are two folders: a "install" -folder and a "remove" folder. You need to store the plug-in install/remove script -in these two folders respectively. +In "yardstick/resource/scripts" directory, there are two folders: a "install" +folder and a "remove" folder. You need to store the plug-in install/remove +scripts in these two folders respectively. -The detailed installation or remove operation should de defined in these two scripts. -The name of both install and remove scripts should match the plugin-in name that you -specified in the plug-in configuration file. 
-For example, the install and remove scripts for Storperf are both named to "storperf.bash". +The detailed install and remove operations should be defined in these two +scripts. The name of both install and remove scripts should match the plug-in +name that you specified in the plug-in configuration file. +For example, the install and remove scripts for Storperf are both named +"storperf.bash". Step 3: Install and remove Storperf -+++++++++++++++++++++++++++++++++++ +>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> -To install Storperf, simply execute the following command -:: +To install Storperf, simply execute the following command:: # Install Storperf yardstick plugin install plugin/storperf.yaml @@ -135,10 +150,11 @@ To install Storperf, simply execute the following command removing Storperf from yardstick ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ -To remove Storperf, simply execute the following command -:: +To remove Storperf, simply execute the following command:: # Remove Storperf yardstick plugin remove plugin/storperf.yaml -What yardstick plugin command does is using the username and password to log into the deployment target and then execute the corresponding install or remove script. +What the yardstick plugin command does is use the username and password to log +into the deployment target and then execute the corresponding install or remove +script. diff --git a/docs/testing/user/userguide/11-result-store-InfluxDB.rst b/docs/testing/user/userguide/06-result-store-InfluxDB.rst index a0bb48a80..747927889 100644 --- a/docs/testing/user/userguide/11-result-store-InfluxDB.rst +++ b/docs/testing/user/userguide/06-result-store-InfluxDB.rst @@ -31,9 +31,9 @@ Store Storperf Test Results into Community's InfluxDB As shown in Framework_, there are two ways to store Storperf test results into community's InfluxDB: -1. Yardstick asks Storperf to run the test case. After the test case is - completed, Yardstick reads test results via ReST API from Storperf and - posts test data to the influxDB. +1. Yardstick executes Storperf test case (TC074), posting test job to Storperf + container via ReST API. After the test job is completed, Yardstick reads + test results via ReST API from Storperf and posts test data to the influxDB. 2. Additionally, Storperf can run tests by itself and post the test result directly to the InfluxDB. The method for posting data directly to influxDB diff --git a/docs/testing/user/userguide/12-grafana.rst b/docs/testing/user/userguide/07-grafana.rst index 416857b71..416857b71 100644 --- a/docs/testing/user/userguide/12-grafana.rst +++ b/docs/testing/user/userguide/07-grafana.rst diff --git a/docs/testing/user/userguide/08-api.rst b/docs/testing/user/userguide/08-api.rst new file mode 100644 index 000000000..1d9ea6d64 --- /dev/null +++ b/docs/testing/user/userguide/08-api.rst @@ -0,0 +1,177 @@ +.. This work is licensed under a Creative Commons Attribution 4.0 International +.. License. +.. http://creativecommons.org/licenses/by/4.0 +.. (c) OPNFV, Huawei Technologies Co.,Ltd and others. + +Yardstick Restful API +====================== + + +Abstract +-------- + +Yardstick supports a RESTful API in Danube. + + +Available API +------------- + +/yardstick/env/action +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +Description: This API is used to do some work related to the environment. For now, we support: + +1. Prepare the yardstick environment (including fetching the openrc file, getting the external network and loading images) +2. Start an InfluxDB docker container and configure yardstick output to InfluxDB. +3.
Start a Grafana docker container and config with the InfluxDB. + +Which API to call will depend on the Parameters. + + +Method: POST + + +Prepare Yardstick Environment +Example:: + + { + 'action': 'prepareYardstickEnv' + } + +This is an asynchronous API. You need to call /yardstick/asynctask API to get the task result. + + +Start and Config InfluxDB docker container +Example:: + + { + 'action': 'createInfluxDBContainer' + } + +This is an asynchronous API. You need to call /yardstick/asynctask API to get the task result. + + +Start and Config Grafana docker container +Example:: + + { + 'action': 'createGrafanaContainer' + } + +This is an asynchronous API. You need to call /yardstick/asynctask API to get the task result. + + +/yardstick/asynctask +^^^^^^^^^^^^^^^^^^^^ + +Description: This API is used to get the status of asynchronous task + + +Method: GET + + +Get the status of asynchronous task +Example:: + + http://localhost:8888/yardstick/asynctask?task_id=3f3f5e03-972a-4847-a5f8-154f1b31db8c + +The returned status will be 0(running), 1(finished) and 2(failed). + + +/yardstick/testcases +^^^^^^^^^^^^^^^^^^^^ + +Description: This API is used to list all release test cases now in yardstick. + + +Method: GET + + +Get a list of release test cases +Example:: + + http://localhost:8888/yardstick/testcases + + +/yardstick/testcases/release/action +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +Description: This API is used to run a yardstick release test case. + + +Method: POST + + +Run a release test case +Example:: + + { + 'action': 'runTestCase', + 'args': { + 'opts': {}, + 'testcase': 'tc002' + } + } + +This is an asynchronous API. You need to call /yardstick/results to get the result. + + +/yardstick/testcases/samples/action +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +Description: This API is used to run a yardstick sample test case. + + +Method: POST + + +Run a sample test case +Example:: + + { + 'action': 'runTestCase', + 'args': { + 'opts': {}, + 'testcase': 'ping' + } + } + +This is an asynchronous API. You need to call /yardstick/results to get the result. + + +/yardstick/testsuites/action +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +Description: This API is used to run a yardstick test suite. + + +Method: POST + + +Run a test suite +Example:: + + { + 'action': 'runTestSuite', + 'args': { + 'opts': {}, + 'testcase': 'smoke' + } + } + +This is an asynchronous API. You need to call /yardstick/results to get the result. + + +/yardstick/results +^^^^^^^^^^^^^^^^^^ + + +Description: This API is used to get the test results of certain task. If you call /yardstick/testcases/samples/action API, it will return a task id. You can use the returned task id to get the results by using this API. + + +Get test results of one task +Example:: + + http://localhost:8888/yardstick/results?task_id=3f3f5e03-972a-4847-a5f8-154f1b31db8c + +This API will return a list of test case result diff --git a/docs/testing/user/userguide/09-installation.rst b/docs/testing/user/userguide/09-installation.rst deleted file mode 100644 index 9c2082a27..000000000 --- a/docs/testing/user/userguide/09-installation.rst +++ /dev/null @@ -1,401 +0,0 @@ -.. This work is licensed under a Creative Commons Attribution 4.0 International -.. License. -.. http://creativecommons.org/licenses/by/4.0 -.. (c) OPNFV, Ericsson AB, Huawei Technologies Co.,Ltd and others. - -Yardstick Installation -====================== - -Abstract --------- - -Yardstick supports installation on Ubuntu 14.04 or via a Docker image. 
The -installation procedure on Ubuntu 14.04 or via the docker image are detailed in -the section below. - -To use Yardstick you should have access to an OpenStack environment, with at -least Nova, Neutron, Glance, Keystone and Heat installed. - -The steps needed to run Yardstick are: - -1. Install Yardstick. -2. Load OpenStack environment variables. -3. Create a Neutron external network. -4. Build Yardstick flavor and a guest image. -5. Load the guest image into the OpenStack environment. -6. Create the test configuration .yaml file. -7. Run the test case. - - -Prerequisites -------------- - -The OPNFV deployment is out of the scope of this document but it can be -found in http://artifacts.opnfv.org/opnfvdocs/colorado/docs/configguide/index.html. -The OPNFV platform is considered as the System Under Test (SUT) in this -document. - -Several prerequisites are needed for Yardstick: - - #. A Jumphost to run Yardstick on - #. A Docker daemon shall be installed on the Jumphost - #. A public/external network created on the SUT - #. Connectivity from the Jumphost to the SUT public/external network - -WARNING: Connectivity from Jumphost is essential and it is of paramount -importance to make sure it is working before even considering to install -and run Yardstick. Make also sure you understand how your networking is -designed to work. - -NOTE: **Jumphost** refers to any server which meets the previous -requirements. Normally it is the same server from where the OPNFV -deployment has been triggered previously. - -NOTE: If your Jumphost is operating behind a company http proxy and/or -Firewall, please consult first the section `Proxy Support`_, towards -the end of this document. The section details some tips/tricks which -*may* be of help in a proxified environment. - - -Installing Yardstick on Ubuntu 14.04 ------------------------------------- - -.. _install-framework: - -You can install Yardstick framework directly on Ubuntu 14.04 or in an Ubuntu -14.04 Docker image. No matter which way you choose to install Yardstick -framework, the following installation steps are identical. - -If you choose to use the Ubuntu 14.04 Docker image, You can pull the Ubuntu -14.04 Docker image from Docker hub: - -:: - - docker pull ubuntu:14.04 - -Installing Yardstick framework -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ -Download source code and install python dependencies: - -:: - - git clone https://gerrit.opnfv.org/gerrit/yardstick - cd yardstick - ./install.sh - - -Installing Yardstick using Docker ---------------------------------- - -Yardstick has a Docker image, this Docker image (**Yardstick-stable**) -serves as a replacement for installing the Yardstick framework in a virtual -environment (for example as done in :ref:`install-framework`). -It is recommended to use this Docker image to run Yardstick test. - -Pulling the Yardstick Docker image -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ - -.. 
_dockerhub: https://hub.docker.com/r/opnfv/yardstick/ - -Pull the Yardstick Docker image ('opnfv/yardstick') from the public dockerhub -registry under the OPNFV account: [dockerhub_], with the following docker -command:: - - docker pull opnfv/yardstick:stable - -After pulling the Docker image, check that it is available with the -following docker command:: - - [yardsticker@jumphost ~]$ docker images - REPOSITORY TAG IMAGE ID CREATED SIZE - opnfv/yardstick stable a4501714757a 1 day ago 915.4 MB - -Run the Docker image: - -:: - - docker run --privileged=true -it opnfv/yardstick:stable /bin/bash - -In the container the Yardstick repository is located in the /home/opnfv/repos -directory. - - -OpenStack parameters and credentials ------------------------------------- - -Environment variables -^^^^^^^^^^^^^^^^^^^^^ -Before running Yardstick it is necessary to export OpenStack environment variables -from the OpenStack *openrc* file (using the ``source`` command) and export the -external network name ``export EXTERNAL_NETWORK="external-network-name"``, -the default name for the external network is ``net04_ext``. - -Credential environment variables in the *openrc* file have to include at least: - -* OS_AUTH_URL -* OS_USERNAME -* OS_PASSWORD -* OS_TENANT_NAME - -A sample openrc file may look like this: - -* export OS_PASSWORD=console -* export OS_TENANT_NAME=admin -* export OS_AUTH_URL=http://172.16.1.222:35357/v2.0 -* export OS_USERNAME=admin -* export OS_VOLUME_API_VERSION=2 -* export EXTERNAL_NETWORK=net04_ext - - -Yardstick falvor and guest images ---------------------------------- - -Before executing Yardstick test cases, make sure that yardstick guest image and -yardstick flavor are available in OpenStack. -Detailed steps about creating yardstick flavor and building yardstick-trusty-server -image can be found below. - -Yardstick-flavor -^^^^^^^^^^^^^^^^ -Most of the sample test cases in Yardstick are using an OpenStack flavor called -*yardstick-flavor* which deviates from the OpenStack standard m1.tiny flavor by the -disk size - instead of 1GB it has 3GB. Other parameters are the same as in m1.tiny. - -Create yardstick-flavor: - -:: - - nova flavor-create yardstick-flavor 100 512 3 1 - - -.. _guest-image: - -Building a guest image -^^^^^^^^^^^^^^^^^^^^^^ -Most of the sample test cases in Yardstick are using a guest image called -*yardstick-trusty-server* which deviates from an Ubuntu Cloud Server image -containing all the required tools to run test cases supported by Yardstick. -Yardstick has a tool for building this custom image. It is necessary to have -sudo rights to use this tool. - -Also you may need install several additional packages to use this tool, by -follwing the commands below: - -:: - - apt-get update && apt-get install -y \ - qemu-utils \ - kpartx - -This image can be built using the following command while in the directory where -Yardstick is installed (``~/yardstick`` if the framework is installed -by following the commands above): - -:: - - export YARD_IMG_ARCH="amd64" - sudo echo "Defaults env_keep += \"YARD_IMG_ARCH\"" >> /etc/sudoers - sudo ./tools/yardstick-img-modify tools/ubuntu-server-cloudimg-modify.sh - -**Warning:** the script will create files by default in: -``/tmp/workspace/yardstick`` and the files will be owned by root! - -If you are building this guest image in inside a docker container make sure the -container is granted with privilege. - -The created image can be added to OpenStack using the ``glance image-create`` or -via the OpenStack Dashboard. 
- -Example command: - -:: - - glance --os-image-api-version 1 image-create \ - --name yardstick-image --is-public true \ - --disk-format qcow2 --container-format bare \ - --file /tmp/workspace/yardstick/yardstick-image.img - -Some Yardstick test cases use a Cirros image, you can find one at -http://download.cirros-cloud.net/0.3.3/cirros-0.3.3-x86_64-disk.img - - -Automatic flavor and image creation ------------------------------------ -Yardstick has a script for automatic creating yardstick flavor and building -guest images. This script is mainly used in CI, but you can still use it in -your local environment. - -Example command: - -:: - - export YARD_IMG_ARCH="amd64" - sudo echo "Defaults env_keep += \"YARD_IMG_ARCH\"" >> /etc/sudoers - source $YARDSTICK_REPO_DIR/tests/ci/load_images.sh - - -Yardstick default key pair -^^^^^^^^^^^^^^^^^^^^^^^^^^ -Yardstick uses a SSH key pair to connect to the guest image. This key pair can -be found in the ``resources/files`` directory. To run the ``ping-hot.yaml`` test -sample, this key pair needs to be imported to the OpenStack environment. - - -Examples and verifying the install ----------------------------------- - -It is recommended to verify that Yardstick was installed successfully -by executing some simple commands and test samples. Before executing yardstick -test cases make sure yardstick flavor and building yardstick-trusty-server -image can be found in glance and openrc file is sourced. Below is an example -invocation of yardstick help command and ping.py test sample: -:: - - yardstick –h - yardstick task start samples/ping.yaml - -Each testing tool supported by Yardstick has a sample configuration file. -These configuration files can be found in the **samples** directory. - -Default location for the output is ``/tmp/yardstick.out``. - - -Deploy InfluxDB and Grafana locally ------------------------------------- - -.. pull docker images - -Pull docker images - -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ - -:: - - docker pull tutum/influxdb - docker pull grafana/grafana - -Run influxdb and config -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ -Run influxdb -:: - - docker run -d --name influxdb \ - -p 8083:8083 -p 8086:8086 --expose 8090 --expose 8099 \ - tutum/influxdb - docker exec -it influxdb bash - -Config influxdb -:: - - influx - >CREATE USER root WITH PASSWORD 'root' WITH ALL PRIVILEGES - >CREATE DATABASE yardstick; - >use yardstick; - >show MEASUREMENTS; - -Run grafana and config -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ -Run grafana -:: - - docker run -d --name grafana -p 3000:3000 grafana/grafana - -Config grafana -:: - - http://{YOUR_IP_HERE}:3000 - log on using admin/admin and config database resource to be {YOUR_IP_HERE}:8086 - -.. image:: images/Grafana_config.png - :width: 800px - :alt: Grafana data source configration - -Config yardstick conf -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ -cp ./etc/yardstick/yardstick.conf.sample /etc/yardstick/yardstick.conf - -vi /etc/yardstick/yardstick.conf -Config yardstick.conf -:: - - [DEFAULT] - debug = True - dispatcher = influxdb - - [dispatcher_influxdb] - timeout = 5 - target = http://{YOUR_IP_HERE}:8086 - db_name = yardstick - username = root - password = root - -Now you can run yardstick test cases and store the results in influxdb -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ - - -Create a test suite for yardstick ------------------------------------- - -A test suite in yardstick is a yaml file which include one or more test cases. 
-Yardstick is able to support running test suite task, so you can customize you -own test suite and run it in one task. - -"tests/opnfv/test_suites" is where yardstick put ci test-suite. A typical test -suite is like below: - -fuel_test_suite.yaml - -:: - - --- - # Fuel integration test task suite - - schema: "yardstick:suite:0.1" - - name: "fuel_test_suite" - test_cases_dir: "samples/" - test_cases: - - - file_name: ping.yaml - - - file_name: iperf3.yaml - -As you can see, there are two test cases in fuel_test_suite, the syntax is simple -here, you must specify the schema and the name, then you just need to list the -test cases in the tag "test_cases" and also mark their relative directory in the -tag "test_cases_dir". - -Yardstick test suite also support constraints and task args for each test case. -Here is another sample to show this, which is digested from one big test suite. - -os-nosdn-nofeature-ha.yaml - -:: - - --- - - schema: "yardstick:suite:0.1" - - name: "os-nosdn-nofeature-ha" - test_cases_dir: "tests/opnfv/test_cases/" - test_cases: - - - file_name: opnfv_yardstick_tc002.yaml - - - file_name: opnfv_yardstick_tc005.yaml - - - file_name: opnfv_yardstick_tc043.yaml - constraint: - installer: compass - pod: huawei-pod1 - task_args: - huawei-pod1: '{"pod_info": "etc/yardstick/.../pod.yaml", - "host": "node4.LF","target": "node5.LF"}' - -As you can see in test case "opnfv_yardstick_tc043.yaml", there are two tags, "constraint" and -"task_args". "constraint" is where you can specify which installer or pod it can be run in -the ci environment. "task_args" is where you can specify the task arguments for each pod. - -All in all, to create a test suite in yardstick, you just need to create a suite yaml file -and add test cases and constraint or task arguments if necessary. - diff --git a/docs/testing/user/userguide/09-yardstick_user_interface.rst b/docs/testing/user/userguide/09-yardstick_user_interface.rst new file mode 100644 index 000000000..9058dd46d --- /dev/null +++ b/docs/testing/user/userguide/09-yardstick_user_interface.rst @@ -0,0 +1,29 @@ +Yardstick User Interface +======================== + +This interface provides a user to view the test result +in table format and also values pinned on to a graph. + + +Command +------- +:: + + yardstick report generate <task-ID> <testcase-filename> + + +Description +----------- + +1. When the command is triggered using the task-id and the testcase +name provided the respective values are retrieved from the +database (influxdb in this particular case). + +2. The values are then formatted and then provided to the html +template framed with complete html body using Django Framework. + +3. Then the whole template is written into a html file. + +The graph is framed with Timestamp on x-axis and output values +(differ from testcase to testcase) on y-axis with the help of +"Highcharts". diff --git a/docs/testing/user/userguide/04-vtc-overview.rst b/docs/testing/user/userguide/10-vtc-overview.rst index 82b20cad5..8ed17873d 100644 --- a/docs/testing/user/userguide/04-vtc-overview.rst +++ b/docs/testing/user/userguide/10-vtc-overview.rst @@ -109,14 +109,20 @@ Graphical Overview Install ======= -run the build.sh with root privileges +run the vTC/build.sh with root privileges Run === -sudo ./pfbridge -a eth1 -b eth2 +:: + + sudo ./pfbridge -a eth1 -b eth2 + + +.. note:: Virtual Traffic Classifier is not support in OPNFV Danube release. 
+ Development Environment ======================= -Ubuntu 14.04 +Ubuntu 14.04 Ubuntu 16.04 diff --git a/docs/testing/user/userguide/05-apexlake_installation.rst b/docs/testing/user/userguide/11-apexlake_installation.rst index d4493e0f8..0d8ef143f 100644 --- a/docs/testing/user/userguide/05-apexlake_installation.rst +++ b/docs/testing/user/userguide/11-apexlake_installation.rst @@ -251,6 +251,8 @@ It is required to do the reset to the specified commit ID. It is also requires the creation a configuration file using the following command: +:: + SMCROUTE_NIC=(name of the nic) where name of the nic is the name used previously for the variable "name_if_2". diff --git a/docs/testing/user/userguide/06-apexlake_api.rst b/docs/testing/user/userguide/12-apexlake_api.rst index 35a1dbe3e..35a1dbe3e 100644 --- a/docs/testing/user/userguide/06-apexlake_api.rst +++ b/docs/testing/user/userguide/12-apexlake_api.rst diff --git a/docs/testing/user/userguide/07-nsb-overview.rst b/docs/testing/user/userguide/13-nsb-overview.rst index 19719f1a7..faac61f08 100644 --- a/docs/testing/user/userguide/07-nsb-overview.rst +++ b/docs/testing/user/userguide/13-nsb-overview.rst @@ -23,33 +23,53 @@ benchmarking with repeatable and deterministic methods. The Network Service Benchmarking (NSB) extends the yardstick framework to do VNF characterization and benchmarking in three different execution -environments viz., bare metal i.e. native Linux environment, standalone virtual +environments - bare metal i.e. native Linux environment, standalone virtual environment and managed virtualized environment (e.g. Open stack etc.). It also brings in the capability to interact with external traffic generators both hardware & software based for triggering and validating the traffic according to user defined profiles. NSB extension includes: - • Generic data models of Network Services, based on ETSI specs - • New Standalone context for VNF testing like SRIOV, OVS, OVS-DPDK etc - • Generic VNF configuration models and metrics implemented with Python + + - Generic data models of Network Services, based on ETSI spec (ETSI GS NFV-TST 001) + .. _ETSI GS NFV-TST 001: http://www.etsi.org/deliver/etsi_gs/NFV-TST/001_099/001/01.01.01_60/gs_nfv-tst001v010101p.pdf + + - New Standalone context for VNF testing like SRIOV, OVS, OVS-DPDK etc + + - Generic VNF configuration models and metrics implemented with Python classes - • Traffic generator features and traffic profiles - • L1-L3 state-less traffic profiles - • L4-L7 state-full traffic profiles - • Tunneling protocol / network overlay support - • Test case samples - • Ping - • Trex - • vPE,vCGNAT, vFirewall etc - ipv4 throughput, latency etc - • Traffic generators like Trex, ab/nginx, ixia, iperf etc - • KPIs for a given use case: - • System agent support for collecting NFvi KPI. This includes: - o CPU statistic - o Memory BW - o OVS-DPDK Stats - • Network KPIs – eg, inpackets, outpackets, thoughput, latency etc - • VNF KPIs – packet_in, packet_drop, packet_fwd etc + + - Traffic generator features and traffic profiles + + - L1-L3 state-less traffic profiles + + - L4-L7 state-full traffic profiles + + - Tunneling protocol / network overlay support + + - Test case samples + + - Ping + + - Trex + + - vPE,vCGNAT, vFirewall etc - ipv4 throughput, latency etc + + - Traffic generators like Trex, ab/nginx, ixia, iperf etc + + - KPIs for a given use case: + + - System agent support for collecting NFVi KPI. 
This includes: + + - CPU statistic + + - Memory BW + + - OVS-DPDK Stats + + - Network KPIs, e.g., inpackets, outpackets, thoughput, latency etc + + - VNF KPIs, e.g., packet_in, packet_drop, packet_fwd etc Architecture ============ @@ -72,18 +92,25 @@ makes an example how the real Network Operator use-case can map into ETSI Network service definition Network Service framework performs the necessary test steps. It may involve - o Interacting with traffic generator and providing the inputs on traffic + + - Interacting with traffic generator and providing the inputs on traffic type / packet structure to generate the required traffic as per the test case. Traffic profiles will be used for this. - o Executing the commands required for the test procedure and analyses the + + - Executing the commands required for the test procedure and analyses the command output for confirming whether the command got executed correctly or not. E.g. As per the test case, run the traffic for the given time period / wait for the necessary time delay - o Verify the test result. - o Validate the traffic flow from SUT - o Fetch the table / data from SUT and verify the value as per the test case - o Upload the logs from SUT onto the Test Harness server - o Read the KPI’s provided by particular VNF + + - Verify the test result. + + - Validate the traffic flow from SUT + + - Fetch the table / data from SUT and verify the value as per the test case + + - Upload the logs from SUT onto the Test Harness server + + - Read the KPI's provided by particular VNF Components of Network Service ------------------------------ @@ -124,14 +151,19 @@ The TREX tool can generate any kind of stateless traffic. +--------+ +-------+ +--------+ Supported testcases scenarios: -• Correlated UDP traffic using TREX traffic generator and replay VNF. - o using different IMIX configuration like pure voice, pure video traffic etc - o using different number IP flows like 1 flow, 1K, 16K, 64K, 256K, 1M flows - o Using different number of rules configured like 1 rule, 1K, 10K rules + + - Correlated UDP traffic using TREX traffic generator and replay VNF. + + - using different IMIX configuration like pure voice, pure video traffic etc + + - using different number IP flows like 1 flow, 1K, 16K, 64K, 256K, 1M flows + + - Using different number of rules configured like 1 rule, 1K, 10K rules For UDP correlated traffic following Key Performance Indicators are collected for every combination of test case parameters: - • RFC2544 throughput for various loss rate defined (1% is a default) + + - RFC2544 throughput for various loss rate defined (1% is a default) Graphical Overview ================== @@ -140,6 +172,7 @@ NSB Testing with yardstick framework facilitate performance testing of various VNFs provided. .. code-block:: console + +-----------+ | | +-----------+ | vPE | ->|TGen Port 0| @@ -156,22 +189,6 @@ VNFs provided. 
| Traffic | ->|TGen Port 1| | patterns | +-----------+ +-----------+ - Figure 1: Network Service - 2 server configuration - - -Install ======= -run the nsb_install.sh with root privileges - -Run === - -source ~/.bash_profile -cd <yardstick_repo>/yardstick/cmd -sudo -E ./NSBperf.py --vnf vpe --test tc_baremetal_rfc2544_ipv4_1flow_64B.yaml - -Development Environment ======================= + Figure 1: Network Service - 2 server configuration -Ubuntu 14.04, Ubuntu 16.04 diff --git a/docs/testing/user/userguide/08-nsb_installation.rst b/docs/testing/user/userguide/14-nsb_installation.rst index a390bb7d7..3eb17bbca 100644 --- a/docs/testing/user/userguide/08-nsb_installation.rst +++ b/docs/testing/user/userguide/14-nsb_installation.rst @@ -9,10 +9,6 @@ Yardstick - NSB Testing -Installation Abstract -------- -Yardstick supports installation on Ubuntu 14.04 or via a Docker image. The -installation procedure on Ubuntu 14.04 or via the docker image are detailed in -the section below. - The Network Service Benchmarking (NSB) extends the yardstick framework to do VNF characterization and benchmarking in three different execution environments viz., bare metal i.e. native Linux environment, standalone virtual @@ -32,48 +28,44 @@ The steps needed to run Yardstick with NSB testing are: Prerequisites ------------- -Refer chapter 08-instalaltion.rst for more information on yardstick +Refer to chapter Yardstick Installation for more information on yardstick prerequisites Several prerequisites are needed for Yardstick(VNF testing): -* Python Modules: pyzmq, pika. -* flex -* bison -* build-essential -* automake -* libtool -* librabbitmq-dev -* rabbitmq-server -* collectd -* intel-cmt-cat - -Installing Yardstick on Ubuntu 14.04 ------------------------------------- -.. _install-framework: +- Python Modules: pyzmq, pika. -You can install Yardstick framework directly on Ubuntu 14.04 or in an Ubuntu -14.04 Docker image. No matter which way you choose to install Yardstick -framework, the following installation steps are identical. +- flex -If you choose to use the Ubuntu 14.04 Docker image, You can pull the Ubuntu -14.04 Docker image from Docker hub: +- bison -:: +- build-essential - docker pull ubuntu:14.04 +- automake -Installing Yardstick framework -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ -Download source code and install python dependencies: +- libtool + +- librabbitmq-dev + +- rabbitmq-server + +- collectd + +- intel-cmt-cat + +Install Yardstick (NSB Testing) +------------------------------- + +Refer to chapter :doc:`04-installation` for more information on installing *Yardstick*. + +After *Yardstick* is installed, execute the "nsb_setup.sh" script to set up +NSB testing. :: - git clone https://gerrit.opnfv.org/gerrit/yardstick - cd yardstick ./nsb_setup.sh -It will automatically download all the packages needed for NSB Testing setup. +It will also automatically download all the packages needed for NSB Testing setup. System Topology: ----------------- @@ -95,19 +87,24 @@ OpenStack parameters and credentials Environment variables ^^^^^^^^^^^^^^^^^^^^^ + Before running Yardstick (NSB Testing) it is necessary to export traffic generator libraries. :: - source ~/.bash_profile + + source ~/.bash_profile Config yardstick conf ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ -cp ./etc/yardstick/yardstick.conf.sample /etc/yardstick/yardstick.conf -vi /etc/yardstick/yardstick.conf +:: + + cp ./etc/yardstick/yardstick.conf.sample /etc/yardstick/yardstick.conf + vi /etc/yardstick/yardstick.conf + +Add trex_path and bin_path in the 'nsb' section.
-Config yardstick.conf
::

  [DEFAULT]
@@ -128,10 +125,13 @@ Config yardstick.conf

Config pod.yaml describing Topology
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
Before executing Yardstick test cases, make sure that pod.yaml reflects the
topology and update all the required fields.

-copy /etc/yardstick/nodes/pod.yaml.nsb.example to /etc/yardstick/nodes/pod.yaml
+::
+
+  cp /etc/yardstick/nodes/pod.yaml.nsb.sample /etc/yardstick/nodes/pod.yaml

Config pod.yaml
::
@@ -202,30 +202,13 @@ Config pod.yaml

Enable yardstick virtual environment
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
Before executing yardstick test cases, make sure to activate yardstick
python virtual environment

::

-  source /opt/nsb_bin/yardstick_venv/bin/activate
-
-
-Examples and verifying the install
-----------------------------------
-It is recommended to verify that Yardstick was installed successfully
-by executing some simple commands and test samples. Before executing yardstick
-test cases make sure yardstick flavor and building yardstick-trusty-server
-image can be found in glance and openrc file is sourced. Below is an example
-invocation of yardstick help command and ping.py test sample:
-::
-
-  yardstick –h
-  yardstick task start samples/ping.yaml
-
-Each testing tool supported by Yardstick has a sample configuration file.
-These configuration files can be found in the **samples** directory.
-
-Default location for the output is ``/tmp/yardstick.out``.
+  source /opt/nsb_bin/yardstick_venv/bin/activate


Run Yardstick - Network Service Testcases
------------------------------------------
@@ -238,7 +221,8 @@ NS testing - using NSBperf CLI
  source /opt/nsb_setup/yardstick_venv/bin/activate
  PYTHONPATH: ". ~/.bash_profile"
  cd <yardstick_repo>/yardstick/cmd
- Execute command: ./NSPerf.py -h
+
+ Execute command: ./NSBperf.py -h
      ./NSBperf.py --vnf <selected vnf> --test <rfc test>
      eg: ./NSBperf.py --vnf vpe --test tc_baremetal_rfc2544_ipv4_1flow_64B.yaml
@@ -248,6 +232,7 @@ NS testing - using yardstick CLI
  source /opt/nsb_setup/yardstick_venv/bin/activate
  PYTHONPATH: ". ~/.bash_profile"
- Go to test case forlder type we want to execute.
+
+Go to the test case folder of the type we want to execute,
      e.g. <yardstick repo>/samples/vnf_samples/nsut/<vnf>/
      run: yardstick --debug task start <test_case.yaml>
diff --git a/docs/testing/user/userguide/13-list-of-tcs.rst b/docs/testing/user/userguide/15-list-of-tcs.rst
index 1b5806cd9..1b5806cd9 100644
--- a/docs/testing/user/userguide/13-list-of-tcs.rst
+++ b/docs/testing/user/userguide/15-list-of-tcs.rst
diff --git a/docs/testing/user/userguide/images/Yardstick_framework_architecture_in_D.png b/docs/testing/user/userguide/images/Yardstick_framework_architecture_in_D.png
Binary files differ
new file mode 100644
index 000000000..f4065cb5e
--- /dev/null
+++ b/docs/testing/user/userguide/images/Yardstick_framework_architecture_in_D.png
diff --git a/docs/testing/user/userguide/index.rst b/docs/testing/user/userguide/index.rst
index 1b963af61..8ac1c7bdb 100644
--- a/docs/testing/user/userguide/index.rst
+++ b/docs/testing/user/userguide/index.rst
@@ -1,27 +1,32 @@
+.. _yardstick-userguide:
+
.. This work is licensed under a Creative Commons Attribution 4.0 International
.. License.
.. http://creativecommons.org/licenses/by/4.0
.. (c) OPNFV, Ericsson AB and others.

-==================
+===========================================
 Performance Testing User Guide (Yardstick)
-==================
+===========================================

..
toctree:: - :maxdepth: 2 + :maxdepth: 4 + :numbered: 01-introduction 02-methodology 03-architecture - 04-vtc-overview - 05-apexlake_installation - 06-apexlake_api - 07-nsb-overview - 08-nsb_installation - 09-installation - 10-yardstick_plugin - 11-result-store-InfluxDB - 12-grafana - 13-list-of-tcs + 04-installation + 05-yardstick_plugin + 06-result-store-InfluxDB + 07-grafana + 08-api + 09-yardstick_user_interface + 10-vtc-overview + 11-apexlake_installation + 12-apexlake_api + 13-nsb-overview + 14-nsb_installation + 15-list-of-tcs glossary references diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc001.rst b/docs/testing/user/userguide/opnfv_yardstick_tc001.rst index b53c508a6..ef2382d4f 100644 --- a/docs/testing/user/userguide/opnfv_yardstick_tc001.rst +++ b/docs/testing/user/userguide/opnfv_yardstick_tc001.rst @@ -1,4 +1,4 @@ -s work is licensed under a Creative Commons Attribution 4.0 International +.. This work is licensed under a Creative Commons Attribution 4.0 International .. License. .. http://creativecommons.org/licenses/by/4.0 .. (c) OPNFV, Ericsson AB and others. diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc005.rst b/docs/testing/user/userguide/opnfv_yardstick_tc005.rst index 1c2d71d81..fc75c0da0 100644 --- a/docs/testing/user/userguide/opnfv_yardstick_tc005.rst +++ b/docs/testing/user/userguide/opnfv_yardstick_tc005.rst @@ -1,4 +1,4 @@ -. This work is licensed under a Creative Commons Attribution 4.0 International +.. This work is licensed under a Creative Commons Attribution 4.0 International .. License. .. http://creativecommons.org/licenses/by/4.0 .. (c) OPNFV, Huawei Technologies Co.,Ltd and others. diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc076.rst b/docs/testing/user/userguide/opnfv_yardstick_tc076.rst index ac7bde794..1e7647fa6 100644 --- a/docs/testing/user/userguide/opnfv_yardstick_tc076.rst +++ b/docs/testing/user/userguide/opnfv_yardstick_tc076.rst @@ -19,14 +19,20 @@ Yardstick Test Case Description TC076 | | TCP segment error rate and UDP datagram error rate | | | | +--------------+--------------------------------------------------------------+ -|test purpose | Monitor network metrics provided by the kernel in a host and | -| | calculate IP datagram error rate, ICMP message error rate, | -| | TCP segment error rate and UDP datagram error rate. | +|test purpose | The purpose of TC076 is to evaluate the IaaS network | +| | reliability with regards to IP datagram error rate, ICMP | +| | message error rate, TCP segment error rate and UDP datagram | +| | error rate. | | | | -+--------------+--------------------------------------------------------------+ -|configuration | file: opnfv_yardstick_tc076.yaml | +| | TC076 monitors network metrics provided by the Linux kernel | +| | in a host and calculates IP datagram error rate, ICMP | +| | message error rate, TCP segment error rate and UDP datagram | +| | error rate. | | | | -| | There is no additional configuration to be set for this TC. | +| | The purpose is also to be able to spot the trends. | +| | Test results, graphs and similar shall be stored for | +| | comparison reasons and product evolution understanding | +| | between different OPNFV versions and/or configurations. | | | | +--------------+--------------------------------------------------------------+ |test tool | nstat | @@ -34,6 +40,25 @@ Yardstick Test Case Description TC076 | | nstat is a simple tool to monitor kernel snmp counters and | | | network interface statistics. 
                                                                |
|              |                                                              |
+|              | (nstat is not always part of a Linux distribution, hence it |
+|              | needs to be installed. nstat is provided by the iproute2    |
+|              | collection, which is usually also the name of the package   |
+|              | in many Linux distributions. As an example see the          |
+|              | /yardstick/tools/ directory for how to generate a Linux     |
+|              | image with iproute2 included.)                               |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|test          | Ping packets (ICMP protocol's mandatory ECHO_REQUEST        |
+|description   | datagram) are sent from host VM to target VM(s) to elicit   |
+|              | ICMP ECHO_RESPONSE.                                          |
+|              |                                                              |
+|              | nstat is invoked on the target VM to monitor network        |
+|              | metrics provided by the Linux kernel.                        |
++--------------+--------------------------------------------------------------+
+|configuration | file: opnfv_yardstick_tc076.yaml                             |
+|              |                                                              |
+|              | There is no additional configuration to be set for this TC. |
+|              |                                                              |
+--------------+--------------------------------------------------------------+
|references    | nstat man page                                               |
|              |                                                              |
@@ -43,19 +68,37 @@ Yardstick Test Case Description TC076
|applicability | This test case is mainly for monitoring network metrics.    |
|              |                                                              |
+--------------+--------------------------------------------------------------+
-|pre_test      |                                                              |
-|conditions    |                                                              |
+|pre_test      | The test case image needs to be installed into Glance       |
+|conditions    | with the nstat tool (iproute2) included in it.               |
+|              |                                                              |
+|              | No POD specific requirements have been identified.           |
|              |                                                              |
+--------------+--------------------------------------------------------------+
|test sequence | description and expected result                              |
|              |                                                              |
+--------------+--------------------------------------------------------------+
-|step 1        | The pod is available.                                        |
-|              | Nstat is invoked and logs are produced and stored.           |
+|step 1        | Two host VMs are booted, as server and client.               |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|step 2        | Yardstick is connected to the server VM via ssh.             |
+|              | The 'ping_benchmark' bash script is copied from the Jump     |
+|              | Host to the server VM via the ssh tunnel.                    |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|step 3        | Ping is invoked. Ping packets are sent from server VM to     |
+|              | client VM. RTT results are calculated and checked against    |
+|              | the SLA. nstat is invoked on the client VM to monitor        |
+|              | network metrics provided by the Linux kernel. IP datagram    |
+|              | error rate, ICMP message error rate, TCP segment error rate  |
+|              | and UDP datagram error rate are calculated.                  |
+|              | Logs are produced and stored.                                |
|              |                                                              |
|              | Result: Logs are stored.                                     |
|              |                                                              |
+--------------+--------------------------------------------------------------+
+|step 4        | Two host VMs are deleted.                                    |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
|test verdict  | None.                                                        |
|              |                                                              |
+--------------+--------------------------------------------------------------+
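As a side note for readers who want to check the same counters by hand, the sketch
below shows a minimal manual run of nstat on a VM with iproute2 installed. It is an
illustration only; the target address is a placeholder and the exact counters and
formulas used by the Yardstick scenario may differ.

::

  # clear nstat's history so the next call reports only fresh deltas
  nstat -n

  # generate some traffic towards the peer VM (address is an example)
  ping -c 100 10.0.0.5

  # print the kernel SNMP counters that changed since the reset, e.g.
  # IpInReceives, IpInHdrErrors, IcmpInErrors, TcpInErrs, UdpInErrors;
  # an IP datagram error rate can then be approximated as
  # IpInHdrErrors / IpInReceives
  nstat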