Diffstat (limited to 'docs/testing/user/userguide')
-rwxr-xr-x  docs/testing/user/userguide/01-introduction.rst  32
-rwxr-xr-x  docs/testing/user/userguide/03-architecture.rst  22
-rw-r--r--  docs/testing/user/userguide/04-installation.rst  292
-rw-r--r--  docs/testing/user/userguide/05-operation.rst  296
-rw-r--r--  docs/testing/user/userguide/06-yardstick-plugin.rst (renamed from docs/testing/user/userguide/05-yardstick_plugin.rst)  68
-rw-r--r--  docs/testing/user/userguide/07-result-store-InfluxDB.rst (renamed from docs/testing/user/userguide/06-result-store-InfluxDB.rst)  26
-rw-r--r--  docs/testing/user/userguide/08-grafana.rst (renamed from docs/testing/user/userguide/07-grafana.rst)  6
-rw-r--r--  docs/testing/user/userguide/09-api.rst (renamed from docs/testing/user/userguide/08-api.rst)  115
-rw-r--r--  docs/testing/user/userguide/10-yardstick-user-interface.rst (renamed from docs/testing/user/userguide/09-yardstick_user_interface.rst)  5
-rw-r--r--  docs/testing/user/userguide/11-vtc-overview.rst (renamed from docs/testing/user/userguide/10-vtc-overview.rst)  14
-rw-r--r--  docs/testing/user/userguide/12-nsb-overview.rst (renamed from docs/testing/user/userguide/11-nsb-overview.rst)  31
-rw-r--r--  docs/testing/user/userguide/13-nsb-installation.rst (renamed from docs/testing/user/userguide/12-nsb_installation.rst)  250
-rw-r--r--  docs/testing/user/userguide/14-nsb-operation.rst (renamed from docs/testing/user/userguide/13-nsb_operation.rst)  65
-rw-r--r--  docs/testing/user/userguide/15-list-of-tcs.rst  10
-rw-r--r--  docs/testing/user/userguide/index.rst  19
-rw-r--r--  docs/testing/user/userguide/opnfv_yardstick_tc050.rst  52
16 files changed, 725 insertions, 578 deletions
diff --git a/docs/testing/user/userguide/01-introduction.rst b/docs/testing/user/userguide/01-introduction.rst
index c1d5def98..d846e759c 100755
--- a/docs/testing/user/userguide/01-introduction.rst
+++ b/docs/testing/user/userguide/01-introduction.rst
@@ -42,43 +42,47 @@ This document consists of the following chapters:
* Chapter :doc:`02-methodology` describes the methodology implemented by the
*Yardstick* Project for :term:`NFVI` verification.
-* Chapter :doc:`03-architecture` provides information on the software architecture
- of *Yardstick*.
+* Chapter :doc:`03-architecture` provides information on the software
+ architecture of *Yardstick*.
* Chapter :doc:`04-installation` provides instructions to install *Yardstick*.
-* Chapter :doc:`05-yardstick_plugin` provides information on how to integrate
+* Chapter :doc:`05-operation` provides information on how to use *Yardstick*
+ to run and create testcases.
+
+* Chapter :doc:`06-yardstick-plugin` provides information on how to integrate
other OPNFV testing projects into *Yardstick*.
-* Chapter :doc:`06-result-store-InfluxDB` provides inforamtion on how to run
+* Chapter :doc:`07-result-store-InfluxDB` provides information on how to run
plug-in test cases and store test results into community's InfluxDB.
-* Chapter :doc:`07-grafana` provides inforamtion on *Yardstick* grafana dashboard
- and how to add a dashboard into *Yardstick* grafana dashboard.
+* Chapter :doc:`08-grafana` provides information on the *Yardstick* Grafana
+  dashboard and how to add a dashboard to it.
-* Chapter :doc:`08-api` provides inforamtion on *Yardstick* ReST API and how to
+* Chapter :doc:`09-api` provides information on *Yardstick* ReST API and how to
use *Yardstick* API.
-* Chapter :doc:`09-yardstick_user_interface` provides inforamtion on how to use
+* Chapter :doc:`10-yardstick-user-interface` provides information on how to use
yardstick report CLI to view the test result in table format and also values
pinned on to a graph
-* Chapter :doc:`10-vtc-overview` provides information on the :term:`VTC`.
+* Chapter :doc:`11-vtc-overview` provides information on the :term:`VTC`.
-* Chapter :doc:`13-nsb-overview` describes the methodology implemented by the
+* Chapter :doc:`12-nsb-overview` describes the methodology implemented by the
Yardstick - Network service benchmarking to test real world use cases for a
given VNF.
-* Chapter :doc:`14-nsb_installation` provides instructions to install
- *Yardstick - Network service benchmarking testing*.
+* Chapter :doc:`13-nsb-installation` provides instructions to install
+ *Yardstick - Network Service Benchmarking (NSB) testing*.
+
+* Chapter :doc:`14-nsb-operation` provides information on running *NSB*.
* Chapter :doc:`15-list-of-tcs` includes a list of available *Yardstick* test
cases.
-
Contact Yardstick
=================
Feedback? `Contact us`_
-.. _Contact us: opnfv-users@lists.opnfv.org
+.. _Contact us: mailto:opnfv-users@lists.opnfv.org?subject="[yardstick]"
diff --git a/docs/testing/user/userguide/03-architecture.rst b/docs/testing/user/userguide/03-architecture.rst
index 8336b609d..622002ee4 100755
--- a/docs/testing/user/userguide/03-architecture.rst
+++ b/docs/testing/user/userguide/03-architecture.rst
@@ -9,8 +9,9 @@ Architecture
Abstract
========
-This chapter describes the yardstick framework software architecture. we will introduce it from Use-Case View,
-Logical View, Process View and Deployment View. More technical details will be introduced in this chapter.
+This chapter describes the Yardstick framework software architecture. We will
+introduce it from the Use-Case View, Logical View, Process View and Deployment
+View, along with more technical details.
Overview
========
@@ -23,8 +24,8 @@ files. Yardstick is inspired by Rally. Yardstick is intended to run on a
computer with access and credentials to a cloud. The test case is described
in a configuration file given as an argument.
-How it works: the benchmark task configuration file is parsed and converted into
-an internal model. The context part of the model is converted into a Heat
+How it works: the benchmark task configuration file is parsed and converted
+into an internal model. The context part of the model is converted into a Heat
template and deployed into a stack. Each scenario is run using a runner, either
serially or in parallel. Each runner runs in its own subprocess executing
commands in a VM using SSH. The output of each scenario is written as json
@@ -43,13 +44,15 @@ names, image names, affinity rules and network configurations. A context is
converted into a simplified Heat template, which is used to deploy onto the
Openstack environment.
-**Data** - Output produced by running a benchmark, written to a file in json format
+**Data** - Output produced by running a benchmark, written to a file in json
+format
**Runner** - Logic that determines how a test scenario is run and reported, for
example the number of test iterations, input value stepping and test duration.
Predefined runner types exist for re-usage, see `Runner types`_.
-**Scenario** - Type/class of measurement for example Ping, Pktgen, (Iperf, LmBench, ...)
+**Scenario** - Type/class of measurement for example Ping, Pktgen, (Iperf,
+LmBench, ...)
**SLA** - Relates to what result boundary a test case must meet to pass. For
example a latency limit, amount or ratio of lost packets and so on. Action
@@ -128,8 +131,8 @@ Snippet of an Iteration runner configuration:
Use-Case View
=============
Yardstick Use-Case View shows two kinds of users. One is the Tester who will
-do testing in cloud, the other is the User who is more concerned with test result
-and result analyses.
+do testing in a cloud, the other is the User who is more concerned with test
+results and result analyses.
For testers, they will run a single test case or test case suite to verify
infrastructure compliance or benchmark their own infrastructure performance.
@@ -254,7 +257,8 @@ Yardstick Directory structure
*tools/* - Currently contains tools to build image for VMs which are deployed
by Heat. Currently contains how to build the yardstick-trusty-server
- image with the different tools that are needed from within the image.
+ image with the different tools that are needed from within the
+ image.
*plugin/* - Plug-in configuration files are stored here.
diff --git a/docs/testing/user/userguide/04-installation.rst b/docs/testing/user/userguide/04-installation.rst
index cac814667..a4846230e 100644
--- a/docs/testing/user/userguide/04-installation.rst
+++ b/docs/testing/user/userguide/04-installation.rst
@@ -39,18 +39,18 @@ Several prerequisites are needed for Yardstick:
4. Connectivity from the Jumphost to the SUT public/external network
.. note:: *Jumphost* refers to any server which meets the previous
-requirements. Normally it is the same server from where the OPNFV
-deployment has been triggered.
+ requirements. Normally it is the same server from where the OPNFV
+ deployment has been triggered.
.. warning:: Connectivity from Jumphost is essential and it is of paramount
-importance to make sure it is working before even considering to install
-and run Yardstick. Make also sure you understand how your networking is
-designed to work.
+   importance to make sure it is working before even considering installing
+   and running Yardstick. Also make sure you understand how your networking is
+   designed to work.
.. note:: If your Jumphost is operating behind a company http proxy and/or
-Firewall, please first consult `Proxy Support`_ section which is towards the
-end of this document. That section details some tips/tricks which *may* be of
-help in a proxified environment.
+ Firewall, please first consult `Proxy Support`_ section which is towards
+ the end of this document. That section details some tips/tricks which *may*
+ be of help in a proxified environment.
Install Yardstick using Docker (first option) (**recommended**)
@@ -85,27 +85,30 @@ Run the Docker image to get a Yardstick container::
docker run -itd --privileged -v /var/run/docker.sock:/var/run/docker.sock \
-p 8888:5000 --name yardstick opnfv/yardstick:stable
-.. table:: Description of the parameters used with ``docker run`` command
-
- ======================= ====================================================
- Parameters Detail
- ======================= ====================================================
- -itd -i: interactive, Keep STDIN open even if not
- attached
- -t: allocate a pseudo-TTY detached mode, in the
- background
- ======================= ====================================================
- --privileged If you want to build ``yardstick-image`` in
- Yardstick container, this parameter is needed
- ======================= ====================================================
- -p 8888:5000 Redirect the a host port (8888) to a container port
- (5000)
- ======================= ====================================================
- -v /var/run/docker.sock If you want to use yardstick env grafana/influxdb to
- :/var/run/docker.sock create a grafana/influxdb container out of Yardstick
- container
- ======================= ====================================================
- --name yardstick The name for this container
+Description of the parameters used with ``docker run`` command
+
+ +------------------------+--------------------------------------------------+
+ | Parameters             | Detail                                           |
+ +========================+==================================================+
+ | -itd                   | -i: interactive, Keep STDIN open even if not     |
+ |                        | attached                                         |
+ |                        +--------------------------------------------------+
+ |                        | -t: allocate a pseudo-TTY                        |
+ |                        +--------------------------------------------------+
+ |                        | -d: detached mode, run the container in the      |
+ |                        | background                                       |
+ +------------------------+--------------------------------------------------+
+ | --privileged           | If you want to build ``yardstick-image`` in the  |
+ |                        | Yardstick container, this parameter is needed    |
+ +------------------------+--------------------------------------------------+
+ | -p 8888:5000           | Redirect a host port (8888) to a container port  |
+ |                        | (5000)                                           |
+ +------------------------+--------------------------------------------------+
+ | -v /var/run/docker.sock| If you want to use ``yardstick env``             |
+ | :/var/run/docker.sock  | grafana/influxdb to create a Grafana/InfluxDB    |
+ |                        | container outside of the Yardstick container     |
+ +------------------------+--------------------------------------------------+
+ | --name yardstick       | The name for this container                      |
+ +------------------------+--------------------------------------------------+
+
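+To get a shell inside the running Yardstick container (assuming the container
+name used above), standard Docker commands can be used, e.g.::
+
+   docker exec -it yardstick /bin/bash
+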
If the host is restarted
^^^^^^^^^^^^^^^^^^^^^^^^
@@ -135,18 +138,18 @@ automatically::
yardstick env prepare
.. note:: Since Euphrates release, the above command will not be able to
-automatically configure the ``/etc/yardstick/openstack.creds`` file. So before
-running the above command, it is necessary to create the
-``/etc/yardstick/openstack.creds`` file and save OpenStack environment
-variables into it manually. If you have the openstack credential file saved
-outside the Yardstick Docker container, you can do this easily by mapping the
-credential file into Yardstick container using::
+ automatically configure the ``/etc/yardstick/openstack.creds`` file. So before
+ running the above command, it is necessary to create the
+ ``/etc/yardstick/openstack.creds`` file and save OpenStack environment
+ variables into it manually. If you have the openstack credential file saved
+ outside the Yardstick Docker container, you can do this easily by mapping the
+ credential file into Yardstick container using::
- '-v /path/to/credential_file:/etc/yardstick/openstack.creds'
+ '-v /path/to/credential_file:/etc/yardstick/openstack.creds'
-when running the Yardstick container. For details of the required OpenStack
-environment variables please refer to section `Export OpenStack environment
-variables`_.
+ when running the Yardstick container. For details of the required OpenStack
+ environment variables please refer to section `Export OpenStack environment
+ variables`_.
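+
+For illustration, a ``/etc/yardstick/openstack.creds`` file typically contains
+the standard OpenStack credential variables (placeholder values shown below;
+see `Export OpenStack environment variables`_ for the authoritative list)::
+
+   export OS_AUTH_URL=http://<keystone_ip>:5000/v3
+   export OS_USERNAME=admin
+   export OS_PASSWORD=<password>
+   export OS_PROJECT_NAME=admin
+   export OS_USER_DOMAIN_NAME=Default
+   export OS_PROJECT_DOMAIN_NAME=Default
+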
The ``env prepare`` command may take up to 6-8 minutes to finish building
yardstick-image and other environment preparation. Meanwhile if you wish to
@@ -222,8 +225,8 @@ Yardstick is installed::
sudo -EH tools/yardstick-img-modify tools/ubuntu-server-cloudimg-modify.sh
.. warning:: Before building the guest image inside the Yardstick container,
-make sure the container is granted with privilege. The script will create files
-by default in ``/tmp/workspace/yardstick`` and the files will be owned by root.
+ make sure the container is granted with privilege. The script will create files
+ by default in ``/tmp/workspace/yardstick`` and the files will be owned by root.
The created image can be added to OpenStack using the OpenStack client or via
the OpenStack Dashboard::
@@ -270,7 +273,7 @@ For usage of Yardstick GUI, please watch our demo video at
`Yardstick GUI demo`_.
.. note:: The Yardstick GUI is still in development, the GUI layout and
-features may change.
+ features may change.
Delete the Yardstick container
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -433,7 +436,7 @@ of Yardstick ``help`` command and ``ping.py`` test sample::
yardstick task start samples/ping.yaml
.. note:: The above commands could be run in both the Yardstick container and
-the Ubuntu directly.
+ the Ubuntu directly.
Each testing tool supported by Yardstick has a sample configuration file.
These configuration files can be found in the ``samples`` directory.
@@ -468,10 +471,10 @@ Then you can run a test case and visit http://host_ip:1948
(``admin``/``admin``) to see the results.
.. note:: Executing ``yardstick env`` command to deploy InfluxDB and Grafana
-requires Jumphost's docker API version => 1.24. Run the following command to
-check the docker API version on the Jumphost::
+   requires Jumphost's docker API version >= 1.24. Run the following command to
+   check the docker API version on the Jumphost::
- docker version
+ docker version
Manual deployment of InfluxDB and Grafana containers
@@ -537,200 +540,6 @@ Deploy InfluxDB and Grafana directly in Ubuntu (**Todo**)
---------------------------------------------------------
-Yardstick common CLI
---------------------
-
-List test cases
-^^^^^^^^^^^^^^^
-
-``yardstick testcase list``: This command line would list all test cases in
-Yardstick. It would show like below::
-
- +---------------------------------------------------------------------------------------
- | Testcase Name | Description
- +---------------------------------------------------------------------------------------
- | opnfv_yardstick_tc001 | Measure network throughput using pktgen
- | opnfv_yardstick_tc002 | measure network latency using ping
- | opnfv_yardstick_tc005 | Measure Storage IOPS, throughput and latency using fio.
- ...
- +---------------------------------------------------------------------------------------
-
-
-Show a test case config file
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-Take opnfv_yardstick_tc002 for an example. This test case measure network
-latency. You just need to type in ``yardstick testcase show
-opnfv_yardstick_tc002``, and the console would show the config yaml of this
-test case::
-
- ---
-
- schema: "yardstick:task:0.1"
- description: >
- Yardstick TC002 config file;
- measure network latency using ping;
-
- {% set image = image or "cirros-0.3.5" %}
-
- {% set provider = provider or none %}
- {% set physical_network = physical_network or 'physnet1' %}
- {% set segmentation_id = segmentation_id or none %}
- {% set packetsize = packetsize or 100 %}
-
- scenarios:
- {% for i in range(2) %}
- -
- type: Ping
- options:
- packetsize: {{packetsize}}
- host: athena.demo
- target: ares.demo
-
- runner:
- type: Duration
- duration: 60
- interval: 10
-
- sla:
- max_rtt: 10
- action: monitor
- {% endfor %}
-
- context:
- name: demo
- image: {{image}}
- flavor: yardstick-flavor
- user: cirros
-
- placement_groups:
- pgrp1:
- policy: "availability"
-
- servers:
- athena:
- floating_ip: true
- placement: "pgrp1"
- ares:
- placement: "pgrp1"
-
- networks:
- test:
- cidr: '10.0.1.0/24'
- {% if provider == "vlan" %}
- provider: {{provider}}
- physical_network: {{physical_network}}å
- {% if segmentation_id %}
- segmentation_id: {{segmentation_id}}
- {% endif %}
- {% endif %}
-
-
-Start a task to run yardstick test case
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-If you want run a test case, then you need to use ``yardstick task start
-<test_case_path>`` this command support some parameters as below::
-
- +---------------------+--------------------------------------------------+
- | Parameters | Detail |
- +=====================+==================================================+
- | -d | show debug log of yardstick running |
- | | |
- +---------------------+--------------------------------------------------+
- | --task-args | If you want to customize test case parameters, |
- | | use "--task-args" to pass the value. The format |
- | | is a json string with parameter key-value pair. |
- | | |
- +---------------------+--------------------------------------------------+
- | --task-args-file | If you want to use yardstick |
- | | env prepare command(or |
- | | related API) to load the |
- +---------------------+--------------------------------------------------+
- | --parse-only | |
- | | |
- | | |
- +---------------------+--------------------------------------------------+
- | --output-file \ | Specify where to output the log. if not pass, |
- | OUTPUT_FILE_PATH | the default value is |
- | | "/tmp/yardstick/yardstick.log" |
- | | |
- +---------------------+--------------------------------------------------+
- | --suite \ | run a test suite, TEST_SUITE_PATH specify where |
- | TEST_SUITE_PATH | the test suite locates |
- | | |
- +---------------------+--------------------------------------------------+
-
-
-Run Yardstick in a local environment
-------------------------------------
-
-We also have a guide about how to run Yardstick in a local environment.
-This work is contributed by Tapio Tallgren.
-You can find this guide at `How to run Yardstick in a local environment`_.
-
-
-Create a test suite for Yardstick
-------------------------------------
-
-A test suite in yardstick is a yaml file which include one or more test cases.
-Yardstick is able to support running test suite task, so you can customize your
-own test suite and run it in one task.
-
-``tests/opnfv/test_suites`` is the folder where Yardstick puts CI test suite.
-A typical test suite is like below (the ``fuel_test_suite.yaml`` example)::
-
- ---
- # Fuel integration test task suite
-
- schema: "yardstick:suite:0.1"
-
- name: "fuel_test_suite"
- test_cases_dir: "samples/"
- test_cases:
- -
- file_name: ping.yaml
- -
- file_name: iperf3.yaml
-
-As you can see, there are two test cases in the ``fuel_test_suite.yaml``. The
-``schema`` and the ``name`` must be specified. The test cases should be listed
-via the tag ``test_cases`` and their relative path is also marked via the tag
-``test_cases_dir``.
-
-Yardstick test suite also supports constraints and task args for each test
-case. Here is another sample (the ``os-nosdn-nofeature-ha.yaml`` example) to
-show this, which is digested from one big test suite::
-
- ---
-
- schema: "yardstick:suite:0.1"
-
- name: "os-nosdn-nofeature-ha"
- test_cases_dir: "tests/opnfv/test_cases/"
- test_cases:
- -
- file_name: opnfv_yardstick_tc002.yaml
- -
- file_name: opnfv_yardstick_tc005.yaml
- -
- file_name: opnfv_yardstick_tc043.yaml
- constraint:
- installer: compass
- pod: huawei-pod1
- task_args:
- huawei-pod1: '{"pod_info": "etc/yardstick/.../pod.yaml",
- "host": "node4.LF","target": "node5.LF"}'
-
-As you can see in test case ``opnfv_yardstick_tc043.yaml``, there are two
-tags, ``constraint`` and ``task_args``. ``constraint`` is to specify which
-installer or pod it can be run in the CI environment. ``task_args`` is to
-specify the task arguments for each pod.
-
-All in all, to create a test suite in Yardstick, you just need to create a
-yaml file and add test cases, constraint or task arguments if necessary.
-
-
Proxy Support
-------------
@@ -790,7 +599,7 @@ stop and delete the container::
sudo docker rm yardstick
.. warning:: Be careful, the above ``rm`` command will delete the container
-completely. Everything on this container will be lost.
+ completely. Everything on this container will be lost.
Then follow the previous instructions `Prepare the Yardstick container`_ to
rebuild the Yardstick container.
@@ -804,4 +613,3 @@ References
.. _`Cirros 0.3.5`: http://download.cirros-cloud.net/0.3.5/cirros-0.3.5-x86_64-disk.img
.. _`Ubuntu 16.04`: https://cloud-images.ubuntu.com/xenial/current/xenial-server-cloudimg-amd64-disk1.img
.. _`Yardstick GUI demo`: https://www.youtube.com/watch?v=M3qbJDp6QBk
-.. _`How to run Yardstick in a local environment`: https://wiki.opnfv.org/display/yardstick/How+to+run+Yardstick+in+a+local+environment
diff --git a/docs/testing/user/userguide/05-operation.rst b/docs/testing/user/userguide/05-operation.rst
new file mode 100644
index 000000000..f390d1643
--- /dev/null
+++ b/docs/testing/user/userguide/05-operation.rst
@@ -0,0 +1,296 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, Intel, Ericsson AB, Huawei Technologies Co. Ltd and others.
+
+..
+ Convention for heading levels in Yardstick:
+ ======= Heading 0 (reserved for the title in a document)
+ ------- Heading 1
+ ^^^^^^^ Heading 2
+ +++++++ Heading 3
+ ''''''' Heading 4
+ Avoid deeper levels because they do not render well.
+
+===============
+Yardstick Usage
+===============
+
+Once you have yardstick installed, you can start using it to run testcases
+immediately, through the CLI. You can also define and run new testcases and
+test suites. This chapter details basic usage (running testcases), as well as
+more advanced usage (creating your own testcases).
+
+Yardstick common CLI
+--------------------
+
+List test cases
+^^^^^^^^^^^^^^^
+
+``yardstick testcase list``: This command line will list all test cases in
+Yardstick. The output is similar to the listing below::
+
+ +---------------------------------------------------------------------------------------
+ | Testcase Name         | Description
+ +---------------------------------------------------------------------------------------
+ | opnfv_yardstick_tc001 | Measure network throughput using pktgen
+ | opnfv_yardstick_tc002 | measure network latency using ping
+ | opnfv_yardstick_tc005 | Measure Storage IOPS, throughput and latency using fio.
+ ...
+ +---------------------------------------------------------------------------------------
+
+
+Show a test case config file
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Take opnfv_yardstick_tc002 as an example. This test case measures network
+latency. You just need to type in ``yardstick testcase show
+opnfv_yardstick_tc002``, and the console will show the config yaml of this
+test case:
+
+.. literalinclude::
+ ../../../../tests/opnfv/test_cases/opnfv_yardstick_tc002.yaml
+ :lines: 9-
+
+Run a Yardstick test case
+^^^^^^^^^^^^^^^^^^^^^^^^^
+
+If you want to run a test case, you can use ``yardstick task start
+<test_case_path>``. This command supports the parameters listed below (an
+example invocation follows the table):
+
+ +---------------------+--------------------------------------------------+
+ | Parameters          | Detail                                           |
+ +=====================+==================================================+
+ | -d                  | Show the debug log of Yardstick running          |
+ +---------------------+--------------------------------------------------+
+ | --task-args         | If you want to customize test case parameters,   |
+ |                     | use "--task-args" to pass the value. The format  |
+ |                     | is a json string with parameter key-value pairs. |
+ +---------------------+--------------------------------------------------+
+ | --task-args-file    | If you want to pass test case parameters from a  |
+ |                     | file, use "--task-args-file" to specify the file |
+ |                     | path                                             |
+ +---------------------+--------------------------------------------------+
+ | --parse-only        | Parse the test case config file and exit without |
+ |                     | running the test                                 |
+ +---------------------+--------------------------------------------------+
+ | --output-file \     | Specify where to output the log. If not passed,  |
+ | OUTPUT_FILE_PATH    | the default value is                             |
+ |                     | "/tmp/yardstick/yardstick.log"                   |
+ +---------------------+--------------------------------------------------+
+ | --suite \           | Run a test suite; TEST_SUITE_PATH specifies      |
+ | TEST_SUITE_PATH     | where the test suite is located                  |
+ +---------------------+--------------------------------------------------+
+
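+For example, the following invocation runs ``opnfv_yardstick_tc002`` (which
+takes a ``packetsize`` template argument, see the config shown above) with a
+custom packet size and log location::
+
+   yardstick task start tests/opnfv/test_cases/opnfv_yardstick_tc002.yaml \
+       --task-args '{"packetsize": 200}' \
+       --output-file /tmp/yardstick-tc002.log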
+
+Run Yardstick in a local environment
+------------------------------------
+
+We also have a guide about `How to run Yardstick in a local environment`_.
+This work is contributed by Tapio Tallgren.
+
+Create a new testcase for Yardstick
+-----------------------------------
+
+As a user, you may want to define a new testcase in addition to the ones
+already available in Yardstick. This section will show you how to do this.
+
+Each testcase consists of two sections:
+
+* ``scenarios`` describes what will be done by the test
+* ``context`` describes the environment in which the test will be run.
+
+Defining the testcase scenarios
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+TODO
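+
+As a rough reference, a minimal ping scenario, following the structure of
+``opnfv_yardstick_tc002`` shown above, looks like this::
+
+   scenarios:
+   -
+     type: Ping
+     options:
+       packetsize: 200
+     host: athena.demo
+     target: ares.demo
+     runner:
+       type: Duration
+       duration: 60
+       interval: 10
+     sla:
+       max_rtt: 10
+       action: monitor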
+
+Defining the testcase context(s)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Each testcase consists of one or more contexts, which describe the environment
+in which the testcase will be run.
+Current available contexts are:
+
+* ``Dummy``: this is a no-op context, and is used when there is no environment
+ to set up e.g. when testing whether OpenStack services are available
+* ``Node``: this context is used to perform operations on baremetal servers
+* ``Heat``: uses OpenStack to provision the required hosts, networks, etc.
+* ``Kubernetes``: uses Kubernetes to provision the resources required for the
+ test.
+
+Regardless of the context type, the ``context`` section of the testcase will
+consist of the following::
+
+   context:
+     name: demo
+     type: Dummy|Node|Heat|Kubernetes
+
+The content of the ``context`` section will vary based on the context type.
+
+Dummy Context
++++++++++++++
+
+No additional information is required for the Dummy context::
+
+   context:
+     name: my_context
+     type: Dummy
+
+Node Context
+++++++++++++
+
+TODO
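+
+As a rough sketch (the full set of supported attributes is not described
+here), a Node context points at a pod file describing the baremetal nodes::
+
+   context:
+     type: Node
+     name: LF
+     # illustrative path; point this at the pod file describing your nodes
+     file: etc/yardstick/nodes/pod.yaml
+
+Servers defined in the pod file can then be referenced in scenarios as
+``<node_name>.<context_name>``, e.g. ``node4.LF``.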
+
+Heat Context
+++++++++++++
+
+In addition to ``name`` and ``type``, a Heat context requires the following
+arguments:
+
+* ``image``: the image to be used to boot VMs
+* ``flavor``: the flavor to be used for VMs in the context
+* ``user``: the username for connecting into the VMs
+* ``networks``: The networks to be created, networks are identified by name
+
+ * ``name``: network name (required)
+ * (TODO) Any optional attributes
+
+* ``servers``: The servers to be created
+
+ * ``name``: server name
+ * (TODO) Any optional attributes
+
+In addition to the required arguments, the following optional arguments can be
+passed to the Heat context:
+
+* ``placement_groups``:
+
+ * ``name``: the name of the placement group to be created
+ * ``policy``: either ``affinity`` or ``availability``
+* ``server_groups``:
+
+ * ``name``: the name of the server group
+ * ``policy``: either ``affinity`` or ``anti-affinity``
+
+Combining these elements together, a sample Heat context config looks like:
+
+.. literalinclude::
+ ../../../../yardstick/tests/integration/dummy-scenario-heat-context.yaml
+ :start-after: ---
+ :emphasize-lines: 14-
+
+Using existing HOT Templates
+'''''''''''''''''''''''''''''
+
+TODO
+
+Kubernetes Context
+++++++++++++++++++
+
+TODO
+
+Using multiple contexts in a testcase
++++++++++++++++++++++++++++++++++++++
+
+When using multiple contexts in a testcase, the ``context`` section is replaced
+by a ``contexts`` section, and each context is separated with a ``-`` line::
+
+   contexts:
+   -
+     name: context1
+     type: Heat
+     ...
+   -
+     name: context2
+     type: Node
+     ...
+
+
+Reusing a context
++++++++++++++++++
+
+Typically, a context is torn down after a testcase is run; however, the user
+may wish to keep a context intact after a testcase is complete.
+
+.. note::
+ This feature has been implemented for the Heat context only
+
+To keep or reuse a context, the ``flags`` option must be specified:
+
+* ``no_setup``: skip the deploy stage, and fetch the details of a deployed
+ context/Heat stack.
+* ``no_teardown``: skip the undeploy stage, thus keeping the stack intact for
+ the next test
+
+If either of these ``flags`` is ``True``, the context information must still
+be given. By default, these flags are disabled::
+
+   context:
+     name: mycontext
+     type: Heat
+     flags:
+       no_setup: True
+       no_teardown: True
+     ...
+
+Create a test suite for Yardstick
+---------------------------------
+
+A test suite in Yardstick is a .yaml file which includes one or more test
+cases. Yardstick is able to support running a test suite task, so you can
+customize your own test suite and run it in one task.
+
+``tests/opnfv/test_suites`` is the folder where Yardstick puts CI test suites.
+A typical test suite is like below (the ``fuel_test_suite.yaml`` example):
+
+.. literalinclude::
+ ../../../../tests/opnfv/test_suites/fuel_test_suite.yaml
+ :lines: 9-
+
+As you can see, there are two test cases in the ``fuel_test_suite.yaml``. The
+``schema`` and the ``name`` must be specified. The test cases should be listed
+via the tag ``test_cases`` and their relative path is also marked via the tag
+``test_cases_dir``.
+
+Yardstick test suite also supports constraints and task args for each test
+case. Here is another sample (the ``os-nosdn-nofeature-ha.yaml`` example) to
+show this, which is digested from one big test suite::
+
+  ---
+
+  schema: "yardstick:suite:0.1"
+
+  name: "os-nosdn-nofeature-ha"
+  test_cases_dir: "tests/opnfv/test_cases/"
+  test_cases:
+  -
+    file_name: opnfv_yardstick_tc002.yaml
+  -
+    file_name: opnfv_yardstick_tc005.yaml
+  -
+    file_name: opnfv_yardstick_tc043.yaml
+    constraint:
+      installer: compass
+      pod: huawei-pod1
+    task_args:
+      huawei-pod1: '{"pod_info": "etc/yardstick/.../pod.yaml",
+        "host": "node4.LF","target": "node5.LF"}'
+
+As you can see in test case ``opnfv_yardstick_tc043.yaml``, there are two
+tags, ``constraint`` and ``task_args``. ``constraint`` specifies which
+installer or pod the test case can be run on in the CI environment.
+``task_args`` specifies the task arguments for each pod.
+
+All in all, to create a test suite in Yardstick, you just need to create a
+yaml file and add test cases, constraint or task arguments if necessary.
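+
+A test suite is then run with the ``--suite`` option described earlier, e.g.::
+
+   yardstick task start --suite tests/opnfv/test_suites/fuel_test_suite.yaml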
+
+References
+----------
+
+.. _`How to run Yardstick in a local environment`: https://wiki.opnfv.org/display/yardstick/How+to+run+Yardstick+in+a+local+environment
diff --git a/docs/testing/user/userguide/05-yardstick_plugin.rst b/docs/testing/user/userguide/06-yardstick-plugin.rst
index 679ce7900..bc35e239d 100644
--- a/docs/testing/user/userguide/05-yardstick_plugin.rst
+++ b/docs/testing/user/userguide/06-yardstick-plugin.rst
@@ -31,7 +31,7 @@ In this introduction we will install Storperf on Jump Host.
Step 0: Environment preparation
->>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
+-------------------------------
Running Storperf on Jump Host
Requirements:
@@ -47,24 +47,26 @@ environment and other dependencies:
1. Make sure docker is installed.
2. Make sure Keystone, Nova, Neutron, Glance, Heat are installed correctly.
3. Make sure Jump Host have access to the OpenStack Controller API.
-4. Make sure Jump Host must have internet connectivity for downloading docker image.
-5. You need to know where to get basic openstack Keystone authorization info, such as
- OS_PASSWORD, OS_PROJECT_NAME, OS_AUTH_URL, OS_USERNAME.
-6. To run a Storperf container, you need to have OpenStack Controller environment
- variables defined and passed to Storperf container. The best way to do this is to
- put environment variables in a "storperf_admin-rc" file. The storperf_admin-rc
- should include credential environment variables at least:
-
-* OS_AUTH_URL
-* OS_USERNAME
-* OS_PASSWORD
-* OS_PROJECT_NAME
-* OS_PROJECT_ID
-* OS_USER_DOMAIN_ID
-
-*Yardstick* has a "prepare_storperf_admin-rc.sh" script which can be used to
-generate the "storperf_admin-rc" file, this script is located at
-test/ci/prepare_storperf_admin-rc.sh
+4. Make sure the Jump Host has internet connectivity for downloading docker
+   images.
+5. You need to know where to get basic openstack Keystone authorization info,
+ such as OS_PASSWORD, OS_PROJECT_NAME, OS_AUTH_URL, OS_USERNAME.
+6. To run a Storperf container, you need to have OpenStack Controller
+   environment variables defined and passed to the Storperf container. The
+   best way to do this is to put the environment variables in a
+   "storperf_admin-rc" file. The storperf_admin-rc should include at least
+   the following credential environment variables:
+
+ * OS_AUTH_URL
+ * OS_USERNAME
+ * OS_PASSWORD
+ * OS_PROJECT_NAME
+ * OS_PROJECT_ID
+ * OS_USER_DOMAIN_ID
+
+*Yardstick* has a ``prepare_storperf_admin-rc.sh`` script which can be used to
+generate the ``storperf_admin-rc`` file; this script is located at
+``test/ci/prepare_storperf_admin-rc.sh``
::
@@ -92,18 +94,18 @@ test/ci/prepare_storperf_admin-rc.sh
echo "OS_USER_DOMAIN_ID="$USER_DOMAIN_ID >> ~/storperf_admin-rc
-The generated "storperf_admin-rc" file will be stored in the root directory. If
-you installed *Yardstick* using Docker, this file will be located in the
+The generated ``storperf_admin-rc`` file will be stored in the root directory.
+If you installed *Yardstick* using Docker, this file will be located in the
container. You may need to copy it to the root directory of the Storperf
deployed host.
Step 1: Plug-in configuration file preparation
->>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
+----------------------------------------------
To install a plug-in, first you need to prepare a plug-in configuration file in
-YAML format and store it in the "plugin" directory. The plugin configration file
-work as the input of yardstick "plugin" command. Below is the Storperf plug-in
-configuration file sample:
+YAML format and store it in the "plugin" directory. The plugin configuration
+file works as the input of the yardstick "plugin" command. Below is the
+Storperf plug-in configuration file sample:
::
---
@@ -123,28 +125,28 @@ Here the Storperf will be installed on IP 192.168.23.2 which is the Jump Host
in my local environment.
Step 2: Plug-in install/remove scripts preparation
->>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
+--------------------------------------------------
-In "yardstick/resource/scripts" directory, there are two folders: a "install"
-folder and a "remove" folder. You need to store the plug-in install/remove
-scripts in these two folders respectively.
+In ``yardstick/resource/scripts`` directory, there are two folders: an
+``install`` folder and a ``remove`` folder. You need to store the plug-in
+install/remove scripts in these two folders respectively.
The detailed installation or removal operation should be defined in these two
scripts. The name of both install and remove scripts should match the plug-in
name that you specified in the plug-in configuration file.
-For example, the install and remove scripts for Storperf are both named to
-"storperf.bash".
+For example, the install and remove scripts for Storperf are both named
+``storperf.bash``.
Step 3: Install and remove Storperf
->>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
+-----------------------------------
To install Storperf, simply execute the following command::
# Install Storperf
yardstick plugin install plugin/storperf.yaml
-removing Storperf from yardstick
+Removing Storperf from yardstick
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
To remove Storperf, simply execute the following command::
diff --git a/docs/testing/user/userguide/06-result-store-InfluxDB.rst b/docs/testing/user/userguide/07-result-store-InfluxDB.rst
index 747927889..cde931376 100644
--- a/docs/testing/user/userguide/06-result-store-InfluxDB.rst
+++ b/docs/testing/user/userguide/07-result-store-InfluxDB.rst
@@ -24,7 +24,7 @@ Store Storperf Test Results into Community's InfluxDB
=====================================================
.. _Influxdb: https://git.opnfv.org/cgit/yardstick/tree/yardstick/dispatcher/influxdb.py
-.. _Mingjiang: limingjiang@huawei.com
+.. _Mingjiang: mailto:limingjiang@huawei.com
.. _Visual: https://wiki.opnfv.org/download/attachments/6827660/tc074.PNG?version=1&modificationDate=1470298075000&api=v2
.. _Login: http://testresults.opnfv.org/grafana/login
@@ -40,12 +40,13 @@ into community's InfluxDB:
will be supported in the future.
Our plan is to support rest-api in D release so that other testing projects can
-call the rest-api to use yardstick dispatcher service to push data to yardstick's
-influxdb database.
+call the rest-api to use yardstick dispatcher service to push data to
+Yardstick's InfluxDB database.
-For now, influxdb only support line protocol, and the json protocol is deprecated.
+For now, InfluxDB only supports line protocol, and the json protocol is
+deprecated.
-Take ping test case for example, the raw_result is json format like this:
+Take ping test case for example, the ``raw_result`` is json format like this:
::
"benchmark": {
@@ -61,23 +62,24 @@ Take ping test case for example, the raw_result is json format like this:
"runner_id": 2625
}
-With the help of "influxdb_line_protocol", the json is transform to like below as a line string:
-::
+With the help of "influxdb_line_protocol", the json is transformed into a line
+string like below::
'ping,deploy_scenario=unknown,host=athena.demo,installer=unknown,pod_name=unknown,
runner_id=2625,scenarios=Ping,target=ares.demo,task_id=77755f38-1f6a-4667-a7f3-
301c99963656,version=unknown rtt.ares=1.125 1470315409868094976'
-So, for data output of json format, you just need to transform json into line format and call
-influxdb api to post the data into the database. All this function has been implemented in Influxdb_.
-If you need support on this, please contact Mingjiang_.
+So, for data output of json format, you just need to transform json into line
+format and call the influxdb api to post the data into the database. All of
+this functionality has been implemented in Influxdb_. If you need support on
+this, please contact Mingjiang_.
::
curl -i -XPOST 'http://104.197.68.199:8086/write?db=yardstick' --
data-binary 'ping,deploy_scenario=unknown,host=athena.demo,installer=unknown, ...'
-Grafana will be used for visualizing the collected test data, which is shown in Visual_. Grafana
-can be accessed by Login_.
+Grafana will be used for visualizing the collected test data, which is shown in
+Visual_. Grafana can be accessed by Login_.
.. image:: images/results_visualization.png
diff --git a/docs/testing/user/userguide/07-grafana.rst b/docs/testing/user/userguide/08-grafana.rst
index 416857b71..29bc23a08 100644
--- a/docs/testing/user/userguide/07-grafana.rst
+++ b/docs/testing/user/userguide/08-grafana.rst
@@ -108,8 +108,10 @@ There are 6 steps to go.
5. When finished with all Grafana configuration changes in this temporary
dashboard, then choose "export" of the updated dashboard copy into a JSON file
- and put it up for review in Gerrit, in file /yardstick/dashboard/Yardstick-TCxxx-yyyyyyyyyyyyy.
- For instance a typical default name of the file would be "Yardstick-TC001 Copy-1234567891234".
+ and put it up for review in Gerrit, in file
+ ``/yardstick/dashboard/Yardstick-TCxxx-yyyyyyyyyyyyy``.
+ For instance a typical default name of the file would be
+ ``Yardstick-TC001 Copy-1234567891234``.
6. Once you finish your dashboard, the next step is exporting the configuration
file and propose a patch into Yardstick. Yardstick team will review and
diff --git a/docs/testing/user/userguide/08-api.rst b/docs/testing/user/userguide/09-api.rst
index 2206c2ac8..f0ae3980b 100644
--- a/docs/testing/user/userguide/08-api.rst
+++ b/docs/testing/user/userguide/09-api.rst
@@ -3,25 +3,29 @@
.. http://creativecommons.org/licenses/by/4.0
.. (c) OPNFV, Huawei Technologies Co.,Ltd and others.
+=====================
Yardstick Restful API
-======================
+=====================
Abstract
---------
+========
Yardstick supports a RESTful API since Danube.
Available API
--------------
+=============
/yardstick/env/action
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+---------------------
-Description: This API is used to prepare Yardstick test environment. For Euphrates, it supports:
+Description: This API is used to prepare Yardstick test environment.
+For Euphrates, it supports:
-1. Prepare yardstick test environment, including set external network environment variable, load Yardstick VM images and create flavors;
+1. Prepare yardstick test environment, including setting the
+   ``EXTERNAL_NETWORK`` environment variable, loading Yardstick VM images and
+   creating flavors;
2. Start an InfluxDB Docker container and config Yardstick output to InfluxDB;
3. Start a Grafana Docker container and config it with the InfluxDB.
@@ -38,7 +42,8 @@ Example::
'action': 'prepare_env'
}
-This is an asynchronous API. You need to call /yardstick/asynctask API to get the task result.
+This is an asynchronous API. You need to call ``/yardstick/asynctask`` API to
+get the task result.
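+
+For example, using the same ``<SERVER IP>:<PORT>`` placeholders as the other
+examples in this chapter, preparing the environment and then polling the
+resulting task could look like::
+
+   curl -X POST -H "Content-Type: application/json" \
+       -d '{"action": "prepare_env"}' \
+       http://<SERVER IP>:<PORT>/yardstick/env/action
+
+   curl http://<SERVER IP>:<PORT>/yardstick/asynctask?task_id=<task_id>
+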
Start and config an InfluxDB docker container
@@ -48,7 +53,8 @@ Example::
'action': 'create_influxdb'
}
-This is an asynchronous API. You need to call /yardstick/asynctask API to get the task result.
+This is an asynchronous API. You need to call ``/yardstick/asynctask`` API to
+get the task result.
Start and config a Grafana docker container
@@ -58,11 +64,12 @@ Example::
'action': 'create_grafana'
}
-This is an asynchronous API. You need to call /yardstick/asynctask API to get the task result.
+This is an asynchronous API. You need to call ``/yardstick/asynctask`` API to
+get the task result.
/yardstick/asynctask
-^^^^^^^^^^^^^^^^^^^^
+--------------------
Description: This API is used to get the status of asynchronous tasks
@@ -84,7 +91,7 @@ NOTE::
/yardstick/testcases
-^^^^^^^^^^^^^^^^^^^^
+--------------------
Description: This API is used to list all released Yardstick test cases.
@@ -99,7 +106,7 @@ Example::
/yardstick/testcases/release/action
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+-----------------------------------
Description: This API is used to run a Yardstick released test case.
@@ -118,11 +125,12 @@ Example::
}
}
-This is an asynchronous API. You need to call /yardstick/results to get the result.
+This is an asynchronous API. You need to call ``/yardstick/results`` to get the
+result.
/yardstick/testcases/samples/action
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+-----------------------------------
Description: This API is used to run a Yardstick sample test case.
@@ -141,13 +149,15 @@ Example::
}
}
-This is an asynchronous API. You need to call /yardstick/results to get the result.
+This is an asynchronous API. You need to call ``/yardstick/results`` to get
+the result.
/yardstick/testcases/<testcase_name>/docs
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+-----------------------------------------
-Description: This API is used to the documentation of a certain released test case.
+Description: This API is used to get the documentation of a certain released
+test case.
Method: GET
@@ -160,7 +170,7 @@ Example::
/yardstick/testsuites/action
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+----------------------------
Description: This API is used to run a Yardstick test suite.
@@ -179,11 +189,12 @@ Example::
}
}
-This is an asynchronous API. You need to call /yardstick/results to get the result.
+This is an asynchronous API. You need to call ``/yardstick/results`` to get
+the result.
/yardstick/tasks/<task_id>/log
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+------------------------------
Description: This API is used to get the real time log of test case execution.
@@ -198,9 +209,11 @@ Example::
/yardstick/results
-^^^^^^^^^^^^^^^^^^
+------------------
-Description: This API is used to get the test results of tasks. If you call /yardstick/testcases/samples/action API, it will return a task id. You can use the returned task id to get the results by using this API.
+Description: This API is used to get the test results of tasks. If you call
+/yardstick/testcases/samples/action API, it will return a task id. You can use
+the returned task id to get the results by using this API.
Method: GET
@@ -215,9 +228,10 @@ This API will return a list of test case result
/api/v2/yardstick/openrcs
-^^^^^^^^^^^^^^^^^^^^^^^^^
+-------------------------
-Description: This API provides functionality of handling OpenStack credential file (openrc). For Euphrates, it supports:
+Description: This API provides functionality of handling OpenStack credential
+file (openrc). For Euphrates, it supports:
1. Upload an openrc file for an OpenStack environment;
2. Update an openrc;
@@ -268,7 +282,7 @@ Example::
/api/v2/yardstick/openrcs/<openrc_id>
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+-------------------------------------
Description: This API provides functionality of handling OpenStack credential file (openrc). For Euphrates, it supports:
@@ -294,9 +308,10 @@ Example::
/api/v2/yardstick/pods
-^^^^^^^^^^^^^^^^^^^^^^
+----------------------
-Description: This API provides functionality of handling Yardstick pod file (pod.yaml). For Euphrates, it supports:
+Description: This API provides functionality of handling Yardstick pod file
+(pod.yaml). For Euphrates, it supports:
1. Upload a pod file;
@@ -319,7 +334,7 @@ Example::
/api/v2/yardstick/pods/<pod_id>
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+-------------------------------
Description: This API provides functionality of handling Yardstick pod file (pod.yaml). For Euphrates, it supports:
@@ -343,9 +358,10 @@ Example::
/api/v2/yardstick/images
-^^^^^^^^^^^^^^^^^^^^^^^^
+------------------------
-Description: This API is used to do some work related to Yardstick VM images. For Euphrates, it supports:
+Description: This API is used to do some work related to Yardstick VM images.
+For Euphrates, it supports:
1. Load Yardstick VM images;
@@ -367,7 +383,7 @@ Example::
/api/v2/yardstick/images/<image_id>
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+-----------------------------------
Description: This API is used to do some work related to Yardstick VM images. For Euphrates, it supports:
@@ -391,9 +407,10 @@ Example::
/api/v2/yardstick/tasks
-^^^^^^^^^^^^^^^^^^^^^^^
+-----------------------
-Description: This API is used to do some work related to yardstick tasks. For Euphrates, it supports:
+Description: This API is used to do some work related to yardstick tasks. For
+Euphrates, it supports:
1. Create a Yardstick task;
@@ -416,7 +433,7 @@ Example::
/api/v2/yardstick/tasks/<task_id>
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+--------------------------------
Description: This API is used to do some work related to yardstick tasks. For Euphrates, it supports:
@@ -496,13 +513,15 @@ METHOD: DELETE
Delete a task
Example::
+
http://<SERVER IP>:<PORT>/api/v2/yardstick/tasks/5g6g3e02-155a-4847-a5f8-154f1b31db8c
/api/v2/yardstick/testcases
-^^^^^^^^^^^^^^^^^^^^^^^^^^^
+---------------------------
-Description: This API is used to do some work related to yardstick testcases. For Euphrates, it supports:
+Description: This API is used to do some work related to Yardstick testcases.
+For Euphrates, it supports:
1. Upload a test case;
2. Get all released test cases' information;
@@ -534,7 +553,7 @@ Example::
/api/v2/yardstick/testcases/<case_name>
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+---------------------------------------
Description: This API is used to do some work related to yardstick testcases. For Euphrates, it supports:
@@ -555,13 +574,15 @@ METHOD: DELETE
Delete a certain test case
Example::
+
http://<SERVER IP>:<PORT>/api/v2/yardstick/testcases/opnfv_yardstick_tc002
/api/v2/yardstick/testsuites
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+----------------------------
-Description: This API is used to do some work related to yardstick test suites. For Euphrates, it supports:
+Description: This API is used to do some work related to yardstick test suites.
+For Euphrates, it supports:
1. Create a test suite;
2. Get all test suites;
@@ -596,7 +617,7 @@ Example::
/api/v2/yardstick/testsuites
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+----------------------------
Description: This API is used to do some work related to yardstick test suites. For Euphrates, it supports:
@@ -622,9 +643,10 @@ Example::
/api/v2/yardstick/projects
-^^^^^^^^^^^^^^^^^^^^^^^^^^
+--------------------------
-Description: This API is used to do some work related to yardstick test projects. For Euphrates, it supports:
+Description: This API is used to do some work related to Yardstick test
+projects. For Euphrates, it supports:
1. Create a Yardstick project;
2. Get all projects;
@@ -656,7 +678,7 @@ Example::
/api/v2/yardstick/projects
-^^^^^^^^^^^^^^^^^^^^^^^^^^
+--------------------------
Description: This API is used to do some work related to yardstick test projects. For Euphrates, it supports:
@@ -682,9 +704,10 @@ Example::
/api/v2/yardstick/containers
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+----------------------------
-Description: This API is used to do some work related to Docker containers. For Euphrates, it supports:
+Description: This API is used to do some work related to Docker containers.
+For Euphrates, it supports:
1. Create a Grafana Docker container;
2. Create an InfluxDB Docker container;
@@ -721,7 +744,7 @@ Example::
/api/v2/yardstick/containers/<container_id>
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+-------------------------------------------
Description: This API is used to do some work related to Docker containers. For Euphrates, it supports:
diff --git a/docs/testing/user/userguide/09-yardstick_user_interface.rst b/docs/testing/user/userguide/10-yardstick-user-interface.rst
index 9058dd46d..cadec78ef 100644
--- a/docs/testing/user/userguide/09-yardstick_user_interface.rst
+++ b/docs/testing/user/userguide/10-yardstick-user-interface.rst
@@ -1,3 +1,4 @@
+========================
Yardstick User Interface
========================
@@ -6,14 +7,14 @@ in table format and also values pinned on to a graph.
Command
--------
+=======
::
yardstick report generate <task-ID> <testcase-filename>
Description
------------
+===========
1. When the command is triggered using the task-id and the testcase
name provided the respective values are retrieved from the
diff --git a/docs/testing/user/userguide/10-vtc-overview.rst b/docs/testing/user/userguide/11-vtc-overview.rst
index 8ed17873d..47582358c 100644
--- a/docs/testing/user/userguide/10-vtc-overview.rst
+++ b/docs/testing/user/userguide/11-vtc-overview.rst
@@ -29,10 +29,10 @@ to run the :term:`VNF`. The exploitation of Deep Packet Inspection
assumptions:
* third parties unaffiliated with either source or recipient are able to
-inspect each IP packet’s payload
+ inspect each IP packet's payload
-* the classifier knows the relevant syntax of each application’s packet
-payloads (protocol signatures, data patterns, etc.).
+* the classifier knows the relevant syntax of each application's packet
+ payloads (protocol signatures, data patterns, etc.).
The proposed :term:`DPI` based approach will only use an indicative, small
number of the initial packets from each flow in order to identify the content
@@ -47,14 +47,14 @@ Concepts
========
* *Traffic Inspection*: The process of packet analysis and application
-identification of network traffic that passes through the :term:`VTC`.
+ identification of network traffic that passes through the :term:`VTC`.
* *Traffic Forwarding*: The process of packet forwarding from an incoming
-network interface to a pre-defined outgoing network interface.
+ network interface to a pre-defined outgoing network interface.
* *Traffic Rule Application*: The process of packet tagging, based on a
-predefined set of rules. Packet tagging may include e.g. Type of Service
-(:term:`ToS`) field modification.
+ predefined set of rules. Packet tagging may include e.g. Type of Service
+ (:term:`ToS`) field modification.
Architecture
============
diff --git a/docs/testing/user/userguide/11-nsb-overview.rst b/docs/testing/user/userguide/12-nsb-overview.rst
index 332dba47d..71a5c1130 100644
--- a/docs/testing/user/userguide/11-nsb-overview.rst
+++ b/docs/testing/user/userguide/12-nsb-overview.rst
@@ -3,11 +3,12 @@
.. http://creativecommons.org/licenses/by/4.0
.. (c) OPNFV, 2016-2017 Intel Corporation.
+===================================
Network Services Benchmarking (NSB)
===================================
Abstract
---------
+========
.. _Yardstick: https://wiki.opnfv.org/yardstick
@@ -15,10 +16,10 @@ This chapter provides an overview of the NSB, a contribution to OPNFV
Yardstick_ from Intel.
Overview
---------
+========
-The goal of NSB is to Extend Yardstick to perform real world VNFs and NFVi Characterization and
-benchmarking with repeatable and deterministic methods.
+The goal of NSB is to extend Yardstick to perform real world VNF and NFVi
+characterization and benchmarking with repeatable and deterministic methods.
The Network Service Benchmarking (NSB) extends the yardstick framework to do
VNF characterization and benchmarking in three different execution
@@ -70,17 +71,17 @@ NSB extension includes:
- VNF KPIs, e.g., packet_in, packet_drop, packet_fwd etc
Architecture
-------------
+============
The Network Service (NS) defines a set of Virtual Network Functions (VNF)
connected together using NFV infrastructure.
The Yardstick NSB extension can support multiple VNFs created by different
vendors including traffic generators. Every VNF being tested has its
-own data model. The Network service defines a VNF modelling on base of performed
-network functionality. The part of the data model is a set of the configuration
-parameters, number of connection points used and flavor including core and
-memory amount.
+own data model. The Network Service defines a VNF modelling based on the
+performed network functionality. Part of the data model is a set of
+configuration parameters, the number of connection points used and the flavor,
+including core and memory amount.
The ETSI defines a Network Service as a set of configurable VNFs working in
some NFV Infrastructure connecting each other using Virtual Links available
@@ -112,7 +113,7 @@ Network Service framework performs the necessary test steps. It may involve
- Read the KPI's provided by particular VNF
Components of Network Service
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+-----------------------------
* *Models for Network Service benchmarking*: The Network Service benchmarking
requires the proper modelling approach. The NSB provides models using Python
@@ -132,9 +133,9 @@ Components of Network Service
same way as other VNFs being a part of benchmarked network service.
Same as other VNFs the traffic generator are instantiated and terminated.
- Every traffic generator has own configuration defined as a traffic profile and
- a set of KPIs supported. The python models for TG is extended by specific calls
- to listen and generate traffic.
+  Every traffic generator has its own configuration defined as a traffic
+  profile and a set of supported KPIs. The Python model for a TG is extended
+  by specific calls to listen and generate traffic.
* *The stateless TREX traffic generator*: The main traffic generator used as
Network Service stimulus is open source TREX tool.
@@ -165,7 +166,7 @@ Components of Network Service
- RFC2544 throughput for various loss rate defined (1% is a default)
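+
+As an illustration of the traffic profile mentioned above, a strongly
+simplified sketch is shown below; the authoritative schema is defined by the
+traffic profiles shipped with the sample test cases, so every field name here
+is indicative only:
+
+.. code-block:: yaml
+
+  # indicative sketch of a traffic profile, not the authoritative schema
+  name: rfc2544
+  description: Throughput profile for a fixed frame size
+  traffic_profile:
+    traffic_type: RFC2544Profile   # profile type interpreted by the TG model
+    frame_rate: 100                # percentage of line rate
+  uplink_0:
+    ipv4:
+      outer_l2:
+        framesize:
+          64B: '100'               # all packets use 64 byte frames
+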
Graphical Overview
-------------------
+==================
NSB Testing with yardstick framework facilitate performance testing of various
VNFs provided.
@@ -192,7 +193,7 @@ VNFs provided.
Figure 1: Network Service - 2 server configuration
VNFs supported for chracterization:
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+-----------------------------------
1. CGNAPT - Carrier Grade Network Address and port Translation
2. vFW - Virtual Firewall
diff --git a/docs/testing/user/userguide/12-nsb_installation.rst b/docs/testing/user/userguide/13-nsb-installation.rst
index 5631c6578..00f8cfd97 100644
--- a/docs/testing/user/userguide/12-nsb_installation.rst
+++ b/docs/testing/user/userguide/13-nsb-installation.rst
@@ -3,11 +3,12 @@
.. http://creativecommons.org/licenses/by/4.0
.. (c) OPNFV, 2016-2017 Intel Corporation.
+=====================================
Yardstick - NSB Testing -Installation
=====================================
Abstract
---------
+========
The Network Service Benchmarking (NSB) extends the yardstick framework to do
VNF characterization and benchmarking in three different execution
@@ -26,79 +27,65 @@ The steps needed to run Yardstick with NSB testing are:
Prerequisites
--------------
+=============
Refer chapter Yardstick Installation for more information on yardstick
prerequisites
-Several prerequisites are needed for Yardstick(VNF testing):
-
- - Python Modules: pyzmq, pika.
-
- - flex
-
- - bison
-
- - build-essential
-
- - automake
-
- - libtool
+Several prerequisites are needed for Yardstick (VNF testing):
- - librabbitmq-dev
-
- - rabbitmq-server
-
- - collectd
-
- - intel-cmt-cat
+ * Python Modules: pyzmq, pika.
+ * flex
+ * bison
+ * build-essential
+ * automake
+ * libtool
+ * librabbitmq-dev
+ * rabbitmq-server
+ * collectd
+ * intel-cmt-cat
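+
+On an Ubuntu host, the system packages listed above can typically be installed
+with ``apt-get`` and the Python modules with ``pip``, for example (package
+names may differ per distribution, and ``intel-cmt-cat`` may have to be built
+from source):
+
+.. code-block:: console
+
+  sudo apt-get update
+  sudo apt-get install -y flex bison build-essential automake libtool \
+       librabbitmq-dev rabbitmq-server collectd
+  pip install pyzmq pika
+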
Hardware & Software Ingredients
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+-------------------------------
SUT requirements:
- +-----------+--------------------+
- | Item | Description |
- +-----------+--------------------+
- | Memory | Min 20GB |
- +-----------+--------------------+
- | NICs | 2 x 10G |
- +-----------+--------------------+
- | OS | Ubuntu 16.04.3 LTS |
- +-----------+--------------------+
- | kernel | 4.4.0-34-generic |
- +-----------+--------------------+
- | DPDK | 17.02 |
- +-----------+--------------------+
+ ======= ===================
+ Item Description
+ ======= ===================
+ Memory Min 20GB
+ NICs 2 x 10G
+ OS Ubuntu 16.04.3 LTS
+ kernel 4.4.0-34-generic
+ DPDK 17.02
+ ======= ===================
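+
+A quick way to check a SUT against this table is, for example:
+
+.. code-block:: console
+
+  free -h                      # total memory, expect at least 20GB
+  lspci | grep -i ethernet     # available NICs
+  lsb_release -a               # OS release
+  uname -r                     # running kernel version
+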
Boot and BIOS settings:
- +------------------+---------------------------------------------------+
- | Boot settings | default_hugepagesz=1G hugepagesz=1G hugepages=16 |
- | | hugepagesz=2M hugepages=2048 isolcpus=1-11,22-33 |
- | | nohz_full=1-11,22-33 rcu_nocbs=1-11,22-33 |
- | | iommu=on iommu=pt intel_iommu=on |
- | | Note: nohz_full and rcu_nocbs is to disable Linux |
- | | kernel interrupts |
- +------------------+---------------------------------------------------+
- |BIOS | CPU Power and Performance Policy <Performance> |
- | | CPU C-state Disabled |
- | | CPU P-state Disabled |
- | | Enhanced Intel® Speedstep® Tech Disabled |
- | | Hyper-Threading Technology (If supported) Enabled |
- | | Virtualization Techology Enabled |
- | | Intel(R) VT for Direct I/O Enabled |
- | | Coherency Enabled |
- | | Turbo Boost Disabled |
- +------------------+---------------------------------------------------+
+ ============= =================================================
+ Boot settings default_hugepagesz=1G hugepagesz=1G hugepages=16
+ hugepagesz=2M hugepages=2048 isolcpus=1-11,22-33
+ nohz_full=1-11,22-33 rcu_nocbs=1-11,22-33
+ iommu=on iommu=pt intel_iommu=on
+               Note: nohz_full and rcu_nocbs are used to disable
+               Linux kernel interrupts
+ BIOS CPU Power and Performance Policy <Performance>
+ CPU C-state Disabled
+ CPU P-state Disabled
+               Enhanced Intel® Speedstep® Tech Disabled
+ Hyper-Threading Technology (If supported) Enabled
+               Virtualization Technology Enabled
+ Intel(R) VT for Direct I/O Enabled
+ Coherency Enabled
+ Turbo Boost Disabled
+ ============= =================================================
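+
+The boot settings above are kernel command line parameters. On an Ubuntu
+system they would typically be added to ``GRUB_CMDLINE_LINUX`` in
+``/etc/default/grub`` and activated with ``update-grub`` followed by a reboot,
+for example:
+
+.. code-block:: console
+
+  # /etc/default/grub (excerpt)
+  GRUB_CMDLINE_LINUX="default_hugepagesz=1G hugepagesz=1G hugepages=16 \
+  hugepagesz=2M hugepages=2048 isolcpus=1-11,22-33 nohz_full=1-11,22-33 \
+  rcu_nocbs=1-11,22-33 iommu=on iommu=pt intel_iommu=on"
+
+  sudo update-grub
+  sudo reboot
+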
Install Yardstick (NSB Testing)
--------------------------------
+===============================
Download the source code and install Yardstick from it
@@ -168,11 +155,12 @@ Above command setup docker with latest yardstick code. To execute
docker exec -it yardstick bash
-It will also automatically download all the packages needed for NSB Testing setup.
-Refer chapter :doc:`04-installation` for more on docker **Install Yardstick using Docker (recommended)**
+It will also automatically download all the packages needed for the NSB
+Testing setup. Refer to chapter :doc:`04-installation`, section
+**Install Yardstick using Docker (recommended)**, for more details on Docker.
System Topology:
-----------------
+================
.. code-block:: console
@@ -187,13 +175,15 @@ System Topology:
Environment parameters and credentials
---------------------------------------
+======================================
Config yardstick conf
-^^^^^^^^^^^^^^^^^^^^^
+---------------------
-If user did not run 'yardstick env influxdb' inside the container, which will generate
-correct yardstick.conf, then create the config file manually (run inside the container):
+If the user did not run 'yardstick env influxdb' inside the container, which
+generates the correct ``yardstick.conf``, then create the config file manually
+(run inside the container):
+::
cp ./etc/yardstick/yardstick.conf.sample /etc/yardstick/yardstick.conf
vi /etc/yardstick/yardstick.conf
@@ -219,11 +209,11 @@ Add trex_path, trex_client_lib and bin_path in 'nsb' section.
trex_client_lib=/opt/nsb_bin/trex_client/stl
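+
+Putting this together, the resulting ``nsb`` section of ``yardstick.conf``
+could look like the following (the paths are illustrative and depend on where
+the NSB binaries and the TRex client were installed):
+
+.. code-block:: ini
+
+  [nsb]
+  trex_path = /opt/nsb_bin/trex/scripts
+  bin_path = /opt/nsb_bin
+  trex_client_lib = /opt/nsb_bin/trex_client/stl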
Run Yardstick - Network Service Testcases
------------------------------------------
+=========================================
NS testing - using yardstick CLI
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+--------------------------------
See :doc:`04-installation`
@@ -236,13 +226,13 @@ NS testing - using yardstick CLI
yardstick --debug task start yardstick/samples/vnf_samples/nsut/<vnf>/<test case>
Network Service Benchmarking - Bare-Metal
------------------------------------------
+=========================================
Bare-Metal Config pod.yaml describing Topology
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+----------------------------------------------
-Bare-Metal 2-Node setup:
-########################
+Bare-Metal 2-Node setup
+^^^^^^^^^^^^^^^^^^^^^^^
.. code-block:: console
+----------+ +----------+
@@ -254,8 +244,8 @@ Bare-Metal 2-Node setup:
+----------+ +----------+
trafficgen_1 vnf
-Bare-Metal 3-Node setup - Correlated Traffic:
-#############################################
+Bare-Metal 3-Node setup - Correlated Traffic
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. code-block:: console
+----------+ +----------+ +------------+
@@ -270,7 +260,7 @@ Bare-Metal 3-Node setup - Correlated Traffic:
Bare-Metal Config pod.yaml
-^^^^^^^^^^^^^^^^^^^^^^^^^^
+--------------------------
Before executing Yardstick test cases, make sure that pod.yaml reflects the
topology and update all the required fields.::
@@ -345,13 +335,13 @@ topology and update all the required fields.::
Network Service Benchmarking - Standalone Virtualization
---------------------------------------------------------
+========================================================
-SR-IOV:
-^^^^^^^
+SR-IOV
+------
SR-IOV Pre-requisites
-#####################
+^^^^^^^^^^^^^^^^^^^^^
On Host:
a) Create a bridge for VM to connect to external network
@@ -387,10 +377,10 @@ On Host:
SR-IOV Config pod.yaml describing Topology
-##########################################
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
SR-IOV 2-Node setup:
-####################
+^^^^^^^^^^^^^^^^^^^^
.. code-block:: console
+--------------------+
@@ -418,7 +408,7 @@ SR-IOV 2-Node setup:
SR-IOV 3-Node setup - Correlated Traffic
-########################################
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. code-block:: console
+--------------------+
@@ -454,7 +444,7 @@ topology and update all the required fields.
.. note:: Update all the required fields like ip, user, password, pcis, etc...
SR-IOV Config pod_trex.yaml
-###########################
+^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. code-block:: YAML
@@ -483,7 +473,7 @@ SR-IOV Config pod_trex.yaml
local_mac: "00:00.00:00:00:02"
SR-IOV Config host_sriov.yaml
-#############################
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. code-block:: YAML
@@ -495,7 +485,8 @@ SR-IOV Config host_sriov.yaml
user: ""
password: ""
-SR-IOV testcase update: ``<yardstick>/samples/vnf_samples/nsut/vfw/tc_sriov_rfc2544_ipv4_1rule_1flow_64B_trex.yaml``
+SR-IOV testcase update:
+``<yardstick>/samples/vnf_samples/nsut/vfw/tc_sriov_rfc2544_ipv4_1rule_1flow_64B_trex.yaml``
Update "contexts" section
"""""""""""""""""""""""""
@@ -542,11 +533,11 @@ Update "contexts" section
-OVS-DPDK:
-^^^^^^^^^
+OVS-DPDK
+--------
OVS-DPDK Pre-requisites
-#######################
+^^^^^^^^^^^^^^^^^^^^^^^
On Host:
a) Create a bridge for VM to connect to external network
@@ -585,10 +576,10 @@ On Host:
OVS-DPDK Config pod.yaml describing Topology
-############################################
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-OVS-DPDK 2-Node setup:
-######################
+OVS-DPDK 2-Node setup
+^^^^^^^^^^^^^^^^^^^^^
.. code-block:: console
@@ -619,7 +610,7 @@ OVS-DPDK 2-Node setup:
OVS-DPDK 3-Node setup - Correlated Traffic
-##########################################
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. code-block:: console
@@ -659,7 +650,7 @@ topology and update all the required fields.
.. note:: Update all the required fields like ip, user, password, pcis, etc...
OVS-DPDK Config pod_trex.yaml
-#############################
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. code-block:: YAML
@@ -687,7 +678,7 @@ OVS-DPDK Config pod_trex.yaml
local_mac: "00:00.00:00:00:02"
OVS-DPDK Config host_ovs.yaml
-#############################
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. code-block:: YAML
@@ -699,7 +690,8 @@ OVS-DPDK Config host_ovs.yaml
user: ""
password: ""
-ovs_dpdk testcase update: ``<yardstick>/samples/vnf_samples/nsut/vfw/tc_ovs_rfc2544_ipv4_1rule_1flow_64B_trex.yaml``
+ovs_dpdk testcase update:
+``<yardstick>/samples/vnf_samples/nsut/vfw/tc_ovs_rfc2544_ipv4_1rule_1flow_64B_trex.yaml``
Update "contexts" section
"""""""""""""""""""""""""
@@ -757,7 +749,7 @@ Update "contexts" section
Network Service Benchmarking - OpenStack with SR-IOV support
-------------------------------------------------------------
+============================================================
This section describes how to run a Sample VNF test case, using Heat context,
with SR-IOV. It also covers how to install OpenStack in Ubuntu 16.04, using
@@ -765,7 +757,7 @@ DevStack, with SR-IOV support.
Single node OpenStack setup with external TG
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+--------------------------------------------
.. code-block:: console
@@ -796,7 +788,7 @@ Single node OpenStack setup with external TG
Host pre-configuration
-######################
+^^^^^^^^^^^^^^^^^^^^^^
.. warning:: The following configuration requires sudo access to the system. Make
sure that your user have the access.
@@ -896,7 +888,7 @@ Setup SR-IOV ports on the host:
DevStack installation
-#####################
+^^^^^^^^^^^^^^^^^^^^^
Use official `Devstack <https://docs.openstack.org/devstack/pike/>`_
documentation to install OpenStack on a host. Please note, that stable
@@ -918,7 +910,7 @@ Start the devstack installation on a host.
TG host configuration
-#####################
+^^^^^^^^^^^^^^^^^^^^^
Yardstick automatically install and configure Trex traffic generator on TG
host based on provided POD file (see below). Anyway, it's recommended to check
@@ -927,7 +919,7 @@ the manual at https://trex-tgn.cisco.com/trex/doc/trex_manual.html.
Run the Sample VNF test case
-############################
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^
There is an example of Sample VNF test case ready to be executed in an
OpenStack environment with SR-IOV support: ``samples/vnf_samples/nsut/vfw/
@@ -952,7 +944,7 @@ context using steps described in `NS testing - using yardstick CLI`_ section.
Multi node OpenStack TG and VNF setup (two nodes)
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+-------------------------------------------------
.. code-block:: console
@@ -983,14 +975,14 @@ Multi node OpenStack TG and VNF setup (two nodes)
Controller/Compute pre-configuration
-####################################
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Pre-configuration of the controller and compute hosts are the same as
described in `Host pre-configuration`_ section. Follow the steps in the section.
DevStack configuration
-######################
+^^^^^^^^^^^^^^^^^^^^^^
Use official `Devstack <https://docs.openstack.org/devstack/pike/>`_
documentation to install OpenStack on a host. Please note, that stable
@@ -1017,7 +1009,7 @@ Start the devstack installation on the controller and compute hosts.
Run the sample vFW TC
-#####################
+^^^^^^^^^^^^^^^^^^^^^
Install yardstick using `Install Yardstick (NSB Testing)`_ steps for OpenStack
context.
@@ -1034,30 +1026,31 @@ and the following yardtick command line arguments:
Enabling other Traffic generator
---------------------------------
+================================
-IxLoad:
-^^^^^^^
+IxLoad
+------
-1. Software needed: IxLoadAPI ``<IxLoadTclApi verson>Linux64.bin.tgz and <IxOS
- version>Linux64.bin.tar.gz`` (Download from ixia support site)
- Install - ``<IxLoadTclApi verson>Linux64.bin.tgz & <IxOS version>Linux64.bin.tar.gz``
- If the installation was not done inside the container, after installing the IXIA client,
- check /opt/ixia/ixload/<ver>/bin/ixloadpython and make sure you can run this cmd
- inside the yardstick container. Usually user is required to copy or link /opt/ixia/python/<ver>/bin/ixiapython
- to /usr/bin/ixiapython<ver> inside the container.
+1. Software needed: IxLoadAPI ``<IxLoadTclApi version>Linux64.bin.tgz`` and
+   ``<IxOS version>Linux64.bin.tar.gz`` (download from the Ixia support site).
+   Install ``<IxLoadTclApi version>Linux64.bin.tgz`` and
+   ``<IxOS version>Linux64.bin.tar.gz``.
+   If the installation was not done inside the container, then after
+   installing the IXIA client, check ``/opt/ixia/ixload/<ver>/bin/ixloadpython``
+   and make sure you can run this command inside the yardstick container.
+   Usually the user is required to copy or link
+   ``/opt/ixia/python/<ver>/bin/ixiapython`` to ``/usr/bin/ixiapython<ver>``
+   inside the container.
-2. Update pod_ixia.yaml file with ixia details.
+2. Update ``pod_ixia.yaml`` file with ixia details.
.. code-block:: console
cp <repo>/etc/yardstick/nodes/pod.yaml.nsb.sample.ixia etc/yardstick/nodes/pod_ixia.yaml
- Config pod_ixia.yaml
+ Config ``pod_ixia.yaml``
.. code-block:: yaml
-
nodes:
-
name: trafficgen_1
@@ -1097,22 +1090,23 @@ IxLoad:
You will also need to configure the IxLoad machine to start the IXIA
IxosTclServer. This can be started like so:
- - Connect to the IxLoad machine using RDP
- - Go to:
+ * Connect to the IxLoad machine using RDP
+ * Go to:
``Start->Programs->Ixia->IxOS->IxOS 8.01-GA-Patch1->Ixia Tcl Server IxOS 8.01-GA-Patch1``
or
``"C:\Program Files (x86)\Ixia\IxOS\8.01-GA-Patch1\ixTclServer.exe"``
-4. Create a folder "Results" in c:\ and share the folder on the network.
+4. Create a folder ``Results`` in c:\ and share the folder on the network.
-5. execute testcase in samplevnf folder.
- eg ``<repo>/samples/vnf_samples/nsut/vfw/tc_baremetal_http_ixload_1b_Requests-65000_Concurrency.yaml``
+5. Execute testcase in samplevnf folder e.g.
+ ``<repo>/samples/vnf_samples/nsut/vfw/tc_baremetal_http_ixload_1b_Requests-65000_Concurrency.yaml``
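+
+   The test case can then be started with the usual yardstick CLI, e.g.:
+
+   .. code-block:: console
+
+      yardstick --debug task start <repo>/samples/vnf_samples/nsut/vfw/tc_baremetal_http_ixload_1b_Requests-65000_Concurrency.yaml
+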
-IxNetwork:
-^^^^^^^^^^
+IxNetwork
+---------
-1. Software needed: ``IxNetworkAPI<ixnetwork verson>Linux64.bin.tgz`` (Download from ixia support site)
- Install - ``IxNetworkAPI<ixnetwork verson>Linux64.bin.tgz``
+1. Software needed: ``IxNetworkAPI<ixnetwork version>Linux64.bin.tgz``
+   (download from the Ixia support site).
+   Install ``IxNetworkAPI<ixnetwork version>Linux64.bin.tgz``.
2. Update pod_ixia.yaml file with ixia details.
.. code-block:: console
@@ -1162,9 +1156,11 @@ IxNetwork:
You will also need to configure the IxNetwork machine to start the IXIA
IxNetworkTclServer. This can be started like so:
- - Connect to the IxNetwork machine using RDP
- - Go to: ``Start->Programs->Ixia->IxNetwork->IxNetwork 7.21.893.14 GA->IxNetworkTclServer`` (or ``IxNetworkApiServer``)
+ * Connect to the IxNetwork machine using RDP
+ * Go to:
+ ``Start->Programs->Ixia->IxNetwork->IxNetwork 7.21.893.14 GA->IxNetworkTclServer``
+ (or ``IxNetworkApiServer``)
-4. execute testcase in samplevnf folder.
- eg ``<repo>/samples/vnf_samples/nsut/vfw/tc_baremetal_rfc2544_ipv4_1rule_1flow_64B_ixia.yaml``
+4. Execute testcase in samplevnf folder e.g.
+ ``<repo>/samples/vnf_samples/nsut/vfw/tc_baremetal_rfc2544_ipv4_1rule_1flow_64B_ixia.yaml``
diff --git a/docs/testing/user/userguide/13-nsb_operation.rst b/docs/testing/user/userguide/14-nsb-operation.rst
index e791b048d..2e741822e 100644
--- a/docs/testing/user/userguide/13-nsb_operation.rst
+++ b/docs/testing/user/userguide/14-nsb-operation.rst
@@ -23,13 +23,14 @@ provider/external networks.
Provider networks
^^^^^^^^^^^^^^^^^
-The VNFs require a clear L2 connect to the external network in order to generate
-realistic traffic from multiple address ranges and port
+The VNFs require a clear L2 connection to the external network in order to
+generate realistic traffic from multiple address ranges and ports.
-In order to prevent Neutron from filtering traffic we have to disable Neutron Port Security.
-We also disable DHCP on the data ports because we are binding the ports to DPDK and do not need
-DHCP addresses. We also disable gateways because multiple default gateways can prevent SSH access
-to the VNF from the floating IP. We only want a gateway on the mgmt network
+In order to prevent Neutron from filtering traffic we have to disable Neutron
+Port Security. We also disable DHCP on the data ports because we are binding
+the ports to DPDK and do not need DHCP addresses. We also disable gateways
+because multiple default gateways can prevent SSH access to the VNF from the
+floating IP. We only want a gateway on the mgmt network.
.. code-block:: yaml
@@ -42,8 +43,9 @@ to the VNF from the floating IP. We only want a gateway on the mgmt network
Heat Topologies
^^^^^^^^^^^^^^^
-By default Heat will attach every node to every Neutron network that is created.
-For scale-out tests we do not want to attach every node to every network.
+By default Heat will attach every node to every Neutron network that is
+created. For scale-out tests we do not want to attach every node to every
+network.
For each node you can specify which ports are on which network using the
network_ports dictionary.
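+
+For instance, a minimal ``network_ports`` mapping for one node could look like
+the sketch below (the network and interface names are only illustrative):
+
+.. code-block:: yaml
+
+  servers:
+    vnf_0:
+      network_ports:
+        mgmt:
+          - mgmt
+        uplink_0:
+          - xe0
+        downlink_0:
+          - xe1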
@@ -85,11 +87,11 @@ In this example we have ``TRex xe0 <-> xe0 VNF xe1 <-> xe0 UDP_Replay``
Collectd KPIs
-------------
-NSB can collect KPIs from collected. We have support for various plugins enabled by the
-Barometer project.
+NSB can collect KPIs from collectd. We have support for various plugins
+enabled by the Barometer project.
-The default yardstick-samplevnf has collectd installed. This allows for collecting KPIs
-from the VNF.
+The default yardstick-samplevnf has collectd installed. This allows for
+collecting KPIs from the VNF.
Collecting KPIs from the NFVi is more complicated and requires manual setup.
We assume that collectd is not installed on the compute nodes.
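+
+As a minimal manual check on a compute node (assuming an Ubuntu based compute
+node; the Barometer project provides collectd builds with the additional
+plugins), one could verify or install collectd like this:
+
+.. code-block:: console
+
+  systemctl status collectd           # check whether collectd already runs
+  sudo apt-get install -y collectd
+  sudo systemctl enable --now collectd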
@@ -130,15 +132,17 @@ Scale-Up
VNFs performance data with scale-up
- * Helps to figure out optimal number of cores specification in the Virtual Machine template creation or VNF
+  * Helps to figure out the optimal number of cores to specify in the Virtual
+    Machine template creation or in the VNF
* Helps in comparison between different VNF vendor offerings
- * Better the scale-up index, indicates the performance scalability of a particular solution
+  * The better the scale-up index, the better the performance scalability of
+    a particular solution
Heat
^^^^
-
-For VNF scale-up tests we increase the number for VNF worker threads and ports. In the case of VNFs
-we also need to increase the number of VCPUs and memory allocated to the VNF.
+For VNF scale-up tests we increase the number of VNF worker threads. In the
+case of VNFs we also need to increase the number of VCPUs and the memory
+allocated to the VNF.
An example scale-up Heat testcase is:
@@ -195,9 +199,9 @@ file may by specified in ``vnf_config`` scenario section.
Baremetal
^^^^^^^^^
1. Follow above traffic generator section to setup.
- 2. edit num of threads in ``<repo>/samples/vnf_samples/nsut/vfw/tc_baremetal_rfc2544_ipv4_1rule_1flow_64B_trex_scale_up.yaml``
-
- e.g, 6 Threads for given VNF
+  2. Edit the number of threads in
+     ``<repo>/samples/vnf_samples/nsut/vfw/tc_baremetal_rfc2544_ipv4_1rule_1flow_64B_trex_scale_up.yaml``,
+     e.g. 6 threads for a given VNF:
.. code-block:: yaml
@@ -240,11 +244,12 @@ Baremetal
Scale-Out
--------------------
-VNFs performance data with scale-out
+VNF performance data with scale-out helps
- * Helps in capacity planning to meet the given network node requirements
- * Helps in comparison between different VNF vendor offerings
- * Better the scale-out index, provides the flexibility in meeting future capacity requirements
+ * in capacity planning to meet the given network node requirements
+ * in comparison between different VNF vendor offerings
+  * in providing flexibility for meeting future capacity requirements; the
+    better the scale-out index, the greater that flexibility
Standalone
@@ -274,7 +279,8 @@ Scale-out not supported on Baremetal.
Heat
^^^^
-There are sample scale-out all-VM Heat tests. These tests only use VMs and don't use external traffic.
+There are sample scale-out all-VM Heat tests. These tests only use VMs and
+don't use external traffic.
The tests use UDP_Replay and correlated traffic.
@@ -288,11 +294,14 @@ To run the test you need to increase OpenStack CPU, Memory and Port quotas.
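+
+Increasing the quotas can be done with the OpenStack client, for example
+(the values are only an illustration and depend on the scale-out level):
+
+.. code-block:: console
+
+  openstack quota set --cores 64 --ram 204800 --ports 100 <project>
+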
Traffic Generator tuning
------------------------
-The TRex traffic generator can be setup to use multiple threads per core, this is for multiqueue testing.
+The TRex traffic generator can be set up to use multiple threads per core;
+this is used for multiqueue testing.
-TRex does not automatically enable multiple threads because we currently cannot detect the number of queues on a device.
+TRex does not automatically enable multiple threads because we currently cannot
+detect the number of queues on a device.
-To enable multiple queue set the queues_per_port value in the TG VNF options section.
+To enable multiple queues, set the ``queues_per_port`` value in the TG VNF
+options section.
.. code-block:: yaml
diff --git a/docs/testing/user/userguide/15-list-of-tcs.rst b/docs/testing/user/userguide/15-list-of-tcs.rst
index cb99c49cf..678f0f9a9 100644
--- a/docs/testing/user/userguide/15-list-of-tcs.rst
+++ b/docs/testing/user/userguide/15-list-of-tcs.rst
@@ -14,11 +14,11 @@ This chapter lists available Yardstick test cases.
Yardstick test cases are divided in two main categories:
* *Generic NFVI Test Cases* - Test Cases developed to realize the methodology
-described in :doc:`02-methodology`
+ described in :doc:`02-methodology`
* *OPNFV Feature Test Cases* - Test Cases developed to verify one or more
-aspect of a feature delivered by an OPNFV Project, including the test cases
-developed for the :term:`VTC`.
+ aspect of a feature delivered by an OPNFV Project, including the test cases
+ developed for the :term:`VTC`.
Generic NFVI Test Case Descriptions
===================================
@@ -109,8 +109,8 @@ Parser
opnfv_yardstick_tc040.rst
- StorPerf
------------
+StorPerf
+--------
.. toctree::
:maxdepth: 1
diff --git a/docs/testing/user/userguide/index.rst b/docs/testing/user/userguide/index.rst
index 61e157e52..b936e723d 100644
--- a/docs/testing/user/userguide/index.rst
+++ b/docs/testing/user/userguide/index.rst
@@ -17,15 +17,16 @@ Yardstick User Guide
02-methodology
03-architecture
04-installation
- 05-yardstick_plugin
- 06-result-store-InfluxDB
- 07-grafana
- 08-api
- 09-yardstick_user_interface
- 10-vtc-overview
- 11-nsb-overview
- 12-nsb_installation
- 13-nsb_operation
+ 05-operation
+ 06-yardstick-plugin
+ 07-result-store-InfluxDB
+ 08-grafana
+ 09-api
+ 10-yardstick-user-interface
+ 11-vtc-overview
+ 12-nsb-overview
+ 13-nsb-installation
+ 14-nsb-operation
15-list-of-tcs
nsb/nsb-list-of-tcs
glossary
diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc050.rst b/docs/testing/user/userguide/opnfv_yardstick_tc050.rst
index 8890c9d53..82a491b72 100644
--- a/docs/testing/user/userguide/opnfv_yardstick_tc050.rst
+++ b/docs/testing/user/userguide/opnfv_yardstick_tc050.rst
@@ -34,23 +34,20 @@ Yardstick Test Case Description TC050
| | 2) host: which is the name of a control node being attacked. |
| | 3) interface: the network interface to be turned off. |
| | |
-| | There are four instance of the "close-interface" monitor: |
-| | attacker1(for public netork): |
-| | -fault_type: "close-interface" |
-| | -host: node1 |
-| | -interface: "br-ex" |
-| | attacker2(for management netork): |
-| | -fault_type: "close-interface" |
-| | -host: node1 |
-| | -interface: "br-mgmt" |
-| | attacker3(for storage netork): |
-| | -fault_type: "close-interface" |
-| | -host: node1 |
-| | -interface: "br-storage" |
-| | attacker4(for private netork): |
-| | -fault_type: "close-interface" |
-| | -host: node1 |
-| | -interface: "br-mesh" |
+| | The interface to be closed by the attacker can be set by the |
+| | variable of "{{ interface_name }}" |
+| | |
+| | attackers: |
+| | - |
+| | fault_type: "general-attacker" |
+| | host: {{ attack_host }} |
+| | key: "close-br-public" |
+| | attack_key: "close-interface" |
+| | action_parameter: |
+| | interface: {{ interface_name }} |
+| | rollback_parameter: |
+| | interface: {{ interface_name }} |
+| | |
+--------------+--------------------------------------------------------------+
|monitors | In this test case, the monitor named "openstack-cmd" is |
| | needed. The monitor needs needs two parameters: |
@@ -61,17 +58,17 @@ Yardstick Test Case Description TC050
| | |
| | There are four instance of the "openstack-cmd" monitor: |
| | monitor1: |
-| | -monitor_type: "openstack-cmd" |
-| | -command_name: "nova image-list" |
+| | - monitor_type: "openstack-cmd" |
+| | - command_name: "nova image-list" |
| | monitor2: |
-| | -monitor_type: "openstack-cmd" |
-| | -command_name: "neutron router-list" |
+| | - monitor_type: "openstack-cmd" |
+| | - command_name: "neutron router-list" |
| | monitor3: |
-| | -monitor_type: "openstack-cmd" |
-| | -command_name: "heat stack-list" |
+| | - monitor_type: "openstack-cmd" |
+| | - command_name: "heat stack-list" |
| | monitor4: |
-| | -monitor_type: "openstack-cmd" |
-| | -command_name: "cinder list" |
+| | - monitor_type: "openstack-cmd" |
+| | - command_name: "cinder list" |
+--------------+--------------------------------------------------------------+
|metrics | In this test case, there is one metric: |
| | 1)service_outage_time: which indicates the maximum outage |
@@ -109,9 +106,9 @@ Yardstick Test Case Description TC050
+--------------+--------------------------------------------------------------+
|step 2 | do attacker: connect the host through SSH, and then execute |
| | the turnoff network interface script with param value |
-| | specified by "interface". |
+| | specified by "{{ interface_name }}". |
| | |
-| | Result: Network interfaces will be turned down. |
+| | Result: The specified network interface will be down. |
| | |
+--------------+--------------------------------------------------------------+
|step 3 | stop monitors after a period of time specified by |
@@ -133,3 +130,4 @@ Yardstick Test Case Description TC050
| | execution problem. |
| | |
+--------------+--------------------------------------------------------------+
+