Diffstat (limited to 'docs')
-rw-r--r--  docs/release/release-notes/release-notes.rst | 14
-rw-r--r--  docs/release/results/euphrates_fraser_comparison.rst | 10
-rw-r--r--  docs/release/results/images/tc014_pod_fraser.png (renamed from docs/release/results/images/tc014_pod_fraseer.png) | bin 29513 -> 29513 bytes
-rw-r--r--  docs/release/results/overview.rst | 9
-rw-r--r--  docs/release/results/results.rst | 17
-rw-r--r--  docs/release/results/yardstick-opnfv-vtc.rst | 248
-rw-r--r--  docs/templates/test_results_template.rst | 23
-rwxr-xr-x  docs/testing/developer/devguide/devguide.rst | 156
-rwxr-xr-x  docs/testing/developer/devguide/devguide_nsb_prox.rst | 335
-rwxr-xr-x  docs/testing/user/userguide/01-introduction.rst | 8
-rwxr-xr-x  docs/testing/user/userguide/03-architecture.rst | 25
-rw-r--r--  docs/testing/user/userguide/04-installation.rst | 150
-rw-r--r--  docs/testing/user/userguide/05-operation.rst | 2
-rw-r--r--  docs/testing/user/userguide/08-grafana.rst | 2
-rw-r--r--  docs/testing/user/userguide/09-api.rst | 2
-rw-r--r--  docs/testing/user/userguide/12-nsb-overview.rst | 2
-rw-r--r--  docs/testing/user/userguide/13-nsb-installation.rst | 161
-rw-r--r--  docs/testing/user/userguide/14-nsb-operation.rst | 115
-rw-r--r--  docs/testing/user/userguide/15-list-of-tcs.rst | 11
-rw-r--r--  docs/testing/user/userguide/comp-intro.rst | 4
-rw-r--r--  docs/testing/user/userguide/glossary.rst | 75
-rw-r--r--  docs/testing/user/userguide/nsb/nsb-list-of-tcs.rst | 5
-rw-r--r--  docs/testing/user/userguide/nsb/tc_epc_dedicated_bearer_landslide.rst | 156
-rw-r--r--  docs/testing/user/userguide/nsb/tc_epc_default_bearer_landslide.rst | 149
-rw-r--r--  docs/testing/user/userguide/nsb/tc_epc_network_service_request_landslide.rst | 159
-rw-r--r--  docs/testing/user/userguide/nsb/tc_epc_saegw_tput_relocation_landslide.rst | 167
-rw-r--r--  docs/testing/user/userguide/nsb/tc_epc_ue_service_request_landslide.rst | 174
-rw-r--r--  docs/testing/user/userguide/opnfv_yardstick_tc010.rst | 3
-rw-r--r--  docs/testing/user/userguide/opnfv_yardstick_tc011.rst | 6
-rw-r--r--  docs/testing/user/userguide/opnfv_yardstick_tc012.rst | 1
-rw-r--r--  docs/testing/user/userguide/opnfv_yardstick_tc019.rst | 22
-rw-r--r--  docs/testing/user/userguide/opnfv_yardstick_tc025.rst | 11
-rw-r--r--  docs/testing/user/userguide/opnfv_yardstick_tc027.rst | 2
-rw-r--r--  docs/testing/user/userguide/opnfv_yardstick_tc040.rst | 2
-rw-r--r--  docs/testing/user/userguide/opnfv_yardstick_tc042.rst | 2
-rw-r--r--  docs/testing/user/userguide/opnfv_yardstick_tc050.rst | 49
-rw-r--r--  docs/testing/user/userguide/opnfv_yardstick_tc052.rst | 19
-rw-r--r--  docs/testing/user/userguide/opnfv_yardstick_tc055.rst | 4
-rw-r--r--  docs/testing/user/userguide/opnfv_yardstick_tc057.rst | 11
-rw-r--r--  docs/testing/user/userguide/opnfv_yardstick_tc063.rst | 1
-rw-r--r--  docs/testing/user/userguide/opnfv_yardstick_tc069.rst | 6
-rw-r--r--  docs/testing/user/userguide/opnfv_yardstick_tc073.rst | 2
-rw-r--r--  docs/testing/user/userguide/opnfv_yardstick_tc074.rst | 10
-rw-r--r--  docs/testing/user/userguide/opnfv_yardstick_tc081.rst | 4
-rw-r--r--  docs/testing/user/userguide/opnfv_yardstick_tc084.rst | 25
-rw-r--r--  docs/testing/user/userguide/opnfv_yardstick_tc087.rst | 11
-rw-r--r--  docs/testing/user/userguide/opnfv_yardstick_tc092.rst | 33
-rw-r--r--  docs/testing/user/userguide/opnfv_yardstick_tc093.rst | 43
-rw-r--r--  docs/testing/user/userguide/references.rst | 22
49 files changed, 1822 insertions(+), 646 deletions(-)
diff --git a/docs/release/release-notes/release-notes.rst b/docs/release/release-notes/release-notes.rst
index daa4b8187..457b308ae 100644
--- a/docs/release/release-notes/release-notes.rst
+++ b/docs/release/release-notes/release-notes.rst
@@ -5,7 +5,7 @@ License
OPNFV Fraser release note for Yardstick Docs
are licensed under a Creative Commons Attribution 4.0 International License.
You should have received a copy of the license along with this.
-If not, see <http://creativecommons.org/licenses/by/4.0/>.
+If not, see <https://creativecommons.org/licenses/by/4.0/>.
The *Yardstick framework*, the *Yardstick test cases* are open-source software,
licensed under the terms of the Apache License, Version 2.0.
@@ -17,11 +17,11 @@ OPNFV Fraser Release Note for Yardstick
.. toctree::
:maxdepth: 2
-.. _Yardstick: https://wiki.opnfv.org/yardstick
+.. _Yardstick: https://wiki.opnfv.org/display/yardstick
-.. _Dashboard: http://testresults.opnfv.org/grafana/dashboard/db/yardstick-main
+.. _Dashboard: http://testresults.opnfv.org/grafana/
-.. _NFV-TST001: http://www.etsi.org/deliver/etsi_gs/NFV-TST/001_099/001/01.01.01_60/gs_NFV-TST001v010101p.pdf
+.. _NFV-TST001: https://www.etsi.org/deliver/etsi_gs/NFV-TST/001_099/001/01.01.01_60/gs_NFV-TST001v010101p.pdf
Abstract
@@ -149,9 +149,9 @@ Deliverables
Documents
---------
- - User Guide: http://docs.opnfv.org/en/stable-fraser/submodules/yardstick/docs/testing/user/userguide/index.html
+ - User Guide: :ref:`<yardstick:userguide>`
- - Developer Guide: http://docs.opnfv.org/en/stable-fraser/submodules/yardstick/docs/testing/developer/devguide/index.html
+ - Developer Guide: :ref:`<yardstick:devguide>`
Software Deliverables
@@ -606,7 +606,7 @@ Useful links
- wiki Yardstick Fraser release planing page: https://wiki.opnfv.org/display/yardstick/Release+Fraser
- - Yardstick repo: https://git.opnfv.org/cgit/yardstick
+ - Yardstick repo: https://git.opnfv.org/yardstick
- Yardstick CI dashboard: https://build.opnfv.org/ci/view/yardstick
diff --git a/docs/release/results/euphrates_fraser_comparison.rst b/docs/release/results/euphrates_fraser_comparison.rst
index 53dfb994f..1dd328bb7 100644
--- a/docs/release/results/euphrates_fraser_comparison.rst
+++ b/docs/release/results/euphrates_fraser_comparison.rst
@@ -2,7 +2,15 @@
.. License.
.. http://creativecommons.org/licenses/by/4.0
-=======================================================
+..
+ Convention for heading levels in Yardstick:
+ ======= Heading 0 (reserved for the title in a document)
+ ------- Heading 1
+ ^^^^^^^ Heading 2
+ +++++++ Heading 3
+ ''''''' Heading 4
+ Avoid deeper levels because they do not render well.
+
Test results analysis for Euphrates and Fraser releases
=======================================================
diff --git a/docs/release/results/images/tc014_pod_fraseer.png b/docs/release/results/images/tc014_pod_fraser.png
index 697201d76..697201d76 100644
--- a/docs/release/results/images/tc014_pod_fraseer.png
+++ b/docs/release/results/images/tc014_pod_fraser.png
Binary files differ
diff --git a/docs/release/results/overview.rst b/docs/release/results/overview.rst
index 9fd74797c..b5e6a43a6 100644
--- a/docs/release/results/overview.rst
+++ b/docs/release/results/overview.rst
@@ -3,6 +3,15 @@
.. http://creativecommons.org/licenses/by/4.0
.. (c) OPNFV, Ericsson AB and others.
+..
+ Convention for heading levels in Yardstick:
+ ======= Heading 0 (reserved for the title in a document)
+ ------- Heading 1
+ ^^^^^^^ Heading 2
+ +++++++ Heading 3
+ ''''''' Heading 4
+ Avoid deeper levels because they do not render well.
+
Yardstick test result document overview
=======================================
diff --git a/docs/release/results/results.rst b/docs/release/results/results.rst
index 0ed92f867..f0c20360b 100644
--- a/docs/release/results/results.rst
+++ b/docs/release/results/results.rst
@@ -2,8 +2,17 @@
.. License.
.. http://creativecommons.org/licenses/by/4.0
+..
+ Convention for heading levels in Yardstick:
+ ======= Heading 0 (reserved for the title in a document)
+ ------- Heading 1
+ ^^^^^^^ Heading 2
+ +++++++ Heading 3
+ ''''''' Heading 4
+ Avoid deeper levels because they do not render well.
+
Results listed by test cases
-==========================
+----------------------------
.. _TOM: https://wiki.opnfv.org/display/testing/R+post-processing+of+the+Yardstick+results
@@ -14,7 +23,7 @@ the determined state of the specific test case as executed in the Fraser release
process. All test data are analyzed using the TOM_ tool.
Scenario Results
-================
+----------------
.. _Dashboard: http://testresults.opnfv.org/grafana/dashboard/db/yardstick-main
.. _Jenkins: https://build.opnfv.org/ci/view/yardstick/
@@ -42,7 +51,7 @@ Test results of executed tests are avilable in Dashboard_ and logs in Jenkins_.
Test results for Fraser release are collected from April 10, 2018 to May 13, 2018.
Feature Test Results
-====================
+--------------------
The following features were verified by Yardstick test cases:
@@ -54,8 +63,6 @@ The following features were verified by Yardstick test cases:
* Parser
- * Virtual Traffic Classifier (see :doc:`yardstick-opnfv-vtc`)
-
* StorPerf
.. note:: The test cases for IPv6 and Parser Projects are included in the
diff --git a/docs/release/results/yardstick-opnfv-vtc.rst b/docs/release/results/yardstick-opnfv-vtc.rst
deleted file mode 100644
index 059b5491f..000000000
--- a/docs/release/results/yardstick-opnfv-vtc.rst
+++ /dev/null
@@ -1,248 +0,0 @@
-.. This work is licensed under a Creative Commons Attribution 4.0 International
-.. License.
-.. http://creativecommons.org/licenses/by/4.0
-
-.. _Dashboard006: http://testresults.opnfv.org/grafana/dashboard/db/yardstick-tc006
-.. _Dashboard007: http://testresults.opnfv.org/grafana/dashboard/db/yardstick-tc007
-.. _Dashboard020: http://testresults.opnfv.org/grafana/dashboard/db/yardstick-tc020
-.. _Dashboard021: http://testresults.opnfv.org/grafana/dashboard/db/yardstick-tc021
-.. _DashboardVTC: http://testresults.opnfv.org/grafana/dashboard/db/vtc-dashboard
-====================================
-Test Results for yardstick-opnfv-vtc
-====================================
-
-.. toctree::
- :maxdepth: 2
-
-
-Details
-=======
-
-.. after this doc is filled, remove all comments and include the scenario in
-.. results.rst by removing the comment on the file name.
-
-
-Overview of test results
-------------------------
-
-.. general on metrics collected, number of iterations
-
-The virtual Traffic Classifier (vtc) Scenario supported by Yardstick is used by 4 Test Cases:
-
-- TC006
-- TC007
-- TC020
-- TC021
-
-
-* TC006
-
-TC006 is the Virtual Traffic Classifier Data Plane Throughput Benchmarking Test.
-It collects measures about the end-to-end throughput supported by the
-virtual Traffic Classifier (vTC).
-Results of the test are shown in the Dashboard006_
-The throughput is expressed as percentage of the available bandwidth on the NIC.
-
-
-* TC007
-
-TC007 is the Virtual Traffic Classifier Data Plane Throughput Benchmarking in presence of
-noisy neighbors Test.
-It collects measures about the end-to-end throughput supported by the
-virtual Traffic Classifier when a user-defined number of noisy neighbors is deployed.
-Results of the test are shown in the Dashboard007_
-The throughput is expressed as percentage of the available bandwidth on the NIC.
-
-
-* TC020
-
-TC020 is the Virtual Traffic Classifier Instantiation Test.
-It verifies that a newly instantiated vTC is alive and functional and its instantiation
-is correctly supported by the underlying infrastructure.
-Results of the test are shown in the Dashboard020_
-
-
-* TC021
-
-TC021 is the Virtual Traffic Classifier Instantiation in presence of noisy neighbors Test.
-It verifies that a newly instantiated vTC is alive and functional and its instantiation
-is correctly supported by the underlying infrastructure when noisy neighbors are present.
-Results of the test are shown in the Dashboard021_
-
-* Generic
-
-In the Generic scenario the Virtual Traffic Classifier is running on a standard Openstack
-setup and traffic is being replayed from a neighbor VM. The traffic sent contains
-various protocols and applications, and the VTC identifies them and exports the data.
-Results of the test are shown in the DashboardVTC.
-
-Detailed test results
----------------------
-
-* TC006
-
-The results for TC006 have been obtained using the following test case
-configuration:
-
-- Context: Dummy
-- Scenario: vtc_throughput
-- Network Techology: SR-IOV
-- vTC Flavor: m1.large
-
-
-* TC007
-
-The results for TC007 have been obtained using the following test case
-configuration:
-
-- Context: Dummy
-- Scenario: vtc_throughput_noisy
-- Network Techology: SR-IOV
-- vTC Flavor: m1.large
-- Number of noisy neighbors: 2
-- Number of cores per neighbor: 2
-- Amount of RAM per neighbor: 1G
-
-
-* TC020
-
-The results for TC020 have been obtained using the following test case
-configuration:
-
-The results listed in previous section have been obtained using the following
-test case configuration:
-
-- Context: Dummy
-- Scenario: vtc_instantiation_validation
-- Network Techology: SR-IOV
-- vTC Flavor: m1.large
-
-
-* TC021
-
-The results listed in previous section have been obtained using the following
-test case configuration:
-
-- Context: Dummy
-- Scenario: vtc_instantiation_validation
-- Network Techology: SR-IOV
-- vTC Flavor: m1.large
-- Number of noisy neighbors: 2
-- Number of cores per neighbor: 2
-- Amount of RAM per neighbor: 1G
-
-
-For all the test cases, the user can specify different values for the parameters.
-
-* Generic
-
-The results listed in the previous section have been obtained, using a
-standard Openstack setup.
-The user can replay his/her own traffic and see the corresponding results.
-
-Rationale for decisions
------------------------
-
-* TC006
-
-The result of the test is a number between 0 and 100 which represents the percentage of bandwidth
-available on the NIC that corresponds to the supported throughput by the vTC.
-
-
-* TC007
-
-The result of the test is a number between 0 and 100 which represents the percentage of bandwidth
-available on the NIC that corresponds to the supported throughput by the vTC.
-
-* TC020
-
-The execution of the test is done as described in the following:
-
-- The vTC is deployed on the OpenStack testbed;
-- Some traffic is sent to the vTC;
-- The vTC changes the header of the packets and sends them back to the packet generator;
-- The packet generator checks that all the packets are received correctly and have been changed
-correctly by the vTC.
-
-The test is declared as PASSED if all the packets are correcly received by the packet generator
-and they have been modified by the virtual Traffic Classifier as required.
-
-
-* TC021
-
-The execution of the test is done as described in the following:
-
-- The vTC is deployed on the OpenStack testbed;
-- The noisy neighbors are deployed as requested by the user;
-- Some traffic is sent to the vTC;
-- The vTC change the header of the packets and sends them back to the packet generator;
-- The packet generator checks that all the packets are received correctly and have been changed
-correctly by the vTC
-
-The test is declared as PASSED if all the packets are correcly received by the packet generator
-and they have been modified by the virtual Traffic Classifier as required.
-
-* Generic
-
-The execution of the test consists of the following actions:
-
-- The vTC is deployed on the OpenStack testbed;
-- The traffic generator VM is deployed on the Openstack Testbed;
-- Traffic data are relevant to the network setup;
-- Traffic is sent to the vTC;
-
-
-
-Conclusions and recommendations
--------------------------------
-
-* TC006
-
-The obtained results show that the virtual Traffic Classifier can support up to 4 Gbps
-(40% of the available bandwidth) correspond to the expected behaviour of the virtual
-Traffic Classifier.
-Using the configuration with SR-IOV and large flavor, the expected throughput should
-generally be in the range between 3 and 4 Gbps.
-
-
-* TC007
-
-These results correspond to the configuration in which the virtual Traffic Classifier uses SR-IOV
-Virtual Functions and the flavor is set to large for the virtual machine.
-The throughput is in the range between 2.5 Gbps and 3.7 Gbps.
-This shows that the effect of 2 noisy neighbors reduces the throughput of
-the service between 10 and 20%.
-Increasing number of neihbours would have a higher impact on the performance.
-
-
-* TC020
-
-The obtained results correspond to the expected behaviour of the virtual Traffic Classifier.
-Using the configuration with SR-IOV and large flavor, the expected result is that the vTC is
-correctly instantiated, it is able to receive and send packets using SR-IOV technology
-and to forward packets back to the packet generator changing the TCP/IP header as required.
-
-
-* TC021
-
-The obtained results correspond to the expected behaviour of the virtual Traffic Classifier.
-Using the configuration with SR-IOV and large flavor, the expected result is that the vTC is
-correctly instantiated, it is able to receive and send packets using SR-IOV technology
-and to forward packets back to the packet generator changing the TCP/IP header as required,
-also in presence of noisy neighbors.
-
-* Generic
-
-The obtained results correspond to the expected behaviour of the virtual Traffic Classifier.
-Using the aforementioned configuration the expected application protocols are identified
-and their traffic statistics are demonstrated in the DashboardVTC, a group of popular
-applications is selected to demonstrate the sound operation of the vTC.
-The demonstrated application protocols are:
-- HTTP
-- Skype
-- Bittorrent
-- Youtube
-- Dropbox
-- Twitter
-- Viber
-- iCloud
diff --git a/docs/templates/test_results_template.rst b/docs/templates/test_results_template.rst
index f04b2b2a8..8885588ae 100644
--- a/docs/templates/test_results_template.rst
+++ b/docs/templates/test_results_template.rst
@@ -1,3 +1,18 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+
+..
+ Convention for heading levels in Yardstick documentation:
+
+ ======= Heading 0 (reserved for the title in a document)
+ ------- Heading 1
+ ~~~~~~~ Heading 2
+ +++++++ Heading 3
+ ''''''' Heading 4
+
+ Avoid deeper levels because they do not render well.
+
=====================
Yardstick Test Report
=====================
@@ -46,16 +61,16 @@ TCXXX
on-demand test cases (HA, KVM, Parser)
* Overview of test results
-.. general on metrics collected, number of iterations
+ .. general on metrics collected, number of iterations
* Detailed test results
-.. info on lab, installer, scenario
+ .. info on lab, installer, scenario
* Rationale for decisions
-.. pass/fail
+ .. pass/fail
* Conclusions and recommendations
-.. did the expected behavior occured?
+ .. did the expected behavior occur?
General
=======
diff --git a/docs/testing/developer/devguide/devguide.rst b/docs/testing/developer/devguide/devguide.rst
index 91f2c2148..2065f6e0d 100755
--- a/docs/testing/developer/devguide/devguide.rst
+++ b/docs/testing/developer/devguide/devguide.rst
@@ -47,14 +47,14 @@ your field of interest is.
Where can I find some help to start?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-.. _`user guide`: http://artifacts.opnfv.org/yardstick/danube/1.0/docs/stesting_user_userguide/index.html
+.. _`user guide`: https://artifacts.opnfv.org/yardstick/docs/testing_user_userguide/index.html
.. _`wiki page`: https://wiki.opnfv.org/display/yardstick/
This guide is made for you. You can have a look at the `user guide`_.
There are also references on documentation, video tutorials, tips in the
-project `wiki page`_. You can also directly contact us by mail with [Yardstick]
-prefix in the subject at opnfv-tech-discuss@lists.opnfv.org or on the IRC chan
-#opnfv-yardstick.
+project `wiki page`_. You can also directly contact us by mail with
+``#yardstick`` or ``[yardstick]`` prefix in the subject at
+``opnfv-tech-discuss@lists.opnfv.org`` or on the IRC channel ``#opnfv-yardstick``.
Yardstick developer areas
@@ -401,7 +401,7 @@ Gerrit & JIRA introduction
++++++++++++++++++++++++++
.. _Gerrit: https://www.gerritcodereview.com/
-.. _`OPNFV Gerrit`: http://gerrit.opnfv.org/
+.. _`OPNFV Gerrit`: http://gerrit.opnfv.org/gerrit
.. _link: https://identity.linuxfoundation.org/
.. _JIRA: https://jira.opnfv.org/secure/Dashboard.jspa
@@ -449,6 +449,10 @@ Verify your patch::
It is used in CI but also by the CLI.
+For more details on ``tox`` and tests, please refer to the `Running tests`_
+and `working with tox`_ sections below, which describe the different available
+environments.
+
Submit the code with Git
++++++++++++++++++++++++
@@ -481,7 +485,7 @@ Git repository::
JIRA: YARDSTICK-XXX
-.. _`this document`: http://chris.beams.io/posts/git-commit/
+.. _`this document`: https://chris.beams.io/posts/git-commit/
The message that is required for the commit should follow a specific set of
rules. This practice allows to standardize the description messages attached
@@ -506,8 +510,8 @@ Yardstick committers and contributors to review your codes.
:alt: Gerrit for code review
You can find a list Yardstick people
-`here <https://wiki.opnfv.org/display/yardstick/People>`_, or use the
-``yardstick-reviewers`` and ``yardstick-committers`` groups in gerrit.
+`here <https://wiki.opnfv.org/display/yardstick/Yardstick+People>`_, or use
+the ``yardstick-reviewers`` and ``yardstick-committers`` groups in gerrit.
Modify the code under review in Gerrit
++++++++++++++++++++++++++++++++++++++
@@ -566,6 +570,142 @@ The process for backporting is as follows:
A backported change needs a ``+1`` and a ``+2`` from a committer who didn’t
propose the change (i.e. minimum 3 people involved).
+Development guidelines
+----------------------
+This section provides guidelines and best practices for feature development
+and bug fixing in Yardstick.
+
+In general, bug fixes should be submitted as a single patch.
+
+When developing larger features, all commits on the local topic branch can be
+submitted together, by running ``git review`` on the tip of the branch. This
+creates a chain of related patches in gerrit.
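+
+For illustration only (the branch name below is made up and ``git-review``
+must already be installed), the whole chain is submitted from the tip of the
+topic branch::
+
+    git checkout my-feature-branch
+    git review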
+
+Each commit should contain one logical change and the author should aim for no
+more than 300 lines of code per commit. This helps to make the changes easier
+to review.
+
+Each feature should have the following:
+
+* Feature/bug fix code
+* Unit tests (both positive and negative)
+* Functional tests (optional)
+* Sample testcases (if applicable)
+* Documentation
+* Update to release notes
+
+Coding style
+~~~~~~~~~~~~
+.. _`OpenStack Style Guidelines`: https://docs.openstack.org/hacking/latest/user/hacking.html
+.. _`OPNFV coding guidelines`: https://wiki.opnfv.org/display/DEV/Contribution+Guidelines
+
+Please follow the `OpenStack Style Guidelines`_ for code contributions (the
+section on Internationalization (i18n) Strings is not applicable).
+
+When writing commit message, the `OPNFV coding guidelines`_ on git commit
+message style should also be used.
+
+Running tests
+~~~~~~~~~~~~~
+Once your patch has been submitted, a number of tests will be run by Jenkins
+CI to verify the patch. Before submitting your patch, you should run these
+tests locally. You can do this using ``tox``, which has a number of different
+test environments defined in ``tox.ini``.
+Calling ``tox`` without any additional arguments runs the default set of
+tests (unit tests, functional tests, coverage and pylint).
+
+If some tests are failing, you can save time and select test environments
+individually, by passing one or more of the following command-line options to
+``tox``:
+
+* ``-e py27``: Unit tests using Python 2.7
+* ``-e py3``: Unit tests using Python 3
+* ``-e pep8``: Linter and style checks on updated files
+* ``-e functional``: Functional tests using Python 2.7
+* ``-e functional-py3``: Functional tests using Python 3
+* ``-e coverage``: Code coverage checks
+
+.. note:: You need to stage your changes prior to running coverage for those
+ changes to be checked.
+
+In addition to the tests run by Jenkins (listed above), there are a number of
+other test environments defined.
+
+* ``-e pep8-full``: Linter and style checks are run on the whole repo (not
+ just on updated files)
+* ``-e os-requirements``: Check that the requirements are compatible with
+ OpenStack requirements.
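+
+For example, to run only the Python 3 unit tests and the style checks on the
+updated files, the environments above can be selected individually or
+combined in a single invocation::
+
+    tox -e py3
+    tox -e pep8
+
+    # equivalent, as a single run
+    tox -e py3,pep8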
+
+Working with tox
+++++++++++++++++
+.. _virtualenv: https://virtualenv.pypa.io/en/stable/
+
+``tox`` uses `virtualenv`_ to create isolated Python environments to run the
+tests in. The test environments are located at
+``.tox/<environment_name>`` e.g. ``.tox/py27``.
+
+If requirements are changed, you will need to recreate the tox test
+environment to make sure the new requirements are installed. This is done by
+passing the additional ``-r`` command-line option to ``tox``::
+
+ tox -r -e ...
+
+This can also be achieved by deleting the test environments manually before
+running ``tox``::
+
+ rm -rf .tox/<environment_name>
+ rm -rf .tox/py27
+
+Writing unit tests
+~~~~~~~~~~~~~~~~~~
+For each change submitted, a set of unit tests should be submitted, which
+should include both positive and negative testing.
+
+In order to help identify which tests are needed, follow the guidelines below.
+
+* In general, there should be a separate test for each branching point, return
+ value and input set.
+* Negative tests should be written to make sure exceptions are raised and/or
+ handled appropriately.
+
+The following convention should be used for naming tests::
+
+ test_<method_name>_<some_comment>
+
+The comment gives more information on the nature of the test, the side effect
+being checked, or the parameter being modified::
+
+ test_my_method_runtime_error
+ test_my_method_invalid_credentials
+ test_my_method_param1_none
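+
+Putting this together, a hypothetical ``my_method`` that raises ``ValueError``
+on bad input could be covered by one positive and one negative test
+(``MyClass`` and ``my_method`` are illustrative names only)::
+
+    class MyClassTestCase(unittest.TestCase):
+
+        def test_my_method_valid_input(self):
+            self.assertEqual('expected', MyClass().my_method('valid'))
+
+        def test_my_method_invalid_input_value_error(self):
+            with self.assertRaises(ValueError):
+                MyClass().my_method(None)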
+
+Mocking
++++++++
+The ``mock`` library is used for unit testing to stub out external libraries.
+
+The following conventions are used in Yardstick:
+
+* Use ``mock.patch.object`` instead of ``mock.patch``.
+
+* When naming mocked classes/functions, use ``mock_<class_and_function_name>``
+ e.g. ``mock_subprocess_call``
+
+* Avoid decorating classes with mocks. Apply the mocking in ``setUp()``::
+
+    @mock.patch.object(ssh, 'SSH')
+    class MyClassTestCase(unittest.TestCase):
+
+  should be::
+
+    class MyClassTestCase(unittest.TestCase):
+        def setUp(self):
+            self._mock_ssh = mock.patch.object(ssh, 'SSH')
+            self.mock_ssh = self._mock_ssh.start()
+
+            self.addCleanup(self._stop_mocks)
+
+        def _stop_mocks(self):
+            self._mock_ssh.stop()
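+
+Individual test methods can then drive the started mock directly. A minimal
+sketch, assuming a hypothetical ``my_method`` that opens an SSH connection
+through ``ssh.SSH.from_node()``::
+
+    def test_my_method_ssh_runtime_error(self):
+        # negative test: the stubbed SSH layer fails and the error is
+        # expected to reach the caller
+        self.mock_ssh.from_node.side_effect = RuntimeError
+        with self.assertRaises(RuntimeError):
+            MyClass().my_method()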
Plugins
-------
diff --git a/docs/testing/developer/devguide/devguide_nsb_prox.rst b/docs/testing/developer/devguide/devguide_nsb_prox.rst
index 79990055a..be2b5be61 100755
--- a/docs/testing/developer/devguide/devguide_nsb_prox.rst
+++ b/docs/testing/developer/devguide/devguide_nsb_prox.rst
@@ -13,9 +13,10 @@ optimal system architectures and configurations.
Prerequisites
=============
-In order to integrate PROX tests into NSB, the following prerequisites are required.
+In order to integrate PROX tests into NSB, the following prerequisites are
+required.
-.. _`dpdk wiki page`: http://dpdk.org/
+.. _`dpdk wiki page`: https://www.dpdk.org/
.. _`yardstick wiki page`: https://wiki.opnfv.org/display/yardstick/
.. _`Prox documentation`: https://01.org/intel-data-plane-performance-demonstrators/documentation/prox-documentation
.. _`openstack wiki page`: https://wiki.openstack.org/wiki/Main_Page
@@ -159,11 +160,13 @@ A NSB Prox test is composed of the following components :-
``tc_prox_heat_context_vpe-4.yaml``. This file describes the components
of the test, in the case of openstack the network description and
server descriptions, in the case of baremetal the hardware
- description location. It also contains the name of the Traffic Generator, the SUT config file
- and the traffic profile description, all described below. See nsb-test-description-label_
+ description location. It also contains the name of the Traffic Generator,
+ the SUT config file and the traffic profile description, all described below.
+ See nsb-test-description-label_
-* Traffic Profile file. Example ``prox_binsearch.yaml``. This describes the packet size, tolerated
- loss, initial line rate to start traffic at, test interval etc See nsb-traffic-profile-label_
+* Traffic Profile file. Example ``prox_binsearch.yaml``. This describes the
+ packet size, tolerated loss, initial line rate to start traffic at, test
+  interval etc. See nsb-traffic-profile-label_
* Traffic Generator Config file. Usually called ``gen_<test>-<ports>.cfg``.
@@ -235,7 +238,8 @@ show you how to understand the test description file.
Now let's examine the components of the file in detail
1. ``traffic_profile`` - This specifies the traffic profile for the
- test. In this case ``prox_binsearch.yaml`` is used. See nsb-traffic-profile-label_
+ test. In this case ``prox_binsearch.yaml`` is used. See
+ nsb-traffic-profile-label_
2. ``topology`` - This is either ``prox-tg-topology-1.yaml`` or
``prox-tg-topology-2.yaml`` or ``prox-tg-topology-4.yaml``
@@ -330,11 +334,11 @@ This describes the details of the traffic flow. In this case
:alt: NSB PROX Traffic Profile
-1. ``name`` - The name of the traffic profile. This name should match the name specified in the
- ``traffic_profile`` field in the Test Description File.
+1. ``name`` - The name of the traffic profile. This name should match the name
+ specified in the ``traffic_profile`` field in the Test Description File.
-2. ``traffic_type`` - This specifies the type of traffic pattern generated, This name matches
- class name of the traffic generator See::
+2. ``traffic_type`` - This specifies the type of traffic pattern generated.
+   This name matches the class name of the traffic generator. See::
network_services/traffic_profile/prox_binsearch.py class ProxBinSearchProfile(ProxProfile)
@@ -704,15 +708,22 @@ Now let's examine the components of the file in detail
physical core improves performance, however sometimes it
is optimal to move task to a separate core. This is
best decided by checking performance.
- c. ``mode=lat`` - Specifies the action carried out by this task on this core. Supported modes are: acl,
- classify, drop, gredecap, greencap, ipv6_decap, ipv6_encap, l2fwd, lbnetwork, lbpos, lbqinq, nop,
- police, qinqdecapv4, qinqencapv4, qos, routing, impair, lb5tuple, mirror, unmpls, tagmpls,
- nat, decapnsh, encapnsh, gen, genl4 and lat. This task(0) per core(3) receives packets on port.
- d. ``rx port=p0`` - The port to receive packets on ``Port 0``. Core 4 will receive packets on ``Port 1``.
- e. ``lat pos=42`` - Describes where to put a 4-byte timestamp in the packet. Note that the packet length should
- be longer than ``lat pos`` + 4 bytes to avoid truncation of the timestamp. It defines where the timestamp is
- to be read from. Note that the SUT workload might cause the position of the timestamp to change
- (i.e. due to encapsulation).
+ c. ``mode=lat`` - Specifies the action carried out by this task on this
+ core.
+ Supported modes are: ``acl``, ``classify``, ``drop``, ``gredecap``,
+ ``greencap``, ``ipv6_decap``, ``ipv6_encap``, ``l2fwd``, ``lbnetwork``,
+ ``lbpos``, ``lbqinq``, ``nop``, ``police``, ``qinqdecapv4``,
+ ``qinqencapv4``, ``qos``, ``routing``, ``impair``, ``lb5tuple``,
+ ``mirror``, ``unmpls``, ``tagmpls``, ``nat``, ``decapnsh``, ``encapnsh``,
+ ``gen``, ``genl4`` and ``lat``. This task(0) per core(3) receives packets
+ on port.
+ d. ``rx port=p0`` - The port to receive packets on ``Port 0``. Core 4 will
+ receive packets on ``Port 1``.
+ e. ``lat pos=42`` - Describes where to put a 4-byte timestamp in the packet.
+ Note that the packet length should be longer than ``lat pos`` + 4 bytes
+ to avoid truncation of the timestamp. It defines where the timestamp is
+ to be read from. Note that the SUT workload might cause the position of
+ the timestamp to change (i.e. due to encapsulation).
.. _nsb-sut-generator-label:
@@ -720,7 +731,8 @@ Now let's examine the components of the file in detail
-------------------------------
This section describes the SUT (VNF) config file. This is the same for both
-baremetal and heat. See this example of ``handle_l2fwd_multiflow-2.cfg`` to explain the options.
+baremetal and heat. See this example of ``handle_l2fwd_multiflow-2.cfg`` to
+explain the options.
.. image:: images/PROX_Handle_2port_cfg.png
:width: 1400px
@@ -730,13 +742,15 @@ See `prox options`_ for details
Now let's examine the components of the file in detail
-1. ``[eal options]`` - same as the Generator config file. This specified the EAL (Environmental Abstraction Layer)
- options. These are default values and are not changed.
- See `dpdk wiki page`_.
+1. ``[eal options]`` - same as the Generator config file. This specifies the
+   EAL (Environment Abstraction Layer) options. These are default values and
+   are not changed. See `dpdk wiki page`_.
-2. ``[port 0]`` - This section describes the DPDK Port. The number following the keyword ``port`` usually refers to the DPDK Port Id. usually starting from ``0``.
- Because you can have multiple ports this entry usually repeated. Eg. For a 2 port setup ``[port0]`` and ``[port 1]`` and for a 4 port setup ``[port 0]``, ``[port 1]``,
- ``[port 2]`` and ``[port 3]``::
+2. ``[port 0]`` - This section describes the DPDK Port. The number following
+   the keyword ``port`` usually refers to the DPDK Port Id, usually starting
+   from ``0``. Because you can have multiple ports, this entry is usually
+   repeated. E.g. for a 2 port setup ``[port 0]`` and ``[port 1]`` and for a
+   4 port setup ``[port 0]``, ``[port 1]``, ``[port 2]`` and ``[port 3]``::
[port 0]
name=if0
@@ -745,10 +759,14 @@ Now let's examine the components of the file in detail
tx desc=2048
promiscuous=yes
- a. In this example ``name =if0`` assigned the name ``if0`` to the port. Any name can be assigned to a port.
- b. ``mac=hardware`` sets the MAC address assigned by the hardware to data from this port.
- c. ``rx desc=2048`` sets the number of available descriptors to allocate for receive packets. This can be changed and can effect performance.
- d. ``tx desc=2048`` sets the number of available descriptors to allocate for transmit packets. This can be changed and can effect performance.
+   a. In this example ``name=if0`` assigns the name ``if0`` to the port. Any
+      name can be assigned to a port.
+   b. ``mac=hardware`` sets the MAC address assigned by the hardware to data
+      from this port.
+   c. ``rx desc=2048`` sets the number of available descriptors to allocate
+      for receive packets. This can be changed and can affect performance.
+   d. ``tx desc=2048`` sets the number of available descriptors to allocate
+      for transmit packets. This can be changed and can affect performance.
e. ``promiscuous=yes`` this enables promiscuous mode for this port.
3. ``[defaults]`` - Here default operations and settings can be overwritten.::
@@ -757,35 +775,46 @@ Now let's examine the components of the file in detail
mempool size=8K
memcache size=512
- a. In this example ``mempool size=8K`` the number of mbufs per task is altered. Altering this value could effect performance. See `prox options`_ for details.
- b. ``memcache size=512`` - number of mbufs cached per core, default is 256 this is the cache_size. Altering this value could effect performance.
+   a. In this example ``mempool size=8K`` the number of mbufs per task is
+      altered. Altering this value could affect performance. See
+      `prox options`_ for details.
+ b. ``memcache size=512`` - number of mbufs cached per core, default is 256
+ this is the cache_size. Altering this value could affect performance.
-4. ``[global]`` - Here application wide setting are supported. Things like application name, start time, duration and memory configurations can be set here.
+4. ``[global]`` - Here application-wide settings are supported. Things like
+ application name, start time, duration and memory configurations can be set
+ here.
In this example.::
[global]
start time=5
name=Basic Gen
- a. ``start time=5`` Time is seconds after which average stats will be started.
+   a. ``start time=5`` Time in seconds after which average stats will be
+      started.
b. ``name=Handle L2FWD Multiflow (2x)`` Name of the configuration.
-5. ``[core 0]`` - This core is designated the master core. Every Prox application must have a master core. The master mode must be assigned to
+5. ``[core 0]`` - This core is designated the master core. Every Prox
+ application must have a master core. The master mode must be assigned to
exactly one task, running alone on one core.::
[core 0]
mode=master
-6. ``[core 1]`` - This describes the activity on core 1. Cores can be configured by means of a set of [core #] sections, where # represents either:
+6. ``[core 1]`` - This describes the activity on core 1. Cores can be
+ configured by means of a set of [core #] sections, where # represents
+ either:
- a. an absolute core number: e.g. on a 10-core, dual socket system with hyper-threading,
- cores are numbered from 0 to 39.
+ a. an absolute core number: e.g. on a 10-core, dual socket system with
+ hyper-threading, cores are numbered from 0 to 39.
- b. PROX allows a core to be identified by a core number, the letter 's', and a socket number.
- However NSB PROX is hardware agnostic (physical and virtual configurations are the same) it
- is advisable no to use physical core numbering.
+   b. PROX allows a core to be identified by a core number, the letter 's',
+      and a socket number. However, as NSB PROX is hardware agnostic (physical
+      and virtual configurations are the same), it is advisable not to use
+      physical core numbering.
- Each core can be assigned with a set of tasks, each running one of the implemented packet processing modes.::
+ Each core can be assigned with a set of tasks, each running one of the
+ implemented packet processing modes.::
[core 1]
name=none
@@ -796,20 +825,33 @@ Now let's examine the components of the file in detail
tx port=if1
a. ``name=none`` - No name assigned to the core.
- b. ``task=0`` - Each core can run a set of tasks. Starting with ``0``. Task 1 can be defined later in this core or
- can be defined in another ``[core 1]`` section with ``task=1`` later in configuration file. Sometimes running
- multiple task related to the same packet on the same physical core improves performance, however sometimes it
- is optimal to move task to a separate core. This is best decided by checking performance.
- c. ``mode=l2fwd`` - Specifies the action carried out by this task on this core. Supported modes are: acl,
- classify, drop, gredecap, greencap, ipv6_decap, ipv6_encap, l2fwd, lbnetwork, lbpos, lbqinq, nop,
- police, qinqdecapv4, qinqencapv4, qos, routing, impair, lb5tuple, mirror, unmpls, tagmpls,
- nat, decapnsh, encapnsh, gen, genl4 and lat. This code does ``l2fwd`` .. ie it does the L2FWD.
-
- d. ``dst mac=@@tester_mac1`` - The destination mac address of the packet will be set to the MAC address of ``Port 1`` of destination device. (The Traffic Generator/Verifier)
- e. ``rx port=if0`` - This specifies that the packets are received from ``Port 0`` called if0
- f. ``tx port=if1`` - This specifies that the packets are transmitted to ``Port 1`` called if1
-
- If this example we receive a packet on core on a port, carry out operation on the packet on the core and transmit it on on another port still using the same task on the same core.
+ b. ``task=0`` - Each core can run a set of tasks. Starting with ``0``.
+ Task 1 can be defined later in this core or can be defined in another
+      ``[core 1]`` section with ``task=1`` later in the configuration file.
+      Sometimes running multiple tasks related to the same packet on the same
+      physical core improves performance; however, sometimes it is optimal to
+      move a task to a separate core. This is best decided by checking
+      performance.
+ c. ``mode=l2fwd`` - Specifies the action carried out by this task on this
+ core. Supported modes are: ``acl``, ``classify``, ``drop``,
+ ``gredecap``, ``greencap``, ``ipv6_decap``, ``ipv6_encap``, ``l2fwd``,
+ ``lbnetwork``, ``lbpos``, ``lbqinq``, ``nop``, ``police``,
+ ``qinqdecapv4``, ``qinqencapv4``, ``qos``, ``routing``, ``impair``,
+ ``lb5tuple``, ``mirror``, ``unmpls``, ``tagmpls``, ``nat``,
+ ``decapnsh``, ``encapnsh``, ``gen``, ``genl4`` and ``lat``. This code
+ does ``l2fwd``. i.e. it does the L2FWD.
+
+   d. ``dst mac=@@tester_mac1`` - The destination mac address of the packet
+      will be set to the MAC address of ``Port 1`` of the destination device
+      (the Traffic Generator/Verifier).
+   e. ``rx port=if0`` - This specifies that the packets are received from
+      ``Port 0``, called if0.
+   f. ``tx port=if1`` - This specifies that the packets are transmitted to
+      ``Port 1``, called if1.
+
+   In this example we receive a packet on a port, carry out an operation on
+   the packet on the core and transmit it on another port, still using the
+   same task on the same core.
On some implementation you may wish to use multiple tasks, like this.::
@@ -829,15 +871,22 @@ Now let's examine the components of the file in detail
tx port=if0
drop=no
- In this example you can see Core 1/Task 0 called ``rx_task`` receives the packet from if0 and perform the l2fwd. However instead of sending the packet to a
- port it sends it to a core see ``tx cores=1t1``. In this case it sends it to Core 1/Task 1.
+   In this example you can see Core 1/Task 0 called ``rx_task`` receives the
+   packet from if0 and performs the l2fwd. However, instead of sending the
+   packet to a port, it sends it to a core (see ``tx cores=1t1``). In this
+   case it sends it to Core 1/Task 1.
- Core 1/Task 1 called ``l2fwd_if0``, receives the packet, not from a port but from the ring. See ``rx ring=yes``. It does not perform any operation on the packet See ``mode=none``
- and sends the packets to ``if0`` see ``tx port=if0``.
+   Core 1/Task 1, called ``l2fwd_if0``, receives the packet not from a port
+   but from the ring (see ``rx ring=yes``). It does not perform any operation
+   on the packet (see ``mode=none``) and sends the packets to ``if0`` (see
+   ``tx port=if0``).
- It is also possible to implement more complex operations be chaining multiple operations in sequence and using rings to pass packets from one core to another.
+ It is also possible to implement more complex operations by chaining
+ multiple operations in sequence and using rings to pass packets from one
+ core to another.
- In thus example we show a Broadband Network Gateway (BNG) with Quality of Service (QoS). Communication from task to task is via rings.
+ In this example, we show a Broadband Network Gateway (BNG) with Quality of
+ Service (QoS). Communication from task to task is via rings.
.. image:: images/PROX_BNG_QOS.png
:width: 1000px
@@ -848,26 +897,36 @@ Now let's examine the components of the file in detail
.. _baremetal-config-label:
-This is required for baremetal testing. It describes the IP address of the various ports, the Network devices drivers and MAC addresses and the network
+This is required for baremetal testing. It describes the IP addresses of the
+various ports, the network device drivers and MAC addresses, and the network
configuration.
-In this example we will describe a 2 port configuration. This file is the same for all 2 port NSB Prox tests on the same platforms/configuration.
+In this example we will describe a 2 port configuration. This file is the same
+for all 2 port NSB Prox tests on the same platforms/configuration.
.. image:: images/PROX_Baremetal_config.png
:width: 1000px
:alt: NSB PROX Yardstick Config
-Now lets describe the sections of the file.
-
- 1. ``TrafficGen`` - This section describes the Traffic Generator node of the test configuration. The name of the node ``trafficgen_1`` must match the node name
- in the ``Test Description File for Baremetal`` mentioned earlier. The password attribute of the test needs to be configured. All other parameters
- can remain as default settings.
- 2. ``interfaces`` - This defines the DPDK interfaces on the Traffic Generator.
- 3. ``xe0`` is DPDK Port 0. ``lspci`` and `` ./dpdk-devbind.py -s`` can be used to provide the interface information. ``netmask`` and ``local_ip`` should not be changed
- 4. ``xe1`` is DPDK Port 1. If more than 2 ports are required then ``xe1`` section needs to be repeated and modified accordingly.
- 5. ``vnf`` - This section describes the SUT of the test configuration. The name of the node ``vnf`` must match the node name in the
- ``Test Description File for Baremetal`` mentioned earlier. The password attribute of the test needs to be configured. All other parameters
- can remain as default settings
+Now let's describe the sections of the file.
+
+ 1. ``TrafficGen`` - This section describes the Traffic Generator node of the
+ test configuration. The name of the node ``trafficgen_1`` must match the
+ node name in the ``Test Description File for Baremetal`` mentioned
+ earlier. The password attribute of the test needs to be configured. All
+ other parameters can remain as default settings.
+ 2. ``interfaces`` - This defines the DPDK interfaces on the Traffic
+ Generator.
+ 3. ``xe0`` is DPDK Port 0. ``lspci`` and ``./dpdk-devbind.py -s`` can be used
+ to provide the interface information. ``netmask`` and ``local_ip`` should
+    not be changed.
+ 4. ``xe1`` is DPDK Port 1. If more than 2 ports are required then ``xe1``
+ section needs to be repeated and modified accordingly.
+ 5. ``vnf`` - This section describes the SUT of the test configuration. The
+ name of the node ``vnf`` must match the node name in the
+ ``Test Description File for Baremetal`` mentioned earlier. The password
+ attribute of the test needs to be configured. All other parameters can
+    remain as default settings.
6. ``interfaces`` - This defines the DPDK interfaces on the SUT
7. ``xe0`` - Same as 3 but for the ``SUT``.
8. ``xe1`` - Same as 4 but for the ``SUT`` also.
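+
+A trimmed sketch of such a pod file, covering only the ``TrafficGen`` node and
+one interface (all IP, MAC and PCI values are placeholders, not taken from a
+real deployment)::
+
+    nodes:
+    -
+      name: "trafficgen_1"
+      role: TrafficGen
+      ip: 10.10.10.10
+      user: root
+      password: r00t
+      interfaces:
+        xe0:
+          vpci: "0000:05:00.0"
+          local_mac: "aa:bb:cc:dd:ee:01"
+          driver: i40e
+          dpdk_port_num: 0
+          local_ip: "152.16.100.19"
+          netmask: "255.255.255.0"
+
+    # xe1 and the vnf node follow the same pattern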
@@ -877,11 +936,13 @@ Now lets describe the sections of the file.
*Grafana Dashboard*
-------------------
-The grafana dashboard visually displays the results of the tests. The steps required to produce a grafana dashboard are described here.
+The grafana dashboard visually displays the results of the tests. The steps
+required to produce a grafana dashboard are described here.
.. _yardstick-config-label:
- a. Configure ``yardstick`` to use influxDB to store test results. See file ``/etc/yardstick/yardstick.conf``.
+ a. Configure ``yardstick`` to use influxDB to store test results. See file
+ ``/etc/yardstick/yardstick.conf``.
.. image:: images/PROX_Yardstick_config.png
:width: 1000px
@@ -890,10 +951,12 @@ The grafana dashboard visually displays the results of the tests. The steps requ
1. Specify the dispatcher to use influxDB to store results.
2. "target = .. " - Specify location of influxDB to store results.
"db_name = yardstick" - name of database. Do not change
- "username = root" - username to use to store result. (Many tests are run as root)
+ "username = root" - username to use to store result. (Many tests are
+ run as root)
"password = ... " - Please set to root user password
- b. Deploy InfludDB & Grafana. See how to Deploy InfluxDB & Grafana. See `grafana deployment`_.
+ b. Deploy InfluxDB & Grafana. See how to deploy InfluxDB & Grafana in
+    `grafana deployment`_.
c. Generate the test data. Run the tests as follows .::
yardstick --debug task start tc_prox_<context>_<test>-ports.yaml
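+
+Taken together, the relevant part of ``/etc/yardstick/yardstick.conf`` from
+step a. looks roughly like this (the influxDB host is a placeholder)::
+
+    [DEFAULT]
+    debug = False
+    dispatcher = influxdb
+
+    [dispatcher_influxdb]
+    timeout = 5
+    target = http://<influxdb_host>:8086
+    db_name = yardstick
+    username = root
+    password = <root user password>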
@@ -910,7 +973,8 @@ How to run NSB Prox Test on an baremetal environment
In order to run the NSB PROX test.
- 1. Install NSB on Traffic Generator node and Prox in SUT. See `NSB Installation`_
+ 1. Install NSB on Traffic Generator node and Prox in SUT. See
+ `NSB Installation`_
2. To enter container::
@@ -922,8 +986,8 @@ In order to run the NSB PROX test.
cd /home/opnfv/repos/yardstick/samples/vnf_samples/nsut/prox
- b. Install prox-baremetal-2.yam and prox-baremetal-4.yaml for that topology
- into this directory as per baremetal-config-label_
+ b. Install prox-baremetal-2.yaml and prox-baremetal-4.yaml for that
+ topology into this directory as per baremetal-config-label_
c. Install and configure ``yardstick.conf`` ::
@@ -971,7 +1035,8 @@ Here is a list of frequently asked questions.
*NSB Prox does not work on Baremetal, How do I resolve this?*
-------------------------------------------------------------
-If PROX NSB does not work on baremetal, problem is either in network configuration or test file.
+If PROX NSB does not work on baremetal, the problem is either in the network
+configuration or the test file.
*Solution*
@@ -1011,8 +1076,8 @@ If PROX NSB does not work on baremetal, problem is either in network configurati
See ``Link detected`` if ``yes`` .... Cable is good. If ``no`` you have an issue with your cable/port.
-2. If existing baremetal works then issue is with your test. Check the traffic generator gen_<test>-<ports>.cfg to ensure
- it is producing a valid packet.
+2. If existing baremetal works then issue is with your test. Check the traffic
+ generator gen_<test>-<ports>.cfg to ensure it is producing a valid packet.
*How do I debug NSB Prox on Baremetal?*
---------------------------------------
@@ -1033,7 +1098,8 @@ If PROX NSB does not work on baremetal, problem is either in network configurati
cd
/opt/nsb_bin/prox -f /tmp/handle_<test>-<ports>.cfg
-4. Now let's examine the Generator Output. In this case the output of gen_l2fwd-4.cfg.
+4. Now let's examine the Generator Output. In this case the output of
+ ``gen_l2fwd-4.cfg``.
.. image:: images/PROX_Gen_GUI.png
:width: 1000px
@@ -1048,10 +1114,12 @@ If PROX NSB does not work on baremetal, problem is either in network configurati
It appears what is transmitted is received.
.. Caution::
- The number of packets MAY not exactly match because the ports are read in sequence.
+ The number of packets MAY not exactly match because the ports are read in
+ sequence.
.. Caution::
- What is transmitted on PORT X may not always be received on same port. Please check the Test scenario.
+ What is transmitted on PORT X may not always be received on the same port.
+ Please check the Test scenario.
5. Now let's examine the SUT Output
@@ -1083,17 +1151,18 @@ If PROX NSB does not work on baremetal, problem is either in network configurati
*NSB Prox works on Baremetal but not in Openstack. How do I resolve this?*
--------------------------------------------------------------------------
-NSB Prox on Baremetal is a lot more forgiving than NSB Prox on Openstack. A badly
-formed packed may still work with PROX on Baremetal. However on
+NSB Prox on Baremetal is a lot more forgiving than NSB Prox on Openstack. A
+badly formed packet may still work with PROX on Baremetal. However on
Openstack the packet must be correct and all fields of the header correct.
-Eg A packet with an invalid Protocol ID would still work in Baremetal
-but this packet would be rejected by openstack.
+E.g. A packet with an invalid Protocol ID would still work in Baremetal but
+this packet would be rejected by openstack.
*Solution*
1. Check the validity of the packet.
2. Use a known good packet in your test
- 3. If using ``Random`` fields in the traffic generator, disable them and retry.
+ 3. If using ``Random`` fields in the traffic generator, disable them and
+ retry.
*How do I debug NSB Prox on Openstack?*
@@ -1111,7 +1180,8 @@ but this packet would be rejected by openstack.
3. Install openstack credentials.
- Depending on your openstack deployment, the location of these credentials may vary.
+ Depending on your openstack deployment, the location of these credentials
+ may vary.
On this platform I do this via::
scp root@10.237.222.55:/etc/kolla/admin-openrc.sh .
@@ -1127,8 +1197,8 @@ but this packet would be rejected by openstack.
b. Get the Floating IP of the Traffic Generator & SUT
- This generates a lot of information. Please not the floating IP of the VNF and
- the Traffic Generator.
+ This generates a lot of information. Please note the floating IP of the
+ VNF and the Traffic Generator.
.. image:: images/PROX_Openstack_stack_show_a.png
:width: 1000px
@@ -1215,7 +1285,8 @@ If it fails due to ::
Missing value auth-url required for auth plugin password
-Check your shell environment for Openstack variables. One of them should contain the authentication URL ::
+Check your shell environment for Openstack variables. One of them should
+contain the authentication URL ::
OS_AUTH_URL=``https://192.168.72.41:5000/v3``
@@ -1239,16 +1310,16 @@ Result ::
and visible.
-If the Openstack Cli appears to hang, then verify the proxys and no_proxy are set correctly.
-They should be similar to ::
+If the OpenStack CLI appears to hang, then verify the proxies and ``no_proxy``
+are set correctly. They should be similar to ::
- FTP_PROXY="http://proxy.ir.intel.com:911/"
- HTTPS_PROXY="http://proxy.ir.intel.com:911/"
- HTTP_PROXY="http://proxy.ir.intel.com:911/"
+ FTP_PROXY="http://<your_proxy>:<port>/"
+ HTTPS_PROXY="http://<your_proxy>:<port>/"
+ HTTP_PROXY="http://<your_proxy>:<port>/"
NO_PROXY="localhost,127.0.0.1,10.237.222.55,10.237.223.80,10.237.222.134,.ir.intel.com"
- ftp_proxy="http://proxy.ir.intel.com:911/"
- http_proxy="http://proxy.ir.intel.com:911/"
- https_proxy="http://proxy.ir.intel.com:911/"
+ ftp_proxy="http://<your_proxy>:<port>/"
+ http_proxy="http://<your_proxy>:<port>/"
+ https_proxy="http://<your_proxy>:<port>/"
no_proxy="localhost,127.0.0.1,10.237.222.55,10.237.223.80,10.237.222.134,.ir.intel.com"
Where
@@ -1256,8 +1327,6 @@ Where
1) 10.237.222.55 = IP Address of deployment node
2) 10.237.223.80 = IP Address of Controller node
3) 10.237.222.134 = IP Address of Compute Node
- 4) ir.intel.com = local no proxy
-
*How to Understand the Grafana output?*
---------------------------------------
@@ -1280,48 +1349,48 @@ Where
A. Test Parameters - Test interval, Duration, Tolerated Loss and Test Precision
-B. Overall No of packets send and received during test
+B. No. of packets sent and received during test
C. Generator Stats - packets sent, received and attempted by Generator
-D. Packets Size
-
-E. No of packets received by SUT
-
-F. No of packets forwarded by SUT
-
-G. This is the number of packets sent by the generator per port, for each interval.
+D. Packet size
-H. This is the number of packets received by the generator per port, for each interval.
+E. No. of packets received by SUT
-I. This is the number of packets send and received by the generator and lost by the SUT
- that meet the success criteria
+F. No. of packets forwarded by SUT
-J. This is the changes the Percentage of Line Rate used over a test, The MAX and the
- MIN should converge to within the interval specified as the ``test-precision``.
+G. No. of packets sent by the generator per port, for each interval.
-K. This is the packets Size supported during test. If "N/A" appears in any field the result has not been decided.
+H. No. of packets received by the generator per port, for each interval.
-L. This is the calculated throughput in MPPS(Million Packets Per second) for this line rate.
+I. No. of packets sent and received by the generator and lost by the SUT that
+ meet the success criteria
-M. This is the actual No, of packets sent by the generator in MPPS
+J. The change in the Percentage of Line Rate used over a test. The MAX and
+   the MIN should converge to within the interval specified as the
+   ``test-precision``.
-N. This is the actual No. of packets received by the generator in MPPS
+K. Packet size supported during test. If *N/A* appears in any field the
+ result has not been decided.
-O. This is the total No. of packets sent by SUT.
+L. Calculated throughput in MPPS (Million Packets Per second) for this line
+ rate.
-P. This is the total No. of packets received by the SUT
+M. No. of packets sent by the generator in MPPS
-Q. This is the total No. of packets dropped. (These packets were sent by the generator but not
- received back by the generator, these may be dropped by the SUT or the Generator)
+N. No. of packets received by the generator in MPPS
-R. This is the tolerated no of packets that can be dropped.
+O. No. of packets sent by SUT.
-S. This is the test Throughput in Gbps
+P. No. of packets received by the SUT
-T. This is the Latencey per Port
+Q. Total no. of dropped packets -- Packets sent but not received back by the
+   generator; these may be dropped by the SUT or the generator.
-U. This is the CPU Utilization
+R. The tolerated no. of dropped packets.
+S. Test throughput in Gbps
+T. Latency per Port
+U. CPU Utilization
diff --git a/docs/testing/user/userguide/01-introduction.rst b/docs/testing/user/userguide/01-introduction.rst
index 494b1ef3d..5fc2e8d0f 100755
--- a/docs/testing/user/userguide/01-introduction.rst
+++ b/docs/testing/user/userguide/01-introduction.rst
@@ -9,8 +9,8 @@ Introduction
**Welcome to Yardstick's documentation !**
-.. _Pharos: https://wiki.opnfv.org/pharos
-.. _Yardstick: https://wiki.opnfv.org/yardstick
+.. _Pharos: https://wiki.opnfv.org/display/pharos
+.. _Yardstick: https://wiki.opnfv.org/display/yardstick
.. _Presentation: https://wiki.opnfv.org/download/attachments/2925202/opnfv_summit_-_yardstick_project.pdf?version=1&modificationDate=1458848320000&api=v2
Yardstick_ is an OPNFV Project.
@@ -70,7 +70,7 @@ This document consists of the following chapters:
Yardstick - Network service benchmarking to test real world usecase for a
given VNF.
-* Chapter :doc:`13-nsb_installation` provides instructions to install
+* Chapter :doc:`13-nsb-installation` provides instructions to install
*Yardstick - Network Service Benchmarking (NSB) testing*.
* Chapter :doc:`14-nsb-operation` provides information on running *NSB*
@@ -83,4 +83,4 @@ Contact Yardstick
Feedback? `Contact us`_
-.. _Contact us: mailto:opnfv-users@lists.opnfv.org&subject="[yardstick]"
+.. _Contact us: mailto:opnfv-users@lists.opnfv.org?subject="#yardstick"
diff --git a/docs/testing/user/userguide/03-architecture.rst b/docs/testing/user/userguide/03-architecture.rst
index 886631510..62250d6a3 100755
--- a/docs/testing/user/userguide/03-architecture.rst
+++ b/docs/testing/user/userguide/03-architecture.rst
@@ -243,26 +243,27 @@ Yardstick Directory structure
with support for different installers.
*docs/* - All documentation is stored here, such as configuration guides,
- user guides and Yardstick descriptions.
+ user guides and Yardstick test case descriptions.
*etc/* - Used for test cases requiring specific POD configurations.
*samples/* - test case samples are stored here, most of all scenario and
- feature's samples are shown in this directory.
+ feature samples are shown in this directory.
-*tests/* - Here both Yardstick internal tests (*functional/* and *unit/*) as
- well as the test cases run to verify the NFVI (*opnfv/*) are stored.
- Also configurations of what to run daily and weekly at the different
- PODs is located here.
+*tests/* - The test cases run to verify the NFVI (*opnfv/*) are stored here.
+ The configurations of what to run daily and weekly at the different
+ PODs are also located here.
-*tools/* - Currently contains tools to build image for VMs which are deployed
- by Heat. Currently contains how to build the yardstick-trusty-server
- image with the different tools that are needed from within the
- image.
+*tools/* - Contains tools to build the image for VMs which are deployed by
+           Heat. Currently contains instructions on how to build the
+           yardstick-image with the different tools that are needed from
+           within the image.
*plugin/* - Plug-in configuration files are stored here.
-*yardstick/* - Contains the internals of Yardstick: Runners, Scenario, Contexts,
- CLI parsing, keys, plotting tools, dispatcher, plugin
+*yardstick/* - Contains the internals of Yardstick: :term:`Runners <runner>`,
+ :term:`Scenarios <scenario>`, :term:`Contexts <context>`, CLI
+ parsing, keys, plotting tools, dispatcher, plugin
install/remove scripts and so on.
+*yardstick/tests* - The Yardstick internal tests (*functional/* and *unit/*)
+ are stored here.
diff --git a/docs/testing/user/userguide/04-installation.rst b/docs/testing/user/userguide/04-installation.rst
index a4846230e..2f8175c25 100644
--- a/docs/testing/user/userguide/04-installation.rst
+++ b/docs/testing/user/userguide/04-installation.rst
@@ -3,6 +3,17 @@
.. http://creativecommons.org/licenses/by/4.0
.. (c) OPNFV, Ericsson AB, Huawei Technologies Co.,Ltd and others.
+..
+ Convention for heading levels in Yardstick documentation:
+
+ ======= Heading 0 (reserved for the title in a document)
+ ------- Heading 1
+ ~~~~~~~ Heading 2
+ +++++++ Heading 3
+ ''''''' Heading 4
+
+ Avoid deeper levels because they do not render well.
+
======================
Yardstick Installation
======================
@@ -444,6 +455,115 @@ These configuration files can be found in the ``samples`` directory.
Default location for the output is ``/tmp/yardstick.out``.
+Automatic installation of Yardstick using ansible
+-------------------------------------------------
+
+Automatic installation can be used as an alternative to the manual one.
+Yardstick can be installed on bare metal or in a container. The Yardstick
+container can be either pulled or built.
+
+Bare metal installation
+^^^^^^^^^^^^^^^^^^^^^^^
+
+Use the ansible script ``install.yaml`` to install Yardstick on an Ubuntu
+server:
+
+.. code-block:: console
+
+ ansible-playbook -i install-inventory.ini install.yaml \
+ -e YARDSTICK_DIR=<path to Yardstick folder>
+
+.. note:: By default ``INSTALLATION_MODE`` is ``baremetal``.
+
+.. note:: By default Ubuntu 16.04 (xenial) is used. It can be changed to
+   Ubuntu 18.04 (bionic) by passing the ``-e OS_RELEASE=bionic`` parameter.
+
+.. note:: To install Yardstick in a virtual environment, pass the parameter
+   ``-e VIRTUAL_ENVIRONMENT=True``.
+
+To build the Yardstick NSB image, pass ``IMAGE_PROPERTY=nsb`` as an input
+parameter:
+
+.. code-block:: console
+
+ ansible-playbook -i install-inventory.ini install.yaml \
+ -e IMAGE_PROPERTY=nsb \
+ -e YARDSTICK_DIR=<path to Yardstick folder>
+
+.. note:: In this mode either the Yardstick image or the SampleVNF images
+   will be built. The image type is defined by the ``IMAGE_PROPERTY``
+   parameter. By default the Yardstick image is built.
+
+Container installation
+^^^^^^^^^^^^^^^^^^^^^^
+
+Use the ansible script ``install.yaml`` to pull or build the Yardstick
+container. To pull the Yardstick image and start the container, run:
+
+.. code-block:: console
+
+ ansible-playbook -i install-inventory.ini install.yaml \
+ -e YARDSTICK_DIR=<path to Yardstick folder> \
+ -e INSTALLATION_MODE=container_pull
+
+.. note:: In this mode either the Yardstick image or the SampleVNF images
+   will be built. The image type is defined by the ``IMG_PROPERTY`` variable
+   in the file ``ansible/group_vars/all.yml``. By default the Yardstick image
+   is built.
+
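+For reference, a minimal, illustrative sketch of that variable in
+``ansible/group_vars/all.yml`` is shown below; the value is only an example
+and the file itself is the authoritative source:
+
+.. code-block:: YAML
+
+    # excerpt from ansible/group_vars/all.yml (illustrative only)
+    IMG_PROPERTY: nsb    # or "normal" for the plain Yardstick image
+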
+.. note:: Open question: How can it be determined whether the Docker image is
+   built on Ubuntu 16.04 or 18.04? Is a separate tag needed?
+
+To build the Yardstick image, run:
+
+.. code-block:: console
+
+ ansible-playbook -i install-inventory.ini install.yaml \
+ -e YARDSTICK_DIR=<path to Yardstick folder> \
+ -e INSTALLATION_MODE=container
+
+.. note:: In this mode neither the Yardstick image nor the SampleVNF images
+   will be built.
+
+.. note:: By default Ubuntu 16.04 (xenial) is used. It can be changed to
+   Ubuntu 18.04 (bionic) by passing the ``-e OS_RELEASE=bionic`` parameter.
+
+Parameters for ``install.yaml``
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Description of the parameters used with the ``install.yaml`` script:
+
+ +-------------------------+-------------------------------------------------+
+ | Parameters | Detail |
+ +=========================+=================================================+
+ | -i install-inventory.ini| Installs package dependency to remote servers |
+ | | Mandatory parameter |
+ | | By default no remote servers are provided |
+ | | Needed packages will be installed on localhost |
+ +-------------------------+-------------------------------------------------+
+ | -e YARDSTICK_DIR | Path to Yardstick folder |
+ | | Mandatory parameter |
+ +-------------------------+-------------------------------------------------+
+ | -e INSTALLATION_MODE | baremetal: Yardstick is installed to the bare |
+ | | metal |
+ | | Default parameter |
+ | +-------------------------------------------------+
+ | | container: Yardstick is installed in container |
+ | | Container is built from Dockerfile |
+ | +-------------------------------------------------+
+ | | container_pull: Yardstick is installed in |
+ | | container |
+ | | Container is pulled from docker hub |
+ +-------------------------+-------------------------------------------------+
+ | -e OS_RELEASE | xenial or bionic: Ubuntu version to be used |
+ | | Default is Ubuntu 16.04 (xenial) |
+ +-------------------------+-------------------------------------------------+
+ | -e IMAGE_PROPERTY | normal or nsb: Type of the VM image to be built |
+ | | Default image is Yardstick |
+ +-------------------------+-------------------------------------------------+
+ | -e VIRTUAL_ENVIRONMENT | False or True: Whether install in virtualenv |
+ | | Default is False |
+ +-------------------------+-------------------------------------------------+
+
+
Deploy InfluxDB and Grafana using Docker
----------------------------------------
@@ -455,17 +575,17 @@ Grafana to display data in the following sections.
Automatic deployment of InfluxDB and Grafana containers (**recommended**)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-Firstly, enter the Yardstick container::
+1. Enter the Yardstick container::
- sudo -EH docker exec -it yardstick /bin/bash
+ sudo -EH docker exec -it yardstick /bin/bash
-Secondly, create InfluxDB container and configure with the following command::
+2. Create InfluxDB container and configure with the following command::
- yardstick env influxdb
+ yardstick env influxdb
-Thirdly, create and configure Grafana container::
+3. Create and configure Grafana container::
- yardstick env grafana
+ yardstick env grafana
Then you can run a test case and visit http://host_ip:1948
(``admin``/``admin``) to see the results.
@@ -493,21 +613,21 @@ Run influxDB::
sudo -EH docker run -d --name influxdb \
-p 8083:8083 -p 8086:8086 --expose 8090 --expose 8099 \
tutum/influxdb
- docker exec -it influxdb bash
+ docker exec -it influxdb influx
Configure influxDB::
- influx
- >CREATE USER root WITH PASSWORD 'root' WITH ALL PRIVILEGES
- >CREATE DATABASE yardstick;
- >use yardstick;
- >show MEASUREMENTS;
+ > CREATE USER root WITH PASSWORD 'root' WITH ALL PRIVILEGES
+ > CREATE DATABASE yardstick;
+ > use yardstick;
+ > show MEASUREMENTS;
+ > quit
Run Grafana::
sudo -EH docker run -d --name grafana -p 1948:3000 grafana/grafana
-Log on http://{YOUR_IP_HERE}:1948 using ``admin``/``admin`` and configure
+Log on to ``http://{YOUR_IP_HERE}:1948`` using ``admin``/``admin`` and configure
database resource to be ``{YOUR_IP_HERE}:8086``.
.. image:: images/Grafana_config.png
@@ -520,7 +640,7 @@ Configure ``yardstick.conf``::
sudo cp etc/yardstick/yardstick.conf.sample /etc/yardstick/yardstick.conf
sudo vi /etc/yardstick/yardstick.conf
-Modify ``yardstick.conf``::
+Modify ``yardstick.conf`` to add the ``influxdb`` dispatcher::
[DEFAULT]
debug = True
@@ -533,7 +653,7 @@ Modify ``yardstick.conf``::
username = root
password = root
-Now you can run Yardstick test cases and store the results in influxDB.
+Now Yardstick will store results in InfluxDB when you run a testcase.
Deploy InfluxDB and Grafana directly in Ubuntu (**Todo**)
diff --git a/docs/testing/user/userguide/05-operation.rst b/docs/testing/user/userguide/05-operation.rst
index f390d1643..82539c97f 100644
--- a/docs/testing/user/userguide/05-operation.rst
+++ b/docs/testing/user/userguide/05-operation.rst
@@ -183,7 +183,7 @@ Combining these elements together, a sample Heat context config looks like:
.. literalinclude::
../../../../yardstick/tests/integration/dummy-scenario-heat-context.yaml
:start-after: ---
- :empahsise-lines: 14-
+ :emphasize-lines: 14-
Using exisiting HOT Templates
'''''''''''''''''''''''''''''
diff --git a/docs/testing/user/userguide/08-grafana.rst b/docs/testing/user/userguide/08-grafana.rst
index 29bc23a08..020a08a65 100644
--- a/docs/testing/user/userguide/08-grafana.rst
+++ b/docs/testing/user/userguide/08-grafana.rst
@@ -36,7 +36,7 @@ of TC002.
.. image:: images/TC002.png
:width: 800px
- :alt:TC002 dashboard
+ :alt: TC002 dashboard
For each test case dashboard. On the top left, we have a dashboard selection,
you can switch to different test cases using this pull-down menu.
diff --git a/docs/testing/user/userguide/09-api.rst b/docs/testing/user/userguide/09-api.rst
index f0ae3980b..1a896699b 100644
--- a/docs/testing/user/userguide/09-api.rst
+++ b/docs/testing/user/userguide/09-api.rst
@@ -433,7 +433,7 @@ Example::
/api/v2/yardstick/tasks/<task_id>
---------------------------------
+---------------------------------
Description: This API is used to do some work related to yardstick tasks. For Euphrates, it supports:
diff --git a/docs/testing/user/userguide/12-nsb-overview.rst b/docs/testing/user/userguide/12-nsb-overview.rst
index 71a5c1130..7b0d46804 100644
--- a/docs/testing/user/userguide/12-nsb-overview.rst
+++ b/docs/testing/user/userguide/12-nsb-overview.rst
@@ -10,7 +10,7 @@ Network Services Benchmarking (NSB)
Abstract
========
-.. _Yardstick: https://wiki.opnfv.org/yardstick
+.. _Yardstick: https://wiki.opnfv.org/display/yardstick
This chapter provides an overview of the NSB, a contribution to OPNFV
Yardstick_ from Intel.
diff --git a/docs/testing/user/userguide/13-nsb-installation.rst b/docs/testing/user/userguide/13-nsb-installation.rst
index fb68fbf21..973d56628 100644
--- a/docs/testing/user/userguide/13-nsb-installation.rst
+++ b/docs/testing/user/userguide/13-nsb-installation.rst
@@ -1,14 +1,25 @@
.. This work is licensed under a Creative Commons Attribution 4.0 International
.. License.
.. http://creativecommons.org/licenses/by/4.0
-.. (c) OPNFV, 2016-2017 Intel Corporation.
+.. (c) OPNFV, 2016-2018 Intel Corporation.
+
+..
+ Convention for heading levels in Yardstick documentation:
+
+ ======= Heading 0 (reserved for the title in a document)
+ ------- Heading 1
+ ~~~~~~~ Heading 2
+ +++++++ Heading 3
+ ''''''' Heading 4
+
+ Avoid deeper levels because they do not render well.
=====================================
Yardstick - NSB Testing -Installation
=====================================
Abstract
-========
+--------
The Network Service Benchmarking (NSB) extends the yardstick framework to do
VNF characterization and benchmarking in three different execution
@@ -27,7 +38,7 @@ The steps needed to run Yardstick with NSB testing are:
Prerequisites
-=============
+-------------
Refer chapter Yardstick Installation for more information on yardstick
prerequisites
@@ -46,7 +57,7 @@ Several prerequisites are needed for Yardstick (VNF testing):
* intel-cmt-cat
Hardware & Software Ingredients
--------------------------------
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
SUT requirements:
@@ -85,7 +96,7 @@ Boot and BIOS settings:
Install Yardstick (NSB Testing)
-===============================
+-------------------------------
Download the source code and install Yardstick from it
@@ -168,8 +179,12 @@ It will also automatically download all the packages needed for NSB Testing
setup. Refer chapter :doc:`04-installation` for more on docker
**Install Yardstick using Docker (recommended)**
-System Topology:
-================
+Another way to execute an installation for a Bare-Metal or a Standalone
+context is to use the ansible script ``install.yaml``. Refer to chapter
+:doc:`04-installation` for more details.
+
+System Topology
+---------------
.. code-block:: console
@@ -184,10 +199,10 @@ System Topology:
Environment parameters and credentials
-======================================
+--------------------------------------
Config yardstick conf
----------------------
+~~~~~~~~~~~~~~~~~~~~~
If user did not run 'yardstick env influxdb' inside the container, which will
generate correct ``yardstick.conf``, then create the config file manually (run
@@ -218,11 +233,11 @@ Add trex_path, trex_client_lib and bin_path in 'nsb' section.
trex_client_lib=/opt/nsb_bin/trex_client/stl
Run Yardstick - Network Service Testcases
-=========================================
+-----------------------------------------
NS testing - using yardstick CLI
---------------------------------
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
See :doc:`04-installation`
@@ -235,13 +250,13 @@ NS testing - using yardstick CLI
yardstick --debug task start yardstick/samples/vnf_samples/nsut/<vnf>/<test case>
Network Service Benchmarking - Bare-Metal
-=========================================
+-----------------------------------------
Bare-Metal Config pod.yaml describing Topology
-----------------------------------------------
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Bare-Metal 2-Node setup
-^^^^^^^^^^^^^^^^^^^^^^^
++++++++++++++++++++++++
.. code-block:: console
+----------+ +----------+
@@ -254,7 +269,7 @@ Bare-Metal 2-Node setup
trafficgen_1 vnf
Bare-Metal 3-Node setup - Correlated Traffic
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+++++++++++++++++++++++++++++++++++++++++++++
.. code-block:: console
+----------+ +----------+ +------------+
@@ -269,7 +284,7 @@ Bare-Metal 3-Node setup - Correlated Traffic
Bare-Metal Config pod.yaml
---------------------------
+~~~~~~~~~~~~~~~~~~~~~~~~~~
Before executing Yardstick test cases, make sure that pod.yaml reflects the
topology and update all the required fields.::
@@ -344,13 +359,13 @@ topology and update all the required fields.::
Network Service Benchmarking - Standalone Virtualization
-========================================================
+--------------------------------------------------------
SR-IOV
-------
+~~~~~~
SR-IOV Pre-requisites
-^^^^^^^^^^^^^^^^^^^^^
++++++++++++++++++++++
On Host, where VM is created:
a) Create and configure a bridge named ``br-int`` for VM to connect to external network.
@@ -421,10 +436,10 @@ On Host, where VM is created:
SR-IOV Config pod.yaml describing Topology
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+++++++++++++++++++++++++++++++++++++++++++
-SR-IOV 2-Node setup:
-^^^^^^^^^^^^^^^^^^^^
+SR-IOV 2-Node setup
++++++++++++++++++++
.. code-block:: console
+--------------------+
@@ -452,7 +467,7 @@ SR-IOV 2-Node setup:
SR-IOV 3-Node setup - Correlated Traffic
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+++++++++++++++++++++++++++++++++++++++++
.. code-block:: console
+--------------------+
@@ -488,7 +503,7 @@ topology and update all the required fields.
.. note:: Update all the required fields like ip, user, password, pcis, etc...
SR-IOV Config pod_trex.yaml
-^^^^^^^^^^^^^^^^^^^^^^^^^^^
++++++++++++++++++++++++++++
.. code-block:: YAML
@@ -517,7 +532,7 @@ SR-IOV Config pod_trex.yaml
local_mac: "00:00.00:00:00:02"
SR-IOV Config host_sriov.yaml
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
++++++++++++++++++++++++++++++
.. code-block:: YAML
@@ -533,7 +548,7 @@ SR-IOV testcase update:
``<yardstick>/samples/vnf_samples/nsut/vfw/tc_sriov_rfc2544_ipv4_1rule_1flow_64B_trex.yaml``
Update "contexts" section
-"""""""""""""""""""""""""
+'''''''''''''''''''''''''
.. code-block:: YAML
@@ -578,10 +593,10 @@ Update "contexts" section
OVS-DPDK
---------
+~~~~~~~~
OVS-DPDK Pre-requisites
-^^^^^^^^^^^^^^^^^^^^^^^
++++++++++++++++++++++++
On Host, where VM is created:
a) Create and configure a bridge named ``br-int`` for VM to connect to external network.
@@ -655,11 +670,10 @@ On Host, where VM is created:
OVS-DPDK Config pod.yaml describing Topology
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+++++++++++++++++++++++++++++++++++++++++++++
OVS-DPDK 2-Node setup
-^^^^^^^^^^^^^^^^^^^^^
-
++++++++++++++++++++++
.. code-block:: console
@@ -689,7 +703,7 @@ OVS-DPDK 2-Node setup
OVS-DPDK 3-Node setup - Correlated Traffic
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+++++++++++++++++++++++++++++++++++++++++++
.. code-block:: console
@@ -729,7 +743,7 @@ topology and update all the required fields.
.. note:: Update all the required fields like ip, user, password, pcis, etc...
OVS-DPDK Config pod_trex.yaml
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
++++++++++++++++++++++++++++++
.. code-block:: YAML
@@ -757,7 +771,7 @@ OVS-DPDK Config pod_trex.yaml
local_mac: "00:00.00:00:00:02"
OVS-DPDK Config host_ovs.yaml
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
++++++++++++++++++++++++++++++
.. code-block:: YAML
@@ -773,7 +787,7 @@ ovs_dpdk testcase update:
``<yardstick>/samples/vnf_samples/nsut/vfw/tc_ovs_rfc2544_ipv4_1rule_1flow_64B_trex.yaml``
Update "contexts" section
-"""""""""""""""""""""""""
+'''''''''''''''''''''''''
.. code-block:: YAML
@@ -828,7 +842,7 @@ Update "contexts" section
Network Service Benchmarking - OpenStack with SR-IOV support
-============================================================
+------------------------------------------------------------
This section describes how to run a Sample VNF test case, using Heat context,
with SR-IOV. It also covers how to install OpenStack in Ubuntu 16.04, using
@@ -836,7 +850,7 @@ DevStack, with SR-IOV support.
Single node OpenStack setup with external TG
---------------------------------------------
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. code-block:: console
@@ -867,7 +881,7 @@ Single node OpenStack setup with external TG
Host pre-configuration
-^^^^^^^^^^^^^^^^^^^^^^
+++++++++++++++++++++++
.. warning:: The following configuration requires sudo access to the system. Make
sure that your user have the access.
@@ -922,7 +936,7 @@ Setup system proxy (if needed). Add the following configuration into the
``/etc/environment`` file:
.. note:: The proxy server name/port and IPs should be changed according to
- actuall/current proxy configuration in the lab.
+ actual/current proxy configuration in the lab.
.. code:: bash
@@ -967,7 +981,7 @@ Setup SR-IOV ports on the host:
DevStack installation
-^^^^^^^^^^^^^^^^^^^^^
++++++++++++++++++++++
Use official `Devstack <https://docs.openstack.org/devstack/pike/>`_
documentation to install OpenStack on a host. Please note, that stable
@@ -989,7 +1003,7 @@ Start the devstack installation on a host.
TG host configuration
-^^^^^^^^^^^^^^^^^^^^^
++++++++++++++++++++++
Yardstick automatically install and configure Trex traffic generator on TG
host based on provided POD file (see below). Anyway, it's recommended to check
@@ -998,7 +1012,7 @@ the manual at https://trex-tgn.cisco.com/trex/doc/trex_manual.html.
Run the Sample VNF test case
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+++++++++++++++++++++++++++++
There is an example of Sample VNF test case ready to be executed in an
OpenStack environment with SR-IOV support: ``samples/vnf_samples/nsut/vfw/
@@ -1023,7 +1037,7 @@ context using steps described in `NS testing - using yardstick CLI`_ section.
Multi node OpenStack TG and VNF setup (two nodes)
--------------------------------------------------
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. code-block:: console
@@ -1054,14 +1068,14 @@ Multi node OpenStack TG and VNF setup (two nodes)
Controller/Compute pre-configuration
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+++++++++++++++++++++++++++++++++++++
Pre-configuration of the controller and compute hosts are the same as
described in `Host pre-configuration`_ section. Follow the steps in the section.
DevStack configuration
-^^^^^^^^^^^^^^^^^^^^^^
+++++++++++++++++++++++
Use official `Devstack <https://docs.openstack.org/devstack/pike/>`_
documentation to install OpenStack on a host. Please note, that stable
@@ -1088,7 +1102,7 @@ Start the devstack installation on the controller and compute hosts.
Run the sample vFW TC
-^^^^^^^^^^^^^^^^^^^^^
++++++++++++++++++++++
Install yardstick using `Install Yardstick (NSB Testing)`_ steps for OpenStack
context.
@@ -1105,10 +1119,10 @@ and the following yardtick command line arguments:
Enabling other Traffic generator
-================================
+--------------------------------
IxLoad
-^^^^^^
+~~~~~~
1. Software needed: IxLoadAPI ``<IxLoadTclApi verson>Linux64.bin.tgz`` and
``<IxOS version>Linux64.bin.tar.gz`` (Download from ixia support site)
@@ -1149,7 +1163,7 @@ IxLoad
``<repo>/samples/vnf_samples/nsut/vfw/tc_baremetal_http_ixload_1b_Requests-65000_Concurrency.yaml``
IxNetwork
----------
+~~~~~~~~~
IxNetwork testcases use IxNetwork API Python Bindings module, which is
installed as part of the requirements of the project.
@@ -1178,3 +1192,52 @@ installed as part of the requirements of the project.
3. Execute testcase in samplevnf folder e.g.
``<repo>/samples/vnf_samples/nsut/vfw/tc_baremetal_rfc2544_ipv4_1rule_1flow_64B_ixia.yaml``
+
+Spirent Landslide
+-----------------
+
+In order to use Spirent Landslide for vEPC testcases, some dependencies have
+to be preinstalled and properly configured.
+
+- Java
+
+  A 32-bit Java installation is required for the Spirent Landslide Tcl API.
+
+ | ``$ sudo apt-get install openjdk-8-jdk:i386``
+
+ .. important::
+    Make sure ``LD_LIBRARY_PATH`` points to the 32-bit JRE. For more details,
+    check the `Linux Troubleshooting <http://TAS_HOST_IP/tclapiinstall.html#trouble>`_
+    section of the installation instructions.
+
+- LsApi (Tcl API module)
+
+  Follow the Landslide documentation for detailed instructions on the Linux
+  installation of the Tcl API and its dependencies:
+  ``http://TAS_HOST_IP/tclapiinstall.html``.
+  Only steps 1-5 are required for working with the LsApi Python wrapper.
+
+  .. note:: After installation make sure your API home path is included in
+     the ``PYTHONPATH`` environment variable.
+
+  .. important::
+    The current version of the LsApi module has an issue with reading
+    ``LD_LIBRARY_PATH``. For the LsApi module to initialize correctly, the
+    following lines (184-186) in lsapi.py
+
+ .. code-block:: python
+
+ ldpath = os.environ.get('LD_LIBRARY_PATH', '')
+ if ldpath == '':
+ environ['LD_LIBRARY_PATH'] = environ['LD_LIBRARY_PATH'] + ':' + ldpath
+
+ should be changed to:
+
+ .. code-block:: python
+
+ ldpath = os.environ.get('LD_LIBRARY_PATH', '')
+ if not ldpath == '':
+ environ['LD_LIBRARY_PATH'] = environ['LD_LIBRARY_PATH'] + ':' + ldpath
+
+.. note:: The Spirent Landslide Tcl software package needs to be updated
+   whenever the user upgrades to a new version of the Spirent Landslide
+   software.
diff --git a/docs/testing/user/userguide/14-nsb-operation.rst b/docs/testing/user/userguide/14-nsb-operation.rst
index a5f3a0cf6..c96155804 100644
--- a/docs/testing/user/userguide/14-nsb-operation.rst
+++ b/docs/testing/user/userguide/14-nsb-operation.rst
@@ -1,7 +1,7 @@
.. This work is licensed under a Creative Commons Attribution 4.0 International
.. License.
.. http://creativecommons.org/licenses/by/4.0
-.. (c) OPNFV, 2016-2017 Intel Corporation.
+.. (c) OPNFV, 2016-2018 Intel Corporation.
Yardstick - NSB Testing - Operation
===================================
@@ -256,7 +256,7 @@ to the VNF.
An example scale-up Heat testcase is:
-.. literalinclude:: /submodules/yardstick/samples/vnf_samples/nsut/vfw/tc_heat_rfc2544_ipv4_1rule_1flow_64B_trex_scale-up.yaml
+.. literalinclude:: /../samples/vnf_samples/nsut/vfw/tc_heat_rfc2544_ipv4_1rule_1flow_64B_trex_scale-up.yaml
:language: yaml
This testcase template requires specifying the number of VCPUs, Memory and Ports.
@@ -271,7 +271,7 @@ In order to support ports scale-up, traffic and topology templates need to be us
A example topology template is:
-.. literalinclude:: /submodules/yardstick/samples/vnf_samples/nsut/vfw/vfw-tg-topology-scale-up.yaml
+.. literalinclude:: /../samples/vnf_samples/nsut/vfw/vfw-tg-topology-scale-up.yaml
:language: yaml
This template has ``vports`` as an argument. To pass this argument it needs to
@@ -293,7 +293,7 @@ For example:
A example traffic profile template is:
-.. literalinclude:: /submodules/yardstick/samples/vnf_samples/traffic_profiles/ipv4_throughput-scale-up.yaml
+.. literalinclude:: /../samples/vnf_samples/traffic_profiles/ipv4_throughput-scale-up.yaml
:language: yaml
There is an option to provide predefined config for SampleVNFs. Path to config
@@ -457,5 +457,110 @@ Sample test case file
4. Modify ``networks/phy_port`` accordingly to the baremetal setup.
5. Run test from:
-.. literalinclude:: /submodules/yardstick/samples/vnf_samples/nsut/acl/tc_ovs_rfc2544_ipv4_1rule_1flow_64B_trex.yaml
+.. literalinclude:: /../samples/vnf_samples/nsut/acl/tc_ovs_rfc2544_ipv4_1rule_1flow_64B_trex.yaml
:language: yaml
+
+Preparing test run of vEPC test case
+------------------------------------
+
+The provided vEPC test cases are examples of emulation of vEPC infrastructure
+components, such as UE, eNodeB, MME, SGW and PGW.
+
+Location of vEPC test cases: ``samples/vnf_samples/nsut/vepc/``.
+
+Before running a specific vEPC test case using NSB, some preconfiguration
+needs to be done.
+
+Update Spirent Landslide TG configuration in pod file
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Examples of ``pod.yaml`` files can be found in
+:file:`etc/yardstick/nodes/standalone`.
+The name of the related pod file can be checked in the context section of the
+NSB test case.
+
+The ``pod.yaml`` related to the vEPC test case uses some sub-structures that
+hold the details of accessing the Spirent Landslide traffic generator.
+These subsections, and the changes to be made in the provided example pod
+file, are described below; a condensed sketch of the resulting structure is
+shown after the list.
+
+1. ``tas_manager``: data under this key holds the information required to
+access Landslide TAS (Test Administration Server) and perform needed
+configurations on it.
+
+ * ``ip``: IP address of TAS Manager node; should be updated according to test
+ setup used
+ * ``super_user``: superuser name; could be retrieved from Landslide documentation
+ * ``super_user_password``: superuser password; could be retrieved from
+ Landslide documentation
+ * ``cfguser_password``: password of predefined user named 'cfguser'; default
+ password could be retrieved from Landslide documentation
+ * ``test_user``: username to be used during test run as a Landslide library
+ name; to be defined by test run operator
+ * ``test_user_password``: password of test user; to be defined by test run
+ operator
+ * ``proto``: *http* or *https*; to be defined by test run operator
+ * ``license``: Landslide license number installed on TAS
+
+2. The ``config`` section holds information about test servers (TSs) and
+systems under test (SUTs). Data is represented as a list of entries.
+Each such entry contains:
+
+ * ``test_server``: this subsection represents data related to test server
+ configuration, such as:
+
+ * ``name``: test server name; unique custom name to be defined by test
+ operator
+ * ``role``: this value is used as a key to bind specific Test Server and
+ TestCase; should be set to one of test types supported by TAS license
+ * ``ip``: Test Server IP address
+ * ``thread_model``: parameter related to Test Server performance mode.
+ The value should be one of the following: "Legacy" | "Max" | "Fireball".
+ Refer to Landslide documentation for details.
+ * ``phySubnets``: a structure used to specify IP ranges reservations on
+ specific network interfaces of related Test Server. Structure fields are:
+
+ * ``base``: start of IP address range
+ * ``mask``: IP range mask in CIDR format
+ * ``name``: network interface name, e.g. *eth1*
+ * ``numIps``: size of IP address range
+
+ * ``preResolvedArpAddress``: a structure used to specify the range of IP
+ addresses for which the ARP responses will be emulated
+
+ * ``StartingAddress``: IP address specifying the start of IP address range
+ * ``NumNodes``: size of the IP address range
+
+ * ``suts``: a structure that contains definitions of each specific SUT
+ (represents a vEPC component). SUT structure contains following key/value
+ pairs:
+
+ * ``name``: unique custom string specifying SUT name
+ * ``role``: string value corresponding with an SUT role specified in the
+ session profile (test session template) file
+    * ``managementIp``: SUT management IP address
+ * ``phy``: network interface name, e.g. *eth1*
+ * ``ip``: vEPC component IP address used in test case topology
+ * ``nextHop``: next hop IP address, to allow for vEPC inter-node communication
+
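+Below is a condensed, illustrative sketch of these two sub-structures. It
+only repeats the keys described above; all values are placeholders that must
+be replaced with data from the actual test setup, and the exact layout (for
+instance, how many entries ``phySubnets`` or ``suts`` hold) should be taken
+from the example pod files in :file:`etc/yardstick/nodes/standalone`.
+
+.. code-block:: YAML
+
+    tas_manager:
+      ip: "192.168.100.10"            # TAS Manager IP of the test setup
+      super_user: "super_user_name"   # see Landslide documentation
+      super_user_password: "super_user_password"
+      cfguser_password: "cfguser_password"
+      test_user: "yardstick_lib"      # library name chosen by the operator
+      test_user_password: "yardstick_pwd"
+      proto: "https"
+      license: "000000000"            # license number installed on TAS
+
+    config:
+      - test_server:
+          name: "TestServer_1"
+          role: "SGW_Nodal"           # a test type supported by the TAS license
+          ip: "192.168.100.20"
+          thread_model: "Fireball"    # "Legacy" | "Max" | "Fireball"
+          phySubnets:
+            - base: "10.42.32.100"
+              mask: "/24"
+              name: "eth1"
+              numIps: 20
+        preResolvedArpAddress:
+          - StartingAddress: "10.42.32.1"
+            NumNodes: 20
+        suts:
+          - name: "SGW"
+            role: "SgwUserSut"        # must match a role from the session profile
+            managementIp: "192.168.101.30"
+            phy: "eth2"
+            ip: "10.42.32.10"
+            nextHop: "10.42.32.5"
+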
+Update NSB test case definitions
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+The NSB test case file designated for vEPC testing contains an example of a
+specific test scenario configuration.
+The test operator may change these definitions as required for the use case
+under test.
+Specifically, the following subsections of the vEPC test case (section
+**scenarios**) may be changed; a short sketch is shown after this list.
+
+1. Subsection ``options``: contains custom parameters used for vEPC testing
+
+ * subsection ``dmf``: may contain one or more parameters specified in
+ ``traffic_profile`` template file
+ * subsection ``test_cases``: contains re-definitions of parameters specified
+ in ``session_profile`` template file
+
+   .. note:: All parameters in ``session_profile`` whose value is a
+      placeholder need to be re-defined to construct a valid test session.
+
+2. Subsection ``runner``: specifies the test duration and the polling
+interval of TG and VNF side KPIs. For more details, refer to
+:doc:`03-architecture`.
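+
+A minimal, illustrative sketch of how these subsections fit together in a
+vEPC test case is shown below. The parameter names under ``dmf`` and
+``test_cases`` are hypothetical and must correspond to parameters that exist
+in the referenced ``traffic_profile`` and ``session_profile`` templates; the
+runner settings are assumptions to be adjusted per test run.
+
+.. code-block:: YAML
+
+    scenarios:
+    - type: NSPerf                      # scenario type used by NSB test cases
+      options:
+        dmf:
+          transactionRate: 5            # hypothetical traffic_profile override
+        test_cases:
+          - BearerAddrPool: "2001::1"   # hypothetical session_profile re-definition
+      runner:
+        type: Duration                  # assumed runner type
+        duration: 60                    # total test duration, seconds
+        interval: 1                     # TG/VNF KPI polling interval, seconds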
diff --git a/docs/testing/user/userguide/15-list-of-tcs.rst b/docs/testing/user/userguide/15-list-of-tcs.rst
index 0efecebd1..2f0a87144 100644
--- a/docs/testing/user/userguide/15-list-of-tcs.rst
+++ b/docs/testing/user/userguide/15-list-of-tcs.rst
@@ -118,17 +118,6 @@ StorPerf
opnfv_yardstick_tc074.rst
-virtual Traffic Classifier
---------------------------
-
-.. toctree::
- :maxdepth: 1
-
- opnfv_yardstick_tc006.rst
- opnfv_yardstick_tc007.rst
- opnfv_yardstick_tc020.rst
- opnfv_yardstick_tc021.rst
-
Templates
=========
diff --git a/docs/testing/user/userguide/comp-intro.rst b/docs/testing/user/userguide/comp-intro.rst
index ad354b66d..bab6e60da 100644
--- a/docs/testing/user/userguide/comp-intro.rst
+++ b/docs/testing/user/userguide/comp-intro.rst
@@ -7,10 +7,10 @@
Yardstick
=========
-.. _Yardstick: https://wiki.opnfv.org/yardstick
+.. _Yardstick: https://wiki.opnfv.org/display/yardstick
.. _Presentation: https://wiki.opnfv.org/_media/opnfv_summit_-_yardstick_project.pdf
.. _NFV-TST001: https://docbox.etsi.org/ISG/NFV/Open/Drafts/TST001_-_Pre-deployment_Validation/
-.. _Yardsticktst: https://wiki.opnfv.org/_media/opnfv_summit_-_bridging_opnfv_and_etsi.pdf
+.. _Yardsticktst: http://events17.linuxfoundation.org/sites/events/files/slides/OPNFV%20Summit%20-%20bridging_opnfv_and_etsi.pdf
The project's goal is to verify infrastructure compliance, from the perspective
of a Virtual Network Function (VNF).
diff --git a/docs/testing/user/userguide/glossary.rst b/docs/testing/user/userguide/glossary.rst
index be98aa6c0..6a153943c 100644
--- a/docs/testing/user/userguide/glossary.rst
+++ b/docs/testing/user/userguide/glossary.rst
@@ -13,6 +13,11 @@ Glossary
API
Application Programming Interface
+ Docker
+ Docker provisions and manages containers. Yardstick and many other OPNFV
+ projects are deployed in containers. Docker is required to launch the
+ containerized versions of these projects.
+
DPI
Deep Packet Inspection
@@ -27,36 +32,80 @@ Glossary
IOPS
Input/Output Operations Per Second
+ A performance measurement used to benchmark storage devices.
+
+ KPI
+ Key Performance Indicator
+
+ Kubernetes
+ k8s
+ Kubernetes is an open-source container-orchestration system for automating
+ deployment, scaling and management of containerized applications.
+ It is one of the contexts supported in Yardstick.
+
+ NFV
+ Network Function Virtualization
+ NFV is an initiative to take network services which were traditionally run
+ on proprietary, dedicated hardware, and virtualize them to run on general
+ purpose hardware.
+
+ NFVI
+ Network Function Virtualization Infrastructure
+     The servers, routers, switches, etc. on which the NFV system runs.
NIC
Network Interface Controller
+ OpenStack
+ OpenStack is a cloud operating system that controls pools of compute,
+ storage, and networking resources. OpenStack is an open source project
+ licensed under the Apache License 2.0.
+
PBFS
Packet Based per Flow State
+ PROX
+ Packet pROcessing eXecution engine
+
QoS
Quality of Service
+ The ability to guarantee certain network or storage requirements to
+ satisfy a Service Level Agreement (SLA) between an application provider
+ and end users.
+ Typically includes performance requirements like networking bandwidth,
+ latency, jitter correction, and reliability as well as storage
+ performance in Input/Output Operations Per Second (IOPS), throttling
+     agreements, and performance expectations at peak load.
+
+ SLA
+ Service Level Agreement
+ An SLA is an agreement between a service provider and a customer to
+ provide a certain level of service/performance.
+
+ SR-IOV
+ Single Root IO Virtualization
+ A specification that, when implemented by a physical PCIe
+ device, enables it to appear as multiple separate PCIe devices. This
+ enables multiple virtualized guests to share direct access to the
+ physical device.
+
+ SUT
+ System Under Test
+
+ ToS
+ Type of Service
VLAN
- Virtual LAN
+ Virtual LAN (Local Area Network)
VM
Virtual Machine
+ An operating system instance that runs on top of a hypervisor.
+ Multiple VMs can run at the same time on the same physical
+ host.
VNF
Virtual Network Function
VNFC
Virtual Network Function Component
-
- NFVI
- Network Function Virtualization Infrastructure
-
- SR-IOV
- Single Root IO Virtualization
-
- SUT
- System Under Test
-
- ToS
- Type of Service
diff --git a/docs/testing/user/userguide/nsb/nsb-list-of-tcs.rst b/docs/testing/user/userguide/nsb/nsb-list-of-tcs.rst
index 895837283..f9ca900fd 100644
--- a/docs/testing/user/userguide/nsb/nsb-list-of-tcs.rst
+++ b/docs/testing/user/userguide/nsb/nsb-list-of-tcs.rst
@@ -28,3 +28,8 @@ NSB PROX Test Case Descriptions
tc_prox_context_load_balancer_port
tc_prox_context_vpe_port
tc_prox_context_lw_after_port
+ tc_epc_default_bearer_landslide
+ tc_epc_dedicated_bearer_landslide
+ tc_epc_saegw_tput_relocation_landslide
+ tc_epc_network_service_request_landslide
+ tc_epc_ue_service_request_landslide
diff --git a/docs/testing/user/userguide/nsb/tc_epc_dedicated_bearer_landslide.rst b/docs/testing/user/userguide/nsb/tc_epc_dedicated_bearer_landslide.rst
new file mode 100644
index 000000000..c8865ed93
--- /dev/null
+++ b/docs/testing/user/userguide/nsb/tc_epc_dedicated_bearer_landslide.rst
@@ -0,0 +1,156 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, 2018 Intel Corporation.
+
+*********************************************************
+Yardstick Test Case Description: NSB EPC DEDICATED BEARER
+*********************************************************
+
++-----------------------------------------------------------------------------+
+|NSB EPC dedicated bearer test case |
+| |
++--------------+--------------------------------------------------------------+
+|test case id | tc_epc_{initiator}_dedicated_bearer_landslide |
+| | |
+| | * initiator: dedicated bearer creation initiator side could |
+| | be UE (ue) or Network (network). |
+| | |
++--------------+--------------------------------------------------------------+
+|metric | All metrics provided by Spirent Landslide traffic generator |
+| | |
++--------------+--------------------------------------------------------------+
+|test purpose | The Spirent Landslide product provides one box solution which|
+| | allows to fully emulate all EPC network nodes including |
+| | mobile users, network host and generate control and data |
+| | plane traffic. |
+| | |
+| | This test allows to check processing capability under |
+| | different levels of load (number of subscriber, generated |
+| | traffic throughput, etc.) for case when default and dedicated|
+| | bearers are creating and using for traffic transferring. |
+| | |
+| | It's easy to replace emulated node or multiple nodes in test |
+| | topology with real node or corresponding vEPC VNF as DUT and |
+|              | check its processing capabilities under specific test case  |
+| | load conditions. |
+| | |
++--------------+--------------------------------------------------------------+
+|configuration | The EPC dedicated bearer test cases are listed below: |
+| | |
+| | * tc_epc_ue_dedicated_bearer_create_landslide.yaml |
+| | * tc_epc_network_dedicated_bearer_create_landslide.yaml |
+| | |
+| | Test duration: |
+| | |
+| | * is set as 60sec (specified in test session profile); |
+| | |
+| | Traffic type: |
+| | |
+| | * UDP; |
+| | |
+| | Packet sizes: |
+| | |
+| | * 512 bytes; |
+| | |
+| | Traffic transaction rate: |
+| | |
+| | * 5 trans/s.; |
+| | |
+| | Number of mobile subscribers: |
+| | |
+| | * 20000; |
+| | |
+| | Number of default bearers per subscriber: |
+| | |
+| | * 1; |
+| | |
+| | Number of dedicated bearers per default bearer: |
+| | |
+| | * 1. |
+| | |
+| | The above fields and values are the main options used for the|
+| | test case. Other configurable options could be found in test |
+| | session profile yaml file. All these options have default |
+| | values which can be overwritten in test case file. |
+| | |
++--------------+--------------------------------------------------------------+
+|test tool | Spirent Landslide |
+| | |
+| | The Spirent Landslide is a tool for functional and |
+| | performance testing of different types of mobile networks. |
+| | It emulates real-world control and data traffic of mobile |
+| | subscribers moving through virtualized EPC network. |
+| | Detailed description of Spirent Landslide product could be |
+| | found here: https://www.spirent.com/Products/Landslide |
+| | |
++--------------+--------------------------------------------------------------+
+|applicability | These EPC DEDICATED BEARER test cases can be configured with|
+| | different: |
+| | |
+| | * packet sizes; |
+| | * traffic transaction rate; |
+| | * number of subscribers sessions; |
+| | * number of default bearers per subscriber; |
+| | * number of dedicated bearers per default; |
+| | * subscribers connection rate; |
+| | * subscribers disconnection rate; |
+| | * dedicated bearers activation timeout; |
+| | * DMF (traffic profile); |
+| | * enable/disable Fireball DMF threading model that provides |
+| | optimized performance; |
+| | |
+| | Default values exist. |
+| | |
++--------------+--------------------------------------------------------------+
+|references | ETSI-NFV-TST001 |
+| | |
+| | 3GPP TS 32.455 |
+| | |
++--------------+--------------------------------------------------------------+
+| pre-test | * All Spirent Landslide dependencies need to be installed. |
+| conditions | The steps are described in NSB installation chapter for the|
+| | Spirent Landslide vEPC tests; |
+| | |
+| | * The pod.yaml file contains all necessary information (TAS |
+| | VM IP address, NICs, emulated SUTs and Test Nodes |
+| | parameters (names, types, ip addresses, etc.). |
+| | |
++--------------+--------------------------------------------------------------+
+|test sequence | description and expected result |
+| | |
++--------------+--------------------------------------------------------------+
+|step 1 | Spirent Landslide components are running on the hosts |
+| | specified in the pod file. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 2 | Yardstick is connected with Spirent Landslide Test |
+| | Administrator Server (TAS) by TCL and REST API. The test |
+| | will resolve the topology and instantiate all emulated EPC |
+| | network nodes. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 3 | Test scenarios run, which performs the following steps: |
+| | |
+| | * Start the emulated EPC network nodes; |
+| | * Establish the subscribers connections to EPC network |
+| | (default bearers); |
+|              | * Establish the number of dedicated bearers per              |
+| | default bearer for each subscriber; |
+| | * Create the sessions and transmit traffic through EPC |
+| | network nodes during the specified traffic duration time; |
+| | * Disconnect dedicated bearers; |
+| | * Disconnect subscribers at the end of the test. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 4 | During test run, all the metrics provided by Spirent |
+| | Landslide are stored in the yardstick dispatcher. |
+| | |
++--------------+--------------------------------------------------------------+
+|test verdict | The test case will create the test session in Spirent |
+| | Landslide with the test case parameters and store the results|
+| | in the database for benchmarking purposes. The aim is only |
+| | to collect all the metrics that are provided by Spirent |
+| | Landslide product for each test specific scenario. |
+| | |
++--------------+--------------------------------------------------------------+
diff --git a/docs/testing/user/userguide/nsb/tc_epc_default_bearer_landslide.rst b/docs/testing/user/userguide/nsb/tc_epc_default_bearer_landslide.rst
new file mode 100644
index 000000000..9e6d77825
--- /dev/null
+++ b/docs/testing/user/userguide/nsb/tc_epc_default_bearer_landslide.rst
@@ -0,0 +1,149 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, 2018 Intel Corporation.
+
+*******************************************************
+Yardstick Test Case Description: NSB EPC DEFAULT BEARER
+*******************************************************
+
++-----------------------------------------------------------------------------+
+|NSB EPC default bearer test case |
+| |
++--------------+--------------------------------------------------------------+
+|test case id | tc_epc_default_bearer_landslide_{dmf_setup} |
+| | |
+| | * dmf_setup: single or multi dmf test session setup; |
+| | |
++--------------+--------------------------------------------------------------+
+|metric | All metrics provided by Spirent Landslide traffic generator |
+| | |
++--------------+--------------------------------------------------------------+
+|test purpose | The Spirent Landslide product provides one box solution which|
+| | allows to fully emulate all EPC network nodes including |
+| | mobile users, network host and generate control and data |
+| | plane traffic. |
+| | |
+| | This test allows to check processing capability of EPC under |
+| | different levels of load (number of subscriber, generated |
+| | traffic throughput) for case when only one default bearer is |
+| | using for transferring traffic from UE to Network. |
+| | |
+| | It's easy to replace emulated node or multiple nodes in test |
+| | topology with real node or corresponding vEPC VNF as DUT and |
+|              | check its processing capabilities under specific test case  |
+| | load conditions. |
+| | |
++--------------+--------------------------------------------------------------+
+|configuration | The EPC default bearer test cases are listed below: |
+| | |
+| | * tc_epc_default_bearer_create_landslide.yaml |
+| | * tc_epc_default_bearer_create_landslide_multi_dmf.yaml |
+| | |
+| | Test duration: |
+| | |
+| | * is set as 60sec (specified in test session profile); |
+| | |
+| | Traffic type: |
+| | |
+| | * UDP - for single DMF test case; |
+| | * UDP and TCP - for multi DMF test case; |
+| | |
+| | Packet sizes: |
+| | |
+| | * 512 bytes for UDP packets; |
+| | * 1518 bytes for TCP packets; |
+| | |
+| | Traffic transaction rate: |
+| | |
+| | * 5 trans/s.; |
+| | |
+| | Number of mobile subscribers: |
+| | |
+| | * 20000; |
+| | |
+| | Number of default bearers per subscriber: |
+| | |
+| | * 1. |
+| | |
+| | The above fields and values are the main options used for the|
+| | test case. Other configurable options could be found in test |
+| | session profile yaml file. All these options have default |
+| | values which can be overwritten in test case file. |
+| | |
++--------------+--------------------------------------------------------------+
+|test tool | Spirent Landslide |
+| | |
+| | The Spirent Landslide is a tool for functional & performance |
+| | testing of different types of mobile networks. It emulates |
+| | real-world control and data traffic of mobile subscribers |
+| | moving through virtualized EPC network. |
+| | Detailed description of Spirent Landslide product could be |
+| | found here: https://www.spirent.com/Products/Landslide |
+| | |
++--------------+--------------------------------------------------------------+
+|applicability | These EPC DEFAULT BEARER test cases can be configured with   |
+| | different: |
+| | |
+| | * packet sizes; |
+| | * traffic transaction rate; |
+| | * number of subscribers sessions; |
+| | * number of default bearers per subscriber; |
+| | * subscribers connection rate; |
+| | * subscribers disconnection rate; |
+| | * DMF (traffic profile); |
+| | * enable/disable Fireball DMF threading model that provides |
+| | optimized performance; |
+| | |
+| | Default values exist. |
+| | |
++--------------+--------------------------------------------------------------+
+|references | ETSI-NFV-TST001 |
+| | |
+| | 3GPP TS 32.455 |
+| | |
++--------------+--------------------------------------------------------------+
+| pre-test | * All Spirent Landslide dependencies are installed (detailed |
+| conditions | installation steps are described in Chapter 13- |
+| | nsb-installation.rst and 14-nsb-operation.rst file for NSB |
+|              | Spirent Landslide vEPC tests);                               |
+| | |
+| | * The pod.yaml file contains all necessary information |
+| | (TAS VM IP address, NICs, emulated SUTs and Test Nodes |
+| | parameters (names, types, ip addresses, etc.). |
+| | |
++--------------+--------------------------------------------------------------+
+|test sequence | description and expected result |
+| | |
++--------------+--------------------------------------------------------------+
+|step 1 | Spirent Landslide components are running on the hosts |
+| | specified in the pod file. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 2 | Yardstick is connected with Spirent Landslide Test |
+| | Administration Server (TAS) by TCL and REST API. The test |
+| | will resolve the topology and instantiate all emulated EPC |
+| | network nodes. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 3 | Test scenarios run, which performs the following steps: |
+| | |
+| | * Start emulated EPC network nodes; |
+| | * Establish subscribers connections to EPC network (only |
+| | default bearers are established); |
+| | * Create the sessions and transmit traffic through EPC |
+| | network nodes during the specified traffic duration time; |
+| | * Disconnect subscribers at the end of the test. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 4 | During test run, all the metrics provided by Spirent |
+| | Landslide are stored in the yardstick dispatcher. |
+| | |
++--------------+--------------------------------------------------------------+
+|test verdict | The test case will create the test session in Spirent |
+| | Landslide with the test case parameters and store the |
+| | results in the database for benchmarking purposes. The aim |
+| | is only to collect all the metrics that are provided by |
+| | Spirent Landslide product for each test specific scenario. |
+| | |
++--------------+--------------------------------------------------------------+
diff --git a/docs/testing/user/userguide/nsb/tc_epc_network_service_request_landslide.rst b/docs/testing/user/userguide/nsb/tc_epc_network_service_request_landslide.rst
new file mode 100644
index 000000000..85e6ce11a
--- /dev/null
+++ b/docs/testing/user/userguide/nsb/tc_epc_network_service_request_landslide.rst
@@ -0,0 +1,159 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, 2018 Intel Corporation.
+
+****************************************************************
+Yardstick Test Case Description: NSB EPC NETWORK SERVICE REQUEST
+****************************************************************
+
++-----------------------------------------------------------------------------+
+|NSB EPC network service request test case |
+| |
++--------------+--------------------------------------------------------------+
+|test case id | tc_epc_network_service_request_landslide |
+| | |
+| | * initiator: service request initiator side could be UE (ue) |
+| | or Network (network). |
+| | |
++--------------+--------------------------------------------------------------+
+|metric | All metrics provided by Spirent Landslide traffic generator |
+| | |
++--------------+--------------------------------------------------------------+
+|test purpose | The Spirent Landslide product provides one box solution which|
+| | allows to fully emulate all EPC network nodes including |
+| | mobile users, network host and generate control and data |
+| | plane traffic. |
+| | |
+| | This test covers case of network initiated service request & |
+| | allows to check processing capabilities of EPC handling high |
+| | amount of continuous Downlink Data Notification messages from|
+| | network to UEs which are in Idle state. |
+| | |
+| | It's easy to replace emulated node or multiple nodes in test |
+| | topology with real node or corresponding vEPC VNF as DUT and |
+|              | check its processing capabilities under specific test case  |
+| | load conditions. |
+| | |
++--------------+--------------------------------------------------------------+
+|configuration | The EPC network service request test cases are listed below: |
+| | |
+| | * tc_epc_network_service_request_landslide.yaml |
+| | |
+| | Test duration: |
+| | |
+| | * is set as 60sec (specified in test session profile); |
+| | |
+| | Traffic type: |
+| | |
+| | * UDP; |
+| | |
+| | Packet sizes: |
+| | |
+| | * 512 bytes; |
+| | |
+| | Traffic transaction rate: |
+| | |
+| | * 0.1 trans/s.; |
+| | |
+| | Number of mobile subscribers: |
+| | |
+| | * 20000; |
+| | |
+| | Number of default bearers per subscriber: |
+| | |
+| | * 1; |
+| | |
+| | Idle entry time (timeout after which UE goes to Idle state): |
+| | |
+| | * 5s; |
+| | |
+| | Traffic start delay: |
+| | |
+| | * 1000ms. |
+| | |
+| | The above fields and values are the main options used for the|
+| | test case. Other configurable options could be found in test |
+| | session profile yaml file. All these options have default |
+| | values which can be overwritten in test case file. |
+| | |
++--------------+--------------------------------------------------------------+
+|test tool | Spirent Landslide |
+| | |
+| | The Spirent Landslide is a tool for functional & performance |
+| | testing of different types of mobile networks. It emulates |
+| | real-world control and data traffic of mobile subscribers |
+| | moving through virtualized EPC network. |
+| | Detailed description of Spirent Landslide product could be |
+| | found here: https://www.spirent.com/Products/Landslide |
+| | |
++--------------+--------------------------------------------------------------+
+|applicability | This EPC NETWORK SERVICE REQUEST test case can be configured |
+| | with different: |
+| | |
+| | * packet sizes; |
+| | * traffic transaction rate; |
+| | * number of subscribers sessions; |
+| | * number of default bearers per subscriber; |
+| | * subscribers connection rate; |
+| | * subscribers disconnection rate; |
+| | * timeout after which UE goes to Idle state; |
+| | * Traffic start delay; |
+| | |
+| | Default values exist. |
+| | |
++--------------+--------------------------------------------------------------+
+|references | ETSI-NFV-TST001 |
+| | |
+| | 3GPP TS 32.455 |
+| | |
++--------------+--------------------------------------------------------------+
+| pre-test | * All Spirent Landslide dependencies are installed (detailed |
+| conditions | installation steps are described in Chapter 13- |
+| | nsb-installation.rst and 14-nsb-operation.rst file for NSB |
+|              | Spirent Landslide vEPC tests);                               |
+| | |
+| | * The pod.yaml file contains all necessary information |
+| | (TAS VM IP address, NICs, emulated SUTs and Test Nodes |
+| | parameters (names, types, ip addresses, etc.). |
+| | |
++--------------+--------------------------------------------------------------+
+|test sequence | description and expected result |
+| | |
++--------------+--------------------------------------------------------------+
+|step 1 | Spirent Landslide components are running on the hosts |
+| | specified in the pod file. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 2 | Yardstick is connected with Spirent Landslide Test |
+| | Administration Server (TAS) by TCL and REST API. The test |
+| | will resolve the topology and instantiate all emulated EPC |
+| | network nodes. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 3 | Test scenarios run, which performs the following steps: |
+| | |
+| | * Start emulated EPC network nodes; |
+| | * Establish subscribers connections to EPC network (default |
+| | bearers); |
+| | * Switch UE to Idle state after specified in test case |
+| | timeout; |
+| | * Send Downlink Data Notification from network to UE, that |
+| | will return UE to active state. This process is continuous |
+| | and during whole test run UEs will be going to Idle state |
+| | and will be switched back to active state after Downlink |
+| | Data Notification was received; |
+| | * Disconnect subscribers at the end of the test. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 4 | During test run, all the metrics provided by Spirent |
+| | Landslide are stored in the yardstick dispatcher. |
+| | |
++--------------+--------------------------------------------------------------+
+|test verdict | The test case will create the test session in Spirent |
+| | Landslide with the test case parameters and store the |
+| | results in the database for benchmarking purposes. The aim |
+| | is only to collect all the metrics that are provided by |
+| | Spirent Landslide product for each test specific scenario. |
+| | |
++--------------+--------------------------------------------------------------+
diff --git a/docs/testing/user/userguide/nsb/tc_epc_saegw_tput_relocation_landslide.rst b/docs/testing/user/userguide/nsb/tc_epc_saegw_tput_relocation_landslide.rst
new file mode 100644
index 000000000..102517562
--- /dev/null
+++ b/docs/testing/user/userguide/nsb/tc_epc_saegw_tput_relocation_landslide.rst
@@ -0,0 +1,167 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, 2018 Intel Corporation.
+
+*********************************************************
+Yardstick Test Case Description: NSB EPC SAEGW RELOCATION
+*********************************************************
+
++-----------------------------------------------------------------------------+
+|NSB EPC SAEGW throughput with relocation test case |
+| |
++--------------+--------------------------------------------------------------+
+|test case id | tc_epc_saegw_tput_relocation_landslide |
+| | |
+| | |
++--------------+--------------------------------------------------------------+
+|metric | All metrics provided by Spirent Landslide traffic generator |
+| | |
++--------------+--------------------------------------------------------------+
+|test purpose | The Spirent Landslide product provides a one-box solution |
+| | which allows users to fully emulate all EPC network nodes, |
+| | including mobile users and network hosts, and to generate |
+| | control and data plane traffic. |
+| | |
+| | This test checks the processing capability of the EPC when |
+| | handling a large number of subscriber X2 handovers between |
+| | different eNBs while UEs are sending traffic. |
+| | |
+| | It is easy to replace an emulated node or multiple nodes in |
+| | the test topology with a real node or the corresponding vEPC |
+| | VNF as DUT and check its processing capabilities under the |
+| | specific test case load conditions. |
+| | |
++--------------+--------------------------------------------------------------+
+|configuration | The EPC SAEGW throughput with relocation tests are listed |
+| | below: |
+| | |
+| | * tc_epc_saegw_tput_relocation_landslide.yaml |
+| | |
+| | Test duration: |
+| | |
+| | * is set to 60 sec (specified in the test session profile); |
+| | |
+| | Traffic type: |
+| | |
+| | * UDP; |
+| | |
+| | Packet sizes: |
+| | |
+| | * 512 bytes; |
+| | |
+| | Traffic transaction rate: |
+| | |
+| | * 5 trans/s; |
+| | |
+| | Number of mobile subscribers: |
+| | |
+| | * 20000; |
+| | |
+| | Number of default bearers per subscriber: |
+| | |
+| | * 1; |
+| | |
+| | Handover type: |
+| | |
+| | * X2 handover; |
+| | |
+| | Mobility time (timeout after sessions are established, after |
+| | which handover starts): |
+| | |
+| | * 10000ms; |
+| | |
+| | Handover start type: |
+| | |
+| | * When all sessions started; |
+| | |
+| | Mobility mode: |
+| | |
+| | * Single handoff; |
+| | |
+| | Mobility Rate: |
+| | |
+| | * 120 subscribers/s. |
+| | |
+| | The above fields and values are the main options used for |
+| | the test case. Other configurable options can be found in |
+| | the test session profile YAML file. All these options have |
+| | default values which can be overwritten in the test case |
+| | file; an illustrative override is sketched after this table. |
+| | |
++--------------+--------------------------------------------------------------+
+|test tool | Spirent Landslide |
+| | |
+| | The Spirent Landslide is a tool for functional & performance |
+| | testing of different types of mobile networks. It emulates |
+| | real-world control and data traffic of mobile subscribers |
+| | moving through a virtualized EPC network. |
+| | A detailed description of the Spirent Landslide product can |
+| | be found here: https://www.spirent.com/Products/Landslide |
+| | |
++--------------+--------------------------------------------------------------+
+|applicability | This EPC SAEGW THROUGHPUT WITH RELOCATION test case can be |
+| | configured with different: |
+| | |
+| | * packet sizes; |
+| | * traffic transaction rate; |
+| | * number of subscriber sessions; |
+| | * handover type; |
+| | * mobility rate; |
+| | * mobility time; |
+| | * mobility mode; |
+| | * handover start condition; |
+| | * subscriber disconnection rate; |
+| | |
+| | Default values exist. |
+| | |
++--------------+--------------------------------------------------------------+
+|references | ETSI-NFV-TST001 |
+| | |
+| | 3GPP TS 32.455 |
+| | |
++--------------+--------------------------------------------------------------+
+| pre-test | * All Spirent Landslide dependencies are installed (detailed |
+| conditions | installation steps are described in the |
+| | 13-nsb-installation.rst and 14-nsb-operation.rst chapters |
+| | for NSB Spirent Landslide vEPC tests); |
+| | |
+| | * The pod.yaml file contains all necessary information: |
+| | TAS VM IP address, NICs, emulated SUTs and Test Nodes |
+| | parameters (names, types, IP addresses, etc.). |
+| | |
++--------------+--------------------------------------------------------------+
+|test sequence | description and expected result |
+| | |
++--------------+--------------------------------------------------------------+
+|step 1 | Spirent Landslide components are running on the hosts |
+| | specified in the pod file. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 2 | Yardstick is connected to the Spirent Landslide Test |
+| | Administration Server (TAS) via TCL and REST API. The test |
+| | will resolve the topology and instantiate all emulated EPC |
+| | network nodes. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 3 | The test scenario runs and performs the following steps: |
+| | |
+| | * Start emulated EPC network nodes; |
+| | * Establish subscriber connections to the EPC network |
+| | (default bearers); |
+| | * Start running traffic; |
+| | * After the mobility timeout specified in the test case, |
+| | start the handover process at the specified mobility rate; |
+| | * Disconnect subscribers at the end of the test. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 4 | During the test run, all the metrics provided by Spirent |
+| | Landslide are stored in the Yardstick dispatcher. |
+| | |
++--------------+--------------------------------------------------------------+
+|test verdict | The test case will create the test session in Spirent |
+| | Landslide with the test case parameters and store the |
+| | results in the database for benchmarking purposes. The aim |
+| | is only to collect all the metrics that are provided by the |
+| | Spirent Landslide product for each specific test scenario. |
+| | |
++--------------+--------------------------------------------------------------+
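+
+The configuration values listed above are defaults taken from the test session
+profile and may be overwritten in the test case file. The fragment below is an
+illustrative sketch only; the option keys shown (``mobility_time``,
+``handoff_type``, ``mobility_rate``) are placeholder names rather than the
+authoritative keys of the test session profile::
+
+  scenarios:
+  -
+    type: NSPerf
+    options:
+      mobility_time: 10000   # ms after session setup before handover starts
+      handoff_type: X2       # placeholder key for the handover type
+      mobility_rate: 120     # subscribers/s (placeholder key)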
diff --git a/docs/testing/user/userguide/nsb/tc_epc_ue_service_request_landslide.rst b/docs/testing/user/userguide/nsb/tc_epc_ue_service_request_landslide.rst
new file mode 100644
index 000000000..0711a0ce3
--- /dev/null
+++ b/docs/testing/user/userguide/nsb/tc_epc_ue_service_request_landslide.rst
@@ -0,0 +1,174 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, 2018 Intel Corporation.
+
+***********************************************************
+Yardstick Test Case Description: NSB EPC UE SERVICE REQUEST
+***********************************************************
+
++-----------------------------------------------------------------------------+
+|NSB EPC UE service request test case |
+| |
++--------------+--------------------------------------------------------------+
+|test case id | tc_epc_{initiator}_service_request_landslide |
+| | |
+| | * initiator: the service request initiator side can be |
+| | either UE (ue) or Network (nw). |
+| | |
++--------------+--------------------------------------------------------------+
+|metric | All metrics provided by Spirent Landslide traffic generator |
+| | |
++--------------+--------------------------------------------------------------+
+|test purpose | The Spirent Landslide product provides a one-box solution |
+| | which allows users to fully emulate all EPC network nodes, |
+| | including mobile users and network hosts, and to generate |
+| | control and data plane traffic. |
+| | |
+| | This test checks the processing capabilities of the EPC |
+| | under a high user connection rate and traffic load for the |
+| | case when the UE initiates a service request (the UE |
+| | initiates a bearer modification request to provide a |
+| | dedicated bearer for a new type of traffic). |
+| | |
+| | It is easy to replace an emulated node or multiple nodes in |
+| | the test topology with a real node or the corresponding vEPC |
+| | VNF as DUT and check its processing capabilities under the |
+| | specific test case load conditions. |
+| | |
++--------------+--------------------------------------------------------------+
+|configuration | The EPC UE service request test cases are listed below: |
+| | |
+| | * tc_epc_ue_service_request_landslide.yaml |
+| | |
+| | Test duration: |
+| | |
+| | * is set to 60 sec (specified in the test session profile); |
+| | |
+| | Traffic type: |
+| | |
+| | * UDP; |
+| | |
+| | Packet sizes: |
+| | |
+| | * 512 bytes; |
+| | |
+| | Traffic transaction rate: |
+| | |
+| | * 5 trans/s; |
+| | |
+| | Number of mobile subscribers: |
+| | |
+| | * 20000; |
+| | |
+| | Number of default bearers per subscriber: |
+| | |
+| | * 1; |
+| | |
+| | Number of dedicated bearers per default bearer: |
+| | |
+| | * 1. |
+| | |
+| | TFT settings for dedicated bearers: |
+| | |
+| | * TFT configured to filter TCP traffic (Protocol ID 6) |
+| | |
+| | Modified TFT settings: |
+| | |
+| | * Create a new TFT to filter UDP traffic (Protocol ID 17) |
+| | from local port 2002 and remote port 2003; |
+| | |
+| | Modified QoS settings: |
+| | |
+| | * Set QCI 5 for dedicated bearers; |
+| | |
+| | The above fields and values are the main options used for |
+| | the test case. Other configurable options can be found in |
+| | the test session profile YAML file. All these options have |
+| | default values which can be overwritten in the test case |
+| | file; an illustrative override is sketched after this table. |
+| | |
++--------------+--------------------------------------------------------------+
+|test tool | Spirent Landslide |
+| | |
+| | The Spirent Landslide is a tool for functional & performance |
+| | testing of different types of mobile networks. It emulates |
+| | real-world control and data traffic of mobile subscribers |
+| | moving through a virtualized EPC network. |
+| | A detailed description of the Spirent Landslide product can |
+| | be found here: https://www.spirent.com/Products/Landslide |
+| | |
++--------------+--------------------------------------------------------------+
+|applicability | This EPC UE SERVICE REQUEST test case can be configured with |
+| | different: |
+| | |
+| | * packet sizes; |
+| | * traffic transaction rate; |
+| | * number of subscriber sessions; |
+| | * number of default bearers per subscriber; |
+| | * number of dedicated bearers per default bearer; |
+| | * subscriber connection rate; |
+| | * subscriber disconnection rate; |
+| | * dedicated bearer activation timeout; |
+| | * DMF (traffic profile); |
+| | * enable/disable Fireball DMF threading model that provides |
+| | optimized performance; |
+| | * Starting TFT settings for dedicated bearers; |
+| | * Modified TFT settings for dedicated bearers; |
+| | * Modified QoS settings for dedicated bearers; |
+| | |
+| | Default values exist. |
+| | |
++--------------+--------------------------------------------------------------+
+|references | ETSI-NFV-TST001 |
+| | |
+| | 3GPP TS 32.455 |
+| | |
++--------------+--------------------------------------------------------------+
+| pre-test | * All Spirent Landslide dependencies are installed (detailed |
+| conditions | installation steps are described in the |
+| | 13-nsb-installation.rst and 14-nsb-operation.rst chapters |
+| | for NSB Spirent Landslide vEPC tests); |
+| | |
+| | * The pod.yaml file contains all necessary information: |
+| | TAS VM IP address, NICs, emulated SUTs and Test Nodes |
+| | parameters (names, types, IP addresses, etc.). |
+| | |
++--------------+--------------------------------------------------------------+
+|test sequence | description and expected result |
+| | |
++--------------+--------------------------------------------------------------+
+|step 1 | Spirent Landslide components are running on the hosts |
+| | specified in the pod file. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 2 | Yardstick is connected to the Spirent Landslide Test |
+| | Administration Server (TAS) via TCL and REST API. The test |
+| | will resolve the topology and instantiate all emulated EPC |
+| | network nodes. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 3 | The test scenario runs and performs the following steps: |
+| | |
+| | * Start emulated EPC network nodes; |
+| | * Establish subscriber connections to the EPC network |
+| | (default bearers); |
+| | * Establish the number of dedicated bearers specified in |
+| | the test case per default bearer for each subscriber; |
+| | * Start running user traffic through the EPC network nodes; |
+| | * While traffic is running, send a bearer modification |
+| | request after the timeout specified in the test case; |
+| | * Disconnect dedicated bearers; |
+| | * Disconnect subscribers at the end of the test. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 4 | During the test run, all the metrics provided by Spirent |
+| | Landslide are stored in the Yardstick dispatcher. |
+| | |
++--------------+--------------------------------------------------------------+
+|test verdict | The test case will create the test session in Spirent |
+| | Landslide with the test case parameters and store the |
+| | results in the database for benchmarking purposes. The aim |
+| | is only to collect all the metrics that are provided by the |
+| | Spirent Landslide product for each specific test scenario. |
+| | |
++--------------+--------------------------------------------------------------+
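+
+As an illustration of the dedicated-bearer settings described above, the sketch
+below shows how the initial TFT, the modified TFT and the modified QoS values
+could be expressed as test case overrides. All key names are placeholders; the
+real keys are defined in the Landslide test session profile::
+
+  options:
+    dedicated_bearers: 1      # per default bearer
+    tft:
+      protocol_id: 6          # initial TFT filters TCP traffic
+    modified_tft:
+      protocol_id: 17         # modified TFT filters UDP traffic
+      local_port: 2002
+      remote_port: 2003
+    modified_qos:
+      qci: 5                  # QCI for the dedicated bearers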
diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc010.rst b/docs/testing/user/userguide/opnfv_yardstick_tc010.rst
index 202307de6..19cc80e30 100644
--- a/docs/testing/user/userguide/opnfv_yardstick_tc010.rst
+++ b/docs/testing/user/userguide/opnfv_yardstick_tc010.rst
@@ -34,6 +34,7 @@ Yardstick Test Case Description TC010
| | |
| | Lmbench is a suite of operating system microbenchmarks. This |
| | test uses lat_mem_rd tool from that suite including: |
+| | |
| | * Context switching |
| | * Networking: connection establishment, pipe, TCP, UDP, and |
| | RPC hot potato |
@@ -55,7 +56,7 @@ Yardstick Test Case Description TC010
| | The benchmark runs as two nested loops. The outer loop is |
| | the stride size. The inner loop is the array size. For each |
| | array size, the benchmark creates a ring of pointers that |
-| | point backward one stride.Traversing the array is done by: |
+| | point backward one stride. Traversing the array is done by:: |
| | |
| | p = (char **)*p; |
| | |
diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc011.rst b/docs/testing/user/userguide/opnfv_yardstick_tc011.rst
index 48bdef497..cbb1db91f 100644
--- a/docs/testing/user/userguide/opnfv_yardstick_tc011.rst
+++ b/docs/testing/user/userguide/opnfv_yardstick_tc011.rst
@@ -60,14 +60,14 @@ Yardstick Test Case Description TC011
| | |
| | * options: |
| | protocol: udp # The protocol used by iperf3 tools |
-| | bandwidth: 20m # It will send the given number of packets |
-| | without pausing |
+| | # Send the given number of packets without pausing: |
+| | bandwidth: 20m |
| | * runner: |
| | duration: 30 # Total test duration 30 seconds. |
| | |
| | * SLA (optional): |
| | jitter: 10 (ms) # The maximum amount of jitter that is |
-| | accepted. |
+| | accepted. |
| | |
+--------------+--------------------------------------------------------------+
|applicability | Test can be configured with different: |
diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc012.rst b/docs/testing/user/userguide/opnfv_yardstick_tc012.rst
index b56e829f5..2502f5d94 100644
--- a/docs/testing/user/userguide/opnfv_yardstick_tc012.rst
+++ b/docs/testing/user/userguide/opnfv_yardstick_tc012.rst
@@ -34,6 +34,7 @@ Yardstick Test Case Description TC012
| | |
| | LMbench is a suite of operating system microbenchmarks. |
| | This test uses bw_mem tool from that suite including: |
+| | |
| | * Cached file read |
| | * Memory copy (bcopy) |
| | * Memory read |
diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc019.rst b/docs/testing/user/userguide/opnfv_yardstick_tc019.rst
index 8d79e011a..d27b201c5 100644
--- a/docs/testing/user/userguide/opnfv_yardstick_tc019.rst
+++ b/docs/testing/user/userguide/opnfv_yardstick_tc019.rst
@@ -43,20 +43,24 @@ Yardstick Test Case Description TC019
| | |
+--------------+--------------------------------------------------------------+
|monitors | In this test case, two kinds of monitor are needed: |
+| | |
| | 1. the "openstack-cmd" monitor constantly request a specific |
| | Openstack command, which needs two parameters: |
-| | 1) monitor_type: which is used for finding the monitor class |
-| | and related scritps. It should be always set to |
-| | "openstack-cmd" for this monitor. |
-| | 2) command_name: which is the command name used for request |
+| | |
+| | 1. monitor_type: which is used for finding the monitor |
+| | class and related scripts. It should be always set to |
+| | "openstack-cmd" for this monitor. |
+| | 2. command_name: which is the command name used for |
+| | request |
| | |
| | 2. the "process" monitor check whether a process is running |
| | on a specific node, which needs three parameters: |
-| | 1) monitor_type: which used for finding the monitor class |
-| | and related scritps. It should be always set to "process" |
-| | for this monitor. |
-| | 2) process_name: which is the process name for monitor |
-| | 3) host: which is the name of the node runing the process |
+| | |
+| | 1. monitor_type: which is used for finding the monitor |
+| | class and related scripts. It should be always set to |
+| | "process" for this monitor. |
+| | 2. process_name: which is the process name for monitor |
+| | 3. host: which is the name of the node running the process |
| | |
| | e.g. |
| | monitor1: |
diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc025.rst b/docs/testing/user/userguide/opnfv_yardstick_tc025.rst
index 0e2e9a5f8..f3f9ea6bf 100644
--- a/docs/testing/user/userguide/opnfv_yardstick_tc025.rst
+++ b/docs/testing/user/userguide/opnfv_yardstick_tc025.rst
@@ -39,12 +39,15 @@ Yardstick Test Case Description TC025
| | |
+--------------+--------------------------------------------------------------+
|monitors | In this test case, one kind of monitor are needed: |
+| | |
| | 1. the "openstack-cmd" monitor constantly request a specific |
| | Openstack command, which needs two parameters |
-| | 1) monitor_type: which is used for finding the monitor class |
-| | and related scritps. It should be always set to |
-| | "openstack-cmd" for this monitor. |
-| | 2) command_name: which is the command name used for request |
+| | |
+| | 1) monitor_type: which is used for finding the monitor |
+| | class and related scripts. It should be always set to |
+| | "openstack-cmd" for this monitor. |
+| | 2) command_name: which is the command name used for |
+| | request |
| | |
| | There are four instance of the "openstack-cmd" monitor: |
| | monitor1: |
diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc027.rst b/docs/testing/user/userguide/opnfv_yardstick_tc027.rst
index 125fd59fa..90790e2e3 100644
--- a/docs/testing/user/userguide/opnfv_yardstick_tc027.rst
+++ b/docs/testing/user/userguide/opnfv_yardstick_tc027.rst
@@ -7,7 +7,7 @@
Yardstick Test Case Description TC027
*************************************
-.. _ipv6: https://wiki.opnfv.org/ipv6_opnfv_project
+.. _ipv6: https://wiki.opnfv.org/display/ipv6
+-----------------------------------------------------------------------------+
|IPv6 connectivity between nodes on the tenant network |
diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc040.rst b/docs/testing/user/userguide/opnfv_yardstick_tc040.rst
index d62fbf787..4c73c9677 100644
--- a/docs/testing/user/userguide/opnfv_yardstick_tc040.rst
+++ b/docs/testing/user/userguide/opnfv_yardstick_tc040.rst
@@ -7,7 +7,7 @@
Yardstick Test Case Description TC040
*************************************
-.. _Parser: https://wiki.opnfv.org/parser
+.. _Parser: https://wiki.opnfv.org/display/parser
+-----------------------------------------------------------------------------+
|Verify Parser Yang-to-Tosca |
diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc042.rst b/docs/testing/user/userguide/opnfv_yardstick_tc042.rst
index a0c487c7b..23b98c8f4 100644
--- a/docs/testing/user/userguide/opnfv_yardstick_tc042.rst
+++ b/docs/testing/user/userguide/opnfv_yardstick_tc042.rst
@@ -9,7 +9,7 @@ Yardstick Test Case Description TC042
.. _DPDK: http://dpdk.org/doc/guides/index.html
.. _Testpmd: http://dpdk.org/doc/guides/testpmd_app_ug/index.html
-.. _Pktgen-dpdk: http://pktgen.readthedocs.io/en/latest/index.html
+.. _Pktgen-dpdk: https://pktgen-dpdk.readthedocs.io/en/latest/index.html
+-----------------------------------------------------------------------------+
|Network Performance |
diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc050.rst b/docs/testing/user/userguide/opnfv_yardstick_tc050.rst
index 82a491b72..7d01cb99a 100644
--- a/docs/testing/user/userguide/opnfv_yardstick_tc050.rst
+++ b/docs/testing/user/userguide/opnfv_yardstick_tc050.rst
@@ -35,18 +35,18 @@ Yardstick Test Case Description TC050
| | 3) interface: the network interface to be turned off. |
| | |
| | The interface to be closed by the attacker can be set by the |
-| | variable of "{{ interface_name }}" |
+| | variable of "{{ interface_name }}":: |
| | |
-| | attackers: |
-| | - |
-| | fault_type: "general-attacker" |
-| | host: {{ attack_host }} |
-| | key: "close-br-public" |
-| | attack_key: "close-interface" |
-| | action_parameter: |
-| | interface: {{ interface_name }} |
-| | rollback_parameter: |
-| | interface: {{ interface_name }} |
+| | attackers: |
+| | - |
+| | fault_type: "general-attacker" |
+| | host: {{ attack_host }} |
+| | key: "close-br-public" |
+| | attack_key: "close-interface" |
+| | action_parameter: |
+| | interface: {{ interface_name }} |
+| | rollback_parameter: |
+| | interface: {{ interface_name }} |
| | |
+--------------+--------------------------------------------------------------+
|monitors | In this test case, the monitor named "openstack-cmd" is |
@@ -56,19 +56,20 @@ Yardstick Test Case Description TC050
| | "openstack-cmd" for this monitor. |
| | 2) command_name: which is the command name used for request |
| | |
-| | There are four instance of the "openstack-cmd" monitor: |
-| | monitor1: |
-| | - monitor_type: "openstack-cmd" |
-| | - command_name: "nova image-list" |
-| | monitor2: |
-| | - monitor_type: "openstack-cmd" |
-| | - command_name: "neutron router-list" |
-| | monitor3: |
-| | - monitor_type: "openstack-cmd" |
-| | - command_name: "heat stack-list" |
-| | monitor4: |
-| | - monitor_type: "openstack-cmd" |
-| | - command_name: "cinder list" |
+| | There are four instances of the "openstack-cmd" monitor:: |
+| | |
+| | monitor1: |
+| | - monitor_type: "openstack-cmd" |
+| | - command_name: "nova image-list" |
+| | monitor2: |
+| | - monitor_type: "openstack-cmd" |
+| | - command_name: "neutron router-list" |
+| | monitor3: |
+| | - monitor_type: "openstack-cmd" |
+| | - command_name: "heat stack-list" |
+| | monitor4: |
+| | - monitor_type: "openstack-cmd" |
+| | - command_name: "cinder list" |
+--------------+--------------------------------------------------------------+
|metrics | In this test case, there is one metric: |
| | 1)service_outage_time: which indicates the maximum outage |
diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc052.rst b/docs/testing/user/userguide/opnfv_yardstick_tc052.rst
index 9514b6819..7f2be6e7d 100644
--- a/docs/testing/user/userguide/opnfv_yardstick_tc052.rst
+++ b/docs/testing/user/userguide/opnfv_yardstick_tc052.rst
@@ -65,15 +65,16 @@ Yardstick Test Case Description TC052
| | |
| | In this case, the "operation" adds a flavor and the "result |
| | checker" checks whether ths flavor is created. Their |
-| | parameters show as follows: |
-| | operation: |
-| | -operation_type: "nova-create-flavor" |
-| | -action_parameter: |
-| | flavorconfig: "test-001 test-001 100 1 1" |
-| | result checker: |
-| | -checker_type: "check-flavor" |
-| | -expectedValue: "test-001" |
-| | -condition: "in" |
+| | parameters are as follows:: |
+| | |
+| | operation: |
+| | -operation_type: "nova-create-flavor" |
+| | -action_parameter: |
+| | flavorconfig: "test-001 test-001 100 1 1" |
+| | result checker: |
+| | -checker_type: "check-flavor" |
+| | -expectedValue: "test-001" |
+| | -condition: "in" |
+--------------+--------------------------------------------------------------+
|metrics | In this test case, there is one metric: |
| | 1)service_outage_time: which indicates the maximum outage |
diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc055.rst b/docs/testing/user/userguide/opnfv_yardstick_tc055.rst
index c861ca90c..25703d3fb 100644
--- a/docs/testing/user/userguide/opnfv_yardstick_tc055.rst
+++ b/docs/testing/user/userguide/opnfv_yardstick_tc055.rst
@@ -7,7 +7,7 @@
Yardstick Test Case Description TC055
*************************************
-.. _/proc/cpuinfo: http://www.linfo.org/proc_cpuinfo.html
+.. _`/proc/cpuinfo`: http://www.linfo.org/proc_cpuinfo.html
+-----------------------------------------------------------------------------+
|Compute Capacity |
@@ -41,7 +41,7 @@ Yardstick Test Case Description TC055
| | capacity output. |
| | |
+--------------+--------------------------------------------------------------+
-|references | /proc/cpuinfo_ |
+|references | `/proc/cpuinfo`_ |
| | |
| | ETSI-NFV-TST001 |
| | |
diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc057.rst b/docs/testing/user/userguide/opnfv_yardstick_tc057.rst
index 1bb43c9e7..245a58e08 100644
--- a/docs/testing/user/userguide/opnfv_yardstick_tc057.rst
+++ b/docs/testing/user/userguide/opnfv_yardstick_tc057.rst
@@ -49,12 +49,15 @@ Yardstick Test Case Description TC057
| | -host: node1 |
+--------------+--------------------------------------------------------------+
|monitors | In this test case, a kind of monitor is needed: |
+| | |
| | 1. the "openstack-cmd" monitor constantly request a specific |
| | Openstack command, which needs two parameters: |
-| | 1) monitor_type: which is used for finding the monitor class |
-| | and related scripts. It should be always set to |
-| | "openstack-cmd" for this monitor. |
-| | 2) command_name: which is the command name used for request |
+| | |
+| | 1. monitor_type: which is used for finding the monitor |
+| | class and related scripts. It should be always set to |
+| | "openstack-cmd" for this monitor. |
+| | 2. command_name: which is the command name used for |
+| | request |
| | |
| | In this case, the command_name of monitor1 should be |
| | services that are managed by the cluster manager. |
diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc063.rst b/docs/testing/user/userguide/opnfv_yardstick_tc063.rst
index a77653aa5..7b8ee06c7 100644
--- a/docs/testing/user/userguide/opnfv_yardstick_tc063.rst
+++ b/docs/testing/user/userguide/opnfv_yardstick_tc063.rst
@@ -58,6 +58,7 @@ Yardstick Test Case Description TC063
| | * count: 15 - how many times to stat disk utilization |
| | type: int |
| | unit: na |
+| | |
| | There are default values for each above-mentioned option. |
| | Run in background with other test cases. |
| | |
diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc069.rst b/docs/testing/user/userguide/opnfv_yardstick_tc069.rst
index af0e64fbf..e1bfd5399 100644
--- a/docs/testing/user/userguide/opnfv_yardstick_tc069.rst
+++ b/docs/testing/user/userguide/opnfv_yardstick_tc069.rst
@@ -9,9 +9,6 @@ Yardstick Test Case Description TC069
.. _RAMspeed: http://alasir.com/software/ramspeed/
-.. table::
- :class: longtable
-
+-----------------------------------------------------------------------------+
|Memory Bandwidth |
| |
@@ -41,7 +38,8 @@ Yardstick Test Case Description TC069
| | * SLA (optional): 7000 (MBps) min_bandwidth: The minimum |
| | amount of memory bandwidth that is accepted. |
| | * type_id: 1 - runs a specified benchmark |
-| | (by an ID number): |
+| | (by an ID number):: |
+| | |
| | 1 -- INTmark [writing] 4 -- FLOATmark [writing] |
| | 2 -- INTmark [reading] 5 -- FLOATmark [reading] |
| | 3 -- INTmem 6 -- FLOATmem |
diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc073.rst b/docs/testing/user/userguide/opnfv_yardstick_tc073.rst
index ad4526405..873c5c99e 100644
--- a/docs/testing/user/userguide/opnfv_yardstick_tc073.rst
+++ b/docs/testing/user/userguide/opnfv_yardstick_tc073.rst
@@ -7,7 +7,7 @@
Yardstick Test Case Description TC073
*************************************
-.. _netperf: http://www.netperf.org/netperf/training/Netperf.html
+.. _netperf: https://hewlettpackard.github.io/netperf/
+-----------------------------------------------------------------------------+
|Throughput per NFVI node test |
diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc074.rst b/docs/testing/user/userguide/opnfv_yardstick_tc074.rst
index d6beeaff9..8d025eecf 100644
--- a/docs/testing/user/userguide/opnfv_yardstick_tc074.rst
+++ b/docs/testing/user/userguide/opnfv_yardstick_tc074.rst
@@ -91,12 +91,15 @@ Yardstick Test Case Description TC074
| | * workload=[workload module] |
| | If not specified, the default is to run all workloads. The |
| | workload types are: |
+| | |
| | - rs: 100% Read, sequential data |
| | - ws: 100% Write, sequential data |
| | - rr: 100% Read, random access |
| | - wr: 100% Write, random access |
| | - rw: 70% Read / 30% write, random access |
+| | |
| | measurements. |
+| | |
| | * workloads={json maps} |
| | This parameter supercedes the workload and calls the V2.0 |
| | API in StorPerf. It allows for greater control of the |
@@ -131,11 +134,13 @@ Yardstick Test Case Description TC074
| | |
| | Storperf is required to be installed in the environment. |
| | There are two possible methods for Storperf installation: |
-| | Run container on Jump Host |
-| | Run container in a VM |
+| | |
+| | - Run container on Jump Host |
+| | - Run container in a VM |
| | |
| | Running StorPerf on Jump Host |
| | Requirements: |
+| | |
| | - Docker must be installed |
| | - Jump Host must have access to the OpenStack Controller |
| | API |
@@ -146,6 +151,7 @@ Yardstick Test Case Description TC074
| | |
| | Running StorPerf in a VM |
| | Requirements: |
+| | |
| | - VM has docker installed |
| | - VM has OpenStack Controller credentials and can |
| | communicate with the Controller API |
diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc081.rst b/docs/testing/user/userguide/opnfv_yardstick_tc081.rst
index 793c3fdd5..df2192313 100644
--- a/docs/testing/user/userguide/opnfv_yardstick_tc081.rst
+++ b/docs/testing/user/userguide/opnfv_yardstick_tc081.rst
@@ -14,8 +14,8 @@ Yardstick Test Case Description TC081
|Network Latency |
| |
+--------------+--------------------------------------------------------------+
-|test case id | OPNFV_YARDSTICK_TC081_NETWORK_LATENCY_BETWEEN_CONTAINER_AND_ |
-| | VM |
+|test case id | OPNFV_YARDSTICK_TC081_NETWORK_LATENCY_BETWEEN_CONTAINER_AND |
+| | _VM |
| | |
+--------------+--------------------------------------------------------------+
|metric | RTT (Round Trip Time) |
diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc084.rst b/docs/testing/user/userguide/opnfv_yardstick_tc084.rst
index 2e7b28e25..b3d44c4bf 100644
--- a/docs/testing/user/userguide/opnfv_yardstick_tc084.rst
+++ b/docs/testing/user/userguide/opnfv_yardstick_tc084.rst
@@ -92,18 +92,19 @@ Yardstick Test Case Description TC084
+--------------+--------------------------------------------------------------+
|pre-test | To run and install SPEC CPU 2006, the following are |
|conditions | required: |
-| | * For SPECint 2006: Both C99 and C++98 compilers are |
-| | installed in VM images; |
-| | * For SPECfp 2006: All three of C99, C++98 and Fortran-95 |
-| | compilers installed in VM images; |
-| | * At least 4GB of disk space availabile on VM. |
-| | |
-| | gcc 4.8.* and g++ 4.8.* version have been tested in Ubuntu |
-| | 14.04, Ubuntu 16.04 and Redhat Enterprise Linux 7.4 image. |
-| | Higher gcc and g++ version may cause compiling error. |
-| | |
-| | For more SPEC CPU 2006 dependencies please visit |
-| | (https://www.spec.org/cpu2006/Docs/techsupport.html) |
+| | |
+| | * For SPECint 2006: Both C99 and C++98 compilers are |
+| | installed in VM images; |
+| | * For SPECfp 2006: All three of C99, C++98 and Fortran-95 |
+| | compilers are installed in VM images; |
+| | * At least 4GB of disk space available on the VM. |
+| | |
+| | gcc 4.8.* and g++ 4.8.* versions have been tested on Ubuntu |
+| | 14.04, Ubuntu 16.04 and Red Hat Enterprise Linux 7.4 images. |
+| | Higher gcc and g++ versions may cause compile errors. |
+| | |
+| | For more SPEC CPU 2006 dependencies please visit |
+| | (https://www.spec.org/cpu2006/Docs/techsupport.html) |
| | |
+--------------+--------------------------------------------------------------+
|test sequence | description and expected result |
diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc087.rst b/docs/testing/user/userguide/opnfv_yardstick_tc087.rst
index 99bfeebfc..c11252606 100644
--- a/docs/testing/user/userguide/opnfv_yardstick_tc087.rst
+++ b/docs/testing/user/userguide/opnfv_yardstick_tc087.rst
@@ -41,6 +41,7 @@ Yardstick Test Case Description TC087
+--------------+--------------------------------------------------------------+
|attackers | In this test case, an attacker called “kill-process” is |
| | needed. This attacker includes three parameters: |
+| | |
| | 1. fault_type: which is used for finding the attacker's |
| | scripts. It should be set to 'kill-process' in this test |
| | |
@@ -58,6 +59,7 @@ Yardstick Test Case Description TC087
|monitors | This test case utilizes two monitors of type "ip-status" |
| | and one monitor of type "process" to track the following |
| | conditions: |
+| | |
| | 1. "ping_same_network_l2": monitor ICMP traffic between |
| | VMs in the same Neutron network |
| | |
@@ -74,11 +76,13 @@ Yardstick Test Case Description TC087
| | |
+--------------+--------------------------------------------------------------+
|operations | In this test case, the following operations are needed: |
+| | |
| | 1. "nova-create-instance-in_network": create a VM instance |
| | in one of the existing Neutron network. |
| | |
+--------------+--------------------------------------------------------------+
|metrics | In this test case, there are two metrics: |
+| | |
| | 1. process_recover_time: which indicates the maximun |
| | time (seconds) from the process being killed to |
| | recovered |
@@ -95,7 +99,9 @@ Yardstick Test Case Description TC087
| | |
+--------------+--------------------------------------------------------------+
|configuration | This test case needs two configuration files: |
+| | |
| | 1. test case file: opnfv_yardstick_tc087.yaml |
+| | |
| | - Attackers: see above “attackers” discription |
| | - waiting_time: which is the time (seconds) from the |
| | process being killed to stoping monitors the monitors |
@@ -126,7 +132,7 @@ Yardstick Test Case Description TC087
| | Neutron network. |
| | |
| | 2. Check connectivity from one VM to an external host on |
-| | the Internet to verify SNAT functionality.
+| | the Internet to verify SNAT functionality. |
| | |
| | Result: The monitor info will be collected. |
| | |
@@ -171,11 +177,14 @@ Yardstick Test Case Description TC087
|test verdict | This test fails if the SLAs are not met or if there is a |
| | test case execution problem. The SLAs are define as follows |
| | for this test: |
+| | |
| | * SDN Controller recovery |
+| | |
| | * process_recover_time <= 30 sec |
| | |
| | * no impact on data plane connectivity during SDN |
| | controller failure and recovery. |
+| | |
| | * packet_drop == 0 |
| | |
+--------------+--------------------------------------------------------------+
diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc092.rst b/docs/testing/user/userguide/opnfv_yardstick_tc092.rst
index 895074a85..9c833fa23 100644
--- a/docs/testing/user/userguide/opnfv_yardstick_tc092.rst
+++ b/docs/testing/user/userguide/opnfv_yardstick_tc092.rst
@@ -43,6 +43,7 @@ Yardstick Test Case Description TC092
+--------------+--------------------------------------------------------------+
|attackers | In this test case, an attacker called “kill-process” is |
| | needed. This attacker includes three parameters: |
+| | |
| | 1. ``fault_type``: which is used for finding the attacker's |
| | scripts. It should be set to 'kill-process' in this test |
| | |
@@ -92,17 +93,20 @@ Yardstick Test Case Description TC092
| | |
+--------------+--------------------------------------------------------------+
|configuration | This test case needs two configuration files: |
-| | 1. test case file: opnfv_yardstick_tc092.yaml |
-| | - Attackers: see above “attackers” discription |
-| | - Monitors: see above “monitors” discription |
-| | - waiting_time: which is the time (seconds) from the |
-| | process being killed to stoping monitors the |
-| | monitors |
-| | - SLA: see above “metrics” discription |
+| | 1. test case file: opnfv_yardstick_tc092.yaml |
+| | |
+| | - Attackers: see above “attackers” description |
+| | - Monitors: see above “monitors” description |
+| | |
+| | - waiting_time: which is the time (seconds) from the |
+| | process being killed to stopping the monitors |
| | |
-| | 2. POD file: pod.yaml The POD configuration should record |
-| | on pod.yaml first. the “host” item in this test case |
-| | will use the node name in the pod.yaml. |
+| | - SLA: see above “metrics” description |
+| | |
+| | 2. POD file: pod.yaml. The POD configuration should be |
+| | recorded in pod.yaml first. The “host” item in this test |
+| | case will use the node name in the pod.yaml. |
| | |
+--------------+--------------------------------------------------------------+
|test sequence | Description and expected result |
@@ -168,11 +172,12 @@ Yardstick Test Case Description TC092
| | |
+--------------+--------------------------------------------------------------+
|step 8 | Start IP connectivity monitors for the new VM: |
-| | 1. Check the L2 connectivity from the existing VMs to the |
-| | new VM in the Neutron network. |
| | |
-| | 2. Check connectivity from one VM to an external host on |
-| | the Internet to verify SNAT functionality. |
+| | 1. Check the L2 connectivity from the existing VMs to the |
+| | new VM in the Neutron network. |
+| | |
+| | 2. Check connectivity from one VM to an external host on |
+| | the Internet to verify SNAT functionality. |
| | |
| | Result: The monitor info will be collected. |
| | |
diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc093.rst b/docs/testing/user/userguide/opnfv_yardstick_tc093.rst
index 31fa5d3d3..4e22e8bf3 100644
--- a/docs/testing/user/userguide/opnfv_yardstick_tc093.rst
+++ b/docs/testing/user/userguide/opnfv_yardstick_tc093.rst
@@ -43,14 +43,15 @@ Yardstick Test Case Description TC093
+--------------+--------------------------------------------------------------+
|attackers | In this test case, two attackers called “kill-process” are |
| | needed. These attackers include three parameters: |
-| | 1. fault_type: which is used for finding the attacker's |
-| | scripts. It should be set to 'kill-process' in this test |
| | |
-| | 2. process_name: should be set to the name of the Vswitch |
-| | process |
+| | 1. fault_type: which is used for finding the attacker's |
+| | scripts. It should be set to 'kill-process' in this test |
| | |
-| | 3. host: which is the name of the compute node where the |
-| | Vswitch process is running |
+| | 2. process_name: should be set to the name of the Vswitch |
+| | process |
+| | |
+| | 3. host: which is the name of the compute node where the |
+| | Vswitch process is running |
| | |
| | e.g. -fault_type: "kill-process" |
| | -process_name: "openvswitch" |
@@ -60,16 +61,17 @@ Yardstick Test Case Description TC093
|monitors | This test case utilizes two monitors of type "ip-status" |
| | and one monitor of type "process" to track the following |
| | conditions: |
-| | 1. "ping_same_network_l2": monitor ICMP traffic between |
-| | VMs in the same Neutron network |
| | |
-| | 2. "ping_external_snat": monitor ICMP traffic from VMs to |
-| | an external host on the Internet to verify SNAT |
-| | functionality. |
+| | 1. "ping_same_network_l2": monitor ICMP traffic between |
+| | VMs in the same Neutron network |
+| | |
+| | 2. "ping_external_snat": monitor ICMP traffic from VMs to |
+| | an external host on the Internet to verify SNAT |
+| | functionality. |
| | |
-| | 3. "Vswitch process monitor": a monitor checking the |
-| | state of the specified Vswitch process. It measures |
-| | the recovery time of the given process. |
+| | 3. "Vswitch process monitor": a monitor checking the |
+| | state of the specified Vswitch process. It measures |
+| | the recovery time of the given process. |
| | |
| | Monitors of type "ip-status" use the "ping" utility to |
| | verify reachability of a given target IP. |
@@ -99,6 +101,7 @@ Yardstick Test Case Description TC093
+--------------+--------------------------------------------------------------+
|configuration | This test case needs two configuration files: |
| | 1. test case file: opnfv_yardstick_tc093.yaml |
+| | |
| | - Attackers: see above “attackers” description |
| | - monitor_time: which is the time (seconds) from |
| | starting to stoping the monitors |
@@ -173,12 +176,14 @@ Yardstick Test Case Description TC093
|test verdict | This test fails if the SLAs are not met or if there is a |
| | test case execution problem. The SLAs are define as follows |
| | for this test: |
-| | * SDN Vswitch recovery |
-| | * process_recover_time <= 30 sec |
+| | * SDN Vswitch recovery |
+| | |
+| | * process_recover_time <= 30 sec |
+| | |
+| | * no impact on data plane connectivity during SDN |
+| | Vswitch failure and recovery. |
| | |
-| | * no impact on data plane connectivity during SDN |
-| | Vswitch failure and recovery. |
-| | * packet_drop == 0 |
+| | * packet_drop == 0 |
| | |
+--------------+--------------------------------------------------------------+
diff --git a/docs/testing/user/userguide/references.rst b/docs/testing/user/userguide/references.rst
index 3e18c96e9..e6bc719fd 100644
--- a/docs/testing/user/userguide/references.rst
+++ b/docs/testing/user/userguide/references.rst
@@ -11,12 +11,12 @@ References
OPNFV
=====
-* Parser wiki: https://wiki.opnfv.org/parser
-* Pharos wiki: https://wiki.opnfv.org/pharos
+* Parser wiki: https://wiki.opnfv.org/display/parser
+* Pharos wiki: https://wiki.opnfv.org/display/pharos
* Yardstick CI: https://build.opnfv.org/ci/view/yardstick/
* Yardstick and ETSI TST001 presentation: https://wiki.opnfv.org/display/yardstick/Yardstick?preview=%2F2925202%2F2925205%2Fopnfv_summit_-_bridging_opnfv_and_etsi.pdf
* Yardstick Project presentation: https://wiki.opnfv.org/display/yardstick/Yardstick?preview=%2F2925202%2F2925208%2Fopnfv_summit_-_yardstick_project.pdf
-* Yardstick wiki: https://wiki.opnfv.org/yardstick
+* Yardstick wiki: https://wiki.opnfv.org/display/yardstick
References used in Test Cases
=============================
@@ -25,22 +25,22 @@ References used in Test Cases
* cirros-image: https://download.cirros-cloud.net
* cyclictest: https://rt.wiki.kernel.org/index.php/Cyclictest
* DPDKpktgen: https://github.com/Pktgen/Pktgen-DPDK/
-* DPDK supported NICs: http://dpdk.org/doc/nics
+* DPDK supported NICs: http://core.dpdk.org/supported/
* fdisk: http://www.tldp.org/HOWTO/Partition/fdisk_partitioning.html
-* fio: http://www.bluestop.org/fio/HOWTO.txt
+* fio: https://bluestop.org/files/fio/HOWTO.txt
* free: http://manpages.ubuntu.com/manpages/trusty/en/man1/free.1.html
* iperf3: https://iperf.fr/
-* iostat: http://linux.die.net/man/1/iostat
+* iostat: https://linux.die.net/man/1/iostat
* Lmbench man-pages: http://manpages.ubuntu.com/manpages/trusty/lat_mem_rd.8.html
* Memory bandwidth man-pages: http://manpages.ubuntu.com/manpages/trusty/bw_mem.8.html
* mpstat man-pages: http://manpages.ubuntu.com/manpages/trusty/man1/mpstat.1.html
-* netperf: http://www.netperf.org/netperf/training/Netperf.html
+* netperf: https://hewlettpackard.github.io/netperf/
* pktgen: https://www.kernel.org/doc/Documentation/networking/pktgen.txt
* RAMspeed: http://alasir.com/software/ramspeed/
-* sar: http://linux.die.net/man/1/sar
+* sar: https://linux.die.net/man/1/sar
* SR-IOV: https://wiki.openstack.org/wiki/SR-IOV-Passthrough-For-Networking
* Storperf: https://wiki.opnfv.org/display/storperf/Storperf
-* unixbench: https://github.com/kdlucas/byte-unixbench/blob/master/UnixBench
+* unixbench: https://github.com/kdlucas/byte-unixbench/tree/master/UnixBench
Research
@@ -53,7 +53,7 @@ Research
Standards
=========
-* ETSI NFV: http://www.etsi.org/technologies-clusters/technologies/nfv
-* ETSI GS-NFV TST 001: http://www.etsi.org/deliver/etsi_gs/NFV-TST/001_099/001/01.01.01_60/gs_NFV-TST001v010101p.pdf
+* ETSI NFV: https://www.etsi.org/technologies-clusters/technologies/nfv
+* ETSI GS-NFV TST 001: https://www.etsi.org/deliver/etsi_gs/NFV-TST/001_099/001/01.01.01_60/gs_NFV-TST001v010101p.pdf
* RFC2544: https://www.ietf.org/rfc/rfc2544.txt