Diffstat (limited to 'docs')
-rw-r--r-- docs/conf.py | 1
-rw-r--r-- docs/conf.yaml | 3
-rw-r--r-- docs/index.rst | 17
-rw-r--r-- docs/release/release-notes/release-notes.rst | 435
-rw-r--r-- docs/release/results/euphrates_fraser_comparison.rst | 610
-rw-r--r-- docs/release/results/images/tc002_pod.png | bin 0 -> 39106 bytes
-rw-r--r-- docs/release/results/images/tc002_pod_fraser.png | bin 0 -> 22935 bytes
-rw-r--r-- docs/release/results/images/tc002_scenario.png | bin 0 -> 44920 bytes
-rw-r--r-- docs/release/results/images/tc002_scenario_fraser.png | bin 0 -> 25892 bytes
-rw-r--r-- docs/release/results/images/tc010_pod.png | bin 0 -> 44349 bytes
-rw-r--r-- docs/release/results/images/tc010_pod_fraser.png | bin 0 -> 37525 bytes
-rw-r--r-- docs/release/results/images/tc010_scenario.png | bin 0 -> 51251 bytes
-rw-r--r-- docs/release/results/images/tc010_scenario_fraser.png | bin 0 -> 45190 bytes
-rw-r--r-- docs/release/results/images/tc011_pod.png | bin 0 -> 43308 bytes
-rw-r--r-- docs/release/results/images/tc011_pod_fraser.png | bin 0 -> 25593 bytes
-rw-r--r-- docs/release/results/images/tc011_scenario.png | bin 0 -> 43647 bytes
-rw-r--r-- docs/release/results/images/tc011_scenario_fraser.png | bin 0 -> 28152 bytes
-rw-r--r-- docs/release/results/images/tc012_pod.png | bin 0 -> 47996 bytes
-rw-r--r-- docs/release/results/images/tc012_pod_fraser.png | bin 0 -> 36168 bytes
-rw-r--r-- docs/release/results/images/tc012_scenario.png | bin 0 -> 51405 bytes
-rw-r--r-- docs/release/results/images/tc012_scenario_fraser.png | bin 0 -> 41289 bytes
-rw-r--r-- docs/release/results/images/tc014_pod.png | bin 0 -> 36462 bytes
-rw-r--r-- docs/release/results/images/tc014_pod_fraser.png | bin 0 -> 29513 bytes
-rw-r--r-- docs/release/results/images/tc014_scenario.png | bin 0 -> 42056 bytes
-rw-r--r-- docs/release/results/images/tc014_scenario_fraser.png | bin 0 -> 36018 bytes
-rw-r--r-- docs/release/results/images/tc069_pod.png | bin 0 -> 41823 bytes
-rw-r--r-- docs/release/results/images/tc069_pod_fraser.png | bin 0 -> 36041 bytes
-rw-r--r-- docs/release/results/images/tc069_scenario.png | bin 0 -> 46728 bytes
-rw-r--r-- docs/release/results/images/tc069_scenario_fraser.png | bin 0 -> 43834 bytes
-rw-r--r-- docs/release/results/images/tc082_pod.png | bin 0 -> 28096 bytes
-rw-r--r-- docs/release/results/images/tc082_pod_fraser.png | bin 0 -> 25078 bytes
-rw-r--r-- docs/release/results/images/tc082_scenario.png | bin 0 -> 16082 bytes
-rw-r--r-- docs/release/results/images/tc083_pod.png | bin 0 -> 29533 bytes
-rw-r--r-- docs/release/results/images/tc083_pod_fraser.png | bin 0 -> 25476 bytes
-rw-r--r-- docs/release/results/images/tc083_scenario.png | bin 0 -> 16481 bytes
-rw-r--r-- docs/release/results/index.rst | 6
-rw-r--r-- docs/release/results/os-nosdn-kvm-ha.rst | 270
-rw-r--r-- docs/release/results/os-nosdn-nofeature-ha.rst | 492
-rw-r--r-- docs/release/results/os-nosdn-nofeature-noha.rst | 259
-rw-r--r-- docs/release/results/os-odl_l2-bgpvpn-ha.rst | 53
-rw-r--r-- docs/release/results/os-odl_l2-nofeature-ha.rst | 743
-rw-r--r-- docs/release/results/os-odl_l2-sfc-ha.rst | 231
-rw-r--r-- docs/release/results/os-onos-nofeature-ha.rst | 257
-rw-r--r-- docs/release/results/os-onos-sfc-ha.rst | 517
-rw-r--r-- docs/release/results/overview.rst | 85
-rw-r--r-- docs/release/results/results.rst | 51
-rw-r--r-- docs/release/results/tc002-network-latency.rst | 525
-rw-r--r-- docs/release/results/tc010-memory-read-latency.rst | 510
-rw-r--r-- docs/release/results/tc011-packet-delay-variation.rst | 432
-rw-r--r-- docs/release/results/tc012-memory-read-write-bandwidth.rst | 513
-rw-r--r-- docs/release/results/tc014-cpu-processing-speed.rst | 512
-rw-r--r-- docs/release/results/tc069-memory-write-bandwidth.rst | 516
-rw-r--r-- docs/release/results/tc082-context-switches-under-load.rst | 187
-rw-r--r-- docs/release/results/tc083-network-throughput-between-vm.rst | 187
-rw-r--r-- docs/release/results/yardstick-opnfv-vtc.rst | 248
-rw-r--r-- docs/requirements.txt | 5
-rw-r--r-- docs/templates/test_results_template.rst | 23
-rwxr-xr-x docs/testing/developer/devguide/devguide.rst | 399
-rwxr-xr-x docs/testing/developer/devguide/devguide_nsb_prox.rst | 1480
-rw-r--r-- docs/testing/developer/devguide/images/PROX_BNG_QOS.png | bin 0 -> 134443 bytes
-rw-r--r-- docs/testing/developer/devguide/images/PROX_Baremetal_config.png | bin 0 -> 89189 bytes
-rw-r--r-- docs/testing/developer/devguide/images/PROX_Gen_2port_cfg.png | bin 0 -> 83907 bytes
-rw-r--r-- docs/testing/developer/devguide/images/PROX_Gen_GUI.png | bin 0 -> 236854 bytes
-rw-r--r-- docs/testing/developer/devguide/images/PROX_Grafana_1.png | bin 0 -> 76923 bytes
-rw-r--r-- docs/testing/developer/devguide/images/PROX_Grafana_2.png | bin 0 -> 76976 bytes
-rw-r--r-- docs/testing/developer/devguide/images/PROX_Grafana_3.png | bin 0 -> 76762 bytes
-rw-r--r-- docs/testing/developer/devguide/images/PROX_Grafana_4.png | bin 0 -> 20013 bytes
-rw-r--r-- docs/testing/developer/devguide/images/PROX_Grafana_5.png | bin 0 -> 66525 bytes
-rw-r--r-- docs/testing/developer/devguide/images/PROX_Grafana_6.png | bin 0 -> 88051 bytes
-rw-r--r-- docs/testing/developer/devguide/images/PROX_Handle_2port_cfg.png | bin 0 -> 105591 bytes
-rw-r--r-- docs/testing/developer/devguide/images/PROX_Hardware_Arch.png | bin 0 -> 322529 bytes
-rw-r--r-- docs/testing/developer/devguide/images/PROX_Openstack_stack_list.png | bin 0 -> 11178 bytes
-rw-r--r-- docs/testing/developer/devguide/images/PROX_Openstack_stack_show_a.png | bin 0 -> 189101 bytes
-rw-r--r-- docs/testing/developer/devguide/images/PROX_Openstack_stack_show_b.png | bin 0 -> 143152 bytes
-rw-r--r-- docs/testing/developer/devguide/images/PROX_SUT_GUI.png | bin 0 -> 150147 bytes
-rw-r--r-- docs/testing/developer/devguide/images/PROX_Software_Arch.png | bin 0 -> 38458 bytes
-rw-r--r-- docs/testing/developer/devguide/images/PROX_Test_BM_Script.png | bin 0 -> 94927 bytes
-rw-r--r-- docs/testing/developer/devguide/images/PROX_Test_HEAT_Script1.png | bin 0 -> 87627 bytes
-rw-r--r-- docs/testing/developer/devguide/images/PROX_Test_HEAT_Script2.png | bin 0 -> 85248 bytes
-rw-r--r-- docs/testing/developer/devguide/images/PROX_Test_ovs_dpdk_Script_1.png | bin 0 -> 60578 bytes
-rw-r--r-- docs/testing/developer/devguide/images/PROX_Test_ovs_dpdk_Script_2.png | bin 0 -> 92664 bytes
-rw-r--r-- docs/testing/developer/devguide/images/PROX_Traffic_profile.png | bin 0 -> 95758 bytes
-rw-r--r-- docs/testing/developer/devguide/images/PROX_Yardstick_config.png | bin 0 -> 86567 bytes
-rw-r--r-- docs/testing/developer/devguide/images/vPE_Diagram.png | bin 0 -> 82640 bytes
-rw-r--r-- docs/testing/developer/devguide/index.rst | 2
-rwxr-xr-x docs/testing/user/userguide/01-introduction.rst | 56
-rw-r--r-- docs/testing/user/userguide/02-methodology.rst | 27
-rwxr-xr-x docs/testing/user/userguide/03-architecture.rst | 87
-rw-r--r-- docs/testing/user/userguide/04-installation.rst | 500
-rw-r--r-- docs/testing/user/userguide/05-operation.rst | 296
-rw-r--r-- docs/testing/user/userguide/06-yardstick-plugin.rst (renamed from docs/testing/user/userguide/05-yardstick_plugin.rst) | 91
-rw-r--r-- docs/testing/user/userguide/07-result-store-InfluxDB.rst (renamed from docs/testing/user/userguide/06-result-store-InfluxDB.rst) | 45
-rw-r--r-- docs/testing/user/userguide/08-grafana.rst (renamed from docs/testing/user/userguide/07-grafana.rst) | 32
-rw-r--r-- docs/testing/user/userguide/09-api.rst (renamed from docs/testing/user/userguide/08-api.rst) | 304
-rw-r--r-- docs/testing/user/userguide/09-yardstick_user_interface.rst | 29
-rw-r--r-- docs/testing/user/userguide/10-vtc-overview.rst | 128
-rw-r--r-- docs/testing/user/userguide/10-yardstick-user-interface.rst | 64
-rw-r--r-- docs/testing/user/userguide/11-nsb-overview.rst | 203
-rw-r--r-- docs/testing/user/userguide/12-nsb-overview.rst | 258
-rw-r--r-- docs/testing/user/userguide/12-nsb_installation.rst | 889
-rw-r--r-- docs/testing/user/userguide/13-nsb-installation.rst | 1542
-rw-r--r-- docs/testing/user/userguide/13-nsb_operation.rst | 270
-rw-r--r-- docs/testing/user/userguide/14-nsb-operation.rst | 706
-rw-r--r-- docs/testing/user/userguide/15-list-of-tcs.rst | 271
-rw-r--r-- docs/testing/user/userguide/code/multi-devstack-compute-local.conf | 53
-rw-r--r-- docs/testing/user/userguide/code/multi-devstack-controller-local.conf | 64
-rw-r--r-- docs/testing/user/userguide/code/pod_ixia.yaml | 31
-rw-r--r-- docs/testing/user/userguide/code/single-devstack-local.conf | 62
-rw-r--r-- docs/testing/user/userguide/code/single-yardstick-pod.conf | 22
-rw-r--r-- docs/testing/user/userguide/comp-intro.rst | 4
-rw-r--r-- docs/testing/user/userguide/glossary.rst | 136
-rw-r--r-- docs/testing/user/userguide/index.rst | 19
-rwxr-xr-x [-rw-r--r--] docs/testing/user/userguide/nsb/nsb-list-of-tcs.rst | 13
-rw-r--r-- docs/testing/user/userguide/nsb/tc_bng_pppoe_rfc2544_ixia.rst | 177
-rw-r--r-- docs/testing/user/userguide/nsb/tc_bng_pppoe_rfc2544_ixia_8ports_1port_congested.rst | 179
-rw-r--r-- docs/testing/user/userguide/nsb/tc_epc_dedicated_bearer_landslide.rst | 156
-rw-r--r-- docs/testing/user/userguide/nsb/tc_epc_default_bearer_landslide.rst | 149
-rw-r--r-- docs/testing/user/userguide/nsb/tc_epc_network_service_request_landslide.rst | 159
-rw-r--r-- docs/testing/user/userguide/nsb/tc_epc_saegw_tput_relocation_landslide.rst | 167
-rw-r--r-- docs/testing/user/userguide/nsb/tc_epc_ue_service_request_landslide.rst | 174
-rwxr-xr-x docs/testing/user/userguide/nsb/tc_pktgen_k8s_vcmts.rst | 102
-rw-r--r-- docs/testing/user/userguide/nsb/tc_prox_context_vpe_port.rst | 6
-rw-r--r-- docs/testing/user/userguide/nsb/tc_vfw_rfc2544.rst | 189
-rw-r--r-- docs/testing/user/userguide/nsb/tc_vfw_rfc2544_correlated.rst | 130
-rw-r--r-- docs/testing/user/userguide/nsb/tc_vfw_rfc3511.rst | 133
-rw-r--r-- docs/testing/user/userguide/nsb/tc_vims_context_sipp.rst | 96
-rw-r--r-- docs/testing/user/userguide/nsb/tc_vpp_baremetal_crypto_ipsec.rst | 113
-rw-r--r-- docs/testing/user/userguide/opnfv_yardstick_tc010.rst | 3
-rw-r--r-- docs/testing/user/userguide/opnfv_yardstick_tc011.rst | 6
-rw-r--r-- docs/testing/user/userguide/opnfv_yardstick_tc012.rst | 1
-rwxr-xr-x docs/testing/user/userguide/opnfv_yardstick_tc015.rst | 141
-rw-r--r-- docs/testing/user/userguide/opnfv_yardstick_tc019.rst | 29
-rw-r--r-- docs/testing/user/userguide/opnfv_yardstick_tc025.rst | 11
-rw-r--r-- docs/testing/user/userguide/opnfv_yardstick_tc027.rst | 2
-rw-r--r-- docs/testing/user/userguide/opnfv_yardstick_tc040.rst | 2
-rw-r--r-- docs/testing/user/userguide/opnfv_yardstick_tc042.rst | 4
-rw-r--r-- docs/testing/user/userguide/opnfv_yardstick_tc045.rst | 11
-rw-r--r-- docs/testing/user/userguide/opnfv_yardstick_tc046.rst | 11
-rw-r--r-- docs/testing/user/userguide/opnfv_yardstick_tc047.rst | 11
-rw-r--r-- docs/testing/user/userguide/opnfv_yardstick_tc048.rst | 11
-rw-r--r-- docs/testing/user/userguide/opnfv_yardstick_tc049.rst | 11
-rw-r--r-- docs/testing/user/userguide/opnfv_yardstick_tc050.rst | 63
-rw-r--r-- docs/testing/user/userguide/opnfv_yardstick_tc052.rst | 19
-rw-r--r-- docs/testing/user/userguide/opnfv_yardstick_tc053.rst | 5
-rw-r--r-- docs/testing/user/userguide/opnfv_yardstick_tc055.rst | 4
-rw-r--r-- docs/testing/user/userguide/opnfv_yardstick_tc056.rst | 9
-rw-r--r-- docs/testing/user/userguide/opnfv_yardstick_tc057.rst | 39
-rw-r--r-- docs/testing/user/userguide/opnfv_yardstick_tc058.rst | 16
-rw-r--r-- docs/testing/user/userguide/opnfv_yardstick_tc063.rst | 1
-rw-r--r-- docs/testing/user/userguide/opnfv_yardstick_tc069.rst | 6
-rw-r--r-- docs/testing/user/userguide/opnfv_yardstick_tc073.rst | 2
-rw-r--r-- docs/testing/user/userguide/opnfv_yardstick_tc074.rst | 93
-rw-r--r-- docs/testing/user/userguide/opnfv_yardstick_tc081.rst | 4
-rw-r--r-- docs/testing/user/userguide/opnfv_yardstick_tc084.rst | 25
-rw-r--r-- docs/testing/user/userguide/opnfv_yardstick_tc087.rst | 191
-rw-r--r-- docs/testing/user/userguide/opnfv_yardstick_tc088.rst | 129
-rw-r--r-- docs/testing/user/userguide/opnfv_yardstick_tc089.rst | 129
-rw-r--r-- docs/testing/user/userguide/opnfv_yardstick_tc090.rst | 151
-rw-r--r-- docs/testing/user/userguide/opnfv_yardstick_tc091.rst | 138
-rw-r--r-- docs/testing/user/userguide/opnfv_yardstick_tc092.rst | 201
-rw-r--r-- docs/testing/user/userguide/opnfv_yardstick_tc093.rst | 189
-rw-r--r-- docs/testing/user/userguide/references.rst | 23
162 files changed, 13495 insertions, 5897 deletions
diff --git a/docs/conf.py b/docs/conf.py
new file mode 100644
index 000000000..86fddf13e
--- /dev/null
+++ b/docs/conf.py
@@ -0,0 +1 @@
+from docs_conf.conf import *  # pylint: disable=wildcard-import
diff --git a/docs/conf.yaml b/docs/conf.yaml
new file mode 100644
index 000000000..01e08ec7f
--- /dev/null
+++ b/docs/conf.yaml
@@ -0,0 +1,3 @@
+---
+project_cfg: opnfv
+project: Yardstick
diff --git a/docs/index.rst b/docs/index.rst
new file mode 100644
index 000000000..e1339b0dd
--- /dev/null
+++ b/docs/index.rst
@@ -0,0 +1,17 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International License.
+.. SPDX-License-Identifier: CC-BY-4.0
+.. (c) Open Platform for NFV Project, Inc. and its contributors
+
+.. _yardstick:
+
+=========
+Yardstick
+=========
+
+.. toctree::
+ :numbered:
+ :maxdepth: 2
+
+ release/release-notes/index
+ testing/user/userguide/index
+ testing/developer/devguide/index
diff --git a/docs/release/release-notes/release-notes.rst b/docs/release/release-notes/release-notes.rst
index 4ebf0eceb..b3f3e1aed 100644
--- a/docs/release/release-notes/release-notes.rst
+++ b/docs/release/release-notes/release-notes.rst
@@ -1,72 +1,61 @@
-License
-=======
-
-OPNFV Euphrates release note for Yardstick Docs
-are licensed under a Creative Commons Attribution 4.0 International License.
-You should have received a copy of the license along with this.
-If not, see <http://creativecommons.org/licenses/by/4.0/>.
+.. This work is licensed under a Creative Commons Attribution 4.0 International License.
+.. http://creativecommons.org/licenses/by/4.0
The *Yardstick framework* and the *Yardstick test cases* are open-source software,
licensed under the terms of the Apache License, Version 2.0.
-OPNFV Euphrates Release Note for Yardstick
-==========================================
+=======================
+Yardstick Release Notes
+=======================
.. toctree::
:maxdepth: 2
-.. _Yardstick: https://wiki.opnfv.org/yardstick
+.. _Yardstick: https://wiki.opnfv.org/display/yardstick
-.. _Dashboard: http://testresults.opnfv.org/grafana/dashboard/db/yardstick-main
+.. _Dashboard: http://testresults.opnfv.org/grafana/
-.. _NFV-TST001: http://www.etsi.org/deliver/etsi_gs/NFV-TST/001_099/001/01.01.01_60/gs_NFV-TST001v010101p.pdf
+.. _NFV-TST001: https://www.etsi.org/deliver/etsi_gs/NFV-TST/001_099/001/01.01.01_60/gs_NFV-TST001v010101p.pdf
Abstract
---------
+========
-This document describes the release note of Yardstick project.
+This document compiles the release notes for the Iruya release of OPNFV Yardstick.
Version History
----------------
+===============
+-------------------+-----------+---------------------------------+
| *Date* | *Version* | *Comment* |
| | | |
+-------------------+-----------+---------------------------------+
-| December 15, 2017 | 5.1.0 | Yardstick for Euphrates release |
-| | | |
-+-------------------+-----------+---------------------------------+
-| October 20, 2017 | 5.0.0 | Yardstick for Euphrates release |
+| Jan 10, 2020 | 9.0.0 | Yardstick for Iruya release |
| | | |
+-------------------+-----------+---------------------------------+
Important Notes
----------------
+===============
The software delivered in the OPNFV Yardstick_ Project, comprising the
-*Yardstick framework*, the *Yardstick test cases* and the experimental
-framework *Apex Lake* is a realization of the methodology in ETSI-ISG
-NFV-TST001_.
+*Yardstick framework* and the *Yardstick test cases*, is a realization of
+the methodology in ETSI-ISG NFV-TST001_.
The *Yardstick* framework is *installer*, *infrastructure* and *application*
independent.
-OPNFV Euphrates Release
------------------------
+OPNFV Iruya Release
+====================
-This Euphrates release provides *Yardstick* as a framework for NFVI testing
+This Iruya release provides *Yardstick* as a framework for NFVI testing
and OPNFV feature testing, automated in the OPNFV CI pipeline, including:
* Documentation generated with Sphinx
* User Guide
-
* Developer Guide
-
* Release notes (this document)
-
* Results
* Automated Yardstick test suite (daily, weekly)
@@ -84,39 +73,29 @@ and OPNFV feature testing, automated in the OPNFV CI pipeline, including:
* Yardstick plug-in configuration yaml files, plug-in install/remove scripts
-For Euphrates release, the *Yardstick framework* is used for the following
+For the Iruya release, the *Yardstick framework* is used for the following
testing:
* OPNFV platform testing - generic test cases to measure the categories:
* Compute
-
* Network
-
* Storage
-* OPNFV platform network service benchmarking(NSB)
+* OPNFV platform network service benchmarking (NSB)
* NSB
* Test cases for the following OPNFV Projects:
* Container4NFV
-
* High Availability
-
* IPv6
-
* KVM
-
* Parser
-
* StorPerf
-
* VSperf
- * virtual Traffic Classifier
-
The *Yardstick framework* is developed in the OPNFV community, by the
Yardstick_ team.
@@ -126,49 +105,47 @@ Yardstick_ team.
Release Data
-------------
+============
+--------------------------------+-----------------------+
| **Project** | Yardstick |
| | |
+--------------------------------+-----------------------+
-| **Repo/tag** | yardstick/opnfv-5.1.0 |
+| **Repo/tag** | yardstick/opnfv-9.0.0 |
| | |
+--------------------------------+-----------------------+
-| **Yardstick Docker image tag** | opnfv-5.1.0 |
+| **Yardstick Docker image tag** | opnfv-9.0.0 |
| | |
+--------------------------------+-----------------------+
-| **Release designation** | Euphrates |
+| **Release designation** | Iruya 9.0 |
| | |
+--------------------------------+-----------------------+
-| **Release date** | December 15, 2017 |
+| **Release date** | Jan 10, 2020 |
| | |
+--------------------------------+-----------------------+
-| **Purpose of the delivery** | OPNFV Euphrates 5.1.0 |
+| **Purpose of the delivery** | OPNFV Iruya 9.0.0 |
| | |
+--------------------------------+-----------------------+
Deliverables
-------------
+============
Documents
-^^^^^^^^^
+---------
- - User Guide: http://docs.opnfv.org/en/stable-euphrates/submodules/yardstick/docs/testing/user/userguide/index.html
+ - User Guide: :ref:`yardstick:userguide`
- - Developer Guide: http://docs.opnfv.org/en/stable-euphrates/submodules/yardstick/docs/testing/developer/devguide/index.html
+ - Developer Guide: :ref:`yardstick:devguide`
Software Deliverables
-^^^^^^^^^^^^^^^^^^^^^
-
+---------------------
- - The Yardstick Docker image: https://hub.docker.com/r/opnfv/yardstick (tag: opnfv-5.1.0)
+ - The Yardstick Docker image: https://hub.docker.com/r/opnfv/yardstick (tag: opnfv-9.0.0)
-
-New Contexts
-############
+List of Contexts
+^^^^^^^^^^^^^^^^
+--------------+-------------------------------------------+
| **Context** | **Description** |
@@ -188,31 +165,42 @@ New Contexts
+--------------+-------------------------------------------+
-New Runners
-###########
-
-+--------------+-------------------------------------------------------+
-| **Runner** | **Description** |
-| | |
-+--------------+-------------------------------------------------------+
-| *Arithmetic* | Steps every run arithmetically according to specified |
-| | input value |
-| | |
-+--------------+-------------------------------------------------------+
-| *Duration* | Runs for a specified period of time |
-| | |
-+--------------+-------------------------------------------------------+
-| *Iteration* | Runs for a specified number of iterations |
-| | |
-+--------------+-------------------------------------------------------+
-| *Sequence* | Selects input value to a scenario from an input file |
-| | and runs all entries sequentially |
-| | |
-+--------------+-------------------------------------------------------+
-
-
-New Scenarios
-#############
+List of Runners
+^^^^^^^^^^^^^^^
+
++----------------+-------------------------------------------------------+
+| **Runner** | **Description** |
+| | |
++----------------+-------------------------------------------------------+
+| *Arithmetic* | Steps every run arithmetically according to specified |
+| | input value |
+| | |
++----------------+-------------------------------------------------------+
+| *Duration* | Runs for a specified period of time |
+| | |
++----------------+-------------------------------------------------------+
+| *Iteration* | Runs for a specified number of iterations |
+| | |
++----------------+-------------------------------------------------------+
+| *IterationIPC* | Runs a configurable number of times before it |
+| | returns. Each iteration has a configurable timeout. |
+| | |
++----------------+-------------------------------------------------------+
+| *Sequence* | Selects input value to a scenario from an input file |
+| | and runs all entries sequentially |
+| | |
++----------------+-------------------------------------------------------+
+| *Dynamictp* | A runner that searches for the max throughput with |
+| | binary search |
+| | |
++----------------+-------------------------------------------------------+
+| *Search* | A runner that runs for a specified time before it returns |
+| | |
++----------------+-------------------------------------------------------+
+
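+Below is a minimal sketch of how one of the runners listed above is selected
+for a scenario in a Yardstick task file. The ``type`` value corresponds to the
+runner name; the numeric values are illustrative only:
+
+.. code-block:: yaml
+
+  runner:
+    type: Iteration     # or Duration, Sequence, IterationIPC, ...
+    iterations: 10      # number of times the scenario is run
+    interval: 1         # seconds to wait between iterations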
+
+List of Scenarios
+^^^^^^^^^^^^^^^^^
+----------------+-----------------------------------------------------+
| **Category** | **Delivered** |
@@ -234,339 +222,126 @@ New Scenarios
| | |
+----------------+-----------------------------------------------------+
| *Compute* | * cpuload |
-| | |
| | * cyclictest |
-| | |
| | * lmbench |
-| | |
| | * lmbench_cache |
-| | |
| | * perf |
-| | |
| | * unixbench |
-| | |
| | * ramspeed |
-| | |
| | * cachestat |
-| | |
| | * memoryload |
-| | |
| | * computecapacity |
-| | |
| | * SpecCPU2006 |
| | |
+----------------+-----------------------------------------------------+
| *Networking* | * iperf3 |
-| | |
| | * netperf |
-| | |
| | * netperf_node |
-| | |
| | * ping |
-| | |
| | * ping6 |
-| | |
| | * pktgen |
-| | |
| | * sfc |
-| | |
| | * sfc with tacker |
-| | |
-| | * vtc instantion validation |
-| | |
-| | * vtc instantion validation with noisy neighbors |
-| | |
-| | * vtc throughput |
-| | |
-| | * vtc throughput in the presence of noisy neighbors |
-| | |
| | * networkcapacity |
-| | |
| | * netutilization |
-| | |
| | * nstat |
-| | |
| | * pktgenDPDK |
| | |
+----------------+-----------------------------------------------------+
| *Parser* | Tosca2Heat |
| | |
+----------------+-----------------------------------------------------+
-| *Storage* | fio |
-| | |
-| | bonnie++ |
-| | |
-| | storagecapacity |
+| *Storage* | * fio |
+| | * bonnie++ |
+| | * storagecapacity |
| | |
+----------------+-----------------------------------------------------+
| *StorPerf* | storperf |
| | |
+----------------+-----------------------------------------------------+
-| *NSB* | vPE thoughput test case |
+| *NSB* | vFW throughput test case |
| | |
+----------------+-----------------------------------------------------+
-
New Test cases
-^^^^^^^^^^^^^^
-
-* Generic NFVI test cases
-
- * OPNFV_YARDSTICK_TCO78 - SPEC CPU 2006
-
- * OPNFV_YARDSTICK_TCO79 - Bonnie++
-
-* Kubernetes Test cases
+--------------
- * OPNFV_YARDSTICK_TCO80 - NETWORK LATENCY BETWEEN CONTAINER
+opnfv_yardstick_tc015: Processing speed with impact on energy consumption
+and CPU load.
- * OPNFV_YARDSTICK_TCO81 - NETWORK LATENCY BETWEEN CONTAINER AND VM
+The purpose of TC015 is to evaluate the IaaS compute performance with
+regard to CPU processing speed and its impact on energy consumption.
+It measures the scores for single CPU running and parallel running, while
+energy consumption and CPU load are monitored during the test. The purpose
+is also to be able to spot trends. Test results, graphs and similar shall
+be stored for comparison reasons and product evolution understanding
+between different OPNFV versions and/or configurations and different
+server types.
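+
+As a rough illustration only (assuming TC015 reuses Yardstick's ``UnixBench``
+scenario type with the options from the sample unixbench task file; the energy
+consumption and CPU load monitoring set-up is not shown), the scenario part of
+such a test case might look like:
+
+.. code-block:: yaml
+
+  scenarios:
+  -
+    type: UnixBench
+    options:
+      run_mode: 'verbose'                      # assumed sample values
+      test_type: 'dhry2reg whetstone-double'
+    host: unixbench.tc015-demo                 # hypothetical VM name
+    runner:
+      type: Iteration
+      iterations: 1
+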
Version Change
---------------
+==============
Module Version Changes
-^^^^^^^^^^^^^^^^^^^^^^
+----------------------
-This is the fifth tracked release of Yardstick. It is based on following
+This is the ninth tracked release of Yardstick. It is based on the following
upstream versions:
-- OpenStack Ocata
-
-- OpenDayLight Nitrogen
-
-- ONOS Junco
+- OpenStack Stein
Document Version Changes
-^^^^^^^^^^^^^^^^^^^^^^^^
+------------------------
-This is the fifth tracked version of the Yardstick framework in OPNFV.
+This is the ninth tracked version of the Yardstick framework in OPNFV.
It includes the following documentation updates:
-- Yardstick User Guide: add "network service benchmarking(NSB)" chapter;
- add "Yardstick - NSB Testing -Installation" chapter; add "Yardstick API" chapter;
- add "Yardstick user interface" chapter; Update Yardstick installation chapter;
-
+- Yardstick User Guide
- Yardstick Developer Guide
-
- Yardstick Release Notes for Yardstick: this document
Feature additions
-^^^^^^^^^^^^^^^^^
-
-- Yardstick RESTful API support
-
-- Network service benchmarking
-
-- Stress testing with Bottlenecks team
-
-- Yardstick framework improvement:
-
- - yardstick report CLI
-
- - Node context support OpenStack configuration via Ansible
-
- - Https support
-
- - Kubernetes context type
-
-- Yardstick container local GUI
-
-- Python 3 support
+-----------------
Scenario Matrix
----------------
-
-For Euphrates 5.0.0, Yardstick was tested on the following scenarios:
-
-+--------------------------+------+---------+------+------+
-| Scenario | Apex | Compass | Fuel | Joid |
-+==========================+======+=========+======+======+
-| os-nosdn-nofeature-noha | | | X | X |
-+--------------------------+------+---------+------+------+
-| os-nosdn-nofeature-ha | X | X | X | X |
-+--------------------------+------+---------+------+------+
-| os-odl_l2-nofeature-ha | | X | X | X |
-+--------------------------+------+---------+------+------+
-| os-odl_l2-nofeature-noha | | | X | |
-+--------------------------+------+---------+------+------+
-| os-odl_l3-nofeature-ha | X | X | X | |
-+--------------------------+------+---------+------+------+
-| os-odl_l3-nofeature-noha | | | X | |
-+--------------------------+------+---------+------+------+
-| os-onos-sfc-ha | | | | |
-+--------------------------+------+---------+------+------+
-| os-onos-nofeature-ha | | X | | X |
-+--------------------------+------+---------+------+------+
-| os-onos-nofeature-noha | | | | |
-+--------------------------+------+---------+------+------+
-| os-odl_l2-sfc-ha | | | X | |
-+--------------------------+------+---------+------+------+
-| os-odl_l2-sfc-noha | | | X | |
-+--------------------------+------+---------+------+------+
-| os-odl_l2-bgpvpn-ha | X | | X | |
-+--------------------------+------+---------+------+------+
-| os-odl_l2-bgpvpn-noha | | | X | |
-+--------------------------+------+---------+------+------+
-| os-nosdn-kvm-ha | X | | X | |
-+--------------------------+------+---------+------+------+
-| os-nosdn-kvm-noha | | | X | |
-+--------------------------+------+---------+------+------+
-| os-nosdn-ovs-ha | | | X | |
-+--------------------------+------+---------+------+------+
-| os-nosdn-ovs-noha | | | X | |
-+--------------------------+------+---------+------+------+
-| os-ocl-nofeature-ha | | X | | |
-+--------------------------+------+---------+------+------+
-| os-nosdn-lxd-ha | | | | X |
-+--------------------------+------+---------+------+------+
-| os-nosdn-lxd-noha | | | | X |
-+--------------------------+------+---------+------+------+
-| os-nosdn-fdio-ha | X | | | |
-+--------------------------+------+---------+------+------+
-| os-odl_l2-fdio-noha | X | | | |
-+--------------------------+------+---------+------+------+
-| os-odl-gluon-noha | X | | | |
-+--------------------------+------+---------+------+------+
-| os-nosdn-openo-ha | | X | | |
-+--------------------------+------+---------+------+------+
-| os-nosdn-kvm_ovs_dpdk | | | X | |
-| -noha | | | | |
-+--------------------------+------+---------+------+------+
-| os-nosdn-kvm_ovs_dpdk-ha | | | X | |
-+--------------------------+------+---------+------+------+
-| os-nosdn-kvm_ovs_dpdk | | | X | |
-| _bar-ha | | | | |
-+--------------------------+------+---------+------+------+
-| os-nosdn-kvm_ovs_dpdk | | | X | |
-| _bar-noha | | | | |
-+--------------------------+------+---------+------+------+
-| opnfv_os-ovn-nofeature- | X | | | |
-| noha_daily | | | | |
-+--------------------------+------+---------+------+------+
+===============
+
Test results
-------------
+============
Test results are available in:
- jenkins logs on CI: https://build.opnfv.org/ci/view/yardstick/
-The reporting pages can be found at:
-
-+---------------+-------------------------------------------------------------------------------------+
-| apex | http://testresults.opnfv.org/reporting/euphrates/yardstick/status-apex.html |
-+---------------+-------------------------------------------------------------------------------------+
-| compass | http://testresults.opnfv.org/reporting/euphrates/yardstick/status-compass.html |
-+---------------+-------------------------------------------------------------------------------------+
-| fuel\@x86 | http://testresults.opnfv.org/reporting/euphrates/yardstick/status-fuel@x86.html |
-+---------------+-------------------------------------------------------------------------------------+
-| fuel\@aarch64 | http://testresults.opnfv.org/reporting/euphrates/yardstick/status-fuel@aarch64.html |
-+---------------+-------------------------------------------------------------------------------------+
-| joid | http://testresults.opnfv.org/reporting/euphrates/yardstick/status-joid.html |
-+---------------+-------------------------------------------------------------------------------------+
Known Issues/Faults
-^^^^^^^^^^^^^^^^^^^
+-------------------
Corrected Faults
-^^^^^^^^^^^^^^^^
+----------------
+
-Euphrates 5.1.0:
-
-+---------------------+-------------------------------------------------------------------------+
-| **JIRA REFERENCE** | **DESCRIPTION** |
-| | |
-+---------------------+-------------------------------------------------------------------------+
-| JIRA: YARDSTICK-841 | Fix various NSB license issues |
-+---------------------+-------------------------------------------------------------------------+
-| JIRA: YARDSTICK-73 | How To Work with Test Cases |
-+---------------------+-------------------------------------------------------------------------+
-| JIRA: YARDSTICK-500 | VNF testing documentation |
-+---------------------+-------------------------------------------------------------------------+
-| JIRA: YARDSTICK-826 | Allow overriding Heat IP addresses to match traffic generator profile |
-+---------------------+-------------------------------------------------------------------------+
-| JIRA: YARDSTICK-828 | Refactor doc/testing/user/userguide "Yardstick Installation" |
-+---------------------+-------------------------------------------------------------------------+
-| JIRA: YARDSTICK-830 | build_yardstick_image Ansible mount module doesn't work on Ubuntu 14.04 |
-+---------------------+-------------------------------------------------------------------------+
-| JIRA: YARDSTICK-833 | ansible_common transform password into lower case |
-+---------------------+-------------------------------------------------------------------------+
-| JIRA: YARDSTICK-847 | tc006, tc079, tc082 miss grafana dashboard in local deployment |
-+---------------------+-------------------------------------------------------------------------+
-| JIRA: YARDSTICK-849 | kill process do not accurately kill the process like "nova-api" |
-+---------------------+-------------------------------------------------------------------------+
-| JIRA: YARDSTICK-850 | tc023 miss description and tc050-58 wrong description |
-+---------------------+-------------------------------------------------------------------------+
-| JIRA: YARDSTICK-852 | tc078 cpu2006 fails in some situation |
-+---------------------+-------------------------------------------------------------------------+
-| JIRA: YARDSTICK-854 | yardstick docker lack of trex_client |
-+---------------------+-------------------------------------------------------------------------+
-| JIRA: YARDSTICK-867 | testcase tc078 have no data stored or dashboard to show results |
-+---------------------+-------------------------------------------------------------------------+
-| JIRA: YARDSTICK-871 | Remove img_modify_playbook assignation in build_yardstick_image.yml |
-+---------------------+-------------------------------------------------------------------------+
-| JIRA: YARDSTICK-829 | "nsb_setup.sh" doesn't parse the controller IP correctly |
-+---------------------+-------------------------------------------------------------------------+
-| JIRA: YARDSTICK-839 | NSB Prox BM test cases to be fixed for incorporating scale-up |
-+---------------------+-------------------------------------------------------------------------+
-| JIRA: YARDSTICK-840 | NSB Prox test documentation of vPE and LW-AFTR test cases |
-+---------------------+-------------------------------------------------------------------------+
-| JIRA: YARDSTICK-848 | NSB "Prox" : Cleanup duplicated traffic profile |
-+---------------------+-------------------------------------------------------------------------+
-
-
-
-
-Euphrates 5.0.0:
-
-+---------------------+--------------------------------------------+
-| **JIRA REFERENCE** | **DESCRIPTION** |
-| | |
-+---------------------+--------------------------------------------+
-| JIRA: YARDSTICK-599 | Could not load EntryPoint.parse when using |
-| | 'openstack -h' |
-+---------------------+--------------------------------------------+
-| JIRA: YARDSTICK-602 | Don't rely on staic ip addresses as they |
-| | are dynamic |
-+---------------------+--------------------------------------------+
-
-
-Euphratess 5.0.0 known restrictions/issues
-------------------------------------------
-+-----------+-----------+----------------------------------------------+
-| Installer | Scenario | Issue |
-+===========+===========+==============================================+
-| any | \*-bgpvpn | Floating ips not supported. Some Test cases |
-| | | related to floating ips are excluded. |
-+-----------+-----------+----------------------------------------------+
-| any | odl_l3-\* | Some test cases related to using floating IP |
-| | | addresses fail because of a known ODL bug. |
-| | | |
-+-----------+-----------+----------------------------------------------+
-| compass | odl_l2-\* | In some test cases, VM instance will failed |
-| | | raising network interfaces. |
-| | | |
-+-----------+-----------+----------------------------------------------+
+Iruya 9.0.0 known restrictions/issues
+======================================
Useful links
-------------
+============
- wiki project page: https://wiki.opnfv.org/display/yardstick/Yardstick
- - wiki Yardstick Euphrates release planing page: https://wiki.opnfv.org/display/yardstick/Yardstick+Euphrates+Release+Planning
+ - wiki Yardstick Iruya release planning page: https://wiki.opnfv.org/display/yardstick/Release+Iruya
- - Yardstick repo: https://git.opnfv.org/cgit/yardstick
+ - Yardstick repo: https://git.opnfv.org/yardstick
- Yardstick CI dashboard: https://build.opnfv.org/ci/view/yardstick
- Yardstick grafana dashboard: http://testresults.opnfv.org/grafana/
- - Yardstick IRC chanel: #opnfv-yardstick
+ - Yardstick IRC channel: #opnfv-yardstick
diff --git a/docs/release/results/euphrates_fraser_comparison.rst b/docs/release/results/euphrates_fraser_comparison.rst
new file mode 100644
index 000000000..1dd328bb7
--- /dev/null
+++ b/docs/release/results/euphrates_fraser_comparison.rst
@@ -0,0 +1,610 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+
+..
+ Convention for heading levels in Yardstick:
+ ======= Heading 0 (reserved for the title in a document)
+ ------- Heading 1
+ ^^^^^^^ Heading 2
+ +++++++ Heading 3
+ ''''''' Heading 4
+ Avoid deeper levels because they do not render well.
+
+Test results analysis for Euphrates and Fraser releases
+=======================================================
+
+TC002
+-----
+
+The round-trip-time (RTT) between 2 VMs on different blades is measured using
+ping.
+
+Most test run measurements result in an average RTT between 0.39 and 4.00 ms.
+Compared with the Euphrates release, the average RTT of the same pod increases
+slightly in the Fraser release. For example, the average RTT of arm-pod5 is
+1.518 ms in Euphrates and 1.714 ms in Fraser, and the average RTT of intel-pod18
+is 1.6575 ms in Euphrates and 1.856 ms in Fraser.
+
+{
+
+ "huawei-pod2:stable/euphrates": [0.3925],
+
+ "lf-pod2:stable/euphrates": [0.5315],
+
+ "lf-pod1:stable/euphrates": [0.62],
+
+ "huawei-pod2:stable/fraser": [0.677],
+
+ "lf-pod1:stable/fraser": [0.725],
+
+ "flex-pod2:stable/euphrates": [0.795],
+
+ "huawei-pod12:stable/euphrates": [0.87],
+
+ "ericsson-pod1:stable/fraser": [0.9165],
+
+ "huawei-pod12:stable/fraser": [1.0465],
+
+ "lf-pod2:stable/fraser": [1.2325],
+
+ "intel-pod5:stable/euphrates": [1.25],
+
+ "ericsson-virtual3:stable/euphrates": [1.2655],
+
+ "ericsson-pod1:stable/euphrates": [1.372],
+
+ "zte-pod2:stable/fraser": [1.395],
+
+ "arm-pod5:stable/euphrates": [1.518],
+
+ "huawei-virtual4:stable/euphrates": [1.5355],
+
+ "ericsson-virtual4:stable/fraser": [1.582],
+
+ "huawei-virtual3:stable/euphrates": [1.606],
+
+ "intel-pod18:stable/euphrates": [1.6575],
+
+ "huawei-virtual4:stable/fraser": [1.697],
+
+ "huawei-virtual8:stable/euphrates": [1.709],
+
+ "arm-pod5:stable/fraser": [1.714],
+
+ "huawei-virtual3:stable/fraser": [1.716],
+
+ "intel-pod18:stable/fraser": [1.856],
+
+ "huawei-virtual2:stable/euphrates": [1.872],
+
+ "arm-pod6:stable/euphrates": [1.895],
+
+ "huawei-virtual2:stable/fraser": [1.964],
+
+ "huawei-virtual1:stable/fraser": [1.9765],
+
+ "huawei-virtual9:stable/euphrates": [2.0745],
+
+ "arm-pod6:stable/fraser": [2.209],
+
+ "huawei-virtual1:stable/euphrates": [2.495],
+
+ "ericsson-virtual2:stable/euphrates": [2.7895],
+
+ "ericsson-virtual4:stable/euphrates": [3.768],
+
+ "ericsson-virtual1:stable/euphrates": [3.8035],
+
+ "ericsson-virtual3:stable/fraser": [3.9175],
+
+ "ericsson-virtual2:stable/fraser": [4.004]
+
+}
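+
+For reference, a minimal sketch of the kind of task definition behind these
+numbers is shown below, using Yardstick's ``Ping`` scenario with an RTT SLA.
+The packet size, duration, VM names and SLA threshold are illustrative, not
+the exact TC002 values:
+
+.. code-block:: yaml
+
+  scenarios:
+  -
+    type: Ping
+    options:
+      packetsize: 100        # bytes, illustrative
+    host: athena.demo        # hypothetical VM names
+    target: ares.demo
+    runner:
+      type: Duration
+      duration: 60
+    sla:
+      max_rtt: 10            # ms; used as a reference, not an OPNFV target
+      action: monitor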
+
+TC010
+-----
+
+The tool we use to measure memory read latency is lmbench, which is a series of
+micro benchmarks intended to measure basic operating system and hardware system
+metrics. Compared with the Euphrates release, the memory read latency of the
+same pod also increases slightly. Virtual pods seem to have a higher memory
+read latency than physical pods, and compared with x86 pods, the memory read
+latency of ARM pods is significantly higher.
+
+{
+
+ "ericsson-pod1:stable/euphrates": [5.7785],
+
+ "flex-pod2:stable/euphrates": [5.908],
+
+ "ericsson-virtual1:stable/euphrates": [6.412],
+
+ "intel-pod18:stable/euphrates": [6.5905],
+
+ "intel-pod5:stable/euphrates": [6.6975],
+
+ "ericsson-pod1:stable/fraser": [7.0645],
+
+ "ericsson-virtual4:stable/euphrates": [7.183],
+
+ "intel-pod18:stable/fraser": [7.4465],
+
+ "zte-pod2:stable/fraser": [8.1865],
+
+ "ericsson-virtual2:stable/euphrates": [8.4985],
+
+ "huawei-pod2:stable/euphrates": [8.877],
+
+ "huawei-pod12:stable/euphrates": [9.091],
+
+ "huawei-pod2:stable/fraser": [9.236],
+
+ "huawei-pod12:stable/fraser": [9.615],
+
+ "ericsson-virtual3:stable/euphrates": [9.719],
+
+ "ericsson-virtual2:stable/fraser": [9.8925],
+
+ "huawei-virtual4:stable/euphrates": [10.1195],
+
+ "huawei-virtual3:stable/euphrates": [10.19],
+
+ "huawei-virtual2:stable/fraser": [10.22],
+
+ "huawei-virtual1:stable/euphrates": [10.3045],
+
+ "huawei-virtual9:stable/euphrates": [10.318],
+
+ "ericsson-virtual4:stable/fraser": [10.5465],
+
+ "ericsson-virtual3:stable/fraser": [10.9355],
+
+ "huawei-virtual3:stable/fraser": [10.95],
+
+ "huawei-virtual2:stable/euphrates": [11.274],
+
+ "huawei-virtual4:stable/fraser": [11.557],
+
+ "lf-pod1:stable/euphrates": [15.7025],
+
+ "lf-pod2:stable/euphrates": [15.8495],
+
+ "lf-pod2:stable/fraser": [16.5595],
+
+ "lf-pod1:stable/fraser": [16.8395],
+
+ "arm-pod5:stable/euphrates": [18.092],
+
+ "arm-pod5:stable/fraser": [18.744],
+
+ "huawei-virtual1:stable/fraser": [19.8235],
+
+ "huawei-virtual8:stable/euphrates": [33.999],
+
+ "arm-pod6:stable/euphrates": [41.5605],
+
+ "arm-pod6:stable/fraser": [55.804]
+
+}
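+
+A minimal sketch of an lmbench-based latency scenario, assuming Yardstick's
+``Lmbench`` scenario type and its latency options; the stride, array size,
+VM name and SLA values below are illustrative only:
+
+.. code-block:: yaml
+
+  scenarios:
+  -
+    type: Lmbench
+    options:
+      test_type: "latency"   # memory read latency mode
+      stride: 128            # bytes, illustrative
+      stop_size: 64.0        # MB, illustrative
+    host: demeter.demo       # hypothetical VM name
+    runner:
+      type: Iteration
+      iterations: 10
+    sla:
+      max_latency: 30        # ns, reference value only
+      action: monitor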
+
+TC011
+-----
+
+Iperf3 is a tool for evaluating the packet delay variation between 2 VMs on
+different blades. In general, the packet delay variation of the two releases
+looks similar.
+
+{
+
+ "arm-pod6:stable/fraser": [1],
+
+ "ericsson-pod1:stable/fraser": [1],
+
+ "ericsson-virtual2:stable/fraser": [1],
+
+ "ericsson-virtual3:stable/fraser": [1],
+
+ "lf-pod2:stable/fraser": [1],
+
+ "huawei-virtual1:stable/fraser": [2997],
+
+ "huawei-virtual2:stable/euphrates": [2997],
+
+ "flex-pod2:stable/euphrates": [2997.5],
+
+ "huawei-virtual3:stable/euphrates": [2998],
+
+ "huawei-virtual3:stable/fraser": [2999],
+
+ "huawei-virtual9:stable/euphrates": [3000],
+
+ "huawei-virtual8:stable/euphrates": [3001],
+
+ "huawei-virtual4:stable/euphrates": [3002],
+
+ "huawei-virtual4:stable/fraser": [3002],
+
+ "ericsson-virtual3:stable/euphrates": [3006],
+
+ "huawei-virtual1:stable/euphrates": [3007],
+
+ "ericsson-virtual2:stable/euphrates": [3009],
+
+ "intel-pod18:stable/euphrates": [3010],
+
+ "ericsson-virtual4:stable/euphrates": [3017],
+
+ "lf-pod2:stable/euphrates": [3021],
+
+ "arm-pod5:stable/euphrates": [3022],
+
+ "arm-pod6:stable/euphrates": [3022],
+
+ "ericsson-pod1:stable/euphrates": [3022],
+
+ "huawei-pod12:stable/euphrates": [3022],
+
+ "huawei-pod12:stable/fraser": [3022],
+
+ "huawei-pod2:stable/euphrates": [3022],
+
+ "huawei-pod2:stable/fraser": [3022],
+
+ "intel-pod18:stable/fraser": [3022],
+
+ "intel-pod5:stable/euphrates": [3022],
+
+ "lf-pod1:stable/euphrates": [3022],
+
+ "lf-pod1:stable/fraser": [3022],
+
+ "zte-pod2:stable/fraser": [3022],
+
+ "huawei-virtual2:stable/fraser": [3025]
+
+}
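+
+A minimal sketch of an iperf3-based jitter scenario, assuming Yardstick's
+``Iperf3`` scenario type; the UDP mode, target bandwidth, VM names and SLA
+value are illustrative only:
+
+.. code-block:: yaml
+
+  scenarios:
+  -
+    type: Iperf3
+    options:
+      udp: udp               # UDP mode so that jitter is reported
+      bandwidth: 20m         # illustrative target rate
+    host: zeus.demo          # hypothetical VM names
+    target: hera.demo
+    runner:
+      type: Duration
+      duration: 30
+    sla:
+      jitter: 10             # ms, reference value only
+      action: monitor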
+
+TC012
+-----
+
+Lmbench is also used to measure the memory read and write bandwidth.
+Like TC010, compared with the Euphrates release, the memory read and write
+bandwidth of the same pod also experiences a slight decline. Compared with x86
+pods, the memory read and write bandwidth of ARM pods is significantly lower.
+
+{
+
+ "lf-pod1:stable/euphrates": [22912.39],
+
+ "lf-pod2:stable/euphrates": [22637.67],
+
+ "lf-pod1:stable/fraser": [20552.9],
+
+ "flex-pod2:stable/euphrates": [20229.99],
+
+ "lf-pod2:stable/fraser": [20058.925],
+
+ "ericsson-pod1:stable/fraser": [18930.78],
+
+ "intel-pod18:stable/fraser": [18757.545],
+
+ "ericsson-virtual1:stable/euphrates": [17474.965],
+
+ "ericsson-pod1:stable/euphrates": [17127.38],
+
+ "ericsson-virtual4:stable/euphrates": [16219.97],
+
+ "ericsson-virtual2:stable/euphrates": [15652.28],
+
+ "ericsson-virtual3:stable/euphrates": [15551.26],
+
+ "ericsson-virtual4:stable/fraser": [15389.465],
+
+ "ericsson-virtual2:stable/fraser": [15343.79],
+
+ "huawei-pod2:stable/euphrates": [15017.2],
+
+ "huawei-pod2:stable/fraser": [14870.78],
+
+ "huawei-virtual4:stable/euphrates": [14266.34],
+
+ "huawei-virtual1:stable/euphrates": [14233.035],
+
+ "huawei-virtual3:stable/euphrates": [14227.63],
+
+ "zte-pod2:stable/fraser": [14157.99],
+
+ "huawei-pod12:stable/euphrates": [14147.245],
+
+ "huawei-pod12:stable/fraser": [14126.99],
+
+ "intel-pod18:stable/euphrates": [14058.33],
+
+ "huawei-virtual3:stable/fraser": [13929.67],
+
+ "huawei-virtual2:stable/euphrates": [13862.85],
+
+ "huawei-virtual4:stable/fraser": [13847.155],
+
+ "huawei-virtual2:stable/fraser": [13702.92],
+
+ "huawei-virtual1:stable/fraser": [13496.45],
+
+ "intel-pod5:stable/euphrates": [13280.32],
+
+ "ericsson-virtual3:stable/fraser": [12733.19],
+
+ "huawei-virtual9:stable/euphrates": [12559.445],
+
+ "huawei-virtual8:stable/euphrates": [8998.02],
+
+ "arm-pod5:stable/euphrates": [4388.875],
+
+ "arm-pod5:stable/fraser": [4326.11],
+
+ "arm-pod6:stable/euphrates": [4260.2],
+
+ "arm-pod6:stable/fraser": [3809.885]
+
+}
+
+TC014
+-----
+
+Unixbench is used to evaluate the IaaS processing speed in terms of the scores
+for single CPU running and parallel running. Below are the single CPU running
+scores. It can be seen that the processing test results vary from 715 to 3737.
+In general, the single CPU scores of the two releases look similar.
+
+{
+
+ "lf-pod2:stable/fraser": [3737.6],
+
+ "lf-pod2:stable/euphrates": [3723.95],
+
+ "lf-pod1:stable/fraser": [3702.7],
+
+ "lf-pod1:stable/euphrates": [3669],
+
+ "intel-pod5:stable/euphrates": [3388.6],
+
+ "intel-pod18:stable/euphrates": [3298.4],
+
+ "flex-pod2:stable/euphrates": [3208.6],
+
+ "ericsson-pod1:stable/fraser": [3131.6],
+
+ "intel-pod18:stable/fraser": [3098.1],
+
+ "ericsson-virtual1:stable/euphrates": [2988.9],
+
+ "zte-pod2:stable/fraser": [2831.4],
+
+ "ericsson-pod1:stable/euphrates": [2669.1],
+
+ "ericsson-virtual4:stable/euphrates": [2598.5],
+
+ "ericsson-virtual2:stable/fraser": [2559.7],
+
+ "ericsson-virtual3:stable/euphrates": [2553.15],
+
+ "huawei-pod2:stable/euphrates": [2531.2],
+
+ "huawei-pod2:stable/fraser": [2528.9],
+
+ "ericsson-virtual4:stable/fraser": [2527.8],
+
+ "ericsson-virtual2:stable/euphrates": [2526.9],
+
+ "huawei-virtual4:stable/euphrates": [2407.4],
+
+ "huawei-virtual3:stable/fraser": [2379.1],
+
+ "huawei-virtual3:stable/euphrates": [2374.6],
+
+ "huawei-virtual4:stable/fraser": [2362.1],
+
+ "huawei-virtual2:stable/euphrates": [2326.4],
+
+ "huawei-virtual9:stable/euphrates": [2324.95],
+
+ "huawei-virtual1:stable/euphrates": [2302.6],
+
+ "huawei-virtual2:stable/fraser": [2299.3],
+
+ "huawei-pod12:stable/euphrates": [2232.2],
+
+ "huawei-pod12:stable/fraser": [2229],
+
+ "huawei-virtual1:stable/fraser": [2171.3],
+
+ "ericsson-virtual3:stable/fraser": [2104.8],
+
+ "huawei-virtual8:stable/euphrates": [2085.3],
+
+ "arm-pod5:stable/fraser": [1764.2],
+
+ "arm-pod5:stable/euphrates": [1754.4],
+
+ "arm-pod6:stable/euphrates": [716.15],
+
+ "arm-pod6:stable/fraser": [715.4]
+
+}
+
+TC069
+-----
+
+With the block size changing from 1 KB to 512 KB, the memory write bandwidth
+tends to become larger first and then smaller within every test run. Below are
+the scores for the 32 MB block array.
+
+{
+
+ "intel-pod18:stable/euphrates": [18871.79],
+
+ "intel-pod18:stable/fraser": [16939.24],
+
+ "intel-pod5:stable/euphrates": [16055.79],
+
+ "arm-pod6:stable/euphrates": [13327.02],
+
+ "arm-pod6:stable/fraser": [11895.71],
+
+ "flex-pod2:stable/euphrates": [9384.585],
+
+ "zte-pod2:stable/fraser": [9375.33],
+
+ "ericsson-pod1:stable/euphrates": [9331.535],
+
+ "huawei-pod12:stable/euphrates": [9164.88],
+
+ "ericsson-pod1:stable/fraser": [9140.42],
+
+ "huawei-pod2:stable/euphrates": [9026.52],
+
+ "huawei-pod12:stable/fraser": [8993.37],
+
+ "huawei-virtual9:stable/euphrates": [8825.805],
+
+ "huawei-pod2:stable/fraser": [8794.01],
+
+ "huawei-virtual2:stable/fraser": [7670.21],
+
+ "ericsson-virtual1:stable/euphrates": [7615.97],
+
+ "ericsson-virtual4:stable/euphrates": [7539.23],
+
+ "arm-pod5:stable/fraser": [7479.32],
+
+ "arm-pod5:stable/euphrates": [7403.38],
+
+ "huawei-virtual3:stable/euphrates": [7247.89],
+
+ "ericsson-virtual2:stable/fraser": [7219.21],
+
+ "huawei-virtual2:stable/euphrates": [7205.35],
+
+ "huawei-virtual1:stable/euphrates": [7196.405],
+
+ "ericsson-virtual3:stable/euphrates": [7173.72],
+
+ "huawei-virtual4:stable/euphrates": [7131.47],
+
+ "ericsson-virtual2:stable/euphrates": [7129.08],
+
+ "huawei-virtual4:stable/fraser": [7059.045],
+
+ "huawei-virtual3:stable/fraser": [7023.57],
+
+ "lf-pod1:stable/euphrates": [6928.18],
+
+ "lf-pod2:stable/euphrates": [6875.88],
+
+ "lf-pod2:stable/fraser": [6834.7],
+
+ "lf-pod1:stable/fraser": [6775.27],
+
+ "ericsson-virtual4:stable/fraser": [6522.86],
+
+ "ericsson-virtual3:stable/fraser": [5835.59],
+
+ "huawei-virtual8:stable/euphrates": [5729.705],
+
+ "huawei-virtual1:stable/fraser": [5617.12]
+
+}
+
+TC082
+-----
+
+For this test case, we use perf to measure context-switches under load.
+High context switch rates are not themselves an issue, but they may point the
+way to a more significant problem.
+
+{
+
+ "zte-pod2:stable/fraser": [306.5],
+
+ "huawei-pod12:stable/euphrates": [316],
+
+ "lf-pod2:stable/fraser": [337.5],
+
+ "intel-pod18:stable/euphrates": [340],
+
+ "intel-pod18:stable/fraser": [343.5],
+
+ "intel-pod5:stable/euphrates": [357.5],
+
+ "ericsson-pod1:stable/euphrates": [384],
+
+ "lf-pod2:stable/euphrates": [394.5],
+
+ "huawei-pod12:stable/fraser": [399],
+
+ "lf-pod1:stable/euphrates": [435],
+
+ "lf-pod1:stable/fraser": [454],
+
+ "flex-pod2:stable/euphrates": [476],
+
+ "huawei-pod2:stable/euphrates": [518],
+
+ "huawei-pod2:stable/fraser": [544.5],
+
+ "arm-pod5:stable/euphrates": [869.5],
+
+ "huawei-virtual9:stable/euphrates": [1002],
+
+ "huawei-virtual4:stable/fraser": [1138],
+
+ "huawei-virtual4:stable/euphrates": [1174],
+
+ "huawei-virtual3:stable/euphrates": [1239],
+
+ "ericsson-pod1:stable/fraser": [1305],
+
+ "huawei-virtual2:stable/euphrates": [1430],
+
+ "huawei-virtual3:stable/fraser": [1433],
+
+ "huawei-virtual1:stable/fraser": [1470],
+
+ "huawei-virtual1:stable/euphrates": [1489],
+
+ "arm-pod6:stable/fraser": [1738.5],
+
+ "arm-pod6:stable/euphrates": [1883.5]
+
+}
+
+TC083
+-----
+
+TC083 measures network latency and throughput between VMs using netperf.
+The test results shown below are for UDP throughput.
+
+{
+
+ "lf-pod1:stable/euphrates": [2204.42],
+
+ "lf-pod2:stable/fraser": [1893.39],
+
+ "intel-pod18:stable/euphrates": [1835.55],
+
+ "lf-pod2:stable/euphrates": [1676.705],
+
+ "intel-pod5:stable/euphrates": [1612.555],
+
+ "zte-pod2:stable/fraser": [1543.995],
+
+ "lf-pod1:stable/fraser": [1480.86],
+
+ "intel-pod18:stable/fraser": [1417.015],
+
+ "flex-pod2:stable/euphrates": [1370.23],
+
+ "huawei-pod12:stable/euphrates": [1300.12]
+
+}
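+
+A minimal sketch of a netperf-based throughput scenario, assuming Yardstick's
+``Netperf`` scenario type; the test name, message size, duration and VM names
+are illustrative only:
+
+.. code-block:: yaml
+
+  scenarios:
+  -
+    type: Netperf
+    options:
+      testname: 'UDP_STREAM'   # throughput test
+      send_msg_size: 1024      # bytes, illustrative
+      duration: 20             # seconds, illustrative
+    host: poseidon.demo        # hypothetical VM names
+    target: hermes.demo
+    runner:
+      type: Iteration
+      iterations: 1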
diff --git a/docs/release/results/images/tc002_pod.png b/docs/release/results/images/tc002_pod.png
new file mode 100644
index 000000000..7f92c471d
--- /dev/null
+++ b/docs/release/results/images/tc002_pod.png
Binary files differ
diff --git a/docs/release/results/images/tc002_pod_fraser.png b/docs/release/results/images/tc002_pod_fraser.png
new file mode 100644
index 000000000..797dc3136
--- /dev/null
+++ b/docs/release/results/images/tc002_pod_fraser.png
Binary files differ
diff --git a/docs/release/results/images/tc002_scenario.png b/docs/release/results/images/tc002_scenario.png
new file mode 100644
index 000000000..0ea1ecec6
--- /dev/null
+++ b/docs/release/results/images/tc002_scenario.png
Binary files differ
diff --git a/docs/release/results/images/tc002_scenario_fraser.png b/docs/release/results/images/tc002_scenario_fraser.png
new file mode 100644
index 000000000..ff42e6516
--- /dev/null
+++ b/docs/release/results/images/tc002_scenario_fraser.png
Binary files differ
diff --git a/docs/release/results/images/tc010_pod.png b/docs/release/results/images/tc010_pod.png
new file mode 100644
index 000000000..c7c623481
--- /dev/null
+++ b/docs/release/results/images/tc010_pod.png
Binary files differ
diff --git a/docs/release/results/images/tc010_pod_fraser.png b/docs/release/results/images/tc010_pod_fraser.png
new file mode 100644
index 000000000..23367d34a
--- /dev/null
+++ b/docs/release/results/images/tc010_pod_fraser.png
Binary files differ
diff --git a/docs/release/results/images/tc010_scenario.png b/docs/release/results/images/tc010_scenario.png
new file mode 100644
index 000000000..7c53a5fab
--- /dev/null
+++ b/docs/release/results/images/tc010_scenario.png
Binary files differ
diff --git a/docs/release/results/images/tc010_scenario_fraser.png b/docs/release/results/images/tc010_scenario_fraser.png
new file mode 100644
index 000000000..a481a595f
--- /dev/null
+++ b/docs/release/results/images/tc010_scenario_fraser.png
Binary files differ
diff --git a/docs/release/results/images/tc011_pod.png b/docs/release/results/images/tc011_pod.png
new file mode 100644
index 000000000..8fec72f5a
--- /dev/null
+++ b/docs/release/results/images/tc011_pod.png
Binary files differ
diff --git a/docs/release/results/images/tc011_pod_fraser.png b/docs/release/results/images/tc011_pod_fraser.png
new file mode 100644
index 000000000..82dc9c763
--- /dev/null
+++ b/docs/release/results/images/tc011_pod_fraser.png
Binary files differ
diff --git a/docs/release/results/images/tc011_scenario.png b/docs/release/results/images/tc011_scenario.png
new file mode 100644
index 000000000..2d78ea372
--- /dev/null
+++ b/docs/release/results/images/tc011_scenario.png
Binary files differ
diff --git a/docs/release/results/images/tc011_scenario_fraser.png b/docs/release/results/images/tc011_scenario_fraser.png
new file mode 100644
index 000000000..226d0b856
--- /dev/null
+++ b/docs/release/results/images/tc011_scenario_fraser.png
Binary files differ
diff --git a/docs/release/results/images/tc012_pod.png b/docs/release/results/images/tc012_pod.png
new file mode 100644
index 000000000..0f2a00910
--- /dev/null
+++ b/docs/release/results/images/tc012_pod.png
Binary files differ
diff --git a/docs/release/results/images/tc012_pod_fraser.png b/docs/release/results/images/tc012_pod_fraser.png
new file mode 100644
index 000000000..66e79be85
--- /dev/null
+++ b/docs/release/results/images/tc012_pod_fraser.png
Binary files differ
diff --git a/docs/release/results/images/tc012_scenario.png b/docs/release/results/images/tc012_scenario.png
new file mode 100644
index 000000000..16257988d
--- /dev/null
+++ b/docs/release/results/images/tc012_scenario.png
Binary files differ
diff --git a/docs/release/results/images/tc012_scenario_fraser.png b/docs/release/results/images/tc012_scenario_fraser.png
new file mode 100644
index 000000000..4ef44119a
--- /dev/null
+++ b/docs/release/results/images/tc012_scenario_fraser.png
Binary files differ
diff --git a/docs/release/results/images/tc014_pod.png b/docs/release/results/images/tc014_pod.png
new file mode 100644
index 000000000..63aead2e8
--- /dev/null
+++ b/docs/release/results/images/tc014_pod.png
Binary files differ
diff --git a/docs/release/results/images/tc014_pod_fraser.png b/docs/release/results/images/tc014_pod_fraser.png
new file mode 100644
index 000000000..697201d76
--- /dev/null
+++ b/docs/release/results/images/tc014_pod_fraser.png
Binary files differ
diff --git a/docs/release/results/images/tc014_scenario.png b/docs/release/results/images/tc014_scenario.png
new file mode 100644
index 000000000..98f23ba1b
--- /dev/null
+++ b/docs/release/results/images/tc014_scenario.png
Binary files differ
diff --git a/docs/release/results/images/tc014_scenario_fraser.png b/docs/release/results/images/tc014_scenario_fraser.png
new file mode 100644
index 000000000..f7865dcdc
--- /dev/null
+++ b/docs/release/results/images/tc014_scenario_fraser.png
Binary files differ
diff --git a/docs/release/results/images/tc069_pod.png b/docs/release/results/images/tc069_pod.png
new file mode 100644
index 000000000..66b272cb4
--- /dev/null
+++ b/docs/release/results/images/tc069_pod.png
Binary files differ
diff --git a/docs/release/results/images/tc069_pod_fraser.png b/docs/release/results/images/tc069_pod_fraser.png
new file mode 100644
index 000000000..1cba192d7
--- /dev/null
+++ b/docs/release/results/images/tc069_pod_fraser.png
Binary files differ
diff --git a/docs/release/results/images/tc069_scenario.png b/docs/release/results/images/tc069_scenario.png
new file mode 100644
index 000000000..caf12f8d5
--- /dev/null
+++ b/docs/release/results/images/tc069_scenario.png
Binary files differ
diff --git a/docs/release/results/images/tc069_scenario_fraser.png b/docs/release/results/images/tc069_scenario_fraser.png
new file mode 100644
index 000000000..f988b90c6
--- /dev/null
+++ b/docs/release/results/images/tc069_scenario_fraser.png
Binary files differ
diff --git a/docs/release/results/images/tc082_pod.png b/docs/release/results/images/tc082_pod.png
new file mode 100644
index 000000000..89e01666b
--- /dev/null
+++ b/docs/release/results/images/tc082_pod.png
Binary files differ
diff --git a/docs/release/results/images/tc082_pod_fraser.png b/docs/release/results/images/tc082_pod_fraser.png
new file mode 100644
index 000000000..d54ab901a
--- /dev/null
+++ b/docs/release/results/images/tc082_pod_fraser.png
Binary files differ
diff --git a/docs/release/results/images/tc082_scenario.png b/docs/release/results/images/tc082_scenario.png
new file mode 100644
index 000000000..637a739c3
--- /dev/null
+++ b/docs/release/results/images/tc082_scenario.png
Binary files differ
diff --git a/docs/release/results/images/tc083_pod.png b/docs/release/results/images/tc083_pod.png
new file mode 100644
index 000000000..f874191e4
--- /dev/null
+++ b/docs/release/results/images/tc083_pod.png
Binary files differ
diff --git a/docs/release/results/images/tc083_pod_fraser.png b/docs/release/results/images/tc083_pod_fraser.png
new file mode 100644
index 000000000..942cc2074
--- /dev/null
+++ b/docs/release/results/images/tc083_pod_fraser.png
Binary files differ
diff --git a/docs/release/results/images/tc083_scenario.png b/docs/release/results/images/tc083_scenario.png
new file mode 100644
index 000000000..afd80aa02
--- /dev/null
+++ b/docs/release/results/images/tc083_scenario.png
Binary files differ
diff --git a/docs/release/results/index.rst b/docs/release/results/index.rst
index 0560152e0..30cf62284 100644
--- a/docs/release/results/index.rst
+++ b/docs/release/results/index.rst
@@ -14,3 +14,9 @@ Yardstick test results
.. include:: ./overview.rst
.. include:: ./results.rst
+.. include:: ./euphrates_fraser_comparison.rst
+
+.. include:: ./yardstick-opnfv-ha.rst
+.. include:: ./yardstick-opnfv-kvm.rst
+.. include:: ./yardstick-opnfv-parser.rst
+
diff --git a/docs/release/results/os-nosdn-kvm-ha.rst b/docs/release/results/os-nosdn-kvm-ha.rst
deleted file mode 100644
index a8a56f80e..000000000
--- a/docs/release/results/os-nosdn-kvm-ha.rst
+++ /dev/null
@@ -1,270 +0,0 @@
-.. This work is licensed under a Creative Commons Attribution 4.0 International
-.. License.
-.. http://creativecommons.org/licenses/by/4.0
-
-
-================================
-Test Results for os-nosdn-kvm-ha
-================================
-
-.. toctree::
- :maxdepth: 2
-
-
-fuel
-====
-
-.. _Grafana: http://testresults.opnfv.org/grafana/dashboard/db/yardstick-main
-.. _POD2: https://wiki.opnfv.org/pharos?&#community_test_labs
-
-Overview of test results
-------------------------
-
-See Grafana_ for viewing test result metrics for each respective test case. It
-is possible to choose which specific scenarios to look at, and then to zoom in
-on the details of each test scenario run as well.
-
-All of the test case results below are based on 4 scenario test
-runs, each run on the Ericsson POD2_ or LF POD2_ between August 24 and 30 in
-2016.
-
-TC002
------
-The round-trip-time (RTT) between 2 VMs on different blades is measured using
-ping. Most test run measurements result on average between 0.44 and 0.75 ms.
-A few runs start with a 0.65 - 0.68 ms RTT spike (This could be because of
-normal ARP handling). One test run has a greater RTT spike of 1.49 ms.
-To be able to draw conclusions more runs should be made. SLA set to 10 ms.
-The SLA value is used as a reference, it has not been defined by OPNFV.
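-
-As an illustration only (this is not part of the Yardstick test definition), a
-minimal sketch of how such a ping based RTT average could be collected from one
-of the VMs, assuming a Linux guest and a hypothetical peer address 10.0.0.5:
-
-.. code-block:: python
-
-    import re
-    import subprocess
-
-    def average_rtt(target: str, count: int = 10) -> float:
-        """Run ping and return the average RTT in ms from its summary line."""
-        out = subprocess.run(
-            ["ping", "-c", str(count), target],
-            capture_output=True, text=True, check=True,
-        ).stdout
-        # Linux ping prints e.g. "rtt min/avg/max/mdev = 0.431/0.562/0.749/0.087 ms"
-        match = re.search(r"= [\d.]+/([\d.]+)/[\d.]+/[\d.]+ ms", out)
-        if not match:
-            raise RuntimeError("could not parse the ping summary line")
-        return float(match.group(1))
-
-    if __name__ == "__main__":
-        print("average RTT: %.3f ms" % average_rtt("10.0.0.5"))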
-
-TC005
------
-The IO read bandwidth looks similar between different dates, with an
-average between approx. 92 and 204 MB/s. Within each test run the results
-vary, with a minimum 2 MB/s and maximum 819 MB/s on the totality. Most runs
-have a minimum BW of 3 MB/s (one run at 2 MB/s). The maximum BW varies more in
-absolute numbers between the dates, between 238 and 819 MB/s.
-SLA set to 400 MB/s. The SLA value is used as a reference, it has not been
-defined by OPNFV.
-
-TC010
------
-The measurements for memory latency are similar between test dates and result
-in approx. 2.07 ns. The variations within each test run are similar, between
-1.41 and 3.53 ns.
-SLA set to 30 ns. The SLA value is used as a reference, it has not been defined
-by OPNFV.
-
-TC011
------
-Packet delay variation between 2 VMs on different blades is measured using
-Iperf3. The reported packet delay variation varies between 0.0051 and 0.0243 ms,
-with an average delay variation between 0.0081 ms and 0.0195 ms.
-
-TC012
------
-Between test dates, the average measurements for memory bandwidth result in
-approx. 13.6 GB/s. Within each test run the results vary more, with a minimal
-BW of 6.09 GB/s and maximum of 16.47 GB/s on the totality.
-SLA set to 15 GB/s. The SLA value is used as a reference, it has not been
-defined by OPNFV.
-
-TC014
------
-The Unixbench processor test run results vary between scores 2316 and 3619,
-one result each date.
-No SLA set.
-
-TC037
------
-The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
-on different blades are measured when increasing the amount of UDP flows sent
-between the VMs using pktgen as packet generator tool.
-
-Round trip times and packet throughput between VMs can typically be affected by
-the amount of flows set up and result in higher RTT and less PPS throughput.
-
-The RTT results are similar throughout the different test dates and runs at
-approx. 15 ms. Some test runs show an increase with many flows, in the range
-towards 16 to 17 ms. One exception standing out is Feb. 15 where the average
-RTT is stable at approx. 13 ms. The PPS results are not as consistent as the
-RTT results.
-In some test runs when running with less than approx. 10000 flows the PPS
-throughput is normally flatter compared to when running with more flows, after
-which the PPS throughput decreases, by around 20 percent in the worst
-case. For the other test runs there is however no significant change to the PPS
-throughput when the number of flows is increased. In some test runs the PPS
-is also greater with 1000000 flows compared to other test runs where the PPS
-result is less with only 2 flows.
-
-The average PPS throughput in the different runs varies between 414000 and
-452000 PPS. The total amount of packets in each test run is approx. 7500000 to
-8200000 packets. One test run on Feb. 15 sticks out with a PPS average of
-558000 and approx. 1100000 packets in total (the same as the one mentioned earlier
-for RTT results).
-
-There are lost packets reported in most of the test runs. There is no observed
-correlation between the amount of flows and the amount of lost packets.
-The number of lost packets normally ranges between 100 and 1000 per test run,
-but there are spikes in the range of 10000 lost packets as well, and even
-more in rare cases.
-
-CPU utilization statistics are collected during UDP flows sent between the VMs
-using pktgen as packet generator tool. The average measurements for CPU
-utilization ratio vary between 1% and 2%. The peak of CPU utilization ratio
-appears around 7%.
-
-TC069
------
-Between test dates, the average measurements for memory bandwidth vary between
-22.6 and 29.1 GB/s. Within each test run the results vary more, with a minimal
-BW of 20.0 GB/s and maximum of 29.5 GB/s on the totality.
-SLA set to 6 GB/s. The SLA value is used as a reference, it has not been
-defined by OPNFV.
-
-
-TC070
------
-The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
-on different blades are measured when increasing the amount of UDP flows sent
-between the VMs using pktgen as packet generator tool.
-
-Round trip times and packet throughput between VMs can typically be affected by
-the amount of flows set up and result in higher RTT and less PPS throughput.
-
-The RTT results are similar throughout the different test dates and runs at
-approx. 15 ms. Some test runs show an increase with many flows, in the range
-towards 16 to 17 ms. One exception standing out is Feb. 15 where the average
-RTT is stable at approx. 13 ms. The PPS results are not as consistent as the
-RTT results.
-In some test runs when running with less than approx. 10000 flows the PPS
-throughput is normally flatter compared to when running with more flows, after
-which the PPS throughput decreases, by around 20 percent in the worst
-case. For the other test runs there is however no significant change to the PPS
-throughput when the number of flows is increased. In some test runs the PPS
-is also greater with 1000000 flows compared to other test runs where the PPS
-result is less with only 2 flows.
-
-The average PPS throughput in the different runs varies between 414000 and
-452000 PPS. The total amount of packets in each test run is approx. 7500000 to
-8200000 packets. One test run on Feb. 15 sticks out with a PPS average of
-558000 and approx. 1100000 packets in total (the same as the one mentioned earlier
-for RTT results).
-
-There are lost packets reported in most of the test runs. There is no observed
-correlation between the amount of flows and the amount of lost packets.
-The number of lost packets normally ranges between 100 and 1000 per test run,
-but there are spikes in the range of 10000 lost packets as well, and even
-more in rare cases.
-
-Memory utilization statistics are collected during UDP flows sent between the
-VMs using pktgen as packet generator tool. The average measurements for memory
-utilization vary between 225 MB and 246 MB. The peak of memory utilization
-appears around 340 MB.
-
-TC071
------
-The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
-on different blades are measured when increasing the amount of UDP flows sent
-between the VMs using pktgen as packet generator tool.
-
-Round trip times and packet throughput between VMs can typically be affected by
-the amount of flows set up and result in higher RTT and less PPS throughput.
-
-The RTT results are similar throughout the different test dates and runs at
-approx. 15 ms. Some test runs show an increase with many flows, in the range
-towards 16 to 17 ms. One exception standing out is Feb. 15 where the average
-RTT is stable at approx. 13 ms. The PPS results are not as consistent as the
-RTT results.
-In some test runs when running with less than approx. 10000 flows the PPS
-throughput is normally flatter compared to when running with more flows, after
-which the PPS throughput decreases, by around 20 percent in the worst
-case. For the other test runs there is however no significant change to the PPS
-throughput when the number of flows is increased. In some test runs the PPS
-is also greater with 1000000 flows compared to other test runs where the PPS
-result is less with only 2 flows.
-
-The average PPS throughput in the different runs varies between 414000 and
-452000 PPS. The total amount of packets in each test run is approx. 7500000 to
-8200000 packets. One test run on Feb. 15 sticks out with a PPS average of
-558000 and approx. 1100000 packets in total (the same as the one mentioned earlier
-for RTT results).
-
-There are lost packets reported in most of the test runs. There is no observed
-correlation between the amount of flows and the amount of lost packets.
-The number of lost packets normally ranges between 100 and 1000 per test run,
-but there are spikes in the range of 10000 lost packets as well, and even
-more in rare cases.
-
-Cache utilization statistics are collected during UDP flows sent between the
-VMs using pktgen as packet generator tool. The average measurements for cache
-utilization vary between 205 MB and 212 MB.
-
-TC072
------
-The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
-on different blades are measured when increasing the amount of UDP flows sent
-between the VMs using pktgen as packet generator tool.
-
-Round trip times and packet throughput between VMs can typically be affected by
-the amount of flows set up and result in higher RTT and less PPS throughput.
-
-The RTT results are similar throughout the different test dates and runs at
-approx. 15 ms. Some test runs show an increase with many flows, in the range
-towards 16 to 17 ms. One exception standing out is Feb. 15 where the average
-RTT is stable at approx. 13 ms. The PPS results are not as consistent as the
-RTT results.
-In some test runs when running with less than approx. 10000 flows the PPS
-throughput is normally flatter compared to when running with more flows, after
-which the PPS throughput decreases, by around 20 percent in the worst
-case. For the other test runs there is however no significant change to the PPS
-throughput when the number of flows is increased. In some test runs the PPS
-is also greater with 1000000 flows compared to other test runs where the PPS
-result is less with only 2 flows.
-
-The average PPS throughput in the different runs varies between 414000 and
-452000 PPS. The total amount of packets in each test run is approx. 7500000 to
-8200000 packets. One test run on Feb. 15 sticks out with a PPS average of
-558000 and approx. 1100000 packets in total (the same as the one mentioned earlier
-for RTT results).
-
-There are lost packets reported in most of the test runs. There is no observed
-correlation between the amount of flows and the amount of lost packets.
-The number of lost packets normally ranges between 100 and 1000 per test run,
-but there are spikes in the range of 10000 lost packets as well, and even
-more in rare cases.
-
-Network utilization statistics are collected during UDP flows sent between the
-VMs using pktgen as packet generator tool. Total number of packets received per
-second averaged 200 kpps and the total number of packets transmitted per
-second averaged 600 kpps.
-
-Detailed test results
----------------------
-The scenario was run on Ericsson POD2_ and LF POD2_ with:
-
-* Fuel 9.0
-* OpenStack Mitaka
-* OpenVirtualSwitch 2.5.90
-* OpenDayLight Beryllium
-
-Rationale for decisions
------------------------
-Pass
-
-Tests were successfully executed and metrics collected.
-No SLA was verified. To be decided on in next release of OPNFV.
-
-Conclusions and recommendations
--------------------------------
-The pktgen test configuration has a relatively large base effect on RTT in
-TC037 compared to TC002, where there is no background load at all. Approx.
-15 ms compared to approx. 0.5 ms, which is more than a 3000 percent
-difference in RTT results.
-Especially RTT and throughput come out with better results than for instance
-the *fuel-os-nosdn-nofeature-ha* scenario does. The reason for this should
-probably be further analyzed and understood. Also of interest could be
-to perform further analyses to find patterns and reasons for lost traffic.
-Also of interest could be to see if there are continuous variations where
-some test cases stand out with better or worse results than the general test
-case.
-
diff --git a/docs/release/results/os-nosdn-nofeature-ha.rst b/docs/release/results/os-nosdn-nofeature-ha.rst
deleted file mode 100644
index 9e52731d5..000000000
--- a/docs/release/results/os-nosdn-nofeature-ha.rst
+++ /dev/null
@@ -1,492 +0,0 @@
-.. This work is licensed under a Creative Commons Attribution 4.0 International
-.. License.
-.. http://creativecommons.org/licenses/by/4.0
-
-
-======================================
-Test Results for os-nosdn-nofeature-ha
-======================================
-
-.. toctree::
- :maxdepth: 2
-
-
-apex
-====
-
-.. _Grafana: http://testresults.opnfv.org/grafana/dashboard/db/yardstick-main
-.. _POD1: https://wiki.opnfv.org/pharos?&#community_test_labs
-
-
-Overview of test results
-------------------------
-
-See Grafana_ for viewing test result metrics for each respective test case. It
-is possible to choose which specific scenarios to look at, and then to zoom in
-on the details of each test scenario run as well.
-
-All of the test case results below are based on 4 scenario test
-runs, each run on the LF POD1_ between August 25 and 28 in
-2016.
-
-TC002
------
-The round-trip-time (RTT) between 2 VMs on different blades is measured using
-ping. Most test run measurements result on average between 0.74 and 1.08 ms.
-A few runs start with a 0.99 - 1.07 ms RTT spike (This could be because of
-normal ARP handling). One test run has a greater RTT spike of 1.35 ms.
-To be able to draw conclusions more runs should be made. SLA set to 10 ms.
-The SLA value is used as a reference, it has not been defined by OPNFV.
-
-TC005
------
-The IO read bandwidth looks similar between different dates, with an
-average between approx. 128 and 136 MB/s. Within each test run the results
-vary, with a minimum 5 MB/s and maximum 446 MB/s on the totality. Most runs
-have a minimum BW of 5 MB/s (one run at 6 MB/s). The maximum BW varies more in
-absolute numbers between the dates, between 416 and 446 MB/s.
-SLA set to 400 MB/s. The SLA value is used as a reference, it has not been
-defined by OPNFV.
-
-TC010
------
-The measurements for memory latency are similar between test dates and result
-in approx. 1.09 ns. The variations within each test run are similar, between
-1.0860 and 1.0880 ns.
-SLA set to 30 ns. The SLA value is used as a reference, it has not been defined
-by OPNFV.
-
-TC011
------
-Packet delay variation between 2 VMs on different blades is measured using
-Iperf3. The reported packet delay variation varies between 0.0025 and 0.0148 ms,
-with an average delay variation between 0.0056 ms and 0.0157 ms.
-
-TC012
------
-Between test dates, the average measurements for memory bandwidth result in
-approx. 19.70 GB/s. Within each test run the results vary more, with a minimal
-BW of 18.16 GB/s and maximum of 20.13 GB/s on the totality.
-SLA set to 15 GB/s. The SLA value is used as a reference, it has not been
-defined by OPNFV.
-
-TC014
------
-The Unixbench processor test run results vary between scores 3224.4 and 3842.8,
-one result each date. The average score on the total is 3659.5.
-No SLA set.
-
-TC037
------
-The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
-on different blades are measured when increasing the amount of UDP flows sent
-between the VMs using pktgen as packet generator tool.
-
-Round trip times and packet throughput between VMs can typically be affected by
-the amount of flows set up and result in higher RTT and less PPS throughput.
-
-The RTT results are similar throughout the different test dates and runs at
-approx. 15 ms. Some test runs show an increase with many flows, in the range
-towards 16 to 17 ms. One exception standing out is Feb. 15 where the average
-RTT is stable at approx. 13 ms. The PPS results are not as consistent as the
-RTT results.
-In some test runs when running with less than approx. 10000 flows the PPS
-throughput is normally flatter compared to when running with more flows, after
-which the PPS throughput decreases, by around 20 percent in the worst
-case. For the other test runs there is however no significant change to the PPS
-throughput when the number of flows is increased. In some test runs the PPS
-is also greater with 1000000 flows compared to other test runs where the PPS
-result is less with only 2 flows.
-
-The average PPS throughput in the different runs varies between 414000 and
-452000 PPS. The total amount of packets in each test run is approx. 7500000 to
-8200000 packets. One test run on Feb. 15 sticks out with a PPS average of
-558000 and approx. 1100000 packets in total (the same as the one mentioned earlier
-for RTT results).
-
-There are lost packets reported in most of the test runs. There is no observed
-correlation between the amount of flows and the amount of lost packets.
-The number of lost packets normally ranges between 100 and 1000 per test run,
-but there are spikes in the range of 10000 lost packets as well, and even
-more in rare cases.
-
-CPU utilization statistics are collected during UDP flows sent between the VMs
-using pktgen as packet generator tool. The average measurements for CPU
-utilization ratio vary between 1% and 2%. The peak of CPU utilization ratio
-appears around 7%.
-
-TC069
------
-Between test dates, the average measurements for memory bandwidth vary between
-22.6 and 29.1 GB/s. Within each test run the results vary more, with a minimal
-BW of 20.0 GB/s and maximum of 29.5 GB/s on the totality.
-SLA set to 6 GB/s. The SLA value is used as a reference, it has not been
-defined by OPNFV.
-
-TC070
------
-The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
-on different blades are measured when increasing the amount of UDP flows sent
-between the VMs using pktgen as packet generator tool.
-
-Round trip times and packet throughput between VMs can typically be affected by
-the amount of flows set up and result in higher RTT and less PPS throughput.
-
-The RTT results are similar throughout the different test dates and runs at
-approx. 15 ms. Some test runs show an increase with many flows, in the range
-towards 16 to 17 ms. One exception standing out is Feb. 15 where the average
-RTT is stable at approx. 13 ms. The PPS results are not as consistent as the
-RTT results.
-In some test runs when running with less than approx. 10000 flows the PPS
-throughput is normally flatter compared to when running with more flows, after
-which the PPS throughput decreases, by around 20 percent in the worst
-case. For the other test runs there is however no significant change to the PPS
-throughput when the number of flows is increased. In some test runs the PPS
-is also greater with 1000000 flows compared to other test runs where the PPS
-result is less with only 2 flows.
-
-The average PPS throughput in the different runs varies between 414000 and
-452000 PPS. The total amount of packets in each test run is approx. 7500000 to
-8200000 packets. One test run on Feb. 15 sticks out with a PPS average of
-558000 and approx. 1100000 packets in total (the same as the one mentioned earlier
-for RTT results).
-
-There are lost packets reported in most of the test runs. There is no observed
-correlation between the amount of flows and the amount of lost packets.
-The number of lost packets normally ranges between 100 and 1000 per test run,
-but there are spikes in the range of 10000 lost packets as well, and even
-more in rare cases.
-
-Memory utilization statistics are collected during UDP flows sent between the
-VMs using pktgen as packet generator tool. The average measurements for memory
-utilization vary between 225 MB and 246 MB. The peak of memory utilization
-appears around 340 MB.
-
-TC071
------
-The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
-on different blades are measured when increasing the amount of UDP flows sent
-between the VMs using pktgen as packet generator tool.
-
-Round trip times and packet throughput between VMs can typically be affected by
-the amount of flows set up and result in higher RTT and less PPS throughput.
-
-The RTT results are similar throughout the different test dates and runs at
-approx. 15 ms. Some test runs show an increase with many flows, in the range
-towards 16 to 17 ms. One exception standing out is Feb. 15 where the average
-RTT is stable at approx. 13 ms. The PPS results are not as consistent as the
-RTT results.
-In some test runs when running with less than approx. 10000 flows the PPS
-throughput is normally flatter compared to when running with more flows, after
-which the PPS throughput decreases, by around 20 percent in the worst
-case. For the other test runs there is however no significant change to the PPS
-throughput when the number of flows is increased. In some test runs the PPS
-is also greater with 1000000 flows compared to other test runs where the PPS
-result is less with only 2 flows.
-
-The average PPS throughput in the different runs varies between 414000 and
-452000 PPS. The total amount of packets in each test run is approx. 7500000 to
-8200000 packets. One test run on Feb. 15 sticks out with a PPS average of
-558000 and approx. 1100000 packets in total (the same as the one mentioned earlier
-for RTT results).
-
-There are lost packets reported in most of the test runs. There is no observed
-correlation between the amount of flows and the amount of lost packets.
-The number of lost packets normally ranges between 100 and 1000 per test run,
-but there are spikes in the range of 10000 lost packets as well, and even
-more in rare cases.
-
-Cache utilization statistics are collected during UDP flows sent between the
-VMs using pktgen as packet generator tool. The average measurements for cache
-utilization vary between 205 MB and 212 MB.
-
-TC072
------
-The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
-on different blades are measured when increasing the amount of UDP flows sent
-between the VMs using pktgen as packet generator tool.
-
-Round trip times and packet throughput between VMs can typically be affected by
-the amount of flows set up and result in higher RTT and less PPS throughput.
-
-The RTT results are similar throughout the different test dates and runs at
-approx. 15 ms. Some test runs show an increase with many flows, in the range
-towards 16 to 17 ms. One exception standing out is Feb. 15 where the average
-RTT is stable at approx. 13 ms. The PPS results are not as consistent as the
-RTT results.
-In some test runs when running with less than approx. 10000 flows the PPS
-throughput is normally flatter compared to when running with more flows, after
-which the PPS throughput decreases, by around 20 percent in the worst
-case. For the other test runs there is however no significant change to the PPS
-throughput when the number of flows is increased. In some test runs the PPS
-is also greater with 1000000 flows compared to other test runs where the PPS
-result is less with only 2 flows.
-
-The average PPS throughput in the different runs varies between 414000 and
-452000 PPS. The total amount of packets in each test run is approx. 7500000 to
-8200000 packets. One test run on Feb. 15 sticks out with a PPS average of
-558000 and approx. 1100000 packets in total (the same as the one mentioned earlier
-for RTT results).
-
-There are lost packets reported in most of the test runs. There is no observed
-correlation between the amount of flows and the amount of lost packets.
-The number of lost packets normally ranges between 100 and 1000 per test run,
-but there are spikes in the range of 10000 lost packets as well, and even
-more in rare cases.
-
-Network utilization statistics are collected during UDP flows sent between the
-VMs using pktgen as packet generator tool. Total number of packets received per
-second averaged 200 kpps and the total number of packets transmitted per
-second averaged 600 kpps.
-
-Detailed test results
----------------------
-The scenario was run on LF POD1_ with:
-
-* Apex
-* OpenStack Mitaka
-* OpenVirtualSwitch 2.5.90
-* OpenDayLight Beryllium
-
-Rationale for decisions
------------------------
-Pass
-
-Tests were successfully executed and metrics collected.
-No SLA was verified. To be decided on in next release of OPNFV.
-
-
-Joid
-====
-
-.. _Grafana: http://testresults.opnfv.org/grafana/dashboard/db/yardstick-main
-.. _POD5: https://wiki.opnfv.org/pharos?&#community_test_labs
-
-
-Overview of test results
-------------------------
-
-See Grafana_ for viewing test result metrics for each respective test case. It
-is possible to choose which specific scenarios to look at, and then to zoom in
-on the details of each test scenario run as well.
-
-All of the test case results below are based on 4 scenario test runs, each run
-on the Intel POD5_ between September 11 and 14 in 2016.
-
-TC002
------
-The round-trip-time (RTT) between 2 VMs on different blades is measured using
-ping. Most test run measurements result on average between 1.59 and 1.70 ms.
-Two test runs reach the same maximum RTT spike of 3.06 ms (their averages are
-1.66 and 1.70 ms), but only one of them has the lower RTT of 1.35 ms. The other
-two runs show no similar spike at all. To be able to draw conclusions more runs
-should be made. SLA set to be 10 ms. The SLA value is used as a reference, it
-has not been defined by OPNFV.
-
-TC005
------
-The IO read bandwidth actually refers to the storage throughput and the
-greatest IO read bandwidth of the four runs is 173.3 MB/s. The IO read
-bandwidth of the four runs looks similar across the four
-days, with an average between 32.7 and 60.4 MB/s. One of the runs has a minimum
-BW of 429 KB/s and another has a maximum BW of 173.3 MB/s. The SLA for read
-bandwidth is set to 400 MB/s, which is used as a reference, and it has not been defined by
-OPNFV.
-
-TC010
------
-The tool we use to measure memory read latency is lmbench, which is a series of
-micro benchmarks intended to measure basic operating system and hardware system
-metrics. The memory read latency of the four runs is 1.1 ns on average. The
-variations within each test run are different; some vary over a large range and
-others change only a little. For example, the largest change is on September 14,
-when the memory read latency ranges from 1.12 ns to 1.22 ns. However,
-the results on September 12 change very little, ranging from 1.14 ns to
-1.17 ns. The SLA is set to 30 ns. The SLA value is used as a reference, it has
-not been defined by OPNFV.
-
-TC011
------
-Iperf3 is a tool for evaluating the packet delay variation between 2 VMs on
-different blades. The reported packet delay variations of the four test runs
-differ from each other. The results on September 13 within the date look
-similar and the values are between 0.0087 and 0.0190 ms, which is 0.0126 ms on
-average. However, on the fourth day, the packet delay variation changes over a
-wide range within the date, from 0.0032 ms to 0.0121 ms, and has
-the minimum average value. The packet delay variations of the other two test runs
-look relatively similar, which are 0.0076 ms and 0.0152 ms on average. The SLA
-value sets to be 10 ms. The SLA value is used as a reference, it has not been
-defined by OPNFV.
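-
-As a purely illustrative sketch (not the Yardstick task definition), the same
-kind of packet delay variation figure could be obtained from iperf3's JSON
-output, assuming an iperf3 server is already listening on a hypothetical peer
-address 10.0.0.5:
-
-.. code-block:: python
-
-    import json
-    import subprocess
-
-    def udp_jitter_ms(server: str, rate: str = "10M", seconds: int = 20) -> float:
-        """Run a UDP iperf3 test and return the reported jitter in ms."""
-        out = subprocess.run(
-            ["iperf3", "-c", server, "-u", "-b", rate, "-t", str(seconds), "--json"],
-            capture_output=True, text=True, check=True,
-        ).stdout
-        report = json.loads(out)
-        # For UDP tests iperf3 reports the packet delay variation as "jitter_ms".
-        return float(report["end"]["sum"]["jitter_ms"])
-
-    if __name__ == "__main__":
-        print("packet delay variation: %.4f ms" % udp_jitter_ms("10.0.0.5"))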
-
-TC012
------
-Lmbench is also used to measure the memory read and write bandwidth, in which
-we use bw_mem to obtain the results. Among the four test runs, the memory
-bandwidth within the second day keeps almost stable, at 11.58 GB/s on
-average. The memory bandwidth of the fourth day looks similar to that of the
-second day, both of which remain stable. The other two test runs vary
-over a wide range, in which the minimum memory bandwidth is 11.22
-GB/s and the maximum bandwidth is 16.65 GB/s with an average bandwidth of about
-12.20 GB/s. Here the SLA is set to 15 GB/s. The SLA value is used as a reference,
-it has not been defined by OPNFV.
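-
-A minimal sketch of how bw_mem could be driven and averaged is shown below; it
-assumes lmbench's bw_mem binary is installed in the guest and that it prints
-"<size in MB> <MB/s>" on stderr, as current lmbench versions do:
-
-.. code-block:: python
-
-    import statistics
-    import subprocess
-
-    def memory_bandwidth(size: str = "128m", op: str = "rd", repeats: int = 5) -> float:
-        """Average several bw_mem runs and return the bandwidth in MB/s."""
-        samples = []
-        for _ in range(repeats):
-            res = subprocess.run(
-                ["bw_mem", size, op],
-                capture_output=True, text=True, check=True,
-            )
-            # bw_mem writes its result line, "<MB> <MB/s>", to stderr.
-            samples.append(float(res.stderr.split()[1]))
-        return statistics.mean(samples)
-
-    if __name__ == "__main__":
-        print("mean read bandwidth: %.2f MB/s" % memory_bandwidth())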
-
-TC014
------
-The Unixbench is used to measure processing speed, that is instructions per
-second. It can be seen from the dashboard that the processing test results
-vary from scores 3272 to 3444, and there is only one result per date. The
-overall average score is 3371. No SLA set.
-
-TC037
------
-The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
-on different blades are measured when increasing the amount of UDP flows sent
-between the VMs using pktgen as packet generator tool.
-
-Round trip times and packet throughput between VMs can typically be affected by
-the amount of flows set up and result in higher RTT and less PPS throughput.
-
-The mean packet throughput of the four test runs is 119.85, 128.02, 121.40 and
-126.08 kpps, of which the result of the second is the highest. The RTT results
-of all the test runs keep flat at approx. 37 ms. It is obvious that the PPS
-results are not as consistent as the RTT results.
-
-The number of flows in the four test runs is 240 k on average, and the PPS
-results fluctuate somewhat, since the largest packet throughput is 184 kpps and
-the minimum throughput is 49 kpps.
-
-No packet receive errors are reported in the four runs, but there are still
-lost packets in all the test runs. The RTT values obtained by ping of the four
-runs have a similar average value, that is 38 ms, of which the worst RTT is
-93 ms on Sep. 14th.
-
-CPU load varies widely across the four test runs, since the minimum value and
-the peak of CPU load are 0 percent and 51 percent respectively. The best
-result is obtained on Sep. 14th.
-
-TC069
------
-With the block size changing from 1 kb to 512 kb, the memory write bandwidth
-tends to become larger first and then smaller within every test run, and it
-ranges from 22.3 GB/s to 26.8 GB/s and then down to 18.5 GB/s on average. Since
-the test id is one, only the INT memory write bandwidth is tested. On
-the whole, when the block size is 8 kb and 16 kb, the memory write bandwidth
-looks similar, with a minimal BW of 22.5 GB/s and a peak value of 28.7 GB/s. The
-SLA is set to 7 GB/s. The SLA value is used as a reference, it has not been
-defined by OPNFV.
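-
-To make the shape of this block size sweep concrete, the short sketch below
-post-processes a set of purely hypothetical per-block-size averages (the
-numbers are illustrative, loosely based on the range quoted above, and are not
-measured values) and reports where the write bandwidth peaks:
-
-.. code-block:: python
-
-    # Hypothetical per-block-size averages in GB/s, for illustration only.
-    samples = {
-        "1k": 22.3, "2k": 24.0, "4k": 25.9, "8k": 26.8, "16k": 26.5,
-        "32k": 25.1, "64k": 23.8, "128k": 21.9, "256k": 20.2, "512k": 18.5,
-    }
-
-    peak_block, peak_bw = max(samples.items(), key=lambda item: item[1])
-    print("peak write bandwidth: %.1f GB/s at block size %s" % (peak_bw, peak_block))
-    print("overall average: %.1f GB/s" % (sum(samples.values()) / len(samples)))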
-
-TC070
------
-The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
-on different blades are measured when increasing the amount of UDP flows sent
-between the VMs using pktgen as packet generator tool.
-
-Round trip times and packet throughput between VMs can typically be affected by
-the amount of flows set up and result in higher RTT and less PPS throughput.
-
-The network latency is measured by ping, and the results of the four test runs
-look similar with each other. Within each test run, the maximum RTT can reach
-more than 80 ms and the average RTT is usually approx. 38 ms. On the whole, the
-average RTTs of the four runs keep flat.
-
-Memory utilization is measured by free, which can display amount of free and
-used memory in the system. The largest amount of used memory is 268 MiB on Sep
-14, which also has the largest minimum memory. The other three test
-runs have similar used memory. On the other hand, the free memory of the
-four runs have the same smallest minimum value, that is about 223 MiB, and the
-maximum free memory of three runs have the similar result, that is 337 MiB,
-except that on Sep. 14th, whose maximum free memory is 254 MiB. On the whole,
-all the test runs have similar average free memory.
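-
-The free tool itself reads /proc/meminfo; a minimal sketch of collecting the
-same used and free memory figures directly from a Linux guest could look as
-follows:
-
-.. code-block:: python
-
-    def meminfo_mib() -> dict:
-        """Return selected /proc/meminfo fields converted from kB to MiB."""
-        fields = {}
-        with open("/proc/meminfo") as fh:
-            for line in fh:
-                key, value = line.split(":", 1)
-                fields[key] = int(value.split()[0])  # values are reported in kB
-        wanted = ("MemTotal", "MemFree", "Buffers", "Cached")
-        return {key: fields[key] // 1024 for key in wanted}
-
-    if __name__ == "__main__":
-        for name, mib in meminfo_mib().items():
-            print("%-9s %6d MiB" % (name, mib))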
-
-Network throughput and packet loss can be measured by pktgen, which is a tool
-in the network for generating traffic loads for network experiments. The mean
-network throughput of the four test runs seem quite different, ranging from
-119.85 kpps to 128.02 kpps. The average number of flows in these tests is
-24000, and each run has a minimum number of flows of 2 and a maximum number
-of flows of 1.001 Mil. At the same time, the corresponding packet throughput
-differ between 49.4k and 193.3k with an average packet throughput of approx.
-125k. On the whole, the PPS results seem consistent. Within each of the four
-test runs, the packet throughput does not seem to increase as the number of
-flows grows.
-
-TC071
------
-The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
-on different blades are measured when increasing the amount of UDP flows sent
-between the VMs using pktgen as packet generator tool.
-
-Round trip times and packet throughput between VMs can typically be affected by
-the amount of flows set up and result in higher RTT and less PPS throughput.
-
-The network latency is measured by ping, and the results of the four test runs
-look similar with each other. Within each test run, the maximum RTT can reach
-more than 94 ms and the average RTT is usually approx. 35 ms. On the whole, the
-average RTTs of the four runs keep flat.
-
-Cache utilization is measured by cachestat, which can display size of cache and
-buffer in the system. Cache utilization statistics are collected during UDP
-flows sent between the VMs using pktgen as packet generator tool. The largest
-cache size is 212 MiB in the four runs, and the smallest cache size is 75 MiB.
-On the whole, the average cache size of the four runs is approx. 208 MiB.
-Meanwhile, the trends of the buffer sizes look similar to each other.
-
-Packet throughput can be measured by pktgen, which is a tool in the network for
-generating traffic loads for network experiments. The mean packet throughput of
-the four test runs seem quite different, ranging from 119.85 kpps to 128.02
-kpps. The average number of flows in these tests is 239.7k, and each run has a
-minimum number of flows of 2 and a maximum number of flows of 1.001 Mil. At the
-same time, the corresponding packet throughput differ between 49.4k and 193.3k
-with an average packet throughput of approx. 125k. On the whole, the PPS results
-seem consistent. Within each of the four test runs, the packet throughput does
-not seem to increase as the number of flows grows.
-
-TC072
------
-The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
-on different blades are measured when increasing the amount of UDP flows sent
-between the VMs using pktgen as packet generator tool.
-
-Round trip times and packet throughput between VMs can typically be affected by
-the amount of flows set up and result in higher RTT and less PPS throughput.
-
-The RTT results are similar throughout the different test dates and runs at
-approx. 32 ms. The PPS results are not as consistent as the RTT results.
-
-Network utilization is measured by sar, the system activity reporter, which
-can display the average statistics for the time since the system was started.
-Network utilization statistics are collected during UDP flows sent between the
-VMs using pktgen as packet generator tool. The largest total number of packets
-transmitted per second differs between runs: the smallest number of
-packets transmitted per second is 6 pps on Sep. 12th and the largest is
-210.8 kpps. Meanwhile, the largest total number of packets received per second
-also differs between runs: the smallest number of packets received per
-second is 2 pps on Sep. 13th and the largest is 250.2 kpps.
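-
-sar reads the same kernel counters that are exposed in /proc/net/dev; as an
-illustration, the sketch below derives received and transmitted packets per
-second from those counters, assuming a Linux guest and a hypothetical interface
-name eth0:
-
-.. code-block:: python
-
-    import time
-
-    def packet_counters(iface: str) -> tuple:
-        """Return (rx_packets, tx_packets) for one interface from /proc/net/dev."""
-        with open("/proc/net/dev") as fh:
-            for line in fh:
-                if line.strip().startswith(iface + ":"):
-                    fields = line.split(":", 1)[1].split()
-                    return int(fields[1]), int(fields[9])
-        raise ValueError("interface not found: " + iface)
-
-    def packets_per_second(iface: str = "eth0", interval: float = 1.0) -> tuple:
-        """Sample the counters twice and return (rx_pps, tx_pps)."""
-        rx1, tx1 = packet_counters(iface)
-        time.sleep(interval)
-        rx2, tx2 = packet_counters(iface)
-        return (rx2 - rx1) / interval, (tx2 - tx1) / interval
-
-    if __name__ == "__main__":
-        rx_pps, tx_pps = packets_per_second()
-        print("rx: %.0f pps, tx: %.0f pps" % (rx_pps, tx_pps))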
-
-In some test runs when running with less than approx. 90000 flows the PPS
-throughput is normally flatter compared to when running with more flows, after
-which the PPS throughput decreases. For the other test runs there is however no
-significant change to the PPS throughput when the number of flows are
-increased. In some test runs the PPS is also greater with 1000000 flows
-compared to other test runs where the PPS result is less with only 2 flows.
-
-There are lost packets reported in most of the test runs. There is no observed
-correlation between the amount of flows and the amount of lost packets.
-The lost amount of packets normally differs a lot per test run.
-
-Detailed test results
----------------------
-The scenario was run on Intel POD5_ with:
-
-* Joid
-* OpenStack Mitaka
-* OpenVirtualSwitch 2.5.90
-* OpenDayLight Beryllium
-
-Rationale for decisions
------------------------
-Pass
-
-Conclusions and recommendations
--------------------------------
-Tests were successfully executed and metrics collected.
-No SLA was verified. To be decided on in next release of OPNFV.
-
-
diff --git a/docs/release/results/os-nosdn-nofeature-noha.rst b/docs/release/results/os-nosdn-nofeature-noha.rst
deleted file mode 100644
index 8b7c184bb..000000000
--- a/docs/release/results/os-nosdn-nofeature-noha.rst
+++ /dev/null
@@ -1,259 +0,0 @@
-.. This work is licensed under a Creative Commons Attribution 4.0 International
-.. License.
-.. http://creativecommons.org/licenses/by/4.0
-
-
-========================================
-Test Results for os-nosdn-nofeature-noha
-========================================
-
-.. toctree::
- :maxdepth: 2
-
-
-Joid
-=====
-
-.. _Grafana: http://testresults.opnfv.org/grafana/dashboard/db/yardstick-main
-.. _POD5: https://wiki.opnfv.org/pharos?&#community_test_labs
-
-Overview of test results
-------------------------
-
-See Grafana_ for viewing test result metrics for each respective test case. It
-is possible to choose which specific scenarios to look at, and then to zoom in
-on the details of each test scenario run as well.
-
-All of the test case results below are based on 4 scenario test runs, each run
-on the Intel POD5_ between September 12 and 15 in 2016.
-
-TC002
------
-The round-trip-time (RTT) between 2 VMs on different blades is measured using
-ping. Most test run measurements result on average between 1.50 and 1.68 ms.
-Only one test run reaches the greatest RTT spike of 2.92 ms; that same run also
-has the smallest RTT of 1.06 ms. The other three runs have no similar spike at all,
-the minimum and average RTTs of which are approx. 1.50 ms and 1.68 ms. SLA set to
-be 10 ms. The SLA value is used as a reference, it has not been defined by
-OPNFV.
-
-TC005
------
-The IO read bandwidth actually refers to the storage throughput, which is
-measured by fio and the greatest IO read bandwidth of the four runs is 177.5
-MB/s. The IO read bandwidth of the four runs looks similar across the four
-days, with an average between 46.7 and 62.5 MB/s. One of the runs has a minimum
-BW of 680 KB/s and another has a maximum BW of 177.5 MB/s. The SLA for read
-bandwidth is set to 400 MB/s, which is used as a reference, and it has not
-been defined by OPNFV.
-
-The results of storage IOPS for the four runs look similar to each other. The
-test runs all reach approx. 1.55 K IO read operations per second, with a minimum
-value of less than 60 operations per second.
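-
-A minimal, purely illustrative sketch of driving fio and reducing its JSON
-output to these two figures (bandwidth and IOPS) is shown below; the job
-parameters and the test file path are hypothetical, and fio reports "bw" in
-KiB/s in its JSON output:
-
-.. code-block:: python
-
-    import json
-    import subprocess
-
-    def fio_read_stats(path: str = "/tmp/fio.test", runtime_s: int = 30) -> tuple:
-        """Run a sequential read job with fio and return (MiB/s, IOPS)."""
-        out = subprocess.run(
-            ["fio", "--name=readjob", "--rw=read", "--bs=4k", "--size=1G",
-             "--runtime=" + str(runtime_s), "--time_based", "--direct=1",
-             "--filename=" + path, "--output-format=json"],
-            capture_output=True, text=True, check=True,
-        ).stdout
-        job = json.loads(out)["jobs"][0]["read"]
-        return job["bw"] / 1024.0, job["iops"]
-
-    if __name__ == "__main__":
-        bandwidth, iops = fio_read_stats()
-        print("read bandwidth: %.1f MiB/s, IOPS: %.0f" % (bandwidth, iops))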
-
-TC010
------
-The tool we use to measure memory read latency is lmbench, which is a series of
-micro benchmarks intended to measure basic operating system and hardware system
-metrics. The memory read latency of the four runs is between 1.134 ns and 1.227
-ns on average. The variations within each test run are quite different; some
-vary over a large range and others change only a little. For example, the
-largest change is on September 15, when the memory read latency ranges
-from 1.116 ns to 1.393 ns. However, the results on September 12 change very
-little, mainly keeping flat and ranging from 1.124 ns to 1.55 ns. The SLA is set
-to be 30 ns. The SLA value is used as a reference, it has not been defined by
-OPNFV.
-
-TC011
------
-Iperf3 is a tool for evaluating the packet delay variation between 2 VMs on
-different blades. The reported packet delay variations of the four test runs
-differ from each other. The results on September 13 within the date look
-similar and the values are between 0.0213 and 0.0225 ms, which is 0.0217 ms on
-average. However, on the third day, the packet delay variation changes over a
-wide range within the date, from 0.008 ms to 0.0225 ms, and has
-the minimum value. On Sep. 12, the packet delay is quite long, since the value is
-between 0.0236 and 0.0287 ms and it also has the maximum packet delay of 0.0287
-ms. The packet delay of the last test run is 0.0151 ms on average. The SLA
-value sets to be 10 ms. The SLA value is used as a reference, it has not been
-defined by OPNFV.
-
-TC012
------
-Lmbench is also used to measure the memory read and write bandwidth, in which
-we use bw_mem to obtain the results. Among the four test runs, the memory
-bandwidth of three test runs keeps almost stable within each run, which is
-11.65, 11.57 and 11.64 GB/s on average. However, the memory read and write
-bandwidth on Sep. 14 has a large range, for it ranges from 11.36 GB/s to 16.68
-GB/s. Here SLA set to be 15 GB/s. The SLA value is used as a reference, it has
-not been defined by OPNFV.
-
-TC014
------
-The Unixbench is used to evaluate the IaaS processing speed with regards to
-score of single cpu running and parallel running. It can be seen from the
-dashboard that the processing test results vary from scores 3222 to 3585, and
-there is only one result per date. No SLA set.
-
-TC037
------
-The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
-on different blades are measured when increasing the amount of UDP flows sent
-between the VMs using pktgen as packet generator tool.
-
-Round trip times and packet throughput between VMs can typically be affected by
-the amount of flows set up and result in higher RTT and less PPS throughput.
-
-The mean packet throughput of the four test runs is 124.8, 160.1, 113.8 and
-137.3 kpps, of which the result of the second is the highest. The RTT results
-of all the test runs keep flat at approx. 37 ms. It is obvious that the PPS
-results are not as consistent as the RTT results.
-
-The number of flows in the four test runs is 240 k on average, and the PPS
-results fluctuate somewhat, since the largest packet throughput is 243.1 kpps
-and the minimum throughput is 37.6 kpps.
-
-No packet receive errors are reported in the four runs, but there are still
-lost packets in all the test runs. The RTT values obtained by ping of the four
-runs have a similar average value, between 32 ms and 41 ms, of which
-the worst RTT is 155 ms on Sep. 14th.
-
-CPU load is measured by mpstat, and the CPU load of the four test runs seems
-fairly similar, since the minimum value and the peak of CPU load are 0
-percent and 9 percent respectively. The best result is obtained on Sep.
-15th, with a CPU load of nine percent.
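-
-mpstat derives its figures from /proc/stat; a small sketch of computing an
-equivalent overall CPU load over one sampling interval on a Linux guest could
-look as follows:
-
-.. code-block:: python
-
-    import time
-
-    def cpu_times() -> tuple:
-        """Return (idle, total) jiffies from the aggregate cpu line of /proc/stat."""
-        with open("/proc/stat") as fh:
-            fields = [int(value) for value in fh.readline().split()[1:]]
-        idle = fields[3] + fields[4]  # idle + iowait
-        return idle, sum(fields)
-
-    def cpu_load_percent(interval: float = 1.0) -> float:
-        """Sample /proc/stat twice and return the CPU load over the interval."""
-        idle1, total1 = cpu_times()
-        time.sleep(interval)
-        idle2, total2 = cpu_times()
-        busy = (total2 - total1) - (idle2 - idle1)
-        return 100.0 * busy / (total2 - total1)
-
-    if __name__ == "__main__":
-        print("CPU load: %.1f %%" % cpu_load_percent())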
-
-TC069
------
-With the block size changing from 1 kb to 512 kb, the memory write bandwidth
-tends to become larger first and then smaller within every test run, and it
-ranges from 22.4 GB/s to 26.5 GB/s and then down to 18.6 GB/s on average. Since
-the test id is one, only the INT memory write bandwidth is tested. On
-the whole, when the block size is 8 kb and 16 kb, the memory write bandwidth
-looks similar, with a minimal BW of 22.5 GB/s and a peak value of 28.7 GB/s. And
-then with the block size becoming larger, the memory write bandwidth tends to
-decrease. The SLA is set to 7 GB/s. The SLA value is used as a reference, it has
-not been defined by OPNFV.
-
-TC070
------
-The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
-on different blades are measured when increasing the amount of UDP flows sent
-between the VMs using pktgen as packet generator tool.
-
-Round trip times and packet throughput between VMs can typically be affected by
-the amount of flows set up and result in higher RTT and less PPS throughput.
-
-The network latency is measured by ping, and the results of three test runs look
-similar to each other. Within these test runs, the maximum RTT can reach
-95 ms and the average RTT is usually approx. 36 ms. The network latency tested
-on Sep. 14 shows that it has a peak latency of 155 ms. But on the whole, the
-average RTTs of the four runs keep flat.
-
-Memory utilization is measured by free, which can display amount of free and
-used memory in the system. The largest amount of used memory is 270 MiB on Sep
-13, which also has the smallest minimum memory utilization. The other
-three test runs have similar used memory, with an average memory usage of
-264 MiB. On the other hand, the free memory of the four runs have the same
-smallest minimum value, that is about 223 MiB, and the maximum free memory of
-three runs have the similar result, that is 226 MiB, except that on Sep. 13th,
-whose maximum free memory is 273 MiB. On the whole, all the test runs have
-similar average free memory.
-
-Network throughput and packet loss can be measured by pktgen, which is a tool
-in the network for generating traffic loads for network experiments. The mean
-network throughput of the four test runs seem quite different, ranging from
-119.85 kpps to 128.02 kpps. The average number of flows in these tests is
-240000, and each run has a minimum number of flows of 2 and a maximum number
-of flows of 1.001 Mil. At the same time, the corresponding packet throughput
-differ between 38k and 243k with an average packet throughput of approx. 134k.
-On the whole, the PPS results seem consistent. Within each of the four test
-runs, the packet throughput does not seem to increase as the number of flows
-grows.
-
-TC071
------
-The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
-on different blades are measured when increasing the amount of UDP flows sent
-between the VMs using pktgen as packet generator tool.
-
-Round trip times and packet throughput between VMs can typically be affected by
-the amount of flows set up and result in higher RTT and less PPS throughput.
-
-The network latency is measured by ping, and the results of the four test runs
-look similar with each other. Within each test run, the maximum RTT can reach
-79 ms and the average RTT is usually approx. 35 ms. On the whole, the average
-RTTs of the four runs keep flat.
-
-Cache utilization is measured by cachestat, which can display size of cache and
-buffer in the system. Cache utilization statistics are collected during UDP
-flows sent between the VMs using pktgen as packet generator tool. The largest
-cache size is 214 MiB in the four runs, and the smallest cache size is 100 MiB.
-On the whole, the average cache size of the four runs is approx. 210 MiB.
-Meanwhile, the trends of the buffer sizes look similar to each other. On the
-other hand, the mean buffer size of the four runs keep flat, since they have a
-minimum value of approx. 7 MiB and a maximum value of 8 MiB, with an average
-value of about 8 MiB.
-
-Packet throughput can be measured by pktgen, which is a tool in the network for
-generating traffic loads for network experiments. The mean packet throughput of
-the four test runs seem quite different, ranging from 113.8 kpps to 124.8 kpps.
-The average number of flows in these tests is 240k, and each run has a minimum
-number of flows of 2 and a maximum number of flows of 1.001 Mil. At the same
-time, the corresponding packet throughput differ between 47.6k and 243.1k with
-an average packet throughput between 113.8k and 160.1k. Within each of the four
-test runs, the packet throughput does not seem to increase as the number of
-flows grows.
-
-TC072
------
-The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
-on different blades are measured when increasing the amount of UDP flows sent
-between the VMs using pktgen as packet generator tool.
-
-Round trip times and packet throughput between VMs can typically be affected by
-the amount of flows set up and result in higher RTT and less PPS throughput.
-
-The RTT results are similar throughout the different test dates and runs
-between 0 ms and 79 ms with an average latency of approx. 35 ms. The PPS
-results are not as consistent as the RTT results, for the mean packet
-throughput of the four runs differ from 113.8 kpps to 124.8 kpps.
-
-Network utilization is measured by sar, the system activity reporter, which
-can display the average statistics for the time since the system was started.
-Network utilization statistics are collected during UDP flows sent between the
-VMs using pktgen as packet generator tool. The largest total number of packets
-transmitted per second look similar on the first three runs with a minimum
-number of 10 pps and a maximum number of 97 kpps, except the one on Sep. 15th,
-in which the number of packets transmitted per second is 10 pps. Meanwhile, the
-largest total number of packets received per second differs between runs:
-the smallest number of packets received per second is 1 pps and the
-largest is 276 kpps.
-
-In some test runs when running with less than approx. 90000 flows the PPS
-throughput is normally flatter compared to when running with more flows, after
-which the PPS throughput decreases. For the other test runs there is however no
-significant change to the PPS throughput when the number of flows are
-increased. In some test runs the PPS is also greater with 1000000 flows
-compared to other test runs where the PPS result is less with only 2 flows.
-
-There are lost packets reported in most of the test runs. There is no observed
-correlation between the amount of flows and the amount of lost packets.
-The lost amount of packets normally differs a lot per test run.
-
-Detailed test results
----------------------
-The scenario was run on Intel POD5_ with:
-
-* Joid
-* OpenStack Mitaka
-* OpenVirtualSwitch 2.5.90
-* OpenDayLight Beryllium
-
-Rationale for decisions
------------------------
-Pass
-
-Conclusions and recommendations
--------------------------------
-Tests were successfully executed and metrics collected.
-No SLA was verified. To be decided on in next release of OPNFV.
diff --git a/docs/release/results/os-odl_l2-bgpvpn-ha.rst b/docs/release/results/os-odl_l2-bgpvpn-ha.rst
deleted file mode 100644
index 2bd6dc35d..000000000
--- a/docs/release/results/os-odl_l2-bgpvpn-ha.rst
+++ /dev/null
@@ -1,53 +0,0 @@
-.. This work is licensed under a Creative Commons Attribution 4.0 International
-.. License.
-.. http://creativecommons.org/licenses/by/4.0
-
-
-====================================
-Test Results for os-odl_l2-bgpvpn-ha
-====================================
-
-.. toctree::
- :maxdepth: 2
-
-
-fuel
-====
-
-.. _Grafana: http://testresults.opnfv.org/grafana/dashboard/db/yardstick-main
-.. _POD2: https://wiki.opnfv.org/pharos?&#community_test_labs
-
-Overview of test results
-------------------------
-
-See Grafana_ for viewing test result metrics for each respective test case. It
-is possible to choose which specific scenarios to look at, and then to zoom in
-on the details of each test scenario run as well.
-
-All of the test case results below are based on 4 scenario test runs, each run
-on the Ericsson POD2_ between September 7 and 11 in 2016.
-
-TC043
------
-The round-trip-time (RTT) between 2 nodes is measured using
-ping. Most test run measurements result on average between 0.21 and 0.28 ms.
-A few runs start with a 0.32 - 0.35 ms RTT spike (This could be because of
-normal ARP handling). To be able to draw conclusions more runs should be made.
-SLA set to 10 ms. The SLA value is used as a reference, it has not been defined
-by OPNFV.
-
-Detailed test results
----------------------
-The scenario was run on Ericsson POD2_ with:
-
-* Fuel 9.0
-* OpenStack Mitaka
-* OpenVirtualSwitch 2.5.90
-* OpenDayLight Beryllium
-
-Rationale for decisions
------------------------
-Pass
-
-Tests were successfully executed and metrics collected.
-No SLA was verified. To be decided on in next release of OPNFV.
-
diff --git a/docs/release/results/os-odl_l2-nofeature-ha.rst b/docs/release/results/os-odl_l2-nofeature-ha.rst
deleted file mode 100644
index ac0c5bb59..000000000
--- a/docs/release/results/os-odl_l2-nofeature-ha.rst
+++ /dev/null
@@ -1,743 +0,0 @@
-.. This work is licensed under a Creative Commons Attribution 4.0 International
-.. License.
-.. http://creativecommons.org/licenses/by/4.0
-
-
-=======================================
-Test Results for os-odl_l2-nofeature-ha
-=======================================
-
-.. toctree::
- :maxdepth: 2
-
-
-apex
-====
-
-.. _Grafana: http://testresults.opnfv.org/grafana/dashboard/db/yardstick-main
-.. _POD1: https://wiki.opnfv.org/pharos?&#community_test_labs
-
-Overview of test results
-------------------------
-
-See Grafana_ for viewing test result metrics for each respective test case. It
-is possible to choose which specific scenarios to look at, and then to zoom in
-on the details of each test scenario run as well.
-
-All of the test case results below are based on 4 scenario test runs, each run
-on the LF POD1_ between September 14 and 17 in 2016.
-
-TC002
------
-The round-trip-time (RTT) between 2 VMs on different blades is measured using
-ping. Most test run measurements result on average between 0.49 ms and 0.60 ms.
-Only one test run reaches the greatest RTT spike of 0.93 ms. Meanwhile, the
-smallest network latency is 0.33 ms, which is obtained on Sep. 14th.
-SLA set to be 10 ms. The SLA value is used as a reference, it has not been
-defined by OPNFV.
-
-TC005
------
-The IO read bandwidth actually refers to the storage throughput, which is
-measured by fio and the greatest IO read bandwidth of the four runs is 416
-MB/s. The IO read bandwidth of all four runs looks similar, with an average
-between 128 and 131 MB/s. One of the runs has a minimum BW of 497 KB/s. The SLA
-of read bandwidth sets to be 400 MB/s, which is used as a reference, and it has
-not been defined by OPNFV.
-
-The results of storage IOPS for the four runs look similar with each other. The
-IO read times per second of the four test runs have an average value at 1k per
-second, and meanwhile, the minimum result is only 45 times per second.
-
-TC010
------
-The tool we use to measure memory read latency is lmbench, which is a series of
-micro benchmarks intended to measure basic operating system and hardware system
-metrics. The memory read latency of the four runs is between 1.0859 ns and
-1.0869 ns on average. The variations within each test run are quite different;
-some vary over a large range and others change only a little. For example, the
-largest change is on September 14th, when the memory read latency ranges
-from 1.086 ns to 1.091 ns.
-The SLA is set to 30 ns. The SLA value is used as a reference, it has not been
-defined by OPNFV.
-
-TC011
------
-Packet delay variation between 2 VMs on different blades is measured using
-Iperf3. On the first two test runs the reported packet delay variation varies between
-0.0037 and 0.0740 ms, with an average delay variation between 0.0096 ms and 0.0321 ms.
-On the second date the delay variation varies between 0.0063 and 0.0096 ms, with
-an average delay variation of 0.0124 - 0.0141 ms.
-
-TC012
------
-Lmbench is also used to measure the memory read and write bandwidth, in which
-we use bw_mem to obtain the results. Among the four test runs, the trends of
-three memory bandwidth results look similar, all staying within a narrow range, and
-the average result is 19.88 GB/s. Here the SLA is set to 15 GB/s. The SLA value is
-used as a reference, it has not been defined by OPNFV.
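-
-A small illustrative sketch of a bw_mem sweep (the 256 MB buffer size is a
-placeholder; bw_mem writes "<size_MB> <bandwidth_MB/s>" to stderr)::
-
-    import subprocess
-
-    def bw_mem(size, op):
-        """Return the bandwidth in GB/s reported by lmbench bw_mem."""
-        proc = subprocess.run(["bw_mem", size, op],
-                              capture_output=True, text=True)
-        # Example output: "256.00 19354.22" (size in MB, bandwidth in MB/s).
-        return float(proc.stderr.split()[1]) / 1000.0
-
-    for op in ("rd", "wr", "rdwr"):
-        print("%s bandwidth: %.2f GB/s" % (op, bw_mem("256m", op)))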
-
-TC014
------
-Unixbench is used to evaluate the IaaS processing speed with regard to the
-scores of single CPU running and parallel running. It can be seen from the
-dashboard that the processing test results vary from scores 3754k to 3831k,
-with only one result per date. No SLA set.
-
-TC037
------
-The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
-on different blades are measured when increasing the amount of UDP flows sent
-between the VMs using pktgen as packet generator tool.
-
-Round trip times and packet throughput between VMs can typically be affected by
-the amount of flows set up and result in higher RTT and less PPS throughput.
-
-The mean packet throughput of the four test runs is between 307.3 kpps and
-447.1 kpps, of which the result of the third run is the highest. The RTT
-results of all the test runs keep flat at approx. 15 ms. It is obvious that the
-PPS results are not as consistent as the RTT results.
-
-The number of flows in the four test runs is 240 k on average, and the PPS
-results fluctuate somewhat, since the largest packet throughput is 418.1 kpps
-and the minimum throughput is 326.5 kpps.
-
-There are no packet receive errors in the four runs, but there are still
-lost packets in all the test runs. The RTT values obtained by ping of the four
-runs have a similar average value of approx. 15 ms.
-
-CPU load is measured by mpstat, and the CPU load of the four test runs looks
-similar, since the minimum and the peak of the CPU load are between 0
-percent and nine percent respectively. The highest value is obtained on Sep.
-1, with a CPU load of nine percent. On the whole, the CPU load is very
-low, since the average value is quite small.
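-
-As an illustration of the CPU load sampling described above, a minimal sketch
-that derives the load from the idle percentage reported by mpstat (field layout
-assumed as on a typical sysstat installation; this is not the test case logic
-itself)::
-
-    import subprocess
-
-    # Sample all CPUs once over one second; the last column of the
-    # "Average:" row is %idle, so the load is simply 100 - %idle.
-    out = subprocess.run(["mpstat", "1", "1"], capture_output=True,
-                         text=True, check=True).stdout
-    avg = next(line for line in out.splitlines() if line.startswith("Average"))
-    idle = float(avg.split()[-1].replace(",", "."))
-    print("CPU load: %.1f %%" % (100.0 - idle))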
-
-TC069
------
-With the block size changing from 1 kb to 512 kb, the memory write bandwidth
-tends to become larger first and then smaller within every test run, ranging
-from 28.2 GB/s to 29.5 GB/s and then to 29.2 GB/s on average. Since the
-test id is one, only the INT memory write bandwidth is tested. On
-the whole, when the block size is 2 kb or 16 kb, the memory write bandwidth
-looks similar, with a minimum BW of 25.8 GB/s and a peak value of 28.3 GB/s.
-As the block size becomes larger, the memory write bandwidth tends to
-decrease. The SLA is set to 7 GB/s. The SLA value is used as a reference; it
-has not been defined by OPNFV.
-
-TC070
------
-The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
-on different blades are measured when increasing the amount of UDP flows sent
-between the VMs using pktgen as packet generator tool.
-
-Round trip times and packet throughput between VMs can typically be affected by
-the amount of flows set up and result in higher RTT and less PPS throughput.
-
-The network latency is measured by ping, and the results of the four test runs
-look similar to each other; within these test runs, the maximum RTT can
-reach 39 ms and the average RTT is usually approx. 15 ms. The network latency
-tested on Sep. 1 and Sep. 8 has a peak of 39 ms. On the whole,
-the average RTTs of the four runs stay flat and the network latency is
-relatively low.
-
-Memory utilization is measured by free, which can display the amount of free
-and used memory in the system. The largest amount of used memory is 267 MiB
-across the four runs. In general, the four test runs show high memory
-utilization, which can reach 257 MiB on average. The mean free memory of the
-four test runs follows a trend similar to that of the mean used memory and
-generally ranges from 233 MiB to 241 MiB.
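-
-A minimal sketch of how the used and free memory figures quoted above could be
-sampled with free (column layout assumed as in recent procps releases)::
-
-    import subprocess
-
-    # "free -m" prints a "Mem:" row: total, used, free, shared, buff/cache, ...
-    out = subprocess.run(["free", "-m"], capture_output=True, text=True,
-                         check=True).stdout
-    mem = next(l.split() for l in out.splitlines() if l.startswith("Mem:"))
-    total, used, free_mem = int(mem[1]), int(mem[2]), int(mem[3])
-    print("used: %d MiB, free: %d MiB of %d MiB" % (used, free_mem, total))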
-
-Packet throughput and packet loss can be measured by pktgen, a tool for
-generating traffic loads for network experiments. The mean
-packet throughput of the four test runs differs considerably, ranging from
-305.3 kpps to 447.1 kpps. The average number of flows in these tests is
-240000, and each run has a minimum number of flows of 2 and a maximum of
-1.001 million. At the same time, the corresponding average packet
-throughput is between 354.4 kpps and 381.8 kpps. In summary, the PPS results
-seem consistent. Within each of the four test runs, the packet throughput does
-not necessarily increase when the number of flows becomes larger.
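-
-A heavily simplified, hypothetical sketch of driving the kernel pktgen module
-for such a load test (the interface, destination address and packet count are
-placeholders; it requires root and "modprobe pktgen", and is not the Yardstick
-configuration itself)::
-
-    def pgset(path, cmd):
-        """Write one pktgen command to a /proc/net/pktgen control file."""
-        with open(path, "w") as f:
-            f.write(cmd + "\n")
-
-    pgset("/proc/net/pktgen/kpktgend_0", "rem_device_all")
-    pgset("/proc/net/pktgen/kpktgend_0", "add_device eth0")
-    pgset("/proc/net/pktgen/eth0", "count 1000000")
-    pgset("/proc/net/pktgen/eth0", "pkt_size 64")
-    pgset("/proc/net/pktgen/eth0", "dst 10.0.0.5")
-    pgset("/proc/net/pktgen/eth0", "flows 1024")
-    pgset("/proc/net/pktgen/pgctrl", "start")    # blocks until count is sent
-    print(open("/proc/net/pktgen/eth0").read())  # "Result:" line contains pps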
-
-TC071
------
-The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
-on different blades are measured when increasing the amount of UDP flows sent
-between the VMs using pktgen as packet generator tool.
-
-Round trip times and packet throughput between VMs can typically be affected by
-the amount of flows set up and result in higher RTT and less PPS throughput.
-
-The network latency is measured by ping, and the results of the four test runs
-look similar with each other. Within each test run, the maximum RTT is only 42
-ms and the average RTT is usually approx. 15 ms. On the whole, the average
-RTTs of the four runs keep stable and the network latency is relatively small.
-
-Cache utilization is measured by cachestat, which can display the size of the
-cache and buffer in the system. Cache utilization statistics are collected
-during UDP flows sent between the VMs using pktgen as packet generator tool.
-The largest cache size is 212 MiB, which is the same for the four runs, and the
-smallest cache size is 75 MiB. On the whole, the average cache size of the four
-runs looks the same and is between 197 MiB and 211 MiB. Meanwhile, the trend of
-the buffer size keeps flat, with a minimum value of 7 MiB and a maximum value
-of 8 MiB, and an average value of about 7.9 MiB.
-
-Packet throughput can be measured by pktgen, a tool for
-generating traffic loads for network experiments. The mean packet throughput of
-the four test runs varies between 354.4 kpps and 381.8 kpps. The average number
-of flows in these tests is 240k, and each run has a minimum number of flows of
-2 and a maximum of 1.001 million. At the same time, the corresponding
-packet throughput varies between 305.3 kpps and 447.1 kpps. Within each of the
-four test runs, the packet throughput does not necessarily increase when the
-number of flows becomes larger.
-
-TC072
------
-The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
-on different blades are measured when increasing the amount of UDP flows sent
-between the VMs using pktgen as packet generator tool.
-
-Round trip times and packet throughput between VMs can typically be affected by
-the amount of flows set up and result in higher RTT and less PPS throughput.
-
-The RTT results are similar throughout the different test dates and runs,
-between 0 ms and 42 ms, with an average latency of less than 15 ms. The PPS
-results are not as consistent as the RTT results, as the mean packet
-throughput of the four runs varies between 354.4 kpps and 381.8 kpps.
-
-Network utilization is measured by sar, the system activity reporter, which
-can display the average statistics for the time since the system was started.
-Network utilization statistics are collected during UDP flows sent between the
-VMs using pktgen as packet generator tool. The total number of packets
-transmitted per second looks similar for three test runs, with values varying
-widely from 10 pps to 501 kpps, while the remaining test run stays stable with
-an average of 10 packets transmitted per second. The total number of packets
-received per second of the four runs looks similar and spans a wide range of
-2 pps to 815 kpps.
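-
-A brief sketch, under the assumption of a standard sysstat sar, of how the
-per-interface packet rates quoted above could be sampled (the interface name is
-a placeholder)::
-
-    import subprocess
-
-    IFACE = "eth0"  # placeholder: the interface carrying the pktgen traffic
-
-    # "sar -n DEV 1 1" reports rxpck/s and txpck/s per interface; the
-    # "Average:" rows summarise the one-second sampling interval.
-    out = subprocess.run(["sar", "-n", "DEV", "1", "1"], capture_output=True,
-                         text=True, check=True).stdout
-    for line in out.splitlines():
-        fields = line.split()
-        if line.startswith("Average:") and IFACE in fields:
-            rx = float(fields[2].replace(",", "."))
-            tx = float(fields[3].replace(",", "."))
-            print("rx: %.0f pps, tx: %.0f pps" % (rx, tx))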
-
-In some test runs when running with less than approx. 251000 flows the PPS
-throughput is normally flatter compared to when running with more flows, after
-which the PPS throughput decreases. For the other test runs there is however no
-significant change to the PPS throughput when the number of flows are
-increased. In some test runs the PPS is also greater with 251000 flows
-compared to other test runs where the PPS result is less with only 2 flows.
-
-There are lost packets reported in most of the test runs. There is no observed
-correlation between the amount of flows and the amount of lost packets.
-The lost amount of packets normally differs a lot per test run.
-
-Detailed test results
----------------------
-The scenario was run on LF POD1_ with:
-
-* Apex
-* OpenStack Mitaka
-* OpenVirtualSwitch 2.5.90
-* OpenDayLight Beryllium
-
-Rationale for decisions
------------------------
-Pass
-
-Conclusions and recommendations
--------------------------------
-Tests were successfully executed and metrics collected.
-No SLA was verified. This is to be decided in a future OPNFV release.
-
-
-
-fuel
-====
-
-.. _Grafana: http://testresults.opnfv.org/grafana/dashboard/db/yardstick-main
-.. _POD2: https://wiki.opnfv.org/pharos?&#community_test_labs
-
-Overview of test results
-------------------------
-
-See Grafana_ for viewing test result metrics for each respective test case. It
-is possible to choose which specific scenarios to look at, and then to zoom in
-on the details of each test scenario run as well.
-
-All of the test case results below are based on 4 scenario test runs, each run
-on the Ericsson POD2_ or LF POD2_ between August 25 and 29 in 2016.
-
-TC002
------
-The round-trip-time (RTT) between 2 VMs on different blades is measured using
-ping. Most test run measurements average between 0.5 and 0.6 ms.
-A few runs start with a 1 - 1.5 ms RTT spike (this could be because of normal
-ARP handling). One test run has a greater RTT spike of 1.9 ms; it is the same
-run that has the 0.7 ms average. The other runs have no similar spike at all.
-To be able to draw conclusions, more runs should be made.
-The SLA is set to 10 ms. The SLA value is used as a reference; it has not
-been defined by OPNFV.
-
-TC005
------
-The IO read bandwidth looks similar between different dates, with an
-average between approx. 170 and 200 MB/s. Within each test run the results
-vary, with a minimum of 2 MB/s and a maximum of 838 MB/s overall. Most runs
-have a minimum BW of 3 MB/s (two runs at 2 MB/s). The maximum BW varies more in
-absolute numbers between the dates, between 617 and 838 MB/s.
-The SLA is set to 400 MB/s. The SLA value is used as a reference; it has not
-been defined by OPNFV.
-
-TC010
------
-The measurements for memory latency are similar between test dates and result
-in approx. 1.2 ns. The variations within each test run are similar, between
-1.215 and 1.219 ns. One exception is February 16, where the average is 1.222
-and varies between 1.22 and 1.28 ns.
-The SLA is set to 30 ns. The SLA value is used as a reference; it has not been
-defined by OPNFV.
-
-TC011
------
-Packet delay variation between 2 VMs on different blades is measured using
-Iperf3. On the first date the reported packet delay variation varies between
-0.0025 and 0.011 ms, with an average delay variation of 0.0067 ms.
-On the second date the delay variation varies between 0.002 and 0.006 ms, with
-an average delay variation of 0.004 ms.
-
-TC012
------
-Between test dates, the average measurements for memory bandwidth vary between
-17.4 and 17.9 GB/s. Within each test run the results vary more, with a minimum
-BW of 16.4 GB/s and a maximum of 18.2 GB/s overall.
-The SLA is set to 15 GB/s. The SLA value is used as a reference; it has not
-been defined by OPNFV.
-
-TC014
------
-The Unixbench processor test run results vary between scores 3080 and 3240,
-one result each date. The average score on the total is 3150.
-No SLA set.
-
-TC037
------
-The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
-on different blades are measured when increasing the amount of UDP flows sent
-between the VMs using pktgen as packet generator tool.
-
-Round trip times and packet throughput between VMs can typically be affected by
-the amount of flows set up and result in higher RTT and less PPS throughput.
-
-The RTT results are similar throughout the different test dates and runs at
-approx. 15 ms. Some test runs show an increase with many flows, in the range
-towards 16 to 17 ms. One exception standing out is Feb. 15 where the average
-RTT is stable at approx. 13 ms. The PPS results are not as consistent as the
-RTT results.
-In some test runs when running with less than approx. 10000 flows the PPS
-throughput is normally flatter compared to when running with more flows, after
-which the PPS throughput decreases, by around 20 percent in the worst
-case. For the other test runs there is however no significant change to the PPS
-throughput when the number of flows is increased. In some test runs the PPS
-is also greater with 1000000 flows compared to other test runs where the PPS
-result is less with only 2 flows.
-
-The average PPS throughput in the different runs varies between 414000 and
-452000 PPS. The total amount of packets in each test run is approx. 7500000 to
-8200000 packets. One test run Feb. 15 sticks out with a PPS average of
-558000 and approx. 1100000 packets in total (the same run as mentioned earlier
-for RTT results).
-
-There are lost packets reported in most of the test runs. There is no observed
-correlation between the amount of flows and the amount of lost packets.
-The amount of lost packets normally ranges between 100 and 1000 per test run,
-but there are spikes in the range of 10000 lost packets as well, and even
-more in rare cases.
-
-CPU utilization statistics are collected during UDP flows sent between the VMs
-using pktgen as packet generator tool. The average measurements for CPU
-utilization ratio vary between 1% and 2%. The peak of the CPU utilization
-ratio appears around 7%.
-
-TC069
------
-Between test dates, the average measurements for memory bandwidth vary between
-15.5 and 25.4 GB/s. Within each test run the results vary more, with a minimum
-BW of 9.7 GB/s and a maximum of 29.5 GB/s overall.
-The SLA is set to 6 GB/s. The SLA value is used as a reference; it has not been
-defined by OPNFV.
-
-TC070
------
-The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
-on different blades are measured when increasing the amount of UDP flows sent
-between the VMs using pktgen as packet generator tool.
-
-Round trip times and packet throughput between VMs can typically be affected by
-the amount of flows set up and result in higher RTT and less PPS throughput.
-
-The RTT results are similar throughout the different test dates and runs at
-approx. 15 ms. Some test runs show an increase with many flows, in the range
-towards 16 to 17 ms. One exception standing out is Feb. 15 where the average
-RTT is stable at approx. 13 ms. The PPS results are not as consistent as the
-RTT results.
-In some test runs when running with less than approx. 10000 flows the PPS
-throughput is normally flatter compared to when running with more flows, after
-which the PPS throughput decreases, by around 20 percent in the worst
-case. For the other test runs there is however no significant change to the PPS
-throughput when the number of flows is increased. In some test runs the PPS
-is also greater with 1000000 flows compared to other test runs where the PPS
-result is less with only 2 flows.
-
-The average PPS throughput in the different runs varies between 414000 and
-452000 PPS. The total amount of packets in each test run is approx. 7500000 to
-8200000 packets. One test run Feb. 15 sticks out with a PPS average of
-558000 and approx. 1100000 packets in total (the same run as mentioned earlier
-for RTT results).
-
-There are lost packets reported in most of the test runs. There is no observed
-correlation between the amount of flows and the amount of lost packets.
-The amount of lost packets normally ranges between 100 and 1000 per test run,
-but there are spikes in the range of 10000 lost packets as well, and even
-more in rare cases.
-
-Memory utilization statistics are collected during UDP flows sent between the
-VMs using pktgen as packet generator tool. The average measurements for memory
-utilization vary between 225 MB and 246 MB. The peak of memory utilization
-appears around 340 MB.
-
-TC071
------
-The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
-on different blades are measured when increasing the amount of UDP flows sent
-between the VMs using pktgen as packet generator tool.
-
-Round trip times and packet throughput between VMs can typically be affected by
-the amount of flows set up and result in higher RTT and less PPS throughput.
-
-The RTT results are similar throughout the different test dates and runs at
-approx. 15 ms. Some test runs show an increase with many flows, in the range
-towards 16 to 17 ms. One exception standing out is Feb. 15 where the average
-RTT is stable at approx. 13 ms. The PPS results are not as consistent as the
-RTT results.
-In some test runs when running with less than approx. 10000 flows the PPS
-throughput is normally flatter compared to when running with more flows, after
-which the PPS throughput decreases, by around 20 percent in the worst
-case. For the other test runs there is however no significant change to the PPS
-throughput when the number of flows is increased. In some test runs the PPS
-is also greater with 1000000 flows compared to other test runs where the PPS
-result is less with only 2 flows.
-
-The average PPS throughput in the different runs varies between 414000 and
-452000 PPS. The total amount of packets in each test run is approx. 7500000 to
-8200000 packets. One test run Feb. 15 sticks out with a PPS average of
-558000 and approx. 1100000 packets in total (the same run as mentioned earlier
-for RTT results).
-
-There are lost packets reported in most of the test runs. There is no observed
-correlation between the amount of flows and the amount of lost packets.
-The amount of lost packets normally ranges between 100 and 1000 per test run,
-but there are spikes in the range of 10000 lost packets as well, and even
-more in rare cases.
-
-Cache utilization statistics are collected during UDP flows sent between the
-VMs using pktgen as packet generator tool. The average measurements for cache
-utilization vary between 205 MB and 212 MB.
-
-TC072
------
-The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
-on different blades are measured when increasing the amount of UDP flows sent
-between the VMs using pktgen as packet generator tool.
-
-Round trip times and packet throughput between VMs can typically be affected by
-the amount of flows set up and result in higher RTT and less PPS throughput.
-
-The RTT results are similar throughout the different test dates and runs at
-approx. 15 ms. Some test runs show an increase with many flows, in the range
-towards 16 to 17 ms. One exception standing out is Feb. 15 where the average
-RTT is stable at approx. 13 ms. The PPS results are not as consistent as the
-RTT results.
-In some test runs when running with less than approx. 10000 flows the PPS
-throughput is normally flatter compared to when running with more flows, after
-which the PPS throughput decreases, by around 20 percent in the worst
-case. For the other test runs there is however no significant change to the PPS
-throughput when the number of flows is increased. In some test runs the PPS
-is also greater with 1000000 flows compared to other test runs where the PPS
-result is less with only 2 flows.
-
-The average PPS throughput in the different runs varies between 414000 and
-452000 PPS. The total amount of packets in each test run is approx. 7500000 to
-8200000 packets. One test run Feb. 15 sticks out with a PPS average of
-558000 and approx. 1100000 packets in total (the same run as mentioned earlier
-for RTT results).
-
-There are lost packets reported in most of the test runs. There is no observed
-correlation between the amount of flows and the amount of lost packets.
-The amount of lost packets normally ranges between 100 and 1000 per test run,
-but there are spikes in the range of 10000 lost packets as well, and even
-more in rare cases.
-
-Network utilization statistics are collected during UDP flows sent between the
-VMs using pktgen as packet generator tool. The total number of packets received
-per second averaged about 200 kpps and the total number of packets transmitted
-per second averaged about 600 kpps.
-
-Detailed test results
----------------------
-The scenario was run on Ericsson POD2_ and LF POD2_ with:
-
-* Fuel 9.0
-* OpenStack Mitaka
-* OpenVirtualSwitch 2.5.90
-* OpenDayLight Beryllium
-
-Rationale for decisions
------------------------
-Pass
-
-Tests were successfully executed and metrics collected.
-No SLA was verified. This is to be decided in a future OPNFV release.
-
-Conclusions and recommendations
--------------------------------
-The pktgen test configuration has a relatively large base effect on RTT in
-TC037 compared to TC002, where there is no background load at all: approx.
-15 ms compared to approx. 0.5 ms, which is more than a 3000 percent
-difference in RTT results.
-RTT and throughput in particular come out with better results than, for
-instance, in the *fuel-os-nosdn-nofeature-ha* scenario. The reason for this
-should probably be further analyzed and understood. It could also be of
-interest to make further analyses to find patterns and reasons for lost
-traffic, and to see if there are continuous variations where some test cases
-stand out with better or worse results than the general test case.
-
-
-
-Joid
-=====
-
-.. _Grafana: http://testresults.opnfv.org/grafana/dashboard/db/yardstick-main
-.. _POD6: https://wiki.opnfv.org/pharos?&#community_test_labs
-
-Overview of test results
-------------------------
-
-See Grafana_ for viewing test result metrics for each respective test case. It
-is possible to choose which specific scenarios to look at, and then to zoom in
-on the details of each test scenario run as well.
-
-All of the test case results below are based on 4 scenario test runs, each run
-on the Intel POD6_ between September 1 and 8 in 2016.
-
-TC002
------
-The round-trip-time (RTT) between 2 VMs on different blades is measured using
-ping. Most test run measurements average between 1.01 ms and 1.88 ms.
-Only one test run reached the greatest RTT spike of 1.88 ms, while the
-smallest network latency is 1.01 ms, obtained on Sep. 1st. In general,
-the average network latency of the four test runs is between 1.29 ms and
-1.34 ms. The SLA is set to 10 ms. The SLA value is used as a reference; it has
-not been defined by OPNFV.
-
-TC005
------
-The IO read bandwidth actually refers to the storage throughput, which is
-measured by fio, and the greatest IO read bandwidth of the four runs is 183.65
-MB/s. The IO read bandwidth of three of the runs looks similar, with an average
-between 62.9 and 64.3 MB/s; the exception is the run on Sep. 1, whose maximum
-storage throughput is only 159.1 MB/s. One of the runs has a minimum BW of
-685 KB/s and another has a maximum BW of 183.6 MB/s. The SLA of read bandwidth
-is set to 400 MB/s; it is used as a reference and has not been defined by OPNFV.
-
-The results of storage IOPS for the four runs look similar with each other. The
-IO read times per second of the four test runs have an average value between
-1.41k per second and 1.64k per second, and meanwhile, the minimum result is
-only 55 times per second.
-
-TC010
------
-The tool we use to measure memory read latency is lmbench, which is a series of
-micro benchmarks intended to measure basic operating system and hardware system
-metrics. The memory read latency of the four runs is between 1.152 ns and 1.179
-ns on average. The variations within each test run differ: some runs
-vary over a large range and others change very little. For example, the
-largest change is on September 8, where the memory read latency ranges
-from 1.120 ns to 1.221 ns, whereas the results on September 7 change very
-little. The SLA is set to 30 ns. The SLA value is used as a reference; it has
-not been defined by OPNFV.
-
-TC011
------
-Iperf3 is a tool for evaluating the packet delay variation between 2 VMs on
-different blades. The reported packet delay variations of the four test runs
-differ from each other. In general, the packet delay of the first two runs
-looks similar, as both stay stable within each run, and their mean packet
-delays are 0.0087 ms and 0.0127 ms respectively. Of the four runs, the fourth
-has the worst result, because the packet delay reaches 0.0187 ms. The SLA value
-is set to 10 ms. The SLA value is used as a reference; it has not been defined
-by OPNFV.
-
-TC012
------
-Lmbench is also used to measure the memory read and write bandwidth, for which
-we use bw_mem to obtain the results. Among the four test runs, the memory
-bandwidth trends of three runs look similar and all stay within a narrow range;
-the average result is 11.78 GB/s. The SLA is set to 15 GB/s. The SLA value is
-used as a reference; it has not been defined by OPNFV.
-
-TC014
------
-Unixbench is used to evaluate the IaaS processing speed with regard to the
-scores of single CPU running and parallel running. It can be seen from the
-dashboard that the processing test results vary from scores 3260k to 3328k,
-with only one result per date. No SLA set.
-
-TC037
------
-The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
-on different blades are measured when increasing the amount of UDP flows sent
-between the VMs using pktgen as packet generator tool.
-
-Round trip times and packet throughput between VMs can typically be affected by
-the amount of flows set up and result in higher RTT and less PPS throughput.
-
-The mean packet throughput of the four test runs is between 307.3 kpps and
-447.1 kpps, of which the result of the third run is the highest. The RTT
-results of all the test runs keep flat at approx. 15 ms. It is obvious that the
-PPS results are not as consistent as the RTT results.
-
-The number of flows in the four test runs is 240 k on average, and the PPS
-results fluctuate somewhat, since the largest packet throughput is 418.1 kpps
-and the minimum throughput is 326.5 kpps.
-
-There are no packet receive errors in the four runs, but there are still
-lost packets in all the test runs. The RTT values obtained by ping of the four
-runs have a similar average value of approx. 15 ms.
-
-CPU load is measured by mpstat, and the CPU load of the four test runs looks
-similar, since the minimum and the peak of the CPU load are between 0
-percent and nine percent respectively. The highest value is obtained on Sep.
-1, with a CPU load of nine percent. On the whole, the CPU load is very
-low, since the average value is quite small.
-
-TC069
------
-With the block size changing from 1 kb to 512 kb, the memory write bandwidth
-tends to become larger first and then smaller within every test run, ranging
-from 21.9 GB/s to 25.9 GB/s and then to 17.8 GB/s on average. Since the
-test id is one, only the INT memory write bandwidth is tested. On
-the whole, when the block size is 2 kb or 16 kb, the memory write bandwidth
-looks similar, with a minimum BW of 24.8 GB/s and a peak value of 27.8 GB/s.
-As the block size becomes larger, the memory write bandwidth tends to
-decrease. The SLA is set to 7 GB/s. The SLA value is used as a reference; it
-has not been defined by OPNFV.
-
-TC070
------
-The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
-on different blades are measured when increasing the amount of UDP flows sent
-between the VMs using pktgen as packet generator tool.
-
-Round trip times and packet throughput between VMs can typically be affected by
-the amount of flows set up and result in higher RTT and less PPS throughput.
-
-The network latency is measured by ping, and the results of the four test runs
-look similar to each other; within these test runs, the maximum RTT can
-reach 39 ms and the average RTT is usually approx. 15 ms. The network latency
-tested on Sep. 1 and Sep. 8 has a peak of 39 ms. On the whole,
-the average RTTs of the four runs stay flat and the network latency is
-relatively low.
-
-Memory utilization is measured by free, which can display the amount of free
-and used memory in the system. The largest amount of used memory is 267 MiB
-across the four runs. In general, the four test runs show high memory
-utilization, which can reach 257 MiB on average. The mean free memory of the
-four test runs follows a trend similar to that of the mean used memory and
-generally ranges from 233 MiB to 241 MiB.
-
-Packet throughput and packet loss can be measured by pktgen, a tool for
-generating traffic loads for network experiments. The mean
-packet throughput of the four test runs differs considerably, ranging from
-305.3 kpps to 447.1 kpps. The average number of flows in these tests is
-240000, and each run has a minimum number of flows of 2 and a maximum of
-1.001 million. At the same time, the corresponding average packet
-throughput is between 354.4 kpps and 381.8 kpps. In summary, the PPS results
-seem consistent. Within each of the four test runs, the packet throughput does
-not necessarily increase when the number of flows becomes larger.
-
-TC071
------
-The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
-on different blades are measured when increasing the amount of UDP flows sent
-between the VMs using pktgen as packet generator tool.
-
-Round trip times and packet throughput between VMs can typically be affected by
-the amount of flows set up and result in higher RTT and less PPS throughput.
-
-The network latency is measured by ping, and the results of the four test runs
-look similar with each other. Within each test run, the maximum RTT is only 42
-ms and the average RTT is usually approx. 15 ms. On the whole, the average
-RTTs of the four runs keep stable and the network latency is relatively small.
-
-Cache utilization is measured by cachestat, which can display the size of the
-cache and buffer in the system. Cache utilization statistics are collected
-during UDP flows sent between the VMs using pktgen as packet generator tool.
-The largest cache size is 212 MiB, which is the same for the four runs, and the
-smallest cache size is 75 MiB. On the whole, the average cache size of the four
-runs looks the same and is between 197 MiB and 211 MiB. Meanwhile, the trend of
-the buffer size keeps flat, with a minimum value of 7 MiB and a maximum value
-of 8 MiB, and an average value of about 7.9 MiB.
-
-Packet throughput can be measured by pktgen, a tool for
-generating traffic loads for network experiments. The mean packet throughput of
-the four test runs varies between 354.4 kpps and 381.8 kpps. The average number
-of flows in these tests is 240k, and each run has a minimum number of flows of
-2 and a maximum of 1.001 million. At the same time, the corresponding
-packet throughput varies between 305.3 kpps and 447.1 kpps. Within each of the
-four test runs, the packet throughput does not necessarily increase when the
-number of flows becomes larger.
-
-TC072
------
-The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
-on different blades are measured when increasing the amount of UDP flows sent
-between the VMs using pktgen as packet generator tool.
-
-Round trip times and packet throughput between VMs can typically be affected by
-the amount of flows set up and result in higher RTT and less PPS throughput.
-
-The RTT results are similar throughout the different test dates and runs,
-between 0 ms and 42 ms, with an average latency of less than 15 ms. The PPS
-results are not as consistent as the RTT results, as the mean packet
-throughput of the four runs varies between 354.4 kpps and 381.8 kpps.
-
-Network utilization is measured by sar, the system activity reporter, which
-can display the average statistics for the time since the system was started.
-Network utilization statistics are collected during UDP flows sent between the
-VMs using pktgen as packet generator tool. The total number of packets
-transmitted per second looks similar for three test runs, with values varying
-widely from 10 pps to 501 kpps, while the remaining test run stays stable with
-an average of 10 packets transmitted per second. The total number of packets
-received per second of the four runs looks similar and spans a wide range of
-2 pps to 815 kpps.
-
-In some test runs when running with less than approx. 251000 flows the PPS
-throughput is normally flatter compared to when running with more flows, after
-which the PPS throughput decreases. For the other test runs there is however no
-significant change to the PPS throughput when the number of flows are
-increased. In some test runs the PPS is also greater with 251000 flows
-compared to other test runs where the PPS result is less with only 2 flows.
-
-There are lost packets reported in most of the test runs. There is no observed
-correlation between the amount of flows and the amount of lost packets.
-The lost amount of packets normally differs a lot per test run.
-
-Detailed test results
----------------------
-The scenario was run on Intel POD6_ with:
-
-* Joid
-* OpenStack Mitaka
-* OpenVirtualSwitch 2.5.90
-* OpenDayLight Beryllium
-
-Rationale for decisions
------------------------
-Pass
-
-Conclusions and recommendations
--------------------------------
-Tests were successfully executed and metrics collected.
-No SLA was verified. This is to be decided in a future OPNFV release.
-
diff --git a/docs/release/results/os-odl_l2-sfc-ha.rst b/docs/release/results/os-odl_l2-sfc-ha.rst
deleted file mode 100644
index e27562cae..000000000
--- a/docs/release/results/os-odl_l2-sfc-ha.rst
+++ /dev/null
@@ -1,231 +0,0 @@
-.. This work is licensed under a Creative Commons Attribution 4.0 International
-.. License.
-.. http://creativecommons.org/licenses/by/4.0
-
-
-==================================
-Test Results for os-odl_l2-sfc-ha
-==================================
-
-.. toctree::
- :maxdepth: 2
-
-
-Fuel
-=====
-
-.. _Grafana: http://testresults.opnfv.org/grafana/dashboard/db/yardstick-main
-.. _POD2: https://wiki.opnfv.org/pharos?&#community_test_labs
-
-Overview of test results
-------------------------
-
-See Grafana_ for viewing test result metrics for each respective test case. It
-is possible to choose which specific scenarios to look at, and then to zoom in
-on the details of each test scenario run as well.
-
-All of the test case results below are based on 4 scenario test runs, each run
-on the LF POD2_ or Ericsson POD2_ between September 16 and 20 in 2016.
-
-TC002
------
-The round-trip-time (RTT) between 2 VMs on different blades is measured using
-ping. Most test run measurements average between 0.32 ms and 1.42 ms.
-Only one test run, on Sep. 20, reached the greatest RTT spike of 4.66 ms.
-Meanwhile, the smallest network latency is 0.16 ms, obtained on Sep.
-17th. To sum up, the network latency curve varies only slightly, staying
-below 5 ms. The SLA is set to 10 ms. The SLA value is used as a reference; it
-has not been defined by OPNFV.
-
-TC005
------
-The IO read bandwidth actually refers to the storage throughput, which is
-measured by fio, and the greatest IO read bandwidth of the four runs is 734
-MB/s. The IO read bandwidth of the first three runs looks similar, with an
-average of less than 100 KB/s; the exception is the run on Sep. 20, whose
-maximum storage throughput reaches 734 MB/s. The SLA of read bandwidth is set
-to 400 MB/s; it is used as a reference and has not been defined by OPNFV.
-
-The results of storage IOPS for the four runs look similar with each other. The
-IO read times per second of the four test runs have an average value between
-1.8k per second and 3.27k per second, and meanwhile, the minimum result is
-only 60 times per second.
-
-TC010
------
-The tool we use to measure memory read latency is lmbench, which is a series of
-micro benchmarks intended to measure basic operating system and hardware system
-metrics. The memory read latency of the four runs is between 1.085 ns and 1.218
-ns on average. The variations within each test run are quite small. For
-Ericsson pod2, the average memory latency is approx. 1.217 ns, while for LF
-pod2, the average value is about 1.085 ns. It can be seen that the performance
-of LF is better than Ericsson's. The SLA is set to 30 ns. The SLA value is
-used as a reference; it has not been defined by OPNFV.
-
-TC012
------
-Lmbench is also used to measure the memory read and write bandwidth, for which
-we use bw_mem to obtain the results. The four test runs all have a narrow range
-of change, with an average memory read and write BW of 18.5 GB/s. The SLA is
-set to 15 GB/s. The SLA value is used as a reference; it has not been defined
-by OPNFV.
-
-TC014
------
-Unixbench is used to evaluate the IaaS processing speed with regard to the
-scores of single CPU running and parallel running. It can be seen from the
-dashboard that the processing test results vary from scores 3209k to 3843k,
-with only one result per date. No SLA set.
-
-TC037
------
-The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
-on different blades are measured when increasing the amount of UDP flows sent
-between the VMs using pktgen as packet generator tool.
-
-Round trip times and packet throughput between VMs can typically be affected by
-the amount of flows set up and result in higher RTT and less PPS throughput.
-
-The mean packet throughput of the three test runs is between 439 kpps and
-582 kpps, and the test run on Sep. 17th has the lowest average value of 371
-kpps. The RTT results of all the test runs keep flat at approx. 10 ms. It is
-obvious that the PPS results are not as consistent as the RTT results.
-
-The number of flows in the four test runs is 240 k on average, and the PPS
-results fluctuate somewhat, since the largest packet throughput is 680 kpps and
-the minimum throughput is 319 kpps.
-
-There are no packet receive errors in the four runs, but there are still
-lost packets in all the test runs. The RTT values obtained by ping of the four
-runs show a similar trend, with an average value of approx. 12 ms.
-
-CPU load is measured by mpstat, and the CPU load of the four test runs looks
-similar, since the minimum and the peak of the CPU load are between 0
-percent and ten percent respectively. The highest value is obtained on Sep.
-17th, with a CPU load of ten percent. On the whole, the CPU load is very
-low, since the average value is quite small.
-
-TC069
------
-With the block size changing from 1 kb to 512 kb, the average memory write
-bandwidth tends to become larger first and then smaller within every test run
-for the two pods, ranging from 25.1 GB/s to 29.4 GB/s and then to 19.2 GB/s
-on average. Since the test id is one, only the INT memory write
-bandwidth is tested. On the whole, as the block size becomes larger, the
-memory write bandwidth tends to decrease. The SLA is set to 7 GB/s. The SLA
-value is used as a reference; it has not been defined by OPNFV.
-
-TC070
------
-The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
-on different blades are measured when increasing the amount of UDP flows sent
-between the VMs using pktgen as packet generator tool.
-
-Round trip times and packet throughput between VMs can typically be affected by
-the amount of flows set up and result in higher RTT and less PPS throughput.
-
-The network latency is measured by ping, and the results of the four test runs
-look similar to each other; within these test runs, the maximum RTT can
-reach 27 ms and the average RTT is usually approx. 12 ms. The network latency
-tested on Sep. 27th has a peak of 27 ms, but on the whole the average
-RTTs of the four runs stay flat.
-
-Memory utilization is measured by free, which can display the amount of free
-and used memory in the system. The largest amount of used memory is 269 MiB
-across the four runs. In general, the four test runs show high memory
-utilization, which can reach 251 MiB on average. The mean free memory of the
-four test runs follows a trend similar to that of the mean used memory and
-generally ranges from 231 MiB to 248 MiB.
-
-Packet throughput and packet loss can be measured by pktgen, a tool for
-generating traffic loads for network experiments. The mean
-packet throughput of the four test runs differs considerably, ranging from
-371 kpps to 582 kpps. The average number of flows in these tests is
-240000, and each run has a minimum number of flows of 2 and a maximum of
-1.001 million. At the same time, the corresponding average packet
-throughput is between 319 kpps and 680 kpps. In summary, the PPS results
-seem consistent. Within each of the four test runs, the packet throughput does
-not necessarily increase when the number of flows becomes larger.
-
-TC071
------
-The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
-on different blades are measured when increasing the amount of UDP flows sent
-between the VMs using pktgen as packet generator tool.
-
-Round trip times and packet throughput between VMs can typically be affected by
-the amount of flows set up and result in higher RTT and less PPS throughput.
-
-The network latency is measured by ping, and the results of the four test runs
-look similar with each other. Within each test run, the maximum RTT is only 24
-ms and the average RTT is usually approx. 12 ms. On the whole, the average
-RTTs of the four runs keep stable and the network latency is relatively small.
-
-Cache utilization is measured by cachestat, which can display the size of the
-cache and buffer in the system. Cache utilization statistics are collected
-during UDP flows sent between the VMs using pktgen as packet generator tool.
-The largest cache size is 213 MiB and the smallest cache size is 99 MiB, which
-is the same for the four runs. On the whole, the average cache size of the four
-runs looks the same and is between 184 MiB and 205 MiB. Meanwhile, the trend of
-the buffer size keeps stable, with a minimum value of 7 MiB and a maximum value
-of 8 MiB.
-
-Packet throughput can be measured by pktgen, a tool for
-generating traffic loads for network experiments. The mean packet throughput of
-the four test runs varies between 371 kpps and 582 kpps. The average number
-of flows in these tests is 240k, and each run has a minimum number of flows of
-2 and a maximum of 1.001 million. At the same time, the corresponding
-packet throughput varies between 319 kpps and 680 kpps. Within each of the
-four test runs, the packet throughput does not necessarily increase when the
-number of flows becomes larger.
-
-TC072
------
-The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
-on different blades are measured when increasing the amount of UDP flows sent
-between the VMs using pktgen as packet generator tool.
-
-Round trip times and packet throughput between VMs can typically be affected by
-the amount of flows set up and result in higher RTT and less PPS throughput.
-
-The RTT results are similar throughout the different test dates and runs,
-between 0 ms and 24 ms, with an average latency of less than 13 ms. The PPS
-results are not as consistent as the RTT results, as the mean packet
-throughput of the four runs varies between 370 kpps and 582 kpps.
-
-Network utilization is measured by sar, the system activity reporter, which
-can display the average statistics for the time since the system was started.
-Network utilization statistics are collected during UDP flows sent between the
-VMs using pktgen as packet generator tool. The total number of packets
-transmitted per second looks similar for the four test runs, with values
-varying widely from 10 pps to 697 kpps. The total number of packets received
-per second of three runs looks similar and spans a wide range of 2 pps to
-1.497 Mpps, while the runs on Sep. 18th and 20th have a smaller maximum
-number of packets received per second of 817 kpps.
-
-In some test runs when running with less than approx. 251000 flows the PPS
-throughput is normally flatter compared to when running with more flows, after
-which the PPS throughput decreases. For the other test runs there is however no
-significant change to the PPS throughput when the number of flows are
-increased. In some test runs the PPS is also greater with 251000 flows
-compared to other test runs where the PPS result is less with only 2 flows.
-
-There are lost packets reported in most of the test runs. There is no observed
-correlation between the amount of flows and the amount of lost packets.
-The lost amount of packets normally differs a lot per test run.
-
-Detailed test results
----------------------
-The scenario was run on Ericsson POD2_ and LF POD2_ with:
-
-* Fuel 9.0
-* OpenStack Mitaka
-* OpenVirtualSwitch 2.5.90
-* OpenDayLight Beryllium
-
-Rationale for decisions
------------------------
-Pass
-
-Conclusions and recommendations
--------------------------------
-Tests were successfully executed and metrics collected.
-No SLA was verified. This is to be decided in a future OPNFV release.
diff --git a/docs/release/results/os-onos-nofeature-ha.rst b/docs/release/results/os-onos-nofeature-ha.rst
deleted file mode 100644
index d8b3ace5f..000000000
--- a/docs/release/results/os-onos-nofeature-ha.rst
+++ /dev/null
@@ -1,257 +0,0 @@
-.. This work is licensed under a Creative Commons Attribution 4.0 International
-.. License.
-.. http://creativecommons.org/licenses/by/4.0
-
-
-======================================
-Test Results for os-onos-nofeature-ha
-======================================
-
-.. toctree::
- :maxdepth: 2
-
-
-Joid
-=====
-
-.. _Grafana: http://testresults.opnfv.org/grafana/dashboard/db/yardstick-main
-.. _POD6: https://wiki.opnfv.org/pharos?&#community_test_labs
-
-Overview of test results
-------------------------
-
-See Grafana_ for viewing test result metrics for each respective test case. It
-is possible to choose which specific scenarios to look at, and then to zoom in
-on the details of each test scenario run as well.
-
-All of the test case results below are based on 5 scenario test runs, each run
-on the Intel POD6_ between September 13 and 16 in 2016.
-
-TC002
------
-The round-trip-time (RTT) between 2 VMs on different blades is measured using
-ping. Most test run measurements average between 1.50 and 1.68 ms.
-Only one test run reached the greatest RTT spike of 2.62 ms; the same run also
-has the smallest RTT of 1.00 ms. The other four runs have no similar spike at
-all, and their minimum and average RTTs are approx. 1.06 ms and 1.32 ms. The
-SLA is set to 10 ms. The SLA value is used as a reference; it has not been
-defined by OPNFV.
-
-TC005
------
-The IO read bandwidth actually refers to the storage throughput, which is
-measured by fio, and the greatest IO read bandwidth of the four runs is 175.4
-MB/s. The IO read bandwidth of the four runs looks similar across the different
-days, with an average between 58.1 and 62.0 MB/s; the exception is the run on
-Sep. 14, whose maximum storage throughput is only 133.0 MB/s. One of the runs
-has a minimum BW of 497 KB/s and another has a maximum BW of 177.4 MB/s. The
-SLA of read bandwidth is set to 400 MB/s; it is used as a reference and has not
-been defined by OPNFV.
-
-The results of storage IOPS for the five runs look similar with each other. The
-IO read times per second of the five test runs have an average value between
-1.20 K/s and 1.61 K/s, and meanwhile, the minimum result is only 41 times per
-second.
-
-TC010
------
-The tool we use to measure memory read latency is lmbench, which is a series of
-micro benchmarks intended to measure basic operating system and hardware system
-metrics. The memory read latency of the five runs is between 1.146 ns and 1.172
-ns on average. The variations within each test run differ: some runs
-vary over a large range and others change very little. For example, the
-largest change is on September 13, where the memory read latency ranges
-from 1.152 ns to 1.221 ns, whereas the results on September 14 change very
-little. The SLA is set to 30 ns. The SLA value is used as a reference; it has
-not been defined by OPNFV.
-
-TC011
------
-Iperf3 is a tool for evaluating the packet delay variation between 2 VMs on
-different blades. The reported packet delay variations of the five test runs
-differ from each other. In general, the packet delay of the first two runs
-looks similar, as both stay stable within each run, and their mean packet
-delays are 0.07714 ms and 0.07982 ms respectively. Of the five runs, the third
-has the worst result, because the packet delay reaches 0.08384 ms. The trend of
-the rest of the runs looks the same, with average packet delays of 0.07808 ms
-and 0.07727 ms respectively. The SLA value is set to 10 ms. The SLA value is
-used as a reference; it has not been defined by OPNFV.
-
-TC012
------
-Lmbench is also used to measure the memory read and write bandwidth, for which
-we use bw_mem to obtain the results. Among the five test runs, the memory
-bandwidth of the last three test runs stays almost stable within each run, at
-11.64, 11.71 and 11.61 GB/s on average. However, the memory read and write
-bandwidth on Sep. 13 has a large range, from 6.68 GB/s to 11.73
-GB/s. The SLA is set to 15 GB/s. The SLA value is used as a reference; it has
-not been defined by OPNFV.
-
-TC014
------
-Unixbench is used to evaluate the IaaS processing speed with regard to the
-scores of single CPU running and parallel running. It can be seen from the
-dashboard that the processing test results vary from scores 3208 to 3314,
-with only one result per date. No SLA set.
-
-TC037
------
-The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
-on different blades are measured when increasing the amount of UDP flows sent
-between the VMs using pktgen as packet generator tool.
-
-Round trip times and packet throughput between VMs can typically be affected by
-the amount of flows set up and result in higher RTT and less PPS throughput.
-
-The mean packet throughput of the five test runs is between 259.6 kpps and
-318.4 kpps, of which the result of the second run is the highest. The RTT
-results of all the test runs keep flat at approx. 20 ms. It is obvious that the
-PPS results are not as consistent as the RTT results.
-
-The number of flows in the five test runs is 240 k on average, and the PPS
-results fluctuate somewhat, since the largest packet throughput is 398.9 kpps
-and the minimum throughput is 250.6 kpps.
-
-There are no packet receive errors in the five runs, but there are still
-lost packets in all the test runs. The RTT values obtained by ping of the five
-runs have a similar average value, between 17 ms and 22 ms, of which
-the worst RTT is 53 ms on Sep. 14th.
-
-CPU load is measured by mpstat, and the CPU load of the test runs looks
-similar, since the minimum and the peak of the CPU load are between 0
-percent and 10 percent respectively. The highest value is obtained on Sep.
-13th, with a CPU load of 10 percent.
-
-TC069
------
-With the block size changing from 1 kb to 512 kb, the memory write bandwidth
-tends to become larger first and then smaller within every test run, ranging
-from 21.6 GB/s to 26.8 GB/s and then to 18.4 GB/s on average. Since the
-test id is one, only the INT memory write bandwidth is tested. On
-the whole, when the block size is 8 kb or 16 kb, the memory write bandwidth
-looks similar, with a minimum BW of 23.0 GB/s and a peak value of 28.6 GB/s.
-As the block size becomes larger, the memory write bandwidth tends to
-decrease. The SLA is set to 7 GB/s. The SLA value is used as a reference; it
-has not been defined by OPNFV.
-
-TC070
------
-The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
-on different blades are measured when increasing the amount of UDP flows sent
-between the VMs using pktgen as packet generator tool.
-
-Round trip times and packet throughput between VMs can typically be affected by
-the amount of flows set up and result in higher RTT and less PPS throughput.
-
-The network latency is measured by ping, and the results of the five test runs
-look similar to each other; within these test runs, the maximum RTT can
-reach 53 ms and the average RTT is usually approx. 18 ms. The network latency
-tested on Sep. 14 has a peak of 53 ms. On the whole,
-the average RTTs of the five runs stay flat and the network latency is
-relatively low.
-
-Memory utilization is measured by free, which can display the amount of free
-and used memory in the system. The largest amount of used memory is 272 MiB, on
-Sep 14. In general, the mean used memory of the five test runs follows a
-similar trend; the minimum used memory size is approx. 150 MiB and the average
-used memory size is about 250 MiB. On the other hand, the mean free memory of
-the five test runs follows a similar trend, changing from
-218 MiB to 342 MiB, with an average value of approx. 38 MiB.
-
-Packet throughput and packet loss can be measured by pktgen, a tool for
-generating traffic loads for network experiments. The mean
-packet throughput of the five test runs differs, ranging from
-285.29 kpps to 297.76 kpps. The average number of flows in these tests is
-240000, and each run has a minimum number of flows of 2 and a maximum of
-1.001 million. At the same time, the corresponding packet throughput
-varies between 250.6 kpps and 398.9 kpps, with an average packet throughput
-between 277.2 kpps and 318.4 kpps. In summary, the PPS results seem consistent.
-Within each of the five test runs, the packet throughput does not necessarily
-increase when the number of flows becomes larger.
-
-TC071
------
-The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
-on different blades are measured when increasing the amount of UDP flows sent
-between the VMs using pktgen as packet generator tool.
-
-Round trip times and packet throughput between VMs can typically be affected by
-the amount of flows set up and result in higher RTT and less PPS throughput.
-
-The network latency is measured by ping, and the results of the five test runs
-look similar with each other. Within each test run, the maximum RTT is only 49
-ms and the average RTT is usually approx. 20 ms. On the whole, the average
-RTTs of the five runs keep stable and the network latency is relatively short.
-
-Cache utilization is measured by cachestat, which can display the size of the
-cache and buffer in the system. Cache utilization statistics are collected
-during UDP flows sent between the VMs using pktgen as packet generator tool.
-The largest cache size is 215 MiB across the runs, and the smallest cache size
-is 95 MiB. On the whole, the average cache size of the five runs changes little
-and is about 200 MiB, except for the run on Sep. 14th, whose mean cache size is
-very small and stays at 102 MiB. Meanwhile, the trend of the buffer size keeps
-flat, with a minimum value of 7 MiB and a maximum value of 8 MiB, and an
-average value of about 7.8 MiB.
-
-Packet throughput can be measured by pktgen, a tool for
-generating traffic loads for network experiments. The mean packet throughput of
-the test runs differs, ranging from 285.29 kpps to 297.76
-kpps. The average number of flows in these tests is 239.7k, and each run has a
-minimum number of flows of 2 and a maximum of 1.001 million. At the
-same time, the corresponding packet throughput varies between 227.3 kpps and
-398.9 kpps, with an average packet throughput between 277.2 kpps and 318.4 kpps.
-Within each of the five test runs, the packet throughput does not necessarily
-increase when the number of flows becomes larger.
-
-TC072
------
-The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
-on different blades are measured when increasing the amount of UDP flows sent
-between the VMs using pktgen as packet generator tool.
-
-Round trip times and packet throughput between VMs can typically be affected by
-the amount of flows set up and result in higher RTT and less PPS throughput.
-
-The RTT results are similar throughout the different test dates and runs,
-between 0 ms and 49 ms with an average latency of less than 22 ms. The PPS
-results are not as consistent as the RTT results, as the mean packet
-throughput of the five runs differs from 250.6 kpps to 398.9 kpps.
-
-Network utilization is measured by sar, the system activity reporter, which
-can display the average statistics for the time since the system was started.
-Network utilization statistics are collected during UDP flows sent between the
-VMs using pktgen as packet generator tool. The total number of packets
-transmitted per second looks similar for four of the test runs, with values
-varying widely from 10 pps to 399 kpps; the exception is the run on Sep. 14th,
-whose number of packets transmitted per second stays stable at 10 pps.
-Similarly, the total number of packets received per second looks the same for
-four runs, except the run on Sep. 14th, whose value is only 10 pps.
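-
-As an illustration only (the figures above come from sar), packets per second
-can be derived from the /proc/net/dev counters that sar -n DEV is based on;
-the interface name below is a placeholder:
-
-.. code-block:: python
-
-    # Sketch: derive rx/tx packets per second from /proc/net/dev (Linux only).
-    import time
-
-    def pps(iface="eth0", interval=1.0):
-        def counters():
-            with open("/proc/net/dev") as f:
-                for line in f:
-                    if line.strip().startswith(iface + ":"):
-                        fields = line.split(":", 1)[1].split()
-                        # fields[1] = rx packets, fields[9] = tx packets
-                        return int(fields[1]), int(fields[9])
-            raise ValueError("interface not found: %s" % iface)
-
-        rx1, tx1 = counters()
-        time.sleep(interval)
-        rx2, tx2 = counters()
-        return (rx2 - rx1) / interval, (tx2 - tx1) / interval
-
-    rx_pps, tx_pps = pps()
-    print("rx: %.0f pps, tx: %.0f pps" % (rx_pps, tx_pps))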
-
-In some test runs when running with less than approx. 90000 flows the PPS
-throughput is normally flatter compared to when running with more flows, after
-which the PPS throughput decreases. For the other test runs there is however no
-significant change to the PPS throughput when the number of flows is
-increased. In some test runs the PPS is also greater with 250000 flows
-compared to other test runs where the PPS result is less with only 2 flows.
-
-There are lost packets reported in most of the test runs. There is no observed
-correlation between the amount of flows and the amount of lost packets.
-The lost amount of packets normally differs a lot per test run.
-
-Detailed test results
----------------------
-The scenario was run on Intel POD6_ with:
-Joid
-OpenStack Mitaka
-Onos Goldeneye
-OpenVirtualSwitch 2.5.90
-OpenDayLight Beryllium
-
-Rationale for decisions
------------------------
-Pass
-
-Conclusions and recommendations
--------------------------------
-Tests were successfully executed and metrics collected.
-No SLA was verified. To be decided on in next release of OPNFV.
diff --git a/docs/release/results/os-onos-sfc-ha.rst b/docs/release/results/os-onos-sfc-ha.rst
deleted file mode 100644
index e52ae3d55..000000000
--- a/docs/release/results/os-onos-sfc-ha.rst
+++ /dev/null
@@ -1,517 +0,0 @@
-.. This work is licensed under a Creative Commons Attribution 4.0 International
-.. License.
-.. http://creativecommons.org/licenses/by/4.0
-
-
-===============================
-Test Results for os-onos-sfc-ha
-===============================
-
-.. toctree::
- :maxdepth: 2
-
-
-fuel
-====
-
-.. _Grafana: http://testresults.opnfv.org/grafana/dashboard/db/yardstick-main
-.. _POD2: https://wiki.opnfv.org/pharos?&#community_test_labs
-
-Overview of test results
-------------------------
-
-See Grafana_ for viewing test result metrics for each respective test case. It
-is possible to choose which specific scenarios to look at, and then to zoom in
-on the details of each run test scenario as well.
-
-All of the test case results below are based on 4 scenario test runs, each run
-on the Ericsson POD2_ or LF POD2_ between September 5 and 10 in 2016.
-
-TC002
------
-The round-trip-time (RTT) between 2 VMs on different blades is measured using
-ping. Most test run measurements result on average between 0.5 and 0.6 ms.
-A few runs start with a 1 - 1.5 ms RTT spike (this could be because of normal
-ARP handling). One test run has a greater RTT spike of 1.9 ms, which is the
-same run that has the 0.7 ms average. The other runs have no similar spike at
-all. To be able to draw conclusions, more runs should be made.
-SLA set to 10 ms. The SLA value is used as a reference, it has not
-been defined by OPNFV.
-
-TC005
------
-The IO read bandwidth looks similar between different dates, with an
-average between approx. 170 and 200 MB/s. Within each test run the results
-vary, with a minimum 2 MB/s and maximum 838 MB/s on the totality. Most runs
-have a minimum BW of 3 MB/s (two runs at 2 MB/s). The maximum BW varies more in
-absolute numbers between the dates, between 617 and 838 MB/s.
-SLA set to 400 MB/s. The SLA value is used as a reference, it has not been
-defined by OPNFV.
-
-TC010
------
-The measurements for memory latency are similar between test dates and result
-in approx. 1.2 ns. The variations within each test run are similar, between
-1.215 and 1.219 ns. One exception is February 16, where the average is 1.222
-and varies between 1.22 and 1.28 ns.
-SLA set to 30 ns. The SLA value is used as a reference, it has not been defined
-by OPNFV.
-
-TC011
------
-Packet delay variation between 2 VMs on different blades is measured using
-Iperf3. On the first date the reported packet delay variation varies between
-0.0025 and 0.011 ms, with an average delay variation of 0.0067 ms.
-On the second date the delay variation varies between 0.002 and 0.006 ms, with
-an average delay variation of 0.004 ms.
-
-TC012
------
-Between test dates, the average measurements for memory bandwidth vary between
-17.4 and 17.9 GB/s. Within each test run the results vary more, with a minimal
-BW of 16.4 GB/s and maximum of 18.2 GB/s on the totality.
-SLA set to 15 GB/s. The SLA value is used as a reference, it has not been
-defined by OPNFV.
-
-TC014
------
-The Unixbench processor test run results vary between scores 3080 and 3240,
-one result each date. The average score on the total is 3150.
-No SLA set.
-
-TC037
------
-The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
-on different blades are measured when increasing the amount of UDP flows sent
-between the VMs using pktgen as packet generator tool.
-
-Round trip times and packet throughput between VMs can typically be affected by
-the amount of flows set up and result in higher RTT and less PPS throughput.
-
-The RTT results are similar throughout the different test dates and runs at
-approx. 15 ms. Some test runs show an increase with many flows, in the range
-towards 16 to 17 ms. One exception standing out is Feb. 15 where the average
-RTT is stable at approx. 13 ms. The PPS results are not as consistent as the
-RTT results.
-In some test runs when running with less than approx. 10000 flows the PPS
-throughput is normally flatter compared to when running with more flows, after
-which the PPS throughput decreases. Around 20 percent decrease in the worst
-case. For the other test runs there is however no significant change to the PPS
-throughput when the number of flows is increased. In some test runs the PPS
-is also greater with 1000000 flows compared to other test runs where the PPS
-result is less with only 2 flows.
-
-The average PPS throughput in the different runs varies between 414000 and
-452000 PPS. The total amount of packets in each test run is approx. 7500000 to
-8200000 packets. One test run, Feb. 15, sticks out with a PPS average of
-558000 and approx. 1100000 packets in total (the same run as mentioned
-earlier for the RTT results).
-
-There are lost packets reported in most of the test runs. There is no observed
-correlation between the amount of flows and the amount of lost packets.
-The amount of lost packets normally ranges between 100 and 1000 per test run,
-but there are spikes in the range of 10000 lost packets as well, and even
-more in rare cases.
-
-CPU utilization statistics are collected during UDP flows sent between the VMs
-using pktgen as packet generator tool. The average measurements for CPU
-utilization ratio vary between 1% and 2%. The peak of CPU utilization ratio
-appears around 7%.
-
-TC069
------
-Between test dates, the average measurements for memory bandwidth vary between
-15.5 and 25.4 GB/s. Within each test run the results vary more, with a minimal
-BW of 9.7 GB/s and maximum of 29.5 GB/s on the totality.
-SLA set to 6 GB/s. The SLA value is used as a reference, it has not been
-defined by OPNFV.
-
-TC070
------
-The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
-on different blades are measured when increasing the amount of UDP flows sent
-between the VMs using pktgen as packet generator tool.
-
-Round trip times and packet throughput between VMs can typically be affected by
-the amount of flows set up and result in higher RTT and less PPS throughput.
-
-The RTT results are similar throughout the different test dates and runs at
-approx. 15 ms. Some test runs show an increase with many flows, in the range
-towards 16 to 17 ms. One exception standing out is Feb. 15 where the average
-RTT is stable at approx. 13 ms. The PPS results are not as consistent as the
-RTT results.
-In some test runs when running with less than approx. 10000 flows the PPS
-throughput is normally flatter compared to when running with more flows, after
-which the PPS throughput decreases. Around 20 percent decrease in the worst
-case. For the other test runs there is however no significant change to the PPS
-throughput when the number of flows is increased. In some test runs the PPS
-is also greater with 1000000 flows compared to other test runs where the PPS
-result is less with only 2 flows.
-
-The average PPS throughput in the different runs varies between 414000 and
-452000 PPS. The total amount of packets in each test run is approx. 7500000 to
-8200000 packets. One test run, Feb. 15, sticks out with a PPS average of
-558000 and approx. 1100000 packets in total (the same run as mentioned
-earlier for the RTT results).
-
-There are lost packets reported in most of the test runs. There is no observed
-correlation between the amount of flows and the amount of lost packets.
-The amount of lost packets normally ranges between 100 and 1000 per test run,
-but there are spikes in the range of 10000 lost packets as well, and even
-more in rare cases.
-
-Memory utilization statistics are collected during UDP flows sent between the
-VMs using pktgen as packet generator tool. The average measurements for memory
-utilization vary between 225 MB and 246 MB. The peak of memory utilization
-appears around 340 MB.
-
-TC071
------
-The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
-on different blades are measured when increasing the amount of UDP flows sent
-between the VMs using pktgen as packet generator tool.
-
-Round trip times and packet throughput between VMs can typically be affected by
-the amount of flows set up and result in higher RTT and less PPS throughput.
-
-The RTT results are similar throughout the different test dates and runs at
-approx. 15 ms. Some test runs show an increase with many flows, in the range
-towards 16 to 17 ms. One exception standing out is Feb. 15 where the average
-RTT is stable at approx. 13 ms. The PPS results are not as consistent as the
-RTT results.
-In some test runs when running with less than approx. 10000 flows the PPS
-throughput is normally flatter compared to when running with more flows, after
-which the PPS throughput decreases. Around 20 percent decrease in the worst
-case. For the other test runs there is however no significant change to the PPS
-throughput when the number of flows is increased. In some test runs the PPS
-is also greater with 1000000 flows compared to other test runs where the PPS
-result is less with only 2 flows.
-
-The average PPS throughput in the different runs varies between 414000 and
-452000 PPS. The total amount of packets in each test run is approx. 7500000 to
-8200000 packets. One test run, Feb. 15, sticks out with a PPS average of
-558000 and approx. 1100000 packets in total (the same run as mentioned
-earlier for the RTT results).
-
-There are lost packets reported in most of the test runs. There is no observed
-correlation between the amount of flows and the amount of lost packets.
-The amount of lost packets normally ranges between 100 and 1000 per test run,
-but there are spikes in the range of 10000 lost packets as well, and even
-more in rare cases.
-
-Cache utilization statistics are collected during UDP flows sent between the
-VMs using pktgen as packet generator tool. The average measurements for cache
-utilization vary between 205 MB and 212 MB.
-
-TC072
------
-The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
-on different blades are measured when increasing the amount of UDP flows sent
-between the VMs using pktgen as packet generator tool.
-
-Round trip times and packet throughput between VMs can typically be affected by
-the amount of flows set up and result in higher RTT and less PPS throughput.
-
-The RTT results are similar throughout the different test dates and runs at
-approx. 15 ms. Some test runs show an increase with many flows, in the range
-towards 16 to 17 ms. One exception standing out is Feb. 15 where the average
-RTT is stable at approx. 13 ms. The PPS results are not as consistent as the
-RTT results.
-In some test runs when running with less than approx. 10000 flows the PPS
-throughput is normally flatter compared to when running with more flows, after
-which the PPS throughput decreases. Around 20 percent decrease in the worst
-case. For the other test runs there is however no significant change to the PPS
-throughput when the number of flows is increased. In some test runs the PPS
-is also greater with 1000000 flows compared to other test runs where the PPS
-result is less with only 2 flows.
-
-The average PPS throughput in the different runs varies between 414000 and
-452000 PPS. The total amount of packets in each test run is approx. 7500000 to
-8200000 packets. One test run, Feb. 15, sticks out with a PPS average of
-558000 and approx. 1100000 packets in total (the same run as mentioned
-earlier for the RTT results).
-
-There are lost packets reported in most of the test runs. There is no observed
-correlation between the amount of flows and the amount of lost packets.
-The amount of lost packets normally ranges between 100 and 1000 per test run,
-but there are spikes in the range of 10000 lost packets as well, and even
-more in rare cases.
-
-Network utilization statistics are collected during UDP flows sent between the
-VMs using pktgen as packet generator tool. The total number of packets
-received per second averaged 200 kpps and the total number of packets
-transmitted per second averaged 600 kpps.
-
-Detailed test results
----------------------
-The scenario was run on Ericsson POD2_ and LF POD2_ with:
-Fuel 9.0
-OpenStack Mitaka
-Onos Goldeneye
-OpenVirtualSwitch 2.5.90
-OpenDayLight Beryllium
-
-Rationale for decisions
------------------------
-Pass
-
-Tests were successfully executed and metrics collected.
-No SLA was verified. To be decided on in next release of OPNFV.
-
-Conclusions and recommendations
--------------------------------
-The pktgen test configuration has a relatively large base effect on RTT in
-TC037 compared to TC002, where there is no background load at all: approx.
-15 ms compared to approx. 0.5 ms, which is more than a 3000 percent
-difference in RTT results.
-Especially RTT and throughput come out with better results than, for instance,
-the *fuel-os-nosdn-nofeature-ha* scenario does. The reason for this should
-probably be further analyzed and understood. Also of interest could be
-to make further analyses to find patterns and reasons for lost traffic.
-Also of interest could be to see if there are continuous variations where
-some test cases stand out with better or worse results than the general test
-case.
-
-
-Joid
-=====
-
-.. _Grafana: http://testresults.opnfv.org/grafana/dashboard/db/yardstick-main
-.. _POD6: https://wiki.opnfv.org/pharos?&#community_test_labs
-
-Overview of test results
-------------------------
-
-See Grafana_ for viewing test result metrics for each respective test case. It
-is possible to choose which specific scenarios to look at, and then to zoom in
-on the details of each run test scenario as well.
-
-All of the test case results below are based on 4 scenario test runs, each run
-on the Intel POD6_ between September 8 and 11 in 2016.
-
-TC002
------
-The round-trip-time (RTT) between 2 VMs on different blades is measured using
-ping. Most test run measurements result on average between 1.35 ms and 1.57 ms.
-Only one test run reached the greatest RTT spike of 2.58 ms. Meanwhile, the
-smallest network latency is 1.11 ms, obtained on Sep. 11. In general, the
-average network latency of the four test runs is between 1.35 ms and 1.57 ms.
-SLA set to be 10 ms. The SLA value is used as a reference, it has not been
-defined by OPNFV.
-
-TC005
------
-The IO read bandwidth actually refers to the storage throughput, which is
-measured by fio; the greatest IO read bandwidth of the four runs is 175.4
-MB/s. The IO read bandwidth of three of the runs looks similar, with an
-average between 43.7 and 56.3 MB/s, except the one on Sep. 8, whose maximum
-storage throughput is only 107.9 MB/s. One of the runs has a minimum BW of
-478 KB/s and another has a maximum BW of 168.6 MB/s. The SLA of read bandwidth
-is set to 400 MB/s, which is used as a reference; it has not been defined by
-OPNFV.
-
-The results of storage IOPS for the four runs look similar to each other. The
-IO read times per second of the four test runs have an average value between
-978 per second and 1.20 K/s, and meanwhile, the minimum result is only 36
-times per second.
-
-TC010
------
-The tool we use to measure memory read latency is lmbench, which is a series of
-micro benchmarks intended to measure basic operating system and hardware system
-metrics. The memory read latency of the four runs is between 1.164 ns and 1.244
-ns on average. The variations within each test run are quite different: some
-vary over a large range and others change only slightly. For example, the
-largest variation is on September 10, where the memory read latency ranges
-from 1.128 ns to 1.381 ns. However, the results on September 11 change very
-little. The SLA is set to 30 ns. The SLA value is used as a reference, it has
-not been defined by OPNFV.
-
-TC011
------
-Iperf3 is a tool for evaluating the packet delay variation between 2 VMs on
-different blades. The reported packet delay variations of the four test runs
-differ from each other. In general, the packet delay of two runs looks
-similar, as both stay stable within each run, with mean packet delays of
-0.0772 ms and 0.0788 ms respectively. Of the four runs, the fourth has the
-worst result, as the packet delay reaches 0.0838 ms. The remaining run has a
-wide range from 0.0666 ms to 0.0798 ms. The SLA value is set to 10 ms.
-The SLA value is used as a reference, it has not been defined by OPNFV.
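-
-As a minimal sketch of how such a measurement can be driven (independent of
-the Yardstick scenario files), the example below starts a UDP iperf3 client
-against an already-running iperf3 server and reads back the reported jitter;
-the server address is a placeholder and the JSON field layout is the one
-produced by recent iperf3 releases:
-
-.. code-block:: python
-
-    # Sketch: run a UDP iperf3 test and extract the jitter (delay variation).
-    import json
-    import subprocess
-
-    def udp_jitter_ms(server="10.0.0.2", seconds=20):
-        out = subprocess.run(
-            ["iperf3", "-c", server, "-u", "-t", str(seconds), "-J"],
-            capture_output=True, text=True, check=True,
-        ).stdout
-        return json.loads(out)["end"]["sum"]["jitter_ms"]
-
-    print("packet delay variation: %.4f ms" % udp_jitter_ms())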
-
-TC012
------
-Lmbench is also used to measure the memory read and write bandwidth, for
-which we use bw_mem to obtain the results. Among the four test runs, the
-trends of the memory bandwidth look similar: all have a wide range, and the
-minimum and maximum results are 9.02 GB/s and 18.14 GB/s. Here the SLA is set
-to 15 GB/s. The SLA value is used as a reference, it has not been defined by
-OPNFV.
-
-TC014
------
-Unixbench is used to evaluate the IaaS processing speed with regard to the
-scores of single CPU running and parallel running. It can be seen from the
-dashboard that the processing test results vary from 3395 to 3475, with
-only one result per date. No SLA set.
-
-TC037
------
-The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
-on different blades are measured when increasing the amount of UDP flows sent
-between the VMs using pktgen as packet generator tool.
-
-Round trip times and packet throughput between VMs can typically be affected by
-the amount of flows set up and result in higher RTT and less PPS throughput.
-
-The mean packet throughput of the four test runs is between 362.1 kpps and
-363.5 kpps, of which the result of the third run is the highest. The RTT
-results of all the test runs keep flat at approx. 17 ms. It is obvious that the
-PPS results are not as consistent as the RTT results.
-
-The number of flows in the four test runs is 240 k on average, and the PPS
-results look slightly uneven, since the largest packet throughput is 418.1
-kpps and the minimum throughput is 326.5 kpps.
-
-There are no errors of packets received in the four runs, but there are still
-lost packets in all the test runs. The RTT values obtained by ping in the four
-runs have a similar average value of approx. 17 ms, of which the worst RTT is
-39 ms on Sep. 11.
-
-CPU load is measured by mpstat, and the CPU load of the four test runs looks
-similar, since the minimum value and the peak of the CPU load are approx. 0
-percent and nine percent respectively. The peak CPU load of nine percent is
-observed on Sep. 10.
-
-TC069
------
-With the block size changing from 1 kb to 512 kb, the memory write bandwidth
-tends to become larger first and then smaller within every test run, ranging
-from 25.9 GB/s to 26.6 GB/s and then down to 18.1 GB/s on average. Since the
-test id is one, only the INT memory write bandwidth is tested. On the whole,
-when the block size is from 2 kb to 16 kb, the memory write bandwidth looks
-similar, with a minimal BW of 22.1 GB/s and a peak value of 28.6 GB/s. Then,
-as the block size becomes larger, the memory write bandwidth tends to
-decrease. The SLA is set to 7 GB/s. The SLA value is used as a reference, it
-has not been defined by OPNFV.
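-
-A toy sketch of the block-size sweep described above is shown below; it is not
-the native memory benchmark used by the test case, and Python-level overhead
-keeps the absolute numbers far below the GB/s figures reported here, but it
-illustrates how bandwidth changes as the write block size grows:
-
-.. code-block:: python
-
-    # Illustration only: sweep write block sizes over a fixed buffer (GB/s).
-    import time
-
-    BUF = bytearray(64 * 1024 * 1024)              # 64 MiB target buffer
-
-    def write_bw_gbs(block_kb):
-        block = b"\xff" * (block_kb * 1024)
-        start = time.perf_counter()
-        for off in range(0, len(BUF), len(block)):
-            BUF[off:off + len(block)] = block      # bulk write of one block
-        return len(BUF) / (time.perf_counter() - start) / 1e9
-
-    for kb in (1, 2, 4, 8, 16, 32, 64, 128, 256, 512):
-        print("block %4d kB: %.2f GB/s" % (kb, write_bw_gbs(kb)))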
-
-TC070
------
-The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
-on different blades are measured when increasing the amount of UDP flows sent
-between the VMs using pktgen as packet generator tool.
-
-Round trip times and packet throughput between VMs can typically be affected by
-the amount of flows set up and result in higher RTT and less PPS throughput.
-
-The network latency is measured by ping, and the results of the four test runs
-look similar to each other; within these test runs, the maximum RTT reaches
-39 ms and the average RTT is usually approx. 17 ms. The network latency
-tested on Sep. 11 shows a peak latency of 39 ms. But on the whole, the
-average RTTs of the four runs stay flat and the network latency is
-relatively short.
-
-Memory utilization is measured by free, which can display the amount of free
-and used memory in the system. The largest amount of used memory is 270 MiB in
-the first two runs. In general, the mean used memory of two of the test runs
-is very large, reaching 264 MiB on average, while the other two runs have a
-wide range of memory usage, with a minimum value of 150 MiB and a maximum
-value of 270 MiB. On the other hand, the mean free memory of the four test
-runs follows a similar trend to that of the mean used memory. In general, the
-mean free memory changes from 220 MiB to 342 MiB.
-
-Packet throughput and packet loss can be measured by pktgen, which is a tool
-for generating traffic loads for network experiments. The packet throughput
-of the four test runs varies within each run, ranging from 326.5 kpps to
-418.1 kpps. The average number of flows in these tests is 240000, and each
-run has a minimum number of flows of 2 and a maximum number of flows of
-1.001 Mil. At the same time, the corresponding packet throughput differs
-between 326.5 kpps and 418.1 kpps, with an average packet throughput between
-361.7 kpps and 363.5 kpps. In summary, the PPS results seem consistent. Within
-each of the four test runs, the packet throughput does not seem to increase as
-the number of flows becomes larger.
-
-TC071
------
-The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
-on different blades are measured when increasing the amount of UDP flows sent
-between the VMs using pktgen as packet generator tool.
-
-Round trip times and packet throughput between VMs can typically be affected by
-the amount of flows set up and result in higher RTT and less PPS throughput.
-
-The network latency is measured by ping, and the results of the four test runs
-look similar to each other. Within each test run, the maximum RTT is only 47
-ms and the average RTT is usually approx. 15 ms. On the whole, the average
-RTTs of the four runs stay stable and the network latency is relatively small.
-
-Cache utilization is measured by cachestat, which can display the size of the
-cache and buffer in the system. Cache utilization statistics are collected
-during UDP flows sent between the VMs using pktgen as packet generator tool.
-The largest cache size is 214 MiB, which is the same for the four runs, and
-the smallest cache size is 94 MiB. On the whole, the average cache size of the
-four runs looks the same and is between 198 MiB and 207 MiB. Meanwhile, the
-trend of the buffer size is flat, with a minimum value of 7 MiB and a maximum
-value of 8 MiB and an average value of about 7.9 MiB.
-
-Packet throughput can be measured by pktgen, which is a tool for generating
-traffic loads for network experiments. The mean packet throughput of the four
-test runs is about the same, approx. 363 kpps. The average number of flows in
-these tests is 240k, and each run has a minimum number of flows of 2 and a
-maximum number of flows of 1.001 Mil. At the same time, the corresponding
-packet throughput differs between 327 kpps and 418 kpps, with an average
-packet throughput of about 363 kpps. Within each of the four test runs, the
-packet throughput does not seem to increase as the number of flows becomes
-larger.
-
-TC072
------
-The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
-on different blades are measured when increasing the amount of UDP flows sent
-between the VMs using pktgen as packet generator tool.
-
-Round trip times and packet throughput between VMs can typically be affected by
-the amount of flows set up and result in higher RTT and less PPS throughput.
-
-The RTT results are similar throughout the different test dates and runs,
-between 0 ms and 47 ms with an average latency of less than 16 ms. The PPS
-results are not as consistent as the RTT results, as the mean packet
-throughput of the four runs differs from 361.7 kpps to 365.0 kpps.
-
-Network utilization is measured by sar, the system activity reporter, which
-can display the average statistics for the time since the system was started.
-Network utilization statistics are collected during UDP flows sent between the
-VMs using pktgen as packet generator tool. The total number of packets
-transmitted per second looks similar for two of the test runs, with values
-varying widely from 10 pps to 432 kpps, while the results of the other test
-runs stay stable with an average number of packets transmitted per second of
-10 pps. However, the total number of packets received per second of the four
-runs looks similar, covering a wide range from 2 pps to 657 kpps.
-
-In some test runs when running with less than approx. 250000 flows the PPS
-throughput is normally flatter compared to when running with more flows, after
-which the PPS throughput decreases. For the other test runs there is however no
-significant change to the PPS throughput when the number of flows is
-increased. In some test runs the PPS is also greater with 250000 flows
-compared to other test runs where the PPS result is less with only 2 flows.
-
-There are lost packets reported in most of the test runs. There is no observed
-correlation between the amount of flows and the amount of lost packets.
-The lost amount of packets normally differs a lot per test run.
-
-Detailed test results
----------------------
-The scenario was run on Intel POD6_ with:
-Joid
-OpenStack Mitaka
-Onos Goldeneye
-OpenVirtualSwitch 2.5.90
-OpenDayLight Beryllium
-
-Rationale for decisions
------------------------
-Pass
-
-Conclusions and recommendations
--------------------------------
-Tests were successfully executed and metrics collected.
-No SLA was verified. To be decided on in next release of OPNFV.
-
diff --git a/docs/release/results/overview.rst b/docs/release/results/overview.rst
index b4a050545..b5e6a43a6 100644
--- a/docs/release/results/overview.rst
+++ b/docs/release/results/overview.rst
@@ -3,6 +3,15 @@
.. http://creativecommons.org/licenses/by/4.0
.. (c) OPNFV, Ericsson AB and others.
+..
+ Convention for heading levels in Yardstick:
+ ======= Heading 0 (reserved for the title in a document)
+ ------- Heading 1
+ ^^^^^^^ Heading 2
+ +++++++ Heading 3
+ ''''''' Heading 4
+ Avoid deeper levels because they do not render well.
+
Yardstick test result document overview
=======================================
@@ -42,55 +51,31 @@ environment, features or test framework.
The list of scenarios supported by each installer can be described as follows:
-+-------------------------+---------+---------+---------+---------+
-| Scenario | Apex | Compass | Fuel | Joid |
-+=========================+=========+=========+=========+=========+
-| os-nosdn-nofeature-noha | | | X | X |
-+-------------------------+---------+---------+---------+---------+
-| os-nosdn-nofeature-ha | X | X | X | X |
-+-------------------------+---------+---------+---------+---------+
-| os-odl_l2-nofeature-ha | X | X | X | X |
-+-------------------------+---------+---------+---------+---------+
-| os-odl_l2-nofeature-noha| | | X | |
-+-------------------------+---------+---------+---------+---------+
-| os-odl_l3-nofeature-ha | X | X | X | |
-+-------------------------+---------+---------+---------+---------+
-| os-odl_l3-nofeature-noha| | | X | |
-+-------------------------+---------+---------+---------+---------+
-| os-onos-sfc-ha | X | X | X | X |
-+-------------------------+---------+---------+---------+---------+
-| os-onos-sfc-noha | | | X | |
-+-------------------------+---------+---------+---------+---------+
-| os-onos-nofeature-ha | X | X | X | X |
-+-------------------------+---------+---------+---------+---------+
-| os-onos-nofeature-noha | | | X | |
-+-------------------------+---------+---------+---------+---------+
-| os-odl_l2-sfc-ha | | | X | |
-+-------------------------+---------+---------+---------+---------+
-| os-odl_l2-sfc-noha | X | X | X | |
-+-------------------------+---------+---------+---------+---------+
-| os-odl_l2-bgpvpn-ha | X | | X | |
-+-------------------------+---------+---------+---------+---------+
-| os-odl_l2-bgpvpn-noha | | X | X | |
-+-------------------------+---------+---------+---------+---------+
-| os-nosdn-kvm-ha | | | X | |
-+-------------------------+---------+---------+---------+---------+
-| os-nosdn-kvm-noha | | X | X | |
-+-------------------------+---------+---------+---------+---------+
-| os-nosdn-ovs-ha | | | X | |
-+-------------------------+---------+---------+---------+---------+
-| os-nosdn-ovs-noha | X | | X | |
-+-------------------------+---------+---------+---------+---------+
-| os-ocl-nofeature-ha | | | | |
-+-------------------------+---------+---------+---------+---------+
-| os-nosdn-lxd-ha | | | | X |
-+-------------------------+---------+---------+---------+---------+
-| os-nosdn-lxd-noha | | | | X |
-+-------------------------+---------+---------+---------+---------+
-| os-odl_l2-fdio-noha | X | | | |
-+-------------------------+---------+---------+---------+---------+
-| os-odl_l2-moon-ha | | X | | |
-+-------------------------+---------+---------+---------+---------+
++-------------------------+------+---------+----------+------+------+-------+
+| Scenario | Apex | Compass | Fuel-arm | Fuel | Joid | Daisy |
++=========================+======+=========+==========+======+======+=======+
+| os-nosdn-nofeature-noha | X | | | | X | |
++-------------------------+------+---------+----------+------+------+-------+
+| os-nosdn-nofeature-ha | X | | X | X | X | X |
++-------------------------+------+---------+----------+------+------+-------+
+| os-nosdn-bar-noha | X | | | | | |
++-------------------------+------+---------+----------+------+------+-------+
+| os-nosdn-bar-ha | X | | | | | |
++-------------------------+------+---------+----------+------+------+-------+
+| os-odl-bgpvpn-ha | X | | | | | |
++-------------------------+------+---------+----------+------+------+-------+
+| os-nosdn-calipso-noha | X | | | | | |
++-------------------------+------+---------+----------+------+------+-------+
+| os-nosdn-kvm-ha | | X | | | | |
++-------------------------+------+---------+----------+------+------+-------+
+| os-odl_l3-nofeature-ha | | X | | | | |
++-------------------------+------+---------+----------+------+------+-------+
+| os-odl-sfc-ha | | X | | | | |
++-------------------------+------+---------+----------+------+------+-------+
+| os-odl-nofeature-ha | | | | X | | X |
++-------------------------+------+---------+----------+------+------+-------+
+| os-nosdn-ovs-ha | | | | X | | |
++-------------------------+------+---------+----------+------+------+-------+
To qualify for release, the scenarios must have deployed and been successfully
tested in four consecutive installations to establish stability of deployment
@@ -103,4 +88,4 @@ References
* IEEE Std 829-2008. "Standard for Software and System Test Documentation".
-* OPNFV Colorado release note for Yardstick.
+* OPNFV Fraser release note for Yardstick.
diff --git a/docs/release/results/results.rst b/docs/release/results/results.rst
index 04c6b9f87..f0c20360b 100644
--- a/docs/release/results/results.rst
+++ b/docs/release/results/results.rst
@@ -2,41 +2,56 @@
.. License.
.. http://creativecommons.org/licenses/by/4.0
-Results listed by scenario
-==========================
+..
+ Convention for heading levels in Yardstick:
+ ======= Heading 0 (reserved for the title in a document)
+ ------- Heading 1
+ ^^^^^^^ Heading 2
+ +++++++ Heading 3
+ ''''''' Heading 4
+ Avoid deeper levels because they do not render well.
-The following sections describe the yardstick results as evaluated for the
-Colorado release scenario validation runs. Each section describes the
-determined state of the specific scenario as deployed in the Colorado
-release process.
+Results listed by test cases
+----------------------------
+
+.. _TOM: https://wiki.opnfv.org/display/testing/R+post-processing+of+the+Yardstick+results
+
+
+The following sections describe the yardstick test case results as evaluated
+for the OPNFV Fraser release scenario validation runs. Each section describes
+the determined state of the specific test case as executed in the Fraser release
+process. All test data are analyzed using the TOM_ tool.
Scenario Results
-================
+----------------
.. _Dashboard: http://testresults.opnfv.org/grafana/dashboard/db/yardstick-main
.. _Jenkins: https://build.opnfv.org/ci/view/yardstick/
+
The following documents contain results of Yardstick test cases executed on
-OPNFV labs, triggered by OPNFV CI pipeline, documented per scenario.
+OPNFV labs, triggered by OPNFV CI pipeline, documented per test case.
+For hardware details of OPNFV labs, please visit: https://wiki.opnfv.org/display/pharos/Community+Labs
.. toctree::
:maxdepth: 1
- os-nosdn-nofeature-ha.rst
- os-nosdn-nofeature-noha.rst
- os-odl_l2-nofeature-ha.rst
- os-odl_l2-bgpvpn-ha.rst
- os-odl_l2-sfc-ha.rst
- os-nosdn-kvm-ha.rst
- os-onos-nofeature-ha.rst
- os-onos-sfc-ha.rst
+ tc002-network-latency.rst
+ tc010-memory-read-latency.rst
+ tc011-packet-delay-variation.rst
+ tc012-memory-read-write-bandwidth.rst
+ tc014-cpu-processing-speed.rst
+ tc069-memory-write-bandwidth.rst
+ tc082-context-switches-under-load.rst
+ tc083-network-throughput-between-vm.rst
Test results of executed tests are available in Dashboard_ and logs in Jenkins_.
+Test results for Fraser release are collected from April 10, 2018 to May 13, 2018.
Feature Test Results
-====================
+--------------------
The following features were verified by Yardstick test cases:
@@ -48,8 +63,6 @@ The following features were verified by Yardstick test cases:
* Parser
- * Virtual Traffic Classifier (see :doc:`yardstick-opnfv-vtc`)
-
* StorPerf
.. note:: The test cases for IPv6 and Parser Projects are included in the
diff --git a/docs/release/results/tc002-network-latency.rst b/docs/release/results/tc002-network-latency.rst
new file mode 100644
index 000000000..064983bec
--- /dev/null
+++ b/docs/release/results/tc002-network-latency.rst
@@ -0,0 +1,525 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+
+
+======================================
+Test results for TC002 network latency
+======================================
+
+.. toctree::
+ :maxdepth: 2
+
+
+Overview of test case
+=====================
+
+TC002 verifies that network latency is within acceptable boundaries when packets travel between hosts located on the same or different compute blades.
+Ping packets (ICMP protocol's mandatory ECHO_REQUEST datagram) are sent from host VM to target VM(s) to elicit ICMP ECHO_RESPONSE.
+
+Metric: RTT (Round Trip Time)
+Unit: ms
+
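+The snippet below is a minimal, illustrative sketch (not part of the Yardstick
+test suite) of how an RTT sample of this kind could be collected and parsed on
+a Linux host; the target address and packet count are placeholders.
+
+.. code-block:: python
+
+    # Sketch: sample the average RTT with the system ping command (Linux).
+    import re
+    import subprocess
+
+    def average_rtt_ms(target="10.0.0.2", count=10):
+        out = subprocess.run(
+            ["ping", "-c", str(count), target],
+            capture_output=True, text=True, check=True,
+        ).stdout
+        # Per-packet lines look like: "64 bytes ... icmp_seq=1 ttl=64 time=0.512 ms"
+        rtts = [float(m) for m in re.findall(r"time=([\d.]+) ms", out)]
+        return sum(rtts) / len(rtts) if rtts else None
+
+    print("average RTT: %s ms" % average_rtt_ms())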
+
+Euphrates release
+-----------------
+
+Test results per scenario and pod (lower is better):
+
+{
+
+ "os-nosdn-ovs_dpdk-ha:huawei-pod2:compass": [0.214],
+
+ "os-odl_l2-moon-ha:huawei-pod2:compass": [0.309],
+
+ "os-nosdn-ovs_dpdk-noha:huawei-virtual3:compass": [0.3145],
+
+ "os-nosdn-ovs-ha:lf-pod2:fuel": [0.3585],
+
+ "os-odl_l3-nofeature-ha:huawei-pod2:compass": [0.3765],
+
+ "os-nosdn-ovs_dpdk-noha:huawei-virtual4:compass": [0.403],
+
+ "os-odl-sfc-ha:huawei-pod2:compass": [0.413],
+
+ "os-nosdn-ovs-ha:ericsson-pod1:fuel": [0.494],
+
+ "os-nosdn-nofeature-ha:lf-pod1:apex": [0.5715],
+
+ "os-nosdn-nofeature-noha:lf-pod1:apex": [0.5785],
+
+ "os-odl-sfc-ha:lf-pod1:apex": [0.617],
+
+ "os-odl-nofeature-ha:lf-pod1:apex": [0.62],
+
+ "os-nosdn-bar-noha:lf-pod1:apex": [0.632],
+
+ "os-odl-nofeature-noha:lf-pod1:apex": [0.635],
+
+ "os-odl-bgpvpn-ha:lf-pod1:apex": [0.658],
+
+ "os-odl-sfc-noha:lf-pod1:apex": [0.663],
+
+ "os-nosdn-bar-ha:lf-pod1:apex": [0.668],
+
+ "os-ovn-nofeature-noha:lf-pod1:apex": [0.668],
+
+ "os-nosdn-nofeature-ha:huawei-pod2:compass": [0.6815],
+
+ "os-nosdn-kvm-ha:huawei-pod2:compass": [0.7005],
+
+ "os-nosdn-bar-ha:huawei-pod2:compass": [0.778],
+
+ "os-nosdn-ovs-noha:ericsson-virtual4:fuel": [0.7825],
+
+ "os-nosdn-ovs-noha:ericsson-virtual2:fuel": [0.7885],
+
+ "os-nosdn-nofeature-ha:flex-pod2:apex": [0.795],
+
+ "os-nosdn-ovs-noha:ericsson-virtual1:fuel": [0.8045],
+
+ "os-nosdn-nofeature-noha:huawei-pod12:joid": [0.8335],
+
+ "os-nosdn-ovs-noha:ericsson-virtual3:fuel": [0.8755],
+
+ "os-nosdn-nofeature-ha:huawei-pod12:joid": [0.8855],
+
+ "os-nosdn-ovs_dpdk-ha:huawei-virtual3:compass": [0.8895],
+
+ "os-nosdn-openbaton-ha:huawei-pod12:joid": [0.901],
+
+ "os-nosdn-ovs_dpdk-ha:huawei-virtual4:compass": [0.956],
+
+ "os-nosdn-lxd-noha:intel-pod5:joid": [1.131],
+
+ "os-odl_l2-moon-noha:huawei-virtual4:compass": [1.173],
+
+ "os-odl-sfc-ha:huawei-virtual8:compass": [1.2015],
+
+ "os-odl_l2-moon-noha:huawei-virtual3:compass": [1.204],
+
+ "os-nosdn-lxd-ha:intel-pod5:joid": [1.2245],
+
+ "os-odl-nofeature-ha:lf-pod2:fuel": [1.2285],
+
+ "os-odl-sfc-noha:huawei-virtual4:compass": [1.3055],
+
+ "os-nosdn-nofeature-noha:huawei-virtual4:compass": [1.309],
+
+ "os-odl_l3-nofeature-noha:huawei-virtual4:compass": [1.313],
+
+ "os-nosdn-nofeature-noha:huawei-virtual8:compass": [1.319],
+
+ "os-odl-nofeature-ha:ericsson-pod1:fuel": [1.3425],
+
+ "os-odl_l3-nofeature-noha:huawei-virtual3:compass": [1.3475],
+
+ "os-nosdn-nofeature-ha:lf-pod2:fuel": [1.348],
+
+ "os-nosdn-kvm-noha:huawei-virtual4:compass": [1.432],
+
+ "os-odl_l3-nofeature-noha:huawei-virtual9:compass": [1.442],
+
+ "os-nosdn-nofeature-ha:ericsson-pod1:fuel": [1.4505],
+
+ "os-nosdn-nofeature-ha:arm-pod5:fuel": [1.497],
+
+ "os-odl-sfc-noha:huawei-virtual3:compass": [1.504],
+
+ "os-odl-nofeature-ha:arm-pod5:fuel": [1.519],
+
+ "os-nosdn-nofeature-noha:intel-pod5:joid": [1.5415],
+
+ "os-nosdn-nofeature-noha:huawei-virtual3:compass": [1.5785],
+
+ "os-nosdn-nofeature-ha:intel-pod5:joid": [1.604],
+
+ "os-nosdn-kvm-noha:huawei-virtual3:compass": [1.61],
+
+ "os-nosdn-nofeature-noha:intel-pod18:joid": [1.633],
+
+ "os-nosdn-openbaton-ha:intel-pod18:joid": [1.6485],
+
+ "os-odl_l3-nofeature-ha:huawei-virtual2:compass": [1.7085],
+
+ "os-nosdn-nofeature-ha:intel-pod18:joid": [1.71],
+
+ "os-nosdn-nofeature-ha:huawei-virtual2:compass": [1.7955],
+
+ "os-odl-nofeature-ha:arm-pod6:fuel": [1.838],
+
+ "os-odl_l3-nofeature-ha:huawei-virtual4:compass": [1.88],
+
+ "os-odl_l2-moon-ha:huawei-virtual3:compass": [1.8975],
+
+ "os-nosdn-kvm-noha:huawei-virtual8:compass": [1.923],
+
+ "os-odl_l2-moon-ha:huawei-virtual4:compass": [1.944],
+
+ "os-odl-sfc-ha:huawei-virtual3:compass": [1.968],
+
+ "os-odl_l3-nofeature-ha:huawei-virtual3:compass": [1.986],
+
+ "os-nosdn-bar-ha:huawei-virtual4:compass": [2.0415],
+
+ "os-nosdn-nofeature-ha:huawei-virtual4:compass": [2.071],
+
+ "os-nosdn-nofeature-ha:arm-pod6:fuel": [2.0855],
+
+ "os-odl-sfc-ha:huawei-virtual4:compass": [2.1085],
+
+ "os-nosdn-nofeature-ha:huawei-virtual3:compass": [2.1135],
+
+ "os-nosdn-nofeature-noha:ericsson-virtual3:fuel": [2.234],
+
+ "os-nosdn-nofeature-ha:huawei-virtual9:compass": [2.294],
+
+ "os-nosdn-kvm-ha:huawei-virtual3:compass": [2.304],
+
+ "os-nosdn-bar-ha:huawei-virtual3:compass": [2.378],
+
+ "os-nosdn-kvm-ha:huawei-virtual4:compass": [2.397],
+
+ "os-nosdn-nofeature-ha:huawei-virtual1:compass": [2.472],
+
+ "os-nosdn-nofeature-noha:huawei-virtual1:compass": [2.603],
+
+ "os-nosdn-nofeature-noha:huawei-virtual2:compass": [2.635],
+
+ "os-odl-nofeature-noha:ericsson-virtual3:fuel": [2.9055],
+
+ "os-odl-nofeature-noha:ericsson-virtual2:fuel": [3.1295],
+
+ "os-nosdn-nofeature-noha:ericsson-virtual2:fuel": [3.337],
+
+ "os-odl-nofeature-noha:ericsson-virtual4:fuel": [3.634],
+
+ "os-nosdn-nofeature-noha:ericsson-virtual1:fuel": [3.875],
+
+ "os-odl-nofeature-noha:ericsson-virtual1:fuel": [3.9655],
+
+ "os-nosdn-nofeature-noha:ericsson-virtual4:fuel": [3.9795]
+
+}
+
+
+The influence of the scenario
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. image:: images/tc002_scenario.png
+ :width: 800px
+ :alt: TC002 influence of scenario
+
+{
+
+ "os-odl_l2-moon-ha": [0.3415],
+
+ "os-nosdn-ovs-ha": [0.3625],
+
+ "os-nosdn-ovs_dpdk-noha": [0.378],
+
+ "os-nosdn-ovs_dpdk-ha": [0.5265],
+
+ "os-nosdn-bar-noha": [0.632],
+
+ "os-odl-bgpvpn-ha": [0.658],
+
+ "os-ovn-nofeature-noha": [0.668],
+
+ "os-odl_l3-nofeature-ha": [0.8545],
+
+ "os-nosdn-ovs-noha": [0.8575],
+
+ "os-nosdn-bar-ha": [0.903],
+
+ "os-odl-sfc-ha": [1.127],
+
+ "os-nosdn-lxd-noha": [1.131],
+
+ "os-nosdn-nofeature-ha": [1.152],
+
+ "os-odl_l2-moon-noha": [1.1825],
+
+ "os-nosdn-lxd-ha": [1.2245],
+
+ "os-odl_l3-nofeature-noha": [1.337],
+
+ "os-odl-nofeature-ha": [1.352],
+
+ "os-odl-sfc-noha": [1.4255],
+
+ "os-nosdn-kvm-noha": [1.5045],
+
+ "os-nosdn-openbaton-ha": [1.5665],
+
+ "os-nosdn-nofeature-noha": [1.729],
+
+ "os-nosdn-kvm-ha": [1.7745],
+
+ "os-odl-nofeature-noha": [3.106]
+
+}
+
+
+The influence of the POD
+^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. image:: images/tc002_pod.png
+ :width: 800px
+ :alt: TC002 influence of the POD
+
+{
+
+ "huawei-pod2": [0.3925],
+
+ "lf-pod2": [0.5315],
+
+ "lf-pod1": [0.62],
+
+ "flex-pod2": [0.795],
+
+ "huawei-pod12": [0.87],
+
+ "intel-pod5": [1.25],
+
+ "ericsson-virtual3": [1.2655],
+
+ "ericsson-pod1": [1.372],
+
+ "arm-pod5": [1.518],
+
+ "huawei-virtual4": [1.5355],
+
+ "huawei-virtual3": [1.606],
+
+ "intel-pod18": [1.6575],
+
+ "huawei-virtual8": [1.709],
+
+ "huawei-virtual2": [1.872],
+
+ "arm-pod6": [1.895],
+
+ "huawei-virtual9": [2.0745],
+
+ "huawei-virtual1": [2.495],
+
+ "ericsson-virtual2": [2.7895],
+
+ "ericsson-virtual4": [3.768],
+
+ "ericsson-virtual1": [3.8035]
+
+}
+
+
+Fraser release
+--------------
+
+Test results per scenario and pod (lower is better):
+
+{
+
+ "os-odl_l3-nofeature-ha:huawei-pod2:compass": [0.42],
+
+ "os-odl-sfc-ha:huawei-pod2:compass": [0.557],
+
+ "os-nosdn-ovs-ha:ericsson-pod1:fuel": [0.5765],
+
+ "os-nosdn-kvm-ha:huawei-pod2:compass": [0.582],
+
+ "os-odl-bgpvpn-ha:lf-pod1:apex": [0.678],
+
+ "os-nosdn-nofeature-ha:lf-pod1:apex": [0.7075],
+
+ "os-nosdn-calipso-noha:lf-pod1:apex": [0.713],
+
+ "os-nosdn-nofeature-noha:lf-pod1:apex": [0.7155],
+
+ "os-nosdn-bar-ha:lf-pod1:apex": [0.732],
+
+ "os-nosdn-bar-noha:lf-pod1:apex": [0.7415],
+
+ "os-odl-nofeature-noha:lf-pod1:apex": [0.7565],
+
+ "os-nosdn-ovs-ha:arm-pod6:fuel": [0.8015],
+
+ "os-nosdn-nofeature-ha:huawei-pod2:compass": [0.908],
+
+ "os-odl-nofeature-ha:ericsson-pod1:fuel": [0.9165],
+
+ "os-nosdn-bar-ha:huawei-pod2:compass": [0.969],
+
+ "os-nosdn-ovs-noha:ericsson-virtual2:fuel": [0.9765],
+
+ "os-nosdn-nofeature-noha:huawei-pod12:joid": [1.0245],
+
+ "os-nosdn-nofeature-ha:huawei-pod12:joid": [1.0495],
+
+ "os-odl-sfc-noha:huawei-virtual4:compass": [1.1645],
+
+ "os-nosdn-nofeature-ha:lf-pod2:fuel": [1.206],
+
+ "os-odl-sfc-noha:huawei-virtual3:compass": [1.236],
+
+ "os-nosdn-ovs-noha:ericsson-virtual4:fuel": [1.241],
+
+ "os-nosdn-nofeature-ha:zte-pod2:daisy": [1.2805],
+
+ "os-odl-nofeature-ha:lf-pod2:fuel": [1.286],
+
+ "os-odl_l3-nofeature-noha:huawei-virtual3:compass": [1.299],
+
+ "os-odl-sfc-ha:huawei-virtual4:compass": [1.305],
+
+ "os-odl_l3-nofeature-noha:huawei-virtual4:compass": [1.309],
+
+ "os-nosdn-kvm-noha:huawei-virtual4:compass": [1.314],
+
+ "os-nosdn-nofeature-noha:huawei-virtual4:compass": [1.431],
+
+ "os-nosdn-nofeature-ha:ericsson-pod1:fuel": [1.457],
+
+ "os-odl-nofeature-ha:zte-pod2:daisy": [1.517],
+
+ "os-nosdn-kvm-noha:huawei-virtual3:compass": [1.576],
+
+ "os-nosdn-nofeature-noha:huawei-virtual3:compass": [1.592],
+
+ "os-odl-nofeature-ha:arm-pod5:fuel": [1.714],
+
+ "os-nosdn-nofeature-noha:intel-pod18:joid": [1.809],
+
+ "os-nosdn-bar-noha:huawei-virtual4:compass": [1.81],
+
+ "os-nosdn-nofeature-ha:intel-pod18:joid": [1.8505],
+
+ "os-nosdn-nofeature-ha:huawei-virtual4:compass": [1.8895],
+
+ "os-nosdn-nofeature-ha:huawei-virtual3:compass": [1.909],
+
+ "os-odl_l3-nofeature-ha:huawei-virtual4:compass": [1.925],
+
+ "os-nosdn-nofeature-noha:huawei-virtual2:compass": [1.964],
+
+ "os-nosdn-openbaton-ha:intel-pod18:joid": [1.9755],
+
+ "os-nosdn-nofeature-ha:huawei-virtual1:compass": [1.9765],
+
+ "os-nosdn-bar-noha:huawei-virtual3:compass": [1.9915],
+
+ "os-odl_l3-nofeature-ha:huawei-virtual3:compass": [1.9925],
+
+ "os-nosdn-kvm-ha:huawei-virtual4:compass": [2.0265],
+
+ "os-odl-nofeature-ha:arm-pod6:fuel": [2.106],
+
+ "os-odl-sfc-ha:huawei-virtual3:compass": [2.124],
+
+ "os-nosdn-kvm-ha:huawei-virtual3:compass": [2.185],
+
+ "os-nosdn-nofeature-ha:arm-pod6:fuel": [2.281],
+
+ "os-nosdn-bar-ha:huawei-virtual4:compass": [2.432],
+
+ "os-odl-nofeature-noha:ericsson-virtual4:fuel": [2.483],
+
+ "os-nosdn-bar-ha:huawei-virtual3:compass": [2.524],
+
+ "os-nosdn-nofeature-noha:ericsson-virtual3:fuel": [3.9175],
+
+ "os-odl-nofeature-noha:ericsson-virtual2:fuel": [4.338],
+
+ "os-nosdn-nofeature-noha:ericsson-virtual2:fuel": [4.641]
+
+}
+
+
+The influence of the scenario
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. image:: images/tc002_scenario_fraser.png
+ :width: 800px
+ :alt: TC002 influence of scenario
+
+{
+
+ "os-odl-bgpvpn-ha": [0.678],
+
+ "os-nosdn-calipso-noha": [0.713],
+
+ "os-nosdn-ovs-ha": [0.7245],
+
+ "os-odl_l3-nofeature-ha": [0.7435],
+
+ "os-odl-sfc-ha": [0.796],
+
+ "os-nosdn-kvm-ha": [1.059],
+
+ "os-nosdn-bar-ha": [1.083],
+
+ "os-nosdn-ovs-noha": [1.09],
+
+ "os-odl-sfc-noha": [1.196],
+
+ "os-nosdn-nofeature-noha": [1.26],
+
+ "os-nosdn-nofeature-ha": [1.291],
+
+ "os-odl_l3-nofeature-noha": [1.308],
+
+ "os-nosdn-bar-noha": [1.4125],
+
+ "os-nosdn-kvm-noha": [1.4475],
+
+ "os-odl-nofeature-ha": [1.508],
+
+ "os-odl-nofeature-noha": [1.914],
+
+ "os-nosdn-openbaton-ha": [1.9755]
+
+}
+
+
+The influence of the POD
+^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. image:: images/tc002_pod_fraser.png
+ :width: 800px
+ :alt: TC002 influence of the POD
+
+{
+
+ "huawei-pod2": [0.677],
+
+ "lf-pod1": [0.725],
+
+ "ericsson-pod1": [0.9165],
+
+ "huawei-pod12": [1.0465],
+
+ "lf-pod2": [1.2325],
+
+ "zte-pod2": [1.395],
+
+ "ericsson-virtual4": [1.582],
+
+ "huawei-virtual4": [1.697],
+
+ "arm-pod5": [1.714],
+
+ "huawei-virtual3": [1.716],
+
+ "intel-pod18": [1.856],
+
+ "huawei-virtual2": [1.964],
+
+ "huawei-virtual1": [1.9765],
+
+ "arm-pod6": [2.209],
+
+ "ericsson-virtual3": [3.9175],
+
+ "ericsson-virtual2": [4.004]
+
+}
diff --git a/docs/release/results/tc010-memory-read-latency.rst b/docs/release/results/tc010-memory-read-latency.rst
new file mode 100644
index 000000000..81559d647
--- /dev/null
+++ b/docs/release/results/tc010-memory-read-latency.rst
@@ -0,0 +1,510 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+
+
+==========================================
+Test results for TC010 memory read latency
+==========================================
+
+.. toctree::
+ :maxdepth: 2
+
+
+Overview of test case
+=====================
+
+TC010 measures the memory read latency for varying memory sizes and strides.
+The test results shown below are for a memory size of 16 MB.
+
+Metric: Memory read latency
+Unit: ns
+
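+As an illustration only (the measurements reported here come from lmbench),
+the sketch below shows the pointer-chasing idea behind a strided memory read
+latency test; the buffer size and stride are placeholders, and Python-level
+overhead keeps the absolute numbers far above the ns figures listed below.
+
+.. code-block:: python
+
+    # Sketch: dependent-load ("pointer chase") latency over a strided buffer.
+    import random
+    import time
+
+    def chase_latency_ns(size=1 << 24, stride=64):
+        n = size // 8                        # number of 8-byte "slots"
+        order = list(range(0, n, max(stride // 8, 1)))
+        random.shuffle(order)                # defeat sequential prefetching
+        nxt = [0] * n
+        for a, b in zip(order, order[1:] + order[:1]):
+            nxt[a] = b                       # link each slot to the next one
+        idx, hops = order[0], 10 * len(order)
+        start = time.perf_counter()
+        for _ in range(hops):
+            idx = nxt[idx]                   # each load depends on the last
+        return (time.perf_counter() - start) / hops * 1e9
+
+    print("approx. %.1f ns per dependent access" % chase_latency_ns())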
+
+Euphrates release
+-----------------
+
+Test results per scenario and pod (lower is better):
+
+{
+
+ "os-nosdn-nofeature-ha:ericsson-pod1:fuel": [5.3165],
+
+ "os-nosdn-nofeature-ha:flex-pod2:apex": [5.908],
+
+ "os-nosdn-ovs-noha:ericsson-virtual1:fuel": [6.412],
+
+ "os-nosdn-nofeature-noha:intel-pod18:joid": [6.545],
+
+ "os-nosdn-nofeature-ha:intel-pod18:joid": [6.592],
+
+ "os-nosdn-nofeature-noha:intel-pod5:joid": [6.5975],
+
+ "os-nosdn-ovs-ha:ericsson-pod1:fuel": [6.7675],
+
+ "os-odl-nofeature-ha:ericsson-pod1:fuel": [6.7675],
+
+ "os-nosdn-openbaton-ha:intel-pod18:joid": [6.7945],
+
+ "os-nosdn-nofeature-ha:intel-pod5:joid": [6.839],
+
+ "os-nosdn-ovs-noha:ericsson-virtual4:fuel": [6.9695],
+
+ "os-nosdn-nofeature-noha:ericsson-virtual4:fuel": [7.123],
+
+ "os-odl-nofeature-noha:ericsson-virtual4:fuel": [7.289],
+
+ "os-nosdn-ovs-noha:ericsson-virtual2:fuel": [7.4315],
+
+ "os-nosdn-nofeature-noha:ericsson-virtual2:fuel": [7.9],
+
+ "os-nosdn-ovs_dpdk-ha:huawei-pod2:compass": [8.178],
+
+ "os-nosdn-ovs_dpdk-noha:huawei-virtual3:compass": [8.616],
+
+ "os-nosdn-ovs_dpdk-noha:huawei-virtual4:compass": [8.646],
+
+ "os-odl_l3-nofeature-ha:huawei-pod2:compass": [8.8615],
+
+ "os-odl-sfc-ha:huawei-pod2:compass": [8.87],
+
+ "os-nosdn-bar-ha:huawei-pod2:compass": [8.877],
+
+ "os-odl_l2-moon-ha:huawei-pod2:compass": [8.892],
+
+ "os-nosdn-ovs-noha:ericsson-virtual3:fuel": [8.898],
+
+ "os-nosdn-nofeature-ha:huawei-pod2:compass": [8.952],
+
+ "os-nosdn-kvm-ha:huawei-pod2:compass": [8.9745],
+
+ "os-nosdn-ovs_dpdk-ha:huawei-virtual3:compass": [9.0375],
+
+ "os-nosdn-openbaton-ha:huawei-pod12:joid": [9.083],
+
+ "os-nosdn-nofeature-noha:huawei-pod12:joid": [9.09],
+
+ "os-nosdn-nofeature-ha:huawei-pod12:joid": [9.094],
+
+ "os-odl_l2-moon-noha:huawei-virtual4:compass": [9.293],
+
+ "os-odl_l2-moon-noha:huawei-virtual3:compass": [9.3525],
+
+ "os-odl-sfc-noha:huawei-virtual4:compass": [9.477],
+
+ "os-odl_l3-nofeature-noha:huawei-virtual3:compass": [9.5445],
+
+ "os-odl_l3-nofeature-noha:huawei-virtual4:compass": [9.5575],
+
+ "os-nosdn-nofeature-noha:huawei-virtual4:compass": [9.6435],
+
+ "os-nosdn-nofeature-noha:huawei-virtual1:compass": [9.68],
+
+ "os-nosdn-ovs_dpdk-ha:huawei-virtual4:compass": [9.728],
+
+ "os-nosdn-nofeature-noha:huawei-virtual3:compass": [9.751],
+
+ "os-nosdn-nofeature-noha:ericsson-virtual3:fuel": [9.8645],
+
+ "os-nosdn-kvm-noha:huawei-virtual3:compass": [9.969],
+
+ "os-odl-sfc-noha:huawei-virtual3:compass": [10.029],
+
+ "os-nosdn-kvm-noha:huawei-virtual4:compass": [10.088],
+
+ "os-odl-nofeature-noha:ericsson-virtual2:fuel": [10.2985],
+
+ "os-nosdn-nofeature-ha:huawei-virtual9:compass": [10.318],
+
+ "os-nosdn-nofeature-noha:huawei-virtual2:compass": [10.3215],
+
+ "os-nosdn-nofeature-ha:huawei-virtual4:compass": [10.617],
+
+ "os-odl-nofeature-noha:ericsson-virtual3:fuel": [10.762],
+
+ "os-nosdn-bar-ha:huawei-virtual3:compass": [10.7715],
+
+ "os-nosdn-nofeature-ha:huawei-virtual1:compass": [10.866],
+
+ "os-odl-sfc-ha:huawei-virtual3:compass": [10.871],
+
+ "os-odl_l3-nofeature-ha:huawei-virtual3:compass": [11.1605],
+
+ "os-nosdn-nofeature-ha:huawei-virtual3:compass": [11.227],
+
+ "os-nosdn-bar-ha:huawei-virtual4:compass": [11.348],
+
+ "os-odl-sfc-ha:huawei-virtual4:compass": [11.453],
+
+ "os-odl_l3-nofeature-ha:huawei-virtual2:compass": [11.571],
+
+ "os-odl_l2-moon-ha:huawei-virtual3:compass": [11.5925],
+
+ "os-nosdn-nofeature-ha:huawei-virtual2:compass": [11.689],
+
+ "os-odl_l2-moon-ha:huawei-virtual4:compass": [11.8695],
+
+ "os-odl_l3-nofeature-ha:huawei-virtual4:compass": [12.199],
+
+ "os-nosdn-kvm-ha:huawei-virtual4:compass": [12.433],
+
+ "os-nosdn-kvm-ha:huawei-virtual3:compass": [12.713],
+
+ "os-nosdn-ovs-ha:lf-pod2:fuel": [15.328],
+
+ "os-odl-nofeature-ha:lf-pod1:apex": [15.4265],
+
+ "os-odl-nofeature-noha:lf-pod1:apex": [15.428],
+
+ "os-ovn-nofeature-noha:lf-pod1:apex": [15.545],
+
+ "os-nosdn-nofeature-noha:lf-pod1:apex": [15.55],
+
+ "os-nosdn-nofeature-ha:lf-pod1:apex": [15.6395],
+
+ "os-odl-sfc-noha:lf-pod1:apex": [15.696],
+
+ "os-odl-sfc-ha:lf-pod1:apex": [15.774],
+
+ "os-nosdn-bar-ha:lf-pod1:apex": [16.6455],
+
+ "os-nosdn-bar-noha:lf-pod1:apex": [16.861],
+
+ "os-odl-nofeature-ha:arm-pod5:fuel": [18.071],
+
+ "os-nosdn-nofeature-ha:arm-pod5:fuel": [18.116],
+
+ "os-odl-nofeature-ha:lf-pod2:fuel": [18.8365],
+
+ "os-nosdn-nofeature-ha:lf-pod2:fuel": [18.927],
+
+ "os-nosdn-nofeature-noha:huawei-virtual8:compass": [29.557],
+
+ "os-odl-sfc-ha:huawei-virtual8:compass": [32.492],
+
+ "os-nosdn-kvm-noha:huawei-virtual8:compass": [37.623],
+
+ "os-odl-nofeature-ha:arm-pod6:fuel": [41.345],
+
+    "os-nosdn-nofeature-ha:arm-pod6:fuel": [42.3795]
+
+}
+
+
+The influence of the scenario
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. image:: images/tc010_scenario.png
+ :width: 800px
+ :alt: TC010 influence of scenario
+
+{
+
+ "os-nosdn-ovs-noha": [7.9],
+
+ "os-nosdn-ovs_dpdk-noha": [8.641],
+
+ "os-nosdn-ovs_dpdk-ha": [8.6815],
+
+ "os-nosdn-openbaton-ha": [8.882],
+
+ "os-odl_l2-moon-ha": [8.948],
+
+ "os-odl_l3-nofeature-ha": [8.992],
+
+ "os-nosdn-nofeature-ha": [9.118],
+
+ "os-nosdn-nofeature-noha": [9.174],
+
+ "os-odl_l2-moon-noha": [9.312],
+
+ "os-odl_l3-nofeature-noha": [9.5535],
+
+ "os-odl-nofeature-noha": [9.673],
+
+ "os-odl-sfc-noha": [9.8385],
+
+ "os-odl-sfc-ha": [9.98],
+
+ "os-nosdn-kvm-noha": [10.088],
+
+ "os-nosdn-kvm-ha": [11.1705],
+
+ "os-nosdn-bar-ha": [12.1395],
+
+ "os-nosdn-ovs-ha": [15.3195],
+
+ "os-ovn-nofeature-noha": [15.545],
+
+ "os-odl-nofeature-ha": [16.301],
+
+ "os-nosdn-bar-noha": [16.861]
+
+}
+
+
+The influence of the POD
+^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. image:: images/tc010_pod.png
+ :width: 800px
+ :alt: TC010 influence of the POD
+
+{
+
+ "ericsson-pod1": [5.7785],
+
+ "flex-pod2": [5.908],
+
+ "ericsson-virtual1": [6.412],
+
+ "intel-pod18": [6.5905],
+
+ "intel-pod5": [6.6975],
+
+ "ericsson-virtual4": [7.183],
+
+ "ericsson-virtual2": [8.4985],
+
+ "huawei-pod2": [8.877],
+
+ "huawei-pod12": [9.091],
+
+ "ericsson-virtual3": [9.719],
+
+ "huawei-virtual4": [10.1195],
+
+ "huawei-virtual3": [10.19],
+
+ "huawei-virtual1": [10.3045],
+
+ "huawei-virtual9": [10.318],
+
+ "huawei-virtual2": [11.274],
+
+ "lf-pod1": [15.7025],
+
+ "lf-pod2": [15.8495],
+
+ "arm-pod5": [18.092],
+
+ "huawei-virtual8": [33.999],
+
+ "arm-pod6": [41.5605]
+
+}
+
+
+Fraser release
+--------------
+
+Test results per scenario and pod (lower is better):
+
+{
+
+ "os-odl-nofeature-ha:ericsson-pod1:fuel": [6.8675],
+
+ "os-nosdn-nofeature-noha:intel-pod18:joid": [6.991],
+
+ "os-nosdn-openbaton-ha:intel-pod18:joid": [7.5535],
+
+ "os-nosdn-nofeature-ha:intel-pod18:joid": [7.571],
+
+    "os-nosdn-ovs-ha:ericsson-pod1:fuel": [7.635],
+
+ "os-nosdn-nofeature-ha:zte-pod2:daisy": [8.153],
+
+ "os-odl-nofeature-ha:zte-pod2:daisy": [8.1935],
+
+ "os-nosdn-bar-ha:huawei-pod2:compass": [9.1715],
+
+ "os-odl-sfc-ha:huawei-pod2:compass": [9.1875],
+
+ "os-odl_l3-nofeature-ha:huawei-pod2:compass": [9.241],
+
+ "os-nosdn-nofeature-ha:huawei-pod2:compass": [9.255],
+
+ "os-nosdn-kvm-ha:huawei-pod2:compass": [9.388],
+
+ "os-nosdn-nofeature-noha:huawei-virtual4:compass": [9.5825],
+
+ "os-nosdn-nofeature-noha:huawei-pod12:joid": [9.5875],
+
+ "os-nosdn-nofeature-ha:huawei-pod12:joid": [9.6345],
+
+ "os-odl-sfc-noha:huawei-virtual4:compass": [9.6535],
+
+ "os-nosdn-nofeature-noha:ericsson-virtual2:fuel": [9.743],
+
+ "os-odl-sfc-noha:huawei-virtual3:compass": [9.82],
+
+ "os-odl-nofeature-noha:ericsson-virtual2:fuel": [9.8715],
+
+ "os-odl_l3-nofeature-noha:huawei-virtual3:compass": [9.982],
+
+ "os-nosdn-bar-noha:huawei-virtual4:compass": [10.0195],
+
+ "os-odl_l3-nofeature-noha:huawei-virtual4:compass": [10.1285],
+
+ "os-nosdn-nofeature-noha:huawei-virtual3:compass": [10.1335],
+
+ "os-nosdn-nofeature-noha:huawei-virtual2:compass": [10.22],
+
+ "os-nosdn-bar-noha:huawei-virtual3:compass": [10.2845],
+
+ "os-nosdn-ovs-noha:ericsson-virtual4:fuel": [10.4185],
+
+ "os-nosdn-ovs-noha:ericsson-virtual2:fuel": [10.4555],
+
+ "os-nosdn-kvm-noha:huawei-virtual3:compass": [10.5635],
+
+ "os-nosdn-kvm-noha:huawei-virtual4:compass": [10.6515],
+
+ "os-nosdn-nofeature-noha:ericsson-virtual3:fuel": [10.9355],
+
+ "os-odl-nofeature-noha:ericsson-virtual4:fuel": [11.2015],
+
+ "os-odl_l3-nofeature-ha:huawei-virtual3:compass": [12.984],
+
+ "os-nosdn-bar-ha:huawei-virtual3:compass": [13.306],
+
+ "os-nosdn-nofeature-ha:huawei-virtual3:compass": [13.721],
+
+ "os-nosdn-bar-ha:huawei-virtual4:compass": [14.133],
+
+ "os-nosdn-nofeature-ha:ericsson-pod1:fuel": [14.158],
+
+ "os-odl_l3-nofeature-ha:huawei-virtual4:compass": [14.375],
+
+ "os-nosdn-nofeature-ha:huawei-virtual4:compass": [14.396],
+
+ "os-nosdn-kvm-ha:huawei-virtual4:compass": [14.9375],
+
+ "os-odl-sfc-ha:huawei-virtual3:compass": [14.957],
+
+ "os-nosdn-calipso-noha:lf-pod1:apex": [16.3445],
+
+ "os-nosdn-ovs-ha:lf-pod2:fuel": [16.478],
+
+ "os-nosdn-nofeature-ha:lf-pod2:fuel": [16.4895],
+
+ "os-odl-nofeature-noha:lf-pod1:apex": [16.55],
+
+ "os-nosdn-nofeature-noha:lf-pod1:apex": [16.5665],
+
+ "os-odl-sfc-noha:lf-pod1:apex": [16.598],
+
+ "os-ovn-nofeature-noha:lf-pod1:apex": [16.805],
+
+ "os-odl-nofeature-ha:lf-pod1:apex": [16.9095],
+
+ "os-nosdn-bar-ha:lf-pod1:apex": [17.494],
+
+ "os-nosdn-bar-noha:lf-pod1:apex": [17.4995],
+
+ "os-nosdn-nofeature-ha:lf-pod1:apex": [18.094],
+
+ "os-odl-nofeature-ha:arm-pod5:fuel": [18.744],
+
+ "os-nosdn-nofeature-ha:huawei-virtual1:compass": [19.8235],
+
+ "os-odl-nofeature-ha:lf-pod2:fuel": [20.758],
+
+ "os-nosdn-kvm-ha:huawei-virtual3:compass": [26.5245],
+
+ "os-nosdn-nofeature-ha:arm-pod6:fuel": [55.667],
+
+ "os-odl-nofeature-ha:arm-pod6:fuel": [56.175],
+
+ "os-nosdn-ovs-ha:arm-pod6:fuel": [57.86]
+
+}
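+
+The keys in these blocks combine scenario, pod and installer
+("scenario:pod:installer"), and each value is a single-element list holding the
+reported figure. A minimal sketch of how such a block can be inspected once it
+has been loaded as a Python dict; the three sample entries are copied from the
+Fraser block above (lower is better here):
+
+.. code-block:: python
+
+    results = {
+        "os-odl-nofeature-ha:ericsson-pod1:fuel": [6.8675],
+        "os-nosdn-nofeature-noha:intel-pod18:joid": [6.991],
+        "os-nosdn-nofeature-ha:arm-pod6:fuel": [55.667],
+    }
+
+    # Find the best (lowest) result and unpack its scenario/pod/installer key.
+    key, value = min(results.items(), key=lambda item: item[1][0])
+    scenario, pod, installer = key.split(":")
+    print("best: %s on %s (%s) -> %s" % (scenario, pod, installer, value[0]))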
+
+
+The influence of the scenario
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. image:: images/tc010_scenario_fraser.png
+ :width: 800px
+ :alt: TC010 influence of scenario
+
+{
+
+ "os-nosdn-openbaton-ha": [7.5535],
+
+ "os-odl-nofeature-ha": [8.2535],
+
+ "os-odl-sfc-ha": [9.251],
+
+ "os-nosdn-nofeature-ha": [9.464],
+
+ "os-odl-sfc-noha": [9.8265],
+
+ "os-odl_l3-nofeature-ha": [9.836],
+
+ "os-odl_l3-nofeature-noha": [10.0565],
+
+ "os-nosdn-nofeature-noha": [10.079],
+
+ "os-nosdn-kvm-ha": [10.418],
+
+ "os-nosdn-ovs-noha": [10.43],
+
+ "os-nosdn-kvm-noha": [10.603],
+
+ "os-nosdn-bar-noha": [11.067],
+
+ "os-nosdn-bar-ha": [13.911],
+
+ "os-odl-nofeature-noha": [14.046],
+
+ "os-nosdn-calipso-noha": [16.3445],
+
+ "os-nosdn-ovs-ha": [16.478],
+
+ "os-ovn-nofeature-noha": [16.805]
+
+}
+
+
+The influence of the POD
+^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. image:: images/tc010_pod_fraser.png
+ :width: 800px
+ :alt: TC010 influence of the POD
+
+{
+
+ "ericsson-pod1": [7.0645],
+
+ "intel-pod18": [7.4465],
+
+ "zte-pod2": [8.1865],
+
+ "huawei-pod2": [9.236],
+
+ "huawei-pod12": [9.615],
+
+ "ericsson-virtual2": [9.8925],
+
+ "huawei-virtual2": [10.22],
+
+ "ericsson-virtual4": [10.5465],
+
+ "ericsson-virtual3": [10.9355],
+
+ "huawei-virtual3": [10.95],
+
+ "huawei-virtual4": [11.557],
+
+ "lf-pod2": [16.5595],
+
+ "lf-pod1": [16.8395],
+
+ "arm-pod5": [18.744],
+
+ "huawei-virtual1": [19.8235],
+
+ "arm-pod6": [55.804]
+
+}
diff --git a/docs/release/results/tc011-packet-delay-variation.rst b/docs/release/results/tc011-packet-delay-variation.rst
new file mode 100644
index 000000000..f255b50ca
--- /dev/null
+++ b/docs/release/results/tc011-packet-delay-variation.rst
@@ -0,0 +1,432 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+
+
+=============================================
+Test results for TC011 packet delay variation
+=============================================
+
+.. toctree::
+ :maxdepth: 2
+
+
+Overview of test case
+=====================
+
+TC011 measures the packet delay variation (jitter) of packets sent from one VM to another.
+
+Metric: packet delay variation (jitter)
+Unit: ms
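+
+As an illustration of the metric itself, a minimal sketch that computes the
+delay variation as the mean absolute difference between consecutive one-way
+delays; the helper name and the sample values are hypothetical, and the actual
+delay samples come from the traffic tool driven by the test case:
+
+.. code-block:: python
+
+    def packet_delay_variation(delays_ms):
+        """Mean absolute difference between consecutive delay samples."""
+        if len(delays_ms) < 2:
+            return 0.0
+        diffs = [abs(b - a) for a, b in zip(delays_ms, delays_ms[1:])]
+        return sum(diffs) / len(diffs)
+
+    # e.g. four per-packet delays in ms -> roughly 0.63 ms of jitter
+    print(packet_delay_variation([10.2, 10.9, 10.4, 11.1]))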
+
+
+Euphrates release
+-----------------
+
+Test results per scenario and pod (lower is better):
+
+{
+
+ "os-nosdn-kvm-noha:huawei-virtual3:compass": [2996],
+
+ "os-nosdn-nofeature-noha:huawei-virtual2:compass": [2996],
+
+ "os-nosdn-ovs_dpdk-noha:huawei-virtual4:compass": [2996],
+
+ "os-odl_l3-nofeature-noha:huawei-virtual4:compass": [2996],
+
+ "os-nosdn-kvm-ha:huawei-virtual3:compass": [2997],
+
+ "os-nosdn-nofeature-ha:huawei-virtual2:compass": [2997],
+
+ "os-nosdn-ovs_dpdk-ha:huawei-virtual3:compass": [2997],
+
+ "os-nosdn-ovs_dpdk-ha:huawei-virtual4:compass": [2997],
+
+ "os-odl-sfc-ha:huawei-virtual4:compass": [2997],
+
+ "os-nosdn-nofeature-ha:flex-pod2:apex": [2997.5],
+
+ "os-nosdn-bar-ha:huawei-virtual3:compass": [2998],
+
+ "os-odl-sfc-ha:huawei-virtual8:compass": [2998],
+
+ "os-nosdn-nofeature-ha:intel-pod18:joid": [2999],
+
+ "os-odl_l2-moon-ha:huawei-virtual4:compass": [2999.5],
+
+ "os-nosdn-nofeature-ha:huawei-virtual9:compass": [3000],
+
+ "os-nosdn-nofeature-noha:huawei-virtual1:compass": [3001],
+
+ "os-nosdn-bar-ha:huawei-virtual4:compass": [3002],
+
+ "os-nosdn-nofeature-ha:huawei-virtual4:compass": [3002],
+
+ "os-nosdn-ovs_dpdk-noha:huawei-virtual3:compass": [3002],
+
+ "os-odl-sfc-ha:huawei-virtual3:compass": [3002],
+
+ "os-odl_l3-nofeature-ha:huawei-virtual4:compass": [3003],
+
+ "os-nosdn-openbaton-ha:intel-pod18:joid": [3003.5],
+
+ "os-nosdn-kvm-noha:huawei-virtual4:compass": [3004],
+
+ "os-nosdn-kvm-noha:huawei-virtual8:compass": [3004],
+
+ "os-nosdn-nofeature-ha:huawei-virtual3:compass": [3004.5],
+
+ "os-odl_l3-nofeature-ha:huawei-virtual3:compass": [3005],
+
+ "os-nosdn-nofeature-noha:ericsson-virtual3:fuel": [3006],
+
+ "os-nosdn-kvm-ha:huawei-virtual4:compass": [3006.5],
+
+ "os-nosdn-nofeature-noha:ericsson-virtual2:fuel": [3009],
+
+ "os-nosdn-nofeature-noha:huawei-virtual3:compass": [3010],
+
+ "os-odl_l3-nofeature-ha:huawei-virtual2:compass": [3010],
+
+ "os-odl_l3-nofeature-noha:huawei-virtual3:compass": [3012],
+
+ "os-nosdn-nofeature-ha:huawei-virtual1:compass": [3017],
+
+ "os-nosdn-nofeature-noha:ericsson-virtual4:fuel": [3017],
+
+ "os-odl-sfc-noha:huawei-virtual4:compass": [3017],
+
+ "os-nosdn-nofeature-noha:intel-pod18:joid": [3018],
+
+ "os-nosdn-nofeature-ha:intel-pod5:joid": [3020],
+
+ "os-nosdn-nofeature-ha:lf-pod2:fuel": [3021],
+
+ "os-nosdn-bar-ha:huawei-pod2:compass": [3022],
+
+ "os-nosdn-bar-ha:lf-pod1:apex": [3022],
+
+ "os-nosdn-bar-noha:lf-pod1:apex": [3022],
+
+ "os-nosdn-kvm-ha:huawei-pod2:compass": [3022],
+
+ "os-nosdn-nofeature-ha:arm-pod5:fuel": [3022],
+
+ "os-nosdn-nofeature-ha:arm-pod6:fuel": [3022],
+
+ "os-nosdn-nofeature-ha:ericsson-pod1:fuel": [3022],
+
+ "os-nosdn-nofeature-ha:huawei-pod12:joid": [3022],
+
+ "os-nosdn-nofeature-ha:huawei-pod2:compass": [3022],
+
+ "os-nosdn-nofeature-ha:lf-pod1:apex": [3022],
+
+ "os-nosdn-nofeature-noha:huawei-pod12:joid": [3022],
+
+ "os-nosdn-nofeature-noha:intel-pod5:joid": [3022],
+
+ "os-nosdn-nofeature-noha:lf-pod1:apex": [3022],
+
+ "os-nosdn-openbaton-ha:huawei-pod12:joid": [3022],
+
+ "os-nosdn-ovs_dpdk-ha:huawei-pod2:compass": [3022],
+
+ "os-odl-nofeature-ha:arm-pod5:fuel": [3022],
+
+ "os-odl-sfc-ha:huawei-pod2:compass": [3022],
+
+ "os-odl-sfc-ha:lf-pod1:apex": [3022],
+
+ "os-odl-sfc-noha:huawei-virtual3:compass": [3022],
+
+ "os-odl-sfc-noha:lf-pod1:apex": [3022],
+
+ "os-odl_l2-moon-ha:huawei-pod2:compass": [3022],
+
+ "os-odl_l2-moon-ha:huawei-virtual3:compass": [3022],
+
+ "os-odl_l2-moon-noha:huawei-virtual3:compass": [3022],
+
+ "os-odl_l3-nofeature-ha:huawei-pod2:compass": [3022],
+
+ "os-ovn-nofeature-noha:lf-pod1:apex": [3022],
+
+ "os-nosdn-nofeature-noha:huawei-virtual4:compass": [3023],
+
+ "os-odl_l2-moon-noha:huawei-virtual4:compass": [3023],
+
+ "os-nosdn-nofeature-noha:huawei-virtual8:compass": [3024]
+
+}
+
+
+The influence of the scenario
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. image:: images/tc011_scenario.png
+ :width: 800px
+ :alt: TC011 influence of scenario
+
+{
+
+ "os-nosdn-ovs_dpdk-noha": [2996],
+
+ "os-odl_l3-nofeature-noha": [2997],
+
+ "os-nosdn-kvm-noha": [2999],
+
+ "os-nosdn-ovs_dpdk-ha": [3002],
+
+ "os-nosdn-kvm-ha": [3014.5],
+
+ "os-odl-sfc-noha": [3018],
+
+ "os-nosdn-nofeature-noha": [3020],
+
+ "os-nosdn-openbaton-ha": [3020],
+
+ "os-nosdn-bar-ha": [3022],
+
+ "os-nosdn-bar-noha": [3022],
+
+ "os-nosdn-nofeature-ha": [3022],
+
+ "os-odl-nofeature-ha": [3022],
+
+ "os-odl-sfc-ha": [3022],
+
+ "os-odl_l2-moon-ha": [3022],
+
+ "os-odl_l2-moon-noha": [3022],
+
+ "os-odl_l3-nofeature-ha": [3022],
+
+ "os-ovn-nofeature-noha": [3022]
+
+}
+
+
+The influence of the POD
+^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. image:: images/tc011_pod.png
+ :width: 800px
+ :alt: TC011 influence of the POD
+
+{
+
+ "huawei-virtual2": [2997],
+
+ "flex-pod2": [2997.5],
+
+ "huawei-virtual3": [2998],
+
+ "huawei-virtual9": [3000],
+
+ "huawei-virtual8": [3001],
+
+ "huawei-virtual4": [3002],
+
+ "ericsson-virtual3": [3006],
+
+ "huawei-virtual1": [3007],
+
+ "ericsson-virtual2": [3009],
+
+ "intel-pod18": [3010],
+
+ "ericsson-virtual4": [3017],
+
+ "lf-pod2": [3021],
+
+ "arm-pod5": [3022],
+
+ "arm-pod6": [3022],
+
+ "ericsson-pod1": [3022],
+
+ "huawei-pod12": [3022],
+
+ "huawei-pod2": [3022],
+
+ "intel-pod5": [3022],
+
+ "lf-pod1": [3022]
+
+}
+
+
+Fraser release
+--------------
+
+Test results per scenario and pod (lower is better):
+
+{
+
+ "os-nosdn-nofeature-ha:arm-pod6:fuel": [1],
+
+ "os-nosdn-nofeature-ha:ericsson-pod1:fuel": [1],
+
+ "os-nosdn-nofeature-ha:lf-pod2:fuel": [1],
+
+ "os-nosdn-nofeature-noha:ericsson-virtual2:fuel": [1],
+
+ "os-nosdn-nofeature-noha:ericsson-virtual3:fuel": [1],
+
+ "os-ovn-nofeature-noha:lf-pod1:apex": [1511.5],
+
+ "os-nosdn-kvm-noha:huawei-virtual3:compass": [2996],
+
+ "os-nosdn-bar-ha:huawei-virtual4:compass": [2997],
+
+ "os-nosdn-bar-noha:huawei-virtual4:compass": [2997],
+
+ "os-nosdn-kvm-ha:huawei-virtual4:compass": [2997],
+
+ "os-nosdn-nofeature-ha:huawei-virtual1:compass": [2997],
+
+ "os-nosdn-nofeature-ha:huawei-virtual3:compass": [2997],
+
+ "os-nosdn-nofeature-ha:huawei-virtual4:compass": [2997],
+
+ "os-odl-sfc-ha:huawei-virtual3:compass": [2997],
+
+ "os-odl_l3-nofeature-ha:huawei-virtual3:compass": [2997],
+
+ "os-odl_l3-nofeature-noha:huawei-virtual3:compass": [3000],
+
+ "os-odl_l3-nofeature-ha:huawei-virtual4:compass": [3003],
+
+ "os-nosdn-bar-noha:huawei-virtual3:compass": [3011],
+
+ "os-nosdn-bar-ha:huawei-virtual3:compass": [3015.5],
+
+ "os-nosdn-kvm-noha:huawei-virtual4:compass": [3019],
+
+ "os-nosdn-nofeature-noha:huawei-virtual4:compass": [3021],
+
+ "os-odl-sfc-ha:huawei-virtual4:compass": [3021],
+
+ "os-nosdn-bar-ha:huawei-pod2:compass": [3022],
+
+ "os-nosdn-bar-ha:lf-pod1:apex": [3022],
+
+ "os-nosdn-bar-noha:lf-pod1:apex": [3022],
+
+ "os-nosdn-calipso-noha:lf-pod1:apex": [3022],
+
+ "os-nosdn-kvm-ha:huawei-pod2:compass": [3022],
+
+ "os-nosdn-nofeature-ha:huawei-pod12:joid": [3022],
+
+ "os-nosdn-nofeature-ha:huawei-pod2:compass": [3022],
+
+ "os-nosdn-nofeature-ha:intel-pod18:joid": [3022],
+
+ "os-nosdn-nofeature-ha:lf-pod1:apex": [3022],
+
+ "os-nosdn-nofeature-ha:zte-pod2:daisy": [3022],
+
+ "os-nosdn-nofeature-noha:huawei-pod12:joid": [3022],
+
+ "os-nosdn-nofeature-noha:intel-pod18:joid": [3022],
+
+ "os-nosdn-nofeature-noha:lf-pod1:apex": [3022],
+
+ "os-nosdn-openbaton-ha:intel-pod18:joid": [3022],
+
+ "os-odl-sfc-ha:huawei-pod2:compass": [3022],
+
+ "os-odl_l3-nofeature-ha:huawei-pod2:compass": [3022],
+
+ "os-odl_l3-nofeature-noha:huawei-virtual4:compass": [3022],
+
+ "os-odl-sfc-noha:huawei-virtual4:compass": [3022.5],
+
+ "os-nosdn-nofeature-noha:huawei-virtual3:compass": [3023],
+
+ "os-odl-sfc-noha:huawei-virtual3:compass": [3023],
+
+ "os-nosdn-nofeature-noha:huawei-virtual2:compass": [3025]
+
+}
+
+
+The influence of the scenario
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. image:: images/tc011_scenario_fraser.png
+ :width: 800px
+ :alt: TC011 influence of scenario
+
+{
+
+ "os-ovn-nofeature-noha": [1511.5],
+
+ "os-nosdn-kvm-noha": [2997],
+
+ "os-odl-sfc-ha": [3021],
+
+ "os-nosdn-bar-ha": [3022],
+
+ "os-nosdn-bar-noha": [3022],
+
+ "os-nosdn-calipso-noha": [3022],
+
+ "os-nosdn-kvm-ha": [3022],
+
+ "os-nosdn-nofeature-ha": [3022],
+
+ "os-nosdn-nofeature-noha": [3022],
+
+ "os-nosdn-openbaton-ha": [3022],
+
+ "os-odl_l3-nofeature-ha": [3022],
+
+ "os-odl_l3-nofeature-noha": [3022],
+
+ "os-odl-sfc-noha": [3023]
+
+}
+
+
+The influence of the POD
+^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. image:: images/tc011_pod_fraser.png
+ :width: 800px
+ :alt: TC011 influence of the POD
+
+{
+
+ "arm-pod6": [1],
+
+ "ericsson-pod1": [1],
+
+ "ericsson-virtual2": [1],
+
+ "ericsson-virtual3": [1],
+
+ "lf-pod2": [1],
+
+ "huawei-virtual1": [2997],
+
+ "huawei-virtual3": [2999],
+
+ "huawei-virtual4": [3002],
+
+ "huawei-pod12": [3022],
+
+ "huawei-pod2": [3022],
+
+ "intel-pod18": [3022],
+
+ "lf-pod1": [3022],
+
+ "zte-pod2": [3022],
+
+ "huawei-virtual2": [3025]
+
+}
diff --git a/docs/release/results/tc012-memory-read-write-bandwidth.rst b/docs/release/results/tc012-memory-read-write-bandwidth.rst
new file mode 100644
index 000000000..71d69cde9
--- /dev/null
+++ b/docs/release/results/tc012-memory-read-write-bandwidth.rst
@@ -0,0 +1,513 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+
+
+==================================================
+Test results for TC012 memory read/write bandwidth
+==================================================
+
+.. toctree::
+ :maxdepth: 2
+
+
+Overview of test case
+=====================
+
+TC012 measures the rate at which data can be read from and written to memory (this includes all levels of memory).
+In this test case, the bandwidth to read data from memory and then write data to the same memory location is measured.
+
+Metric: memory bandwidth
+Unit: MBps
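+
+The bandwidth figure is simply bytes moved divided by elapsed time. A rough
+sketch of that arithmetic follows; the real measurement comes from a native
+memory benchmark rather than Python, so the numbers it prints are only
+illustrative:
+
+.. code-block:: python
+
+    import time
+
+    import numpy as np
+
+    src = np.ones(64 * 1024 * 1024 // 8)    # ~64 MB of float64
+    dst = np.empty_like(src)
+
+    start = time.perf_counter()
+    np.copyto(dst, src)                      # read src, write dst
+    elapsed = time.perf_counter() - start
+
+    bytes_moved = src.nbytes + dst.nbytes    # count both the read and the write
+    print("bandwidth: %.1f MBps" % (bytes_moved / elapsed / 1e6))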
+
+
+Euphrates release
+-----------------
+
+Test results per scenario and pod (higher is better):
+
+{
+
+ "os-nosdn-nofeature-ha:lf-pod1:apex": [23126.325],
+
+ "os-odl-nofeature-noha:lf-pod1:apex": [23123.975],
+
+ "os-odl-nofeature-ha:lf-pod1:apex": [23068.965],
+
+ "os-odl-nofeature-ha:lf-pod2:fuel": [22972.46],
+
+ "os-nosdn-nofeature-ha:lf-pod2:fuel": [22912.015],
+
+ "os-nosdn-nofeature-noha:lf-pod1:apex": [22911.35],
+
+ "os-ovn-nofeature-noha:lf-pod1:apex": [22900.93],
+
+ "os-nosdn-bar-ha:lf-pod1:apex": [22767.56],
+
+ "os-nosdn-bar-noha:lf-pod1:apex": [22721.83],
+
+ "os-odl-sfc-noha:lf-pod1:apex": [22511.565],
+
+ "os-nosdn-ovs-ha:lf-pod2:fuel": [22071.235],
+
+ "os-odl-sfc-ha:lf-pod1:apex": [21646.415],
+
+ "os-nosdn-nofeature-ha:flex-pod2:apex": [20229.99],
+
+ "os-nosdn-ovs-noha:ericsson-virtual4:fuel": [17491.18],
+
+ "os-nosdn-ovs-noha:ericsson-virtual1:fuel": [17474.965],
+
+ "os-nosdn-ovs-ha:ericsson-pod1:fuel": [17141.375],
+
+ "os-nosdn-nofeature-ha:ericsson-pod1:fuel": [17134.99],
+
+ "os-odl-nofeature-ha:ericsson-pod1:fuel": [17124.27],
+
+ "os-nosdn-ovs-noha:ericsson-virtual2:fuel": [16599.325],
+
+ "os-nosdn-nofeature-noha:ericsson-virtual4:fuel": [16309.13],
+
+ "os-odl-nofeature-noha:ericsson-virtual4:fuel": [16137.48],
+
+ "os-nosdn-nofeature-noha:ericsson-virtual2:fuel": [15960.76],
+
+ "os-nosdn-ovs-noha:ericsson-virtual3:fuel": [15685.505],
+
+ "os-nosdn-nofeature-noha:ericsson-virtual3:fuel": [15536.65],
+
+ "os-odl-nofeature-noha:ericsson-virtual3:fuel": [15431.795],
+
+ "os-odl-nofeature-noha:ericsson-virtual2:fuel": [15129.27],
+
+ "os-nosdn-ovs_dpdk-ha:huawei-pod2:compass": [15125.51],
+
+ "os-odl_l3-nofeature-ha:huawei-pod2:compass": [15030.65],
+
+ "os-nosdn-nofeature-ha:huawei-pod2:compass": [15019.89],
+
+ "os-odl-sfc-ha:huawei-pod2:compass": [15005.11],
+
+ "os-nosdn-bar-ha:huawei-pod2:compass": [14975.645],
+
+ "os-nosdn-kvm-ha:huawei-pod2:compass": [14968.97],
+
+ "os-odl_l2-moon-ha:huawei-pod2:compass": [14968.97],
+
+ "os-nosdn-ovs_dpdk-noha:huawei-virtual4:compass": [14741.425],
+
+ "os-nosdn-ovs_dpdk-noha:huawei-virtual3:compass": [14714.28],
+
+ "os-odl_l2-moon-noha:huawei-virtual4:compass": [14674.38],
+
+ "os-odl_l2-moon-noha:huawei-virtual3:compass": [14664.12],
+
+ "os-odl-sfc-noha:huawei-virtual4:compass": [14587.62],
+
+ "os-nosdn-nofeature-noha:huawei-virtual3:compass": [14539.94],
+
+ "os-nosdn-nofeature-noha:huawei-virtual4:compass": [14534.54],
+
+ "os-odl_l3-nofeature-noha:huawei-virtual3:compass": [14511.925],
+
+ "os-nosdn-nofeature-noha:huawei-virtual1:compass": [14496.875],
+
+ "os-odl_l2-moon-ha:huawei-virtual3:compass": [14378.87],
+
+ "os-odl_l3-nofeature-noha:huawei-virtual4:compass": [14366.69],
+
+ "os-nosdn-nofeature-ha:huawei-virtual4:compass": [14356.695],
+
+ "os-odl_l3-nofeature-ha:huawei-virtual3:compass": [14341.605],
+
+ "os-nosdn-ovs_dpdk-ha:huawei-virtual3:compass": [14327.78],
+
+ "os-nosdn-ovs_dpdk-ha:huawei-virtual4:compass": [14313.81],
+
+ "os-nosdn-nofeature-ha:intel-pod18:joid": [14284.365],
+
+ "os-nosdn-nofeature-noha:huawei-pod12:joid": [14157.99],
+
+ "os-nosdn-nofeature-ha:huawei-pod12:joid": [14144.86],
+
+ "os-nosdn-openbaton-ha:huawei-pod12:joid": [14138.9],
+
+ "os-nosdn-kvm-noha:huawei-virtual3:compass": [14117.7],
+
+ "os-nosdn-nofeature-ha:huawei-virtual3:compass": [14097.255],
+
+ "os-nosdn-nofeature-noha:huawei-virtual2:compass": [14085.675],
+
+ "os-odl-sfc-noha:huawei-virtual3:compass": [14071.605],
+
+ "os-nosdn-openbaton-ha:intel-pod18:joid": [14059.51],
+
+ "os-odl-sfc-ha:huawei-virtual4:compass": [14057.155],
+
+ "os-odl-sfc-ha:huawei-virtual3:compass": [14051.945],
+
+ "os-nosdn-bar-ha:huawei-virtual3:compass": [14020.74],
+
+ "os-nosdn-kvm-noha:huawei-virtual4:compass": [14017.915],
+
+ "os-nosdn-nofeature-noha:intel-pod18:joid": [13954.27],
+
+ "os-odl_l3-nofeature-ha:huawei-virtual4:compass": [13915.87],
+
+ "os-odl_l3-nofeature-ha:huawei-virtual2:compass": [13874.59],
+
+ "os-nosdn-nofeature-noha:intel-pod5:joid": [13812.215],
+
+ "os-odl_l2-moon-ha:huawei-virtual4:compass": [13777.59],
+
+ "os-nosdn-bar-ha:huawei-virtual4:compass": [13765.36],
+
+ "os-nosdn-nofeature-ha:huawei-virtual1:compass": [13559.905],
+
+ "os-nosdn-nofeature-ha:huawei-virtual2:compass": [13477.52],
+
+ "os-nosdn-kvm-ha:huawei-virtual3:compass": [13255.17],
+
+ "os-nosdn-nofeature-ha:intel-pod5:joid": [13189.64],
+
+ "os-nosdn-kvm-ha:huawei-virtual4:compass": [12718.545],
+
+ "os-nosdn-nofeature-ha:huawei-virtual9:compass": [12559.445],
+
+ "os-nosdn-nofeature-noha:huawei-virtual8:compass": [12409.66],
+
+ "os-nosdn-kvm-noha:huawei-virtual8:compass": [8832.515],
+
+ "os-odl-sfc-ha:huawei-virtual8:compass": [8823.955],
+
+ "os-odl-nofeature-ha:arm-pod5:fuel": [4398.08],
+
+ "os-nosdn-nofeature-ha:arm-pod5:fuel": [4375.75],
+
+ "os-nosdn-nofeature-ha:arm-pod6:fuel": [4260.77],
+
+ "os-odl-nofeature-ha:arm-pod6:fuel": [4259.62]
+
+}
+
+
+The influence of the scenario
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. image:: images/tc012_scenario.png
+ :width: 800px
+ :alt: TC012 influence of scenario
+
+{
+
+ "os-ovn-nofeature-noha": [22900.93],
+
+ "os-nosdn-bar-noha": [22721.83],
+
+ "os-nosdn-ovs-ha": [22063.67],
+
+ "os-odl-nofeature-ha": [17146.05],
+
+ "os-odl-nofeature-noha": [16017.41],
+
+ "os-nosdn-ovs-noha": [16005.74],
+
+ "os-nosdn-nofeature-noha": [15290.94],
+
+ "os-nosdn-nofeature-ha": [15038.74],
+
+ "os-nosdn-bar-ha": [14972.975],
+
+ "os-odl_l2-moon-ha": [14956.955],
+
+ "os-odl_l3-nofeature-ha": [14839.21],
+
+ "os-odl-sfc-ha": [14823.48],
+
+ "os-nosdn-ovs_dpdk-ha": [14822.17],
+
+ "os-nosdn-ovs_dpdk-noha": [14725.9],
+
+ "os-odl_l2-moon-noha": [14665.4],
+
+ "os-odl_l3-nofeature-noha": [14483.09],
+
+ "os-odl-sfc-noha": [14373.21],
+
+ "os-nosdn-openbaton-ha": [14135.325],
+
+ "os-nosdn-kvm-noha": [14020.26],
+
+ "os-nosdn-kvm-ha": [13996.02]
+
+}
+
+
+The influence of the POD
+^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. image:: images/tc012_pod.png
+ :width: 800px
+ :alt: TC012 influence of the POD
+
+{
+
+ "lf-pod1": [22912.39],
+
+ "lf-pod2": [22637.67],
+
+ "flex-pod2": [20229.99],
+
+ "ericsson-virtual1": [17474.965],
+
+ "ericsson-pod1": [17127.38],
+
+ "ericsson-virtual4": [16219.97],
+
+ "ericsson-virtual2": [15652.28],
+
+ "ericsson-virtual3": [15551.26],
+
+ "huawei-pod2": [15017.2],
+
+ "huawei-virtual4": [14266.34],
+
+ "huawei-virtual1": [14233.035],
+
+ "huawei-virtual3": [14227.63],
+
+ "huawei-pod12": [14147.245],
+
+ "intel-pod18": [14058.33],
+
+ "huawei-virtual2": [13862.85],
+
+ "intel-pod5": [13280.32],
+
+ "huawei-virtual9": [12559.445],
+
+ "huawei-virtual8": [8998.02],
+
+ "arm-pod5": [4388.875],
+
+ "arm-pod6": [4260.2]
+
+}
+
+
+Fraser release
+--------------
+
+Test results per scenario and pod (higher is better):
+
+{
+
+ "os-nosdn-nofeature-ha:lf-pod2:fuel": [21421.795],
+
+ "os-odl-sfc-noha:lf-pod1:apex": [21075],
+
+ "os-odl-sfc-ha:lf-pod1:apex": [21017.44],
+
+ "os-nosdn-bar-noha:lf-pod1:apex": [20991.46],
+
+ "os-nosdn-bar-ha:lf-pod1:apex": [20812.405],
+
+ "os-ovn-nofeature-noha:lf-pod1:apex": [20694.035],
+
+ "os-nosdn-nofeature-noha:lf-pod1:apex": [20672.765],
+
+ "os-odl-nofeature-ha:lf-pod2:fuel": [20269.65],
+
+ "os-nosdn-calipso-noha:lf-pod1:apex": [20186.32],
+
+ "os-odl-nofeature-noha:lf-pod1:apex": [19959.915],
+
+ "os-nosdn-ovs-ha:lf-pod2:fuel": [19719.38],
+
+ "os-odl-nofeature-ha:lf-pod1:apex": [19654.505],
+
+ "os-nosdn-nofeature-ha:lf-pod1:apex": [19391.145],
+
+ "os-nosdn-nofeature-noha:intel-pod18:joid": [19378.64],
+
+ "os-odl-nofeature-ha:ericsson-pod1:fuel": [19103.43],
+
+ "os-nosdn-nofeature-ha:intel-pod18:joid": [18688.695],
+
+ "os-nosdn-openbaton-ha:intel-pod18:joid": [18557.95],
+
+ "os-nosdn-nofeature-ha:ericsson-pod1:fuel": [17088.61],
+
+ "os-nosdn-ovs-ha:ericsson-pod1:fuel": [17040.78],
+
+ "os-nosdn-ovs-noha:ericsson-virtual2:fuel": [16057.235],
+
+ "os-odl-nofeature-noha:ericsson-virtual4:fuel": [15622.355],
+
+ "os-nosdn-nofeature-noha:ericsson-virtual2:fuel": [15422.235],
+
+ "os-odl-sfc-ha:huawei-pod2:compass": [15403.09],
+
+ "os-odl-nofeature-noha:ericsson-virtual2:fuel": [15141.58],
+
+ "os-nosdn-bar-ha:huawei-pod2:compass": [14922.37],
+
+ "os-odl_l3-nofeature-ha:huawei-pod2:compass": [14864.195],
+
+ "os-nosdn-nofeature-ha:huawei-pod2:compass": [14856.295],
+
+ "os-nosdn-kvm-ha:huawei-pod2:compass": [14796.035],
+
+ "os-odl-sfc-noha:huawei-virtual4:compass": [14484.375],
+
+ "os-nosdn-nofeature-ha:huawei-pod12:joid": [14441.955],
+
+ "os-odl-sfc-noha:huawei-virtual3:compass": [14373],
+
+ "os-nosdn-nofeature-noha:huawei-virtual4:compass": [14330.44],
+
+ "os-nosdn-ovs-noha:ericsson-virtual4:fuel": [14320.305],
+
+ "os-odl_l3-nofeature-noha:huawei-virtual3:compass": [14253.715],
+
+ "os-nosdn-nofeature-ha:huawei-virtual4:compass": [14203.655],
+
+ "os-nosdn-nofeature-noha:huawei-virtual3:compass": [14179.93],
+
+ "os-odl-nofeature-ha:zte-pod2:daisy": [14177.135],
+
+ "os-nosdn-nofeature-ha:zte-pod2:daisy": [14150.825],
+
+ "os-nosdn-nofeature-noha:huawei-pod12:joid": [14100.87],
+
+ "os-nosdn-bar-noha:huawei-virtual4:compass": [14033.36],
+
+ "os-odl_l3-nofeature-noha:huawei-virtual4:compass": [13963.73],
+
+ "os-nosdn-kvm-noha:huawei-virtual3:compass": [13874.775],
+
+ "os-nosdn-kvm-noha:huawei-virtual4:compass": [13805.65],
+
+ "os-odl_l3-nofeature-ha:huawei-virtual3:compass": [13754.63],
+
+ "os-nosdn-nofeature-noha:huawei-virtual2:compass": [13702.92],
+
+ "os-nosdn-bar-ha:huawei-virtual3:compass": [13638.115],
+
+ "os-odl-sfc-ha:huawei-virtual3:compass": [13637.83],
+
+ "os-odl_l3-nofeature-ha:huawei-virtual4:compass": [13635.66],
+
+ "os-nosdn-bar-noha:huawei-virtual3:compass": [13635.58],
+
+ "os-nosdn-bar-ha:huawei-virtual4:compass": [13544.95],
+
+ "os-nosdn-nofeature-ha:huawei-virtual3:compass": [13514.27],
+
+ "os-nosdn-nofeature-ha:huawei-virtual1:compass": [13496.45],
+
+ "os-odl-sfc-ha:huawei-virtual4:compass": [13475.38],
+
+ "os-nosdn-nofeature-noha:ericsson-virtual3:fuel": [12733.19],
+
+ "os-nosdn-kvm-ha:huawei-virtual4:compass": [12682.805],
+
+ "os-odl-nofeature-ha:arm-pod5:fuel": [4326.11],
+
+ "os-nosdn-nofeature-ha:arm-pod6:fuel": [3824.13],
+
+ "os-odl-nofeature-ha:arm-pod6:fuel": [3797.795],
+
+ "os-nosdn-ovs-ha:arm-pod6:fuel": [3749.91]
+
+}
+
+
+The influence of the scenario
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. image:: images/tc012_scenario_fraser.png
+ :width: 800px
+ :alt: TC012 influence of scenario
+
+{
+
+ "os-ovn-nofeature-noha": [20694.035],
+
+ "os-nosdn-calipso-noha": [20186.32],
+
+ "os-nosdn-openbaton-ha": [18557.95],
+
+ "os-nosdn-ovs-ha": [17048.17],
+
+ "os-odl-nofeature-noha": [16191.125],
+
+ "os-nosdn-ovs-noha": [15790.32],
+
+ "os-nosdn-bar-ha": [14833.97],
+
+ "os-odl-sfc-ha": [14828.72],
+
+ "os-odl_l3-nofeature-ha": [14801.25],
+
+ "os-nosdn-kvm-ha": [14700.1],
+
+ "os-nosdn-nofeature-ha": [14610.48],
+
+ "os-nosdn-nofeature-noha": [14555.975],
+
+ "os-odl-sfc-noha": [14508.14],
+
+ "os-nosdn-bar-noha": [14395.22],
+
+ "os-odl-nofeature-ha": [14231.245],
+
+ "os-odl_l3-nofeature-noha": [14161.58],
+
+ "os-nosdn-kvm-noha": [13845.685]
+
+}
+
+
+The influence of the POD
+^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. image:: images/tc012_pod_fraser.png
+ :width: 800px
+ :alt: TC012 influence of the POD
+
+{
+
+ "lf-pod1": [20552.9],
+
+ "lf-pod2": [20058.925],
+
+ "ericsson-pod1": [18930.78],
+
+ "intel-pod18": [18757.545],
+
+ "ericsson-virtual4": [15389.465],
+
+ "ericsson-virtual2": [15343.79],
+
+ "huawei-pod2": [14870.78],
+
+ "zte-pod2": [14157.99],
+
+ "huawei-pod12": [14126.99],
+
+ "huawei-virtual3": [13929.67],
+
+ "huawei-virtual4": [13847.155],
+
+ "huawei-virtual2": [13702.92],
+
+ "huawei-virtual1": [13496.45],
+
+ "ericsson-virtual3": [12733.19],
+
+ "arm-pod5": [4326.11],
+
+ "arm-pod6": [3809.885]
+
+}
diff --git a/docs/release/results/tc014-cpu-processing-speed.rst b/docs/release/results/tc014-cpu-processing-speed.rst
new file mode 100644
index 000000000..a2eeb6302
--- /dev/null
+++ b/docs/release/results/tc014-cpu-processing-speed.rst
@@ -0,0 +1,512 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+
+
+===========================================
+Test results for TC014 cpu processing speed
+===========================================
+
+.. toctree::
+ :maxdepth: 2
+
+
+Overview of test case
+=====================
+
+TC014 measures the score of a single CPU running UnixBench.
+
+Metric: score of single CPU running
+Unit: N/A
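+
+The figure reported here is the index score from a UnixBench run for a single
+CPU. A minimal sketch of extracting that score from a captured report, assuming
+the usual "System Benchmarks Index Score" line appears in the output (the
+report text below is a shortened, illustrative snippet):
+
+.. code-block:: python
+
+    import re
+
+    report = """
+    ...
+    System Benchmarks Index Score                                2531.2
+    """
+
+    match = re.search(r"System Benchmarks Index Score\s+([\d.]+)", report)
+    if match:
+        print("single CPU score:", float(match.group(1)))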
+
+
+Euphrates release
+-----------------
+
+Test results per scenario and pod (higher is better):
+
+{
+
+ "os-odl-sfc-noha:lf-pod1:apex": [3735.2],
+
+ "os-nosdn-ovs-ha:lf-pod2:fuel": [3725.5],
+
+ "os-odl-nofeature-ha:lf-pod2:fuel": [3711],
+
+ "os-odl-nofeature-ha:lf-pod1:apex": [3708.4],
+
+ "os-nosdn-nofeature-noha:lf-pod1:apex": [3705.7],
+
+ "os-nosdn-nofeature-ha:lf-pod2:fuel": [3704],
+
+ "os-nosdn-nofeature-ha:lf-pod1:apex": [3703.2],
+
+ "os-odl-nofeature-noha:lf-pod1:apex": [3702.8],
+
+ "os-odl-sfc-ha:lf-pod1:apex": [3698.7],
+
+ "os-ovn-nofeature-noha:lf-pod1:apex": [3654.8],
+
+ "os-nosdn-bar-ha:lf-pod1:apex": [3635.55],
+
+ "os-nosdn-bar-noha:lf-pod1:apex": [3633.2],
+
+ "os-nosdn-nofeature-noha:intel-pod18:joid": [3450.3],
+
+ "os-nosdn-nofeature-noha:intel-pod5:joid": [3406.4],
+
+ "os-nosdn-nofeature-ha:intel-pod5:joid": [3360.4],
+
+ "os-nosdn-openbaton-ha:intel-pod18:joid": [3340.65],
+
+ "os-nosdn-nofeature-ha:flex-pod2:apex": [3208.6],
+
+ "os-nosdn-nofeature-ha:ericsson-pod1:fuel": [3134.8],
+
+ "os-nosdn-nofeature-ha:intel-pod18:joid": [3056.2],
+
+ "os-nosdn-ovs-noha:ericsson-virtual1:fuel": [2988.9],
+
+ "os-nosdn-ovs-ha:ericsson-pod1:fuel": [2773.7],
+
+ "os-nosdn-ovs-noha:ericsson-virtual4:fuel": [2645.85],
+
+ "os-nosdn-ovs-noha:ericsson-virtual2:fuel": [2625.3],
+
+ "os-nosdn-nofeature-noha:ericsson-virtual4:fuel": [2601.3],
+
+ "os-odl-nofeature-noha:ericsson-virtual4:fuel": [2590.4],
+
+ "os-nosdn-nofeature-noha:ericsson-virtual2:fuel": [2570.2],
+
+ "os-nosdn-ovs-noha:ericsson-virtual3:fuel": [2558.8],
+
+ "os-odl-nofeature-ha:ericsson-pod1:fuel": [2556.5],
+
+ "os-nosdn-nofeature-noha:ericsson-virtual3:fuel": [2554.6],
+
+ "os-odl-nofeature-noha:ericsson-virtual3:fuel": [2536.75],
+
+ "os-nosdn-ovs_dpdk-ha:huawei-pod2:compass": [2533.55],
+
+ "os-nosdn-nofeature-ha:huawei-pod2:compass": [2531.85],
+
+ "os-odl-sfc-ha:huawei-pod2:compass": [2531.7],
+
+ "os-odl_l3-nofeature-ha:huawei-pod2:compass": [2531.2],
+
+ "os-odl_l2-moon-ha:huawei-pod2:compass": [2531],
+
+ "os-nosdn-bar-ha:huawei-pod2:compass": [2529.6],
+
+ "os-nosdn-kvm-ha:huawei-pod2:compass": [2520.5],
+
+ "os-odl-nofeature-noha:ericsson-virtual2:fuel": [2481.15],
+
+ "os-nosdn-ovs_dpdk-noha:huawei-virtual4:compass": [2474],
+
+ "os-nosdn-ovs_dpdk-noha:huawei-virtual3:compass": [2472.6],
+
+ "os-odl_l2-moon-noha:huawei-virtual4:compass": [2471],
+
+ "os-odl_l2-moon-noha:huawei-virtual3:compass": [2470.6],
+
+ "os-nosdn-nofeature-noha:huawei-virtual3:compass": [2464.15],
+
+ "os-odl-sfc-noha:huawei-virtual4:compass": [2455.9],
+
+ "os-nosdn-nofeature-noha:huawei-virtual4:compass": [2455.3],
+
+ "os-odl_l3-nofeature-noha:huawei-virtual3:compass": [2446.85],
+
+ "os-odl_l2-moon-ha:huawei-virtual3:compass": [2444.75],
+
+ "os-odl_l3-nofeature-noha:huawei-virtual4:compass": [2430.9],
+
+ "os-nosdn-nofeature-ha:huawei-virtual4:compass": [2421.3],
+
+ "os-nosdn-ovs_dpdk-ha:huawei-virtual4:compass": [2415.7],
+
+ "os-nosdn-kvm-noha:huawei-virtual3:compass": [2399.4],
+
+ "os-odl-sfc-ha:huawei-virtual3:compass": [2391.85],
+
+ "os-nosdn-kvm-noha:huawei-virtual4:compass": [2391.45],
+
+ "os-nosdn-nofeature-noha:huawei-virtual1:compass": [2380.7],
+
+ "os-odl-sfc-ha:huawei-virtual4:compass": [2379.6],
+
+ "os-nosdn-ovs_dpdk-ha:huawei-virtual3:compass": [2371.9],
+
+ "os-odl-sfc-noha:huawei-virtual3:compass": [2364.6],
+
+ "os-nosdn-bar-ha:huawei-virtual3:compass": [2363.4],
+
+ "os-nosdn-nofeature-ha:huawei-virtual3:compass": [2362],
+
+ "os-nosdn-kvm-ha:huawei-virtual4:compass": [2358.5],
+
+ "os-odl_l3-nofeature-ha:huawei-virtual3:compass": [2358.45],
+
+ "os-odl_l3-nofeature-ha:huawei-virtual2:compass": [2336],
+
+ "os-odl_l3-nofeature-ha:huawei-virtual4:compass": [2326.6],
+
+ "os-nosdn-nofeature-ha:huawei-virtual9:compass": [2324.95],
+
+ "os-nosdn-nofeature-noha:huawei-virtual8:compass": [2320.2],
+
+ "os-nosdn-bar-ha:huawei-virtual4:compass": [2318.5],
+
+ "os-odl_l2-moon-ha:huawei-virtual4:compass": [2312.8],
+
+ "os-nosdn-nofeature-noha:huawei-virtual2:compass": [2311.7],
+
+ "os-nosdn-nofeature-ha:huawei-virtual1:compass": [2301.15],
+
+ "os-nosdn-nofeature-ha:huawei-virtual2:compass": [2297.7],
+
+ "os-nosdn-nofeature-noha:huawei-pod12:joid": [2232.8],
+
+ "os-nosdn-nofeature-ha:huawei-pod12:joid": [2232.1],
+
+ "os-nosdn-openbaton-ha:huawei-pod12:joid": [2230],
+
+ "os-nosdn-kvm-ha:huawei-virtual3:compass": [2154],
+
+ "os-odl-sfc-ha:huawei-virtual8:compass": [2150.1],
+
+ "os-nosdn-kvm-noha:huawei-virtual8:compass": [2004.3],
+
+ "os-odl-nofeature-ha:arm-pod5:fuel": [1754.5],
+
+ "os-nosdn-nofeature-ha:arm-pod5:fuel": [1754.15],
+
+ "os-odl-nofeature-ha:arm-pod6:fuel": [716.15],
+
+ "os-nosdn-nofeature-ha:arm-pod6:fuel": [716.05]
+
+}
+
+
+The influence of the scenario
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. image:: images/tc014_scenario.png
+ :width: 800px
+ :alt: TC014 influence of scenario
+
+{
+
+ "os-nosdn-ovs-ha": [3725.5],
+
+ "os-ovn-nofeature-noha": [3654.8],
+
+ "os-nosdn-bar-noha": [3633.2],
+
+ "os-odl-nofeature-ha": [3407.8],
+
+ "os-nosdn-ovs-noha": [2583.2],
+
+ "os-odl-nofeature-noha": [2578.9],
+
+ "os-nosdn-nofeature-noha": [2553.2],
+
+ "os-nosdn-nofeature-ha": [2532.8],
+
+ "os-odl_l2-moon-ha": [2530.5],
+
+ "os-nosdn-bar-ha": [2527],
+
+ "os-odl_l3-nofeature-ha": [2501.5],
+
+ "os-nosdn-ovs_dpdk-noha": [2473.65],
+
+ "os-odl-sfc-ha": [2472.9],
+
+ "os-odl_l2-moon-noha": [2470.8],
+
+ "os-nosdn-ovs_dpdk-ha": [2461.9],
+
+ "os-odl_l3-nofeature-noha": [2442.8],
+
+ "os-nosdn-kvm-noha": [2392.9],
+
+ "os-odl-sfc-noha": [2370.5],
+
+ "os-nosdn-kvm-ha": [2358.5],
+
+ "os-nosdn-openbaton-ha": [2231.8]
+
+}
+
+
+The influence of the POD
+^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. image:: images/tc014_pod.png
+ :width: 800px
+ :alt: TC014 influence of the POD
+
+{
+
+ "lf-pod2": [3723.95],
+
+ "lf-pod1": [3669],
+
+ "intel-pod5": [3388.6],
+
+ "intel-pod18": [3298.4],
+
+ "flex-pod2": [3208.6],
+
+ "ericsson-virtual1": [2988.9],
+
+ "ericsson-pod1": [2669.1],
+
+ "ericsson-virtual4": [2598.5],
+
+ "ericsson-virtual3": [2553.15],
+
+ "huawei-pod2": [2531.2],
+
+ "ericsson-virtual2": [2526.9],
+
+ "huawei-virtual4": [2407.4],
+
+ "huawei-virtual3": [2374.6],
+
+ "huawei-virtual2": [2326.4],
+
+ "huawei-virtual9": [2324.95],
+
+ "huawei-virtual1": [2302.6],
+
+ "huawei-pod12": [2232.2],
+
+ "huawei-virtual8": [2085.3],
+
+ "arm-pod5": [1754.4],
+
+ "arm-pod6": [716.15]
+
+}
+
+
+Fraser release
+--------------
+
+Test results per scenario and pod (higher is better):
+
+{
+
+ "os-nosdn-nofeature-ha:lf-pod2:fuel": [3747.3],
+
+ "os-nosdn-calipso-noha:lf-pod1:apex": [3727.2],
+
+ "os-odl-nofeature-ha:lf-pod1:apex": [3726.5],
+
+ "os-ovn-nofeature-noha:lf-pod1:apex": [3723.8],
+
+ "os-odl-nofeature-noha:lf-pod1:apex": [3718.9],
+
+ "os-nosdn-nofeature-noha:lf-pod1:apex": [3717.75],
+
+ "os-nosdn-nofeature-ha:lf-pod1:apex": [3706.5],
+
+ "os-odl-nofeature-ha:lf-pod2:fuel": [3704.9],
+
+ "os-nosdn-ovs-ha:lf-pod2:fuel": [3687.7],
+
+ "os-nosdn-bar-noha:lf-pod1:apex": [3635.4],
+
+ "os-nosdn-bar-ha:lf-pod1:apex": [3632.55],
+
+ "os-odl-sfc-noha:lf-pod1:apex": [3569],
+
+ "os-nosdn-nofeature-noha:intel-pod18:joid": [3432.1],
+
+ "os-odl-nofeature-ha:ericsson-pod1:fuel": [3133.85],
+
+ "os-nosdn-ovs-ha:ericsson-pod1:fuel": [3079.8],
+
+ "os-nosdn-nofeature-ha:intel-pod18:joid": [3074.75],
+
+ "os-nosdn-openbaton-ha:intel-pod18:joid": [2976.2],
+
+ "os-nosdn-nofeature-ha:zte-pod2:daisy": [2910.95],
+
+ "os-odl-nofeature-ha:zte-pod2:daisy": [2801.1],
+
+ "os-nosdn-ovs-noha:ericsson-virtual2:fuel": [2603],
+
+ "os-odl-nofeature-noha:ericsson-virtual2:fuel": [2559.7],
+
+ "os-nosdn-nofeature-noha:ericsson-virtual2:fuel": [2539.1],
+
+ "os-odl_l3-nofeature-ha:huawei-pod2:compass": [2530.5],
+
+ "os-nosdn-nofeature-ha:huawei-pod2:compass": [2529.4],
+
+ "os-odl-sfc-ha:huawei-pod2:compass": [2528.9],
+
+ "os-odl-nofeature-noha:ericsson-virtual4:fuel": [2527.8],
+
+ "os-nosdn-bar-ha:huawei-pod2:compass": [2527.4],
+
+ "os-nosdn-kvm-ha:huawei-pod2:compass": [2517.8],
+
+ "os-nosdn-nofeature-noha:huawei-virtual4:compass": [2472.4],
+
+ "os-nosdn-nofeature-ha:huawei-virtual4:compass": [2469.1],
+
+ "os-odl-sfc-noha:huawei-virtual3:compass": [2452.05],
+
+ "os-odl-sfc-noha:huawei-virtual4:compass": [2438.7],
+
+ "os-odl_l3-nofeature-noha:huawei-virtual3:compass": [2418.4],
+
+ "os-nosdn-ovs-noha:ericsson-virtual4:fuel": [2404.35],
+
+ "os-nosdn-kvm-noha:huawei-virtual3:compass": [2391],
+
+ "os-nosdn-kvm-noha:huawei-virtual4:compass": [2376.75],
+
+ "os-odl_l3-nofeature-noha:huawei-virtual4:compass": [2376.2],
+
+ "os-nosdn-nofeature-noha:huawei-virtual3:compass": [2359.45],
+
+ "os-nosdn-bar-noha:huawei-virtual4:compass": [2353.3],
+
+ "os-odl-sfc-ha:huawei-virtual3:compass": [2351.9],
+
+ "os-nosdn-bar-ha:huawei-virtual3:compass": [2339.4],
+
+ "os-odl-sfc-ha:huawei-virtual4:compass": [2335.6],
+
+ "os-nosdn-bar-ha:huawei-virtual4:compass": [2328],
+
+ "os-odl_l3-nofeature-ha:huawei-virtual3:compass": [2324.5],
+
+ "os-nosdn-bar-noha:huawei-virtual3:compass": [2317.3],
+
+ "os-nosdn-nofeature-ha:huawei-virtual3:compass": [2313.95],
+
+ "os-odl_l3-nofeature-ha:huawei-virtual4:compass": [2308.1],
+
+ "os-nosdn-nofeature-noha:huawei-virtual2:compass": [2299.3],
+
+ "os-nosdn-kvm-ha:huawei-virtual4:compass": [2250.4],
+
+ "os-nosdn-nofeature-noha:huawei-pod12:joid": [2229.7],
+
+ "os-nosdn-nofeature-ha:huawei-pod12:joid": [2228.8],
+
+ "os-nosdn-nofeature-ha:huawei-virtual1:compass": [2171.3],
+
+ "os-nosdn-nofeature-noha:ericsson-virtual3:fuel": [2104.8],
+
+ "os-nosdn-nofeature-ha:ericsson-pod1:fuel": [1961.35],
+
+ "os-nosdn-ovs-ha:arm-pod5:fuel": [1764.2],
+
+ "os-odl-nofeature-ha:arm-pod5:fuel": [1730.95],
+
+ "os-nosdn-ovs-ha:arm-pod6:fuel": [715.55],
+
+ "os-odl-nofeature-ha:arm-pod6:fuel": [715.4],
+
+ "os-nosdn-nofeature-ha:arm-pod6:fuel": [715.25]
+
+}
+
+
+The influence of the scenario
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. image:: images/tc014_scenario_fraser.png
+ :width: 800px
+ :alt: TC014 influence of scenario
+
+{
+
+ "os-nosdn-calipso-noha": [3727.2],
+
+ "os-ovn-nofeature-noha": [3723.8],
+
+ "os-odl-nofeature-noha": [3128.05],
+
+ "os-nosdn-openbaton-ha": [2976.2],
+
+ "os-nosdn-ovs-ha": [2814.5],
+
+ "os-odl-nofeature-ha": [2801.4],
+
+ "os-nosdn-nofeature-ha": [2649.7],
+
+ "os-nosdn-ovs-noha": [2587.3],
+
+ "os-odl_l3-nofeature-ha": [2528.45],
+
+ "os-odl-sfc-ha": [2527.6],
+
+ "os-nosdn-bar-ha": [2526.55],
+
+ "os-nosdn-kvm-ha": [2516.95],
+
+ "os-odl-sfc-noha": [2453.65],
+
+ "os-nosdn-bar-noha": [2447.7],
+
+ "os-nosdn-nofeature-noha": [2443.85],
+
+ "os-odl_l3-nofeature-noha": [2394.3],
+
+ "os-nosdn-kvm-noha": [2379.7]
+
+}
+
+
+The influence of the POD
+^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. image:: images/tc014_pod_fraser.png
+ :width: 800px
+ :alt: TC014 influence of the POD
+
+{
+
+ "lf-pod2": [3737.6],
+
+ "lf-pod1": [3702.7],
+
+ "ericsson-pod1": [3131.6],
+
+ "intel-pod18": [3098.1],
+
+ "zte-pod2": [2831.4],
+
+ "ericsson-virtual2": [2559.7],
+
+ "huawei-pod2": [2528.9],
+
+ "ericsson-virtual4": [2527.8],
+
+ "huawei-virtual3": [2379.1],
+
+ "huawei-virtual4": [2362.1],
+
+ "huawei-virtual2": [2299.3],
+
+ "huawei-pod12": [2229],
+
+ "huawei-virtual1": [2171.3],
+
+ "ericsson-virtual3": [2104.8],
+
+ "arm-pod5": [1764.2],
+
+ "arm-pod6": [715.4]
+
+}
diff --git a/docs/release/results/tc069-memory-write-bandwidth.rst b/docs/release/results/tc069-memory-write-bandwidth.rst
new file mode 100644
index 000000000..4cd3be3b0
--- /dev/null
+++ b/docs/release/results/tc069-memory-write-bandwidth.rst
@@ -0,0 +1,516 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+
+
+=============================================
+Test results for TC069 memory write bandwidth
+=============================================
+
+.. toctree::
+ :maxdepth: 2
+
+
+Overview of test case
+=====================
+
+TC069 measures the maximum possible cache and memory performance while continuously reading and writing
+blocks of data (starting at 1 KB and increasing in powers of 2) through the ALU and FPU respectively.
+It measures different aspects of memory performance via synthetic simulations, where each simulation
+consists of four operations (Copy, Scale, Add, Triad).
+The test results shown below are for writing a 32 MB integer block.
+
+Metric: memory write bandwidth
+Unit: MBps
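+
+A rough sketch of the four operations named above and of the write-bandwidth
+accounting; the benchmark behind this test case is a native tool, so this
+Python version only illustrates what each operation does and how MBps is
+derived from the bytes written:
+
+.. code-block:: python
+
+    import time
+
+    import numpy as np
+
+    n = 32 * 1024 * 1024 // 8                # ~32 MB worth of 64-bit elements
+    a, b, c = (np.ones(n) for _ in range(3))
+    s = 3.0
+
+    operations = {
+        "Copy":  lambda: np.copyto(c, a),           # c = a
+        "Scale": lambda: np.multiply(c, s, out=b),  # b = s * c
+        "Add":   lambda: np.add(a, b, out=c),       # c = a + b
+        "Triad": lambda: np.add(b, s * c, out=a),   # a = b + s * c
+    }
+
+    for name, op in operations.items():
+        start = time.perf_counter()
+        op()
+        elapsed = time.perf_counter() - start
+        # each operation writes exactly one n-element array
+        print("%-5s write MBps: %.0f" % (name, a.nbytes / elapsed / 1e6))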
+
+
+Euphrates release
+-----------------
+
+Test results per scenario and pod (higher is better):
+
+{
+
+ "os-nosdn-nofeature-noha:intel-pod18:joid": [20113.395],
+
+ "os-nosdn-openbaton-ha:intel-pod18:joid": [19183.58],
+
+ "os-nosdn-nofeature-ha:intel-pod18:joid": [17851.35],
+
+ "os-nosdn-nofeature-noha:intel-pod5:joid": [16312.37],
+
+ "os-nosdn-nofeature-ha:intel-pod5:joid": [15633.245],
+
+ "os-nosdn-nofeature-ha:arm-pod6:fuel": [13332.065],
+
+ "os-odl-nofeature-ha:arm-pod6:fuel": [13327.02],
+
+ "os-nosdn-nofeature-ha:ericsson-pod1:fuel": [9462.74],
+
+ "os-nosdn-nofeature-ha:flex-pod2:apex": [9384.585],
+
+ "os-odl-nofeature-ha:ericsson-pod1:fuel": [9235.98],
+
+ "os-nosdn-nofeature-noha:huawei-pod12:joid": [9213.6],
+
+ "os-nosdn-openbaton-ha:huawei-pod12:joid": [9152.18],
+
+ "os-nosdn-nofeature-ha:huawei-pod12:joid": [9079.45],
+
+ "os-odl_l2-moon-ha:huawei-pod2:compass": [9071.13],
+
+ "os-nosdn-nofeature-ha:huawei-pod2:compass": [9068.06],
+
+ "os-odl-sfc-ha:huawei-pod2:compass": [9031.24],
+
+ "os-odl_l3-nofeature-ha:huawei-pod2:compass": [9019.53],
+
+ "os-nosdn-bar-ha:huawei-pod2:compass": [8977.3],
+
+ "os-nosdn-ovs_dpdk-ha:huawei-pod2:compass": [8960.635],
+
+ "os-nosdn-nofeature-ha:huawei-virtual9:compass": [8825.805],
+
+ "os-nosdn-kvm-ha:huawei-pod2:compass": [8282.75],
+
+ "os-odl_l2-moon-noha:huawei-virtual4:compass": [8116.33],
+
+ "os-nosdn-ovs-noha:ericsson-virtual4:fuel": [8083.97],
+
+ "os-odl_l2-moon-noha:huawei-virtual3:compass": [8083.52],
+
+ "os-nosdn-nofeature-noha:huawei-virtual3:compass": [7799.145],
+
+ "os-odl_l3-nofeature-noha:huawei-virtual3:compass": [7776.12],
+
+ "os-nosdn-ovs_dpdk-noha:huawei-virtual3:compass": [7680.37],
+
+ "os-nosdn-ovs-noha:ericsson-virtual1:fuel": [7615.97],
+
+ "os-nosdn-nofeature-noha:ericsson-virtual4:fuel": [7612.62],
+
+ "os-nosdn-nofeature-noha:ericsson-virtual2:fuel": [7518.62],
+
+ "os-nosdn-nofeature-noha:huawei-virtual2:compass": [7489.67],
+
+ "os-nosdn-ovs-noha:ericsson-virtual2:fuel": [7478.57],
+
+ "os-nosdn-ovs_dpdk-ha:huawei-virtual4:compass": [7465.82],
+
+ "os-nosdn-kvm-noha:huawei-virtual3:compass": [7443.16],
+
+ "os-odl-nofeature-noha:ericsson-virtual4:fuel": [7442.855],
+
+ "os-nosdn-nofeature-ha:arm-pod5:fuel": [7440.65],
+
+ "os-odl-sfc-noha:huawei-virtual4:compass": [7401.16],
+
+ "os-nosdn-nofeature-ha:huawei-virtual3:compass": [7389.505],
+
+ "os-odl-nofeature-ha:arm-pod5:fuel": [7385.76],
+
+ "os-nosdn-nofeature-noha:huawei-virtual1:compass": [7382.345],
+
+ "os-odl_l2-moon-ha:huawei-virtual3:compass": [7286.385],
+
+ "os-odl_l3-nofeature-noha:huawei-virtual4:compass": [7272.06],
+
+ "os-odl_l3-nofeature-ha:huawei-virtual4:compass": [7261.73],
+
+ "os-nosdn-nofeature-noha:ericsson-virtual3:fuel": [7253.64],
+
+ "os-odl-sfc-noha:huawei-virtual3:compass": [7247.89],
+
+ "os-odl_l3-nofeature-ha:huawei-virtual2:compass": [7214.01],
+
+ "os-nosdn-ovs_dpdk-ha:huawei-virtual3:compass": [7207.39],
+
+ "os-nosdn-ovs_dpdk-noha:huawei-virtual4:compass": [7205.565],
+
+ "os-nosdn-ovs-noha:ericsson-virtual3:fuel": [7201.005],
+
+ "os-nosdn-nofeature-ha:huawei-virtual4:compass": [7132.835],
+
+ "os-odl-nofeature-noha:ericsson-virtual3:fuel": [7117.05],
+
+ "os-odl_l3-nofeature-ha:huawei-virtual3:compass": [7064.18],
+
+ "os-odl_l2-moon-ha:huawei-virtual4:compass": [6997.295],
+
+ "os-odl-nofeature-ha:lf-pod1:apex": [6992.21],
+
+ "os-odl-sfc-ha:huawei-virtual4:compass": [6975.63],
+
+ "os-odl-nofeature-noha:lf-pod1:apex": [6972.63],
+
+ "os-nosdn-nofeature-noha:lf-pod1:apex": [6955],
+
+ "os-ovn-nofeature-noha:lf-pod1:apex": [6954.5],
+
+ "os-nosdn-nofeature-ha:lf-pod1:apex": [6953.35],
+
+ "os-odl-sfc-noha:lf-pod1:apex": [6951.89],
+
+ "os-nosdn-nofeature-ha:huawei-virtual2:compass": [6932.29],
+
+ "os-nosdn-nofeature-noha:huawei-virtual4:compass": [6929.54],
+
+ "os-nosdn-kvm-noha:huawei-virtual4:compass": [6921.6],
+
+ "os-nosdn-ovs-ha:lf-pod2:fuel": [6913.355],
+
+ "os-odl-nofeature-ha:lf-pod2:fuel": [6848.58],
+
+ "os-odl-sfc-ha:lf-pod1:apex": [6818.74],
+
+ "os-nosdn-bar-noha:lf-pod1:apex": [6812.16],
+
+ "os-nosdn-nofeature-ha:lf-pod2:fuel": [6808.18],
+
+ "os-odl-nofeature-noha:ericsson-virtual2:fuel": [6807.565],
+
+ "os-nosdn-bar-ha:lf-pod1:apex": [6774.76],
+
+ "os-nosdn-bar-ha:huawei-virtual4:compass": [6759.4],
+
+ "os-nosdn-nofeature-noha:huawei-virtual8:compass": [6756.9],
+
+ "os-nosdn-bar-ha:huawei-virtual3:compass": [6543.46],
+
+ "os-nosdn-kvm-ha:huawei-virtual3:compass": [6504.34],
+
+ "os-odl-sfc-ha:huawei-virtual3:compass": [6481.005],
+
+ "os-nosdn-kvm-ha:huawei-virtual4:compass": [6461.5],
+
+ "os-nosdn-nofeature-ha:huawei-virtual1:compass": [6152.375],
+
+ "os-odl-sfc-ha:huawei-virtual8:compass": [5941.7],
+
+ "os-nosdn-kvm-noha:huawei-virtual8:compass": [4564.515]
+
+}
+
+
+The influence of the scenario
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. image:: images/tc069_scenario.png
+ :width: 800px
+ :alt: TC069 influence of scenario
+
+{
+
+ "os-nosdn-openbaton-ha": [9187.16],
+
+ "os-odl_l2-moon-ha": [9010.57],
+
+ "os-nosdn-nofeature-ha": [8886.75],
+
+ "os-odl_l3-nofeature-ha": [8779.67],
+
+ "os-odl_l2-moon-noha": [8114.995],
+
+ "os-nosdn-ovs_dpdk-ha": [7864.07],
+
+ "os-odl_l3-nofeature-noha": [7632.11],
+
+ "os-odl-sfc-ha": [7624.67],
+
+ "os-nosdn-nofeature-noha": [7470.66],
+
+ "os-odl-nofeature-ha": [7372.23],
+
+ "os-nosdn-ovs_dpdk-noha": [7311.54],
+
+ "os-odl-sfc-noha": [7300.56],
+
+ "os-nosdn-ovs-noha": [7280.005],
+
+ "os-odl-nofeature-noha": [7162.67],
+
+ "os-nosdn-kvm-ha": [7130.775],
+
+ "os-nosdn-kvm-noha": [7041.13],
+
+ "os-ovn-nofeature-noha": [6954.5],
+
+ "os-nosdn-ovs-ha": [6913.355],
+
+ "os-nosdn-bar-ha": [6829.17],
+
+ "os-nosdn-bar-noha": [6812.16]
+
+}
+
+
+The influence of the POD
+^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. image:: images/tc069_pod.png
+ :width: 800px
+ :alt: TC069 influence of the POD
+
+{
+
+ "intel-pod18": [18871.79],
+
+ "intel-pod5": [16055.79],
+
+ "arm-pod6": [13327.02],
+
+ "flex-pod2": [9384.585],
+
+ "ericsson-pod1": [9331.535],
+
+ "huawei-pod12": [9164.88],
+
+ "huawei-pod2": [9026.52],
+
+ "huawei-virtual9": [8825.805],
+
+ "ericsson-virtual1": [7615.97],
+
+ "ericsson-virtual4": [7539.23],
+
+ "arm-pod5": [7403.38],
+
+ "huawei-virtual3": [7247.89],
+
+ "huawei-virtual2": [7205.35],
+
+ "huawei-virtual1": [7196.405],
+
+ "ericsson-virtual3": [7173.72],
+
+ "huawei-virtual4": [7131.47],
+
+ "ericsson-virtual2": [7129.08],
+
+ "lf-pod1": [6928.18],
+
+ "lf-pod2": [6875.88],
+
+ "huawei-virtual8": [5729.705]
+
+}
+
+
+Fraser release
+--------------
+
+Test results per scenario and pod (higher is better):
+
+{
+
+ "os-nosdn-nofeature-noha:intel-pod18:joid": [18382.49],
+
+ "os-nosdn-openbaton-ha:intel-pod18:joid": [16774.52],
+
+ "os-nosdn-nofeature-ha:intel-pod18:joid": [16680.305],
+
+ "os-nosdn-ovs-ha:arm-pod6:fuel": [11925.22],
+
+ "os-nosdn-nofeature-ha:arm-pod6:fuel": [11895.71],
+
+ "os-odl-nofeature-ha:arm-pod6:fuel": [11880.7],
+
+ "os-nosdn-nofeature-ha:ericsson-pod1:fuel": [9471.095],
+
+ "os-odl-nofeature-ha:zte-pod2:daisy": [9375.33],
+
+ "os-nosdn-nofeature-ha:zte-pod2:daisy": [9372.95],
+
+ "os-odl-nofeature-ha:ericsson-pod1:fuel": [9174.36],
+
+ "os-nosdn-nofeature-noha:huawei-pod12:joid": [9051.57],
+
+ "os-nosdn-nofeature-ha:huawei-pod12:joid": [8894.74],
+
+ "os-odl_l3-nofeature-ha:huawei-pod2:compass": [8857.23],
+
+ "os-nosdn-nofeature-ha:huawei-pod2:compass": [8855.8],
+
+ "os-nosdn-bar-ha:huawei-pod2:compass": [8840.94],
+
+ "os-odl-sfc-ha:huawei-pod2:compass": [8826.23],
+
+ "os-nosdn-nofeature-noha:huawei-virtual4:compass": [8039.48],
+
+ "os-nosdn-nofeature-noha:huawei-virtual2:compass": [7670.21],
+
+ "os-nosdn-ovs-ha:arm-pod5:fuel": [7590.9],
+
+ "os-odl-sfc-noha:huawei-virtual4:compass": [7579.625],
+
+ "os-nosdn-bar-noha:huawei-virtual3:compass": [7511.775],
+
+ "os-odl-nofeature-ha:arm-pod5:fuel": [7475.16],
+
+ "os-nosdn-bar-noha:huawei-virtual4:compass": [7435.08],
+
+ "os-nosdn-nofeature-noha:ericsson-virtual2:fuel": [7426.79],
+
+ "os-nosdn-nofeature-ha:huawei-virtual4:compass": [7362.8],
+
+ "os-nosdn-kvm-noha:huawei-virtual4:compass": [7263.45],
+
+ "os-nosdn-nofeature-noha:huawei-virtual3:compass": [7262.72],
+
+ "os-odl_l3-nofeature-noha:huawei-virtual3:compass": [7241.07],
+
+ "os-odl-nofeature-noha:ericsson-virtual2:fuel": [7219.21],
+
+ "os-nosdn-kvm-noha:huawei-virtual3:compass": [7174.33],
+
+ "os-odl-sfc-noha:huawei-virtual3:compass": [7170.795],
+
+ "os-odl-nofeature-noha:lf-pod1:apex": [7158.335],
+
+ "os-nosdn-kvm-ha:huawei-pod2:compass": [7122.45],
+
+ "os-odl-sfc-ha:huawei-virtual4:compass": [7104.9],
+
+ "os-nosdn-ovs-noha:ericsson-virtual2:fuel": [7044.37],
+
+ "os-nosdn-bar-ha:huawei-virtual3:compass": [7011.075],
+
+ "os-nosdn-ovs-ha:ericsson-pod1:fuel": [6950.28],
+
+ "os-nosdn-ovs-noha:ericsson-virtual4:fuel": [6918.31],
+
+ "os-nosdn-bar-ha:huawei-virtual4:compass": [6903.11],
+
+ "os-nosdn-nofeature-ha:lf-pod2:fuel": [6880.98],
+
+ "os-odl-sfc-ha:lf-pod1:apex": [6863.39],
+
+ "os-odl_l3-nofeature-ha:huawei-virtual3:compass": [6851.54],
+
+ "os-nosdn-nofeature-noha:lf-pod1:apex": [6834.75],
+
+ "os-nosdn-calipso-noha:lf-pod1:apex": [6833.92],
+
+ "os-nosdn-ovs-ha:lf-pod2:fuel": [6814.68],
+
+ "os-ovn-nofeature-noha:lf-pod1:apex": [6809.44],
+
+ "os-odl_l3-nofeature-ha:huawei-virtual4:compass": [6784.48],
+
+ "os-nosdn-nofeature-ha:lf-pod1:apex": [6737.64],
+
+ "os-nosdn-bar-noha:lf-pod1:apex": [6708.61],
+
+ "os-nosdn-bar-ha:lf-pod1:apex": [6697.2],
+
+ "os-odl-nofeature-ha:lf-pod1:apex": [6626.51],
+
+ "os-odl-sfc-noha:lf-pod1:apex": [6609.57],
+
+ "os-odl-sfc-ha:huawei-virtual3:compass": [6606.87],
+
+ "os-odl_l3-nofeature-noha:huawei-virtual4:compass": [6547.39],
+
+ "os-odl-nofeature-ha:lf-pod2:fuel": [6465.48],
+
+ "os-odl-nofeature-noha:ericsson-virtual4:fuel": [6413],
+
+ "os-nosdn-kvm-ha:huawei-virtual4:compass": [6409.075],
+
+ "os-nosdn-nofeature-ha:huawei-virtual3:compass": [6128.79],
+
+ "os-nosdn-nofeature-noha:ericsson-virtual3:fuel": [5835.59],
+
+ "os-nosdn-nofeature-ha:huawei-virtual1:compass": [5617.12]
+
+}
+
+
+The influence of the scenario
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. image:: images/tc069_scenario_fraser.png
+ :width: 800px
+ :alt: TC069 influence of scenario
+
+{
+
+ "os-nosdn-openbaton-ha": [16774.52],
+
+ "os-odl-nofeature-ha": [9363.69],
+
+ "os-nosdn-nofeature-ha": [8878.01],
+
+ "os-odl_l3-nofeature-ha": [8748.4],
+
+ "os-odl-sfc-ha": [8708.045],
+
+ "os-nosdn-nofeature-noha": [7426.79],
+
+ "os-nosdn-kvm-noha": [7230.79],
+
+ "os-odl-sfc-noha": [7224.11],
+
+ "os-odl-nofeature-noha": [7187.84],
+
+ "os-nosdn-ovs-noha": [7044.37],
+
+ "os-nosdn-bar-ha": [6947.87],
+
+ "os-odl_l3-nofeature-noha": [6895.96],
+
+ "os-nosdn-kvm-ha": [6890.92],
+
+ "os-nosdn-calipso-noha": [6833.92],
+
+ "os-nosdn-ovs-ha": [6833.495],
+
+ "os-nosdn-bar-noha": [6811.66],
+
+ "os-ovn-nofeature-noha": [6809.44]
+
+}
+
+
+The influence of the POD
+^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. image:: images/tc069_pod_fraser.png
+ :width: 800px
+ :alt: TC069 influence of the POD
+
+{
+
+ "intel-pod18": [16939.24],
+
+ "arm-pod6": [11895.71],
+
+ "zte-pod2": [9375.33],
+
+ "ericsson-pod1": [9140.42],
+
+ "huawei-pod12": [8993.37],
+
+ "huawei-pod2": [8794.01],
+
+ "huawei-virtual2": [7670.21],
+
+ "arm-pod5": [7479.32],
+
+ "ericsson-virtual2": [7219.21],
+
+ "huawei-virtual4": [7059.045],
+
+ "huawei-virtual3": [7023.57],
+
+ "lf-pod2": [6834.7],
+
+ "lf-pod1": [6775.27],
+
+ "ericsson-virtual4": [6522.86],
+
+ "ericsson-virtual3": [5835.59],
+
+ "huawei-virtual1": [5617.12]
+
+}
diff --git a/docs/release/results/tc082-context-switches-under-load.rst b/docs/release/results/tc082-context-switches-under-load.rst
new file mode 100644
index 000000000..92bc69907
--- /dev/null
+++ b/docs/release/results/tc082-context-switches-under-load.rst
@@ -0,0 +1,187 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+
+
+==================================================
+Test results for TC082 context switches under load
+==================================================
+
+.. toctree::
+ :maxdepth: 2
+
+
+Overview of test case
+=====================
+
+TC082 measures various software performance events using perf.
+The test results shown below are for context-switches.
+
+Metric: context switches
+Unit: N/A
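+
+A minimal way to collect the same kind of counter by hand, assuming ``perf`` is
+installed and system-wide counting is permitted; the ten-second window and the
+absence of any synthetic load are simplifications of what the test case does:
+
+.. code-block:: python
+
+    import re
+    import subprocess
+
+    # perf prints its counter summary on stderr
+    proc = subprocess.run(
+        ["perf", "stat", "-e", "context-switches", "-a", "sleep", "10"],
+        capture_output=True, text=True)
+
+    match = re.search(r"([\d,]+)\s+context-switches", proc.stderr)
+    if match:
+        print("context switches:", int(match.group(1).replace(",", "")))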
+
+
+Euphrates release
+-----------------
+
+Test results per scenario and pod (lower is better):
+
+{
+
+ "os-nosdn-nofeature-ha:huawei-pod12:joid": [316],
+
+ "os-nosdn-nofeature-ha:intel-pod18:joid": [340],
+
+ "os-nosdn-nofeature-ha:intel-pod5:joid": [357.5],
+
+ "os-nosdn-nofeature-ha:ericsson-pod1:fuel": [384],
+
+ "os-nosdn-nofeature-ha:lf-pod2:fuel": [394.5],
+
+ "os-nosdn-nofeature-ha:lf-pod1:apex": [435],
+
+ "os-nosdn-nofeature-ha:flex-pod2:apex": [476],
+
+ "os-nosdn-nofeature-ha:huawei-pod2:compass": [518],
+
+ "os-odl-nofeature-ha:arm-pod5:fuel": [863],
+
+ "os-nosdn-nofeature-ha:arm-pod5:fuel": [871],
+
+ "os-nosdn-nofeature-ha:huawei-virtual9:compass": [1002],
+
+ "os-nosdn-nofeature-ha:huawei-virtual4:compass": [1174],
+
+ "os-nosdn-nofeature-ha:huawei-virtual3:compass": [1239],
+
+ "os-nosdn-nofeature-ha:huawei-virtual2:compass": [1430],
+
+ "os-nosdn-nofeature-ha:huawei-virtual1:compass": [1489],
+
+ "os-nosdn-nofeature-ha:arm-pod6:fuel": [1883.5]
+
+}
+
+
+The influence of the scenario
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. image:: images/tc082_scenario.png
+ :width: 800px
+ :alt: TC082 influence of scenario
+
+{
+
+ "os-nosdn-nofeature-ha": [505],
+
+ "os-odl-nofeature-ha": [863]
+
+}
+
+
+The influence of the POD
+^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. image:: images/tc082_pod.png
+ :width: 800px
+ :alt: TC082 influence of the POD
+
+{
+
+ "huawei-pod12": [316],
+
+ "intel-pod18": [340],
+
+ "intel-pod5": [357.5],
+
+ "ericsson-pod1": [384],
+
+ "lf-pod2": [394.5],
+
+ "lf-pod1": [435],
+
+ "flex-pod2": [476],
+
+ "huawei-pod2": [518],
+
+ "arm-pod5": [869.5],
+
+ "huawei-virtual9": [1002],
+
+ "huawei-virtual4": [1174],
+
+ "huawei-virtual3": [1239],
+
+ "huawei-virtual2": [1430],
+
+ "huawei-virtual1": [1489],
+
+ "arm-pod6": [1883.5]
+
+}
+
+
+Fraser release
+--------------
+
+Test results per scenario and pod (lower is better):
+
+{
+
+ "os-nosdn-nofeature-ha:zte-pod2:daisy": [306.5],
+
+ "os-nosdn-nofeature-ha:lf-pod2:fuel": [337.5],
+
+ "os-nosdn-nofeature-ha:intel-pod18:joid": [343.5],
+
+ "os-nosdn-nofeature-ha:huawei-pod12:joid": [399],
+
+ "os-nosdn-nofeature-ha:lf-pod1:apex": [454],
+
+ "os-nosdn-nofeature-ha:huawei-pod2:compass": [544.5],
+
+ "os-nosdn-nofeature-ha:huawei-virtual4:compass": [1138],
+
+ "os-nosdn-nofeature-ha:ericsson-pod1:fuel": [1305],
+
+ "os-nosdn-nofeature-ha:huawei-virtual3:compass": [1433],
+
+ "os-nosdn-nofeature-ha:huawei-virtual1:compass": [1470],
+
+ "os-nosdn-nofeature-ha:arm-pod6:fuel": [1738.5]
+
+}
+
+
+The influence of the POD
+^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. image:: images/tc082_pod_fraser.png
+ :width: 800px
+ :alt: TC082 influence of the POD
+
+{
+
+ "zte-pod2": [306.5],
+
+ "lf-pod2": [337.5],
+
+ "intel-pod18": [343.5],
+
+ "huawei-pod12": [399],
+
+ "lf-pod1": [454],
+
+ "huawei-pod2": [544.5],
+
+ "huawei-virtual4": [1138],
+
+ "ericsson-pod1": [1305],
+
+ "huawei-virtual3": [1433],
+
+ "huawei-virtual1": [1470],
+
+ "arm-pod6": [1738.5]
+
+}
diff --git a/docs/release/results/tc083-network-throughput-between-vm.rst b/docs/release/results/tc083-network-throughput-between-vm.rst
new file mode 100644
index 000000000..0389eaafe
--- /dev/null
+++ b/docs/release/results/tc083-network-throughput-between-vm.rst
@@ -0,0 +1,187 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+
+
+=====================================================
+Test results for TC083 network throughput between VMs
+=====================================================
+
+.. toctree::
+ :maxdepth: 2
+
+
+Overview of test case
+=====================
+
+TC083 measures network latency and throughput between VMs using netperf.
+The test results shown below are for UDP throughput.
+
+Metric: UDP stream throughput
+Unit: 10^6 bits/s
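+
+A rough sketch of driving the same kind of measurement with ``netperf``,
+assuming netperf is installed in the sending VM, netserver is running in the
+receiving VM, and 10.0.0.5 stands in for the receiver's address; the exact
+options used by the test case may differ:
+
+.. code-block:: python
+
+    import subprocess
+
+    out = subprocess.run(
+        ["netperf", "-H", "10.0.0.5", "-t", "UDP_STREAM", "-l", "20"],
+        capture_output=True, text=True).stdout
+
+    # The classic UDP_STREAM report ends with the receive-side line; its last
+    # field is the throughput in 10^6 bits/s. A machine-readable output format
+    # would be more robust than this simple split.
+    last_fields = out.strip().splitlines()[-1].split()
+    print("UDP throughput (10^6 bits/s):", last_fields[-1])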
+
+
+Euphrates release
+-----------------
+
+Test results per scenario and pod (higher is better):
+
+{
+
+ "os-nosdn-nofeature-ha:lf-pod1:apex": [2204.42],
+
+ "os-nosdn-nofeature-ha:intel-pod18:joid": [1835.55],
+
+ "os-nosdn-nofeature-ha:lf-pod2:fuel": [1676.705],
+
+ "os-nosdn-nofeature-ha:intel-pod5:joid": [1612.555],
+
+ "os-nosdn-nofeature-ha:flex-pod2:apex": [1370.23],
+
+ "os-nosdn-nofeature-ha:huawei-pod12:joid": [1300.12],
+
+ "os-nosdn-nofeature-ha:huawei-pod2:compass": [1070.455],
+
+ "os-nosdn-nofeature-ha:ericsson-pod1:fuel": [1004.32],
+
+ "os-nosdn-nofeature-ha:huawei-virtual9:compass": [753.46],
+
+ "os-nosdn-nofeature-ha:huawei-virtual4:compass": [735.07],
+
+ "os-odl-nofeature-ha:arm-pod5:fuel": [531.63],
+
+ "os-nosdn-nofeature-ha:huawei-virtual3:compass": [493.985],
+
+ "os-nosdn-nofeature-ha:arm-pod5:fuel": [448.82],
+
+ "os-nosdn-nofeature-ha:arm-pod6:fuel": [193.43],
+
+ "os-nosdn-nofeature-ha:huawei-virtual1:compass": [189.99],
+
+ "os-nosdn-nofeature-ha:huawei-virtual2:compass": [80.15]
+
+}
+
+
+The influence of the scenario
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. image:: images/tc083_scenario.png
+ :width: 800px
+ :alt: TC083 influence of scenario
+
+{
+
+ "os-nosdn-nofeature-ha": [1109.12],
+
+ "os-odl-nofeature-ha": [531.63]
+
+}
+
+
+The influence of the POD
+^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. image:: images/tc083_pod.png
+ :width: 800px
+ :alt: TC083 influence of the POD
+
+{
+
+ "lf-pod1": [2204.42],
+
+ "intel-pod18": [1835.55],
+
+ "lf-pod2": [1676.705],
+
+ "intel-pod5": [1612.555],
+
+ "flex-pod2": [1370.23],
+
+ "huawei-pod12": [1300.12],
+
+ "huawei-pod2": [1070.455],
+
+ "ericsson-pod1": [1004.32],
+
+ "huawei-virtual9": [753.46],
+
+ "huawei-virtual4": [735.07],
+
+ "huawei-virtual3": [493.985],
+
+ "arm-pod5": [451.38],
+
+ "arm-pod6": [193.43],
+
+ "huawei-virtual1": [189.99],
+
+ "huawei-virtual2": [80.15]
+
+}
+
+
+Fraser release
+--------------
+
+Test results per scenario and pod (higher is better):
+
+{
+
+ "os-nosdn-nofeature-ha:lf-pod2:fuel": [1893.39],
+
+ "os-nosdn-nofeature-ha:zte-pod2:daisy": [1543.995],
+
+ "os-nosdn-nofeature-ha:lf-pod1:apex": [1480.86],
+
+ "os-nosdn-nofeature-ha:intel-pod18:joid": [1417.015],
+
+ "os-nosdn-nofeature-ha:huawei-pod12:joid": [1028.55],
+
+ "os-nosdn-nofeature-ha:huawei-pod2:compass": [1007.65],
+
+ "os-nosdn-nofeature-ha:ericsson-pod1:fuel": [811.795],
+
+ "os-nosdn-nofeature-ha:huawei-virtual4:compass": [552.95],
+
+ "os-nosdn-nofeature-ha:arm-pod6:fuel": [227.655],
+
+ "os-nosdn-nofeature-ha:huawei-virtual1:compass": [216.63],
+
+ "os-nosdn-nofeature-ha:huawei-virtual3:compass": [59.255]
+
+}
+
+
+The influence of the POD
+^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. image:: images/tc083_pod_fraser.png
+ :width: 800px
+ :alt: TC083 influence of the POD
+
+{
+
+ "lf-pod2": [1893.39],
+
+ "zte-pod2": [1543.995],
+
+ "lf-pod1": [1480.86],
+
+ "intel-pod18": [1417.015],
+
+ "huawei-pod12": [1028.55],
+
+ "huawei-pod2": [1007.65],
+
+ "ericsson-pod1": [811.795],
+
+ "huawei-virtual4": [552.95],
+
+ "arm-pod6": [227.655],
+
+ "huawei-virtual1": [216.63],
+
+ "huawei-virtual3": [59.255]
+
+}
diff --git a/docs/release/results/yardstick-opnfv-vtc.rst b/docs/release/results/yardstick-opnfv-vtc.rst
deleted file mode 100644
index 059b5491f..000000000
--- a/docs/release/results/yardstick-opnfv-vtc.rst
+++ /dev/null
@@ -1,248 +0,0 @@
-.. This work is licensed under a Creative Commons Attribution 4.0 International
-.. License.
-.. http://creativecommons.org/licenses/by/4.0
-
-.. _Dashboard006: http://testresults.opnfv.org/grafana/dashboard/db/yardstick-tc006
-.. _Dashboard007: http://testresults.opnfv.org/grafana/dashboard/db/yardstick-tc007
-.. _Dashboard020: http://testresults.opnfv.org/grafana/dashboard/db/yardstick-tc020
-.. _Dashboard021: http://testresults.opnfv.org/grafana/dashboard/db/yardstick-tc021
-.. _DashboardVTC: http://testresults.opnfv.org/grafana/dashboard/db/vtc-dashboard
-====================================
-Test Results for yardstick-opnfv-vtc
-====================================
-
-.. toctree::
- :maxdepth: 2
-
-
-Details
-=======
-
-.. after this doc is filled, remove all comments and include the scenario in
-.. results.rst by removing the comment on the file name.
-
-
-Overview of test results
-------------------------
-
-.. general on metrics collected, number of iterations
-
-The virtual Traffic Classifier (vtc) Scenario supported by Yardstick is used by 4 Test Cases:
-
-- TC006
-- TC007
-- TC020
-- TC021
-
-
-* TC006
-
-TC006 is the Virtual Traffic Classifier Data Plane Throughput Benchmarking Test.
-It collects measures about the end-to-end throughput supported by the
-virtual Traffic Classifier (vTC).
-Results of the test are shown in the Dashboard006_
-The throughput is expressed as percentage of the available bandwidth on the NIC.
-
-
-* TC007
-
-TC007 is the Virtual Traffic Classifier Data Plane Throughput Benchmarking in presence of
-noisy neighbors Test.
-It collects measures about the end-to-end throughput supported by the
-virtual Traffic Classifier when a user-defined number of noisy neighbors is deployed.
-Results of the test are shown in the Dashboard007_
-The throughput is expressed as percentage of the available bandwidth on the NIC.
-
-
-* TC020
-
-TC020 is the Virtual Traffic Classifier Instantiation Test.
-It verifies that a newly instantiated vTC is alive and functional and its instantiation
-is correctly supported by the underlying infrastructure.
-Results of the test are shown in the Dashboard020_
-
-
-* TC021
-
-TC021 is the Virtual Traffic Classifier Instantiation in presence of noisy neighbors Test.
-It verifies that a newly instantiated vTC is alive and functional and its instantiation
-is correctly supported by the underlying infrastructure when noisy neighbors are present.
-Results of the test are shown in the Dashboard021_
-
-* Generic
-
-In the Generic scenario the Virtual Traffic Classifier is running on a standard Openstack
-setup and traffic is being replayed from a neighbor VM. The traffic sent contains
-various protocols and applications, and the VTC identifies them and exports the data.
-Results of the test are shown in the DashboardVTC.
-
-Detailed test results
----------------------
-
-* TC006
-
-The results for TC006 have been obtained using the following test case
-configuration:
-
-- Context: Dummy
-- Scenario: vtc_throughput
-- Network Techology: SR-IOV
-- vTC Flavor: m1.large
-
-
-* TC007
-
-The results for TC007 have been obtained using the following test case
-configuration:
-
-- Context: Dummy
-- Scenario: vtc_throughput_noisy
-- Network Techology: SR-IOV
-- vTC Flavor: m1.large
-- Number of noisy neighbors: 2
-- Number of cores per neighbor: 2
-- Amount of RAM per neighbor: 1G
-
-
-* TC020
-
-The results for TC020 have been obtained using the following test case
-configuration:
-
-The results listed in previous section have been obtained using the following
-test case configuration:
-
-- Context: Dummy
-- Scenario: vtc_instantiation_validation
-- Network Techology: SR-IOV
-- vTC Flavor: m1.large
-
-
-* TC021
-
-The results listed in previous section have been obtained using the following
-test case configuration:
-
-- Context: Dummy
-- Scenario: vtc_instantiation_validation
-- Network Techology: SR-IOV
-- vTC Flavor: m1.large
-- Number of noisy neighbors: 2
-- Number of cores per neighbor: 2
-- Amount of RAM per neighbor: 1G
-
-
-For all the test cases, the user can specify different values for the parameters.
-
-* Generic
-
-The results listed in the previous section have been obtained, using a
-standard Openstack setup.
-The user can replay his/her own traffic and see the corresponding results.
-
-Rationale for decisions
------------------------
-
-* TC006
-
-The result of the test is a number between 0 and 100 which represents the percentage of bandwidth
-available on the NIC that corresponds to the supported throughput by the vTC.
-
-
-* TC007
-
-The result of the test is a number between 0 and 100 which represents the percentage of bandwidth
-available on the NIC that corresponds to the supported throughput by the vTC.
-
-* TC020
-
-The execution of the test is done as described in the following:
-
-- The vTC is deployed on the OpenStack testbed;
-- Some traffic is sent to the vTC;
-- The vTC changes the header of the packets and sends them back to the packet generator;
-- The packet generator checks that all the packets are received correctly and have been changed
-correctly by the vTC.
-
-The test is declared as PASSED if all the packets are correcly received by the packet generator
-and they have been modified by the virtual Traffic Classifier as required.
-
-
-* TC021
-
-The execution of the test is done as described in the following:
-
-- The vTC is deployed on the OpenStack testbed;
-- The noisy neighbors are deployed as requested by the user;
-- Some traffic is sent to the vTC;
-- The vTC change the header of the packets and sends them back to the packet generator;
-- The packet generator checks that all the packets are received correctly and have been changed
-correctly by the vTC
-
-The test is declared as PASSED if all the packets are correcly received by the packet generator
-and they have been modified by the virtual Traffic Classifier as required.
-
-* Generic
-
-The execution of the test consists of the following actions:
-
-- The vTC is deployed on the OpenStack testbed;
-- The traffic generator VM is deployed on the Openstack Testbed;
-- Traffic data are relevant to the network setup;
-- Traffic is sent to the vTC;
-
-
-
-Conclusions and recommendations
--------------------------------
-
-* TC006
-
-The obtained results show that the virtual Traffic Classifier can support up to 4 Gbps
-(40% of the available bandwidth) correspond to the expected behaviour of the virtual
-Traffic Classifier.
-Using the configuration with SR-IOV and large flavor, the expected throughput should
-generally be in the range between 3 and 4 Gbps.
-
-
-* TC007
-
-These results correspond to the configuration in which the virtual Traffic Classifier uses SR-IOV
-Virtual Functions and the flavor is set to large for the virtual machine.
-The throughput is in the range between 2.5 Gbps and 3.7 Gbps.
-This shows that the effect of 2 noisy neighbors reduces the throughput of
-the service between 10 and 20%.
-Increasing number of neihbours would have a higher impact on the performance.
-
-
-* TC020
-
-The obtained results correspond to the expected behaviour of the virtual Traffic Classifier.
-Using the configuration with SR-IOV and large flavor, the expected result is that the vTC is
-correctly instantiated, it is able to receive and send packets using SR-IOV technology
-and to forward packets back to the packet generator changing the TCP/IP header as required.
-
-
-* TC021
-
-The obtained results correspond to the expected behaviour of the virtual Traffic Classifier.
-Using the configuration with SR-IOV and large flavor, the expected result is that the vTC is
-correctly instantiated, it is able to receive and send packets using SR-IOV technology
-and to forward packets back to the packet generator changing the TCP/IP header as required,
-also in presence of noisy neighbors.
-
-* Generic
-
-The obtained results correspond to the expected behaviour of the virtual Traffic Classifier.
-Using the aforementioned configuration the expected application protocols are identified
-and their traffic statistics are demonstrated in the DashboardVTC, a group of popular
-applications is selected to demonstrate the sound operation of the vTC.
-The demonstrated application protocols are:
-- HTTP
-- Skype
-- Bittorrent
-- Youtube
-- Dropbox
-- Twitter
-- Viber
-- iCloud
diff --git a/docs/requirements.txt b/docs/requirements.txt
new file mode 100644
index 000000000..440843584
--- /dev/null
+++ b/docs/requirements.txt
@@ -0,0 +1,5 @@
+lfdocs-conf
+sphinx_opnfv_theme
+# Uncomment the following line if your project uses Sphinx to document
+# HTTP APIs
+# sphinxcontrib-httpdomain
diff --git a/docs/templates/test_results_template.rst b/docs/templates/test_results_template.rst
index f04b2b2a8..8885588ae 100644
--- a/docs/templates/test_results_template.rst
+++ b/docs/templates/test_results_template.rst
@@ -1,3 +1,18 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+
+..
+ Convention for heading levels in Yardstick documentation:
+
+ ======= Heading 0 (reserved for the title in a document)
+ ------- Heading 1
+ ~~~~~~~ Heading 2
+ +++++++ Heading 3
+ ''''''' Heading 4
+
+ Avoid deeper levels because they do not render well.
+
=====================
Yardstick Test Report
=====================
@@ -46,16 +61,16 @@ TCXXX
on-demand test cases (HA, KVM, Parser)
* Overview of test results
-.. general on metrics collected, number of iterations
+ .. general on metrics collected, number of iterations
* Detailed test results
-.. info on lab, installer, scenario
+ .. info on lab, installer, scenario
* Rationale for decisions
-.. pass/fail
+ .. pass/fail
* Conclusions and recommendations
-.. did the expected behavior occured?
+ .. did the expected behavior occured?
General
=======
diff --git a/docs/testing/developer/devguide/devguide.rst b/docs/testing/developer/devguide/devguide.rst
index dade49b75..2065f6e0d 100755
--- a/docs/testing/developer/devguide/devguide.rst
+++ b/docs/testing/developer/devguide/devguide.rst
@@ -1,16 +1,42 @@
+..
+ Licensed under the Apache License, Version 2.0 (the "License"); you may
+ not use this file except in compliance with the License. You may obtain
+ a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ License for the specific language governing permissions and limitations
+ under the License.
+
+ Convention for heading levels in Yardstick documentation:
+
+ ======= Heading 0 (reserved for the title in a document)
+ ------- Heading 1
+ ~~~~~~~ Heading 2
+ +++++++ Heading 3
+ ''''''' Heading 4
+
+ Avoid deeper levels because they do not render well.
+
Introduction
-=============
+------------
-Yardstick is a project dealing with performance testing. Yardstick produces its own test cases but can also be considered as a framework to support feature project testing.
+Yardstick is a project dealing with performance testing. Yardstick produces
+its own test cases but can also be considered as a framework to support feature
+project testing.
-Yardstick developed a test API that can be used by any OPNFV project. Therefore there are many ways to contribute to Yardstick.
+Yardstick developed a test API that can be used by any OPNFV project. Therefore
+there are many ways to contribute to Yardstick.
You can:
* Develop new test cases
* Review codes
* Develop Yardstick API / framework
-* Develop Yardstick grafana dashboards and Yardstick reporting page
+* Develop Yardstick grafana dashboards and Yardstick reporting page
* Write Yardstick documentation
This developer guide describes how to interact with the Yardstick project.
@@ -19,28 +45,30 @@ part is a list of “How to” to help you to join the Yardstick family whatever
your field of interest is.
Where can I find some help to start?
---------------------------------------
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-.. _`user guide`: http://artifacts.opnfv.org/yardstick/danube/1.0/docs/stesting_user_userguide/index.html
+.. _`user guide`: https://artifacts.opnfv.org/yardstick/docs/testing_user_userguide/index.html
.. _`wiki page`: https://wiki.opnfv.org/display/yardstick/
This guide is made for you. You can have a look at the `user guide`_.
There are also references on documentation, video tutorials, tips in the
-project `wiki page`_. You can also directly contact us by mail with [Yardstick] prefix in the title at opnfv-tech-discuss@lists.opnfv.org or on the IRC chan #opnfv-yardstick.
+project `wiki page`_. You can also directly contact us by mail with
+``#yardstick`` or ``[yardstick]`` prefix in the subject at
+``opnfv-tech-discuss@lists.opnfv.org`` or on the IRC channel ``#opnfv-yardstick``.
Yardstick developer areas
-==========================
+-------------------------
Yardstick framework
---------------------
+~~~~~~~~~~~~~~~~~~~
-Yardstick can be considered as a framework. Yardstick is release as a docker
+Yardstick can be considered as a framework. Yardstick is released as a docker
file, including tools, scripts and a CLI to prepare the environement and run
-tests. It simplifies the integration of external test suites in CI pipeline
-and provide commodity tools to collect and display results.
+tests. It simplifies the integration of external test suites in CI pipelines
+and provides commodity tools to collect and display results.
-Since Danube, test categories also known as tiers have been created to group
+Since Danube, test categories (also known as tiers) have been created to group
similar tests, provide consistant sub-lists and at the end optimize test
duration for CI (see How To section).
@@ -56,44 +84,54 @@ The tiers are:
How Todos?
-===========
+----------
How Yardstick works?
----------------------
+~~~~~~~~~~~~~~~~~~~~
The installation and configuration of the Yardstick is described in the `user guide`_.
How to work with test cases?
-----------------------------
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+Sample Test cases
++++++++++++++++++
-**Sample Test cases**
+Yardstick provides many sample test cases, which are located in the ``samples`` directory of the repository.
-Yardstick provides many sample test cases which are located at "samples" directory of repo.
+Sample test cases are designed with the following goals:
-Sample test cases are designed as following goals:
+1. Helping users better understand Yardstick features (including new features
+   and new test capabilities).
-1. Helping user better understand yardstick features(including new feature and new test capacity).
+2. Helping developers debug a new feature and test case before it is
+   officially released.
-2. Helping developer to debug his new feature and test case before it is offical released.
+3. Helping other developers understand and verify the new patch before the
+ patch is merged.
-3. Helping other developers understand and verify the new patch before the patch merged.
+Developers should also upload their sample test cases when uploading a new
+patch that adds a new Yardstick test case or feature.
-So developers should upload your sample test case as well when they are trying to upload a new patch which is about the yardstick new test case or new feature.
+OPNFV Release Test cases
+++++++++++++++++++++++++
-**OPNFV Release Test cases**
+OPNFV Release test cases are located at ``yardstick/tests/opnfv/test_cases``.
+These test cases are run by OPNFV CI jobs, which means they should be more
+mature than sample test cases.
+OPNFV scenario owners can select related test cases and add them into the test
+suites which represent their scenario.
-OPNFV Release test cases which are located at "tests/opnfv/test_cases" of repo.
-those test cases are runing by OPNFV CI jobs, It means those test cases should be more mature than sample test cases.
-OPNFV scenario owners can select related test cases and add them into the test suites which is represent the scenario.
-
-**Test case Description File**
+Test case Description File
+++++++++++++++++++++++++++
This section will introduce the meaning of the Test case description file.
-we will use ping.yaml as a example to show you how to understand the test case description file.
-In this Yaml file, you can easily find it consists of two sections. One is “Scenarios”, the other is “Context”.::
+We will use ``ping.yaml`` as an example to show you how to understand the test
+case description file.
+This ``yaml`` file consists of two sections. One is ``scenarios``, the other
+is ``context``::
---
# Sample benchmark task config file
@@ -150,18 +188,32 @@ In this Yaml file, you can easily find it consists of two sections. One is “Sc
{% endif %}
-"Contexts" section is the description of pre-condition of testing. As ping.yaml shown, you can configure the image, flavor , name ,affinity and network of Test VM(servers), with this section, you will get a pre-condition env for Testing.
-Yardstick will automatic setup the stack which are described in this section.
-In fact, yardstick use convert this section to heat template and setup the VMs by heat-client (Meanwhile, yardstick can support to convert this section to Kubernetes template to setup containers).
-
-Two Test VMs(athena and ares) are configured by keyword "servers".
-"flavor" will determine how many vCPU, how much memory for test VMs.
-As "yardstick-flavor" is a basic flavor which will be automatically created when you run command "yardstick env prepare". "yardstick-flavor" is "1 vCPU 1G RAM,3G Disk".
-"image" is the image name of test VMs. if you use cirros.3.5.0, you need fill the username of this image into "user". the "policy" of placement of Test VMs have two values (affinity and availability).
-"availability" means anti-affinity. In "network" section, you can configure which provide network and physical_network you want Test VMs use.
-you may need to configure segmentation_id when your network is vlan.
-
-Moreover, you can configure your specific flavor as below, yardstick will setup the stack for you. ::
+The ``contexts`` section describes the pre-conditions of the test. As
+``ping.yaml`` shows, you can configure the image, flavor, name, affinity and
+network of the Test VMs (servers); with this section you get a pre-configured
+environment for testing.
+Yardstick will automatically set up the stack described in this section.
+Yardstick converts this section to a Heat template and sets up the VMs with
+heat-client (Yardstick can also convert this section to a Kubernetes template
+to set up containers).
+
+In the example above, two Test VMs (athena and ares) are configured by the
+keyword ``servers``.
+``flavor`` determines how many vCPUs and how much memory the test VMs get.
+``yardstick-flavor`` is a basic flavor which is automatically created when you
+run the command ``yardstick env prepare``; it provides
+``1 vCPU, 1G RAM, 3G Disk``.
+``image`` is the image name of the test VMs. If you use ``cirros.3.5.0``, you
+need to fill the username of this image into ``user``.
+The ``policy`` for placement of the Test VMs has two values (``affinity`` and
+``availability``). ``availability`` means anti-affinity.
+In the ``network`` section, you can configure which ``provider`` network and
+``physical_network`` you want the Test VMs to use.
+You may need to configure ``segmentation_id`` when your network is a VLAN
+network.
+
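+A hypothetical provider network entry could therefore look like the sketch
+below (the network name and all values are placeholders, not taken from
+``ping.yaml``)::
+
+  networks:
+    test-net:
+      cidr: '10.0.1.0/24'
+      provider: vlan
+      physical_network: physnet1
+      segmentation_id: 1000
+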
+Moreover, you can configure your own specific flavor as below; Yardstick will
+set up the stack for you. ::
flavor:
name: yardstick-new-flavor
@@ -170,7 +222,8 @@ Moreover, you can configure your specific flavor as below, yardstick will setup
disk: 2
-Besides default heat stack, yardstick also allow you to setup other two types stack. they are "Node" and "Kubernetes". ::
+Besides the default ``Heat`` context, Yardstick also allows you to set up two
+other types of context: ``Node`` and ``Kubernetes``. ::
context:
type: Kubernetes
@@ -183,48 +236,64 @@ and ::
name: LF
+The ``scenarios`` section describes the testing steps; you can orchestrate
+complex testing steps through scenarios.
-"Scenarios" section is the description of testing step, you can orchestrate the complex testing step through orchestrate scenarios.
+Each scenario will do one testing step.
+In one scenario, you can configure the type of scenario (operation), ``runner``
+type and ``sla`` of the scenario.
-Each scenario will do one testing step, In one scenario, you can configure the type of scenario(operation), runner type and SLA of the scenario.
+For TC002, we only have one step, which is Ping from the host VM to the target
+VM. In this step, we also have some detailed operations implemented (such as
+SSH to the VM, ping from VM1 to VM2, get the latency, verify the SLA, report
+the result).
-For TC002, We only have one step , that is Ping from host VM to target VM. In this step, we also have some detail operation implement ( such as ssh to VM, ping from VM1 to VM2. Get the latency, verify the SLA, report the result).
+If you want to see the implementation details, you can check the scenario's
+Python file. For the Ping scenario, you can find it in the Yardstick repo at
+``yardstick/yardstick/benchmark/scenarios/networking/ping.py``.
-If you want to get this detail implement , you can check with the scenario.py file. For Ping scenario, you can find it in yardstick repo ( yardstick / yardstick / benchmark / scenarios / networking / ping.py)
+After you select the type of scenario (such as Ping), you will select one type
+of ``runner``; there are 4 types of runner. ``Iteration`` and ``Duration`` are
+the most commonly used, and the default is ``Iteration``.
-after you select the type of scenario( such as Ping), you will select one type of runner, there are 4 types of runner. Usually, we use the "Iteration" and "Duration". and Default is "Iteration".
-For Iteration, you can specify the iteration number and interval of iteration. ::
+For ``Iteration``, you can specify the iteration number and interval of iteration. ::
runner:
type: Iteration
iterations: 10
interval: 1
-That means yardstick will iterate the 10 times of Ping test and the interval of each iteration is one second.
+That means Yardstick will repeat the Ping test 10 times and the interval of
+each iteration is one second.
-For Duration, you can specify the duration of this scenario and the interval of each ping test. ::
+For ``Duration``, you can specify the duration of this scenario and the
+interval of each ping test. ::
runner:
type: Duration
duration: 60
interval: 10
-That means yardstick will run the ping test as loop until the total time of this scenario reach the 60s and the interval of each loop is ten seconds.
+That means Yardstick will run the ping test in a loop until the total time of
+this scenario reaches 60s; the interval of each loop is ten seconds.
-SLA is the criterion of this scenario. that depends on the scenario. different scenario can have different SLA metric.
+The SLA is the pass/fail criterion of the scenario; it depends on the scenario
+and different scenarios can have different SLA metrics.
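+
+For the Ping scenario, for example, the SLA could look similar to the sketch
+below, where ``max_rtt`` is the maximum allowed round-trip time in ms and
+``action`` tells Yardstick what to do when the SLA is violated (see
+``ping.yaml`` for the exact values used)::
+
+  sla:
+    max_rtt: 10
+    action: monitor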
-**How to write a new test case**
+How to write a new test case
+++++++++++++++++++++++++++++
-Yardstick already provide a library of testing step. that means yardstick provide lots of type scenario.
+Yardstick already provides a library of testing steps (i.e. different types of
+scenario).
-Basiclly, What you need to do is to orchestrate the scenario from the library.
-
-Here, We will show two cases. One is how to write a simple test case, the other is how to write a quite complex test case.
+Basically, what you need to do is to orchestrate the scenario from the library.
+Here, we will show two cases. One is how to write a simple test case, the other
+is how to write a quite complex test case.
Write a new simple test case
+''''''''''''''''''''''''''''
First, you can image a basic test case description as below.
@@ -314,7 +383,7 @@ First, you can image a basic test case description as below.
TODO
How can I contribute to Yardstick?
------------------------------------
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
If you are already a contributor of any OPNFV project, you can contribute to
Yardstick. If you are totally new to OPNFV, you must first create your Linux
@@ -329,16 +398,17 @@ We distinguish 2 levels of contributors:
Yardstick commitors are promoted by the Yardstick contributors.
Gerrit & JIRA introduction
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+++++++++++++++++++++++++++
.. _Gerrit: https://www.gerritcodereview.com/
-.. _`OPNFV Gerrit`: http://gerrit.opnfv.org/
+.. _`OPNFV Gerrit`: http://gerrit.opnfv.org/gerrit
.. _link: https://identity.linuxfoundation.org/
.. _JIRA: https://jira.opnfv.org/secure/Dashboard.jspa
OPNFV uses Gerrit_ for web based code review and repository management for the
Git Version Control System. You can access `OPNFV Gerrit`_. Please note that
-you need to have Linux Foundation ID in order to use OPNFV Gerrit. You can get one from this link_.
+you need to have Linux Foundation ID in order to use OPNFV Gerrit. You can get
+one from this link_.
OPNFV uses JIRA_ for issue management. An important principle of change
management is to have two-way trace-ability between issue management
@@ -350,14 +420,16 @@ If you want to contribute to Yardstick, you can pick a issue from Yardstick's
JIRA dashboard or you can create you own issue and submit it to JIRA.
Install Git and Git-reviews
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
++++++++++++++++++++++++++++
Installing and configuring Git and Git-Review is necessary in order to submit
-code to Gerrit. The `Getting to the code <https://wiki.opnfv.org/display/DEV/Developer+Getting+Started>`_ page will provide you with some help for that.
+code to Gerrit. The
+`Getting to the code <https://wiki.opnfv.org/display/DEV/Developer+Getting+Started>`_
+page will provide you with some help for that.
Verify your patch locally before submitting
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
++++++++++++++++++++++++++++++++++++++++++++
Once you finish a patch, you can submit it to Gerrit for code review. A
developer sends a new patch to Gerrit will trigger patch verify job on Jenkins
@@ -366,7 +438,8 @@ code coverage test. Before you submit your patch, it is recommended to run the
patch verification in your local environment first.
Open a terminal window and set the project's directory to the working
-directory using the ``cd`` command. Assume that ``YARDSTICK_REPO_DIR`` is the path to the Yardstick project folder on your computer::
+directory using the ``cd`` command. Assume that ``YARDSTICK_REPO_DIR`` is the
+path to the Yardstick project folder on your computer::
cd $YARDSTICK_REPO_DIR
@@ -376,8 +449,12 @@ Verify your patch::
It is used in CI but also by the CLI.
+For more details on ``tox`` and tests, please refer to the `Running tests`_
+and `working with tox`_ sections below, which describe the different available
+environments.
+
Submit the code with Git
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+++++++++++++++++++++++++
Tell Git which files you would like to take into account for the next commit.
This is called 'staging' the files, by placing them into the staging area,
@@ -408,7 +485,7 @@ Git repository::
JIRA: YARDSTICK-XXX
-.. _`this document`: http://chris.beams.io/posts/git-commit/
+.. _`this document`: https://chris.beams.io/posts/git-commit/
The message that is required for the commit should follow a specific set of
rules. This practice allows to standardize the description messages attached
@@ -417,7 +494,7 @@ to the commits, and eventually navigate among the latter more easily.
`This document`_ happened to be very clear and useful to get started with that.
Push the code to Gerrit for review
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+++++++++++++++++++++++++++++++++++
Now that the code has been comitted into your local Git repository the
following step is to push it online to Gerrit for it to be reviewed. The
@@ -432,26 +509,27 @@ Yardstick committers and contributors to review your codes.
:width: 800px
:alt: Gerrit for code review
-You can find Yardstick people info `here <https://wiki.opnfv.org/display/yardstick/People>`_.
+You can find a list of Yardstick people
+`here <https://wiki.opnfv.org/display/yardstick/Yardstick+People>`_, or use
+the ``yardstick-reviewers`` and ``yardstick-committers`` groups in Gerrit.
Modify the code under review in Gerrit
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+++++++++++++++++++++++++++++++++++++++
At the same time the code is being reviewed in Gerrit, you may need to edit it
to make some changes and then send it back for review. The following steps go
through the procedure.
Once you have modified/edited your code files under your IDE, you will have to
-stage them. The 'status' command is very helpful at this point as it provides
-an overview of Git's current state::
+stage them. The ``git status`` command is very helpful at this point as it
+provides an overview of Git's current state::
git status
-The output of the command provides us with the files that have been modified
-after the latest commit.
+This command lists the files that have been modified since the last commit.
You can now stage the files that have been modified as part of the Gerrit code
-review edition/modification/improvement using ``git add`` command. It is now
+review addition/modification/improvement using ``git add`` command. It is now
time to commit the newly modified files, but the objective here is not to
create a new commit, we simply want to inject the new changes into the
previous commit. You can achieve that with the '--amend' option on the
@@ -466,9 +544,172 @@ The final step consists in pushing the newly modified commit to Gerrit::
git review
+Backporting changes to stable branches
+--------------------------------------
+During the release cycle, when master and the ``stable/<release>`` branch have
+diverged, it may be necessary to backport (cherry-pick) changes to the
+``stable/<release>`` branch once they have merged to master.
+These changes should be identified by the committers reviewing the patch.
+Changes should be backported **as soon as possible** after merging of the
+original code.
+
+.. note::
+  Besides the commit and review process below, the JIRA ticket must be updated
+  to add dual release versions and indicate that the change is to be
+  backported.
+
+The process for backporting is as follows:
+
+* Committer A merges a change to master (process for normal changes).
+* Committer A cherry-picks the change to ``stable/<release>`` branch (if the
+ bug has been identified for backporting).
+* The original author should review the code and verify that it still works
+ (and give a ``+1``).
+* Committer B reviews the change, gives a ``+2`` and merges to
+ ``stable/<release>``.
+
+A backported change needs a ``+1`` and a ``+2`` from a committer who didn’t
+propose the change (i.e. minimum 3 people involved).
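+
+A minimal command sequence for the cherry-pick itself could look as follows
+(assuming the ``gerrit`` remote created by ``git review -s`` and an
+illustrative local branch name; the Gerrit web UI can be used instead)::
+
+  git fetch gerrit
+  git checkout -b backport-example gerrit/stable/<release>
+  git cherry-pick -x <commit-sha>
+  git review stable/<release>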
+
+Development guidelines
+----------------------
+This section provides guidelines and best practices for feature development
+and bug fixing in Yardstick.
+
+In general, bug fixes should be submitted as a single patch.
+
+When developing larger features, all commits on the local topic branch can be
+submitted together, by running ``git review`` on the tip of the branch. This
+creates a chain of related patches in gerrit.
+
+Each commit should contain one logical change and the author should aim for no
+more than 300 lines of code per commit. This helps to make the changes easier
+to review.
+
+Each feature should have the following:
+
+* Feature/bug fix code
+* Unit tests (both positive and negative)
+* Functional tests (optional)
+* Sample testcases (if applicable)
+* Documentation
+* Update to release notes
+
+Coding style
+~~~~~~~~~~~~
+.. _`OpenStack Style Guidelines`: https://docs.openstack.org/hacking/latest/user/hacking.html
+.. _`OPNFV coding guidelines`: https://wiki.opnfv.org/display/DEV/Contribution+Guidelines
+
+Please follow the `OpenStack Style Guidelines`_ for code contributions (the
+section on Internationalization (i18n) Strings is not applicable).
+
+When writing commit message, the `OPNFV coding guidelines`_ on git commit
+message style should also be used.
+
+Running tests
+~~~~~~~~~~~~~
+Once your patch has been submitted, a number of tests will be run by Jenkins
+CI to verify the patch. Before submitting your patch, you should run these
+tests locally. You can do this using ``tox``, which has a number of different
+test environments defined in ``tox.ini``.
+Calling ``tox`` without any additional arguments runs the default set of
+tests (unit tests, functional tests, coverage and pylint).
+
+If some tests are failing, you can save time and select test environments
+individually, by passing one or more of the following command-line options to
+``tox``:
+
+* ``-e py27``: Unit tests using Python 2.7
+* ``-e py3``: Unit tests using Python 3
+* ``-e pep8``: Linter and style checks on updated files
+* ``-e functional``: Functional tests using Python 2.7
+* ``-e functional-py3``: Functional tests using Python 3
+* ``-e coverage``: Code coverage checks
+
+.. note:: You need to stage your changes prior to running coverage for those
+ changes to be checked.
+
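+For example, to run only the Python 3 unit tests and the style checks on your
+patch, you could invoke::
+
+  tox -e py3,pep8
+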
+In addition to the tests run by Jenkins (listed above), there are a number of
+other test environments defined.
+
+* ``-e pep8-full``: Linter and style checks are run on the whole repo (not
+ just on updated files)
+* ``-e os-requirements``: Check that the requirements are compatible with
+ OpenStack requirements.
+
+Working with tox
+++++++++++++++++
+.. _virtualenv: https://virtualenv.pypa.io/en/stable/
+
+``tox`` uses `virtualenv`_ to create isolated Python environments to run the
+tests in. The test environments are located at
+``.tox/<environment_name>`` e.g. ``.tox/py27``.
+
+If requirements are changed, you will need to recreate the tox test
+environment to make sure the new requirements are installed. This is done by
+passing the additional ``-r`` command-line option to ``tox``::
+
+ tox -r -e ...
+
+This can also be achieved by deleting the test environments manually before
+running ``tox``::
+
+ rm -rf .tox/<environment_name>
+ rm -rf .tox/py27
+
+Writing unit tests
+~~~~~~~~~~~~~~~~~~
+For each change submitted, a set of unit tests should be submitted, which
+should include both positive and negative testing.
+
+In order to help identify which tests are needed, follow the guidelines below.
+
+* In general, there should be a separate test for each branching point, return
+ value and input set.
+* Negative tests should be written to make sure exceptions are raised and/or
+ handled appropriately.
+
+The following convention should be used for naming tests::
+
+ test_<method_name>_<some_comment>
+
+The comment gives more information on the nature of the test, the side effect
+being checked, or the parameter being modified::
+
+ test_my_method_runtime_error
+ test_my_method_invalid_credentials
+ test_my_method_param1_none
+
+Mocking
++++++++
+The ``mock`` library is used for unit testing to stub out external libraries.
+
+The following conventions are used in Yardstick:
+
+* Use ``mock.patch.object`` instead of ``mock.patch``.
+
+* When naming mocked classes/functions, use ``mock_<class_and_function_name>``
+ e.g. ``mock_subprocess_call``
+
+* Avoid decorating classes with mocks. Apply the mocking in ``setUp()``::
+
+ @mock.patch.object(ssh, 'SSH')
+ class MyClassTestCase(unittest.TestCase):
+
+ should be::
+
+ class MyClassTestCase(unittest.TestCase):
+ def setUp(self):
+ self._mock_ssh = mock.patch.object(ssh, 'SSH')
+ self.mock_ssh = self._mock_ssh.start()
+
+ self.addCleanup(self._stop_mocks)
+
+ def _stop_mocks(self):
+ self._mock_ssh.stop()
Plugins
-==========
+-------
-For information about Yardstick plugins, refer to the chapter **Installing a plug-in into Yardstick** in the `user guide`_.
+For information about Yardstick plugins, refer to the chapter
+**Installing a plug-in into Yardstick** in the `user guide`_.
diff --git a/docs/testing/developer/devguide/devguide_nsb_prox.rst b/docs/testing/developer/devguide/devguide_nsb_prox.rst
new file mode 100755
index 000000000..44a216b75
--- /dev/null
+++ b/docs/testing/developer/devguide/devguide_nsb_prox.rst
@@ -0,0 +1,1480 @@
+Introduction
+=============
+
+This document describes the steps to create a new NSB PROX test based on
+existing PROX functionalities. NSB PROX provides a simple approximation
+of an operation and can be used to develop best practices and TCO models
+for Telco customers, investigate the impact of new Intel compute,
+network and storage technologies, characterize performance, and develop
+optimal system architectures and configurations.
+
+NSB PROX supports baremetal, OpenStack and standalone configurations.
+
+.. contents::
+
+Prerequisites
+=============
+
+In order to integrate PROX tests into NSB, the following prerequisites are
+required.
+
+.. _`dpdk wiki page`: https://www.dpdk.org/
+.. _`yardstick wiki page`: https://wiki.opnfv.org/display/yardstick/
+.. _`Prox documentation`: https://01.org/intel-data-plane-performance-demonstrators/documentation/prox-documentation
+.. _`openstack wiki page`: https://wiki.openstack.org/wiki/Main_Page
+.. _`grafana getting started`: http://docs.grafana.org/guides/gettingstarted/
+.. _`opnfv grafana dashboard`: https://wiki.opnfv.org/display/yardstick/How+to+work+with+grafana+dashboard
+.. _`Prox command line`: https://01.org/intel-data-plane-performance-demonstrators/documentation/prox-documentation#Command_line_options
+.. _`grafana deployment`: https://wiki.opnfv.org/display/yardstick/How+to+deploy+InfluxDB+and+Grafana+locally
+.. _`Prox options`: https://01.org/intel-data-plane-performance-demonstrators/documentation/prox-documentation#.5Beal_options.5D
+.. _`NSB Installation`: http://artifacts.opnfv.org/yardstick/docs/userguide/index.html#document-09-installation
+
+* A working knowledge of Yardstick. See `yardstick wiki page`_.
+* A working knowledge of PROX. See `Prox documentation`_.
+* Knowledge of Openstack. See `openstack wiki page`_.
+* Knowledge of how to use Grafana. See `grafana getting started`_.
+* How to Deploy InfluxDB & Grafana. See `grafana deployment`_.
+* How to use Grafana in OPNFV/Yardstick. See `opnfv grafana dashboard`_.
+* How to install NSB. See `NSB Installation`_
+
+Sample Prox Test Hardware Architecture
+======================================
+
+The following is a diagram of a sample NSB PROX Hardware Architecture
+for both NSB PROX on Bare metal and on Openstack.
+
+In this example, when running Yardstick on baremetal, Yardstick will
+run on the deployment node, the traffic generator will run on the deployment
+node and the system under test (SUT) will run on the Controller Node.
+
+
+.. image:: images/PROX_Hardware_Arch.png
+ :width: 800px
+ :alt: Sample NSB PROX Hardware Architecture
+
+Prox Test Architecture
+======================
+
+In order to create a new test, one must understand the architecture of
+the test.
+
+A NSB Prox test architecture is composed of:
+
+* A traffic generator. This provides blocks of data on 1 or more ports
+ to the SUT.
+ The traffic generator also consumes the result packets from the system
+ under test.
+* A SUT. This consumes the packets generated by the packet
+  generator, applies one or more tasks to the packets and returns the
+  modified packets to the traffic generator.
+
+ This is an example of a sample NSB PROX test architecture.
+
+.. image:: images/PROX_Software_Arch.png
+ :width: 800px
+ :alt: NSB PROX test Architecture
+
+This diagram is of a sample NSB PROX test application.
+
+* Traffic Generator
+
+  * Generator Tasks - Composed of 1 or more tasks (It is possible to
+    have multiple tasks sending packets to the same port number. See Tasks Ai and Aii
+ plus Di and Dii)
+
+ * Task Ai - Generates Packets on Port 0 of Traffic Generator
+ and send to Port 0 of SUT Port 0
+ * Task Aii - Generates Packets on Port 0 of Traffic Generator
+ and send to Port 0 of SUT Port 0
+ * Task B - Generates Packets on Port 1 of Traffic Generator
+ and send to Port 1 of SUT Port 1
+ * Task C - Generates Packets on Port 2 of Traffic Generator
+ and send to Port 2 of SUT Port 2
+ * Task Di - Generates Packets on Port 3 of Traffic Generator
+ and send to Port 3 of SUT Port 3
+ * Task Dii - Generates Packets on Port 0 of Traffic Generator
+ and send to Port 0 of SUT Port 0
+
+  * Verifier Tasks - Composed of 1 or more tasks which receive
+    packets from the SUT
+
+ * Task E - Receives packets on Port 0 of Traffic Generator sent
+ from Port 0 of SUT Port 0
+ * Task F - Receives packets on Port 1 of Traffic Generator sent
+ from Port 1 of SUT Port 1
+ * Task G - Receives packets on Port 2 of Traffic Generator sent
+ from Port 2 of SUT Port 2
+ * Task H - Receives packets on Port 3 of Traffic Generator sent
+ from Port 3 of SUT Port 3
+
+* SUT
+
+  * Receiver Tasks - Receive packets from the generator - Composed of 1 or
+    more tasks which consume the packets sent from the Traffic Generator
+
+ * Task A - Receives Packets on Port 0 of System-Under-Test from
+ Traffic Generator Port 0, and forwards packets to Task E
+ * Task B - Receives Packets on Port 1 of System-Under-Test from
+ Traffic Generator Port 1, and forwards packets to Task E
+ * Task C - Receives Packets on Port 2 of System-Under-Test from
+ Traffic Generator Port 2, and forwards packets to Task E
+ * Task D - Receives Packets on Port 3 of System-Under-Test from
+ Traffic Generator Port 3, and forwards packets to Task E
+
+  * Processing Tasks - Composed of multiple tasks in series which carry
+    out some processing on received packets before forwarding them to the
+    next task.
+
+    * Task E - This receives packets from the Receiver Tasks,
+      carries out some operation on the data and forwards the result
+      packets to the next task in the sequence - Task F
+    * Task F - This receives packets from the previous Task - Task
+      E, carries out some operation on the data and forwards the result
+      packets to the next task in the sequence - Task G
+ * Task G - This receives packets from the previous Task - Task F
+ and distributes the result packages to the Transmitter tasks
+
+  * Transmitter Tasks - Composed of 1 or more tasks which send the
+ processed packets back to the Traffic Generator
+
+ * Task H - Receives Packets from Task G of System-Under-Test and
+ sends packets to Traffic Generator Port 0
+ * Task I - Receives Packets from Task G of System-Under-Test and
+ sends packets to Traffic Generator Port 1
+ * Task J - Receives Packets from Task G of System-Under-Test and
+ sends packets to Traffic Generator Port 2
+ * Task K - Receives Packets From Task G of System-Under-Test and
+ sends packets to Traffic Generator Port 3
+
+NSB Prox Test
+=============
+
+An NSB Prox test is composed of the following components:
+
+* Test Description File. Usually called
+ ``tc_prox_<context>_<test>-<ports>.yaml`` where
+
+ * <context> is either ``baremetal`` or ``heat_context``
+  * <test> is a one or two word description of the test.
+ * <ports> is the number of ports used
+
+ Example tests ``tc_prox_baremetal_l2fwd-2.yaml`` or
+  ``tc_prox_heat_context_vpe-4.yaml``. This file describes the components
+  of the test: in the case of OpenStack, the network and server
+  descriptions; in the case of baremetal, the location of the hardware
+  description. It also contains the name of the Traffic Generator,
+ the SUT config file and the traffic profile description, all described below.
+ See `Test Description File`_
+
+* Traffic Profile file. Example ``prox_binsearch.yaml``. This describes the
+ packet size, tolerated loss, initial line rate to start traffic at, test
+  interval, etc. See `Traffic Profile File`_
+
+* Traffic Generator Config file. Usually called ``gen_<test>-<ports>.cfg``.
+
+ This describes the activity of the traffic generator
+
+ * What each core of the traffic generator does,
+ * The packet of data sent by a core on a port of the traffic generator
+ to the system under test
+ * What core is used to wait on what port for data from the system
+ under test.
+
+ Example traffic generator config file ``gen_l2fwd-4.cfg``
+ See `Traffic Generator Config file`_
+
+* SUT Config file. Usually called ``handle_<test>-<ports>.cfg``.
+
+ This describes the activity of the SUTs
+
+  * What each core of the SUT does,
+  * What cores receive packets from what ports
+ * What cores perform operations on the packets and pass the packets onto
+ another core
+  * What cores receive packets from what cores and transmit the packets on
+ the ports to the Traffic Verifier tasks of the Traffic Generator.
+
+ Example traffic generator config file ``handle_l2fwd-4.cfg``
+ See `SUT Config File`_
+
+* NSB PROX Baremetal Configuration file. Usually called
+ ``prox-baremetal-<ports>.yaml``
+
+ * <ports> is the number of ports used
+
+ This is required for baremetal only. This describes hardware, NICs,
+ IP addresses, Network drivers, usernames and passwords.
+ See `Baremetal Configuration File`_
+
+* Grafana Dashboard. Usually called
+ ``Prox_<context>_<test>-<port>-<DateAndTime>.json`` where
+
+  * <context> is ``BM``, ``heat``, ``ovs_dpdk`` or ``sriov``
+  * <test> is a one or two word description of the test.
+  * <port> is the number of ports used, expressed as ``2Port`` or ``4Port``
+ * <DateAndTime> is the Date and Time expressed as a string.
+
+ Example grafana dashboard ``Prox_BM_L2FWD-4Port-1507804504588.json``
+
+Other files may be required. These are test specific files and will be
+covered later.
+
+
+Test Description File
+---------------------
+
+Here we will discuss the test description files for
+baremetal, OpenStack and standalone deployments.
+
+Test Description File for Baremetal
+-----------------------------------
+
+This section will introduce the meaning of the Test case description
+file. We will use ``tc_prox_baremetal_l2fwd-2.yaml`` as an example to
+show you how to understand the test description file.
+
+.. image:: images/PROX_Test_BM_Script.png
+ :width: 800px
+ :alt: NSB PROX Test Description File
+
+Now let's examine the components of the file in detail
+
+1. ``traffic_profile`` - This specifies the traffic profile for the
+ test. In this case ``prox_binsearch.yaml`` is used. See
+ `Traffic Profile File`_
+
+2. ``topology`` - This is either ``prox-tg-topology-1.yaml`` or
+ ``prox-tg-topology-2.yaml`` or ``prox-tg-topology-4.yaml``
+ depending on number of ports required.
+
+3. ``nodes`` - This names the Traffic Generator and the System
+ under Test. Does not need to change.
+
+4. ``interface_speed_gbps`` - This is an optional parameter. If not present
+ the system defaults to 10Gbps. This defines the speed of the interfaces.
+
+5. ``collectd`` - (Optional) This specifies that we want to collect NFVI
+   statistics like CPU utilization.
+
+6. ``prox_path`` - Location of the Prox executable on the traffic
+ generator (Either baremetal or Openstack Virtual Machine)
+
+7. ``prox_config`` - This is the ``SUT Config File``.
+ In this case it is ``handle_l2fwd-2.cfg``
+
+ A number of additional parameters can be added. This example
+ is for VPE::
+
+ options:
+ interface_speed_gbps: 10
+
+ traffic_config:
+ tolerated_loss: 0.01
+ test_precision: 0.01
+ packet_sizes: [64]
+ duration: 30
+ lower_bound: 0.0
+ upper_bound: 100.0
+
+ vnf__0:
+ prox_path: /opt/nsb_bin/prox
+ prox_config: ``configs/handle_vpe-4.cfg``
+ prox_args:
+ ``-t``: ````
+ prox_files:
+ ``configs/vpe_ipv4.lua`` : ````
+ ``configs/vpe_dscp.lua`` : ````
+ ``configs/vpe_cpe_table.lua`` : ````
+ ``configs/vpe_user_table.lua`` : ````
+ ``configs/vpe_rules.lua`` : ````
+ prox_generate_parameter: True
+
+ ``interface_speed_gbps`` - this specifies the speed of the interface
+   in gigabits per second. This is used to calculate pps (packets per second).
+ If the interfaces are of different speeds, then this specifies the speed
+ of the slowest interface. This parameter is optional. If omitted the
+ interface speed defaults to 10Gbps.
+
+   ``traffic_config`` - This allows the values here to override the values
+   in the traffic_profile file, e.g. "prox_binsearch.yaml". Values provided
+   here override values provided in the "traffic_profile" section of the
+   traffic_profile file. Some, all or none of the values can be provided here.
+
+   The values describe the packet size, tolerated loss, initial line rate
+   to start traffic at, test interval, etc. See `Traffic Profile File`_
+
+   ``prox_files`` - this specifies that a number of additional files
+   need to be provided for the test to run correctly. These files
+   could provide routing information, hashing information or a
+   hashing algorithm and IP/MAC information.
+
+   ``prox_generate_parameter`` - this specifies that the NSB application
+   is required to provide information to NSB PROX in the form
+   of a file called ``parameters.lua``, which contains information
+   retrieved from either the hardware or the OpenStack configuration.
+
+8. ``prox_args`` - this specifies the command line arguments to start
+ prox. See `prox command line`_.
+
+9. ``prox_config`` - This specifies the Traffic Generator config file.
+
+10. ``runner`` - This is set to ``ProxDuration`` - This specifies that the
+ test runs for a set duration. Other runner types are available
+    but it is recommended to use ``ProxDuration``. The following parameters
+    are supported:
+
+ ``interval`` - (optional) - This specifies the sampling interval.
+ Default is 1 sec
+
+ ``sampled`` - (optional) - This specifies if sampling information is
+ required. Default ``no``
+
+ ``duration`` - This is the length of the test in seconds. Default
+ is 60 seconds.
+
+ ``confirmation`` - This specifies the number of confirmation retests to
+ be made before deciding to increase or decrease line speed. Default 0.
+
+11. ``context`` - This is ``context`` for a 2 port Baremetal configuration.
+
+ If a 4 port configuration was required then file
+ ``prox-baremetal-4.yaml`` would be used. This is the NSB Prox
+ baremetal configuration file.
+
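+Putting the fields described above together, an abridged outline of a 2 port
+baremetal test description file could look roughly as follows (key names and
+values are illustrative only; refer to the actual
+``tc_prox_baremetal_l2fwd-2.yaml`` in the repository for the authoritative
+content)::
+
+  schema: yardstick:task:0.1
+  scenarios:
+  - type: NSPerf
+    traffic_profile: ../../traffic_profiles/prox_binsearch.yaml
+    topology: prox-tg-topology-2.yaml
+    nodes:
+      tg__0: tg_0.yardstick
+      vnf__0: vnf_0.yardstick
+    options:
+      interface_speed_gbps: 10
+      vnf__0:
+        prox_path: /opt/nsb_bin/prox
+        prox_config: "configs/handle_l2fwd-2.cfg"
+      tg__0:
+        prox_path: /opt/nsb_bin/prox
+        prox_config: "configs/gen_l2fwd-2.cfg"
+    runner:
+      type: ProxDuration
+      interval: 1
+      duration: 600
+  context:
+    type: Node
+    name: yardstick
+    file: prox-baremetal-2.yaml
+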
+Traffic Profile File
+--------------------
+
+This describes the details of the traffic flow. In this case
+``prox_binsearch.yaml`` is used.
+
+.. image:: images/PROX_Traffic_profile.png
+ :width: 800px
+ :alt: NSB PROX Traffic Profile
+
+
+1. ``name`` - The name of the traffic profile. This name should match the
+ name specified in the ``traffic_profile`` field in the Test
+ Description File.
+
+2. ``traffic_type`` - This specifies the type of traffic pattern generated.
+   This name matches the class name of the traffic profile. See::
+
+ network_services/traffic_profile/prox_binsearch.py class ProxBinSearchProfile(ProxProfile)
+
+ In this case it lowers the traffic rate until the number of packets
+ sent is equal to the number of packets received (plus a
+ tolerated loss). Once it achieves this it increases the traffic
+ rate in order to find the highest rate with no traffic loss.
+
+ Custom traffic types can be created by creating a new traffic profile class.
+
+3. ``tolerated_loss`` - This specifies the percentage of packets that
+   can be lost/dropped before we declare failure. The test passes when the
+   number of packets received by the Traffic Generator plus the tolerated
+   loss is greater than or equal to the number of packets transmitted by
+   the Traffic Generator.
+
+4. ``test_precision`` - This specifies the precision of the test
+ results. For some tests the success criteria may never be
+ achieved because the test precision may be greater than the
+ successful throughput. For finer results increase the precision
+ by making this value smaller.
+
+5. ``packet_sizes`` - This specifies the range of packets size this
+ test is run for.
+
+6. ``duration`` - This specifies the sample duration that the test
+ uses to check for success or failure.
+
+7. ``lower_bound`` - This specifies the test initial lower bound sample rate.
+ On success this value is increased.
+
+8. ``upper_bound`` - This specifies the test initial upper bound sample rate.
+ On success this value is decreased.
+
+Other traffic profiles exist, e.g. ``prox_ACL.yaml``, which does not
+compare what is received with what is transmitted. It just
+sends packets at the maximum rate.
+
+It is possible to create custom traffic profiles by creating a new
+file in the same folder as ``prox_binsearch.yaml``.
+See ``prox_vpe.yaml`` as an example::
+
+ schema: ``nsb:traffic_profile:0.1``
+
+ name: prox_vpe
+ description: Prox vPE traffic profile
+
+ traffic_profile:
+ traffic_type: ProxBinSearchProfile
+ tolerated_loss: 100.0 #0.001
+ test_precision: 0.01
+ # The minimum size of the Ethernet frame for the vPE test is 68 bytes.
+ packet_sizes: [68]
+ duration: 5
+ lower_bound: 0.0
+ upper_bound: 100.0
+
+Test Description File for Openstack
+-----------------------------------
+
+We will use ``tc_prox_heat_context_l2fwd-2.yaml`` as an example to show
+you how to understand the test description file.
+
+ .. image:: images/PROX_Test_HEAT_Script1.png
+ :width: 800px
+ :alt: NSB PROX Test Description File - Part 1
+
+
+ .. image:: images/PROX_Test_HEAT_Script2.png
+ :width: 800px
+ :alt: NSB PROX Test Description File - Part 2
+
+Now let's examine the components of the file in detail.
+
+Sections 1 to 9 are exactly the same in Baremetal and in Heat. Section
+``10`` is replaced with sections A to F. Section 10 was for a baremetal
+configuration file. This has no place in a heat configuration.
+
+A. ``image`` - yardstick-samplevnfs. This is the name of the image
+ created during the installation of NSB. This is fixed.
+
+B. ``flavor`` - The flavor is created dynamically. However we could
+ use an already existing flavor if required. In that case the
+ flavor would be named::
+
+ flavor: yardstick-flavor
+
+C. ``extra_specs`` - This allows us to specify the number of cores, sockets
+   and hyperthreading assigned to the flavor. In this case we have 1 socket
+   with 10 cores and no hyperthreading enabled (see the sketch after this
+   list).
+
+D. ``placement_groups`` - default. Do not change for NSB PROX.
+
+E. ``servers`` - ``tg_0`` is the traffic generator and ``vnf_0``
+ is the system under test.
+
+F. ``networks`` - is composed of a management network labeled ``mgmt``
+ and one uplink network labeled ``uplink_0`` and one downlink
+ network labeled ``downlink_0`` for 2 ports. If this was a 4 port
+ configuration there would be 2 extra downlink ports. See this
+ example from a 4 port l2fwd test.::
+
+ networks:
+ mgmt:
+ cidr: '10.0.1.0/24'
+ uplink_0:
+ cidr: '10.0.2.0/24'
+ gateway_ip: 'null'
+ port_security_enabled: False
+ enable_dhcp: 'false'
+ downlink_0:
+ cidr: '10.0.3.0/24'
+ gateway_ip: 'null'
+ port_security_enabled: False
+ enable_dhcp: 'false'
+ uplink_1:
+ cidr: '10.0.4.0/24'
+ gateway_ip: 'null'
+ port_security_enabled: False
+ enable_dhcp: 'false'
+ downlink_1:
+ cidr: '10.0.5.0/24'
+ gateway_ip: 'null'
+ port_security_enabled: False
+ enable_dhcp: 'false'
+
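+As an illustration of item C above, a flavor with an explicit CPU topology
+could be sketched as below (the RAM, disk and vCPU values are placeholders;
+the ``hw:cpu_*`` extra specs control sockets, cores and threads)::
+
+  flavor:
+    vcpus: 10
+    ram: 10240
+    disk: 6
+    extra_specs:
+      hw:cpu_sockets: 1
+      hw:cpu_cores: 10
+      hw:cpu_threads: 1
+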
+Test Description File for Standalone
+------------------------------------
+
+We will use ``tc_prox_ovs-dpdk_l2fwd-2.yaml`` as an example to show
+you how to understand the test description file.
+
+ .. image:: images/PROX_Test_ovs_dpdk_Script_1.png
+ :width: 800px
+ :alt: NSB PROX Test Standalone Description File - Part 1
+
+ .. image:: images/PROX_Test_ovs_dpdk_Script_2.png
+ :width: 800px
+ :alt: NSB PROX Test Standalone Description File - Part 2
+
+Now let's examine the components of the file in detail.
+
+Sections 1 to 9 are exactly the same as in Baremetal. Section ``10``
+is replaced with sections A to K. Section 10 described the baremetal
+configuration file, which has no place in a standalone configuration.
+
+A. ``file`` - Pod file for Baremetal Traffic Generator configuration:
+ IP Address, User/Password & Interfaces
+
+B. ``type`` - This defines the type of standalone configuration.
+ Possible values are ``StandaloneOvsDpdk`` or ``StandaloneSriov``
+
+C. ``file`` - Pod file for Standalone host configuration:
+ IP Address, User/Password & Interfaces
+
+D. ``vm_deploy`` - Deploy a new VM or use an existing VM
+
+E. ``ovs_properties`` - OVS Version, DPDK Version and configuration
+ to use.
+
+F. ``flavor`` - NSB image generated when installing NSB using
+   ansible-playbook (see the illustrative snippet after this list)::
+
+     ram - Configurable RAM for the SUT VM
+     extra_specs
+       hw:cpu_sockets - Configurable number of sockets for the SUT VM
+       hw:cpu_cores - Configurable number of cores for the SUT VM
+       hw:cpu_threads - Configurable number of threads for the SUT VM
+
+G. ``mgmt`` - Management port of the SUT VM. Preconfiguration is needed on
+   the TG & SUT host machines.
+
+
+H. ``xe0`` - Uplink network port
+
+I. ``xe1`` - Downlink network port
+
+J. ``uplink_0`` - Uplink Phy port of the NIC on the host. This will be used to create
+ the Virtual Functions.
+
+K. ``downlink_0`` - Downlink Phy port of the NIC on the host. This will be used to
+ create the Virtual Functions.
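+
+Putting section F above together, the flavor block typically looks like the
+following sketch (the RAM value and core counts are illustrative and should
+be sized for your SUT VM)::
+
+   flavor:
+     ram: 16384
+     extra_specs:
+       hw:cpu_sockets: 1
+       hw:cpu_cores: 10
+       hw:cpu_threads: 1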
+
+Traffic Generator Config file
+-----------------------------
+
+This section will describe the traffic generator config file.
+This is the same for both baremetal and heat. See this example
+of ``gen_l2fwd_multiflow-2.cfg`` to explain the options.
+
+.. image:: images/PROX_Gen_2port_cfg.png
+ :width: 1400px
+ :alt: NSB PROX Gen Config File
+
+The configuration file is divided into multiple sections, each
+of which is used to define some parameters and options::
+
+ [eal options]
+ [variables]
+ [port 0]
+ [port 1]
+ [port .]
+ [port Z]
+ [defaults]
+ [global]
+ [core 0]
+ [core 1]
+ [core 2]
+ [core .]
+ [core Z]
+
+See `prox options`_ for details
+
+Now let's examine the components of the file in detail
+
+1. ``[eal options]`` - This specifies the EAL (Environment
+   Abstraction Layer) options. These are default values and
+   are not changed. See `dpdk wiki page`_.
+
+2. ``[variables]`` - This section contains variables, as
+   the name suggests: variables for core numbers, MAC
+   addresses, IP addresses, etc. They are assigned as
+   ``key = value`` pairs, where the key is used in place of the value.
+
+   .. caution::
+     A special case exists for variables whose value begins with
+     ``@@``. These values are dynamically updated by the NSB
+     application at run time, e.g. MAC addresses and IP addresses.
+
+3. ``[port 0]`` - This section describes the DPDK port. The number
+   following the keyword ``port`` refers to the DPDK port id,
+   usually starting from ``0``. Because you can have multiple
+   ports this entry is usually repeated. E.g. for a 2 port setup
+   ``[port 0]`` and ``[port 1]``, and for a 4 port setup ``[port 0]``,
+   ``[port 1]``, ``[port 2]`` and ``[port 3]``::
+
+ [port 0]
+ name=p0
+ mac=hardware
+ rx desc=2048
+ tx desc=2048
+ promiscuous=yes
+
+   a. In this example ``name=p0`` assigns the name ``p0`` to the
+      port. Any name can be assigned to a port.
+   b. ``mac=hardware`` sets the MAC address assigned by the hardware
+      to data from this port.
+   c. ``rx desc=2048`` sets the number of available descriptors to
+      allocate for receive packets. This can be changed and can
+      affect performance.
+   d. ``tx desc=2048`` sets the number of available descriptors to
+      allocate for transmit packets. This can be changed and can
+      affect performance.
+   e. ``promiscuous=yes`` enables promiscuous mode for this port.
+
+4. ``[defaults]`` - Here default operations and settings can be
+   overwritten. In this example ``mempool size=4K`` alters the number
+   of mbufs per task. Altering this value could affect
+   performance. See `prox options`_ for details.
+
+5. ``[global]`` - Here application-wide settings are supported. Things
+   like application name, start time, duration and memory
+   configurations can be set here. In this example::
+
+ [global]
+ start time=5
+ name=Basic Gen
+
+   a. ``start time=5`` - Time in seconds after which average stats
+      collection is started.
+   b. ``name=Basic Gen`` - Name of the configuration.
+
+6. ``[core 0]`` - This core is designated the master core. Every
+   Prox application must have a master core. The master mode must
+   be assigned to exactly one task, running alone on one core::
+
+ [core 0]
+ mode=master
+
+7. ``[core 1]`` - This describes the activity on core 1. Cores can
+ be configured by means of a set of [core #] sections, where
+ # represents either:
+
+ a. an absolute core number: e.g. on a 10-core, dual socket
+ system with hyper-threading,
+ cores are numbered from 0 to 39.
+
+ b. PROX allows a core to be identified by a core number, the
+ letter 's', and a socket number.
+
+ It is possible to write a baremetal and an openstack test which use
+ the same traffic generator config file and SUT config file.
+ In this case it is advisable not to use physical
+ core numbering.
+
+ However it is also possible to write NSB Prox tests that
+ have been optimized for a particular hardware configuration.
+ In this case it is advisable to use the core numbering.
+ It is up to the user to make sure that cores from
+ the right sockets are used (i.e. from the socket on which the NIC
+ is attached to), to ensure good performance (EPA).
+
+   Each core can be assigned a set of tasks, each running
+   one of the implemented packet processing modes::
+
+ [core 1]
+ name=p0
+ task=0
+ mode=gen
+ tx port=p0
+ bps=1250000000
+ ; Ethernet + IP + UDP
+ pkt inline=${sut_mac0} 70 00 00 00 00 01 08 00 45 00 00 1c 00 01 00 00 40 11 f7 7d 98 10 64 01 98 10 64 02 13 88 13 88 00 08 55 7b
+ ; src_ip: 152.16.100.0/8
+ random=0000XXX1
+ rand_offset=29
+ ; dst_ip: 152.16.100.0/8
+ random=0000XXX0
+ rand_offset=33
+ random=0001001110001XXX0001001110001XXX
+ rand_offset=34
+
+ a. ``name=p0`` - Name assigned to the core.
+   b. ``task=0`` - Each core can run a set of tasks, starting with ``0``.
+      Task 1 can be defined later in this core or
+      can be defined in another ``[core 1]`` section with ``task=1``
+      later in the configuration file. Sometimes running
+      multiple tasks related to the same packet on the same physical
+      core improves performance, however sometimes it
+      is optimal to move a task to a separate core. This is best
+      decided by checking performance.
+   c. ``mode=gen`` - Specifies the action carried out by this task on
+      this core. Supported modes are: ``classify``, ``drop``, ``gen``,
+      ``lat``, ``genl4``, ``nop``, ``l2fwd``, ``gredecap``, ``greencap``,
+      ``lbpos``, ``lbnetwork``, ``lbqinq``, ``lb5tuple``, ``ipv6_decap``,
+      ``ipv6_encap``, ``qinqdecapv4``, ``qinqencapv4``, ``qos``, ``routing``,
+      ``impair``, ``mirror``, ``unmpls``, ``tagmpls``, ``nat``, ``decapnsh``,
+      ``encapnsh``, ``police`` and ``acl``. These correspond to:
+
+ * Classify
+ * Drop
+ * Basic Forwarding (no touch)
+ * L2 Forwarding (change MAC)
+ * GRE encap/decap
+ * Load balance based on packet fields
+ * Symmetric load balancing
+ * QinQ encap/decap IPv4/IPv6
+ * ARP
+ * QoS
+ * Routing
+ * Unmpls
+ * Nsh encap/decap
+ * Policing
+ * ACL
+
+      In the traffic generator we expect a core to generate packets
+      (``gen``) and a core to receive packets & calculate latency
+      (``lat``). This core does ``gen``, i.e. it is a traffic generator.
+
+ To understand what each of the modes support please see
+ `prox documentation`_.
+
+ d. ``tx port=p0`` - This specifies that the packets generated are
+ transmitted to port ``p0``
+   e. ``bps=1250000000`` - This specifies the rate, in bytes per
+      second, at which packets are generated.
+ f. ``; Ethernet + IP + UDP`` - This is a comment. Items starting with
+ ``;`` are ignored.
+ g. ``pkt inline=${sut_mac0} 70 00 00 00 ...`` - Defines the packet
+ format as a sequence of bytes (each
+ expressed in hexadecimal notation). This defines the packet
+      that is generated. This packet begins
+ with the hexadecimal sequence assigned to ``sut_mac`` and the
+ remainder of the bytes in the string.
+      This packet could now be sent as-is or modified by ``random=..``
+      (described below) before being sent to the target.
+ h. ``; src_ip: 152.16.100.0/8`` - Comment
+ i. ``random=0000XXX1`` - This describes a field of the packet
+ containing random data. This string can be
+      8, 16, 24 or 32 characters long and represents 1, 2, 3 or 4
+      bytes of data. In this case it describes a byte of
+      data. Each character in the string can be 0, 1 or ``X``. 0 or 1
+ are fixed bit values in the data packet and ``X`` is a
+ random bit. So random=0000XXX1 generates 00000001(1),
+ 00000011(3), 00000101(5), 00000111(7),
+ 00001001(9), 00001011(11), 00001101(13) and 00001111(15)
+ combinations.
+ j. ``rand_offset=29`` - Defines where to place the previously
+ defined random field.
+ k. ``; dst_ip: 152.16.100.0/8`` - Comment
+ l. ``random=0000XXX0`` - This is another random field which
+ generates a byte of 00000000(0), 00000010(2),
+ 00000100(4), 00000110(6), 00001000(8), 00001010(10),
+ 00001100(12) and 00001110(14) combinations.
+ m. ``rand_offset=33`` - Defines where to place the previously
+ defined random field.
+ n. ``random=0001001110001XXX0001001110001XXX`` - This is
+ another random field which generates 4 bytes.
+ o. ``rand_offset=34`` - Defines where to place the previously
+ defined 4 byte random field.
+
+   Core 2 executes the same scenario as Core 1. The only difference
+   in this case is that the packets are generated
+   for Port 1.
+
+8. ``[core 3]`` - This defines the activities on core 3. The purpose
+ of ``core 3`` and ``core 4`` is to receive packets
+ sent by the SUT.::
+
+ [core 3]
+ name=rec 0
+ task=0
+ mode=lat
+ rx port=p0
+ lat pos=42
+
+ a. ``name=rec 0`` - Name assigned to the core.
+   b. ``task=0`` - Each core can run a set of tasks, starting with
+      ``0``. Task 1 can be defined later in this core or
+      can be defined in another ``[core 1]`` section with
+      ``task=1`` later in the configuration file. Sometimes running
+      multiple tasks related to the same packet on the same
+      physical core improves performance, however sometimes it
+      is optimal to move a task to a separate core. This is
+      best decided by checking performance.
+ c. ``mode=lat`` - Specifies the action carried out by this task on this
+ core.
+ Supported modes are: ``acl``, ``classify``, ``drop``, ``gredecap``,
+ ``greencap``, ``ipv6_decap``, ``ipv6_encap``, ``l2fwd``, ``lbnetwork``,
+ ``lbpos``, ``lbqinq``, ``nop``, ``police``, ``qinqdecapv4``,
+ ``qinqencapv4``, ``qos``, ``routing``, ``impair``, ``lb5tuple``,
+ ``mirror``, ``unmpls``, ``tagmpls``, ``nat``, ``decapnsh``, ``encapnsh``,
+      ``gen``, ``genl4`` and ``lat``. This task (0) on this core (3)
+      receives packets on a port.
+   d. ``rx port=p0`` - The port to receive packets on, i.e. ``Port 0``. Core 4
+      will receive packets on ``Port 1`` (a sketch of the corresponding
+      ``[core 4]`` section is shown below).
+ e. ``lat pos=42`` - Describes where to put a 4-byte timestamp in the packet.
+ Note that the packet length should be longer than ``lat pos`` + 4 bytes
+ to avoid truncation of the timestamp. It defines where the timestamp is
+ to be read from. Note that the SUT workload might cause the position of
+ the timestamp to change (i.e. due to encapsulation).
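+
+   For completeness, the corresponding receive core for the second port
+   would look like the sketch below, mirroring the ``[core 3]`` section
+   above (the core name is illustrative)::
+
+      [core 4]
+      name=rec 1
+      task=0
+      mode=lat
+      rx port=p1
+      lat pos=42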
+
+SUT Config File
+---------------
+
+This section describes the SUT (VNF) config file. This is the same for both
+baremetal and heat. See this example of ``handle_l2fwd_multiflow-2.cfg`` to
+explain the options.
+
+.. image:: images/PROX_Handle_2port_cfg.png
+ :width: 1400px
+ :alt: NSB PROX Handle Config File
+
+See `prox options`_ for details
+
+Now let's examine the components of the file in detail
+
+1. ``[eal options]`` - Same as in the generator config file. This specifies
+   the EAL (Environment Abstraction Layer) options. These are default values
+   and are not changed. See `dpdk wiki page`_.
+
+2. ``[port 0]`` - This section describes the DPDK port. The number following
+   the keyword ``port`` refers to the DPDK port id, usually starting
+   from ``0``. Because you can have multiple ports this entry is usually
+   repeated. E.g. for a 2 port setup ``[port 0]`` and ``[port 1]``, and for a
+   4 port setup ``[port 0]``, ``[port 1]``, ``[port 2]`` and ``[port 3]``::
+
+ [port 0]
+ name=if0
+ mac=hardware
+ rx desc=2048
+ tx desc=2048
+ promiscuous=yes
+
+   a. In this example ``name=if0`` assigns the name ``if0`` to the port. Any
+      name can be assigned to a port.
+   b. ``mac=hardware`` sets the MAC address assigned by the hardware to data
+      from this port.
+   c. ``rx desc=2048`` sets the number of available descriptors to allocate
+      for receive packets. This can be changed and can affect performance.
+   d. ``tx desc=2048`` sets the number of available descriptors to allocate
+      for transmit packets. This can be changed and can affect performance.
+   e. ``promiscuous=yes`` enables promiscuous mode for this port.
+
+3. ``[defaults]`` - Here default operations and settings can be overwritten::
+
+ [defaults]
+ mempool size=8K
+ memcache size=512
+
+   a. In this example ``mempool size=8K`` alters the number of mbufs per
+      task. Altering this value could affect performance. See
+      `prox options`_ for details.
+   b. ``memcache size=512`` - Number of mbufs cached per core, default is
+      256. This is the cache_size. Altering this value could affect
+      performance.
+
+4. ``[global]`` - Here application-wide settings are supported. Things like
+   application name, start time, duration and memory configurations can be
+   set here. In this example::
+
+      [global]
+      start time=5
+      name=Handle L2FWD Multiflow (2x)
+
+   a. ``start time=5`` - Time in seconds after which average stats collection
+      is started.
+   b. ``name=Handle L2FWD Multiflow (2x)`` - Name of the configuration.
+
+5. ``[core 0]`` - This core is designated the master core. Every Prox
+   application must have a master core. The master mode must be assigned to
+   exactly one task, running alone on one core::
+
+ [core 0]
+ mode=master
+
+6. ``[core 1]`` - This describes the activity on core 1. Cores can be
+ configured by means of a set of [core #] sections, where # represents
+ either:
+
+ a. an absolute core number: e.g. on a 10-core, dual socket system with
+ hyper-threading, cores are numbered from 0 to 39.
+
+   b. PROX allows a core to be identified by a core number, the letter 's',
+      and a socket number. However, since NSB PROX is hardware agnostic
+      (physical and virtual configurations are the same), it is advisable
+      not to use physical core numbering.
+
+   Each core can be assigned a set of tasks, each running one of the
+   implemented packet processing modes::
+
+ [core 1]
+ name=none
+ task=0
+ mode=l2fwd
+ dst mac=@@tester_mac1
+ rx port=if0
+ tx port=if1
+
+ a. ``name=none`` - No name assigned to the core.
+   b. ``task=0`` - Each core can run a set of tasks, starting with ``0``.
+      Task 1 can be defined later in this core or can be defined in another
+      ``[core 1]`` section with ``task=1`` later in the configuration file.
+      Sometimes running multiple tasks related to the same packet on the same
+      physical core improves performance, however sometimes it is optimal to
+      move a task to a separate core. This is best decided by checking
+      performance.
+ c. ``mode=l2fwd`` - Specifies the action carried out by this task on this
+ core. Supported modes are: ``acl``, ``classify``, ``drop``,
+ ``gredecap``, ``greencap``, ``ipv6_decap``, ``ipv6_encap``, ``l2fwd``,
+ ``lbnetwork``, ``lbpos``, ``lbqinq``, ``nop``, ``police``,
+ ``qinqdecapv4``, ``qinqencapv4``, ``qos``, ``routing``, ``impair``,
+ ``lb5tuple``, ``mirror``, ``unmpls``, ``tagmpls``, ``nat``,
+      ``decapnsh``, ``encapnsh``, ``gen``, ``genl4`` and ``lat``. This core
+      does ``l2fwd``, i.e. it performs L2 forwarding.
+
+ d. ``dst mac=@@tester_mac1`` - The destination mac address of the packet
+ will be set to the MAC address of ``Port 1`` of destination device.
+ (The Traffic Generator/Verifier)
+ e. ``rx port=if0`` - This specifies that the packets are received from
+ ``Port 0`` called if0
+ f. ``tx port=if1`` - This specifies that the packets are transmitted to
+ ``Port 1`` called if1
+
+   In this example we receive a packet on a port, carry out an operation
+   on the packet on the same core and transmit it on another port, still
+   using the same task on the same core.
+
+   On some implementations you may wish to use multiple tasks, as below::
+
+ [core 1]
+ name=rx_task
+ task=0
+ mode=l2fwd
+ dst mac=@@tester_p0
+ rx port=if0
+ tx cores=1t1
+ drop=no
+
+ name=l2fwd_if0
+ task=1
+ mode=nop
+ rx ring=yes
+ tx port=if0
+ drop=no
+
+   In this example you can see Core 1/Task 0, called ``rx_task``, receives
+   the packet from ``if0`` and performs the l2fwd. However, instead of
+   sending the packet to a port, it sends it to a core (see
+   ``tx cores=1t1``). In this case it sends it to Core 1/Task 1.
+
+   Core 1/Task 1, called ``l2fwd_if0``, receives the packet, not from a port
+   but from the ring (see ``rx ring=yes``). It does not perform any operation
+   on the packet (see ``mode=nop``) and sends the packets to ``if0`` (see
+   ``tx port=if0``).
+
+ It is also possible to implement more complex operations by chaining
+ multiple operations in sequence and using rings to pass packets from one
+ core to another.
+
+ In this example, we show a Broadband Network Gateway (BNG) with Quality of
+ Service (QoS). Communication from task to task is via rings.
+
+ .. image:: images/PROX_BNG_QOS.png
+ :width: 1000px
+ :alt: NSB PROX Config File for BNG_QOS
+
+Baremetal Configuration File
+----------------------------
+
+This is required for baremetal testing. It describes the IP address of the
+various ports, the Network devices drivers and MAC addresses and the network
+configuration.
+
+In this example we will describe a 2 port configuration. This file is the same
+for all 2 port NSB Prox tests on the same platforms/configuration.
+
+ .. image:: images/PROX_Baremetal_config.png
+ :width: 1000px
+ :alt: NSB PROX Yardstick Config
+
+Now let's describe the sections of the file.
+
+ 1. ``TrafficGen`` - This section describes the Traffic Generator node of the
+ test configuration. The name of the node ``trafficgen_1`` must match the
+ node name in the ``Test Description File for Baremetal`` mentioned
+ earlier. The password attribute of the test needs to be configured. All
+ other parameters can remain as default settings.
+ 2. ``interfaces`` - This defines the DPDK interfaces on the Traffic
+ Generator.
+ 3. ``xe0`` is DPDK Port 0. ``lspci`` and ``./dpdk-devbind.py -s`` can be used
+ to provide the interface information. ``netmask`` and ``local_ip`` should
+    not be changed.
+ 4. ``xe1`` is DPDK Port 1. If more than 2 ports are required then ``xe1``
+ section needs to be repeated and modified accordingly.
+ 5. ``vnf`` - This section describes the SUT of the test configuration. The
+ name of the node ``vnf`` must match the node name in the
+ ``Test Description File for Baremetal`` mentioned earlier. The password
+ attribute of the test needs to be configured. All other parameters can
+ remain as default settings
+ 6. ``interfaces`` - This defines the DPDK interfaces on the SUT
+ 7. ``xe0`` - Same as 3 but for the ``SUT``.
+ 8. ``xe1`` - Same as 4 but for the ``SUT`` also.
+ 9. ``routing_table`` - All parameters should remain unchanged.
+ 10. ``nd_route_tbl`` - All parameters should remain unchanged.
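+
+A skeleton of one node entry in such a file is sketched below. The field
+names follow the usual Yardstick pod file layout; treat the addresses, PCI
+ids, MAC and driver values as placeholders and check them against the
+``prox-baremetal-2.yaml`` sample shipped with Yardstick::
+
+   nodes:
+   -
+     name: "trafficgen_1"
+     role: TrafficGen
+     ip: 10.0.0.101
+     user: root
+     password: r00t
+     interfaces:
+       xe0:
+         vpci: "0000:05:00.0"
+         local_ip: "152.16.100.19"
+         netmask: "255.255.255.0"
+         local_mac: "00:00:00:00:00:01"
+         driver: ixgbe
+         dpdk_port_num: 0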
+
+Grafana Dashboard
+-----------------
+
+The grafana dashboard visually displays the results of the tests. The steps
+required to produce a grafana dashboard are described here.
+
+.. _yardstick-config-label:
+
+  a. Configure ``yardstick`` to use InfluxDB to store test results. See file
+     ``/etc/yardstick/yardstick.conf`` (a sketch of the relevant settings is
+     shown after this list).
+
+ .. image:: images/PROX_Yardstick_config.png
+ :width: 1000px
+ :alt: NSB PROX Yardstick Config
+
+     1. Specify the dispatcher to use InfluxDB to store results.
+     2. ``target = ...`` - Specify the location of the InfluxDB instance
+        used to store results.
+        ``db_name = yardstick`` - Name of the database. Do not change.
+        ``username = root`` - Username used to store results. (Many tests
+        are run as root.)
+        ``password = ...`` - Please set to the root user password.
+
+  b. Deploy InfluxDB & Grafana. See `grafana deployment`_ for how to deploy
+     InfluxDB & Grafana.
+  c. Generate the test data. Run the tests as follows::
+
+ yardstick --debug task start tc_prox_<context>_<test>-ports.yaml
+
+     e.g.::
+
+ yardstick --debug task start tc_prox_heat_context_l2fwd-4.yaml
+
+  d. Now build the dashboard for the test you just ran. The easiest way to
+     do this is to copy an existing dashboard and rename the test and the
+     field names. The procedure to do so is described in
+     `opnfv grafana dashboard`_.
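+
+A sketch of the relevant part of ``/etc/yardstick/yardstick.conf`` is shown
+below, assuming InfluxDB is reachable on its default port; adjust the
+target, username and password to your deployment::
+
+   [DEFAULT]
+   debug = False
+   dispatcher = influxdb
+
+   [dispatcher_influxdb]
+   timeout = 5
+   target = http://<influxdb_ip>:8086
+   db_name = yardstick
+   username = root
+   password = <root user password>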
+
+How to run NSB Prox Test on a baremetal environment
+====================================================
+
+In order to run the NSB PROX test:
+
+ 1. Install NSB on Traffic Generator node and Prox in SUT. See
+ `NSB Installation`_
+
+ 2. To enter container::
+
+ docker exec -it yardstick /bin/bash
+
+ 3. Install baremetal configuration file (POD files)
+
+ a. Go to location of PROX tests in container ::
+
+ cd /home/opnfv/repos/yardstick/samples/vnf_samples/nsut/prox
+
+     b. Install ``prox-baremetal-2.yaml`` and ``prox-baremetal-4.yaml`` for
+        that topology into this directory as per
+        `Baremetal Configuration File`_
+
+ c. Install and configure ``yardstick.conf`` ::
+
+ cd /etc/yardstick/
+
+ Modify /etc/yardstick/yardstick.conf as per yardstick-config-label_
+
+ 4. Execute the test. Eg.::
+
+ yardstick --debug task start ./tc_prox_baremetal_l2fwd-4.yaml
+
+How to run NSB Prox Test on an Openstack environment
+====================================================
+
+In order to run the NSB PROX test:
+
+ 1. Install NSB on Openstack deployment node. See `NSB Installation`_
+
+ 2. To enter container::
+
+ docker exec -it yardstick /bin/bash
+
+ 3. Install configuration file
+
+     a. Go to the location of the PROX tests in the container ::
+
+ cd /home/opnfv/repos/yardstick/samples/vnf_samples/nsut/prox
+
+ b. Install and configure ``yardstick.conf`` ::
+
+ cd /etc/yardstick/
+
+ Modify /etc/yardstick/yardstick.conf as per yardstick-config-label_
+
+
+ 4. Execute the test. Eg.::
+
+ yardstick --debug task start ./tc_prox_heat_context_l2fwd-4.yaml
+
+Frequently Asked Questions
+==========================
+
+Here is a list of frequently asked questions.
+
+NSB Prox does not work on Baremetal. How do I resolve this?
+-----------------------------------------------------------
+
+If PROX NSB does not work on baremetal, the problem is either in the network
+configuration or in the test file.
+
+1. Verify the network configuration. Execute an existing baremetal test::
+
+      yardstick --debug task start ./tc_prox_baremetal_l2fwd-4.yaml
+
+   If the test does not work then there is an error in the network
+   configuration.
+
+   a. Check DPDK on the Traffic Generator and the SUT via::
+
+         /root/dpdk-17./usertools/dpdk-devbind.py
+
+   b. Verify MAC addresses match ``prox-baremetal-<ports>.yaml`` via
+      ``ifconfig`` and ``dpdk-devbind``.
+
+   c. Check your eth port is what you expect. You would not be the first
+      person to think that the port your cable is plugged into is ethX when
+      in fact it is ethY. Use ethtool to visually confirm that the eth is
+      where you expect::
+
+         ethtool -p ethX
+
+      An LED should start blinking on the port. (On both the
+      System-Under-Test and the Traffic Generator.)
+
+ d. Check cable.
+
+      Install the Linux kernel network driver and ensure your ports are
+      bound to the driver via ``dpdk-devbind`` (see the sketch after this
+      list). Bring up the port on both the SUT and the Traffic Generator
+      and check the connection.
+
+ i) On SUT and on Traffic Generator::
+
+ ifconfig ethX/enoX up
+
+      ii) Check the link::
+
+            ethtool ethX/enoX
+
+          If ``Link detected`` is ``yes``, the cable is good. If ``no``, you
+          have an issue with your cable/port.
+
+2. If the existing baremetal test works then the issue is with your test.
+   Check the traffic generator ``gen_<test>-<ports>.cfg`` to ensure it is
+   producing a valid packet.
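+
+The sketch below shows typical ``dpdk-devbind`` invocations for the checks
+above: listing the current bindings and temporarily rebinding a port to the
+kernel driver for the cable test (the PCI address and driver name are
+illustrative)::
+
+   /root/dpdk-17./usertools/dpdk-devbind.py --status
+   /root/dpdk-17./usertools/dpdk-devbind.py --bind=ixgbe 0000:05:00.0
+   ifconfig ethX up
+   ethtool -p ethX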
+
+How do I debug NSB Prox on Baremetal?
+-------------------------------------
+
+1. Execute the test as follows::
+
+ yardstick --debug task start ./tc_prox_baremetal_l2fwd-4.yaml
+
+2. Login to Traffic Generator as ``root``.::
+
+ cd
+ /opt/nsb_bin/prox -f /tmp/gen_<test>-<ports>.cfg
+
+3. Login to SUT as ``root``.::
+
+ cd
+ /opt/nsb_bin/prox -f /tmp/handle_<test>-<ports>.cfg
+
+4. Now let's examine the Generator Output. In this case the output of
+ ``gen_l2fwd-4.cfg``.
+
+ .. image:: images/PROX_Gen_GUI.png
+ :width: 1000px
+ :alt: NSB PROX Traffic Generator GUI
+
+ Now let's examine the output
+
+ 1. Indicates the amount of data successfully transmitted on Port 0
+ 2. Indicates the amount of data successfully received on port 1
+ 3. Indicates the amount of data successfully handled for port 1
+
+ It appears what is transmitted is received.
+
+ .. Caution::
+ The number of packets MAY not exactly match because the ports are read in
+ sequence.
+
+ .. Caution::
+ What is transmitted on PORT X may not always be received on same port.
+ Please check the Test scenario.
+
+5. Now let's examine the SUT output
+
+ .. image:: images/PROX_SUT_GUI.png
+ :width: 1400px
+ :alt: NSB PROX SUT GUI
+
+   Now let's examine the output
+
+ 1. What is received on 0 is transmitted on 1, received on 1 transmitted on 0,
+ received on 2 transmitted on 3 and received on 3 transmitted on 2.
+ 2. No packets are Failed.
+ 3. No packets are discarded.
+
+ We can also dump the packets being received or transmitted via the following commands. ::
+
+ dump Arguments: <core id> <task id> <nb packets>
+ Create a hex dump of <nb_packets> from <task_id> on <core_id> showing how
+ packets have changed between RX and TX.
+ dump_rx Arguments: <core id> <task id> <nb packets>
+ Create a hex dump of <nb_packets> from <task_id> on <core_id> at RX
+ dump_tx Arguments: <core id> <task id> <nb packets>
+ Create a hex dump of <nb_packets> from <task_id> on <core_id> at TX
+
+ eg.::
+
+ dump_tx 1 0 1
+
+NSB Prox works on Baremetal but not in Openstack. How do I resolve this?
+------------------------------------------------------------------------
+
+NSB Prox on Baremetal is a lot more forgiving than NSB Prox on Openstack. A
+badly formed packet may still work with PROX on Baremetal. However on
+Openstack the packet must be correct and all fields of the header correct.
+E.g. a packet with an invalid Protocol ID would still work in Baremetal but
+this packet would be rejected by Openstack.
+
+
+ 1. Check the validity of the packet.
+ 2. Use a known good packet in your test
+ 3. If using ``Random`` fields in the traffic generator, disable them and
+ retry.
+
+
+How do I debug NSB Prox on Openstack?
+-------------------------------------
+
+1. Execute the test as follows::
+
+ yardstick --debug task start --keep-deploy ./tc_prox_heat_context_l2fwd-4.yaml
+
+2. Access docker image if required via::
+
+ docker exec -it yardstick /bin/bash
+
+3. Install openstack credentials.
+
+ Depending on your openstack deployment, the location of these credentials
+ may vary.
+ On this platform I do this via::
+
+ scp root@10.237.222.55:/etc/kolla/admin-openrc.sh .
+ source ./admin-openrc.sh
+
+4. List Stack details
+
+ a. Get the name of the Stack.
+
+ .. image:: images/PROX_Openstack_stack_list.png
+ :width: 1000px
+ :alt: NSB PROX openstack stack list
+
+   b. Get the Floating IP of the Traffic Generator & SUT
+
+      This generates a lot of information. Please note the floating IP of
+      the VNF and the Traffic Generator (see also the CLI sketch after this
+      list).
+
+ .. image:: images/PROX_Openstack_stack_show_a.png
+ :width: 1000px
+ :alt: NSB PROX openstack stack show (Top)
+
+ From here you can see the floating IP Address of the SUT / VNF
+
+ .. image:: images/PROX_Openstack_stack_show_b.png
+ :width: 1000px
+     :alt: NSB PROX openstack stack show (Bottom)
+
+ From here you can see the floating IP Address of the Traffic Generator
+
+ c. Get ssh identity file
+
+ In the docker container locate the identity file.::
+
+ cd /home/opnfv/repos/yardstick/yardstick/resources/files
+ ls -lt
+
+5. Log in to the SUT as ``ubuntu``::
+
+ ssh -i ./yardstick_key-01029d1d ubuntu@172.16.2.158
+
+ Change to root::
+
+ sudo su
+
+ Now continue as baremetal.
+
+6. Log in to the Traffic Generator as ``ubuntu``::
+
+ ssh -i ./yardstick_key-01029d1d ubuntu@172.16.2.156
+
+ Change to root::
+
+ sudo su
+
+ Now continue as baremetal.
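+
+Alternatively, once the credentials are sourced, the floating IPs of the
+Traffic Generator and SUT can usually be listed directly from the CLI (a
+sketch; the exact server and stack names depend on the generated stack)::
+
+   openstack server list
+   openstack stack resource list <stack_name>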
+
+How do I resolve "Quota exceeded for resources"?
+-------------------------------------------------
+
+This usually occurs for one of two reasons when executing an Openstack test.
+
+1. One or more stacks already exist and are consuming all resources. To
+   resolve::
+
+ openstack stack list
+
+ Response::
+
+ +--------------------------------------+--------------------+-----------------+----------------------+--------------+
+ | ID | Stack Name | Stack Status | Creation Time | Updated Time |
+ +--------------------------------------+--------------------+-----------------+----------------------+--------------+
+ | acb559d7-f575-4266-a2d4-67290b556f15 | yardstick-e05ba5a4 | CREATE_COMPLETE | 2017-12-06T15:00:05Z | None |
+ | 7edf21ce-8824-4c86-8edb-f7e23801a01b | yardstick-08bda9e3 | CREATE_COMPLETE | 2017-12-06T14:56:43Z | None |
+ +--------------------------------------+--------------------+-----------------+----------------------+--------------+
+
+ In this case 2 stacks already exist.
+
+ To remove stack::
+
+ openstack stack delete yardstick-08bda9e3
+ Are you sure you want to delete this stack(s) [y/N]? y
+
+2. The Openstack configuration quotas are too small.
+
+   The solution is to increase the quota. Use the command below to query
+   existing quotas::
+
+ openstack quota show
+
+ And to set quota::
+
+ openstack quota set <resource>
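+
+   For example, to raise the compute quotas for the current project
+   (resource names and values are illustrative)::
+
+      openstack quota set --instances 20 --cores 48 --ram 131072 <project>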
+
+Openstack CLI fails or hangs. How do I resolve this?
+----------------------------------------------------
+
+If it fails due to ::
+
+ Missing value auth-url required for auth plugin password
+
+Check your shell environment for Openstack variables. One of them should
+contain the authentication URL::
+
+   OS_AUTH_URL=https://192.168.72.41:5000/v3
+
+Or similar. Ensure that the Openstack configuration is exported::
+
+ cat /etc/kolla/admin-openrc.sh
+
+Result ::
+
+ export OS_PROJECT_DOMAIN_NAME=default
+ export OS_USER_DOMAIN_NAME=default
+ export OS_PROJECT_NAME=admin
+ export OS_TENANT_NAME=admin
+ export OS_USERNAME=admin
+ export OS_PASSWORD=BwwSEZqmUJA676klr9wa052PFjNkz99tOccS9sTc
+ export OS_AUTH_URL=http://193.168.72.41:35357/v3
+ export OS_INTERFACE=internal
+ export OS_IDENTITY_API_VERSION=3
+ export EXTERNAL_NETWORK=yardstick-public
+
+Confirm that these variables are visible in your current shell environment.
+
+If the Openstack CLI appears to hang, then verify that the proxies and
+``no_proxy`` are set correctly. They should be similar to ::
+
+ FTP_PROXY="http://<your_proxy>:<port>/"
+ HTTPS_PROXY="http://<your_proxy>:<port>/"
+ HTTP_PROXY="http://<your_proxy>:<port>/"
+ NO_PROXY="localhost,127.0.0.1,10.237.222.55,10.237.223.80,10.237.222.134,.ir.intel.com"
+ ftp_proxy="http://<your_proxy>:<port>/"
+ http_proxy="http://<your_proxy>:<port>/"
+ https_proxy="http://<your_proxy>:<port>/"
+ no_proxy="localhost,127.0.0.1,10.237.222.55,10.237.223.80,10.237.222.134,.ir.intel.com"
+
+Where
+
+ 1) 10.237.222.55 = IP Address of deployment node
+ 2) 10.237.223.80 = IP Address of Controller node
+ 3) 10.237.222.134 = IP Address of Compute Node
+
+How do I understand the Grafana output?
+----------------------------------------
+
+ .. image:: images/PROX_Grafana_1.png
+ :width: 1000px
+ :alt: NSB PROX Grafana_1
+
+ .. image:: images/PROX_Grafana_2.png
+ :width: 1000px
+ :alt: NSB PROX Grafana_2
+
+ .. image:: images/PROX_Grafana_3.png
+ :width: 1000px
+ :alt: NSB PROX Grafana_3
+
+ .. image:: images/PROX_Grafana_4.png
+ :width: 1000px
+ :alt: NSB PROX Grafana_4
+
+ .. image:: images/PROX_Grafana_5.png
+ :width: 1000px
+ :alt: NSB PROX Grafana_5
+
+ .. image:: images/PROX_Grafana_6.png
+ :width: 1000px
+ :alt: NSB PROX Grafana_6
+
+A. Test Parameters - Test interval, Duration, Tolerated Loss and Test Precision
+
+B. No. of packets sent and received during the test
+
+C. Generator Stats - Average Throughput per step (Step Duration is specified by
+ "Duration" field in A above)
+
+D. Packet size
+
+E. No. of packets sent by the generator per second per interface in millions
+ of packets per second.
+
+F. No. of packets received by the generator per second per interface in
+   millions of packets per second.
+
+G. No. of packets received by the SUT from the generator in millions of packets
+ per second.
+
+H. No. of packets sent by the SUT to the generator in millions of packets
+   per second.
+
+I. No. of packets sent by the Generator to the SUT per step per interface
+ in millions of packets per second.
+
+J. No. of packets received by the Generator from the SUT per step per interface
+ in millions of packets per second.
+
+K. No. of packets sent and received by the generator and lost by the SUT that
+ meet the success criteria
+
+L. The change in the percentage of line rate used over a test. The MAX and
+   the MIN should converge to within the interval specified as the
+   ``test-precision``.
+
+M. Packet size supported during test. If *N/A* appears in any field the
+ result has not been decided.
+
+N. The theoretical maximum no. of packets per second that can be sent for
+   this packet size.
+
+O. No. of packets sent by the generator in MPPS
+
+P. No. of packets received by the generator in MPPS
+
+Q. No. of packets sent by SUT.
+
+R. No. of packets received by the SUT
+
+S. Total no. of dropped packets -- Packets sent but not received back by the
+ generator, these may be dropped by the SUT or the generator.
+
+T. The tolerated no. of dropped packets.
+
+U. Test throughput in Gbps
+
+V. Latency per port
+   * Va - Port XE0
+   * Vb - Port XE1
+   * Vc - Port XE2
+   * Vd - Port XE3
+
+W. CPU Utilization
+ * Wa - CPU Utilization of the Generator
+ * Wb - CPU Utilization of the SUT
diff --git a/docs/testing/developer/devguide/images/PROX_BNG_QOS.png b/docs/testing/developer/devguide/images/PROX_BNG_QOS.png
new file mode 100644
index 000000000..3c720945c
--- /dev/null
+++ b/docs/testing/developer/devguide/images/PROX_BNG_QOS.png
Binary files differ
diff --git a/docs/testing/developer/devguide/images/PROX_Baremetal_config.png b/docs/testing/developer/devguide/images/PROX_Baremetal_config.png
new file mode 100644
index 000000000..5cd914035
--- /dev/null
+++ b/docs/testing/developer/devguide/images/PROX_Baremetal_config.png
Binary files differ
diff --git a/docs/testing/developer/devguide/images/PROX_Gen_2port_cfg.png b/docs/testing/developer/devguide/images/PROX_Gen_2port_cfg.png
new file mode 100644
index 000000000..07731cabc
--- /dev/null
+++ b/docs/testing/developer/devguide/images/PROX_Gen_2port_cfg.png
Binary files differ
diff --git a/docs/testing/developer/devguide/images/PROX_Gen_GUI.png b/docs/testing/developer/devguide/images/PROX_Gen_GUI.png
new file mode 100644
index 000000000..e96aea3de
--- /dev/null
+++ b/docs/testing/developer/devguide/images/PROX_Gen_GUI.png
Binary files differ
diff --git a/docs/testing/developer/devguide/images/PROX_Grafana_1.png b/docs/testing/developer/devguide/images/PROX_Grafana_1.png
new file mode 100644
index 000000000..144000bb8
--- /dev/null
+++ b/docs/testing/developer/devguide/images/PROX_Grafana_1.png
Binary files differ
diff --git a/docs/testing/developer/devguide/images/PROX_Grafana_2.png b/docs/testing/developer/devguide/images/PROX_Grafana_2.png
new file mode 100644
index 000000000..af1ebb315
--- /dev/null
+++ b/docs/testing/developer/devguide/images/PROX_Grafana_2.png
Binary files differ
diff --git a/docs/testing/developer/devguide/images/PROX_Grafana_3.png b/docs/testing/developer/devguide/images/PROX_Grafana_3.png
new file mode 100644
index 000000000..a287670c9
--- /dev/null
+++ b/docs/testing/developer/devguide/images/PROX_Grafana_3.png
Binary files differ
diff --git a/docs/testing/developer/devguide/images/PROX_Grafana_4.png b/docs/testing/developer/devguide/images/PROX_Grafana_4.png
new file mode 100644
index 000000000..6752dc324
--- /dev/null
+++ b/docs/testing/developer/devguide/images/PROX_Grafana_4.png
Binary files differ
diff --git a/docs/testing/developer/devguide/images/PROX_Grafana_5.png b/docs/testing/developer/devguide/images/PROX_Grafana_5.png
new file mode 100644
index 000000000..45746d86b
--- /dev/null
+++ b/docs/testing/developer/devguide/images/PROX_Grafana_5.png
Binary files differ
diff --git a/docs/testing/developer/devguide/images/PROX_Grafana_6.png b/docs/testing/developer/devguide/images/PROX_Grafana_6.png
new file mode 100644
index 000000000..5bdbcf26a
--- /dev/null
+++ b/docs/testing/developer/devguide/images/PROX_Grafana_6.png
Binary files differ
diff --git a/docs/testing/developer/devguide/images/PROX_Handle_2port_cfg.png b/docs/testing/developer/devguide/images/PROX_Handle_2port_cfg.png
new file mode 100644
index 000000000..6505bedfd
--- /dev/null
+++ b/docs/testing/developer/devguide/images/PROX_Handle_2port_cfg.png
Binary files differ
diff --git a/docs/testing/developer/devguide/images/PROX_Hardware_Arch.png b/docs/testing/developer/devguide/images/PROX_Hardware_Arch.png
new file mode 100644
index 000000000..6e69dd6e3
--- /dev/null
+++ b/docs/testing/developer/devguide/images/PROX_Hardware_Arch.png
Binary files differ
diff --git a/docs/testing/developer/devguide/images/PROX_Openstack_stack_list.png b/docs/testing/developer/devguide/images/PROX_Openstack_stack_list.png
new file mode 100644
index 000000000..f67d10e6d
--- /dev/null
+++ b/docs/testing/developer/devguide/images/PROX_Openstack_stack_list.png
Binary files differ
diff --git a/docs/testing/developer/devguide/images/PROX_Openstack_stack_show_a.png b/docs/testing/developer/devguide/images/PROX_Openstack_stack_show_a.png
new file mode 100644
index 000000000..00e7620e7
--- /dev/null
+++ b/docs/testing/developer/devguide/images/PROX_Openstack_stack_show_a.png
Binary files differ
diff --git a/docs/testing/developer/devguide/images/PROX_Openstack_stack_show_b.png b/docs/testing/developer/devguide/images/PROX_Openstack_stack_show_b.png
new file mode 100644
index 000000000..bbe9b8631
--- /dev/null
+++ b/docs/testing/developer/devguide/images/PROX_Openstack_stack_show_b.png
Binary files differ
diff --git a/docs/testing/developer/devguide/images/PROX_SUT_GUI.png b/docs/testing/developer/devguide/images/PROX_SUT_GUI.png
new file mode 100644
index 000000000..204083d1d
--- /dev/null
+++ b/docs/testing/developer/devguide/images/PROX_SUT_GUI.png
Binary files differ
diff --git a/docs/testing/developer/devguide/images/PROX_Software_Arch.png b/docs/testing/developer/devguide/images/PROX_Software_Arch.png
new file mode 100644
index 000000000..c31f1e24a
--- /dev/null
+++ b/docs/testing/developer/devguide/images/PROX_Software_Arch.png
Binary files differ
diff --git a/docs/testing/developer/devguide/images/PROX_Test_BM_Script.png b/docs/testing/developer/devguide/images/PROX_Test_BM_Script.png
new file mode 100644
index 000000000..84e640a20
--- /dev/null
+++ b/docs/testing/developer/devguide/images/PROX_Test_BM_Script.png
Binary files differ
diff --git a/docs/testing/developer/devguide/images/PROX_Test_HEAT_Script1.png b/docs/testing/developer/devguide/images/PROX_Test_HEAT_Script1.png
new file mode 100644
index 000000000..bd375dba1
--- /dev/null
+++ b/docs/testing/developer/devguide/images/PROX_Test_HEAT_Script1.png
Binary files differ
diff --git a/docs/testing/developer/devguide/images/PROX_Test_HEAT_Script2.png b/docs/testing/developer/devguide/images/PROX_Test_HEAT_Script2.png
new file mode 100644
index 000000000..99d9d24e6
--- /dev/null
+++ b/docs/testing/developer/devguide/images/PROX_Test_HEAT_Script2.png
Binary files differ
diff --git a/docs/testing/developer/devguide/images/PROX_Test_ovs_dpdk_Script_1.png b/docs/testing/developer/devguide/images/PROX_Test_ovs_dpdk_Script_1.png
new file mode 100644
index 000000000..73f6f2920
--- /dev/null
+++ b/docs/testing/developer/devguide/images/PROX_Test_ovs_dpdk_Script_1.png
Binary files differ
diff --git a/docs/testing/developer/devguide/images/PROX_Test_ovs_dpdk_Script_2.png b/docs/testing/developer/devguide/images/PROX_Test_ovs_dpdk_Script_2.png
new file mode 100644
index 000000000..ad7d22822
--- /dev/null
+++ b/docs/testing/developer/devguide/images/PROX_Test_ovs_dpdk_Script_2.png
Binary files differ
diff --git a/docs/testing/developer/devguide/images/PROX_Traffic_profile.png b/docs/testing/developer/devguide/images/PROX_Traffic_profile.png
new file mode 100644
index 000000000..660bca342
--- /dev/null
+++ b/docs/testing/developer/devguide/images/PROX_Traffic_profile.png
Binary files differ
diff --git a/docs/testing/developer/devguide/images/PROX_Yardstick_config.png b/docs/testing/developer/devguide/images/PROX_Yardstick_config.png
new file mode 100644
index 000000000..8d346b03a
--- /dev/null
+++ b/docs/testing/developer/devguide/images/PROX_Yardstick_config.png
Binary files differ
diff --git a/docs/testing/developer/devguide/images/vPE_Diagram.png b/docs/testing/developer/devguide/images/vPE_Diagram.png
new file mode 100644
index 000000000..c3985b7d9
--- /dev/null
+++ b/docs/testing/developer/devguide/images/vPE_Diagram.png
Binary files differ
diff --git a/docs/testing/developer/devguide/index.rst b/docs/testing/developer/devguide/index.rst
index 92a18f6ee..194099a27 100644
--- a/docs/testing/developer/devguide/index.rst
+++ b/docs/testing/developer/devguide/index.rst
@@ -11,6 +11,6 @@ Yardstick Developer Guide
.. toctree::
:maxdepth: 4
- :numbered:
devguide
+ devguide_nsb_prox
diff --git a/docs/testing/user/userguide/01-introduction.rst b/docs/testing/user/userguide/01-introduction.rst
index c1d5def98..2a3ab4ea7 100755
--- a/docs/testing/user/userguide/01-introduction.rst
+++ b/docs/testing/user/userguide/01-introduction.rst
@@ -1,7 +1,17 @@
.. This work is licensed under a Creative Commons Attribution 4.0 International
-.. License.
-.. http://creativecommons.org/licenses/by/4.0
-.. (c) OPNFV, Ericsson AB and others.
+ License.
+ http://creativecommons.org/licenses/by/4.0
+ (c) OPNFV, Ericsson AB and others.
+
+ Convention for heading levels in Yardstick documentation:
+
+ ======= Heading 0 (reserved for the title in a document)
+ ------- Heading 1
+ ^^^^^^^ Heading 2
+ +++++++ Heading 3
+ ''''''' Heading 4
+
+ Avoid deeper levels because they do not render well.
============
Introduction
@@ -9,8 +19,8 @@ Introduction
**Welcome to Yardstick's documentation !**
-.. _Pharos: https://wiki.opnfv.org/pharos
-.. _Yardstick: https://wiki.opnfv.org/yardstick
+.. _Pharos: https://wiki.opnfv.org/display/pharos
+.. _Yardstick: https://wiki.opnfv.org/display/yardstick
.. _Presentation: https://wiki.opnfv.org/download/attachments/2925202/opnfv_summit_-_yardstick_project.pdf?version=1&modificationDate=1458848320000&api=v2
Yardstick_ is an OPNFV Project.
@@ -32,7 +42,7 @@ independent.
About This Document
-===================
+-------------------
This document consists of the following chapters:
@@ -42,43 +52,45 @@ This document consists of the following chapters:
* Chapter :doc:`02-methodology` describes the methodology implemented by the
*Yardstick* Project for :term:`NFVI` verification.
-* Chapter :doc:`03-architecture` provides information on the software architecture
- of *Yardstick*.
+* Chapter :doc:`03-architecture` provides information on the software
+ architecture of *Yardstick*.
* Chapter :doc:`04-installation` provides instructions to install *Yardstick*.
-* Chapter :doc:`05-yardstick_plugin` provides information on how to integrate
+* Chapter :doc:`05-operation` provides information on how to use *Yardstick*
+ to run and create testcases.
+
+* Chapter :doc:`06-yardstick-plugin` provides information on how to integrate
other OPNFV testing projects into *Yardstick*.
-* Chapter :doc:`06-result-store-InfluxDB` provides inforamtion on how to run
+* Chapter :doc:`07-result-store-InfluxDB` provides information on how to run
plug-in test cases and store test results into community's InfluxDB.
-* Chapter :doc:`07-grafana` provides inforamtion on *Yardstick* grafana dashboard
- and how to add a dashboard into *Yardstick* grafana dashboard.
+* Chapter :doc:`08-grafana` provides information on *Yardstick* grafana
+ dashboard and how to add a dashboard into *Yardstick* grafana dashboard.
-* Chapter :doc:`08-api` provides inforamtion on *Yardstick* ReST API and how to
+* Chapter :doc:`09-api` provides information on *Yardstick* ReST API and how to
use *Yardstick* API.
-* Chapter :doc:`09-yardstick_user_interface` provides inforamtion on how to use
+* Chapter :doc:`10-yardstick-user-interface` provides information on how to use
yardstick report CLI to view the test result in table format and also values
pinned on to a graph
-* Chapter :doc:`10-vtc-overview` provides information on the :term:`VTC`.
-
-* Chapter :doc:`13-nsb-overview` describes the methodology implemented by the
+* Chapter :doc:`12-nsb-overview` describes the methodology implemented by the
Yardstick - Network service benchmarking to test real world usecase for a
given VNF.
-* Chapter :doc:`14-nsb_installation` provides instructions to install
- *Yardstick - Network service benchmarking testing*.
+* Chapter :doc:`13-nsb-installation` provides instructions to install
+ *Yardstick - Network Service Benchmarking (NSB) testing*.
+
+* Chapter :doc:`14-nsb-operation` provides information on running *NSB*
* Chapter :doc:`15-list-of-tcs` includes a list of available *Yardstick* test
cases.
-
Contact Yardstick
-=================
+-----------------
Feedback? `Contact us`_
-.. _Contact us: opnfv-users@lists.opnfv.org
+.. _Contact us: mailto:opnfv-users@lists.opnfv.org&subject="#yardstick"
diff --git a/docs/testing/user/userguide/02-methodology.rst b/docs/testing/user/userguide/02-methodology.rst
index 34d271095..bb490c404 100644
--- a/docs/testing/user/userguide/02-methodology.rst
+++ b/docs/testing/user/userguide/02-methodology.rst
@@ -1,20 +1,30 @@
.. This work is licensed under a Creative Commons Attribution 4.0 International
-.. License.
-.. http://creativecommons.org/licenses/by/4.0
-.. (c) OPNFV, Ericsson AB and others.
+ License.
+ http://creativecommons.org/licenses/by/4.0
+ (c) OPNFV, Ericsson AB and others.
+
+ Convention for heading levels in Yardstick documentation:
+
+ ======= Heading 0 (reserved for the title in a document)
+ ------- Heading 1
+ ^^^^^^^ Heading 2
+ +++++++ Heading 3
+ ''''''' Heading 4
+
+ Avoid deeper levels because they do not render well.
===========
Methodology
===========
Abstract
-========
+--------
This chapter describes the methodology implemented by the Yardstick project for
verifying the :term:`NFVI` from the perspective of a :term:`VNF`.
ETSI-NFV
-========
+--------
.. _NFV-TST001: http://www.etsi.org/deliver/etsi_gs/NFV-TST/001_099/001/01.01.01_60/gs_NFV-TST001v010101p.pdf
.. _Yardsticktst: https://wiki.opnfv.org/download/attachments/2925202/opnfv_summit_-_bridging_opnfv_and_etsi.pdf?version=1&modificationDate=1458848320000&api=v2
@@ -53,7 +63,7 @@ The methodology includes five steps:
.. seealso:: Yardsticktst_ for material on alignment ETSI TST001 and Yardstick.
Metrics
-=======
+-------
The metrics, as defined by ETSI GS NFV-TST001, are shown in
:ref:`Table1 <table2_1>`, :ref:`Table2 <table2_2>` and
@@ -98,6 +108,10 @@ options).
| | * Latency for storage read/write operations |
| | * Throughput for storage read/write operations |
| | |
++---------+-------------------------------------------------------------------|
+| Energy | * Energy consumption in Watts (transversal to all others |
+| | scenario) |
+| | |
+---------+-------------------------------------------------------------------+
.. _table2_2:
@@ -173,6 +187,7 @@ options).
| | TC010 | TC024 | |
| | TC012 | TC055 | |
| | TC014 | | |
+| | TC015 | | |
| | TC069 | | |
+---------+-------------------+----------------+------------------------------+
| Network | TC001 | TC044 | TC016 [1]_ |
diff --git a/docs/testing/user/userguide/03-architecture.rst b/docs/testing/user/userguide/03-architecture.rst
index 8336b609d..94081b0e1 100755
--- a/docs/testing/user/userguide/03-architecture.rst
+++ b/docs/testing/user/userguide/03-architecture.rst
@@ -1,30 +1,42 @@
.. This work is licensed under a Creative Commons Attribution 4.0 International
-.. License.
-.. http://creativecommons.org/licenses/by/4.0
-.. (c) 2016 Huawei Technologies Co.,Ltd and others
+ License.
+ http://creativecommons.org/licenses/by/4.0
+ (c) 2016 Huawei Technologies Co.,Ltd and others
+
+ Convention for heading levels in Yardstick documentation:
+
+ ======= Heading 0 (reserved for the title in a document)
+ ------- Heading 1
+ ^^^^^^^ Heading 2
+ +++++++ Heading 3
+ ''''''' Heading 4
+
+ Avoid deeper levels because they do not render well.
============
Architecture
============
Abstract
-========
-This chapter describes the yardstick framework software architecture. we will introduce it from Use-Case View,
-Logical View, Process View and Deployment View. More technical details will be introduced in this chapter.
+--------
+
+This chapter describes the Yardstick framework software architecture. We will
+introduce it from Use Case View, Logical View, Process View and Deployment
+View. More technical details will be introduced in this chapter.
Overview
-========
+--------
Architecture overview
----------------------
+^^^^^^^^^^^^^^^^^^^^^
Yardstick is mainly written in Python, and test configurations are made
in YAML. Documentation is written in reStructuredText format, i.e. .rst
files. Yardstick is inspired by Rally. Yardstick is intended to run on a
computer with access and credentials to a cloud. The test case is described
in a configuration file given as an argument.
-How it works: the benchmark task configuration file is parsed and converted into
-an internal model. The context part of the model is converted into a Heat
+How it works: the benchmark task configuration file is parsed and converted
+into an internal model. The context part of the model is converted into a Heat
template and deployed into a stack. Each scenario is run using a runner, either
serially or in parallel. Each runner runs in its own subprocess executing
commands in a VM using SSH. The output of each scenario is written as json
@@ -33,7 +45,8 @@ the test result will be shown with grafana.
Concept
--------
+^^^^^^^
+
**Benchmark** - assess the relative performance of something
**Benchmark** configuration file - describes a single test case in yaml format
@@ -43,13 +56,15 @@ names, image names, affinity rules and network configurations. A context is
converted into a simplified Heat template, which is used to deploy onto the
Openstack environment.
-**Data** - Output produced by running a benchmark, written to a file in json format
+**Data** - Output produced by running a benchmark, written to a file in json
+format
**Runner** - Logic that determines how a test scenario is run and reported, for
example the number of test iterations, input value stepping and test duration.
Predefined runner types exist for re-usage, see `Runner types`_.
-**Scenario** - Type/class of measurement for example Ping, Pktgen, (Iperf, LmBench, ...)
+**Scenario** - Type/class of measurement for example Ping, Pktgen, (Iperf,
+LmBench, ...)
**SLA** - Relates to what result boundary a test case must meet to pass. For
example a latency limit, amount or ratio of lost packets and so on. Action
@@ -59,7 +74,7 @@ configuration file and evaluated by the runner.
Runner types
-------------
+^^^^^^^^^^^^
There exists several predefined runner types to choose between when designing
a test scenario:
@@ -126,10 +141,11 @@ Snippet of an Iteration runner configuration:
Use-Case View
-=============
+-------------
+
Yardstick Use-Case View shows two kinds of users. One is the Tester who will
-do testing in cloud, the other is the User who is more concerned with test result
-and result analyses.
+do testing in cloud, the other is the User who is more concerned with test
+result and result analyses.
For testers, they will run a single test case or test case suite to verify
infrastructure compliance or bencnmark their own infrastructure performance.
@@ -155,7 +171,8 @@ on OPNFV testing dashboard which use MongoDB as backend.
:alt: Yardstick Use-Case View
Logical View
-============
+------------
+
Yardstick Logical View describes the most important classes, their
organization, and the most important use-case realizations.
@@ -192,7 +209,8 @@ finished.
:alt: Yardstick framework architecture in Danube
Process View (Test execution flow)
-==================================
+----------------------------------
+
Yardstick process view shows how yardstick runs a test case. Below is the
sequence graph about the test execution flow using heat context, and each
object represents one module in yardstick:
@@ -219,7 +237,8 @@ will call "Openstack" to undeploy the heat stack. Once the stack is
undepoyed, the whole test ends.
Deployment View
-===============
+---------------
+
Yardstick deployment view shows how the yardstick tool can be deployed into the
underlying platform. Generally, yardstick tool is installed on JumpServer(see
`07-installation` for detail installation steps), and JumpServer is
@@ -232,7 +251,7 @@ result for better showing.
:alt: Yardstick Deployment View
Yardstick Directory structure
-=============================
+-----------------------------
**yardstick/** - Yardstick main directory.
@@ -240,27 +259,27 @@ Yardstick Directory structure
with support for different installers.
*docs/* - All documentation is stored here, such as configuration guides,
- user guides and Yardstick descriptions.
+ user guides and Yardstick test case descriptions.
*etc/* - Used for test cases requiring specific POD configurations.
*samples/* - test case samples are stored here, most of all scenario and
- feature's samples are shown in this directory.
+ feature samples are shown in this directory.
-*tests/* - Here both Yardstick internal tests (*functional/* and *unit/*) as
- well as the test cases run to verify the NFVI (*opnfv/*) are stored.
- Also configurations of what to run daily and weekly at the different
- PODs is located here.
+*tests/* - The test cases run to verify the NFVI (*opnfv/*) are stored here.
+ The configurations of what to run daily and weekly at the different
+ PODs are also located here.
-*tools/* - Currently contains tools to build image for VMs which are deployed
- by Heat. Currently contains how to build the yardstick-trusty-server
- image with the different tools that are needed from within the image.
+*tools/* - Contains tools to build image for VMs which are deployed by Heat.
+ Currently contains how to build the yardstick-image with the
+ different tools that are needed from within the image.
*plugin/* - Plug-in configuration files are stored here.
-*vTC/* - Contains the files for running the virtual Traffic Classifier tests.
-
-*yardstick/* - Contains the internals of Yardstick: Runners, Scenario, Contexts,
- CLI parsing, keys, plotting tools, dispatcher, plugin
+*yardstick/* - Contains the internals of Yardstick: :term:`Runners <runner>`,
+ :term:`Scenarios <scenario>`, :term:`Contexts <context>`, CLI
+ parsing, keys, plotting tools, dispatcher, plugin
install/remove scripts and so on.
+*yardstick/tests* - The Yardstick internal tests (*functional/* and *unit/*)
+ are stored here.
diff --git a/docs/testing/user/userguide/04-installation.rst b/docs/testing/user/userguide/04-installation.rst
index 828c49581..6fe42232d 100644
--- a/docs/testing/user/userguide/04-installation.rst
+++ b/docs/testing/user/userguide/04-installation.rst
@@ -1,13 +1,33 @@
.. This work is licensed under a Creative Commons Attribution 4.0 International
-.. License.
-.. http://creativecommons.org/licenses/by/4.0
-.. (c) OPNFV, Ericsson AB, Huawei Technologies Co.,Ltd and others.
+ License.
+ http://creativecommons.org/licenses/by/4.0
+ (c) OPNFV, Ericsson AB, Huawei Technologies Co.,Ltd and others.
+
+
+ Convention for heading levels in Yardstick documentation:
+ ======= Heading 0 (reserved for the title in a document)
+ ------- Heading 1
+ ^^^^^^^ Heading 2
+ +++++++ Heading 3
+ ''''''' Heading 4
+
+ Avoid deeper levels because they do not render well.
+
+..
+ Convention for heading levels in Yardstick documentation:
+
+ ======= Heading 0 (reserved for the title in a document)
+ ------- Heading 1
+ ~~~~~~~ Heading 2
+ +++++++ Heading 3
+ ''''''' Heading 4
+
+ Avoid deeper levels because they do not render well.
======================
Yardstick Installation
======================
-
Yardstick supports installation by Docker or directly in Ubuntu. The
installation procedure for Docker and direct installation are detailed in
the sections below.
@@ -39,19 +59,20 @@ Several prerequisites are needed for Yardstick:
4. Connectivity from the Jumphost to the SUT public/external network
.. note:: *Jumphost* refers to any server which meets the previous
-requirements. Normally it is the same server from where the OPNFV
-deployment has been triggered.
+ requirements. Normally it is the same server from where the OPNFV
+ deployment has been triggered.
.. warning:: Connectivity from Jumphost is essential and it is of paramount
-importance to make sure it is working before even considering to install
-and run Yardstick. Make also sure you understand how your networking is
-designed to work.
+   importance to make sure it is working before even considering installing
+   and running Yardstick. Also make sure you understand how your networking
+   is designed to work.
.. note:: If your Jumphost is operating behind a company http proxy and/or
-Firewall, please first consult `Proxy Support`_ section which is towards the
-end of this document. That section details some tips/tricks which *may* be of
-help in a proxified environment.
+   Firewall, please first consult the `Proxy Support`_ section towards the
+   end of this document. That section details some tips/tricks which *may*
+   be of help in a proxied environment.
+.. _Install Yardstick using Docker:
Install Yardstick using Docker (first option) (**recommended**)
---------------------------------------------------------------
@@ -85,28 +106,37 @@ Run the Docker image to get a Yardstick container::
docker run -itd --privileged -v /var/run/docker.sock:/var/run/docker.sock \
-p 8888:5000 --name yardstick opnfv/yardstick:stable
-.. table:: Description of the parameters used with ``docker run`` command
-
- ======================= ====================================================
- Parameters Detail
- ======================= ====================================================
- -itd -i: interactive, Keep STDIN open even if not
- attached
- -t: allocate a pseudo-TTY detached mode, in the
- background
- ======================= ====================================================
- --privileged If you want to build ``yardstick-image`` in
- Yardstick container, this parameter is needed
- ======================= ====================================================
- -p 8888:5000 Redirect the a host port (8888) to a container port
- (5000)
- ======================= ====================================================
- -v /var/run/docker.sock If you want to use yardstick env grafana/influxdb to
- :/var/run/docker.sock create a grafana/influxdb container out of Yardstick
- container
- ======================= ====================================================
- --name yardstick The name for this container
-
+Description of the parameters used with the ``docker run`` command:
+
+ +------------------------+--------------------------------------------------+
+ | Parameters | Detail |
+ +========================+==================================================+
+ | -itd | -i: interactive, Keep STDIN open even if not |
+ | | attached |
+ | +--------------------------------------------------+
+ |                        | -t: allocate a pseudo-TTY; -d: detached mode,    |
+ |                        | run the container in the background              |
+ +------------------------+--------------------------------------------------+
+ | --privileged | If you want to build ``yardstick-image`` in |
+ | | Yardstick container, this parameter is needed |
+ +------------------------+--------------------------------------------------+
+ | -p 8888:5000           | Redirect a host port (8888) to a container       |
+ | | port (5000) |
+ +------------------------+--------------------------------------------------+
+ | -v /var/run/docker.sock| If you want to use yardstick env |
+ | :/var/run/docker.sock | grafana/influxdb to create a grafana/influxdb |
+ | | container out of Yardstick container |
+ +------------------------+--------------------------------------------------+
+ | --name yardstick | The name for this container |
+ +------------------------+--------------------------------------------------+
+
+
+If the host is restarted
+^^^^^^^^^^^^^^^^^^^^^^^^
+
+The yardstick container must be started if the host is rebooted::
+
+ docker start yardstick
Configure the Yardstick container environment
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -119,7 +149,7 @@ in the following sections. Before that, access the Yardstick container::
and then configure Yardstick environments in the Yardstick container.
Using the CLI command ``env prepare`` (first way) (**recommended**)
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
In the Yardstick container, the Yardstick repository is located in the
``/home/opnfv/repos`` directory. Yardstick provides a CLI to prepare OpenStack
@@ -129,18 +159,18 @@ automatically::
yardstick env prepare
.. note:: Since Euphrates release, the above command will not be able to
-automatically configure the ``/etc/yardstick/openstack.creds`` file. So before
-running the above command, it is necessary to create the
-``/etc/yardstick/openstack.creds`` file and save OpenStack environment
-variables into it manually. If you have the openstack credential file saved
-outside the Yardstick Docker container, you can do this easily by mapping the
-credential file into Yardstick container using::
+ automatically configure the ``/etc/yardstick/openstack.creds`` file. So before
+ running the above command, it is necessary to create the
+ ``/etc/yardstick/openstack.creds`` file and save OpenStack environment
+ variables into it manually. If you have the openstack credential file saved
+ outside the Yardstick Docker container, you can do this easily by mapping the
+ credential file into Yardstick container using::
- '-v /path/to/credential_file:/etc/yardstick/openstack.creds'
+ '-v /path/to/credential_file:/etc/yardstick/openstack.creds'
-when running the Yardstick container. For details of the required OpenStack
-environment variables please refer to section `Export OpenStack environment
-variables`_.
+ when running the Yardstick container. For details of the required OpenStack
+ environment variables please refer to section `Export OpenStack environment
+ variables`_.
The ``env prepare`` command may take up to 6-8 minutes to finish building
yardstick-image and other environment preparation. Meanwhile if you wish to
@@ -151,10 +181,10 @@ terminal window and execute the following command::
Manually exporting the env variables and initializing OpenStack (second way)
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Export OpenStack environment variables
-######################################
+''''''''''''''''''''''''''''''''''''''
Before running Yardstick it is necessary to export OpenStack environment
variables::
@@ -166,13 +196,13 @@ Environment variables in the ``openrc`` file have to include at least::
OS_AUTH_URL
OS_USERNAME
OS_PASSWORD
- OS_TENANT_NAME
+ OS_PROJECT_NAME
EXTERNAL_NETWORK
A sample ``openrc`` file may look like this::
export OS_PASSWORD=console
- export OS_TENANT_NAME=admin
+ export OS_PROJECT_NAME=admin
export OS_AUTH_URL=http://172.16.1.222:35357/v2.0
export OS_USERNAME=admin
export OS_VOLUME_API_VERSION=2
@@ -180,7 +210,7 @@ A sample ``openrc`` file may look like this::
Manual creation of Yardstick flavor and guest images
-####################################################
+''''''''''''''''''''''''''''''''''''''''''''''''''''
Before executing Yardstick test cases, make sure that Yardstick flavor and
guest image are available in OpenStack. Detailed steps about creating the
@@ -216,8 +246,8 @@ Yardstick is installed::
sudo -EH tools/yardstick-img-modify tools/ubuntu-server-cloudimg-modify.sh
.. warning:: Before building the guest image inside the Yardstick container,
-make sure the container is granted with privilege. The script will create files
-by default in ``/tmp/workspace/yardstick`` and the files will be owned by root.
+ make sure the container is granted with privilege. The script will create files
+ by default in ``/tmp/workspace/yardstick`` and the files will be owned by root.
The created image can be added to OpenStack using the OpenStack client or via
the OpenStack Dashboard::
@@ -237,14 +267,14 @@ image. Add Cirros and Ubuntu images to OpenStack::
Automatic initialization of OpenStack (third way)
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
++++++++++++++++++++++++++++++++++++++++++++++++++
Similar to the second way, the first step is also to
`Export OpenStack environment variables`_. Then the following steps should be
done.
Automatic creation of Yardstick flavor and guest images
-#######################################################
+'''''''''''''''''''''''''''''''''''''''''''''''''''''''
Yardstick has a script for automatically creating Yardstick flavor and building
Yardstick guest images. This script is mainly used for CI and can be also used
@@ -264,7 +294,7 @@ For usage of Yardstick GUI, please watch our demo video at
`Yardstick GUI demo`_.
.. note:: The Yardstick GUI is still in development, the GUI layout and
-features may change.
+ features may change.
Delete the Yardstick container
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -301,12 +331,6 @@ Prerequisite preparation::
sudo -EH pip install appdirs==1.4.0
sudo -EH pip install virtualenv
-Create a virtual environment::
-
- virtualenv ~/yardstick_venv
- export YARDSTICK_VENV=~/yardstick_venv
- source ~/yardstick_venv/bin/activate
-
Download the source code and install Yardstick from it::
git clone https://gerrit.opnfv.org/gerrit/yardstick
@@ -314,6 +338,10 @@ Download the source code and install Yardstick from it::
cd ~/yardstick
sudo -EH ./install.sh
+If the host is ever restarted, nginx and uwsgi need to be restarted::
+
+ service nginx restart
+ uwsgi -i /etc/yardstick/yardstick.ini
Configure the Yardstick environment (**Todo**)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -322,7 +350,6 @@ For installing Yardstick directly in Ubuntu, the ``yardstick env`` command is
not available. You need to prepare OpenStack environment variables and create
Yardstick flavor and guest images manually.
-
Uninstall Yardstick
^^^^^^^^^^^^^^^^^^^
@@ -429,7 +456,7 @@ of Yardstick ``help`` command and ``ping.py`` test sample::
yardstick task start samples/ping.yaml
.. note:: The above commands could be run in both the Yardstick container and
-the Ubuntu directly.
+   directly on the Ubuntu host.
Each testing tool supported by Yardstick has a sample configuration file.
These configuration files can be found in the ``samples`` directory.
@@ -437,6 +464,114 @@ These configuration files can be found in the ``samples`` directory.
Default location for the output is ``/tmp/yardstick.out``.
+Automatic installation of Yardstick
+-----------------------------------
+
+Automatic installation can be used as an alternative to the manual installation
+by providing parameters for the Ansible script ``install.yaml`` in the
+``nsb_setup.sh`` file. Yardstick can be installed on bare metal or in a
+container. The Yardstick container can be either pulled or built.
+
+Bare metal installation
+^^^^^^^^^^^^^^^^^^^^^^^
+
+Modify the ``install.yaml`` parameters in the ``nsb_setup.sh`` file to install
+Yardstick on an Ubuntu server:
+
+.. code-block:: console
+
+ ansible-playbook -i install-inventory.ini install.yaml \
+ -e IMAGE_PROPERTY='none' \
+ -e YARDSTICK_DIR=<path to Yardstick folder>
+
+.. note:: By default ``INSTALLATION_MODE`` is ``baremetal``.
+
+.. note:: No modification in ``install-inventory.ini`` is needed for Yardstick
+ installation.
+
+.. note:: To install Yardstick in a virtual environment, pass the parameter
+   ``-e VIRTUAL_ENVIRONMENT=True``.
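+
+As an illustration, a bare metal installation into a virtual environment could
+be started as follows (the Yardstick path below is only an example):
+
+.. code-block:: console
+
+    ansible-playbook -i install-inventory.ini install.yaml \
+    -e IMAGE_PROPERTY='none' \
+    -e VIRTUAL_ENVIRONMENT=True \
+    -e YARDSTICK_DIR=/home/opnfv/yardstick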
+
+Container installation
+^^^^^^^^^^^^^^^^^^^^^^
+
+Modify the ``install.yaml`` parameters in the ``nsb_setup.sh`` file to pull or
+build the Yardstick container. To pull the Yardstick image and start the
+container, run:
+
+.. code-block:: console
+
+ ansible-playbook -i install-inventory.ini install.yaml \
+ -e IMAGE_PROPERTY='none' \
+ -e INSTALLATION_MODE=container_pull
+
+.. note:: The Yardstick docker image is available for both Ubuntu 16.04 and
+   Ubuntu 18.04. By default, the Ubuntu 16.04 based docker image is used. To
+   use the Ubuntu 18.04 based docker image, pass the
+   ``-i opnfv/yardstick-ubuntu-18.04`` parameter to ``nsb_setup.sh``.
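+
+For example, a container pull using the Ubuntu 18.04 based image could be
+started as follows (assuming ``nsb_setup.sh`` is run from the Yardstick
+repository root):
+
+.. code-block:: console
+
+    ./nsb_setup.sh -i opnfv/yardstick-ubuntu-18.04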
+
+To build the Yardstick image, modify the Dockerfile as per the comments in it
+and run:
+
+.. code-block:: console
+
+ cd yardstick
+ docker build -f docker/Dockerfile -t opnfv/yardstick:<tag> .
+
+.. note:: By default, a Yardstick docker image based on Ubuntu 16.04 is built.
+   Pass ``-f docker/Dockerfile_ubuntu18`` to build a Yardstick docker image
+   based on Ubuntu 18.04.
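+
+For instance, an Ubuntu 18.04 based image build would then look like:
+
+.. code-block:: console
+
+    cd yardstick
+    docker build -f docker/Dockerfile_ubuntu18 -t opnfv/yardstick:<tag> .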
+
+.. note:: Add ``--build-arg http_proxy=http://<proxy_host>:<proxy_port>`` to
+   the build command if the server is behind a proxy.
+
+Parameters for ``install.yaml``
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Description of the parameters used with ``install.yaml``:
+
+ +-------------------------+-------------------------------------------------+
+ | Parameters | Detail |
+ +=========================+=================================================+
+ | -i install-inventory.ini|| Installs package dependencies on remote servers|
+ | || and localhost |
+ | || Mandatory parameter |
+ | || By default no remote servers are provided |
+ +-------------------------+-------------------------------------------------+
+ | -e YARDSTICK_DIR || Path to Yardstick folder |
+ | || Mandatory parameter for Yardstick bare metal |
+ | || installation |
+ +-------------------------+-------------------------------------------------+
+ | -e INSTALLATION_MODE || baremetal: Yardstick is installed to the bare |
+ | | metal |
+ | || Default parameter |
+ | +-------------------------------------------------+
+ | || container: Yardstick is installed in container |
+ | || Container is built from Dockerfile |
+ | +-------------------------------------------------+
+ | || container_pull: Yardstick is installed in |
+ | || container |
+ | || Container is pulled from docker hub |
+ +-------------------------+-------------------------------------------------+
+ | -e OS_RELEASE || xenial or bionic: Ubuntu version to be used for|
+ | || VM image (nsb or normal) |
+ | || Default is Ubuntu 16.04, xenial |
+ +-------------------------+-------------------------------------------------+
+ | -e IMAGE_PROPERTY || nsb: Build Yardstick NSB VM image |
+ | || Used to run Yardstick NSB tests on sample VNF |
+ | || Default parameter |
+ | +-------------------------------------------------+
+ | || normal: Build VM image to run ping test in |
+ | || OpenStack |
+ | +-------------------------------------------------+
+ | || none: don't build a VM image. |
+ +-------------------------+-------------------------------------------------+
+ | -e VIRTUAL_ENVIRONMENT  || Whether to install Yardstick in a virtualenv   |
+ | || Default is False |
+ +-------------------------+-------------------------------------------------+
+ | -e YARD_IMAGE_ARCH || CPU architecture on servers |
+ | || Default is 'amd64' |
+ +-------------------------+-------------------------------------------------+
+
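+Combining several of these parameters, a bare metal installation that builds
+the NSB VM image for Ubuntu 18.04 could, for example, be launched as follows
+(the Yardstick path is only an illustration):
+
+.. code-block:: console
+
+    ansible-playbook -i install-inventory.ini install.yaml \
+    -e YARDSTICK_DIR=/home/opnfv/yardstick \
+    -e IMAGE_PROPERTY='nsb' \
+    -e OS_RELEASE='bionic'
+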
+
Deploy InfluxDB and Grafana using Docker
----------------------------------------
@@ -448,26 +583,26 @@ Grafana to display data in the following sections.
Automatic deployment of InfluxDB and Grafana containers (**recommended**)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-Firstly, enter the Yardstick container::
+1. Enter the Yardstick container::
- sudo -EH docker exec -it yardstick /bin/bash
+ sudo -EH docker exec -it yardstick /bin/bash
-Secondly, create InfluxDB container and configure with the following command::
+2. Create InfluxDB container and configure with the following command::
- yardstick env influxdb
+ yardstick env influxdb
-Thirdly, create and configure Grafana container::
+3. Create and configure Grafana container::
- yardstick env grafana
+ yardstick env grafana
-Then you can run a test case and visit http://host_ip:3000
+Then you can run a test case and visit http://host_ip:1948
(``admin``/``admin``) to see the results.
.. note:: Executing ``yardstick env`` command to deploy InfluxDB and Grafana
-requires Jumphost's docker API version => 1.24. Run the following command to
-check the docker API version on the Jumphost::
+   requires Jumphost's docker API version >= 1.24. Run the following command to
+ check the docker API version on the Jumphost::
- docker version
+ docker version
Manual deployment of InfluxDB and Grafana containers
@@ -486,21 +621,21 @@ Run influxDB::
sudo -EH docker run -d --name influxdb \
-p 8083:8083 -p 8086:8086 --expose 8090 --expose 8099 \
tutum/influxdb
- docker exec -it influxdb bash
Configure influxDB::
- influx
- >CREATE USER root WITH PASSWORD 'root' WITH ALL PRIVILEGES
- >CREATE DATABASE yardstick;
- >use yardstick;
- >show MEASUREMENTS;
+ docker exec -it influxdb influx
+ > CREATE USER root WITH PASSWORD 'root' WITH ALL PRIVILEGES
+ > CREATE DATABASE yardstick;
+ > use yardstick;
+ > show MEASUREMENTS;
+ > exit
Run Grafana::
- sudo -EH docker run -d --name grafana -p 3000:3000 grafana/grafana
+ sudo -EH docker run -d --name grafana -p 1948:3000 grafana/grafana
-Log on http://{YOUR_IP_HERE}:3000 using ``admin``/``admin`` and configure
+Log on to ``http://{YOUR_IP_HERE}:1948`` using ``admin``/``admin`` and configure
database resource to be ``{YOUR_IP_HERE}:8086``.
.. image:: images/Grafana_config.png
@@ -513,7 +648,7 @@ Configure ``yardstick.conf``::
sudo cp etc/yardstick/yardstick.conf.sample /etc/yardstick/yardstick.conf
sudo vi /etc/yardstick/yardstick.conf
-Modify ``yardstick.conf``::
+Modify ``yardstick.conf`` to add the ``influxdb`` dispatcher::
[DEFAULT]
debug = True
@@ -526,207 +661,11 @@ Modify ``yardstick.conf``::
username = root
password = root
-Now you can run Yardstick test cases and store the results in influxDB.
-
+Now Yardstick will store results in InfluxDB when you run a testcase.
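+
+For example, after running a test case you can check from inside the InfluxDB
+container (assuming the container set up above is named ``influxdb``) that
+measurements have arrived; the measurement name follows the scenario type,
+``ping`` in this case::
+
+    yardstick task start samples/ping.yaml
+    docker exec -it influxdb influx
+    > use yardstick
+    > show MEASUREMENTS
+    > select * from ping
+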
Deploy InfluxDB and Grafana directly in Ubuntu (**Todo**)
---------------------------------------------------------
-
-Yardstick common CLI
---------------------
-
-List test cases
-^^^^^^^^^^^^^^^
-
-``yardstick testcase list``: This command line would list all test cases in
-Yardstick. It would show like below::
-
- +---------------------------------------------------------------------------------------
- | Testcase Name | Description
- +---------------------------------------------------------------------------------------
- | opnfv_yardstick_tc001 | Measure network throughput using pktgen
- | opnfv_yardstick_tc002 | measure network latency using ping
- | opnfv_yardstick_tc005 | Measure Storage IOPS, throughput and latency using fio.
- ...
- +---------------------------------------------------------------------------------------
-
-
-Show a test case config file
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-Take opnfv_yardstick_tc002 for an example. This test case measure network
-latency. You just need to type in ``yardstick testcase show
-opnfv_yardstick_tc002``, and the console would show the config yaml of this
-test case::
-
- ---
-
- schema: "yardstick:task:0.1"
- description: >
- Yardstick TC002 config file;
- measure network latency using ping;
-
- {% set image = image or "cirros-0.3.5" %}
-
- {% set provider = provider or none %}
- {% set physical_network = physical_network or 'physnet1' %}
- {% set segmentation_id = segmentation_id or none %}
- {% set packetsize = packetsize or 100 %}
-
- scenarios:
- {% for i in range(2) %}
- -
- type: Ping
- options:
- packetsize: {{packetsize}}
- host: athena.demo
- target: ares.demo
-
- runner:
- type: Duration
- duration: 60
- interval: 10
-
- sla:
- max_rtt: 10
- action: monitor
- {% endfor %}
-
- context:
- name: demo
- image: {{image}}
- flavor: yardstick-flavor
- user: cirros
-
- placement_groups:
- pgrp1:
- policy: "availability"
-
- servers:
- athena:
- floating_ip: true
- placement: "pgrp1"
- ares:
- placement: "pgrp1"
-
- networks:
- test:
- cidr: '10.0.1.0/24'
- {% if provider == "vlan" %}
- provider: {{provider}}
- physical_network: {{physical_network}}å
- {% if segmentation_id %}
- segmentation_id: {{segmentation_id}}
- {% endif %}
- {% endif %}
-
-
-Start a task to run yardstick test case
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-If you want run a test case, then you need to use ``yardstick task start
-<test_case_path>`` this command support some parameters as below::
-
- +---------------------+--------------------------------------------------+
- | Parameters | Detail |
- +=====================+==================================================+
- | -d | show debug log of yardstick running |
- | | |
- +---------------------+--------------------------------------------------+
- | --task-args | If you want to customize test case parameters, |
- | | use "--task-args" to pass the value. The format |
- | | is a json string with parameter key-value pair. |
- | | |
- +---------------------+--------------------------------------------------+
- | --task-args-file | If you want to use yardstick |
- | | env prepare command(or |
- | | related API) to load the |
- +---------------------+--------------------------------------------------+
- | --parse-only | |
- | | |
- | | |
- +---------------------+--------------------------------------------------+
- | --output-file \ | Specify where to output the log. if not pass, |
- | OUTPUT_FILE_PATH | the default value is |
- | | "/tmp/yardstick/yardstick.log" |
- | | |
- +---------------------+--------------------------------------------------+
- | --suite \ | run a test suite, TEST_SUITE_PATH specify where |
- | TEST_SUITE_PATH | the test suite locates |
- | | |
- +---------------------+--------------------------------------------------+
-
-
-Run Yardstick in a local environment
-------------------------------------
-
-We also have a guide about how to run Yardstick in a local environment.
-This work is contributed by Tapio Tallgren.
-You can find this guide at `How to run Yardstick in a local environment`_.
-
-
-Create a test suite for Yardstick
-------------------------------------
-
-A test suite in yardstick is a yaml file which include one or more test cases.
-Yardstick is able to support running test suite task, so you can customize your
-own test suite and run it in one task.
-
-``tests/opnfv/test_suites`` is the folder where Yardstick puts CI test suite.
-A typical test suite is like below (the ``fuel_test_suite.yaml`` example)::
-
- ---
- # Fuel integration test task suite
-
- schema: "yardstick:suite:0.1"
-
- name: "fuel_test_suite"
- test_cases_dir: "samples/"
- test_cases:
- -
- file_name: ping.yaml
- -
- file_name: iperf3.yaml
-
-As you can see, there are two test cases in the ``fuel_test_suite.yaml``. The
-``schema`` and the ``name`` must be specified. The test cases should be listed
-via the tag ``test_cases`` and their relative path is also marked via the tag
-``test_cases_dir``.
-
-Yardstick test suite also supports constraints and task args for each test
-case. Here is another sample (the ``os-nosdn-nofeature-ha.yaml`` example) to
-show this, which is digested from one big test suite::
-
- ---
-
- schema: "yardstick:suite:0.1"
-
- name: "os-nosdn-nofeature-ha"
- test_cases_dir: "tests/opnfv/test_cases/"
- test_cases:
- -
- file_name: opnfv_yardstick_tc002.yaml
- -
- file_name: opnfv_yardstick_tc005.yaml
- -
- file_name: opnfv_yardstick_tc043.yaml
- constraint:
- installer: compass
- pod: huawei-pod1
- task_args:
- huawei-pod1: '{"pod_info": "etc/yardstick/.../pod.yaml",
- "host": "node4.LF","target": "node5.LF"}'
-
-As you can see in test case ``opnfv_yardstick_tc043.yaml``, there are two
-tags, ``constraint`` and ``task_args``. ``constraint`` is to specify which
-installer or pod it can be run in the CI environment. ``task_args`` is to
-specify the task arguments for each pod.
-
-All in all, to create a test suite in Yardstick, you just need to create a
-yaml file and add test cases, constraint or task arguments if necessary.
-
-
Proxy Support
-------------
@@ -786,7 +725,7 @@ stop and delete the container::
sudo docker rm yardstick
.. warning:: Be careful, the above ``rm`` command will delete the container
-completely. Everything on this container will be lost.
+ completely. Everything on this container will be lost.
Then follow the previous instructions `Prepare the Yardstick container`_ to
rebuild the Yardstick container.
@@ -800,4 +739,3 @@ References
.. _`Cirros 0.3.5`: http://download.cirros-cloud.net/0.3.5/cirros-0.3.5-x86_64-disk.img
.. _`Ubuntu 16.04`: https://cloud-images.ubuntu.com/xenial/current/xenial-server-cloudimg-amd64-disk1.img
.. _`Yardstick GUI demo`: https://www.youtube.com/watch?v=M3qbJDp6QBk
-.. _`How to run Yardstick in a local environment`: https://wiki.opnfv.org/display/yardstick/How+to+run+Yardstick+in+a+local+environment
diff --git a/docs/testing/user/userguide/05-operation.rst b/docs/testing/user/userguide/05-operation.rst
new file mode 100644
index 000000000..82539c97f
--- /dev/null
+++ b/docs/testing/user/userguide/05-operation.rst
@@ -0,0 +1,296 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, Intel, Ericsson AB, Huawei Technologies Co. Ltd and others.
+
+..
+ Convention for heading levels in Yardstick:
+ ======= Heading 0 (reserved for the title in a document)
+ ------- Heading 1
+ ^^^^^^^ Heading 2
+ +++++++ Heading 3
+ ''''''' Heading 4
+ Avoid deeper levels because they do not render well.
+
+===============
+Yardstick Usage
+===============
+
+Once you have yardstick installed, you can start using it to run testcases
+immediately, through the CLI. You can also define and run new testcases and
+test suites. This chapter details basic usage (running testcases), as well as
+more advanced usage (creating your own testcases).
+
+Yardstick common CLI
+--------------------
+
+List test cases
+^^^^^^^^^^^^^^^
+
+``yardstick testcase list``: This command lists all test cases in Yardstick.
+It shows output like the following::
+
+ +---------------------------------------------------------------------------------------
+ | Testcase Name | Description
+ +---------------------------------------------------------------------------------------
+ | opnfv_yardstick_tc001 | Measure network throughput using pktgen
+ | opnfv_yardstick_tc002 | measure network latency using ping
+ | opnfv_yardstick_tc005 | Measure Storage IOPS, throughput and latency using fio.
+ ...
+ +---------------------------------------------------------------------------------------
+
+
+Show a test case config file
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Take opnfv_yardstick_tc002 as an example. This test case measures network
+latency. You just need to type in ``yardstick testcase show
+opnfv_yardstick_tc002``, and the console would show the config yaml of this
+test case:
+
+.. literalinclude::
+ ../../../../tests/opnfv/test_cases/opnfv_yardstick_tc002.yaml
+ :lines: 9-
+
+Run a Yardstick test case
+^^^^^^^^^^^^^^^^^^^^^^^^^
+
+If you want to run a test case, you need to use ``yardstick task start
+<test_case_path>``. This command supports the parameters described below:
+
+ +---------------------+--------------------------------------------------+
+ | Parameters | Detail |
+ +=====================+==================================================+
+ | -d | show debug log of yardstick running |
+ | | |
+ +---------------------+--------------------------------------------------+
+ | --task-args | If you want to customize test case parameters, |
+ | | use "--task-args" to pass the value. The format |
+ | | is a json string with parameter key-value pair. |
+ | | |
+ +---------------------+--------------------------------------------------+
+ | --task-args-file    | If you want to pass test case parameters from a  |
+ |                     | file, use ``--task-args-file`` to specify the    |
+ |                     | path of the file containing them                 |
+ +---------------------+--------------------------------------------------+
+ | --parse-only        | Parse the test case config file and exit without |
+ |                     | running the test case                            |
+ +---------------------+--------------------------------------------------+
+ | --output-file \     | Specify where to output the log. If not given,   |
+ | OUTPUT_FILE_PATH | the default value is |
+ | | "/tmp/yardstick/yardstick.log" |
+ | | |
+ +---------------------+--------------------------------------------------+
+ | --suite \ | run a test suite, TEST_SUITE_PATH specify where |
+ | TEST_SUITE_PATH | the test suite locates |
+ | | |
+ +---------------------+--------------------------------------------------+
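+
+As an illustration, running the TC002 test case with a customized packet size
+and log location could look like the following (the values are only
+examples)::
+
+    yardstick task start tests/opnfv/test_cases/opnfv_yardstick_tc002.yaml \
+    --task-args '{"packetsize": 200}' \
+    --output-file /tmp/yardstick-tc002.log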
+
+
+Run Yardstick in a local environment
+------------------------------------
+
+We also have a guide about `How to run Yardstick in a local environment`_.
+This work is contributed by Tapio Tallgren.
+
+Create a new testcase for Yardstick
+-----------------------------------
+
+As a user, you may want to define a new testcase in addition to the ones
+already available in Yardstick. This section will show you how to do this.
+
+Each testcase consists of two sections:
+
+* ``scenarios`` describes what will be done by the test
+* ``context`` describes the environment in which the test will be run.
+
+Defining the testcase scenarios
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+TODO
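+
+As a starting point, the following minimal sketch (modelled on the TC002 ping
+test case shown earlier) illustrates the general shape of a ``scenarios``
+section::
+
+  scenarios:
+  -
+    type: Ping
+    options:
+      packetsize: 200
+    host: athena.demo
+    target: ares.demo
+    runner:
+      type: Duration
+      duration: 60
+      interval: 10
+    sla:
+      max_rtt: 10
+      action: monitor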
+
+Defining the testcase context(s)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Each testcase consists of one or more contexts, which describe the environment
+in which the testcase will be run.
+Currently available contexts are:
+
+* ``Dummy``: this is a no-op context, and is used when there is no environment
+ to set up e.g. when testing whether OpenStack services are available
+* ``Node``: this context is used to perform operations on baremetal servers
+* ``Heat``: uses OpenStack to provision the required hosts, networks, etc.
+* ``Kubernetes``: uses Kubernetes to provision the resources required for the
+ test.
+
+Regardless of the context type, the ``context`` section of the testcase will
+consist of the following::
+
+ context:
+ name: demo
+ type: Dummy|Node|Heat|Kubernetes
+
+The content of the ``context`` section will vary based on the context type.
+
+Dummy Context
++++++++++++++
+
+No additional information is required for the Dummy context::
+
+ context:
+ name: my_context
+ type: Dummy
+
+Node Context
+++++++++++++
+
+TODO
+
+Heat Context
+++++++++++++
+
+In addition to ``name`` and ``type``, a Heat context requires the following
+arguments:
+
+* ``image``: the image to be used to boot VMs
+* ``flavor``: the flavor to be used for VMs in the context
+* ``user``: the username for connecting into the VMs
+* ``networks``: The networks to be created; networks are identified by name
+
+ * ``name``: network name (required)
+ * (TODO) Any optional attributes
+
+* ``servers``: The servers to be created
+
+ * ``name``: server name
+ * (TODO) Any optional attributes
+
+In addition to the required arguments, the following optional arguments can be
+passed to the Heat context:
+
+* ``placement_groups``:
+
+ * ``name``: the name of the placement group to be created
+ * ``policy``: either ``affinity`` or ``availability``
+* ``server_groups``:
+
+ * ``name``: the name of the server group
+ * ``policy``: either ``affinity`` or ``anti-affinity``
+
+Combining these elements together, a sample Heat context config looks like:
+
+.. literalinclude::
+ ../../../../yardstick/tests/integration/dummy-scenario-heat-context.yaml
+ :start-after: ---
+ :emphasize-lines: 14-
+
+Using existing HOT Templates
+'''''''''''''''''''''''''''''
+
+TODO
+
+Kubernetes Context
+++++++++++++++++++
+
+TODO
+
+Using multiple contexts in a testcase
++++++++++++++++++++++++++++++++++++++
+
+When using multiple contexts in a testcase, the ``context`` section is replaced
+by a ``contexts`` section, and each context is separated with a ``-`` line::
+
+ contexts:
+ -
+ name: context1
+ type: Heat
+ ...
+ -
+ name: context2
+ type: Node
+ ...
+
+
+Reusing a context
++++++++++++++++++
+
+Typically, a context is torn down after a testcase is run; however, the user
+may wish to keep a context intact after a testcase is complete.
+
+.. note::
+ This feature has been implemented for the Heat context only
+
+To keep or reuse a context, the ``flags`` option must be specified:
+
+* ``no_setup``: skip the deploy stage, and fetch the details of a deployed
+ context/Heat stack.
+* ``no_teardown``: skip the undeploy stage, thus keeping the stack intact for
+ the next test
+
+If either of these flags is set to ``True``, the context information must
+still be given. By default, these flags are disabled::
+
+ context:
+ name: mycontext
+ type: Heat
+ flags:
+ no_setup: True
+ no_teardown: True
+ ...
+
+Create a test suite for Yardstick
+---------------------------------
+
+A test suite in Yardstick is a .yaml file which includes one or more test
+cases. Yardstick can run a test suite as a task, so you can customize your
+own test suite and run it in one task.
+
+``tests/opnfv/test_suites`` is the folder where Yardstick puts CI test suites.
+A typical test suite is like below (the ``fuel_test_suite.yaml`` example):
+
+.. literalinclude::
+ ../../../../tests/opnfv/test_suites/fuel_test_suite.yaml
+ :lines: 9-
+
+As you can see, there are two test cases in the ``fuel_test_suite.yaml``. The
+``schema`` and the ``name`` must be specified. The test cases should be listed
+via the tag ``test_cases`` and their relative path is also marked via the tag
+``test_cases_dir``.
+
+Yardstick test suite also supports constraints and task args for each test
+case. Here is another sample (the ``os-nosdn-nofeature-ha.yaml`` example) to
+show this, which is digested from one big test suite::
+
+ ---
+
+ schema: "yardstick:suite:0.1"
+
+ name: "os-nosdn-nofeature-ha"
+ test_cases_dir: "tests/opnfv/test_cases/"
+ test_cases:
+ -
+ file_name: opnfv_yardstick_tc002.yaml
+ -
+ file_name: opnfv_yardstick_tc005.yaml
+ -
+ file_name: opnfv_yardstick_tc043.yaml
+ constraint:
+ installer: compass
+ pod: huawei-pod1
+ task_args:
+ huawei-pod1: '{"pod_info": "etc/yardstick/.../pod.yaml",
+ "host": "node4.LF","target": "node5.LF"}'
+
+As you can see in test case ``opnfv_yardstick_tc043.yaml``, there are two
+tags, ``constraint`` and ``task_args``. ``constraint`` is to specify which
+installer or pod it can be run in the CI environment. ``task_args`` is to
+specify the task arguments for each pod.
+
+All in all, to create a test suite in Yardstick, you just need to create a
+yaml file and add test cases, constraint or task arguments if necessary.
+
+References
+----------
+
+.. _`How to run Yardstick in a local environment`: https://wiki.opnfv.org/display/yardstick/How+to+run+Yardstick+in+a+local+environment
diff --git a/docs/testing/user/userguide/05-yardstick_plugin.rst b/docs/testing/user/userguide/06-yardstick-plugin.rst
index ec0b49ff1..a5d890b14 100644
--- a/docs/testing/user/userguide/05-yardstick_plugin.rst
+++ b/docs/testing/user/userguide/06-yardstick-plugin.rst
@@ -3,13 +3,23 @@
.. http://creativecommons.org/licenses/by/4.0
.. (c) OPNFV, Ericsson AB, Huawei Technologies Co.,Ltd and others.
+.. Convention for heading levels in Yardstick documentation:
+
+ ======= Heading 0 (reserved for the title in a document)
+ ------- Heading 1
+ ^^^^^^^ Heading 2
+ +++++++ Heading 3
+ ''''''' Heading 4
+
+ Avoid deeper levels because they do not render well.
+
===================================
Installing a plug-in into Yardstick
===================================
Abstract
-========
+--------
Yardstick provides a ``plugin`` CLI command to support integration with other
OPNFV testing projects. Below is an example invocation of Yardstick plugin
@@ -17,7 +27,7 @@ command and Storperf plug-in sample.
Installing Storperf into Yardstick
-==================================
+----------------------------------
Storperf is delivered as a Docker container from
https://hub.docker.com/r/opnfv/storperf/tags/.
@@ -31,7 +41,7 @@ In this introduction we will install Storperf on Jump Host.
Step 0: Environment preparation
->>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Running Storperf on Jump Host
Requirements:
@@ -47,26 +57,26 @@ environment and other dependencies:
1. Make sure docker is installed.
2. Make sure Keystone, Nova, Neutron, Glance, Heat are installed correctly.
3. Make sure Jump Host have access to the OpenStack Controller API.
-4. Make sure Jump Host must have internet connectivity for downloading docker image.
-5. You need to know where to get basic openstack Keystone authorization info, such as
- OS_PASSWORD, OS_TENANT_NAME, OS_AUTH_URL, OS_USERNAME.
-6. To run a Storperf container, you need to have OpenStack Controller environment
- variables defined and passed to Storperf container. The best way to do this is to
- put environment variables in a "storperf_admin-rc" file. The storperf_admin-rc
- should include credential environment variables at least:
-
-* OS_AUTH_URL
-* OS_USERNAME
-* OS_PASSWORD
-* OS_TENANT_ID
-* OS_TENANT_NAME
-* OS_PROJECT_NAME
-* OS_PROJECT_ID
-* OS_USER_DOMAIN_ID
-
-*Yardstick* has a "prepare_storperf_admin-rc.sh" script which can be used to
-generate the "storperf_admin-rc" file, this script is located at
-test/ci/prepare_storperf_admin-rc.sh
+4. Make sure the Jump Host has internet connectivity for downloading the
+   docker image.
+5. You need to know where to get basic openstack Keystone authorization info,
+ such as OS_PASSWORD, OS_PROJECT_NAME, OS_AUTH_URL, OS_USERNAME.
+6. To run a Storperf container, you need to have OpenStack Controller
+   environment variables defined and passed to the Storperf container. The
+   best way to do this is to put the environment variables in a
+   "storperf_admin-rc" file. The storperf_admin-rc file should include at
+   least the following credential environment variables:
+
+ * OS_AUTH_URL
+ * OS_USERNAME
+ * OS_PASSWORD
+ * OS_PROJECT_NAME
+ * OS_PROJECT_ID
+ * OS_USER_DOMAIN_ID
+
+*Yardstick* has a ``prepare_storperf_admin-rc.sh`` script which can be used to
+generate the ``storperf_admin-rc`` file. The script is located at
+``test/ci/prepare_storperf_admin-rc.sh``
::
@@ -76,8 +86,9 @@ test/ci/prepare_storperf_admin-rc.sh
USERNAME=${OS_USERNAME:-admin}
PASSWORD=${OS_PASSWORD:-console}
+ # OS_TENANT_NAME is still present to keep backward compatibility with legacy
+ # deployments, but should be replaced by OS_PROJECT_NAME.
TENANT_NAME=${OS_TENANT_NAME:-admin}
- TENANT_ID=`openstack project show admin|grep '\bid\b' |awk -F '|' '{print $3}'|sed -e 's/^[[:space:]]*//'`
PROJECT_NAME=${OS_PROJECT_NAME:-$TENANT_NAME}
PROJECT_ID=`openstack project show admin|grep '\bid\b' |awk -F '|' '{print $3}'|sed -e 's/^[[:space:]]*//'`
USER_DOMAIN_ID=${OS_USER_DOMAIN_ID:-default}
@@ -90,23 +101,21 @@ test/ci/prepare_storperf_admin-rc.sh
echo "OS_PASSWORD="$PASSWORD >> ~/storperf_admin-rc
echo "OS_PROJECT_NAME="$PROJECT_NAME >> ~/storperf_admin-rc
echo "OS_PROJECT_ID="$PROJECT_ID >> ~/storperf_admin-rc
- echo "OS_TENANT_NAME="$TENANT_NAME >> ~/storperf_admin-rc
- echo "OS_TENANT_ID="$TENANT_ID >> ~/storperf_admin-rc
echo "OS_USER_DOMAIN_ID="$USER_DOMAIN_ID >> ~/storperf_admin-rc
-The generated "storperf_admin-rc" file will be stored in the root directory. If
-you installed *Yardstick* using Docker, this file will be located in the
+The generated ``storperf_admin-rc`` file will be stored in the root directory.
+If you installed *Yardstick* using Docker, this file will be located in the
container. You may need to copy it to the root directory of the Storperf
deployed host.
Step 1: Plug-in configuration file preparation
->>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
To install a plug-in, first you need to prepare a plug-in configuration file in
-YAML format and store it in the "plugin" directory. The plugin configration file
-work as the input of yardstick "plugin" command. Below is the Storperf plug-in
-configuration file sample:
+YAML format and store it in the ``plugin`` directory. The plugin configuration
+file works as the input of the yardstick ``plugin`` command. Below is the
+Storperf plug-in configuration file sample:
::
---
@@ -126,29 +135,29 @@ Here the Storperf will be installed on IP 192.168.23.2 which is the Jump Host
in my local environment.
Step 2: Plug-in install/remove scripts preparation
->>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-In "yardstick/resource/scripts" directory, there are two folders: a "install"
-folder and a "remove" folder. You need to store the plug-in install/remove
-scripts in these two folders respectively.
+In ``yardstick/resource/scripts`` directory, there are two folders: an
+``install`` folder and a ``remove`` folder. You need to store the plug-in
+install/remove scripts in these two folders respectively.
The detailed install or remove operation should be defined in these two
scripts. The name of both install and remove scripts should match the plug-in
name that you specified in the plug-in configuration file.
-For example, the install and remove scripts for Storperf are both named to
-"storperf.bash".
+For example, the install and remove scripts for Storperf are both named
+``storperf.bash``.
Step 3: Install and remove Storperf
->>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
To install Storperf, simply execute the following command::
# Install Storperf
yardstick plugin install plugin/storperf.yaml
-removing Storperf from yardstick
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+Removing Storperf from Yardstick
+++++++++++++++++++++++++++++++++
To remove Storperf, simply execute the following command::
diff --git a/docs/testing/user/userguide/06-result-store-InfluxDB.rst b/docs/testing/user/userguide/07-result-store-InfluxDB.rst
index 747927889..8a9196b1b 100644
--- a/docs/testing/user/userguide/06-result-store-InfluxDB.rst
+++ b/docs/testing/user/userguide/07-result-store-InfluxDB.rst
@@ -1,14 +1,23 @@
.. This work is licensed under a Creative Commons Attribution 4.0 International
-.. License.
-.. http://creativecommons.org/licenses/by/4.0
-.. (c) OPNFV, 2016 Huawei Technologies Co.,Ltd and others.
+ License.
+ http://creativecommons.org/licenses/by/4.0
+ (c) OPNFV, 2016 Huawei Technologies Co.,Ltd and others.
+ Convention for heading levels in Yardstick documentation:
+
+ ======= Heading 0 (reserved for the title in a document)
+ ------- Heading 1
+ ^^^^^^^ Heading 2
+ +++++++ Heading 3
+ ''''''' Heading 4
+
+ Avoid deeper levels because they do not render well.
==============================================
Store Other Project's Test Results in InfluxDB
==============================================
Abstract
-========
+--------
.. _Framework: https://wiki.opnfv.org/download/attachments/6827660/wiki.png?version=1&modificationDate=1470298075000&api=v2
@@ -21,10 +30,10 @@ into community's InfluxDB. The framework is shown in Framework_.
:alt: Store Other Project's Test Results in InfluxDB
Store Storperf Test Results into Community's InfluxDB
-=====================================================
+-----------------------------------------------------
.. _Influxdb: https://git.opnfv.org/cgit/yardstick/tree/yardstick/dispatcher/influxdb.py
-.. _Mingjiang: limingjiang@huawei.com
+.. _Mingjiang: mailto:limingjiang@huawei.com
.. _Visual: https://wiki.opnfv.org/download/attachments/6827660/tc074.PNG?version=1&modificationDate=1470298075000&api=v2
.. _Login: http://testresults.opnfv.org/grafana/login
@@ -40,12 +49,13 @@ into community's InfluxDB:
will be supported in the future.
Our plan is to support rest-api in D release so that other testing projects can
-call the rest-api to use yardstick dispatcher service to push data to yardstick's
-influxdb database.
+call the rest-api to use yardstick dispatcher service to push data to
+Yardstick's InfluxDB database.
-For now, influxdb only support line protocol, and the json protocol is deprecated.
+For now, InfluxDB only supports line protocol, and the json protocol is
+deprecated.
-Take ping test case for example, the raw_result is json format like this:
+Take ping test case for example, the ``raw_result`` is json format like this:
::
"benchmark": {
@@ -61,23 +71,24 @@ Take ping test case for example, the raw_result is json format like this:
"runner_id": 2625
}
-With the help of "influxdb_line_protocol", the json is transform to like below as a line string:
-::
+With the help of "influxdb_line_protocol", the json is transformed into a line
+string like the one below::
'ping,deploy_scenario=unknown,host=athena.demo,installer=unknown,pod_name=unknown,
runner_id=2625,scenarios=Ping,target=ares.demo,task_id=77755f38-1f6a-4667-a7f3-
301c99963656,version=unknown rtt.ares=1.125 1470315409868094976'
-So, for data output of json format, you just need to transform json into line format and call
-influxdb api to post the data into the database. All this function has been implemented in Influxdb_.
-If you need support on this, please contact Mingjiang_.
+So, for data output in json format, you just need to transform the json into
+line format and call the influxdb api to post the data into the database. All
+of this functionality has been implemented in Influxdb_. If you need support
+on this, please contact Mingjiang_.
::
curl -i -XPOST 'http://104.197.68.199:8086/write?db=yardstick' --
data-binary 'ping,deploy_scenario=unknown,host=athena.demo,installer=unknown, ...'
-Grafana will be used for visualizing the collected test data, which is shown in Visual_. Grafana
-can be accessed by Login_.
+Grafana will be used for visualizing the collected test data, which is shown in
+Visual_. Grafana can be accessed by Login_.
.. image:: images/results_visualization.png
diff --git a/docs/testing/user/userguide/07-grafana.rst b/docs/testing/user/userguide/08-grafana.rst
index 416857b71..ebe9f570d 100644
--- a/docs/testing/user/userguide/07-grafana.rst
+++ b/docs/testing/user/userguide/08-grafana.rst
@@ -3,13 +3,23 @@
.. http://creativecommons.org/licenses/by/4.0
.. (c) 2016 Huawei Technologies Co.,Ltd and others
+.. Convention for heading levels in Yardstick documentation:
+
+ ======= Heading 0 (reserved for the title in a document)
+ ------- Heading 1
+ ^^^^^^^ Heading 2
+ +++++++ Heading 3
+ ''''''' Heading 4
+
+ Avoid deeper levels because they do not render well.
+
=================
Grafana dashboard
=================
Abstract
-========
+--------
This chapter describes the Yardstick grafana dashboard. The Yardstick grafana
dashboard can be found here: http://testresults.opnfv.org/grafana/
@@ -21,14 +31,14 @@ dashboard can be found here: http://testresults.opnfv.org/grafana/
Public access
-=============
+-------------
Yardstick provides a public account for accessing the dashboard. The username
and password are both set to ‘opnfv’.
Testcase dashboard
-==================
+------------------
For each test case, there is a dedicated dashboard. Shown here is the dashboard
of TC002.
@@ -36,7 +46,7 @@ of TC002.
.. image:: images/TC002.png
:width: 800px
- :alt:TC002 dashboard
+ :alt: TC002 dashboard
On the top left of each test case dashboard there is a dashboard selection;
you can switch to different test cases using this pull-down menu.
@@ -56,7 +66,7 @@ zoom out the chart.
Administration access
-=====================
+---------------------
For a user with administration rights it is easy to update and save any
dashboard configuration. Saved updates immediately take effect and become live.
@@ -72,11 +82,11 @@ This may cause issues like:
Any change made by administrator should be careful.
-Add a dashboard into yardstick grafana
-======================================
+Add a dashboard into Yardstick Grafana
+--------------------------------------
Due to security concerns, users using the public opnfv account are not able
-to edit the yardstick grafana directly.It takes a few more steps for a
+to edit the yardstick grafana directly. It takes a few more steps for a
non-yardstick user to add a custom dashboard into yardstick grafana.
There are 6 steps to go.
@@ -108,8 +118,10 @@ There are 6 steps to go.
5. When finished with all Grafana configuration changes in this temporary
dashboard then chose "export" of the updated dashboard copy into a JSON file
- and put it up for review in Gerrit, in file /yardstick/dashboard/Yardstick-TCxxx-yyyyyyyyyyyyy.
- For instance a typical default name of the file would be "Yardstick-TC001 Copy-1234567891234".
+ and put it up for review in Gerrit, in file
+ ``/yardstick/dashboard/Yardstick-TCxxx-yyyyyyyyyyyyy``.
+ For instance a typical default name of the file would be
+ ``Yardstick-TC001 Copy-1234567891234``.
6. Once you finish your dashboard, the next step is exporting the configuration
file and propose a patch into Yardstick. Yardstick team will review and
diff --git a/docs/testing/user/userguide/08-api.rst b/docs/testing/user/userguide/09-api.rst
index ff6e62228..f227878ae 100644
--- a/docs/testing/user/userguide/08-api.rst
+++ b/docs/testing/user/userguide/09-api.rst
@@ -2,9 +2,19 @@
.. License.
.. http://creativecommons.org/licenses/by/4.0
.. (c) OPNFV, Huawei Technologies Co.,Ltd and others.
+.. Convention for heading levels in Yardstick documentation:
+ ======= Heading 0 (reserved for the title in a document)
+ ------- Heading 1
+ ^^^^^^^ Heading 2
+ +++++++ Heading 3
+ ''''''' Heading 4
+
+ Avoid deeper levels because they do not render well.
+
+=====================
Yardstick Restful API
-======================
+=====================
Abstract
@@ -17,11 +27,14 @@ Available API
-------------
/yardstick/env/action
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+^^^^^^^^^^^^^^^^^^^^^
-Description: This API is used to prepare Yardstick test environment. For Euphrates, it supports:
+Description: This API is used to prepare Yardstick test environment.
+For Euphrates, it supports:
-1. Prepare yardstick test environment, including set external network environment variable, load Yardstick VM images and create flavors;
+1. Prepare yardstick test environment, including setting the
+   ``EXTERNAL_NETWORK`` environment variable, loading Yardstick VM images and
+   creating flavors;
2. Start an InfluxDB Docker container and config Yardstick output to InfluxDB;
3. Start a Grafana Docker container and config it with the InfluxDB.
@@ -35,30 +48,33 @@ Prepare Yardstick test environment
Example::
{
- 'action': 'prepareYardstickEnv'
+ 'action': 'prepare_env'
}
-This is an asynchronous API. You need to call /yardstick/asynctask API to get the task result.
+This is an asynchronous API. You need to call ``/yardstick/asynctask`` API to
+get the task result.
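+
+For instance, the action can be posted with curl, using the same
+``<SERVER IP>:<PORT>`` convention as the examples below::
+
+    curl -X POST -H "Content-Type: application/json" \
+    -d '{"action": "prepare_env"}' \
+    http://<SERVER IP>:<PORT>/yardstick/env/action
+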
Start and config an InfluxDB docker container
Example::
{
- 'action': 'createInfluxDBContainer'
+ 'action': 'create_influxdb'
}
-This is an asynchronous API. You need to call /yardstick/asynctask API to get the task result.
+This is an asynchronous API. You need to call ``/yardstick/asynctask`` API to
+get the task result.
Start and config a Grafana docker container
Example::
{
- 'action': 'createGrafanaContainer'
+ 'action': 'create_grafana'
}
-This is an asynchronous API. You need to call /yardstick/asynctask API to get the task result.
+This is an asynchronous API. You need to call ``/yardstick/asynctask`` API to
+get the task result.
/yardstick/asynctask
@@ -73,10 +89,15 @@ Method: GET
Get the status of asynchronous tasks
Example::
- http://localhost:8888/yardstick/asynctask?task_id=3f3f5e03-972a-4847-a5f8-154f1b31db8c
+ http://<SERVER IP>:<PORT>/yardstick/asynctask?task_id=3f3f5e03-972a-4847-a5f8-154f1b31db8c
The returned status will be 0(running), 1(finished) and 2(failed).
+NOTE::
+
+    <SERVER IP>: The IP of the host where you start your yardstick container
+    <PORT>: The outside port of the port mapping which is set when you start
+            the yardstick container
+
/yardstick/testcases
^^^^^^^^^^^^^^^^^^^^
@@ -90,7 +111,7 @@ Method: GET
Get a list of released test cases
Example::
- http://localhost:8888/yardstick/testcases
+ http://<SERVER IP>:<PORT>/yardstick/testcases
/yardstick/testcases/release/action
@@ -106,14 +127,15 @@ Run a released test case
Example::
{
- 'action': 'runTestCase',
+ 'action': 'run_test_case',
'args': {
'opts': {},
- 'testcase': 'tc002'
+ 'testcase': 'opnfv_yardstick_tc002'
}
}
-This is an asynchronous API. You need to call /yardstick/results to get the result.
+This is an asynchronous API. You need to call ``/yardstick/results`` to get the
+result.
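+
+For instance, the test case above can be triggered with curl::
+
+    curl -X POST -H "Content-Type: application/json" \
+    -d '{"action": "run_test_case", "args": {"opts": {}, "testcase": "opnfv_yardstick_tc002"}}' \
+    http://<SERVER IP>:<PORT>/yardstick/testcases/release/action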
/yardstick/testcases/samples/action
@@ -129,20 +151,22 @@ Run a sample test case
Example::
{
- 'action': 'runTestCase',
+ 'action': 'run_test_case',
'args': {
'opts': {},
'testcase': 'ping'
}
}
-This is an asynchronous API. You need to call /yardstick/results to get the result.
+This is an asynchronous API. You need to call ``/yardstick/results`` to get
+the result.
/yardstick/testcases/<testcase_name>/docs
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-Description: This API is used to the documentation of a certain released test case.
+Description: This API is used to get the documentation of a certain released
+test case.
Method: GET
@@ -151,11 +175,11 @@ Method: GET
Get the documentation of a certain test case
Example::
- http://localhost:8888/yardstick/taskcases/opnfv_yardstick_tc002/docs
+   http://<SERVER IP>:<PORT>/yardstick/testcases/opnfv_yardstick_tc002/docs
/yardstick/testsuites/action
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Description: This API is used to run a Yardstick test suite.
@@ -167,17 +191,19 @@ Run a test suite
Example::
{
- 'action': 'runTestSuite',
+ 'action': 'run_test_suite',
'args': {
'opts': {},
- 'testcase': 'smoke'
+ 'testsuite': 'opnfv_smoke'
}
}
-This is an asynchronous API. You need to call /yardstick/results to get the result.
+This is an asynchronous API. You need to call /yardstick/results to get the
+result.
/yardstick/tasks/<task_id>/log
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Description: This API is used to get the real time log of test case execution.
@@ -188,13 +214,15 @@ Method: GET
Get the real time log of test case execution
Example::
- http://localhost:8888/yardstick/tasks/14795be8-f144-4f54-81ce-43f4e3eab33f/log?index=0
+ http://<SERVER IP>:<PORT>/yardstick/tasks/14795be8-f144-4f54-81ce-43f4e3eab33f/log?index=0
/yardstick/results
^^^^^^^^^^^^^^^^^^
-Description: This API is used to get the test results of tasks. If you call /yardstick/testcases/samples/action API, it will return a task id. You can use the returned task id to get the results by using this API.
+Description: This API is used to get the test results of tasks. If you call
+/yardstick/testcases/samples/action API, it will return a task id. You can use
+the returned task id to get the results by using this API.
Method: GET
@@ -203,17 +231,19 @@ Method: GET
Get test results of one task
Example::
- http://localhost:8888/yardstick/results?task_id=3f3f5e03-972a-4847-a5f8-154f1b31db8c
+ http://<SERVER IP>:<PORT>/yardstick/results?task_id=3f3f5e03-972a-4847-a5f8-154f1b31db8c
This API will return a list of test case result
-/api/v2/yardstick/openrcs/action
+/api/v2/yardstick/openrcs
+^^^^^^^^^^^^^^^^^^^^^^^^^
-Description: This API provides functionality of handling OpenStack credential file (openrc). For Euphrates, it supports:
+Description: This API provides functionality of handling OpenStack credential
+file (openrc). For Euphrates, it supports:
1. Upload an openrc file for an OpenStack environment;
-2. Update an openrc file;
+2. Update an openrc;
3. Get openrc file information;
4. Delete an openrc file.
@@ -252,7 +282,6 @@ Example::
"OS_PASSWORD": "console",
"OS_PROJECT_DOMAIN_NAME": "default",
"OS_PROJECT_NAME": "admin",
- "OS_TENANT_NAME": "admin",
"OS_USERNAME": "admin",
"OS_USER_DOMAIN_NAME": "default"
},
@@ -261,12 +290,21 @@ Example::
}
+/api/v2/yardstick/openrcs/<openrc_id>
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Description: This API provides functionality of handling OpenStack credential file (openrc). For Euphrates, it supports:
+
+1. Get openrc file information;
+2. Delete an openrc file.
+
+
METHOD: GET
Get openrc file information
Example::
- http://localhost:8888/api/v2/yardstick/openrcs/5g6g3e02-155a-4847-a5f8-154f1b31db8c
+ http://<SERVER IP>:<PORT>/api/v2/yardstick/openrcs/5g6g3e02-155a-4847-a5f8-154f1b31db8c
METHOD: DELETE
@@ -275,16 +313,16 @@ METHOD: DELETE
Delete openrc file
Example::
- http://localhost:8888/api/v2/yardstick/openrcs/5g6g3e02-155a-4847-a5f8-154f1b31db8c
+ http://<SERVER IP>:<PORT>/api/v2/yardstick/openrcs/5g6g3e02-155a-4847-a5f8-154f1b31db8c
-/api/v2/yardstick/pods/action
+/api/v2/yardstick/pods
+^^^^^^^^^^^^^^^^^^^^^^
-Description: This API provides functionality of handling Yardstick pod file (pod.yaml). For Euphrates, it supports:
+Description: This API provides functionality of handling Yardstick pod file
+(pod.yaml). For Euphrates, it supports:
1. Upload a pod file;
-2. Get pod file information;
-3. Delete an openrc file.
Which API to call will depend on the parameters.
@@ -304,12 +342,20 @@ Example::
}
+/api/v2/yardstick/pods/<pod_id>
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Description: This API provides functionality of handling Yardstick pod file (pod.yaml). For Euphrates, it supports:
+
+1. Get pod file information;
+2. Delete a pod file.
+
METHOD: GET
Get pod file information
Example::
- http://localhost:8888/api/v2/yardstick/pods/5g6g3e02-155a-4847-a5f8-154f1b31db8c
+ http://<SERVER IP>:<PORT>/api/v2/yardstick/pods/5g6g3e02-155a-4847-a5f8-154f1b31db8c
METHOD: DELETE
@@ -317,16 +363,16 @@ METHOD: DELETE
Delete openrc file
Example::
- http://localhost:8888/api/v2/yardstick/pods/5g6g3e02-155a-4847-a5f8-154f1b31db8c
+ http://<SERVER IP>:<PORT>/api/v2/yardstick/pods/5g6g3e02-155a-4847-a5f8-154f1b31db8c
-/api/v2/yardstick/images/action
+/api/v2/yardstick/images
+^^^^^^^^^^^^^^^^^^^^^^^^
-Description: This API is used to do some work related to Yardstick VM images. For Euphrates, it supports:
+Description: This API is used to do some work related to Yardstick VM images.
+For Euphrates, it supports:
1. Load Yardstick VM images;
-2. Get image's information;
-3. Delete images.
Which API to call will depend on the parameters.
@@ -338,16 +384,27 @@ Load VM images
Example::
{
- 'action': 'load_images'
+ 'action': 'load_image',
+ 'args': {
+ 'name': 'yardstick-image'
+ }
}
+/api/v2/yardstick/images/<image_id>
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Description: This API is used to do some work related to Yardstick VM images.
+For Euphrates, it supports:
+
+1. Get image's information;
+2. Delete images.
+
METHOD: GET
Get image information
Example::
- http://localhost:8888/api/v2/yardstick/images/5g6g3e02-155a-4847-a5f8-154f1b31db8c
+ http://<SERVER IP>:<PORT>/api/v2/yardstick/images/5g6g3e02-155a-4847-a5f8-154f1b31db8c
METHOD: DELETE
@@ -355,19 +412,16 @@ METHOD: DELETE
Delete images
Example::
- http://localhost:8888/api/v2/yardstick/images/5g6g3e02-155a-4847-a5f8-154f1b31db8c
+ http://<SERVER IP>:<PORT>/api/v2/yardstick/images/5g6g3e02-155a-4847-a5f8-154f1b31db8c
-/api/v2/yardstick/tasks/action
+/api/v2/yardstick/tasks
+^^^^^^^^^^^^^^^^^^^^^^^
-Description: This API is used to do some work related to yardstick tasks. For Euphrates, it supports:
+Description: This API is used to do some work related to Yardstick tasks. For
+Euphrates, it supports:
1. Create a Yardstick task;
-2. run a Yardstick task;
-3. Add a test case to a task;
-4. Add a test suite to a task;
-5. Get a tasks' information;
-6. Delete a task.
Which API to call will depend on the parameters.
@@ -387,20 +441,35 @@ Example::
}
+/api/v2/yardstick/tasks/<task_id>
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Description: This API is used to do some work related to Yardstick tasks. For
+Euphrates, it supports:
+
+1. Add an environment to a task;
+2. Add a test case to a task;
+3. Add a test suite to a task;
+4. Run a Yardstick task;
+5. Get a task's information;
+6. Delete a task.
+
+
METHOD: PUT
+Add an environment to a task
-Run a task
Example::
{
- 'action': 'run'
+ 'action': 'add_environment',
+ 'args': {
+ 'environment_id': 'e3cadbbb-0419-4fed-96f1-a232daa0422a'
+ }
}
METHOD: PUT
-
Add a test case to a task
Example::
@@ -413,8 +482,8 @@ Example::
}
-METHOD: PUT
+METHOD: PUT
Add a test suite to a task
Example::
@@ -428,29 +497,43 @@ Example::
}
+METHOD: PUT
+
+Run a task
+
+Example::
+
+ {
+ 'action': 'run'
+ }
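+
+From a shell, the same action can be triggered with an HTTP PUT request, for
+example (a sketch using curl; the task ID is a placeholder)::
+
+    curl -X PUT -H "Content-Type: application/json" \
+         -d '{"action": "run"}' \
+         http://<SERVER IP>:<PORT>/api/v2/yardstick/tasks/<task_id>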
+
+
+
METHOD: GET
Get a task's information
Example::
- http://localhost:8888/api/v2/yardstick/tasks/5g6g3e02-155a-4847-a5f8-154f1b31db8c
+ http://<SERVER IP>:<PORT>/api/v2/yardstick/tasks/5g6g3e02-155a-4847-a5f8-154f1b31db8c
METHOD: DELETE
Delete a task
+
Example::
- http://localhost:8888/api/v2/yardstick/tasks/5g6g3e02-155a-4847-a5f8-154f1b31db8c
+ http://<SERVER IP>:<PORT>/api/v2/yardstick/tasks/5g6g3e02-155a-4847-a5f8-154f1b31db8c
-/api/v2/yardstick/testcases/action
-Description: This API is used to do some work related to yardstick testcases. For Euphrates, it supports:
+/api/v2/yardstick/testcases
+^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Description: This API is used to do some work related to Yardstick testcases.
+For Euphrates, it supports:
1. Upload a test case;
2. Get all released test cases' information;
-3. Get a certain released test case's information;
-4. Delete a test case.
Which API to call will depend on the parameters.
@@ -475,16 +558,24 @@ METHOD: GET
Get all released test cases' information
Example::
- http://localhost:8888/api/v2/yardstick/testcases
+ http://<SERVER IP>:<PORT>/api/v2/yardstick/testcases
+
+
+/api/v2/yardstick/testcases/<case_name>
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Description: This API is used to do some work related to Yardstick testcases.
+For Euphrates, it supports:
+
+1. Get a certain released test case's information;
+2. Delete a test case.
METHOD: GET
-Get a certain released test case's information
+Get a certain released test case's information
Example::
- http://localhost:8888/api/v2/yardstick/testcases/opnfv_yardstick_tc002
+ http://<SERVER IP>:<PORT>/api/v2/yardstick/testcases/opnfv_yardstick_tc002
METHOD: DELETE
@@ -492,17 +583,18 @@ METHOD: DELETE
Delete a certain test case
Example::
- http://localhost:8888/api/v2/yardstick/testcases/opnfv_yardstick_tc002
+ http://<SERVER IP>:<PORT>/api/v2/yardstick/testcases/opnfv_yardstick_tc002
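+
+The same request could be issued from a shell, for instance (a sketch using
+curl)::
+
+    curl -X DELETE http://<SERVER IP>:<PORT>/api/v2/yardstick/testcases/opnfv_yardstick_tc002
+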
-/api/v2/yardstick/testsuites/action
-Description: This API is used to do some work related to yardstick test suites. For Euphrates, it supports:
+/api/v2/yardstick/testsuites
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Description: This API is used to do some work related to Yardstick test suites.
+For Euphrates, it supports:
1. Create a test suite;
-2. Get a certain test suite's information;
-3. Get all test suites;
-4. Delete a test case.
+2. Get all test suites.
Which API to call will depend on the parameters.
@@ -514,7 +606,7 @@ Create a test suite
Example::
{
- 'action': 'create_sutie',
+ 'action': 'create_suite',
'args': {
'name': <suite_name>,
'testcases': [
@@ -527,19 +619,27 @@ Example::
METHOD: GET
-Get a certain test suite's information
+Get all test suites
Example::
- http://localhost:8888/api/v2/yardstick/testsuites/<suite_name>
+ http://<SERVER IP>:<PORT>/api/v2/yardstick/testsuites
+
+
+/api/v2/yardstick/testsuites/<suite_name>
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Description: This API is used to do some work related to Yardstick test
+suites. For Euphrates, it supports:
+
+1. Get a certain test suite's information;
+2. Delete a test suite.
METHOD: GET
-Get all test suite
+Get a certain test suite's information
Example::
- http://localhost:8888/api/v2/yardstick/testsuites
+ http://<SERVER IP>:<PORT>/api/v2/yardstick/testsuites/<suite_name>
METHOD: DELETE
@@ -548,17 +648,17 @@ METHOD: DELETE
Delete a certain test suite
Example::
- http://localhost:8888/api/v2/yardstick/testsuites/<suite_name>
+ http://<SERVER IP>:<PORT>/api/v2/yardstick/testsuites/<suite_name>
-/api/v2/yardstick/projects/action
+/api/v2/yardstick/projects
+^^^^^^^^^^^^^^^^^^^^^^^^^^
-Description: This API is used to do some work related to yardstick test projects. For Euphrates, it supports:
+Description: This API is used to do some work related to Yardstick test
+projects. For Euphrates, it supports:
1. Create a Yardstick project;
-2. Get a certain project's information;
-3. Get all projects;
-4. Delete a project.
+2. Get all projects.
Which API to call will depend on the parameters.
@@ -580,19 +680,27 @@ Example::
METHOD: GET
-Get a certain project's information
+Get all projects' information
Example::
- http://localhost:8888/api/v2/yardstick/projects/<project_id>
+ http://<SERVER IP>:<PORT>/api/v2/yardstick/projects
+/api/v2/yardstick/projects/<project_id>
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Description: This API is used to do some work related to Yardstick test
+projects. For Euphrates, it supports:
+
+1. Get a certain project's information;
+2. Delete a project.
+
METHOD: GET
-Get all projects' information
+Get a certain project's information
Example::
- http://localhost:8888/api/v2/yardstick/projects
+ http://<SERVER IP>:<PORT>/api/v2/yardstick/projects/<project_id>
METHOD: DELETE
@@ -601,17 +709,17 @@ METHOD: DELETE
Delete a certain project
Example::
- http://localhost:8888/api/v2/yardstick/projects/<project_id>
+ http://<SERVER IP>:<PORT>/api/v2/yardstick/projects/<project_id>
-/api/v2/yardstick/containers/action
+/api/v2/yardstick/containers
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-Description: This API is used to do some work related to Docker containers. For Euphrates, it supports:
+Description: This API is used to do some work related to Docker containers.
+For Euphrates, it supports:
1. Create a Grafana Docker container;
2. Create an InfluxDB Docker container;
-3. Get a certain container's information;
-4. Delete a container.
Which API to call will depend on the parameters.
@@ -644,13 +752,21 @@ Example::
}
+/api/v2/yardstick/containers/<container_id>
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Description: This API is used to do some work related to Docker containers.
+For Euphrates, it supports:
+
+1. Get a certain container's information;
+2. Delete a container.
+
METHOD: GET
-Get a certain container's information
+Get a certain container's information
Example::
- http://localhost:8888/api/v2/yardstick/containers/<container_id>
+ http://<SERVER IP>:<PORT>/api/v2/yardstick/containers/<container_id>
METHOD: DELETE
@@ -659,4 +775,4 @@ METHOD: DELETE
Delete a certain container
Example::
- http://localhost:8888/api/v2/yardstick/containers/<container_id>
+ http://<SERVER IP>:<PORT>/api/v2/yardstick/containers/<container_id>
diff --git a/docs/testing/user/userguide/09-yardstick_user_interface.rst b/docs/testing/user/userguide/09-yardstick_user_interface.rst
deleted file mode 100644
index 9058dd46d..000000000
--- a/docs/testing/user/userguide/09-yardstick_user_interface.rst
+++ /dev/null
@@ -1,29 +0,0 @@
-Yardstick User Interface
-========================
-
-This interface provides a user to view the test result
-in table format and also values pinned on to a graph.
-
-
-Command
--------
-::
-
- yardstick report generate <task-ID> <testcase-filename>
-
-
-Description
------------
-
-1. When the command is triggered using the task-id and the testcase
-name provided the respective values are retrieved from the
-database (influxdb in this particular case).
-
-2. The values are then formatted and then provided to the html
-template framed with complete html body using Django Framework.
-
-3. Then the whole template is written into a html file.
-
-The graph is framed with Timestamp on x-axis and output values
-(differ from testcase to testcase) on y-axis with the help of
-"Highcharts".
diff --git a/docs/testing/user/userguide/10-vtc-overview.rst b/docs/testing/user/userguide/10-vtc-overview.rst
deleted file mode 100644
index 8ed17873d..000000000
--- a/docs/testing/user/userguide/10-vtc-overview.rst
+++ /dev/null
@@ -1,128 +0,0 @@
-.. This work is licensed under a Creative Commons Attribution 4.0 International
-.. License.
-.. http://creativecommons.org/licenses/by/4.0
-.. (c) OPNFV, National Center of Scientific Research "Demokritos" and others.
-
-==========================
-Virtual Traffic Classifier
-==========================
-
-Abstract
-========
-
-.. _TNOVA: http://www.t-nova.eu/
-.. _TNOVAresults: http://www.t-nova.eu/results/
-.. _Yardstick: https://wiki.opnfv.org/yardstick
-
-This chapter provides an overview of the virtual Traffic Classifier, a
-contribution to OPNFV Yardstick_ from the EU Project TNOVA_.
-Additional documentation is available in TNOVAresults_.
-
-Overview
-========
-
-The virtual Traffic Classifier (:term:`VTC`) :term:`VNF`, comprises of a
-Virtual Network Function Component (:term:`VNFC`). The :term:`VNFC` contains
-both the Traffic Inspection module, and the Traffic forwarding module, needed
-to run the :term:`VNF`. The exploitation of Deep Packet Inspection
-(:term:`DPI`) methods for traffic classification is built around two basic
-assumptions:
-
-* third parties unaffiliated with either source or recipient are able to
-inspect each IP packet’s payload
-
-* the classifier knows the relevant syntax of each application’s packet
-payloads (protocol signatures, data patterns, etc.).
-
-The proposed :term:`DPI` based approach will only use an indicative, small
-number of the initial packets from each flow in order to identify the content
-and not inspect each packet.
-
-In this respect it follows the Packet Based per Flow State (term:`PBFS`). This
-method uses a table to track each session based on the 5-tuples (src address,
-dest address, src port,dest port, transport protocol) that is maintained for
-each flow.
-
-Concepts
-========
-
-* *Traffic Inspection*: The process of packet analysis and application
-identification of network traffic that passes through the :term:`VTC`.
-
-* *Traffic Forwarding*: The process of packet forwarding from an incoming
-network interface to a pre-defined outgoing network interface.
-
-* *Traffic Rule Application*: The process of packet tagging, based on a
-predefined set of rules. Packet tagging may include e.g. Type of Service
-(:term:`ToS`) field modification.
-
-Architecture
-============
-
-The Traffic Inspection module is the most computationally intensive component
-of the :term:`VNF`. It implements filtering and packet matching algorithms in
-order to support the enhanced traffic forwarding capability of the :term:`VNF`.
-The component supports a flow table (exploiting hashing algorithms for fast
-indexing of flows) and an inspection engine for traffic classification.
-
-The implementation used for these experiments exploits the nDPI library.
-The packet capturing mechanism is implemented using libpcap. When the
-:term:`DPI` engine identifies a new flow, the flow register is updated with the
-appropriate information and transmitted across the Traffic Forwarding module,
-which then applies any required policy updates.
-
-The Traffic Forwarding moudle is responsible for routing and packet forwarding.
-It accepts incoming network traffic, consults the flow table for classification
-information for each incoming flow and then applies pre-defined policies
-marking e.g. :term:`ToS`/Differentiated Services Code Point (:term:`DSCP`)
-multimedia traffic for Quality of Service (:term:`QoS`) enablement on the
-forwarded traffic.
-It is assumed that the traffic is forwarded using the default policy until it
-is identified and new policies are enforced.
-
-The expected response delay is considered to be negligible, as only a small
-number of packets are required to identify each flow.
-
-Graphical Overview
-==================
-
-.. code-block:: console
-
- +----------------------------+
- | |
- | Virtual Traffic Classifier |
- | |
- | Analysing/Forwarding |
- | ------------> |
- | ethA ethB |
- | |
- +----------------------------+
- | ^
- | |
- v |
- +----------------------------+
- | |
- | Virtual Switch |
- | |
- +----------------------------+
-
-Install
-=======
-
-run the vTC/build.sh with root privileges
-
-Run
-===
-
-::
-
- sudo ./pfbridge -a eth1 -b eth2
-
-
-.. note:: Virtual Traffic Classifier is not support in OPNFV Danube release.
-
-
-Development Environment
-=======================
-
-Ubuntu 14.04 Ubuntu 16.04
diff --git a/docs/testing/user/userguide/10-yardstick-user-interface.rst b/docs/testing/user/userguide/10-yardstick-user-interface.rst
new file mode 100644
index 000000000..246e1b1df
--- /dev/null
+++ b/docs/testing/user/userguide/10-yardstick-user-interface.rst
@@ -0,0 +1,64 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+
+.. Convention for heading levels in Yardstick documentation:
+
+ ======= Heading 0 (reserved for the title in a document)
+ ------- Heading 1
+ ^^^^^^^ Heading 2
+ +++++++ Heading 3
+ ''''''' Heading 4
+
+ Avoid deeper levels because they do not render well.
+
+========================
+Yardstick User Interface
+========================
+
+This chapter describes how to generate HTML reports, used to view, store, share
+or publish test results in table and graph formats.
+
+The following layouts are available:
+
+* The compact HTML report layout is suitable for testcases producing a few
+ metrics over a short period of time. All metrics for all timestamps are
+ displayed in the data table and on the graph.
+
+* The dynamic HTML report layout consists of a wider data table, a graph, and
+ a tree that allows selecting the metrics to be displayed. This layout is
+ suitable for testcases, such as NSB ones, producing a lot of metrics over
+ a longer period of time.
+
+
+Commands
+--------
+
+To generate the compact HTML report, run::
+
+ yardstick report generate <task-ID> <testcase-filename>
+
+To generate the dynamic HTML report, run::
+
+ yardstick report generate-nsb <task-ID> <testcase-filename>
+
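+In both cases the task ID is the one reported when the task was started. For
+instance, a compact report for a completed run of ``opnfv_yardstick_tc002``
+could be generated as follows (the task ID and file name below are only
+placeholders)::
+
+   yardstick report generate 14795be8-f144-4f54-81ce-43f4e3eab33f opnfv_yardstick_tc002.yaml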
+
+Description
+-----------
+
+1. When the command is triggered, the relevant values for the
+ provided task-id and testcase name are retrieved from the
+ database (`InfluxDB`_ in this particular case).
+
+2. The values are then formatted and provided to the HTML
+   template to be rendered using `Jinja2`_.
+
+3. The rendered template is then written into an HTML file.
+
+The graph is drawn with the timestamp on the x-axis and the output values
+(which differ from testcase to testcase) on the y-axis, with the help of
+`Chart.js`_.
+
+.. _InfluxDB: https://www.influxdata.com/time-series-platform/influxdb/
+.. _Jinja2: http://jinja.pocoo.org/docs/2.10/
+.. _Chart.js: https://www.chartjs.org/
diff --git a/docs/testing/user/userguide/11-nsb-overview.rst b/docs/testing/user/userguide/11-nsb-overview.rst
deleted file mode 100644
index 8ce90f65d..000000000
--- a/docs/testing/user/userguide/11-nsb-overview.rst
+++ /dev/null
@@ -1,203 +0,0 @@
-.. This work is licensed under a Creative Commons Attribution 4.0 International
-.. License.
-.. http://creativecommons.org/licenses/by/4.0
-.. (c) OPNFV, 2016-2017 Intel Corporation.
-
-Network Services Benchmarking (NSB)
-===================================
-
-Abstract
---------
-
-.. _Yardstick: https://wiki.opnfv.org/yardstick
-
-This chapter provides an overview of the NSB, a contribution to OPNFV
-Yardstick_ from Intel.
-
-Overview
---------
-
-The goal of NSB is to Extend Yardstick to perform real world VNFs and NFVi Characterization and
-benchmarking with repeatable and deterministic methods.
-
-The Network Service Benchmarking (NSB) extends the yardstick framework to do
-VNF characterization and benchmarking in three different execution
-environments - bare metal i.e. native Linux environment, standalone virtual
-environment and managed virtualized environment (e.g. Open stack etc.).
-It also brings in the capability to interact with external traffic generators
-both hardware & software based for triggering and validating the traffic
-according to user defined profiles.
-
-NSB extension includes:
-
- - Generic data models of Network Services, based on ETSI spec `ETSI GS NFV-TST 001 <http://www.etsi.org/deliver/etsi_gs/NFV-TST/001_099/001/01.01.01_60/gs_nfv-tst001v010101p.pdf>`_
-
- - New Standalone context for VNF testing like SRIOV, OVS, OVS-DPDK etc
-
- - Generic VNF configuration models and metrics implemented with Python
- classes
-
- - Traffic generator features and traffic profiles
-
- - L1-L3 state-less traffic profiles
-
- - L4-L7 state-full traffic profiles
-
- - Tunneling protocol / network overlay support
-
- - Test case samples
-
- - Ping
-
- - Trex
-
- - vPE,vCGNAT, vFirewall etc - ipv4 throughput, latency etc
-
- - Traffic generators like Trex, ab/nginx, ixia, iperf etc
-
- - KPIs for a given use case:
-
- - System agent support for collecting NFVi KPI. This includes:
-
- - CPU statistic
-
- - Memory BW
-
- - OVS-DPDK Stats
-
- - Network KPIs, e.g., inpackets, outpackets, thoughput, latency etc
-
- - VNF KPIs, e.g., packet_in, packet_drop, packet_fwd etc
-
-Architecture
-------------
-
-The Network Service (NS) defines a set of Virtual Network Functions (VNF)
-connected together using NFV infrastructure.
-
-The Yardstick NSB extension can support multiple VNFs created by different
-vendors including traffic generators. Every VNF being tested has its
-own data model. The Network service defines a VNF modelling on base of performed
-network functionality. The part of the data model is a set of the configuration
-parameters, number of connection points used and flavor including core and
-memory amount.
-
-The ETSI defines a Network Service as a set of configurable VNFs working in
-some NFV Infrastructure connecting each other using Virtual Links available
-through Connection Points. The ETSI MANO specification defines a set of
-management entities called Network Service Descriptors (NSD) and
-VNF Descriptors (VNFD) that define real Network Service. The picture below
-makes an example how the real Network Operator use-case can map into ETSI
-Network service definition
-
-Network Service framework performs the necessary test steps. It may involve
-
- - Interacting with traffic generator and providing the inputs on traffic
- type / packet structure to generate the required traffic as per the
- test case. Traffic profiles will be used for this.
-
- - Executing the commands required for the test procedure and analyses the
- command output for confirming whether the command got executed correctly
- or not. E.g. As per the test case, run the traffic for the given
- time period / wait for the necessary time delay
-
- - Verify the test result.
-
- - Validate the traffic flow from SUT
-
- - Fetch the table / data from SUT and verify the value as per the test case
-
- - Upload the logs from SUT onto the Test Harness server
-
- - Read the KPI's provided by particular VNF
-
-Components of Network Service
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
- * *Models for Network Service benchmarking*: The Network Service benchmarking
- requires the proper modelling approach. The NSB provides models using Python
- files and defining of NSDs and VNFDs.
-
- The benchmark control application being a part of OPNFV yardstick can call
- that python models to instantiate and configure the VNFs. Depending on
- infrastructure type (bare-metal or fully virtualized) that calls could be
- made directly or using MANO system.
-
- * *Traffic generators in NSB*: Any benchmark application requires a set of
- traffic generator and traffic profiles defining the method in which traffic
- is generated.
-
- The Network Service benchmarking model extends the Network Service
- definition with a set of Traffic Generators (TG) that are treated
- same way as other VNFs being a part of benchmarked network service.
- Same as other VNFs the traffic generator are instantiated and terminated.
-
- Every traffic generator has own configuration defined as a traffic profile and
- a set of KPIs supported. The python models for TG is extended by specific calls
- to listen and generate traffic.
-
- * *The stateless TREX traffic generator*: The main traffic generator used as
- Network Service stimulus is open source TREX tool.
-
- The TREX tool can generate any kind of stateless traffic.
-
- .. code-block:: console
-
- +--------+ +-------+ +--------+
- | | | | | |
- | Trex | ---> | VNF | ---> | Trex |
- | | | | | |
- +--------+ +-------+ +--------+
-
- Supported testcases scenarios:
-
- - Correlated UDP traffic using TREX traffic generator and replay VNF.
-
- - using different IMIX configuration like pure voice, pure video traffic etc
-
- - using different number IP flows like 1 flow, 1K, 16K, 64K, 256K, 1M flows
-
- - Using different number of rules configured like 1 rule, 1K, 10K rules
-
- For UDP correlated traffic following Key Performance Indicators are collected
- for every combination of test case parameters:
-
- - RFC2544 throughput for various loss rate defined (1% is a default)
-
-Graphical Overview
-------------------
-
-NSB Testing with yardstick framework facilitate performance testing of various
-VNFs provided.
-
-.. code-block:: console
-
- +-----------+
- | | +-----------+
- | vPE | ->|TGen Port 0|
- | TestCase | | +-----------+
- | | |
- +-----------+ +------------------+ +-------+ |
- | | -- API --> | VNF | <--->
- +-----------+ | Yardstick | +-------+ |
- | Test Case | --> | NSB Testing | |
- +-----------+ | | |
- | | | |
- | +------------------+ |
- +-----------+ | +-----------+
- | Traffic | ->|TGen Port 1|
- | patterns | +-----------+
- +-----------+
-
- Figure 1: Network Service - 2 server configuration
-
-VNFs supported for chracterization:
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-1. CGNAPT - Carrier Grade Network Address and port Translation
-2. vFW - Virtual Firewall
-3. vACL - Access Control List
-5. Prox - Packet pROcessing eXecution engine:
- - VNF can act as Drop, Basic Forwarding (no touch), L2 Forwarding (change MAC), GRE encap/decap, Load balance based on packet fields, Symmetric load balancing,
- - QinQ encap/decap IPv4/IPv6, ARP, QoS, Routing, Unmpls, Policing, ACL
-6. UDP_Replay
diff --git a/docs/testing/user/userguide/12-nsb-overview.rst b/docs/testing/user/userguide/12-nsb-overview.rst
new file mode 100644
index 000000000..45b087a47
--- /dev/null
+++ b/docs/testing/user/userguide/12-nsb-overview.rst
@@ -0,0 +1,258 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, 2016-2019 Intel Corporation.
+
+.. Convention for heading levels in Yardstick documentation:
+
+ ======= Heading 0 (reserved for the title in a document)
+ ------- Heading 1
+ ^^^^^^^ Heading 2
+ +++++++ Heading 3
+ ''''''' Heading 4
+
+ Avoid deeper levels because they do not render well.
+
+===================================
+Network Services Benchmarking (NSB)
+===================================
+
+.. _Yardstick: https://wiki.opnfv.org/display/yardstick
+.. _`ETSI GS NFV-TST001`: http://www.etsi.org/deliver/etsi_gs/NFV-TST/001_099/001/01.01.01_60/gs_nfv-tst001v010101p.pdf
+
+Abstract
+--------
+
+This chapter provides an overview of the NSB, a contribution to OPNFV
+Yardstick_ from Intel.
+
+Overview
+--------
+
+Network Services Benchmarking (:term:`NSB`) uses the :term:`Yardstick`
+framework for performing :term:`VNF` and :term:`NFVI` characterisation in an
+:term:`NFV` environment.
+
+For VNF characterisation, NSB will onboard a VNF, source and sink traffic to it
+via traffic generators, and collect a variety of key performance indicators
+(:term:`KPI`) during VNF execution. The stream of KPI data is stored in a
+database, and it is visualized in a performance-visualization dashboard.
+
+For NFVI characterisation, a fixed test VNF, called :term:`PROX`, is used.
+PROX implements a suite of test cases and visualizes the output data of the
+test suite. The PROX test cases implement various execution kernels found in
+real-world VNFs, and the output of the test cases provides an indication of
+the fitness of the infrastructure for running NFV services, in addition to
+indicating potential performance optimizations for the NFVI.
+
+NSB extends the Yardstick framework to do VNF characterization in three
+different execution environments - bare metal i.e. native Linux environment,
+standalone virtual environment and managed virtualized environment (e.g.
+OpenStack). It also brings in the capability to interact with external traffic
+generators, both hardware and software based, for triggering and validating the
+traffic according to user defined profiles.
+
+NSB extension includes:
+
+* Generic data models of Network Services, based on ETSI spec
+ `ETSI GS NFV-TST001`_
+* Standalone :term:`context` for VNF testing, e.g. SRIOV, OVS-DPDK, etc
+* Generic VNF configuration models and metrics implemented with Python
+ classes
+* Traffic generator features and traffic profiles
+
+ * L1-L3 stateless traffic profiles
+  * L4-L7 stateful traffic profiles
+ * Tunneling protocol/network overlay support
+
+* Scenarios that handle NSB test cases execution
+
+ * NSPerf - scenario that handles generic NSB test case execution
+ (setup and init tg/vnf, trigger traffic on tg, collect kpi)
+ * NSPerf-RFC2544 - scenario that allows repeatable triggering of traffic on
+ traffic generators until test case acceptance criteria is met
+ (for example RFC2544 binary search)
+
+* Test case samples
+
+ * Ping
+ * Trex
+ * vPE, vCGNAT, vFirewall etc - ipv4 throughput, latency etc
+
+* Traffic generators, e.g. Trex, ab/nginx, ixia, iperf, etc
+* KPIs for a given use case:
+
+ * System agent support for collecting NFVi KPI. This includes:
+
+ * CPU statistic
+ * Memory BW
+ * OVS-DPDK Stats
+
+  * Network KPIs, e.g. inpackets, outpackets, throughput, latency
+ * VNF KPIs e.g. packet_in, packet_drop, packet_fwd
+
+Architecture
+------------
+
+The Network Service (NS) defines a set of Virtual Network Functions (VNF)
+connected together using NFV infrastructure.
+
+The Yardstick NSB extension can support multiple VNFs, including traffic
+generators, created by different vendors. Every VNF being tested has its
+own data model. The Network Service defines the VNF modelling based on the
+network functionality it performs. Part of the data model is a set of
+configuration parameters, the number of connection points used and the
+flavor, including the core and memory amount.
+
+ETSI defines a Network Service as a set of configurable VNFs working in some
+NFV Infrastructure, connected to each other using Virtual Links available
+through Connection Points. The ETSI MANO specification defines a set of
+management entities called Network Service Descriptors (NSD) and VNF
+Descriptors (VNFD) that define a real Network Service. The picture below
+gives an example of how a real Network Operator use-case can map onto the
+ETSI Network Service definition.
+
+Network Service framework performs the necessary test steps. It may involve:
+
+* Interacting with the traffic generator and providing the inputs on traffic
+  type / packet structure to generate the required traffic as per the
+  test case. Traffic profiles will be used for this.
+* Executing the commands required for the test procedure and analysing the
+  command output to confirm whether the command was executed correctly,
+  e.g. as per the test case, run the traffic for the given time period and
+  wait for the necessary time delay.
+* Verifying the test result.
+* Validating the traffic flow from the SUT.
+* Fetching the data from the SUT and verifying the value as per the test case.
+* Uploading the logs from the SUT onto the Test Harness server.
+* Retrieving the KPIs provided by the particular VNF.
+
+Components of Network Service
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. TODO: provide a list of components in this section and describe them in
+ later sub-sections
+
+.. Components are the methodology, TGs, framework extensions, KPI collection,
+ Testcases, SampleVNFs
+.. Framework extentions include: VNF models, NSPerf Scenario, contexts
+
+* *Models for Network Service benchmarking*: The Network Service benchmarking
+ requires the proper modelling approach. The NSB provides models using Python
+ files and defining of NSDs and VNFDs.
+
+The benchmark control application, being a part of OPNFV Yardstick, can call
+those Python models to instantiate and configure the VNFs. Depending on the
+infrastructure type (bare-metal or fully virtualized) those calls could be
+made directly or using a MANO system.
+
+* *Traffic generators in NSB*: Any benchmark application requires a set of
+ traffic generator and traffic profiles defining the method in which traffic
+ is generated.
+
+The Network Service benchmarking model extends the Network Service
+definition with a set of Traffic Generators (TG) that are treated the
+same way as the other VNFs that are part of the benchmarked network service.
+Like the other VNFs, the traffic generators are instantiated and terminated.
+
+Every traffic generator has its own configuration, defined as a traffic
+profile, and a set of supported KPIs. The Python models for the TGs are
+extended by specific calls to listen for and generate traffic.
+
+* *The stateless TREX traffic generator*: The main traffic generator used as
+  Network Service stimulus is the open source TREX tool.
+
+The TREX tool can generate any kind of stateless traffic.
+
+.. code-block:: console
+
+ +--------+ +-------+ +--------+
+ | | | | | |
+ | Trex | ---> | VNF | ---> | Trex |
+ | | | | | |
+ +--------+ +-------+ +--------+
+
+Supported testcase scenarios:
+
+* Correlated UDP traffic using TREX traffic generator and replay VNF.
+
+  * Using different IMIX configurations, e.g. pure voice, pure video traffic
+  * Using different numbers of IP flows, e.g. 1, 1K, 16K, 64K, 256K, 1M flows
+  * Using different numbers of configured rules, e.g. 1, 1K, 10K rules
+
+For UDP correlated traffic, the following Key Performance Indicators are
+collected for every combination of test case parameters:
+
+* RFC2544 throughput for the various loss rates defined (1% is the default)
+
+KPI Collection
+^^^^^^^^^^^^^^
+
+KPI collection is the process of sampling KPIs at multiple intervals to allow
+for investigation into anomalies during runtime. Some KPI intervals are
+adjustable. KPIs are collected from traffic generators and NFVI for the SUT.
+There is already some reporting in NSB available, but NSB collects all KPIs for
+analytics to process.
+
+Below is an example list of basic KPIs:
+
+* Throughput
+* Latency
+* Packet delay variation
+* Maximum establishment rate
+* Maximum tear-down rate
+* Maximum simultaneous number of sessions
+
+Of course, there can be many other KPIs that will be relevant for a specific
+NFVI, but in most cases these KPIs are enough to give you a basic picture of
+the SUT. NSB also uses :term:`collectd` in order to collect the KPIs. Currently
+the following collectd plug-ins are enabled for NSB testcases:
+
+* Libvirt
+* Interface stats
+* OvS events
+* vSwitch stats
+* Huge Pages
+* RAM
+* CPU usage
+* Intel® PMU
+* Intel® RDT
+
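+As an illustration, when the InfluxDB dispatcher is configured (see the
+installation chapter), the collected samples end up in the ``yardstick``
+database and can be listed from a shell, for example (a sketch assuming the
+InfluxDB 1.x command line client is available)::
+
+   influx -database yardstick -execute "SHOW MEASUREMENTS"
+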
+Graphical Overview
+------------------
+
+NSB testing with the Yardstick framework facilitates performance testing of
+the various VNFs provided.
+
+.. code-block:: console
+
+ +-----------+
+ | | +-------------+
+ | vPE | -->| TGen Port 0 |
+ | TestCase | | +-------------+
+ | | |
+ +-----------+ +---------------+ +-------+ |
+ | | ---> | VNF | <--->
+ +-----------+ | Yardstick | +-------+ |
+ | Test Case | --> | NSB Testing | |
+ +-----------+ | | |
+ | | | |
+ | +---------------+ |
+ +-----------+ | +-------------+
+ | Traffic | -->| TGen Port 1 |
+ | patterns | +-------------+
+ +-----------+
+
+ Figure 1: Network Service - 2 server configuration
+
+VNFs supported for characterization
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+1. CGNAPT - Carrier Grade Network Address and port Translation
+2. vFW - Virtual Firewall
+3. vACL - Access Control List
+4. PROX - Packet pROcessing eXecution engine:
+ * VNF can act as Drop, Basic Forwarding (no touch),
+ L2 Forwarding (change MAC), GRE encap/decap, Load balance based on
+ packet fields, Symmetric load balancing
+ * QinQ encap/decap IPv4/IPv6, ARP, QoS, Routing, Unmpls, Policing, ACL
+5. UDP_Replay
diff --git a/docs/testing/user/userguide/12-nsb_installation.rst b/docs/testing/user/userguide/12-nsb_installation.rst
deleted file mode 100644
index a584ca231..000000000
--- a/docs/testing/user/userguide/12-nsb_installation.rst
+++ /dev/null
@@ -1,889 +0,0 @@
-.. This work is licensed under a Creative Commons Attribution 4.0 International
-.. License.
-.. http://creativecommons.org/licenses/by/4.0
-.. (c) OPNFV, 2016-2017 Intel Corporation.
-
-Yardstick - NSB Testing -Installation
-=====================================
-
-Abstract
---------
-
-The Network Service Benchmarking (NSB) extends the yardstick framework to do
-VNF characterization and benchmarking in three different execution
-environments viz., bare metal i.e. native Linux environment, standalone virtual
-environment and managed virtualized environment (e.g. Open stack etc.).
-It also brings in the capability to interact with external traffic generators
-both hardware & software based for triggering and validating the traffic
-according to user defined profiles.
-
-The steps needed to run Yardstick with NSB testing are:
-
-* Install Yardstick (NSB Testing).
-* Setup/Reference pod.yaml describing Test topology
-* Create/Reference the test configuration yaml file.
-* Run the test case.
-
-
-Prerequisites
--------------
-
-Refer chapter Yardstick Installation for more information on yardstick
-prerequisites
-
-Several prerequisites are needed for Yardstick(VNF testing):
-
- - Python Modules: pyzmq, pika.
-
- - flex
-
- - bison
-
- - build-essential
-
- - automake
-
- - libtool
-
- - librabbitmq-dev
-
- - rabbitmq-server
-
- - collectd
-
- - intel-cmt-cat
-
-Hardware & Software Ingredients
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-SUT requirements:
-
-
- +-----------+--------------------+
- | Item | Description |
- +-----------+--------------------+
- | Memory | Min 20GB |
- +-----------+--------------------+
- | NICs | 2 x 10G |
- +-----------+--------------------+
- | OS | Ubuntu 16.04.3 LTS |
- +-----------+--------------------+
- | kernel | 4.4.0-34-generic |
- +-----------+--------------------+
- | DPDK | 17.02 |
- +-----------+--------------------+
-
-Boot and BIOS settings:
-
-
- +------------------+---------------------------------------------------+
- | Boot settings | default_hugepagesz=1G hugepagesz=1G hugepages=16 |
- | | hugepagesz=2M hugepages=2048 isolcpus=1-11,22-33 |
- | | nohz_full=1-11,22-33 rcu_nocbs=1-11,22-33 |
- | | iommu=on iommu=pt intel_iommu=on |
- | | Note: nohz_full and rcu_nocbs is to disable Linux |
- | | kernel interrupts |
- +------------------+---------------------------------------------------+
- |BIOS | CPU Power and Performance Policy <Performance> |
- | | CPU C-state Disabled |
- | | CPU P-state Disabled |
- | | Enhanced Intel® Speedstep® Tech Disabled |
- | | Hyper-Threading Technology (If supported) Enabled |
- | | Virtualization Techology Enabled |
- | | Intel(R) VT for Direct I/O Enabled |
- | | Coherency Enabled |
- | | Turbo Boost Disabled |
- +------------------+---------------------------------------------------+
-
-
-
-Install Yardstick (NSB Testing)
--------------------------------
-
-Download the source code and install Yardstick from it
-
-.. code-block:: console
-
- git clone https://gerrit.opnfv.org/gerrit/yardstick
-
- cd yardstick
-
- # Switch to latest stable branch
- # git checkout <tag or stable branch>
- git checkout stable/euphrates
-
-Configure the network proxy, either using the environment variables or setting
-the global environment file:
-
-.. code-block:: ini
- cat /etc/environment
- http_proxy='http://proxy.company.com:port'
- https_proxy='http://proxy.company.com:port'
-
-.. code-block:: console
- export http_proxy='http://proxy.company.com:port'
- export https_proxy='http://proxy.company.com:port'
-
-The last step is to modify the Yardstick installation inventory, used by
-Ansible:
-
-.. code-block:: ini
- cat ./ansible/yardstick-install-inventory.ini
- [jumphost]
- localhost ansible_connection=local
-
- [yardstick-standalone]
- yardstick-standalone-node ansible_host=192.168.1.2
- yardstick-standalone-node-2 ansible_host=192.168.1.3
-
- # section below is only due backward compatibility.
- # it will be removed later
- [yardstick:children]
- jumphost
-
- [all:vars]
- ansible_user=root
- ansible_pass=root
-
-
-To execute an installation for a Bare-Metal or a Standalone context:
-
-.. code-block:: console
-
- ./nsb_setup.sh
-
-
-To execute an installation for an OpenStack context:
-
-.. code-block:: console
-
- ./nsb_setup.sh <path to admin-openrc.sh>
-
-Above command setup docker with latest yardstick code. To execute
-
-.. code-block:: console
-
- docker exec -it yardstick bash
-
-It will also automatically download all the packages needed for NSB Testing setup.
-Refer chapter :doc:`04-installation` for more on docker **Install Yardstick using Docker (recommended)**
-
-System Topology:
-----------------
-
-.. code-block:: console
-
- +----------+ +----------+
- | | | |
- | | (0)----->(0) | |
- | TG1 | | DUT |
- | | | |
- | | (1)<-----(1) | |
- +----------+ +----------+
- trafficgen_1 vnf
-
-
-Environment parameters and credentials
---------------------------------------
-
-Config yardstick conf
-^^^^^^^^^^^^^^^^^^^^^
-
-If user did not run 'yardstick env influxdb' inside the container, which will generate
-correct yardstick.conf, then create the config file manually (run inside the container):
-
- cp ./etc/yardstick/yardstick.conf.sample /etc/yardstick/yardstick.conf
- vi /etc/yardstick/yardstick.conf
-
-Add trex_path, trex_client_lib and bin_path in 'nsb' section.
-
-::
-
- [DEFAULT]
- debug = True
- dispatcher = file, influxdb
-
- [dispatcher_influxdb]
- timeout = 5
- target = http://{YOUR_IP_HERE}:8086
- db_name = yardstick
- username = root
- password = root
-
- [nsb]
- trex_path=/opt/nsb_bin/trex/scripts
- bin_path=/opt/nsb_bin
- trex_client_lib=/opt/nsb_bin/trex_client/stl
-
-Run Yardstick - Network Service Testcases
------------------------------------------
-
-
-NS testing - using yardstick CLI
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
- See :doc:`04-installation`
-
-.. code-block:: console
-
-
- docker exec -it yardstick /bin/bash
- source /etc/yardstick/openstack.creds (only for heat TC if nsb_setup.sh was NOT used)
- export EXTERNAL_NETWORK="<openstack public network>" (only for heat TC)
- yardstick --debug task start yardstick/samples/vnf_samples/nsut/<vnf>/<test case>
-
-Network Service Benchmarking - Bare-Metal
------------------------------------------
-
-Bare-Metal Config pod.yaml describing Topology
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-Bare-Metal 2-Node setup:
-########################
-.. code-block:: console
-
- +----------+ +----------+
- | | | |
- | | (0)----->(0) | |
- | TG1 | | DUT |
- | | | |
- | | (n)<-----(n) | |
- +----------+ +----------+
- trafficgen_1 vnf
-
-Bare-Metal 3-Node setup - Correlated Traffic:
-#############################################
-.. code-block:: console
-
- +----------+ +----------+ +------------+
- | | | | | |
- | | | | | |
- | | (0)----->(0) | | | UDP |
- | TG1 | | DUT | | Replay |
- | | | | | |
- | | | |(1)<---->(0)| |
- +----------+ +----------+ +------------+
- trafficgen_1 vnf trafficgen_2
-
-
-Bare-Metal Config pod.yaml
-^^^^^^^^^^^^^^^^^^^^^^^^^^
-Before executing Yardstick test cases, make sure that pod.yaml reflects the
-topology and update all the required fields.::
-
- cp /etc/yardstick/nodes/pod.yaml.nsb.sample /etc/yardstick/nodes/pod.yaml
-
-.. code-block:: YAML
-
- nodes:
- -
- name: trafficgen_1
- role: TrafficGen
- ip: 1.1.1.1
- user: root
- password: r00t
- interfaces:
- xe0: # logical name from topology.yaml and vnfd.yaml
- vpci: "0000:07:00.0"
- driver: i40e # default kernel driver
- dpdk_port_num: 0
- local_ip: "152.16.100.20"
- netmask: "255.255.255.0"
- local_mac: "00:00:00:00:00:01"
- xe1: # logical name from topology.yaml and vnfd.yaml
- vpci: "0000:07:00.1"
- driver: i40e # default kernel driver
- dpdk_port_num: 1
- local_ip: "152.16.40.20"
- netmask: "255.255.255.0"
- local_mac: "00:00.00:00:00:02"
-
- -
- name: vnf
- role: vnf
- ip: 1.1.1.2
- user: root
- password: r00t
- host: 1.1.1.2 #BM - host == ip, virtualized env - Host - compute node
- interfaces:
- xe0: # logical name from topology.yaml and vnfd.yaml
- vpci: "0000:07:00.0"
- driver: i40e # default kernel driver
- dpdk_port_num: 0
- local_ip: "152.16.100.19"
- netmask: "255.255.255.0"
- local_mac: "00:00:00:00:00:03"
-
- xe1: # logical name from topology.yaml and vnfd.yaml
- vpci: "0000:07:00.1"
- driver: i40e # default kernel driver
- dpdk_port_num: 1
- local_ip: "152.16.40.19"
- netmask: "255.255.255.0"
- local_mac: "00:00:00:00:00:04"
- routing_table:
- - network: "152.16.100.20"
- netmask: "255.255.255.0"
- gateway: "152.16.100.20"
- if: "xe0"
- - network: "152.16.40.20"
- netmask: "255.255.255.0"
- gateway: "152.16.40.20"
- if: "xe1"
- nd_route_tbl:
- - network: "0064:ff9b:0:0:0:0:9810:6414"
- netmask: "112"
- gateway: "0064:ff9b:0:0:0:0:9810:6414"
- if: "xe0"
- - network: "0064:ff9b:0:0:0:0:9810:2814"
- netmask: "112"
- gateway: "0064:ff9b:0:0:0:0:9810:2814"
- if: "xe1"
-
-
-Network Service Benchmarking - Standalone Virtualization
---------------------------------------------------------
-
-SR-IOV:
-^^^^^^^
-
-SR-IOV Pre-requisites
-#####################
-
-On Host:
- a) Create a bridge for VM to connect to external network
-
- .. code-block:: console
-
- brctl addbr br-int
- brctl addif br-int <interface_name> #This interface is connected to internet
-
- b) Build guest image for VNF to run.
- Most of the sample test cases in Yardstick are using a guest image called
- ``yardstick-image`` which deviates from an Ubuntu Cloud Server image
- Yardstick has a tool for building this custom image with samplevnf.
- It is necessary to have ``sudo`` rights to use this tool.
-
- Also you may need to install several additional packages to use this tool, by
- following the commands below::
-
- sudo apt-get update && sudo apt-get install -y qemu-utils kpartx
-
- This image can be built using the following command in the directory where Yardstick is installed
-
- .. code-block:: console
-
- export YARD_IMG_ARCH='amd64'
- sudo echo "Defaults env_keep += \'YARD_IMG_ARCH\'" >> /etc/sudoers
-
- Please use ansible script to generate a cloud image refer to :doc:`04-installation`
-
- for more details refer to chapter :doc:`04-installation`
-
- .. note:: VM should be build with static IP and should be accessible from yardstick host.
-
-
-SR-IOV Config pod.yaml describing Topology
-##########################################
-
-SR-IOV 2-Node setup:
-####################
-.. code-block:: console
-
- +--------------------+
- | |
- | |
- | DUT |
- | (VNF) |
- | |
- +--------------------+
- | VF NIC | | VF NIC |
- +--------+ +--------+
- ^ ^
- | |
- | |
- +----------+ +-------------------------+
- | | | ^ ^ |
- | | | | | |
- | | (0)<----->(0) | ------ | |
- | TG1 | | SUT | |
- | | | | |
- | | (n)<----->(n) |------------------ |
- +----------+ +-------------------------+
- trafficgen_1 host
-
-
-
-SR-IOV 3-Node setup - Correlated Traffic
-########################################
-.. code-block:: console
-
- +--------------------+
- | |
- | |
- | DUT |
- | (VNF) |
- | |
- +--------------------+
- | VF NIC | | VF NIC |
- +--------+ +--------+
- ^ ^
- | |
- | |
- +----------+ +-------------------------+ +--------------+
- | | | ^ ^ | | |
- | | | | | | | |
- | | (0)<----->(0) | ------ | | | TG2 |
- | TG1 | | SUT | | | (UDP Replay) |
- | | | | | | |
- | | (n)<----->(n) | ------ | (n)<-->(n) | |
- +----------+ +-------------------------+ +--------------+
- trafficgen_1 host trafficgen_2
-
-Before executing Yardstick test cases, make sure that pod.yaml reflects the
-topology and update all the required fields.
-
-.. code-block:: console
-
- cp <yardstick>/etc/yardstick/nodes/standalone/trex_bm.yaml.sample /etc/yardstick/nodes/standalone/pod_trex.yaml
- cp <yardstick>/etc/yardstick/nodes/standalone/host_sriov.yaml /etc/yardstick/nodes/standalone/host_sriov.yaml
-
-.. note:: Update all the required fields like ip, user, password, pcis, etc...
-
-SR-IOV Config pod_trex.yaml
-###########################
-
-.. code-block:: YAML
-
- nodes:
- -
- name: trafficgen_1
- role: TrafficGen
- ip: 1.1.1.1
- user: root
- password: r00t
- key_filename: /root/.ssh/id_rsa
- interfaces:
- xe0: # logical name from topology.yaml and vnfd.yaml
- vpci: "0000:07:00.0"
- driver: i40e # default kernel driver
- dpdk_port_num: 0
- local_ip: "152.16.100.20"
- netmask: "255.255.255.0"
- local_mac: "00:00:00:00:00:01"
- xe1: # logical name from topology.yaml and vnfd.yaml
- vpci: "0000:07:00.1"
- driver: i40e # default kernel driver
- dpdk_port_num: 1
- local_ip: "152.16.40.20"
- netmask: "255.255.255.0"
- local_mac: "00:00.00:00:00:02"
-
-SR-IOV Config host_sriov.yaml
-#############################
-
-.. code-block:: YAML
-
- nodes:
- -
- name: sriov
- role: Sriov
- ip: 192.168.100.101
- user: ""
- password: ""
-
-SR-IOV testcase update: ``<yardstick>/samples/vnf_samples/nsut/vfw/tc_sriov_rfc2544_ipv4_1rule_1flow_64B_trex.yaml``
-
-Update "contexts" section
-"""""""""""""""""""""""""
-
-.. code-block:: YAML
-
- contexts:
- - name: yardstick
- type: Node
- file: /etc/yardstick/nodes/standalone/pod_trex.yaml
- - type: StandaloneSriov
- file: /etc/yardstick/nodes/standalone/host_sriov.yaml
- name: yardstick
- vm_deploy: True
- flavor:
- images: "/var/lib/libvirt/images/ubuntu.qcow2"
- ram: 4096
- extra_specs:
- hw:cpu_sockets: 1
- hw:cpu_cores: 6
- hw:cpu_threads: 2
- user: "" # update VM username
- password: "" # update password
- servers:
- vnf:
- network_ports:
- mgmt:
- cidr: '1.1.1.61/24' # Update VM IP address, if static, <ip>/<mask> or if dynamic, <start of ip>/<mask>
- xe0:
- - uplink_0
- xe1:
- - downlink_0
- networks:
- uplink_0:
- phy_port: "0000:05:00.0"
- vpci: "0000:00:07.0"
- cidr: '152.16.100.10/24'
- gateway_ip: '152.16.100.20'
- downlink_0:
- phy_port: "0000:05:00.1"
- vpci: "0000:00:08.0"
- cidr: '152.16.40.10/24'
- gateway_ip: '152.16.100.20'
-
-
-
-OVS-DPDK:
-^^^^^^^^^
-
-OVS-DPDK Pre-requisites
-#######################
-
-On Host:
- a) Create a bridge for VM to connect to external network
-
- .. code-block:: console
-
- brctl addbr br-int
- brctl addif br-int <interface_name> #This interface is connected to internet
-
- b) Build guest image for VNF to run.
- Most of the sample test cases in Yardstick are using a guest image called
- ``yardstick-image`` which deviates from an Ubuntu Cloud Server image
- Yardstick has a tool for building this custom image with samplevnf.
- It is necessary to have ``sudo`` rights to use this tool.
-
- Also you may need to install several additional packages to use this tool, by
- following the commands below::
-
- sudo apt-get update && sudo apt-get install -y qemu-utils kpartx
-
- This image can be built using the following command in the directory where Yardstick is installed::
-
- export YARD_IMG_ARCH='amd64'
- sudo echo "Defaults env_keep += \'YARD_IMG_ARCH\'" >> /etc/sudoers
- sudo tools/yardstick-img-dpdk-modify tools/ubuntu-server-cloudimg-samplevnf-modify.sh
-
- for more details refer to chapter :doc:`04-installation`
-
- .. note:: VM should be build with static IP and should be accessible from yardstick host.
-
- c) OVS & DPDK version.
- - OVS 2.7 and DPDK 16.11.1 above version is supported
-
- d) Setup OVS/DPDK on host.
- Please refer to below link on how to setup `OVS-DPDK <http://docs.openvswitch.org/en/latest/intro/install/dpdk/>`_
-
-
-OVS-DPDK Config pod.yaml describing Topology
-############################################
-
-OVS-DPDK 2-Node setup:
-######################
-
-
-.. code-block:: console
-
- +--------------------+
- | |
- | |
- | DUT |
- | (VNF) |
- | |
- +--------------------+
- | virtio | | virtio |
- +--------+ +--------+
- ^ ^
- | |
- | |
- +--------+ +--------+
- | vHOST0 | | vHOST1 |
- +----------+ +-------------------------+
- | | | ^ ^ |
- | | | | | |
- | | (0)<----->(0) | ------ | |
- | TG1 | | SUT | |
- | | | (ovs-dpdk) | |
- | | (n)<----->(n) |------------------ |
- +----------+ +-------------------------+
- trafficgen_1 host
-
-
-OVS-DPDK 3-Node setup - Correlated Traffic
-##########################################
-
-.. code-block:: console
-
- +--------------------+
- | |
- | |
- | DUT |
- | (VNF) |
- | |
- +--------------------+
- | virtio | | virtio |
- +--------+ +--------+
- ^ ^
- | |
- | |
- +--------+ +--------+
- | vHOST0 | | vHOST1 |
- +----------+ +-------------------------+ +------------+
- | | | ^ ^ | | |
- | | | | | | | |
- | | (0)<----->(0) | ------ | | | TG2 |
- | TG1 | | SUT | | |(UDP Replay)|
- | | | (ovs-dpdk) | | | |
- | | (n)<----->(n) | ------ |(n)<-->(n)| |
- +----------+ +-------------------------+ +------------+
- trafficgen_1 host trafficgen_2
-
-
-Before executing Yardstick test cases, make sure that pod.yaml reflects the
-topology and update all the required fields.
-
-.. code-block:: console
-
- cp <yardstick>/etc/yardstick/nodes/standalone/trex_bm.yaml.sample /etc/yardstick/nodes/standalone/pod_trex.yaml
- cp <yardstick>/etc/yardstick/nodes/standalone/host_ovs.yaml /etc/yardstick/nodes/standalone/host_ovs.yaml
-
-.. note:: Update all the required fields like ip, user, password, pcis, etc...
-
-OVS-DPDK Config pod_trex.yaml
-#############################
-
-.. code-block:: YAML
-
- nodes:
- -
- name: trafficgen_1
- role: TrafficGen
- ip: 1.1.1.1
- user: root
- password: r00t
- interfaces:
- xe0: # logical name from topology.yaml and vnfd.yaml
- vpci: "0000:07:00.0"
- driver: i40e # default kernel driver
- dpdk_port_num: 0
- local_ip: "152.16.100.20"
- netmask: "255.255.255.0"
- local_mac: "00:00:00:00:00:01"
- xe1: # logical name from topology.yaml and vnfd.yaml
- vpci: "0000:07:00.1"
- driver: i40e # default kernel driver
- dpdk_port_num: 1
- local_ip: "152.16.40.20"
- netmask: "255.255.255.0"
- local_mac: "00:00.00:00:00:02"
-
-OVS-DPDK Config host_ovs.yaml
-#############################
-
-.. code-block:: YAML
-
- nodes:
- -
- name: ovs_dpdk
- role: OvsDpdk
- ip: 192.168.100.101
- user: ""
- password: ""
-
-ovs_dpdk testcase update: ``<yardstick>/samples/vnf_samples/nsut/vfw/tc_ovs_rfc2544_ipv4_1rule_1flow_64B_trex.yaml``
-
-Update "contexts" section
-"""""""""""""""""""""""""
-
-.. code-block:: YAML
-
- contexts:
- - name: yardstick
- type: Node
- file: /etc/yardstick/nodes/standalone/pod_trex.yaml
- - type: StandaloneOvsDpdk
- name: yardstick
- file: /etc/yardstick/nodes/standalone/pod_ovs.yaml
- vm_deploy: True
- ovs_properties:
- version:
- ovs: 2.7.0
- dpdk: 16.11.1
- pmd_threads: 2
- ram:
- socket_0: 2048
- socket_1: 2048
- queues: 4
- vpath: "/usr/local"
-
- flavor:
- images: "/var/lib/libvirt/images/ubuntu.qcow2"
- ram: 4096
- extra_specs:
- hw:cpu_sockets: 1
- hw:cpu_cores: 6
- hw:cpu_threads: 2
- user: "" # update VM username
- password: "" # update password
- servers:
- vnf:
- network_ports:
- mgmt:
- cidr: '1.1.1.61/24' # Update VM IP address, if static, <ip>/<mask> or if dynamic, <start of ip>/<mask>
- xe0:
- - uplink_0
- xe1:
- - downlink_0
- networks:
- uplink_0:
- phy_port: "0000:05:00.0"
- vpci: "0000:00:07.0"
- cidr: '152.16.100.10/24'
- gateway_ip: '152.16.100.20'
- downlink_0:
- phy_port: "0000:05:00.1"
- vpci: "0000:00:08.0"
- cidr: '152.16.40.10/24'
- gateway_ip: '152.16.100.20'
-
-
-Enabling other Traffic generator
---------------------------------
-
-IxLoad:
-^^^^^^^
-
-1. Software needed: IxLoadAPI ``<IxLoadTclApi verson>Linux64.bin.tgz and <IxOS version>Linux64.bin.tar.gz`` (Download from ixia support site)
- Install - ``<IxLoadTclApi verson>Linux64.bin.tgz & <IxOS version>Linux64.bin.tar.gz``
- If the installation was not done inside the container, after installing the IXIA client,
- check /opt/ixia/ixload/<ver>/bin/ixloadpython and make sure you can run this cmd
- inside the yardstick container. Usually user is required to copy or link /opt/ixia/python/<ver>/bin/ixiapython
- to /usr/bin/ixiapython<ver> inside the container.
-
-2. Update pod_ixia.yaml file with ixia details.
-
- .. code-block:: console
-
- cp <repo>/etc/yardstick/nodes/pod.yaml.nsb.sample.ixia etc/yardstick/nodes/pod_ixia.yaml
-
- Config pod_ixia.yaml
-
- .. code-block:: yaml
-
-
- nodes:
- -
- name: trafficgen_1
- role: IxNet
- ip: 1.2.1.1 #ixia machine ip
- user: user
- password: r00t
- key_filename: /root/.ssh/id_rsa
- tg_config:
- ixchassis: "1.2.1.7" #ixia chassis ip
- tcl_port: "8009" # tcl server port
- lib_path: "/opt/ixia/ixos-api/8.01.0.2/lib/ixTcl1.0"
- root_dir: "/opt/ixia/ixos-api/8.01.0.2/"
- py_bin_path: "/opt/ixia/ixload/8.01.106.3/bin/"
- py_lib_path: "/opt/ixia/ixnetwork/8.01.1029.14/lib/PythonApi"
- dut_result_dir: "/mnt/ixia"
- version: 8.1
- interfaces:
- xe0: # logical name from topology.yaml and vnfd.yaml
- vpci: "2:5" # Card:port
- driver: "none"
- dpdk_port_num: 0
- local_ip: "152.16.100.20"
- netmask: "255.255.0.0"
- local_mac: "00:98:10:64:14:00"
- xe1: # logical name from topology.yaml and vnfd.yaml
- vpci: "2:6" # [(Card, port)]
- driver: "none"
- dpdk_port_num: 1
- local_ip: "152.40.40.20"
- netmask: "255.255.0.0"
- local_mac: "00:98:28:28:14:00"
-
- for sriov/ovs_dpdk pod files, please refer to above Standalone Virtualization for ovs-dpdk/sriov configuration
-
-3. Start IxOS TCL Server (Install 'Ixia IxExplorer IxOS <version>')
- You will also need to configure the IxLoad machine to start the IXIA
- IxosTclServer. This can be started like so:
-
- - Connect to the IxLoad machine using RDP
- - Go to:
- ``Start->Programs->Ixia->IxOS->IxOS 8.01-GA-Patch1->Ixia Tcl Server IxOS 8.01-GA-Patch1``
- or
- ``"C:\Program Files (x86)\Ixia\IxOS\8.01-GA-Patch1\ixTclServer.exe"``
-
-4. Create a folder "Results" in c:\ and share the folder on the network.
-
-5. execute testcase in samplevnf folder.
- eg ``<repo>/samples/vnf_samples/nsut/vfw/tc_baremetal_http_ixload_1b_Requests-65000_Concurrency.yaml``
-
-IxNetwork:
-^^^^^^^^^^
-
-1. Software needed: ``IxNetworkAPI<ixnetwork verson>Linux64.bin.tgz`` (Download from ixia support site)
- Install - ``IxNetworkAPI<ixnetwork verson>Linux64.bin.tgz``
-2. Update pod_ixia.yaml file with ixia details.
-
- .. code-block:: console
-
- cp <repo>/etc/yardstick/nodes/pod.yaml.nsb.sample.ixia etc/yardstick/nodes/pod_ixia.yaml
-
- Config pod_ixia.yaml
-
- .. code-block:: yaml
-
- nodes:
- -
- name: trafficgen_1
- role: IxNet
- ip: 1.2.1.1 #ixia machine ip
- user: user
- password: r00t
- key_filename: /root/.ssh/id_rsa
- tg_config:
- ixchassis: "1.2.1.7" #ixia chassis ip
- tcl_port: "8009" # tcl server port
- lib_path: "/opt/ixia/ixos-api/8.01.0.2/lib/ixTcl1.0"
- root_dir: "/opt/ixia/ixos-api/8.01.0.2/"
- py_bin_path: "/opt/ixia/ixload/8.01.106.3/bin/"
- py_lib_path: "/opt/ixia/ixnetwork/8.01.1029.14/lib/PythonApi"
- dut_result_dir: "/mnt/ixia"
- version: 8.1
- interfaces:
- xe0: # logical name from topology.yaml and vnfd.yaml
- vpci: "2:5" # Card:port
- driver: "none"
- dpdk_port_num: 0
- local_ip: "152.16.100.20"
- netmask: "255.255.0.0"
- local_mac: "00:98:10:64:14:00"
- xe1: # logical name from topology.yaml and vnfd.yaml
- vpci: "2:6" # [(Card, port)]
- driver: "none"
- dpdk_port_num: 1
- local_ip: "152.40.40.20"
- netmask: "255.255.0.0"
- local_mac: "00:98:28:28:14:00"
-
- for sriov/ovs_dpdk pod files, please refer to above Standalone Virtualization for ovs-dpdk/sriov configuration
-
-3. Start IxNetwork TCL Server
- You will also need to configure the IxNetwork machine to start the IXIA
- IxNetworkTclServer. This can be started like so:
-
- - Connect to the IxNetwork machine using RDP
- - Go to: ``Start->Programs->Ixia->IxNetwork->IxNetwork 7.21.893.14 GA->IxNetworkTclServer`` (or ``IxNetworkApiServer``)
-
-4. execute testcase in samplevnf folder.
- eg ``<repo>/samples/vnf_samples/nsut/vfw/tc_baremetal_rfc2544_ipv4_1rule_1flow_64B_ixia.yaml``
-
diff --git a/docs/testing/user/userguide/13-nsb-installation.rst b/docs/testing/user/userguide/13-nsb-installation.rst
new file mode 100644
index 000000000..35f67b92f
--- /dev/null
+++ b/docs/testing/user/userguide/13-nsb-installation.rst
@@ -0,0 +1,1542 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, 2016-2019 Intel Corporation.
+
+..
+ Convention for heading levels in Yardstick documentation:
+
+ ======= Heading 0 (reserved for the title in a document)
+ ------- Heading 1
+ ^^^^^^^ Heading 2
+ +++++++ Heading 3
+ ''''''' Heading 4
+
+ Avoid deeper levels because they do not render well.
+
+
+================
+NSB Installation
+================
+
+.. _OVS-DPDK: http://docs.openvswitch.org/en/latest/intro/install/dpdk/
+.. _devstack: https://docs.openstack.org/devstack/pike/>
+.. _OVS-DPDK-versions: http://docs.openvswitch.org/en/latest/faq/releases/
+
+Abstract
+--------
+
+The steps needed to run Yardstick with NSB testing are:
+
+* Install Yardstick (NSB Testing).
+* Set up or reference a ``pod.yaml`` file describing the test topology.
+* Create/reference the test configuration yaml file.
+* Run the test case.
+
+Prerequisites
+-------------
+
+Refer to :doc:`04-installation` for more information on Yardstick
+prerequisites.
+
+Several prerequisites are needed for Yardstick (VNF testing):
+
+ * Python Modules: pyzmq, pika.
+ * flex
+ * bison
+ * build-essential
+ * automake
+ * libtool
+ * librabbitmq-dev
+ * rabbitmq-server
+ * collectd
+ * intel-cmt-cat
+
+Hardware & Software Ingredients
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+SUT requirements:
+
+ ======= ===================
+ Item Description
+ ======= ===================
+ Memory Min 20GB
+ NICs 2 x 10G
+ OS Ubuntu 16.04.3 LTS
+ kernel 4.4.0-34-generic
+ DPDK 17.02
+ ======= ===================
+
+Boot and BIOS settings:
+
+ ============= =================================================
+ Boot settings default_hugepagesz=1G hugepagesz=1G hugepages=16
+ hugepagesz=2M hugepages=2048 isolcpus=1-11,22-33
+ nohz_full=1-11,22-33 rcu_nocbs=1-11,22-33
+ iommu=on iommu=pt intel_iommu=on
+ Note: nohz_full and rcu_nocbs is to disable Linux
+ kernel interrupts
+ BIOS CPU Power and Performance Policy <Performance>
+ CPU C-state Disabled
+ CPU P-state Disabled
+ Enhanced Intel® Speedstep® Tech Disabl
+ Hyper-Threading Technology (If supported) Enabled
+ Virtualization Techology Enabled
+ Intel(R) VT for Direct I/O Enabled
+ Coherency Enabled
+ Turbo Boost Disabled
+ ============= =================================================
+
+Install Yardstick (NSB Testing)
+-------------------------------
+
+Yardstick with NSB can be installed using ``nsb_setup.sh``.
+The ``nsb_setup.sh`` script allows you to:
+
+1. Install Yardstick in the specified mode: bare metal or container.
+   Refer to :doc:`04-installation`.
+2. Install package dependencies on the remote servers used as traffic generator
+   or sample VNF. Install DPDK, sample VNFs, TRex and collectd.
+   Add such servers to the ``install-inventory.ini`` file, to either the
+   ``yardstick-standalone`` or ``yardstick-baremetal`` server group.
+   The script also configures IOMMU, hugepages, open file limits, CPU
+   isolation, etc.
+3. Build the VM image, either ``nsb`` or ``normal``. The ``nsb`` VM image is
+   used to run Yardstick sample VNF tests, like vFW, vACL, vCGNAPT, etc.
+   The ``normal`` VM image is used to run Yardstick ping tests in the OpenStack
+   context.
+4. Add the ``nsb`` or ``normal`` VM image to OpenStack together with the
+   OpenStack variables.
+
+Firstly, configure the network proxy, either using the environment variables or
+setting the global environment file.
+
+To set it in the global environment file::
+
+ http_proxy='http://proxy.company.com:port'
+ https_proxy='http://proxy.company.com:port'
+
+Set environment variables:
+
+.. code-block:: console
+
+ export http_proxy='http://proxy.company.com:port'
+ export https_proxy='http://proxy.company.com:port'
+
+Download the source code and check out the latest stable branch:
+
+.. code-block:: console
+
+ git clone https://gerrit.opnfv.org/gerrit/yardstick
+ cd yardstick
+ # Switch to latest stable branch
+ git checkout stable/gambia
+
+Modify the Yardstick installation inventory used by Ansible:
+
+.. code-block:: ini
+
+ cat ./ansible/install-inventory.ini
+ [jumphost]
+ localhost ansible_connection=local
+
+ # section below is only due backward compatibility.
+ # it will be removed later
+ [yardstick:children]
+ jumphost
+
+ [yardstick-baremetal]
+ baremetal ansible_host=192.168.2.51 ansible_connection=ssh
+
+ [yardstick-standalone]
+ standalone ansible_host=192.168.2.52 ansible_connection=ssh
+
+ [all:vars]
+ # Uncomment credentials below if needed
+ ansible_user=root
+ ansible_ssh_pass=root
+ # ansible_ssh_private_key_file=/root/.ssh/id_rsa
+ # When IMG_PROPERTY is passed neither normal nor nsb set
+ # "path_to_vm=/path/to/image" to add it to OpenStack
+ # path_to_img=/tmp/workspace/yardstick-image.img
+
+ # List of CPUs to be isolated (not used by default)
+ # Grub line will be extended with:
+ # "isolcpus=<ISOL_CPUS> nohz=on nohz_full=<ISOL_CPUS> rcu_nocbs=1<ISOL_CPUS>"
+ # ISOL_CPUS=2-27,30-55 # physical cpu's for all NUMA nodes, four cpu's reserved
+
+.. warning::
+
+ Before running ``nsb_setup.sh`` make sure python is installed on servers
+ added to ``yardstick-standalone`` and ``yardstick-baremetal`` groups.
+
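+A quick way to satisfy this on Ubuntu 16.04 based servers, shown here only as
+an example, is to install the default python package on each of them::
+
+   sudo apt-get update && sudo apt-get install -y python
+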
+.. note::
+
+ SSH access without password needs to be configured for all your nodes
+ defined in ``install-inventory.ini`` file.
+ If you want to use password authentication you need to install ``sshpass``::
+
+ sudo -EH apt-get install sshpass
+
+
+.. note::
+
+ A VM image built by other means than Yardstick can be added to OpenStack.
+ Uncomment and set correct path to the VM image in the
+ ``install-inventory.ini`` file::
+
+ path_to_img=/tmp/workspace/yardstick-image.img
+
+
+.. note::
+
+ CPU isolation can be applied to the remote servers, like:
+ ISOL_CPUS=2-27,30-55. Uncomment and modify accordingly in
+ ``install-inventory.ini`` file.
+
+By default ``nsb_setup.sh`` pulls the Yardstick image based on Ubuntu 16.04
+from Docker Hub and starts a container, builds the NSB VM image based on
+Ubuntu 16.04, and installs packages on the servers given in the
+``yardstick-standalone`` and ``yardstick-baremetal`` host groups.
+
+To pull the Yardstick image based on Ubuntu 18.04 instead, run::
+
+ ./nsb_setup.sh -i opnfv/yardstick-ubuntu-18.04:latest
+
+To change the default behavior, modify the parameters passed to ``install.yaml``
+in the ``nsb_setup.sh`` file.
+
+Refer to chapter :doc:`04-installation` for more details on the ``install.yaml``
+parameters.
+
+To execute an installation for a **BareMetal** or a **Standalone context**::
+
+ ./nsb_setup.sh
+
+To execute an installation for an **OpenStack** context::
+
+ ./nsb_setup.sh <path to admin-openrc.sh>
+
+.. note::
+
+   Yardstick may not be operational after a distribution Linux kernel update,
+   if it was installed before the update. Run ``nsb_setup.sh`` again to resolve
+   this.
+
+.. warning::
+
+ The Yardstick VM image (NSB or normal) cannot be built inside a VM.
+
+.. warning::
+
+   The ``nsb_setup.sh`` script configures hugepages, CPU isolation and IOMMU in
+   the grub configuration. A reboot of the servers in the
+   ``yardstick-standalone`` or ``yardstick-baremetal`` groups of the
+   ``install-inventory.ini`` file is required to apply those changes.
+
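+After the reboot, the grub changes can be verified on each server; a short
+check, assuming a standard Linux environment, is::
+
+   grep Huge /proc/meminfo
+   grep -o 'isolcpus=[^ ]*' /proc/cmdline
+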
+The above commands will set up Docker with the latest Yardstick code. To enter
+the container, execute::
+
+ docker exec -it yardstick bash
+
+.. note::
+
+   It may be necessary to configure the tty in the docker container to extend
+   the command-line character length, for example::
+
+      stty rows 58 cols 234
+
+``nsb_setup.sh`` also automatically downloads all the packages needed for the
+NSB testing setup. Refer to chapter :doc:`04-installation` for more on Docker:
+:ref:`Install Yardstick using Docker`
+
+Bare Metal context example
+^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Let's assume there are three servers acting as TG, sample VNF DUT and jump host.
+
+Perform the following steps to install NSB:
+
+1. Clone the Yardstick repo to the jump host.
+2. Add the TG and DUT servers to the ``yardstick-baremetal`` group in the
+   ``install-inventory.ini`` file to install NSB and its dependencies. Install
+   python on the servers.
+3. Start the deployment using the docker image based on Ubuntu 16:
+
+.. code-block:: console
+
+ ./nsb_setup.sh
+
+4. Reboot the bare metal servers.
+5. Enter the yardstick container, modify the pod yaml file and run the tests,
+   as sketched below.
+
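+A minimal sketch of step 5, assuming the vFW RFC2544 bare metal sample test
+case (the test case path is given only as an example)::
+
+   docker exec -it yardstick bash
+   vi /etc/yardstick/nodes/pod.yaml
+   yardstick --debug task start \
+       samples/vnf_samples/nsut/vfw/tc_baremetal_rfc2544_ipv4_1rule_1flow_64B_trex.yaml
+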
+Standalone context example for Ubuntu 18
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Let's assume there are three servers acting as TG, sample VNF DUT and jump host.
+Ubuntu 18 is installed on all servers.
+
+Perform the following steps to install NSB:
+
+1. Clone the Yardstick repo to the jump host.
+2. Add the TG server to the ``yardstick-baremetal`` group in the
+   ``install-inventory.ini`` file to install NSB and its dependencies.
+   Add the server where the VM with the sample VNF will be deployed to the
+   ``yardstick-standalone`` group in the ``install-inventory.ini`` file.
+   The target VM image named ``yardstick-nsb-image.img`` will be placed in
+   ``/var/lib/libvirt/images/``.
+   Install python on the servers.
+3. Modify ``nsb_setup.sh`` on the jump host:
+
+.. code-block:: console
+
+ ansible-playbook \
+ -e IMAGE_PROPERTY='nsb' \
+ -e OS_RELEASE='bionic' \
+ -e INSTALLATION_MODE='container_pull' \
+ -e YARD_IMAGE_ARCH='amd64' ${extra_args} \
+ -i install-inventory.ini install.yaml
+
+4. Start the deployment with the Yardstick docker image based on Ubuntu 18:
+
+.. code-block:: console
+
+ ./nsb_setup.sh -i opnfv/yardstick-ubuntu-18.04:latest -o <openrc_file>
+
+5. Reboot the servers.
+6. Enter the yardstick container, modify the pod yaml file and run the tests
+   (see the sketch below).
+
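+A minimal sketch of steps 5-6, assuming the SR-IOV vFW sample test case and the
+default image location from step 2 (paths are given only as examples)::
+
+   # on the jump host
+   docker exec -it yardstick bash
+   yardstick --debug task start \
+       samples/vnf_samples/nsut/vfw/tc_sriov_rfc2544_ipv4_1rule_1flow_64B_trex.yaml
+
+   # on the yardstick-standalone server, check the VM image placed by the installer
+   ls -lh /var/lib/libvirt/images/yardstick-nsb-image.img
+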
+
+System Topology
+---------------
+
+.. code-block:: console
+
+ +----------+ +----------+
+ | | | |
+ | | (0)----->(0) | |
+ | TG1 | | DUT |
+ | | | |
+ | | (1)<-----(1) | |
+ +----------+ +----------+
+ trafficgen_0 vnf
+
+
+Environment parameters and credentials
+--------------------------------------
+
+Configure yardstick.conf
+^^^^^^^^^^^^^^^^^^^^^^^^
+
+If you did not run ``yardstick env influxdb`` inside the container to generate
+``yardstick.conf``, then create the config file manually (run inside the
+container)::
+
+ cp ./etc/yardstick/yardstick.conf.sample /etc/yardstick/yardstick.conf
+ vi /etc/yardstick/yardstick.conf
+
+Add ``trex_path``, ``trex_client_lib`` and ``bin_path`` to the ``nsb``
+section:
+
+.. code-block:: ini
+
+ [DEFAULT]
+ debug = True
+ dispatcher = influxdb
+
+ [dispatcher_influxdb]
+ timeout = 5
+ target = http://{YOUR_IP_HERE}:8086
+ db_name = yardstick
+ username = root
+ password = root
+
+ [nsb]
+ trex_path=/opt/nsb_bin/trex/scripts
+ bin_path=/opt/nsb_bin
+ trex_client_lib=/opt/nsb_bin/trex_client/stl
+
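+To check that the ``dispatcher_influxdb`` target is reachable from inside the
+container, the InfluxDB ``/ping`` endpoint can be probed (replace the IP with
+your own)::
+
+   curl -s -o /dev/null -w "%{http_code}\n" http://{YOUR_IP_HERE}:8086/ping
+   # 204 means InfluxDB is up and reachable
+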
+Run Yardstick - Network Service Testcases
+-----------------------------------------
+
+NS testing - using yardstick CLI
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ See :doc:`04-installation`
+
+Connect to the Yardstick container::
+
+ docker exec -it yardstick /bin/bash
+
+If you're running ``heat`` testcases and ``nsb_setup.sh`` was not used::
+
+ source /etc/yardstick/openstack.creds
+
+In addition to the above, you need to set the ``EXTERNAL_NETWORK`` for
+OpenStack::
+
+ export EXTERNAL_NETWORK="<openstack public network>"
+
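+If you are unsure of the external network name, it can be listed with the
+OpenStack CLI::
+
+   openstack network list --external
+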
+Finally, you should be able to run the testcase::
+
+ yardstick --debug task start yardstick/samples/vnf_samples/nsut/<vnf>/<test case>
+
+Network Service Benchmarking - Bare-Metal
+-----------------------------------------
+
+Bare-Metal Config pod.yaml describing Topology
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Bare-Metal 2-Node setup
++++++++++++++++++++++++
+.. code-block:: console
+
+ +----------+ +----------+
+ | | | |
+ | | (0)----->(0) | |
+ | TG1 | | DUT |
+ | | | |
+ | | (n)<-----(n) | |
+ +----------+ +----------+
+ trafficgen_0 vnf
+
+Bare-Metal 3-Node setup - Correlated Traffic
+++++++++++++++++++++++++++++++++++++++++++++
+.. code-block:: console
+
+ +----------+ +----------+ +------------+
+ | | | | | |
+ | | | | | |
+ | | (0)----->(0) | | | UDP |
+ | TG1 | | DUT | | Replay |
+ | | | | | |
+ | | | |(1)<---->(0)| |
+ +----------+ +----------+ +------------+
+ trafficgen_0 vnf trafficgen_1
+
+
+Bare-Metal Config pod.yaml
+^^^^^^^^^^^^^^^^^^^^^^^^^^
+Before executing Yardstick test cases, make sure that ``pod.yaml`` reflects the
+topology and update all the required fields::
+
+ cp <yardstick>/etc/yardstick/nodes/pod.yaml.nsb.sample /etc/yardstick/nodes/pod.yaml
+
+.. code-block:: YAML
+
+ nodes:
+ -
+ name: trafficgen_0
+ role: TrafficGen
+ ip: 1.1.1.1
+ user: root
+ password: r00t
+ interfaces:
+ xe0: # logical name from topology.yaml and vnfd.yaml
+ vpci: "0000:07:00.0"
+ driver: i40e # default kernel driver
+ dpdk_port_num: 0
+ local_ip: "152.16.100.20"
+ netmask: "255.255.255.0"
+ local_mac: "00:00:00:00:00:01"
+ xe1: # logical name from topology.yaml and vnfd.yaml
+ vpci: "0000:07:00.1"
+ driver: i40e # default kernel driver
+ dpdk_port_num: 1
+ local_ip: "152.16.40.20"
+ netmask: "255.255.255.0"
+ local_mac: "00:00:00:00:00:02"
+
+ -
+ name: vnf
+ role: vnf
+ ip: 1.1.1.2
+ user: root
+ password: r00t
+ host: 1.1.1.2 #BM - host == ip, virtualized env - Host - compute node
+ interfaces:
+ xe0: # logical name from topology.yaml and vnfd.yaml
+ vpci: "0000:07:00.0"
+ driver: i40e # default kernel driver
+ dpdk_port_num: 0
+ local_ip: "152.16.100.19"
+ netmask: "255.255.255.0"
+ local_mac: "00:00:00:00:00:03"
+
+ xe1: # logical name from topology.yaml and vnfd.yaml
+ vpci: "0000:07:00.1"
+ driver: i40e # default kernel driver
+ dpdk_port_num: 1
+ local_ip: "152.16.40.19"
+ netmask: "255.255.255.0"
+ local_mac: "00:00:00:00:00:04"
+ routing_table:
+ - network: "152.16.100.20"
+ netmask: "255.255.255.0"
+ gateway: "152.16.100.20"
+ if: "xe0"
+ - network: "152.16.40.20"
+ netmask: "255.255.255.0"
+ gateway: "152.16.40.20"
+ if: "xe1"
+ nd_route_tbl:
+ - network: "0064:ff9b:0:0:0:0:9810:6414"
+ netmask: "112"
+ gateway: "0064:ff9b:0:0:0:0:9810:6414"
+ if: "xe0"
+ - network: "0064:ff9b:0:0:0:0:9810:2814"
+ netmask: "112"
+ gateway: "0064:ff9b:0:0:0:0:9810:2814"
+ if: "xe1"
+
+
+Standalone Virtualization
+-------------------------
+
+The VM can be deployed manually or by Yardstick. If the parameter *vm_deploy* is
+set to ``True``, the VM will be deployed by Yardstick. Otherwise the VM should be
+deployed manually. Test case example, context section::
+
+ contexts:
+ ...
+ vm_deploy: True
+
+
+SR-IOV
+^^^^^^
+
+SR-IOV Pre-requisites
++++++++++++++++++++++
+
+On Host, where VM is created:
+ 1. Create and configure a bridge named ``br-int`` for VM to connect to
+ external network. Currently this can be done using VXLAN tunnel.
+
+ Execute the following on host, where VM is created::
+
+ ip link add type vxlan remote <Jumphost IP> local <DUT IP> id <ID: 10> dstport 4789
+ brctl addbr br-int
+ brctl addif br-int vxlan0
+ ip link set dev vxlan0 up
+ ip addr add <IP#1, like: 172.20.2.1/24> dev br-int
+ ip link set dev br-int up
+
+ .. note:: You may need to add extra rules to iptable to forward traffic.
+
+ .. code-block:: console
+
+ iptables -A FORWARD -i br-int -s <network ip address>/<netmask> -j ACCEPT
+ iptables -A FORWARD -o br-int -d <network ip address>/<netmask> -j ACCEPT
+
+ Execute the following on a jump host:
+
+ .. code-block:: console
+
+ ip link add type vxlan remote <DUT IP> local <Jumphost IP> id <ID: 10> dstport 4789
+ ip addr add <IP#2, like: 172.20.2.2/24> dev vxlan0
+ ip link set dev vxlan0 up
+
+ .. note:: Host and jump host are different baremetal servers.
+
+ 2. Modify test case management CIDR.
+ IP addresses IP#1, IP#2 and CIDR must be in the same network.
+
+ .. code-block:: YAML
+
+ servers:
+ vnf_0:
+ network_ports:
+ mgmt:
+ cidr: '1.1.1.7/24'
+
+ 3. Build guest image for VNF to run.
+ Most of the sample test cases in Yardstick are using a guest image called
+ ``yardstick-nsb-image`` which deviates from an Ubuntu Cloud Server image
+ Yardstick has a tool for building this custom image with SampleVNF.
+ It is necessary to have ``sudo`` rights to use this tool.
+
+ Also you may need to install several additional packages to use this tool, by
+ following the commands below::
+
+ sudo apt-get update && sudo apt-get install -y qemu-utils kpartx
+
+ This image can be built using the following command in the directory where
+ Yardstick is installed::
+
+ export YARD_IMG_ARCH='amd64'
+ sudo echo "Defaults env_keep += \'YARD_IMG_ARCH\'" >> /etc/sudoers
+
+ For instructions on generating a cloud image using Ansible, refer to
+ :doc:`04-installation`.
+
+ .. note:: VM should be build with static IP and be accessible from the
+ Yardstick host.
+
+
+SR-IOV Config pod.yaml describing Topology
+++++++++++++++++++++++++++++++++++++++++++
+
+SR-IOV 2-Node setup
++++++++++++++++++++
+.. code-block:: console
+
+ +--------------------+
+ | |
+ | |
+ | DUT |
+ | (VNF) |
+ | |
+ +--------------------+
+ | VF NIC | | VF NIC |
+ +--------+ +--------+
+ ^ ^
+ | |
+ | |
+ +----------+ +-------------------------+
+ | | | ^ ^ |
+ | | | | | |
+ | | (0)<----->(0) | ------ SUT | |
+ | TG1 | | | |
+ | | (n)<----->(n) | ----------------- |
+ | | | |
+ +----------+ +-------------------------+
+ trafficgen_0 host
+
+
+
+SR-IOV 3-Node setup - Correlated Traffic
+++++++++++++++++++++++++++++++++++++++++
+.. code-block:: console
+
+ +--------------------+
+ | |
+ | |
+ | DUT |
+ | (VNF) |
+ | |
+ +--------------------+
+ | VF NIC | | VF NIC |
+ +--------+ +--------+
+ ^ ^
+ | |
+ | |
+ +----------+ +---------------------+ +--------------+
+ | | | ^ ^ | | |
+ | | | | | | | |
+ | | (0)<----->(0) |----- | | | TG2 |
+ | TG1 | | SUT | | | (UDP Replay) |
+ | | | | | | |
+ | | (n)<----->(n) | -----| (n)<-->(n) | |
+ +----------+ +---------------------+ +--------------+
+ trafficgen_0 host trafficgen_1
+
+Before executing Yardstick test cases, make sure that ``pod.yaml`` reflects the
+topology and update all the required fields.
+
+.. code-block:: console
+
+ cp <yardstick>/etc/yardstick/nodes/standalone/trex_bm.yaml.sample /etc/yardstick/nodes/standalone/pod_trex.yaml
+ cp <yardstick>/etc/yardstick/nodes/standalone/host_sriov.yaml /etc/yardstick/nodes/standalone/host_sriov.yaml
+
+.. note:: Update all the required fields like ip, user, password, pcis, etc...
+
+SR-IOV Config pod_trex.yaml
++++++++++++++++++++++++++++
+
+.. code-block:: YAML
+
+ nodes:
+ -
+ name: trafficgen_0
+ role: TrafficGen
+ ip: 1.1.1.1
+ user: root
+ password: r00t
+ key_filename: /root/.ssh/id_rsa
+ interfaces:
+ xe0: # logical name from topology.yaml and vnfd.yaml
+ vpci: "0000:07:00.0"
+ driver: i40e # default kernel driver
+ dpdk_port_num: 0
+ local_ip: "152.16.100.20"
+ netmask: "255.255.255.0"
+ local_mac: "00:00:00:00:00:01"
+ xe1: # logical name from topology.yaml and vnfd.yaml
+ vpci: "0000:07:00.1"
+ driver: i40e # default kernel driver
+ dpdk_port_num: 1
+ local_ip: "152.16.40.20"
+ netmask: "255.255.255.0"
+ local_mac: "00:00:00:00:00:02"
+
+SR-IOV Config host_sriov.yaml
++++++++++++++++++++++++++++++
+
+.. code-block:: YAML
+
+ nodes:
+ -
+ name: sriov
+ role: Sriov
+ ip: 192.168.100.101
+ user: ""
+ password: ""
+
+SR-IOV testcase update:
+``<yardstick>/samples/vnf_samples/nsut/vfw/tc_sriov_rfc2544_ipv4_1rule_1flow_64B_trex.yaml``
+
+Update contexts section
+'''''''''''''''''''''''
+
+.. code-block:: YAML
+
+ contexts:
+ - name: yardstick
+ type: Node
+ file: /etc/yardstick/nodes/standalone/pod_trex.yaml
+ - type: StandaloneSriov
+ file: /etc/yardstick/nodes/standalone/host_sriov.yaml
+ name: yardstick
+ vm_deploy: True
+ flavor:
+ images: "/var/lib/libvirt/images/ubuntu.qcow2"
+ ram: 4096
+ extra_specs:
+ hw:cpu_sockets: 1
+ hw:cpu_cores: 6
+ hw:cpu_threads: 2
+ user: "" # update VM username
+ password: "" # update password
+ servers:
+ vnf_0:
+ network_ports:
+ mgmt:
+ cidr: '1.1.1.61/24' # Update VM IP address, if static, <ip>/<mask> or if dynamic, <start of ip>/<mask>
+ xe0:
+ - uplink_0
+ xe1:
+ - downlink_0
+ networks:
+ uplink_0:
+ phy_port: "0000:05:00.0"
+ vpci: "0000:00:07.0"
+ cidr: '152.16.100.10/24'
+ gateway_ip: '152.16.100.20'
+ downlink_0:
+ phy_port: "0000:05:00.1"
+ vpci: "0000:00:08.0"
+ cidr: '152.16.40.10/24'
+ gateway_ip: '152.16.100.20'
+
+
+SRIOV configuration options
++++++++++++++++++++++++++++
+
+The only configuration option available for SRIOV is *vpci*. It is used as base
+address for VFs that are created during SRIOV test case execution.
+
+ .. code-block:: yaml+jinja
+
+ networks:
+ uplink_0:
+ phy_port: "0000:05:00.0"
+ vpci: "0000:00:07.0"
+ cidr: '152.16.100.10/24'
+ gateway_ip: '152.16.100.20'
+ downlink_0:
+ phy_port: "0000:05:00.1"
+ vpci: "0000:00:08.0"
+ cidr: '152.16.40.10/24'
+ gateway_ip: '152.16.100.20'
+
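+To check that VFs were actually created on the configured ``phy_port`` during a
+test run, the following can be executed on the standalone host (the PCI address
+is taken from the example above)::
+
+   cat /sys/bus/pci/devices/0000:05:00.0/sriov_numvfs
+   lspci | grep -i "virtual function"
+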
+.. _`VM image properties label`:
+
+VM image properties
+'''''''''''''''''''
+
+VM image properties example under *flavor* section:
+
+ .. code-block:: console
+
+ flavor:
+ images: <path>
+ ram: 8192
+ extra_specs:
+ machine_type: 'pc-i440fx-xenial'
+ hw:cpu_sockets: 1
+ hw:cpu_cores: 6
+ hw:cpu_threads: 2
+ hw_socket: 0
+ cputune: |
+ <cputune>
+ <vcpupin vcpu="0" cpuset="7"/>
+ <vcpupin vcpu="1" cpuset="8"/>
+ ...
+ <vcpupin vcpu="11" cpuset="18"/>
+ <emulatorpin cpuset="11"/>
+ </cputune>
+ user: ""
+ password: ""
+
+VM image properties description:
+
+ +-------------------------+-------------------------------------------------+
+ | Parameters | Detail |
+ +=========================+=================================================+
+ | images || Path to the VM image generated by |
+ | | ``nsb_setup.sh`` |
+ | || Default path is ``/var/lib/libvirt/images/`` |
+ | || Default file name ``yardstick-nsb-image.img`` |
+ | | or ``yardstick-image.img`` |
+ +-------------------------+-------------------------------------------------+
+ | ram || Amount of RAM to be used for VM |
+ | || Default is 4096 MB |
+ +-------------------------+-------------------------------------------------+
+ | hw:cpu_sockets || Number of sockets provided to the guest VM |
+ | || Default is 1 |
+ +-------------------------+-------------------------------------------------+
+ | hw:cpu_cores || Number of cores provided to the guest VM |
+ | || Default is 2 |
+ +-------------------------+-------------------------------------------------+
+ | hw:cpu_threads || Number of threads provided to the guest VM |
+ | || Default is 2 |
+ +-------------------------+-------------------------------------------------+
+ | hw_socket || Generate vcpu cpuset from given HW socket |
+ | || Default is 0 |
+ +-------------------------+-------------------------------------------------+
+ | cputune || Maps virtual cpu with logical cpu |
+ +-------------------------+-------------------------------------------------+
+ | machine_type || Machine type to be emulated in VM |
+ | || Default is 'pc-i440fx-xenial' |
+ +-------------------------+-------------------------------------------------+
+ | user || User name to access the VM |
+ | || Default value is 'root' |
+ +-------------------------+-------------------------------------------------+
+ | password || Password to access the VM |
+ +-------------------------+-------------------------------------------------+
+
+
+OVS-DPDK
+^^^^^^^^
+
+OVS-DPDK Pre-requisites
++++++++++++++++++++++++
+
+On Host, where VM is created:
+ 1. Create and configure a bridge named ``br-int`` for VM to connect to
+ external network. Currently this can be done using VXLAN tunnel.
+
+ Execute the following on host, where VM is created:
+
+ .. code-block:: console
+
+ ip link add type vxlan remote <Jumphost IP> local <DUT IP> id <ID: 10> dstport 4789
+ brctl addbr br-int
+ brctl addif br-int vxlan0
+ ip link set dev vxlan0 up
+ ip addr add <IP#1, like: 172.20.2.1/24> dev br-int
+ ip link set dev br-int up
+
+ .. note:: May be needed to add extra rules to iptable to forward traffic.
+
+ .. code-block:: console
+
+ iptables -A FORWARD -i br-int -s <network ip address>/<netmask> -j ACCEPT
+ iptables -A FORWARD -o br-int -d <network ip address>/<netmask> -j ACCEPT
+
+ Execute the following on a jump host:
+
+ .. code-block:: console
+
+ ip link add type vxlan remote <DUT IP> local <Jumphost IP> id <ID: 10> dstport 4789
+ ip addr add <IP#2, like: 172.20.2.2/24> dev vxlan0
+ ip link set dev vxlan0 up
+
+ .. note:: Host and jump host are different baremetal servers.
+
+ 2. Modify test case management CIDR.
+ IP addresses IP#1, IP#2 and CIDR must be in the same network.
+
+ .. code-block:: YAML
+
+ servers:
+ vnf_0:
+ network_ports:
+ mgmt:
+ cidr: '1.1.1.7/24'
+
+ 3. Build guest image for VNF to run.
+ Most of the sample test cases in Yardstick are using a guest image called
+ ``yardstick-nsb-image`` which deviates from an Ubuntu Cloud Server image
+ Yardstick has a tool for building this custom image with SampleVNF.
+ It is necessary to have ``sudo`` rights to use this tool.
+
+ You may need to install several additional packages to use this tool, by
+ following the commands below::
+
+ sudo apt-get update && sudo apt-get install -y qemu-utils kpartx
+
+ This image can be built using the following command in the directory where
+ Yardstick is installed::
+
+ export YARD_IMG_ARCH='amd64'
+ sudo echo "Defaults env_keep += \'YARD_IMG_ARCH\'" >> /etc/sudoers
+ sudo tools/yardstick-img-dpdk-modify tools/ubuntu-server-cloudimg-samplevnf-modify.sh
+
+ for more details refer to chapter :doc:`04-installation`
+
+ .. note:: VM should be build with static IP and should be accessible from
+ yardstick host.
+
+4. OVS & DPDK version:
+
+   * OVS 2.7 and DPDK 16.11.1 or later versions are supported
+
+Refer to the setup instructions at `OVS-DPDK`_ to set it up on the host.
+
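+To check which OVS version is already present on the host, e.g.::
+
+   ovs-vsctl --version
+   ovs-vswitchd --version
+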
+OVS-DPDK Config pod.yaml describing Topology
+++++++++++++++++++++++++++++++++++++++++++++
+
+OVS-DPDK 2-Node setup
++++++++++++++++++++++
+
+.. code-block:: console
+
+ +--------------------+
+ | |
+ | |
+ | DUT |
+ | (VNF) |
+ | |
+ +--------------------+
+ | virtio | | virtio |
+ +--------+ +--------+
+ ^ ^
+ | |
+ | |
+ +--------+ +--------+
+ | vHOST0 | | vHOST1 |
+ +----------+ +-------------------------+
+ | | | ^ ^ |
+ | | | | | |
+ | | (0)<----->(0) | ------ | |
+ | TG1 | | SUT | |
+ | | | (ovs-dpdk) | |
+ | | (n)<----->(n) |------------------ |
+ +----------+ +-------------------------+
+ trafficgen_0 host
+
+
+OVS-DPDK 3-Node setup - Correlated Traffic
+++++++++++++++++++++++++++++++++++++++++++
+
+.. code-block:: console
+
+ +--------------------+
+ | |
+ | |
+ | DUT |
+ | (VNF) |
+ | |
+ +--------------------+
+ | virtio | | virtio |
+ +--------+ +--------+
+ ^ ^
+ | |
+ | |
+ +--------+ +--------+
+ | vHOST0 | | vHOST1 |
+ +----------+ +-------------------------+ +------------+
+ | | | ^ ^ | | |
+ | | | | | | | |
+ | | (0)<----->(0) | ------ | | | TG2 |
+ | TG1 | | SUT | | |(UDP Replay)|
+ | | | (ovs-dpdk) | | | |
+ | | (n)<----->(n) | ------ |(n)<-->(n)| |
+ +----------+ +-------------------------+ +------------+
+ trafficgen_0 host trafficgen_1
+
+
+Before executing Yardstick test cases, make sure that the ``pod.yaml`` reflects
+the topology and update all the required fields::
+
+ cp <yardstick>/etc/yardstick/nodes/standalone/trex_bm.yaml.sample /etc/yardstick/nodes/standalone/pod_trex.yaml
+ cp <yardstick>/etc/yardstick/nodes/standalone/host_ovs.yaml /etc/yardstick/nodes/standalone/host_ovs.yaml
+
+.. note:: Update all the required fields like ip, user, password, pcis, etc...
+
+OVS-DPDK Config pod_trex.yaml
++++++++++++++++++++++++++++++
+
+.. code-block:: YAML
+
+ nodes:
+ -
+ name: trafficgen_0
+ role: TrafficGen
+ ip: 1.1.1.1
+ user: root
+ password: r00t
+ interfaces:
+ xe0: # logical name from topology.yaml and vnfd.yaml
+ vpci: "0000:07:00.0"
+ driver: i40e # default kernel driver
+ dpdk_port_num: 0
+ local_ip: "152.16.100.20"
+ netmask: "255.255.255.0"
+ local_mac: "00:00:00:00:00:01"
+ xe1: # logical name from topology.yaml and vnfd.yaml
+ vpci: "0000:07:00.1"
+ driver: i40e # default kernel driver
+ dpdk_port_num: 1
+ local_ip: "152.16.40.20"
+ netmask: "255.255.255.0"
+ local_mac: "00:00:00:00:00:02"
+
+OVS-DPDK Config host_ovs.yaml
++++++++++++++++++++++++++++++
+
+.. code-block:: YAML
+
+ nodes:
+ -
+ name: ovs_dpdk
+ role: OvsDpdk
+ ip: 192.168.100.101
+ user: ""
+ password: ""
+
+ovs_dpdk testcase update:
+``<yardstick>/samples/vnf_samples/nsut/vfw/tc_ovs_rfc2544_ipv4_1rule_1flow_64B_trex.yaml``
+
+Update contexts section
+'''''''''''''''''''''''
+
+.. code-block:: YAML
+
+ contexts:
+ - name: yardstick
+ type: Node
+ file: /etc/yardstick/nodes/standalone/pod_trex.yaml
+ - type: StandaloneOvsDpdk
+ name: yardstick
+ file: /etc/yardstick/nodes/standalone/pod_ovs.yaml
+ vm_deploy: True
+ ovs_properties:
+ version:
+ ovs: 2.7.0
+ dpdk: 16.11.1
+ pmd_threads: 2
+ ram:
+ socket_0: 2048
+ socket_1: 2048
+ queues: 4
+ vpath: "/usr/local"
+
+ flavor:
+ images: "/var/lib/libvirt/images/ubuntu.qcow2"
+ ram: 4096
+ extra_specs:
+ hw:cpu_sockets: 1
+ hw:cpu_cores: 6
+ hw:cpu_threads: 2
+ user: "" # update VM username
+ password: "" # update password
+ servers:
+ vnf_0:
+ network_ports:
+ mgmt:
+ cidr: '1.1.1.61/24' # Update VM IP address, if static, <ip>/<mask> or if dynamic, <start of ip>/<mask>
+ xe0:
+ - uplink_0
+ xe1:
+ - downlink_0
+ networks:
+ uplink_0:
+ phy_port: "0000:05:00.0"
+ vpci: "0000:00:07.0"
+ cidr: '152.16.100.10/24'
+ gateway_ip: '152.16.100.20'
+ downlink_0:
+ phy_port: "0000:05:00.1"
+ vpci: "0000:00:08.0"
+ cidr: '152.16.40.10/24'
+ gateway_ip: '152.16.100.20'
+
+OVS-DPDK configuration options
+++++++++++++++++++++++++++++++
+
+There are a number of configuration options available for the OVS-DPDK context
+in the test case. They are mostly used for performance tuning.
+
+OVS-DPDK properties:
+''''''''''''''''''''
+
+OVS-DPDK properties example under *ovs_properties* section:
+
+ .. code-block:: console
+
+ ovs_properties:
+ version:
+ ovs: 2.8.1
+ dpdk: 17.05.2
+ pmd_threads: 4
+ pmd_cpu_mask: "0x3c"
+ ram:
+ socket_0: 2048
+ socket_1: 2048
+ queues: 2
+ vpath: "/usr/local"
+ max_idle: 30000
+ lcore_mask: 0x02
+ dpdk_pmd-rxq-affinity:
+ 0: "0:2,1:2"
+ 1: "0:2,1:2"
+ 2: "0:3,1:3"
+ 3: "0:3,1:3"
+ vhost_pmd-rxq-affinity:
+ 0: "0:3,1:3"
+ 1: "0:3,1:3"
+ 2: "0:4,1:4"
+ 3: "0:4,1:4"
+
+OVS-DPDK properties description:
+
+ +-------------------------+-------------------------------------------------+
+ | Parameters | Detail |
+ +=========================+=================================================+
+ | version || Version of OVS and DPDK to be installed |
+ | || There is a relation between OVS and DPDK |
+ | | version which can be found at |
+ | | `OVS-DPDK-versions`_ |
+ | || By default OVS: 2.6.0, DPDK: 16.07.2 |
+ +-------------------------+-------------------------------------------------+
+ | lcore_mask || Core bitmask used during DPDK initialization |
+ | | where the non-datapath OVS-DPDK threads such |
+ | | as handler and revalidator threads run |
+ +-------------------------+-------------------------------------------------+
+ | pmd_cpu_mask || Core bitmask that sets which cores are used by |
+ | || OVS-DPDK for datapath packet processing |
+ +-------------------------+-------------------------------------------------+
+ | pmd_threads || Number of PMD threads used by OVS-DPDK for |
+ | | datapath |
+ | || This core mask is evaluated in Yardstick |
+ | || It will be used if pmd_cpu_mask is not given |
+ | || Default is 2 |
+ +-------------------------+-------------------------------------------------+
+ | ram || Amount of RAM to be used for each socket, MB |
+ | || Default is 2048 MB |
+ +-------------------------+-------------------------------------------------+
+ | queues || Number of RX queues used for DPDK physical |
+ | | interface |
+ +-------------------------+-------------------------------------------------+
+ | dpdk_pmd-rxq-affinity || RX queue assignment to PMD threads for DPDK |
+ | || e.g.: <port number> : <queue-id>:<core-id> |
+ +-------------------------+-------------------------------------------------+
+ | vhost_pmd-rxq-affinity || RX queue assignment to PMD threads for vhost |
+ | || e.g.: <port number> : <queue-id>:<core-id> |
+ +-------------------------+-------------------------------------------------+
+ | vpath || User path for openvswitch files |
+ | || Default is ``/usr/local`` |
+ +-------------------------+-------------------------------------------------+
+ | max_idle || The maximum time that idle flows will remain |
+ | | cached in the datapath, ms |
+ +-------------------------+-------------------------------------------------+
+
+
+VM image properties
+'''''''''''''''''''
+
+VM image properties are the same as for SR-IOV, see :ref:`VM image properties label`.
+
+
+OpenStack with SR-IOV support
+-----------------------------
+
+This section describes how to run a Sample VNF test case, using Heat context,
+with SR-IOV. It also covers how to install OpenStack in Ubuntu 16.04, using
+DevStack, with SR-IOV support.
+
+
+Single node OpenStack with external TG
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. code-block:: console
+
+ +----------------------------+
+ |OpenStack(DevStack) |
+ | |
+ | +--------------------+ |
+ | |sample-VNF VM | |
+ | | | |
+ | | DUT | |
+ | | (VNF) | |
+ | | | |
+ | +--------+ +--------+ |
+ | | VF NIC | | VF NIC | |
+ | +-----+--+--+----+---+ |
+ | ^ ^ |
+ | | | |
+ +----------+ +---------+----------+-------+
+ | | | VF0 VF1 |
+ | | | ^ ^ |
+ | | | | SUT | |
+ | TG | (PF0)<----->(PF0) +---------+ | |
+ | | | | |
+ | | (PF1)<----->(PF1) +--------------------+ |
+ | | | |
+ +----------+ +----------------------------+
+ trafficgen_0 host
+
+
+Host pre-configuration
+++++++++++++++++++++++
+
+.. warning:: The following configuration requires sudo access to the system.
+   Make sure that your user has this access.
+
+Enable the Intel VT-d or AMD-Vi extension in the BIOS. Some system
+manufacturers disable this extension by default.
+
+Activate the Intel VT-d or AMD-Vi extension in the kernel by modifying the GRUB
+config file ``/etc/default/grub``.
+
+For the Intel platform::
+
+ ...
+ GRUB_CMDLINE_LINUX_DEFAULT="intel_iommu=on"
+ ...
+
+For the AMD platform::
+
+ ...
+ GRUB_CMDLINE_LINUX_DEFAULT="amd_iommu=on"
+ ...
+
+Update the grub configuration file and restart the system:
+
+.. warning:: The following command will reboot the system.
+
+.. code:: bash
+
+ sudo update-grub
+ sudo reboot
+
+Make sure the extension has been enabled::
+
+ sudo journalctl -b 0 | grep -e IOMMU -e DMAR
+
+ Feb 06 14:50:14 hostname kernel: ACPI: DMAR 0x000000006C406000 0001E0 (v01 INTEL S2600WF 00000001 INTL 20091013)
+ Feb 06 14:50:14 hostname kernel: DMAR: IOMMU enabled
+ Feb 06 14:50:14 hostname kernel: DMAR: Host address width 46
+ Feb 06 14:50:14 hostname kernel: DMAR: DRHD base: 0x000000d37fc000 flags: 0x0
+ Feb 06 14:50:14 hostname kernel: DMAR: dmar0: reg_base_addr d37fc000 ver 1:0 cap 8d2078c106f0466 ecap f020de
+ Feb 06 14:50:14 hostname kernel: DMAR: DRHD base: 0x000000e0ffc000 flags: 0x0
+ Feb 06 14:50:14 hostname kernel: DMAR: dmar1: reg_base_addr e0ffc000 ver 1:0 cap 8d2078c106f0466 ecap f020de
+ Feb 06 14:50:14 hostname kernel: DMAR: DRHD base: 0x000000ee7fc000 flags: 0x0
+
+.. TODO: Refer to the yardstick installation guide for proxy set up
+
+Setup system proxy (if needed). Add the following configuration into the
+``/etc/environment`` file:
+
+.. note:: The proxy server name/port and IPs should be changed according to
+ actual/current proxy configuration in the lab.
+
+.. code:: bash
+
+ export http_proxy=http://proxy.company.com:port
+ export https_proxy=http://proxy.company.com:port
+ export ftp_proxy=http://proxy.company.com:port
+ export no_proxy=localhost,127.0.0.1,company.com,<IP-OF-HOST1>,<IP-OF-HOST2>,...
+ export NO_PROXY=localhost,127.0.0.1,company.com,<IP-OF-HOST1>,<IP-OF-HOST2>,...
+
+Upgrade the system:
+
+.. code:: bash
+
+ sudo -EH apt-get update
+ sudo -EH apt-get upgrade
+ sudo -EH apt-get dist-upgrade
+
+Install the dependencies needed for DevStack:
+
+.. code:: bash
+
+ sudo -EH apt-get install python python-dev python-pip
+
+Setup SR-IOV ports on the host:
+
+.. note:: ``enp24s0f0`` and ``enp24s0f1`` are physical function (PF) interfaces
+   on the host and ``enp24s0f3`` is a public interface used in OpenStack, so the
+   interface names should be changed according to the HW environment used for
+   testing.
+
+.. code:: bash
+
+ sudo ip link set dev enp24s0f0 up
+ sudo ip link set dev enp24s0f1 up
+ sudo ip link set dev enp24s0f3 up
+
+ # Create VFs on PF
+ echo 2 | sudo tee /sys/class/net/enp24s0f0/device/sriov_numvfs
+ echo 2 | sudo tee /sys/class/net/enp24s0f1/device/sriov_numvfs
+
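+The created VFs can be verified, e.g.::
+
+   lspci | grep -i "virtual function"
+   ip link show enp24s0f0
+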
+
+DevStack installation
++++++++++++++++++++++
+
+If you want to try out NSB, but don't have OpenStack set-up, you can use
+`Devstack`_ to install OpenStack on a host. Please note, that the
+``stable/pike`` branch of devstack repo should be used during the installation.
+The required ``local.conf`` configuration file is described below.
+
+DevStack configuration file:
+
+.. note:: Update the devstack configuration file by replacing angular brackets
+   with a short description inside.
+
+.. note:: Use ``lspci | grep Ether`` & ``lspci -n | grep <PCI ADDRESS>``
+ commands to get device and vendor id of the virtual function (VF).
+
+.. literalinclude:: code/single-devstack-local.conf
+ :language: ini
+
+Start the devstack installation on a host.
+
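+A possible sequence, assuming the ``stable/pike`` devstack branch and the
+``local.conf`` described above, is::
+
+   git clone https://opendev.org/openstack/devstack -b stable/pike
+   cd devstack
+   # place the local.conf described above in this directory, then
+   ./stack.sh
+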
+TG host configuration
++++++++++++++++++++++
+
+Yardstick automatically installs and configures the TRex traffic generator on
+the TG host based on the provided POD file (see below). However, it is
+recommended to check the compatibility of the installed NIC on the TG server
+with software TRex using the `manual <https://trex-tgn.cisco.com/trex/doc/trex_manual.html>`_.
+
+Run the Sample VNF test case
+++++++++++++++++++++++++++++
+
+There is an example of Sample VNF test case ready to be executed in an
+OpenStack environment with SR-IOV support: ``samples/vnf_samples/nsut/vfw/
+tc_heat_sriov_external_rfc2544_ipv4_1rule_1flow_trex.yaml``.
+
+Install Yardstick using `Install Yardstick (NSB Testing)`_ steps for OpenStack
+context.
+
+Create pod file for TG in the yardstick repo folder located in the yardstick
+container:
+
+.. note:: The ``ip``, ``user``, ``password`` and ``vpci`` fields should be
+   changed according to the HW environment used for the testing. Use the
+   ``lshw -c network -businfo`` command to get the PF PCI address for the
+   ``vpci`` field.
+
+.. literalinclude:: code/single-yardstick-pod.conf
+ :language: ini
+
+Run the Sample vFW RFC2544 SR-IOV TC (``samples/vnf_samples/nsut/vfw/
+tc_heat_sriov_external_rfc2544_ipv4_1rule_1flow_64B_trex.yaml``) in the heat
+context using steps described in `NS testing - using yardstick CLI`_ section.
+
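+For example, from inside the Yardstick container::
+
+   yardstick -d task start \
+       samples/vnf_samples/nsut/vfw/tc_heat_sriov_external_rfc2544_ipv4_1rule_1flow_64B_trex.yaml
+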
+
+Multi node OpenStack TG and VNF setup (two nodes)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. code-block:: console
+
+ +----------------------------+ +----------------------------+
+ |OpenStack(DevStack) | |OpenStack(DevStack) |
+ | | | |
+ | +--------------------+ | | +--------------------+ |
+ | |sample-VNF VM | | | |sample-VNF VM | |
+ | | | | | | | |
+ | | TG | | | | DUT | |
+ | | trafficgen_0 | | | | (VNF) | |
+ | | | | | | | |
+ | +--------+ +--------+ | | +--------+ +--------+ |
+ | | VF NIC | | VF NIC | | | | VF NIC | | VF NIC | |
+ | +----+---+--+----+---+ | | +-----+--+--+----+---+ |
+ | ^ ^ | | ^ ^ |
+ | | | | | | | |
+ +--------+-----------+-------+ +---------+----------+-------+
+ | VF0 VF1 | | VF0 VF1 |
+ | ^ ^ | | ^ ^ |
+ | | SUT2 | | | | SUT1 | |
+ | | +-------+ (PF0)<----->(PF0) +---------+ | |
+ | | | | | |
+ | +-------------------+ (PF1)<----->(PF1) +--------------------+ |
+ | | | |
+ +----------------------------+ +----------------------------+
+ host2 (compute) host1 (controller)
+
+
+Controller/Compute pre-configuration
+++++++++++++++++++++++++++++++++++++
+
+Pre-configuration of the controller and compute hosts is the same as described
+in the `Host pre-configuration`_ section.
+
+DevStack configuration
+++++++++++++++++++++++
+
+A reference ``local.conf`` for deploying OpenStack in a multi-host environment
+using `Devstack`_ is shown in this section. The ``stable/pike`` branch of
+devstack repo should be used during the installation.
+
+.. note:: Update the devstack configuration files by replacing angular brackets
+   with a short description inside.
+
+.. note:: Use ``lspci | grep Ether`` & ``lspci -n | grep <PCI ADDRESS>``
+ commands to get device and vendor id of the virtual function (VF).
+
+DevStack configuration file for controller host:
+
+.. literalinclude:: code/multi-devstack-controller-local.conf
+ :language: ini
+
+DevStack configuration file for compute host:
+
+.. literalinclude:: code/multi-devstack-compute-local.conf
+ :language: ini
+
+Start the devstack installation on the controller and compute hosts.
+
+Run the sample vFW TC
++++++++++++++++++++++
+
+Install Yardstick using `Install Yardstick (NSB Testing)`_ steps for OpenStack
+context.
+
+Run the sample vFW RFC2544 SR-IOV test case
+(``samples/vnf_samples/nsut/vfw/tc_heat_rfc2544_ipv4_1rule_1flow_64B_trex.yaml``)
+in the heat context using steps described in
+`NS testing - using yardstick CLI`_ section and the following Yardstick command
+line arguments:
+
+.. code:: bash
+
+ yardstick -d task start --task-args='{"provider": "sriov"}' \
+ samples/vnf_samples/nsut/vfw/tc_heat_rfc2544_ipv4_1rule_1flow_64B_trex.yaml
+
+
+Enabling other Traffic generators
+---------------------------------
+
+IxLoad
+^^^^^^
+
+1. Software needed: IxLoadAPI ``<IxLoadTclApi verson>Linux64.bin.tgz`` and
+ ``<IxOS version>Linux64.bin.tar.gz`` (Download from ixia support site)
+ Install - ``<IxLoadTclApi verson>Linux64.bin.tgz`` and
+ ``<IxOS version>Linux64.bin.tar.gz``
+ If the installation was not done inside the container, after installing
+ the IXIA client, check ``/opt/ixia/ixload/<ver>/bin/ixloadpython`` and make
+ sure you can run this cmd inside the yardstick container. Usually user is
+ required to copy or link ``/opt/ixia/python/<ver>/bin/ixiapython`` to
+ ``/usr/bin/ixiapython<ver>`` inside the container.
+
+2. Update ``pod_ixia.yaml`` file with ixia details.
+
+ .. code-block:: console
+
+ cp <repo>/etc/yardstick/nodes/pod.yaml.nsb.sample.ixia \
+ etc/yardstick/nodes/pod_ixia.yaml
+
+ Config ``pod_ixia.yaml``
+
+ .. literalinclude:: code/pod_ixia.yaml
+ :language: yaml
+
+   For SR-IOV / OVS-DPDK pod files, please refer to `Standalone Virtualization`_
+   for the OVS-DPDK / SR-IOV configuration.
+
+3. Start IxOS TCL Server (Install 'Ixia IxExplorer IxOS <version>')
+ You will also need to configure the IxLoad machine to start the IXIA
+ IxosTclServer. This can be started like so:
+
+ * Connect to the IxLoad machine using RDP
+ * Go to:
+ ``Start->Programs->Ixia->IxOS->IxOS 8.01-GA-Patch1->Ixia Tcl Server IxOS 8.01-GA-Patch1``
+ or
+ ``C:\Program Files (x86)\Ixia\IxOS\8.01-GA-Patch1\ixTclServer.exe``
+
+4. Create a folder ``Results`` in ``C:\`` and share the folder on the network.
+
+5. Execute the testcase in the samplevnf folder, e.g.
+ ``<repo>/samples/vnf_samples/nsut/vfw/tc_baremetal_http_ixload_1b_Requests-65000_Concurrency.yaml``
+
+IxNetwork
+^^^^^^^^^
+
+IxNetwork testcases use IxNetwork API Python Bindings module, which is
+installed as part of the requirements of the project.
+
+1. Update ``pod_ixia.yaml`` file with ixia details.
+
+ .. code-block:: console
+
+ cp <repo>/etc/yardstick/nodes/pod.yaml.nsb.sample.ixia \
+ etc/yardstick/nodes/pod_ixia.yaml
+
+ Configure ``pod_ixia.yaml``
+
+ .. literalinclude:: code/pod_ixia.yaml
+ :language: yaml
+
+   For SR-IOV / OVS-DPDK pod files, please refer to `Standalone Virtualization`_
+   above for the OVS-DPDK / SR-IOV configuration.
+
+2. Start IxNetwork TCL Server
+ You will also need to configure the IxNetwork machine to start the IXIA
+ IxNetworkTclServer. This can be started like so:
+
+ * Connect to the IxNetwork machine using RDP
+ * Go to:
+ ``Start->Programs->Ixia->IxNetwork->IxNetwork 7.21.893.14 GA->IxNetworkTclServer``
+ (or ``IxNetworkApiServer``)
+
+3. Execute the testcase in the samplevnf folder, e.g.
+ ``<repo>/samples/vnf_samples/nsut/vfw/tc_baremetal_rfc2544_ipv4_1rule_1flow_64B_ixia.yaml``
+
+Spirent Landslide
+-----------------
+
+In order to use Spirent Landslide for vEPC testcases, some dependencies have
+to be preinstalled and properly configured.
+
+- Java
+
+ 32-bit Java installation is required for the Spirent Landslide TCL API.
+
+ | ``$ sudo apt-get install openjdk-8-jdk:i386``
+
+ .. important::
+      Make sure ``LD_LIBRARY_PATH`` points to the 32-bit JRE. For more details
+      check the `Linux Troubleshooting <http://TAS_HOST_IP/tclapiinstall.html#trouble>`_
+      section of the installation instructions.
+
+- LsApi (Tcl API module)
+
+ Follow Landslide documentation for detailed instructions on Linux
+ installation of Tcl API and its dependencies
+ ``http://TAS_HOST_IP/tclapiinstall.html``.
+ For working with LsApi Python wrapper only steps 1-5 are required.
+
+ .. note:: After installation make sure your API home path is included in
+ ``PYTHONPATH`` environment variable.
+
+ .. important::
+ The current version of LsApi module has an issue with reading LD_LIBRARY_PATH.
+ For LsApi module to initialize correctly following lines (184-186) in
+ lsapi.py
+
+ .. code-block:: python
+
+ ldpath = os.environ.get('LD_LIBRARY_PATH', '')
+ if ldpath == '':
+ environ['LD_LIBRARY_PATH'] = environ['LD_LIBRARY_PATH'] + ':' + ldpath
+
+ should be changed to:
+
+ .. code-block:: python
+
+ ldpath = os.environ.get('LD_LIBRARY_PATH', '')
+ if not ldpath == '':
+ environ['LD_LIBRARY_PATH'] = environ['LD_LIBRARY_PATH'] + ':' + ldpath
+
+.. note:: The Spirent Landslide TCL software package needs to be updated in
+   case the user upgrades to a new version of the Spirent Landslide software.
diff --git a/docs/testing/user/userguide/13-nsb_operation.rst b/docs/testing/user/userguide/13-nsb_operation.rst
deleted file mode 100644
index 8c477fa3f..000000000
--- a/docs/testing/user/userguide/13-nsb_operation.rst
+++ /dev/null
@@ -1,270 +0,0 @@
-.. This work is licensed under a Creative Commons Attribution 4.0 International
-.. License.
-.. http://creativecommons.org/licenses/by/4.0
-.. (c) OPNFV, 2016-2017 Intel Corporation.
-
-Yardstick - NSB Testing - Operation
-===================================
-
-Abstract
---------
-
-NSB test configuration and OpenStack setup requirements
-
-
-OpenStack Network Configuration
--------------------------------
-
-NSB requires certain OpenStack deployment configurations.
-For optimal VNF characterization using external traffic generators NSB requires
-provider/external networks.
-
-
-Provider networks
-^^^^^^^^^^^^^^^^^
-
-The VNFs require a clear L2 connect to the external network in order to generate
-realistic traffic from multiple address ranges and port
-
-In order to prevent Neutron from filtering traffic we have to disable Neutron Port Security.
-We also disable DHCP on the data ports because we are binding the ports to DPDK and do not need
-DHCP addresses. We also disable gateways because multiple default gateways can prevent SSH access
-to the VNF from the floating IP. We only want a gateway on the mgmt network
-
-.. code-block:: yaml
-
- uplink_0:
- cidr: '10.1.0.0/24'
- gateway_ip: 'null'
- port_security_enabled: False
- enable_dhcp: 'false'
-
-Heat Topologies
-^^^^^^^^^^^^^^^
-
-By default Heat will attach every node to every Neutron network that is created.
-For scale-out tests we do not want to attach every node to every network.
-
-For each node you can specify which ports are on which network using the
-network_ports dictionary.
-
-In this example we have ``TRex xe0 <-> xe0 VNF xe1 <-> xe0 UDP_Replay``
-
-.. code-block:: yaml
-
- vnf_0:
- floating_ip: true
- placement: "pgrp1"
- network_ports:
- mgmt:
- - mgmt
- uplink_0:
- - xe0
- downlink_0:
- - xe1
- tg_0:
- floating_ip: true
- placement: "pgrp1"
- network_ports:
- mgmt:
- - mgmt
- uplink_0:
- - xe0
- # Trex always needs two ports
- uplink_1:
- - xe1
- tg_1:
- floating_ip: true
- placement: "pgrp1"
- network_ports:
- mgmt:
- - mgmt
- downlink_0:
- - xe0
-
-Collectd KPIs
--------------
-
-NSB can collect KPIs from collected. We have support for various plugins enabled by the
-Barometer project.
-
-The default yardstick-samplevnf has collectd installed. This allows for collecting KPIs
-from the VNF.
-
-Collecting KPIs from the NFVi is more complicated and requires manual setup.
-We assume that collectd is not installed on the compute nodes.
-
-To collectd KPIs from the NFVi compute nodes:
-
-
- * install_collectd on the compute nodes
- * create pod.yaml for the compute nodes
- * enable specific plugins depending on the vswitch and DPDK
-
- example pod.yaml section for Compute node running collectd.
-
-.. code-block:: yaml
-
- -
- name: "compute-1"
- role: Compute
- ip: "10.1.2.3"
- user: "root"
- ssh_port: "22"
- password: ""
- collectd:
- interval: 5
- plugins:
- # for libvirtd stats
- virt: {}
- intel_pmu: {}
- ovs_stats:
- # path to OVS socket
- ovs_socket_path: /var/run/openvswitch/db.sock
- intel_rdt: {}
-
-
-
-Scale-Up
-------------------
-
-VNFs performance data with scale-up
-
- * Helps to figure out optimal number of cores specification in the Virtual Machine template creation or VNF
- * Helps in comparison between different VNF vendor offerings
- * Better the scale-up index, indicates the performance scalability of a particular solution
-
-Heat
-^^^^
-
-For VNF scale-up tests we increase the number for VNF worker threads. In the case of VNFs
-we also need to increase the number of VCPUs and memory allocated to the VNF.
-
-An example scale-up Heat testcase is:
-
-.. code-block:: console
-
- <repo>/samples/vnf_samples/nsut/acl/tc_heat_rfc2544_ipv4_1rule_1flow_64B_trex_scale_up.yaml
-
-This testcase template requires specifying the number of VCPUs and Memory.
-We set the VCPUs and memory using the --task-args options
-
-.. code-block:: console
-
- yardstick --debug task start --task-args='{"mem": 20480, "vcpus": 10}' samples/vnf_samples/nsut/acl/tc_heat_rfc2544_ipv4_1rule_1flow_64B_trex_scale_up.yaml
-
-
-Baremetal
-^^^^^^^^^
- 1. Follow above traffic generator section to setup.
- 2. edit num of threads in ``<repo>/samples/vnf_samples/nsut/vfw/tc_baremetal_rfc2544_ipv4_1rule_1flow_64B_trex_scale_up.yaml``
-
- e.g, 6 Threads for given VNF
-
-.. code-block:: yaml
-
-
- schema: yardstick:task:0.1
- scenarios:
- {% for worker_thread in [1, 2 ,3 , 4, 5, 6] %}
- - type: NSPerf
- traffic_profile: ../../traffic_profiles/ipv4_throughput.yaml
- topology: vfw-tg-topology.yaml
- nodes:
- tg__0: trafficgen_1.yardstick
- vnf__0: vnf.yardstick
- options:
- framesize:
- uplink: {64B: 100}
- downlink: {64B: 100}
- flow:
- src_ip: [{'tg__0': 'xe0'}]
- dst_ip: [{'tg__0': 'xe1'}]
- count: 1
- traffic_type: 4
- rfc2544:
- allowed_drop_rate: 0.0001 - 0.0001
- vnf__0:
- rules: acl_1rule.yaml
- vnf_config: {lb_config: 'HW', lb_count: 1, worker_config: '1C/1T', worker_threads: {{worker_thread}}}
- nfvi_enable: True
- runner:
- type: Iteration
- iterations: 10
- interval: 35
- {% endfor %}
- context:
- type: Node
- name: yardstick
- nfvi_type: baremetal
- file: /etc/yardstick/nodes/pod.yaml
-
-Scale-Out
---------------------
-
-VNFs performance data with scale-out
-
- * Helps in capacity planning to meet the given network node requirements
- * Helps in comparison between different VNF vendor offerings
- * Better the scale-out index, provides the flexibility in meeting future capacity requirements
-
-
-Standalone
-^^^^^^^^^^
-
-Scale-out not supported on Baremetal.
-
-1. Follow above traffic generator section to setup.
-2. Generate testcase for standalone virtualization using ansible scripts
-
- .. code-block:: console
-
- cd <repo>/ansible
- trex: standalone_ovs_scale_out_trex_test.yaml or standalone_sriov_scale_out_trex_test.yaml
- ixia: standalone_ovs_scale_out_ixia_test.yaml or standalone_sriov_scale_out_ixia_test.yaml
- ixia_correlated: standalone_ovs_scale_out_ixia_correlated_test.yaml or standalone_sriov_scale_out_ixia_correlated_test.yaml
-
- update the ovs_dpdk or sriov above Ansible scripts reflect the setup
-
-3. run the test
-
- .. code-block:: console
-
- <repo>/samples/vnf_samples/nsut/tc_sriov_vfw_udp_ixia_correlated_scale_out-1.yaml
- <repo>/samples/vnf_samples/nsut/tc_sriov_vfw_udp_ixia_correlated_scale_out-2.yaml
-
-Heat
-^^^^
-
-There are sample scale-out all-VM Heat tests. These tests only use VMs and don't use external traffic.
-
-The tests use UDP_Replay and correlated traffic.
-
-.. code-block:: console
-
- <repo>/samples/vnf_samples/nsut/cgnapt/tc_heat_rfc2544_ipv4_1flow_64B_trex_correlated_scale_4.yaml
-
-To run the test you need to increase OpenStack CPU, Memory and Port quotas.
-
-
-Traffic Generator tuning
-------------------------
-
-The TRex traffic generator can be setup to use multiple threads per core, this is for multiqueue testing.
-
-TRex does not automatically enable multiple threads because we currently cannot detect the number of queues on a device.
-
-To enable multiple queue set the queues_per_port value in the TG VNF options section.
-
-.. code-block:: yaml
-
- scenarios:
- - type: NSPerf
- nodes:
- tg__0: tg_0.yardstick
-
- options:
- tg_0:
- queues_per_port: 2
-
-
diff --git a/docs/testing/user/userguide/14-nsb-operation.rst b/docs/testing/user/userguide/14-nsb-operation.rst
new file mode 100644
index 000000000..1f9e4d4c6
--- /dev/null
+++ b/docs/testing/user/userguide/14-nsb-operation.rst
@@ -0,0 +1,706 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, 2016-2019 Intel Corporation.
+..
+ Convention for heading levels in Yardstick documentation:
+
+ ======= Heading 0 (reserved for the title in a document)
+ ------- Heading 1
+ ^^^^^^^ Heading 2
+ +++++++ Heading 3
+ ''''''' Heading 4
+
+ Avoid deeper levels because they do not render well.
+
+Yardstick - NSB Testing - Operation
+===================================
+
+Abstract
+--------
+
+NSB test configuration and OpenStack setup requirements
+
+
+OpenStack Network Configuration
+-------------------------------
+
+NSB requires certain OpenStack deployment configurations.
+For optimal VNF characterization using external traffic generators NSB requires
+provider/external networks.
+
+
+Provider networks
+^^^^^^^^^^^^^^^^^
+
+The VNFs require a clear L2 connection to the external network in order to
+generate realistic traffic from multiple address ranges and ports.
+
+In order to prevent Neutron from filtering traffic we have to disable Neutron
+Port Security. We also disable DHCP on the data ports because we are binding
+the ports to DPDK and do not need DHCP addresses. We also disable gateways
+because multiple default gateways can prevent SSH access to the VNF from the
+floating IP. We only want a gateway on the mgmt network:
+
+.. code-block:: yaml
+
+ uplink_0:
+ cidr: '10.1.0.0/24'
+ gateway_ip: 'null'
+ port_security_enabled: False
+ enable_dhcp: 'false'
+
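+Once the Heat stack is created, the effect of these settings can be verified
+with the OpenStack CLI; the exact network and port names depend on the context
+and stack naming, so they are shown here only as placeholders::
+
+   openstack network list | grep uplink_0
+   openstack port list --network <uplink_0 network id> -c ID -c Name
+   openstack port show <port ID> -c port_security_enabled
+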
+Heat Topologies
+^^^^^^^^^^^^^^^
+
+By default Heat will attach every node to every Neutron network that is
+created. For scale-out tests we do not want to attach every node to every
+network.
+
+For each node you can specify which ports are on which network using the
+network_ports dictionary.
+
+In this example we have ``TRex xe0 <-> xe0 VNF xe1 <-> xe0 UDP_Replay``
+
+.. code-block:: yaml
+
+ vnf_0:
+ floating_ip: true
+ placement: "pgrp1"
+ network_ports:
+ mgmt:
+ - mgmt
+ uplink_0:
+ - xe0
+ downlink_0:
+ - xe1
+ tg_0:
+ floating_ip: true
+ placement: "pgrp1"
+ network_ports:
+ mgmt:
+ - mgmt
+ uplink_0:
+ - xe0
+ # Trex always needs two ports
+ uplink_1:
+ - xe1
+ tg_1:
+ floating_ip: true
+ placement: "pgrp1"
+ network_ports:
+ mgmt:
+ - mgmt
+ downlink_0:
+ - xe0
+
+
+Availability zone
+^^^^^^^^^^^^^^^^^
+
+Configuring the availability zone is required when the exact compute host or
+group of compute hosts needs to be specified for the :term:`SampleVNF` or
+traffic generator in the Heat test case. If this is the case, please follow
+the instructions below.
+
+.. _`Create a host aggregate`:
+
+1. Create a host aggregate in the OpenStack and add the available compute hosts
+ into the aggregate group.
+
+ .. note:: Change the ``<AZ_NAME>`` (availability zone name), ``<AGG_NAME>``
+     (host aggregate name) and ``<HOST>`` (host name of one of the compute
+     nodes) in the commands below.
+
+ .. code-block:: bash
+
+ # create host aggregate
+ openstack aggregate create --zone <AZ_NAME> \
+ --property availability_zone=<AZ_NAME> <AGG_NAME>
+ # show available hosts
+ openstack compute service list --service nova-compute
+ # add selected host into the host aggregate
+ openstack aggregate add host <AGG_NAME> <HOST>
+
+2. To specify the OpenStack location (the exact compute host or group of
+   hosts) of the SampleVNF or traffic generator in the Heat test case, the
+   ``availability_zone`` server configuration option should be used. For example:
+
+ .. note:: The ``<AZ_NAME>`` (availability zone name) should be changed according
+ to the name used during the host aggregate creation steps above.
+
+ .. code-block:: yaml
+
+ context:
+ name: yardstick
+ image: yardstick-samplevnfs
+ ...
+ servers:
+ vnf_0:
+ ...
+ availability_zone: <AZ_NAME>
+ ...
+ tg__0:
+ ...
+ availability_zone: <AZ_NAME>
+ ...
+ networks:
+ ...
+
+There are two examples of SampleVNF scale-out test cases which use the
+``availability zone`` feature to specify the exact location of scaled VNFs and
+traffic generators.
+
+Those are:
+
+.. code-block:: console
+
+ <repo>/samples/vnf_samples/nsut/prox/tc_prox_heat_context_l2fwd_multiflow-2-scale-out.yaml
+ <repo>/samples/vnf_samples/nsut/vfw/tc_heat_rfc2544_ipv4_1rule_1flow_64B_trex_scale_out.yaml
+
+.. note:: This section describes the PROX scale-out testcase, but the same
+ procedure is used for the vFW test case.
+
+1. Before running the scale-out test case, make sure the host aggregates are
+ configured in the OpenStack environment. To check this, run the following
+ command:
+
+ .. code-block:: console
+
+ # show configured host aggregates (example)
+ openstack aggregate list
+ +----+------+-------------------+
+ | ID | Name | Availability Zone |
+ +----+------+-------------------+
+ | 4 | agg0 | AZ_NAME_0 |
+ | 5 | agg1 | AZ_NAME_1 |
+ +----+------+-------------------+
+
+2. If no host aggregates are configured, please follow the instructions to
+   `Create a host aggregate`_.
+
+
+3. Run the SampleVNF PROX scale-out test case, specifying the
+ ``availability zone`` of each VNF and traffic generator as task arguments.
+
+   .. note:: The ``az_0`` and ``az_1`` should be changed according to the host
+     aggregates created in OpenStack.
+
+ .. code-block:: console
+
+ yardstick -d task start \
+ <repo>/samples/vnf_samples/nsut/prox/tc_prox_heat_context_l2fwd_multiflow-2-scale-out.yaml\
+ --task-args='{
+ "num_vnfs": 4, "availability_zone": {
+ "vnf_0": "az_0", "tg_0": "az_1",
+ "vnf_1": "az_0", "tg_1": "az_1",
+ "vnf_2": "az_0", "tg_2": "az_1",
+ "vnf_3": "az_0", "tg_3": "az_1"
+ }
+ }'
+
+   ``num_vnfs`` specifies how many VNFs are going to be deployed in the
+   ``heat`` contexts. The ``vnf_X`` and ``tg_X`` arguments configure the
+   availability zone where each VNF and traffic generator will be deployed.
+
+
+Collectd KPIs
+-------------
+
+NSB can collect KPIs from collectd. We have support for various plugins
+enabled by the :term:`Barometer` project.
+
+The default yardstick-samplevnf image has collectd installed. This allows
+KPIs to be collected from the VNF.
+
+Collecting KPIs from the NFVi is more complicated and requires manual setup.
+We assume that collectd is not installed on the compute nodes.
+
+To collect KPIs from the NFVi compute nodes:
+
+ * install collectd on the compute nodes
+ * create a ``pod.yaml`` for the compute nodes
+ * enable specific plugins depending on the vswitch and DPDK
+
+ Example ``pod.yaml`` section for a compute node running collectd:
+
+.. code-block:: yaml
+
+ -
+ name: "compute-1"
+ role: Compute
+ ip: "10.1.2.3"
+ user: "root"
+ ssh_port: "22"
+ password: ""
+ collectd:
+ interval: 5
+ plugins:
+ # for libvirtd stats
+ virt: {}
+ intel_pmu: {}
+ ovs_stats:
+ # path to OVS socket
+ ovs_socket_path: /var/run/openvswitch/db.sock
+ intel_rdt: {}
+
+
+
+Scale-Up
+--------
+
+VNF performance data with scale-up
+
+ * helps to figure out the optimal number of cores to specify in the Virtual
+   Machine template creation or in the VNF
+ * helps in comparison between different VNF vendor offerings
+ * a better scale-up index indicates better performance scalability of a
+   particular solution
+
+Heat
+^^^^
+For VNF scale-up tests we increase the number of VNF worker threads. In the
+case of VNFs we also need to increase the number of VCPUs and memory allocated
+to the VNF.
+
+An example scale-up Heat testcase is:
+
+.. literalinclude:: /../samples/vnf_samples/nsut/vfw/tc_heat_rfc2544_ipv4_1rule_1flow_trex_scale-up.yaml
+ :language: yaml+jinja
+
+This test case template requires specifying the number of VCPUs, memory and
+ports. We set the VCPUs and memory using the ``--task-args`` option:
+
+.. code-block:: console
+
+ yardstick task start --task-args='{"mem": 10480, "vcpus": 4, "vports": 2}' \
+ samples/vnf_samples/nsut/vfw/tc_heat_rfc2544_ipv4_1rule_1flow_64B_trex_scale-up.yaml
+
+In order to support ports scale-up, traffic and topology templates need to be
+used in the test case.
+
+An example topology template is:
+
+.. literalinclude:: /../samples/vnf_samples/nsut/vfw/vfw-tg-topology-scale-up.yaml
+ :language: yaml+jinja
+
+This template has ``vports`` as an argument. To pass this argument it needs to
+be configured in the ``extra_args`` scenario definition. Please note that more
+arguments can be defined in that section. All of them will be passed to the
+topology and traffic profile templates.
+
+For example:
+
+.. code-block:: yaml
+
+ schema: yardstick:task:0.1
+ scenarios:
+ - type: NSPerf
+ traffic_profile: ../../traffic_profiles/ipv4_throughput-scale-up.yaml
+ extra_args:
+ vports: {{ vports }}
+ topology: vfw-tg-topology-scale-up.yaml
+
+An example traffic profile template is:
+
+.. literalinclude:: /../samples/vnf_samples/traffic_profiles/ipv4_throughput-scale-up.yaml
+ :language: yaml+jinja
+
+There is an option to provide a predefined config for SampleVNFs. The path to
+the config file may be specified in the ``vnf_config`` scenario section.
+
+.. code-block:: yaml
+
+ vnf__0:
+ rules: acl_1rule.yaml
+ vnf_config: {lb_config: 'SW', file: vfw_vnf_pipeline_cores_4_ports_2_lb_1_sw.conf }
+
+
+Baremetal
+^^^^^^^^^
+ 1. Follow the traffic generator section above to complete the setup.
+ 2. Edit the number of threads in
+    ``<repo>/samples/vnf_samples/nsut/vfw/tc_baremetal_rfc2544_ipv4_trex_scale_up.yaml``,
+    e.g. 6 threads for the given VNF:
+
+.. code-block:: yaml+jinja
+
+ schema: yardstick:task:0.1
+ scenarios:
+ {% for worker_thread in [1, 2 ,3 , 4, 5, 6] %}
+ - type: NSPerf
+ traffic_profile: ../../traffic_profiles/ipv4_throughput.yaml
+ topology: vfw-tg-topology.yaml
+ nodes:
+ tg__0: trafficgen_0.yardstick
+ vnf__0: vnf_0.yardstick
+ options:
+ framesize:
+ uplink: {64B: 100}
+ downlink: {64B: 100}
+ flow:
+ src_ip: [{'tg__0': 'xe0'}]
+ dst_ip: [{'tg__0': 'xe1'}]
+ count: 1
+ traffic_type: 4
+ rfc2544:
+ allowed_drop_rate: 0.0001 - 0.0001
+ vnf__0:
+ rules: acl_1rule.yaml
+ vnf_config: {lb_config: 'HW', lb_count: 1, worker_config: '1C/1T', worker_threads: {{worker_thread}}}
+ nfvi_enable: True
+ runner:
+ type: Iteration
+ iterations: 10
+ interval: 35
+ {% endfor %}
+ context:
+ type: Node
+ name: yardstick
+ nfvi_type: baremetal
+ file: /etc/yardstick/nodes/pod.yaml
+
+Scale-Out
+---------
+
+VNF performance data with scale-out helps
+
+ * capacity planning to meet the given network node requirements
+ * comparison between different VNF vendor offerings
+ * a better scale-out index provides more flexibility in meeting future
+   capacity requirements
+
+
+Standalone
+^^^^^^^^^^
+
+Scale-out is not supported on baremetal.
+
+1. Follow the traffic generator section above to complete the setup.
+2. Generate the test case for standalone virtualization using the Ansible
+   scripts:
+
+ .. code-block:: console
+
+ cd <repo>/ansible
+ trex: standalone_ovs_scale_out_test.yaml or standalone_sriov_scale_out_test.yaml
+ ixia: standalone_ovs_scale_out_ixia_test.yaml or standalone_sriov_scale_out_ixia_test.yaml
+ ixia_correlated: standalone_ovs_scale_out_ixia_correlated_test.yaml or standalone_sriov_scale_out_ixia_correlated_test.yaml
+
+ Update the ovs_dpdk or sriov Ansible scripts above to reflect the setup.
+
+3. Run the test (an example invocation is given after this list):
+
+ .. code-block:: console
+
+ <repo>/samples/vnf_samples/nsut/tc_sriov_vfw_udp_ixia_correlated_scale_out-1.yaml
+ <repo>/samples/vnf_samples/nsut/tc_sriov_vfw_udp_ixia_correlated_scale_out-2.yaml
+
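+The generated test cases are then started like any other NSB test case; a
+typical invocation (illustrative only) is:
+
+.. code-block:: console
+
+   yardstick -d task start \
+     <repo>/samples/vnf_samples/nsut/tc_sriov_vfw_udp_ixia_correlated_scale_out-1.yaml
+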
+Heat
+^^^^
+
+There are sample scale-out all-VM Heat tests. These tests only use VMs and
+don't use external traffic.
+
+The tests use UDP_Replay and correlated traffic.
+
+.. code-block:: console
+
+ <repo>/samples/vnf_samples/nsut/cgnapt/tc_heat_rfc2544_ipv4_1flow_64B_trex_correlated_scale_4.yaml
+
+To run the test you need to increase OpenStack CPU, Memory and Port quotas.
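+
+For example, assuming the tests run in the ``admin`` project, the quotas could
+be raised with commands of this form (the values are illustrative only):
+
+.. code-block:: console
+
+   openstack quota set --cores 64 --ram 204800 --ports 64 admin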
+
+
+Traffic Generator tuning
+------------------------
+
+The TRex traffic generator can be set up to use multiple threads per core;
+this is for multiqueue testing.
+
+TRex does not automatically enable multiple threads because we currently cannot
+detect the number of queues on a device.
+
+To enable multiple queues, set the ``queues_per_port`` value in the TG VNF
+options section.
+
+.. code-block:: yaml
+
+ scenarios:
+ - type: NSPerf
+ nodes:
+ tg__0: trafficgen_0.yardstick
+
+ options:
+ tg_0:
+ queues_per_port: 2
+
+
+Standalone configuration
+------------------------
+
+NSB supports certain Standalone deployment configurations.
+Standalone supports provisioning a VM in a standalone virtualised environment
+using KVM/QEMU. There are two types of Standalone contexts available: OVS-DPDK
+and SRIOV. OVS-DPDK uses an OVS network with DPDK drivers. SRIOV enables
+network traffic to bypass the software switch layer of the hypervisor stack.
+
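+The standalone context type is selected in the test case ``contexts`` section.
+The sketch below is illustrative only; the node ``file``, flavor, servers and
+networks entries are placeholders that depend on the actual setup and are
+described in the NSB installation guide.
+
+.. code-block:: yaml
+
+   contexts:
+     - type: StandaloneSriov   # or StandaloneOvsDpdk for the OVS-DPDK variant
+       file: /etc/yardstick/nodes/standalone/pod.yaml
+       name: yardstick
+       vm_deploy: True
+       flavor:
+         ...
+       servers:
+         vnf_0:
+           ...
+       networks:
+         ...
+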
+Emulated machine type
+^^^^^^^^^^^^^^^^^^^^^
+
+For better performance test results of an emulated VM spawned by the Yardstick
+SA context (OVS-DPDK/SRIOV), it may be important to control the emulated
+machine type used by the QEMU emulator. This attribute can be configured via
+the TC definition, in the ``contexts`` section under the ``extra_specs``
+configuration.
+
+For example:
+
+.. code-block:: yaml
+
+ contexts:
+ ...
+ - type: StandaloneSriov
+ ...
+ flavor:
+ ...
+ extra_specs:
+ ...
+ machine_type: pc-i440fx-bionic
+
+Here ``machine_type`` can be set to one of the emulated machine types
+supported by QEMU running on the SUT platform. To get the full list of
+supported emulated machine types, the following command can be used on the
+target SUT host.
+
+.. code-block:: console
+
+ # qemu-system-x86_64 -machine ?
+
+By default, the ``machine_type`` option is set to ``pc-i440fx-xenial``, which
+is suitable for running an Ubuntu 16.04 VM image. So, if this type is not
+supported by the target platform, or another VM image is used for the
+standalone (SA) context VM (e.g. a ``bionic`` image for Ubuntu 18.04), this
+configuration should be changed accordingly.
+
+Standalone with OVS-DPDK
+^^^^^^^^^^^^^^^^^^^^^^^^
+
+The SampleVNF image is spawned in a VM on a baremetal server.
+OVS with DPDK is installed on the baremetal server.
+
+.. note:: Ubuntu 17.10 requires DPDK v17.05 or higher; DPDK v17.05 requires OVS v2.8.0.
+
+Default values for OVS-DPDK (overridable as sketched after this list):
+
+ * queues: 4
+ * lcore_mask: ""
+ * pmd_cpu_mask: "0x6"
+
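+A hedged sketch of how these values can be overridden via the context
+``ovs_properties`` section (the OVS/DPDK version numbers below are examples
+and must match what is installed on the host):
+
+.. code-block:: yaml
+
+   contexts:
+     - type: StandaloneOvsDpdk
+       ...
+       ovs_properties:
+         version:
+           ovs: 2.8.0
+           dpdk: 17.05.2
+         pmd_threads: 2
+         queues: 4
+         lcore_mask: ""
+         pmd_cpu_mask: "0x6"
+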
+Sample test case file
+^^^^^^^^^^^^^^^^^^^^^
+
+1. Prepare the SampleVNF image and copy it to ``flavor/images``.
+2. Prepare context files for TRex and SampleVNF under ``contexts/file``.
+3. Add a bridge named ``br-int`` to the baremetal host where the SampleVNF
+   image is deployed.
+4. Modify ``networks/phy_port`` according to the baremetal setup.
+5. Run the test from:
+
+.. literalinclude:: /../samples/vnf_samples/nsut/acl/tc_ovs_rfc2544_ipv4_1rule_1flow_trex.yaml
+ :language: yaml+jinja
+
+Preparing test run of vEPC test case
+------------------------------------
+
+The provided vEPC test cases are examples of emulation of vEPC infrastructure
+components, such as UE, eNodeB, MME, SGW and PGW.
+
+Location of vEPC test cases: ``samples/vnf_samples/nsut/vepc/``.
+
+Before running a specific vEPC test case using NSB, some preconfiguration
+needs to be done.
+
+Update Spirent Landslide TG configuration in pod file
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Examples of ``pod.yaml`` files can be found in
+:file:`etc/yardstick/nodes/standalone`.
+The name of the related pod file can be checked in the context section of the
+NSB test case.
+
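+For example, a context section referencing such a pod file could look like the
+following sketch (the file name here is a placeholder):
+
+.. code-block:: yaml
+
+   context:
+     type: Node
+     name: yardstick
+     file: /etc/yardstick/nodes/standalone/pod_landslide.yaml
+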
+The ``pod.yaml`` related to the vEPC test case uses some sub-structures that
+hold the details of accessing the Spirent Landslide traffic generator.
+These subsections, and the changes to be made in the provided example pod
+file, are described below; a combined sketch is shown after the list.
+
+1. ``tas_manager``: data under this key holds the information required to
+access Landslide TAS (Test Administration Server) and perform needed
+configurations on it.
+
+ * ``ip``: IP address of TAS Manager node; should be updated according to test
+ setup used
+ * ``super_user``: superuser name; could be retrieved from Landslide documentation
+ * ``super_user_password``: superuser password; could be retrieved from
+ Landslide documentation
+ * ``cfguser_password``: password of predefined user named 'cfguser'; default
+ password could be retrieved from Landslide documentation
+ * ``test_user``: username to be used during test run as a Landslide library
+ name; to be defined by test run operator
+ * ``test_user_password``: password of test user; to be defined by test run
+ operator
+ * ``proto``: *http* or *https*; to be defined by test run operator
+ * ``license``: Landslide license number installed on TAS
+
+2. The ``config`` section holds information about test servers (TSs) and
+systems under test (SUTs). Data is represented as a list of entries.
+Each such entry contains:
+
+ * ``test_server``: this subsection represents data related to test server
+ configuration, such as:
+
+ * ``name``: test server name; unique custom name to be defined by test
+ operator
+ * ``role``: this value is used as a key to bind specific Test Server and
+ TestCase; should be set to one of test types supported by TAS license
+ * ``ip``: Test Server IP address
+ * ``thread_model``: parameter related to Test Server performance mode.
+ The value should be one of the following: "Legacy" | "Max" | "Fireball".
+ Refer to Landslide documentation for details.
+ * ``phySubnets``: a structure used to specify IP ranges reservations on
+ specific network interfaces of related Test Server. Structure fields are:
+
+ * ``base``: start of IP address range
+ * ``mask``: IP range mask in CIDR format
+ * ``name``: network interface name, e.g. *eth1*
+ * ``numIps``: size of IP address range
+
+ * ``preResolvedArpAddress``: a structure used to specify the range of IP
+ addresses for which the ARP responses will be emulated
+
+ * ``StartingAddress``: IP address specifying the start of IP address range
+ * ``NumNodes``: size of the IP address range
+
+ * ``suts``: a structure that contains definitions of each specific SUT
+ (represents a vEPC component). SUT structure contains following key/value
+ pairs:
+
+ * ``name``: unique custom string specifying SUT name
+ * ``role``: string value corresponding with an SUT role specified in the
+ session profile (test session template) file
+     * ``managementIp``: SUT management IP address
+ * ``phy``: network interface name, e.g. *eth1*
+ * ``ip``: vEPC component IP address used in test case topology
+ * ``nextHop``: next hop IP address, to allow for vEPC inter-node communication
+
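+For orientation, the sub-structures described above fit together roughly as in
+the following sketch. All values are placeholders; the example pod files
+shipped in :file:`etc/yardstick/nodes/standalone` remain the authoritative
+reference for the exact layout.
+
+.. code-block:: yaml
+
+   tas_manager:
+     ip: <TAS_MANAGER_IP>
+     super_user: <SUPER_USER>
+     super_user_password: <SUPER_USER_PASSWORD>
+     cfguser_password: <CFGUSER_PASSWORD>
+     test_user: <TEST_USER>
+     test_user_password: <TEST_USER_PASSWORD>
+     proto: https
+     license: <LICENSE_NUMBER>
+   config:
+     - test_server:
+         name: <TS_NAME>
+         role: <TS_ROLE>
+         ip: <TS_IP>
+         thread_model: Legacy
+         phySubnets:
+           - base: <IP_RANGE_START>
+             mask: /24
+             name: eth1
+             numIps: 20
+       preResolvedArpAddress:
+         - StartingAddress: <ARP_RANGE_START>
+           NumNodes: 20
+       suts:
+         - name: <SUT_NAME>
+           role: <SUT_ROLE>
+           managementIp: <SUT_MGMT_IP>
+           phy: eth1
+           ip: <SUT_IP>
+           nextHop: <NEXT_HOP_IP>
+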
+Update NSB test case definitions
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+The NSB test case file designated for vEPC testing contains an example of a
+specific test scenario configuration.
+The test operator may change these definitions as required for the use case
+that needs testing.
+Specifically, the following subsections of the vEPC test case (section
+**scenarios**) may be changed; an illustrative sketch follows the list.
+
+1. Subsection ``options``: contains custom parameters used for vEPC testing
+
+ * subsection ``dmf``: may contain one or more parameters specified in
+ ``traffic_profile`` template file
+ * subsection ``test_cases``: contains re-definitions of parameters specified
+ in ``session_profile`` template file
+
+  .. note:: All parameters in ``session_profile`` whose value is a
+    placeholder need to be re-defined to construct a valid test session.
+
+2. Subsection ``runner``: specifies the test duration and the interval of
+TG and VNF side KPIs polling. For more details, refer to :doc:`03-architecture`.
+
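+As an illustration of the structure described above (key names under ``dmf``
+and ``test_cases`` are hypothetical placeholders; the real ones come from the
+``traffic_profile`` and ``session_profile`` templates referenced by the test
+case):
+
+.. code-block:: yaml
+
+   scenarios:
+   - type: NSPerf
+     ...
+     options:
+       dmf:
+         # any parameter defined in the traffic_profile template, e.g.:
+         transactionRate: 5
+       test_cases:
+         # re-definitions of session_profile placeholder parameters, e.g.:
+         - type: <test_type>
+           <placeholder_parameter>: <value>
+     runner:
+       type: Duration
+       duration: 60
+       interval: 5
+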
+Preparing test run of vPE test case
+-----------------------------------
+The vPE (Provider Edge Router) is a :term:`VNF` approximation
+serving as an edge router. The vPE is approximated using the
+``ip_pipeline`` DPDK application.
+
+ .. image:: /../docs/testing/developer/devguide/images/vPE_Diagram.png
+ :width: 800px
+ :alt: NSB vPE Diagram
+
+The ``vpe_config`` file must be passed as it is not auto generated.
+The ``vpe_script`` defines the rules applied to each of the pipelines. This can be
+auto generated or a file can be passed using the ``script_file`` option in
+``vnf_config`` as shown below. The ``full_tm_profile_file`` option must be
+used if a traffic manager is defined in ``vpe_config``.
+
+.. code-block:: yaml
+
+ vnf_config: { file: './vpe_config/vpe_config_2_ports',
+ action_bulk_file: './vpe_config/action_bulk_512.txt',
+ full_tm_profile_file: './vpe_config/full_tm_profile_10G.cfg',
+ script_file: './vpe_config/vpe_script_sample' }
+
+Test cases for vPE can be found in the ``vnf_samples/nsut/vpe`` directory.
+A test case can be started with the following command, for example:
+
+.. code-block:: bash
+
+ yardstick task start /yardstick/samples/vnf_samples/nsut/vpe/tc_baremetal_rfc2544_ipv4_1flow_64B_ixia.yaml
+
+Preparing test run of vIPSEC test case
+--------------------------------------
+
+Location of vIPSEC test cases: ``samples/vnf_samples/nsut/ipsec/``.
+
+Before running a specific vIPSEC test case using NSB, some dependencies have
+to be preinstalled and properly configured.
+
+- VPP
+
+.. code-block:: console
+
+ export UBUNTU="xenial"
+ export RELEASE=".stable.1810"
+ sudo rm /etc/apt/sources.list.d/99fd.io.list
+ echo "deb [trusted=yes] https://nexus.fd.io/content/repositories/fd.io$RELEASE.ubuntu.$UBUNTU.main/ ./" | sudo tee -a /etc/apt/sources.list.d/99fd.io.list
+ sudo apt-get update
+ sudo apt-get install vpp vpp-lib vpp-plugin vpp-dbg vpp-dev vpp-api-java vpp-api-python vpp-api-lua
+
+- VAT templates
+
+  VAT templates are required for the VPP API.
+
+.. code-block:: console
+
+ mkdir -p /opt/nsb_bin/vpp/templates/
+ echo 'exec trace add dpdk-input 50' > /opt/nsb_bin/vpp/templates/enable_dpdk_traces.vat
+ echo 'exec trace add vhost-user-input 50' > /opt/nsb_bin/vpp/templates/enable_vhost_user_traces.vat
+ echo 'exec trace add memif-input 50' > /opt/nsb_bin/vpp/templates/enable_memif_traces.vat
+ cat > /opt/nsb_bin/vpp/templates/dump_interfaces.vat << EOL
+ sw_interface_dump
+ dump_interface_table
+ quit
+ EOL
+
+
+Preparing test run of vCMTS test case
+-------------------------------------
+
+Location of vCMTS test cases: ``samples/vnf_samples/nsut/cmts/``.
+
+Before running a specific vCMTS test case using NSB, some changes must be
+made to the original vCMTS package.
+
+Allow SSH access to the docker images
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+Follow the documentation at ``https://docs.docker.com/engine/examples/running_ssh_service/``
+to allow SSH access to the Pktgen/vcmts-d containers, whose Dockerfiles are
+located at:
+
+* ``$VCMTS_ROOT/pktgen/docker/docker-image-pktgen/Dockerfile`` and
+* ``$VCMTS_ROOT/vcmtsd/docker/docker-image-vcmtsd/Dockerfile``
+
+
+Deploy the ConfigMaps for Pktgen and vCMTSd
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+.. code-block:: bash
+
+ cd $VCMTS_ROOT/kubernetes/helm/pktgen
+ helm template . -x templates/pktgen-configmap.yaml > configmap.yaml
+ kubectl create -f configmap.yaml
+
+ cd $VCMTS_ROOT/kubernetes/helm/vcmtsd
+ helm template . -x templates/vcmts-configmap.yaml > configmap.yaml
+ kubectl create -f configmap.yaml
+
diff --git a/docs/testing/user/userguide/15-list-of-tcs.rst b/docs/testing/user/userguide/15-list-of-tcs.rst
index b62bf6390..b727aa3c9 100644
--- a/docs/testing/user/userguide/15-list-of-tcs.rst
+++ b/docs/testing/user/userguide/15-list-of-tcs.rst
@@ -1,135 +1,136 @@
-.. This work is licensed under a Creative Commons Attribution 4.0 International
-.. License.
-.. http://creativecommons.org/licenses/by/4.0
-.. (c) OPNFV, Ericsson AB and others.
-
-====================
-Yardstick Test Cases
-====================
-
-Abstract
-========
-
-This chapter lists available Yardstick test cases.
-Yardstick test cases are divided in two main categories:
-
-* *Generic NFVI Test Cases* - Test Cases developed to realize the methodology
-described in :doc:`02-methodology`
-
-* *OPNFV Feature Test Cases* - Test Cases developed to verify one or more
-aspect of a feature delivered by an OPNFV Project, including the test cases
-developed for the :term:`VTC`.
-
-Generic NFVI Test Case Descriptions
-===================================
-
-.. toctree::
- :maxdepth: 1
-
- opnfv_yardstick_tc001.rst
- opnfv_yardstick_tc002.rst
- opnfv_yardstick_tc004.rst
- opnfv_yardstick_tc005.rst
- opnfv_yardstick_tc008.rst
- opnfv_yardstick_tc009.rst
- opnfv_yardstick_tc010.rst
- opnfv_yardstick_tc011.rst
- opnfv_yardstick_tc012.rst
- opnfv_yardstick_tc014.rst
- opnfv_yardstick_tc024.rst
- opnfv_yardstick_tc037.rst
- opnfv_yardstick_tc038.rst
- opnfv_yardstick_tc042.rst
- opnfv_yardstick_tc043.rst
- opnfv_yardstick_tc044.rst
- opnfv_yardstick_tc055.rst
- opnfv_yardstick_tc061.rst
- opnfv_yardstick_tc063.rst
- opnfv_yardstick_tc069.rst
- opnfv_yardstick_tc070.rst
- opnfv_yardstick_tc071.rst
- opnfv_yardstick_tc072.rst
- opnfv_yardstick_tc073.rst
- opnfv_yardstick_tc074.rst
- opnfv_yardstick_tc075.rst
- opnfv_yardstick_tc076.rst
- opnfv_yardstick_tc078.rst
- opnfv_yardstick_tc079.rst
- opnfv_yardstick_tc080.rst
- opnfv_yardstick_tc081.rst
- opnfv_yardstick_tc083.rst
-
-OPNFV Feature Test Cases
-========================
-
-H A
----
-
-.. toctree::
- :maxdepth: 1
-
- opnfv_yardstick_tc019.rst
- opnfv_yardstick_tc025.rst
- opnfv_yardstick_tc045.rst
- opnfv_yardstick_tc046.rst
- opnfv_yardstick_tc047.rst
- opnfv_yardstick_tc048.rst
- opnfv_yardstick_tc049.rst
- opnfv_yardstick_tc050.rst
- opnfv_yardstick_tc051.rst
- opnfv_yardstick_tc052.rst
- opnfv_yardstick_tc053.rst
- opnfv_yardstick_tc054.rst
-
-IPv6
-----
-
-.. toctree::
- :maxdepth: 1
-
- opnfv_yardstick_tc027.rst
-
-KVM
----
-
-.. toctree::
- :maxdepth: 1
-
- opnfv_yardstick_tc028.rst
-
-Parser
-------
-
-.. toctree::
- :maxdepth: 1
-
- opnfv_yardstick_tc040.rst
-
- StorPerf
------------
-
-.. toctree::
- :maxdepth: 1
-
- opnfv_yardstick_tc074.rst
-
-virtual Traffic Classifier
---------------------------
-
-.. toctree::
- :maxdepth: 1
-
- opnfv_yardstick_tc006.rst
- opnfv_yardstick_tc007.rst
- opnfv_yardstick_tc020.rst
- opnfv_yardstick_tc021.rst
-
-Templates
-=========
-
-.. toctree::
- :maxdepth: 1
-
- testcase_description_v2_template
- Yardstick_task_templates
-
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, Ericsson AB and others.
+
+====================
+Yardstick Test Cases
+====================
+
+Abstract
+========
+
+This chapter lists available Yardstick test cases.
+Yardstick test cases are divided into two main categories:
+
+* *Generic NFVI Test Cases* - Test Cases developed to realize the methodology
+ described in :doc:`02-methodology`
+
+* *OPNFV Feature Test Cases* - Test Cases developed to verify one or more
+  aspects of a feature delivered by an OPNFV Project.
+
+Generic NFVI Test Case Descriptions
+===================================
+
+.. toctree::
+ :maxdepth: 1
+
+ opnfv_yardstick_tc001.rst
+ opnfv_yardstick_tc002.rst
+ opnfv_yardstick_tc004.rst
+ opnfv_yardstick_tc005.rst
+ opnfv_yardstick_tc006.rst
+ opnfv_yardstick_tc008.rst
+ opnfv_yardstick_tc009.rst
+ opnfv_yardstick_tc010.rst
+ opnfv_yardstick_tc011.rst
+ opnfv_yardstick_tc012.rst
+ opnfv_yardstick_tc014.rst
+ opnfv_yardstick_tc015.rst
+ opnfv_yardstick_tc024.rst
+ opnfv_yardstick_tc037.rst
+ opnfv_yardstick_tc038.rst
+ opnfv_yardstick_tc042.rst
+ opnfv_yardstick_tc043.rst
+ opnfv_yardstick_tc044.rst
+ opnfv_yardstick_tc055.rst
+ opnfv_yardstick_tc061.rst
+ opnfv_yardstick_tc063.rst
+ opnfv_yardstick_tc069.rst
+ opnfv_yardstick_tc070.rst
+ opnfv_yardstick_tc071.rst
+ opnfv_yardstick_tc072.rst
+ opnfv_yardstick_tc073.rst
+ opnfv_yardstick_tc074.rst
+ opnfv_yardstick_tc075.rst
+ opnfv_yardstick_tc076.rst
+ opnfv_yardstick_tc078.rst
+ opnfv_yardstick_tc079.rst
+ opnfv_yardstick_tc080.rst
+ opnfv_yardstick_tc081.rst
+ opnfv_yardstick_tc083.rst
+ opnfv_yardstick_tc084.rst
+
+OPNFV Feature Test Cases
+========================
+
+HA
+---
+
+.. toctree::
+ :maxdepth: 1
+
+ opnfv_yardstick_tc019.rst
+ opnfv_yardstick_tc025.rst
+ opnfv_yardstick_tc045.rst
+ opnfv_yardstick_tc046.rst
+ opnfv_yardstick_tc047.rst
+ opnfv_yardstick_tc048.rst
+ opnfv_yardstick_tc049.rst
+ opnfv_yardstick_tc050.rst
+ opnfv_yardstick_tc051.rst
+ opnfv_yardstick_tc052.rst
+ opnfv_yardstick_tc053.rst
+ opnfv_yardstick_tc054.rst
+ opnfv_yardstick_tc056.rst
+ opnfv_yardstick_tc057.rst
+ opnfv_yardstick_tc058.rst
+ opnfv_yardstick_tc087.rst
+ opnfv_yardstick_tc088.rst
+ opnfv_yardstick_tc089.rst
+ opnfv_yardstick_tc090.rst
+ opnfv_yardstick_tc091.rst
+ opnfv_yardstick_tc092.rst
+ opnfv_yardstick_tc093.rst
+
+IPv6
+----
+
+.. toctree::
+ :maxdepth: 1
+
+ opnfv_yardstick_tc027.rst
+
+KVM
+---
+
+.. toctree::
+ :maxdepth: 1
+
+ opnfv_yardstick_tc028.rst
+
+Parser
+------
+
+.. toctree::
+ :maxdepth: 1
+
+ opnfv_yardstick_tc040.rst
+
+StorPerf
+--------
+
+.. toctree::
+ :maxdepth: 1
+
+ opnfv_yardstick_tc074.rst
+
+Templates
+=========
+
+.. toctree::
+ :maxdepth: 1
+
+ testcase_description_v2_template
+ Yardstick_task_templates
+
diff --git a/docs/testing/user/userguide/code/multi-devstack-compute-local.conf b/docs/testing/user/userguide/code/multi-devstack-compute-local.conf
new file mode 100644
index 000000000..b0b3cc5d4
--- /dev/null
+++ b/docs/testing/user/userguide/code/multi-devstack-compute-local.conf
@@ -0,0 +1,53 @@
+[[local|localrc]]
+HOST_IP=<HOST_IP_ADDRESS>
+MYSQL_PASSWORD=password
+DATABASE_PASSWORD=password
+RABBIT_PASSWORD=password
+ADMIN_PASSWORD=password
+SERVICE_PASSWORD=password
+HORIZON_PASSWORD=password
+# Controller node
+SERVICE_HOST=<CONTROLLER_IP_ADDRESS>
+MYSQL_HOST=$SERVICE_HOST
+RABBIT_HOST=$SERVICE_HOST
+GLANCE_HOSTPORT=$SERVICE_HOST:9292
+
+# Internet access.
+RECLONE=False
+PIP_UPGRADE=True
+IP_VERSION=4
+
+# Neutron
+enable_plugin neutron https://git.openstack.org/openstack/neutron.git stable/pike
+
+# Services
+ENABLED_SERVICES=n-cpu,rabbit,q-agt,placement-api,q-sriov-agt
+
+# Neutron Options
+PUBLIC_INTERFACE=<PUBLIC INTERFACE>
+
+# ML2 Configuration
+Q_PLUGIN=ml2
+Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,sriovnicswitch
+Q_ML2_PLUGIN_TYPE_DRIVERS=vlan,flat,local,vxlan,gre,geneve
+
+# Open vSwitch provider networking configuration
+PHYSICAL_DEVICE_MAPPINGS=physnet1:<PF0_IFNAME>,physnet2:<PF1_IFNAME>
+
+
+[[post-config|$NOVA_CONF]]
+[DEFAULT]
+scheduler_default_filters=RamFilter,ComputeFilter,AvailabilityZoneFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,PciPassthroughFilter
+# Whitelist PCI devices
+pci_passthrough_whitelist = {\\"devname\\": \\"<PF0_IFNAME>\\", \\"physical_network\\": \\"physnet1\\" }
+pci_passthrough_whitelist = {\\"devname\\": \\"<PF1_IFNAME>\\", \\"physical_network\\": \\"physnet2\\" }
+
+[libvirt]
+cpu_mode = host-model
+
+
+# ML2 plugin bits for SR-IOV enablement of Intel Corporation XL710/X710 Virtual Function
+[[post-config|/$Q_PLUGIN_CONF_FILE]]
+[ml2_sriov]
+agent_required = True
+supported_pci_vendor_devs = <VF_DEV_ID:VF_VEN_ID>
diff --git a/docs/testing/user/userguide/code/multi-devstack-controller-local.conf b/docs/testing/user/userguide/code/multi-devstack-controller-local.conf
new file mode 100644
index 000000000..fb61cdcbd
--- /dev/null
+++ b/docs/testing/user/userguide/code/multi-devstack-controller-local.conf
@@ -0,0 +1,64 @@
+[[local|localrc]]
+HOST_IP=<HOST_IP_ADDRESS>
+ADMIN_PASSWORD=password
+MYSQL_PASSWORD=$ADMIN_PASSWORD
+DATABASE_PASSWORD=$ADMIN_PASSWORD
+RABBIT_PASSWORD=$ADMIN_PASSWORD
+SERVICE_PASSWORD=$ADMIN_PASSWORD
+HORIZON_PASSWORD=$ADMIN_PASSWORD
+# Controller node
+SERVICE_HOST=$HOST_IP
+MYSQL_HOST=$SERVICE_HOST
+RABBIT_HOST=$SERVICE_HOST
+GLANCE_HOSTPORT=$SERVICE_HOST:9292
+
+# Internet access.
+RECLONE=False
+PIP_UPGRADE=True
+IP_VERSION=4
+
+# Services
+disable_service n-net
+ENABLED_SERVICES+=,q-svc,q-dhcp,q-meta,q-agt,q-sriov-agt
+
+# Heat
+enable_plugin heat https://git.openstack.org/openstack/heat stable/pike
+
+# Neutron
+enable_plugin neutron https://git.openstack.org/openstack/neutron.git stable/pike
+
+# Neutron Options
+FLOATING_RANGE=<RANGE_IN_THE_PUBLIC_INTERFACE_NETWORK>
+Q_FLOATING_ALLOCATION_POOL=start=<START_IP_ADDRESS>,end=<END_IP_ADDRESS>
+PUBLIC_NETWORK_GATEWAY=<PUBLIC_NETWORK_GATEWAY>
+PUBLIC_INTERFACE=<PUBLIC INTERFACE>
+
+# ML2 Configuration
+Q_PLUGIN=ml2
+Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,sriovnicswitch
+Q_ML2_PLUGIN_TYPE_DRIVERS=vlan,flat,local,vxlan,gre,geneve
+
+# Open vSwitch provider networking configuration
+Q_USE_PROVIDERNET_FOR_PUBLIC=True
+OVS_PHYSICAL_BRIDGE=br-ex
+OVS_BRIDGE_MAPPINGS=public:br-ex
+PHYSICAL_DEVICE_MAPPINGS=physnet1:<PF0_IFNAME>,physnet2:<PF1_IFNAME>
+PHYSICAL_NETWORK=physnet1,physnet2
+
+
+[[post-config|$NOVA_CONF]]
+[DEFAULT]
+scheduler_default_filters=RamFilter,ComputeFilter,AvailabilityZoneFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,PciPassthroughFilter
+# Whitelist PCI devices
+pci_passthrough_whitelist = {\\"devname\\": \\"<PF0_IFNAME>\\", \\"physical_network\\": \\"physnet1\\" }
+pci_passthrough_whitelist = {\\"devname\\": \\"<PF1_IFNAME>\\", \\"physical_network\\": \\"physnet2\\" }
+
+[libvirt]
+cpu_mode = host-model
+
+
+# ML2 plugin bits for SR-IOV enablement of Intel Corporation XL710/X710 Virtual Function
+[[post-config|/$Q_PLUGIN_CONF_FILE]]
+[ml2_sriov]
+agent_required = True
+supported_pci_vendor_devs = <VF_DEV_ID:VF_VEN_ID>
diff --git a/docs/testing/user/userguide/code/pod_ixia.yaml b/docs/testing/user/userguide/code/pod_ixia.yaml
new file mode 100644
index 000000000..4ab56fe4e
--- /dev/null
+++ b/docs/testing/user/userguide/code/pod_ixia.yaml
@@ -0,0 +1,31 @@
+nodes:
+-
+ name: trafficgen_1
+ role: IxNet
+ ip: 1.2.1.1 #ixia machine ip
+ user: user
+ password: r00t
+ key_filename: /root/.ssh/id_rsa
+ tg_config:
+ ixchassis: "1.2.1.7" #ixia chassis ip
+ tcl_port: "8009" # tcl server port
+ lib_path: "/opt/ixia/ixos-api/8.01.0.2/lib/ixTcl1.0"
+ root_dir: "/opt/ixia/ixos-api/8.01.0.2/"
+ py_bin_path: "/opt/ixia/ixload/8.01.106.3/bin/"
+ dut_result_dir: "/mnt/ixia"
+ version: 8.1
+ interfaces:
+ xe0: # logical name from topology.yaml and vnfd.yaml
+ vpci: "2:5" # Card:port
+ driver: "none"
+ dpdk_port_num: 0
+ local_ip: "152.16.100.20"
+ netmask: "255.255.0.0"
+ local_mac: "00:98:10:64:14:00"
+ xe1: # logical name from topology.yaml and vnfd.yaml
+ vpci: "2:6" # [(Card, port)]
+ driver: "none"
+ dpdk_port_num: 1
+ local_ip: "152.40.40.20"
+ netmask: "255.255.0.0"
+ local_mac: "00:98:28:28:14:00"
diff --git a/docs/testing/user/userguide/code/single-devstack-local.conf b/docs/testing/user/userguide/code/single-devstack-local.conf
new file mode 100644
index 000000000..4c44f729d
--- /dev/null
+++ b/docs/testing/user/userguide/code/single-devstack-local.conf
@@ -0,0 +1,62 @@
+[[local|localrc]]
+HOST_IP=<HOST_IP_ADDRESS>
+ADMIN_PASSWORD=password
+MYSQL_PASSWORD=$ADMIN_PASSWORD
+DATABASE_PASSWORD=$ADMIN_PASSWORD
+RABBIT_PASSWORD=$ADMIN_PASSWORD
+SERVICE_PASSWORD=$ADMIN_PASSWORD
+HORIZON_PASSWORD=$ADMIN_PASSWORD
+
+# Internet access.
+RECLONE=False
+PIP_UPGRADE=True
+IP_VERSION=4
+
+# Services
+disable_service n-net
+ENABLED_SERVICES+=,q-svc,q-dhcp,q-meta,q-agt,q-sriov-agt
+
+# Heat
+enable_plugin heat https://git.openstack.org/openstack/heat stable/pike
+
+# Neutron
+enable_plugin neutron https://git.openstack.org/openstack/neutron.git stable/pike
+
+# Neutron Options
+FLOATING_RANGE=<RANGE_IN_THE_PUBLIC_INTERFACE_NETWORK>
+Q_FLOATING_ALLOCATION_POOL=start=<START_IP_ADDRESS>,end=<END_IP_ADDRESS>
+PUBLIC_NETWORK_GATEWAY=<PUBLIC_NETWORK_GATEWAY>
+PUBLIC_INTERFACE=<PUBLIC INTERFACE>
+
+# ML2 Configuration
+Q_PLUGIN=ml2
+Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch,sriovnicswitch
+Q_ML2_PLUGIN_TYPE_DRIVERS=vlan,flat,local,vxlan,gre,geneve
+
+# Open vSwitch provider networking configuration
+Q_USE_PROVIDERNET_FOR_PUBLIC=True
+OVS_PHYSICAL_BRIDGE=br-ex
+OVS_BRIDGE_MAPPINGS=public:br-ex
+PHYSICAL_DEVICE_MAPPINGS=physnet1:<PF0_IFNAME>,physnet2:<PF1_IFNAME>
+PHYSICAL_NETWORK=physnet1,physnet2
+
+
+[[post-config|$NOVA_CONF]]
+[DEFAULT]
+scheduler_default_filters=RamFilter,ComputeFilter,AvailabilityZoneFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,PciPassthroughFilter
+# Whitelist PCI devices
+pci_passthrough_whitelist = {\\"devname\\": \\"<PF0_IFNAME>\\", \\"physical_network\\": \\"physnet1\\" }
+pci_passthrough_whitelist = {\\"devname\\": \\"<PF1_IFNAME>\\", \\"physical_network\\": \\"physnet2\\" }
+
+[filter_scheduler]
+enabled_filters = RetryFilter,AvailabilityZoneFilter,RamFilter,DiskFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter,SameHostFilter
+
+[libvirt]
+cpu_mode = host-model
+
+
+# ML2 plugin bits for SR-IOV enablement of Intel Corporation XL710/X710 Virtual Function
+[[post-config|/$Q_PLUGIN_CONF_FILE]]
+[ml2_sriov]
+agent_required = True
+supported_pci_vendor_devs = <VF_DEV_ID:VF_VEN_ID>
diff --git a/docs/testing/user/userguide/code/single-yardstick-pod.conf b/docs/testing/user/userguide/code/single-yardstick-pod.conf
new file mode 100644
index 000000000..421246d60
--- /dev/null
+++ b/docs/testing/user/userguide/code/single-yardstick-pod.conf
@@ -0,0 +1,22 @@
+nodes:
+-
+ name: trafficgen_1
+ role: tg__0
+ ip: <TG-HOST-IP>
+ user: <TG-USER>
+ password: <TG-PASS>
+ interfaces:
+ xe0: # logical name from topology.yaml and vnfd.yaml
+ vpci: "0000:18:00.0"
+ driver: i40e # default kernel driver
+ dpdk_port_num: 0
+ local_ip: "10.1.1.150"
+ netmask: "255.255.255.0"
+ local_mac: "00:00:00:00:00:01"
+ xe1: # logical name from topology.yaml and vnfd.yaml
+ vpci: "0000:18:00.1"
+ driver: i40e # default kernel driver
+ dpdk_port_num: 1
+ local_ip: "10.1.1.151"
+ netmask: "255.255.255.0"
+ local_mac: "00:00:00:00:00:02"
diff --git a/docs/testing/user/userguide/comp-intro.rst b/docs/testing/user/userguide/comp-intro.rst
index ad354b66d..bab6e60da 100644
--- a/docs/testing/user/userguide/comp-intro.rst
+++ b/docs/testing/user/userguide/comp-intro.rst
@@ -7,10 +7,10 @@
Yardstick
=========
-.. _Yardstick: https://wiki.opnfv.org/yardstick
+.. _Yardstick: https://wiki.opnfv.org/display/yardstick
.. _Presentation: https://wiki.opnfv.org/_media/opnfv_summit_-_yardstick_project.pdf
.. _NFV-TST001: https://docbox.etsi.org/ISG/NFV/Open/Drafts/TST001_-_Pre-deployment_Validation/
-.. _Yardsticktst: https://wiki.opnfv.org/_media/opnfv_summit_-_bridging_opnfv_and_etsi.pdf
+.. _Yardsticktst: http://events17.linuxfoundation.org/sites/events/files/slides/OPNFV%20Summit%20-%20bridging_opnfv_and_etsi.pdf
The project's goal is to verify infrastructure compliance, from the perspective
of a Virtual Network Function (VNF).
diff --git a/docs/testing/user/userguide/glossary.rst b/docs/testing/user/userguide/glossary.rst
index f8ff41887..cef9b69a5 100644
--- a/docs/testing/user/userguide/glossary.rst
+++ b/docs/testing/user/userguide/glossary.rst
@@ -13,53 +13,153 @@ Glossary
API
Application Programming Interface
- DPI
- Deep Packet Inspection
+ Barometer
+ OPNFV NFVi Service Assurance project. Barometer upstreams changes to
+ collectd, OpenStack, etc to improve features related to NFVi monitoring
+ and service assurance.
+ More info on: https://opnfv-barometer.readthedocs.io/en/latest/
+
+ collectd
+ collectd is a system statistics collection daemon.
+ More info on: https://collectd.org/
+
+ context
+ A context describes the environment in which a yardstick testcase will
+ be run. It can refer to a pre-provisioned environment, or an environment
+ that will be set up using OpenStack or Kubernetes.
+
+ Docker
+ Docker provisions and manages containers. Yardstick and many other OPNFV
+ projects are deployed in containers. Docker is required to launch the
+ containerized versions of these projects.
DPDK
Data Plane Development Kit
+ DPI
+ Deep Packet Inspection
+
DSCP
Differentiated Services Code Point
+ flavor
+ A specification of virtual resources used by OpenStack in the creation
+ of a VM instance.
+
+ Grafana
+ A visualization tool, used in Yardstick to retrieve test data from
+ InfluxDB and display it. Grafana works by defining dashboards, which are
+ combinations of visualization panes (e.g. line charts and gauges) and
+ forms that assist the user in formulating SQL-like queries for InfluxDB.
+ More info on: https://grafana.com/
+
IGMP
Internet Group Management Protocol
+ InfluxDB
+ One of the Dispatchers supported by Yardstick, it allows test results to
+ be reported to a time-series database.
+ More info on: https://www.influxdata.com/
+
IOPS
Input/Output Operations Per Second
+ A performance measurement used to benchmark storage devices.
+
+ KPI
+ Key Performance Indicator
+
+ Kubernetes
+ k8s
+ Kubernetes is an open-source container-orchestration system for automating
+ deployment, scaling and management of containerized applications.
+ It is one of the contexts supported in Yardstick.
+
+ MPLS
+ Multiprotocol Label Switching
+
+ NFV
+ Network Function Virtualization
+ NFV is an initiative to take network services which were traditionally run
+ on proprietary, dedicated hardware, and virtualize them to run on general
+ purpose hardware.
+
+ NFVI
+ Network Function Virtualization Infrastructure
+ The servers, routers, switches, etc on which the NFV system runs.
NIC
Network Interface Controller
+ NSB
+ Network Services Benchmarking. A subset of Yardstick features concerned
+ with NFVI and VNF characterization.
+
+ OpenStack
+ OpenStack is a cloud operating system that controls pools of compute,
+ storage, and networking resources. OpenStack is an open source project
+ licensed under the Apache License 2.0.
+
PBFS
Packet Based per Flow State
+ PROX
+ Packet pROcessing eXecution engine
+
QoS
Quality of Service
+ The ability to guarantee certain network or storage requirements to
+ satisfy a Service Level Agreement (SLA) between an application provider
+ and end users.
+ Typically includes performance requirements like networking bandwidth,
+ latency, jitter correction, and reliability as well as storage
+ performance in Input/Output Operations Per Second (IOPS), throttling
+ agreements, and performance expectations at peak load
+
+ runner
+ The part of a Yardstick testcase that determines how the test will be run
+ (e.g. for x iterations, y seconds or until state z is reached). The runner
+ also determines when the metrics are collected/reported.
+
+ SampleVNF
+ OPNFV project providing a repository of reference VNFs.
+ More info on: https://opnfv-samplevnf.readthedocs.io/en/latest/
+
+ scenario
+ The part of a Yardstick testcase that describes each test step.
+
+ SLA
+ Service Level Agreement
+ An SLA is an agreement between a service provider and a customer to
+ provide a certain level of service/performance.
+
+ SR-IOV
+ Single Root IO Virtualization
+ A specification that, when implemented by a physical PCIe
+ device, enables it to appear as multiple separate PCIe devices. This
+ enables multiple virtualized guests to share direct access to the
+ physical device.
+
+ SUT
+ System Under Test
+
+ testcase
+ A task in Yardstick; the yaml file that is read by Yardstick to
+ determine how to run a test.
+
+ ToS
+ Type of Service
VLAN
- Virtual LAN
+ Virtual LAN (Local Area Network)
VM
Virtual Machine
+ An operating system instance that runs on top of a hypervisor.
+ Multiple VMs can run at the same time on the same physical
+ host.
VNF
Virtual Network Function
VNFC
Virtual Network Function Component
-
- NFVI
- Network Function Virtualization Infrastructure
-
- SR-IOV
- Single Root IO Virtualization
-
- SUT
- System Under Test
-
- ToS
- Type of Service
-
- VTC
- Virtual Traffic Classifier
diff --git a/docs/testing/user/userguide/index.rst b/docs/testing/user/userguide/index.rst
index 61e157e52..ff0bb6f5d 100644
--- a/docs/testing/user/userguide/index.rst
+++ b/docs/testing/user/userguide/index.rst
@@ -11,21 +11,20 @@ Yardstick User Guide
.. toctree::
:maxdepth: 4
- :numbered:
01-introduction
02-methodology
03-architecture
04-installation
- 05-yardstick_plugin
- 06-result-store-InfluxDB
- 07-grafana
- 08-api
- 09-yardstick_user_interface
- 10-vtc-overview
- 11-nsb-overview
- 12-nsb_installation
- 13-nsb_operation
+ 05-operation
+ 06-yardstick-plugin
+ 07-result-store-InfluxDB
+ 08-grafana
+ 09-api
+ 10-yardstick-user-interface
+ 12-nsb-overview
+ 13-nsb-installation
+ 14-nsb-operation
15-list-of-tcs
nsb/nsb-list-of-tcs
glossary
diff --git a/docs/testing/user/userguide/nsb/nsb-list-of-tcs.rst b/docs/testing/user/userguide/nsb/nsb-list-of-tcs.rst
index 895837283..562c80ff7 100644..100755
--- a/docs/testing/user/userguide/nsb/nsb-list-of-tcs.rst
+++ b/docs/testing/user/userguide/nsb/nsb-list-of-tcs.rst
@@ -27,4 +27,15 @@ NSB PROX Test Case Descriptions
tc_prox_context_buffering_port
tc_prox_context_load_balancer_port
tc_prox_context_vpe_port
- tc_prox_context_lw_after_port
+ tc_prox_context_lw_aftr_port
+ tc_epc_default_bearer_landslide
+ tc_epc_dedicated_bearer_landslide
+ tc_epc_saegw_tput_relocation_landslide
+ tc_epc_network_service_request_landslide
+ tc_epc_ue_service_request_landslide
+ tc_vfw_rfc2544
+ tc_vfw_rfc2544_correlated
+ tc_vfw_rfc3511
+ tc_vpp_baremetal_crypto_ipsec
+ tc_vims_context_sipp
+ tc_pktgen_k8s_vcmts
diff --git a/docs/testing/user/userguide/nsb/tc_bng_pppoe_rfc2544_ixia.rst b/docs/testing/user/userguide/nsb/tc_bng_pppoe_rfc2544_ixia.rst
new file mode 100644
index 000000000..ffe4f6c19
--- /dev/null
+++ b/docs/testing/user/userguide/nsb/tc_bng_pppoe_rfc2544_ixia.rst
@@ -0,0 +1,177 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, 2019 Intel Corporation.
+
+***************************************************************
+Yardstick Test Case Description: NSB vBNG RFC2544 QoS TEST CASE
+***************************************************************
+
++-----------------------------------------------------------------------------+
+|NSB vBNG RFC2544 QoS base line test case without link congestion |
+| |
++--------------+--------------------------------------------------------------+
+|test case id | tc_bng_pppoe_rfc2544_ixia_IMIX_scale_up |
+| | |
++--------------+--------------------------------------------------------------+
+| metric | Network metrics: |
+| | * TxThroughput |
+| | * RxThroughput |
+| | * TG packets in |
+| | * TG packets out |
+| | * Max Latency |
+| | * Min Latency |
+| | * Average Latency |
+| | * Packets drop percentage |
+| | |
+| | PPPoE subscribers metrics: |
+| | * Sessions up |
+| | * Sessions down |
+| | * Sessions Not Started |
+| | * Sessions Total |
+| | |
+| | NOTE: the same network metrics list are collecting: |
+| | * summary for all ports |
+| | * per port |
+| | * per priority flows summary on all ports |
+| | |
++--------------+--------------------------------------------------------------+
+|test purpose | This test allows to measure performance of BNG network device|
+| | according to RFC2544 testing methodology. Test case creates |
+| | PPPoE subscriber connections to BNG, runs prioritized traffic|
+| | on maximum throughput on all ports and collects network |
+| | and PPPoE subscriber metrics. |
+| | |
++--------------+--------------------------------------------------------------+
+|configuration | The BNG QoS RFC2544 test cases are listed below: |
+| | |
+| | * tc_bng_pppoe_rfc2544_ixia_IMIX_scale_up.yaml |
+| | |
+| | Mentioned test case is a template and number of ports in the |
+| | setup could be passed using cli arguments, e.g: |
+| | |
+| | yardstick -d task start --task-args='{vports: 8}' <tc_yaml> |
+| | |
+| | By default, vports=2. |
+| | |
+| | Test duration: |
+| | * set as 30sec; |
+| | |
+| | Traffic type: |
+| | * IPv4; |
+| | |
+| | Packet sizes: |
+| | * IMIX. The following default IMIX distribution is using: |
+| | |
+| | uplink: 70B - 33%, 940B - 33%, 1470B - 34% |
+| | downlink: 68B - 3%, 932B - 1%, 1470B - 96% |
+| | |
+| | VLAN settings: |
+| | * QinQ on access ports; |
+| | * VLAN on core ports; |
+| | |
+| | Number of PPPoE subscribers: |
+| | * 4000 per access port; |
+| | * 1000 per SVLAN; |
+| | |
+| | Default ToS bits settings: |
+| | * 0 - (000) Routine |
+| | * 4 - (100) Flash Override |
+| | * 7 - (111) Network Control. |
+| | |
+| | The above fields are the main options used for the test case |
+| | and could be configured using cli options on test run or |
+| | directly in test case yaml file. |
+| | |
++--------------+--------------------------------------------------------------+
+|test tool | IXIA IxNetwork |
+| | |
+| | IXIA IxNetwork is using to emulates PPPoE sessions, generate |
+| | L2-L3 traffic, analyze traffic flows and collect network |
+| | metrics during test run. |
+| | |
++--------------+--------------------------------------------------------------+
+|applicability | Mentioned BNG QoS RFC2544 test case can be configured with |
+| | different: |
+| | |
+| | * Number of PPPoE subscribers sessions; |
+| | * Setup ports number; |
+| | * IP Priority type; |
+| | * Packet size; |
+| | * Enable/disable BGP protocol on core ports; |
+| | |
+| | Default values exist. |
+| | |
++--------------+--------------------------------------------------------------+
+|references | RFC2544 |
+| | |
++--------------+--------------------------------------------------------------+
+| pre-test | 1. BNG is up and running and has configured: |
+| conditions | * access ports with QinQ tagging; |
+| | * core ports with configured IP addresses and VLAN; |
+| | * PPPoE subscribers authorization settings (no auth or |
+| | Radius server, PAP auth protocol); |
+| | * QoS settings; |
+| | |
+| | 2. IxNetwork API server is running on specified in pod.yaml |
+| | file TCL port; |
+| | |
+| | 3. BNG ports are connected to IXIA ports (IXIA uplink |
+| | ports are connected to BNG access ports and IXIA |
+| | downlink ports are connected to BNG core ports; |
+| | |
+| | 4. The pod.yaml file contains all necessary information |
+| | (BNG access and core ports settings, core ports IP |
+| | address, NICs, IxNetwork TCL port, IXIA uplink/downlink |
+| | ports, etc). |
+| | |
++--------------+--------------------------------------------------------------+
+|test sequence | description and expected result |
+| | |
++--------------+--------------------------------------------------------------+
+|step 1 | Yardstick resolves the topology and connects to IxNetwork |
+| | API server by TCL. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 2 | Test scenarios run, which performs the following steps: |
+| | |
+| | 1. Create access network topologies (this topologies are |
+| | based on IXIA ports which are connected to BNG access |
+| | ports); |
+| | 2. Configure access network topologies with multiple device |
+| | groups. Each device group represents single SVLAN with |
+| | PPPoE subscribers sessions (number of created on port |
+| | SVLANs and subscribers depends on specified if test case |
+| | file options); |
+| | 3. Create core network topologies (this topologies are |
+| | based on IXIA ports which are connected to BNG core |
+| | ports); |
+| | 4. Configure core network topologies with single device |
+| | group which represents one connection with configured |
+| | VLAN and BGP protocol; |
+| | 5. Establish PPPoE subscribers connections to BNG; |
+| | 6. Create traffic flows between access and core ports |
+| | (traffic flows are creating between access-core ports |
+| | pairs, traffic is bi-directional); |
+| | 7. Configure each traffic flow with specified in traffic |
+| | profile options; |
+| | 8. Run traffic with specified in test case file duration; |
+| | 9. Collect network metrics after traffic was stopped; |
+| | 10. In case drop percentage rate is higher than expected, |
+| | reduce traffic line rate and repeat steps 7-10 again; |
+| | 11. In case drop percentage rate is as expected or number |
+| | of maximum iterations in step 10 achieved, disconnect |
+| | PPPoE subscribers and stop traffic; |
+| | 12. Stop test. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 3 | During each iteration interval in the test run, all specified|
+| | metrics are retrieved from IxNetwork and stored in the |
+| | yardstick dispatcher. |
+| | |
++--------------+--------------------------------------------------------------+
+|test verdict | The vBNG RFC2544 test case will achieve maximum traffic line |
+| | rate with zero packet loss (or other non-zero allowed |
+| | partial drop rate). |
+| | |
++--------------+--------------------------------------------------------------+
diff --git a/docs/testing/user/userguide/nsb/tc_bng_pppoe_rfc2544_ixia_8ports_1port_congested.rst b/docs/testing/user/userguide/nsb/tc_bng_pppoe_rfc2544_ixia_8ports_1port_congested.rst
new file mode 100644
index 000000000..889ba2410
--- /dev/null
+++ b/docs/testing/user/userguide/nsb/tc_bng_pppoe_rfc2544_ixia_8ports_1port_congested.rst
@@ -0,0 +1,179 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, 2019 Intel Corporation.
+
+***************************************************************
+Yardstick Test Case Description: NSB vBNG RFC2544 QoS TEST CASE
+***************************************************************
+
++-----------------------------------------------------------------------------+
+|NSB vBNG RFC2544 QoS base line test case with link congestion |
+| |
++--------------+--------------------------------------------------------------+
+|test case id | tc_bng_pppoe_rfc2544_ixia_8ports_1port_congested_IMIX |
+| | |
++--------------+--------------------------------------------------------------+
+| metric | Network metrics: |
+| | * TxThroughput |
+| | * RxThroughput |
+| | * TG packets in |
+| | * TG packets out |
+| | * Max Latency |
+| | * Min Latency |
+| | * Average Latency |
+| | * Packets drop percentage |
+| | |
+| | PPPoE subscribers metrics: |
+| | * Sessions up |
+| | * Sessions down |
+| | * Sessions Not Started |
+| | * Sessions Total |
+| | |
+| | NOTE: the same network metrics list are collecting: |
+| | * summary for all ports |
+| | * per port |
+| | * per priority flows summary on all ports |
+| | |
++--------------+--------------------------------------------------------------+
+|test purpose | This test allows to measure performance of BNG network device|
+| | according to RFC2544 testing methodology. Test case creates |
+| | PPPoE subscribers connections to BNG, run prioritized traffic|
+| | causing congestion of access port (port xe0) and collects |
+| | network and PPPoE subscribers metrics. |
+| | |
++--------------+--------------------------------------------------------------+
+|configuration | The BNG QoS RFC2544 test cases are listed below: |
+| | |
+| | * tc_bng_pppoe_rfc2544_ixia_8ports_1port_congested_IMIX.yaml |
+| | |
+| | Number of ports: |
+| | * 8 ports |
+| | |
+| | Test duration: |
+| | * set as 30sec; |
+| | |
+| | Traffic type: |
+| | * IPv4; |
+| | |
+| | Packet sizes: |
+| | * IMIX. The following default IMIX distribution is using: |
+| | |
+| | uplink: 70B - 33%, 940B - 33%, 1470B - 34% |
+| | downlink: 68B - 3%, 932B - 1%, 1470B - 96% |
+| | |
+| | VLAN settings: |
+| | * QinQ on access ports; |
+| | * VLAN on core ports; |
+| | |
+| | Number of PPPoE subscribers: |
+| | * 4000 per access port; |
+| | * 1000 per SVLAN; |
+| | |
+| | Default ToS bits settings: |
+| | * 0 - (000) Routine |
+| | * 4 - (100) Flash Override |
+| | * 7 - (111) Network Control. |
+| | |
+| | The above fields are the main options used for the test case |
+| | and could be configured using cli options on test run or |
+| | directly in test case yaml file. |
+| | |
+| | NOTE: that only parameter that can't be changed is ports |
+| | number. To run the test with another number of ports |
+| | traffic profile should be updated. |
+| | |
++--------------+--------------------------------------------------------------+
+|test tool | IXIA IxNetwork |
+| | |
+| | IXIA IxNetwork is using to emulates PPPoE sessions, generate |
+| | L2-L3 traffic, analyze traffic flows and collect network |
+| | metrics during test run. |
+| | |
++--------------+--------------------------------------------------------------+
+|applicability | Mentioned BNG QoS RFC2544 test cases can be configured with |
+| | different: |
+| | |
+| | * Number of PPPoE subscribers sessions; |
+| | * IP Priority type; |
+| | * Packet size; |
+| | * enable/disable BGP protocol on core ports; |
+| | |
+| | Default values exist. |
+| | |
++--------------+--------------------------------------------------------------+
+|references | RFC2544 |
+| | |
++--------------+--------------------------------------------------------------+
+| pre-test | 1. BNG is up and running and has configured: |
+| conditions | * access ports with QinQ tagging; |
+| | * core ports with configured IP addresses and VLAN; |
+| | * PPPoE subscribers authorization settings (no auth or |
+| | Radius server, PAP auth protocol); |
+| | * QoS settings; |
+| | |
+| | 2. IxNetwork API server is running on specified in pod.yaml |
+| | file TCL port; |
+| | |
+| | 3. BNG ports are connected to IXIA ports (IXIA uplink |
+| | ports are connected to BNG access ports and IXIA |
+| | downlink ports are connected to BNG core ports; |
+| | |
+| | 4. The pod.yaml file contains all necessary information |
+| | (BNG access and core ports settings, core ports IP |
+| | address, NICs, IxNetwork TCL port, IXIA uplink/downlink |
+| | ports, etc). |
+| | |
++--------------+--------------------------------------------------------------+
+|test sequence | description and expected result |
+| | |
++--------------+--------------------------------------------------------------+
+|step 1 | Yardstick resolve the topology and connects to IxNetwork |
+| | API server by TCL. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 2 | Test scenarios run, which performs the following steps: |
+| | |
+| | 1. Create access network topologies (these topologies are |
+| | based on the IXIA ports which are connected to BNG |
+| | access ports); |
+| | 2. Configure access network topologies with multiple device |
+| | groups. Each device group represents a single SVLAN with |
+| | PPPoE subscriber sessions (the number of SVLANs and |
+| | subscribers created on a port depends on the options |
+| | specified in the test case file); |
+| | 3. Create core network topologies (these topologies are |
+| | based on the IXIA ports which are connected to BNG core |
+| | ports); |
+| | 4. Configure core network topologies with a single device |
+| | group which represents one connection with VLAN and BGP |
+| | protocol configured; |
+| | 5. Establish PPPoE subscriber connections to the BNG; |
+| | 6. Create traffic flows between access and core ports. |
+| | Since the test covers the access port congestion case, |
+| | flows between ports are created in the following way: |
+| | traffic from two core ports goes to one access port, |
+| | causing port congestion, and traffic from the other two |
+| | core ports is split between the remaining three access |
+| | ports; |
+| | 7. Configure each traffic flow with the options specified |
+| | in the traffic profile; |
+| | 8. Run traffic for the duration specified in the test case |
+| | file; |
+| | 9. Collect network metrics after the traffic is stopped; |
+| | 10. Measure the drop percentage rate of packets of different |
+| | priorities on the congested port. It is expected that |
+| | all high and medium priority packets are forwarded and |
+| | only low priority packets are dropped. |
+| | 11. Disconnect PPPoE subscribers and stop the test. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 3 | During the test run, at the end of each iteration all |
+| | metrics specified in this document are retrieved from |
+| | IxNetwork and stored in the yardstick dispatcher. |
+| | |
++--------------+--------------------------------------------------------------+
+|test verdict | The test case is successful if all high and medium priority |
+| | packets on the congested port were forwarded and only low |
+| | priority packets were dropped. |
+| | |
++--------------+--------------------------------------------------------------+
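+
+A minimal sketch of how the items listed under *applicability* might be
+overridden from a test case file is given below. The keys under ``options``
+and the profile/topology file names are illustrative assumptions only; the
+shipped BNG QoS RFC2544 yaml files remain the authoritative reference.
+
+.. code-block:: yaml
+
+   # Hypothetical BNG QoS RFC2544 override sketch (option names are assumed)
+   schema: "yardstick:task:0.1"
+   scenarios:
+   - type: NSPerf
+     traffic_profile: ../../traffic_profiles/pppoe_qos_traffic_profile.yaml  # assumed name
+     topology: bng_pppoe_tg_topology.yaml                                    # assumed name
+     nodes:
+       tg__0: trafficgen_0.yardstick
+       vnf__0: vnf_0.yardstick
+     options:
+       pppoe_sessions_per_svlan: 1000   # number of PPPoE subscriber sessions
+       priority_type: tos               # IP Priority type
+       pkt_size: 128                    # packet size, bytes
+       enable_bgp: false                # enable/disable BGP on core ports
+     runner:
+       type: Iteration
+       iterations: 1
+   context:
+     type: Node
+     name: yardstick
+     file: /etc/yardstick/nodes/pod.yaml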
diff --git a/docs/testing/user/userguide/nsb/tc_epc_dedicated_bearer_landslide.rst b/docs/testing/user/userguide/nsb/tc_epc_dedicated_bearer_landslide.rst
new file mode 100644
index 000000000..c8865ed93
--- /dev/null
+++ b/docs/testing/user/userguide/nsb/tc_epc_dedicated_bearer_landslide.rst
@@ -0,0 +1,156 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, 2018 Intel Corporation.
+
+*********************************************************
+Yardstick Test Case Description: NSB EPC DEDICATED BEARER
+*********************************************************
+
++-----------------------------------------------------------------------------+
+|NSB EPC dedicated bearer test case |
+| |
++--------------+--------------------------------------------------------------+
+|test case id | tc_epc_{initiator}_dedicated_bearer_landslide |
+| | |
+| | * initiator: dedicated bearer creation initiator side could |
+| | be UE (ue) or Network (network). |
+| | |
++--------------+--------------------------------------------------------------+
+|metric | All metrics provided by Spirent Landslide traffic generator |
+| | |
++--------------+--------------------------------------------------------------+
+|test purpose | The Spirent Landslide product provides a one-box solution |
+| | which allows full emulation of all EPC network nodes, |
+| | including mobile users and network hosts, and generation of |
+| | control and data plane traffic. |
+| | |
+| | This test allows checking the EPC processing capability |
+| | under different levels of load (number of subscribers, |
+| | generated traffic throughput, etc.) for the case when |
+| | default and dedicated bearers are created and used for |
+| | traffic transfer. |
+| | |
+| | It is easy to replace an emulated node or multiple nodes in |
+| | the test topology with a real node or the corresponding vEPC |
+| | VNF as DUT and check its processing capabilities under the |
+| | specific test case load conditions. |
+| | |
++--------------+--------------------------------------------------------------+
+|configuration | The EPC dedicated bearer test cases are listed below: |
+| | |
+| | * tc_epc_ue_dedicated_bearer_create_landslide.yaml |
+| | * tc_epc_network_dedicated_bearer_create_landslide.yaml |
+| | |
+| | Test duration: |
+| | |
+| | * is set as 60sec (specified in test session profile); |
+| | |
+| | Traffic type: |
+| | |
+| | * UDP; |
+| | |
+| | Packet sizes: |
+| | |
+| | * 512 bytes; |
+| | |
+| | Traffic transaction rate: |
+| | |
+| | * 5 trans/s.; |
+| | |
+| | Number of mobile subscribers: |
+| | |
+| | * 20000; |
+| | |
+| | Number of default bearers per subscriber: |
+| | |
+| | * 1; |
+| | |
+| | Number of dedicated bearers per default bearer: |
+| | |
+| | * 1. |
+| | |
+| | The above fields and values are the main options used for |
+| | the test case. Other configurable options can be found in |
+| | the test session profile yaml file. All these options have |
+| | default values which can be overridden in the test case file.|
+| | |
++--------------+--------------------------------------------------------------+
+|test tool | Spirent Landslide |
+| | |
+| | Spirent Landslide is a tool for functional and performance |
+| | testing of different types of mobile networks. It emulates |
+| | real-world control and data traffic of mobile subscribers |
+| | moving through a virtualized EPC network. A detailed |
+| | description of the Spirent Landslide product can be found |
+| | here: https://www.spirent.com/Products/Landslide |
+| | |
++--------------+--------------------------------------------------------------+
+|applicability | These EPC DEDICATED BEARER test cases can be configured |
+| | with different: |
+| | |
+| | * packet sizes; |
+| | * traffic transaction rate; |
+| | * number of subscriber sessions; |
+| | * number of default bearers per subscriber; |
+| | * number of dedicated bearers per default bearer; |
+| | * subscribers connection rate; |
+| | * subscribers disconnection rate; |
+| | * dedicated bearers activation timeout; |
+| | * DMF (traffic profile); |
+| | * enable/disable Fireball DMF threading model that provides |
+| | optimized performance; |
+| | |
+| | Default values exist. |
+| | |
++--------------+--------------------------------------------------------------+
+|references | ETSI-NFV-TST001 |
+| | |
+| | 3GPP TS 32.455 |
+| | |
++--------------+--------------------------------------------------------------+
+| pre-test | * All Spirent Landslide dependencies need to be installed. |
+| conditions | The steps are described in the NSB installation chapter |
+| | for the Spirent Landslide vEPC tests; |
+| | |
+| | * The pod.yaml file contains all necessary information (TAS |
+| | VM IP address, NICs, emulated SUTs and Test Nodes |
+| | parameters (names, types, IP addresses, etc.)). |
+| | |
++--------------+--------------------------------------------------------------+
+|test sequence | description and expected result |
+| | |
++--------------+--------------------------------------------------------------+
+|step 1 | Spirent Landslide components are running on the hosts |
+| | specified in the pod file. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 2 | Yardstick is connected with Spirent Landslide Test |
+| | Administration Server (TAS) by TCL and REST API. The test |
+| | will resolve the topology and instantiate all emulated EPC |
+| | network nodes. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 3 | The test scenario runs, performing the following steps: |
+| | |
+| | * Start the emulated EPC network nodes; |
+| | * Establish the subscriber connections to the EPC network |
+| | (default bearers); |
+| | * Establish the specified number of dedicated bearers per |
+| | default bearer for each subscriber; |
+| | * Create the sessions and transmit traffic through EPC |
+| | network nodes during the specified traffic duration time; |
+| | * Disconnect dedicated bearers; |
+| | * Disconnect subscribers at the end of the test. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 4 | During test run, all the metrics provided by Spirent |
+| | Landslide are stored in the yardstick dispatcher. |
+| | |
++--------------+--------------------------------------------------------------+
+|test verdict | The test case will create the test session in Spirent |
+| | Landslide with the test case parameters and store the |
+| | results in the database for benchmarking purposes. The aim |
+| | is only to collect all the metrics that are provided by the |
+| | Spirent Landslide product for each specific test scenario. |
+| | |
++--------------+--------------------------------------------------------------+
diff --git a/docs/testing/user/userguide/nsb/tc_epc_default_bearer_landslide.rst b/docs/testing/user/userguide/nsb/tc_epc_default_bearer_landslide.rst
new file mode 100644
index 000000000..9e6d77825
--- /dev/null
+++ b/docs/testing/user/userguide/nsb/tc_epc_default_bearer_landslide.rst
@@ -0,0 +1,149 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, 2018 Intel Corporation.
+
+*******************************************************
+Yardstick Test Case Description: NSB EPC DEFAULT BEARER
+*******************************************************
+
++-----------------------------------------------------------------------------+
+|NSB EPC default bearer test case |
+| |
++--------------+--------------------------------------------------------------+
+|test case id | tc_epc_default_bearer_landslide_{dmf_setup} |
+| | |
+| | * dmf_setup: single or multi dmf test session setup; |
+| | |
++--------------+--------------------------------------------------------------+
+|metric | All metrics provided by Spirent Landslide traffic generator |
+| | |
++--------------+--------------------------------------------------------------+
+|test purpose | The Spirent Landslide product provides a one-box solution |
+| | which allows full emulation of all EPC network nodes, |
+| | including mobile users and network hosts, and generation of |
+| | control and data plane traffic. |
+| | |
+| | This test allows checking the processing capability of the |
+| | EPC under different levels of load (number of subscribers, |
+| | generated traffic throughput) for the case when only one |
+| | default bearer is used for transferring traffic from UE to |
+| | Network. |
+| | |
+| | It is easy to replace an emulated node or multiple nodes in |
+| | the test topology with a real node or the corresponding vEPC |
+| | VNF as DUT and check its processing capabilities under the |
+| | specific test case load conditions. |
+| | |
++--------------+--------------------------------------------------------------+
+|configuration | The EPC default bearer test cases are listed below: |
+| | |
+| | * tc_epc_default_bearer_create_landslide.yaml |
+| | * tc_epc_default_bearer_create_landslide_multi_dmf.yaml |
+| | |
+| | Test duration: |
+| | |
+| | * is set as 60sec (specified in test session profile); |
+| | |
+| | Traffic type: |
+| | |
+| | * UDP - for single DMF test case; |
+| | * UDP and TCP - for multi DMF test case; |
+| | |
+| | Packet sizes: |
+| | |
+| | * 512 bytes for UDP packets; |
+| | * 1518 bytes for TCP packets; |
+| | |
+| | Traffic transaction rate: |
+| | |
+| | * 5 trans/s.; |
+| | |
+| | Number of mobile subscribers: |
+| | |
+| | * 20000; |
+| | |
+| | Number of default bearers per subscriber: |
+| | |
+| | * 1. |
+| | |
+| | The above fields and values are the main options used for |
+| | the test case. Other configurable options can be found in |
+| | the test session profile yaml file. All these options have |
+| | default values which can be overridden in the test case file.|
+| | |
++--------------+--------------------------------------------------------------+
+|test tool | Spirent Landslide |
+| | |
+| | Spirent Landslide is a tool for functional and performance |
+| | testing of different types of mobile networks. It emulates |
+| | real-world control and data traffic of mobile subscribers |
+| | moving through a virtualized EPC network. A detailed |
+| | description of the Spirent Landslide product can be found |
+| | here: https://www.spirent.com/Products/Landslide |
+| | |
++--------------+--------------------------------------------------------------+
+|applicability | These EPC DEFAULT BEARER test cases can be configured with |
+| | different: |
+| | |
+| | * packet sizes; |
+| | * traffic transaction rate; |
+| | * number of subscriber sessions; |
+| | * number of default bearers per subscriber; |
+| | * subscribers connection rate; |
+| | * subscribers disconnection rate; |
+| | * DMF (traffic profile); |
+| | * enable/disable Fireball DMF threading model that provides |
+| | optimized performance; |
+| | |
+| | Default values exist. |
+| | |
++--------------+--------------------------------------------------------------+
+|references | ETSI-NFV-TST001 |
+| | |
+| | 3GPP TS 32.455 |
+| | |
++--------------+--------------------------------------------------------------+
+| pre-test | * All Spirent Landslide dependencies are installed (detailed |
+| conditions | installation steps are described in Chapter 13 |
+| | nsb-installation.rst and Chapter 14 nsb-operation.rst for |
+| | the NSB Spirent Landslide vEPC tests); |
+| | |
+| | * The pod.yaml file contains all necessary information |
+| | (TAS VM IP address, NICs, emulated SUTs and Test Nodes |
+| | parameters (names, types, IP addresses, etc.)). |
+| | |
++--------------+--------------------------------------------------------------+
+|test sequence | description and expected result |
+| | |
++--------------+--------------------------------------------------------------+
+|step 1 | Spirent Landslide components are running on the hosts |
+| | specified in the pod file. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 2 | Yardstick is connected with Spirent Landslide Test |
+| | Administration Server (TAS) by TCL and REST API. The test |
+| | will resolve the topology and instantiate all emulated EPC |
+| | network nodes. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 3 | The test scenario runs, performing the following steps: |
+| | |
+| | * Start emulated EPC network nodes; |
+| | * Establish subscriber connections to the EPC network (only |
+| | default bearers are established); |
+| | * Create the sessions and transmit traffic through EPC |
+| | network nodes during the specified traffic duration time; |
+| | * Disconnect subscribers at the end of the test. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 4 | During test run, all the metrics provided by Spirent |
+| | Landslide are stored in the yardstick dispatcher. |
+| | |
++--------------+--------------------------------------------------------------+
+|test verdict | The test case will create the test session in Spirent |
+| | Landslide with the test case parameters and store the |
+| | results in the database for benchmarking purposes. The aim |
+| | is only to collect all the metrics that are provided by the |
+| | Spirent Landslide product for each specific test scenario. |
+| | |
++--------------+--------------------------------------------------------------+
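+
+The sketch below shows, under stated assumptions, how the main values from
+the configuration row (test duration, number of subscribers, transaction
+rate, packet size) might be overridden from a test case file. The option
+names are hypothetical; the authoritative keys are defined in the test
+session profile and in tc_epc_default_bearer_create_landslide.yaml.
+
+.. code-block:: yaml
+
+   # Hypothetical EPC default bearer override sketch (key names are assumed)
+   schema: "yardstick:task:0.1"
+   scenarios:
+   - type: NSPerf
+     traffic_profile: ../../traffic_profiles/landslide_dmf_udp.yaml  # assumed name
+     options:
+       dmf:
+         transactionRate: 5        # traffic transaction rate, trans/s
+         packetSize: 512           # UDP packet size, bytes
+       session_profile:
+         duration: 60              # test duration, seconds
+         subscribers: 20000        # number of mobile subscribers
+         default_bearers: 1        # default bearers per subscriber
+     runner:
+       type: Iteration
+       iterations: 1
+   context:
+     type: Node
+     name: yardstick
+     file: /etc/yardstick/nodes/pod.yaml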
diff --git a/docs/testing/user/userguide/nsb/tc_epc_network_service_request_landslide.rst b/docs/testing/user/userguide/nsb/tc_epc_network_service_request_landslide.rst
new file mode 100644
index 000000000..85e6ce11a
--- /dev/null
+++ b/docs/testing/user/userguide/nsb/tc_epc_network_service_request_landslide.rst
@@ -0,0 +1,159 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, 2018 Intel Corporation.
+
+****************************************************************
+Yardstick Test Case Description: NSB EPC NETWORK SERVICE REQUEST
+****************************************************************
+
++-----------------------------------------------------------------------------+
+|NSB EPC network service request test case |
+| |
++--------------+--------------------------------------------------------------+
+|test case id | tc_epc_network_service_request_landslide |
+| | |
+| | * initiator: service request initiator side could be UE (ue) |
+| | or Network (network). |
+| | |
++--------------+--------------------------------------------------------------+
+|metric | All metrics provided by Spirent Landslide traffic generator |
+| | |
++--------------+--------------------------------------------------------------+
+|test purpose | The Spirent Landslide product provides a one-box solution |
+| | which allows full emulation of all EPC network nodes, |
+| | including mobile users and network hosts, and generation of |
+| | control and data plane traffic. |
+| | |
+| | This test covers the case of a network-initiated service |
+| | request and allows checking the processing capabilities of |
+| | the EPC when handling a high volume of continuous Downlink |
+| | Data Notification messages from the network to UEs which are |
+| | in Idle state. |
+| | |
+| | It is easy to replace an emulated node or multiple nodes in |
+| | the test topology with a real node or the corresponding vEPC |
+| | VNF as DUT and check its processing capabilities under the |
+| | specific test case load conditions. |
+| | |
++--------------+--------------------------------------------------------------+
+|configuration | The EPC network service request test cases are listed below: |
+| | |
+| | * tc_epc_network_service_request_landslide.yaml |
+| | |
+| | Test duration: |
+| | |
+| | * is set as 60sec (specified in test session profile); |
+| | |
+| | Traffic type: |
+| | |
+| | * UDP; |
+| | |
+| | Packet sizes: |
+| | |
+| | * 512 bytes; |
+| | |
+| | Traffic transaction rate: |
+| | |
+| | * 0.1 trans/s.; |
+| | |
+| | Number of mobile subscribers: |
+| | |
+| | * 20000; |
+| | |
+| | Number of default bearers per subscriber: |
+| | |
+| | * 1; |
+| | |
+| | Idle entry time (timeout after which UE goes to Idle state): |
+| | |
+| | * 5s; |
+| | |
+| | Traffic start delay: |
+| | |
+| | * 1000ms. |
+| | |
+| | The above fields and values are the main options used for |
+| | the test case. Other configurable options can be found in |
+| | the test session profile yaml file. All these options have |
+| | default values which can be overridden in the test case file.|
+| | |
++--------------+--------------------------------------------------------------+
+|test tool | Spirent Landslide |
+| | |
+| | Spirent Landslide is a tool for functional and performance |
+| | testing of different types of mobile networks. It emulates |
+| | real-world control and data traffic of mobile subscribers |
+| | moving through a virtualized EPC network. A detailed |
+| | description of the Spirent Landslide product can be found |
+| | here: https://www.spirent.com/Products/Landslide |
+| | |
++--------------+--------------------------------------------------------------+
+|applicability | This EPC NETWORK SERVICE REQUEST test case can be configured |
+| | with different: |
+| | |
+| | * packet sizes; |
+| | * traffic transaction rate; |
+| | * number of subscriber sessions; |
+| | * number of default bearers per subscriber; |
+| | * subscribers connection rate; |
+| | * subscribers disconnection rate; |
+| | * timeout after which UE goes to Idle state; |
+| | * Traffic start delay; |
+| | |
+| | Default values exist. |
+| | |
++--------------+--------------------------------------------------------------+
+|references | ETSI-NFV-TST001 |
+| | |
+| | 3GPP TS 32.455 |
+| | |
++--------------+--------------------------------------------------------------+
+| pre-test | * All Spirent Landslide dependencies are installed (detailed |
+| conditions | installation steps are described in Chapter 13 |
+| | nsb-installation.rst and Chapter 14 nsb-operation.rst for |
+| | the NSB Spirent Landslide vEPC tests); |
+| | |
+| | * The pod.yaml file contains all necessary information |
+| | (TAS VM IP address, NICs, emulated SUTs and Test Nodes |
+| | parameters (names, types, IP addresses, etc.)). |
+| | |
++--------------+--------------------------------------------------------------+
+|test sequence | description and expected result |
+| | |
++--------------+--------------------------------------------------------------+
+|step 1 | Spirent Landslide components are running on the hosts |
+| | specified in the pod file. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 2 | Yardstick is connected with Spirent Landslide Test |
+| | Administration Server (TAS) by TCL and REST API. The test |
+| | will resolve the topology and instantiate all emulated EPC |
+| | network nodes. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 3 | The test scenario runs, performing the following steps: |
+| | |
+| | * Start emulated EPC network nodes; |
+| | * Establish subscriber connections to the EPC network |
+| | (default bearers); |
+| | * Switch UEs to Idle state after the timeout specified in |
+| | the test case; |
+| | * Send Downlink Data Notification from the network to the |
+| | UE, which returns the UE to active state. This process is |
+| | continuous: during the whole test run UEs keep going to |
+| | Idle state and are switched back to active state after a |
+| | Downlink Data Notification is received; |
+| | * Disconnect subscribers at the end of the test. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 4 | During test run, all the metrics provided by Spirent |
+| | Landslide are stored in the yardstick dispatcher. |
+| | |
++--------------+--------------------------------------------------------------+
+|test verdict | The test case will create the test session in Spirent |
+| | Landslide with the test case parameters and store the |
+| | results in the database for benchmarking purposes. The aim |
+| | is only to collect all the metrics that are provided by the |
+| | Spirent Landslide product for each specific test scenario. |
+| | |
++--------------+--------------------------------------------------------------+
diff --git a/docs/testing/user/userguide/nsb/tc_epc_saegw_tput_relocation_landslide.rst b/docs/testing/user/userguide/nsb/tc_epc_saegw_tput_relocation_landslide.rst
new file mode 100644
index 000000000..102517562
--- /dev/null
+++ b/docs/testing/user/userguide/nsb/tc_epc_saegw_tput_relocation_landslide.rst
@@ -0,0 +1,167 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, 2018 Intel Corporation.
+
+*********************************************************
+Yardstick Test Case Description: NSB EPC SAEGW RELOCATION
+*********************************************************
+
++-----------------------------------------------------------------------------+
+|NSB EPC SAEGW throughput with relocation test case |
+| |
++--------------+--------------------------------------------------------------+
+|test case id | tc_epc_saegw_tput_relocation_landslide |
+| | |
+| | |
++--------------+--------------------------------------------------------------+
+|metric | All metrics provided by Spirent Landslide traffic generator |
+| | |
++--------------+--------------------------------------------------------------+
+|test purpose | The Spirent Landslide product provides a one-box solution |
+| | which allows full emulation of all EPC network nodes, |
+| | including mobile users and network hosts, and generation of |
+| | control and data plane traffic. |
+| | |
+| | This test allows checking the processing capability of the |
+| | EPC when handling a large number of subscriber X2 handovers |
+| | between different eNBs while UEs are sending traffic. |
+| | |
+| | It is easy to replace an emulated node or multiple nodes in |
+| | the test topology with a real node or the corresponding vEPC |
+| | VNF as DUT and check its processing capabilities under the |
+| | specific test case load conditions. |
+| | |
++--------------+--------------------------------------------------------------+
+|configuration | The EPC SAEGW throughput with relocation tests are listed |
+| | below: |
+| | |
+| | * tc_epc_saegw_tput_relocation_landslide.yaml |
+| | |
+| | Test duration: |
+| | |
+| | * is set as 60sec (specified in test session profile); |
+| | |
+| | Traffic type: |
+| | |
+| | * UDP; |
+| | |
+| | Packet sizes: |
+| | |
+| | * 512 bytes; |
+| | |
+| | Traffic transaction rate: |
+| | |
+| | * 5 trans/s.; |
+| | |
+| | Number of mobile subscribers: |
+| | |
+| | * 20000; |
+| | |
+| | Number of default bearers per subscriber: |
+| | |
+| | * 1; |
+| | |
+| | Handover type: |
+| | |
+| | * X2 handover; |
+| | |
+| | Mobility time (timeout after sessions were established after |
+| | which handover will start): |
+| | |
+| | * 10000ms; |
+| | |
+| | Handover start type: |
+| | |
+| | * When all sessions started; |
+| | |
+| | Mobility mode: |
+| | |
+| | * Single handoff; |
+| | |
+| | Mobility Rate: |
+| | |
+| | * 120 subscribers/s. |
+| | |
+| | The above fields and values are the main options used for |
+| | the test case. Other configurable options can be found in |
+| | the test session profile yaml file. All these options have |
+| | default values which can be overridden in the test case file.|
+| | |
++--------------+--------------------------------------------------------------+
+|test tool | Spirent Landslide |
+| | |
+| | Spirent Landslide is a tool for functional and performance |
+| | testing of different types of mobile networks. It emulates |
+| | real-world control and data traffic of mobile subscribers |
+| | moving through a virtualized EPC network. A detailed |
+| | description of the Spirent Landslide product can be found |
+| | here: https://www.spirent.com/Products/Landslide |
+| | |
++--------------+--------------------------------------------------------------+
+|applicability | This EPC SAEGW THROUGHPUT WITH RELOCATION test case can be |
+| | configured with different: |
+| | |
+| | * packet sizes; |
+| | * traffic transaction rate; |
+| | * number of subscriber sessions; |
+| | * handover type; |
+| | * mobility rate; |
+| | * mobility time; |
+| | * mobility mode; |
+| | * handover start condition; |
+| | * subscribers disconnection rate; |
+| | |
+| | Default values exist. |
+| | |
++--------------+--------------------------------------------------------------+
+|references | ETSI-NFV-TST001 |
+| | |
+| | 3GPP TS 32.455 |
+| | |
++--------------+--------------------------------------------------------------+
+| pre-test | * All Spirent Landslide dependencies are installed (detailed |
+| conditions | installation steps are described in Chapter 13 |
+| | nsb-installation.rst and Chapter 14 nsb-operation.rst for |
+| | the NSB Spirent Landslide vEPC tests); |
+| | |
+| | * The pod.yaml file contains all necessary information |
+| | (TAS VM IP address, NICs, emulated SUTs and Test Nodes |
+| | parameters (names, types, IP addresses, etc.)). |
+| | |
++--------------+--------------------------------------------------------------+
+|test sequence | description and expected result |
+| | |
++--------------+--------------------------------------------------------------+
+|step 1 | Spirent Landslide components are running on the hosts |
+| | specified in the pod file. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 2 | Yardstick is connected with Spirent Landslide Test |
+| | Administration Server (TAS) by TCL and REST API. The test |
+| | will resolve the topology and instantiate all emulated EPC |
+| | network nodes. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 3 | The test scenario runs, performing the following steps: |
+| | |
+| | * Start emulated EPC network nodes; |
+| | * Establish subscriber connections to the EPC network |
+| | (default bearers); |
+| | * Start running traffic; |
+| | * After the mobility timeout specified in the test case, |
+| | start the handover process at the specified mobility rate; |
+| | * Disconnect subscribers at the end of the test. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 4 | During test run, all the metrics provided by Spirent |
+| | Landslide are stored in the yardstick dispatcher. |
+| | |
++--------------+--------------------------------------------------------------+
+|test verdict | The test case will create the test session in Spirent |
+| | Landslide with the test case parameters and store the |
+| | results in the database for benchmarking purposes. The aim |
+| | is only to collect all the metrics that are provided by the |
+| | Spirent Landslide product for each specific test scenario. |
+| | |
++--------------+--------------------------------------------------------------+
diff --git a/docs/testing/user/userguide/nsb/tc_epc_ue_service_request_landslide.rst b/docs/testing/user/userguide/nsb/tc_epc_ue_service_request_landslide.rst
new file mode 100644
index 000000000..0711a0ce3
--- /dev/null
+++ b/docs/testing/user/userguide/nsb/tc_epc_ue_service_request_landslide.rst
@@ -0,0 +1,174 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, 2018 Intel Corporation.
+
+***********************************************************
+Yardstick Test Case Description: NSB EPC UE SERVICE REQUEST
+***********************************************************
+
++-----------------------------------------------------------------------------+
+|NSB EPC UE service request test case |
+| |
++--------------+--------------------------------------------------------------+
+|test case id | tc_epc_{initiator}_service_request_landslide |
+| | |
+| | * initiator: service request initiator side could be UE (ue) |
+| | or Network (nw). |
+| | |
++--------------+--------------------------------------------------------------+
+|metric | All metrics provided by Spirent Landslide traffic generator |
+| | |
++--------------+--------------------------------------------------------------+
+|test purpose | The Spirent Landslide product provides a one-box solution |
+| | which allows full emulation of all EPC network nodes, |
+| | including mobile users and network hosts, and generation of |
+| | control and data plane traffic. |
+| | |
+| | This test allows checking the processing capabilities of the |
+| | EPC under a high user connection rate and traffic load for |
+| | the case when UEs initiate a service request (the UE |
+| | initiates a bearer modification request to provide a |
+| | dedicated bearer for a new type of traffic). |
+| | |
+| | It is easy to replace an emulated node or multiple nodes in |
+| | the test topology with a real node or the corresponding vEPC |
+| | VNF as DUT and check its processing capabilities under the |
+| | specific test case load conditions. |
+| | |
++--------------+--------------------------------------------------------------+
+|configuration | The EPC ue service request test cases are listed below: |
+| | |
+| | * tc_epc_ue_service_request_landslide.yaml |
+| | |
+| | Test duration: |
+| | |
+| | * is set as 60sec (specified in test session profile); |
+| | |
+| | Traffic type: |
+| | |
+| | * UDP; |
+| | |
+| | Packet sizes: |
+| | |
+| | * 512 bytes; |
+| | |
+| | Traffic transaction rate: |
+| | |
+| | * 5 trans/s.; |
+| | |
+| | Number of mobile subscribers: |
+| | |
+| | * 20000; |
+| | |
+| | Number of default bearers per subscriber: |
+| | |
+| | * 1; |
+| | |
+| | Number of dedicated bearers per default bearer: |
+| | |
+| | * 1. |
+| | |
+| | TFT settings for dedicated bearers: |
+| | |
+| | * TFT configured to filter TCP traffic (Protocol ID 6) |
+| | |
+| | Modified TFT settings: |
+| | |
+| | * Create new TFT to filter UDP traffic (Protocol ID 17) from |
+| | 2002 local port and 2003 remote port; |
+| | |
+| | Modified QoS settings: |
+| | |
+| | * Set QCI 5 for dedicated bearers; |
+| | |
+| | The above fields and values are the main options used for |
+| | the test case. Other configurable options can be found in |
+| | the test session profile yaml file. All these options have |
+| | default values which can be overridden in the test case file.|
+| | |
++--------------+--------------------------------------------------------------+
+|test tool | Spirent Landslide |
+| | |
+| | Spirent Landslide is a tool for functional and performance |
+| | testing of different types of mobile networks. It emulates |
+| | real-world control and data traffic of mobile subscribers |
+| | moving through a virtualized EPC network. A detailed |
+| | description of the Spirent Landslide product can be found |
+| | here: https://www.spirent.com/Products/Landslide |
+| | |
++--------------+--------------------------------------------------------------+
+|applicability | This EPC UE SERVICE REQUEST test case can be configured with |
+| | different: |
+| | |
+| | * packet sizes; |
+| | * traffic transaction rate; |
+| | * number of subscriber sessions; |
+| | * number of default bearers per subscriber; |
+| | * number of dedicated bearers per default bearer; |
+| | * subscribers connection rate; |
+| | * subscribers disconnection rate; |
+| | * dedicated bearers activation timeout; |
+| | * DMF (traffic profile); |
+| | * enable/disable Fireball DMF threading model that provides |
+| | optimized performance; |
+| | * Starting TFT settings for dedicated bearers; |
+| | * Modified TFT settings for dedicated bearers; |
+| | * Modified QoS settings for dedicated bearers; |
+| | |
+| | Default values exist. |
+| | |
++--------------+--------------------------------------------------------------+
+|references | ETSI-NFV-TST001 |
+| | |
+| | 3GPP TS 32.455 |
+| | |
++--------------+--------------------------------------------------------------+
+| pre-test | * All Spirent Landslide dependencies are installed (detailed |
+| conditions | installation steps are described in Chapter 13 |
+| | nsb-installation.rst and Chapter 14 nsb-operation.rst for |
+| | the NSB Spirent Landslide vEPC tests); |
+| | |
+| | * The pod.yaml file contains all necessary information |
+| | (TAS VM IP address, NICs, emulated SUTs and Test Nodes |
+| | parameters (names, types, IP addresses, etc.)). |
+| | |
++--------------+--------------------------------------------------------------+
+|test sequence | description and expected result |
+| | |
++--------------+--------------------------------------------------------------+
+|step 1 | Spirent Landslide components are running on the hosts |
+| | specified in the pod file. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 2 | Yardstick is connected with Spirent Landslide Test |
+| | Administration Server (TAS) by TCL and REST API. The test |
+| | will resolve the topology and instantiate all emulated EPC |
+| | network nodes. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 3 | The test scenario runs, performing the following steps: |
+| | |
+| | * Start emulated EPC network nodes; |
+| | * Establish subscriber connections to the EPC network |
+| | (default bearers); |
+| | * Establish the number of dedicated bearers per default |
+| | bearer for each subscriber, as specified in the test case; |
+| | * Start running user traffic through the EPC network nodes; |
+| | * While traffic is running, send a bearer modification |
+| | request after the timeout specified in the test case; |
+| | * Disconnect dedicated bearers; |
+| | * Disconnect subscribers at the end of the test. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 4 | During test run, all the metrics provided by Spirent |
+| | Landslide are stored in the yardstick dispatcher. |
+| | |
++--------------+--------------------------------------------------------------+
+|test verdict | The test case will create the test session in Spirent |
+| | Landslide with the test case parameters and store the |
+| | results in the database for benchmarking purposes. The aim |
+| | is only to collect all the metrics that are provided by the |
+| | Spirent Landslide product for each specific test scenario. |
+| | |
++--------------+--------------------------------------------------------------+
diff --git a/docs/testing/user/userguide/nsb/tc_pktgen_k8s_vcmts.rst b/docs/testing/user/userguide/nsb/tc_pktgen_k8s_vcmts.rst
new file mode 100755
index 000000000..56f5c27ed
--- /dev/null
+++ b/docs/testing/user/userguide/nsb/tc_pktgen_k8s_vcmts.rst
@@ -0,0 +1,102 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, 2019 Viosoft Corporation.
+
+***********************************************
+Yardstick Test Case Description: NSB vCMTS
+***********************************************
+
++------------------------------------------------------------------------------+
+|NSB Pktgen test for vCMTS characterization |
+| |
++--------------+---------------------------------------------------------------+
+|test case id | tc_vcmts_k8s_pktgen |
+| | |
++--------------+---------------------------------------------------------------+
+|metric | * Upstream Processing (Per Service Group); |
+| | * Downstream Processing (Per Service Group); |
+| | * Upstream Throughput; |
+| | * Downstream Throughput; |
+| | * Platform Metrics; |
+| | * Power Consumption; |
+| | * Upstream Throughput Time Series; |
+| | * Downstream Throughput Time Series; |
+| | * System Summary; |
+| | |
++--------------+---------------------------------------------------------------+
+|test purpose | * The vCMTS test handles the setup of service groups and |
+| | packet generation containers, and metrics collection. |
+| | |
+| | * The vCMTS test case is implemented to run in a Kubernetes |
+| | environment with vCMTS pre-installed. |
++--------------+---------------------------------------------------------------+
+|configuration | The vCMTS test case configurable values are listed below |
+| | |
+| | * num_sg: Number of service groups (Upstream/Downstream |
+| | container pairs). |
+| | * num_tg: Number of Pktgen containers. |
+| | * vcmtsd_image: vCMTS container image (feat/perf). |
+| | * qat_on: QAT status (true/false). |
+| | |
+| | num_sg and num_tg values should be configured in the test |
+| | case file and in the topology file. |
+| | |
++--------------+---------------------------------------------------------------+
+|test tool | Intel vCMTS Reference Dataplane |
+| | Reference implementation of a DPDK-based vCMTS (DOCSIS MAC) |
+| | dataplane in a Kubernetes-orchestrated Linux Container |
+| | environment. |
+| | |
++--------------+---------------------------------------------------------------+
+|applicability | This test case can be configured with different: |
+| | |
+| | * Number of service groups |
+| | * Number of Pktgen instances |
+| | * QAT offloading |
+| | * Feat/Perf Images for performance or features (more data |
+| | collection) |
+| | |
+| | Default values exist. |
+| | |
++--------------+---------------------------------------------------------------+
+|pre-test | Intel vCMTS Reference Dataplane should be installed and |
+|conditions | runnable on a 2-node Kubernetes environment, with |
+| | modifications to the containers to allow yardstick ssh |
+| | access, and with the ConfigMaps from the original vCMTS |
+| | package deployed. |
+| | |
++--------------+---------------------------------------------------------------+
+|test sequence | description and expected result |
+| | |
++--------------+---------------------------------------------------------------+
+|step 1 | Yardstick is connected to the Kubernetes Master node using |
+| | the configuration file in /etc/kubernetes/admin.yaml |
+| | |
++--------------+---------------------------------------------------------------+
+|step 2 | The TG containers are created and started on the traffic |
+| | generator server (Master node), while the VNF containers are |
+| | created and started on the data plane server. |
+| | |
++--------------+---------------------------------------------------------------+
+|step 3 | Yardstick is connected with the TG and VNF by using ssh |
+| | to start vCMTS-d and Pktgen. |
+| | |
++--------------+---------------------------------------------------------------+
+|step 4 | Yardstick connects to the running Pktgen instances to start |
+| | generating traffic using the configurations from: |
+| | /etc/yardstick/pktgen_values.yaml |
+| | |
+| | and connects to the vCMTS-d containers to start the upstream |
+| | and downstream processing using the configurations from: |
+| | /etc/yardstick/vcmtsd_values.yaml |
+| | |
++--------------+---------------------------------------------------------------+
+|step 5 | Yardstick copies vCMTS metrics regularly from the remote |
+| | InfluxDB (deployed by the vCMTS Package) to the local |
+| | Yardstick InfluxDB as configured in the options section in |
+| | the test case file. |
+| | |
++--------------+---------------------------------------------------------------+
+|test verdict | None. The test case will collect the KPIs and plot them |
+| | on Grafana. |
++--------------+---------------------------------------------------------------+
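+
+A minimal sketch of the configurable values listed above, as they might
+appear in the test case file, is shown below. Only the four option names are
+taken from this description; the surrounding task structure is an assumption
+and the shipped tc_vcmts_k8s_pktgen yaml file remains authoritative.
+
+.. code-block:: yaml
+
+   # Hypothetical vCMTS test case options (structure assumed)
+   scenarios:
+   - type: NSPerf
+     options:
+       num_sg: 1              # number of service groups (US/DS container pairs)
+       num_tg: 1              # number of Pktgen containers
+       vcmtsd_image: perf     # vCMTS container image (feat/perf)
+       qat_on: false          # QAT offloading status (true/false)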
diff --git a/docs/testing/user/userguide/nsb/tc_prox_context_vpe_port.rst b/docs/testing/user/userguide/nsb/tc_prox_context_vpe_port.rst
index 6827b0525..3beb5303f 100644
--- a/docs/testing/user/userguide/nsb/tc_prox_context_vpe_port.rst
+++ b/docs/testing/user/userguide/nsb/tc_prox_context_vpe_port.rst
@@ -3,9 +3,9 @@
.. http://creativecommons.org/licenses/by/4.0
.. (c) OPNFV, 2017 Intel Corporation.
-**********************************************
-Yardstick Test Case Description: NSB PROXi VPE
-**********************************************
+*********************************************
+Yardstick Test Case Description: NSB PROX VPE
+*********************************************
+-----------------------------------------------------------------------------+
|NSB PROX test for NFVI characterization |
diff --git a/docs/testing/user/userguide/nsb/tc_vfw_rfc2544.rst b/docs/testing/user/userguide/nsb/tc_vfw_rfc2544.rst
new file mode 100644
index 000000000..139990bc3
--- /dev/null
+++ b/docs/testing/user/userguide/nsb/tc_vfw_rfc2544.rst
@@ -0,0 +1,189 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, 2018 Intel Corporation.
+
+************************************************
+Yardstick Test Case Description: NSB vFW RFC2544
+************************************************
+
++------------------------------------------------------------------------------+
+| NSB vFW test for VNF characterization |
+| |
++---------------+--------------------------------------------------------------+
+| test case id | tc_{context}_rfc2544_ipv4_1rule_1flow_{pkt_size}_{tg_type} |
+| | |
+| | * context = baremetal, heat, heat_external, ovs, sriov, |
+| | heat_sriov_external contexts; |
+| | * tg_type = ixia (context != heat,heat_sriov_external), |
+| | trex; |
+| | * pkt_size = 64B - all contexts; |
+| | 128B, 256B, 512B, 1024B, 1280B, 1518B - |
+| | (context = heat, tg_type = ixia) |
+| | |
++---------------+--------------------------------------------------------------+
+| metric | * Network Throughput; |
+| | * TG Packets Out; |
+| | * TG Packets In; |
+| | * TG Latency; |
+| | * VNF Packets Out; |
+| | * VNF Packets In; |
+| | * VNF Packets Fwd; |
+| | * Dropped packets; |
+| | |
++---------------+--------------------------------------------------------------+
+| test purpose | The vFW RFC2544 tests measure performance characteristics of |
+| | the SUT (multiple ports) and send bidirectional UDP traffic |
+| | from all TG ports to the SampleVNF vFW application. The |
+| | application forwards received traffic based on rules |
+| | provided by the user in the TC configuration and default |
+| | rules created by vFW to send traffic from uplink ports to |
+| | downlink ports and vice versa. |
+| | |
++---------------+--------------------------------------------------------------+
+| configuration | The 2 ports RFC2544 test cases are listed below: |
+| | |
+| | * tc_baremetal_rfc2544_ipv4_1rule_1flow_64B_ixia.yaml |
+| | * tc_baremetal_rfc2544_ipv4_1rule_1flow_64B_trex.yaml |
+| | * tc_heat_external_rfc2544_ipv4_1rule_1flow_1024B_ixia.yaml |
+| | * tc_heat_external_rfc2544_ipv4_1rule_1flow_1280B_ixia.yaml |
+| | * tc_heat_external_rfc2544_ipv4_1rule_1flow_128B_ixia.yaml |
+| | * tc_heat_external_rfc2544_ipv4_1rule_1flow_1518B_ixia.yaml |
+| | * tc_heat_external_rfc2544_ipv4_1rule_1flow_256B_ixia.yaml |
+| | * tc_heat_external_rfc2544_ipv4_1rule_1flow_512B_ixia.yaml |
+| | * tc_heat_external_rfc2544_ipv4_1rule_1flow_64B_ixia.yaml |
+| | * tc_heat_external_rfc2544_ipv4_1rule_1flow_64B_trex.yaml |
+| | * tc_heat_sriov_external_rfc2544_ipv4_1rule_1flow_64B_trex. |
+| | yaml |
+| | * tc_heat_rfc2544_ipv4_1rule_1flow_64B_trex.yaml |
+| | * tc_ovs_rfc2544_ipv4_1rule_1flow_64B_ixia.yaml |
+| | * tc_ovs_rfc2544_ipv4_1rule_1flow_64B_trex.yaml |
+| | * tc_sriov_rfc2544_ipv4_1rule_1flow_64B_ixia.yaml |
+| | * tc_sriov_rfc2544_ipv4_1rule_1flow_64B_trex.yaml |
+| | |
+| | The 4 ports RFC2544 test cases are listed below: |
+| | |
+| | * tc_baremetal_rfc2544_ipv4_1rule_1flow_64B_ixia_4port.yaml |
+| | * tc_tc_baremetal_rfc2544_ipv4_1rule_1flow_64B_trex_4port. |
+| | yaml |
+| | * tc_tc_heat_external_rfc2544_ipv4_1rule_1flow_64B_trex_4 |
+| | port.yaml |
+| | * tc_tc_heat_rfc2544_ipv4_1rule_1flow_64B_trex_4port.yaml |
+| | |
+| | The scale-up RFC2544 test cases are listed below: |
+| | |
+| | * tc_tc_heat_rfc2544_ipv4_1rule_1flow_64B_trex_scale-up.yaml |
+| | |
+| | The scale-out RFC2544 test cases are listed below: |
+| | |
+| | * tc_heat_rfc2544_ipv4_1rule_1flow_64B_trex_scale_out.yaml |
+| | |
+| | Test duration is set as 30 sec for each test and the |
+| | default number of rules is applied. These can be configured. |
+| | |
++---------------+--------------------------------------------------------------+
+| test tool | The vFW is a DPDK application that performs basic filtering |
+| | for malformed packets and dynamic packet filtering of |
+| | incoming packets using the connection tracker library. |
+| | |
++---------------+--------------------------------------------------------------+
+| applicability | The vFW RFC2544 test cases can be configured with different: |
+| | |
+| | * packet sizes; |
+| | * test duration; |
+| | * tolerated loss; |
+| | * traffic flows; |
+| | * rules; |
+| | |
+| | Default values exist. |
+| | |
++---------------+--------------------------------------------------------------+
+| pre-test | For OpenStack test cases the image (yardstick-samplevnf) |
+| conditions | needs to be installed into Glance with vFW and DPDK |
+| | included in it (NSB install). |
+| | |
+| | For Baremetal test cases vFW and DPDK must be installed on |
+| | the hosts where the test is executed. The pod.yaml file must |
+| | have the necessary system and NIC information. |
+| | |
+| | For standalone (SA) SRIOV/OvS test cases the |
+| | yardstick-samplevnf image needs to be installed on hosts and |
+| | pod.yaml file must be provided with necessary system, NIC |
+| | information. |
+| | |
++---------------+--------------------------------------------------------------+
+| test sequence | Description and expected result |
+| | |
++---------------+--------------------------------------------------------------+
+| step 1 | For Baremetal test: The TG (except IXIA) and VNF are started |
+| | on the hosts based on the pod file. |
+| | |
+| | For Heat test: Two host VMs are booted, as Traffic generator |
+| | and VNF(vFW) based on the test flavor. In case of scale-out |
+| | scenario the multiple VNF VMs will be started. |
+| | |
+| | For Heat external test: vFW VM is booted and TG (except IXIA)|
+| | generator is started on the external host based on the pod |
+| | file. In case of scale-out scenario the multiple VNF VMs |
+| | will be deployed. |
+| | |
+| | For Heat SRIOV external test: vFW VM is booted with network |
+| | interfaces of `direct` type which are mapped to VFs that are |
+| | available to OpenStack. TG (except IXIA) is started on the |
+| | external host based on the pod file. In case of scale-out |
+| | scenario the multiple VNF VMs will be deployed. |
+| | |
+| | For SRIOV test: VF ports are created on host's PFs specified |
+| | in the TC file and VM is booted using those ports and image |
+| | provided in the configuration. TG (except IXIA) is started |
+| | on other host connected to VNF machine based on the pod |
+| | file. The vFW is started in the booted VM. In case of |
+| | scale-out scenario the multiple VNF VMs will be created. |
+| | |
+| | For OvS-DPDK test: OvS DPDK switch is started and bridges |
+| | are created with ports specified in the TC file. DPDK vHost |
+| | ports are added to the corresponding bridge and VM is booted |
+| | using those ports and image provided in the configuration. |
+| | TG (except IXIA) is started on other host connected to VNF |
+| | machine based on the pod file. The vFW is started in the |
+| | booted VM. In case of scale-out scenario the multiple VNF |
+| | VMs will be deployed. |
+| | |
++---------------+--------------------------------------------------------------+
+| step 2 | Yardstick is connected with the TG and VNF by using ssh (in |
+| | case of IXIA TG is connected via TCL interface). The test |
+| | will resolve the topology and instantiate all VNFs |
+| | and TG and collect the KPI's/metrics. |
+| | |
++---------------+--------------------------------------------------------------+
+| step 3 | The TG will send packets to the VNFs. If the number of |
+| | dropped packets is more than the tolerated loss the line |
+| | rate or throughput is halved. This is done until the dropped |
+| | packets are within an acceptable tolerated loss. |
+| | |
+| | The KPI is the number of packets per second for different |
+| | packet size with an accepted minimal packet loss for the |
+| | default configuration. |
+| | |
++---------------+--------------------------------------------------------------+
+| step 4 | In Baremetal test: The test quits the application and |
+| | unbinds the DPDK ports. |
+| | |
+| | In Heat test: All VNF VMs and TG are deleted on test |
+| | completion. |
+| | |
+| | In SRIOV test: The deployed VM with vFW is destroyed on the |
+| | host and TG (exclude IXIA) is stopped. |
+| | |
+| | In Heat SRIOV test: The deployed VM with vFW is destroyed, |
+| | VFs are released and TG (exclude IXIA) is stopped. |
+| | |
+| | In OvS test: The deployed VM with vFW is destroyed on the |
+| | host and OvS DPDK switch is stopped and ports are unbound. |
+| | The TG (exclude IXIA) is stopped. |
+| | |
++---------------+--------------------------------------------------------------+
+| test verdict | The test case will achieve a Throughput with an accepted |
+| | minimal tolerated packet loss. |
++---------------+--------------------------------------------------------------+
+
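+A sketch of how the configurable items listed under *applicability* (packet
+sizes, traffic flows, rules) might be expressed in a test case file is shown
+below. Key names and file paths are illustrative assumptions; the yaml files
+listed in the configuration row remain the authoritative reference.
+
+.. code-block:: yaml
+
+   # Hypothetical vFW RFC2544 test case fragment (names and paths are assumed)
+   scenarios:
+   - type: NSPerf
+     traffic_profile: ../../traffic_profiles/ipv4_throughput.yaml
+     topology: vfw-tg-topology.yaml
+     nodes:
+       tg__0: trafficgen_0.yardstick
+       vnf__0: vnf_0.yardstick
+     options:
+       framesize:
+         uplink: {64B: 100}      # packet size distribution, uplink direction
+         downlink: {64B: 100}    # packet size distribution, downlink direction
+       flow:
+         count: 1                # number of traffic flows
+       vnf__0:
+         rules: acl_1rule.yaml   # user-provided vFW rules
+     runner:
+       type: Iteration
+       iterations: 10
+       interval: 35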
diff --git a/docs/testing/user/userguide/nsb/tc_vfw_rfc2544_correlated.rst b/docs/testing/user/userguide/nsb/tc_vfw_rfc2544_correlated.rst
new file mode 100644
index 000000000..de490900d
--- /dev/null
+++ b/docs/testing/user/userguide/nsb/tc_vfw_rfc2544_correlated.rst
@@ -0,0 +1,130 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, 2018 Intel Corporation.
+
+*************************************************************
+Yardstick Test Case Description: NSB vFW RFC2544 (correlated)
+*************************************************************
+
++------------------------------------------------------------------------------+
+| NSB vFW test for VNF characterization using correlated traffic |
+| |
++---------------+--------------------------------------------------------------+
+| test case id | tc_{context}_rfc2544_ipv4_1rule_1flow_64B_trex_corelated |
+| | |
+| | * context = baremetal, heat |
+| | |
++---------------+--------------------------------------------------------------+
+| metric | * Network Throughput; |
+| | * TG Packets Out; |
+| | * TG Packets In; |
+| | * TG Latency; |
+| | * VNF Packets Out; |
+| | * VNF Packets In; |
+| | * VNF Packets Fwd; |
+| | * Dropped packets; |
+| | |
+| | NOTE: For correlated TCs the TG metrics are available on |
+| | uplink ports. |
+| | |
++---------------+--------------------------------------------------------------+
+| test purpose | The vFW RFC2544 correlated tests measure performance |
+| | characteristics of the SUT (multiple ports) and send UDP |
+| | traffic from uplink TG ports to the SampleVNF vFW |
+| | application. |
+| | The application forwards received traffic from uplink ports |
+| | to downlink ports based on rules provided by the user in the |
+| | TC configuration and default rules created by vFW. The VNF |
+| | downlink traffic is received by another UDPReplay VNF and it |
+| | is mirrored back to the VNF on the same port. Finally, the |
+| | traffic is received back to the TG uplink port. |
+| | |
++---------------+--------------------------------------------------------------+
+| configuration | The 2 ports RFC2544 correlated test cases are listed below: |
+| | |
+| | * tc_baremetal_rfc2544_ipv4_1rule_1flow_64B_trex_corelated |
+| | _traffic.yaml |
+| | |
+| | Multiple VNF (2, 4, 10) RFC2544 correlated test cases are |
+| | listed below: |
+| | |
+| | * tc_heat_rfc2544_ipv4_1rule_1flow_64B_trex_correlated |
+| | _scale_10.yaml |
+| | * tc_heat_rfc2544_ipv4_1rule_1flow_64B_trex_correlated_scale |
+| | _2.yaml |
+| | * tc_heat_rfc2544_ipv4_1rule_1flow_64B_trex_correlated_scale |
+| | _4.yaml |
+| | |
+| | The scale-out RFC2544 test cases are listed below: |
+| | |
+| | * tc_heat_rfc2544_ipv4_1rule_1flow_64B_trex_correlated_scale |
+| | _out.yaml |
+| | |
+| | Test duration is set as 30 sec for each test and the |
+| | default number of rules is applied. These can be configured. |
+| | |
++---------------+--------------------------------------------------------------+
+| test tool | The vFW is a DPDK application that performs basic filtering |
+| | for malformed packets and dynamic packet filtering of |
+| | incoming packets using the connection tracker library. |
+| | |
++---------------+--------------------------------------------------------------+
+| applicability | The vFW RFC2544 test cases can be configured with different: |
+| | |
+| | * packet sizes; |
+| | * test duration; |
+| | * tolerated loss; |
+| | * traffic flows; |
+| | * rules; |
+| | |
+| | Default values exist. |
+| | |
++---------------+--------------------------------------------------------------+
+| pre-test | For OpenStack test cases the image (yardstick-samplevnf) |
+| conditions | needs to be installed into Glance with vFW and DPDK |
+| | included in it (NSB install). |
+| | |
+| | For Baremetal test cases vFW and DPDK must be installed on |
+| | the hosts where the test is executed. The pod.yaml file must |
+| | have the necessary system and NIC information. |
+| | |
++---------------+--------------------------------------------------------------+
+| test sequence | Description and expected result |
+| | |
++---------------+--------------------------------------------------------------+
+| step 1 | For Baremetal test: The TG (except IXIA), vFW and UDPReplay |
+| | VNFs are started on the hosts based on the pod file. |
+| | |
+| | For Heat test: Three host VMs are booted, as Traffic |
+| | generator, vFW and UDPReplay VNFs, based on the test |
+| | flavor. In case of scale-out scenario the multiple vFW VNF |
+| | VMs will be started. |
+| | |
++---------------+--------------------------------------------------------------+
+| step 2 | Yardstick is connected with the TG, vFW and UDPReplay VNF by |
+| | using ssh (in case of IXIA TG is connected via TCL |
+| | interface). The test will resolve the topology and |
+| | instantiate all VNFs and TG and collect the KPI's/metrics. |
+| | |
++---------------+--------------------------------------------------------------+
+| step 3 | The TG will send packets to the VNFs. If the number of |
+| | dropped packets is more than the tolerated loss the line |
+| | rate or throughput is halved. This is done until the dropped |
+| | packets are within an acceptable tolerated loss. |
+| | |
+| | The KPI is the number of packets per second for 64B packet |
+| | size with an accepted minimal packet loss for the default |
+| | configuration. |
+| | |
++---------------+--------------------------------------------------------------+
+| step 4 | In Baremetal test: The test quits the application and |
+| | unbinds the DPDK ports. |
+| | |
+| | In Heat test: All VNF VMs and TG are deleted on test |
+| | completion. |
+| | |
++---------------+--------------------------------------------------------------+
+| test verdict | The test case will achieve a Throughput with an accepted |
+| | minimal tolerated packet loss. |
++---------------+--------------------------------------------------------------+
+
diff --git a/docs/testing/user/userguide/nsb/tc_vfw_rfc3511.rst b/docs/testing/user/userguide/nsb/tc_vfw_rfc3511.rst
new file mode 100644
index 000000000..9051fc4df
--- /dev/null
+++ b/docs/testing/user/userguide/nsb/tc_vfw_rfc3511.rst
@@ -0,0 +1,133 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, 2018 Intel Corporation.
+
+*******************************************************
+Yardstick Test Case Description: NSB vFW RFC3511 (HTTP)
+*******************************************************
+
++------------------------------------------------------------------------------+
+| NSB vFW test for VNF characterization based on RFC3511 and IXIA |
+| |
++---------------+--------------------------------------------------------------+
+| test case id | tc_{context}_http_ixload_{http_size}_Requests-65000_{type} |
+| | |
+| | * context = baremetal, heat_external |
+| | * http_size = 1b, 4k, 64k, 256k, 512k, 1024k payload size |
+| | * type = Concurrency, Connections, Throughput |
+| | |
++---------------+--------------------------------------------------------------+
+| metric | * HTTP Total Throughput (Kbps); |
+| | * HTTP Simulated Users; |
+| | * HTTP Concurrent Connections; |
+| | * HTTP Connection Rate; |
+| | * HTTP Transaction Rate |
+| | |
++---------------+--------------------------------------------------------------+
+| test purpose | The vFW RFC3511 tests measure performance characteristics of |
+| | the SUT by sending HTTP traffic from uplink to downlink |
+| | TG ports through the vFW VNF. The application forwards |
+| | received traffic based on rules provided by the user in the |
+| | TC configuration and default rules created by vFW to send |
+| | traffic from uplink ports to downlink and vice versa. |
+| | |
++---------------+--------------------------------------------------------------+
+| configuration | The 2 ports RFC3511 test cases are listed below: |
+| | |
+| | * tc_baremetal_http_ixload_1024k_Requests-65000 |
+| | _Concurrency.yaml |
+| | * tc_baremetal_http_ixload_1b_Requests-65000 |
+| | _Concurrency.yaml |
+| | * tc_baremetal_http_ixload_256k_Requests-65000 |
+| | _Concurrency.yaml |
+| | * tc_baremetal_http_ixload_4k_Requests-65000 |
+| | _Concurrency.yaml |
+| | * tc_baremetal_http_ixload_512k_Requests-65000 |
+| | _Concurrency.yaml |
+| | * tc_baremetal_http_ixload_64k_Requests-65000 |
+| | _Concurrency.yaml |
+| | * tc_heat_external_http_ixload_1b_Requests-10Gbps |
+| | _Throughput.yaml |
+| | * tc_heat_external_http_ixload_1b_Requests-65000 |
+| | _Concurrency.yaml |
+| | * tc_heat_external_http_ixload_1b_Requests-65000 |
+| | _Connections.yaml |
+| | |
+| | The 4 ports RFC3511 test cases are listed below: |
+| | |
+| | * tc_baremetal_http_ixload_1b_Requests-65000 |
+| | _Concurrency_4port.yaml |
+| | |
++---------------+--------------------------------------------------------------+
+| test tool | The vFW is a DPDK application that performs basic filtering |
+| | for malformed packets and dynamic packet filtering of |
+| | incoming packets using the connection tracker library. |
+| | |
++---------------+--------------------------------------------------------------+
+| applicability | The vFW RFC3511 test cases can be configured with different: |
+| | |
+| | * http payload sizes; |
+| | * traffic flows; |
+| | * rules; |
+| | |
+| | Default values exist. |
+| | |
++---------------+--------------------------------------------------------------+
+| pre-test      | For OpenStack test cases, the image (yardstick-samplevnf) |
+| conditions    | needs to be installed into Glance with vFW and DPDK |
+|               | included in it (NSB install). |
+|               | |
+|               | For Baremetal test cases, vFW and DPDK must be installed on |
+|               | the hosts where the test is executed. The pod.yaml file must |
+|               | have the necessary system and NIC information. |
+| | |
++---------------+--------------------------------------------------------------+
+| test sequence | Description and expected result |
+| | |
++---------------+--------------------------------------------------------------+
+| step 1 | For Baremetal test: The vFW VNF is started on the hosts |
+| | based on the pod file. |
+| | |
+|               | For Heat external test: The vFW VM is deployed and booted. |
+| | |
++---------------+--------------------------------------------------------------+
+| step 2 | Yardstick is connected with the TG (IxLoad) via IxLoad API |
+| | and VNF by using ssh. The test will resolve the topology and |
+|               | instantiate all VNFs and TG and collect the KPIs/metrics. |
+| | |
++---------------+--------------------------------------------------------------+
+| step 3 | The TG simulates HTTP traffic based on selected type of TC. |
+| | |
+| | Concurrency: |
+| | The TC attempts to simulate some number of human users. |
+|               | The simulated users are gradually brought online until the |
+|               | target of 64K users is reached (the Ramp-Up phase), then |
+|               | taken offline again (the Ramp-Down phase). A schedule |
+|               | sketch is given after this table. |
+| | |
+| | Connections: |
+| | The TC creates some number of HTTP connections per second. |
+|               | It will attempt to generate 64K HTTP connections per |
+|               | second. |
+| | |
+| | Throughput: |
+|               | The TC simultaneously transmits and receives TCP payload |
+|               | (bytes) at a certain rate measured in Megabits per second |
+|               | (Mbps), Kilobits per second (Kbps), or Gigabits per second |
+|               | (Gbps). The default throughput is 10 Gbps. |
+| | |
+| | At the end of the TC, the KPIs are collected and stored |
+| | (depends on the selected dispatcher). |
+| | |
++---------------+--------------------------------------------------------------+
+| step 4 | In Baremetal test: The test quits the application and |
+| | unbinds the DPDK ports. |
+| | |
+| | In Heat test: All VNF VMs are deleted and connections to TG |
+| | are terminated. |
+| | |
++---------------+--------------------------------------------------------------+
+| test verdict | The test case will try to achieve the configured HTTP |
+| | Concurrency/Throughput/Connections. |
++---------------+--------------------------------------------------------------+
+
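+The Concurrency ramp described in step 3 can be pictured as a schedule of
+simulated users that grows to the 64K target and then shrinks again. The
+sketch below is illustrative only; it is not part of IxLoad or Yardstick and
+the function name is hypothetical::
+
+  # Python sketch: ramp simulated users up to a target, hold, then ramp down.
+  def user_schedule(target_users=64000, ramp_steps=10, hold_steps=5):
+      up = [target_users * i // ramp_steps for i in range(1, ramp_steps + 1)]
+      hold = [target_users] * hold_steps
+      down = list(reversed(up))
+      return up + hold + down
+
+  # user_schedule(64000, 4, 2) ->
+  # [16000, 32000, 48000, 64000, 64000, 64000, 64000, 48000, 32000, 16000]
+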
diff --git a/docs/testing/user/userguide/nsb/tc_vims_context_sipp.rst b/docs/testing/user/userguide/nsb/tc_vims_context_sipp.rst
new file mode 100644
index 000000000..6df4ab880
--- /dev/null
+++ b/docs/testing/user/userguide/nsb/tc_vims_context_sipp.rst
@@ -0,0 +1,96 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) 2019 Viosoft Corporation.
+
+**********************************************
+Yardstick Test Case Description: NSB VIMS
+**********************************************
+
++-----------------------------------------------------------------------------+
+|NSB VIMS test for vIMS characterization |
+| |
++--------------+--------------------------------------------------------------+
+|test case id | tc_vims_{context}_sipp |
+| | |
+| | * context = baremetal or heat; |
+| | |
++--------------+--------------------------------------------------------------+
+|metric | * Successful registrations per second; |
+| | * Total number of active registrations per server; |
+| | * Successful de-registrations per second; |
+| | * Successful session establishments per second; |
+| | * Total number of active sessions per server; |
+| | * Mean session setup time; |
+| | * Successful re-registrations per second; |
+| | |
++--------------+--------------------------------------------------------------+
+|test purpose | The vIMS test handles registration rate, call rate, |
+| | round trip delay, and message statistics of the vIMS system. |
+| | |
+| | The vIMS test cases are implemented to run in baremetal |
+| | and heat context default configuration. |
+| | |
++--------------+--------------------------------------------------------------+
+|configuration | The vIMS test cases are listed below: |
+| | |
+| | * tc_vims_baremetal_sipp.yaml |
+| | * tc_vims_heat_sipp.yaml |
+| | |
+| | Each test runs one time and collects all the KPIs. |
+| | The configuration of vIMS and SIPp can be changed in each |
+| | test. |
++--------------+--------------------------------------------------------------+
+|test tool | SIPp |
+| | |
+| | SIPp is an application that can simulate SIP scenarios and |
+| | generate RTP traffic, and is used for vIMS characterization. |
+| | |
++--------------+--------------------------------------------------------------+
+|applicability | The SIPp test cases can be configured with different: |
+| | |
+| | * number of accounts; |
+| | * calls per second (cps) of the SIP test; |
+| | * the holding time; |
+| | * RTP configuration; |
+| | |
++--------------+--------------------------------------------------------------+
+|pre-test | For the OpenStack test case, only vIMS is deployed by an |
+|conditions | external heat template; SIPp needs a pod.yaml file with the |
+| | necessary system and NIC information. |
+| | |
+| | For Baremetal test cases, SIPp and vIMS must be installed on |
+| | the hosts where the test is executed. The pod.yaml file must |
+| | have the necessary system and NIC information. |
+| | |
++--------------+--------------------------------------------------------------+
+|test sequence | description and expected result |
+| | |
++--------------+--------------------------------------------------------------+
+|step 1 | For Baremetal test: The TG and VNF are started on the hosts |
+| | based on the pod file. |
+| | |
+| | For Heat test: One host VM for vIMS is booted, based on |
+| | the test flavor. Another host for SIPp is booted as |
+| | traffic generator, based on the pod.yaml file. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 2 | Yardstick is connected with the vIMS and SIPp via ssh. |
+| | The test will resolve the topology, instantiate the vIMS and |
+| | SIPp and collect the KPIs/metrics. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 3 | The SIPp will run scenario tests with parameters configured |
+| | in test case files (tc_vims_baremetal_sipp.yaml and |
+| | tc_vims_heat_sipp.yaml files). |
+| | This is done until the KPIs of SIPp are within an acceptable |
+| | threshold. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 4 | In Baremetal test: The test quits the application. |
+| | |
+| | In Heat test: The host VM of vIMS is deleted on test |
+| | completion. |
++--------------+--------------------------------------------------------------+
+|test verdict | The test case will collect the KPIs and plot them on Grafana. |
++--------------+--------------------------------------------------------------+
diff --git a/docs/testing/user/userguide/nsb/tc_vpp_baremetal_crypto_ipsec.rst b/docs/testing/user/userguide/nsb/tc_vpp_baremetal_crypto_ipsec.rst
new file mode 100644
index 000000000..6a4a37697
--- /dev/null
+++ b/docs/testing/user/userguide/nsb/tc_vpp_baremetal_crypto_ipsec.rst
@@ -0,0 +1,113 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, 2019 Viosoft Corporation.
+
+***********************************************
+Yardstick Test Case Description: NSB VPP IPSEC
+***********************************************
+
++------------------------------------------------------------------------------+
+|NSB VPP test for vIPSEC characterization |
+| |
++--------------+---------------------------------------------------------------+
+|test case id | tc_baremetal_rfc2544_ipv4_{crypto_dev}_{crypto_alg} |
+| | |
+| | * crypto_dev = HW_cryptodev or SW_cryptodev; |
+| | * crypto_alg = aes-gcm or cbc-sha1; |
+| | |
++--------------+---------------------------------------------------------------+
+|metric | * Network Throughput NDR or PDR; |
+| | * Connections Per Second (CPS); |
+| | * Latency; |
+| | * Number of tunnels; |
+| | * TG Packets Out; |
+| | * TG Packets In; |
+| | * VNF Packets Out; |
+| | * VNF Packets In; |
+| | * Dropped packets; |
+| | |
++--------------+---------------------------------------------------------------+
+|test purpose | IPv4 IPsec tunnel mode performance test: |
+| | |
+| | * Finds and reports throughput NDR (Non Drop Rate) with zero |
+| | packet loss tolerance or throughput PDR (Partial Drop Rate) |
+| | with non-zero packet loss tolerance (LT) expressed in |
+| | number of packets transmitted. |
+| | |
+| | * The IPSEC test cases are implemented to run in a |
+| | baremetal environment. |
+| | |
++--------------+---------------------------------------------------------------+
+|configuration | The IPSEC test cases are listed below: |
+| | |
+| | * tc_baremetal_rfc2544_ipv4_hw_aesgcm_IMIX_trex.yaml |
+| | * tc_baremetal_rfc2544_ipv4_hw_aesgcm_trex.yaml |
+| | * tc_baremetal_rfc2544_ipv4_hw_cbcsha1_IMIX_trex.yaml |
+| | * tc_baremetal_rfc2544_ipv4_hw_cbcsha1_trex.yaml |
+| | * tc_baremetal_rfc2544_ipv4_sw_aesgcm_IMIX_trex.yaml |
+| | * tc_baremetal_rfc2544_ipv4_sw_aesgcm_trex.yaml |
+| | * tc_baremetal_rfc2544_ipv4_sw_cbcsha1_IMIX_trex.yaml |
+| | * tc_baremetal_rfc2544_ipv4_sw_cbcsha1_trex.yaml |
+| | |
+| | The test duration is set to 500 seconds for each test. |
+| | The packet size is set to 64 bytes or higher. |
+| | The number of tunnels is set to 1 or higher. |
+| | The number of connections is set to 1 or higher. |
+| | All of these values can be configured. |
+| | |
++--------------+---------------------------------------------------------------+
+|test tool | Vector Packet Processing (VPP) |
+| | The VPP platform is an extensible framework that provides |
+| | out-of-the-box production quality switch/router functionality.|
+| | It combines high performance, proven technology, modularity, |
+| | flexibility and a rich feature set. |
+| | |
++--------------+---------------------------------------------------------------+
+|applicability | The VPP IPSEC test cases can be configured with different: |
+| | |
+| | * packet sizes; |
+| | * test durations; |
+| | * tolerated loss; |
+| | * crypto device type; |
+| | * number of physical cores; |
+| | * number of tunnels; |
+| | * number of connections; |
+| | * encryption algorithms - integrity algorithm; |
+| | |
+| | Default values exist. |
+| | |
++--------------+---------------------------------------------------------------+
+|pre-test | For Baremetal test cases, VPP and DPDK must be installed on |
+|conditions | the hosts where the test is executed. The pod.yaml file must |
+| | have the necessary system and NIC information. |
+| | |
++--------------+---------------------------------------------------------------+
+|test sequence | description and expected result |
+| | |
++--------------+---------------------------------------------------------------+
+|step 1 | For Baremetal test: The TG and VNF are started on the hosts |
+| | based on the pod file. |
+| | |
++--------------+---------------------------------------------------------------+
+|step 2 | Yardstick is connected with the TG and VNF by using ssh. |
+| | The test will resolve the topology and instantiate the VNF |
+| | and TG and collect the KPIs/metrics. |
+| | |
++--------------+---------------------------------------------------------------+
+|step 3 | Test packets are generated by TG on links to DUTs. If the |
+| | number of dropped packets is more than the tolerated loss, |
+| | the line rate or throughput is halved. This is done until |
+| | the dropped packets are within an acceptable tolerated loss. |
+| | |
+| | The KPI is the number of packets per second for a packet size |
+| | specified in the test case with an accepted minimal packet |
+| | loss for the default configuration. |
+| | |
++--------------+---------------------------------------------------------------+
+|step 4 | In Baremetal test: The test quits the application and |
+| | unbinds the DPDK ports. |
+| | |
++--------------+---------------------------------------------------------------+
+|test verdict | The test case will achieve a Throughput with an accepted |
+| | minimal tolerated packet loss. |
++--------------+---------------------------------------------------------------+ \ No newline at end of file
diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc010.rst b/docs/testing/user/userguide/opnfv_yardstick_tc010.rst
index 202307de6..19cc80e30 100644
--- a/docs/testing/user/userguide/opnfv_yardstick_tc010.rst
+++ b/docs/testing/user/userguide/opnfv_yardstick_tc010.rst
@@ -34,6 +34,7 @@ Yardstick Test Case Description TC010
| | |
| | Lmbench is a suite of operating system microbenchmarks. This |
| | test uses lat_mem_rd tool from that suite including: |
+| | |
| | * Context switching |
| | * Networking: connection establishment, pipe, TCP, UDP, and |
| | RPC hot potato |
@@ -55,7 +56,7 @@ Yardstick Test Case Description TC010
| | The benchmark runs as two nested loops. The outer loop is |
| | the stride size. The inner loop is the array size. For each |
| | array size, the benchmark creates a ring of pointers that |
-| | point backward one stride.Traversing the array is done by: |
+| | point backward one stride. Traversing the array is done by:: |
| | |
| | p = (char **)*p; |
| | |
diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc011.rst b/docs/testing/user/userguide/opnfv_yardstick_tc011.rst
index 48bdef497..cbb1db91f 100644
--- a/docs/testing/user/userguide/opnfv_yardstick_tc011.rst
+++ b/docs/testing/user/userguide/opnfv_yardstick_tc011.rst
@@ -60,14 +60,14 @@ Yardstick Test Case Description TC011
| | |
| | * options: |
| | protocol: udp # The protocol used by iperf3 tools |
-| | bandwidth: 20m # It will send the given number of packets |
-| | without pausing |
+| | # Send the given number of packets without pausing: |
+| | bandwidth: 20m |
| | * runner: |
| | duration: 30 # Total test duration 30 seconds. |
| | |
| | * SLA (optional): |
| | jitter: 10 (ms) # The maximum amount of jitter that is |
-| | accepted. |
+| | accepted. |
| | |
+--------------+--------------------------------------------------------------+
|applicability | Test can be configured with different: |
diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc012.rst b/docs/testing/user/userguide/opnfv_yardstick_tc012.rst
index b56e829f5..2502f5d94 100644
--- a/docs/testing/user/userguide/opnfv_yardstick_tc012.rst
+++ b/docs/testing/user/userguide/opnfv_yardstick_tc012.rst
@@ -34,6 +34,7 @@ Yardstick Test Case Description TC012
| | |
| | LMbench is a suite of operating system microbenchmarks. |
| | This test uses bw_mem tool from that suite including: |
+| | |
| | * Cached file read |
| | * Memory copy (bcopy) |
| | * Memory read |
diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc015.rst b/docs/testing/user/userguide/opnfv_yardstick_tc015.rst
new file mode 100755
index 000000000..277614ad4
--- /dev/null
+++ b/docs/testing/user/userguide/opnfv_yardstick_tc015.rst
@@ -0,0 +1,141 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, Orange and others.
+
+*************************************
+Yardstick Test Case Description TC015
+*************************************
+
+.. _unixbench: https://github.com/kdlucas/byte-unixbench/blob/master/UnixBench
+
++-----------------------------------------------------------------------------+
+| Processing speed with impact on energy consumption and CPU load |
+| |
++--------------+--------------------------------------------------------------+
+|test case id | OPNFV_YARDSTICK_TC015_PROCESSING SPEED |
+| | |
++--------------+--------------------------------------------------------------+
+|metric | * score of single cpu running; |
+| | * score of parallel running; |
+| | * energy consumption; |
+| | * cpu load; |
+| | |
++--------------+--------------------------------------------------------------+
+|test purpose | The purpose of TC015 is to evaluate the IaaS compute |
+| | performance with regard to CPU processing speed and its |
+| | impact on energy consumption. |
+| | It measures the score of single CPU running and parallel |
+| | running. Energy consumption and CPU load are monitored while |
+| | the CPU test is running. |
+| | |
+| | The purpose is also to be able to spot the trends. |
+| | Test results, graphs and similar shall be stored for |
+| | comparison reasons and product evolution understanding |
+| | between different OPNFV versions and/or configurations, |
+| | different server types. |
+| | |
++--------------+--------------------------------------------------------------+
+|test tool | UnixBench |
+| | |
+| | UnixBench is a widely used CPU benchmarking software tool. |
+| | It can measure the performance of bash scripts, CPUs in |
+| | multithreading and single threading. It can also measure the |
+| | performance of parallel tasks. In addition, specific disk IO |
+| | tests for small and large files are performed. It can be |
+| | used to measure both Linux dedicated servers and Linux VPS |
+| | servers, running CentOS, Debian, Ubuntu, Fedora and other |
+| | distros. |
+| | |
+| | (UnixBench is not always part of a Linux distribution, hence |
+| | it needs to be installed. As an example see the |
+| | /yardstick/tools/ directory for how to generate a Linux |
+| | image with UnixBench included.) |
+| | |
+| | Redfish API |
+| | This HTTPS interface is provided by the BMC of every telco |
+| | grade server. It is a standard interface. |
+| | |
++--------------+--------------------------------------------------------------+
+|test | UnixBench runs system benchmarks on a compute node, getting |
+|description | information on the CPUs in the system. If the system has |
+| | more than one CPU, the tests will be run twice -- once with |
+| | a single copy of each test running at once, and once with |
+| | N copies, where N is the number of CPUs. |
+| | |
+| | UnixBench will process a set of results from a single test |
+| | by averaging the individual pass results into a single final |
+| | value. |
+| | |
+| | While the CPU test is running, the Energy scenario runs in |
+| | the background to monitor the number of watts consumed by |
+| | the compute server on the fly. The same is done with the |
+| | Cpuload scenario to monitor the overall percentage of CPU |
+| | used on the fly. This makes it possible to balance the CPU |
+| | score against its impact on energy consumption. Synchronized |
+| | measurements make it possible to look at any relation |
+| | between CPU load and energy consumption. A sketch of this |
+| | sampling loop is given after the table. |
+| | |
++--------------+--------------------------------------------------------------+
+|configuration | file: opnfv_yardstick_tc015.yaml |
+| | |
+| | run_mode: |
+| | Run Energy and Cpuload in background |
+| | Run unixbench in quiet mode or verbose mode |
+| | test_type: dhry2reg, whetstone and so on |
+| | |
+| | Duration and Interval are set globally for Energy and |
+| | Cpuload, aligned with duration of UnixBench test. |
+| | SLA can be set for each scenario type. Default is NA. |
+| | For SLA with single_score and parallel_score, both can be |
+| | set by user, default is NA. |
+| | |
++--------------+--------------------------------------------------------------+
+|applicability | Test shall be applied to node context only. |
+| | It can be configured with different: |
+| | |
+| | * test types: dhry2reg, whetstone |
+| | |
+| | Default values exist. |
+| | |
+| | SLA (optional): min_score: The minimum UnixBench score that |
+| | is accepted. |
+| | |
++--------------+--------------------------------------------------------------+
+|usability | This test case is one of Yardstick's generic test cases. |
+| | Thus it is runnable on most of the scenarios. |
+| | |
++--------------+--------------------------------------------------------------+
+|references | unixbench_ |
+| | |
+| | ETSI-NFV-TST001 |
+| | |
++--------------+--------------------------------------------------------------+
+|pre-test | The target shall have unixbench installed on it. |
+|conditions | |
+| | |
++--------------+--------------------------------------------------------------+
+|test sequence | description and expected result |
+| | |
++--------------+--------------------------------------------------------------+
+|step 1 | Yardstick is connected with the target node using ssh. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 2 | Energy and Cpuload are launched silently in the background |
+| | one after the other. |
+| | Then UnixBench is invoked. All the tests are executed using |
+| | the "Run" script in the top-level UnixBench directory. |
+| | The "Run" script will run a standard "index" test, and save |
+| | the report in the "results" directory. Then the report is |
+| | processed by "unixbench_benchmark" and checked against the |
+| | SLA. |
+| | While UnixBench runs, energy and CPU load are captured |
+| | periodically according to the interval value. |
+| | |
+| | Result: Logs are stored. |
+| | |
++--------------+--------------------------------------------------------------+
+|test verdict | Fails only if SLA is not passed, or if there is a test case |
+| | execution problem. |
+| | |
++--------------+--------------------------------------------------------------+
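+
+The synchronized background sampling described in the test description can be
+pictured with the following minimal sketch (illustrative only; it is not the
+actual Yardstick Energy/Cpuload scenarios, and ``read_power_watts`` and
+``read_cpu_load`` are hypothetical stand-ins for the Redfish and CPU load
+queries)::
+
+  import threading
+  import time
+
+  def sample_in_background(interval, duration, read_power_watts, read_cpu_load):
+      """Collect (timestamp, watts, cpu_load) samples every `interval` seconds."""
+      samples = []
+
+      def loop():
+          end = time.time() + duration
+          while time.time() < end:
+              samples.append((time.time(), read_power_watts(), read_cpu_load()))
+              time.sleep(interval)
+
+      thread = threading.Thread(target=loop, daemon=True)
+      thread.start()
+      return samples, thread   # run UnixBench while the sampler thread runs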
diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc019.rst b/docs/testing/user/userguide/opnfv_yardstick_tc019.rst
index 57e8ddf79..d27b201c5 100644
--- a/docs/testing/user/userguide/opnfv_yardstick_tc019.rst
+++ b/docs/testing/user/userguide/opnfv_yardstick_tc019.rst
@@ -43,20 +43,24 @@ Yardstick Test Case Description TC019
| | |
+--------------+--------------------------------------------------------------+
|monitors | In this test case, two kinds of monitor are needed: |
+| | |
| | 1. the "openstack-cmd" monitor constantly request a specific |
| | Openstack command, which needs two parameters: |
-| | 1) monitor_type: which is used for finding the monitor class |
-| | and related scritps. It should be always set to |
-| | "openstack-cmd" for this monitor. |
-| | 2) command_name: which is the command name used for request |
+| | |
+| | 1. monitor_type: which is used for finding the monitor |
+| | class and related scritps. It should be always set to |
+| | "openstack-cmd" for this monitor. |
+| | 2. command_name: which is the command name used for |
+| | request |
| | |
| | 2. the "process" monitor check whether a process is running |
| | on a specific node, which needs three parameters: |
-| | 1) monitor_type: which used for finding the monitor class |
-| | and related scritps. It should be always set to "process" |
-| | for this monitor. |
-| | 2) process_name: which is the process name for monitor |
-| | 3) host: which is the name of the node runing the process |
+| | |
+| | 1. monitor_type: which used for finding the monitor class |
+| | and related scritps. It should be always set to |
+| | "process" for this monitor. |
+| | 2. process_name: which is the process name for monitor |
+| | 3. host: which is the name of the node runing the process |
| | |
| | e.g. |
| | monitor1: |
@@ -125,7 +129,12 @@ Yardstick Test Case Description TC019
+--------------+--------------------------------------------------------------+
|post-action | It is the action when the test cases exist. It will check |
| | the status of the specified process on the host, and restart |
-| | the process if it is not running for next test cases |
+| | the process if it is not running for next test cases. |
+| | |
+| | Notice: This post-action uses 'lsb_release' command to check |
+| | the host linux distribution and determine the OpenStack |
+| | service name to restart the process. Lack of 'lsb_release' |
+| | on the host may cause failure to restart the process. |
| | |
+--------------+--------------------------------------------------------------+
|test verdict | Fails only if SLA is not passed, or if there is a test case |
diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc025.rst b/docs/testing/user/userguide/opnfv_yardstick_tc025.rst
index 0e2e9a5f8..f3f9ea6bf 100644
--- a/docs/testing/user/userguide/opnfv_yardstick_tc025.rst
+++ b/docs/testing/user/userguide/opnfv_yardstick_tc025.rst
@@ -39,12 +39,15 @@ Yardstick Test Case Description TC025
| | |
+--------------+--------------------------------------------------------------+
|monitors | In this test case, one kind of monitor are needed: |
+| | |
| | 1. the "openstack-cmd" monitor constantly request a specific |
| | Openstack command, which needs two parameters |
-| | 1) monitor_type: which is used for finding the monitor class |
-| | and related scritps. It should be always set to |
-| | "openstack-cmd" for this monitor. |
-| | 2) command_name: which is the command name used for request |
+| | |
+| | 1) monitor_type: which is used for finding the monitor |
+| | class and related scripts. It should be always set to |
+| | "openstack-cmd" for this monitor. |
+| | 2) command_name: which is the command name used for |
+| | request |
| | |
| | There are four instance of the "openstack-cmd" monitor: |
| | monitor1: |
diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc027.rst b/docs/testing/user/userguide/opnfv_yardstick_tc027.rst
index 125fd59fa..90790e2e3 100644
--- a/docs/testing/user/userguide/opnfv_yardstick_tc027.rst
+++ b/docs/testing/user/userguide/opnfv_yardstick_tc027.rst
@@ -7,7 +7,7 @@
Yardstick Test Case Description TC027
*************************************
-.. _ipv6: https://wiki.opnfv.org/ipv6_opnfv_project
+.. _ipv6: https://wiki.opnfv.org/display/ipv6
+-----------------------------------------------------------------------------+
|IPv6 connectivity between nodes on the tenant network |
diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc040.rst b/docs/testing/user/userguide/opnfv_yardstick_tc040.rst
index d62fbf787..4c73c9677 100644
--- a/docs/testing/user/userguide/opnfv_yardstick_tc040.rst
+++ b/docs/testing/user/userguide/opnfv_yardstick_tc040.rst
@@ -7,7 +7,7 @@
Yardstick Test Case Description TC040
*************************************
-.. _Parser: https://wiki.opnfv.org/parser
+.. _Parser: https://wiki.opnfv.org/display/parser
+-----------------------------------------------------------------------------+
|Verify Parser Yang-to-Tosca |
diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc042.rst b/docs/testing/user/userguide/opnfv_yardstick_tc042.rst
index 8660d9297..23b98c8f4 100644
--- a/docs/testing/user/userguide/opnfv_yardstick_tc042.rst
+++ b/docs/testing/user/userguide/opnfv_yardstick_tc042.rst
@@ -4,12 +4,12 @@
.. (c) OPNFV, ZTE and others.
***************************************
-Yardstick Test Case Description TC0042
+Yardstick Test Case Description TC042
***************************************
.. _DPDK: http://dpdk.org/doc/guides/index.html
.. _Testpmd: http://dpdk.org/doc/guides/testpmd_app_ug/index.html
-.. _Pktgen-dpdk: http://pktgen.readthedocs.io/en/latest/index.html
+.. _Pktgen-dpdk: https://pktgen-dpdk.readthedocs.io/en/latest/index.html
+-----------------------------------------------------------------------------+
|Network Performance |
diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc045.rst b/docs/testing/user/userguide/opnfv_yardstick_tc045.rst
index 0b0993c34..378176090 100644
--- a/docs/testing/user/userguide/opnfv_yardstick_tc045.rst
+++ b/docs/testing/user/userguide/opnfv_yardstick_tc045.rst
@@ -128,9 +128,14 @@ Yardstick Test Case Description TC045
| | Result: The test case is passed or not. |
| | |
+--------------+--------------------------------------------------------------+
-|post-action | It is the action when the test cases exist. It will check the|
-| | status of the specified process on the host, and restart the |
-| | process if it is not running for next test cases |
+|post-action | It is the action when the test cases exit. It will check |
+| | the status of the specified process on the host, and restart |
+| | the process if it is not running for next test cases. |
+| | |
+| | Notice: This post-action uses 'lsb_release' command to check |
+| | the host linux distribution and determine the OpenStack |
+| | service name to restart the process. Lack of 'lsb_release' |
+| | on the host may cause failure to restart the process. |
| | |
+--------------+--------------------------------------------------------------+
|test verdict | Fails only if SLA is not passed, or if there is a test case |
diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc046.rst b/docs/testing/user/userguide/opnfv_yardstick_tc046.rst
index cce6c6884..5308c8e7b 100644
--- a/docs/testing/user/userguide/opnfv_yardstick_tc046.rst
+++ b/docs/testing/user/userguide/opnfv_yardstick_tc046.rst
@@ -127,9 +127,14 @@ Yardstick Test Case Description TC046
| | Result: The test case is passed or not. |
| | |
+--------------+--------------------------------------------------------------+
-|post-action | It is the action when the test cases exist. It will check the|
-| | status of the specified process on the host, and restart the |
-| | process if it is not running for next test cases |
+|post-action | It is the action when the test cases exit. It will check |
+| | the status of the specified process on the host, and restart |
+| | the process if it is not running for next test cases. |
+| | |
+| | Notice: This post-action uses 'lsb_release' command to check |
+| | the host linux distribution and determine the OpenStack |
+| | service name to restart the process. Lack of 'lsb_release' |
+| | on the host may cause failure to restart the process. |
| | |
+--------------+--------------------------------------------------------------+
|test verdict | Fails only if SLA is not passed, or if there is a test case |
diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc047.rst b/docs/testing/user/userguide/opnfv_yardstick_tc047.rst
index 95158cfd6..bb8ffc6ab 100644
--- a/docs/testing/user/userguide/opnfv_yardstick_tc047.rst
+++ b/docs/testing/user/userguide/opnfv_yardstick_tc047.rst
@@ -128,9 +128,14 @@ Yardstick Test Case Description TC047
| | Result: The test case is passed or not. |
| | |
+--------------+--------------------------------------------------------------+
-|post-action | It is the action when the test cases exist. It will check the|
-| | status of the specified process on the host, and restart the |
-| | process if it is not running for next test cases |
+|post-action | It is the action when the test cases exit. It will check |
+| | the status of the specified process on the host, and restart |
+| | the process if it is not running for next test cases. |
+| | |
+| | Notice: This post-action uses 'lsb_release' command to check |
+| | the host linux distribution and determine the OpenStack |
+| | service name to restart the process. Lack of 'lsb_release' |
+| | on the host may cause failure to restart the process. |
| | |
+--------------+--------------------------------------------------------------+
|test verdict | Fails only if SLA is not passed, or if there is a test case |
diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc048.rst b/docs/testing/user/userguide/opnfv_yardstick_tc048.rst
index 21c00d1fe..1bf627282 100644
--- a/docs/testing/user/userguide/opnfv_yardstick_tc048.rst
+++ b/docs/testing/user/userguide/opnfv_yardstick_tc048.rst
@@ -128,9 +128,14 @@ Yardstick Test Case Description TC048
| | Result: The test case is passed or not. |
| | |
+--------------+--------------------------------------------------------------+
-|post-action | It is the action when the test cases exist. It will check the|
-| | status of the specified process on the host, and restart the |
-| | process if it is not running for next test cases |
+|post-action | It is the action when the test cases exit. It will check |
+| | the status of the specified process on the host, and restart |
+| | the process if it is not running for next test cases. |
+| | |
+| | Notice: This post-action uses 'lsb_release' command to check |
+| | the host linux distribution and determine the OpenStack |
+| | service name to restart the process. Lack of 'lsb_release' |
+| | on the host may cause failure to restart the process. |
| | |
+--------------+--------------------------------------------------------------+
|test verdict | Fails only if SLA is not passed, or if there is a test case |
diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc049.rst b/docs/testing/user/userguide/opnfv_yardstick_tc049.rst
index f58bb9989..12ed94b7d 100644
--- a/docs/testing/user/userguide/opnfv_yardstick_tc049.rst
+++ b/docs/testing/user/userguide/opnfv_yardstick_tc049.rst
@@ -128,9 +128,14 @@ Yardstick Test Case Description TC049
| | Result: The test case is passed or not. |
| | |
+--------------+--------------------------------------------------------------+
-|post-action | It is the action when the test cases exist. It will check the|
-| | status of the specified process on the host, and restart the |
-| | process if it is not running for next test cases |
+|post-action | It is the action when the test cases exit. It will check |
+| | the status of the specified process on the host, and restart |
+| | the process if it is not running for next test cases. |
+| | |
+| | Notice: This post-action uses 'lsb_release' command to check |
+| | the host linux distribution and determine the OpenStack |
+| | service name to restart the process. Lack of 'lsb_release' |
+| | on the host may cause failure to restart the process. |
| | |
+--------------+--------------------------------------------------------------+
|test verdict | Fails only if SLA is not passed, or if there is a test case |
diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc050.rst b/docs/testing/user/userguide/opnfv_yardstick_tc050.rst
index 8890c9d53..7d01cb99a 100644
--- a/docs/testing/user/userguide/opnfv_yardstick_tc050.rst
+++ b/docs/testing/user/userguide/opnfv_yardstick_tc050.rst
@@ -34,23 +34,20 @@ Yardstick Test Case Description TC050
| | 2) host: which is the name of a control node being attacked. |
| | 3) interface: the network interface to be turned off. |
| | |
-| | There are four instance of the "close-interface" monitor: |
-| | attacker1(for public netork): |
-| | -fault_type: "close-interface" |
-| | -host: node1 |
-| | -interface: "br-ex" |
-| | attacker2(for management netork): |
-| | -fault_type: "close-interface" |
-| | -host: node1 |
-| | -interface: "br-mgmt" |
-| | attacker3(for storage netork): |
-| | -fault_type: "close-interface" |
-| | -host: node1 |
-| | -interface: "br-storage" |
-| | attacker4(for private netork): |
-| | -fault_type: "close-interface" |
-| | -host: node1 |
-| | -interface: "br-mesh" |
+| | The interface to be closed by the attacker can be set by the |
+| | variable of "{{ interface_name }}":: |
+| | |
+| | attackers: |
+| | - |
+| | fault_type: "general-attacker" |
+| | host: {{ attack_host }} |
+| | key: "close-br-public" |
+| | attack_key: "close-interface" |
+| | action_parameter: |
+| | interface: {{ interface_name }} |
+| | rollback_parameter: |
+| | interface: {{ interface_name }} |
+| | |
+--------------+--------------------------------------------------------------+
|monitors | In this test case, the monitor named "openstack-cmd" is |
| | needed. The monitor needs needs two parameters: |
@@ -59,19 +56,20 @@ Yardstick Test Case Description TC050
| | "openstack-cmd" for this monitor. |
| | 2) command_name: which is the command name used for request |
| | |
-| | There are four instance of the "openstack-cmd" monitor: |
-| | monitor1: |
-| | -monitor_type: "openstack-cmd" |
-| | -command_name: "nova image-list" |
-| | monitor2: |
-| | -monitor_type: "openstack-cmd" |
-| | -command_name: "neutron router-list" |
-| | monitor3: |
-| | -monitor_type: "openstack-cmd" |
-| | -command_name: "heat stack-list" |
-| | monitor4: |
-| | -monitor_type: "openstack-cmd" |
-| | -command_name: "cinder list" |
+| | There are four instance of the "openstack-cmd" monitor:: |
+| | |
+| | monitor1: |
+| | - monitor_type: "openstack-cmd" |
+| | - command_name: "nova image-list" |
+| | monitor2: |
+| | - monitor_type: "openstack-cmd" |
+| | - command_name: "neutron router-list" |
+| | monitor3: |
+| | - monitor_type: "openstack-cmd" |
+| | - command_name: "heat stack-list" |
+| | monitor4: |
+| | - monitor_type: "openstack-cmd" |
+| | - command_name: "cinder list" |
+--------------+--------------------------------------------------------------+
|metrics | In this test case, there is one metric: |
| | 1)service_outage_time: which indicates the maximum outage |
@@ -109,9 +107,9 @@ Yardstick Test Case Description TC050
+--------------+--------------------------------------------------------------+
|step 2 | do attacker: connect the host through SSH, and then execute |
| | the turnoff network interface script with param value |
-| | specified by "interface". |
+| | specified by "{{ interface_name }}". |
| | |
-| | Result: Network interfaces will be turned down. |
+| | Result: The specified network interface will be down. |
| | |
+--------------+--------------------------------------------------------------+
|step 3 | stop monitors after a period of time specified by |
@@ -133,3 +131,4 @@ Yardstick Test Case Description TC050
| | execution problem. |
| | |
+--------------+--------------------------------------------------------------+
+
diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc052.rst b/docs/testing/user/userguide/opnfv_yardstick_tc052.rst
index 9514b6819..7f2be6e7d 100644
--- a/docs/testing/user/userguide/opnfv_yardstick_tc052.rst
+++ b/docs/testing/user/userguide/opnfv_yardstick_tc052.rst
@@ -65,15 +65,16 @@ Yardstick Test Case Description TC052
| | |
| | In this case, the "operation" adds a flavor and the "result |
| | checker" checks whether ths flavor is created. Their |
-| | parameters show as follows: |
-| | operation: |
-| | -operation_type: "nova-create-flavor" |
-| | -action_parameter: |
-| | flavorconfig: "test-001 test-001 100 1 1" |
-| | result checker: |
-| | -checker_type: "check-flavor" |
-| | -expectedValue: "test-001" |
-| | -condition: "in" |
+| | parameters show as follows:: |
+| | |
+| | operation: |
+| | -operation_type: "nova-create-flavor" |
+| | -action_parameter: |
+| | flavorconfig: "test-001 test-001 100 1 1" |
+| | result checker: |
+| | -checker_type: "check-flavor" |
+| | -expectedValue: "test-001" |
+| | -condition: "in" |
+--------------+--------------------------------------------------------------+
|metrics | In this test case, there is one metric: |
| | 1)service_outage_time: which indicates the maximum outage |
diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc053.rst b/docs/testing/user/userguide/opnfv_yardstick_tc053.rst
index 3c6bbc628..7308babb8 100644
--- a/docs/testing/user/userguide/opnfv_yardstick_tc053.rst
+++ b/docs/testing/user/userguide/opnfv_yardstick_tc053.rst
@@ -135,6 +135,11 @@ Yardstick Test Case Description TC053
| | the status of the specified process on the host, and restart |
| | the process if it is not running for next test cases. |
| | |
+| | Notice: This post-action uses 'lsb_release' command to check |
+| | the host linux distribution and determine the OpenStack |
+| | service name to restart the process. Lack of 'lsb_release' |
+| | on the host may cause failure to restart the process. |
+| | |
+--------------+--------------------------------------------------------------+
|test verdict | Fails only if SLA is not passed, or if there is a test case |
| | execution problem. |
diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc055.rst b/docs/testing/user/userguide/opnfv_yardstick_tc055.rst
index c861ca90c..25703d3fb 100644
--- a/docs/testing/user/userguide/opnfv_yardstick_tc055.rst
+++ b/docs/testing/user/userguide/opnfv_yardstick_tc055.rst
@@ -7,7 +7,7 @@
Yardstick Test Case Description TC055
*************************************
-.. _/proc/cpuinfo: http://www.linfo.org/proc_cpuinfo.html
+.. _`/proc/cpuinfo`: http://www.linfo.org/proc_cpuinfo.html
+-----------------------------------------------------------------------------+
|Compute Capacity |
@@ -41,7 +41,7 @@ Yardstick Test Case Description TC055
| | capacity output. |
| | |
+--------------+--------------------------------------------------------------+
-|references | /proc/cpuinfo_ |
+|references | `/proc/cpuinfo`_ |
| | |
| | ETSI-NFV-TST001 |
| | |
diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc056.rst b/docs/testing/user/userguide/opnfv_yardstick_tc056.rst
index 01aa99ac2..cd8cc2f20 100644
--- a/docs/testing/user/userguide/opnfv_yardstick_tc056.rst
+++ b/docs/testing/user/userguide/opnfv_yardstick_tc056.rst
@@ -10,7 +10,7 @@ Yardstick Test Case Description TC056
+-----------------------------------------------------------------------------+
|OpenStack Controller Messaging Queue Service High Availability |
-+==============+==============================================================+
++--------------+--------------------------------------------------------------+
|test case id | OPNFV_YARDSTICK_TC056:OpenStack Controller Messaging Queue |
| | Service High Availability |
+--------------+--------------------------------------------------------------+
@@ -98,7 +98,7 @@ Yardstick Test Case Description TC056
| | |
+--------------+--------------------------------------------------------------+
|configuration | This test case needs two configuration files: |
-| | 1) test case file:opnfv_yardstick_tc056.yaml |
+| | 1) test case file: opnfv_yardstick_tc056.yaml |
| | -Attackers: see above "attackers" description |
| | -waiting_time: which is the time (seconds) from the process |
| | being killed to stoping monitors the monitors |
@@ -142,6 +142,11 @@ Yardstick Test Case Description TC056
| | the status of the specified process on the host, and restart |
| | the process if it is not running for next test cases. |
| | |
+| | Notice: This post-action uses 'lsb_release' command to check |
+| | the host linux distribution and determine the OpenStack |
+| | service name to restart the process. Lack of 'lsb_release' |
+| | on the host may cause failure to restart the process. |
+| | |
+--------------+--------------------------------------------------------------+
|test verdict | Fails only if SLA is not passed, or if there is a test case |
| | execution problem. |
diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc057.rst b/docs/testing/user/userguide/opnfv_yardstick_tc057.rst
index 2a4ce40c0..245a58e08 100644
--- a/docs/testing/user/userguide/opnfv_yardstick_tc057.rst
+++ b/docs/testing/user/userguide/opnfv_yardstick_tc057.rst
@@ -10,8 +10,11 @@ Yardstick Test Case Description TC057
+-----------------------------------------------------------------------------+
|OpenStack Controller Cluster Management Service High Availability |
-+==============+==============================================================+
-|test case id | |
+| |
++--------------+--------------------------------------------------------------+
+|test case id | OPNFV_YARDSTICK_TC057_HA: OpenStack Controller Cluster |
+| | Management Service High Availability |
+| | |
+--------------+--------------------------------------------------------------+
|test purpose | This test case will verify the quorum configuration of the |
| | cluster manager(pacemaker) on controller nodes. When a |
@@ -46,17 +49,21 @@ Yardstick Test Case Description TC057
| | -host: node1 |
+--------------+--------------------------------------------------------------+
|monitors | In this test case, a kind of monitor is needed: |
+| | |
| | 1. the "openstack-cmd" monitor constantly request a specific |
| | Openstack command, which needs two parameters: |
-| | 1) monitor_type: which is used for finding the monitor class |
-| | and related scripts. It should be always set to |
-| | "openstack-cmd" for this monitor. |
-| | 2) command_name: which is the command name used for request |
| | |
-| | In this case, the command_name of monitor1 should be services|
-| | that are managed by the cluster manager. (Since rabbitmq and |
-| | haproxy are managed by pacemaker, most Openstack Services |
-| | can be used to check high availability in this case) |
+| | 1. monitor_type: which is used for finding the monitor |
+| | class and related scripts. It should be always set to |
+| | "openstack-cmd" for this monitor. |
+| | 2. command_name: which is the command name used for |
+| | request |
+| | |
+| | In this case, the command_name of monitor1 should be |
+| | services that are managed by the cluster manager. |
+| | (Since rabbitmq and haproxy are managed by pacemaker, |
+| | most Openstack Services can be used to check high |
+| | availability in this case) |
| | |
| | (e.g.) |
| | monitor1: |
@@ -155,11 +162,17 @@ Yardstick Test Case Description TC057
| | Result: The test case is passed or not. |
| | |
+--------------+------+----------------------------------+--------------------+
-|post-action | It is the action when the test cases exist. It will check the|
-| | status of the cluster messaging process(corosync) on the |
+|post-action | It is the action when the test cases exit. It will check |
+| | the status of the cluster messaging process(corosync) on the |
| | host, and restart the process if it is not running for next |
-| | test cases |
+| | test cases. |
+| | |
+| | Notice: This post-action uses 'lsb_release' command to check |
+| | the host linux distribution and determine the OpenStack |
+| | service name to restart the process. Lack of 'lsb_release' |
+| | on the host may cause failure to restart the process. |
+| | |
+--------------+------+----------------------------------+--------------------+
|test verdict | Fails only if SLA is not passed, or if there is a test case |
| | execution problem. |
+| | |
+--------------+--------------------------------------------------------------+
diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc058.rst b/docs/testing/user/userguide/opnfv_yardstick_tc058.rst
index fb9a4c2d1..9e8427b50 100644
--- a/docs/testing/user/userguide/opnfv_yardstick_tc058.rst
+++ b/docs/testing/user/userguide/opnfv_yardstick_tc058.rst
@@ -10,8 +10,9 @@ Yardstick Test Case Description TC058
+-----------------------------------------------------------------------------+
|OpenStack Controller Virtual Router Service High Availability |
-+==============+==============================================================+
-|test case id | OPNFV_YARDSTICK_TC058:OpenStack Controller Virtual Router |
+| |
++--------------+--------------------------------------------------------------+
+|test case id | OPNFV_YARDSTICK_TC058: OpenStack Controller Virtual Router |
| | Service High Availability |
+--------------+--------------------------------------------------------------+
|test purpose | This test case will verify the high availability of virtual |
@@ -108,8 +109,9 @@ Yardstick Test Case Description TC058
|conditions | with cachestat included in the image. |
| | |
+--------------+--------------------------------------------------------------+
-|step 1 | Two host VMs are booted, these two hosts are in two different|
-| | networks, the networks are connected by a virtual router |
+|step 1 | Two host VMs are booted; these two hosts are in two |
+| | different networks, and the networks are connected by a |
+| | virtual router. |
| | |
+--------------+--------------------------------------------------------------+
|step 1 | start monitors: |
@@ -142,7 +144,13 @@ Yardstick Test Case Description TC058
| | Virtual machines and network created in the test case will |
| | be destoryed. |
| | |
+| | Notice: This post-action uses 'lsb_release' command to check |
+| | the host linux distribution and determine the OpenStack |
+| | service name to restart the process. Lack of 'lsb_release' |
+| | on the host may cause failure to restart the process. |
+| | |
+--------------+------+----------------------------------+--------------------+
|test verdict | Fails only if SLA is not passed, or if there is a test case |
| | execution problem. |
+| | |
+--------------+--------------------------------------------------------------+
diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc063.rst b/docs/testing/user/userguide/opnfv_yardstick_tc063.rst
index a77653aa5..7b8ee06c7 100644
--- a/docs/testing/user/userguide/opnfv_yardstick_tc063.rst
+++ b/docs/testing/user/userguide/opnfv_yardstick_tc063.rst
@@ -58,6 +58,7 @@ Yardstick Test Case Description TC063
| | * count: 15 - how many times to stat disk utilization |
| | type: int |
| | unit: na |
+| | |
| | There are default values for each above-mentioned option. |
| | Run in background with other test cases. |
| | |
diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc069.rst b/docs/testing/user/userguide/opnfv_yardstick_tc069.rst
index af0e64fbf..e1bfd5399 100644
--- a/docs/testing/user/userguide/opnfv_yardstick_tc069.rst
+++ b/docs/testing/user/userguide/opnfv_yardstick_tc069.rst
@@ -9,9 +9,6 @@ Yardstick Test Case Description TC069
.. _RAMspeed: http://alasir.com/software/ramspeed/
-.. table::
- :class: longtable
-
+-----------------------------------------------------------------------------+
|Memory Bandwidth |
| |
@@ -41,7 +38,8 @@ Yardstick Test Case Description TC069
| | * SLA (optional): 7000 (MBps) min_bandwidth: The minimum |
| | amount of memory bandwidth that is accepted. |
| | * type_id: 1 - runs a specified benchmark |
-| | (by an ID number): |
+| | (by an ID number):: |
+| | |
| | 1 -- INTmark [writing] 4 -- FLOATmark [writing] |
| | 2 -- INTmark [reading] 5 -- FLOATmark [reading] |
| | 3 -- INTmem 6 -- FLOATmem |
diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc073.rst b/docs/testing/user/userguide/opnfv_yardstick_tc073.rst
index ad4526405..873c5c99e 100644
--- a/docs/testing/user/userguide/opnfv_yardstick_tc073.rst
+++ b/docs/testing/user/userguide/opnfv_yardstick_tc073.rst
@@ -7,7 +7,7 @@
Yardstick Test Case Description TC073
*************************************
-.. _netperf: http://www.netperf.org/netperf/training/Netperf.html
+.. _netperf: https://hewlettpackard.github.io/netperf/
+-----------------------------------------------------------------------------+
|Throughput per NFVI node test |
diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc074.rst b/docs/testing/user/userguide/opnfv_yardstick_tc074.rst
index 92cd51439..8d025eecf 100644
--- a/docs/testing/user/userguide/opnfv_yardstick_tc074.rst
+++ b/docs/testing/user/userguide/opnfv_yardstick_tc074.rst
@@ -19,16 +19,27 @@ Yardstick Test Case Description TC074
|metric | Storage performance |
| | |
+--------------+--------------------------------------------------------------+
-|test purpose | Storperf integration with yardstick. The purpose of StorPerf |
-| | is to provide a tool to measure block and object storage |
-| | performance in an NFVI. When complemented with a |
-| | characterization of typical VF storage performance |
-| | requirements, it can provide pass/fail thresholds for test, |
-| | staging, and production NFVI environments. |
-| | |
-| | The benchmarks developed for block and object storage will |
-| | be sufficiently varied to provide a good preview of expected |
-| | storage performance behavior for any type of VNF workload. |
+|test purpose | To evaluate and report on the Cinder volume performance. |
+| | |
+| | This testcase integrates with OPNFV StorPerf to measure |
+| | block performance of the underlying Cinder drivers. Many |
+| | options are supported, and even the root disk (Glance |
+| | ephemeral storage can be profiled. |
+| | |
+| | The fundamental concept of the test case is to first fill |
+| | the volumes with random data to ensure reported metrics |
+| | are indicative of continued usage and not skewed by |
+| | transitional performance while the underlying storage |
+| | driver allocates blocks. |
+| | The metrics for filling the volumes with random data |
+| | are not reported in the final results. The test also |
+| | ensures the volumes are performing at a consistent level |
+| | of performance by measuring metrics every minute, and |
+| | comparing the trend of the metrics over the run. By |
+| | evaluating the min and max values, as well as the slope of |
+| | the trend, it can make the determination that the metrics |
+| | are stable, and not fluctuating beyond industry standard |
+| | norms. |
| | |
+--------------+--------------------------------------------------------------+
|configuration | file: opnfv_yardstick_tc074.yaml |
@@ -38,7 +49,8 @@ Yardstick Test Case Description TC074
| | * public_network: "ext-net" - name of public network |
| | * volume_size: 2 - cinder volume size |
| | * block_sizes: "4096" - data block size |
-| | * queue_depths: "4" |
+| | * queue_depths: "4" - the number of simultaneous I/Os |
+| | to perform at all times |
| | * StorPerf_ip: "192.168.200.2" |
| | * query_interval: 10 - state query interval |
| | * timeout: 600 - maximum allowed job time |
@@ -50,7 +62,11 @@ Yardstick Test Case Description TC074
| | performance in an NFVI. |
| | |
| | StorPerf is delivered as a Docker container from |
-| | https://hub.docker.com/r/opnfv/storperf/tags/. |
+| | https://hub.docker.com/r/opnfv/storperf-master/tags/. |
+| | |
+| | The underlying tool used is FIO, and StorPerf supports |
+| | any FIO option in order to tailor the test to the exact |
+| | workload needed. |
| | |
+--------------+--------------------------------------------------------------+
|references | Storperf_ |
@@ -75,33 +91,56 @@ Yardstick Test Case Description TC074
| | * workload=[workload module] |
| | If not specified, the default is to run all workloads. The |
| | workload types are: |
+| | |
| | - rs: 100% Read, sequential data |
| | - ws: 100% Write, sequential data |
| | - rr: 100% Read, random access |
| | - wr: 100% Write, random access |
| | - rw: 70% Read / 30% write, random access |
-| | * nossd: Do not perform SSD style preconditioning. |
-| | * nowarm: Do not perform a warmup prior to |
-| | measurements. |
+| | |
+| | |
+| | * workloads={json maps} |
+| | This parameter supersedes the workload and calls the V2.0 |
+| | API in StorPerf. It allows for greater control of the |
+| | parameters to be passed to FIO. For example, running a |
+| | random read/write with a mix of 90% read and 10% write |
+| | would be expressed as follows: |
+| | {"9010randrw": {"rw":"randrw","rwmixread": "90"}} |
+| | Note: This must be passed in as a string, so don't forget |
+| | to escape or otherwise properly deal with the quotes. |
+| | |
| | * report= [job_id] |
| | Query the status of the supplied job_id and report on |
| | metrics. If a workload is supplied, will report on only |
| | that subset. |
+| | * availability_zone: Specify the availability zone which |
+| | the stack will use to create instances. |
+| | * volume_type: |
+| | Cinder volumes can have different types, for example |
+| | encrypted vs. not encrypted. This parameter makes it |
+| | possible to profile the difference between the two. |
+| | * subnet_CIDR: Specify subnet CIDR of private network |
+| | * stack_name: Specify the name of the stack that will be |
+| | created (default: "StorperfAgentGroup") |
+| | * volume_count: Specify the number of volumes per |
+| | virtual machine |
| | |
| | There are default values for each above-mentioned option. |
| | |
+--------------+--------------------------------------------------------------+
|pre-test | If you do not have an Ubuntu 14.04 image in Glance, you will |
-|conditions | need to add one. A key pair for launching agents is also |
-| | required. |
+|conditions | need to add one. |
| | |
| | Storperf is required to be installed in the environment. |
| | There are two possible methods for Storperf installation: |
-| | Run container on Jump Host |
-| | Run container in a VM |
+| | |
+| | - Run container on Jump Host |
+| | - Run container in a VM |
| | |
| | Running StorPerf on Jump Host |
| | Requirements: |
+| | |
| | - Docker must be installed |
| | - Jump Host must have access to the OpenStack Controller |
| | API |
@@ -112,6 +151,7 @@ Yardstick Test Case Description TC074
| | |
| | Running StorPerf in a VM |
| | Requirements: |
+| | |
| | - VM has docker installed |
| | - VM has OpenStack Controller credentials and can |
| | communicate with the Controller API |
@@ -126,10 +166,21 @@ Yardstick Test Case Description TC074
|test sequence | description and expected result |
| | |
+--------------+--------------------------------------------------------------+
-|step 1 | The Storperf is installed and Ubuntu 14.04 image is stored |
-| | in glance. TC is invoked and logs are produced and stored. |
+|step 1 | Yardstick calls StorPerf to create the heat stack with the |
+| | number of VMs and size of Cinder volumes specified. The |
+| | VMs will be on their own private subnet, and take floating |
+| | IP addresses from the specified public network. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 2 | Yardstick calls StorPerf to fill all the volumes with |
+| | random data. |
| | |
-| | Result: Logs are stored. |
++--------------+--------------------------------------------------------------+
+|step 3 | Yardstick calls StorPerf to perform the series of tests |
+| | specified by the workload, queue depths and block sizes. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 4 | Yardstick calls StorPerf to delete the stack it created. |
| | |
+--------------+--------------------------------------------------------------+
|test verdict | None. Storage performance results are fetched and stored. |
diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc081.rst b/docs/testing/user/userguide/opnfv_yardstick_tc081.rst
index 793c3fdd5..df2192313 100644
--- a/docs/testing/user/userguide/opnfv_yardstick_tc081.rst
+++ b/docs/testing/user/userguide/opnfv_yardstick_tc081.rst
@@ -14,8 +14,8 @@ Yardstick Test Case Description TC081
|Network Latency |
| |
+--------------+--------------------------------------------------------------+
-|test case id | OPNFV_YARDSTICK_TC081_NETWORK_LATENCY_BETWEEN_CONTAINER_AND_ |
-| | VM |
+|test case id | OPNFV_YARDSTICK_TC081_NETWORK_LATENCY_BETWEEN_CONTAINER_AND |
+| | _VM |
| | |
+--------------+--------------------------------------------------------------+
|metric | RTT (Round Trip Time) |
diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc084.rst b/docs/testing/user/userguide/opnfv_yardstick_tc084.rst
index 2e7b28e25..b3d44c4bf 100644
--- a/docs/testing/user/userguide/opnfv_yardstick_tc084.rst
+++ b/docs/testing/user/userguide/opnfv_yardstick_tc084.rst
@@ -92,18 +92,19 @@ Yardstick Test Case Description TC084
+--------------+--------------------------------------------------------------+
|pre-test | To run and install SPEC CPU 2006, the following are |
|conditions | required: |
-| | * For SPECint 2006: Both C99 and C++98 compilers are |
-| | installed in VM images; |
-| | * For SPECfp 2006: All three of C99, C++98 and Fortran-95 |
-| | compilers installed in VM images; |
-| | * At least 4GB of disk space availabile on VM. |
-| | |
-| | gcc 4.8.* and g++ 4.8.* version have been tested in Ubuntu |
-| | 14.04, Ubuntu 16.04 and Redhat Enterprise Linux 7.4 image. |
-| | Higher gcc and g++ version may cause compiling error. |
-| | |
-| | For more SPEC CPU 2006 dependencies please visit |
-| | (https://www.spec.org/cpu2006/Docs/techsupport.html) |
+| | |
+| | * For SPECint 2006: Both C99 and C++98 compilers are |
+| | installed in VM images; |
+| | * For SPECfp 2006: All three of C99, C++98 and Fortran-95 |
+| | compilers installed in VM images; |
+| | * At least 4GB of disk space available on the VM. |
+| | |
+| | gcc 4.8.* and g++ 4.8.* versions have been tested on Ubuntu |
+| | 14.04, Ubuntu 16.04 and Red Hat Enterprise Linux 7.4 images. |
+| | Higher gcc and g++ versions may cause compilation errors. |
+| | |
+| | For more SPEC CPU 2006 dependencies please visit |
+| | (https://www.spec.org/cpu2006/Docs/techsupport.html) |
| | |
+--------------+--------------------------------------------------------------+
|test sequence | description and expected result |
diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc087.rst b/docs/testing/user/userguide/opnfv_yardstick_tc087.rst
new file mode 100644
index 000000000..c11252606
--- /dev/null
+++ b/docs/testing/user/userguide/opnfv_yardstick_tc087.rst
@@ -0,0 +1,191 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, Ericsson and others.
+
+*************************************
+Yardstick Test Case Description TC087
+*************************************
+
++-----------------------------------------------------------------------------+
+|SDN Controller resilience in non-HA configuration |
+| |
++--------------+--------------------------------------------------------------+
+|test case id | OPNFV_YARDSTICK_TC087: SDN controller resilience in |
+| | non-HA configuration |
+| | |
++--------------+--------------------------------------------------------------+
+|test purpose | This test validates that network data plane services are |
+| | highly available in the event of an SDN Controller failure, |
+| | even if the SDN controller is deployed in a non-HA |
+| | configuration. Specifically, the test verifies that |
+| | existing data plane connectivity is not impacted, i.e. all |
+| | configured network services such as DHCP, ARP, L2, |
+| | L3, Security Groups should continue to operate |
+| | between the existing VMs while the SDN controller is |
+| | offline or rebooting. |
+| | |
+| | The test also validates that new network service operations |
+| | (creating a new VM in the existing L2/L3 network or in a new |
+| | network, etc.) are operational after the SDN controller |
+| | has recovered from a failure. |
+| | |
++--------------+--------------------------------------------------------------+
+|test method | This test case fails the SDN controller service running |
+| | on the OpenStack controller node, then checks if already |
+| | configured DHCP/ARP/L2/L3/SNAT connectivity is not |
+| | impacted between VMs and the system is able to execute |
+| | new virtual network operations once the SDN controller |
+| | is restarted and has fully recovered. |
+| | |
++--------------+--------------------------------------------------------------+
+|attackers | In this test case, an attacker called “kill-process” is |
+| | needed. This attacker includes three parameters: |
+| | |
+| | 1. fault_type: which is used for finding the attacker's |
+| | scripts. It should be set to 'kill-process' in this test |
+| | |
+| | 2. process_name: should be set to the name of the SDN |
+| | controller process |
+| | |
+| | 3. host: which is the name of a control node where the |
+| | SDN controller process is running |
+| | |
+| | e.g. -fault_type: "kill-process" |
+| | -process_name: "opendaylight" |
+| | -host: node1 |
+| | |
++--------------+--------------------------------------------------------------+
+|monitors | This test case utilizes two monitors of type "ip-status" |
+| | and one monitor of type "process" to track the following |
+| | conditions: |
+| | |
+| | 1. "ping_same_network_l2": monitor ICMP traffic between |
+| | VMs in the same Neutron network |
+| | |
+| | 2. "ping_external_snat": monitor ICMP traffic from VMs to |
+| | an external host on the Internet to verify SNAT |
+| | functionality. |
+| | |
+| | 3. "SDN controller process monitor": a monitor checking the |
+| | state of a specified SDN controller process. It measures |
+| | the recovery time of the given process. |
+| | |
+| | Monitors of type "ip-status" use the "ping" utility to |
+| | verify reachability of a given target IP. |
+| | |
++--------------+--------------------------------------------------------------+
+|operations | In this test case, the following operations are needed: |
+| | |
+| | 1. "nova-create-instance-in_network": create a VM instance |
+| | in one of the existing Neutron networks. |
+| | |
++--------------+--------------------------------------------------------------+
+|metrics | In this test case, there are two metrics: |
+| | |
+| | 1. process_recover_time: which indicates the maximum |
+| | time (seconds) from the process being killed to |
+| | recovered |
+| | |
+| | 2. packet_drop: measures the packets that have been dropped |
+| | by the monitors using pktgen. |
+| | |
++--------------+--------------------------------------------------------------+
+|test tool | Developed by the project. Please see folder: |
+| | "yardstick/benchmark/scenarios/availability/ha_tools" |
+| | |
++--------------+--------------------------------------------------------------+
+|references | none |
+| | |
++--------------+--------------------------------------------------------------+
+|configuration | This test case needs two configuration files: |
+| | |
+| | 1. test case file: opnfv_yardstick_tc087.yaml |
+| | |
+| | - Attackers: see above “attackers” description |
+| | - waiting_time: which is the time (seconds) from the |
+| | process being killed to stopping the monitors |
+| | - Monitors: see above “monitors” description |
+| | - SLA: see above “metrics” description |
+| | |
+| | 2. POD file: pod.yaml The POD configuration should be |
+| | recorded in pod.yaml first. The “host” item in this test |
+| | case will use the node name in the pod.yaml. |
+| | |
++--------------+--------------------------------------------------------------+
+|test sequence | Description and expected result |
+| | |
++--------------+--------------------------------------------------------------+
+|pre-action | 1. The OpenStack cluster is set up with a single SDN |
+| | controller in a non-HA configuration. |
+| | |
+| | 2. One or more Neutron networks are created with two or |
+| | more VMs attached to each of the Neutron networks. |
+| | |
+| | 3. The Neutron networks are attached to a Neutron router |
+| | which is attached to an external network towards the |
+| | DCGW. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 1 | Start IP connectivity monitors: |
+| | 1. Check the L2 connectivity between the VMs in the same |
+| | Neutron network. |
+| | |
+| | 2. Check connectivity from one VM to an external host on |
+| | the Internet to verify SNAT functionality. |
+| | |
+| | Result: The monitor info will be collected. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 2 | Start attacker: |
+| | SSH connect to the VIM node and kill the SDN controller |
+| | process |
+| | |
+| | Result: the SDN controller service will be shut down |
+| | |
++--------------+--------------------------------------------------------------+
+|step 3 | Verify the results of the IP connectivity monitors. |
+| | |
+| | Result: The outage_time metric reported by the monitors |
+| | is zero. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 4 | Restart the SDN controller. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 5 | Create a new VM in the existing Neutron network |
+| | |
++--------------+--------------------------------------------------------------+
+|step 6 | Verify connectivity between VMs as follows: |
+| | 1. Check the L2 connectivity between the previously |
+| | existing VM and the newly created VM on the same |
+| | Neutron network by sending ICMP messages |
+| | |
++--------------+--------------------------------------------------------------+
+|step 7 | Stop IP connectivity monitors after a period of time |
+| | specified by “waiting_time” |
+| | |
+| | Result: The monitor info will be aggregated |
+| | |
++--------------+--------------------------------------------------------------+
+|step 8 | Verify the IP connectivity monitor results |
+| | |
+| | Result: IP connectivity monitor should not have any packet |
+| | drop failures reported |
+| | |
++--------------+--------------------------------------------------------------+
+|test verdict | This test fails if the SLAs are not met or if there is a |
+| | test case execution problem. The SLAs are defined as follows |
+| | for this test: |
+| | |
+| | * SDN Controller recovery |
+| | |
+| | * process_recover_time <= 30 sec |
+| | |
+| | * no impact on data plane connectivity during SDN |
+| | controller failure and recovery. |
+| | |
+| | * packet_drop == 0 |
+| | |
++--------------+--------------------------------------------------------------+
+
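+Below is a minimal, illustrative sketch of how the attacker, monitors and SLA
+described in this test case could be expressed in the test case file
+``opnfv_yardstick_tc087.yaml``. The field names follow the descriptions in
+the table above; the exact schema is defined by the Yardstick availability
+scenarios, so treat this as an assumption rather than a verbatim excerpt::
+
+  attackers:
+    - fault_type: "kill-process"
+      process_name: "opendaylight"    # name of the SDN controller process
+      host: node1                     # control node running the controller
+  monitors:
+    - monitor_type: "ip-status"       # "ping_same_network_l2"
+      # target: IP of a VM in the same Neutron network (field name assumed)
+    - monitor_type: "ip-status"       # "ping_external_snat"
+      # target: an external IP on the Internet, to verify SNAT
+    - monitor_type: "process"         # SDN controller process monitor
+      process_name: "opendaylight"
+      host: node1
+  sla:
+    process_recover_time: 30          # seconds, per the test verdict above
+    packet_drop: 0
+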
diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc088.rst b/docs/testing/user/userguide/opnfv_yardstick_tc088.rst
new file mode 100644
index 000000000..2423a6b31
--- /dev/null
+++ b/docs/testing/user/userguide/opnfv_yardstick_tc088.rst
@@ -0,0 +1,129 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, Yin Kanglin and others.
+.. 14_ykl@tongji.edu.cn
+
+*************************************
+Yardstick Test Case Description TC088
+*************************************
+
++-----------------------------------------------------------------------------+
+|Control Node Openstack Service High Availability - Nova Scheduler |
+| |
++--------------+--------------------------------------------------------------+
+|test case id | OPNFV_YARDSTICK_TC088: Control node Openstack service down - |
+| | nova scheduler |
++--------------+--------------------------------------------------------------+
+|test purpose | This test case will verify the high availability of the |
+| | compute scheduler service provided by OpenStack (nova- |
+| | scheduler) on control node. |
+| | |
++--------------+--------------------------------------------------------------+
+|test method | This test case kills the processes of nova-scheduler service |
+| | on a selected control node, then checks whether the request |
+| | of the related OpenStack command is OK and the killed |
+| | processes are recovered. |
+| | |
++--------------+--------------------------------------------------------------+
+|attackers | In this test case, an attacker called "kill-process" is |
+| | needed. This attacker includes three parameters: |
+| | 1) fault_type: which is used for finding the attacker's |
+| | scripts. It should always be set to "kill-process" in this |
+| | test case. |
+| | 2) process_name: which is the process name of the specified |
+| | OpenStack service. If there are multiple processes using |
+| | the same name on the host, all of them are killed by this |
+| | attacker. |
+| | In this case, this parameter should always be set to |
+| | "nova-scheduler". |
+| | 3) host: which is the name of a control node being attacked. |
+| | |
+| | e.g. |
+| | -fault_type: "kill-process" |
+| | -process_name: "nova-scheduler" |
+| | -host: node1 |
+| | |
++--------------+--------------------------------------------------------------+
+|monitors | In this test case, one kind of monitor is needed: |
+| | 1. the "process" monitor check whether a process is running |
+| | on a specific node, which needs three parameters: |
+| | 1) monitor_type: which used for finding the monitor class and|
+| | related scripts. It should be always set to "process" |
+| | for this monitor. |
+| | 2) process_name: which is the process name for monitor |
+| | 3) host: which is the name of the node running the process |
+| | |
+| | e.g. |
+| | monitor: |
+| | -monitor_type: "process" |
+| | -process_name: "nova-scheduler" |
+| | -host: node1 |
+| | |
++--------------+--------------------------------------------------------------+
+|operations | In this test case, the following operations are needed: |
+| | 1. "nova-create-instance": create a VM instance to check |
+| | whether the nova-scheduler works normally. |
+| | |
++--------------+--------------------------------------------------------------+
+|metrics | In this test case, there is one metric: |
+| | 1)process_recover_time: which indicates the maximum time |
+| | (seconds) from the process being killed to recovered |
+| | |
++--------------+--------------------------------------------------------------+
+|test tool | Developed by the project. Please see folder: |
+| | "yardstick/benchmark/scenarios/availability/ha_tools" |
+| | |
++--------------+--------------------------------------------------------------+
+|references | ETSI NFV REL001 |
+| | |
++--------------+--------------------------------------------------------------+
+|configuration | This test case needs two configuration files: |
+| | 1) test case file: opnfv_yardstick_tc088.yaml |
+| | -Attackers: see above "attackers" description |
+| | -waiting_time: which is the time (seconds) from the process |
+| | being killed to stopping the monitors |
+| | -Monitors: see above "monitors" description |
+| | -SLA: see above "metrics" description |
+| | |
+| | 2)POD file: pod.yaml |
+| | The POD configuration should be recorded in pod.yaml first. |
+| | The "host" item in this test case will use the node name in |
+| | the pod.yaml. |
+| | |
++--------------+--------------------------------------------------------------+
+|test sequence | description and expected result |
+| | |
++--------------+--------------------------------------------------------------+
+|step 1 | do attacker: connect the host through SSH, and then execute |
+| | the kill process script with param value specified by |
+| | "process_name" |
+| | |
+| | Result: Process will be killed. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 2 | start monitors: |
+| | each monitor will run in an independent process |
+| | |
+| | Result: The monitor info will be collected. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 3 | create a new instance to check whether the nova scheduler |
+| | works normally. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 4 | stop the monitor after a period of time specified by |
+| | "waiting_time" |
+| | |
+| | Result: The monitor info will be aggregated. |
+| | |
++--------------+--------------------------------------------------------------+
+|post-action | It is the action when the test case exits. It will check the |
+| | status of the specified process on the host, and restart the |
+| | process if it is not running, for the next test cases. |
+| | |
++--------------+--------------------------------------------------------------+
+|test verdict | Fails only if SLA is not passed, or if there is a test case |
+| | execution problem. |
+| | |
++--------------+--------------------------------------------------------------+
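+
+A minimal, illustrative sketch of how the attacker, monitor, operation and
+SLA described in this test case could look in ``opnfv_yardstick_tc088.yaml``.
+The field names follow the descriptions in the table above; the exact schema
+is defined by the Yardstick availability scenarios, so this is an assumption,
+not a verbatim excerpt::
+
+  attackers:
+    - fault_type: "kill-process"
+      process_name: "nova-scheduler"
+      host: node1                     # control node to be attacked
+  monitors:
+    - monitor_type: "process"
+      process_name: "nova-scheduler"
+      host: node1
+  operations:
+    - operation_type: "nova-create-instance"   # field name assumed
+  sla:
+    process_recover_time: 30          # seconds; example threshold only
+  waiting_time: 30                    # seconds; example value only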
diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc089.rst b/docs/testing/user/userguide/opnfv_yardstick_tc089.rst
new file mode 100644
index 000000000..0a8b2570b
--- /dev/null
+++ b/docs/testing/user/userguide/opnfv_yardstick_tc089.rst
@@ -0,0 +1,129 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, Yin Kanglin and others.
+.. 14_ykl@tongji.edu.cn
+
+*************************************
+Yardstick Test Case Description TC089
+*************************************
+
++-----------------------------------------------------------------------------+
+|Control Node Openstack Service High Availability - Nova Conductor |
+| |
++--------------+--------------------------------------------------------------+
+|test case id | OPNFV_YARDSTICK_TC089: Control node Openstack service down - |
+| | nova conductor |
++--------------+--------------------------------------------------------------+
+|test purpose | This test case will verify the high availability of the |
+| | compute database proxy service provided by OpenStack (nova- |
+| | conductor) on control node. |
+| | |
++--------------+--------------------------------------------------------------+
+|test method | This test case kills the processes of nova-conductor service |
+| | on a selected control node, then checks whether the request |
+| | of the related OpenStack command is OK and the killed |
+| | processes are recovered. |
+| | |
++--------------+--------------------------------------------------------------+
+|attackers | In this test case, an attacker called "kill-process" is |
+| | needed. This attacker includes three parameters: |
+| | 1) fault_type: which is used for finding the attacker's |
+| | scripts. It should always be set to "kill-process" in this |
+| | test case. |
+| | 2) process_name: which is the process name of the specified |
+| | OpenStack service. If there are multiple processes using |
+| | the same name on the host, all of them are killed by this |
+| | attacker. |
+| | In this case, this parameter should always be set to |
+| | "nova-conductor". |
+| | 3) host: which is the name of a control node being attacked. |
+| | |
+| | e.g. |
+| | -fault_type: "kill-process" |
+| | -process_name: "nova-conductor" |
+| | -host: node1 |
+| | |
++--------------+--------------------------------------------------------------+
+|monitors | In this test case, one kind of monitor is needed: |
+| | 1. the "process" monitor check whether a process is running |
+| | on a specific node, which needs three parameters: |
+| | 1) monitor_type: which used for finding the monitor class and|
+| | related scripts. It should be always set to "process" |
+| | for this monitor. |
+| | 2) process_name: which is the process name for monitor |
+| | 3) host: which is the name of the node running the process |
+| | |
+| | e.g. |
+| | monitor: |
+| | -monitor_type: "process" |
+| | -process_name: "nova-conductor" |
+| | -host: node1 |
+| | |
++--------------+--------------------------------------------------------------+
+|operations | In this test case, the following operations are needed: |
+| | 1. "nova-create-instance": create a VM instance to check |
+| | whether the nova-conductor works normally. |
+| | |
++--------------+--------------------------------------------------------------+
+|metrics | In this test case, there is one metric: |
+| | 1)process_recover_time: which indicates the maximum time |
+| | (seconds) from the process being killed to recovered |
+| | |
++--------------+--------------------------------------------------------------+
+|test tool | Developed by the project. Please see folder: |
+| | "yardstick/benchmark/scenarios/availability/ha_tools" |
+| | |
++--------------+--------------------------------------------------------------+
+|references | ETSI NFV REL001 |
+| | |
++--------------+--------------------------------------------------------------+
+|configuration | This test case needs two configuration files: |
+| | 1) test case file: opnfv_yardstick_tc089.yaml |
+| | -Attackers: see above "attackers" description |
+| | -waiting_time: which is the time (seconds) from the process |
+| | being killed to stopping the monitors |
+| | -Monitors: see above "monitors" description |
+| | -SLA: see above "metrics" description |
+| | |
+| | 2)POD file: pod.yaml |
+| | The POD configuration should be recorded in pod.yaml first. |
+| | The "host" item in this test case will use the node name in |
+| | the pod.yaml. |
+| | |
++--------------+--------------------------------------------------------------+
+|test sequence | description and expected result |
+| | |
++--------------+--------------------------------------------------------------+
+|step 1 | do attacker: connect the host through SSH, and then execute |
+| | the kill process script with param value specified by |
+| | "process_name" |
+| | |
+| | Result: Process will be killed. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 2 | start monitors: |
+| | each monitor will run in an independent process |
+| | |
+| | Result: The monitor info will be collected. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 3 | create a new instance to check whether the nova conductor |
+| | works normally. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 4 | stop the monitor after a period of time specified by |
+| | "waiting_time" |
+| | |
+| | Result: The monitor info will be aggregated. |
+| | |
++--------------+--------------------------------------------------------------+
+|post-action | It is the action when the test case exits. It will check the |
+| | status of the specified process on the host, and restart the |
+| | process if it is not running, for the next test cases. |
+| | |
++--------------+--------------------------------------------------------------+
+|test verdict | Fails only if SLA is not passed, or if there is a test case |
+| | execution problem. |
+| | |
++--------------+--------------------------------------------------------------+
diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc090.rst b/docs/testing/user/userguide/opnfv_yardstick_tc090.rst
new file mode 100644
index 000000000..1f8747b2b
--- /dev/null
+++ b/docs/testing/user/userguide/opnfv_yardstick_tc090.rst
@@ -0,0 +1,151 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, Yin Kanglin and others.
+.. 14_ykl@tongji.edu.cn
+
+*************************************
+Yardstick Test Case Description TC090
+*************************************
+
++-----------------------------------------------------------------------------+
+|Control Node OpenStack Service High Availability - Database Instances |
+| |
++--------------+--------------------------------------------------------------+
+|test case id | OPNFV_YARDSTICK_TC090: Control node OpenStack service down - |
+| | database instances |
++--------------+--------------------------------------------------------------+
+|test purpose | This test case will verify the high availability of the |
+| | database instances used by OpenStack (mysql) on control |
+| | node. |
+| | |
++--------------+--------------------------------------------------------------+
+|test method | This test case kills the processes of database service on a |
+| | selected control node, then checks whether the request of |
+| | the related OpenStack command is OK and the killed processes |
+| | are recovered. |
+| | |
++--------------+--------------------------------------------------------------+
+|attackers | In this test case, an attacker called "kill-process" is |
+| | needed. This attacker includes three parameters: |
+| | 1) fault_type: which is used for finding the attacker's |
+| | scripts. It should always be set to "kill-process" in this |
+| | test case. |
+| | 2) process_name: which is the process name of the specified |
+| | OpenStack service. If there are multiple processes using |
+| | the same name on the host, all of them are killed by this |
+| | attacker. |
+| | In this case, this parameter should always be set to the |
+| | name of the database service of OpenStack. |
+| | 3) host: which is the name of a control node being attacked. |
+| | |
+| | e.g. |
+| | -fault_type: "kill-process" |
+| | -process_name: "mysql" |
+| | -host: node1 |
+| | |
++--------------+--------------------------------------------------------------+
+|monitors | In this test case, two kinds of monitor are needed: |
+| | 1. the "openstack-cmd" monitor constantly request a specific |
+| | Openstack command, which needs two parameters: |
+| | 1) monitor_type: which is used for finding the monitor class |
+| | and related scritps. It should be always set to |
+| | "openstack-cmd" for this monitor. |
+| | 2) command_name: which is the command name used for request. |
+| | In this case, the command names should be OpenStack |
+| | commands of different components, in order to check their |
+| | database connections. |
+| | |
+| | 2. the "process" monitor check whether a process is running |
+| | on a specific node, which needs three parameters: |
+| | 1) monitor_type: which used for finding the monitor class and|
+| | related scripts. It should be always set to "process" |
+| | for this monitor. |
+| | 2) process_name: which is the process name for monitor |
+| | 3) host: which is the name of the node running the process |
+| | |
+| | The example monitors are shown as follows; there are four |
+| | instances of the "openstack-cmd" monitor, in order to check |
+| | the database connection of different OpenStack components. |
+| | |
+| | monitor1: |
+| | -monitor_type: "openstack-cmd" |
+| | -api_name: "openstack image list" |
+| | monitor2: |
+| | -monitor_type: "openstack-cmd" |
+| | -api_name: "openstack router list" |
+| | monitor3: |
+| | -monitor_type: "openstack-cmd" |
+| | -api_name: "openstack stack list" |
+| | monitor4: |
+| | -monitor_type: "openstack-cmd" |
+| | -api_name: "openstack volume list" |
+| | monitor5: |
+| | -monitor_type: "process" |
+| | -process_name: "mysql" |
+| | -host: node1 |
+| | |
++--------------+--------------------------------------------------------------+
+|metrics | In this test case, there are two metrics: |
+| | 1)service_outage_time: which indicates the maximum outage |
+| | time (seconds) of the specified OpenStack command request. |
+| | 2)process_recover_time: which indicates the maximum time |
+| | (seconds) from the process being killed to recovered |
+| | |
++--------------+--------------------------------------------------------------+
+|test tool | Developed by the project. Please see folder: |
+| | "yardstick/benchmark/scenarios/availability/ha_tools" |
+| | |
++--------------+--------------------------------------------------------------+
+|references | ETSI NFV REL001 |
+| | |
++--------------+--------------------------------------------------------------+
+|configuration | This test case needs two configuration files: |
+| | 1) test case file: opnfv_yardstick_tc090.yaml |
+| | -Attackers: see above "attackers" description |
+| | -waiting_time: which is the time (seconds) from the process |
+| | being killed to stopping the monitors |
+| | -Monitors: see above "monitors" description |
+| | -SLA: see above "metrics" description |
+| | |
+| | 2)POD file: pod.yaml |
+| | The POD configuration should be recorded in pod.yaml first. |
+| | The "host" item in this test case will use the node name in |
+| | the pod.yaml. |
+| | |
++--------------+--------------------------------------------------------------+
+|test sequence | description and expected result |
+| | |
++--------------+--------------------------------------------------------------+
+|step 1 | start monitors: |
+| | each monitor will run in an independent process |
+| | |
+| | Result: The monitor info will be collected. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 2 | do attacker: connect the host through SSH, and then execute |
+| | the kill process script with param value specified by |
+| | "process_name" |
+| | |
+| | Result: Process will be killed. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 3 | stop monitors after a period of time specified by |
+| | "waiting_time" |
+| | |
+| | Result: The monitor info will be aggregated. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 4 | verify the SLA |
+| | |
+| | Result: The test case is passed or not. |
+| | |
++--------------+--------------------------------------------------------------+
+|post-action | It is the action when the test case exits. It will check the |
+| | status of the specified process on the host, and restart the |
+| | process if it is not running, for the next test cases. |
+| | |
++--------------+--------------------------------------------------------------+
+|test verdict | Fails only if SLA is not passed, or if there is a test case |
+| | execution problem. |
+| | |
++--------------+--------------------------------------------------------------+
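+
+A minimal, illustrative sketch of the attacker and the five monitors
+described in this test case, as they could appear in
+``opnfv_yardstick_tc090.yaml``. The field names follow the descriptions in
+the table above; the exact schema is defined by the Yardstick availability
+scenarios, and the SLA values are examples only::
+
+  attackers:
+    - fault_type: "kill-process"
+      process_name: "mysql"           # name of the OpenStack database service
+      host: node1
+  monitors:
+    - monitor_type: "openstack-cmd"
+      command_name: "openstack image list"
+    - monitor_type: "openstack-cmd"
+      command_name: "openstack router list"
+    - monitor_type: "openstack-cmd"
+      command_name: "openstack stack list"
+    - monitor_type: "openstack-cmd"
+      command_name: "openstack volume list"
+    - monitor_type: "process"
+      process_name: "mysql"
+      host: node1
+  sla:
+    service_outage_time: 5            # seconds; example threshold only
+    process_recover_time: 30          # seconds; example threshold only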
diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc091.rst b/docs/testing/user/userguide/opnfv_yardstick_tc091.rst
new file mode 100644
index 000000000..8e89b6425
--- /dev/null
+++ b/docs/testing/user/userguide/opnfv_yardstick_tc091.rst
@@ -0,0 +1,138 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, Yin Kanglin and others.
+.. 14_ykl@tongji.edu.cn
+
+*************************************
+Yardstick Test Case Description TC091
+*************************************
+
++-----------------------------------------------------------------------------+
+|Control Node Openstack Service High Availability - Heat Api |
+| |
++--------------+--------------------------------------------------------------+
+|test case id | OPNFV_YARDSTICK_TC091: Control node OpenStack service down - |
+| | heat api |
++--------------+--------------------------------------------------------------+
+|test purpose | This test case will verify the high availability of the |
+| | orchestration service provided by OpenStack (heat-api) on |
+| | control node. |
+| | |
++--------------+--------------------------------------------------------------+
+|test method | This test case kills the processes of heat-api service on a |
+| | selected control node, then checks whether the request of |
+| | the related OpenStack command is OK and the killed processes |
+| | are recovered. |
+| | |
++--------------+--------------------------------------------------------------+
+|attackers | In this test case, an attacker called "kill-process" is |
+| | needed. This attacker includes three parameters: |
+| | 1) fault_type: which is used for finding the attacker's |
+| | scripts. It should always be set to "kill-process" in this |
+| | test case. |
+| | 2) process_name: which is the process name of the specified |
+| | OpenStack service. If there are multiple processes using |
+| | the same name on the host, all of them are killed by this |
+| | attacker. |
+| | In this case, this parameter should always be set to |
+| | "heat-api". |
+| | 3) host: which is the name of a control node being attacked. |
+| | |
+| | e.g. |
+| | -fault_type: "kill-process" |
+| | -process_name: "heat-api" |
+| | -host: node1 |
+| | |
++--------------+--------------------------------------------------------------+
+|monitors | In this test case, two kinds of monitor are needed: |
+| | 1. the "openstack-cmd" monitor constantly request a specific |
+| | OpenStack command, which needs two parameters: |
+| | 1) monitor_type: which is used for finding the monitor class |
+| | and related scripts. It should be always set to |
+| | "openstack-cmd" for this monitor. |
+| | 2) command_name: which is the command name used for request. |
+| | In this case, the command name should be a heat related |
+| | command. |
+| | |
+| | 2. the "process" monitor check whether a process is running |
+| | on a specific node, which needs three parameters: |
+| | 1) monitor_type: which used for finding the monitor class and|
+| | related scripts. It should be always set to "process" |
+| | for this monitor. |
+| | 2) process_name: which is the process name for monitor |
+| | 3) host: which is the name of the node running the process |
+| | |
+| | e.g. |
+| | monitor1: |
+| | -monitor_type: "openstack-cmd" |
+| | -command_name: "heat stack list" |
+| | monitor2: |
+| | -monitor_type: "process" |
+| | -process_name: "heat-api" |
+| | -host: node1 |
+| | |
++--------------+--------------------------------------------------------------+
+|metrics | In this test case, there are two metrics: |
+| | 1)service_outage_time: which indicates the maximum outage |
+| | time (seconds) of the specified OpenStack command request. |
+| | 2)process_recover_time: which indicates the maximum time |
+| | (seconds) from the process being killed to recovered |
+| | |
++--------------+--------------------------------------------------------------+
+|test tool | Developed by the project. Please see folder: |
+| | "yardstick/benchmark/scenarios/availability/ha_tools" |
+| | |
++--------------+--------------------------------------------------------------+
+|references | ETSI NFV REL001 |
+| | |
++--------------+--------------------------------------------------------------+
+|configuration | This test case needs two configuration files: |
+| | 1) test case file: opnfv_yardstick_tc091.yaml |
+| | -Attackers: see above "attackers" description |
+| | -waiting_time: which is the time (seconds) from the process |
+| | being killed until the monitor is stopped |
+| | -Monitors: see above "monitors" description |
+| | -SLA: see above "metrics" description |
+| | |
+| | 2)POD file: pod.yaml |
+| | The POD configuration should be recorded in pod.yaml first. |
+| | The "host" item in this test case will use the node name in |
+| | the pod.yaml. |
+| | |
++--------------+--------------------------------------------------------------+
+|test sequence | description and expected result |
+| | |
++--------------+--------------------------------------------------------------+
+|step 1 | start monitors: |
+| | each monitor will run in an independent process |
+| | |
+| | Result: The monitor info will be collected. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 2 | do attacker: connect the host through SSH, and then execute |
+| | the kill process script with param value specified by |
+| | "process_name" |
+| | |
+| | Result: Process will be killed. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 3 | stop monitors after a period of time specified by |
+| | "waiting_time" |
+| | |
+| | Result: The monitor info will be aggregated. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 4 | verify the SLA |
+| | |
+| | Result: The test case is passed or not. |
+| | |
++--------------+--------------------------------------------------------------+
+|post-action | It is the action when the test case exits. It will check the |
+| | status of the specified process on the host, and restart the |
+| | process if it is not running, for the next test cases. |
+| | |
++--------------+--------------------------------------------------------------+
+|test verdict | Fails only if SLA is not passed, or if there is a test case |
+| | execution problem. |
+| | |
++--------------+--------------------------------------------------------------+
diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc092.rst b/docs/testing/user/userguide/opnfv_yardstick_tc092.rst
new file mode 100644
index 000000000..9c833fa23
--- /dev/null
+++ b/docs/testing/user/userguide/opnfv_yardstick_tc092.rst
@@ -0,0 +1,201 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, Ericsson and others.
+
+*************************************
+Yardstick Test Case Description TC092
+*************************************
+
++-----------------------------------------------------------------------------+
+|SDN Controller resilience in HA configuration |
+| |
++--------------+--------------------------------------------------------------+
+|test case id | OPNFV_YARDSTICK_TC092: SDN controller resilience and high |
+| | availability in HA configuration |
+| | |
++--------------+--------------------------------------------------------------+
+|test purpose | This test validates SDN controller node high availability by |
+| | verifying there is no impact on the data plane connectivity |
+| | when one SDN controller fails in a HA configuration, |
+| | i.e. all existing configured network services DHCP, ARP, L2, |
+| | L3VPN, Security Groups should continue to operate |
+| | between the existing VMs while one SDN controller instance |
+| | is offline and rebooting. |
+| | |
+| | The test also validates that network service operations such |
+| | as creating a new VM in an existing or new L2 network |
+| | remain operational while one instance of the |
+| | SDN controller is offline and recovers from the failure. |
+| | |
++--------------+--------------------------------------------------------------+
+|test method | This test case: |
+| | 1. fails one instance of a SDN controller cluster running |
+| | in a HA configuration on the OpenStack controller node |
+| | |
+| | 2. checks if already configured L2 connectivity between |
+| | existing VMs is not impacted |
+| | |
+| | 3. verifies that the system never loses the ability to |
+| | execute virtual network operations, even when the |
+| | failed SDN Controller is still recovering |
+| | |
++--------------+--------------------------------------------------------------+
+|attackers | In this test case, an attacker called “kill-process” is |
+| | needed. This attacker includes three parameters: |
+| | |
+| | 1. ``fault_type``: which is used for finding the attacker's |
+| | scripts. It should be set to 'kill-process' in this test |
+| | |
+| | 2. ``process_name``: should be set to the name of the SDN |
+| | controller process |
+| | |
+| | 3. ``host``: which is the name of a control node where |
+| | opendaylight process is running |
+| | |
+| | example: |
+| | - ``fault_type``: “kill-process” |
+| | - ``process_name``: “opendaylight-karaf” (TBD) |
+| | - ``host``: node1 |
+| | |
++--------------+--------------------------------------------------------------+
+|monitors | In this test case, the following monitors are needed |
+| | 1. ``ping_same_network_l2``: monitor pinging traffic |
+| | between the VMs in same neutron network |
+| | |
+| | 2. ``ping_external_snat``: monitor ping traffic from VMs to |
+| | external destinations (e.g. google.com) |
+| | |
+| | 3. ``SDN controller process monitor``: a monitor checking |
+| | the state of a specified SDN controller process. It |
+| | measures the recovery time of the given process. |
+| | |
++--------------+--------------------------------------------------------------+
+|operations | In this test case, the following operations are needed: |
+| | 1. "nova-create-instance-in_network": create a VM instance |
+| | in one of the existing neutron networks. |
+| | |
++--------------+--------------------------------------------------------------+
+|metrics | In this test case, there are two metrics: |
+| | 1. process_recover_time: which indicates the maximum |
+| | time (seconds) from the process being killed to |
+| | recovered |
+| | |
+| | 2. packet_drop: measures the packets that have been dropped |
+| | by the monitors using pktgen. |
+| | |
++--------------+--------------------------------------------------------------+
+|test tool | Developed by the project. Please see folder: |
+| | "yardstick/benchmark/scenarios/availability/ha_tools" |
+| | |
++--------------+--------------------------------------------------------------+
+|references | TBD |
+| | |
++--------------+--------------------------------------------------------------+
+|configuration | This test case needs two configuration files: |
+| | 1. test case file: opnfv_yardstick_tc092.yaml |
+| | |
+| | - Attackers: see above “attackers” description |
+| | - Monitors: see above “monitors” description |
+| | |
+| | - waiting_time: which is the time (seconds) from the |
+| | process being killed to stopping the monitors |
+| | |
+| | - SLA: see above “metrics” description |
+| | |
+| | 2. POD file: pod.yaml The POD configuration should be |
+| | recorded in pod.yaml first. The “host” item in this test |
+| | case will use the node name in the pod.yaml. |
+| | |
++--------------+--------------------------------------------------------------+
+|test sequence | Description and expected result |
+| | |
++--------------+--------------------------------------------------------------+
+|pre-action | 1. The OpenStack cluster is set up with an SDN controller |
+| | running in a three node cluster configuration. |
+| | |
+| | 2. One or more neutron networks are created with two or |
+| | more VMs attached to each of the neutron networks. |
+| | |
+| | 3. The neutron networks are attached to a neutron router |
+| | which is attached to an external network towards the |
+| | DCGW. |
+| | |
+| | 4. The master node of SDN controller cluster is known. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 1 | Start ip connectivity monitors: |
+| | 1. Check the L2 connectivity between the VMs in the same |
+| | neutron network. |
+| | |
+| | 2. Check the external connectivity of the VMs. |
+| | |
+| | Each monitor runs in an independent process. |
+| | |
+| | Result: The monitor info will be collected. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 2 | Start attacker: |
+| | SSH to the VIM node and kill the SDN controller process |
+| | determined in the pre-action. |
+| | |
+| | Result: One SDN controller service will be shut down |
+| | |
++--------------+--------------------------------------------------------------+
+|step 3 | Restart the SDN controller. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 4 | Create a new VM in the existing Neutron network while the |
+| | SDN controller is offline or still recovering. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 5 | Stop IP connectivity monitors after a period of time |
+| | specified by “waiting_time” |
+| | |
+| | Result: The monitor info will be aggregated |
+| | |
++--------------+--------------------------------------------------------------+
+|step 6 | Verify the IP connectivity monitor result |
+| | |
+| | Result: IP connectivity monitor should not have any packet |
+| | drop failures reported |
+| | |
++--------------+--------------------------------------------------------------+
+|step 7 | Verify process_recover_time, which indicates the maximum |
+| | time (seconds) from the process being killed to recovered, |
+| | is within the SLA. This step blocks until either the |
+| | process has recovered or a timeout occurs. |
+| | |
+| | Result: process_recover_time is within SLA limits; if not, |
+| | the test case fails and stops. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 8 | Start IP connectivity monitors for the new VM: |
+| | |
+| | 1. Check the L2 connectivity from the existing VMs to the |
+| | new VM in the Neutron network. |
+| | |
+| | 2. Check connectivity from one VM to an external host on |
+| | the Internet to verify SNAT functionality. |
+| | |
+| | Result: The monitor info will be collected. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 9 | Stop IP connectivity monitors after a period of time |
+| | specified by “waiting_time” |
+| | |
+| | Result: The monitor info will be aggregated |
+| | |
++--------------+--------------------------------------------------------------+
+|step 10 | Verify the IP connectivity monitor result |
+| | |
+| | Result: IP connectivity monitor should not have any packet |
+| | drop failures reported |
+| | |
++--------------+--------------------------------------------------------------+
+|test verdict | Fails only if SLA is not passed, or if there is a test case |
+| | execution problem. |
+| | |
++--------------+--------------------------------------------------------------+
+
diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc093.rst b/docs/testing/user/userguide/opnfv_yardstick_tc093.rst
new file mode 100644
index 000000000..4e22e8bf3
--- /dev/null
+++ b/docs/testing/user/userguide/opnfv_yardstick_tc093.rst
@@ -0,0 +1,189 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, Intracom Telecom and others.
+.. mardim@intracom-telecom.com
+
+*************************************
+Yardstick Test Case Description TC093
+*************************************
+
++-----------------------------------------------------------------------------+
+|SDN Vswitch resilience in non-HA or HA configuration |
+| |
++--------------+--------------------------------------------------------------+
+|test case id | OPNFV_YARDSTICK_TC093: SDN Vswitch resilience in |
+| | non-HA or HA configuration |
++--------------+--------------------------------------------------------------+
+|test purpose | This test validates that network data plane services are |
+| | resilient in the event of Virtual Switch failure |
+| | in compute nodes. Specifically, the test verifies that |
+| | existing data plane connectivity is not permanently impacted |
+| | i.e. all configured network services such as DHCP, ARP, L2, |
+| | L3 Security Groups continue to operate between the existing |
+| | VMs eventually after the Virtual Switches have finished |
+| | rebooting. |
+| | |
+| | The test also validates that new network service operations |
+| | (creating a new VM in the existing L2/L3 network or in a new |
+| | network, etc.) are operational after the Virtual Switches |
+| | have recovered from a failure. |
+| | |
++--------------+--------------------------------------------------------------+
+|test method | This test case first checks that the already configured |
+| | DHCP/ARP/L2/L3/SNAT connectivity is proper. It then fails |
+| | and restarts the Vswitch services running on both OpenStack |
+| | compute nodes, and checks that the already configured |
+| | DHCP/ARP/L2/L3/SNAT connectivity is not permanently |
+| | impacted (even if there are some packet loss events) |
+| | between VMs and that the system is able to execute new |
+| | virtual network operations once the Vswitch services are |
+| | restarted and have fully recovered. |
+| | |
++--------------+--------------------------------------------------------------+
+|attackers | In this test case, two attackers called “kill-process” are |
+| | needed. These attackers include three parameters: |
+| | |
+| | 1. fault_type: which is used for finding the attacker's |
+| | scripts. It should be set to 'kill-process' in this test |
+| | |
+| | 2. process_name: should be set to the name of the Vswitch |
+| | process |
+| | |
+| | 3. host: which is the name of the compute node where the |
+| | Vswitch process is running |
+| | |
+| | e.g. -fault_type: "kill-process" |
+| | -process_name: "openvswitch" |
+| | -host: node1 |
+| | |
++--------------+--------------------------------------------------------------+
+|monitors | This test case utilizes two monitors of type "ip-status" |
+| | and one monitor of type "process" to track the following |
+| | conditions: |
+| | |
+| | 1. "ping_same_network_l2": monitor ICMP traffic between |
+| | VMs in the same Neutron network |
+| | |
+| | 2. "ping_external_snat": monitor ICMP traffic from VMs to |
+| | an external host on the Internet to verify SNAT |
+| | functionality. |
+| | |
+| | 3. "Vswitch process monitor": a monitor checking the |
+| | state of the specified Vswitch process. It measures |
+| | the recovery time of the given process. |
+| | |
+| | Monitors of type "ip-status" use the "ping" utility to |
+| | verify reachability of a given target IP. |
+| | |
++--------------+--------------------------------------------------------------+
+|operations | In this test case, the following operations are needed: |
+| | 1. "nova-create-instance-in_network": create a VM instance |
+| | in one of the existing Neutron networks. |
+| | |
++--------------+--------------------------------------------------------------+
+|metrics | In this test case, there are two metrics: |
+| | 1. process_recover_time: which indicates the maximum |
+| | time (seconds) from the process being killed to |
+| | recovered |
+| | |
+| | 2. outage_time: measures the total time in which |
+| | monitors were failing in their tasks (e.g. total time of |
+| | Ping failure) |
+| | |
++--------------+--------------------------------------------------------------+
+|test tool | Developed by the project. Please see folder: |
+| | "yardstick/benchmark/scenarios/availability/ha_tools" |
+| | |
++--------------+--------------------------------------------------------------+
+|references | none |
+| | |
++--------------+--------------------------------------------------------------+
+|configuration | This test case needs two configuration files: |
+| | 1. test case file: opnfv_yardstick_tc093.yaml |
+| | |
+| | - Attackers: see above “attackers” description |
+| | - monitor_time: which is the time (seconds) from |
+| | starting to stopping the monitors |
+| | - Monitors: see above “monitors” description |
+| | - SLA: see above “metrics” description |
+| | |
+| | 2. POD file: pod.yaml The POD configuration should be |
+| | recorded in pod.yaml first. The “host” item in this test |
+| | case will use the node name in the pod.yaml. |
+| | |
++--------------+--------------------------------------------------------------+
+|test sequence | Description and expected result |
+| | |
++--------------+--------------------------------------------------------------+
+|pre-action | 1. The Vswitches are set up in both compute nodes. |
+| | |
+| | 2. One or more Neutron networks are created with two or |
+| | more VMs attached to each of the Neutron networks. |
+| | |
+| | 3. The Neutron networks are attached to a Neutron router |
+| | which is attached to an external network towards the |
+| | DCGW. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 1 | Start IP connectivity monitors: |
+| | 1. Check the L2 connectivity between the VMs in the same |
+| | Neutron network. |
+| | |
+| | 2. Check connectivity from one VM to an external host on |
+| | the Internet to verify SNAT functionality. |
+| | |
+| | Result: The monitor info will be collected. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 2 | Start attackers: |
+| | Connect via SSH to the VIM compute nodes and kill the |
+| | Vswitch processes. |
+| | |
+| | Result: the SDN Vswitch services will be shut down |
+| | |
++--------------+--------------------------------------------------------------+
+|step 3 | Verify the results of the IP connectivity monitors. |
+| | |
+| | Result: The outage_time metric reported by the monitors |
+| | is not greater than the max_outage_time. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 4 | Restart the SDN Vswitch services. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 5 | Create a new VM in the existing Neutron network |
+| | |
++--------------+--------------------------------------------------------------+
+|step 6 | Verify connectivity between VMs as follows: |
+| | 1. Check the L2 connectivity between the previously |
+| | existing VM and the newly created VM on the same |
+| | Neutron network by sending ICMP messages |
+| | |
++--------------+--------------------------------------------------------------+
+|step 7 | Stop IP connectivity monitors after a period of time |
+| | specified by “monitor_time” |
+| | |
+| | Result: The monitor info will be aggregated |
+| | |
++--------------+--------------------------------------------------------------+
+|step 8 | Verify the IP connectivity monitor results |
+| | |
+| | Result: IP connectivity monitor should not have any packet |
+| | drop failures reported |
+| | |
++--------------+--------------------------------------------------------------+
+|test verdict | This test fails if the SLAs are not met or if there is a |
+| | test case execution problem. The SLAs are defined as |
+| | follows for this test: |
+| | * SDN Vswitch recovery |
+| | |
+| | * process_recover_time <= 30 sec |
+| | |
+| | * no impact on data plane connectivity during SDN |
+| | Vswitch failure and recovery. |
+| | |
+| | * packet_drop == 0 |
+| | |
++--------------+--------------------------------------------------------------+
+
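+The fragment below is an illustrative sketch only, assuming the layout
+used by other Yardstick availability test cases for a "process" monitor;
+the actual keys and values in opnfv_yardstick_tc093.yaml may differ, and
+the process name and thresholds shown here are placeholders.
+
+.. code-block:: yaml
+
+    # Illustrative monitor/SLA fragment (assumed layout, not the verbatim
+    # content of opnfv_yardstick_tc093.yaml)
+    monitors:
+      -
+        monitor_type: "process"
+        process_name: "openvswitch"   # placeholder Vswitch process name
+        host: node1
+        monitor_time: 30              # seconds the monitor keeps running
+        sla:
+          max_recover_time: 30        # matches the test verdict SLA above
+
+The "host" item above is expected to match a node name defined in
+pod.yaml. A minimal sketch of such a node entry, with placeholder
+addresses and credentials, could look as follows:
+
+.. code-block:: yaml
+
+    # Illustrative pod.yaml node entry (all values are placeholders)
+    nodes:
+      -
+        name: node1
+        role: Compute
+        ip: 192.168.10.11
+        user: root
+        key_filename: /root/.ssh/id_rsa
+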
diff --git a/docs/testing/user/userguide/references.rst b/docs/testing/user/userguide/references.rst
index 05729ba75..e6bc719fd 100644
--- a/docs/testing/user/userguide/references.rst
+++ b/docs/testing/user/userguide/references.rst
@@ -11,13 +11,12 @@ References
OPNFV
=====
-* Parser wiki: https://wiki.opnfv.org/parser
-* Pharos wiki: https://wiki.opnfv.org/pharos
-* VTC: https://wiki.opnfv.org/vtc
+* Parser wiki: https://wiki.opnfv.org/display/parser
+* Pharos wiki: https://wiki.opnfv.org/display/pharos
* Yardstick CI: https://build.opnfv.org/ci/view/yardstick/
* Yardstick and ETSI TST001 presentation: https://wiki.opnfv.org/display/yardstick/Yardstick?preview=%2F2925202%2F2925205%2Fopnfv_summit_-_bridging_opnfv_and_etsi.pdf
* Yardstick Project presentation: https://wiki.opnfv.org/display/yardstick/Yardstick?preview=%2F2925202%2F2925208%2Fopnfv_summit_-_yardstick_project.pdf
-* Yardstick wiki: https://wiki.opnfv.org/yardstick
+* Yardstick wiki: https://wiki.opnfv.org/display/yardstick
References used in Test Cases
=============================
@@ -26,22 +25,22 @@ References used in Test Cases
* cirros-image: https://download.cirros-cloud.net
* cyclictest: https://rt.wiki.kernel.org/index.php/Cyclictest
* DPDKpktgen: https://github.com/Pktgen/Pktgen-DPDK/
-* DPDK supported NICs: http://dpdk.org/doc/nics
+* DPDK supported NICs: http://core.dpdk.org/supported/
* fdisk: http://www.tldp.org/HOWTO/Partition/fdisk_partitioning.html
-* fio: http://www.bluestop.org/fio/HOWTO.txt
+* fio: https://bluestop.org/files/fio/HOWTO.txt
* free: http://manpages.ubuntu.com/manpages/trusty/en/man1/free.1.html
* iperf3: https://iperf.fr/
-* iostat: http://linux.die.net/man/1/iostat
+* iostat: https://linux.die.net/man/1/iostat
* Lmbench man-pages: http://manpages.ubuntu.com/manpages/trusty/lat_mem_rd.8.html
* Memory bandwidth man-pages: http://manpages.ubuntu.com/manpages/trusty/bw_mem.8.html
* mpstat man-pages: http://manpages.ubuntu.com/manpages/trusty/man1/mpstat.1.html
-* netperf: http://www.netperf.org/netperf/training/Netperf.html
+* netperf: https://hewlettpackard.github.io/netperf/
* pktgen: https://www.kernel.org/doc/Documentation/networking/pktgen.txt
* RAMspeed: http://alasir.com/software/ramspeed/
-* sar: http://linux.die.net/man/1/sar
+* sar: https://linux.die.net/man/1/sar
* SR-IOV: https://wiki.openstack.org/wiki/SR-IOV-Passthrough-For-Networking
* Storperf: https://wiki.opnfv.org/display/storperf/Storperf
-* unixbench: https://github.com/kdlucas/byte-unixbench/blob/master/UnixBench
+* unixbench: https://github.com/kdlucas/byte-unixbench/tree/master/UnixBench
Research
@@ -54,7 +53,7 @@ Research
Standards
=========
-* ETSI NFV: http://www.etsi.org/technologies-clusters/technologies/nfv
-* ETSI GS-NFV TST 001: http://www.etsi.org/deliver/etsi_gs/NFV-TST/001_099/001/01.01.01_60/gs_NFV-TST001v010101p.pdf
+* ETSI NFV: https://www.etsi.org/technologies-clusters/technologies/nfv
+* ETSI GS-NFV TST 001: https://www.etsi.org/deliver/etsi_gs/NFV-TST/001_099/001/01.01.01_60/gs_NFV-TST001v010101p.pdf
* RFC2544: https://www.ietf.org/rfc/rfc2544.txt