author    Emma Foley <emma.l.foley@intel.com>    2018-08-28 15:17:15 +0100
committer Emma Foley <emma.l.foley@intel.com>    2018-11-28 16:29:15 +0000
commit    3ceadb40c622a58c4b330fb3e6cdf391955b44a8 (patch)
tree      473729567f7351a955513c9ba44d466924f0da92 /docs/testing/user
parent    b972b3c9d5d32276ea5845dabc6d5192ef53135f (diff)
[docs][userguide] Update formatting on userguide chapters 12-14
* Update rst headers
* Fix line length
* Update some links.

JIRA: YARDSTICK-1335
Change-Id: Ib02c67876ad4bbf0cb520e2ccd5c91c1827a9469
Signed-off-by: Emma Foley <emma.l.foley@intel.com>
Diffstat (limited to 'docs/testing/user')
-rw-r--r-- docs/testing/user/userguide/12-nsb-overview.rst     | 218
-rw-r--r-- docs/testing/user/userguide/13-nsb-installation.rst | 313
-rw-r--r-- docs/testing/user/userguide/14-nsb-operation.rst    |  47
3 files changed, 273 insertions, 305 deletions
diff --git a/docs/testing/user/userguide/12-nsb-overview.rst b/docs/testing/user/userguide/12-nsb-overview.rst
index 7b0d46804..c5e395ee6 100644
--- a/docs/testing/user/userguide/12-nsb-overview.rst
+++ b/docs/testing/user/userguide/12-nsb-overview.rst
@@ -7,16 +7,17 @@
Network Services Benchmarking (NSB)
===================================
-Abstract
-========
-
.. _Yardstick: https://wiki.opnfv.org/display/yardstick
+.. _`ETSI GS NFV-TST 001`: http://www.etsi.org/deliver/etsi_gs/NFV-TST/001_099/001/01.01.01_60/gs_nfv-tst001v010101p.pdf
+
+Abstract
+--------
This chapter provides an overview of the NSB, a contribution to OPNFV
Yardstick_ from Intel.
Overview
-========
+--------
The goal of NSB is to extend Yardstick to perform real-world VNF and NFVi
characterization and benchmarking with repeatable and deterministic methods.
@@ -31,44 +32,34 @@ according to user defined profiles.
NSB extension includes:
- - Generic data models of Network Services, based on ETSI spec `ETSI GS NFV-TST 001 <http://www.etsi.org/deliver/etsi_gs/NFV-TST/001_099/001/01.01.01_60/gs_nfv-tst001v010101p.pdf>`_
-
- - New Standalone context for VNF testing like SRIOV, OVS, OVS-DPDK etc
-
- - Generic VNF configuration models and metrics implemented with Python
- classes
-
- - Traffic generator features and traffic profiles
-
- - L1-L3 state-less traffic profiles
-
- - L4-L7 state-full traffic profiles
-
- - Tunneling protocol / network overlay support
-
- - Test case samples
-
- - Ping
-
- - Trex
+* Generic data models of Network Services, based on ETSI spec
+ `ETSI GS NFV-TST 001`_
+* Standalone :term:`context` for VNF testing like SRIOV, OVS, OVS-DPDK, etc
+* Generic VNF configuration models and metrics implemented with Python
+ classes
+* Traffic generator features and traffic profiles
- - vPE,vCGNAT, vFirewall etc - ipv4 throughput, latency etc
+ * L1-L3 stateless traffic profiles
+  * L4-L7 stateful traffic profiles
+ * Tunneling protocol/network overlay support
- - Traffic generators like Trex, ab/nginx, ixia, iperf etc
+* Test case samples
- - KPIs for a given use case:
+ * Ping
+ * Trex
+  * vPE, vCGNAT, vFirewall etc. - IPv4 throughput, latency etc.
- - System agent support for collecting NFVi KPI. This includes:
+* Traffic generators e.g. Trex, ab/nginx, Ixia, iperf, etc.
+* KPIs for a given use case:
- - CPU statistic
+   * System agent support for collecting NFVi KPIs. This includes:
- - Memory BW
+   * CPU statistics
+ * Memory BW
+ * OVS-DPDK Stats
- - OVS-DPDK Stats
-
- - Network KPIs, e.g., inpackets, outpackets, thoughput, latency etc
-
- - VNF KPIs, e.g., packet_in, packet_drop, packet_fwd etc
+   * Network KPIs e.g. inpackets, outpackets, throughput, latency
+ * VNF KPIs e.g. packet_in, packet_drop, packet_fwd
Architecture
============
@@ -83,111 +74,102 @@ performed network functionality. The part of the data model is a set of the
configuration parameters, number of connection points used and flavor including
core and memory amount.
-The ETSI defines a Network Service as a set of configurable VNFs working in
-some NFV Infrastructure connecting each other using Virtual Links available
-through Connection Points. The ETSI MANO specification defines a set of
-management entities called Network Service Descriptors (NSD) and
-VNF Descriptors (VNFD) that define real Network Service. The picture below
-makes an example how the real Network Operator use-case can map into ETSI
-Network service definition
-
-Network Service framework performs the necessary test steps. It may involve
-
- - Interacting with traffic generator and providing the inputs on traffic
- type / packet structure to generate the required traffic as per the
- test case. Traffic profiles will be used for this.
-
- - Executing the commands required for the test procedure and analyses the
- command output for confirming whether the command got executed correctly
- or not. E.g. As per the test case, run the traffic for the given
- time period / wait for the necessary time delay
-
- - Verify the test result.
-
- - Validate the traffic flow from SUT
-
- - Fetch the table / data from SUT and verify the value as per the test case
-
- - Upload the logs from SUT onto the Test Harness server
-
- - Read the KPI's provided by particular VNF
+ETSI defines a Network Service as a set of configurable VNFs working in some
+NFV Infrastructure, connecting to each other using Virtual Links available
+through Connection Points. The ETSI MANO specification defines a set of
+management entities called Network Service Descriptors (NSD) and VNF
+Descriptors (VNFD) that define a real Network Service. The picture below
+gives an example of how a real Network Operator use-case can map onto the
+ETSI Network Service definition.
+
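+As a rough illustration, a VNFD in NSB is a YAML file along the following
+lines (a sketch modelled on the sample VNF descriptors shipped with
+Yardstick; the values shown are placeholders, not a complete descriptor):
+
+.. code-block:: yaml
+
+    vnfd:vnfd-catalog:
+        vnfd:
+        -   id: VpeApproxVnf
+            name: VpeVnf
+            mgmt-interface:
+                vdu-id: vpevnf-baremetal
+                user: '{{user}}'
+                password: '{{password}}'
+                ip: '{{ip}}'
+            vdu:
+            -   id: vpevnf-baremetal
+                external-interface:
+                -   name: xe0
+                    virtual-interface:
+                        type: PCI-PASSTHROUGH
+                        vpci: '{{ interfaces.xe0.vpci }}'
+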
+Network Service framework performs the necessary test steps. It may involve:
+
+* Interacting with the traffic generator and providing the inputs on traffic
+  type / packet structure to generate the required traffic as per the test
+  case. Traffic profiles will be used for this (a sketch follows this list).
+* Executing the commands required for the test procedure and analysing the
+  command output to confirm whether the command executed correctly,
+  e.g. as per the test case, run the traffic for the given time period and
+  wait for the necessary time delay.
+* Verify the test result.
+* Validate the traffic flow from the SUT.
+* Fetch the data from the SUT and verify the value as per the test case.
+* Upload the logs from the SUT onto the Test Harness server.
+* Retrieve the KPIs provided by a particular VNF.
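+
+As an illustration of the traffic profiles mentioned above, a stateless
+profile is a small YAML file. This sketch follows the RFC2544 samples shipped
+under ``samples/vnf_samples/traffic_profiles/``; the values are illustrative:
+
+.. code-block:: yaml
+
+    schema: "nsb:traffic_profile:0.1"
+    name: rfc2544
+    description: Traffic profile to run RFC2544 latency
+    traffic_profile:
+        traffic_type: RFC2544Profile # traffic behaviour implementation
+        frame_rate: 100              # percentage of line rate
+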
Components of Network Service
-----------------------------
- * *Models for Network Service benchmarking*: The Network Service benchmarking
- requires the proper modelling approach. The NSB provides models using Python
- files and defining of NSDs and VNFDs.
+* *Models for Network Service benchmarking*: Network Service benchmarking
+  requires a proper modelling approach. NSB provides models using Python
+  files defining the NSDs and VNFDs.
- The benchmark control application being a part of OPNFV yardstick can call
- that python models to instantiate and configure the VNFs. Depending on
- infrastructure type (bare-metal or fully virtualized) that calls could be
- made directly or using MANO system.
+The benchmark control application, being a part of OPNFV Yardstick, can call
+these Python models to instantiate and configure the VNFs. Depending on the
+infrastructure type (bare-metal or fully virtualized), these calls could be
+made directly or via a MANO system.
- * *Traffic generators in NSB*: Any benchmark application requires a set of
- traffic generator and traffic profiles defining the method in which traffic
- is generated.
+* *Traffic generators in NSB*: Any benchmark application requires a set of
+  traffic generators and traffic profiles defining the method in which traffic
+ is generated.
- The Network Service benchmarking model extends the Network Service
- definition with a set of Traffic Generators (TG) that are treated
- same way as other VNFs being a part of benchmarked network service.
- Same as other VNFs the traffic generator are instantiated and terminated.
+The Network Service benchmarking model extends the Network Service
+definition with a set of Traffic Generators (TG) that are treated the same
+way as the other VNFs that are part of the benchmarked network service.
+Like other VNFs, the traffic generators are instantiated and terminated.
- Every traffic generator has own configuration defined as a traffic profile
- and a set of KPIs supported. The python models for TG is extended by
- specific calls to listen and generate traffic.
+Every traffic generator has its own configuration, defined as a traffic
+profile, and a set of supported KPIs. The Python models for TGs are extended
+by specific calls to listen for and generate traffic.
- * *The stateless TREX traffic generator*: The main traffic generator used as
- Network Service stimulus is open source TREX tool.
+* *The stateless TREX traffic generator*: The main traffic generator used as
+  Network Service stimulus is the open source TREX tool.
- The TREX tool can generate any kind of stateless traffic.
+The TREX tool can generate any kind of stateless traffic.
- .. code-block:: console
-
- +--------+ +-------+ +--------+
- | | | | | |
- | Trex | ---> | VNF | ---> | Trex |
- | | | | | |
- +--------+ +-------+ +--------+
-
- Supported testcases scenarios:
+.. code-block:: console
- - Correlated UDP traffic using TREX traffic generator and replay VNF.
+ +--------+ +-------+ +--------+
+ | | | | | |
+ | Trex | ---> | VNF | ---> | Trex |
+ | | | | | |
+ +--------+ +-------+ +--------+
- - using different IMIX configuration like pure voice, pure video traffic etc
+Supported test case scenarios:
- - using different number IP flows like 1 flow, 1K, 16K, 64K, 256K, 1M flows
+* Correlated UDP traffic using TREX traffic generator and replay VNF.
- - Using different number of rules configured like 1 rule, 1K, 10K rules
+  * using different IMIX configurations e.g. pure voice, pure video traffic
+  * using different numbers of IP flows e.g. 1, 1K, 16K, 64K, 256K, 1M flows
+  * using different numbers of configured rules e.g. 1, 1K, 10K rules
- For UDP correlated traffic following Key Performance Indicators are collected
- for every combination of test case parameters:
+For UDP correlated traffic, the following Key Performance Indicators are
+collected for every combination of test case parameters:
- - RFC2544 throughput for various loss rate defined (1% is a default)
+* RFC2544 throughput for the various defined loss rates (1% is the default)
Graphical Overview
==================
-NSB Testing with yardstick framework facilitate performance testing of various
+NSB Testing with the Yardstick framework facilitates performance testing of various
VNFs provided.
.. code-block:: console
+-----------+
- | | +-----------+
- | vPE | ->|TGen Port 0|
- | TestCase | | +-----------+
- | | |
- +-----------+ +------------------+ +-------+ |
- | | -- API --> | VNF | <--->
- +-----------+ | Yardstick | +-------+ |
- | Test Case | --> | NSB Testing | |
- +-----------+ | | |
- | | | |
- | +------------------+ |
- +-----------+ | +-----------+
- | Traffic | ->|TGen Port 1|
- | patterns | +-----------+
+ | | +-------------+
+ | vPE | -->| TGen Port 0 |
+ | TestCase | | +-------------+
+ | | |
+ +-----------+ +---------------+ +-------+ |
+ | | ---> | VNF | <--->
+ +-----------+ | Yardstick | +-------+ |
+ | Test Case | --> | NSB Testing | |
+ +-----------+ | | |
+ | | | |
+ | +---------------+ |
+ +-----------+ | +-------------+
+ | Traffic | -->| TGen Port 1 |
+ | patterns | +-------------+
+-----------+
Figure 1: Network Service - 2 server configuration
@@ -199,8 +181,8 @@ VNFs supported for characterization:
2. vFW - Virtual Firewall
3. vACL - Access Control List
4. Prox - Packet pROcessing eXecution engine:
- - VNF can act as Drop, Basic Forwarding (no touch),
- L2 Forwarding (change MAC), GRE encap/decap, Load balance based on
- packet fields, Symmetric load balancing
- - QinQ encap/decap IPv4/IPv6, ARP, QoS, Routing, Unmpls, Policing, ACL
+ * VNF can act as Drop, Basic Forwarding (no touch),
+ L2 Forwarding (change MAC), GRE encap/decap, Load balance based on
+ packet fields, Symmetric load balancing
+ * QinQ encap/decap IPv4/IPv6, ARP, QoS, Routing, Unmpls, Policing, ACL
5. UDP_Replay
diff --git a/docs/testing/user/userguide/13-nsb-installation.rst b/docs/testing/user/userguide/13-nsb-installation.rst
index 973d56628..69f6a5a40 100644
--- a/docs/testing/user/userguide/13-nsb-installation.rst
+++ b/docs/testing/user/userguide/13-nsb-installation.rst
@@ -14,12 +14,12 @@
Avoid deeper levels because they do not render well.
-=====================================
-Yardstick - NSB Testing -Installation
-=====================================
+================
+NSB Installation
+================
-Abstract
---------
+.. _OVS-DPDK: http://docs.openvswitch.org/en/latest/intro/install/dpdk/
+.. _devstack: https://docs.openstack.org/devstack/pike/
The Network Service Benchmarking (NSB) extends the yardstick framework to do
VNF characterization and benchmarking in three different execution
@@ -32,16 +32,16 @@ according to user defined profiles.
The steps needed to run Yardstick with NSB testing are:
* Install Yardstick (NSB Testing).
-* Setup/Reference pod.yaml describing Test topology
-* Create/Reference the test configuration yaml file.
+* Setup/reference ``pod.yaml`` describing the test topology.
+* Create/reference the test configuration yaml file.
* Run the test case.
Prerequisites
-------------
-Refer chapter Yardstick Installation for more information on yardstick
-prerequisites
+Refer to :doc:`04-installation` for more information on Yardstick
+prerequisites.
Several prerequisites are needed for Yardstick (VNF testing):
@@ -61,7 +61,6 @@ Hardware & Software Ingredients
SUT requirements:
-
======= ===================
Item Description
======= ===================
@@ -93,29 +92,25 @@ Boot and BIOS settings:
Turbo Boost Disabled
============= =================================================
-
-
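+
+A quick way to confirm the Turbo Boost state from Linux (assuming the
+``intel_pstate`` driver is in use; ``1`` means Turbo Boost is disabled)::
+
+   cat /sys/devices/system/cpu/intel_pstate/no_turbo
+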
Install Yardstick (NSB Testing)
-------------------------------
-Download the source code and install Yardstick from it
+Download the source code and check out the latest stable branch:
.. code-block:: console
git clone https://gerrit.opnfv.org/gerrit/yardstick
-
cd yardstick
-
# Switch to latest stable branch
- # git checkout <tag or stable branch>
- git checkout stable/euphrates
+ git checkout stable/gambia
Configure the network proxy, either using the environment variables or setting
-the global environment file:
+the global environment file.
+
+* Set environment
-.. code-block:: ini
+.. code-block::
- cat /etc/environment
http_proxy='http://proxy.company.com:port'
https_proxy='http://proxy.company.com:port'
@@ -124,14 +119,11 @@ the global environment file:
export http_proxy='http://proxy.company.com:port'
export https_proxy='http://proxy.company.com:port'
-The last step is to modify the Yardstick installation inventory, used by
-Ansible:
-
-.. code-block:: ini
+Modify the Yardstick installation inventory, used by Ansible::
cat ./ansible/install-inventory.ini
[jumphost]
- localhost ansible_connection=local
+ localhost ansible_connection=local
[yardstick-standalone]
yardstick-standalone-node ansible_host=192.168.1.2
@@ -148,35 +140,29 @@ Ansible:
.. note::
- SSH access without password needs to be configured for all your nodes defined in
- ``install-inventory.ini`` file.
- If you want to use password authentication you need to install sshpass
-
- .. code-block:: console
+   Passwordless SSH access needs to be configured for all the nodes defined
+   in the ``install-inventory.ini`` file.
+   If you want to use password authentication you need to install ``sshpass``::
sudo -EH apt-get install sshpass
-To execute an installation for a Bare-Metal or a Standalone context:
-
-.. code-block:: console
+To execute an installation for a Bare-Metal or a Standalone context::
./nsb_setup.sh
-To execute an installation for an OpenStack context:
-
-.. code-block:: console
+To execute an installation for an OpenStack context::
./nsb_setup.sh <path to admin-openrc.sh>
-Above command setup docker with latest yardstick code. To execute
-
-.. code-block:: console
+The above commands will set up Docker with the latest Yardstick code. To
+execute::
docker exec -it yardstick bash
It will also automatically download all the packages needed for NSB Testing
-setup. Refer chapter :doc:`04-installation` for more on docker
+setup. Refer to :doc:`04-installation` for more on Docker.
+
**Install Yardstick using Docker (recommended)**
Another way to execute an installation for a Bare-Metal or a Standalone context
@@ -201,24 +187,22 @@ System Topology
Environment parameters and credentials
--------------------------------------
-Config yardstick conf
-~~~~~~~~~~~~~~~~~~~~~
+Configure yardstick.conf
+^^^^^^^^^^^^^^^^^^^^^^^^
-If user did not run 'yardstick env influxdb' inside the container, which will
-generate correct ``yardstick.conf``, then create the config file manually (run
-inside the container):
-::
+If you did not run ``yardstick env influxdb`` inside the container to
+generate ``yardstick.conf``, then create the config file manually (run
+inside the container)::
cp ./etc/yardstick/yardstick.conf.sample /etc/yardstick/yardstick.conf
vi /etc/yardstick/yardstick.conf
-Add trex_path, trex_client_lib and bin_path in 'nsb' section.
-
-::
+Add ``trex_path``, ``trex_client_lib`` and ``bin_path`` to the ``nsb``
+section::
[DEFAULT]
debug = True
- dispatcher = file, influxdb
+ dispatcher = influxdb
[dispatcher_influxdb]
timeout = 5
@@ -235,18 +219,26 @@ Add trex_path, trex_client_lib and bin_path in 'nsb' section.
Run Yardstick - Network Service Testcases
-----------------------------------------
-
NS testing - using yardstick CLI
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
See :doc:`04-installation`
-.. code-block:: console
+Connect to the Yardstick container::
docker exec -it yardstick /bin/bash
- source /etc/yardstick/openstack.creds (only for heat TC if nsb_setup.sh was NOT used)
- export EXTERNAL_NETWORK="<openstack public network>" (only for heat TC)
+
+If you're running ``heat`` testcases and ``nsb_setup.sh`` was not used::
+ source /etc/yardstick/openstack.creds
+
+In addition to the above, you need to set the ``EXTERNAL_NETWORK`` for
+OpenStack::
+
+ export EXTERNAL_NETWORK="<openstack public network>"
+
+Finally, you should be able to run the testcase::
+
yardstick --debug task start yardstick/samples/vnf_samples/nsut/<vnf>/<test case>
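+
+For example, with the vFW sample VNF (an illustrative invocation; any sample
+under ``samples/vnf_samples/nsut`` can be started the same way)::
+
+   yardstick --debug task start \
+      yardstick/samples/vnf_samples/nsut/vfw/tc_baremetal_rfc2544_ipv4_1rule_1flow_64B_trex.yaml
+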
Network Service Benchmarking - Bare-Metal
@@ -284,8 +276,8 @@ Bare-Metal 3-Node setup - Correlated Traffic
Bare-Metal Config pod.yaml
-~~~~~~~~~~~~~~~~~~~~~~~~~~
-Before executing Yardstick test cases, make sure that pod.yaml reflects the
+^^^^^^^^^^^^^^^^^^^^^^^^^^
+Before executing Yardstick test cases, make sure that ``pod.yaml`` reflects the
topology and update all the required fields::
cp /etc/yardstick/nodes/pod.yaml.nsb.sample /etc/yardstick/nodes/pod.yaml
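+
+A node entry in ``pod.yaml`` looks roughly like the following (an abbreviated
+sketch; the full set of fields is in ``pod.yaml.nsb.sample``, and the
+addresses below are placeholders):
+
+.. code-block:: yaml
+
+    nodes:
+    -
+        name: trafficgen_1
+        role: TrafficGen
+        ip: 1.1.1.1
+        user: root
+        password: r00t
+        interfaces:
+            xe0:
+                vpci: "0000:07:00.0"
+                driver: i40e
+                dpdk_port_num: 0
+                local_ip: "152.16.100.20"
+                netmask: "255.255.255.0"
+                local_mac: "00:00:00:00:00:01"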
@@ -368,12 +360,10 @@ SR-IOV Pre-requisites
+++++++++++++++++++++
On Host, where VM is created:
- a) Create and configure a bridge named ``br-int`` for VM to connect to external network.
- Currently this can be done using VXLAN tunnel.
+ a) Create and configure a bridge named ``br-int`` for the VM to connect to
+    the external network. Currently this can be done using a VXLAN tunnel.
- Execute the following on host, where VM is created:
-
- .. code-block:: console
+    Execute the following on the host where the VM is created::
ip link add type vxlan remote <Jumphost IP> local <DUT IP> id <ID: 10> dstport 4789
brctl addbr br-int
@@ -382,7 +372,7 @@ On Host, where VM is created:
ip addr add <IP#1, like: 172.20.2.1/24> dev br-int
ip link set dev br-int up
- .. note:: May be needed to add extra rules to iptable to forward traffic.
+    .. note:: You may need to add extra rules to iptables to forward traffic.
.. code-block:: console
@@ -416,23 +406,24 @@ On Host, where VM is created:
Yardstick has a tool for building this custom image with SampleVNF.
It is necessary to have ``sudo`` rights to use this tool.
- Also you may need to install several additional packages to use this tool, by
- following the commands below::
-
- sudo apt-get update && sudo apt-get install -y qemu-utils kpartx
+     You may also need to install several additional packages to use this
+     tool, by following the commands below::
- This image can be built using the following command in the directory where Yardstick is installed
+ sudo apt-get update && sudo apt-get install -y qemu-utils kpartx
- .. code-block:: console
+ This image can be built using the following command in the directory where
+ Yardstick is installed::
- export YARD_IMG_ARCH='amd64'
- sudo echo "Defaults env_keep += \'YARD_IMG_ARCH\'" >> /etc/sudoers
+ export YARD_IMG_ARCH='amd64'
+     echo "Defaults env_keep += \'YARD_IMG_ARCH\'" | sudo tee -a /etc/sudoers
- Please use ansible script to generate a cloud image refer to :doc:`04-installation`
+ For instructions on generating a cloud image using Ansible, refer to
+ :doc:`04-installation`.
- for more details refer to chapter :doc:`04-installation`
- .. note:: VM should be build with static IP and should be accessible from yardstick host.
+   .. note:: The VM should be built with a static IP and be accessible from
+      the Yardstick host.
SR-IOV Config pod.yaml describing Topology
@@ -457,10 +448,10 @@ SR-IOV 2-Node setup
+----------+ +-------------------------+
| | | ^ ^ |
| | | | | |
- | | (0)<----->(0) | ------ | |
- | TG1 | | SUT | |
- | | | | |
- | | (n)<----->(n) |------------------ |
+ | | (0)<----->(0) | ------ SUT | |
+ | TG1 | | | |
+ | | (n)<----->(n) | ----------------- |
+ | | | |
+----------+ +-------------------------+
trafficgen_1 host
@@ -470,29 +461,29 @@ SR-IOV 3-Node setup - Correlated Traffic
++++++++++++++++++++++++++++++++++++++++
.. code-block:: console
- +--------------------+
- | |
- | |
- | DUT |
- | (VNF) |
- | |
- +--------------------+
- | VF NIC | | VF NIC |
- +--------+ +--------+
- ^ ^
- | |
- | |
- +----------+ +-------------------------+ +--------------+
- | | | ^ ^ | | |
- | | | | | | | |
- | | (0)<----->(0) | ------ | | | TG2 |
- | TG1 | | SUT | | | (UDP Replay) |
- | | | | | | |
- | | (n)<----->(n) | ------ | (n)<-->(n) | |
- +----------+ +-------------------------+ +--------------+
- trafficgen_1 host trafficgen_2
-
-Before executing Yardstick test cases, make sure that pod.yaml reflects the
+ +--------------------+
+ | |
+ | |
+ | DUT |
+ | (VNF) |
+ | |
+ +--------------------+
+ | VF NIC | | VF NIC |
+ +--------+ +--------+
+ ^ ^
+ | |
+ | |
+ +----------+ +---------------------+ +--------------+
+ | | | ^ ^ | | |
+ | | | | | | | |
+ | | (0)<----->(0) |----- | | | TG2 |
+ | TG1 | | SUT | | | (UDP Replay) |
+ | | | | | | |
+ | | (n)<----->(n) | -----| (n)<-->(n) | |
+ +----------+ +---------------------+ +--------------+
+ trafficgen_1 host trafficgen_2
+
+Before executing Yardstick test cases, make sure that ``pod.yaml`` reflects the
topology and update all the required fields.
.. code-block:: console
@@ -599,8 +590,8 @@ OVS-DPDK Pre-requisites
~~~~~~~~~~~~~~~~~~~~~~~
On Host, where VM is created:
- a) Create and configure a bridge named ``br-int`` for VM to connect to external network.
- Currently this can be done using VXLAN tunnel.
+ a) Create and configure a bridge named ``br-int`` for the VM to connect to
+    the external network. Currently this can be done using a VXLAN tunnel.
Execute the following on host, where VM is created:
@@ -647,26 +638,27 @@ On Host, where VM is created:
Yardstick has a tool for building this custom image with SampleVNF.
It is necessary to have ``sudo`` rights to use this tool.
- Also you may need to install several additional packages to use this tool, by
- following the commands below::
+ You may need to install several additional packages to use this tool, by
+ following the commands below::
- sudo apt-get update && sudo apt-get install -y qemu-utils kpartx
+ sudo apt-get update && sudo apt-get install -y qemu-utils kpartx
- This image can be built using the following command in the directory where Yardstick is installed::
+ This image can be built using the following command in the directory where
+ Yardstick is installed::
- export YARD_IMG_ARCH='amd64'
- sudo echo "Defaults env_keep += \'YARD_IMG_ARCH\'" >> /etc/sudoers
- sudo tools/yardstick-img-dpdk-modify tools/ubuntu-server-cloudimg-samplevnf-modify.sh
+ export YARD_IMG_ARCH='amd64'
+    echo "Defaults env_keep += \'YARD_IMG_ARCH\'" | sudo tee -a /etc/sudoers
+ sudo tools/yardstick-img-dpdk-modify tools/ubuntu-server-cloudimg-samplevnf-modify.sh
- for more details refer to chapter :doc:`04-installation`
+ for more details refer to chapter :doc:`04-installation`
- .. note:: VM should be build with static IP and should be accessible from yardstick host.
+   .. note:: The VM should be built with a static IP and be accessible from
+      the Yardstick host.
- c) OVS & DPDK version.
- - OVS 2.7 and DPDK 16.11.1 above version is supported
+3. OVS & DPDK version.
+   * OVS 2.7 and DPDK 16.11.1 or above are supported
- d) Setup OVS/DPDK on host.
- Please refer to below link on how to setup `OVS-DPDK <http://docs.openvswitch.org/en/latest/intro/install/dpdk/>`_
+4. Setup `OVS-DPDK`_ on host.
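+
+   A typical minimal bring-up looks like the following (illustrative only;
+   the `OVS-DPDK`_ guide is the authoritative reference, and the PCI address
+   is a placeholder)::
+
+      ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-init=true
+      ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-socket-mem="1024,0"
+      ovs-vsctl add-br br0 -- set bridge br0 datapath_type=netdev
+      ovs-vsctl add-port br0 dpdk-p0 -- set Interface dpdk-p0 type=dpdk \
+            options:dpdk-devargs=0000:05:00.0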
OVS-DPDK Config pod.yaml describing Topology
@@ -732,10 +724,8 @@ OVS-DPDK 3-Node setup - Correlated Traffic
trafficgen_1 host trafficgen_2
-Before executing Yardstick test cases, make sure that pod.yaml reflects the
-topology and update all the required fields.
-
-.. code-block:: console
+Before executing Yardstick test cases, make sure that the ``pod.yaml`` reflects
+the topology and update all the required fields::
cp <yardstick>/etc/yardstick/nodes/standalone/trex_bm.yaml.sample /etc/yardstick/nodes/standalone/pod_trex.yaml
cp <yardstick>/etc/yardstick/nodes/standalone/host_ovs.yaml /etc/yardstick/nodes/standalone/host_ovs.yaml
@@ -883,26 +873,22 @@ Single node OpenStack setup with external TG
Host pre-configuration
++++++++++++++++++++++
-.. warning:: The following configuration requires sudo access to the system. Make
- sure that your user have the access.
+.. warning:: The following configuration requires sudo access to the system.
+   Make sure that your user has the access.
-Enable the Intel VT-d or AMD-Vi extension in the BIOS. Some system manufacturers
-disable this extension by default.
+Enable the Intel VT-d or AMD-Vi extension in the BIOS. Some system
+manufacturers disable this extension by default.
Activate the Intel VT-d or AMD-Vi extension in the kernel by modifying the GRUB
config file ``/etc/default/grub``.
-For the Intel platform:
-
-.. code:: bash
+For the Intel platform::
...
GRUB_CMDLINE_LINUX_DEFAULT="intel_iommu=on"
...
-For the AMD platform:
-
-.. code:: bash
+For the AMD platform::
...
GRUB_CMDLINE_LINUX_DEFAULT="amd_iommu=on"
@@ -917,9 +903,7 @@ Update the grub configuration file and restart the system:
sudo update-grub
sudo reboot
-Make sure the extension has been enabled:
-
-.. code:: bash
+Make sure the extension has been enabled::
sudo journalctl -b 0 | grep -e IOMMU -e DMAR
@@ -932,6 +916,8 @@ Make sure the extension has been enabled:
Feb 06 14:50:14 hostname kernel: DMAR: dmar1: reg_base_addr e0ffc000 ver 1:0 cap 8d2078c106f0466 ecap f020de
Feb 06 14:50:14 hostname kernel: DMAR: DRHD base: 0x000000ee7fc000 flags: 0x0
+.. TODO: Refer to the yardstick installation guide for proxy set up
+
Setup system proxy (if needed). Add the following configuration into the
``/etc/environment`` file:
@@ -954,13 +940,11 @@ Upgrade the system:
sudo -EH apt-get upgrade
sudo -EH apt-get dist-upgrade
-Install dependencies needed for the DevStack
+Install dependencies needed for DevStack
.. code:: bash
- sudo -EH apt-get install python
- sudo -EH apt-get install python-dev
- sudo -EH apt-get install python-pip
+ sudo -EH apt-get install python python-dev python-pip
Setup SR-IOV ports on the host:
@@ -983,10 +967,10 @@ Setup SR-IOV ports on the host:
DevStack installation
+++++++++++++++++++++
-Use official `Devstack <https://docs.openstack.org/devstack/pike/>`_
-documentation to install OpenStack on a host. Please note, that stable
-``pike`` branch of devstack repo should be used during the installation.
-The required `local.conf`` configuration file are described below.
+If you want to try out NSB, but don't have OpenStack set up, you can use
+`Devstack`_ to install OpenStack on a host. Please note that the
+``stable/pike`` branch of the devstack repo should be used during the
+installation. The required ``local.conf`` configuration file is described below.
DevStack configuration file:
@@ -1005,11 +989,10 @@ Start the devstack installation on a host.
TG host configuration
+++++++++++++++++++++
-Yardstick automatically install and configure Trex traffic generator on TG
+Yardstick automatically installs and configures the Trex traffic generator on the TG
host based on provided POD file (see below). Anyway, it's recommended to check
-the compatibility of the installed NIC on the TG server with software Trex using
-the manual at https://trex-tgn.cisco.com/trex/doc/trex_manual.html.
-
+the compatibility of the installed NIC on the TG server with software Trex
+using the `manual <https://trex-tgn.cisco.com/trex/doc/trex_manual.html>`_.
Run the Sample VNF test case
++++++++++++++++++++++++++++
@@ -1018,7 +1001,7 @@ There is an example of Sample VNF test case ready to be executed in an
OpenStack environment with SR-IOV support: ``samples/vnf_samples/nsut/vfw/
tc_heat_sriov_external_rfc2544_ipv4_1rule_1flow_64B_trex.yaml``.
-Install yardstick using `Install Yardstick (NSB Testing)`_ steps for OpenStack
+Install Yardstick using `Install Yardstick (NSB Testing)`_ steps for OpenStack
context.
Create pod file for TG in the yardstick repo folder located in the yardstick
@@ -1071,16 +1054,15 @@ Controller/Compute pre-configuration
++++++++++++++++++++++++++++++++++++
Pre-configuration of the controller and compute hosts are the same as
-described in `Host pre-configuration`_ section. Follow the steps in the section.
+described in `Host pre-configuration`_ section.
DevStack configuration
++++++++++++++++++++++
-Use official `Devstack <https://docs.openstack.org/devstack/pike/>`_
-documentation to install OpenStack on a host. Please note, that stable
-``pike`` branch of devstack repo should be used during the installation.
-The required `local.conf`` configuration file are described below.
+A reference ``local.conf`` for deploying OpenStack in a multi-host environment
+using `Devstack`_ is shown in this section. The ``stable/pike`` branch of the
+devstack repo should be used during the installation.
.. note:: Update the devstack configuration files by replacing angular brackets
with a short description inside.
@@ -1104,13 +1086,14 @@ Start the devstack installation on the controller and compute hosts.
Run the sample vFW TC
+++++++++++++++++++++
-Install yardstick using `Install Yardstick (NSB Testing)`_ steps for OpenStack
+Install Yardstick using `Install Yardstick (NSB Testing)`_ steps for OpenStack
context.
-Run sample vFW RFC2544 SR-IOV TC (``samples/vnf_samples/nsut/vfw/
-tc_heat_rfc2544_ipv4_1rule_1flow_64B_trex.yaml``) in the heat
-context using steps described in `NS testing - using yardstick CLI`_ section
-and the following yardtick command line arguments:
+Run the sample vFW RFC2544 SR-IOV test case
+(``samples/vnf_samples/nsut/vfw/tc_heat_rfc2544_ipv4_1rule_1flow_64B_trex.yaml``)
+in the heat context using steps described in
+`NS testing - using yardstick CLI`_ section and the following Yardstick command
+line arguments:
.. code:: bash
@@ -1118,8 +1101,8 @@ and the following yardtick command line arguments:
samples/vnf_samples/nsut/vfw/tc_heat_rfc2544_ipv4_1rule_1flow_64B_trex.yaml
-Enabling other Traffic generator
---------------------------------
+Enabling other Traffic generators
+---------------------------------
IxLoad
~~~~~~
@@ -1138,14 +1121,16 @@ IxLoad
.. code-block:: console
- cp <repo>/etc/yardstick/nodes/pod.yaml.nsb.sample.ixia etc/yardstick/nodes/pod_ixia.yaml
+ cp <repo>/etc/yardstick/nodes/pod.yaml.nsb.sample.ixia \
+ etc/yardstick/nodes/pod_ixia.yaml
Config ``pod_ixia.yaml``
.. literalinclude:: code/pod_ixia.yaml
:language: console
- for sriov/ovs_dpdk pod files, please refer to above Standalone Virtualization for ovs-dpdk/sriov configuration
+   For sriov/ovs_dpdk pod files, please refer to `Standalone Virtualization`_
+   for ovs-dpdk/sriov configuration.
3. Start IxOS TCL Server (Install 'Ixia IxExplorer IxOS <version>')
You will also need to configure the IxLoad machine to start the IXIA
@@ -1155,7 +1140,7 @@ IxLoad
* Go to:
``Start->Programs->Ixia->IxOS->IxOS 8.01-GA-Patch1->Ixia Tcl Server IxOS 8.01-GA-Patch1``
or
- ``"C:\Program Files (x86)\Ixia\IxOS\8.01-GA-Patch1\ixTclServer.exe"``
+ ``C:\Program Files (x86)\Ixia\IxOS\8.01-GA-Patch1\ixTclServer.exe``
4. Create a folder ``Results`` in c:\ and share the folder on the network.
@@ -1172,14 +1157,16 @@ installed as part of the requirements of the project.
.. code-block:: console
- cp <repo>/etc/yardstick/nodes/pod.yaml.nsb.sample.ixia etc/yardstick/nodes/pod_ixia.yaml
+ cp <repo>/etc/yardstick/nodes/pod.yaml.nsb.sample.ixia \
+ etc/yardstick/nodes/pod_ixia.yaml
- Config pod_ixia.yaml
+ Configure ``pod_ixia.yaml``
.. literalinclude:: code/pod_ixia.yaml
:language: console
- for sriov/ovs_dpdk pod files, please refer to above Standalone Virtualization for ovs-dpdk/sriov configuration
+   For sriov/ovs_dpdk pod files, please refer to the above
+   `Standalone Virtualization`_ for ovs-dpdk/sriov configuration.
2. Start IxNetwork TCL Server
You will also need to configure the IxNetwork machine to start the IXIA
diff --git a/docs/testing/user/userguide/14-nsb-operation.rst b/docs/testing/user/userguide/14-nsb-operation.rst
index 7ec5b4ec0..118042918 100644
--- a/docs/testing/user/userguide/14-nsb-operation.rst
+++ b/docs/testing/user/userguide/14-nsb-operation.rst
@@ -89,9 +89,9 @@ Availability zone
^^^^^^^^^^^^^^^^^
The configuration of the availability zone is required in cases where location
-of exact compute host/group of compute hosts needs to be specified for SampleVNF
-or traffic generator in the heat test case. If this is the case, please follow
-the instructions below.
+of exact compute host/group of compute hosts needs to be specified for
+:term:`SampleVNF` or traffic generator in the heat test case. If this is the
+case, please follow the instructions below.
.. _`Create a host aggregate`:
@@ -105,7 +105,8 @@ the instructions below.
.. code-block:: bash
# create host aggregate
- openstack aggregate create --zone <AZ_NAME> --property availability_zone=<AZ_NAME> <AGG_NAME>
+ openstack aggregate create --zone <AZ_NAME> \
+ --property availability_zone=<AZ_NAME> <AGG_NAME>
# show available hosts
openstack compute service list --service nova-compute
# add selected host into the host aggregate
@@ -136,8 +137,9 @@ the instructions below.
networks:
...
-There are two example of SampleVNF scale out test case which use the availability zone
-feature to specify the exact location of scaled VNFs and traffic generators.
+There are two examples of SampleVNF scale-out test cases which use the
+``availability zone`` feature to specify the exact location of scaled VNFs and
+traffic generators.
Those are:
@@ -164,21 +166,19 @@ Those are:
| 5 | agg1 | AZ_NAME_1 |
+----+------+-------------------+
-2. If no host aggregates are configured, please use `steps above`__ to
- configure them.
+2. If no host aggregates are configured, please follow the instructions in
+   `Create a host aggregate`_.
-__ `Create a host aggregate`_
-
-3. Run the SampleVNF PROX scale-out test case, specifying the availability
- zone of each VNF and traffic generator as a task arguments.
+3. Run the SampleVNF PROX scale-out test case, specifying the
+ ``availability zone`` of each VNF and traffic generator as task arguments.
.. note:: The ``az_0`` and ``az_1`` should be changed according to the host
- aggregates created in the OpenStack.
+ aggregates created in the OpenStack.
.. code-block:: console
- yardstick -d task start\
+ yardstick -d task start \
<repo>/samples/vnf_samples/nsut/prox/tc_prox_heat_context_l2fwd_multiflow-2-scale-out.yaml\
--task-args='{
"num_vnfs": 4, "availability_zone": {
@@ -198,7 +198,7 @@ Collectd KPIs
-------------
NSB can collect KPIs from collectd. We have support for various plugins
-enabled by the Barometer project.
+enabled by the :term:`Barometer` project.
The default yardstick-samplevnf has collectd installed. This allows for
collecting KPIs from the VNF.
@@ -208,12 +208,11 @@ We assume that collectd is not installed on the compute nodes.
To collect KPIs from the NFVi compute nodes:
-
* install_collectd on the compute nodes
* create pod.yaml for the compute nodes
* enable specific plugins depending on the vswitch and DPDK
- example pod.yaml section for Compute node running collectd.
+  Example ``pod.yaml`` section for a Compute node running collectd:
.. code-block:: yaml
@@ -356,8 +355,8 @@ Scale-Out
VNF performance data with scale-out helps in:
- * in capacity planning to meet the given network node requirements
- * in comparison between different VNF vendor offerings
+ * capacity planning to meet the given network node requirements
+ * comparison between different VNF vendor offerings
   * the better the scale-out index, the greater the flexibility in meeting
     future capacity requirements
@@ -488,11 +487,11 @@ Default values for OVS-DPDK:
Sample test case file
^^^^^^^^^^^^^^^^^^^^^
- 1. Prepare SampleVNF image and copy it to ``flavor/images``.
- 2. Prepare context files for TREX and SampleVNF under ``contexts/file``.
- 3. Add bridge named ``br-int`` to the baremetal where SampleVNF image is deployed.
- 4. Modify ``networks/phy_port`` accordingly to the baremetal setup.
- 5. Run test from:
+1. Prepare SampleVNF image and copy it to ``flavor/images``.
+2. Prepare context files for TREX and SampleVNF under ``contexts/file``.
+3. Add a bridge named ``br-int`` to the baremetal host where the SampleVNF
+   image is deployed.
+4. Modify ``networks/phy_port`` according to the baremetal setup.
+5. Run test from:
.. literalinclude:: /../samples/vnf_samples/nsut/acl/tc_ovs_rfc2544_ipv4_1rule_1flow_64B_trex.yaml
:language: yaml