authorTrevor Cooper <trevor.cooper@intel.com>2017-10-19 21:52:53 +0000
committerGerrit Code Review <gerrit@opnfv.org>2017-10-19 21:52:53 +0000
commit8dfe90a3ef242ac0b0ab93df9ed895800ab3e94e (patch)
tree6439cd8beaeb427cceb3a84595793771e725fac9
parentfc125f816f42a2476f7daa0cb1b3c48d60226601 (diff)
parent97b961aee6653553c5a35ecee5cb766924cd10f1 (diff)
Merge "nsb_installation: updates"
-rw-r--r--docs/testing/user/userguide/11-nsb-overview.rst (renamed from docs/testing/user/userguide/13-nsb-overview.rst)94
-rw-r--r--docs/testing/user/userguide/12-nsb_installation.rst849
-rw-r--r--docs/testing/user/userguide/13-nsb_operation.rst270
-rw-r--r--docs/testing/user/userguide/14-nsb_installation.rst737
-rw-r--r--docs/testing/user/userguide/index.rst5
5 files changed, 1168 insertions, 787 deletions
diff --git a/docs/testing/user/userguide/13-nsb-overview.rst b/docs/testing/user/userguide/11-nsb-overview.rst
index 63442bff0..8ce90f65d 100644
--- a/docs/testing/user/userguide/13-nsb-overview.rst
+++ b/docs/testing/user/userguide/11-nsb-overview.rst
@@ -3,12 +3,11 @@
.. http://creativecommons.org/licenses/by/4.0
.. (c) OPNFV, 2016-2017 Intel Corporation.
-=====================================
Network Services Benchmarking (NSB)
-=====================================
+===================================
Abstract
-========
+--------
.. _Yardstick: https://wiki.opnfv.org/yardstick
@@ -16,9 +15,9 @@ This chapter provides an overview of the NSB, a contribution to OPNFV
Yardstick_ from Intel.
Overview
-========
+--------
-GOAL: Extend Yardstick to perform real world VNFs and NFVi Characterization and
+The goal of NSB is to extend Yardstick to perform real world VNF and NFVi characterization and
benchmarking with repeatable and deterministic methods.
The Network Service Benchmarking (NSB) extends the yardstick framework to do
@@ -31,8 +30,7 @@ according to user defined profiles.
NSB extension includes:
- - Generic data models of Network Services, based on ETSI spec (ETSI GS NFV-TST 001)
- .. _ETSI GS NFV-TST 001: http://www.etsi.org/deliver/etsi_gs/NFV-TST/001_099/001/01.01.01_60/gs_nfv-tst001v010101p.pdf
+ - Generic data models of Network Services, based on ETSI spec `ETSI GS NFV-TST 001 <http://www.etsi.org/deliver/etsi_gs/NFV-TST/001_099/001/01.01.01_60/gs_nfv-tst001v010101p.pdf>`_
- New Standalone context for VNF testing like SRIOV, OVS, OVS-DPDK etc
@@ -72,7 +70,8 @@ NSB extension includes:
- VNF KPIs, e.g., packet_in, packet_drop, packet_fwd etc
Architecture
-============
+------------
+
The Network Service (NS) defines a set of Virtual Network Functions (VNF)
connected together using NFV infrastructure.
@@ -113,60 +112,60 @@ Network Service framework performs the necessary test steps. It may involve
- Read the KPI's provided by particular VNF
Components of Network Service
-------------------------------
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-* *Models for Network Service benchmarking*: The Network Service benchmarking
- requires the proper modelling approach. The NSB provides models using Python
- files and defining of NSDs and VNFDs.
+ * *Models for Network Service benchmarking*: The Network Service benchmarking
+ requires the proper modelling approach. The NSB provides models using Python
+ files and defining of NSDs and VNFDs.
-The benchmark control application being a part of OPNFV yardstick can call
-that python models to instantiate and configure the VNFs. Depending on
-infrastructure type (bare-metal or fully virtualized) that calls could be
-made directly or using MANO system.
+ The benchmark control application being a part of OPNFV yardstick can call
+ that python models to instantiate and configure the VNFs. Depending on
+ infrastructure type (bare-metal or fully virtualized) that calls could be
+ made directly or using MANO system.
-* *Traffic generators in NSB*: Any benchmark application requires a set of
- traffic generator and traffic profiles defining the method in which traffic
- is generated.
+ * *Traffic generators in NSB*: Any benchmark application requires a set of
+ traffic generator and traffic profiles defining the method in which traffic
+ is generated.
-The Network Service benchmarking model extends the Network Service
-definition with a set of Traffic Generators (TG) that are treated
-same way as other VNFs being a part of benchmarked network service.
-Same as other VNFs the traffic generator are instantiated and terminated.
+ The Network Service benchmarking model extends the Network Service
+ definition with a set of Traffic Generators (TG) that are treated
+ same way as other VNFs being a part of benchmarked network service.
+ Same as other VNFs the traffic generator are instantiated and terminated.
-Every traffic generator has own configuration defined as a traffic profile and
-a set of KPIs supported. The python models for TG is extended by specific calls
-to listen and generate traffic.
+ Every traffic generator has own configuration defined as a traffic profile and
+ a set of KPIs supported. The python models for TG is extended by specific calls
+ to listen and generate traffic.
-* *The stateless TREX traffic generator*: The main traffic generator used as
- Network Service stimulus is open source TREX tool.
+ * *The stateless TREX traffic generator*: The main traffic generator used as
+ Network Service stimulus is open source TREX tool.
-The TREX tool can generate any kind of stateless traffic.
+ The TREX tool can generate any kind of stateless traffic.
-.. code-block:: console
+ .. code-block:: console
- +--------+ +-------+ +--------+
- | | | | | |
- | Trex | ---> | VNF | ---> | Trex |
- | | | | | |
- +--------+ +-------+ +--------+
+ +--------+ +-------+ +--------+
+ | | | | | |
+ | Trex | ---> | VNF | ---> | Trex |
+ | | | | | |
+ +--------+ +-------+ +--------+
-Supported testcases scenarios:
+ Supported testcases scenarios:
- - Correlated UDP traffic using TREX traffic generator and replay VNF.
+ - Correlated UDP traffic using TREX traffic generator and replay VNF.
- - using different IMIX configuration like pure voice, pure video traffic etc
+ - using different IMIX configuration like pure voice, pure video traffic etc
- - using different number IP flows like 1 flow, 1K, 16K, 64K, 256K, 1M flows
+ - using different number IP flows like 1 flow, 1K, 16K, 64K, 256K, 1M flows
- - Using different number of rules configured like 1 rule, 1K, 10K rules
+ - Using different number of rules configured like 1 rule, 1K, 10K rules
-For UDP correlated traffic following Key Performance Indicators are collected
-for every combination of test case parameters:
+ For UDP correlated traffic following Key Performance Indicators are collected
+ for every combination of test case parameters:
- - RFC2544 throughput for various loss rate defined (1% is a default)
+ - RFC2544 throughput for various loss rate defined (1% is a default)
Graphical Overview
-==================
+------------------
NSB Testing with yardstick framework facilitate performance testing of various
VNFs provided.
@@ -193,13 +192,12 @@ VNFs provided.
Figure 1: Network Service - 2 server configuration
VNFs supported for characterization:
------------------------------------
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
1. CGNAPT - Carrier Grade Network Address and port Translation
2. vFW - Virtual Firewall
3. vACL - Access Control List
-4. vPE - Provider Edge Router
5. Prox - Packet pROcessing eXecution engine:
- VNF can act as Drop, Basic Forwarding (no touch), L2 Forwarding (change MAC), GRE encap/decap, Load balance based on packet fields, Symmetric load balancing,
- QinQ encap/decap IPv4/IPv6, ARP, QoS, Routing, Unmpls, Policing, ACL
+ - VNF can act as Drop, Basic Forwarding (no touch), L2 Forwarding (change MAC), GRE encap/decap, Load balance based on packet fields, Symmetric load balancing,
+ - QinQ encap/decap IPv4/IPv6, ARP, QoS, Routing, Unmpls, Policing, ACL
6. UDP_Replay
diff --git a/docs/testing/user/userguide/12-nsb_installation.rst b/docs/testing/user/userguide/12-nsb_installation.rst
new file mode 100644
index 000000000..8cc26acd5
--- /dev/null
+++ b/docs/testing/user/userguide/12-nsb_installation.rst
@@ -0,0 +1,849 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, 2016-2017 Intel Corporation.
+
+Yardstick - NSB Testing - Installation
+=======================================
+
+Abstract
+--------
+
+The Network Service Benchmarking (NSB) extends the yardstick framework to do
+VNF characterization and benchmarking in three different execution
+environments: bare metal (native Linux environment), standalone virtual
+environment and managed virtualized environment (e.g. OpenStack).
+It also brings in the capability to interact with external traffic generators,
+both hardware and software based, for triggering and validating the traffic
+according to user defined profiles.
+
+The steps needed to run Yardstick with NSB testing are:
+
+* Install Yardstick (NSB Testing).
+* Set up or reference a pod.yaml describing the test topology.
+* Create or reference the test configuration yaml file.
+* Run the test case.
+
+
+Prerequisites
+-------------
+
+Refer to chapter Yardstick Installation for more information on yardstick
+prerequisites.
+
+Several prerequisites are needed for Yardstick (VNF testing):
+
+ - Python Modules: pyzmq, pika.
+
+ - flex
+
+ - bison
+
+ - build-essential
+
+ - automake
+
+ - libtool
+
+ - librabbitmq-dev
+
+ - rabbitmq-server
+
+ - collectd
+
+ - intel-cmt-cat
+
+Hardware & Software Ingredients
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+SUT requirements:
+
+
+ +-----------+--------------------+
+ | Item | Description |
+ +-----------+--------------------+
+ | Memory | Min 20GB |
+ +-----------+--------------------+
+ | NICs | 2 x 10G |
+ +-----------+--------------------+
+ | OS | Ubuntu 16.04.3 LTS |
+ +-----------+--------------------+
+ | kernel | 4.4.0-34-generic |
+ +-----------+--------------------+
+ | DPDK | 17.02 |
+ +-----------+--------------------+
+
+Boot and BIOS settings:
+
+
+ +------------------+---------------------------------------------------+
+ | Boot settings | default_hugepagesz=1G hugepagesz=1G hugepages=16 |
+ | | hugepagesz=2M hugepages=2048 isolcpus=1-11,22-33 |
+ | | nohz_full=1-11,22-33 rcu_nocbs=1-11,22-33 |
+ | | iommu=on iommu=pt intel_iommu=on |
+ | | Note: nohz_full and rcu_nocbs is to disable Linux |
+ | | kernel interrupts |
+ +------------------+---------------------------------------------------+
+ |BIOS | CPU Power and Performance Policy <Performance> |
+ | | CPU C-state Disabled |
+ | | CPU P-state Disabled |
+ | | Enhanced Intel® SpeedStep® Tech Disabled |
+ | | Hyper-Threading Technology (If supported) Enabled |
+ | | Virtualization Technology Enabled |
+ | | Intel(R) VT for Direct I/O Enabled |
+ | | Coherency Enabled |
+ | | Turbo Boost Disabled |
+ +------------------+---------------------------------------------------+
+
+
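+One way to apply the boot settings above is via the GRUB kernel command line (a
+minimal sketch, assuming an Ubuntu host using GRUB; adapt the CPU lists to your
+system before rebooting):
+
+.. code-block:: console
+
+   # Append the kernel parameters from the table above to /etc/default/grub
+   GRUB_CMDLINE_LINUX="default_hugepagesz=1G hugepagesz=1G hugepages=16 hugepagesz=2M hugepages=2048 isolcpus=1-11,22-33 nohz_full=1-11,22-33 rcu_nocbs=1-11,22-33 iommu=on iommu=pt intel_iommu=on"
+
+   # Regenerate the GRUB configuration and reboot for the settings to take effect
+   sudo update-grub
+   sudo reboot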
+
+Install Yardstick (NSB Testing)
+-------------------------------
+
+Download the source code and install Yardstick from it
+
+.. code-block:: console
+
+ git clone https://gerrit.opnfv.org/gerrit/yardstick
+
+ cd yardstick
+
+ # Switch to latest stable branch
+ # git checkout <tag or stable branch>
+ git checkout stable/euphrates
+
+ # For Bare-Metal or Standalone Virtualization
+ ./nsb_setup.sh
+
+ # For OpenStack
+ ./nsb_setup.sh <path to admin-openrc.sh>
+
+
+The above commands set up a Docker container with the latest Yardstick code. To
+execute commands inside the container:
+
+.. code-block:: console
+
+ docker exec -it yardstick bash
+
+The ``nsb_setup.sh`` script also automatically downloads all the packages needed for the NSB testing setup.
+Refer to chapter :doc:`04-installation` for more details on **Install Yardstick using Docker (recommended)**.
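+
+For example, once inside the container you can optionally prepare InfluxDB for
+result collection (the ``yardstick env influxdb`` command is also referenced in
+the configuration section below):
+
+.. code-block:: console
+
+   yardstick env influxdb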
+
+System Topology:
+----------------
+
+.. code-block:: console
+
+ +----------+ +----------+
+ | | | |
+ | | (0)----->(0) | |
+ | TG1 | | DUT |
+ | | | |
+ | | (1)<-----(1) | |
+ +----------+ +----------+
+ trafficgen_1 vnf
+
+
+Environment parameters and credentials
+--------------------------------------
+
+Config yardstick conf
+^^^^^^^^^^^^^^^^^^^^^
+
+If you did not run ``yardstick env influxdb`` inside the container (which generates a
+correct ``yardstick.conf``), then create the config file manually (run inside the container)::
+
+    cp ./etc/yardstick/yardstick.conf.sample /etc/yardstick/yardstick.conf
+    vi /etc/yardstick/yardstick.conf
+
+Add ``trex_path``, ``trex_client_lib`` and ``bin_path`` to the ``nsb`` section.
+
+::
+
+ [DEFAULT]
+ debug = True
+ dispatcher = file, influxdb
+
+ [dispatcher_influxdb]
+ timeout = 5
+ target = http://{YOUR_IP_HERE}:8086
+ db_name = yardstick
+ username = root
+ password = root
+
+ [nsb]
+ trex_path=/opt/nsb_bin/trex/scripts
+ bin_path=/opt/nsb_bin
+ trex_client_lib=/opt/nsb_bin/trex_client/stl
+
+Run Yardstick - Network Service Testcases
+-----------------------------------------
+
+
+NS testing - using yardstick CLI
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ See :doc:`04-installation`
+
+.. code-block:: console
+
+
+ docker exec -it yardstick /bin/bash
+ source /etc/yardstick/openstack.creds (only for heat TC if nsb_setup.sh was NOT used)
+ export EXTERNAL_NETWORK="<openstack public network>" (only for heat TC)
+ yardstick --debug task start yardstick/samples/vnf_samples/nsut/<vnf>/<test case>
+
+Network Service Benchmarking - Bare-Metal
+-----------------------------------------
+
+Bare-Metal Config pod.yaml describing Topology
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Bare-Metal 2-Node setup:
+########################
+.. code-block:: console
+
+ +----------+ +----------+
+ | | | |
+ | | (0)----->(0) | |
+ | TG1 | | DUT |
+ | | | |
+ | | (n)<-----(n) | |
+ +----------+ +----------+
+ trafficgen_1 vnf
+
+Bare-Metal 3-Node setup - Correlated Traffic:
+#############################################
+.. code-block:: console
+
+ +----------+ +----------+ +------------+
+ | | | | | |
+ | | | | | |
+ | | (0)----->(0) | | | UDP |
+ | TG1 | | DUT | | Replay |
+ | | | | | |
+ | | | |(1)<---->(0)| |
+ +----------+ +----------+ +------------+
+ trafficgen_1 vnf trafficgen_2
+
+
+Bare-Metal Config pod.yaml
+^^^^^^^^^^^^^^^^^^^^^^^^^^
+Before executing Yardstick test cases, make sure that ``pod.yaml`` reflects the
+topology and update all the required fields::
+
+ cp /etc/yardstick/nodes/pod.yaml.nsb.sample /etc/yardstick/nodes/pod.yaml
+
+.. code-block:: YAML
+
+ nodes:
+ -
+ name: trafficgen_1
+ role: TrafficGen
+ ip: 1.1.1.1
+ user: root
+ password: r00t
+ interfaces:
+ xe0: # logical name from topology.yaml and vnfd.yaml
+ vpci: "0000:07:00.0"
+ driver: i40e # default kernel driver
+ dpdk_port_num: 0
+ local_ip: "152.16.100.20"
+ netmask: "255.255.255.0"
+ local_mac: "00:00:00:00:00:01"
+ xe1: # logical name from topology.yaml and vnfd.yaml
+ vpci: "0000:07:00.1"
+ driver: i40e # default kernel driver
+ dpdk_port_num: 1
+ local_ip: "152.16.40.20"
+ netmask: "255.255.255.0"
+ local_mac: "00:00.00:00:00:02"
+
+ -
+ name: vnf
+ role: vnf
+ ip: 1.1.1.2
+ user: root
+ password: r00t
+ host: 1.1.1.2 #BM - host == ip, virtualized env - Host - compute node
+ interfaces:
+ xe0: # logical name from topology.yaml and vnfd.yaml
+ vpci: "0000:07:00.0"
+ driver: i40e # default kernel driver
+ dpdk_port_num: 0
+ local_ip: "152.16.100.19"
+ netmask: "255.255.255.0"
+ local_mac: "00:00:00:00:00:03"
+
+ xe1: # logical name from topology.yaml and vnfd.yaml
+ vpci: "0000:07:00.1"
+ driver: i40e # default kernel driver
+ dpdk_port_num: 1
+ local_ip: "152.16.40.19"
+ netmask: "255.255.255.0"
+ local_mac: "00:00:00:00:00:04"
+ routing_table:
+ - network: "152.16.100.20"
+ netmask: "255.255.255.0"
+ gateway: "152.16.100.20"
+ if: "xe0"
+ - network: "152.16.40.20"
+ netmask: "255.255.255.0"
+ gateway: "152.16.40.20"
+ if: "xe1"
+ nd_route_tbl:
+ - network: "0064:ff9b:0:0:0:0:9810:6414"
+ netmask: "112"
+ gateway: "0064:ff9b:0:0:0:0:9810:6414"
+ if: "xe0"
+ - network: "0064:ff9b:0:0:0:0:9810:2814"
+ netmask: "112"
+ gateway: "0064:ff9b:0:0:0:0:9810:2814"
+ if: "xe1"
+
+
+Network Service Benchmarking - Standalone Virtualization
+--------------------------------------------------------
+
+SR-IOV:
+^^^^^^^
+
+SR-IOV Pre-requisites
+#####################
+
+On Host:
+ a) Create a bridge for VM to connect to external network
+
+ .. code-block:: console
+
+ brctl addbr br-int
+ brctl addif br-int <interface_name> #This interface is connected to internet
+
+ b) Build guest image for VNF to run.
+ Most of the sample test cases in Yardstick are using a guest image called
+ ``yardstick-image`` which deviates from an Ubuntu Cloud Server image.
+ Yardstick has a tool for building this custom image with samplevnf.
+ It is necessary to have ``sudo`` rights to use this tool.
+
+ Also you may need to install several additional packages to use this tool, by
+ following the commands below::
+
+ sudo apt-get update && sudo apt-get install -y qemu-utils kpartx
+
+ This image can be built using the following command in the directory where Yardstick is installed
+
+ .. code-block:: console
+
+ export YARD_IMG_ARCH='amd64'
+ sudo echo "Defaults env_keep += \'YARD_IMG_ARCH\'" >> /etc/sudoers
+
+ Please use the ansible script to generate a cloud image; refer to chapter
+ :doc:`04-installation` for more details.
+
+ .. note:: The VM should be built with a static IP and should be accessible from the yardstick host.
+
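+ A minimal sketch of building the image with the samplevnf modify tool (the same
+ command used in the OVS-DPDK section below); the ansible based image generation
+ mentioned above is an alternative:
+
+ .. code-block:: console
+
+    export YARD_IMG_ARCH='amd64'
+    sudo echo "Defaults env_keep += \'YARD_IMG_ARCH\'" >> /etc/sudoers
+    sudo tools/yardstick-img-dpdk-modify tools/ubuntu-server-cloudimg-samplevnf-modify.sh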
+
+SR-IOV Config pod.yaml describing Topology
+##########################################
+
+SR-IOV 2-Node setup:
+####################
+.. code-block:: console
+
+ +--------------------+
+ | |
+ | |
+ | DUT |
+ | (VNF) |
+ | |
+ +--------------------+
+ | VF NIC | | VF NIC |
+ +--------+ +--------+
+ ^ ^
+ | |
+ | |
+ +----------+ +-------------------------+
+ | | | ^ ^ |
+ | | | | | |
+ | | (0)<----->(0) | ------ | |
+ | TG1 | | SUT | |
+ | | | | |
+ | | (n)<----->(n) |------------------ |
+ +----------+ +-------------------------+
+ trafficgen_1 host
+
+
+
+SR-IOV 3-Node setup - Correlated Traffic
+########################################
+.. code-block:: console
+
+ +--------------------+
+ | |
+ | |
+ | DUT |
+ | (VNF) |
+ | |
+ +--------------------+
+ | VF NIC | | VF NIC |
+ +--------+ +--------+
+ ^ ^
+ | |
+ | |
+ +----------+ +-------------------------+ +--------------+
+ | | | ^ ^ | | |
+ | | | | | | | |
+ | | (0)<----->(0) | ------ | | | TG2 |
+ | TG1 | | SUT | | | (UDP Replay) |
+ | | | | | | |
+ | | (n)<----->(n) | ------ | (n)<-->(n) | |
+ +----------+ +-------------------------+ +--------------+
+ trafficgen_1 host trafficgen_2
+
+Before executing Yardstick test cases, make sure that pod.yaml reflects the
+topology and update all the required fields.
+
+.. code-block:: console
+
+ cp <yardstick>/etc/yardstick/nodes/standalone/trex_bm.yaml.sample /etc/yardstick/nodes/standalone/pod_trex.yaml
+ cp <yardstick>/etc/yardstick/nodes/standalone/host_sriov.yaml /etc/yardstick/nodes/standalone/host_sriov.yaml
+
+.. note:: Update all the required fields like ip, user, password, pcis, etc...
+
+SR-IOV Config pod_trex.yaml
+###########################
+
+.. code-block:: YAML
+
+ nodes:
+ -
+ name: trafficgen_1
+ role: TrafficGen
+ ip: 1.1.1.1
+ user: root
+ password: r00t
+ key_filename: /root/.ssh/id_rsa
+ interfaces:
+ xe0: # logical name from topology.yaml and vnfd.yaml
+ vpci: "0000:07:00.0"
+ driver: i40e # default kernel driver
+ dpdk_port_num: 0
+ local_ip: "152.16.100.20"
+ netmask: "255.255.255.0"
+ local_mac: "00:00:00:00:00:01"
+ xe1: # logical name from topology.yaml and vnfd.yaml
+ vpci: "0000:07:00.1"
+ driver: i40e # default kernel driver
+ dpdk_port_num: 1
+ local_ip: "152.16.40.20"
+ netmask: "255.255.255.0"
+ local_mac: "00:00.00:00:00:02"
+
+SR-IOV Config host_sriov.yaml
+#############################
+
+.. code-block:: YAML
+
+ nodes:
+ -
+ name: sriov
+ role: Sriov
+ ip: 192.168.100.101
+ user: ""
+ password: ""
+
+SR-IOV testcase update: ``<yardstick>/samples/vnf_samples/nsut/vfw/tc_sriov_rfc2544_ipv4_1rule_1flow_64B_trex.yaml``
+
+Update "contexts" section
+"""""""""""""""""""""""""
+
+.. code-block:: YAML
+
+ contexts:
+ - name: yardstick
+ type: Node
+ file: /etc/yardstick/nodes/standalone/pod_trex.yaml
+ - type: StandaloneSriov
+ file: /etc/yardstick/nodes/standalone/host_sriov.yaml
+ name: yardstick
+ vm_deploy: True
+ flavor:
+ images: "/var/lib/libvirt/images/ubuntu.qcow2"
+ ram: 4096
+ extra_specs:
+ hw:cpu_sockets: 1
+ hw:cpu_cores: 6
+ hw:cpu_threads: 2
+ user: "" # update VM username
+ password: "" # update password
+ servers:
+ vnf:
+ network_ports:
+ mgmt:
+ cidr: '1.1.1.61/24' # Update VM IP address, if static, <ip>/<mask> or if dynamic, <start of ip>/<mask>
+ xe0:
+ - uplink_0
+ xe1:
+ - downlink_0
+ networks:
+ uplink_0:
+ phy_port: "0000:05:00.0"
+ vpci: "0000:00:07.0"
+ cidr: '152.16.100.10/24'
+ gateway_ip: '152.16.100.20'
+ downlink_0:
+ phy_port: "0000:05:00.1"
+ vpci: "0000:00:08.0"
+ cidr: '152.16.40.10/24'
+ gateway_ip: '152.16.100.20'
+
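+The updated testcase can then be run with the yardstick CLI described earlier, for
+example (a sketch, from inside the yardstick container):
+
+.. code-block:: console
+
+   yardstick --debug task start <yardstick>/samples/vnf_samples/nsut/vfw/tc_sriov_rfc2544_ipv4_1rule_1flow_64B_trex.yaml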
+
+
+OVS-DPDK:
+^^^^^^^^^
+
+OVS-DPDK Pre-requisites
+#######################
+
+On Host:
+ a) Create a bridge for VM to connect to external network
+
+ .. code-block:: console
+
+ brctl addbr br-int
+ brctl addif br-int <interface_name> #This interface is connected to internet
+
+ b) Build guest image for VNF to run.
+ Most of the sample test cases in Yardstick are using a guest image called
+ ``yardstick-image`` which deviates from an Ubuntu Cloud Server image.
+ Yardstick has a tool for building this custom image with samplevnf.
+ It is necessary to have ``sudo`` rights to use this tool.
+
+ Also you may need to install several additional packages to use this tool, by
+ following the commands below::
+
+ sudo apt-get update && sudo apt-get install -y qemu-utils kpartx
+
+ This image can be built using the following command in the directory where Yardstick is installed::
+
+ export YARD_IMG_ARCH='amd64'
+ sudo echo "Defaults env_keep += \'YARD_IMG_ARCH\'" >> /etc/sudoers
+ sudo tools/yardstick-img-dpdk-modify tools/ubuntu-server-cloudimg-samplevnf-modify.sh
+
+ for more details refer to chapter :doc:`04-installation`
+
+ .. note:: The VM should be built with a static IP and should be accessible from the yardstick host.
+
+ c) OVS & DPDK version.
+ - OVS 2.7 and DPDK 16.11.1 or above versions are supported
+
+ d) Setup OVS/DPDK on host.
+ Please refer to the link below on how to set up `OVS-DPDK <http://docs.openvswitch.org/en/latest/intro/install/dpdk/>`_
+
+
+OVS-DPDK Config pod.yaml describing Topology
+############################################
+
+OVS-DPDK 2-Node setup:
+######################
+
+
+.. code-block:: console
+
+ +--------------------+
+ | |
+ | |
+ | DUT |
+ | (VNF) |
+ | |
+ +--------------------+
+ | virtio | | virtio |
+ +--------+ +--------+
+ ^ ^
+ | |
+ | |
+ +--------+ +--------+
+ | vHOST0 | | vHOST1 |
+ +----------+ +-------------------------+
+ | | | ^ ^ |
+ | | | | | |
+ | | (0)<----->(0) | ------ | |
+ | TG1 | | SUT | |
+ | | | (ovs-dpdk) | |
+ | | (n)<----->(n) |------------------ |
+ +----------+ +-------------------------+
+ trafficgen_1 host
+
+
+OVS-DPDK 3-Node setup - Correlated Traffic
+##########################################
+
+.. code-block:: console
+
+ +--------------------+
+ | |
+ | |
+ | DUT |
+ | (VNF) |
+ | |
+ +--------------------+
+ | virtio | | virtio |
+ +--------+ +--------+
+ ^ ^
+ | |
+ | |
+ +--------+ +--------+
+ | vHOST0 | | vHOST1 |
+ +----------+ +-------------------------+ +------------+
+ | | | ^ ^ | | |
+ | | | | | | | |
+ | | (0)<----->(0) | ------ | | | TG2 |
+ | TG1 | | SUT | | |(UDP Replay)|
+ | | | (ovs-dpdk) | | | |
+ | | (n)<----->(n) | ------ |(n)<-->(n)| |
+ +----------+ +-------------------------+ +------------+
+ trafficgen_1 host trafficgen_2
+
+
+Before executing Yardstick test cases, make sure that pod.yaml reflects the
+topology and update all the required fields.
+
+.. code-block:: console
+
+ cp <yardstick>/etc/yardstick/nodes/standalone/trex_bm.yaml.sample /etc/yardstick/nodes/standalone/pod_trex.yaml
+ cp <yardstick>/etc/yardstick/nodes/standalone/host_ovs.yaml /etc/yardstick/nodes/standalone/host_ovs.yaml
+
+.. note:: Update all the required fields like ip, user, password, pcis, etc...
+
+OVS-DPDK Config pod_trex.yaml
+#############################
+
+.. code-block:: YAML
+
+ nodes:
+ -
+ name: trafficgen_1
+ role: TrafficGen
+ ip: 1.1.1.1
+ user: root
+ password: r00t
+ interfaces:
+ xe0: # logical name from topology.yaml and vnfd.yaml
+ vpci: "0000:07:00.0"
+ driver: i40e # default kernel driver
+ dpdk_port_num: 0
+ local_ip: "152.16.100.20"
+ netmask: "255.255.255.0"
+ local_mac: "00:00:00:00:00:01"
+ xe1: # logical name from topology.yaml and vnfd.yaml
+ vpci: "0000:07:00.1"
+ driver: i40e # default kernel driver
+ dpdk_port_num: 1
+ local_ip: "152.16.40.20"
+ netmask: "255.255.255.0"
+ local_mac: "00:00.00:00:00:02"
+
+OVS-DPDK Config host_ovs.yaml
+#############################
+
+.. code-block:: YAML
+
+ nodes:
+ -
+ name: ovs_dpdk
+ role: OvsDpdk
+ ip: 192.168.100.101
+ user: ""
+ password: ""
+
+ovs_dpdk testcase update: ``<yardstick>/samples/vnf_samples/nsut/vfw/tc_ovs_rfc2544_ipv4_1rule_1flow_64B_trex.yaml``
+
+Update "contexts" section
+"""""""""""""""""""""""""
+
+.. code-block:: YAML
+
+ contexts:
+ - name: yardstick
+ type: Node
+ file: /etc/yardstick/nodes/standalone/pod_trex.yaml
+ - type: StandaloneOvsDpdk
+ name: yardstick
+ file: /etc/yardstick/nodes/standalone/pod_ovs.yaml
+ vm_deploy: True
+ ovs_properties:
+ version:
+ ovs: 2.7.0
+ dpdk: 16.11.1
+ pmd_threads: 2
+ ram:
+ socket_0: 2048
+ socket_1: 2048
+ queues: 4
+ vpath: "/usr/local"
+
+ flavor:
+ images: "/var/lib/libvirt/images/ubuntu.qcow2"
+ ram: 4096
+ extra_specs:
+ hw:cpu_sockets: 1
+ hw:cpu_cores: 6
+ hw:cpu_threads: 2
+ user: "" # update VM username
+ password: "" # update password
+ servers:
+ vnf:
+ network_ports:
+ mgmt:
+ cidr: '1.1.1.61/24' # Update VM IP address, if static, <ip>/<mask> or if dynamic, <start of ip>/<mask>
+ xe0:
+ - uplink_0
+ xe1:
+ - downlink_0
+ networks:
+ uplink_0:
+ phy_port: "0000:05:00.0"
+ vpci: "0000:00:07.0"
+ cidr: '152.16.100.10/24'
+ gateway_ip: '152.16.100.20'
+ downlink_0:
+ phy_port: "0000:05:00.1"
+ vpci: "0000:00:08.0"
+ cidr: '152.16.40.10/24'
+ gateway_ip: '152.16.100.20'
+
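+The updated testcase can then be run with the yardstick CLI described earlier, for
+example (a sketch, from inside the yardstick container):
+
+.. code-block:: console
+
+   yardstick --debug task start <yardstick>/samples/vnf_samples/nsut/vfw/tc_ovs_rfc2544_ipv4_1rule_1flow_64B_trex.yaml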
+
+Enabling other Traffic generators
+---------------------------------
+
+IxLoad:
+^^^^^^^
+
+1. Software needed: IxLoadAPI ``<IxLoadTclApi version>Linux64.bin.tgz`` and ``<IxOS version>Linux64.bin.tar.gz`` (download from the Ixia support site).
+   Install ``<IxLoadTclApi version>Linux64.bin.tgz`` and ``<IxOS version>Linux64.bin.tar.gz``.
+   If the installation was not done inside the container, then after installing the IXIA client,
+   check ``/opt/ixia/ixload/<ver>/bin/ixloadpython`` and make sure you can run this command
+   inside the yardstick container. Usually the user is required to copy or link
+   ``/opt/ixia/python/<ver>/bin/ixiapython`` to ``/usr/bin/ixiapython<ver>`` inside the container.
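+
+   For example (a sketch; substitute the actual installed version for ``<ver>``):
+
+   .. code-block:: console
+
+      ln -s /opt/ixia/python/<ver>/bin/ixiapython /usr/bin/ixiapython<ver>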
+
+2. Update pod_ixia.yaml file with ixia details.
+
+ .. code-block:: console
+
+ cp <repo>/etc/yardstick/nodes/pod.yaml.nsb.sample.ixia etc/yardstick/nodes/pod_ixia.yaml
+
+ Config pod_ixia.yaml
+
+ .. code-block:: yaml
+
+
+ nodes:
+ -
+ name: trafficgen_1
+ role: IxNet
+ ip: 1.2.1.1 #ixia machine ip
+ user: user
+ password: r00t
+ key_filename: /root/.ssh/id_rsa
+ tg_config:
+ ixchassis: "1.2.1.7" #ixia chassis ip
+ tcl_port: "8009" # tcl server port
+ lib_path: "/opt/ixia/ixos-api/8.01.0.2/lib/ixTcl1.0"
+ root_dir: "/opt/ixia/ixos-api/8.01.0.2/"
+ py_bin_path: "/opt/ixia/ixload/8.01.106.3/bin/"
+ py_lib_path: "/opt/ixia/ixnetwork/8.01.1029.14/lib/PythonApi"
+ dut_result_dir: "/mnt/ixia"
+ version: 8.1
+ interfaces:
+ xe0: # logical name from topology.yaml and vnfd.yaml
+ vpci: "2:5" # Card:port
+ driver: "none"
+ dpdk_port_num: 0
+ local_ip: "152.16.100.20"
+ netmask: "255.255.0.0"
+ local_mac: "00:98:10:64:14:00"
+ xe1: # logical name from topology.yaml and vnfd.yaml
+ vpci: "2:6" # [(Card, port)]
+ driver: "none"
+ dpdk_port_num: 1
+ local_ip: "152.40.40.20"
+ netmask: "255.255.0.0"
+ local_mac: "00:98:28:28:14:00"
+
+ For sriov/ovs_dpdk pod files, please refer to the Standalone Virtualization section above for the ovs-dpdk/sriov configuration.
+
+3. Start IxOS TCL Server (Install 'Ixia IxExplorer IxOS <version>')
+ You will also need to configure the IxLoad machine to start the IXIA
+ IxosTclServer. This can be started like so:
+
+ - Connect to the IxLoad machine using RDP
+ - Go to:
+ ``Start->Programs->Ixia->IxOS->IxOS 8.01-GA-Patch1->Ixia Tcl Server IxOS 8.01-GA-Patch1``
+ or
+ ``"C:\Program Files (x86)\Ixia\IxOS\8.01-GA-Patch1\ixTclServer.exe"``
+
+4. Create a folder "Results" in c:\ and share the folder on the network.
+
+5. Execute the testcase in the samplevnf folder, e.g.
+   ``<repo>/samples/vnf_samples/nsut/vfw/tc_baremetal_http_ixload_1b_Requests-65000_Concurrency.yaml``
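+
+   For example, using the yardstick CLI described earlier (a sketch):
+
+   .. code-block:: console
+
+      yardstick --debug task start <repo>/samples/vnf_samples/nsut/vfw/tc_baremetal_http_ixload_1b_Requests-65000_Concurrency.yaml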
+
+IxNetwork:
+^^^^^^^^^^
+
+1. Software needed: ``IxNetworkAPI<ixnetwork version>Linux64.bin.tgz`` (download from the Ixia support site).
+   Install ``IxNetworkAPI<ixnetwork version>Linux64.bin.tgz``.
+2. Update pod_ixia.yaml file with ixia details.
+
+ .. code-block:: console
+
+ cp <repo>/etc/yardstick/nodes/pod.yaml.nsb.sample.ixia etc/yardstick/nodes/pod_ixia.yaml
+
+ Config pod_ixia.yaml
+
+ .. code-block:: yaml
+
+ nodes:
+ -
+ name: trafficgen_1
+ role: IxNet
+ ip: 1.2.1.1 #ixia machine ip
+ user: user
+ password: r00t
+ key_filename: /root/.ssh/id_rsa
+ tg_config:
+ ixchassis: "1.2.1.7" #ixia chassis ip
+ tcl_port: "8009" # tcl server port
+ lib_path: "/opt/ixia/ixos-api/8.01.0.2/lib/ixTcl1.0"
+ root_dir: "/opt/ixia/ixos-api/8.01.0.2/"
+ py_bin_path: "/opt/ixia/ixload/8.01.106.3/bin/"
+ py_lib_path: "/opt/ixia/ixnetwork/8.01.1029.14/lib/PythonApi"
+ dut_result_dir: "/mnt/ixia"
+ version: 8.1
+ interfaces:
+ xe0: # logical name from topology.yaml and vnfd.yaml
+ vpci: "2:5" # Card:port
+ driver: "none"
+ dpdk_port_num: 0
+ local_ip: "152.16.100.20"
+ netmask: "255.255.0.0"
+ local_mac: "00:98:10:64:14:00"
+ xe1: # logical name from topology.yaml and vnfd.yaml
+ vpci: "2:6" # [(Card, port)]
+ driver: "none"
+ dpdk_port_num: 1
+ local_ip: "152.40.40.20"
+ netmask: "255.255.0.0"
+ local_mac: "00:98:28:28:14:00"
+
+ For sriov/ovs_dpdk pod files, please refer to the Standalone Virtualization section above for the ovs-dpdk/sriov configuration.
+
+3. Start IxNetwork TCL Server
+ You will also need to configure the IxNetwork machine to start the IXIA
+ IxNetworkTclServer. This can be started like so:
+
+ - Connect to the IxNetwork machine using RDP
+ - Go to: ``Start->Programs->Ixia->IxNetwork->IxNetwork 7.21.893.14 GA->IxNetworkTclServer`` (or ``IxNetworkApiServer``)
+
+4. Execute the testcase in the samplevnf folder, e.g.
+   ``<repo>/samples/vnf_samples/nsut/vfw/tc_baremetal_rfc2544_ipv4_1rule_1flow_64B_ixia.yaml``
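+
+   For example, using the yardstick CLI described earlier (a sketch):
+
+   .. code-block:: console
+
+      yardstick --debug task start <repo>/samples/vnf_samples/nsut/vfw/tc_baremetal_rfc2544_ipv4_1rule_1flow_64B_ixia.yaml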
+
diff --git a/docs/testing/user/userguide/13-nsb_operation.rst b/docs/testing/user/userguide/13-nsb_operation.rst
new file mode 100644
index 000000000..8c477fa3f
--- /dev/null
+++ b/docs/testing/user/userguide/13-nsb_operation.rst
@@ -0,0 +1,270 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, 2016-2017 Intel Corporation.
+
+Yardstick - NSB Testing - Operation
+===================================
+
+Abstract
+--------
+
+NSB test configuration and OpenStack setup requirements
+
+
+OpenStack Network Configuration
+-------------------------------
+
+NSB requires certain OpenStack deployment configurations.
+For optimal VNF characterization using external traffic generators NSB requires
+provider/external networks.
+
+
+Provider networks
+^^^^^^^^^^^^^^^^^
+
+The VNFs require a clear L2 connection to the external network in order to
+generate realistic traffic from multiple address ranges and ports.
+
+In order to prevent Neutron from filtering traffic we have to disable Neutron Port Security.
+We also disable DHCP on the data ports because we are binding the ports to DPDK and do not need
+DHCP addresses. We also disable gateways because multiple default gateways can prevent SSH access
+to the VNF from the floating IP. We only want a gateway on the mgmt network.
+
+.. code-block:: yaml
+
+ uplink_0:
+ cidr: '10.1.0.0/24'
+ gateway_ip: 'null'
+ port_security_enabled: False
+ enable_dhcp: 'false'
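+
+One possible way to pre-create such a provider network with the OpenStack CLI (a
+sketch only; the ``vlan`` network type, ``physnet2`` physical network, segment ID
+and names are placeholders for your deployment):
+
+.. code-block:: console
+
+   openstack network create --provider-network-type vlan \
+       --provider-physical-network physnet2 --provider-segment 1000 \
+       --disable-port-security uplink_0
+   openstack subnet create --network uplink_0 --subnet-range 10.1.0.0/24 \
+       --no-dhcp --gateway none uplink_0_subnet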
+
+Heat Topologies
+^^^^^^^^^^^^^^^
+
+By default Heat will attach every node to every Neutron network that is created.
+For scale-out tests we do not want to attach every node to every network.
+
+For each node you can specify which ports are on which network using the
+network_ports dictionary.
+
+In this example we have ``TRex xe0 <-> xe0 VNF xe1 <-> xe0 UDP_Replay``
+
+.. code-block:: yaml
+
+ vnf_0:
+ floating_ip: true
+ placement: "pgrp1"
+ network_ports:
+ mgmt:
+ - mgmt
+ uplink_0:
+ - xe0
+ downlink_0:
+ - xe1
+ tg_0:
+ floating_ip: true
+ placement: "pgrp1"
+ network_ports:
+ mgmt:
+ - mgmt
+ uplink_0:
+ - xe0
+ # Trex always needs two ports
+ uplink_1:
+ - xe1
+ tg_1:
+ floating_ip: true
+ placement: "pgrp1"
+ network_ports:
+ mgmt:
+ - mgmt
+ downlink_0:
+ - xe0
+
+Collectd KPIs
+-------------
+
+NSB can collect KPIs from collectd. We have support for various plugins enabled by the
+Barometer project.
+
+The default yardstick-samplevnf has collectd installed. This allows for collecting KPIs
+from the VNF.
+
+Collecting KPIs from the NFVi is more complicated and requires manual setup.
+We assume that collectd is not installed on the compute nodes.
+
+To collect KPIs from the NFVi compute nodes:
+
+
+ * install_collectd on the compute nodes
+ * create pod.yaml for the compute nodes
+ * enable specific plugins depending on the vswitch and DPDK
+
+ Example pod.yaml section for a compute node running collectd:
+
+.. code-block:: yaml
+
+ -
+ name: "compute-1"
+ role: Compute
+ ip: "10.1.2.3"
+ user: "root"
+ ssh_port: "22"
+ password: ""
+ collectd:
+ interval: 5
+ plugins:
+ # for libvirtd stats
+ virt: {}
+ intel_pmu: {}
+ ovs_stats:
+ # path to OVS socket
+ ovs_socket_path: /var/run/openvswitch/db.sock
+ intel_rdt: {}
+
+
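+The plugins listed in the pod.yaml above must also be loaded in the collectd
+configuration on the compute node. A minimal sketch (assuming a collectd build
+that ships the Barometer plugins; the configuration file location may differ):
+
+.. code-block:: console
+
+   # e.g. in /etc/collectd/collectd.conf
+   LoadPlugin virt
+   LoadPlugin intel_pmu
+   LoadPlugin ovs_stats
+   LoadPlugin intel_rdt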
+
+Scale-Up
+------------------
+
+VNF performance data with scale-up:
+
+ * Helps to determine the optimal number of cores to specify when creating the Virtual Machine template or VNF
+ * Helps in comparison between different VNF vendor offerings
+ * A better scale-up index indicates better performance scalability of a particular solution
+
+Heat
+^^^^
+
+For VNF scale-up tests we increase the number of VNF worker threads. In the case of VNFs
+we also need to increase the number of VCPUs and memory allocated to the VNF.
+
+An example scale-up Heat testcase is:
+
+.. code-block:: console
+
+ <repo>/samples/vnf_samples/nsut/acl/tc_heat_rfc2544_ipv4_1rule_1flow_64B_trex_scale_up.yaml
+
+This testcase template requires specifying the number of VCPUs and memory.
+We set the VCPUs and memory using the ``--task-args`` option:
+
+.. code-block:: console
+
+ yardstick --debug task start --task-args='{"mem": 20480, "vcpus": 10}' samples/vnf_samples/nsut/acl/tc_heat_rfc2544_ipv4_1rule_1flow_64B_trex_scale_up.yaml
+
+
+Baremetal
+^^^^^^^^^
+ 1. Follow the traffic generator section above to set up the traffic generator.
+ 2. Edit the number of threads in ``<repo>/samples/vnf_samples/nsut/vfw/tc_baremetal_rfc2544_ipv4_1rule_1flow_64B_trex_scale_up.yaml``
+
+ e.g. 6 threads for a given VNF:
+
+.. code-block:: yaml
+
+
+ schema: yardstick:task:0.1
+ scenarios:
+ {% for worker_thread in [1, 2 ,3 , 4, 5, 6] %}
+ - type: NSPerf
+ traffic_profile: ../../traffic_profiles/ipv4_throughput.yaml
+ topology: vfw-tg-topology.yaml
+ nodes:
+ tg__0: trafficgen_1.yardstick
+ vnf__0: vnf.yardstick
+ options:
+ framesize:
+ uplink: {64B: 100}
+ downlink: {64B: 100}
+ flow:
+ src_ip: [{'tg__0': 'xe0'}]
+ dst_ip: [{'tg__0': 'xe1'}]
+ count: 1
+ traffic_type: 4
+ rfc2544:
+ allowed_drop_rate: 0.0001 - 0.0001
+ vnf__0:
+ rules: acl_1rule.yaml
+ vnf_config: {lb_config: 'HW', lb_count: 1, worker_config: '1C/1T', worker_threads: {{worker_thread}}}
+ nfvi_enable: True
+ runner:
+ type: Iteration
+ iterations: 10
+ interval: 35
+ {% endfor %}
+ context:
+ type: Node
+ name: yardstick
+ nfvi_type: baremetal
+ file: /etc/yardstick/nodes/pod.yaml
+
+Scale-Out
+--------------------
+
+VNF performance data with scale-out:
+
+ * Helps in capacity planning to meet the given network node requirements
+ * Helps in comparison between different VNF vendor offerings
+ * A better scale-out index provides more flexibility in meeting future capacity requirements
+
+
+Standalone
+^^^^^^^^^^
+
+Scale-out not supported on Baremetal.
+
+1. Follow the traffic generator section above to set up the traffic generator.
+2. Generate the testcase for standalone virtualization using the ansible scripts:
+
+ .. code-block:: console
+
+ cd <repo>/ansible
+ trex: standalone_ovs_scale_out_trex_test.yaml or standalone_sriov_scale_out_trex_test.yaml
+ ixia: standalone_ovs_scale_out_ixia_test.yaml or standalone_sriov_scale_out_ixia_test.yaml
+ ixia_correlated: standalone_ovs_scale_out_ixia_correlated_test.yaml or standalone_sriov_scale_out_ixia_correlated_test.yaml
+
+ Update the ovs_dpdk or sriov Ansible scripts above to reflect the setup; they can then be run as shown below.
+
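+   A hypothetical invocation, assuming the listed files are standard Ansible
+   playbooks run from the ``<repo>/ansible`` directory:
+
+   .. code-block:: console
+
+      cd <repo>/ansible
+      ansible-playbook standalone_sriov_scale_out_trex_test.yaml
+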
+3. Run the test:
+
+ .. code-block:: console
+
+ <repo>/samples/vnf_samples/nsut/tc_sriov_vfw_udp_ixia_correlated_scale_out-1.yaml
+ <repo>/samples/vnf_samples/nsut/tc_sriov_vfw_udp_ixia_correlated_scale_out-2.yaml
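+
+   For example, using the yardstick CLI described earlier (a sketch):
+
+   .. code-block:: console
+
+      yardstick --debug task start <repo>/samples/vnf_samples/nsut/tc_sriov_vfw_udp_ixia_correlated_scale_out-1.yaml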
+
+Heat
+^^^^
+
+There are sample scale-out all-VM Heat tests. These tests only use VMs and don't use external traffic.
+
+The tests use UDP_Replay and correlated traffic.
+
+.. code-block:: console
+
+ <repo>/samples/vnf_samples/nsut/cgnapt/tc_heat_rfc2544_ipv4_1flow_64B_trex_correlated_scale_4.yaml
+
+To run the test you need to increase OpenStack CPU, Memory and Port quotas.
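+
+For example, the quotas could be raised with the OpenStack CLI (a sketch; the
+values and project name are placeholders):
+
+.. code-block:: console
+
+   openstack quota set --cores 64 --ram 204800 --ports 100 <project>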
+
+
+Traffic Generator tuning
+------------------------
+
+The TRex traffic generator can be set up to use multiple threads per core; this is useful for multiqueue testing.
+
+TRex does not automatically enable multiple threads because we currently cannot detect the number of queues on a device.
+
+To enable multiple queues, set the ``queues_per_port`` value in the TG VNF options section.
+
+.. code-block:: yaml
+
+ scenarios:
+ - type: NSPerf
+ nodes:
+ tg__0: tg_0.yardstick
+
+ options:
+ tg_0:
+ queues_per_port: 2
+
+
diff --git a/docs/testing/user/userguide/14-nsb_installation.rst b/docs/testing/user/userguide/14-nsb_installation.rst
deleted file mode 100644
index 39477f476..000000000
--- a/docs/testing/user/userguide/14-nsb_installation.rst
+++ /dev/null
@@ -1,737 +0,0 @@
-.. This work is licensed under a Creative Commons Attribution 4.0 International
-.. License.
-.. http://creativecommons.org/licenses/by/4.0
-.. (c) OPNFV, 2016-2017 Intel Corporation.
-
-Yardstick - NSB Testing -Installation
-=====================================
-
-Abstract
---------
-
-The Network Service Benchmarking (NSB) extends the yardstick framework to do
-VNF characterization and benchmarking in three different execution
-environments viz., bare metal i.e. native Linux environment, standalone virtual
-environment and managed virtualized environment (e.g. Open stack etc.).
-It also brings in the capability to interact with external traffic generators
-both hardware & software based for triggering and validating the traffic
-according to user defined profiles.
-
-The steps needed to run Yardstick with NSB testing are:
-
-* Install Yardstick (NSB Testing).
-* Setup pod.yaml describing Test topology
-* Create the test configuration yaml file.
-* Run the test case.
-
-
-Prerequisites
--------------
-
-Refer chapter Yardstick Instalaltion for more information on yardstick
-prerequisites
-
-Several prerequisites are needed for Yardstick(VNF testing):
-
-- Python Modules: pyzmq, pika.
-
-- flex
-
-- bison
-
-- build-essential
-
-- automake
-
-- libtool
-
-- librabbitmq-dev
-
-- rabbitmq-server
-
-- collectd
-
-- intel-cmt-cat
-
-Install Yardstick (NSB Testing)
--------------------------------
-
-Using Docker
-------------
-Refer chapter :doc:`04-installation` for more on docker **Install Yardstick using Docker (**recommended**)**
-
-Install directly in Ubuntu
---------------------------
-.. _install-framework:
-
-Alternatively you can install Yardstick framework directly in Ubuntu or in an Ubuntu Docker image. No matter which way you choose to install Yardstick, the following installation steps are identical.
-
-If you choose to use the Ubuntu Docker image, you can pull the Ubuntu
-Docker image from Docker hub::
-
- docker pull ubuntu:16.04
-
-Install Yardstick
-^^^^^^^^^^^^^^^^^^^^^
-
-Prerequisite preparation::
-
- apt-get update && apt-get install -y git python-setuptools python-pip
- easy_install -U setuptools==30.0.0
- pip install appdirs==1.4.0
- pip install virtualenv
-
-Create a virtual environment::
-
- virtualenv ~/yardstick_venv
- export YARDSTICK_VENV=~/yardstick_venv
- source ~/yardstick_venv/bin/activate
-
-Download the source code and install Yardstick from it::
-
- git clone https://gerrit.opnfv.org/gerrit/yardstick
- export YARDSTICK_REPO_DIR=~/yardstick
- cd yardstick
- ./install.sh
-
-
-After *Yardstick* is installed, executing the "nsb_setup.sh" script to setup
-NSB testing::
-
- ./nsb_setup.sh
-
-It will also automatically download all the packages needed for NSB Testing setup.
-
-System Topology:
------------------
-
-.. code-block:: console
-
- +----------+ +----------+
- | | | |
- | | (0)----->(0) | |
- | TG1 | | DUT |
- | | | |
- | | (1)<-----(1) | |
- +----------+ +----------+
- trafficgen_1 vnf
-
-
-Environment parameters and credentials
---------------------------------------
-
-Environment variables
-^^^^^^^^^^^^^^^^^^^^^
-
-Before running Yardstick (NSB Testing) it is necessary to export traffic
-generator libraries.::
-
- source ~/.bash_profile
-
-Config yardstick conf
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-::
-
- cp ./etc/yardstick/yardstick.conf.sample /etc/yardstick/yardstick.conf
- vi /etc/yardstick/yardstick.conf
-
-Add trex_path, trex_client_lib and bin_path in 'nsb' section.
-
-::
-
- [DEFAULT]
- debug = True
- dispatcher = file, influxdb
-
- [dispatcher_influxdb]
- timeout = 5
- target = http://{YOUR_IP_HERE}:8086
- db_name = yardstick
- username = root
- password = root
-
- [nsb]
- trex_path=/opt/nsb_bin/trex/scripts
- bin_path=/opt/nsb_bin
- trex_client_lib=/opt/nsb_bin/trex_client/stl
-
-Network Service Benchmarking - Bare-Metal
------------------------------------------
-
-Config pod.yaml describing Topology
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-2-Node setup:
-^^^^^^^^^^^^^
-.. code-block:: console
- +----------+ +----------+
- | | | |
- | | (0)----->(0) | |
- | TG1 | | DUT |
- | | | |
- | | (n)<-----(n) | |
- +----------+ +----------+
- trafficgen_1 vnf
-
-3-Node setup - Correlated Traffic
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-.. code-block:: console
- +----------+ +----------+ +------------+
- | | | | | |
- | | | | | |
- | | (0)----->(0) | | | UDP |
- | TG1 | | DUT | | Replay |
- | | | | | |
- | | | |(1)<---->(0)| |
- +----------+ +----------+ +------------+
- trafficgen_1 vnf trafficgen_2
-
-Before executing Yardstick test cases, make sure that pod.yaml reflects the
-topology and update all the required fields.::
-
- cp /etc/yardstick/nodes/pod.yaml.nsb.sample /etc/yardstick/nodes/pod.yaml
-
-Config pod.yaml
-::
- nodes:
- -
- name: trafficgen_1
- role: TrafficGen
- ip: 1.1.1.1
- user: root
- password: r00t
- interfaces:
- xe0: # logical name from topology.yaml and vnfd.yaml
- vpci: "0000:07:00.0"
- driver: i40e # default kernel driver
- dpdk_port_num: 0
- local_ip: "152.16.100.20"
- netmask: "255.255.255.0"
- local_mac: "00:00:00:00:00:01"
- xe1: # logical name from topology.yaml and vnfd.yaml
- vpci: "0000:07:00.1"
- driver: i40e # default kernel driver
- dpdk_port_num: 1
- local_ip: "152.16.40.20"
- netmask: "255.255.255.0"
- local_mac: "00:00.00:00:00:02"
-
- -
- name: vnf
- role: vnf
- ip: 1.1.1.2
- user: root
- password: r00t
- host: 1.1.1.2 #BM - host == ip, virtualized env - Host - compute node
- interfaces:
- xe0: # logical name from topology.yaml and vnfd.yaml
- vpci: "0000:07:00.0"
- driver: i40e # default kernel driver
- dpdk_port_num: 0
- local_ip: "152.16.100.19"
- netmask: "255.255.255.0"
- local_mac: "00:00:00:00:00:03"
-
- xe1: # logical name from topology.yaml and vnfd.yaml
- vpci: "0000:07:00.1"
- driver: i40e # default kernel driver
- dpdk_port_num: 1
- local_ip: "152.16.40.19"
- netmask: "255.255.255.0"
- local_mac: "00:00:00:00:00:04"
- routing_table:
- - network: "152.16.100.20"
- netmask: "255.255.255.0"
- gateway: "152.16.100.20"
- if: "xe0"
- - network: "152.16.40.20"
- netmask: "255.255.255.0"
- gateway: "152.16.40.20"
- if: "xe1"
- nd_route_tbl:
- - network: "0064:ff9b:0:0:0:0:9810:6414"
- netmask: "112"
- gateway: "0064:ff9b:0:0:0:0:9810:6414"
- if: "xe0"
- - network: "0064:ff9b:0:0:0:0:9810:2814"
- netmask: "112"
- gateway: "0064:ff9b:0:0:0:0:9810:2814"
- if: "xe1"
-
-Enable yardstick virtual environment
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-Before executing yardstick test cases, make sure to activate yardstick
-python virtual environment if runnin on ubuntu without docker::
-
- source /opt/nsb_bin/yardstick_venv/bin/activate
-
-On docker, virtual env is in main path.
-
-Run Yardstick - Network Service Testcases
------------------------------------------
-
-NS testing - using NSBperf CLI
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-::
-
- PYTHONPATH: ". ~/.bash_profile"
- cd <yardstick_repo>/yardstick/cmd
-
- Execute command: ./NSPerf.py -h
- ./NSBperf.py --vnf <selected vnf> --test <rfc test>
- eg: ./NSBperf.py --vnf vpe --test tc_baremetal_rfc2544_ipv4_1flow_64B.yaml
-
-NS testing - using yardstick CLI
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-::
- PYTHONPATH: ". ~/.bash_profile"
-
-Go to test case forlder type we want to execute.
- e.g. <yardstick repo>/samples/vnf_samples/nsut/<vnf>/
- run: yardstick --debug task start <test_case.yaml>
-
-Network Service Benchmarking - Standalone Virtualization
---------------------------------------------------------
-
-SRIOV:
------
-
-Pre-requisites
-^^^^^^^^^^^^^^
-
-On Host:
- a) Create a bridge for VM to connect to external network
- brctl addbr br-int
- brctl addif br-int <interface_name> #This interface is connected to internet
-
- b) Build guest image for VNF to run.
- Most of the sample test cases in Yardstick are using a guest image called
- ``yardstick-image`` which deviates from an Ubuntu Cloud Server image
- Yardstick has a tool for building this custom image with samplevnf.
- It is necessary to have ``sudo`` rights to use this tool.
-
- Also you may need to install several additional packages to use this tool, by
- follwing the commands below::
-
- sudo apt-get update && sudo apt-get install -y qemu-utils kpartx
-
- This image can be built using the following command in the directory where Yardstick is installed::
-
- export YARD_IMG_ARCH='amd64'
- sudo echo "Defaults env_keep += \'YARD_IMG_ARCH\'" >> /etc/sudoers
- sudo tools/yardstick-img-dpdk-modify tools/ubuntu-server-cloudimg-samplevnf-modify.sh
-
- for more details refer chapter :doc:`04-installation``
-
-Note: VM should be build with static IP and should be accessiable from yardstick host.
-
-Config pod.yaml describing Topology
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-2-Node setup:
-^^^^^^^^^^^^^
-.. code-block:: console
- +--------------------+
- | |
- | |
- | DUT |
- | (VNF) |
- | |
- +--------------------+
- | VF NIC | | VF NIC |
- +--------+ +--------+
- ^ ^
- | |
- | |
- +--------+ +--------+
- - PF NIC - - PF NIC -
- +----------+ +-------------------------+
- | | | ^ ^ |
- | | | | | |
- | | (0)<----->(0) | ------ | |
- | TG1 | | SUT | |
- | | | | |
- | | (n)<----->(n) |------------------ |
- +----------+ +-------------------------+
- trafficgen_1 host
-
-
-3-Node setup - Correlated Traffic
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-.. code-block:: console
-
- +--------------------+
- | |
- | |
- | DUT |
- | (VNF) |
- | |
- +--------------------+
- | VF NIC | | VF NIC |
- +--------+ +--------+
- ^ ^
- | |
- | |
- +--------+ +--------+
- | PF NIC - - PF NIC -
- +----------+ +-------------------------+ +------------+
- | | | ^ ^ | | |
- | | | | | | | |
- | | (0)<----->(0) | ------ | | | TG2 |
- | TG1 | | SUT | | |(UDP Replay)|
- | | | | | | |
- | | (n)<----->(n) | ------ |(n)<-->(n)| |
- +----------+ +-------------------------+ +------------+
- trafficgen_1 host trafficgen_2
-
-Before executing Yardstick test cases, make sure that pod.yaml reflects the
-topology and update all the required fields.
-
-::
-
- cp /etc/yardstick/nodes/pod.yaml.nsb.sriov.sample /etc/yardstick/nodes/pod.yaml
-
-Config pod.yaml
-::
- nodes:
- -
- name: trafficgen_1
- role: TrafficGen
- ip: 1.1.1.1
- user: root
- password: r00t
- interfaces:
- xe0: # logical name from topology.yaml and vnfd.yaml
- vpci: "0000:07:00.0"
- driver: i40e # default kernel driver
- dpdk_port_num: 0
- local_ip: "152.16.100.20"
- netmask: "255.255.255.0"
- local_mac: "00:00:00:00:00:01"
- xe1: # logical name from topology.yaml and vnfd.yaml
- vpci: "0000:07:00.1"
- driver: i40e # default kernel driver
- dpdk_port_num: 1
- local_ip: "152.16.40.20"
- netmask: "255.255.255.0"
- local_mac: "00:00.00:00:00:02"
-
--
- name: sriov
- role: Sriov
- ip: 2.2.2.2
- user: root
- auth_type: password
- password: password
- vf_macs:
- - "00:00:00:00:00:03"
- - "00:00:00:00:00:04"
- phy_ports: # Physical ports to configure sriov
- - "0000:06:00.0"
- - "0000:06:00.1"
- phy_driver: i40e # kernel driver
- images: "/var/lib/libvirt/images/ubuntu1.img"
-
- -
- name: vnf
- role: vnf
- ip: 1.1.1.2
- user: root
- password: r00t
- host: 2.2.2.2 #BM - host == ip, virtualized env - Host - compute node
- interfaces:
- xe0: # logical name from topology.yaml and vnfd.yaml
- vpci: "0000:00:07.0"
- driver: i40evf # default kernel driver
- dpdk_port_num: 0
- local_ip: "152.16.100.10"
- netmask: "255.255.255.0"
- local_mac: "00:00:00:00:00:03"
-
- xe1: # logical name from topology.yaml and vnfd.yaml
- vpci: "0000:00:08.0"
- driver: i40evf # default kernel driver
- dpdk_port_num: 1
- local_ip: "152.16.40.10"
- netmask: "255.255.255.0"
- local_mac: "00:00:00:00:00:04"
- routing_table:
- - network: "152.16.100.10"
- netmask: "255.255.255.0"
- gateway: "152.16.100.20"
- if: "xe0"
- - network: "152.16.40.10"
- netmask: "255.255.255.0"
- gateway: "152.16.40.20"
- if: "xe1"
- nd_route_tbl:
- - network: "0064:ff9b:0:0:0:0:9810:6414"
- netmask: "112"
- gateway: "0064:ff9b:0:0:0:0:9810:6414"
- if: "xe0"
- - network: "0064:ff9b:0:0:0:0:9810:2814"
- netmask: "112"
- gateway: "0064:ff9b:0:0:0:0:9810:2814"
- if: "xe1"
-
-Enable yardstick virtual environment
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-Before executing yardstick test cases, make sure to activate yardstick
-python virtual environment if runnin on ubuntu without docker::
-
- source /opt/nsb_bin/yardstick_venv/bin/activate
-
-On docker, virtual env is in main path.
-
-Run Yardstick - Network Service Testcases
------------------------------------------
-
-NS testing - using NSBperf CLI
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-::
-
- PYTHONPATH: ". ~/.bash_profile"
- cd <yardstick_repo>/yardstick/cmd
-
- Execute command: ./NSPerf.py -h
- ./NSBperf.py --vnf <selected vnf> --test <rfc test>
- eg: ./NSBperf.py --vnf vfw --test tc_sriov_rfc2544_ipv4_1flow_64B.yaml
-
-NS testing - using yardstick CLI
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-::
- PYTHONPATH: ". ~/.bash_profile"
-
-Go to test case forlder type we want to execute.
- e.g. <yardstick repo>/samples/vnf_samples/nsut/<vnf>/
- run: yardstick --debug task start <test_case.yaml>
-
-OVS-DPDK:
------
-
-Pre-requisites
-^^^^^^^^^^^^^^
-
-On Host:
- a) Create a bridge for VM to connect to external network
- brctl addbr br-int
- brctl addif br-int <interface_name> #This interface is connected to internet
-
- b) Build guest image for VNF to run.
- Most of the sample test cases in Yardstick are using a guest image called
- ``yardstick-image`` which deviates from an Ubuntu Cloud Server image
- Yardstick has a tool for building this custom image with samplevnf.
- It is necessary to have ``sudo`` rights to use this tool.
-
- Also you may need to install several additional packages to use this tool, by
- follwing the commands below::
-
- sudo apt-get update && sudo apt-get install -y qemu-utils kpartx
-
- This image can be built using the following command in the directory where Yardstick is installed::
-
- export YARD_IMG_ARCH='amd64'
- sudo echo "Defaults env_keep += \'YARD_IMG_ARCH\'" >> /etc/sudoers
- sudo tools/yardstick-img-dpdk-modify tools/ubuntu-server-cloudimg-samplevnf-modify.sh
-
- for more details refer chapter :doc:`04-installation``
-
-Note: VM should be build with static IP and should be accessiable from yardstick host.
-
- c) OVS & DPDK version.
- - OVS 2.7 and DPDK 16.11.1 above version is supported
-
- d) Setup OVS/DPDK on host.
- Please refer below link on how to setup .. _ovs-dpdk: http://docs.openvswitch.org/en/latest/intro/install/dpdk/
-
-Config pod.yaml describing Topology
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-2-Node setup:
-^^^^^^^^^^^^^
-.. code-block:: console
- +--------------------+
- | |
- | |
- | DUT |
- | (VNF) |
- | |
- +--------------------+
- | virtio | | virtio |
- +--------+ +--------+
- ^ ^
- | |
- | |
- +--------+ +--------+
- | vHOST0 | | vHOST1 |
- +----------+ +-------------------------+
- | | | ^ ^ |
- | | | | | |
- | | (0)<----->(0) | ------ | |
- | TG1 | | SUT | |
- | | | (ovs-dpdk) | |
- | | (n)<----->(n) |------------------ |
- +----------+ +-------------------------+
- trafficgen_1 host
-
-
-3-Node setup - Correlated Traffic
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-.. code-block:: console
-
- +--------------------+
- | |
- | |
- | DUT |
- | (VNF) |
- | |
- +--------------------+
- | virtio | | virtio |
- +--------+ +--------+
- ^ ^
- | |
- | |
- +--------+ +--------+
- | vHOST0 | | vHOST1 |
- +----------+ +-------------------------+ +------------+
- | | | ^ ^ | | |
- | | | | | | | |
- | | (0)<----->(0) | ------ | | | TG2 |
- | TG1 | | SUT | | |(UDP Replay)|
- | | | (ovs-dpdk) | | | |
- | | (n)<----->(n) | ------ |(n)<-->(n)| |
- +----------+ +-------------------------+ +------------+
- trafficgen_1 host trafficgen_2
-
-
-Before executing Yardstick test cases, make sure that pod.yaml reflects the
-topology and update all the required fields.::
-
- cp /etc/yardstick/nodes/pod.yaml.nsb.ovs.sample /etc/yardstick/nodes/pod.yaml
-
-Config pod.yaml
-::
- nodes:
- -
- name: trafficgen_1
- role: TrafficGen
- ip: 1.1.1.1
- user: root
- password: r00t
- interfaces:
- xe0: # logical name from topology.yaml and vnfd.yaml
- vpci: "0000:07:00.0"
- driver: i40e # default kernel driver
- dpdk_port_num: 0
- local_ip: "152.16.100.20"
- netmask: "255.255.255.0"
- local_mac: "00:00:00:00:00:01"
- xe1: # logical name from topology.yaml and vnfd.yaml
- vpci: "0000:07:00.1"
- driver: i40e # default kernel driver
- dpdk_port_num: 1
- local_ip: "152.16.40.20"
- netmask: "255.255.255.0"
- local_mac: "00:00.00:00:00:02"
-
--
- name: ovs
- role: Ovsdpdk
- ip: 2.2.2.2
- user: root
- auth_type: password
- password: <password>
- vpath: "/usr/local/"
- vports:
- - dpdkvhostuser0
- - dpdkvhostuser1
- vports_mac:
- - "00:00:00:00:00:03"
- - "00:00:00:00:00:04"
- phy_ports: # Physical ports to configure ovs
- - "0000:06:00.0"
- - "0000:06:00.1"
- flow:
- - ovs-ofctl add-flow br0 in_port=1,action=output:3
- - ovs-ofctl add-flow br0 in_port=3,action=output:1
- - ovs-ofctl add-flow br0 in_port=4,action=output:2
- - ovs-ofctl add-flow br0 in_port=2,action=output:4
- phy_driver: i40e # kernel driver
- images: "/var/lib/libvirt/images/ubuntu1.img"
-
- -
- name: vnf
- role: vnf
- ip: 1.1.1.2
- user: root
- password: r00t
- host: 2.2.2.2 #BM - host == ip, virtualized env - Host - compute node
- interfaces:
- xe0: # logical name from topology.yaml and vnfd.yaml
- vpci: "0000:00:04.0"
- driver: virtio-pci # default kernel driver
- dpdk_port_num: 0
- local_ip: "152.16.100.10"
- netmask: "255.255.255.0"
- local_mac: "00:00:00:00:00:03"
-
- xe1: # logical name from topology.yaml and vnfd.yaml
- vpci: "0000:00:05.0"
- driver: virtio-pci # default kernel driver
- dpdk_port_num: 1
- local_ip: "152.16.40.10"
- netmask: "255.255.255.0"
- local_mac: "00:00:00:00:00:04"
- routing_table:
- - network: "152.16.100.10"
- netmask: "255.255.255.0"
- gateway: "152.16.100.20"
- if: "xe0"
- - network: "152.16.40.10"
- netmask: "255.255.255.0"
- gateway: "152.16.40.20"
- if: "xe1"
- nd_route_tbl:
- - network: "0064:ff9b:0:0:0:0:9810:6414"
- netmask: "112"
- gateway: "0064:ff9b:0:0:0:0:9810:6414"
- if: "xe0"
- - network: "0064:ff9b:0:0:0:0:9810:2814"
- netmask: "112"
- gateway: "0064:ff9b:0:0:0:0:9810:2814"
- if: "xe1"
-
-Enable yardstick virtual environment
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-Before executing yardstick test cases, make sure to activate yardstick
-python virtual environment if runnin on ubuntu without docker::
-
- source /opt/nsb_bin/yardstick_venv/bin/activate
-
-On docker, virtual env is in main path.
-
-Run Yardstick - Network Service Testcases
------------------------------------------
-
-NS testing - using NSBperf CLI
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-::
-
- PYTHONPATH: ". ~/.bash_profile"
- cd <yardstick_repo>/yardstick/cmd
-
- Execute command: ./NSPerf.py -h
- ./NSBperf.py --vnf <selected vnf> --test <rfc test>
- eg: ./NSBperf.py --vnf vfw --test tc_ovs_rfc2544_ipv4_1flow_64B.yaml
-
-NS testing - using yardstick CLI
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-::
- PYTHONPATH: ". ~/.bash_profile"
-
-Go to test case forlder type we want to execute.
- e.g. <yardstick repo>/samples/vnf_samples/nsut/<vnf>/
- run: yardstick --debug task start <test_case.yaml>
diff --git a/docs/testing/user/userguide/index.rst b/docs/testing/user/userguide/index.rst
index 707e12b56..c3dc57a9f 100644
--- a/docs/testing/user/userguide/index.rst
+++ b/docs/testing/user/userguide/index.rst
@@ -23,8 +23,9 @@ Yardstick User Guide
08-api
09-yardstick_user_interface
10-vtc-overview
- 13-nsb-overview
- 14-nsb_installation
+ 11-nsb-overview
+ 12-nsb_installation
+ 13-nsb_operation
15-list-of-tcs
glossary
references