Diffstat (limited to 'docs')
-rw-r--r-- docs/release/release-notes/release-notes.rst          |  27
-rw-r--r-- docs/testing/user/userguide/04-installation.rst       | 138
-rw-r--r-- docs/testing/user/userguide/13-nsb-overview.rst       |  11
-rw-r--r-- docs/testing/user/userguide/14-nsb_installation.rst   | 543
-rw-r--r-- docs/testing/user/userguide/opnfv_yardstick_tc006.rst | 119
-rw-r--r-- docs/testing/user/userguide/opnfv_yardstick_tc056.rst | 149
-rw-r--r-- docs/testing/user/userguide/opnfv_yardstick_tc057.rst | 165
-rw-r--r-- docs/testing/user/userguide/opnfv_yardstick_tc058.rst | 148
-rw-r--r-- docs/testing/user/userguide/opnfv_yardstick_tc083.rst |  81
9 files changed, 1349 insertions, 32 deletions
diff --git a/docs/release/release-notes/release-notes.rst b/docs/release/release-notes/release-notes.rst
index bd65c0725..bbd7e8417 100644
--- a/docs/release/release-notes/release-notes.rst
+++ b/docs/release/release-notes/release-notes.rst
@@ -37,10 +37,10 @@ Version History
| *Date* | *Version* | *Comment* |
| | | |
+----------------+--------------------+---------------------------------+
-| | 3.1 | Yardstick for Danube release |
+| | 3.2 | Yardstick for Danube release |
| | | |
-| | | Note: The 3.1 tag is due to git |
-| | | tag issue during Danube 3.0 |
+| | | Note: The 3.2 tag is due to a |
+| | | code issue during Danube 3.1 |
| | | release |
| | | |
+----------------+--------------------+---------------------------------+
@@ -142,16 +142,16 @@ Release Data
| **Project** | Yardstick |
| | |
+--------------------------------------+--------------------------------------+
-| **Repo/tag** | yardstick/Danube.3.1 |
+| **Repo/tag** | yardstick/Danube.3.2 |
| | |
+--------------------------------------+--------------------------------------+
-| **Yardstick Docker image tag** | Danube.3.1 |
+| **Yardstick Docker image tag** | Danube.3.2 |
| | |
+--------------------------------------+--------------------------------------+
| **Release designation** | Danube |
| | |
+--------------------------------------+--------------------------------------+
-| **Release date** | July 14th, 2017 |
+| **Release date** | August 15th, 2017 |
| | |
+--------------------------------------+--------------------------------------+
| **Purpose of the delivery** | OPNFV Danube release 3.0 |
@@ -174,7 +174,7 @@ Software Deliverables
---------------------
- - The Yardstick Docker image: https://hub.docker.com/r/opnfv/yardstick (tag: danube.3.1)
+ - The Yardstick Docker image: https://hub.docker.com/r/opnfv/yardstick (tag: danube.3.2)
**Contexts**
@@ -602,6 +602,17 @@ Known Issues/Faults
Corrected Faults
----------------
+Danube.3.2:
+
++----------------------------+------------------------------------------------+
+| **JIRA REFERENCE** | **DESCRIPTION** |
+| | |
++----------------------------+------------------------------------------------+
+| JIRA: YARDSTICK-776 | Bugfix: cannot run a task without |
+| | yardstick.conf in Danube |
++----------------------------+------------------------------------------------+
+
+
Danube.3.1:
+----------------------------+------------------------------------------------+
@@ -702,7 +713,7 @@ Danube.1.0:
+----------------------------+------------------------------------------------+
-Danube 3.1 known restrictions/issues
+Danube 3.2 known restrictions/issues
====================================
+-----------+-----------+----------------------------------------------+
| Installer | Scenario | Issue |
diff --git a/docs/testing/user/userguide/04-installation.rst b/docs/testing/user/userguide/04-installation.rst
index 37e4ba599..cb4f31434 100644
--- a/docs/testing/user/userguide/04-installation.rst
+++ b/docs/testing/user/userguide/04-installation.rst
@@ -443,6 +443,141 @@ Deploy InfluxDB and Grafana directly in Ubuntu (**Todo**)
-----------------------------------------------------------
+Yardstick common CLI
+--------------------
+
+list test cases
+>>>>>>>>>>>>>>>
+**yardstick testcase list**
+
+This command lists all test cases available in Yardstick.
+The output looks like this::
+
+ +---------------------------------------------------------------------------------------
+ | Testcase Name | Description
+ +---------------------------------------------------------------------------------------
+ | opnfv_yardstick_tc001 | Measure network throughput using pktgen
+ | opnfv_yardstick_tc002 | measure network latency using ping
+ | opnfv_yardstick_tc005 | Measure Storage IOPS, throughput and latency using fio.
+ | opnfv_yardstick_tc006 | Measure volume storage IOPS, throughput and latency using fio.
+ | opnfv_yardstick_tc008 | Measure network throughput and packet loss using Pktgen
+ | opnfv_yardstick_tc009 | Measure network throughput and packet loss using pktgen
+ | opnfv_yardstick_tc010 | measure memory read latency using lmbench.
+ | opnfv_yardstick_tc011 | Measure packet delay variation (jitter) using iperf3.
+ | opnfv_yardstick_tc012 | Measure memory read and write bandwidth using lmbench.
+ | opnfv_yardstick_tc014 | Measure Processing speed using unixbench.
+ | opnfv_yardstick_tc019 | Sample test case for the HA of controller node service.
+ ...
+ +---------------------------------------------------------------------------------------
+show a test case config file
+>>>>>>>>>>>>>>>>>>>>>>>>>>>>
+Take opnfv_yardstick_tc002 as an example. This test case measures network latency.
+You just need to type **yardstick testcase show opnfv_yardstick_tc002**, and the console
+will show the config yaml of this test case::
+
+ ##############################################################################
+ # Copyright (c) 2017 kristian.hunt@gmail.com and others.
+ #
+ # All rights reserved. This program and the accompanying materials
+ # are made available under the terms of the Apache License, Version 2.0
+ # which accompanies this distribution, and is available at
+ # http://www.apache.org/licenses/LICENSE-2.0
+ ##############################################################################
+ ---
+
+ schema: "yardstick:task:0.1"
+ description: >
+ Yardstick TC002 config file;
+ measure network latency using ping;
+
+ {% set image = image or "cirros-0.3.5" %}
+
+ {% set provider = provider or none %}
+ {% set physical_network = physical_network or 'physnet1' %}
+ {% set segmentation_id = segmentation_id or none %}
+ {% set packetsize = packetsize or 100 %}
+
+ scenarios:
+ {% for i in range(2) %}
+ -
+ type: Ping
+ options:
+ packetsize: {{packetsize}}
+ host: athena.demo
+ target: ares.demo
+
+ runner:
+ type: Duration
+ duration: 60
+ interval: 10
+
+ sla:
+ max_rtt: 10
+ action: monitor
+ {% endfor %}
+
+ context:
+ name: demo
+ image: {{image}}
+ flavor: yardstick-flavor
+ user: cirros
+
+ placement_groups:
+ pgrp1:
+ policy: "availability"
+
+ servers:
+ athena:
+ floating_ip: true
+ placement: "pgrp1"
+ ares:
+ placement: "pgrp1"
+
+ networks:
+ test:
+ cidr: '10.0.1.0/24'
+ {% if provider == "vlan" %}
+ provider: {{provider}}
+ physical_network: {{physical_network}}
+ {% if segmentation_id %}
+ segmentation_id: {{segmentation_id}}
+ {% endif %}
+ {% endif %}
+
+start a task to run yardstick test case
+>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
+If you want to run a test case, use **yardstick task start <test_case_path>**.
+This command supports the parameters listed below:
+
++---------------------+--------------------------------------------------+
+| Parameters | Detail |
++=====================+==================================================+
+| -d | show debug log of yardstick running |
+| | |
++---------------------+--------------------------------------------------+
+| --task-args | If you want to customize test case parameters, |
+| | use "--task-args" to pass the value. The format |
+| | is a json string with parameter key-value pair. |
+| | |
++---------------------+--------------------------------------------------+
+| --task-args-file | If you want to pass test case parameters from a |
+| | file, use "--task-args-file" to pass the file |
+| | path (YAML or JSON format). |
++---------------------+--------------------------------------------------+
+| --parse-only | Only parse and validate the test case config |
+| | file, without running the test. |
+| | |
++---------------------+--------------------------------------------------+
+| --output-file \ | Specify where to output the log. If not given, |
+| OUTPUT_FILE_PATH | the default value is |
+| | "/tmp/yardstick/yardstick.log" |
+| | |
++---------------------+--------------------------------------------------+
+| --suite \ | Run a test suite; TEST_SUITE_PATH specifies |
+| TEST_SUITE_PATH | where the test suite is located |
+| | |
++---------------------+--------------------------------------------------+
+
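+For example, a task could be started as follows (the test case path, task
+argument and output file below are illustrative values only)::
+
+ yardstick -d task start samples/ping.yaml \
+ --task-args '{"packetsize": "200"}' \
+ --output-file /tmp/yardstick/ping.log
+
+A test suite is started in the same way, e.g.::
+
+ yardstick task start --suite tests/opnfv/test_suites/opnfv_smoke.yaml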
+
Run Yardstick in a local environment
------------------------------------
@@ -512,5 +647,4 @@ yaml file and add test cases, constraint or task arguments if necessary.
Proxy Support (**Todo**)
----------------------------
-
+---------------------------
\ No newline at end of file
diff --git a/docs/testing/user/userguide/13-nsb-overview.rst b/docs/testing/user/userguide/13-nsb-overview.rst
index faac61f08..63442bff0 100644
--- a/docs/testing/user/userguide/13-nsb-overview.rst
+++ b/docs/testing/user/userguide/13-nsb-overview.rst
@@ -192,3 +192,14 @@ VNFs provided.
Figure 1: Network Service - 2 server configuration
+VNFs supported for characterization:
+------------------------------------
+
+1. CGNAPT - Carrier Grade Network Address and port Translation
+2. vFW - Virtual Firewall
+3. vACL - Access Control List
+4. vPE - Provider Edge Router
+5. PROX - Packet pROcessing eXecution engine:
+ The PROX VNF can act as Drop, Basic Forwarding (no touch), L2 Forwarding (change MAC),
+ GRE encap/decap, Load balancing based on packet fields, Symmetric load balancing,
+ QinQ encap/decap IPv4/IPv6, ARP, QoS, Routing, Un-MPLS, Policing, ACL
+6. UDP_Replay
diff --git a/docs/testing/user/userguide/14-nsb_installation.rst b/docs/testing/user/userguide/14-nsb_installation.rst
index 3eb17bbca..39477f476 100644
--- a/docs/testing/user/userguide/14-nsb_installation.rst
+++ b/docs/testing/user/userguide/14-nsb_installation.rst
@@ -56,12 +56,47 @@ Several prerequisites are needed for Yardstick(VNF testing):
Install Yardstick (NSB Testing)
-------------------------------
-Refer chapter :doc:`04-installation` for more information on installing *Yardstick*
+Using Docker
+------------
+Refer to chapter :doc:`04-installation`, section **Install Yardstick using Docker (recommended)**, for details.
-After *Yardstick* is installed, executing the "nsb_setup.sh" script to setup
-NSB testing.
+Install directly in Ubuntu
+--------------------------
+.. _install-framework:
-::
+Alternatively you can install the Yardstick framework directly in Ubuntu or in an Ubuntu Docker image. Whichever way you choose, the following installation steps are identical.
+
+If you choose to use the Ubuntu Docker image, you can pull the Ubuntu
+Docker image from Docker hub::
+
+ docker pull ubuntu:16.04
+
+Install Yardstick
+^^^^^^^^^^^^^^^^^^^^^
+
+Prerequisite preparation::
+
+ apt-get update && apt-get install -y git python-setuptools python-pip
+ easy_install -U setuptools==30.0.0
+ pip install appdirs==1.4.0
+ pip install virtualenv
+
+Create a virtual environment::
+
+ virtualenv ~/yardstick_venv
+ export YARDSTICK_VENV=~/yardstick_venv
+ source ~/yardstick_venv/bin/activate
+
+Download the source code and install Yardstick from it::
+
+ git clone https://gerrit.opnfv.org/gerrit/yardstick
+ export YARDSTICK_REPO_DIR=~/yardstick
+ cd yardstick
+ ./install.sh
+
+
+After *Yardstick* is installed, execute the "nsb_setup.sh" script to set up
+NSB testing::
./nsb_setup.sh
@@ -74,42 +109,39 @@ System Topology:
+----------+ +----------+
| | | |
- | | (0)----->(0) | Ping/ |
- | TG1 | | vPE/ |
- | | | 2Trex |
+ | | (0)----->(0) | |
+ | TG1 | | DUT |
+ | | | |
| | (1)<-----(1) | |
+----------+ +----------+
trafficgen_1 vnf
-OpenStack parameters and credentials
-------------------------------------
+Environment parameters and credentials
+--------------------------------------
Environment variables
^^^^^^^^^^^^^^^^^^^^^
Before running Yardstick (NSB Testing) it is necessary to export traffic
-generator libraries.
-
-::
+generator libraries.::
source ~/.bash_profile
Config yardstick conf
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
::
cp ./etc/yardstick/yardstick.conf.sample /etc/yardstick/yardstick.conf
vi /etc/yardstick/yardstick.conf
-Add trex_path and bin_path in 'nsb' section.
+Add trex_path, trex_client_lib and bin_path to the 'nsb' section.
::
[DEFAULT]
debug = True
- dispatcher = influxdb
+ dispatcher = file, influxdb
[dispatcher_influxdb]
timeout = 5
@@ -121,15 +153,41 @@ Add trex_path and bin_path in 'nsb' section.
[nsb]
trex_path=/opt/nsb_bin/trex/scripts
bin_path=/opt/nsb_bin
+ trex_client_lib=/opt/nsb_bin/trex_client/stl
+Network Service Benchmarking - Bare-Metal
+-----------------------------------------
Config pod.yaml describing Topology
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-Before executing Yardstick test cases, make sure that pod.yaml reflects the
-topology and update all the required fields.
+2-Node setup:
+^^^^^^^^^^^^^
+.. code-block:: console
+
+ +----------+ +----------+
+ | | | |
+ | | (0)----->(0) | |
+ | TG1 | | DUT |
+ | | | |
+ | | (n)<-----(n) | |
+ +----------+ +----------+
+ trafficgen_1 vnf
-::
+3-Node setup - Correlated Traffic
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+.. code-block:: console
+
+ +----------+ +----------+ +------------+
+ | | | | | |
+ | | | | | |
+ | | (0)----->(0) | | | UDP |
+ | TG1 | | DUT | | Replay |
+ | | | | | |
+ | | | |(1)<---->(0)| |
+ +----------+ +----------+ +------------+
+ trafficgen_1 vnf trafficgen_2
+
+Before executing Yardstick test cases, make sure that pod.yaml reflects the
+topology and update all the required fields.::
cp /etc/yardstick/nodes/pod.yaml.nsb.sample /etc/yardstick/nodes/pod.yaml
@@ -204,12 +262,228 @@ Enable yardstick virtual environment
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Before executing yardstick test cases, make sure to activate yardstick
-python virtual environment
+python virtual environment if running on Ubuntu without Docker::
+
+ source /opt/nsb_bin/yardstick_venv/bin/activate
+
+In the Docker container the virtual environment is already on the main path, so no activation is needed.
+
+Run Yardstick - Network Service Testcases
+-----------------------------------------
+
+NS testing - using NSBperf CLI
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+::
+
+ PYTHONPATH: ". ~/.bash_profile"
+ cd <yardstick_repo>/yardstick/cmd
+
+ Execute command: ./NSBperf.py -h
+ ./NSBperf.py --vnf <selected vnf> --test <rfc test>
+ eg: ./NSBperf.py --vnf vpe --test tc_baremetal_rfc2544_ipv4_1flow_64B.yaml
+
+NS testing - using yardstick CLI
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+::
+
+ PYTHONPATH: ". ~/.bash_profile"
+
+Go to the folder of the test case we want to execute,
+ e.g. <yardstick repo>/samples/vnf_samples/nsut/<vnf>/
+ run: yardstick --debug task start <test_case.yaml>
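+
+For example, a vPE bare-metal run could look as follows (the yaml file name is
+illustrative; use one of the sample files present in that folder)::
+
+ cd <yardstick repo>/samples/vnf_samples/nsut/vpe/
+ yardstick --debug task start tc_baremetal_rfc2544_ipv4_1flow_64B.yaml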
+
+Network Service Benchmarking - Standalone Virtualization
+--------------------------------------------------------
+
+SRIOV:
+------
+
+Pre-requisites
+^^^^^^^^^^^^^^
+
+On Host:
+ a) Create a bridge for the VM to connect to the external network::
+
+     brctl addbr br-int
+     brctl addif br-int <interface_name>  # This interface is connected to the internet
+
+ b) Build the guest image for the VNF to run.
+ Most of the sample test cases in Yardstick use a guest image called
+ ``yardstick-image``, which deviates from an Ubuntu Cloud Server image.
+ Yardstick has a tool for building this custom image with samplevnf.
+ It is necessary to have ``sudo`` rights to use this tool.
+
+ You may also need to install several additional packages to use this tool, by
+ following the commands below::
+
+ sudo apt-get update && sudo apt-get install -y qemu-utils kpartx
+
+ This image can be built using the following commands in the directory where Yardstick is installed::
+
+ export YARD_IMG_ARCH='amd64'
+ echo "Defaults env_keep += \"YARD_IMG_ARCH\"" | sudo tee -a /etc/sudoers
+ sudo tools/yardstick-img-dpdk-modify tools/ubuntu-server-cloudimg-samplevnf-modify.sh
+
+ For more details refer to chapter :doc:`04-installation`
+
+Note: The VM should be built with a static IP and should be accessible from the yardstick host.
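+
+As a quick sanity check (illustrative commands; the bridge and image names
+follow the examples used in this section), the host setup can be verified with::
+
+ brctl show br-int  # the bridge exists and contains the uplink interface
+ ls -lh /var/lib/libvirt/images/ubuntu1.img  # the guest image is in place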
+
+Config pod.yaml describing Topology
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+2-Node setup:
+^^^^^^^^^^^^^
+.. code-block:: console
+
+ +--------------------+
+ | |
+ | |
+ | DUT |
+ | (VNF) |
+ | |
+ +--------------------+
+ | VF NIC | | VF NIC |
+ +--------+ +--------+
+ ^ ^
+ | |
+ | |
+ +--------+ +--------+
+ - PF NIC - - PF NIC -
+ +----------+ +-------------------------+
+ | | | ^ ^ |
+ | | | | | |
+ | | (0)<----->(0) | ------ | |
+ | TG1 | | SUT | |
+ | | | | |
+ | | (n)<----->(n) |------------------ |
+ +----------+ +-------------------------+
+ trafficgen_1 host
+
+
+3-Node setup - Correlated Traffic
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+.. code-block:: console
+
+ +--------------------+
+ | |
+ | |
+ | DUT |
+ | (VNF) |
+ | |
+ +--------------------+
+ | VF NIC | | VF NIC |
+ +--------+ +--------+
+ ^ ^
+ | |
+ | |
+ +--------+ +--------+
+ | PF NIC - - PF NIC -
+ +----------+ +-------------------------+ +------------+
+ | | | ^ ^ | | |
+ | | | | | | | |
+ | | (0)<----->(0) | ------ | | | TG2 |
+ | TG1 | | SUT | | |(UDP Replay)|
+ | | | | | | |
+ | | (n)<----->(n) | ------ |(n)<-->(n)| |
+ +----------+ +-------------------------+ +------------+
+ trafficgen_1 host trafficgen_2
+
+Before executing Yardstick test cases, make sure that pod.yaml reflects the
+topology and update all the required fields.
+
+::
+
+ cp /etc/yardstick/nodes/pod.yaml.nsb.sriov.sample /etc/yardstick/nodes/pod.yaml
+
+Config pod.yaml
::
+
+ nodes:
+ -
+ name: trafficgen_1
+ role: TrafficGen
+ ip: 1.1.1.1
+ user: root
+ password: r00t
+ interfaces:
+ xe0: # logical name from topology.yaml and vnfd.yaml
+ vpci: "0000:07:00.0"
+ driver: i40e # default kernel driver
+ dpdk_port_num: 0
+ local_ip: "152.16.100.20"
+ netmask: "255.255.255.0"
+ local_mac: "00:00:00:00:00:01"
+ xe1: # logical name from topology.yaml and vnfd.yaml
+ vpci: "0000:07:00.1"
+ driver: i40e # default kernel driver
+ dpdk_port_num: 1
+ local_ip: "152.16.40.20"
+ netmask: "255.255.255.0"
+ local_mac: "00:00.00:00:00:02"
+
+ -
+ name: sriov
+ role: Sriov
+ ip: 2.2.2.2
+ user: root
+ auth_type: password
+ password: password
+ vf_macs:
+ - "00:00:00:00:00:03"
+ - "00:00:00:00:00:04"
+ phy_ports: # Physical ports to configure sriov
+ - "0000:06:00.0"
+ - "0000:06:00.1"
+ phy_driver: i40e # kernel driver
+ images: "/var/lib/libvirt/images/ubuntu1.img"
+
+ -
+ name: vnf
+ role: vnf
+ ip: 1.1.1.2
+ user: root
+ password: r00t
+ host: 2.2.2.2 #BM - host == ip, virtualized env - Host - compute node
+ interfaces:
+ xe0: # logical name from topology.yaml and vnfd.yaml
+ vpci: "0000:00:07.0"
+ driver: i40evf # default kernel driver
+ dpdk_port_num: 0
+ local_ip: "152.16.100.10"
+ netmask: "255.255.255.0"
+ local_mac: "00:00:00:00:00:03"
+
+ xe1: # logical name from topology.yaml and vnfd.yaml
+ vpci: "0000:00:08.0"
+ driver: i40evf # default kernel driver
+ dpdk_port_num: 1
+ local_ip: "152.16.40.10"
+ netmask: "255.255.255.0"
+ local_mac: "00:00:00:00:00:04"
+ routing_table:
+ - network: "152.16.100.10"
+ netmask: "255.255.255.0"
+ gateway: "152.16.100.20"
+ if: "xe0"
+ - network: "152.16.40.10"
+ netmask: "255.255.255.0"
+ gateway: "152.16.40.20"
+ if: "xe1"
+ nd_route_tbl:
+ - network: "0064:ff9b:0:0:0:0:9810:6414"
+ netmask: "112"
+ gateway: "0064:ff9b:0:0:0:0:9810:6414"
+ if: "xe0"
+ - network: "0064:ff9b:0:0:0:0:9810:2814"
+ netmask: "112"
+ gateway: "0064:ff9b:0:0:0:0:9810:2814"
+ if: "xe1"
+
+Enable yardstick virtual environment
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Before executing yardstick test cases, make sure to activate yardstick
+python virtual environment if running on Ubuntu without Docker::
source /opt/nsb_bin/yardstick_venv/bin/activate
+In the Docker container the virtual environment is already on the main path, so no activation is needed.
Run Yardstick - Network Service Testcases
-----------------------------------------
@@ -218,19 +492,244 @@ NS testing - using NSBperf CLI
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
::
- source /opt/nsb_setup/yardstick_venv/bin/activate
PYTHONPATH: ". ~/.bash_profile"
cd <yardstick_repo>/yardstick/cmd
Execute command: ./NSBperf.py -h
./NSBperf.py --vnf <selected vnf> --test <rfc test>
- eg: ./NSBperf.py --vnf vpe --test tc_baremetal_rfc2544_ipv4_1flow_64B.yaml
+ eg: ./NSBperf.py --vnf vfw --test tc_sriov_rfc2544_ipv4_1flow_64B.yaml
NS testing - using yardstick CLI
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
::
+ PYTHONPATH: ". ~/.bash_profile"
+
+Go to test case forlder type we want to execute.
+ e.g. <yardstick repo>/samples/vnf_samples/nsut/<vnf>/
+ run: yardstick --debug task start <test_case.yaml>
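+
+For example, a SRIOV run of the vFW sample (file name as used in the NSBperf
+example above) could be::
+
+ cd <yardstick repo>/samples/vnf_samples/nsut/vfw/
+ yardstick --debug task start tc_sriov_rfc2544_ipv4_1flow_64B.yaml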
+
+OVS-DPDK:
+---------
+
+Pre-requisites
+^^^^^^^^^^^^^^
+
+On Host:
+ a) Create a bridge for the VM to connect to the external network::
+
+     brctl addbr br-int
+     brctl addif br-int <interface_name>  # This interface is connected to the internet
+
+ b) Build the guest image for the VNF to run.
+ Most of the sample test cases in Yardstick use a guest image called
+ ``yardstick-image``, which deviates from an Ubuntu Cloud Server image.
+ Yardstick has a tool for building this custom image with samplevnf.
+ It is necessary to have ``sudo`` rights to use this tool.
+
+ You may also need to install several additional packages to use this tool, by
+ following the commands below::
+
+ sudo apt-get update && sudo apt-get install -y qemu-utils kpartx
+
+ This image can be built using the following commands in the directory where Yardstick is installed::
+
+ export YARD_IMG_ARCH='amd64'
+ echo "Defaults env_keep += \"YARD_IMG_ARCH\"" | sudo tee -a /etc/sudoers
+ sudo tools/yardstick-img-dpdk-modify tools/ubuntu-server-cloudimg-samplevnf-modify.sh
+
+ For more details refer to chapter :doc:`04-installation`
+
+Note: The VM should be built with a static IP and should be accessible from the yardstick host.
+
+ c) OVS & DPDK version.
+ - OVS 2.7 and DPDK 16.11.1 above version is supported
+
+ d) Setup OVS/DPDK on host.
+ Please refer below link on how to setup .. _ovs-dpdk: http://docs.openvswitch.org/en/latest/intro/install/dpdk/
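+
+ A minimal sketch of the OVS-DPDK bridge setup (the bridge, port names and PCI
+ addresses follow the pod.yaml sample below; the exact ovs-vsctl options depend
+ on the installed OVS/DPDK versions)::
+
+ ovs-vsctl set Open_vSwitch . other_config:dpdk-init=true
+ ovs-vsctl add-br br0 -- set bridge br0 datapath_type=netdev
+ ovs-vsctl add-port br0 dpdk0 -- set Interface dpdk0 type=dpdk options:dpdk-devargs=0000:06:00.0
+ ovs-vsctl add-port br0 dpdk1 -- set Interface dpdk1 type=dpdk options:dpdk-devargs=0000:06:00.1
+ ovs-vsctl add-port br0 dpdkvhostuser0 -- set Interface dpdkvhostuser0 type=dpdkvhostuser
+ ovs-vsctl add-port br0 dpdkvhostuser1 -- set Interface dpdkvhostuser1 type=dpdkvhostuser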
+
+Config pod.yaml describing Topology
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+2-Node setup:
+^^^^^^^^^^^^^
+.. code-block:: console
+
+ +--------------------+
+ | |
+ | |
+ | DUT |
+ | (VNF) |
+ | |
+ +--------------------+
+ | virtio | | virtio |
+ +--------+ +--------+
+ ^ ^
+ | |
+ | |
+ +--------+ +--------+
+ | vHOST0 | | vHOST1 |
+ +----------+ +-------------------------+
+ | | | ^ ^ |
+ | | | | | |
+ | | (0)<----->(0) | ------ | |
+ | TG1 | | SUT | |
+ | | | (ovs-dpdk) | |
+ | | (n)<----->(n) |------------------ |
+ +----------+ +-------------------------+
+ trafficgen_1 host
+
+
+3-Node setup - Correlated Traffic
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+.. code-block:: console
+
+ +--------------------+
+ | |
+ | |
+ | DUT |
+ | (VNF) |
+ | |
+ +--------------------+
+ | virtio | | virtio |
+ +--------+ +--------+
+ ^ ^
+ | |
+ | |
+ +--------+ +--------+
+ | vHOST0 | | vHOST1 |
+ +----------+ +-------------------------+ +------------+
+ | | | ^ ^ | | |
+ | | | | | | | |
+ | | (0)<----->(0) | ------ | | | TG2 |
+ | TG1 | | SUT | | |(UDP Replay)|
+ | | | (ovs-dpdk) | | | |
+ | | (n)<----->(n) | ------ |(n)<-->(n)| |
+ +----------+ +-------------------------+ +------------+
+ trafficgen_1 host trafficgen_2
+
+
+Before executing Yardstick test cases, make sure that pod.yaml reflects the
+topology and update all the required fields.::
+
+ cp /etc/yardstick/nodes/pod.yaml.nsb.ovs.sample /etc/yardstick/nodes/pod.yaml
+
+Config pod.yaml
+::
+
+ nodes:
+ -
+ name: trafficgen_1
+ role: TrafficGen
+ ip: 1.1.1.1
+ user: root
+ password: r00t
+ interfaces:
+ xe0: # logical name from topology.yaml and vnfd.yaml
+ vpci: "0000:07:00.0"
+ driver: i40e # default kernel driver
+ dpdk_port_num: 0
+ local_ip: "152.16.100.20"
+ netmask: "255.255.255.0"
+ local_mac: "00:00:00:00:00:01"
+ xe1: # logical name from topology.yaml and vnfd.yaml
+ vpci: "0000:07:00.1"
+ driver: i40e # default kernel driver
+ dpdk_port_num: 1
+ local_ip: "152.16.40.20"
+ netmask: "255.255.255.0"
+ local_mac: "00:00.00:00:00:02"
+
+ -
+ name: ovs
+ role: Ovsdpdk
+ ip: 2.2.2.2
+ user: root
+ auth_type: password
+ password: <password>
+ vpath: "/usr/local/"
+ vports:
+ - dpdkvhostuser0
+ - dpdkvhostuser1
+ vports_mac:
+ - "00:00:00:00:00:03"
+ - "00:00:00:00:00:04"
+ phy_ports: # Physical ports to configure ovs
+ - "0000:06:00.0"
+ - "0000:06:00.1"
+ flow:
+ - ovs-ofctl add-flow br0 in_port=1,action=output:3
+ - ovs-ofctl add-flow br0 in_port=3,action=output:1
+ - ovs-ofctl add-flow br0 in_port=4,action=output:2
+ - ovs-ofctl add-flow br0 in_port=2,action=output:4
+ phy_driver: i40e # kernel driver
+ images: "/var/lib/libvirt/images/ubuntu1.img"
+
+ -
+ name: vnf
+ role: vnf
+ ip: 1.1.1.2
+ user: root
+ password: r00t
+ host: 2.2.2.2 #BM - host == ip, virtualized env - Host - compute node
+ interfaces:
+ xe0: # logical name from topology.yaml and vnfd.yaml
+ vpci: "0000:00:04.0"
+ driver: virtio-pci # default kernel driver
+ dpdk_port_num: 0
+ local_ip: "152.16.100.10"
+ netmask: "255.255.255.0"
+ local_mac: "00:00:00:00:00:03"
+
+ xe1: # logical name from topology.yaml and vnfd.yaml
+ vpci: "0000:00:05.0"
+ driver: virtio-pci # default kernel driver
+ dpdk_port_num: 1
+ local_ip: "152.16.40.10"
+ netmask: "255.255.255.0"
+ local_mac: "00:00:00:00:00:04"
+ routing_table:
+ - network: "152.16.100.10"
+ netmask: "255.255.255.0"
+ gateway: "152.16.100.20"
+ if: "xe0"
+ - network: "152.16.40.10"
+ netmask: "255.255.255.0"
+ gateway: "152.16.40.20"
+ if: "xe1"
+ nd_route_tbl:
+ - network: "0064:ff9b:0:0:0:0:9810:6414"
+ netmask: "112"
+ gateway: "0064:ff9b:0:0:0:0:9810:6414"
+ if: "xe0"
+ - network: "0064:ff9b:0:0:0:0:9810:2814"
+ netmask: "112"
+ gateway: "0064:ff9b:0:0:0:0:9810:2814"
+ if: "xe1"
+
+Enable yardstick virtual environment
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Before executing yardstick test cases, make sure to activate yardstick
+python virtual environment if running on Ubuntu without Docker::
+
+ source /opt/nsb_bin/yardstick_venv/bin/activate
+
+In the Docker container the virtual environment is already on the main path, so no activation is needed.
+
+Run Yardstick - Network Service Testcases
+-----------------------------------------
+
+NS testing - using NSBperf CLI
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+::
+
+ PYTHONPATH: ". ~/.bash_profile"
+ cd <yardstick_repo>/yardstick/cmd
+
+ Execute command: ./NSBperf.py -h
+ ./NSBperf.py --vnf <selected vnf> --test <rfc test>
+ eg: ./NSBperf.py --vnf vfw --test tc_ovs_rfc2544_ipv4_1flow_64B.yaml
- source /opt/nsb_setup/yardstick_venv/bin/activate
+NS testing - using yardstick CLI
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+::
PYTHONPATH: ". ~/.bash_profile"
Go to the folder of the test case we want to execute.
diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc006.rst b/docs/testing/user/userguide/opnfv_yardstick_tc006.rst
new file mode 100644
index 000000000..d2d6467f1
--- /dev/null
+++ b/docs/testing/user/userguide/opnfv_yardstick_tc006.rst
@@ -0,0 +1,119 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, Huawei Technologies Co.,Ltd and others.
+
+*************************************
+Yardstick Test Case Description TC006
+*************************************
+
+.. _fio: http://bluestop.org/files/fio/HOWTO.txt
+
++-----------------------------------------------------------------------------+
+|Volume storage Performance |
+| |
++--------------+--------------------------------------------------------------+
+|test case id | OPNFV_YARDSTICK_TC006_VOLUME STORAGE PERFORMANCE |
+| | |
++--------------+--------------------------------------------------------------+
+|metric | IOPS (Average IOs performed per second), |
+| | Throughput (Average disk read/write bandwidth rate), |
+| | Latency (Average disk read/write latency) |
+| | |
++--------------+--------------------------------------------------------------+
+|test purpose | The purpose of TC006 is to evaluate the IaaS volume storage |
+| | performance with regards to IOPS, throughput and latency. |
+| | |
+| | The purpose is also to be able to spot the trends. |
+| | Test results, graphs and similar shall be stored for |
+| | comparison reasons and product evolution understanding |
+| | between different OPNFV versions and/or configurations. |
+| | |
++--------------+--------------------------------------------------------------+
+|test tool | fio |
+| | |
+| | fio is an I/O tool meant to be used both for benchmark and |
+| | stress/hardware verification. It has support for 19 |
+| | different types of I/O engines (sync, mmap, libaio, |
+| | posixaio, SG v3, splice, null, network, syslet, guasi, |
+| | solarisaio, and more), I/O priorities (for newer Linux |
+| | kernels), rate I/O, forked or threaded jobs, and much more. |
+| | |
+| | (fio is not always part of a Linux distribution, hence it |
+| | needs to be installed. As an example see the |
+| | /yardstick/tools/ directory for how to generate a Linux |
+| | image with fio included.) |
+| | |
++--------------+--------------------------------------------------------------+
+|test | fio test is invoked in a host VM with a volume attached on a |
+|description | compute blade, a job file as well as parameters are passed |
+| | to fio and fio will start doing what the job file tells it |
+| | to do. |
+| | |
++--------------+--------------------------------------------------------------+
+|configuration | file: opnfv_yardstick_tc006.yaml |
+| | |
+| | Fio job file is provided to define the benchmark process |
+| | Target volume is mounted at /FIO_Test directory |
+| | |
+| | For SLA, minimum read/write iops is set to 100, |
+| | minimum read/write throughput is set to 400 KB/s, |
+| | and maximum read/write latency is set to 20000 usec. |
+| | |
++--------------+--------------------------------------------------------------+
+|applicability | This test case can be configured with different: |
+| | |
+| | * Job file; |
+| | * Volume mount directory. |
+| | |
+| | SLA is optional. The SLA in this test case serves as an |
+| | example. Considerably higher throughput and lower latency |
+| | are expected. However, to cover most configurations, both |
+| | baremetal and fully virtualized ones, this value should be |
+| | possible to achieve and acceptable for black box testing. |
+| | Many heavy IO applications start to suffer badly if the |
+| | read/write bandwidths are lower than this. |
+| | |
++--------------+--------------------------------------------------------------+
+|usability | This test case is one of Yardstick's generic test. Thus it |
+| | is runnable on most of the scenarios. |
+| | |
++--------------+--------------------------------------------------------------+
+|references | fio_ |
+| | |
+| | ETSI-NFV-TST001 |
+| | |
++--------------+--------------------------------------------------------------+
+|pre-test | The test case image needs to be installed into Glance |
+|conditions | with fio included in it. |
+| | |
+| | No POD specific requirements have been identified. |
+| | |
++--------------+--------------------------------------------------------------+
+|test sequence | description and expected result |
+| | |
++--------------+--------------------------------------------------------------+
+|step 1 | A host VM with fio installed is booted. |
+| | A 200G volume is attached to the host VM |
+| | |
++--------------+--------------------------------------------------------------+
+|step 2 | Yardstick is connected with the host VM by using ssh. |
+| | 'job_file.ini' is copied from the Jump Host to the host VM |
+| | via the ssh tunnel. The attached volume is formatted and |
+| | mounted. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 3 | Fio benchmark is invoked. Simulated IO operations are |
+| | started. IOPS, disk read/write bandwidth and latency are |
+| | recorded and checked against the SLA. Logs are produced and |
+| | stored. |
+| | |
+| | Result: Logs are stored. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 4 | The host VM is deleted. |
+| | |
++--------------+--------------------------------------------------------------+
+|test verdict | Fails only if SLA is not passed, or if there is a test case |
+| | execution problem. |
+| | |
++--------------+--------------------------------------------------------------+
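+
+For reference, a minimal fio invocation of the kind driven by such a job file
+(the parameters below are illustrative, not the exact job file used by the
+test case) is::
+
+ fio --name=tc006 --directory=/FIO_Test --rw=randrw --bs=4k --size=1g \
+ --ioengine=libaio --iodepth=16 --runtime=60 --time_based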
diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc056.rst b/docs/testing/user/userguide/opnfv_yardstick_tc056.rst
new file mode 100644
index 000000000..01aa99ac2
--- /dev/null
+++ b/docs/testing/user/userguide/opnfv_yardstick_tc056.rst
@@ -0,0 +1,149 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, Yin Kanglin and others.
+.. 14_ykl@tongji.edu.cn
+
+*************************************
+Yardstick Test Case Description TC056
+*************************************
+
++-----------------------------------------------------------------------------+
+|OpenStack Controller Messaging Queue Service High Availability |
++==============+==============================================================+
+|test case id | OPNFV_YARDSTICK_TC056:OpenStack Controller Messaging Queue |
+| | Service High Availability |
++--------------+--------------------------------------------------------------+
+|test purpose | This test case will verify the high availability of the |
+| | messaging queue service(RabbitMQ) that supports OpenStack on |
+| | controller node. When messaging queue service(which is |
+| | active) of a specified controller node is killed, the test |
+| | case will check whether messaging queue services(which are |
+| | standby) on other controller nodes will be switched active, |
+| | and whether the cluster manager on the attacked controller |
+| | node will restart the stopped messaging queue. |
++--------------+--------------------------------------------------------------+
+|test method | This test case kills the processes of messaging queue |
+| | service on a selected controller node, then checks whether |
+| | the request of the related Openstack command is OK and the |
+| | killed processes are recovered. |
++--------------+--------------------------------------------------------------+
+|attackers | In this test case, an attacker called "kill-process" is |
+| | needed. This attacker includes three parameters: |
+| | 1) fault_type: which is used for finding the attacker's |
+| | scripts. It should be always set to "kill-process" in this |
+| | test case. |
+| | 2) process_name: which is the process name of the specified |
+| | OpenStack service. If there are multiple processes use the |
+| | same name on the host, all of them are killed by this |
+| | attacker. |
+| | In this case, the parameter should be set to "rabbitmq". |
+| | 3) host: which is the name of a control node being attacked. |
+| | |
+| | e.g. |
+| | -fault_type: "kill-process" |
+| | -process_name: "rabbitmq-server" |
+| | -host: node1 |
+| | |
++--------------+--------------------------------------------------------------+
+|monitors | In this test case, two kinds of monitor are needed: |
+| | 1. the "openstack-cmd" monitor constantly request a specific |
+| | Openstack command, which needs two parameters: |
+| | 1) monitor_type: which is used for finding the monitor class |
+| | and related scripts. It should be always set to |
+| | "openstack-cmd" for this monitor. |
+| | 2) command_name: which is the command name used for request. |
+| | |
+| | 2. the "process" monitor check whether a process is running |
+| | on a specific node, which needs three parameters: |
+| | 1) monitor_type: which used for finding the monitor class |
+| | and related scripts. It should be always set to "process" |
+| | for this monitor. |
+| | 2) process_name: which is the process name for monitor |
+| | 3) host: which is the name of the node runing the process |
+| | In this case, the command_name of monitor1 should be |
+| | services that will use the messaging queue(current nova, |
+| | neutron, cinder ,heat and ceilometer are using RabbitMQ) |
+| | , and the process-name of monitor2 should be "rabbitmq", |
+| | for example: |
+| | |
+| | e.g. |
+| | monitor1-1: |
+| | -monitor_type: "openstack-cmd" |
+| | -command_name: "openstack image list" |
+| | monitor1-2: |
+| | -monitor_type: "openstack-cmd" |
+| | -command_name: "openstack network list" |
+| | monitor1-3: |
+| | -monitor_type: "openstack-cmd" |
+| | -command_name: "openstack volume list" |
+| | monitor2: |
+| | -monitor_type: "process" |
+| | -process_name: "rabbitmq" |
+| | -host: node1 |
+| | |
++--------------+--------------------------------------------------------------+
+|metrics | In this test case, there are two metrics: |
+| | 1)service_outage_time: which indicates the maximum outage |
+| | time (seconds) of the specified Openstack command request. |
+| | 2)process_recover_time: which indicates the maximum time |
+| | (seconds) from the process being killed to recovered |
+| | |
++--------------+--------------------------------------------------------------+
+|test tool | Developed by the project. Please see folder: |
+| | "yardstick/benchmark/scenarios/availability/ha_tools" |
+| | |
++--------------+--------------------------------------------------------------+
+|references | ETSI NFV REL001 |
+| | |
++--------------+--------------------------------------------------------------+
+|configuration | This test case needs two configuration files: |
+| | 1) test case file:opnfv_yardstick_tc056.yaml |
+| | -Attackers: see above "attackers" description |
+| | -waiting_time: which is the time (seconds) from the process |
+| | being killed to stopping the monitors |
+| | -Monitors: see above "monitors" description |
+| | -SLA: see above "metrics" description |
+| | |
+| | 2)POD file: pod.yaml |
+| | The POD configuration should record on pod.yaml first. |
+| | the "host" item in this test case will use the node name in |
+| | the pod.yaml. |
+| | |
++--------------+--------------------------------------------------------------+
+|test sequence | description and expected result |
+| | |
++--------------+--------------------------------------------------------------+
+|step 1 | start monitors: |
+| | each monitor will run in an independent process |
+| | |
+| | Result: The monitor info will be collected. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 2 | do attacker: connect the host through SSH, and then execute |
+| | the kill process script with param value specified by |
+| | "process_name" |
+| | |
+| | Result: Process will be killed. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 3 | stop monitors after a period of time specified by |
+| | "waiting_time" |
+| | |
+| | Result: The monitor info will be aggregated. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 4 | verify the SLA |
+| | |
+| | Result: The test case is passed or not. |
+| | |
++--------------+--------------------------------------------------------------+
+|post-action | It is the action when the test cases exit. It will check |
+| | the status of the specified process on the host, and restart |
+| | the process if it is not running for next test cases. |
+| | |
++--------------+--------------------------------------------------------------+
+|test verdict | Fails only if SLA is not passed, or if there is a test case |
+| | execution problem. |
+| | |
++--------------+--------------------------------------------------------------+
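+
+For orientation, the attacker and monitors described above appear in the test
+case yaml roughly as follows (a simplified sketch, not the complete
+opnfv_yardstick_tc056.yaml)::
+
+ scenarios:
+ -
+   type: ServiceHA
+   options:
+     attackers:
+       - fault_type: "kill-process"
+         process_name: "rabbitmq-server"
+         host: node1
+     monitors:
+       - monitor_type: "openstack-cmd"
+         command_name: "openstack image list"
+       - monitor_type: "process"
+         process_name: "rabbitmq"
+         host: node1
+   nodes:
+     node1: node1.LF
+   runner:
+     type: Duration
+     duration: 1
+   sla:
+     outage_time: 5
+     action: monitor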
diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc057.rst b/docs/testing/user/userguide/opnfv_yardstick_tc057.rst
new file mode 100644
index 000000000..2a4ce40c0
--- /dev/null
+++ b/docs/testing/user/userguide/opnfv_yardstick_tc057.rst
@@ -0,0 +1,165 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, Yin Kanglin and others.
+.. 14_ykl@tongji.edu.cn
+
+*************************************
+Yardstick Test Case Description TC057
+*************************************
+
++-----------------------------------------------------------------------------+
+|OpenStack Controller Cluster Management Service High Availability |
++==============+==============================================================+
+|test case id | OPNFV_YARDSTICK_TC057: OpenStack Controller Cluster |
+| | Management Service High Availability |
++--------------+--------------------------------------------------------------+
+|test purpose | This test case will verify the quorum configuration of the |
+| | cluster manager(pacemaker) on controller nodes. When a |
+| | controller node, which holds all active application |
+| | resources, failed to communicate with other cluster nodes |
+| | (via corosync), the test case will check whether the standby |
+| | application resources will take place of those active |
+| | application resources which should be regarded to be down in |
+| | the cluster manager. |
++--------------+--------------------------------------------------------------+
+|test method | This test case kills the processes of cluster messaging |
+| | service(corosync) on a selected controller node(the node |
+| | holds the active application resources), then checks whether |
+| | active application resources are switched to other |
+| | controller nodes and whether the Openstack commands are OK. |
++--------------+--------------------------------------------------------------+
+|attackers | In this test case, an attacker called "kill-process" is |
+| | needed. This attacker includes three parameters: |
+| | 1) fault_type: which is used for finding the attacker's |
+| | scripts. It should be always set to "kill-process" in this |
+| | test case. |
+| | 2) process_name: which is the process name of the load |
+| | balance service. If there are multiple processes use the |
+| | same name on the host, all of them are killed by this |
+| | attacker. |
+| | 3) host: which is the name of a control node being attacked. |
+| | |
+| | In this case, this process name should set to "corosync" , |
+| | for example |
+| | -fault_type: "kill-process" |
+| | -process_name: "corosync" |
+| | -host: node1 |
++--------------+--------------------------------------------------------------+
+|monitors | In this test case, a kind of monitor is needed: |
+| | 1. the "openstack-cmd" monitor constantly request a specific |
+| | Openstack command, which needs two parameters: |
+| | 1) monitor_type: which is used for finding the monitor class |
+| | and related scripts. It should be always set to |
+| | "openstack-cmd" for this monitor. |
+| | 2) command_name: which is the command name used for request |
+| | |
+| | In this case, the command_name of monitor1 should be services|
+| | that are managed by the cluster manager. (Since rabbitmq and |
+| | haproxy are managed by pacemaker, most Openstack Services |
+| | can be used to check high availability in this case) |
+| | |
+| | (e.g.) |
+| | monitor1: |
+| | -monitor_type: "openstack-cmd" |
+| | -command_name: "nova image-list" |
+| | monitor2: |
+| | -monitor_type: "openstack-cmd" |
+| | -command_name: "neutron router-list" |
+| | monitor3: |
+| | -monitor_type: "openstack-cmd" |
+| | -command_name: "heat stack-list" |
+| | monitor4: |
+| | -monitor_type: "openstack-cmd" |
+| | -command_name: "cinder list" |
+| | |
++--------------+--------------------------------------------------------------+
+|checkers | In this test case, a checker is needed. The checker will |
+| | check the status of application resources in pacemaker, and |
+| | the checker has five parameters: |
+| | 1) checker_type: which is used for finding the result |
+| | checker class and related scripts. In this case the checker |
+| | type will be "pacemaker-check-resource" |
+| | 2) resource_name: the application resource name |
+| | 3) resource_status: the expected status of the resource |
+| | 4) expectedValue: the expected value for the output of the |
+| | checker script, in the case the expected value will be the |
+| | identifier in the cluster manager |
+| | 5) condition: whether the expected value is in the output of |
+| | checker script or is totally same with the output. |
+| | (note: pcs is required to be installed on controller node in |
+| | order to run this checker) |
+| | |
+| | (e.g.) |
+| | checker1: |
+| | -checker_type: "pacemaker-check-resource" |
+| | -resource_name: "p_rabbitmq-server" |
+| | -resource_status: "Stopped" |
+| | -expectedValue: "node-1" |
+| | -condition: "in" |
+| | checker2: |
+| | -checker_type: "pacemaker-check-resource" |
+| | -resource_name: "p_rabbitmq-server" |
+| | -resource_status: "Master" |
+| | -expectedValue: "node-2" |
+| | -condition: "in" |
++--------------+--------------------------------------------------------------+
+|metrics | In this test case, there is one metric: |
+| | 1)service_outage_time: which indicates the maximum outage |
+| | time (seconds) of the specified Openstack command request. |
++--------------+--------------------------------------------------------------+
+|test tool | None. Self-developed. |
++--------------+--------------------------------------------------------------+
+|references | ETSI NFV REL001 |
++--------------+--------------------------------------------------------------+
+|configuration | This test case needs two configuration files: |
+| | 1) test case file: opnfv_yardstick_tc057.yaml |
+| | -Attackers: see above "attackers" description |
+| | -Monitors: see above "monitors" description |
+| | -Checkers: see above "checkers" description |
+| | -Steps: the test case execution step, see "test sequence" |
+| | description below |
+| | |
+| | 2)POD file: pod.yaml |
+| | The POD configuration should record on pod.yaml first. |
+| | the "host" item in this test case will use the node name in |
+| | the pod.yaml. |
++--------------+--------------------------------------------------------------+
+|test sequence | description and expected result |
+| | |
++--------------+--------------------------------------------------------------+
+|step 1 | start monitors: |
+| | each monitor will run in an independent process |
+| | |
+| | Result: The monitor info will be collected. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 2 | do attacker: connect the host through SSH, and then execute |
+| | the kill process script with param value specified by |
+| | "process_name" |
+| | |
+| | Result: Process will be killed. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 3 | do checker: check whether the status of application |
+| | resources on different nodes are updated |
+| | |
++--------------+--------------------------------------------------------------+
+|step 4 | stop monitors after a period of time specified by |
+| | "waiting_time" |
+| | |
+| | Result: The monitor info will be aggregated. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 5 | verify the SLA |
+| | |
+| | Result: The test case is passed or not. |
+| | |
++--------------+--------------------------------------------------------------+
+|post-action | It is the action when the test cases exit. It will check the|
+| | status of the cluster messaging process(corosync) on the |
+| | host, and restart the process if it is not running for next |
+| | test cases |
++--------------+--------------------------------------------------------------+
+|test verdict | Fails only if SLA is not passed, or if there is a test case |
+| | execution problem. |
++--------------+--------------------------------------------------------------+
diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc058.rst b/docs/testing/user/userguide/opnfv_yardstick_tc058.rst
new file mode 100644
index 000000000..fb9a4c2d1
--- /dev/null
+++ b/docs/testing/user/userguide/opnfv_yardstick_tc058.rst
@@ -0,0 +1,148 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, Yin Kanglin and others.
+.. 14_ykl@tongji.edu.cn
+
+*************************************
+Yardstick Test Case Description TC058
+*************************************
+
++-----------------------------------------------------------------------------+
+|OpenStack Controller Virtual Router Service High Availability |
++==============+==============================================================+
+|test case id | OPNFV_YARDSTICK_TC058:OpenStack Controller Virtual Router |
+| | Service High Availability |
++--------------+--------------------------------------------------------------+
+|test purpose | This test case will verify the high availability of virtual |
+| | routers(L3 agent) on controller node. When a virtual router |
+| | service on a specified controller node is shut down, this |
+| | test case will check whether the network of virtual machines |
+| | will be affected, and whether the attacked virtual router |
+| | service will be recovered. |
++--------------+--------------------------------------------------------------+
+|test method | This test case kills the processes of virtual router service |
+| | (l3-agent) on a selected controller node(the node holds the |
+| | active l3-agent), then checks whether the network routing |
+| | of virtual machines is OK and whether the killed service |
+| | will be recovered. |
++--------------+--------------------------------------------------------------+
+|attackers | In this test case, an attacker called "kill-process" is |
+| | needed. This attacker includes three parameters: |
+| | 1) fault_type: which is used for finding the attacker's |
+| | scripts. It should be always set to "kill-process" in this |
+| | test case. |
+| | 2) process_name: which is the process name of the load |
+| | balance service. If there are multiple processes use the |
+| | same name on the host, all of them are killed by this |
+| | attacker. |
+| | 3) host: which is the name of a control node being attacked. |
+| | |
+| | In this case, this process name should set to "l3agent" , |
+| | for example |
+| | -fault_type: "kill-process" |
+| | -process_name: "l3agent" |
+| | -host: node1 |
++--------------+--------------------------------------------------------------+
+|monitors | In this test case, two kinds of monitor are needed: |
+| | 1. the "ip_status" monitor that pings a specific ip to check |
+| | the connectivity of this ip, which needs three parameters: |
+| | 1) monitor_type: which is used for finding the monitor class |
+| | and related scripts. It should be always set to "ip_status" |
+| | for this monitor. |
+| | 2) ip_address: The ip to be pinged. In this case, ip_address |
+| | will be either an ip address of external network or an ip |
+| | address of a virtual machine. |
+| | 3) host: The node on which ping will be executed, in this |
+| | case the host will be a virtual machine. |
+| | |
+| | 2. the "process" monitor check whether a process is running |
+| | on a specific node, which needs three parameters: |
+| | 1) monitor_type: which is used for finding the monitor class |
+| | and related scripts. It should be always set to "process" |
+| | for this monitor. |
+| | 2) process_name: which is the process name for monitor. In |
+| | this case, the process-name of monitor2 should be "l3agent" |
+| | 3) host: which is the name of the node running the process |
+| | |
+| | e.g. |
+| | monitor1-1: |
+| | -monitor_type: "ip_status" |
+| | -host: 172.16.0.11 |
+| | -ip_address: 172.16.1.11 |
+| | monitor1-2: |
+| | -monitor_type: "ip_status" |
+| | -host: 172.16.0.11 |
+| | -ip_address: 8.8.8.8 |
+| | monitor2: |
+| | -monitor_type: "process" |
+| | -process_name: "l3agent" |
+| | -host: node1 |
++--------------+--------------------------------------------------------------+
+|metrics | In this test case, there are two metrics: |
+| | 1)service_outage_time: which indicates the maximum outage |
+| | time (seconds) of the specified Openstack command request. |
+| | 2)process_recover_time: which indicates the maximum time |
+| | (seconds) from the process being killed to recovered |
++--------------+--------------------------------------------------------------+
+|test tool | None. Self-developed. |
++--------------+--------------------------------------------------------------+
+|references | ETSI NFV REL001 |
++--------------+--------------------------------------------------------------+
+|configuration | This test case needs two configuration files: |
+| | 1) test case file: opnfv_yardstick_tc058.yaml |
+| | -Attackers: see above "attackers" description |
+| | -Monitors: see above "monitors" description |
+| | -Steps: the test case execution step, see "test sequence" |
+| | description below |
+| | |
+| | 2)POD file: pod.yaml |
+| | The POD configuration should record on pod.yaml first. |
+| | the "host" item in this test case will use the node name in |
+| | the pod.yaml. |
++--------------+--------------------------------------------------------------+
+|test sequence | description and expected result |
+| | |
++--------------+--------------------------------------------------------------+
+|pre-test | The test case image needs to be installed into Glance |
+|conditions | with cachestat included in the image. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 1 | Two host VMs are booted, these two hosts are in two different|
+| | networks, the networks are connected by a virtual router |
+| | |
++--------------+--------------------------------------------------------------+
+|step 2 | start monitors: |
+| | each monitor will run in an independent process |
+| | |
+| | Result: The monitor info will be collected. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 3 | do attacker: connect the host through SSH, and then execute |
+| | the kill process script with param value specified by |
+| | "process_name" |
+| | |
+| | Result: Process will be killed. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 4 | stop monitors after a period of time specified by |
+| | "waiting_time" |
+| | |
+| | Result: The monitor info will be aggregated. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 5 | verify the SLA |
+| | |
+| | Result: The test case is passed or not. |
+| | |
++--------------+--------------------------------------------------------------+
+|post-action | It is the action when the test cases exist. It will check |
+| | the status of the specified process on the host, and restart |
+| | the process if it is not running for next test cases. |
+| | Virtual machines and network created in the test case will |
+| | be destroyed. |
+| | |
++--------------+--------------------------------------------------------------+
+|test verdict | Fails only if SLA is not passed, or if there is a test case |
+| | execution problem. |
++--------------+--------------------------------------------------------------+
diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc083.rst b/docs/testing/user/userguide/opnfv_yardstick_tc083.rst
new file mode 100644
index 000000000..dc00ac67a
--- /dev/null
+++ b/docs/testing/user/userguide/opnfv_yardstick_tc083.rst
@@ -0,0 +1,81 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, Huawei Technologies Co.,Ltd and others.
+
+*************************************
+Yardstick Test Case Description TC083
+*************************************
+
+.. _netperf: http://www.netperf.org/netperf/training/Netperf.html
+
++-----------------------------------------------------------------------------+
+|Throughput per VM test |
+| |
++--------------+--------------------------------------------------------------+
+|test case id | OPNFV_YARDSTICK_TC083_Network latency and throughput between |
+| | VMs |
+| | |
++--------------+--------------------------------------------------------------+
+|metric | Network latency and throughput |
+| | |
++--------------+--------------------------------------------------------------+
+|test purpose | To evaluate the IaaS network performance with regards to |
+| | flows and throughput, such as if and how different amounts |
+| | of packet sizes and flows matter for the throughput between |
+| | 2 VMs in one pod. |
+| | |
++--------------+--------------------------------------------------------------+
+|configuration | file: opnfv_yardstick_tc083.yaml |
+| | |
+| | Packet size: default 1024 bytes. |
+| | |
+| | Test length: default 20 seconds. |
+| | |
+| | The client and server are distributed on different nodes. |
+| | |
+| | For SLA max_mean_latency is set to 100. |
+| | |
++--------------+--------------------------------------------------------------+
+|test tool | netperf_ |
+| | Netperf is a software application that provides network |
+| | bandwidth testing between two hosts on a network. It |
+| | supports Unix domain sockets, TCP, SCTP, DLPI and UDP via |
+| | BSD Sockets. Netperf provides a number of predefined tests |
+| | e.g. to measure bulk (unidirectional) data transfer or |
+| | request response performance. |
+| | (netperf is not always part of a Linux distribution, hence |
+| | it needs to be installed.) |
+| | |
++--------------+--------------------------------------------------------------+
+|references | netperf Man pages |
+| | ETSI-NFV-TST001 |
+| | |
++--------------+--------------------------------------------------------------+
+|applicability | Test can be configured with different packet sizes and |
+| | test duration. Default values exist. |
+| | |
+| | SLA (optional): max_mean_latency |
+| | |
++--------------+--------------------------------------------------------------+
+|pre-test | The POD can be reached by external ip and logged on via ssh |
+|conditions | |
++--------------+--------------------------------------------------------------+
+|test sequence | description and expected result |
+| | |
++--------------+--------------------------------------------------------------+
+|step 1 | Install netperf tool on each specified node, one is as the |
+| | server, and the other as the client. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 2 | Log on to the client node and use the netperf command to |
+| | execute the network performance test |
+| | |
++--------------+--------------------------------------------------------------+
+|step 3 | The throughput results are stored. |
+| | |
++--------------+--------------------------------------------------------------+
+|test verdict | Fails only if SLA is not passed, or if there is a test case |
+| | execution problem. |
+| | |
++--------------+--------------------------------------------------------------+
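+
+For reference, a typical netperf request/response invocation of the kind used
+by this test case (the server address and output selectors are illustrative,
+not the exact command generated by the scenario) is::
+
+ netperf -H <server_ip> -l 20 -t TCP_RR -- -r 1024,1024 \
+ -O "THROUGHPUT,THROUGHPUT_UNITS,MEAN_LATENCY"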