Diffstat (limited to 'docs/userguide')
-rw-r--r-- | docs/userguide/Ftrace.debugging.tool.userguide.rst | 39
-rw-r--r-- | docs/userguide/abstract.rst | 6
-rw-r--r-- | docs/userguide/common.platform.render.rst | 2
-rw-r--r-- | docs/userguide/feature.userguide.render.rst | 2
-rw-r--r-- | docs/userguide/images/cpu-stress-idle-test-type.png | bin 0 -> 17822 bytes
-rw-r--r-- | docs/userguide/images/guest_pk_fw.png | bin 0 -> 8020 bytes
-rw-r--r-- | docs/userguide/images/host_pk_fw.png | bin 0 -> 5390 bytes
-rw-r--r-- | docs/userguide/images/idle-idle-test-type.png | bin 0 -> 14902 bytes
-rw-r--r-- | docs/userguide/images/io-stress-idle-test-type.png | bin 0 -> 18983 bytes
-rw-r--r-- | docs/userguide/images/memory-stress-idle-test-type.png | bin 0 -> 18727 bytes
-rw-r--r-- | docs/userguide/images/sriov_pk_fw.png | bin 0 -> 5864 bytes
-rw-r--r-- | docs/userguide/index.rst | 1
-rw-r--r-- | docs/userguide/introduction.rst | 11
-rw-r--r-- | docs/userguide/kvmfornfv.cyclictest-dashboard.userguide.rst | 122
-rw-r--r-- | docs/userguide/low_latency.userguide.rst | 179
-rw-r--r-- | docs/userguide/openstack.rst | 14
-rw-r--r-- | docs/userguide/packet_forwarding.userguide.rst | 212
-rw-r--r-- | docs/userguide/pcm_utility.userguide.rst | 23
18 files changed, 455 insertions, 156 deletions
diff --git a/docs/userguide/Ftrace.debugging.tool.userguide.rst b/docs/userguide/Ftrace.debugging.tool.userguide.rst
index 0fcbbcf93..fc0858a6d 100644
--- a/docs/userguide/Ftrace.debugging.tool.userguide.rst
+++ b/docs/userguide/Ftrace.debugging.tool.userguide.rst
@@ -9,9 +9,9 @@ FTrace Debugging Tool
About Ftrace
-------------
Ftrace is an internal tracer designed to find what is going on inside the kernel. It can be used
-for debugging or analyzing latencies and performance issues that take place outside of user-space.
-Although ftrace is typically considered the function tracer, it is really a frame work of several
-assorted tracing utilities.
+for debugging or analyzing latencies and performance-related issues that take place outside of
+user-space. Although ftrace is typically considered the function tracer, it is really a
+framework of several assorted tracing utilities.
One of the most common uses of ftrace is the event tracing.
@@ -33,7 +33,7 @@ Version Features
+-----------------------------+-----------------------------------------------+
| | - Ftrace aids in debugging the KVMFORNFV |
| Danube | 4.4-linux-kernel level issues |
-| | - Option to diable if not required |
+| | - Option to disable if not required |
+-----------------------------+-----------------------------------------------+
@@ -155,19 +155,16 @@ Examples:
[tracing]# echo 1 > tracing_on;
-===================
Ftrace in KVMFORNFV
-===================
-Ftrace is part of KVMFORNFV D-Release. Kvmfornfv currently uses 4.4-linux-Kernel as part of
-deployment and runs cyclictest for testing purpose generating latency values (max, min, avg values).
+-------------------
+Ftrace is part of the KVMFORNFV D-Release. The 4.4 Linux kernel built by KVMFORNFV is tested by
+executing cyclictest and analyzing the resulting latency values (max, min, avg).
Ftrace (or) function tracer is a stable kernel inbuilt debugging tool which tests kernel in real
time and outputs a log as part of the code. These output logs are useful in following ways.
- Kernel Debugging.
- - Helps in Kernel code Optimization and
- - Can be used to better understand the kernel Level code flow
- - Log generation for each test run if enabled
- - Choice of Disabling and Enabling
+ - Helps in kernel code optimization
+ - Can be used to better understand the kernel-level code flow
Ftrace logs for KVMFORNFV can be found `here`_:
@@ -184,7 +181,8 @@ Kvmfornfv has two scripts in /ci/envs to provide ftrace tool:
Enabling Ftrace in KVMFORNFV
----------------------------
-The enable_trace.sh script is triggered by changing ftrace_enable value in test_kvmfornfv.sh script which is zero by default. Change as below to enable Ftrace and trigger the script,
+The enable_trace.sh script is triggered by changing the ftrace_enable value in the test_kvmfornfv.sh
+script to 1 (it is zero by default). Change it as below to enable Ftrace.
.. code:: bash
@@ -197,7 +195,7 @@ Note:
Details of enable_trace script
------------------------------
-- CPU Coremask is calculated using getcpumask()
+- CPU coremask is calculated using getcpumask()
- All the required events are enabled by,
echoing "1" to $TRACEDIR/events/event_name/enable file
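+
+For illustration only (the exact event list and the value of $TRACEDIR are chosen by
+enable_trace.sh itself), enabling an event by hand follows the same pattern the script uses:
+
+.. code:: bash
+
+   # illustrative sketch; enable_trace.sh selects the real events and TRACEDIR
+   TRACEDIR=/sys/kernel/debug/tracing
+   echo 1 > $TRACEDIR/events/kvm/enable        # enable the kvm event group
+   echo 1 > $TRACEDIR/tracing_on               # start tracing
+   cat $TRACEDIR/set_event                     # list currently enabled events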
@@ -230,19 +228,21 @@ The set_event file contains all the enabled events list
- Once tracing is disabled, the disable_trace.sh script is triggered.
-Details of Disable_trace Script
+Details of disable_trace Script
-------------------------------
In disable trace script the following are done:
-- The trace file is copied and moved to /tmp folfer based on timestamp.
+- The trace file is copied and moved to /tmp folder based on timestamp
- The current tracer file is set to ``nop``
- The set_event file is cleared i.e., all the enabled events are disabled
-- Kernel Ftarcer is diabled/unmounted
+- Kernel Ftrace is disabled/unmounted
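+
+A minimal sketch of these steps (assuming the standard tracefs layout; the actual commands live
+in ci/envs/disable_trace.sh):
+
+.. code:: bash
+
+   # illustrative only; refer to disable_trace.sh for the real implementation
+   TRACEDIR=/sys/kernel/debug/tracing
+   cp $TRACEDIR/trace /tmp/trace_$(date +%Y%m%d%H%M%S)   # save the trace with a timestamp
+   echo nop > $TRACEDIR/current_tracer                   # reset the current tracer
+   echo > $TRACEDIR/set_event                            # disable all enabled events
+   umount $TRACEDIR                                      # unmount ftrace
+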
Publishing Ftrace logs:
-----------------------
-The generated trace log is pushed to `artifacts`_ of Kvmfornfv project by releng team, which is done by a script in JJB of releng. The `trigger`_ in the script is.,
+The generated trace log is pushed to `artifacts`_ by the kvmfornfv-upload-artifact.sh
+script available in releng, which is triggered as part of the kvmfornfv daily job.
+The `trigger`_ in the script is:
.. code:: bash
@@ -252,6 +252,3 @@ The generated trace log is pushed to `artifacts`_ of Kvmfornfv project by releng
.. _artifacts: https://artifacts.opnfv.org/
.. _trigger: https://gerrit.opnfv.org/gerrit/gitweb?p=releng.git;a=blob;f=jjb/kvmfornfv/kvmfornfv-upload-artifact.sh;h=56fb4f9c18a83c689a916dc6c85f9e3ddf2479b2;hb=HEAD#l53
-
-
-.. include:: pcm_utility.userguide.rst
diff --git a/docs/userguide/abstract.rst b/docs/userguide/abstract.rst
index 8c36c268f..ec05b2560 100644
--- a/docs/userguide/abstract.rst
+++ b/docs/userguide/abstract.rst
@@ -2,9 +2,9 @@
.. http://creativecommons.org/licenses/by/4.0
-========
-Abstract
-========
+==================
+Userguide Abstract
+==================
In KVM4NFV project, we focus on the KVM hypervisor to enhance it for NFV,
by looking at the following areas initially-
diff --git a/docs/userguide/common.platform.render.rst b/docs/userguide/common.platform.render.rst
index 486ca469f..46b4707a3 100644
--- a/docs/userguide/common.platform.render.rst
+++ b/docs/userguide/common.platform.render.rst
@@ -7,7 +7,7 @@ Using common platform components
================================
This section outlines basic usage principals and methods for some of the
-commonly deployed components of supported OPNFV scenario's in Colorado.
+commonly deployed components of supported OPNFV scenarios in Danube.
The subsections provide an outline of how these components are commonly
used and how to address them in an OPNFV deployment.The components derive
from autonomous upstream communities and where possible this guide will
diff --git a/docs/userguide/feature.userguide.render.rst b/docs/userguide/feature.userguide.render.rst
index d903f0711..0e2738ae3 100644
--- a/docs/userguide/feature.userguide.render.rst
+++ b/docs/userguide/feature.userguide.render.rst
@@ -3,7 +3,7 @@
.. http://creativecommons.org/licenses/by/4.0
==========================
-Using Colorado Features
+Using Danube Features
==========================
The following sections of the user guide provide feature specific usage
diff --git a/docs/userguide/images/cpu-stress-idle-test-type.png b/docs/userguide/images/cpu-stress-idle-test-type.png
new file mode 100644
index 000000000..9a5bdf6de
--- /dev/null
+++ b/docs/userguide/images/cpu-stress-idle-test-type.png
Binary files differ
diff --git a/docs/userguide/images/guest_pk_fw.png b/docs/userguide/images/guest_pk_fw.png
new file mode 100644
index 000000000..5f80ecce5
--- /dev/null
+++ b/docs/userguide/images/guest_pk_fw.png
Binary files differ
diff --git a/docs/userguide/images/host_pk_fw.png b/docs/userguide/images/host_pk_fw.png
new file mode 100644
index 000000000..dcbe921f2
--- /dev/null
+++ b/docs/userguide/images/host_pk_fw.png
Binary files differ
diff --git a/docs/userguide/images/idle-idle-test-type.png b/docs/userguide/images/idle-idle-test-type.png
new file mode 100644
index 000000000..bd241bfe1
--- /dev/null
+++ b/docs/userguide/images/idle-idle-test-type.png
Binary files differ
diff --git a/docs/userguide/images/io-stress-idle-test-type.png b/docs/userguide/images/io-stress-idle-test-type.png
new file mode 100644
index 000000000..f79cb5984
--- /dev/null
+++ b/docs/userguide/images/io-stress-idle-test-type.png
Binary files differ
diff --git a/docs/userguide/images/memory-stress-idle-test-type.png b/docs/userguide/images/memory-stress-idle-test-type.png
new file mode 100644
index 000000000..1ca839a4a
--- /dev/null
+++ b/docs/userguide/images/memory-stress-idle-test-type.png
Binary files differ
diff --git a/docs/userguide/images/sriov_pk_fw.png b/docs/userguide/images/sriov_pk_fw.png
new file mode 100644
index 000000000..bf7ad6f9b
--- /dev/null
+++ b/docs/userguide/images/sriov_pk_fw.png
Binary files differ
diff --git a/docs/userguide/index.rst b/docs/userguide/index.rst
index fcef57250..0d5089e01 100644
--- a/docs/userguide/index.rst
+++ b/docs/userguide/index.rst
@@ -17,6 +17,7 @@ KVMforNFV User Guide
./kvmfornfv.cyclictest-dashboard.userguide.rst
./low_latency.userguide.rst
./live_migration.userguide.rst
+ ./openstack.rst
./packet_forwarding.userguide.rst
./pcm_utility.userguide.rst
./tuning.userguide.rst
diff --git a/docs/userguide/introduction.rst b/docs/userguide/introduction.rst
index 501d6391b..9a22bdebd 100644
--- a/docs/userguide/introduction.rst
+++ b/docs/userguide/introduction.rst
@@ -2,9 +2,12 @@
.. http://creativecommons.org/licenses/by/4.0
-========
+======================
+Userguide Introduction
+======================
+
Overview
-========
+--------
The project "NFV Hypervisors-KVM" makes collaborative efforts to enable NFV
features for existing hypervisors, which are not necessarily designed or
@@ -13,7 +16,7 @@ consists of Continuous Integration builds, deployments and testing
combinations of virtual infrastructure components.
KVM4NFV Features
-================
+----------------
Using this project, the following areas are targeted-
@@ -46,7 +49,7 @@ The configuration guide details which scenarios are best for you and how to
install and configure them.
General usage guidelines
-========================
+------------------------
The user guide for KVM4NFV CICD features and capabilities provide step by step
instructions for using features that have been configured according to the
diff --git a/docs/userguide/kvmfornfv.cyclictest-dashboard.userguide.rst b/docs/userguide/kvmfornfv.cyclictest-dashboard.userguide.rst
index 6333d0917..4ec8f5013 100644
--- a/docs/userguide/kvmfornfv.cyclictest-dashboard.userguide.rst
+++ b/docs/userguide/kvmfornfv.cyclictest-dashboard.userguide.rst
@@ -2,31 +2,36 @@
.. http://creativecommons.org/licenses/by/4.0
-========================================
+=========================
+KVMFORNFV Dashboard Guide
+=========================
+
Dashboard for KVM4NFV Daily Test Results
-========================================
+----------------------------------------
Abstract
-========
+--------
This chapter explains the procedure to configure the InfluxDB and Grafana on Node1 or Node2
-depending on the testtype to publish KVM4NFV cyclic test results. The cyclictest cases are executed
-and results are published on Yardstick Dashboard(Graphana). InfluxDB is the database which will
+depending on the test type to publish KVM4NFV test results. The cyclictest cases are executed
+and results are published on the Yardstick Dashboard (Grafana). InfluxDB is the database which will
store the cyclictest results and Grafana is a visualisation suite to view the maximum,minumum and
-average values of the timeseries data of cyclictest results.The framework is shown in below image.
-
-.. Figure:: ../images/dashboard-architecture.png
+average values of the time series data of cyclictest results. The framework is shown in the below image.
+
+.. figure:: images/dashboard-architecture.png
+ :name: dashboard-architecture
+ :width: 100%
+ :align: center
Version Features
-================
+----------------
+-----------------------------+--------------------------------------------+
| | |
| **Release** | **Features** |
| | |
+=============================+============================================+
-| | - Data published in Json file Format |
+| | - Data published in Json file format |
| Colorado | - No database support to store the test's |
| | latency values of cyclictest |
| | - For each run, the previous run's output |
@@ -36,13 +41,13 @@ Version Features
| | - Test results are stored in Influxdb |
| | - Graphical representation of the latency |
| Danube | values using Grafana suite. (Dashboard) |
-| | - Supports Graphical view for multiple |
+| | - Supports graphical view for multiple |
| | testcases and test-types (Stress/Idle) |
+-----------------------------+--------------------------------------------+
Installation Steps:
-===================
+-------------------
To configure Yardstick, InfluxDB and Grafana for KVMFORNFV project following sequence of steps are followed:
**Note:**
@@ -73,7 +78,7 @@ The Yardstick document for Grafana and InfluxDB configuration can be found `here
.. _here: https://wiki.opnfv.org/display/yardstick/How+to+deploy+InfluxDB+and+Grafana+locally
Configuring the Dispatcher Type:
-================================
+---------------------------------
Need to configure the dispatcher type in /etc/yardstick/yardstick.conf depending on the dispatcher
methods which are used to store the cyclictest results. A sample yardstick.conf can be found at
/yardstick/etc/yardstick.conf.sample, which can be copied to /etc/yardstick.
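+
+For example, assuming the yardstick repository is checked out at /yardstick as mentioned above,
+the sample configuration can be copied into place before editing the dispatcher entry:
+
+.. code:: bash
+
+   # sketch: copy the sample config, then edit the dispatcher entry as described below
+   cp /yardstick/etc/yardstick.conf.sample /etc/yardstick/yardstick.conf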
@@ -91,9 +96,9 @@ Three type of dispatcher methods are available to store the cyclictest results.
- InfluxDB
- HTTP
-**1. File**: Default Dispatcher module is file.If the dispatcher module is configured as a file,then the test results are stored in yardstick.out file.
+**1. File**: The default dispatcher module is file. If the dispatcher module is configured as a file, then the test results are stored in a temporary file yardstick.out
( default path: /tmp/yardstick.out).
-Dispatcher module of "Verify Job" is "Default".So,the results are stored in Yardstick.out file for verify job. Storing all the verify jobs in InfluxDB database causes redundancy of latency values. Hence, a File output format is prefered.
+Dispatcher module of "Verify Job" is "Default". So, the results are stored in the yardstick.out file for the verify job. Storing all the verify jobs in the InfluxDB database causes redundancy of latency values. Hence, a file output format is preferred.
.. code:: bash
@@ -101,9 +106,14 @@ Dispatcher module of "Verify Job" is "Default".So,the results are stored in Yard
debug = False
dispatcher = file
-**2. Influxdb**: If the dispatcher module is configured as influxdb, then the test results are stored in Influxdb.Users can check test results stored in the Influxdb(Database) on Grafana which is used to visualize the time series data.
+ [dispatcher_file]
+ file_path = /tmp/yardstick.out
+ max_bytes = 0
+ backup_count = 0
+
+**2. Influxdb**: If the dispatcher module is configured as influxdb, then the test results are stored in Influxdb. Users can check the test results stored in the Influxdb (database) on Grafana, which is used to visualize the time series data.
-To configure the influxdb ,the following content in /etc/yardstick/yardstick.conf need to updated
+To configure the influxdb, the following content in /etc/yardstick/yardstick.conf needs to be updated
.. code:: bash
@@ -111,7 +121,14 @@ To configure the influxdb ,the following content in /etc/yardstick/yardstick.con
debug = False
dispatcher = influxdb
-Dispatcher module of "Daily Job" is Influxdb.So the results are stored in influxdb and then published to Dashboard.
+ [dispatcher_influxdb]
+ timeout = 5
+ target = http://127.0.0.1:8086 ##Mention the IP where influxdb is running
+ db_name = yardstick
+ username = root
+ password = root
+
+Dispatcher module of "Daily Job" is Influxdb. So, the results are stored in influxdb and then published to Dashboard.
**3. HTTP**: If the dispatcher module is configured as http, users can check test result on OPNFV testing dashboard which uses MongoDB as backend.
@@ -121,13 +138,17 @@ Dispatcher module of "Daily Job" is Influxdb.So the results are stored in influx
debug = False
dispatcher = http
-.. Figure:: ../images/UseCaseDashboard.png
+ [dispatcher_http]
+ timeout = 5
+ target = http://127.0.0.1:8000/results
+
+.. figure:: images/UseCaseDashboard.png
Detailing the dispatcher module in verify and daily Jobs:
----------------------------------------------------------
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-KVM4NFV updates the dispatcher module in the yardstick configuration file(/etc/yardstick/yardstick.conf) depending on the Job type(Verify/Daily).Once the test is completed, results are published to the respective dispatcher modules.
+KVM4NFV updates the dispatcher module in the yardstick configuration file (/etc/yardstick/yardstick.conf) depending on the Job type (Verify/Daily). Once the test is completed, results are published to the respective dispatcher modules.
Dispatcher module is configured for each Job type as mentioned below.
@@ -182,9 +203,15 @@ Influxdb api which is already implemented in `Influxdb`_ will post the data in l
- Grafana can be accessed at `Login`_ using credentials opnfv/opnfv and used for visualizing the collected test data as shown in `Visual`_\
-.. Figure:: ../images/Dashboard-screenshot-1.png
+.. figure:: images/Dashboard-screenshot-1.png
+ :name: dashboard-screenshot-1
+ :width: 100%
+ :align: center
-.. Figure:: ../images/Dashboard-screenshot-2.png
+.. figure:: images/Dashboard-screenshot-2.png
+ :name: dashboard-screenshot-2
+ :width: 100%
+ :align: center
.. _Influxdb: https://git.opnfv.org/cgit/yardstick/tree/yardstick/dispatcher/influxdb.py
@@ -199,9 +226,9 @@ Influxdb api which is already implemented in `Influxdb`_ will post the data in l
.. _GrafanaDoc: http://docs.grafana.org/
Understanding Kvmfornfv Grafana Dashboard
-=========================================
+------------------------------------------
-The Kvmfornfv Dashboard found at http://testresults.opnfv.org/ currently supports graphical view of Cyclictest. For viewing Kvmfornfv Dashboard use,
+The Kvmfornfv dashboard found at http://testresults.opnfv.org/ currently supports graphical view of cyclictest. For viewing Kvmfornfv dashboard use,
.. code:: bash
@@ -212,6 +239,15 @@ The Kvmfornfv Dashboard found at http://testresults.opnfv.org/ currently support
Username: opnfv
Password: opnfv
+
+.. code:: bash
+
+   The JSON of the kvmfornfv-cyclictest dashboard can be found at:
+
+ $ git clone https://gerrit.opnfv.org/gerrit/yardstick.git
+ $ cd yardstick/dashboard
+ $ cat KVMFORNFV-Cyclictest
+
The Dashboard has four tables, each representing a specific test-type of cyclictest case,
- Kvmfornfv_Cyclictest_Idle-Idle
@@ -226,33 +262,49 @@ Note:
**A brief about what each graph of the dashboard represents:**
1. Idle-Idle Graph
--------------------
-`Idle-Idle`_ graph displays the Average,Maximum and Minimum latency values obtained by running Idle_Idle test-type of the Cyclictest. Idle_Idle implies that no stress is applied on the Host or the Guest.
+~~~~~~~~~~~~~~~~~~~~
+`Idle-Idle`_ graph displays the Average, Maximum and Minimum latency values obtained by running the Idle_Idle test-type of the cyclictest. Idle_Idle implies that no stress is applied on the Host or the Guest.
.. _Idle-Idle: http://testresults.opnfv.org/grafana/dashboard/db/kvmfornfv-cyclictest?panelId=10&fullscreen
-.. Figure:: ../images/Idle-Idle.png
+.. figure:: images/Idle-Idle.png
+ :name: Idle-Idle graph
+ :width: 100%
+ :align: center
2. CPU_Stress-Idle Graph
---------------------------
-`Cpu_Stress-Idle`_ graph displays the Average,Maximum and Minimum latency values obtained by running Idle_Idle test-type of the Cyclictest. Idle_Idle implies that CPU stress is applied on the Host and no stress on the Guest.
+~~~~~~~~~~~~~~~~~~~~~~~~~
+`Cpu_Stress-Idle`_ graph displays the Average, Maximum and Minimum latency values obtained by running the Cpu_stress-Idle test-type of the cyclictest. Cpu_stress-Idle implies that CPU stress is applied on the Host and no stress on the Guest.
.. _Cpu_stress-Idle: http://testresults.opnfv.org/grafana/dashboard/db/kvmfornfv-cyclictest?panelId=11&fullscreen
-.. Figure:: ../images/Cpustress-Idle.png
+.. figure:: images/Cpustress-Idle.png
+ :name: cpustress-idle graph
+ :width: 100%
+ :align: center
3. Memory_Stress-Idle Graph
-----------------------------
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
`Memory_Stress-Idle`_ graph displays the Average, Maximum and Minimum latency values obtained by running the Memory_stress-Idle test-type of the Cyclictest. Memory_stress-Idle implies that Memory stress is applied on the Host and no stress on the Guest.
.. _Memory_Stress-Idle: http://testresults.opnfv.org/grafana/dashboard/db/kvmfornfv-cyclictest?panelId=12&fullscreen
-.. Figure:: ../images/Memorystress-Idle.png
+.. figure:: images/Memorystress-Idle.png
+ :name: memorystress-idle graph
+ :width: 100%
+ :align: center
4. IO_Stress-Idle Graph
-------------------------
+~~~~~~~~~~~~~~~~~~~~~~~~~
`IO_Stress-Idle`_ graph displays the Average, Maximum and Minimum latency values obtained by running the IO_stress-Idle test-type of the Cyclictest. IO_stress-Idle implies that IO stress is applied on the Host and no stress on the Guest.
.. _IO_Stress-Idle: http://testresults.opnfv.org/grafana/dashboard/db/kvmfornfv-cyclictest?panelId=13&fullscreen
-.. Figure:: ../images/IOstress-Idle.png
+.. figure:: images/IOstress-Idle.png
+ :name: iostress-idle graph
+ :width: 100%
+ :align: center
+
+Future Scope
+-------------
+The future work will include adding the kvmfornfv_Packet-forwarding test results into Grafana and influxdb.
diff --git a/docs/userguide/low_latency.userguide.rst b/docs/userguide/low_latency.userguide.rst
index 66e63770c..88cc0347e 100644
--- a/docs/userguide/low_latency.userguide.rst
+++ b/docs/userguide/low_latency.userguide.rst
@@ -48,15 +48,19 @@ Please check the default kernel configuration in the source code at:
kernel/arch/x86/configs/opnfv.config.
Below is host kernel boot line example:
-::
-isolcpus=11-15,31-35 nohz_full=11-15,31-35 rcu_nocbs=11-15,31-35
-iommu=pt intel_iommu=on default_hugepagesz=1G hugepagesz=1G mce=off idle=poll
-intel_pstate=disable processor.max_cstate=1 pcie_asmp=off tsc=reliable
+
+.. code:: bash
+
+ isolcpus=11-15,31-35 nohz_full=11-15,31-35 rcu_nocbs=11-15,31-35
+ iommu=pt intel_iommu=on default_hugepagesz=1G hugepagesz=1G mce=off idle=poll
+   intel_pstate=disable processor.max_cstate=1 pcie_aspm=off tsc=reliable
Below is guest kernel boot line example
-::
-isolcpus=1 nohz_full=1 rcu_nocbs=1 mce=off idle=poll default_hugepagesz=1G
-hugepagesz=1G
+
+.. code:: bash
+
+ isolcpus=1 nohz_full=1 rcu_nocbs=1 mce=off idle=poll default_hugepagesz=1G
+ hugepagesz=1G
Please refer to `tuning.userguide` for more explanation.
@@ -68,45 +72,194 @@ environment is also required. Please refer to `tunning.userguide` for
more explanation.
Test cases to measure Latency
-=============================
+-----------------------------
+The performance of kvmfornfv is assessed by the latency values. The cyclictest and packet forwarding
+test cases produce real-time latency values (average, minimum and maximum).
+
+* Cyclictest
+
+* Packet Forwarding test
-Cyclictest case
----------------
+1. Cyclictest case
+-------------------
+Cyclictest results are the most frequently cited real-time Linux metric. The core concept of Cyclictest is very simple.
+In KVMFORNFV, cyclictest is implemented on the Guest-VM with the 4.4-Kernel RPM installed. It generates Max, Min and Avg
+values which help in assessing the kernel used. Cyclictest is currently divided into the following test types,
+
+* Idle-Idle
+* CPU_stress-Idle
+* Memory_stress-Idle
+* IO_stress-Idle
+
+Future scope of work may include the below test-types,
+
+* CPU_stress-CPU_stress
+* Memory_stress-Memory_stress
+* IO_stress-IO_stress
Understanding the naming convention
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+.. code:: bash
+
+ [Host-Type ] - [Guest-Type]
+
+* **Host-Type :** Mentions the type of stress applied on the kernel of the Host
+* **Guest-Type :** Mentions the type of stress applied on the kernel of the Guest
+
+Example:
+
+.. code:: bash
+
+ Idle - CPU_stress
+
+The above name signifies that,
+
+- No Stress is applied on the Host kernel
+
+- CPU Stress is applied on the Guest kernel
+
+**Note:**
+
+- Stress is applied using the stress tool, which is installed as part of the deployment.
+  It can be applied on CPU, Memory and Input-Output (Read/Write) operations; a short sketch follows below.
+
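+As an illustration of the naming convention, a "CPU_stress - Idle" combination could be reproduced
+by hand roughly as follows (a sketch only; the CI scripts drive the actual runs):
+
+.. code:: bash
+
+   # sketch of a "CPU_stress - Idle" run; the CI scripts perform the real tests
+   # on the host: apply CPU stress with the stress tool
+   stress --cpu 4 --timeout 600 &
+   # on the guest (no stress applied): run cyclictest and report Min/Avg/Max latency
+   cyclictest -m -n -p 95 -l 100000 -q
+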
+Version Features
+~~~~~~~~~~~~~~~~
+
++-----------------------+------------------+-----------------+
+| **Test Name** | **Colorado** | **Danube** |
+| | | |
++-----------------------+------------------+-----------------+
+| - Idle - Idle | ``Y`` | ``Y`` |
+| | | |
+| - Cpustress - Idle | | ``Y`` |
+| | | |
+| - Memorystress - Idle | | ``Y`` |
+| | | |
+| - IOstress - Idle | | ``Y`` |
+| | | |
++-----------------------+------------------+-----------------+
+
+
Idle-Idle test-type
~~~~~~~~~~~~~~~~~~~
+Cyclictest is run on the Guest VM when the Host and Guest are not under any kind of stress. This is the basic
+cyclictest of the KVMFORNFV project. It outputs Avg, Min and Max latency values.
+
+.. figure:: images/idle-idle-test-type.png
+ :name: idle-idle test type
+ :width: 100%
+ :align: center
CPU_Stress-Idle test-type
--------------------------
+~~~~~~~~~~~~~~~~~~~~~~~~~
+Here, the host is under CPU stress, where the sqrt() function is called on the kernel multiple times, which
+results in increased CPU load. The cyclictest will run on the guest, where the guest is under no stress.
+It outputs Avg, Min and Max latency values.
+
+.. figure:: images/cpu-stress-idle-test-type.png
+ :name: cpu-stress-idle test type
+ :width: 100%
+ :align: center
Memory_Stress-Idle test-type
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+In this type, the host is under memory stress, where continuous memory operations are implemented to
+increase the memory stress (buffer stress). The cyclictest will run on the guest, where the guest is under
+no stress. It outputs Avg, Min and Max latency values.
+
+.. figure:: images/memory-stress-idle-test-type.png
+ :name: memory-stress-idle test type
+ :width: 100%
+ :align: center
IO_Stress-Idle test-type
~~~~~~~~~~~~~~~~~~~~~~~~
+The host is under constant Input/Output stress, i.e., multiple read-write operations are invoked to
+increase stress. Cyclictest will run on the guest VM that is launched on the same host, where the guest
+is under no stress. It outputs Avg, Min and Max latency values.
+
+.. figure:: images/io-stress-idle-test-type.png
+ :name: io-stress-idle test type
+ :width: 100%
+ :align: center
CPU_Stress-CPU_Stress test-type
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+Not implemented for Danube release.
Memory_Stress-Memory_Stress test-type
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+Not implemented for Danube release.
IO_Stress-IO_Stress test type
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+Not implemented for Danube release.
+
+2. Packet Forwarding Test cases
+-------------------------------
+Packet forwarding is another test case of Kvmfornfv. It measures the time taken by a packet to return
+to the source after reaching its destination. This test case uses the automated test framework provided by
+the OPNFV VSWITCHPERF project and a traffic generator (IXIA is used for kvmfornfv). Only the test cases
+that generate latency results are triggered as a part of the kvmfornfv daily job.
+
+Latency test measures the time required for a frame to travel from the originating device through the
+network to the destination device. Please note that RFC2544 Latency measurement will be superseded with
+a measurement of average latency over all successfully transferred packets or frames.
-Packet Forwarding Test case
----------------------------
+Packet forwarding test cases currently support the following test types:
+
+* Packet forwarding to Host
+
+* Packet forwarding to Guest
+
+* Packet forwarding to Guest using SRIOV
+
+The testing approach adopted is black box testing, meaning the test inputs can be generated and the
+outputs captured and completely evaluated from outside of the System Under Test (SUT).
Packet forwarding to Host
~~~~~~~~~~~~~~~~~~~~~~~~~
+This is also known as Physical port → vSwitch → physical port deployment.
+This test measures the time taken by the packet/frame generated by the traffic generator (phy) to travel
+through the network to the destination device (phy). This test reports min, avg and max latency values.
+These values signify the performance of the installed kernel.
+
+Packet flow,
+
+.. figure:: images/host_pk_fw.png
+ :name: packet forwarding to host
+ :width: 100%
+ :align: center
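+
+A corresponding VSPERF invocation for this scenario is a phy-to-phy run, for example (a sketch;
+the exact parameters used by the daily job may differ):
+
+.. code:: bash
+
+   # sketch only: phy -> vSwitch -> phy latency run using a custom config
+   ./vsperf --conf-file=<path_to_custom_conf>/10_custom.conf phy2phy_tput
+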
Packet forwarding to Guest
~~~~~~~~~~~~~~~~~~~~~~~~~~
+This is also known as Physical port → vSwitch → VNF → vSwitch → physical port deployment.
+
+This test measures the time taken by the packet/frame generated by the traffic generator (phy) to travel
+through the network involving a guest to the destination device (phy). This test reports min, avg and
+max latency values. These values signify the performance of the installed kernel.
+
+Packet flow,
+
+.. figure:: images/guest_pk_fw.png
+ :name: packet forwarding to guest
+ :width: 100%
+ :align: center
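+
+The matching VSPERF deployment here is PVP, for example (a sketch):
+
+.. code:: bash
+
+   # sketch only: phy -> vSwitch -> VNF -> vSwitch -> phy latency run
+   ./vsperf --conf-file=<path_to_custom_conf>/10_custom.conf pvp_tput
+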
Packet forwarding to Guest using SRIOV
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+This test is used to verify the VNF and measure the base performance (maximum forwarding rate in
+fps and latency) that can be achieved by the VNF without a vSwitch. The performance metrics
+collected by this test will serve as a key comparison point for NIC passthrough technologies and
+vSwitches. VNF in this context refers to the hypervisor and the VM.
+
+**Note:** The Vsperf running on the host is still required.
+
+Packet flow,
+
+.. figure:: images/sriov_pk_fw.png
+ :name: packet forwarding to guest using sriov
+ :width: 100%
+ :align: center
diff --git a/docs/userguide/openstack.rst b/docs/userguide/openstack.rst
index bd1919991..929d2ba42 100644
--- a/docs/userguide/openstack.rst
+++ b/docs/userguide/openstack.rst
@@ -2,19 +2,19 @@
.. http://creativecommons.org/licenses/by/4.0
---------------------------------
-Colorado OpenStack User Guide
---------------------------------
+============================
+Danube OpenStack User Guide
+============================
OpenStack is a cloud operating system developed and released by the
`OpenStack project <https://www.openstack.org>`_. OpenStack is used in OPNFV
for controlling pools of compute, storage, and networking resources in a Pharos
compliant infrastructure.
-OpenStack is used in Colorado to manage tenants (known in OpenStack as
+OpenStack is used in Danube to manage tenants (known in OpenStack as
projects),users, services, images, flavours, and quotas across the Pharos
infrastructure.The OpenStack interface provides the primary interface for an
-operational Colorado deployment and it is from the "horizon console" that an
+operational Danube deployment and it is from the "horizon console" that an
OPNFV user will perform the majority of administrative and operational
activities on the deployment.
@@ -26,7 +26,7 @@ details and descriptions of how to configure and interact with the OpenStack
deployment.This guide can be used by lab engineers and operators to tune the
OpenStack deployment to your liking.
-Once you have configured OpenStack to your purposes, or the Colorado
+Once you have configured OpenStack to your purposes, or the Danube
deployment meets your needs as deployed, an operator, or administrator, will
find the best guidance for working with OpenStack in the
`OpenStack administration guide <http://docs.openstack.org/user-guide-admin>`_.
@@ -46,6 +46,6 @@ and enter the username and password:
password: admin
Other methods of interacting with and configuring OpenStack,, like the REST API
-and CLI are also available in the Colorado deployment, see the
+and CLI are also available in the Danube deployment, see the
`OpenStack administration guide <http://docs.openstack.org/user-guide-admin>`_
for more information on using those interfaces.
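+
+For the CLI mentioned above, a quick check from a node with the OpenStack client installed looks
+like the following (a sketch; the credentials file name depends on the deployment):
+
+.. code:: bash
+
+   # sketch: load the admin credentials and list the deployed services
+   source openrc
+   openstack service list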
diff --git a/docs/userguide/packet_forwarding.userguide.rst b/docs/userguide/packet_forwarding.userguide.rst
index ba117508c..594952bdf 100644
--- a/docs/userguide/packet_forwarding.userguide.rst
+++ b/docs/userguide/packet_forwarding.userguide.rst
@@ -5,14 +5,14 @@
=================
PACKET FORWARDING
=================
-=======================
+
About Packet Forwarding
-=======================
+-----------------------
-Packet Forwarding is a test suite of KVMFORNFV which is used to measure the total time taken by a
-**Packet** generated by the traffic generator to return from Guest/Host as per the implemented
-scenario. Packet Forwarding is implemented using VSWITCHPERF/``VSPERF software of OPNFV`` and an
-``IXIA Traffic Generator``.
+Packet Forwarding is a test suite of KVMFORNFV. These latency tests measure the time taken by a
+**Packet** generated by the traffic generator to travel from the originating device through the
+network to the destination device. Packet Forwarding is implemented using the test framework
+of the OPNFV VSWITCHPERF project and an ``IXIA Traffic Generator``.
Version Features
----------------
@@ -29,14 +29,14 @@ Version Features
| | - Packet Forwarding is a testcase in KVMFORNFV |
| | - Implements three scenarios (Host/Guest/SRIOV) |
| | as part of testing in KVMFORNFV |
-| Danube | - Uses available testcases of OPNFV's VSWTICHPERF |
-| | software (PVP/PVVP) |
+| Danube | - Uses automated test framework of OPNFV |
+| | VSWITCHPERF software (PVP/PVVP) |
+| | |
| | - Works with IXIA Traffic Generator |
+-----------------------------+---------------------------------------------------+
-======
VSPERF
-======
+------
VSPerf is an OPNFV testing project.
VSPerf will develop a generic and architecture agnostic vSwitch testing framework and associated
@@ -47,17 +47,18 @@ VNF level testing and validation.
For complete VSPERF documentation go to `link.`_
-.. _link.: <http://artifacts.opnfv.org/vswitchperf/colorado/index.html>
+.. _link.: http://artifacts.opnfv.org/vswitchperf/colorado/index.html
Installation
-------------
+~~~~~~~~~~~~
+
Guidelines for installing `VSPERF`_.
-.. _VSPERF: <http://artifacts.opnfv.org/vswitchperf/colorado/configguide/index.html>
+.. _VSPERF: http://artifacts.opnfv.org/vswitchperf/colorado/configguide/index.html
Supported Operating Systems
----------------------------
+~~~~~~~~~~~~~~~~~~~~~~~~~~~
* CentOS 7
* Fedora 20
@@ -67,19 +68,21 @@ Supported Operating Systems
* Ubuntu 14.04
Supported vSwitches
--------------------
+~~~~~~~~~~~~~~~~~~~
+
The vSwitch must support Open Flow 1.3 or greater.
* OVS (built from source).
* OVS with DPDK (built from source).
Supported Hypervisors
----------------------
+~~~~~~~~~~~~~~~~~~~~~
* Qemu version 2.3.
Other Requirements
-------------------
+~~~~~~~~~~~~~~~~~~
+
The test suite requires Python 3.3 and relies on a number of other
packages. These need to be installed for the test suite to function.
@@ -93,9 +96,9 @@ user account, which will be used for vsperf execution.
Execution of installation script:
-.. code:: bashFtrace.debugging.tool.userguide.rst
+.. code:: bash
- $ cd Vswitchperf
+ $ cd vswitchperf
$ cd systems
$ ./build_base_machine.sh
@@ -115,10 +118,10 @@ For running testcases VSPERF is installed on Intel pod1-node2 in which centos
operating system is installed. Only VSPERF installation on CentOS is discussed here.
For installation steps on other operating systems please refer to `here`_.
-.. _here: <http://artifacts.opnfv.org/vswitchperf/colorado/configguide/index.html>
+.. _here: http://artifacts.opnfv.org/vswitchperf/colorado/configguide/index.html
For CentOS 7
------------------
+~~~~~~~~~~~~~~
## Python 3 Packages
@@ -147,16 +150,16 @@ To activate, simple run:
Working Behind a Proxy
------------------------
+~~~~~~~~~~~~~~~~~~~~~~
If you're behind a proxy, you'll likely want to configure this before running any of the above. For example:
.. code:: bash
- export http_proxy=proxy.mycompany.com:123
- export https_proxy=proxy.mycompany.com:123
-
-
+ export http_proxy="http://<username>:<password>@<proxy>:<port>/";
+ export https_proxy="https://<username>:<password>@<proxy>:<port>/";
+ export ftp_proxy="ftp://<username>:<password>@<proxy>:<port>/";
+ export socks_proxy="socks://<username>:<password>@<proxy>:<port>/";
.. _a link: http://www.softwarecollections.org/en/scls/rhscl/python33/
.. _virtualenv: https://virtualenv.readthedocs.org/en/latest/
@@ -166,10 +169,11 @@ For other OS specific activation click `this link`_:
.. _this link: http://artifacts.opnfv.org/vswitchperf/colorado/configguide/installation.html#other-requirements
Traffic-Generators
--------------------
+------------------
+
VSPERF supports many Traffic-generators. For configuring VSPERF to work with the available traffic-generator go through `this`_.
-.. _this: <http://artifacts.opnfv.org/vswitchperf/colorado/configguide/trafficgen.html>
+.. _this: http://artifacts.opnfv.org/vswitchperf/colorado/configguide/trafficgen.html
VSPERF supports the following traffic generators:
@@ -191,35 +195,40 @@ and configure the various traffic generators.
As KVM4NFV uses only IXIA traffic generator, it is discussed here. For complete documentation regarding traffic generators please follow this `link`_.
-.. _link: <https://gerrit.opnfv.org/gerrit/gitweb?p=vswitchperf.git;a=blob;f=docs/configguide/trafficgen.rst;h=85fc35b886d30db3b92a6b7dcce7ca742b70cbdc;hb=HEAD>
+.. _link: https://gerrit.opnfv.org/gerrit/gitweb?p=vswitchperf.git;a=blob;f=docs/configguide/trafficgen.rst;h=85fc35b886d30db3b92a6b7dcce7ca742b70cbdc;hb=HEAD
-==========
IXIA Setup
-==========
+----------
-=====================
Hardware Requirements
-=====================
-VSPERF requires the following hardware to run tests: IXIA traffic generator (IxNetwork), a machine that runs the IXIA client software and a CentOS Linux release 7.1.1503 (Core) host.
+~~~~~~~~~~~~~~~~~~~~~
+
+VSPERF requires the following hardware to run tests: IXIA traffic generator (IxNetwork), a machine that
+runs the IXIA client software and a CentOS Linux release 7.1.1503 (Core) host.
Installation
--------------
+~~~~~~~~~~~~
Follow the [installation instructions] to install.
-IXIA Setup
-------------
On the CentOS 7 system
-----------------------
+~~~~~~~~~~~~~~~~~~~~~~
+
You need to install IxNetworkTclClient$(VER_NUM)Linux.bin.tgz.
On the IXIA client software system
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
Find the IxNetwork TCL server app (start -> All Programs -> IXIA -> IxNetwork -> IxNetwork_$(VER_NUM) -> IxNetwork TCL Server)
- Right click on IxNetwork TCL Server, select properties
- - Under shortcut tab in the Target dialogue box make sure there is the argument "-tclport xxxx" where xxxx is your port number (take note of this port number you will need it for the 10_custom.conf file).
+ - Under shortcut tab in the Target dialogue box make sure there is the argument "-tclport xxxx"
+
+where xxxx is your port number (take note of this port number; you will need it for the 10_custom.conf file).
-.. Figure:: ../images/IXIA1.png
+.. figure:: images/IXIA1.png
+ :name: IXIA1 setup
+ :width: 100%
+ :align: center
- Hit Ok and start the TCL server application
@@ -261,7 +270,7 @@ Detailed description of options follows:
.. _test-results-share:
Test results share
--------------------
+~~~~~~~~~~~~~~~~~~
VSPERF is not able to retrieve test results via TCL API directly. Instead, all test
results are stored at IxNetwork TCL server. Results are stored at folder defined by
@@ -285,19 +294,20 @@ Example of sharing configuration:
Note: It is essential to use slashes '/' also in path
configured by ``TRAFFICGEN_IXNET_TESTER_RESULT_DIR`` parameter.
- * Install cifs-utils package.
+
+* Install cifs-utils package.
e.g. at rpm based Linux distribution:
- .. code-block:: console
+.. code-block:: console
yum install cifs-utils
- * Mount shared directory, so VSPERF can access test results.
+* Mount shared directory, so VSPERF can access test results.
e.g. by adding new record into ``/etc/fstab``
- .. code-block:: console
+.. code-block:: console
mount -t cifs //_TCL_SERVER_IP_OR_FQDN_/ixia_results /mnt/ixia_results
-o file_mode=0777,dir_mode=0777,nounix
@@ -308,6 +318,7 @@ is visible at DUT inside ``/mnt/ixia_results`` directory.
Cloning and building src dependencies
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
In order to run VSPERF, you will need to download DPDK and OVS. You can do this manually and build
them in a preferred location, or you could use vswitchperf/src. The vswitchperf/src directory
contains makefiles that will allow you to clone and build the libraries that VSPERF depends on,
@@ -326,13 +337,16 @@ To delete a src subdirectory and its contents to allow you to re-clone simply us
Configure the `./conf/10_custom.conf` file
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
The supplied `10_custom.conf` file must be modified, as it contains configuration items for which there are no reasonable default values.
-The configuration items that can be added is not limited to the initial contents. Any configuration item mentioned in any .conf file in `./conf` directory can be added and that item will be overridden by the custom
+The configuration items that can be added are not limited to the initial contents. Any configuration item
+mentioned in any .conf file in the `./conf` directory can be added and that item will be overridden by the custom
+configuration value.
Using a custom settings file
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
Alternatively a custom settings file can be passed to `vsperf` via the `--conf-file` argument.
.. code:: bash
@@ -347,8 +361,34 @@ argument will override both the default and your custom configuration files. Thi
2. Environment variables
3. Configuration file(s)
+vloop_vnf
+~~~~~~~~~
+
+VSPERF uses a VM image called vloop_vnf for looping traffic in the deployment
+scenarios involving VMs. The image can be downloaded from
+`<http://artifacts.opnfv.org/>`__.
+
+Please see the installation instructions for information on :ref:`vloop-vnf`
+images.
+
+.. _l2fwd-module:
+
+l2fwd Kernel Module
+~~~~~~~~~~~~~~~~~~~
+
+A kernel module that provides OSI Layer 2 IPv4 termination or forwarding with
+support for Destination Network Address Translation (DNAT) for both the MAC and
+IP addresses. l2fwd can be found in <vswitchperf_dir>/src/l2fwd
+
+.. figure:: images/Guest_Scenario.png
+ :name: Guest_Scenario
+ :width: 100%
+ :align: center
+
+
Executing tests
~~~~~~~~~~~~~~~~
+
Before running any tests make sure you have root permissions by adding the following line to /etc/sudoers:
.. code:: bash
@@ -382,7 +422,7 @@ Some tests allow for configurable parameters, including test duration (in second
./vsperf --conf-file user_settings.py
--tests RFC2544Tput
- --test-param "rfc2544_duration=10;packet_sizes=128"
+  --test-param "rfc2544_duration=10;packet_sizes=128"
For all available options, check out the help dialog:
@@ -393,6 +433,7 @@ For all available options, check out the help dialog:
Testcases
----------
+
Available Tests in VSPERF are:
* phy2phy_tput
@@ -444,9 +485,9 @@ Example of execution of VSPERF in "trafficgen" mode:
--test-params "TRAFFIC={'traffic_type':'rfc2544_continuous','bidir':'False','framerate':60}"
-================================
Packet Forwarding Test Scenarios
-================================
+--------------------------------
+
KVMFORNFV currently implements three scenarios as part of testing:
* Host Scenario
@@ -455,32 +496,47 @@ KVMFORNFV currently implements three scenarios as part of testing:
Packet Forwarding Host Scenario
--------------------------------
-Here Host is NODE-2. It has VSPERF installed in it and is properly configured to use IXIA Traffic-generator by providing IXIA CARD, PORTS and Lib paths along with IP.
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Here the host DUT has VSPERF installed in it and is properly configured to use the IXIA Traffic-generator
+by providing the IXIA CARD, PORTS and Lib paths along with the IP.
please refer to figure.2
-.. Figure:: ../images/Host_Scenario.png
+.. figure:: images/Host_Scenario.png
+ :name: Host_Scenario
+ :width: 100%
+ :align: center
Packet Forwarding Guest Scenario
---------------------------------
-Here the guest is a Virtual Machine (VM) launched by using a modified CentOS image(vsperf provided)
-on Node-2 (Host) using Qemu. In this scenario, the packet is initially forwarded to Host which is
-then forwarded to the launched guest. The time taken by the packet to reach the IXIA traffic-generator
-via Host and Guest is calculated and published as a test result of this scenario.
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-.. Figure:: ../images/Guest_Scenario.png
+Here the guest is a Virtual Machine (VM) launched using the vloop_vnf image provided by the vsperf project
+on the host/DUT using Qemu. In this latency test, the time taken by the frame/packet to travel from the
+originating device through the network involving a guest to the destination device is calculated.
+The resulting latency values signify the performance of the installed kernel.
+
+.. figure:: images/Guest_Scenario.png
+ :name: Guest_Scenario
+ :width: 100%
+ :align: center
Packet Forwarding SRIOV Scenario
---------------------------------
-Unlike the packet forwarding to Guest-via-Host scenario, here the packet generated at the IXIA is
-directly forwarded to the Guest VM launched on Host by implementing SR-IOV interface at NIC level
-of Host .i.e., Node-2. The time taken by the packet to reach the IXIA traffic-generator is calculated
-and published as a test result for this scenario. SRIOV-support_ is given below, it details how to use SR-IOV.
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+In this test the packet generated at the IXIA is forwarded to the Guest VM launched on the Host by
+implementing an SR-IOV interface at the NIC level of the host, i.e., the DUT. The time taken by the packet
+to travel through the network to the destination, the IXIA traffic-generator, is calculated and
+published as a test result for this scenario.
-.. Figure:: ../images/SRIOV_Scenario.png
+SRIOV-support_ is given below; it details how to use SR-IOV.
+
+.. figure:: images/SRIOV_Scenario.png
+ :name: SRIOV_Scenario
+ :width: 100%
+ :align: center
Using vfio_pci with DPDK
-------------------------
+~~~~~~~~~~~~~~~~~~~~~~~~~
To use vfio with DPDK instead of igb_uio add into your custom configuration
file the following parameter:
@@ -521,7 +577,7 @@ To check that IOMMU is enabled on your platform:
.. _SRIOV-support:
Using SRIOV support
--------------------
+~~~~~~~~~~~~~~~~~~~
To use virtual functions of NIC with SRIOV support, use extended form
of NIC PCI slot definition:
@@ -553,3 +609,25 @@ For example:
* tests without vSwitch, where VM accesses VF interfaces directly
by PCI-passthrough to measure raw VM throughput performance.
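+
+Creating the virtual functions themselves happens outside of VSPERF; a generic sketch on a Linux
+host (the interface name is a placeholder) is:
+
+.. code:: bash
+
+   # generic sketch, not a VSPERF command: create two VFs on the physical NIC
+   echo 2 > /sys/class/net/<physical_nic>/device/sriov_numvfs
+   lspci | grep "Virtual Function"        # confirm the VFs are visible
+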
+Using QEMU with PCI passthrough support
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Raw virtual machine throughput performance can be measured by execution of PVP
+test with direct access to NICs by PCI passthrough. To execute VM with direct
+access to PCI devices, enable vfio-pci_. In order to use virtual functions,
+SRIOV-support_ must be enabled.
+
+Execution of test with PCI passthrough with vswitch disabled:
+
+.. code-block:: console
+
+ $ ./vsperf --conf-file=<path_to_custom_conf>/10_custom.conf \
+ --vswitch none --vnf QemuPciPassthrough pvp_tput
+
+Any of supported guest-loopback-application_ can be used inside VM with
+PCI passthrough support.
+
+Note: Qemu with PCI passthrough support can be used only with PVP test
+deployment.
+
+.. _guest-loopback-application:
diff --git a/docs/userguide/pcm_utility.userguide.rst b/docs/userguide/pcm_utility.userguide.rst
index baef7059a..c8eb21d61 100644
--- a/docs/userguide/pcm_utility.userguide.rst
+++ b/docs/userguide/pcm_utility.userguide.rst
@@ -1,6 +1,15 @@
-=========================================================
+.. This work is licensed under a Creative Commons Attribution 4.0 International License.
+
+.. http://creativecommons.org/licenses/by/4.0
+
+===========================
+PCM Utility in KVMFORNFV
+===========================
+
Collecting Memory Bandwidth Information using PCM utility
-=========================================================
+---------------------------------------------------------
+This chapter describes how the PCM utility is used in kvmfornfv
+to collect memory bandwidth information.
+
About PCM utility
-----------------
@@ -22,10 +31,10 @@ Version Features
| | cyclic testcases. |
| | |
+-----------------------------+-----------------------------------------------+
+| | - pcm-memory.x will be executed before the |
+| Danube | execution of every testcase |
| | - pcm-memory.x provides the memory bandwidth |
| | data throught out the testcases |
-| | - pcm-memory.x will be executedbefore the |
-| Danube | execution of every testcase |
| | - used for all test-types (stress/idle) |
| | - Generated memory bandwidth logs which are |
| | to published to the KVMFORFNV artifacts |
@@ -124,3 +133,9 @@ signal will be passed to terminate the pcm-memory process which was executing th
pcm-memory.x 60 &>/root/MBWInfo/MBWInfo_${testType}_${timeStamp}
+ where,
+ ${testType} = verify (or) daily
+
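+A quick way to inspect the collected bandwidth data afterwards (a sketch; file names follow the
+pattern shown above, and <timeStamp> stands for the actual timestamp of the run):
+
+.. code:: bash
+
+   # sketch: list and view the memory bandwidth logs produced by pcm-memory.x
+   ls /root/MBWInfo/
+   tail -n 20 /root/MBWInfo/MBWInfo_daily_<timeStamp>
+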
+Future Scope
+------------
+PCM information will be added to cyclictest of kvmfornfv in yardstick.