author    Jiang, Yunhong <yunhong.jiang@intel.com>  2017-03-21 17:44:28 +0000
committer Gerrit Code Review <gerrit@opnfv.org>     2017-03-21 17:44:28 +0000
commit    59ee691ba40e67e4975e0eaf768efb6df286ed3c (patch)
tree      040e4d5aaf3a5aa05456a84fde72feaa16fdeda0
parent    b1495d4fa3175a3ffea301dedb9b0a60ca9ada44 (diff)
parent    d4b19c2012c72015c7554ad0b0f098b9dae8aa7c (diff)
Merge "This patch is used to update the documents of D-release."
-rw-r--r--  docs/configurationguide/abstract.rst | 8
-rw-r--r--  docs/configurationguide/configuration.options.render.rst | 4
-rw-r--r--  docs/configurationguide/images/idle-idle-test.png | bin 0 -> 14901 bytes
-rw-r--r--  docs/configurationguide/images/stress-idle-test.png | bin 0 -> 19033 bytes
-rw-r--r--  docs/configurationguide/index.rst | 2
-rw-r--r--  docs/configurationguide/low-latency.feature.configuration.description.rst | 97
-rw-r--r--  docs/configurationguide/scenariomatrix.rst | 63
-rw-r--r--  docs/design/kvmfornfv_design.rst | 16
-rw-r--r--  docs/glossary/kvmfornfv_glossary.rst | 49
-rw-r--r--  docs/index.rst | 166
-rw-r--r--  docs/installationprocedure/abstract.rst | 6
-rw-r--r--  docs/installationprocedure/kvm4nfv-cicd.installation.instruction.rst | 28
-rw-r--r--  docs/installationprocedure/kvm4nfv-cicd.release.notes.rst | 30
-rw-r--r--  docs/overview/kvmfornfv_overview.rst | 18
-rw-r--r--  docs/releasenotes/index.rst | 6
-rw-r--r--  docs/releasenotes/release-notes.rst | 95
-rw-r--r--  docs/requirements/kvmfornfv_requirements.rst | 36
-rw-r--r--  docs/scenarios/abstract.rst | 44
-rw-r--r--  docs/scenarios/index.rst | 52
-rw-r--r--  docs/scenarios/kvmfornfv.scenarios.description.rst | 155
-rw-r--r--  docs/scenarios/os-nosdn-kvm-ha/os-nosdn-kvm-ha.description.rst | 27
-rwxr-xr-x  docs/scenarios/os-nosdn-kvm_nfv_ovs_dpdk-ha/index.rst | 8
-rw-r--r--  docs/scenarios/os-nosdn-kvm_nfv_ovs_dpdk-ha/os-nosdn-kvm_nfv_ovs_dpdk-ha.description.rst | 81
-rwxr-xr-x  docs/scenarios/os-nosdn-kvm_nfv_ovs_dpdk-noha/index.rst | 8
-rw-r--r--  docs/scenarios/os-nosdn-kvm_nfv_ovs_dpdk-noha/os-nosdn-kvm_nfv_ovs_dpdk-noha.description.rst | 88
-rwxr-xr-x  docs/scenarios/os-nosdn-kvm_nfv_ovs_dpdk_bar-ha/index.rst | 8
-rw-r--r--  docs/scenarios/os-nosdn-kvm_nfv_ovs_dpdk_bar-ha/os-nosdn-kvm_nfv_ovs_dpdk_bar-ha.description.rst | 81
-rwxr-xr-x  docs/scenarios/os-nosdn-kvm_nfv_ovs_dpdk_bar-noha/index.rst | 8
-rw-r--r--  docs/scenarios/os-nosdn-kvm_nfv_ovs_dpdk_bar-noha/os-nosdn-kvm_nfv_ovs_dpdk_bar-noha.description.rst | 82
-rw-r--r--  docs/userguide/Ftrace.debugging.tool.userguide.rst | 39
-rw-r--r--  docs/userguide/abstract.rst | 6
-rw-r--r--  docs/userguide/common.platform.render.rst | 2
-rw-r--r--  docs/userguide/feature.userguide.render.rst | 2
-rw-r--r--  docs/userguide/images/cpu-stress-idle-test-type.png | bin 0 -> 17822 bytes
-rw-r--r--  docs/userguide/images/guest_pk_fw.png | bin 0 -> 8020 bytes
-rw-r--r--  docs/userguide/images/host_pk_fw.png | bin 0 -> 5390 bytes
-rw-r--r--  docs/userguide/images/idle-idle-test-type.png | bin 0 -> 14902 bytes
-rw-r--r--  docs/userguide/images/io-stress-idle-test-type.png | bin 0 -> 18983 bytes
-rw-r--r--  docs/userguide/images/memory-stress-idle-test-type.png | bin 0 -> 18727 bytes
-rw-r--r--  docs/userguide/images/sriov_pk_fw.png | bin 0 -> 5864 bytes
-rw-r--r--  docs/userguide/index.rst | 1
-rw-r--r--  docs/userguide/introduction.rst | 11
-rw-r--r--  docs/userguide/kvmfornfv.cyclictest-dashboard.userguide.rst | 122
-rw-r--r--  docs/userguide/low_latency.userguide.rst | 179
-rw-r--r--  docs/userguide/openstack.rst | 14
-rw-r--r--  docs/userguide/packet_forwarding.userguide.rst | 212
-rw-r--r--  docs/userguide/pcm_utility.userguide.rst | 23
47 files changed, 1344 insertions(+), 533 deletions(-)
diff --git a/docs/configurationguide/abstract.rst b/docs/configurationguide/abstract.rst
index a5066c284..3693bcab7 100644
--- a/docs/configurationguide/abstract.rst
+++ b/docs/configurationguide/abstract.rst
@@ -1,12 +1,12 @@
.. This work is licensed under a Creative Commons Attribution 4.0 International License.
.. http://creativecommons.org/licenses/by/4.0
-========
-Abstract
-========
+======================
+Configuration Abstract
+======================
This document provides guidance for the configurations available in the
-Colorado release of OPNFV.
+Danube release of OPNFV.
The release includes four installer tools leveraging different technologies;
Apex, Compass4nfv, Fuel and JOID, which deploy components of the platform.
diff --git a/docs/configurationguide/configuration.options.render.rst b/docs/configurationguide/configuration.options.render.rst
index 93add7755..1c1c62228 100644
--- a/docs/configurationguide/configuration.options.render.rst
+++ b/docs/configurationguide/configuration.options.render.rst
@@ -13,11 +13,11 @@ such as OpenStack,KVM etc. which includes different source components or
configurations.
KVM4NFV Scenarios
-===================
+------------------
Each KVM4NFV scenario provides unique features and capabilities, it is
important to understand your target platform capabilities before installing
and configuring. This configuration guide outlines how to install and
configure components in order to enable the features required.
-.. include:: scenariomatrix.rst
+.. include:: ./scenariomatrix.rst
diff --git a/docs/configurationguide/images/idle-idle-test.png b/docs/configurationguide/images/idle-idle-test.png
new file mode 100644
index 000000000..c9831df1d
--- /dev/null
+++ b/docs/configurationguide/images/idle-idle-test.png
Binary files differ
diff --git a/docs/configurationguide/images/stress-idle-test.png b/docs/configurationguide/images/stress-idle-test.png
new file mode 100644
index 000000000..111c2a7d2
--- /dev/null
+++ b/docs/configurationguide/images/stress-idle-test.png
Binary files differ
diff --git a/docs/configurationguide/index.rst b/docs/configurationguide/index.rst
index 6ad3b282c..b4cb69839 100644
--- a/docs/configurationguide/index.rst
+++ b/docs/configurationguide/index.rst
@@ -4,7 +4,7 @@
*************************
OPNFV Configuration Guide
*************************
-Colorado 1.0
+Danube 1.0
------------
.. toctree::
diff --git a/docs/configurationguide/low-latency.feature.configuration.description.rst b/docs/configurationguide/low-latency.feature.configuration.description.rst
index bf2dbdb44..6cad4c9ce 100644
--- a/docs/configurationguide/low-latency.feature.configuration.description.rst
+++ b/docs/configurationguide/low-latency.feature.configuration.description.rst
@@ -1,9 +1,12 @@
.. This work is licensed under a Creative Commons Attribution 4.0 International License.
.. http://creativecommons.org/licenses/by/4.0
-Introduction
-============
+=============================================
+Low Latency Feature Configuration Description
+=============================================
+Introduction
+------------
In KVM4NFV project, we focus on the KVM hypervisor to enhance it for NFV, by
looking at the following areas initially
@@ -14,7 +17,7 @@ looking at the following areas initially
* Fast live migration
Configuration of Cyclictest
-===========================
+---------------------------
Cyclictest measures Latency of response to a stimulus. Achieving low latency
with the KVM4NFV project requires setting up a special test environment.
@@ -26,15 +29,56 @@ parameters and the run-time environment.
https://wiki.opnfv.org/display/kvm/Nfv-kvm-tuning
Pre-configuration activities
-----------------------------
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Intel POD10 is currently used as the OPNFV-KVM4NFV test environment. The rpm
+packages from the latest build are downloaded onto the Intel Pod10 jump server
+from the artifact repository. Yardstick, running in an Ubuntu Docker container
+on the Intel Pod10 jump server, configures the host (Intel Pod10 node1/node2,
+based on job type) and the guest, and triggers cyclictest on the guest using
+the sample yaml file below.
+
+
+.. code:: bash
+
+ For IDLE-IDLE test,
+
+ host_setup_seqs:
+ - "host-setup0.sh"
+ - "reboot"
+ - "host-setup1.sh"
+ - "host-run-qemu.sh"
+
+ guest_setup_seqs:
+ - "guest-setup0.sh"
+ - "reboot"
+ - "guest-setup1.sh"
-Intel POD1 is currently used as OPNFV-KVM4NFV test environment. The latest
-build packages are downloaded onto Intel Pod1-jump server from artifact
-repository. Yardstick running in a ubuntu docker container on Intel Pod1-jump
-server will trigger the cyclictest.
+.. figure:: images/idle-idle-test.png
+ :name: idle-idle-test
+ :width: 100%
+ :align: center
-Running cyclictest through Yardstick will Configure the host(Pod1-node1), the
-guest, executes cyclictest on the guest.
+.. code:: bash
+
+ For [CPU/Memory/IO]Stress-IDLE tests,
+
+ host_setup_seqs:
+ - "host-setup0.sh"
+ - "reboot"
+ - "host-setup1.sh"
+ - "stress_daily.sh" [cpustress/memory/io]
+ - "host-run-qemu.sh"
+
+ guest_setup_seqs:
+ - "guest-setup0.sh"
+ - "reboot"
+ - "guest-setup1.sh"
+
+.. figure:: images/stress-idle-test.png
+ :name: stress-idle-test
+ :width: 100%
+ :align: center
The following scripts are used for configuring host and guest to create a
special test environment and achieve low latency.
@@ -44,7 +88,7 @@ followed by guest-setup0.sh and guest-setup1.sh scripts on the guest VM.
**host-setup0.sh**: Running this script will install the latest kernel rpm
on host and will make necessary changes as following to create special test
-environment
+environment.
* Isolates CPUs from the general scheduler
* Stops timer ticks on isolated CPUs whenever possible
@@ -55,15 +99,28 @@ environment
* Disables clocksource verification at runtime
**host-setup1.sh**: Running this script will make the following test
-environment changes
+environment changes.
* Disabling watchdogs to reduce overhead
* Disabling RT throttling
* Reroute interrupts bound to isolated CPUs to CPU 0
* Change the iptable so that we can ssh to the guest remotely
+**stress_daily.sh**: This script gets triggered only for stress-idle tests. Running this script
+makes the following environment changes.
+
+ * Triggers stress_script.sh, which runs the stress command with necessary options
+ * CPU, Memory or IO stress can be applied based on the test type
+ * Applying stress only on the Host is handled in D-Release
+ * For Idle-Idle test the stress script is not triggered
+ * Stress is applied only on the free cores to avoid loading the QEMU process
+
+ **Note:**
+ - On NUMA node 1, cores 22 and 23 are allocated for the QEMU process
+ - Cores 24-43 are used for applying stress
+
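The core split in the note above can be sketched in shell. The core numbers (22-23 for QEMU, 24-43 for stress) come from the note; the `taskset`/`stress` invocation shown in the trailing comment is illustrative only and is not the project's actual `stress_daily.sh`.

```shell
#!/bin/bash
# Sketch of the core split on NUMA node 1 described above:
# cores 22-23 are reserved for the QEMU process, 24-43 carry the stress load.
QEMU_CORES="22,23"
STRESS_CORES="$(seq -s, 24 43)"   # "24,25,...,43"

echo "QEMU pinned to:   ${QEMU_CORES}"
echo "Stress pinned to: ${STRESS_CORES}"

# The stress tool would then be pinned to the free cores only, e.g.:
#   taskset -c "${STRESS_CORES}" stress --cpu 20 --timeout 60
```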
**host-run-qemu.sh**: Running this script will launch a guest vm on the host.
- Note: download guest disk image from artifactory
+ Note: download guest disk image from artifactory.
**guest-setup0.sh**: Running this script on the guest vm will install the
latest build kernel rpm, cyclictest and make the following configuration on
@@ -75,22 +132,22 @@ guest vm.
* Disables clocksource verification at runtime
**guest-setup1.sh**: Running this script on guest vm will do the following
-configurations
+configurations.
* Disable watchdogs to reduce overhead
* Routes device interrupts to non-RT CPU
* Disables RT throttling
Hardware configuration
-----------------------
+~~~~~~~~~~~~~~~~~~~~~~
-Currently Intel POD1 is used as test environment for kvmfornfv to execute
-cyclictest. As part of this test environment Intel pod1-jump is configured as
+Currently Intel POD10 is used as the test environment for kvmfornfv to execute
+cyclictest. As part of this test environment Intel pod10-jump is configured as
jenkins slave and all the latest build artifacts are downloaded on to it.
-Intel pod1-node1 is the host on which a guest vm will be launched as a part of
+Intel pod10-node1 is the host on which a guest vm will be launched as a part of
running cyclictest through yardstick.
* For more information regarding hardware configuration, please visit
- https://wiki.opnfv.org/display/pharos/Intel+Pod1
- https://build.opnfv.org/ci/computer/intel-pod1/
+ https://wiki.opnfv.org/display/pharos/Intel+Pod10
+ https://build.opnfv.org/ci/computer/intel-pod10/
http://artifacts.opnfv.org/octopus/brahmaputra/docs/octopus_docs/opnfv-jenkins-slave-connection.html
diff --git a/docs/configurationguide/scenariomatrix.rst b/docs/configurationguide/scenariomatrix.rst
index 1e2cef90a..3da38ed60 100644
--- a/docs/configurationguide/scenariomatrix.rst
+++ b/docs/configurationguide/scenariomatrix.rst
@@ -2,17 +2,21 @@
.. http://creativecommons.org/licenses/by/4.0
+==============
+Scenariomatrix
+==============
+
Scenarios are implemented as deployable compositions through integration with an installation tool.
OPNFV supports multiple installation tools and for any given release not all tools will support all
scenarios. While our target is to establish parity across the installation tools to ensure they
can provide all scenarios, the practical challenge of achieving that goal for any given feature and
release results in some disparity.
-Colorado scenario overeview
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+Danube scenario overview
+^^^^^^^^^^^^^^^^^^^^^^^^^
The following table provides an overview of the installation tools and available scenario's
-in the Colorado release of OPNFV.
+in the Danube release of OPNFV.
Scenario status is indicated by a weather pattern icon. All scenarios listed with
a weather pattern are possible to deploy and run in your environment or a Pharos lab,
@@ -23,17 +27,17 @@ Weather pattern icon legend:
+---------------------------------------------+----------------------------------------------------------+
| Weather Icon | Scenario Status |
+=============================================+==========================================================+
-| .. image:: ../images/weather-clear.jpg | Stable, no known issues |
+| .. image:: images/weather-clear.jpg | Stable, no known issues |
+---------------------------------------------+----------------------------------------------------------+
-| .. image:: ../images/weather-few-clouds.jpg | Stable, documented limitations |
+| .. image:: images/weather-few-clouds.jpg | Stable, documented limitations |
+---------------------------------------------+----------------------------------------------------------+
-| .. image:: ../images/weather-overcast.jpg | Deployable, stability or feature limitations |
+| .. image:: images/weather-overcast.jpg | Deployable, stability or feature limitations |
+---------------------------------------------+----------------------------------------------------------+
-| .. image:: ../images/weather-dash.jpg | Not deployed with this installer |
+| .. image:: images/weather-dash.jpg | Not deployed with this installer |
+---------------------------------------------+----------------------------------------------------------+
Scenarios that are not yet in a state of "Stable, no known issues" will continue to be stabilised
-and updates will be made on the stable/colorado branch. While we intend that all Colorado
+and updates will be made on the stable/danube branch. While we intend that all Danube
scenarios should be stable it is worth checking regularly to see the current status. Due to
our dependency on upstream communities and code some issues may not be resolved prior to the D release.
@@ -43,48 +47,73 @@ Scenario Naming
In OPNFV scenarios are identified by short scenario names, these names follow a scheme that
identifies the key components and behaviours of the scenario. The rules for scenario naming are as follows:
+.. code:: bash
+
os-[controller]-[feature]-[mode]-[option]
Details of the fields are
- * os: mandatory
+
+ * **[os]:** mandatory
* Refers to the platform type used
* possible value: os (OpenStack)
-* [controller]: mandatory
+ * **[controller]:** mandatory
* Refers to the SDN controller integrated in the platform
* example values: nosdn, ocl, odl, onos
- * [feature]: mandatory
+ * **[feature]:** mandatory
* Refers to the feature projects supported by the scenario
* example values: nofeature, kvm, ovs, sfc
- * [mode]: mandatory
+ * **[mode]:** mandatory
* Refers to the deployment type, which may include for instance high availability
* possible values: ha, noha
- * [option]: optional
+ * **[option]:** optional
  * Used for scenarios that do not fit into the naming scheme.
* The optional field in the short scenario name should not be included if there is no optional scenario.
Some examples of supported scenario names are:
- * os-nosdn-kvm-noha
+ * **os-nosdn-kvm-noha**
* This is an OpenStack based deployment using neutron including the OPNFV enhanced KVM hypervisor
- * os-onos-nofeature-ha
+ * **os-onos-nofeature-ha**
* This is an OpenStack deployment in high availability mode including ONOS as the SDN controller
- * os-odl_l2-sfc
+ * **os-odl_l2-sfc**
* This is an OpenStack deployment using OpenDaylight and OVS enabled with SFC features
+ * **os-nosdn-kvm_nfv_ovs_dpdk-ha**
+
+ * This is an OpenStack deployment with high availability using OVS and DPDK, including the OPNFV enhanced KVM hypervisor
+ * This deployment has ``3-Controller and 2-Compute nodes``
+
+ * **os-nosdn-kvm_nfv_ovs_dpdk-noha**
+
+ * This is an OpenStack deployment without high availability using OVS and DPDK, including the OPNFV enhanced KVM hypervisor
+ * This deployment has ``1-Controller and 3-Compute nodes``
+
+ * **os-nosdn-kvm_nfv_ovs_dpdk_bar-ha**
+
+ * This is an OpenStack deployment with high availability using OVS and DPDK, including the OPNFV enhanced KVM hypervisor
+ and Barometer
+ * This deployment has ``3-Controller and 2-Compute nodes``
+
+ * **os-nosdn-kvm_nfv_ovs_dpdk_bar-noha**
+
+ * This is an OpenStack deployment without high availability using OVS and DPDK, including the OPNFV enhanced KVM hypervisor
+ and Barometer
+ * This deployment has ``1-Controller and 3-Compute nodes``
+
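The naming scheme above can be checked mechanically. A minimal bash sketch (for illustration only, not part of the project's tooling) that splits a short scenario name into its fields; note that feature names use underscores (e.g. ``kvm_nfv_ovs_dpdk``), so splitting on ``-`` is unambiguous:

```shell
#!/bin/bash
# Split a short scenario name of the form os-[controller]-[feature]-[mode]-[option]
# into its fields; [option] is optional and reported as "none" when absent.
parse_scenario() {
    IFS=- read -r os controller feature mode option <<< "$1"
    echo "os=${os} controller=${controller} feature=${feature} mode=${mode} option=${option:-none}"
}

parse_scenario "os-nosdn-kvm-noha"
parse_scenario "os-nosdn-kvm_nfv_ovs_dpdk_bar-ha"
```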
Installing your scenario
^^^^^^^^^^^^^^^^^^^^^^^^
@@ -92,7 +121,7 @@ There are two main methods of deploying your target scenario, one method is to f
walk you through the process of deploying to your hardware using scripts or ISO images, the other method is
to set up a Jenkins slave and connect your infrastructure to the OPNFV Jenkins master.
-For the purposes of evaluation and development a number of Colorado scenarios are able to be deployed
+For the purposes of evaluation and development a number of Danube scenarios are able to be deployed
virtually to mitigate the requirements on physical infrastructure. Details and instructions on performing
virtual deployments can be found in the installer specific installation instructions.
diff --git a/docs/design/kvmfornfv_design.rst b/docs/design/kvmfornfv_design.rst
index 54dcd120a..2465d63e3 100644
--- a/docs/design/kvmfornfv_design.rst
+++ b/docs/design/kvmfornfv_design.rst
@@ -65,9 +65,9 @@ on KVM:
.. Figure:: kvm1.png
-=====================
+
Design Considerations
-=====================
+---------------------
The latency variation and jitters can be minimized with the below
steps (with some in parallel):
@@ -92,9 +92,9 @@ steps (with some in parallel):
6. Measure latencies intensively. We leverage the existing testing methods.
OSADL, for example, defines industry tests for timing correctness.
-====================
+
Goals and Guidelines
-====================
+--------------------
The output of this project will provide :
@@ -110,9 +110,9 @@ The output of this project will provide :
4. Performance and interrupt latency measurement tools
-=========
+
Test plan
-=========
+---------
The tests that need to be conducted to make sure that all components from OPNFV
meet the requirement are mentioned below:
@@ -148,8 +148,8 @@ packet forwarding scenario.
.. Figure:: Bare-metalPacketForwarding.png
-=========
+----------
Reference
-=========
+----------
https://wiki.opnfv.org/display/kvm/
diff --git a/docs/glossary/kvmfornfv_glossary.rst b/docs/glossary/kvmfornfv_glossary.rst
index adebc815a..f5b547b85 100644
--- a/docs/glossary/kvmfornfv_glossary.rst
+++ b/docs/glossary/kvmfornfv_glossary.rst
@@ -5,12 +5,13 @@
**************
OPNFV Glossary
**************
-Colorado 1.0
+
+Danube 1.0
------------
-========
+
Contents
-========
+--------
This glossary provides a common definition of phrases and words commonly used
in OPNFV.
@@ -18,7 +19,7 @@ in OPNFV.
--------
A
--
+~
Arno
@@ -36,7 +37,7 @@ AVX2
--------
B
--
+~
Brahmaputra
@@ -58,7 +59,7 @@ Bogomips
--------
C
--
+~
CAT
@@ -95,7 +96,7 @@ CPU
--------
D
--
+~
Data plane
@@ -125,7 +126,7 @@ DSCP
--------
F
--
+~
Flavors
@@ -138,7 +139,7 @@ Fuel
--------
H
--
+~
Horizon
@@ -152,7 +153,7 @@ Hypervisor
--------
I
--
+~
IGMP
@@ -178,7 +179,7 @@ IRQ affinity
--------
J
--
+~
Jenkins
@@ -200,7 +201,7 @@ JumpHost
--------
K
--
+~
Kernel
@@ -210,7 +211,7 @@ Kernel
--------
L
--
+~
Latency
@@ -225,7 +226,7 @@ libvirt
--------
M
--
+~
Migration
@@ -235,7 +236,7 @@ Migration
--------
N
--
+~
NFV
@@ -257,7 +258,7 @@ NUMA
--------
O
--
+~
OPNFV
@@ -267,7 +268,7 @@ OPNFV
--------
P
--
+~
Pharos
@@ -291,7 +292,7 @@ Pools
--------
Q
--
+~
Qemu
@@ -301,7 +302,7 @@ Qemu
--------
R
--
+~
RDMA
@@ -316,7 +317,7 @@ Rest-Api
--------
S
--
+~
Scaling
@@ -343,7 +344,7 @@ Storage
--------
T
--
+~
Tenant
@@ -362,7 +363,7 @@ TSC
--------
V
--
+~
VLAN
@@ -380,7 +381,7 @@ VNF
--------
X
--
+~
XBZRLE
@@ -389,7 +390,7 @@ XBZRLE
--------
Y
--
+~
Yardstick
diff --git a/docs/index.rst b/docs/index.rst
new file mode 100644
index 000000000..8198d8597
--- /dev/null
+++ b/docs/index.rst
@@ -0,0 +1,166 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International License.
+.. http://creativecommons.org/licenses/by/4.0
+
+******************
+Danube 1.0 Release
+******************
+
+*************************
+Overview of Documentation
+*************************
+
+.. toctree::
+ :caption: Overview
+ :numbered:
+ :maxdepth: 4
+
+ ./overview/kvmfornfv_overview.rst
+
+********************************
+KVMFORNFV Installation Procedure
+********************************
+
+.. toctree::
+ :caption: Installation Procedure
+ :numbered:
+ :maxdepth: 4
+
+ ./installationprocedure/abstract.rst
+ ./installationprocedure/kvm4nfv-cicd.installation.instruction.rst
+ ./installationprocedure/kvm4nfv-cicd.release.notes.rst
+
+**********************
+KVMFORNFV Design Guide
+**********************
+
+.. toctree::
+ :caption: Design Overview and Description
+ :numbered:
+ :maxdepth: 4
+
+ ./design/kvmfornfv_design.rst
+
+****************************
+KVMFORNFV Requirements Guide
+****************************
+
+.. toctree::
+ :caption: Requirements
+ :numbered:
+ :maxdepth: 4
+
+ ./requirements/kvmfornfv_requirements.rst
+
+*****************************
+KVMFORNFV Configuration Guide
+*****************************
+
+.. toctree::
+ :caption: Configuration Guide
+ :numbered:
+ :maxdepth: 4
+
+ ./configurationguide/abstract.rst
+ ./configurationguide/configuration.options.render.rst
+ ./configurationguide/low-latency.feature.configuration.description.rst
+ ./configurationguide/scenariomatrix.rst
+
+********************************************
+KVMFORNFV Scenarios Overview and Description
+********************************************
+
+.. toctree::
+ :caption: Scenario Overview and Description
+ :numbered:
+ :maxdepth: 4
+
+ ./scenarios/abstract.rst
+ ./scenarios/kvmfornfv.scenarios.description.rst
+
+*******************************************************
+os-nosdn-kvm_nfv_ovs_dpdk-noha Overview and Description
+*******************************************************
+
+.. toctree::
+ :caption: os-nosdn-kvm_nfv_ovs_dpdk-noha
+ :numbered:
+ :maxdepth: 3
+
+ ./scenarios/os-nosdn-kvm_nfv_ovs_dpdk-noha/os-nosdn-kvm_nfv_ovs_dpdk-noha.description.rst
+
+*****************************************************
+os-nosdn-kvm_nfv_ovs_dpdk-ha Overview and Description
+*****************************************************
+
+.. toctree::
+ :caption: os-nosdn-kvm_nfv_ovs_dpdk-ha
+ :numbered:
+ :maxdepth: 3
+
+ ./scenarios/os-nosdn-kvm_nfv_ovs_dpdk-ha/os-nosdn-kvm_nfv_ovs_dpdk-ha.description.rst
+
+***********************************************************
+os-nosdn-kvm_nfv_ovs_dpdk_bar-noha Overview and Description
+***********************************************************
+
+.. toctree::
+ :caption: os-nosdn-kvm_nfv_ovs_dpdk_bar-noha
+ :numbered:
+ :maxdepth: 3
+
+ ./scenarios/os-nosdn-kvm_nfv_ovs_dpdk_bar-noha/os-nosdn-kvm_nfv_ovs_dpdk_bar-noha.description.rst
+
+*********************************************************
+os-nosdn-kvm_nfv_ovs_dpdk_bar-ha Overview and Description
+*********************************************************
+
+.. toctree::
+ :caption: os-nosdn-kvm_nfv_ovs_dpdk_bar-ha
+ :numbered:
+ :maxdepth: 3
+
+ ./scenarios/os-nosdn-kvm_nfv_ovs_dpdk_bar-ha/os-nosdn-kvm_nfv_ovs_dpdk_bar-ha.description.rst
+
+********************
+KVMFORNFV User Guide
+********************
+
+.. toctree::
+ :caption: User Guide
+ :numbered:
+ :maxdepth: 3
+
+ ./userguide/abstract.rst
+ ./userguide/introduction.rst
+ ./userguide/common.platform.render.rst
+ ./userguide/feature.userguide.render.rst
+ ./userguide/Ftrace.debugging.tool.userguide.rst
+ ./userguide/kvmfornfv.cyclictest-dashboard.userguide.rst
+ ./userguide/low_latency.userguide.rst
+ ./userguide/live_migration.userguide.rst
+ ./userguide/openstack.rst
+ ./userguide/packet_forwarding.userguide.rst
+ ./userguide/pcm_utility.userguide.rst
+ ./userguide/tuning.userguide.rst
+
+**********************
+KVMFORNFV Releasenotes
+**********************
+
+.. toctree::
+ :caption: Releasenotes
+ :numbered:
+ :maxdepth: 3
+
+ ./releasenotes/release-notes.rst
+
+******************
+KVMFORNFV Glossary
+******************
+
+.. toctree::
+ :caption: Glossary
+ :numbered:
+ :maxdepth: 3
+
+ ./glossary/kvmfornfv_glossary.rst
diff --git a/docs/installationprocedure/abstract.rst b/docs/installationprocedure/abstract.rst
index 728f0aa1c..853db170e 100644
--- a/docs/installationprocedure/abstract.rst
+++ b/docs/installationprocedure/abstract.rst
@@ -2,6 +2,10 @@
.. http://creativecommons.org/licenses/by/4.0
+********
+Abstract
+********
+
This document will give the user instructions on how to deploy available
-KVM4NFV CICD build scenario verfied for the Colorado release of the OPNFV
+KVM4NFV CICD build scenario verified for the Danube release of the OPNFV
platform.
diff --git a/docs/installationprocedure/kvm4nfv-cicd.installation.instruction.rst b/docs/installationprocedure/kvm4nfv-cicd.installation.instruction.rst
index 23177344e..4ddcb6f2e 100644
--- a/docs/installationprocedure/kvm4nfv-cicd.installation.instruction.rst
+++ b/docs/installationprocedure/kvm4nfv-cicd.installation.instruction.rst
@@ -33,15 +33,37 @@ the Rpms (on 'centos') and Debians (on 'ubuntu') builds in this case.
* How to build Kernel/Qemu Rpms- To build rpm packages, build.sh script is run
with -p and -o option (i.e. if -p package option is passed as "centos" or in
- default case). Example: sh ./ci/build.sh -p centos -o build_output
+ default case). Example:
+
+.. code:: bash
+
+ cd kvmfornfv/
+
+ For Kernel/Qemu RPMs,
+ sh ./ci/build.sh -p centos -o build_output
* How to build Kernel/Qemu Debians- To build debian packages, build.sh script
is run with -p and -o option (i.e. if -p package option is passed as
- "ubuntu"). Example: sh ./ci/build.sh -p ubuntu -o build_output
+ "ubuntu"). Example:
+
+.. code:: bash
+
+ cd kvmfornfv/
+
+ For Kernel/Qemu Debians,
+ sh ./ci/build.sh -p ubuntu -o build_output
+
* How to build all Kernel & Qemu, Rpms & Debians- To build both debian and rpm
packages, build.sh script is run with -p and -o option (i.e. if -p package
- option is passed as "both"). Example: sh ./ci/build.sh -p both -o build_output
+ option is passed as "both"). Example:
+
+.. code:: bash
+
+ cd kvmfornfv/
+
+ For Kernel/Qemu RPMs and Debians,
+ sh ./ci/build.sh -p both -o build_output
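The three invocations above differ only in the value passed to ``-p``. A small helper capturing that mapping (a sketch for illustration, not part of ``build.sh`` itself) makes the pattern explicit:

```shell
#!/bin/bash
# Map build.sh's -p option to the package formats it produces:
# centos -> RPMs, ubuntu -> Debians, both -> RPMs and Debians.
pkg_formats() {
    case "$1" in
        centos) echo "rpm" ;;
        ubuntu) echo "deb" ;;
        both)   echo "rpm deb" ;;
        *)      echo "unsupported -p value: $1" >&2; return 1 ;;
    esac
}

pkg_formats centos
pkg_formats both
```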
Installation instructions
-------------------------
diff --git a/docs/installationprocedure/kvm4nfv-cicd.release.notes.rst b/docs/installationprocedure/kvm4nfv-cicd.release.notes.rst
index a54fe0b11..5908ff150 100644
--- a/docs/installationprocedure/kvm4nfv-cicd.release.notes.rst
+++ b/docs/installationprocedure/kvm4nfv-cicd.release.notes.rst
@@ -10,8 +10,7 @@ Release Note for KVM4NFV CICD
Abstract
========
-This document contains the release notes for the Colorado release of
-OPNFV when using KVM4NFV CICD process.
+This document contains the release notes for the Danube release of OPNFV when using the KVM4NFV CICD process.
Introduction
============
@@ -68,12 +67,12 @@ Module version change
Document version change
~~~~~~~~~~~~~~~~~~~~~~~
The following documents are added-
-- configurationguide
-- instalationprocedure
-- userguide
-- overview
-- glossary
-- releasenotes
+ - configurationguide
+ - installationprocedure
+ - userguide
+ - overview
+ - glossary
+ - releasenotes
Reason for new version
----------------------
@@ -88,7 +87,16 @@ Feature additions
| JIRA: | NFV Hypervisors-KVMKVMFORNFV-34 |
| | |
+--------------------------------------+--------------------------------------+
-| JIRA: | NFV Hypervisors-KVMKVMFORNFV-34 |
+| JIRA: | NFV Hypervisors-KVMKVMFORNFV-57 |
+| | |
++--------------------------------------+--------------------------------------+
+| JIRA: | NFV Hypervisors-KVMKVMFORNFV-58 |
+| | |
++--------------------------------------+--------------------------------------+
+| JIRA: | NFV Hypervisors-KVMKVMFORNFV-59 |
+| | |
++--------------------------------------+--------------------------------------+
+| JIRA: | NFV Hypervisors-KVMKVMFORNFV-60 |
| | |
+--------------------------------------+--------------------------------------+
@@ -134,5 +142,5 @@ See JIRA: <link>
References
==========
-For more information on the OPNFV Brahmaputra release, please visit
-http://www.opnfv.org/brahmaputra
+For more information on the OPNFV Danube release, please visit
+http://www.opnfv.org/danube
diff --git a/docs/overview/kvmfornfv_overview.rst b/docs/overview/kvmfornfv_overview.rst
index 87d401bf0..b07d38dd0 100644
--- a/docs/overview/kvmfornfv_overview.rst
+++ b/docs/overview/kvmfornfv_overview.rst
@@ -8,18 +8,20 @@ KMV4MFV CICD Project Overview
The detailed understanding of this project is organized into different sections-
-* userguide - This provides the required technical assistance to the user, in
+* **userguide** - This provides the required technical assistance to the user in
using the KVM4NFV CICD process.
-* installationprocedure- This will give the user instructions on how to deploy
+* **installationprocedure** - This will give the user instructions on how to deploy
available KVM4NFV CICD build scenario.
-* configurationguide- This provides guidance for configuring KVM4NFV
+* **configurationguide** - This provides guidance for configuring KVM4NFV
environment, even with the use of specific installer tools for deploying some
- components, available in the Colorado release of OPNFV.
-* requirements- This includes the introduction of KVM4NFV CICD project,
+ components, available in the Danube release of OPNFV.
+* **requirements** - This includes the introduction of KVM4NFV CICD project,
specifications of how the project should work, and constraints placed upon
its execution.
-* design- This includes the parameters or design considerations taken into
+* **design** - This includes the parameters or design considerations taken into
account for achieving minimal interrupt latency for the data VNFs.
-* releasenotes- This describes a brief summary of recent changes, enhancements
+* **scenarios** - This includes the scenarios that are currently implemented in the
+  kvmfornfv project, the features of each scenario and a general guide on how to deploy them.
+* **releasenotes** - This describes a brief summary of recent changes, enhancements
and bug fixes in the KVM4NFV project.
-* glossary- It includes the definition of terms, used in the KVM4NFV project
+* **glossary** - It includes the definitions of terms used in the KVM4NFV project.
diff --git a/docs/releasenotes/index.rst b/docs/releasenotes/index.rst
index 7f3e54d2c..4460b9a01 100644
--- a/docs/releasenotes/index.rst
+++ b/docs/releasenotes/index.rst
@@ -1,9 +1,9 @@
.. This work is licensed under a Creative Commons Attribution 4.0 International License.
.. http://creativecommons.org/licenses/by/4.0
-============================================
-KVMFORNFV Release Notes for Colorado Release
-============================================
+===========================================
+KVMFORNFV Release Notes for Danube Release
+===========================================
.. toctree::
:maxdepth: 2
diff --git a/docs/releasenotes/release-notes.rst b/docs/releasenotes/release-notes.rst
index 6da29510d..f49cd0804 100644
--- a/docs/releasenotes/release-notes.rst
+++ b/docs/releasenotes/release-notes.rst
@@ -1,31 +1,32 @@
.. This work is licensed under a Creative Commons Attribution 4.0 International License.
.. http://creativecommons.org/licenses/by/4.0
-.. (c) <optionally add copywriters name>
-
.. _Kvmfornfv: https://wiki.opnfv.org/display/kvm/
+=============
+Release Notes
+=============
Abstract
-========
+---------
-This document provides the release notes for Colorado 1.0 release of KVMFORNFV.
+This document provides the release notes for the Danube 1.0 release of KVMFORNFV.
**Contents**
-1 Version History
+ 1 Version History
-2 Important notes
+ 2 Important notes
-3 Summary
+ 3 Summary
-4 Delivery Data
+ 4 Delivery Data
-5 References
+ 5 References
-1 Version history
-===================
+1. Version history
+--------------------
+--------------------+--------------------+--------------------+----------------------+
| **Date** | **Ver.** | **Author** | **Comment** |
@@ -34,16 +35,19 @@ This document provides the release notes for Colorado 1.0 release of KVMFORNFV.
|2016-08-22 | 0.1.0 | | Colorado 1.0 release |
| | | | |
+--------------------+--------------------+--------------------+----------------------+
+|2017-03-27 | 0.1.0 | | Danube 1.0 release |
+| | | | |
++--------------------+--------------------+--------------------+----------------------+
-2 Important notes
-===================
+2. Important notes
+--------------------
The KVMFORNFV project is currently supported on the Fuel installer.
-3 Summary
-===========
+3. Summary
+------------
-This Colorado 1.0 release provides *KVMFORNFV* as a framework to enhance the
+This Danube 1.0 release provides *KVMFORNFV* as a framework to enhance the
KVM Hypervisor for NFV and OPNFV scenario testing, automated in the OPNFV
CI pipeline, including:
@@ -53,8 +57,7 @@ CI pipeline, including:
* Cyclictests execution to check the latency
-* “os-sdn-kvm-ha” Scenario testing for high availability configuration using
-Fuel installer
+* “os-nosdn-kvm-ha”, “os-nosdn-kvm_nfv_ovs_dpdk-ha”, “os-nosdn-kvm_nfv_ovs_dpdk-noha”, “os-nosdn-kvm_nfv_ovs_dpdk_bar-ha” and “os-nosdn-kvm_nfv_ovs_dpdk_bar-noha” scenario testing for high availability and no-high availability configurations using the Fuel installer
* Documentation created
@@ -69,8 +72,8 @@ Fuel installer
The *KVMFORNFV framework* is developed in the OPNFV community, by the
KVMFORNFV_ team.
-4 Release Data
-================
+4. Release Data
+-----------------
+--------------------------------------+--------------------------------------+
| **Project** | NFV Hypervisors-KVM |
@@ -79,13 +82,13 @@ KVMFORNFV_ team.
| **Repo/commit-ID** | kvmfornfv |
| | |
+--------------------------------------+--------------------------------------+
-| **Release designation** | Colorado |
+| **Release designation** | Danube |
| | |
+--------------------------------------+--------------------------------------+
-| **Release date** | 2016-09-22 |
+| **Release date** | 2017-03-27 |
| | |
+--------------------------------------+--------------------------------------+
-| **Purpose of the delivery** | OPNFV Colorado 1.0 Releases |
+| **Purpose of the delivery** | OPNFV Danube 1.0 Releases |
| | |
+--------------------------------------+--------------------------------------+
@@ -95,16 +98,16 @@ KVMFORNFV_ team.
4.1.1 Module version changes
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-This is the Colorado 1.0 main release. It is based on following upstream
+This is the Danube 1.0 main release. It is based on the following upstream
versions:
* RT Kernel 4.4.6-rt14
* QEMU 2.6
-* Fuel plugin based on Fuel 9.0
+* Fuel plugin based on Fuel 10.0
-This is the first tracked release of KVMFORNFV
+This is the second tracked release of KVMFORNFV
4.1.2 Document version changes
@@ -117,6 +120,35 @@ This is the initial version of the KVMFORNFV framework in OPNFV.
4.2.1 Feature additions
~~~~~~~~~~~~~~~~~~~~~~~
++--------------------------------------+--------------------------------------+
+| **JIRA REFERENCE** | **SLOGAN** |
+| | |
++--------------------------------------+--------------------------------------+
+| JIRA: | NFV Hypervisors-KVMKVMFORNFV-57 |
+| | |
++--------------------------------------+--------------------------------------+
+| JIRA: | NFV Hypervisors-KVMKVMFORNFV-58 |
+| | |
++--------------------------------------+--------------------------------------+
+| JIRA: | NFV Hypervisors-KVMKVMFORNFV-59 |
+| | |
++--------------------------------------+--------------------------------------+
+| JIRA: | NFV Hypervisors-KVMKVMFORNFV-61 |
+| | |
++--------------------------------------+--------------------------------------+
+| JIRA: | NFV Hypervisors-KVMKVMFORNFV-62 |
+| | |
++--------------------------------------+--------------------------------------+
+| JIRA: | NFV Hypervisors-KVMKVMFORNFV-63 |
+| | |
++--------------------------------------+--------------------------------------+
+| JIRA: | NFV Hypervisors-KVMKVMFORNFV-64 |
+| | |
++--------------------------------------+--------------------------------------+
+| JIRA: | NFV Hypervisors-KVMKVMFORNFV-65 |
+| | |
++--------------------------------------+--------------------------------------+
+
4.2.2 Bug corrections
~~~~~~~~~~~~~~~~~~~~~
@@ -127,12 +159,12 @@ Initial Release
4.3.1 Software deliverables
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-Colorado 1.0 release of the KVMFORNFV RPM and debian for Fuel.
+Danube 1.0 release of the KVMFORNFV RPM and Debian packages for Fuel.
4.3.2 Documentation deliverables
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-The below documents are delivered for Colorado KVMFORNFV Release:
+The following documents are delivered for the Danube KVMFORNFV release:
* User Guide
@@ -146,10 +178,11 @@ The below documents are delivered for Colorado KVMFORNFV Release:
* Glossary
+ * Scenarios
-5 References
-=============
+5. References
+--------------
-For more information on the KVMFORNFV Colorado release, please see:
+For more information on the KVMFORNFV Danube release, please see:
https://wiki.opnfv.org/display/kvm/
diff --git a/docs/requirements/kvmfornfv_requirements.rst b/docs/requirements/kvmfornfv_requirements.rst
index 048838907..6aa00ba6c 100644
--- a/docs/requirements/kvmfornfv_requirements.rst
+++ b/docs/requirements/kvmfornfv_requirements.rst
@@ -2,22 +2,25 @@
.. http://creativecommons.org/licenses/by/4.0
.. (c) OPNFV, Intel Corporation, AT&T and others.
-============
+======================
+kvmfornfv Requirements
+======================
+
Introduction
-============
+------------
The NFV hypervisors provide crucial functionality in the NFV
Infrastructure(NFVI).The existing hypervisors, however, are not necessarily
designed or targeted to meet the requirements for the NFVI.
This document specifies the list of requirements that need to be met as part
-of this "NFV Hypervisors-KVM" project in Colorado release.
+of this "NFV Hypervisors-KVM" project in Danube release.
As part of this project we need to make collaborative efforts towards enabling
the NFV features.
-=================
+
Scope and Purpose
-=================
+-----------------
The main purpose of this project is to enhance the KVM hypervisor for NFV, by
looking at the following areas initially:
@@ -32,9 +35,8 @@ The output of this project would be list of the performance goals,comprehensive
instructions for the system configurations,tools to measure Performance and
interrupt latency.
-===========================
Methods and Instrumentation
-===========================
+---------------------------
The above areas would require software development and/or specific hardware
features, and some need just configurations information for the system
@@ -56,20 +58,18 @@ scenario are:
Technology(CAT) enabling can be tuned to improve the NFV
performance/latency.
-=====================
Features to be tested
-=====================
+---------------------
The tests that need to be conducted to make sure that latency is addressed are:
-1. Timer test
-2. Device Interrupt Test
-3. Packet forwarding (DPDK OVS)
-4. Packet Forwarding (SR-IOV)
-5. Bare-metal Packet Forwarding
+ 1. Timer test
+ 2. Device Interrupt Test
+ 3. Packet forwarding (DPDK OVS)
+ 4. Packet Forwarding (SR-IOV)
+ 5. Bare-metal Packet Forwarding
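The timer test above is commonly run with cyclictest from the Linux rt-tests suite; a hedged sketch of a typical invocation follows (this flag set is a common community choice, not one mandated by the project, so treat it as an assumption):

```shell
# Sketch only: a typical cyclictest run for wakeup-latency measurement.
#   -t    : start one measurement thread per CPU
#   -p 99 : run the threads at SCHED_FIFO priority 99
#   -n    : use clock_nanosleep() for the interval timer
#   -m    : mlockall() memory to avoid paging-induced latency spikes
#   -D 60 : run for 60 seconds
CYCLICTEST="cyclictest -t -p 99 -n -m -D 60"
echo "$CYCLICTEST"   # run as root on the system under test
```

This matches the "Cyclictests execution to check the latency" item listed in the release notes.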
-============
Dependencies
-============
+------------
1. OPNFV Project: “Characterize vSwitch Performance for Telco NFV Use Cases”
(VSPERF) for performance evaluation of ivshmem vs. vhost-user.
@@ -82,8 +82,8 @@ Dependencies
5. In terms of HW dependencies, the aim is to use standard IA Server hardware
for this project, as provided by OPNFV Pharos.
-=========
+
Reference
-=========
+---------
https://wiki.opnfv.org/display/kvm/
diff --git a/docs/scenarios/abstract.rst b/docs/scenarios/abstract.rst
new file mode 100644
index 000000000..362fb6c2b
--- /dev/null
+++ b/docs/scenarios/abstract.rst
@@ -0,0 +1,44 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International License.
+.. http://creativecommons.org/licenses/by/4.0
+
+*****************
+Scenario Abstract
+*****************
+This chapter includes a detailed explanation of the various scenario files deployed as part
+of the kvmfornfv D-Release.
+
+Release Features
+----------------
+
++------------------------------------------+------------------+-----------------+
+| **Scenario Name** | **Colorado** | **Danube** |
+| | | |
++------------------------------------------+------------------+-----------------+
+| - os-nosdn-kvm-ha | ``Y`` | ``Y`` |
+| | | |
+| - os-nosdn-kvm_nfv_ovs_dpdk-noha | | ``Y`` |
+| | | |
+| - os-nosdn-kvm_nfv_ovs_dpdk-ha | | ``Y`` |
+| | | |
+| - os-nosdn-kvm_nfv_ovs_dpdk_bar-noha | | ``Y`` |
+| | | |
+| - os-nosdn-kvm_nfv_ovs_dpdk_bar-ha | | ``Y`` |
+| | | |
++------------------------------------------+------------------+-----------------+
+
+D-Release Scenarios Overview
+-------------------------------
+
++--------------------------------------+-----------------------+---------------------+--------------------+-------------+------------+
+| **Scenario Name** | **No of Controllers** | **No of Computes** | **Plugin Names**  | **DPDK**  | **OVS**   |
+| | | | | | |
++--------------------------------------+-----------------------+---------------------+--------------------+-------------+------------+
+| - os-nosdn-kvm_nfv_ovs_dpdk-noha | ``1`` | ``3`` | ``KVM`` | ``Y`` | ``Y`` |
+| | | | | | |
+| - os-nosdn-kvm_nfv_ovs_dpdk-ha | ``3`` | ``2`` | ``KVM`` | ``Y`` | ``Y`` |
+| | | | | | |
+| - os-nosdn-kvm_nfv_ovs_dpdk_bar-noha | ``1`` | ``2`` | ``KVM & BAR`` | ``Y`` | ``Y`` |
+| | | | | | |
+| - os-nosdn-kvm_nfv_ovs_dpdk_bar-ha | ``3`` | ``3`` | ``KVM & BAR`` | ``Y`` | ``Y`` |
+| | | | | | |
++--------------------------------------+-----------------------+---------------------+--------------------+-------------+------------+
diff --git a/docs/scenarios/index.rst b/docs/scenarios/index.rst
index 5f41fd414..6c3ed1dea 100644
--- a/docs/scenarios/index.rst
+++ b/docs/scenarios/index.rst
@@ -1,12 +1,58 @@
.. This work is licensed under a Creative Commons Attribution 4.0 International License.
.. http://creativecommons.org/licenses/by/4.0
-****************************************
-os-nosdn-kvm-ha Overview and Description
-****************************************
+*********************************
+Scenario Overview and Description
+*********************************
.. toctree::
+ :caption: Scenario Overview and Description
:numbered:
:maxdepth: 4
+ ./abstract.rst
./kvmfornfv.scenarios.description.rst
+
+*******************************************************
+os-nosdn-kvm_nfv_ovs_dpdk-noha Overview and Description
+*******************************************************
+
+.. toctree::
+ :caption: os-nosdn-kvm_nfv_ovs_dpdk-noha
+ :numbered:
+ :maxdepth: 3
+
+ ./os-nosdn-kvm_nfv_ovs_dpdk-noha/os-nosdn-kvm_nfv_ovs_dpdk-noha.description.rst
+
+*****************************************************
+os-nosdn-kvm_nfv_ovs_dpdk-ha Overview and Description
+*****************************************************
+
+.. toctree::
+ :caption: os-nosdn-kvm_nfv_ovs_dpdk-ha
+ :numbered:
+ :maxdepth: 3
+
+ ./os-nosdn-kvm_nfv_ovs_dpdk-ha/os-nosdn-kvm_nfv_ovs_dpdk-ha.description.rst
+
+***********************************************************
+os-nosdn-kvm_nfv_ovs_dpdk_bar-noha Overview and Description
+***********************************************************
+
+.. toctree::
+ :caption: os-nosdn-kvm_nfv_ovs_dpdk_bar-noha
+ :numbered:
+ :maxdepth: 3
+
+ ./os-nosdn-kvm_nfv_ovs_dpdk_bar-noha/os-nosdn-kvm_nfv_ovs_dpdk_bar-noha.description.rst
+
+*********************************************************
+os-nosdn-kvm_nfv_ovs_dpdk_bar-ha Overview and Description
+*********************************************************
+
+.. toctree::
+ :caption: os-nosdn-kvm_nfv_ovs_dpdk_bar-ha
+ :numbered:
+ :maxdepth: 3
+
+ ./os-nosdn-kvm_nfv_ovs_dpdk_bar-ha/os-nosdn-kvm_nfv_ovs_dpdk_bar-ha.description.rst
diff --git a/docs/scenarios/kvmfornfv.scenarios.description.rst b/docs/scenarios/kvmfornfv.scenarios.description.rst
index 459852d53..29488bbfd 100644
--- a/docs/scenarios/kvmfornfv.scenarios.description.rst
+++ b/docs/scenarios/kvmfornfv.scenarios.description.rst
@@ -1,19 +1,20 @@
.. This work is licensed under a Creative Commons Attribution 4.0 International License.
.. http://creativecommons.org/licenses/by/4.0
+
========================
KVM4NFV SCENARIO-TESTING
========================
ABSTRACT
-========
+--------
-This document describes the procedure to deploy/test KVM4NFV scenario in a nested virtualization
-environment in a single system. This has been verified with os-nosdn-kvm-ha, os-nosdn-kvm-noha,
-os-nosdn-kvm_ovs_dpdk-ha, os-nosdn-kvm_ovs_dpdk-noha and os-nosdn-kvm_ovs_dpdk_bar-ha test scenario.
+This document describes the procedure to deploy/test KVM4NFV scenarios in a nested virtualization
+environment. This has been verified with the os-nosdn-kvm-ha, os-nosdn-kvm-noha, os-nosdn-kvm_ovs_dpdk-ha,
+os-nosdn-kvm_ovs_dpdk-noha and os-nosdn-kvm_ovs_dpdk_bar-ha test scenarios.
Version Features
-================
+----------------
+-----------------------------+---------------------------------------------+
| | |
@@ -24,26 +25,40 @@ Version Features
| Colorado | the Colorado release of KVMFORNFV |
| | |
+-----------------------------+---------------------------------------------+
-| | - High Availability deployment and |
-| | configuration of KVMFORNFV software suite |
-| Danube | - Multi-node setup with 3 controllers and |
-| | 2 computes nodes are deployed |
-| | - Scenarios os-nosdn-kvm_ovs_dpdk-ha and |
-| | os-nosdn-kvm_ovs_dpdk_bar-ha are supported|
-| | |
+| | - High Availability/No-High Availability |
+| | deployment configuration of KVMFORNFV |
+| | software suite |
+| Danube | - Multi-node setup with 3 controller and |
+| | 2 compute nodes are deployed for HA |
+| | - Multi-node setup with 1 controller and |
+| | 3 compute nodes are deployed for NO-HA |
+| | - Scenarios os-nosdn-kvm_ovs_dpdk-ha, |
+| | os-nosdn-kvm_ovs_dpdk_bar-ha, |
+| | os-nosdn-kvm_ovs_dpdk-noha, |
+| | os-nosdn-kvm_ovs_dpdk_bar-noha |
+| | are supported |
+-----------------------------+---------------------------------------------+
INTRODUCTION
-============
-The purpose of os-nosdn-kvm_ovs_dpdk-ha and os-nosdn-kvm_ovs_dpdk_bar-ha scenario testing is to
-test the High Availability deployment and configuration of OPNFV software suite with OpenStack and
-without SDN software. This OPNFV software suite includes OPNFV KVMFORNFV latest software packages
+------------
+The purpose of the os-nosdn-kvm_ovs_dpdk-ha, os-nosdn-kvm_ovs_dpdk_bar-ha,
+os-nosdn-kvm_ovs_dpdk-noha and os-nosdn-kvm_ovs_dpdk_bar-noha scenario testing is to
+test the High Availability/No-High Availability deployment and configuration of the
+OPNFV software suite with OpenStack and without SDN software.
+
+This OPNFV software suite includes OPNFV KVMFORNFV latest software packages
for Linux Kernel and QEMU patches for achieving low latency and also OPNFV Barometer for traffic,
-performance and platform monitoring. High Availability feature is achieved by deploying OpenStack
+performance and platform monitoring.
+
+The High Availability feature is achieved by deploying an OpenStack
multi-node setup with 1 Fuel-Master,3 controllers and 2 computes nodes.
-KVMFORNFV packages will be installed on compute nodes as part of deployment. The scenario testcase deploys a multi-node setup by using OPNFV Fuel deployer.
+The No-High Availability feature is achieved by deploying an OpenStack
+multi-node setup with 1 Fuel-Master, 1 controller and 3 compute nodes.
+
+KVMFORNFV packages will be installed on compute nodes as part of deployment.
+The scenario testcase deploys a multi-node setup by using OPNFV Fuel deployer.
1. System pre-requisites
------------------------
@@ -86,9 +101,11 @@ If Nested virtualization is disabled, enable it by,
2. Environment Setup
--------------------
-**2.1 Configure apt.conf in /etc/apt**
+**2.1 Configuring Proxy**
+~~~~~~~~~~~~~~~~~~~~~~~~~~
-Create an apt.conf file in /etc/apt if it doesn't exist. Used to set proxy for apt-get if workin behind a proxy server.
+For **Ubuntu**:
+Create an apt.conf file in /etc/apt if it doesn't exist. It is used to set the proxy for apt-get when working behind a proxy server.
.. code:: bash
@@ -97,7 +114,15 @@ Create an apt.conf file in /etc/apt if it doesn't exist. Used to set proxy for a
Acquire::ftp::proxy "ftp://<username>:<password>@<proxy>:<port>/";
Acquire::socks::proxy "socks://<username>:<password>@<proxy>:<port>/";
+For **CentOS**:
+Edit /etc/yum.conf to work behind a proxy server by adding the line below.
+
+.. code:: bash
+
+ $ echo "proxy=http://<username>:<password>@<proxy>:<port>/" >> /etc/yum.conf
+
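Besides apt and yum, many of the tools used later in this guide (git, wget, curl) honor the standard proxy environment variables. A minimal sketch, with placeholder values standing in for the `<username>:<password>@<proxy>:<port>` fields above:

```shell
# Sketch: export standard proxy variables for the current shell session.
# The proxy URL below is a placeholder, not a real server.
export http_proxy="http://user:pass@proxy.example.com:8080/"
export https_proxy="$http_proxy"
export no_proxy="localhost,127.0.0.1"
```

To make the setting persistent, the same lines can be added to the shell profile of the deploying user.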
**2.2 Network Time Protocol (NTP) setup and configuration**
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Install ntp by:
@@ -110,10 +135,12 @@ Insert the following two lines after “server ntp.ubuntu.com” line and befor
.. _link: /usr/share/doc/ntp-doc/html/accopt.html
-server 127.127.1.0
-fudge 127.127.1.0 stratum 10
+.. code:: bash
+
+ server 127.127.1.0
+ fudge 127.127.1.0 stratum 10
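The two-line edit above can also be scripted; a hedged sketch using GNU sed (the file path and the stock `server ntp.ubuntu.com` entry are assumptions based on a default Ubuntu ntp.conf, demonstrated here on a temporary copy):

```shell
# Sketch: insert the local-clock fallback lines after "server ntp.ubuntu.com".
CONF=$(mktemp)
printf 'driftfile /var/lib/ntp/ntp.drift\nserver ntp.ubuntu.com\n' > "$CONF"

# Append the two lines, in order, after the matching server entry.
sed -i -e '/^server ntp\.ubuntu\.com/a server 127.127.1.0' \
       -e '/^server ntp\.ubuntu\.com/a fudge 127.127.1.0 stratum 10' "$CONF"

grep -c '127\.127\.1\.0' "$CONF"   # prints 2: both inserted lines are present
```

On a real node the target file would be /etc/ntp.conf, followed by the NTP service restart described next.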
-Restart the ntp server
+Restart the NTP server to apply the changes.
.. code:: bash
@@ -128,19 +155,24 @@ There are three ways of performing scenario testing,
- 3.3 Jenkins Project
3.1 Fuel
-~~~~~~~~~
+~~~~~~~~
**3.1.1 Clone the fuel repo :**
.. code:: bash
- git clone https://gerrit.opnfv.org/gerrit/fuel.git
+ $ git clone https://gerrit.opnfv.org/gerrit/fuel.git
**3.1.2 Checkout to the specific version of the branch to deploy by:**
+The default branch is master; to use a stable release version, check out the corresponding stable branch:
+
.. code:: bash
+   # To check the current branch
+   $ git branch
- git checkout stable/Colorado
+   # To check out a specific branch
+   $ git checkout stable/Danube
**3.1.3 Building the Fuel iso :**
@@ -149,28 +181,34 @@ There are three ways of performing scenario testing,
$ cd ~/fuel/ci/
$ ./build.sh -h
-Provides the necessary options that are required to build an iso. Creates a ``customized iso`` as per the deployment needs.
+Provide the necessary options that are required to build an iso.
+Create a ``customized iso`` as per the deployment needs.
.. code:: bash
$ cd ~/fuel/build/
$ make
- (OR) Other way is to download the latest stable fuel iso from `here`_.
+(OR) Alternatively, download the latest stable Fuel ISO from `here`_.
-.. _here: http://artifacts.opnfv.org/fuel/colorado/opnfv-colorado.3.0.iso
+.. _here: http://artifacts.opnfv.org/fuel.html
+
+.. code:: bash
+
+ http://artifacts.opnfv.org/fuel.html
**3.1.4 Creating a new deployment scenario**
``(i). Naming the scenario file:``
-Include the new deployment scenario yaml file in deploy/scenario/. The file name should adhere to the following format :
+Include the new deployment scenario yaml file in ~/fuel/deploy/scenario/. The file name should adhere to the following format:
.. code:: bash
<ha | no-ha>_<SDN Controller>_<feature-1>_..._<feature-n>.yaml
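As a quick illustration, the Danube HA scenario file name used later in this guide fits that pattern; the check below uses a regular expression that is my own approximation of the naming rule, not something the deployer enforces:

```shell
# Sketch: sanity-check a scenario file name against
# <ha | no-ha>_<SDN Controller>_<feature-1>_..._<feature-n>.yaml
name="ha_nfv-kvm_nfv-ovs-dpdk_heat_ceilometer_scenario.yaml"

if printf '%s\n' "$name" | grep -Eq '^(ha|no-ha)(_[A-Za-z0-9-]+)+\.yaml$'; then
    echo "valid scenario file name"
else
    echo "invalid scenario file name"
fi
```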
-``(ii). The deployment configuration file should contain configuration metadata as stated below:``
+``(ii). Metadata``
+The deployment configuration file should contain configuration metadata as stated below:
.. code:: bash
@@ -179,7 +217,8 @@ Include the new deployment scenario yaml file in deploy/scenario/. The file name
version:
created:
-``(iii). To include fuel plugins in the deployment configuration file, use the “stack-extentions” key:``
+``(iii). “stack-extentions” Module``
+ To include fuel plugins in the deployment configuration file, use the “stack-extentions” key:
.. code:: bash
@@ -191,13 +230,13 @@ Include the new deployment scenario yaml file in deploy/scenario/. The file name
module-config-override:
#module-config overrides
-
+**Note:**
The “module-config-name” and “module-config-version” should be same as the name of plugin configuration file.
-
The “module-config-override” is used to configure the plugin by overrriding the corresponding keys in the plugin config yaml file present in ~/fuel/deploy/config/plugins/.
-``(iv). To configure the HA/No-Ha mode, network segmentation types and role to node assignments, use the “dea-override-config” key.``
+``(iv). “dea-override-config” Module``
+To configure the HA/No-HA mode, network segmentation types and role to node assignments, use the “dea-override-config” key.
.. code:: bash
@@ -244,12 +283,12 @@ Under the “dea-override-config” should provide atleast {environment:{mode:'v
and {nodes:1,2,...} and can also enable additional stack features such ceph,heat which overrides
corresponding keys in the dea_base.yaml and dea_pod_override.yaml.
-``(v). In order to configure the pod dha definition, use the “dha-override-config” key.``
-
-The “dha-override-config” key is an optional key present at the ending of the scenario file.
-
-``(vi). The scenario.yaml file is used to map the short names of scenario's to the one or more deployment scenario configuration yaml files.``
+``(v). “dha-override-config” Module``
+In order to configure the pod dha definition, use the “dha-override-config” key.
+This is an optional key present at the end of the scenario file.
+
+``(vi). Mapping to short scenario name``
+The scenario.yaml file is used to map the short names of scenarios to one or more deployment scenario configuration yaml files.
The short scenario names should follow the scheme below:
.. code:: bash
@@ -259,7 +298,7 @@ The short scenario names should follow the scheme below:
[os]: mandatory
possible value: os
-please note that this field is needed in order to select parent jobs to list and do blocking relations between them.
+Please note that this field is needed in order to select parent jobs to list and do blocking relations between them.
.. code:: bash
@@ -272,8 +311,8 @@ please note that this field is needed in order to select parent jobs to list and
[option]: optional
-used for the scenarios those do not fit into naming scheme.
-optional field in the short scenario name should not be included if there is no optional scenario.
+Used for scenarios that do not fit into the naming scheme.
+The optional field should not be included in the short scenario name if there is no optional feature.
.. code:: bash
@@ -303,14 +342,22 @@ Command to deploy the os-nosdn-kvm_ovs_dpdk-ha scenario:
.. code:: bash
$ cd ~/fuel/ci/
- $ sudo ./deploy.sh -f -b file:///tmp/opnfv-fuel/deploy/config -l devel-pipeline -p default -s no-ha_nfv-kvm_nfv-ovsdpdk_heat_ceilometer_scenario.yaml -i file:///tmp/opnfv.iso
+ $ sudo ./deploy.sh -f -b file:///tmp/opnfv-fuel/deploy/config -l devel-pipeline -p default -s ha_nfv-kvm_nfv-ovs-dpdk_heat_ceilometer_scenario.yaml -i file:///tmp/opnfv.iso
where,
- -b is used to specify the configuration directory
+ ``-b`` is used to specify the configuration directory
- -i is used to specify the image downloaded from artifacts.
+ ``-f`` is used to re-deploy on the existing deployment
-Note:
+ ``-i`` is used to specify the image downloaded from artifacts.
+
+ ``-l`` is used to specify the lab name
+
+ ``-p`` is used to specify POD name
+
+ ``-s`` is used to specify the scenario file
+
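Putting the flags together, the full command can be assembled from variables; a sketch using the example values from this guide (the paths and names are illustrations, not universal defaults):

```shell
# Sketch: assemble the deploy.sh invocation from its parts.
BASE="file:///tmp/opnfv-fuel/deploy/config"    # -b: configuration directory
ISO="file:///tmp/opnfv.iso"                    # -i: the built/downloaded ISO
LAB="devel-pipeline"                           # -l: lab name
POD="default"                                  # -p: POD name
SCENARIO="ha_nfv-kvm_nfv-ovs-dpdk_heat_ceilometer_scenario.yaml"   # -s: scenario file

DEPLOY_CMD="sudo ./deploy.sh -f -b $BASE -l $LAB -p $POD -s $SCENARIO -i $ISO"
echo "$DEPLOY_CMD"
```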
+**Note:**
.. code:: bash
@@ -381,7 +428,8 @@ Check out the specific version of specific branch of fuel@opnfv
$ cd ~
$ git clone https://gerrit.opnfv.org/gerrit/fuel.git
$ cd fuel
- $ git checkout stable/Colorado
+   # By default it will be the master branch; in order to deploy from the Colorado/Danube branch, do:
+ $ git checkout stable/Danube
``3.2.3 Creating the scenario``
@@ -390,7 +438,7 @@ Implement the scenario file as described in 3.1.4
``3.2.4 Deploying the scenario``
-You can use the following command to start to deploy/test os-nosdn kvm_ovs_dpdk-noha and os-nosdn-kvm_ovs_dpdk-ha scenario
+You can use the following command to deploy/test the os-nosdn-kvm_ovs_dpdk-(no)ha and os-nosdn-kvm_ovs_dpdk_bar-(no)ha scenarios
.. code:: bash
@@ -409,7 +457,7 @@ For os-nosdn-kvm_ovs_dpdk_bar-ha:
$ ./ci_pipeline.sh -r ~/fuel -i /root/fuel.iso -B -n intel-sc -s os-nosdn-kvm_ovs_dpdk_bar-ha
The “ci_pipeline.sh” first clones the local fuel repo, then deploys the
-os-nosdn-kvm_ovs_dpdk-ha/os-nosdn-kvm_ovs_dpdk-noha scenario from the given ISO, and run Func test
+os-nosdn-kvm_ovs_dpdk-ha/os-nosdn-kvm_ovs_dpdk_bar-ha scenario from the given ISO, and runs Functest
and Yarstick test. The log of the deployment/test (ci.log) can be found in
~/OPNFV-Playground/ci_fuel_opnfv/artifact/master/YYYY-MM-DD—HH.mm, where YYYY-MM-DD—HH.mm is the
date/time you start the “ci_pipeline.sh”.
@@ -424,7 +472,12 @@ Note:
3.3 Jenkins Project
~~~~~~~~~~~~~~~~~~~
-os-nosdn-kvm_ovs_dpdk-ha and os-nosdn-kvm_ovs_dpdk_bar-ha scenario can be executed from the jenkins project :
+The os-nosdn-kvm_ovs_dpdk-(no)ha and os-nosdn-kvm_ovs_dpdk_bar-(no)ha scenarios can be executed from the following Jenkins projects:
+ HA scenarios:
1. "fuel-os-nosdn-kvm_ovs_dpdk-ha-baremetal-daily-master" (os-nosdn-kvm_ovs_dpdk-ha)
2. "fuel-os-nosdn-kvm_ovs_dpdk_bar-ha-baremetal-daily-master" (os-nosdn-kvm_ovs_dpdk_bar-ha)
+
+ NOHA scenarios:
+ 1. "fuel-os-nosdn-kvm_ovs_dpdk-noha-baremetal-daily-master" (os-nosdn-kvm_ovs_dpdk-noha)
+ 2. "fuel-os-nosdn-kvm_ovs_dpdk_bar-noha-baremetal-daily-master" (os-nosdn-kvm_ovs_dpdk_bar-noha)
diff --git a/docs/scenarios/os-nosdn-kvm-ha/os-nosdn-kvm-ha.description.rst b/docs/scenarios/os-nosdn-kvm-ha/os-nosdn-kvm-ha.description.rst
index 9d8285831..f64f26ffc 100644
--- a/docs/scenarios/os-nosdn-kvm-ha/os-nosdn-kvm-ha.description.rst
+++ b/docs/scenarios/os-nosdn-kvm-ha/os-nosdn-kvm-ha.description.rst
@@ -2,9 +2,12 @@
.. http://creativecommons.org/licenses/by/4.0
+============================
+os-nosdn-kvm-ha Description
+============================
Introduction
-============
+-------------
.. In this section explain the purpose of the scenario and the
types of capabilities provided
@@ -21,7 +24,7 @@ This scenario testcase deployment is happening on multi-node by using
OPNFV Fuel deployer.
Scenario Components and Composition
-===================================
+-----------------------------------
.. In this section describe the unique components that make up the scenario,
.. what each component provides and why it has been included in order
.. to communicate to the user the capabilities available in this scenario.
@@ -31,18 +34,18 @@ configurations provided in ha_nfv-kvm_heat_ceilometer_scenario.yaml.
This yaml file contains following configurations and is passed as an
argument to deploy.py script
-* scenario.yaml:This configuration file defines translation between a
+* ``scenario.yaml:`` This configuration file defines translation between a
short deployment scenario name(os-nosdn-kvm-ha) and an actual deployment
scenario configuration file(ha_nfv-kvm_heat_ceilometer_scenario.yaml)
-* deployment-scenario-metadata:Contains the configuration metadata like
+* ``deployment-scenario-metadata:`` Contains the configuration metadata like
title,version,created,comment.
-* stack-extensions:Stack extentions are opnfv added value features in form
+* ``stack-extensions:`` Stack extentions are opnfv added value features in form
of a fuel-plugin.Plugins listed in stack extensions are enabled and
configured.
-* dea-override-config: Used to configure the HA mode,network segmentation
+* ``dea-override-config:`` Used to configure the HA mode,network segmentation
types and role to node assignments.These configurations overrides
corresponding keys in the dea_base.yaml and dea_pod_override.yaml.
These keys are used to deploy multiple nodes(3 controllers,2 computes)
@@ -72,7 +75,7 @@ argument to deploy.py script
* **Node 5**: This node has compute role.
-* dha-override-config:Provides information about the VM definition and
+* ``dha-override-config:`` Provides information about the VM definition and
Network config for virtual deployment.These configurations overrides
the pod dha definition and points to the controller,compute and
fuel definition files.
@@ -81,7 +84,7 @@ argument to deploy.py script
up and running
Scenario Usage Overview
-=======================
+-----------------------
.. Provide a brief overview on how to use the scenario and the features available to the
.. user. This should be an "introduction" to the userguide document, and explicitly link to it,
.. where the specifics of the features are covered including examples and API's
@@ -93,6 +96,8 @@ Scenario Usage Overview
-Example:
+.. code:: bash
+
sudo python deploy.py -iso ~/ISO/opnfv.iso -dea ~/CONF/hardware/dea.yaml -dha ~/CONF/hardware/dha.yaml -s /mnt/images -b pxebr -log ~/Deployment-888.log.tar.gz
* Install Fuel Master and deploy OPNFV Cloud from scratch on Virtual
@@ -100,6 +105,8 @@ Scenario Usage Overview
-Example:
+.. code:: bash
+
sudo python deploy.py -iso ~/ISO/opnfv.iso -dea ~/CONF/virtual/dea.yaml -dha ~/CONF/virtual/dha.yaml -s /mnt/images -log ~/Deployment-888.log.tar.gz
* os-nosdn-kvm-ha scenario can be executed from the jenkins project
@@ -112,7 +119,7 @@ Scenario Usage Overview
* Observed that scenario is not running any testcase on top of deployment.
Known Limitations, Issues and Workarounds
-=========================================
+-----------------------------------------
.. Explain any known limitations here.
* Test scenario os-nosdn-kvm-ha result is not stable. After node reboot
@@ -120,7 +127,7 @@ Known Limitations, Issues and Workarounds
responding with in the given time.
References
-==========
+----------
For more information on the OPNFV Danube release, please visit
http://www.opnfv.org/danube
diff --git a/docs/scenarios/os-nosdn-kvm_nfv_ovs_dpdk-ha/index.rst b/docs/scenarios/os-nosdn-kvm_nfv_ovs_dpdk-ha/index.rst
index 5582f46c7..28f588e54 100755
--- a/docs/scenarios/os-nosdn-kvm_nfv_ovs_dpdk-ha/index.rst
+++ b/docs/scenarios/os-nosdn-kvm_nfv_ovs_dpdk-ha/index.rst
@@ -1,12 +1,12 @@
.. This work is licensed under a Creative Commons Attribution 4.0 International License.
.. http://creativecommons.org/licenses/by/4.0
-****************************************
-os-nosdn-kvm-ha Overview and Description
-****************************************
+*****************************************************
+os-nosdn-kvm_nfv_ovs_dpdk-ha Overview and Description
+*****************************************************
.. toctree::
:numbered:
:maxdepth: 3
- os-nosdn-kvm_nfv_ovs_dpdk-ha.description.rst
+ ./os-nosdn-kvm_nfv_ovs_dpdk-ha.description.rst
diff --git a/docs/scenarios/os-nosdn-kvm_nfv_ovs_dpdk-ha/os-nosdn-kvm_nfv_ovs_dpdk-ha.description.rst b/docs/scenarios/os-nosdn-kvm_nfv_ovs_dpdk-ha/os-nosdn-kvm_nfv_ovs_dpdk-ha.description.rst
index 40b9748af..a96130cad 100644
--- a/docs/scenarios/os-nosdn-kvm_nfv_ovs_dpdk-ha/os-nosdn-kvm_nfv_ovs_dpdk-ha.description.rst
+++ b/docs/scenarios/os-nosdn-kvm_nfv_ovs_dpdk-ha/os-nosdn-kvm_nfv_ovs_dpdk-ha.description.rst
@@ -2,9 +2,12 @@
.. http://creativecommons.org/licenses/by/4.0
+=========================================
+os-nosdn-kvm_nfv_ovs_dpdk-ha Description
+=========================================
Introduction
-============
+------------
.. In this section explain the purpose of the scenario and the
types of capabilities provided
@@ -16,10 +19,11 @@ includes OPNFV KVM4NFV latest software packages for Linux Kernel and
QEMU patches for achieving low latency. High Availability feature is achieved
by deploying OpenStack multi-node setup with 3 controllers and 2 compute nodes.
-KVM4NFV packages will be installed on compute nodes as part of deployment. This scenario testcase deployment is happening on multi-node by using OPNFV Fuel deployer.
+KVM4NFV packages will be installed on compute nodes as part of deployment.
+This scenario testcase is deployed on multiple nodes using the OPNFV Fuel deployer.
Scenario Components and Composition
-===================================
+-----------------------------------
.. In this section describe the unique components that make up the scenario,
.. what each component provides and why it has been included in order
.. to communicate to the user the capabilities available in this scenario.
@@ -29,7 +33,7 @@ configurations provided in ha_nfv-kvm_nfv-ovs-dpdk_heat_ceilometer_scenario.yaml
This yaml file contains following configurations and is passed as an
argument to deploy.py script
-* scenario.yaml:This configuration file defines translation between a
+* ``scenario.yaml:`` This configuration file defines translation between a
short deployment scenario name(os-nosdn-kvm_ovs_dpdk-ha) and an actual deployment
scenario configuration file(ha_nfv-kvm_nfv-ovs-dpdk_heat_ceilometer_scenario.yaml)
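For illustration, such a translation entry might look like the sketch below (the exact key
layout is an assumption for illustration only, not copied from the fuel repository):

.. code:: bash

   # Illustrative sketch only -- key names are assumed
   os-nosdn-kvm_ovs_dpdk-ha:
      configfile: ha_nfv-kvm_nfv-ovs-dpdk_heat_ceilometer_scenario.yaml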
@@ -63,29 +67,34 @@ argument to deploy.py script
These keys are used to deploy multiple nodes(``3 controllers,2 computes``)
as mentioned below.
- * **Node 1**: This node has MongoDB and Controller roles. The controller
- node runs the Identity service, Image Service, management portions of
- Compute and Networking, Networking plug-in and the dashboard. The
- Telemetry service which was designed to support billing systems for
- OpenStack cloud resources uses a NoSQL database to store information.
- The database typically runs on the controller node.
-
- * **Node 2**: This node has Controller and Ceph-osd roles. Ceph is a
- massively scalable, open source, distributed storage system. It is
- comprised of an object store, block store and a POSIX-compliant distributed
- file system. Enabling Ceph, configures Nova to store ephemeral volumes in
- RBD, configures Glance to use the Ceph RBD backend to store images,
- configures Cinder to store volumes in Ceph RBD images and configures the
- default number of object replicas in Ceph.
-
- * **Node 3**: This node has Controller role in order to achieve high
- availability.
-
- * **Node 4**: This node has Compute role. The compute node runs the
- hypervisor portion of Compute that operates tenant virtual machines
- or instances. By default, Compute uses KVM as the hypervisor.
-
- * **Node 5**: This node has compute role.
+ * **Node 1**:
+ - This node has MongoDB and Controller roles
+ - The controller node runs the Identity service, Image Service, management portions of
+ Compute and Networking, Networking plug-in and the dashboard
+ - Uses VLAN as an interface
+
+ * **Node 2**:
+ - This node has Ceph-osd and Controller roles
+ - The controller node runs the Identity service, Image Service, management portions of
+ Compute and Networking, Networking plug-in and the dashboard
+ - Ceph is a massively scalable, open source, distributed storage system
+ - Uses VLAN as an interface
+
+ * **Node 3**:
+ - This node has Controller role in order to achieve high availability.
+ - Uses VLAN as an interface
+
+ * **Node 4**:
+ - This node has compute and Ceph-osd roles
+ - Ceph is a massively scalable, open source, distributed storage system
+ - By default, Compute uses KVM as the hypervisor
+ - Uses DPDK as an interface
+
+ * **Node 5**:
+ - This node has compute and Ceph-osd roles
+ - Ceph is a massively scalable, open source, distributed storage system
+ - By default, Compute uses KVM as the hypervisor
+ - Uses DPDK as an interface
The below is the ``dea-override-config`` of the ha_nfv-kvm_nfv-ovs-dpdk_heat_ceilometer_scenario.yaml file.
@@ -177,17 +186,18 @@ argument to deploy.py script
* os-nosdn-kvm_ovs_dpdk-ha scenario is successful when all the 5 Nodes are accessible,
up and running.
-
-
**Note:**
* In os-nosdn-kvm_ovs_dpdk-ha scenario, OVS is installed on the compute nodes with DPDK configured
-* This results in faster communication and data transfer among the compute nodes
+* Hugepages for DPDK are configured in the attributes_1 section of the ha_nfv-kvm_nfv-ovs-dpdk_heat_ceilometer_scenario.yaml
+
+* Hugepages are only configured for compute nodes
+* This results in faster communication and data transfer among the compute nodes
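A rough sketch of the kind of hugepage entry the attributes_1 section might carry is shown
below (the attribute names are assumptions for illustration; check the scenario yaml for the
exact keys):

.. code:: bash

   # Illustrative sketch -- attribute names are assumed
   attributes_1:
     hugepages:
       dpdk:
         value: 1024     # 2MB hugepages reserved for OVS-DPDK
       nova:
         value: 2048     # hugepages available to guest VMs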
Scenario Usage Overview
-=======================
+-----------------------
.. Provide a brief overview on how to use the scenario and the features available to the
.. user. This should be an "introduction" to the userguide document, and explicitly link to it,
.. where the specifics of the features are covered including examples and API's
@@ -210,7 +220,7 @@ where,
-i is used to specify the image downloaded from artifacts.
-Note:
+**Note:**
.. code:: bash
@@ -225,16 +235,13 @@ Note:
accessibility (IP, up & running).
Known Limitations, Issues and Workarounds
-=========================================
+-----------------------------------------
.. Explain any known limitations here.
* Test scenario os-nosdn-kvm_ovs_dpdk-ha result is not stable.
-* As Functest and Yardstick test suites are not stable. Instances are not getting IP address from DHCP (functest issue).
-
-
References
-==========
+----------
For more information on the OPNFV Danube release, please visit
http://www.opnfv.org/Danube
diff --git a/docs/scenarios/os-nosdn-kvm_nfv_ovs_dpdk-noha/index.rst b/docs/scenarios/os-nosdn-kvm_nfv_ovs_dpdk-noha/index.rst
index 9d60465d6..3a52fe426 100755
--- a/docs/scenarios/os-nosdn-kvm_nfv_ovs_dpdk-noha/index.rst
+++ b/docs/scenarios/os-nosdn-kvm_nfv_ovs_dpdk-noha/index.rst
@@ -1,12 +1,12 @@
.. This work is licensed under a Creative Commons Attribution 4.0 International License.
.. http://creativecommons.org/licenses/by/4.0
-****************************************
-os-nosdn-kvm-ha Overview and Description
-****************************************
+*******************************************************
+os-nosdn-kvm_nfv_ovs_dpdk-noha Overview and Description
+*******************************************************
.. toctree::
:numbered:
:maxdepth: 3
- os-nosdn-kvm_nfv_ovs_dpdk-noha.description.rst
+ ./os-nosdn-kvm_nfv_ovs_dpdk-noha.description.rst
diff --git a/docs/scenarios/os-nosdn-kvm_nfv_ovs_dpdk-noha/os-nosdn-kvm_nfv_ovs_dpdk-noha.description.rst b/docs/scenarios/os-nosdn-kvm_nfv_ovs_dpdk-noha/os-nosdn-kvm_nfv_ovs_dpdk-noha.description.rst
index 3e354b5b9..a7778d963 100644
--- a/docs/scenarios/os-nosdn-kvm_nfv_ovs_dpdk-noha/os-nosdn-kvm_nfv_ovs_dpdk-noha.description.rst
+++ b/docs/scenarios/os-nosdn-kvm_nfv_ovs_dpdk-noha/os-nosdn-kvm_nfv_ovs_dpdk-noha.description.rst
@@ -2,36 +2,40 @@
.. http://creativecommons.org/licenses/by/4.0
+==========================================
+os-nosdn-kvm_nfv_ovs_dpdk-noha Description
+==========================================
Introduction
-============
+------------
.. In this section explain the purpose of the scenario and the
types of capabilities provided
-The purpose of os-nosdn-kvm_ovs_dpdk-noha scenario testing is to test the
+The purpose of os-nosdn-kvm_ovs_dpdk-noha scenario testing is to test the No
High Availability deployment and configuration of OPNFV software suite
with OpenStack and without SDN software. This OPNFV software suite
includes OPNFV KVM4NFV latest software packages for Linux Kernel and
-QEMU patches for achieving low latency. High Availability feature is achieved
-by deploying OpenStack multi-node setup with 3 controllers and 2 computes nodes.
+QEMU patches for achieving low latency. The No High Availability feature is achieved
+by deploying an OpenStack multi-node setup with 1 controller and 3 compute nodes.
-KVM4NFV packages will be installed on compute nodes as part of deployment. This scenario testcase deployment is happening on multi-node by using OPNFV Fuel deployer.
+KVM4NFV packages will be installed on compute nodes as part of deployment.
+This scenario testcase is deployed on multiple nodes using the OPNFV Fuel deployer.
Scenario Components and Composition
-===================================
+-----------------------------------
.. In this section describe the unique components that make up the scenario,
.. what each component provides and why it has been included in order
.. to communicate to the user the capabilities available in this scenario.
-This scenario deploys the High Availability OPNFV Cloud based on the
-configurations provided in noha_nfv-kvm_nfv-ovs-dpdk_heat_ceilometer_scenario.yaml.
+This scenario deploys the No High Availability OPNFV Cloud based on the
+configurations provided in no-ha_nfv-kvm_nfv-ovs-dpdk_heat_ceilometer_scenario.yaml.
This yaml file contains following configurations and is passed as an
argument to deploy.py script
-* scenario.yaml:This configuration file defines translation between a
+* ``scenario.yaml:`` This configuration file defines translation between a
short deployment scenario name(os-nosdn-kvm_ovs_dpdk-noha) and an actual deployment
- scenario configuration file(noha_nfv-kvm_nfv-ovs-dpdk_heat_ceilometer_scenario.yaml)
+ scenario configuration file(no-ha_nfv-kvm_nfv-ovs-dpdk_heat_ceilometer_scenario.yaml)
* ``deployment-scenario-metadata:`` Contains the configuration metadata like
title,version,created,comment.
@@ -57,35 +61,37 @@ argument to deploy.py script
module-config-override:
# Module config overrides
-* ``dea-override-config:`` Used to configure the HA mode,network segmentation
+* ``dea-override-config:`` Used to configure the NO-HA mode, network segmentation
types and role-to-node assignments. These configurations override
corresponding keys in the dea_base.yaml and dea_pod_override.yaml.
These keys are used to deploy multiple nodes(``1 controller,3 computes``)
as mentioned below.
- * **Node 1**: This node has MongoDB and Controller roles. The controller
- node runs the Identity service, Image Service, management portions of
- Compute and Networking, Networking plug-in and the dashboard. The
- Telemetry service which was designed to support billing systems for
- OpenStack cloud resources uses a NoSQL database to store information.
- The database typically runs on the controller node.
+ * **Node 1**:
+ - This node has MongoDB and Controller roles
+ - The controller node runs the Identity service, Image Service, management portions of
+ Compute and Networking, Networking plug-in and the dashboard
+ - Uses VLAN as an interface
- * **Node 2**: This node has compute and Ceph-osd roles. Ceph is a
- massively scalable, open source, distributed storage system. It is
- comprised of an object store, block store and a POSIX-compliant
- file system. Enabling Ceph, configures Nova to store ephemeral volumes in
- RBD, configures Glance to use the Ceph RBD backend to store images,
- configures Cinder to store volumes in Ceph RBD images and configures the
- default number of object replicas in Ceph.
+ * **Node 2**:
+ - This node has compute and Ceph-osd roles
+ - Ceph is a massively scalable, open source, distributed storage system
+ - By default, Compute uses KVM as the hypervisor
+ - Uses DPDK as an interface
- * **Node 3**: This node has Compute role in order to achieve high
- availability.
+ * **Node 3**:
+ - This node has compute and Ceph-osd roles
+ - Ceph is a massively scalable, open source, distributed storage system
+ - By default, Compute uses KVM as the hypervisor
+ - Uses DPDK as an interface
- * **Node 4**: This node has Compute role. The compute node runs the
- hypervisor portion of Compute that operates tenant virtual machines
- or instances. By default, Compute uses KVM as the hypervisor.
+ * **Node 4**:
+ - This node has compute and Ceph-osd roles
+ - Ceph is a massively scalable, open source, distributed storage system
+ - By default, Compute uses KVM as the hypervisor
+ - Uses DPDK as an interface
- The below is the ``dea-override-config`` of the noha_nfv-kvm_nfv-ovs-dpdk_heat_ceilometer_scenario.yaml file.
+ The below is the ``dea-override-config`` of the no-ha_nfv-kvm_nfv-ovs-dpdk_heat_ceilometer_scenario.yaml file.
.. code:: bash
@@ -162,7 +168,8 @@ argument to deploy.py script
* ``dha-override-config:`` Provides information about the VM definition and
Network config for virtual deployment.These configurations overrides
the pod dha definition and points to the controller,compute and
- fuel definition files. The noha_nfv-kvm_nfv-ovs-dpdk_heat_ceilometer_scenario.yaml has no dha-config changes i.e., default configuration is used.
+ fuel definition files. The no-ha_nfv-kvm_nfv-ovs-dpdk_heat_ceilometer_scenario.yaml
+ has no dha-config changes i.e., default configuration is used.
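As a hedged sketch of the kind of content a ``dha-override-config`` can carry (the keys below
are assumptions for illustration; as stated above, this scenario keeps the default
configuration):

.. code:: bash

   # Illustrative sketch -- keys are assumed, not taken from the pod dha files
   dha-override-config:
     nodes:
       - id: 1
         libvirtName: controller1
         libvirtTemplate: templates/virtual_environment/vms/controller.xml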
* os-nosdn-kvm_ovs_dpdk-noha scenario is successful when all the 4 Nodes are accessible,
up and running.
@@ -173,11 +180,16 @@ argument to deploy.py script
* In os-nosdn-kvm_ovs_dpdk-noha scenario, OVS is installed on the compute nodes with DPDK configured
+* Hugepages for DPDK are configured in the attributes_1 section of the no-ha_nfv-kvm_nfv-ovs-dpdk_heat_ceilometer_scenario.yaml
+
+* Hugepages are only configured for compute nodes
+
* This results in faster communication and data transfer among the compute nodes
Scenario Usage Overview
-=======================
+-----------------------
+
.. Provide a brief overview on how to use the scenario and the features available to the
.. user. This should be an "introduction" to the userguide document, and explicitly link to it,
.. where the specifics of the features are covered including examples and API's
@@ -200,7 +212,7 @@ where,
-i is used to specify the image downloaded from artifacts.
-Note:
+**Note:**
.. code:: bash
@@ -208,20 +220,20 @@ Note:
* os-nosdn-kvm_ovs_dpdk-noha scenario can be executed from the jenkins project
"fuel-os-nosdn-kvm_ovs_dpdk-noha-baremetal-daily-master"
-* This scenario provides the High Availability feature by deploying
- 3 controller,2 compute nodes and checking if all the 5 nodes
+* This scenario provides the No High Availability feature by deploying
+  1 controller, 3 compute nodes and checking if all 4 nodes
are accessible (IP, up & running).
-* Test Scenario is passed if deployment is successful and all 5 nodes have
+* Test Scenario is passed if deployment is successful and all 4 nodes have
accessibility (IP, up & running).
Known Limitations, Issues and Workarounds
-=========================================
+-----------------------------------------
.. Explain any known limitations here.
* Test scenario os-nosdn-kvm_ovs_dpdk-noha result is not stable.
References
-==========
+----------
For more information on the OPNFV Danube release, please visit
http://www.opnfv.org/Danube
diff --git a/docs/scenarios/os-nosdn-kvm_nfv_ovs_dpdk_bar-ha/index.rst b/docs/scenarios/os-nosdn-kvm_nfv_ovs_dpdk_bar-ha/index.rst
index 5fccc5a2c..0e374a5ca 100755
--- a/docs/scenarios/os-nosdn-kvm_nfv_ovs_dpdk_bar-ha/index.rst
+++ b/docs/scenarios/os-nosdn-kvm_nfv_ovs_dpdk_bar-ha/index.rst
@@ -1,12 +1,12 @@
.. This work is licensed under a Creative Commons Attribution 4.0 International License.
.. http://creativecommons.org/licenses/by/4.0
-****************************************
-os-nosdn-kvm-ha Overview and Description
-****************************************
+*********************************************************
+os-nosdn-kvm_nfv_ovs_dpdk_bar-ha Overview and Description
+*********************************************************
.. toctree::
:numbered:
:maxdepth: 3
- os-nosdn-kvm_nfv_ovs_dpdk_bar-ha.description.rst
+ ./os-nosdn-kvm_nfv_ovs_dpdk_bar-ha.description.rst
diff --git a/docs/scenarios/os-nosdn-kvm_nfv_ovs_dpdk_bar-ha/os-nosdn-kvm_nfv_ovs_dpdk_bar-ha.description.rst b/docs/scenarios/os-nosdn-kvm_nfv_ovs_dpdk_bar-ha/os-nosdn-kvm_nfv_ovs_dpdk_bar-ha.description.rst
index 7090ccdd6..0ab20514a 100644
--- a/docs/scenarios/os-nosdn-kvm_nfv_ovs_dpdk_bar-ha/os-nosdn-kvm_nfv_ovs_dpdk_bar-ha.description.rst
+++ b/docs/scenarios/os-nosdn-kvm_nfv_ovs_dpdk_bar-ha/os-nosdn-kvm_nfv_ovs_dpdk_bar-ha.description.rst
@@ -2,9 +2,12 @@
.. http://creativecommons.org/licenses/by/4.0
+============================================
+os-nosdn-kvm_nfv_ovs_dpdk_bar-ha Description
+============================================
Introduction
-============
+------------
.. In this section explain the purpose of the scenario and the
types of capabilities provided
@@ -16,10 +19,12 @@ includes OPNFV KVM4NFV latest software packages for Linux Kernel and
QEMU patches for achieving low latency. High Availability feature is achieved
by deploying OpenStack multi-node setup with 3 controllers and 2 compute nodes.
-KVM4NFV packages will be installed on compute nodes as part of deployment. This scenario testcase deployment is happening on multi-node by using OPNFV Fuel deployer.
+OPNFV Barometer packages are used for traffic, performance and platform monitoring.
+KVM4NFV packages will be installed on compute nodes as part of deployment.
+This scenario testcase is deployed on multiple nodes using the OPNFV Fuel deployer.
Scenario Components and Composition
-===================================
+-----------------------------------
.. In this section describe the unique components that make up the scenario,
.. what each component provides and why it has been included in order
.. to communicate to the user the capabilities available in this scenario.
@@ -29,7 +34,7 @@ configurations provided in ha_nfv-kvm_nfv-ovs-dpdk-bar_heat_ceilometer_scenario.
This yaml file contains following configurations and is passed as an
argument to deploy.py script
-* scenario.yaml:This configuration file defines translation between a
+* ``scenario.yaml:`` This configuration file defines translation between a
short deployment scenario name(os-nosdn-kvm_ovs_dpdk_bar-ha) and an actual deployment
scenario configuration file(ha_nfv-kvm_nfv-ovs-dpdk-bar_heat_ceilometer_scenario.yaml)
@@ -69,29 +74,34 @@ argument to deploy.py script
These keys are used to deploy multiple nodes(``3 controllers,2 computes``)
as mentioned below.
- * **Node 1**: This node has MongoDB and Controller roles. The controller
- node runs the Identity service, Image Service, management portions of
- Compute and Networking, Networking plug-in and the dashboard. The
- Telemetry service which was designed to support billing systems for
- OpenStack cloud resources uses a NoSQL database to store information.
- The database typically runs on the controller node.
-
- * **Node 2**: This node has Controller and Ceph-osd roles. Ceph is a
- massively scalable, open source, distributed storage system. It is
- comprised of an object store, block store and a POSIX-compliant distributed
- file system. Enabling Ceph, configures Nova to store ephemeral volumes in
- RBD, configures Glance to use the Ceph RBD backend to store images,
- configures Cinder to store volumes in Ceph RBD images and configures the
- default number of object replicas in Ceph.
-
- * **Node 3**: This node has Controller role in order to achieve high
- availability.
-
- * **Node 4**: This node has Compute role. The compute node runs the
- hypervisor portion of Compute that operates tenant virtual machines
- or instances. By default, Compute uses KVM as the hypervisor.
-
- * **Node 5**: This node has compute role.
+ * **Node 1**:
+ - This node has MongoDB and Controller roles
+ - The controller node runs the Identity service, Image Service, management portions of
+ Compute and Networking, Networking plug-in and the dashboard
+ - Uses VLAN as an interface
+
+ * **Node 2**:
+ - This node has Ceph-osd and Controller roles
+ - The controller node runs the Identity service, Image Service, management portions of
+ Compute and Networking, Networking plug-in and the dashboard
+ - Ceph is a massively scalable, open source, distributed storage system
+ - Uses VLAN as an interface
+
+ * **Node 3**:
+ - This node has Controller role in order to achieve high availability.
+ - Uses VLAN as an interface
+
+ * **Node 4**:
+ - This node has compute and Ceph-osd roles
+ - Ceph is a massively scalable, open source, distributed storage system
+ - By default, Compute uses KVM as the hypervisor
+ - Uses DPDK as an interface
+
+ * **Node 5**:
+ - This node has compute and Ceph-osd roles
+ - Ceph is a massively scalable, open source, distributed storage system
+ - By default, Compute uses KVM as the hypervisor
+ - Uses DPDK as an interface
The below is the ``dea-override-config`` of the ha_nfv-kvm_nfv-ovs-dpdk-bar_heat_ceilometer_scenario.yaml file.
@@ -189,11 +199,15 @@ argument to deploy.py script
* Barometer plugin is also implemented along with the KVM plugin
+* Hugepages for DPDK are configured in the attributes_1 section of the ha_nfv-kvm_nfv-ovs-dpdk-bar_heat_ceilometer_scenario.yaml
+
+* Hugepages are only configured for compute nodes
+
* This results in faster communication and data transfer among the compute nodes
Scenario Usage Overview
-=======================
+-----------------------
.. Provide a brief overview on how to use the scenario and the features available to the
.. user. This should be an "introduction" to the userguide document, and explicitly link to it,
.. where the specifics of the features are covered including examples and API's
@@ -209,14 +223,14 @@ Command to deploy the os-nosdn-kvm_ovs_dpdk_bar-ha scenario:
.. code:: bash
$ cd ~/fuel/ci/
- $ sudo ./deploy.sh -f -b file:///tmp/opnfv-fuel/deploy/config -l devel-pipeline -p default -s no-ha_nfv-kvm_nfv-ovs-dpdk_heat_ceilometer_scenario.yaml -i file:///tmp/opnfv.iso
+ $ sudo ./deploy.sh -f -b file:///tmp/opnfv-fuel/deploy/config -l devel-pipeline -p default -s ha_nfv-kvm_nfv-ovs-dpdk-bar_heat_ceilometer_scenario.yaml -i file:///tmp/opnfv.iso
where,
-b is used to specify the configuration directory
-i is used to specify the image downloaded from artifacts.
-Note:
+**Note:**
.. code:: bash
@@ -231,16 +245,13 @@ Note:
accessibility (IP, up & running).
Known Limitations, Issues and Workarounds
-=========================================
+-----------------------------------------
.. Explain any known limitations here.
* Test scenario os-nosdn-kvm_ovs_dpdk_bar-ha result is not stable.
-* As Functest and Yardstick test suites are not stable. Instances are not getting IP address from DHCP (functest issue).
-
-
References
-==========
+----------
For more information on the OPNFV Danube release, please visit
http://www.opnfv.org/Danube
diff --git a/docs/scenarios/os-nosdn-kvm_nfv_ovs_dpdk_bar-noha/index.rst b/docs/scenarios/os-nosdn-kvm_nfv_ovs_dpdk_bar-noha/index.rst
index 1cdad5205..756b2ba6a 100755
--- a/docs/scenarios/os-nosdn-kvm_nfv_ovs_dpdk_bar-noha/index.rst
+++ b/docs/scenarios/os-nosdn-kvm_nfv_ovs_dpdk_bar-noha/index.rst
@@ -1,12 +1,12 @@
.. This work is licensed under a Creative Commons Attribution 4.0 International License.
.. http://creativecommons.org/licenses/by/4.0
-****************************************
-os-nosdn-kvm-ha Overview and Description
-****************************************
+***********************************************************
+os-nosdn-kvm_nfv_ovs_dpdk_bar-noha Overview and Description
+***********************************************************
.. toctree::
:numbered:
:maxdepth: 3
- os-nosdn-kvm_nfv_ovs_dpdk_bar-noha.description.rst
+ ./os-nosdn-kvm_nfv_ovs_dpdk_bar-noha.description.rst
diff --git a/docs/scenarios/os-nosdn-kvm_nfv_ovs_dpdk_bar-noha/os-nosdn-kvm_nfv_ovs_dpdk_bar-noha.description.rst b/docs/scenarios/os-nosdn-kvm_nfv_ovs_dpdk_bar-noha/os-nosdn-kvm_nfv_ovs_dpdk_bar-noha.description.rst
index 67a0732a7..47a7f1034 100644
--- a/docs/scenarios/os-nosdn-kvm_nfv_ovs_dpdk_bar-noha/os-nosdn-kvm_nfv_ovs_dpdk_bar-noha.description.rst
+++ b/docs/scenarios/os-nosdn-kvm_nfv_ovs_dpdk_bar-noha/os-nosdn-kvm_nfv_ovs_dpdk_bar-noha.description.rst
@@ -2,36 +2,41 @@
.. http://creativecommons.org/licenses/by/4.0
+==============================================
+os-nosdn-kvm_nfv_ovs_dpdk_bar-noha Description
+==============================================
Introduction
-============
+------------
.. In this section explain the purpose of the scenario and the
types of capabilities provided
The purpose of os-nosdn-kvm_ovs_dpdk_bar-noha scenario testing is to test the
-High Availability deployment and configuration of OPNFV software suite
+No High Availability deployment and configuration of OPNFV software suite
with OpenStack and without SDN software. This OPNFV software suite
includes OPNFV KVM4NFV latest software packages for Linux Kernel and
-QEMU patches for achieving low latency. High Availability feature is achieved
-by deploying OpenStack multi-node setup with 3 controllers and 2 computes nodes.
+QEMU patches for achieving low latency. The No High Availability feature is achieved
+by deploying an OpenStack multi-node setup with 1 controller and 3 compute nodes.
-KVM4NFV packages will be installed on compute nodes as part of deployment. This scenario testcase deployment is happening on multi-node by using OPNFV Fuel deployer.
+OPNFV Barometer packages are used for traffic, performance and platform monitoring.
+KVM4NFV packages will be installed on compute nodes as part of deployment.
+This scenario testcase is deployed on multiple nodes using the OPNFV Fuel deployer.
Scenario Components and Composition
-===================================
+-----------------------------------
.. In this section describe the unique components that make up the scenario,
.. what each component provides and why it has been included in order
.. to communicate to the user the capabilities available in this scenario.
-This scenario deploys the High Availability OPNFV Cloud based on the
-configurations provided in noha_nfv-kvm_nfv-ovs-dpdk-bar_heat_ceilometer_scenario.yaml.
+This scenario deploys the No High Availability OPNFV Cloud based on the
+configurations provided in no-ha_nfv-kvm_nfv-ovs-dpdk-bar_heat_ceilometer_scenario.yaml.
This yaml file contains following configurations and is passed as an
argument to deploy.py script
-* scenario.yaml:This configuration file defines translation between a
+* ``scenario.yaml:`` This configuration file defines translation between a
short deployment scenario name(os-nosdn-kvm_ovs_dpdk_bar-noha) and an actual deployment
- scenario configuration file(noha_nfv-kvm_nfv-ovs-dpdk-bar_heat_ceilometer_scenario.yaml)
+ scenario configuration file(no-ha_nfv-kvm_nfv-ovs-dpdk-bar_heat_ceilometer_scenario.yaml)
* ``deployment-scenario-metadata:`` Contains the configuration metadata like
title,version,created,comment.
@@ -68,29 +73,31 @@ argument to deploy.py script
These keys are used to deploy multiple nodes(``1 controller,3 computes``)
as mentioned below.
- * **Node 1**: This node has MongoDB and Controller roles. The controller
- node runs the Identity service, Image Service, management portions of
- Compute and Networking, Networking plug-in and the dashboard. The
- Telemetry service which was designed to support billing systems for
- OpenStack cloud resources uses a NoSQL database to store information.
- The database typically runs on the controller node.
+ * **Node 1**:
+ - This node has MongoDB and Controller roles
+ - The controller node runs the Identity service, Image Service, management portions of
+ Compute and Networking, Networking plug-in and the dashboard
+ - Uses VLAN as an interface
- * **Node 2**: This node has compute and Ceph-osd roles. Ceph is a
- massively scalable, open source, distributed storage system. It is
- comprised of an object store, block store and a POSIX-compliant
- file system. Enabling Ceph, configures Nova to store ephemeral volumes in
- RBD, configures Glance to use the Ceph RBD backend to store images,
- configures Cinder to store volumes in Ceph RBD images and configures the
- default number of object replicas in Ceph.
+ * **Node 2**:
+ - This node has compute and Ceph-osd roles
+ - Ceph is a massively scalable, open source, distributed storage system
+ - By default, Compute uses KVM as the hypervisor
+ - Uses DPDK as an interface
- * **Node 3**: This node has Compute role in order to achieve high
- availability.
+ * **Node 3**:
+ - This node has compute and Ceph-osd roles
+ - Ceph is a massively scalable, open source, distributed storage system
+ - By default, Compute uses KVM as the hypervisor
+ - Uses DPDK as an interface
- * **Node 4**: This node has Compute role. The compute node runs the
- hypervisor portion of Compute that operates tenant virtual machines
- or instances. By default, Compute uses KVM as the hypervisor.
+ * **Node 4**:
+ - This node has compute and Ceph-osd roles
+ - Ceph is a massively scalable, open source, distributed storage system
+ - By default, Compute uses KVM as the hypervisor
+ - Uses DPDK as an interface
- The below is the ``dea-override-config`` of the noha_nfv-kvm_nfv-ovs-dpdk-bar_heat_ceilometer_scenario.yaml file.
+ The below is the ``dea-override-config`` of the no-ha_nfv-kvm_nfv-ovs-dpdk-bar_heat_ceilometer_scenario.yaml file.
.. code:: bash
@@ -180,11 +187,14 @@ argument to deploy.py script
* Barometer plugin is also implemented along with the KVM plugin.
-* This results in faster communication and data transfer among the compute nodes
+* Hugepages for DPDK are configured in the attributes_1 section of the no-ha_nfv-kvm_nfv-ovs-dpdk-bar_heat_ceilometer_scenario.yaml
+
+* Hugepages are only configured for compute nodes
+* This results in faster communication and data transfer among the compute nodes
Scenario Usage Overview
-=======================
+-----------------------
.. Provide a brief overview on how to use the scenario and the features available to the
.. user. This should be an "introduction" to the userguide document, and explicitly link to it,
.. where the specifics of the features are covered including examples and API's
@@ -215,20 +225,20 @@ Note:
* os-nosdn-kvm_ovs_dpdk_bar-noha scenario can be executed from the jenkins project
"fuel-os-nosdn-kvm_ovs_dpdk_bar-noha-baremetal-daily-master"
-* This scenario provides the High Availability feature by deploying
- 3 controller,2 compute nodes and checking if all the 5 nodes
+* This scenario provides the No High Availability feature by deploying
+  1 controller, 3 compute nodes and checking if all 4 nodes
are accessible (IP, up & running).
-* Test Scenario is passed if deployment is successful and all 5 nodes have
+* Test Scenario is passed if deployment is successful and all 4 nodes have
accessibility (IP, up & running).
Known Limitations, Issues and Workarounds
-=========================================
+-----------------------------------------
.. Explain any known limitations here.
* Test scenario os-nosdn-kvm_ovs_dpdk_bar-noha result is not stable.
References
-==========
+----------
For more information on the OPNFV Danube release, please visit
http://www.opnfv.org/Danube
diff --git a/docs/userguide/Ftrace.debugging.tool.userguide.rst b/docs/userguide/Ftrace.debugging.tool.userguide.rst
index 0fcbbcf93..fc0858a6d 100644
--- a/docs/userguide/Ftrace.debugging.tool.userguide.rst
+++ b/docs/userguide/Ftrace.debugging.tool.userguide.rst
@@ -9,9 +9,9 @@ FTrace Debugging Tool
About Ftrace
-------------
Ftrace is an internal tracer designed to find what is going on inside the kernel. It can be used
-for debugging or analyzing latencies and performance issues that take place outside of user-space.
-Although ftrace is typically considered the function tracer, it is really a frame work of several
-assorted tracing utilities.
+for debugging or analyzing latencies and performance related issues that take place outside of
+user-space. Although ftrace is typically considered the function tracer, it is really a
+framework of several assorted tracing utilities.
One of the most common uses of ftrace is event tracing.
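For example, event tracing can be switched on through the standard kernel tracefs interface
(run as root; ``events/kvm`` is the standard kernel trace event group for KVM):

.. code:: bash

   [tracing]# cd /sys/kernel/debug/tracing
   [tracing]# echo 1 > events/kvm/enable    # enable all kvm trace events
   [tracing]# echo 1 > tracing_on           # start tracing
   [tracing]# cat trace | head              # inspect the collected events
   [tracing]# echo 0 > tracing_on           # stop tracing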
@@ -33,7 +33,7 @@ Version Features
+-----------------------------+-----------------------------------------------+
| | - Ftrace aids in debugging the KVMFORNFV |
| Danube | 4.4-linux-kernel level issues |
-| | - Option to diable if not required |
+| | - Option to disable if not required |
+-----------------------------+-----------------------------------------------+
@@ -155,19 +155,16 @@ Examples:
[tracing]# echo 1 > tracing_on;
-===================
Ftrace in KVMFORNFV
-===================
-Ftrace is part of KVMFORNFV D-Release. Kvmfornfv currently uses 4.4-linux-Kernel as part of
-deployment and runs cyclictest for testing purpose generating latency values (max, min, avg values).
+-------------------
+Ftrace is part of the KVMFORNFV D-Release. The KVMFORNFV-built 4.4 Linux kernel is tested by
+executing cyclictest and analyzing the generated results/latency values (max, min, avg).
Ftrace (or function tracer) is a stable kernel built-in debugging tool which tests the kernel in real
time and outputs a log as part of the code. These output logs are useful in the following ways.
- Kernel Debugging.
- - Helps in Kernel code Optimization and
- - Can be used to better understand the kernel Level code flow
- - Log generation for each test run if enabled
- - Choice of Disabling and Enabling
+ - Helps in Kernel code optimization and
+ - Can be used to better understand the kernel level code flow
Ftrace logs for KVMFORNFV can be found `here`_:
@@ -184,7 +181,8 @@ Kvmfornfv has two scripts in /ci/envs to provide ftrace tool:
Enabling Ftrace in KVMFORNFV
----------------------------
-The enable_trace.sh script is triggered by changing ftrace_enable value in test_kvmfornfv.sh script which is zero by default. Change as below to enable Ftrace and trigger the script,
+The enable_trace.sh script is triggered by changing ftrace_enable value in test_kvmfornfv.sh
+script to 1 (which is zero by default). Change as below to enable Ftrace.
.. code:: bash
@@ -197,7 +195,7 @@ Note:
Details of enable_trace script
------------------------------
-- CPU Coremask is calculated using getcpumask()
+- CPU coremask is calculated using getcpumask()
- All the required events are enabled by,
echoing "1" to $TRACEDIR/events/event_name/enable file
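The coremask step can be sketched as below; the internals of getcpumask() are not shown in this guide, so the CPU list and the bitmask arithmetic here are illustrative assumptions only:

```shell
# Illustrative only: build a hex coremask from a list of CPU ids,
# similar in spirit to what getcpumask() produces.
cpus="1 2 3"                    # assumed isolated CPUs
mask=0
for c in $cpus; do
  mask=$(( mask | (1 << c) ))   # set the bit for each CPU
done
coremask=$(printf '%x' "$mask")
echo "$coremask"                # CPUs 1,2,3 -> bits 0b1110 -> e
```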
@@ -230,19 +228,21 @@ The set_event file contains all the enabled events list
- Once tracing is disabled, the disable_trace.sh script is triggered.
-Details of Disable_trace Script
+Details of disable_trace Script
-------------------------------
In the disable_trace script, the following are done:
-- The trace file is copied and moved to /tmp folfer based on timestamp.
+- The trace file is copied and moved to /tmp folder based on timestamp
- The current tracer file is set to ``nop``
- The set_event file is cleared i.e., all the enabled events are disabled
-- Kernel Ftarcer is diabled/unmounted
+- Kernel Ftrace is disabled/unmounted
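The steps above can be sketched as a minimal shell sequence. The real script operates on the mounted tracing directory (e.g. /sys/kernel/debug/tracing), so a scratch directory stands in for $TRACEDIR here and the final unmount is omitted:

```shell
# Sketch of the disable_trace.sh steps against a stand-in $TRACEDIR.
TRACEDIR=$(mktemp -d)
echo function > "$TRACEDIR/current_tracer"   # pretend a tracer was active
echo "sample trace data" > "$TRACEDIR/trace"
echo "sched:sched_switch" > "$TRACEDIR/set_event"

cp "$TRACEDIR/trace" "/tmp/trace.$(date +%Y%m%d%H%M%S)"  # archive by timestamp
echo nop > "$TRACEDIR/current_tracer"        # reset the current tracer
: > "$TRACEDIR/set_event"                    # disable all enabled events
# (the real script would finish by unmounting the tracing filesystem)
```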
Publishing Ftrace logs:
-----------------------
-The generated trace log is pushed to `artifacts`_ of Kvmfornfv project by releng team, which is done by a script in JJB of releng. The `trigger`_ in the script is.,
+The generated trace log is pushed to `artifacts`_ by the kvmfornfv-upload-artifact.sh
+script available in releng, which is triggered as a part of the kvmfornfv daily job.
+The `trigger`_ in the script is:
.. code:: bash
@@ -252,6 +252,3 @@ The generated trace log is pushed to `artifacts`_ of Kvmfornfv project by releng
.. _artifacts: https://artifacts.opnfv.org/
.. _trigger: https://gerrit.opnfv.org/gerrit/gitweb?p=releng.git;a=blob;f=jjb/kvmfornfv/kvmfornfv-upload-artifact.sh;h=56fb4f9c18a83c689a916dc6c85f9e3ddf2479b2;hb=HEAD#l53
-
-
-.. include:: pcm_utility.userguide.rst
diff --git a/docs/userguide/abstract.rst b/docs/userguide/abstract.rst
index 8c36c268f..ec05b2560 100644
--- a/docs/userguide/abstract.rst
+++ b/docs/userguide/abstract.rst
@@ -2,9 +2,9 @@
.. http://creativecommons.org/licenses/by/4.0
-========
-Abstract
-========
+==================
+Userguide Abstract
+==================
In KVM4NFV project, we focus on the KVM hypervisor to enhance it for NFV,
by looking at the following areas initially-
diff --git a/docs/userguide/common.platform.render.rst b/docs/userguide/common.platform.render.rst
index 486ca469f..46b4707a3 100644
--- a/docs/userguide/common.platform.render.rst
+++ b/docs/userguide/common.platform.render.rst
@@ -7,7 +7,7 @@ Using common platform components
================================
This section outlines basic usage principles and methods for some of the
-commonly deployed components of supported OPNFV scenario's in Colorado.
+commonly deployed components of supported OPNFV scenarios in Danube.
The subsections provide an outline of how these components are commonly
used and how to address them in an OPNFV deployment. The components derive
from autonomous upstream communities and where possible this guide will
diff --git a/docs/userguide/feature.userguide.render.rst b/docs/userguide/feature.userguide.render.rst
index d903f0711..0e2738ae3 100644
--- a/docs/userguide/feature.userguide.render.rst
+++ b/docs/userguide/feature.userguide.render.rst
@@ -3,7 +3,7 @@
.. http://creativecommons.org/licenses/by/4.0
==========================
-Using Colorado Features
+Using Danube Features
==========================
The following sections of the user guide provide feature specific usage
diff --git a/docs/userguide/images/cpu-stress-idle-test-type.png b/docs/userguide/images/cpu-stress-idle-test-type.png
new file mode 100644
index 000000000..9a5bdf6de
--- /dev/null
+++ b/docs/userguide/images/cpu-stress-idle-test-type.png
Binary files differ
diff --git a/docs/userguide/images/guest_pk_fw.png b/docs/userguide/images/guest_pk_fw.png
new file mode 100644
index 000000000..5f80ecce5
--- /dev/null
+++ b/docs/userguide/images/guest_pk_fw.png
Binary files differ
diff --git a/docs/userguide/images/host_pk_fw.png b/docs/userguide/images/host_pk_fw.png
new file mode 100644
index 000000000..dcbe921f2
--- /dev/null
+++ b/docs/userguide/images/host_pk_fw.png
Binary files differ
diff --git a/docs/userguide/images/idle-idle-test-type.png b/docs/userguide/images/idle-idle-test-type.png
new file mode 100644
index 000000000..bd241bfe1
--- /dev/null
+++ b/docs/userguide/images/idle-idle-test-type.png
Binary files differ
diff --git a/docs/userguide/images/io-stress-idle-test-type.png b/docs/userguide/images/io-stress-idle-test-type.png
new file mode 100644
index 000000000..f79cb5984
--- /dev/null
+++ b/docs/userguide/images/io-stress-idle-test-type.png
Binary files differ
diff --git a/docs/userguide/images/memory-stress-idle-test-type.png b/docs/userguide/images/memory-stress-idle-test-type.png
new file mode 100644
index 000000000..1ca839a4a
--- /dev/null
+++ b/docs/userguide/images/memory-stress-idle-test-type.png
Binary files differ
diff --git a/docs/userguide/images/sriov_pk_fw.png b/docs/userguide/images/sriov_pk_fw.png
new file mode 100644
index 000000000..bf7ad6f9b
--- /dev/null
+++ b/docs/userguide/images/sriov_pk_fw.png
Binary files differ
diff --git a/docs/userguide/index.rst b/docs/userguide/index.rst
index fcef57250..0d5089e01 100644
--- a/docs/userguide/index.rst
+++ b/docs/userguide/index.rst
@@ -17,6 +17,7 @@ KVMforNFV User Guide
./kvmfornfv.cyclictest-dashboard.userguide.rst
./low_latency.userguide.rst
./live_migration.userguide.rst
+ ./openstack.rst
./packet_forwarding.userguide.rst
./pcm_utility.userguide.rst
./tuning.userguide.rst
diff --git a/docs/userguide/introduction.rst b/docs/userguide/introduction.rst
index 501d6391b..9a22bdebd 100644
--- a/docs/userguide/introduction.rst
+++ b/docs/userguide/introduction.rst
@@ -2,9 +2,12 @@
.. http://creativecommons.org/licenses/by/4.0
-========
+======================
+Userguide Introduction
+======================
+
Overview
-========
+--------
The project "NFV Hypervisors-KVM" makes collaborative efforts to enable NFV
features for existing hypervisors, which are not necessarily designed or
@@ -13,7 +16,7 @@ consists of Continuous Integration builds, deployments and testing
combinations of virtual infrastructure components.
KVM4NFV Features
-================
+----------------
Using this project, the following areas are targeted-
@@ -46,7 +49,7 @@ The configuration guide details which scenarios are best for you and how to
install and configure them.
General usage guidelines
-========================
+------------------------
The user guide for KVM4NFV CICD features and capabilities provide step by step
instructions for using features that have been configured according to the
diff --git a/docs/userguide/kvmfornfv.cyclictest-dashboard.userguide.rst b/docs/userguide/kvmfornfv.cyclictest-dashboard.userguide.rst
index 6333d0917..4ec8f5013 100644
--- a/docs/userguide/kvmfornfv.cyclictest-dashboard.userguide.rst
+++ b/docs/userguide/kvmfornfv.cyclictest-dashboard.userguide.rst
@@ -2,31 +2,36 @@
.. http://creativecommons.org/licenses/by/4.0
-========================================
+=========================
+KVMFORNFV Dashboard Guide
+=========================
+
Dashboard for KVM4NFV Daily Test Results
-========================================
+----------------------------------------
Abstract
-========
+--------
This chapter explains the procedure to configure the InfluxDB and Grafana on Node1 or Node2
-depending on the testtype to publish KVM4NFV cyclic test results. The cyclictest cases are executed
-and results are published on Yardstick Dashboard(Graphana). InfluxDB is the database which will
+depending on the test type, to publish KVM4NFV test results. The cyclictest cases are executed
+and the results are published on the Yardstick Dashboard (Grafana). InfluxDB is the database which will
store the cyclictest results and Grafana is a visualisation suite to view the maximum, minimum and
-average values of the timeseries data of cyclictest results.The framework is shown in below image.
-
-.. Figure:: ../images/dashboard-architecture.png
+average values of the time series data of cyclictest results. The framework is shown in the image below.
+
+.. figure:: images/dashboard-architecture.png
+ :name: dashboard-architecture
+ :width: 100%
+ :align: center
Version Features
-================
+----------------
+-----------------------------+--------------------------------------------+
| | |
| **Release** | **Features** |
| | |
+=============================+============================================+
-| | - Data published in Json file Format |
+| | - Data published in Json file format |
| Colorado | - No database support to store the test's |
| | latency values of cyclictest |
| | - For each run, the previous run's output |
@@ -36,13 +41,13 @@ Version Features
| | - Test results are stored in Influxdb |
| | - Graphical representation of the latency |
| Danube | values using Grafana suite. (Dashboard) |
-| | - Supports Graphical view for multiple |
+| | - Supports graphical view for multiple |
| | testcases and test-types (Stress/Idle) |
+-----------------------------+--------------------------------------------+
Installation Steps:
-===================
+-------------------
To configure Yardstick, InfluxDB and Grafana for the KVMFORNFV project, the following sequence of steps is followed:
**Note:**
@@ -73,7 +78,7 @@ The Yardstick document for Grafana and InfluxDB configuration can be found `here
.. _here: https://wiki.opnfv.org/display/yardstick/How+to+deploy+InfluxDB+and+Grafana+locally
Configuring the Dispatcher Type:
-================================
+---------------------------------
The dispatcher type needs to be configured in /etc/yardstick/yardstick.conf depending on the dispatcher
method used to store the cyclictest results. A sample yardstick.conf can be found at
/yardstick/etc/yardstick.conf.sample, which can be copied to /etc/yardstick.
@@ -91,9 +96,9 @@ Three type of dispatcher methods are available to store the cyclictest results.
- InfluxDB
- HTTP
-**1. File**: Default Dispatcher module is file.If the dispatcher module is configured as a file,then the test results are stored in yardstick.out file.
+**1. File**: The default dispatcher module is file. If the dispatcher module is configured as a file, then the test results are stored in a temporary file yardstick.out
( default path: /tmp/yardstick.out).
-Dispatcher module of "Verify Job" is "Default".So,the results are stored in Yardstick.out file for verify job. Storing all the verify jobs in InfluxDB database causes redundancy of latency values. Hence, a File output format is prefered.
+Dispatcher module of "Verify Job" is "Default". So, the results are stored in the yardstick.out file for the verify job. Storing all the verify jobs in the InfluxDB database causes redundancy of latency values. Hence, the file output format is preferred.
.. code:: bash
@@ -101,9 +106,14 @@ Dispatcher module of "Verify Job" is "Default".So,the results are stored in Yard
debug = False
dispatcher = file
-**2. Influxdb**: If the dispatcher module is configured as influxdb, then the test results are stored in Influxdb.Users can check test results stored in the Influxdb(Database) on Grafana which is used to visualize the time series data.
+ [dispatcher_file]
+ file_path = /tmp/yardstick.out
+ max_bytes = 0
+ backup_count = 0
+
+**2. Influxdb**: If the dispatcher module is configured as influxdb, then the test results are stored in InfluxDB. Users can check the test results stored in InfluxDB (database) on Grafana, which is used to visualize the time series data.
-To configure the influxdb ,the following content in /etc/yardstick/yardstick.conf need to updated
+To configure influxdb, the following content in /etc/yardstick/yardstick.conf needs to be updated
.. code:: bash
@@ -111,7 +121,14 @@ To configure the influxdb ,the following content in /etc/yardstick/yardstick.con
debug = False
dispatcher = influxdb
-Dispatcher module of "Daily Job" is Influxdb.So the results are stored in influxdb and then published to Dashboard.
+ [dispatcher_influxdb]
+ timeout = 5
+ target = http://127.0.0.1:8086 ##Mention the IP where influxdb is running
+ db_name = yardstick
+ username = root
+ password = root
+
+Dispatcher module of "Daily Job" is Influxdb. So, the results are stored in influxdb and then published to Dashboard.
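As a quick sanity check, the dispatcher actually in effect can be read back from the configuration file. The sketch below works on a local copy of the layout shown above rather than the real /etc/yardstick/yardstick.conf:

```shell
# Write a yardstick.conf-style file and read the dispatcher back.
conf=$(mktemp)
cat > "$conf" <<'EOF'
[DEFAULT]
debug = False
dispatcher = influxdb

[dispatcher_influxdb]
timeout = 5
target = http://127.0.0.1:8086
db_name = yardstick
EOF
dispatcher=$(awk -F' = ' '/^dispatcher /{print $2}' "$conf")
echo "$dispatcher"    # influxdb
```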
**3. HTTP**: If the dispatcher module is configured as http, users can check test result on OPNFV testing dashboard which uses MongoDB as backend.
@@ -121,13 +138,17 @@ Dispatcher module of "Daily Job" is Influxdb.So the results are stored in influx
debug = False
dispatcher = http
-.. Figure:: ../images/UseCaseDashboard.png
+ [dispatcher_http]
+ timeout = 5
+ target = http://127.0.0.1:8000/results
+
+.. figure:: images/UseCaseDashboard.png
Detailing the dispatcher module in verify and daily Jobs:
----------------------------------------------------------
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-KVM4NFV updates the dispatcher module in the yardstick configuration file(/etc/yardstick/yardstick.conf) depending on the Job type(Verify/Daily).Once the test is completed, results are published to the respective dispatcher modules.
+KVM4NFV updates the dispatcher module in the yardstick configuration file (/etc/yardstick/yardstick.conf) depending on the job type (Verify/Daily). Once the test is completed, results are published to the respective dispatcher modules.
Dispatcher module is configured for each Job type as mentioned below.
@@ -182,9 +203,15 @@ Influxdb api which is already implemented in `Influxdb`_ will post the data in l
- Grafana can be accessed at `Login`_ using credentials opnfv/opnfv and used for visualizing the collected test data as shown in `Visual`_\
-.. Figure:: ../images/Dashboard-screenshot-1.png
+.. figure:: images/Dashboard-screenshot-1.png
+ :name: dashboard-screenshot-1
+ :width: 100%
+ :align: center
-.. Figure:: ../images/Dashboard-screenshot-2.png
+.. figure:: images/Dashboard-screenshot-2.png
+ :name: dashboard-screenshot-2
+ :width: 100%
+ :align: center
.. _Influxdb: https://git.opnfv.org/cgit/yardstick/tree/yardstick/dispatcher/influxdb.py
@@ -199,9 +226,9 @@ Influxdb api which is already implemented in `Influxdb`_ will post the data in l
.. _GrafanaDoc: http://docs.grafana.org/
Understanding Kvmfornfv Grafana Dashboard
-=========================================
+------------------------------------------
-The Kvmfornfv Dashboard found at http://testresults.opnfv.org/ currently supports graphical view of Cyclictest. For viewing Kvmfornfv Dashboard use,
+The Kvmfornfv dashboard found at http://testresults.opnfv.org/ currently supports a graphical view of cyclictest. To view the Kvmfornfv dashboard use,
.. code:: bash
@@ -212,6 +239,15 @@ The Kvmfornfv Dashboard found at http://testresults.opnfv.org/ currently support
Username: opnfv
Password: opnfv
+
+The JSON of the kvmfornfv-cyclictest dashboard can be found at:
+
+.. code:: bash
+
+    $ git clone https://gerrit.opnfv.org/gerrit/yardstick.git
+    $ cd yardstick/dashboard
+    $ cat KVMFORNFV-Cyclictest
+
The Dashboard has four tables, each representing a specific test-type of cyclictest case,
- Kvmfornfv_Cyclictest_Idle-Idle
@@ -226,33 +262,49 @@ Note:
**A brief about what each graph of the dashboard represents:**
1. Idle-Idle Graph
--------------------
-`Idle-Idle`_ graph displays the Average,Maximum and Minimum latency values obtained by running Idle_Idle test-type of the Cyclictest. Idle_Idle implies that no stress is applied on the Host or the Guest.
+~~~~~~~~~~~~~~~~~~~~
+`Idle-Idle`_ graph displays the Average, Maximum and Minimum latency values obtained by running the Idle_Idle test-type of the cyclictest. Idle_Idle implies that no stress is applied on the Host or the Guest.
.. _Idle-Idle: http://testresults.opnfv.org/grafana/dashboard/db/kvmfornfv-cyclictest?panelId=10&fullscreen
-.. Figure:: ../images/Idle-Idle.png
+.. figure:: images/Idle-Idle.png
+ :name: Idle-Idle graph
+ :width: 100%
+ :align: center
2. CPU_Stress-Idle Graph
---------------------------
-`Cpu_Stress-Idle`_ graph displays the Average,Maximum and Minimum latency values obtained by running Idle_Idle test-type of the Cyclictest. Idle_Idle implies that CPU stress is applied on the Host and no stress on the Guest.
+~~~~~~~~~~~~~~~~~~~~~~~~~
+`Cpu_Stress-Idle`_ graph displays the Average, Maximum and Minimum latency values obtained by running the Cpu_stress-Idle test-type of the cyclictest. Cpu_stress-Idle implies that CPU stress is applied on the Host and no stress on the Guest.
.. _Cpu_stress-Idle: http://testresults.opnfv.org/grafana/dashboard/db/kvmfornfv-cyclictest?panelId=11&fullscreen
-.. Figure:: ../images/Cpustress-Idle.png
+.. figure:: images/Cpustress-Idle.png
+ :name: cpustress-idle graph
+ :width: 100%
+ :align: center
3. Memory_Stress-Idle Graph
-----------------------------
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
`Memory_Stress-Idle`_ graph displays the Average, Maximum and Minimum latency values obtained by running the Memory_stress-Idle test-type of the Cyclictest. Memory_stress-Idle implies that Memory stress is applied on the Host and no stress on the Guest.
.. _Memory_Stress-Idle: http://testresults.opnfv.org/grafana/dashboard/db/kvmfornfv-cyclictest?panelId=12&fullscreen
-.. Figure:: ../images/Memorystress-Idle.png
+.. figure:: images/Memorystress-Idle.png
+ :name: memorystress-idle graph
+ :width: 100%
+ :align: center
4. IO_Stress-Idle Graph
-------------------------
+~~~~~~~~~~~~~~~~~~~~~~~~~
`IO_Stress-Idle`_ graph displays the Average, Maximum and Minimum latency values obtained by running the IO_stress-Idle test-type of the Cyclictest. IO_stress-Idle implies that IO stress is applied on the Host and no stress on the Guest.
.. _IO_Stress-Idle: http://testresults.opnfv.org/grafana/dashboard/db/kvmfornfv-cyclictest?panelId=13&fullscreen
-.. Figure:: ../images/IOstress-Idle.png
+.. figure:: images/IOstress-Idle.png
+ :name: iostress-idle graph
+ :width: 100%
+ :align: center
+
+Future Scope
+-------------
+Future work includes adding the kvmfornfv packet-forwarding test results to InfluxDB and the Grafana dashboard.
diff --git a/docs/userguide/low_latency.userguide.rst b/docs/userguide/low_latency.userguide.rst
index 66e63770c..88cc0347e 100644
--- a/docs/userguide/low_latency.userguide.rst
+++ b/docs/userguide/low_latency.userguide.rst
@@ -48,15 +48,19 @@ Please check the default kernel configuration in the source code at:
kernel/arch/x86/configs/opnfv.config.
Below is host kernel boot line example:
-::
-isolcpus=11-15,31-35 nohz_full=11-15,31-35 rcu_nocbs=11-15,31-35
-iommu=pt intel_iommu=on default_hugepagesz=1G hugepagesz=1G mce=off idle=poll
-intel_pstate=disable processor.max_cstate=1 pcie_asmp=off tsc=reliable
+
+.. code:: bash
+
+ isolcpus=11-15,31-35 nohz_full=11-15,31-35 rcu_nocbs=11-15,31-35
+ iommu=pt intel_iommu=on default_hugepagesz=1G hugepagesz=1G mce=off idle=poll
+ intel_pstate=disable processor.max_cstate=1 pcie_asmp=off tsc=reliable
Below is guest kernel boot line example
-::
-isolcpus=1 nohz_full=1 rcu_nocbs=1 mce=off idle=poll default_hugepagesz=1G
-hugepagesz=1G
+
+.. code:: bash
+
+ isolcpus=1 nohz_full=1 rcu_nocbs=1 mce=off idle=poll default_hugepagesz=1G
+ hugepagesz=1G
Please refer to `tuning.userguide` for more explanation.
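The boot lines above can be sanity-checked for the key low-latency options. This sketch matches against the example host boot line; on a live system the same check would read /proc/cmdline instead:

```shell
# Check that the expected low-latency options appear in a kernel boot line.
cmdline="isolcpus=11-15,31-35 nohz_full=11-15,31-35 rcu_nocbs=11-15,31-35 iommu=pt intel_iommu=on idle=poll"
missing=""
for opt in isolcpus nohz_full rcu_nocbs intel_iommu idle=poll; do
  case "$cmdline" in
    *"$opt"*) ;;                      # option present
    *) missing="$missing $opt" ;;     # collect anything absent
  esac
done
echo "missing:${missing:-none}"       # prints "missing:none" for this line
```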
@@ -68,45 +72,194 @@ environment is also required. Please refer to `tunning.userguide` for
more explanation.
Test cases to measure Latency
-=============================
+-----------------------------
+The performance of kvmfornfv is assessed by the latency values. The cyclictest and packet forwarding
+test cases produce real-time latency values (average, minimum and maximum).
+
+* Cyclictest
+
+* Packet Forwarding test
-Cyclictest case
----------------
+1. Cyclictest case
+-------------------
+Cyclictest results are the most frequently cited real-time Linux metric. The core concept of Cyclictest is very simple.
+In KVMFORNFV, cyclictest is implemented on the Guest-VM with the 4.4-Kernel RPM installed. It generates Max, Min and Avg
+values which help in assessing the kernel used. Cyclictest is currently divided into the following test types,
+
+* Idle-Idle
+* CPU_stress-Idle
+* Memory_stress-Idle
+* IO_stress-Idle
+
+Future scope of work may include the below test-types,
+
+* CPU_stress-CPU_stress
+* Memory_stress-Memory_stress
+* IO_stress-IO_stress
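A run of one such test type can be sketched as below. The invocation flags and the sample summary line follow common rt-tests usage and are assumptions, not the exact kvmfornfv CI command:

```shell
# Typical cyclictest invocation on the guest (requires rt-tests and root):
#   cyclictest -m -n -p 99 -i 1000 -l 100000 -q
# Parse the Min/Avg/Max values out of a sample -q summary line:
line="T: 0 ( 1234) P:99 I:1000 C: 100000 Min:      6 Act:   10 Avg:    9 Max:     41"
min=$(echo "$line" | sed 's/.*Min: *\([0-9]*\).*/\1/')
avg=$(echo "$line" | sed 's/.*Avg: *\([0-9]*\).*/\1/')
max=$(echo "$line" | sed 's/.*Max: *\([0-9]*\).*/\1/')
echo "min=$min avg=$avg max=$max"   # min=6 avg=9 max=41
```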
Understanding the naming convention
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+.. code:: bash
+
+ [Host-Type] - [Guest-Type]
+
+* **Host-Type :** Mentions the type of stress applied on the kernel of the Host
+* **Guest-Type :** Mentions the type of stress applied on the kernel of the Guest
+
+Example:
+
+.. code:: bash
+
+ Idle - CPU_stress
+
+The above name signifies that,
+
+- No Stress is applied on the Host kernel
+
+- CPU Stress is applied on the Guest kernel
+
+**Note:**
+
+- Stress is applied using the stress tool, which is installed as part of the deployment.
+  Stress can be applied on CPU, memory and input-output (read/write) operations.
+
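The stress invocations for each dimension might look like the following; the worker counts and durations are illustrative, not the values used by the deployment (the commands are kept as strings here so the sketch runs without the tool installed):

```shell
# Illustrative stress commands for each stress dimension.
duration=60
cpu_cmd="stress --cpu 4 --timeout $duration"               # spin sqrt() workers
mem_cmd="stress --vm 2 --vm-bytes 1G --timeout $duration"  # memory churn
io_cmd="stress --io 4 --timeout $duration"                 # I/O (sync) workers
printf '%s\n' "$cpu_cmd" "$mem_cmd" "$io_cmd"
```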
+Version Features
+~~~~~~~~~~~~~~~~
+
++-----------------------+------------------+-----------------+
+| **Test Name** | **Colorado** | **Danube** |
+| | | |
++-----------------------+------------------+-----------------+
+| - Idle - Idle | ``Y`` | ``Y`` |
+| | | |
+| - Cpustress - Idle | | ``Y`` |
+| | | |
+| - Memorystress - Idle | | ``Y`` |
+| | | |
+| - IOstress - Idle | | ``Y`` |
+| | | |
++-----------------------+------------------+-----------------+
+
+
Idle-Idle test-type
~~~~~~~~~~~~~~~~~~~
+Cyclictest is run on the Guest VM when neither Host nor Guest is under any kind of stress. This is the basic
+cyclictest of the KVMFORNFV project. It outputs Avg, Min and Max latency values.
+
+.. figure:: images/idle-idle-test-type.png
+ :name: idle-idle test type
+ :width: 100%
+ :align: center
CPU_Stress-Idle test-type
--------------------------
+~~~~~~~~~~~~~~~~~~~~~~~~~
+Here, the host is under CPU stress: the sqrt() function is called on the kernel multiple times, which
+results in increased CPU load. The cyclictest runs on the guest, where the guest is under no stress.
+It outputs Avg, Min and Max latency values.
+
+.. figure:: images/cpu-stress-idle-test-type.png
+ :name: cpu-stress-idle test type
+ :width: 100%
+ :align: center
Memory_Stress-Idle test-type
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+In this type, the host is under memory stress, where continuous memory operations are performed to
+increase the memory load (buffer stress). The cyclictest runs on the guest, where the guest is under
+no stress. It outputs Avg, Min and Max latency values.
+
+.. figure:: images/memory-stress-idle-test-type.png
+ :name: memory-stress-idle test type
+ :width: 100%
+ :align: center
IO_Stress-Idle test-type
~~~~~~~~~~~~~~~~~~~~~~~~
+The host is under constant input/output stress, i.e., multiple read-write operations are invoked to
+increase the load. Cyclictest runs on a guest VM launched on the same host, where the guest
+is under no stress. It outputs Avg, Min and Max latency values.
+
+.. figure:: images/io-stress-idle-test-type.png
+ :name: io-stress-idle test type
+ :width: 100%
+ :align: center
CPU_Stress-CPU_Stress test-type
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+Not implemented for Danube release.
Memory_Stress-Memory_Stress test-type
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+Not implemented for Danube release.
IO_Stress-IO_Stress test type
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+Not implemented for Danube release.
+
+2. Packet Forwarding Test cases
+-------------------------------
+Packet forwarding is another test case of Kvmfornfv. It measures the time taken by a packet to return
+to the source after reaching its destination. This test case uses the automated test framework provided
+by the OPNFV VSWITCHPERF project and a traffic generator (IXIA is used for kvmfornfv). Only the test
+cases that generate latency results are triggered as a part of the kvmfornfv daily job.
+
+The latency test measures the time required for a frame to travel from the originating device through the
+network to the destination device. Please note that the RFC2544 latency measurement will be superseded by
+a measurement of average latency over all successfully transferred packets or frames.
-Packet Forwarding Test case
----------------------------
+Packet forwarding test cases currently supports the following test types:
+
+* Packet forwarding to Host
+
+* Packet forwarding to Guest
+
+* Packet forwarding to Guest using SRIOV
+
+The testing approach adopted is black-box testing, meaning the test inputs are generated and the
+outputs captured and completely evaluated from outside of the System Under Test (SUT).
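A vsperf invocation for such runs might be assembled as below; the configuration file path and test names are assumptions following VSPERF conventions and should be checked against the VSPERF documentation for your release:

```shell
# Hypothetical vsperf command for host (phy2phy) and guest (pvp) forwarding.
conf="./conf/10_custom.conf"          # assumed custom settings file
tests="phy2phy_tput pvp_tput"         # assumed VSPERF test names
cmd="./vsperf --conf-file $conf $tests"
echo "$cmd"
```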
Packet forwarding to Host
~~~~~~~~~~~~~~~~~~~~~~~~~
+This is also known as the Physical port → vSwitch → physical port deployment.
+This test measures the time taken by the packet/frame generated by the traffic generator (phy) to travel
+through the network to the destination device (phy). The test reports min, avg and max latency values,
+which indicate the performance of the installed kernel.
+
+Packet flow,
+
+.. figure:: images/host_pk_fw.png
+ :name: packet forwarding to host
+ :width: 100%
+ :align: center
Packet forwarding to Guest
~~~~~~~~~~~~~~~~~~~~~~~~~~
+This is also known as Physical port → vSwitch → VNF → vSwitch → physical port deployment.
+
+This test measures the time taken by the packet/frame generated by the traffic generator (phy) to travel
+through the network, involving a guest, to the destination device (phy). The test reports min, avg and
+max latency values, which indicate the performance of the installed kernel.
+
+Packet flow,
+
+.. figure:: images/guest_pk_fw.png
+ :name: packet forwarding to guest
+ :width: 100%
+ :align: center
Packet forwarding to Guest using SRIOV
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+This test is used to verify the VNF and measure the base performance (maximum forwarding rate in
+fps and latency) that can be achieved by the VNF without a vSwitch. The performance metrics
+collected by this test will serve as a key comparison point for NIC passthrough technologies and
+vSwitches. VNF in this context refers to the hypervisor and the VM.
+
+**Note:** The Vsperf running on the host is still required.
+
+Packet flow,
+
+.. figure:: images/sriov_pk_fw.png
+ :name: packet forwarding to guest using sriov
+ :width: 100%
+ :align: center
diff --git a/docs/userguide/openstack.rst b/docs/userguide/openstack.rst
index bd1919991..929d2ba42 100644
--- a/docs/userguide/openstack.rst
+++ b/docs/userguide/openstack.rst
@@ -2,19 +2,19 @@
.. http://creativecommons.org/licenses/by/4.0
---------------------------------
-Colorado OpenStack User Guide
---------------------------------
+============================
+Danube OpenStack User Guide
+============================
OpenStack is a cloud operating system developed and released by the
`OpenStack project <https://www.openstack.org>`_. OpenStack is used in OPNFV
for controlling pools of compute, storage, and networking resources in a Pharos
compliant infrastructure.
-OpenStack is used in Colorado to manage tenants (known in OpenStack as
+OpenStack is used in Danube to manage tenants (known in OpenStack as
projects),users, services, images, flavours, and quotas across the Pharos
infrastructure.The OpenStack interface provides the primary interface for an
-operational Colorado deployment and it is from the "horizon console" that an
+operational Danube deployment and it is from the "horizon console" that an
OPNFV user will perform the majority of administrative and operational
activities on the deployment.
@@ -26,7 +26,7 @@ details and descriptions of how to configure and interact with the OpenStack
deployment.This guide can be used by lab engineers and operators to tune the
OpenStack deployment to your liking.
-Once you have configured OpenStack to your purposes, or the Colorado
+Once you have configured OpenStack to your purposes, or the Danube
deployment meets your needs as deployed, an operator, or administrator, will
find the best guidance for working with OpenStack in the
`OpenStack administration guide <http://docs.openstack.org/user-guide-admin>`_.
@@ -46,6 +46,6 @@ and enter the username and password:
password: admin
Other methods of interacting with and configuring OpenStack,, like the REST API
-and CLI are also available in the Colorado deployment, see the
+and CLI are also available in the Danube deployment, see the
`OpenStack administration guide <http://docs.openstack.org/user-guide-admin>`_
for more information on using those interfaces.
diff --git a/docs/userguide/packet_forwarding.userguide.rst b/docs/userguide/packet_forwarding.userguide.rst
index ba117508c..594952bdf 100644
--- a/docs/userguide/packet_forwarding.userguide.rst
+++ b/docs/userguide/packet_forwarding.userguide.rst
@@ -5,14 +5,14 @@
=================
PACKET FORWARDING
=================
-=======================
+
About Packet Forwarding
-=======================
+-----------------------
-Packet Forwarding is a test suite of KVMFORNFV which is used to measure the total time taken by a
-**Packet** generated by the traffic generator to return from Guest/Host as per the implemented
-scenario. Packet Forwarding is implemented using VSWITCHPERF/``VSPERF software of OPNFV`` and an
-``IXIA Traffic Generator``.
+Packet Forwarding is a test suite of KVMFORNFV. These latency tests measure the time taken by a
+**Packet** generated by the traffic generator to travel from the originating device through the
+network to the destination device. Packet Forwarding is implemented using the test framework
+of the OPNFV VSWITCHPERF project and an ``IXIA Traffic Generator``.
Version Features
----------------
@@ -29,14 +29,14 @@ Version Features
| | - Packet Forwarding is a testcase in KVMFORNFV |
| | - Implements three scenarios (Host/Guest/SRIOV) |
| | as part of testing in KVMFORNFV |
-| Danube | - Uses available testcases of OPNFV's VSWTICHPERF |
-| | software (PVP/PVVP) |
+| Danube | - Uses automated test framework of OPNFV |
+| | VSWITCHPERF software (PVP/PVVP) |
+| | |
| | - Works with IXIA Traffic Generator |
+-----------------------------+---------------------------------------------------+
-======
VSPERF
-======
+------
VSPerf is an OPNFV testing project.
VSPerf will develop a generic and architecture agnostic vSwitch testing framework and associated
@@ -47,17 +47,18 @@ VNF level testing and validation.
For complete VSPERF documentation go to `link.`_
-.. _link.: <http://artifacts.opnfv.org/vswitchperf/colorado/index.html>
+.. _link.: http://artifacts.opnfv.org/vswitchperf/colorado/index.html
Installation
-------------
+~~~~~~~~~~~~
+
Guidelines for installing `VSPERF`_.
-.. _VSPERF: <http://artifacts.opnfv.org/vswitchperf/colorado/configguide/index.html>
+.. _VSPERF: http://artifacts.opnfv.org/vswitchperf/colorado/configguide/index.html
Supported Operating Systems
----------------------------
+~~~~~~~~~~~~~~~~~~~~~~~~~~~
* CentOS 7
* Fedora 20
@@ -67,19 +68,21 @@ Supported Operating Systems
* Ubuntu 14.04
Supported vSwitches
--------------------
+~~~~~~~~~~~~~~~~~~~
+
The vSwitch must support Open Flow 1.3 or greater.
* OVS (built from source).
* OVS with DPDK (built from source).
Supported Hypervisors
----------------------
+~~~~~~~~~~~~~~~~~~~~~
* Qemu version 2.3.
Other Requirements
-------------------
+~~~~~~~~~~~~~~~~~~
+
The test suite requires Python 3.3 and relies on a number of other
packages. These need to be installed for the test suite to function.
@@ -93,9 +96,9 @@ user account, which will be used for vsperf execution.
Execution of installation script:
-.. code:: bashFtrace.debugging.tool.userguide.rst
+.. code:: bash
- $ cd Vswitchperf
+ $ cd vswitchperf
$ cd systems
$ ./build_base_machine.sh
@@ -115,10 +118,10 @@ For running testcases VSPERF is installed on Intel pod1-node2 in which centos
operating system is installed. Only VSPERF installation on CentOS is discussed here.
For installation steps on other operating systems please refer to `here`_.
-.. _here: <http://artifacts.opnfv.org/vswitchperf/colorado/configguide/index.html>
+.. _here: http://artifacts.opnfv.org/vswitchperf/colorado/configguide/index.html
For CentOS 7
------------------
+~~~~~~~~~~~~~~
## Python 3 Packages
@@ -147,16 +150,16 @@ To activate, simple run:
Working Behind a Proxy
------------------------
+~~~~~~~~~~~~~~~~~~~~~~
If you're behind a proxy, you'll likely want to configure this before running any of the above. For example:
.. code:: bash
- export http_proxy=proxy.mycompany.com:123
- export https_proxy=proxy.mycompany.com:123
-
-
+ export http_proxy="http://<username>:<password>@<proxy>:<port>/";
+ export https_proxy="https://<username>:<password>@<proxy>:<port>/";
+ export ftp_proxy="ftp://<username>:<password>@<proxy>:<port>/";
+ export socks_proxy="socks://<username>:<password>@<proxy>:<port>/";
.. _a link: http://www.softwarecollections.org/en/scls/rhscl/python33/
.. _virtualenv: https://virtualenv.readthedocs.org/en/latest/
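The exports above can be verified before launching the installation. A minimal sketch with a placeholder proxy URL (proxy.example.com is not a real host; substitute your own proxy and credentials):

```shell
# Placeholder proxy value for illustration only; replace with your real
# proxy URL before running any VSPERF installation steps.
export http_proxy="http://proxy.example.com:8080/"

# Confirm the variable is visible to child processes such as git and pip.
env | grep '^http_proxy='
```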
@@ -166,10 +169,11 @@ For other OS specific activation click `this link`_:
.. _this link: http://artifacts.opnfv.org/vswitchperf/colorado/configguide/installation.html#other-requirements
Traffic-Generators
--------------------
+------------------
+
VSPERF supports many traffic generators. For configuring VSPERF to work with an available traffic generator, go through `this`_.
-.. _this: <http://artifacts.opnfv.org/vswitchperf/colorado/configguide/trafficgen.html>
+.. _this: http://artifacts.opnfv.org/vswitchperf/colorado/configguide/trafficgen.html
VSPERF supports the following traffic generators:
@@ -191,35 +195,40 @@ and configure the various traffic generators.
As KVM4NFV uses only the IXIA traffic generator, it is discussed here. For complete documentation regarding traffic generators, please follow this `link`_.
-.. _link: <https://gerrit.opnfv.org/gerrit/gitweb?p=vswitchperf.git;a=blob;f=docs/configguide/trafficgen.rst;h=85fc35b886d30db3b92a6b7dcce7ca742b70cbdc;hb=HEAD>
+.. _link: https://gerrit.opnfv.org/gerrit/gitweb?p=vswitchperf.git;a=blob;f=docs/configguide/trafficgen.rst;h=85fc35b886d30db3b92a6b7dcce7ca742b70cbdc;hb=HEAD
-==========
IXIA Setup
-==========
+----------
-=====================
Hardware Requirements
-=====================
-VSPERF requires the following hardware to run tests: IXIA traffic generator (IxNetwork), a machine that runs the IXIA client software and a CentOS Linux release 7.1.1503 (Core) host.
+~~~~~~~~~~~~~~~~~~~~~
+
+VSPERF requires the following hardware to run tests: an IXIA traffic generator (IxNetwork), a machine that
+runs the IXIA client software, and a CentOS Linux release 7.1.1503 (Core) host.
Installation
--------------
+~~~~~~~~~~~~
Follow the [installation instructions] to install.
-IXIA Setup
-------------
On the CentOS 7 system
-----------------------
+~~~~~~~~~~~~~~~~~~~~~~
+
You need to install IxNetworkTclClient$(VER_NUM)Linux.bin.tgz.
On the IXIA client software system
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
Find the IxNetwork TCL server app (start -> All Programs -> IXIA -> IxNetwork -> IxNetwork_$(VER_NUM) -> IxNetwork TCL Server)
- Right click on IxNetwork TCL Server, select properties
- - Under shortcut tab in the Target dialogue box make sure there is the argument "-tclport xxxx" where xxxx is your port number (take note of this port number you will need it for the 10_custom.conf file).
+ - Under the Shortcut tab, in the Target dialogue box, make sure there is the argument "-tclport xxxx"
+
+where xxxx is your port number (take note of this port number; you will need it for the 10_custom.conf file).
-.. Figure:: ../images/IXIA1.png
+.. figure:: images/IXIA1.png
+ :name: IXIA1 setup
+ :width: 100%
+ :align: center
- Hit Ok and start the TCL server application
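The TCL port recorded above is later consumed by VSPERF's IxNet settings. A hypothetical `10_custom.conf` fragment is shown below; the parameter names follow VSPERF's traffic-generator configuration files, and all values are placeholders for your environment:

```python
# Hypothetical 10_custom.conf fragment; replace the values for your setup.
TRAFFICGEN = 'IxNet'                      # use the IxNetwork TCL API
TRAFFICGEN_IXNET_MACHINE = '10.10.120.6'  # host running IxNetwork TCL Server
TRAFFICGEN_IXNET_PORT = '8009'            # the -tclport value noted above
```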
@@ -261,7 +270,7 @@ Detailed description of options follows:
.. _test-results-share:
Test results share
--------------------
+~~~~~~~~~~~~~~~~~~
VSPERF is not able to retrieve test results via TCL API directly. Instead, all test
results are stored at IxNetwork TCL server. Results are stored at folder defined by
@@ -285,19 +294,20 @@ Example of sharing configuration:
Note: It is essential to use slashes '/' also in the path
configured by the ``TRAFFICGEN_IXNET_TESTER_RESULT_DIR`` parameter.
- * Install cifs-utils package.
+
+* Install cifs-utils package.
e.g. at rpm based Linux distribution:
- .. code-block:: console
+.. code-block:: console
yum install cifs-utils
- * Mount shared directory, so VSPERF can access test results.
+* Mount shared directory, so VSPERF can access test results.
e.g. by adding new record into ``/etc/fstab``
- .. code-block:: console
+.. code-block:: console
mount -t cifs //_TCL_SERVER_IP_OR_FQDN_/ixia_results /mnt/ixia_results
-o file_mode=0777,dir_mode=0777,nounix
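For a persistent mount across reboots, an equivalent ``/etc/fstab`` record might look like the following sketch (the server placeholder and options mirror the mount command above):

```console
//_TCL_SERVER_IP_OR_FQDN_/ixia_results  /mnt/ixia_results  cifs  file_mode=0777,dir_mode=0777,nounix  0  0
```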
@@ -308,6 +318,7 @@ is visible at DUT inside ``/mnt/ixia_results`` directory.
Cloning and building src dependencies
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
In order to run VSPERF, you will need to download DPDK and OVS. You can do this manually and build
them in a preferred location, or you could use vswitchperf/src. The vswitchperf/src directory
contains makefiles that will allow you to clone and build the libraries that VSPERF depends on,
@@ -326,13 +337,16 @@ To delete a src subdirectory and its contents to allow you to re-clone simply us
Configure the `./conf/10_custom.conf` file
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
The supplied `10_custom.conf` file must be modified, as it contains configuration items for which there are no reasonable default values.
-The configuration items that can be added is not limited to the initial contents. Any configuration item mentioned in any .conf file in `./conf` directory can be added and that item will be overridden by the custom
+The configuration items that can be added are not limited to the initial contents. Any configuration item
+mentioned in any .conf file in the `./conf` directory can be added, and that item will be overridden by the custom
configuration value.
Using a custom settings file
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
Alternatively a custom settings file can be passed to `vsperf` via the `--conf-file` argument.
.. code:: bash
@@ -347,8 +361,34 @@ argument will override both the default and your custom configuration files. Thi
2. Environment variables
3. Configuration file(s)
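The precedence order above can be sketched with plain shell parameter expansion; this is a hypothetical illustration (the variable names are not VSPERF's actual implementation):

```shell
# Hypothetical sketch of VSPERF's settings precedence:
# CLI argument > environment variable > configuration file default.
file_default="60"                                 # value from a .conf file
env_value="${RFC2544_DURATION:-$file_default}"    # env var overrides file
cli_value="${CLI_DURATION:-$env_value}"           # CLI overrides both
echo "effective rfc2544_duration=$cli_value"
```

With neither the environment variable nor a CLI value set, the file default wins.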
+vloop_vnf
+~~~~~~~~~
+
+VSPERF uses a VM image called vloop_vnf for looping traffic in the deployment
+scenarios involving VMs. The image can be downloaded from
+`<http://artifacts.opnfv.org/>`__.
+
+Please see the installation instructions for information on :ref:`vloop-vnf`
+images.
+
+.. _l2fwd-module:
+
+l2fwd Kernel Module
+~~~~~~~~~~~~~~~~~~~
+
+A Kernel Module that provides OSI Layer 2 IPv4 termination or forwarding with
+support for Destination Network Address Translation (DNAT) for both the MAC and
+IP addresses. l2fwd can be found in <vswitchperf_dir>/src/l2fwd.
+
+.. figure:: images/Guest_Scenario.png
+ :name: Guest_Scenario
+ :width: 100%
+ :align: center
+
+
Executing tests
~~~~~~~~~~~~~~~~
+
Before running any tests make sure you have root permissions by adding the following line to /etc/sudoers:
.. code:: bash
@@ -382,7 +422,7 @@ Some tests allow for configurable parameters, including test duration (in second
./vsperf --conf-file user_settings.py
--tests RFC2544Tput
 --test-param "rfc2544_duration=10;packet_sizes=128"
For all available options, check out the help dialog:
@@ -393,6 +433,7 @@ For all available options, check out the help dialog:
Testcases
----------
+
Available Tests in VSPERF are:
* phy2phy_tput
@@ -444,9 +485,9 @@ Example of execution of VSPERF in "trafficgen" mode:
--test-params "TRAFFIC={'traffic_type':'rfc2544_continuous','bidir':'False','framerate':60}"
-================================
Packet Forwarding Test Scenarios
-================================
+--------------------------------
+
KVMFORNFV currently implements three scenarios as part of testing:
* Host Scenario
@@ -455,32 +496,47 @@ KVMFORNFV currently implements three scenarios as part of testing:
Packet Forwarding Host Scenario
--------------------------------
-Here Host is NODE-2. It has VSPERF installed in it and is properly configured to use IXIA Traffic-generator by providing IXIA CARD, PORTS and Lib paths along with IP.
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Here the host (DUT) has VSPERF installed and is properly configured to use the IXIA traffic generator
+by providing the IXIA CARD, PORTS and Lib paths along with the IP.
Please refer to Figure 2.
-.. Figure:: ../images/Host_Scenario.png
+.. figure:: images/Host_Scenario.png
+ :name: Host_Scenario
+ :width: 100%
+ :align: center
Packet Forwarding Guest Scenario
---------------------------------
-Here the guest is a Virtual Machine (VM) launched by using a modified CentOS image(vsperf provided)
-on Node-2 (Host) using Qemu. In this scenario, the packet is initially forwarded to Host which is
-then forwarded to the launched guest. The time taken by the packet to reach the IXIA traffic-generator
-via Host and Guest is calculated and published as a test result of this scenario.
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-.. Figure:: ../images/Guest_Scenario.png
+Here the guest is a Virtual Machine (VM) launched using the vloop_vnf image provided by the vsperf project
+on the host/DUT using Qemu. In this latency test, the time taken by the frame/packet to travel from the
+originating device, through a network involving a guest, to the destination device is calculated.
+The resulting latency values define the performance of the installed kernel.
+
+.. figure:: images/Guest_Scenario.png
+ :name: Guest_Scenario
+ :width: 100%
+ :align: center
Packet Forwarding SRIOV Scenario
---------------------------------
-Unlike the packet forwarding to Guest-via-Host scenario, here the packet generated at the IXIA is
-directly forwarded to the Guest VM launched on Host by implementing SR-IOV interface at NIC level
-of Host .i.e., Node-2. The time taken by the packet to reach the IXIA traffic-generator is calculated
-and published as a test result for this scenario. SRIOV-support_ is given below, it details how to use SR-IOV.
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+In this test the packet generated at the IXIA is forwarded to the Guest VM launched on the host by
+implementing an SR-IOV interface at the NIC level of the host, i.e., the DUT. The time taken by the
+packet to travel through the network to the destination, the IXIA traffic generator, is calculated
+and published as a test result for this scenario.
-.. Figure:: ../images/SRIOV_Scenario.png
+SRIOV-support_ is given below; it details how to use SR-IOV.
+
+.. figure:: images/SRIOV_Scenario.png
+ :name: SRIOV_Scenario
+ :width: 100%
+ :align: center
Using vfio_pci with DPDK
-------------------------
+~~~~~~~~~~~~~~~~~~~~~~~~~
To use vfio with DPDK instead of igb_uio add into your custom configuration
file the following parameter:
@@ -521,7 +577,7 @@ To check that IOMMU is enabled on your platform:
.. _SRIOV-support:
Using SRIOV support
--------------------
+~~~~~~~~~~~~~~~~~~~
To use virtual functions of NIC with SRIOV support, use extended form
of NIC PCI slot definition:
@@ -553,3 +609,25 @@ For example:
* tests without vSwitch, where VM accesses VF interfaces directly
by PCI-passthrough to measure raw VM throughput performance.
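As a hypothetical illustration, the extended slot form appears in the NIC whitelist of a custom configuration file; the ``|vf<n>`` suffix selects a virtual function instead of the physical function (exact key and syntax per VSPERF's conf files, PCI addresses are placeholders):

```python
# Hypothetical custom configuration fragment: whitelist the first virtual
# function of each of two NIC ports for SRIOV-based tests.
WHITELIST_NICS = ['0000:05:00.0|vf0', '0000:05:00.1|vf1']
```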
+Using QEMU with PCI passthrough support
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Raw virtual machine throughput performance can be measured by execution of PVP
+test with direct access to NICs by PCI passthrough. To execute VM with direct
+access to PCI devices, enable vfio-pci_. In order to use virtual functions,
+SRIOV-support_ must be enabled.
+
+Execution of test with PCI passthrough with vswitch disabled:
+
+.. code-block:: console
+
+ $ ./vsperf --conf-file=<path_to_custom_conf>/10_custom.conf \
+ --vswitch none --vnf QemuPciPassthrough pvp_tput
+
+Any of the supported guest-loopback-application_ options can be used inside a VM with
+PCI passthrough support.
+
+Note: Qemu with PCI passthrough support can be used only with the PVP test
+deployment.
+
+.. _guest-loopback-application:
diff --git a/docs/userguide/pcm_utility.userguide.rst b/docs/userguide/pcm_utility.userguide.rst
index baef7059a..c8eb21d61 100644
--- a/docs/userguide/pcm_utility.userguide.rst
+++ b/docs/userguide/pcm_utility.userguide.rst
@@ -1,6 +1,15 @@
-=========================================================
+.. This work is licensed under a Creative Commons Attribution 4.0 International License.
+
+.. http://creativecommons.org/licenses/by/4.0
+
+===========================
+PCM Utility in KVMFORNFV
+===========================
+
Collecting Memory Bandwidth Information using PCM utility
-=========================================================
+---------------------------------------------------------
+This chapter describes how the PCM utility is used in kvmfornfv
+to collect memory bandwidth information.
About PCM utility
-----------------
@@ -22,10 +31,10 @@ Version Features
| | cyclic testcases. |
| | |
+-----------------------------+-----------------------------------------------+
+| | - pcm-memory.x will be executed before the |
+| Danube | execution of every testcase |
| | - pcm-memory.x provides the memory bandwidth |
| | data throughout the testcases |
-| | - pcm-memory.x will be executedbefore the |
-| Danube | execution of every testcase |
| | - used for all test-types (stress/idle) |
| | - Generated memory bandwidth logs which are |
| | to be published to the KVMFORNFV artifacts |
@@ -124,3 +133,9 @@ signal will be passed to terminate the pcm-memory process which was executing th
pcm-memory.x 60 &>/root/MBWInfo/MBWInfo_${testType}_${timeStamp}
+ where,
+ ${testType} = verify (or) daily
+
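The log naming used above can be sketched as follows; this is a hypothetical helper, not part of kvmfornfv, showing how the per-testcase log path is composed before pcm-memory.x is started in the background:

```shell
# Compose the per-testcase log path used with pcm-memory.x above.
testType="verify"                        # verify (or) daily
timeStamp="$(date +%Y-%m-%d_%H-%M-%S)"
logFile="/root/MBWInfo/MBWInfo_${testType}_${timeStamp}"
echo "${logFile}"
# pcm-memory.x 60 &> "${logFile}"        # sample for 60s, capture bandwidth
```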
+Future Scope
+------------
+PCM information will be added to cyclictest of kvmfornfv in yardstick.