-rw-r--r--  docs/scenarios/index.rst  12
-rw-r--r--  docs/scenarios/kvmfornfv.scenarios.description.rst  430
-rw-r--r--  docs/scenarios/os-nosdn-kvm-ha/os-nosdn-kvm-ha.description.rst  4
-rwxr-xr-x  docs/scenarios/os-nosdn-kvm_nfv_ovs_dpdk-ha/index.rst  12
-rw-r--r--  docs/scenarios/os-nosdn-kvm_nfv_ovs_dpdk-ha/os-nosdn-kvm_nfv_ovs_dpdk-ha.description.rst  240
-rwxr-xr-x  docs/scenarios/os-nosdn-kvm_nfv_ovs_dpdk-noha/index.rst  12
-rw-r--r--  docs/scenarios/os-nosdn-kvm_nfv_ovs_dpdk-noha/os-nosdn-kvm_nfv_ovs_dpdk-noha.description.rst  227
-rwxr-xr-x  docs/scenarios/os-nosdn-kvm_nfv_ovs_dpdk_bar-ha/index.rst  12
-rw-r--r--  docs/scenarios/os-nosdn-kvm_nfv_ovs_dpdk_bar-ha/os-nosdn-kvm_nfv_ovs_dpdk_bar-ha.description.rst  246
-rwxr-xr-x  docs/scenarios/os-nosdn-kvm_nfv_ovs_dpdk_bar-noha/index.rst  12
-rw-r--r--  docs/scenarios/os-nosdn-kvm_nfv_ovs_dpdk_bar-noha/os-nosdn-kvm_nfv_ovs_dpdk_bar-noha.description.rst  234
-rw-r--r--  docs/userguide/Ftrace.debugging.tool.userguide.rst  257
-rw-r--r--  docs/userguide/images/Cpustress-Idle.png  bin 0 -> 52980 bytes
-rw-r--r--  docs/userguide/images/Dashboard-screenshot-1.png  bin 0 -> 88894 bytes
-rw-r--r--  docs/userguide/images/Dashboard-screenshot-2.png  bin 0 -> 77862 bytes
-rw-r--r--  docs/userguide/images/Guest_Scenario.png  bin 0 -> 17182 bytes
-rw-r--r--  docs/userguide/images/Host_Scenario.png  bin 0 -> 14320 bytes
-rw-r--r--  docs/userguide/images/IOstress-Idle.png  bin 0 -> 57656 bytes
-rw-r--r--  docs/userguide/images/IXIA1.png  bin 0 -> 11667 bytes
-rw-r--r--  docs/userguide/images/Idle-Idle.png  bin 0 -> 52768 bytes
-rw-r--r--  docs/userguide/images/Memorystress-Idle.png  bin 0 -> 72474 bytes
-rw-r--r--  docs/userguide/images/SRIOV_Scenario.png  bin 0 -> 16525 bytes
-rw-r--r--  docs/userguide/images/UseCaseDashboard.png  bin 0 -> 87632 bytes
-rw-r--r--  docs/userguide/images/dashboard-architecture.png  bin 0 -> 80036 bytes
-rw-r--r--  docs/userguide/index.rst  6
-rw-r--r--  docs/userguide/kvmfornfv.cyclictest-dashboard.userguide.rst  258
-rw-r--r--  docs/userguide/low_latency.userguide.rst  44
-rw-r--r--  docs/userguide/packet_forwarding.userguide.rst  555
-rw-r--r--  docs/userguide/pcm_utility.userguide.rst  126
29 files changed, 2684 insertions, 3 deletions
diff --git a/docs/scenarios/index.rst b/docs/scenarios/index.rst
new file mode 100644
index 000000000..5f41fd414
--- /dev/null
+++ b/docs/scenarios/index.rst
@@ -0,0 +1,12 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International License.
+.. http://creativecommons.org/licenses/by/4.0
+
+******************************************
+KVM4NFV Scenarios Overview and Description
+******************************************
+
+.. toctree::
+ :numbered:
+ :maxdepth: 4
+
+ ./kvmfornfv.scenarios.description.rst
diff --git a/docs/scenarios/kvmfornfv.scenarios.description.rst b/docs/scenarios/kvmfornfv.scenarios.description.rst
new file mode 100644
index 000000000..459852d53
--- /dev/null
+++ b/docs/scenarios/kvmfornfv.scenarios.description.rst
@@ -0,0 +1,430 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International License.
+
+.. http://creativecommons.org/licenses/by/4.0
+
+========================
+KVM4NFV SCENARIO-TESTING
+========================
+
+ABSTRACT
+========
+
+This document describes the procedure to deploy and test KVM4NFV scenarios in a nested virtualization
+environment on a single system. This has been verified with the os-nosdn-kvm-ha, os-nosdn-kvm-noha,
+os-nosdn-kvm_ovs_dpdk-ha, os-nosdn-kvm_ovs_dpdk-noha and os-nosdn-kvm_ovs_dpdk_bar-ha test scenarios.
+
+Version Features
+================
+
++-----------------------------+---------------------------------------------+
+| | |
+| **Release** | **Features** |
+| | |
++=============================+=============================================+
+| | - Scenario Testing feature was not part of |
+| Colorado | the Colorado release of KVMFORNFV |
+| | |
++-----------------------------+---------------------------------------------+
+| | - High Availability deployment and |
+| | configuration of KVMFORNFV software suite |
+| Danube | - Multi-node setup with 3 controllers and |
+|                             |   2 compute nodes are deployed              |
+| | - Scenarios os-nosdn-kvm_ovs_dpdk-ha and |
+| | os-nosdn-kvm_ovs_dpdk_bar-ha are supported|
+| | |
++-----------------------------+---------------------------------------------+
+
+
+INTRODUCTION
+============
+The purpose of os-nosdn-kvm_ovs_dpdk-ha and os-nosdn-kvm_ovs_dpdk_bar-ha scenario testing is to
+test the High Availability deployment and configuration of the OPNFV software suite with OpenStack and
+without SDN software. This OPNFV software suite includes the latest OPNFV KVMFORNFV software packages
+for the Linux Kernel and QEMU patches for achieving low latency, and also OPNFV Barometer for traffic,
+performance and platform monitoring. The High Availability feature is achieved by deploying an OpenStack
+multi-node setup with 1 Fuel Master, 3 controllers and 2 compute nodes.
+
+KVMFORNFV packages will be installed on the compute nodes as part of the deployment. The scenario testcase deploys a multi-node setup using the OPNFV Fuel deployer.
+
+1. System pre-requisites
+------------------------
+
+- RAM - Minimum 16GB
+- HARD DISK - Minimum 500GB
+- Linux OS installed and running
+- Nested Virtualization enabled, which can be checked by,
+
+.. code:: bash
+
+ $ cat /sys/module/kvm_intel/parameters/nested
+ Y
+
+ $ cat /proc/cpuinfo | grep vmx
+
+*Note:*
+If Nested virtualization is disabled, enable it by,
+
+.. code:: bash
+
+ For Ubuntu:
+   $ modprobe kvm_intel
+ $ echo Y > /sys/module/kvm_intel/parameters/nested
+ $ sudo reboot
+
+ For RHEL:
+ $ cat << EOF > /etc/modprobe.d/kvm_intel.conf
+ options kvm-intel nested=1
+ options kvm-intel enable_shadow_vmcs=1
+ options kvm-intel enable_apicv=1
+ options kvm-intel ept=1
+ EOF
+ $ cat << EOF > /etc/sysctl.d/98-rp-filter.conf
+ net.ipv4.conf.default.rp_filter = 0
+ net.ipv4.conf.all.rp_filter = 0
+ EOF
+ $ sudo reboot
+
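The pre-requisite checks above can be wrapped in a small script. This is a minimal sketch, assuming an Intel host (AMD hosts use kvm_amd instead) and the 16GB RAM threshold from this guide; the helper name is hypothetical:

```shell
#!/bin/sh
# Sketch: verify the system pre-requisites listed above before deploying.
check_prereqs() {
    # RAM: at least 16 GB (threshold taken from this guide)
    mem_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
    if [ "$mem_kb" -ge $((16 * 1024 * 1024)) ]; then
        echo "RAM: ok"
    else
        echo "RAM: below 16GB"
    fi
    # Nested virtualization (Intel hosts; AMD exposes /sys/module/kvm_amd)
    if [ -r /sys/module/kvm_intel/parameters/nested ]; then
        echo "nested: $(cat /sys/module/kvm_intel/parameters/nested)"
    else
        echo "nested: kvm_intel module not loaded"
    fi
}
check_prereqs
```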
+2. Environment Setup
+--------------------
+
+**2.1 Configure apt.conf in /etc/apt**
+
+Create an apt.conf file in /etc/apt if it doesn't exist. It is used to set a proxy for apt-get when working behind a proxy server.
+
+.. code:: bash
+
+ Acquire::http::proxy "http://<username>:<password>@<proxy>:<port>/";
+ Acquire::https::proxy "https://<username>:<password>@<proxy>:<port>/";
+ Acquire::ftp::proxy "ftp://<username>:<password>@<proxy>:<port>/";
+ Acquire::socks::proxy "socks://<username>:<password>@<proxy>:<port>/";
+
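For convenience, the four proxy lines can be generated from a single host string. A sketch; the helper name and the example credentials are hypothetical:

```shell
#!/bin/sh
# Sketch: emit the apt.conf proxy lines from one <user>:<pass>@<proxy>:<port> string.
gen_apt_conf() {
    proxy_host="$1"
    for scheme in http https ftp socks; do
        printf 'Acquire::%s::proxy "%s://%s/";\n' "$scheme" "$scheme" "$proxy_host"
    done
}
# Review the output, then redirect it into /etc/apt/apt.conf as root.
gen_apt_conf "user:password@proxy.example.com:8080"
```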
+**2.2 Network Time Protocol (NTP) setup and configuration**
+
+Install ntp by:
+
+.. code:: bash
+
+ $ sudo apt-get update
+ $ sudo apt-get install -y ntp
+
+Insert the following two lines after the “server ntp.ubuntu.com” line and before the “# Access control configuration; see `link`_ for” line in the /etc/ntp.conf file:
+
+.. _link: /usr/share/doc/ntp-doc/html/accopt.html
+
+.. code:: bash
+
+   server 127.127.1.0
+   fudge 127.127.1.0 stratum 10
+
+Restart the ntp server:
+
+.. code:: bash
+
+ $ sudo service ntp restart
+
+3. Scenario Testing
+-------------------
+
+There are three ways of performing scenario testing:
+ - 3.1 Fuel
+ - 3.2 OPNFV-Playground
+ - 3.3 Jenkins Project
+
+3.1 Fuel
+~~~~~~~~~
+
+**3.1.1 Clone the fuel repo :**
+
+.. code:: bash
+
+ git clone https://gerrit.opnfv.org/gerrit/fuel.git
+
+**3.1.2 Check out the specific version of the branch to deploy:**
+
+.. code:: bash
+
+ git checkout stable/Colorado
+
+**3.1.3 Building the Fuel iso :**
+
+.. code:: bash
+
+ $ cd ~/fuel/ci/
+ $ ./build.sh -h
+
+Provides the necessary options that are required to build an iso. A ``customized iso`` is created as per the deployment needs:
+
+.. code:: bash
+
+ $ cd ~/fuel/build/
+ $ make
+
+Alternatively, download the latest stable Fuel iso from `here`_.
+
+.. _here: http://artifacts.opnfv.org/fuel/colorado/opnfv-colorado.3.0.iso
+
+**3.1.4 Creating a new deployment scenario**
+
+``(i). Naming the scenario file:``
+
+Include the new deployment scenario yaml file in deploy/scenario/. The file name should adhere to the following format :
+
+.. code:: bash
+
+ <ha | no-ha>_<SDN Controller>_<feature-1>_..._<feature-n>.yaml
+
+``(ii). The deployment configuration file should contain configuration metadata as stated below:``
+
+.. code:: bash
+
+ deployment-scenario-metadata:
+ title:
+ version:
+ created:
+
+``(iii). To include fuel plugins in the deployment configuration file, use the “stack-extensions” key:``
+
+.. code:: bash
+
+ Example:
+ stack-extensions:
+ - module: fuel-plugin-collectd-ceilometer
+ module-config-name: fuel-barometer
+ module-config-version: 1.0.0
+ module-config-override:
+ #module-config overrides
+
+
+The “module-config-name” and “module-config-version” should be the same as the name of the plugin configuration file.
+
+
+The “module-config-override” key is used to configure the plugin by overriding the corresponding keys in the plugin config yaml file present in ~/fuel/deploy/config/plugins/.
+
+``(iv). To configure the HA/NOHA mode, network segmentation types and role-to-node assignments, use the “dea-override-config” key.``
+
+.. code:: bash
+
+ Example:
+ dea-override-config:
+ environment:
+ mode: ha
+ net_segment_type: tun
+ nodes:
+ - id: 1
+ interfaces: interfaces_1
+ role: mongo,controller,opendaylight
+ - id: 2
+ interfaces: interfaces_1
+ role: mongo,controller
+ - id: 3
+ interfaces: interfaces_1
+ role: mongo,controller
+ - id: 4
+ interfaces: interfaces_1
+ role: ceph-osd,compute
+ - id: 5
+ interfaces: interfaces_1
+ role: ceph-osd,compute
+ settings:
+ editable:
+ storage:
+ ephemeral_ceph:
+ description: Configures Nova to store ephemeral volumes in RBD. This works best if Ceph is enabled for volumes and images, too. Enables live migration of all types of Ceph backed VMs (without this option, live migration will only work with VMs launched from Cinder volumes).
+ label: Ceph RBD for ephemeral volumes (Nova)
+ type: checkbox
+ value: true
+ weight: 75
+ images_ceph:
+ description: Configures Glance to use the Ceph RBD backend to store images.If enabled, this option will prevent Swift from installing.
+ label: Ceph RBD for images (Glance)
+ restrictions:
+ - settings:storage.images_vcenter.value == true: Only one Glance backend could be selected.
+ type: checkbox
+ value: true
+ weight: 30
+
+The “dea-override-config” key should provide at least
+{environment: {mode: 'value', net_segment_type: 'value'}} and {nodes: 1,2,...}, and can also enable
+additional stack features such as ceph and heat, which override the corresponding keys in
+dea_base.yaml and dea_pod_override.yaml.
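A minimal sketch of the required keys under “dea-override-config” (the values shown are illustrative placeholders, not a tested deployment):

```yaml
dea-override-config:
  environment:
    mode: ha                # or: noha
    net_segment_type: tun   # or: vlan
  nodes:
    - id: 1
      interfaces: interfaces_1
      role: controller
    - id: 2
      interfaces: interfaces_1
      role: ceph-osd,compute
```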
+
+``(v). In order to configure the pod dha definition, use the “dha-override-config” key.``
+
+The “dha-override-config” key is an optional key present at the end of the scenario file.
+
+``(vi). The scenario.yaml file is used to map the short names of scenarios to one or more deployment scenario configuration yaml files.``
+
+The short scenario names should follow the scheme below:
+
+.. code:: bash
+
+ [os]-[controller]-[feature]-[mode]-[option]
+
+ [os]: mandatory
+ possible value: os
+
+Please note that this field is needed in order to select parent jobs to list and to set blocking relations between them.
+
+.. code:: bash
+
+
+ [controller]: mandatory
+ example values: nosdn, ocl, odl, onos
+
+ [mode]: mandatory
+ possible values: ha, noha
+
+ [option]: optional
+
+Used for scenarios that do not fit into the naming scheme.
+The optional field should not be included in the short scenario name if there is no optional feature.
+
+.. code:: bash
+
+ Example:
+ 1. os-nosdn-kvm-noha
+ 2. os-nosdn-kvm_ovs_dpdk_bar-ha
+
+
+Example of how short scenario names are mapped to configuration yaml files:
+
+.. code:: bash
+
+ os-nosdn-kvm_ovs_dpdk-ha:
+ configfile: ha_nfv-kvm_nfv-ovs-dpdk_heat_ceilometer_scenario.yaml
+
+Note:
+
+- ( - ) is used as the separator between fields. [os-nosdn-kvm_ovs_dpdk-ha]
+
+- ( _ ) is used to separate values belonging to the same field. [os-nosdn-kvm_ovs_bar-ha]
+
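The naming scheme above can be checked mechanically. A sketch; the regular expression is an assumption derived from the field descriptions, not an official validator:

```shell
#!/bin/sh
# Sketch: validate a short scenario name against
# [os]-[controller]-[feature]-[mode]-[option].
valid_scenario() {
    echo "$1" | grep -Eq '^os-[a-z]+-[a-z0-9_]+-(ha|noha)(-[a-z0-9_]+)?$'
}
valid_scenario "os-nosdn-kvm_ovs_dpdk_bar-ha" && echo "valid"
valid_scenario "kvm-ha" || echo "invalid"
```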
+**3.1.5 Deploying the scenario**
+
+
+Command to deploy the os-nosdn-kvm_ovs_dpdk-ha scenario:
+
+.. code:: bash
+
+ $ cd ~/fuel/ci/
+   $ sudo ./deploy.sh -f -b file:///tmp/opnfv-fuel/deploy/config -l devel-pipeline -p default -s ha_nfv-kvm_nfv-ovs-dpdk_heat_ceilometer_scenario.yaml -i file:///tmp/opnfv.iso
+
+where,
+ -b is used to specify the configuration directory
+
+ -i is used to specify the image downloaded from artifacts.
+
+Note:
+
+.. code:: bash
+
+ Check $ sudo ./deploy.sh -h for further information.
+
+
+3.2 OPNFV-Playground
+~~~~~~~~~~~~~~~~~~~~
+
+Install OPNFV-Playground (the tool chain to deploy/test CI scenarios in fuel@opnfv):
+
+.. code:: bash
+
+ $ cd ~
+ $ git clone https://github.com/jonasbjurel/OPNFV-Playground.git
+ $ cd OPNFV-Playground/ci_fuel_opnfv/
+
+- Follow the README.rst in the ~/OPNFV-Playground/ci_fuel_opnfv sub-folder to complete all necessary installation and setup.
+- The “RUNNING THE PIPELINE” section in README.rst explains how to use this ci_pipeline to deploy/test CI test scenarios; you can also use
+
+.. code:: bash
+
+ ./ci_pipeline.sh --help ##to learn more options.
+
+
+
+``3.2.1 Downgrade paramiko package from 2.x.x to 1.10.0``
+
+The paramiko package 2.x.x doesn’t currently work with the OPNFV-Playground tool chain; Jira ticket FUEL-188 has been raised for this issue.
+
+Check paramiko package version by following below steps in your system:
+
+.. code:: bash
+
+   $ python
+   Python 2.7.6 (default, Jun 22 2015, 17:58:13)
+   [GCC 4.8.2] on linux2
+   Type "help", "copyright", "credits" or "license" for more information.
+   >>> import paramiko
+   >>> print paramiko.__version__
+   >>> exit()
+
+This prints the current paramiko package version; if it is 2.x.x, uninstall this version by:
+
+.. code:: bash
+
+ $ sudo pip uninstall paramiko
+
+Ubuntu 14.04 LTS provides the python-paramiko package (1.10.0); install it by:
+
+.. code:: bash
+
+ $ sudo apt-get install python-paramiko
+
+
+Verify it by following:
+
+.. code:: bash
+
+ $ python
+ >>> import paramiko
+ >>> print paramiko.__version__
+ >>> exit()
+
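The check-and-decide step can be scripted. A sketch with a hypothetical helper name; the downgrade commands in the comment are the ones shown above:

```shell
#!/bin/sh
# Sketch: decide whether the installed paramiko must be downgraded (FUEL-188).
needs_downgrade() {
    case "$1" in
        2.*) return 0 ;;   # 2.x.x is affected
        *)   return 1 ;;
    esac
}
ver=$(python -c 'import paramiko; print(paramiko.__version__)' 2>/dev/null)
if [ -n "$ver" ] && needs_downgrade "$ver"; then
    echo "paramiko $ver needs a downgrade:"
    echo "  sudo pip uninstall paramiko && sudo apt-get install python-paramiko"
fi
```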
+
+``3.2.2 Clone the fuel@opnfv``
+
+Check out the specific version of the desired branch of fuel@opnfv:
+
+.. code:: bash
+
+ $ cd ~
+ $ git clone https://gerrit.opnfv.org/gerrit/fuel.git
+ $ cd fuel
+ $ git checkout stable/Colorado
+
+
+``3.2.3 Creating the scenario``
+
+Implement the scenario file as described in 3.1.4
+
+``3.2.4 Deploying the scenario``
+
+You can use the following commands to deploy/test the os-nosdn-kvm_ovs_dpdk-noha and os-nosdn-kvm_ovs_dpdk-ha scenarios:
+
+.. code:: bash
+
+ $ cd ~/OPNFV-Playground/ci_fuel_opnfv/
+
+For os-nosdn-kvm_ovs_dpdk-ha :
+
+.. code:: bash
+
+ $ ./ci_pipeline.sh -r ~/fuel -i /root/fuel.iso -B -n intel-sc -s os-nosdn-kvm_ovs_dpdk-ha
+
+For os-nosdn-kvm_ovs_dpdk_bar-ha:
+
+.. code:: bash
+
+ $ ./ci_pipeline.sh -r ~/fuel -i /root/fuel.iso -B -n intel-sc -s os-nosdn-kvm_ovs_dpdk_bar-ha
+
+The “ci_pipeline.sh” script first clones the local fuel repo, then deploys the
+os-nosdn-kvm_ovs_dpdk-ha/os-nosdn-kvm_ovs_dpdk-noha scenario from the given ISO, and runs Functest
+and Yardstick tests. The log of the deployment/test (ci.log) can be found in
+~/OPNFV-Playground/ci_fuel_opnfv/artifact/master/YYYY-MM-DD—HH.mm, where YYYY-MM-DD—HH.mm is the
+date/time at which you started “ci_pipeline.sh”.
+
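To open the newest ci.log without typing the timestamped directory by hand, something like the following can be used (GNU find assumed; the helper name is hypothetical):

```shell
#!/bin/sh
# Sketch: print the path of the most recently modified ci.log under a directory.
latest_ci_log() {
    find "$1" -name ci.log -printf '%T@ %p\n' 2>/dev/null \
        | sort -nr | head -n1 | cut -d' ' -f2-
}
latest_ci_log "$HOME/OPNFV-Playground/ci_fuel_opnfv/artifact/master"
```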
+Note:
+
+.. code:: bash
+
+ Check $ ./ci_pipeline.sh -h for further information.
+
+
+3.3 Jenkins Project
+~~~~~~~~~~~~~~~~~~~
+
+The os-nosdn-kvm_ovs_dpdk-ha and os-nosdn-kvm_ovs_dpdk_bar-ha scenarios can be executed from the following Jenkins projects:
+
+ 1. "fuel-os-nosdn-kvm_ovs_dpdk-ha-baremetal-daily-master" (os-nosdn-kvm_ovs_dpdk-ha)
+ 2. "fuel-os-nosdn-kvm_ovs_dpdk_bar-ha-baremetal-daily-master" (os-nosdn-kvm_ovs_dpdk_bar-ha)
diff --git a/docs/scenarios/os-nosdn-kvm-ha/os-nosdn-kvm-ha.description.rst b/docs/scenarios/os-nosdn-kvm-ha/os-nosdn-kvm-ha.description.rst
index d60276e0f..9d8285831 100644
--- a/docs/scenarios/os-nosdn-kvm-ha/os-nosdn-kvm-ha.description.rst
+++ b/docs/scenarios/os-nosdn-kvm-ha/os-nosdn-kvm-ha.description.rst
@@ -122,5 +122,5 @@ Known Limitations, Issues and Workarounds
References
==========
-For more information on the OPNFV Colorado release, please visit
-http://www.opnfv.org/colorado
+For more information on the OPNFV Danube release, please visit
+http://www.opnfv.org/danube
diff --git a/docs/scenarios/os-nosdn-kvm_nfv_ovs_dpdk-ha/index.rst b/docs/scenarios/os-nosdn-kvm_nfv_ovs_dpdk-ha/index.rst
new file mode 100755
index 000000000..5582f46c7
--- /dev/null
+++ b/docs/scenarios/os-nosdn-kvm_nfv_ovs_dpdk-ha/index.rst
@@ -0,0 +1,12 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International License.
+.. http://creativecommons.org/licenses/by/4.0
+
+*****************************************************
+os-nosdn-kvm_nfv_ovs_dpdk-ha Overview and Description
+*****************************************************
+
+.. toctree::
+ :numbered:
+ :maxdepth: 3
+
+ os-nosdn-kvm_nfv_ovs_dpdk-ha.description.rst
diff --git a/docs/scenarios/os-nosdn-kvm_nfv_ovs_dpdk-ha/os-nosdn-kvm_nfv_ovs_dpdk-ha.description.rst b/docs/scenarios/os-nosdn-kvm_nfv_ovs_dpdk-ha/os-nosdn-kvm_nfv_ovs_dpdk-ha.description.rst
new file mode 100644
index 000000000..40b9748af
--- /dev/null
+++ b/docs/scenarios/os-nosdn-kvm_nfv_ovs_dpdk-ha/os-nosdn-kvm_nfv_ovs_dpdk-ha.description.rst
@@ -0,0 +1,240 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International License.
+
+.. http://creativecommons.org/licenses/by/4.0
+
+
+Introduction
+============
+
+.. In this section explain the purpose of the scenario and the
+ types of capabilities provided
+
+The purpose of os-nosdn-kvm_ovs_dpdk-ha scenario testing is to test the
+High Availability deployment and configuration of the OPNFV software suite
+with OpenStack and without SDN software. This OPNFV software suite
+includes the latest OPNFV KVM4NFV software packages for the Linux Kernel and
+QEMU patches for achieving low latency. The High Availability feature is achieved
+by deploying an OpenStack multi-node setup with 3 controllers and 2 compute nodes.
+
+KVM4NFV packages will be installed on the compute nodes as part of the deployment. This scenario testcase deploys a multi-node setup using the OPNFV Fuel deployer.
+
+Scenario Components and Composition
+===================================
+.. In this section describe the unique components that make up the scenario,
+.. what each component provides and why it has been included in order
+.. to communicate to the user the capabilities available in this scenario.
+
+This scenario deploys the High Availability OPNFV Cloud based on the
+configurations provided in ha_nfv-kvm_nfv-ovs-dpdk_heat_ceilometer_scenario.yaml.
+This yaml file contains the following configurations and is passed as an
+argument to the deploy.py script.
+
+* scenario.yaml: This configuration file defines the translation between a
+  short deployment scenario name (os-nosdn-kvm_ovs_dpdk-ha) and an actual
+  deployment scenario configuration file
+  (ha_nfv-kvm_nfv-ovs-dpdk_heat_ceilometer_scenario.yaml).
+
+* ``deployment-scenario-metadata:`` Contains the configuration metadata like
+ title,version,created,comment.
+
+.. code:: bash
+
+ deployment-scenario-metadata:
+ title: NFV KVM and OVS-DPDK HA deployment
+ version: 0.0.1
+ created: Dec 20 2016
+ comment: NFV KVM and OVS-DPDK
+
+* ``stack-extensions:`` Stack extensions are OPNFV added-value features in the
+  form of a fuel-plugin. Plugins listed in stack-extensions are enabled and
+  configured. The os-nosdn-kvm_ovs_dpdk-ha scenario currently uses the KVM-1.0.0 plugin.
+
+.. code:: bash
+
+ stack-extensions:
+ - module: fuel-plugin-kvm
+ module-config-name: fuel-nfvkvm
+ module-config-version: 1.0.0
+ module-config-override:
+ # Module config overrides
+
+* ``dea-override-config:`` Used to configure the HA mode, network segmentation
+  types and role-to-node assignments. These configurations override the
+  corresponding keys in dea_base.yaml and dea_pod_override.yaml.
+  These keys are used to deploy multiple nodes (``3 controllers, 2 computes``)
+  as mentioned below.
+
+ * **Node 1**: This node has MongoDB and Controller roles. The controller
+ node runs the Identity service, Image Service, management portions of
+ Compute and Networking, Networking plug-in and the dashboard. The
+ Telemetry service which was designed to support billing systems for
+ OpenStack cloud resources uses a NoSQL database to store information.
+ The database typically runs on the controller node.
+
+ * **Node 2**: This node has Controller and Ceph-osd roles. Ceph is a
+ massively scalable, open source, distributed storage system. It is
+ comprised of an object store, block store and a POSIX-compliant distributed
+ file system. Enabling Ceph, configures Nova to store ephemeral volumes in
+ RBD, configures Glance to use the Ceph RBD backend to store images,
+ configures Cinder to store volumes in Ceph RBD images and configures the
+ default number of object replicas in Ceph.
+
+ * **Node 3**: This node has Controller role in order to achieve high
+ availability.
+
+ * **Node 4**: This node has Compute role. The compute node runs the
+ hypervisor portion of Compute that operates tenant virtual machines
+ or instances. By default, Compute uses KVM as the hypervisor.
+
+ * **Node 5**: This node has compute role.
+
+ The below is the ``dea-override-config`` of the ha_nfv-kvm_nfv-ovs-dpdk_heat_ceilometer_scenario.yaml file.
+
+.. code:: bash
+
+ dea-override-config:
+ fuel:
+ FEATURE_GROUPS:
+ - experimental
+ nodes:
+ - id: 1
+ interfaces: interfaces_1
+ role: controller
+ - id: 2
+ interfaces: interfaces_1
+ role: mongo,controller
+ - id: 3
+ interfaces: interfaces_1
+ role: ceph-osd,controller
+ - id: 4
+ interfaces: interfaces_dpdk
+ role: ceph-osd,compute
+ attributes: attributes_1
+ - id: 5
+ interfaces: interfaces_dpdk
+ role: ceph-osd,compute
+ attributes: attributes_1
+
+ attributes_1:
+ hugepages:
+ dpdk:
+ value: 1024
+ nova:
+ value:
+ '2048': 1024
+
+ settings:
+ editable:
+ storage:
+ ephemeral_ceph:
+ description: Configures Nova to store ephemeral volumes in RBD. This works best if Ceph is enabled for volumes and images, too. Enables live migration of all types of Ceph backed VMs (without this option, live migration will only work with VMs launched from Cinder volumes).
+ label: Ceph RBD for ephemeral volumes (Nova)
+ type: checkbox
+ value: true
+ weight: 75
+ images_ceph:
+ description: Configures Glance to use the Ceph RBD backend to store images. If enabled, this option will prevent Swift from installing.
+ label: Ceph RBD for images (Glance)
+ restrictions:
+ - settings:storage.images_vcenter.value == true: Only one Glance backend could be selected.
+ type: checkbox
+ value: true
+ weight: 30
+
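For reference, the ``attributes_1`` hugepage settings above reserve memory as follows; the interpretation (the nova entry maps a 2048 kB page size to a page count of 1024) is an assumption from the yaml layout:

```shell
#!/bin/sh
# Sketch: memory reserved by the nova hugepage setting '2048': 1024,
# i.e. 1024 pages of 2048 kB each.
pages=1024
page_kb=2048
echo "nova hugepages: $((pages * page_kb / 1024)) MB"
```

That works out to 2 GB reserved for guest hugepages on each compute node, in addition to the dpdk value of 1024 (assumed to be socket memory in MB).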
+* ``dha-override-config:`` Provides information about the VM definition and
+  network config for a virtual deployment. These configurations override
+  the pod dha definition and point to the controller, compute and
+  fuel definition files.
+
+ The below is the ``dha-override-config`` of the ha_nfv-kvm_nfv-ovs-dpdk_heat_ceilometer_scenario.yaml file.
+
+.. code:: bash
+
+ dha-override-config:
+ nodes:
+ - id: 1
+ libvirtName: controller1
+ libvirtTemplate: templates/virtual_environment/vms/controller.xml
+ - id: 2
+ libvirtName: controller2
+ libvirtTemplate: templates/virtual_environment/vms/controller.xml
+ - id: 3
+ libvirtName: controller3
+ libvirtTemplate: templates/virtual_environment/vms/controller.xml
+ - id: 4
+ libvirtName: compute1
+ libvirtTemplate: templates/virtual_environment/vms/compute.xml
+ - id: 5
+ libvirtName: compute2
+ libvirtTemplate: templates/virtual_environment/vms/compute.xml
+ - id: 6
+ libvirtName: fuel-master
+ libvirtTemplate: templates/virtual_environment/vms/fuel.xml
+ isFuel: yes
+ username: root
+ password: r00tme
+
+
+* The os-nosdn-kvm_ovs_dpdk-ha scenario is successful when all 5 nodes are
+  accessible, up and running.
+
+
+
+**Note:**
+
+* In os-nosdn-kvm_ovs_dpdk-ha scenario, OVS is installed on the compute nodes with DPDK configured
+
+* This results in faster communication and data transfer among the compute nodes
+
+
+Scenario Usage Overview
+=======================
+.. Provide a brief overview on how to use the scenario and the features available to the
+.. user. This should be an "introduction" to the userguide document, and explicitly link to it,
+.. where the specifics of the features are covered including examples and API's
+
+* The high availability feature can be achieved by executing deploy.py with
+  ha_nfv-kvm_nfv-ovs-dpdk_heat_ceilometer_scenario.yaml as an argument.
+* Install Fuel Master and deploy OPNFV Cloud from scratch on Hardware
+ Environment:
+
+
+Command to deploy the os-nosdn-kvm_ovs_dpdk-ha scenario:
+
+.. code:: bash
+
+ $ cd ~/fuel/ci/
+ $ sudo ./deploy.sh -f -b file:///tmp/opnfv-fuel/deploy/config -l devel-pipeline -p default -s ha_nfv-kvm_nfv-ovs-dpdk_heat_ceilometer_scenario.yaml -i file:///tmp/opnfv.iso
+
+where,
+ -b is used to specify the configuration directory
+
+ -i is used to specify the image downloaded from artifacts.
+
+Note:
+
+.. code:: bash
+
+ Check $ sudo ./deploy.sh -h for further information.
+
+* The os-nosdn-kvm_ovs_dpdk-ha scenario can be executed from the Jenkins project
+  "fuel-os-nosdn-kvm_ovs_dpdk-ha-baremetal-daily-master".
+* This scenario provides the High Availability feature by deploying
+  3 controller and 2 compute nodes and checking that all 5 nodes are
+  accessible (IP, up & running).
+* The test scenario passes if the deployment is successful and all 5 nodes
+  are accessible (IP, up & running).
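The accessibility check described in the bullets above can be sketched as follows; the node IPs are hypothetical placeholders (real addresses come from the Fuel deployment):

```shell
#!/bin/sh
# Sketch: count how many of the given node IPs answer a single ping.
check_nodes() {
    ok=0
    for ip in "$@"; do
        ping -c1 -W2 "$ip" >/dev/null 2>&1 && ok=$((ok + 1))
    done
    echo "$ok/$# nodes reachable"
}
# 3 controllers + 2 computes (placeholder addresses)
check_nodes 10.20.0.3 10.20.0.4 10.20.0.5 10.20.0.6 10.20.0.7
```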
+
+Known Limitations, Issues and Workarounds
+=========================================
+.. Explain any known limitations here.
+
+* The test scenario os-nosdn-kvm_ovs_dpdk-ha result is not stable.
+
+* The Functest and Yardstick test suites are not stable; instances are not getting an IP address from DHCP (a functest issue).
+
+
+References
+==========
+
+For more information on the OPNFV Danube release, please visit
+http://www.opnfv.org/danube
diff --git a/docs/scenarios/os-nosdn-kvm_nfv_ovs_dpdk-noha/index.rst b/docs/scenarios/os-nosdn-kvm_nfv_ovs_dpdk-noha/index.rst
new file mode 100755
index 000000000..9d60465d6
--- /dev/null
+++ b/docs/scenarios/os-nosdn-kvm_nfv_ovs_dpdk-noha/index.rst
@@ -0,0 +1,12 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International License.
+.. http://creativecommons.org/licenses/by/4.0
+
+*******************************************************
+os-nosdn-kvm_nfv_ovs_dpdk-noha Overview and Description
+*******************************************************
+
+.. toctree::
+ :numbered:
+ :maxdepth: 3
+
+ os-nosdn-kvm_nfv_ovs_dpdk-noha.description.rst
diff --git a/docs/scenarios/os-nosdn-kvm_nfv_ovs_dpdk-noha/os-nosdn-kvm_nfv_ovs_dpdk-noha.description.rst b/docs/scenarios/os-nosdn-kvm_nfv_ovs_dpdk-noha/os-nosdn-kvm_nfv_ovs_dpdk-noha.description.rst
new file mode 100644
index 000000000..3e354b5b9
--- /dev/null
+++ b/docs/scenarios/os-nosdn-kvm_nfv_ovs_dpdk-noha/os-nosdn-kvm_nfv_ovs_dpdk-noha.description.rst
@@ -0,0 +1,227 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International License.
+
+.. http://creativecommons.org/licenses/by/4.0
+
+
+Introduction
+============
+
+.. In this section explain the purpose of the scenario and the
+ types of capabilities provided
+
+The purpose of os-nosdn-kvm_ovs_dpdk-noha scenario testing is to test the
+no-High-Availability deployment and configuration of the OPNFV software suite
+with OpenStack and without SDN software. This OPNFV software suite
+includes the latest OPNFV KVM4NFV software packages for the Linux Kernel and
+QEMU patches for achieving low latency. In this scenario the High Availability
+feature is disabled; OpenStack is deployed as a multi-node setup with
+1 controller and 3 compute nodes.
+
+KVM4NFV packages will be installed on the compute nodes as part of the deployment. This scenario testcase deploys a multi-node setup using the OPNFV Fuel deployer.
+
+Scenario Components and Composition
+===================================
+.. In this section describe the unique components that make up the scenario,
+.. what each component provides and why it has been included in order
+.. to communicate to the user the capabilities available in this scenario.
+
+This scenario deploys a no-HA OPNFV Cloud based on the
+configurations provided in noha_nfv-kvm_nfv-ovs-dpdk_heat_ceilometer_scenario.yaml.
+This yaml file contains the following configurations and is passed as an
+argument to the deploy.py script.
+
+* scenario.yaml: This configuration file defines the translation between a
+  short deployment scenario name (os-nosdn-kvm_ovs_dpdk-noha) and an actual
+  deployment scenario configuration file
+  (noha_nfv-kvm_nfv-ovs-dpdk_heat_ceilometer_scenario.yaml).
+
+* ``deployment-scenario-metadata:`` Contains the configuration metadata like
+ title,version,created,comment.
+
+.. code:: bash
+
+ deployment-scenario-metadata:
+ title: NFV KVM and OVS-DPDK NOHA deployment
+ version: 0.0.1
+ created: Dec 20 2016
+ comment: NFV KVM and OVS-DPDK
+
+* ``stack-extensions:`` Stack extensions are OPNFV added-value features in the
+  form of a fuel-plugin. Plugins listed in stack-extensions are enabled and
+  configured. The os-nosdn-kvm_ovs_dpdk-noha scenario currently uses the KVM-1.0.0 plugin.
+
+.. code:: bash
+
+ stack-extensions:
+ - module: fuel-plugin-kvm
+ module-config-name: fuel-nfvkvm
+ module-config-version: 1.0.0
+ module-config-override:
+ # Module config overrides
+
+* ``dea-override-config:`` Used to configure the NOHA mode, network segmentation
+  types and role-to-node assignments. These configurations override the
+  corresponding keys in dea_base.yaml and dea_pod_override.yaml.
+  These keys are used to deploy multiple nodes (``1 controller, 3 computes``)
+  as mentioned below.
+
+ * **Node 1**: This node has MongoDB and Controller roles. The controller
+ node runs the Identity service, Image Service, management portions of
+ Compute and Networking, Networking plug-in and the dashboard. The
+ Telemetry service which was designed to support billing systems for
+ OpenStack cloud resources uses a NoSQL database to store information.
+ The database typically runs on the controller node.
+
+ * **Node 2**: This node has compute and Ceph-osd roles. Ceph is a
+ massively scalable, open source, distributed storage system. It is
+ comprised of an object store, block store and a POSIX-compliant
+ file system. Enabling Ceph, configures Nova to store ephemeral volumes in
+ RBD, configures Glance to use the Ceph RBD backend to store images,
+ configures Cinder to store volumes in Ceph RBD images and configures the
+ default number of object replicas in Ceph.
+
+  * **Node 3**: This node has the Compute role and provides additional
+    compute capacity.
+
+ * **Node 4**: This node has Compute role. The compute node runs the
+ hypervisor portion of Compute that operates tenant virtual machines
+ or instances. By default, Compute uses KVM as the hypervisor.
+
+ The below is the ``dea-override-config`` of the noha_nfv-kvm_nfv-ovs-dpdk_heat_ceilometer_scenario.yaml file.
+
+.. code:: bash
+
+ dea-override-config:
+ fuel:
+ FEATURE_GROUPS:
+ - experimental
+ environment:
+ net_segment_type: vlan
+ nodes:
+ - id: 1
+ interfaces: interfaces_vlan
+ role: mongo,controller
+ - id: 2
+ interfaces: interfaces_dpdk
+ role: ceph-osd,compute
+ attributes: attributes_1
+ - id: 3
+ interfaces: interfaces_dpdk
+ role: ceph-osd,compute
+ attributes: attributes_1
+ - id: 4
+ interfaces: interfaces_dpdk
+ role: ceph-osd,compute
+ attributes: attributes_1
+
+ attributes_1:
+ hugepages:
+ dpdk:
+ value: 1024
+ nova:
+ value:
+ '2048': 1024
+
+ network:
+ networking_parameters:
+ segmentation_type: vlan
+ networks:
+ - cidr: null
+ gateway: null
+ ip_ranges: []
+ meta:
+ configurable: false
+ map_priority: 2
+ name: private
+ neutron_vlan_range: true
+ notation: null
+ render_addr_mask: null
+ render_type: null
+ seg_type: vlan
+ use_gateway: false
+ vlan_start: null
+ name: private
+ vlan_start: null
+
+ settings:
+ editable:
+ storage:
+ ephemeral_ceph:
+ description: Configures Nova to store ephemeral volumes in RBD. This works best if Ceph is enabled for volumes and images, too. Enables live migration of all types of Ceph backed VMs (without this option, live migration will only work with VMs launched from Cinder volumes).
+ label: Ceph RBD for ephemeral volumes (Nova)
+ type: checkbox
+ value: true
+ weight: 75
+ images_ceph:
+ description: Configures Glance to use the Ceph RBD backend to store images. If enabled, this option will prevent Swift from installing.
+ label: Ceph RBD for images (Glance)
+ restrictions:
+ - settings:storage.images_vcenter.value == true: Only one Glance backend could be selected.
+ type: checkbox
+ value: true
+ weight: 30
+
+* ``dha-override-config:`` Provides information about the VM definition and
+  network configuration for virtual deployment. These configurations override
+  the pod dha definition and point to the controller, compute and
+  fuel definition files. The noha_nfv-kvm_nfv-ovs-dpdk_heat_ceilometer_scenario.yaml has no dha-config changes, i.e., the default configuration is used.
+
+* The os-nosdn-kvm_ovs_dpdk-noha scenario is successful when all 4 nodes are accessible,
+  up and running.
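+
+Assuming the deployment succeeded, a quick reachability check over the node IPs
+can confirm this criterion. The helper below is a hypothetical sketch: it is not
+part of the KVM4NFV repository, and the example addresses are placeholders for
+the ones Fuel actually assigned.

```shell
#!/bin/sh
# check_nodes: ping each node IP once and report whether it answers.
# The caller supplies the IP list; the addresses shown in the usage note
# below are placeholders, not values taken from this document.
check_nodes() {
    fail=0
    for ip in "$@"; do
        if ping -c 1 -W 2 "$ip" > /dev/null 2>&1; then
            echo "$ip up"
        else
            echo "$ip DOWN"
            fail=1
        fi
    done
    return $fail
}

# Example with placeholder addresses from Fuel's default admin network:
# check_nodes 10.20.0.3 10.20.0.4 10.20.0.5 10.20.0.6
```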
+
+
+
+**Note:**
+
+* In os-nosdn-kvm_ovs_dpdk-noha scenario, OVS is installed on the compute nodes with DPDK configured
+
+* This results in faster communication and data transfer among the compute nodes
+
+
+Scenario Usage Overview
+=======================
+.. Provide a brief overview on how to use the scenario and the features available to the
+.. user. This should be an "introduction" to the userguide document, and explicitly link to it,
+.. where the specifics of the features are covered including examples and API's
+
+* The high availability feature is disabled and deployment is done by deploy.py with
+ noha_nfv-kvm_nfv-ovs-dpdk_heat_ceilometer_scenario.yaml as an argument.
+* Install Fuel Master and deploy OPNFV Cloud from scratch on Hardware
+ Environment:
+
+
+Command to deploy the os-nosdn-kvm_ovs_dpdk-noha scenario:
+
+.. code:: bash
+
+ $ cd ~/fuel/ci/
+ $ sudo ./deploy.sh -f -b file:///tmp/opnfv-fuel/deploy/config -l devel-pipeline -p default -s no-ha_nfv-kvm_nfv-ovs-dpdk_heat_ceilometer_scenario.yaml -i file:///tmp/opnfv.iso
+
+where,
+ -b is used to specify the configuration directory
+
+ -i is used to specify the image downloaded from artifacts.
+
+Note:
+
+.. code:: bash
+
+ Check $ sudo ./deploy.sh -h for further information.
+
+* The os-nosdn-kvm_ovs_dpdk-noha scenario can be executed from the jenkins project
+  "fuel-os-nosdn-kvm_ovs_dpdk-noha-baremetal-daily-master"
+* This scenario deploys 1 controller and 3 compute nodes without the High
+  Availability feature and checks if all the 4 nodes
+  are accessible (IP, up & running).
+* The test scenario is passed if deployment is successful and all 4 nodes have
+  accessibility (IP, up & running).
+
+Known Limitations, Issues and Workarounds
+=========================================
+.. Explain any known limitations here.
+
+* Test scenario os-nosdn-kvm_ovs_dpdk-noha result is not stable.
+
+References
+==========
+
+For more information on the OPNFV Danube release, please visit
+http://www.opnfv.org/Danube
diff --git a/docs/scenarios/os-nosdn-kvm_nfv_ovs_dpdk_bar-ha/index.rst b/docs/scenarios/os-nosdn-kvm_nfv_ovs_dpdk_bar-ha/index.rst
new file mode 100755
index 000000000..5fccc5a2c
--- /dev/null
+++ b/docs/scenarios/os-nosdn-kvm_nfv_ovs_dpdk_bar-ha/index.rst
@@ -0,0 +1,12 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International License.
+.. http://creativecommons.org/licenses/by/4.0
+
+**********************************************************
+os-nosdn-kvm_nfv_ovs_dpdk_bar-ha Overview and Description
+**********************************************************
+
+.. toctree::
+ :numbered:
+ :maxdepth: 3
+
+ os-nosdn-kvm_nfv_ovs_dpdk_bar-ha.description.rst
diff --git a/docs/scenarios/os-nosdn-kvm_nfv_ovs_dpdk_bar-ha/os-nosdn-kvm_nfv_ovs_dpdk_bar-ha.description.rst b/docs/scenarios/os-nosdn-kvm_nfv_ovs_dpdk_bar-ha/os-nosdn-kvm_nfv_ovs_dpdk_bar-ha.description.rst
new file mode 100644
index 000000000..7090ccdd6
--- /dev/null
+++ b/docs/scenarios/os-nosdn-kvm_nfv_ovs_dpdk_bar-ha/os-nosdn-kvm_nfv_ovs_dpdk_bar-ha.description.rst
@@ -0,0 +1,246 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International License.
+
+.. http://creativecommons.org/licenses/by/4.0
+
+
+Introduction
+============
+
+.. In this section explain the purpose of the scenario and the
+ types of capabilities provided
+
+The purpose of os-nosdn-kvm_ovs_dpdk_bar-ha scenario testing is to test the
+High Availability deployment and configuration of OPNFV software suite
+with OpenStack and without SDN software. This OPNFV software suite
+includes OPNFV KVM4NFV latest software packages for Linux Kernel and
+QEMU patches for achieving low latency. The High Availability feature is achieved
+by deploying an OpenStack multi-node setup with 3 controller and 2 compute nodes.
+
+KVM4NFV packages will be installed on the compute nodes as part of the deployment. This scenario testcase deployment is done on multiple nodes using the OPNFV Fuel deployer.
+
+Scenario Components and Composition
+===================================
+.. In this section describe the unique components that make up the scenario,
+.. what each component provides and why it has been included in order
+.. to communicate to the user the capabilities available in this scenario.
+
+This scenario deploys the High Availability OPNFV Cloud based on the
+configurations provided in ha_nfv-kvm_nfv-ovs-dpdk-bar_heat_ceilometer_scenario.yaml.
+This yaml file contains the following configurations and is passed as an
+argument to the deploy.py script.
+
+* ``scenario.yaml:`` This configuration file defines the translation between a
+  short deployment scenario name (os-nosdn-kvm_ovs_dpdk_bar-ha) and the actual deployment
+  scenario configuration file (ha_nfv-kvm_nfv-ovs-dpdk-bar_heat_ceilometer_scenario.yaml)
+
+* ``deployment-scenario-metadata:`` Contains the configuration metadata like
+ title,version,created,comment.
+
+.. code:: bash
+
+ deployment-scenario-metadata:
+ title: NFV KVM and OVS-DPDK HA deployment
+ version: 0.0.1
+ created: Dec 20 2016
+ comment: NFV KVM and OVS-DPDK
+
+* ``stack-extensions:`` Stack extensions are OPNFV added-value features in the form
+  of a fuel-plugin. Plugins listed in stack-extensions are enabled and
+  configured. The os-nosdn-kvm_ovs_dpdk_bar-ha scenario currently uses the KVM-1.0.0 plugin and the barometer plugin.
+
+.. code:: bash
+
+ stack-extensions:
+ - module: fuel-plugin-kvm
+ module-config-name: fuel-nfvkvm
+ module-config-version: 1.0.0
+ module-config-override:
+ # Module config overrides
+ - module: fuel-plugin-collectd-ceilometer
+ module-config-name: fuel-barometer
+ module-config-version: 1.0.0
+ module-config-override:
+ # Module config overrides
+
+
+* ``dea-override-config:`` Used to configure the HA mode, network segmentation
+  types and role-to-node assignments. These configurations override the
+  corresponding keys in dea_base.yaml and dea_pod_override.yaml.
+  These keys are used to deploy multiple nodes (``3 controllers, 2 computes``)
+  as mentioned below.
+
+ * **Node 1**: This node has MongoDB and Controller roles. The controller
+ node runs the Identity service, Image Service, management portions of
+ Compute and Networking, Networking plug-in and the dashboard. The
+ Telemetry service which was designed to support billing systems for
+ OpenStack cloud resources uses a NoSQL database to store information.
+ The database typically runs on the controller node.
+
+ * **Node 2**: This node has Controller and Ceph-osd roles. Ceph is a
+ massively scalable, open source, distributed storage system. It is
+ comprised of an object store, block store and a POSIX-compliant distributed
+    file system. Enabling Ceph configures Nova to store ephemeral volumes in
+ RBD, configures Glance to use the Ceph RBD backend to store images,
+ configures Cinder to store volumes in Ceph RBD images and configures the
+ default number of object replicas in Ceph.
+
+ * **Node 3**: This node has Controller role in order to achieve high
+ availability.
+
+ * **Node 4**: This node has Compute role. The compute node runs the
+ hypervisor portion of Compute that operates tenant virtual machines
+ or instances. By default, Compute uses KVM as the hypervisor.
+
+  * **Node 5**: This node has Compute role.
+
+  Below is the ``dea-override-config`` section of the ha_nfv-kvm_nfv-ovs-dpdk-bar_heat_ceilometer_scenario.yaml file.
+
+.. code:: bash
+
+ dea-override-config:
+ fuel:
+ FEATURE_GROUPS:
+ - experimental
+ nodes:
+ - id: 1
+ interfaces: interfaces_1
+ role: controller
+ - id: 2
+ interfaces: interfaces_1
+ role: mongo,controller
+ - id: 3
+ interfaces: interfaces_1
+ role: ceph-osd,controller
+ - id: 4
+ interfaces: interfaces_dpdk
+ role: ceph-osd,compute
+ attributes: attributes_1
+ - id: 5
+ interfaces: interfaces_dpdk
+ role: ceph-osd,compute
+ attributes: attributes_1
+
+ attributes_1:
+ hugepages:
+ dpdk:
+ value: 1024
+ nova:
+ value:
+ '2048': 1024
+
+ settings:
+ editable:
+ storage:
+ ephemeral_ceph:
+ description: Configures Nova to store ephemeral volumes in RBD. This works best if Ceph is enabled for volumes and images, too. Enables live migration of all types of Ceph backed VMs (without this option, live migration will only work with VMs launched from Cinder volumes).
+ label: Ceph RBD for ephemeral volumes (Nova)
+ type: checkbox
+ value: true
+ weight: 75
+ images_ceph:
+ description: Configures Glance to use the Ceph RBD backend to store images. If enabled, this option will prevent Swift from installing.
+ label: Ceph RBD for images (Glance)
+ restrictions:
+ - settings:storage.images_vcenter.value == true: Only one Glance backend could be selected.
+ type: checkbox
+ value: true
+ weight: 30
+
+* ``dha-override-config:`` Provides information about the VM definition and
+  network configuration for virtual deployment. These configurations override
+  the pod dha definition and point to the controller, compute and
+  fuel definition files.
+
+  Below is the ``dha-override-config`` section of the ha_nfv-kvm_nfv-ovs-dpdk-bar_heat_ceilometer_scenario.yaml file.
+
+.. code:: bash
+
+ dha-override-config:
+ nodes:
+ - id: 1
+ libvirtName: controller1
+ libvirtTemplate: templates/virtual_environment/vms/controller.xml
+ - id: 2
+ libvirtName: controller2
+ libvirtTemplate: templates/virtual_environment/vms/controller.xml
+ - id: 3
+ libvirtName: controller3
+ libvirtTemplate: templates/virtual_environment/vms/controller.xml
+ - id: 4
+ libvirtName: compute1
+ libvirtTemplate: templates/virtual_environment/vms/compute.xml
+ - id: 5
+ libvirtName: compute2
+ libvirtTemplate: templates/virtual_environment/vms/compute.xml
+ - id: 6
+ libvirtName: fuel-master
+ libvirtTemplate: templates/virtual_environment/vms/fuel.xml
+ isFuel: yes
+ username: root
+ password: r00tme
+
+
+* The os-nosdn-kvm_ovs_dpdk_bar-ha scenario is successful when all 5 nodes are accessible, up and running.
+
+
+**Note:**
+
+* In os-nosdn-kvm_ovs_dpdk_bar-ha scenario, OVS is installed on the compute nodes with DPDK configured
+
+* The Barometer plugin is also implemented along with the KVM plugin
+
+* This results in faster communication and data transfer among the compute nodes
+
+
+Scenario Usage Overview
+=======================
+.. Provide a brief overview on how to use the scenario and the features available to the
+.. user. This should be an "introduction" to the userguide document, and explicitly link to it,
+.. where the specifics of the features are covered including examples and API's
+
+* The high availability feature can be achieved by executing deploy.py with
+ ha_nfv-kvm_nfv-ovs-dpdk-bar_heat_ceilometer_scenario.yaml as an argument.
+* Install Fuel Master and deploy OPNFV Cloud from scratch on Hardware
+ Environment:
+
+
+Command to deploy the os-nosdn-kvm_ovs_dpdk_bar-ha scenario:
+
+.. code:: bash
+
+ $ cd ~/fuel/ci/
+   $ sudo ./deploy.sh -f -b file:///tmp/opnfv-fuel/deploy/config -l devel-pipeline -p default -s ha_nfv-kvm_nfv-ovs-dpdk-bar_heat_ceilometer_scenario.yaml -i file:///tmp/opnfv.iso
+
+where,
+ -b is used to specify the configuration directory
+
+ -i is used to specify the image downloaded from artifacts.
+
+Note:
+
+.. code:: bash
+
+ Check $ sudo ./deploy.sh -h for further information.
+
+* os-nosdn-kvm_ovs_dpdk_bar-ha scenario can be executed from the jenkins project
+ "fuel-os-nosdn-kvm_ovs_dpdk_bar-ha-baremetal-daily-master"
+* This scenario provides the High Availability feature by deploying
+  3 controller and 2 compute nodes and checking if all the 5 nodes
+  are accessible (IP, up & running).
+* The test scenario is passed if deployment is successful and all 5 nodes have
+  accessibility (IP, up & running).
+
+Known Limitations, Issues and Workarounds
+=========================================
+.. Explain any known limitations here.
+
+* Test scenario os-nosdn-kvm_ovs_dpdk_bar-ha result is not stable.
+
+* The Functest and Yardstick test suites are not stable; instances are not getting an IP address from DHCP (a Functest issue).
+
+
+References
+==========
+
+For more information on the OPNFV Danube release, please visit
+http://www.opnfv.org/Danube
diff --git a/docs/scenarios/os-nosdn-kvm_nfv_ovs_dpdk_bar-noha/index.rst b/docs/scenarios/os-nosdn-kvm_nfv_ovs_dpdk_bar-noha/index.rst
new file mode 100755
index 000000000..1cdad5205
--- /dev/null
+++ b/docs/scenarios/os-nosdn-kvm_nfv_ovs_dpdk_bar-noha/index.rst
@@ -0,0 +1,12 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International License.
+.. http://creativecommons.org/licenses/by/4.0
+
+************************************************************
+os-nosdn-kvm_nfv_ovs_dpdk_bar-noha Overview and Description
+************************************************************
+
+.. toctree::
+ :numbered:
+ :maxdepth: 3
+
+ os-nosdn-kvm_nfv_ovs_dpdk_bar-noha.description.rst
diff --git a/docs/scenarios/os-nosdn-kvm_nfv_ovs_dpdk_bar-noha/os-nosdn-kvm_nfv_ovs_dpdk_bar-noha.description.rst b/docs/scenarios/os-nosdn-kvm_nfv_ovs_dpdk_bar-noha/os-nosdn-kvm_nfv_ovs_dpdk_bar-noha.description.rst
new file mode 100644
index 000000000..67a0732a7
--- /dev/null
+++ b/docs/scenarios/os-nosdn-kvm_nfv_ovs_dpdk_bar-noha/os-nosdn-kvm_nfv_ovs_dpdk_bar-noha.description.rst
@@ -0,0 +1,234 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International License.
+
+.. http://creativecommons.org/licenses/by/4.0
+
+
+Introduction
+============
+
+.. In this section explain the purpose of the scenario and the
+ types of capabilities provided
+
+The purpose of os-nosdn-kvm_ovs_dpdk_bar-noha scenario testing is to test the
+non-High Availability deployment and configuration of the OPNFV software suite
+with OpenStack and without SDN software. This OPNFV software suite
+includes the latest OPNFV KVM4NFV software packages for the Linux kernel and
+QEMU patches for achieving low latency. The non-HA setup is achieved
+by deploying an OpenStack multi-node setup with 1 controller and 3 compute nodes.
+
+KVM4NFV packages will be installed on the compute nodes as part of the deployment. This scenario testcase deployment is done on multiple nodes using the OPNFV Fuel deployer.
+
+Scenario Components and Composition
+===================================
+.. In this section describe the unique components that make up the scenario,
+.. what each component provides and why it has been included in order
+.. to communicate to the user the capabilities available in this scenario.
+
+This scenario deploys the OPNFV Cloud without High Availability, based on the
+configurations provided in noha_nfv-kvm_nfv-ovs-dpdk-bar_heat_ceilometer_scenario.yaml.
+This yaml file contains the following configurations and is passed as an
+argument to the deploy.py script.
+
+* ``scenario.yaml:`` This configuration file defines the translation between a
+  short deployment scenario name (os-nosdn-kvm_ovs_dpdk_bar-noha) and the actual deployment
+  scenario configuration file (noha_nfv-kvm_nfv-ovs-dpdk-bar_heat_ceilometer_scenario.yaml)
+
+* ``deployment-scenario-metadata:`` Contains the configuration metadata like
+ title,version,created,comment.
+
+.. code:: bash
+
+ deployment-scenario-metadata:
+ title: NFV KVM and OVS-DPDK HA deployment
+ version: 0.0.1
+ created: Dec 20 2016
+ comment: NFV KVM and OVS-DPDK
+
+* ``stack-extensions:`` Stack extensions are OPNFV added-value features in the form
+  of a fuel-plugin. Plugins listed in stack-extensions are enabled and
+  configured. The os-nosdn-kvm_ovs_dpdk_bar-noha scenario currently uses the KVM-1.0.0 plugin and the barometer-1.0.0 plugin.
+
+.. code:: bash
+
+ stack-extensions:
+ - module: fuel-plugin-kvm
+ module-config-name: fuel-nfvkvm
+ module-config-version: 1.0.0
+ module-config-override:
+ # Module config overrides
+ - module: fuel-plugin-collectd-ceilometer
+ module-config-name: fuel-barometer
+ module-config-version: 1.0.0
+ module-config-override:
+ # Module config overrides
+
+* ``dea-override-config:`` Used to configure the deployment mode, network segmentation
+  types and role-to-node assignments. These configurations override the
+  corresponding keys in dea_base.yaml and dea_pod_override.yaml.
+  These keys are used to deploy multiple nodes (``1 controller, 3 computes``)
+  as mentioned below.
+
+ * **Node 1**: This node has MongoDB and Controller roles. The controller
+ node runs the Identity service, Image Service, management portions of
+ Compute and Networking, Networking plug-in and the dashboard. The
+ Telemetry service which was designed to support billing systems for
+ OpenStack cloud resources uses a NoSQL database to store information.
+ The database typically runs on the controller node.
+
+ * **Node 2**: This node has compute and Ceph-osd roles. Ceph is a
+ massively scalable, open source, distributed storage system. It is
+ comprised of an object store, block store and a POSIX-compliant
+    file system. Enabling Ceph configures Nova to store ephemeral volumes in
+ RBD, configures Glance to use the Ceph RBD backend to store images,
+ configures Cinder to store volumes in Ceph RBD images and configures the
+ default number of object replicas in Ceph.
+
+  * **Node 3**: This node has Compute role.
+
+ * **Node 4**: This node has Compute role. The compute node runs the
+ hypervisor portion of Compute that operates tenant virtual machines
+ or instances. By default, Compute uses KVM as the hypervisor.
+
+  Below is the ``dea-override-config`` section of the noha_nfv-kvm_nfv-ovs-dpdk-bar_heat_ceilometer_scenario.yaml file.
+
+.. code:: bash
+
+ dea-override-config:
+ fuel:
+ FEATURE_GROUPS:
+ - experimental
+ environment:
+ net_segment_type: vlan
+ nodes:
+ - id: 1
+ interfaces: interfaces_vlan
+ role: mongo,controller
+ - id: 2
+ interfaces: interfaces_dpdk
+ role: ceph-osd,compute
+ attributes: attributes_1
+ - id: 3
+ interfaces: interfaces_dpdk
+ role: ceph-osd,compute
+ attributes: attributes_1
+ - id: 4
+ interfaces: interfaces_dpdk
+ role: ceph-osd,compute
+ attributes: attributes_1
+
+ attributes_1:
+ hugepages:
+ dpdk:
+ value: 1024
+ nova:
+ value:
+ '2048': 1024
+
+ network:
+ networking_parameters:
+ segmentation_type: vlan
+ networks:
+ - cidr: null
+ gateway: null
+ ip_ranges: []
+ meta:
+ configurable: false
+ map_priority: 2
+ name: private
+ neutron_vlan_range: true
+ notation: null
+ render_addr_mask: null
+ render_type: null
+ seg_type: vlan
+ use_gateway: false
+ vlan_start: null
+ name: private
+ vlan_start: null
+
+ settings:
+ editable:
+ storage:
+ ephemeral_ceph:
+ description: Configures Nova to store ephemeral volumes in RBD. This works best if Ceph is enabled for volumes and images, too. Enables live migration of all types of Ceph backed VMs (without this option, live migration will only work with VMs launched from Cinder volumes).
+ label: Ceph RBD for ephemeral volumes (Nova)
+ type: checkbox
+ value: true
+ weight: 75
+ images_ceph:
+ description: Configures Glance to use the Ceph RBD backend to store images. If enabled, this option will prevent Swift from installing.
+ label: Ceph RBD for images (Glance)
+ restrictions:
+ - settings:storage.images_vcenter.value == true: Only one Glance backend could be selected.
+ type: checkbox
+ value: true
+ weight: 30
+
+* ``dha-override-config:`` Provides information about the VM definition and
+  network configuration for virtual deployment. These configurations override
+  the pod dha definition and point to the controller, compute and
+  fuel definition files. The noha_nfv-kvm_nfv-ovs-dpdk-bar_heat_ceilometer_scenario.yaml has no dha-config changes, i.e., the default configuration is used.
+
+* The os-nosdn-kvm_ovs_dpdk_bar-noha scenario is successful when all 4 nodes are accessible,
+  up and running.
+
+
+
+**Note:**
+
+* In os-nosdn-kvm_ovs_dpdk_bar-noha scenario, OVS is installed on the compute nodes with DPDK configured
+
+* The Barometer plugin is also implemented along with the KVM plugin.
+
+* This results in faster communication and data transfer among the compute nodes
+
+
+Scenario Usage Overview
+=======================
+.. Provide a brief overview on how to use the scenario and the features available to the
+.. user. This should be an "introduction" to the userguide document, and explicitly link to it,
+.. where the specifics of the features are covered including examples and API's
+
+* The high availability feature is disabled and deployment is done by deploy.py with
+ noha_nfv-kvm_nfv-ovs-dpdk-bar_heat_ceilometer_scenario.yaml as an argument.
+* Install Fuel Master and deploy OPNFV Cloud from scratch on Hardware
+ Environment:
+
+
+Command to deploy the os-nosdn-kvm_ovs_dpdk_bar-noha scenario:
+
+.. code:: bash
+
+ $ cd ~/fuel/ci/
+ $ sudo ./deploy.sh -f -b file:///tmp/opnfv-fuel/deploy/config -l devel-pipeline -p default -s no-ha_nfv-kvm_nfv-ovs-dpdk-bar_heat_ceilometer_scenario.yaml -i file:///tmp/opnfv.iso
+
+where,
+ -b is used to specify the configuration directory
+
+ -i is used to specify the image downloaded from artifacts.
+
+Note:
+
+.. code:: bash
+
+ Check $ sudo ./deploy.sh -h for further information.
+
+* The os-nosdn-kvm_ovs_dpdk_bar-noha scenario can be executed from the jenkins project
+  "fuel-os-nosdn-kvm_ovs_dpdk_bar-noha-baremetal-daily-master"
+* This scenario deploys 1 controller and 3 compute nodes without the High
+  Availability feature and checks if all the 4 nodes
+  are accessible (IP, up & running).
+* The test scenario is passed if deployment is successful and all 4 nodes have
+  accessibility (IP, up & running).
+
+Known Limitations, Issues and Workarounds
+=========================================
+.. Explain any known limitations here.
+
+* Test scenario os-nosdn-kvm_ovs_dpdk_bar-noha result is not stable.
+
+References
+==========
+
+For more information on the OPNFV Danube release, please visit
+http://www.opnfv.org/Danube
diff --git a/docs/userguide/Ftrace.debugging.tool.userguide.rst b/docs/userguide/Ftrace.debugging.tool.userguide.rst
new file mode 100644
index 000000000..0fcbbcf93
--- /dev/null
+++ b/docs/userguide/Ftrace.debugging.tool.userguide.rst
@@ -0,0 +1,257 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International License.
+
+.. http://creativecommons.org/licenses/by/4.0
+
+=====================
+FTrace Debugging Tool
+=====================
+
+About Ftrace
+-------------
+Ftrace is an internal tracer designed to find what is going on inside the kernel. It can be used
+for debugging or analyzing latencies and performance issues that take place outside of user-space.
+Although ftrace is typically considered the function tracer, it is really a framework of several
+assorted tracing utilities.
+
+ One of the most common uses of ftrace is event tracing.
+
+**Note:**
+- For KVMFORNFV, Ftrace is preferred as it is an in-built kernel tool
+- It is more stable compared to other debugging tools
+
+Version Features
+----------------
+
++-----------------------------+-----------------------------------------------+
+| | |
+| **Release** | **Features** |
+| | |
++=============================+===============================================+
+| | - Ftrace Debugging tool is not implemented in |
+| Colorado | Colorado release of KVMFORNFV |
+| | |
++-----------------------------+-----------------------------------------------+
+| | - Ftrace aids in debugging the KVMFORNFV |
+| Danube | 4.4-linux-kernel level issues |
+|                             |   - Option to disable if not required         |
++-----------------------------+-----------------------------------------------+
+
+
+Implementation of Ftrace
+-------------------------
+Ftrace uses the debugfs file system to hold the control files as
+well as the files to display output.
+
+When debugfs is configured into the kernel (which selecting any ftrace
+option will do) the directory /sys/kernel/debug will be created. To mount
+this directory, you can add to your /etc/fstab file:
+
+.. code:: bash
+
+ debugfs /sys/kernel/debug debugfs defaults 0 0
+
+Or you can mount it at run time with:
+
+.. code:: bash
+
+ mount -t debugfs nodev /sys/kernel/debug
+
+Some configurations for Ftrace are used for other purposes, like finding latency or analyzing the system. For the purpose of debugging, the kernel configuration parameters that should be enabled are:
+
+.. code:: bash
+
+ CONFIG_FUNCTION_TRACER=y
+ CONFIG_FUNCTION_GRAPH_TRACER=y
+ CONFIG_STACK_TRACER=y
+ CONFIG_DYNAMIC_FTRACE=y
+
+The above parameters must be enabled in /boot/config-4.4.0-el7.x86_64, i.e., the kernel config file, for ftrace to work. If not enabled, change the parameter to ``y`` and run:
+
+.. code:: bash
+
+ On CentOS
+ grub2-mkconfig -o /boot/grub2/grub.cfg
+ sudo reboot
+
+Re-check the parameters after reboot before running ftrace.
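+
+The parameter check above can be scripted. The helper below is a sketch (not
+part of KVMFORNFV); the config file path passed to it is an assumption and
+should match the installed kernel.

```shell
#!/bin/sh
# check_ftrace_config: report which of the required ftrace options are
# enabled ("=y") in the given kernel config file.
check_ftrace_config() {
    missing=0
    for opt in CONFIG_FUNCTION_TRACER CONFIG_FUNCTION_GRAPH_TRACER \
               CONFIG_STACK_TRACER CONFIG_DYNAMIC_FTRACE; do
        if grep -q "^${opt}=y" "$1"; then
            echo "${opt}: enabled"
        else
            echo "${opt}: MISSING"
            missing=1
        fi
    done
    return $missing
}

# Example (path as used earlier in this guide):
# check_ftrace_config /boot/config-4.4.0-el7.x86_64
```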
+
+Files in Ftrace:
+----------------
+Below is a list of a few major files in Ftrace.
+
+ ``current_tracer:``
+
+ This is used to set or display the current tracer that is configured.
+
+ ``available_tracers:``
+
+ This holds the different types of tracers that have been compiled into the kernel. The tracers listed here can be configured by echoing their name into current_tracer.
+
+ ``tracing_on:``
+
+  This sets or displays whether writing to the trace ring buffer is enabled. Echo 0 into this file to disable the tracer or 1 to enable it.
+
+ ``trace:``
+
+ This file holds the output of the trace in a human readable format.
+
+ ``tracing_cpumask:``
+
+ This is a mask that lets the user only trace on specified CPUs. The format is a hex string representing the CPUs.
+
+ ``events:``
+
+ It holds event tracepoints (also known as static tracepoints) that have been compiled into the kernel. It shows what event tracepoints exist and how they are grouped by system.
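+
+As an illustration of reading these control files together, a small status
+helper (a sketch, not part of KVMFORNFV; requires root and a kernel with
+ftrace enabled) could be:

```shell
#!/bin/sh
# ftrace_status: print the current tracer, whether tracing is on, and the
# CPU mask by reading the control files described above. TRACEDIR defaults
# to the standard debugfs mount point.
TRACEDIR="${TRACEDIR:-/sys/kernel/debug/tracing}"

ftrace_status() {
    echo "tracer:  $(cat "$TRACEDIR/current_tracer")"
    echo "running: $(cat "$TRACEDIR/tracing_on")"
    echo "cpumask: $(cat "$TRACEDIR/tracing_cpumask")"
}
```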
+
+
+Available Tracers
+-----------------
+
+Here is the list of current tracers that may be configured based on usage.
+
+- function
+- function_graph
+- irqsoff
+- preemptoff
+- preemptirqsoff
+- wakeup
+- wakeup_rt
+
+A brief description of a few:
+
+ ``function:``
+
+ Function call tracer to trace all kernel functions.
+
+ ``function_graph:``
+
+ Similar to the function tracer except that the function tracer probes the functions on their entry whereas the function graph tracer traces on both entry and exit of the functions.
+
+ ``nop:``
+
+ This is the "trace nothing" tracer. To remove tracers from tracing simply echo "nop" into current_tracer.
+
+Examples:
+
+.. code:: bash
+
+
+ To list available tracers:
+ [tracing]# cat available_tracers
+ function_graph function wakeup wakeup_rt preemptoff irqsoff preemptirqsoff nop
+
+ Usage:
+ [tracing]# echo function > current_tracer
+ [tracing]# cat current_tracer
+ function
+
+ To view output:
+ [tracing]# cat trace | head -10
+
+ To Stop tracing:
+ [tracing]# echo 0 > tracing_on
+
+ To Start/restart tracing:
+ [tracing]# echo 1 > tracing_on;
+
+
+===================
+Ftrace in KVMFORNFV
+===================
+Ftrace is part of the KVMFORNFV D-Release. Kvmfornfv currently uses the 4.4 Linux kernel as part of
+deployment and runs cyclictest for testing purposes, generating latency values (max, min, avg values).
+Ftrace (or function tracer) is a stable in-built kernel debugging tool which traces the kernel in real
+time and outputs a log. These output logs are useful in the following ways:
+
+ - Kernel Debugging.
+ - Helps in Kernel code Optimization and
+ - Can be used to better understand the kernel Level code flow
+ - Log generation for each test run if enabled
+ - Choice of Disabling and Enabling
+
+Ftrace logs for KVMFORNFV can be found `here`_:
+
+
+.. _here: http://artifacts.opnfv.org/kvmfornfv.html
+
+Ftrace Usage in KVMFORNFV Kernel Debugging:
+-------------------------------------------
+Kvmfornfv has two scripts in /ci/envs to provide the ftrace tool:
+
+ - enable_trace.sh
+ - disable_trace.sh
+
+Enabling Ftrace in KVMFORNFV
+----------------------------
+
+The enable_trace.sh script is triggered by changing the ftrace_enable value in the test_kvmfornfv.sh script, which is zero by default. Change it as below to enable Ftrace and trigger the script:
+
+.. code:: bash
+
+ ftrace_enable=1
+
+Note:
+
+- Ftrace is enabled before the cyclictest run starts
+
+Details of enable_trace script
+------------------------------
+
+- CPU Coremask is calculated using getcpumask()
+- All the required events are enabled by echoing "1" to the
+  $TRACEDIR/events/event_name/enable file
+
+Example,
+
+.. code:: bash
+
+ $TRACEDIR = /sys/kernel/debug/tracing/
+ sudo bash -c "echo 1 > $TRACEDIR/events/irq/enable"
+ sudo bash -c "echo 1 > $TRACEDIR/events/task/enable"
+ sudo bash -c "echo 1 > $TRACEDIR/events/syscalls/enable"
+
+The set_event file contains the list of all enabled events
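+
+The coremask written to tracing_cpumask is a hex bitmask over the CPUs to be
+traced. The helper below only illustrates the idea; it is not the actual
+getcpumask() code from enable_trace.sh.

```shell
#!/bin/sh
# cpumask_sketch: build a hex mask with the lowest n CPU bits set, in the
# format expected by tracing_cpumask. Illustrative only.
cpumask_sketch() {
    n=$1                                 # number of CPUs to cover
    printf '%x\n' "$(( (1 << n) - 1 ))"
}

# cpumask_sketch 4 covers CPUs 0-3 (mask "f");
# cpumask_sketch 8 covers CPUs 0-7 (mask "ff").
```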
+
+- The function tracer is selected. It may be changed to other available tracers based on the requirement
+
+.. code:: bash
+
+   sudo bash -c "echo function > $TRACEDIR/current_tracer"
+
+- When tracing is turned on by setting ``tracing_on=1``, the ``trace`` file keeps getting appended with the traced data until ``tracing_on=0``, after which the ftrace buffer gets cleared.
+
+.. code:: bash
+
+ To Stop/Pause,
+ echo 0 >tracing_on;
+
+ To Start/Restart,
+ echo 1 >tracing_on;
+
+- Once tracing is disabled, the disable_trace.sh script is triggered.
+
+Details of Disable_trace Script
+-------------------------------
+In the disable_trace script the following are done:
+
+- The trace file is copied and moved to the /tmp folder based on timestamp.
+- The current_tracer file is set to ``nop``
+- The set_event file is cleared, i.e., all the enabled events are disabled
+- Kernel ftrace is disabled/unmounted
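+
+The steps above can be sketched as a small shell function (an illustration,
+not the actual disable_trace.sh; the saved-file naming is an assumption):

```shell
#!/bin/sh
# disable_trace_sketch: save the trace log with a timestamp, reset the
# tracer to "nop" and clear all enabled events, mirroring the steps above.
TRACEDIR="${TRACEDIR:-/sys/kernel/debug/tracing}"

disable_trace_sketch() {
    ts=$(date +%Y%m%d-%H%M%S)
    cp "$TRACEDIR/trace" "/tmp/trace-$ts"   # keep the log for debugging
    echo nop > "$TRACEDIR/current_tracer"   # stop function tracing
    : > "$TRACEDIR/set_event"               # disable every enabled event
}
```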
+
+
+Publishing Ftrace logs:
+-----------------------
+The generated trace log is pushed to the `artifacts`_ of the Kvmfornfv project by the releng team, which is done by a script in the JJB of releng. The `trigger`_ in the script is:
+
+.. code:: bash
+
+ echo "Uploading artifacts for future debugging needs...."
+ gsutil cp -r $WORKSPACE/build_output/log-*.tar.gz $GS_LOG_LOCATION > $WORKSPACE/gsutil.log 2>&1
+
+.. _artifacts: https://artifacts.opnfv.org/
+
+.. _trigger: https://gerrit.opnfv.org/gerrit/gitweb?p=releng.git;a=blob;f=jjb/kvmfornfv/kvmfornfv-upload-artifact.sh;h=56fb4f9c18a83c689a916dc6c85f9e3ddf2479b2;hb=HEAD#l53
+
+
+.. include:: pcm_utility.userguide.rst
diff --git a/docs/userguide/images/Cpustress-Idle.png b/docs/userguide/images/Cpustress-Idle.png
new file mode 100644
index 000000000..b4b4e1112
--- /dev/null
+++ b/docs/userguide/images/Cpustress-Idle.png
Binary files differ
diff --git a/docs/userguide/images/Dashboard-screenshot-1.png b/docs/userguide/images/Dashboard-screenshot-1.png
new file mode 100644
index 000000000..7ff809697
--- /dev/null
+++ b/docs/userguide/images/Dashboard-screenshot-1.png
Binary files differ
diff --git a/docs/userguide/images/Dashboard-screenshot-2.png b/docs/userguide/images/Dashboard-screenshot-2.png
new file mode 100644
index 000000000..a5c4e01b5
--- /dev/null
+++ b/docs/userguide/images/Dashboard-screenshot-2.png
Binary files differ
diff --git a/docs/userguide/images/Guest_Scenario.png b/docs/userguide/images/Guest_Scenario.png
new file mode 100644
index 000000000..550c0fe6f
--- /dev/null
+++ b/docs/userguide/images/Guest_Scenario.png
Binary files differ
diff --git a/docs/userguide/images/Host_Scenario.png b/docs/userguide/images/Host_Scenario.png
new file mode 100644
index 000000000..89789aa7b
--- /dev/null
+++ b/docs/userguide/images/Host_Scenario.png
Binary files differ
diff --git a/docs/userguide/images/IOstress-Idle.png b/docs/userguide/images/IOstress-Idle.png
new file mode 100644
index 000000000..fe4e5fc81
--- /dev/null
+++ b/docs/userguide/images/IOstress-Idle.png
Binary files differ
diff --git a/docs/userguide/images/IXIA1.png b/docs/userguide/images/IXIA1.png
new file mode 100644
index 000000000..682de7c57
--- /dev/null
+++ b/docs/userguide/images/IXIA1.png
Binary files differ
diff --git a/docs/userguide/images/Idle-Idle.png b/docs/userguide/images/Idle-Idle.png
new file mode 100644
index 000000000..d619f65ea
--- /dev/null
+++ b/docs/userguide/images/Idle-Idle.png
Binary files differ
diff --git a/docs/userguide/images/Memorystress-Idle.png b/docs/userguide/images/Memorystress-Idle.png
new file mode 100644
index 000000000..b9974a7a2
--- /dev/null
+++ b/docs/userguide/images/Memorystress-Idle.png
Binary files differ
diff --git a/docs/userguide/images/SRIOV_Scenario.png b/docs/userguide/images/SRIOV_Scenario.png
new file mode 100644
index 000000000..62e116ada
--- /dev/null
+++ b/docs/userguide/images/SRIOV_Scenario.png
Binary files differ
diff --git a/docs/userguide/images/UseCaseDashboard.png b/docs/userguide/images/UseCaseDashboard.png
new file mode 100644
index 000000000..9dd14d26e
--- /dev/null
+++ b/docs/userguide/images/UseCaseDashboard.png
Binary files differ
diff --git a/docs/userguide/images/dashboard-architecture.png b/docs/userguide/images/dashboard-architecture.png
new file mode 100644
index 000000000..821484e74
--- /dev/null
+++ b/docs/userguide/images/dashboard-architecture.png
Binary files differ
diff --git a/docs/userguide/index.rst b/docs/userguide/index.rst
index ae0b380d4..fcef57250 100644
--- a/docs/userguide/index.rst
+++ b/docs/userguide/index.rst
@@ -7,12 +7,16 @@ KVMforNFV User Guide
********************
.. toctree::
- :maxdepth: 2
+ :maxdepth: 3
./abstract.rst
./introduction.rst
./common.platform.render.rst
./feature.userguide.render.rst
+ ./Ftrace.debugging.tool.userguide.rst
+ ./kvmfornfv.cyclictest-dashboard.userguide.rst
./low_latency.userguide.rst
./live_migration.userguide.rst
+ ./packet_forwarding.userguide.rst
+ ./pcm_utility.userguide.rst
./tuning.userguide.rst
diff --git a/docs/userguide/kvmfornfv.cyclictest-dashboard.userguide.rst b/docs/userguide/kvmfornfv.cyclictest-dashboard.userguide.rst
new file mode 100644
index 000000000..6333d0917
--- /dev/null
+++ b/docs/userguide/kvmfornfv.cyclictest-dashboard.userguide.rst
@@ -0,0 +1,258 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International License.
+
+.. http://creativecommons.org/licenses/by/4.0
+
+========================================
+Dashboard for KVM4NFV Daily Test Results
+========================================
+
+Abstract
+========
+
+This chapter explains the procedure to configure InfluxDB and Grafana on Node1 or Node2,
+depending on the test type, to publish KVM4NFV cyclictest results. The cyclictest cases are executed
+and the results are published on the Yardstick Dashboard (Grafana). InfluxDB is the database that
+stores the cyclictest results, and Grafana is a visualization suite used to view the maximum, minimum
+and average values of the time-series data of the cyclictest results. The framework is shown in the image below.
+
+.. Figure:: ../images/dashboard-architecture.png
+
+
+Version Features
+================
+
++-----------------------------+--------------------------------------------+
+| | |
+| **Release** | **Features** |
+| | |
++=============================+============================================+
+|                             | - Data published in JSON file format       |
+| Colorado | - No database support to store the test's |
+| | latency values of cyclictest |
+| | - For each run, the previous run's output |
+| | file is replaced with a new file with |
+|                             |   current latency values.                  |
++-----------------------------+--------------------------------------------+
+| | - Test results are stored in Influxdb |
+| | - Graphical representation of the latency |
+| Danube | values using Grafana suite. (Dashboard) |
+| | - Supports Graphical view for multiple |
+| | testcases and test-types (Stress/Idle) |
++-----------------------------+--------------------------------------------+
+
+
+Installation Steps:
+===================
+To configure Yardstick, InfluxDB and Grafana for the KVMFORNFV project, the following sequence of steps is followed:
+
+**Note:**
+
+All the steps below are performed by a script that is part of the CI/CD integration.
+
+.. code:: bash
+
+ For Yardstick:
+ git clone https://gerrit.opnfv.org/gerrit/yardstick
+
+ For InfluxDB:
+ docker pull tutum/influxdb
+ docker run -d --name influxdb -p 8083:8083 -p 8086:8086 --expose 8090 --expose 8099 tutum/influxdb
+ docker exec -it influxdb bash
+ $influx
+ >CREATE USER root WITH PASSWORD 'root' WITH ALL PRIVILEGES
+ >CREATE DATABASE yardstick;
+ >use yardstick;
+ >show MEASUREMENTS;
+
+ For Grafana:
+ docker pull grafana/grafana
+ docker run -d --name grafana -p 3000:3000 grafana/grafana
+
+The Yardstick document for Grafana and InfluxDB configuration can be found `here`_.
+
+.. _here: https://wiki.opnfv.org/display/yardstick/How+to+deploy+InfluxDB+and+Grafana+locally
+
+Configuring the Dispatcher Type:
+================================
+Need to configure the dispatcher type in /etc/yardstick/yardstick.conf depending on the dispatcher
+methods which are used to store the cyclictest results. A sample yardstick.conf can be found at
+/yardstick/etc/yardstick.conf.sample, which can be copied to /etc/yardstick.
+
+.. code:: bash
+
+ mkdir -p /etc/yardstick/
+ cp /yardstick/etc/yardstick.conf.sample /etc/yardstick/yardstick.conf
+
+**Dispatcher Modules:**
+
+Three type of dispatcher methods are available to store the cyclictest results.
+
+- File
+- InfluxDB
+- HTTP
+
+**1. File**: The default dispatcher module is file. If the dispatcher module is configured as file, then the test results are stored in the yardstick.out file
+(default path: /tmp/yardstick.out).
+The dispatcher module of the "Verify Job" is the default, so the results of verify jobs are stored in the yardstick.out file. Storing all the verify jobs in the InfluxDB database would cause redundancy of latency values; hence, the file output format is preferred.
+
+.. code:: bash
+
+ [DEFAULT]
+ debug = False
+ dispatcher = file
+
+**2. Influxdb**: If the dispatcher module is configured as influxdb, then the test results are stored in InfluxDB. Users can check the test results stored in InfluxDB (the database) on Grafana, which is used to visualize the time-series data.
+
+To configure influxdb, the following content in /etc/yardstick/yardstick.conf needs to be updated:
+
+.. code:: bash
+
+ [DEFAULT]
+ debug = False
+ dispatcher = influxdb
+
+The dispatcher module of the "Daily Job" is influxdb, so the results are stored in InfluxDB and then published to the Dashboard.
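Switching the dispatcher type between job runs can be scripted. The snippet below is illustrative only (it operates on a temporary copy rather than the real /etc/yardstick/yardstick.conf, so it is safe to run anywhere):

```shell
# Illustrative only: flip the yardstick dispatcher type in a config file.
# CONF would normally be /etc/yardstick/yardstick.conf; a temporary copy
# is used here so the snippet can run without touching a real deployment.
CONF=$(mktemp)
printf '[DEFAULT]\ndebug = False\ndispatcher = file\n' > "$CONF"
sed -i 's/^dispatcher = .*/dispatcher = influxdb/' "$CONF"
grep '^dispatcher' "$CONF"
```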
+
+**3. HTTP**: If the dispatcher module is configured as http, users can check the test results on the OPNFV testing dashboard, which uses MongoDB as the backend.
+
+.. code:: bash
+
+ [DEFAULT]
+ debug = False
+ dispatcher = http
+
+.. Figure:: ../images/UseCaseDashboard.png
+
+
+Detailing the dispatcher module in verify and daily Jobs:
+---------------------------------------------------------
+
+KVM4NFV updates the dispatcher module in the yardstick configuration file (/etc/yardstick/yardstick.conf) depending on the job type (Verify/Daily). Once the test is completed, results are published to the respective dispatcher module.
+
+The dispatcher module is configured for each job type as mentioned below.
+
+1. ``Verify Job`` : The default "DISPATCHER_TYPE", i.e. file (/tmp/yardstick.out), is used. Users can also see the test results on the Jenkins console log.
+
+.. code:: bash
+
+ "max": "00030", "avg": "00006", "min": "00006"
+
+2. ``Daily Job`` : The OPNFV InfluxDB URL is configured as the dispatcher module.
+
+.. code:: bash
+
+ DISPATCHER_TYPE=influxdb
+ DISPATCHER_INFLUXDB_TARGET="http://104.197.68.199:8086"
+
+InfluxDB supports only the line protocol; the JSON protocol is deprecated.
+
+For example, the raw_result of cyclictest in JSON format is:
+ ::
+
+    {
+        "benchmark": {
+            "timestamp": 1478234859.065317,
+            "errors": "",
+            "data": {
+                "max": "00012",
+                "avg": "00008",
+                "min": "00007"
+            },
+            "sequence": 1
+        },
+        "runner_id": 23
+    }
+
+
+With the help of "influxdb_line_protocol", the JSON is transformed into a line string:
+ ::
+
+ 'kvmfornfv_cyclictest_idle_idle,deploy_scenario=unknown,host=kvm.LF,
+ installer=unknown,pod_name=unknown,runner_id=23,scenarios=Cyclictest,
+ task_id=e7be7516-9eae-406e-84b6-e931866fa793,version=unknown
+ avg="00008",max="00012",min="00007" 1478234859065316864'
+
+
+
+The InfluxDB API, which is already implemented in `Influxdb`_, posts the data in line format into the database.
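As an illustration of the transformation above, the following hypothetical Python sketch (not the actual Yardstick dispatcher code, which lives in yardstick/dispatcher/influxdb.py) builds a line-protocol string from such a record:

```python
# Hypothetical sketch of the JSON -> line-protocol transformation.
# Tags and fields are sorted for deterministic output; field values are
# written as quoted strings to match the published example above.

def to_line_protocol(measurement, tags, fields, timestamp_ns):
    """Build a single InfluxDB line-protocol record."""
    tag_str = ",".join("%s=%s" % (k, v) for k, v in sorted(tags.items()))
    field_str = ",".join('%s="%s"' % (k, v) for k, v in sorted(fields.items()))
    return "%s,%s %s %d" % (measurement, tag_str, field_str, timestamp_ns)

line = to_line_protocol(
    "kvmfornfv_cyclictest_idle_idle",
    {"host": "kvm.LF", "runner_id": "23"},
    {"max": "00012", "avg": "00008", "min": "00007"},
    1478234859065316864,
)
print(line)
```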
+
+``Displaying Results on Grafana dashboard:``
+
+- Once the test results are stored in InfluxDB, the dashboard configuration file (JSON) which is used to display the cyclictest results on Grafana needs to be created by following the `Grafana-procedure`_ and then pushed into the `yardstick-repo`_.
+
+- Grafana can be accessed at `Login`_ using the credentials opnfv/opnfv and is used for visualizing the collected test data, as shown in `Visual`_.
+
+
+.. Figure:: ../images/Dashboard-screenshot-1.png
+
+.. Figure:: ../images/Dashboard-screenshot-2.png
+
+.. _Influxdb: https://git.opnfv.org/cgit/yardstick/tree/yardstick/dispatcher/influxdb.py
+
+.. _Visual: http://testresults.opnfv.org/grafana/dashboard/db/kvmfornfv-cyclictest
+
+.. _Login: http://testresults.opnfv.org/grafana/login
+
+.. _Grafana-procedure: https://wiki.opnfv.org/display/yardstick/How+to+work+with+grafana+dashboard
+
+.. _yardstick-repo: https://git.opnfv.org/cgit/yardstick/tree/dashboard/KVMFORNFV-Cyclictest
+
+.. _GrafanaDoc: http://docs.grafana.org/
+
+Understanding Kvmfornfv Grafana Dashboard
+=========================================
+
+The Kvmfornfv Dashboard found at http://testresults.opnfv.org/ currently supports a graphical view of cyclictest. To view the Kvmfornfv Dashboard, use:
+
+.. code:: bash
+
+ http://testresults.opnfv.org/grafana/dashboard/db/kvmfornfv-cyclictest
+
+ The login details are:
+
+ Username: opnfv
+ Password: opnfv
+
+The Dashboard has four graphs, each representing a specific test-type of the cyclictest case:
+
+- Kvmfornfv_Cyclictest_Idle-Idle
+- Kvmfornfv_Cyclictest_CPUstress-Idle
+- Kvmfornfv_Cyclictest_Memorystress-Idle
+- Kvmfornfv_Cyclictest_IOstress-Idle
+
+Note:
+
+- For all graphs, X-axis is marked with time stamps, Y-axis with value in microsecond units.
+
+**A brief about what each graph of the dashboard represents:**
+
+1. Idle-Idle Graph
+-------------------
+`Idle-Idle`_ graph displays the average, maximum and minimum latency values obtained by running the Idle_Idle test-type of cyclictest. Idle_Idle implies that no stress is applied on the Host or the Guest.
+
+.. _Idle-Idle: http://testresults.opnfv.org/grafana/dashboard/db/kvmfornfv-cyclictest?panelId=10&fullscreen
+
+.. Figure:: ../images/Idle-Idle.png
+
+2. CPU_Stress-Idle Graph
+--------------------------
+`Cpu_Stress-Idle`_ graph displays the average, maximum and minimum latency values obtained by running the Cpu_Stress-Idle test-type of cyclictest. Cpu_Stress-Idle implies that CPU stress is applied on the Host and no stress on the Guest.
+
+.. _Cpu_stress-Idle: http://testresults.opnfv.org/grafana/dashboard/db/kvmfornfv-cyclictest?panelId=11&fullscreen
+
+.. Figure:: ../images/Cpustress-Idle.png
+
+3. Memory_Stress-Idle Graph
+----------------------------
+`Memory_Stress-Idle`_ graph displays the average, maximum and minimum latency values obtained by running the Memory_Stress-Idle test-type of cyclictest. Memory_Stress-Idle implies that memory stress is applied on the Host and no stress on the Guest.
+
+.. _Memory_Stress-Idle: http://testresults.opnfv.org/grafana/dashboard/db/kvmfornfv-cyclictest?panelId=12&fullscreen
+
+.. Figure:: ../images/Memorystress-Idle.png
+
+4. IO_Stress-Idle Graph
+------------------------
+`IO_Stress-Idle`_ graph displays the average, maximum and minimum latency values obtained by running the IO_Stress-Idle test-type of cyclictest. IO_Stress-Idle implies that IO stress is applied on the Host and no stress on the Guest.
+
+.. _IO_Stress-Idle: http://testresults.opnfv.org/grafana/dashboard/db/kvmfornfv-cyclictest?panelId=13&fullscreen
+
+.. Figure:: ../images/IOstress-Idle.png
diff --git a/docs/userguide/low_latency.userguide.rst b/docs/userguide/low_latency.userguide.rst
index e0d2791df..66e63770c 100644
--- a/docs/userguide/low_latency.userguide.rst
+++ b/docs/userguide/low_latency.userguide.rst
@@ -66,3 +66,47 @@ Run-time Environment Setup
Not only are special kernel parameters needed but a special run-time
environment is also required. Please refer to `tunning.userguide` for
more explanation.
+
+Test cases to measure Latency
+=============================
+
+Cyclictest case
+---------------
+
+Understanding the naming convention
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Idle-Idle test-type
+~~~~~~~~~~~~~~~~~~~
+
+CPU_Stress-Idle test-type
+-------------------------
+
+Memory_Stress-Idle test-type
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+IO_Stress-Idle test-type
+~~~~~~~~~~~~~~~~~~~~~~~~
+
+CPU_Stress-CPU_Stress test-type
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Memory_Stress-Memory_Stress test-type
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+IO_Stress-IO_Stress test type
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Packet Forwarding Test case
+---------------------------
+
+Packet forwarding to Host
+~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Packet forwarding to Guest
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Packet forwarding to Guest using SRIOV
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+
diff --git a/docs/userguide/packet_forwarding.userguide.rst b/docs/userguide/packet_forwarding.userguide.rst
new file mode 100644
index 000000000..ba117508c
--- /dev/null
+++ b/docs/userguide/packet_forwarding.userguide.rst
@@ -0,0 +1,555 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International License.
+
+.. http://creativecommons.org/licenses/by/4.0
+
+=================
+PACKET FORWARDING
+=================
+=======================
+About Packet Forwarding
+=======================
+
+Packet Forwarding is a test suite of KVMFORNFV which is used to measure the total time taken by a
+**Packet** generated by the traffic generator to return from Guest/Host as per the implemented
+scenario. Packet Forwarding is implemented using VSWITCHPERF/``VSPERF software of OPNFV`` and an
+``IXIA Traffic Generator``.
+
+Version Features
+----------------
+
++-----------------------------+---------------------------------------------------+
+| | |
+| **Release** | **Features** |
+| | |
++=============================+===================================================+
+| | - Packet Forwarding is not part of Colorado |
+| Colorado | release of KVMFORNFV |
+| | |
++-----------------------------+---------------------------------------------------+
+| | - Packet Forwarding is a testcase in KVMFORNFV |
+| | - Implements three scenarios (Host/Guest/SRIOV) |
+| | as part of testing in KVMFORNFV |
+| Danube                      | - Uses available testcases of OPNFV's VSWITCHPERF |
+| | software (PVP/PVVP) |
+| | - Works with IXIA Traffic Generator |
++-----------------------------+---------------------------------------------------+
+
+======
+VSPERF
+======
+
+VSPerf is an OPNFV testing project.
+VSPerf will develop a generic and architecture agnostic vSwitch testing framework and associated
+tests, that will serve as a basis for validating the suitability of different vSwitch
+implementations in a Telco NFV deployment environment. The output of this project will be utilized
+by the OPNFV Performance and Test group and its associated projects, as part of OPNFV Platform and
+VNF level testing and validation.
+
+For complete VSPERF documentation go to `link.`_
+
+.. _link.: http://artifacts.opnfv.org/vswitchperf/colorado/index.html
+
+
+Installation
+------------
+Guidelines for installing `VSPERF`_.
+
+.. _VSPERF: http://artifacts.opnfv.org/vswitchperf/colorado/configguide/index.html
+
+Supported Operating Systems
+---------------------------
+
+* CentOS 7
+* Fedora 20
+* Fedora 21
+* Fedora 22
+* RedHat 7.2
+* Ubuntu 14.04
+
+Supported vSwitches
+-------------------
+The vSwitch must support OpenFlow 1.3 or greater.
+
+* OVS (built from source).
+* OVS with DPDK (built from source).
+
+Supported Hypervisors
+---------------------
+
+* Qemu version 2.3.
+
+Other Requirements
+------------------
+The test suite requires Python 3.3 and relies on a number of other
+packages. These need to be installed for the test suite to function.
+
+Installation of the required packages, preparation of the Python 3 virtual
+environment and compilation of OVS, DPDK and QEMU are performed by the
+script **systems/build_base_machine.sh**. It should be executed under the
+user account that will be used for vsperf execution.
+
+ **Please Note:** Password-less sudo access must be configured for given user
+ before script is executed.
+
+Execution of installation script:
+
+.. code:: bash
+
+ $ cd Vswitchperf
+ $ cd systems
+ $ ./build_base_machine.sh
+
+Script **build_base_machine.sh** will install all the vsperf dependencies
+in terms of system packages, Python 3.x and required Python modules.
+In case of CentOS 7 it will install Python 3.3 from an additional repository
+provided by Software Collections (`a link`_). In case of RedHat 7 it will
+install Python 3.4 as an alternate installation in /usr/local/bin. Installation
+script will also use `virtualenv`_ to create a vsperf virtual environment,
+which is isolated from the default Python environment. This environment will
+reside in a directory called **vsperfenv** in $HOME.
+
+You will need to activate the virtual environment every time you start a
+new shell session. Its activation is specific to your OS:
+
+For running testcases, VSPERF is installed on Intel pod1-node2, on which the CentOS
+operating system is installed. Only the VSPERF installation on CentOS is discussed here.
+For installation steps on other operating systems please refer to `here`_.
+
+.. _here: http://artifacts.opnfv.org/vswitchperf/colorado/configguide/index.html
+
+For CentOS 7
+-----------------
+
+**Python 3 Packages**
+
+To avoid file permission errors and Python version issues, use virtualenv to create an isolated environment with Python3.
+The required Python 3 packages can be found in the `requirements.txt` file in the root of the test suite.
+They can be installed in your virtual environment like so:
+
+.. code:: bash
+
+ scl enable python33 bash
+ # Create virtual environment
+ virtualenv vsperfenv
+ cd vsperfenv
+ source bin/activate
+ pip install -r requirements.txt
+
+
+You need to activate the virtual environment every time you start a new shell session.
+To activate it, simply run:
+
+.. code:: bash
+
+ scl enable python33 bash
+ cd vsperfenv
+ source bin/activate
+
+
+Working Behind a Proxy
+-----------------------
+
+If you're behind a proxy, you'll likely want to configure this before running any of the above. For example:
+
+.. code:: bash
+
+ export http_proxy=proxy.mycompany.com:123
+ export https_proxy=proxy.mycompany.com:123
+
+
+
+.. _a link: http://www.softwarecollections.org/en/scls/rhscl/python33/
+.. _virtualenv: https://virtualenv.readthedocs.org/en/latest/
+
+For other OS specific activation click `this link`_:
+
+.. _this link: http://artifacts.opnfv.org/vswitchperf/colorado/configguide/installation.html#other-requirements
+
+Traffic-Generators
+-------------------
+VSPERF supports many traffic generators. For configuring VSPERF to work with an available traffic generator, go through `this`_.
+
+.. _this: http://artifacts.opnfv.org/vswitchperf/colorado/configguide/trafficgen.html
+
+VSPERF supports the following traffic generators:
+
+ * Dummy (DEFAULT): Allows you to use your own external
+ traffic generator.
+ * IXIA (IxNet and IxOS)
+ * Spirent TestCenter
+ * Xena Networks
+ * MoonGen
+
+To see the list of traffic generators from the CLI:
+
+.. code-block:: console
+
+ $ ./vsperf --list-trafficgens
+
+This guide provides the details of how to install
+and configure the various traffic generators.
+
+As KVM4NFV uses only the IXIA traffic generator, only its setup is discussed here. For complete documentation regarding traffic generators please follow this `link`_.
+
+.. _link: https://gerrit.opnfv.org/gerrit/gitweb?p=vswitchperf.git;a=blob;f=docs/configguide/trafficgen.rst;h=85fc35b886d30db3b92a6b7dcce7ca742b70cbdc;hb=HEAD
+
+==========
+IXIA Setup
+==========
+
+=====================
+Hardware Requirements
+=====================
+VSPERF requires the following hardware to run tests: an IXIA traffic generator (IxNetwork), a machine that runs the IXIA client software, and a CentOS Linux release 7.1.1503 (Core) host.
+
+Installation
+-------------
+
+Follow the [installation instructions] to install.
+
+IXIA Setup
+------------
+On the CentOS 7 system
+----------------------
+You need to install IxNetworkTclClient$(VER_NUM)Linux.bin.tgz.
+
+On the IXIA client software system
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+Find the IxNetwork TCL server app (start -> All Programs -> IXIA -> IxNetwork -> IxNetwork_$(VER_NUM) -> IxNetwork TCL Server)
+ - Right click on IxNetwork TCL Server, select properties
 - Under the Shortcut tab, in the Target dialog box, make sure the argument "-tclport xxxx" is present, where xxxx is your port number (take note of this port number; you will need it for the 10_custom.conf file).
+
+.. Figure:: ../images/IXIA1.png
+
+- Hit Ok and start the TCL server application
+
+VSPERF configuration
+--------------------
+
+There are several configuration options specific to the IxNetwork traffic generator
+from IXIA. It is essential to set them correctly before VSPERF is executed
+for the first time.
+
+Detailed description of options follows:
+
+ * TRAFFICGEN_IXNET_MACHINE - IP address of server, where IxNetwork TCL Server is running
+ * TRAFFICGEN_IXNET_PORT - PORT, where IxNetwork TCL Server is accepting connections from
+ TCL clients
+ * TRAFFICGEN_IXNET_USER - username, which will be used during communication with IxNetwork
+ TCL Server and IXIA chassis
+ * TRAFFICGEN_IXIA_HOST - IP address of IXIA traffic generator chassis
+ * TRAFFICGEN_IXIA_CARD - identification of card with dedicated ports at IXIA chassis
+ * TRAFFICGEN_IXIA_PORT1 - identification of the first dedicated port at TRAFFICGEN_IXIA_CARD
+ at IXIA chassis; VSPERF uses two separated ports for traffic generation. In case of
+ unidirectional traffic, it is essential to correctly connect 1st IXIA port to the 1st NIC
+ at DUT, i.e. to the first PCI handle from WHITELIST_NICS list. Otherwise traffic may not
+ be able to pass through the vSwitch.
+ * TRAFFICGEN_IXIA_PORT2 - identification of the second dedicated port at TRAFFICGEN_IXIA_CARD
+ at IXIA chassis; VSPERF uses two separated ports for traffic generation. In case of
+ unidirectional traffic, it is essential to correctly connect 2nd IXIA port to the 2nd NIC
+ at DUT, i.e. to the second PCI handle from WHITELIST_NICS list. Otherwise traffic may not
+ be able to pass through the vSwitch.
+ * TRAFFICGEN_IXNET_LIB_PATH - path to the DUT specific installation of IxNetwork TCL API
+ * TRAFFICGEN_IXNET_TCL_SCRIPT - name of the TCL script, which VSPERF will use for
+ communication with IXIA TCL server
+ * TRAFFICGEN_IXNET_TESTER_RESULT_DIR - folder accessible from IxNetwork TCL server,
+ where test results are stored, e.g. ``c:/ixia_results``; see test-results-share_
+ * TRAFFICGEN_IXNET_DUT_RESULT_DIR - directory accessible from the DUT, where test
+ results from IxNetwork TCL server are stored, e.g. ``/mnt/ixia_results``; see
+ test-results-share_
+
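A hypothetical ``10_custom.conf`` fragment setting these options might look as follows. Every value below is a placeholder, not a default; replace each one with the details of your own IXIA setup:

```python
# Hypothetical example values for the IXIA-related VSPERF options;
# all addresses, ports and paths are placeholders.
TRAFFICGEN_IXNET_MACHINE = '10.10.120.6'     # IxNetwork TCL Server IP
TRAFFICGEN_IXNET_PORT = '8009'               # the -tclport value noted earlier
TRAFFICGEN_IXNET_USER = 'vsperf_user'
TRAFFICGEN_IXIA_HOST = '10.10.120.22'        # IXIA chassis IP
TRAFFICGEN_IXIA_CARD = '1'
TRAFFICGEN_IXIA_PORT1 = '1'
TRAFFICGEN_IXIA_PORT2 = '2'
TRAFFICGEN_IXNET_LIB_PATH = '/opt/ixnet/lib/IxTclNetwork'
TRAFFICGEN_IXNET_TCL_SCRIPT = 'ixnetrfc2544.tcl'
TRAFFICGEN_IXNET_TESTER_RESULT_DIR = 'c:/ixia_results'
TRAFFICGEN_IXNET_DUT_RESULT_DIR = '/mnt/ixia_results'
```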
+.. _test-results-share:
+
+Test results share
+-------------------
+
+VSPERF is not able to retrieve test results via the TCL API directly. Instead, all test
+results are stored on the IxNetwork TCL server. Results are stored in the folder defined by
+the ``TRAFFICGEN_IXNET_TESTER_RESULT_DIR`` configuration parameter. The content of this
+folder must be shared (e.g. via the Samba protocol) between the TCL server and the DUT, where
+VSPERF is executed. VSPERF expects that test results will be available in the directory
+configured by the ``TRAFFICGEN_IXNET_DUT_RESULT_DIR`` configuration parameter.
+
+Example of sharing configuration:
+
+ * Create a new folder at IxNetwork TCL server machine, e.g. ``c:\ixia_results``
+ * Modify sharing options of ``ixia_results`` folder to share it with everybody
+ * Create a new directory at DUT, where shared directory with results
+ will be mounted, e.g. ``/mnt/ixia_results``
+ * Update your custom VSPERF configuration file as follows:
+
+ .. code-block:: python
+
+ TRAFFICGEN_IXNET_TESTER_RESULT_DIR = 'c:/ixia_results'
+ TRAFFICGEN_IXNET_DUT_RESULT_DIR = '/mnt/ixia_results'
+
+ Note: It is essential to use slashes '/' also in path
+ configured by ``TRAFFICGEN_IXNET_TESTER_RESULT_DIR`` parameter.
+ * Install cifs-utils package.
+
+ e.g. at rpm based Linux distribution:
+
+ .. code-block:: console
+
+ yum install cifs-utils
+
+ * Mount shared directory, so VSPERF can access test results.
+
+ e.g. by adding new record into ``/etc/fstab``
+
+ .. code-block:: console
+
+ mount -t cifs //_TCL_SERVER_IP_OR_FQDN_/ixia_results /mnt/ixia_results
+ -o file_mode=0777,dir_mode=0777,nounix
+
+It is recommended to verify, that any new file inserted into ``c:/ixia_results`` folder
+is visible at DUT inside ``/mnt/ixia_results`` directory.
+
+
+Cloning and building src dependencies
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+In order to run VSPERF, you will need to download DPDK and OVS. You can do this manually and build
+them in a preferred location, or you could use vswitchperf/src. The vswitchperf/src directory
+contains makefiles that will allow you to clone and build the libraries that VSPERF depends on,
+such as DPDK and OVS. To clone and build simply:
+
+.. code:: bash
+
+ cd src
+ make
+
+To delete a src subdirectory and its contents to allow you to re-clone simply use:
+
+.. code:: bash
+
+ make cleanse
+
+Configure the `./conf/10_custom.conf` file
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+The supplied `10_custom.conf` file must be modified, as it contains configuration items for which there are no reasonable default values.
+
+The configuration items that can be added are not limited to the initial contents. Any configuration item mentioned in any .conf file in the `./conf` directory can be added and will be overridden by the custom
+configuration value.
+
+Using a custom settings file
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+Alternatively a custom settings file can be passed to `vsperf` via the `--conf-file` argument.
+
+.. code:: bash
+
+ ./vsperf --conf-file <path_to_settings_py> ...
+
+Note that configuration passed in via the environment (`--load-env`) or via another command line
+argument will override both the default and your custom configuration files. This
+"priority hierarchy" can be described like so (1 = max priority):
+
+1. Command line arguments
+2. Environment variables
+3. Configuration file(s)
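This precedence can be sketched with a small hypothetical helper (this is not vsperf's actual settings code, only an illustration of the ordering above):

```python
# Illustration of vsperf's configuration precedence: command-line
# arguments beat environment variables, which beat values loaded
# from configuration files.

def resolve_setting(name, cli_args, environ, conf):
    """Return the effective value of a setting, honouring precedence."""
    if name in cli_args:        # 1. command line (highest priority)
        return cli_args[name]
    if name in environ:         # 2. environment variables
        return environ[name]
    return conf.get(name)       # 3. configuration file(s)

conf = {'GUEST_IMAGE': 'default.qcow2', 'RTE_TARGET': 'x86_64-native'}
env = {'GUEST_IMAGE': 'env.qcow2'}
cli = {'GUEST_IMAGE': 'cli.qcow2'}

print(resolve_setting('GUEST_IMAGE', cli, env, conf))   # command line wins
print(resolve_setting('RTE_TARGET', cli, env, conf))    # falls back to the file
```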
+
+Executing tests
+~~~~~~~~~~~~~~~~
+Before running any tests make sure you have root permissions by adding the following line to /etc/sudoers:
+
+.. code:: bash
+
+ username ALL=(ALL) NOPASSWD: ALL
+
+username in the example above should be replaced with a real username.
+
+To list the available tests:
+
+.. code:: bash
+
+ ./vsperf --list-tests
+
+
+To run a group of tests, for example all tests with a name containing
+'RFC2544':
+
+.. code:: bash
+
+ ./vsperf --conf-file=user_settings.py --tests="RFC2544"
+
+To run all tests:
+
+.. code:: bash
+
+ ./vsperf --conf-file=user_settings.py
+
+Some tests allow for configurable parameters, including test duration (in seconds) as well as packet sizes (in bytes).
+
+.. code:: bash
+
+ ./vsperf --conf-file user_settings.py \
+          --tests RFC2544Tput \
+          --test-param "rfc2544_duration=10;packet_sizes=128"
+
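The ``--test-param`` value is a semicolon-separated list of ``key=value`` pairs; a small illustrative parser (not vsperf's own parsing code) shows the shape of the data:

```python
# Illustrative parser for the --test-param string format shown above;
# vsperf has its own parsing logic, this is only a sketch.

def parse_test_params(param_string):
    """Split "k1=v1;k2=v2" into a dict of raw string values."""
    params = {}
    for pair in param_string.split(';'):
        if not pair.strip():
            continue  # tolerate trailing semicolons
        key, _, value = pair.partition('=')
        params[key.strip()] = value.strip()
    return params

print(parse_test_params("rfc2544_duration=10;packet_sizes=128"))
```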
+For all available options, check out the help dialog:
+
+.. code:: bash
+
+ ./vsperf --help
+
+
+Testcases
+----------
+Available Tests in VSPERF are:
+
+ * phy2phy_tput
+ * phy2phy_forwarding
+ * back2back
+ * phy2phy_tput_mod_vlan
+ * phy2phy_cont
+ * pvp_cont
+ * pvvp_cont
+ * pvpv_cont
+ * phy2phy_scalability
+ * pvp_tput
+ * pvp_back2back
+ * pvvp_tput
+ * pvvp_back2back
+ * phy2phy_cpu_load
+ * phy2phy_mem_load
+
+VSPERF modes of operation
+--------------------------
+
+VSPERF can be run in different modes. By default it will configure the vSwitch,
+traffic generator and VNF. However, it can also be used just for configuration
+and execution of the traffic generator, or for execution of all
+components except the traffic generator itself.
+
+The mode of operation is driven by the configuration parameter ``-m`` or ``--mode``:
+
+.. code-block:: console
+
+ -m MODE, --mode MODE vsperf mode of operation;
+ Values:
+ "normal" - execute vSwitch, VNF and traffic generator
+ "trafficgen" - execute only traffic generator
+ "trafficgen-off" - execute vSwitch and VNF
+ "trafficgen-pause" - execute vSwitch and VNF but wait before traffic transmission
+
+In case VSPERF is executed in "trafficgen" mode, the configuration
+of the traffic generator can be modified through the ``TRAFFIC`` dictionary passed to the
+``--test-params`` option. It is not necessary to specify all values of the ``TRAFFIC``
+dictionary; it is sufficient to specify only the values which should be changed.
+A detailed description of the ``TRAFFIC`` dictionary can be found at :ref:`configuration-of-traffic-dictionary`.
+
+Example of execution of VSPERF in "trafficgen" mode:
+
+.. code-block:: console
+
+ $ ./vsperf -m trafficgen --trafficgen IxNet --conf-file vsperf.conf \
+ --test-params "TRAFFIC={'traffic_type':'rfc2544_continuous','bidir':'False','framerate':60}"
+
+
+================================
+Packet Forwarding Test Scenarios
+================================
+KVMFORNFV currently implements three scenarios as part of testing:
+
 * Host Scenario
 * Guest Scenario
 * SR-IOV Scenario
+
+
+Packet Forwarding Host Scenario
+-------------------------------
+Here the Host is NODE-2. It has VSPERF installed on it and is properly configured to use the IXIA traffic generator by providing the IXIA card, ports and lib paths along with the IP.
+Please refer to the figure below.
+
+.. Figure:: ../images/Host_Scenario.png
+
+Packet Forwarding Guest Scenario
+--------------------------------
+Here the guest is a Virtual Machine (VM) launched by using a modified CentOS image(vsperf provided)
+on Node-2 (Host) using Qemu. In this scenario, the packet is initially forwarded to Host which is
+then forwarded to the launched guest. The time taken by the packet to reach the IXIA traffic-generator
+via Host and Guest is calculated and published as a test result of this scenario.
+
+.. Figure:: ../images/Guest_Scenario.png
+
+Packet Forwarding SRIOV Scenario
+--------------------------------
+Unlike the packet forwarding to Guest-via-Host scenario, here the packet generated at the IXIA is
+directly forwarded to the Guest VM launched on the Host by implementing an SR-IOV interface at the NIC level
+of the Host, i.e., Node-2. The time taken by the packet to return to the IXIA traffic generator is calculated
+and published as the test result for this scenario. The SRIOV-support_ section below details how to use SR-IOV.
+
+.. Figure:: ../images/SRIOV_Scenario.png
+
+Using vfio_pci with DPDK
+------------------------
+
+To use vfio with DPDK instead of igb_uio add into your custom configuration
+file the following parameter:
+
+.. code-block:: python
+
+ PATHS['dpdk']['src']['modules'] = ['uio', 'vfio-pci']
+
+
+**NOTE:** In case DPDK is installed from a binary package, please
+set ``PATHS['dpdk']['bin']['modules']`` instead.
+
+**NOTE:** Please ensure that Intel VT-d is enabled in BIOS.
+
+**NOTE:** Please ensure your boot/grub parameters include
+the following:
+
+.. code-block:: console
+
+ iommu=pt intel_iommu=on
+
+To check that IOMMU is enabled on your platform:
+
+.. code-block:: console
+
+ $ dmesg | grep IOMMU
+ [ 0.000000] Intel-IOMMU: enabled
+ [ 0.139882] dmar: IOMMU 0: reg_base_addr fbffe000 ver 1:0 cap d2078c106f0466 ecap f020de
+ [ 0.139888] dmar: IOMMU 1: reg_base_addr ebffc000 ver 1:0 cap d2078c106f0466 ecap f020de
+ [ 0.139893] IOAPIC id 2 under DRHD base 0xfbffe000 IOMMU 0
+ [ 0.139894] IOAPIC id 0 under DRHD base 0xebffc000 IOMMU 1
+ [ 0.139895] IOAPIC id 1 under DRHD base 0xebffc000 IOMMU 1
+ [ 3.335744] IOMMU: dmar0 using Queued invalidation
+ [ 3.335746] IOMMU: dmar1 using Queued invalidation
+ ....
+
+.. _SRIOV-support:
+
+Using SRIOV support
+-------------------
+
+To use the virtual functions of a NIC with SRIOV support, use the extended form
+of the NIC PCI slot definition:
+
+.. code-block:: python
+
+ WHITELIST_NICS = ['0000:03:00.0|vf0', '0000:03:00.1|vf3']
+
+Where ``vf`` indicates virtual function usage and the following number
+defines the VF to be used. In case VF usage is detected, vswitchperf
+will enable SRIOV support for the given card and it will detect the
+PCI slot numbers of the selected VFs.
+
+So in the example above, one VF will be configured for NIC '0000:03:00.0'
+and four VFs will be configured for NIC '0000:03:00.1' (of which VF
+number 3 will be used). Vswitchperf will detect the PCI addresses of the
+selected VFs and use them during test execution.
+
+At the end of vswitchperf execution, SRIOV support will be disabled.
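+
+Under the hood, SR-IOV virtual functions are typically created through the
+standard Linux sysfs interface; a minimal manual sketch (assuming NIC
+'0000:03:00.1' and a driver with SR-IOV support, not the vswitchperf
+implementation itself) looks like:
+
+.. code-block:: console
+
+    # create four VFs on the physical function (indices 0-3)
+    $ echo 4 | sudo tee /sys/bus/pci/devices/0000:03:00.1/sriov_numvfs
+    # list the PCI addresses of the created virtual functions
+    $ ls -l /sys/bus/pci/devices/0000:03:00.1/virtfn*
+    # disable SR-IOV again
+    $ echo 0 | sudo tee /sys/bus/pci/devices/0000:03:00.1/sriov_numvfs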
+
+SRIOV support is generic and it can be used in different testing scenarios.
+For example:
+
+
+* vSwitch tests with DPDK or without DPDK support to verify impact
+ of VF usage on vSwitch performance
+* tests without vSwitch, where traffic is forwarded directly
+ between VF interfaces by packet forwarder (e.g. testpmd application)
+* tests without vSwitch, where VM accesses VF interfaces directly
+ by PCI-passthrough to measure raw VM throughput performance.
+
diff --git a/docs/userguide/pcm_utility.userguide.rst b/docs/userguide/pcm_utility.userguide.rst
new file mode 100644
index 000000000..baef7059a
--- /dev/null
+++ b/docs/userguide/pcm_utility.userguide.rst
@@ -0,0 +1,126 @@
+=========================================================
+Collecting Memory Bandwidth Information using PCM utility
+=========================================================
+
+About PCM utility
+-----------------
+The Intel® Performance Counter Monitor provides sample C++ routines and utilities to estimate the
+internal resource utilization of the latest Intel® Xeon® and Core™ processors and gain a significant
+performance boost. The Intel PCM toolset includes the pcm-memory.x tool, which is used for observing
+the memory traffic intensity.
+
+Version Features
+-----------------
+
++-----------------------------+-----------------------------------------------+
+|                             |                                               |
+| **Release**                 | **Features**                                  |
+|                             |                                               |
++=============================+===============================================+
+|                             | - In the Colorado release, memory bandwidth   |
+| Colorado                    |   information is not collected through the    |
+|                             |   cyclic testcases.                           |
+|                             |                                               |
++-----------------------------+-----------------------------------------------+
+|                             | - pcm-memory.x provides the memory bandwidth  |
+|                             |   data throughout the testcases               |
+|                             | - pcm-memory.x will be executed before the    |
+| Danube                      |   execution of every testcase                 |
+|                             | - used for all test-types (stress/idle)       |
+|                             | - generated memory bandwidth logs are         |
+|                             |   published to the KVMFORNFV artifacts        |
++-----------------------------+-----------------------------------------------+
+
+Implementation of pcm-memory.x:
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The tool measures the memory bandwidth observed for every channel, reporting separately the
+throughputs for reads from memory and writes to memory. The pcm-memory.x tool tends to report
+values slightly higher than the application's own measurement.
+
+Command:
+
+.. code:: bash
+
+ sudo ./pcm-memory.x [Delay]/[external_program]
+
+Parameters
+
+- pcm-memory can be called with either a delay or an external_program/application as a parameter
+
+- If the delay is given as 5, then the output will be produced with a refresh of every 5 seconds.
+
+- If an external_program or script is given, then the output will be produced after the execution of the application or script passed as a parameter.
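+
+For instance (illustrative invocations; the workload script name is
+hypothetical):
+
+.. code:: bash
+
+    # refresh the bandwidth report every 5 seconds
+    sudo ./pcm-memory.x 5
+
+    # report once, after the given script finishes
+    sudo ./pcm-memory.x ./my_workload.sh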
+
+**Sample Output:**
+
+    The output below is produced with the default refresh of 1 second.
+
++---------------------------------------+---------------------------------------+
+|               Socket 0                |               Socket 1                |
++=======================================+=======================================+
+| Memory Performance Monitoring         | Memory Performance Monitoring         |
+|                                       |                                       |
++---------------------------------------+---------------------------------------+
+| Mem Ch 0: Reads (MB/s): 6870.81       | Mem Ch 0: Reads (MB/s): 7406.36       |
+|           Writes(MB/s): 1805.03       |           Writes(MB/s): 1951.25       |
+| Mem Ch 1: Reads (MB/s): 6873.91       | Mem Ch 1: Reads (MB/s): 7411.11       |
+|           Writes(MB/s): 1810.86       |           Writes(MB/s): 1957.73       |
+| Mem Ch 2: Reads (MB/s): 6866.77       | Mem Ch 2: Reads (MB/s): 7403.39       |
+|           Writes(MB/s): 1804.38       |           Writes(MB/s): 1951.42       |
+| Mem Ch 3: Reads (MB/s): 6867.47       | Mem Ch 3: Reads (MB/s): 7403.66       |
+|           Writes(MB/s): 1805.53       |           Writes(MB/s): 1950.95       |
+|                                       |                                       |
+| NODE0 Mem Read (MB/s):  27478.96      | NODE1 Mem Read (MB/s):  29624.51      |
+| NODE0 Mem Write (MB/s): 7225.79       | NODE1 Mem Write (MB/s): 7811.36       |
+| NODE0 P. Write (T/s):   214810        | NODE1 P. Write (T/s):   238294        |
+| NODE0 Memory (MB/s):    34704.75      | NODE1 Memory (MB/s):    37435.87      |
++---------------------------------------+---------------------------------------+
+| - System Read Throughput(MB/s):   57103.47                                    |
+| - System Write Throughput(MB/s):  15037.15                                    |
+| - System Memory Throughput(MB/s): 72140.62                                    |
++-------------------------------------------------------------------------------+
+
+pcm-memory.x in KVMFORNFV:
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+pcm-memory is a part of KVMFORNFV in the D release. pcm-memory.x will be executed with a delay of
+60 seconds before starting every testcase, to monitor the memory traffic intensity; this is handled
+in the collect_MBWInfo function. The memory bandwidth information will be collected into the logs
+through the testcase, updating every 60 seconds.
+
+ **Pre-requisites:**
+
+ 1. Check for the processors supported by PCM. The latest pcm utility version (2.11) supports the Intel® Xeon® E5 v4 processor family.
+
+ 2. Disable the NMI watchdog.
+
+ 3. Load the MSR kernel module.
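+
+The last two prerequisites can be satisfied roughly as follows (a sketch using
+standard Linux interfaces):
+
+.. code:: bash
+
+    # disable the NMI watchdog so PCM can use the performance counters
+    echo 0 > /proc/sys/kernel/nmi_watchdog
+
+    # load the msr module so PCM can access model-specific registers
+    modprobe msr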
+
+
+Memory Bandwidth logs for KVMFORNFV can be found `here`_:
+
+.. code:: bash
+
+ http://artifacts.opnfv.org/kvmfornfv.html
+
+.. _here: http://artifacts.opnfv.org/kvmfornfv.html
+
+Details of the functions implemented:
+
+The install_Pcm function handles the installation of the pcm utility and the prerequisites required for the pcm-memory.x tool to execute.
+
+.. code:: bash
+
+ git clone https://github.com/opcm/pcm
+ cd pcm
+ make
+
+In the collect_MBWInfo function, the below command is executed on the node and its output is
+collected into the logs with the timestamp and testType. The function is called at the beginning
+of each testcase, and a signal is passed to terminate the pcm-memory process that was running
+throughout the cyclic testcase.
+
+.. code:: bash
+
+ pcm-memory.x 60 &>/root/MBWInfo/MBWInfo_${testType}_${timeStamp}
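+
+Putting these pieces together, a minimal sketch of the flow (hypothetical
+variable names; not the exact KVMFORNFV script) is:
+
+.. code:: bash
+
+    #!/bin/bash
+    testType="idle"
+    timeStamp=$(date +%s)
+    mkdir -p /root/MBWInfo
+
+    # start pcm-memory in the background, logging every 60 seconds
+    pcm-memory.x 60 &> "/root/MBWInfo/MBWInfo_${testType}_${timeStamp}" &
+    pcm_pid=$!
+
+    # ... run the cyclic testcase here ...
+
+    # terminate the monitor once the testcase completes
+    kill "${pcm_pid}"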
+