Diffstat (limited to 'docs/userguide')
-rw-r--r--  docs/userguide/kvmfornfv.cyclictest-dashboard.userguide.rst | 26
-rw-r--r--  docs/userguide/low_latency.userguide.rst                    |  2
-rw-r--r--  docs/userguide/packet_forwarding.userguide.rst              | 26
-rw-r--r--  docs/userguide/pcm_utility.userguide.rst                    | 14
4 files changed, 33 insertions(+), 35 deletions(-)
diff --git a/docs/userguide/kvmfornfv.cyclictest-dashboard.userguide.rst b/docs/userguide/kvmfornfv.cyclictest-dashboard.userguide.rst
index 4ec8f5013..e7a516bff 100644
--- a/docs/userguide/kvmfornfv.cyclictest-dashboard.userguide.rst
+++ b/docs/userguide/kvmfornfv.cyclictest-dashboard.userguide.rst
@@ -15,7 +15,7 @@ Abstract
This chapter explains the procedure to configure the InfluxDB and Grafana on Node1 or Node2
depending on the testtype to publish KVM4NFV test results. The cyclictest cases are executed
and results are published on Yardstick Dashboard(Grafana). InfluxDB is the database which will
-store the cyclictest results and Grafana is a visualisation suite to view the maximum,minumum and
+store the cyclictest results and Grafana is a visualisation suite to view the maximum, minimum and
average values of the time series data of cyclictest results.The framework is shown in below image.
.. figure:: images/dashboard-architecture.png
@@ -98,7 +98,8 @@ Three type of dispatcher methods are available to store the cyclictest results.
**1. File**: Default Dispatcher module is file. If the dispatcher module is configured as a file,then the test results are stored in a temporary file yardstick.out
( default path: /tmp/yardstick.out).
-Dispatcher module of "Verify Job" is "Default". So,the results are stored in Yardstick.out file for verify job. Storing all the verify jobs in InfluxDB database causes redundancy of latency values. Hence, a File output format is prefered.
+The dispatcher module of the "Verify Job" is "Default", so the results of a verify job are stored in the yardstick.out file.
+Storing all the verify job results in the InfluxDB database causes redundancy of latency values. Hence, the File output format is preferred.
.. code:: bash
@@ -111,7 +112,8 @@ Dispatcher module of "Verify Job" is "Default". So,the results are stored in Yar
max_bytes = 0
backup_count = 0
-**2. Influxdb**: If the dispatcher module is configured as influxdb, then the test results are stored in Influxdb. Users can check test resultsstored in the Influxdb(Database) on Grafana which is used to visualize the time series data.
+**2. Influxdb**: If the dispatcher module is configured as influxdb, then the test results are stored in Influxdb.
+Users can check the test results stored in Influxdb (the database) on Grafana, which is used to visualize the time series data.
To configure the influxdb, the following content in /etc/yardstick/yardstick.conf needs to be updated
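The configuration referred to here is cut off by the hunk; a minimal sketch, assuming the Danube-era Yardstick layout of yardstick.conf (the target address, database name and credentials below are placeholders, not values from this patch), would be:

.. code:: bash

   [DEFAULT]
   debug = False
   dispatcher = influxdb

   [dispatcher_influxdb]
   timeout = 5
   target = http://127.0.0.1:8086
   db_name = yardstick
   username = root
   password = root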
@@ -148,7 +150,8 @@ Dispatcher module of "Daily Job" is Influxdb. So, the results are stored in infl
Detailing the dispatcher module in verify and daily Jobs:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-KVM4NFV updates the dispatcher module in the yardstick configuration file(/etc/yardstick/yardstick.conf) depending on the Job type(Verify/Daily). Once the test is completed, results are published to the respective dispatcher modules.
+KVM4NFV updates the dispatcher module in the yardstick configuration file (/etc/yardstick/yardstick.conf) depending on the Job type (Verify/Daily).
+Once the test is completed, results are published to the respective dispatcher modules.
Dispatcher module is configured for each Job type as mentioned below.
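Purely as an illustration of that per-job switch (the actual KVM4NFV CI scripts are not part of this patch and may work differently), toggling the dispatcher amounts to rewriting a single key in yardstick.conf:

.. code:: bash

   # Hypothetical helper commands; the real CI scripts may differ.
   # Daily job: publish results to InfluxDB.
   $ sed -i 's/^dispatcher = .*/dispatcher = influxdb/' /etc/yardstick/yardstick.conf
   # Verify job: fall back to the default file dispatcher.
   $ sed -i 's/^dispatcher = .*/dispatcher = file/' /etc/yardstick/yardstick.conf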
@@ -198,7 +201,8 @@ Influxdb api which is already implemented in `Influxdb`_ will post the data in l
``Displaying Results on Grafana dashboard:``
-- Once the test results are stored in Influxdb, dashboard configuration file(Json) which used to display the cyclictest results on Grafana need to be created by following the `Grafana-procedure`_ and then pushed into `yardstick-repo`_
+- Once the test results are stored in Influxdb, the dashboard configuration file (JSON) used to display the cyclictest results
+  on Grafana needs to be created by following the `Grafana-procedure`_ and then pushed into `yardstick-repo`_
- Grafana can be accessed at `Login`_ using credentials opnfv/opnfv and used for visualizing the collected test data as shown in `Visual`_\
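As a sketch of the line protocol mentioned in the hunk header above (the measurement name, tags and field values are made up for illustration; only the /write endpoint and the protocol syntax follow InfluxDB's documented HTTP API):

.. code:: bash

   # Write one cyclictest sample into the yardstick database (placeholder values).
   $ curl -i -XPOST 'http://127.0.0.1:8086/write?db=yardstick' \
     --data-binary 'kvmfornfv_cyclictest,test_type=idle_idle min=12,avg=18,max=51'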
@@ -263,7 +267,8 @@ Note:
1. Idle-Idle Graph
~~~~~~~~~~~~~~~~~~~~
-`Idle-Idle`_ graph displays the Average,Maximum and Minimum latency values obtained by running Idle_Idle test-type of the cyclictest. Idle_Idleimplies that no stress is applied on the Host or the Guest.
+`Idle-Idle`_ graph displays the Average, Maximum and Minimum latency values obtained by running Idle_Idle test-type of the cyclictest.
+Idle_Idle implies that no stress is applied on the Host or the Guest.
.. _Idle-Idle: http://testresults.opnfv.org/grafana/dashboard/db/kvmfornfv-cyclictest?panelId=10&fullscreen
@@ -274,7 +279,8 @@ Note:
2. CPU_Stress-Idle Graph
~~~~~~~~~~~~~~~~~~~~~~~~~
-`Cpu_Stress-Idle`_ graph displays the Average,Maximum and Minimum latency values obtained by running Idle_Idle test-type of the cyclictest. Idle_Idle implies that CPU stress is applied on the Host and no stress on the Guest.
+`Cpu_Stress-Idle`_ graph displays the Average, Maximum and Minimum latency values obtained by running Cpu-stress_Idle test-type of the cyclictest.
+Cpu-stress_Idle implies that CPU stress is applied on the Host and no stress on the Guest.
.. _Cpu_stress-Idle: http://testresults.opnfv.org/grafana/dashboard/db/kvmfornfv-cyclictest?panelId=11&fullscreen
@@ -285,7 +291,8 @@ Note:
3. Memory_Stress-Idle Graph
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-`Memory_Stress-Idle`_ graph displays the Average,Maximum and Minimum latency values obtained by running Idle_Idle test-type of the Cyclictest. Idle_Idle implies that Memory stress is applied on the Host and no stress on the Guest.
+`Memory_Stress-Idle`_ graph displays the Average, Maximum and Minimum latency values obtained by running Memory-stress_Idle test-type of the Cyclictest.
+Memory-stress_Idle implies that Memory stress is applied on the Host and no stress on the Guest.
.. _Memory_Stress-Idle: http://testresults.opnfv.org/grafana/dashboard/db/kvmfornfv-cyclictest?panelId=12&fullscreen
@@ -296,7 +303,8 @@ Note:
4. IO_Stress-Idle Graph
~~~~~~~~~~~~~~~~~~~~~~~~~
-`IO_Stress-Idle`_ graph displays the Average,Maximum and Minimum latency values obtained by running Idle_Idle test-type of the Cyclictest. Idle_Idle implies that IO stress is applied on the Host and no stress on the Guest.
+`IO_Stress-Idle`_ graph displays the Average, Maximum and Minimum latency values obtained by running IO-stress_Idle test-type of the Cyclictest.
+IO-stress_Idle implies that IO stress is applied on the Host and no stress on the Guest.
.. _IO_Stress-Idle: http://testresults.opnfv.org/grafana/dashboard/db/kvmfornfv-cyclictest?panelId=13&fullscreen
diff --git a/docs/userguide/low_latency.userguide.rst b/docs/userguide/low_latency.userguide.rst
index 88cc0347e..e65c8aa4f 100644
--- a/docs/userguide/low_latency.userguide.rst
+++ b/docs/userguide/low_latency.userguide.rst
@@ -180,7 +180,7 @@ The host is under constant Input/Output stress .i.e., multiple read-write operat
increase stress. Cyclictest will run on the guest VM that is launched on the same host, where the guest
is under no stress. It outputs Avg, Min and Max latency values.
-.. figure:: images/io-stress-test-type.png
+.. figure:: images/io-stress-idle-test-type.png
:name: io-stress-idle test type
:width: 100%
:align: center
diff --git a/docs/userguide/packet_forwarding.userguide.rst b/docs/userguide/packet_forwarding.userguide.rst
index 594952bdf..22f9b9447 100644
--- a/docs/userguide/packet_forwarding.userguide.rst
+++ b/docs/userguide/packet_forwarding.userguide.rst
@@ -3,7 +3,7 @@
.. http://creativecommons.org/licenses/by/4.0
=================
-PACKET FORWARDING
+Packet Forwarding
=================
About Packet Forwarding
@@ -30,8 +30,7 @@ Version Features
| | - Implements three scenarios (Host/Guest/SRIOV) |
| | as part of testing in KVMFORNFV |
| Danube | - Uses automated test framework of OPNFV |
-| | VSWITCHPERF software (PVP/PVVP) |
-| | |
+| | VSWITCHPERF software (PVP/PVVP) |
| | - Works with IXIA Traffic Generator |
+-----------------------------+---------------------------------------------------+
@@ -47,7 +46,7 @@ VNF level testing and validation.
For complete VSPERF documentation go to `link.`_
-.. _link.: http://artifacts.opnfv.org/vswitchperf/colorado/index.html
+.. _link.: http://artifacts.opnfv.org/vswitchperf/danube/index.html
Installation
@@ -78,7 +77,7 @@ The vSwitch must support Open Flow 1.3 or greater.
Supported Hypervisors
~~~~~~~~~~~~~~~~~~~~~
-* Qemu version 2.3.
+* Qemu version 2.6.
Other Requirements
~~~~~~~~~~~~~~~~~~
@@ -91,8 +90,7 @@ environment and compilation of OVS, DPDK and QEMU is performed by
script **systems/build_base_machine.sh**. It should be executed under
user account, which will be used for vsperf execution.
- **Please Note:** Password-less sudo access must be configured for given user
- before script is executed.
+ **Please Note:** Password-less sudo access must be configured for the given user before the script is executed.
Execution of installation script:
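The hunk stops at "Execution of installation script:"; per the upstream VSPERF documentation this step is normally the following (shown here for context, with paths relative to the vswitchperf checkout):

.. code:: bash

   $ cd systems
   $ ./build_base_machine.sh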
@@ -209,7 +207,7 @@ runs the IXIA client software and a CentOS Linux release 7.1.1503 (Core) host.
Installation
~~~~~~~~~~~~
-Follow the [installation instructions] to install.
+Follow the installation instructions to install.
On the CentOS 7 system
~~~~~~~~~~~~~~~~~~~~~~
@@ -380,12 +378,6 @@ A Kernel Module that provides OSI Layer 2 Ipv4 termination or forwarding with
support for Destination Network Address Translation (DNAT) for both the MAC and
IP addresses. l2fwd can be found in <vswitchperf_dir>/src/l2fwd
-.. figure:: images/Guest_Scenario.png
- :name: Guest_Scenario
- :width: 100%
- :align: center
-
-
Executing tests
~~~~~~~~~~~~~~~~
@@ -614,7 +606,7 @@ Using QEMU with PCI passthrough support
Raw virtual machine throughput performance can be measured by execution of PVP
test with direct access to NICs by PCI passthrough. To execute VM with direct
-access to PCI devices, enable vfio-pci_. In order to use virtual functions,
+access to PCI devices, enable vfio-pci. In order to use virtual functions,
SRIOV-support_ must be enabled.
Execution of test with PCI passthrough with vswitch disabled:
@@ -624,10 +616,8 @@ Execution of test with PCI passthrough with vswitch disabled:
$ ./vsperf --conf-file=<path_to_custom_conf>/10_custom.conf \
--vswitch none --vnf QemuPciPassthrough pvp_tput
-Any of supported guest-loopback-application_ can be used inside VM with
+Any of the supported guest-loopback applications can be used inside the VM with
PCI passthrough support.
Note: Qemu with PCI passthrough support can be used only with PVP test
deployment.
-
-.. _guest-loopback-application:
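With the `vfio-pci`_ link target dropped by this patch, a short sketch of what "enable vfio-pci" typically involves may help; this is illustrative only, the NIC PCI addresses are placeholders and the bind script name/path varies between DPDK releases:

.. code:: bash

   # Load the vfio-pci module (the IOMMU must be enabled on the kernel
   # command line, e.g. intel_iommu=on iommu=pt).
   $ modprobe vfio-pci
   # Bind the passthrough NICs to vfio-pci (placeholder PCI addresses).
   $ dpdk-devbind.py --bind=vfio-pci 0000:05:00.0 0000:05:00.1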
diff --git a/docs/userguide/pcm_utility.userguide.rst b/docs/userguide/pcm_utility.userguide.rst
index c8eb21d61..1ae68516c 100644
--- a/docs/userguide/pcm_utility.userguide.rst
+++ b/docs/userguide/pcm_utility.userguide.rst
@@ -80,10 +80,10 @@ Parameters
| Mem Ch 3: Reads (MB/s): 6867.47 | Mem Ch 3: Reads (MB/s): 7403.66 |
| Writes(MB/s): 1805.53 | Writes(MB/s): 1950.95 |
| | |
-| NODE0 Mem Read (MB/s): 27478.96 | NODE1 Mem Read (MB/s): 29624.51 |
+| NODE0 Mem Read (MB/s) : 27478.96 | NODE1 Mem Read (MB/s) : 29624.51 |
| NODE0 Mem Write (MB/s): 7225.79 | NODE1 Mem Write (MB/s): 7811.36 |
-| NODE0 P. Write (T/s) : 214810 | NODE1 P. Write (T/s): 238294 |
-| NODE0 Memory (MB/s): 34704.75 | NODE1 Memory (MB/s): 37435.87 |
+| NODE0 P. Write (T/s) : 214810 | NODE1 P. Write (T/s) : 238294 |
+| NODE0 Memory (MB/s) : 34704.75 | NODE1 Memory (MB/s) : 37435.87 |
+---------------------------------------+---------------------------------------+
| - System Read Throughput(MB/s): 57103.47 |
| - System Write Throughput(MB/s): 15037.15 |
@@ -121,9 +121,9 @@ In install_Pcm function, it handles the installation of pcm utility and the requ
.. code:: bash
- git clone https://github.com/opcm/pcm
- cd pcm
- make
+ $ git clone https://github.com/opcm/pcm
+ $ cd pcm
+ $ make
In the collect_MBWInfo function, the below command is executed on the node and its output is collected into logs
with the timestamp and testType. The function will be called at the beginning of each testcase and
@@ -131,7 +131,7 @@ signal will be passed to terminate the pcm-memory process which was executing th
.. code:: bash
- pcm-memory.x 60 &>/root/MBWInfo/MBWInfo_${testType}_${timeStamp}
+ $ pcm-memory.x 60 &>/root/MBWInfo/MBWInfo_${testType}_${timeStamp}
where,
${testType} = verify (or) daily
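The hunk header above mentions that a signal is later passed to terminate the pcm-memory process; purely as an illustration of that pattern (not the actual KVM4NFV collection script), the start/stop sequence could look like:

.. code:: bash

   # Start pcm-memory in the background and remember its PID (illustrative only).
   $ pcm-memory.x 60 &>/root/MBWInfo/MBWInfo_${testType}_${timeStamp} &
   $ PCM_PID=$!
   # ... run the testcase ...
   # Terminate the collector once the testcase completes.
   $ kill ${PCM_PID}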