Diffstat (limited to 'docs')
-rw-r--r--  docs/conf.py  6
-rw-r--r--  docs/conf.yaml  3
-rw-r--r--  docs/index.rst  24
-rw-r--r--  docs/k8s/index.rst  40
-rw-r--r--  docs/lma/index.rst  18
-rw-r--r--  docs/lma/logs/devguide.rst  145
-rw-r--r--  docs/lma/logs/images/elasticsearch.png  bin 0 -> 36046 bytes
-rw-r--r--  docs/lma/logs/images/fluentd-cs.png  bin 0 -> 40226 bytes
-rw-r--r--  docs/lma/logs/images/fluentd-ss.png  bin 0 -> 18331 bytes
-rw-r--r--  docs/lma/logs/images/nginx.png  bin 0 -> 36737 bytes
-rw-r--r--  docs/lma/logs/images/setup.png  bin 0 -> 43503 bytes
-rw-r--r--  docs/lma/logs/userguide.rst  386
-rw-r--r--  docs/lma/metrics/devguide.rst  469
-rw-r--r--  docs/lma/metrics/images/dataflow.png  bin 0 -> 42443 bytes
-rw-r--r--  docs/lma/metrics/images/setup.png  bin 0 -> 15019 bytes
-rw-r--r--  docs/lma/metrics/userguide.rst  226
-rw-r--r--  docs/openstack/index.rst  39
-rw-r--r--  docs/release/release-notes/release-notes.rst  189
-rw-r--r--  docs/requirements.txt  2
-rw-r--r--  docs/testing/developer/devguide/design/trafficgen_integration_guide.rst  17
-rw-r--r--  docs/testing/developer/devguide/design/vswitchperf_design.rst  75
-rw-r--r--  docs/testing/developer/devguide/index.rst  6
-rw-r--r--  docs/testing/developer/devguide/requirements/ietf_draft/rfc8204-vsperf-bmwg-vswitch-opnfv.rst  2
-rw-r--r--  docs/testing/developer/devguide/requirements/vswitchperf_ltd.rst  16
-rw-r--r--  docs/testing/developer/devguide/requirements/vswitchperf_ltp.rst  20
-rw-r--r--  docs/testing/developer/devguide/results/scenario.rst  2
-rw-r--r--  docs/testing/user/configguide/index.rst  5
-rw-r--r--  docs/testing/user/configguide/installation.rst  22
-rw-r--r--  docs/testing/user/configguide/tools.rst  227
-rw-r--r--  docs/testing/user/configguide/trafficgen.rst  150
-rw-r--r--  docs/testing/user/userguide/index.rst  1
-rw-r--r--  docs/testing/user/userguide/integration.rst  12
-rw-r--r--  docs/testing/user/userguide/testlist.rst  31
-rw-r--r--  docs/testing/user/userguide/teststeps.rst  37
-rw-r--r--  docs/testing/user/userguide/testusage.rst  186
-rw-r--r--  docs/testing/user/userguide/trafficcapture.rst  8
-rw-r--r--  docs/xtesting/index.rst  85
-rwxr-xr-x  docs/xtesting/vsperf-xtesting.png  bin 0 -> 93202 bytes
38 files changed, 2329 insertions, 120 deletions
diff --git a/docs/conf.py b/docs/conf.py
new file mode 100644
index 00000000..b281a515
--- /dev/null
+++ b/docs/conf.py
@@ -0,0 +1,6 @@
+""" for docs
+"""
+
+# pylint: disable=import-error
+# flake8: noqa
+from docs_conf.conf import *
diff --git a/docs/conf.yaml b/docs/conf.yaml
new file mode 100644
index 00000000..59448e39
--- /dev/null
+++ b/docs/conf.yaml
@@ -0,0 +1,3 @@
+---
+project_cfg: opnfv
+project: VSWITCHPERF
diff --git a/docs/index.rst b/docs/index.rst
new file mode 100644
index 00000000..c8a400f8
--- /dev/null
+++ b/docs/index.rst
@@ -0,0 +1,24 @@
+.. _vswitchperf:
+
+.. This work is licensed under a Creative Commons Attribution 4.0 International License.
+.. http://creativecommons.org/licenses/by/4.0
+.. SPDX-License-Identifier CC-BY-4.0
+.. (c) Open Platform for NFV Project, Inc. and its contributors
+
+*********************************
+OPNFV Vswitchperf
+*********************************
+
+.. toctree::
+ :numbered:
+ :maxdepth: 3
+
+ release/release-notes/index
+ testing/developer/devguide/index
+ testing/developer/devguide/results/index
+ testing/user/configguide/index
+ lma/index
+ openstack/index
+ k8s/index
+ xtesting/index
+
diff --git a/docs/k8s/index.rst b/docs/k8s/index.rst
new file mode 100644
index 00000000..872a3280
--- /dev/null
+++ b/docs/k8s/index.rst
@@ -0,0 +1,40 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, Spirent, AT&T, Ixia and others.
+
+.. OPNFV VSPERF Documentation master file.
+
+=========================================================
+OPNFV VSPERF Kubernetes Container Networking Benchmarking
+=========================================================
+VSPERF supports testing and benchmarking of Kubernetes container networking solutions, referred to as Kubernetes Container Networking Benchmarking (CNB). The process can be broadly classified into the following four operations.
+
+1. Setting up of Kubernetes Cluster.
+2. Deploying container networking solution.
+3. Deploying pod(s).
+4. Running tests.
+
+The first step is achieved through the tool present in the *tools/k8s/cluster-deployment* folder. Please refer to the documentation in that folder for automated Kubernetes cluster setup. To perform the remaining steps, the user has to run the following command.
+
+.. code-block:: console
+
+ vsperf --k8s --conf-file k8s.conf pcp_tput
+
+************************
+Important Configurations
+************************
+
+VSPERF has introduced new configuration parameters, listed below, for Kubernetes CNB. The file *12_k8s.conf*, present in the conf folder, provides sample values. The user has to modify these parameters to suit their environment before running the above command.
+
+1. K8S_CONFIG_FILEPATH - location of the kubernetes-cluster access file. This will be used to connect to the cluster.
+2. PLUGIN - The plugin to use. Allowed values are OvsDPDK, VPP, and SRIOV.
+3. NETWORK_ATTACHMENT_FILEPATH - location of the network attachment definition file.
+4. CONFIGMAP_FILEPATH - location of the config-map file. This will be used only for SRIOV plugin.
+5. POD_MANIFEST_FILEPATH - location of the POD definition file.
+6. APP_NAME - Application to run in the pod. Options - l2fwd, testpmd, and l3fwd.
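VSPERF configuration files are written in Python, so the parameters above can be sketched as plain assignments. The values below are illustrative placeholders, not the shipped defaults; the file *12_k8s.conf* holds the authoritative samples.

```python
# Hypothetical example values -- adjust to your environment.
K8S_CONFIG_FILEPATH = "/home/user/.kube/config"              # cluster access file
PLUGIN = "OvsDPDK"                                           # one of: OvsDPDK, VPP, SRIOV
NETWORK_ATTACHMENT_FILEPATH = "/tmp/network-attachment.yaml"
CONFIGMAP_FILEPATH = "/tmp/configmap.yaml"                   # used by the SRIOV plugin only
POD_MANIFEST_FILEPATH = "/tmp/pod.yaml"
APP_NAME = "l2fwd"                                           # one of: l2fwd, testpmd, l3fwd
```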
+
+
+*********
+Testcases
+*********
+Kubernetes CNB is performed through new testcases. For the Jerma release, only pcp_tput is supported. This testcase is similar to pvp_tput, with the VNF replaced by a pod/container. The pcp_tput testcase still uses phy2phy as the deployment. In future releases, a new deployment model will be added to support more testcases for Kubernetes.
diff --git a/docs/lma/index.rst b/docs/lma/index.rst
new file mode 100644
index 00000000..dd6be47b
--- /dev/null
+++ b/docs/lma/index.rst
@@ -0,0 +1,18 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, Intel Corporation, AT&T, Red Hat, Spirent, Ixia and others.
+
+.. OPNFV VSPERF LMA Documentation master file.
+
+***********************
+OPNFV VSPERF LMA Guides
+***********************
+
+.. toctree::
+ :caption: Developer Guide for Monitoring Tools
+ :maxdepth: 2
+
+ ./metrics/userguide.rst
+ ./metrics/devguide.rst
+ ./logs/userguide.rst
+ ./logs/devguide.rst
diff --git a/docs/lma/logs/devguide.rst b/docs/lma/logs/devguide.rst
new file mode 100644
index 00000000..7aeaad29
--- /dev/null
+++ b/docs/lma/logs/devguide.rst
@@ -0,0 +1,145 @@
+====================
+Logs Developer Guide
+====================
+
+Ansible Client-side
+-------------------
+
+Ansible File Organisation
+^^^^^^^^^^^^^^^^^^^^^^^^^
+Files Structure::
+
+ ansible-client
+ ├── ansible.cfg
+ ├── hosts
+ ├── playbooks
+ │ └── setup.yaml
+ └── roles
+ ├── clean-td-agent
+ │ └── tasks
+ │ └── main.yml
+ └── td-agent
+ ├── files
+ │ └── td-agent.conf
+ └── tasks
+ └── main.yml
+
+Summary of roles
+^^^^^^^^^^^^^^^^
+====================== ======================
+Roles                  Description
+====================== ======================
+``td-agent``           Install Td-agent & change configuration file
+``clean-td-agent``     Uninstall Td-agent
+====================== ======================
+
+Configurable Parameters
+^^^^^^^^^^^^^^^^^^^^^^^
+====================================================== ====================== ======================
+File (ansible-client/roles/)                           Parameter              Description
+====================================================== ====================== ======================
+``td-agent/files/td-agent.conf``                       host                   Fluentd-server IP
+``td-agent/files/td-agent.conf``                       port                   Fluentd-server port
+====================================================== ====================== ======================
+
+Ansible Server-side
+-------------------
+
+Ansible File Organisation
+^^^^^^^^^^^^^^^^^^^^^^^^^
+Files Structure::
+
+ ansible-server
+ ├── ansible.cfg
+ ├── group_vars
+ │ └── all.yml
+ ├── hosts
+ ├── playbooks
+ │ └── setup.yaml
+ └── roles
+ ├── clean-logging
+ │ └── tasks
+ │ └── main.yml
+ ├── k8s-master
+ │ └── tasks
+ │ └── main.yml
+ ├── k8s-pre
+ │ └── tasks
+ │ └── main.yml
+ ├── k8s-worker
+ │ └── tasks
+ │ └── main.yml
+ ├── logging
+ │ ├── files
+ │ │ ├── elastalert
+ │ │ │ ├── ealert-conf-cm.yaml
+ │ │ │ ├── ealert-key-cm.yaml
+ │ │ │ ├── ealert-rule-cm.yaml
+ │ │ │ └── elastalert.yaml
+ │ │ ├── elasticsearch
+ │ │ │ ├── elasticsearch.yaml
+ │ │ │ └── user-secret.yaml
+ │ │ ├── fluentd
+ │ │ │ ├── fluent-cm.yaml
+ │ │ │ ├── fluent-service.yaml
+ │ │ │ └── fluent.yaml
+ │ │ ├── kibana
+ │ │ │ └── kibana.yaml
+ │ │ ├── namespace.yaml
+ │ │ ├── nginx
+ │ │ │ ├── nginx-conf-cm.yaml
+ │ │ │ ├── nginx-key-cm.yaml
+ │ │ │ ├── nginx-service.yaml
+ │ │ │ └── nginx.yaml
+ │ │ ├── persistentVolume.yaml
+ │ │ └── storageClass.yaml
+ │ └── tasks
+ │ └── main.yml
+ └── nfs
+ └── tasks
+ └── main.yml
+
+Summary of roles
+^^^^^^^^^^^^^^^^
+====================== ======================
+Roles                  Description
+====================== ======================
+``k8s-pre``            Prerequisites for installing K8s: install Docker & K8s packages, disable swap, etc.
+``k8s-master``         Reset K8s & set up a master
+``k8s-worker``         Join worker nodes with a token
+``logging``            EFK & Elastalert setup in K8s
+``clean-logging``      Remove EFK & Elastalert setup from K8s
+``nfs``                Start an NFS server to store Elasticsearch data
+====================== ======================
+
+Configurable Parameters
+^^^^^^^^^^^^^^^^^^^^^^^
+========================================================================= ============================================ ======================
+File (ansible-server/roles/)                                              Parameter name                               Description
+========================================================================= ============================================ ======================
+**Role: logging**
+``logging/files/persistentVolume.yaml``                                   storage                                      Increase or decrease the Persistent Volume storage size for each VM
+``logging/files/kibana/kibana.yaml``                                      version                                      To change the Kibana version
+``logging/files/kibana/kibana.yaml``                                      count                                        To increase or decrease the replica count
+``logging/files/elasticsearch/elasticsearch.yaml``                        version                                      To change the Elasticsearch version
+``logging/files/elasticsearch/elasticsearch.yaml``                        nodePort                                     To change the service port
+``logging/files/elasticsearch/elasticsearch.yaml``                        storage                                      Increase or decrease the storage size of Elasticsearch data for each VM
+``logging/files/elasticsearch/elasticsearch.yaml``                        nodeAffinity -> values (hostname)            In which VM the Elasticsearch master or data pod will run (change the hostname to run it on a specific node)
+``logging/files/elasticsearch/user-secret.yaml``                          stringData                                   Add an Elasticsearch user & its roles (`Elastic Docs <https://www.elastic.co/guide/en/cloud-on-k8s/master/k8s-users-and-roles.html#k8s_file_realm>`_)
+``logging/files/fluentd/fluent.yaml``                                     replicas                                     To increase or decrease the replica count
+``logging/files/fluentd/fluent-service.yaml``                             nodePort                                     To change the service port
+``logging/files/fluentd/fluent-cm.yaml``                                  index_template.json -> number_of_replicas    To increase or decrease the replica count of data in Elasticsearch
+``logging/files/fluentd/fluent-cm.yaml``                                  fluent.conf                                  Server port & other Fluentd configuration
+``logging/files/nginx/nginx.yaml``                                        replicas                                     To increase or decrease the replica count
+``logging/files/nginx/nginx-service.yaml``                                nodePort                                     To change the service port
+``logging/files/nginx/nginx-key-cm.yaml``                                 kibana-access.key, kibana-access.pem         Key files for the HTTPS connection
+``logging/files/nginx/nginx-conf-cm.yaml``                                -                                            Nginx configuration
+``logging/files/elastalert/elastalert.yaml``                              replicas                                     To increase or decrease the replica count
+``logging/files/elastalert/ealert-key-cm.yaml``                           elastalert.key, elastalert.pem               Key files for the HTTPS connection
+``logging/files/elastalert/ealert-conf-cm.yaml``                          run_every                                    How often ElastAlert queries Elasticsearch
+``logging/files/elastalert/ealert-conf-cm.yaml``                          alert_time_limit                             If an alert fails for some reason, ElastAlert retries sending it until this time period has elapsed
+``logging/files/elastalert/ealert-conf-cm.yaml``                          es_host, es_port                             Elasticsearch service name & port in K8s
+``logging/files/elastalert/ealert-rule-cm.yaml``                          http_post_url                                Alert receiver IP (`Elastalert Rule Config <https://elastalert.readthedocs.io/en/latest/ruletypes.html>`_)
+**Role: nfs**
+``nfs/tasks/main.yml``                                                    line                                         Path of NFS storage
+========================================================================= ============================================ ======================
diff --git a/docs/lma/logs/images/elasticsearch.png b/docs/lma/logs/images/elasticsearch.png
new file mode 100644
index 00000000..f0b876f5
--- /dev/null
+++ b/docs/lma/logs/images/elasticsearch.png
Binary files differ
diff --git a/docs/lma/logs/images/fluentd-cs.png b/docs/lma/logs/images/fluentd-cs.png
new file mode 100644
index 00000000..513bb3ef
--- /dev/null
+++ b/docs/lma/logs/images/fluentd-cs.png
Binary files differ
diff --git a/docs/lma/logs/images/fluentd-ss.png b/docs/lma/logs/images/fluentd-ss.png
new file mode 100644
index 00000000..4e9ab112
--- /dev/null
+++ b/docs/lma/logs/images/fluentd-ss.png
Binary files differ
diff --git a/docs/lma/logs/images/nginx.png b/docs/lma/logs/images/nginx.png
new file mode 100644
index 00000000..a0b00514
--- /dev/null
+++ b/docs/lma/logs/images/nginx.png
Binary files differ
diff --git a/docs/lma/logs/images/setup.png b/docs/lma/logs/images/setup.png
new file mode 100644
index 00000000..267685fa
--- /dev/null
+++ b/docs/lma/logs/images/setup.png
Binary files differ
diff --git a/docs/lma/logs/userguide.rst b/docs/lma/logs/userguide.rst
new file mode 100644
index 00000000..9b616fe7
--- /dev/null
+++ b/docs/lma/logs/userguide.rst
@@ -0,0 +1,386 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, Intel Corporation, AT&T, Red Hat, Spirent, Ixia and others.
+
+.. OPNFV VSPERF Documentation master file.
+
+***************
+Logs User Guide
+***************
+
+Prerequisites
+=============
+
+- Three VMs are required to set up K8s
+- ``$ sudo yum install ansible``
+- ``$ pip install openshift pyyaml kubernetes`` (required for ansible K8s module)
+- Update IPs in all these files (if changed)
+  ====================================================================== ======================
+  Path                                                                   Description
+  ====================================================================== ======================
+  ``ansible-server/group_vars/all.yml``                                  IP of K8s apiserver and VM hostname
+  ``ansible-server/hosts``                                               IP of VMs to install
+  ``ansible-server/roles/logging/files/persistentVolume.yaml``           IP of NFS-Server
+  ``ansible-server/roles/logging/files/elastalert/ealert-rule-cm.yaml``  IP of alert-receiver
+  ====================================================================== ======================
+
+Architecture
+============
+.. image:: images/setup.png
+
+Installation - Clientside
+=========================
+
+Nodes
+-----
+
+- **Node1** = 10.10.120.21
+- **Node4** = 10.10.120.24
+
+How is installation done?
+-------------------------
+
+- TD-agent installation
+ ``$ curl -L https://toolbelt.treasuredata.com/sh/install-redhat-td-agent3.sh | sh``
+- Copy the TD-agent config file in **Node1**
+ ``$ cp tdagent-client-config/node1.conf /etc/td-agent/td-agent.conf``
+- Copy the TD-agent config file in **Node4**
+ ``$ cp tdagent-client-config/node4.conf /etc/td-agent/td-agent.conf``
+- Restart the service
+ ``$ sudo service td-agent restart``
+
+Installation - Serverside
+=========================
+
+Nodes
+-----
+
+Inside Jumphost - POD12
+ - **VM1** = 10.10.120.211
+ - **VM2** = 10.10.120.203
+ - **VM3** = 10.10.120.204
+
+
+How is installation done?
+-------------------------
+
+**Using Ansible:**
+ - **K8s**
+ - **Elasticsearch:** 1 Master & 1 Data node at each VM
+ - **Kibana:** 1 Replicas
+ - **Nginx:** 2 Replicas
+ - **Fluentd:** 2 Replicas
+ - **Elastalert:** 1 Replica (duplicate alerts are received if the replica count is increased)
+ - **NFS Server:** at each VM, to store Elasticsearch data at the following paths
+ - ``/srv/nfs/master``
+ - ``/srv/nfs/data``
+
+How to set up?
+--------------
+
+- **To setup K8s cluster and EFK:** Run the ansible-playbook ``ansible/playbooks/setup.yaml``
+- **To clean everything:** Run the ansible-playbook ``ansible/playbooks/clean.yaml``
+
+Do we have HA?
+--------------
+
+Yes
+
+Configuration
+=============
+
+K8s
+---
+
+Path of all yamls (Serverside)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+``ansible-server/roles/logging/files/``
+
+K8s namespace
+^^^^^^^^^^^^^
+
+``logging``
+
+K8s Service details
+^^^^^^^^^^^^^^^^^^^
+
+``$ kubectl get svc -n logging``
+
+Elasticsearch Configuration
+---------------------------
+
+Elasticsearch Setup Structure
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. image:: images/elasticsearch.png
+
+Elasticsearch service details
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+| **Service Name:** ``logging-es-http``
+| **Service Port:** ``9200``
+| **Service Type:** ``ClusterIP``
+
+How to get elasticsearch default username & password?
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+- User1 (custom user):
+ | **Username:** ``elasticsearch``
+ | **Password:** ``password123``
+- User2 (by default created by Elastic Operator):
+ | **Username:** ``elastic``
+ | To get default password:
+ | ``$ PASSWORD=$(kubectl get secret -n logging logging-es-elastic-user -o go-template='{{.data.elastic | base64decode}}')``
+ | ``$ echo $PASSWORD``
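
The decoding step of the commands above can also be done in plain Python once the secret's base64 value is in hand. This is a sketch: the sample base64 string below is illustrative, not the real secret of any cluster.

```python
import base64

def decode_k8s_secret(b64_value: str) -> str:
    """Decode a base64-encoded Kubernetes secret value, as kubectl's
    base64decode template function does."""
    return base64.b64decode(b64_value).decode("utf-8")

# e.g. the value of .data.elastic from:
#   kubectl get secret -n logging logging-es-elastic-user -o yaml
password = decode_k8s_secret("cGFzc3dvcmQxMjM=")  # sample value, decodes to "password123"
```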
+
+How to increase the replica count of an index?
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. code-block:: console
+
+   $ curl -k -u "elasticsearch:password123" -H 'Content-Type: application/json' \
+     -XPUT "https://10.10.120.211:9200/indexname*/_settings" \
+     -d '{ "index" : { "number_of_replicas" : "2" } }'
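
The same settings call can be prepared from Python with only the standard library; the host, credentials handling, and index pattern below are the example values from the curl command above (authentication and TLS verification, done there with ``-u``/``-k``, are omitted here).

```python
import json
import urllib.request

def build_replica_request(host: str, index: str, replicas: int) -> urllib.request.Request:
    """Build the PUT /_settings request that raises an index's replica count."""
    body = json.dumps({"index": {"number_of_replicas": str(replicas)}}).encode()
    return urllib.request.Request(
        url=f"https://{host}:9200/{index}/_settings",
        data=body,
        headers={"Content-Type": "application/json"},
        method="PUT",
    )

req = build_replica_request("10.10.120.211", "indexname*", 2)
# urllib.request.urlopen(req) would send it against a live cluster.
```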
+
+Index Life
+^^^^^^^^^^
+**30 Days**
+
+Kibana Configuration
+--------------------
+
+Kibana Service details
+^^^^^^^^^^^^^^^^^^^^^^
+
+| **Service Name:** ``logging-kb-http``
+| **Service Port:** ``5601``
+| **Service Type:** ``ClusterIP``
+
+Nginx Configuration
+-------------------
+
+IP
+^^
+
+The IP address, accessed over HTTPS, e.g. ``https://10.10.120.211:32000``
+
+Nginx Setup Structure
+^^^^^^^^^^^^^^^^^^^^^
+
+.. image:: images/nginx.png
+
+Nginx Service details
+^^^^^^^^^^^^^^^^^^^^^
+
+| **Service Name:** ``nginx``
+| **Service Port:** ``32000``
+| **Service Type:** ``NodePort``
+
+Why is NGINX used?
+^^^^^^^^^^^^^^^^^^
+
+`Securing ELK using Nginx <https://logz.io/blog/securing-elk-nginx/>`_
+
+Nginx Configuration
+^^^^^^^^^^^^^^^^^^^
+
+**Path:** ``ansible-server/roles/logging/files/nginx/nginx-conf-cm.yaml``
+
+Fluentd Configuration - Clientside (Td-agent)
+---------------------------------------------
+
+Fluentd Setup Structure
+^^^^^^^^^^^^^^^^^^^^^^^
+
+.. image:: images/fluentd-cs.png
+
+Log collection paths
+^^^^^^^^^^^^^^^^^^^^
+
+- ``/tmp/result*/*.log``
+- ``/tmp/result*/*.dat``
+- ``/tmp/result*/*.csv``
+- ``/tmp/result*/stc-liveresults.dat.*``
+- ``/var/log/userspace*.log``
+- ``/var/log/sriovdp/*.log.*``
+- ``/var/log/pods/**/*.log``
+
+Logs sent to
+^^^^^^^^^^^^
+
+Another Fluentd instance in the K8s cluster (K8s master: 10.10.120.211) on the jumphost.
+
+Td-agent logs
+^^^^^^^^^^^^^
+
+Path of td-agent logs: ``/var/log/td-agent/td-agent.log``
+
+Td-agent configuration
+^^^^^^^^^^^^^^^^^^^^^^
+
+| Path of conf file: ``/etc/td-agent/td-agent.conf``
+| **If any change is made in td-agent.conf, restart the td-agent service:** ``$ sudo service td-agent restart``
+
+Config Description
+^^^^^^^^^^^^^^^^^^
+
+- Get the logs from the collection paths
+- | Convert them to this format:
+  | {
+  |   "msg": "log line",
+  |   "log_path": "/file/path",
+  |   "file": "file.name",
+  |   "host": "pod12-node4"
+  | }
+- Send the records to the server-side Fluentd
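
A sketch of that transformation in Python. The field names are taken from the format above; the record shape is illustrative, not Fluentd's internal representation.

```python
import os

def to_record(line: str, path: str, host: str) -> dict:
    """Shape a raw log line into the record the td-agent config emits."""
    return {
        "msg": line.rstrip("\n"),     # the log line itself
        "log_path": path,             # full path of the source file
        "file": os.path.basename(path),
        "host": host,                 # e.g. "pod12-node4"
    }

record = to_record("some error line\n", "/tmp/result0/vsperf-overall.log", "pod12-node4")
```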
+
+Fluentd Configuration - Serverside
+----------------------------------
+
+Fluentd Setup Structure
+^^^^^^^^^^^^^^^^^^^^^^^
+
+.. image:: images/fluentd-ss.png
+
+Fluentd Service details
+^^^^^^^^^^^^^^^^^^^^^^^
+
+| **Service Name:** ``fluentd``
+| **Service Port:** ``32224``
+| **Service Type:** ``NodePort``
+
+Logs sent to
+^^^^^^^^^^^^
+Elasticsearch service (Example: logging-es-http at port 9200)
+
+Config Description
+^^^^^^^^^^^^^^^^^^
+
+- **Step 1**
+ - Get the logs from Node1 & Node4
+- **Step 2**
+  ======================================== ======================
+  log_path                                 add tag (for routing)
+  ======================================== ======================
+  ``/tmp/result.*/.*errors.dat``           errordat.log
+  ``/tmp/result.*/.*counts.dat``           countdat.log
+  ``/tmp/result.*/stc-liveresults.dat.tx`` stcdattx.log
+  ``/tmp/result.*/stc-liveresults.dat.rx`` stcdatrx.log
+  ``/tmp/result.*/.*Statistics.csv``       ixia.log
+  ``/tmp/result.*/vsperf-overall*``        vsperf.log
+  ``/tmp/result.*/vswitchd*``              vswitchd.log
+  ``/var/log/userspace*``                  userspace.log
+  ``/var/log/sriovdp*``                    sriovdp.log
+  ``/var/log/pods*``                       pods.log
+  ======================================== ======================
+
+- **Step 3**
+  Parse each type using its tag.
+
+  - error.conf: to find any error
+  - time-series.conf: to parse time-series data
+  - time-analysis.conf: to calculate time analysis
+- **Step 4**
+  ================================ ======================
+  host                             add tag (for routing)
+  ================================ ======================
+  ``pod12-node4``                  node4
+  ``worker``                       node1
+  ================================ ======================
+- **Step 5**
+  ================================ ======================
+  Tag                              elasticsearch
+  ================================ ======================
+  ``node4``                        index "node4*"
+  ``node1``                        index "node1*"
+  ================================ ======================
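
The two routing steps above can be sketched as a pair of lookups. This is a simplification of the actual Fluentd tag handling, for illustration only.

```python
# Host -> tag (Step 4), then tag -> Elasticsearch index pattern (Step 5).
HOST_TAGS = {"pod12-node4": "node4", "worker": "node1"}
TAG_INDEXES = {"node4": "node4*", "node1": "node1*"}

def route(host: str) -> str:
    """Return the Elasticsearch index pattern a host's logs are written to."""
    return TAG_INDEXES[HOST_TAGS[host]]
```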
+
+Elastalert
+==========
+
+Send alert if
+-------------
+
+- Blacklist
+ - "Failed to run test"
+ - "Failed to execute in '30' seconds"
+ - "('Result', 'Failed')"
+ - "could not open socket: connection refused"
+ - "Input/output error"
+ - "dpdk|ERR|EAL: Error - exiting with code: 1"
+ - "dpdk|ERR|EAL: Driver cannot attach the device"
+ - "dpdk|EMER|Cannot create lock on"
+ - "dpdk|ERR|VHOST_CONFIG: * device not found"
+- Time
+ - vswitch_duration > 3 sec
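
A hedged sketch of both trigger kinds as plain Python checks. ElastAlert's actual ``blacklist`` rule type matches field values rather than substrings, so this is an approximation; the patterns are a subset of the list above.

```python
BLACKLIST = [
    "Failed to run test",
    "Failed to execute in '30' seconds",
    "Input/output error",
    "dpdk|EMER|Cannot create lock on",
]

def is_alert(log_line: str) -> bool:
    """True if the line contains any blacklisted pattern."""
    return any(pattern in log_line for pattern in BLACKLIST)

def vswitch_too_slow(duration_sec: float) -> bool:
    """Time-based trigger: vswitch_duration > 3 sec."""
    return duration_sec > 3.0
```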
+
+How to configure an alert?
+--------------------------
+
+- Add your rule in ``ansible/roles/logging/files/elastalert/ealert-rule-cm.yaml``
+  (`Elastalert Rule Config <https://elastalert.readthedocs.io/en/latest/ruletypes.html>`_):
+
+  .. code-block:: yaml
+
+     name: anything
+     type: <check-above-link>  # the RuleType to use
+     index: node4*             # index name
+     realert:
+       minutes: 0              # to get alert for all cases after each interval
+     alert: post               # to send alert as HTTP POST
+     http_post_url:            # provide URL
+
+- Mount this file to elastalert pod in ``ansible/roles/logging/files/elastalert/elastalert.yaml``.
+
+Alert Format
+------------
+
+.. code-block:: json
+
+   {"type": "pattern-match", "label": "failed", "index": "node4-20200815", "log": "error-log-line", "log-path": "/tmp/result/file.log", "reason": "error-message"}
+
+Data Management
+===============
+
+Elasticsearch
+-------------
+
+Q&As
+^^^^
+
+Where is the data stored now?
+Data is stored in the NFS server with 1 replica of each index (default). The data paths are the following:
+
+ - ``/srv/nfs/data (VM1)``
+ - ``/srv/nfs/data (VM2)``
+ - ``/srv/nfs/data (VM3)``
+ - ``/srv/nfs/master (VM1)``
+ - ``/srv/nfs/master (VM2)``
+ - ``/srv/nfs/master (VM3)``
+
+Can the user switch from NFS to local storage?
+Yes; the persistent volume needs to be reconfigured (``ansible-server/roles/logging/files/persistentVolume.yaml``)
+
+Do we have a backup of the data?
+Yes, 1 replica of each index
+
+Is the data still accessible after K8s restarts?
+Yes (if the data has not been deleted from /srv/nfs/data)
+
+Troubleshooting
+===============
+
+If no logs are received in Elasticsearch
+----------------------------------------
+
+- Check the IP & port of server-fluentd in the client config.
+- Check client-fluentd logs: ``$ sudo tail -f /var/log/td-agent/td-agent.log``
+- Check server-fluentd logs: ``$ sudo kubectl logs -n logging <fluentd-pod-name>``
+
+If no notification is received
+------------------------------
+
+- Search for your "log" in Elasticsearch.
+- Check the Elastalert config.
+- Check the IP of the alert-receiver.
+
+Reference
+=========
+- `Elastic cloud on K8s <https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-quickstart.html>`_
+- `HA Elasticsearch on K8s <https://www.elastic.co/blog/high-availability-elasticsearch-on-kubernetes-with-eck-and-gke>`_
+- `Fluentd Configuration <https://docs.fluentd.org/configuration/config-file>`_
+- `Elastalert Rule Config <https://elastalert.readthedocs.io/en/latest/ruletypes.html>`_
diff --git a/docs/lma/metrics/devguide.rst b/docs/lma/metrics/devguide.rst
new file mode 100644
index 00000000..40162397
--- /dev/null
+++ b/docs/lma/metrics/devguide.rst
@@ -0,0 +1,469 @@
+=======================
+Metrics Developer Guide
+=======================
+
+Ansible File Organization
+=========================
+
+Ansible-Server
+--------------
+
+The files are organized as follows:
+
+.. code-block:: bash
+
+ ansible-server
+ | ansible.cfg
+ | hosts
+ |
+ +---group_vars
+ | all.yml
+ |
+ +---playbooks
+ | clean.yaml
+ | setup.yaml
+ |
+ \---roles
+ +---clean-monitoring
+ | \---tasks
+ | main.yml
+ |
+ +---monitoring
+ +---files
+ | | monitoring-namespace.yaml
+ | |
+ | +---alertmanager
+ | | alertmanager-config.yaml
+ | | alertmanager-deployment.yaml
+ | | alertmanager-service.yaml
+ | | alertmanager1-deployment.yaml
+ | | alertmanager1-service.yaml
+ | |
+ | +---cadvisor
+ | | cadvisor-daemonset.yaml
+ | | cadvisor-service.yaml
+ | |
+ | +---collectd-exporter
+ | | collectd-exporter-deployment.yaml
+ | | collectd-exporter-service.yaml
+ | |
+ | +---grafana
+ | | grafana-datasource-config.yaml
+ | | grafana-deployment.yaml
+ | | grafana-pv.yaml
+ | | grafana-pvc.yaml
+ | | grafana-service.yaml
+ | |
+ | +---kube-state-metrics
+ | | kube-state-metrics-deployment.yaml
+ | | kube-state-metrics-service.yaml
+ | |
+ | +---node-exporter
+ | | nodeexporter-daemonset.yaml
+ | | nodeexporter-service.yaml
+ | |
+ | \---prometheus
+ | main-prometheus-service.yaml
+ | prometheus-config.yaml
+ | prometheus-deployment.yaml
+ | prometheus-pv.yaml
+ | prometheus-pvc.yaml
+ | prometheus-service.yaml
+ | prometheus1-deployment.yaml
+ | prometheus1-service.yaml
+ |
+ \---tasks
+ main.yml
+
+
+Ansible - Client
+----------------
+
+The files are organized as follows:
+
+.. code-block:: bash
+
+    ansible-client
+ | ansible.cfg
+ | hosts
+ |
+ +---group_vars
+ | all.yml
+ |
+ +---playbooks
+ | clean.yaml
+ | setup.yaml
+ |
+ \---roles
+ +---clean-collectd
+ | \---tasks
+ | main.yml
+ |
+ +---collectd
+ +---files
+ | collectd.conf.j2
+ |
+ \---tasks
+ main.yml
+
+
+Summary of Roles
+================
+
+A brief description of the Ansible playbook roles,
+which are used to deploy the monitoring cluster
+
+Ansible Server Roles
+--------------------
+
+Ansible Server: this part consists of the roles used to deploy the
+Prometheus, Alertmanager, Grafana (PAG) stack on the server side
+
+Role: Monitoring
+~~~~~~~~~~~~~~~~
+
+Deployment and configuration of the PAG stack along with collectd-exporter,
+cadvisor and node-exporter.
+
+Role: Clean-Monitoring
+~~~~~~~~~~~~~~~~~~~~~~
+
+Removes all the components deployed by the Monitoring role.
+
+
+File-Task Mapping and Configurable Parameters
+================================================
+
+Ansible Server
+----------------
+
+Role: Monitoring
+~~~~~~~~~~~~~~~~~~~
+
+Alert Manager
+^^^^^^^^^^^^^^^
+
+File: alertmanager-config.yaml
+'''''''''''''''''''''''''''''''''
+Path : monitoring/files/alertmanager/alertmanager-config.yaml
+
+Task: Configures Receivers for alertmanager
+
+Summary: A ConfigMap; currently configures a webhook receiver for Alertmanager,
+but can be used to configure any kind of receiver
+
+Configurable Parameters:
+ receiver.url: change to the webhook receiver's URL
+ route: Can be used to add receivers
+
+
+File: alertmanager-deployment.yaml
+''''''''''''''''''''''''''''''''''
+Path : monitoring/files/alertmanager/alertmanager-deployment.yaml
+
+Task: Deploys alertmanager instance
+
+Summary: A Deployment, deploys 1 replica of alertmanager
+
+
+File: alertmanager-service.yaml
+'''''''''''''''''''''''''''''''''
+Path : monitoring/files/alertmanager/alertmanager-service.yaml
+
+Task: Creates a K8s service for alertmanager
+
+Summary: A NodePort type of service, so that the user can create "silences" and
+view the status of alerts from the native Alertmanager dashboard / UI.
+
+Configurable Parameters:
+ spec.type: Options : NodePort, ClusterIP, LoadBalancer
+ spec.ports: Edit / add ports to be handled by the service
+
+**Note: alertmanager1-deployment, alertmanager1-service are the same as
+alertmanager-deployment and alertmanager-service respectively.**
+
+CAdvisor
+^^^^^^^^^^^
+
+File: cadvisor-daemonset.yaml
+'''''''''''''''''''''''''''''''''
+Path : monitoring/files/cadvisor/cadvisor-daemonset.yaml
+
+Task: To create a cadvisor daemonset
+
+Summary: A daemonset used to scrape data of the Kubernetes cluster itself;
+an instance runs on every node.
+
+Configurable Parameters:
+ spec.template.spec.ports: Port of the container
+
+
+File: cadvisor-service.yaml
+'''''''''''''''''''''''''''''''''
+Path : monitoring/files/cadvisor/cadvisor-service.yaml
+
+Task: To create a cadvisor service
+
+Summary: A ClusterIP service for cadvisor to communicate with prometheus
+
+Configurable Parameters:
+ spec.ports: Add / Edit ports
+
+
+Collectd Exporter
+^^^^^^^^^^^^^^^^^^^^
+
+File: collectd-exporter-deployment.yaml
+''''''''''''''''''''''''''''''''''''''''''
+Path : monitoring/files/collectd-exporter/collectd-exporter-deployment.yaml
+
+Task: To create a collectd replica
+
+Summary: A deployment that acts as a receiver for collectd data sent by client
+machines; Prometheus pulls data from this exporter
+
+Configurable Parameters:
+ spec.template.spec.ports: Port of the container
+
+
+File: collectd-exporter-service.yaml
+''''''''''''''''''''''''''''''''''''
+Path : monitoring/files/collectd-exporter/collectd-exporter-service.yaml
+
+Task: To create a collectd service
+
+Summary: A NodePort service for collectd-exporter to hold data for prometheus
+to scrape
+
+Configurable Parameters:
+ spec.ports: Add / Edit ports
+
+
+Grafana
+^^^^^^^^^
+
+File: grafana-datasource-config.yaml
+''''''''''''''''''''''''''''''''''''''''''
+Path : monitoring/files/grafana/grafana-datasource-config.yaml
+
+Task: To create config file for grafana
+
+Summary: A configmap, adds prometheus datasource in grafana
+
+
+File: grafana-deployment.yaml
+'''''''''''''''''''''''''''''''''
+Path : monitoring/files/grafana/grafana-deployment.yaml
+
+Task: To create a grafana deployment
+
+Summary: The grafana deployment creates a single replica of grafana,
+with preconfigured prometheus datasource.
+
+Configurable Parameters:
+ spec.template.spec.ports: Edit ports
+ spec.template.spec.env: Add / Edit environment variables
+
+
+File: grafana-pv.yaml
+'''''''''''''''''''''''''''''''''
+Path : monitoring/files/grafana/grafana-pv.yaml
+
+Task: To create a persistent volume for grafana
+
+Summary: A persistent volume for grafana.
+
+Configurable Parameters:
+ spec.capacity.storage: Increase / decrease size
+ spec.accessModes: To change the way PV is accessed.
+    spec.nfs.server: To change the IP address of the NFS server
+ spec.nfs.path: To change the path of the server
+
+
+File: grafana-pvc.yaml
+'''''''''''''''''''''''''''''''''
+Path : monitoring/files/grafana/grafana-pvc.yaml
+
+Task: To create a persistent volume claim for grafana
+
+Summary: A persistent volume claim for grafana.
+
+Configurable Parameters:
+ spec.resources.requests.storage: Increase / decrease size
+
+
+File: grafana-service.yaml
+'''''''''''''''''''''''''''''''''
+Path : monitoring/files/grafana/grafana-service.yaml
+
+Task: To create a service for Grafana
+
+Summary: A NodePort type of service, which users connect to in order to
+view the dashboard / UI.
+
+Configurable Parameters:
+ spec.type: Options : NodePort, ClusterIP, LoadBalancer
+ spec.ports: Edit / add ports to be handled by the service
+
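+A minimal NodePort service of this shape might look like the following sketch
+(the names and container port are assumptions; the node port matches the
+Grafana entry in the user guide's port table):
+
+.. code-block:: yaml
+
+    apiVersion: v1
+    kind: Service
+    metadata:
+      name: grafana                # hypothetical name
+    spec:
+      type: NodePort               # spec.type: NodePort / ClusterIP / LoadBalancer
+      selector:
+        app: grafana               # assumed pod label
+      ports:
+        - port: 3000               # assumed Grafana container port
+          targetPort: 3000
+          nodePort: 30000          # port exposed on every node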
+
+Kube State Metrics
+^^^^^^^^^^^^^^^^^^^^
+
+File: kube-state-metrics-deployment.yaml
+''''''''''''''''''''''''''''''''''''''''
+Path : monitoring/files/kube-state-metrics/kube-state-metrics-deployment.yaml
+
+Task: To create a kube-state-metrics instance
+
+Summary: A deployment used to collect metrics of the Kubernetes cluster itself
+
+Configurable Parameters:
+ spec.template.spec.containers.ports: Port of the container
+
+
+File: kube-state-metrics-service.yaml
+'''''''''''''''''''''''''''''''''''''
+Path : monitoring/files/kube-state-metrics/kube-state-metrics-service.yaml
+
+Task: To create a kube-state-metrics service
+
+Summary: A NodePort service exposing kube-state-metrics data for Prometheus
+to scrape
+
+Configurable Parameters:
+ spec.ports: Add / Edit ports
+
+
+Node Exporter
+^^^^^^^^^^^^^^^
+
+File: node-exporter-daemonset.yaml
+''''''''''''''''''''''''''''''''''
+Path : monitoring/files/node-exporter/node-exporter-daemonset.yaml
+
+Task: To create a node exporter daemonset
+
+Summary: A daemonset used to scrape data from the host machines / nodes;
+as a daemonset, it runs one instance on every node.
+
+Configurable Parameters:
+ spec.template.spec.ports: Port of the container
+
+
+File: node-exporter-service.yaml
+'''''''''''''''''''''''''''''''''
+Path : monitoring/files/node-exporter/node-exporter-service.yaml
+
+Task: To create a node exporter service
+
+Summary: A ClusterIP service for node exporter to communicate with Prometheus
+
+Configurable Parameters:
+ spec.ports: Add / Edit ports
+
+
+Prometheus
+^^^^^^^^^^^^^
+
+File: prometheus-config.yaml
+''''''''''''''''''''''''''''''''''''''''''
+Path : monitoring/files/prometheus/prometheus-config.yaml
+
+Task: To create a config file for Prometheus
+
+Summary: A ConfigMap that adds alert rules.
+
+Configurable Parameters:
+ data.alert.rules: Add / Edit alert rules
+
+
+File: prometheus-deployment.yaml
+'''''''''''''''''''''''''''''''''
+Path : monitoring/files/prometheus/prometheus-deployment.yaml
+
+Task: To create a Prometheus deployment
+
+Summary: The Prometheus deployment creates a single replica of Prometheus,
+preconfigured via the Prometheus ConfigMap.
+
+Configurable Parameters:
+    spec.template.spec.affinity: To change the node affinity; it ensures
+                                 that only one instance of Prometheus
+                                 runs on each node.
+
+ spec.template.spec.ports: Add / Edit container port
+
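+The "one instance per node" behaviour mentioned above is typically expressed
+with pod anti-affinity; a sketch under assumed label names:
+
+.. code-block:: yaml
+
+    spec:
+      template:
+        spec:
+          affinity:
+            podAntiAffinity:
+              requiredDuringSchedulingIgnoredDuringExecution:
+                - labelSelector:
+                    matchLabels:
+                      app: prometheus          # assumed pod label
+                  topologyKey: kubernetes.io/hostname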
+
+File: prometheus-pv.yaml
+'''''''''''''''''''''''''''''''''
+Path : monitoring/files/prometheus/prometheus-pv.yaml
+
+Task: To create a persistent volume for Prometheus
+
+Summary: A persistent volume for Prometheus.
+
+Configurable Parameters:
+ spec.capacity.storage: Increase / decrease size
+ spec.accessModes: To change the way PV is accessed.
+ spec.hostpath.path: To change the path of the volume
+
+
+File: prometheus-pvc.yaml
+'''''''''''''''''''''''''''''''''
+Path : monitoring/files/prometheus/prometheus-pvc.yaml
+
+Task: To create a persistent volume claim for Prometheus
+
+Summary: A persistent volume claim for Prometheus.
+
+Configurable Parameters:
+ spec.resources.requests.storage: Increase / decrease size
+
+
+File: prometheus-service.yaml
+'''''''''''''''''''''''''''''''''
+Path : monitoring/files/prometheus/prometheus-service.yaml
+
+Task: To create a service for Prometheus
+
+Summary: A NodePort type of service; the native Prometheus dashboard is
+available here.
+
+Configurable Parameters:
+ spec.type: Options : NodePort, ClusterIP, LoadBalancer
+ spec.ports: Edit / add ports to be handled by the service
+
+
+File: main-prometheus-service.yaml
+'''''''''''''''''''''''''''''''''''
+Path: monitoring/files/prometheus/main-prometheus-service.yaml
+
+Task: To create a service that connects both Prometheus instances
+
+Summary: A NodePort service through which other services reach the Prometheus
+cluster. HA Prometheus needs two independent instances of Prometheus scraping
+the same inputs with the same configuration.
+
+**Note: prometheus1-deployment and prometheus1-service are the same as
+prometheus-deployment and prometheus-service respectively.**
+
+
+Ansible Client Roles
+----------------------
+
+Role: Collectd
+~~~~~~~~~~~~~~~~~~
+
+File: main.yml
+^^^^^^^^^^^^^^^^
+Path: collectd/tasks/main.yaml
+
+Task: Install collectd along with prerequisites
+
+Associated template file:
+
+collectd.conf.j2
+Path: collectd/files/collectd.conf.j2
+
+Summary: Edit this file to change the default configuration to
+be installed on the client's machine
diff --git a/docs/lma/metrics/images/dataflow.png b/docs/lma/metrics/images/dataflow.png
new file mode 100644
index 00000000..ca1ec908
--- /dev/null
+++ b/docs/lma/metrics/images/dataflow.png
Binary files differ
diff --git a/docs/lma/metrics/images/setup.png b/docs/lma/metrics/images/setup.png
new file mode 100644
index 00000000..ce6a1274
--- /dev/null
+++ b/docs/lma/metrics/images/setup.png
Binary files differ
diff --git a/docs/lma/metrics/userguide.rst b/docs/lma/metrics/userguide.rst
new file mode 100644
index 00000000..eae336d7
--- /dev/null
+++ b/docs/lma/metrics/userguide.rst
@@ -0,0 +1,226 @@
+==================
+Metrics User Guide
+==================
+
+Setup
+=======
+
+Prerequisites
+-------------------------
+- Requires 3 VMs to set up K8s
+- ``$ sudo yum install ansible``
+- ``$ pip install openshift pyyaml kubernetes`` (required for ansible K8s module)
+- Update IPs in all these files (if changed)
+ - ``ansible-server/group_vars/all.yml`` (IP of apiserver and hostname)
+ - ``ansible-server/hosts`` (IP of VMs to install)
+ - ``ansible-server/roles/monitoring/files/grafana/grafana-pv.yaml`` (IP of NFS-Server)
+ - ``ansible-server/roles/monitoring/files/alertmanager/alertmanager-config.yaml`` (IP of alert-receiver)
+
+Setup Structure
+---------------
+.. image:: images/setup.png
+
+Installation - Client Side
+----------------------------
+
+Nodes
+`````
+- **Node1** = 10.10.120.21
+- **Node4** = 10.10.120.24
+
+How is installation done?
+`````````````````````````
+Ansible playbook available in ``tools/lma/ansible-client`` folder
+
+- ``cd tools/lma/ansible-client``
+- ``ansible-playbook setup.yaml``
+
+This deploys collectd and configures it to send data to the collectd-exporter
+running at 10.10.120.211 (the IP address of the current collectd-exporter instance).
+Please make appropriate changes to the config file present in ``tools/lma/ansible-client/roles/collectd/files/``
+
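+The forwarding itself is done by collectd's network plugin; the relevant part
+of ``collectd.conf`` looks roughly like this sketch (the IP and port must
+match your collectd-exporter deployment):
+
+.. code-block:: console
+
+    LoadPlugin network
+    <Plugin network>
+      Server "10.10.120.211" "38026"
+    </Plugin>
+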
+Installation - Server Side
+----------------------------
+
+Nodes
+``````
+
+Inside Jumphost - POD12
+ - **VM1** = 10.10.120.211
+ - **VM2** = 10.10.120.203
+ - **VM3** = 10.10.120.204
+
+
+How is installation done?
+`````````````````````````
+**Using Ansible:**
+ - **K8s**
+ - **Prometheus:** 2 independent deployments
+ - **Alertmanager:** 2 independent deployments (cluster peers)
+ - **Grafana:** 1 Replica deployment
+        - **cAdvisor:** 1 daemonset, i.e. 3 replicas, one on each node
+ - **collectd-exporter:** 1 Replica
+        - **node-exporter:** 1 daemonset, i.e. 3 replicas, one on each node
+ - **kube-state-metrics:** 1 deployment
+    - **NFS Server:** on each VM, to store Grafana data at the following path
+ - ``/usr/share/monitoring_data/grafana``
+
+How to set up?
+``````````````
+- **To setup K8s cluster, EFK and PAG:** Run the ansible-playbook ``ansible/playbooks/setup.yaml``
+- **To clean everything:** Run the ansible-playbook ``ansible/playbooks/clean.yaml``
+
+Do we have HA?
+````````````````
+Yes
+
+Configuration
+=============
+
+K8s
+---
+Path to all yamls (Server Side)
+````````````````````````````````
+``tools/lma/ansible-server/roles/monitoring/files/``
+
+K8s namespace
+`````````````
+``monitoring``
+
+Configuration
+---------------------------
+
+Services and Ports
+``````````````````````````
+
+Services and their ports are listed below.
+You can go to the IP of any node on the following ports,
+and the service will correctly redirect you
+
+
+ ====================== =======
+ Service Port
+ ====================== =======
+ Prometheus 30900
+ Prometheus1 30901
+ Main-Prometheus 30902
+ Alertmanager 30930
+ Alertmanager1 30931
+ Grafana 30000
+ Collectd-exporter 30130
+ ====================== =======
+
+How to change Configuration?
+------------------------------
+- Ports, container names, and pretty much every other configuration value can be modified by changing the required values in the respective yaml files (``/tools/lma/ansible-server/roles/monitoring/``)
+- For metrics, on the client's machine, edit collectd's configuration (a Jinja2 template) file and add the required plugins (``/tools/lma/ansible-client/roles/collectd/files/collectd.conf.j2``).
+  For more details refer to `this guide <https://collectd.org/wiki/index.php/First_steps>`_
+
+Where to send metrics?
+------------------------
+
+Metrics are sent to collectd exporter.
+UDP packets are sent to port 38026
+(can be configured and checked at
+``tools/lma/ansible-server/roles/monitoring/files/collectd-exporter/collectd-exporter-deployment.yaml``)
+
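+Inside that deployment, the port corresponds to collectd_exporter's UDP
+listener; the container arguments look roughly like the sketch below (the
+flag values are assumptions based on the ports used in this guide):
+
+.. code-block:: yaml
+
+    args:
+      - --collectd.listen-address=:38026   # UDP socket collectd clients send to
+      - --web.listen-address=:9103         # HTTP endpoint Prometheus scrapes
+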
+Data Management
+================================
+
+DataFlow:
+--------------
+.. image:: images/dataflow.png
+
+Where is the data stored now?
+----------------------------------
+    - Grafana data (including dashboards) ==> On master, at ``/usr/share/monitoring_data/grafana`` (accessed by the persistent volume via NFS)
+    - Prometheus data ==> On VM2 and VM3, at ``/usr/share/monitoring_data/prometheus``
+
+    **Note: The two Prometheus instances' data are also independent of each other; a shared-data solution gave errors**
+
+Do we have backup of data?
+-------------------------------
+    The Prometheus instances, even though independent, scrape the same targets
+    and have the same alert rules, therefore they generate very similar data.
+
+    Grafana's NFS-backed data has no backup.
+    The dashboards' JSON files are available in the ``/tools/lma/metrics/dashboards`` directory
+
+Is the data still accessible when containers are restarted?
+-----------------------------------------------------------------
+    Yes, unless the data directories ``(/usr/share/monitoring_data/*)`` are deleted from each node
+
+Alert Management
+==================
+
+Configure Alert receiver
+--------------------------
+- Go to file ``/tools/lma/ansible-server/roles/monitoring/files/alertmanager/alertmanager-config.yaml``
+- In the config.yml section, under receivers, you can add, update, or delete receivers
+- Currently the IP of the unified alert receiver is used.
+- Alertmanager supports multiple types of receivers, you can get a `list here <https://prometheus.io/docs/alerting/latest/configuration/>`_
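+
+For example, a webhook receiver entry under ``receivers`` could look like the
+sketch below (the receiver name and URL are placeholders, not the shipped
+configuration):
+
+.. code-block:: yaml
+
+    receivers:
+      - name: 'unified-alert-receiver'          # placeholder name
+        webhook_configs:
+          - url: 'http://10.10.120.211:9095/'   # placeholder endpoint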
+
+Add new alerts
+--------------------------------------
+- Go to file ``/tools/lma/ansible-server/roles/monitoring/files/prometheus/prometheus-config.yaml``
+- Under the data section, the alert.rules file is mounted into the ConfigMap.
+- In this file, alerts are divided into 4 groups, namely:
+ - targets
+ - host and hardware
+ - container
+ - kubernetes
+- Add alerts under an existing group or add a new group. Please follow the structure of the file when adding a new group
+- To add a new alert, use the following structure:
+
+  .. code-block:: yaml
+
+      alert: alertname
+      expr: alert rule (generally a PromQL conditional query)
+      for: time range (e.g. 5m, 10s; how long the condition must be true before the alert fires)
+      labels:
+        severity: critical  # other severity options and other labels can be added here
+        type: hardware
+      annotations:
+        summary: <summary of the alert>
+        description: <describe the alert here>
+
+- For an exhaustive alerts list you can have a look `here <https://awesome-prometheus-alerts.grep.to/>`_
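+
+A filled-in example following the structure above (a common "target down"
+rule, shown purely as an illustration):
+
+.. code-block:: yaml
+
+    - alert: InstanceDown
+      expr: up == 0
+      for: 5m
+      labels:
+        severity: critical
+        type: targets
+      annotations:
+        summary: "Instance {{ $labels.instance }} down"
+        description: "{{ $labels.instance }} has been unreachable for 5 minutes."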
+
+Troubleshooting
+===============
+No metrics received in Grafana plot
+---------------------------------------------
+- Check if all configurations are correctly done.
+- Go to main-prometheus's port on any one VM's IP and check whether Prometheus is receiving the metrics
+- If Prometheus is receiving them, read Grafana's logs (``kubectl -n monitoring logs <name_of_grafana_pod>``)
+- Else, have a look at collectd-exporter's metrics endpoint (e.g. 10.10.120.211:30130/metrics)
+- If collectd-exporter is receiving them, check in Prometheus's config file that collectd-exporter's IP is correct there.
+- Otherwise, SSH to the master and check on which node collectd-exporter is scheduled (let's say VM2)
+- Now SSH to VM2
+- Use ``tcpdump -i ens3 > testdump`` (where ``ens3`` is the interface used to connect to the internet)
+- Grep for your client node's IP and check whether packets are reaching the monitoring cluster (``cat testdump | grep <ip of client>``)
+- Ideally you should see packets reaching the node; if so, check that collectd-exporter is running correctly and read its logs.
+- If no packets are received, the error is on the client side: check collectd's config file and make sure the correct collectd-exporter IP is used in the ``<network>`` section.
+
+If no notification is received
+------------------------------
+- Go to main-prometheus's port on any one VM's IP (e.g. 10.10.120.211:30902) and check whether Prometheus is receiving the metrics
+- If not, read the "No metrics received in Grafana plot" section; else read ahead.
+- Check the IP of the alert receiver; you can see this by going to <alertmanager-ip>:<port> and checking whether Alertmanager is configured correctly.
+- If yes, paste the alert rule into the Prometheus query box and see whether any metric satisfies the condition.
+- You may need to change the alert rules in the alert.rules section of prometheus-config.yaml if there was a bug in the alert's rule (please read the "Add new alerts" section for detailed instructions).
+
+Reference
+=========
+- `Prometheus K8S deployment <https://www.metricfire.com/blog/how-to-deploy-prometheus-on-kubernetes/>`_
+- `HA Prometheus <https://prometheus.io/docs/introduction/faq/#can-prometheus-be-made-highly-available>`_
+- `Data Flow Diagram <https://drive.google.com/file/d/1D--LXFqU_H-fqpD57H3lJFOqcqWHoF0U/view?usp=sharing>`_
+- `Collectd Configuration <https://docs.opnfv.org/en/stable-fraser/submodules/barometer/docs/release/userguide/docker.userguide.html#build-the-collectd-docker-image>`_
+- `Alertmanager Rule Config <https://awesome-prometheus-alerts.grep.to/>`_
diff --git a/docs/openstack/index.rst b/docs/openstack/index.rst
new file mode 100644
index 00000000..6009e669
--- /dev/null
+++ b/docs/openstack/index.rst
@@ -0,0 +1,39 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, Spirent Communications, AT&T, Ixia and others.
+
+.. OPNFV VSPERF With Openstack master file.
+
+***************************
+OPNFV VSPERF with OPENSTACK
+***************************
+
+Introduction
+------------
+VSPERF performs the following when run with OpenStack:
+
+1. Connect to OpenStack (using the credentials).
+2. Deploy traffic generators in the required way (defined by scenarios).
+3. Update the VSPERF configuration based on the deployment.
+4. Use the updated configuration to run tests in "Trafficgen" mode.
+5. Publish and store results.
+
+
+What to Configure?
+^^^^^^^^^^^^^^^^^^
+The configurable parameters are provided in *conf/11_openstackstack.conf*:
+
+1. Access to Openstack Environment: Auth-URL, Username, Password, Project and Domain IDs/Name.
+2. VM Details - Name, Flavor, External-Network.
+3. Scenario - How many compute nodes to use, and how many instances of the traffic generator to deploy.
+
+Users can customize these parameters. Assume the customized values are placed in an openstack.conf file; this file will be used to run the test.
+
+How to run?
+^^^^^^^^^^^
+Add the ``--openstack`` flag as shown below:
+
+.. code-block:: console
+
+ vsperf --openstack --conf-file openstack.conf phy2phy_tput
+
diff --git a/docs/release/release-notes/release-notes.rst b/docs/release/release-notes/release-notes.rst
index 860cca77..486beaf0 100644
--- a/docs/release/release-notes/release-notes.rst
+++ b/docs/release/release-notes/release-notes.rst
@@ -1,6 +1,193 @@
.. This work is licensed under a Creative Commons Attribution 4.0 International License.
.. http://creativecommons.org/licenses/by/4.0
-.. (c) OPNFV, Intel Corporation, AT&T and others.
+.. (c) OPNFV, Intel Corporation, Spirent Communications, AT&T and others.
+
+OPNFV Jerma Release
+===================
+
+* Supported Versions - DPDK:18.11, OVS:2.12.0, VPP:19.08.1, QEMU:3.1.1, Trex:2.86
+
+* Supported Release-Requirements.
+
+ * RELREQ-6 - Openstack dataplane performance benchmarking.
+ * RELREQ-9 - Kubernetes container-networking benchmarking.
+
+* Additional Features
+
+ * OPNFV Xtesting integration - Baremetal and Openstack.
+ * Analytics of metrics and logs using Jupyter notebooks.
+ * Custom Alarms from both metrics and logs.
+ * Container metrics collection.
+
+* Traffic Generators.
+
+ * Ixia - Support for using multiple instances of Traffic-generator.
+ * Ixia - Live results support (real-time collection and storage)
+ * TRex - ETSI-NFV GS-TST009 binary search with loss-verification support.
+
+* New Tools
+
+ * Kubernetes cluster deployment.
+ * TestVNF deployment in Openstack.
+ * Server-side telemetry collection from the test-environment.
+ * Version-1 of multi-dimensional TestVNF.
+
+* Multiple bugfixes and minor improvements
+
+ * matplotlib version and log-dump.
+ * VPP socket paths.
+ * Newer version of some python packages.
+
+
+OPNFV Iruya Release
+====================
+
+* Supported Versions - DPDK:18.11, OVS:2.12.0, VPP:19.08.1, QEMU:3.1.1
+* Few bugfixes and minor improvements
+
+* New Feature: Containers to manage VSPERF.
+
+ * VSPERF Containers for both deployment and test runs
+
+* Improvement
+
+ * Results Analysis to include all 5 types of data.
+
+ * Infrastructure data
+ * End-Of-Test Results
+ * Live-Results
+ * Events from VSPERF Logs
+ * Test Environment
+
+* Usability
+
+ * Configuration Wizard tool.
+
+
+OPNFV Hunter Release
+====================
+
+* Supported Versions - DPDK:17.08, OVS:2.8.1, VPP:17.07, QEMU:2.9.1
+* Few bugfixes and minor improvements
+
+* Traffic Generators
+
+ * Spirent - Live Results Support.
+ * T-Rex - Live Results Support.
+
+* Improvement
+
+ * Results container to receive logs from Logstash/Fluentd.
+
+* CI
+
+ * Bug Fixes.
+
+
+OPNFV Gambia Release
+====================
+
+* Supported Versions - DPDK:17.08, OVS:2.8.1, VPP:17.07, QEMU:2.9.1
+* Several bugfixes and minor improvements
+
+* Documentation
+
+ * Spirent Latency histogram documentation
+
+* Virtual-Switches
+
+ * OVS-Enhancement: default bridge name and offload support.
+ * OVS-Enhancement: proper deletion of flows and bridges after stop.
+ * VSPERF-vSwitch Architecture Improvement
+
+* Tools
+
+ * Pidstat improvements
+
+* Traffic Generators
+
+ * Xena Enhancements - multi-flow and stability.
+ * T-Rex Additions - burst traffic, scapy frame, customized scapy version.
+ * Ixia: Script enhancements.
+ * Spirent: Latency-histogram support included
+
+* Tests
+
+ * Continuous stream testcase
+ * Tunnelling protocol support
+ * Custom statistics
+ * Refactoring integration testcases
+
+* CI
+
+ * Reduced daily testscases
+
+OPNFV Fraser Release
+====================
+
+* Supported Versions - DPDK:17.08, OVS:2.8.1, VPP:17.07, QEMU:2.9.1
+* Pylint 1.8.2 code conformity
+* Python virtualenv moved to python-3.
+* LTD: Requirements specification for Soak/Long Duration Tests
+* Performance Matrix functionality support
+* Several bugfixes and minor improvements
+
+* Documentation
+
+ * Configuration and installation of additional tools.
+ * Xena install document update.
+ * Installation prerequisites update
+ * Traffic Capture methods explained
+
+* Virtual-Switches
+
+ * OVS: Configurable arguments for ovs-\*ctl
+ * OVS: Fix vswitch shutdown process
+ * VPP: Define vppctl socket name
+ * VPP: Multiqueue support for VPP
+ * OVS and VPP: Improve add_phy_port error messages
+ * OVS and VPP: Updated to recent version
+
+* Tools
+
+ * Support for Stressor-VMs as a Loadgen
+ * Support for collectd as one of the collectors
+ * Support for LLC management with Intel RMD
+
+* Traffic Generators
+
+ * All Traffic-Gens: Postponed call of connect operation.
+ * Ixia: Added support of LISTs in TRAFFIC
+ * T-Rex: Version v2.38 support added.
+ * T-Rex: Support for T-Rex Traffic generator in a VM.
+ * T-Rex: Add logic for dealing with high speed cards.
+ * T-Rex: Improve error handling.
+ * T-Rex: Added support for traffic capture.
+ * T-Rex: RFC2544 verification functionality included.
+ * T-Rex: Added learning packet option.
+ * T-Rex: Added packet counts for reporting
+ * T-Rex: Added multistream support
+ * T-Rex: Added promiscuous option for SRIOV tests
+ * T-Rex: RFC2544 Throughput bugfixing
+
+* Tests
+
+ * Tests with T-Rex in VM
+ * Improvements of step driven Testcases
+ * OVS/DPDK regression tests
+ * Traffic Capture testcases added.
+
+* Installation Scripts
+
+ * Support for SLES15 and openSuse Tumbleweed
+ * Fedora installation script update
+ * rhel_path_fix: Fix pathing issue introduce by other commit
+ * Updated build scripts for Centos and RHEL to python34
+
+* CI
+
+ * Update hugepages configuration
+ * Support disabling VPP tests, if required
OPNFV Euphrates Release
=======================
diff --git a/docs/requirements.txt b/docs/requirements.txt
new file mode 100644
index 00000000..9fde2df2
--- /dev/null
+++ b/docs/requirements.txt
@@ -0,0 +1,2 @@
+lfdocs-conf
+sphinx_opnfv_theme
diff --git a/docs/testing/developer/devguide/design/trafficgen_integration_guide.rst b/docs/testing/developer/devguide/design/trafficgen_integration_guide.rst
index c88b80ed..671c7fd8 100644
--- a/docs/testing/developer/devguide/design/trafficgen_integration_guide.rst
+++ b/docs/testing/developer/devguide/design/trafficgen_integration_guide.rst
@@ -199,13 +199,20 @@ functions:
Note: There are parameters specific to testing of tunnelling protocols,
which are discussed in detail at :ref:`integration-tests` userguide.
+ Note: A detailed description of the ``TRAFFIC`` dictionary can be found at
+ :ref:`configuration-of-traffic-dictionary`.
+
* param **traffic_type**: One of the supported traffic types,
- e.g. **rfc2544_throughput**, **rfc2544_continuous**
- or **rfc2544_back2back**.
- * param **frame_rate**: Defines desired percentage of frame
- rate used during continuous stream tests.
+ e.g. **rfc2544_throughput**, **rfc2544_continuous**,
+ **rfc2544_back2back** or **burst**.
* param **bidir**: Specifies if generated traffic will be full-duplex
(true) or half-duplex (false).
+ * param **frame_rate**: Defines desired percentage of frame
+ rate used during continuous stream tests.
+ * param **burst_size**: Defines a number of frames in the single burst,
+ which is sent by burst traffic type. Burst size is applied for each
+ direction, i.e. the total number of tx frames will be 2*burst_size
+ in case of bidirectional traffic.
* param **multistream**: Defines number of flows simulated by traffic
generator. Value 0 disables MultiStream feature.
* param **stream_type**: Stream Type defines ISO OSI network layer
@@ -224,6 +231,8 @@ functions:
**dstport** and l4 on/off switch **enabled**.
* param **vlan**: A dictionary with vlan specific parameters,
e.g. **priority**, **cfi**, **id** and vlan on/off switch **enabled**.
+ * param **scapy**: A dictionary with definition of the frame content for both traffic
+ directions. The frame content is defined by a SCAPY notation.
* param **tests**: Number of times the test is executed.
* param **duration**: Duration of continuous test or per iteration duration
diff --git a/docs/testing/developer/devguide/design/vswitchperf_design.rst b/docs/testing/developer/devguide/design/vswitchperf_design.rst
index 96ffcf62..5fa892e0 100644
--- a/docs/testing/developer/devguide/design/vswitchperf_design.rst
+++ b/docs/testing/developer/devguide/design/vswitchperf_design.rst
@@ -1,6 +1,6 @@
.. This work is licensed under a Creative Commons Attribution 4.0 International License.
.. http://creativecommons.org/licenses/by/4.0
-.. (c) OPNFV, Intel Corporation, AT&T and others.
+.. (c) OPNFV, Intel Corporation, AT&T, Tieto and others.
.. _vsperf-design:
@@ -23,7 +23,7 @@ Example Connectivity to DUT
Establish connectivity to the VSPERF DUT Linux host. If this is in an OPNFV lab
following the steps provided by `Pharos <https://www.opnfv.org/community/projects/pharos>`_
-to `access the POD <https://wiki.opnfv.org/display/pharos/Pharos+Lab+Support>`_
+to `access the POD <https://wiki.opnfv.org/display/INF/INFRA+Lab+Support>`_
The following steps establish the VSPERF environment.
@@ -291,8 +291,8 @@ Detailed description of ``TRAFFIC`` dictionary items follows:
.. code-block:: console
'traffic_type' - One of the supported traffic types.
- E.g. rfc2544_throughput, rfc2544_back2back
- or rfc2544_continuous
+ E.g. rfc2544_throughput, rfc2544_back2back,
+ rfc2544_continuous or burst
Data type: str
Default value: "rfc2544_throughput".
'bidir' - Specifies if generated traffic will be full-duplex (True)
@@ -304,6 +304,12 @@ Detailed description of ``TRAFFIC`` dictionary items follows:
continuous stream tests.
Data type: int
Default value: 100.
+ 'burst_size' - Defines a number of frames in the single burst, which is sent
+ by burst traffic type. Burst size is applied for each direction,
+ i.e. the total number of tx frames will be 2*burst_size in case of
+ bidirectional traffic.
+ Data type: int
+ Default value: 100.
'multistream' - Defines number of flows simulated by traffic generator.
Value 0 disables multistream feature
Data type: int
@@ -326,7 +332,6 @@ Detailed description of ``TRAFFIC`` dictionary items follows:
feature. If enabled, it will implicitly insert a flow
for each stream. If multistream is disabled, then
pre-installed flows will be ignored.
- Note: It is supported only for p2p deployment scenario.
Data type: str
Supported values:
"Yes" - flows will be inserted into OVS
@@ -439,6 +444,53 @@ Detailed description of ``TRAFFIC`` dictionary items follows:
details.
Data type: str
Default value: ''
+ 'scapy' - A dictionary with definition of a frame content for both traffic
+ directions. The frame content is defined by a SCAPY notation.
+ NOTE: It is supported only by the T-Rex traffic generator.
+ Following keywords can be used to refer to the related parts of
+ the TRAFFIC dictionary:
+ Ether_src - refers to TRAFFIC['l2']['srcmac']
+ Ether_dst - refers to TRAFFIC['l2']['dstmac']
+ IP_proto - refers to TRAFFIC['l3']['proto']
+ IP_PROTO - refers to upper case version of TRAFFIC['l3']['proto']
+ IP_src - refers to TRAFFIC['l3']['srcip']
+ IP_dst - refers to TRAFFIC['l3']['dstip']
+ IP_PROTO_sport - refers to TRAFFIC['l4']['srcport']
+ IP_PROTO_dport - refers to TRAFFIC['l4']['dstport']
+ Dot1Q_prio - refers to TRAFFIC['vlan']['priority']
+ Dot1Q_id - refers to TRAFFIC['vlan']['cfi']
+ Dot1Q_vlan - refers to TRAFFIC['vlan']['id']
+ '0' - A string with the frame definition for the 1st direction.
+ Data type: str
+ Default value: 'Ether(src={Ether_src}, dst={Ether_dst})/'
+ 'Dot1Q(prio={Dot1Q_prio}, id={Dot1Q_id}, vlan={Dot1Q_vlan})/'
+ 'IP(proto={IP_proto}, src={IP_src}, dst={IP_dst})/'
+ '{IP_PROTO}(sport={IP_PROTO_sport}, dport={IP_PROTO_dport})'
+ '1' - A string with the frame definition for the 2nd direction.
+ Data type: str
+ Default value: 'Ether(src={Ether_dst}, dst={Ether_src})/'
+ 'Dot1Q(prio={Dot1Q_prio}, id={Dot1Q_id}, vlan={Dot1Q_vlan})/'
+ 'IP(proto={IP_proto}, src={IP_dst}, dst={IP_src})/'
+ '{IP_PROTO}(sport={IP_PROTO_dport}, dport={IP_PROTO_sport})',
+ 'latency_histogram'
+ - A dictionary with definition of a latency histogram provision in results.
+ 'enabled' - Specifies if the histogram provisioning is enabled or not.
+        'type' - Defines how histogram is provided. Currently only 'Default' is defined.
+ 'Default' - Default histogram as provided by the Traffic-generator.
+ 'imix' - A dictionary for IMIX Specification.
+ 'enabled' - Specifies if IMIX is enabled or NOT.
+ 'type' - The specification type - denotes how IMIX is specified.
+ Currently only 'genome' type is defined.
+ Other types (ex: table-of-proportions) can be added in future.
+ 'genome' - The Genome Encoding of Pkt-Sizes and Ratio for IMIX.
+                The Ratio is inferred from the number of particular genome characters.
+ Genome encoding is described in RFC 6985. This specification is closest
+ to the method described in section 6.2 of RFC 6985.
+                Ex: 'aaaaaaaddddg' denotes a ratio of 7:4:1 of packet sizes 64:512:1518.
+ Note: Exact-sequence is not maintained, only the ratio of packets
+ is ensured.
+ Data type: str
+ Default Value: 'aaaaaaaddddg'
.. _configuration-of-guest-options:
@@ -743,6 +795,13 @@ As it is able to forward traffic between multiple VM NIC pairs.
Note: In case of ``linux_bridge``, all NICs are connected to the same
bridge inside the VM.
+Note: In case that multistream feature is configured and ``pre_installed_flows``
+is set to ``Yes``, then stream specific flows will be inserted only for connections
+originating at physical ports. The rest of the flows will be based on port
+numbers only. The same logic applies in case that ``flow_type`` TRAFFIC option
+is set to ``ip``. This configuration will avoid a testcase malfunction if frame headers
+are modified inside VM (e.g. MAC swap or IP change).
+
VM, vSwitch, Traffic Generator Independence
===========================================
@@ -786,7 +845,7 @@ ITrafficGenerator
connect()
disconnect()
- send_burst_traffic(traffic, numpkts, time, framerate)
+ send_burst_traffic(traffic, time)
send_cont_traffic(traffic, time, framerate)
start_cont_traffic(traffic, time, framerate)
@@ -878,6 +937,10 @@ Vsperf uses a standard set of routing tables in order to allow tests to easily
mix and match Deployment Scenarios (PVP, P2P topology), Tuple Matching and
Frame Modification requirements.
+The usage of routing tables is driven by configuration parameter ``OVS_ROUTING_TABLES``.
+Routing tables are disabled by default (i.e. parameter is set to ``False``) for better
+comparison of results among supported vSwitches (e.g. OVS vs. VPP).
+
.. code-block:: console
+--------------+
diff --git a/docs/testing/developer/devguide/index.rst b/docs/testing/developer/devguide/index.rst
index 49659792..64a4758c 100644
--- a/docs/testing/developer/devguide/index.rst
+++ b/docs/testing/developer/devguide/index.rst
@@ -31,7 +31,7 @@ new techniques together. A new IETF benchmarking specification (RFC8204) is base
2015. VSPERF is also contributing to development of ETSI NFV test specifications through the Test and Open Source
Working Group.
-* Wiki: https://wiki.opnfv.org/characterize_vswitch_performance_for_telco_nfv_use_cases
+* Wiki: https://wiki.opnfv.org/display/vsperf
* Repository: https://git.opnfv.org/vswitchperf
* Artifacts: https://artifacts.opnfv.org/vswitchperf.html
* Continuous Integration: https://build.opnfv.org/ci/view/vswitchperf/
@@ -43,7 +43,6 @@ Design Guides
.. toctree::
:caption: Traffic Gen Integration, VSPERF Design, Test Design, Test Plan
:maxdepth: 2
- :numbered:
./design/trafficgen_integration_guide.rst
./design/vswitchperf_design.rst
@@ -75,6 +74,3 @@ VSPERF CI Test Cases
:numbered:
CI Test cases run daily on the VSPERF Pharos POD for master and stable branches.
-
- ./results/scenario.rst
- ./results/results.rst
diff --git a/docs/testing/developer/devguide/requirements/ietf_draft/rfc8204-vsperf-bmwg-vswitch-opnfv.rst b/docs/testing/developer/devguide/requirements/ietf_draft/rfc8204-vsperf-bmwg-vswitch-opnfv.rst
index ee7f98b5..10b07d54 100644
--- a/docs/testing/developer/devguide/requirements/ietf_draft/rfc8204-vsperf-bmwg-vswitch-opnfv.rst
+++ b/docs/testing/developer/devguide/requirements/ietf_draft/rfc8204-vsperf-bmwg-vswitch-opnfv.rst
@@ -13,7 +13,7 @@ informational RFC published by the IETF available here https://tools.ietf.org/ht
For more information about VSPERF refer to:
-* Wiki: https://wiki.opnfv.org/characterize_vswitch_performance_for_telco_nfv_use_cases
+* Wiki: https://wiki.opnfv.org/display/vsperf
* Repository: https://git.opnfv.org/vswitchperf
* Artifacts: https://artifacts.opnfv.org/vswitchperf.html
* Continuous Integration: https://build.opnfv.org/ci/view/vswitchperf/
diff --git a/docs/testing/developer/devguide/requirements/vswitchperf_ltd.rst b/docs/testing/developer/devguide/requirements/vswitchperf_ltd.rst
index c703ff40..1ea99f7e 100644
--- a/docs/testing/developer/devguide/requirements/vswitchperf_ltd.rst
+++ b/docs/testing/developer/devguide/requirements/vswitchperf_ltd.rst
@@ -62,21 +62,21 @@ References
==========
* `RFC 1242 Benchmarking Terminology for Network Interconnection
- Devices <http://www.ietf.org/rfc/rfc1242.txt>`__
+ Devices <https://www.ietf.org/rfc/rfc1242.txt>`__
* `RFC 2544 Benchmarking Methodology for Network Interconnect
- Devices <http://www.ietf.org/rfc/rfc2544.txt>`__
+ Devices <https://www.ietf.org/rfc/rfc2544.txt>`__
* `RFC 2285 Benchmarking Terminology for LAN Switching
- Devices <http://www.ietf.org/rfc/rfc2285.txt>`__
+ Devices <https://www.ietf.org/rfc/rfc2285.txt>`__
* `RFC 2889 Benchmarking Methodology for LAN Switching
- Devices <http://www.ietf.org/rfc/rfc2889.txt>`__
+ Devices <https://www.ietf.org/rfc/rfc2889.txt>`__
* `RFC 3918 Methodology for IP Multicast
- Benchmarking <http://www.ietf.org/rfc/rfc3918.txt>`__
+ Benchmarking <https://www.ietf.org/rfc/rfc3918.txt>`__
* `RFC 4737 Packet Reordering
- Metrics <http://www.ietf.org/rfc/rfc4737.txt>`__
+ Metrics <https://www.ietf.org/rfc/rfc4737.txt>`__
* `RFC 5481 Packet Delay Variation Applicability
- Statement <http://www.ietf.org/rfc/rfc5481.txt>`__
+ Statement <https://www.ietf.org/rfc/rfc5481.txt>`__
* `RFC 6201 Device Reset
- Characterization <http://tools.ietf.org/html/rfc6201>`__
+ Characterization <https://tools.ietf.org/html/rfc6201>`__
.. 3.2
diff --git a/docs/testing/developer/devguide/requirements/vswitchperf_ltp.rst b/docs/testing/developer/devguide/requirements/vswitchperf_ltp.rst
index e5147bea..c0b63859 100644
--- a/docs/testing/developer/devguide/requirements/vswitchperf_ltp.rst
+++ b/docs/testing/developer/devguide/requirements/vswitchperf_ltp.rst
@@ -63,21 +63,21 @@ References
===============
* `RFC 1242 Benchmarking Terminology for Network Interconnection
- Devices <http://www.ietf.org/rfc/rfc1242.txt>`__
+ Devices <https://www.ietf.org/rfc/rfc1242.txt>`__
* `RFC 2544 Benchmarking Methodology for Network Interconnect
- Devices <http://www.ietf.org/rfc/rfc2544.txt>`__
+ Devices <https://www.ietf.org/rfc/rfc2544.txt>`__
* `RFC 2285 Benchmarking Terminology for LAN Switching
- Devices <http://www.ietf.org/rfc/rfc2285.txt>`__
+ Devices <https://www.ietf.org/rfc/rfc2285.txt>`__
* `RFC 2889 Benchmarking Methodology for LAN Switching
- Devices <http://www.ietf.org/rfc/rfc2889.txt>`__
+ Devices <https://www.ietf.org/rfc/rfc2889.txt>`__
* `RFC 3918 Methodology for IP Multicast
- Benchmarking <http://www.ietf.org/rfc/rfc3918.txt>`__
+ Benchmarking <https://www.ietf.org/rfc/rfc3918.txt>`__
* `RFC 4737 Packet Reordering
- Metrics <http://www.ietf.org/rfc/rfc4737.txt>`__
+ Metrics <https://www.ietf.org/rfc/rfc4737.txt>`__
* `RFC 5481 Packet Delay Variation Applicability
- Statement <http://www.ietf.org/rfc/rfc5481.txt>`__
+ Statement <https://www.ietf.org/rfc/rfc5481.txt>`__
* `RFC 6201 Device Reset
- Characterization <http://tools.ietf.org/html/rfc6201>`__
+ Characterization <https://tools.ietf.org/html/rfc6201>`__
.. 3.1.4
@@ -633,7 +633,7 @@ General Methodology:
--------------------------
To establish the baseline performance of the virtual switch, tests would
initially be run with a simple workload in the VNF (the recommended
-simple workload VNF would be `DPDK <http://www.dpdk.org/>`__'s testpmd
+simple workload VNF would be `DPDK <https://www.dpdk.org/>`__'s testpmd
application forwarding packets in a VM or vloop\_vnf a simple kernel
module that forwards traffic between two network interfaces inside the
virtualized environment while bypassing the networking stack).
@@ -656,7 +656,7 @@ tests:
- Reference application: Simple forwarding or Open Source VNF.
- Frame size (bytes): 64, 128, 256, 512, 1024, 1280, 1518, 2K, 4k OR
Packet size based on use-case (e.g. RTP 64B, 256B) OR Mix of packet sizes as
- maintained by the Functest project <https://wiki.opnfv.org/traffic_profile_management>.
+ maintained by the Functest project <https://wiki.opnfv.org/display/functest/Traffic+Profile+Management>.
- Reordering check: Tests should confirm that packets within a flow are
not reordered.
- Duplex: Unidirectional / Bidirectional. Default: Full duplex with
diff --git a/docs/testing/developer/devguide/results/scenario.rst b/docs/testing/developer/devguide/results/scenario.rst
index dbdc7877..f7eadd33 100644
--- a/docs/testing/developer/devguide/results/scenario.rst
+++ b/docs/testing/developer/devguide/results/scenario.rst
@@ -34,7 +34,7 @@ Deployment topologies:
Loopback applications in the Guest:
-* `DPDK testpmd <http://dpdk.org/doc/guides/testpmd_app_ug/index.html>`_.
+* `DPDK testpmd <http://doc.dpdk.org/guides/testpmd_app_ug/index.html>`_.
* Linux Bridge.
* :ref:`l2fwd-module`
diff --git a/docs/testing/user/configguide/index.rst b/docs/testing/user/configguide/index.rst
index 83908a97..87c32d11 100644
--- a/docs/testing/user/configguide/index.rst
+++ b/docs/testing/user/configguide/index.rst
@@ -31,7 +31,7 @@ new techniques together. A new IETF benchmarking specification (RFC8204) is base
2015. VSPERF is also contributing to development of ETSI NFV test specifications through the Test and Open Source
Working Group.
-* Wiki: https://wiki.opnfv.org/characterize_vswitch_performance_for_telco_nfv_use_cases
+* Wiki: https://wiki.opnfv.org/display/vsperf
* Repository: https://git.opnfv.org/vswitchperf
* Artifacts: https://artifacts.opnfv.org/vswitchperf.html
* Continuous Integration: https://build.opnfv.org/ci/view/vswitchperf/
@@ -48,6 +48,7 @@ VSPERF Install and Configuration
./installation.rst
./upgrade.rst
./trafficgen.rst
+ ./tools.rst
=================
VSPERF Test Guide
@@ -56,10 +57,10 @@ VSPERF Test Guide
.. toctree::
:caption: VSPERF Test Execution
:maxdepth: 2
- :numbered:
../userguide/testusage.rst
../userguide/teststeps.rst
../userguide/integration.rst
+ ../userguide/trafficcapture.rst
../userguide/yardstick.rst
../userguide/testlist.rst
diff --git a/docs/testing/user/configguide/installation.rst b/docs/testing/user/configguide/installation.rst
index 51588007..b950442e 100644
--- a/docs/testing/user/configguide/installation.rst
+++ b/docs/testing/user/configguide/installation.rst
@@ -53,6 +53,7 @@ Supported Operating Systems
* SLES 15
* RedHat 7.2 Enterprise Linux
* RedHat 7.3 Enterprise Linux
+* RedHat 7.5 Enterprise Linux
* Ubuntu 14.04
* Ubuntu 16.04
* Ubuntu 16.10 (kernel 4.8 requires DPDK 16.11 and newer)
@@ -166,8 +167,12 @@ repository provided by Software Collections (`a link`_). The installation script
will also use `virtualenv`_ to create a vsperf virtual environment, which is
isolated from the default Python environment, using the Python3 package located
in **/usr/bin/python3**. This environment will reside in a directory called
-**vsperfenv** in $HOME. It will ensure, that system wide Python installation
- is not modified or broken by VSPERF installation. The complete list of Python
+**vsperfenv** in $HOME.
+
+This ensures that the system-wide Python installation is not modified or
+broken by the VSPERF installation.
+
+The complete list of Python
packages installed inside virtualenv can be found in the file
``requirements.txt``, which is located at the vswitchperf repository.
@@ -176,6 +181,11 @@ built from upstream source due to kernel incompatibilities. Please see the
instructions in the vswitchperf_design document for details on configuring
OVS Vanilla for binary package usage.
+**NOTE:** For RHEL 7.5 Enterprise, DPDK and Open vSwitch are not built from
+upstream sources due to kernel incompatibilities. Please use subscription
+channels to obtain binary equivalents of the openvswitch and dpdk packages, or
+build the binaries using the instructions from openvswitch.org and dpdk.org.
+
.. _vpp-installation:
VPP installation
@@ -260,8 +270,8 @@ running any of the above. For example:
export http_proxy=proxy.mycompany.com:123
export https_proxy=proxy.mycompany.com:123
-.. _a link: http://www.softwarecollections.org/en/scls/rhscl/python33/
-.. _virtualenv: https://virtualenv.readthedocs.org/en/latest/
+.. _a link: https://www.softwarecollections.org/en/scls/rhscl/python33/
+.. _virtualenv: https://virtualenv.pypa.io/en/latest/
.. _vloop-vnf-ubuntu-14.04_20160823: http://artifacts.opnfv.org/vswitchperf/vnf/vloop-vnf-ubuntu-14.04_20160823.qcow2
.. _vloop-vnf-ubuntu-14.04_20160804: http://artifacts.opnfv.org/vswitchperf/vnf/vloop-vnf-ubuntu-14.04_20160804.qcow2
.. _vloop-vnf-ubuntu-14.04_20160303: http://artifacts.opnfv.org/vswitchperf/vnf/vloop-vnf-ubuntu-14.04_20160303.qcow2
@@ -320,7 +330,7 @@ to your OS documentation to set hugepages correctly. It is recommended to set
the required amount of hugepages to be allocated by default on reboots.
Information on hugepage requirements for dpdk can be found at
-http://dpdk.org/doc/guides/linux_gsg/sys_reqs.html
+http://doc.dpdk.org/guides/linux_gsg/sys_reqs.html
You can review your hugepage amounts by executing the following command
@@ -350,7 +360,7 @@ default on the Linux DUT
VSPerf recommends the latest tuned-adm package, which can be downloaded from the
following location:
-http://www.tuned-project.org/2017/04/27/tuned-2-8-0-released/
+https://github.com/redhat-performance/tuned/releases
Follow the instructions to install the latest tuned-adm onto your system. For
current RHEL customers you should already have the most current version. You
diff --git a/docs/testing/user/configguide/tools.rst b/docs/testing/user/configguide/tools.rst
new file mode 100644
index 00000000..72e515fa
--- /dev/null
+++ b/docs/testing/user/configguide/tools.rst
@@ -0,0 +1,227 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, Intel Corporation, Spirent, AT&T and others.
+
+.. _additional-tools-configuration:
+
+=============================================
+'vsperf' Additional Tools Configuration Guide
+=============================================
+
+Overview
+--------
+
+VSPERF supports the following categories of additional tools:
+
+ * `Infrastructure Metrics Collectors`_
+ * `Load Generators`_
+ * `L3 Cache Management`_
+
+Each category includes one or more tools supported by VSPERF.
+This guide describes how to install (where required) and configure
+the above-mentioned tools.
+
+.. _`Infrastructure Metrics Collectors`:
+
+Infrastructure Metrics Collection
+---------------------------------
+
+VSPERF supports the following two tools for collecting and reporting the metrics:
+
+* pidstat
+* collectd
+
+*pidstat* is a Linux command used for monitoring individual tasks currently
+managed by the Linux kernel. In VSPERF this command is used to monitor the
+*ovs-vswitchd*, *ovsdb-server* and *kvm* processes.
+
+*collectd* is a Linux application that collects, stores and transfers various
+system metrics. For every category of metrics there is a separate collectd
+plugin. For example, the CPU plugin and the Interface plugin provide all the
+CPU metrics and interface metrics, respectively. CPU metrics may include
+user-time, system-time, etc., whereas interface metrics may include
+received-packets, dropped-packets, etc.
+
+Installation
+^^^^^^^^^^^^
+
+No installation is required for *pidstat*, whereas collectd has to be installed
+separately. For the installation of collectd, we recommend following the process
+described in the most recent release of the *OPNFV-Barometer* project, available
+here: `Barometer <https://opnfv-barometer.readthedocs.io/en/latest/release/userguide>`_
+
+VSPERF assumes that collectd is installed and configured to send metrics over localhost.
+The metrics sent should be for the following categories: CPU, Processes, Interface,
+OVS, DPDK, Intel-RDT.
+
+For multicmd, the installation of PROX is necessary in addition to collectd.
+Installation steps for PROX can be found here: `DPPD-PROX <https://github.com/opnfv/samplevnf/tree/master/VNFs/DPPD-PROX>`_
+
+Configuration
+^^^^^^^^^^^^^
+
+The configuration file for the collectors can be found in **conf/05_collector.conf**.
+*pidstat* specific configuration includes:
+
+* ``PIDSTAT_MONITOR`` - processes to be monitored by pidstat
+* ``PIDSTAT_OPTIONS`` - options which will be passed to pidstat command
+* ``PIDSTAT_SAMPLE_INTERVAL`` - sampling interval used by pidstat to collect statistics
+* ``LOG_FILE_PIDSTAT`` - prefix of pidstat's log file
+
+The *collectd* configuration option includes:
+
+* ``COLLECTD_IP`` - IP address where collectd is running
+* ``COLLECTD_PORT`` - Port number over which collectd is sending the metrics
+* ``COLLECTD_SECURITY_LEVEL`` - Security level for receiving metrics
+* ``COLLECTD_AUTH_FILE`` - Authentication file for receiving metrics
+* ``LOG_FILE_COLLECTD`` - Prefix for collectd's log file.
+* ``COLLECTD_CPU_KEYS`` - Interesting metrics from CPU
+* ``COLLECTD_PROCESSES_KEYS`` - Interesting metrics from processes
+* ``COLLECTD_INTERFACE_KEYS`` - Interesting metrics from interface
+* ``COLLECTD_OVSSTAT_KEYS`` - Interesting metrics from OVS
+* ``COLLECTD_DPDKSTAT_KEYS`` - Interesting metrics from DPDK.
+* ``COLLECTD_INTELRDT_KEYS`` - Interesting metrics from Intel-RDT
+* ``COLLECTD_INTERFACE_XKEYS`` - Metrics to exclude from Interface
+* ``COLLECTD_INTELRDT_XKEYS`` - Metrics to exclude from Intel-RDT
+* ``MC_COLLECTD_CSV`` - Path where collectd writes its metrics as CSV.
+* ``MC_COLLECTD_CMD`` - Path where Collectd is installed
+* ``MC_PROX_HOME`` - Path where PROX-IRQ is installed.
+* ``MC_PROX_CMD`` - Command to run PROX-IRQ
+* ``MC_PROX_OUT`` - Output file generated by PROX-IRQ stats collector.
+* ``MC_CRON_OUT`` - Output file path of the command run through CROND
+* ``MC_BEAT_CFILE`` - Filebeat configuration file path.
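
As an illustration, a custom override of a few of these options might look as
follows. All values below are example assumptions, not VSPERF defaults:

```python
# Example override for conf/05_collector.conf (or a 10_custom.conf file).
# All values are illustrative assumptions, not VSPERF defaults.
COLLECTD_IP = '127.0.0.1'               # collectd sends metrics over localhost
COLLECTD_PORT = 25826                   # default collectd network plugin port
COLLECTD_SECURITY_LEVEL = 'None'        # no signing/encryption on localhost
COLLECTD_CPU_KEYS = ['user', 'system']        # keep only these CPU metrics
COLLECTD_INTERFACE_XKEYS = ['dropped']        # exclude noisy interface metrics
```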
+
+
+.. _`Load Generators`:
+
+
+Load Generation
+---------------
+
+In VSPERF, load generation refers to creating background CPU and memory loads
+to study their impact on the system under test. VSPERF provides two options for
+creating loads, which serve different use-cases:
+
+* stress or stress-ng
+* Stressor-VMs
+
+*stress* and *stress-ng* are Linux tools for stressing a system in various
+ways. They can stress different subsystems such as CPU and memory. *stress-ng*
+is the improved version of *stress*. StressorVMs are custom-built virtual
+machines for the noisy-neighbor use-cases.
+
+Installation
+^^^^^^^^^^^^
+
+stress and stress-ng can be installed through the standard Linux package
+installation process. Information about stress-ng, including installation
+steps, can be found here: `stress-ng <https://github.com/ColinIanKing/stress-ng>`_
+
+There are two options for StressorVMs - VMs based on stress-ng and VMs based
+on Spirent's cloudstress. VMs based on stress-ng can be found at this
+`link <https://github.com/opensource-tnbt/stressng-images>`_ and Spirent's
+cloudstress based VM can be downloaded from this
+`site <https://github.com/spirent/cloudstress>`_.
+
+These StressorVMs are OSv-based VMs, which are very small in size. Download
+these VMs and place them in an appropriate location; this location is used in
+the configuration, as mentioned below.
+
+Configuration
+^^^^^^^^^^^^^
+
+The configuration file for loadgens can be found in **conf/07_loadgen.conf**.
+There are no specific configurations for stress and stress-ng commands based
+load-generation. However, for StressorVMs, following configurations apply:
+
+* ``NN_COUNT`` - Number of stressor VMs required.
+* ``NN_MEMORY`` - Comma separated memory configuration for each VM
+* ``NN_SMP`` - Comma separated configuration for each VM
+* ``NN_IMAGE`` - Comma separated list of Paths for each VM image
+* ``NN_SHARED_DRIVE_TYPE`` - Comma separated list of shared drive type for each VM
+* ``NN_BOOT_DRIVE_TYPE`` - Comma separated list of boot drive type for each VM
+* ``NN_CORE_BINDING`` - Comma separated lists of list specifying the cores associated with each VM.
+* ``NN_NICS_NR`` - Comma separated list of number of NICs for each VM
+* ``NN_BASE_VNC_PORT`` - Base VNC port Index.
+* ``NN_LOG_FILE`` - Name of the log file
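
A minimal sketch of a StressorVM configuration for two VMs follows; the image
path and core numbers are placeholders for illustration only:

```python
# Illustrative StressorVM settings for conf/07_loadgen.conf overrides.
# The image path and core numbers below are placeholders, not defaults.
NN_COUNT = 2                                   # two stressor VMs
NN_MEMORY = ['2048', '2048']                   # memory per VM (MB)
NN_SMP = ['2', '2']                            # vCPUs per VM
NN_IMAGE = ['/tmp/stress_vm.qcow2'] * 2        # one image path per VM
NN_CORE_BINDING = [['7', '8'], ['9', '10']]    # host cores bound to each VM
NN_BASE_VNC_PORT = 2                           # VNC ports allocated from :2
```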
+
+.. _`L3 Cache Management`:
+
+Last Level Cache Management
+---------------------------
+
+VSPERF supports last-level cache management using Intel's RDT tool(s) - the
+relevant ones are `Intel CAT-CMT <https://github.com/intel/intel-cmt-cat>`_ and
+`Intel RMD <https://github.com/intel/rmd>`_. RMD is a Linux daemon that runs on
+individual hosts and provides a REST API for the control/orchestration layer to
+request LLC for VMs/containers/applications. RMD receives the resource policy
+from the orchestration layer - in this case, from VSPERF - and enforces it on
+the host. It achieves this enforcement via kernel interfaces such as resctrlfs
+and libpqos. The resource here refers to the last-level cache. Users can
+configure policies to define how much cache a CPU can get. The policy
+configuration is described below.
+
+Installation
+^^^^^^^^^^^^
+
+To install the RMD tool, please install CAT-CMT first and then install RMD.
+The details of installation can be found here: `Intel CAT-CMT <https://github.com/intel/intel-cmt-cat>`_
+and `Intel RMD <https://github.com/intel/rmd>`_
+
+Configuration
+^^^^^^^^^^^^^
+
+The configuration file for cache management can be found in **conf/08_llcmanagement.conf**.
+
+VSPERF provides the following configuration options for users to define and enforce policies via RMD.
+
+* ``LLC_ALLOCATION`` - Enable or Disable LLC management.
+* ``RMD_PORT`` - RMD port (port number on which API server is listening)
+* ``RMD_SERVER_IP`` - IP address where RMD is running. Currently only localhost.
+* ``RMD_API_VERSION`` - RMD version. Currently it is 'v1'
+* ``POLICY_TYPE`` - Specify how the policy is defined - either COS or CUSTOM
+* ``VSWITCH_COS`` - Class of service (CoS) for vSwitch. CoS can be gold, silver-bf or bronze-shared.
+* ``VNF_COS`` - Class of service for VNF
+* ``PMD_COS`` - Class of service for PMD
+* ``NOISEVM_COS`` - Class of service of Noisy VM.
+* ``VSWITCH_CA`` - [min-cache-value, max-cache-value] for vswitch
+* ``VNF_CA`` - [min-cache-value, max-cache-value] for VNF
+* ``PMD_CA`` - [min-cache-value, max-cache-value] for PMD
+* ``NOISEVM_CA`` - [min-cache-value, max-cache-value] for Noisy VM
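
Putting these options together, an example CoS-based policy might read as
follows. The CoS assignments are illustrative choices, not recommended values:

```python
# Illustrative policy for conf/08_llcmanagement.conf; the CoS assignments
# below are examples only, not recommendations.
LLC_ALLOCATION = True            # enable LLC management via RMD
RMD_PORT = 8081                  # port of the RMD API server (assumed value)
RMD_SERVER_IP = '127.0.0.1'      # RMD currently runs on localhost only
RMD_API_VERSION = 'v1'
POLICY_TYPE = 'COS'              # use predefined classes of service
VSWITCH_COS = 'gold'
VNF_COS = 'silver-bf'
PMD_COS = 'gold'
NOISEVM_COS = 'bronze-shared'
```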
+
+VSPERF Containers
+-----------------
+
+VSPERF containers are found in tools/docker folder.
+
+RESULTS CONTAINER
+^^^^^^^^^^^^^^^^^
+
+The results container includes multiple services - ELK Stack, Barometer-Grafana, OPNFV-TestAPI & Jupyter.
+
+Pre-Deployment Configuration
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+1. Set the limit on mmap counts equal to 262144 or more.
+   You can do this by the command - ``sysctl -w vm.max_map_count=262144``.
+ Or to set it permanently, update the ``vm.max_map_count`` field in ``/etc/sysctl.conf``.
+
+2. You may want to modify the IP address from 0.0.0.0 to the appropriate host IP in ``docker-compose.yml``
+
+3. Please add the dashboards folder from OPNFV-Barometer-Grafana into the grafana folder. It can be found in `Barometer Grafana <https://github.com/opnfv/barometer/tree/master/docker/barometer-grafana>`_
+
+Build
+~~~~~
+
+Run ``docker-compose build`` command to build the container.
+
+Run
+~~~
+
+Run the container with ``docker-compose up`` command.
+
+Post-Deployment Configuration
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The directory ``resultsdb`` contains the source from the Dovetail/Dovetail-webportal project.
+Once the results container is deployed, please run the python script as follows to ensure
+that results can be pushed and queried correctly - ``python init_db.py host_ip_address testapi_port``.
+For example, if the host on which the container is running is 10.10.120.22 and the container
+exposes port 8000, the command should be: ``python init_db.py 10.10.120.22 8000``
diff --git a/docs/testing/user/configguide/trafficgen.rst b/docs/testing/user/configguide/trafficgen.rst
index 4909c55a..3bb09d52 100644
--- a/docs/testing/user/configguide/trafficgen.rst
+++ b/docs/testing/user/configguide/trafficgen.rst
@@ -39,6 +39,7 @@ and is configured as follows:
TRAFFIC = {
'traffic_type' : 'rfc2544_throughput',
'frame_rate' : 100,
+ 'burst_size' : 100,
'bidir' : 'True', # will be passed as string in title format to tgen
'multistream' : 0,
'stream_type' : 'L4',
@@ -75,8 +76,31 @@ and is configured as follows:
'count': 1,
'filter': '',
},
+ 'scapy': {
+ 'enabled': False,
+ '0' : 'Ether(src={Ether_src}, dst={Ether_dst})/'
+ 'Dot1Q(prio={Dot1Q_prio}, id={Dot1Q_id}, vlan={Dot1Q_vlan})/'
+ 'IP(proto={IP_proto}, src={IP_src}, dst={IP_dst})/'
+ '{IP_PROTO}(sport={IP_PROTO_sport}, dport={IP_PROTO_dport})',
+ '1' : 'Ether(src={Ether_dst}, dst={Ether_src})/'
+ 'Dot1Q(prio={Dot1Q_prio}, id={Dot1Q_id}, vlan={Dot1Q_vlan})/'
+ 'IP(proto={IP_proto}, src={IP_dst}, dst={IP_src})/'
+ '{IP_PROTO}(sport={IP_PROTO_dport}, dport={IP_PROTO_sport})',
+ },
+ 'latency_histogram': {
+ 'enabled': False,
+ 'type': 'Default',
+ },
+ 'imix': {
+ 'enabled': True,
+ 'type': 'genome',
+ 'genome': 'aaaaaaaddddg',
+ },
}
+A detailed description of the ``TRAFFIC`` dictionary can be found at
+:ref:`configuration-of-traffic-dictionary`.
+
The framesize parameter can be overridden from the configuration
files by adding the following to your custom configuration file
``10_custom.conf``:
@@ -100,6 +124,13 @@ commandline above to:
$ ./vsperf --test-params "TRAFFICGEN_PKT_SIZES=(x,y);TRAFFICGEN_DURATION=10;" \
"TRAFFICGEN_RFC2544_TESTS=1" $TESTNAME
+If you use imix, set ``TRAFFICGEN_PKT_SIZES`` as follows:
+
+.. code-block:: console
+
+ TRAFFICGEN_PKT_SIZES = (0,)
+
+
.. _trafficgen-dummy:
Dummy
@@ -376,7 +407,7 @@ Spirent Setup
Spirent installation files and instructions are available on the
Spirent support website at:
-http://support.spirent.com
+https://support.spirent.com
Select a version of Spirent TestCenter software to utilize. This example
will use Spirent TestCenter v4.57 as an example. Substitute the appropriate
@@ -428,7 +459,7 @@ STC ReST API. Basic ReST functionality is provided by the resthttp module,
and may be used for writing ReST clients independent of STC.
- Project page: <https://github.com/Spirent/py-stcrestclient>
-- Package download: <http://pypi.python.org/pypi/stcrestclient>
+- Package download: <https://pypi.python.org/project/stcrestclient>
To use REST interface, follow the instructions in the Project page to
install the package. Once installed, the scripts named with 'rest' keyword
@@ -551,6 +582,22 @@ Note that 'FORWARDING_RATE_FPS', 'CACHING_CAPACITY_ADDRS',
'ADDR_LEARNED_PERCENT' and 'OPTIMAL_LEARNING_RATE_FPS' are the new
result-constants added to support RFC2889 tests.
+4. Latency Histogram. To enable the latency histogram in results,
+enable latency_histogram in conf/03_traffic.conf.
+
+.. code-block:: python
+
+    'latency_histogram':
+    {
+        'enabled': True,
+        'type': 'Default',
+    }
+
+Once enabled, a 'Histogram.csv' file will be generated in the results folder.
+Histogram.csv will include the latency histogram in the following order:
+(a) packet size, (b) ranges in 10 ns, (c) packet counts. This set of 3 lines
+is repeated for every packet size.
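
Given that layout, the file can be post-processed with a short script. The
sketch below assumes the three-lines-per-packet-size structure described above;
it is illustrative and not part of VSPERF:

```python
import csv

def read_histograms(path):
    """Group Histogram.csv rows into (size, ranges, counts) triples.

    Assumes the layout described above: for each packet size there is
    one line with the size, one with the 10 ns ranges and one with the
    packet counts.
    """
    with open(path, newline='') as csvfile:
        rows = [row for row in csv.reader(csvfile) if row]
    # every three consecutive rows describe one packet size
    return [(rows[i], rows[i + 1], rows[i + 2])
            for i in range(0, len(rows) - 2, 3)]
```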
+
.. _`Xena Networks`:
Xena Networks
@@ -571,7 +618,7 @@ support contract.
To execute the Xena2544.exe file under Linux distributions the mono-complete
package must be installed. To install this package follow the instructions
below. Further information can be obtained from
-http://www.mono-project.com/docs/getting-started/install/linux/
+https://www.mono-project.com/docs/getting-started/install/linux/
.. code-block:: console
@@ -667,6 +714,14 @@ or modify the length of the learning by modifying the following settings.
TRAFFICGEN_XENA_CONT_PORT_LEARNING_ENABLED = False
TRAFFICGEN_XENA_CONT_PORT_LEARNING_DURATION = 3
+Multistream Modifier
+~~~~~~~~~~~~~~~~~~~~
+
+Xena has a maximum modifier value of 64k. For this reason, when multistream
+values greater than 64k are specified for Layer 2 or Layer 3, two modifiers
+are used, and the requested value may be adjusted to one whose square root can
+be split across the two modifiers. You will see a log notification with the
+new value that was calculated.
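
The adjustment can be pictured with a small sketch. The rounding below (to the
nearest perfect square) is an assumption for illustration; the exact value
VSPERF calculates may differ:

```python
import math

def split_multistream(count):
    """Sketch of splitting a multistream count across two Xena modifiers.

    Counts up to 64k fit in a single modifier; larger counts are rounded
    to a perfect square so two equal modifiers can be used (illustrative
    rounding - VSPERF's calculated value may differ).
    """
    if count <= 64 * 1024:
        return (count,)                  # a single modifier is enough
    root = round(math.sqrt(count))
    return (root, root)                  # adjusted count is root * root
```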
+
MoonGen
-------
@@ -699,7 +754,7 @@ trafficgen.lua
Follow MoonGen set up and execution instructions here:
-https://github.com/atheurer/lua-trafficgen/blob/master/README.md
+https://github.com/atheurer/trafficgen/blob/master/README.md
Note one will need to set up ssh login to not use passwords between the server
running MoonGen and the device under test (running the VSPERF test
@@ -745,11 +800,14 @@ You can directly download from GitHub:
git clone https://github.com/cisco-system-traffic-generator/trex-core
-and use the master branch:
+and use the same Trex version for both server and client API.
+
+**NOTE:** The Trex API version used by VSPERF is defined by variable ``TREX_TAG``
+in file ``src/package-list.mk``.
.. code-block:: console
- git checkout master
+ git checkout v2.38
or Trex latest release you can download from here:
@@ -854,6 +912,21 @@ place. This can be adjusted with the following configurations:
TRAFFICGEN_TREX_LEARNING_MODE=True
TRAFFICGEN_TREX_LEARNING_DURATION=5
+Latency measurements have an impact on T-Rex performance. Thus vswitchperf uses a separate
+latency stream for each direction with limited speed. This workaround is used for RFC2544
+**Throughput** and **Continuous** traffic types. In case of **Burst** traffic type,
+the latency statistics are measured for all frames in the burst. Collection of latency
+statistics is driven by configuration option ``TRAFFICGEN_TREX_LATENCY_PPS`` as follows:
+
+ * value ``0`` - disables latency measurements
+ * non-zero integer value - enables latency measurements; in case of Throughput
+   and Continuous traffic types, it specifies the speed of the latency-specific
+   stream in PPS. In case of Burst traffic type, it enables latency measurements for all frames.
+
+.. code-block:: console
+
+ TRAFFICGEN_TREX_LATENCY_PPS = 1000
+
SR-IOV and Multistream layer 2
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
T-Rex by default only accepts packets on the receive side if the destination mac matches the
@@ -904,3 +977,68 @@ The duration and maximum number of attempted verification trials can be set to c
behavior of this step. If the verification step fails, it will resume the binary search
with new values where the maximum output will be the last attempted frame rate minus the
current set thresh hold.
+
+Scapy frame definition
+~~~~~~~~~~~~~~~~~~~~~~
+
+It is possible to use a SCAPY frame definition to generate various network
+protocols with the **T-Rex** traffic generator. In case a particular network
+protocol layer is disabled by the TRAFFIC dictionary (e.g.
+TRAFFIC['vlan']['enabled'] = False), the disabled layer will be removed from
+the scapy frame definition by VSPERF.
+
+The scapy frame definition can refer to values defined by the TRAFFIC dictionary
+through the following keywords, which are used in the examples below.
+
+* ``Ether_src`` - refers to ``TRAFFIC['l2']['srcmac']``
+* ``Ether_dst`` - refers to ``TRAFFIC['l2']['dstmac']``
+* ``IP_proto`` - refers to ``TRAFFIC['l3']['proto']``
+* ``IP_PROTO`` - refers to upper case version of ``TRAFFIC['l3']['proto']``
+* ``IP_src`` - refers to ``TRAFFIC['l3']['srcip']``
+* ``IP_dst`` - refers to ``TRAFFIC['l3']['dstip']``
+* ``IP_PROTO_sport`` - refers to ``TRAFFIC['l4']['srcport']``
+* ``IP_PROTO_dport`` - refers to ``TRAFFIC['l4']['dstport']``
+* ``Dot1Q_prio`` - refers to ``TRAFFIC['vlan']['priority']``
+* ``Dot1Q_id`` - refers to ``TRAFFIC['vlan']['cfi']``
+* ``Dot1Q_vlan`` - refers to ``TRAFFIC['vlan']['id']``
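
For illustration, the keyword substitution can be pictured with plain
``str.format()``; this is only a sketch of the mapping above, not VSPERF's
actual implementation:

```python
# Sketch of filling the {keyword} placeholders from a TRAFFIC dictionary.
# The substitution below is illustrative; VSPERF's own logic may differ.
template = ('Ether(src={Ether_src}, dst={Ether_dst})/'
            'IP(proto={IP_proto}, src={IP_src}, dst={IP_dst})')

traffic = {   # relevant TRAFFIC subset with example values
    'l2': {'srcmac': '00:00:00:00:00:01', 'dstmac': '00:00:00:00:00:02'},
    'l3': {'proto': 'udp', 'srcip': '1.1.1.1', 'dstip': '90.90.90.90'},
}

frame = template.format(
    Ether_src=repr(traffic['l2']['srcmac']),
    Ether_dst=repr(traffic['l2']['dstmac']),
    IP_proto=repr(traffic['l3']['proto']),
    IP_src=repr(traffic['l3']['srcip']),
    IP_dst=repr(traffic['l3']['dstip']),
)
```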
+
+In the following examples of SCAPY frame definitions only the relevant parts
+of the TRAFFIC dictionary are shown. The rest of the TRAFFIC dictionary is set
+to the default values defined in ``conf/03_traffic.conf``.
+
+Please check the official documentation of the SCAPY project for details about
+SCAPY frame definitions and supported network layers at: https://scapy.net
+
+#. Generate ICMP frames:
+
+ .. code-block:: console
+
+ 'scapy': {
+ 'enabled': True,
+ '0' : 'Ether(src={Ether_src}, dst={Ether_dst})/IP(proto="icmp", src={IP_src}, dst={IP_dst})/ICMP()',
+ '1' : 'Ether(src={Ether_dst}, dst={Ether_src})/IP(proto="icmp", src={IP_dst}, dst={IP_src})/ICMP()',
+ }
+
+#. Generate IPv6 ICMP Echo Request
+
+ .. code-block:: console
+
+ 'l3' : {
+ 'srcip': 'feed::01',
+ 'dstip': 'feed::02',
+ },
+ 'scapy': {
+ 'enabled': True,
+ '0' : 'Ether(src={Ether_src}, dst={Ether_dst})/IPv6(src={IP_src}, dst={IP_dst})/ICMPv6EchoRequest()',
+ '1' : 'Ether(src={Ether_dst}, dst={Ether_src})/IPv6(src={IP_dst}, dst={IP_src})/ICMPv6EchoRequest()',
+ }
+
+#. Generate TCP frames:
+
+   This example uses the default SCAPY frame definition, which reflects the ``TRAFFIC['l3']['proto']`` setting.
+
+ .. code-block:: console
+
+ 'l3' : {
+ 'proto' : 'tcp',
+ },
+
diff --git a/docs/testing/user/userguide/index.rst b/docs/testing/user/userguide/index.rst
index 350fbe54..2c7a78ff 100644
--- a/docs/testing/user/userguide/index.rst
+++ b/docs/testing/user/userguide/index.rst
@@ -11,7 +11,6 @@ VSPERF Test Guide
.. toctree::
:caption: VSPERF Test Execution
:maxdepth: 2
- :numbered:
./testusage.rst
./teststeps.rst
diff --git a/docs/testing/user/userguide/integration.rst b/docs/testing/user/userguide/integration.rst
index 66808400..9d847fd8 100644
--- a/docs/testing/user/userguide/integration.rst
+++ b/docs/testing/user/userguide/integration.rst
@@ -1,6 +1,6 @@
.. This work is licensed under a Creative Commons Attribution 4.0 International License.
.. http://creativecommons.org/licenses/by/4.0
-.. (c) OPNFV, Intel Corporation, AT&T and others.
+.. (c) OPNFV, Intel Corporation, AT&T, Tieto and others.
.. _integration-tests:
@@ -22,6 +22,12 @@ P2P (Physical to Physical scenarios).
NOTE: The configuration for overlay tests provided in this guide is for
unidirectional traffic only.
+NOTE: The overlay tests require an IxNet traffic generator. The tunneled
+traffic is configured by the ``ixnetrfc2544v2.tcl`` script. This script can be
+used with all supported deployment scenarios to generate frames with the
+VXLAN, GRE or GENEVE protocols. In that case the options "Tunnel Operation"
+and "TRAFFICGEN_IXNET_TCL_SCRIPT" must be properly configured in the testcase
+definition.
+
Executing Integration Tests
---------------------------
@@ -63,8 +69,8 @@ the following variables in you user_settings.py file:
VTEP_IP2_SUBNET = '192.168.240.0/24'
# Bridge names
- TUNNEL_INTEGRATION_BRIDGE = 'br0'
- TUNNEL_EXTERNAL_BRIDGE = 'br-ext'
+ TUNNEL_INTEGRATION_BRIDGE = 'vsperf-br0'
+ TUNNEL_EXTERNAL_BRIDGE = 'vsperf-br-ext'
# IP of br-ext
TUNNEL_EXTERNAL_BRIDGE_IP = '192.168.240.1/24'
diff --git a/docs/testing/user/userguide/testlist.rst b/docs/testing/user/userguide/testlist.rst
index 21c4b736..fe8c840a 100644
--- a/docs/testing/user/userguide/testlist.rst
+++ b/docs/testing/user/userguide/testlist.rst
@@ -68,14 +68,13 @@ vswitch_pvvp_tput vSwitch - configure switch, two chained v
vswitch_pvvp_back2back vSwitch - configure switch, two chained vnfs and execute RFC2544 back2back test
vswitch_pvvp_cont vSwitch - configure switch, two chained vnfs and execute RFC2544 continuous stream test
vswitch_pvvp_all vSwitch - configure switch, two chained vnfs and execute all test types
-vswitch_p4vp Just configure 4 chained vnfs
-vswitch_p4vp_tput 4 chained vnfs, execute RFC2544 throughput test
-vswitch_p4vp_back2back 4 chained vnfs, execute RFC2544 back2back test
-vswitch_p4vp_cont 4 chained vnfs, execute RFC2544 continuous stream test
-vswitch_p4vp_all 4 chained vnfs, execute RFC2544 throughput test
-2pvp_udp_dest_flows RFC2544 Continuous TC with 2 Parallel VMs, flows on UDP Dest Port
-4pvp_udp_dest_flows RFC2544 Continuous TC with 4 Parallel VMs, flows on UDP Dest Port
-6pvp_udp_dest_flows RFC2544 Continuous TC with 6 Parallel VMs, flows on UDP Dest Port
+vswitch_p4vp_tput 4 chained vnfs, execute RFC2544 throughput test, deployment pvvp4
+vswitch_p4vp_back2back 4 chained vnfs, execute RFC2544 back2back test, deployment pvvp4
+vswitch_p4vp_cont 4 chained vnfs, execute RFC2544 continuous stream test, deployment pvvp4
+vswitch_p4vp_all 4 chained vnfs, execute RFC2544 throughput tests, deployment pvvp4
+2pvp_udp_dest_flows RFC2544 Continuous TC with 2 Parallel VMs, flows on UDP Dest Port, deployment pvpv2
+4pvp_udp_dest_flows RFC2544 Continuous TC with 4 Parallel VMs, flows on UDP Dest Port, deployment pvpv4
+6pvp_udp_dest_flows RFC2544 Continuous TC with 6 Parallel VMs, flows on UDP Dest Port, deployment pvpv6
vhost_numa_awareness vSwitch DPDK - verify that PMD threads are served by the same NUMA slot as QEMU instances
ixnet_pvp_tput_1nic PVP Scenario with 1 port towards IXIA
vswitch_vports_add_del_connection_vpp VPP: vSwitch - configure switch with vports, add and delete connection
@@ -389,6 +388,22 @@ ovsdpdk_qos_pvp In a pvp setup, ensure when a QoS egres
traffic is limited to the specified rate.
======================================== ======================================================================================
+Custom Statistics
++++++++++++++++++
+
+A set of functional testcases for validation of the Custom Statistics support by OVS.
+This feature allows Custom Statistics to be accessed by VSPERF.
+
+These testcases require DPDK v17.11, the latest Open vSwitch (v2.9.90)
+and the IxNet traffic generator.
+
+======================================== ======================================================================================
+ovsdpdk_custstat_check Test if custom statistics are supported.
+ovsdpdk_custstat_rx_error Test bad ethernet CRC counter 'rx_crc_errors' exposed by custom
+ statistics.
+
+======================================== ======================================================================================
+
T-Rex in VM TestCases
^^^^^^^^^^^^^^^^^^^^^
diff --git a/docs/testing/user/userguide/teststeps.rst b/docs/testing/user/userguide/teststeps.rst
index 08c95311..cb627bc5 100644
--- a/docs/testing/user/userguide/teststeps.rst
+++ b/docs/testing/user/userguide/teststeps.rst
@@ -23,6 +23,13 @@ the step number by one which is indicated in the log.
(testcases.integration) - Step 0 'vswitch add_vport ['br0']' start
+Test steps are defined as a list of steps within a ``TestSteps`` item of the test
+case definition. Each step is a list with the following structure:
+
+.. code-block:: python
+
+ '[' [ optional-alias ',' ] test-object ',' test-function [ ',' optional-function-params ] '],'
+
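As an illustration of this structure, a hypothetical step list could look as follows; the objects, functions and parameters are examples only, not taken from a real testcase definition:

```python
# Hypothetical TestSteps list following the structure above; the objects,
# functions and parameters are illustrative only.
test_steps = [
    ['vswitch', 'add_vport', 'br0'],     # test-object, test-function, param
    ['vswitch', 'dump_flows', 'br0'],    # further steps follow the same shape
    ['tools', 'exec_shell', 'ls -l'],    # any supported test object can be used
]

# Every step carries at least a test-object name and a test-function name.
for step in test_steps:
    assert isinstance(step, list) and len(step) >= 2
    assert isinstance(step[0], str) and isinstance(step[1], str)
```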
Step driven tests can be used for both performance and integration testing.
In case of integration test, each step in the test case is validated. If a step
does not pass validation the test will fail and terminate. The test will continue
@@ -57,8 +64,14 @@ Step driven testcases can be used in two different ways:
Test objects and their functions
--------------------------------
-Every test step can call a function of one of the supported test objects. The list
-of supported objects and their most common functions follows:
+Every test step can call a function of one of the supported test objects. In
+general, any existing function of a supported test object can be called by a
+test step. If step validation is required (i.e. for integration test steps
+which are not suppressed), then an appropriate ``validate_`` method must be
+implemented.
+
+The supported objects and their most common functions are listed below. Please
+check the implementation of the test objects for the full list of implemented
+functions and their parameters.
* ``vswitch`` - provides functions for vSwitch configuration
@@ -176,6 +189,8 @@ of supported objects and their most common functions follows:
* ``getValue param`` - returns value of given ``param``
* ``setValue param value`` - sets value of ``param`` to given ``value``
+ * ``resetValue param`` - if ``param`` was overridden by ``TEST_PARAMS`` (e.g. by "Parameters"
+ section of the test case definition), then it will be set to its original value.
Examples:
@@ -185,6 +200,8 @@ of supported objects and their most common functions follows:
['settings', 'setValue', 'GUEST_USERNAME', ['root']]
+ ['settings', 'resetValue', 'WHITELIST_NICS'],
+
It is possible and more convenient to access any VSPERF configuration option directly
via ``$NAME`` notation. Option evaluation is done during runtime and vsperf will
automatically translate it to the appropriate call of ``settings.getValue``.
@@ -747,6 +764,8 @@ destination UDP port.
]
},
+The same test can be written in a shorter form using ``"Deployment": "pvpv"``.
+
To run the test:
.. code-block:: console
@@ -779,20 +798,20 @@ and available in both csv and rst report files.
},
},
"TestSteps": [
- ['vswitch', 'add_vport', 'br0'],
- ['vswitch', 'add_vport', 'br0'],
+ ['vswitch', 'add_vport', '$VSWITCH_BRIDGE_NAME'],
+ ['vswitch', 'add_vport', '$VSWITCH_BRIDGE_NAME'],
# priority must be higher than default 32768, otherwise flows won't match
- ['vswitch', 'add_flow', 'br0',
+ ['vswitch', 'add_flow', '$VSWITCH_BRIDGE_NAME',
{'in_port': '1', 'actions': ['output:#STEP[-2][1]'], 'idle_timeout': '0', 'dl_type':'0x0800',
'nw_proto':'17', 'tp_dst':'0', 'priority': '33000'}],
- ['vswitch', 'add_flow', 'br0',
+ ['vswitch', 'add_flow', '$VSWITCH_BRIDGE_NAME',
{'in_port': '2', 'actions': ['output:#STEP[-2][1]'], 'idle_timeout': '0', 'dl_type':'0x0800',
'nw_proto':'17', 'tp_dst':'0', 'priority': '33000'}],
- ['vswitch', 'add_flow', 'br0', {'in_port': '#STEP[-4][1]', 'actions': ['output:1'],
+ ['vswitch', 'add_flow', '$VSWITCH_BRIDGE_NAME', {'in_port': '#STEP[-4][1]', 'actions': ['output:1'],
'idle_timeout': '0'}],
- ['vswitch', 'add_flow', 'br0', {'in_port': '#STEP[-4][1]', 'actions': ['output:2'],
+ ['vswitch', 'add_flow', '$VSWITCH_BRIDGE_NAME', {'in_port': '#STEP[-4][1]', 'actions': ['output:2'],
'idle_timeout': '0'}],
- ['vswitch', 'dump_flows', 'br0'],
+ ['vswitch', 'dump_flows', '$VSWITCH_BRIDGE_NAME'],
['vnf1', 'start'],
]
},
diff --git a/docs/testing/user/userguide/testusage.rst b/docs/testing/user/userguide/testusage.rst
index 20c30a40..3dd41846 100644
--- a/docs/testing/user/userguide/testusage.rst
+++ b/docs/testing/user/userguide/testusage.rst
@@ -1,6 +1,6 @@
.. This work is licensed under a Creative Commons Attribution 4.0 International License.
.. http://creativecommons.org/licenses/by/4.0
-.. (c) OPNFV, Intel Corporation, AT&T and others.
+.. (c) OPNFV, Intel Corporation, Spirent, AT&T and others.
vSwitchPerf test suites userguide
---------------------------------
@@ -91,55 +91,41 @@ Using a custom settings file
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
If your ``10_custom.conf`` doesn't reside in the ``./conf`` directory
-of if you want to use an alternative configuration file, the file can
+or if you want to use an alternative configuration file, the file can
be passed to ``vsperf`` via the ``--conf-file`` argument.
.. code-block:: console
$ ./vsperf --conf-file <path_to_custom_conf> ...
-Note that configuration passed in via the environment (``--load-env``)
-or via another command line argument will override both the default and
-your custom configuration files. This "priority hierarchy" can be
-described like so (1 = max priority):
-
-1. Testcase definition section ``Parameters``
-2. Command line arguments
-3. Environment variables
-4. Configuration file(s)
-
-Further details about configuration files evaluation and special behaviour
+Evaluation of configuration parameters
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+The value of a configuration parameter can be specified in various places,
+e.g. in the test case definition, inside configuration files, via a command
+line argument, etc. Thus it is important to understand the order in which
+configuration parameters are evaluated. This "priority hierarchy" can be
+described like so (1 = max priority):
+
+1. Testcase definition keywords ``vSwitch``, ``Trafficgen``, ``VNF`` and ``Tunnel Type``
+2. Parameters inside testcase definition section ``Parameters``
+3. Command line arguments (e.g. ``--test-params``, ``--vswitch``, ``--trafficgen``, etc.)
+4. Environment variables (see ``--load-env`` argument)
+5. Custom configuration file specified via ``--conf-file`` argument
+6. Standard configuration files, where higher prefix number means higher
+ priority.
+
+For example, if the same configuration parameter is defined in a custom configuration
+file (specified via the ``--conf-file`` argument), via the ``--test-params`` argument
+and also inside the ``Parameters`` section of the testcase definition, then the
+parameter value from the ``Parameters`` section will be used.
+
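The lookup described above can be sketched as a walk through the layers in priority order; the layer contents below are illustrative assumptions, not the actual VSPERF data structures:

```python
# Minimal sketch of the priority hierarchy: the first (highest-priority)
# layer that defines a parameter wins. Layer contents are examples only.
def resolve(param, layers):
    """Return the value of param from the highest-priority layer defining it."""
    for layer in layers:  # layers ordered from highest to lowest priority
        if param in layer:
            return layer[param]
    raise KeyError(param)

layers = [
    {},                                                      # 1. testcase keywords
    {'TRAFFICGEN_DURATION': 30},                             # 2. "Parameters" section
    {'TRAFFICGEN_DURATION': 10, 'VSWITCH': 'OvsDpdkVhost'},  # 3. CLI arguments
    {'TRAFFICGEN_DURATION': 60},                             # 4.-6. env/conf files
]

assert resolve('TRAFFICGEN_DURATION', layers) == 30  # "Parameters" section wins
assert resolve('VSWITCH', layers) == 'OvsDpdkVhost'  # falls through to CLI layer
```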
+Further details about the order of configuration file evaluation and the special behaviour
of options with ``GUEST_`` prefix could be found at :ref:`design document
<design-configuration>`.
.. _overriding-parameters-documentation:
-Referencing parameter values
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-It is possible to use a special macro ``#PARAM()`` to refer to the value of
-another configuration parameter. This reference is evaluated during
-access of the parameter value (by ``settings.getValue()`` call), so it
-can refer to parameters created during VSPERF runtime, e.g. NICS dictionary.
-It can be used to reflect DUT HW details in the testcase definition.
-
-Example:
-
-.. code:: python
-
- {
- ...
- "Name": "testcase",
- "Parameters" : {
- "TRAFFIC" : {
- 'l2': {
- # set destination MAC to the MAC of the first
- # interface from WHITELIST_NICS list
- 'dstmac' : '#PARAM(NICS[0]["mac"])',
- },
- },
- ...
-
Overriding values defined in configuration files
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -155,6 +141,17 @@ Example:
$ ./vsperf --test-params "TRAFFICGEN_DURATION=10;TRAFFICGEN_PKT_SIZES=(128,);" \
"GUEST_LOOPBACK=['testpmd','l2fwd']" pvvp_tput
+The ``--test-params`` command line argument can also be used to override default
+configuration values for multiple tests. Providing a list of parameters will apply
+each element of the list to the test with the same index. If more tests are run
+than parameters provided, the last element of the list is repeated.
+
+.. code:: console
+
+ $ ./vsperf --test-params "['TRAFFICGEN_DURATION=10;TRAFFICGEN_PKT_SIZES=(128,)',"
+ "'TRAFFICGEN_DURATION=10;TRAFFICGEN_PKT_SIZES=(64,)']" \
+ pvvp_tput pvvp_tput
+
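The index mapping described above, including repetition of the last element, can be sketched as follows (an illustrative model, not the actual VSPERF code):

```python
# Map a list of per-test parameter strings onto a list of tests; when the
# parameter list is shorter, its last element repeats (per the text above).
def params_for(tests, params):
    return [params[min(i, len(params) - 1)] for i in range(len(tests))]

tests = ['pvvp_tput', 'pvvp_tput', 'pvvp_tput']
params = ['TRAFFICGEN_PKT_SIZES=(128,)', 'TRAFFICGEN_PKT_SIZES=(64,)']

assert params_for(tests, params) == [
    'TRAFFICGEN_PKT_SIZES=(128,)',
    'TRAFFICGEN_PKT_SIZES=(64,)',
    'TRAFFICGEN_PKT_SIZES=(64,)',   # last element repeated for the third test
]
```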
The second option is to override configuration items by ``Parameters`` section
of the test case definition. The configuration items can be added into ``Parameters``
dictionary with their new values. These values will override values defined in
@@ -186,6 +183,36 @@ parameter name is passed via ``--test-params`` CLI argument or defined in ``Para
section of test case definition. It is also forbidden to redefine a value of
``TEST_PARAMS`` configuration item via CLI or ``Parameters`` section.
+**NOTE:** A new definition of a dictionary parameter, specified via ``--test-params``
+or inside the ``Parameters`` section, will not replace the original dictionary values.
+Instead, the original dictionary will be updated with the values from the new
+dictionary definition.
+
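This update behaviour can be sketched as a recursive merge; the dictionary contents are illustrative, and the real implementation may differ in details:

```python
# Update the original dictionary with values from a new definition instead
# of replacing it wholesale; nested dictionaries are merged recursively.
def merge(original, override):
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(original.get(key), dict):
            merge(original[key], value)
        else:
            original[key] = value
    return original

traffic = {'l2': {'srcmac': '00:00:00:00:00:01', 'dstmac': '00:00:00:00:00:02'},
           'bidir': 'True'}
merge(traffic, {'l2': {'dstmac': '00:00:00:00:00:99'}})

assert traffic['l2']['srcmac'] == '00:00:00:00:00:01'  # untouched key survives
assert traffic['l2']['dstmac'] == '00:00:00:00:00:99'  # new value applied
assert traffic['bidir'] == 'True'
```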
+Referencing parameter values
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+It is possible to use a special macro ``#PARAM()`` to refer to the value of
+another configuration parameter. This reference is evaluated during
+access of the parameter value (by ``settings.getValue()`` call), so it
+can refer to parameters created during VSPERF runtime, e.g. NICS dictionary.
+It can be used to reflect DUT HW details in the testcase definition.
+
+Example:
+
+.. code:: python
+
+ {
+ ...
+ "Name": "testcase",
+ "Parameters" : {
+ "TRAFFIC" : {
+ 'l2': {
+ # set destination MAC to the MAC of the first
+ # interface from WHITELIST_NICS list
+ 'dstmac' : '#PARAM(NICS[0]["mac"])',
+ },
+ },
+ ...
+
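The reference expansion could be modelled roughly as below; the regex-and-eval approach and the MAC value are assumptions for illustration, not the actual VSPERF settings code:

```python
import re

# Illustrative runtime settings; the NICS dictionary is built by VSPERF at
# runtime, the MAC value here is a made-up example.
SETTINGS = {'NICS': [{'mac': '3c:fd:fe:9c:87:72'}]}

def expand(value):
    """Replace every #PARAM(expr) occurrence with the evaluated expression."""
    def repl(match):
        return str(eval(match.group(1), {}, SETTINGS))
    return re.sub(r'#PARAM\((.+?)\)', repl, value)

assert expand('#PARAM(NICS[0]["mac"])') == '3c:fd:fe:9c:87:72'
assert expand('no reference here') == 'no reference here'
```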
vloop_vnf
^^^^^^^^^
@@ -205,6 +232,12 @@ A Kernel Module that provides OSI Layer 2 Ipv4 termination or forwarding with
support for Destination Network Address Translation (DNAT) for both the MAC and
IP addresses. l2fwd can be found in <vswitchperf_dir>/src/l2fwd
+Additional Tools Setup
+^^^^^^^^^^^^^^^^^^^^^^
+
+Follow the :ref:`Additional tools instructions <additional-tools-configuration>` to
+install and configure additional tools such as collectors and loadgens.
+
Executing tests
^^^^^^^^^^^^^^^
@@ -234,6 +267,12 @@ To run a single test:
Where $TESTNAME is the name of the vsperf test you would like to run.
+To run a test multiple times, repeat its name on the command line:
+
+.. code-block:: console
+
+ $ ./vsperf $TESTNAME $TESTNAME $TESTNAME
+
To run a group of tests, for example all tests with a name containing
'RFC2544':
@@ -256,6 +295,30 @@ Some tests allow for configurable parameters, including test duration
--tests RFC2544Tput \
--test-params "TRAFFICGEN_DURATION=10;TRAFFICGEN_PKT_SIZES=(128,)"
+To specify configurable parameters for multiple tests, use a list of
+parameters with one element for each test.
+
+.. code:: console
+
+ $ ./vsperf --conf-file user_settings.py \
+ --test-params "['TRAFFICGEN_DURATION=10;TRAFFICGEN_PKT_SIZES=(128,)',"\
+ "'TRAFFICGEN_DURATION=10;TRAFFICGEN_PKT_SIZES=(64,)']" \
+ phy2phy_cont phy2phy_cont
+
+If the ``CUMULATIVE_PARAMS`` setting is set to True and different parameters are
+provided for each test via ``--test-params``, then each test will take the parameters
+of the previous test before applying its own.
+With ``CUMULATIVE_PARAMS`` set to True, the following command is equivalent to the
+previous example:
+
+.. code:: console
+
+ $ ./vsperf --conf-file user_settings.py \
+ --test-params "['TRAFFICGEN_DURATION=10;TRAFFICGEN_PKT_SIZES=(128,)',"\
+ "'TRAFFICGEN_PKT_SIZES=(64,)']" \
+ phy2phy_cont phy2phy_cont
+
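The cumulative behaviour can be sketched as follows; the merge model is an assumption based on the description above, not the actual implementation:

```python
# With cumulative parameters each test inherits the merged parameters of
# all previous tests before applying its own overrides.
def cumulative(param_dicts):
    merged, per_test = {}, []
    for params in param_dicts:
        merged = {**merged, **params}
        per_test.append(dict(merged))
    return per_test

runs = cumulative([
    {'TRAFFICGEN_DURATION': 10, 'TRAFFICGEN_PKT_SIZES': (128,)},
    {'TRAFFICGEN_PKT_SIZES': (64,)},   # inherits duration from the first test
])

assert runs[0] == {'TRAFFICGEN_DURATION': 10, 'TRAFFICGEN_PKT_SIZES': (128,)}
assert runs[1] == {'TRAFFICGEN_DURATION': 10, 'TRAFFICGEN_PKT_SIZES': (64,)}
```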
For all available options, check out the help dialog:
.. code-block:: console
@@ -425,10 +488,6 @@ set ``PATHS['dpdk']['bin']['modules']`` instead.
**NOTE:** Please ensure your boot/grub parameters include
the following:
-**NOTE:** In case of VPP, it is required to explicitly define, that vfio-pci
-DPDK driver should be used. It means to update dpdk part of VSWITCH_VPP_ARGS
-dictionary with uio-driver section, e.g. VSWITCH_VPP_ARGS['dpdk'] = 'uio-driver vfio-pci'
-
.. code-block:: console
iommu=pt intel_iommu=on
@@ -448,6 +507,10 @@ To check that IOMMU is enabled on your platform:
[ 3.335746] IOMMU: dmar1 using Queued invalidation
....
+**NOTE:** In case of VPP, it is required to explicitly define that the vfio-pci
+DPDK driver should be used. This means updating the dpdk part of the
+``VSWITCH_VPP_ARGS`` dictionary with a uio-driver section, e.g.
+``VSWITCH_VPP_ARGS['dpdk'] = 'uio-driver vfio-pci'``.
+
.. _SRIOV-support:
Using SRIOV support
@@ -584,7 +647,7 @@ The supported dpdk guest bind drivers are:
.. code-block:: console
- 'uio_pci_generic' - Use uio_pci_generic driver
+ 'uio_pci_generic' - Use uio_pci_generic driver
'igb_uio_from_src' - Build and use the igb_uio driver from the dpdk src
files
'vfio_no_iommu' - Use vfio with no iommu option. This requires custom
@@ -599,7 +662,7 @@ modified to use igb_uio_from_src instead.
Note: vfio_no_iommu requires kernels equal to or greater than 4.5 and dpdk
16.04 or greater. Using this option will also taint the kernel.
-Please refer to the dpdk documents at http://dpdk.org/doc/guides for more
+Please refer to the dpdk documents at https://doc.dpdk.org/guides for more
information on these drivers.
Guest Core and Thread Binding
@@ -915,6 +978,39 @@ Example of execution of VSPERF in "trafficgen" mode:
$ ./vsperf -m trafficgen --trafficgen IxNet --conf-file vsperf.conf \
--test-params "TRAFFIC={'traffic_type':'rfc2544_continuous','bidir':'False','framerate':60}"
+Performance Matrix
+^^^^^^^^^^^^^^^^^^
+
+The ``--matrix`` command line argument analyses and displays the performance of
+all the tests run. Using the metric specified by ``MATRIX_METRIC`` in the conf-file,
+the first test is set as the baseline and all the other tests are compared to it.
+The ``MATRIX_METRIC`` must always refer to a numeric value to enable comparison.
+A table with the test ID, metric value, change of the metric in %, test name
+and the test parameters used for each test is printed out as well as saved into
+the results directory.
+
+Example of 2 tests being compared using Performance Matrix:
+
+.. code-block:: console
+
+ $ ./vsperf --conf-file user_settings.py \
+ --test-params "['TRAFFICGEN_PKT_SIZES=(64,)',"\
+ "'TRAFFICGEN_PKT_SIZES=(128,)']" \
+ phy2phy_cont phy2phy_cont --matrix
+
+Example output:
+
+.. code-block:: console
+
+ +------+--------------+---------------------+----------+---------------------------------------+
+ | ID | Name | throughput_rx_fps | Change | Parameters, CUMULATIVE_PARAMS = False |
+ +======+==============+=====================+==========+=======================================+
+ | 0 | phy2phy_cont | 23749000.000 | 0 | 'TRAFFICGEN_PKT_SIZES': [64] |
+ +------+--------------+---------------------+----------+---------------------------------------+
+ | 1 | phy2phy_cont | 16850500.000 | -29.048 | 'TRAFFICGEN_PKT_SIZES': [128] |
+ +------+--------------+---------------------+----------+---------------------------------------+
+
+
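The "Change" column in the sample output above is the usual relative difference against the baseline test, which can be verified directly:

```python
# Relative change of a metric against the baseline (first) test, rounded
# to three decimals as in the matrix table above.
def pct_change(baseline, value):
    return round((value - baseline) / baseline * 100, 3)

assert pct_change(23749000.0, 23749000.0) == 0        # baseline vs itself
assert pct_change(23749000.0, 16850500.0) == -29.048  # matches the sample row
```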
Code change verification by pylint
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
diff --git a/docs/testing/user/userguide/trafficcapture.rst b/docs/testing/user/userguide/trafficcapture.rst
index fa09bfed..8a224dcb 100644
--- a/docs/testing/user/userguide/trafficcapture.rst
+++ b/docs/testing/user/userguide/trafficcapture.rst
@@ -92,9 +92,9 @@ An example of Traffic Capture in VM test:
},
TestSteps: [
# replace original flows with vlan ID modification
- ['!vswitch', 'add_flow', 'br0', {'in_port': '1', 'actions': ['mod_vlan_vid:4','output:3']}],
- ['!vswitch', 'add_flow', 'br0', {'in_port': '2', 'actions': ['mod_vlan_vid:4','output:4']}],
- ['vswitch', 'dump_flows', 'br0'],
+ ['!vswitch', 'add_flow', '$VSWITCH_BRIDGE_NAME', {'in_port': '1', 'actions': ['mod_vlan_vid:4','output:3']}],
+ ['!vswitch', 'add_flow', '$VSWITCH_BRIDGE_NAME', {'in_port': '2', 'actions': ['mod_vlan_vid:4','output:4']}],
+ ['vswitch', 'dump_flows', '$VSWITCH_BRIDGE_NAME'],
# verify that received frames have modified vlan ID
['VNF0', 'execute_and_wait', 'tcpdump -i eth0 -c 5 -w dump.pcap vlan 4 &'],
['trafficgen', 'send_traffic',{}],
@@ -199,7 +199,7 @@ An example of Traffic Capture for testing NICs with HW offloading test:
['tools', 'exec_shell_background', 'tcpdump -i [2][device] -c 5 -w capture.pcap '
'ether src [l2][srcmac]'],
['trafficgen', 'send_traffic', {}],
- ['vswitch', 'dump_flows', 'br0'],
+ ['vswitch', 'dump_flows', '$VSWITCH_BRIDGE_NAME'],
['vswitch', 'dump_flows', 'br1'],
# there must be 5 captured frames...
['tools', 'exec_shell', 'tcpdump -r capture.pcap | wc -l', '|^(\d+)$'],
diff --git a/docs/xtesting/index.rst b/docs/xtesting/index.rst
new file mode 100644
index 00000000..9259a12a
--- /dev/null
+++ b/docs/xtesting/index.rst
@@ -0,0 +1,85 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, Spirent, AT&T, Ixia and others.
+
+.. OPNFV VSPERF Documentation master file.
+
+********************************
+OPNFV VSPERF with OPNFV Xtesting
+********************************
+
+============
+Introduction
+============
+VSPERF can be used with Xtesting for two different use cases:
+
+1. Baremetal Dataplane Testing/Benchmarking.
+2. Openstack Dataplane Testing/Benchmarking.
+
+The baremetal use case is the legacy use case of OPNFV VSPERF.
+
+The figure below summarizes both use cases.
+
+.. image:: ./vsperf-xtesting.png
+ :width: 400
+
+===========
+How to Use?
+===========
+
+Step-1: Build the container
+^^^^^^^^^^^^^^^^^^^^^^^^^^^
+Go to ``xtesting/baremetal`` or ``xtesting/openstack`` and run the following command.
+
+.. code-block:: console
+
+   docker build -t 127.0.0.1:5000/vsperfbm .
+
+
+Step-2: Install and run Xtesting Playbook
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+These commands are described in the OPNFV Xtesting documentation. Please refer to the OPNFV Xtesting wiki for a description of these commands.
+
+.. code-block:: console
+
+ virtualenv xtesting
+ . xtesting/bin/activate
+ ansible-galaxy install collivier.xtesting
+ ansible-playbook site.yml
+
+======================
+Accessing the Results?
+======================
+
+VSPERF automatically publishes the results to any OPNFV Testapi deployment.
+The user has to configure the following two parameters in VSPERF:
+
+1. OPNFVPOD - The name of the pod.
+2. OPNFV_URL - The endpoint serving testapi.
+
+As Xtesting runs its own testapi, the user should point to it (the testapi
+endpoint of Xtesting) using the above two configuration parameters.
+
+The above two configurations should be made wherever VSPERF is running (refer
+to the figure above).
+
+NOTE: Before running the test, it helps if the user prepares the testapi of
+Xtesting (if needed). The preparation includes setting up the following:
+
+1. Projects
+2. Testcases.
+3. Pods.
+
+Please refer to the documentation of testapi for more details.
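A hypothetical configuration fragment pointing VSPERF at a local Xtesting testapi might look like this; the pod name and URL are assumptions, not defaults shipped with VSPERF:

```python
# Hypothetical values: adjust the pod name and the testapi endpoint to
# match your own Xtesting deployment.
OPNFVPOD = 'intel-pod12'
OPNFV_URL = 'http://127.0.0.1:8000/api/v1'
```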
+
+=======================================
+Accessing other components of Xtesting?
+=======================================
+
+Please refer to the documentation of Xtesting in OPNFV Wiki.
+
+===========
+Limitations
+===========
+For the Jerma release, the following limitations apply:
+
+1. For both baremetal and openstack, only the phy2phy_tput testcase is supported.
+2. For openstack, only Spirent's STCv and Keysight's Ixnet-Virtual are supported.
diff --git a/docs/xtesting/vsperf-xtesting.png b/docs/xtesting/vsperf-xtesting.png
new file mode 100755
index 00000000..64cad722
--- /dev/null
+++ b/docs/xtesting/vsperf-xtesting.png
Binary files differ