Diffstat (limited to 'docs/testing')
-rw-r--r--                docs/testing/developer/design/02-Get_started_Guide.rst      2
-rw-r--r--                docs/testing/developer/design/04-SampleVNF_Design.rst      77
-rw-r--r--                docs/testing/developer/requirements/03-Requirements.rst     2
-rw-r--r-- [-rwxr-xr-x]   docs/testing/user/userguide/01-introduction.rst            15
-rw-r--r--                docs/testing/user/userguide/01-prox_documentation.rst       4
-rw-r--r--                docs/testing/user/userguide/02-methodology.rst              5
-rw-r--r-- [-rwxr-xr-x]   docs/testing/user/userguide/03-architecture.rst            14
-rw-r--r--                docs/testing/user/userguide/03-installation.rst            23
-rw-r--r--                docs/testing/user/userguide/04-running_the_test.rst        59
-rw-r--r--                docs/testing/user/userguide/05-How_to_run_SampleVNFs.rst   26
-rw-r--r--                docs/testing/user/userguide/06-How_to_use_REST_api.rst     23
-rw-r--r--                docs/testing/user/userguide/07-Config_files.rst             2
-rw-r--r--                docs/testing/user/userguide/index.rst                       1
-rw-r--r--                docs/testing/user/userguide/references.rst                  6
14 files changed, 142 insertions, 117 deletions
diff --git a/docs/testing/developer/design/02-Get_started_Guide.rst b/docs/testing/developer/design/02-Get_started_Guide.rst
index c8f35ed3..2a9806b5 100644
--- a/docs/testing/developer/design/02-Get_started_Guide.rst
+++ b/docs/testing/developer/design/02-Get_started_Guide.rst
@@ -6,7 +6,7 @@
====================================
Get started as a SampleVNF developer
-===================================
+====================================
.. _SampleVNF: https://wiki.opnfv.org/samplevnf
.. _Gerrit: https://www.gerritcodereview.com/
diff --git a/docs/testing/developer/design/04-SampleVNF_Design.rst b/docs/testing/developer/design/04-SampleVNF_Design.rst
index a3332e27..f813a297 100644
--- a/docs/testing/developer/design/04-SampleVNF_Design.rst
+++ b/docs/testing/developer/design/04-SampleVNF_Design.rst
@@ -348,7 +348,7 @@ transmit takes packets from worker thread in a dedicated ring and sent to the
hardware queue.
Master pipeline
-^^^^^^^^^^^^^^^^
+^^^^^^^^^^^^^^^
This component does not process any packets and should be configured on Core 0,
to save cores for other components which process traffic. The component
is responsible for:
@@ -359,7 +359,7 @@ is responsible for:
4. ARP and ICMP are handled here.
Load Balancer pipeline
-^^^^^^^^^^^^^^^^^^^^^^^
+^^^^^^^^^^^^^^^^^^^^^^
Load balancer is part of the Multi-Threaded CGNAPT release which distributes
the flows to multiple ACL worker threads.
@@ -371,7 +371,7 @@ affinity of flows to worker threads.
The tuple can be modified/configured using the configuration file.
vCGNAPT - Static
-------------------
+----------------
The vCGNAPT component performs translation of private IP & port to public IP &
port at egress side and public IP & port to private IP & port at Ingress side
@@ -383,7 +383,7 @@ match will be taken a default action. The default action may result in drop of
the packets.
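The static translation described above amounts to a table lookup with a default action on miss. A minimal sketch, with entirely hypothetical addresses (the real VNF programs its table from the configuration file):

```shell
# Illustrative sketch only: CGNAPT-style static translation as a table lookup.
lookup() {
  case "$1" in
    192.168.1.10:5000) echo "203.0.113.5:40001" ;;   # private -> public (egress)
    *) echo "default-action" ;;                      # no match: default action (may drop)
  esac
}
egress=$(lookup 192.168.1.10:5000)
miss=$(lookup 192.168.1.99:1234)
echo "egress: $egress, miss: $miss"
```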
vCGNAPT- Dynamic
------------------
+----------------
The vCGNAPT component performs translation of private IP & port to public IP &
port at egress side and public IP & port to private IP & port at Ingress side
@@ -399,11 +399,13 @@ Dynamic vCGNAPT acts as static one too, we can do NAT entries statically.
Static NAT entries port range must not conflict to dynamic NAT port range.
vCGNAPT Static Topology
-----------------------
+-----------------------
-IXIA(Port 0)-->(Port 0)VNF(Port 1)-->(Port 1) IXIA
+IXIA(Port 0)-->(Port 0)VNF(Port 1)-->(Port 1)IXIA
operation:
+
Egress --> The packets sent out from ixia(port 0) will be CGNAPTed to ixia(port 1).
+
Ingress --> The packets sent out from ixia(port 1) will be CGNAPTed to ixia(port 0).
vCGNAPT Dynamic Topology (UDP_REPLAY)
@@ -411,9 +413,11 @@ vCGNAPT Dynamic Topology (UDP_REPLAY)
IXIA(Port 0)-->(Port 0)VNF(Port 1)-->(Port 0)UDP_REPLAY
operation:
+
Egress --> The packets sent out from ixia will be CGNAPTed to L3FWD/L4REPLAY.
+
Ingress --> The L4REPLAY upon reception of packets (Private to Public Network),
- will immediately replay back the traffic to IXIA interface. (Pub -->Priv).
+ will immediately replay back the traffic to IXIA interface. (Pub -->Priv).
How to run L4Replay
-------------------
@@ -431,7 +435,7 @@ vACL - Design
=============
Introduction
---------------
+------------
This application implements Access Control List (ACL). ACL is typically used
for rule based policy enforcement. It restricts access to a destination IP
address/port based on various header fields, such as source IP address/port,
@@ -439,12 +443,12 @@ destination IP address/port and protocol. It is built on top of DPDK and uses
the packet framework infrastructure.
Scope
-------
+-----
This application provides a standalone DPDK based high performance ACL Virtual
Network Function implementation.
High Level Design
-------------------
+-----------------
The ACL Filter performs bulk filtering of incoming packets based on rules in
current ruleset, discarding any packets not permitted by the rules. The
mechanisms needed for building the rule database and performing lookups are
@@ -460,12 +464,12 @@ The Input and Output FIFOs will be implemented using DPDK Ring Buffers.
The DPDK ACL example:
-http://dpdk.org/doc/guides/sample_app_ug/l3_forward_access_ctrl.html
+http://doc.dpdk.org/guides/sample_app_ug/l3_forward.html
#figure-ipv4-acl-rule contains a suitable syntax and parser for ACL rules.
Components of ACL
-------------------
+-----------------
In ACL, each component is constructed as a packet framework. It includes
Master pipeline component, driver, load balancer pipeline component and ACL
worker pipeline component. A pipeline framework is a collection of input ports,
@@ -607,27 +611,33 @@ Edge Router has the following functionalities in Upstream.
Update the packet color in MPLS EXP field in each MPLS header.
Components of vPE
--------------------
+-----------------
The vPE has downstream and upstream pipelines controlled by Master component.
-Edge router processes two different types of traffic through pipelines
-I. Downstream (Core-to-Customer)
- 1. Receives TCP traffic from core
- 2. Routes the packet based on the routing rules
- 3. Performs traffic scheduling based on the traffic profile
- a. Qos scheduling is performed using token bucket algorithm
- SVLAN, CVLAN, DSCP fields are used to determine transmission priority.
- 4. Appends QinQ label in each outgoing packet.
-II. Upstream (Customer-to-Core)
- 1. Receives QinQ labelled TCP packets from Customer
- 2. Removes the QinQ label
- 3. Classifies the flow using QinQ label and apply Qos metering
- a. 1st stage Qos metering is performed with flow ID using trTCM algorithm
- b. 2nd stage Qos metering is performed with flow ID and traffic class using
- trTCM algorithm
- c. traffic class maps to DSCP field in the packet.
- 4. Routes the packet based on the routing rules
- 5. Appends two MPLS labels in each outgoing packet.
+Edge router processes two different types of traffic through pipelines:
+
+I) Downstream (Core-to-Customer)
+
+ 1. Receives TCP traffic from core
+ 2. Routes the packet based on the routing rules
+ 3. Performs traffic scheduling based on the traffic profile
+
+ a. Qos scheduling is performed using token bucket algorithm.
+ SVLAN, CVLAN, DSCP fields are used to determine transmission priority.
+ 4. Appends QinQ label in each outgoing packet.
+
+II) Upstream (Customer-to-Core)
+
+ 1. Receives QinQ labelled TCP packets from Customer
+ 2. Removes the QinQ label
+ 3. Classifies the flow using QinQ label and apply Qos metering
+
+ a. 1st stage Qos metering is performed with flow ID using trTCM algorithm
+ b. 2nd stage Qos metering is performed with flow ID and traffic class using
+ trTCM algorithm
+ c. traffic class maps to DSCP field in the packet.
+ 4. Routes the packet based on the routing rules
+ 5. Appends two MPLS labels in each outgoing packet.
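The token bucket scheduling mentioned in step 3.a of the downstream pipeline can be sketched as follows; all rates and per-packet costs are made-up illustration values, not vPE defaults:

```shell
# Hedged sketch of token-bucket admission (illustrative numbers only).
tokens=4; bucket_max=4; refill=1   # refill per tick, capped at bucket_max
decisions=""
for cost in 4 4 4; do              # token cost per packet
  tokens=$((tokens + refill))
  [ "$tokens" -gt "$bucket_max" ] && tokens=$bucket_max
  if [ "$tokens" -ge "$cost" ]; then
    tokens=$((tokens - cost)); decisions="$decisions send"
  else
    decisions="$decisions drop"    # a real QoS scheduler may delay instead of dropping
  fi
done
echo "decisions:$decisions"
```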
Master Component
^^^^^^^^^^^^^^^^
@@ -635,7 +645,8 @@ Master Component
The Master component is part of all the IP Pipeline applications. This
component does not process any packets and should be configured on Core0,
to save cores for other components which process traffic. The component
-is responsible for
+is responsible for:
+
1. Initializing each component of the Pipeline application in different threads
2. Providing CLI shell for the user
3. Propagating the commands from user to the corresponding components.
@@ -656,7 +667,7 @@ To run the VNF, execute the following:
Prox - Packet pROcessing eXecution engine
-==========================================
+=========================================
Introduction
------------
diff --git a/docs/testing/developer/requirements/03-Requirements.rst b/docs/testing/developer/requirements/03-Requirements.rst
index 25798606..97b1813f 100644
--- a/docs/testing/developer/requirements/03-Requirements.rst
+++ b/docs/testing/developer/requirements/03-Requirements.rst
@@ -13,7 +13,7 @@ Requirements
.. _SampleVNF: https://wiki.opnfv.org/samplevnf
.. _Technical_Briefs: https://wiki.opnfv.org/display/SAM/Technical+Briefs+of+VNFs
-Supported Test setup:
+Supported Test setup
--------------------
The device under test (DUT) consists of a system with the following
diff --git a/docs/testing/user/userguide/01-introduction.rst b/docs/testing/user/userguide/01-introduction.rst
index 74357460..4ddde201 100755..100644
--- a/docs/testing/user/userguide/01-introduction.rst
+++ b/docs/testing/user/userguide/01-introduction.rst
@@ -9,10 +9,6 @@ Introduction
**Welcome to SampleVNF's documentation!**
-.. _Pharos: https://wiki.opnfv.org/display/pharos
-.. _SampleVNF: https://wiki.opnfv.org/display/SAM
-
-SampleVNF_ is an OPNFV Project.
The project's goal was to provide a placeholder for various sample VNF
(Virtual Network Function (:term:`VNF`)) development which includes example
@@ -22,9 +18,7 @@ Today, we only maintain PROX and rapid scripts as part of this project
to perform Network Function Virtualization Infrastructure
(:term:`NFVI`) characterization.
-*SampleVNF* is used in OPNFV for characterization of NFVI on OPNFV infrastructure.
-
-.. seealso:: 'Pharos'_ for information on OPNFV community labs
+*SampleVNF* is used in OPNFV for characterization of NFVI.
About This Document
@@ -41,10 +35,3 @@ This document consists of the following chapters:
* Chapter :doc:`03-installation` provides instructions to install *SampleVNF*.
* Chapter :doc:`04-running_the_test` shows how to run the dataplane testing.
-
-Contact SampleVNF
-=================
-
-Feedback? `Contact us`_
-
-.. _Contact us: opnfv-users@lists.opnfv.org
diff --git a/docs/testing/user/userguide/01-prox_documentation.rst b/docs/testing/user/userguide/01-prox_documentation.rst
index 0fbee344..12c740da 100644
--- a/docs/testing/user/userguide/01-prox_documentation.rst
+++ b/docs/testing/user/userguide/01-prox_documentation.rst
@@ -1,4 +1,4 @@
Testing with PROX
=================
-The PROX documentation can be found in `Prox - Packet pROcessing eXecution engine <https://wiki.opnfv.org/x/AAa9>`_
-How to use PROX with the rapid pyton scripts can be found in `Rapid scripting <https://wiki.opnfv.org/x/OwM-Ag>`_
+The PROX documentation can be found in `Prox - Packet pROcessing eXecution engine <https://wiki-old.opnfv.org/x/AAa9>`_
+How to use PROX with the rapid python scripts can be found in `Rapid scripting <https://wiki-old.opnfv.org/x/OwM-Ag>`_
diff --git a/docs/testing/user/userguide/02-methodology.rst b/docs/testing/user/userguide/02-methodology.rst
index 30efe79f..e5a7d383 100644
--- a/docs/testing/user/userguide/02-methodology.rst
+++ b/docs/testing/user/userguide/02-methodology.rst
@@ -69,6 +69,5 @@ are shown in :ref:`Table1 <table2_1>`.
+-----------------+---------------------------------------------------------------+
.. note:: The description in this OPNFV document is intended as a reference for
- users to understand the scope of the SampleVNF Project and the
- deliverables of the SampleVNF framework. For complete description of
- the methodology, please refer to the ETSI document.
+ users to execute the benchmarking. For complete description of the methodology,
+ please refer to the ETSI document.
diff --git a/docs/testing/user/userguide/03-architecture.rst b/docs/testing/user/userguide/03-architecture.rst
index 08e1b2f2..bdc51d3f 100755..100644
--- a/docs/testing/user/userguide/03-architecture.rst
+++ b/docs/testing/user/userguide/03-architecture.rst
@@ -37,8 +37,8 @@ validating the sample VNFs through OPEN SOURCE VNF approximations and test tools
The VNFs belonging to this project are never meant for field deployment.
All VNF source code in this project is licensed under Apache License Version 2.0.
-Supported deployment:
-----------------------
+Supported deployment
+--------------------
* Bare-Metal - All VNFs can run on a Bare-Metal DUT
* Standalone Virtualization(SV): All VNFs can run on SV like VPP as switch, ovs,
ovs-dpdk, srioc
@@ -47,7 +47,6 @@ Supported deployment:
VNF supported
-------------
- Carrier Grade Network Address Translation (CG-NAT) VNF
- ::
The Carrier Grade Network Address and port Translation (vCG-NAPT) is a
VNF approximation extending the life of the service providers IPv4 network
infrastructure and mitigate IPv4 address exhaustion by using address and
@@ -55,23 +54,19 @@ VNF supported
It also supports the connectivity between the IPv6 access network to
IPv4 data network using the IPv6 to IPv4 address translation and vice versa.
- Firewall (vFW) VNF
- ::
The Virtual Firewall (vFW) is a VNF approximation serving as a stateful
L3/L4 packet filter with connection tracking enabled for TCP, UDP and ICMP.
The VNF could be a part of Network Services (industry use-cases) deployed
to secure the enterprise network from un-trusted network.
- Access Control List (vACL) VNF
- ::
The vACL vNF is implemented as a DPDK application using VNF Infrastructure
Library (VIL). The VIL implements common VNF internal, optimized for
Intel Architecture functions like load balancing between cores, IPv4/IPv6
stack features, and interface to NFV infrastructure like OVS or SRIOV.
- UDP_Replay
- ::
The UDP Replay is implemented as a DPDK application using VNF Infrastructure
Library (VIL). It performs as a reflector of all the traffic on a given port.
- Prox - Packet pROcessing eXecution engine.
- ::
Packet pROcessing eXecution Engine (PROX) which is a DPDK application.
PROX can do operations on packets in a highly configurable manner.
The PROX application is also displaying performance statistics that can
@@ -142,14 +137,15 @@ The following features were verified by SampleVNF test cases:
Test Framework
--------------
-.. _Yardstick_NSB: http://artifacts.opnfv.org/yardstick/docs/testing_user_userguide/index.html#document-13-nsb-overview
+.. _Yardstick_NSB: http://artifacts.opnfv.org/yardstick/docs/testing_user_userguide/index.html#document-11-nsb-overview
+.. _ETSI GS NFV-TST 001: https://portal.etsi.org/webapp/workprogram/Report_WorkItem.asp?WKI_ID=46009
SampleVNF Test Infrastructure (NSB (Yardstick_NSB_)) in yardstick helps to facilitate
consistent/repeatable methodologies for characterizing & validating the
sample VNFs (:term:`VNF`) through OPEN SOURCE VNF approximations.
-Network Service Benchmarking in yardstick framework follows ETSI GS NFV-TST001_
+Network Service Benchmarking in yardstick framework follows `ETSI GS NFV-TST 001`_
to verify/characterize both :term:`NFVI` & :term:`VNF`
For more information, refer to Yardstick_NSB_
diff --git a/docs/testing/user/userguide/03-installation.rst b/docs/testing/user/userguide/03-installation.rst
index 661fc7ad..4407b276 100644
--- a/docs/testing/user/userguide/03-installation.rst
+++ b/docs/testing/user/userguide/03-installation.rst
@@ -5,6 +5,8 @@
SampleVNF Installation
======================
+.. _RapidScripting: https://wiki.opnfv.org/display/SAM/Rapid+scripting
+.. _XtestingDocumentation: https://xtesting.readthedocs.io/en/latest/
Abstract
--------
@@ -28,15 +30,17 @@ Prerequisites
-------------
Supported Test setup
-^^^^^^^^^^^^^^^^^^^^^
+^^^^^^^^^^^^^^^^^^^^
The device under test (DUT) is an NFVI instance on which we can deploy PROX instances.
-A PROX instance is a machine that
+A PROX instance is a machine that:
+
* has a management interface that can be reached from the test container
* has one or more data plane interfaces on a dataplane network.
* can be a container, a VM or a bare metal machine. We just need to be able to ssh into the
PROX machine from the test container.
* is optimized for data plane traffic.
* will measure the throughput that is offered through its dataplane interface(s)
+
There are no requirements on the NFVI instance itself. Of course, the measured throughput will
depend heavily on the NFVI characteristics.
In this release, we are supporting an automated deployment of the PROX instance on an NFVI that
@@ -72,13 +76,13 @@ Installation Steps
Step 1: Identify a machine on which you will install the containers
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-This machine will needs enough resource to install the Xtesting framework and needs to be enabled
+This machine will need enough resources to install the Xtesting framework and needs to be enabled
for containers.
From a network point of view, it will need to have access to the PROX instances: That means it will need
to be able to ssh into these machines and that the network also needs to allow for TCP port 8474 traffic.
When using the automation to create the VM through the Heat Stack API, this machine also needs to be able
-execute the OpenStack API. Alternatively, the creation of the VMs can be executed on another machine, but
+to execute the OpenStack API. Alternatively, the creation of the VMs can be executed on another machine, but
this will involve some manual file copying.
Step 2: Clone the samplevnf project on that machine
@@ -103,16 +107,16 @@ First, a PROX qcow2 image needs to be downloaded.
wget http://artifacts.opnfv.org/samplevnf/jerma/prox_jerma.qcow2
-This image can also be created mannualy by following instructions in https://wiki.opnfv.org/display/SAM/Rapid+scripting,
+This image can also be created manually by following the instructions in RapidScripting_,
in the section "Creating an image"
Now upload this image to Openstack:
.. code-block:: console
- openstack image` create --disk-format qcow2 --container-format bare --file prox_jerma.qcow2 rapidVM
+ openstack image create --disk-format qcow2 --container-format bare --file prox_jerma.qcow2 rapidVM
Now run createrapid.sh to create the stack. This process takes the config_file as input. Details can be found in
-https://wiki.opnfv.org/display/SAM/Rapid+scripting, in the section "Deploying the VMs"
+RapidScripting_, in the section "Deploying the VMs"
.. code-block:: console
@@ -122,7 +126,7 @@ At the end of this step, VMs should be running and the rapid.env and rapid_key.p
Step 4: Deploy your own Xtesting toolchain
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-Install Xtesting as described in https://xtesting.readthedocs.io/en/latest/
+Install Xtesting as described in XtestingDocumentation_.
First goto the xtesting directory in samplevnf/VNFs/DPPD-PROX/helper-scripts/rapid/xtesting (this was cloned
in step 2)
@@ -142,8 +146,7 @@ Go to the directory samplevnf/VNFs/DPPD-PROX/helper-scripts/rapid/xtesting
While building this container, some files will be copied into the container image. Two of these files
are generated by Step 3: rapid.env and rapid_key.pem and reside in the samplevnf/VNFs/DPPD-PROX/helper-scripts/rapid/.
Please copy them into the xtesting directory.
-The 3rd file that will be copied is testcases.yaml. You might want to modify this file according to the testing you would
-like to execute.
+The 3rd file that will be copied is testcases.yaml.
.. code-block:: console
diff --git a/docs/testing/user/userguide/04-running_the_test.rst b/docs/testing/user/userguide/04-running_the_test.rst
index 35404c89..3d3a1e6c 100644
--- a/docs/testing/user/userguide/04-running_the_test.rst
+++ b/docs/testing/user/userguide/04-running_the_test.rst
@@ -7,13 +7,13 @@
Running the test
================
.. _NFV-TST009: https://docbox.etsi.org/ISG/NFV/open/Publications_pdf/Specs-Reports/NFV-TST%20009v3.2.1%20-%20GS%20-%20NFVI_Benchmarks.pdf
-.. _TST009_Throughput_64B_64F.test: https://github.com/opnfv/samplevnf/blob/master/VNFs/DPPD-PROX/helper-scripts/rapid/TST009_Throughput_64B_64F.test
+.. _TST009_Throughput_64B_64F.test: https://github.com/opnfv/samplevnf/blob/master/VNFs/DPPD-PROX/helper-scripts/rapid/tests/TST009_Throughput_64B_64F.test
.. _rapid_location: https://github.com/opnfv/samplevnf/blob/master/VNFs/DPPD-PROX/helper-scripts/rapid/
Overview
--------
A default test will be run automatically when you launch the testing. The
-details and definition of that test is defined in file
+details and definition of that test are defined in file
TST009_Throughput_64B_64F.test_.
We will discuss the sections of such a test file and how this can be changed to
@@ -21,6 +21,10 @@ accomodate the testing you want to execute. This will be done by creating your
own test file and making sure it becomes part of your testcases.yaml, as will
be shown below.
+As the name of the default test file suggests, the test will find the
+throughput, latency and packet loss according to NFV-TST009_, for packets that
+are 64 bytes long and for 64 different flows.
+
Test File Description
---------------------
The test file has multiple sections. The first section is a generic section
@@ -82,20 +86,30 @@ the machine. This will be the name of the PROX instance and will be shown in
case you run the PROX UI. In this automated testing, this will not be
visible.
-The PROX config file is used by the PROX program and defines what PROX will be
+The config_file parameter defines which PROX config file is used by the PROX
+program and what PROX will be
doing. For a generator, this will typically be gen.cfg. Multiple cfg files
-exist in the rapid_location_. dest_vm is used by a generator to find out to
-which VM he needs to send the packets. Int e example below, the packets will be
-sent to TestM2. gencores is a list of cores to be used for the generator tasks.
+exist in the rapid_location_.
+
+The dest_vm parameter is used by a generator to find out to
+which VM it needs to send the packets. In the example below, the packets will be
+sent to TestM2.
+
+The gencores parameter defines a list of cores to be used for the generator tasks.
Note that if you specify more than 1 core, the interface will need to support as
-many tx queues as there are generator cores. The latcores field specifies a
+many tx queues as there are generator cores.
+
+The latcores parameter specifies a
list of cores to be used by the latency measurement tasks. You need as many rx
-queueus on the interface as the number of latcores. The default value for the
+queues on the interface as specified in the latcores parameter.
+
+The default value for the
bucket_size_exp parameter is 12. It is also its minimum value. In case most of
the latency measurements in the histogram are falling in the last bucket, this
number needs to be increased. Every time you increase this number by 1, the
bucket size for the latency histogram is multiplied by 2. There are 128 buckets
in the histogram.
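The doubling behaviour of bucket_size_exp can be illustrated with a small calculation; the absolute time unit per bucket is PROX-internal, so only the relative scaling is shown:

```shell
# Relative histogram scaling for bucket_size_exp (minimum/default is 12).
buckets=128
for exp in 12 13 14; do
  scale=$((1 << (exp - 12)))   # each +1 doubles every bucket, and thus the total range
  echo "bucket_size_exp=$exp: bucket size x$scale, total range x$scale ($buckets buckets)"
done
```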
+
cores is a parameter that will be used by non-generator configurations that
don't need a distinction between generator and latency cores (e.g. swap.cfg).
@@ -110,6 +124,7 @@ not something to start with.
gencores = [1]
latcores = [3]
#bucket_size_exp = 12
+
testy
^^^^^
In the testy sections, where y denotes the index of the test, the test that will
@@ -124,10 +139,12 @@ rapid_location_.
The pass_threshold parameter defines the success criterium for the test. When
this test uses multiple combinations of packet size and flows, all combinations
-must be meeting the same threshold. The threshold is expressed in Mpps.
+must meet the same threshold. If one of the combinations fails, the test
+will be reported as failed.
+The threshold is expressed in Mpps.
The imixs parameter defines the packet sizes that will be used. Each element in
-the imix list will result in a separate test. Each element is on its turn a
+the imixs list will result in a separate test. Each element is on its turn a
list of packet sizes which will be used during one test execution. If you only
want to test 1 imix size, define imixs with only one element. For each element in
the imixs list, the generator will iterate over the packet lengths and send them
@@ -139,40 +156,42 @@ needing results for more sizes, one should create a specific test file per size
and launch the different tests using Xtesting.
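Each imix element can be summarized by its average packet size; for example, for the element [128, 256, 64, 64, 128] used in the sample configuration further below:

```shell
# Average wire size of one imix element: the generator cycles through these lengths.
sizes="128 256 64 64 128"
sum=0; n=0
for s in $sizes; do sum=$((sum + s)); n=$((n + 1)); done
avg=$((sum / n))
echo "average packet size: $avg bytes over $n lengths"
```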
The flows parameter is a list of flow sizes. For each flow size, a test will be
-run with the specified amount of flows. The flow size needs to be powers of 2,
+run with the specified amount of flows. The flow size needs to be a power of 2,
max 2^30. If not a power of 2, we will use the lowest power of 2 that is larger
than the requested number of flows. e.g. 9 will result in 16 flows.
Same remark as for the imixs parameter: we will only use one element in the
-flows list. When more flows need to be tested, create a differnt test file and
+flows list. When more flows need to be tested, create a different test file and
launch it using Xtesting.
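The power-of-2 rounding rule above (e.g. 9 flows becomes 16) can be sketched as:

```shell
# Round a requested flow count up to the next power of 2 (max 2^30).
requested=9
flows=1
while [ "$flows" -lt "$requested" ]; do
  flows=$((flows * 2))
done
echo "requested $requested -> using $flows flows"
```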
-drop_rate_threshold specifies the ratio of packets than can be dropped and still
-consider the test run as succesful. Note that a value of 0 means a zero packet
+The drop_rate_threshold parameter specifies the maximum ratio of packets that
+can be dropped while still considering
+the test run as successful. Note that a value of 0 means an absolute zero packet
loss: even if we lose 1 packet during a certain step in a test run, it will be
marked as failed.
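The pass/fail decision for drop_rate_threshold can be sketched as below; the packet counts are invented for illustration and show the zero-tolerance case:

```shell
# Zero-tolerance example: threshold 0 fails the step on a single lost packet.
verdict=$(awk -v sent=1000000 -v recv=999999 -v thr=0 'BEGIN {
  drop_ratio = (sent - recv) / sent
  v = (drop_ratio <= thr) ? "pass" : "fail"
  print v
}')
echo "verdict: $verdict"
```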
-lat_avg_threshold, lat_perc_threshold, lat_max_threshold are threshols to define
+The lat_avg_threshold, lat_perc_threshold, lat_max_threshold parameters
+are thresholds to define
the maximal acceptable round trip latency to mark the test step as successful.
You can set this threshold for the average, the percentile and the maximum
-latency. Which percentile is being used is define in the TestParameters section.
+latency. Which percentile is being used is defined in the TestParameters section.
All these thresholds are expressed in micro-seconds. You can also put the value
to inf, which means the threshold will never be reached and hence the threshold
value is not being used to define if the run is successful or not.
-MAXr, MAXz, MAXFramesPerSecondAllIngress and StepSize are defined in
+The MAXr, MAXz, MAXFramesPerSecondAllIngress and StepSize parameters are defined in
NFV-TST009_ and are used to control the binary search algorithm.
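A much simplified sketch of such a binary rate search is shown below; the pass/fail oracle and all numbers are invented, and the real NFV-TST 009 algorithm additionally bounds the number of iterations with MAXr and MAXz:

```shell
# Simplified binary search for the highest sustainable rate (illustration only).
lo=0; hi=10000          # search window, Mb/s
StepSize=100            # stop when the window is narrower than StepSize
passes() { [ "$1" -le 7300 ]; }   # stand-in for a real trial run at rate $1
while [ $((hi - lo)) -gt "$StepSize" ]; do
  mid=$(((lo + hi) / 2))
  if passes "$mid"; then lo=$mid; else hi=$mid; fi
done
echo "highest passing rate found: ${lo} Mb/s"
```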
-ramp_step is a variable that controls the ramping of the generated traffic. When
+The ramp_step variable controls the ramping of the generated traffic. When
not specified, the requested traffic for each step in the testing will be
applied immediately. If specified, the generator will slowly go to the requested
speed by increasing the traffic each second with the value specified in this
-parameter till it reached the requested speed. This parameter is expressed in
+parameter till it reaches the requested speed. This parameter is expressed in
100Mb/s.
.. code-block:: console
pass_threshold=0.001
- imixs=[[64]]
+ imixs=[[128, 256, 64, 64, 128]]
flows=[64]
drop_rate_threshold = 0
lat_avg_threshold = inf
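The ramp-up behaviour of ramp_step can be sketched as follows; the target speed and the step are illustration values, and the unit is 100 Mb/s as stated above:

```shell
# How long the generator ramps before reaching the requested speed.
target=100     # requested speed in units of 100 Mb/s (i.e. 10 Gb/s)
ramp_step=10   # add 1 Gb/s of traffic every second
speed=0; seconds=0
while [ "$speed" -lt "$target" ]; do
  speed=$((speed + ramp_step))
  seconds=$((seconds + 1))
done
echo "requested speed reached after ${seconds}s"
```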
diff --git a/docs/testing/user/userguide/05-How_to_run_SampleVNFs.rst b/docs/testing/user/userguide/05-How_to_run_SampleVNFs.rst
index 7ba25fe1..28da0ebd 100644
--- a/docs/testing/user/userguide/05-How_to_run_SampleVNFs.rst
+++ b/docs/testing/user/userguide/05-How_to_run_SampleVNFs.rst
@@ -17,6 +17,7 @@ The device under test (DUT) consists of a system following;
* Specific Intel Network Interface Cards (NICs)
* BIOS settings noting those that updated from the basic settings
* DPDK build configuration settings, and commands used for tests
+
Connected to the DUT is an IXIA* or a software traffic generator like pktgen or TRex,
a simulation platform used to generate packet traffic to the DUT ports and to
determine the throughput/latency at the tester side.
@@ -103,17 +104,16 @@ The connectivity could be
(TG_2(UDP_Replay) reflects all the traffic on the given port)
* Bare-Metal
- Refer: http://fast.dpdk.org/doc/pdf-guides/ to setup the DUT for VNF to run
+ Refer: http://fast.dpdk.org/doc/pdf-guides/ to setup the DUT for VNF to run
* Standalone Virtualization - PHY-VM-PHY
+
* SRIOV
- Refer below link to setup sriov
- https://software.intel.com/en-us/articles/using-sr-iov-to-share-an-ethernet-port-among-multiple-vms
+ https://software.intel.com/en-us/articles/using-sr-iov-to-share-an-ethernet-port-among-multiple-vms
* OVS_DPDK
- Refer below link to setup ovs-dpdk
- http://docs.openvswitch.org/en/latest/intro/install/general/
- http://docs.openvswitch.org/en/latest/intro/install/dpdk/
+ http://docs.openvswitch.org/en/latest/intro/install/general/
+ http://docs.openvswitch.org/en/latest/intro/install/dpdk/
* Openstack
Use any OPNFV installer to deploy the openstack.
@@ -132,19 +132,21 @@ Step 0: Preparing hardware connection
Step 1: Setting up Traffic generator (TRex)
TRex Software preparations
- **************************
* Install the OS (Bare metal Linux, not VM!)
* Obtain the latest TRex package: wget https://trex-tgn.cisco.com/trex/release/latest
* Untar the package: tar -xzf latest
* Change dir to unzipped TRex
* Create config file using command: sudo python dpdk_setup_ports.py -i
+
On Ubuntu 16, python3 is needed
+
See paragraph config creation for detailed step-by-step
+
(Refer: https://trex-tgn.cisco.com/trex/doc/trex_stateless_bench.html)
Build SampleVNFs
------------------
+----------------
Step 2: Procedure to build SampleVNFs
@@ -487,7 +489,7 @@ step 4: Run Test using traffic geneator
UDP_Replay - How to run
-----------------------------------------
+-----------------------
Step 3: Bind the datapath ports to DPDK
@@ -532,7 +534,7 @@ step 4: Run Test using traffic geneator
For more details refer: https://trex-tgn.cisco.com/trex/doc/trex_stateless_bench.html
PROX - How to run
-------------------
+-----------------
Description
^^^^^^^^^^^
@@ -654,7 +656,7 @@ PROX COMMANDS AND SCREENS
+----------------------------------------------+---------------------------------------------------------------------------+----------------------------+
| version | Show version | |
+----------------------------------------------+---------------------------------------------------------------------------+----------------------------+
- | port_stats <port id> | Print rate for no_mbufs, ierrors, rx_bytes, tx_bytes, rx_pkts, | |
+ | port_stats <port id> | Print rate for no_mbufs, ierrors, rx_bytes, tx_bytes, rx_pkts, | |
| | tx_pkts and totals for RX, TX, no_mbufs ierrors for port <port id> | |
+----------------------------------------------+---------------------------------------------------------------------------+----------------------------+
@@ -941,7 +943,7 @@ PROX Compiation installation
* cd samplevnf
* export RTE_SDK=`pwd`/dpdk
* export RTE_TARGET=x86_64-native-linuxapp-gcc
-* git clone http://dpdk.org/git/dpdk
+* git clone git://dpdk.org/dpdk
* cd dpdk
* git checkout v17.05
* make install T=$RTE_TARGET
diff --git a/docs/testing/user/userguide/06-How_to_use_REST_api.rst b/docs/testing/user/userguide/06-How_to_use_REST_api.rst
index b8c0cbea..ba768d78 100644
--- a/docs/testing/user/userguide/06-How_to_use_REST_api.rst
+++ b/docs/testing/user/userguide/06-How_to_use_REST_api.rst
@@ -3,12 +3,12 @@
.. http://creativecommons.org/licenses/by/4.0
.. (c) opnfv, national center of scientific research "demokritos" and others.
-========================================================
+========
REST API
-========================================================
+========
Introduction
----------------
+------------
As the internet industry progresses, creating a REST API becomes more concrete
with emerging best practices. RESTful web services don’t follow a prescribed
standard except for the protocol that is used, which is HTTP; it is important
@@ -26,7 +26,7 @@ Here are important points to be considered:
always same no matter how many times these operations are invoked.
* PUT and POST operations are nearly the same, with the difference lying
only in the result: a PUT operation is idempotent, while a POST
- operation can cause different result.
+ operation can produce a different result on each invocation.
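The idempotency point above can be illustrated with a toy example (no real HTTP here; `resource` stands in for server-side state, and `put`/`post` stand in for the corresponding requests against one resource):

```shell
#!/bin/sh
# Toy illustration of PUT vs POST idempotency.
resource=0
put()  { resource=5; }                   # PUT: set the resource to a value
post() { resource=$((resource + 5)); }   # POST: create/append, a new effect each call

put; put; put
echo "after three PUTs:  resource=$resource"   # same outcome however often invoked

resource=0
post; post; post
echo "after three POSTs: resource=$resource"   # outcome depends on call count
```

Repeating the PUT leaves the resource in the same state; repeating the POST keeps changing it, which is exactly why POST is not idempotent.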
REST API in SampleVNF
@@ -45,7 +45,7 @@ REST api on VNF’s will help adapting with the new automation techniques
being adapted in yardstick.
Web server integration with VNF’s
-----------------------------------
+---------------------------------
In order to implement REST APIs in a VNF, one of the first tasks is to
identify a simple web server that can be integrated with the VNFs.
@@ -150,7 +150,7 @@ API Usage
---------
Run time Usage
-^^^^^^^^^^^^^^
+==============
An application (say vFW) with REST API support is run as follows
with just PORT MASK as input. The following environment variables
@@ -182,6 +182,7 @@ samplevnf directory).
2. Check the link IPs using the REST API (vCGNAPT/vACL/vFW)
::
+
e.g curl <IP>/vnf/config/link
This would indicate the number of links enabled. You should enable all the links
@@ -194,6 +195,7 @@ samplevnf directory).
3. Now that links are enabled, we can configure IPs using the link method as follows (vCGNAPT/vACL/vFW)
::
+
e.g curl -X POST -H "Content-Type:application/json" -d '{"ipv4":"<IP to be configured>","depth":"24"}'
http://<IP>/vnf/config/link/0
curl -X POST -H "Content-Type:application/json" -d '{"ipv4":"IP to be configured","depth":"24"}'
@@ -207,6 +209,7 @@ samplevnf directory).
4. To add ARP entries, we can use this method (vCGNAPT/vACL/vFW)
::
+
/vnf/config/arp
e.g
@@ -220,15 +223,17 @@ samplevnf directory).
5. To add route entries, we can use this method (vCGNAPT/vACL/vFW)
::
+
/vnf/config/route
e.g curl -X POST -H "Content-Type:application/json" -d '{"type":"net", "depth":"8", "nhipv4":"202.16.100.20",
- "portid":"0"}' http://10.223.166.240/vnf/config/route
+ "portid":"0"}' http://10.223.166.240/vnf/config/route
curl -X POST -H "Content-Type:application/json" -d '{"type":"net", "depth":"8", "nhipv4":"172.16.100.20",
"portid":"1"}' http://10.223.166.240/vnf/config/route
5. In order to load the rules, a script file needs to be posted. (vACL/vFW)
::
+
/vnf/config/rules/load
Typical example for loading a script file is shown below
@@ -239,12 +244,14 @@ samplevnf directory).
6. The following REST APIs allow runtime configuration through a script (vCGNAPT only)
::
+
/vnf/config/rules/clear
/vnf/config/nat
/vnf/config/nat/load
7. For debugging purposes, the following REST APIs can be used as described above. (vCGNAPT/vACL/vFW)
::
+
/vnf/dbg
e.g curl http://10.223.166.240/vnf/config/dbg
@@ -258,10 +265,12 @@ samplevnf directory).
8. For stats we can use the following method (vCGNAPT/vACL/vFW)
::
+
/vnf/stats
e.g curl <IP>/vnf/stats
9. For quitting the application (vCGNAPT/vACL/vFW)
::
+
/vnf/quit
e.g curl <IP>/vnf/quit
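A test run typically ends by sampling the stats endpoint a few times and then shutting the VNF down. The sketch below follows the same dry-run convention as before (`VNF` is the example address from this guide and `run` only prints the curl commands, so nothing is actually contacted):

```shell
#!/bin/sh
# Dry-run sketch: sample /vnf/stats a few times, then ask the VNF to quit.
VNF=10.223.166.240
run() { printf '+ %s\n' "$*"; }

i=1
while [ "$i" -le 3 ]; do
    run curl http://$VNF/vnf/stats
    i=$((i + 1))
done
run curl http://$VNF/vnf/quit
```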
diff --git a/docs/testing/user/userguide/07-Config_files.rst b/docs/testing/user/userguide/07-Config_files.rst
index d5564e8d..f96462e1 100644
--- a/docs/testing/user/userguide/07-Config_files.rst
+++ b/docs/testing/user/userguide/07-Config_files.rst
@@ -380,7 +380,7 @@ This configuration doesn't require LOADB and TXRX pipelines
vACL Config files
-----------------
+-----------------
The reference configuration files explained here are for Software and Hardware
loadbalancing with IPv4 traffic type and single port pair.
diff --git a/docs/testing/user/userguide/index.rst b/docs/testing/user/userguide/index.rst
index 64b01b88..5cc2c5e1 100644
--- a/docs/testing/user/userguide/index.rst
+++ b/docs/testing/user/userguide/index.rst
@@ -10,7 +10,6 @@ SampleVNF User Guide
.. toctree::
:maxdepth: 4
- :numbered:
01-introduction.rst
02-methodology.rst
diff --git a/docs/testing/user/userguide/references.rst b/docs/testing/user/userguide/references.rst
index 30f6e604..f00a872c 100644
--- a/docs/testing/user/userguide/references.rst
+++ b/docs/testing/user/userguide/references.rst
@@ -11,8 +11,8 @@ References
OPNFV
=====
-* Yardstick wiki: https://wiki.opnfv.org/yardstick
-* SampleVNF wiki: https://wiki.opnfv.org/samplevnf
+* Yardstick wiki: https://wiki-old.opnfv.org/display/yardstick
+* SampleVNF wiki: https://wiki-old.opnfv.org/display/SAM
References used in Test Cases
=============================
@@ -22,7 +22,7 @@ References used in Test Cases
* DPDK: http://dpdk.org
* DPDK supported NICs: http://dpdk.org/doc/nics
* fdisk: http://www.tldp.org/HOWTO/Partition/fdisk_partitioning.html
-* fio: http://www.bluestop.org/fio/HOWTO.txt
+* fio: https://github.com/axboe/fio
* free: http://manpages.ubuntu.com/manpages/trusty/en/man1/free.1.html
* iperf3: https://iperf.fr/
* Lmbench man-pages: http://manpages.ubuntu.com/manpages/trusty/lat_mem_rd.8.html