Diffstat (limited to 'docs/testing/user/userguide/05-How_to_run_SampleVNFs.rst')
 docs/testing/user/userguide/05-How_to_run_SampleVNFs.rst | 239
 1 file changed, 131 insertions(+), 108 deletions(-)
diff --git a/docs/testing/user/userguide/05-How_to_run_SampleVNFs.rst b/docs/testing/user/userguide/05-How_to_run_SampleVNFs.rst
index b5880645..29c76e69 100644
--- a/docs/testing/user/userguide/05-How_to_run_SampleVNFs.rst
+++ b/docs/testing/user/userguide/05-How_to_run_SampleVNFs.rst
@@ -9,8 +9,8 @@ SampleVNF - How to run
Prerequisites
-------------
-Supported Test setup:
---------------------
+Supported Test setup
+^^^^^^^^^^^^^^^^^^^^
The device under test (DUT) consists of a system with the following:
* A single or dual processor and PCH chip, except for System on Chip (SoC) cases
* DRAM memory size and frequency (normally single DIMM per channel)
@@ -21,18 +21,18 @@ Connected to the DUT is an IXIA* or Software Traffic generator like pktgen or TR
simulation platform to generate packet traffic to the DUT ports and
determine the throughput/latency at the tester side.
-Below are the supported/tested (:term `VNF`) deployment type.
+Below are the supported/tested (:term:`VNF`) deployment types.
.. image:: images/deploy_type.png
:width: 800px
:alt: SampleVNF supported topology
Hardware & Software Ingredients
--------------------------------
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
SUT requirements:
-^^^^^^^^^^^^^^^^
-::
+
+
+-----------+------------------+
| Item | Description |
+-----------+------------------+
@@ -48,8 +48,8 @@ SUT requirements:
+-----------+------------------+
Boot and BIOS settings:
-^^^^^^^^^^^^^^^^^^^^^^
-::
+
+
+------------------+---------------------------------------------------+
| Boot settings | default_hugepagesz=1G hugepagesz=1G hugepages=16 |
| | hugepagesz=2M hugepages=2048 isolcpus=1-11,22-33 |
@@ -73,7 +73,8 @@ The ethernet cables should be connected between traffic generator and the VNF se
SRIOV or OVS) setup based on the test profile.
The connectivity could be
-1) Single port pair : One pair ports used for traffic
+
+1) Single port pair : One pair of ports used for traffic
::
e.g. Single port pair link0 and link1 of VNF are used
TG:port 0 <------> VNF:Port 0
@@ -84,7 +85,7 @@ The connectivity could be
e.g. Two port pair link 0, link1, link2 and link3 of VNF are used
TG:port 0 <------> VNF:Port 0
TG:port 1 <------> VNF:Port 1
- TG:port 2 <------> VNF:Port 2
+ TG:port 2 <------> VNF:Port 2
TG:port 3 <------> VNF:Port 3
For correlated traffic, use the below configuration
@@ -92,7 +93,7 @@ The connectivity could be
VNF:Port 1 <------> TG_2:port 0 (UDP Replay)
(TG_2(UDP_Replay) reflects all the traffic on the given port)
* Bare-Metal
- Refer: http://fast.dpdk.org/doc/pdf-guides/ to setup the DUT for VNF to run
+ Refer: http://fast.dpdk.org/doc/pdf-guides/ to set up the DUT for the VNF to run
* Standalone Virtualization - PHY-VM-PHY
* SRIOV
@@ -111,12 +112,12 @@ Setup Traffic generator
-----------------------
Step 0: Preparing hardware connection
- ::
+
Connect Traffic generator and VNF system back to back as shown in previous section
TRex port 0 ↔ (VNF Port 0) ↔ (VNF Port 1) ↔ TRex port 1
Step 1: Setting up Traffic generator (TRex)
- ::
+
TRex Software preparations
^^^^^^^^^^^^^^^^^^^^^^^^^^
* Install the OS (Bare metal Linux, not VM!)
@@ -131,13 +132,19 @@ Step 1: Setting up Traffic generator (TRex)
Build SampleVNFs
-----------------
+
Step 2: Procedure to build SampleVNFs
- ::
+
a) Clone sampleVNF project repository - git clone https://git.opnfv.org/samplevnf
b) Build VNFs
- Auto Build
- ^^^^^^^^^^
- * Interactive options:
+
+Auto Build
+^^^^^^^^^^
+
+* Interactive options:
+
+::
+
./tools/vnf_build.sh -i
Follow the steps in the screen from option [1] -> [10] and select option [9] to build the VNFs.
It will automatically download the selected DPDK version and any required patches, set up everything and build the VNFs.
@@ -156,18 +163,27 @@ Step 2: Procedure to build SampleVNFs
[5] Download DPDK zip
[6] Build and Install DPDK
[7] Setup hugepages
+ [8] Download civetweb
----------------------------------------------------------
Step 3: Build VNFs
----------------------------------------------------------
- [8] Build all VNFs (vACL, vCGNAPT, vFW, UDP_Replay, DPPD-PROX)
+ [9] Build all VNFs (vACL, vCGNAPT, vFW, UDP_Replay, DPPD-PROX)
+
+ [10] Exit Script
+
+
+* Non-Interactive options:
+
+::
- [9] Exit Script
- * non-Interactive options:
./tools/vnf_build.sh -s -d=<dpdk version eg 17.02>
- Manual Build
- ^^^^^^^^^^^^
+Manual Build
+^^^^^^^^^^^^
+
+::
+
1) Download DPDK supported version from dpdk.org
http://dpdk.org/browse/dpdk/snapshot/dpdk-$DPDK_RTE_VER.zip
unzip dpdk-$DPDK_RTE_VER.zip and apply dpdk patches only in case of 16.04 (Not required for other DPDK versions)
@@ -175,22 +191,30 @@ Step 2: Procedure to build SampleVNFs
make config T=x86_64-native-linuxapp-gcc O=x86_64-native-linuxapp-gcc
cd x86_64-native-linuxapp-gcc
make
- 2) Setup huge pages
+
+ 2) Download civetweb 1.9 version from the following link
+ https://sourceforge.net/projects/civetweb/files/1.9/CivetWeb_V1.9.zip
+ unzip CivetWeb_V1.9.zip
+ mv civetweb-master civetweb
+ cd civetweb
+ make lib
+
+ 3) Setup huge pages
For 1G/2M hugepage sizes, for example 1G pages, the size must be
specified explicitly and can also be optionally set as the
default hugepage size for the system. For example, to reserve 8G
of hugepage memory in the form of eight 1G pages, the following
options should be passed to the kernel: default_hugepagesz=1G
hugepagesz=1G hugepages=8 hugepagesz=2M hugepages=2048
- 3) Add this to Go to /etc/default/grub configuration file.
+ 4) Add this to the /etc/default/grub configuration file.
Append “default_hugepagesz=1G hugepagesz=1G hugepages=8 hugepagesz=2M hugepages=2048”
to the GRUB_CMDLINE_LINUX entry.
- 4) Setup Environment Variable
+ 5) Setup Environment Variable
export RTE_SDK=<samplevnf>/dpdk
export RTE_TARGET=x86_64-native-linuxapp-gcc
export VNF_CORE=<samplevnf>
or using ./tools/setenv.sh
- 5) Build VNFs
+ 6) Build VNFs
cd <samplevnf>
make
or to build individual VNFs
@@ -200,18 +224,27 @@ Step 2: Procedure to build SampleVNFs
The vFW executable will be created at the following location
<samplevnf>/VNFs/vFW/build/vFW
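The hugepage and GRUB steps of the manual build above (steps 3 and 4) can be sketched as below. The arithmetic shows what the example kernel options actually reserve (8 x 1G pages plus 2048 x 2M pages); the GRUB_CMDLINE_LINUX edit is demonstrated on a scratch file. The /tmp path and the sed expression are our illustration, not part of the SampleVNF tooling; on a real system edit /etc/default/grub, regenerate the grub config and reboot.

```shell
# Kernel options from manual-build step 3
opts='default_hugepagesz=1G hugepagesz=1G hugepages=8 hugepagesz=2M hugepages=2048'

# 8 * 1 GB + 2048 * 2 MB = 12 GB of hugepage memory reserved at boot
total_gb=$(( 8 * 1 + 2048 * 2 / 1024 ))
echo "hugepage memory reserved: ${total_gb} GB"

# Demonstrate the GRUB_CMDLINE_LINUX edit on a scratch copy
printf 'GRUB_CMDLINE_LINUX=""\n' > /tmp/grub.example
sed -i "s|^GRUB_CMDLINE_LINUX=\"|&${opts}|" /tmp/grub.example
grep hugepages /tmp/grub.example
```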
+
Virtual Firewall - How to run
-----------------------------
Step 3: Bind the datapath ports to DPDK
+
+a) Bind ports to DPDK
+
::
- a) Bind ports to DPDK
+
For DPDK versions 17.xx
1) cd <samplevnf>/dpdk
2) ./usertools/dpdk-devbind.py --status <--- List the network device
3) ./usertools/dpdk-devbind.py -b igb_uio <PCI Port 0> <PCI Port 1>
- .. _More details: http://dpdk.org/doc/guides-17.05/linux_gsg/build_dpdk.html#binding-and-unbinding-network-ports-to-from-the-kernel-modules
- b) Prepare script to enalble VNF to route the packets
+ .. _More details: http://dpdk.org/doc/guides-17.05/linux_gsg/build_dpdk.html#binding-and-unbinding-network-ports-to-from-the-kernel-modules
+
+
+b) Prepare script to enable the VNF to route the packets
+
+ ::
+
cd <samplevnf>/VNFs/vFW/config
Open -> VFW_SWLB_SinglePortPair_script.tc. Replace the bold items based on your setting.
@@ -245,12 +278,20 @@ Step 3: Bind the datapath ports to DPDK
p vfw add 2 <traffic generator port 0 IP eg 202.16.100.20> 8 <traffic generator port 1 IP eg 172.16.40.20> 8 0 65535 0 65535 0 0 1
p vfw add 2 <traffic generator port 1 IP eg 172.16.40.20> 8 <traffic generator port 0 IP eg 202.16.100.20> 8 0 65535 0 65535 0 0 0
p vfw applyruleset
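As an aside, the `8` after each IP address in the vfw rules above is a CIDR prefix length; the routeadd lines in these configuration scripts express the same mask in hex (0xff000000 for /8). A small sketch of the conversion, which is our illustration rather than SampleVNF code:

```shell
# Convert a CIDR prefix length to the hex netmask form used by the
# routeadd lines in these config scripts (e.g. /8 -> 0xff000000).
prefix=8
mask=$(( (0xffffffff << (32 - prefix)) & 0xffffffff ))
printf 'prefix /%d -> netmask 0x%08x\n' "$prefix" "$mask"
```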
- c) Run below cmd to launch the VNF. Please make sure both hugepages and ports to be used are bind to dpdk.
+
+
+c) Run the below command to launch the VNF. Please make sure hugepages are configured and the ports to be used are bound to DPDK.
+
+ ::
+
cd <samplevnf>/VNFs/vFW/
./build/vFW -p 0x3 -f ./config/VFW_SWLB_SinglePortPair_4Thread.cfg -s ./config/VFW_SWLB_SinglePortPair_script.tc
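The `-p 0x3` argument above is a hexadecimal port bitmask, following the usual DPDK convention (our reading, not stated explicitly in this guide): bit N enables port N, so 0x3 selects ports 0 and 1 of the single port pair. A minimal sketch of the decoding:

```shell
# Decode a DPDK-style port bitmask: bit N set => port N enabled.
mask=0x3
enabled=""
for port in 0 1 2 3; do
  if [ $(( (mask >> port) & 1 )) -eq 1 ]; then
    enabled="$enabled $port"
  fi
done
echo "enabled ports:$enabled"
```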
+
Step 4: Run test using traffic generator
+
::
+
On traffic generator system:
cd <trex eg v2.28/stl>
Update the bench.py to generate the traffic.
@@ -268,18 +309,27 @@ step 4: Run Test using traffic geneator
start -f stl/bench.py -m 50% --port 0 3 -t size=590,vm=var1
For more details refer: https://trex-tgn.cisco.com/trex/doc/trex_stateless_bench.html
+
Virtual Access Control list - How to run
----------------------------------------
Step 3: Bind the datapath ports to DPDK
+
+a) Bind ports to DPDK
+
::
- a) Bind ports to DPDK
+
For DPDK versions 17.xx
1) cd <samplevnf>/dpdk
2) ./usertools/dpdk-devbind.py --status <--- List the network device
3) ./usertools/dpdk-devbind.py -b igb_uio <PCI Port 0> <PCI Port 1>
.. _More details: http://dpdk.org/doc/guides-17.05/linux_gsg/build_dpdk.html#binding-and-unbinding-network-ports-to-from-the-kernel-modules
- b) Prepare script to enalble VNF to route the packets
+
+
+b) Prepare script to enable the VNF to route the packets
+
+ ::
+
cd <samplevnf>/VNFs/vACL/config
Open -> IPv4_swlb_acl.tc. Replace the bold items based on your setting.
@@ -313,80 +363,20 @@ Step 3: Bind the datapath ports to DPDK
p acl add 2 <traffic generator port 0 IP eg 202.16.100.20> 8 <traffic generator port 1 IP eg 172.16.40.20> 8 0 65535 0 65535 0 0 1
p acl add 2 <traffic generator port 1 IP eg 172.16.40.20> 8 <traffic generator port 0 IP eg 202.16.100.20> 8 0 65535 0 65535 0 0 0
p acl applyruleset
- c) Run below cmd to launch the VNF. Please make sure both hugepages and ports to be used are bind to dpdk.
- cd <samplevnf>/VNFs/vFW/
- ./build/vFW -p 0x3 -f ./config/IPv4_swlb_acl_1LB_1t.cfg -s ./config/IPv4_swlb_acl.tc.
-step 4: Run Test using traffic geneator
- ::
- On traffic generator system:
- cd <trex eg v2.28/stl>
- Update the bench.py to generate the traffic.
- class STLBench(object):
- ip_range = {}
- ip_range['src'] = {'start': '<traffic generator port 0 IP eg 202.16.100.20>', 'end': '<traffic generator port 0 IP eg 202.16.100.20>'}
- ip_range['dst'] = {'start': '<traffic generator port 1 IP eg 172.16.40.20>', 'end': '<traffic generator port 1 IP eg 172.16.40.20>'}
- cd <trex eg v2.28>
- Run the TRex server: sudo ./t-rex-64 -i -c 7
- In another shell run TRex console: trex-console
- The console can be run from another computer with -s argument, --help for more info.
- Other options for TRex client are automation or GUI
- In the console, run "tui" command, and then send the traffic with commands like:
- start -f stl/bench.py -m 50% --port 0 3 -t size=590,vm=var1
- For more details refer: https://trex-tgn.cisco.com/trex/doc/trex_stateless_bench.html
+c) Run the below command to launch the VNF. Please make sure hugepages are configured and the ports to be used are bound to DPDK.
-Virtual Access Control list - How to run
-----------------------------------------
-
-Step 3: Bind the datapath ports to DPDK
::
- a) Bind ports to DPDK
- For DPDK versions 17.xx
- 1) cd <samplevnf>/dpdk
- 2) ./usertools/dpdk-devbind.py --status <--- List the network device
- 3) ./usertools/dpdk-devbind.py -b igb_uio <PCI Port 0> <PCI Port 1>
- .. _More details: http://dpdk.org/doc/guides-17.05/linux_gsg/build_dpdk.html#binding-and-unbinding-network-ports-to-from-the-kernel-modules
- b) Prepare script to enalble VNF to route the packets
- cd <samplevnf>/VNFs/vACL/config
- Open -> IPv4_swlb_acl.tc. Replace the bold items based on your setting.
-
- link 0 config <VNF port 0 IP eg 202.16.100.10> 8
- link 0 up
- link 1 down
- link 1 config <VNF port 0 IP eg 172.16.40.10> 8
- link 1 up
- ; routeadd <net/host> <port #> <ipv4 nhip address in decimal> <Mask>
- routeadd net 0 <traffic generator port 0 IP eg 202.16.100.20> 0xff000000
- routeadd net 1 <traffic generator port 1 IP eg 172.16.40.20> 0xff000000
+ cd <samplevnf>/VNFs/vACL/
+ ./build/vACL -p 0x3 -f ./config/IPv4_swlb_acl_1LB_1t.cfg -s ./config/IPv4_swlb_acl.tc
- ; IPv4 static ARP; disable if dynamic arp is enabled.
- p 1 arpadd 0 <traffic generator port 0 IP eg 202.16.100.20> <traffic generator port 0 MAC>
- p 1 arpadd 1 <traffic generator port 1 IP eg 172.16.40.20> <traffic generator port 1 MAC>
- p action add 0 accept
- p action add 0 fwd 0
- p action add 0 count
- p action add 1 accept
- p action add 1 fwd 1
- p action add 1 count
- p action add 2 drop
- p action add 2 count
- p action add 0 conntrack
- p action add 1 conntrack
- p action add 2 conntrack
- p action add 3 conntrack
- ; IPv4 rules
- p acl add 1 <traffic generator port 0 IP eg 202.16.100.20> 8 <traffic generator port 1 IP eg 172.16.40.20> 8 0 65535 67 69 0 0 2
- p acl add 2 <traffic generator port 0 IP eg 202.16.100.20> 8 <traffic generator port 1 IP eg 172.16.40.20> 8 0 65535 0 65535 0 0 1
- p acl add 2 <traffic generator port 1 IP eg 172.16.40.20> 8 <traffic generator port 0 IP eg 202.16.100.20> 8 0 65535 0 65535 0 0 0
- p acl applyruleset
- c) Run below cmd to launch the VNF. Please make sure both hugepages and ports to be used are bind to dpdk.
- cd <samplevnf>/VNFs/vACL/
- ./build/vACL -p 0x3 -f ./config/IPv4_swlb_acl_1LB_1t.cfg -s ./config/IPv4_swlb_acl.tc.
Step 4: Run test using traffic generator
+
::
+
On traffic generator system:
cd <trex eg v2.28/stl>
Update the bench.py to generate the traffic.
@@ -404,18 +394,27 @@ step 4: Run Test using traffic geneator
start -f stl/bench.py -m 50% --port 0 3 -t size=590,vm=var1
For more details refer: https://trex-tgn.cisco.com/trex/doc/trex_stateless_bench.html
+
vCGNAPT - How to run
-----------------------------------------
+--------------------
Step 3: Bind the datapath ports to DPDK
+
+ a) Bind ports to DPDK
+
::
- a) Bind ports to DPDK
+
For DPDK versions 17.xx
1) cd <samplevnf>/dpdk
2) ./usertools/dpdk-devbind.py --status <--- List the network device
3) ./usertools/dpdk-devbind.py -b igb_uio <PCI Port 0> <PCI Port 1>
- .. _More details: http://dpdk.org/doc/guides-17.05/linux_gsg/build_dpdk.html#binding-and-unbinding-network-ports-to-from-the-kernel-modules
- b) Prepare script to enalble VNF to route the packets
+ .. _More details: http://dpdk.org/doc/guides-17.05/linux_gsg/build_dpdk.html#binding-and-unbinding-network-ports-to-from-the-kernel-modules
+
+
+ b) Prepare script to enable the VNF to route the packets
+
+ ::
+
cd <samplevnf>/VNFs/vCGNAPT/config
Open -> sample_swlb_2port_2WT.tc. Replace the bold items based on your setting.
@@ -436,17 +435,24 @@ Step 3: Bind the datapath ports to DPDK
; IPv4 static ARP; disable if dynamic arp is enabled.
p 1 arpadd 0 <traffic generator port 0 IP eg 202.16.100.20> <traffic generator port 0 MAC>
p 1 arpadd 1 <traffic generator port 1 IP eg 172.16.40.20> <traffic generator port 1 MAC>
- For dynamic cgnapt. Please use UDP_Replay as one of the traffic generator
+ For dynamic CGNAPT, please use UDP_Replay as one of the traffic generators
(TG1) (port 0) --> (port 0) VNF (CGNAPT) (Port 1) --> (port0)(UDPReplay)
- c) Run below cmd to launch the VNF. Please make sure both hugepages and ports to be used are bind to dpdk.
+
+ c) Run the below command to launch the VNF. Please make sure hugepages are configured and the ports to be used are bound to DPDK.
+
+ ::
+
cd <samplevnf>/VNFs/vCGNAPT/
./build/vCGNAPT -p 0x3 -f ./config/sample_swlb_2port_2WT.cfg -s ./config/sample_swlb_2port_2WT.tc
Step 4: Run test using traffic generator
- ::
+
On traffic generator system:
+
+ ::
+
cd <trex eg v2.28/stl>
Update the bench.py to generate the traffic.
@@ -463,24 +469,35 @@ step 4: Run Test using traffic geneator
start -f stl/bench.py -m 50% --port 0 3 -t size=590,vm=var1
For more details refer: https://trex-tgn.cisco.com/trex/doc/trex_stateless_bench.html
+
UDP_Replay - How to run
----------------------------------------
Step 3: Bind the datapath ports to DPDK
+
+ a) Bind ports to DPDK
+
::
- a) Bind ports to DPDK
+
For DPDK versions 17.xx
1) cd <samplevnf>/dpdk
2) ./usertools/dpdk-devbind.py --status <--- List the network device
3) ./usertools/dpdk-devbind.py -b igb_uio <PCI Port 0> <PCI Port 1>
- .. _More details: http://dpdk.org/doc/guides-17.05/linux_gsg/build_dpdk.html#binding-and-unbinding-network-ports-to-from-the-kernel-modules
- b) Run below cmd to launch the VNF. Please make sure both hugepages and ports to be used are bind to dpdk.
+ .. _More details: http://dpdk.org/doc/guides-17.05/linux_gsg/build_dpdk.html#binding-and-unbinding-network-ports-to-from-the-kernel-modules
+
+ b) Run the below command to launch the VNF. Please make sure hugepages are configured and the ports to be used are bound to DPDK.
+
+ ::
+
cd <samplevnf>/VNFs/UDP_Replay/
cmd: ./build/UDP_Replay -c 0x7 -n 4 -w <pci> -w <pci> -- --no-hw-csum -p <portmask> --config='(port, queue, cpucore)'
e.g. ./build/UDP_Replay -c 0x7 -n 4 -w 0000:07:00.0 -w 0000:07:00.1 -- --no-hw-csum -p 0x3 --config='(0, 0, 1)(1, 0, 2)'
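The `--config` tuples above map each (port, queue) pair onto a CPU core, in the style of DPDK's l3fwd sample options (our paraphrase; the parsing below is our illustration, not UDP_Replay code). A sketch of what '(0, 0, 1)(1, 0, 2)' encodes:

```shell
# Expand the --config tuples: each (port,queue,core) triple pins
# one port/queue pair to a CPU core.
config='(0,0,1)(1,0,2)'
mapping=$(printf '%s\n' "$config" | tr ')' '\n' | tr -d '(' |
  while IFS=, read -r port queue core; do
    [ -n "$port" ] && echo "port $port queue $queue -> core $core"
  done)
echo "$mapping"
```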
+
Step 4: Run test using traffic generator
- ::
+
+ ::
+
On traffic generator system:
cd <trex eg v2.28/stl>
Update the bench.py to generate the traffic.
@@ -503,6 +520,7 @@ PROX - How to run
Description
^^^^^^^^^^^
+
This is PROX, the Packet pROcessing eXecution engine, part of Intel(R)
Data Plane Performance Demonstrators, and formerly known as DPPD-BNG.
PROX is a DPDK-based application implementing Telco use-cases such as
@@ -511,6 +529,7 @@ finer grained network functions like QoS, Routing, load-balancing...
Compiling and running this application
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
This application supports DPDK 16.04, 16.11, 17.02 and 17.05.
The commands below assume that the following variables have been set:
@@ -519,6 +538,7 @@ export RTE_TARGET=x86_64-native-linuxapp-gcc
Example: DPDK 17.05 installation
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
* git clone http://dpdk.org/git/dpdk
* cd dpdk
* git checkout v17.05
@@ -526,12 +546,14 @@ Example: DPDK 17.05 installation
PROX compilation
^^^^^^^^^^^^^^^^
+
The Makefile with this application expects RTE_SDK to point to the
root directory of DPDK (e.g. export RTE_SDK=/root/dpdk). If RTE_TARGET
has not been set, x86_64-native-linuxapp-gcc will be assumed.
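The fallback behaviour can be pictured as a default shell expansion (our illustration of the Makefile behaviour, not PROX's actual Makefile text):

```shell
# If RTE_TARGET is unset, the PROX build assumes this default target.
unset RTE_TARGET
target=${RTE_TARGET:-x86_64-native-linuxapp-gcc}
echo "building PROX for: $target"
```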
Running PROX
^^^^^^^^^^^^
+
After DPDK has been set up, run make from the directory where you have
extracted this application. A build directory will be created
containing the PROX executable. The usage of the application is shown
@@ -540,6 +562,7 @@ been bound to the DPDK provided igb_uio driver. Refer to the "Getting
Started Guide - DPDK" document for more details.
::
+
Usage: ./build/prox [-f CONFIG_FILE] [-l LOG_FILE] [-p] [-o DISPLAY] [-v] [-a|-e] [-m|-s|-i] [-n] [-w DEF] [-q] [-k] [-d] [-z] [-r VAL] [-u] [-t]
-f CONFIG_FILE : configuration file to load, ./prox.cfg by default
-l LOG_FILE : log file name, ./prox.log by default