Diffstat (limited to 'docs')
 -rwxr-xr-x  docs/testing/user/userguide/01-introduction.rst           |   2
 -rw-r--r--  docs/testing/user/userguide/02-methodology.rst            |   1
 -rwxr-xr-x  docs/testing/user/userguide/03-architecture.rst           |   6
 -rw-r--r--  docs/testing/user/userguide/04-installation.rst           | 144
 -rw-r--r--  docs/testing/user/userguide/05-How_to_run_SampleVNFs.rst  | 394
 5 files changed, 292 insertions, 255 deletions
diff --git a/docs/testing/user/userguide/01-introduction.rst b/docs/testing/user/userguide/01-introduction.rst index bb92af6f..1a60fb58 100755 --- a/docs/testing/user/userguide/01-introduction.rst +++ b/docs/testing/user/userguide/01-introduction.rst @@ -52,7 +52,7 @@ This document consists of the following chapters: * Chapter :doc:`04-installation` provides instructions to install *SampleVNF*. -* Chapter :doc:`05-BKMs` provides example on how installing and running *SampleVNF*. +* Chapter :doc:`05-How_to_run_SampleVNFs` provides examples of how to install and run *SampleVNF*. Contact SampleVNF ================= diff --git a/docs/testing/user/userguide/02-methodology.rst b/docs/testing/user/userguide/02-methodology.rst index 9f377d8d..07e9e7ce 100644 --- a/docs/testing/user/userguide/02-methodology.rst +++ b/docs/testing/user/userguide/02-methodology.rst @@ -72,6 +72,7 @@ The metrics, as defined by ETSI GS NFV-TST001, are shown in | | * Latency between NFVI nodes | | | * Packet delay variation (jitter) between VMs | | | * Packet delay variation (jitter) between NFVI nodes | +| | * RFC 3511 benchmark | | | | +---------+-------------------------------------------------------------------+ diff --git a/docs/testing/user/userguide/03-architecture.rst b/docs/testing/user/userguide/03-architecture.rst index 4c4b7e61..d8b81c60 100755 --- a/docs/testing/user/userguide/03-architecture.rst +++ b/docs/testing/user/userguide/03-architecture.rst @@ -30,7 +30,7 @@ of Sample† Traffic Flows. * Not a commercial product. Encourage the community to contribute and close the feature gaps. † No Vendor/Proprietary Workloads -t helps to facilitate deterministic & repeatable bench-marking on Industry +It helps to facilitate deterministic & repeatable bench-marking on Industry standard high volume Servers. 
It augments well with a Test Infrastructure to help facilitate consistent/repeatable methodologies for characterizing & validating the sample VNFs through OPEN SOURCE VNF approximations and test tools. @@ -85,7 +85,7 @@ Test Framework .. _Yardstick_NSB: http://artifacts.opnfv.org/yardstick/docs/testing_user_userguide/index.html#document-13-nsb-overview -SampleVNF Test Infrastructure (NSB (Yardstick_NSB_))in yardstick helps to facilitate +SampleVNF Test Infrastructure (NSB (Yardstick_NSB_)) in yardstick helps to facilitate consistent/repeatable methodologies for characterizing & validating the sample VNFs (:term:`VNF`) through OPEN SOURCE VNF approximations. @@ -93,7 +93,7 @@ sample VNFs (:term:`VNF`) through OPEN SOURCE VNF approximations. Network Service Benchmarking in yardstick framework follows ETSI GS NFV-TST001_ to verify/characterize both :term:`NFVI` & :term:`VNF` -For more inforamtion refer, +For more inforamtion refer, Yardstick_NSB_ SampleVNF Directory structure ============================= diff --git a/docs/testing/user/userguide/04-installation.rst b/docs/testing/user/userguide/04-installation.rst index 5b8b9322..d7c26c9d 100644 --- a/docs/testing/user/userguide/04-installation.rst +++ b/docs/testing/user/userguide/04-installation.rst @@ -6,7 +6,6 @@ SampleVNF Installation ====================== - Abstract -------- @@ -19,17 +18,17 @@ optimized VNF + NFVi Infrastructure libraries, with Performance Characterization of Sample† Traffic Flows. :: - • * Not a commercial product. Encourage the community to contribute and close the feature gaps. - • † No Vendor/Proprietary Workloads + + * Not a commercial product. Encourage the community to contribute and close the feature gaps. + † No Vendor/Proprietary Workloads SampleVNF supports installation directly in Ubuntu. The installation procedure are detailed in the sections below. The steps needed to run SampleVNF are: -1. Install and Build SampleVNF. -2. 
deploy the VNF on the target and modify the config based on the - Network under test -3. Run the traffic generator to generate the traffic. + 1) Install and Build SampleVNF. + 2) deploy the VNF on the target and modify the config based on the Network under test + 3) Run the traffic generator to generate the traffic. Prerequisites ------------- @@ -47,13 +46,17 @@ simulation platform to generate packet traffic to the DUT ports and determine the throughput/latency at the tester side. Below are the supported/tested (:term `VNF`) deployment type. + .. image:: images/deploy_type.png :width: 800px :alt: SampleVNF supported topology Hardware & Software Ingredients ------------------------------- -.. code-block:: console + +SUT requirements: +^^^^^^^^^^^^^^^^ +:: +-----------+------------------+ | Item | Description | +-----------+------------------+ @@ -65,10 +68,12 @@ Hardware & Software Ingredients +-----------+------------------+ | kernel | 4.4.0-34-generic| +-----------+------------------+ - |DPD | 17.02 | + | DPDK | 17.02 | +-----------+------------------+ - Boot and BIOS settings +Boot and BIOS settings: +^^^^^^^^^^^^^^^^^^^^^^ +:: +------------------+---------------------------------------------------+ | Boot settings | default_hugepagesz=1G hugepagesz=1G hugepages=16 | | | hugepagesz=2M hugepages=2048 isolcpus=1-11,22-33 | @@ -92,20 +97,24 @@ The ethernet cables should be connected between traffic generator and the VNF se SRIOV or OVS) setup based on the test profile. The connectivity could be -1. Single port pair : One pair ports used for traffic +1) Single port pair : One pair ports used for traffic :: e.g. Single port pair link0 and link1 of VNF are used - TG:port 0 ------ VNF:Port 0 - TG:port 1 ------ VNF:Port 1 + TG:port 0 <------> VNF:Port 0 + TG:port 1 <------> VNF:Port 1 -2. Multi port pair : More than one pair of traffic +2) Multi port pair : More than one pair of traffic :: e.g. 
Two port pair link 0, link1, link2 and link3 of VNF are used - TG:port 0 ------ VNF:Port 0 - TG:port 1 ------ VNF:Port 1 - TG:port 2 ------ VNF:Port 2 - TG:port 3 ------ VNF:Port 3 - + TG:port 0 <------> VNF:Port 0 + TG:port 1 <------> VNF:Port 1 + TG:port 2 <------> VNF:Port 2 + TG:port 3 <------> VNF:Port 3 + + For correlated traffic, use the below configuration + TG_1:port 0 <------> VNF:Port 0 + VNF:Port 1 <------> TG_2:port 0 (UDP Replay) + (TG_2(UDP_Replay) reflects all the traffic on the given port) * Bare-Metal Refer: http://fast.dpdk.org/doc/pdf-guides/ to setup the DUT for VNF to run * SRIOV Refer below link to setup sriov https://software.intel.com/en-us/articles/using-sr-iov-to-share-an-ethernet-port-among-multiple-vms - * OVS/OVS/DPDK - Refer below link to setup ovs/ovs-dpdk + * OVS_DPDK + Refer below link to setup ovs-dpdk http://docs.openvswitch.org/en/latest/intro/install/general/ http://docs.openvswitch.org/en/latest/intro/install/dpdk/ * Openstack - Use OPNFV installer to deploy the openstack. + Use any OPNFV installer to deploy the openstack. Build VNFs on the DUT: ---------------------- - * Clone sampleVNF project repository - git clone https://git.opnfv.org/samplevnf - Auto Build - ---------- - * Interactive options: - :: - ./tools/vnf_build.sh -i - Follow the steps in the screen from option [1] –> [9] and - select option [8] to build the vnfs. - It will automatically download selected DPDK version and any - required patches and will setup everything and build VNFs. - - Following are the options for setup: - ---------------------------------------------------------- - Step 1: Environment setup. 
- ---------------------------------------------------------- - [1] Check OS and network connection - [2] Select DPDK RTE version - - ---------------------------------------------------------- - Step 2: Download and Install - ---------------------------------------------------------- - [3] Agree to download - [4] Download packages - [5] Download DPDK zip - [6] Build and Install DPDK - [7] Setup hugepages - - ---------------------------------------------------------- - Step 3: Build VNFs - ---------------------------------------------------------- - [8] Build all VNFs (vACL, vCGNAPT, vFW, UDP_Replay, DPPD-PROX) - - [9] Exit Script - * non-Interactive options: - :: - ./tools/vnf_build.sh -s -d=<dpdk version eg 17.02> -Manual Build ------------- +1) Clone sampleVNF project repository - git clone https://git.opnfv.org/samplevnf + + Auto Build - Using script to build VNFs + ^^^^^^^^^^ + * Interactive options: + :: + ./tools/vnf_build.sh -i + Follow the steps in the screen from option [1] –> [9] and + select option [8] to build the vnfs. + It will automatically download selected DPDK version and any + required patches and will setup everything and build VNFs. + + Following are the options for setup: + ---------------------------------------------------------- + Step 1: Environment setup. 
+ ---------------------------------------------------------- + [1] Check OS and network connection + [2] Select DPDK RTE version + + ---------------------------------------------------------- + Step 2: Download and Install + ---------------------------------------------------------- + [3] Agree to download + [4] Download packages + [5] Download DPDK zip + [6] Build and Install DPDK + [7] Setup hugepages + + ---------------------------------------------------------- + Step 3: Build VNFs + ---------------------------------------------------------- + [8] Build all VNFs (vACL, vCGNAPT, vFW, UDP_Replay, DPPD-PROX) + + [9] Exit Script + * non-Interactive options: + :: + ./tools/vnf_build.sh -s -d=<dpdk version eg 17.02> + + Manual Build + ^^^^^^^^^^^^ :: 1.Download DPDK supported version from dpdk.org http://dpdk.org/browse/dpdk/snapshot/dpdk-$DPDK_RTE_VER.zip @@ -191,11 +202,12 @@ Manual Build The vACL executable will be created at the following location <samplevnf>/VNFs/vACL/build/vACL -Standalone virtualization/Openstack: - :: - * Build image from yardstick - git clone https://git.opnfv.org/yardstick - * cd yardstick and run - ./tools/yardstick-img-modify tools/ubuntu-server-cloudimg-samplevnf-modify.sh +2) Standalone virtualization/Openstack: + + Build VM image from script in yardstick + :: + 1) git clone https://git.opnfv.org/yardstick + 2) cd yardstick and run + ./tools/yardstick-img-modify tools/ubuntu-server-cloudimg-samplevnf-modify.sh To run VNFs. Please refer chapter `05-How_to_run_SampleVNFs.rst` diff --git a/docs/testing/user/userguide/05-How_to_run_SampleVNFs.rst b/docs/testing/user/userguide/05-How_to_run_SampleVNFs.rst index c7667dec..dc764fa6 100644 --- a/docs/testing/user/userguide/05-How_to_run_SampleVNFs.rst +++ b/docs/testing/user/userguide/05-How_to_run_SampleVNFs.rst @@ -22,13 +22,17 @@ simulation platform to generate packet traffic to the DUT ports and determine the throughput/latency at the tester side. 
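The boot settings and manual-build steps above reserve hugepages at two sizes (the boot table uses default_hugepagesz=1G hugepagesz=1G hugepages=16 hugepagesz=2M hugepages=2048; the manual build example uses eight 1G pages). As a quick sanity check on how much memory those command lines pin, here is an illustrative sketch (not part of SampleVNF):

```python
# Hugepage memory pinned by the kernel command lines used in this guide.
GIB = 1024 ** 3
MIB = 1024 ** 2

# Boot settings table: 16 x 1G pages plus 2048 x 2M pages
reserved_boot = 16 * GIB + 2048 * 2 * MIB
# Manual build example: 8 x 1G pages plus 2048 x 2M pages
reserved_build = 8 * GIB + 2048 * 2 * MIB

print(reserved_boot // GIB)   # -> 20 (GiB)
print(reserved_build // GIB)  # -> 12 (GiB)
```

Make sure the DUT has comfortably more RAM than the total reserved, since hugepage memory is unavailable to ordinary processes after boot.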
Below are the supported/tested (:term `VNF`) deployment type. + .. image:: images/deploy_type.png :width: 800px :alt: SampleVNF supported topology Hardware & Software Ingredients ------------------------------- -.. code-block:: console + +SUT requirements: +^^^^^^^^^^^^^^^^ +:: +-----------+------------------+ | Item | Description | +-----------+------------------+ @@ -40,10 +44,12 @@ Hardware & Software Ingredients +-----------+------------------+ | kernel | 4.4.0-34-generic| +-----------+------------------+ - |DPDK | 17.02 | + | DPDK | 17.02 | +-----------+------------------+ - Boot and BIOS settings +Boot and BIOS settings: +^^^^^^^^^^^^^^^^^^^^^^ +:: +------------------+---------------------------------------------------+ | Boot settings | default_hugepagesz=1G hugepagesz=1G hugepages=16 | | | hugepagesz=2M hugepages=2048 isolcpus=1-11,22-33 | @@ -67,116 +73,124 @@ The ethernet cables should be connected between traffic generator and the VNF se SRIOV or OVS) setup based on the test profile. The connectivity could be -1. Single port pair : One pair ports used for traffic +1) Single port pair : One pair ports used for traffic :: e.g. Single port pair link0 and link1 of VNF are used - TG:port 0 ------ VNF:Port 0 - TG:port 1 ------ VNF:Port 1 + TG:port 0 <------> VNF:Port 0 + TG:port 1 <------> VNF:Port 1 -2. Multi port pair : More than one pair of traffic +2) Multi port pair : More than one pair of traffic :: e.g. 
Two port pair link 0, link1, link2 and link3 of VNF are used - TG:port 0 ------ VNF:Port 0 - TG:port 1 ------ VNF:Port 1 - TG:port 2 ------ VNF:Port 2 - TG:port 3 ------ VNF:Port 3 - + TG:port 0 <------> VNF:Port 0 + TG:port 1 <------> VNF:Port 1 + TG:port 2 <------> VNF:Port 2 + TG:port 3 <------> VNF:Port 3 + + For correlated traffic, use the below configuration + TG_1:port 0 <------> VNF:Port 0 + VNF:Port 1 <------> TG_2:port 0 (UDP Replay) + (TG_2(UDP_Replay) reflects all the traffic on the given port) * Bare-Metal - Refer: http://fast.dpdk.org/doc/pdf-guides/ to setup the DUT for VNF to run + Refer: http://fast.dpdk.org/doc/pdf-guides/ to setup the DUT for VNF to run - * Stadalone Virtualization - PHY-VM-PHY + * Standalone Virtualization - PHY-VM-PHY * SRIOV Refer below link to setup sriov https://software.intel.com/en-us/articles/using-sr-iov-to-share-an-ethernet-port-among-multiple-vms - * OVS/OVS-DPDK - Refer below link to setup ovs/ovs-dpdk + * OVS_DPDK + Refer below link to setup ovs-dpdk http://docs.openvswitch.org/en/latest/intro/install/general/ http://docs.openvswitch.org/en/latest/intro/install/dpdk/ * Openstack - use OPNFV installer to deploy the openstack. + Use any OPNFV installer to deploy the openstack. Setup Traffic generator ----------------------- -Step 0: Preparing hardware connection:: - Connect Traffic generator and VNF system back to back as shown in previous section - TRex port 0 ↔ (VNF Port 0) ↔ (VNF Port 1) ↔ TRex port 1 - -Step 1: Setting up Traffic generator (TRex) :: - (Refer: https://trex-tgn.cisco.com/trex/doc/trex_stateless_bench.html) - TRex Software preparations - ^^^^^^^^^^^^^^^^^^^^^^^^^^ - a. Install the OS (Bare metal Linux, not VM!) - b. Obtain the latest TRex package: wget https://trex-tgn.cisco.com/trex/release/latest - c. Untar the package: tar -xzf latest - d. Change dir to unzipped TRex - e. 
Create config file using command: sudo python dpdk_setup_ports.py -i - In case of Ubuntu 16 need python3 - See paragraph config creation for detailed step-by-step +Step 0: Preparing hardware connection + :: + Connect Traffic generator and VNF system back to back as shown in previous section + TRex port 0 ↔ (VNF Port 0) ↔ (VNF Port 1) ↔ TRex port 1 + +Step 1: Setting up Traffic generator (TRex) + :: + TRex Software preparations + ^^^^^^^^^^^^^^^^^^^^^^^^^^ + * Install the OS (Bare metal Linux, not VM!) + * Obtain the latest TRex package: wget https://trex-tgn.cisco.com/trex/release/latest + * Untar the package: tar -xzf latest + * Change dir to unzipped TRex + * Create config file using command: sudo python dpdk_setup_ports.py -i + In case of Ubuntu 16 need python3 + See paragraph config creation for detailed step-by-step + (Refer: https://trex-tgn.cisco.com/trex/doc/trex_stateless_bench.html) Build SampleVNFs ----------------- -Step 2: Procedure to build SampleVNFs:: +Step 2: Procedure to build SampleVNFs + :: a) Clone sampleVNF project repository - git clone https://git.opnfv.org/samplevnf b) Build VNFs Auto Build ^^^^^^^^^^ * Interactive options: - ./tools/vnf_build.sh -i - Follow the steps in the screen from option [1] –> [9] and select option [8] to build the vnfs. - It will automatically download selected DPDK version and any required patches and will setup everything and build VNFs. - Following are the options for setup: - ---------------------------------------------------------- - Step 1: Environment setup. 
- ---------------------------------------------------------- - [1] Check OS and network connection - [2] Select DPDK RTE version - - ---------------------------------------------------------- - Step 2: Download and Install - ---------------------------------------------------------- - [3] Agree to download - [4] Download packages - [5] Download DPDK zip - [6] Build and Install DPDK - [7] Setup hugepages - - ---------------------------------------------------------- - Step 3: Build VNFs - ---------------------------------------------------------- - [8] Build all VNFs (vACL, vCGNAPT, vFW, UDP_Replay, DPPD-PROX) - - [9] Exit Script + ./tools/vnf_build.sh -i + Follow the steps in the screen from option [1] –> [9] and select option [8] to build the vnfs. + It will automatically download selected DPDK version and any required patches and will setup everything and build VNFs. + Following are the options for setup: + ---------------------------------------------------------- + Step 1: Environment setup. + ---------------------------------------------------------- + [1] Check OS and network connection + [2] Select DPDK RTE version + + ---------------------------------------------------------- + Step 2: Download and Install + ---------------------------------------------------------- + [3] Agree to download + [4] Download packages + [5] Download DPDK zip + [6] Build and Install DPDK + [7] Setup hugepages + + ---------------------------------------------------------- + Step 3: Build VNFs + ---------------------------------------------------------- + [8] Build all VNFs (vACL, vCGNAPT, vFW, UDP_Replay, DPPD-PROX) + + [9] Exit Script * non-Interactive options: - ./tools/vnf_build.sh -s -d=<dpdk version eg 17.02> + ./tools/vnf_build.sh -s -d=<dpdk version eg 17.02> + Manual Build ^^^^^^^^^^^^ - 1. 
Download DPDK supported version from dpdk.org + 1) Download DPDK supported version from dpdk.org http://dpdk.org/browse/dpdk/snapshot/dpdk-$DPDK_RTE_VER.zip unzip dpdk-$DPDK_RTE_VER.zip and apply dpdk patches only in case of 16.04 (Not required for other DPDK versions) cd dpdk make config T=x86_64-native-linuxapp-gcc O=x86_64-native-linuxapp-gcc cd x86_64-native-linuxapp-gcc make - 2. Setup huge pages + 2) Setup huge pages For 1G/2M hugepage sizes, for example 1G pages, the size must be specified explicitly and can also be optionally set as the default hugepage size for the system. For example, to reserve 8G of hugepage memory in the form of eight 1G pages, the following options should be passed to the kernel: * default_hugepagesz=1G hugepagesz=1G hugepages=8 hugepagesz=2M hugepages=2048 - 3. Add this to Go to /etc/default/grub configuration file. + 3) Add this to Go to /etc/default/grub configuration file. Append “default_hugepagesz=1G hugepagesz=1G hugepages=8 hugepagesz=2M hugepages=2048” to the GRUB_CMDLINE_LINUX entry. - 4. Setup Environment Variable + 4) Setup Environment Variable export RTE_SDK=<samplevnf>/dpdk export RTE_TARGET=x86_64-native-linuxapp-gcc export VNF_CORE=<samplevnf> or using ./tools/setenv.sh - 5. Build VNFs + 5) Build VNFs cd <samplevnf> make or to build individual VNFs @@ -189,14 +203,15 @@ Step 2: Procedure to build SampleVNFs:: Virtual Firewall - How to run ----------------------------- -Step 3: Bind the datapath ports to DPDK :: - a. Bind ports to DPDK +Step 3: Bind the datapath ports to DPDK + :: + a) Bind ports to DPDK For DPDK versions 17.xx - 1. cd <samplevnf>/dpdk - 2. ./usertools/dpdk-devbind.py --status <--- List the network device - 3. ./usertools/dpdk-devbind.py -b igb_uio <PCI Port 0> <PCI Port 1> + 1) cd <samplevnf>/dpdk + 2) ./usertools/dpdk-devbind.py --status <--- List the network device + 3) ./usertools/dpdk-devbind.py -b igb_uio <PCI Port 0> <PCI Port 1> .. 
_More details: http://dpdk.org/doc/guides-17.05/linux_gsg/build_dpdk.html#binding-and-unbinding-network-ports-to-from-the-kernel-modules - b. Prepare script to enalble VNF to route the packets + b) Prepare script to enable VNF to route the packets cd <samplevnf>/VNFs/vFW/config Open -> VFW_SWLB_SinglePortPair_script.tc. Replace the bold items based on your setting. @@ -230,39 +245,41 @@ Step 3: Bind the datapath ports to DPDK :: p vfw add 2 <traffic generator port 0 IP eg 202.16.100.20> 8 <traffic generator port 1 IP eg 172.16.40.20> 8 0 65535 0 65535 0 0 1 p vfw add 2 <traffic generator port 1 IP eg 172.16.40.20> 8 <traffic generator port 0 IP eg 202.16.100.20> 8 0 65535 0 65535 0 0 0 p vfw applyruleset - c. Run below cmd to launch the VNF. Please make sure both hugepages and ports to be used are bind to dpdk. - cd <samplevnf>/VNFs/vFW/ - ./build/vFW -p 0x3 -f ./config/VFW_SWLB_SinglePortPair_4Thread.cfg -s ./config/VFW_SWLB_SinglePortPair_script.tc - -step 4: Run Test using traffic geneator :: - On traffic generator system: - cd <trex eg v2.28/stl> - Update the bench.py to generate the traffic. - - class STLBench(object): - ip_range = {} - ip_range['src'] = {'start': '<traffic generator port 0 IP eg 202.16.100.20>', 'end': '<traffic generator port 0 IP eg 202.16.100.20>'} - ip_range['dst'] = {'start': '<traffic generator port 1 IP eg 172.16.40.20>', 'end': '<traffic generator port 1 IP eg 172.16.40.20>'} - cd <trex eg v2.28> - Run the TRex server: sudo ./t-rex-64 -i -c 7 - In another shell run TRex console: trex-console - The console can be run from another computer with -s argument, --help for more info. - Other options for TRex client are automation or GUI - In the console, run "tui" command, and then send the traffic with commands like: - start -f stl/bench.py -m 50% --port 0 3 -t size=590,vm=var1 - For more details refer: https://trex-tgn.cisco.com/trex/doc/trex_stateless_bench.html + c) Run below cmd to launch the VNF. 
Please make sure both hugepages and ports to be used are bind to dpdk. + cd <samplevnf>/VNFs/vFW/ + ./build/vFW -p 0x3 -f ./config/VFW_SWLB_SinglePortPair_4Thread.cfg -s ./config/VFW_SWLB_SinglePortPair_script.tc + +step 4: Run Test using traffic geneator + :: + On traffic generator system: + cd <trex eg v2.28/stl> + Update the bench.py to generate the traffic. + + class STLBench(object): + ip_range = {} + ip_range['src'] = {'start': '<traffic generator port 0 IP eg 202.16.100.20>', 'end': '<traffic generator port 0 IP eg 202.16.100.20>'} + ip_range['dst'] = {'start': '<traffic generator port 1 IP eg 172.16.40.20>', 'end': '<traffic generator port 1 IP eg 172.16.40.20>'} + cd <trex eg v2.28> + Run the TRex server: sudo ./t-rex-64 -i -c 7 + In another shell run TRex console: trex-console + The console can be run from another computer with -s argument, --help for more info. + Other options for TRex client are automation or GUI + In the console, run "tui" command, and then send the traffic with commands like: + start -f stl/bench.py -m 50% --port 0 3 -t size=590,vm=var1 + For more details refer: https://trex-tgn.cisco.com/trex/doc/trex_stateless_bench.html Virtual Access Control list - How to run ---------------------------------------- -Step 3: Bind the datapath ports to DPDK :: - a. Bind ports to DPDK +Step 3: Bind the datapath ports to DPDK + :: + a) Bind ports to DPDK For DPDK versions 17.xx - 1. cd <samplevnf>/dpdk - 2. ./usertools/dpdk-devbind.py --status <--- List the network device - 3. ./usertools/dpdk-devbind.py -b igb_uio <PCI Port 0> <PCI Port 1> + 1) cd <samplevnf>/dpdk + 2) ./usertools/dpdk-devbind.py --status <--- List the network device + 3) ./usertools/dpdk-devbind.py -b igb_uio <PCI Port 0> <PCI Port 1> .. _More details: http://dpdk.org/doc/guides-17.05/linux_gsg/build_dpdk.html#binding-and-unbinding-network-ports-to-from-the-kernel-modules - b. 
Prepare script to enalble VNF to route the packets cd <samplevnf>/VNFs/vACL/config Open -> IPv4_swlb_acl.tc. Replace the bold items based on your setting. @@ -295,39 +312,41 @@ Step 3: Bind the datapath ports to DPDK :: p acl add 2 <traffic generator port 0 IP eg 202.16.100.20> 8 <traffic generator port 1 IP eg 172.16.40.20> 8 0 65535 0 65535 0 0 1 p acl add 2 <traffic generator port 1 IP eg 172.16.40.20> 8 <traffic generator port 0 IP eg 202.16.100.20> 8 0 65535 0 65535 0 0 0 p acl applyruleset - c. Run below cmd to launch the VNF. Please make sure both hugepages and ports to be used are bind to dpdk. + c) Run below cmd to launch the VNF. Please make sure both hugepages and the ports to be used are bound to DPDK. cd <samplevnf>/VNFs/vFW/ ./build/vFW -p 0x3 -f ./config/IPv4_swlb_acl_1LB_1t.cfg -s ./config/IPv4_swlb_acl.tc. -step 4: Run Test using traffic geneator :: - On traffic generator system: - cd <trex eg v2.28/stl> - Update the bench.py to generate the traffic. - - class STLBench(object): - ip_range = {} - ip_range['src'] = {'start': '<traffic generator port 0 IP eg 202.16.100.20>', 'end': '<traffic generator port 0 IP eg 202.16.100.20>'} - ip_range['dst'] = {'start': '<traffic generator port 1 IP eg 172.16.40.20>', 'end': '<traffic generator port 1 IP eg 172.16.40.20>'} - cd <trex eg v2.28> - Run the TRex server: sudo ./t-rex-64 -i -c 7 - In another shell run TRex console: trex-console - The console can be run from another computer with -s argument, --help for more info. - Other options for TRex client are automation or GUI - In the console, run "tui" command, and then send the traffic with commands like: - start -f stl/bench.py -m 50% --port 0 3 -t size=590,vm=var1 - For more details refer: https://trex-tgn.cisco.com/trex/doc/trex_stateless_bench.html +Step 4: Run Test using traffic generator + :: + On traffic generator system: + cd <trex eg v2.28/stl> + Update the bench.py to generate the traffic. 
+ + class STLBench(object): + ip_range = {} + ip_range['src'] = {'start': '<traffic generator port 0 IP eg 202.16.100.20>', 'end': '<traffic generator port 0 IP eg 202.16.100.20>'} + ip_range['dst'] = {'start': '<traffic generator port 1 IP eg 172.16.40.20>', 'end': '<traffic generator port 1 IP eg 172.16.40.20>'} + cd <trex eg v2.28> + Run the TRex server: sudo ./t-rex-64 -i -c 7 + In another shell run TRex console: trex-console + The console can be run from another computer with -s argument, --help for more info. + Other options for TRex client are automation or GUI + In the console, run "tui" command, and then send the traffic with commands like: + start -f stl/bench.py -m 50% --port 0 3 -t size=590,vm=var1 + For more details refer: https://trex-tgn.cisco.com/trex/doc/trex_stateless_bench.html Virtual Access Control list - How to run ---------------------------------------- -Step 3: Bind the datapath ports to DPDK :: - a. Bind ports to DPDK +Step 3: Bind the datapath ports to DPDK + :: + a) Bind ports to DPDK For DPDK versions 17.xx - 1. cd <samplevnf>/dpdk - 2. ./usertools/dpdk-devbind.py --status <--- List the network device - 3. ./usertools/dpdk-devbind.py -b igb_uio <PCI Port 0> <PCI Port 1> + 1) cd <samplevnf>/dpdk + 2) ./usertools/dpdk-devbind.py --status <--- List the network device + 3) ./usertools/dpdk-devbind.py -b igb_uio <PCI Port 0> <PCI Port 1> .. _More details: http://dpdk.org/doc/guides-17.05/linux_gsg/build_dpdk.html#binding-and-unbinding-network-ports-to-from-the-kernel-modules - b. Prepare script to enalble VNF to route the packets + b) Prepare script to enalble VNF to route the packets cd <samplevnf>/VNFs/vACL/config Open -> IPv4_swlb_acl.tc. Replace the bold items based on your setting. 
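The bench.py edit in step 4 above boils down to pointing the STLBench ip_range at the addresses configured on the VNF ports. A minimal illustration of the structure being edited (the field layout follows the stl/bench.py excerpt in this guide; the concrete addresses are the example values used throughout, so substitute your own):

```python
# Sketch of the ip_range edit made to TRex's stl/bench.py in step 4.
# The IPs below are the example values from this guide, not real ones.
ip_range = {
    'src': {'start': '202.16.100.20', 'end': '202.16.100.20'},  # TG port 0
    'dst': {'start': '172.16.40.20',  'end': '172.16.40.20'},   # TG port 1
}

# A single-address range (start == end) keeps all generated packets on the
# same endpoints, which is what the single port pair tests expect.
assert ip_range['src']['start'] == ip_range['src']['end']
```

For range-based traffic, widen 'end' relative to 'start'; TRex will sweep the addresses in between.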
@@ -360,39 +379,41 @@ Step 3: Bind the datapath ports to DPDK :: p acl add 2 <traffic generator port 0 IP eg 202.16.100.20> 8 <traffic generator port 1 IP eg 172.16.40.20> 8 0 65535 0 65535 0 0 1 p acl add 2 <traffic generator port 1 IP eg 172.16.40.20> 8 <traffic generator port 0 IP eg 202.16.100.20> 8 0 65535 0 65535 0 0 0 p acl applyruleset - c. Run below cmd to launch the VNF. Please make sure both hugepages and ports to be used are bind to dpdk. + c) Run below cmd to launch the VNF. Please make sure both hugepages and ports to be used are bind to dpdk. cd <samplevnf>/VNFs/vACL/ ./build/vACL -p 0x3 -f ./config/IPv4_swlb_acl_1LB_1t.cfg -s ./config/IPv4_swlb_acl.tc. -step 4: Run Test using traffic geneator :: - On traffic generator system: - cd <trex eg v2.28/stl> - Update the bench.py to generate the traffic. - - class STLBench(object): - ip_range = {} - ip_range['src'] = {'start': '<traffic generator port 0 IP eg 202.16.100.20>', 'end': '<traffic generator port 0 IP eg 202.16.100.20>'} - ip_range['dst'] = {'start': '<traffic generator port 1 IP eg 172.16.40.20>', 'end': '<traffic generator port 1 IP eg 172.16.40.20>'} - cd <trex eg v2.28> - Run the TRex server: sudo ./t-rex-64 -i -c 7 - In another shell run TRex console: trex-console - The console can be run from another computer with -s argument, --help for more info. - Other options for TRex client are automation or GUI - In the console, run "tui" command, and then send the traffic with commands like: - start -f stl/bench.py -m 50% --port 0 3 -t size=590,vm=var1 - For more details refer: https://trex-tgn.cisco.com/trex/doc/trex_stateless_bench.html +step 4: Run Test using traffic geneator + :: + On traffic generator system: + cd <trex eg v2.28/stl> + Update the bench.py to generate the traffic. 
+ + class STLBench(object): + ip_range = {} + ip_range['src'] = {'start': '<traffic generator port 0 IP eg 202.16.100.20>', 'end': '<traffic generator port 0 IP eg 202.16.100.20>'} + ip_range['dst'] = {'start': '<traffic generator port 1 IP eg 172.16.40.20>', 'end': '<traffic generator port 1 IP eg 172.16.40.20>'} + cd <trex eg v2.28> + Run the TRex server: sudo ./t-rex-64 -i -c 7 + In another shell run TRex console: trex-console + The console can be run from another computer with -s argument, --help for more info. + Other options for TRex client are automation or GUI + In the console, run "tui" command, and then send the traffic with commands like: + start -f stl/bench.py -m 50% --port 0 3 -t size=590,vm=var1 + For more details refer: https://trex-tgn.cisco.com/trex/doc/trex_stateless_bench.html vCGNAPT - How to run ---------------------------------------- -Step 3: Bind the datapath ports to DPDK :: - a. Bind ports to DPDK +Step 3: Bind the datapath ports to DPDK + :: + a) Bind ports to DPDK For DPDK versions 17.xx - 1. cd <samplevnf>/dpdk - 2. ./usertools/dpdk-devbind.py --status <--- List the network device - 3. ./usertools/dpdk-devbind.py -b igb_uio <PCI Port 0> <PCI Port 1> + 1) cd <samplevnf>/dpdk + 2) ./usertools/dpdk-devbind.py --status <--- List the network device + 3) ./usertools/dpdk-devbind.py -b igb_uio <PCI Port 0> <PCI Port 1> .. _More details: http://dpdk.org/doc/guides-17.05/linux_gsg/build_dpdk.html#binding-and-unbinding-network-ports-to-from-the-kernel-modules - b. Prepare script to enalble VNF to route the packets + b) Prepare script to enable VNF to route the packets cd <samplevnf>/VNFs/vCGNAPT/config Open -> sample_swlb_2port_2WT.tc Replace the bold items based on your setting. @@ -414,15 +435,16 @@ Step 3: Bind the datapath ports to DPDK :: ; IPv4 static ARP; disable if dynamic arp is enabled. 
p 1 arpadd 0 <traffic generator port 0 IP eg 202.16.100.20> <traffic generator port 0 MAC> p 1 arpadd 1 <traffic generator port 1 IP eg 172.16.40.20> <traffic generator port 1 MAC> - For dynamic cgnapt. Please use UDP_Replay as one of the traffic generator - (TG1) (port 0) --> (port 0) VNF (CGNAPT) (Port 1) --> (port0)(UDPReplay) + For dynamic cgnapt. Please use UDP_Replay as one of the traffic generator + (TG1) (port 0) --> (port 0) VNF (CGNAPT) (Port 1) --> (port0)(UDPReplay) - c. Run below cmd to launch the VNF. Please make sure both hugepages and ports to be used are bind to dpdk. + c) Run below cmd to launch the VNF. Please make sure both hugepages and ports to be used are bind to dpdk. cd <samplevnf>/VNFs/vCGNAPT/ ./build/vCGNAPT -p 0x3 -f ./config/sample_swlb_2port_2WT.cfg -s ./config/sample_swlb_2port_2WT.tc -step 4: Run Test using traffic geneator :: +step 4: Run Test using traffic geneator + :: On traffic generator system: cd <trex eg v2.28/stl> Update the bench.py to generate the traffic. @@ -443,41 +465,43 @@ step 4: Run Test using traffic geneator :: UDP_Replay - How to run ---------------------------------------- -Step 3: Bind the datapath ports to DPDK :: - a. Bind ports to DPDK +Step 3: Bind the datapath ports to DPDK + :: + a) Bind ports to DPDK For DPDK versions 17.xx - 1. cd <samplevnf>/dpdk - 2. ./usertools/dpdk-devbind.py --status <--- List the network device - 3. ./usertools/dpdk-devbind.py -b igb_uio <PCI Port 0> <PCI Port 1> + 1) cd <samplevnf>/dpdk + 2) ./usertools/dpdk-devbind.py --status <--- List the network device + 3) ./usertools/dpdk-devbind.py -b igb_uio <PCI Port 0> <PCI Port 1> .. _More details: http://dpdk.org/doc/guides-17.05/linux_gsg/build_dpdk.html#binding-and-unbinding-network-ports-to-from-the-kernel-modules - b. Run below cmd to launch the VNF. Please make sure both hugepages and ports to be used are bind to dpdk. + b) Run below cmd to launch the VNF. Please make sure both hugepages and ports to be used are bind to dpdk. 
 cd <samplevnf>/VNFs/UDP_Replay/
 cmd: ./build/UDP_Replay -c 0x7 -n 4 -w <pci> -w <pci> -- --no-hw-csum -p <portmask> --config='(port, queue, cpucore)'
 e.g ./build/UDP_Replay -c 0x7 -n 4 -w 0000:07:00.0 -w 0000:07:00.1 -- --no-hw-csum -p 0x3 --config='(0, 0, 1)(1, 0, 2)'

-step 4: Run Test using traffic geneator ::
-  On traffic generator system:
-  cd <trex eg v2.28/stl>
-  Update the bench.py to generate the traffic.
-
-    class STLBench(object):
-    ip_range = {}
-    ip_range['src'] = {'start': '<traffic generator port 0 IP eg 202.16.100.20>', 'end': '<traffic generator port 0 IP eg 202.16.100.20>'}
-    ip_range['dst'] = {'start': '<traffic generator port 1 IP eg 172.16.40.20>', 'end': '<public ip e.g 152.16.40.10>'}
-  cd <trex eg v2.28>
-  Run the TRex server: sudo ./t-rex-64 -i -c 7
-  In another shell run TRex console: trex-console
-  The console can be run from another computer with -s argument, --help for more info.
-  Other options for TRex client are automation or GUI
-  In the console, run "tui" command, and then send the traffic with commands like:
-  start -f stl/bench.py -m 50% --port 0 3 -t size=590,vm=var1
-  For more details refer: https://trex-tgn.cisco.com/trex/doc/trex_stateless_bench.html
+Step 4: Run test using traffic generator
+  ::
+    On traffic generator system:
+    cd <trex eg v2.28/stl>
+    Update the bench.py to generate the traffic.
+
+      class STLBench(object):
+          ip_range = {}
+          ip_range['src'] = {'start': '<traffic generator port 0 IP eg 202.16.100.20>', 'end': '<traffic generator port 0 IP eg 202.16.100.20>'}
+          ip_range['dst'] = {'start': '<traffic generator port 1 IP eg 172.16.40.20>', 'end': '<public ip e.g 152.16.40.10>'}
+    cd <trex eg v2.28>
+    Run the TRex server: sudo ./t-rex-64 -i -c 7
+    In another shell, run the TRex console: trex-console
+    The console can be run from another computer with the -s argument; use --help for more info.
+    Other options for the TRex client are automation or the GUI.
+    In the console, run the "tui" command, and then send the traffic with commands like:
+    start -f stl/bench.py -m 50% --port 0 3 -t size=590,vm=var1
+    For more details, refer to: https://trex-tgn.cisco.com/trex/doc/trex_stateless_bench.html

 PROX - How to run
----------------------
+------------------

 Description
------------
+^^^^^^^^^^^
 This is PROX, the Packet pROcessing eXecution engine, part of Intel(R)
 Data Plane Performance Demonstrators, and formerly known as DPPD-BNG.
 PROX is a DPDK-based application implementing Telco use-cases such as
@@ -485,7 +509,7 @@ a simplified BRAS/BNG, light-weight AFTR... It also allows configuring
 finer grained network functions like QoS, Routing, load-balancing...

 Compiling and running this application
---------------------------------------
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 This application supports DPDK 16.04, 16.11, 17.02 and 17.05.
 The following commands assume that the following variables have been set:
@@ -493,20 +517,20 @@ export RTE_SDK=/path/to/dpdk
 export RTE_TARGET=x86_64-native-linuxapp-gcc

 Example: DPDK 17.05 installation
---------------------------------
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-git clone http://dpdk.org/git/dpdk
-cd dpdk
-git checkout v17.05
-make install T=$RTE_TARGET
+* git clone http://dpdk.org/git/dpdk
+* cd dpdk
+* git checkout v17.05
+* make install T=$RTE_TARGET

 PROX compilation
-----------------
+^^^^^^^^^^^^^^^^
 The Makefile with this application expects RTE_SDK to point to the
 root directory of DPDK (e.g. export RTE_SDK=/root/dpdk). If RTE_TARGET has
 not been set, x86_64-native-linuxapp-gcc will be assumed.

 Running PROX
-------------
+^^^^^^^^^^^^
 After DPDK has been set up, run make from the directory where you have
 extracted this application. A build directory will be created
 containing the PROX executable. The usage of the application is shown
@@ -514,8 +538,8 @@ below.
 Note that this application assumes that all required ports have been bound to the DPDK provided igb_uio
 driver. Refer to the "Getting Started Guide - DPDK" document for more details.

-Usage: ./build/prox [-f CONFIG_FILE] [-l LOG_FILE] [-p] [-o DISPLAY] [-v] [-a|-e] \
-       [-m|-s|-i] [-n] [-w DEF] [-q] [-k] [-d] [-z] [-r VAL] [-u] [-t]
+::
+  Usage: ./build/prox [-f CONFIG_FILE] [-l LOG_FILE] [-p] [-o DISPLAY] [-v] [-a|-e] [-m|-s|-i] [-n] [-w DEF] [-q] [-k] [-d] [-z] [-r VAL] [-u] [-t]
        -f CONFIG_FILE : configuration file to load, ./prox.cfg by default
        -l LOG_FILE : log file name, ./prox.log by default
        -p : include PID in log file name if default log file is used
@@ -547,7 +571,7 @@ application from the source directory execute:

   user@target:~$ ./build/prox -f ./config/nop.cfg

 Provided example configurations
--------------------------------
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 PROX can be configured either as the SUT (System Under Test) or as the
 Traffic Generator. Some example configuration files are provided, both in the
 config directory to run PROX as a SUT, and in the gen directory
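The commands throughout these how-to-run sections pass hexadecimal core and port masks (e.g. ``-c 0x7``, ``-p 0x3``) to the VNFs and to PROX. As a minimal illustration of how such a mask maps to core/port indices, the sketch below decodes one into an explicit list. The ``decode_mask`` helper is a hypothetical example for this guide, not part of SampleVNF, DPDK or PROX:

```python
# Hypothetical helper (illustration only, not part of SampleVNF/DPDK/PROX):
# decode a hex core/port mask string into the list of bit positions it sets.
def decode_mask(mask):
    """Return the core/port indices selected by a hex mask string."""
    value = int(mask, 16)
    return [bit for bit in range(value.bit_length()) if value & (1 << bit)]

# -p 0x3 selects ports 0 and 1; -c 0x7 selects cores 0, 1 and 2.
print(decode_mask("0x3"))  # [0, 1]
print(decode_mask("0x7"))  # [0, 1, 2]
```

Reading masks this way makes it easier to check that the cores named in a ``--config='(port, queue, cpucore)'`` tuple are actually enabled by the ``-c`` mask on the same command line.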