-rw-r--r--  VNFs/UDP_Replay/main.c | 4
-rw-r--r--  docs/INSTALL.rst (renamed from INSTALL.rst) | 18
-rw-r--r--  docs/RELEASE_NOTES.rst (renamed from RELEASE_NOTES.rst) | 18
-rw-r--r--  docs/index.rst | 13
-rw-r--r--  docs/vACL/INSTALL.rst | 205
-rw-r--r--  docs/vACL/README.rst | 85
-rw-r--r--  docs/vACL/RELEASE_NOTES.rst | 64
-rw-r--r--  docs/vACL/index.rst | 11
-rw-r--r--  docs/vCGNAPT/INSTALL.rst | 175
-rw-r--r--  docs/vCGNAPT/README.rst | 58
-rw-r--r--  docs/vCGNAPT/RELEASE_NOTES.rst | 68
-rw-r--r--  docs/vCGNAPT/index.rst | 11
-rw-r--r--  docs/vFW/INSTALL.rst | 212
-rw-r--r--  docs/vFW/README.rst | 78
-rw-r--r--  docs/vFW/RELEASE_NOTES.rst | 66
-rw-r--r--  docs/vFW/index.rst | 11
16 files changed, 690 insertions, 407 deletions
diff --git a/VNFs/UDP_Replay/main.c b/VNFs/UDP_Replay/main.c
index e6bc8aa3..1b37c181 100644
--- a/VNFs/UDP_Replay/main.c
+++ b/VNFs/UDP_Replay/main.c
@@ -2890,7 +2890,7 @@ main(int argc, char **argv)
int ret;
unsigned nb_ports;
unsigned lcore_id;
- uint32_t n_tx_queue, nb_lcores;
+ uint32_t n_tx_queue;
uint8_t portid, nb_rx_queue;
struct cmdline *cl;
uint32_t size;
@@ -2927,8 +2927,6 @@ main(int argc, char **argv)
if (check_port_config(nb_ports) < 0)
rte_exit(EXIT_FAILURE, "check_port_config failed\n");
- nb_lcores = rte_lcore_count();
-
/*
*Configuring port_config_t structure for interface manager initialization
*/
diff --git a/INSTALL.rst b/docs/INSTALL.rst
index 310bffe4..6c321846 100644
--- a/INSTALL.rst
+++ b/docs/INSTALL.rst
@@ -4,26 +4,30 @@
.. (c) opnfv, national center of scientific research "demokritos" and others.
============================
-SAMPLEVNF Installation Guide
+samplevnf Installation Guide
============================
Introduction
============
This project provides a placeholder for various sample VNF (Virtual Network Function)
development which includes example reference architecture and optimization methods
-related to VNF/Network service for high performance VNFs.
+related to VNF/Network service for high performance VNFs.
The sample VNFs are Open Source approximations* of Telco grade VNF’s using
optimized VNF + NFVi Infrastructure libraries, with Performance Characterization
of Sample† Traffic Flows.
-• * Not a commercial product. Encourage the community to contribute and close the feature gaps.
-• † No Vendor/Proprietary Workloads
+
+::
+
+ * Not a commercial product. Encourage the community to contribute and close the feature gaps.
+ † No Vendor/Proprietary Workloads
VNF supported
=============
- 1. CG-NAT (Carrier Grade Network Address Translation) VNF
- 2. Firewall (vFW) VNF
- 4. Access Control List (vACL) VNF
+
+ - Carrier Grade Network Address Translation (CG-NAT) VNF
+ - Firewall (vFW) VNF
+ - Access Control List (vACL) VNF
Please refer docs folder for individual VNF Installation guide.
diff --git a/RELEASE_NOTES.rst b/docs/RELEASE_NOTES.rst
index 9dd4fef8..fa9ab919 100644
--- a/RELEASE_NOTES.rst
+++ b/docs/RELEASE_NOTES.rst
@@ -4,25 +4,29 @@
.. (c) opnfv, national center of scientific research "demokritos" and others.
==========================
-Sample VNF Release Notes
+samplevnf Release Notes
==========================
Introduction
============
This project provides a placeholder for various sample VNF (Virtual Network Function)
development which includes example reference architecture and optimization methods
-related to VNF/Network service for high performance VNFs.
+related to VNF/Network service for high performance VNFs.
The sample VNFs are Open Source approximations* of Telco grade VNF’s using
optimized VNF + NFVi Infrastructure libraries, with Performance Characterization
of Sample† Traffic Flows.
-• * Not a commercial product. Encourage the community to contribute and close the feature gaps.
-• † No Vendor/Proprietary Workloads
+
+::
+
+ * Not a commercial product. Encourage the community to contribute and close the feature gaps.
+ † No Vendor/Proprietary Workloads
VNF supported
=============
- 1. CG-NAT (Carrier Grade Network Address Translation) VNF
- 2. Firewall (vFW) VNF
- 4. Access Control List (vACL) VNF
+
+ - Carrier Grade Network Address Translation (CG-NAT) VNF
+ - Firewall (vFW) VNF
+ - Access Control List (vACL) VNF
Please refer docs folder for individual VNF release notes.
diff --git a/docs/index.rst b/docs/index.rst
new file mode 100644
index 00000000..050fa670
--- /dev/null
+++ b/docs/index.rst
@@ -0,0 +1,13 @@
+####################
+samplevnf
+####################
+
+.. toctree::
+ :numbered:
+ :maxdepth: 2
+
+ RELEASE_NOTES.rst
+ INSTALL.rst
+ vCGNAPT/index.rst
+ vFW/index.rst
+ vACL/index.rst
diff --git a/docs/vACL/INSTALL.rst b/docs/vACL/INSTALL.rst
index e00c6b24..7f21fc1f 100644
--- a/docs/vACL/INSTALL.rst
+++ b/docs/vACL/INSTALL.rst
@@ -12,165 +12,222 @@ vACL Compilation
After downloading (or doing a git clone) in a directory (samplevnf)
-###### Dependencies
-* DPDK 16.04: Downloaded and installed via vnf_build.sh or manually from [here](http://fast.dpdk.org/rel/dpdk-16.04.tar.xz)
-Both the options are available as part of vnf_build.sh below.
-* libpcap-dev
-* libzmq
-* libcurl
-
-###### Environment variables
-
+-------------
+Dependencies
+-------------
+
+- DPDK supported versions ($DPDK_RTE_VER = 16.04, 16.11, 17.02 or 17.05): Downloaded and installed via vnf_build.sh or manually from [here] (http://fast.dpdk.org/rel/)
+- libpcap-dev
+- libzmq
+- libcurl
+
+---------------------
+Environment variables
+---------------------
Apply all the additional patches in 'patches/dpdk_custom_patch/' and build dpdk
::
- export RTE_SDK=<dpdk 16.04 directory>
+
+ export RTE_SDK=<dpdk directory>
export RTE_TARGET=x86_64-native-linuxapp-gcc
This is done by vnf_build.sh script.
Auto Build:
-==========
+===========
$ ./tools/vnf_build.sh in samplevnf root folder
-Follow the steps in the screen from option [1] --> [8] and select option [7]
+Follow the steps in the screen from option [1] --> [9] and select option [8]
to build the vnfs.
-It will automatically download DPDK 16.04 and any required patches and will setup
-everything and build vACL VNFs.
+It will automatically download selected DPDK version and any required patches
+and will setup everything and build vACL VNFs.
Following are the options for setup:
::
- ----------------------------------------------------------
- Step 1: Environment setup.
- ----------------------------------------------------------
- [1] Check OS and network connection
+ ----------------------------------------------------------
+ Step 1: Environment setup.
+ ----------------------------------------------------------
+ [1] Check OS and network connection
+ [2] Select DPDK RTE version
- ----------------------------------------------------------
- Step 2: Download and Install
- ----------------------------------------------------------
- [2] Agree to download
- [3] Download packages
- [4] Download DPDK zip (optional, use it when option 4 fails)
- [5] Install DPDK
- [6] Setup hugepages
+ ----------------------------------------------------------
+ Step 2: Download and Install
+ ----------------------------------------------------------
+ [3] Agree to download
+ [4] Download packages
+ [5] Download DPDK zip
+ [6] Build and Install DPDK
+ [7] Setup hugepages
- ----------------------------------------------------------
- Step 3: Build VNF
- ----------------------------------------------------------
- [7] Build VNF
+ ----------------------------------------------------------
+ Step 3: Build VNFs
+ ----------------------------------------------------------
+ [8] Build all VNFs (vACL, vCGNAPT, vFW, UDP_Replay)
- [8] Exit Script
+ [9] Exit Script
A vACL executable will be created at the following location
samplevnf/VNFs/vACL/build/vACL
Manual Build:
-============
-1. Download DPDK 16.04 from dpdk.org
- - http://dpdk.org/browse/dpdk/snapshot/dpdk-16.04.zip
-2. unzip dpdk-16.04 and apply dpdk patch
- - cd dpdk-16.04
- - patch -p0 < VNF_CORE/patches/dpdk_custom_patch/rte_pipeline.patch
- - patch -p1 < VNF_CORE/patches/dpdk_custom_patch/i40e-fix-link-management.patch
- - patch -p1 < VNF_CORE/patches/dpdk_custom_patch/i40e-fix-Rx-hang-when-disable-LLDP.patch
- - patch -p1 < VNF_CORE/patches/dpdk_custom_patch/i40e-fix-link-status-change-interrupt.patch
- - patch -p1 < VNF_CORE/patches/dpdk_custom_patch/i40e-fix-VF-bonded-device-link-down.patch
+=============
+1. Download DPDK supported version from dpdk.org
+
+ - http://dpdk.org/browse/dpdk/snapshot/dpdk-$DPDK_RTE_VER.zip
+
+2. unzip dpdk-$DPDK_RTE_VER.zip and apply dpdk patches only in case of 16.04
+ (Not required for other DPDK versions)
+
+ - cd dpdk
+
+ - patch -p1 < VNF_CORE/patches/dpdk_custom_patch/i40e-fix-link-management.patch
+ - patch -p1 < VNF_CORE/patches/dpdk_custom_patch/i40e-fix-Rx-hang-when-disable-LLDP.patch
+ - patch -p1 < VNF_CORE/patches/dpdk_custom_patch/i40e-fix-link-status-change-interrupt.patch
+ - patch -p1 < VNF_CORE/patches/dpdk_custom_patch/i40e-fix-VF-bonded-device-link-down.patch
+ - patch -p1 < $VNF_CORE/patches/dpdk_custom_patch/disable-acl-debug-logs.patch
+ - patch -p1 < $VNF_CORE/patches/dpdk_custom_patch/set-log-level-to-info.patch
+
- build dpdk
- - make config T=x86_64-native-linuxapp-gcc O=x86_64-native-linuxapp-gcc
- - cd x86_64-native-linuxapp-gcc
- - make
+
+ - make config T=x86_64-native-linuxapp-gcc O=x86_64-native-linuxapp-gcc
+ - cd x86_64-native-linuxapp-gcc
+ - make
+
- Setup huge pages
- - For 1G/2M hugepage sizes, for example 1G pages, the size must be specified
- explicitly and can also be optionally set as the default hugepage size for
- the system. For example, to reserve 8G of hugepage memory in the form of
- eight 1G pages, the following options should be passed to the kernel:
- * default_hugepagesz=1G hugepagesz=1G hugepages=8 hugepagesz=2M hugepages=2048
- - Add this to Go to /etc/default/grub configuration file.
- - Append "default_hugepagesz=1G hugepagesz=1G hugepages=8 hugepagesz=2M hugepages=2048"
- to the GRUB_CMDLINE_LINUX entry.
+
+ - For 1G/2M hugepage sizes, for example 1G pages, the size must be specified
+ explicitly and can also be optionally set as the default hugepage
+ size for the system. For example, to reserve 8G of hugepage memory
+ in the form of eight 1G pages, the following options should be passed
+ to the kernel:
+ * default_hugepagesz=1G hugepagesz=1G hugepages=8 hugepagesz=2M hugepages=2048
+ - Add this to the /etc/default/grub configuration file.
+ - Append "default_hugepagesz=1G hugepagesz=1G hugepages=8 hugepagesz=2M hugepages=2048"
+ to the GRUB_CMDLINE_LINUX entry.
+
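
After rebooting with the updated GRUB entry, the reservation can be confirmed from /proc/meminfo (the HugePages_Total, HugePages_Free and Hugepagesize counters). The snippet below is only an illustrative helper, not part of the samplevnf sources:

.. code-block:: c

    /* hugepage_check.c - print the hugepage counters exposed by the kernel.
     * Equivalent to `grep Huge /proc/meminfo`; illustration only. */
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        FILE *f = fopen("/proc/meminfo", "r");
        char line[256];

        if (f == NULL) {
            perror("fopen /proc/meminfo");
            return 1;
        }
        while (fgets(line, sizeof(line), f) != NULL) {
            if (strncmp(line, "HugePages", 9) == 0 ||
                strncmp(line, "Hugepagesize", 12) == 0)
                fputs(line, stdout);
        }
        fclose(f);
        return 0;
    }
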
3. Setup Environment Variable
- - export RTE_SDK=<samplevnf>/dpdk-16.04
+
+ - export RTE_SDK=<samplevnf>/dpdk
- export RTE_TARGET=x86_64-native-linuxapp-gcc
- export VNF_CORE=<samplevnf>
- or using ./toot/setenv.sh
+
+ or using ./tools/setenv.sh
+
4. Build vACL VNFs
+
- cd <samplevnf>/VNFs/vACL
- make clean
- make
+
5. The vACL executable will be created at the following location
+
- <samplevnf>/VNFs/vACL/build/vACL
Run
====
-Setup Port to run VNF:
----------------------
+Setup Port to run VNF
+----------------------
+
::
- 1. cd <samplevnf>/dpdk-16.04
- 3. ./tool/dpdk_nic_bind.py --status <--- List the network device
- 2. ./tool/dpdk_nic_bind.py -b igb_uio <PCI Port 0> <PCI Port 1>
+
+ For DPDK versions 16.04
+ 1. cd <samplevnf>/dpdk
+ 2. ./tools/dpdk_nic_bind.py --status <--- List the network device
+ 3. ./tools/dpdk_nic_bind.py -b igb_uio <PCI Port 0> <PCI Port 1>
.. _More details: http://dpdk.org/doc/guides-16.04/linux_gsg/build_dpdk.html#binding-and-unbinding-network-ports-to-from-the-kernel-modules
+ For DPDK versions 16.11
+ 1. cd <samplevnf>/dpdk
+ 2. ./tools/dpdk-devbind.py --status <--- List the network device
+ 3. ./tools/dpdk-devbind.py -b igb_uio <PCI Port 0> <PCI Port 1>
+ .. _More details: http://dpdk.org/doc/guides-16.11/linux_gsg/build_dpdk.html#binding-and-unbinding-network-ports-to-from-the-kernel-modules
+
+ For DPDK versions 17.xx
+ 1. cd <samplevnf>/dpdk
+ 2. ./usertools/dpdk-devbind.py --status <--- List the network device
+ 3. ./usertools/dpdk-devbind.py -b igb_uio <PCI Port 0> <PCI Port 1>
+ .. _More details: http://dpdk.org/doc/guides-17.05/linux_gsg/build_dpdk.html#binding-and-unbinding-network-ports-to-from-the-kernel-modules
+
+
Make the necessary changes to the config files to run the vACL VNF
- eg: ports_mac_list = 00:00:00:30:21:00 00:00:00:30:21:00
+ eg: ports_mac_list = 00:00:00:30:21:00 00:00:00:30:21:00
-ACL
---------------
+-----------------
+ACL run commands
+-----------------
Update the configuration according to system configuration.
::
+
./build/vACL -p <port mask> -f <config> -s <script> - SW_LoadB
+
./build/vACL -p <port mask> -f <config> -s <script> -hwlb <num_WT> - HW_LoadB
Run IPv4
-----------
+--------
+
::
- Software LoadB
- --------------
+
+ Software LoadB
+
cd <samplevnf>/VNFs/vACL/
+
./build/vACL -p 0x3 -f ./config/IPv4_swlb_acl_1LB_1t.cfg -s ./config/ IPv4_swlb_acl.tc
- Hardware LoadB
- --------------
+ Hardware LoadB
+
cd <samplevnf>/VNFs/vACL/
+
./build/vACL -p 0x3 -f ./config/IPv4_hwlb_acl_1LB_1t.cfg -s ./config/IPv4_hwlb_acl.tc --hwlb 1
Run IPv6
----------
+--------
+
::
+
Software LoadB
- --------------
+
cd <samplevnf>/VNFs/vACL/
+
./build/vACL -p 0x3 -f ./config/IPv6_swlb_acl_1LB_1t.cfg -s ./config/IPv6_swlb_acl.tc
Hardware LoadB
- --------------
+
cd <samplevnf>/VNFs/vACL/
+
./build/vACL -p 0x3 -f ./config/IPv6_hwlb_acl_1LB_1t.cfg -s ./config/IPv6_hwlb_acl.tc --hwlb 1
-vACL execution on BM & SRIOV:
+vACL execution on BM & SRIOV
--------------------------------
+To run the VNF, execute the following
+
::
- To run the VNF, execute the following:
+
samplevnf/VNFs/vACL# ./build/vACL -p 0x3 -f ./config/IPv4_swlb_acl_1LB_1t.cfg -s ./config/ IPv4_swlb_acl.tc
+
Command Line Params:
-p PORTMASK: Hexadecimal bitmask of ports to configure
-f CONFIG FILE: vACL configuration file
-s SCRIPT FILE: vACL script file
-vACL execution on OVS:
+vACL execution on OVS
-------------------------
+To run the VNF, execute the following:
+
::
- To run the VNF, execute the following:
+
samplevnf/VNFs/vACL# ./build/vACL -p 0x3 -f ./config/IPv4_swlb_acl_1LB_1t.cfg -s ./config/ IPv4_swlb_acl.tc --disable-hw-csum
+
Command Line Params:
-p PORTMASK: Hexadecimal bitmask of ports to configure
-f CONFIG FILE: vACL configuration file
-s SCRIPT FILE: vACL script file
---disable-hw-csum :Disable TCP/UDP hw checksum
+ --disable-hw-csum :Disable TCP/UDP hw checksum
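
The -p PORTMASK argument used throughout this guide is a plain hexadecimal bitmask in which bit N selects DPDK port N, so the value 0x3 in the examples enables ports 0 and 1. The small program below is an illustration of how such a mask decodes, not part of the vACL sources:

.. code-block:: c

    /* portmask.c - list the DPDK port ids enabled by a hexadecimal port mask.
     * Example: ./portmask 0x3  ->  ports 0 and 1. Illustration only. */
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        unsigned long mask;
        unsigned int port;

        if (argc != 2) {
            fprintf(stderr, "usage: %s <hex port mask>\n", argv[0]);
            return 1;
        }
        mask = strtoul(argv[1], NULL, 16);
        for (port = 0; port < 32; port++)
            if (mask & (1UL << port))
                printf("port %u enabled\n", port);
        return 0;
    }
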
diff --git a/docs/vACL/README.rst b/docs/vACL/README.rst
index 547d33bc..f8c3e817 100644
--- a/docs/vACL/README.rst
+++ b/docs/vACL/README.rst
@@ -4,18 +4,19 @@
.. (c) opnfv, national center of scientific research "demokritos" and others.
========================================================
-Virtual ACL - vACL
+vACL - README
========================================================
-1. Introduction
-==============
-This application implements Access Control List (ACL). ACL is typically
+Introduction
+=================
+This application implements Access Control List (ACL). ACL is typically
used for rule based policy enforcement. It restricts access to a destination
-IP address/port based on various header fields, such as source IP address/port,
+IP address/port based on various header fields, such as source IP address/port,
destination IP address/port and protocol. It is built on top of DPDK and
uses the packet framework infrastructure.
+----------
About DPDK
----------
The DPDK IP Pipeline Framework provides a set of libraries to build a pipeline
@@ -26,14 +27,14 @@ This document assumes the reader possesses the knowledge of DPDK concepts and
packet framework. For more details, read DPDK Getting Started Guide, DPDK
Programmers Guide, DPDK Sample Applications Guide.
-2. Scope
+Scope
==========
This application provides a standalone DPDK based high performance vACL Virtual
Network Function implementation.
-3. Features
+Features
===========
-The vACL VNF currently supports the following functionality:
+The vACL VNF currently supports the following functionality
• CLI based Run-time rule configuration.(Add, Delete, List, Display, Clear, Modify)
• Ipv4 and ipv6 standard 5 tuple packet Selector support.
• Multithread support
@@ -43,8 +44,8 @@ The vACL VNF currently supports the following functionality:
• ARP (request, response, gratuitous)
• ICMP (terminal echo, echo response, passthrough)
• ICMPv6 and ND (Neighbor Discovery)
-
-4. High Level Design
+
+High Level Design
====================
The ACL Filter performs bulk filtering of incoming packets based on rules in current ruleset,
discarding any packets not permitted by the rules. The mechanisms needed for building the
@@ -58,85 +59,101 @@ The Input and Output FIFOs will be implemented using DPDK Ring Buffers.
The DPDK ACL example: http://dpdk.org/doc/guides/sample_app_ug/l3_forward_access_ctrl.html
#figure-ipv4-acl-rule contains a suitable syntax and parser for ACL rules.
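
Conceptually, each entry of the ruleset pairs a 5-tuple match (prefix masks plus port ranges) with a permit/drop action; the first matching rule wins and unmatched packets get the default action. The sketch below only illustrates that idea with a linear scan; the real vACL relies on the DPDK ACL library and the rule syntax linked above:

.. code-block:: c

    /* Illustrative 5-tuple rule and linear matcher; the actual vACL uses the
     * DPDK ACL classifier, not this loop. */
    #include <stdint.h>
    #include <stdio.h>

    enum acl_action { ACL_DROP = 0, ACL_PERMIT = 1 };

    struct acl_rule {
        uint32_t src_ip, src_mask;   /* source prefix */
        uint32_t dst_ip, dst_mask;   /* destination prefix */
        uint16_t sport_lo, sport_hi; /* source port range */
        uint16_t dport_lo, dport_hi; /* destination port range */
        uint8_t  proto;              /* 6 = TCP, 17 = UDP, 0 = any */
        enum acl_action action;
    };

    static enum acl_action
    classify(const struct acl_rule *r, int n, uint32_t sip, uint32_t dip,
             uint16_t sp, uint16_t dp, uint8_t proto)
    {
        for (int i = 0; i < n; i++, r++) {
            if ((sip & r->src_mask) == (r->src_ip & r->src_mask) &&
                (dip & r->dst_mask) == (r->dst_ip & r->dst_mask) &&
                sp >= r->sport_lo && sp <= r->sport_hi &&
                dp >= r->dport_lo && dp <= r->dport_hi &&
                (r->proto == 0 || r->proto == proto))
                return r->action;    /* first match wins */
        }
        return ACL_DROP;             /* default action */
    }

    int main(void)
    {
        /* permit TCP from 10.0.0.0/24 to any destination and any ports */
        struct acl_rule rules[] = {
            { 0x0A000000, 0xFFFFFF00, 0, 0, 0, 65535, 0, 65535, 6, ACL_PERMIT },
        };
        printf("verdict: %d\n",
               classify(rules, 1, 0x0A000005, 0xC0A80001, 1234, 80, 6));
        return 0;
    }
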
-===================
-5. Components of vACL
-===================
-In vACL, each component is constructed using packet framework pipelines.
+Components of vACL
+=======================
+In vACL, each component is constructed using packet framework pipelines.
It includes Rx and Tx Driver, Master pipeline, load balancer pipeline and
vACL worker pipeline components. A Pipeline framework is a collection of input
ports, table(s),output ports and actions (functions).
+---------------------------
Receive and Transmit Driver
-******************************
+---------------------------
Packets will be received in bulk and provided to LoadBalancer(LB) thread.
Transmit takes packets from worker threads in a dedicated ring and sends them to
the hardware queue.
+---------------------------
Master Pipeline
-******************************
+---------------------------
The Master component is part of all the IP Pipeline applications. This component
does not process any packets and should be configured on Core 0, to allow
other cores for processing of the traffic. This component is responsible for
- 1. Initializing each component of the Pipeline application in different threads
- 2. Providing CLI shell for the user control/debug
- 3. Propagating the commands from user to the corresponding components
+1. Initializing each component of the Pipeline application in different threads
+2. Providing CLI shell for the user control/debug
+3. Propagating the commands from user to the corresponding components
+---------------------------
ARPICMP Pipeline
-******************************
+---------------------------
This pipeline processes the ARPICMP packets.
+---------------------------
TXRX Pipelines
-******************************
+---------------------------
The TXTX and RXRX pipelines are pass through pipelines to forward both ingress
and egress traffic to Loadbalancer. This is required when the Software
Loadbalancer is used.
+---------------------------
Load Balancer Pipeline
-******************************
-The vACL support both hardware and software balancing for load blalcning of
+---------------------------
+The vACL supports both hardware and software load balancing of
traffic across multiple VNF threads. The hardware load balancing requires support
from hardware like Flow Director for steering of packets to the application through
-hardware queues.
+hardware queues.
The Software Load balancer is also supported if hardware loadbalancing can't be
used for any reason. The TXRX along with LOADB pipeline provides support for
software load balancing by distributing the flows to Multiple vACL worker
threads.
-Loadbalancer (HW or SW) distributes traffic based on the 5 tuple (src addr, src
+Loadbalancer (HW or SW) distributes traffic based on the 5 tuple (src addr, src
port, dest addr, dest port and protocol) applying an XOR logic distributing to
active worker threads, thereby maintaining an affinity of flows to worker
threads.
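
A minimal sketch of that XOR-based distribution, folding the 5 tuple into a single word and reducing it modulo the number of active worker threads (an illustration of the idea only; the actual LOADB pipeline operates on DPDK mbufs with its own hash):

.. code-block:: c

    /* Illustrative flow-to-worker mapping: the same 5 tuple always maps to the
     * same worker, which is what gives flow affinity. Not the samplevnf code. */
    #include <stdint.h>
    #include <stdio.h>

    static unsigned int
    select_worker(uint32_t src_ip, uint32_t dst_ip, uint16_t src_port,
                  uint16_t dst_port, uint8_t proto, unsigned int n_workers)
    {
        uint32_t h = src_ip ^ dst_ip ^
                     (((uint32_t)src_port << 16) | dst_port) ^ proto;
        h ^= h >> 16;               /* fold the upper half in */
        return h % n_workers;       /* pick one of the active workers */
    }

    int main(void)
    {
        /* the same flow hashed twice lands on the same worker */
        printf("%u\n", select_worker(0x0A000001, 0xC0A80002, 1234, 80, 6, 4));
        printf("%u\n", select_worker(0x0A000001, 0xC0A80002, 1234, 80, 6, 4));
        return 0;
    }
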
+---------------------------
vACL Pipeline
-******************************
+---------------------------
The vACL performs the rule-based packet filtering.
-vACL Topology:
+vACL Topology
------------------------
+
::
+
IXIA(Port 0)-->(Port 0)VNF(Port 1)-->(Port 1) IXIA
operation:
+
Egress --> The packets sent out from ixia(port 0) will be sent through ACL to ixia(port 1).
+
Ingress --> The packets sent out from ixia(port 1) will be sent through ACL to ixia(port 0).
-vACL Topology (L4REPLAY):
+vACL Topology (L4REPLAY)
------------------------------------
+
::
+
IXIA(Port 0)-->(Port 0)VNF(Port 1)-->(Port 0)L4REPLAY
+
operation:
+
Egress --> The packets sent out from ixia will pass through vACL to L3FWD/L4REPLAY.
+
Ingress --> The L4REPLAY upon reception of packets (Private to Public Network),
- will immediately replay back the traffic to IXIA interface. (Pub -->Priv).
+ will immediately replay back the traffic to IXIA interface. (Pub -->Priv).
-How to run L4Replay:
+How to run L4Replay
--------------------
+After the installation of samplevnf
+
::
- 1. After the installation of samplevnf:
+
go to <samplevnf/VNFs/L4Replay>
- 2. ./buid/L4replay -c core_mask -n no_of_channels(let it be as 2) -- -p PORT_MASK --config="(port,queue,lcore)"
+ ./build/L4replay -c core_mask -n no_of_channels(let it be as 2) -- -p PORT_MASK --config="(port,queue,lcore)"
eg: ./L4replay -c 0xf -n 4 -- -p 0x3 --config="(0,0,1)"
-6. Installation, Compile and Execution
------------------------------------------------------------------
+Installation, Compile and Execution
+=======================================
Please refer to <samplevnf>/docs/vACL/INSTALL.rst for installation, configuration, compilation
and execution.
diff --git a/docs/vACL/RELEASE_NOTES.rst b/docs/vACL/RELEASE_NOTES.rst
index b35dcce4..c947a371 100644
--- a/docs/vACL/RELEASE_NOTES.rst
+++ b/docs/vACL/RELEASE_NOTES.rst
@@ -4,25 +4,25 @@
.. (c) OPNFV, National Center of Scientific Research "Demokritos" and others.
=========================================================
-Virtual ACL - vACL
+vACL - Release Notes
=========================================================
-1. Introduction
-================
+Introduction
+===================
This is a beta release for Sample Virtual ACL VNF.
This vACL application can be run independently (refer INSTALL.rst).
-2. User Guide
+User Guide
===============
Refer to README.rst for further details on vACL, HLD, features supported, test
plan. For build configurations and execution requisites please refer to
INSTALL.rst.
-3. Feature for this release
+Feature for this release
===========================
The vACL VNF currently supports the following functionality:
- • CLI based Run-time rule configuration.(Add, Delete, List, Display, Clear, Modify)
+ • CLI based Run-time rule configuration.(Add,Delete,List,Display,Clear,Modify)
• Ipv4 and ipv6 standard 5 tuple packet Selector support.
• Multithread support
• Multiple physical port support
@@ -32,38 +32,50 @@ The vACL VNF currently supports the following functionality:
• ICMP (terminal echo, echo response, passthrough)
• ICMPv6 and ND (Neighbor Discovery)
-4. System requirements - OS and kernel version
+System requirements - OS and kernel version
==============================================
-This is supported on Ubuntu 14.04 and Ubuntu 16.04 and kernel version less than 4.5
+This is supported on Ubuntu 14.04 and 16.04 and kernel version less than 4.5
VNFs on BareMetal support:
- OS: Ubuntu 14.04 or 16.04 LTS
- kernel: < 4.5
- http://releases.ubuntu.com/16.04/
- Download/Install the image: ubuntu-16.04.1-server-amd64.iso
+ OS: Ubuntu 14.04 or 16.04 LTS
+ kernel: < 4.5
+ http://releases.ubuntu.com/16.04/
+ Download/Install the image: ubuntu-16.04.1-server-amd64.iso
VNFs on Standalone Hypervisor
- HOST OS: Ubuntu 14.04 or 16.04 LTS
- http://releases.ubuntu.com/16.04/
- Download/Install the image: ubuntu-16.04.1-server-amd64.iso
- - OVS (DPDK) - 2.5
- - kernel: < 4.5
- - Hypervisor - KVM
- - VM OS - Ubuntu 16.04/Ubuntu 14.04
+ HOST OS: Ubuntu 14.04 or 16.04 LTS
+ http://releases.ubuntu.com/16.04/
+ Download/Install the image: ubuntu-16.04.1-server-amd64.iso
-5. Known Bugs and limitations
+ - OVS (DPDK) - 2.5
+ - kernel: < 4.5
+ - Hypervisor - KVM
+ - VM OS - Ubuntu 16.04/Ubuntu 14.04
+
+Known Bugs and limitations
=============================
- - Hardware Load Balancer feature is supported on Fortville nic ACL version 4.53 and below.
+ - Hardware Load Balancer feature is supported on Fortville nic ACL
+ version 4.53 and below.
- Hardware Checksum offload is not supported for IPv6 traffic.
- vACL on sriov is tested upto 4 threads
-6. Future Work
+Future Work
==============
Following would be possible enhancements
- Performance optimization on different platforms
-7. References
+References
=============
-Following links provides additional information
- .. _QUICKSTART: http://dpdk.org/doc/guides-16.04/linux_gsg/quick_start.html
- .. _DPDKGUIDE: http://dpdk.org/doc/guides-16.04/prog_guide/index.html
+Following links provide additional information for different versions of DPDK
+
+.. _QUICKSTART:
+ http://dpdk.org/doc/guides-16.04/linux_gsg/quick_start.html
+ http://dpdk.org/doc/guides-16.11/linux_gsg/quick_start.html
+ http://dpdk.org/doc/guides-17.02/linux_gsg/quick_start.html
+ http://dpdk.org/doc/guides-17.05/linux_gsg/quick_start.html
+
+.. _DPDKGUIDE:
+ http://dpdk.org/doc/guides-16.04/prog_guide/index.html
+ http://dpdk.org/doc/guides-16.11/prog_guide/index.html
+ http://dpdk.org/doc/guides-17.02/prog_guide/index.html
+ http://dpdk.org/doc/guides-17.05/prog_guide/index.html
diff --git a/docs/vACL/index.rst b/docs/vACL/index.rst
new file mode 100644
index 00000000..c1ae029b
--- /dev/null
+++ b/docs/vACL/index.rst
@@ -0,0 +1,11 @@
+####################
+vACL samplevnf
+####################
+
+.. toctree::
+ :numbered:
+ :maxdepth: 2
+
+ RELEASE_NOTES.rst
+ README.rst
+ INSTALL.rst
diff --git a/docs/vCGNAPT/INSTALL.rst b/docs/vCGNAPT/INSTALL.rst
index 3a556819..85873109 100644
--- a/docs/vCGNAPT/INSTALL.rst
+++ b/docs/vCGNAPT/INSTALL.rst
@@ -4,7 +4,7 @@
.. (c) OPNFV, National Center of Scientific Research "Demokritos" and others.
============================
-CGNAPT - Installation Guide
+vCGNAPT - Installation Guide
============================
@@ -13,108 +13,143 @@ vCGNAPT Compilation
After downloading (or doing a git clone) in a directory (samplevnf)
-###### Dependencies
-* DPDK 16.04: Downloaded and installed via vnf_build.sh or manually from [here](http://fast.dpdk.org/rel/dpdk-16.04.tar.xz)
-Both the options are available as part of vnf_build.sh below.
-* libpcap-dev
-* libzmq
-* libcurl
+Dependencies
+-------------
+
+- DPDK supported versions ($DPDK_RTE_VER = 16.04, 16.11, 17.02 or 17.05) Downloaded and installed via vnf_build.sh or manually from [here] (http://fast.dpdk.org/rel/)
+- libpcap-dev
+- libzmq
+- libcurl
-###### Environment variables
+Environment variables
+---------------------
Apply all the additional patches in 'patches/dpdk_custom_patch/' and build dpdk
+(NOTE: the patches are required only for DPDK version 16.04).
::
- export RTE_SDK=<dpdk 16.04 directory>
+
+ export RTE_SDK=<dpdk directory>
export RTE_TARGET=x86_64-native-linuxapp-gcc
This is done by vnf_build.sh script.
Auto Build:
-==========
+===========
$ ./tools/vnf_build.sh in samplevnf root folder
-Follow the steps in the screen from option [1] --> [8] and select option [7]
+Follow the steps in the screen from option [1] --> [9] and select option [8]
to build the vnfs.
-It will automatically download DPDK 16.04 and any required patches and will setup
-everything and build vCGNAPT VNFs.
+It will automatically download selected DPDK version and any required patches
+and will setup everything and build vCGNAPT VNFs.
Following are the options for setup:
::
- ----------------------------------------------------------
- Step 1: Environment setup.
- ----------------------------------------------------------
- [1] Check OS and network connection
+ ----------------------------------------------------------
+ Step 1: Environment setup.
+ ----------------------------------------------------------
+ [1] Check OS and network connection
+ [2] Select DPDK RTE version
- ----------------------------------------------------------
- Step 2: Download and Install
- ----------------------------------------------------------
- [2] Agree to download
- [3] Download packages
- [4] Download DPDK zip (optional, use it when option 4 fails)
- [5] Install DPDK
- [6] Setup hugepages
+ ----------------------------------------------------------
+ Step 2: Download and Install
+ ----------------------------------------------------------
+ [3] Agree to download
+ [4] Download packages
+ [5] Download DPDK zip
+ [6] Build and Install DPDK
+ [7] Setup hugepages
- ----------------------------------------------------------
- Step 3: Build VNF
- ----------------------------------------------------------
- [7] Build VNF
+ ----------------------------------------------------------
+ Step 3: Build VNFs
+ ----------------------------------------------------------
+ [8] Build all VNFs (vACL, vCGNAPT, vFW, UDP_Replay)
- [8] Exit Script
+ [9] Exit Script
A vCGNAPT executable will be created at the following location
samplevnf/VNFs/vCGNAPT/build/vCGNAPT
Manual Build:
-============
-1. Download DPDK 16.04 from dpdk.org
- - http://dpdk.org/browse/dpdk/snapshot/dpdk-16.04.zip
-2. unzip dpdk-16.04 and apply dpdk patch
- - cd dpdk-16.04
- - patch -p0 < VNF_CORE/patches/dpdk_custom_patch/rte_pipeline.patch
- - patch -p1 < VNF_CORE/patches/dpdk_custom_patch/i40e-fix-link-management.patch
- - patch -p1 < VNF_CORE/patches/dpdk_custom_patch/i40e-fix-Rx-hang-when-disable-LLDP.patch
- - patch -p1 < VNF_CORE/patches/dpdk_custom_patch/i40e-fix-link-status-change-interrupt.patch
- - patch -p1 < VNF_CORE/patches/dpdk_custom_patch/i40e-fix-VF-bonded-device-link-down.patch
+=============
+1. Download DPDK supported version from dpdk.org
+
+ - http://dpdk.org/browse/dpdk/snapshot/dpdk-$DPDK_RTE_VER.zip
+2. unzip dpdk-$DPDK_RTE_VER.zip and apply dpdk patches only in case of 16.04
+ (Not required for other DPDK versions)
+
+ - cd dpdk
+
+ - patch -p1 < VNF_CORE/patches/dpdk_custom_patch/i40e-fix-link-management.patch
+ - patch -p1 < VNF_CORE/patches/dpdk_custom_patch/i40e-fix-Rx-hang-when-disable-LLDP.patch
+ - patch -p1 < VNF_CORE/patches/dpdk_custom_patch/i40e-fix-link-status-change-interrupt.patch
+ - patch -p1 < VNF_CORE/patches/dpdk_custom_patch/i40e-fix-VF-bonded-device-link-down.patch
+ - patch -p1 < $VNF_CORE/patches/dpdk_custom_patch/disable-acl-debug-logs.patch
+ - patch -p1 < $VNF_CORE/patches/dpdk_custom_patch/set-log-level-to-info.patch
+
- build dpdk
- - make config T=x86_64-native-linuxapp-gcc O=x86_64-native-linuxapp-gcc
- - cd x86_64-native-linuxapp-gcc
- - make
+
+ - make config T=x86_64-native-linuxapp-gcc O=x86_64-native-linuxapp-gcc
+ - cd x86_64-native-linuxapp-gcc
+ - make
+
- Setup huge pages
- - For 1G/2M hugepage sizes, for example 1G pages, the size must be specified
+
+ - For 1G/2M hugepage sizes, for example 1G pages, the size must be specified
explicitly and can also be optionally set as the default hugepage size for
the system. For example, to reserve 8G of hugepage memory in the form of
eight 1G pages, the following options should be passed to the kernel:
- * default_hugepagesz=1G hugepagesz=1G hugepages=8 hugepagesz=2M hugepages=2048
- - Add this to Go to /etc/default/grub configuration file.
- - Append "default_hugepagesz=1G hugepagesz=1G hugepages=8 hugepagesz=2M hugepages=2048"
- to the GRUB_CMDLINE_LINUX entry.
+ * default_hugepagesz=1G hugepagesz=1G hugepages=8 hugepagesz=2M hugepages=2048
+ - Add this to the /etc/default/grub configuration file.
+ - Append "default_hugepagesz=1G hugepagesz=1G hugepages=8 hugepagesz=2M hugepages=2048"
+ to the GRUB_CMDLINE_LINUX entry.
+
3. Setup Environment Variable
- - export RTE_SDK=<samplevnf>/dpdk-16.04
+
+ - export RTE_SDK=<samplevnf>/dpdk
- export RTE_TARGET=x86_64-native-linuxapp-gcc
- export VNF_CORE=<samplevnf>
- or using ./toot/setenv.sh
+ or using ./tools/setenv.sh
+
4. Build vCGNAPT VNFs
+
- cd <samplevnf>/VNFs/vCGNAPT
- make clean
- make
+
5. A vCGNAPT executable will be created at the following location
+
- <samplevnf>/VNFs/vCGNAPT/build/vCGNAPT
Run
====
-Setup Port to run VNF:
+Setup Port to run VNF
----------------------
+
::
- 1. cd <samplevnf>/dpdk-16.04
- 3. ./tool/dpdk_nic_bind.py --status <--- List the network device
- 2. ./tool/dpdk_nic_bind.py -b igb_uio <PCI Port 0> <PCI Port 1>
+
+ For DPDK versions 16.04
+ 1. cd <samplevnf>/dpdk
+ 2. ./tools/dpdk_nic_bind.py --status <--- List the network device
+ 3. ./tools/dpdk_nic_bind.py -b igb_uio <PCI Port 0> <PCI Port 1>
.. _More details: http://dpdk.org/doc/guides-16.04/linux_gsg/build_dpdk.html#binding-and-unbinding-network-ports-to-from-the-kernel-modules
+ For DPDK versions 16.11
+ 1. cd <samplevnf>/dpdk
+ 2. ./tools/dpdk-devbind.py --status <--- List the network device
+ 3. ./tools/dpdk-devbind.py -b igb_uio <PCI Port 0> <PCI Port 1>
+ .. _More details: http://dpdk.org/doc/guides-16.11/linux_gsg/build_dpdk.html#binding-and-unbinding-network-ports-to-from-the-kernel-modules
+
+ For DPDK versions 17.xx
+ 1. cd <samplevnf>/dpdk
+ 2. ./usertools/dpdk-devbind.py --status <--- List the network device
+ 3. ./usertools/dpdk-devbind.py -b igb_uio <PCI Port 0> <PCI Port 1>
+ .. _More details: http://dpdk.org/doc/guides-17.05/linux_gsg/build_dpdk.html#binding-and-unbinding-network-ports-to-from-the-kernel-modules
+
Make the necessary changes to the config files to run the vCGNAPT VNF
eg: ports_mac_list = 00:00:00:30:21:F0 00:00:00:30:21:F1
@@ -123,6 +158,7 @@ Dynamic CGNAPT
Update the configuration according to system configuration.
::
+
./vCGNAPT -p <port mask> -f <config> -s <script> - SW_LoadB
./vCGNAPT -p <port mask> -f <config> -s <script> -hwlb <num_WT> - HW_LoadB
@@ -131,41 +167,48 @@ Static CGNAPT
Update the script file and add Static NAT Entry
::
+
e.g,
;p <pipeline id> entry addm <prv_ipv4/6> <prv_port> <pub_ip> <pub_port> <phy_port> <ttl> <no_of_entries> <end_prv_port> <end_pub_port>
;p 3 entry addm 152.16.100.20 1234 152.16.40.10 1 0 500 65535 1234 65535
Run IPv4
----------
+
::
- Software LoadB
- --------------
+
+ Software LoadB:
+
cd <samplevnf>/VNFs/vCGNAPT/build
./vCGNAPT -p 0x3 -f ./config/arp_txrx-2P-1T.cfg -s ./config/arp_txrx_ScriptFile_2P.cfg
- Hardware LoadB
- --------------
+ Hardware LoadB:
+
cd <samplevnf>/VNFs/vCGNAPT/build
./vCGNAPT -p 0x3 -f ./config/arp_hwlb-2P-1T.cfg -s ./config/arp_hwlb_scriptfile_2P.cfg --hwlb 1
Run IPv6
---------
+
::
- Software LoadB
- --------------
+
+ Software LoadB:
+
cd <samplevnf>/VNFs/vCGNAPT/build
./vCGNAPT -p 0x3 -f ./config/arp_txrx-2P-1T-ipv6.cfg -s ./config/arp_txrx_ScriptFile_2P.cfg
- Hardware LoadB
- --------------
+ Hardware LoadB:
+
cd <samplevnf>/VNFs/vCGNAPT/build
./vCGNAPT -p 0x3 -f ./config/arp_hwlb-2P-1T-ipv6.cfg -s ./config/arp_hwlb_scriptfile_2P.cfg --hwlb 1
-vCGNAPT execution on BM & SRIOV:
+vCGNAPT execution on BM & SRIOV
--------------------------------
+
::
+
To run the VNF, execute the following:
samplevnf/VNFs/vCGNAPT# ./build/vCGNAPT -p 0x3 -f ./config/arp_txrx-2P-1T.cfg -s ./config/arp_txrx_ScriptFile_2P.cfg
Command Line Params:
@@ -173,10 +216,12 @@ vCGNAPT execution on BM & SRIOV:
-f CONFIG FILE: vCGNAPT configuration file
-s SCRIPT FILE: vCGNAPT script file
-vCGNAPT execution on OVS:
+vCGNAPT execution on OVS
-------------------------
+To run the VNF, execute the following:
+
::
- To run the VNF, execute the following:
+
samplevnf/VNFs/vCGNAPT# ./build/vCGNAPT -p 0x3 -f ./config/arp_txrx-2P-1T.cfg -s ./config/arp_txrx_ScriptFile_2P.cfg --disable-hw-csum
Command Line Params:
-p PORTMASK: Hexadecimal bitmask of ports to configure
diff --git a/docs/vCGNAPT/README.rst b/docs/vCGNAPT/README.rst
index eda94831..dd6bb079 100644
--- a/docs/vCGNAPT/README.rst
+++ b/docs/vCGNAPT/README.rst
@@ -4,10 +4,10 @@
.. (c) opnfv, national center of scientific research "demokritos" and others.
========================================================
-Carrier Grade Network Address Port Translation - vCGNAPT
+vCGNAPT - README
========================================================
-1 Introduction
+Introduction
==============
This application implements vCGNAPT. The idea of vCGNAPT is to extend the life of
the service providers IPv4 network infrastructure and mitigate IPv4 address
@@ -27,12 +27,12 @@ This document assumes the reader possess the knowledge of DPDK concepts and IP
Pipeline Framework. For more details, read DPDK Getting Started Guide, DPDK
Programmers Guide, DPDK Sample Applications Guide.
-2. Scope
+Scope
==========
This application provides a standalone DPDK based high performance vCGNAPT
Virtual Network Function implementation.
-3. Features
+Features
===========
The vCGNAPT VNF currently supports the following functionality:
• Static NAT
@@ -53,7 +53,7 @@ The vCGNAPT VNF currently supports the following functionality:
• ALG SIP
• ALG FTP
-4. High Level Design
+High Level Design
====================
The Upstream path defines the traffic from Private to Public and the downstream
path defines the traffic from Public to Private. The vCGNAPT has same set of
@@ -77,6 +77,7 @@ and mitigate IPv4 address exhaustion by using address and port translation in la
It processes the traffic in both the directions.
.. code-block:: console
+
+------------------+
| +-----+
| Private consumer | CPE |---------------+
@@ -96,8 +97,10 @@ It processes the traffic in both the directions.
Figure 1: vCGNAPT deployment in Service provider network
+
Components of vCGNAPT
=====================
+
In vCGNAPT, each component is constructed as a packet framework. It includes Master pipeline
component, driver, load balancer pipeline component and vCGNAPT worker pipeline component. A
pipeline framework is a collection of input ports, table(s), output ports and actions
@@ -112,14 +115,13 @@ ARPICMP pipeline
------------------------
ARPICMP pipeline is responsible for handling all l2l3 arp related packets.
-----------------
This component does not process any packets and should be configured on Core 0,
to save cores for other components which process traffic. The component
is responsible for:
- 1. Initializing each component of the Pipeline application in different threads
- 2. Providing CLI shell for the user
- 3. Propagating the commands from user to the corresponding components.
- 4. ARP and ICMP are handled here.
+1. Initializing each component of the Pipeline application in different threads
+2. Providing CLI shell for the user
+3. Propagating the commands from user to the corresponding components.
+4. ARP and ICMP are handled here.
Load Balancer pipeline
------------------------
@@ -133,7 +135,7 @@ affinity of flows to worker threads.
Tuple can be modified/configured using configuration file
-4. vCGNAPT - Static
+vCGNAPT - Static
====================
The vCGNAPT component performs translation of private IP & port to public IP &
port at egress side and public IP & port to private IP & port at Ingress side
@@ -144,7 +146,7 @@ port and will be forwarded to the output port. The packets that do not have a
match will be taken a default action. The default action may result in drop of
the packets.
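
As a mental model only (not the pipeline hash table the VNF actually builds), a static entry can be seen as one record answering both lookups: the egress key (private IP, private port) maps to the public pair, and the ingress key maps back. A small illustrative sketch, using values in the spirit of the static entry example in the install guide:

.. code-block:: c

    /* Conceptual static CGNAPT binding; illustration only, not the VNF code. */
    #include <stdint.h>
    #include <stdio.h>

    struct napt_entry {
        uint32_t prv_ip;  uint16_t prv_port;  /* private side */
        uint32_t pub_ip;  uint16_t pub_port;  /* public side  */
        uint8_t  phy_port;                    /* egress physical port */
        uint32_t ttl;                         /* entry timeout */
    };

    /* Egress direction: rewrite the private source pair to the public pair. */
    static void napt_egress(const struct napt_entry *e,
                            uint32_t *sip, uint16_t *sport)
    {
        if (*sip == e->prv_ip && *sport == e->prv_port) {
            *sip = e->pub_ip;
            *sport = e->pub_port;
        }
    }

    int main(void)
    {
        /* 152.16.100.20:1234 (private) <-> 152.16.40.10:1 (public) */
        struct napt_entry e = { 0x98106414, 1234, 0x9810280A, 1, 0, 500 };
        uint32_t sip = 0x98106414;
        uint16_t sport = 1234;

        napt_egress(&e, &sip, &sport);
        printf("translated to 0x%08x:%u\n", (unsigned int)sip,
               (unsigned int)sport);
        return 0;
    }
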
-5. vCGNAPT- Dynamic
+vCGNAPT - Dynamic
===================
The vCGNAPT component performs translation of private IP & port to public IP & port
at egress side and public IP & port to private IP & port at Ingress side based on the
@@ -158,32 +160,38 @@ port and will be forwarded to the output port defined in the entry.
Dynamic vCGNAPT acts as static one too, we can do NAT entries statically. Static NAT
entries port range must not conflict to dynamic NAT port range.
-vCGNAPT Static Topology:
+vCGNAPT Static Topology
------------------------
+
::
+
IXIA(Port 0)-->(Port 0)VNF(Port 1)-->(Port 1) IXIA
operation:
- Egress --> The packets sent out from ixia(port 0) will be CGNAPTed to ixia(port 1).
- Igress --> The packets sent out from ixia(port 1) will be CGNAPTed to ixia(port 0).
+ Egress --> The packets sent out from ixia(port 0) will be CGNAPTed to ixia(port 1).
+ Ingress --> The packets sent out from ixia(port 1) will be CGNAPTed to ixia(port 0).
-vCGNAPT Dynamic Topology (L4REPLAY):
+vCGNAPT Dynamic Topology (L4REPLAY)
------------------------------------
+
::
+
IXIA(Port 0)-->(Port 0)VNF(Port 1)-->(Port 0)L4REPLAY
operation:
- Egress --> The packets sent out from ixia will be CGNAPTed to L3FWD/L4REPLAY.
- Ingress --> The L4REPLAY upon reception of packets (Private to Public Network),
- will immediately replay back the traffic to IXIA interface. (Pub -->Priv).
+ Egress --> The packets sent out from ixia will be CGNAPTed to L3FWD/L4REPLAY.
+ Ingress --> The L4REPLAY upon reception of packets (Private to Public Network),
+ will immediately replay back the traffic to IXIA interface. (Pub -->Priv).
-How to run L4Replay:
+How to run L4Replay
--------------------
+After the installation of samplevnf:
+
::
- 1. After the installation of samplevnf:
- go to <samplevnf/VNFs/L4Replay>
- 2. ./buid/L4replay -c core_mask -n no_of_channels(let it be as 2) -- -p PORT_MASK --config="(port,queue,lcore)"
+
+ go to <samplevnf/VNFs/L4Replay>
+ ./build/L4replay -c core_mask -n no_of_channels(let it be as 2) -- -p PORT_MASK --config="(port,queue,lcore)"
eg: ./L4replay -c 0xf -n 4 -- -p 0x3 --config="(0,0,1)"
-6. Installation, Compile and Execution
------------------------------------------------------------------
+Installation, Compile and Execution
+====================================
Please refer to <samplevnf>/docs/vCGNAPT/INSTALL.rst for installation, configuration, compilation
and execution.
diff --git a/docs/vCGNAPT/RELEASE_NOTES.rst b/docs/vCGNAPT/RELEASE_NOTES.rst
index 91b73075..a776a54d 100644
--- a/docs/vCGNAPT/RELEASE_NOTES.rst
+++ b/docs/vCGNAPT/RELEASE_NOTES.rst
@@ -4,25 +4,25 @@
.. (c) OPNFV, National Center of Scientific Research "Demokritos" and others.
=========================================================
-Carrier Grade Network Address Port Translation - vCGNAPT
+vCGNAPT - Release Notes
=========================================================
-1. Introduction
+Introduction
================
This is the beta release for vCGNAPT VNF.
vCGNAPT application can be run independently (refer INSTALL.rst).
-2. User Guide
+User Guide
===============
-Refer to README.rst for further details on vCGNAPT, HLD, features supported, test
-plan. For build configurations and execution requisites please refer to
+Refer to README.rst for further details on vCGNAPT, HLD, features supported,
+test plan. For build configurations and execution requisites please refer to
INSTALL.rst.
-3. Feature for this release
+Feature for this release
===========================
-This release supports following features as part of vCGNAPT:
-- vCGNAPT can run as a standalone application on bare-metal linux server or on a
- virtual machine using SRIOV and OVS dpdk.
+This release supports following features as part of vCGNAPT
+
+- vCGNAPT can run as a standalone application on bare-metal linux server or on a virtual machine using SRIOV and OVS dpdk.
- Static NAT
- Dynamic NAT
- Static NAPT
@@ -41,40 +41,50 @@ This release supports following features as part of vCGNAPT:
- ALG SIP
- ALG FTP
-4. System requirements - OS and kernel version
+System requirements - OS and kernel version
==============================================
-This is supported on Ubuntu 14.04 and Ubuntu 16.04 and kernel version less than 4.5
+This is supported on Ubuntu 14.04 and 16.04 and kernel version less than 4.5
VNFs on BareMetal support:
- OS: Ubuntu 14.04 or 16.04 LTS
- kernel: < 4.5
- http://releases.ubuntu.com/16.04/
- Download/Install the image: ubuntu-16.04.1-server-amd64.iso
+ OS: Ubuntu 14.04 or 16.04 LTS
+ kernel: < 4.5
+ http://releases.ubuntu.com/16.04/
+ Download/Install the image: ubuntu-16.04.1-server-amd64.iso
VNFs on Standalone Hypervisor
- HOST OS: Ubuntu 14.04 or 16.04 LTS
- http://releases.ubuntu.com/16.04/
- Download/Install the image: ubuntu-16.04.1-server-amd64.iso
- - OVS (DPDK) - 2.5
- - kernel: < 4.5
- - Hypervisor - KVM
- - VM OS - Ubuntu 16.04/Ubuntu 14.04
+ HOST OS: Ubuntu 14.04 or 16.04 LTS
+ http://releases.ubuntu.com/16.04/
+ Download/Install the image: ubuntu-16.04.1-server-amd64.iso
+
+ - OVS (DPDK) - 2.5
+ - kernel: < 4.5
+ - Hypervisor - KVM
+ - VM OS - Ubuntu 16.04/Ubuntu 14.04
-5. Known Bugs and limitations
+Known Bugs and limitations
=============================
-- Hadware Loab Balancer feature is supported on fortville nic FW version 4.53 and below.
+- Hardware Load Balancer feature is supported on Fortville NIC FW version 4.53 and below.
- L4 UDP Replay is used to capture throughput for dynamic cgnapt
- Hardware Checksum offload is not supported for IPv6 traffic.
- CGNAPT on sriov is tested till 4 threads
-6. Future Work
+Future Work
==============
- SCTP passthrough support
- Multi-homing support
- Performance optimization on different platforms
-7. References
+References
=============
-Following links provides additional information
- .. _QUICKSTART: http://dpdk.org/doc/guides-16.04/linux_gsg/quick_start.html
- .. _DPDKGUIDE: http://dpdk.org/doc/guides-16.04/prog_guide/index.html
+Following links provide additional information for different versions of DPDK
+ .. _QUICKSTART:
+ http://dpdk.org/doc/guides-16.04/linux_gsg/quick_start.html
+ http://dpdk.org/doc/guides-16.11/linux_gsg/quick_start.html
+ http://dpdk.org/doc/guides-17.02/linux_gsg/quick_start.html
+ http://dpdk.org/doc/guides-17.05/linux_gsg/quick_start.html
+
+ .. _DPDKGUIDE:
+ http://dpdk.org/doc/guides-16.04/prog_guide/index.html
+ http://dpdk.org/doc/guides-16.11/prog_guide/index.html
+ http://dpdk.org/doc/guides-17.02/prog_guide/index.html
+ http://dpdk.org/doc/guides-17.05/prog_guide/index.html
diff --git a/docs/vCGNAPT/index.rst b/docs/vCGNAPT/index.rst
new file mode 100644
index 00000000..aacda1c2
--- /dev/null
+++ b/docs/vCGNAPT/index.rst
@@ -0,0 +1,11 @@
+####################
+vCGNAPT samplevnf
+####################
+
+.. toctree::
+ :numbered:
+ :maxdepth: 2
+
+ RELEASE_NOTES.rst
+ README.rst
+ INSTALL.rst
diff --git a/docs/vFW/INSTALL.rst b/docs/vFW/INSTALL.rst
index 21b2d369..b4cee086 100644
--- a/docs/vFW/INSTALL.rst
+++ b/docs/vFW/INSTALL.rst
@@ -9,169 +9,221 @@ vFW - Installation Guide
vFW Compilation
-===================
-
+================
After downloading (or doing a git clone) in a directory (samplevnf)
-###### Dependencies
-* DPDK 16.04: Downloaded and installed via vnf_build.sh or manually from [here](http://fast.dpdk.org/rel/dpdk-16.04.tar.xz)
-Both the options are available as part of vnf_build.sh below.
-* libpcap-dev
-* libzmq
-* libcurl
+-------------
+Dependencies
+-------------
-###### Environment variables
+- DPDK supported versions ($DPDK_RTE_VER = 16.04, 16.11, 17.02 or 17.05). Downloaded and installed via vnf_build.sh or manually from [here] (http://fast.dpdk.org/rel/dpdk-$DPDK_RTE_VER.zip). Both the options are available as part of vnf_build.sh below.
+- libpcap-dev
+- libzmq
+- libcurl
+---------------------
+Environment variables
+---------------------
Apply all the additional patches in 'patches/dpdk_custom_patch/' and build dpdk
+(NOTE: required only for DPDK version 16.04).
::
- export RTE_SDK=<dpdk 16.04 directory>
+
+ export RTE_SDK=<dpdk directory>
+
export RTE_TARGET=x86_64-native-linuxapp-gcc
This is done by vnf_build.sh script.
-Auto Build:
-==========
+Auto Build
+===========
$ ./tools/vnf_build.sh in samplevnf root folder
-Follow the steps in the screen from option [1] --> [8] and select option [7]
+Follow the steps in the screen from option [1] --> [9] and select option [8]
to build the vnfs.
-It will automatically download DPDK 16.04 and any required patches and will setup
-everything and build vFW VNFs.
+It will automatically download selected DPDK version and any required patches
+and will setup everything and build vFW VNFs.
Following are the options for setup:
::
- ----------------------------------------------------------
- Step 1: Environment setup.
- ----------------------------------------------------------
- [1] Check OS and network connection
+ ----------------------------------------------------------
+ Step 1: Environment setup.
+ ----------------------------------------------------------
+ [1] Check OS and network connection
+ [2] Select DPDK RTE version
- ----------------------------------------------------------
- Step 2: Download and Install
- ----------------------------------------------------------
- [2] Agree to download
- [3] Download packages
- [4] Download DPDK zip (optional, use it when option 4 fails)
- [5] Install DPDK
- [6] Setup hugepages
+ ----------------------------------------------------------
+ Step 2: Download and Install
+ ----------------------------------------------------------
+ [3] Agree to download
+ [4] Download packages
+ [5] Download DPDK zip
+ [6] Build and Install DPDK
+ [7] Setup hugepages
- ----------------------------------------------------------
- Step 3: Build VNF
- ----------------------------------------------------------
- [7] Build VNF
+ ----------------------------------------------------------
+ Step 3: Build VNFs
+ ----------------------------------------------------------
+ [8] Build all VNFs (vACL, vCGNAPT, vFW, UDP_Replay)
- [8] Exit Script
+ [9] Exit Script
A vFW executable will be created at the following location
samplevnf/VNFs/vFW/build/vFW
+Manual Build
+=============
+1. Download DPDK supported version from dpdk.org
+
+ - http://dpdk.org/browse/dpdk/snapshot/dpdk-$DPDK_RTE_VER.zip
+2. unzip dpdk-$DPDK_RTE_VER.zip and apply dpdk patches only in case of 16.04
+ (Not required for other DPDK versions)
+
+ - cd dpdk
+
+ - patch -p1 < VNF_CORE/patches/dpdk_custom_patch/i40e-fix-link-management.patch
+ - patch -p1 < VNF_CORE/patches/dpdk_custom_patch/i40e-fix-Rx-hang-when-disable-LLDP.patch
+ - patch -p1 < VNF_CORE/patches/dpdk_custom_patch/i40e-fix-link-status-change-interrupt.patch
+ - patch -p1 < VNF_CORE/patches/dpdk_custom_patch/i40e-fix-VF-bonded-device-link-down.patch
+ - patch -p1 < $VNF_CORE/patches/dpdk_custom_patch/disable-acl-debug-logs.patch
+ - patch -p1 < $VNF_CORE/patches/dpdk_custom_patch/set-log-level-to-info.patch
-Manual Build:
-============
-1. Download DPDK 16.04 from dpdk.org
- - http://dpdk.org/browse/dpdk/snapshot/dpdk-16.04.zip
-2. unzip dpdk-16.04 and apply dpdk patch
- - cd dpdk-16.04
- - patch -p0 < VNF_CORE/patches/dpdk_custom_patch/rte_pipeline.patch
- - patch -p1 < VNF_CORE/patches/dpdk_custom_patch/i40e-fix-link-management.patch
- - patch -p1 < VNF_CORE/patches/dpdk_custom_patch/i40e-fix-Rx-hang-when-disable-LLDP.patch
- - patch -p1 < VNF_CORE/patches/dpdk_custom_patch/i40e-fix-link-status-change-interrupt.patch
- - patch -p1 < VNF_CORE/patches/dpdk_custom_patch/i40e-fix-VF-bonded-device-link-down.patch
- build dpdk
- - make config T=x86_64-native-linuxapp-gcc O=x86_64-native-linuxapp-gcc
- - cd x86_64-native-linuxapp-gcc
- - make
+ - make config T=x86_64-native-linuxapp-gcc O=x86_64-native-linuxapp-gcc
+ - cd x86_64-native-linuxapp-gcc
+ - make
+
- Setup huge pages
- - For 1G/2M hugepage sizes, for example 1G pages, the size must be specified
- explicitly and can also be optionally set as the default hugepage size for
- the system. For example, to reserve 8G of hugepage memory in the form of
- eight 1G pages, the following options should be passed to the kernel:
- * default_hugepagesz=1G hugepagesz=1G hugepages=8 hugepagesz=2M hugepages=2048
- - Add this to Go to /etc/default/grub configuration file.
- - Append "default_hugepagesz=1G hugepagesz=1G hugepages=8 hugepagesz=2M hugepages=2048"
- to the GRUB_CMDLINE_LINUX entry.
+ - For 1G/2M hugepage sizes, for example 1G pages, the size must be specified
+ explicitly and can also be optionally set as the default hugepage
+ size for the system. For example, to reserve 8G of hugepage memory in
+ the form of eight 1G pages, the following options should be passed
+ to the kernel:
+ * default_hugepagesz=1G hugepagesz=1G hugepages=8 hugepagesz=2M hugepages=2048
+ - Add this to the /etc/default/grub configuration file.
+ - Append "default_hugepagesz=1G hugepagesz=1G hugepages=8 hugepagesz=2M hugepages=2048"
+ to the GRUB_CMDLINE_LINUX entry.
+
3. Setup Environment Variable
- - export RTE_SDK=<samplevnf>/dpdk-16.04
+
+ - export RTE_SDK=<samplevnf>/dpdk
- export RTE_TARGET=x86_64-native-linuxapp-gcc
- export VNF_CORE=<samplevnf>
- or using ./toot/setenv.sh
+
+ or using ./tools/setenv.sh
+
4. Build vFW VNFs
+
- cd <samplevnf>/VNFs/vFW
- make clean
- make
+
5. The vFW executable will be created at the following location
+
- <samplevnf>/VNFs/vFW/build/vFW
Run
====
-Setup Port to run VNF:
----------------------
+Setup Port to run VNF
+----------------------
+The tools folder and the utility names differ across DPDK versions.
+
::
- 1. cd <samplevnf>/dpdk-16.04
- 3. ./tool/dpdk_nic_bind.py --status <--- List the network device
- 2. ./tool/dpdk_nic_bind.py -b igb_uio <PCI Port 0> <PCI Port 1>
+
+ For DPDK versions 16.04
+ 1. cd <samplevnf>/dpdk
+ 2. ./tools/dpdk_nic_bind.py --status <--- List the network device
+ 3. ./tools/dpdk_nic_bind.py -b igb_uio <PCI Port 0> <PCI Port 1>
+
.. _More details: http://dpdk.org/doc/guides-16.04/linux_gsg/build_dpdk.html#binding-and-unbinding-network-ports-to-from-the-kernel-modules
- Make the necessary changes to the config files to run the vFW VNF
- eg: ports_mac_list = 00:00:00:30:21:01 00:00:00:30:21:00
+ For DPDK versions 16.11
+ 1. cd <samplevnf>/dpdk
+ 2. ./tools/dpdk-devbind.py --status <--- List the network device
+ 3. ./tools/dpdk-devbind.py -b igb_uio <PCI Port 0> <PCI Port 1>
+
+ .. _More details: http://dpdk.org/doc/guides-16.11/linux_gsg/build_dpdk.html#binding-and-unbinding-network-ports-to-from-the-kernel-modules
-Firewall
---------------
+ For DPDK versions 17.xx
+ 1. cd <samplevnf>/dpdk
+ 2. ./usertools/dpdk-devbind.py --status <--- List the network device
+ 3. ./usertools/dpdk-devbind.py -b igb_uio <PCI Port 0> <PCI Port 1>
+
+ .. _More details: http://dpdk.org/doc/guides-17.05/linux_gsg/build_dpdk.html#binding-and-unbinding-network-ports-to-from-the-kernel-modules
+
+Make the necessary changes to the config files to run the vFW VNF
+
+eg: ports_mac_list = 00:00:00:30:21:01 00:00:00:30:21:00
+
+----------------------
+Firewall Run commands
+----------------------
Update the configuration according to system configuration.
::
- ./vFW -p <port mask> -f <config> -s <script> - SW_LoadB
- ./vFW -p <port mask> -f <config> -s <script> -hwlb <num_WT> - HW_LoadB
+ ./vFW -p <port mask> -f <config> -s <script> - SW_LoadB
+ ./vFW -p <port mask> -f <config> -s <script> -hwlb <num_WT> - HW_LoadB
Run IPv4
----------
+To run the vFW in Software LB or Hardware LB with IPv4 traffic
+
::
- Software LoadB
- --------------
+
+ Software LoadB:
+
cd <samplevnf>/VNFs/vFW/
./build/vFW -p 0x3 -f ./config/VFW_SWLB_IPV4_SinglePortPair_4Thread.cfg -s ./config/VFW_SWLB_IPV4_SinglePortPair_script.tc
- Hardware LoadB
- --------------
+ Hardware LoadB:
+
cd <samplevnf>/VNFs/vFW/
./build/vFW -p 0x3 -f ./config/VFW_HWLB_IPV4_SinglePortPair_4Thread.cfg -s ./config/VFW_HWLB_IPV4_SinglePortPair_script.cfg --hwlb 4
Run IPv6
---------
+To run the vFW in Software LB or Hardware LB with IPv6 traffic
+
::
- Software LoadB
- --------------
+
+ Software LoadB:
+
cd <samplevnf>/VNFs/vFW
./build/vFW -p 0x3 -f ./config/VFW_SWLB_IPV6_SinglePortPair_4Thread.cfg -s ./config/VFW_SWLB_IPV6_SinglePortPair_script.tc
- Hardware LoadB
- --------------
+ Hardware LoadB:
+
cd <samplevnf>/VNFs/vFW/
./build/vFW -p 0x3 -f ./config/VFW_HWLB_IPV6_SinglePortPair_4Thread.cfg -s ./config/VFW_HWLB_IPV6_SinglePortPair_script.tc --hwlb 4
-vFW execution on BM & SRIOV:
---------------------------------
+vFW execution on BM & SRIOV
+---------------------------
+To run the VNF, execute the following
+
::
- To run the VNF, execute the following:
+
samplevnf/VNFs/vFW# ./build/vFW -p 0x3 -f ./config/VFW_SWLB_IPV4_SinglePortPair_4Thread.cfg -s ./config/VFW_SWLB_IPV4_SinglePortPair_script.tc
Command Line Params:
-p PORTMASK: Hexadecimal bitmask of ports to configure
-f CONFIG FILE: vFW configuration file
-s SCRIPT FILE: vFW script file
-vFW execution on OVS:
--------------------------
+vFW execution on OVS
+--------------------
+
::
+
To run the VNF, execute the following:
samplevnf/VNFs/vFW# ./build/vFW -p 0x3 -f ./config/VFW_SWLB_IPV4_SinglePortPair_4Thread.cfg -s ./config/VFW_SWLB_IPV4_SinglePortPair_script.tc --disable-hw-csum
Command Line Params:
-p PORTMASK: Hexadecimal bitmask of ports to configure
-f CONFIG FILE: vFW configuration file
-s SCRIPT FILE: vFW script file
---disable-hw-csum :Disable TCP/UDP hw checksum
+ --disable-hw-csum :Disable TCP/UDP hw checksum
diff --git a/docs/vFW/README.rst b/docs/vFW/README.rst
index 45e8a17d..cc3c2b40 100644
--- a/docs/vFW/README.rst
+++ b/docs/vFW/README.rst
@@ -4,11 +4,11 @@
.. (c) opnfv, national center of scientific research "demokritos" and others.
========================================================
-Virtual Firewall - vFW
+vFW - README
========================================================
-1. Introduction
-==============
+Introduction
+===============
 The virtual firewall (vFW) is an application that implements a firewall. vFW is
 used as a barrier between a secure internal network and an insecure external network. The
firewall performs Dynamic Packet Filtering. This involves keeping track of the
@@ -19,6 +19,7 @@ be performed by Connection Tracking component, similar to that supported in
linux. The firewall also supports Access Controlled List(ACL) for rule based
policy enforcement. Firewall is built on top of DPDK and uses the packet library.
+----------
About DPDK
----------
The DPDK IP Pipeline Framework provides a set of libraries to build a pipeline
@@ -29,12 +30,12 @@ This document assumes the reader possesses the knowledge of DPDK concepts and
packet framework. For more details, read DPDK Getting Started Guide, DPDK
Programmers Guide, DPDK Sample Applications Guide.
-2. Scope
+Scope
==========
This application provides a standalone DPDK based high performance vFW Virtual
Network Function implementation.
-3. Features
+Features
===========
The vFW VNF currently supports the following functionality:
• Basic packet filtering (malformed packets, IP fragments)
@@ -52,7 +53,7 @@ The vFW VNF currently supports the following functionality:
• ICMP (terminal echo, echo response, passthrough)
• ICMPv6 and ND (Neighbor Discovery)
-4. High Level Design
+High Level Design
====================
 The Firewall performs basic filtering of malformed packets and dynamic packet
 filtering of incoming packets using the connection tracker library.
@@ -77,41 +78,46 @@ across multiple worker threads. The hardware loadbalancing require ethernet
flow director support from hardware (eg. Fortville x710 NIC card).
The Input and Output FIFOs will be implemented using DPDK Ring Buffers.
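+As a quick check (an illustrative command, not part of the VNF itself), lspci
+can be used to confirm that a Fortville-class NIC is available before relying
+on hardware load balancing:
+
+::
+
+    lspci | grep -i ethernet | grep -i -e x710 -e xl710
+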
-===================
-5. Components of vFW
-===================
+Components of vFW
+====================
+
In vFW, each component is constructed using packet framework pipelines.
It includes Rx and Tx Driver, Master pipeline, load balancer pipeline and
vfw worker pipeline components. A Pipeline framework is a collection of input
 ports, table(s), output ports and actions (functions).
+---------------------------
Receive and Transmit Driver
-******************************
+---------------------------
 Packets are received in bulk and provided to the LoadBalancer (LB) thread.
 Transmit takes packets from the worker threads over a dedicated ring and sends
 them to the hardware queue.
+---------------------------
Master Pipeline
-******************************
+---------------------------
 The Master component is part of all the IP Pipeline applications. This component
 does not process any packets and should be configured on Core 0, leaving the
 other cores free to process traffic. This component is responsible for:
- 1. Initializing each component of the Pipeline application in different threads
- 2. Providing CLI shell for the user control/debug
- 3. Propagating the commands from user to the corresponding components
+1. Initializing each component of the Pipeline application in different threads
+2. Providing CLI shell for the user control/debug
+3. Propagating the commands from user to the corresponding components
+------------------
ARPICMP Pipeline
-******************************
+------------------
 This pipeline processes the ARP and ICMP packets.
+---------------
TXRX Pipelines
-******************************
+---------------
The TXTX and RXRX pipelines are pass through pipelines to forward both ingress
and egress traffic to Loadbalancer. This is required when the Software
Loadbalancer is used.
+----------------------
Load Balancer Pipeline
-******************************
+----------------------
 The vFW supports both hardware and software load balancing of
 traffic across multiple VNF threads. Hardware load balancing requires support
from hardware like Flow Director for steering of packets to application through
@@ -126,8 +132,9 @@ port, dest addr, dest port and protocol) applying an XOR logic distributing to
active worker threads, thereby maintaining an affinity of flows to worker
threads.
+---------------
vFW Pipeline
-******************************
+---------------
 The vFW performs basic packet filtering and drops invalid and malformed
 packets. Dynamic packet filtering is done using the connection tracker
 library. The packets are processed in bulk and a hash table is used to maintain
@@ -135,32 +142,41 @@ the connection details.
 Every TCP/UDP packet is passed through the connection tracker library to
 validate the connection. The ACL library integrated with the firewall provides
 rule-based filtering.
-vFW Topology:
------------------------
+vFW Topology
+------------------------
+
::
+
IXIA(Port 0)-->(Port 0)VNF(Port 1)-->(Port 1) IXIA
operation:
- Egress --> The packets sent out from ixia(port 0) will be Firewalled to ixia(port 1).
- Igress --> The packets sent out from ixia(port 1) will be Firewalled to ixia(port 0).
+ Egress --> The packets sent out from ixia(port 0) will be Firewalled to ixia(port 1).
+ Ingress --> The packets sent out from ixia(port 1) will be Firewalled to ixia(port 0).
-vFW Topology (L4REPLAY):
------------------------------------
+vFW Topology (L4REPLAY)
+------------------------------------
+
::
+
IXIA(Port 0)-->(Port 0)VNF(Port 1)-->(Port 0)L4REPLAY
operation:
- Egress --> The packets sent out from ixia will pass through vFW to L3FWD/L4REPLAY.
- Ingress --> The L4REPLAY upon reception of packets (Private to Public Network),
- will immediately replay back the traffic to IXIA interface. (Pub -->Priv).
+ Egress --> The packets sent out from ixia will pass through vFW to L3FWD/L4REPLAY.
+ Ingress --> The L4REPLAY upon reception of packets (Private to Public Network),
+ will immediately replay back the traffic to IXIA interface. (Pub -->Priv).
-How to run L4Replay:
--------------------
+How to run L4Replay
+--------------------
+After the installation of samplevnf
+
::
- 1. After the installation of samplevnf:
- go to <samplevnf/VNFs/L4Replay>
- 2. ./buid/L4replay -c core_mask -n no_of_channels(let it be as 2) -- -p PORT_MASK --config="(port,queue,lcore)"
+
+ go to <samplevnf>/VNFs/L4Replay
+ ./build/L4replay -c core_mask -n no_of_channels (e.g. 2) -- -p PORT_MASK --config="(port,queue,lcore)"
eg: ./L4replay -c 0xf -n 4 -- -p 0x3 --config="(0,0,1)"
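+In the example above (values are illustrative), -c 0xf enables lcores 0-3,
+-p 0x3 enables ports 0 and 1, and each (port,queue,lcore) tuple maps one RX
+queue of a port to an lcore:
+
+::
+
+    --config="(0,0,1)"            <--- port 0, queue 0 handled by lcore 1
+    --config="(0,0,1)(1,0,2)"     <--- additionally port 1, queue 0 on lcore 2
+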
-6. Installation, Compile and Execution
------------------------------------------------------------------
+Installation, Compile and Execution
+====================================
 Please refer to <samplevnf>/docs/vFW/INSTALL.rst for installation, configuration,
compilation and execution.
diff --git a/docs/vFW/RELEASE_NOTES.rst b/docs/vFW/RELEASE_NOTES.rst
index 0c880042..540f671d 100644
--- a/docs/vFW/RELEASE_NOTES.rst
+++ b/docs/vFW/RELEASE_NOTES.rst
@@ -4,24 +4,25 @@
.. (c) OPNFV, National Center of Scientific Research "Demokritos" and others.
=========================================================
-Virtual Firewall - vFW
+vFW - Release Notes
=========================================================
-1. Introduction
+Introduction
================
 This is a beta release of the Sample Virtual Firewall VNF.
 This vFW application can be run independently (refer to INSTALL.rst).
-2. User Guide
+User Guide
===============
Refer to README.rst for further details on vFW, HLD, features supported, test
plan. For build configurations and execution requisites please refer to
INSTALL.rst.
-3. Feature for this release
+Features for this release
===========================
-This release supports following features as part of vFW:
+This release supports the following features as part of vFW:
+
- Basic packet filtering (malformed packets, IP fragments)
- Connection tracking for TCP and UDP
- Access Control List for rule based policy enforcement
@@ -37,42 +38,55 @@ This release supports following features as part of vFW:
- Multithread support
- Multiple physical port support
-4. System requirements - OS and kernel version
+System requirements - OS and kernel version
==============================================
 This release is supported on Ubuntu 14.04 and 16.04 with kernel versions below 4.5
VNFs on BareMetal support:
- OS: Ubuntu 14.04 or 16.04 LTS
- kernel: < 4.5
- http://releases.ubuntu.com/16.04/
- Download/Install the image: ubuntu-16.04.1-server-amd64.iso
-
- VNFs on Standalone Hypervisor
- HOST OS: Ubuntu 14.04 or 16.04 LTS
- http://releases.ubuntu.com/16.04/
- Download/Install the image: ubuntu-16.04.1-server-amd64.iso
- - OVS (DPDK) - 2.5
- - kernel: < 4.5
- - Hypervisor - KVM
- - VM OS - Ubuntu 16.04/Ubuntu 14.04
-
-5. Known Bugs and limitations
+ OS: Ubuntu 14.04 or 16.04 LTS
+ kernel: < 4.5
+ http://releases.ubuntu.com/16.04/
+ Download/Install the image: ubuntu-16.04.1-server-amd64.iso
+
+ VNFs on Standalone Hypervisor:
+ HOST OS: Ubuntu 14.04 or 16.04 LTS
+ http://releases.ubuntu.com/16.04/
+ Download/Install the image: ubuntu-16.04.1-server-amd64.iso
+
+ - OVS (DPDK) - 2.5
+ - kernel: < 4.5
+ - Hypervisor - KVM
+ - VM OS - Ubuntu 16.04/Ubuntu 14.04
+
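+A quick way to verify that a target machine satisfies these requirements
+(illustrative commands):
+
+::
+
+    lsb_release -rs      <--- expect 14.04 or 16.04
+    uname -r             <--- kernel version, expected to be < 4.5
+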
+Known Bugs and limitations
=============================
+
 - Hardware Load Balancer feature is supported on Fortville NIC FW version 4.53 and below.
 - Hardware Checksum offload is not supported for IPv6 traffic.
 - vFW on SRIOV is tested up to 4 threads
 - HTTP multiple clients/server with HWLB is not working
-6. Future Work
+Future Work
==============
 The following are possible enhancements for future releases:
+
- Automatic enable/disable of synproxy
- Support TCP timestamps with synproxy
- FTP ALG integration
- Performance optimization on different platforms
-7. References
+References
=============
-Following links provides additional information
- .. _QUICKSTART: http://dpdk.org/doc/guides-16.04/linux_gsg/quick_start.html
- .. _DPDKGUIDE: http://dpdk.org/doc/guides-16.04/prog_guide/index.html
+The following links provide additional information for different versions of DPDK:
+
+.. _QUICKSTART:
+ http://dpdk.org/doc/guides-16.04/linux_gsg/quick_start.html
+ http://dpdk.org/doc/guides-16.11/linux_gsg/quick_start.html
+ http://dpdk.org/doc/guides-17.02/linux_gsg/quick_start.html
+ http://dpdk.org/doc/guides-17.05/linux_gsg/quick_start.html
+
+.. _DPDKGUIDE:
+ http://dpdk.org/doc/guides-16.04/prog_guide/index.html
+ http://dpdk.org/doc/guides-16.11/prog_guide/index.html
+ http://dpdk.org/doc/guides-17.02/prog_guide/index.html
+ http://dpdk.org/doc/guides-17.05/prog_guide/index.html
diff --git a/docs/vFW/index.rst b/docs/vFW/index.rst
new file mode 100644
index 00000000..8b6a8186
--- /dev/null
+++ b/docs/vFW/index.rst
@@ -0,0 +1,11 @@
+####################
+vFW samplevnf
+####################
+
+.. toctree::
+ :numbered:
+ :maxdepth: 2
+
+ RELEASE_NOTES.rst
+ README.rst
+ INSTALL.rst