Diffstat (limited to 'docs/testing/user')
-rw-r--r-- | docs/testing/user/userguide/vACL/INSTALL.rst | 233
-rw-r--r-- | docs/testing/user/userguide/vACL/README.rst | 159
-rw-r--r-- | docs/testing/user/userguide/vACL/RELEASE_NOTES.rst | 81
-rw-r--r-- | docs/testing/user/userguide/vACL/index.rst | 11
-rw-r--r-- | docs/testing/user/userguide/vCGNAPT/INSTALL.rst | 230
-rw-r--r-- | docs/testing/user/userguide/vCGNAPT/README.rst | 197
-rw-r--r-- | docs/testing/user/userguide/vCGNAPT/RELEASE_NOTES.rst | 90
-rw-r--r-- | docs/testing/user/userguide/vCGNAPT/index.rst | 11
-rw-r--r-- | docs/testing/user/userguide/vFW/INSTALL.rst | 229
-rw-r--r-- | docs/testing/user/userguide/vFW/README.rst | 182
-rw-r--r-- | docs/testing/user/userguide/vFW/RELEASE_NOTES.rst | 92
-rw-r--r-- | docs/testing/user/userguide/vFW/index.rst | 11
12 files changed, 1526 insertions, 0 deletions
diff --git a/docs/testing/user/userguide/vACL/INSTALL.rst b/docs/testing/user/userguide/vACL/INSTALL.rst new file mode 100644 index 00000000..7f21fc1f --- /dev/null +++ b/docs/testing/user/userguide/vACL/INSTALL.rst @@ -0,0 +1,233 @@ +.. This work is licensed under a Creative Commons Attribution 4.0 International +.. License. +.. http://creativecommons.org/licenses/by/4.0 +.. (c) OPNFV, National Center of Scientific Research "Demokritos" and others. + +============================ +vACL - Installation Guide +============================ + +vACL Compilation +=================== + +After downloading (or doing a git clone) in a directory (samplevnf) + +------------- +Dependencies +------------- + +- DPDK supported versions ($DPDK_RTE_VER = 16.04, 16.11, 17.02 or 17.05): Downloaded and installed via vnf_build.sh or manually from [here] (http://fast.dpdk.org/rel/) +- libpcap-dev +- libzmq +- libcurl + +--------------------- +Environment variables +--------------------- +Apply all the additional patches in 'patches/dpdk_custom_patch/' and build dpdk + +:: + + export RTE_SDK=<dpdk directory> + export RTE_TARGET=x86_64-native-linuxapp-gcc + +This is done by vnf_build.sh script. + +Auto Build: +=========== +$ ./tools/vnf_build.sh in samplevnf root folder + +Follow the steps in the screen from option [1] --> [9] and select option [8] +to build the vnfs. +It will automatically download selected DPDK version and any required patches +and will setup everything and build vACL VNFs. + +Following are the options for setup: + +:: + + ---------------------------------------------------------- + Step 1: Environment setup. 
+ ----------------------------------------------------------
+ [1] Check OS and network connection
+ [2] Select DPDK RTE version
+
+ ----------------------------------------------------------
+ Step 2: Download and Install
+ ----------------------------------------------------------
+ [3] Agree to download
+ [4] Download packages
+ [5] Download DPDK zip
+ [6] Build and Install DPDK
+ [7] Setup hugepages
+
+ ----------------------------------------------------------
+ Step 3: Build VNFs
+ ----------------------------------------------------------
+ [8] Build all VNFs (vACL, vCGNAPT, vFW, UDP_Replay)
+
+ [9] Exit Script
+
+A vACL executable will be created at the following location:
+samplevnf/VNFs/vACL/build/vACL
+
+
+Manual Build:
+=============
+1. Download a DPDK supported version from dpdk.org
+
+   - http://dpdk.org/browse/dpdk/snapshot/dpdk-$DPDK_RTE_VER.zip
+
+2. unzip dpdk-$DPDK_RTE_VER.zip and apply the dpdk patches, only in the case of 16.04
+   (not required for other DPDK versions)
+
+   - cd dpdk
+
+   - patch -p1 < $VNF_CORE/patches/dpdk_custom_patch/i40e-fix-link-management.patch
+   - patch -p1 < $VNF_CORE/patches/dpdk_custom_patch/i40e-fix-Rx-hang-when-disable-LLDP.patch
+   - patch -p1 < $VNF_CORE/patches/dpdk_custom_patch/i40e-fix-link-status-change-interrupt.patch
+   - patch -p1 < $VNF_CORE/patches/dpdk_custom_patch/i40e-fix-VF-bonded-device-link-down.patch
+   - patch -p1 < $VNF_CORE/patches/dpdk_custom_patch/disable-acl-debug-logs.patch
+   - patch -p1 < $VNF_CORE/patches/dpdk_custom_patch/set-log-level-to-info.patch
+
+   - build dpdk
+
+     - make config T=x86_64-native-linuxapp-gcc O=x86_64-native-linuxapp-gcc
+     - cd x86_64-native-linuxapp-gcc
+     - make
+
+   - Setup huge pages
+
+     - For 1G/2M hugepage sizes, for example 1G pages, the size must be specified
+       explicitly and can also be optionally set as the default hugepage
+       size for the system.
For example, to reserve 8G of hugepage memory
+       in the form of eight 1G pages, the following options should be passed
+       to the kernel:
+
+       * default_hugepagesz=1G hugepagesz=1G hugepages=8 hugepagesz=2M hugepages=2048
+
+     - Add this to the /etc/default/grub configuration file:
+
+       - Append "default_hugepagesz=1G hugepagesz=1G hugepages=8 hugepagesz=2M hugepages=2048"
+         to the GRUB_CMDLINE_LINUX entry.
+
+3. Setup Environment Variables
+
+   - export RTE_SDK=<samplevnf>/dpdk
+   - export RTE_TARGET=x86_64-native-linuxapp-gcc
+   - export VNF_CORE=<samplevnf>
+
+   or use ./tools/setenv.sh
+
+4. Build the vACL VNF
+
+   - cd <samplevnf>/VNFs/vACL
+   - make clean
+   - make
+
+5. The vACL executable will be created at the following location:
+
+   - <samplevnf>/VNFs/vACL/build/vACL
+
+Run
+====
+
+----------------------
+Setup Port to run VNF
+----------------------
+
+::
+
+  For DPDK versions 16.04
+  1. cd <samplevnf>/dpdk
+  2. ./tools/dpdk_nic_bind.py --status <--- List the network devices
+  3. ./tools/dpdk_nic_bind.py -b igb_uio <PCI Port 0> <PCI Port 1>
+  .. _More details: http://dpdk.org/doc/guides-16.04/linux_gsg/build_dpdk.html#binding-and-unbinding-network-ports-to-from-the-kernel-modules
+
+  For DPDK versions 16.11
+  1. cd <samplevnf>/dpdk
+  2. ./tools/dpdk-devbind.py --status <--- List the network devices
+  3. ./tools/dpdk-devbind.py -b igb_uio <PCI Port 0> <PCI Port 1>
+  .. _More details: http://dpdk.org/doc/guides-16.11/linux_gsg/build_dpdk.html#binding-and-unbinding-network-ports-to-from-the-kernel-modules
+
+  For DPDK versions 17.xx
+  1. cd <samplevnf>/dpdk
+  2. ./usertools/dpdk-devbind.py --status <--- List the network devices
+  3. ./usertools/dpdk-devbind.py -b igb_uio <PCI Port 0> <PCI Port 1>
+  ..
_More details: http://dpdk.org/doc/guides-17.05/linux_gsg/build_dpdk.html#binding-and-unbinding-network-ports-to-from-the-kernel-modules
+
+
+  Make the necessary changes to the config files to run the vACL VNF
+  eg: ports_mac_list = 00:00:00:30:21:00 00:00:00:30:21:00
+
+-----------------
+ACL run commands
+-----------------
+Update the configuration according to the system configuration.
+
+::
+
+  ./build/vACL -p <port mask> -f <config> -s <script>                 - SW_LoadB
+
+  ./build/vACL -p <port mask> -f <config> -s <script> --hwlb <num_WT> - HW_LoadB
+
+
+Run IPv4
+--------
+
+::
+
+  Software LoadB
+
+  cd <samplevnf>/VNFs/vACL/
+
+  ./build/vACL -p 0x3 -f ./config/IPv4_swlb_acl_1LB_1t.cfg -s ./config/IPv4_swlb_acl.tc
+
+
+  Hardware LoadB
+
+  cd <samplevnf>/VNFs/vACL/
+
+  ./build/vACL -p 0x3 -f ./config/IPv4_hwlb_acl_1LB_1t.cfg -s ./config/IPv4_hwlb_acl.tc --hwlb 1
+
+Run IPv6
+--------
+
+::
+
+  Software LoadB
+
+  cd <samplevnf>/VNFs/vACL/
+
+  ./build/vACL -p 0x3 -f ./config/IPv6_swlb_acl_1LB_1t.cfg -s ./config/IPv6_swlb_acl.tc
+
+
+  Hardware LoadB
+
+  cd <samplevnf>/VNFs/vACL/
+
+  ./build/vACL -p 0x3 -f ./config/IPv6_hwlb_acl_1LB_1t.cfg -s ./config/IPv6_hwlb_acl.tc --hwlb 1
+
+vACL execution on BM & SRIOV
+--------------------------------
+To run the VNF, execute the following:
+
+::
+
+  samplevnf/VNFs/vACL# ./build/vACL -p 0x3 -f ./config/IPv4_swlb_acl_1LB_1t.cfg -s ./config/IPv4_swlb_acl.tc
+
+  Command Line Params:
+  -p PORTMASK: Hexadecimal bitmask of ports to configure
+  -f CONFIG FILE: vACL configuration file
+  -s SCRIPT FILE: vACL script file
+
+vACL execution on OVS
+-------------------------
+To run the VNF, execute the following:
+
+::
+
+  samplevnf/VNFs/vACL# ./build/vACL -p 0x3 -f ./config/IPv4_swlb_acl_1LB_1t.cfg -s ./config/IPv4_swlb_acl.tc --disable-hw-csum
+
+  Command Line Params:
+  -p PORTMASK: Hexadecimal bitmask of ports to configure
+  -f CONFIG FILE: vACL configuration file
+  -s SCRIPT FILE: vACL script file
+  --disable-hw-csum: Disable TCP/UDP hw checksum
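The ``-p PORTMASK`` argument used throughout the examples above is a plain hexadecimal bitmask with one bit per bound port. As a hedged illustration (these helpers are not part of samplevnf), the mask for ports 0 and 1 can be built and decoded like this:

```python
# Illustrative sketch (not part of vACL): build and decode the hexadecimal
# port bitmask passed via -p. Port i is enabled when bit i of the mask is set.

def make_portmask(ports):
    """Combine DPDK port indices into a bitmask, e.g. [0, 1] -> 0x3."""
    mask = 0
    for p in ports:
        mask |= 1 << p
    return mask

def enabled_ports(mask):
    """Decode a bitmask back into the list of enabled port indices."""
    return [i for i in range(mask.bit_length()) if mask & (1 << i)]

print(hex(make_portmask([0, 1])))   # 0x3, as used in the examples above
print(enabled_ports(0x3))           # [0, 1]
```

So ``-p 0x3`` in the commands above selects the first two bound ports; a four-port run would use ``-p 0xf``.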
diff --git a/docs/testing/user/userguide/vACL/README.rst b/docs/testing/user/userguide/vACL/README.rst new file mode 100644 index 00000000..f8c3e817 --- /dev/null +++ b/docs/testing/user/userguide/vACL/README.rst @@ -0,0 +1,159 @@ +.. This work is licensed under a creative commons attribution 4.0 international +.. license. +.. http://creativecommons.org/licenses/by/4.0 +.. (c) opnfv, national center of scientific research "demokritos" and others. + +======================================================== +vACL - Readme +======================================================== + +Introduction +================= +This application implements Access Control List (ACL). ACL is typically +used for rule based policy enforcement. It restricts access to a destination +IP address/port based on various header fields, such as source IP address/port, +destination IP address/port and protocol. It is built on top of DPDK and +uses the packet framework infrastructure. + + +---------- +About DPDK +---------- +The DPDK IP Pipeline Framework provides a set of libraries to build a pipeline +application. In this document, vACL will be explained in detail with its own +building blocks. + +This document assumes the reader possesses the knowledge of DPDK concepts and +packet framework. For more details, read DPDK Getting Started Guide, DPDK +Programmers Guide, DPDK Sample Applications Guide. + +Scope +========== +This application provides a standalone DPDK based high performance vACL Virtual +Network Function implementation. + +Features +=========== +The vACL VNF currently supports the following functionality + • CLI based Run-time rule configuration.(Add, Delete, List, Display, Clear, Modify) + • Ipv4 and ipv6 standard 5 tuple packet Selector support. 
+ • Multithread support
+ • Multiple physical port support
+ • Hardware and Software Load Balancing
+ • L2L3 stack support for ARP/ICMP handling
+ • ARP (request, response, gratuitous)
+ • ICMP (terminal echo, echo response, passthrough)
+ • ICMPv6 and ND (Neighbor Discovery)
+
+High Level Design
+====================
+The ACL Filter performs bulk filtering of incoming packets based on the rules in the current ruleset,
+discarding any packets not permitted by the rules. The mechanisms needed for building the
+rule database and performing lookups are provided by the DPDK API.
+http://dpdk.org/doc/api/rte__acl_8h.html
+
+The Input FIFO contains all the incoming packets for ACL filtering. Packets will be dequeued
+from the FIFO in bulk for processing by the ACL and then enqueued to the output FIFO.
+The Input and Output FIFOs will be implemented using DPDK Ring Buffers.
+
+The DPDK ACL example: http://dpdk.org/doc/guides/sample_app_ug/l3_forward_access_ctrl.html
+#figure-ipv4-acl-rule contains a suitable syntax and parser for ACL rules.
+
+Components of vACL
+=======================
+In vACL, each component is constructed using packet framework pipelines.
+It includes the Rx and Tx Driver, Master pipeline, Load Balancer pipeline and
+vACL worker pipeline components. A Pipeline framework is a collection of input
+ports, table(s), output ports and actions (functions).
+
+---------------------------
+Receive and Transmit Driver
+---------------------------
+Packets will be received in bulk and provided to the LoadBalancer (LB) thread.
+Transmit takes packets from the worker threads in a dedicated ring and sends
+them to the hardware queue.
+
+---------------------------
+Master Pipeline
+---------------------------
+The Master component is part of all the IP Pipeline applications. This component
+does not process any packets and should be configured on Core 0, leaving the
+other cores free for processing of the traffic. This component is responsible for:
+1.
Initializing each component of the Pipeline application in different threads
+2. Providing a CLI shell for user control/debug
+3. Propagating the commands from the user to the corresponding components
+
+---------------------------
+ARPICMP Pipeline
+---------------------------
+This pipeline processes the ARP and ICMP packets.
+
+---------------------------
+TXRX Pipelines
+---------------------------
+The TXRX pipelines are pass-through pipelines that forward both ingress
+and egress traffic to the Load Balancer. This is required when the Software
+Load Balancer is used.
+
+---------------------------
+Load Balancer Pipeline
+---------------------------
+The vACL supports both hardware and software load balancing for distributing
+traffic across multiple VNF threads. Hardware load balancing requires support
+from hardware, such as Flow Director, for steering packets to the application
+through hardware queues.
+
+The Software Load Balancer is also supported if hardware load balancing can't be
+used for any reason. The TXRX pipelines along with the LOADB pipeline provide
+software load balancing by distributing the flows to multiple vACL worker
+threads.
+The Load Balancer (HW or SW) distributes traffic based on the 5 tuple (src addr, src
+port, dest addr, dest port and protocol), applying an XOR logic to distribute flows to
+active worker threads, thereby maintaining an affinity of flows to worker
+threads.
+
+---------------------------
+vACL Pipeline
+---------------------------
+The vACL pipeline performs the rule-based packet filtering.
+
+vACL Topology
+------------------------
+
+::
+
+  IXIA(Port 0)-->(Port 0)VNF(Port 1)-->(Port 1) IXIA
+  operation:
+
+  Egress  --> The packets sent out from ixia(port 0) will be sent through ACL to ixia(port 1).
+
+  Ingress --> The packets sent out from ixia(port 1) will be sent through ACL to ixia(port 0).
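The XOR-based flow distribution described above can be sketched as follows. This is a simplified model of the idea, assuming a plain XOR of the 5-tuple fields folded modulo the worker count; it is not the exact vACL implementation:

```python
# Simplified illustration of 5-tuple XOR load balancing (not the exact vACL
# code): XOR the tuple fields, fold the result, and map it onto the number of
# active worker threads. Packets of one flow always land on the same worker,
# which is what gives the flow-to-worker affinity described above.

def select_worker(src_ip, dst_ip, src_port, dst_port, proto, num_workers):
    h = src_ip ^ dst_ip ^ src_port ^ dst_port ^ proto
    # Fold the 32-bit value down to 16 bits before taking the modulo.
    h = (h >> 16) ^ (h & 0xFFFF)
    return h % num_workers

# Because XOR is commutative, swapping src and dst fields gives the same
# hash, so both directions of a flow map to the same worker thread.
w_fwd = select_worker(0xC0A80001, 0x0A000001, 1234, 80, 6, 4)
w_rev = select_worker(0x0A000001, 0xC0A80001, 80, 1234, 6, 4)
assert w_fwd == w_rev
```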
+
+vACL Topology (L4REPLAY)
+------------------------------------
+
+::
+
+  IXIA(Port 0)-->(Port 0)VNF(Port 1)-->(Port 0)L4REPLAY
+
+  operation:
+
+  Egress  --> The packets sent out from ixia will pass through vACL to L3FWD/L4REPLAY.
+
+  Ingress --> The L4REPLAY upon reception of packets (Private to Public Network),
+              will immediately replay back the traffic to the IXIA interface. (Pub --> Priv)
+
+How to run L4Replay
+--------------------
+After the installation of samplevnf:
+
+::
+
+  go to <samplevnf/VNFs/L4Replay>
+  ./build/L4replay -c core_mask -n no_of_channels(let it be as 2) -- -p PORT_MASK --config="(port,queue,lcore)"
+  eg: ./L4replay -c 0xf -n 4 -- -p 0x3 --config="(0,0,1)"
+
+Installation, Compile and Execution
+=======================================
+Please refer to <samplevnf>/docs/vACL/INSTALL.rst for installation, configuration, compilation
+and execution.
diff --git a/docs/testing/user/userguide/vACL/RELEASE_NOTES.rst b/docs/testing/user/userguide/vACL/RELEASE_NOTES.rst
new file mode 100644
index 00000000..c947a371
--- /dev/null
+++ b/docs/testing/user/userguide/vACL/RELEASE_NOTES.rst
@@ -0,0 +1,81 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, National Center of Scientific Research "Demokritos" and others.
+
+=========================================================
+vACL - Release Notes
+=========================================================
+
+Introduction
+===================
+
+This is a beta release for the Sample Virtual ACL VNF.
+The vACL application can be run independently (refer INSTALL.rst).
+
+User Guide
+===============
+Refer to README.rst for further details on vACL, HLD, features supported, and the test
+plan. For build configurations and execution requisites please refer to
+INSTALL.rst.
+
+Feature for this release
+===========================
+The vACL VNF currently supports the following functionality:
+ • CLI based run-time rule configuration (Add, Delete, List, Display, Clear, Modify)
+ • IPv4 and IPv6 standard 5-tuple packet selector support
+ • Multithread support
+ • Multiple physical port support
+ • Hardware and Software Load Balancing
+ • L2L3 stack support for ARP/ICMP handling
+ • ARP (request, response, gratuitous)
+ • ICMP (terminal echo, echo response, passthrough)
+ • ICMPv6 and ND (Neighbor Discovery)
+
+System requirements - OS and kernel version
+==============================================
+This is supported on Ubuntu 14.04 and 16.04, with a kernel version less than 4.5.
+
+  VNFs on BareMetal support:
+      OS: Ubuntu 14.04 or 16.04 LTS
+      kernel: < 4.5
+      http://releases.ubuntu.com/16.04/
+      Download/Install the image: ubuntu-16.04.1-server-amd64.iso
+
+  VNFs on Standalone Hypervisor
+      HOST OS: Ubuntu 14.04 or 16.04 LTS
+      http://releases.ubuntu.com/16.04/
+      Download/Install the image: ubuntu-16.04.1-server-amd64.iso
+
+      - OVS (DPDK) - 2.5
+      - kernel: < 4.5
+      - Hypervisor - KVM
+      - VM OS - Ubuntu 16.04/Ubuntu 14.04
+
+Known Bugs and limitations
+=============================
+  - The Hardware Load Balancer feature is supported on Fortville NIC FW
+    version 4.53 and below.
+  - Hardware Checksum offload is not supported for IPv6 traffic.
+  - vACL on SRIOV is tested up to 4 threads.
+
+Future Work
+==============
+The following are possible enhancements:
+  - Performance optimization on different platforms
+
+References
+=============
+The following links provide additional information for different versions of DPDK:
+
+.. _QUICKSTART:
+  http://dpdk.org/doc/guides-16.04/linux_gsg/quick_start.html
+  http://dpdk.org/doc/guides-16.11/linux_gsg/quick_start.html
+  http://dpdk.org/doc/guides-17.02/linux_gsg/quick_start.html
+  http://dpdk.org/doc/guides-17.05/linux_gsg/quick_start.html
+
+..
_DPDKGUIDE: + http://dpdk.org/doc/guides-16.04/prog_guide/index.html + http://dpdk.org/doc/guides-16.11/prog_guide/index.html + http://dpdk.org/doc/guides-17.02/prog_guide/index.html + http://dpdk.org/doc/guides-17.05/prog_guide/index.html diff --git a/docs/testing/user/userguide/vACL/index.rst b/docs/testing/user/userguide/vACL/index.rst new file mode 100644 index 00000000..c1ae029b --- /dev/null +++ b/docs/testing/user/userguide/vACL/index.rst @@ -0,0 +1,11 @@ +#################### +vACL samplevnf +#################### + +.. toctree:: + :numbered: + :maxdepth: 2 + + RELEASE_NOTES.rst + README.rst + INSTALL.rst diff --git a/docs/testing/user/userguide/vCGNAPT/INSTALL.rst b/docs/testing/user/userguide/vCGNAPT/INSTALL.rst new file mode 100644 index 00000000..85873109 --- /dev/null +++ b/docs/testing/user/userguide/vCGNAPT/INSTALL.rst @@ -0,0 +1,230 @@ +.. This work is licensed under a Creative Commons Attribution 4.0 International +.. License. +.. http://creativecommons.org/licenses/by/4.0 +.. (c) OPNFV, National Center of Scientific Research "Demokritos" and others. + +============================ +vCGNAPT - Installation Guide +============================ + + +vCGNAPT Compilation +=================== + +After downloading (or doing a git clone) in a directory (samplevnf) + +Dependencies +------------- + +- DPDK supported versions ($DPDK_RTE_VER = 16.04, 16.11, 17.02 or 17.05) Downloaded and installed via vnf_build.sh or manually from [here] (http://fast.dpdk.org/rel/) +- libpcap-dev +- libzmq +- libcurl + +Environment variables +--------------------- + +Apply all the additional patches in 'patches/dpdk_custom_patch/' and build dpdk +required only for DPDK version 16.04. + +:: + + export RTE_SDK=<dpdk directory> + export RTE_TARGET=x86_64-native-linuxapp-gcc + +This is done by vnf_build.sh script. 
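Before building, it can help to sanity-check the exports above. The helper below is a hypothetical convenience sketch (not part of samplevnf), assuming the standard ``RTE_SDK``/``RTE_TARGET`` layout where the built target lives under ``$RTE_SDK/$RTE_TARGET``:

```python
# Hypothetical helper (not part of samplevnf): check that the DPDK build
# environment variables are set and that the build target directory exists.
import os

def check_dpdk_env(environ):
    """Return a list of problems with the DPDK environment, empty if OK."""
    problems = []
    sdk = environ.get("RTE_SDK")
    target = environ.get("RTE_TARGET")
    if not sdk:
        problems.append("RTE_SDK is not set")
    if not target:
        problems.append("RTE_TARGET is not set")
    elif target != "x86_64-native-linuxapp-gcc":
        problems.append("unexpected RTE_TARGET: " + target)
    # The built DPDK tree is expected at $RTE_SDK/$RTE_TARGET.
    if sdk and target and not os.path.isdir(os.path.join(sdk, target)):
        problems.append("DPDK target directory not built yet")
    return problems

print(check_dpdk_env(dict(os.environ)))
```

An empty list means the environment looks ready for the VNF build steps that follow.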
+
+Auto Build:
+===========
+$ ./tools/vnf_build.sh in samplevnf root folder
+
+Follow the steps in the screen from option [1] --> [9] and select option [8]
+to build the VNFs.
+It will automatically download the selected DPDK version and any required patches,
+and will setup everything and build the vCGNAPT VNF.
+
+Following are the options for setup:
+
+::
+
+  ----------------------------------------------------------
+  Step 1: Environment setup.
+  ----------------------------------------------------------
+  [1] Check OS and network connection
+  [2] Select DPDK RTE version
+
+  ----------------------------------------------------------
+  Step 2: Download and Install
+  ----------------------------------------------------------
+  [3] Agree to download
+  [4] Download packages
+  [5] Download DPDK zip
+  [6] Build and Install DPDK
+  [7] Setup hugepages
+
+  ----------------------------------------------------------
+  Step 3: Build VNFs
+  ----------------------------------------------------------
+  [8] Build all VNFs (vACL, vCGNAPT, vFW, UDP_Replay)
+
+  [9] Exit Script
+
+A vCGNAPT executable will be created at the following location:
+samplevnf/VNFs/vCGNAPT/build/vCGNAPT
+
+
+Manual Build:
+=============
+1. Download a DPDK supported version from dpdk.org
+
+   - http://dpdk.org/browse/dpdk/snapshot/dpdk-$DPDK_RTE_VER.zip
+
+2.
unzip dpdk-$DPDK_RTE_VER.zip and apply the dpdk patches, only in the case of 16.04
+   (not required for other DPDK versions)
+
+   - cd dpdk
+
+   - patch -p1 < $VNF_CORE/patches/dpdk_custom_patch/i40e-fix-link-management.patch
+   - patch -p1 < $VNF_CORE/patches/dpdk_custom_patch/i40e-fix-Rx-hang-when-disable-LLDP.patch
+   - patch -p1 < $VNF_CORE/patches/dpdk_custom_patch/i40e-fix-link-status-change-interrupt.patch
+   - patch -p1 < $VNF_CORE/patches/dpdk_custom_patch/i40e-fix-VF-bonded-device-link-down.patch
+   - patch -p1 < $VNF_CORE/patches/dpdk_custom_patch/disable-acl-debug-logs.patch
+   - patch -p1 < $VNF_CORE/patches/dpdk_custom_patch/set-log-level-to-info.patch
+
+   - build dpdk
+
+     - make config T=x86_64-native-linuxapp-gcc O=x86_64-native-linuxapp-gcc
+     - cd x86_64-native-linuxapp-gcc
+     - make
+
+   - Setup huge pages
+
+     - For 1G/2M hugepage sizes, for example 1G pages, the size must be specified
+       explicitly and can also be optionally set as the default hugepage size for
+       the system. For example, to reserve 8G of hugepage memory in the form of
+       eight 1G pages, the following options should be passed to the kernel:
+
+       * default_hugepagesz=1G hugepagesz=1G hugepages=8 hugepagesz=2M hugepages=2048
+
+     - Add this to the /etc/default/grub configuration file:
+
+       - Append "default_hugepagesz=1G hugepagesz=1G hugepages=8 hugepagesz=2M hugepages=2048"
+         to the GRUB_CMDLINE_LINUX entry.
+
+3. Setup Environment Variables
+
+   - export RTE_SDK=<samplevnf>/dpdk
+   - export RTE_TARGET=x86_64-native-linuxapp-gcc
+   - export VNF_CORE=<samplevnf>
+
+   or use ./tools/setenv.sh
+
+4. Build the vCGNAPT VNF
+
+   - cd <samplevnf>/VNFs/vCGNAPT
+   - make clean
+   - make
+
+5. A vCGNAPT executable will be created at the following location:
+
+   - <samplevnf>/VNFs/vCGNAPT/build/vCGNAPT
+
+Run
+====
+
+Setup Port to run VNF
+----------------------
+
+::
+
+  For DPDK versions 16.04
+  1. cd <samplevnf>/dpdk
+  2. ./tools/dpdk_nic_bind.py --status <--- List the network devices
+  3.
./tools/dpdk_nic_bind.py -b igb_uio <PCI Port 0> <PCI Port 1> + .. _More details: http://dpdk.org/doc/guides-16.04/linux_gsg/build_dpdk.html#binding-and-unbinding-network-ports-to-from-the-kernel-modules + + For DPDK versions 16.11 + 1. cd <samplevnf>/dpdk + 2. ./tools/dpdk-devbind.py --status <--- List the network device + 3. ./tools/dpdk-devbind.py -b igb_uio <PCI Port 0> <PCI Port 1> + .. _More details: http://dpdk.org/doc/guides-16.11/linux_gsg/build_dpdk.html#binding-and-unbinding-network-ports-to-from-the-kernel-modules + + For DPDK versions 17.xx + 1. cd <samplevnf>/dpdk + 2. ./usertools/dpdk-devbind.py --status <--- List the network device + 3. ./usertools/dpdk-devbind.py -b igb_uio <PCI Port 0> <PCI Port 1> + .. _More details: http://dpdk.org/doc/guides-17.05/linux_gsg/build_dpdk.html#binding-and-unbinding-network-ports-to-from-the-kernel-modules + + Make the necessary changes to the config files to run the vCGNAPT VNF + eg: ports_mac_list = 00:00:00:30:21:F0 00:00:00:30:21:F1 + +Dynamic CGNAPT +-------------- +Update the configuration according to system configuration. 
+
+::
+
+  ./vCGNAPT -p <port mask> -f <config> -s <script>                 - SW_LoadB
+  ./vCGNAPT -p <port mask> -f <config> -s <script> --hwlb <num_WT> - HW_LoadB
+
+Static CGNAPT
+-------------
+Update the script file and add a Static NAT entry
+
+::
+
+  e.g.,
+  ;p <pipeline id> entry addm <prv_ipv4/6> <prv_port> <pub_ip> <pub_port> <phy_port> <ttl> <no_of_entries> <end_prv_port> <end_pub_port>
+  ;p 3 entry addm 152.16.100.20 1234 152.16.40.10 1 0 500 65535 1234 65535
+
+Run IPv4
+----------
+
+::
+
+  Software LoadB:
+
+  cd <samplevnf>/VNFs/vCGNAPT/build
+  ./vCGNAPT -p 0x3 -f ./config/arp_txrx-2P-1T.cfg -s ./config/arp_txrx_ScriptFile_2P.cfg
+
+
+  Hardware LoadB:
+
+  cd <samplevnf>/VNFs/vCGNAPT/build
+  ./vCGNAPT -p 0x3 -f ./config/arp_hwlb-2P-1T.cfg -s ./config/arp_hwlb_scriptfile_2P.cfg --hwlb 1
+
+Run IPv6
+---------
+
+::
+
+  Software LoadB:
+
+  cd <samplevnf>/VNFs/vCGNAPT/build
+  ./vCGNAPT -p 0x3 -f ./config/arp_txrx-2P-1T-ipv6.cfg -s ./config/arp_txrx_ScriptFile_2P.cfg
+
+
+  Hardware LoadB:
+
+  cd <samplevnf>/VNFs/vCGNAPT/build
+  ./vCGNAPT -p 0x3 -f ./config/arp_hwlb-2P-1T-ipv6.cfg -s ./config/arp_hwlb_scriptfile_2P.cfg --hwlb 1
+
+vCGNAPT execution on BM & SRIOV
+--------------------------------
+
+::
+
+  To run the VNF, execute the following:
+  samplevnf/VNFs/vCGNAPT# ./build/vCGNAPT -p 0x3 -f ./config/arp_txrx-2P-1T.cfg -s ./config/arp_txrx_ScriptFile_2P.cfg
+  Command Line Params:
+  -p PORTMASK: Hexadecimal bitmask of ports to configure
+  -f CONFIG FILE: vCGNAPT configuration file
+  -s SCRIPT FILE: vCGNAPT script file
+
+vCGNAPT execution on OVS
+-------------------------
+To run the VNF, execute the following:
+
+::
+
+  samplevnf/VNFs/vCGNAPT# ./build/vCGNAPT -p 0x3 -f ./config/arp_txrx-2P-1T.cfg -s ./config/arp_txrx_ScriptFile_2P.cfg --disable-hw-csum
+  Command Line Params:
+  -p PORTMASK: Hexadecimal bitmask of ports to configure
+  -f CONFIG FILE: vCGNAPT configuration file
+  -s SCRIPT FILE: vCGNAPT script file
+  --disable-hw-csum: Disable TCP/UDP hw checksum
diff --git
a/docs/testing/user/userguide/vCGNAPT/README.rst b/docs/testing/user/userguide/vCGNAPT/README.rst
new file mode 100644
index 00000000..dd6bb079
--- /dev/null
+++ b/docs/testing/user/userguide/vCGNAPT/README.rst
@@ -0,0 +1,197 @@
+.. this work is licensed under a creative commons attribution 4.0 international
+.. license.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) opnfv, national center of scientific research "demokritos" and others.
+
+========================================================
+vCGNAPT - Readme
+========================================================
+
+Introduction
+==============
+This application implements vCGNAPT. The idea of vCGNAPT is to extend the life of
+the service provider's IPv4 network infrastructure and mitigate IPv4 address
+exhaustion by using address and port translation at large scale. It processes
+traffic in both directions.
+
+It also supports connectivity between an IPv6 access network and an IPv4 data network
+using IPv6 to IPv4 address translation and vice versa.
+
+About DPDK
+----------
+The DPDK IP Pipeline Framework provides a set of libraries to build a pipeline
+application. In this document, the CG-NAT application will be explained with its
+own building blocks.
+
+This document assumes the reader possesses knowledge of DPDK concepts and the IP
+Pipeline Framework. For more details, read the DPDK Getting Started Guide, DPDK
+Programmers Guide and DPDK Sample Applications Guide.
+
+Scope
+==========
+This application provides a standalone DPDK based high performance vCGNAPT
+Virtual Network Function implementation.
+
+Features
+===========
+The vCGNAPT VNF currently supports the following functionality:
+ • Static NAT
+ • Dynamic NAT
+ • Static NAPT
+ • Dynamic NAPT
+ • ARP (request, response, gratuitous)
+ • ICMP (terminal echo, echo response, passthrough)
+ • ICMPv6 and ND (Neighbor Discovery)
+ • UDP, TCP and ICMP protocol passthrough
+ • Multithread support
+ • Multiple physical port support
+ • Limiting max ports per client
+ • Limiting max clients per public IP address
+ • Live Session tracking to NAT flow
+ • NAT64
+ • PCP Support
+ • ALG SIP
+ • ALG FTP
+
+High Level Design
+====================
+The Upstream path defines the traffic from Private to Public and the Downstream
+path defines the traffic from Public to Private. The vCGNAPT has the same set of
+components to process Upstream and Downstream traffic.
+
+In the vCGNAPT application, each component is constructed using the IP Pipeline
+framework. It includes the Master pipeline component, the load balancer pipeline
+component and the vCGNAPT pipeline component.
+
+A Pipeline framework is a collection of input ports, table(s), output ports and
+actions (functions). In the vCGNAPT pipeline, the main sub components are the Inport
+function handler, the Table and the Table function handler. vCGNAPT rules will be
+configured in the table, which translates egress and ingress traffic according to
+the physical port on which the packet arrived. The action can be to forward the
+packet to the output port (either egress or ingress) or to drop it.
+
+vCGNAPT Graphical Overview
+==========================
+The idea of vCGNAPT is to extend the life of the service provider's IPv4 network infrastructure
+and mitigate IPv4 address exhaustion by using address and port translation at large scale.
+It processes traffic in both directions.
+
+..
code-block:: console + + +------------------+ + | +-----+ + | Private consumer | CPE |---------------+ + | IPv4 traffic +-----+ | + +------------------+ | + +------------------+ v +----------------+ + | | +------------+ | | + | Private IPv4 | | vCGNAPT | | Public | + | access network | | NAT44 | | IPv4 traffic | + | | +------------+ | | + +------------------+ | +----------------+ + +------------------+ | + | +-----+ | + | Private consumer| CPE |-----------------+ + | IPv4 traffic +-----+ + +------------------+ + Figure 1: vCGNAPT deployment in Service provider network + + + +Components of vCGNAPT +===================== + +In vCGNAPT, each component is constructed as a packet framework. It includes Master pipeline +component, driver, load balancer pipeline component and vCGNAPT worker pipeline component. A +pipeline framework is a collection of input ports, table(s), output ports and actions +(functions). + +Receive and transmit driver +---------------------------- +Packets will be received in bulk and provided to load balancer thread. The transmit takes +packets from worker thread in a dedicated ring and sent to the hardware queue. + +ARPICMP pipeline +------------------------ +ARPICMP pipeline is responsible for handling all l2l3 arp related packets. + +This component does not process any packets and should configure with Core 0, +to save cores for other components which processes traffic. The component +is responsible for: +1. Initializing each component of the Pipeline application in different threads +2. Providing CLI shell for the user +3. Propagating the commands from user to the corresponding components. +4. ARP and ICMP are handled here. + +Load Balancer pipeline +------------------------ +Load balancer is part of the Multi-Threaded CGMAPT release which distributes +the flows to Multiple ACL worker threads. 
+
+It distributes traffic based on the 2 or 5 tuple (source address, source port,
+destination address, destination port and protocol), applying an XOR logic
+to distribute the load to the active worker threads, thereby maintaining an
+affinity of flows to worker threads.
+
+The tuple can be modified/configured using the configuration file.
+
+vCGNAPT - Static
+====================
+The vCGNAPT component performs translation of private IP & port to public IP &
+port at the egress side and public IP & port to private IP & port at the ingress
+side, based on the NAT rules added to the pipeline Hash table. The NAT rules are
+added to the Hash table via user commands. The packets that have a matching
+egress key or ingress key in the NAT table will be processed to change IP &
+port and will be forwarded to the output port. The packets that do not have a
+match will take a default action. The default action may result in dropping of
+the packets.
+
+vCGNAPT - Dynamic
+===================
+The vCGNAPT component performs translation of private IP & port to public IP & port
+at the egress side and public IP & port to private IP & port at the ingress side,
+based on the NAT rules added to the pipeline Hash table. The dynamic nature of
+vCGNAPT refers to the addition of NAT entries to the Hash table dynamically when a
+new packet arrives. The NAT rules will be added to the Hash table automatically when
+there is no matching entry in the table and the packet is circulated through the
+software queue. The packets that have a matching egress key or ingress key in the
+NAT table will be processed to change IP & port and will be forwarded to the output
+port defined in the entry.
+
+A dynamic vCGNAPT also acts as a static one: NAT entries can be added statically.
+The static NAT entry port range must not conflict with the dynamic NAT port range.
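The dynamic entry creation described above can be modelled with a small sketch. The class name, the naive sequential port allocator and the dictionary layout are illustrative assumptions, not the vCGNAPT implementation:

```python
# Toy model of dynamic NAPT (illustrative only): on the first egress packet of
# a flow, allocate a public port and install both the egress and the ingress
# keys, so that later packets in either direction hit the table directly.

class NaptTable:
    def __init__(self, public_ip, port_base=1024):
        self.public_ip = public_ip
        self.next_port = port_base   # naive sequential allocator for the sketch
        self.egress = {}             # (priv_ip, priv_port) -> (pub_ip, pub_port)
        self.ingress = {}            # (pub_ip, pub_port) -> (priv_ip, priv_port)

    def translate_egress(self, priv_ip, priv_port):
        key = (priv_ip, priv_port)
        if key not in self.egress:   # dynamic entry creation on first packet
            pub = (self.public_ip, self.next_port)
            self.next_port += 1
            self.egress[key] = pub
            self.ingress[pub] = key
        return self.egress[key]

nat = NaptTable("152.16.40.10")
pub = nat.translate_egress("152.16.100.20", 1234)
# The reverse (ingress) lookup now exists for the allocated public port.
assert nat.ingress[pub] == ("152.16.100.20", 1234)
```

In this model a static entry is simply one installed into both dictionaries up front, which is why the static and dynamic port ranges must not overlap.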
+
+vCGNAPT Static Topology
+------------------------
+
+::
+
+  IXIA(Port 0)-->(Port 0)VNF(Port 1)-->(Port 1) IXIA
+  operation:
+    Egress  --> The packets sent out from ixia(port 0) will be CGNAPTed to ixia(port 1).
+    Ingress --> The packets sent out from ixia(port 1) will be CGNAPTed to ixia(port 0).
+
+vCGNAPT Dynamic Topology (L4REPLAY)
+------------------------------------
+
+::
+
+  IXIA(Port 0)-->(Port 0)VNF(Port 1)-->(Port 0)L4REPLAY
+  operation:
+    Egress  --> The packets sent out from ixia will be CGNAPTed to L3FWD/L4REPLAY.
+    Ingress --> The L4REPLAY, upon reception of packets (Private to Public Network),
+                will immediately replay the traffic back to the IXIA interface (Pub --> Priv).
+
+How to run L4Replay
+--------------------
+After the installation of samplevnf:
+
+::
+
+  cd <samplevnf>/VNFs/L4Replay
+  ./build/L4replay -c core_mask -n no_of_channels(let it be as 2) -- -p PORT_MASK --config="(port,queue,lcore)"
+  eg: ./L4replay -c 0xf -n 4 -- -p 0x3 --config="(0,0,1)"
+
+Installation, Compile and Execution
+====================================
+Please refer to <samplevnf>/docs/vCGNAPT/INSTALL.rst for installation,
+configuration, compilation and execution.
diff --git a/docs/testing/user/userguide/vCGNAPT/RELEASE_NOTES.rst b/docs/testing/user/userguide/vCGNAPT/RELEASE_NOTES.rst
new file mode 100644
index 00000000..a776a54d
--- /dev/null
+++ b/docs/testing/user/userguide/vCGNAPT/RELEASE_NOTES.rst
@@ -0,0 +1,90 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, National Center of Scientific Research "Demokritos" and others.
+
+=========================================================
+vCGNAPT - Release Notes
+=========================================================
+
+Introduction
+================
+This is the beta release of the vCGNAPT VNF.
+The vCGNAPT application can be run independently (refer to INSTALL.rst).
+
+User Guide
+===============
+Refer to README.rst for further details on vCGNAPT, its HLD, the features
+supported and the test plan. For build configurations and execution
+requisites, please refer to INSTALL.rst.
+
+Features for this release
+===========================
+This release supports the following features as part of vCGNAPT:
+
+- vCGNAPT can run as a standalone application on a bare-metal Linux server or
+  on a virtual machine using SRIOV and OVS-DPDK.
+- Static NAT
+- Dynamic NAT
+- Static NAPT
+- Dynamic NAPT
+- ARP (request, response, gratuitous)
+- ICMP (terminal echo, echo response, passthrough)
+- ICMPv6 and ND (Neighbor Discovery)
+- UDP, TCP and ICMP protocol passthrough
+- Multithread support
+- Multiple physical port support
+- Limiting max ports per client
+- Limiting max clients per public IP address
+- Live session tracking of NAT flows
+- PCP support
+- NAT64
+- ALG SIP
+- ALG FTP
+
+System requirements - OS and kernel version
+==============================================
+This is supported on Ubuntu 14.04 and 16.04, with a kernel version below 4.5.
+
+  VNFs on BareMetal support:
+    OS: Ubuntu 14.04 or 16.04 LTS
+    kernel: < 4.5
+    http://releases.ubuntu.com/16.04/
+    Download/Install the image: ubuntu-16.04.1-server-amd64.iso
+
+  VNFs on Standalone Hypervisor:
+    HOST OS: Ubuntu 14.04 or 16.04 LTS
+    http://releases.ubuntu.com/16.04/
+    Download/Install the image: ubuntu-16.04.1-server-amd64.iso
+
+    - OVS (DPDK) - 2.5
+    - kernel: < 4.5
+    - Hypervisor - KVM
+    - VM OS - Ubuntu 16.04/Ubuntu 14.04
+
+Known Bugs and limitations
+=============================
+- The Hardware Load Balancer feature is supported on the Fortville NIC with
+  FW version 4.53 and below.
+- L4 UDP Replay is used to capture throughput for dynamic CGNAPT.
+- Hardware checksum offload is not supported for IPv6 traffic.
+- CGNAPT on SRIOV is tested with up to 4 threads.
+
+Future Work
+==============
+- SCTP passthrough support
+- Multi-homing support
+- Performance optimization on different platforms
+
+References
+=============
+The following links provide additional information for different versions of DPDK:
+
+.. _QUICKSTART:
+
+  http://dpdk.org/doc/guides-16.04/linux_gsg/quick_start.html
+  http://dpdk.org/doc/guides-16.11/linux_gsg/quick_start.html
+  http://dpdk.org/doc/guides-17.02/linux_gsg/quick_start.html
+  http://dpdk.org/doc/guides-17.05/linux_gsg/quick_start.html
+
+.. _DPDKGUIDE:
+
+  http://dpdk.org/doc/guides-16.04/prog_guide/index.html
+  http://dpdk.org/doc/guides-16.11/prog_guide/index.html
+  http://dpdk.org/doc/guides-17.02/prog_guide/index.html
+  http://dpdk.org/doc/guides-17.05/prog_guide/index.html
diff --git a/docs/testing/user/userguide/vCGNAPT/index.rst b/docs/testing/user/userguide/vCGNAPT/index.rst
new file mode 100644
index 00000000..aacda1c2
--- /dev/null
+++ b/docs/testing/user/userguide/vCGNAPT/index.rst
@@ -0,0 +1,11 @@
+####################
+vCGNAPT samplevnf
+####################
+
+.. toctree::
+   :numbered:
+   :maxdepth: 2
+
+   RELEASE_NOTES.rst
+   README.rst
+   INSTALL.rst
diff --git a/docs/testing/user/userguide/vFW/INSTALL.rst b/docs/testing/user/userguide/vFW/INSTALL.rst
new file mode 100644
index 00000000..b4cee086
--- /dev/null
+++ b/docs/testing/user/userguide/vFW/INSTALL.rst
@@ -0,0 +1,229 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, National Center of Scientific Research "Demokritos" and others.
+
+============================
+vFW - Installation Guide
+============================
+
+
+vFW Compilation
+================
+After downloading (or doing a git clone) into a directory (samplevnf):
+
+-------------
+Dependencies
+-------------
+
+- DPDK supported versions ($DPDK_RTE_VER = 16.04, 16.11, 17.02 or 17.05).
Downloaded and installed via vnf_build.sh, or manually from
+  http://fast.dpdk.org/rel/dpdk-$DPDK_RTE_VER.zip. Both options are available
+  as part of vnf_build.sh below.
+- libpcap-dev
+- libzmq
+- libcurl
+
+---------------------
+Environment variables
+---------------------
+Apply all the additional patches in 'patches/dpdk_custom_patch/' and build DPDK
+(NOTE: required only for DPDK version 16.04).
+
+::
+
+  export RTE_SDK=<dpdk directory>
+  export RTE_TARGET=x86_64-native-linuxapp-gcc
+
+This is done by the vnf_build.sh script.
+
+Auto Build
+===========
+Run $ ./tools/vnf_build.sh in the samplevnf root folder.
+
+Follow the steps on the screen from option [1] --> [9] and select option [8]
+to build the VNFs.
+It will automatically download the selected DPDK version and any required
+patches, set up everything and build the vFW VNFs.
+
+The following are the options for setup:
+
+::
+
+   ----------------------------------------------------------
+    Step 1: Environment setup.
+   ----------------------------------------------------------
+   [1] Check OS and network connection
+   [2] Select DPDK RTE version
+
+   ----------------------------------------------------------
+    Step 2: Download and Install
+   ----------------------------------------------------------
+   [3] Agree to download
+   [4] Download packages
+   [5] Download DPDK zip
+   [6] Build and Install DPDK
+   [7] Setup hugepages
+
+   ----------------------------------------------------------
+    Step 3: Build VNFs
+   ----------------------------------------------------------
+   [8] Build all VNFs (vACL, vCGNAPT, vFW, UDP_Replay)
+
+   [9] Exit Script
+
+A vFW executable will be created at the following location:
+samplevnf/VNFs/vFW/build/vFW
+
+Manual Build
+=============
+1. Download the DPDK supported version from dpdk.org
+
+   - http://dpdk.org/browse/dpdk/snapshot/dpdk-$DPDK_RTE_VER.zip
+2.
unzip dpdk-$DPDK_RTE_VER.zip and apply the dpdk patches, only in the case of
+   16.04 (not required for other DPDK versions)
+
+   - cd dpdk
+
+   - patch -p1 < $VNF_CORE/patches/dpdk_custom_patch/i40e-fix-link-management.patch
+   - patch -p1 < $VNF_CORE/patches/dpdk_custom_patch/i40e-fix-Rx-hang-when-disable-LLDP.patch
+   - patch -p1 < $VNF_CORE/patches/dpdk_custom_patch/i40e-fix-link-status-change-interrupt.patch
+   - patch -p1 < $VNF_CORE/patches/dpdk_custom_patch/i40e-fix-VF-bonded-device-link-down.patch
+   - patch -p1 < $VNF_CORE/patches/dpdk_custom_patch/disable-acl-debug-logs.patch
+   - patch -p1 < $VNF_CORE/patches/dpdk_custom_patch/set-log-level-to-info.patch
+
+   - build dpdk
+
+     - make config T=x86_64-native-linuxapp-gcc O=x86_64-native-linuxapp-gcc
+     - cd x86_64-native-linuxapp-gcc
+     - make
+
+   - Setup huge pages
+
+     - For 1G/2M hugepage sizes, for example 1G pages, the size must be
+       specified explicitly and can also optionally be set as the default
+       hugepage size for the system. For example, to reserve 8G of hugepage
+       memory in the form of eight 1G pages, the following options should be
+       passed to the kernel:
+
+       * default_hugepagesz=1G hugepagesz=1G hugepages=8 hugepagesz=2M hugepages=2048
+
+     - Edit the /etc/default/grub configuration file and append
+       "default_hugepagesz=1G hugepagesz=1G hugepages=8 hugepagesz=2M hugepages=2048"
+       to the GRUB_CMDLINE_LINUX entry.
+
+3. Setup environment variables
+
+   - export RTE_SDK=<samplevnf>/dpdk
+   - export RTE_TARGET=x86_64-native-linuxapp-gcc
+   - export VNF_CORE=<samplevnf>
+
+   or use ./tools/setenv.sh
+
+4. Build the vFW VNF
+
+   - cd <samplevnf>/VNFs/vFW
+   - make clean
+   - make
+
+5. The vFW executable will be created at the following location:
+
+   - <samplevnf>/VNFs/vFW/build/vFW
+
+Run
+====
+
+----------------------
+Setup Port to run VNF
+----------------------
+The tools folder and utility names differ across DPDK versions.
+
+::
+
+  For DPDK version 16.04
+  1. cd <samplevnf>/dpdk
+  2.
./tools/dpdk_nic_bind.py --status   <--- List the network devices
+  3. ./tools/dpdk_nic_bind.py -b igb_uio <PCI Port 0> <PCI Port 1>
+
+  More details: http://dpdk.org/doc/guides-16.04/linux_gsg/build_dpdk.html#binding-and-unbinding-network-ports-to-from-the-kernel-modules
+
+  For DPDK version 16.11
+  1. cd <samplevnf>/dpdk
+  2. ./tools/dpdk-devbind.py --status   <--- List the network devices
+  3. ./tools/dpdk-devbind.py -b igb_uio <PCI Port 0> <PCI Port 1>
+
+  More details: http://dpdk.org/doc/guides-16.11/linux_gsg/build_dpdk.html#binding-and-unbinding-network-ports-to-from-the-kernel-modules
+
+  For DPDK versions 17.xx
+  1. cd <samplevnf>/dpdk
+  2. ./usertools/dpdk-devbind.py --status   <--- List the network devices
+  3. ./usertools/dpdk-devbind.py -b igb_uio <PCI Port 0> <PCI Port 1>
+
+  More details: http://dpdk.org/doc/guides-17.05/linux_gsg/build_dpdk.html#binding-and-unbinding-network-ports-to-from-the-kernel-modules
+
+Make the necessary changes to the config files to run the vFW VNF,
+e.g. ports_mac_list = 00:00:00:30:21:01 00:00:00:30:21:00
+
+----------------------
+Firewall Run commands
+----------------------
+Update the configuration according to the system configuration.
+
+::
+
+  ./vFW -p <port mask> -f <config> -s <script>                  - SW LoadB
+  ./vFW -p <port mask> -f <config> -s <script> --hwlb <num_WT>  - HW LoadB
+
+Run IPv4
+----------
+To run the vFW with Software LB or Hardware LB with IPv4 traffic:
+
+::
+
+  Software LoadB:
+
+  cd <samplevnf>/VNFs/vFW/
+  ./build/vFW -p 0x3 -f ./config/VFW_SWLB_IPV4_SinglePortPair_4Thread.cfg -s ./config/VFW_SWLB_IPV4_SinglePortPair_script.tc
+
+
+  Hardware LoadB:
+
+  cd <samplevnf>/VNFs/vFW/
+  ./build/vFW -p 0x3 -f ./config/VFW_HWLB_IPV4_SinglePortPair_4Thread.cfg -s ./config/VFW_HWLB_IPV4_SinglePortPair_script.cfg --hwlb 4
+
+Run IPv6
+---------
+To run the vFW with Software LB or Hardware LB with IPv6 traffic:
+
+::
+
+  Software LoadB:
+
+  cd <samplevnf>/VNFs/vFW
+  ./build/vFW -p 0x3 -f ./config/VFW_SWLB_IPV6_SinglePortPair_4Thread.cfg -s ./config/VFW_SWLB_IPV6_SinglePortPair_script.tc
+
+
+  Hardware LoadB:
+
+  cd <samplevnf>/VNFs/vFW/
+  ./build/vFW -p 0x3 -f ./config/VFW_HWLB_IPV6_SinglePortPair_4Thread.cfg -s ./config/VFW_HWLB_IPV6_SinglePortPair_script.tc --hwlb 4
+
+vFW execution on BM & SRIOV
+---------------------------
+To run the VNF, execute the following:
+
+::
+
+  samplevnf/VNFs/vFW# ./build/vFW -p 0x3 -f ./config/VFW_SWLB_IPV4_SinglePortPair_4Thread.cfg -s ./config/VFW_SWLB_IPV4_SinglePortPair_script.tc
+  Command Line Params:
+  -p PORTMASK: Hexadecimal bitmask of ports to configure
+  -f CONFIG FILE: vFW configuration file
+  -s SCRIPT FILE: vFW script file
+
+vFW execution on OVS
+--------------------
+To run the VNF, execute the following:
+
+::
+
+  samplevnf/VNFs/vFW# ./build/vFW -p 0x3 -f ./config/VFW_SWLB_IPV4_SinglePortPair_4Thread.cfg -s ./config/VFW_SWLB_IPV4_SinglePortPair_script.tc --disable-hw-csum
+  Command Line Params:
+  -p PORTMASK: Hexadecimal bitmask of ports to configure
+  -f CONFIG FILE: vFW configuration file
+  -s SCRIPT FILE: vFW script file
+  --disable-hw-csum: Disable TCP/UDP hw checksum
diff --git a/docs/testing/user/userguide/vFW/README.rst
b/docs/testing/user/userguide/vFW/README.rst
new file mode 100644
index 00000000..cc3c2b40
--- /dev/null
+++ b/docs/testing/user/userguide/vFW/README.rst
@@ -0,0 +1,182 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, National Center of Scientific Research "Demokritos" and others.
+
+========================================================
+vFW - Readme
+========================================================
+
+Introduction
+===============
+The virtual firewall (vFW) is an application that implements a firewall. The
+vFW is used as a barrier between a secure internal network and an insecure
+external network. The firewall performs dynamic packet filtering: it keeps
+track of the state of Layer 4 (transport) traffic by examining both incoming
+and outgoing packets over time. Packets which don't fall within the expected
+parameters, given the state of the connection, are discarded. The dynamic
+packet filtering is performed by the connection tracking component, similar
+to that supported in Linux. The firewall also supports an Access Control List
+(ACL) for rule-based policy enforcement. The firewall is built on top of DPDK
+and uses the packet framework library.
+
+----------
+About DPDK
+----------
+The DPDK IP Pipeline Framework provides a set of libraries to build a pipeline
+application. In this document, the vFW is explained in detail with its own
+building blocks.
+
+This document assumes the reader possesses knowledge of DPDK concepts and the
+packet framework. For more details, read the DPDK Getting Started Guide, the
+DPDK Programmer's Guide and the DPDK Sample Applications Guide.
+
+Scope
+==========
+This application provides a standalone, DPDK-based, high performance vFW
+Virtual Network Function implementation.
+
+Features
+===========
+The vFW VNF currently supports the following functionality:
+
+- Basic packet filtering (malformed packets, IP fragments)
+- Connection tracking for TCP and UDP
+- Access Control List for rule-based policy enforcement
+- SYN-flood protection via Synproxy* for TCP
+- UDP, TCP and ICMP protocol pass-through
+- CLI based enable/disable of connection tracking, synproxy and basic packet
+  filtering
+- Multithread support
+- Multiple physical port support
+- Hardware and Software Load Balancing
+- L2L3 stack support for ARP/ICMP handling
+- ARP (request, response, gratuitous)
+- ICMP (terminal echo, echo response, passthrough)
+- ICMPv6 and ND (Neighbor Discovery)
+
+High Level Design
+====================
+The firewall performs basic filtering of malformed packets, and dynamic packet
+filtering of incoming packets using the connection tracker library.
+The connection data is stored in a DPDK hash table, with one entry per
+connection. The hash key is based on the source address/port, destination
+address/port and protocol of a packet. The hash key is processed so that a
+single entry is used regardless of which direction the packet is flowing
+(i.e. with source and destination swapped). The ACL is implemented as a
+library statically linked to the vFW, which is used for rule-based packet
+filtering.
+
+TCP connections and UDP pseudo connections are tracked separately even if the
+addresses and ports are identical. Including the protocol in the hash key
+ensures this.
+
+The Input FIFO contains all the incoming packets for vFW filtering. The vFW
+filter has no dependency on which component has written to the Input FIFO.
+Packets are dequeued from the FIFO in bulk for processing by the vFW, and
+enqueued to the Output FIFO.
+Either software or hardware load balancing can be used for traffic
+distribution across multiple worker threads.
Hardware load balancing requires Ethernet
+flow director support from the hardware (e.g. the Fortville X710 NIC).
+The Input and Output FIFOs are implemented using DPDK ring buffers.
+
+Components of vFW
+====================
+
+In vFW, each component is constructed using packet framework pipelines.
+It includes the Rx and Tx driver, the Master pipeline, the load balancer
+pipeline and the vFW worker pipeline components. A pipeline framework is a
+collection of input ports, table(s), output ports and actions (functions).
+
+---------------------------
+Receive and Transmit Driver
+---------------------------
+Packets are received in bulk and provided to the LoadBalancer (LB) thread.
+Transmit takes packets from the worker threads through a dedicated ring and
+sends them to the hardware queue.
+
+---------------------------
+Master Pipeline
+---------------------------
+The Master component is part of all the IP Pipeline applications. This
+component does not process any packets and should be configured on Core 0,
+leaving the other cores for processing of the traffic. This component is
+responsible for:
+
+1. Initializing each component of the pipeline application in a different thread
+2. Providing a CLI shell for user control/debug
+3. Propagating commands from the user to the corresponding components
+
+------------------
+ARPICMP Pipeline
+------------------
+This pipeline processes the ARP and ICMP packets.
+
+---------------
+TXRX Pipelines
+---------------
+The TXRX pipelines are pass-through pipelines that forward both ingress and
+egress traffic to the load balancer. They are required when the software
+load balancer is used.
+
+----------------------
+Load Balancer Pipeline
+----------------------
+The vFW supports both hardware and software load balancing of traffic across
+multiple VNF threads. Hardware load balancing requires support from the
+hardware, such as Flow Director, for steering packets to the application
+through hardware queues.
+
+The software load balancer is supported for cases where hardware load
+balancing can't be used. The TXRX pipelines, along with the LOADB pipeline,
+provide software load balancing by distributing the flows to the multiple
+vFW worker threads.
+The load balancer (HW or SW) distributes traffic based on the 5-tuple (source
+address, source port, destination address, destination port and protocol),
+applying XOR logic to distribute flows to the active worker threads, thereby
+maintaining an affinity of flows to worker threads.
+
+---------------
+vFW Pipeline
+---------------
+The vFW performs the basic packet filtering and drops invalid and malformed
+packets. The dynamic packet filtering is done using the connection tracker
+library. The packets are processed in bulk, and a hash table is used to
+maintain the connection details.
+Every TCP/UDP packet is passed through the connection tracker library to
+check for a valid connection. The ACL library integrated into the firewall
+provides rule-based filtering.
+
+------------------------
+vFW Topology
+------------------------
+
+::
+
+  IXIA(Port 0)-->(Port 0)VNF(Port 1)-->(Port 1) IXIA
+  operation:
+    Egress  --> The packets sent out from ixia(port 0) will be firewalled to ixia(port 1).
+    Ingress --> The packets sent out from ixia(port 1) will be firewalled to ixia(port 0).
+
+------------------------------------
+vFW Topology (L4REPLAY)
+------------------------------------
+
+::
+
+  IXIA(Port 0)-->(Port 0)VNF(Port 1)-->(Port 0)L4REPLAY
+  operation:
+    Egress  --> The packets sent out from ixia will pass through the vFW to L3FWD/L4REPLAY.
+    Ingress --> The L4REPLAY, upon reception of packets (Private to Public Network),
+                will immediately replay the traffic back to the IXIA interface (Pub --> Priv).
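The direction-agnostic connection hash key and the XOR-based worker selection described above can be sketched as follows. This is an illustrative Python model under stated assumptions (the real vFW is C code using DPDK hash tables; the exact hash and field layout here are assumptions):

```python
# Illustrative sketch of two ideas from the design above:
#  1. a connection-tracking key that is identical for both directions of a
#     flow, so one hash-table entry serves the whole connection;
#  2. an XOR-based load balancer that pins every flow to one worker thread.
# IP addresses are represented as 32-bit integers for simplicity.

def conn_key(src_ip, src_port, dst_ip, dst_port, proto):
    # Order the two endpoints so (A->B) and (B->A) produce the same key.
    # Including the protocol keeps TCP and UDP flows tracked separately.
    a, b = (src_ip, src_port), (dst_ip, dst_port)
    lo, hi = (a, b) if a <= b else (b, a)
    return (lo, hi, proto)

def pick_worker(src_ip, src_port, dst_ip, dst_port, proto, num_workers):
    # XOR the 5-tuple fields and reduce modulo the number of active worker
    # threads. XOR is commutative, so both directions of a flow land on the
    # same worker, preserving flow-to-worker affinity.
    h = src_ip ^ dst_ip ^ src_port ^ dst_port ^ proto
    return h % num_workers
```

Because both helpers are symmetric in source and destination, reply packets hit the same hash entry and the same worker as the packets that created the connection.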
+
+--------------------
+How to run L4Replay
+--------------------
+After the installation of samplevnf:
+
+::
+
+  cd <samplevnf>/VNFs/L4Replay
+  ./build/L4replay -c core_mask -n no_of_channels(let it be as 2) -- -p PORT_MASK --config="(port,queue,lcore)"
+  eg: ./L4replay -c 0xf -n 4 -- -p 0x3 --config="(0,0,1)"
+
+Installation, Compile and Execution
+====================================
+Please refer to <samplevnf>/docs/vFW/INSTALL.rst for installation,
+configuration, compilation and execution.
diff --git a/docs/testing/user/userguide/vFW/RELEASE_NOTES.rst b/docs/testing/user/userguide/vFW/RELEASE_NOTES.rst
new file mode 100644
index 00000000..540f671d
--- /dev/null
+++ b/docs/testing/user/userguide/vFW/RELEASE_NOTES.rst
@@ -0,0 +1,92 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, National Center of Scientific Research "Demokritos" and others.
+
+=========================================================
+vFW - Release Notes
+=========================================================
+
+Introduction
+================
+
+This is a beta release of the Sample Virtual Firewall VNF.
+The vFW application can be run independently (refer to INSTALL.rst).
+
+User Guide
+===============
+Refer to README.rst for further details on the vFW, its HLD, the features
+supported and the test plan. For build configurations and execution
+requisites, please refer to INSTALL.rst.
+
+Features for this release
+===========================
+This release supports the following features as part of vFW:
+
+  - Basic packet filtering (malformed packets, IP fragments)
+  - Connection tracking for TCP and UDP
+  - Access Control List for rule-based policy enforcement
+  - SYN-flood protection via Synproxy* for TCP
+  - UDP, TCP and ICMP protocol pass-through
+  - CLI based enable/disable of connection tracking, synproxy and basic
+    packet filtering
+  - L2L3 stack support for ARP/ICMP handling
+  - ARP (request, response, gratuitous)
+  - ICMP (terminal echo, echo response, passthrough)
+  - ICMPv6 and ND (Neighbor Discovery)
+  - Hardware and Software Load Balancing
+  - Multithread support
+  - Multiple physical port support
+
+System requirements - OS and kernel version
+==============================================
+This is supported on Ubuntu 14.04 and Ubuntu 16.04, with a kernel version below 4.5.
+
+  VNFs on BareMetal support:
+    OS: Ubuntu 14.04 or 16.04 LTS
+    kernel: < 4.5
+    http://releases.ubuntu.com/16.04/
+    Download/Install the image: ubuntu-16.04.1-server-amd64.iso
+
+  VNFs on Standalone Hypervisor:
+    HOST OS: Ubuntu 14.04 or 16.04 LTS
+    http://releases.ubuntu.com/16.04/
+    Download/Install the image: ubuntu-16.04.1-server-amd64.iso
+
+    - OVS (DPDK) - 2.5
+    - kernel: < 4.5
+    - Hypervisor - KVM
+    - VM OS - Ubuntu 16.04/Ubuntu 14.04
+
+Known Bugs and limitations
+=============================
+
+  - The Hardware Load Balancer feature is supported on the Fortville NIC
+    with FW version 4.53 and below.
+  - Hardware checksum offload is not supported for IPv6 traffic.
+  - vFW on SRIOV is tested with up to 4 threads.
+  - HTTP multiple clients/server with HWLB is not working.
+
+Future Work
+==============
+The following are possible enhancement functionalities:
+
+  - Automatic enable/disable of synproxy
+  - Support for TCP timestamps with synproxy
+  - FTP ALG integration
+  - Performance optimization on different platforms
+
+References
+=============
+The following links provide additional information for different versions of DPDK:
+
+.. _QUICKSTART:
+
+  http://dpdk.org/doc/guides-16.04/linux_gsg/quick_start.html
+  http://dpdk.org/doc/guides-16.11/linux_gsg/quick_start.html
+  http://dpdk.org/doc/guides-17.02/linux_gsg/quick_start.html
+  http://dpdk.org/doc/guides-17.05/linux_gsg/quick_start.html
+
+.. _DPDKGUIDE:
+
+  http://dpdk.org/doc/guides-16.04/prog_guide/index.html
+  http://dpdk.org/doc/guides-16.11/prog_guide/index.html
+  http://dpdk.org/doc/guides-17.02/prog_guide/index.html
+  http://dpdk.org/doc/guides-17.05/prog_guide/index.html
diff --git a/docs/testing/user/userguide/vFW/index.rst b/docs/testing/user/userguide/vFW/index.rst
new file mode 100644
index 00000000..8b6a8186
--- /dev/null
+++ b/docs/testing/user/userguide/vFW/index.rst
@@ -0,0 +1,11 @@
+####################
+vFW samplevnf
+####################
+
+.. toctree::
+   :numbered:
+   :maxdepth: 2
+
+   RELEASE_NOTES.rst
+   README.rst
+   INSTALL.rst