Diffstat (limited to 'docs/vCGNAPT')
-rw-r--r--   docs/vCGNAPT/INSTALL.rst        | 185
-rw-r--r--   docs/vCGNAPT/README.rst         | 189
-rw-r--r--   docs/vCGNAPT/RELEASE_NOTES.rst  |  80
3 files changed, 454 insertions, 0 deletions
diff --git a/docs/vCGNAPT/INSTALL.rst b/docs/vCGNAPT/INSTALL.rst
new file mode 100644
index 00000000..3a556819
--- /dev/null
+++ b/docs/vCGNAPT/INSTALL.rst
@@ -0,0 +1,185 @@

.. This work is licensed under a Creative Commons Attribution 4.0 International
.. License.
.. http://creativecommons.org/licenses/by/4.0
.. (c) OPNFV, National Center of Scientific Research "Demokritos" and others.

============================
CGNAPT - Installation Guide
============================


vCGNAPT Compilation
===================

After downloading (or cloning) the repository into a directory (samplevnf):

Dependencies
------------
* DPDK 16.04: downloaded and installed either via vnf_build.sh or manually from
  http://fast.dpdk.org/rel/dpdk-16.04.tar.xz. Both options are available as part
  of vnf_build.sh below.
* libpcap-dev
* libzmq
* libcurl

Environment variables
---------------------

Apply all the additional patches in 'patches/dpdk_custom_patch/' and build DPDK:

::

   export RTE_SDK=<dpdk 16.04 directory>
   export RTE_TARGET=x86_64-native-linuxapp-gcc

This is done by the vnf_build.sh script.

Auto Build:
===========
Run ./tools/vnf_build.sh in the samplevnf root folder.

Follow the on-screen steps from option [1] to [8] and select option [7]
to build the VNFs.
The script automatically downloads DPDK 16.04 and any required patches, sets up
everything and builds the vCGNAPT VNF.

Following are the options for setup:

::

   ----------------------------------------------------------
    Step 1: Environment setup.
   ----------------------------------------------------------
   [1] Check OS and network connection

   ----------------------------------------------------------
    Step 2: Download and Install
   ----------------------------------------------------------
   [2] Agree to download
   [3] Download packages
   [4] Download DPDK zip (optional, use it when option [3] fails)
   [5] Install DPDK
   [6] Setup hugepages

   ----------------------------------------------------------
    Step 3: Build VNF
   ----------------------------------------------------------
   [7] Build VNF

   [8] Exit Script

A vCGNAPT executable will be created at the following location:
samplevnf/VNFs/vCGNAPT/build/vCGNAPT


Manual Build:
=============
1. Download DPDK 16.04 from dpdk.org

   - http://dpdk.org/browse/dpdk/snapshot/dpdk-16.04.zip

2. Unzip dpdk-16.04 and apply the DPDK patches

   - cd dpdk-16.04
   - patch -p0 < VNF_CORE/patches/dpdk_custom_patch/rte_pipeline.patch
   - patch -p1 < VNF_CORE/patches/dpdk_custom_patch/i40e-fix-link-management.patch
   - patch -p1 < VNF_CORE/patches/dpdk_custom_patch/i40e-fix-Rx-hang-when-disable-LLDP.patch
   - patch -p1 < VNF_CORE/patches/dpdk_custom_patch/i40e-fix-link-status-change-interrupt.patch
   - patch -p1 < VNF_CORE/patches/dpdk_custom_patch/i40e-fix-VF-bonded-device-link-down.patch
   - Build DPDK:
   - make config T=x86_64-native-linuxapp-gcc O=x86_64-native-linuxapp-gcc
   - cd x86_64-native-linuxapp-gcc
   - make
   - Set up hugepages (see the sketch after this list).
     For 1G/2M hugepage sizes, for example 1G pages, the size must be specified
     explicitly and can also be optionally set as the default hugepage size for
     the system. For example, to reserve 8G of hugepage memory in the form of
     eight 1G pages, the following options should be passed to the kernel:

     * default_hugepagesz=1G hugepagesz=1G hugepages=8 hugepagesz=2M hugepages=2048

   - Open the /etc/default/grub configuration file and append
     "default_hugepagesz=1G hugepagesz=1G hugepages=8 hugepagesz=2M hugepages=2048"
     to the GRUB_CMDLINE_LINUX entry.

3. Set up the environment variables

   - export RTE_SDK=<samplevnf>/dpdk-16.04
   - export RTE_TARGET=x86_64-native-linuxapp-gcc
   - export VNF_CORE=<samplevnf>

   or use ./tools/setenv.sh

4. Build the vCGNAPT VNF

   - cd <samplevnf>/VNFs/vCGNAPT
   - make clean
   - make

5. A vCGNAPT executable will be created at the following location

   - <samplevnf>/VNFs/vCGNAPT/build/vCGNAPT
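The hugepage options from step 2 only take effect after the GRUB configuration is
regenerated and the host is rebooted. A minimal sketch, assuming an Ubuntu host
that boots via GRUB (option [6] of vnf_build.sh covers hugepage setup on the
auto-build path); skip the mount if your distribution already mounts hugetlbfs,
for example at /dev/hugepages:

::

   # regenerate the GRUB configuration and reboot so the hugepage options take effect
   sudo update-grub
   sudo reboot

   # after the reboot, verify the reservation and mount hugetlbfs
   grep Huge /proc/meminfo
   sudo mkdir -p /mnt/huge
   sudo mount -t hugetlbfs nodev /mnt/huge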
Run
===

Set up the ports to run the VNF:
--------------------------------
::

   1. cd <samplevnf>/dpdk-16.04
   2. ./tools/dpdk_nic_bind.py --status                    <-- lists the network devices
   3. ./tools/dpdk_nic_bind.py -b igb_uio <PCI Port 0> <PCI Port 1>

   More details: http://dpdk.org/doc/guides-16.04/linux_gsg/build_dpdk.html#binding-and-unbinding-network-ports-to-from-the-kernel-modules

   Make the necessary changes to the config files to run the vCGNAPT VNF,
   e.g.: ports_mac_list = 00:00:00:30:21:F0 00:00:00:30:21:F1

Dynamic CGNAPT
--------------
Update the configuration according to the system configuration.

::

   ./vCGNAPT -p <port mask> -f <config> -s <script>                    (software load balancer)
   ./vCGNAPT -p <port mask> -f <config> -s <script> --hwlb <num_WT>    (hardware load balancer)

Static CGNAPT
-------------
Update the script file and add a static NAT entry, e.g.:

::

   ;p <pipeline id> entry addm <prv_ipv4/6> <prv_port> <pub_ip> <pub_port> <phy_port> <ttl> <no_of_entries> <end_prv_port> <end_pub_port>
   ;p 3 entry addm 152.16.100.20 1234 152.16.40.10 1 0 500 65535 1234 65535

Run IPv4
--------
::

   Software LoadB
   --------------
   cd <samplevnf>/VNFs/vCGNAPT/build
   ./vCGNAPT -p 0x3 -f ./config/arp_txrx-2P-1T.cfg -s ./config/arp_txrx_ScriptFile_2P.cfg


   Hardware LoadB
   --------------
   cd <samplevnf>/VNFs/vCGNAPT/build
   ./vCGNAPT -p 0x3 -f ./config/arp_hwlb-2P-1T.cfg -s ./config/arp_hwlb_scriptfile_2P.cfg --hwlb 1

Run IPv6
--------
::

   Software LoadB
   --------------
   cd <samplevnf>/VNFs/vCGNAPT/build
   ./vCGNAPT -p 0x3 -f ./config/arp_txrx-2P-1T-ipv6.cfg -s ./config/arp_txrx_ScriptFile_2P.cfg


   Hardware LoadB
   --------------
   cd <samplevnf>/VNFs/vCGNAPT/build
   ./vCGNAPT -p 0x3 -f ./config/arp_hwlb-2P-1T-ipv6.cfg -s ./config/arp_hwlb_scriptfile_2P.cfg --hwlb 1

vCGNAPT execution on BM & SRIOV:
--------------------------------
::

   To run the VNF, execute the following:
   samplevnf/VNFs/vCGNAPT# ./build/vCGNAPT -p 0x3 -f ./config/arp_txrx-2P-1T.cfg -s ./config/arp_txrx_ScriptFile_2P.cfg

   Command line parameters:
   -p PORTMASK: hexadecimal bitmask of the ports to configure
   -f CONFIG FILE: vCGNAPT configuration file
   -s SCRIPT FILE: vCGNAPT script file

vCGNAPT execution on OVS:
-------------------------
::

   To run the VNF, execute the following:
   samplevnf/VNFs/vCGNAPT# ./build/vCGNAPT -p 0x3 -f ./config/arp_txrx-2P-1T.cfg -s ./config/arp_txrx_ScriptFile_2P.cfg --disable-hw-csum

   Command line parameters:
   -p PORTMASK: hexadecimal bitmask of the ports to configure
   -f CONFIG FILE: vCGNAPT configuration file
   -s SCRIPT FILE: vCGNAPT script file
   --disable-hw-csum: disable TCP/UDP hardware checksum offload
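Putting the run steps together, a minimal bare-metal session might look like the
sketch below. The PCI addresses are placeholders for your system, and the igb_uio
module path assumes the DPDK build directory created in the manual build steps
above:

::

   # bind the two data ports to igb_uio (PCI addresses are examples)
   cd <samplevnf>/dpdk-16.04
   sudo modprobe uio
   sudo insmod x86_64-native-linuxapp-gcc/kmod/igb_uio.ko
   sudo ./tools/dpdk_nic_bind.py -b igb_uio 0000:05:00.0 0000:05:00.1

   # launch vCGNAPT with the software load balancer configuration
   cd <samplevnf>/VNFs/vCGNAPT/build
   ./vCGNAPT -p 0x3 -f ./config/arp_txrx-2P-1T.cfg -s ./config/arp_txrx_ScriptFile_2P.cfg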
diff --git a/docs/vCGNAPT/README.rst b/docs/vCGNAPT/README.rst
new file mode 100644
index 00000000..eda94831
--- /dev/null
+++ b/docs/vCGNAPT/README.rst
@@ -0,0 +1,189 @@

.. This work is licensed under a Creative Commons Attribution 4.0 International
.. License.
.. http://creativecommons.org/licenses/by/4.0
.. (c) OPNFV, National Center of Scientific Research "Demokritos" and others.

========================================================
Carrier Grade Network Address Port Translation - vCGNAPT
========================================================

1. Introduction
===============
This application implements vCGNAPT. The idea of vCGNAPT is to extend the life of
the service provider's IPv4 network infrastructure and mitigate IPv4 address
exhaustion by using address and port translation at large scale. It processes
traffic in both directions.

It also supports connectivity between an IPv6 access network and an IPv4 data
network using IPv6-to-IPv4 address translation, and vice versa.

About DPDK
----------
The DPDK IP Pipeline Framework provides a set of libraries to build a pipeline
application. In this document, the CG-NAT application is explained with its
own building blocks.

This document assumes the reader possesses knowledge of DPDK concepts and the IP
Pipeline Framework. For more details, read the DPDK Getting Started Guide, the
DPDK Programmer's Guide and the DPDK Sample Applications Guide.

2. Scope
========
This application provides a standalone, DPDK-based, high performance vCGNAPT
Virtual Network Function implementation.

3. Features
===========
The vCGNAPT VNF currently supports the following functionality:

- Static NAT
- Dynamic NAT
- Static NAPT
- Dynamic NAPT
- ARP (request, response, gratuitous)
- ICMP (terminal echo, echo response, passthrough)
- ICMPv6 and ND (Neighbor Discovery)
- UDP, TCP and ICMP protocol passthrough
- Multithread support
- Multiple physical port support
- Limiting max ports per client
- Limiting max clients per public IP address
- Live session tracking of NAT flows
- NAT64
- PCP support
- ALG SIP
- ALG FTP

4. High Level Design
====================
The upstream path defines the traffic from private to public, and the downstream
path defines the traffic from public to private. vCGNAPT uses the same set of
components to process upstream and downstream traffic.

In the vCGNAPT application, each component is constructed using the IP Pipeline
framework. It includes a master pipeline component, a load balancer pipeline
component and a vCGNAPT pipeline component.

A pipeline framework is a collection of input ports, table(s), output ports and
actions (functions). In the vCGNAPT pipeline, the main sub-components are the
in-port function handler, the table and the table function handler. vCGNAPT rules
are configured in the table, which translates egress and ingress traffic according
to the physical port on which the packet arrived. The resulting action can be to
forward the packet to an output port (either egress or ingress) or to drop it.
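The pipeline layout described above is driven by the configuration file passed
with -f. The snippet below is only an illustrative, abbreviated sketch of that
ip_pipeline-style layout; the core assignments, type names and omitted queue
parameters are placeholders, so refer to the .cfg files referenced in INSTALL.rst
for the real settings.

::

   ; illustrative layout only - not one of the shipped configuration files
   [PIPELINE0]
   ; CLI shell and command propagation, no packet processing
   type = MASTER
   core = 0

   [PIPELINE1]
   ; L2/L3 ARP and ICMP handling
   type = ARPICMP
   core = 1

   [PIPELINE2]
   ; distributes flows to the worker threads
   type = LOADB
   core = 2

   [PIPELINE3]
   ; translation worker: NAT tables and actions
   type = CGNAPT
   core = 3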
vCGNAPT Graphical Overview
==========================
The idea of vCGNAPT is to extend the life of the service provider's IPv4 network
infrastructure and mitigate IPv4 address exhaustion by using address and port
translation at large scale. It processes traffic in both directions.

.. code-block:: console

    +------------------+
    |                  +-----+
    | Private consumer | CPE |---------------+
    | IPv4 traffic     +-----+               |
    +------------------+                     |
           +------------------+              v              +----------------+
           |                  |       +------------+        |                |
           | Private IPv4     |       |  vCGNAPT   |        | Public         |
           | access network   |       |   NAT44    |        | IPv4 traffic   |
           |                  |       +------------+        |                |
           +------------------+              ^              +----------------+
                                             |
    +------------------+                     |
    |                  +-----+               |
    | Private consumer | CPE |---------------+
    | IPv4 traffic     +-----+
    +------------------+

         Figure 1: vCGNAPT deployment in a service provider network


Components of vCGNAPT
=====================
In vCGNAPT, each component is constructed as a packet framework. It includes a
master pipeline component, a driver, a load balancer pipeline component and a
vCGNAPT worker pipeline component. A pipeline framework is a collection of input
ports, table(s), output ports and actions (functions).

Receive and transmit driver
---------------------------
Packets are received in bulk and handed to the load balancer thread. The transmit
side takes packets from the worker threads through a dedicated ring and sends them
to the hardware queue.

ARPICMP pipeline
----------------
The ARPICMP pipeline is responsible for handling all L2/L3 ARP-related packets.

Master pipeline
---------------
This component does not process any packets and should be configured on core 0,
to save cores for the components that do process traffic. The component is
responsible for:

1. Initializing each component of the pipeline application in different threads
2. Providing a CLI shell for the user
3. Propagating commands from the user to the corresponding components
4. ARP and ICMP are also handled here

Load Balancer pipeline
----------------------
The load balancer is part of the multi-threaded vCGNAPT release, which distributes
the flows to multiple worker threads.

It distributes traffic based on a 2-tuple or 5-tuple (source address, source port,
destination address, destination port and protocol), applying XOR logic to spread
the load across the active worker threads, thereby maintaining an affinity of
flows to worker threads.

The tuple can be modified/configured via the configuration file.

5. vCGNAPT - Static
===================
The vCGNAPT component translates private IP and port to public IP and port on the
egress side, and public IP and port to private IP and port on the ingress side,
based on the NAT rules added to the pipeline hash table. The NAT rules are added
to the hash table via user commands. Packets that match an egress or ingress key
in the NAT table are processed to change IP and port and are forwarded to the
output port. Packets that do not match take the default action, which may result
in the packets being dropped.

6. vCGNAPT - Dynamic
====================
The vCGNAPT component translates private IP and port to public IP and port on the
egress side, and public IP and port to private IP and port on the ingress side,
based on the NAT rules added to the pipeline hash table. The dynamic nature of
vCGNAPT refers to NAT entries being added to the hash table on the fly when new
packets arrive: when there is no matching entry in the table, a NAT rule is added
automatically and the packet is circulated through a software queue. Packets that
match an egress or ingress key in the NAT table are processed to change IP and
port and are forwarded to the output port defined in the entry.

Dynamic vCGNAPT also behaves as a static one: NAT entries can still be added
statically, but the static NAT entry port range must not conflict with the dynamic
NAT port range.
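For reference, the static-entry command documented in INSTALL.rst is repeated
below; the addresses and ports are the sample values used there, and the arguments
after the addm keyword map positionally onto the syntax line. The entry is added
through the script file passed with -s, as described in INSTALL.rst.

::

   ;p <pipeline id> entry addm <prv_ipv4/6> <prv_port> <pub_ip> <pub_port> <phy_port> <ttl> <no_of_entries> <end_prv_port> <end_pub_port>
   ;p 3 entry addm 152.16.100.20 1234 152.16.40.10 1 0 500 65535 1234 65535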
vCGNAPT Static Topology:
------------------------
::

   IXIA(Port 0) --> (Port 0) VNF (Port 1) --> (Port 1) IXIA

   Operation:
   Egress  --> Packets sent from IXIA (port 0) are CGNAPTed towards IXIA (port 1).
   Ingress --> Packets sent from IXIA (port 1) are CGNAPTed towards IXIA (port 0).

vCGNAPT Dynamic Topology (L4REPLAY):
------------------------------------
::

   IXIA(Port 0) --> (Port 0) VNF (Port 1) --> (Port 0) L4REPLAY

   Operation:
   Egress  --> Packets sent from IXIA are CGNAPTed towards L3FWD/L4REPLAY.
   Ingress --> Upon receiving packets (private to public network), L4REPLAY
               immediately replays the traffic back to the IXIA interface
               (public --> private).

How to run L4Replay:
--------------------
::

   1. After the installation of samplevnf:
      go to <samplevnf>/VNFs/L4Replay
   2. ./build/L4replay -c core_mask -n no_of_channels(let it be as 2) -- -p PORT_MASK --config="(port,queue,lcore)"
      e.g.: ./L4replay -c 0xf -n 4 -- -p 0x3 --config="(0,0,1)"

7. Installation, Compile and Execution
======================================
Please refer to <samplevnf>/docs/vCGNAPT/INSTALL.rst for installation,
configuration, compilation and execution details.

diff --git a/docs/vCGNAPT/RELEASE_NOTES.rst b/docs/vCGNAPT/RELEASE_NOTES.rst
new file mode 100644
index 00000000..91b73075
--- /dev/null
+++ b/docs/vCGNAPT/RELEASE_NOTES.rst
@@ -0,0 +1,80 @@

.. This work is licensed under a Creative Commons Attribution 4.0 International
.. License.
.. http://creativecommons.org/licenses/by/4.0
.. (c) OPNFV, National Center of Scientific Research "Demokritos" and others.

=========================================================
Carrier Grade Network Address Port Translation - vCGNAPT
=========================================================

1. Introduction
===============
This is the beta release of the vCGNAPT VNF.
The vCGNAPT application can be run independently (refer to INSTALL.rst).

2. User Guide
=============
Refer to README.rst for further details on vCGNAPT, the high level design, the
supported features and the test plan. For build configuration and execution
prerequisites, please refer to INSTALL.rst.

3. Features for this release
============================
This release supports the following features as part of vCGNAPT:

- vCGNAPT can run as a standalone application on a bare-metal Linux server or on
  a virtual machine using SRIOV and OVS-DPDK.
- Static NAT
- Dynamic NAT
- Static NAPT
- Dynamic NAPT
- ARP (request, response, gratuitous)
- ICMP (terminal echo, echo response, passthrough)
- ICMPv6 and ND (Neighbor Discovery)
- UDP, TCP and ICMP protocol passthrough
- Multithread support
- Multiple physical port support
- Limiting max ports per client
- Limiting max clients per public IP address
- Live session tracking of NAT flows
- PCP support
- NAT64
- ALG SIP
- ALG FTP
4. System requirements - OS and kernel version
==============================================
This release is supported on Ubuntu 14.04 and Ubuntu 16.04 with a kernel version
lower than 4.5.

VNFs on bare metal:

- OS: Ubuntu 14.04 or 16.04 LTS
- Kernel: < 4.5
- http://releases.ubuntu.com/16.04/
- Download/install the image: ubuntu-16.04.1-server-amd64.iso

VNFs on a standalone hypervisor:

- Host OS: Ubuntu 14.04 or 16.04 LTS
- http://releases.ubuntu.com/16.04/
- Download/install the image: ubuntu-16.04.1-server-amd64.iso
- OVS (DPDK) - 2.5
- Kernel: < 4.5
- Hypervisor - KVM
- VM OS - Ubuntu 16.04/Ubuntu 14.04

5. Known bugs and limitations
=============================
- The hardware load balancer feature is supported on Fortville NICs with firmware
  version 4.53 and below.
- L4 UDP Replay is used to capture throughput for dynamic CGNAPT.
- Hardware checksum offload is not supported for IPv6 traffic.
- CGNAPT on SRIOV is tested with up to 4 threads.

6. Future work
==============
- SCTP passthrough support
- Multi-homing support
- Performance optimization on different platforms

7. References
=============
The following links provide additional information:

.. _QUICKSTART: http://dpdk.org/doc/guides-16.04/linux_gsg/quick_start.html
.. _DPDKGUIDE: http://dpdk.org/doc/guides-16.04/prog_guide/index.html