Diffstat (limited to 'docs/userguide/packet_forwarding.userguide.rst')
-rw-r--r-- | docs/userguide/packet_forwarding.userguide.rst | 555 |
1 files changed, 555 insertions, 0 deletions
diff --git a/docs/userguide/packet_forwarding.userguide.rst b/docs/userguide/packet_forwarding.userguide.rst
new file mode 100644
index 000000000..ba117508c
--- /dev/null
+++ b/docs/userguide/packet_forwarding.userguide.rst
@@ -0,0 +1,555 @@
.. This work is licensed under a Creative Commons Attribution 4.0 International License.

.. http://creativecommons.org/licenses/by/4.0

=================
PACKET FORWARDING
=================

=======================
About Packet Forwarding
=======================

Packet Forwarding is a test suite of KVMFORNFV that measures the total time taken by a
**packet** generated by the traffic generator to return from the Guest/Host, as per the
implemented scenario. Packet Forwarding is implemented using OPNFV's VSWITCHPERF (``VSPERF``)
software and an ``IXIA Traffic Generator``.

Version Features
----------------

+-----------------------------+---------------------------------------------------+
| **Release**                 | **Features**                                      |
+=============================+===================================================+
| Colorado                    | - Packet Forwarding is not part of the Colorado   |
|                             |   release of KVMFORNFV                            |
+-----------------------------+---------------------------------------------------+
| Danube                      | - Packet Forwarding is a testcase in KVMFORNFV    |
|                             | - Implements three scenarios (Host/Guest/SRIOV)   |
|                             |   as part of testing in KVMFORNFV                 |
|                             | - Uses available testcases of OPNFV's VSWITCHPERF |
|                             |   software (PVP/PVVP)                             |
|                             | - Works with the IXIA Traffic Generator           |
+-----------------------------+---------------------------------------------------+

======
VSPERF
======

VSPERF is an OPNFV testing project.
VSPERF develops a generic, architecture-agnostic vSwitch testing framework and associated
tests, which serve as a basis for validating the suitability of different vSwitch
implementations in a Telco NFV deployment environment. The output of this project is used
by the OPNFV Performance and Test group and its associated projects as part of OPNFV
platform- and VNF-level testing and validation.

For the complete VSPERF documentation, see `link.`_

.. _link.: <http://artifacts.opnfv.org/vswitchperf/colorado/index.html>


Installation
------------

Guidelines for installing `VSPERF`_ are given below.

.. _VSPERF: <http://artifacts.opnfv.org/vswitchperf/colorado/configguide/index.html>

Supported Operating Systems
---------------------------

* CentOS 7
* Fedora 20
* Fedora 21
* Fedora 22
* RedHat 7.2
* Ubuntu 14.04

Supported vSwitches
-------------------

The vSwitch must support OpenFlow 1.3 or greater.

* OVS (built from source).
* OVS with DPDK (built from source).

Supported Hypervisors
---------------------

* Qemu version 2.3.

Other Requirements
------------------

The test suite requires Python 3.3 and relies on a number of other
packages, which must be installed for the test suite to function.

Installation of the required packages, preparation of the Python 3 virtual
environment and compilation of OVS, DPDK and QEMU are performed by the
**systems/build_base_machine.sh** script. It should be executed under the
user account that will be used to run vsperf.

  **Please Note:** Password-less sudo access must be configured for the given user
  before the script is executed.

Execution of the installation script:

.. code:: bash

    $ cd Vswitchperf
    $ cd systems
    $ ./build_base_machine.sh

The **build_base_machine.sh** script installs all the vsperf dependencies
in terms of system packages, Python 3.x and the required Python modules.
On CentOS 7 it installs Python 3.3 from an additional repository provided
by Software Collections (`a link`_). On RedHat 7 it installs Python 3.4
as an alternate installation in /usr/local/bin. The installation script
also uses `virtualenv`_ to create a vsperf virtual environment, which is
isolated from the default Python environment. This environment resides in
a directory called **vsperfenv** in $HOME.
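Once the script completes, a quick sanity check of the new environment can be performed as
sketched below (this is an illustrative check only, assuming the default ``$HOME/vsperfenv``
location described above):

.. code:: bash

    # confirm the isolated environment exists and provides a Python 3 interpreter
    ls $HOME/vsperfenv/bin/activate
    source $HOME/vsperfenv/bin/activate
    python --version    # expected to report a Python 3.x interpreter
    deactivate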
You will need to activate the virtual environment every time you start a
new shell session. Its activation is specific to your OS.

For running testcases, VSPERF is installed on Intel pod1-node2, which runs the CentOS
operating system. Only the VSPERF installation on CentOS is discussed here.
For installation steps on other operating systems, please refer to `here`_.

.. _here: <http://artifacts.opnfv.org/vswitchperf/colorado/configguide/index.html>

For CentOS 7
------------

Python 3 Packages
~~~~~~~~~~~~~~~~~

To avoid file permission errors and Python version issues, use virtualenv to create an
isolated environment with Python 3.
The required Python 3 packages can be found in the `requirements.txt` file in the root of
the test suite. They can be installed in your virtual environment like so:

.. code:: bash

    scl enable python33 bash
    # Create virtual environment
    virtualenv vsperfenv
    cd vsperfenv
    source bin/activate
    pip install -r requirements.txt

You need to activate the virtual environment every time you start a new shell session.
To activate, simply run:

.. code:: bash

    scl enable python33 bash
    cd vsperfenv
    source bin/activate

Working Behind a Proxy
----------------------

If you're behind a proxy, you'll likely want to configure this before running any of the
above. For example:

.. code:: bash

    export http_proxy=proxy.mycompany.com:123
    export https_proxy=proxy.mycompany.com:123

.. _a link: http://www.softwarecollections.org/en/scls/rhscl/python33/
.. _virtualenv: https://virtualenv.readthedocs.org/en/latest/

For other OS-specific activation instructions, see `this link`_.

.. _this link: http://artifacts.opnfv.org/vswitchperf/colorado/configguide/installation.html#other-requirements

Traffic-Generators
------------------

VSPERF supports many traffic generators. To configure VSPERF to work with an available
traffic generator, go through `this`_.

.. _this: <http://artifacts.opnfv.org/vswitchperf/colorado/configguide/trafficgen.html>

VSPERF supports the following traffic generators:

  * Dummy (DEFAULT): Allows you to use your own external
    traffic generator.
  * IXIA (IxNet and IxOS)
  * Spirent TestCenter
  * Xena Networks
  * MoonGen

To see the list of supported traffic generators from the CLI:

.. code-block:: console

    $ ./vsperf --list-trafficgens

Details of how to install and configure the various traffic generators are provided in the
VSPERF documentation. As KVM4NFV uses only the IXIA traffic generator, only IXIA is discussed
here. For complete documentation regarding traffic generators, please follow this `link`_.

.. _link: <https://gerrit.opnfv.org/gerrit/gitweb?p=vswitchperf.git;a=blob;f=docs/configguide/trafficgen.rst;h=85fc35b886d30db3b92a6b7dcce7ca742b70cbdc;hb=HEAD>
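For example, since this guide uses IXIA, the IxNet generator can be selected directly on the
command line for a single run. The sketch below simply combines the ``--trafficgen``,
``--conf-file`` and ``--tests`` options shown elsewhere in this guide; the settings file name
and the chosen testcase are illustrative:

.. code-block:: console

    $ ./vsperf --list-trafficgens
    $ ./vsperf --trafficgen IxNet --conf-file=user_settings.py --tests="phy2phy_tput"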
==========
IXIA Setup
==========

=====================
Hardware Requirements
=====================

VSPERF requires the following hardware to run tests: an IXIA traffic generator (IxNetwork), a
machine that runs the IXIA client software and a CentOS Linux release 7.1.1503 (Core) host.

Installation
------------

Follow the IXIA installation instructions to install the software.

IXIA Setup
----------

On the CentOS 7 system
~~~~~~~~~~~~~~~~~~~~~~

You need to install IxNetworkTclClient$(VER_NUM)Linux.bin.tgz.

On the IXIA client software system
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Find the IxNetwork TCL server app
(Start -> All Programs -> IXIA -> IxNetwork -> IxNetwork_$(VER_NUM) -> IxNetwork TCL Server):

- Right-click on IxNetwork TCL Server and select Properties.
- Under the Shortcut tab, make sure the Target field contains the argument ``-tclport xxxx``,
  where xxxx is your port number (take note of this port number; you will need it for the
  10_custom.conf file).

.. Figure:: ../images/IXIA1.png

- Hit OK and start the TCL server application.

VSPERF configuration
--------------------

There are several configuration options specific to the IxNetwork traffic generator
from IXIA. It is essential to set them correctly before VSPERF is executed
for the first time.

A detailed description of the options follows; an example configuration with placeholder
values is sketched after this list:

  * TRAFFICGEN_IXNET_MACHINE - IP address of the server where the IxNetwork TCL Server is
    running
  * TRAFFICGEN_IXNET_PORT - port on which the IxNetwork TCL Server accepts connections from
    TCL clients
  * TRAFFICGEN_IXNET_USER - username used during communication with the IxNetwork
    TCL Server and the IXIA chassis
  * TRAFFICGEN_IXIA_HOST - IP address of the IXIA traffic generator chassis
  * TRAFFICGEN_IXIA_CARD - identification of the card with dedicated ports at the IXIA chassis
  * TRAFFICGEN_IXIA_PORT1 - identification of the first dedicated port on TRAFFICGEN_IXIA_CARD
    at the IXIA chassis; VSPERF uses two separate ports for traffic generation. In case of
    unidirectional traffic, it is essential to connect the 1st IXIA port to the 1st NIC
    of the DUT, i.e. to the first PCI handle from the WHITELIST_NICS list. Otherwise traffic
    may not be able to pass through the vSwitch.
  * TRAFFICGEN_IXIA_PORT2 - identification of the second dedicated port on TRAFFICGEN_IXIA_CARD
    at the IXIA chassis; VSPERF uses two separate ports for traffic generation. In case of
    unidirectional traffic, it is essential to connect the 2nd IXIA port to the 2nd NIC
    of the DUT, i.e. to the second PCI handle from the WHITELIST_NICS list. Otherwise traffic
    may not be able to pass through the vSwitch.
  * TRAFFICGEN_IXNET_LIB_PATH - path to the DUT-specific installation of the IxNetwork TCL API
  * TRAFFICGEN_IXNET_TCL_SCRIPT - name of the TCL script which VSPERF will use for
    communication with the IXIA TCL server
  * TRAFFICGEN_IXNET_TESTER_RESULT_DIR - folder accessible from the IxNetwork TCL server,
    where test results are stored, e.g. ``c:/ixia_results``; see test-results-share_
  * TRAFFICGEN_IXNET_DUT_RESULT_DIR - directory accessible from the DUT, where test
    results from the IxNetwork TCL server are stored, e.g. ``/mnt/ixia_results``; see
    test-results-share_
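The sketch below shows how these options might look in a custom configuration file. Every value
is an illustrative placeholder to be replaced with the details of your own IXIA setup, and the
``TRAFFICGEN`` selection line is an assumption based on VSPERF's generic traffic generator
configuration rather than a parameter described in this guide:

.. code-block:: python

    # All values below are illustrative placeholders -- substitute your own setup details.
    TRAFFICGEN = 'IxNet'                        # assumed selector for the IxNet backend
    TRAFFICGEN_IXNET_MACHINE = '10.10.120.6'    # machine running the IxNetwork TCL Server
    TRAFFICGEN_IXNET_PORT = '8009'              # the "-tclport" value noted above
    TRAFFICGEN_IXNET_USER = 'ixia_user'
    TRAFFICGEN_IXIA_HOST = '10.10.120.7'        # IXIA chassis address
    TRAFFICGEN_IXIA_CARD = '1'
    TRAFFICGEN_IXIA_PORT1 = '1'
    TRAFFICGEN_IXIA_PORT2 = '2'
    TRAFFICGEN_IXNET_LIB_PATH = '/opt/ixnet/ixnetworkapi_linux/lib/IxTclNetwork'
    TRAFFICGEN_IXNET_TESTER_RESULT_DIR = 'c:/ixia_results'
    TRAFFICGEN_IXNET_DUT_RESULT_DIR = '/mnt/ixia_results'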
.. _test-results-share:

Test results share
------------------

VSPERF is not able to retrieve test results via the TCL API directly. Instead, all test
results are stored on the IxNetwork TCL server, in the folder defined by the
``TRAFFICGEN_IXNET_TESTER_RESULT_DIR`` configuration parameter. The content of this
folder must be shared (e.g. via the samba protocol) between the TCL server and the DUT, where
VSPERF is executed. VSPERF expects the test results to be available in the directory
configured by the ``TRAFFICGEN_IXNET_DUT_RESULT_DIR`` configuration parameter.

Example of sharing configuration:

  * Create a new folder at the IxNetwork TCL server machine, e.g. ``c:\ixia_results``
  * Modify the sharing options of the ``ixia_results`` folder to share it with everybody
  * Create a new directory at the DUT, where the shared directory with results
    will be mounted, e.g. ``/mnt/ixia_results``
  * Update your custom VSPERF configuration file as follows:

    .. code-block:: python

        TRAFFICGEN_IXNET_TESTER_RESULT_DIR = 'c:/ixia_results'
        TRAFFICGEN_IXNET_DUT_RESULT_DIR = '/mnt/ixia_results'

    Note: It is essential to use forward slashes '/' also in the path
    configured by the ``TRAFFICGEN_IXNET_TESTER_RESULT_DIR`` parameter.
  * Install the cifs-utils package,

    e.g. on an rpm-based Linux distribution:

    .. code-block:: console

        yum install cifs-utils

  * Mount the shared directory so that VSPERF can access the test results,

    e.g. manually, or permanently by adding a new record into ``/etc/fstab``:

    .. code-block:: console

        mount -t cifs //_TCL_SERVER_IP_OR_FQDN_/ixia_results /mnt/ixia_results \
              -o file_mode=0777,dir_mode=0777,nounix

It is recommended to verify that any new file created inside the ``c:/ixia_results`` folder
is visible at the DUT inside the ``/mnt/ixia_results`` directory.

Cloning and building src dependencies
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

In order to run VSPERF, you will need to download DPDK and OVS. You can do this manually and
build them in a preferred location, or you can use vswitchperf/src. The vswitchperf/src
directory contains makefiles that will allow you to clone and build the libraries that VSPERF
depends on, such as DPDK and OVS. To clone and build, simply run:

.. code:: bash

    cd src
    make

To delete a src subdirectory and its contents, so that it can be re-cloned, simply use:

.. code:: bash

    make cleanse

Configure the `./conf/10_custom.conf` file
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The supplied `10_custom.conf` file must be modified, as it contains configuration items for
which there are no reasonable default values.

The configuration items that can be added are not limited to the initial contents. Any
configuration item mentioned in any .conf file in the `./conf` directory can be added, and that
item will be overridden by the custom configuration value.

Using a custom settings file
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Alternatively, a custom settings file can be passed to `vsperf` via the `--conf-file` argument.

.. code:: bash

    ./vsperf --conf-file <path_to_settings_py> ...

Note that configuration passed in via the environment (`--load-env`) or via another command
line argument will override both the default and your custom configuration files. This
"priority hierarchy" can be described like so (1 = highest priority):

1. Command line arguments
2. Environment variables
3. Configuration file(s)

Executing tests
~~~~~~~~~~~~~~~

Before running any tests, make sure you have root permissions by adding the following line to
/etc/sudoers:

.. code:: bash

    username ALL=(ALL) NOPASSWD: ALL

username in the example above should be replaced with a real username.
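If you prefer not to edit ``/etc/sudoers`` itself, an equivalent rule can be dropped into
``/etc/sudoers.d/`` instead; the sketch below is only an illustration, with ``username`` again
standing in for the real account name:

.. code:: bash

    # run as root: add a drop-in rule and validate the resulting sudoers configuration
    echo 'username ALL=(ALL) NOPASSWD: ALL' > /etc/sudoers.d/username
    chmod 0440 /etc/sudoers.d/username
    visudo -c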
To list the available tests:

.. code:: bash

    ./vsperf --list-tests

To run a group of tests, for example all tests with a name containing
'RFC2544':

.. code:: bash

    ./vsperf --conf-file=user_settings.py --tests="RFC2544"

To run all tests:

.. code:: bash

    ./vsperf --conf-file=user_settings.py

Some tests allow for configurable parameters, including test duration (in seconds) as well
as packet sizes (in bytes).

.. code:: bash

    ./vsperf --conf-file user_settings.py \
        --tests RFC2544Tput \
        --test-param "rfc2544_duration=10;packet_sizes=128"

For all available options, check out the help dialog:

.. code:: bash

    ./vsperf --help

Testcases
---------

The tests available in VSPERF are:

  * phy2phy_tput
  * phy2phy_forwarding
  * back2back
  * phy2phy_tput_mod_vlan
  * phy2phy_cont
  * pvp_cont
  * pvvp_cont
  * pvpv_cont
  * phy2phy_scalability
  * pvp_tput
  * pvp_back2back
  * pvvp_tput
  * pvvp_back2back
  * phy2phy_cpu_load
  * phy2phy_mem_load

VSPERF modes of operation
-------------------------

VSPERF can be run in different modes. By default it configures the vSwitch, the traffic
generator and the VNF. However, it can also be used just for configuration and execution of
the traffic generator, or for execution of all components except the traffic generator itself.

The mode of operation is selected by the configuration parameter ``-m`` or ``--mode``:

.. code-block:: console

    -m MODE, --mode MODE  vsperf mode of operation;
       Values:
         "normal" - execute vSwitch, VNF and traffic generator
         "trafficgen" - execute only traffic generator
         "trafficgen-off" - execute vSwitch and VNF
         "trafficgen-pause" - execute vSwitch and VNF but wait before traffic transmission

When VSPERF is executed in "trafficgen" mode, the configuration of the traffic generator
can be modified through the ``TRAFFIC`` dictionary passed to the ``--test-params`` option.
It is not necessary to specify all values of the ``TRAFFIC`` dictionary; it is sufficient to
specify only the values which should be changed. A detailed description of the ``TRAFFIC``
dictionary can be found at :ref:`configuration-of-traffic-dictionary`.

Example of execution of VSPERF in "trafficgen" mode:

.. code-block:: console

    $ ./vsperf -m trafficgen --trafficgen IxNet --conf-file vsperf.conf \
        --test-params "TRAFFIC={'traffic_type':'rfc2544_continuous','bidir':'False','framerate':60}"

================================
Packet Forwarding Test Scenarios
================================

KVMFORNFV currently implements three scenarios as part of testing:

  * Host Scenario
  * Guest Scenario
  * SR-IOV Scenario

Packet Forwarding Host Scenario
-------------------------------

Here the Host is NODE-2. It has VSPERF installed and is properly configured to use the IXIA
traffic generator, with the IXIA card, ports and library paths provided along with the IP
address. Please refer to the figure below.

.. Figure:: ../images/Host_Scenario.png

Packet Forwarding Guest Scenario
--------------------------------

Here the guest is a virtual machine (VM) launched on Node-2 (the Host) with Qemu, using a
modified CentOS image provided by vsperf. In this scenario, the packet is initially forwarded
to the Host, which then forwards it to the launched guest. The time taken by the packet to
return to the IXIA traffic generator via the Host and Guest is calculated and published as a
test result of this scenario.

.. Figure:: ../images/Guest_Scenario.png
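As an illustration, the host and guest scenarios can be exercised with testcases from the list
above; the mapping used here (a phy2phy testcase for the host scenario and a pvp testcase for
the guest scenario) is an assumption made for the sake of the example:

.. code:: bash

    # host scenario: physical-to-physical throughput testcase
    ./vsperf --conf-file=user_settings.py --tests="phy2phy_tput"

    # guest scenario: physical-VM-physical (PVP) throughput testcase
    ./vsperf --conf-file=user_settings.py --tests="pvp_tput"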
Packet Forwarding SRIOV Scenario
--------------------------------

Unlike the Guest-via-Host scenario, here the packet generated at the IXIA is forwarded
directly to the Guest VM launched on the Host, by implementing an SR-IOV interface at the NIC
level of the Host, i.e. Node-2. The time taken by the packet to return to the IXIA traffic
generator is calculated and published as a test result for this scenario. See SRIOV-support_
below for details of how to use SR-IOV.

.. Figure:: ../images/SRIOV_Scenario.png

Using vfio_pci with DPDK
------------------------

To use vfio with DPDK instead of igb_uio, add the following parameter into your custom
configuration file:

.. code-block:: python

    PATHS['dpdk']['src']['modules'] = ['uio', 'vfio-pci']

**NOTE:** In case DPDK is installed from a binary package, please set
``PATHS['dpdk']['bin']['modules']`` instead.

**NOTE:** Please ensure that Intel VT-d is enabled in the BIOS.

**NOTE:** Please ensure your boot/grub parameters include the following:

.. code-block:: console

    iommu=pt intel_iommu=on

To check that IOMMU is enabled on your platform:

.. code-block:: console

    $ dmesg | grep IOMMU
    [ 0.000000] Intel-IOMMU: enabled
    [ 0.139882] dmar: IOMMU 0: reg_base_addr fbffe000 ver 1:0 cap d2078c106f0466 ecap f020de
    [ 0.139888] dmar: IOMMU 1: reg_base_addr ebffc000 ver 1:0 cap d2078c106f0466 ecap f020de
    [ 0.139893] IOAPIC id 2 under DRHD base 0xfbffe000 IOMMU 0
    [ 0.139894] IOAPIC id 0 under DRHD base 0xebffc000 IOMMU 1
    [ 0.139895] IOAPIC id 1 under DRHD base 0xebffc000 IOMMU 1
    [ 3.335744] IOMMU: dmar0 using Queued invalidation
    [ 3.335746] IOMMU: dmar1 using Queued invalidation
    ....

.. _SRIOV-support:

Using SRIOV support
-------------------

To use the virtual functions of a NIC with SRIOV support, use the extended form
of the NIC PCI slot definition:

.. code-block:: python

    WHITELIST_NICS = ['0000:03:00.0|vf0', '0000:03:00.1|vf3']

Where ``vf`` indicates virtual function usage and the following number defines the VF to be
used. In case VF usage is detected, vswitchperf will enable SRIOV support for the given card
and it will detect the PCI slot numbers of the selected VFs.

So in the example above, one VF will be configured for NIC '0000:03:00.0'
and four VFs will be configured for NIC '0000:03:00.1'. Vswitchperf
will detect the PCI addresses of the selected VFs and will use them during
test execution.

At the end of vswitchperf execution, SRIOV support will be disabled.

SRIOV support is generic and can be used in different testing scenarios.
For example:

* vSwitch tests with or without DPDK support, to verify the impact
  of VF usage on vSwitch performance
* tests without a vSwitch, where traffic is forwarded directly
  between VF interfaces by a packet forwarder (e.g. the testpmd application)
* tests without a vSwitch, where the VM accesses VF interfaces directly
  by PCI passthrough, to measure raw VM throughput performance
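Putting the two settings together, a custom configuration for an SR-IOV run over vfio-pci
could contain the sketch below; the PCI addresses are the same placeholder values used above
and must be replaced with the DUT's own devices:

.. code-block:: python

    # use vfio-pci instead of igb_uio (DPDK built from source) and
    # whitelist two virtual functions for the test
    PATHS['dpdk']['src']['modules'] = ['uio', 'vfio-pci']
    WHITELIST_NICS = ['0000:03:00.0|vf0', '0000:03:00.1|vf3']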