Diffstat (limited to 'docs')
5 files changed, 524 insertions, 110 deletions
diff --git a/docs/testing/user/userguide/04-installation.rst b/docs/testing/user/userguide/04-installation.rst
index 2dff80ef9..213821798 100644
--- a/docs/testing/user/userguide/04-installation.rst
+++ b/docs/testing/user/userguide/04-installation.rst
@@ -463,112 +463,111 @@ These configuration files can be found in the ``samples`` directory.
 
 Default location for the output is ``/tmp/yardstick.out``.
 
-Automatic installation of Yardstick using ansible
--------------------------------------------------
+Automatic installation of Yardstick
+-----------------------------------
 
-Automatic installation can be used as an alternative to the manual.
-Yardstick can be installed on the bare metal and to the container. Yardstick
+Automatic installation can be used as an alternative to manual installation:
+provide the parameters for the ansible script ``install.yaml`` in the
+``nsb_setup.sh`` file. Yardstick can be installed on bare metal or in a
+container. The Yardstick
 container can be either pulled or built.
 
 Bare metal installation
 ^^^^^^^^^^^^^^^^^^^^^^^
 
-Use ansible script ``install.yaml`` to install Yardstick on Ubuntu server:
+Modify the ``install.yaml`` parameters in the ``nsb_setup.sh`` file to install
+Yardstick on an Ubuntu server:
 
 .. code-block:: console
 
    ansible-playbook -i install-inventory.ini install.yaml \
+   -e IMAGE_PROPERTY='none' \
    -e YARDSTICK_DIR=<path to Yardstick folder>
 
 .. note:: By default ``INSTALLATION_MODE`` is ``baremetal``.
 
-.. note:: By default Ubuntu 16.04 is chosen (xenial). It can be changed to
-   Ubuntu 18.04 (bionic) by passing ``-e OS_RELEASE=bionic`` parameter.
+.. note:: No modification of ``install-inventory.ini`` is needed for Yardstick
+   installation.
 
 .. note:: To install Yardstick in virtual environment pass parameter
    ``-e VIRTUAL_ENVIRONMENT=True``.
 
-To build Yardstick NSB image pass ``IMG_PROPERTY=nsb`` as input parameter:
-
-.. code-block:: console
-
-   ansible-playbook -i install-inventory.ini install.yaml \
-   -e IMAGE_PROPERTY=nsb \
-   -e YARDSTICK_DIR=<path to Yardstick folder>
-
-.. note:: In this ``INSTALLATION_MODE`` mode either Yardstick image or SampleVNF
-   images will be built. Image type is defined by parameter ``IMAGE_PROPERTY``.
-   By default Yardstick image will be built.
-
 Container installation
 ^^^^^^^^^^^^^^^^^^^^^^
 
-Use ansible script ``install.yaml`` to pull or build Yardstick
-container. To pull Yardstick image and start container run:
+Modify the ``install.yaml`` parameters in the ``nsb_setup.sh`` file to pull or
+build the Yardstick container. To pull the Yardstick image and start the
+container, run:
 
 .. code-block:: console
 
    ansible-playbook -i install-inventory.ini install.yaml \
-   -e YARDSTICK_DIR=<path to Yardstick folder> \
+   -e IMAGE_PROPERTY='none' \
    -e INSTALLATION_MODE=container_pull
 
-.. note:: In this ``INSTALLATION_MODE`` mode either Yardstick image or SampleVNF
-   images will be built. Image type is defined by variable ``IMG_PROPERTY`` in
-   file ``ansible/group_vars/all.yml``. By default Yardstick image will be
-   built.
-
-.. note:: Open question: How to know if Docker image is built on Ubuntu 16.04 and 18.04?
-   Do we need separate tag to be used?
+.. note:: The Yardstick docker image is available for both Ubuntu 16.04 and
+   Ubuntu 18.04. By default the Ubuntu 16.04 based docker image is used. To
+   use the Ubuntu 18.04 based docker image, pass the
+   ``-i opnfv/yardstick-ubuntu-18.04`` parameter to ``nsb_setup.sh``.
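+
+The ``install.yaml`` parameters mentioned above live inside ``nsb_setup.sh``.
+A sketch of the relevant invocation (the exact option values in the shipped
+script may differ)::
+
+   ansible-playbook -e IMAGE_PROPERTY='nsb' \
+       -e OS_RELEASE='xenial' \
+       -e INSTALLATION_MODE='container_pull' \
+       -e YARD_IMAGE_ARCH='amd64' \
+       -i install-inventory.ini install.yaml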
 
-To build Yardstick image run:
+To build the Yardstick image, modify the Dockerfile as per the comments in it
+and run:
 
 .. code-block:: console
 
-   ansible-playbook -i install-inventory.ini install.yaml \
-   -e YARDSTICK_DIR=<path to Yardstick folder> \
-   -e INSTALLATION_MODE=container
+   cd yardstick
+   docker build -f docker/Dockerfile -t opnfv/yardstick:<tag> .
 
-.. note:: In this ``INSTALLATION_MODE`` mode neither Yardstick image nor SampleVNF
-   image will be built.
+.. note:: A Yardstick docker image based on Ubuntu 16.04 will be built.
+   Pass ``-f docker/Dockerfile_ubuntu18`` to build a Yardstick docker image
+   based on Ubuntu 18.04.
 
-.. note:: By default Ubuntu 16.04 is chosen (xenial). It can be changed to
-   Ubuntu 18.04 (bionic) by passing ``-e OS_RELEASE=bionic`` parameter.
+.. note:: Add ``--build-arg http_proxy=http://<proxy_host>:<proxy_port>`` to
+   the build command if the server is behind a proxy.
 
 
 Parameters for ``install.yaml``
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 
-Description of the parameters used with ``install.yaml`` script
+Description of the parameters used with ``install.yaml``:
 
 +-------------------------+-------------------------------------------------+
 | Parameters              | Detail                                          |
 +=========================+=================================================+
-| -i install-inventory.ini| Installs package dependency to remote servers   |
-|                         | Mandatory parameter                             |
-|                         | By default no remote servers are provided       |
-|                         | Needed packages will be installed on localhost  |
+| -i install-inventory.ini|| Installs package dependencies on remote        |
+|                         || servers and localhost                          |
+|                         || Mandatory parameter                            |
+|                         || By default no remote servers are provided      |
 +-------------------------+-------------------------------------------------+
-| -e YARDSTICK_DIR        | Path to Yardstick folder                        |
-|                         | Mandatory parameter                             |
+| -e YARDSTICK_DIR        || Path to Yardstick folder                       |
+|                         || Mandatory parameter for Yardstick bare metal   |
+|                         || installation                                   |
 +-------------------------+-------------------------------------------------+
-| -e INSTALLATION_MODE    | baremetal: Yardstick is installed to the bare   |
-|                         | metal                                           |
-|                         | Default parameter                               |
+| -e INSTALLATION_MODE    || baremetal: Yardstick is installed to the bare  |
+|                         |  metal                                          |
+|                         || Default parameter                              |
 |                         +-------------------------------------------------+
-|                         | container: Yardstick is installed in container  |
-|                         | Container is built from Dockerfile              |
+|                         || container: Yardstick is installed in container |
+|                         || Container is built from Dockerfile             |
 |                         +-------------------------------------------------+
-|                         | container_pull: Yardstick is installed in       |
-|                         | container                                       |
-|                         | Container is pulled from docker hub             |
+|                         || container_pull: Yardstick is installed in      |
+|                         || container                                      |
+|                         || Container is pulled from docker hub            |
 +-------------------------+-------------------------------------------------+
-| -e OS_RELEASE           | xenial or bionic: Ubuntu version to be used     |
-|                         | Default is Ubuntu 16.04 (xenial)                |
+| -e OS_RELEASE           || xenial or bionic: Ubuntu version to be used    |
+|                         || for the VM image (nsb or normal)               |
+|                         || Default is Ubuntu 16.04, xenial                |
++-------------------------+-------------------------------------------------+
+| -e IMAGE_PROPERTY       || nsb: Build Yardstick NSB VM image              |
+|                         || Used to run Yardstick NSB tests on sample VNF  |
+|                         || Default parameter                              |
+|                         +-------------------------------------------------+
+|                         || normal: Build VM image to run ping test in     |
+|                         || OpenStack                                      |
+|                         +-------------------------------------------------+
+|                         || none: Don't build a VM image                   |
 +-------------------------+-------------------------------------------------+
-| -e IMAGE_PROPERTY       | normal or nsb: Type of the VM image to be built |
-|                         | Default image is Yardstick                      |
+| -e VIRTUAL_ENVIRONMENT  || False or True: Whether to install in a         |
+|                         || virtualenv                                     |
+|                         || Default is False                               |
 +-------------------------+-------------------------------------------------+
-| -e VIRTUAL_ENVIRONMENT  | False or True: Whether install in virtualenv    |
-|                         | Default is False                                |
+| -e YARD_IMAGE_ARCH      || CPU architecture on servers                    |
+|                         || Default is 'amd64'                             |
 +-------------------------+-------------------------------------------------+
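+
+For example, to install Yardstick on bare metal into a virtual environment,
+skipping the VM image build, the parameters above can be combined as follows
+(a sketch; adjust the values to your setup)::
+
+   ansible-playbook -i install-inventory.ini install.yaml \
+   -e IMAGE_PROPERTY='none' \
+   -e VIRTUAL_ENVIRONMENT=True \
+   -e YARDSTICK_DIR=<path to Yardstick folder>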
diff --git a/docs/testing/user/userguide/13-nsb-installation.rst b/docs/testing/user/userguide/13-nsb-installation.rst
index 71ced43ea..0487dad9a 100644
--- a/docs/testing/user/userguide/13-nsb-installation.rst
+++ b/docs/testing/user/userguide/13-nsb-installation.rst
@@ -28,7 +28,7 @@ Abstract
 The steps needed to run Yardstick with NSB testing are:
 
 * Install Yardstick (NSB Testing).
-* Setup/reference ``pod.yaml`` describing Test topology
+* Setup/reference ``pod.yaml`` describing Test topology.
 * Create/reference the test configuration yaml file.
 * Run the test case.
 
@@ -89,21 +89,24 @@ Boot and BIOS settings:
 Install Yardstick (NSB Testing)
 -------------------------------
 
-Download the source code and check out the latest stable branch::
+Yardstick with NSB can be installed using ``nsb_setup.sh``.
+The ``nsb_setup.sh`` script allows you to:
 
-.. code-block:: console
-
-  git clone https://gerrit.opnfv.org/gerrit/yardstick
-  cd yardstick
-  # Switch to latest stable branch
-  git checkout stable/gambia
-
-Configure the network proxy, either using the environment variables or setting
-the global environment file.
+1. Install Yardstick in the specified mode: bare metal or container.
+   Refer to :doc:`04-installation`.
+2. Install package dependencies on remote servers used as traffic generator or
+   sample VNF. Add such servers to the ``install-inventory.ini`` file, to
+   either the ``yardstick-standalone`` or ``yardstick-baremetal`` server
+   group. This step also configures IOMMU, hugepages, open file limits, CPU
+   isolation, etc.
+3. Build a VM image, either nsb or normal. The nsb VM image is used to run
+   Yardstick sample VNF tests, like vFW, vACL, vCGNAPT, etc.
+   The normal VM image is used to run Yardstick ping tests in the OpenStack
+   context.
+4. Add the nsb or normal VM image to OpenStack together with the OpenStack
+   variables.
 
-* Set environment
+First, configure the network proxy, either using environment variables or by
+setting the global environment file.
 
-.. code-block::
+Set environment::
 
   http_proxy='http://proxy.company.com:port'
   https_proxy='http://proxy.company.com:port'
 
@@ -113,42 +116,102 @@ the global environment file.
 
   export http_proxy='http://proxy.company.com:port'
   export https_proxy='http://proxy.company.com:port'
 
-Modify the Yardstick installation inventory, used by Ansible::
+Download the source code and check out the latest stable branch:
+
+.. code-block:: console
+
+  git clone https://gerrit.opnfv.org/gerrit/yardstick
+  cd yardstick
+  # Switch to latest stable branch
+  git checkout stable/gambia
+
+Modify the Yardstick installation inventory used by Ansible::
 
   cat ./ansible/install-inventory.ini
   [jumphost]
   localhost ansible_connection=local
 
-  [yardstick-standalone]
-  yardstick-standalone-node ansible_host=192.168.1.2
-  yardstick-standalone-node-2 ansible_host=192.168.1.3
-
-  # section below is only due backward compatibility.
-  # it will be removed later
   [yardstick:children]
   jumphost
 
+  [yardstick-standalone]
+  standalone ansible_host=192.168.2.51 ansible_connection=ssh
+
+  [yardstick-baremetal]
+  baremetal ansible_host=192.168.2.52 ansible_connection=ssh
+
   [all:vars]
+  arch_amd64=amd64
+  arch_arm64=arm64
+  inst_mode_baremetal=baremetal
+  inst_mode_container=container
+  inst_mode_container_pull=container_pull
+  ubuntu_archive={"amd64": "http://archive.ubuntu.com/ubuntu/", "arm64": "http://ports.ubuntu.com/ubuntu-ports/"}
  ansible_user=root
-  ansible_pass=root
+  ansible_ssh_pass=root
  # OR
  ansible_ssh_private_key_file=/root/.ssh/id_rsa
+
+.. warning::
+
+   Before running ``nsb_setup.sh`` make sure python is installed on the
+   servers added to the ``yardstick-standalone`` or ``yardstick-baremetal``
+   groups.
 
 .. note::
 
    SSH access without password needs to be configured for all your nodes
-   defined in ``yardstick-install-inventory.ini`` file.
+   defined in the ``install-inventory.ini`` file.
    If you want to use password authentication you need to install ``sshpass``::
 
      sudo -EH apt-get install sshpass
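+
+A quick sanity check that the inventory hosts are reachable over SSH and have
+python available (a suggestion only, not part of ``nsb_setup.sh``)::
+
+   ansible -i ./ansible/install-inventory.ini all -m ping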
+
+.. note::
+
+   A VM image built by other means than Yardstick can be added to OpenStack.
+   Uncomment and set the correct path to the VM image in the
+   ``install-inventory.ini`` file::
+
+      path_to_img=/tmp/workspace/yardstick-image.img
+
+.. note::
+
+   CPU isolation can be applied to the remote servers, for example::
+
+      ISOL_CPUS=2-27,30-55
+
+   Uncomment and modify it accordingly in the ``install-inventory.ini`` file.
+
+By default ``nsb_setup.sh`` pulls the Ubuntu 16.04 based Yardstick image from
+docker hub and starts a container, builds the Ubuntu 16.04 based NSB VM image,
+and installs packages on the servers given in the ``yardstick-standalone`` and
+``yardstick-baremetal`` host groups.
+
+To change the default behavior, modify the parameters for ``install.yaml`` in
+the ``nsb_setup.sh`` file.
+
+Refer to chapter :doc:`04-installation` for more details on ``install.yaml``
+parameters.
+
-To execute an installation for a BareMetal or a Standalone context::
+To execute an installation for a **BareMetal** or a **Standalone** context::
 
    ./nsb_setup.sh
 
-To execute an installation for an OpenStack context::
+To execute an installation for an **OpenStack** context::
 
    ./nsb_setup.sh <path to admin-openrc.sh>
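+
+The ``admin-openrc.sh`` file is the standard OpenStack credentials file,
+downloaded from Horizon or created manually. A minimal sketch (values are
+illustrative)::
+
+   export OS_AUTH_URL=http://<keystone_ip>:5000/v3
+   export OS_USERNAME=admin
+   export OS_PASSWORD=<admin password>
+   export OS_PROJECT_NAME=admin
+   export OS_USER_DOMAIN_NAME=Default
+   export OS_PROJECT_DOMAIN_NAME=Default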
- System Topology --------------- @@ -175,7 +234,7 @@ System Topology | | | | | | (1)<-----(1) | | +----------+ +----------+ - trafficgen_1 vnf + trafficgen_0 vnf Environment parameters and credentials @@ -185,7 +244,7 @@ Configure yardstick.conf ^^^^^^^^^^^^^^^^^^^^^^^^ If you did not run ``yardstick env influxdb`` inside the container to generate - ``yardstick.conf``, then create the config file manually (run inside the +``yardstick.conf``, then create the config file manually (run inside the container):: cp ./etc/yardstick/yardstick.conf.sample /etc/yardstick/yardstick.conf @@ -251,7 +310,7 @@ Bare-Metal 2-Node setup | | | | | | (n)<-----(n) | | +----------+ +----------+ - trafficgen_1 vnf + trafficgen_0 vnf Bare-Metal 3-Node setup - Correlated Traffic ++++++++++++++++++++++++++++++++++++++++++++ @@ -265,7 +324,7 @@ Bare-Metal 3-Node setup - Correlated Traffic | | | | | | | | | |(1)<---->(0)| | +----------+ +----------+ +------------+ - trafficgen_1 vnf trafficgen_2 + trafficgen_0 vnf trafficgen_1 Bare-Metal Config pod.yaml @@ -279,7 +338,7 @@ topology and update all the required fields.:: nodes: - - name: trafficgen_1 + name: trafficgen_0 role: TrafficGen ip: 1.1.1.1 user: root @@ -388,7 +447,7 @@ On Host, where VM is created: .. code-block:: YAML servers: - vnf: + vnf_0: network_ports: mgmt: cidr: '1.1.1.7/24' @@ -446,7 +505,7 @@ SR-IOV 2-Node setup | | (n)<----->(n) | ----------------- | | | | | +----------+ +-------------------------+ - trafficgen_1 host + trafficgen_0 host @@ -474,7 +533,7 @@ SR-IOV 3-Node setup - Correlated Traffic | | | | | | | | | (n)<----->(n) | -----| (n)<-->(n) | | +----------+ +---------------------+ +--------------+ - trafficgen_1 host trafficgen_2 + trafficgen_0 host trafficgen_1 Before executing Yardstick test cases, make sure that ``pod.yaml`` reflects the topology and update all the required fields. @@ -493,7 +552,7 @@ SR-IOV Config pod_trex.yaml nodes: - - name: trafficgen_1 + name: trafficgen_0 role: TrafficGen ip: 1.1.1.1 user: root @@ -554,7 +613,7 @@ Update contexts section user: "" # update VM username password: "" # update password servers: - vnf: + vnf_0: network_ports: mgmt: cidr: '1.1.1.61/24' # Update VM IP address, if static, <ip>/<mask> or if dynamic, <start of ip>/<mask> @@ -619,7 +678,7 @@ On Host, where VM is created: .. 
@@ -251,7 +310,7 @@ Bare-Metal 2-Node setup
   |          |              |          |
   |          | (n)<-----(n) |          |
   +----------+              +----------+
-  trafficgen_1                   vnf
+  trafficgen_0                   vnf
 
 Bare-Metal 3-Node setup - Correlated Traffic
 ++++++++++++++++++++++++++++++++++++++++++++
@@ -265,7 +324,7 @@ Bare-Metal 3-Node setup - Correlated Traffic
   |          |              |          |            |          |
   |          |              |          |(1)<---->(0)|          |
   +----------+              +----------+            +------------+
-  trafficgen_1                   vnf                 trafficgen_2
+  trafficgen_0                   vnf                 trafficgen_1
 
 
 Bare-Metal Config pod.yaml
@@ -279,7 +338,7 @@ topology and update all the required fields.::
 
     nodes:
     -
-       name: trafficgen_1
+       name: trafficgen_0
        role: TrafficGen
        ip: 1.1.1.1
        user: root
@@ -388,7 +447,7 @@ On Host, where VM is created:
 .. code-block:: YAML
 
    servers:
-     vnf:
+     vnf_0:
        network_ports:
          mgmt:
            cidr: '1.1.1.7/24'
@@ -446,7 +505,7 @@ SR-IOV 2-Node setup
   |          | (n)<----->(n) | -----------------    |
   |          |               |                      |
   +----------+               +-------------------------+
-  trafficgen_1                          host
+  trafficgen_0                          host
 
 
 
@@ -474,7 +533,7 @@ SR-IOV 3-Node setup - Correlated Traffic
   |          |               |                      |          |              |
   |          | (n)<----->(n) |                      | -----| (n)<-->(n) |     |
   +----------+               +---------------------+      +--------------+
-  trafficgen_1                          host                trafficgen_2
+  trafficgen_0                          host                trafficgen_1
 
 Before executing Yardstick test cases, make sure that ``pod.yaml`` reflects the
 topology and update all the required fields.
@@ -493,7 +552,7 @@ SR-IOV Config pod_trex.yaml
 
     nodes:
     -
-       name: trafficgen_1
+       name: trafficgen_0
        role: TrafficGen
        ip: 1.1.1.1
        user: root
@@ -554,7 +613,7 @@ Update contexts section
 
      user: ""             # update VM username
      password: ""         # update password
      servers:
-       vnf:
+       vnf_0:
          network_ports:
            mgmt:
              cidr: '1.1.1.61/24'  # Update VM IP address, if static, <ip>/<mask> or if dynamic, <start of ip>/<mask>
@@ -619,7 +678,7 @@ On Host, where VM is created:
 .. code-block:: YAML
 
    servers:
-     vnf:
+     vnf_0:
        network_ports:
          mgmt:
            cidr: '1.1.1.7/24'
@@ -683,7 +742,7 @@ OVS-DPDK 2-Node setup
   |          |               |    (ovs-dpdk)        |
   |          | (n)<----->(n) |------------------    |
   +----------+               +-------------------------+
-  trafficgen_1                          host
+  trafficgen_0                          host
 
 
 OVS-DPDK 3-Node setup - Correlated Traffic
@@ -713,7 +772,7 @@ OVS-DPDK 3-Node setup - Correlated Traffic
   |          |               |    (ovs-dpdk)        |          |            |
   |          | (n)<----->(n) |                      | ------ |(n)<-->(n)|   |
   +----------+               +-------------------------+     +------------+
-  trafficgen_1                          host                  trafficgen_2
+  trafficgen_0                          host                  trafficgen_1
 
 
 Before executing Yardstick test cases, make sure that the ``pod.yaml`` reflects
@@ -731,7 +790,7 @@ OVS-DPDK Config pod_trex.yaml
 
     nodes:
    -
-      name: trafficgen_1
+      name: trafficgen_0
       role: TrafficGen
       ip: 1.1.1.1
       user: root
@@ -802,7 +861,7 @@ Update contexts section
 
      user: ""             # update VM username
      password: ""         # update password
      servers:
-       vnf:
+       vnf_0:
         network_ports:
           mgmt:
             cidr: '1.1.1.61/24'  # Update VM IP address, if static, <ip>/<mask> or if dynamic, <start of ip>/<mask>
@@ -859,7 +918,7 @@ Single node OpenStack with external TG
   |          | (PF1)<----->(PF1) +--------------------+    |
   |          |                   |                         |
   +----------+                   +----------------------------+
-  trafficgen_1                          host
+  trafficgen_0                          host
 
 
 Host pre-configuration
@@ -1011,7 +1070,7 @@ context using steps described in `NS testing - using yardstick CLI`_ section.
 
 
 Multi node OpenStack TG and VNF setup (two nodes)
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 
 .. code-block:: console
 
@@ -1022,7 +1081,7 @@ Multi node OpenStack TG and VNF setup (two nodes)
   |   |sample-VNF VM       |   |          |   |sample-VNF VM       |   |
   |   |                    |   |          |   |                    |   |
   |   |         TG         |   |          |   |        DUT         |   |
-  |   |    trafficgen_1    |   |          |   |       (VNF)        |   |
+  |   |    trafficgen_0    |   |          |   |       (VNF)        |   |
   |   |                    |   |          |   |                    |   |
   |   +--------+  +--------+   |          |   +--------+  +--------+   |
   |   | VF NIC |  | VF NIC |   |          |   | VF NIC |  | VF NIC |   |
@@ -1094,7 +1153,7 @@ Enabling other Traffic generators
 ---------------------------------
 
 IxLoad
-~~~~~~
+^^^^^^
 
 1. Software needed: IxLoadAPI ``<IxLoadTclApi verson>Linux64.bin.tgz`` and
    ``<IxOS version>Linux64.bin.tar.gz`` (Download from ixia support site)
@@ -1197,9 +1256,9 @@ to be preinstalled and properly configured.
   ``PYTHONPATH`` environment variable.
 
 .. important::
-  The current version of LsApi module has an issue with reading LD_LIBRARY_PATH.
-  For LsApi module to initialize correctly following lines (184-186) in
-  lsapi.py
+   The current version of the LsApi module has an issue with reading
+   LD_LIBRARY_PATH. For the LsApi module to initialize correctly, the
+   following lines (184-186) in lsapi.py
 
 .. code-block:: python
diff --git a/docs/testing/user/userguide/14-nsb-operation.rst b/docs/testing/user/userguide/14-nsb-operation.rst
index 12e269187..941a0bb65 100644
--- a/docs/testing/user/userguide/14-nsb-operation.rst
+++ b/docs/testing/user/userguide/14-nsb-operation.rst
@@ -136,7 +136,7 @@ case, please follow the instructions below.
 
       image: yardstick-samplevnfs
       ...
       servers:
-        vnf__0:
+        vnf_0:
           ...
           availability_zone: <AZ_NAME>
           ...
@@ -332,8 +332,8 @@ Baremetal
       traffic_profile: ../../traffic_profiles/ipv4_throughput.yaml
       topology: vfw-tg-topology.yaml
       nodes:
-        tg__0: trafficgen_1.yardstick
-        vnf__0: vnf.yardstick
+        tg__0: trafficgen_0.yardstick
+        vnf__0: vnf_0.yardstick
       options:
         framesize:
           uplink: {64B: 100}
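+
+The part of each node reference before the dot must match a node ``name`` in
+the ``pod.yaml`` file; the part after the dot is the context name. A minimal
+sketch of the matching ``pod.yaml`` entries:
+
+.. code-block:: YAML
+
+   nodes:
+   -
+     name: trafficgen_0   # referenced above as tg__0: trafficgen_0.yardstick
+     role: TrafficGen
+   -
+     name: vnf_0          # referenced above as vnf__0: vnf_0.yardstick
+     role: vnf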
@@ -427,7 +427,7 @@ options section.
 
   scenarios:
   - type: NSPerf
     nodes:
-      tg__0: tg_0.yardstick
+      tg__0: trafficgen_0.yardstick
 
     options:
       tg_0:
diff --git a/docs/testing/user/userguide/nsb/tc_bng_pppoe_rfc2544_ixia.rst b/docs/testing/user/userguide/nsb/tc_bng_pppoe_rfc2544_ixia.rst
new file mode 100644
index 000000000..ffe4f6c19
--- /dev/null
+++ b/docs/testing/user/userguide/nsb/tc_bng_pppoe_rfc2544_ixia.rst
@@ -0,0 +1,177 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, 2019 Intel Corporation.
+
+***************************************************************
+Yardstick Test Case Description: NSB vBNG RFC2544 QoS TEST CASE
+***************************************************************
+
++-----------------------------------------------------------------------------+
+|NSB vBNG RFC2544 QoS base line test case without link congestion             |
+|                                                                             |
++--------------+--------------------------------------------------------------+
+|test case id  | tc_bng_pppoe_rfc2544_ixia_IMIX_scale_up                      |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+| metric       | Network metrics:                                             |
+|              | * TxThroughput                                               |
+|              | * RxThroughput                                               |
+|              | * TG packets in                                              |
+|              | * TG packets out                                             |
+|              | * Max Latency                                                |
+|              | * Min Latency                                                |
+|              | * Average Latency                                            |
+|              | * Packets drop percentage                                    |
+|              |                                                              |
+|              | PPPoE subscribers metrics:                                   |
+|              | * Sessions up                                                |
+|              | * Sessions down                                              |
+|              | * Sessions Not Started                                       |
+|              | * Sessions Total                                             |
+|              |                                                              |
+|              | NOTE: the same network metrics are collected:                |
+|              | * summary for all ports                                      |
+|              | * per port                                                   |
+|              | * per priority flows summary on all ports                    |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|test purpose  | This test measures the performance of a BNG network device  |
+|              | according to the RFC2544 testing methodology. The test case  |
+|              | creates PPPoE subscriber connections to the BNG, runs        |
+|              | prioritized traffic at maximum throughput on all ports and   |
+|              | collects network and PPPoE subscriber metrics.               |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|configuration | The BNG QoS RFC2544 test cases are listed below:             |
+|              |                                                              |
+|              | * tc_bng_pppoe_rfc2544_ixia_IMIX_scale_up.yaml               |
+|              |                                                              |
+|              | The test case is a template; the number of ports in the      |
+|              | setup can be passed using cli arguments, e.g.:               |
+|              |                                                              |
+|              | yardstick -d task start --task-args='{vports: 8}' <tc_yaml>  |
+|              |                                                              |
+|              | By default, vports=2.                                        |
+|              |                                                              |
+|              | Test duration:                                               |
+|              | * set to 30 sec;                                             |
+|              |                                                              |
+|              | Traffic type:                                                |
+|              | * IPv4;                                                      |
+|              |                                                              |
+|              | Packet sizes:                                                |
+|              | * IMIX. The following default IMIX distribution is used:     |
+|              |                                                              |
+|              |   uplink: 70B - 33%, 940B - 33%, 1470B - 34%                 |
+|              |   downlink: 68B - 3%, 932B - 1%, 1470B - 96%                 |
+|              |                                                              |
+|              | VLAN settings:                                               |
+|              | * QinQ on access ports;                                      |
+|              | * VLAN on core ports;                                        |
+|              |                                                              |
+|              | Number of PPPoE subscribers:                                 |
+|              | * 4000 per access port;                                      |
+|              | * 1000 per SVLAN;                                            |
+|              |                                                              |
+|              | Default ToS bits settings:                                   |
+|              | * 0 - (000) Routine                                          |
+|              | * 4 - (100) Flash Override                                   |
+|              | * 7 - (111) Network Control.                                 |
+|              |                                                              |
+|              | The above fields are the main options used for the test case |
+|              | and can be configured using cli options at test run or       |
+|              | directly in the test case yaml file.                         |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|test tool     | IXIA IxNetwork                                               |
+|              |                                                              |
+|              | IXIA IxNetwork is used to emulate PPPoE sessions, generate   |
+|              | L2-L3 traffic, analyze traffic flows and collect network     |
+|              | metrics during the test run.                                 |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|applicability | The BNG QoS RFC2544 test case can be configured with         |
+|              | different:                                                   |
+|              |                                                              |
+|              | * Number of PPPoE subscribers sessions;                      |
+|              | * Setup ports number;                                        |
+|              | * IP Priority type;                                          |
+|              | * Packet size;                                               |
+|              | * Enable/disable BGP protocol on core ports;                 |
+|              |                                                              |
+|              | Default values exist.                                        |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|references    | RFC2544                                                      |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+| pre-test     | 1. BNG is up and running and has configured:                 |
+| conditions   |    * access ports with QinQ tagging;                         |
+|              |    * core ports with configured IP addresses and VLAN;       |
+|              |    * PPPoE subscribers authorization settings (no auth or    |
+|              |      Radius server, PAP auth protocol);                      |
+|              |    * QoS settings;                                           |
+|              |                                                              |
+|              | 2. IxNetwork API server is running on the TCL port specified |
+|              |    in the pod.yaml file;                                     |
+|              |                                                              |
+|              | 3. BNG ports are connected to IXIA ports (IXIA uplink        |
+|              |    ports are connected to BNG access ports and IXIA          |
+|              |    downlink ports are connected to BNG core ports);          |
+|              |                                                              |
+|              | 4. The pod.yaml file contains all necessary information      |
+|              |    (BNG access and core ports settings, core ports IP        |
+|              |    address, NICs, IxNetwork TCL port, IXIA uplink/downlink   |
+|              |    ports, etc).                                              |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|test sequence | description and expected result                              |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|step 1        | Yardstick resolves the topology and connects to the          |
+|              | IxNetwork API server over TCL.                               |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|step 2        | The test scenario runs, performing the following steps:      |
+|              |                                                              |
+|              | 1. Create access network topologies (these topologies are    |
+|              |    based on IXIA ports which are connected to BNG access     |
+|              |    ports);                                                   |
+|              | 2. Configure access network topologies with multiple device  |
+|              |    groups. Each device group represents a single SVLAN with  |
+|              |    PPPoE subscriber sessions (the number of SVLANs and       |
+|              |    subscribers created on a port depends on the options      |
+|              |    specified in the test case file);                         |
+|              | 3. Create core network topologies (these topologies are      |
+|              |    based on IXIA ports which are connected to BNG core       |
+|              |    ports);                                                   |
+|              | 4. Configure core network topologies with a single device    |
+|              |    group which represents one connection with configured     |
+|              |    VLAN and BGP protocol;                                    |
+|              | 5. Establish PPPoE subscriber connections to BNG;            |
+|              | 6. Create traffic flows between access and core ports        |
+|              |    (traffic flows are created between access-core port       |
+|              |    pairs, traffic is bi-directional);                        |
+|              | 7. Configure each traffic flow with the options specified    |
+|              |    in the traffic profile;                                   |
+|              | 8. Run traffic for the duration specified in the test case   |
+|              |    file;                                                     |
+|              | 9. Collect network metrics after traffic is stopped;         |
+|              | 10. In case the drop percentage rate is higher than          |
+|              |     expected, reduce the traffic line rate and repeat steps  |
+|              |     7-10;                                                    |
+|              | 11. In case the drop percentage rate is as expected, or the  |
+|              |     maximum number of iterations in step 10 is reached,      |
+|              |     disconnect PPPoE subscribers and stop traffic;           |
+|              | 12. Stop test.                                               |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|step 3        | During each iteration interval in the test run, all          |
+|              | specified metrics are retrieved from IxNetwork and stored    |
+|              | in the yardstick dispatcher.                                 |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|test verdict  | The vBNG RFC2544 test case will achieve the maximum traffic  |
+|              | line rate with zero packet loss (or another allowed non-zero |
+|              | partial drop rate).                                          |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
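+
+For reference, the template above can be launched from inside the Yardstick
+container as follows (the location of the test case file is an assumption and
+may differ between releases)::
+
+   yardstick -d task start --task-args='{vports: 8}' \
+   samples/vnf_samples/nsut/bng/tc_bng_pppoe_rfc2544_ixia_IMIX_scale_up.yaml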
diff --git a/docs/testing/user/userguide/nsb/tc_bng_pppoe_rfc2544_ixia_8ports_1port_congested.rst b/docs/testing/user/userguide/nsb/tc_bng_pppoe_rfc2544_ixia_8ports_1port_congested.rst
new file mode 100644
index 000000000..889ba2410
--- /dev/null
+++ b/docs/testing/user/userguide/nsb/tc_bng_pppoe_rfc2544_ixia_8ports_1port_congested.rst
@@ -0,0 +1,179 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, 2019 Intel Corporation.
+
+***************************************************************
+Yardstick Test Case Description: NSB vBNG RFC2544 QoS TEST CASE
+***************************************************************
+
++-----------------------------------------------------------------------------+
+|NSB vBNG RFC2544 QoS base line test case with link congestion                |
+|                                                                             |
++--------------+--------------------------------------------------------------+
+|test case id  | tc_bng_pppoe_rfc2544_ixia_8ports_1port_congested_IMIX        |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+| metric       | Network metrics:                                             |
+|              | * TxThroughput                                               |
+|              | * RxThroughput                                               |
+|              | * TG packets in                                              |
+|              | * TG packets out                                             |
+|              | * Max Latency                                                |
+|              | * Min Latency                                                |
+|              | * Average Latency                                            |
+|              | * Packets drop percentage                                    |
+|              |                                                              |
+|              | PPPoE subscribers metrics:                                   |
+|              | * Sessions up                                                |
+|              | * Sessions down                                              |
+|              | * Sessions Not Started                                       |
+|              | * Sessions Total                                             |
+|              |                                                              |
+|              | NOTE: the same network metrics are collected:                |
+|              | * summary for all ports                                      |
+|              | * per port                                                   |
+|              | * per priority flows summary on all ports                    |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|test purpose  | This test measures the performance of a BNG network device  |
+|              | according to the RFC2544 testing methodology. The test case  |
+|              | creates PPPoE subscriber connections to the BNG, runs        |
+|              | prioritized traffic causing congestion of one access port    |
+|              | (port xe0) and collects network and PPPoE subscriber         |
+|              | metrics.                                                     |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|configuration | The BNG QoS RFC2544 test cases are listed below:             |
+|              |                                                              |
+|              | * tc_bng_pppoe_rfc2544_ixia_8ports_1port_congested_IMIX.yaml |
+|              |                                                              |
+|              | Number of ports:                                             |
+|              | * 8 ports                                                    |
+|              |                                                              |
+|              | Test duration:                                               |
+|              | * set to 30 sec;                                             |
+|              |                                                              |
+|              | Traffic type:                                                |
+|              | * IPv4;                                                      |
+|              |                                                              |
+|              | Packet sizes:                                                |
+|              | * IMIX. The following default IMIX distribution is used:     |
+|              |                                                              |
+|              |   uplink: 70B - 33%, 940B - 33%, 1470B - 34%                 |
+|              |   downlink: 68B - 3%, 932B - 1%, 1470B - 96%                 |
+|              |                                                              |
+|              | VLAN settings:                                               |
+|              | * QinQ on access ports;                                      |
+|              | * VLAN on core ports;                                        |
+|              |                                                              |
+|              | Number of PPPoE subscribers:                                 |
+|              | * 4000 per access port;                                      |
+|              | * 1000 per SVLAN;                                            |
+|              |                                                              |
+|              | Default ToS bits settings:                                   |
+|              | * 0 - (000) Routine                                          |
+|              | * 4 - (100) Flash Override                                   |
+|              | * 7 - (111) Network Control.                                 |
+|              |                                                              |
+|              | The above fields are the main options used for the test case |
+|              | and can be configured using cli options at test run or       |
+|              | directly in the test case yaml file.                         |
+|              |                                                              |
+|              | NOTE: the only parameter that cannot be changed is the       |
+|              | number of ports. To run the test with another number of      |
+|              | ports, the traffic profile has to be updated.                |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|test tool     | IXIA IxNetwork                                               |
+|              |                                                              |
+|              | IXIA IxNetwork is used to emulate PPPoE sessions, generate   |
+|              | L2-L3 traffic, analyze traffic flows and collect network     |
+|              | metrics during the test run.                                 |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|applicability | The BNG QoS RFC2544 test cases can be configured with        |
+|              | different:                                                   |
+|              |                                                              |
+|              | * Number of PPPoE subscribers sessions;                      |
+|              | * IP Priority type;                                          |
+|              | * Packet size;                                               |
+|              | * Enable/disable BGP protocol on core ports;                 |
+|              |                                                              |
+|              | Default values exist.                                        |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|references    | RFC2544                                                      |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+| pre-test     | 1. BNG is up and running and has configured:                 |
+| conditions   |    * access ports with QinQ tagging;                         |
+|              |    * core ports with configured IP addresses and VLAN;       |
+|              |    * PPPoE subscribers authorization settings (no auth or    |
+|              |      Radius server, PAP auth protocol);                      |
+|              |    * QoS settings;                                           |
+|              |                                                              |
+|              | 2. IxNetwork API server is running on the TCL port specified |
+|              |    in the pod.yaml file;                                     |
+|              |                                                              |
+|              | 3. BNG ports are connected to IXIA ports (IXIA uplink        |
+|              |    ports are connected to BNG access ports and IXIA          |
+|              |    downlink ports are connected to BNG core ports);          |
+|              |                                                              |
+|              | 4. The pod.yaml file contains all necessary information      |
+|              |    (BNG access and core ports settings, core ports IP        |
+|              |    address, NICs, IxNetwork TCL port, IXIA uplink/downlink   |
+|              |    ports, etc).                                              |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|test sequence | description and expected result                              |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|step 1        | Yardstick resolves the topology and connects to the          |
+|              | IxNetwork API server over TCL.                               |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|step 2        | The test scenario runs, performing the following steps:      |
+|              |                                                              |
+|              | 1. Create access network topologies (these topologies are    |
+|              |    based on IXIA ports which are connected to BNG access     |
+|              |    ports);                                                   |
+|              | 2. Configure access network topologies with multiple device  |
+|              |    groups. Each device group represents a single SVLAN with  |
+|              |    PPPoE subscriber sessions (the number of SVLANs and       |
+|              |    subscribers created on a port depends on the options      |
+|              |    specified in the test case file);                         |
+|              | 3. Create core network topologies (these topologies are      |
+|              |    based on IXIA ports which are connected to BNG core       |
+|              |    ports);                                                   |
+|              | 4. Configure core network topologies with a single device    |
+|              |    group which represents one connection with configured     |
+|              |    VLAN and BGP protocol;                                    |
+|              | 5. Establish PPPoE subscriber connections to BNG;            |
+|              | 6. Create traffic flows between access and core ports.       |
+|              |    Since the test covers the access port congestion case,    |
+|              |    flows between ports are created in the following way:     |
+|              |    traffic from two core ports goes to one access port,      |
+|              |    causing port congestion, and traffic from the other two   |
+|              |    core ports is split between the remaining three access    |
+|              |    ports;                                                    |
+|              | 7. Configure each traffic flow with the options specified    |
+|              |    in the traffic profile;                                   |
+|              | 8. Run traffic for the duration specified in the test case   |
+|              |    file;                                                     |
+|              | 9. Collect network metrics after traffic is stopped;         |
+|              | 10. Measure the drop percentage rate of different priority   |
+|              |     packets on the congested port. It is expected that all   |
+|              |     high and medium priority packets are forwarded and only  |
+|              |     low priority packets have drops.                         |
+|              | 11. Disconnect PPPoE subscribers and stop test.              |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|step 3        | During the test run, at the end of each iteration all        |
+|              | metrics specified in this document are retrieved from        |
+|              | IxNetwork and stored in the yardstick dispatcher.            |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|test verdict  | The test case is successful if all high and medium priority  |
+|              | packets on the congested port were forwarded and only low    |
+|              | priority packets had drops.                                  |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
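+
+After the run, the collected metrics can be inspected in InfluxDB, assuming
+the influxdb dispatcher is configured as described in the installation guide
+(the query below is a suggestion, not part of the test case)::
+
+   influx -database yardstick -execute 'SHOW MEASUREMENTS'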