From f06f083e4e8dcda48f794d256d39c32eab5ef21e Mon Sep 17 00:00:00 2001 From: JingLu5 Date: Wed, 15 Mar 2017 09:56:09 +0000 Subject: Update yardstick framework architecture in userguide JIRA: YARDSTICK-590 This patch update the yardstick framework architecture in the userguide, also fix some rst grammar mistakes Change-Id: I84e7c24b4cd936a01f4c191e9f530f15f9f711de Signed-off-by: JingLu5 (cherry picked from commit 7150e6bc49098937edcac0fa9fa108329c74af4a) --- docs/testing/user/userguide/01-introduction.rst | 29 +- docs/testing/user/userguide/03-architecture.rst | 6 +- docs/testing/user/userguide/04-installation.rst | 500 +++++++++++++++++++++ docs/testing/user/userguide/04-vtc-overview.rst | 122 ----- .../user/userguide/05-apexlake_installation.rst | 300 ------------- .../testing/user/userguide/05-yardstick_plugin.rst | 145 ++++++ docs/testing/user/userguide/06-apexlake_api.rst | 89 ---- .../user/userguide/06-result-store-InfluxDB.rst | 86 ++++ docs/testing/user/userguide/07-grafana.rst | 119 +++++ docs/testing/user/userguide/07-nsb-overview.rst | 177 -------- .../testing/user/userguide/08-nsb_installation.rst | 253 ----------- docs/testing/user/userguide/08-vtc-overview.rst | 125 ++++++ .../user/userguide/09-apexlake_installation.rst | 302 +++++++++++++ docs/testing/user/userguide/09-installation.rst | 401 ----------------- docs/testing/user/userguide/10-apexlake_api.rst | 89 ++++ .../testing/user/userguide/10-yardstick_plugin.rst | 144 ------ docs/testing/user/userguide/11-nsb-overview.rst | 213 +++++++++ .../user/userguide/11-result-store-InfluxDB.rst | 86 ---- docs/testing/user/userguide/12-grafana.rst | 119 ----- .../testing/user/userguide/12-nsb_installation.rst | 268 +++++++++++ .../Yardstick_framework_architecture_in_D.png | Bin 0 -> 74121 bytes docs/testing/user/userguide/index.rst | 18 +- 22 files changed, 1875 insertions(+), 1716 deletions(-) create mode 100644 docs/testing/user/userguide/04-installation.rst delete mode 100644 docs/testing/user/userguide/04-vtc-overview.rst delete mode 100644 docs/testing/user/userguide/05-apexlake_installation.rst create mode 100644 docs/testing/user/userguide/05-yardstick_plugin.rst delete mode 100644 docs/testing/user/userguide/06-apexlake_api.rst create mode 100644 docs/testing/user/userguide/06-result-store-InfluxDB.rst create mode 100644 docs/testing/user/userguide/07-grafana.rst delete mode 100644 docs/testing/user/userguide/07-nsb-overview.rst delete mode 100644 docs/testing/user/userguide/08-nsb_installation.rst create mode 100644 docs/testing/user/userguide/08-vtc-overview.rst create mode 100644 docs/testing/user/userguide/09-apexlake_installation.rst delete mode 100644 docs/testing/user/userguide/09-installation.rst create mode 100644 docs/testing/user/userguide/10-apexlake_api.rst delete mode 100644 docs/testing/user/userguide/10-yardstick_plugin.rst create mode 100644 docs/testing/user/userguide/11-nsb-overview.rst delete mode 100644 docs/testing/user/userguide/11-result-store-InfluxDB.rst delete mode 100644 docs/testing/user/userguide/12-grafana.rst create mode 100644 docs/testing/user/userguide/12-nsb_installation.rst create mode 100644 docs/testing/user/userguide/images/Yardstick_framework_architecture_in_D.png diff --git a/docs/testing/user/userguide/01-introduction.rst b/docs/testing/user/userguide/01-introduction.rst index 0e0eea002..2aa870c2a 100755 --- a/docs/testing/user/userguide/01-introduction.rst +++ b/docs/testing/user/userguide/01-introduction.rst @@ -37,35 +37,38 @@ About This Document This document consists of the 
following chapters: +* Chapter :doc:`01-introduction` provides a brief introduction to yardstick + project's goal and scope and gives the structure of this document. + * Chapter :doc:`02-methodology` describes the methodology implemented by the Yardstick Project for :term:`NFVI` verification. * Chapter :doc:`03-architecture` provides information on the software architecture of yardstick. -* Chapter :doc:`04-vtc-overview` provides information on the :term:`VTC`. +* Chapter :doc:`04-installation` provides instructions to install *Yardstick*. + +* Chapter :doc:`05-yardstick_plugin` provides information on how to integrate + other OPNFV testing projects into *Yardstick*. + +* Chapter :doc:`06-result-store-InfluxDB` provides inforamtion on how to run + plug-in test cases and store test results into community's InfluxDB. + +* Chapter :doc:`07-vtc-overview` provides information on the :term:`VTC`. -* Chapter :doc:`05-apexlake_installation` provides instructions to install the +* Chapter :doc:`08-apexlake_installation` provides instructions to install the experimental framework *ApexLake* -* Chapter :doc:`06-apexlake_api` explains how this framework is integrated in +* Chapter :doc:`09-apexlake_api` explains how this framework is integrated in *Yardstick*. -* Chapter :doc:`07-nsb-overview` describes the methodology implemented by the +* Chapter :doc:`10-nsb-overview` describes the methodology implemented by the yardstick - Network service benchmarking to test real world usecase for a given VNF -* Chapter :doc:`08-nsb_installation` provides instructions to install +* Chapter :doc:`11-nsb_installation` provides instructions to install *Yardstick - Network service benchmarking testing*. -* Chapter :doc:`09-installation` provides instructions to install *Yardstick*. - -* Chapter :doc:`10-yardstick_plugin` provides information on how to integrate - other OPNFV testing projects into *Yardstick*. - -* Chapter :doc:`11-result-store-InfluxDB` provides inforamtion on how to run - plug-in test cases and store test results into community's InfluxDB. - * Chapter :doc:`12-list-of-tcs` includes a list of available Yardstick test cases. diff --git a/docs/testing/user/userguide/03-architecture.rst b/docs/testing/user/userguide/03-architecture.rst index 03bf00f58..95fe050e8 100755 --- a/docs/testing/user/userguide/03-architecture.rst +++ b/docs/testing/user/userguide/03-architecture.rst @@ -187,9 +187,9 @@ run test measurement scripts through the ssh tunnel. After all TestScenaio is finished, TaskCommands will undeploy the heat stack. Then the whole test is finished. -.. image:: images/Logical_view.png +.. image:: images/Yardstick_framework_architecture_in_D.png :width: 800px - :alt: Yardstick Logical View + :alt: Yardstick framework architecture in Danube Process View (Test execution flow) ================================== @@ -236,7 +236,7 @@ Yardstick Directory structure **yardstick/** - Yardstick main directory. -*ci/* - Used for continuous integration of Yardstick at different PODs and +*/tests/ci/* - Used for continuous integration of Yardstick at different PODs and with support for different installers. *docs/* - All documentation is stored here, such as configuration guides, diff --git a/docs/testing/user/userguide/04-installation.rst b/docs/testing/user/userguide/04-installation.rst new file mode 100644 index 000000000..c1db21220 --- /dev/null +++ b/docs/testing/user/userguide/04-installation.rst @@ -0,0 +1,500 @@ +.. This work is licensed under a Creative Commons Attribution 4.0 International +.. License. 
+.. http://creativecommons.org/licenses/by/4.0 +.. (c) OPNFV, Ericsson AB, Huawei Technologies Co.,Ltd and others. + +Yardstick Installation +====================== + + +Abstract +-------- + +Yardstick supports installation by Docker or directly in Ubuntu. The +installation procedure for Docker and direct installation are detailed in +the section below. + +To use Yardstick you should have access to an OpenStack environment, with at +least Nova, Neutron, Glance, Keystone and Heat installed. + +The steps needed to run Yardstick are: + +1. Install Yardstick. +2. Load OpenStack environment variables. +3. Create a Neutron external network. +4. Build Yardstick flavor and a guest image. +5. Load the guest image into the OpenStack environment. +6. Create the test configuration .yaml file. +7. Run the test case. + + +Prerequisites +------------- + +The OPNFV deployment is out of the scope of this document but it can be +found in http://artifacts.opnfv.org/opnfvdocs/colorado/docs/configguide/index.html. +The OPNFV platform is considered as the System Under Test (SUT) in this +document. + +Several prerequisites are needed for Yardstick: + + #. A Jumphost to run Yardstick on + #. A Docker daemon shall be installed on the Jumphost + #. A public/external network created on the SUT + #. Connectivity from the Jumphost to the SUT public/external network + +WARNING: Connectivity from Jumphost is essential and it is of paramount +importance to make sure it is working before even considering to install +and run Yardstick. Make also sure you understand how your networking is +designed to work. + +NOTE: **Jumphost** refers to any server which meets the previous +requirements. Normally it is the same server from where the OPNFV +deployment has been triggered previously. + +NOTE: If your Jumphost is operating behind a company http proxy and/or +Firewall, please consult first the section `Proxy Support`_, towards +the end of this document. The section details some tips/tricks which +*may* be of help in a proxified environment. + + +Installing Yardstick using Docker +--------------------------------- + +Yardstick has a Docker image, +**It is recommended to use this Docker image to run Yardstick test**. + +Pulling the Yardstick Docker image +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +.. _dockerhub: https://hub.docker.com/r/opnfv/yardstick/ + +Pull the Yardstick Docker image (**opnfv/yardstick**) from the public dockerhub +registry under the OPNFV account: [dockerhub_], with the following docker +command:: + + docker pull opnfv/yardstick:stable + +After pulling the Docker image, check that it is available with the +following docker command:: + + [yardsticker@jumphost ~]$ docker images + REPOSITORY TAG IMAGE ID CREATED SIZE + opnfv/yardstick stable a4501714757a 1 day ago 915.4 MB + +Run the Docker image to get a Yardstick container +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +:: + + docker run -itd --privileged -v /var/run/docker.sock:/var/run/docker.sock -p 8888:5000 -e INSTALLER_IP=192.168.200.2 -e INSTALLER_TYPE=compass --name yardstick opnfv/yardstick:stable + +note: + ++----------------------------------------------+------------------------------+ +| parameters | Detail | ++==============================================+==============================+ +| -itd | -i: interactive, Keep STDIN | +| | open even if not attached. | +| | -t: allocate a pseudo-TTY. | +| | -d: run container in | +| | detached mode, in the | +| | background. 
| ++----------------------------------------------+------------------------------+ +| --privileged | If you want to build | +| | yardstick-image in yardstick | +| | container, this parameter is | +| | needed. | ++----------------------------------------------+------------------------------+ +| -e INSTALLER_IP=192.168.200.2 | If you want to use yardstick | +| | env prepare command(or | +| -e INSTALLER_TYPE=compass | related API) to load the | +| | images that yardstick needs, | +| | these parameters should be | +| | provided. | +| | The INSTALLER_IP and | +| | INSTALLER_TYPE are depending | +| | on your OpenStack installer, | +| | currently apex, compass, | +| | fuel and joid are supported. | +| | If you use other installers, | +| | such as devstack, these | +| | parameters can be ignores. | ++----------------------------------------------+------------------------------+ +| -p 8888:5000 | If you want to call | +| | yardstick API out of | +| | yardstick container, this | +| | parameter is needed. | ++----------------------------------------------+------------------------------+ +| -v /var/run/docker.sock:/var/run/docker.sock | If you want to use yardstick | +| | env grafana/influxdb to | +| | create a grafana/influxdb | +| | container out of yardstick | +| | container, this parameter is | +| | needed. | ++----------------------------------------------+------------------------------+ +| --name yardstick | The name for this container, | +| | not needed and can be | +| | defined by the user. | ++----------------------------------------------+------------------------------+ + +Enter Yardstick container +^^^^^^^^^^^^^^^^^^^^^^^^^ + +:: + + docker exec -it yardstick /bin/bash + +In the container, the Yardstick repository is located in the /home/opnfv/repos +directory. + +In Danube release, we have improved the Yardstick installation steps. +Now Yardstick provides a CLI to prepare openstack environment variables and +load yardstick images:: + + yardstick env prepare + +If you ues this command. you can skip the following sections about how to +prepare openstack environment variables, load yardstick images and load +yardstick flavor manually. + + +Installing Yardstick directly in Ubuntu +--------------------------------------- + +.. _install-framework: + +Alternatively you can install Yardstick framework directly in Ubuntu or in an Ubuntu Docker +image. No matter which way you choose to install Yardstick framework, the +following installation steps are identical. + +If you choose to use the Ubuntu Docker image, You can pull the Ubuntu +Docker image from Docker hub: + +:: + + docker pull ubuntu:16.04 + + +Installing Yardstick framework +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +Download source code and install Yardstick framework: + +:: + + git clone https://gerrit.opnfv.org/gerrit/yardstick + cd yardstick + ./install.sh + +For installing yardstick directly in Ubuntu, the **yardstick env command** is not available. +You need to prepare openstack environment variables, load yardstick images and load +yardstick flavor manually. + + +OpenStack parameters and credentials +------------------------------------ + +Environment variables +^^^^^^^^^^^^^^^^^^^^^ +Before running Yardstick it is necessary to export OpenStack environment variables +from the OpenStack *openrc* file (using the ``source`` command) and export the +external network name ``export EXTERNAL_NETWORK="external-network-name"``, +the default name for the external network is ``net04_ext``. 
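+
+For example, assuming the *openrc* file has been copied to the current
+directory and the deployment uses the default external network name, the
+environment can be prepared with the following commands (adjust the network
+name if your deployment uses a different one):
+
+::
+
+    source openrc
+    export EXTERNAL_NETWORK="net04_ext"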
+ +Credential environment variables in the *openrc* file have to include at least: + +* OS_AUTH_URL +* OS_USERNAME +* OS_PASSWORD +* OS_TENANT_NAME + +A sample openrc file may look like this: + +* export OS_PASSWORD=console +* export OS_TENANT_NAME=admin +* export OS_AUTH_URL=http://172.16.1.222:35357/v2.0 +* export OS_USERNAME=admin +* export OS_VOLUME_API_VERSION=2 +* export EXTERNAL_NETWORK=net04_ext + + +Yardstick falvor and guest images +--------------------------------- + +Before executing Yardstick test cases, make sure that yardstick guest image and +yardstick flavor are available in OpenStack. +Detailed steps about creating yardstick flavor and building yardstick-trusty-server +image can be found below. + +Yardstick-flavor +^^^^^^^^^^^^^^^^ +Most of the sample test cases in Yardstick are using an OpenStack flavor called +*yardstick-flavor* which deviates from the OpenStack standard m1.tiny flavor by the +disk size - instead of 1GB it has 3GB. Other parameters are the same as in m1.tiny. + +Create yardstick-flavor: + +:: + + nova flavor-create yardstick-flavor 100 512 3 1 + + +.. _guest-image: + +Building a guest image +^^^^^^^^^^^^^^^^^^^^^^ +Most of the sample test cases in Yardstick are using a guest image called +*yardstick-trusty-server* which deviates from an Ubuntu Cloud Server image +containing all the required tools to run test cases supported by Yardstick. +Yardstick has a tool for building this custom image. It is necessary to have +sudo rights to use this tool. + +Also you may need install several additional packages to use this tool, by +follwing the commands below: + +:: + + apt-get update && apt-get install -y \ + qemu-utils \ + kpartx + +This image can be built using the following command while in the directory where +Yardstick is installed (``~/yardstick`` if the framework is installed +by following the commands above): + +:: + + sudo ./tools/yardstick-img-modify tools/ubuntu-server-cloudimg-modify.sh + +**Warning:** the script will create files by default in: +``/tmp/workspace/yardstick`` and the files will be owned by root! + +If you are building this guest image in inside a docker container make sure the +container is granted with privilege. + +The created image can be added to OpenStack using the ``glance image-create`` or +via the OpenStack Dashboard. + +Example command: + +:: + + glance --os-image-api-version 1 image-create \ + --name yardstick-image --is-public true \ + --disk-format qcow2 --container-format bare \ + --file /tmp/workspace/yardstick/yardstick-image.img + +Some Yardstick test cases use a Cirros image and a Ubuntu 14.04 image, you can find one at +http://download.cirros-cloud.net/0.3.3/cirros-0.3.3-x86_64-disk.img, https://cloud-images.ubuntu.com/trusty/current/trusty-server-cloudimg-amd64-disk1.img + +Add cirros and ubuntu image to OpenStack: + +:: + + openstack image create \ + --disk-format qcow2 \ + --container-format bare \ + --file $cirros_image_file \ + cirros-0.3.3 + + openstack image create \ + --disk-format qcow2 \ + --container-format bare \ + --file $ubuntu_image_file \ + Ubuntu-14.04 + +Automatic flavor and image creation +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +Yardstick has a script for automatic creating yardstick flavor and building +guest images. This script is mainly used in CI, but you can still use it in +your local environment. 
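+
+The example below uses ``YARDSTICK_REPO_DIR`` to locate the script; if it is
+not already set, export it first (the path shown assumes Yardstick was cloned
+to ``~/yardstick`` as in the installation steps above):
+
+::
+
+    export YARDSTICK_REPO_DIR=~/yardstick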
+ +Example command: + +:: + + source $YARDSTICK_REPO_DIR/tests/ci/load_images.sh + + +Examples and verifying the install +---------------------------------- + +It is recommended to verify that Yardstick was installed successfully +by executing some simple commands and test samples. Before executing yardstick +test cases make sure yardstick flavor and building yardstick-trusty-server +image can be found in glance and openrc file is sourced. Below is an example +invocation of yardstick help command and ping.py test sample: +:: + + yardstick -h + yardstick task start samples/ping.yaml + +Each testing tool supported by Yardstick has a sample configuration file. +These configuration files can be found in the **samples** directory. + +Default location for the output is ``/tmp/yardstick.out``. + + +Deploy InfluxDB and Grafana locally +------------------------------------ + +The 'yardstick env' command can also help you to build influxDB and Grafana in +your local environment. + +Create InfluxDB container and config with the following command:: + + yardstick env influxdb + + +Create Grafana container and config:: + + yardstick env grafana + +Then you can run a test case and visit http://host_ip:3000(user:admin,passwd:admin) to see the results. + +note: Using **yardstick env** command to deploy InfluxDB and Grafana requires +Jump Server's docker API version => 1.24. You can use the following command to +check the docker API version: + +:: + + docker version + +The following sections describe how to deploy influxDB and Grafana manually. + +.. pull docker images + +Pull docker images + +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +:: + + docker pull tutum/influxdb + docker pull grafana/grafana + +Run influxdb and config +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +Run influxdb +:: + + docker run -d --name influxdb \ + -p 8083:8083 -p 8086:8086 --expose 8090 --expose 8099 \ + tutum/influxdb + docker exec -it influxdb bash + +Config influxdb +:: + + influx + >CREATE USER root WITH PASSWORD 'root' WITH ALL PRIVILEGES + >CREATE DATABASE yardstick; + >use yardstick; + >show MEASUREMENTS; + +Run grafana and config +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +Run grafana +:: + + docker run -d --name grafana -p 3000:3000 grafana/grafana + +Config grafana +:: + + http://{YOUR_IP_HERE}:3000 + log on using admin/admin and config database resource to be {YOUR_IP_HERE}:8086 + +.. image:: images/Grafana_config.png + :width: 800px + :alt: Grafana data source configration + +Config yardstick conf +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +cp ./etc/yardstick/yardstick.conf.sample /etc/yardstick/yardstick.conf + +vi /etc/yardstick/yardstick.conf +Config yardstick.conf +:: + + [DEFAULT] + debug = True + dispatcher = influxdb + + [dispatcher_influxdb] + timeout = 5 + target = http://{YOUR_IP_HERE}:8086 + db_name = yardstick + username = root + password = root + +Now you can run yardstick test cases and store the results in influxdb +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + + +Create a test suite for yardstick +------------------------------------ + +A test suite in yardstick is a yaml file which include one or more test cases. +Yardstick is able to support running test suite task, so you can customize you +own test suite and run it in one task. + +"tests/opnfv/test_suites" is where yardstick put ci test-suite. 
A typical test +suite is like below: + +fuel_test_suite.yaml + +:: + + --- + # Fuel integration test task suite + + schema: "yardstick:suite:0.1" + + name: "fuel_test_suite" + test_cases_dir: "samples/" + test_cases: + - + file_name: ping.yaml + - + file_name: iperf3.yaml + +As you can see, there are two test cases in fuel_test_suite, the syntax is simple +here, you must specify the schema and the name, then you just need to list the +test cases in the tag "test_cases" and also mark their relative directory in the +tag "test_cases_dir". + +Yardstick test suite also support constraints and task args for each test case. +Here is another sample to show this, which is digested from one big test suite. + +os-nosdn-nofeature-ha.yaml + +:: + + --- + + schema: "yardstick:suite:0.1" + + name: "os-nosdn-nofeature-ha" + test_cases_dir: "tests/opnfv/test_cases/" + test_cases: + - + file_name: opnfv_yardstick_tc002.yaml + - + file_name: opnfv_yardstick_tc005.yaml + - + file_name: opnfv_yardstick_tc043.yaml + constraint: + installer: compass + pod: huawei-pod1 + task_args: + huawei-pod1: '{"pod_info": "etc/yardstick/.../pod.yaml", + "host": "node4.LF","target": "node5.LF"}' + +As you can see in test case "opnfv_yardstick_tc043.yaml", there are two tags, "constraint" and +"task_args". "constraint" is where you can specify which installer or pod it can be run in +the ci environment. "task_args" is where you can specify the task arguments for each pod. + +All in all, to create a test suite in yardstick, you just need to create a suite yaml file +and add test cases and constraint or task arguments if necessary. diff --git a/docs/testing/user/userguide/04-vtc-overview.rst b/docs/testing/user/userguide/04-vtc-overview.rst deleted file mode 100644 index 82b20cad5..000000000 --- a/docs/testing/user/userguide/04-vtc-overview.rst +++ /dev/null @@ -1,122 +0,0 @@ -.. This work is licensed under a Creative Commons Attribution 4.0 International -.. License. -.. http://creativecommons.org/licenses/by/4.0 -.. (c) OPNFV, National Center of Scientific Research "Demokritos" and others. - -========================== -Virtual Traffic Classifier -========================== - -Abstract -======== - -.. _TNOVA: http://www.t-nova.eu/ -.. _TNOVAresults: http://www.t-nova.eu/results/ -.. _Yardstick: https://wiki.opnfv.org/yardstick - -This chapter provides an overview of the virtual Traffic Classifier, a -contribution to OPNFV Yardstick_ from the EU Project TNOVA_. -Additional documentation is available in TNOVAresults_. - -Overview -======== - -The virtual Traffic Classifier (:term:`VTC`) :term:`VNF`, comprises of a -Virtual Network Function Component (:term:`VNFC`). The :term:`VNFC` contains -both the Traffic Inspection module, and the Traffic forwarding module, needed -to run the :term:`VNF`. The exploitation of Deep Packet Inspection -(:term:`DPI`) methods for traffic classification is built around two basic -assumptions: - -* third parties unaffiliated with either source or recipient are able to -inspect each IP packet’s payload - -* the classifier knows the relevant syntax of each application’s packet -payloads (protocol signatures, data patterns, etc.). - -The proposed :term:`DPI` based approach will only use an indicative, small -number of the initial packets from each flow in order to identify the content -and not inspect each packet. - -In this respect it follows the Packet Based per Flow State (term:`PBFS`). 
This -method uses a table to track each session based on the 5-tuples (src address, -dest address, src port,dest port, transport protocol) that is maintained for -each flow. - -Concepts -======== - -* *Traffic Inspection*: The process of packet analysis and application -identification of network traffic that passes through the :term:`VTC`. - -* *Traffic Forwarding*: The process of packet forwarding from an incoming -network interface to a pre-defined outgoing network interface. - -* *Traffic Rule Application*: The process of packet tagging, based on a -predefined set of rules. Packet tagging may include e.g. Type of Service -(:term:`ToS`) field modification. - -Architecture -============ - -The Traffic Inspection module is the most computationally intensive component -of the :term:`VNF`. It implements filtering and packet matching algorithms in -order to support the enhanced traffic forwarding capability of the :term:`VNF`. -The component supports a flow table (exploiting hashing algorithms for fast -indexing of flows) and an inspection engine for traffic classification. - -The implementation used for these experiments exploits the nDPI library. -The packet capturing mechanism is implemented using libpcap. When the -:term:`DPI` engine identifies a new flow, the flow register is updated with the -appropriate information and transmitted across the Traffic Forwarding module, -which then applies any required policy updates. - -The Traffic Forwarding moudle is responsible for routing and packet forwarding. -It accepts incoming network traffic, consults the flow table for classification -information for each incoming flow and then applies pre-defined policies -marking e.g. :term:`ToS`/Differentiated Services Code Point (:term:`DSCP`) -multimedia traffic for Quality of Service (:term:`QoS`) enablement on the -forwarded traffic. -It is assumed that the traffic is forwarded using the default policy until it -is identified and new policies are enforced. - -The expected response delay is considered to be negligible, as only a small -number of packets are required to identify each flow. - -Graphical Overview -================== - -.. code-block:: console - - +----------------------------+ - | | - | Virtual Traffic Classifier | - | | - | Analysing/Forwarding | - | ------------> | - | ethA ethB | - | | - +----------------------------+ - | ^ - | | - v | - +----------------------------+ - | | - | Virtual Switch | - | | - +----------------------------+ - -Install -======= - -run the build.sh with root privileges - -Run -=== - -sudo ./pfbridge -a eth1 -b eth2 - -Development Environment -======================= - -Ubuntu 14.04 diff --git a/docs/testing/user/userguide/05-apexlake_installation.rst b/docs/testing/user/userguide/05-apexlake_installation.rst deleted file mode 100644 index d4493e0f8..000000000 --- a/docs/testing/user/userguide/05-apexlake_installation.rst +++ /dev/null @@ -1,300 +0,0 @@ -.. This work is licensed under a Creative Commons Attribution 4.0 International -.. License. -.. http://creativecommons.org/licenses/by/4.0 -.. (c) OPNFV, Intel Corporation and others. - - -.. _DPDK: http://dpdk.org/doc/nics -.. _DPDK-pktgen: https://github.com/Pktgen/Pktgen-DPDK/ -.. _SRIOV: https://wiki.openstack.org/wiki/SR-IOV-Passthrough-For-Networking -.. _PORTSEC: https://wiki.openstack.org/wiki/Neutron/ML2PortSecurityExtensionDriver -.. 
_here: https://wiki.opnfv.org/vtc - - -============================ -Apexlake Installation Guide -============================ - -Abstract --------- - -ApexLake is a framework that provides automatic execution of experiments and -related data collection to enable a user validate infrastructure from the -perspective of a Virtual Network Function (:term:`VNF`). - -In the context of Yardstick, a virtual Traffic Classifier (:term:`VTC`) network -function is utilized. - - -Framework Hardware Dependencies -=============================== - -In order to run the framework there are some hardware related dependencies for -ApexLake. - -The framework needs to be installed on the same physical node where DPDK-pktgen_ -is installed. - -The installation requires the physical node hosting the packet generator must -have 2 NICs which are DPDK_ compatible. - -The 2 NICs will be connected to the switch where the OpenStack VM -network is managed. - -The switch used must support multicast traffic and :term:`IGMP` snooping. -Further details about the configuration are provided at the following here_. - -The corresponding ports to which the cables are connected need to be configured -as VLAN trunks using two of the VLAN IDs available for Neutron. -Note the VLAN IDs used as they will be required in later configuration steps. - - -Framework Software Dependencies -=============================== -Before starting the framework, a number of dependencies must first be installed. -The following describes the set of instructions to be executed via the Linux -shell in order to install and configure the required dependencies. - -1. Install Dependencies. - -To support the framework dependencies the following packages must be installed. -The example provided is based on Ubuntu and needs to be executed in root mode. - -:: - - apt-get install python-dev - apt-get install python-pip - apt-get install python-mock - apt-get install tcpreplay - apt-get install libpcap-dev - -2. Source OpenStack openrc file. - -:: - - source openrc - -3. Configure Openstack Neutron - -In order to support traffic generation and management by the virtual -Traffic Classifier, the configuration of the port security driver -extension is required for Neutron. - -For further details please follow the following link: PORTSEC_ -This step can be skipped in case the target OpenStack is Juno or Kilo release, -but it is required to support Liberty. -It is therefore required to indicate the release version in the configuration -file located in ./yardstick/vTC/apexlake/apexlake.conf - - -4. Create Two Networks based on VLANs in Neutron. - -To enable network communications between the packet generator and the compute -node, two networks must be created via Neutron and mapped to the VLAN IDs -that were previously used in the configuration of the physical switch. -The following shows the typical set of commands required to configure Neutron -correctly. -The physical switches need to be configured accordingly. 
- -:: - - VLAN_1=2032 - VLAN_2=2033 - PHYSNET=physnet2 - neutron net-create apexlake_inbound_network \ - --provider:network_type vlan \ - --provider:segmentation_id $VLAN_1 \ - --provider:physical_network $PHYSNET - - neutron subnet-create apexlake_inbound_network \ - 192.168.0.0/24 --name apexlake_inbound_subnet - - neutron net-create apexlake_outbound_network \ - --provider:network_type vlan \ - --provider:segmentation_id $VLAN_2 \ - --provider:physical_network $PHYSNET - - neutron subnet-create apexlake_outbound_network 192.168.1.0/24 \ - --name apexlake_outbound_subnet - - -5. Download Ubuntu Cloud Image and load it on Glance - -The virtual Traffic Classifier is supported on top of Ubuntu 14.04 cloud image. -The image can be downloaded on the local machine and loaded on Glance -using the following commands: - -:: - - wget cloud-images.ubuntu.com/trusty/current/trusty-server-cloudimg-amd64-disk1.img - glance image-create \ - --name ubuntu1404 \ - --is-public true \ - --disk-format qcow \ - --container-format bare \ - --file trusty-server-cloudimg-amd64-disk1.img - - - -6. Configure the Test Cases - -The VLAN tags must also be included in the test case Yardstick yaml file -as parameters for the following test cases: - - * :doc:`opnfv_yardstick_tc006` - - * :doc:`opnfv_yardstick_tc007` - - * :doc:`opnfv_yardstick_tc020` - - * :doc:`opnfv_yardstick_tc021` - - -Install and Configure DPDK Pktgen -+++++++++++++++++++++++++++++++++ - -Execution of the framework is based on DPDK Pktgen. -If DPDK Pktgen has not installed, it is necessary to download, install, compile -and configure it. -The user can create a directory and download the dpdk packet generator source -code: - -:: - - cd experimental_framework/libraries - mkdir dpdk_pktgen - git clone https://github.com/pktgen/Pktgen-DPDK.git - -For instructions on the installation and configuration of DPDK and DPDK Pktgen -please follow the official DPDK Pktgen README file. -Once the installation is completed, it is necessary to load the DPDK kernel -driver, as follow: - -:: - - insmod uio - insmod DPDK_DIR/x86_64-native-linuxapp-gcc/kmod/igb_uio.ko - -It is necessary to set the configuration file to support the desired Pktgen -configuration. -A description of the required configuration parameters and supporting examples -is provided in the following: - -:: - - [PacketGen] - packet_generator = dpdk_pktgen - - # This is the directory where the packet generator is installed - # (if the user previously installed dpdk-pktgen, - # it is required to provide the director where it is installed). 
- pktgen_directory = /home/user/software/dpdk_pktgen/dpdk/examples/pktgen/ - - # This is the directory where DPDK is installed - dpdk_directory = /home/user/apexlake/experimental_framework/libraries/Pktgen-DPDK/dpdk/ - - # Name of the dpdk-pktgen program that starts the packet generator - program_name = app/app/x86_64-native-linuxapp-gcc/pktgen - - # DPDK coremask (see DPDK-Pktgen readme) - coremask = 1f - - # DPDK memory channels (see DPDK-Pktgen readme) - memory_channels = 3 - - # Name of the interface of the pktgen to be used to send traffic (vlan_sender) - name_if_1 = p1p1 - - # Name of the interface of the pktgen to be used to receive traffic (vlan_receiver) - name_if_2 = p1p2 - - # PCI bus address correspondent to if_1 - bus_slot_nic_1 = 01:00.0 - - # PCI bus address correspondent to if_2 - bus_slot_nic_2 = 01:00.1 - - -To find the parameters related to names of the NICs and the addresses of the PCI buses -the user may find it useful to run the :term:`DPDK` tool nic_bind as follows: - -:: - - DPDK_DIR/tools/dpdk_nic_bind.py --status - -Lists the NICs available on the system, and shows the available drivers and bus addresses for each interface. -Please make sure to select NICs which are :term:`DPDK` compatible. - -Installation and Configuration of smcroute -++++++++++++++++++++++++++++++++++++++++++ - -The user is required to install smcroute which is used by the framework to -support multicast communications. - -The following is the list of commands required to download and install smroute. - -:: - - cd ~ - git clone https://github.com/troglobit/smcroute.git - cd smcroute - git reset --hard c3f5c56 - sed -i 's/aclocal-1.11/aclocal/g' ./autogen.sh - sed -i 's/automake-1.11/automake/g' ./autogen.sh - ./autogen.sh - ./configure - make - sudo make install - cd .. - -It is required to do the reset to the specified commit ID. -It is also requires the creation a configuration file using the following -command: - - SMCROUTE_NIC=(name of the nic) - -where name of the nic is the name used previously for the variable "name_if_2". -For example: - -:: - - SMCROUTE_NIC=p1p2 - -Then create the smcroute configuration file /etc/smcroute.conf - -:: - - echo mgroup from $SMCROUTE_NIC group 224.192.16.1 > /etc/smcroute.conf - - -At the end of this procedure it will be necessary to perform the following -actions to add the user to the sudoers: - -:: - - adduser USERNAME sudo - echo "user ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers - - -Experiment using SR-IOV Configuration on the Compute Node -+++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - -To enable :term:`SR-IOV` interfaces on the physical NIC of the compute node, a -compatible NIC is required. -NIC configuration depends on model and vendor. After proper configuration to -support :term:`SR-IOV`, a proper configuration of OpenStack is required. -For further information, please refer to the SRIOV_ configuration guide - -Finalize installation the framework on the system -================================================= - -The installation of the framework on the system requires the setup of the project. -After entering into the apexlake directory, it is sufficient to run the following -command. - -:: - - python setup.py install - -Since some elements are copied into the /tmp directory (see configuration file) -it could be necessary to repeat this step after a reboot of the host. 
diff --git a/docs/testing/user/userguide/05-yardstick_plugin.rst b/docs/testing/user/userguide/05-yardstick_plugin.rst new file mode 100644 index 000000000..b724b361b --- /dev/null +++ b/docs/testing/user/userguide/05-yardstick_plugin.rst @@ -0,0 +1,145 @@ +.. This work is licensed under a Creative Commons Attribution 4.0 International +.. License. +.. http://creativecommons.org/licenses/by/4.0 +.. (c) OPNFV, Ericsson AB, Huawei Technologies Co.,Ltd and others. + +=================================== +Installing a plug-in into yardstick +=================================== + +Abstract +======== + +Yardstick currently provides a ``plugin`` CLI command to support integration +with other OPNFV testing projects. Below is an example invocation of yardstick +plugin command and Storperf plug-in sample. + + +Installing Storperf into yardstick +================================== + +Storperf is delivered as a Docker container from +https://hub.docker.com/r/opnfv/storperf/tags/. + +There are two possible methods for installation in your environment: + +* Run container on Jump Host +* Run container in a VM + +In this introduction we will install Storperf on Jump Host. + + +Step 0: Environment preparation +>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> + +Running Storperf on Jump Host +Requirements: + +* Docker must be installed +* Jump Host must have access to the OpenStack Controller API +* Jump Host must have internet connectivity for downloading docker image +* Enough floating IPs must be available to match your agent count + +Before installing Storperf into yardstick you need to check your openstack +environment and other dependencies: + +1. Make sure docker is installed. +2. Make sure Keystone, Nova, Neutron, Glance, Heat are installed correctly. +3. Make sure Jump Host have access to the OpenStack Controller API. +4. Make sure Jump Host must have internet connectivity for downloading docker image. +5. You need to know where to get basic openstack Keystone authorization info, such as + OS_PASSWORD, OS_TENANT_NAME, OS_AUTH_URL, OS_USERNAME. +6. To run a Storperf container, you need to have OpenStack Controller environment + variables defined and passed to Storperf container. The best way to do this is to + put environment variables in a "storperf_admin-rc" file. The storperf_admin-rc + should include credential environment variables at least: + +* OS_AUTH_URL +* OS_TENANT_ID +* OS_TENANT_NAME +* OS_PROJECT_NAME +* OS_USERNAME +* OS_PASSWORD +* OS_REGION_NAME + +For this storperf_admin-rc file, during environment preparation a "prepare_storperf_admin-rc.sh" +script can be used to generate it. +:: + + #!/bin/bash + AUTH_URL=${OS_AUTH_URL} + USERNAME=${OS_USERNAME:-admin} + PASSWORD=${OS_PASSWORD:-console} + TENANT_NAME=${OS_TENANT_NAME:-admin} + VOLUME_API_VERSION=${OS_VOLUME_API_VERSION:-2} + PROJECT_NAME=${OS_PROJECT_NAME:-$TENANT_NAME} + TENANT_ID=`keystone tenant-get admin|grep 'id'|awk -F '|' '{print $3}'|sed -e 's/^[[:space:]]*//'` + rm -f ~/storperf_admin-rc + touch ~/storperf_admin-rc + echo "OS_AUTH_URL="$AUTH_URL >> ~/storperf_admin-rc + echo "OS_USERNAME="$USERNAME >> ~/storperf_admin-rc + echo "OS_PASSWORD="$PASSWORD >> ~/storperf_admin-rc + echo "OS_TENANT_NAME="$TENANT_NAME >> ~/storperf_admin-rc + echo "OS_VOLUME_API_VERSION="$VOLUME_API_VERSION >> ~/storperf_admin-rc + echo "OS_PROJECT_NAME="$PROJECT_NAME >> ~/storperf_admin-rc + echo "OS_TENANT_ID="$TENANT_ID >> ~/storperf_admin-rc + +The generated "storperf_admin-rc" file will be stored under the root directory. 
If you installed Yardstick using Docker, this file will be located in the container. You may need to copy it to the root directory of the deployed host. + +Step 1: Plug-in configuration file preparation +>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> + +To install a plug-in, first you need to prepare a plug-in configuration file in +YAML format and store it in the "plugin" directory. The plugin configration file +work as the input of yardstick "plugin" command. Below is the Storperf plug-in +configuration file sample: +:: + + --- + # StorPerf plugin configuration file + # Used for integration StorPerf into Yardstick as a plugin + schema: "yardstick:plugin:0.1" + plugins: + name: storperf + deployment: + ip: 192.168.23.2 + user: root + password: root + +In the plug-in configuration file, you need to specify the plug-in name and the +plug-in deployment info, including node ip, node login username and password. +Here the Storperf will be installed on IP 192.168.23.2 which is the Jump Host +in my local environment. + +Step 2: Plug-in install/remove scripts preparation +>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> + +Under "yardstick/resource/scripts directory", there are two folders: a "install" +folder and a "remove" folder. You need to store the plug-in install/remove script +in these two folders respectively. + +The detailed installation or remove operation should de defined in these two scripts. +The name of both install and remove scripts should match the plugin-in name that you +specified in the plug-in configuration file. +For example, the install and remove scripts for Storperf are both named to "storperf.bash". + + +Step 3: Install and remove Storperf +>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> + +To install Storperf, simply execute the following command +:: + + # Install Storperf + yardstick plugin install plugin/storperf.yaml + +removing Storperf from yardstick +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +To remove Storperf, simply execute the following command +:: + + # Remove Storperf + yardstick plugin remove plugin/storperf.yaml + +What yardstick plugin command does is using the username and password to log into the deployment target and then execute the corresponding install or remove script. diff --git a/docs/testing/user/userguide/06-apexlake_api.rst b/docs/testing/user/userguide/06-apexlake_api.rst deleted file mode 100644 index 35a1dbe3e..000000000 --- a/docs/testing/user/userguide/06-apexlake_api.rst +++ /dev/null @@ -1,89 +0,0 @@ -.. This work is licensed under a Creative Commons Attribution 4.0 International -.. License. -.. http://creativecommons.org/licenses/by/4.0 -.. (c) OPNFV, Intel Corporation and others. - - -================================= -Apexlake API Interface Definition -================================= - -Abstract --------- - -The API interface provided by the framework to enable the execution of test -cases is defined as follows. 
- - -init ----- - -**static init()** - - Initializes the Framework - - **Returns** None - - -execute_framework ------------------ - -**static execute_framework** (test_cases, - - iterations, - - heat_template, - - heat_template_parameters, - - deployment_configuration, - - openstack_credentials) - - Executes the framework according the specified inputs - - **Parameters** - - - **test_cases** - - Test cases to be run with the workload (dict() of dict()) - - Example: - test_case = dict() - - test_case[’name’] = ‘module.Class’ - - test_case[’params’] = dict() - - test_case[’params’][’throughput’] = ‘1’ - - test_case[’params’][’vlan_sender’] = ‘1000’ - - test_case[’params’][’vlan_receiver’] = ‘1001’ - - test_cases = [test_case] - - - **iterations** - Number of test cycles to be executed (int) - - - **heat_template** - (string) File name of the heat template corresponding to the workload to be deployed. - It contains the parameters to be evaluated in the form of #parameter_name. - (See heat_templates/vTC.yaml as example). - - - **heat_template_parameters** - (dict) Parameters to be provided as input to the - heat template. See http://docs.openstack.org/developer/heat/ template_guide/hot_guide.html - section “Template input parameters” for further info. - - - **deployment_configuration** - ( dict[string] = list(strings) ) ) Dictionary of parameters - representing the deployment configuration of the workload. - - The key is a string corresponding to the name of the parameter, - the value is a list of strings representing the value to be - assumed by a specific param. The parameters are user defined: - they have to correspond to the place holders (#parameter_name) - specified in the heat template. - - **Returns** dict() containing results diff --git a/docs/testing/user/userguide/06-result-store-InfluxDB.rst b/docs/testing/user/userguide/06-result-store-InfluxDB.rst new file mode 100644 index 000000000..a0bb48a80 --- /dev/null +++ b/docs/testing/user/userguide/06-result-store-InfluxDB.rst @@ -0,0 +1,86 @@ +.. This work is licensed under a Creative Commons Attribution 4.0 International +.. License. +.. http://creativecommons.org/licenses/by/4.0 +.. (c) OPNFV, 2016 Huawei Technologies Co.,Ltd and others. + +============================================== +Store Other Project's Test Results in InfluxDB +============================================== + +Abstract +======== + +.. _Framework: https://wiki.opnfv.org/download/attachments/6827660/wiki.png?version=1&modificationDate=1470298075000&api=v2 + +This chapter illustrates how to run plug-in test cases and store test results +into community's InfluxDB. The framework is shown in Framework_. + + +.. image:: images/InfluxDB_store.png + :width: 800px + :alt: Store Other Project's Test Results in InfluxDB + +Store Storperf Test Results into Community's InfluxDB +===================================================== + +.. _Influxdb: https://git.opnfv.org/cgit/yardstick/tree/yardstick/dispatcher/influxdb.py +.. _Mingjiang: limingjiang@huawei.com +.. _Visual: https://wiki.opnfv.org/download/attachments/6827660/tc074.PNG?version=1&modificationDate=1470298075000&api=v2 +.. _Login: http://testresults.opnfv.org/grafana/login + +As shown in Framework_, there are two ways to store Storperf test results +into community's InfluxDB: + +1. Yardstick asks Storperf to run the test case. After the test case is + completed, Yardstick reads test results via ReST API from Storperf and + posts test data to the influxDB. + +2. 
Additionally, Storperf can run tests by itself and post the test result + directly to the InfluxDB. The method for posting data directly to influxDB + will be supported in the future. + +Our plan is to support rest-api in D release so that other testing projects can +call the rest-api to use yardstick dispatcher service to push data to yardstick's +influxdb database. + +For now, influxdb only support line protocol, and the json protocol is deprecated. + +Take ping test case for example, the raw_result is json format like this: +:: + + "benchmark": { + "timestamp": 1470315409.868095, + "errors": "", + "data": { + "rtt": { + "ares": 1.125 + } + }, + "sequence": 1 + }, + "runner_id": 2625 + } + +With the help of "influxdb_line_protocol", the json is transform to like below as a line string: +:: + + 'ping,deploy_scenario=unknown,host=athena.demo,installer=unknown,pod_name=unknown, + runner_id=2625,scenarios=Ping,target=ares.demo,task_id=77755f38-1f6a-4667-a7f3- + 301c99963656,version=unknown rtt.ares=1.125 1470315409868094976' + +So, for data output of json format, you just need to transform json into line format and call +influxdb api to post the data into the database. All this function has been implemented in Influxdb_. +If you need support on this, please contact Mingjiang_. +:: + + curl -i -XPOST 'http://104.197.68.199:8086/write?db=yardstick' -- + data-binary 'ping,deploy_scenario=unknown,host=athena.demo,installer=unknown, ...' + +Grafana will be used for visualizing the collected test data, which is shown in Visual_. Grafana +can be accessed by Login_. + + +.. image:: images/results_visualization.png + :width: 800px + :alt: results visualization + diff --git a/docs/testing/user/userguide/07-grafana.rst b/docs/testing/user/userguide/07-grafana.rst new file mode 100644 index 000000000..416857b71 --- /dev/null +++ b/docs/testing/user/userguide/07-grafana.rst @@ -0,0 +1,119 @@ +.. This work is licensed under a Creative Commons Attribution 4.0 International +.. License. +.. http://creativecommons.org/licenses/by/4.0 +.. (c) 2016 Huawei Technologies Co.,Ltd and others + +================= +Grafana dashboard +================= + + +Abstract +======== + +This chapter describes the Yardstick grafana dashboard. The Yardstick grafana +dashboard can be found here: http://testresults.opnfv.org/grafana/ + + +.. image:: images/login.png + :width: 800px + :alt: Yardstick grafana dashboard + + +Public access +============= + +Yardstick provids a public account for accessing to the dashboard. The username +and password are both set to ‘opnfv’. + + +Testcase dashboard +================== + +For each test case, there is a dedicated dashboard. Shown here is the dashboard +of TC002. + + +.. image:: images/TC002.png + :width: 800px + :alt:TC002 dashboard + +For each test case dashboard. On the top left, we have a dashboard selection, +you can switch to different test cases using this pull-down menu. + +Underneath, we have a pod and scenario selection. +All the pods and scenarios that have ever published test data to the InfluxDB +will be shown here. + +You can check multiple pods or scenarios. + +For each test case, we have a short description and a link to detailed test +case information in Yardstick user guide. + +Underneath, it is the result presentation section. +You can use the time period selection on the top right corner to zoom in or +zoom out the chart. 
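+
+Each chart in this section is backed by an InfluxDB query over the tags
+described in the previous chapter (``pod_name``, ``deploy_scenario`` and so
+on). If you run a local InfluxDB and Grafana as described in the installation
+chapter, the data behind a panel can be inspected directly. The commands below
+are only a sketch; the ``ping`` measurement and field names match the line
+protocol example from the previous chapter:
+
+::
+
+    docker exec -it influxdb bash
+    influx
+    > use yardstick
+    > show MEASUREMENTS
+    > select * from ping limit 5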
+ + +Administration access +===================== + +For a user with administration rights it is easy to update and save any +dashboard configuration. Saved updates immediately take effect and become live. +This may cause issues like: + +- Changes and updates made to the live configuration in Grafana can compromise + existing Grafana content in an unwanted, unpredicted or incompatible way. + Grafana as such is not version controlled, there exists one single Grafana + configuration per dashboard. +- There is a risk several people can disturb each other when doing updates to + the same Grafana dashboard at the same time. + +Any change made by administrator should be careful. + + +Add a dashboard into yardstick grafana +====================================== + +Due to security concern, users that using the public opnfv account are not able +to edit the yardstick grafana directly.It takes a few more steps for a +non-yardstick user to add a custom dashboard into yardstick grafana. + +There are 6 steps to go. + + +.. image:: images/add.png + :width: 800px + :alt: Add a dashboard into yardstick grafana + + +1. You need to build a local influxdb and grafana, so you can do the work + locally. You can refer to How to deploy InfluxDB and Grafana locally wiki + page about how to do this. + +2. Once step one is done, you can fetch the existing grafana dashboard + configuration file from the yardstick repository and import it to your local + grafana. After import is done, you grafana dashboard will be ready to use + just like the community’s dashboard. + +3. The third step is running some test cases to generate test results and + publishing it to your local influxdb. + +4. Now you have some data to visualize in your dashboard. In the fourth step, + it is time to create your own dashboard. You can either modify an existing + dashboard or try to create a new one from scratch. If you choose to modify + an existing dashboard then in the curtain menu of the existing dashboard do + a "Save As..." into a new dashboard copy instance, and then continue doing + all updates and saves within the dashboard copy. + +5. When finished with all Grafana configuration changes in this temporary + dashboard then chose "export" of the updated dashboard copy into a JSON file + and put it up for review in Gerrit, in file /yardstick/dashboard/Yardstick-TCxxx-yyyyyyyyyyyyy. + For instance a typical default name of the file would be "Yardstick-TC001 Copy-1234567891234". + +6. Once you finish your dashboard, the next step is exporting the configuration + file and propose a patch into Yardstick. Yardstick team will review and + merge it into Yardstick repository. After approved review Yardstick team + will do an "import" of the JSON file and also a "save dashboard" as soon as + possible to replace the old live dashboard configuration. + diff --git a/docs/testing/user/userguide/07-nsb-overview.rst b/docs/testing/user/userguide/07-nsb-overview.rst deleted file mode 100644 index 19719f1a7..000000000 --- a/docs/testing/user/userguide/07-nsb-overview.rst +++ /dev/null @@ -1,177 +0,0 @@ -.. This work is licensed under a Creative Commons Attribution 4.0 International -.. License. -.. http://creativecommons.org/licenses/by/4.0 -.. (c) OPNFV, 2016-2017 Intel Corporation. - -===================================== -Network Services Benchmarking (NSB) -===================================== - -Abstract -======== - -.. 
_Yardstick: https://wiki.opnfv.org/yardstick - -This chapter provides an overview of the NSB, a contribution to OPNFV -Yardstick_ from Intel. - -Overview -======== - -GOAL: Extend Yardstick to perform real world VNFs and NFVi Characterization and -benchmarking with repeatable and deterministic methods. - -The Network Service Benchmarking (NSB) extends the yardstick framework to do -VNF characterization and benchmarking in three different execution -environments viz., bare metal i.e. native Linux environment, standalone virtual -environment and managed virtualized environment (e.g. Open stack etc.). -It also brings in the capability to interact with external traffic generators -both hardware & software based for triggering and validating the traffic -according to user defined profiles. - -NSB extension includes: - • Generic data models of Network Services, based on ETSI specs - • New Standalone context for VNF testing like SRIOV, OVS, OVS-DPDK etc - • Generic VNF configuration models and metrics implemented with Python - classes - • Traffic generator features and traffic profiles - • L1-L3 state-less traffic profiles - • L4-L7 state-full traffic profiles - • Tunneling protocol / network overlay support - • Test case samples - • Ping - • Trex - • vPE,vCGNAT, vFirewall etc - ipv4 throughput, latency etc - • Traffic generators like Trex, ab/nginx, ixia, iperf etc - • KPIs for a given use case: - • System agent support for collecting NFvi KPI. This includes: - o CPU statistic - o Memory BW - o OVS-DPDK Stats - • Network KPIs – eg, inpackets, outpackets, thoughput, latency etc - • VNF KPIs – packet_in, packet_drop, packet_fwd etc - -Architecture -============ -The Network Service (NS) defines a set of Virtual Network Functions (VNF) -connected together using NFV infrastructure. - -The Yardstick NSB extension can support multiple VNFs created by different -vendors including traffic generators. Every VNF being tested has its -own data model. The Network service defines a VNF modelling on base of performed -network functionality. The part of the data model is a set of the configuration -parameters, number of connection points used and flavor including core and -memory amount. - -The ETSI defines a Network Service as a set of configurable VNFs working in -some NFV Infrastructure connecting each other using Virtual Links available -through Connection Points. The ETSI MANO specification defines a set of -management entities called Network Service Descriptors (NSD) and -VNF Descriptors (VNFD) that define real Network Service. The picture below -makes an example how the real Network Operator use-case can map into ETSI -Network service definition - -Network Service framework performs the necessary test steps. It may involve - o Interacting with traffic generator and providing the inputs on traffic - type / packet structure to generate the required traffic as per the - test case. Traffic profiles will be used for this. - o Executing the commands required for the test procedure and analyses the - command output for confirming whether the command got executed correctly - or not. E.g. As per the test case, run the traffic for the given - time period / wait for the necessary time delay - o Verify the test result. 
- o Validate the traffic flow from SUT - o Fetch the table / data from SUT and verify the value as per the test case - o Upload the logs from SUT onto the Test Harness server - o Read the KPI’s provided by particular VNF - -Components of Network Service ------------------------------- - -* *Models for Network Service benchmarking*: The Network Service benchmarking - requires the proper modelling approach. The NSB provides models using Python - files and defining of NSDs and VNFDs. - -The benchmark control application being a part of OPNFV yardstick can call -that python models to instantiate and configure the VNFs. Depending on -infrastructure type (bare-metal or fully virtualized) that calls could be -made directly or using MANO system. - -* *Traffic generators in NSB*: Any benchmark application requires a set of - traffic generator and traffic profiles defining the method in which traffic - is generated. - -The Network Service benchmarking model extends the Network Service -definition with a set of Traffic Generators (TG) that are treated -same way as other VNFs being a part of benchmarked network service. -Same as other VNFs the traffic generator are instantiated and terminated. - -Every traffic generator has own configuration defined as a traffic profile and -a set of KPIs supported. The python models for TG is extended by specific calls -to listen and generate traffic. - -* *The stateless TREX traffic generator*: The main traffic generator used as - Network Service stimulus is open source TREX tool. - -The TREX tool can generate any kind of stateless traffic. - -.. code-block:: console - - +--------+ +-------+ +--------+ - | | | | | | - | Trex | ---> | VNF | ---> | Trex | - | | | | | | - +--------+ +-------+ +--------+ - -Supported testcases scenarios: -• Correlated UDP traffic using TREX traffic generator and replay VNF. - o using different IMIX configuration like pure voice, pure video traffic etc - o using different number IP flows like 1 flow, 1K, 16K, 64K, 256K, 1M flows - o Using different number of rules configured like 1 rule, 1K, 10K rules - -For UDP correlated traffic following Key Performance Indicators are collected -for every combination of test case parameters: - • RFC2544 throughput for various loss rate defined (1% is a default) - -Graphical Overview -================== - -NSB Testing with yardstick framework facilitate performance testing of various -VNFs provided. - -.. code-block:: console - +-----------+ - | | +-----------+ - | vPE | ->|TGen Port 0| - | TestCase | | +-----------+ - | | | - +-----------+ +------------------+ +-------+ | - | | -- API --> | VNF | <---> - +-----------+ | Yardstick | +-------+ | - | Test Case | --> | NSB Testing | | - +-----------+ | | | - | | | | - | +------------------+ | - +-----------+ | +-----------+ - | Traffic | ->|TGen Port 1| - | patterns | +-----------+ - +-----------+ - Figure 1: Network Service - 2 server configuration - - -Install -======= - -run the nsb_install.sh with root privileges - -Run -=== - -source ~/.bash_profile -cd /yardstick/cmd -sudo -E ./NSBperf.py --vnf vpe --test tc_baremetal_rfc2544_ipv4_1flow_64B.yaml - -Development Environment -======================= - -Ubuntu 14.04, Ubuntu 16.04 diff --git a/docs/testing/user/userguide/08-nsb_installation.rst b/docs/testing/user/userguide/08-nsb_installation.rst deleted file mode 100644 index a390bb7d7..000000000 --- a/docs/testing/user/userguide/08-nsb_installation.rst +++ /dev/null @@ -1,253 +0,0 @@ -.. 
This work is licensed under a Creative Commons Attribution 4.0 International -.. License. -.. http://creativecommons.org/licenses/by/4.0 -.. (c) OPNFV, 2016-2017 Intel Corporation. - -Yardstick - NSB Testing -Installation -===================================== - -Abstract --------- - -Yardstick supports installation on Ubuntu 14.04 or via a Docker image. The -installation procedure on Ubuntu 14.04 or via the docker image are detailed in -the section below. - -The Network Service Benchmarking (NSB) extends the yardstick framework to do -VNF characterization and benchmarking in three different execution -environments viz., bare metal i.e. native Linux environment, standalone virtual -environment and managed virtualized environment (e.g. Open stack etc.). -It also brings in the capability to interact with external traffic generators -both hardware & software based for triggering and validating the traffic -according to user defined profiles. - -The steps needed to run Yardstick with NSB testing are: - -* Install Yardstick (NSB Testing). -* Setup pod.yaml describing Test topology -* Create the test configuration yaml file. -* Run the test case. - - -Prerequisites -------------- - -Refer chapter 08-instalaltion.rst for more information on yardstick -prerequisites - -Several prerequisites are needed for Yardstick(VNF testing): -* Python Modules: pyzmq, pika. -* flex -* bison -* build-essential -* automake -* libtool -* librabbitmq-dev -* rabbitmq-server -* collectd -* intel-cmt-cat - -Installing Yardstick on Ubuntu 14.04 ------------------------------------- - -.. _install-framework: - -You can install Yardstick framework directly on Ubuntu 14.04 or in an Ubuntu -14.04 Docker image. No matter which way you choose to install Yardstick -framework, the following installation steps are identical. - -If you choose to use the Ubuntu 14.04 Docker image, You can pull the Ubuntu -14.04 Docker image from Docker hub: - -:: - - docker pull ubuntu:14.04 - -Installing Yardstick framework -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ -Download source code and install python dependencies: - -:: - - git clone https://gerrit.opnfv.org/gerrit/yardstick - cd yardstick - ./nsb_setup.sh - -It will automatically download all the packages needed for NSB Testing setup. - -System Topology: ------------------ - -.. code-block:: console - - +----------+ +----------+ - | | | | - | | (0)----->(0) | Ping/ | - | TG1 | | vPE/ | - | | | 2Trex | - | | (1)<-----(1) | | - +----------+ +----------+ - trafficgen_1 vnf - - -OpenStack parameters and credentials ------------------------------------- - -Environment variables -^^^^^^^^^^^^^^^^^^^^^ -Before running Yardstick (NSB Testing) it is necessary to export traffic -generator libraries. - -:: - source ~/.bash_profile - -Config yardstick conf -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ -cp ./etc/yardstick/yardstick.conf.sample /etc/yardstick/yardstick.conf - -vi /etc/yardstick/yardstick.conf - -Config yardstick.conf -:: - - [DEFAULT] - debug = True - dispatcher = influxdb - - [dispatcher_influxdb] - timeout = 5 - target = http://{YOUR_IP_HERE}:8086 - db_name = yardstick - username = root - password = root - - [nsb] - trex_path=/opt/nsb_bin/trex/scripts - bin_path=/opt/nsb_bin - - -Config pod.yaml describing Topology -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ -Before executing Yardstick test cases, make sure that pod.yaml reflects the -topology and update all the required fields. 
- -copy /etc/yardstick/nodes/pod.yaml.nsb.example to /etc/yardstick/nodes/pod.yaml - -Config pod.yaml -:: - nodes: - - - name: trafficgen_1 - role: TrafficGen - ip: 1.1.1.1 - user: root - password: r00t - interfaces: - xe0: # logical name from topology.yaml and vnfd.yaml - vpci: "0000:07:00.0" - driver: i40e # default kernel driver - dpdk_port_num: 0 - local_ip: "152.16.100.20" - netmask: "255.255.255.0" - local_mac: "00:00:00:00:00:01" - xe1: # logical name from topology.yaml and vnfd.yaml - vpci: "0000:07:00.1" - driver: i40e # default kernel driver - dpdk_port_num: 1 - local_ip: "152.16.40.20" - netmask: "255.255.255.0" - local_mac: "00:00.00:00:00:02" - - - - name: vnf - role: vnf - ip: 1.1.1.2 - user: root - password: r00t - host: 1.1.1.2 #BM - host == ip, virtualized env - Host - compute node - interfaces: - xe0: # logical name from topology.yaml and vnfd.yaml - vpci: "0000:07:00.0" - driver: i40e # default kernel driver - dpdk_port_num: 0 - local_ip: "152.16.100.19" - netmask: "255.255.255.0" - local_mac: "00:00:00:00:00:03" - - xe1: # logical name from topology.yaml and vnfd.yaml - vpci: "0000:07:00.1" - driver: i40e # default kernel driver - dpdk_port_num: 1 - local_ip: "152.16.40.19" - netmask: "255.255.255.0" - local_mac: "00:00:00:00:00:04" - routing_table: - - network: "152.16.100.20" - netmask: "255.255.255.0" - gateway: "152.16.100.20" - if: "xe0" - - network: "152.16.40.20" - netmask: "255.255.255.0" - gateway: "152.16.40.20" - if: "xe1" - nd_route_tbl: - - network: "0064:ff9b:0:0:0:0:9810:6414" - netmask: "112" - gateway: "0064:ff9b:0:0:0:0:9810:6414" - if: "xe0" - - network: "0064:ff9b:0:0:0:0:9810:2814" - netmask: "112" - gateway: "0064:ff9b:0:0:0:0:9810:2814" - if: "xe1" - -Enable yardstick virtual environment -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ -Before executing yardstick test cases, make sure to activate yardstick -python virtual environment - -:: - source /opt/nsb_bin/yardstick_venv/bin/activate - - -Examples and verifying the install ----------------------------------- - -It is recommended to verify that Yardstick was installed successfully -by executing some simple commands and test samples. Before executing yardstick -test cases make sure yardstick flavor and building yardstick-trusty-server -image can be found in glance and openrc file is sourced. Below is an example -invocation of yardstick help command and ping.py test sample: -:: - - yardstick –h - yardstick task start samples/ping.yaml - -Each testing tool supported by Yardstick has a sample configuration file. -These configuration files can be found in the **samples** directory. - -Default location for the output is ``/tmp/yardstick.out``. - - -Run Yardstick - Network Service Testcases ------------------------------------------ - -NS testing - using NSBperf CLI -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ -:: - - source /opt/nsb_setup/yardstick_venv/bin/activate - PYTHONPATH: ". ~/.bash_profile" - cd /yardstick/cmd - Execute command: ./NSPerf.py -h - ./NSBperf.py --vnf --test - eg: ./NSBperf.py --vnf vpe --test tc_baremetal_rfc2544_ipv4_1flow_64B.yaml - -NS testing - using yardstick CLI -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ -:: - - source /opt/nsb_setup/yardstick_venv/bin/activate - PYTHONPATH: ". ~/.bash_profile" - Go to test case forlder type we want to execute. - e.g. 
/samples/vnf_samples/nsut//
- run: yardstick --debug task start
diff --git a/docs/testing/user/userguide/08-vtc-overview.rst b/docs/testing/user/userguide/08-vtc-overview.rst
new file mode 100644
index 000000000..f30bf7cc5
--- /dev/null
+++ b/docs/testing/user/userguide/08-vtc-overview.rst
@@ -0,0 +1,125 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, National Center of Scientific Research "Demokritos" and others.
+
+==========================
+Virtual Traffic Classifier
+==========================
+
+Abstract
+========
+
+.. _TNOVA: http://www.t-nova.eu/
+.. _TNOVAresults: http://www.t-nova.eu/results/
+.. _Yardstick: https://wiki.opnfv.org/yardstick
+
+This chapter provides an overview of the virtual Traffic Classifier, a
+contribution to OPNFV Yardstick_ from the EU Project TNOVA_.
+Additional documentation is available in TNOVAresults_.
+
+Overview
+========
+
+The virtual Traffic Classifier (:term:`VTC`) :term:`VNF` comprises a
+Virtual Network Function Component (:term:`VNFC`). The :term:`VNFC` contains
+both the Traffic Inspection module and the Traffic Forwarding module needed
+to run the :term:`VNF`. The exploitation of Deep Packet Inspection
+(:term:`DPI`) methods for traffic classification is built around two basic
+assumptions:
+
+* third parties unaffiliated with either source or recipient are able to
+  inspect each IP packet’s payload
+
+* the classifier knows the relevant syntax of each application’s packet
+  payloads (protocol signatures, data patterns, etc.).
+
+The proposed :term:`DPI` based approach only uses an indicative, small
+number of the initial packets from each flow in order to identify the content,
+rather than inspecting every packet.
+
+In this respect it follows the Packet Based per Flow State (:term:`PBFS`)
+approach. This method uses a table to track each session based on the 5-tuple
+(src address, dest address, src port, dest port, transport protocol) that is
+maintained for each flow.
+
+Concepts
+========
+
+* *Traffic Inspection*: The process of packet analysis and application
+  identification of network traffic that passes through the :term:`VTC`.
+
+* *Traffic Forwarding*: The process of packet forwarding from an incoming
+  network interface to a pre-defined outgoing network interface.
+
+* *Traffic Rule Application*: The process of packet tagging, based on a
+  predefined set of rules. Packet tagging may include e.g. Type of Service
+  (:term:`ToS`) field modification.
+
+Architecture
+============
+
+The Traffic Inspection module is the most computationally intensive component
+of the :term:`VNF`. It implements filtering and packet matching algorithms in
+order to support the enhanced traffic forwarding capability of the :term:`VNF`.
+The component supports a flow table (exploiting hashing algorithms for fast
+indexing of flows) and an inspection engine for traffic classification.
+
+The implementation used for these experiments exploits the nDPI library.
+The packet capturing mechanism is implemented using libpcap. When the
+:term:`DPI` engine identifies a new flow, the flow register is updated with the
+appropriate information and passed to the Traffic Forwarding module,
+which then applies any required policy updates.
+
+The Traffic Forwarding module is responsible for routing and packet forwarding.
+It accepts incoming network traffic, consults the flow table for classification +information for each incoming flow and then applies pre-defined policies +marking e.g. :term:`ToS`/Differentiated Services Code Point (:term:`DSCP`) +multimedia traffic for Quality of Service (:term:`QoS`) enablement on the +forwarded traffic. +It is assumed that the traffic is forwarded using the default policy until it +is identified and new policies are enforced. + +The expected response delay is considered to be negligible, as only a small +number of packets are required to identify each flow. + +Graphical Overview +================== + +.. code-block:: console + + +----------------------------+ + | | + | Virtual Traffic Classifier | + | | + | Analysing/Forwarding | + | ------------> | + | ethA ethB | + | | + +----------------------------+ + | ^ + | | + v | + +----------------------------+ + | | + | Virtual Switch | + | | + +----------------------------+ + +Install +======= + +run the build.sh with root privileges + +Run +=== + +:: + + sudo ./pfbridge -a eth1 -b eth2 + + +Development Environment +======================= + +Ubuntu 14.04 Ubuntu 16.04 diff --git a/docs/testing/user/userguide/09-apexlake_installation.rst b/docs/testing/user/userguide/09-apexlake_installation.rst new file mode 100644 index 000000000..0d8ef143f --- /dev/null +++ b/docs/testing/user/userguide/09-apexlake_installation.rst @@ -0,0 +1,302 @@ +.. This work is licensed under a Creative Commons Attribution 4.0 International +.. License. +.. http://creativecommons.org/licenses/by/4.0 +.. (c) OPNFV, Intel Corporation and others. + + +.. _DPDK: http://dpdk.org/doc/nics +.. _DPDK-pktgen: https://github.com/Pktgen/Pktgen-DPDK/ +.. _SRIOV: https://wiki.openstack.org/wiki/SR-IOV-Passthrough-For-Networking +.. _PORTSEC: https://wiki.openstack.org/wiki/Neutron/ML2PortSecurityExtensionDriver +.. _here: https://wiki.opnfv.org/vtc + + +============================ +Apexlake Installation Guide +============================ + +Abstract +-------- + +ApexLake is a framework that provides automatic execution of experiments and +related data collection to enable a user validate infrastructure from the +perspective of a Virtual Network Function (:term:`VNF`). + +In the context of Yardstick, a virtual Traffic Classifier (:term:`VTC`) network +function is utilized. + + +Framework Hardware Dependencies +=============================== + +In order to run the framework there are some hardware related dependencies for +ApexLake. + +The framework needs to be installed on the same physical node where DPDK-pktgen_ +is installed. + +The installation requires the physical node hosting the packet generator must +have 2 NICs which are DPDK_ compatible. + +The 2 NICs will be connected to the switch where the OpenStack VM +network is managed. + +The switch used must support multicast traffic and :term:`IGMP` snooping. +Further details about the configuration are provided at the following here_. + +The corresponding ports to which the cables are connected need to be configured +as VLAN trunks using two of the VLAN IDs available for Neutron. +Note the VLAN IDs used as they will be required in later configuration steps. + + +Framework Software Dependencies +=============================== +Before starting the framework, a number of dependencies must first be installed. +The following describes the set of instructions to be executed via the Linux +shell in order to install and configure the required dependencies. + +1. Install Dependencies. 
+ +To support the framework dependencies the following packages must be installed. +The example provided is based on Ubuntu and needs to be executed in root mode. + +:: + + apt-get install python-dev + apt-get install python-pip + apt-get install python-mock + apt-get install tcpreplay + apt-get install libpcap-dev + +2. Source OpenStack openrc file. + +:: + + source openrc + +3. Configure Openstack Neutron + +In order to support traffic generation and management by the virtual +Traffic Classifier, the configuration of the port security driver +extension is required for Neutron. + +For further details please follow the following link: PORTSEC_ +This step can be skipped in case the target OpenStack is Juno or Kilo release, +but it is required to support Liberty. +It is therefore required to indicate the release version in the configuration +file located in ./yardstick/vTC/apexlake/apexlake.conf + + +4. Create Two Networks based on VLANs in Neutron. + +To enable network communications between the packet generator and the compute +node, two networks must be created via Neutron and mapped to the VLAN IDs +that were previously used in the configuration of the physical switch. +The following shows the typical set of commands required to configure Neutron +correctly. +The physical switches need to be configured accordingly. + +:: + + VLAN_1=2032 + VLAN_2=2033 + PHYSNET=physnet2 + neutron net-create apexlake_inbound_network \ + --provider:network_type vlan \ + --provider:segmentation_id $VLAN_1 \ + --provider:physical_network $PHYSNET + + neutron subnet-create apexlake_inbound_network \ + 192.168.0.0/24 --name apexlake_inbound_subnet + + neutron net-create apexlake_outbound_network \ + --provider:network_type vlan \ + --provider:segmentation_id $VLAN_2 \ + --provider:physical_network $PHYSNET + + neutron subnet-create apexlake_outbound_network 192.168.1.0/24 \ + --name apexlake_outbound_subnet + + +5. Download Ubuntu Cloud Image and load it on Glance + +The virtual Traffic Classifier is supported on top of Ubuntu 14.04 cloud image. +The image can be downloaded on the local machine and loaded on Glance +using the following commands: + +:: + + wget cloud-images.ubuntu.com/trusty/current/trusty-server-cloudimg-amd64-disk1.img + glance image-create \ + --name ubuntu1404 \ + --is-public true \ + --disk-format qcow \ + --container-format bare \ + --file trusty-server-cloudimg-amd64-disk1.img + + + +6. Configure the Test Cases + +The VLAN tags must also be included in the test case Yardstick yaml file +as parameters for the following test cases: + + * :doc:`opnfv_yardstick_tc006` + + * :doc:`opnfv_yardstick_tc007` + + * :doc:`opnfv_yardstick_tc020` + + * :doc:`opnfv_yardstick_tc021` + + +Install and Configure DPDK Pktgen ++++++++++++++++++++++++++++++++++ + +Execution of the framework is based on DPDK Pktgen. +If DPDK Pktgen has not installed, it is necessary to download, install, compile +and configure it. +The user can create a directory and download the dpdk packet generator source +code: + +:: + + cd experimental_framework/libraries + mkdir dpdk_pktgen + git clone https://github.com/pktgen/Pktgen-DPDK.git + +For instructions on the installation and configuration of DPDK and DPDK Pktgen +please follow the official DPDK Pktgen README file. 
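+
+Note that DPDK-based applications also expect hugepages to be reserved on the
+host before the packet generator can start. The values below are an example
+only (the page count should be adapted to the available memory); the
+authoritative steps remain those described in the DPDK Pktgen README:
+
+::
+
+    echo 1024 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
+    mkdir -p /mnt/huge
+    mount -t hugetlbfs nodev /mnt/huge
+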
+Once the installation is completed, it is necessary to load the DPDK kernel +driver, as follow: + +:: + + insmod uio + insmod DPDK_DIR/x86_64-native-linuxapp-gcc/kmod/igb_uio.ko + +It is necessary to set the configuration file to support the desired Pktgen +configuration. +A description of the required configuration parameters and supporting examples +is provided in the following: + +:: + + [PacketGen] + packet_generator = dpdk_pktgen + + # This is the directory where the packet generator is installed + # (if the user previously installed dpdk-pktgen, + # it is required to provide the director where it is installed). + pktgen_directory = /home/user/software/dpdk_pktgen/dpdk/examples/pktgen/ + + # This is the directory where DPDK is installed + dpdk_directory = /home/user/apexlake/experimental_framework/libraries/Pktgen-DPDK/dpdk/ + + # Name of the dpdk-pktgen program that starts the packet generator + program_name = app/app/x86_64-native-linuxapp-gcc/pktgen + + # DPDK coremask (see DPDK-Pktgen readme) + coremask = 1f + + # DPDK memory channels (see DPDK-Pktgen readme) + memory_channels = 3 + + # Name of the interface of the pktgen to be used to send traffic (vlan_sender) + name_if_1 = p1p1 + + # Name of the interface of the pktgen to be used to receive traffic (vlan_receiver) + name_if_2 = p1p2 + + # PCI bus address correspondent to if_1 + bus_slot_nic_1 = 01:00.0 + + # PCI bus address correspondent to if_2 + bus_slot_nic_2 = 01:00.1 + + +To find the parameters related to names of the NICs and the addresses of the PCI buses +the user may find it useful to run the :term:`DPDK` tool nic_bind as follows: + +:: + + DPDK_DIR/tools/dpdk_nic_bind.py --status + +Lists the NICs available on the system, and shows the available drivers and bus addresses for each interface. +Please make sure to select NICs which are :term:`DPDK` compatible. + +Installation and Configuration of smcroute +++++++++++++++++++++++++++++++++++++++++++ + +The user is required to install smcroute which is used by the framework to +support multicast communications. + +The following is the list of commands required to download and install smroute. + +:: + + cd ~ + git clone https://github.com/troglobit/smcroute.git + cd smcroute + git reset --hard c3f5c56 + sed -i 's/aclocal-1.11/aclocal/g' ./autogen.sh + sed -i 's/automake-1.11/automake/g' ./autogen.sh + ./autogen.sh + ./configure + make + sudo make install + cd .. + +It is required to do the reset to the specified commit ID. +It is also requires the creation a configuration file using the following +command: + +:: + + SMCROUTE_NIC=(name of the nic) + +where name of the nic is the name used previously for the variable "name_if_2". +For example: + +:: + + SMCROUTE_NIC=p1p2 + +Then create the smcroute configuration file /etc/smcroute.conf + +:: + + echo mgroup from $SMCROUTE_NIC group 224.192.16.1 > /etc/smcroute.conf + + +At the end of this procedure it will be necessary to perform the following +actions to add the user to the sudoers: + +:: + + adduser USERNAME sudo + echo "user ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers + + +Experiment using SR-IOV Configuration on the Compute Node ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ + +To enable :term:`SR-IOV` interfaces on the physical NIC of the compute node, a +compatible NIC is required. +NIC configuration depends on model and vendor. After proper configuration to +support :term:`SR-IOV`, a proper configuration of OpenStack is required. 
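+
+As a purely illustrative sketch (the interface name, VF count and physical
+network label below are assumptions; the exact procedure depends on the NIC
+model and on the OpenStack release in use):
+
+::
+
+    # create virtual functions on the SR-IOV capable interface
+    echo 8 > /sys/class/net/p1p1/device/sriov_numvfs
+
+    # example nova.conf entry on the compute node, whitelisting the PF
+    pci_passthrough_whitelist = {"devname": "p1p1", "physical_network": "physnet2"}
+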
+For further information, please refer to the SRIOV_ configuration guide + +Finalize installation the framework on the system +================================================= + +The installation of the framework on the system requires the setup of the project. +After entering into the apexlake directory, it is sufficient to run the following +command. + +:: + + python setup.py install + +Since some elements are copied into the /tmp directory (see configuration file) +it could be necessary to repeat this step after a reboot of the host. diff --git a/docs/testing/user/userguide/09-installation.rst b/docs/testing/user/userguide/09-installation.rst deleted file mode 100644 index 9c2082a27..000000000 --- a/docs/testing/user/userguide/09-installation.rst +++ /dev/null @@ -1,401 +0,0 @@ -.. This work is licensed under a Creative Commons Attribution 4.0 International -.. License. -.. http://creativecommons.org/licenses/by/4.0 -.. (c) OPNFV, Ericsson AB, Huawei Technologies Co.,Ltd and others. - -Yardstick Installation -====================== - -Abstract --------- - -Yardstick supports installation on Ubuntu 14.04 or via a Docker image. The -installation procedure on Ubuntu 14.04 or via the docker image are detailed in -the section below. - -To use Yardstick you should have access to an OpenStack environment, with at -least Nova, Neutron, Glance, Keystone and Heat installed. - -The steps needed to run Yardstick are: - -1. Install Yardstick. -2. Load OpenStack environment variables. -3. Create a Neutron external network. -4. Build Yardstick flavor and a guest image. -5. Load the guest image into the OpenStack environment. -6. Create the test configuration .yaml file. -7. Run the test case. - - -Prerequisites -------------- - -The OPNFV deployment is out of the scope of this document but it can be -found in http://artifacts.opnfv.org/opnfvdocs/colorado/docs/configguide/index.html. -The OPNFV platform is considered as the System Under Test (SUT) in this -document. - -Several prerequisites are needed for Yardstick: - - #. A Jumphost to run Yardstick on - #. A Docker daemon shall be installed on the Jumphost - #. A public/external network created on the SUT - #. Connectivity from the Jumphost to the SUT public/external network - -WARNING: Connectivity from Jumphost is essential and it is of paramount -importance to make sure it is working before even considering to install -and run Yardstick. Make also sure you understand how your networking is -designed to work. - -NOTE: **Jumphost** refers to any server which meets the previous -requirements. Normally it is the same server from where the OPNFV -deployment has been triggered previously. - -NOTE: If your Jumphost is operating behind a company http proxy and/or -Firewall, please consult first the section `Proxy Support`_, towards -the end of this document. The section details some tips/tricks which -*may* be of help in a proxified environment. - - -Installing Yardstick on Ubuntu 14.04 ------------------------------------- - -.. _install-framework: - -You can install Yardstick framework directly on Ubuntu 14.04 or in an Ubuntu -14.04 Docker image. No matter which way you choose to install Yardstick -framework, the following installation steps are identical. 
- -If you choose to use the Ubuntu 14.04 Docker image, You can pull the Ubuntu -14.04 Docker image from Docker hub: - -:: - - docker pull ubuntu:14.04 - -Installing Yardstick framework -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ -Download source code and install python dependencies: - -:: - - git clone https://gerrit.opnfv.org/gerrit/yardstick - cd yardstick - ./install.sh - - -Installing Yardstick using Docker ---------------------------------- - -Yardstick has a Docker image, this Docker image (**Yardstick-stable**) -serves as a replacement for installing the Yardstick framework in a virtual -environment (for example as done in :ref:`install-framework`). -It is recommended to use this Docker image to run Yardstick test. - -Pulling the Yardstick Docker image -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ - -.. _dockerhub: https://hub.docker.com/r/opnfv/yardstick/ - -Pull the Yardstick Docker image ('opnfv/yardstick') from the public dockerhub -registry under the OPNFV account: [dockerhub_], with the following docker -command:: - - docker pull opnfv/yardstick:stable - -After pulling the Docker image, check that it is available with the -following docker command:: - - [yardsticker@jumphost ~]$ docker images - REPOSITORY TAG IMAGE ID CREATED SIZE - opnfv/yardstick stable a4501714757a 1 day ago 915.4 MB - -Run the Docker image: - -:: - - docker run --privileged=true -it opnfv/yardstick:stable /bin/bash - -In the container the Yardstick repository is located in the /home/opnfv/repos -directory. - - -OpenStack parameters and credentials ------------------------------------- - -Environment variables -^^^^^^^^^^^^^^^^^^^^^ -Before running Yardstick it is necessary to export OpenStack environment variables -from the OpenStack *openrc* file (using the ``source`` command) and export the -external network name ``export EXTERNAL_NETWORK="external-network-name"``, -the default name for the external network is ``net04_ext``. - -Credential environment variables in the *openrc* file have to include at least: - -* OS_AUTH_URL -* OS_USERNAME -* OS_PASSWORD -* OS_TENANT_NAME - -A sample openrc file may look like this: - -* export OS_PASSWORD=console -* export OS_TENANT_NAME=admin -* export OS_AUTH_URL=http://172.16.1.222:35357/v2.0 -* export OS_USERNAME=admin -* export OS_VOLUME_API_VERSION=2 -* export EXTERNAL_NETWORK=net04_ext - - -Yardstick falvor and guest images ---------------------------------- - -Before executing Yardstick test cases, make sure that yardstick guest image and -yardstick flavor are available in OpenStack. -Detailed steps about creating yardstick flavor and building yardstick-trusty-server -image can be found below. - -Yardstick-flavor -^^^^^^^^^^^^^^^^ -Most of the sample test cases in Yardstick are using an OpenStack flavor called -*yardstick-flavor* which deviates from the OpenStack standard m1.tiny flavor by the -disk size - instead of 1GB it has 3GB. Other parameters are the same as in m1.tiny. - -Create yardstick-flavor: - -:: - - nova flavor-create yardstick-flavor 100 512 3 1 - - -.. _guest-image: - -Building a guest image -^^^^^^^^^^^^^^^^^^^^^^ -Most of the sample test cases in Yardstick are using a guest image called -*yardstick-trusty-server* which deviates from an Ubuntu Cloud Server image -containing all the required tools to run test cases supported by Yardstick. -Yardstick has a tool for building this custom image. It is necessary to have -sudo rights to use this tool. 
- -Also you may need install several additional packages to use this tool, by -follwing the commands below: - -:: - - apt-get update && apt-get install -y \ - qemu-utils \ - kpartx - -This image can be built using the following command while in the directory where -Yardstick is installed (``~/yardstick`` if the framework is installed -by following the commands above): - -:: - - export YARD_IMG_ARCH="amd64" - sudo echo "Defaults env_keep += \"YARD_IMG_ARCH\"" >> /etc/sudoers - sudo ./tools/yardstick-img-modify tools/ubuntu-server-cloudimg-modify.sh - -**Warning:** the script will create files by default in: -``/tmp/workspace/yardstick`` and the files will be owned by root! - -If you are building this guest image in inside a docker container make sure the -container is granted with privilege. - -The created image can be added to OpenStack using the ``glance image-create`` or -via the OpenStack Dashboard. - -Example command: - -:: - - glance --os-image-api-version 1 image-create \ - --name yardstick-image --is-public true \ - --disk-format qcow2 --container-format bare \ - --file /tmp/workspace/yardstick/yardstick-image.img - -Some Yardstick test cases use a Cirros image, you can find one at -http://download.cirros-cloud.net/0.3.3/cirros-0.3.3-x86_64-disk.img - - -Automatic flavor and image creation ------------------------------------ -Yardstick has a script for automatic creating yardstick flavor and building -guest images. This script is mainly used in CI, but you can still use it in -your local environment. - -Example command: - -:: - - export YARD_IMG_ARCH="amd64" - sudo echo "Defaults env_keep += \"YARD_IMG_ARCH\"" >> /etc/sudoers - source $YARDSTICK_REPO_DIR/tests/ci/load_images.sh - - -Yardstick default key pair -^^^^^^^^^^^^^^^^^^^^^^^^^^ -Yardstick uses a SSH key pair to connect to the guest image. This key pair can -be found in the ``resources/files`` directory. To run the ``ping-hot.yaml`` test -sample, this key pair needs to be imported to the OpenStack environment. - - -Examples and verifying the install ----------------------------------- - -It is recommended to verify that Yardstick was installed successfully -by executing some simple commands and test samples. Before executing yardstick -test cases make sure yardstick flavor and building yardstick-trusty-server -image can be found in glance and openrc file is sourced. Below is an example -invocation of yardstick help command and ping.py test sample: -:: - - yardstick –h - yardstick task start samples/ping.yaml - -Each testing tool supported by Yardstick has a sample configuration file. -These configuration files can be found in the **samples** directory. - -Default location for the output is ``/tmp/yardstick.out``. - - -Deploy InfluxDB and Grafana locally ------------------------------------- - -.. 
pull docker images - -Pull docker images - -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ - -:: - - docker pull tutum/influxdb - docker pull grafana/grafana - -Run influxdb and config -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ -Run influxdb -:: - - docker run -d --name influxdb \ - -p 8083:8083 -p 8086:8086 --expose 8090 --expose 8099 \ - tutum/influxdb - docker exec -it influxdb bash - -Config influxdb -:: - - influx - >CREATE USER root WITH PASSWORD 'root' WITH ALL PRIVILEGES - >CREATE DATABASE yardstick; - >use yardstick; - >show MEASUREMENTS; - -Run grafana and config -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ -Run grafana -:: - - docker run -d --name grafana -p 3000:3000 grafana/grafana - -Config grafana -:: - - http://{YOUR_IP_HERE}:3000 - log on using admin/admin and config database resource to be {YOUR_IP_HERE}:8086 - -.. image:: images/Grafana_config.png - :width: 800px - :alt: Grafana data source configration - -Config yardstick conf -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ -cp ./etc/yardstick/yardstick.conf.sample /etc/yardstick/yardstick.conf - -vi /etc/yardstick/yardstick.conf -Config yardstick.conf -:: - - [DEFAULT] - debug = True - dispatcher = influxdb - - [dispatcher_influxdb] - timeout = 5 - target = http://{YOUR_IP_HERE}:8086 - db_name = yardstick - username = root - password = root - -Now you can run yardstick test cases and store the results in influxdb -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ - - -Create a test suite for yardstick ------------------------------------- - -A test suite in yardstick is a yaml file which include one or more test cases. -Yardstick is able to support running test suite task, so you can customize you -own test suite and run it in one task. - -"tests/opnfv/test_suites" is where yardstick put ci test-suite. A typical test -suite is like below: - -fuel_test_suite.yaml - -:: - - --- - # Fuel integration test task suite - - schema: "yardstick:suite:0.1" - - name: "fuel_test_suite" - test_cases_dir: "samples/" - test_cases: - - - file_name: ping.yaml - - - file_name: iperf3.yaml - -As you can see, there are two test cases in fuel_test_suite, the syntax is simple -here, you must specify the schema and the name, then you just need to list the -test cases in the tag "test_cases" and also mark their relative directory in the -tag "test_cases_dir". - -Yardstick test suite also support constraints and task args for each test case. -Here is another sample to show this, which is digested from one big test suite. - -os-nosdn-nofeature-ha.yaml - -:: - - --- - - schema: "yardstick:suite:0.1" - - name: "os-nosdn-nofeature-ha" - test_cases_dir: "tests/opnfv/test_cases/" - test_cases: - - - file_name: opnfv_yardstick_tc002.yaml - - - file_name: opnfv_yardstick_tc005.yaml - - - file_name: opnfv_yardstick_tc043.yaml - constraint: - installer: compass - pod: huawei-pod1 - task_args: - huawei-pod1: '{"pod_info": "etc/yardstick/.../pod.yaml", - "host": "node4.LF","target": "node5.LF"}' - -As you can see in test case "opnfv_yardstick_tc043.yaml", there are two tags, "constraint" and -"task_args". "constraint" is where you can specify which installer or pod it can be run in -the ci environment. "task_args" is where you can specify the task arguments for each pod. - -All in all, to create a test suite in yardstick, you just need to create a suite yaml file -and add test cases and constraint or task arguments if necessary. 
- diff --git a/docs/testing/user/userguide/10-apexlake_api.rst b/docs/testing/user/userguide/10-apexlake_api.rst new file mode 100644 index 000000000..35a1dbe3e --- /dev/null +++ b/docs/testing/user/userguide/10-apexlake_api.rst @@ -0,0 +1,89 @@ +.. This work is licensed under a Creative Commons Attribution 4.0 International +.. License. +.. http://creativecommons.org/licenses/by/4.0 +.. (c) OPNFV, Intel Corporation and others. + + +================================= +Apexlake API Interface Definition +================================= + +Abstract +-------- + +The API interface provided by the framework to enable the execution of test +cases is defined as follows. + + +init +---- + +**static init()** + + Initializes the Framework + + **Returns** None + + +execute_framework +----------------- + +**static execute_framework** (test_cases, + + iterations, + + heat_template, + + heat_template_parameters, + + deployment_configuration, + + openstack_credentials) + + Executes the framework according the specified inputs + + **Parameters** + + - **test_cases** + + Test cases to be run with the workload (dict() of dict()) + + Example: + test_case = dict() + + test_case[’name’] = ‘module.Class’ + + test_case[’params’] = dict() + + test_case[’params’][’throughput’] = ‘1’ + + test_case[’params’][’vlan_sender’] = ‘1000’ + + test_case[’params’][’vlan_receiver’] = ‘1001’ + + test_cases = [test_case] + + - **iterations** + Number of test cycles to be executed (int) + + - **heat_template** + (string) File name of the heat template corresponding to the workload to be deployed. + It contains the parameters to be evaluated in the form of #parameter_name. + (See heat_templates/vTC.yaml as example). + + - **heat_template_parameters** + (dict) Parameters to be provided as input to the + heat template. See http://docs.openstack.org/developer/heat/ template_guide/hot_guide.html + section “Template input parameters” for further info. + + - **deployment_configuration** + ( dict[string] = list(strings) ) ) Dictionary of parameters + representing the deployment configuration of the workload. + + The key is a string corresponding to the name of the parameter, + the value is a list of strings representing the value to be + assumed by a specific param. The parameters are user defined: + they have to correspond to the place holders (#parameter_name) + specified in the heat template. + + **Returns** dict() containing results diff --git a/docs/testing/user/userguide/10-yardstick_plugin.rst b/docs/testing/user/userguide/10-yardstick_plugin.rst deleted file mode 100644 index f16dedd02..000000000 --- a/docs/testing/user/userguide/10-yardstick_plugin.rst +++ /dev/null @@ -1,144 +0,0 @@ -.. This work is licensed under a Creative Commons Attribution 4.0 International -.. License. -.. http://creativecommons.org/licenses/by/4.0 -.. (c) OPNFV, Ericsson AB, Huawei Technologies Co.,Ltd and others. - -=================================== -Installing a plug-in into yardstick -=================================== - -Abstract -======== - -Yardstick currently provides a ``plugin`` CLI command to support integration -with other OPNFV testing projects. Below is an example invocation of yardstick -plugin command and Storperf plug-in sample. - - -Installing Storperf into yardstick -================================== - -Storperf is delivered as a Docker container from -https://hub.docker.com/r/opnfv/storperf/tags/. 
- -There are two possible methods for installation in your environment: - -* Run container on Jump Host -* Run container in a VM - -In this introduction we will install Storperf on Jump Host. - - -Step 0: Environment preparation ->>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> - -Running Storperf on Jump Host -Requirements: - -* Docker must be installed -* Jump Host must have access to the OpenStack Controller API -* Jump Host must have internet connectivity for downloading docker image -* Enough floating IPs must be available to match your agent count - -Before installing Storperf into yardstick you need to check your openstack -environment and other dependencies: - -1. Make sure docker is installed. -2. Make sure Keystone, Nova, Neutron, Glance, Heat are installed correctly. -3. Make sure Jump Host have access to the OpenStack Controller API. -4. Make sure Jump Host must have internet connectivity for downloading docker image. -5. You need to know where to get basic openstack Keystone authorization info, such as - OS_PASSWORD, OS_TENANT_NAME, OS_AUTH_URL, OS_USERNAME. -6. To run a Storperf container, you need to have OpenStack Controller environment - variables defined and passed to Storperf container. The best way to do this is to - put environment variables in a "storperf_admin-rc" file. The storperf_admin-rc - should include credential environment variables at least: - -* OS_AUTH_URL -* OS_TENANT_ID -* OS_TENANT_NAME -* OS_PROJECT_NAME -* OS_USERNAME -* OS_PASSWORD -* OS_REGION_NAME - -For this storperf_admin-rc file, during environment preparation a "prepare_storperf_admin-rc.sh" -script can be used to generate it. -:: - - #!/bin/bash - AUTH_URL=${OS_AUTH_URL} - USERNAME=${OS_USERNAME:-admin} - PASSWORD=${OS_PASSWORD:-console} - TENANT_NAME=${OS_TENANT_NAME:-admin} - VOLUME_API_VERSION=${OS_VOLUME_API_VERSION:-2} - PROJECT_NAME=${OS_PROJECT_NAME:-$TENANT_NAME} - TENANT_ID=`keystone tenant-get admin|grep 'id'|awk -F '|' '{print $3}'|sed -e 's/^[[:space:]]*//'` - rm -f ~/storperf_admin-rc - touch ~/storperf_admin-rc - echo "OS_AUTH_URL="$AUTH_URL >> ~/storperf_admin-rc - echo "OS_USERNAME="$USERNAME >> ~/storperf_admin-rc - echo "OS_PASSWORD="$PASSWORD >> ~/storperf_admin-rc - echo "OS_TENANT_NAME="$TENANT_NAME >> ~/storperf_admin-rc - echo "OS_VOLUME_API_VERSION="$VOLUME_API_VERSION >> ~/storperf_admin-rc - echo "OS_PROJECT_NAME="$PROJECT_NAME >> ~/storperf_admin-rc - echo "OS_TENANT_ID="$TENANT_ID >> ~/storperf_admin-rc - - -Step 1: Plug-in configuration file preparation -++++++++++++++++++++++++++++++++++++++++++++++ - -To install a plug-in, first you need to prepare a plug-in configuration file in -YAML format and store it in the "plugin" directory. The plugin configration file -work as the input of yardstick "plugin" command. Below is the Storperf plug-in -configuration file sample: -:: - - --- - # StorPerf plugin configuration file - # Used for integration StorPerf into Yardstick as a plugin - schema: "yardstick:plugin:0.1" - plugins: - name: storperf - deployment: - ip: 192.168.23.2 - user: root - password: root - -In the plug-in configuration file, you need to specify the plug-in name and the -plug-in deployment info, including node ip, node login username and password. -Here the Storperf will be installed on IP 192.168.23.2 which is the Jump Host -in my local environment. - -Step 2: Plug-in install/remove scripts preparation -++++++++++++++++++++++++++++++++++++++++++++++++++ - -Under "yardstick/resource/scripts directory", there are two folders: a "install" -folder and a "remove" folder. 
You need to store the plug-in install/remove script -in these two folders respectively. - -The detailed installation or remove operation should de defined in these two scripts. -The name of both install and remove scripts should match the plugin-in name that you -specified in the plug-in configuration file. -For example, the install and remove scripts for Storperf are both named to "storperf.bash". - - -Step 3: Install and remove Storperf -+++++++++++++++++++++++++++++++++++ - -To install Storperf, simply execute the following command -:: - - # Install Storperf - yardstick plugin install plugin/storperf.yaml - -removing Storperf from yardstick -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ - -To remove Storperf, simply execute the following command -:: - - # Remove Storperf - yardstick plugin remove plugin/storperf.yaml - -What yardstick plugin command does is using the username and password to log into the deployment target and then execute the corresponding install or remove script. diff --git a/docs/testing/user/userguide/11-nsb-overview.rst b/docs/testing/user/userguide/11-nsb-overview.rst new file mode 100644 index 000000000..6dfa521d1 --- /dev/null +++ b/docs/testing/user/userguide/11-nsb-overview.rst @@ -0,0 +1,213 @@ +.. This work is licensed under a Creative Commons Attribution 4.0 International +.. License. +.. http://creativecommons.org/licenses/by/4.0 +.. (c) OPNFV, 2016-2017 Intel Corporation. + +===================================== +Network Services Benchmarking (NSB) +===================================== + +Abstract +======== + +.. _Yardstick: https://wiki.opnfv.org/yardstick + +This chapter provides an overview of the NSB, a contribution to OPNFV +Yardstick_ from Intel. + +Overview +======== + +GOAL: Extend Yardstick to perform real world VNFs and NFVi Characterization and +benchmarking with repeatable and deterministic methods. + +The Network Service Benchmarking (NSB) extends the yardstick framework to do +VNF characterization and benchmarking in three different execution +environments - bare metal i.e. native Linux environment, standalone virtual +environment and managed virtualized environment (e.g. Open stack etc.). +It also brings in the capability to interact with external traffic generators +both hardware & software based for triggering and validating the traffic +according to user defined profiles. + +NSB extension includes: + + - Generic data models of Network Services, based on ETSI spec (ETSI GS NFV-TST 001) + .. _ETSI GS NFV-TST 001: http://www.etsi.org/deliver/etsi_gs/NFV-TST/001_099/001/01.01.01_60/gs_nfv-tst001v010101p.pdf + + - New Standalone context for VNF testing like SRIOV, OVS, OVS-DPDK etc + + - Generic VNF configuration models and metrics implemented with Python + classes + + - Traffic generator features and traffic profiles + + - L1-L3 state-less traffic profiles + + - L4-L7 state-full traffic profiles + + - Tunneling protocol / network overlay support + + - Test case samples + + - Ping + + - Trex + + - vPE,vCGNAT, vFirewall etc - ipv4 throughput, latency etc + + - Traffic generators like Trex, ab/nginx, ixia, iperf etc + + - KPIs for a given use case: + + - System agent support for collecting NFVi KPI. This includes: + + - CPU statistic + + - Memory BW + + - OVS-DPDK Stats + + - Network KPIs, e.g., inpackets, outpackets, thoughput, latency etc + + - VNF KPIs, e.g., packet_in, packet_drop, packet_fwd etc + +Architecture +============ +The Network Service (NS) defines a set of Virtual Network Functions (VNF) +connected together using NFV infrastructure. 
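+
+As a purely illustrative sketch of that idea (the field names and values below
+are invented for illustration and are not the actual NSB or ETSI schema), such
+a service description pairs VNF models with the virtual links that connect
+their connection points:
+
+::
+
+    # hypothetical service description - illustration only
+    vnfs:
+      - id: tg_0                             # traffic generator, modelled as a VNF
+        flavor: {vcpus: 4, memory_mb: 4096}
+        connection_points: [xe0, xe1]
+      - id: vnf_0                            # VNF under test
+        flavor: {vcpus: 8, memory_mb: 8192}
+        connection_points: [xe0, xe1]
+    virtual_links:
+      - {from: tg_0.xe0, to: vnf_0.xe0}
+      - {from: vnf_0.xe1, to: tg_0.xe1}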
+
+The Yardstick NSB extension can support multiple VNFs created by different
+vendors including traffic generators. Every VNF being tested has its
+own data model. The Network Service defines a VNF modelling based on the
+network functionality it performs. Part of the data model is a set of
+configuration parameters, the number of connection points used and the flavor,
+including the core and memory amount.
+
+The ETSI defines a Network Service as a set of configurable VNFs working in
+some NFV Infrastructure connecting each other using Virtual Links available
+through Connection Points. The ETSI MANO specification defines a set of
+management entities called Network Service Descriptors (NSD) and
+VNF Descriptors (VNFD) that define a real Network Service. The picture below
+gives an example of how a real Network Operator use-case can map onto the
+ETSI Network Service definition.
+
+The Network Service framework performs the necessary test steps. It may involve:
+
+  - Interacting with the traffic generator and providing the inputs on traffic
+    type / packet structure to generate the required traffic as per the
+    test case. Traffic profiles will be used for this.
+
+  - Executing the commands required for the test procedure and analysing the
+    command output to confirm whether the command was executed correctly
+    or not, e.g. as per the test case, running the traffic for the given
+    time period / waiting for the necessary time delay.
+
+  - Verifying the test result.
+
+  - Validating the traffic flow from the SUT.
+
+  - Fetching the table / data from the SUT and verifying the value as per the
+    test case.
+
+  - Uploading the logs from the SUT onto the Test Harness server.
+
+  - Reading the KPIs provided by the particular VNF.
+
+Components of Network Service
+------------------------------
+
+* *Models for Network Service benchmarking*: The Network Service benchmarking
+  requires a proper modelling approach. The NSB provides models using Python
+  files that define the NSDs and VNFDs.
+
+The benchmark control application, being a part of OPNFV Yardstick, can call
+those Python models to instantiate and configure the VNFs. Depending on the
+infrastructure type (bare-metal or fully virtualized) those calls can be
+made directly or through a MANO system.
+
+* *Traffic generators in NSB*: Any benchmark application requires a set of
+  traffic generators and traffic profiles defining the method in which traffic
+  is generated.
+
+The Network Service benchmarking model extends the Network Service
+definition with a set of Traffic Generators (TG) that are treated the
+same way as the other VNFs being part of the benchmarked network service.
+As with the other VNFs, the traffic generators are instantiated and terminated.
+
+Every traffic generator has its own configuration, defined as a traffic
+profile, and a set of supported KPIs. The Python models for the TGs are
+extended by specific calls to listen for and generate traffic.
+
+* *The stateless TREX traffic generator*: The main traffic generator used as
+  the Network Service stimulus is the open source TREX tool.
+
+The TREX tool can generate any kind of stateless traffic.
+
+.. code-block:: console
+
+   +--------+          +-------+          +--------+
+   |        |          |       |          |        |
+   |  Trex  |  --->    |  VNF  |  --->    |  Trex  |
+   |        |          |       |          |        |
+   +--------+          +-------+          +--------+
+
+Supported test case scenarios:
+
+  - Correlated UDP traffic using TREX traffic generator and replay VNF.
+ + - using different IMIX configuration like pure voice, pure video traffic etc + + - using different number IP flows like 1 flow, 1K, 16K, 64K, 256K, 1M flows + + - Using different number of rules configured like 1 rule, 1K, 10K rules + +For UDP correlated traffic following Key Performance Indicators are collected +for every combination of test case parameters: + + - RFC2544 throughput for various loss rate defined (1% is a default) + +Graphical Overview +================== + +NSB Testing with yardstick framework facilitate performance testing of various +VNFs provided. + +.. code-block:: console + + +-----------+ + | | +-----------+ + | vPE | ->|TGen Port 0| + | TestCase | | +-----------+ + | | | + +-----------+ +------------------+ +-------+ | + | | -- API --> | VNF | <---> + +-----------+ | Yardstick | +-------+ | + | Test Case | --> | NSB Testing | | + +-----------+ | | | + | | | | + | +------------------+ | + +-----------+ | +-----------+ + | Traffic | ->|TGen Port 1| + | patterns | +-----------+ + +-----------+ + + Figure 1: Network Service - 2 server configuration + + +Install +======= + +run the nsb_install.sh with root privileges + +Run +=== + +:: + + source ~/.bash_profile + cd /yardstick/cmd + sudo -E ./NSBperf.py --vnf vpe --test tc_baremetal_rfc2544_ipv4_1flow_64B.yaml + +Development Environment +======================= + +Ubuntu 14.04, Ubuntu 16.04 diff --git a/docs/testing/user/userguide/11-result-store-InfluxDB.rst b/docs/testing/user/userguide/11-result-store-InfluxDB.rst deleted file mode 100644 index a0bb48a80..000000000 --- a/docs/testing/user/userguide/11-result-store-InfluxDB.rst +++ /dev/null @@ -1,86 +0,0 @@ -.. This work is licensed under a Creative Commons Attribution 4.0 International -.. License. -.. http://creativecommons.org/licenses/by/4.0 -.. (c) OPNFV, 2016 Huawei Technologies Co.,Ltd and others. - -============================================== -Store Other Project's Test Results in InfluxDB -============================================== - -Abstract -======== - -.. _Framework: https://wiki.opnfv.org/download/attachments/6827660/wiki.png?version=1&modificationDate=1470298075000&api=v2 - -This chapter illustrates how to run plug-in test cases and store test results -into community's InfluxDB. The framework is shown in Framework_. - - -.. image:: images/InfluxDB_store.png - :width: 800px - :alt: Store Other Project's Test Results in InfluxDB - -Store Storperf Test Results into Community's InfluxDB -===================================================== - -.. _Influxdb: https://git.opnfv.org/cgit/yardstick/tree/yardstick/dispatcher/influxdb.py -.. _Mingjiang: limingjiang@huawei.com -.. _Visual: https://wiki.opnfv.org/download/attachments/6827660/tc074.PNG?version=1&modificationDate=1470298075000&api=v2 -.. _Login: http://testresults.opnfv.org/grafana/login - -As shown in Framework_, there are two ways to store Storperf test results -into community's InfluxDB: - -1. Yardstick asks Storperf to run the test case. After the test case is - completed, Yardstick reads test results via ReST API from Storperf and - posts test data to the influxDB. - -2. Additionally, Storperf can run tests by itself and post the test result - directly to the InfluxDB. The method for posting data directly to influxDB - will be supported in the future. - -Our plan is to support rest-api in D release so that other testing projects can -call the rest-api to use yardstick dispatcher service to push data to yardstick's -influxdb database. 
- -For now, influxdb only support line protocol, and the json protocol is deprecated. - -Take ping test case for example, the raw_result is json format like this: -:: - - "benchmark": { - "timestamp": 1470315409.868095, - "errors": "", - "data": { - "rtt": { - "ares": 1.125 - } - }, - "sequence": 1 - }, - "runner_id": 2625 - } - -With the help of "influxdb_line_protocol", the json is transform to like below as a line string: -:: - - 'ping,deploy_scenario=unknown,host=athena.demo,installer=unknown,pod_name=unknown, - runner_id=2625,scenarios=Ping,target=ares.demo,task_id=77755f38-1f6a-4667-a7f3- - 301c99963656,version=unknown rtt.ares=1.125 1470315409868094976' - -So, for data output of json format, you just need to transform json into line format and call -influxdb api to post the data into the database. All this function has been implemented in Influxdb_. -If you need support on this, please contact Mingjiang_. -:: - - curl -i -XPOST 'http://104.197.68.199:8086/write?db=yardstick' -- - data-binary 'ping,deploy_scenario=unknown,host=athena.demo,installer=unknown, ...' - -Grafana will be used for visualizing the collected test data, which is shown in Visual_. Grafana -can be accessed by Login_. - - -.. image:: images/results_visualization.png - :width: 800px - :alt: results visualization - diff --git a/docs/testing/user/userguide/12-grafana.rst b/docs/testing/user/userguide/12-grafana.rst deleted file mode 100644 index 416857b71..000000000 --- a/docs/testing/user/userguide/12-grafana.rst +++ /dev/null @@ -1,119 +0,0 @@ -.. This work is licensed under a Creative Commons Attribution 4.0 International -.. License. -.. http://creativecommons.org/licenses/by/4.0 -.. (c) 2016 Huawei Technologies Co.,Ltd and others - -================= -Grafana dashboard -================= - - -Abstract -======== - -This chapter describes the Yardstick grafana dashboard. The Yardstick grafana -dashboard can be found here: http://testresults.opnfv.org/grafana/ - - -.. image:: images/login.png - :width: 800px - :alt: Yardstick grafana dashboard - - -Public access -============= - -Yardstick provids a public account for accessing to the dashboard. The username -and password are both set to ‘opnfv’. - - -Testcase dashboard -================== - -For each test case, there is a dedicated dashboard. Shown here is the dashboard -of TC002. - - -.. image:: images/TC002.png - :width: 800px - :alt:TC002 dashboard - -For each test case dashboard. On the top left, we have a dashboard selection, -you can switch to different test cases using this pull-down menu. - -Underneath, we have a pod and scenario selection. -All the pods and scenarios that have ever published test data to the InfluxDB -will be shown here. - -You can check multiple pods or scenarios. - -For each test case, we have a short description and a link to detailed test -case information in Yardstick user guide. - -Underneath, it is the result presentation section. -You can use the time period selection on the top right corner to zoom in or -zoom out the chart. - - -Administration access -===================== - -For a user with administration rights it is easy to update and save any -dashboard configuration. Saved updates immediately take effect and become live. -This may cause issues like: - -- Changes and updates made to the live configuration in Grafana can compromise - existing Grafana content in an unwanted, unpredicted or incompatible way. - Grafana as such is not version controlled, there exists one single Grafana - configuration per dashboard. 
-- There is a risk several people can disturb each other when doing updates to - the same Grafana dashboard at the same time. - -Any change made by administrator should be careful. - - -Add a dashboard into yardstick grafana -====================================== - -Due to security concern, users that using the public opnfv account are not able -to edit the yardstick grafana directly.It takes a few more steps for a -non-yardstick user to add a custom dashboard into yardstick grafana. - -There are 6 steps to go. - - -.. image:: images/add.png - :width: 800px - :alt: Add a dashboard into yardstick grafana - - -1. You need to build a local influxdb and grafana, so you can do the work - locally. You can refer to How to deploy InfluxDB and Grafana locally wiki - page about how to do this. - -2. Once step one is done, you can fetch the existing grafana dashboard - configuration file from the yardstick repository and import it to your local - grafana. After import is done, you grafana dashboard will be ready to use - just like the community’s dashboard. - -3. The third step is running some test cases to generate test results and - publishing it to your local influxdb. - -4. Now you have some data to visualize in your dashboard. In the fourth step, - it is time to create your own dashboard. You can either modify an existing - dashboard or try to create a new one from scratch. If you choose to modify - an existing dashboard then in the curtain menu of the existing dashboard do - a "Save As..." into a new dashboard copy instance, and then continue doing - all updates and saves within the dashboard copy. - -5. When finished with all Grafana configuration changes in this temporary - dashboard then chose "export" of the updated dashboard copy into a JSON file - and put it up for review in Gerrit, in file /yardstick/dashboard/Yardstick-TCxxx-yyyyyyyyyyyyy. - For instance a typical default name of the file would be "Yardstick-TC001 Copy-1234567891234". - -6. Once you finish your dashboard, the next step is exporting the configuration - file and propose a patch into Yardstick. Yardstick team will review and - merge it into Yardstick repository. After approved review Yardstick team - will do an "import" of the JSON file and also a "save dashboard" as soon as - possible to replace the old live dashboard configuration. - diff --git a/docs/testing/user/userguide/12-nsb_installation.rst b/docs/testing/user/userguide/12-nsb_installation.rst new file mode 100644 index 000000000..0b0840029 --- /dev/null +++ b/docs/testing/user/userguide/12-nsb_installation.rst @@ -0,0 +1,268 @@ +.. This work is licensed under a Creative Commons Attribution 4.0 International +.. License. +.. http://creativecommons.org/licenses/by/4.0 +.. (c) OPNFV, 2016-2017 Intel Corporation. + +Yardstick - NSB Testing -Installation +===================================== + +Abstract +-------- + +The Network Service Benchmarking (NSB) extends the yardstick framework to do +VNF characterization and benchmarking in three different execution +environments viz., bare metal i.e. native Linux environment, standalone virtual +environment and managed virtualized environment (e.g. Open stack etc.). +It also brings in the capability to interact with external traffic generators +both hardware & software based for triggering and validating the traffic +according to user defined profiles. + +The steps needed to run Yardstick with NSB testing are: + +* Install Yardstick (NSB Testing). 
+* Setup pod.yaml describing the test topology.
+* Create the test configuration yaml file.
+* Run the test case.
+
+
+Prerequisites
+-------------
+
+Refer to the Yardstick Installation chapter for more information on general
+Yardstick prerequisites.
+
+Several additional prerequisites are needed for Yardstick (VNF testing):
+
+- Python Modules: pyzmq, pika.
+
+- flex
+
+- bison
+
+- build-essential
+
+- automake
+
+- libtool
+
+- librabbitmq-dev
+
+- rabbitmq-server
+
+- collectd
+
+- intel-cmt-cat
+
+Installing Yardstick on Ubuntu 14.04
+------------------------------------
+
+.. _install-framework:
+
+You can install the Yardstick framework directly on Ubuntu 14.04 or in an
+Ubuntu 14.04 Docker image. No matter which way you choose to install
+Yardstick, the following installation steps are identical.
+
+If you choose to use the Ubuntu 14.04 Docker image, you can pull it from
+Docker hub:
+
+::
+
+  docker pull ubuntu:14.04
+
+Installing Yardstick framework
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Download the source code and install the Yardstick framework:
+
+::
+
+  git clone https://gerrit.opnfv.org/gerrit/yardstick
+  cd yardstick
+  ./nsb_setup.sh
+
+The nsb_setup.sh script also automatically downloads all the packages needed
+for the NSB Testing setup.
+
+System Topology
+---------------
+
+.. code-block:: console
+
+  +----------+              +----------+
+  |          |              |          |
+  |          | (0)----->(0) |  Ping/   |
+  |   TG1    |              |   vPE/   |
+  |          |              |  2Trex   |
+  |          | (1)<-----(1) |          |
+  +----------+              +----------+
+  trafficgen_1                   vnf
+
+
+OpenStack parameters and credentials
+------------------------------------
+
+Environment variables
+^^^^^^^^^^^^^^^^^^^^^
+
+Before running Yardstick (NSB Testing) it is necessary to export the traffic
+generator libraries:
+
+::
+
+  source ~/.bash_profile
+
+Config yardstick conf
+^^^^^^^^^^^^^^^^^^^^^
+
+::
+
+  cp ./etc/yardstick/yardstick.conf.sample /etc/yardstick/yardstick.conf
+  vi /etc/yardstick/yardstick.conf
+
+Add trex_path and bin_path to the 'nsb' section:
+
+::
+
+  [DEFAULT]
+  debug = True
+  dispatcher = influxdb
+
+  [dispatcher_influxdb]
+  timeout = 5
+  target = http://{YOUR_IP_HERE}:8086
+  db_name = yardstick
+  username = root
+  password = root
+
+  [nsb]
+  trex_path=/opt/nsb_bin/trex/scripts
+  bin_path=/opt/nsb_bin
+
+
+Config pod.yaml describing Topology
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Before executing Yardstick test cases, make sure that pod.yaml reflects the
+topology and update all the required fields.
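+
+The interface details required below (the ``vpci`` PCI addresses and the
+kernel ``driver`` names) can be looked up on each node before editing the
+file. The commands below are only an illustration; the interface name
+``eth1`` is a placeholder and will differ on your hardware:
+
+::
+
+  # list Ethernet devices with their PCI addresses (for the 'vpci' fields)
+  lspci | grep -i ethernet
+  # show the kernel driver bound to an interface (for the 'driver' fields)
+  ethtool -i eth1
+
+Then copy the sample pod file and edit it to match your deployment: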
+
+::
+
+  cp /etc/yardstick/nodes/pod.yaml.nsb.sample /etc/yardstick/nodes/pod.yaml
+
+Config pod.yaml:
+
+::
+
+  nodes:
+  -
+      name: trafficgen_1
+      role: TrafficGen
+      ip: 1.1.1.1
+      user: root
+      password: r00t
+      interfaces:
+          xe0:  # logical name from topology.yaml and vnfd.yaml
+              vpci: "0000:07:00.0"
+              driver: i40e  # default kernel driver
+              dpdk_port_num: 0
+              local_ip: "152.16.100.20"
+              netmask: "255.255.255.0"
+              local_mac: "00:00:00:00:00:01"
+          xe1:  # logical name from topology.yaml and vnfd.yaml
+              vpci: "0000:07:00.1"
+              driver: i40e  # default kernel driver
+              dpdk_port_num: 1
+              local_ip: "152.16.40.20"
+              netmask: "255.255.255.0"
+              local_mac: "00:00:00:00:00:02"
+
+  -
+      name: vnf
+      role: vnf
+      ip: 1.1.1.2
+      user: root
+      password: r00t
+      host: 1.1.1.2  # BM - host == ip, virtualized env - Host - compute node
+      interfaces:
+          xe0:  # logical name from topology.yaml and vnfd.yaml
+              vpci: "0000:07:00.0"
+              driver: i40e  # default kernel driver
+              dpdk_port_num: 0
+              local_ip: "152.16.100.19"
+              netmask: "255.255.255.0"
+              local_mac: "00:00:00:00:00:03"
+
+          xe1:  # logical name from topology.yaml and vnfd.yaml
+              vpci: "0000:07:00.1"
+              driver: i40e  # default kernel driver
+              dpdk_port_num: 1
+              local_ip: "152.16.40.19"
+              netmask: "255.255.255.0"
+              local_mac: "00:00:00:00:00:04"
+      routing_table:
+      - network: "152.16.100.20"
+        netmask: "255.255.255.0"
+        gateway: "152.16.100.20"
+        if: "xe0"
+      - network: "152.16.40.20"
+        netmask: "255.255.255.0"
+        gateway: "152.16.40.20"
+        if: "xe1"
+      nd_route_tbl:
+      - network: "0064:ff9b:0:0:0:0:9810:6414"
+        netmask: "112"
+        gateway: "0064:ff9b:0:0:0:0:9810:6414"
+        if: "xe0"
+      - network: "0064:ff9b:0:0:0:0:9810:2814"
+        netmask: "112"
+        gateway: "0064:ff9b:0:0:0:0:9810:2814"
+        if: "xe1"
+
+Enable yardstick virtual environment
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Before executing yardstick test cases, make sure to activate the yardstick
+python virtual environment:
+
+::
+
+  source /opt/nsb_bin/yardstick_venv/bin/activate
+
+
+Examples and verifying the install
+----------------------------------
+
+It is recommended to verify that Yardstick was installed successfully
+by executing some simple commands and test samples. Before executing yardstick
+test cases, make sure the yardstick flavor and the yardstick-trusty-server
+image can be found in glance and that the openrc file is sourced. Below is an
+example invocation of the yardstick help command and the ping.py test sample:
+
+::
+
+  yardstick -h
+  yardstick task start samples/ping.yaml
+
+Each testing tool supported by Yardstick has a sample configuration file.
+These configuration files can be found in the **samples** directory.
+
+Default location for the output is ``/tmp/yardstick.out``.
+
+
+Run Yardstick - Network Service Testcases
+-----------------------------------------
+
+NS testing - using NSBperf CLI
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+::
+
+  source /opt/nsb_setup/yardstick_venv/bin/activate
+  PYTHONPATH: ". ~/.bash_profile"
+  cd <yardstick_repo>/yardstick/cmd
+  Execute command: ./NSBperf.py -h
+      ./NSBperf.py --vnf <vnf> --test <test yaml>
+  eg: ./NSBperf.py --vnf vpe --test tc_baremetal_rfc2544_ipv4_1flow_64B.yaml
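+
+The ``PYTHONPATH: ". ~/.bash_profile"`` line in the listings above and below
+refers to sourcing ``~/.bash_profile`` so that the traffic generator libraries
+exported there are picked up (see the Environment variables section). The
+snippet below is only a sketch of what such a profile might export; the
+variable names and paths are assumptions based on the ``trex_path`` configured
+earlier and depend on your installation:
+
+::
+
+  # hypothetical ~/.bash_profile content - names and paths are assumptions
+  export TREX_PATH=/opt/nsb_bin/trex/scripts
+  export PYTHONPATH=$PYTHONPATH:/opt/nsb_bin/trex/scripts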
+
+NS testing - using yardstick CLI
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+::
+
+  source /opt/nsb_setup/yardstick_venv/bin/activate
+  PYTHONPATH: ". ~/.bash_profile"
+  Go to the folder of the test case type we want to execute,
+      e.g. <yardstick_repo>/samples/vnf_samples/nsut/<vnf>/
+  run: yardstick --debug task start <test_case.yaml>
diff --git a/docs/testing/user/userguide/images/Yardstick_framework_architecture_in_D.png b/docs/testing/user/userguide/images/Yardstick_framework_architecture_in_D.png
new file mode 100644
index 000000000..f4065cb5e
Binary files /dev/null and b/docs/testing/user/userguide/images/Yardstick_framework_architecture_in_D.png differ
diff --git a/docs/testing/user/userguide/index.rst b/docs/testing/user/userguide/index.rst
index 1b963af61..a732fcc47 100644
--- a/docs/testing/user/userguide/index.rst
+++ b/docs/testing/user/userguide/index.rst
@@ -13,15 +13,15 @@ Performance Testing User Guide (Yardstick)
    01-introduction
    02-methodology
    03-architecture
-   04-vtc-overview
-   05-apexlake_installation
-   06-apexlake_api
-   07-nsb-overview
-   08-nsb_installation
-   09-installation
-   10-yardstick_plugin
-   11-result-store-InfluxDB
-   12-grafana
+   04-installation
+   05-yardstick_plugin
+   06-result-store-InfluxDB
+   07-grafana
+   08-vtc-overview
+   09-apexlake_installation
+   10-apexlake_api
+   11-nsb-overview
+   12-nsb_installation
    13-list-of-tcs
    glossary
    references
--
cgit 1.2.3-korg