Diffstat (limited to 'docs/testing')
8 files changed, 887 insertions, 329 deletions
diff --git a/docs/testing/developer/devguide/devguide.rst b/docs/testing/developer/devguide/devguide.rst index 4f69456fc..dade49b75 100755 --- a/docs/testing/developer/devguide/devguide.rst +++ b/docs/testing/developer/devguide/devguide.rst @@ -123,9 +123,9 @@ In this Yaml file, you can easily find it consists of two sections. One is “Sc context: name: demo - image: cirros-0.3.5 + image: yardstick-image flavor: yardstick-flavor - user: cirros + user: ubuntu placement_groups: pgrp1: @@ -361,7 +361,7 @@ Verify your patch locally before submitting Once you finish a patch, you can submit it to Gerrit for code review. A developer sends a new patch to Gerrit will trigger patch verify job on Jenkins -CI. The yardstick patch verify job includes python flake8 check, unit test and +CI. The yardstick patch verify job includes python pylint check, unit test and code coverage test. Before you submit your patch, it is recommended to run the patch verification in your local environment first. diff --git a/docs/testing/user/userguide/04-installation.rst b/docs/testing/user/userguide/04-installation.rst index f11a9c28e..caebecc09 100644 --- a/docs/testing/user/userguide/04-installation.rst +++ b/docs/testing/user/userguide/04-installation.rst @@ -3,13 +3,11 @@ .. http://creativecommons.org/licenses/by/4.0 .. (c) OPNFV, Ericsson AB, Huawei Technologies Co.,Ltd and others. +====================== Yardstick Installation ====================== -Abstract --------- - Yardstick supports installation by Docker or directly in Ubuntu. The installation procedure for Docker and direct installation are detailed in the sections below. @@ -21,126 +19,136 @@ The steps needed to run Yardstick are: 1. Install Yardstick. 2. Load OpenStack environment variables. -#. Create Yardstick flavor. -#. Build a guest image and load it into the OpenStack environment. -#. Create the test configuration ``.yaml`` file and run the test case/suite. +3. Create Yardstick flavor. +4. Build a guest image and load it into the OpenStack environment. +5. Create the test configuration ``.yaml`` file and run the test case/suite. Prerequisites ------------- -The OPNFV deployment is out of the scope of this document and can be found `here <http://artifacts.opnfv.org/opnfvdocs/colorado/docs/configguide/index.html>`_. The OPNFV platform is considered as the System Under Test (SUT) in this document. +The OPNFV deployment is out of the scope of this document and can be found in +`User Guide & Configuration Guide`_. The OPNFV platform is considered as the +System Under Test (SUT) in this document. Several prerequisites are needed for Yardstick: -#. A Jumphost to run Yardstick on -#. A Docker daemon or a virtual environment installed on the Jumphost -#. A public/external network created on the SUT -#. Connectivity from the Jumphost to the SUT public/external network +1. A Jumphost to run Yardstick on +2. A Docker daemon or a virtual environment installed on the Jumphost +3. A public/external network created on the SUT +4. Connectivity from the Jumphost to the SUT public/external network -**NOTE:** *Jumphost* refers to any server which meets the previous +.. note:: *Jumphost* refers to any server which meets the previous requirements. Normally it is the same server from where the OPNFV deployment has been triggered. -**WARNING:** Connectivity from Jumphost is essential and it is of paramount +.. warning:: Connectivity from Jumphost is essential and it is of paramount importance to make sure it is working before even considering to install and run Yardstick. 
Make also sure you understand how your networking is designed to work. -**NOTE:** If your Jumphost is operating behind a company http proxy and/or -Firewall, please consult first the section `Proxy Support (**Todo**)`_, towards -the end of this document. That section details some tips/tricks which -*may* be of help in a proxified environment. +.. note:: If your Jumphost is operating behind a company http proxy and/or +Firewall, please first consult `Proxy Support`_ section which is towards the +end of this document. That section details some tips/tricks which *may* be of +help in a proxified environment. -Install Yardstick using Docker (**recommended**) ---------------------------------------------------- +Install Yardstick using Docker (first option) (**recommended**) +--------------------------------------------------------------- -Yardstick has a Docker image. It is recommended to use this Docker image to run Yardstick test. +Yardstick has a Docker image. It is recommended to use this Docker image to run +Yardstick test. Prepare the Yardstick container -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ - -.. _dockerhub: https://hub.docker.com/r/opnfv/yardstick/ +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ -Install docker on your guest system with the following command, if not done yet:: +Install docker on your guest system with the following command, if not done +yet:: - wget -qO- https://get.docker.com/ | sh + wget -qO- https://get.docker.com/ | sh Pull the Yardstick Docker image (``opnfv/yardstick``) from the public dockerhub -registry under the OPNFV account: dockerhub_, with the following docker +registry under the OPNFV account in dockerhub_, with the following docker command:: - docker pull opnfv/yardstick:stable + sudo -EH docker pull opnfv/yardstick:stable After pulling the Docker image, check that it is available with the following docker command:: - [yardsticker@jumphost ~]$ docker images - REPOSITORY TAG IMAGE ID CREATED SIZE - opnfv/yardstick stable a4501714757a 1 day ago 915.4 MB + [yardsticker@jumphost ~]$ docker images + REPOSITORY TAG IMAGE ID CREATED SIZE + opnfv/yardstick stable a4501714757a 1 day ago 915.4 MB Run the Docker image to get a Yardstick container:: - docker run -itd --privileged -v /var/run/docker.sock:/var/run/docker.sock -p 8888:5000 --name yardstick opnfv/yardstick:stable - -Note: - -+----------------------------------------------+------------------------------+ -| parameters | Detail | -+==============================================+==============================+ -| -itd | -i: interactive, Keep STDIN | -| | open even if not attached. | -| | -t: allocate a pseudo-TTY. | -| | -d: run container in | -| | detached mode, in the | -| | background. | -+----------------------------------------------+------------------------------+ -| --privileged | If you want to build | -| | ``yardstick-image`` in | -| | Yardstick container, this | -| | parameter is needed. | -+----------------------------------------------+------------------------------+ -| -p 8888:5000 | If you want to call | -| | Yardstick API out of | -| | Yardstick container, this | -| | parameter is needed. | -+----------------------------------------------+------------------------------+ -| -v /var/run/docker.sock:/var/run/docker.sock | If you want to use yardstick | -| | env grafana/influxdb to | -| | create a grafana/influxdb | -| | container out of Yardstick | -| | container, this parameter is | -| | needed. 
| -+----------------------------------------------+------------------------------+ -| --name yardstick | The name for this container, | -| | not needed and can be | -| | defined by the user. | -+----------------------------------------------+------------------------------+ + docker run -itd --privileged -v /var/run/docker.sock:/var/run/docker.sock \ + -p 8888:5000 --name yardstick opnfv/yardstick:stable + +.. table:: Description of the parameters used with ``docker run`` command + + ======================= ==================================================== + Parameters Detail + ======================= ==================================================== + -itd -i: interactive, Keep STDIN open even if not + attached + -t: allocate a pseudo-TTY detached mode, in the + background + ======================= ==================================================== + --privileged If you want to build ``yardstick-image`` in + Yardstick container, this parameter is needed + ======================= ==================================================== + -p 8888:5000 Redirect the a host port (8888) to a container port + (5000) + ======================= ==================================================== + -v /var/run/docker.sock If you want to use yardstick env grafana/influxdb to + :/var/run/docker.sock create a grafana/influxdb container out of Yardstick + container + ======================= ==================================================== + --name yardstick The name for this container + +If the host is restarted +^^^^^^^^^^^^^^^^^^^^^^^^ + +The yardstick container must be started if the host is rebooted:: + + docker start yardstick Configure the Yardstick container environment -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ -There are three ways to configure environments for running Yardstick, which will be shown in the following sections. Before that, enter the Yardstick container:: +There are three ways to configure environments for running Yardstick, explained +in the following sections. Before that, access the Yardstick container:: - docker exec -it yardstick /bin/bash + docker exec -it yardstick /bin/bash and then configure Yardstick environments in the Yardstick container. -The first way (**recommended**) -################################### +Using the CLI command ``env prepare`` (first way) (**recommended**) +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +In the Yardstick container, the Yardstick repository is located in the +``/home/opnfv/repos`` directory. Yardstick provides a CLI to prepare OpenStack +environment variables and create Yardstick flavor and guest images +automatically:: + + yardstick env prepare -In the Yardstick container, the Yardstick repository is located in the ``/home/opnfv/repos`` directory. Yardstick provides a CLI to prepare OpenStack environment variables and create Yardstick flavor and guest images automatically:: +.. note:: Since Euphrates release, the above command will not be able to +automatically configure the ``/etc/yardstick/openstack.creds`` file. So before +running the above command, it is necessary to create the +``/etc/yardstick/openstack.creds`` file and save OpenStack environment +variables into it manually. 
If you have the openstack credential file saved +outside the Yardstick Docker container, you can do this easily by mapping the +credential file into Yardstick container using:: - yardstick env prepare + '-v /path/to/credential_file:/etc/yardstick/openstack.creds' -**NOTE**: Since Euphrates release, the above command will not able to automatically configure the /etc/yardstick/openstack.creds file. -So before running the above command, it is necessary to create the /etc/yardstick/openstack.creds file and save OpenStack environment variables into it manually. -If you have the openstack credential file saved outside the Yardstcik Docker container, you can do this easily by mapping the credential file into Yardstick container - using '-v /path/to/credential_file:/etc/yardstick/openstack.creds' when running the Yardstick container. -For details of the required OpenStack environment variables please refer to section **Export OpenStack environment variables** +when running the Yardstick container. For details of the required OpenStack +environment variables please refer to section `Export OpenStack environment +variables`_. -The env prepare command may take up to 6-8 minutes to finish building +The ``env prepare`` command may take up to 6-8 minutes to finish building yardstick-image and other environment preparation. Meanwhile if you wish to monitor the env prepare process, you can enter the Yardstick container in a new terminal window and execute the following command:: @@ -148,25 +156,26 @@ terminal window and execute the following command:: tail -f /var/log/yardstick/uwsgi.log -The second way -################ +Manually exporting the env variables and initializing OpenStack (second way) +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Export OpenStack environment variables ->>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> +###################################### -Before running Yardstick it is necessary to export OpenStack environment variables:: +Before running Yardstick it is necessary to export OpenStack environment +variables:: - source openrc + source openrc -Environment variables in the ``openrc`` file have to include at least: +Environment variables in the ``openrc`` file have to include at least:: -* ``OS_AUTH_URL`` -* ``OS_USERNAME`` -* ``OS_PASSWORD`` -* ``OS_TENANT_NAME`` -* ``EXTERNAL_NETWORK`` + OS_AUTH_URL + OS_USERNAME + OS_PASSWORD + OS_TENANT_NAME + EXTERNAL_NETWORK -A sample `openrc` file may look like this:: +A sample ``openrc`` file may look like this:: export OS_PASSWORD=console export OS_TENANT_NAME=admin @@ -175,17 +184,23 @@ A sample `openrc` file may look like this:: export OS_VOLUME_API_VERSION=2 export EXTERNAL_NETWORK=net04_ext -Manually create Yardstick falvor and guest images ->>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> -Before executing Yardstick test cases, make sure that Yardstick flavor and guest image are available in OpenStack. Detailed steps about creating the Yardstick flavor and building the Yardstick guest image can be found below. +Manual creation of Yardstick flavor and guest images +#################################################### + +Before executing Yardstick test cases, make sure that Yardstick flavor and +guest image are available in OpenStack. Detailed steps about creating the +Yardstick flavor and building the Yardstick guest image can be found below. 
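Before creating anything, a quick sanity check with the standard OpenStack
client shows whether the flavor and image already exist (a sketch; the
resource names are the defaults used throughout this guide)::

   openstack flavor list | grep yardstick-flavor
   openstack image list | grep yardstick-image

If either command returns nothing, create the missing flavor or image as
described below.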
Most of the sample test cases in Yardstick are using an OpenStack flavor called -``yardstick-flavor`` which deviates from the OpenStack standard ``m1.tiny`` flavor by the disk size - instead of 1GB it has 3GB. Other parameters are the same as in ``m1.tiny``. +``yardstick-flavor`` which deviates from the OpenStack standard ``m1.tiny`` +flavor by the disk size; instead of 1GB it has 3GB. Other parameters are the +same as in ``m1.tiny``. Create ``yardstick-flavor``:: - nova flavor-create yardstick-flavor 100 512 3 1 + openstack flavor create --disk 3 --vcpus 1 --ram 512 --swap 100 \ + yardstick-flavor Most of the sample test cases in Yardstick are using a guest image called ``yardstick-image`` which deviates from an Ubuntu Cloud Server image @@ -196,136 +211,229 @@ Yardstick has a tool for building this custom image. It is necessary to have Also you may need install several additional packages to use this tool, by follwing the commands below:: - sudo apt-get update && sudo apt-get install -y qemu-utils kpartx + sudo -EH apt-get update && sudo -EH apt-get install -y qemu-utils kpartx -This image can be built using the following command in the directory where Yardstick is installed:: +This image can be built using the following command in the directory where +Yardstick is installed:: - export YARD_IMG_ARCH='amd64' - sudo echo "Defaults env_keep += \'YARD_IMG_ARCH\'" >> /etc/sudoers - sudo tools/yardstick-img-modify tools/ubuntu-server-cloudimg-modify.sh + export YARD_IMG_ARCH='amd64' + echo "Defaults env_keep += \'YARD_IMG_ARCH\'" | sudo tee --append \ + /etc/sudoers > /dev/null + sudo -EH tools/yardstick-img-modify tools/ubuntu-server-cloudimg-modify.sh -**Warning:** Before building the guest image inside the Yardstick container, make sure the container is granted with privilege. The script will create files by default in ``/tmp/workspace/yardstick`` and the files will be owned by root! +.. warning:: Before building the guest image inside the Yardstick container, +make sure the container is granted with privilege. The script will create files +by default in ``/tmp/workspace/yardstick`` and the files will be owned by root. -The created image can be added to OpenStack using the ``glance image-create`` or via the OpenStack Dashboard. Example command is:: +The created image can be added to OpenStack using the OpenStack client or via +the OpenStack Dashboard:: - glance --os-image-api-version 1 image-create \ - --name yardstick-image --is-public true \ - --disk-format qcow2 --container-format bare \ - --file /tmp/workspace/yardstick/yardstick-image.img + openstack image create --disk-format qcow2 --container-format bare \ + --public --file /tmp/workspace/yardstick/yardstick-image.img \ + yardstick-image -.. _`Cirros 0.3.5`: http://download.cirros-cloud.net/0.3.5/cirros-0.3.5-x86_64-disk.img -.. _`Ubuntu 16.04`: https://cloud-images.ubuntu.com/xenial/current/xenial-server-cloudimg-amd64-disk1.img -Some Yardstick test cases use a `Cirros 0.3.5`_ image and/or a `Ubuntu 16.04`_ image. Add Cirros and Ubuntu images to OpenStack:: +Some Yardstick test cases use a `Cirros 0.3.5`_ image and/or a `Ubuntu 16.04`_ +image. 
Add Cirros and Ubuntu images to OpenStack:: - openstack image create \ - --disk-format qcow2 \ - --container-format bare \ - --file $cirros_image_file \ - cirros-0.3.5 + openstack image create --disk-format qcow2 --container-format bare \ + --public --file $cirros_image_file cirros-0.3.5 + openstack image create --disk-format qcow2 --container-format bare \ + --file $ubuntu_image_file Ubuntu-16.04 - openstack image create \ - --disk-format qcow2 \ - --container-format bare \ - --file $ubuntu_image_file \ - Ubuntu-16.04 +Automatic initialization of OpenStack (third way) +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -The third way -################ +Similar to the second way, the first step is also to +`Export OpenStack environment variables`_. Then the following steps should be +done. -Similar to the second way, the first step is also to `Export OpenStack environment variables`_. Then the following steps should be done. - -Automatically create Yardstcik flavor and guest images ->>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> +Automatic creation of Yardstick flavor and guest images +####################################################### Yardstick has a script for automatically creating Yardstick flavor and building -Yardstick guest images. This script is mainly used for CI and can be also used in the local environment:: +Yardstick guest images. This script is mainly used for CI and can be also used +in the local environment:: - source $YARDSTICK_REPO_DIR/tests/ci/load_images.sh + source $YARDSTICK_REPO_DIR/tests/ci/load_images.sh The Yardstick container GUI ^^^^^^^^^^^^^^^^^^^^^^^^^^^ -In Euphrates release, Yardstick implemeted a GUI for Yardstick Docker container. -After booting up Yardstick container, you can visit the GUI at <container_host_ip>:8888/gui/index.html +In Euphrates release, Yardstick implemented a GUI for Yardstick Docker +container. After booting up Yardstick container, you can visit the GUI at +``<container_host_ip>:8888/gui/index.html``. -For usage of Yardstick GUI, please watch our demo video at https://www.youtube.com/watch?v=M3qbJDp6QBk -**Note:** The Yardstick GUI is still in development, the GUI layout and features may change. +For usage of Yardstick GUI, please watch our demo video at +`Yardstick GUI demo`_. +.. note:: The Yardstick GUI is still in development, the GUI layout and +features may change. Delete the Yardstick container -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ If you want to uninstall Yardstick, just delete the Yardstick container:: - docker stop yardstick && docker rm yardstick + sudo docker stop yardstick && docker rm yardstick + -Install Yardstick directly in Ubuntu ---------------------------------------- +Install Yardstick directly in Ubuntu (second option) +---------------------------------------------------- .. _install-framework: -Alternatively you can install Yardstick framework directly in Ubuntu or in an Ubuntu Docker image. No matter which way you choose to install Yardstick, the following installation steps are identical. +Alternatively you can install Yardstick framework directly in Ubuntu or in an +Ubuntu Docker image. No matter which way you choose to install Yardstick, the +following installation steps are identical. 
If you choose to use the Ubuntu Docker image, you can pull the Ubuntu Docker image from Docker hub:: - docker pull ubuntu:16.04 + sudo -EH docker pull ubuntu:16.04 Install Yardstick -^^^^^^^^^^^^^^^^^^^^^ +^^^^^^^^^^^^^^^^^ Prerequisite preparation:: - apt-get update && apt-get install -y git python-setuptools python-pip - easy_install -U setuptools==30.0.0 - pip install appdirs==1.4.0 - pip install virtualenv + sudo -EH apt-get update && sudo -EH apt-get install -y \ + git python-setuptools python-pip + sudo -EH easy_install -U setuptools==30.0.0 + sudo -EH pip install appdirs==1.4.0 + sudo -EH pip install virtualenv + +Download the source code and install Yardstick from it:: + + git clone https://gerrit.opnfv.org/gerrit/yardstick + export YARDSTICK_REPO_DIR=~/yardstick + cd ~/yardstick + sudo -EH ./install.sh + +If the host is ever restarted, nginx and uwsgi need to be restarted:: + + service nginx restart + uwsgi -i /etc/yardstick/yardstick.ini + +Configure the Yardstick environment (**Todo**) +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +For installing Yardstick directly in Ubuntu, the ``yardstick env`` command is +not available. You need to prepare OpenStack environment variables and create +Yardstick flavor and guest images manually. + + +Uninstall Yardstick +^^^^^^^^^^^^^^^^^^^ + +For uninstalling Yardstick, just delete the virtual environment:: + + rm -rf ~/yardstick_venv + + +Install Yardstick directly in OpenSUSE +-------------------------------------- + +.. _install-framework: + +You can install Yardstick framework directly in OpenSUSE. + + +Install Yardstick +^^^^^^^^^^^^^^^^^ + +Prerequisite preparation:: + + sudo -EH zypper -n install -y gcc \ + wget \ + git \ + sshpass \ + qemu-tools \ + kpartx \ + libffi-devel \ + libopenssl-devel \ + python \ + python-devel \ + python-virtualenv \ + libxml2-devel \ + libxslt-devel \ + python-setuptools-git Create a virtual environment:: - virtualenv ~/yardstick_venv - export YARDSTICK_VENV=~/yardstick_venv - source ~/yardstick_venv/bin/activate + virtualenv ~/yardstick_venv + export YARDSTICK_VENV=~/yardstick_venv + source ~/yardstick_venv/bin/activate + sudo -EH easy_install -U setuptools Download the source code and install Yardstick from it:: - git clone https://gerrit.opnfv.org/gerrit/yardstick - export YARDSTICK_REPO_DIR=~/yardstick - cd yardstick - ./install.sh + git clone https://gerrit.opnfv.org/gerrit/yardstick + export YARDSTICK_REPO_DIR=~/yardstick + cd yardstick + sudo -EH python setup.py install + sudo -EH pip install -r requirements.txt +Install missing python modules:: -Configure the Yardstick environment (**Todo**) -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + sudo -EH pip install pyyaml \ + oslo_utils \ + oslo_serialization \ + oslo_config \ + paramiko \ + python.heatclient \ + python.novaclient \ + python.glanceclient \ + python.neutronclient \ + scp \ + jinja2 + + +Configure the Yardstick environment +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +Source the OpenStack environment variables:: + + source DEVSTACK_DIRECTORY/openrc -For installing Yardstick directly in Ubuntu, the ``yardstick env`` command is not available. You need to prepare OpenStack environment variables and create Yardstick flavor and guest images manually. +Export the Openstack external network. 
The default installation of Devstack +names the external network public:: + + export EXTERNAL_NETWORK=public + export OS_USERNAME=demo + +Change the API version used by Yardstick to v2.0 (the devstack openrc sets it +to v3):: + + export OS_AUTH_URL=http://PUBLIC_IP_ADDRESS:5000/v2.0 Uninstall Yardstick -^^^^^^^^^^^^^^^^^^^^^^ +^^^^^^^^^^^^^^^^^^^ For unistalling Yardstick, just delete the virtual environment:: - rm -rf ~/yardstick_venv + rm -rf ~/yardstick_venv Verify the installation ------------------------------ +----------------------- It is recommended to verify that Yardstick was installed successfully by executing some simple commands and test samples. Before executing Yardstick -test cases make sure ``yardstick-flavor`` and ``yardstick-image`` can be found in OpenStack and the ``openrc`` file is sourced. Below is an example -invocation of Yardstick ``help`` command and ``ping.py`` test sample:: +test cases make sure ``yardstick-flavor`` and ``yardstick-image`` can be found +in OpenStack and the ``openrc`` file is sourced. Below is an example invocation +of Yardstick ``help`` command and ``ping.py`` test sample:: - yardstick -h - yardstick task start samples/ping.yaml + yardstick -h + yardstick task start samples/ping.yaml -**NOTE:** The above commands could be run in both the Yardstick container and the Ubuntu directly. +.. note:: The above commands could be run in both the Yardstick container and +the Ubuntu directly. Each testing tool supported by Yardstick has a sample configuration file. These configuration files can be found in the ``samples`` directory. @@ -334,166 +442,145 @@ Default location for the output is ``/tmp/yardstick.out``. Deploy InfluxDB and Grafana using Docker -------------------------------------------- +---------------------------------------- -Without InfluxDB, Yardstick stores results for runnning test case in the file -``/tmp/yardstick.out``. However, it's unconvenient to retrieve and display +Without InfluxDB, Yardstick stores results for running test case in the file +``/tmp/yardstick.out``. However, it's inconvenient to retrieve and display test results. So we will show how to use InfluxDB to store data and use Grafana to display data in the following sections. -Automatically deploy InfluxDB and Grafana containers (**recommended**) -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +Automatic deployment of InfluxDB and Grafana containers (**recommended**) +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Firstly, enter the Yardstick container:: - docker exec -it yardstick /bin/bash + sudo -EH docker exec -it yardstick /bin/bash Secondly, create InfluxDB container and configure with the following command:: - yardstick env influxdb + yardstick env influxdb Thirdly, create and configure Grafana container:: - yardstick env grafana + yardstick env grafana -Then you can run a test case and visit http://host_ip:3000 (``admin``/``admin``) to see the results. +Then you can run a test case and visit http://host_ip:3000 +(``admin``/``admin``) to see the results. -**NOTE:** Executing ``yardstick env`` command to deploy InfluxDB and Grafana requires Jumphost's docker API version => 1.24. Run the following command to check the docker API version on the Jumphost:: +.. note:: Executing ``yardstick env`` command to deploy InfluxDB and Grafana +requires Jumphost's docker API version => 1.24. 
Run the following command to +check the docker API version on the Jumphost:: - docker version - -Manually deploy InfluxDB and Grafana containers -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ - -You could also deploy influxDB and Grafana containers manually on the Jumphost. -The following sections show how to do. + docker version -.. pull docker images -Pull docker images -#################### +Manual deployment of InfluxDB and Grafana containers +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ -:: +You can also deploy influxDB and Grafana containers manually on the Jumphost. +The following sections show how to do. - docker pull tutum/influxdb - docker pull grafana/grafana +Pull docker images:: -Run and configure influxDB -############################### + sudo -EH docker pull tutum/influxdb + sudo -EH docker pull grafana/grafana Run influxDB:: - docker run -d --name influxdb \ - -p 8083:8083 -p 8086:8086 --expose 8090 --expose 8099 \ - tutum/influxdb - docker exec -it influxdb bash + sudo -EH docker run -d --name influxdb \ + -p 8083:8083 -p 8086:8086 --expose 8090 --expose 8099 \ + tutum/influxdb + docker exec -it influxdb bash Configure influxDB:: - influx - >CREATE USER root WITH PASSWORD 'root' WITH ALL PRIVILEGES - >CREATE DATABASE yardstick; - >use yardstick; - >show MEASUREMENTS; - -Run and configure Grafana -############################### + influx + >CREATE USER root WITH PASSWORD 'root' WITH ALL PRIVILEGES + >CREATE DATABASE yardstick; + >use yardstick; + >show MEASUREMENTS; Run Grafana:: - docker run -d --name grafana -p 3000:3000 grafana/grafana + sudo -EH docker run -d --name grafana -p 3000:3000 grafana/grafana -Log on http://{YOUR_IP_HERE}:3000 using ``admin``/``admin`` and configure database resource to be ``{YOUR_IP_HERE}:8086``. +Log on http://{YOUR_IP_HERE}:3000 using ``admin``/``admin`` and configure +database resource to be ``{YOUR_IP_HERE}:8086``. .. image:: images/Grafana_config.png :width: 800px - :alt: Grafana data source configration + :alt: Grafana data source configuration -Configure ``yardstick.conf`` -############################## +Configure ``yardstick.conf``:: -:: - - docker exec -it yardstick /bin/bash - cp etc/yardstick/yardstick.conf.sample /etc/yardstick/yardstick.conf - vi /etc/yardstick/yardstick.conf + sudo -EH docker exec -it yardstick /bin/bash + sudo cp etc/yardstick/yardstick.conf.sample /etc/yardstick/yardstick.conf + sudo vi /etc/yardstick/yardstick.conf Modify ``yardstick.conf``:: - [DEFAULT] - debug = True - dispatcher = influxdb + [DEFAULT] + debug = True + dispatcher = influxdb - [dispatcher_influxdb] - timeout = 5 - target = http://{YOUR_IP_HERE}:8086 - db_name = yardstick - username = root - password = root + [dispatcher_influxdb] + timeout = 5 + target = http://{YOUR_IP_HERE}:8086 + db_name = yardstick + username = root + password = root Now you can run Yardstick test cases and store the results in influxDB. Deploy InfluxDB and Grafana directly in Ubuntu (**Todo**) ------------------------------------------------------------ +--------------------------------------------------------- Yardstick common CLI -------------------- -list test cases ->>>>>>>>>>>>>>> -**yardstick testcase list** - -This command line would list all test cases in yardstick. 
-It would show like below:: - - +--------------------------------------------------------------------------------------- - | Testcase Name | Description - +--------------------------------------------------------------------------------------- - | opnfv_yardstick_tc001 | Measure network throughput using pktgen - | opnfv_yardstick_tc002 | measure network latency using ping - | opnfv_yardstick_tc005 | Measure Storage IOPS, throughput and latency using fio. - | opnfv_yardstick_tc006 | Measure volume storage IOPS, throughput and latency using fio. - | opnfv_yardstick_tc008 | Measure network throughput and packet loss using Pktgen - | opnfv_yardstick_tc009 | Measure network throughput and packet loss using pktgen - | opnfv_yardstick_tc010 | measure memory read latency using lmbench. - | opnfv_yardstick_tc011 | Measure packet delay variation (jitter) using iperf3. - | opnfv_yardstick_tc012 | Measure memory read and write bandwidth using lmbench. - | opnfv_yardstick_tc014 | Measure Processing speed using unixbench. - | opnfv_yardstick_tc019 | Sample test case for the HA of controller node service. - ... - +--------------------------------------------------------------------------------------- -show a test case config file ->>>>>>>>>>>>>>>>>>>>>>>>>>>> -Take opnfv_yardstick_tc002 for an example. This test case measure network latency. -You just need to type in **yardstick testcase show opnfv_yardstick_tc002**, and the console -would show the config yaml of this test case:: - ############################################################################## - # Copyright (c) 2017 kristian.hunt@gmail.com and others. - # - # All rights reserved. This program and the accompanying materials - # are made available under the terms of the Apache License, Version 2.0 - # which accompanies this distribution, and is available at - # http://www.apache.org/licenses/LICENSE-2.0 - ############################################################################## - --- - - schema: "yardstick:task:0.1" - description: > +List test cases +^^^^^^^^^^^^^^^ + +``yardstick testcase list``: This command line would list all test cases in +Yardstick. It would show like below:: + + +--------------------------------------------------------------------------------------- + | Testcase Name | Description + +--------------------------------------------------------------------------------------- + | opnfv_yardstick_tc001 | Measure network throughput using pktgen + | opnfv_yardstick_tc002 | measure network latency using ping + | opnfv_yardstick_tc005 | Measure Storage IOPS, throughput and latency using fio. + ... + +--------------------------------------------------------------------------------------- + + +Show a test case config file +^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +Take opnfv_yardstick_tc002 for an example. This test case measure network +latency. 
You just need to type in ``yardstick testcase show +opnfv_yardstick_tc002``, and the console would show the config yaml of this +test case:: + + --- + + schema: "yardstick:task:0.1" + description: > Yardstick TC002 config file; measure network latency using ping; - {% set image = image or "cirros-0.3.5" %} + {% set image = image or "cirros-0.3.5" %} - {% set provider = provider or none %} - {% set physical_network = physical_network or 'physnet1' %} - {% set segmentation_id = segmentation_id or none %} - {% set packetsize = packetsize or 100 %} + {% set provider = provider or none %} + {% set physical_network = physical_network or 'physnet1' %} + {% set segmentation_id = segmentation_id or none %} + {% set packetsize = packetsize or 100 %} - scenarios: - {% for i in range(2) %} - - + scenarios: + {% for i in range(2) %} + - type: Ping options: packetsize: {{packetsize}} @@ -508,9 +595,9 @@ would show the config yaml of this test case:: sla: max_rtt: 10 action: monitor - {% endfor %} + {% endfor %} - context: + context: name: demo image: {{image}} flavor: yardstick-flavor @@ -538,39 +625,41 @@ would show the config yaml of this test case:: {% endif %} {% endif %} -start a task to run yardstick test case ->>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> -If you want run a test case, then you need to use **yardstick task start <test_case_path>** -this command support some parameters as below: - -+---------------------+--------------------------------------------------+ -| Parameters | Detail | -+=====================+==================================================+ -| -d | show debug log of yardstick running | -| | | -+---------------------+--------------------------------------------------+ -| --task-args | If you want to customize test case parameters, | -| | use "--task-args" to pass the value. The format | -| | is a json string with parameter key-value pair. | -| | | -+---------------------+--------------------------------------------------+ -| --task-args-file | If you want to use yardstick | -| | env prepare command(or | -| | related API) to load the | -+---------------------+--------------------------------------------------+ -| --parse-only | | -| | | -| | | -+---------------------+--------------------------------------------------+ -| --output-file \ | Specify where to output the log. if not pass, | -| OUTPUT_FILE_PATH | the default value is | -| | "/tmp/yardstick/yardstick.log" | -| | | -+---------------------+--------------------------------------------------+ -| --suite \ | run a test suite, TEST_SUITE_PATH speciy where | -| TEST_SUITE_PATH | the test suite locates | -| | | -+---------------------+--------------------------------------------------+ + +Start a task to run yardstick test case +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +If you want run a test case, then you need to use ``yardstick task start +<test_case_path>`` this command support some parameters as below:: + + +---------------------+--------------------------------------------------+ + | Parameters | Detail | + +=====================+==================================================+ + | -d | show debug log of yardstick running | + | | | + +---------------------+--------------------------------------------------+ + | --task-args | If you want to customize test case parameters, | + | | use "--task-args" to pass the value. The format | + | | is a json string with parameter key-value pair. 
| + | | | + +---------------------+--------------------------------------------------+ + | --task-args-file | If you want to use yardstick | + | | env prepare command(or | + | | related API) to load the | + +---------------------+--------------------------------------------------+ + | --parse-only | | + | | | + | | | + +---------------------+--------------------------------------------------+ + | --output-file \ | Specify where to output the log. if not pass, | + | OUTPUT_FILE_PATH | the default value is | + | | "/tmp/yardstick/yardstick.log" | + | | | + +---------------------+--------------------------------------------------+ + | --suite \ | run a test suite, TEST_SUITE_PATH specify where | + | TEST_SUITE_PATH | the test suite locates | + | | | + +---------------------+--------------------------------------------------+ Run Yardstick in a local environment @@ -578,7 +667,7 @@ Run Yardstick in a local environment We also have a guide about how to run Yardstick in a local environment. This work is contributed by Tapio Tallgren. -You can find this guide at `here <https://wiki.opnfv.org/display/yardstick/How+to+run+Yardstick+in+a+local+environment>`_. +You can find this guide at `How to run Yardstick in a local environment`_. Create a test suite for Yardstick @@ -588,19 +677,20 @@ A test suite in yardstick is a yaml file which include one or more test cases. Yardstick is able to support running test suite task, so you can customize your own test suite and run it in one task. -``tests/opnfv/test_suites`` is the folder where Yardstick puts CI test suite. A typical test suite is like below (the ``fuel_test_suite.yaml`` example):: +``tests/opnfv/test_suites`` is the folder where Yardstick puts CI test suite. +A typical test suite is like below (the ``fuel_test_suite.yaml`` example):: - --- - # Fuel integration test task suite + --- + # Fuel integration test task suite - schema: "yardstick:suite:0.1" + schema: "yardstick:suite:0.1" - name: "fuel_test_suite" - test_cases_dir: "samples/" - test_cases: - - + name: "fuel_test_suite" + test_cases_dir: "samples/" + test_cases: + - file_name: ping.yaml - - + - file_name: iperf3.yaml As you can see, there are two test cases in the ``fuel_test_suite.yaml``. The @@ -612,18 +702,18 @@ Yardstick test suite also supports constraints and task args for each test case. Here is another sample (the ``os-nosdn-nofeature-ha.yaml`` example) to show this, which is digested from one big test suite:: - --- + --- - schema: "yardstick:suite:0.1" + schema: "yardstick:suite:0.1" - name: "os-nosdn-nofeature-ha" - test_cases_dir: "tests/opnfv/test_cases/" - test_cases: - - + name: "os-nosdn-nofeature-ha" + test_cases_dir: "tests/opnfv/test_cases/" + test_cases: + - file_name: opnfv_yardstick_tc002.yaml - - + - file_name: opnfv_yardstick_tc005.yaml - - + - file_name: opnfv_yardstick_tc043.yaml constraint: installer: compass @@ -641,6 +731,77 @@ All in all, to create a test suite in Yardstick, you just need to create a yaml file and add test cases, constraint or task arguments if necessary. 
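Once the suite file is in place, it can be run with the ``--suite`` option of
the task command described earlier. As a sketch, assuming the
``fuel_test_suite.yaml`` sample above is stored in the
``tests/opnfv/test_suites`` folder::

   yardstick task start --suite tests/opnfv/test_suites/fuel_test_suite.yaml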
-Proxy Support (**Todo**)
----------------------------
+Proxy Support
+-------------
+
+To configure the Jumphost to access the Internet through a proxy, it is
+necessary to export several variables to the environment, contained in the
+following script::
+
+    #!/bin/sh
+    _proxy=<proxy_address>
+    _proxyport=<proxy_port>
+    _ip=$(hostname -I | awk '{print $1}')
+    export ftp_proxy=http://$_proxy:$_proxyport
+    export FTP_PROXY=http://$_proxy:$_proxyport
+    export http_proxy=http://$_proxy:$_proxyport
+    export HTTP_PROXY=http://$_proxy:$_proxyport
+    export https_proxy=http://$_proxy:$_proxyport
+    export HTTPS_PROXY=http://$_proxy:$_proxyport
+    export no_proxy=127.0.0.1,localhost,$_ip,$(hostname),<.localdomain>
+    export NO_PROXY=127.0.0.1,localhost,$_ip,$(hostname),<.localdomain>
+
+Enabling Internet access from a container using ``docker`` depends on the OS
+version. On Ubuntu 14.04 LTS, which uses SysVinit, ``/etc/default/docker`` must
+be modified::
+
+    .......
+    # If you need Docker to use an HTTP proxy, it can also be specified here.
+    export http_proxy="http://<proxy_address>:<proxy_port>/"
+    export https_proxy="https://<proxy_address>:<proxy_port>/"
+
+Then it is necessary to restart the ``docker`` service::
+
+    sudo -EH service docker restart
+
+On Ubuntu 16.04 LTS, which uses Systemd, it is necessary to create a drop-in
+directory::
+
+    sudo mkdir /etc/systemd/system/docker.service.d
+
+Then, the proxy configuration will be stored in the following file::
+
+    # cat /etc/systemd/system/docker.service.d/http-proxy.conf
+    [Service]
+    Environment="HTTP_PROXY=https://<proxy_address>:<proxy_port>/"
+    Environment="HTTPS_PROXY=https://<proxy_address>:<proxy_port>/"
+    Environment="NO_PROXY=localhost,127.0.0.1,<localaddress>,<.localdomain>"
+
+The changes need to be flushed and the ``docker`` service restarted::
+
+    sudo systemctl daemon-reload
+    sudo systemctl restart docker
+
+Any container already created won't contain these modifications. If needed,
+stop and delete the container::
+
+    sudo docker stop yardstick
+    sudo docker rm yardstick
+
+.. warning:: Be careful, the above ``rm`` command will delete the container
+completely. Everything on this container will be lost.
+
+Then follow the previous instructions `Prepare the Yardstick container`_ to
+rebuild the Yardstick container.
+
+
+References
+----------
+
+.. _`User Guide & Configuration Guide`: http://docs.opnfv.org/en/latest/release/userguide.introduction.html
+.. _dockerhub: https://hub.docker.com/r/opnfv/yardstick/
+.. _`Cirros 0.3.5`: http://download.cirros-cloud.net/0.3.5/cirros-0.3.5-x86_64-disk.img
+.. _`Ubuntu 16.04`: https://cloud-images.ubuntu.com/xenial/current/xenial-server-cloudimg-amd64-disk1.img
+.. _`Yardstick GUI demo`: https://www.youtube.com/watch?v=M3qbJDp6QBk
+..
_`How to run Yardstick in a local environment`: https://wiki.opnfv.org/display/yardstick/How+to+run+Yardstick+in+a+local+environment diff --git a/docs/testing/user/userguide/12-nsb_installation.rst b/docs/testing/user/userguide/12-nsb_installation.rst index 8cc26acd5..a584ca231 100644 --- a/docs/testing/user/userguide/12-nsb_installation.rst +++ b/docs/testing/user/userguide/12-nsb_installation.rst @@ -112,12 +112,52 @@ Download the source code and install Yardstick from it # git checkout <tag or stable branch> git checkout stable/euphrates - # For Bare-Metal or Standalone Virtualization - ./nsb_setup.sh +Configure the network proxy, either using the environment variables or setting +the global environment file: - # For OpenStack - ./nsb_setup.sh <path to admin-openrc.sh> +.. code-block:: ini + cat /etc/environment + http_proxy='http://proxy.company.com:port' + https_proxy='http://proxy.company.com:port' +.. code-block:: console + export http_proxy='http://proxy.company.com:port' + export https_proxy='http://proxy.company.com:port' + +The last step is to modify the Yardstick installation inventory, used by +Ansible: + +.. code-block:: ini + cat ./ansible/yardstick-install-inventory.ini + [jumphost] + localhost ansible_connection=local + + [yardstick-standalone] + yardstick-standalone-node ansible_host=192.168.1.2 + yardstick-standalone-node-2 ansible_host=192.168.1.3 + + # section below is only due backward compatibility. + # it will be removed later + [yardstick:children] + jumphost + + [all:vars] + ansible_user=root + ansible_pass=root + + +To execute an installation for a Bare-Metal or a Standalone context: + +.. code-block:: console + + ./nsb_setup.sh + + +To execute an installation for an OpenStack context: + +.. code-block:: console + + ./nsb_setup.sh <path to admin-openrc.sh> Above command setup docker with latest yardstick code. To execute diff --git a/docs/testing/user/userguide/nsb/nsb-list-of-tcs.rst b/docs/testing/user/userguide/nsb/nsb-list-of-tcs.rst index 43aa3d69a..895837283 100644 --- a/docs/testing/user/userguide/nsb/nsb-list-of-tcs.rst +++ b/docs/testing/user/userguide/nsb/nsb-list-of-tcs.rst @@ -26,3 +26,5 @@ NSB PROX Test Case Descriptions tc_prox_context_mpls_tagging_port tc_prox_context_buffering_port tc_prox_context_load_balancer_port + tc_prox_context_vpe_port + tc_prox_context_lw_after_port diff --git a/docs/testing/user/userguide/nsb/tc_prox_context_lw_aftr_port.rst b/docs/testing/user/userguide/nsb/tc_prox_context_lw_aftr_port.rst new file mode 100644 index 000000000..5a1fada05 --- /dev/null +++ b/docs/testing/user/userguide/nsb/tc_prox_context_lw_aftr_port.rst @@ -0,0 +1,107 @@ +.. This work is licensed under a Creative Commons Attribution 4.0 International +.. License. +.. http://creativecommons.org/licenses/by/4.0 +.. (c) OPNFV, 2017 Intel Corporation. 
+ +************************************************ +Yardstick Test Case Description: NSB PROX LwAFTR +************************************************ + ++-----------------------------------------------------------------------------+ +|NSB PROX test for NFVI characterization | +| | ++--------------+--------------------------------------------------------------+ +|test case id | tc_prox_{context}_lw_aftr-{port_num} | +| | | +| | * context = baremetal or heat_context; | +| | * port_num = 4; | +| | | ++--------------+--------------------------------------------------------------+ +|metric | * Network Throughput; | +| | * TG Packets Out; | +| | * TG Packets In; | +| | * VNF Packets Out; | +| | * VNF Packets In; | +| | * Dropped packets; | +| | | ++--------------+--------------------------------------------------------------+ +|test purpose | The PROX LW_AFTR test will take packets in from one | +| | port and remove the ipv6 encapsulation and forward them to | +| | another port. While forwarded packets in other direction | +| | will be encapsulated in an ipv6 header. | +| | | +| | The lw_aftr test cases are implemented to run in baremetal | +| | and heat context an require 4 port topology to run the | +| | default configuration. | +| | | ++--------------+--------------------------------------------------------------+ +|configuration | The LW_AFTR test cases are listed below: | +| | | +| | * tc_prox_baremetal_lw_aftr-4.yaml | +| | * tc_prox_heat_context_lw_aftr-4.yaml | +| | | +| | Test duration is set as 300sec for each test. | +| | The minimum packet size for MPLS test is 68 bytes. This is | +| | set in the traffic profile and can be configured to use | +| | higher packet sizes. | +| | | ++--------------+--------------------------------------------------------------+ +|test tool | PROX | +| | PROX is a DPDK application that can simulate VNF workloads | +| | and can generate traffic and used for NFVI characterization | +| | | ++--------------+--------------------------------------------------------------+ +|applicability | The PROX lwAFTR test cases can be configured with | +| | different: | +| | | +| | * packet sizes; | +| | * test durations; | +| | * tolerated loss; | +| | | +| | Default values exist. | +| | | ++--------------+--------------------------------------------------------------+ +|pre-test | For Openstack test case image (yardstick-samplevnfs) needs | +|conditions | to be installed into Glance with Prox and Dpdk included in | +| | it. | +| | | +| | For Baremetal tests cases Prox and Dpdk must be installed in | +| | the hosts where the test is executed. The pod.yaml file must | +| | have the necessary system and NIC information | +| | | ++--------------+--------------------------------------------------------------+ +|test sequence | description and expected result | +| | | ++--------------+--------------------------------------------------------------+ +|step 1 | For Baremetal test: The TG and VNF are started on the hosts | +| | based on the pod file. | +| | | +| | For Heat test: Two host VMs are booted, as Traffic generator | +| | and VNF(LW_AFTR workload) based on the test flavor. | +| | | ++--------------+--------------------------------------------------------------+ +|step 2 | Yardstick is connected with the TG and VNF by using ssh. | +| | The test will resolve the topology and instantiate the VNF | +| | and TG and collect the KPI's/metrics. 
| +| | | ++--------------+--------------------------------------------------------------+ +|step 3 | The TG will send packets to the VNF. If the number of | +| | dropped packets is more than the tolerated loss the line | +| | rate or throughput is halved. This is done until the dropped | +| | packets are within an acceptable tolerated loss. | +| | | +| | The KPI is the number of packets per second for 86 bytes | +| | packet size with an accepted minimal packet loss for the | +| | default configuration. | +| | | ++--------------+--------------------------------------------------------------+ +|step 4 | In Baremetal test: The test quits the application and unbind | +| | the dpdk ports. | +| | | +| | In Heat test: Two host VMs are deleted on test completion. | +| | | ++--------------+--------------------------------------------------------------+ +|test verdict | The test case will achieve a Throughput with an accepted | +| | minimal tolerated packet loss. | ++--------------+--------------------------------------------------------------+ + diff --git a/docs/testing/user/userguide/nsb/tc_prox_context_vpe_port.rst b/docs/testing/user/userguide/nsb/tc_prox_context_vpe_port.rst new file mode 100644 index 000000000..6827b0525 --- /dev/null +++ b/docs/testing/user/userguide/nsb/tc_prox_context_vpe_port.rst @@ -0,0 +1,108 @@ +.. This work is licensed under a Creative Commons Attribution 4.0 International +.. License. +.. http://creativecommons.org/licenses/by/4.0 +.. (c) OPNFV, 2017 Intel Corporation. + +********************************************** +Yardstick Test Case Description: NSB PROXi VPE +********************************************** + ++-----------------------------------------------------------------------------+ +|NSB PROX test for NFVI characterization | +| | ++--------------+--------------------------------------------------------------+ +|test case id | tc_prox_{context}_vpe-{port_num} | +| | | +| | * context = baremetal or heat_context; | +| | * port_num = 4; | +| | | ++--------------+--------------------------------------------------------------+ +|metric | * Network Throughput; | +| | * TG Packets Out; | +| | * TG Packets In; | +| | * VNF Packets Out; | +| | * VNF Packets In; | +| | * Dropped packets; | +| | | ++--------------+--------------------------------------------------------------+ +|test purpose | The PROX VPE test handles packet processing, routing, QinQ | +| | encapsulation, flows, ACL rules, adds/removes MPLS tagging | +| | and performs QoS before forwarding packet to another port. | +| | The reverse applies to forwarded packets in the other | +| | direction. | +| | | +| | The VPE test cases are implemented to run in baremetal | +| | and heat context an require 4 port topology to run the | +| | default configuration. | +| | | ++--------------+--------------------------------------------------------------+ +|configuration | The VPE test cases are listed below: | +| | | +| | * tc_prox_baremetal_vpe-4.yaml | +| | * tc_prox_heat_context_vpe-4.yaml | +| | | +| | Test duration is set as 300sec for each test. | +| | The minimum packet size for VPE test is 68 bytes. This is | +| | set in the traffic profile and can be configured to use | +| | higher packet sizes. 
| +| | | ++--------------+--------------------------------------------------------------+ +|test tool | PROX | +| | PROX is a DPDK application that can simulate VNF workloads | +| | and can generate traffic and used for NFVI characterization | +| | | ++--------------+--------------------------------------------------------------+ +|applicability | The PROX VPE test cases can be configured with | +| | different: | +| | | +| | * packet sizes; | +| | * test durations; | +| | * tolerated loss; | +| | | +| | Default values exist. | +| | | ++--------------+--------------------------------------------------------------+ +|pre-test | For Openstack test case image (yardstick-samplevnfs) needs | +|conditions | to be installed into Glance with Prox and Dpdk included in | +| | it. | +| | | +| | For Baremetal tests cases Prox and Dpdk must be installed in | +| | the hosts where the test is executed. The pod.yaml file must | +| | have the necessary system and NIC information | +| | | ++--------------+--------------------------------------------------------------+ +|test sequence | description and expected result | +| | | ++--------------+--------------------------------------------------------------+ +|step 1 | For Baremetal test: The TG and VNF are started on the hosts | +| | based on the pod file. | +| | | +| | For Heat test: Two host VMs are booted, as Traffic generator | +| | and VNF(VPE workload) based on the test flavor. | +| | | ++--------------+--------------------------------------------------------------+ +|step 2 | Yardstick is connected with the TG and VNF by using ssh. | +| | The test will resolve the topology and instantiate the VNF | +| | and TG and collect the KPI's/metrics. | +| | | ++--------------+--------------------------------------------------------------+ +|step 3 | The TG will send packets to the VNF. If the number of | +| | dropped packets is more than the tolerated loss the line | +| | rate or throughput is halved. This is done until the dropped | +| | packets are within an acceptable tolerated loss. | +| | | +| | The KPI is the number of packets per second for 68 bytes | +| | packet size with an accepted minimal packet loss for the | +| | default configuration. | +| | | ++--------------+--------------------------------------------------------------+ +|step 4 | In Baremetal test: The test quits the application and unbind | +| | the dpdk ports. | +| | | +| | In Heat test: Two host VMs are deleted on test completion. | +| | | ++--------------+--------------------------------------------------------------+ +|test verdict | The test case will achieve a Throughput with an accepted | +| | minimal tolerated packet loss. | ++--------------+--------------------------------------------------------------+ + diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc081.rst b/docs/testing/user/userguide/opnfv_yardstick_tc081.rst index 90af8a382..793c3fdd5 100644 --- a/docs/testing/user/userguide/opnfv_yardstick_tc081.rst +++ b/docs/testing/user/userguide/opnfv_yardstick_tc081.rst @@ -4,7 +4,7 @@ .. (c) OPNFV, Huawei Technologies Co.,Ltd and others. ************************************* -Yardstick Test Case Description TC080 +Yardstick Test Case Description TC081 ************************************* .. 
_cirros-image: https://download.cirros-cloud.net @@ -21,7 +21,7 @@ Yardstick Test Case Description TC080 |metric | RTT (Round Trip Time) | | | | +--------------+--------------------------------------------------------------+ -|test purpose | The purpose of TC080 is to do a basic verification that | +|test purpose | The purpose of TC081 is to do a basic verification that | | | network latency is within acceptable boundaries when packets | | | travel between a containers and a VM. | | | | diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc084.rst b/docs/testing/user/userguide/opnfv_yardstick_tc084.rst new file mode 100644 index 000000000..2e7b28e25 --- /dev/null +++ b/docs/testing/user/userguide/opnfv_yardstick_tc084.rst @@ -0,0 +1,140 @@ +.. This work is licensed under a Creative Commons Attribution 4.0 International +.. License. +.. http://creativecommons.org/licenses/by/4.0 +.. (c) OPNFV, Huawei Technologies Co.,Ltd and others. + +************************************* +Yardstick Test Case Description TC084 +************************************* + +.. _spec_cpu_2006: https://www.spec.org/cpu2006/ + ++-----------------------------------------------------------------------------+ +|Compute Performance | +| | ++--------------+--------------------------------------------------------------+ +|test case id | OPNFV_YARDSTICK_TC084_SPEC CPU 2006 FOR VM | +| | | ++--------------+--------------------------------------------------------------+ +|metric | compute-intensive performance | +| | | ++--------------+--------------------------------------------------------------+ +|test purpose | The purpose of TC084 is to evaluate the IaaS compute | +| | performance by using SPEC CPU 2006 benchmark. The SPEC CPU | +| | 2006 benchmark has several different ways to measure | +| | computer performance. One way is to measure how fast the | +| | computer completes a single task; this is called a speed | +| | measurement. Another way is to measure how many tasks | +| | computer can accomplish in a certain amount of time; this is | +| | called a throughput, capacity or rate measurement. | +| | | ++--------------+--------------------------------------------------------------+ +|test tool | SPEC CPU 2006 | +| | | +| | The SPEC CPU 2006 benchmark is SPEC's industry-standardized, | +| | CPU-intensive benchmark suite, stressing a system's | +| | processor, memory subsystem and compiler. This benchmark | +| | suite includes the SPECint benchmarks and the SPECfp | +| | benchmarks. The SPECint 2006 benchmark contains 12 different | +| | benchmark tests and the SPECfp 2006 benchmark contains 19 | +| | different benchmark tests. | +| | | +| | SPEC CPU 2006 is not always part of a Linux distribution. | +| | SPEC requires that users purchase a license and agree with | +| | their terms and conditions. For this test case, users must | +| | manually download cpu2006-1.2.iso from the SPEC website and | +| | save it under the yardstick/resources folder (e.g. /home/ | +| | opnfv/repos/yardstick/yardstick/resources/cpu2006-1.2.iso) | +| | SPEC CPU® 2006 benchmark is available for purchase via the | +| | SPEC order form (https://www.spec.org/order.html). | +| | | ++--------------+--------------------------------------------------------------+ +|test | This test case uses SPEC CPU 2006 benchmark to measure | +|description | compute-intensive performance of VMs. 
| +| | | ++--------------+--------------------------------------------------------------+ +|configuration | file: opnfv_yardstick_tc084.yaml | +| | | +| | benchmark_subset is set to int. | +| | | +| | SLA is not available in this test case. | +| | | ++--------------+--------------------------------------------------------------+ +|applicability | Test can be configured with different: | +| | | +| | * benchmark_subset - a subset of SPEC CPU 2006 benchmarks | +| | to run; | +| | * SPECint_benchmark - a SPECint benchmark to run; | +| | * SPECint_benchmark - a SPECfp benchmark to run; | +| | * output_format - desired report format; | +| | * runspec_config - SPEC CPU 2006 config file provided to | +| | the runspec binary; | +| | * runspec_iterations - the number of benchmark iterations | +| | to execute. For a reportable run, must be 3; | +| | * runspec_tune - tuning to use (base, peak, or all). For a | +| | reportable run, must be either base or all. Reportable | +| | runs do base first, then (optionally) peak; | +| | * runspec_size - size of input data to run (test, train, or | +| | ref). Reportable runs ensure that your binaries can | +| | produce correct results with the test and train workloads | +| | | ++--------------+--------------------------------------------------------------+ +|usability | This test case is used for executing SPEC CPU 2006 benchmark | +| | on virtual machines. The SPECint 2006 benchmark takes | +| | approximately 5 hours. (The time may vary due to different | +| | VM cpu configurations) | +| | | ++--------------+--------------------------------------------------------------+ +|references | spec_cpu_2006_ | +| | | +| | ETSI-NFV-TST001 | +| | | ++--------------+--------------------------------------------------------------+ +|pre-test | To run and install SPEC CPU 2006, the following are | +|conditions | required: | +| | * For SPECint 2006: Both C99 and C++98 compilers are | +| | installed in VM images; | +| | * For SPECfp 2006: All three of C99, C++98 and Fortran-95 | +| | compilers installed in VM images; | +| | * At least 4GB of disk space availabile on VM. | +| | | +| | gcc 4.8.* and g++ 4.8.* version have been tested in Ubuntu | +| | 14.04, Ubuntu 16.04 and Redhat Enterprise Linux 7.4 image. | +| | Higher gcc and g++ version may cause compiling error. | +| | | +| | For more SPEC CPU 2006 dependencies please visit | +| | (https://www.spec.org/cpu2006/Docs/techsupport.html) | +| | | ++--------------+--------------------------------------------------------------+ +|test sequence | description and expected result | +| | | ++--------------+--------------------------------------------------------------+ +|step 1 | cpu2006-1.2.iso has been saved under the yardstick/resources | +| | folder (e.g. /home/opnfv/repos/yardstick/yardstick/resources | +| | /cpu2006-1.2.iso). Additionally, to use your custom runspec | +| | config file you can save it under the yardstick/resources/ | +| | files folder and specify the config file name in the | +| | runspec_config parameter. | +| | | ++--------------+--------------------------------------------------------------+ +|step 2 | Upload SPEC CPU 2006 ISO to the target VM using scp and | +| | install SPEC CPU 2006. | +| | | ++--------------+--------------------------------------------------------------+ +|step 3 | Connect to the target server using SSH. | +| | If custom runspec config file is used, copy this file from | +| | yardstick to the target VM via the SSH tunnel. 
| +| | | ++--------------+--------------------------------------------------------------+ +|step 4 | SPEC CPU 2006 benchmark is invoked and SPEC CPU 2006 metrics | +| | are generated. | +| | | ++--------------+--------------------------------------------------------------+ +|step 5 | Text, HTML, CSV, PDF, and Configuration file outputs for the | +| | SPEC CPU 2006 metrics are fetched from the VM and stored | +| | under /tmp/result folder. | +| | | ++--------------+--------------------------------------------------------------+ +|test verdict | None. SPEC CPU 2006 results are collected and stored. | +| | | ++--------------+--------------------------------------------------------------+ |
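As a usage sketch, this test case can be launched with the ``yardstick task
start`` command described earlier in this guide; the ``--task-args`` value
below is illustrative and simply overrides the default benchmark subset::

   yardstick task start tests/opnfv/test_cases/opnfv_yardstick_tc084.yaml \
       --task-args '{"benchmark_subset": "int"}'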