From 5a7bd787c86bf706153b3cc6f41fc8d35569c955 Mon Sep 17 00:00:00 2001
From: JingLu5
Date: Thu, 18 Aug 2016 09:59:25 +0800
Subject: Restructure user-guide

Add 09-result-store-InfluxDB.rst
Update index.rst for 09-result-store-InfluxDB.rst

Change-Id: I44e662db07e206e58812d7a997036936a95b5137
Signed-off-by: JingLu5
---
 docs/userguide/03-architecture.rst              | 263 +++++++++++++++++++
 docs/userguide/03-installation.rst              | 321 ------------------------
 docs/userguide/03-list-of-tcs.rst               | 108 --------
 docs/userguide/05-apexlake_installation.rst     | 300 ++++++++++++++++++++++
 docs/userguide/06-apexlake_api.rst              |  89 +++++++
 docs/userguide/07-installation.rst              | 321 ++++++++++++++++++++++++
 docs/userguide/08-yardstick_plugin.rst          | 144 +++++++++++
 docs/userguide/09-result-store-InfluxDB.rst     |  86 +++++++
 docs/userguide/10-list-of-tcs.rst               | 108 ++++++++
 docs/userguide/apexlake_api.rst                 |  89 -------
 docs/userguide/apexlake_installation.rst        | 300 ----------------------
 docs/userguide/architecture.rst                 | 263 -------------------
 docs/userguide/images/InfluxDB_store.png        | Bin 0 -> 1623955 bytes
 docs/userguide/images/results_visualization.png | Bin 0 -> 41905 bytes
 docs/userguide/index.rst                        |  12 +-
 15 files changed, 1318 insertions(+), 1086 deletions(-)
 create mode 100755 docs/userguide/03-architecture.rst
 delete mode 100644 docs/userguide/03-installation.rst
 delete mode 100644 docs/userguide/03-list-of-tcs.rst
 create mode 100644 docs/userguide/05-apexlake_installation.rst
 create mode 100644 docs/userguide/06-apexlake_api.rst
 create mode 100644 docs/userguide/07-installation.rst
 create mode 100644 docs/userguide/08-yardstick_plugin.rst
 create mode 100644 docs/userguide/09-result-store-InfluxDB.rst
 create mode 100644 docs/userguide/10-list-of-tcs.rst
 delete mode 100644 docs/userguide/apexlake_api.rst
 delete mode 100644 docs/userguide/apexlake_installation.rst
 delete mode 100755 docs/userguide/architecture.rst
 create mode 100644 docs/userguide/images/InfluxDB_store.png
 create mode 100644 docs/userguide/images/results_visualization.png

diff --git a/docs/userguide/03-architecture.rst b/docs/userguide/03-architecture.rst
new file mode 100755
index 000000000..3abb67b7d
--- /dev/null
+++ b/docs/userguide/03-architecture.rst
@@ -0,0 +1,263 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) 2016 Huawei Technologies Co.,Ltd and others
+
+============
+Architecture
+============
+
+Abstract
+========
+This chapter describes the Yardstick framework software architecture. It is
+presented through the Use-Case View, Logical View, Process View and Deployment
+View, together with the relevant technical details.
+
+Overview
+========
+
+Architecture overview
+---------------------
+Yardstick is mainly written in Python, and test configurations are made
+in YAML. Documentation is written in reStructuredText format, i.e. .rst
+files. Yardstick is inspired by Rally. Yardstick is intended to run on a
+computer with access and credentials to a cloud. The test case is described
+in a configuration file given as an argument.
+
+How it works: the benchmark task configuration file is parsed and converted into
+an internal model. The context part of the model is converted into a Heat
+template and deployed into a stack. Each scenario is run using a runner, either
+serially or in parallel. Each runner runs in its own subprocess executing
+commands in a VM using SSH.
+The output of each scenario is written as JSON records to a file, to InfluxDB
+or to an HTTP server; InfluxDB is used as the backend, and the test results
+are visualized with Grafana.
+
+
+Concept
+-------
+**Benchmark** - assess the relative performance of something
+
+**Benchmark configuration file** - describes a single test case in yaml format
+
+**Context** - The set of Cloud resources used by a scenario, such as user
+names, image names, affinity rules and network configurations. A context is
+converted into a simplified Heat template, which is used to deploy onto the
+Openstack environment.
+
+**Data** - Output produced by running a benchmark, written to a file in json format
+
+**Runner** - Logic that determines how a test scenario is run and reported, for
+example the number of test iterations, input value stepping and test duration.
+Predefined runner types exist for re-use, see `Runner types`_.
+
+**Scenario** - Type/class of measurement for example Ping, Pktgen, (Iperf, LmBench, ...)
+
+**SLA** - Relates to what result boundary a test case must meet to pass. For
+example a latency limit, amount or ratio of lost packets and so on. Action
+based on :term:`SLA` can be configured, either just to log (monitor) or to stop
+further testing (assert). The :term:`SLA` criteria are set in the benchmark
+configuration file and evaluated by the runner.
+
+
+Runner types
+------------
+
+There exist several predefined runner types to choose between when designing
+a test scenario:
+
+**Arithmetic:**
+Every test run arithmetically steps the specified input value(s) in the
+test scenario, adding a value to the previous input value. It is also possible
+to combine several input values for the same test case in different
+combinations.
+
+Snippet of an Arithmetic runner configuration:
+::
+
+
+  runner:
+    type: Arithmetic
+    iterators:
+    -
+      name: stride
+      start: 64
+      stop: 128
+      step: 64
+
+**Duration:**
+The test runs for a specific period of time before it is completed.
+
+Snippet of a Duration runner configuration:
+::
+
+
+  runner:
+    type: Duration
+    duration: 30
+
+**Sequence:**
+The test changes a specified input value to the scenario. The input values
+to the sequence are specified in a list in the benchmark configuration file.
+
+Snippet of a Sequence runner configuration:
+::
+
+
+  runner:
+    type: Sequence
+    scenario_option_name: packetsize
+    sequence:
+    - 100
+    - 200
+    - 250
+
+
+**Iteration:**
+Tests are run a specified number of times before the run is completed.
+
+Snippet of an Iteration runner configuration:
+::
+
+
+  runner:
+    type: Iteration
+    iterations: 2
+
+
+
+
+Use-Case View
+=============
+The Yardstick Use-Case View shows two kinds of users. One is the Tester, who
+does testing in the cloud; the other is the User, who is more concerned with
+test results and result analyses.
+
+Testers run a single test case or a test case suite to verify infrastructure
+compliance or benchmark their own infrastructure performance. Test results
+are stored by the dispatcher module; three kinds of storage methods (file,
+influxdb and http) can be configured. Detailed information about scenarios
+and runners can be queried by testers via the CLI.
+
+Users can check test results in four ways.
+
+If the dispatcher module is configured as file (default), there are two ways
+to check test results. One is to read the results from yardstick.out (default
+path: /tmp/yardstick.out); the other is to get a plot of the test results,
+which is generated by executing the command "yardstick-plot".
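+
+As a minimal illustration, a single record in yardstick.out looks roughly like
+the one below (the field values are borrowed from the ping example reused in
+chapter 09 of this guide; the exact set of fields depends on the scenario):
+::
+
+  {
+    "benchmark": {
+      "timestamp": 1470315409.868095,
+      "errors": "",
+      "data": {
+        "rtt": {
+          "ares": 1.125
+        }
+      },
+      "sequence": 1
+    },
+    "runner_id": 2625
+  }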
+
+If the dispatcher module is configured as influxdb, users can check test
+results on Grafana, which is most commonly used for visualizing time series
+data.
+
+If the dispatcher module is configured as http, users can check test results
+on the OPNFV testing dashboard, which uses MongoDB as its backend.
+
+.. image:: images/Use_case.png
+   :width: 800px
+   :alt: Yardstick Use-Case View
+
+Logical View
+============
+The Yardstick Logical View describes the most important classes, their
+organization, and the most important use-case realizations.
+
+Main classes:
+
+**TaskCommands** - "yardstick task" subcommand handler.
+
+**HeatContext** - Converts the context section of the test yaml file into a
+HOT template, and deploys and undeploys the Openstack heat stack.
+
+**Runner** - Logic that determines how a test scenario is run and reported.
+
+**TestScenario** - Type/class of measurement for example Ping, Pktgen, (Iperf,
+LmBench, ...)
+
+**Dispatcher** - Chooses the user-defined way to store test results.
+
+TaskCommands is the "yardstick task" subcommand's main entry. It takes a yaml
+file (e.g. test.yaml) as input, and uses HeatContext to convert the yaml
+file's context section to HOT. After the Openstack heat stack is deployed by
+HeatContext with the converted HOT, TaskCommands uses Runner to run the
+specified TestScenario. During the first runner initialization, it creates
+the output process. The output process uses Dispatcher to push test results.
+The Runner also creates a process to execute the TestScenario, and there is a
+multiprocessing queue between each runner process and the output process, so
+the runner process can push the real-time test results to the storage media.
+TestScenario commonly connects to the VMs using ssh. It sets up the VMs and
+runs the test measurement scripts through the ssh tunnel. After all
+TestScenarios are finished, TaskCommands undeploys the heat stack, and the
+whole test is finished.
+
+.. image:: images/Logical_view.png
+   :width: 800px
+   :alt: Yardstick Logical View
+
+Process View (Test execution flow)
+==================================
+The Yardstick Process View shows how yardstick runs a test case. Below is the
+sequence graph of the test execution flow using the heat context; each
+object represents one module in yardstick:
+
+.. image:: images/test_execution_flow.png
+   :width: 800px
+   :alt: Yardstick Process View
+
+A user who wants to run a test with yardstick can use the CLI to input the
+command that starts a task. "TaskCommands" will receive the command and ask
+"HeatContext" to parse the context. "HeatContext" will then ask "Model" to
+convert the model. After the model is generated, "HeatContext" will inform
+"Openstack" to deploy the heat stack by heat template. After "Openstack"
+deploys the stack, "HeatContext" will inform "Runner" to run the specific test
+case.
+
+Firstly, "Runner" asks "TestScenario" to process the specific scenario. Then
+"TestScenario" logs on to the openstack VMs via the ssh protocol and executes
+the test case on the specified VMs. After the script execution finishes,
+"TestScenario" sends a message to inform "Runner". When the testing job is
+done, "Runner" informs "Dispatcher" to output the test result via file,
+influxdb or http. After the result is output, "HeatContext" calls "Openstack"
+to undeploy the heat stack. Once the stack is undeployed, the whole test ends.
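+
+For example, the whole flow above is triggered by a single CLI command, shown
+here with one of the sample test cases shipped in the repository:
+::
+
+  yardstick task start samples/ping.yaml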
+
+Deployment View
+===============
+The Yardstick Deployment View shows how the yardstick tool can be deployed
+into the underlying platform. Generally, the yardstick tool is installed on
+the JumpServer (see `07-installation` for detailed installation steps), and
+the JumpServer is connected to the other control/compute servers over the
+network. Based on this deployment, yardstick can run test cases on these
+hosts and collect the test results for presentation.
+
+.. image:: images/Deployment.png
+   :width: 800px
+   :alt: Yardstick Deployment View
+
+Yardstick Directory structure
+=============================
+
+**yardstick/** - Yardstick main directory.
+
+*ci/* - Used for continuous integration of Yardstick at different PODs and
+        with support for different installers.
+
+*docs/* - All documentation is stored here, such as configuration guides,
+          user guides and Yardstick descriptions.
+
+*etc/* - Used for test cases requiring specific POD configurations.
+
+*samples/* - Test case samples are stored here; samples for most scenarios
+             and features can be found in this directory.
+
+*tests/* - Here both Yardstick internal tests (*functional/* and *unit/*) as
+           well as the test cases run to verify the NFVI (*opnfv/*) are stored.
+           Also configurations of what to run daily and weekly at the different
+           PODs are located here.
+
+*tools/* - Currently contains the tools to build the image for VMs which are
+           deployed by Heat, i.e. how to build the yardstick-trusty-server
+           image with the different tools that are needed from within the image.
+
+*vTC/* - Contains the files for running the virtual Traffic Classifier tests.
+
+*yardstick/* - Contains the internals of Yardstick: Runners, Scenario, Contexts,
+               CLI parsing, keys, plotting tools, dispatcher and so on.
+
diff --git a/docs/userguide/03-installation.rst b/docs/userguide/03-installation.rst
deleted file mode 100644
index 25c125851..000000000
--- a/docs/userguide/03-installation.rst
+++ /dev/null
@@ -1,321 +0,0 @@
-.. This work is licensed under a Creative Commons Attribution 4.0 International
-.. License.
-.. http://creativecommons.org/licenses/by/4.0
-.. (c) OPNFV, Ericsson AB, Huawei Technologies Co.,Ltd and others.
-
-Yardstick Installation
-======================
-
-Abstract
---------
-
-Yardstick currently supports installation on Ubuntu 14.04 or by using a Docker
-image. Detailed steps about installing Yardstick using both of these options
-can be found below.
-
-To use Yardstick you should have access to an OpenStack environment,
-with at least Nova, Neutron, Glance, Keystone and Heat installed.
-
-The steps needed to run Yardstick are:
-
-1. Install Yardstick and create the test configuration .yaml file.
-2. Build a guest image and load the image into the OpenStack environment.
-3. Create a Neutron external network and load OpenStack environment variables.
-4. Run the test case.
-
-
-Installing Yardstick on Ubuntu 14.04
-------------------------------------
-
-..
_install-framework: - -Installing Yardstick framework -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ -Install dependencies: -:: - - sudo apt-get update && sudo apt-get install -y \ - wget \ - git \ - sshpass \ - qemu-utils \ - kpartx \ - libffi-dev \ - libssl-dev \ - python \ - python-dev \ - python-virtualenv \ - libxml2-dev \ - libxslt1-dev \ - python-setuptools - -Create a python virtual environment, source it and update setuptools: -:: - - virtualenv ~/yardstick_venv - source ~/yardstick_venv/bin/activate - easy_install -U setuptools - -Download source code and install python dependencies: -:: - - git clone https://gerrit.opnfv.org/gerrit/yardstick - cd yardstick - python setup.py install - -There is also a YouTube video, showing the above steps: - -.. image:: http://img.youtube.com/vi/4S4izNolmR0/0.jpg - :alt: http://www.youtube.com/watch?v=4S4izNolmR0 - :target: http://www.youtube.com/watch?v=4S4izNolmR0 - -Installing extra tools -^^^^^^^^^^^^^^^^^^^^^^ -yardstick-plot -"""""""""""""" -Yardstick has an internal plotting tool ``yardstick-plot``, which can be installed -using the following command: -:: - - sudo apt-get install -y g++ libfreetype6-dev libpng-dev pkg-config - python setup.py develop easy_install yardstick[plot] - -.. _guest-image: - -Building a guest image -^^^^^^^^^^^^^^^^^^^^^^ -Yardstick has a tool for building an Ubuntu Cloud Server image containing all -the required tools to run test cases supported by Yardstick. It is necessary to -have sudo rights to use this tool. - -Also you may need install several additional packages to use this tool, by -follwing the commands below: -:: - - apt-get update && apt-get install -y \ - qemu-utils \ - kpartx - -This image can be built using the following command while in the directory where -Yardstick is installed (``~/yardstick`` if the framework is installed -by following the commands above): -:: - - sudo ./tools/yardstick-img-modify tools/ubuntu-server-cloudimg-modify.sh - -**Warning:** the script will create files by default in: -``/tmp/workspace/yardstick`` and the files will be owned by root! - -The created image can be added to OpenStack using the ``glance image-create`` or -via the OpenStack Dashboard. - -Example command: -:: - - glance --os-image-api-version 1 image-create \ - --name yardstick-trusty-server --is-public true \ - --disk-format qcow2 --container-format bare \ - --file /tmp/workspace/yardstick/yardstick-trusty-server.img - - -Installing Yardstick using Docker ---------------------------------- - -Yardstick has two Docker images, first one (**Yardstick-framework**) serves as a -replacement for installing the Yardstick framework in a virtual environment (for -example as done in :ref:`install-framework`), while the other image is mostly for -CI purposes (**Yardstick-CI**). - -Yardstick-framework image -^^^^^^^^^^^^^^^^^^^^^^^^^ -Download the source code: - -:: - - git clone https://gerrit.opnfv.org/gerrit/yardstick - -Build the Docker image and tag it as *yardstick-framework*: - -:: - - cd yardstick - docker build -t yardstick-framework . - -Run the Docker instance: - -:: - - docker run --name yardstick_instance -i -t yardstick-framework - -To build a guest image for Yardstick, see :ref:`guest-image`. - -Yardstick-CI image -^^^^^^^^^^^^^^^^^^ -Pull the Yardstick-CI Docker image from Docker hub: - -:: - - docker pull opnfv/yardstick:$DOCKER_TAG - -Where ``$DOCKER_TAG`` is latest for master branch, as for the release branches, -this coincides with its release name, such as brahmaputra.1.0. 
- -Run the Docker image: - -:: - - docker run \ - --privileged=true \ - --rm \ - -t \ - -e "INSTALLER_TYPE=${INSTALLER_TYPE}" \ - -e "INSTALLER_IP=${INSTALLER_IP}" \ - opnfv/yardstick \ - exec_tests.sh ${YARDSTICK_DB_BACKEND} ${YARDSTICK_SUITE_NAME} - -Where ``${INSTALLER_TYPE}`` can be apex, compass, fuel or joid, ``${INSTALLER_IP}`` -is the installer master node IP address (i.e. 10.20.0.2 is default for fuel). ``${YARDSTICK_DB_BACKEND}`` -is the IP and port number of DB, ``${YARDSTICK_SUITE_NAME}`` is the test suite you want to run. -For more details, please refer to the Jenkins job defined in Releng project, labconfig information -and sshkey are required. See the link -https://git.opnfv.org/cgit/releng/tree/jjb/yardstick/yardstick-ci-jobs.yml. - -Note: exec_tests.sh is used for executing test suite here, furthermore, if someone wants to execute the -test suite manually, it can be used as long as the parameters are configured correct. Another script -called run_tests.sh is used for unittest in Jenkins verify job, in local manaul environment, -it is recommended to run before test suite execuation. - -Basic steps performed by the **Yardstick-CI** container: - -1. clone yardstick and releng repos -2. setup OS credentials (releng scripts) -3. install yardstick and dependencies -4. build yardstick cloud image and upload it to glance -5. upload cirros-0.3.3 cloud image to glance -6. run yardstick test scenarios -7. cleanup - - -OpenStack parameters and credentials ------------------------------------- - -Yardstick-flavor -^^^^^^^^^^^^^^^^ -Most of the sample test cases in Yardstick are using an OpenStack flavor called -*yardstick-flavor* which deviates from the OpenStack standard m1.tiny flavor by the -disk size - instead of 1GB it has 3GB. Other parameters are the same as in m1.tiny. - -Environment variables -^^^^^^^^^^^^^^^^^^^^^ -Before running Yardstick it is necessary to export OpenStack environment variables -from the OpenStack *openrc* file (using the ``source`` command) and export the -external network name ``export EXTERNAL_NETWORK="external-network-name"``, -the default name for the external network is ``net04_ext``. - -Credential environment variables in the *openrc* file have to include at least: - -* OS_AUTH_URL -* OS_USERNAME -* OS_PASSWORD -* OS_TENANT_NAME - -Yardstick default key pair -^^^^^^^^^^^^^^^^^^^^^^^^^^ -Yardstick uses a SSH key pair to connect to the guest image. This key pair can -be found in the ``resources/files`` directory. To run the ``ping-hot.yaml`` test -sample, this key pair needs to be imported to the OpenStack environment. - - -Examples and verifying the install ----------------------------------- - -It is recommended to verify that Yardstick was installed successfully -by executing some simple commands and test samples. Below is an example invocation -of yardstick help command and ping.py test sample: -:: - - yardstick –h - yardstick task start samples/ping.yaml - -Each testing tool supported by Yardstick has a sample configuration file. -These configuration files can be found in the **samples** directory. - -Example invocation of ``yardstick-plot`` tool: -:: - - yardstick-plot -i /tmp/yardstick.out -o /tmp/plots/ - -Default location for the output is ``/tmp/yardstick.out``. - -More info about the tool can be found by executing: -:: - - yardstick-plot -h - - -Deploy InfluxDB and Grafana locally ------------------------------------- - -.. 
pull docker images - -Pull docker images -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ - -:: - - docker pull tutum/influxdb - docker pull grafana/grafana - -Run influxdb and config -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ -Run influxdb -:: - - docker run -d --name influxdb -p 8083:8083 -p 8086:8086 --expose 8090 --expose 8099 tutum/influxdb - docker exec -it influxdb bash - -Config influxdb -:: - - influx - >CREATE USER root WITH PASSWORD 'root' WITH ALL PRIVILEGES - >CREATE DATABASE yardstick; - >use yardstick; - >show MEASUREMENTS; - -Run grafana and config -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ -Run grafana -:: - - docker run -d --name grafana -p 3000:3000 grafana/grafana - -Config grafana -:: - - http://{YOUR_IP_HERE}:3000 - log on using admin/admin and config database resource to be {YOUR_IP_HERE}:8086 - -.. image:: images/Grafana_config.png - -Config yardstick conf -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ -cp ./etc/yardstick/yardstick.conf.sample /etc/yardstick/yardstick.conf - -vi /etc/yardstick/yardstick.conf -Config yardstick.conf -:: - - [DEFAULT] - debug = True - dispatcher = influxdb - - [dispatcher_influxdb] - timeout = 5 - target = http://{YOUR_IP_HERE}:8086 - db_name = yardstick - username = root - password = root - -Now you can run yardstick test case and store the results in influxdb -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ diff --git a/docs/userguide/03-list-of-tcs.rst b/docs/userguide/03-list-of-tcs.rst deleted file mode 100644 index 7e8c85433..000000000 --- a/docs/userguide/03-list-of-tcs.rst +++ /dev/null @@ -1,108 +0,0 @@ -.. This work is licensed under a Creative Commons Attribution 4.0 International -.. License. -.. http://creativecommons.org/licenses/by/4.0 -.. (c) OPNFV, Ericsson AB and others. - -==================== -Yardstick Test Cases -==================== - -Abstract -======== - -This chapter lists available Yardstick test cases. -Yardstick test cases are divided in two main categories: - -* *Generic NFVI Test Cases* - Test Cases developed to realize the methodology -described in :doc:`02-methodology` - -* *OPNFV Feature Test Cases* - Test Cases developed to verify one or more -aspect of a feature delivered by an OPNFV Project, including the test cases -developed for the :term:`VTC`. - -Generic NFVI Test Case Descriptions -=================================== - -.. toctree:: - :maxdepth: 1 - - opnfv_yardstick_tc001.rst - opnfv_yardstick_tc002.rst - opnfv_yardstick_tc004.rst - opnfv_yardstick_tc005.rst - opnfv_yardstick_tc008.rst - opnfv_yardstick_tc009.rst - opnfv_yardstick_tc010.rst - opnfv_yardstick_tc011.rst - opnfv_yardstick_tc012.rst - opnfv_yardstick_tc014.rst - opnfv_yardstick_tc024.rst - opnfv_yardstick_tc037.rst - opnfv_yardstick_tc038.rst - opnfv_yardstick_tc042.rst - opnfv_yardstick_tc043.rst - opnfv_yardstick_tc044.rst - opnfv_yardstick_tc055.rst - opnfv_yardstick_tc061.rst - opnfv_yardstick_tc063.rst - opnfv_yardstick_tc069.rst - opnfv_yardstick_tc070.rst - opnfv_yardstick_tc071.rst - opnfv_yardstick_tc072.rst - opnfv_yardstick_tc075.rst - -OPNFV Feature Test Cases -======================== - -H A ---- - -.. toctree:: - :maxdepth: 1 - - opnfv_yardstick_tc019.rst - opnfv_yardstick_tc025.rst - -IPv6 ----- - -.. toctree:: - :maxdepth: 1 - - opnfv_yardstick_tc027.rst - -KVM ---- - -.. toctree:: - :maxdepth: 1 - - opnfv_yardstick_tc028.rst - -Parser ------- - -.. toctree:: - :maxdepth: 1 - - opnfv_yardstick_tc040.rst - -virtual Traffic Classifier --------------------------- - -.. 
toctree::
-   :maxdepth: 1
-
-   opnfv_yardstick_tc006.rst
-   opnfv_yardstick_tc007.rst
-   opnfv_yardstick_tc020.rst
-   opnfv_yardstick_tc021.rst
-
-Templates
-=========
-
-.. toctree::
-   :maxdepth: 1
-
-   testcase_description_v2_template
-   Yardstick_task_templates
diff --git a/docs/userguide/05-apexlake_installation.rst b/docs/userguide/05-apexlake_installation.rst
new file mode 100644
index 000000000..d4493e0f8
--- /dev/null
+++ b/docs/userguide/05-apexlake_installation.rst
@@ -0,0 +1,300 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, Intel Corporation and others.
+
+
+.. _DPDK: http://dpdk.org/doc/nics
+.. _DPDK-pktgen: https://github.com/Pktgen/Pktgen-DPDK/
+.. _SRIOV: https://wiki.openstack.org/wiki/SR-IOV-Passthrough-For-Networking
+.. _PORTSEC: https://wiki.openstack.org/wiki/Neutron/ML2PortSecurityExtensionDriver
+.. _here: https://wiki.opnfv.org/vtc
+
+
+============================
+Apexlake Installation Guide
+============================
+
+Abstract
+--------
+
+ApexLake is a framework that provides automatic execution of experiments and
+related data collection to enable a user to validate infrastructure from the
+perspective of a Virtual Network Function (:term:`VNF`).
+
+In the context of Yardstick, a virtual Traffic Classifier (:term:`VTC`) network
+function is utilized.
+
+
+Framework Hardware Dependencies
+===============================
+
+In order to run the framework there are some hardware related dependencies for
+ApexLake.
+
+The framework needs to be installed on the same physical node where DPDK-pktgen_
+is installed.
+
+The installation requires that the physical node hosting the packet generator
+has 2 NICs which are DPDK_ compatible.
+
+The 2 NICs will be connected to the switch where the OpenStack VM
+network is managed.
+
+The switch used must support multicast traffic and :term:`IGMP` snooping.
+Further details about the configuration are provided here_.
+
+The corresponding ports to which the cables are connected need to be configured
+as VLAN trunks using two of the VLAN IDs available for Neutron.
+Note the VLAN IDs used as they will be required in later configuration steps.
+
+
+Framework Software Dependencies
+===============================
+Before starting the framework, a number of dependencies must first be installed.
+The following describes the set of instructions to be executed via the Linux
+shell in order to install and configure the required dependencies.
+
+1. Install Dependencies.
+
+To support the framework dependencies the following packages must be installed.
+The example provided is based on Ubuntu and needs to be executed in root mode.
+
+::
+
+    apt-get install python-dev
+    apt-get install python-pip
+    apt-get install python-mock
+    apt-get install tcpreplay
+    apt-get install libpcap-dev
+
+2. Source OpenStack openrc file.
+
+::
+
+    source openrc
+
+3. Configure Openstack Neutron
+
+In order to support traffic generation and management by the virtual
+Traffic Classifier, the configuration of the port security driver
+extension is required for Neutron.
+
+For further details please see the following link: PORTSEC_
+This step can be skipped in case the target OpenStack is the Juno or Kilo
+release, but it is required to support Liberty.
+It is therefore required to indicate the release version in the configuration
+file located in ./yardstick/vTC/apexlake/apexlake.conf
+
+
+4. Create Two Networks based on VLANs in Neutron.
+
+To enable network communications between the packet generator and the compute
+node, two networks must be created via Neutron and mapped to the VLAN IDs
+that were previously used in the configuration of the physical switch.
+The following shows the typical set of commands required to configure Neutron
+correctly.
+The physical switches need to be configured accordingly.
+
+::
+
+    VLAN_1=2032
+    VLAN_2=2033
+    PHYSNET=physnet2
+    neutron net-create apexlake_inbound_network \
+        --provider:network_type vlan \
+        --provider:segmentation_id $VLAN_1 \
+        --provider:physical_network $PHYSNET
+
+    neutron subnet-create apexlake_inbound_network \
+        192.168.0.0/24 --name apexlake_inbound_subnet
+
+    neutron net-create apexlake_outbound_network \
+        --provider:network_type vlan \
+        --provider:segmentation_id $VLAN_2 \
+        --provider:physical_network $PHYSNET
+
+    neutron subnet-create apexlake_outbound_network 192.168.1.0/24 \
+        --name apexlake_outbound_subnet
+
+
+5. Download Ubuntu Cloud Image and load it on Glance
+
+The virtual Traffic Classifier is supported on top of the Ubuntu 14.04 cloud
+image. The image can be downloaded on the local machine and loaded on Glance
+using the following commands:
+
+::
+
+    wget cloud-images.ubuntu.com/trusty/current/trusty-server-cloudimg-amd64-disk1.img
+    glance image-create \
+        --name ubuntu1404 \
+        --is-public true \
+        --disk-format qcow2 \
+        --container-format bare \
+        --file trusty-server-cloudimg-amd64-disk1.img
+
+
+
+6. Configure the Test Cases
+
+The VLAN tags must also be included in the test case Yardstick yaml file
+as parameters for the following test cases:
+
+    * :doc:`opnfv_yardstick_tc006`
+
+    * :doc:`opnfv_yardstick_tc007`
+
+    * :doc:`opnfv_yardstick_tc020`
+
+    * :doc:`opnfv_yardstick_tc021`
+
+
+Install and Configure DPDK Pktgen
++++++++++++++++++++++++++++++++++
+
+Execution of the framework is based on DPDK Pktgen.
+If DPDK Pktgen has not been installed, it is necessary to download, install,
+compile and configure it.
+The user can create a directory and download the dpdk packet generator source
+code:
+
+::
+
+    cd experimental_framework/libraries
+    mkdir dpdk_pktgen
+    git clone https://github.com/pktgen/Pktgen-DPDK.git
+
+For instructions on the installation and configuration of DPDK and DPDK Pktgen
+please follow the official DPDK Pktgen README file.
+Once the installation is completed, it is necessary to load the DPDK kernel
+driver, as follows:
+
+::
+
+    insmod uio
+    insmod DPDK_DIR/x86_64-native-linuxapp-gcc/kmod/igb_uio.ko
+
+It is necessary to set the configuration file to support the desired Pktgen
+configuration.
+A description of the required configuration parameters and supporting examples
+is provided in the following:
+
+::
+
+    [PacketGen]
+    packet_generator = dpdk_pktgen
+
+    # This is the directory where the packet generator is installed
+    # (if the user previously installed dpdk-pktgen,
+    # it is required to provide the directory where it is installed).
+    pktgen_directory = /home/user/software/dpdk_pktgen/dpdk/examples/pktgen/
+
+    # This is the directory where DPDK is installed
+    dpdk_directory = /home/user/apexlake/experimental_framework/libraries/Pktgen-DPDK/dpdk/
+
+    # Name of the dpdk-pktgen program that starts the packet generator
+    program_name = app/app/x86_64-native-linuxapp-gcc/pktgen
+
+    # DPDK coremask (see DPDK-Pktgen readme)
+    coremask = 1f
+
+    # DPDK memory channels (see DPDK-Pktgen readme)
+    memory_channels = 3
+
+    # Name of the interface of the pktgen to be used to send traffic (vlan_sender)
+    name_if_1 = p1p1
+
+    # Name of the interface of the pktgen to be used to receive traffic (vlan_receiver)
+    name_if_2 = p1p2
+
+    # PCI bus address correspondent to if_1
+    bus_slot_nic_1 = 01:00.0
+
+    # PCI bus address correspondent to if_2
+    bus_slot_nic_2 = 01:00.1
+
+
+To find the parameters related to the names of the NICs and the addresses of
+the PCI buses, the user may find it useful to run the :term:`DPDK` tool
+nic_bind as follows:
+
+::
+
+    DPDK_DIR/tools/dpdk_nic_bind.py --status
+
+This lists the NICs available on the system, and shows the available drivers
+and bus addresses for each interface.
+Please make sure to select NICs which are :term:`DPDK` compatible.
+
+Installation and Configuration of smcroute
+++++++++++++++++++++++++++++++++++++++++++
+
+The user is required to install smcroute, which is used by the framework to
+support multicast communications.
+
+The following is the list of commands required to download and install smcroute.
+
+::
+
+    cd ~
+    git clone https://github.com/troglobit/smcroute.git
+    cd smcroute
+    git reset --hard c3f5c56
+    sed -i 's/aclocal-1.11/aclocal/g' ./autogen.sh
+    sed -i 's/automake-1.11/automake/g' ./autogen.sh
+    ./autogen.sh
+    ./configure
+    make
+    sudo make install
+    cd ..
+
+The reset to the specified commit ID is required.
+It is also required to create a configuration file using the following
+command:
+
+    SMCROUTE_NIC=(name of the nic)
+
+where the name of the nic is the name used previously for the variable
+"name_if_2". For example:
+
+::
+
+    SMCROUTE_NIC=p1p2
+
+Then create the smcroute configuration file /etc/smcroute.conf
+
+::
+
+    echo mgroup from $SMCROUTE_NIC group 224.192.16.1 > /etc/smcroute.conf
+
+
+At the end of this procedure it will be necessary to perform the following
+actions to add the user to the sudoers:
+
+::
+
+    adduser USERNAME sudo
+    echo "user ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers
+
+
+Experiment using SR-IOV Configuration on the Compute Node
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
+
+To enable :term:`SR-IOV` interfaces on the physical NIC of the compute node, a
+compatible NIC is required.
+NIC configuration depends on model and vendor. After proper configuration to
+support :term:`SR-IOV`, a proper configuration of OpenStack is required.
+For further information, please refer to the SRIOV_ configuration guide.
+
+Finalize the installation of the framework on the system
+=========================================================
+
+The installation of the framework on the system requires the setup of the project.
+After entering into the apexlake directory, it is sufficient to run the following
+command.
+
+::
+
+    python setup.py install
+
+Since some elements are copied into the /tmp directory (see configuration file)
+it could be necessary to repeat this step after a reboot of the host.
diff --git a/docs/userguide/06-apexlake_api.rst b/docs/userguide/06-apexlake_api.rst
new file mode 100644
index 000000000..35a1dbe3e
--- /dev/null
+++ b/docs/userguide/06-apexlake_api.rst
@@ -0,0 +1,89 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, Intel Corporation and others.
+
+
+=================================
+Apexlake API Interface Definition
+=================================
+
+Abstract
+--------
+
+The API interface provided by the framework to enable the execution of test
+cases is defined as follows.
+
+
+init
+----
+
+**static init()**
+
+    Initializes the Framework
+
+    **Returns** None
+
+
+execute_framework
+-----------------
+
+**static execute_framework** (test_cases,
+
+                              iterations,
+
+                              heat_template,
+
+                              heat_template_parameters,
+
+                              deployment_configuration,
+
+                              openstack_credentials)
+
+    Executes the framework according to the specified inputs
+
+    **Parameters**
+
+        - **test_cases**
+
+            Test cases to be run with the workload (dict() of dict())
+
+            Example:
+                test_case = dict()
+
+                test_case['name'] = 'module.Class'
+
+                test_case['params'] = dict()
+
+                test_case['params']['throughput'] = '1'
+
+                test_case['params']['vlan_sender'] = '1000'
+
+                test_case['params']['vlan_receiver'] = '1001'
+
+                test_cases = [test_case]
+
+        - **iterations**
+            Number of test cycles to be executed (int)
+
+        - **heat_template**
+            (string) File name of the heat template corresponding to the workload to be deployed.
+            It contains the parameters to be evaluated in the form of #parameter_name.
+            (See heat_templates/vTC.yaml as an example.)
+
+        - **heat_template_parameters**
+            (dict) Parameters to be provided as input to the heat template.
+            See http://docs.openstack.org/developer/heat/template_guide/hot_guide.html
+            section "Template input parameters" for further info.
+
+        - **deployment_configuration**
+            ( dict[string] = list(strings) ) Dictionary of parameters
+            representing the deployment configuration of the workload.
+
+            The key is a string corresponding to the name of the parameter,
+            the value is a list of strings representing the value to be
+            assumed by a specific param. The parameters are user defined:
+            they have to correspond to the place holders (#parameter_name)
+            specified in the heat template.
+
+        **Returns** dict() containing results
diff --git a/docs/userguide/07-installation.rst b/docs/userguide/07-installation.rst
new file mode 100644
index 000000000..25c125851
--- /dev/null
+++ b/docs/userguide/07-installation.rst
@@ -0,0 +1,321 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, Ericsson AB, Huawei Technologies Co.,Ltd and others.
+
+Yardstick Installation
+======================
+
+Abstract
+--------
+
+Yardstick currently supports installation on Ubuntu 14.04 or by using a Docker
+image. Detailed steps about installing Yardstick using both of these options
+can be found below.
+
+To use Yardstick you should have access to an OpenStack environment,
+with at least Nova, Neutron, Glance, Keystone and Heat installed.
+
+The steps needed to run Yardstick are:
+
+1. Install Yardstick and create the test configuration .yaml file.
+2. Build a guest image and load the image into the OpenStack environment.
+3. Create a Neutron external network and load OpenStack environment variables.
+4. Run the test case.
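+
+In condensed form - assuming the guest image and external network described in
+the sections below are already in place - a complete run boils down to three
+commands (a sketch; the exact file names depend on your environment):
+::
+
+  source openrc                              # load OpenStack credentials
+  export EXTERNAL_NETWORK="net04_ext"        # the Neutron external network
+  yardstick task start samples/ping.yaml     # run a test case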
+
+
+Installing Yardstick on Ubuntu 14.04
+------------------------------------
+
+.. _install-framework:
+
+Installing Yardstick framework
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+Install dependencies:
+::
+
+  sudo apt-get update && sudo apt-get install -y \
+      wget \
+      git \
+      sshpass \
+      qemu-utils \
+      kpartx \
+      libffi-dev \
+      libssl-dev \
+      python \
+      python-dev \
+      python-virtualenv \
+      libxml2-dev \
+      libxslt1-dev \
+      python-setuptools
+
+Create a python virtual environment, source it and update setuptools:
+::
+
+  virtualenv ~/yardstick_venv
+  source ~/yardstick_venv/bin/activate
+  easy_install -U setuptools
+
+Download source code and install python dependencies:
+::
+
+  git clone https://gerrit.opnfv.org/gerrit/yardstick
+  cd yardstick
+  python setup.py install
+
+There is also a YouTube video, showing the above steps:
+
+.. image:: http://img.youtube.com/vi/4S4izNolmR0/0.jpg
+   :alt: http://www.youtube.com/watch?v=4S4izNolmR0
+   :target: http://www.youtube.com/watch?v=4S4izNolmR0
+
+Installing extra tools
+^^^^^^^^^^^^^^^^^^^^^^
+yardstick-plot
+""""""""""""""
+Yardstick has an internal plotting tool ``yardstick-plot``, which can be installed
+using the following command:
+::
+
+  sudo apt-get install -y g++ libfreetype6-dev libpng-dev pkg-config
+  python setup.py develop easy_install yardstick[plot]
+
+.. _guest-image:
+
+Building a guest image
+^^^^^^^^^^^^^^^^^^^^^^
+Yardstick has a tool for building an Ubuntu Cloud Server image containing all
+the required tools to run test cases supported by Yardstick. It is necessary to
+have sudo rights to use this tool.
+
+You may also need to install several additional packages to use this tool, by
+following the commands below:
+::
+
+  apt-get update && apt-get install -y \
+      qemu-utils \
+      kpartx
+
+This image can be built using the following command while in the directory where
+Yardstick is installed (``~/yardstick`` if the framework is installed
+by following the commands above):
+::
+
+  sudo ./tools/yardstick-img-modify tools/ubuntu-server-cloudimg-modify.sh
+
+**Warning:** the script will create files by default in:
+``/tmp/workspace/yardstick`` and the files will be owned by root!
+
+The created image can be added to OpenStack using the ``glance image-create``
+command or via the OpenStack Dashboard.
+
+Example command:
+::
+
+  glance --os-image-api-version 1 image-create \
+      --name yardstick-trusty-server --is-public true \
+      --disk-format qcow2 --container-format bare \
+      --file /tmp/workspace/yardstick/yardstick-trusty-server.img
+
+
+Installing Yardstick using Docker
+---------------------------------
+
+Yardstick has two Docker images. The first one (**Yardstick-framework**) serves
+as a replacement for installing the Yardstick framework in a virtual environment
+(for example as done in :ref:`install-framework`), while the other image is
+mostly for CI purposes (**Yardstick-CI**).
+
+Yardstick-framework image
+^^^^^^^^^^^^^^^^^^^^^^^^^
+Download the source code:
+
+::
+
+  git clone https://gerrit.opnfv.org/gerrit/yardstick
+
+Build the Docker image and tag it as *yardstick-framework*:
+
+::
+
+  cd yardstick
+  docker build -t yardstick-framework .
+
+Run the Docker instance:
+
+::
+
+  docker run --name yardstick_instance -i -t yardstick-framework
+
+To build a guest image for Yardstick, see :ref:`guest-image`.
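+
+When the containerized framework needs to reach your OpenStack environment, the
+credentials must be available inside the container. One simple approach (a
+sketch; the mount path is only an example) is to mount the *openrc* file into
+the container and source it there:
+
+::
+
+  docker run --name yardstick_instance -i -t \
+      -v ~/openrc:/tmp/openrc \
+      yardstick-framework
+  # then, inside the container:
+  source /tmp/openrc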
+
+Yardstick-CI image
+^^^^^^^^^^^^^^^^^^
+Pull the Yardstick-CI Docker image from Docker hub:
+
+::
+
+  docker pull opnfv/yardstick:$DOCKER_TAG
+
+Where ``$DOCKER_TAG`` is ``latest`` for the master branch; for the release
+branches it coincides with the release name, such as brahmaputra.1.0.
+
+Run the Docker image:
+
+::
+
+  docker run \
+      --privileged=true \
+      --rm \
+      -t \
+      -e "INSTALLER_TYPE=${INSTALLER_TYPE}" \
+      -e "INSTALLER_IP=${INSTALLER_IP}" \
+      opnfv/yardstick \
+      exec_tests.sh ${YARDSTICK_DB_BACKEND} ${YARDSTICK_SUITE_NAME}
+
+Where ``${INSTALLER_TYPE}`` can be apex, compass, fuel or joid, ``${INSTALLER_IP}``
+is the installer master node IP address (e.g. 10.20.0.2 is the default for fuel),
+``${YARDSTICK_DB_BACKEND}`` is the IP and port number of the DB, and
+``${YARDSTICK_SUITE_NAME}`` is the test suite you want to run.
+For more details, please refer to the Jenkins job defined in the Releng project;
+labconfig information and an sshkey are required. See
+https://git.opnfv.org/cgit/releng/tree/jjb/yardstick/yardstick-ci-jobs.yml.
+
+Note: exec_tests.sh is used for executing the test suite here. Furthermore, if
+someone wants to execute the test suite manually, it can be used as long as the
+parameters are configured correctly. Another script called run_tests.sh is used
+for unit tests in the Jenkins verify job; in a local manual environment, it is
+recommended to run it before test suite execution.
+
+Basic steps performed by the **Yardstick-CI** container:
+
+1. clone yardstick and releng repos
+2. setup OS credentials (releng scripts)
+3. install yardstick and dependencies
+4. build yardstick cloud image and upload it to glance
+5. upload cirros-0.3.3 cloud image to glance
+6. run yardstick test scenarios
+7. cleanup
+
+
+OpenStack parameters and credentials
+------------------------------------
+
+Yardstick-flavor
+^^^^^^^^^^^^^^^^
+Most of the sample test cases in Yardstick are using an OpenStack flavor called
+*yardstick-flavor* which deviates from the OpenStack standard m1.tiny flavor by the
+disk size - instead of 1GB it has 3GB. Other parameters are the same as in m1.tiny.
+
+Environment variables
+^^^^^^^^^^^^^^^^^^^^^
+Before running Yardstick it is necessary to export OpenStack environment variables
+from the OpenStack *openrc* file (using the ``source`` command) and export the
+external network name ``export EXTERNAL_NETWORK="external-network-name"``;
+the default name for the external network is ``net04_ext``.
+
+Credential environment variables in the *openrc* file have to include at least:
+
+* OS_AUTH_URL
+* OS_USERNAME
+* OS_PASSWORD
+* OS_TENANT_NAME
+
+Yardstick default key pair
+^^^^^^^^^^^^^^^^^^^^^^^^^^
+Yardstick uses an SSH key pair to connect to the guest image. This key pair can
+be found in the ``resources/files`` directory. To run the ``ping-hot.yaml`` test
+sample, this key pair needs to be imported to the OpenStack environment.
+
+
+Examples and verifying the install
+----------------------------------
+
+It is recommended to verify that Yardstick was installed successfully
+by executing some simple commands and test samples. Below is an example
+invocation of the yardstick help command and the ping.yaml test sample:
+::
+
+  yardstick -h
+  yardstick task start samples/ping.yaml
+
+Each testing tool supported by Yardstick has a sample configuration file.
+These configuration files can be found in the **samples** directory.
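+
+To give a feel for what such a file contains, below is an illustrative,
+simplified sketch of a ping task configuration (not a verbatim copy of
+``samples/ping.yaml``; the server names and SLA value are examples only -
+always start from the shipped samples):
+::
+
+  ---
+  schema: "yardstick:task:0.1"
+
+  scenarios:
+  -
+    type: Ping
+    options:
+      packetsize: 100
+    host: athena.demo
+    target: ares.demo
+    runner:
+      type: Duration
+      duration: 60
+    sla:
+      max_rtt: 10
+      action: monitor
+
+  context:
+    name: demo
+    image: cirros-0.3.3
+    flavor: yardstick-flavor
+    user: cirros
+    servers:
+      athena:
+      ares:
+    networks:
+      test:
+        cidr: '10.0.1.0/24'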
+
+Example invocation of the ``yardstick-plot`` tool:
+::
+
+  yardstick-plot -i /tmp/yardstick.out -o /tmp/plots/
+
+Default location for the output is ``/tmp/yardstick.out``.
+
+More info about the tool can be found by executing:
+::
+
+  yardstick-plot -h
+
+
+Deploy InfluxDB and Grafana locally
+------------------------------------
+
+.. pull docker images
+
+Pull docker images
+^^^^^^^^^^^^^^^^^^
+
+::
+
+  docker pull tutum/influxdb
+  docker pull grafana/grafana
+
+Run influxdb and config
+^^^^^^^^^^^^^^^^^^^^^^^
+Run influxdb:
+::
+
+  docker run -d --name influxdb -p 8083:8083 -p 8086:8086 --expose 8090 --expose 8099 tutum/influxdb
+  docker exec -it influxdb bash
+
+Config influxdb:
+::
+
+  influx
+  >CREATE USER root WITH PASSWORD 'root' WITH ALL PRIVILEGES
+  >CREATE DATABASE yardstick;
+  >use yardstick;
+  >show MEASUREMENTS;
+
+Run grafana and config
+^^^^^^^^^^^^^^^^^^^^^^
+Run grafana:
+::
+
+  docker run -d --name grafana -p 3000:3000 grafana/grafana
+
+Config grafana:
+::
+
+  http://{YOUR_IP_HERE}:3000
+  log on using admin/admin and config database resource to be {YOUR_IP_HERE}:8086
+
+.. image:: images/Grafana_config.png
+
+Config yardstick conf
+^^^^^^^^^^^^^^^^^^^^^
+Copy the sample configuration file and open it in an editor:
+::
+
+  cp ./etc/yardstick/yardstick.conf.sample /etc/yardstick/yardstick.conf
+  vi /etc/yardstick/yardstick.conf
+
+Config yardstick.conf:
+::
+
+  [DEFAULT]
+  debug = True
+  dispatcher = influxdb
+
+  [dispatcher_influxdb]
+  timeout = 5
+  target = http://{YOUR_IP_HERE}:8086
+  db_name = yardstick
+  username = root
+  password = root
+
+Now you can run yardstick test cases and store the results in influxdb
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
diff --git a/docs/userguide/08-yardstick_plugin.rst b/docs/userguide/08-yardstick_plugin.rst
new file mode 100644
index 000000000..e68db650d
--- /dev/null
+++ b/docs/userguide/08-yardstick_plugin.rst
@@ -0,0 +1,144 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, Ericsson AB, Huawei Technologies Co.,Ltd and others.
+
+===================================
+Installing a plug-in into yardstick
+===================================
+
+Abstract
+========
+
+Yardstick currently provides a ``plugin`` CLI command to support integration
+with other OPNFV testing projects. Below is an example invocation of the
+yardstick plugin command, using the Storperf plug-in as a sample.
+
+
+Installing Storperf into yardstick
+==================================
+
+Storperf is delivered as a Docker container from
+https://hub.docker.com/r/opnfv/storperf/tags/.
+
+There are two possible methods for installation in your environment:
+
+* Run container on Jump Host
+* Run container in a VM
+
+In this introduction we will install Storperf on the Jump Host.
+
+
+Step 0: Environment preparation
+>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
+
+Running Storperf on the Jump Host requires:
+
+* Docker must be installed
+* Jump Host must have access to the OpenStack Controller API
+* Jump Host must have internet connectivity for downloading docker image
+* Enough floating IPs must be available to match your agent count
+
+Before installing Storperf into yardstick you need to check your openstack
+environment and other dependencies:
+
+1. Make sure docker is installed.
+2. Make sure Keystone, Nova, Neutron, Glance and Heat are installed correctly.
+3. Make sure the Jump Host has access to the OpenStack Controller API.
+4. Make sure the Jump Host has internet connectivity for downloading the docker image.
+5. You need to know where to get basic openstack Keystone authorization info,
+   such as OS_PASSWORD, OS_TENANT_NAME, OS_AUTH_URL, OS_USERNAME.
+6. To run a Storperf container, you need to have the OpenStack Controller
+   environment variables defined and passed to the Storperf container. The
+   best way to do this is to put the environment variables in a
+   "storperf_admin-rc" file. The storperf_admin-rc file should include at
+   least the following credential environment variables:
+
+* OS_AUTH_URL
+* OS_TENANT_ID
+* OS_TENANT_NAME
+* OS_PROJECT_NAME
+* OS_USERNAME
+* OS_PASSWORD
+* OS_REGION_NAME
+
+During environment preparation, a "prepare_storperf_admin-rc.sh" script such
+as the one below can be used to generate the storperf_admin-rc file.
+::
+
+  #!/bin/bash
+  AUTH_URL=${OS_AUTH_URL}
+  USERNAME=${OS_USERNAME:-admin}
+  PASSWORD=${OS_PASSWORD:-console}
+  TENANT_NAME=${OS_TENANT_NAME:-admin}
+  VOLUME_API_VERSION=${OS_VOLUME_API_VERSION:-2}
+  PROJECT_NAME=${OS_PROJECT_NAME:-$TENANT_NAME}
+  TENANT_ID=`keystone tenant-get admin|grep 'id'|awk -F '|' '{print $3}'|sed -e 's/^[[:space:]]*//'`
+  rm -f ~/storperf_admin-rc
+  touch ~/storperf_admin-rc
+  echo "OS_AUTH_URL="$AUTH_URL >> ~/storperf_admin-rc
+  echo "OS_USERNAME="$USERNAME >> ~/storperf_admin-rc
+  echo "OS_PASSWORD="$PASSWORD >> ~/storperf_admin-rc
+  echo "OS_TENANT_NAME="$TENANT_NAME >> ~/storperf_admin-rc
+  echo "OS_VOLUME_API_VERSION="$VOLUME_API_VERSION >> ~/storperf_admin-rc
+  echo "OS_PROJECT_NAME="$PROJECT_NAME >> ~/storperf_admin-rc
+  echo "OS_TENANT_ID="$TENANT_ID >> ~/storperf_admin-rc
+
+
+Step 1: Plug-in configuration file preparation
+++++++++++++++++++++++++++++++++++++++++++++++
+
+To install a plug-in, first you need to prepare a plug-in configuration file in
+YAML format and store it in the "plugin" directory. The plugin configuration
+file works as the input of the yardstick "plugin" command. Below is a sample
+Storperf plug-in configuration file:
+::
+
+  ---
+  # StorPerf plugin configuration file
+  # Used for integration StorPerf into Yardstick as a plugin
+  schema: "yardstick:plugin:0.1"
+  plugins:
+    name: storperf
+    deployment:
+      ip: 192.168.23.2
+      user: root
+      password: root
+
+In the plug-in configuration file, you need to specify the plug-in name and
+the plug-in deployment info, including the node ip and the node login username
+and password. Here Storperf will be installed on IP 192.168.23.2, which is the
+Jump Host in this example environment.
+
+Step 2: Plug-in install/remove scripts preparation
+++++++++++++++++++++++++++++++++++++++++++++++++++
+
+Under the "yardstick/resource/scripts" directory, there are two folders: an
+"install" folder and a "remove" folder. You need to store the plug-in
+install/remove scripts in these two folders respectively.
+
+The detailed install or remove operations should be defined in these two
+scripts. The names of both the install and remove scripts should match the
+plug-in name that you specified in the plug-in configuration file.
+For example, the install and remove scripts for Storperf are both named
+"storperf.bash"; the resulting layout is sketched below.
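+
+The resulting directory layout, using only the paths described above, then
+looks like this:
+::
+
+  yardstick/resource/scripts/
+  |-- install/
+  |   `-- storperf.bash    # actions needed to deploy Storperf
+  `-- remove/
+      `-- storperf.bash    # actions needed to tear Storperf down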
+
+
+Step 3: Install and remove Storperf
++++++++++++++++++++++++++++++++++++
+
+To install Storperf, simply execute the following command:
+::
+
+  # Install Storperf
+  yardstick plugin install plugin/storperf.yaml
+
+Removing Storperf from yardstick
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+To remove Storperf, simply execute the following command:
+::
+
+  # Remove Storperf
+  yardstick plugin remove plugin/storperf.yaml
+
+What the yardstick plugin command does is use the username and password to log
+into the deployment target and then execute the corresponding install or
+remove script.
diff --git a/docs/userguide/09-result-store-InfluxDB.rst b/docs/userguide/09-result-store-InfluxDB.rst
new file mode 100644
index 000000000..5c49e9f7c
--- /dev/null
+++ b/docs/userguide/09-result-store-InfluxDB.rst
@@ -0,0 +1,86 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, 2016 Huawei Technologies Co.,Ltd and others.
+
+==============================================
+Store Other Project's Test Results in InfluxDB
+==============================================
+
+Abstract
+========
+
+.. _Framework: https://wiki.opnfv.org/download/attachments/6827660/wiki.png?version=1&modificationDate=1470298075000&api=v2
+
+This chapter illustrates how to run plug-in test cases and store test results
+into the community's InfluxDB. The framework is shown in Framework_.
+
+
+.. image:: images/InfluxDB_store.png
+   :width: 1200px
+   :alt: Store Other Project's Test Results in InfluxDB
+
+Store Storperf Test Results into Community's InfluxDB
+=====================================================
+
+.. _Influxdb: https://git.opnfv.org/cgit/yardstick/tree/yardstick/dispatcher/influxdb.py
+.. _Mingjiang: limingjiang@huawei.com
+.. _Visual: https://wiki.opnfv.org/download/attachments/6827660/tc074.PNG?version=1&modificationDate=1470298075000&api=v2
+.. _Login: http://testresults.opnfv.org/grafana/login
+
+As shown in Framework_, there are two ways to store Storperf test results
+into the community's InfluxDB:
+
+1. Yardstick asks Storperf to run the test case. After the test case is
+   completed, Yardstick reads the test results via the ReST API from Storperf
+   and posts the test data to InfluxDB.
+
+2. Additionally, Storperf can run tests by itself and post the test results
+   directly to InfluxDB. The method for posting data directly to InfluxDB
+   will be supported in the future.
+
+The plan is to support a rest-api in the D release, so that other testing
+projects can call the rest-api to use the yardstick dispatcher service to
+push data to yardstick's influxdb database.
+
+For now, influxdb only supports the line protocol; the json protocol is
+deprecated.
+
+Take the ping test case for example; the raw_result is in json format,
+like this:
+::
+
+  {
+    "benchmark": {
+      "timestamp": 1470315409.868095,
+      "errors": "",
+      "data": {
+        "rtt": {
+          "ares": 1.125
+        }
+      },
+      "sequence": 1
+    },
+    "runner_id": 2625
+  }
+
+With the help of "influxdb_line_protocol", the json is transformed into a
+line string like the one below:
+::
+
+  'ping,deploy_scenario=unknown,host=athena.demo,installer=unknown,pod_name=unknown,
+  runner_id=2625,scenarios=Ping,target=ares.demo,task_id=77755f38-1f6a-4667-a7f3-
+  301c99963656,version=unknown rtt.ares=1.125 1470315409868094976'
+
+So, for data output in json format, you just need to transform the json into
+the line format and call the influxdb api to post the data into the database.
+All of this functionality has already been implemented in Influxdb_.
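+
+A minimal sketch of that json-to-line transformation is shown below (the real
+dispatcher code linked above also escapes special characters, flattens nested
+result dicts and adds the full tag set; the helper name here is hypothetical):
+::
+
+  def to_line_protocol(measurement, tags, fields, timestamp):
+      """Build one InfluxDB line-protocol record.
+
+      tags and fields are flat dicts; the timestamp is in seconds and is
+      converted to the nanoseconds expected by the line protocol.
+      """
+      tag_part = ",".join("%s=%s" % kv for kv in sorted(tags.items()))
+      field_part = ",".join("%s=%s" % kv for kv in fields.items())
+      return "%s,%s %s %d" % (measurement, tag_part, field_part,
+                              int(timestamp * 10 ** 9))
+
+  # Produces a line like the ping example above (modulo float rounding):
+  to_line_protocol("ping",
+                   {"host": "athena.demo", "target": "ares.demo"},
+                   {"rtt.ares": 1.125},
+                   1470315409.868095)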
+If you need support on this, please contact Mingjiang_. Posting the
+transformed line to the community database is then a single HTTP call:
+::
+
+    curl -i -XPOST 'http://104.197.68.199:8086/write?db=yardstick' \
+        --data-binary 'ping,deploy_scenario=unknown,host=athena.demo,installer=unknown, ...'
+
+Grafana is used for visualizing the collected test data, as shown in Visual_.
+Grafana can be accessed via Login_.
+
+
+.. image:: images/results_visualization.png
+   :width: 1200px
+   :alt: results visualization
+
diff --git a/docs/userguide/10-list-of-tcs.rst b/docs/userguide/10-list-of-tcs.rst
new file mode 100644
index 000000000..7e8c85433
--- /dev/null
+++ b/docs/userguide/10-list-of-tcs.rst
@@ -0,0 +1,108 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, Ericsson AB and others.
+
+====================
+Yardstick Test Cases
+====================
+
+Abstract
+========
+
+This chapter lists the available Yardstick test cases.
+Yardstick test cases are divided into two main categories:
+
+* *Generic NFVI Test Cases* - Test Cases developed to realize the methodology
+  described in :doc:`02-methodology`
+
+* *OPNFV Feature Test Cases* - Test Cases developed to verify one or more
+  aspects of a feature delivered by an OPNFV Project, including the test cases
+  developed for the :term:`VTC`.
+
+Generic NFVI Test Case Descriptions
+===================================
+
+.. toctree::
+   :maxdepth: 1
+
+   opnfv_yardstick_tc001.rst
+   opnfv_yardstick_tc002.rst
+   opnfv_yardstick_tc004.rst
+   opnfv_yardstick_tc005.rst
+   opnfv_yardstick_tc008.rst
+   opnfv_yardstick_tc009.rst
+   opnfv_yardstick_tc010.rst
+   opnfv_yardstick_tc011.rst
+   opnfv_yardstick_tc012.rst
+   opnfv_yardstick_tc014.rst
+   opnfv_yardstick_tc024.rst
+   opnfv_yardstick_tc037.rst
+   opnfv_yardstick_tc038.rst
+   opnfv_yardstick_tc042.rst
+   opnfv_yardstick_tc043.rst
+   opnfv_yardstick_tc044.rst
+   opnfv_yardstick_tc055.rst
+   opnfv_yardstick_tc061.rst
+   opnfv_yardstick_tc063.rst
+   opnfv_yardstick_tc069.rst
+   opnfv_yardstick_tc070.rst
+   opnfv_yardstick_tc071.rst
+   opnfv_yardstick_tc072.rst
+   opnfv_yardstick_tc075.rst
+
+OPNFV Feature Test Cases
+========================
+
+H A
+---
+
+.. toctree::
+   :maxdepth: 1
+
+   opnfv_yardstick_tc019.rst
+   opnfv_yardstick_tc025.rst
+
+IPv6
+----
+
+.. toctree::
+   :maxdepth: 1
+
+   opnfv_yardstick_tc027.rst
+
+KVM
+---
+
+.. toctree::
+   :maxdepth: 1
+
+   opnfv_yardstick_tc028.rst
+
+Parser
+------
+
+.. toctree::
+   :maxdepth: 1
+
+   opnfv_yardstick_tc040.rst
+
+virtual Traffic Classifier
+--------------------------
+
+.. toctree::
+   :maxdepth: 1
+
+   opnfv_yardstick_tc006.rst
+   opnfv_yardstick_tc007.rst
+   opnfv_yardstick_tc020.rst
+   opnfv_yardstick_tc021.rst
+
+Templates
+=========
+
+.. toctree::
+   :maxdepth: 1
+
+   testcase_description_v2_template
+   Yardstick_task_templates
diff --git a/docs/userguide/apexlake_api.rst b/docs/userguide/apexlake_api.rst
deleted file mode 100644
index 35a1dbe3e..000000000
--- a/docs/userguide/apexlake_api.rst
+++ /dev/null
@@ -1,89 +0,0 @@
-.. This work is licensed under a Creative Commons Attribution 4.0 International
-.. License.
-.. http://creativecommons.org/licenses/by/4.0
-.. (c) OPNFV, Intel Corporation and others.
-
-
-=================================
-Apexlake API Interface Definition
-=================================
-
-Abstract
---------
-
-The API interface provided by the framework to enable the execution of test
-cases is defined as follows.
- - -init ----- - -**static init()** - - Initializes the Framework - - **Returns** None - - -execute_framework ------------------ - -**static execute_framework** (test_cases, - - iterations, - - heat_template, - - heat_template_parameters, - - deployment_configuration, - - openstack_credentials) - - Executes the framework according the specified inputs - - **Parameters** - - - **test_cases** - - Test cases to be run with the workload (dict() of dict()) - - Example: - test_case = dict() - - test_case[’name’] = ‘module.Class’ - - test_case[’params’] = dict() - - test_case[’params’][’throughput’] = ‘1’ - - test_case[’params’][’vlan_sender’] = ‘1000’ - - test_case[’params’][’vlan_receiver’] = ‘1001’ - - test_cases = [test_case] - - - **iterations** - Number of test cycles to be executed (int) - - - **heat_template** - (string) File name of the heat template corresponding to the workload to be deployed. - It contains the parameters to be evaluated in the form of #parameter_name. - (See heat_templates/vTC.yaml as example). - - - **heat_template_parameters** - (dict) Parameters to be provided as input to the - heat template. See http://docs.openstack.org/developer/heat/ template_guide/hot_guide.html - section “Template input parameters” for further info. - - - **deployment_configuration** - ( dict[string] = list(strings) ) ) Dictionary of parameters - representing the deployment configuration of the workload. - - The key is a string corresponding to the name of the parameter, - the value is a list of strings representing the value to be - assumed by a specific param. The parameters are user defined: - they have to correspond to the place holders (#parameter_name) - specified in the heat template. - - **Returns** dict() containing results diff --git a/docs/userguide/apexlake_installation.rst b/docs/userguide/apexlake_installation.rst deleted file mode 100644 index d4493e0f8..000000000 --- a/docs/userguide/apexlake_installation.rst +++ /dev/null @@ -1,300 +0,0 @@ -.. This work is licensed under a Creative Commons Attribution 4.0 International -.. License. -.. http://creativecommons.org/licenses/by/4.0 -.. (c) OPNFV, Intel Corporation and others. - - -.. _DPDK: http://dpdk.org/doc/nics -.. _DPDK-pktgen: https://github.com/Pktgen/Pktgen-DPDK/ -.. _SRIOV: https://wiki.openstack.org/wiki/SR-IOV-Passthrough-For-Networking -.. _PORTSEC: https://wiki.openstack.org/wiki/Neutron/ML2PortSecurityExtensionDriver -.. _here: https://wiki.opnfv.org/vtc - - -============================ -Apexlake Installation Guide -============================ - -Abstract --------- - -ApexLake is a framework that provides automatic execution of experiments and -related data collection to enable a user validate infrastructure from the -perspective of a Virtual Network Function (:term:`VNF`). - -In the context of Yardstick, a virtual Traffic Classifier (:term:`VTC`) network -function is utilized. - - -Framework Hardware Dependencies -=============================== - -In order to run the framework there are some hardware related dependencies for -ApexLake. - -The framework needs to be installed on the same physical node where DPDK-pktgen_ -is installed. - -The installation requires the physical node hosting the packet generator must -have 2 NICs which are DPDK_ compatible. - -The 2 NICs will be connected to the switch where the OpenStack VM -network is managed. - -The switch used must support multicast traffic and :term:`IGMP` snooping. -Further details about the configuration are provided at the following here_. 
- -The corresponding ports to which the cables are connected need to be configured -as VLAN trunks using two of the VLAN IDs available for Neutron. -Note the VLAN IDs used as they will be required in later configuration steps. - - -Framework Software Dependencies -=============================== -Before starting the framework, a number of dependencies must first be installed. -The following describes the set of instructions to be executed via the Linux -shell in order to install and configure the required dependencies. - -1. Install Dependencies. - -To support the framework dependencies the following packages must be installed. -The example provided is based on Ubuntu and needs to be executed in root mode. - -:: - - apt-get install python-dev - apt-get install python-pip - apt-get install python-mock - apt-get install tcpreplay - apt-get install libpcap-dev - -2. Source OpenStack openrc file. - -:: - - source openrc - -3. Configure Openstack Neutron - -In order to support traffic generation and management by the virtual -Traffic Classifier, the configuration of the port security driver -extension is required for Neutron. - -For further details please follow the following link: PORTSEC_ -This step can be skipped in case the target OpenStack is Juno or Kilo release, -but it is required to support Liberty. -It is therefore required to indicate the release version in the configuration -file located in ./yardstick/vTC/apexlake/apexlake.conf - - -4. Create Two Networks based on VLANs in Neutron. - -To enable network communications between the packet generator and the compute -node, two networks must be created via Neutron and mapped to the VLAN IDs -that were previously used in the configuration of the physical switch. -The following shows the typical set of commands required to configure Neutron -correctly. -The physical switches need to be configured accordingly. - -:: - - VLAN_1=2032 - VLAN_2=2033 - PHYSNET=physnet2 - neutron net-create apexlake_inbound_network \ - --provider:network_type vlan \ - --provider:segmentation_id $VLAN_1 \ - --provider:physical_network $PHYSNET - - neutron subnet-create apexlake_inbound_network \ - 192.168.0.0/24 --name apexlake_inbound_subnet - - neutron net-create apexlake_outbound_network \ - --provider:network_type vlan \ - --provider:segmentation_id $VLAN_2 \ - --provider:physical_network $PHYSNET - - neutron subnet-create apexlake_outbound_network 192.168.1.0/24 \ - --name apexlake_outbound_subnet - - -5. Download Ubuntu Cloud Image and load it on Glance - -The virtual Traffic Classifier is supported on top of Ubuntu 14.04 cloud image. -The image can be downloaded on the local machine and loaded on Glance -using the following commands: - -:: - - wget cloud-images.ubuntu.com/trusty/current/trusty-server-cloudimg-amd64-disk1.img - glance image-create \ - --name ubuntu1404 \ - --is-public true \ - --disk-format qcow \ - --container-format bare \ - --file trusty-server-cloudimg-amd64-disk1.img - - - -6. Configure the Test Cases - -The VLAN tags must also be included in the test case Yardstick yaml file -as parameters for the following test cases: - - * :doc:`opnfv_yardstick_tc006` - - * :doc:`opnfv_yardstick_tc007` - - * :doc:`opnfv_yardstick_tc020` - - * :doc:`opnfv_yardstick_tc021` - - -Install and Configure DPDK Pktgen -+++++++++++++++++++++++++++++++++ - -Execution of the framework is based on DPDK Pktgen. -If DPDK Pktgen has not installed, it is necessary to download, install, compile -and configure it. 
-The user can create a directory and download the dpdk packet generator source -code: - -:: - - cd experimental_framework/libraries - mkdir dpdk_pktgen - git clone https://github.com/pktgen/Pktgen-DPDK.git - -For instructions on the installation and configuration of DPDK and DPDK Pktgen -please follow the official DPDK Pktgen README file. -Once the installation is completed, it is necessary to load the DPDK kernel -driver, as follow: - -:: - - insmod uio - insmod DPDK_DIR/x86_64-native-linuxapp-gcc/kmod/igb_uio.ko - -It is necessary to set the configuration file to support the desired Pktgen -configuration. -A description of the required configuration parameters and supporting examples -is provided in the following: - -:: - - [PacketGen] - packet_generator = dpdk_pktgen - - # This is the directory where the packet generator is installed - # (if the user previously installed dpdk-pktgen, - # it is required to provide the director where it is installed). - pktgen_directory = /home/user/software/dpdk_pktgen/dpdk/examples/pktgen/ - - # This is the directory where DPDK is installed - dpdk_directory = /home/user/apexlake/experimental_framework/libraries/Pktgen-DPDK/dpdk/ - - # Name of the dpdk-pktgen program that starts the packet generator - program_name = app/app/x86_64-native-linuxapp-gcc/pktgen - - # DPDK coremask (see DPDK-Pktgen readme) - coremask = 1f - - # DPDK memory channels (see DPDK-Pktgen readme) - memory_channels = 3 - - # Name of the interface of the pktgen to be used to send traffic (vlan_sender) - name_if_1 = p1p1 - - # Name of the interface of the pktgen to be used to receive traffic (vlan_receiver) - name_if_2 = p1p2 - - # PCI bus address correspondent to if_1 - bus_slot_nic_1 = 01:00.0 - - # PCI bus address correspondent to if_2 - bus_slot_nic_2 = 01:00.1 - - -To find the parameters related to names of the NICs and the addresses of the PCI buses -the user may find it useful to run the :term:`DPDK` tool nic_bind as follows: - -:: - - DPDK_DIR/tools/dpdk_nic_bind.py --status - -Lists the NICs available on the system, and shows the available drivers and bus addresses for each interface. -Please make sure to select NICs which are :term:`DPDK` compatible. - -Installation and Configuration of smcroute -++++++++++++++++++++++++++++++++++++++++++ - -The user is required to install smcroute which is used by the framework to -support multicast communications. - -The following is the list of commands required to download and install smroute. - -:: - - cd ~ - git clone https://github.com/troglobit/smcroute.git - cd smcroute - git reset --hard c3f5c56 - sed -i 's/aclocal-1.11/aclocal/g' ./autogen.sh - sed -i 's/automake-1.11/automake/g' ./autogen.sh - ./autogen.sh - ./configure - make - sudo make install - cd .. - -It is required to do the reset to the specified commit ID. -It is also requires the creation a configuration file using the following -command: - - SMCROUTE_NIC=(name of the nic) - -where name of the nic is the name used previously for the variable "name_if_2". 
-For example: - -:: - - SMCROUTE_NIC=p1p2 - -Then create the smcroute configuration file /etc/smcroute.conf - -:: - - echo mgroup from $SMCROUTE_NIC group 224.192.16.1 > /etc/smcroute.conf - - -At the end of this procedure it will be necessary to perform the following -actions to add the user to the sudoers: - -:: - - adduser USERNAME sudo - echo "user ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers - - -Experiment using SR-IOV Configuration on the Compute Node -+++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - -To enable :term:`SR-IOV` interfaces on the physical NIC of the compute node, a -compatible NIC is required. -NIC configuration depends on model and vendor. After proper configuration to -support :term:`SR-IOV`, a proper configuration of OpenStack is required. -For further information, please refer to the SRIOV_ configuration guide - -Finalize installation the framework on the system -================================================= - -The installation of the framework on the system requires the setup of the project. -After entering into the apexlake directory, it is sufficient to run the following -command. - -:: - - python setup.py install - -Since some elements are copied into the /tmp directory (see configuration file) -it could be necessary to repeat this step after a reboot of the host. diff --git a/docs/userguide/architecture.rst b/docs/userguide/architecture.rst deleted file mode 100755 index 3abb67b7d..000000000 --- a/docs/userguide/architecture.rst +++ /dev/null @@ -1,263 +0,0 @@ -.. This work is licensed under a Creative Commons Attribution 4.0 International -.. License. -.. http://creativecommons.org/licenses/by/4.0 -.. (c) 2016 Huawei Technologies Co.,Ltd and others - -============ -Architecture -============ - -Abstract -======== -This chapter describes the yardstick framework software architecture. we will introduce it from Use-Case View, -Logical View, Process View and Deployment View. More technical details will be introduced in this chapter. - -Overview -======== - -Architecture overview ---------------------- -Yardstick is mainly written in Python, and test configurations are made -in YAML. Documentation is written in reStructuredText format, i.e. .rst -files. Yardstick is inspired by Rally. Yardstick is intended to run on a -computer with access and credentials to a cloud. The test case is described -in a configuration file given as an argument. - -How it works: the benchmark task configuration file is parsed and converted into -an internal model. The context part of the model is converted into a Heat -template and deployed into a stack. Each scenario is run using a runner, either -serially or in parallel. Each runner runs in its own subprocess executing -commands in a VM using SSH. The output of each scenario is written as json -records to a file or influxdb or http server, we use influxdb as the backend, -the test result will be shown with grafana. - - -Concept -------- -**Benchmark** - assess the relative performance of something - -**Benchmark** configuration file - describes a single test case in yaml format - -**Context** - The set of Cloud resources used by a scenario, such as user -names, image names, affinity rules and network configurations. A context is -converted into a simplified Heat template, which is used to deploy onto the -Openstack environment. 
- -**Data** - Output produced by running a benchmark, written to a file in json format - -**Runner** - Logic that determines how a test scenario is run and reported, for -example the number of test iterations, input value stepping and test duration. -Predefined runner types exist for re-usage, see `Runner types`_. - -**Scenario** - Type/class of measurement for example Ping, Pktgen, (Iperf, LmBench, ...) - -**SLA** - Relates to what result boundary a test case must meet to pass. For -example a latency limit, amount or ratio of lost packets and so on. Action -based on :term:`SLA` can be configured, either just to log (monitor) or to stop -further testing (assert). The :term:`SLA` criteria is set in the benchmark -configuration file and evaluated by the runner. - - -Runner types ------------- - -There exists several predefined runner types to choose between when designing -a test scenario: - -**Arithmetic:** -Every test run arithmetically steps the specified input value(s) in the -test scenario, adding a value to the previous input value. It is also possible -to combine several input values for the same test case in different -combinations. - -Snippet of an Arithmetic runner configuration: -:: - - - runner: - type: Arithmetic - iterators: - - - name: stride - start: 64 - stop: 128 - step: 64 - -**Duration:** -The test runs for a specific period of time before completed. - -Snippet of a Duration runner configuration: -:: - - - runner: - type: Duration - duration: 30 - -**Sequence:** -The test changes a specified input value to the scenario. The input values -to the sequence are specified in a list in the benchmark configuration file. - -Snippet of a Sequence runner configuration: -:: - - - runner: - type: Sequence - scenario_option_name: packetsize - sequence: - - 100 - - 200 - - 250 - - -**Iteration:** -Tests are run a specified number of times before completed. - -Snippet of an Iteration runner configuration: -:: - - - runner: - type: Iteration - iterations: 2 - - - - -Use-Case View -============= -Yardstick Use-Case View shows two kinds of users. One is the Tester who will -do testing in cloud, the other is the User who is more concerned with test result -and result analyses. - -For testers, they will run a single test case or test case suite to verify -infrastructure compliance or bencnmark their own infrastructure performance. -Test result will be stored by dispatcher module, three kinds of store method -(file, influxdb and http) can be configured. The detail information of -scenarios and runners can be queried with CLI by testers. - -For users, they would check test result with four ways. - -If dispatcher module is configured as file(default), there are two ways to -check test result. One is to get result from yardstick.out ( default path: -/tmp/yardstick.out), the other is to get plot of test result, it will be shown -if users execute command "yardstick-plot". - -If dispatcher module is configured as influxdb, users will check test -result on Grafana which is most commonly used for visualizing time series data. - -If dispatcher module is configured as http, users will check test result -on OPNFV testing dashboard which use MongoDB as backend. - -.. image:: images/Use_case.png - :width: 800px - :alt: Yardstick Use-Case View - -Logical View -============ -Yardstick Logical View describes the most important classes, their -organization, and the most important use-case realizations. - -Main classes: - -**TaskCommands** - "yardstick task" subcommand handler. 
- -**HeatContext** - Do test yaml file context section model convert to HOT, -deploy and undeploy Openstack heat stack. - -**Runner** - Logic that determines how a test scenario is run and reported. - -**TestScenario** - Type/class of measurement for example Ping, Pktgen, (Iperf, -LmBench, ...) - -**Dispatcher** - Choose user defined way to store test results. - -TaskCommands is the "yardstick task" subcommand's main entry. It takes yaml -file (e.g. test.yaml) as input, and uses HeatContext to convert the yaml -file's context section to HOT. After Openstacik heat stack is deployed by -HeatContext with the converted HOT, TaskCommands use Runner to run specified -TestScenario. During first runner initialization, it will create output -process. The output process use Dispatcher to push test results. The Runner -will also create a process to execute TestScenario. And there is a -multiprocessing queue between each runner process and output process, so the -runner process can push the real-time test results to the storage media. -TestScenario is commonly connected with VMs by using ssh. It sets up VMs and -run test measurement scripts through the ssh tunnel. After all TestScenaio -is finished, TaskCommands will undeploy the heat stack. Then the whole test is -finished. - -.. image:: images/Logical_view.png - :width: 800px - :alt: Yardstick Logical View - -Process View (Test execution flow) -================================== -Yardstick process view shows how yardstick runs a test case. Below is the -sequence graph about the test execution flow using heat context, and each -object represents one module in yardstick: - -.. image:: images/test_execution_flow.png - :width: 800px - :alt: Yardstick Process View - -A user wants to do a test with yardstick. He can use the CLI to input the -command to start a task. "TaskCommands" will receive the command and ask -"HeatContext" to parse the context. "HeatContext" will then ask "Model" to -convert the model. After the model is generated, "HeatContext" will inform -"Openstack" to deploy the heat stack by heat template. After "Openstack" -deploys the stack, "HeatContext" will inform "Runner" to run the specific test -case. - -Firstly, "Runner" would ask "TestScenario" to process the specific scenario. -Then "TestScenario" will start to log on the openstack by ssh protocal and -execute the test case on the specified VMs. After the script execution -finishes, "TestScenario" will send a message to inform "Runner". When the -testing job is done, "Runner" will inform "Dispatcher" to output the test -result via file, influxdb or http. After the result is output, "HeatContext" -will call "Openstack" to undeploy the heat stack. Once the stack is -undepoyed, the whole test ends. - -Deployment View -=============== -Yardstick deployment view shows how the yardstick tool can be deployed into the -underlying platform. Generally, yardstick tool is installed on JumpServer(see -`03-installation` for detail installation steps), and JumpServer is -connected with other control/compute servers by networking. Based on this -deployment, yardstick can run the test cases on these hosts, and get the test -result for better showing. - -.. image:: images/Deployment.png - :width: 800px - :alt: Yardstick Deployment View - -Yardstick Directory structure -============================= - -**yardstick/** - Yardstick main directory. - -*ci/* - Used for continuous integration of Yardstick at different PODs and - with support for different installers. 
- -*docs/* - All documentation is stored here, such as configuration guides, - user guides and Yardstick descriptions. - -*etc/* - Used for test cases requiring specific POD configurations. - -*samples/* - test case samples are stored here, most of all scenario and - feature's samples are shown in this directory. - -*tests/* - Here both Yardstick internal tests (*functional/* and *unit/*) as - well as the test cases run to verify the NFVI (*opnfv/*) are stored. - Also configurations of what to run daily and weekly at the different - PODs is located here. - -*tools/* - Currently contains tools to build image for VMs which are deployed - by Heat. Currently contains how to build the yardstick-trusty-server - image with the different tools that are needed from within the image. - -*vTC/* - Contains the files for running the virtual Traffic Classifier tests. - -*yardstick/* - Contains the internals of Yardstick: Runners, Scenario, Contexts, - CLI parsing, keys, plotting tools, dispatcher and so on. - diff --git a/docs/userguide/images/InfluxDB_store.png b/docs/userguide/images/InfluxDB_store.png new file mode 100644 index 000000000..1770fd255 Binary files /dev/null and b/docs/userguide/images/InfluxDB_store.png differ diff --git a/docs/userguide/images/results_visualization.png b/docs/userguide/images/results_visualization.png new file mode 100644 index 000000000..cd092808b Binary files /dev/null and b/docs/userguide/images/results_visualization.png differ diff --git a/docs/userguide/index.rst b/docs/userguide/index.rst index 72a92a69f..0aa112a45 100644 --- a/docs/userguide/index.rst +++ b/docs/userguide/index.rst @@ -12,11 +12,13 @@ Yardstick Overview 01-introduction 02-methodology - architecture + 03-architecture 04-vtc-overview - apexlake_installation - apexlake_api - 03-installation - 03-list-of-tcs + 05-apexlake_installation + 06-apexlake_api + 07-installation + 08-yardstick_plugin + 09-result-store-InfluxDB + 10-list-of-tcs glossary references -- cgit 1.2.3-korg