Diffstat (limited to 'docs/testing')
-rw-r--r--  docs/testing/developer/devguide/index.rst                        1
-rw-r--r--  docs/testing/developer/devguide/web.rst                        100
-rw-r--r--  docs/testing/user/configguide/configuration.rst                  5
-rw-r--r--  docs/testing/user/configguide/index.rst                          1
-rw-r--r--  docs/testing/user/configguide/web.rst                           74
-rw-r--r--  docs/testing/user/userguide/compute.rst                         35
-rw-r--r--  docs/testing/user/userguide/index.rst                            3
-rw-r--r--  docs/testing/user/userguide/network.rst                        114
-rw-r--r--  docs/testing/user/userguide/network_testcase_description.rst    90
-rw-r--r--  docs/testing/user/userguide/storage.rst                         19
-rw-r--r--  docs/testing/user/userguide/web.rst                             70
11 files changed, 249 insertions(+), 263 deletions(-)
diff --git a/docs/testing/developer/devguide/index.rst b/docs/testing/developer/devguide/index.rst
index ab411005..0b583cc5 100644
--- a/docs/testing/developer/devguide/index.rst
+++ b/docs/testing/developer/devguide/index.rst
@@ -16,6 +16,5 @@ QTIP Developer Guide
framework.rst
cli.rst
api.rst
- web.rst
compute-qpi.rst
storage-qpi.rst
diff --git a/docs/testing/developer/devguide/web.rst b/docs/testing/developer/devguide/web.rst
deleted file mode 100644
index ae4e3156..00000000
--- a/docs/testing/developer/devguide/web.rst
+++ /dev/null
@@ -1,100 +0,0 @@
-.. This work is licensed under a Creative Commons Attribution 4.0 International License.
-.. http://creativecommons.org/licenses/by/4.0
-
-
-***************************************
-Web Portal for Benchmarking Services
-***************************************
-
-QTIP consists of different tools(metrics) to benchmark the NFVI. These metrics
-fall under different NFVI subsystems(QPI's) such as compute, storage and network.
-QTIP benchmarking tasks are built upon `Ansible`_ playbooks and roles.
-QTIP web portal is a platform to expose QTIP as a benchmarking service hosted on a central host.
-
-Framework
-=========
-
-The web travel has been developed on Python `Django`_ framework. Dig into the documentation to learn about Django.
-
-Design
-======
-
-Django is a MTV (Model Template View) framework. Database objects are mapped to models in ``models.py``. Views handle the
-requests from client side and interact with database using Django ORM. Templates are responsible for
-UI rendering based on response context from Views.
-
-Models
-------
-
-Repo
-~~~~
-
-Model for `workspace`_ repos
-
-::
-
- Repo:
- name
- git_link
-
-
-Task
-~~~~
-
-Tasks keep track of every benchmark run through QTIP-Web Services. Whenever you run a benchmark,
-a new task is created which keep track of time stats and log task progress and ansible output for
-the respective playbook.
-
-::
-
- Task
- start_time
- end_time
- status
- run_time
- repo
- log
-
-
-Views
------
-
-Dashboard
-~~~~~~~~~
-
- - Base class - TemplateVIew
-
-Class based view serving as home page for the application.
-
-
-ReposView
-~~~~~~~~~
-
- - Base class - LoginRequiredMixin, CreateView
-
-Class based view for listing and add new repos
-
-
-RepoUpdate
-~~~~~~~~~~
-
- - Base class - LoginRequiredMixin, UpdateView
-
-Class based View for listing and updating an existing repo details.
-
-*Both ReposView and RepoUpdate View use same template ``repo_form.html``. The context has an extra variable ``template_role`` which is used to distinguish if repo form is for create or edit operation.*
-
-
-Run
-~~~
-
- - Base class - LoginRequiredMixin, View
- - template name - run.html
-
-Class based View for adding new task and run benchmark based on task details. The logs are saved
-in ``logs/run_<log_id>`` directory.
-
-
-.. _Ansible: https://www.ansible.com/
-.. _Django: https://docs.djangoproject.com/en/1.11/
-.. _workspace: https://github.com/opnfv/qtip/blob/master/docs/testing/developer/devguide/ansible.rst#create-workspace
diff --git a/docs/testing/user/configguide/configuration.rst b/docs/testing/user/configguide/configuration.rst
index ae745341..d04f5bab 100644
--- a/docs/testing/user/configguide/configuration.rst
+++ b/docs/testing/user/configguide/configuration.rst
@@ -40,8 +40,9 @@ Run and enter the docker instance
1. If you want to run benchmarks:
::
- envs="INSTALLER_TYPE={INSTALLER_TYPE} -e INSTALLER_IP={INSTALLER_IP} -e NODE_NAME={NODE_NAME}"
+ envs="INSTALLER_TYPE={INSTALLER_TYPE} -e INSTALLER_IP={INSTALLER_IP} -e NODE_NAME={NODE_NAME}"
docker run -p [HOST_IP:]<HOST_PORT>:5000 --name qtip -id -e $envs opnfv/qtip
+ docker start qtip
docker exec -i -t qtip /bin/bash
``INSTALLER_TYPE`` should be one of the OPNFV installers, e.g. apex, compass, daisy, fuel
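+
+   For example, with illustrative values (the installer IP, node name and host port below
+   are assumptions, not required values):
+   ::
+
+      envs="INSTALLER_TYPE=apex -e INSTALLER_IP=192.168.122.247 -e NODE_NAME=zte-virtual5"
+      docker run -p 8888:5000 --name qtip -id -e $envs opnfv/qtip
+      docker start qtip
+      docker exec -i -t qtip /bin/bash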
@@ -90,7 +91,7 @@ Environment configuration
Hardware configuration
----------------------
-QTIP does not have specific hardware requriements, and it can runs over any
+QTIP does not have specific hardware requirements, and it can run with any
OPNFV installer.
diff --git a/docs/testing/user/configguide/index.rst b/docs/testing/user/configguide/index.rst
index fa893e5e..9c72ecd2 100644
--- a/docs/testing/user/configguide/index.rst
+++ b/docs/testing/user/configguide/index.rst
@@ -12,4 +12,3 @@ QTIP Installation Guide
:maxdepth: 2
./configuration.rst
- ./web.rst
diff --git a/docs/testing/user/configguide/web.rst b/docs/testing/user/configguide/web.rst
deleted file mode 100644
index 83365abe..00000000
--- a/docs/testing/user/configguide/web.rst
+++ /dev/null
@@ -1,74 +0,0 @@
-.. This work is licensed under a Creative Commons Attribution 4.0 International License.
-.. http://creativecommons.org/licenses/by/4.0
-
-
-***************************************
-Web Portal installation & configuration
-***************************************
-
-Web Portal for Benchmarking is developed on python `Django`_ Framework. Right now the installation
-is need to be done from source.
-
-
-
-Clone QTIP Repo
-===============
-
-::
-
- git clone https://github.com/opnfv/qtip.git
-
-
-Setup database and Initialize user data
-=======================================
-
-CD into `web` directory.
-------------------------
-
-::
-
- cd qtip/qtip/web
-
-
-Setup migrations
-----------------
-
-::
-
- python manage.py makemigrations
-
-
-In usual case migrations will be already available with source. Console willll notify you
-of the same.
-
-Run migrations
---------------
-
-::
-
- python manage.py migrate
-
-
-Create superuser
-----------------
-::
-
- python manage.py createsuperuser
-
-
-Console will prompt for adding new web admin. Enter new credentials.
-
-
-
-Collecting Static Dependencies
-------------------------------
-::
-
- python manage.py importstatic
-
-
-This will import js and css dependencies for UI in static directory. Now the web application is
-ready to run.
-
-
-.. _Django: https://docs.djangoproject.com/en/1.11/
diff --git a/docs/testing/user/userguide/compute.rst b/docs/testing/user/userguide/compute.rst
index f889bfe6..7c5adc26 100644
--- a/docs/testing/user/userguide/compute.rst
+++ b/docs/testing/user/userguide/compute.rst
@@ -16,10 +16,11 @@ test compute components.
All the compute benchmarks could be run in the following scenarios:
On Baremetal Machines provisioned by an OPNFV installer (Host machines)
+On Virtual machines provisioned by OpenStack deployed by an OPNFV installer
Note: The Compute benchmark suite contains relatively old benchmarks such as dhrystone
and whetstone. The suite would be updated for better benchmarks such as Linbench for
-the OPNFV E release.
+a future OPNFV release.
Getting started
@@ -32,7 +33,7 @@ Inventory File
QTIP uses Ansible to trigger benchmark test. Ansible uses an inventory file to
determine what hosts to work against. QTIP can automatically generate an inventory
-file via OPNFV installer. Users also can write their own inventory infomation into
+file via the OPNFV installer. Users can also write their own inventory information into
``/home/opnfv/qtip/hosts``. This file is just a text file containing a list of host
IP addresses. For example:
::
@@ -53,19 +54,33 @@ manual. If *CI_DEBUG* is not set or set to *false*, QTIP will delete the key fro
remote hosts before the execution ends. Please make sure the key is deleted from remote
hosts, otherwise it can introduce a security flaw.
-Commands
---------
+Execution
+---------
-In a QTIP container, you can run compute QPI by using QTIP CLI:
-::
+There are two ways to execute compute QPI:
+
+* Script
+
+ You can run compute QPI with docker exec:
+ ::
+
+ # run with baremetal machines provisioned by an OPNFV installer
+ docker exec <qtip container> bash -x /home/opnfv/repos/qtip/qtip/scripts/quickstart.sh -q compute
+
+ # run with virtual machines provisioned by OpenStack
+ docker exec <qtip container> bash -x /home/opnfv/repos/qtip/qtip/scripts/quickstart.sh -q compute -u vnf
+
+* Commands
+
+ In a QTIP container, you can run compute QPI by using QTIP CLI. You can get more details from
+ *userguide/cli.rst*.
- mkdir result
- qtip plan run <plan_name> -p $PWD/result
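+
+A minimal CLI sequence, based on the commands previously documented in this section
+(``<plan_name>`` is a placeholder for an available plan):
+::
+
+   mkdir result
+   qtip plan run <plan_name> -p $PWD/result
+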
+Test result
+------------
-QTIP generates results in the ``$PWD/result`` directory are listed down under the
+QTIP generates results in the ``/home/opnfv/<project_name>/results/`` directory, grouped by
timestamp name.
-you can get more details from *userguide/cli.rst*.
Metrics
-------
diff --git a/docs/testing/user/userguide/index.rst b/docs/testing/user/userguide/index.rst
index 262ddd70..93adc8a9 100644
--- a/docs/testing/user/userguide/index.rst
+++ b/docs/testing/user/userguide/index.rst
@@ -15,6 +15,7 @@ QTIP User Guide
getting-started.rst
cli.rst
api.rst
- web.rst
compute.rst
storage.rst
+ network.rst
+ network_testcase_description.rst
diff --git a/docs/testing/user/userguide/network.rst b/docs/testing/user/userguide/network.rst
new file mode 100644
index 00000000..68c39974
--- /dev/null
+++ b/docs/testing/user/userguide/network.rst
@@ -0,0 +1,114 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) 2018 Spirent Communications Corp.
+
+
+********************************
+Network Performance Benchmarking
+********************************
+Like compute or storage QPI, network QPI gives users an overall score for system network performance.
+For now it focuses on L2 virtual switch performance on the NFVI. The current test cases follow the RFC2544
+standard and the implementation is based on Spirent TestCenter Virtual.
+
+For now, network QPI runs against the baremetal/virtual scenarios deployed by
+the OPNFV installer `APEX`_.
+
+Getting started
+===============
+Note: All descriptions below assume a containerized deployment.
+
+Requirements
+------------
+
+* Git must be installed.
+* Docker and docker-compose must be installed.
+* The Spirent TestCenter Virtual image must be uploaded to the target cloud and the
+  associated flavor must be created before the test.
+* The Spirent License Server and Spirent LabServer must be set up and remain IP
+  reachable from the target cloud's external network before the test.
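+
+A quick reachability check from a host on the external network can be done with ping
+(the addresses below are the illustrative ones used in the ENV_FILE example later in
+this guide):
+::
+
+   ping -c 3 192.168.37.251   # Spirent License Server
+   ping -c 3 192.168.37.122   # Spirent LabServer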
+
+Git Clone QTIP Repo
+-------------------
+
+::
+
+ git clone https://git.opnfv.org/qtip
+
+Running QTIP container and Nettest Containers
+----------------------------------------------
+
+With Docker Compose, we can use a YAML file to configure the application's services and
+use a single command to create and start all the services.
+
+There is a YAML file, ``./qtip/tests/ci/network/docker-compose.yaml``, in the QTIP repo
+that creates and starts the network QPI service.
+
+Before running docker-compose, you must specify these three variables:
+
+* DOCKER_TAG, which specifies the Docker image tag (e.g. latest)
+* SSH_CREDENTIALS, a directory containing an SSH key pair, which will be mounted into the QTIP
+  container. QTIP uses this SSH key pair to connect to remote hosts.
+* ENV_FILE, which includes the environment variables required by the QTIP and Nettest containers
+
+ An example of ENV_FILE:
+
+ ::
+
+ INSTALLER_TYPE=apex
+ INSTALLER_IP=192.168.122.247
+ TEST_SUITE=network
+ NODE_NAME=zte-virtual5
+ SCENARIO=generic
+ TESTAPI_URL=
+ OPNFV_RELEASE=euphrates
+ # The below environment variables are Openstack Credentials.
+ OS_USERNAME=admin
+ OS_USER_DOMAIN_NAME=Default
+ OS_PROJECT_DOMAIN_NAME=Default
+ OS_BAREMETAL_API_VERSION=1.29
+ NOVA_VERSION=1.1
+ OS_PROJECT_NAME=admin
+ OS_PASSWORD=ZjmZJmkCvVXf9ry9daxgwmz3s
+ OS_NO_CACHE=True
+ COMPUTE_API_VERSION=1.1
+ no_proxy=,192.168.37.10,192.0.2.5
+ OS_CLOUDNAME=overcloud
+ OS_AUTH_URL=http://192.168.37.10:5000/v3
+ IRONIC_API_VERSION=1.29
+ OS_IDENTITY_API_VERSION=3
+ OS_AUTH_TYPE=password
+ # The below environment variables are extra info with Spirent.
+ SPT_LICENSE_SERVER_IP=192.168.37.251
+ SPT_LAB_SERVER_IP=192.168.37.122
+ SPT_STCV_IMAGE_NAME=stcv-4.79
+ SPT_STCV_FLAVOR_NAME=m1.tiny
+
+Then use the following commands to start the network QPI service.
+
+::
+
+ docker-compose -f docker-compose.yaml pull
+ docker-compose -f docker-compose.yaml up -d
+
+Execution
+---------
+
+You can run network QPI with docker exec:
+::
+
+ docker exec <qtip container> bash -x /home/opnfv/repos/qtip/qtip/scripts/quickstart.sh
+
+QTIP generates results in the ``$PWD/results/`` directory, grouped by
+timestamp name.
+
+Metrics
+-------
+
+Nettest provides the following `metrics`_:
+
+* RFC2544 throughput
+* RFC2544 latency
+
+
+.. _APEX: https://wiki.opnfv.org/display/apex
+.. _metrics: https://tools.ietf.org/html/rfc2544
diff --git a/docs/testing/user/userguide/network_testcase_description.rst b/docs/testing/user/userguide/network_testcase_description.rst
new file mode 100644
index 00000000..0f1a0b45
--- /dev/null
+++ b/docs/testing/user/userguide/network_testcase_description.rst
@@ -0,0 +1,90 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) 2018 Spirent Communications Corp.
+.. Template to be used for test case descriptions in QTIP Project.
+
+
+Test Case Description
+=====================
+
++-----------------------------------------------------------------------------+
+|Network throughput                                                           |
++==============+==============================================================+
+|test case id  | qtip_throughput                                              |
++--------------+--------------------------------------------------------------+
+|metric        | rfc2544 throughput                                           |
++--------------+--------------------------------------------------------------+
+|test purpose  | get the max throughput of the pathway within a host or across|
+|              | hosts                                                        |
++--------------+--------------------------------------------------------------+
+|configuration | None                                                         |
++--------------+--------------------------------------------------------------+
+|test tool     | Spirent TestCenter Virtual                                   |
++--------------+--------------------------------------------------------------+
+|references    | RFC2544                                                      |
++--------------+--------------------------------------------------------------+
+|applicability | 1. test the switch throughput within a host or across hosts  |
+|              | 2. test the switch throughput for different packet sizes     |
++--------------+--------------------------------------------------------------+
+|pre-test      | 1. deploy STC license server and LabServer on public network |
+|conditions    |    and verify they operate correctly                         |
+|              | 2. upload STC virtual image and create STCv flavor on the    |
+|              |    deployed cloud environment                                |
++--------------+------+----------------------------------+--------------------+
+|test sequence | step | description                      | result             |
+|              +------+----------------------------------+--------------------+
+|              | 1    | deploy STCv stack on the target  | 2 STCv VMs will be |
+|              |      | cloud with affinity attribute    | established on the |
+|              |      | according to requirements.       | cloud              |
+|              +------+----------------------------------+--------------------+
+|              | 2    | run rfc2544 throughput test with | test result report |
+|              |      | different packet sizes           | will be produced in|
+|              |      |                                  | the QTIP container |
+|              +------+----------------------------------+--------------------+
+|              | 3    | destroy STCv stack               | STCv stack         |
+|              |      |                                  | destroyed          |
++--------------+------+----------------------------------+--------------------+
+|test verdict  | find the test result report in the QTIP container running   |
+|              | directory                                                    |
++--------------+--------------------------------------------------------------+
+
++-----------------------------------------------------------------------------+
+|Network latency                                                              |
++==============+==============================================================+
+|test case id  | qtip_latency                                                 |
++--------------+--------------------------------------------------------------+
+|metric        | rfc2544 latency                                              |
++--------------+--------------------------------------------------------------+
+|test purpose  | get the latency value of the pathway within a host or across |
+|              | hosts                                                        |
++--------------+--------------------------------------------------------------+
+|configuration | None                                                         |
++--------------+--------------------------------------------------------------+
+|test tool     | Spirent TestCenter Virtual                                   |
++--------------+--------------------------------------------------------------+
+|references    | RFC2544                                                      |
++--------------+--------------------------------------------------------------+
+|applicability | 1. test the switch latency within a host or across hosts     |
+|              | 2. test the switch latency for different packet sizes        |
++--------------+--------------------------------------------------------------+
+|pre-test      | 1. deploy STC license server and LabServer on public network |
+|conditions    |    and verify they operate correctly                         |
+|              | 2. upload STC virtual image and create STCv flavor on the    |
+|              |    deployed cloud environment                                |
++--------------+------+----------------------------------+--------------------+
+|test sequence | step | description                      | result             |
+|              +------+----------------------------------+--------------------+
+|              | 1    | deploy STCv stack on the target  | 2 STCv VMs will be |
+|              |      | cloud with affinity attribute    | established on the |
+|              |      | according to requirements.       | cloud              |
+|              +------+----------------------------------+--------------------+
+|              | 2    | run rfc2544 latency test with    | test result report |
+|              |      | different packet sizes           | will be produced in|
+|              |      |                                  | the QTIP container |
+|              +------+----------------------------------+--------------------+
+|              | 3    | destroy STCv stack               | STCv stack         |
+|              |      |                                  | destroyed          |
++--------------+------+----------------------------------+--------------------+
+|test verdict  | find the test result report in the QTIP container running   |
+|              | directory                                                    |
++--------------+--------------------------------------------------------------+
diff --git a/docs/testing/user/userguide/storage.rst b/docs/testing/user/userguide/storage.rst
index 7681ff7a..9457e67e 100644
--- a/docs/testing/user/userguide/storage.rst
+++ b/docs/testing/user/userguide/storage.rst
@@ -87,12 +87,23 @@ Then, you use the following commands to start storage QPI service.
Execution
---------
-You can run storage QPI with docker exec:
-::
+* Script
+
+ You can run storage QPI with docker exec:
+ ::
+
+ docker exec <qtip container> bash -x /home/opnfv/repos/qtip/qtip/scripts/quickstart.sh
+
+* Commands
- docker exec <qtip container> bash -x /home/opnfv/repos/qtip/qtip/scripts/quickstart.sh
+ In a QTIP container, you can run storage QPI by using QTIP CLI. You can get more
+ details from *userguide/cli.rst*.
+
+
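+For the CLI route, a minimal sequence might look like the following (a sketch assuming the
+same ``qtip plan run`` workflow as the compute guide; the plan name is a placeholder):
+::
+
+   mkdir result
+   qtip plan run <plan_name> -p $PWD/result
+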
+Test result
+------------
-QTIP generates results in the ``$PWD/results/`` directory are listed down under the
+QTIP generates results in the ``/home/opnfv/<project_name>/results/`` directory, grouped by
timestamp name.
Metrics
diff --git a/docs/testing/user/userguide/web.rst b/docs/testing/user/userguide/web.rst
deleted file mode 100644
index 79f180d9..00000000
--- a/docs/testing/user/userguide/web.rst
+++ /dev/null
@@ -1,70 +0,0 @@
-.. This work is licensed under a Creative Commons Attribution 4.0 International License.
-.. http://creativecommons.org/licenses/by/4.0
-
-
-**********************
-Web Portal User Manual
-**********************
-
-QTIP consists of different tools(metrics) to benchmark the NFVI. These metrics
-fall under different NFVI subsystems(QPI's) such as compute, storage and network.
-QTIP benchmarking tasks are built upon `Ansible`_ playbooks and roles.
-QTIP web portal is a platform to expose QTIP as a benchmarking service hosted on a central host.
-
-
-Running
-=======
-
-After setting up the web portal as instructed in config guide, cd into the `web` directory.
-
-and run.
-
-::
-
- python manage.py runserver 0.0.0.0
-
-
-You can access the portal by logging onto `<host>:8000/bench/login/`
-
-If you want to use port 80, you may need sudo permission.
-
-::
-
- sudo python manage.py runserver 0.0.0.0:80
-
-To Deploy on `wsgi`_, Use the Django `deployment tutorial`_
-
-
-Features
-========
-
-After logging in You'll be redirect to QTIP-Web Dashboard. You'll see following menus on left.
-
- * Repos
- * Run Benchmarks
- * Tasks
-
-Repo
-----
-
- Repos are links to qtip `workspaces`_. This menu list all the aded repos. Links to new repos
- can be added here.
-
-Run Benchmarks
---------------
-
- To run a benchmark, select the corresponding repo and run. QTIP Benchmarking service will clone
- the workspace and run the benchmarks. Inventories used are predefined in the workspace repo in the `/hosts/` config file.
-
-Tasks
------
-
- All running or completed benchmark jobs can be seen in Tasks menu with their status.
-
-
-*New users can be added by Admin on the Django Admin app by logging into `/admin/'.*
-
-.. _Ansible: https://www.ansible.com/
-.. _wsgi: https://wsgi.readthedocs.io/en/latest/what.html
-.. _deployment tutorial: https://docs.djangoproject.com/en/1.11/howto/deployment/wsgi/
-.. _workspaces: https://github.com/opnfv/qtip/blob/master/docs/testing/developer/devguide/ansible.rst#create-workspace