Diffstat (limited to 'docs/testing/user/userguide')
 docs/testing/user/userguide/compute.rst                       | 35
 docs/testing/user/userguide/index.rst                         |  1
 docs/testing/user/userguide/network.rst                       |  1
 docs/testing/user/userguide/network_testcase_description.rst  | 37
 docs/testing/user/userguide/storage.rst                       | 19
 docs/testing/user/userguide/web.rst                           | 70
 6 files changed, 40 insertions(+), 123 deletions(-)
diff --git a/docs/testing/user/userguide/compute.rst b/docs/testing/user/userguide/compute.rst
index f889bfe6..7c5adc26 100644
--- a/docs/testing/user/userguide/compute.rst
+++ b/docs/testing/user/userguide/compute.rst
@@ -16,10 +16,11 @@ test compute components.
All the compute benchmarks could be run in the following scenarios:
On Baremetal Machines provisioned by an OPNFV installer (Host machines)
+On Virtual machines provisioned by OpenStack deployed by an OPNFV installer
Note: The Compute benchmark suite contains relatively old benchmarks such as dhrystone
and whetstone. The suite will be updated with better benchmarks such as Linbench in
-the OPNFV E release.
+a future OPNFV release.
Getting started
@@ -32,7 +33,7 @@ Inventory File
QTIP uses Ansible to trigger benchmark tests. Ansible uses an inventory file to
determine which hosts to work against. QTIP can automatically generate an inventory
-file via OPNFV installer. Users also can write their own inventory infomation into
+file via an OPNFV installer. Users can also write their own inventory information into
``/home/opnfv/qtip/hosts``. This file is just a text file containing a list of host
IP addresses. For example:
::
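
    # a minimal sketch of the hosts file; the addresses below are
    # hypothetical, replace them with your own nodes
    10.20.0.11
    10.20.0.12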
@@ -53,19 +54,33 @@ manual. If *CI_DEBUG* is not set or set to *false*, QTIP will delete the key from
remote hosts before the execution ends. Please make sure the key is deleted from remote
hosts, or it can introduce a security flaw.
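
For debugging, a minimal sketch (assuming *CI_DEBUG* is read from the shell
environment on the machine driving the tests):
::

    # keep the generated SSH key on remote hosts for debugging;
    # remember to delete it manually afterwards
    export CI_DEBUG=true
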
-Commands
---------
+Execution
+---------
-In a QTIP container, you can run compute QPI by using QTIP CLI:
-::
+There are two ways to execute compute QPI:
+
+* Script
+
+ You can run compute QPI with docker exec:
+ ::
+
+ # run with baremetal machines provisioned by an OPNFV installer
+ docker exec <qtip container> bash -x /home/opnfv/repos/qtip/qtip/scripts/quickstart.sh -q compute
+
+ # run with virtual machines provisioned by OpenStack
+ docker exec <qtip container> bash -x /home/opnfv/repos/qtip/qtip/scripts/quickstart.sh -q compute -u vnf
+
+* Commands
+
+ In a QTIP container, you can run compute QPI using the QTIP CLI. You can get more details from
+ *userguide/cli.rst*.
- mkdir result
- qtip plan run <plan_name> -p $PWD/result
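
For reference, a CLI invocation along the lines of the commands removed above
(``<plan_name>`` is a placeholder; *userguide/cli.rst* has the authoritative usage):
::

    # inside the QTIP container
    mkdir result
    qtip plan run <plan_name> -p $PWD/result
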
+Test result
+------------
-QTIP generates results in the ``$PWD/result`` directory are listed down under the
+QTIP generates results in the ``/home/opnfv/<project_name>/results/`` directory, listed under the
timestamp name.
-you can get more details from *userguide/cli.rst*.
Metrics
-------
diff --git a/docs/testing/user/userguide/index.rst b/docs/testing/user/userguide/index.rst
index e05a5e90..93adc8a9 100644
--- a/docs/testing/user/userguide/index.rst
+++ b/docs/testing/user/userguide/index.rst
@@ -15,7 +15,6 @@ QTIP User Guide
getting-started.rst
cli.rst
api.rst
- web.rst
compute.rst
storage.rst
network.rst
diff --git a/docs/testing/user/userguide/network.rst b/docs/testing/user/userguide/network.rst
index 4d48d4d5..68c39974 100644
--- a/docs/testing/user/userguide/network.rst
+++ b/docs/testing/user/userguide/network.rst
@@ -112,4 +112,3 @@ Nettest provides the following `metrics`_:
.. _APEX: https://wiki.opnfv.org/display/apex
.. _metrics: https://tools.ietf.org/html/rfc2544
-
diff --git a/docs/testing/user/userguide/network_testcase_description.rst b/docs/testing/user/userguide/network_testcase_description.rst
index 66fda073..0f1a0b45 100644
--- a/docs/testing/user/userguide/network_testcase_description.rst
+++ b/docs/testing/user/userguide/network_testcase_description.rst
@@ -88,40 +88,3 @@ Test Case Description
|test verdict | find the test result report in the QTIP container            |
|              | running directory                                            |
+--------------+--------------------------------------------------------------+
-
-+-----------------------------------------------------------------------------+
-|Network Latency |
-+==============+==============================================================+
-|test case id | e.g. qtip_throughput |
-+--------------+--------------------------------------------------------------+
-|metric | what will be measured, e.g. latency |
-+--------------+--------------------------------------------------------------+
-|test purpose | describe what is the purpose of the test case |
-+--------------+--------------------------------------------------------------+
-|configuration | what .yaml file to use, state SLA if applicable, state |
-| | test duration, list and describe the scenario options used in|
-| | this TC and also list the options using default values. |
-+--------------+--------------------------------------------------------------+
-|test tool | e.g. ping |
-+--------------+--------------------------------------------------------------+
-|references | RFC2544 |
-+--------------+--------------------------------------------------------------+
-|applicability | describe variations of the test case which can be |
-| | performend, e.g. run the test for different packet sizes |
-+--------------+--------------------------------------------------------------+
-|pre-test | describe configuration in the tool(s) used to perform |
-|conditions | the measurements (e.g. fio, pktgen), POD-specific |
-| | configuration required to enable running the test |
-+--------------+------+----------------------------------+--------------------+
-|test sequence | step | description | result |
-| +------+----------------------------------+--------------------+
-| | 1 | use this to describe tests that | what happens in |
-| | | require several steps e.g. | this step |
-| | | step 1 collect logs | e.g. logs collected|
-| +------+----------------------------------+--------------------+
-| | 2 | remove interface | interface down |
-| +------+----------------------------------+--------------------+
-| | N | what is done in step N | what happens |
-+--------------+------+----------------------------------+--------------------+
-|test verdict | expected behavior, or SLA, pass/fail criteria |
-+--------------+--------------------------------------------------------------+
diff --git a/docs/testing/user/userguide/storage.rst b/docs/testing/user/userguide/storage.rst
index 7681ff7a..9457e67e 100644
--- a/docs/testing/user/userguide/storage.rst
+++ b/docs/testing/user/userguide/storage.rst
@@ -87,12 +87,23 @@ Then, you use the following commands to start storage QPI service.
Execution
---------
-You can run storage QPI with docker exec:
-::
+* Script
+
+ You can run storage QPI with docker exec:
+ ::
+
+ docker exec <qtip container> bash -x /home/opnfv/repos/qtip/qtip/scripts/quickstart.sh
+
+* Commands
- docker exec <qtip container> bash -x /home/opnfv/repos/qtip/qtip/scripts/quickstart.sh
+ In a QTIP container, you can run storage QPI using the QTIP CLI. You can get more
+ details from *userguide/cli.rst*.
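
A comparable sketch for storage (same caveats as the compute example above;
``<plan_name>`` is a placeholder):
::

    # inside the QTIP container
    qtip plan run <plan_name> -p $PWD/result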
+
+
+Test result
+------------
-QTIP generates results in the ``$PWD/results/`` directory are listed down under the
+QTIP generates results in the ``/home/opnfv/<project_name>/results/`` directory, listed under the
timestamp name.
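
To locate a run's output, list the results directory (the ``<project_name>``
and timestamp values depend on your deployment and run):
::

    ls /home/opnfv/<project_name>/results/
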
Metrics
diff --git a/docs/testing/user/userguide/web.rst b/docs/testing/user/userguide/web.rst
deleted file mode 100644
index 79f180d9..00000000
--- a/docs/testing/user/userguide/web.rst
+++ /dev/null
@@ -1,70 +0,0 @@
-.. This work is licensed under a Creative Commons Attribution 4.0 International License.
-.. http://creativecommons.org/licenses/by/4.0
-
-
-**********************
-Web Portal User Manual
-**********************
-
-QTIP consists of different tools(metrics) to benchmark the NFVI. These metrics
-fall under different NFVI subsystems(QPI's) such as compute, storage and network.
-QTIP benchmarking tasks are built upon `Ansible`_ playbooks and roles.
-QTIP web portal is a platform to expose QTIP as a benchmarking service hosted on a central host.
-
-
-Running
-=======
-
-After setting up the web portal as instructed in config guide, cd into the `web` directory.
-
-and run.
-
-::
-
- python manage.py runserver 0.0.0.0
-
-
-You can access the portal by logging onto `<host>:8000/bench/login/`
-
-If you want to use port 80, you may need sudo permission.
-
-::
-
- sudo python manage.py runserver 0.0.0.0:80
-
-To Deploy on `wsgi`_, Use the Django `deployment tutorial`_
-
-
-Features
-========
-
-After logging in You'll be redirect to QTIP-Web Dashboard. You'll see following menus on left.
-
- * Repos
- * Run Benchmarks
- * Tasks
-
-Repo
-----
-
- Repos are links to qtip `workspaces`_. This menu list all the aded repos. Links to new repos
- can be added here.
-
-Run Benchmarks
---------------
-
- To run a benchmark, select the corresponding repo and run. QTIP Benchmarking service will clone
- the workspace and run the benchmarks. Inventories used are predefined in the workspace repo in the `/hosts/` config file.
-
-Tasks
------
-
- All running or completed benchmark jobs can be seen in Tasks menu with their status.
-
-
-*New users can be added by Admin on the Django Admin app by logging into `/admin/'.*
-
-.. _Ansible: https://www.ansible.com/
-.. _wsgi: https://wsgi.readthedocs.io/en/latest/what.html
-.. _deployment tutorial: https://docs.djangoproject.com/en/1.11/howto/deployment/wsgi/
-.. _workspaces: https://github.com/opnfv/qtip/blob/master/docs/testing/developer/devguide/ansible.rst#create-workspace