author     MofassirArif <Mofassir_Arif@dellteam.com>   2016-01-21 06:42:23 -0800
committer  MofassirArif <Mofassir_Arif@dellteam.com>   2016-01-21 07:28:54 -0800
commit     971a7c98515a9d83661f5e423f7e8390f35dca59 (patch)
tree       ee1d930fca39fa6875de6e18a2ae3dd9dba6f70f /docs
parent     688380c212d1fc7cceb969a4d150c7764fcdeb77 (diff)
bug fix: result collection bug fix for docker images
Change-Id: Ia4ea09b90c7a4f4e3699af456c6d66e85661cc0b
Signed-off-by: MofassirArif <Mofassir_Arif@dellteam.com>
Diffstat (limited to 'docs')
-rw-r--r--  docs/compute_testcases.rst                                         |   2
-rw-r--r--  docs/how-to-use-docs/03-usage-guide.rst                            | 161
-rw-r--r--  docs/network_testcases.rst (renamed from docs/iperf_testcase.rst)  |   9
3 files changed, 105 insertions, 67 deletions
diff --git a/docs/compute_testcases.rst b/docs/compute_testcases.rst
index 4463691b..6e91698d 100644
--- a/docs/compute_testcases.rst
+++ b/docs/compute_testcases.rst
@@ -15,7 +15,7 @@ All the compute benchmarks could be run in 2 scenarios:
 1. On Baremetal Machines provisioned by an OPNFV installer (Host machines)
 2. On Virtual Machines brought up through OpenStack on an OPNFV platform
-Note: The Compute benchmark suite constains relatively old benchmarks such as dhrystone and whetstone. The suite would be updated for better benchmarks such as Linbench for the OPNFV C release.
+Note: The Compute benchmark suite contains relatively old benchmarks such as dhrystone and whetstone. The suite will be updated with better benchmarks, such as Linbench, for the OPNFV C release.
 ============
 Benchmarks
diff --git a/docs/how-to-use-docs/03-usage-guide.rst b/docs/how-to-use-docs/03-usage-guide.rst
index 2bd2f034..2829d669 100644
--- a/docs/how-to-use-docs/03-usage-guide.rst
+++ b/docs/how-to-use-docs/03-usage-guide.rst
@@ -10,10 +10,10 @@ Guide to run QTIP:
==================
-This guide will serve as a first step to familiarize the user with how to
-run QTIP the first time when the user clones QTIP on to their host machine.
-In order to clone QTIP please follow the instructions in the
-installation.rst located in docs/userguide/installation.rst.
+This guide serves as a first step to familiarize the user with how to
+run QTIP for the first time after cloning QTIP onto their host machine.
+In order to clone QTIP, please follow the instructions in
+installation.rst, located in docs/userguide/installation.rst.
QTIP Directory structure:
-------------------------
@@ -26,10 +26,10 @@ test_cases/:
------------
This folder is used to store all the config files which are used to set up the
- environment prior to a test. This folder is further divided into opnfv pods
- which run QTIP. Inside each pod there are folders which contain the config
+ environment prior to a test. This folder is further divided into opnfv pods
+ which run QTIP. Inside each pod there are folders which contain the config
   files segmented based on test cases. Namely, these include `Compute`,
- `Network` and `Storage`. The default folder is there for the end user who
+ `Network` and `Storage`. The default folder is there for the end user who
   is interested in testing their infrastructure but aren't part of an OPNFV pod.
The structure of the directory for the user appears as follows
@@ -39,8 +39,8 @@ The structure of the directory for the user appears as follows
    test_cases/default/network
test_cases/default/storage
-The benchmarks that are part of the QTIP framework are listed under these
-folders. An example of the compute folder is shown below.
+The benchmarks that are part of the QTIP framework are listed under these
+folders. An example of the compute folder is shown below.
Their naming convention is <BENCHMARK>_<VM/BM>.yaml
::
@@ -55,16 +55,16 @@ Their naming convention is <BENCHMARK>_<VM/BM>.yaml
    dpi_vm.yaml
dpi_bm.yaml
-The above listed files are used to configure the environment. The VM/BM tag
-distinguishes between a test to be run on the Virtual Machine or the compute
+The above listed files are used to configure the environment. The VM/BM tag
+distinguishes between a test to be run on the Virtual Machine or the compute
node itself, respectively.
test_list/:
-----------
-This folder contains three files, namely `compute`, `network` and `storage`.
-These files list the benchmarks are to be run by the QTIP framework. Sample
+This folder contains three files, namely `compute`, `network` and `storage`.
+These files list the benchmarks that are to be run by the QTIP framework. A sample
compute test file is shown below
::
@@ -73,20 +73,20 @@ compute test file is shown below
    whetstone_vm.yaml
ssl_bm.yaml
-The compute file will now run all the benchmarks listed above one after
-another on the environment. `NOTE: Please ensure there are no blank lines
+The compute file will now run all the benchmarks listed above one after
+another on the environment. `NOTE: Please ensure there are no blank lines
in this file as that has been known to throw an exception`.
Preparing a config file for test:
---------------------------------
-We will be using dhrystone as a example to list out the changes that the
+We will be using dhrystone as an example to list the changes that the
user will need to make in order to run the benchmark.
Dhrystone on Compute Nodes:
^^^^^^^^^^^^^^^^^^^^^^^^^^^
-QTIP framework can run benchmarks on the actual compute nodes as well. In
-order to run dhrystone on the compute nodes we will be editing the
+QTIP framework can run benchmarks on the actual compute nodes as well. In
+order to run dhrystone on the compute nodes we will be editing the
dhrystone_bm.yaml file.
::
@@ -96,12 +96,12 @@ dhrystone_bm.yaml file.
       host: machine_1, machine_2
server:
-The `Scenario` field is used by to specify the name of the benchmark to
-run as done by `benchmark: dhrystone`. The `host` and `server` tag are
-not used for the compute benchmarks but are included here to help the
-user `IF` they wish to control the execution. By default both machine_1
-and machine_2 will have dhrystone run on them in parallel but the user
-can change this so that machine_1 run dhrystone before machine_2. This
+The `Scenario` field is used to specify the name of the benchmark to
+run, as done by `benchmark: dhrystone`. The `host` and `server` tags are
+not used for the compute benchmarks but are included here to help the
+user `IF` they wish to control the execution. By default both machine_1
+and machine_2 will have dhrystone run on them in parallel, but the user
+can change this so that machine_1 runs dhrystone before machine_2. This
will be elaborated in the `Context` tag.
::
@@ -120,13 +120,13 @@ will be elaborated in the `Context` tag.
       Virtual_Machines:
The `Context` tag helps the user list the number of compute nodes they want
- to run dhrystone on. The user can list all the compute nodes under the
- `Host_Machines` tag. All the machines under test must be listed under the
- `Host_Machines` and naming it incrementally higher. The `ip:` tag is used
- to specify the IP of the particular compute node. The `pw:` tag can be left
- blank because QTIP uses its own key for ssh. In order to run dhrystone on
- one compute node at a time the user needs to edit the `role:` tag. `role:
- host` for machine_1 and `role: server` for machine_2 will allow for
+ to run dhrystone on. The user can list all the compute nodes under the
+ `Host_Machines` tag. All the machines under test must be listed under
+ `Host_Machines`, named with incrementally higher numbers. The `ip:` tag is used
+ to specify the IP of the particular compute node. The `pw:` tag can be left
+ blank because QTIP uses its own key for ssh. In order to run dhrystone on
+ one compute node at a time the user needs to edit the `role:` tag. `role:
+ host` for machine_1 and `role: server` for machine_2 will allow for
dhrystone to be run on machine_1 and then run on machine_2.
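The role assignment just described can be sketched as a config fragment (a minimal illustration; the IP addresses are placeholder assumptions, not values taken from the framework):

::

    Context:
      Host_Machines:
        machine_1:
          ip: 10.20.0.10   # placeholder address
          pw:              # left blank; QTIP uses its own key for ssh
          role: host
        machine_2:
          ip: 10.20.0.11   # placeholder address
          pw:
          role: server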
::
@@ -136,11 +136,11 @@ The `Context` tag helps the user list the number of compute nodes they want
   Test_category: "Compute"
Benchmark: "dhrystone"
Overview: >
- ''' This test will run the dhrystone benchmark in parallel on
+ ''' This test will run the dhrystone benchmark in parallel on
machine_1 and machine_2.
-The above field is purely for a description purpose to explain to the user
-the working of the test and is not fed to the framework.
+The above field is purely descriptive: it explains the working of the
+test to the user and is not fed to the framework.
Sample dhrystone_bm.yaml file:
------------------------------
@@ -169,12 +169,12 @@ Sample dhrystone_bm.yaml file:
   Test_category: "Compute"
Benchmark: "dhrystone"
Overview: >
- ''' This test will run the dhrystone benchmark in parallel on
+ ''' This test will run the dhrystone benchmark in parallel on
machine_1 and machine_2.\n
Dhrystone on Virtual Machine:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-To run dhrystone on the VMs we will be editing dhrystone_vm.yaml file.
+To run dhrystone on the VMs we will be editing dhrystone_vm.yaml file.
Snippets of the file are given below.
::
@@ -182,23 +182,23 @@ Snippets on the file are given below.
   Scenario:
benchmark: dhrystone
host: virtualmachine_1, virtualmachine_2
- server:
+ server:
-The `Scenario` field is used by to specify the name of the benchmark to
-run as done by `benchmark: dhrystone`. The `host` and `server` tag are
-not used for the compute benchmarks but are included here to help the
-user `IF` they wish to control the execution. By default both
-virtualmachine_1 and virtualmachine_2 will have dhrystone run on them
-in parallel but the user can change this so that virtualmachine_1 run
-dhrystone before virtualmachine_2. This will be elaborated in the
+The `Scenario` field is used to specify the name of the benchmark to
+run, as done by `benchmark: dhrystone`. The `host` and `server` tags are
+not used for the compute benchmarks but are included here to help the
+user `IF` they wish to control the execution. By default both
+virtualmachine_1 and virtualmachine_2 will have dhrystone run on them
+in parallel, but the user can change this so that virtualmachine_1 runs
+dhrystone before virtualmachine_2. This will be elaborated in the
`Context` tag.
::
Context:
Host_Machines:
- Virtual_Machines:
+ Virtual_Machines:
virtualmachine_1:
availability_zone: compute1
public_network: 'net04_ext'
@@ -212,20 +212,20 @@ dhrystone before virtualmachine_2. This will be elaborated in the
       flavor: m1.large
role: host
-The `Context` tag helps the user list the number of VMs and their
-characteristic. The user can list all the VMs they want to bring up
-under the `Virtual_Machines:` tag. In the above example we will be
-bringing up two VMs. One on Compute1 and the other on Compute2. The
-user can change this as desired `NOTE: Please ensure you have the
-necessary compute nodes before listing under the 'availability_zone:'
+The `Context` tag helps the user list the number of VMs and their
+characteristics. The user can list all the VMs they want to bring up
+under the `Virtual_Machines:` tag. In the above example we will be
+bringing up two VMs, one on Compute1 and the other on Compute2. The
+user can change this as desired. `NOTE: Please ensure you have the
+necessary compute nodes before listing under the 'availability_zone:'
tag`. The rest of the options do not need to be modified by the user.
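Put together, a two-VM `Context` along these lines might look like the following sketch (the field values are copied from the snippet above and are illustrative only):

::

    Context:
      Host_Machines:
      Virtual_Machines:
        virtualmachine_1:
          availability_zone: compute1
          public_network: 'net04_ext'
          OS_image: QTIP_CentOS
          flavor: m1.large
          role: host
        virtualmachine_2:
          availability_zone: compute2
          public_network: 'net04_ext'
          OS_image: QTIP_CentOS
          flavor: m1.large
          role: host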
Running dhrystone sequentially (Optional):
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-In order to run dhrystone on one VM at a time the user needs to edit
-the `role:` tag. `role: host` for virtualmachine_1 and `role: server`
-for virtualmachine_2 will allow for dhrystone to be run on
+In order to run dhrystone on one VM at a time the user needs to edit
+the `role:` tag. `role: host` for virtualmachine_1 and `role: server`
+for virtualmachine_2 will allow for dhrystone to be run on
virtualmachine_1 and then run on virtualmachine_2.
::
@@ -233,11 +233,11 @@ virtualmachine_1 and virtualmachine_2.
   Test_Description:
Test_category: "Compute"
Benchmark: "dhrystone"
- Overview:
- This test will run the dhrystone benchmark in parallel on
+ Overview:
+ This test will run the dhrystone benchmark in parallel on
virtualmachine_1 and virtualmachine_2
-The above field is purely for a decription purpose to explain to
+The above field is purely for description purposes, to explain to
the user the working of the test and is not fed to the framework.
Sample dhrystone_vm.yaml file:
@@ -247,12 +247,12 @@ Sample dhrystone_vm.yaml file:
   Scenario:
benchmark: dhrystone
host: virtualmachine_1, virtualmachine_2
- server:
+ server:
Context:
Host_Machines:
- Virtual_Machines:
+ Virtual_Machines:
virtualmachine_1:
availability_zone: compute1
public_network: 'net04_ext'
@@ -265,10 +265,49 @@ Sample dhrystone_vm.yaml file:
       OS_image: QTIP_CentOS
flavor: m1.large
role: host
-
+
Test_Description:
Test_category: "Compute"
Benchmark: "dhrystone"
Overview: >
- This test will run the dhrystone benchmark in parallel on
+ This test will run the dhrystone benchmark in parallel on
machine_1 and machine_2.\n
+
+Commands to run the Framework:
+==============================
+
+In order to start QTIP on the default lab, please use the following commands (assuming you have prepared the config files in the test_cases/default/ directory and listed the intended suite in test_list/<RELEVANT-SUITE-FILE>):
+
+The first step is to export the necessary information to the environment.
+::
+
+ source get_env_info.sh -n <INSTALLER_TYPE> -i <INSTALLER_IP>
+
+For example, to run QTIP on an OpenStack deployed using FUEL with the installer IP 10.20.0.2:
+::
+
+ source get_env_info.sh -n fuel -i 10.20.0.2
+
+This will generate the `opnfv-creds.sh` file needed to use the python clients for keystone, glance, nova, and neutron.
+::
+
+ source opnfv-creds.sh
+
+Running QTIP using `default` as the pod name and the `compute` suite:
+::
+
+ python qtip.py -l default -f compute
+
+Running QTIP using `default` as the pod name and the `network` suite:
+::
+
+ python qtip.py -l default -f network
+
+Running QTIP using `default` as the pod name and the `storage` suite:
+::
+
+    python qtip.py -l default -f storage
+
+Results:
+========
+QTIP generates results in the `results/` directory, grouped under the particular benchmark name. All the results for dhrystone, for example, are listed there and time stamped.
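A hypothetical sketch of the time-stamped layout this implies (the naming scheme below is an assumption for illustration, not taken from QTIP's code):

```shell
# Group results per benchmark and time-stamp each run (illustrative only;
# the directory/file naming is an assumption, not QTIP's actual scheme).
BENCHMARK=dhrystone
STAMP=$(date +%Y-%m-%d_%H-%M)
mkdir -p "results/${BENCHMARK}"
echo "results/${BENCHMARK}/${BENCHMARK}_${STAMP}"
```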
\ No newline at end of file
diff --git a/docs/iperf_testcase.rst b/docs/network_testcases.rst
index fa2b44a4..ac68b11b 100644
--- a/docs/iperf_testcase.rst
+++ b/docs/network_testcases.rst
@@ -1,13 +1,13 @@
 NETWORK THROUGHPUT TESTCASE
QTIP uses IPerf3 as the main tool for testing the network throughput.
-There are two tests that are run through the QTIP framework.
+There are two tests that are run through the QTIP framework.
Network Throughput for VMs
Network Throughput for Compute Nodes
-For the throughout of the compute nodes we simply go into the systems-under-test
-and install iperf3 on the nodes. One of the SUTs is used a server and the other as a
+For the throughput of the compute nodes we simply go into the systems-under-test
+and install iperf3 on the nodes. One of the SUTs is used as a server and the other as a
client. The client pushes traffic to the server for a duration specified by the user
configuration file for iperf. These files can be found in the test_cases/{POD}/network/
directory. The bandwidth is limited only by the physical link layer speed available to the server.
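The client/server roles described above map onto the standard iperf3 invocations (the address and duration below are placeholder assumptions; in QTIP the duration comes from the user configuration file):

::

    # on the SUT acting as server
    iperf3 -s

    # on the SUT acting as client, pushing traffic to the server
    iperf3 -c 10.20.0.10 -t 60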
@@ -32,11 +32,10 @@ involved in this topology, only the OVS (Integration bridge) is being benchmarke
 of 14-15 Gbps.
For the topology where the VMs are spawned on different compute nodes, the path the packet takes becomes more cumbersome.
-The packet leaves a VM and makes its way to the Integration Bridge as in the first topology however the integration bridge
+The packet leaves a VM and makes its way to the Integration Bridge as in the first topology however the integration bridge
forwards the packet to the physical link through the ethernet bridge. The packet then gets a VLAN/Tunnel depending on the network
and is forwarded to the particular compute node where the second VM is spawned. The packets enter the compute node through the physical
ethernet port and make their way to the VM through the integration bridge and linux bridge. As seen here, the path is much more involved,
even before considering the overheads faced at all the interfaces, so we are seeing results in the range of 2 Gbps.
-
\ No newline at end of file