author    MofassirArif <Mofassir_Arif@dellteam.com>  2016-01-19 10:07:50 -0800
committer MofassirArif <Mofassir_Arif@dellteam.com>  2016-01-19 10:08:53 -0800
commit    67373633d382f3152d970a22192b4fc7c11248b7 (patch)
tree      4f9cb8f4e0b5487ea1efc9b38d50eecd7c43b38b /docs
parent    559b51c45a93621a96c6c29332fdcc768578eb71 (diff)

docs: add docs for usage, introduction and iperf testcase

Change-Id: Ida3460ddd5d2b377351681e5f1d2457ec76ae95f
Signed-off-by: MofassirArif <Mofassir_Arif@dellteam.com>
Diffstat (limited to 'docs')
-rw-r--r--  docs/how-to-use-docs/03-usage-guide.rst  274
-rw-r--r--  docs/how-to-use-docs/index.rst             3
-rw-r--r--  docs/iperf_testcase.rst                   42
3 files changed, 319 insertions, 0 deletions
diff --git a/docs/how-to-use-docs/03-usage-guide.rst b/docs/how-to-use-docs/03-usage-guide.rst
new file mode 100644
index 00000000..2bd2f034
--- /dev/null
+++ b/docs/how-to-use-docs/03-usage-guide.rst
@@ -0,0 +1,274 @@
+..
+ TODO: As things change, this document has to be revised before the
+ next release. Steps:
+ 1. Verify that the instructions below are correct and have not been changed.
+ 2. Add everything that is currently missing and should be included in this document.
+ 3. Make sure each title has a paragraph or an introductory sentence under it.
+ 4. Make sure each sentence is grammatically correct and easily understandable.
+ 5. Remove this comment section.
+
+Guide to run QTIP:
+==================
+
+This guide serves as a first step to familiarize users with how to run
+QTIP after they have cloned QTIP onto their host machine. In order to
+clone QTIP, please follow the instructions in installation.rst, located
+in docs/userguide/installation.rst.
+
+QTIP Directory structure:
+-------------------------
+
+The QTIP directory has been divided into multiple folders to group
+information into relevant categories. The folders that concern the end
+user are `test_cases/` and `test_list/`.
+
+test_cases/:
+------------
+
+This folder stores all the config files which are used to set up the
+environment prior to a test. It is further divided into folders for the
+OPNFV pods which run QTIP. Inside each pod folder the config files are
+segmented by test case category, namely `Compute`, `Network` and
+`Storage`. The `default` folder is for end users who are interested in
+testing their infrastructure but aren't part of an OPNFV pod.
+
+The structure of the directory for the user appears as follows
+::
+
+ test_cases/default/compute
+ test_cases/default/network
+ test_cases/default/storage
+
+The benchmarks that are part of the QTIP framework are listed under these
+folders. The naming convention is <BENCHMARK>_<VM/BM>.yaml. An example of
+the compute folder is shown below.
+::
+
+ dhrystone_bm.yaml
+ dhrystone_vm.yaml
+ whetstone_vm.yaml
+ whetstone_bm.yaml
+ ssl_vm.yaml
+ ssl_bm.yaml
+ ramspeed_vm.yaml
+ ramspeed_bm.yaml
+ dpi_vm.yaml
+ dpi_bm.yaml
+
+The files listed above are used to configure the environment. The VM/BM
+tag indicates whether the test is to be run on a virtual machine or on
+the compute node itself.
+
+
+test_list/:
+-----------
+
+This folder contains three files, namely `compute`, `network` and `storage`.
+These files list the benchmarks that are to be run by the QTIP framework. A
+sample compute test file is shown below.
+::
+
+ dhrystone_vm.yaml
+ dhrystone_bm.yaml
+ whetstone_vm.yaml
+ ssl_bm.yaml
+
+The compute file will run all the benchmarks listed above, one after
+another, on the environment. `NOTE: Please ensure there are no blank lines
+in this file as they have been known to throw an exception.`
+
+Preparing a config file for test:
+---------------------------------
+
+We will use dhrystone as an example to walk through the changes that the
+user needs to make in order to run the benchmark.
+
+Dhrystone on Compute Nodes:
+^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+The QTIP framework can run benchmarks on the actual compute nodes as well.
+In order to run dhrystone on the compute nodes we will edit the
+dhrystone_bm.yaml file.
+
+::
+
+  Scenario:
+    benchmark: dhrystone
+    host: machine_1, machine_2
+    server:
+
+The `Scenario` field is used to specify the name of the benchmark to
+run, as done by `benchmark: dhrystone`. The `host` and `server` tags are
+not used for the compute benchmarks but are included here in case the
+user wishes to control the execution. By default both machine_1
+and machine_2 will have dhrystone run on them in parallel, but the user
+can change this so that machine_1 runs dhrystone before machine_2. This
+is elaborated on under the `Context` tag.
+
+::
+
+  Context:
+    Host_Machines:
+      machine_1:
+        ip: 10.20.0.6
+        pw:
+        role: host
+      machine_2:
+        ip: 10.20.0.5
+        pw:
+        role: host
+
+    Virtual_Machines:
+
+The `Context` tag lets the user list the compute nodes they want to run
+dhrystone on. The user can list all the compute nodes under the
+`Host_Machines` tag. All the machines under test must be listed under
+`Host_Machines`, named incrementally (machine_1, machine_2 and so on).
+The `ip:` tag is used to specify the IP of the particular compute node.
+The `pw:` tag can be left blank because QTIP uses its own key for ssh.
+In order to run dhrystone on one compute node at a time the user needs to
+edit the `role:` tag: `role: host` for machine_1 and `role: server` for
+machine_2 will cause dhrystone to be run on machine_1 first and then on
+machine_2, as sketched below.
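+
+As a quick illustration, a minimal sketch of the edited `Context` section
+for a sequential run is shown below (the same machines and IPs as above are
+assumed; only the `role:` values change):
+
+::
+
+  Context:
+    Host_Machines:
+      machine_1:
+        ip: 10.20.0.6
+        pw:
+        role: host        # dhrystone runs here first
+      machine_2:
+        ip: 10.20.0.5
+        pw:
+        role: server      # dhrystone runs here after machine_1
+
+    Virtual_Machines: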
+
+::
+
+
+  Test_Description:
+    Test_category: "Compute"
+    Benchmark: "dhrystone"
+    Overview: >
+      ''' This test will run the dhrystone benchmark in parallel on
+      machine_1 and machine_2.
+
+The above field is purely descriptive: it explains the working of the test
+to the user and is not fed to the framework.
+
+Sample dhrystone_bm.yaml file:
+------------------------------
+
+::
+
+  Scenario:
+    benchmark: dhrystone
+    host: machine_1, machine_2
+    server:
+
+  Context:
+    Host_Machines:
+      machine_1:
+        ip: 10.20.0.6
+        pw:
+        role: host
+      machine_2:
+        ip: 10.20.0.5
+        pw:
+        role: host
+
+    Virtual_Machines:
+
+
+  Test_Description:
+    Test_category: "Compute"
+    Benchmark: "dhrystone"
+    Overview: >
+      ''' This test will run the dhrystone benchmark in parallel on
+      machine_1 and machine_2.\n
+
+Dhrystone on Virtual Machine:
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+To run dhrystone on the VMs we will edit the dhrystone_vm.yaml file.
+Snippets of the file are given below.
+
+::
+
+  Scenario:
+    benchmark: dhrystone
+    host: virtualmachine_1, virtualmachine_2
+    server:
+
+
+The `Scenario` field is used to specify the name of the benchmark to
+run, as done by `benchmark: dhrystone`. The `host` and `server` tags are
+not used for the compute benchmarks but are included here in case the
+user wishes to control the execution. By default both
+virtualmachine_1 and virtualmachine_2 will have dhrystone run on them
+in parallel, but the user can change this so that virtualmachine_1 runs
+dhrystone before virtualmachine_2. This is elaborated on under the
+`Context` tag.
+
+::
+
+  Context:
+    Host_Machines:
+
+    Virtual_Machines:
+      virtualmachine_1:
+        availability_zone: compute1
+        public_network: 'net04_ext'
+        OS_image: QTIP_CentOS
+        flavor: m1.large
+        role: host
+      virtualmachine_2:
+        availability_zone: compute2
+        public_network: 'net04_ext'
+        OS_image: QTIP_CentOS
+        flavor: m1.large
+        role: host
+
+The `Context` tag lets the user list the VMs and their characteristics.
+The user can list all the VMs they want to bring up under the
+`Virtual_Machines:` tag. In the above example we bring up two VMs, one on
+compute1 and the other on compute2. The user can change this as desired.
+`NOTE: Please ensure you have the necessary compute nodes before listing
+them under the 'availability_zone:' tag`. The rest of the options do not
+need to be modified by the user.
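+
+For instance, to bring up both VMs on the same compute node (a minimal
+sketch, assuming a `compute1` availability zone exists on the deployment),
+point both `availability_zone:` tags at the same zone and leave the rest
+unchanged:
+
+::
+
+  Virtual_Machines:
+    virtualmachine_1:
+      availability_zone: compute1
+      public_network: 'net04_ext'
+      OS_image: QTIP_CentOS
+      flavor: m1.large
+      role: host
+    virtualmachine_2:
+      availability_zone: compute1   # same zone as virtualmachine_1
+      public_network: 'net04_ext'
+      OS_image: QTIP_CentOS
+      flavor: m1.large
+      role: host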
+
+Running dhrystone sequentially (Optional):
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+In order to run dhrystone on one VM at a time the user needs to edit
+the `role:` tag. `role: host` for virtualmachine_1 and `role: server`
+for virtualmachine_2 will cause dhrystone to be run on virtualmachine_1
+first and then on virtualmachine_2, as sketched below.
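+
+A minimal sketch of the edited `Virtual_Machines:` section (the same VM
+definitions as above are assumed; only the `role:` values change):
+
+::
+
+  Virtual_Machines:
+    virtualmachine_1:
+      availability_zone: compute1
+      public_network: 'net04_ext'
+      OS_image: QTIP_CentOS
+      flavor: m1.large
+      role: host      # dhrystone runs here first
+    virtualmachine_2:
+      availability_zone: compute2
+      public_network: 'net04_ext'
+      OS_image: QTIP_CentOS
+      flavor: m1.large
+      role: server    # dhrystone runs here after virtualmachine_1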
+
+::
+
+  Test_Description:
+    Test_category: "Compute"
+    Benchmark: "dhrystone"
+    Overview:
+      This test will run the dhrystone benchmark in parallel on
+      virtualmachine_1 and virtualmachine_2
+
+The above field is purely descriptive: it explains the working of the
+test to the user and is not fed to the framework.
+
+Sample dhrystone_vm.yaml file:
+------------------------------
+
+::
+
+  Scenario:
+    benchmark: dhrystone
+    host: virtualmachine_1, virtualmachine_2
+    server:
+
+  Context:
+    Host_Machines:
+
+    Virtual_Machines:
+      virtualmachine_1:
+        availability_zone: compute1
+        public_network: 'net04_ext'
+        OS_image: QTIP_CentOS
+        flavor: m1.large
+        role: host
+      virtualmachine_2:
+        availability_zone: compute2
+        public_network: 'net04_ext'
+        OS_image: QTIP_CentOS
+        flavor: m1.large
+        role: host
+
+  Test_Description:
+    Test_category: "Compute"
+    Benchmark: "dhrystone"
+    Overview: >
+      This test will run the dhrystone benchmark in parallel on
+      virtualmachine_1 and virtualmachine_2.\n
diff --git a/docs/how-to-use-docs/index.rst b/docs/how-to-use-docs/index.rst
index bbe991b0..713599c0 100644
--- a/docs/how-to-use-docs/index.rst
+++ b/docs/how-to-use-docs/index.rst
@@ -20,6 +20,9 @@ Contents:
documentation-example.rst
01-introduction.rst
+ 02-methodology.rst
+ 03-usage-guide.rst
+
Indices and tables
==================
diff --git a/docs/iperf_testcase.rst b/docs/iperf_testcase.rst
new file mode 100644
index 00000000..fa2b44a4
--- /dev/null
+++ b/docs/iperf_testcase.rst
@@ -0,0 +1,42 @@
+NETWORK THROUGHPUT TESTCASE
+===========================
+
+QTIP uses iperf3 as the main tool for testing network throughput.
+Two tests are run through the QTIP framework:
+
+1. Network Throughput for VMs
+2. Network Throughput for Compute Nodes
+
+For the throughput of the compute nodes we simply go into the
+systems-under-test and install iperf3 on the nodes. One of the SUTs is used
+as a server and the other as a client. The client pushes traffic to the
+server for a duration specified in the user configuration file for iperf.
+These files can be found in the test_cases/{POD}/network/ directory. The
+bandwidth is limited only by the physical link speed available to the
+server. The result file includes the bandwidth in bits per second and the
+CPU usage for both the client and server.
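+
+For orientation, a hypothetical sketch of such a configuration file is
+shown below, following the same `Scenario`/`Context` pattern as the
+dhrystone files in the usage guide; the `duration` field and its placement
+are assumptions for illustration, not confirmed field names:
+
+::
+
+  Scenario:
+    benchmark: iperf
+    host: machine_1
+    server: machine_2
+    duration: 20        # assumed field: seconds of traffic to push
+
+  Context:
+    Host_Machines:
+      machine_1:
+        ip: 10.20.0.6
+        pw:
+        role: host      # iperf3 client
+      machine_2:
+        ip: 10.20.0.5
+        pw:
+        role: server    # iperf3 server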
+
+For the VMs we run two topologies through the framework:
+
+1. VMs on the same compute node
+2. VMs on different compute nodes
+
+The QTIP framework sets up a stack with a private network, security groups
+and routers, and attaches the VMs to this network. iperf3 is installed on
+the VMs, and one is assigned the role of client while the other serves as
+the server. Traffic is pushed over the QTIP private network between the
+VMs. A closer look is needed to see how the traffic actually flows between
+the VMs in this configuration, to understand what happens to a packet as
+it traverses the OpenStack network.
+
+The packet originates from VM1 and is sent to the Linux bridge via a tap
+interface, where the security group rules are applied. Afterwards the
+packet is forwarded to the integration bridge via a patch port. VM2 is
+connected to the integration bridge in a similar manner to VM1, so the
+packet is forwarded to the Linux bridge connecting VM2. From there the
+packet is sent to VM2 and is received by the iperf3 server. Since no
+physical link is involved in this topology, only the OVS (integration
+bridge) is being benchmarked, and we see bandwidth in the range of
+14-15 Gbps.
+
+For the topology where the VMs are spawned on different compute nodes, the
+path the packet takes is more involved. The packet leaves a VM and makes
+its way to the integration bridge as in the first topology; however, the
+integration bridge forwards the packet to the physical link through the
+ethernet bridge. The packet then gets a VLAN tag or tunnel header,
+depending on the network type, and is forwarded to the compute node where
+the second VM is spawned. The packet enters that compute node through the
+physical ethernet port and makes its way to the VM through the integration
+bridge and the Linux bridge. As seen here, the path is much longer, even
+before accounting for the overheads incurred at all the interfaces, so we
+see results in the range of 2 Gbps.
+
+
+ \ No newline at end of file