Diffstat (limited to 'docs/user_guides/test_cases')
-rw-r--r--docs/user_guides/test_cases/01-compute_testcases.rst110
-rw-r--r--docs/user_guides/test_cases/02-network_testcases.rst68
-rw-r--r--docs/user_guides/test_cases/03-storage_testcases.rst39
3 files changed, 217 insertions, 0 deletions
diff --git a/docs/user_guides/test_cases/01-compute_testcases.rst b/docs/user_guides/test_cases/01-compute_testcases.rst
new file mode 100644
index 00000000..739da301
--- /dev/null
+++ b/docs/user_guides/test_cases/01-compute_testcases.rst
@@ -0,0 +1,110 @@
+Compute test cases
+==================
+
+.. This work is licensed under a Creative Commons Attribution 4.0 International License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) <optionally add copywriters name>
+.. two dots create a comment. please leave this logo at the top of each of your rst files.
+
+.. image:: ../../etc/opnfv-logo.png
+ :height: 40
+ :width: 200
+ :alt: OPNFV
+ :align: left
+.. these two pipes are to separate the logo from the first title
+
+|
+
+Introduction
+------------
+
+The QTIP testing suite aims to benchmark the compute components of an OPNFV platform.
+Such components include the CPU performance and the memory performance.
+Additionally, the virtual computing performance provided by the hypervisor (KVM) installed as part of OPNFV platforms is benchmarked too.
+
+The test suite consists of both synthetic and application-specific benchmarks to test compute components.
+
+All the compute benchmarks can be run in two scenarios:
+
+1. On baremetal machines provisioned by an OPNFV installer (host machines)
+2. On virtual machines brought up through OpenStack on an OPNFV platform
+
+Note: The compute benchmark suite contains relatively old benchmarks such as Dhrystone and Whetstone. The suite will be updated with better benchmarks, such as Linbench, for the OPNFV C release.
+
+Benchmarks
+----------
+
+The benchmarks include:
+
+Dhrystone 2.1
+^^^^^^^^^^^^^
+
+Dhrystone is a synthetic benchmark for measuring CPU performance. It uses integer calculations to evaluate CPU capabilities.
+Both single-CPU and multi-CPU performance are measured.
+
+
+Dhrystone, however, is a dated benchmark and has some shortcomings.
+Written in C, it is a small program that doesn't test the CPU memory subsystem.
+Additionally, Dhrystone results can be skewed by compiler optimizations and, in some cases, by hardware configuration.
+
+References: http://www.eembc.org/techlit/datasheets/dhrystone_wp.pdf
+
+Whetstone
+^^^^^^^^^^^^
+
+Whetstone is a synthetic benchmark to measure CPU floating-point operation performance.
+Both single-CPU and multi-CPU performance are measured.
+
+Like Dhrystone, Whetstone is a dated benchmark and has shortcomings.
+
+References:
+
+http://www.netlib.org/benchmark/whetstone.c
+
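+A minimal sketch of building and running the Whetstone source linked above, assuming a POSIX C toolchain (the loop-count argument is an assumption and may vary between versions of whetstone.c)::
+
+    # fetch the source, build with optimizations, and link the math library
+    curl -O http://www.netlib.org/benchmark/whetstone.c
+    cc -O2 -o whetstone whetstone.c -lm
+
+    # run the benchmark; a larger loop count lengthens the run
+    ./whetstone 1000000
+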
+OpenSSL Speed
+^^^^^^^^^^^^^^^^
+
+OpenSSL Speed can be used to benchmark the compute performance of a machine. In QTIP, two OpenSSL Speed benchmarks are incorporated:
+
+1. RSA signatures/sec signed by a machine
+2. AES 128-bit encryption throughput for a machine, across cipher block sizes
+
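+A minimal sketch of the underlying tool invocations (these are standard OpenSSL command-line options; QTIP's exact parameters may differ)::
+
+    # RSA sign/verify operations per second
+    openssl speed rsa2048
+
+    # AES-128-CBC throughput across cipher block sizes, via the EVP interface
+    openssl speed -evp aes-128-cbc
+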
+References:
+
+https://www.openssl.org/docs/manmaster/apps/speed.html
+
+RAMSpeed
+^^^^^^^^
+
+RAMSpeed is used to measure a machine's memory performance.
+The problem (array) size is large enough to ensure cache misses so that the main machine memory is used.
+INTmem and FLOATmem benchmarks are executed in four different scenarios:
+
+a. Copy: a(i)=b(i)
+b. Add: a(i)=b(i)+c(i)
+c. Scale: a(i)=b(i)*d
+d. Triad: a(i)=b(i)+c(i)*d
+
+INTmem uses integers in these four benchmarks whereas FLOATmem uses floating-point numbers.
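+
+A minimal example invocation of the SMP build of RAMSpeed (``-b`` selects the benchmark and ``-m`` the per-array size in MB; the benchmark IDs shown are assumptions and may differ between versions, so check the tool's usage output)::
+
+    # INTmem: Copy/Scale/Add/Triad with integers (ID assumed; verify with ramsmp's usage output)
+    ./ramsmp -b 3 -m 1024
+
+    # FLOATmem: the same kernels with floating-point numbers (ID assumed)
+    ./ramsmp -b 6 -m 1024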
+
+References:
+
+http://alasir.com/software/ramspeed/
+
+https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/W51a7ffcf4dfd_4b40_9d82_446ebc23c550/page/Untangling+memory+access+measurements
+
+DPI
+^^^
+
+nDPI is a modified variant of OpenDPI, the open-source Deep Packet Inspection library, and is maintained by ntop.
+An example application called *pcapreader* has been developed and is available for use along with nDPI.
+
+A sample .pcap file is passed to the *pcapreader* application.
+nDPI classifies traffic in the pcap file into different categories based on string matching.
+The *pcapreader* application reports the rate at which traffic was classified, indicating a machine's computational performance.
+The test is run 10 times and the average of the obtained throughput numbers is taken.
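+
+A minimal sketch of running the example application against a capture file (the binary name and the ``-i`` input flag follow the nDPI quick-start guide linked below, but may differ between nDPI versions)::
+
+    # classify the traffic in sample.pcap and report the processing throughput
+    ./pcapreader -i sample.pcap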
+
+References:
+
+http://www.ntop.org/products/deep-packet-inspection/ndpi/
+
+http://www.ntop.org/wp-content/uploads/2013/12/nDPI_QuickStartGuide.pdf
diff --git a/docs/user_guides/test_cases/02-network_testcases.rst b/docs/user_guides/test_cases/02-network_testcases.rst
new file mode 100644
index 00000000..45c2d824
--- /dev/null
+++ b/docs/user_guides/test_cases/02-network_testcases.rst
@@ -0,0 +1,68 @@
+Network test cases
+==================
+
+.. This work is licensed under a Creative Commons Attribution 4.0 International License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) <optionally add copywriters name>
+.. two dots create a comment. please leave this logo at the top of each of your rst files.
+.. image:: ../../etc/opnfv-logo.png
+ :height: 40
+ :width: 200
+ :alt: OPNFV
+ :align: left
+.. these two pipes are to separate the logo from the first title
+
+|
+
+QTIP uses IPerf3 as the main tool for testing the network throughput.
+There are three tests that are run through the QTIP framework.
+
+**1. Network throughput between two compute nodes**
+
+**2. Network Throughput between two VMs on the same compute node**
+
+**3. Network Throughput between two VMs on different compute nodes**
+
+
+Network throughput between two compute nodes
+-----------------------------------------------
+
+For the throughput between two compute nodes, Iperf3 is installed on the compute nodes comprising the systems-under-test.
+One of the compute nodes is used as a server and the other as a client.
+The client pushes traffic to the server for a duration specified by the user in the configuration file for Iperf3.
+
+
+These configuration files can be found in the "test_cases/{POD}/network/" directory.
+The bandwidth is limited by the physical link layer speed connecting the two compute nodes.
+The result file includes the b/s bandwidth and the CPU usage for both the client and server.
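+
+A minimal sketch of the Iperf3 roles involved (standard iperf3 options; the duration value is illustrative, and QTIP drives this through its configuration files rather than by hand)::
+
+    # on the server compute node
+    iperf3 -s
+
+    # on the client compute node; push traffic to the server for 20 seconds
+    iperf3 -c <server-ip> -t 20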
+
+Network throughput between two VMs on the same compute node
+--------------------------------------------------------------
+
+The QTIP framework sets up a stack with a private network, security groups, and routers, and attaches two VMs to this network.
+Iperf3 is installed on the VMs; one is assigned the role of client while the other serves as the server.
+Traffic is pushed over the QTIP private network between the two VMs.
+A closer look at how traffic actually flows between the VMs in this configuration helps in understanding what happens to a packet as it traverses the OpenStack virtual network.
+
+The packet originates from VM1 and is sent to the Linux bridge via a tap interface, where the security group rules are applied.
+Afterwards the packet is forwarded to the integration bridge (br-int) via a patch port.
+Since VM2 is also connected to the integration bridge in a similar manner as VM1, the packet gets forwarded to the Linux bridge connecting VM2.
+After the Linux bridge, the packet is sent to VM2 and is received by the Iperf3 server.
+Since no physical link is involved in this topology, only the OVS integration bridge (br-int) is being benchmarked.
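+
+A quick way to inspect this path on the compute node (standard Open vSwitch and bridge utilities; bridge and port names vary per deployment)::
+
+    # list the OVS bridges and their ports, including br-int and its patch ports
+    ovs-vsctl show
+
+    # list the Linux bridges and the tap interfaces attached to them
+    brctl show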
+
+
+Network throughput between two VMs on different compute nodes
+--------------------------------------------------------------
+
+
+As in case 2, the QTIP framework sets up a stack with a private network, security groups, routers, and two VMs attached to the created network. However, the two VMs are spawned on different compute nodes.
+
+Since the VMs are spawned on different nodes, the traffic involves additional paths.
+
+The traffic packet leaves the client VM and makes its way to the integration bridge (br-int), as in the previous case, through a Linux bridge and a patch port.
+The integration bridge (br-int) forwards the packet to the tunneling bridge (br-tun), where the packet is encapsulated based on the tunneling protocol used (GRE/VxLAN).
+The packet then moves onto the physical link through the Ethernet bridge (br-eth).
+
+On the receiving compute node, the packet arrives at the Ethernet bridge (br-eth) through the physical link.
+The packet then moves to the tunneling bridge (br-tun), where it is decapsulated.
+Finally, the packet moves onto the integration bridge (br-int), through a patch port into the Linux bridge, and eventually to the VM, where it is received by the Iperf3 server application.
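+
+A quick way to confirm the tunnel legs of this path on each compute node (standard Open vSwitch commands; port and bridge names vary per deployment)::
+
+    # list the ports on the tunneling bridge, including the GRE/VxLAN tunnel ports
+    ovs-vsctl list-ports br-tun
+
+    # show the OVS interfaces, including tunnel remote endpoints
+    ovs-vsctl list interface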
diff --git a/docs/user_guides/test_cases/03-storage_testcases.rst b/docs/user_guides/test_cases/03-storage_testcases.rst
new file mode 100644
index 00000000..cd557683
--- /dev/null
+++ b/docs/user_guides/test_cases/03-storage_testcases.rst
@@ -0,0 +1,39 @@
+Storage test cases
+==================
+
+.. This work is licensed under a Creative Commons Attribution 4.0 International License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) <optionally add copywriters name>
+.. two dots create a comment. please leave this logo at the top of each of your rst files.
+.. image:: ../../etc/opnfv-logo.png
+ :height: 40
+ :width: 200
+ :alt: OPNFV
+ :align: left
+.. these two pipes are to separate the logo from the first title
+
+|
+
+The QTIP benchmark suite aims to evaluate storage components within an OPNFV platform.
+For the Brahmaputra release, FIO evaluates file system performance for the host machine.
+It also tests the I/O performance provided by the hypervisor (KVM) when storage benchmarks are run inside VMs.
+
+QTIP storage test cases consist of:
+
+**1. FIO Job to benchmark baremetal file system performance**
+
+**2. FIO Job to benchmark virtual machine file system performance**
+
+**Note: For the Brahmaputra release, only ephemeral storage is tested. For the C release, persistent block and object storage will be tested.**
+
+The FIO job consists of:
+
+1. A file size of 5GB
+2. Random read 50%, random write 50%
+3. Direct I/O
+4. Asynchronous I/O engine
+5. I/O queue depth of 2
+6. Block size: 4K
+
+For this job, I/O operations per second are measured along with mean I/O latency to provide storage performance numbers.
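+
+A minimal sketch of an equivalent fio invocation built from the parameters above (all flags are standard fio options; the job name is illustrative, libaio is assumed as the asynchronous engine, and QTIP's actual job file may differ)::
+
+    # 5GB file, 50/50 random read/write, direct async I/O, queue depth 2, 4K blocks
+    fio --name=qtip-storage --size=5g \
+        --rw=randrw --rwmixread=50 \
+        --direct=1 --ioengine=libaio --iodepth=2 \
+        --bs=4k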
+