author     Nauman_Ahad <nauman_ahad@xflowresearch.com>    2016-02-09 20:21:09 +0500
committer  Nauman Ahad <nauman.ahad@xflowresearch.com>    2016-02-22 07:29:57 +0000
commit     eea2d0d2a5f26f2e46ec085c40f361405fa19743 (patch)
tree       1ddf21cf27e7d4e80ba160e0b4d6c6801dd5b4af /docs/network_testcases.rst
parent     3ee7b5b436f0c698577c07ceab2f7421f7d328e4 (diff)
Modified documentation to generate improved PDFs in toolchain
Modified the location for documentation files within the docs directory.
Index files modified to generate better PDFs
Change-Id: Ie21b1021a8d09013df48afc6d737d95ee8aeed92
Signed-off-by: Nauman_Ahad <nauman_ahad@xflowresearch.com>
(cherry picked from commit 6700444ea735f678ed2841fec29ab11947905a1a)
Diffstat (limited to 'docs/network_testcases.rst')
-rw-r--r--   docs/network_testcases.rst   42
1 file changed, 0 insertions(+), 42 deletions(-)
diff --git a/docs/network_testcases.rst b/docs/network_testcases.rst
deleted file mode 100644
index 1c6fb910..00000000
--- a/docs/network_testcases.rst
+++ /dev/null
@@ -1,42 +0,0 @@
-NETWORK THROUGHPUT TESTCASE
-
-QTIP uses iperf3 as the main tool for testing network throughput.
-There are two tests that are run through the QTIP framework:
-
-* Network throughput for VMs
-* Network throughput for compute nodes
-
-For the throughput of the compute nodes we simply go into the systems under test (SUTs)
-and install iperf3 on the nodes. One of the SUTs is used as a server and the other as a
-client. The client pushes traffic to the server for a duration specified in the user
-configuration file for iperf3. These files can be found in the test_cases/{POD}/network/
-directory. The bandwidth is limited only by the physical link-layer speed available to the
-server. The result file includes the bandwidth in bits per second and the CPU usage for
-both the client and the server.
-
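Purely as an illustration (the server address, duration, and result handling below are assumptions, not QTIP's exact invocation), a client-side run against an SUT that is already running ``iperf3 -s`` could look like this::

    # Sketch of the client side of the compute-node test. Assumes the other
    # SUT is already running "iperf3 -s"; the server address and duration are
    # placeholders, not values taken from a QTIP configuration file.
    import json
    import subprocess

    SERVER = "10.20.0.5"   # hypothetical address of the SUT running the server
    DURATION = 20          # seconds; QTIP reads this from its iperf3 config file

    proc = subprocess.run(
        ["iperf3", "-c", SERVER, "-t", str(DURATION), "-J"],
        capture_output=True, text=True, check=True,
    )
    report = json.loads(proc.stdout)

    # The iperf3 JSON report carries an end-of-test summary plus CPU usage
    # for the local (client) and remote (server) hosts.
    bps = report["end"]["sum_received"]["bits_per_second"]
    cpu_client = report["end"]["cpu_utilization_percent"]["host_total"]
    cpu_server = report["end"]["cpu_utilization_percent"]["remote_total"]

    print(f"throughput : {bps / 1e9:.2f} Gb/s")
    print(f"cpu client : {cpu_client:.1f}%  cpu server: {cpu_server:.1f}%")
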
-For the VMs, we run two topologies through the framework:
-
-1. VMs on the same compute node
-2. VMs on different compute nodes
-
-The QTIP framework sets up a stack with a private network, security groups and routers, and
-attaches the VMs to this network. iperf3 is installed on the VMs and one is assigned the role
-of client while the other serves as a server. Traffic is pushed over the QTIP private network
-between the VMs. A closer look is needed to see how the traffic actually flows between the VMs
-in this configuration, and to understand what happens to a packet as it traverses the
-OpenStack network.
-
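QTIP drives this stack creation through its own framework code; the following is only a rough sketch of an equivalent topology built with openstacksdk, where the cloud name, image, flavor, and CIDR are hypothetical placeholders and the router and security group steps are omitted for brevity::

    # Illustration only: create a private network and two VMs attached to it,
    # mirroring the QTIP iperf3 client/server layout. All names are placeholders.
    import openstack

    conn = openstack.connect(cloud="qtip-cloud")

    # Private network + subnet that both iperf3 VMs attach to.
    net = conn.network.create_network(name="qtip-net")
    conn.network.create_subnet(network_id=net.id, name="qtip-subnet",
                               ip_version=4, cidr="10.0.1.0/24")

    image = conn.compute.find_image("cirros")       # placeholder image
    flavor = conn.compute.find_flavor("m1.small")   # placeholder flavor

    # One VM plays the iperf3 server, the other the client.
    for name in ("qtip-iperf-server", "qtip-iperf-client"):
        vm = conn.compute.create_server(name=name, image_id=image.id,
                                        flavor_id=flavor.id,
                                        networks=[{"uuid": net.id}])
        conn.compute.wait_for_server(vm)
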
-The packet originates from VM1 and is sent to the Linux bridge via a tap interface, where the
-security group rules are applied. The packet is then forwarded to the integration bridge
-(br-int) via a patch port. Since VM2 is connected to the integration bridge in the same manner
-as VM1, the packet is forwarded to the Linux bridge connecting VM2. From there the packet is
-sent to VM2 and received by the iperf3 server. Since no physical link is involved in this
-topology, only the OVS integration bridge is being benchmarked, and we see bandwidth in the
-range of 14-15 Gbps.
-
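These hops can be checked against a live compute node while the test runs; the sketch below simply dumps the Linux bridges and the OVS topology (both commands come with bridge-utils and Open vSwitch and typically need root)::

    # Sketch: show the per-VM Linux bridges and the OVS bridges so the
    # tap -> Linux bridge -> patch port -> integration bridge path can be
    # matched against what the compute node actually has configured.
    import subprocess

    for cmd in (["brctl", "show"], ["ovs-vsctl", "show"]):
        print("$ " + " ".join(cmd))
        print(subprocess.run(cmd, capture_output=True, text=True).stdout)
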
-For the topology where the VMs are spawned on different compute nodes, the path the packet
-takes is more involved. The packet leaves a VM and makes its way to the integration bridge as
-in the first topology; however, the integration bridge then forwards the packet towards the
-physical link through the ethernet bridge. Depending on the network type, the packet is tagged
-with a VLAN or encapsulated in a tunnel and forwarded to the compute node where the second VM
-is spawned. The packet enters that compute node through the physical ethernet port and makes
-its way to the VM through the integration bridge and the Linux bridge. The path is clearly
-much more involved, even before accounting for the overheads incurred at each of these
-interfaces, so we see results in the range of 2 Gbps.
-
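Whether the traffic leaves the node VLAN-tagged or tunnel-encapsulated can be seen by capturing a few frames on the physical interface during the cross-node run; in this sketch the interface name is an assumption that depends on the deployment, and root privileges are needed::

    # Sketch: grab a handful of frames on the data-plane NIC while iperf3 runs.
    # The -e flag prints link-level headers, so VLAN tags (or the outer headers
    # of VXLAN/GRE tunnels) become visible. "eth1" is a placeholder interface;
    # a filter such as "udp port 4789" could be added to isolate VXLAN traffic.
    import subprocess

    capture = subprocess.run(
        ["tcpdump", "-i", "eth1", "-e", "-nn", "-c", "10"],
        capture_output=True, text=True,
    )
    print(capture.stdout)
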
-