author     MofassirArif <Mofassir_Arif@dellteam.com>      2016-01-21 23:56:01 -0800
committer  Nauman Ahad <nauman.ahad@xflowresearch.com>    2016-02-01 22:28:21 +0000
commit     635fc10d93342baedc23b8e683d9d75cd355c8c4 (patch)
tree       10e6ed177bd1b77a53f11042f251fba16d73cf5f
parent     30f974a996efbb4b7f7b4fc3a47a1454c3fe80e5 (diff)
bug-fix: fix bug in docker run file, replace $$ with &&
Change-Id: Ic48483ae97aab2f844ee753cecf5fc3714a13cdb
Signed-off-by: MofassirArif <Mofassir_Arif@dellteam.com>
(cherry picked from commit bab7c5360e2bd4eabc63e3b78cb6fcea8730b608)
-rwxr-xr-x  docker/run_qtip.sh          | 12
-rw-r--r--  docs/network_testcases.rst  | 31
2 files changed, 22 insertions(+), 21 deletions(-)
diff --git a/docker/run_qtip.sh b/docker/run_qtip.sh
index 79c2a225..a4729c06 100755
--- a/docker/run_qtip.sh
+++ b/docker/run_qtip.sh
@@ -11,17 +11,17 @@ source ${QTIP_DIR}/opnfv-creds.sh
if [ "$TEST_CASE" == "compute" ]; then
cd ${QTIP_DIR} && python qtip.py -l ${NODE_NAME} -f compute
- cd ${QTIP_DIR}/data/ref_results/ $$ python compute_suite.py
+ cd ${QTIP_DIR}/data/ref_results/ && python compute_suite.py
fi
if [ "$TEST_CASE" == "storage" ]; then
cd ${QTIP_DIR} && python qtip.py -l ${NODE_NAME} -f storage
- cd ${QTIP_DIR}/data/ref_results/ $$ python storage_suite.py
+ cd ${QTIP_DIR}/data/ref_results/ && python storage_suite.py
fi
if [ "$TEST_CASE" == "network" ]; then
cd ${QTIP_DIR} && python qtip.py -l ${NODE_NAME} -f network
- cd ${QTIP_DIR}/data/ref_results/ $$ python network_suite.py
+ cd ${QTIP_DIR}/data/ref_results/ && python network_suite.py
fi
@@ -30,9 +30,9 @@ if [ "$TEST_CASE" == "all" ]; then
cd ${QTIP_DIR} && python qtip.py -l ${NODE_NAME} -f storage
cd ${QTIP_DIR} && python qtip.py -l ${NODE_NAME} -f network
- cd ${QTIP_DIR}/data/ref_results/ $$ python compute_suite.py
- cd ${QTIP_DIR}/data/ref_results/ $$ python storage_suite.py
- cd ${QTIP_DIR}/data/ref_results/ $$ python network_suite.py
+ cd ${QTIP_DIR}/data/ref_results/ && python compute_suite.py
+ cd ${QTIP_DIR}/data/ref_results/ && python storage_suite.py
+ cd ${QTIP_DIR}/data/ref_results/ && python network_suite.py
fi
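Note on the fix above: $$ is not a shell operator; it is a special parameter that expands to the PID of the current shell. In the buggy form "cd ${QTIP_DIR}/data/ref_results/ $$ python compute_suite.py", the expanded PID and the words "python compute_suite.py" all become extra arguments to cd, so cd fails and the suite script never runs. && chains the two commands, running the second only if the cd succeeds. A minimal, self-contained sketch (the /tmp path is illustrative):

    #!/bin/bash
    # $$ is not an operator; it expands to the current shell's PID.
    echo "PID of this shell: $$"

    # Buggy pattern from the original script: the expanded PID and the
    # remaining words become extra arguments to cd ("too many arguments"),
    # and the python suite is never invoked:
    #   cd ${QTIP_DIR}/data/ref_results/ $$ python compute_suite.py

    # Fixed pattern: && runs the second command only if cd succeeds.
    cd /tmp && echo "reached only because cd /tmp succeeded"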
diff --git a/docs/network_testcases.rst b/docs/network_testcases.rst
index ac68b11b..1c6fb910 100644
--- a/docs/network_testcases.rst
+++ b/docs/network_testcases.rst
@@ -7,11 +7,12 @@ Network Throughput for VMs
Network Throughput for Compute Nodes
For the throughput of the compute nodes we simply go into the systems-under-test
-and install iperf3 on the nodes. One of the SUTs is used a server and the other as a
-client. The client pushes traffic to the server for a duration specified by the user
-configuration file for iperf. These files can be found in the test_cases/{POD}/network/
-directory. The bandwidth is limited only by the physical link layer speed available to the server.
-The result file inlcudes the b/s bandwidth and the CPU usage for both the client and server.
+ and install iperf3 on the nodes. One of the SUTs is used as a server and the
+ other as a client. The client pushes traffic to the server for a duration
+ specified by the user configuration file for iperf. These files can be found
+ in the test_cases/{POD}/network/ directory. The bandwidth is limited only by
+ the physical link layer speed available to the server. The result file
+ includes the b/s bandwidth and the CPU usage for both the client and server.
For the VMs we are running two topologies through the framework.
@@ -19,20 +20,20 @@ For the VMs we are running two topologies through the framework.
2: VMs on different compute nodes
The QTIP framework sets up a stack with a private network, security groups and routers, and attaches the VMs to this network. Iperf3 is installed
-on the VMs and one is assigned the role of client while other serves as a server. Traffic is pushed
-over the QTIP private network between the VMs. A closer look in needed to see how the traffic actually
-flows between the VMs in this configuration to understand what is happening to the packet as traverses
-the openstack network.
+ on the VMs and one is assigned the role of client while the other serves as the server. Traffic is pushed
+ over the QTIP private network between the VMs. A closer look is needed to see how the traffic actually
+ flows between the VMs in this configuration, to understand what happens to the packet as it traverses
+ the OpenStack network.
The packet originates from VM1 and it is sent to the linux bridge via a tap interface where the security groups
-are written. Afterwards the packet is forwarded to the Integration bridge via a patch port. Since VM2 is also connected
-to the Integration bridge in a similar manner as VM1 so the packet gets forwarded to the linux bridge connecting
-VM2. After the linux bridge the packet is sent to VM2 and is recieved by the Iperf3 server. Since no physical link is
-involved in this topology, only the OVS (Integration bridge) is being benchmarked and we are seeing bandwidth in the range
-of 14-15 Gbps.
+ are applied. Afterwards the packet is forwarded to the Integration bridge via a patch port. Since VM2 is connected
+ to the Integration bridge in the same manner as VM1, the packet gets forwarded to the linux bridge connecting
+ VM2. After the linux bridge the packet is sent to VM2 and is received by the Iperf3 server. Since no physical link is
+ involved in this topology, only the OVS (Integration bridge) is being benchmarked, and we see bandwidth in the range
+ of 14-15 Gbps.
For the topology where the VMs are spawned on different compute nodes, the path the packet takes becomes more cumbersome.
-The packet leaves a VM and makes its way to the Integration Bridge as in the first topology however the integration bridge
+The packet leaves a VM and makes its way to the Integration Bridge as in the first topology; however, the integration bridge
forwards the packet to the physical link through the ethernet bridge. The packet then gets a VLAN/Tunnel depending on the network
and is forwarded to the particular Compute node where the second VM is spawned. The packet enters the compute node through the physical
ethernet port and makes its way to the VM through the integration bridge and linux bridge. As seen here, the path is much more involved
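For reference, a minimal sketch of the iperf3 server/client pair described above (the server address and the 20-second duration are illustrative; the real values come from the per-POD configuration files under test_cases/{POD}/network/):

    # On the SUT acting as the server:
    iperf3 -s

    # On the SUT acting as the client: push traffic to the server;
    # verbose mode also reports CPU utilization for both ends.
    iperf3 -c <server-ip> -t 20 -V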