author    | JingLu5 <lvjing5@huawei.com> | 2016-09-07 10:49:18 +0800
committer | JingLu5 <lvjing5@huawei.com> | 2016-09-18 11:40:50 +0800
commit    | 5bdaffe7a8b573c4755e46ec86647e303f22bb26 (patch)
tree      | 4928fe47d26a1d91f37aff0641988b2464775865 /docs/results/fuel-os-odl_l2-nofeature-ha.rst
parent    | 1ba625ef70951c84270eafdb436432dd99cc0b16 (diff)
Doc for Xreview by other test projects
Change-Id: I9976969344c5ac4859b0e79b88157e54ec4198d9
Signed-off-by: JingLu5 <lvjing5@huawei.com>
Diffstat (limited to 'docs/results/fuel-os-odl_l2-nofeature-ha.rst')
-rw-r--r-- | docs/results/fuel-os-odl_l2-nofeature-ha.rst | 171
1 files changed, 143 insertions, 28 deletions
diff --git a/docs/results/fuel-os-odl_l2-nofeature-ha.rst b/docs/results/fuel-os-odl_l2-nofeature-ha.rst
index 914781684..e9ef8fe65 100644
--- a/docs/results/fuel-os-odl_l2-nofeature-ha.rst
+++ b/docs/results/fuel-os-odl_l2-nofeature-ha.rst
@@ -14,7 +14,7 @@ Test Results for fuel-os-odl_l2-nofeature-ha
 Details
 =======

-.. _Grafana: http://130.211.154.108/grafana/dashboard/db/yardstick-main
+.. _Grafana: http://testresults.opnfv.org/grafana/dashboard/db/yardstick-main
 .. _POD2: https://wiki.opnfv.org/pharos?&#community_test_labs

 Overview of test results
@@ -24,21 +24,17 @@ See Grafana_ for viewing test result metrics for each respective test case.
 It is possible to chose which specific scenarios to look at, and then to zoom
 in on the details of each run test scenario as well.

-All of the test case results below are based on 6 scenario test
-runs, each run on the Ericsson POD2_ between February 13 and 24 in 2016. Test
-case TC011_ is the greater exception for which there are only 2 test runs
-available, due to earlier problems with InfluxDB test result population.
-The best would be to have more runs to draw better conclusions from, but these
-are the only runs available at the time of OPNFV R2 release.
+All of the test case results below are based on 4 scenario test
+runs, each run on the Ericsson POD2_ or LF POD2_ between August 25 and 29 in
+2016.

 TC002
 -----
 The round-trip-time (RTT) between 2 VMs on different blades is measured using
-ping. Most test run measurements result on average between 0.3 and 0.5 ms, but
-one date (Feb. 23) sticks out with an RTT average of 1 ms.
-A few runs start with a 1 - 2 ms RTT spike (This could be because of normal ARP
-handling). One test run has a greater RTT spike of 3.9 ms, which is the same
-one with the 0.9 ms average. The other runs have no similar spike at all.
+ping. Most test run measurements result on average between 0.5 and 0.6 ms.
+A few runs start with a 1 - 1.5 ms RTT spike (This could be because of normal ARP
+handling). One test run has a greater RTT spike of 1.9 ms, which is the same
+one with the 0.7 ms average. The other runs have no similar spike at all.
 To be able to draw conclusions more runs should be made. SLA set to 10 ms.
 The SLA value is used as a reference, it has not
 been defined by OPNFV.
@@ -46,10 +42,10 @@ been defined by OPNFV.

 TC005
 -----
 The IO read bandwidth looks similar between different dates, with an
-average between approx. 165 and 185 MB/s. Within each test run the results
-vary, with a minimum 2 MB/s and maximum 617 MB/s on the totality. Most runs
+average between approx. 170 and 200 MB/s. Within each test run the results
+vary, with a minimum 2 MB/s and maximum 838 MB/s on the totality. Most runs
 have a minimum BW of 3 MB/s (two runs at 2 MB/s). The maximum BW varies more in
-absolute numbers between the dates, between 566 and 617 MB/s.
+absolute numbers between the dates, between 617 and 838 MB/s.
 SLA set to 400 MB/s. The SLA value is used as a reference, it has not been
 defined by OPNFV.
@@ -64,8 +60,6 @@ by OPNFV.

 TC011
 -----
-Only 2 test runs are available to report results on.
-
 Packet delay variation between 2 VMs on different blades is measured using
 Iperf3. On the first date the reported packet delay variation varies between
 0.0025 and 0.011 ms, with an average delay variation of 0.0067 ms.
@@ -74,9 +68,7 @@ an average delay variation of 0.004 ms.

 TC012
 -----
-Results are reported for 5 test runs. It is not known why the 6:th test run
-is missing.
-Between test dates the average measurements for memory bandwidth vary between
+Between test dates, the average measurements for memory bandwidth vary between
 17.4 and 17.9 GB/s. Within each test run the results vary more, with a minimal
 BW of 16.4 GB/s and maximum of 18.2 GB/s on the totality.
 SLA set to 15 GB/s. The SLA value is used as a reference, it has not been
@@ -84,16 +76,12 @@ defined by OPNFV.

 TC014
 -----
-Results are reported for 5 test runs. It is not known why the 6:th test run
-is missing.
 The Unixbench processor test run results vary between scores 3080 and 3240,
 one result each date. The average score on the total is 3150.
 No SLA set.

 TC037
 -----
-Results are reported for 5 test runs. It is not currently known why the 6:th
-test run is missing.
 The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
 on different blades are measured when increasing the amount of UDP flows sent
 between the VMs using pktgen as packet generator tool.
@@ -126,12 +114,138 @@ The lost amount of packets normally range between 100 and 1000 per test run,
 but there are spikes in the range of 10000 lost packets as well, and even
 more in a rare cases.

+CPU utilization statistics are collected during UDP flows sent between the VMs
+using pktgen as packet generator tool. The average measurements for CPU
+utilization ratio vary between 1% to 2%. The peak of CPU utilization ratio
+appears around 7%.
+
+TC069
+Between test dates, the average measurements for memory bandwidth vary between
+15.5 and 25.4 GB/s. Within each test run the results vary more, with a minimal
+BW of 9.7 GB/s and maximum of 29.5 GB/s on the totality.
+SLA set to 6 GB/s. The SLA value is used as a reference, it has not been
+defined by OPNFV.
+
+
+TC070
+The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
+on different blades are measured when increasing the amount of UDP flows sent
+between the VMs using pktgen as packet generator tool.
+
+Round trip times and packet throughput between VMs can typically be affected by
+the amount of flows set up and result in higher RTT and less PPS throughput.
+
+The RTT results are similar throughout the different test dates and runs at
+approx. 15 ms. Some test runs show an increase with many flows, in the range
+towards 16 to 17 ms. One exception standing out is Feb. 15 where the average
+RTT is stable at approx. 13 ms. The PPS results are not as consistent as the
+RTT results.
+In some test runs when running with less than approx. 10000 flows the PPS
+throughput is normally flatter compared to when running with more flows, after
+which the PPS throughput decreases. Around 20 percent decrease in the worst
+case. For the other test runs there is however no significant change to the PPS
+throughput when the number of flows are increased. In some test runs the PPS
+is also greater with 1000000 flows compared to other test runs where the PPS
+result is less with only 2 flows.
+
+The average PPS throughput in the different runs varies between 414000 and
+452000 PPS. The total amount of packets in each test run is approx. 7500000 to
+8200000 packets. One test run Feb. 15 sticks out with a PPS average of
+558000 and approx. 1100000 packets in total (same as the on mentioned earlier
+for RTT results).
+
+There are lost packets reported in most of the test runs. There is no observed
+correlation between the amount of flows and the amount of lost packets.
+The lost amount of packets normally range between 100 and 1000 per test run,
+but there are spikes in the range of 10000 lost packets as well, and even
+more in a rare cases.
+
+Memory utilization statistics are collected during UDP flows sent between the
+VMs using pktgen as packet generator tool. The average measurements for memory
+utilization vary between 225MB to 246MB. The peak of memory utilization appears
+around 340MB.
+
+TC071
+The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
+on different blades are measured when increasing the amount of UDP flows sent
+between the VMs using pktgen as packet generator tool.
+
+Round trip times and packet throughput between VMs can typically be affected by
+the amount of flows set up and result in higher RTT and less PPS throughput.
+
+The RTT results are similar throughout the different test dates and runs at
+approx. 15 ms. Some test runs show an increase with many flows, in the range
+towards 16 to 17 ms. One exception standing out is Feb. 15 where the average
+RTT is stable at approx. 13 ms. The PPS results are not as consistent as the
+RTT results.
+In some test runs when running with less than approx. 10000 flows the PPS
+throughput is normally flatter compared to when running with more flows, after
+which the PPS throughput decreases. Around 20 percent decrease in the worst
+case. For the other test runs there is however no significant change to the PPS
+throughput when the number of flows are increased. In some test runs the PPS
+is also greater with 1000000 flows compared to other test runs where the PPS
+result is less with only 2 flows.
+
+The average PPS throughput in the different runs varies between 414000 and
+452000 PPS. The total amount of packets in each test run is approx. 7500000 to
+8200000 packets. One test run Feb. 15 sticks out with a PPS average of
+558000 and approx. 1100000 packets in total (same as the on mentioned earlier
+for RTT results).
+
+There are lost packets reported in most of the test runs. There is no observed
+correlation between the amount of flows and the amount of lost packets.
+The lost amount of packets normally range between 100 and 1000 per test run,
+but there are spikes in the range of 10000 lost packets as well, and even
+more in a rare cases.
+
+Cache utilization statistics are collected during UDP flows sent between the
+VMs using pktgen as packet generator tool. The average measurements for cache
+utilization vary between 205MB to 212MB.
+
+TC072
+The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
+on different blades are measured when increasing the amount of UDP flows sent
+between the VMs using pktgen as packet generator tool.
+
+Round trip times and packet throughput between VMs can typically be affected by
+the amount of flows set up and result in higher RTT and less PPS throughput.
+
+The RTT results are similar throughout the different test dates and runs at
+approx. 15 ms. Some test runs show an increase with many flows, in the range
+towards 16 to 17 ms. One exception standing out is Feb. 15 where the average
+RTT is stable at approx. 13 ms. The PPS results are not as consistent as the
+RTT results.
+In some test runs when running with less than approx. 10000 flows the PPS
+throughput is normally flatter compared to when running with more flows, after
+which the PPS throughput decreases. Around 20 percent decrease in the worst
+case. For the other test runs there is however no significant change to the PPS
+throughput when the number of flows are increased. In some test runs the PPS
+is also greater with 1000000 flows compared to other test runs where the PPS
+result is less with only 2 flows.
+
+The average PPS throughput in the different runs varies between 414000 and
+452000 PPS. The total amount of packets in each test run is approx. 7500000 to
+8200000 packets. One test run Feb. 15 sticks out with a PPS average of
+558000 and approx. 1100000 packets in total (same as the on mentioned earlier
+for RTT results).
+
+There are lost packets reported in most of the test runs. There is no observed
+correlation between the amount of flows and the amount of lost packets.
+The lost amount of packets normally range between 100 and 1000 per test run,
+but there are spikes in the range of 10000 lost packets as well, and even
+more in a rare cases.
+
+Network utilization statistics are collected during UDP flows sent between the
+VMs using pktgen as packet generator tool. Total number of packets received per
+second was average on 200 kpps and total number of packets transmitted per
+second was average on 600 kpps.
+
 Detailed test results
 ---------------------
-The scenario was run on Ericsson POD2_ with:
-Fuel 8.0
-OpenStack Liberty
-OpenVirtualSwitch 2.3.1
+The scenario was run on Ericsson POD2_ and LF POD2_ with:
+Fuel 9.0
+OpenStack Mitaka
+OpenVirtualSwitch 2.5.90
 OpenDayLight Beryllium

 Rationale for decisions
@@ -154,3 +268,4 @@ to make further analyzes to find patterns and reasons for lost traffic.
 Also of interest could be to see if there are continuous variations where some
 test cases stand out with better or worse results than the general test case.

+
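
For readers unfamiliar with how a result such as the TC002 RTT figure maps to an SLA verdict, a minimal standalone sketch of the idea (ping a peer VM, average the round-trip times, compare against the 10 ms SLA) is shown below. This is only an illustration: the real test logic lives in the Yardstick TC002 scenario, and the peer address 10.0.0.5 and the sample count used here are hypothetical placeholders, not values taken from the test suite.

    import re
    import statistics
    import subprocess

    def measure_rtt_ms(peer_ip, count=10):
        """Ping a peer and return the individual round-trip times in milliseconds."""
        ping = subprocess.run(
            ["ping", "-c", str(count), peer_ip],
            capture_output=True, text=True, check=True,
        )
        # iputils ping reports each reply as "... time=0.482 ms"
        return [float(t) for t in re.findall(r"time=([\d.]+) ms", ping.stdout)]

    if __name__ == "__main__":
        rtts = measure_rtt_ms("10.0.0.5")  # hypothetical address of the peer VM
        avg = statistics.mean(rtts)
        print("avg RTT %.2f ms, max RTT %.2f ms" % (avg, max(rtts)))
        # TC002 uses 10 ms as the reference SLA for the average RTT
        print("SLA (10 ms) %s" % ("met" if avg <= 10.0 else "exceeded"))

Run from one VM against the other, this reproduces the kind of average and spike values quoted above; the framework additionally stores each sample in InfluxDB so that the Grafana dashboards linked in the report can plot them per run.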