author     liang gao <jean.gaoliang@huawei.com>         2016-09-20 09:55:09 +0000
committer  Gerrit Code Review <gerrit@172.30.200.206>   2016-09-20 09:55:09 +0000
commit     ad8e0f8b2203d837e5f9fa46ae34496751aa42d3 (patch)
tree       57269d04f164090defa154f2f98ac8417eac6cb4 /docs/results/os-nosdn-kvm-ha.rst
parent     18738b34abe226ff1a6a2d0dd7564737684c022e (diff)
parent     197cef580315e942dffb517412374fd297c34b33 (diff)
Merge "Update scenario test results files for Colorado release"
Diffstat (limited to 'docs/results/os-nosdn-kvm-ha.rst')
-rw-r--r--  docs/results/os-nosdn-kvm-ha.rst  270
1 file changed, 270 insertions, 0 deletions
diff --git a/docs/results/os-nosdn-kvm-ha.rst b/docs/results/os-nosdn-kvm-ha.rst
new file mode 100644
index 000000000..a8a56f80e
--- /dev/null
+++ b/docs/results/os-nosdn-kvm-ha.rst
@@ -0,0 +1,270 @@

.. This work is licensed under a Creative Commons Attribution 4.0 International
.. License.
.. http://creativecommons.org/licenses/by/4.0


================================
Test Results for os-nosdn-kvm-ha
================================

.. toctree::
   :maxdepth: 2


fuel
====

.. _Grafana: http://testresults.opnfv.org/grafana/dashboard/db/yardstick-main
.. _POD2: https://wiki.opnfv.org/pharos?&#community_test_labs

Overview of test results
------------------------

See Grafana_ for viewing test result metrics for each respective test case. It
is possible to choose which specific scenarios to look at, and then to zoom in
on the details of each run of a test scenario as well.

All of the test case results below are based on 4 scenario test runs, each run
on the Ericsson POD2_ or LF POD2_ between August 24 and 30, 2016.

TC002
-----
The round-trip time (RTT) between 2 VMs on different blades is measured using
ping. Most test run measurements result in an average between 0.44 and 0.75 ms.
A few runs start with a 0.65 - 0.68 ms RTT spike (this could be caused by
normal ARP handling). One test run has a greater RTT spike of 1.49 ms. To be
able to draw conclusions, more runs should be made. The SLA is set to 10 ms;
the SLA value is used as a reference only and has not been defined by OPNFV.
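As an illustration of how such an RTT result could be checked against the
reference SLA, the following is a minimal Python sketch. The sample values and
the ``check_rtt_sla`` helper are hypothetical and not part of the Yardstick
test suite; the 10 ms threshold is the reference value mentioned above.

.. code-block:: python

   # Minimal sketch: compare measured ping RTT samples (in ms) against the
   # 10 ms reference SLA mentioned above. Sample values are illustrative only.
   from statistics import mean

   RTT_SLA_MS = 10.0  # reference value, not an OPNFV-defined SLA

   def check_rtt_sla(rtt_samples_ms, sla_ms=RTT_SLA_MS):
       """Return (average RTT, True if the average is within the SLA)."""
       avg = mean(rtt_samples_ms)
       return avg, avg <= sla_ms

   samples = [0.44, 0.52, 0.68, 0.75, 1.49]  # hypothetical per-run averages
   avg_rtt, within_sla = check_rtt_sla(samples)
   print("average RTT %.2f ms, within SLA: %s" % (avg_rtt, within_sla))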
TC005
-----
The IO read bandwidth looks similar between the different dates, with an
average between approx. 92 and 204 MB/s. Within each test run the results
vary, with a minimum of 2 MB/s and a maximum of 819 MB/s over all runs. Most
runs have a minimum BW of 3 MB/s (one run at 2 MB/s). The maximum BW varies
more in absolute numbers between the dates, between 238 and 819 MB/s.
The SLA is set to 400 MB/s; the SLA value is used as a reference only and has
not been defined by OPNFV.

TC010
-----
The measurements for memory latency are similar between test dates and result
in approx. 2.07 ns. The variations within each test run are similar, between
1.41 and 3.53 ns.
The SLA is set to 30 ns; the SLA value is used as a reference only and has not
been defined by OPNFV.

TC011
-----
Packet delay variation between 2 VMs on different blades is measured using
Iperf3. The reported packet delay variation varies between 0.0051 and
0.0243 ms, with an average delay variation between 0.0081 ms and 0.0195 ms.

TC012
-----
Between test dates, the average measurements for memory bandwidth result in
approx. 13.6 GB/s. Within each test run the results vary more, with a minimum
BW of 6.09 GB/s and a maximum of 16.47 GB/s over all runs.
The SLA is set to 15 GB/s; the SLA value is used as a reference only and has
not been defined by OPNFV.

TC014
-----
The Unixbench processor test run results vary between scores of 2316 and 3619,
with one result per date.
No SLA is set.

TC037
-----
The number of packets per second (PPS) and the round-trip times (RTT) between
2 VMs on different blades are measured while increasing the number of UDP
flows sent between the VMs, using pktgen as the packet generator tool.

Round-trip times and packet throughput between VMs can typically be affected
by the number of flows set up, resulting in higher RTT and lower PPS
throughput.

The RTT results are similar throughout the different test dates and runs, at
approx. 15 ms. Some test runs show an increase with many flows, in the range
towards 16 to 17 ms. One exception that stands out is Feb. 15, where the
average RTT is stable at approx. 13 ms. The PPS results are not as consistent
as the RTT results.
In some test runs, when running with less than approx. 10000 flows the PPS
throughput is normally flatter than when running with more flows, after which
the PPS throughput decreases, by around 20 percent in the worst case. For the
other test runs there is, however, no significant change to the PPS throughput
when the number of flows is increased. In some test runs the PPS is also
greater with 1000000 flows than in other test runs where the PPS result is
lower with only 2 flows.

The average PPS throughput in the different runs varies between 414000 and
452000 PPS. The total number of packets in each test run is approx. 7500000 to
8200000 packets. One test run on Feb. 15 sticks out with a PPS average of
558000 and approx. 1100000 packets in total (the same run as the one mentioned
earlier for the RTT results).

There are lost packets reported in most of the test runs. There is no observed
correlation between the number of flows and the number of lost packets. The
number of lost packets normally ranges between 100 and 1000 per test run, but
there are spikes in the range of 10000 lost packets as well, and even more in
rare cases.

CPU utilization statistics are collected during the UDP flows sent between the
VMs using pktgen as the packet generator tool. The average measurements for
the CPU utilization ratio vary between 1% and 2%. The peak of the CPU
utilization ratio appears to be around 7%.
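To make the PPS figures above concrete, here is a minimal Python sketch of how
per-second packet counts from a single run could be summarized into an average
PPS and a total packet count. The sample data and the ``summarize_pps`` helper
are hypothetical illustrations, not part of the Yardstick tooling.

.. code-block:: python

   # Minimal sketch: summarize hypothetical per-second packet counts from a
   # pktgen-style run into an average PPS and a total packet count.
   def summarize_pps(packets_per_second):
       """Return (average PPS, total packets) for a list of per-second counts."""
       total = sum(packets_per_second)
       avg = total / len(packets_per_second)
       return avg, total

   # Hypothetical samples: 18 one-second intervals at roughly 414-452 kPPS.
   samples = [414000, 430000, 452000, 445000, 428000, 439000] * 3
   avg_pps, total_packets = summarize_pps(samples)
   print("average PPS: %.0f, total packets: %d" % (avg_pps, total_packets))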
TC069
-----
Between test dates, the average measurements for memory bandwidth vary between
22.6 and 29.1 GB/s. Within each test run the results vary more, with a minimum
BW of 20.0 GB/s and a maximum of 29.5 GB/s over all runs.
The SLA is set to 6 GB/s; the SLA value is used as a reference only and has
not been defined by OPNFV.


TC070
-----
The number of packets per second (PPS) and the round-trip times (RTT) between
2 VMs on different blades are measured while increasing the number of UDP
flows sent between the VMs, using pktgen as the packet generator tool.

Round-trip times and packet throughput between VMs can typically be affected
by the number of flows set up, resulting in higher RTT and lower PPS
throughput.

The RTT results are similar throughout the different test dates and runs, at
approx. 15 ms. Some test runs show an increase with many flows, in the range
towards 16 to 17 ms. One exception that stands out is Feb. 15, where the
average RTT is stable at approx. 13 ms. The PPS results are not as consistent
as the RTT results.
In some test runs, when running with less than approx. 10000 flows the PPS
throughput is normally flatter than when running with more flows, after which
the PPS throughput decreases, by around 20 percent in the worst case. For the
other test runs there is, however, no significant change to the PPS throughput
when the number of flows is increased. In some test runs the PPS is also
greater with 1000000 flows than in other test runs where the PPS result is
lower with only 2 flows.

The average PPS throughput in the different runs varies between 414000 and
452000 PPS. The total number of packets in each test run is approx. 7500000 to
8200000 packets. One test run on Feb. 15 sticks out with a PPS average of
558000 and approx. 1100000 packets in total (the same run as the one mentioned
earlier for the RTT results).

There are lost packets reported in most of the test runs. There is no observed
correlation between the number of flows and the number of lost packets. The
number of lost packets normally ranges between 100 and 1000 per test run, but
there are spikes in the range of 10000 lost packets as well, and even more in
rare cases.

Memory utilization statistics are collected during the UDP flows sent between
the VMs using pktgen as the packet generator tool. The average measurements
for memory utilization vary between 225 MB and 246 MB. The peak of memory
utilization appears to be around 340 MB.

TC071
-----
The number of packets per second (PPS) and the round-trip times (RTT) between
2 VMs on different blades are measured while increasing the number of UDP
flows sent between the VMs, using pktgen as the packet generator tool.

Round-trip times and packet throughput between VMs can typically be affected
by the number of flows set up, resulting in higher RTT and lower PPS
throughput.

The RTT results are similar throughout the different test dates and runs, at
approx. 15 ms. Some test runs show an increase with many flows, in the range
towards 16 to 17 ms. One exception that stands out is Feb. 15, where the
average RTT is stable at approx. 13 ms. The PPS results are not as consistent
as the RTT results.
In some test runs, when running with less than approx. 10000 flows the PPS
throughput is normally flatter than when running with more flows, after which
the PPS throughput decreases, by around 20 percent in the worst case. For the
other test runs there is, however, no significant change to the PPS throughput
when the number of flows is increased. In some test runs the PPS is also
greater with 1000000 flows than in other test runs where the PPS result is
lower with only 2 flows.

The average PPS throughput in the different runs varies between 414000 and
452000 PPS. The total number of packets in each test run is approx. 7500000 to
8200000 packets. One test run on Feb. 15 sticks out with a PPS average of
558000 and approx. 1100000 packets in total (the same run as the one mentioned
earlier for the RTT results).

There are lost packets reported in most of the test runs. There is no observed
correlation between the number of flows and the number of lost packets. The
number of lost packets normally ranges between 100 and 1000 per test run, but
there are spikes in the range of 10000 lost packets as well, and even more in
rare cases.

Cache utilization statistics are collected during the UDP flows sent between
the VMs using pktgen as the packet generator tool. The average measurements
for cache utilization vary between 205 MB and 212 MB.

TC072
-----
The number of packets per second (PPS) and the round-trip times (RTT) between
2 VMs on different blades are measured while increasing the number of UDP
flows sent between the VMs, using pktgen as the packet generator tool.

Round-trip times and packet throughput between VMs can typically be affected
by the number of flows set up, resulting in higher RTT and lower PPS
throughput.

The RTT results are similar throughout the different test dates and runs, at
approx. 15 ms. Some test runs show an increase with many flows, in the range
towards 16 to 17 ms. One exception that stands out is Feb. 15, where the
average RTT is stable at approx. 13 ms. The PPS results are not as consistent
as the RTT results.
In some test runs, when running with less than approx. 10000 flows the PPS
throughput is normally flatter than when running with more flows, after which
the PPS throughput decreases, by around 20 percent in the worst case. For the
other test runs there is, however, no significant change to the PPS throughput
when the number of flows is increased. In some test runs the PPS is also
greater with 1000000 flows than in other test runs where the PPS result is
lower with only 2 flows.

The average PPS throughput in the different runs varies between 414000 and
452000 PPS. The total number of packets in each test run is approx. 7500000 to
8200000 packets. One test run on Feb. 15 sticks out with a PPS average of
558000 and approx. 1100000 packets in total (the same run as the one mentioned
earlier for the RTT results).

There are lost packets reported in most of the test runs. There is no observed
correlation between the number of flows and the number of lost packets. The
number of lost packets normally ranges between 100 and 1000 per test run, but
there are spikes in the range of 10000 lost packets as well, and even more in
rare cases.

Network utilization statistics are collected during the UDP flows sent between
the VMs using pktgen as the packet generator tool. The total number of packets
received per second averages around 200 kpps, and the total number of packets
transmitted per second averages around 600 kpps.
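The statement repeated for TC037, TC070, TC071 and TC072 above, that no
correlation is observed between the number of flows and the number of lost
packets, could be sanity-checked with a simple correlation computation. The
following Python sketch, using purely hypothetical sample data, shows one way
to do that; it is not part of the Yardstick test suite.

.. code-block:: python

   # Minimal sketch: check whether flow count and lost-packet count correlate,
   # using Pearson's correlation coefficient. All sample data is hypothetical.
   from math import sqrt

   def pearson(xs, ys):
       """Pearson correlation coefficient between two equal-length lists."""
       n = len(xs)
       mx, my = sum(xs) / n, sum(ys) / n
       cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
       sx = sqrt(sum((x - mx) ** 2 for x in xs))
       sy = sqrt(sum((y - my) ** 2 for y in ys))
       return cov / (sx * sy)

   flows = [2, 1000, 10000, 100000, 1000000]   # flows per run
   lost_packets = [400, 150, 900, 300, 700]    # hypothetical losses per run
   print("correlation: %.2f" % pearson(flows, lost_packets))

A coefficient close to zero would be consistent with the "no observed
correlation" statement above.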
Detailed test results
---------------------
The scenario was run on Ericsson POD2_ and LF POD2_ with:
Fuel 9.0
OpenStack Mitaka
OpenVirtualSwitch 2.5.90
OpenDayLight Beryllium

Rationale for decisions
-----------------------
Pass

Tests were successfully executed and metrics collected.
No SLA was verified; this is to be decided on in the next release of OPNFV.

Conclusions and recommendations
-------------------------------
The pktgen test configuration has a relatively large base effect on RTT in
TC037 compared to TC002, where there is no background load at all: approx.
15 ms compared to approx. 0.5 ms, which is roughly a 3000 percent difference
in the RTT results.
RTT and throughput in particular come out with better results than, for
instance, the *fuel-os-nosdn-nofeature-ha* scenario does. The reason for this
should probably be further analyzed and understood. It could also be of
interest to make further analyses to find patterns and reasons for lost
traffic, and to see whether there are continuous variations where some test
cases stand out with better or worse results than the general test case.