From 197cef580315e942dffb517412374fd297c34b33 Mon Sep 17 00:00:00 2001 From: JingLu5 Date: Sun, 18 Sep 2016 15:31:20 +0800 Subject: Update scenario test results files for Colorado release JIRA: YARDSTICK-351 Also fix some errors in userguide Change-Id: Icd70f1cc99d735c62e235808eb721a7482e0a218 Signed-off-by: JingLu5 --- docs/release/release-notes.rst | 3 +- docs/results/apex-os-nosdn-nofeature-ha.rst | 267 --------------- docs/results/fuel-os-nosdn-kvm-ha.rst | 38 --- docs/results/fuel-os-odl_l2-nofeature-ha.rst | 271 --------------- docs/results/fuel-os-odl_l3-nofeature-ha.rst | 38 --- docs/results/joid-os-onos-sfc-ha.rst | 36 -- docs/results/os-nosdn-kvm-ha.rst | 270 +++++++++++++++ docs/results/os-nosdn-nofeature-ha.rst | 492 +++++++++++++++++++++++++++ docs/results/os-nosdn-nofeature-noha.rst | 259 ++++++++++++++ docs/results/os-odl_l2-bgpvpn-ha.rst | 53 +++ docs/results/os-odl_l2-nofeature-ha.rst | 274 +++++++++++++++ docs/results/os-onos-nofeature-ha.rst | 257 ++++++++++++++ docs/results/os-onos-sfc-ha.rst | 274 +++++++++++++++ docs/results/overview.rst | 30 +- docs/results/results.rst | 54 +-- docs/userguide/07-installation.rst | 143 +++----- 16 files changed, 1960 insertions(+), 799 deletions(-) delete mode 100644 docs/results/apex-os-nosdn-nofeature-ha.rst delete mode 100644 docs/results/fuel-os-nosdn-kvm-ha.rst delete mode 100644 docs/results/fuel-os-odl_l2-nofeature-ha.rst delete mode 100644 docs/results/fuel-os-odl_l3-nofeature-ha.rst delete mode 100644 docs/results/joid-os-onos-sfc-ha.rst create mode 100644 docs/results/os-nosdn-kvm-ha.rst create mode 100644 docs/results/os-nosdn-nofeature-ha.rst create mode 100644 docs/results/os-nosdn-nofeature-noha.rst create mode 100644 docs/results/os-odl_l2-bgpvpn-ha.rst create mode 100644 docs/results/os-odl_l2-nofeature-ha.rst create mode 100644 docs/results/os-onos-nofeature-ha.rst create mode 100644 docs/results/os-onos-sfc-ha.rst (limited to 'docs') diff --git a/docs/release/release-notes.rst b/docs/release/release-notes.rst index df20e4375..f1b8a8de1 100644 --- a/docs/release/release-notes.rst +++ b/docs/release/release-notes.rst @@ -502,7 +502,8 @@ Known Issues/Faults - IPv6 support - Boot up VM failed in joid-os-nosdn-lxd-ha and joid-os-nosdn-lxd-noha scenarios - Yardstick CI job timeout in fuel-os-onos-nofeature-ha scenario - + - SSH timeout in apex-os-onos-sfc-ha, apex-os-onos-nofeature-ha and apex-os-odl_l3-nofeature-ha scenarios + - Scp /home/stack/overcloudrc failed in apex-os-nosdn-ovs-noha and apex-os-odl_l2-sfc-noha scenarios .. note:: The faults not related to *Yardstick* framework, addressing scenarios which were not fully verified, are listed in the OPNFV installer's release diff --git a/docs/results/apex-os-nosdn-nofeature-ha.rst b/docs/results/apex-os-nosdn-nofeature-ha.rst deleted file mode 100644 index faf5e62fb..000000000 --- a/docs/results/apex-os-nosdn-nofeature-ha.rst +++ /dev/null @@ -1,267 +0,0 @@ -.. This work is licensed under a Creative Commons Attribution 4.0 International -.. License. -.. http://creativecommons.org/licenses/by/4.0 - - -=========================================== -Test Results for apex-os-nosdn-nofeature-ha -=========================================== - -.. toctree:: - :maxdepth: 2 - - -Details -======= - -.. _Grafana: http://testresults.opnfv.org/grafana/dashboard/db/yardstick-main -.. _POD1: https://wiki.opnfv.org/pharos?&#community_test_labs - - -Overview of test results ------------------------- - -See Grafana_ for viewing test result metrics for each respective test case. 
It -is possible to chose which specific scenarios to look at, and then to zoom in -on the details of each run test scenario as well. - -All of the test case results below are based on 4 scenario test -runs, each run on the LF POD1_ between August 25 and 28 in -2016. - -TC002 ------ -The round-trip-time (RTT) between 2 VMs on different blades is measured using -ping. Most test run measurements result on average between 0.74 and 1.08 ms. -A few runs start with a 0.99 - 1.07 ms RTT spike (This could be because of -normal ARP handling). One test run has a greater RTT spike of 1.35 ms. -To be able to draw conclusions more runs should be made. SLA set to 10 ms. -The SLA value is used as a reference, it has not been defined by OPNFV. - -TC005 ------ -The IO read bandwidth looks similar between different dates, with an -average between approx. 128 and 136 MB/s. Within each test run the results -vary, with a minimum 5 MB/s and maximum 446 MB/s on the totality. Most runs -have a minimum BW of 5 MB/s (one run at 6 MB/s). The maximum BW varies more in -absolute numbers between the dates, between 416 and 446 MB/s. -SLA set to 400 MB/s. The SLA value is used as a reference, it has not been -defined by OPNFV. - -TC010 ------ -The measurements for memory latency are similar between test dates and result -in approx. 1.09 ns. The variations within each test run are similar, between -1.0860 and 1.0880 ns. -SLA set to 30 ns. The SLA value is used as a reference, it has not been defined -by OPNFV. - -TC011 ------ -Packet delay variation between 2 VMs on different blades is measured using -Iperf3. The reported packet delay variation varies between 0.0025 and 0.0148 ms, -with an average delay variation between 0.0056 ms and 0.0157 ms. - -TC012 ------ -Between test dates, the average measurements for memory bandwidth result in -approx. 19.70 GB/s. Within each test run the results vary more, with a minimal -BW of 18.16 GB/s and maximum of 20.13 GB/s on the totality. -SLA set to 15 GB/s. The SLA value is used as a reference, it has not been -defined by OPNFV. - -TC014 ------ -The Unixbench processor test run results vary between scores 3224.4 and 3842.8, -one result each date. The average score on the total is 3659.5. -No SLA set. - -TC037 ------ -The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs -on different blades are measured when increasing the amount of UDP flows sent -between the VMs using pktgen as packet generator tool. - -Round trip times and packet throughput between VMs can typically be affected by -the amount of flows set up and result in higher RTT and less PPS throughput. - -The RTT results are similar throughout the different test dates and runs at -approx. 15 ms. Some test runs show an increase with many flows, in the range -towards 16 to 17 ms. One exception standing out is Feb. 15 where the average -RTT is stable at approx. 13 ms. The PPS results are not as consistent as the -RTT results. -In some test runs when running with less than approx. 10000 flows the PPS -throughput is normally flatter compared to when running with more flows, after -which the PPS throughput decreases. Around 20 percent decrease in the worst -case. For the other test runs there is however no significant change to the PPS -throughput when the number of flows are increased. In some test runs the PPS -is also greater with 1000000 flows compared to other test runs where the PPS -result is less with only 2 flows. 
- -The average PPS throughput in the different runs varies between 414000 and -452000 PPS. The total amount of packets in each test run is approx. 7500000 to -8200000 packets. One test run Feb. 15 sticks out with a PPS average of -558000 and approx. 1100000 packets in total (same as the on mentioned earlier -for RTT results). - -There are lost packets reported in most of the test runs. There is no observed -correlation between the amount of flows and the amount of lost packets. -The lost amount of packets normally range between 100 and 1000 per test run, -but there are spikes in the range of 10000 lost packets as well, and even -more in a rare cases. - -CPU utilization statistics are collected during UDP flows sent between the VMs -using pktgen as packet generator tool. The average measurements for CPU -utilization ratio vary between 1% to 2%. The peak of CPU utilization ratio -appears around 7%. - -TC069 -Between test dates, the average measurements for memory bandwidth vary between -22.6 and 29.1 GB/s. Within each test run the results vary more, with a minimal -BW of 20.0 GB/s and maximum of 29.5 GB/s on the totality. -SLA set to 6 GB/s. The SLA value is used as a reference, it has not been -defined by OPNFV. - - -TC070 -The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs -on different blades are measured when increasing the amount of UDP flows sent -between the VMs using pktgen as packet generator tool. - -Round trip times and packet throughput between VMs can typically be affected by -the amount of flows set up and result in higher RTT and less PPS throughput. - -The RTT results are similar throughout the different test dates and runs at -approx. 15 ms. Some test runs show an increase with many flows, in the range -towards 16 to 17 ms. One exception standing out is Feb. 15 where the average -RTT is stable at approx. 13 ms. The PPS results are not as consistent as the -RTT results. -In some test runs when running with less than approx. 10000 flows the PPS -throughput is normally flatter compared to when running with more flows, after -which the PPS throughput decreases. Around 20 percent decrease in the worst -case. For the other test runs there is however no significant change to the PPS -throughput when the number of flows are increased. In some test runs the PPS -is also greater with 1000000 flows compared to other test runs where the PPS -result is less with only 2 flows. - -The average PPS throughput in the different runs varies between 414000 and -452000 PPS. The total amount of packets in each test run is approx. 7500000 to -8200000 packets. One test run Feb. 15 sticks out with a PPS average of -558000 and approx. 1100000 packets in total (same as the on mentioned earlier -for RTT results). - -There are lost packets reported in most of the test runs. There is no observed -correlation between the amount of flows and the amount of lost packets. -The lost amount of packets normally range between 100 and 1000 per test run, -but there are spikes in the range of 10000 lost packets as well, and even -more in a rare cases. - -Memory utilization statistics are collected during UDP flows sent between the -VMs using pktgen as packet generator tool. The average measurements for memory -utilization vary between 225MB to 246MB. The peak of memory utilization appears -around 340MB. 
- -TC071 -The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs -on different blades are measured when increasing the amount of UDP flows sent -between the VMs using pktgen as packet generator tool. - -Round trip times and packet throughput between VMs can typically be affected by -the amount of flows set up and result in higher RTT and less PPS throughput. - -The RTT results are similar throughout the different test dates and runs at -approx. 15 ms. Some test runs show an increase with many flows, in the range -towards 16 to 17 ms. One exception standing out is Feb. 15 where the average -RTT is stable at approx. 13 ms. The PPS results are not as consistent as the -RTT results. -In some test runs when running with less than approx. 10000 flows the PPS -throughput is normally flatter compared to when running with more flows, after -which the PPS throughput decreases. Around 20 percent decrease in the worst -case. For the other test runs there is however no significant change to the PPS -throughput when the number of flows are increased. In some test runs the PPS -is also greater with 1000000 flows compared to other test runs where the PPS -result is less with only 2 flows. - -The average PPS throughput in the different runs varies between 414000 and -452000 PPS. The total amount of packets in each test run is approx. 7500000 to -8200000 packets. One test run Feb. 15 sticks out with a PPS average of -558000 and approx. 1100000 packets in total (same as the on mentioned earlier -for RTT results). - -There are lost packets reported in most of the test runs. There is no observed -correlation between the amount of flows and the amount of lost packets. -The lost amount of packets normally range between 100 and 1000 per test run, -but there are spikes in the range of 10000 lost packets as well, and even -more in a rare cases. - -Cache utilization statistics are collected during UDP flows sent between the -VMs using pktgen as packet generator tool. The average measurements for cache -utilization vary between 205MB to 212MB. - -TC072 -The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs -on different blades are measured when increasing the amount of UDP flows sent -between the VMs using pktgen as packet generator tool. - -Round trip times and packet throughput between VMs can typically be affected by -the amount of flows set up and result in higher RTT and less PPS throughput. - -The RTT results are similar throughout the different test dates and runs at -approx. 15 ms. Some test runs show an increase with many flows, in the range -towards 16 to 17 ms. One exception standing out is Feb. 15 where the average -RTT is stable at approx. 13 ms. The PPS results are not as consistent as the -RTT results. -In some test runs when running with less than approx. 10000 flows the PPS -throughput is normally flatter compared to when running with more flows, after -which the PPS throughput decreases. Around 20 percent decrease in the worst -case. For the other test runs there is however no significant change to the PPS -throughput when the number of flows are increased. In some test runs the PPS -is also greater with 1000000 flows compared to other test runs where the PPS -result is less with only 2 flows. - -The average PPS throughput in the different runs varies between 414000 and -452000 PPS. The total amount of packets in each test run is approx. 7500000 to -8200000 packets. One test run Feb. 15 sticks out with a PPS average of -558000 and approx. 
1100000 packets in total (same as the on mentioned earlier -for RTT results). - -There are lost packets reported in most of the test runs. There is no observed -correlation between the amount of flows and the amount of lost packets. -The lost amount of packets normally range between 100 and 1000 per test run, -but there are spikes in the range of 10000 lost packets as well, and even -more in a rare cases. - -Network utilization statistics are collected during UDP flows sent between the -VMs using pktgen as packet generator tool. Total number of packets received per -second was average on 200 kpps and total number of packets transmitted per -second was average on 600 kpps. - -Detailed test results ---------------------- -The scenario was run on LF POD1_ with: -Apex -OpenStack Mitaka -OpenVirtualSwitch 2.5.90 -OpenDayLight Beryllium - -Rationale for decisions ------------------------ -Pass - -Tests were successfully executed and metrics collected. -No SLA was verified. To be decided on in next release of OPNFV. - -Conclusions and recommendations -------------------------------- -The pktgen test configuration has a relatively large base effect on RTT in -TC037 compared to TC002, where there is no background load at all. Approx. -15 ms compared to approx. 0.5 ms, which is more than a 3000 percentage -difference in RTT results. -Especially RTT and throughput come out with better results than for instance -the *fuel-os-nosdn-nofeature-ha* scenario does. The reason for this should -probably be further analyzed and understood. Also of interest could be -to make further analyzes to find patterns and reasons for lost traffic. -Also of interest could be to see if there are continuous variations where -some test cases stand out with better or worse results than the general test -case. - diff --git a/docs/results/fuel-os-nosdn-kvm-ha.rst b/docs/results/fuel-os-nosdn-kvm-ha.rst deleted file mode 100644 index 217bab7c0..000000000 --- a/docs/results/fuel-os-nosdn-kvm-ha.rst +++ /dev/null @@ -1,38 +0,0 @@ -.. This work is licensed under a Creative Commons Attribution 4.0 International -.. License. -.. http://creativecommons.org/licenses/by/4.0 - - -===================================== -Test Results for fuel-os-nosdn-kvm-ha -===================================== - -.. toctree:: - :maxdepth: 2 - - -Details -======= - -.. after this doc is filled, remove all comments and include the scenario in -.. results.rst by removing the comment on the file name. - - -Overview of test results ------------------------- - -.. general on metrics collected, number of iterations - -Detailed test results ---------------------- - -.. info on lab, installer, scenario - -Rationale for decisions ------------------------ -.. result analysis, pass/fail - -Conclusions and recommendations -------------------------------- - -.. did the expected behavior occured? diff --git a/docs/results/fuel-os-odl_l2-nofeature-ha.rst b/docs/results/fuel-os-odl_l2-nofeature-ha.rst deleted file mode 100644 index e9ef8fe65..000000000 --- a/docs/results/fuel-os-odl_l2-nofeature-ha.rst +++ /dev/null @@ -1,271 +0,0 @@ -.. This work is licensed under a Creative Commons Attribution 4.0 International -.. License. -.. http://creativecommons.org/licenses/by/4.0 - - -============================================ -Test Results for fuel-os-odl_l2-nofeature-ha -============================================ - -.. toctree:: - :maxdepth: 2 - - -Details -======= - -.. _Grafana: http://testresults.opnfv.org/grafana/dashboard/db/yardstick-main -.. 
_POD2: https://wiki.opnfv.org/pharos?&#community_test_labs - -Overview of test results ------------------------- - -See Grafana_ for viewing test result metrics for each respective test case. It -is possible to chose which specific scenarios to look at, and then to zoom in -on the details of each run test scenario as well. - -All of the test case results below are based on 4 scenario test -runs, each run on the Ericsson POD2_ or LF POD2_ between August 25 and 29 in -2016. - -TC002 ------ -The round-trip-time (RTT) between 2 VMs on different blades is measured using -ping. Most test run measurements result on average between 0.5 and 0.6 ms. -A few runs start with a 1 - 1.5 ms RTT spike (This could be because of normal ARP -handling). One test run has a greater RTT spike of 1.9 ms, which is the same -one with the 0.7 ms average. The other runs have no similar spike at all. -To be able to draw conclusions more runs should be made. -SLA set to 10 ms. The SLA value is used as a reference, it has not -been defined by OPNFV. - -TC005 ------ -The IO read bandwidth looks similar between different dates, with an -average between approx. 170 and 200 MB/s. Within each test run the results -vary, with a minimum 2 MB/s and maximum 838 MB/s on the totality. Most runs -have a minimum BW of 3 MB/s (two runs at 2 MB/s). The maximum BW varies more in -absolute numbers between the dates, between 617 and 838 MB/s. -SLA set to 400 MB/s. The SLA value is used as a reference, it has not been -defined by OPNFV. - -TC010 ------ -The measurements for memory latency are similar between test dates and result -in approx. 1.2 ns. The variations within each test run are similar, between -1.215 and 1.219 ns. One exception is February 16, where the average is 1.222 -and varies between 1.22 and 1.28 ns. -SLA set to 30 ns. The SLA value is used as a reference, it has not been defined -by OPNFV. - -TC011 ------ -Packet delay variation between 2 VMs on different blades is measured using -Iperf3. On the first date the reported packet delay variation varies between -0.0025 and 0.011 ms, with an average delay variation of 0.0067 ms. -On the second date the delay variation varies between 0.002 and 0.006 ms, with -an average delay variation of 0.004 ms. - -TC012 ------ -Between test dates, the average measurements for memory bandwidth vary between -17.4 and 17.9 GB/s. Within each test run the results vary more, with a minimal -BW of 16.4 GB/s and maximum of 18.2 GB/s on the totality. -SLA set to 15 GB/s. The SLA value is used as a reference, it has not been -defined by OPNFV. - -TC014 ------ -The Unixbench processor test run results vary between scores 3080 and 3240, -one result each date. The average score on the total is 3150. -No SLA set. - -TC037 ------ -The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs -on different blades are measured when increasing the amount of UDP flows sent -between the VMs using pktgen as packet generator tool. - -Round trip times and packet throughput between VMs can typically be affected by -the amount of flows set up and result in higher RTT and less PPS throughput. - -The RTT results are similar throughout the different test dates and runs at -approx. 15 ms. Some test runs show an increase with many flows, in the range -towards 16 to 17 ms. One exception standing out is Feb. 15 where the average -RTT is stable at approx. 13 ms. The PPS results are not as consistent as the -RTT results. -In some test runs when running with less than approx. 
10000 flows the PPS -throughput is normally flatter compared to when running with more flows, after -which the PPS throughput decreases. Around 20 percent decrease in the worst -case. For the other test runs there is however no significant change to the PPS -throughput when the number of flows are increased. In some test runs the PPS -is also greater with 1000000 flows compared to other test runs where the PPS -result is less with only 2 flows. - -The average PPS throughput in the different runs varies between 414000 and -452000 PPS. The total amount of packets in each test run is approx. 7500000 to -8200000 packets. One test run Feb. 15 sticks out with a PPS average of -558000 and approx. 1100000 packets in total (same as the on mentioned earlier -for RTT results). - -There are lost packets reported in most of the test runs. There is no observed -correlation between the amount of flows and the amount of lost packets. -The lost amount of packets normally range between 100 and 1000 per test run, -but there are spikes in the range of 10000 lost packets as well, and even -more in a rare cases. - -CPU utilization statistics are collected during UDP flows sent between the VMs -using pktgen as packet generator tool. The average measurements for CPU -utilization ratio vary between 1% to 2%. The peak of CPU utilization ratio -appears around 7%. - -TC069 -Between test dates, the average measurements for memory bandwidth vary between -15.5 and 25.4 GB/s. Within each test run the results vary more, with a minimal -BW of 9.7 GB/s and maximum of 29.5 GB/s on the totality. -SLA set to 6 GB/s. The SLA value is used as a reference, it has not been -defined by OPNFV. - - -TC070 -The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs -on different blades are measured when increasing the amount of UDP flows sent -between the VMs using pktgen as packet generator tool. - -Round trip times and packet throughput between VMs can typically be affected by -the amount of flows set up and result in higher RTT and less PPS throughput. - -The RTT results are similar throughout the different test dates and runs at -approx. 15 ms. Some test runs show an increase with many flows, in the range -towards 16 to 17 ms. One exception standing out is Feb. 15 where the average -RTT is stable at approx. 13 ms. The PPS results are not as consistent as the -RTT results. -In some test runs when running with less than approx. 10000 flows the PPS -throughput is normally flatter compared to when running with more flows, after -which the PPS throughput decreases. Around 20 percent decrease in the worst -case. For the other test runs there is however no significant change to the PPS -throughput when the number of flows are increased. In some test runs the PPS -is also greater with 1000000 flows compared to other test runs where the PPS -result is less with only 2 flows. - -The average PPS throughput in the different runs varies between 414000 and -452000 PPS. The total amount of packets in each test run is approx. 7500000 to -8200000 packets. One test run Feb. 15 sticks out with a PPS average of -558000 and approx. 1100000 packets in total (same as the on mentioned earlier -for RTT results). - -There are lost packets reported in most of the test runs. There is no observed -correlation between the amount of flows and the amount of lost packets. 
-The lost amount of packets normally range between 100 and 1000 per test run, -but there are spikes in the range of 10000 lost packets as well, and even -more in a rare cases. - -Memory utilization statistics are collected during UDP flows sent between the -VMs using pktgen as packet generator tool. The average measurements for memory -utilization vary between 225MB to 246MB. The peak of memory utilization appears -around 340MB. - -TC071 -The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs -on different blades are measured when increasing the amount of UDP flows sent -between the VMs using pktgen as packet generator tool. - -Round trip times and packet throughput between VMs can typically be affected by -the amount of flows set up and result in higher RTT and less PPS throughput. - -The RTT results are similar throughout the different test dates and runs at -approx. 15 ms. Some test runs show an increase with many flows, in the range -towards 16 to 17 ms. One exception standing out is Feb. 15 where the average -RTT is stable at approx. 13 ms. The PPS results are not as consistent as the -RTT results. -In some test runs when running with less than approx. 10000 flows the PPS -throughput is normally flatter compared to when running with more flows, after -which the PPS throughput decreases. Around 20 percent decrease in the worst -case. For the other test runs there is however no significant change to the PPS -throughput when the number of flows are increased. In some test runs the PPS -is also greater with 1000000 flows compared to other test runs where the PPS -result is less with only 2 flows. - -The average PPS throughput in the different runs varies between 414000 and -452000 PPS. The total amount of packets in each test run is approx. 7500000 to -8200000 packets. One test run Feb. 15 sticks out with a PPS average of -558000 and approx. 1100000 packets in total (same as the on mentioned earlier -for RTT results). - -There are lost packets reported in most of the test runs. There is no observed -correlation between the amount of flows and the amount of lost packets. -The lost amount of packets normally range between 100 and 1000 per test run, -but there are spikes in the range of 10000 lost packets as well, and even -more in a rare cases. - -Cache utilization statistics are collected during UDP flows sent between the -VMs using pktgen as packet generator tool. The average measurements for cache -utilization vary between 205MB to 212MB. - -TC072 -The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs -on different blades are measured when increasing the amount of UDP flows sent -between the VMs using pktgen as packet generator tool. - -Round trip times and packet throughput between VMs can typically be affected by -the amount of flows set up and result in higher RTT and less PPS throughput. - -The RTT results are similar throughout the different test dates and runs at -approx. 15 ms. Some test runs show an increase with many flows, in the range -towards 16 to 17 ms. One exception standing out is Feb. 15 where the average -RTT is stable at approx. 13 ms. The PPS results are not as consistent as the -RTT results. -In some test runs when running with less than approx. 10000 flows the PPS -throughput is normally flatter compared to when running with more flows, after -which the PPS throughput decreases. Around 20 percent decrease in the worst -case. 
For the other test runs there is however no significant change to the PPS -throughput when the number of flows are increased. In some test runs the PPS -is also greater with 1000000 flows compared to other test runs where the PPS -result is less with only 2 flows. - -The average PPS throughput in the different runs varies between 414000 and -452000 PPS. The total amount of packets in each test run is approx. 7500000 to -8200000 packets. One test run Feb. 15 sticks out with a PPS average of -558000 and approx. 1100000 packets in total (same as the on mentioned earlier -for RTT results). - -There are lost packets reported in most of the test runs. There is no observed -correlation between the amount of flows and the amount of lost packets. -The lost amount of packets normally range between 100 and 1000 per test run, -but there are spikes in the range of 10000 lost packets as well, and even -more in a rare cases. - -Network utilization statistics are collected during UDP flows sent between the -VMs using pktgen as packet generator tool. Total number of packets received per -second was average on 200 kpps and total number of packets transmitted per -second was average on 600 kpps. - -Detailed test results ---------------------- -The scenario was run on Ericsson POD2_ and LF POD2_ with: -Fuel 9.0 -OpenStack Mitaka -OpenVirtualSwitch 2.5.90 -OpenDayLight Beryllium - -Rationale for decisions ------------------------ -Pass - -Tests were successfully executed and metrics collected. -No SLA was verified. To be decided on in next release of OPNFV. - -Conclusions and recommendations -------------------------------- -The pktgen test configuration has a relatively large base effect on RTT in -TC037 compared to TC002, where there is no background load at all. Approx. -15 ms compared to approx. 0.5 ms, which is more than a 3000 percentage -difference in RTT results. -Especially RTT and throughput come out with better results than for instance -the *fuel-os-nosdn-nofeature-ha* scenario does. The reason for this should -probably be further analyzed and understood. Also of interest could be -to make further analyzes to find patterns and reasons for lost traffic. -Also of interest could be to see if there are continuous variations where -some test cases stand out with better or worse results than the general test -case. - diff --git a/docs/results/fuel-os-odl_l3-nofeature-ha.rst b/docs/results/fuel-os-odl_l3-nofeature-ha.rst deleted file mode 100644 index 7c9c377cb..000000000 --- a/docs/results/fuel-os-odl_l3-nofeature-ha.rst +++ /dev/null @@ -1,38 +0,0 @@ -.. This work is licensed under a Creative Commons Attribution 4.0 International -.. License. -.. http://creativecommons.org/licenses/by/4.0 - - -============================================ -Test Results for fuel-os-odl_l3-nofeature-ha -============================================ - -.. toctree:: - :maxdepth: 2 - - -Details -======= - -.. after this doc is filled, remove all comments and include the scenario in -.. results.rst by removing the comment on the file name. - - -Overview of test results ------------------------- - -.. general on metrics collected, number of iterations - -Detailed test results ---------------------- - -.. info on lab, installer, scenario - -Rationale for decisions ------------------------ -.. result analysis, pass/fail - -Conclusions and recommendations -------------------------------- - -.. did the expected behavior occured? 
diff --git a/docs/results/joid-os-onos-sfc-ha.rst b/docs/results/joid-os-onos-sfc-ha.rst deleted file mode 100644 index 3d80d38ef..000000000 --- a/docs/results/joid-os-onos-sfc-ha.rst +++ /dev/null @@ -1,36 +0,0 @@ -.. This work is licensed under a Creative Commons Attribution 4.0 International -.. License. -.. http://creativecommons.org/licenses/by/4.0 - - -===================================== -Test Results for joid-os-onos-sfc-ha -===================================== - -.. toctree:: - :maxdepth: 2 - - -Details -======= - -.. after this doc is filled, remove all comments and include the scenario in -.. results.rst by removing the comment on the file name. - - -Overview of test results ------------------------- - -.. general on metrics collected, number of iterations - -Detailed test results ---------------------- - -.. info on lab, installer, scenario - -Rationale for decisions ------------------------ -.. result analysis, pass/fail - -Conclusions and recommendations -------------------------------- diff --git a/docs/results/os-nosdn-kvm-ha.rst b/docs/results/os-nosdn-kvm-ha.rst new file mode 100644 index 000000000..a8a56f80e --- /dev/null +++ b/docs/results/os-nosdn-kvm-ha.rst @@ -0,0 +1,270 @@ +.. This work is licensed under a Creative Commons Attribution 4.0 International +.. License. +.. http://creativecommons.org/licenses/by/4.0 + + +================================ +Test Results for os-nosdn-kvm-ha +================================ + +.. toctree:: + :maxdepth: 2 + + +fuel +==== + +.. _Grafana: http://testresults.opnfv.org/grafana/dashboard/db/yardstick-main +.. _POD2: https://wiki.opnfv.org/pharos?&#community_test_labs + +Overview of test results +------------------------ + +See Grafana_ for viewing test result metrics for each respective test case. It +is possible to chose which specific scenarios to look at, and then to zoom in +on the details of each run test scenario as well. + +All of the test case results below are based on 4 scenario test +runs, each run on the Ericsson POD2_ or LF POD2_ between August 24 and 30 in +2016. + +TC002 +----- +The round-trip-time (RTT) between 2 VMs on different blades is measured using +ping. Most test run measurements result on average between 0.44 and 0.75 ms. +A few runs start with a 0.65 - 0.68 ms RTT spike (This could be because of +normal ARP handling). One test run has a greater RTT spike of 1.49 ms. +To be able to draw conclusions more runs should be made. SLA set to 10 ms. +The SLA value is used as a reference, it has not been defined by OPNFV. + +TC005 +----- +The IO read bandwidth looks similar between different dates, with an +average between approx. 92 and 204 MB/s. Within each test run the results +vary, with a minimum 2 MB/s and maximum 819 MB/s on the totality. Most runs +have a minimum BW of 3 MB/s (one run at 2 MB/s). The maximum BW varies more in +absolute numbers between the dates, between 238 and 819 MB/s. +SLA set to 400 MB/s. The SLA value is used as a reference, it has not been +defined by OPNFV. + +TC010 +----- +The measurements for memory latency are similar between test dates and result +in approx. 2.07 ns. The variations within each test run are similar, between +1.41 and 3.53 ns. +SLA set to 30 ns. The SLA value is used as a reference, it has not been defined +by OPNFV. + +TC011 +----- +Packet delay variation between 2 VMs on different blades is measured using +Iperf3. The reported packet delay variation varies between 0.0051 and 0.0243 ms, +with an average delay variation between 0.0081 ms and 0.0195 ms. 
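For reference, a delay-variation figure of this kind can also be collected by hand with a plain iperf3 UDP run between the two VMs. This is only a sketch, not the exact Yardstick invocation; the server address, rate and duration below are illustrative::

    # On the server VM
    iperf3 -s
    # On the client VM: 20 s UDP stream at 20 Mbit/s; the end-of-run summary
    # includes "Jitter", which corresponds to the packet delay variation
    # summarized above
    iperf3 -c 10.0.0.5 -u -b 20M -t 20
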
+ +TC012 +----- +Between test dates, the average measurements for memory bandwidth result in +approx. 13.6 GB/s. Within each test run the results vary more, with a minimal +BW of 6.09 GB/s and maximum of 16.47 GB/s on the totality. +SLA set to 15 GB/s. The SLA value is used as a reference, it has not been +defined by OPNFV. + +TC014 +----- +The Unixbench processor test run results vary between scores 2316 and 3619, +one result each date. +No SLA set. + +TC037 +----- +The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs +on different blades are measured when increasing the amount of UDP flows sent +between the VMs using pktgen as packet generator tool. + +Round trip times and packet throughput between VMs can typically be affected by +the amount of flows set up and result in higher RTT and less PPS throughput. + +The RTT results are similar throughout the different test dates and runs at +approx. 15 ms. Some test runs show an increase with many flows, in the range +towards 16 to 17 ms. One exception standing out is Feb. 15 where the average +RTT is stable at approx. 13 ms. The PPS results are not as consistent as the +RTT results. +In some test runs when running with less than approx. 10000 flows the PPS +throughput is normally flatter compared to when running with more flows, after +which the PPS throughput decreases. Around 20 percent decrease in the worst +case. For the other test runs there is however no significant change to the PPS +throughput when the number of flows are increased. In some test runs the PPS +is also greater with 1000000 flows compared to other test runs where the PPS +result is less with only 2 flows. + +The average PPS throughput in the different runs varies between 414000 and +452000 PPS. The total amount of packets in each test run is approx. 7500000 to +8200000 packets. One test run Feb. 15 sticks out with a PPS average of +558000 and approx. 1100000 packets in total (same as the on mentioned earlier +for RTT results). + +There are lost packets reported in most of the test runs. There is no observed +correlation between the amount of flows and the amount of lost packets. +The lost amount of packets normally range between 100 and 1000 per test run, +but there are spikes in the range of 10000 lost packets as well, and even +more in a rare cases. + +CPU utilization statistics are collected during UDP flows sent between the VMs +using pktgen as packet generator tool. The average measurements for CPU +utilization ratio vary between 1% to 2%. The peak of CPU utilization ratio +appears around 7%. + +TC069 +----- +Between test dates, the average measurements for memory bandwidth vary between +22.6 and 29.1 GB/s. Within each test run the results vary more, with a minimal +BW of 20.0 GB/s and maximum of 29.5 GB/s on the totality. +SLA set to 6 GB/s. The SLA value is used as a reference, it has not been +defined by OPNFV. + + +TC070 +----- +The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs +on different blades are measured when increasing the amount of UDP flows sent +between the VMs using pktgen as packet generator tool. + +Round trip times and packet throughput between VMs can typically be affected by +the amount of flows set up and result in higher RTT and less PPS throughput. + +The RTT results are similar throughout the different test dates and runs at +approx. 15 ms. Some test runs show an increase with many flows, in the range +towards 16 to 17 ms. One exception standing out is Feb. 
15 where the average +RTT is stable at approx. 13 ms. The PPS results are not as consistent as the +RTT results. +In some test runs when running with less than approx. 10000 flows the PPS +throughput is normally flatter compared to when running with more flows, after +which the PPS throughput decreases. Around 20 percent decrease in the worst +case. For the other test runs there is however no significant change to the PPS +throughput when the number of flows are increased. In some test runs the PPS +is also greater with 1000000 flows compared to other test runs where the PPS +result is less with only 2 flows. + +The average PPS throughput in the different runs varies between 414000 and +452000 PPS. The total amount of packets in each test run is approx. 7500000 to +8200000 packets. One test run Feb. 15 sticks out with a PPS average of +558000 and approx. 1100000 packets in total (same as the on mentioned earlier +for RTT results). + +There are lost packets reported in most of the test runs. There is no observed +correlation between the amount of flows and the amount of lost packets. +The lost amount of packets normally range between 100 and 1000 per test run, +but there are spikes in the range of 10000 lost packets as well, and even +more in a rare cases. + +Memory utilization statistics are collected during UDP flows sent between the +VMs using pktgen as packet generator tool. The average measurements for memory +utilization vary between 225MB to 246MB. The peak of memory utilization appears +around 340MB. + +TC071 +----- +The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs +on different blades are measured when increasing the amount of UDP flows sent +between the VMs using pktgen as packet generator tool. + +Round trip times and packet throughput between VMs can typically be affected by +the amount of flows set up and result in higher RTT and less PPS throughput. + +The RTT results are similar throughout the different test dates and runs at +approx. 15 ms. Some test runs show an increase with many flows, in the range +towards 16 to 17 ms. One exception standing out is Feb. 15 where the average +RTT is stable at approx. 13 ms. The PPS results are not as consistent as the +RTT results. +In some test runs when running with less than approx. 10000 flows the PPS +throughput is normally flatter compared to when running with more flows, after +which the PPS throughput decreases. Around 20 percent decrease in the worst +case. For the other test runs there is however no significant change to the PPS +throughput when the number of flows are increased. In some test runs the PPS +is also greater with 1000000 flows compared to other test runs where the PPS +result is less with only 2 flows. + +The average PPS throughput in the different runs varies between 414000 and +452000 PPS. The total amount of packets in each test run is approx. 7500000 to +8200000 packets. One test run Feb. 15 sticks out with a PPS average of +558000 and approx. 1100000 packets in total (same as the on mentioned earlier +for RTT results). + +There are lost packets reported in most of the test runs. There is no observed +correlation between the amount of flows and the amount of lost packets. +The lost amount of packets normally range between 100 and 1000 per test run, +but there are spikes in the range of 10000 lost packets as well, and even +more in a rare cases. + +Cache utilization statistics are collected during UDP flows sent between the +VMs using pktgen as packet generator tool. 
The average measurements for cache +utilization vary between 205MB to 212MB. + +TC072 +----- +The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs +on different blades are measured when increasing the amount of UDP flows sent +between the VMs using pktgen as packet generator tool. + +Round trip times and packet throughput between VMs can typically be affected by +the amount of flows set up and result in higher RTT and less PPS throughput. + +The RTT results are similar throughout the different test dates and runs at +approx. 15 ms. Some test runs show an increase with many flows, in the range +towards 16 to 17 ms. One exception standing out is Feb. 15 where the average +RTT is stable at approx. 13 ms. The PPS results are not as consistent as the +RTT results. +In some test runs when running with less than approx. 10000 flows the PPS +throughput is normally flatter compared to when running with more flows, after +which the PPS throughput decreases. Around 20 percent decrease in the worst +case. For the other test runs there is however no significant change to the PPS +throughput when the number of flows are increased. In some test runs the PPS +is also greater with 1000000 flows compared to other test runs where the PPS +result is less with only 2 flows. + +The average PPS throughput in the different runs varies between 414000 and +452000 PPS. The total amount of packets in each test run is approx. 7500000 to +8200000 packets. One test run Feb. 15 sticks out with a PPS average of +558000 and approx. 1100000 packets in total (same as the on mentioned earlier +for RTT results). + +There are lost packets reported in most of the test runs. There is no observed +correlation between the amount of flows and the amount of lost packets. +The lost amount of packets normally range between 100 and 1000 per test run, +but there are spikes in the range of 10000 lost packets as well, and even +more in a rare cases. + +Network utilization statistics are collected during UDP flows sent between the +VMs using pktgen as packet generator tool. Total number of packets received per +second was average on 200 kpps and total number of packets transmitted per +second was average on 600 kpps. + +Detailed test results +--------------------- +The scenario was run on Ericsson POD2_ and LF POD2_ with: +Fuel 9.0 +OpenStack Mitaka +OpenVirtualSwitch 2.5.90 +OpenDayLight Beryllium + +Rationale for decisions +----------------------- +Pass + +Tests were successfully executed and metrics collected. +No SLA was verified. To be decided on in next release of OPNFV. + +Conclusions and recommendations +------------------------------- +The pktgen test configuration has a relatively large base effect on RTT in +TC037 compared to TC002, where there is no background load at all. Approx. +15 ms compared to approx. 0.5 ms, which is more than a 3000 percentage +difference in RTT results. +Especially RTT and throughput come out with better results than for instance +the *fuel-os-nosdn-nofeature-ha* scenario does. The reason for this should +probably be further analyzed and understood. Also of interest could be +to make further analyzes to find patterns and reasons for lost traffic. +Also of interest could be to see if there are continuous variations where +some test cases stand out with better or worse results than the general test +case. 
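As background for the pktgen observations above, the kind of multi-flow UDP load used in TC037 can be approximated manually with the in-kernel pktgen module. This is a simplified sketch only; the interface name, destination address/MAC and flow counts are illustrative and are not taken from the Yardstick test definition::

    modprobe pktgen
    # Attach the transmit interface to the first pktgen kernel thread
    echo "add_device eth0"           > /proc/net/pktgen/kpktgend_0
    # Packet count, size, destination and number of concurrent UDP flows
    echo "count 1000000"             > /proc/net/pktgen/eth0
    echo "pkt_size 64"               > /proc/net/pktgen/eth0
    echo "dst 10.0.0.5"              > /proc/net/pktgen/eth0
    echo "dst_mac 00:11:22:33:44:55" > /proc/net/pktgen/eth0
    echo "flows 10000"               > /proc/net/pktgen/eth0
    echo "flowlen 32"                > /proc/net/pktgen/eth0
    # Start transmitting; per-device results are read back from /proc/net/pktgen/eth0
    echo "start"                     > /proc/net/pktgen/pgctrl
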
+ diff --git a/docs/results/os-nosdn-nofeature-ha.rst b/docs/results/os-nosdn-nofeature-ha.rst new file mode 100644 index 000000000..9e52731d5 --- /dev/null +++ b/docs/results/os-nosdn-nofeature-ha.rst @@ -0,0 +1,492 @@ +.. This work is licensed under a Creative Commons Attribution 4.0 International +.. License. +.. http://creativecommons.org/licenses/by/4.0 + + +====================================== +Test Results for os-nosdn-nofeature-ha +====================================== + +.. toctree:: + :maxdepth: 2 + + +apex +==== + +.. _Grafana: http://testresults.opnfv.org/grafana/dashboard/db/yardstick-main +.. _POD1: https://wiki.opnfv.org/pharos?&#community_test_labs + + +Overview of test results +------------------------ + +See Grafana_ for viewing test result metrics for each respective test case. It +is possible to chose which specific scenarios to look at, and then to zoom in +on the details of each run test scenario as well. + +All of the test case results below are based on 4 scenario test +runs, each run on the LF POD1_ between August 25 and 28 in +2016. + +TC002 +----- +The round-trip-time (RTT) between 2 VMs on different blades is measured using +ping. Most test run measurements result on average between 0.74 and 1.08 ms. +A few runs start with a 0.99 - 1.07 ms RTT spike (This could be because of +normal ARP handling). One test run has a greater RTT spike of 1.35 ms. +To be able to draw conclusions more runs should be made. SLA set to 10 ms. +The SLA value is used as a reference, it has not been defined by OPNFV. + +TC005 +----- +The IO read bandwidth looks similar between different dates, with an +average between approx. 128 and 136 MB/s. Within each test run the results +vary, with a minimum 5 MB/s and maximum 446 MB/s on the totality. Most runs +have a minimum BW of 5 MB/s (one run at 6 MB/s). The maximum BW varies more in +absolute numbers between the dates, between 416 and 446 MB/s. +SLA set to 400 MB/s. The SLA value is used as a reference, it has not been +defined by OPNFV. + +TC010 +----- +The measurements for memory latency are similar between test dates and result +in approx. 1.09 ns. The variations within each test run are similar, between +1.0860 and 1.0880 ns. +SLA set to 30 ns. The SLA value is used as a reference, it has not been defined +by OPNFV. + +TC011 +----- +Packet delay variation between 2 VMs on different blades is measured using +Iperf3. The reported packet delay variation varies between 0.0025 and 0.0148 ms, +with an average delay variation between 0.0056 ms and 0.0157 ms. + +TC012 +----- +Between test dates, the average measurements for memory bandwidth result in +approx. 19.70 GB/s. Within each test run the results vary more, with a minimal +BW of 18.16 GB/s and maximum of 20.13 GB/s on the totality. +SLA set to 15 GB/s. The SLA value is used as a reference, it has not been +defined by OPNFV. + +TC014 +----- +The Unixbench processor test run results vary between scores 3224.4 and 3842.8, +one result each date. The average score on the total is 3659.5. +No SLA set. + +TC037 +----- +The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs +on different blades are measured when increasing the amount of UDP flows sent +between the VMs using pktgen as packet generator tool. + +Round trip times and packet throughput between VMs can typically be affected by +the amount of flows set up and result in higher RTT and less PPS throughput. + +The RTT results are similar throughout the different test dates and runs at +approx. 15 ms. 
Some test runs show an increase with many flows, in the range +towards 16 to 17 ms. One exception standing out is Feb. 15 where the average +RTT is stable at approx. 13 ms. The PPS results are not as consistent as the +RTT results. +In some test runs when running with less than approx. 10000 flows the PPS +throughput is normally flatter compared to when running with more flows, after +which the PPS throughput decreases. Around 20 percent decrease in the worst +case. For the other test runs there is however no significant change to the PPS +throughput when the number of flows are increased. In some test runs the PPS +is also greater with 1000000 flows compared to other test runs where the PPS +result is less with only 2 flows. + +The average PPS throughput in the different runs varies between 414000 and +452000 PPS. The total amount of packets in each test run is approx. 7500000 to +8200000 packets. One test run Feb. 15 sticks out with a PPS average of +558000 and approx. 1100000 packets in total (same as the on mentioned earlier +for RTT results). + +There are lost packets reported in most of the test runs. There is no observed +correlation between the amount of flows and the amount of lost packets. +The lost amount of packets normally range between 100 and 1000 per test run, +but there are spikes in the range of 10000 lost packets as well, and even +more in a rare cases. + +CPU utilization statistics are collected during UDP flows sent between the VMs +using pktgen as packet generator tool. The average measurements for CPU +utilization ratio vary between 1% to 2%. The peak of CPU utilization ratio +appears around 7%. + +TC069 +----- +Between test dates, the average measurements for memory bandwidth vary between +22.6 and 29.1 GB/s. Within each test run the results vary more, with a minimal +BW of 20.0 GB/s and maximum of 29.5 GB/s on the totality. +SLA set to 6 GB/s. The SLA value is used as a reference, it has not been +defined by OPNFV. + +TC070 +----- +The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs +on different blades are measured when increasing the amount of UDP flows sent +between the VMs using pktgen as packet generator tool. + +Round trip times and packet throughput between VMs can typically be affected by +the amount of flows set up and result in higher RTT and less PPS throughput. + +The RTT results are similar throughout the different test dates and runs at +approx. 15 ms. Some test runs show an increase with many flows, in the range +towards 16 to 17 ms. One exception standing out is Feb. 15 where the average +RTT is stable at approx. 13 ms. The PPS results are not as consistent as the +RTT results. +In some test runs when running with less than approx. 10000 flows the PPS +throughput is normally flatter compared to when running with more flows, after +which the PPS throughput decreases. Around 20 percent decrease in the worst +case. For the other test runs there is however no significant change to the PPS +throughput when the number of flows are increased. In some test runs the PPS +is also greater with 1000000 flows compared to other test runs where the PPS +result is less with only 2 flows. + +The average PPS throughput in the different runs varies between 414000 and +452000 PPS. The total amount of packets in each test run is approx. 7500000 to +8200000 packets. One test run Feb. 15 sticks out with a PPS average of +558000 and approx. 1100000 packets in total (same as the on mentioned earlier +for RTT results). 
+ +There are lost packets reported in most of the test runs. There is no observed +correlation between the amount of flows and the amount of lost packets. +The lost amount of packets normally range between 100 and 1000 per test run, +but there are spikes in the range of 10000 lost packets as well, and even +more in a rare cases. + +Memory utilization statistics are collected during UDP flows sent between the +VMs using pktgen as packet generator tool. The average measurements for memory +utilization vary between 225MB to 246MB. The peak of memory utilization appears +around 340MB. + +TC071 +----- +The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs +on different blades are measured when increasing the amount of UDP flows sent +between the VMs using pktgen as packet generator tool. + +Round trip times and packet throughput between VMs can typically be affected by +the amount of flows set up and result in higher RTT and less PPS throughput. + +The RTT results are similar throughout the different test dates and runs at +approx. 15 ms. Some test runs show an increase with many flows, in the range +towards 16 to 17 ms. One exception standing out is Feb. 15 where the average +RTT is stable at approx. 13 ms. The PPS results are not as consistent as the +RTT results. +In some test runs when running with less than approx. 10000 flows the PPS +throughput is normally flatter compared to when running with more flows, after +which the PPS throughput decreases. Around 20 percent decrease in the worst +case. For the other test runs there is however no significant change to the PPS +throughput when the number of flows are increased. In some test runs the PPS +is also greater with 1000000 flows compared to other test runs where the PPS +result is less with only 2 flows. + +The average PPS throughput in the different runs varies between 414000 and +452000 PPS. The total amount of packets in each test run is approx. 7500000 to +8200000 packets. One test run Feb. 15 sticks out with a PPS average of +558000 and approx. 1100000 packets in total (same as the on mentioned earlier +for RTT results). + +There are lost packets reported in most of the test runs. There is no observed +correlation between the amount of flows and the amount of lost packets. +The lost amount of packets normally range between 100 and 1000 per test run, +but there are spikes in the range of 10000 lost packets as well, and even +more in a rare cases. + +Cache utilization statistics are collected during UDP flows sent between the +VMs using pktgen as packet generator tool. The average measurements for cache +utilization vary between 205MB to 212MB. + +TC072 +----- +The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs +on different blades are measured when increasing the amount of UDP flows sent +between the VMs using pktgen as packet generator tool. + +Round trip times and packet throughput between VMs can typically be affected by +the amount of flows set up and result in higher RTT and less PPS throughput. + +The RTT results are similar throughout the different test dates and runs at +approx. 15 ms. Some test runs show an increase with many flows, in the range +towards 16 to 17 ms. One exception standing out is Feb. 15 where the average +RTT is stable at approx. 13 ms. The PPS results are not as consistent as the +RTT results. +In some test runs when running with less than approx. 
10000 flows the PPS +throughput is normally flatter compared to when running with more flows, after +which the PPS throughput decreases. Around 20 percent decrease in the worst +case. For the other test runs there is however no significant change to the PPS +throughput when the number of flows are increased. In some test runs the PPS +is also greater with 1000000 flows compared to other test runs where the PPS +result is less with only 2 flows. + +The average PPS throughput in the different runs varies between 414000 and +452000 PPS. The total amount of packets in each test run is approx. 7500000 to +8200000 packets. One test run Feb. 15 sticks out with a PPS average of +558000 and approx. 1100000 packets in total (same as the on mentioned earlier +for RTT results). + +There are lost packets reported in most of the test runs. There is no observed +correlation between the amount of flows and the amount of lost packets. +The lost amount of packets normally range between 100 and 1000 per test run, +but there are spikes in the range of 10000 lost packets as well, and even +more in a rare cases. + +Network utilization statistics are collected during UDP flows sent between the +VMs using pktgen as packet generator tool. Total number of packets received per +second was average on 200 kpps and total number of packets transmitted per +second was average on 600 kpps. + +Detailed test results +--------------------- +The scenario was run on LF POD1_ with: +Apex +OpenStack Mitaka +OpenVirtualSwitch 2.5.90 +OpenDayLight Beryllium + +Rationale for decisions +----------------------- +Pass + +Tests were successfully executed and metrics collected. +No SLA was verified. To be decided on in next release of OPNFV. + + +Joid +==== + +.. _Grafana: http://testresults.opnfv.org/grafana/dashboard/db/yardstick-main +.. _POD5: https://wiki.opnfv.org/pharos?&#community_test_labs + + +Overview of test results +------------------------ + +See Grafana_ for viewing test result metrics for each respective test case. It +is possible to chose which specific scenarios to look at, and then to zoom in +on the details of each run test scenario as well. + +All of the test case results below are based on 4 scenario test runs, each run +on the Intel POD5_ between September 11 and 14 in 2016. + +TC002 +----- +The round-trip-time (RTT) between 2 VMs on different blades is measured using +ping. Most test run measurements result on average between 1.59 and 1.70 ms. +Two test runs have reached the same greater RTT spike of 3.06 ms, which are +1.66 and 1.70 ms average, but only one has the lower RTT of 1.35 ms. The other +two runs have no similar spike at all. To be able to draw conclusions more runs +should be made. SLA set to be 10 ms. The SLA value is used as a reference, it +has not been defined by OPNFV. + +TC005 +----- +The IO read bandwidth actually refers to the storage throughput and the +greatest IO read bandwidth of the four runs is 173.3 MB/s. The IO read +bandwidth of the four runs looks similar on different four days, with an +average between 32.7 and 60.4 MB/s. One of the runs has a minimum BW of 429 +KM/s and other has a maximum BW of 173.3 MB/s. The SLA of read bandwidth sets +to be 400 MB/s, which is used as a reference, and it has not been defined by +OPNFV. + +TC010 +----- +The tool we use to measure memory read latency is lmbench, which is a series of +micro benchmarks intended to measure basic operating system and hardware system +metrics. The memory read latency of the four runs is 1.1 ns on average. 
The variations within each test run differ: some cover a large range while
+others change only slightly. For example, the largest change is on September
+14, where the memory read latency ranges from 1.12 ns to 1.22 ns. However,
+the results on September 12 change very little, ranging from 1.14 ns to
+1.17 ns. The SLA is set to 30 ns. The SLA value is used as a reference, it has
+not been defined by OPNFV.
+
+TC011
+-----
+Iperf3 is a tool for evaluating the packet delay variation between 2 VMs on
+different blades. The reported packet delay variations of the four test runs
+differ from each other. The results on September 13 look similar within the
+date and the values are between 0.0087 and 0.0190 ms, which is 0.0126 ms on
+average. However, on the fourth day, the packet delay variation changes widely
+within the date, ranging from 0.0032 ms to 0.0121 ms, and has the minimum
+average value. The packet delay variations of the other two test runs look
+relatively similar, at 0.0076 ms and 0.0152 ms on average. The SLA value is
+set to 10 ms. The SLA value is used as a reference, it has not been defined
+by OPNFV.
+
+TC012
+-----
+Lmbench is also used to measure the memory read and write bandwidth, in which
+we use bw_mem to obtain the results. Among the four test runs, the memory
+bandwidth of the second day stays almost stable at 11.58 GB/s on average, and
+the memory bandwidth of the fourth day looks similar to that of the second
+day, both of which remain stable. The other two test runs vary over a wider
+range, in which the minimum memory bandwidth is 11.22 GB/s and the maximum
+bandwidth is 16.65 GB/s, with an average bandwidth of about 12.20 GB/s. Here
+the SLA is set to 15 GB/s. The SLA value is used as a reference, it has not
+been defined by OPNFV.
+
+TC014
+-----
+The Unixbench is used to measure processing speed, that is, instructions per
+second. It can be seen from the dashboard that the processing test results
+vary from scores 3272 to 3444, with only one result per date. The overall
+average score is 3371. No SLA set.
+
+TC037
+-----
+The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
+on different blades are measured when increasing the amount of UDP flows sent
+between the VMs using pktgen as packet generator tool.
+
+Round trip times and packet throughput between VMs can typically be affected by
+the amount of flows set up and result in higher RTT and less PPS throughput.
+
+The mean packet throughput of the four test runs is 119.85, 128.02, 121.40 and
+126.08 kpps, of which the result of the second run is the highest. The RTT
+results of all the test runs keep flat at approx. 37 ms. It is obvious that the
+PPS results are not as consistent as the RTT results.
+
+The number of flows in the four test runs is 240 k on average, and the PPS
+results vary somewhat, with the largest packet throughput at 184 kpps and the
+minimum throughput at 49 kpps.
+
+There are no packet receive errors in the four runs, but there are still lost
+packets in all the test runs. The RTT values obtained by ping in the four runs
+have a similar average value of 38 ms, of which the worst RTT is 93 ms on
+Sep. 14th.
+
+CPU load varies a lot across the four test runs, since the minimum value and
+the peak of CPU load are 0 percent and 51 percent respectively. The best
+result is obtained on Sep. 14th.
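+
+The RTT figures quoted above are taken from ping. Purely as an illustration
+(this is not the Yardstick implementation), the sketch below shows how an
+average and a worst-case RTT could be pulled from the summary line printed by
+a Linux ping and compared with the 10 ms SLA reference used in this report;
+the target address is a placeholder::
+
+    import re
+    import subprocess
+
+    SLA_RTT_MS = 10.0        # reference value only, not defined by OPNFV
+    TARGET = "192.168.0.2"   # placeholder address of the other VM
+
+    def ping_rtt_stats(target, count=10):
+        """Return (avg_ms, max_ms) parsed from ping's summary line."""
+        out = subprocess.run(["ping", "-c", str(count), target],
+                             capture_output=True, text=True,
+                             check=True).stdout
+        # Linux ping ends with: rtt min/avg/max/mdev = a/b/c/d ms
+        match = re.search(r"= ([\d.]+)/([\d.]+)/([\d.]+)/([\d.]+) ms", out)
+        _, avg_ms, max_ms, _ = (float(v) for v in match.groups())
+        return avg_ms, max_ms
+
+    avg_ms, worst_ms = ping_rtt_stats(TARGET)
+    verdict = "within" if avg_ms <= SLA_RTT_MS else "above"
+    print("avg %.2f ms, worst %.2f ms (%s the 10 ms reference)"
+          % (avg_ms, worst_ms, verdict))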
+
+TC069
+-----
+With the block size changing from 1 kb to 512 kb, the memory write bandwidth
+tends to become larger first and then smaller within every test run, ranging
+from 22.3 GB/s to 26.8 GB/s and then down to 18.5 GB/s on average. Since the
+test id is one, only the INT memory write bandwidth is tested. On the whole,
+when the block size is 8 kb and 16 kb, the memory write bandwidth looks
+similar, with a minimal BW of 22.5 GB/s and a peak value of 28.7 GB/s. The SLA
+is set to 7 GB/s. The SLA value is used as a reference, it has not been
+defined by OPNFV.
+
+TC070
+-----
+The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
+on different blades are measured when increasing the amount of UDP flows sent
+between the VMs using pktgen as packet generator tool.
+
+Round trip times and packet throughput between VMs can typically be affected by
+the amount of flows set up and result in higher RTT and less PPS throughput.
+
+The network latency is measured by ping, and the results of the four test runs
+look similar to each other. Within each test run, the maximum RTT can reach
+more than 80 ms and the average RTT is usually approx. 38 ms. On the whole, the
+average RTTs of the four runs keep flat.
+
+Memory utilization is measured by free, which can display the amount of free
+and used memory in the system. The largest amount of used memory is 268 MiB on
+Sep. 14, which also has the largest minimum used memory. The other three test
+runs have similar used memory. On the other hand, the free memory of the four
+runs has the same smallest minimum value, about 223 MiB, and the maximum free
+memory of three runs is similar, about 337 MiB, except on Sep. 14th, where the
+maximum free memory is 254 MiB. On the whole, all the test runs have similar
+average free memory.
+
+Network throughput and packet loss can be measured by pktgen, which is a tool
+for generating traffic loads for network experiments. The mean network
+throughput of the four test runs differs, ranging from 119.85 kpps to 128.02
+kpps. The average number of flows in these tests is 240000, and each run has a
+minimum number of flows of 2 and a maximum number of flows of 1.001 Mil. At
+the same time, the corresponding packet throughput differs between 49.4k and
+193.3k with an average packet throughput of approx. 125k. On the whole, the
+PPS results seem consistent. Within each of the four runs, the packet
+throughput does not increase as the number of flows becomes larger.
+
+TC071
+-----
+The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
+on different blades are measured when increasing the amount of UDP flows sent
+between the VMs using pktgen as packet generator tool.
+
+Round trip times and packet throughput between VMs can typically be affected by
+the amount of flows set up and result in higher RTT and less PPS throughput.
+
+The network latency is measured by ping, and the results of the four test runs
+look similar to each other. Within each test run, the maximum RTT can reach
+more than 94 ms and the average RTT is usually approx. 35 ms. On the whole, the
+average RTTs of the four runs keep flat.
+
+Cache utilization is measured by cachestat, which can display the size of the
+cache and buffer in the system.
Cache utilization statistics are collected during UDP +flows sent between the VMs using pktgen as packet generator tool.The largest +cache size is 212 MiB in the four runs, and the smallest cache size is 75 MiB. +On the whole, the average cache size of the four runs is approx. 208 MiB. +Meanwhile, the tread of the buffer size looks similar with each other. + +Packet throughput can be measured by pktgen, which is a tool in the network for +generating traffic loads for network experiments. The mean packet throughput of +the four test runs seem quite different, ranging from 119.85 kpps to 128.02 +kpps. The average number of flows in these tests is 239.7k, and each run has a +minimum number of flows of 2 and a maximum number of flows of 1.001 Mil. At the +same time, the corresponding packet throughput differ between 49.4k and 193.3k +with an average packet throughput of approx. 125k. On the whole, the PPS results +seem consistent. Within each test run of the four runs, when number of flows +becomes larger, the packet throughput seems not larger in the meantime. + +TC072 +----- +The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs +on different blades are measured when increasing the amount of UDP flows sent +between the VMs using pktgen as packet generator tool. + +Round trip times and packet throughput between VMs can typically be affected by +the amount of flows set up and result in higher RTT and less PPS throughput. + +The RTT results are similar throughout the different test dates and runs at +approx. 32 ms. The PPS results are not as consistent as the RTT results. + +Network utilization is measured by sar, that is system activity reporter, which +can display the average statistics for the time since the system was started. +Network utilization statistics are collected during UDP flows sent between the +VMs using pktgen as packet generator tool. The largest total number of packets +transmitted per second differs from each other, in which the smallest number of +packets transmitted per second is 6 pps on Sep. 12ed and the largest of that is +210.8 kpps. Meanwhile, the largest total number of packets received per second +differs from each other, in which the smallest number of packets received per +second is 2 pps on Sep. 13rd and the largest of that is 250.2 kpps. + +In some test runs when running with less than approx. 90000 flows the PPS +throughput is normally flatter compared to when running with more flows, after +which the PPS throughput decreases. For the other test runs there is however no +significant change to the PPS throughput when the number of flows are +increased. In some test runs the PPS is also greater with 1000000 flows +compared to other test runs where the PPS result is less with only 2 flows. + +There are lost packets reported in most of the test runs. There is no observed +correlation between the amount of flows and the amount of lost packets. +The lost amount of packets normally differs a lot per test run. + +Detailed test results +--------------------- +The scenario was run on Intel POD5_ with: +Joid +OpenStack Mitaka +OpenVirtualSwitch 2.5.90 +OpenDayLight Beryllium + +Rationale for decisions +----------------------- +Pass + +Conclusions and recommendations +------------------------------- +Tests were successfully executed and metrics collected. +No SLA was verified. To be decided on in next release of OPNFV. 
+ + diff --git a/docs/results/os-nosdn-nofeature-noha.rst b/docs/results/os-nosdn-nofeature-noha.rst new file mode 100644 index 000000000..8b7c184bb --- /dev/null +++ b/docs/results/os-nosdn-nofeature-noha.rst @@ -0,0 +1,259 @@ +.. This work is licensed under a Creative Commons Attribution 4.0 International +.. License. +.. http://creativecommons.org/licenses/by/4.0 + + +======================================== +Test Results for os-nosdn-nofeature-noha +======================================== + +.. toctree:: + :maxdepth: 2 + + +Joid +===== + +.. _Grafana: http://testresults.opnfv.org/grafana/dashboard/db/yardstick-main +.. _POD5: https://wiki.opnfv.org/pharos?&#community_test_labs + +Overview of test results +------------------------ + +See Grafana_ for viewing test result metrics for each respective test case. It +is possible to chose which specific scenarios to look at, and then to zoom in +on the details of each run test scenario as well. + +All of the test case results below are based on 4 scenario test runs, each run +on the Intel POD5_ between September 12 and 15 in 2016. + +TC002 +----- +The round-trip-time (RTT) between 2 VMs on different blades is measured using +ping. Most test run measurements result on average between 1.50 and 1.68 ms. +Only one test run has reached greatest RTT spike of 2.92 ms, which has +the smallest RTT of 1.06 ms. The other three runs have no similar spike at all, +the minimum and average RTTs of which are approx. 1.50 ms and 1.68 ms. SLA set to +be 10 ms. The SLA value is used as a reference, it has not been defined by +OPNFV. + +TC005 +----- +The IO read bandwidth actually refers to the storage throughput, which is +measured by fio and the greatest IO read bandwidth of the four runs is 177.5 +MB/s. The IO read bandwidth of the four runs looks similar on different four +days, with an average between 46.7 and 62.5 MB/s. One of the runs has a minimum +BW of 680 KM/s and other has a maximum BW of 177.5 MB/s. The SLA of read +bandwidth sets to be 400 MB/s, which is used as a reference, and it has not +been defined by OPNFV. + +The results of storage IOPS for the four runs look similar with each other. The +test runs all have an approx. 1.55 K/s for IO reading with an minimum value of +less than 60 times per second. + +TC010 +----- +The tool we use to measure memory read latency is lmbench, which is a series of +micro benchmarks intended to measure basic operating system and hardware system +metrics. The memory read latency of the four runs is between 1.134 ns and 1.227 +ns on average. The variations within each test run are quite different, some +vary from a large range and others have a small change. For example, the +largest change is on September 15, the memory read latency of which is ranging +from 1.116 ns to 1.393 ns. However, the results on September 12 change very +little, which mainly keep flat and range from 1.124 ns to 1.55 ns. The SLA sets +to be 30 ns. The SLA value is used as a reference, it has not been defined by +OPNFV. + +TC011 +----- +Iperf3 is a tool for evaluating the pocket delay variation between 2 VMs on +different blades. The reported pocket delay variations of the four test runs +differ from each other. The results on September 13 within the date look +similar and the values are between 0.0213 and 0.0225 ms, which is 0.0217 ms on +average. However, on the third day, the packet delay variation has a large +wide change within the date, which ranges from 0.008 ms to 0.0225 ms and has +the minimum value. On Sep. 
12, the packet delay is quite long, for the value is
+between 0.0236 and 0.0287 ms and it also has the maximum packet delay of
+0.0287 ms. The packet delay of the last test run is 0.0151 ms on average. The
+SLA value is set to 10 ms. The SLA value is used as a reference, it has not
+been defined by OPNFV.
+
+TC012
+-----
+Lmbench is also used to measure the memory read and write bandwidth, in which
+we use bw_mem to obtain the results. Among the four test runs, the memory
+bandwidth of three test runs stays almost stable within each run, at 11.65,
+11.57 and 11.64 GB/s on average. However, the memory read and write bandwidth
+on Sep. 14 has a large range, for it ranges from 11.36 GB/s to 16.68 GB/s.
+Here the SLA is set to 15 GB/s. The SLA value is used as a reference, it has
+not been defined by OPNFV.
+
+TC014
+-----
+The Unixbench is used to evaluate the IaaS processing speed with regard to the
+scores of single CPU running and parallel running. It can be seen from the
+dashboard that the processing test results vary from scores 3222 to 3585, with
+only one result per date. No SLA set.
+
+TC037
+-----
+The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
+on different blades are measured when increasing the amount of UDP flows sent
+between the VMs using pktgen as packet generator tool.
+
+Round trip times and packet throughput between VMs can typically be affected by
+the amount of flows set up and result in higher RTT and less PPS throughput.
+
+The mean packet throughput of the four test runs is 124.8, 160.1, 113.8 and
+137.3 kpps, of which the result of the second run is the highest. The RTT
+results of all the test runs keep flat at approx. 37 ms. It is obvious that the
+PPS results are not as consistent as the RTT results.
+
+The number of flows in the four test runs is 240 k on average, and the PPS
+results vary somewhat, with the largest packet throughput at 243.1 kpps and
+the minimum throughput at 37.6 kpps.
+
+There are no packet receive errors in the four runs, but there are still lost
+packets in all the test runs. The RTT values obtained by ping in the four runs
+have a similar average value, between 32 ms and 41 ms, of which the worst RTT
+is 155 ms on Sep. 14th.
+
+CPU load is measured by mpstat, and the CPU load of the four test runs looks
+similar, since the minimum value and the peak of CPU load are 0 percent and 9
+percent respectively. The best result is obtained on Sep. 15th, with a CPU
+load of nine percent.
+
+TC069
+-----
+With the block size changing from 1 kb to 512 kb, the memory write bandwidth
+tends to become larger first and then smaller within every test run, ranging
+from 22.4 GB/s to 26.5 GB/s and then down to 18.6 GB/s on average. Since the
+test id is one, only the INT memory write bandwidth is tested. On the whole,
+when the block size is 8 kb and 16 kb, the memory write bandwidth looks
+similar, with a minimal BW of 22.5 GB/s and a peak value of 28.7 GB/s. Then,
+with the block size becoming larger, the memory write bandwidth tends to
+decrease. The SLA is set to 7 GB/s. The SLA value is used as a reference, it
+has not been defined by OPNFV.
+
+TC070
+-----
+The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
+on different blades are measured when increasing the amount of UDP flows sent
+between the VMs using pktgen as packet generator tool.
+ +Round trip times and packet throughput between VMs can typically be affected by +the amount of flows set up and result in higher RTT and less PPS throughput. + +The network latency is measured by ping, and the results of three test runs look +similar with each other, and Within these test runs, the maximum RTT can reach +95 ms and the average RTT is usually approx. 36 ms. The network latency tested +on Sep. 14 shows that it has a peak latency of 155 ms. But on the whole, the +average RTTs of the four runs keep flat. + +Memory utilization is measured by free, which can display amount of free and +used memory in the system. The largest amount of used memory is 270 MiB on Sep +13, which also has the smallest minimum memory utilization. Besides, the rest +three test runs have the similar used memory with an average memory usage of +264 MiB. On the other hand, the free memory of the four runs have the same +smallest minimum value, that is about 223 MiB, and the maximum free memory of +three runs have the similar result, that is 226 MiB, except that on Sep. 13th, +whose maximum free memory is 273 MiB. On the whole, all the test runs have +similar average free memory. + +Network throughput and packet loss can be measured by pktgen, which is a tool +in the network for generating traffic loads for network experiments. The mean +network throughput of the four test runs seem quite different, ranging from +119.85 kpps to 128.02 kpps. The average number of flows in these tests is +240000, and each run has a minimum number of flows of 2 and a maximum number +of flows of 1.001 Mil. At the same time, the corresponding packet throughput +differ between 38k and 243k with an average packet throughput of approx. 134k. +On the whole, the PPS results seem consistent. Within each test run of the four +runs, when number of flows becomes larger, the packet throughput seems not +larger in the meantime. + +TC071 +----- +The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs +on different blades are measured when increasing the amount of UDP flows sent +between the VMs using pktgen as packet generator tool. + +Round trip times and packet throughput between VMs can typically be affected by +the amount of flows set up and result in higher RTT and less PPS throughput. + +The network latency is measured by ping, and the results of the four test runs +look similar with each other. Within each test run, the maximum RTT can reach +79 ms and the average RTT is usually approx. 35 ms. On the whole, the average +RTTs of the four runs keep flat. + +Cache utilization is measured by cachestat, which can display size of cache and +buffer in the system. Cache utilization statistics are collected during UDP +flows sent between the VMs using pktgen as packet generator tool.The largest +cache size is 214 MiB in the four runs, and the smallest cache size is 100 MiB. +On the whole, the average cache size of the four runs is approx. 210 MiB. +Meanwhile, the tread of the buffer size looks similar with each other. On the +other hand, the mean buffer size of the four runs keep flat, since they have a +minimum value of approx. 7 MiB and a maximum value of 8 MiB, with an average +value of about 8 MiB. + +Packet throughput can be measured by pktgen, which is a tool in the network for +generating traffic loads for network experiments. The mean packet throughput of +the four test runs seem quite different, ranging from 113.8 kpps to 124.8 kpps. 
+The average number of flows in these tests is 240k, and each run has a minimum +number of flows of 2 and a maximum number of flows of 1.001 Mil. At the same +time, the corresponding packet throughput differ between 47.6k and 243.1k with +an average packet throughput between 113.8k and 160.1k. Within each test run of +the four runs, when number of flows becomes larger, the packet throughput seems +not larger in the meantime. + +TC072 +----- +The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs +on different blades are measured when increasing the amount of UDP flows sent +between the VMs using pktgen as packet generator tool. + +Round trip times and packet throughput between VMs can typically be affected by +the amount of flows set up and result in higher RTT and less PPS throughput. + +The RTT results are similar throughout the different test dates and runs +between 0 ms and 79 ms with an average leatency of approx. 35 ms. The PPS +results are not as consistent as the RTT results, for the mean packet +throughput of the four runs differ from 113.8 kpps to 124.8 kpps. + +Network utilization is measured by sar, that is system activity reporter, which +can display the average statistics for the time since the system was started. +Network utilization statistics are collected during UDP flows sent between the +VMs using pktgen as packet generator tool. The largest total number of packets +transmitted per second look similar on the first three runs with a minimum +number of 10 pps and a maximum number of 97 kpps, except the one on Sep. 15th, +in which the number of packets transmitted per second is 10 pps. Meanwhile, the +largest total number of packets received per second differs from each other, +in which the smallest number of packets received per second is 1 pps and the +largest of that is 276 kpps. + +In some test runs when running with less than approx. 90000 flows the PPS +throughput is normally flatter compared to when running with more flows, after +which the PPS throughput decreases. For the other test runs there is however no +significant change to the PPS throughput when the number of flows are +increased. In some test runs the PPS is also greater with 1000000 flows +compared to other test runs where the PPS result is less with only 2 flows. + +There are lost packets reported in most of the test runs. There is no observed +correlation between the amount of flows and the amount of lost packets. +The lost amount of packets normally differs a lot per test run. + +Detailed test results +--------------------- +The scenario was run on Intel POD5_ with: +Joid +OpenStack Mitaka +OpenVirtualSwitch 2.5.90 +OpenDayLight Beryllium + +Rationale for decisions +----------------------- +Pass + +Conclusions and recommendations +------------------------------- +Tests were successfully executed and metrics collected. +No SLA was verified. To be decided on in next release of OPNFV. diff --git a/docs/results/os-odl_l2-bgpvpn-ha.rst b/docs/results/os-odl_l2-bgpvpn-ha.rst new file mode 100644 index 000000000..2bd6dc35d --- /dev/null +++ b/docs/results/os-odl_l2-bgpvpn-ha.rst @@ -0,0 +1,53 @@ +.. This work is licensed under a Creative Commons Attribution 4.0 International +.. License. +.. http://creativecommons.org/licenses/by/4.0 + + +==================================== +Test Results for os-odl_l2-bgpvpn-ha +==================================== + +.. toctree:: + :maxdepth: 2 + + +fuel +==== + +.. _Grafana: http://testresults.opnfv.org/grafana/dashboard/db/yardstick-main +.. 
_POD2: https://wiki.opnfv.org/pharos?&#community_test_labs + +Overview of test results +------------------------ + +See Grafana_ for viewing test result metrics for each respective test case. It +is possible to chose which specific scenarios to look at, and then to zoom in +on the details of each run test scenario as well. + +All of the test case results below are based on 4 scenario test runs, each run +on the Ericsson POD2_ between September 7 and 11 in 2016. + +TC043 +----- +The round-trip-time (RTT) between 2 nodes is measured using +ping. Most test run measurements result on average between 0.21 and 0.28 ms. +A few runs start with a 0.32 - 0.35 ms RTT spike (This could be because of +normal ARP handling). To be able to draw conclusions more runs should be made. +SLA set to 10 ms. The SLA value is used as a reference, it has not been defined +by OPNFV. + +Detailed test results +--------------------- +The scenario was run on Ericsson POD2_ with: +Fuel 9.0 +OpenStack Mitaka +OpenVirtualSwitch 2.5.90 +OpenDayLight Beryllium + +Rationale for decisions +----------------------- +Pass + +Tests were successfully executed and metrics collected. +No SLA was verified. To be decided on in next release of OPNFV. + diff --git a/docs/results/os-odl_l2-nofeature-ha.rst b/docs/results/os-odl_l2-nofeature-ha.rst new file mode 100644 index 000000000..6eb6252af --- /dev/null +++ b/docs/results/os-odl_l2-nofeature-ha.rst @@ -0,0 +1,274 @@ +.. This work is licensed under a Creative Commons Attribution 4.0 International +.. License. +.. http://creativecommons.org/licenses/by/4.0 + + +======================================= +Test Results for os-odl_l2-nofeature-ha +======================================= + +.. toctree:: + :maxdepth: 2 + + +fuel +==== + +.. _Grafana: http://testresults.opnfv.org/grafana/dashboard/db/yardstick-main +.. _POD2: https://wiki.opnfv.org/pharos?&#community_test_labs + +Overview of test results +------------------------ + +See Grafana_ for viewing test result metrics for each respective test case. It +is possible to chose which specific scenarios to look at, and then to zoom in +on the details of each run test scenario as well. + +All of the test case results below are based on 4 scenario test +runs, each run on the Ericsson POD2_ or LF POD2_ between August 25 and 29 in +2016. + +TC002 +----- +The round-trip-time (RTT) between 2 VMs on different blades is measured using +ping. Most test run measurements result on average between 0.5 and 0.6 ms. +A few runs start with a 1 - 1.5 ms RTT spike (This could be because of normal ARP +handling). One test run has a greater RTT spike of 1.9 ms, which is the same +one with the 0.7 ms average. The other runs have no similar spike at all. +To be able to draw conclusions more runs should be made. +SLA set to 10 ms. The SLA value is used as a reference, it has not +been defined by OPNFV. + +TC005 +----- +The IO read bandwidth looks similar between different dates, with an +average between approx. 170 and 200 MB/s. Within each test run the results +vary, with a minimum 2 MB/s and maximum 838 MB/s on the totality. Most runs +have a minimum BW of 3 MB/s (two runs at 2 MB/s). The maximum BW varies more in +absolute numbers between the dates, between 617 and 838 MB/s. +SLA set to 400 MB/s. The SLA value is used as a reference, it has not been +defined by OPNFV. + +TC010 +----- +The measurements for memory latency are similar between test dates and result +in approx. 1.2 ns. The variations within each test run are similar, between +1.215 and 1.219 ns. 
One exception is February 16, where the average is 1.222 +and varies between 1.22 and 1.28 ns. +SLA set to 30 ns. The SLA value is used as a reference, it has not been defined +by OPNFV. + +TC011 +----- +Packet delay variation between 2 VMs on different blades is measured using +Iperf3. On the first date the reported packet delay variation varies between +0.0025 and 0.011 ms, with an average delay variation of 0.0067 ms. +On the second date the delay variation varies between 0.002 and 0.006 ms, with +an average delay variation of 0.004 ms. + +TC012 +----- +Between test dates, the average measurements for memory bandwidth vary between +17.4 and 17.9 GB/s. Within each test run the results vary more, with a minimal +BW of 16.4 GB/s and maximum of 18.2 GB/s on the totality. +SLA set to 15 GB/s. The SLA value is used as a reference, it has not been +defined by OPNFV. + +TC014 +----- +The Unixbench processor test run results vary between scores 3080 and 3240, +one result each date. The average score on the total is 3150. +No SLA set. + +TC037 +----- +The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs +on different blades are measured when increasing the amount of UDP flows sent +between the VMs using pktgen as packet generator tool. + +Round trip times and packet throughput between VMs can typically be affected by +the amount of flows set up and result in higher RTT and less PPS throughput. + +The RTT results are similar throughout the different test dates and runs at +approx. 15 ms. Some test runs show an increase with many flows, in the range +towards 16 to 17 ms. One exception standing out is Feb. 15 where the average +RTT is stable at approx. 13 ms. The PPS results are not as consistent as the +RTT results. +In some test runs when running with less than approx. 10000 flows the PPS +throughput is normally flatter compared to when running with more flows, after +which the PPS throughput decreases. Around 20 percent decrease in the worst +case. For the other test runs there is however no significant change to the PPS +throughput when the number of flows are increased. In some test runs the PPS +is also greater with 1000000 flows compared to other test runs where the PPS +result is less with only 2 flows. + +The average PPS throughput in the different runs varies between 414000 and +452000 PPS. The total amount of packets in each test run is approx. 7500000 to +8200000 packets. One test run Feb. 15 sticks out with a PPS average of +558000 and approx. 1100000 packets in total (same as the on mentioned earlier +for RTT results). + +There are lost packets reported in most of the test runs. There is no observed +correlation between the amount of flows and the amount of lost packets. +The lost amount of packets normally range between 100 and 1000 per test run, +but there are spikes in the range of 10000 lost packets as well, and even +more in a rare cases. + +CPU utilization statistics are collected during UDP flows sent between the VMs +using pktgen as packet generator tool. The average measurements for CPU +utilization ratio vary between 1% to 2%. The peak of CPU utilization ratio +appears around 7%. + +TC069 +----- +Between test dates, the average measurements for memory bandwidth vary between +15.5 and 25.4 GB/s. Within each test run the results vary more, with a minimal +BW of 9.7 GB/s and maximum of 29.5 GB/s on the totality. +SLA set to 6 GB/s. The SLA value is used as a reference, it has not been +defined by OPNFV. 
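+
+The packet delay variation in TC011 above is reported by Iperf3. As a hedged,
+illustrative sketch only (not the Yardstick test case itself), the snippet
+below shows one way the UDP jitter could be read from iperf3's JSON output.
+The server address is a placeholder and an ``iperf3 -s`` instance is assumed
+to already be running on it::
+
+    import json
+    import subprocess
+
+    SERVER = "192.168.0.3"   # placeholder address of the VM running iperf3 -s
+
+    # Run a short UDP test and ask iperf3 for machine-readable JSON output.
+    result = subprocess.run(
+        ["iperf3", "-c", SERVER, "-u", "-t", "10", "--json"],
+        capture_output=True, text=True, check=True)
+    report = json.loads(result.stdout)
+
+    # For UDP tests the end-of-run summary carries the measured jitter in ms.
+    jitter_ms = report["end"]["sum"]["jitter_ms"]
+    print("packet delay variation: %.4f ms" % jitter_ms)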
+ +TC070 +----- +The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs +on different blades are measured when increasing the amount of UDP flows sent +between the VMs using pktgen as packet generator tool. + +Round trip times and packet throughput between VMs can typically be affected by +the amount of flows set up and result in higher RTT and less PPS throughput. + +The RTT results are similar throughout the different test dates and runs at +approx. 15 ms. Some test runs show an increase with many flows, in the range +towards 16 to 17 ms. One exception standing out is Feb. 15 where the average +RTT is stable at approx. 13 ms. The PPS results are not as consistent as the +RTT results. +In some test runs when running with less than approx. 10000 flows the PPS +throughput is normally flatter compared to when running with more flows, after +which the PPS throughput decreases. Around 20 percent decrease in the worst +case. For the other test runs there is however no significant change to the PPS +throughput when the number of flows are increased. In some test runs the PPS +is also greater with 1000000 flows compared to other test runs where the PPS +result is less with only 2 flows. + +The average PPS throughput in the different runs varies between 414000 and +452000 PPS. The total amount of packets in each test run is approx. 7500000 to +8200000 packets. One test run Feb. 15 sticks out with a PPS average of +558000 and approx. 1100000 packets in total (same as the on mentioned earlier +for RTT results). + +There are lost packets reported in most of the test runs. There is no observed +correlation between the amount of flows and the amount of lost packets. +The lost amount of packets normally range between 100 and 1000 per test run, +but there are spikes in the range of 10000 lost packets as well, and even +more in a rare cases. + +Memory utilization statistics are collected during UDP flows sent between the +VMs using pktgen as packet generator tool. The average measurements for memory +utilization vary between 225MB to 246MB. The peak of memory utilization appears +around 340MB. + +TC071 +----- +The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs +on different blades are measured when increasing the amount of UDP flows sent +between the VMs using pktgen as packet generator tool. + +Round trip times and packet throughput between VMs can typically be affected by +the amount of flows set up and result in higher RTT and less PPS throughput. + +The RTT results are similar throughout the different test dates and runs at +approx. 15 ms. Some test runs show an increase with many flows, in the range +towards 16 to 17 ms. One exception standing out is Feb. 15 where the average +RTT is stable at approx. 13 ms. The PPS results are not as consistent as the +RTT results. +In some test runs when running with less than approx. 10000 flows the PPS +throughput is normally flatter compared to when running with more flows, after +which the PPS throughput decreases. Around 20 percent decrease in the worst +case. For the other test runs there is however no significant change to the PPS +throughput when the number of flows are increased. In some test runs the PPS +is also greater with 1000000 flows compared to other test runs where the PPS +result is less with only 2 flows. + +The average PPS throughput in the different runs varies between 414000 and +452000 PPS. The total amount of packets in each test run is approx. 7500000 to +8200000 packets. 
One test run Feb. 15 sticks out with a PPS average of +558000 and approx. 1100000 packets in total (same as the on mentioned earlier +for RTT results). + +There are lost packets reported in most of the test runs. There is no observed +correlation between the amount of flows and the amount of lost packets. +The lost amount of packets normally range between 100 and 1000 per test run, +but there are spikes in the range of 10000 lost packets as well, and even +more in a rare cases. + +Cache utilization statistics are collected during UDP flows sent between the +VMs using pktgen as packet generator tool. The average measurements for cache +utilization vary between 205MB to 212MB. + +TC072 +----- +The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs +on different blades are measured when increasing the amount of UDP flows sent +between the VMs using pktgen as packet generator tool. + +Round trip times and packet throughput between VMs can typically be affected by +the amount of flows set up and result in higher RTT and less PPS throughput. + +The RTT results are similar throughout the different test dates and runs at +approx. 15 ms. Some test runs show an increase with many flows, in the range +towards 16 to 17 ms. One exception standing out is Feb. 15 where the average +RTT is stable at approx. 13 ms. The PPS results are not as consistent as the +RTT results. +In some test runs when running with less than approx. 10000 flows the PPS +throughput is normally flatter compared to when running with more flows, after +which the PPS throughput decreases. Around 20 percent decrease in the worst +case. For the other test runs there is however no significant change to the PPS +throughput when the number of flows are increased. In some test runs the PPS +is also greater with 1000000 flows compared to other test runs where the PPS +result is less with only 2 flows. + +The average PPS throughput in the different runs varies between 414000 and +452000 PPS. The total amount of packets in each test run is approx. 7500000 to +8200000 packets. One test run Feb. 15 sticks out with a PPS average of +558000 and approx. 1100000 packets in total (same as the on mentioned earlier +for RTT results). + +There are lost packets reported in most of the test runs. There is no observed +correlation between the amount of flows and the amount of lost packets. +The lost amount of packets normally range between 100 and 1000 per test run, +but there are spikes in the range of 10000 lost packets as well, and even +more in a rare cases. + +Network utilization statistics are collected during UDP flows sent between the +VMs using pktgen as packet generator tool. Total number of packets received per +second was average on 200 kpps and total number of packets transmitted per +second was average on 600 kpps. + +Detailed test results +--------------------- +The scenario was run on Ericsson POD2_ and LF POD2_ with: +Fuel 9.0 +OpenStack Mitaka +OpenVirtualSwitch 2.5.90 +OpenDayLight Beryllium + +Rationale for decisions +----------------------- +Pass + +Tests were successfully executed and metrics collected. +No SLA was verified. To be decided on in next release of OPNFV. + +Conclusions and recommendations +------------------------------- +The pktgen test configuration has a relatively large base effect on RTT in +TC037 compared to TC002, where there is no background load at all. Approx. +15 ms compared to approx. 0.5 ms, which is more than a 3000 percentage +difference in RTT results. 
+Especially RTT and throughput come out with better results than for instance +the *fuel-os-nosdn-nofeature-ha* scenario does. The reason for this should +probably be further analyzed and understood. Also of interest could be +to make further analyzes to find patterns and reasons for lost traffic. +Also of interest could be to see if there are continuous variations where +some test cases stand out with better or worse results than the general test +case. + diff --git a/docs/results/os-onos-nofeature-ha.rst b/docs/results/os-onos-nofeature-ha.rst new file mode 100644 index 000000000..e5587505f --- /dev/null +++ b/docs/results/os-onos-nofeature-ha.rst @@ -0,0 +1,257 @@ +.. This work is licensed under a Creative Commons Attribution 4.0 International +.. License. +.. http://creativecommons.org/licenses/by/4.0 + + +====================================== +Test Results for os-onos-nofeature-ha +====================================== + +.. toctree:: + :maxdepth: 2 + + +Joid +===== + +.. _Grafana: http://testresults.opnfv.org/grafana/dashboard/db/yardstick-main +.. _POD6: https://wiki.opnfv.org/pharos?&#community_test_labs + +Overview of test results +------------------------ + +See Grafana_ for viewing test result metrics for each respective test case. It +is possible to chose which specific scenarios to look at, and then to zoom in +on the details of each run test scenario as well. + +All of the test case results below are based on 5 scenario test runs, each run +on the Intel POD6_ between September 13 and 16 in 2016. + +TC002 +----- +The round-trip-time (RTT) between 2 VMs on different blades is measured using +ping. Most test run measurements result on average between 1.50 and 1.68 ms. +Only one test run has reached greatest RTT spike of 2.62 ms, which has +the smallest RTT of 1.00 ms. The other four runs have no similar spike at all, +the minimum and average RTTs of which are approx. 1.06 ms and 1.32 ms. SLA set +to be 10 ms. The SLA value is used as a reference, it has not been defined by +OPNFV. + +TC005 +----- +The IO read bandwidth actually refers to the storage throughput, which is +measured by fio and the greatest IO read bandwidth of the four runs is 175.4 +MB/s. The IO read bandwidth of the four runs looks similar on different four +days, with an average between 58.1 and 62.0 MB/s, except one on Sep. 14, for +its maximum storage throughput is only 133.0 MB/s. One of the runs has a +minimum BW of 497 KM/s and other has a maximum BW of 177.4 MB/s. The SLA of read +bandwidth sets to be 400 MB/s, which is used as a reference, and it has not +been defined by OPNFV. + +The results of storage IOPS for the five runs look similar with each other. The +IO read times per second of the five test runs have an average value between +1.20 K/s and 1.61 K/s, and meanwhile, the minimum result is only 41 times per +second. + +TC010 +----- +The tool we use to measure memory read latency is lmbench, which is a series of +micro benchmarks intended to measure basic operating system and hardware system +metrics. The memory read latency of the five runs is between 1.146 ns and 1.172 +ns on average. The variations within each test run are quite different, some +vary from a large range and others have a small change. For example, the +largest change is on September 13, the memory read latency of which is ranging +from 1.152 ns to 1.221 ns. However, the results on September 14 change very +little. The SLA sets to be 30 ns. The SLA value is used as a reference, it has +not been defined by OPNFV. 
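+
+For readers unfamiliar with lmbench, the sketch below is an illustrative
+example only (not the Yardstick test case itself) of how memory read latency
+figures of the kind reported in TC010 could be collected with lat_mem_rd,
+alongside the 30 ns SLA reference. It assumes lat_mem_rd is installed and, as
+is usual for lmbench tools, that it reports "size latency" pairs on stderr::
+
+    import subprocess
+
+    SLA_LATENCY_NS = 30.0   # reference value only, not defined by OPNFV
+
+    # Walk memory up to 128 MB with a 128-byte stride.
+    proc = subprocess.run(["lat_mem_rd", "128", "128"],
+                          capture_output=True, text=True)
+
+    for line in proc.stderr.splitlines():
+        parts = line.split()
+        if len(parts) != 2:
+            continue                      # skip header or footer lines
+        try:
+            size_mb, latency_ns = float(parts[0]), float(parts[1])
+        except ValueError:
+            continue
+        print("%8.3f MB  %7.3f ns  (SLA reference %.1f ns)"
+              % (size_mb, latency_ns, SLA_LATENCY_NS))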
+
+TC011
+-----
+Iperf3 is a tool for evaluating the packet delay variation between 2 VMs on
+different blades. The reported packet delay variations of the five test runs
+differ from each other. In general, the packet delay of the first two runs
+looks similar, for they both stay stable within each run, and their mean
+packet delays are 0.07714 ms and 0.07982 ms respectively. Of the five runs,
+the third has the worst result, because the packet delay reaches 0.08384 ms.
+The trend of the rest two runs looks the same, for the average packet delays
+are 0.07808 ms and 0.07727 ms respectively. The SLA value is set to 10 ms.
+The SLA value is used as a reference, it has not been defined by OPNFV.
+
+TC012
+-----
+Lmbench is also used to measure the memory read and write bandwidth, in which
+we use bw_mem to obtain the results. Among the five test runs, the memory
+bandwidth of the last three test runs stays almost stable within each run, at
+11.64, 11.71 and 11.61 GB/s on average. However, the memory read and write
+bandwidth on Sep. 13 has a large range, for it ranges from 6.68 GB/s to 11.73
+GB/s. Here the SLA is set to 15 GB/s. The SLA value is used as a reference, it
+has not been defined by OPNFV.
+
+TC014
+-----
+The Unixbench is used to evaluate the IaaS processing speed with regard to the
+scores of single CPU running and parallel running. It can be seen from the
+dashboard that the processing test results vary from scores 3208 to 3314, with
+only one result per date. No SLA set.
+
+TC037
+-----
+The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
+on different blades are measured when increasing the amount of UDP flows sent
+between the VMs using pktgen as packet generator tool.
+
+Round trip times and packet throughput between VMs can typically be affected by
+the amount of flows set up and result in higher RTT and less PPS throughput.
+
+The mean packet throughput of the five test runs is between 259.6 kpps and
+318.4 kpps, of which the result of the second run is the highest. The RTT
+results of all the test runs keep flat at approx. 20 ms. It is obvious that the
+PPS results are not as consistent as the RTT results.
+
+The number of flows in the five test runs is 240 k on average, and the PPS
+results vary somewhat, with the largest packet throughput at 398.9 kpps and
+the minimum throughput at 250.6 kpps.
+
+There are no packet receive errors in the five runs, but there are still lost
+packets in all the test runs. The RTT values obtained by ping in the five runs
+have a similar average value, between 17 ms and 22 ms, of which the worst RTT
+is 53 ms on Sep. 14th.
+
+CPU load is measured by mpstat, and the CPU load of the five test runs looks
+similar, since the minimum value and the peak of CPU load are 0 percent and 10
+percent respectively. The best result is obtained on Sep. 13th, with a CPU
+load of 10 percent.
+
+TC069
+-----
+With the block size changing from 1 kb to 512 kb, the memory write bandwidth
+tends to become larger first and then smaller within every test run, ranging
+from 21.6 GB/s to 26.8 GB/s and then down to 18.4 GB/s on average. Since the
+test id is one, only the INT memory write bandwidth is tested. On the whole,
+when the block size is 8 kb and 16 kb, the memory write bandwidth looks
+similar, with a minimal BW of 23.0 GB/s and a peak value of 28.6 GB/s. Then,
+with the block size becoming larger, the memory write bandwidth tends to
+decrease. The SLA is set to 7 GB/s.
The SLA value is used as a a reference, it has +not been defined by OPNFV. + +TC070 +----- +The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs +on different blades are measured when increasing the amount of UDP flows sent +between the VMs using pktgen as packet generator tool. + +Round trip times and packet throughput between VMs can typically be affected by +the amount of flows set up and result in higher RTT and less PPS throughput. + +The network latency is measured by ping, and the results of the five test runs +look similar with each other, and within these test runs, the maximum RTT can +reach 53 ms and the average RTT is usually approx. 18 ms. The network latency +tested on Sep. 14 shows that it has a peak latency of 53 ms. But on the whole, +the average RTTs of the five runs keep flat and the network latency is +relatively short. + +Memory utilization is measured by free, which can display amount of free and +used memory in the system. The largest amount of used memory is 272 MiB on Sep +14. In general, the mean used memory of the five test runs have the similar +trend and the minimum memory used size is approx. 150 MiB, and the average +used memory size is about 250 MiB. On the other hand, for the mean free memory, +the five test runs have the similar trend, whose mean free memory change from +218 MiB to 342 MiB, with an average value of approx. 38 MiB. + +Packet throughput and packet loss can be measured by pktgen, which is a tool +in the network for generating traffic loads for network experiments. The mean +packet throughput of the five test runs seem quite different, ranging from +285.29 kpps to 297.76 kpps. The average number of flows in these tests is +240000, and each run has a minimum number of flows of 2 and a maximum number +of flows of 1.001 Mil. At the same time, the corresponding packet throughput +differ between 250.6k and 398.9k with an average packet throughput between +277.2 K and 318.4 K. In summary, the PPS results seem consistent. Within each +test run of the five runs, when number of flows becomes larger, the packet +throughput seems not larger at the same time. + +TC071 +----- +The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs +on different blades are measured when increasing the amount of UDP flows sent +between the VMs using pktgen as packet generator tool. + +Round trip times and packet throughput between VMs can typically be affected by +the amount of flows set up and result in higher RTT and less PPS throughput. + +The network latency is measured by ping, and the results of the five test runs +look similar with each other. Within each test run, the maximum RTT is only 49 +ms and the average RTT is usually approx. 20 ms. On the whole, the average +RTTs of the five runs keep stable and the network latency is relatively short. + +Cache utilization is measured by cachestat, which can display size of cache and +buffer in the system. Cache utilization statistics are collected during UDP +flows sent between the VMs using pktgen as packet generator tool.The largest +cache size is 215 MiB in the four runs, and the smallest cache size is 95 MiB. +On the whole, the average cache size of the five runs change a little and is +about 200 MiB, except the one on Sep. 14th, the mean cache size is very small, +which keeps 102 MiB. Meanwhile, the tread of the buffer size keep flat, since +they have a minimum value of 7 MiB and a maximum value of 8 MiB, with an +average value of about 7.8 MiB. 
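+
+The cache and buffer figures above summarize values sampled periodically while
+the pktgen traffic is running. A minimal, illustrative helper (not part of the
+Yardstick framework) showing how such a series of samples could be reduced to
+the minimum, maximum and average figures quoted in this report; the sample
+values below are made up::
+
+    from statistics import mean
+
+    def summarize(samples_mib):
+        """Reduce a series of sampled sizes (MiB) to min/max/average."""
+        return {"min": min(samples_mib),
+                "max": max(samples_mib),
+                "avg": round(mean(samples_mib), 1)}
+
+    # Hypothetical cache-size samples (MiB) taken while pktgen traffic runs.
+    cache_samples = [95, 180, 210, 215, 205, 198]
+    print(summarize(cache_samples))   # e.g. {'min': 95, 'max': 215, 'avg': ...}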
+ +Packet throughput can be measured by pktgen, which is a tool in the network for +generating traffic loads for network experiments. The mean packet throughput of +the four test runs seem quite different, ranging from 285.29 kpps to 297.76 +kpps. The average number of flows in these tests is 239.7k, and each run has a +minimum number of flows of 2 and a maximum number of flows of 1.001 Mil. At the +same time, the corresponding packet throughput differ between 227.3k and 398.9k +with an average packet throughput between 277.2k and 318.4k. Within each test +run of the five runs, when number of flows becomes larger, the packet +throughput seems not larger in the meantime. + +TC072 +----- +The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs +on different blades are measured when increasing the amount of UDP flows sent +between the VMs using pktgen as packet generator tool. + +Round trip times and packet throughput between VMs can typically be affected by +the amount of flows set up and result in higher RTT and less PPS throughput. + +The RTT results are similar throughout the different test dates and runs + between 0 ms and 49 ms with an average leatency of less than 22 ms. The PPS +results are not as consistent as the RTT results, for the mean packet +throughput of the five runs differ from 250.6 kpps to 398.9 kpps. + +Network utilization is measured by sar, that is system activity reporter, which +can display the average statistics for the time since the system was started. +Network utilization statistics are collected during UDP flows sent between the +VMs using pktgen as packet generator tool. The largest total number of packets +transmitted per second look similar for four test runs, whose values change a +lot from 10 pps to 399 kpps, except the one on Sep. 14th, whose total number +of transmitted per second keep stable, that is 10 pps. Similarly, the total +number of packets received per second look the same for four runs, except the +one on Sep. 14th, whose value is only 10 pps. + +In some test runs when running with less than approx. 90000 flows the PPS +throughput is normally flatter compared to when running with more flows, after +which the PPS throughput decreases. For the other test runs there is however no +significant change to the PPS throughput when the number of flows are +increased. In some test runs the PPS is also greater with 250000 flows +compared to other test runs where the PPS result is less with only 2 flows. + +There are lost packets reported in most of the test runs. There is no observed +correlation between the amount of flows and the amount of lost packets. +The lost amount of packets normally differs a lot per test run. + +Detailed test results +--------------------- +The scenario was run on Intel POD5_ with: +Joid +OpenStack Mitaka +Onos Goldeneye +OpenVirtualSwitch 2.5.90 +OpenDayLight Beryllium + +Rationale for decisions +----------------------- +Pass + +Conclusions and recommendations +------------------------------- +Tests were successfully executed and metrics collected. +No SLA was verified. To be decided on in next release of OPNFV. diff --git a/docs/results/os-onos-sfc-ha.rst b/docs/results/os-onos-sfc-ha.rst new file mode 100644 index 000000000..1a09f53d7 --- /dev/null +++ b/docs/results/os-onos-sfc-ha.rst @@ -0,0 +1,274 @@ +.. This work is licensed under a Creative Commons Attribution 4.0 International +.. License. +.. 
http://creativecommons.org/licenses/by/4.0 + + +=============================== +Test Results for os-onos-sfc-ha +=============================== + +.. toctree:: + :maxdepth: 2 + + +fuel +==== + +.. _Grafana: http://testresults.opnfv.org/grafana/dashboard/db/yardstick-main +.. _POD2: https://wiki.opnfv.org/pharos?&#community_test_labs + +Overview of test results +------------------------ + +See Grafana_ for viewing test result metrics for each respective test case. It +is possible to chose which specific scenarios to look at, and then to zoom in +on the details of each run test scenario as well. + +All of the test case results below are based on 4 scenario test runs, each run +on the Ericsson POD2_ or LF POD2_ between September 5 and 10 in 2016. + +TC002 +----- +The round-trip-time (RTT) between 2 VMs on different blades is measured using +ping. Most test run measurements result on average between 0.5 and 0.6 ms. +A few runs start with a 1 - 1.5 ms RTT spike (This could be because of normal ARP +handling). One test run has a greater RTT spike of 1.9 ms, which is the same +one with the 0.7 ms average. The other runs have no similar spike at all. +To be able to draw conclusions more runs should be made. +SLA set to 10 ms. The SLA value is used as a reference, it has not +been defined by OPNFV. + +TC005 +----- +The IO read bandwidth looks similar between different dates, with an +average between approx. 170 and 200 MB/s. Within each test run the results +vary, with a minimum 2 MB/s and maximum 838 MB/s on the totality. Most runs +have a minimum BW of 3 MB/s (two runs at 2 MB/s). The maximum BW varies more in +absolute numbers between the dates, between 617 and 838 MB/s. +SLA set to 400 MB/s. The SLA value is used as a reference, it has not been +defined by OPNFV. + +TC010 +----- +The measurements for memory latency are similar between test dates and result +in approx. 1.2 ns. The variations within each test run are similar, between +1.215 and 1.219 ns. One exception is February 16, where the average is 1.222 +and varies between 1.22 and 1.28 ns. +SLA set to 30 ns. The SLA value is used as a reference, it has not been defined +by OPNFV. + +TC011 +----- +Packet delay variation between 2 VMs on different blades is measured using +Iperf3. On the first date the reported packet delay variation varies between +0.0025 and 0.011 ms, with an average delay variation of 0.0067 ms. +On the second date the delay variation varies between 0.002 and 0.006 ms, with +an average delay variation of 0.004 ms. + +TC012 +----- +Between test dates, the average measurements for memory bandwidth vary between +17.4 and 17.9 GB/s. Within each test run the results vary more, with a minimal +BW of 16.4 GB/s and maximum of 18.2 GB/s on the totality. +SLA set to 15 GB/s. The SLA value is used as a reference, it has not been +defined by OPNFV. + +TC014 +----- +The Unixbench processor test run results vary between scores 3080 and 3240, +one result each date. The average score on the total is 3150. +No SLA set. + +TC037 +----- +The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs +on different blades are measured when increasing the amount of UDP flows sent +between the VMs using pktgen as packet generator tool. + +Round trip times and packet throughput between VMs can typically be affected by +the amount of flows set up and result in higher RTT and less PPS throughput. + +The RTT results are similar throughout the different test dates and runs at +approx. 15 ms. 
Some test runs show an increase with many flows, in the range +towards 16 to 17 ms. One exception standing out is Feb. 15 where the average +RTT is stable at approx. 13 ms. The PPS results are not as consistent as the +RTT results. +In some test runs when running with less than approx. 10000 flows the PPS +throughput is normally flatter compared to when running with more flows, after +which the PPS throughput decreases. Around 20 percent decrease in the worst +case. For the other test runs there is however no significant change to the PPS +throughput when the number of flows are increased. In some test runs the PPS +is also greater with 1000000 flows compared to other test runs where the PPS +result is less with only 2 flows. + +The average PPS throughput in the different runs varies between 414000 and +452000 PPS. The total amount of packets in each test run is approx. 7500000 to +8200000 packets. One test run Feb. 15 sticks out with a PPS average of +558000 and approx. 1100000 packets in total (same as the on mentioned earlier +for RTT results). + +There are lost packets reported in most of the test runs. There is no observed +correlation between the amount of flows and the amount of lost packets. +The lost amount of packets normally range between 100 and 1000 per test run, +but there are spikes in the range of 10000 lost packets as well, and even +more in a rare cases. + +CPU utilization statistics are collected during UDP flows sent between the VMs +using pktgen as packet generator tool. The average measurements for CPU +utilization ratio vary between 1% to 2%. The peak of CPU utilization ratio +appears around 7%. + +TC069 +----- +Between test dates, the average measurements for memory bandwidth vary between +15.5 and 25.4 GB/s. Within each test run the results vary more, with a minimal +BW of 9.7 GB/s and maximum of 29.5 GB/s on the totality. +SLA set to 6 GB/s. The SLA value is used as a reference, it has not been +defined by OPNFV. + +TC070 +----- +The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs +on different blades are measured when increasing the amount of UDP flows sent +between the VMs using pktgen as packet generator tool. + +Round trip times and packet throughput between VMs can typically be affected by +the amount of flows set up and result in higher RTT and less PPS throughput. + +The RTT results are similar throughout the different test dates and runs at +approx. 15 ms. Some test runs show an increase with many flows, in the range +towards 16 to 17 ms. One exception standing out is Feb. 15 where the average +RTT is stable at approx. 13 ms. The PPS results are not as consistent as the +RTT results. +In some test runs when running with less than approx. 10000 flows the PPS +throughput is normally flatter compared to when running with more flows, after +which the PPS throughput decreases. Around 20 percent decrease in the worst +case. For the other test runs there is however no significant change to the PPS +throughput when the number of flows are increased. In some test runs the PPS +is also greater with 1000000 flows compared to other test runs where the PPS +result is less with only 2 flows. + +The average PPS throughput in the different runs varies between 414000 and +452000 PPS. The total amount of packets in each test run is approx. 7500000 to +8200000 packets. One test run Feb. 15 sticks out with a PPS average of +558000 and approx. 1100000 packets in total (same as the on mentioned earlier +for RTT results). 
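+
+The "around 20 percent decrease in the worst case" noted above is the relative
+drop from the low-flow throughput plateau to the throughput seen once many
+more flows are generated. A small, purely illustrative calculation with
+hypothetical throughput values (not taken from any specific run)::
+
+    # Hypothetical values: a plateau below ~10000 flows and the reduced
+    # throughput observed with a much larger number of flows.
+    plateau_pps = 450000
+    high_flow_pps = 360000
+
+    drop = (plateau_pps - high_flow_pps) / plateau_pps
+    print("worst-case throughput drop: %.0f%%" % (drop * 100))   # -> 20%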
+
+There are lost packets reported in most of the test runs. There is no observed
+correlation between the number of flows and the number of lost packets.
+The number of lost packets normally ranges between 100 and 1000 per test run,
+but there are spikes in the range of 10000 lost packets as well, and even
+more in rare cases.
+
+Memory utilization statistics are collected during UDP flows sent between the
+VMs using pktgen as the packet generator tool. The average measurements for memory
+utilization vary between 225MB and 246MB. The peak of memory utilization appears
+around 340MB.
+
+TC071
+-----
+The number of packets per second (PPS) and round trip times (RTT) between 2 VMs
+on different blades are measured when increasing the number of UDP flows sent
+between the VMs, using pktgen as the packet generator tool.
+
+Round trip times and packet throughput between VMs can typically be affected by
+the number of flows set up, resulting in higher RTT and lower PPS throughput.
+
+The RTT results are similar throughout the different test dates and runs at
+approx. 15 ms. Some test runs show an increase with many flows, in the range
+towards 16 to 17 ms. One exception standing out is Feb. 15, where the average
+RTT is stable at approx. 13 ms. The PPS results are not as consistent as the
+RTT results.
+In some test runs when running with less than approx. 10000 flows the PPS
+throughput is normally flatter compared to when running with more flows, after
+which the PPS throughput decreases, by around 20 percent in the worst
+case. For the other test runs there is however no significant change to the PPS
+throughput when the number of flows is increased. In some test runs the PPS
+is also greater with 1000000 flows compared to other test runs where the PPS
+result is lower with only 2 flows.
+
+The average PPS throughput in the different runs varies between 414000 and
+452000 PPS. The total number of packets in each test run is approx. 7500000 to
+8200000 packets. One test run on Feb. 15 sticks out with a PPS average of
+558000 and approx. 1100000 packets in total (same as the one mentioned earlier
+for RTT results).
+
+There are lost packets reported in most of the test runs. There is no observed
+correlation between the number of flows and the number of lost packets.
+The number of lost packets normally ranges between 100 and 1000 per test run,
+but there are spikes in the range of 10000 lost packets as well, and even
+more in rare cases.
+
+Cache utilization statistics are collected during UDP flows sent between the
+VMs using pktgen as the packet generator tool. The average measurements for cache
+utilization vary between 205MB and 212MB.
+
+TC072
+-----
+The number of packets per second (PPS) and round trip times (RTT) between 2 VMs
+on different blades are measured when increasing the number of UDP flows sent
+between the VMs, using pktgen as the packet generator tool.
+
+Round trip times and packet throughput between VMs can typically be affected by
+the number of flows set up, resulting in higher RTT and lower PPS throughput.
+
+The RTT results are similar throughout the different test dates and runs at
+approx. 15 ms. Some test runs show an increase with many flows, in the range
+towards 16 to 17 ms. One exception standing out is Feb. 15, where the average
+RTT is stable at approx. 13 ms. The PPS results are not as consistent as the
+RTT results.
+In some test runs when running with less than approx. 10000 flows the PPS
+throughput is normally flatter compared to when running with more flows, after
+which the PPS throughput decreases, by around 20 percent in the worst
+case. For the other test runs there is however no significant change to the PPS
+throughput when the number of flows is increased. In some test runs the PPS
+is also greater with 1000000 flows compared to other test runs where the PPS
+result is lower with only 2 flows.
+
+The average PPS throughput in the different runs varies between 414000 and
+452000 PPS. The total number of packets in each test run is approx. 7500000 to
+8200000 packets. One test run on Feb. 15 sticks out with a PPS average of
+558000 and approx. 1100000 packets in total (same as the one mentioned earlier
+for RTT results).
+
+There are lost packets reported in most of the test runs. There is no observed
+correlation between the number of flows and the number of lost packets.
+The number of lost packets normally ranges between 100 and 1000 per test run,
+but there are spikes in the range of 10000 lost packets as well, and even
+more in rare cases.
+
+Network utilization statistics are collected during UDP flows sent between the
+VMs using pktgen as the packet generator tool. The total number of packets
+received per second was on average 200 kpps and the total number of packets
+transmitted per second was on average 600 kpps.
+
+Detailed test results
+---------------------
+The scenario was run on Ericsson POD2_ and LF POD2_ with:
+Fuel 9.0
+OpenStack Mitaka
+ONOS Goldeneye
+OpenVirtualSwitch 2.5.90
+OpenDaylight Beryllium
+
+Rationale for decisions
+-----------------------
+Pass
+
+Tests were successfully executed and metrics collected.
+No SLA was verified. To be decided on in the next release of OPNFV.
+
+Conclusions and recommendations
+-------------------------------
+The pktgen test configuration has a relatively large base effect on RTT in
+TC037 compared to TC002, where there is no background load at all: approx.
+15 ms compared to approx. 0.5 ms, which is roughly a thirtyfold difference
+in RTT results.
+In particular, RTT and throughput come out with better results than, for
+instance, in the *fuel-os-nosdn-nofeature-ha* scenario. The reason for this
+should probably be further analyzed and understood. Also of interest could be
+further analysis to find patterns and reasons for lost traffic.
+Also of interest could be to see if there are continuous variations where
+some test cases stand out with better or worse results than the general test
+case.
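For readers who want to get a feel for the unloaded-versus-loaded RTT gap discussed
above outside of the Yardstick framework, a rough manual spot check can be made from
one test VM towards the other. The commands below are only an illustrative sketch and
are not part of the executed test suites; the target IP address is hypothetical and
the packet size, count and duration are arbitrary choices.

::

    # Approximate an unloaded RTT check (similar in spirit to what TC002
    # automates with its ping-based scenario). 10.0.1.5 is a hypothetical
    # address of the second test VM on the tenant network.
    ping -c 30 -s 100 10.0.1.5

    # For a rough loaded comparison, start an iperf3 server on the target VM
    # (iperf3 -s), generate UDP background traffic towards it as below, and
    # repeat the ping from a second shell while the traffic is running.
    iperf3 -c 10.0.1.5 -u -b 100M -t 60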
+ diff --git a/docs/results/overview.rst b/docs/results/overview.rst index ee0ebe504..b4a050545 100644 --- a/docs/results/overview.rst +++ b/docs/results/overview.rst @@ -45,39 +45,41 @@ The list of scenarios supported by each installer can be described as follows: +-------------------------+---------+---------+---------+---------+ | Scenario | Apex | Compass | Fuel | Joid | +=========================+=========+=========+=========+=========+ -| os-nosdn-nofeature-noha | | | | X | +| os-nosdn-nofeature-noha | | | X | X | +-------------------------+---------+---------+---------+---------+ -| os-nosdn-nofeature-ha | X | | X | X | +| os-nosdn-nofeature-ha | X | X | X | X | +-------------------------+---------+---------+---------+---------+ | os-odl_l2-nofeature-ha | X | X | X | X | +-------------------------+---------+---------+---------+---------+ -| os-odl_l2-nofeature-noha| | X | | | +| os-odl_l2-nofeature-noha| | | X | | +-------------------------+---------+---------+---------+---------+ -| os-odl_l3-nofeature-ha | X | | X | | +| os-odl_l3-nofeature-ha | X | X | X | | +-------------------------+---------+---------+---------+---------+ -| os-odl_l3-nofeature-ha | | X | | | +| os-odl_l3-nofeature-noha| | | X | | +-------------------------+---------+---------+---------+---------+ -| os-onos-sfc-ha | X | | X | X | +| os-onos-sfc-ha | X | X | X | X | +-------------------------+---------+---------+---------+---------+ -| os-onos-nofeature-ha | X | | X | X | +| os-onos-sfc-noha | | | X | | +-------------------------+---------+---------+---------+---------+ -| os-onos-nofeature-noha | | X | | | +| os-onos-nofeature-ha | X | X | X | X | ++-------------------------+---------+---------+---------+---------+ +| os-onos-nofeature-noha | | | X | | +-------------------------+---------+---------+---------+---------+ | os-odl_l2-sfc-ha | | | X | | +-------------------------+---------+---------+---------+---------+ -| os-odl_l2-sfc-noha | X | X | | | +| os-odl_l2-sfc-noha | X | X | X | | +-------------------------+---------+---------+---------+---------+ | os-odl_l2-bgpvpn-ha | X | | X | | +-------------------------+---------+---------+---------+---------+ -| os-odl_l2-bgpvpn-noha | | X | | | +| os-odl_l2-bgpvpn-noha | | X | X | | +-------------------------+---------+---------+---------+---------+ | os-nosdn-kvm-ha | | | X | | +-------------------------+---------+---------+---------+---------+ -| os-nosdn-kvm-noha | | X | | | +| os-nosdn-kvm-noha | | X | X | | +-------------------------+---------+---------+---------+---------+ -| os-nosdn-ovs-ha | | | | | +| os-nosdn-ovs-ha | | | X | | +-------------------------+---------+---------+---------+---------+ -| os-nosdn-ovs-noha | X | X | | | +| os-nosdn-ovs-noha | X | | X | | +-------------------------+---------+---------+---------+---------+ | os-ocl-nofeature-ha | | | | | +-------------------------+---------+---------+---------+---------+ @@ -87,6 +89,8 @@ The list of scenarios supported by each installer can be described as follows: +-------------------------+---------+---------+---------+---------+ | os-odl_l2-fdio-noha | X | | | | +-------------------------+---------+---------+---------+---------+ +| os-odl_l2-moon-ha | | X | | | ++-------------------------+---------+---------+---------+---------+ To qualify for release, the scenarios must have deployed and been successfully tested in four consecutive installations to establish stability of deployment diff --git a/docs/results/results.rst b/docs/results/results.rst index bfdba20e9..c5598a069 100644 --- 
a/docs/results/results.rst
+++ b/docs/results/results.rst
@@ -20,56 +20,16 @@ The following documents contain results of Yardstick test cases executed
 on OPNFV labs, triggered by OPNFV CI pipeline, documented per scenario.
 
-Ready scenarios
----------------
-
-The following scenarios have been successfully tested at least four consecutive
-times:
-
 .. toctree::
    :maxdepth: 1
 
-   fuel-os-odl_l2-nofeature-ha.rst
-   fuel-os-odl_l3-nofeature-noha.rst
-   fuel-os-nosdn-kvm-ha.rst
-   fuel-os-nosdn-kvm-noha.rst
-   fuel-os-odl_l2-bgpvpn-ha.rst
-   fuel-os-odl_l2-bgpvpn-noha.rst
-   compass-os-nosdn-nofeature-ha.rst
-   compass-os-odl_l2-moon-ha.rst
-   compass-os-onos-sfc-ha.rst
-   compass-onos-nofeature-ha.rst
-   joid-os-nosdn-nofeature-ha.rst
-   joid-os-nosdn-nofeature-noha.rst
-   joid-odl_l2-nofeature-ha.rst
-   joid-os-onos-nofeature-ha.rst
-   joid-os-onos-sfc-ha.rst
-   apex-os-nosdn-nofeature-ha.rst
-   apex-os-odl_l2-bgpvpn-ha.rst
-
-Limitations
------------
-
-For the following scenarios, Yardstick generic test cases suite was executed at
-least one time however less than four consecutive times, measurements
-collected:
-
-
-   * fuel-os-odl_l3-nofeature-ha
-
-
-For the following scenario, Yardstick generic test cases suite was executed
-four consecutive times, measurements collected; no feature test cases were
-executed, therefore the feature is not verified by Yardstick:
-
-
-
-For the following scenario, Yardstick generic test cases suite was executed
-three consecutive times, measurements collected; no feature test cases
-were executed, therefore the feature is not verified by Yardstick:
-
-   * fuel-os-odl_l2-sfc-ha
-
+   os-odl_l2-nofeature-ha.rst
+   os-nosdn-nofeature-ha.rst
+   os-nosdn-kvm-ha.rst
+   os-odl_l2-bgpvpn-ha.rst
+   os-nosdn-nofeature-noha.rst
+   os-onos-nofeature-ha.rst
+   os-onos-sfc-ha.rst
 
 Test results of executed tests are available in Dashboard_ and logs in Jenkins_.
 
 
diff --git a/docs/userguide/07-installation.rst b/docs/userguide/07-installation.rst
index 8d87bc09d..aa45b61af 100644
--- a/docs/userguide/07-installation.rst
+++ b/docs/userguide/07-installation.rst
@@ -11,7 +11,7 @@ Abstract
 
 Yardstick supports installation on Ubuntu 14.04 or by using a Docker image. The
 installation procedure on Ubuntu 14.04 or via the docker image are
-detailed in the section below
+detailed in the section below.
 
 To use Yardstick you should have access to an OpenStack environment, with at
 least Nova, Neutron, Glance, Keystone and Heat installed.
@@ -21,7 +21,7 @@ The steps needed to run Yardstick are:
 1. Install Yardstick.
 2. Create the test configuration .yaml file.
 3. Build a guest image.
-4 .Load the image into the OpenStack environment.
+4. Load the image into the OpenStack environment.
 5. Create a Neutron external network.
 6. Load OpenStack environment variables.
 7. Run the test case.
@@ -87,48 +87,6 @@ at: http://www.youtube.com/watch?v=4S4izNolmR0
    :alt: http://www.youtube.com/watch?v=4S4izNolmR0
    :target: http://www.youtube.com/watch?v=4S4izNolmR0
 
-.. _guest-image:
-
-Building a guest image
-^^^^^^^^^^^^^^^^^^^^^^
-Yardstick has a tool for building an Ubuntu Cloud Server image containing all
-the required tools to run test cases supported by Yardstick. It is necessary to
-have sudo rights to use this tool.
-
-Also you may need install several additional packages to use this tool, by
-follwing the commands below:
-
-::
-
-    apt-get update && apt-get install -y \
-        qemu-utils \
-        kpartx
-
-This image can be built using the following command while in the directory where
-Yardstick is installed (``~/yardstick`` if the framework is installed
-by following the commands above):
-
-::
-
-    eport YARD_IMG_ARCH="amd64"
-    sudo echo "Defaults env_keep += \"YARD_IMG_ARCH\"" >> /etc/sudoers
-    sudo ./tools/yardstick-img-modify tools/ubuntu-server-cloudimg-modify.sh
-
-**Warning:** the script will create files by default in:
-``/tmp/workspace/yardstick`` and the files will be owned by root!
-
-The created image can be added to OpenStack using the ``glance image-create`` or
-via the OpenStack Dashboard.
-
-Example command:
-
-::
-
-    glance --os-image-api-version 1 image-create \
-    --name yardstick-trusty-server --is-public true \
-    --disk-format qcow2 --container-format bare \
-    --file /tmp/workspace/yardstick/yardstick-trusty-server.img
-
 Installing Yardstick using Docker
 ---------------------------------
@@ -150,51 +108,13 @@ Run the Docker image:
 
 ::
 
-  docker run \
-        --privileged=true \
-        --rm \
-        -t \
-        -e "INSTALLER_TYPE=${INSTALLER_TYPE}" \
-        -e "INSTALLER_IP=${INSTALLER_IP}" \
-        opnfv/yardstick \
-        exec_tests.sh ${YARDSTICK_DB_BACKEND} ${YARDSTICK_SUITE_NAME}
-
-Where ``${INSTALLER_TYPE}`` can be apex, compass, fuel or joid, ``${INSTALLER_IP}``
-is the installer master node IP address (i.e. 10.20.0.2 is default for fuel). ``${YARDSTICK_DB_BACKEND}``
-is the IP and port number of DB, ``${YARDSTICK_SUITE_NAME}`` is the test suite you want to run.
-For more details, please refer to the Jenkins job defined in Releng project, labconfig information
-and sshkey are required. See the link
-https://git.opnfv.org/cgit/releng/tree/jjb/yardstick/yardstick-ci-jobs.yml.
-
-Note: exec_tests.sh is used for executing test suite here, furthermore, if someone
-wants to execute the test suite manually, it can be used as long as the parameters
-are configured correct. Another script called run_tests.sh is used for unittest in
-Jenkins verify job, in local manaul environment, it is recommended to run before
-test suite execuation.
-
-Basic steps performed by the **Yardstick-stable** container:
-
-1. clone yardstick and releng repos
-2. setup OS credentials (releng scripts)
-3. install yardstick and dependencies
-4. build yardstick cloud image
-5. Upload yardstick cloud image to glance
-6. upload cirros-0.3.3 cloud image and ubuntu-14.04 cloud image to glance
-7. run yardstick test scenarios
-8. cleanup
-
-If someone only wants to execute a single test case, one can log into the yardstick-stable
-container first using command:
-
-::
-
-   docker run -it openfv/yardstick /bin/bash
+   docker run --privileged=true -it opnfv/yardstick /bin/bash
 
-Then in the container run yardstick task command to execute a single test case.
+In the container, run the ``yardstick task`` command to execute a test case.
 Before executing Yardstick test case, make sure that yardstick-trusty-server
 image and yardstick flavor is available in OpenStack.
-Detailed steps about creating yardstick flavor and executing Yardstick test case
-can be found below.
+Detailed steps for creating the yardstick flavor and building the
+yardstick-trusty-server image can be found below.
 
 
 OpenStack parameters and credentials
@@ -226,6 +146,51 @@ Credential environment variables in the *openrc* file have to include at least:
 * OS_AUTH_URL
 * OS_USERNAME
 * OS_PASSWORD
 * OS_TENANT_NAME
+
+.. _guest-image:
+
+Building a guest image
+^^^^^^^^^^^^^^^^^^^^^^
+Yardstick has a tool for building an Ubuntu Cloud Server image containing all
+the required tools to run test cases supported by Yardstick. It is necessary to
+have sudo rights to use this tool.
+
+Also you may need to install several additional packages to use this tool, by
+following the commands below:
+
+::
+
+    apt-get update && apt-get install -y \
+        qemu-utils \
+        kpartx
+
+This image can be built using the following command while in the directory where
+Yardstick is installed (``~/yardstick`` if the framework is installed
+by following the commands above):
+
+::
+
+    export YARD_IMG_ARCH="amd64"
+    sudo echo "Defaults env_keep += \"YARD_IMG_ARCH\"" >> /etc/sudoers
+    sudo ./tools/yardstick-img-modify tools/ubuntu-server-cloudimg-modify.sh
+
+**Warning:** the script will create files by default in:
+``/tmp/workspace/yardstick`` and the files will be owned by root!
+If you are building this guest image inside a docker container, make sure the
+container is run in privileged mode (``--privileged=true``).
+The created image can be added to OpenStack using the ``glance image-create``
+command or via the OpenStack Dashboard.
+
+Example command:
+
+::
+
+    glance --os-image-api-version 1 image-create \
+    --name yardstick-trusty-server --is-public true \
+    --disk-format qcow2 --container-format bare \
+    --file /tmp/workspace/yardstick/yardstick-trusty-server.img
+
+
 Yardstick default key pair
 ^^^^^^^^^^^^^^^^^^^^^^^^^^
 Yardstick uses a SSH key pair to connect to the guest image. This key pair can
@@ -237,8 +202,10 @@ Examples and verifying the install
 ----------------------------------
 
 It is recommended to verify that Yardstick was installed successfully
-by executing some simple commands and test samples. Below is an example invocation
-of yardstick help command and ping.py test sample:
+by executing some simple commands and test samples. Before executing yardstick
+test cases, make sure the yardstick flavor and the yardstick-trusty-server
+image are available in glance and that the openrc file is sourced. Below is an
+example invocation of the yardstick help command and the ping.py test sample:
 
 ::
 
   yardstick -h

-- cgit 1.2.3-korg
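The verification text added above assumes that the yardstick flavor and the guest
image are already registered in OpenStack and that the credentials file has been
sourced. A minimal sketch of that preparation and of running the ping sample is
shown below; the openrc path and the flavor name, ID and sizes are illustrative
assumptions, not values taken from this patch.

::

    # Load OpenStack credentials (file name and location are illustrative).
    source ~/opnfv-openrc.sh

    # Create a small flavor for the test VMs; name, ID, RAM (MB), disk (GB)
    # and vCPU count here are assumptions.
    nova flavor-create yardstick-flavor 100 512 3 1

    # Confirm that the guest image registered earlier is visible to glance.
    glance image-list | grep yardstick-trusty-server

    # Run the ping sample to verify the installation end to end.
    yardstick task start samples/ping.yaml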