From 5bdaffe7a8b573c4755e46ec86647e303f22bb26 Mon Sep 17 00:00:00 2001
From: JingLu5
Date: Wed, 7 Sep 2016 10:49:18 +0800
Subject: Doc for Xreview by other test projects

Change-Id: I9976969344c5ac4859b0e79b88157e54ec4198d9
Signed-off-by: JingLu5
---
 docs/results/apex-os-nosdn-nofeature-ha.rst   | 267 ++++++++++++++++++++++++++
 docs/results/fuel-os-odl_l2-nofeature-ha.rst  | 171 ++++++++++++++---
 docs/results/index.rst                        |  12 +-
 docs/results/joid-os-nosdn-nofeature-noha.rst |  39 ++++
 docs/results/joid-os-onos-sfc-ha.rst          |  36 ++++
 docs/results/overview.rst                     | 151 ++++++++-------
 docs/results/results.rst                      |  53 ++---
 7 files changed, 596 insertions(+), 133 deletions(-)
 create mode 100644 docs/results/apex-os-nosdn-nofeature-ha.rst
 create mode 100644 docs/results/joid-os-nosdn-nofeature-noha.rst
 create mode 100644 docs/results/joid-os-onos-sfc-ha.rst
(limited to 'docs/results')

diff --git a/docs/results/apex-os-nosdn-nofeature-ha.rst b/docs/results/apex-os-nosdn-nofeature-ha.rst
new file mode 100644
index 000000000..faf5e62fb
--- /dev/null
+++ b/docs/results/apex-os-nosdn-nofeature-ha.rst
@@ -0,0 +1,267 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+
+
+===========================================
+Test Results for apex-os-nosdn-nofeature-ha
+===========================================
+
+.. toctree::
+   :maxdepth: 2
+
+
+Details
+=======
+
+.. _Grafana: http://testresults.opnfv.org/grafana/dashboard/db/yardstick-main
+.. _POD1: https://wiki.opnfv.org/pharos?&#community_test_labs
+
+
+Overview of test results
+------------------------
+
+See Grafana_ for viewing test result metrics for each respective test case. It
+is possible to choose which specific scenarios to look at, and then to zoom in
+on the details of each test scenario run as well.
+
+All of the test case results below are based on 4 scenario test
+runs, each run on the LF POD1_ between August 25 and 28 in
+2016.
+
+TC002
+-----
+The round-trip-time (RTT) between 2 VMs on different blades is measured using
+ping. Most test run measurements result in an average RTT between 0.74 and
+1.08 ms. A few runs start with a 0.99 - 1.07 ms RTT spike (this could be due
+to normal ARP handling). One test run has a greater RTT spike of 1.35 ms.
+More runs should be made to be able to draw firm conclusions. SLA set to
+10 ms. The SLA value is used as a reference; it has not been defined by OPNFV.
+
+TC005
+-----
+The IO read bandwidth looks similar between different dates, with an
+average between approx. 128 and 136 MB/s. Within each test run the results
+vary, with a minimum of 5 MB/s and a maximum of 446 MB/s on the totality.
+Most runs have a minimum BW of 5 MB/s (one run at 6 MB/s). The maximum BW
+varies more in absolute numbers between the dates, between 416 and 446 MB/s.
+SLA set to 400 MB/s. The SLA value is used as a reference; it has not been
+defined by OPNFV.
+
+TC010
+-----
+The measurements for memory latency are similar between test dates and result
+in approx. 1.09 ns. The variations within each test run are similar, between
+1.0860 and 1.0880 ns.
+SLA set to 30 ns. The SLA value is used as a reference; it has not been
+defined by OPNFV.
+
+TC011
+-----
+Packet delay variation between 2 VMs on different blades is measured using
+Iperf3. The reported packet delay variation varies between 0.0025 and
+0.0148 ms, with an average delay variation between 0.0056 ms and 0.0157 ms.
+
+TC012
+-----
+Between test dates, the average measurements for memory bandwidth result in
+approx. 19.70 GB/s. Within each test run the results vary more, with a
+minimum BW of 18.16 GB/s and a maximum of 20.13 GB/s on the totality.
+SLA set to 15 GB/s. The SLA value is used as a reference; it has not been
+defined by OPNFV.
+
+TC014
+-----
+The Unixbench processor test run results vary between scores 3224.4 and 3842.8,
+one result per date. The overall average score is 3659.5.
+No SLA set.
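The "SLA set to X" statements above are reference checks against an average of collected samples. A minimal sketch of such a check, with hypothetical function and variable names (this is illustrative only, not Yardstick's actual API):

```python
# Illustrative sketch: evaluate an SLA reference value such as
# "SLA set to 10 ms" against collected RTT samples.
# Names below are assumptions for illustration, not Yardstick's API.

def check_sla(samples_ms, sla_ms):
    """Return (average, passed) for RTT samples in milliseconds,
    treating the SLA value as an upper bound on the average RTT."""
    avg = sum(samples_ms) / len(samples_ms)
    return avg, avg <= sla_ms

# Sample RTTs in the range reported above (0.74 - 1.08 ms) plus one
# 1.35 ms spike.
rtt_samples = [0.74, 0.81, 0.99, 1.07, 1.08, 1.35]
average, passed = check_sla(rtt_samples, sla_ms=10.0)
print(f"avg={average:.2f} ms, SLA met: {passed}")
```

With a 10 ms reference, even the spiked sample set passes comfortably, which matches the observation that the SLA is a reference value rather than a tight bound.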
+
+TC037
+-----
+The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
+on different blades are measured when increasing the amount of UDP flows sent
+between the VMs, using pktgen as the packet generator tool.
+
+Round trip times and packet throughput between VMs can typically be affected by
+the amount of flows set up and result in higher RTT and less PPS throughput.
+
+The RTT results are similar throughout the different test dates and runs, at
+approx. 15 ms. Some test runs show an increase with many flows, in the range
+towards 16 to 17 ms. One exception standing out is Feb. 15, where the average
+RTT is stable at approx. 13 ms. The PPS results are not as consistent as the
+RTT results.
+In some test runs, when running with less than approx. 10000 flows, the PPS
+throughput is normally flatter than when running with more flows, after which
+the PPS throughput decreases, by around 20 percent in the worst case. In the
+other test runs, however, there is no significant change to the PPS throughput
+when the number of flows is increased. In some test runs the PPS is also
+greater with 1000000 flows than in other test runs where the PPS result is
+less with only 2 flows.
+
+The average PPS throughput in the different runs varies between 414000 and
+452000 PPS. The total amount of packets in each test run is approx. 7500000 to
+8200000 packets. One test run on Feb. 15 sticks out with a PPS average of
+558000 and approx. 1100000 packets in total (the same one mentioned earlier
+for the RTT results).
+
+There are lost packets reported in most of the test runs. There is no observed
+correlation between the amount of flows and the amount of lost packets.
+The amount of lost packets normally ranges between 100 and 1000 per test run,
+but there are spikes in the range of 10000 lost packets as well, and even
+more in rare cases.
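To put the quoted loss figures in perspective, the implied loss ratios can be computed directly from the totals. A small sketch with assumed round-number counters (these are not actual pktgen output fields):

```python
# Rough loss-ratio arithmetic for the packet counts quoted above.
# The sent/received totals are assumed round numbers for illustration,
# not actual pktgen counter fields.

def loss_ratio(sent, received):
    """Fraction of packets lost between sender and receiver."""
    return (sent - received) / sent

# Typical run: ~7,500,000 packets with ~1000 lost.
typical = loss_ratio(7_500_000, 7_499_000)
# Spike run: ~10,000 lost out of ~8,200,000.
spike = loss_ratio(8_200_000, 8_190_000)
# Even the spike corresponds to well under 0.2 percent loss.
print(f"typical: {typical:.4%}, spike: {spike:.4%}")
```

Even the spike runs therefore represent a small fraction of the total traffic, which is consistent with loss not correlating with the number of flows.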
+
+CPU utilization statistics are collected during the UDP flows sent between the
+VMs, using pktgen as the packet generator tool. The average measurements for
+CPU utilization ratio vary between 1% and 2%. The peak of the CPU utilization
+ratio appears around 7%.
+
+TC069
+-----
+Between test dates, the average measurements for memory bandwidth vary between
+22.6 and 29.1 GB/s. Within each test run the results vary more, with a minimum
+BW of 20.0 GB/s and a maximum of 29.5 GB/s on the totality.
+SLA set to 6 GB/s. The SLA value is used as a reference; it has not been
+defined by OPNFV.
+
+TC070
+-----
+The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
+on different blades are measured when increasing the amount of UDP flows sent
+between the VMs, using pktgen as the packet generator tool.
+
+Round trip times and packet throughput between VMs can typically be affected by
+the amount of flows set up and result in higher RTT and less PPS throughput.
+
+The RTT results are similar throughout the different test dates and runs, at
+approx. 15 ms. Some test runs show an increase with many flows, in the range
+towards 16 to 17 ms. One exception standing out is Feb. 15, where the average
+RTT is stable at approx. 13 ms. The PPS results are not as consistent as the
+RTT results.
+In some test runs, when running with less than approx. 10000 flows, the PPS
+throughput is normally flatter than when running with more flows, after which
+the PPS throughput decreases, by around 20 percent in the worst case. In the
+other test runs, however, there is no significant change to the PPS throughput
+when the number of flows is increased. In some test runs the PPS is also
+greater with 1000000 flows than in other test runs where the PPS result is
+less with only 2 flows.
+
+The average PPS throughput in the different runs varies between 414000 and
+452000 PPS. The total amount of packets in each test run is approx. 7500000 to
+8200000 packets. One test run on Feb.
15 sticks out with a PPS average of
+558000 and approx. 1100000 packets in total (the same one mentioned earlier
+for the RTT results).
+
+There are lost packets reported in most of the test runs. There is no observed
+correlation between the amount of flows and the amount of lost packets.
+The amount of lost packets normally ranges between 100 and 1000 per test run,
+but there are spikes in the range of 10000 lost packets as well, and even
+more in rare cases.
+
+Memory utilization statistics are collected during the UDP flows sent between
+the VMs, using pktgen as the packet generator tool. The average measurements
+for memory utilization vary between 225 MB and 246 MB. The peak of memory
+utilization appears around 340 MB.
+
+TC071
+-----
+The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
+on different blades are measured when increasing the amount of UDP flows sent
+between the VMs, using pktgen as the packet generator tool.
+
+Round trip times and packet throughput between VMs can typically be affected by
+the amount of flows set up and result in higher RTT and less PPS throughput.
+
+The RTT results are similar throughout the different test dates and runs, at
+approx. 15 ms. Some test runs show an increase with many flows, in the range
+towards 16 to 17 ms. One exception standing out is Feb. 15, where the average
+RTT is stable at approx. 13 ms. The PPS results are not as consistent as the
+RTT results.
+In some test runs, when running with less than approx. 10000 flows, the PPS
+throughput is normally flatter than when running with more flows, after which
+the PPS throughput decreases, by around 20 percent in the worst case. In the
+other test runs, however, there is no significant change to the PPS throughput
+when the number of flows is increased. In some test runs the PPS is also
+greater with 1000000 flows than in other test runs where the PPS result is
+less with only 2 flows.
+
+The average PPS throughput in the different runs varies between 414000 and
+452000 PPS. The total amount of packets in each test run is approx. 7500000 to
+8200000 packets. One test run on Feb. 15 sticks out with a PPS average of
+558000 and approx. 1100000 packets in total (the same one mentioned earlier
+for the RTT results).
+
+There are lost packets reported in most of the test runs. There is no observed
+correlation between the amount of flows and the amount of lost packets.
+The amount of lost packets normally ranges between 100 and 1000 per test run,
+but there are spikes in the range of 10000 lost packets as well, and even
+more in rare cases.
+
+Cache utilization statistics are collected during the UDP flows sent between
+the VMs, using pktgen as the packet generator tool. The average measurements
+for cache utilization vary between 205 MB and 212 MB.
+
+TC072
+-----
+The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
+on different blades are measured when increasing the amount of UDP flows sent
+between the VMs, using pktgen as the packet generator tool.
+
+Round trip times and packet throughput between VMs can typically be affected by
+the amount of flows set up and result in higher RTT and less PPS throughput.
+
+The RTT results are similar throughout the different test dates and runs, at
+approx. 15 ms. Some test runs show an increase with many flows, in the range
+towards 16 to 17 ms. One exception standing out is Feb. 15, where the average
+RTT is stable at approx. 13 ms. The PPS results are not as consistent as the
+RTT results.
+In some test runs, when running with less than approx. 10000 flows, the PPS
+throughput is normally flatter than when running with more flows, after which
+the PPS throughput decreases, by around 20 percent in the worst case. In the
+other test runs, however, there is no significant change to the PPS
+throughput when the number of flows is increased.
In some test runs the PPS
+is also greater with 1000000 flows than in other test runs where the PPS
+result is less with only 2 flows.
+
+The average PPS throughput in the different runs varies between 414000 and
+452000 PPS. The total amount of packets in each test run is approx. 7500000 to
+8200000 packets. One test run on Feb. 15 sticks out with a PPS average of
+558000 and approx. 1100000 packets in total (the same one mentioned earlier
+for the RTT results).
+
+There are lost packets reported in most of the test runs. There is no observed
+correlation between the amount of flows and the amount of lost packets.
+The amount of lost packets normally ranges between 100 and 1000 per test run,
+but there are spikes in the range of 10000 lost packets as well, and even
+more in rare cases.
+
+Network utilization statistics are collected during the UDP flows sent between
+the VMs, using pktgen as the packet generator tool. The total number of packets
+received per second averaged 200 kpps, and the total number of packets
+transmitted per second averaged 600 kpps.
+
+Detailed test results
+---------------------
+The scenario was run on LF POD1_ with:
+Apex
+OpenStack Mitaka
+OpenVirtualSwitch 2.5.90
+OpenDayLight Beryllium
+
+Rationale for decisions
+-----------------------
+Pass
+
+Tests were successfully executed and metrics collected.
+No SLA was verified. To be decided in the next release of OPNFV.
+
+Conclusions and recommendations
+-------------------------------
+The pktgen test configuration has a relatively large base effect on RTT in
+TC037 compared to TC002, where there is no background load at all: approx.
+15 ms compared to approx. 0.5 ms, which is more than a 3000 percent
+difference in RTT results.
+RTT and throughput in particular come out with better results than in, for
+instance, the *fuel-os-nosdn-nofeature-ha* scenario. The reason for this
+should probably be further analyzed and understood.
Also of interest could be
+to make further analyses to find patterns and reasons for lost traffic.
+Also of interest could be to see if there are continuous variations where
+some test cases stand out with better or worse results than the general test
+case.
+
diff --git a/docs/results/fuel-os-odl_l2-nofeature-ha.rst b/docs/results/fuel-os-odl_l2-nofeature-ha.rst
index 914781684..e9ef8fe65 100644
--- a/docs/results/fuel-os-odl_l2-nofeature-ha.rst
+++ b/docs/results/fuel-os-odl_l2-nofeature-ha.rst
@@ -14,7 +14,7 @@ Test Results for fuel-os-odl_l2-nofeature-ha
 Details
 =======
 
-.. _Grafana: http://130.211.154.108/grafana/dashboard/db/yardstick-main
+.. _Grafana: http://testresults.opnfv.org/grafana/dashboard/db/yardstick-main
 .. _POD2: https://wiki.opnfv.org/pharos?&#community_test_labs
 
 Overview of test results
@@ -24,21 +24,17 @@ See Grafana_ for viewing test result metrics for each respective test case. It
 is possible to chose which specific scenarios to look at, and then to zoom in
 on the details of each run test scenario as well.
 
-All of the test case results below are based on 6 scenario test
-runs, each run on the Ericsson POD2_ between February 13 and 24 in 2016. Test
-case TC011_ is the greater exception for which there are only 2 test runs
-available, due to earlier problems with InfluxDB test result population.
-The best would be to have more runs to draw better conclusions from, but these
-are the only runs available at the time of OPNFV R2 release.
+All of the test case results below are based on 4 scenario test
+runs, each run on the Ericsson POD2_ or LF POD2_ between August 25 and 29 in
+2016.
 
 TC002
 -----
 The round-trip-time (RTT) between 2 VMs on different blades is measured using
-ping. Most test run measurements result on average between 0.3 and 0.5 ms, but
-one date (Feb. 23) sticks out with an RTT average of 1 ms.
-A few runs start with a 1 - 2 ms RTT spike (This could be because of normal ARP
-handling).
One test run has a greater RTT spike of 3.9 ms, which is the same -one with the 0.9 ms average. The other runs have no similar spike at all. +ping. Most test run measurements result on average between 0.5 and 0.6 ms. +A few runs start with a 1 - 1.5 ms RTT spike (This could be because of normal ARP +handling). One test run has a greater RTT spike of 1.9 ms, which is the same +one with the 0.7 ms average. The other runs have no similar spike at all. To be able to draw conclusions more runs should be made. SLA set to 10 ms. The SLA value is used as a reference, it has not been defined by OPNFV. @@ -46,10 +42,10 @@ been defined by OPNFV. TC005 ----- The IO read bandwidth looks similar between different dates, with an -average between approx. 165 and 185 MB/s. Within each test run the results -vary, with a minimum 2 MB/s and maximum 617 MB/s on the totality. Most runs +average between approx. 170 and 200 MB/s. Within each test run the results +vary, with a minimum 2 MB/s and maximum 838 MB/s on the totality. Most runs have a minimum BW of 3 MB/s (two runs at 2 MB/s). The maximum BW varies more in -absolute numbers between the dates, between 566 and 617 MB/s. +absolute numbers between the dates, between 617 and 838 MB/s. SLA set to 400 MB/s. The SLA value is used as a reference, it has not been defined by OPNFV. @@ -64,8 +60,6 @@ by OPNFV. TC011 ----- -Only 2 test runs are available to report results on. - Packet delay variation between 2 VMs on different blades is measured using Iperf3. On the first date the reported packet delay variation varies between 0.0025 and 0.011 ms, with an average delay variation of 0.0067 ms. @@ -74,9 +68,7 @@ an average delay variation of 0.004 ms. TC012 ----- -Results are reported for 5 test runs. It is not known why the 6:th test run -is missing. -Between test dates the average measurements for memory bandwidth vary between +Between test dates, the average measurements for memory bandwidth vary between 17.4 and 17.9 GB/s. 
Within each test run the results vary more, with a minimal
 BW of 16.4 GB/s and maximum of 18.2 GB/s on the totality.
 SLA set to 15 GB/s. The SLA value is used as a reference, it has not been
@@ -84,16 +76,12 @@ defined by OPNFV.
 
 TC014
 -----
-Results are reported for 5 test runs. It is not known why the 6:th test run
-is missing.
 The Unixbench processor test run results vary between scores 3080 and 3240,
 one result each date. The average score on the total is 3150.
 No SLA set.
 
 TC037
 -----
-Results are reported for 5 test runs. It is not currently known why the 6:th
-test run is missing.
 The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
 on different blades are measured when increasing the amount of UDP flows sent
 between the VMs using pktgen as packet generator tool.
@@ -126,12 +114,138 @@ The lost amount of packets normally range between
 but there are spikes in the range of 10000 lost packets as well, and even
 more in a rare cases.
 
+CPU utilization statistics are collected during the UDP flows sent between the
+VMs, using pktgen as the packet generator tool. The average measurements for
+CPU utilization ratio vary between 1% and 2%. The peak of the CPU utilization
+ratio appears around 7%.
+
+TC069
+-----
+Between test dates, the average measurements for memory bandwidth vary between
+15.5 and 25.4 GB/s. Within each test run the results vary more, with a minimum
+BW of 9.7 GB/s and a maximum of 29.5 GB/s on the totality.
+SLA set to 6 GB/s. The SLA value is used as a reference; it has not been
+defined by OPNFV.
+
+TC070
+-----
+The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
+on different blades are measured when increasing the amount of UDP flows sent
+between the VMs, using pktgen as the packet generator tool.
+
+Round trip times and packet throughput between VMs can typically be affected by
+the amount of flows set up and result in higher RTT and less PPS throughput.
+
+The RTT results are similar throughout the different test dates and runs, at
+approx. 15 ms. Some test runs show an increase with many flows, in the range
+towards 16 to 17 ms. One exception standing out is Feb. 15, where the average
+RTT is stable at approx. 13 ms. The PPS results are not as consistent as the
+RTT results.
+In some test runs, when running with less than approx. 10000 flows, the PPS
+throughput is normally flatter than when running with more flows, after which
+the PPS throughput decreases, by around 20 percent in the worst case. In the
+other test runs, however, there is no significant change to the PPS throughput
+when the number of flows is increased. In some test runs the PPS is also
+greater with 1000000 flows than in other test runs where the PPS result is
+less with only 2 flows.
+
+The average PPS throughput in the different runs varies between 414000 and
+452000 PPS. The total amount of packets in each test run is approx. 7500000 to
+8200000 packets. One test run on Feb. 15 sticks out with a PPS average of
+558000 and approx. 1100000 packets in total (the same one mentioned earlier
+for the RTT results).
+
+There are lost packets reported in most of the test runs. There is no observed
+correlation between the amount of flows and the amount of lost packets.
+The amount of lost packets normally ranges between 100 and 1000 per test run,
+but there are spikes in the range of 10000 lost packets as well, and even
+more in rare cases.
+
+Memory utilization statistics are collected during the UDP flows sent between
+the VMs, using pktgen as the packet generator tool. The average measurements
+for memory utilization vary between 225 MB and 246 MB. The peak of memory
+utilization appears around 340 MB.
+
+TC071
+-----
+The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
+on different blades are measured when increasing the amount of UDP flows sent
+between the VMs, using pktgen as the packet generator tool.
+
+Round trip times and packet throughput between VMs can typically be affected by
+the amount of flows set up and result in higher RTT and less PPS throughput.
+
+The RTT results are similar throughout the different test dates and runs, at
+approx. 15 ms. Some test runs show an increase with many flows, in the range
+towards 16 to 17 ms. One exception standing out is Feb. 15, where the average
+RTT is stable at approx. 13 ms. The PPS results are not as consistent as the
+RTT results.
+In some test runs, when running with less than approx. 10000 flows, the PPS
+throughput is normally flatter than when running with more flows, after which
+the PPS throughput decreases, by around 20 percent in the worst case. In the
+other test runs, however, there is no significant change to the PPS throughput
+when the number of flows is increased. In some test runs the PPS is also
+greater with 1000000 flows than in other test runs where the PPS result is
+less with only 2 flows.
+
+The average PPS throughput in the different runs varies between 414000 and
+452000 PPS. The total amount of packets in each test run is approx. 7500000 to
+8200000 packets. One test run on Feb. 15 sticks out with a PPS average of
+558000 and approx. 1100000 packets in total (the same one mentioned earlier
+for the RTT results).
+
+There are lost packets reported in most of the test runs. There is no observed
+correlation between the amount of flows and the amount of lost packets.
+The amount of lost packets normally ranges between 100 and 1000 per test run,
+but there are spikes in the range of 10000 lost packets as well, and even
+more in rare cases.
+
+Cache utilization statistics are collected during the UDP flows sent between
+the VMs, using pktgen as the packet generator tool. The average measurements
+for cache utilization vary between 205 MB and 212 MB.
+
+TC072
+-----
+The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
+on different blades are measured when increasing the amount of UDP flows sent
+between the VMs, using pktgen as the packet generator tool.
+
+Round trip times and packet throughput between VMs can typically be affected by
+the amount of flows set up and result in higher RTT and less PPS throughput.
+
+The RTT results are similar throughout the different test dates and runs, at
+approx. 15 ms. Some test runs show an increase with many flows, in the range
+towards 16 to 17 ms. One exception standing out is Feb. 15, where the average
+RTT is stable at approx. 13 ms. The PPS results are not as consistent as the
+RTT results.
+In some test runs, when running with less than approx. 10000 flows, the PPS
+throughput is normally flatter than when running with more flows, after which
+the PPS throughput decreases, by around 20 percent in the worst case. In the
+other test runs, however, there is no significant change to the PPS throughput
+when the number of flows is increased. In some test runs the PPS is also
+greater with 1000000 flows than in other test runs where the PPS result is
+less with only 2 flows.
+
+The average PPS throughput in the different runs varies between 414000 and
+452000 PPS. The total amount of packets in each test run is approx. 7500000 to
+8200000 packets. One test run on Feb. 15 sticks out with a PPS average of
+558000 and approx. 1100000 packets in total (the same one mentioned earlier
+for the RTT results).
+
+There are lost packets reported in most of the test runs. There is no observed
+correlation between the amount of flows and the amount of lost packets.
+The amount of lost packets normally ranges between 100 and 1000 per test run,
+but there are spikes in the range of 10000 lost packets as well, and even
+more in rare cases.
+
+Network utilization statistics are collected during the UDP flows sent between
+the VMs, using pktgen as the packet generator tool. The total number of packets
+received per second averaged 200 kpps, and the total number of packets
+transmitted per second averaged 600 kpps.
+
 Detailed test results
 ---------------------
-The scenario was run on Ericsson POD2_ with:
-Fuel 8.0
-OpenStack Liberty
-OpenVirtualSwitch 2.3.1
+The scenario was run on Ericsson POD2_ and LF POD2_ with:
+Fuel 9.0
+OpenStack Mitaka
+OpenVirtualSwitch 2.5.90
 OpenDayLight Beryllium
 
 Rationale for decisions
@@ -154,3 +268,4 @@ to make further analyzes to find patterns and reasons for lost traffic.
 Also of interest could be to see if there are continuous variations where
 some test cases stand out with better or worse results than the general test
 case.
+
diff --git a/docs/results/index.rst b/docs/results/index.rst
index b828d1426..2b67f1b22 100644
--- a/docs/results/index.rst
+++ b/docs/results/index.rst
@@ -3,12 +3,12 @@
 .. http://creativecommons.org/licenses/by/4.0
 .. (c) OPNFV, Ericsson AB and others.
 
-====================================================
-Yardstick Test Results for OPNFV Brahmaputra Release
-====================================================
+======================
+Yardstick test results
+======================
 
 .. toctree::
-   :maxdepth: 2
+   :maxdepth: 4
 
-   overview.rst
-   results.rst
+.. include:: ./overview.rst
+.. include:: ./results.rst
diff --git a/docs/results/joid-os-nosdn-nofeature-noha.rst b/docs/results/joid-os-nosdn-nofeature-noha.rst
new file mode 100644
index 000000000..a68a6cd45
--- /dev/null
+++ b/docs/results/joid-os-nosdn-nofeature-noha.rst
@@ -0,0 +1,39 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+
+
+=============================================
+Test Results for joid-os-nosdn-nofeature-noha
+=============================================
+
+..
toctree::
+   :maxdepth: 2
+
+
+Details
+=======
+
+.. after this doc is filled, remove all comments and include the scenario in
+.. results.rst by removing the comment on the file name.
+
+
+Overview of test results
+------------------------
+
+.. general on metrics collected, number of iterations
+
+Detailed test results
+---------------------
+
+.. info on lab, installer, scenario
+
+Rationale for decisions
+-----------------------
+.. result analysis, pass/fail
+
+Conclusions and recommendations
+-------------------------------
+
+.. did the expected behavior occur?
+
diff --git a/docs/results/joid-os-onos-sfc-ha.rst b/docs/results/joid-os-onos-sfc-ha.rst
new file mode 100644
index 000000000..3d80d38ef
--- /dev/null
+++ b/docs/results/joid-os-onos-sfc-ha.rst
@@ -0,0 +1,36 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+
+
+=====================================
+Test Results for joid-os-onos-sfc-ha
+=====================================
+
+.. toctree::
+   :maxdepth: 2
+
+
+Details
+=======
+
+.. after this doc is filled, remove all comments and include the scenario in
+.. results.rst by removing the comment on the file name.
+
+
+Overview of test results
+------------------------
+
+.. general on metrics collected, number of iterations
+
+Detailed test results
+---------------------
+
+.. info on lab, installer, scenario
+
+Rationale for decisions
+-----------------------
+.. result analysis, pass/fail
+
+Conclusions and recommendations
+-------------------------------
diff --git a/docs/results/overview.rst b/docs/results/overview.rst
index 7f3a34e56..ee0ebe504 100644
--- a/docs/results/overview.rst
+++ b/docs/results/overview.rst
@@ -3,24 +3,10 @@
 .. http://creativecommons.org/licenses/by/4.0
 .. (c) OPNFV, Ericsson AB and others.
 
-=====================
-Yardstick Test Report
-=====================
-
-..
toctree::
-   :maxdepth: 2
-
-Introduction
-============
-
-Document Identifier
--------------------
-
-This document is part of deliverables of the OPNFV release brahmaputra.3.0
-
-Scope
------
+Yardstick test result document overview
+=======================================
 
+.. _`Yardstick user guide`: http://artifacts.opnfv.org/yardstick/docs/userguide/index.html
 .. _Dashboard: http://testresults.opnfv.org/grafana/dashboard/db/yardstick-main
 .. _Jenkins: https://build.opnfv.org/ci/view/yardstick/
 .. _Scenarios: http://testresults.opnfv.org/grafana/dashboard/db/yardstick-scenarios
@@ -28,70 +14,89 @@
 This document provides an overview of the results of test cases developed by
 the OPNFV Yardstick Project, executed on OPNFV community labs.
 
-OPNFV Continous Integration provides automated build, deploy and testing for
-the software developed in OPNFV. Unless stated, the reported tests are
-automated via Jenkins Jobs.
-
-Test results are visible in the following dashboard:
-
-* *Yardstick* Dashboard_: uses influx DB to store test results and Grafana for
-  visualization (user: opnfv/ password: opnfv)
-
-
-References
-----------
-
-* IEEE Std 829-2008. "Standard for Software and System Test Documentation".
-
-* OPNFV Brahamputra release note for Yardstick.
-
-
-
-General
-=======
+The Yardstick project is described in the `Yardstick user guide`_.
 
-Yardstick Test Cases have been executed for scenarios and features defined in
-this OPNFV release.
+Yardstick is run systematically at the end of a fresh OPNFV installation.
+The system under test (SUT) is installed by one of the installers Apex,
+Compass, Fuel or Joid on a Performance Optimized Datacenter (POD); one single
+installer per POD. All the runnable test cases are run sequentially. The
+installer and the POD are taken into account when evaluating whether a test
+case can be run or not. That is why the number of test cases may vary from
+one installer to another and from one POD to another.
-The test environments were installed by one of the following: Apex, Compass,
-Fuel or Joid; one single installer per POD.
+OPNFV CI provides automated build, deploy and testing for
+the software developed in OPNFV. Unless stated, the reported tests are
+automated via Jenkins Jobs. Yardstick test results from OPNFV Continuous
+Integration can be found in the following dashboard:
 
-The results of executed tests are available in Dashboard_ and all logs stored
-in Jenkins_.
+* *Yardstick* Dashboard_: uses influx DB to store Yardstick CI test results and
+  Grafana for visualization (user: opnfv / password: opnfv)
 
-After one week of measurments, in general, SDN ONOS showed lower latency than
-SDN ODL, which showed lower latency than an environment installed with pure
-OpenStack. Additional time and PODs make this a non-conclusive statement, see
-Scenarios_ for a snapshot and Dashboard_ for complete results.
+The results of executed test cases are available in Dashboard_ and all logs are
+stored in Jenkins_.
 
 It was not possible to execute the entire Yardstick test cases suite on the
 PODs assigned for release verification over a longer period of time, due to
 continuous work on the software components and blocking faults either on
-environment, feature or test framework.
-
-Four consecutive successful runs was defined as criteria for release.
-It is a recommendation to run Yardstick test cases over a longer period
-of time in order to better understand the behavior of the system.
+environment, features or test framework.
+
+The list of scenarios supported by each installer can be described as follows:
+
++-------------------------+---------+---------+---------+---------+
+| Scenario                | Apex    | Compass | Fuel    | Joid    |
++=========================+=========+=========+=========+=========+
+| os-nosdn-nofeature-noha |         |         |         | X       |
++-------------------------+---------+---------+---------+---------+
+| os-nosdn-nofeature-ha   | X       |         | X       | X       |
++-------------------------+---------+---------+---------+---------+
+| os-odl_l2-nofeature-ha  | X       | X       | X       | X       |
++-------------------------+---------+---------+---------+---------+
+| os-odl_l2-nofeature-noha|         | X       |         |         |
++-------------------------+---------+---------+---------+---------+
+| os-odl_l3-nofeature-ha  | X       |         | X       |         |
++-------------------------+---------+---------+---------+---------+
+| os-odl_l3-nofeature-noha|         | X       |         |         |
++-------------------------+---------+---------+---------+---------+
+| os-onos-sfc-ha          | X       |         | X       | X       |
++-------------------------+---------+---------+---------+---------+
+| os-onos-nofeature-ha    | X       |         | X       | X       |
++-------------------------+---------+---------+---------+---------+
+| os-onos-nofeature-noha  |         | X       |         |         |
++-------------------------+---------+---------+---------+---------+
+| os-odl_l2-sfc-ha        |         |         | X       |         |
++-------------------------+---------+---------+---------+---------+
+| os-odl_l2-sfc-noha      | X       | X       |         |         |
++-------------------------+---------+---------+---------+---------+
+| os-odl_l2-bgpvpn-ha     | X       |         | X       |         |
++-------------------------+---------+---------+---------+---------+
+| os-odl_l2-bgpvpn-noha   |         | X       |         |         |
++-------------------------+---------+---------+---------+---------+
+| os-nosdn-kvm-ha         |         |         | X       |         |
++-------------------------+---------+---------+---------+---------+
+| os-nosdn-kvm-noha       |         | X       |         |         |
++-------------------------+---------+---------+---------+---------+
+| os-nosdn-ovs-ha         |         |         |         |         |
++-------------------------+---------+---------+---------+---------+
+| os-nosdn-ovs-noha       | X       | X       |         |         |
++-------------------------+---------+---------+---------+---------+ +| os-ocl-nofeature-ha | | | | | ++-------------------------+---------+---------+---------+---------+ +| os-nosdn-lxd-ha | | | | X | ++-------------------------+---------+---------+---------+---------+ +| os-nosdn-lxd-noha | | | | X | ++-------------------------+---------+---------+---------+---------+ +| os-odl_l2-fdio-noha | X | | | | ++-------------------------+---------+---------+---------+---------+ + +To qualify for release, the scenarios must have deployed and been successfully +tested in four consecutive installations to establish stability of deployment +and feature capability. It is a recommendation to run Yardstick test +cases over a longer period of time in order to better understand the behavior +of the system under test. +References +---------- -Document change procedures and history --------------------------------------- +* IEEE Std 829-2008. "Standard for Software and System Test Documentation". -+--------------------------------------+--------------------------------------+ -| **Project** | Yardstick | -| | | -+--------------------------------------+--------------------------------------+ -| **Repo/tag** | yardstick/brahmaputra.3.0 | -| | | -+--------------------------------------+--------------------------------------+ -| **Release designation** | Brahmaputra | -| | | -+--------------------------------------+--------------------------------------+ -| **Release date** | Apr 27th, 2016 | -| | | -+--------------------------------------+--------------------------------------+ -| **Purpose of the delivery** | OPNFV Brahmaputra release test | -| | results. | -| | | -+--------------------------------------+--------------------------------------+ +* OPNFV Colorado release note for Yardstick. diff --git a/docs/results/results.rst b/docs/results/results.rst index f3831b865..bfdba20e9 100644 --- a/docs/results/results.rst +++ b/docs/results/results.rst @@ -2,14 +2,13 @@ .. License. .. 
http://creativecommons.org/licenses/by/4.0 +Results listed by scenario +========================== -====================== -Yardstick Test Results -====================== - -.. toctree:: - :maxdepth: 2 - +The following sections describe the yardstick results as evaluated for the +Colorado release scenario validation runs. Each section describes the +determined state of the specific scenario as deployed in the Colorado +release process. Scenario Results ================ @@ -24,22 +23,29 @@ OPNFV labs, triggered by OPNFV CI pipeline, documented per scenario. Ready scenarios --------------- -The following scenarios run at least four consecutive times Yardstick test -cases suite: +The following scenarios have been successfully tested at least four consecutive +times: .. toctree:: :maxdepth: 1 - apex-os-odl_l2-nofeature-ha.rst - compass-os-nosdn-nofeature-ha.rst - compass-os-odl_l2-nofeature-ha.rst - compass-os-onos-nofeature-ha.rst - fuel-os-nosdn-nofeature-ha.rst fuel-os-odl_l2-nofeature-ha.rst - fuel-os-onos-nofeature-ha.rst - fuel-os-nosdn-kvm-ha - joid-os-odl_l2-nofeature-ha.rst - + fuel-os-odl_l3-nofeature-noha.rst + fuel-os-nosdn-kvm-ha.rst + fuel-os-nosdn-kvm-noha.rst + fuel-os-odl_l2-bgpvpn-ha.rst + fuel-os-odl_l2-bgpvpn-noha.rst + compass-os-nosdn-nofeature-ha.rst + compass-os-odl_l2-moon-ha.rst + compass-os-onos-sfc-ha.rst + compass-onos-nofeature-ha.rst + joid-os-nosdn-nofeature-ha.rst + joid-os-nosdn-nofeature-noha.rst + joid-odl_l2-nofeature-ha.rst + joid-os-onos-nofeature-ha.rst + joid-os-onos-sfc-ha.rst + apex-os-nosdn-nofeature-ha.rst + apex-os-odl_l2-bgpvpn-ha.rst Limitations ----------- @@ -49,20 +55,13 @@ least one time however less than four consecutive times, measurements collected: - * fuel-os-odl_l2-bgpvpn-ha - * fuel-os-odl_l3-nofeature-ha - * joid-os-nosdn-nofeature-ha - - * joid-os-onos-nofeature-ha - For the following scenario, Yardstick generic test cases suite was executed four consecutive times, measurements collected; no feature test 
cases were executed, therefore the feature is not verified by Yardstick: - * apex-os-odl_l2-bgpvpn-ha For the following scenario, Yardstick generic test cases suite was executed @@ -75,7 +74,6 @@ were executed, therefore the feature is not verified by Yardstick: Test results of executed tests are avilable in Dashboard_ and logs in Jenkins_. - Feature Test Results ==================== @@ -91,5 +89,8 @@ The following features were verified by Yardstick test cases: * Virtual Traffic Classifier (see :doc:`yardstick-opnfv-vtc`) + * StorPerf + .. note:: The test cases for IPv6 and Parser Projects are included in the compass scenario. + -- cgit 1.2.3-korg