Diffstat (limited to 'docs')
-rw-r--r--  docs/release/results/euphrates_fraser_comparsion.rst  8
-rw-r--r--  docs/release/results/images/tc002_pod.png  bin 0 -> 39106 bytes
-rw-r--r--  docs/release/results/images/tc002_scenario.png  bin 0 -> 44920 bytes
-rw-r--r--  docs/release/results/images/tc010_pod.png  bin 0 -> 44349 bytes
-rw-r--r--  docs/release/results/images/tc010_scenario.png  bin 0 -> 51251 bytes
-rw-r--r--  docs/release/results/images/tc011_pod.png  bin 0 -> 43308 bytes
-rw-r--r--  docs/release/results/images/tc011_scenario.png  bin 0 -> 43647 bytes
-rw-r--r--  docs/release/results/images/tc012_pod.png  bin 0 -> 47996 bytes
-rw-r--r--  docs/release/results/images/tc012_scenario.png  bin 0 -> 51405 bytes
-rw-r--r--  docs/release/results/images/tc014_pod.png  bin 0 -> 36462 bytes
-rw-r--r--  docs/release/results/images/tc014_scenario.png  bin 0 -> 42056 bytes
-rw-r--r--  docs/release/results/images/tc069_pod.png  bin 0 -> 41823 bytes
-rw-r--r--  docs/release/results/images/tc069_scenario.png  bin 0 -> 46728 bytes
-rw-r--r--  docs/release/results/images/tc082_pod.png  bin 0 -> 28096 bytes
-rw-r--r--  docs/release/results/images/tc082_scenario.png  bin 0 -> 16082 bytes
-rw-r--r--  docs/release/results/images/tc083_pod.png  bin 0 -> 29533 bytes
-rw-r--r--  docs/release/results/images/tc083_scenario.png  bin 0 -> 16481 bytes
-rw-r--r--  docs/release/results/index.rst  1
-rw-r--r--  docs/release/results/os-nosdn-kvm-ha.rst  270
-rw-r--r--  docs/release/results/os-nosdn-nofeature-ha.rst  492
-rw-r--r--  docs/release/results/os-nosdn-nofeature-noha.rst  259
-rw-r--r--  docs/release/results/os-odl_l2-bgpvpn-ha.rst  53
-rw-r--r--  docs/release/results/os-odl_l2-nofeature-ha.rst  743
-rw-r--r--  docs/release/results/os-odl_l2-sfc-ha.rst  231
-rw-r--r--  docs/release/results/os-onos-nofeature-ha.rst  257
-rw-r--r--  docs/release/results/os-onos-sfc-ha.rst  517
-rw-r--r--  docs/release/results/overview.rst  76
-rw-r--r--  docs/release/results/results.rst  32
-rw-r--r--  docs/release/results/tc002-network-latency.rst  317
-rw-r--r--  docs/release/results/tc010-memory-read-latency.rst  299
-rw-r--r--  docs/release/results/tc011-packet-delay-variation.rst  262
-rw-r--r--  docs/release/results/tc012-memory-read-write-bandwidth.rst  299
-rw-r--r--  docs/release/results/tc014-cpu-processing-speed.rst  298
-rw-r--r--  docs/release/results/tc069-memory-write-bandwidth.rst  300
-rw-r--r--  docs/release/results/tc082-context-switches-under-load.rst  129
-rw-r--r--  docs/release/results/tc083-network-throughput-between-vm.rst  129
-rwxr-xr-x  docs/testing/developer/devguide/devguide.rst  3
-rw-r--r--  docs/testing/user/userguide/opnfv_yardstick_tc088.rst  129
-rw-r--r--  docs/testing/user/userguide/opnfv_yardstick_tc089.rst  129
39 files changed, 2346 insertions, 2887 deletions
diff --git a/docs/release/results/euphrates_fraser_comparsion.rst b/docs/release/results/euphrates_fraser_comparsion.rst
new file mode 100644
index 000000000..222dc8bb0
--- /dev/null
+++ b/docs/release/results/euphrates_fraser_comparsion.rst
@@ -0,0 +1,8 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+
+=======================================================
+Test results analysis for Euphrates and Fraser releases
+=======================================================
+
diff --git a/docs/release/results/images/tc002_pod.png b/docs/release/results/images/tc002_pod.png
new file mode 100644
index 000000000..7f92c471d
--- /dev/null
+++ b/docs/release/results/images/tc002_pod.png
Binary files differ
diff --git a/docs/release/results/images/tc002_scenario.png b/docs/release/results/images/tc002_scenario.png
new file mode 100644
index 000000000..0ea1ecec6
--- /dev/null
+++ b/docs/release/results/images/tc002_scenario.png
Binary files differ
diff --git a/docs/release/results/images/tc010_pod.png b/docs/release/results/images/tc010_pod.png
new file mode 100644
index 000000000..c7c623481
--- /dev/null
+++ b/docs/release/results/images/tc010_pod.png
Binary files differ
diff --git a/docs/release/results/images/tc010_scenario.png b/docs/release/results/images/tc010_scenario.png
new file mode 100644
index 000000000..7c53a5fab
--- /dev/null
+++ b/docs/release/results/images/tc010_scenario.png
Binary files differ
diff --git a/docs/release/results/images/tc011_pod.png b/docs/release/results/images/tc011_pod.png
new file mode 100644
index 000000000..8fec72f5a
--- /dev/null
+++ b/docs/release/results/images/tc011_pod.png
Binary files differ
diff --git a/docs/release/results/images/tc011_scenario.png b/docs/release/results/images/tc011_scenario.png
new file mode 100644
index 000000000..2d78ea372
--- /dev/null
+++ b/docs/release/results/images/tc011_scenario.png
Binary files differ
diff --git a/docs/release/results/images/tc012_pod.png b/docs/release/results/images/tc012_pod.png
new file mode 100644
index 000000000..0f2a00910
--- /dev/null
+++ b/docs/release/results/images/tc012_pod.png
Binary files differ
diff --git a/docs/release/results/images/tc012_scenario.png b/docs/release/results/images/tc012_scenario.png
new file mode 100644
index 000000000..16257988d
--- /dev/null
+++ b/docs/release/results/images/tc012_scenario.png
Binary files differ
diff --git a/docs/release/results/images/tc014_pod.png b/docs/release/results/images/tc014_pod.png
new file mode 100644
index 000000000..63aead2e8
--- /dev/null
+++ b/docs/release/results/images/tc014_pod.png
Binary files differ
diff --git a/docs/release/results/images/tc014_scenario.png b/docs/release/results/images/tc014_scenario.png
new file mode 100644
index 000000000..98f23ba1b
--- /dev/null
+++ b/docs/release/results/images/tc014_scenario.png
Binary files differ
diff --git a/docs/release/results/images/tc069_pod.png b/docs/release/results/images/tc069_pod.png
new file mode 100644
index 000000000..66b272cb4
--- /dev/null
+++ b/docs/release/results/images/tc069_pod.png
Binary files differ
diff --git a/docs/release/results/images/tc069_scenario.png b/docs/release/results/images/tc069_scenario.png
new file mode 100644
index 000000000..caf12f8d5
--- /dev/null
+++ b/docs/release/results/images/tc069_scenario.png
Binary files differ
diff --git a/docs/release/results/images/tc082_pod.png b/docs/release/results/images/tc082_pod.png
new file mode 100644
index 000000000..89e01666b
--- /dev/null
+++ b/docs/release/results/images/tc082_pod.png
Binary files differ
diff --git a/docs/release/results/images/tc082_scenario.png b/docs/release/results/images/tc082_scenario.png
new file mode 100644
index 000000000..637a739c3
--- /dev/null
+++ b/docs/release/results/images/tc082_scenario.png
Binary files differ
diff --git a/docs/release/results/images/tc083_pod.png b/docs/release/results/images/tc083_pod.png
new file mode 100644
index 000000000..f874191e4
--- /dev/null
+++ b/docs/release/results/images/tc083_pod.png
Binary files differ
diff --git a/docs/release/results/images/tc083_scenario.png b/docs/release/results/images/tc083_scenario.png
new file mode 100644
index 000000000..afd80aa02
--- /dev/null
+++ b/docs/release/results/images/tc083_scenario.png
Binary files differ
diff --git a/docs/release/results/index.rst b/docs/release/results/index.rst
index 0560152e0..3ec9e1cff 100644
--- a/docs/release/results/index.rst
+++ b/docs/release/results/index.rst
@@ -14,3 +14,4 @@ Yardstick test results
.. include:: ./overview.rst
.. include:: ./results.rst
+.. include:: ./euphrates_fraser_comparsion.rst
diff --git a/docs/release/results/os-nosdn-kvm-ha.rst b/docs/release/results/os-nosdn-kvm-ha.rst
deleted file mode 100644
index a8a56f80e..000000000
--- a/docs/release/results/os-nosdn-kvm-ha.rst
+++ /dev/null
@@ -1,270 +0,0 @@
-.. This work is licensed under a Creative Commons Attribution 4.0 International
-.. License.
-.. http://creativecommons.org/licenses/by/4.0
-
-
-================================
-Test Results for os-nosdn-kvm-ha
-================================
-
-.. toctree::
- :maxdepth: 2
-
-
-fuel
-====
-
-.. _Grafana: http://testresults.opnfv.org/grafana/dashboard/db/yardstick-main
-.. _POD2: https://wiki.opnfv.org/pharos?&#community_test_labs
-
-Overview of test results
-------------------------
-
-See Grafana_ for viewing test result metrics for each respective test case. It
-is possible to choose which specific scenarios to look at, and then to zoom in
-on the details of each test scenario run as well.
-
-All of the test case results below are based on 4 scenario test
-runs, each run on the Ericsson POD2_ or LF POD2_ between August 24 and 30 in
-2016.
-
-TC002
------
-The round-trip-time (RTT) between 2 VMs on different blades is measured using
-ping. Most test run measurements result on average between 0.44 and 0.75 ms.
-A few runs start with a 0.65 - 0.68 ms RTT spike (This could be because of
-normal ARP handling). One test run has a greater RTT spike of 1.49 ms.
-To be able to draw conclusions more runs should be made. SLA set to 10 ms.
-The SLA value is used as a reference, it has not been defined by OPNFV.
-
-TC005
------
-The IO read bandwidth looks similar between different dates, with an
-average between approx. 92 and 204 MB/s. Within each test run the results
-vary, with a minimum 2 MB/s and maximum 819 MB/s on the totality. Most runs
-have a minimum BW of 3 MB/s (one run at 2 MB/s). The maximum BW varies more in
-absolute numbers between the dates, between 238 and 819 MB/s.
-SLA set to 400 MB/s. The SLA value is used as a reference, it has not been
-defined by OPNFV.
-
-TC010
------
-The measurements for memory latency are similar between test dates and result
-in approx. 2.07 ns. The variations within each test run are similar, between
-1.41 and 3.53 ns.
-SLA set to 30 ns. The SLA value is used as a reference, it has not been defined
-by OPNFV.
-
-TC011
------
-Packet delay variation between 2 VMs on different blades is measured using
-Iperf3. The reported packet delay variation varies between 0.0051 and 0.0243 ms,
-with an average delay variation between 0.0081 ms and 0.0195 ms.
-
-TC012
------
-Between test dates, the average measurements for memory bandwidth result in
-approx. 13.6 GB/s. Within each test run the results vary more, with a minimal
-BW of 6.09 GB/s and maximum of 16.47 GB/s on the totality.
-SLA set to 15 GB/s. The SLA value is used as a reference, it has not been
-defined by OPNFV.
-
-TC014
------
-The Unixbench processor test run results vary between scores 2316 and 3619,
-one result each date.
-No SLA set.
-
-TC037
------
-The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
-on different blades are measured when increasing the amount of UDP flows sent
-between the VMs using pktgen as packet generator tool.
-
-Round trip times and packet throughput between VMs can typically be affected by
-the amount of flows set up and result in higher RTT and less PPS throughput.
-
-The RTT results are similar throughout the different test dates and runs at
-approx. 15 ms. Some test runs show an increase with many flows, in the range
-towards 16 to 17 ms. One exception standing out is Feb. 15 where the average
-RTT is stable at approx. 13 ms. The PPS results are not as consistent as the
-RTT results.
-In some test runs with fewer than approx. 10000 flows the PPS throughput is
-normally flatter than when running with more flows, after which the PPS
-throughput decreases, by around 20 percent in the worst case. For the other
-test runs there is however no significant change to the PPS throughput when
-the number of flows is increased. In some test runs the PPS is also greater
-with 1000000 flows than in other test runs where the PPS result is lower with
-only 2 flows.
-
-The average PPS throughput in the different runs varies between 414000 and
-452000 PPS. The total amount of packets in each test run is approx. 7500000 to
-8200000 packets. One test run on Feb. 15 sticks out with a PPS average of
-558000 and approx. 1100000 packets in total (the same run mentioned earlier
-for the RTT results).
-
-There are lost packets reported in most of the test runs. There is no observed
-correlation between the number of flows and the number of lost packets.
-The number of lost packets normally ranges between 100 and 1000 per test run,
-but there are spikes in the range of 10000 lost packets as well, and even
-more in rare cases.
-
-CPU utilization statistics are collected during UDP flows sent between the VMs
-using pktgen as packet generator tool. The average measurements for CPU
-utilization ratio vary between 1% to 2%. The peak of CPU utilization ratio
-appears around 7%.
-
-TC069
------
-Between test dates, the average measurements for memory bandwidth vary between
-22.6 and 29.1 GB/s. Within each test run the results vary more, with a minimal
-BW of 20.0 GB/s and maximum of 29.5 GB/s on the totality.
-SLA set to 6 GB/s. The SLA value is used as a reference, it has not been
-defined by OPNFV.
-
-
-TC070
------
-The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
-on different blades are measured when increasing the amount of UDP flows sent
-between the VMs using pktgen as packet generator tool.
-
-Round trip times and packet throughput between VMs can typically be affected by
-the amount of flows set up and result in higher RTT and less PPS throughput.
-
-The RTT results are similar throughout the different test dates and runs at
-approx. 15 ms. Some test runs show an increase with many flows, in the range
-towards 16 to 17 ms. One exception standing out is Feb. 15 where the average
-RTT is stable at approx. 13 ms. The PPS results are not as consistent as the
-RTT results.
-In some test runs with fewer than approx. 10000 flows the PPS throughput is
-normally flatter than when running with more flows, after which the PPS
-throughput decreases, by around 20 percent in the worst case. For the other
-test runs there is however no significant change to the PPS throughput when
-the number of flows is increased. In some test runs the PPS is also greater
-with 1000000 flows than in other test runs where the PPS result is lower with
-only 2 flows.
-
-The average PPS throughput in the different runs varies between 414000 and
-452000 PPS. The total amount of packets in each test run is approx. 7500000 to
-8200000 packets. One test run on Feb. 15 sticks out with a PPS average of
-558000 and approx. 1100000 packets in total (the same run mentioned earlier
-for the RTT results).
-
-There are lost packets reported in most of the test runs. There is no observed
-correlation between the number of flows and the number of lost packets.
-The number of lost packets normally ranges between 100 and 1000 per test run,
-but there are spikes in the range of 10000 lost packets as well, and even
-more in rare cases.
-
-Memory utilization statistics are collected during UDP flows sent between the
-VMs using pktgen as packet generator tool. The average measurements for memory
-utilization vary between 225MB to 246MB. The peak of memory utilization appears
-around 340MB.
-
-TC071
------
-The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
-on different blades are measured when increasing the amount of UDP flows sent
-between the VMs using pktgen as packet generator tool.
-
-Round trip times and packet throughput between VMs can typically be affected by
-the amount of flows set up and result in higher RTT and less PPS throughput.
-
-The RTT results are similar throughout the different test dates and runs at
-approx. 15 ms. Some test runs show an increase with many flows, in the range
-towards 16 to 17 ms. One exception standing out is Feb. 15 where the average
-RTT is stable at approx. 13 ms. The PPS results are not as consistent as the
-RTT results.
-In some test runs with fewer than approx. 10000 flows the PPS throughput is
-normally flatter than when running with more flows, after which the PPS
-throughput decreases, by around 20 percent in the worst case. For the other
-test runs there is however no significant change to the PPS throughput when
-the number of flows is increased. In some test runs the PPS is also greater
-with 1000000 flows than in other test runs where the PPS result is lower with
-only 2 flows.
-
-The average PPS throughput in the different runs varies between 414000 and
-452000 PPS. The total amount of packets in each test run is approx. 7500000 to
-8200000 packets. One test run on Feb. 15 sticks out with a PPS average of
-558000 and approx. 1100000 packets in total (the same run mentioned earlier
-for the RTT results).
-
-There are lost packets reported in most of the test runs. There is no observed
-correlation between the number of flows and the number of lost packets.
-The number of lost packets normally ranges between 100 and 1000 per test run,
-but there are spikes in the range of 10000 lost packets as well, and even
-more in rare cases.
-
-Cache utilization statistics are collected during UDP flows sent between the
-VMs using pktgen as packet generator tool. The average measurements for cache
-utilization vary between 205MB to 212MB.
-
-TC072
------
-The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
-on different blades are measured when increasing the amount of UDP flows sent
-between the VMs using pktgen as packet generator tool.
-
-Round trip times and packet throughput between VMs can typically be affected by
-the amount of flows set up and result in higher RTT and less PPS throughput.
-
-The RTT results are similar throughout the different test dates and runs at
-approx. 15 ms. Some test runs show an increase with many flows, in the range
-towards 16 to 17 ms. One exception standing out is Feb. 15 where the average
-RTT is stable at approx. 13 ms. The PPS results are not as consistent as the
-RTT results.
-In some test runs with fewer than approx. 10000 flows the PPS throughput is
-normally flatter than when running with more flows, after which the PPS
-throughput decreases, by around 20 percent in the worst case. For the other
-test runs there is however no significant change to the PPS throughput when
-the number of flows is increased. In some test runs the PPS is also greater
-with 1000000 flows than in other test runs where the PPS result is lower with
-only 2 flows.
-
-The average PPS throughput in the different runs varies between 414000 and
-452000 PPS. The total amount of packets in each test run is approx. 7500000 to
-8200000 packets. One test run on Feb. 15 sticks out with a PPS average of
-558000 and approx. 1100000 packets in total (the same run mentioned earlier
-for the RTT results).
-
-There are lost packets reported in most of the test runs. There is no observed
-correlation between the number of flows and the number of lost packets.
-The number of lost packets normally ranges between 100 and 1000 per test run,
-but there are spikes in the range of 10000 lost packets as well, and even
-more in rare cases.
-
-Network utilization statistics are collected during UDP flows sent between the
-VMs using pktgen as packet generator tool. Total number of packets received per
-second was average on 200 kpps and total number of packets transmitted per
-second was average on 600 kpps.
-
-Detailed test results
----------------------
-The scenario was run on Ericsson POD2_ and LF POD2_ with:
-Fuel 9.0
-OpenStack Mitaka
-OpenVirtualSwitch 2.5.90
-OpenDayLight Beryllium
-
-Rationale for decisions
------------------------
-Pass
-
-Tests were successfully executed and metrics collected.
-No SLA was verified. To be decided on in next release of OPNFV.
-
-Conclusions and recommendations
--------------------------------
-The pktgen test configuration has a relatively large base effect on RTT in
-TC037 compared to TC002, where there is no background load at all: approx.
-15 ms compared to approx. 0.5 ms, which is roughly a thirty-fold difference
-in RTT results.
-In particular, RTT and throughput come out with better results than, for
-instance, the *fuel-os-nosdn-nofeature-ha* scenario does. The reason for this
-should probably be further analyzed and understood. It could also be of
-interest to make further analyses to find patterns and reasons for lost
-traffic, and to see if there are continuous variations where some test cases
-stand out with better or worse results than the general test case.
-
diff --git a/docs/release/results/os-nosdn-nofeature-ha.rst b/docs/release/results/os-nosdn-nofeature-ha.rst
deleted file mode 100644
index 9e52731d5..000000000
--- a/docs/release/results/os-nosdn-nofeature-ha.rst
+++ /dev/null
@@ -1,492 +0,0 @@
-.. This work is licensed under a Creative Commons Attribution 4.0 International
-.. License.
-.. http://creativecommons.org/licenses/by/4.0
-
-
-======================================
-Test Results for os-nosdn-nofeature-ha
-======================================
-
-.. toctree::
- :maxdepth: 2
-
-
-apex
-====
-
-.. _Grafana: http://testresults.opnfv.org/grafana/dashboard/db/yardstick-main
-.. _POD1: https://wiki.opnfv.org/pharos?&#community_test_labs
-
-
-Overview of test results
-------------------------
-
-See Grafana_ for viewing test result metrics for each respective test case. It
-is possible to choose which specific scenarios to look at, and then to zoom in
-on the details of each test scenario run as well.
-
-All of the test case results below are based on 4 scenario test
-runs, each run on the LF POD1_ between August 25 and 28 in
-2016.
-
-TC002
------
-The round-trip-time (RTT) between 2 VMs on different blades is measured using
-ping. Most test run measurements result on average between 0.74 and 1.08 ms.
-A few runs start with a 0.99 - 1.07 ms RTT spike (This could be because of
-normal ARP handling). One test run has a greater RTT spike of 1.35 ms.
-To be able to draw conclusions more runs should be made. SLA set to 10 ms.
-The SLA value is used as a reference, it has not been defined by OPNFV.
-
-TC005
------
-The IO read bandwidth looks similar between different dates, with an
-average between approx. 128 and 136 MB/s. Within each test run the results
-vary, with a minimum 5 MB/s and maximum 446 MB/s on the totality. Most runs
-have a minimum BW of 5 MB/s (one run at 6 MB/s). The maximum BW varies more in
-absolute numbers between the dates, between 416 and 446 MB/s.
-SLA set to 400 MB/s. The SLA value is used as a reference, it has not been
-defined by OPNFV.
-
-TC010
------
-The measurements for memory latency are similar between test dates and result
-in approx. 1.09 ns. The variations within each test run are similar, between
-1.0860 and 1.0880 ns.
-SLA set to 30 ns. The SLA value is used as a reference, it has not been defined
-by OPNFV.
-
-TC011
------
-Packet delay variation between 2 VMs on different blades is measured using
-Iperf3. The reported packet delay variation varies between 0.0025 and 0.0148 ms,
-with an average delay variation between 0.0056 ms and 0.0157 ms.
-
-TC012
------
-Between test dates, the average measurements for memory bandwidth result in
-approx. 19.70 GB/s. Within each test run the results vary more, with a minimal
-BW of 18.16 GB/s and maximum of 20.13 GB/s on the totality.
-SLA set to 15 GB/s. The SLA value is used as a reference, it has not been
-defined by OPNFV.
-
-TC014
------
-The Unixbench processor test run results vary between scores 3224.4 and 3842.8,
-one result each date. The average score on the total is 3659.5.
-No SLA set.
-
-TC037
------
-The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
-on different blades are measured when increasing the amount of UDP flows sent
-between the VMs using pktgen as packet generator tool.
-
-Round trip times and packet throughput between VMs can typically be affected by
-the amount of flows set up and result in higher RTT and less PPS throughput.
-
-The RTT results are similar throughout the different test dates and runs at
-approx. 15 ms. Some test runs show an increase with many flows, in the range
-towards 16 to 17 ms. One exception standing out is Feb. 15 where the average
-RTT is stable at approx. 13 ms. The PPS results are not as consistent as the
-RTT results.
-In some test runs with fewer than approx. 10000 flows the PPS throughput is
-normally flatter than when running with more flows, after which the PPS
-throughput decreases, by around 20 percent in the worst case. For the other
-test runs there is however no significant change to the PPS throughput when
-the number of flows is increased. In some test runs the PPS is also greater
-with 1000000 flows than in other test runs where the PPS result is lower with
-only 2 flows.
-
-The average PPS throughput in the different runs varies between 414000 and
-452000 PPS. The total amount of packets in each test run is approx. 7500000 to
-8200000 packets. One test run on Feb. 15 sticks out with a PPS average of
-558000 and approx. 1100000 packets in total (the same run mentioned earlier
-for the RTT results).
-
-There are lost packets reported in most of the test runs. There is no observed
-correlation between the number of flows and the number of lost packets.
-The number of lost packets normally ranges between 100 and 1000 per test run,
-but there are spikes in the range of 10000 lost packets as well, and even
-more in rare cases.
-
-CPU utilization statistics are collected during UDP flows sent between the VMs
-using pktgen as packet generator tool. The average measurements for CPU
-utilization ratio vary between 1% to 2%. The peak of CPU utilization ratio
-appears around 7%.
-
-TC069
------
-Between test dates, the average measurements for memory bandwidth vary between
-22.6 and 29.1 GB/s. Within each test run the results vary more, with a minimal
-BW of 20.0 GB/s and maximum of 29.5 GB/s on the totality.
-SLA set to 6 GB/s. The SLA value is used as a reference, it has not been
-defined by OPNFV.
-
-TC070
------
-The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
-on different blades are measured when increasing the amount of UDP flows sent
-between the VMs using pktgen as packet generator tool.
-
-Round trip times and packet throughput between VMs can typically be affected by
-the amount of flows set up and result in higher RTT and less PPS throughput.
-
-The RTT results are similar throughout the different test dates and runs at
-approx. 15 ms. Some test runs show an increase with many flows, in the range
-towards 16 to 17 ms. One exception standing out is Feb. 15 where the average
-RTT is stable at approx. 13 ms. The PPS results are not as consistent as the
-RTT results.
-In some test runs with fewer than approx. 10000 flows the PPS throughput is
-normally flatter than when running with more flows, after which the PPS
-throughput decreases, by around 20 percent in the worst case. For the other
-test runs there is however no significant change to the PPS throughput when
-the number of flows is increased. In some test runs the PPS is also greater
-with 1000000 flows than in other test runs where the PPS result is lower with
-only 2 flows.
-
-The average PPS throughput in the different runs varies between 414000 and
-452000 PPS. The total amount of packets in each test run is approx. 7500000 to
-8200000 packets. One test run on Feb. 15 sticks out with a PPS average of
-558000 and approx. 1100000 packets in total (the same run mentioned earlier
-for the RTT results).
-
-There are lost packets reported in most of the test runs. There is no observed
-correlation between the number of flows and the number of lost packets.
-The number of lost packets normally ranges between 100 and 1000 per test run,
-but there are spikes in the range of 10000 lost packets as well, and even
-more in rare cases.
-
-Memory utilization statistics are collected during UDP flows sent between the
-VMs using pktgen as packet generator tool. The average measurements for memory
-utilization vary between 225MB to 246MB. The peak of memory utilization appears
-around 340MB.
-
-TC071
------
-The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
-on different blades are measured when increasing the amount of UDP flows sent
-between the VMs using pktgen as packet generator tool.
-
-Round trip times and packet throughput between VMs can typically be affected by
-the amount of flows set up and result in higher RTT and less PPS throughput.
-
-The RTT results are similar throughout the different test dates and runs at
-approx. 15 ms. Some test runs show an increase with many flows, in the range
-towards 16 to 17 ms. One exception standing out is Feb. 15 where the average
-RTT is stable at approx. 13 ms. The PPS results are not as consistent as the
-RTT results.
-In some test runs with fewer than approx. 10000 flows the PPS throughput is
-normally flatter than when running with more flows, after which the PPS
-throughput decreases, by around 20 percent in the worst case. For the other
-test runs there is however no significant change to the PPS throughput when
-the number of flows is increased. In some test runs the PPS is also greater
-with 1000000 flows than in other test runs where the PPS result is lower with
-only 2 flows.
-
-The average PPS throughput in the different runs varies between 414000 and
-452000 PPS. The total amount of packets in each test run is approx. 7500000 to
-8200000 packets. One test run on Feb. 15 sticks out with a PPS average of
-558000 and approx. 1100000 packets in total (the same run mentioned earlier
-for the RTT results).
-
-There are lost packets reported in most of the test runs. There is no observed
-correlation between the number of flows and the number of lost packets.
-The number of lost packets normally ranges between 100 and 1000 per test run,
-but there are spikes in the range of 10000 lost packets as well, and even
-more in rare cases.
-
-Cache utilization statistics are collected during UDP flows sent between the
-VMs using pktgen as packet generator tool. The average measurements for cache
-utilization vary between 205MB to 212MB.
-
-TC072
------
-The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
-on different blades are measured when increasing the amount of UDP flows sent
-between the VMs using pktgen as packet generator tool.
-
-Round trip times and packet throughput between VMs can typically be affected by
-the amount of flows set up and result in higher RTT and less PPS throughput.
-
-The RTT results are similar throughout the different test dates and runs at
-approx. 15 ms. Some test runs show an increase with many flows, in the range
-towards 16 to 17 ms. One exception standing out is Feb. 15 where the average
-RTT is stable at approx. 13 ms. The PPS results are not as consistent as the
-RTT results.
-In some test runs with fewer than approx. 10000 flows the PPS throughput is
-normally flatter than when running with more flows, after which the PPS
-throughput decreases, by around 20 percent in the worst case. For the other
-test runs there is however no significant change to the PPS throughput when
-the number of flows is increased. In some test runs the PPS is also greater
-with 1000000 flows than in other test runs where the PPS result is lower with
-only 2 flows.
-
-The average PPS throughput in the different runs varies between 414000 and
-452000 PPS. The total amount of packets in each test run is approx. 7500000 to
-8200000 packets. One test run on Feb. 15 sticks out with a PPS average of
-558000 and approx. 1100000 packets in total (the same run mentioned earlier
-for the RTT results).
-
-There are lost packets reported in most of the test runs. There is no observed
-correlation between the number of flows and the number of lost packets.
-The number of lost packets normally ranges between 100 and 1000 per test run,
-but there are spikes in the range of 10000 lost packets as well, and even
-more in rare cases.
-
-Network utilization statistics are collected during UDP flows sent between the
-VMs using pktgen as packet generator tool. Total number of packets received per
-second was average on 200 kpps and total number of packets transmitted per
-second was average on 600 kpps.
-
-Detailed test results
----------------------
-The scenario was run on LF POD1_ with:
-Apex
-OpenStack Mitaka
-OpenVirtualSwitch 2.5.90
-OpenDayLight Beryllium
-
-Rationale for decisions
------------------------
-Pass
-
-Tests were successfully executed and metrics collected.
-No SLA was verified. To be decided on in next release of OPNFV.
-
-
-Joid
-====
-
-.. _Grafana: http://testresults.opnfv.org/grafana/dashboard/db/yardstick-main
-.. _POD5: https://wiki.opnfv.org/pharos?&#community_test_labs
-
-
-Overview of test results
-------------------------
-
-See Grafana_ for viewing test result metrics for each respective test case. It
-is possible to choose which specific scenarios to look at, and then to zoom in
-on the details of each test scenario run as well.
-
-All of the test case results below are based on 4 scenario test runs, each run
-on the Intel POD5_ between September 11 and 14 in 2016.
-
-TC002
------
-The round-trip-time (RTT) between 2 VMs on different blades is measured using
-ping. Most test run measurements result on average between 1.59 and 1.70 ms.
-Two test runs reach the same greater RTT spike of 3.06 ms; their averages are
-1.66 and 1.70 ms, but only one of them has the lower RTT of 1.35 ms. The other
-two runs have no similar spike at all. To be able to draw conclusions more runs
-should be made. The SLA is set to 10 ms. The SLA value is used as a reference;
-it has not been defined by OPNFV.
-
-TC005
------
-The IO read bandwidth actually refers to the storage throughput, and the
-greatest IO read bandwidth of the four runs is 173.3 MB/s. The IO read
-bandwidth of the four runs looks similar across the four days, with an
-average between 32.7 and 60.4 MB/s. One of the runs has a minimum BW of 429
-KB/s and another has a maximum BW of 173.3 MB/s. The SLA for read bandwidth is
-set to 400 MB/s, which is used as a reference; it has not been defined by
-OPNFV.
-
-TC010
------
-The tool we use to measure memory read latency is lmbench, which is a series of
-micro benchmarks intended to measure basic operating system and hardware system
-metrics. The memory read latency of the four runs is 1.1 ns on average. The
-variations within each test run differ: some runs vary over a large range
-while others change little. For example, the largest variation is on September
-14, where the memory read latency ranges from 1.12 ns to 1.22 ns. However,
-the results on September 12 change very little, ranging from 1.14 ns to
-1.17 ns. The SLA is set to 30 ns. The SLA value is used as a reference; it has
-not been defined by OPNFV.
-
-TC011
------
-Iperf3 is a tool for evaluating the packet delay variation between 2 VMs on
-different blades. The reported packet delay variations of the four test runs
-differ from each other. The results on September 13 look similar within the
-date and the values are between 0.0087 and 0.0190 ms, which is 0.0126 ms on
-average. However, on the fourth day, the packet delay variation varies over a
-wide range within the date, from 0.0032 ms to 0.0121 ms, and has the minimum
-average value. The packet delay variations of the other two test runs look
-relatively similar, at 0.0076 ms and 0.0152 ms on average. The SLA value is
-set to 10 ms. The SLA value is used as a reference; it has not been
-defined by OPNFV.
-
-TC012
------
-Lmbench is also used to measure the memory read and write bandwidth, in which
-we use bw_mem to obtain the results. Among the four test runs, the memory
-bandwidth within the second day keeps almost stable at 11.58 GB/s on
-average. The memory bandwidth of the fourth day looks similar to that of the
-second day, both of which remain stable. The other two test runs vary over a
-wider range, in which the minimum memory bandwidth is 11.22
-GB/s and the maximum bandwidth is 16.65 GB/s with an average bandwidth of about
-12.20 GB/s. Here the SLA is set to 15 GB/s. The SLA value is used as a
-reference; it has not been defined by OPNFV.
-
-TC014
------
-Unixbench is used to measure processing speed, that is, instructions per
-second. It can be seen from the dashboard that the processing test results
-vary from scores 3272 to 3444, with only one result per date. The
-overall average score is 3371. No SLA set.
-
-TC037
------
-The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
-on different blades are measured when increasing the amount of UDP flows sent
-between the VMs using pktgen as packet generator tool.
-
-Round trip times and packet throughput between VMs can typically be affected by
-the amount of flows set up and result in higher RTT and less PPS throughput.
-
-The mean packet throughput of the four test runs is 119.85, 128.02, 121.40 and
-126.08 kpps, of which the result of the second is the highest. The RTT results
-of all the test runs keep flat at approx. 37 ms. It is obvious that the PPS
-results are not as consistent as the RTT results.
-
-The number of flows in the four test runs is 240 k on average and the PPS
-results look somewhat uneven, since the largest packet throughput is 184 kpps
-and the minimum throughput is 49 kpps.
-
-There are no packet receive errors in the four runs, but there are still
-lost packets in all the test runs. The RTT values obtained by ping in the four
-runs have a similar average value, that is 38 ms, of which the worst RTT is
-93 ms on Sep. 14th.
-
-The CPU load of the four test runs varies widely, since the minimum value and
-the peak of CPU load are 0 percent and 51 percent respectively. The best
-result is obtained on Sep. 14th.
-
-TC069
------
-With the block size changing from 1 kb to 512 kb, the memory write bandwidth
-tends to become larger first and then smaller within every test run, ranging
-from 22.3 GB/s to 26.8 GB/s and then down to 18.5 GB/s on average. Since the
-test id is one, only the INT memory write bandwidth is tested. On
-the whole, when the block size is 8 kb and 16 kb, the memory write bandwidth
-looks similar, with a minimal BW of 22.5 GB/s and a peak value of 28.7 GB/s.
-The SLA is set to 7 GB/s. The SLA value is used as a reference; it has not been
-defined by OPNFV.
-
-TC070
------
-The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
-on different blades are measured when increasing the amount of UDP flows sent
-between the VMs using pktgen as packet generator tool.
-
-Round trip times and packet throughput between VMs can typically be affected by
-the amount of flows set up and result in higher RTT and less PPS throughput.
-
-The network latency is measured by ping, and the results of the four test runs
-look similar with each other. Within each test run, the maximum RTT can reach
-more than 80 ms and the average RTT is usually approx. 38 ms. On the whole, the
-average RTTs of the four runs keep flat.
-
-Memory utilization is measured by free, which can display the amount of free
-and used memory in the system. The largest amount of used memory is 268 MiB on
-Sep. 14, which also has the largest minimum memory. The other three test
-runs have similar used memory. On the other hand, the free memory of the
-four runs has the same smallest minimum value, about 223 MiB, and the
-maximum free memory of three runs has a similar result, 337 MiB,
-except on Sep. 14th, where the maximum free memory is 254 MiB. On the whole,
-all the test runs have similar average free memory.
-
-Network throughput and packet loss can be measured by pktgen, which is a tool
-for generating traffic loads for network experiments. The mean
-network throughput of the four test runs seems quite different, ranging from
-119.85 kpps to 128.02 kpps. The average number of flows in these tests is
-24000, and each run has a minimum number of flows of 2 and a maximum number
-of flows of 1.001 Mil. At the same time, the corresponding packet throughput
-differs between 49.4k and 193.3k with an average packet throughput of approx.
-125k. On the whole, the PPS results seem consistent. Within each test run of
-the four runs, as the number of flows becomes larger, the packet throughput
-does not grow accordingly.
-
-TC071
------
-The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
-on different blades are measured when increasing the amount of UDP flows sent
-between the VMs using pktgen as packet generator tool.
-
-Round trip times and packet throughput between VMs can typically be affected by
-the amount of flows set up and result in higher RTT and less PPS throughput.
-
-The network latency is measured by ping, and the results of the four test runs
-look similar with each other. Within each test run, the maximum RTT can reach
-more than 94 ms and the average RTT is usually approx. 35 ms. On the whole, the
-average RTTs of the four runs keep flat.
-
-Cache utilization is measured by cachestat, which can display the size of the
-cache and buffer in the system. Cache utilization statistics are collected
-during UDP flows sent between the VMs using pktgen as packet generator tool.
-The largest cache size is 212 MiB in the four runs, and the smallest cache
-size is 75 MiB. On the whole, the average cache size of the four runs is
-approx. 208 MiB. Meanwhile, the trend of the buffer size looks similar across
-the runs.
-
-Packet throughput can be measured by pktgen, which is a tool in the network for
-generating traffic loads for network experiments. The mean packet throughput of
-the four test runs seems quite different, ranging from 119.85 kpps to 128.02
-kpps. The average number of flows in these tests is 239.7k, and each run has a
-minimum number of flows of 2 and a maximum number of flows of 1.001 Mil. At the
-same time, the corresponding packet throughput differs between 49.4k and 193.3k
-with an average packet throughput of approx. 125k. On the whole, the PPS results
-seem consistent. Within each test run of the four runs, as the number of flows
-becomes larger, the packet throughput does not grow accordingly.
-
-TC072
------
-The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
-on different blades are measured when increasing the amount of UDP flows sent
-between the VMs using pktgen as packet generator tool.
-
-Round trip times and packet throughput between VMs can typically be affected by
-the amount of flows set up and result in higher RTT and less PPS throughput.
-
-The RTT results are similar throughout the different test dates and runs at
-approx. 32 ms. The PPS results are not as consistent as the RTT results.
-
-Network utilization is measured by sar, the system activity reporter, which
-can display the average statistics for the time since the system was started.
-Network utilization statistics are collected during UDP flows sent between the
-VMs using pktgen as packet generator tool. The largest total number of packets
-transmitted per second differs from run to run: the smallest of these is 6 pps
-on Sep. 12th and the largest is 210.8 kpps. Meanwhile, the largest total number
-of packets received per second also differs from run to run: the smallest of
-these is 2 pps on Sep. 13th and the largest is 250.2 kpps.
-
-In some test runs when running with less than approx. 90000 flows the PPS
-throughput is normally flatter compared to when running with more flows, after
-which the PPS throughput decreases. For the other test runs there is however no
-significant change to the PPS throughput when the number of flows are
-increased. In some test runs the PPS is also greater with 1000000 flows
-compared to other test runs where the PPS result is less with only 2 flows.
-
-There are lost packets reported in most of the test runs. There is no observed
-correlation between the amount of flows and the amount of lost packets.
-The lost amount of packets normally differs a lot per test run.
-
-Detailed test results
----------------------
-The scenario was run on Intel POD5_ with:
-Joid
-OpenStack Mitaka
-OpenVirtualSwitch 2.5.90
-OpenDayLight Beryllium
-
-Rationale for decisions
------------------------
-Pass
-
-Conclusions and recommendations
--------------------------------
-Tests were successfully executed and metrics collected.
-No SLA was verified. To be decided on in next release of OPNFV.
-
-
diff --git a/docs/release/results/os-nosdn-nofeature-noha.rst b/docs/release/results/os-nosdn-nofeature-noha.rst
deleted file mode 100644
index 8b7c184bb..000000000
--- a/docs/release/results/os-nosdn-nofeature-noha.rst
+++ /dev/null
@@ -1,259 +0,0 @@
-.. This work is licensed under a Creative Commons Attribution 4.0 International
-.. License.
-.. http://creativecommons.org/licenses/by/4.0
-
-
-========================================
-Test Results for os-nosdn-nofeature-noha
-========================================
-
-.. toctree::
- :maxdepth: 2
-
-
-Joid
-=====
-
-.. _Grafana: http://testresults.opnfv.org/grafana/dashboard/db/yardstick-main
-.. _POD5: https://wiki.opnfv.org/pharos?&#community_test_labs
-
-Overview of test results
-------------------------
-
-See Grafana_ for viewing test result metrics for each respective test case. It
-is possible to choose which specific scenarios to look at, and then to zoom in
-on the details of each test scenario run as well.
-
-All of the test case results below are based on 4 scenario test runs, each run
-on the Intel POD5_ between September 12 and 15 in 2016.
-
-TC002
------
-The round-trip-time (RTT) between 2 VMs on different blades is measured using
-ping. Most test run measurements result on average between 1.50 and 1.68 ms.
-Only one test run reaches the greatest RTT spike of 2.92 ms; that run also has
-the smallest RTT of 1.06 ms. The other three runs have no similar spike at all;
-their minimum and average RTTs are approx. 1.50 ms and 1.68 ms. The SLA is set
-to 10 ms. The SLA value is used as a reference; it has not been defined by
-OPNFV.
-
-TC005
------
-The IO read bandwidth actually refers to the storage throughput, which is
-measured by fio, and the greatest IO read bandwidth of the four runs is 177.5
-MB/s. The IO read bandwidth of the four runs looks similar across the four
-days, with an average between 46.7 and 62.5 MB/s. One of the runs has a minimum
-BW of 680 KB/s and another has a maximum BW of 177.5 MB/s. The SLA for read
-bandwidth is set to 400 MB/s, which is used as a reference; it has not
-been defined by OPNFV.
-
-The results of storage IOPS for the four runs look similar to each other. The
-test runs all reach approx. 1.55 K/s for IO reading, with a minimum value of
-less than 60 times per second.
-
-TC010
------
-The tool we use to measure memory read latency is lmbench, which is a series of
-micro benchmarks intended to measure basic operating system and hardware system
-metrics. The memory read latency of the four runs is between 1.134 ns and 1.227
-ns on average. The variations within each test run are quite different: some
-runs vary over a large range while others change little. For example, the
-largest variation is on September 15, where the memory read latency ranges
-from 1.116 ns to 1.393 ns. However, the results on September 12 change very
-little, mainly keeping flat between 1.124 ns and 1.55 ns. The SLA is set
-to 30 ns. The SLA value is used as a reference; it has not been defined by
-OPNFV.
-
-TC011
------
-Iperf3 is a tool for evaluating the packet delay variation between 2 VMs on
-different blades. The reported packet delay variations of the four test runs
-differ from each other. The results on September 13 look similar within the
-date and the values are between 0.0213 and 0.0225 ms, which is 0.0217 ms on
-average. However, on the third day, the packet delay variation varies over a
-wide range within the date, from 0.008 ms to 0.0225 ms, and has
-the minimum value. On Sep. 12, the packet delay is quite long, since the value
-is between 0.0236 and 0.0287 ms and it also has the maximum packet delay of
-0.0287 ms. The packet delay of the last test run is 0.0151 ms on average. The
-SLA value is set to 10 ms. The SLA value is used as a reference; it has not
-been defined by OPNFV.
-
-TC012
------
-Lmbench is also used to measure the memory read and write bandwidth, in which
-we use bw_mem to obtain the results. Among the four test runs, the memory
-bandwidth of three test runs keeps almost stable within each run, at
-11.65, 11.57 and 11.64 GB/s on average. However, the memory read and write
-bandwidth on Sep. 14 has a large range, ranging from 11.36 GB/s to 16.68
-GB/s. Here the SLA is set to 15 GB/s. The SLA value is used as a reference; it
-has not been defined by OPNFV.
-
-TC014
------
-Unixbench is used to evaluate the IaaS processing speed with regard to the
-scores of single CPU running and parallel running. It can be seen from the
-dashboard that the processing test results vary from scores 3222 to 3585, with
-only one result per date. No SLA set.
-
-TC037
------
-The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
-on different blades are measured when increasing the amount of UDP flows sent
-between the VMs using pktgen as packet generator tool.
-
-Round trip times and packet throughput between VMs can typically be affected by
-the amount of flows set up and result in higher RTT and less PPS throughput.
-
-The mean packet throughput of the four test runs is 124.8, 160.1, 113.8 and
-137.3 kpps, of which the result of the second is the highest. The RTT results
-of all the test runs keep flat at approx. 37 ms. It is obvious that the PPS
-results are not as consistent as the RTT results.
-
-The number of flows in the four test runs is 240 k on average and the PPS
-results look somewhat uneven, since the largest packet throughput is 243.1 kpps
-and the minimum throughput is 37.6 kpps.
-
-There are no packet receive errors in the four runs, but there are still
-lost packets in all the test runs. The RTT values obtained by ping in the four
-runs have similar average values, between 32 ms and 41 ms, of which
-the worst RTT is 155 ms on Sep. 14th.
-
-CPU load is measured by mpstat, and the CPU load of the four test runs looks
-fairly similar, since the minimum value and the peak of CPU load are between 0
-percent and 9 percent respectively. The best result is obtained on Sep.
-15th, with a CPU load of nine percent.
-
-TC069
------
-With the block size changing from 1 kb to 512 kb, the memory write bandwidth
-tends to become larger first and then smaller within every test run, ranging
-from 22.4 GB/s to 26.5 GB/s and then down to 18.6 GB/s on average. Since the
-test id is one, only the INT memory write bandwidth is tested. On
-the whole, when the block size is 8 kb and 16 kb, the memory write bandwidth
-looks similar, with a minimal BW of 22.5 GB/s and a peak value of 28.7 GB/s.
-Then, as the block size becomes larger, the memory write bandwidth tends to
-decrease. The SLA is set to 7 GB/s. The SLA value is used as a reference; it
-has not been defined by OPNFV.
-
-TC070
------
-The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
-on different blades are measured when increasing the amount of UDP flows sent
-between the VMs using pktgen as packet generator tool.
-
-Round trip times and packet throughput between VMs can typically be affected by
-the amount of flows set up and result in higher RTT and less PPS throughput.
-
-The network latency is measured by ping, and the results of three test runs
-look similar to each other. Within these test runs, the maximum RTT can reach
-95 ms and the average RTT is usually approx. 36 ms. The network latency tested
-on Sep. 14 shows a peak latency of 155 ms. But on the whole, the
-average RTTs of the four runs keep flat.
-
-Memory utilization is measured by free, which can display the amount of free
-and used memory in the system. The largest amount of used memory is 270 MiB on
-Sep. 13, which also has the smallest minimum memory utilization. The other
-three test runs have similar used memory with an average memory usage of
-264 MiB. On the other hand, the free memory of the four runs has the same
-smallest minimum value, about 223 MiB, and the maximum free memory of
-three runs has a similar result, 226 MiB, except on Sep. 13th,
-where the maximum free memory is 273 MiB. On the whole, all the test runs have
-similar average free memory.
-
-Network throughput and packet loss can be measured by pktgen, which is a tool
-for generating traffic loads for network experiments. The mean
-network throughput of the four test runs seems quite different, ranging from
-119.85 kpps to 128.02 kpps. The average number of flows in these tests is
-240000, and each run has a minimum number of flows of 2 and a maximum number
-of flows of 1.001 Mil. At the same time, the corresponding packet throughput
-differs between 38k and 243k with an average packet throughput of approx. 134k.
-On the whole, the PPS results seem consistent. Within each test run of the four
-runs, as the number of flows becomes larger, the packet throughput does not
-grow accordingly.
-
-TC071
------
-The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
-on different blades are measured when increasing the amount of UDP flows sent
-between the VMs using pktgen as packet generator tool.
-
-Round trip times and packet throughput between VMs can typically be affected by
-the amount of flows set up and result in higher RTT and less PPS throughput.
-
-The network latency is measured by ping, and the results of the four test runs
-look similar with each other. Within each test run, the maximum RTT can reach
-79 ms and the average RTT is usually approx. 35 ms. On the whole, the average
-RTTs of the four runs keep flat.
-
-Cache utilization is measured by cachestat, which can display the size of the
-cache and buffer in the system. Cache utilization statistics are collected
-during UDP flows sent between the VMs using pktgen as packet generator tool.
-The largest cache size is 214 MiB in the four runs, and the smallest cache size
-is 100 MiB. On the whole, the average cache size of the four runs is approx.
-210 MiB. Meanwhile, the trend of the buffer size looks similar across runs. On
-the other hand, the mean buffer size of the four runs keeps flat, since they
-have a minimum value of approx. 7 MiB and a maximum value of 8 MiB, with an
-average value of about 8 MiB.
-
-Packet throughput can be measured by pktgen, which is a tool in the network for
-generating traffic loads for network experiments. The mean packet throughput of
-the four test runs seems quite different, ranging from 113.8 kpps to 124.8 kpps.
-The average number of flows in these tests is 240k, and each run has a minimum
-number of flows of 2 and a maximum number of flows of 1.001 Mil. At the same
-time, the corresponding packet throughput differs between 47.6k and 243.1k with
-an average packet throughput between 113.8k and 160.1k. Within each test run of
-the four runs, as the number of flows becomes larger, the packet throughput
-does not grow accordingly.
-
-TC072
------
-The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
-on different blades are measured when increasing the amount of UDP flows sent
-between the VMs using pktgen as packet generator tool.
-
-Round trip times and packet throughput between VMs can typically be affected by
-the amount of flows set up and result in higher RTT and less PPS throughput.
-
-The RTT results are similar throughout the different test dates and runs,
-between 0 ms and 79 ms with an average latency of approx. 35 ms. The PPS
-results are not as consistent as the RTT results, since the mean packet
-throughput of the four runs varies from 113.8 kpps to 124.8 kpps.
-
-Network utilization is measured by sar, the system activity reporter, which
-can display the average statistics for the time since the system was started.
-Network utilization statistics are collected during UDP flows sent between the
-VMs using pktgen as packet generator tool. The largest total number of packets
-transmitted per second looks similar in the first three runs, with a minimum
-number of 10 pps and a maximum number of 97 kpps, except the run on Sep. 15th,
-in which the number of packets transmitted per second is 10 pps. Meanwhile, the
-largest total number of packets received per second differs from run to run:
-the smallest number of packets received per second is 1 pps and the
-largest is 276 kpps.
-
-In some test runs when running with less than approx. 90000 flows the PPS
-throughput is normally flatter compared to when running with more flows, after
-which the PPS throughput decreases. For the other test runs there is however no
-significant change to the PPS throughput when the number of flows are
-increased. In some test runs the PPS is also greater with 1000000 flows
-compared to other test runs where the PPS result is less with only 2 flows.
-
-There are lost packets reported in most of the test runs. There is no observed
-correlation between the number of flows and the number of lost packets.
-The number of lost packets normally differs a lot between test runs.
-
-Detailed test results
----------------------
-The scenario was run on Intel POD5_ with:
-
-* Joid
-* OpenStack Mitaka
-* OpenVirtualSwitch 2.5.90
-* OpenDayLight Beryllium
-
-Rationale for decisions
------------------------
-Pass
-
-Conclusions and recommendations
--------------------------------
-Tests were successfully executed and metrics collected.
-No SLA was verified. To be decided on in next release of OPNFV.
diff --git a/docs/release/results/os-odl_l2-bgpvpn-ha.rst b/docs/release/results/os-odl_l2-bgpvpn-ha.rst
deleted file mode 100644
index 2bd6dc35d..000000000
--- a/docs/release/results/os-odl_l2-bgpvpn-ha.rst
+++ /dev/null
@@ -1,53 +0,0 @@
-.. This work is licensed under a Creative Commons Attribution 4.0 International
-.. License.
-.. http://creativecommons.org/licenses/by/4.0
-
-
-====================================
-Test Results for os-odl_l2-bgpvpn-ha
-====================================
-
-.. toctree::
- :maxdepth: 2
-
-
-fuel
-====
-
-.. _Grafana: http://testresults.opnfv.org/grafana/dashboard/db/yardstick-main
-.. _POD2: https://wiki.opnfv.org/pharos?&#community_test_labs
-
-Overview of test results
-------------------------
-
-See Grafana_ for viewing test result metrics for each respective test case. It
-is possible to choose which specific scenarios to look at, and then to zoom in
-on the details of each test scenario run as well.
-
-All of the test case results below are based on 4 scenario test runs, each run
-on the Ericsson POD2_ between September 7 and 11 in 2016.
-
-TC043
------
-The round-trip-time (RTT) between 2 nodes is measured using
-ping. Most test run measurements result in an average RTT between 0.21 and
-0.28 ms. A few runs start with a 0.32 - 0.35 ms RTT spike (this could be
-caused by normal ARP handling). To be able to draw conclusions more runs
-should be made. The SLA is set to 10 ms. The SLA value is used as a reference;
-it has not been defined by OPNFV.
-
-Detailed test results
----------------------
-The scenario was run on Ericsson POD2_ with:
-
-* Fuel 9.0
-* OpenStack Mitaka
-* OpenVirtualSwitch 2.5.90
-* OpenDayLight Beryllium
-
-Rationale for decisions
------------------------
-Pass
-
-Tests were successfully executed and metrics collected.
-No SLA was verified. To be decided on in next release of OPNFV.
-
diff --git a/docs/release/results/os-odl_l2-nofeature-ha.rst b/docs/release/results/os-odl_l2-nofeature-ha.rst
deleted file mode 100644
index ac0c5bb59..000000000
--- a/docs/release/results/os-odl_l2-nofeature-ha.rst
+++ /dev/null
@@ -1,743 +0,0 @@
-.. This work is licensed under a Creative Commons Attribution 4.0 International
-.. License.
-.. http://creativecommons.org/licenses/by/4.0
-
-
-=======================================
-Test Results for os-odl_l2-nofeature-ha
-=======================================
-
-.. toctree::
- :maxdepth: 2
-
-
-apex
-====
-
-.. _Grafana: http://testresults.opnfv.org/grafana/dashboard/db/yardstick-main
-.. _POD1: https://wiki.opnfv.org/pharos?&#community_test_labs
-
-Overview of test results
-------------------------
-
-See Grafana_ for viewing test result metrics for each respective test case. It
-is possible to choose which specific scenarios to look at, and then to zoom in
-on the details of each test scenario run as well.
-
-All of the test case results below are based on 4 scenario test runs, each run
-on the LF POD1_ between September 14 and 17 in 2016.
-
-TC002
------
-The round-trip-time (RTT) between 2 VMs on different blades is measured using
-ping. Most test run measurements result in an average RTT between 0.49 ms and
-0.60 ms. Only one test run reached its greatest RTT spike of 0.93 ms, while
-the smallest network latency is 0.33 ms, obtained on Sep. 14th.
-The SLA is set to 10 ms. The SLA value is used as a reference; it has not been
-defined by OPNFV.
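-
-As an illustration only (not part of the Yardstick code base), RTT sampling of
-this kind can be scripted around the ping tool; the target address below is a
-placeholder for the second VM::
-
-    import re
-    import subprocess
-
-    TARGET = "10.0.0.5"  # hypothetical IP of the VM on the other blade
-
-    # Send 10 ICMP echo requests and keep the summary line printed by ping.
-    out = subprocess.run(["ping", "-c", "10", TARGET], capture_output=True,
-                         text=True, check=True).stdout
-
-    # iputils ping ends with: rtt min/avg/max/mdev = 0.41/0.52/0.93/0.11 ms
-    stats = re.search(r"= ([\d.]+)/([\d.]+)/([\d.]+)/([\d.]+) ms", out)
-    if stats:
-        print("average RTT: %s ms" % stats.group(2))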
-
-TC005
------
-The IO read bandwidth actually refers to the storage throughput, which is
-measured by fio, and the greatest IO read bandwidth of the four runs is 416
-MB/s. The IO read bandwidth of all four runs looks similar, with an average
-between 128 and 131 MB/s. One of the runs has a minimum BW of 497 KB/s. The
-read bandwidth SLA is set to 400 MB/s; it is used as a reference and has not
-been defined by OPNFV.
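-
-A minimal sketch of how such a sequential read job could be driven and parsed,
-assuming fio is installed on the VM; the file name, size and block size are
-illustrative, not the actual TC005 job file::
-
-    import json
-    import subprocess
-
-    cmd = ["fio", "--name=read_bw", "--rw=read", "--bs=64k", "--size=256m",
-           "--filename=/tmp/tc005.bin", "--output-format=json"]
-    report = json.loads(subprocess.run(cmd, capture_output=True, text=True,
-                                       check=True).stdout)
-
-    read = report["jobs"][0]["read"]
-    # fio reports bandwidth in KiB/s in its JSON output.
-    print("read BW: %.1f MB/s, IOPS: %.0f" % (read["bw"] / 1024.0,
-                                              read["iops"]))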
-
-The results of storage IOPS for the four runs look similar to each other. The
-IO read times per second of the four test runs have an average value of about
-1k per second, while the minimum result is only 45 times per second.
-
-TC010
------
-The tool we use to measure memory read latency is lmbench, which is a series of
-micro benchmarks intended to measure basic operating system and hardware system
-metrics. The memory read latency of the four runs is between 1.0859 ns and
-1.0869 ns on average. The variations within each test run differ: some runs
-vary over a large range while others change very little. For example, the
-largest change is on September 14th, where the memory read latency ranges
-from 1.086 ns to 1.091 ns.
-The SLA is set to 30 ns. The SLA value is used as a reference; it has not been
-defined by OPNFV.
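-
-A rough sketch of collecting such numbers with the lmbench lat_mem_rd
-benchmark, assuming the lmbench binaries are on the PATH; the 128 MB array
-size and 128 byte stride are illustrative values::
-
-    import subprocess
-
-    # lat_mem_rd walks an array of the given size (MB) with the given stride
-    # and writes "<size_MB> <latency_ns>" pairs to stderr.
-    proc = subprocess.run(["lat_mem_rd", "128", "128"],
-                          capture_output=True, text=True)
-    for line in proc.stderr.splitlines():
-        print(line)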
-
-TC011
------
-Packet delay variation between 2 VMs on different blades is measured using
-Iperf3. On the first two test runs the reported packet delay variation varies
-between 0.0037 and 0.0740 ms, with an average delay variation between
-0.0096 ms and 0.0321 ms. On the second date the delay variation varies between
-0.0063 and 0.0096 ms, with an average delay variation of 0.0124 - 0.0141 ms.
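-
-A small sketch of an equivalent manual measurement with iperf3, assuming an
-iperf3 server is already running in the other VM; the address and bitrate are
-placeholders::
-
-    import json
-    import subprocess
-
-    SERVER = "10.0.0.5"  # hypothetical address of the iperf3 server VM
-
-    # 10 s UDP stream at 10 Mbit/s; -J makes iperf3 emit a JSON report.
-    cmd = ["iperf3", "-c", SERVER, "-u", "-b", "10M", "-t", "10", "-J"]
-    report = json.loads(subprocess.run(cmd, capture_output=True, text=True,
-                                       check=True).stdout)
-
-    print("packet delay variation: %.4f ms"
-          % report["end"]["sum"]["jitter_ms"])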
-
-TC012
------
-Lmbench is also used to measure the memory read and write bandwidth, in which
-we use bw_mem to obtain the results. Among the four test runs, the trends of
-three memory bandwidth results look similar, all within a narrow range, and
-the average result is 19.88 GB/s. Here the SLA is set to 15 GB/s. The SLA
-value is used as a reference; it has not been defined by OPNFV.
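-
-For reference, a minimal way to invoke bw_mem directly, assuming the lmbench
-binaries are installed; the 256 MB working-set size is an illustrative choice::
-
-    import subprocess
-
-    # bw_mem writes "<size_in_MB> <bandwidth_in_MB/s>" to stderr.
-    for operation in ("rd", "wr"):
-        proc = subprocess.run(["bw_mem", "256m", operation],
-                              capture_output=True, text=True)
-        print(operation, proc.stderr.strip())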
-
-TC014
------
-Unixbench is used to evaluate the IaaS processing speed with regard to the
-scores of single CPU running and parallel running. It can be seen from the
-dashboard that the processing test results vary from scores 3754k to 3831k,
-with only one result per date. No SLA set.
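-
-A sketch of running the suite by hand, assuming the byte-unixbench sources are
-unpacked under /root/UnixBench (a placeholder path); the Run script prints the
-single-CPU and parallel index scores at the end::
-
-    import subprocess
-
-    # Run the whole suite once with 1 copy and once with 4 parallel copies.
-    subprocess.run(["./Run", "-c", "1", "-c", "4"],
-                   cwd="/root/UnixBench", check=True)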
-
-TC037
------
-The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
-on different blades are measured when increasing the amount of UDP flows sent
-between the VMs using pktgen as packet generator tool.
-
-Round trip times and packet throughput between VMs can typically be affected by
-the amount of flows set up and result in higher RTT and less PPS throughput.
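-
-A bare-bones sketch of driving the kernel pktgen /proc interface in the same
-spirit, assuming "modprobe pktgen" has been done; the interface name,
-destination address and flow count are placeholders::
-
-    def pgset(path, command):
-        with open(path, "w") as f:
-            f.write(command + "\n")
-
-    THREAD = "/proc/net/pktgen/kpktgend_0"
-    DEVICE = "/proc/net/pktgen/eth0"
-
-    pgset(THREAD, "rem_device_all")
-    pgset(THREAD, "add_device eth0")
-    pgset(DEVICE, "count 1000000")   # packets to send
-    pgset(DEVICE, "pkt_size 64")
-    pgset(DEVICE, "dst 10.0.0.5")    # hypothetical receiver VM
-    pgset(DEVICE, "flows 16384")     # number of concurrent UDP flows
-    pgset(DEVICE, "flowlen 10")
-    pgset("/proc/net/pktgen/pgctrl", "start")  # blocks until count is sent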
-
-The mean packet throughput of the four test runs is between 307.3 kpps and
-447.1 kpps, of which the result of the third run is the highest. The RTT
-results of all the test runs keep flat at approx. 15 ms. It is obvious that the
-PPS results are not as consistent as the RTT results.
-
-The number of flows in the four test runs is 240 k on average and the PPS
-results fluctuate somewhat, since the largest packet throughput is 418.1 kpps
-and the minimum throughput is 326.5 kpps.
-
-There are no packet reception errors in the four runs, but there are still
-lost packets in all the test runs. The RTT values obtained by ping in the four
-runs have a similar average value, that is, approx. 15 ms.
-
-CPU load is measured by mpstat, and the CPU load of the four test runs looks
-similar, since the minimum value and the peak of CPU load are between 0
-percent and nine percent respectively. The highest value, a CPU load of nine
-percent, is obtained on Sep. 1. On the whole, the CPU load is very low, since
-the average value is quite small.
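-
-A minimal sketch of deriving such a CPU load figure from mpstat, assuming the
-sysstat package is installed; sample count and interval are arbitrary::
-
-    import subprocess
-
-    # Ten one-second samples; the trailing "Average:" line carries %idle.
-    out = subprocess.run(["mpstat", "1", "10"], capture_output=True,
-                         text=True, check=True).stdout
-    average = [l for l in out.splitlines() if l.startswith("Average")][-1]
-    idle = float(average.split()[-1].replace(",", "."))
-    print("average CPU load: %.1f %%" % (100.0 - idle))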
-
-TC069
------
-With the block size changing from 1 kb to 512 kb, the memory write bandwidth
-tends to become larger first and then smaller within every test run, ranging
-from 28.2 GB/s to 29.5 GB/s and then down to 29.2 GB/s on average. Since the
-test id is one, only the INT memory write bandwidth is tested. On the whole,
-when the block size is 2 kb or 16 kb, the memory write bandwidth looks
-similar, with a minimal BW of 25.8 GB/s and a peak value of 28.3 GB/s. Then,
-as the block size becomes larger, the memory write bandwidth tends to
-decrease. The SLA is set to 7 GB/s. The SLA value is used as a reference; it
-has not been defined by OPNFV.
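-
-A sketch of such a sweep using the lmbench bw_mem write test, assuming the
-lmbench binaries are available; note that bw_mem takes a working-set size
-rather than the exact block-size parameter used by the test case::
-
-    import subprocess
-
-    # bw_mem writes "<size_in_MB> <bandwidth_in_MB/s>" to stderr.
-    for kb in (1, 2, 4, 8, 16, 32, 64, 128, 256, 512):
-        proc = subprocess.run(["bw_mem", "%dk" % kb, "wr"],
-                              capture_output=True, text=True)
-        print("%4d kB -> %s MB/s" % (kb, proc.stderr.split()[-1]))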
-
-TC070
------
-The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
-on different blades are measured when increasing the amount of UDP flows sent
-between the VMs using pktgen as packet generator tool.
-
-Round trip times and packet throughput between VMs can typically be affected by
-the amount of flows set up and result in higher RTT and less PPS throughput.
-
-The network latency is measured by ping, and the results of the four test runs
-look similar to each other; within these test runs, the maximum RTT can
-reach 39 ms and the average RTT is usually approx. 15 ms. The network latency
-tested on Sep. 1 and Sep. 8 has a peak latency of 39 ms. On the whole,
-the average RTTs of the four runs keep flat and the network latency is
-relatively short.
-
-Memory utilization is measured by free, which can display the amount of free
-and used memory in the system. The largest amount of used memory is 267 MiB
-across the four runs. In general, the four test runs show large memory
-utilization, reaching 257 MiB on average. The mean free memory of the four
-test runs follows a similar trend to that of the mean used memory, and in
-general the mean free memory changes from 233 MiB to 241 MiB.
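-
-A small sketch of sampling the same numbers with free, purely as an
-illustration of where the used/free figures come from::
-
-    import subprocess
-
-    # "free -m" prints a "Mem:" row with total/used/free columns in MiB.
-    out = subprocess.run(["free", "-m"], capture_output=True, text=True,
-                         check=True).stdout
-    mem = [l.split() for l in out.splitlines() if l.startswith("Mem:")][0]
-    print("used: %s MiB, free: %s MiB" % (mem[2], mem[3]))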
-
-Packet throughput and packet loss can be measured by pktgen, which is a tool
-for generating traffic loads for network experiments. The mean packet
-throughput of the four test runs differs considerably, ranging from
-305.3 kpps to 447.1 kpps. The average number of flows in these tests is
-240000, and each run has a minimum number of flows of 2 and a maximum number
-of flows of 1.001 Mil. At the same time, the corresponding average packet
-throughput is between 354.4 kpps and 381.8 kpps. In summary, the PPS results
-seem consistent. Within each of the four test runs, the packet throughput
-does not increase as the number of flows grows.
-
-TC071
------
-The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
-on different blades are measured when increasing the amount of UDP flows sent
-between the VMs using pktgen as packet generator tool.
-
-Round trip times and packet throughput between VMs can typically be affected by
-the amount of flows set up and result in higher RTT and less PPS throughput.
-
-The network latency is measured by ping, and the results of the four test runs
-look similar to each other. Within each test run, the maximum RTT is only 42
-ms and the average RTT is usually approx. 15 ms. On the whole, the average
-RTTs of the four runs keep stable and the network latency is relatively small.
-
-Cache utilization is measured by cachestat, which can display the size of the
-cache and buffers in the system. Cache utilization statistics are collected
-during UDP flows sent between the VMs using pktgen as the packet generator
-tool. The largest cache size is 212 MiB, which is the same for the four runs,
-and the smallest cache size is 75 MiB. On the whole, the average cache size of
-the four runs looks the same and is between 197 MiB and 211 MiB. Meanwhile,
-the trend of the buffer size keeps flat, with a minimum value of 7 MiB, a
-maximum value of 8 MiB and an average value of about 7.9 MiB.
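-
-The cache and buffer sizes that cachestat reports can also be read straight
-from /proc/meminfo; a minimal sketch, shown only to make the reported values
-concrete::
-
-    meminfo = {}
-    with open("/proc/meminfo") as f:
-        for line in f:
-            key, value = line.split(":", 1)
-            meminfo[key] = int(value.split()[0])  # values are in kB
-
-    print("cache: %d MiB" % (meminfo["Cached"] // 1024))
-    print("buffers: %d MiB" % (meminfo["Buffers"] // 1024))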
-
-Packet throughput can be measured by pktgen, which is a tool for generating
-traffic loads for network experiments. The mean packet throughput of the four
-test runs differs from 354.4 kpps to 381.8 kpps. The average number of flows
-in these tests is 240k, and each run has a minimum number of flows of 2 and a
-maximum number of flows of 1.001 Mil. At the same time, the corresponding
-packet throughput differs between 305.3 kpps and 447.1 kpps. Within each of
-the four test runs, the packet throughput does not increase as the number of
-flows grows.
-
-TC072
------
-The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
-on different blades are measured when increasing the amount of UDP flows sent
-between the VMs using pktgen as packet generator tool.
-
-Round trip times and packet throughput between VMs can typically be affected by
-the amount of flows set up and result in higher RTT and less PPS throughput.
-
-The RTT results are similar throughout the different test dates and runs,
-between 0 ms and 42 ms with an average latency of less than 15 ms. The PPS
-results are not as consistent as the RTT results, since the mean packet
-throughput of the four runs differs from 354.4 kpps to 381.8 kpps.
-
-Network utilization is measured by sar, the system activity reporter, which
-can display the average statistics for the time since the system was started.
-Network utilization statistics are collected during UDP flows sent between the
-VMs using pktgen as the packet generator tool. The largest total number of
-packets transmitted per second looks similar for three test runs, whose values
-vary widely from 10 pps to 501 kpps, while the results of the remaining test
-run stay stable with an average number of packets transmitted per second of
-10 pps. The total number of packets received per second of the four runs looks
-similar, covering a wide range from 2 pps to 815 kpps.
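-
-A minimal sketch of collecting per-interface packet rates with sar, assuming
-the sysstat package is installed; interval and count are arbitrary::
-
-    import subprocess
-
-    # Ten one-second samples; the "Average:" rows hold rxpck/s and txpck/s.
-    out = subprocess.run(["sar", "-n", "DEV", "1", "10"],
-                         capture_output=True, text=True, check=True).stdout
-    for line in out.splitlines():
-        if line.startswith("Average:") and "IFACE" not in line:
-            cols = line.split()
-            print("%-8s rx %s pps, tx %s pps" % (cols[1], cols[2], cols[3]))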
-
-In some test runs when running with less than approx. 251000 flows the PPS
-throughput is normally flatter compared to when running with more flows, after
-which the PPS throughput decreases. For the other test runs there is however no
-significant change to the PPS throughput when the number of flows are
-increased. In some test runs the PPS is also greater with 251000 flows
-compared to other test runs where the PPS result is less with only 2 flows.
-
-There are lost packets reported in most of the test runs. There is no observed
-correlation between the number of flows and the number of lost packets.
-The number of lost packets normally differs a lot between test runs.
-
-Detailed test results
----------------------
-The scenario was run on LF POD1_ with:
-
-* Apex
-* OpenStack Mitaka
-* OpenVirtualSwitch 2.5.90
-* OpenDayLight Beryllium
-
-Rationale for decisions
------------------------
-Pass
-
-Conclusions and recommendations
--------------------------------
-Tests were successfully executed and metrics collected.
-No SLA was verified. To be decided on in next release of OPNFV.
-
-
-
-fuel
-====
-
-.. _Grafana: http://testresults.opnfv.org/grafana/dashboard/db/yardstick-main
-.. _POD2: https://wiki.opnfv.org/pharos?&#community_test_labs
-
-Overview of test results
-------------------------
-
-See Grafana_ for viewing test result metrics for each respective test case. It
-is possible to choose which specific scenarios to look at, and then to zoom in
-on the details of each test scenario run as well.
-
-All of the test case results below are based on 4 scenario test runs, each run
-on the Ericsson POD2_ or LF POD2_ between August 25 and 29 in 2016.
-
-TC002
------
-The round-trip-time (RTT) between 2 VMs on different blades is measured using
-ping. Most test run measurements result in an average RTT between 0.5 and
-0.6 ms. A few runs start with a 1 - 1.5 ms RTT spike (this could be caused by
-normal ARP handling). One test run has a greater RTT spike of 1.9 ms; it is
-the same run that has the 0.7 ms average. The other runs have no similar spike
-at all. To be able to draw conclusions more runs should be made.
-The SLA is set to 10 ms. The SLA value is used as a reference; it has not
-been defined by OPNFV.
-
-TC005
------
-The IO read bandwidth looks similar between different dates, with an
-average between approx. 170 and 200 MB/s. Within each test run the results
-vary, with an overall minimum of 2 MB/s and maximum of 838 MB/s. Most runs
-have a minimum BW of 3 MB/s (two runs at 2 MB/s). The maximum BW varies more
-in absolute numbers between the dates, between 617 and 838 MB/s.
-The SLA is set to 400 MB/s. The SLA value is used as a reference; it has not
-been defined by OPNFV.
-
-TC010
------
-The measurements for memory latency are similar between test dates and result
-in approx. 1.2 ns. The variations within each test run are similar, between
-1.215 and 1.219 ns. One exception is February 16, where the average is 1.222
-and varies between 1.22 and 1.28 ns.
-SLA set to 30 ns. The SLA value is used as a reference, it has not been defined
-by OPNFV.
-
-TC011
------
-Packet delay variation between 2 VMs on different blades is measured using
-Iperf3. On the first date the reported packet delay variation varies between
-0.0025 and 0.011 ms, with an average delay variation of 0.0067 ms.
-On the second date the delay variation varies between 0.002 and 0.006 ms, with
-an average delay variation of 0.004 ms.
-
-TC012
------
-Between test dates, the average measurements for memory bandwidth vary between
-17.4 and 17.9 GB/s. Within each test run the results vary more, with an
-overall minimal BW of 16.4 GB/s and maximum of 18.2 GB/s.
-The SLA is set to 15 GB/s. The SLA value is used as a reference; it has not
-been defined by OPNFV.
-
-TC014
------
-The Unixbench processor test run results vary between scores 3080 and 3240,
-with one result per date. The overall average score is 3150.
-No SLA set.
-
-TC037
------
-The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
-on different blades are measured when increasing the amount of UDP flows sent
-between the VMs using pktgen as packet generator tool.
-
-Round trip times and packet throughput between VMs can typically be affected by
-the amount of flows set up and result in higher RTT and less PPS throughput.
-
-The RTT results are similar throughout the different test dates and runs at
-approx. 15 ms. Some test runs show an increase with many flows, in the range
-towards 16 to 17 ms. One exception standing out is Feb. 15 where the average
-RTT is stable at approx. 13 ms. The PPS results are not as consistent as the
-RTT results.
-In some test runs when running with less than approx. 10000 flows the PPS
-throughput is normally flatter compared to when running with more flows, after
-which the PPS throughput decreases. Around 20 percent decrease in the worst
-case. For the other test runs there is however no significant change to the PPS
-throughput when the number of flows are increased. In some test runs the PPS
-is also greater with 1000000 flows compared to other test runs where the PPS
-result is less with only 2 flows.
-
-The average PPS throughput in the different runs varies between 414000 and
-452000 PPS. The total amount of packets in each test run is approx. 7500000 to
-8200000 packets. One test run, Feb. 15, sticks out with a PPS average of
-558000 and approx. 1100000 packets in total (the same run as mentioned earlier
-for the RTT results).
-
-There are lost packets reported in most of the test runs. There is no observed
-correlation between the number of flows and the number of lost packets.
-The number of lost packets normally ranges between 100 and 1000 per test run,
-but there are spikes in the range of 10000 lost packets as well, and even
-more in rare cases.
-
-CPU utilization statistics are collected during UDP flows sent between the VMs
-using pktgen as the packet generator tool. The average measurements for CPU
-utilization ratio vary between 1% and 2%. The peak of the CPU utilization
-ratio appears around 7%.
-
-TC069
------
-Between test dates, the average measurements for memory bandwidth vary between
-15.5 and 25.4 GB/s. Within each test run the results vary more, with an
-overall minimal BW of 9.7 GB/s and maximum of 29.5 GB/s.
-The SLA is set to 6 GB/s. The SLA value is used as a reference; it has not
-been defined by OPNFV.
-
-TC070
------
-The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
-on different blades are measured when increasing the amount of UDP flows sent
-between the VMs using pktgen as packet generator tool.
-
-Round trip times and packet throughput between VMs can typically be affected by
-the amount of flows set up and result in higher RTT and less PPS throughput.
-
-The RTT results are similar throughout the different test dates and runs at
-approx. 15 ms. Some test runs show an increase with many flows, in the range
-towards 16 to 17 ms. One exception standing out is Feb. 15 where the average
-RTT is stable at approx. 13 ms. The PPS results are not as consistent as the
-RTT results.
-In some test runs when running with less than approx. 10000 flows the PPS
-throughput is normally flatter compared to when running with more flows, after
-which the PPS throughput decreases. Around 20 percent decrease in the worst
-case. For the other test runs there is however no significant change to the PPS
-throughput when the number of flows are increased. In some test runs the PPS
-is also greater with 1000000 flows compared to other test runs where the PPS
-result is less with only 2 flows.
-
-The average PPS throughput in the different runs varies between 414000 and
-452000 PPS. The total amount of packets in each test run is approx. 7500000 to
-8200000 packets. One test run, Feb. 15, sticks out with a PPS average of
-558000 and approx. 1100000 packets in total (the same run as mentioned earlier
-for the RTT results).
-
-There are lost packets reported in most of the test runs. There is no observed
-correlation between the number of flows and the number of lost packets.
-The number of lost packets normally ranges between 100 and 1000 per test run,
-but there are spikes in the range of 10000 lost packets as well, and even
-more in rare cases.
-
-Memory utilization statistics are collected during UDP flows sent between the
-VMs using pktgen as the packet generator tool. The average measurements for
-memory utilization vary between 225 MB and 246 MB. The peak of memory
-utilization appears around 340 MB.
-
-TC071
------
-The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
-on different blades are measured when increasing the amount of UDP flows sent
-between the VMs using pktgen as packet generator tool.
-
-Round trip times and packet throughput between VMs can typically be affected by
-the amount of flows set up and result in higher RTT and less PPS throughput.
-
-The RTT results are similar throughout the different test dates and runs at
-approx. 15 ms. Some test runs show an increase with many flows, in the range
-towards 16 to 17 ms. One exception standing out is Feb. 15 where the average
-RTT is stable at approx. 13 ms. The PPS results are not as consistent as the
-RTT results.
-In some test runs when running with less than approx. 10000 flows the PPS
-throughput is normally flatter compared to when running with more flows, after
-which the PPS throughput decreases. Around 20 percent decrease in the worst
-case. For the other test runs there is however no significant change to the PPS
-throughput when the number of flows are increased. In some test runs the PPS
-is also greater with 1000000 flows compared to other test runs where the PPS
-result is less with only 2 flows.
-
-The average PPS throughput in the different runs varies between 414000 and
-452000 PPS. The total amount of packets in each test run is approx. 7500000 to
-8200000 packets. One test run, Feb. 15, sticks out with a PPS average of
-558000 and approx. 1100000 packets in total (the same run as mentioned earlier
-for the RTT results).
-
-There are lost packets reported in most of the test runs. There is no observed
-correlation between the number of flows and the number of lost packets.
-The number of lost packets normally ranges between 100 and 1000 per test run,
-but there are spikes in the range of 10000 lost packets as well, and even
-more in rare cases.
-
-Cache utilization statistics are collected during UDP flows sent between the
-VMs using pktgen as the packet generator tool. The average measurements for
-cache utilization vary between 205 MB and 212 MB.
-
-TC072
------
-The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
-on different blades are measured when increasing the amount of UDP flows sent
-between the VMs using pktgen as packet generator tool.
-
-Round trip times and packet throughput between VMs can typically be affected by
-the amount of flows set up and result in higher RTT and less PPS throughput.
-
-The RTT results are similar throughout the different test dates and runs at
-approx. 15 ms. Some test runs show an increase with many flows, in the range
-towards 16 to 17 ms. One exception standing out is Feb. 15 where the average
-RTT is stable at approx. 13 ms. The PPS results are not as consistent as the
-RTT results.
-In some test runs when running with less than approx. 10000 flows the PPS
-throughput is normally flatter compared to when running with more flows, after
-which the PPS throughput decreases. Around 20 percent decrease in the worst
-case. For the other test runs there is however no significant change to the PPS
-throughput when the number of flows are increased. In some test runs the PPS
-is also greater with 1000000 flows compared to other test runs where the PPS
-result is less with only 2 flows.
-
-The average PPS throughput in the different runs varies between 414000 and
-452000 PPS. The total amount of packets in each test run is approx. 7500000 to
-8200000 packets. One test run, Feb. 15, sticks out with a PPS average of
-558000 and approx. 1100000 packets in total (the same run as mentioned earlier
-for the RTT results).
-
-There are lost packets reported in most of the test runs. There is no observed
-correlation between the number of flows and the number of lost packets.
-The number of lost packets normally ranges between 100 and 1000 per test run,
-but there are spikes in the range of 10000 lost packets as well, and even
-more in rare cases.
-
-Network utilization statistics are collected during UDP flows sent between the
-VMs using pktgen as the packet generator tool. The total number of packets
-received per second averaged 200 kpps and the total number of packets
-transmitted per second averaged 600 kpps.
-
-Detailed test results
----------------------
-The scenario was run on Ericsson POD2_ and LF POD2_ with:
-
-* Fuel 9.0
-* OpenStack Mitaka
-* OpenVirtualSwitch 2.5.90
-* OpenDayLight Beryllium
-
-Rationale for decisions
------------------------
-Pass
-
-Tests were successfully executed and metrics collected.
-No SLA was verified. To be decided on in next release of OPNFV.
-
-Conclusions and recommendations
--------------------------------
-The pktgen test configuration has a relatively large base effect on RTT in
-TC037 compared to TC002, where there is no background load at all: approx.
-15 ms compared to approx. 0.5 ms, which is more than a 3000 percent
-difference in RTT results.
-In particular, RTT and throughput come out with better results than, for
-instance, the *fuel-os-nosdn-nofeature-ha* scenario does. The reason for this
-should probably be further analyzed and understood. Also of interest could be
-further analysis to find patterns and reasons for lost traffic, and to see
-whether there are continuous variations where some test cases stand out with
-better or worse results than the general test case.
-
-
-
-Joid
-=====
-
-.. _Grafana: http://testresults.opnfv.org/grafana/dashboard/db/yardstick-main
-.. _POD6: https://wiki.opnfv.org/pharos?&#community_test_labs
-
-Overview of test results
-------------------------
-
-See Grafana_ for viewing test result metrics for each respective test case. It
-is possible to choose which specific scenarios to look at, and then to zoom in
-on the details of each test scenario run as well.
-
-All of the test case results below are based on 4 scenario test runs, each run
-on the Intel POD6_ between September 1 and 8 in 2016.
-
-TC002
------
-The round-trip-time (RTT) between 2 VMs on different blades is measured using
-ping. Most test run measurements result in an average RTT between 1.01 ms and
-1.88 ms. Only one test run reached its greatest RTT spike of 1.88 ms, while
-the smallest network latency is 1.01 ms, obtained on Sep. 1st. In general, the
-average network latency of the four test runs is between 1.29 ms and 1.34 ms.
-The SLA is set to 10 ms. The SLA value is used as a reference; it has not
-been defined by OPNFV.
-
-TC005
------
-The IO read bandwidth actually refers to the storage throughput, which is
-measured by fio, and the greatest IO read bandwidth of the four runs is 183.65
-MB/s. The IO read bandwidth of three of the runs looks similar, with an
-average between 62.9 and 64.3 MB/s, except the one on Sep. 1, whose maximum
-storage throughput is only 159.1 MB/s. One of the runs has a minimum BW of
-685 KB/s and another has a maximum BW of 183.6 MB/s. The read bandwidth SLA is
-set to 400 MB/s; it is used as a reference and has not been defined by OPNFV.
-
-The results of storage IOPS for the four runs look similar to each other. The
-IO read times per second of the four test runs have an average value between
-1.41k per second and 1.64k per second, while the minimum result is only
-55 times per second.
-
-TC010
------
-The tool we use to measure memory read latency is lmbench, which is a series of
-micro benchmarks intended to measure basic operating system and hardware system
-metrics. The memory read latency of the four runs is between 1.152 ns and 1.179
-ns on average. The variations within each test run are quite different, some
-vary from a large range and others have a small change. For example, the
-largest change is on September 8, the memory read latency of which is ranging
-from 1.120 ns to 1.221 ns. However, the results on September 7 change very
-little. The SLA is set to 30 ns. The SLA value is used as a reference; it has
-not been defined by OPNFV.
-
-TC011
------
-Iperf3 is a tool for evaluating the packet delay variation between 2 VMs on
-different blades. The reported packet delay variations of the four test runs
-differ from each other. In general, the packet delays of the first two runs
-look similar, for they both stay stable within each run, and their mean packet
-delays are 0.0087 ms and 0.0127 ms respectively. Of the four runs, the fourth
-has the worst result, because the packet delay reaches 0.0187 ms. The SLA is
-set to 10 ms. The SLA value is used as a reference; it has not been defined
-by OPNFV.
-
-TC012
------
-Lmbench is also used to measure the memory read and write bandwidth, in which
-we use bw_mem to obtain the results. Among the four test runs, the trends of
-three memory bandwidth results look similar, all within a narrow range, and
-the average result is 11.78 GB/s. Here the SLA is set to 15 GB/s. The SLA
-value is used as a reference; it has not been defined by OPNFV.
-
-TC014
------
-Unixbench is used to evaluate the IaaS processing speed with regard to the
-scores of single CPU running and parallel running. It can be seen from the
-dashboard that the processing test results vary from scores 3260k to 3328k,
-with only one result per date. No SLA set.
-
-TC037
------
-The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
-on different blades are measured when increasing the amount of UDP flows sent
-between the VMs using pktgen as packet generator tool.
-
-Round trip times and packet throughput between VMs can typically be affected by
-the amount of flows set up and result in higher RTT and less PPS throughput.
-
-The mean packet throughput of the four test runs is between 307.3 kpps and
-447.1 kpps, of which the result of the third run is the highest. The RTT
-results of all the test runs keep flat at approx. 15 ms. It is obvious that the
-PPS results are not as consistent as the RTT results.
-
-The number of flows in the four test runs is 240 k on average and the PPS
-results fluctuate somewhat, since the largest packet throughput is 418.1 kpps
-and the minimum throughput is 326.5 kpps.
-
-There are no packet reception errors in the four runs, but there are still
-lost packets in all the test runs. The RTT values obtained by ping in the four
-runs have a similar average value, that is, approx. 15 ms.
-
-CPU load is measured by mpstat, and the CPU load of the four test runs looks
-similar, since the minimum value and the peak of CPU load are between 0
-percent and nine percent respectively. The highest value, a CPU load of nine
-percent, is obtained on Sep. 1. On the whole, the CPU load is very low, since
-the average value is quite small.
-
-TC069
------
-With the block size changing from 1 kb to 512 kb, the memory write bandwidth
-tends to become larger first and then smaller within every test run, ranging
-from 21.9 GB/s to 25.9 GB/s and then down to 17.8 GB/s on average. Since the
-test id is one, only the INT memory write bandwidth is tested. On the whole,
-when the block size is 2 kb or 16 kb, the memory write bandwidth looks
-similar, with a minimal BW of 24.8 GB/s and a peak value of 27.8 GB/s. Then,
-as the block size becomes larger, the memory write bandwidth tends to
-decrease. The SLA is set to 7 GB/s. The SLA value is used as a reference; it
-has not been defined by OPNFV.
-
-TC070
------
-The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
-on different blades are measured when increasing the amount of UDP flows sent
-between the VMs using pktgen as packet generator tool.
-
-Round trip times and packet throughput between VMs can typically be affected by
-the amount of flows set up and result in higher RTT and less PPS throughput.
-
-The network latency is measured by ping, and the results of the four test runs
-look similar to each other; within these test runs, the maximum RTT can
-reach 39 ms and the average RTT is usually approx. 15 ms. The network latency
-tested on Sep. 1 and Sep. 8 has a peak latency of 39 ms. On the whole,
-the average RTTs of the four runs keep flat and the network latency is
-relatively short.
-
-Memory utilization is measured by free, which can display the amount of free
-and used memory in the system. The largest amount of used memory is 267 MiB
-across the four runs. In general, the four test runs show large memory
-utilization, reaching 257 MiB on average. The mean free memory of the four
-test runs follows a similar trend to that of the mean used memory, and in
-general the mean free memory changes from 233 MiB to 241 MiB.
-
-Packet throughput and packet loss can be measured by pktgen, which is a tool
-for generating traffic loads for network experiments. The mean packet
-throughput of the four test runs differs considerably, ranging from
-305.3 kpps to 447.1 kpps. The average number of flows in these tests is
-240000, and each run has a minimum number of flows of 2 and a maximum number
-of flows of 1.001 Mil. At the same time, the corresponding average packet
-throughput is between 354.4 kpps and 381.8 kpps. In summary, the PPS results
-seem consistent. Within each of the four test runs, the packet throughput
-does not increase as the number of flows grows.
-
-TC071
------
-The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
-on different blades are measured when increasing the amount of UDP flows sent
-between the VMs using pktgen as packet generator tool.
-
-Round trip times and packet throughput between VMs can typically be affected by
-the amount of flows set up and result in higher RTT and less PPS throughput.
-
-The network latency is measured by ping, and the results of the four test runs
-look similar to each other. Within each test run, the maximum RTT is only 42
-ms and the average RTT is usually approx. 15 ms. On the whole, the average
-RTTs of the four runs keep stable and the network latency is relatively small.
-
-Cache utilization is measured by cachestat, which can display the size of the
-cache and buffers in the system. Cache utilization statistics are collected
-during UDP flows sent between the VMs using pktgen as the packet generator
-tool. The largest cache size is 212 MiB, which is the same for the four runs,
-and the smallest cache size is 75 MiB. On the whole, the average cache size of
-the four runs looks the same and is between 197 MiB and 211 MiB. Meanwhile,
-the trend of the buffer size keeps flat, with a minimum value of 7 MiB, a
-maximum value of 8 MiB and an average value of about 7.9 MiB.
-
-Packet throughput can be measured by pktgen, which is a tool for generating
-traffic loads for network experiments. The mean packet throughput of the four
-test runs differs from 354.4 kpps to 381.8 kpps. The average number of flows
-in these tests is 240k, and each run has a minimum number of flows of 2 and a
-maximum number of flows of 1.001 Mil. At the same time, the corresponding
-packet throughput differs between 305.3 kpps and 447.1 kpps. Within each of
-the four test runs, the packet throughput does not increase as the number of
-flows grows.
-
-TC072
------
-The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
-on different blades are measured when increasing the amount of UDP flows sent
-between the VMs using pktgen as packet generator tool.
-
-Round trip times and packet throughput between VMs can typically be affected by
-the amount of flows set up and result in higher RTT and less PPS throughput.
-
-The RTT results are similar throughout the different test dates and runs,
-between 0 ms and 42 ms with an average latency of less than 15 ms. The PPS
-results are not as consistent as the RTT results, since the mean packet
-throughput of the four runs differs from 354.4 kpps to 381.8 kpps.
-
-Network utilization is measured by sar, the system activity reporter, which
-can display the average statistics for the time since the system was started.
-Network utilization statistics are collected during UDP flows sent between the
-VMs using pktgen as the packet generator tool. The largest total number of
-packets transmitted per second looks similar for three test runs, whose values
-vary widely from 10 pps to 501 kpps, while the results of the remaining test
-run stay stable with an average number of packets transmitted per second of
-10 pps. The total number of packets received per second of the four runs looks
-similar, covering a wide range from 2 pps to 815 kpps.
-
-In some test runs when running with less than approx. 251000 flows the PPS
-throughput is normally flatter compared to when running with more flows, after
-which the PPS throughput decreases. For the other test runs there is however no
-significant change to the PPS throughput when the number of flows are
-increased. In some test runs the PPS is also greater with 251000 flows
-compared to other test runs where the PPS result is less with only 2 flows.
-
-There are lost packets reported in most of the test runs. There is no observed
-correlation between the number of flows and the number of lost packets.
-The number of lost packets normally differs a lot between test runs.
-
-Detailed test results
----------------------
-The scenario was run on Intel POD6_ with:
-
-* Joid
-* OpenStack Mitaka
-* OpenVirtualSwitch 2.5.90
-* OpenDayLight Beryllium
-
-Rationale for decisions
------------------------
-Pass
-
-Conclusions and recommendations
--------------------------------
-Tests were successfully executed and metrics collected.
-No SLA was verified. To be decided on in next release of OPNFV.
-
diff --git a/docs/release/results/os-odl_l2-sfc-ha.rst b/docs/release/results/os-odl_l2-sfc-ha.rst
deleted file mode 100644
index e27562cae..000000000
--- a/docs/release/results/os-odl_l2-sfc-ha.rst
+++ /dev/null
@@ -1,231 +0,0 @@
-.. This work is licensed under a Creative Commons Attribution 4.0 International
-.. License.
-.. http://creativecommons.org/licenses/by/4.0
-
-
-==================================
-Test Results for os-odl_l2-sfc-ha
-==================================
-
-.. toctree::
- :maxdepth: 2
-
-
-Fuel
-=====
-
-.. _Grafana: http://testresults.opnfv.org/grafana/dashboard/db/yardstick-main
-.. _POD2: https://wiki.opnfv.org/pharos?&#community_test_labs
-
-Overview of test results
-------------------------
-
-See Grafana_ for viewing test result metrics for each respective test case. It
-is possible to choose which specific scenarios to look at, and then to zoom in
-on the details of each test scenario run as well.
-
-All of the test case results below are based on 4 scenario test runs, each run
-on the LF POD2_ or Ericsson POD2_ between September 16 and 20 in 2016.
-
-TC002
------
-The round-trip-time (RTT) between 2 VMs on different blades is measured using
-ping. Most test run measurements result in an average RTT between 0.32 ms and
-1.42 ms. Only one test run, on Sep. 20, reached its greatest RTT spike of
-4.66 ms, while the smallest network latency is 0.16 ms, obtained on Sep. 17th.
-To sum up, the curve of network latency varies very little, staying below
-5 ms. The SLA is set to 10 ms. The SLA value is used as a reference; it has
-not been defined by OPNFV.
-
-TC005
------
-The IO read bandwidth actually refers to the storage throughput, which is
-measured by fio, and the greatest IO read bandwidth of the four runs is 734
-MB/s. The IO read bandwidth of the first three runs looks similar, with an
-average of less than 100 KB/s, except the one on Sep. 20, whose maximum
-storage throughput reaches 734 MB/s. The read bandwidth SLA is set to
-400 MB/s; it is used as a reference and has not been defined by OPNFV.
-
-The results of storage IOPS for the four runs look similar to each other. The
-IO read times per second of the four test runs have an average value between
-1.8k per second and 3.27k per second, while the minimum result is only
-60 times per second.
-
-TC010
------
-The tool we use to measure memory read latency is lmbench, which is a series
-of micro benchmarks intended to measure basic operating system and hardware
-system metrics. The memory read latency of the four runs is between 1.085 ns
-and 1.218 ns on average. The variations within each test run are quite small.
-For Ericsson POD2, the average memory latency is approx. 1.217 ns, while for
-LF POD2 the average value is about 1.085 ns. It can be seen that the
-performance of LF is better than Ericsson's. The SLA is set to 30 ns. The SLA
-value is used as a reference; it has not been defined by OPNFV.
-
-TC012
------
-Lmbench is also used to measure the memory read and write bandwidth, in which
-we use bw_mem to obtain the results. The four test runs all have a narrow
-range of change with an average memory read and write BW of 18.5 GB/s. Here
-the SLA is set to 15 GB/s. The SLA value is used as a reference; it has not
-been defined by OPNFV.
-
-TC014
------
-Unixbench is used to evaluate the IaaS processing speed with regard to the
-scores of single CPU running and parallel running. It can be seen from the
-dashboard that the processing test results vary from scores 3209k to 3843k,
-with only one result per date. No SLA set.
-
-TC037
------
-The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
-on different blades are measured when increasing the amount of UDP flows sent
-between the VMs using pktgen as packet generator tool.
-
-Round trip times and packet throughput between VMs can typically be affected by
-the amount of flows set up and result in higher RTT and less PPS throughput.
-
-The mean packet throughput of three of the test runs is between 439 kpps and
-582 kpps, while the test run on Sep. 17th has the lowest average value of
-371 kpps. The RTT results of all the test runs keep flat at approx. 10 ms. It
-is obvious that the PPS results are not as consistent as the RTT results.
-
-The number of flows in the four test runs is 240 k on average and the PPS
-results fluctuate somewhat, since the largest packet throughput is 680 kpps
-and the minimum throughput is 319 kpps.
-
-There are no packet reception errors in the four runs, but there are still
-lost packets in all the test runs. The RTT values obtained by ping in the four
-runs have a similar trend, with an average value of approx. 12 ms.
-
-CPU load is measured by mpstat, and the CPU load of the four test runs looks
-similar, since the minimum value and the peak of CPU load are between 0
-percent and ten percent respectively. The highest value, a CPU load of ten
-percent, is obtained on Sep. 17th. On the whole, the CPU load is very low,
-since the average value is quite small.
-
-TC069
------
-With the block size changing from 1 kb to 512 kb, the average memory write
-bandwidth tends to become larger first and then smaller within every test run
-for the two pods, ranging from 25.1 GB/s to 29.4 GB/s and then down to
-19.2 GB/s on average. Since the test id is one, only the INT memory write
-bandwidth is tested. On the whole, as the block size becomes larger, the
-memory write bandwidth tends to decrease. The SLA is set to 7 GB/s. The SLA
-value is used as a reference; it has not been defined by OPNFV.
-
-TC070
------
-The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
-on different blades are measured when increasing the amount of UDP flows sent
-between the VMs using pktgen as packet generator tool.
-
-Round trip times and packet throughput between VMs can typically be affected by
-the amount of flows set up and result in higher RTT and less PPS throughput.
-
-The network latency is measured by ping, and the results of the four test runs
-look similar to each other; within these test runs, the maximum RTT can
-reach 27 ms and the average RTT is usually approx. 12 ms. The network latency
-tested on Sep. 27th has a peak latency of 27 ms. On the whole, the average
-RTTs of the four runs keep flat.
-
-Memory utilization is measured by free, which can display the amount of free
-and used memory in the system. The largest amount of used memory is 269 MiB
-across the four runs. In general, the four test runs show large memory
-utilization, reaching 251 MiB on average. The mean free memory of the four
-test runs follows a similar trend to that of the mean used memory, and in
-general the mean free memory changes from 231 MiB to 248 MiB.
-
-Packet throughput and packet loss can be measured by pktgen, which is a tool
-for generating traffic loads for network experiments. The mean packet
-throughput of the four test runs differs considerably, ranging from
-371 kpps to 582 kpps. The average number of flows in these tests is
-240000, and each run has a minimum number of flows of 2 and a maximum number
-of flows of 1.001 Mil. At the same time, the corresponding average packet
-throughput is between 319 kpps and 680 kpps. In summary, the PPS results
-seem consistent. Within each of the four test runs, the packet throughput
-does not increase as the number of flows grows.
-
-TC071
------
-The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
-on different blades are measured when increasing the amount of UDP flows sent
-between the VMs using pktgen as packet generator tool.
-
-Round trip times and packet throughput between VMs can typically be affected by
-the amount of flows set up and result in higher RTT and less PPS throughput.
-
-The network latency is measured by ping, and the results of the four test runs
-look similar to each other. Within each test run, the maximum RTT is only 24
-ms and the average RTT is usually approx. 12 ms. On the whole, the average
-RTTs of the four runs keep stable and the network latency is relatively small.
-
-Cache utilization is measured by cachestat, which can display the size of the
-cache and buffers in the system. Cache utilization statistics are collected
-during UDP flows sent between the VMs using pktgen as the packet generator
-tool. The largest cache size is 213 MiB and the smallest cache size is 99 MiB,
-which is the same for the four runs. On the whole, the average cache size of
-the four runs looks the same and is between 184 MiB and 205 MiB. Meanwhile,
-the trend of the buffer size keeps stable, with a minimum value of 7 MiB and a
-maximum value of 8 MiB.
-
-Packet throughput can be measured by pktgen, which is a tool for generating
-traffic loads for network experiments. The mean packet throughput of the four
-test runs differs from 371 kpps to 582 kpps. The average number of flows in
-these tests is 240k, and each run has a minimum number of flows of 2 and a
-maximum number of flows of 1.001 Mil. At the same time, the corresponding
-packet throughput differs between 319 kpps and 680 kpps. Within each of the
-four test runs, the packet throughput does not increase as the number of
-flows grows.
-
-TC072
------
-The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
-on different blades are measured when increasing the amount of UDP flows sent
-between the VMs using pktgen as packet generator tool.
-
-Round trip times and packet throughput between VMs can typically be affected by
-the amount of flows set up and result in higher RTT and less PPS throughput.
-
-The RTT results are similar throughout the different test dates and runs,
-between 0 ms and 24 ms with an average latency of less than 13 ms. The PPS
-results are not as consistent as the RTT results, since the mean packet
-throughput of the four runs differs from 370 kpps to 582 kpps.
-
-Network utilization is measured by sar, the system activity reporter, which
-can display the average statistics for the time since the system was started.
-Network utilization statistics are collected during UDP flows sent between the
-VMs using pktgen as the packet generator tool. The largest total number of
-packets transmitted per second looks similar for the four test runs, whose
-values vary widely from 10 pps to 697 kpps. However, the total number of
-packets received per second of three runs looks similar, covering a wide range
-from 2 pps to 1.497 Mpps, while the results on Sep. 18th and 20th have a much
-smaller maximum number of packets received per second of 817 kpps.
-
-In some test runs when running with less than approx. 251000 flows the PPS
-throughput is normally flatter compared to when running with more flows, after
-which the PPS throughput decreases. For the other test runs there is however no
-significant change to the PPS throughput when the number of flows are
-increased. In some test runs the PPS is also greater with 251000 flows
-compared to other test runs where the PPS result is less with only 2 flows.
-
-There are lost packets reported in most of the test runs. There is no observed
-correlation between the number of flows and the number of lost packets.
-The number of lost packets normally differs a lot between test runs.
-
-Detailed test results
----------------------
-The scenario was run on Ericsson POD2_ and LF POD2_ with:
-
-* Fuel 9.0
-* OpenStack Mitaka
-* OpenVirtualSwitch 2.5.90
-* OpenDayLight Beryllium
-
-Rationale for decisions
------------------------
-Pass
-
-Conclusions and recommendations
--------------------------------
-Tests were successfully executed and metrics collected.
-No SLA was verified. To be decided on in next release of OPNFV.
diff --git a/docs/release/results/os-onos-nofeature-ha.rst b/docs/release/results/os-onos-nofeature-ha.rst
deleted file mode 100644
index d8b3ace5f..000000000
--- a/docs/release/results/os-onos-nofeature-ha.rst
+++ /dev/null
@@ -1,257 +0,0 @@
-.. This work is licensed under a Creative Commons Attribution 4.0 International
-.. License.
-.. http://creativecommons.org/licenses/by/4.0
-
-
-======================================
-Test Results for os-onos-nofeature-ha
-======================================
-
-.. toctree::
- :maxdepth: 2
-
-
-Joid
-=====
-
-.. _Grafana: http://testresults.opnfv.org/grafana/dashboard/db/yardstick-main
-.. _POD6: https://wiki.opnfv.org/pharos?&#community_test_labs
-
-Overview of test results
-------------------------
-
-See Grafana_ for viewing test result metrics for each respective test case. It
-is possible to choose which specific scenarios to look at, and then to zoom in
-on the details of each test scenario run as well.
-
-All of the test case results below are based on 5 scenario test runs, each run
-on the Intel POD6_ between September 13 and 16 in 2016.
-
-TC002
------
-The round-trip-time (RTT) between 2 VMs on different blades is measured using
-ping. Most test run measurements result in an average RTT between 1.50 and
-1.68 ms. Only one test run reached its greatest RTT spike of 2.62 ms, and that
-run also has the smallest RTT of 1.00 ms. The other four runs have no similar
-spike at all; their minimum and average RTTs are approx. 1.06 ms and 1.32 ms.
-The SLA is set to 10 ms. The SLA value is used as a reference; it has not been
-defined by OPNFV.
-
-TC005
------
-The IO read bandwidth actually refers to the storage throughput, which is
-measured by fio, and the greatest IO read bandwidth of the four runs is 175.4
-MB/s. The IO read bandwidth of the four runs looks similar across the four
-different days, with an average between 58.1 and 62.0 MB/s, except the one on
-Sep. 14, whose maximum storage throughput is only 133.0 MB/s. One of the runs
-has a minimum BW of 497 KB/s and another has a maximum BW of 177.4 MB/s. The
-read bandwidth SLA is set to 400 MB/s; it is used as a reference and has not
-been defined by OPNFV.
-
-The results of storage IOPS for the five runs look similar to each other. The
-IO read times per second of the five test runs have an average value between
-1.20 K/s and 1.61 K/s, while the minimum result is only 41 times per second.
-
-TC010
------
-The tool we use to measure memory read latency is lmbench, which is a series of
-micro benchmarks intended to measure basic operating system and hardware system
-metrics. The memory read latency of the five runs is between 1.146 ns and 1.172
-ns on average. The variations within each test run are quite different, some
-vary from a large range and others have a small change. For example, the
-largest change is on September 13, the memory read latency of which is ranging
-from 1.152 ns to 1.221 ns. However, the results on September 14 change very
-little. The SLA is set to 30 ns. The SLA value is used as a reference; it has
-not been defined by OPNFV.
-
-TC011
------
-Iperf3 is a tool for evaluating the packet delay variation between 2 VMs on
-different blades. The reported packet delay variations of the five test runs
-differ from each other. In general, the packet delays of the first two runs
-look similar, for they both stay stable within each run, and their mean packet
-delays are 0.07714 ms and 0.07982 ms respectively. Of the five runs, the third
-has the worst result, because the packet delay reaches 0.08384 ms. The trends
-of the rest of the runs look the same, with average packet delays of
-0.07808 ms and 0.07727 ms respectively. The SLA is set to 10 ms. The SLA value
-is used as a reference; it has not been defined by OPNFV.
-
-TC012
------
-Lmbench is also used to measure the memory read and write bandwidth, in which
-we use bw_mem to obtain the results. Among the five test runs, the memory
-bandwidth of the last three test runs almost keeps stable within each run, at
-11.64, 11.71 and 11.61 GB/s on average. However, the memory read and write
-bandwidth on Sep. 13 has a large range, for it ranges from 6.68 GB/s to 11.73
-GB/s. Here the SLA is set to 15 GB/s. The SLA value is used as a reference; it
-has not been defined by OPNFV.
-
-TC014
------
-Unixbench is used to evaluate the IaaS processing speed with regard to the
-scores of single CPU running and parallel running. It can be seen from the
-dashboard that the processing test results vary from scores 3208 to 3314, and
-there is only one result per date. No SLA set.
-
-TC037
------
-The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
-on different blades are measured when increasing the amount of UDP flows sent
-between the VMs using pktgen as packet generator tool.
-
-Round trip times and packet throughput between VMs can typically be affected by
-the amount of flows set up and result in higher RTT and less PPS throughput.
-
-The mean packet throughput of the five test runs is between 259.6 kpps and
-318.4 kpps, of which the result of the second run is the highest. The RTT
-results of all the test runs stay flat at approx. 20 ms. It is obvious that the
-PPS results are not as consistent as the RTT results.
-
-The number of flows in the five test runs is 240 k on average, and the PPS
-results fluctuate somewhat, since the largest packet throughput is 398.9 kpps
-and the minimum throughput is 250.6 kpps.
-
-There are no errors of received packets in the five runs, but there are still
-lost packets in all the test runs. The RTT values obtained by ping in the five
-runs have a similar average value, between 17 ms and 22 ms, of which the worst
-RTT is 53 ms on Sep. 14th.
-
-CPU load is measured by mpstat, and the CPU load of the four test runs seems
-quite similar, since the minimum value and the peak of CPU load are between 0
-percent and 10 percent respectively. The best result is obtained on Sep. 13th,
-with a CPU load of 10 percent.
-
-TC069
------
-With the block size changing from 1 kb to 512 kb, the memory write bandwidth
-tends to become larger first and then smaller within every test run, ranging
-from 21.6 GB/s to 26.8 GB/s and then down to 18.4 GB/s on average. Since the
-test id is one, only the INT memory write bandwidth is tested. On the whole,
-when the block size is 8 kb and 16 kb, the memory write bandwidth looks
-similar, with a minimal BW of 23.0 GB/s and a peak value of 28.6 GB/s. Then,
-as the block size becomes larger, the memory write bandwidth tends to
-decrease. The SLA is set to 7 GB/s. The SLA value is used as a reference; it
-has not been defined by OPNFV.
-
-TC070
------
-The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
-on different blades are measured when increasing the amount of UDP flows sent
-between the VMs using pktgen as packet generator tool.
-
-Round trip times and packet throughput between VMs can typically be affected by
-the amount of flows set up and result in higher RTT and less PPS throughput.
-
-The network latency is measured by ping, and the results of the five test runs
-look similar to each other; within these test runs, the maximum RTT can reach
-53 ms and the average RTT is usually approx. 18 ms. The network latency tested
-on Sep. 14 shows a peak latency of 53 ms. But on the whole, the average RTTs
-of the five runs stay flat and the network latency is relatively short.
-
-Memory utilization is measured by free, which can display the amount of free
-and used memory in the system. The largest amount of used memory is 272 MiB on
-Sep. 14. In general, the mean used memory of the five test runs follows a
-similar trend; the minimum used memory size is approx. 150 MiB and the average
-used memory size is about 250 MiB. On the other hand, the mean free memory of
-the five test runs follows a similar trend, changing from 218 MiB to 342 MiB,
-with an average value of approx. 38 MiB.
-
-Packet throughput and packet loss can be measured by pktgen, which is a tool
-for generating traffic loads for network experiments. The mean packet
-throughput of the five test runs seems quite different, ranging from
-285.29 kpps to 297.76 kpps. The average number of flows in these tests is
-240000, and each run has a minimum number of flows of 2 and a maximum number
-of flows of 1.001 Mil. At the same time, the corresponding packet throughput
-differs between 250.6 kpps and 398.9 kpps, with an average packet throughput
-between 277.2 kpps and 318.4 kpps. In summary, the PPS results seem consistent.
-Within each of the five test runs, when the number of flows becomes larger,
-the packet throughput does not become larger at the same time.
-
-TC071
------
-The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
-on different blades are measured when increasing the amount of UDP flows sent
-between the VMs using pktgen as packet generator tool.
-
-Round trip times and packet throughput between VMs can typically be affected by
-the amount of flows set up and result in higher RTT and less PPS throughput.
-
-The network latency is measured by ping, and the results of the five test runs
-look similar to each other. Within each test run, the maximum RTT is only 49
-ms and the average RTT is usually approx. 20 ms. On the whole, the average
-RTTs of the five runs stay stable and the network latency is relatively short.
-
-Cache utilization is measured by cachestat, which can display the size of the
-cache and buffer in the system. Cache utilization statistics are collected
-during UDP flows sent between the VMs using pktgen as packet generator tool.
-The largest cache size is 215 MiB in the four runs, and the smallest cache
-size is 95 MiB. On the whole, the average cache size of the five runs changes
-little and is about 200 MiB, except the one on Sep. 14th, whose mean cache
-size is very small, staying at 102 MiB. Meanwhile, the trend of the buffer
-size stays flat, with a minimum value of 7 MiB and a maximum value of 8 MiB
-and an average value of about 7.8 MiB.
-
-Packet throughput can be measured by pktgen, which is a tool for generating
-traffic loads for network experiments. The mean packet throughput of the four
-test runs seems quite different, ranging from 285.29 kpps to 297.76 kpps. The
-average number of flows in these tests is 239.7k, and each run has a minimum
-number of flows of 2 and a maximum number of flows of 1.001 Mil. At the same
-time, the corresponding packet throughput differs between 227.3 kpps and 398.9
-kpps, with an average packet throughput between 277.2 kpps and 318.4 kpps.
-Within each of the five test runs, when the number of flows becomes larger,
-the packet throughput does not become larger in the meantime.
-
-TC072
------
-The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
-on different blades are measured when increasing the amount of UDP flows sent
-between the VMs using pktgen as packet generator tool.
-
-Round trip times and packet throughput between VMs can typically be affected by
-the amount of flows set up and result in higher RTT and less PPS throughput.
-
-The RTT results are similar throughout the different test dates and runs,
-between 0 ms and 49 ms with an average latency of less than 22 ms. The PPS
-results are not as consistent as the RTT results, for the mean packet
-throughput of the five runs differs from 250.6 kpps to 398.9 kpps.
-
-Network utilization is measured by sar, the system activity reporter, which
-can display the average statistics for the time since the system was started.
-Network utilization statistics are collected during UDP flows sent between the
-VMs using pktgen as packet generator tool. The largest total number of packets
-transmitted per second looks similar for four test runs, whose values change a
-lot, from 10 pps to 399 kpps, except the one on Sep. 14th, whose total number
-of packets transmitted per second stays stable at 10 pps. Similarly, the total
-number of packets received per second looks the same for four runs, except the
-one on Sep. 14th, whose value is only 10 pps.
-
-In some test runs when running with less than approx. 90000 flows the PPS
-throughput is normally flatter compared to when running with more flows, after
-which the PPS throughput decreases. For the other test runs there is however no
-significant change to the PPS throughput when the number of flows is
-increased. In some test runs the PPS is also greater with 250000 flows
-compared to other test runs where the PPS result is less with only 2 flows.
-
-There are lost packets reported in most of the test runs. There is no observed
-correlation between the amount of flows and the amount of lost packets.
-The lost amount of packets normally differs a lot per test run.
-
-Detailed test results
----------------------
-The scenario was run on Intel POD6_ with:
-Joid
-OpenStack Mitaka
-Onos Goldeneye
-OpenVirtualSwitch 2.5.90
-OpenDayLight Beryllium
-
-Rationale for decisions
------------------------
-Pass
-
-Conclusions and recommendations
--------------------------------
-Tests were successfully executed and metrics collected.
-No SLA was verified. To be decided on in next release of OPNFV.
diff --git a/docs/release/results/os-onos-sfc-ha.rst b/docs/release/results/os-onos-sfc-ha.rst
deleted file mode 100644
index e52ae3d55..000000000
--- a/docs/release/results/os-onos-sfc-ha.rst
+++ /dev/null
@@ -1,517 +0,0 @@
-.. This work is licensed under a Creative Commons Attribution 4.0 International
-.. License.
-.. http://creativecommons.org/licenses/by/4.0
-
-
-===============================
-Test Results for os-onos-sfc-ha
-===============================
-
-.. toctree::
- :maxdepth: 2
-
-
-fuel
-====
-
-.. _Grafana: http://testresults.opnfv.org/grafana/dashboard/db/yardstick-main
-.. _POD2: https://wiki.opnfv.org/pharos?&#community_test_labs
-
-Overview of test results
-------------------------
-
-See Grafana_ for viewing test result metrics for each respective test case. It
-is possible to choose which specific scenarios to look at, and then to zoom in
-on the details of each test scenario run as well.
-
-All of the test case results below are based on 4 scenario test runs, each run
-on the Ericsson POD2_ or LF POD2_ between September 5 and 10 in 2016.
-
-TC002
------
-The round-trip-time (RTT) between 2 VMs on different blades is measured using
-ping. Most test run measurements result on average between 0.5 and 0.6 ms.
-A few runs start with a 1 - 1.5 ms RTT spike (this could be because of normal
-ARP handling). One test run has a greater RTT spike of 1.9 ms, which is the
-same one with the 0.7 ms average. The other runs have no similar spike at all.
-To be able to draw conclusions, more runs should be made.
-SLA set to 10 ms. The SLA value is used as a reference; it has not
-been defined by OPNFV.
-
-TC005
------
-The IO read bandwidth looks similar between different dates, with an
-average between approx. 170 and 200 MB/s. Within each test run the results
-vary, with a minimum 2 MB/s and maximum 838 MB/s on the totality. Most runs
-have a minimum BW of 3 MB/s (two runs at 2 MB/s). The maximum BW varies more in
-absolute numbers between the dates, between 617 and 838 MB/s.
-SLA set to 400 MB/s. The SLA value is used as a reference; it has not been
-defined by OPNFV.
-
-TC010
------
-The measurements for memory latency are similar between test dates and result
-in approx. 1.2 ns. The variations within each test run are similar, between
-1.215 and 1.219 ns. One exception is February 16, where the average is 1.222
-and varies between 1.22 and 1.28 ns.
-SLA set to 30 ns. The SLA value is used as a reference; it has not been defined
-by OPNFV.
-
-TC011
------
-Packet delay variation between 2 VMs on different blades is measured using
-Iperf3. On the first date the reported packet delay variation varies between
-0.0025 and 0.011 ms, with an average delay variation of 0.0067 ms.
-On the second date the delay variation varies between 0.002 and 0.006 ms, with
-an average delay variation of 0.004 ms.
-
-TC012
------
-Between test dates, the average measurements for memory bandwidth vary between
-17.4 and 17.9 GB/s. Within each test run the results vary more, with a minimal
-BW of 16.4 GB/s and maximum of 18.2 GB/s on the totality.
-SLA set to 15 GB/s. The SLA value is used as a reference; it has not been
-defined by OPNFV.
-
-TC014
------
-The Unixbench processor test run results vary between scores 3080 and 3240,
-one result each date. The average score on the total is 3150.
-No SLA set.
-
-TC037
------
-The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
-on different blades are measured when increasing the amount of UDP flows sent
-between the VMs using pktgen as packet generator tool.
-
-Round trip times and packet throughput between VMs can typically be affected by
-the amount of flows set up and result in higher RTT and less PPS throughput.
-
-The RTT results are similar throughout the different test dates and runs at
-approx. 15 ms. Some test runs show an increase with many flows, in the range
-towards 16 to 17 ms. One exception standing out is Feb. 15 where the average
-RTT is stable at approx. 13 ms. The PPS results are not as consistent as the
-RTT results.
-In some test runs when running with less than approx. 10000 flows the PPS
-throughput is normally flatter compared to when running with more flows, after
-which the PPS throughput decreases. Around 20 percent decrease in the worst
-case. For the other test runs there is however no significant change to the PPS
-throughput when the number of flows is increased. In some test runs the PPS
-is also greater with 1000000 flows compared to other test runs where the PPS
-result is less with only 2 flows.
-
-The average PPS throughput in the different runs varies between 414000 and
-452000 PPS. The total amount of packets in each test run is approx. 7500000 to
-8200000 packets. One test run Feb. 15 sticks out with a PPS average of
-558000 and approx. 1100000 packets in total (same as the one mentioned earlier
-for RTT results).
-
-There are lost packets reported in most of the test runs. There is no observed
-correlation between the amount of flows and the amount of lost packets.
-The lost amount of packets normally range between 100 and 1000 per test run,
-but there are spikes in the range of 10000 lost packets as well, and even
-more in rare cases.
-
-CPU utilization statistics are collected during UDP flows sent between the VMs
-using pktgen as packet generator tool. The average measurements for CPU
-utilization ratio vary between 1% to 2%. The peak of CPU utilization ratio
-appears around 7%.
-
-TC069
------
-Between test dates, the average measurements for memory bandwidth vary between
-15.5 and 25.4 GB/s. Within each test run the results vary more, with a minimal
-BW of 9.7 GB/s and maximum of 29.5 GB/s on the totality.
-SLA set to 6 GB/s. The SLA value is used as a reference; it has not been
-defined by OPNFV.
-
-TC070
------
-The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
-on different blades are measured when increasing the amount of UDP flows sent
-between the VMs using pktgen as packet generator tool.
-
-Round trip times and packet throughput between VMs can typically be affected by
-the amount of flows set up and result in higher RTT and less PPS throughput.
-
-The RTT results are similar throughout the different test dates and runs at
-approx. 15 ms. Some test runs show an increase with many flows, in the range
-towards 16 to 17 ms. One exception standing out is Feb. 15 where the average
-RTT is stable at approx. 13 ms. The PPS results are not as consistent as the
-RTT results.
-In some test runs when running with less than approx. 10000 flows the PPS
-throughput is normally flatter compared to when running with more flows, after
-which the PPS throughput decreases. Around 20 percent decrease in the worst
-case. For the other test runs there is however no significant change to the PPS
-throughput when the number of flows is increased. In some test runs the PPS
-is also greater with 1000000 flows compared to other test runs where the PPS
-result is less with only 2 flows.
-
-The average PPS throughput in the different runs varies between 414000 and
-452000 PPS. The total amount of packets in each test run is approx. 7500000 to
-8200000 packets. One test run Feb. 15 sticks out with a PPS average of
-558000 and approx. 1100000 packets in total (same as the one mentioned earlier
-for RTT results).
-
-There are lost packets reported in most of the test runs. There is no observed
-correlation between the amount of flows and the amount of lost packets.
-The lost amount of packets normally range between 100 and 1000 per test run,
-but there are spikes in the range of 10000 lost packets as well, and even
-more in rare cases.
-
-Memory utilization statistics are collected during UDP flows sent between the
-VMs using pktgen as packet generator tool. The average measurements for memory
-utilization vary between 225MB to 246MB. The peak of memory utilization appears
-around 340MB.
-
-TC071
------
-The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
-on different blades are measured when increasing the amount of UDP flows sent
-between the VMs using pktgen as packet generator tool.
-
-Round trip times and packet throughput between VMs can typically be affected by
-the amount of flows set up and result in higher RTT and less PPS throughput.
-
-The RTT results are similar throughout the different test dates and runs at
-approx. 15 ms. Some test runs show an increase with many flows, in the range
-towards 16 to 17 ms. One exception standing out is Feb. 15 where the average
-RTT is stable at approx. 13 ms. The PPS results are not as consistent as the
-RTT results.
-In some test runs when running with less than approx. 10000 flows the PPS
-throughput is normally flatter compared to when running with more flows, after
-which the PPS throughput decreases. Around 20 percent decrease in the worst
-case. For the other test runs there is however no significant change to the PPS
-throughput when the number of flows is increased. In some test runs the PPS
-is also greater with 1000000 flows compared to other test runs where the PPS
-result is less with only 2 flows.
-
-The average PPS throughput in the different runs varies between 414000 and
-452000 PPS. The total amount of packets in each test run is approx. 7500000 to
-8200000 packets. One test run Feb. 15 sticks out with a PPS average of
-558000 and approx. 1100000 packets in total (same as the one mentioned earlier
-for RTT results).
-
-There are lost packets reported in most of the test runs. There is no observed
-correlation between the amount of flows and the amount of lost packets.
-The lost amount of packets normally range between 100 and 1000 per test run,
-but there are spikes in the range of 10000 lost packets as well, and even
-more in rare cases.
-
-Cache utilization statistics are collected during UDP flows sent between the
-VMs using pktgen as packet generator tool. The average measurements for cache
-utilization vary between 205MB to 212MB.
-
-TC072
------
-The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
-on different blades are measured when increasing the amount of UDP flows sent
-between the VMs using pktgen as packet generator tool.
-
-Round trip times and packet throughput between VMs can typically be affected by
-the amount of flows set up and result in higher RTT and less PPS throughput.
-
-The RTT results are similar throughout the different test dates and runs at
-approx. 15 ms. Some test runs show an increase with many flows, in the range
-towards 16 to 17 ms. One exception standing out is Feb. 15 where the average
-RTT is stable at approx. 13 ms. The PPS results are not as consistent as the
-RTT results.
-In some test runs when running with less than approx. 10000 flows the PPS
-throughput is normally flatter compared to when running with more flows, after
-which the PPS throughput decreases. Around 20 percent decrease in the worst
-case. For the other test runs there is however no significant change to the PPS
-throughput when the number of flows is increased. In some test runs the PPS
-is also greater with 1000000 flows compared to other test runs where the PPS
-result is less with only 2 flows.
-
-The average PPS throughput in the different runs varies between 414000 and
-452000 PPS. The total amount of packets in each test run is approx. 7500000 to
-8200000 packets. One test run Feb. 15 sticks out with a PPS average of
-558000 and approx. 1100000 packets in total (same as the one mentioned earlier
-for RTT results).
-
-There are lost packets reported in most of the test runs. There is no observed
-correlation between the amount of flows and the amount of lost packets.
-The lost amount of packets normally range between 100 and 1000 per test run,
-but there are spikes in the range of 10000 lost packets as well, and even
-more in rare cases.
-
-Network utilization statistics are collected during UDP flows sent between the
-VMs using pktgen as packet generator tool. Total number of packets received per
-second was average on 200 kpps and total number of packets transmitted per
-second was average on 600 kpps.
-
-Detailed test results
----------------------
-The scenario was run on Ericsson POD2_ and LF POD2_ with:
-Fuel 9.0
-OpenStack Mitaka
-Onos Goldeneye
-OpenVirtualSwitch 2.5.90
-OpenDayLight Beryllium
-
-Rationale for decisions
------------------------
-Pass
-
-Tests were successfully executed and metrics collected.
-No SLA was verified. To be decided on in next release of OPNFV.
-
-Conclusions and recommendations
--------------------------------
-The pktgen test configuration has a relatively large base effect on RTT in
-TC037 compared to TC002, where there is no background load at all: approx.
-15 ms compared to approx. 0.5 ms, which is more than a 3000 percent
-difference in RTT results.
-Notably, RTT and throughput come out with better results than, for instance,
-the *fuel-os-nosdn-nofeature-ha* scenario does. The reason for this should
-probably be further analyzed and understood. It could also be of interest
-to make further analyses to find patterns and reasons for lost traffic, and
-to see if there are continuous variations where some test cases stand out
-with better or worse results than the general test case.
-
-
-Joid
-=====
-
-.. _Grafana: http://testresults.opnfv.org/grafana/dashboard/db/yardstick-main
-.. _POD6: https://wiki.opnfv.org/pharos?&#community_test_labs
-
-Overview of test results
-------------------------
-
-See Grafana_ for viewing test result metrics for each respective test case. It
-is possible to choose which specific scenarios to look at, and then to zoom in
-on the details of each test scenario run as well.
-
-All of the test case results below are based on 4 scenario test runs, each run
-on the Intel POD6_ between September 8 and 11 in 2016.
-
-TC002
------
-The round-trip-time (RTT) between 2 VMs on different blades is measured using
-ping. Most test run measurements result on average between 1.35 ms and 1.57 ms.
-Only one test run reached the greatest RTT spike of 2.58 ms. Meanwhile, the
-smallest network latency is 1.11 ms, obtained on Sep. 11. In general, the
-average network latency of the four test runs is between 1.35 ms and 1.57 ms.
-The SLA is set to 10 ms. The SLA value is used as a reference; it has not been
-defined by OPNFV.
-
-TC005
------
-The IO read bandwidth actually refers to the storage throughput, which is
-measured by fio, and the greatest IO read bandwidth of the four runs is 175.4
-MB/s. The IO read bandwidth of three of the runs looks similar, with an average
-between 43.7 and 56.3 MB/s, except the one on Sep. 8, for which the maximum
-storage throughput is only 107.9 MB/s. One of the runs has a minimum BW of 478
-KB/s and another has a maximum BW of 168.6 MB/s. The SLA for read bandwidth is
-set to 400 MB/s, which is used as a reference; it has not been defined by OPNFV.
-
-The results of storage IOPS for the four runs look similar to each other. The
-IO read times per second of the four test runs have an average value between
-978 per second and 1.20 K/s, while the minimum result is only 36 times per
-second.
-
-TC010
------
-The tool we use to measure memory read latency is lmbench, which is a series of
-micro benchmarks intended to measure basic operating system and hardware system
-metrics. The memory read latency of the four runs is between 1.164 ns and 1.244
-ns on average. The variations within each test run are quite different: some
-runs vary over a large range while others change very little. For example, the
-largest change is on September 10, where the memory read latency ranges from
-1.128 ns to 1.381 ns. However, the results on September 11 change very little.
-The SLA is set to 30 ns. The SLA value is used as a reference; it has not been
-defined by OPNFV.
-
-TC011
------
-Iperf3 is a tool for evaluating the packet delay variation between 2 VMs on
-different blades. The reported packet delay variations of the four test runs
-differ from each other. In general, the packet delays of two runs look similar,
-as they both stay stable within each run, and their mean packet delays are
-0.0772 ms and 0.0788 ms respectively. Of the four runs, the fourth has the
-worst result, because the packet delay reaches 0.0838 ms. The remaining run has
-a wide range from 0.0666 ms to 0.0798 ms. The SLA value is set to 10 ms.
-The SLA value is used as a reference; it has not been defined by OPNFV.
-
-TC012
------
-Lmbench is also used to measure the memory read and write bandwidth, for which
-we use bw_mem to obtain the results. Among the four test runs, the trends of
-the memory bandwidth look similar: they all span a wide range, and the minimum
-and maximum results are 9.02 GB/s and 18.14 GB/s. The SLA is set to 15 GB/s.
-The SLA value is used as a reference; it has not been defined by OPNFV.
-
-TC014
------
-Unixbench is used to evaluate the IaaS processing speed with regard to the
-scores of single CPU running and parallel running. It can be seen from the
-dashboard that the processing test results vary from scores 3395 to 3475, and
-there is only one result per date. No SLA set.
-
-TC037
------
-The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
-on different blades are measured when increasing the amount of UDP flows sent
-between the VMs using pktgen as packet generator tool.
-
-Round trip times and packet throughput between VMs can typically be affected by
-the amount of flows set up and result in higher RTT and less PPS throughput.
-
-The mean packet throughput of the four test runs is between 362.1 kpps and
-363.5 kpps, of which the result of the third run is the highest. The RTT
-results of all the test runs stay flat at approx. 17 ms. It is obvious that the
-PPS results are not as consistent as the RTT results.
-
-The number of flows in the four test runs is 240 k on average, and the PPS
-results fluctuate somewhat, since the largest packet throughput is 418.1 kpps
-and the minimum throughput is 326.5 kpps.
-
-There are no errors of received packets in the four runs, but there are still
-lost packets in all the test runs. The RTT values obtained by ping in the four
-runs have a similar average value, approx. 17 ms, of which the worst RTT is
-39 ms on Sep. 11.
-
-CPU load is measured by mpstat, and the CPU load of the four test runs seems
-quite similar, since the minimum value and the peak of CPU load are between 0
-percent and nine percent respectively. The best result is obtained on Sep. 10,
-with a CPU load of nine percent.
-
-TC069
------
-With the block size changing from 1 kb to 512 kb, the memory write bandwidth
-tends to become larger first and then smaller within every test run, ranging
-from 25.9 GB/s to 26.6 GB/s and then down to 18.1 GB/s on average. Since the
-test id is one, only the INT memory write bandwidth is tested. On the whole,
-when the block size is from 2 kb to 16 kb, the memory write bandwidth looks
-similar, with a minimal BW of 22.1 GB/s and a peak value of 28.6 GB/s. Then,
-as the block size becomes larger, the memory write bandwidth tends to
-decrease. The SLA is set to 7 GB/s. The SLA value is used as a reference;
-it has not been defined by OPNFV.
-
-TC070
------
-The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
-on different blades are measured when increasing the amount of UDP flows sent
-between the VMs using pktgen as packet generator tool.
-
-Round trip times and packet throughput between VMs can typically be affected by
-the amount of flows set up and result in higher RTT and less PPS throughput.
-
-The network latency is measured by ping, and the results of the four test runs
-look similar to each other; within these test runs, the maximum RTT can reach
-39 ms and the average RTT is usually approx. 17 ms. The network latency tested
-on Sep. 11 shows a peak latency of 39 ms. But on the whole, the average RTTs
-of the four runs stay flat and the network latency is relatively short.
-
-Memory utilization is measured by free, which can display the amount of free
-and used memory in the system. The largest amount of used memory is 270 MiB in
-the first two runs. In general, the mean used memory of two of the test runs
-shows very high memory utilization, reaching 264 MiB on average, while the
-other two runs have a wide range of memory usage, with a minimum value of 150
-MiB and a maximum value of 270 MiB. On the other hand, the mean free memory of
-the four test runs follows a trend similar to that of the mean used memory.
-In general, the mean free memory changes from 220 MiB to 342 MiB.
-
-Packet throughput and packet loss can be measured by pktgen, which is a tool
-for generating traffic loads for network experiments. The mean packet
-throughput of the four test runs seems quite different, ranging from
-326.5 kpps to 418.1 kpps. The average number of flows in these tests is
-240000, and each run has a minimum number of flows of 2 and a maximum number
-of flows of 1.001 Mil. At the same time, the corresponding packet throughput
-differs between 326.5 kpps and 418.1 kpps, with an average packet throughput
-between 361.7 kpps and 363.5 kpps. In summary, the PPS results seem consistent.
-Within each of the four test runs, when the number of flows becomes larger,
-the packet throughput does not become larger at the same time.
-
-TC071
------
-The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
-on different blades are measured when increasing the amount of UDP flows sent
-between the VMs using pktgen as packet generator tool.
-
-Round trip times and packet throughput between VMs can typically be affected by
-the amount of flows set up and result in higher RTT and less PPS throughput.
-
-The network latency is measured by ping, and the results of the four test runs
-look similar to each other. Within each test run, the maximum RTT is only 47
-ms and the average RTT is usually approx. 15 ms. On the whole, the average
-RTTs of the four runs stay stable and the network latency is relatively small.
-
-Cache utilization is measured by cachestat, which can display the size of the
-cache and buffer in the system. Cache utilization statistics are collected
-during UDP flows sent between the VMs using pktgen as packet generator tool.
-The largest cache size is 214 MiB, which is the same for the four runs, and
-the smallest cache size is 94 MiB. On the whole, the average cache sizes of
-the four runs look the same and are between 198 MiB and 207 MiB. Meanwhile,
-the trend of the buffer size stays flat, with a minimum value of 7 MiB and a
-maximum value of 8 MiB and an average value of about 7.9 MiB.
-
-Packet throughput can be measured by pktgen, which is a tool for generating
-traffic loads for network experiments. The mean packet throughput of the four
-test runs is about the same, approx. 363 kpps. The average number of flows in
-these tests is 240k, and each run has a minimum number of flows of 2 and a
-maximum number of flows of 1.001 Mil. At the same time, the corresponding
-packet throughput differs between 327 kpps and 418 kpps, with an average
-packet throughput of about 363 kpps. Within each of the four test runs, when
-the number of flows becomes larger, the packet throughput does not become
-larger in the meantime.
-
-TC072
------
-The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
-on different blades are measured when increasing the amount of UDP flows sent
-between the VMs using pktgen as packet generator tool.
-
-Round trip times and packet throughput between VMs can typically be affected by
-the amount of flows set up and result in higher RTT and less PPS throughput.
-
-The RTT results are similar throughout the different test dates and runs,
-between 0 ms and 47 ms with an average latency of less than 16 ms. The PPS
-results are not as consistent as the RTT results, for the mean packet
-throughput of the four runs differs from 361.7 kpps to 365.0 kpps.
-
-Network utilization is measured by sar, the system activity reporter, which
-can display the average statistics for the time since the system was started.
-Network utilization statistics are collected during UDP flows sent between the
-VMs using pktgen as packet generator tool. The largest total number of packets
-transmitted per second looks similar for two test runs, whose values change a
-lot, from 10 pps to 432 kpps, while the results of the other test runs seem
-the same and stay stable, with an average number of packets transmitted per
-second of 10 pps. However, the total number of packets received per second of
-the four runs looks similar, with a wide range of 2 pps to 657 kpps.
-
-In some test runs when running with less than approx. 250000 flows the PPS
-throughput is normally flatter compared to when running with more flows, after
-which the PPS throughput decreases. For the other test runs there is however no
-significant change to the PPS throughput when the number of flows is
-increased. In some test runs the PPS is also greater with 250000 flows
-compared to other test runs where the PPS result is less with only 2 flows.
-
-There are lost packets reported in most of the test runs. There is no observed
-correlation between the amount of flows and the amount of lost packets.
-The lost amount of packets normally differs a lot per test run.
-
-Detailed test results
----------------------
-The scenario was run on Intel POD6_ with:
-Joid
-OpenStack Mitaka
-Onos Goldeneye
-OpenVirtualSwitch 2.5.90
-OpenDayLight Beryllium
-
-Rationale for decisions
------------------------
-Pass
-
-Conclusions and recommendations
--------------------------------
-Tests were successfully executed and metrics collected.
-No SLA was verified. To be decided on in next release of OPNFV.
-
diff --git a/docs/release/results/overview.rst b/docs/release/results/overview.rst
index b4a050545..9fd74797c 100644
--- a/docs/release/results/overview.rst
+++ b/docs/release/results/overview.rst
@@ -42,55 +42,31 @@ environment, features or test framework.
The list of scenarios supported by each installer can be described as follows:
-+-------------------------+---------+---------+---------+---------+
-| Scenario | Apex | Compass | Fuel | Joid |
-+=========================+=========+=========+=========+=========+
-| os-nosdn-nofeature-noha | | | X | X |
-+-------------------------+---------+---------+---------+---------+
-| os-nosdn-nofeature-ha | X | X | X | X |
-+-------------------------+---------+---------+---------+---------+
-| os-odl_l2-nofeature-ha | X | X | X | X |
-+-------------------------+---------+---------+---------+---------+
-| os-odl_l2-nofeature-noha| | | X | |
-+-------------------------+---------+---------+---------+---------+
-| os-odl_l3-nofeature-ha | X | X | X | |
-+-------------------------+---------+---------+---------+---------+
-| os-odl_l3-nofeature-noha| | | X | |
-+-------------------------+---------+---------+---------+---------+
-| os-onos-sfc-ha | X | X | X | X |
-+-------------------------+---------+---------+---------+---------+
-| os-onos-sfc-noha | | | X | |
-+-------------------------+---------+---------+---------+---------+
-| os-onos-nofeature-ha | X | X | X | X |
-+-------------------------+---------+---------+---------+---------+
-| os-onos-nofeature-noha | | | X | |
-+-------------------------+---------+---------+---------+---------+
-| os-odl_l2-sfc-ha | | | X | |
-+-------------------------+---------+---------+---------+---------+
-| os-odl_l2-sfc-noha | X | X | X | |
-+-------------------------+---------+---------+---------+---------+
-| os-odl_l2-bgpvpn-ha | X | | X | |
-+-------------------------+---------+---------+---------+---------+
-| os-odl_l2-bgpvpn-noha | | X | X | |
-+-------------------------+---------+---------+---------+---------+
-| os-nosdn-kvm-ha | | | X | |
-+-------------------------+---------+---------+---------+---------+
-| os-nosdn-kvm-noha | | X | X | |
-+-------------------------+---------+---------+---------+---------+
-| os-nosdn-ovs-ha | | | X | |
-+-------------------------+---------+---------+---------+---------+
-| os-nosdn-ovs-noha | X | | X | |
-+-------------------------+---------+---------+---------+---------+
-| os-ocl-nofeature-ha | | | | |
-+-------------------------+---------+---------+---------+---------+
-| os-nosdn-lxd-ha | | | | X |
-+-------------------------+---------+---------+---------+---------+
-| os-nosdn-lxd-noha | | | | X |
-+-------------------------+---------+---------+---------+---------+
-| os-odl_l2-fdio-noha | X | | | |
-+-------------------------+---------+---------+---------+---------+
-| os-odl_l2-moon-ha | | X | | |
-+-------------------------+---------+---------+---------+---------+
++-------------------------+------+---------+----------+------+------+-------+
+| Scenario | Apex | Compass | Fuel-arm | Fuel | Joid | Daisy |
++=========================+======+=========+==========+======+======+=======+
+| os-nosdn-nofeature-noha | X | | | | X | |
++-------------------------+------+---------+----------+------+------+-------+
+| os-nosdn-nofeature-ha | X | | X | X | X | X |
++-------------------------+------+---------+----------+------+------+-------+
+| os-nosdn-bar-noha | X | | | | | |
++-------------------------+------+---------+----------+------+------+-------+
+| os-nosdn-bar-ha | X | | | | | |
++-------------------------+------+---------+----------+------+------+-------+
+| os-odl-bgpvpn-ha | X | | | | | |
++-------------------------+------+---------+----------+------+------+-------+
+| os-nosdn-calipso-noha | X | | | | | |
++-------------------------+------+---------+----------+------+------+-------+
+| os-nosdn-kvm-ha | | X | | | | |
++-------------------------+------+---------+----------+------+------+-------+
+| os-odl_l3-nofeature-ha | | X | | | | |
++-------------------------+------+---------+----------+------+------+-------+
+| os-odl-sfc-ha | | X | | | | |
++-------------------------+------+---------+----------+------+------+-------+
+| os-odl-nofeature-ha | | | | X | | X |
++-------------------------+------+---------+----------+------+------+-------+
+| os-nosdn-ovs-ha | | | | X | | |
++-------------------------+------+---------+----------+------+------+-------+
To qualify for release, the scenarios must have deployed and been successfully
tested in four consecutive installations to establish stability of deployment
@@ -103,4 +79,4 @@ References
* IEEE Std 829-2008. "Standard for Software and System Test Documentation".
-* OPNFV Colorado release note for Yardstick.
+* OPNFV Fraser release note for Yardstick.
diff --git a/docs/release/results/results.rst b/docs/release/results/results.rst
index 04c6b9f87..c75f5ae94 100644
--- a/docs/release/results/results.rst
+++ b/docs/release/results/results.rst
@@ -2,13 +2,16 @@
.. License.
.. http://creativecommons.org/licenses/by/4.0
-Results listed by scenario
+Results listed by test cases
============================
-The following sections describe the yardstick results as evaluated for the
-Colorado release scenario validation runs. Each section describes the
-determined state of the specific scenario as deployed in the Colorado
-release process.
+.. _TOM: https://wiki.opnfv.org/display/testing/R+post-processing+of+the+Yardstick+results
+
+
+The following sections describe the Yardstick test case results as evaluated
+for the OPNFV Fraser release scenario validation runs. Each section describes
+the determined state of the specific test case as executed in the Fraser
+release process. All test data are analyzed using the TOM_ tool.
Scenario Results
================
@@ -16,21 +19,22 @@ Scenario Results
.. _Dashboard: http://testresults.opnfv.org/grafana/dashboard/db/yardstick-main
.. _Jenkins: https://build.opnfv.org/ci/view/yardstick/
+
The following documents contain results of Yardstick test cases executed on
-OPNFV labs, triggered by OPNFV CI pipeline, documented per scenario.
+OPNFV labs, triggered by OPNFV CI pipeline, documented per test case.
.. toctree::
:maxdepth: 1
- os-nosdn-nofeature-ha.rst
- os-nosdn-nofeature-noha.rst
- os-odl_l2-nofeature-ha.rst
- os-odl_l2-bgpvpn-ha.rst
- os-odl_l2-sfc-ha.rst
- os-nosdn-kvm-ha.rst
- os-onos-nofeature-ha.rst
- os-onos-sfc-ha.rst
+ tc002-network-latency.rst
+ tc010-memory-read-latency.rst
+ tc011-packet-delay-variation.rst
+ tc012-memory-read-write-bandwidth.rst
+ tc014-cpu-processing-speed.rst
+ tc069-memory-write-bandwidth.rst
+ tc082-context-switches-under-load.rst
+ tc083-network-throughput-between-vm.rst
Test results of executed tests are available in Dashboard_ and logs in Jenkins_.
diff --git a/docs/release/results/tc002-network-latency.rst b/docs/release/results/tc002-network-latency.rst
new file mode 100644
index 000000000..722423473
--- /dev/null
+++ b/docs/release/results/tc002-network-latency.rst
@@ -0,0 +1,317 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+
+
+======================================
+Test results for TC002 network latency
+======================================
+
+.. toctree::
+ :maxdepth: 2
+
+
+Overview of test case
+=====================
+
+TC002 verifies that network latency is within acceptable boundaries when
+packets travel between hosts located on the same or different compute blades.
+Ping packets (ICMP protocol's mandatory ECHO_REQUEST datagram) are sent from a
+host VM to the target VM(s) to elicit ICMP ECHO_RESPONSE.
+
+Metric: RTT (Round Trip Time)
+Unit: ms
+
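+Below is a minimal sketch of how such an RTT sample could be collected and
+summarized, assuming a Linux host VM with the standard ``ping`` binary on the
+path; the helper name and the target address are illustrative only and are not
+part of the Yardstick code base:
+
+.. code-block:: python
+
+    import re
+    import statistics
+    import subprocess
+
+    def ping_rtt_ms(target, count=10):
+        """Send `count` ICMP echo requests and return the RTT samples in ms."""
+        out = subprocess.run(["ping", "-c", str(count), target],
+                             capture_output=True, text=True, check=True).stdout
+        # Each reply line contains e.g. "time=0.532 ms".
+        return [float(t) for t in re.findall(r"time=([\d.]+) ms", out)]
+
+    samples = ping_rtt_ms("10.0.0.5")   # hypothetical target VM address
+    print("min/avg/max RTT (ms): %.3f / %.3f / %.3f"
+          % (min(samples), statistics.mean(samples), max(samples)))
+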
+
+Euphrates release
+-----------------
+
+Test results per scenario and pod (lower is better):
+
+{
+
+ "os-nosdn-ovs_dpdk-ha:huawei-pod2:compass": [0.214],
+
+ "os-odl_l2-moon-ha:huawei-pod2:compass": [0.309],
+
+ "os-nosdn-ovs_dpdk-noha:huawei-virtual3:compass": [0.3145],
+
+ "os-nosdn-ovs-ha:lf-pod2:fuel": [0.3585],
+
+ "os-odl_l3-nofeature-ha:huawei-pod2:compass": [0.3765],
+
+ "os-nosdn-ovs_dpdk-noha:huawei-virtual4:compass": [0.403],
+
+ "os-odl-sfc-ha:huawei-pod2:compass": [0.413],
+
+ "os-nosdn-ovs-ha:ericsson-pod1:fuel": [0.494],
+
+ "os-nosdn-nofeature-ha:lf-pod1:apex": [0.5715],
+
+ "os-nosdn-nofeature-noha:lf-pod1:apex": [0.5785],
+
+ "os-odl-sfc-ha:lf-pod1:apex": [0.617],
+
+ "os-odl-nofeature-ha:lf-pod1:apex": [0.62],
+
+ "os-nosdn-bar-noha:lf-pod1:apex": [0.632],
+
+ "os-odl-nofeature-noha:lf-pod1:apex": [0.635],
+
+ "os-odl-bgpvpn-ha:lf-pod1:apex": [0.658],
+
+ "os-odl-sfc-noha:lf-pod1:apex": [0.663],
+
+ "os-nosdn-bar-ha:lf-pod1:apex": [0.668],
+
+ "os-ovn-nofeature-noha:lf-pod1:apex": [0.668],
+
+ "os-nosdn-nofeature-ha:huawei-pod2:compass": [0.6815],
+
+ "os-nosdn-kvm-ha:huawei-pod2:compass": [0.7005],
+
+ "os-nosdn-bar-ha:huawei-pod2:compass": [0.778],
+
+ "os-nosdn-ovs-noha:ericsson-virtual4:fuel": [0.7825],
+
+ "os-nosdn-ovs-noha:ericsson-virtual2:fuel": [0.7885],
+
+ "os-nosdn-nofeature-ha:flex-pod2:apex": [0.795],
+
+ "os-nosdn-ovs-noha:ericsson-virtual1:fuel": [0.8045],
+
+ "os-nosdn-nofeature-noha:huawei-pod12:joid": [0.8335],
+
+ "os-nosdn-ovs-noha:ericsson-virtual3:fuel": [0.8755],
+
+ "os-nosdn-nofeature-ha:huawei-pod12:joid": [0.8855],
+
+ "os-nosdn-ovs_dpdk-ha:huawei-virtual3:compass": [0.8895],
+
+ "os-nosdn-openbaton-ha:huawei-pod12:joid": [0.901],
+
+ "os-nosdn-ovs_dpdk-ha:huawei-virtual4:compass": [0.956],
+
+ "os-nosdn-lxd-noha:intel-pod5:joid": [1.131],
+
+ "os-odl_l2-moon-noha:huawei-virtual4:compass": [1.173],
+
+ "os-odl-sfc-ha:huawei-virtual8:compass": [1.2015],
+
+ "os-odl_l2-moon-noha:huawei-virtual3:compass": [1.204],
+
+ "os-nosdn-lxd-ha:intel-pod5:joid": [1.2245],
+
+ "os-odl-nofeature-ha:lf-pod2:fuel": [1.2285],
+
+ "os-odl-sfc-noha:huawei-virtual4:compass": [1.3055],
+
+ "os-nosdn-nofeature-noha:huawei-virtual4:compass": [1.309],
+
+ "os-odl_l3-nofeature-noha:huawei-virtual4:compass": [1.313],
+
+ "os-nosdn-nofeature-noha:huawei-virtual8:compass": [1.319],
+
+ "os-odl-nofeature-ha:ericsson-pod1:fuel": [1.3425],
+
+ "os-odl_l3-nofeature-noha:huawei-virtual3:compass": [1.3475],
+
+ "os-nosdn-nofeature-ha:lf-pod2:fuel": [1.348],
+
+ "os-nosdn-kvm-noha:huawei-virtual4:compass": [1.432],
+
+ "os-odl_l3-nofeature-noha:huawei-virtual9:compass": [1.442],
+
+ "os-nosdn-nofeature-ha:ericsson-pod1:fuel": [1.4505],
+
+ "os-nosdn-nofeature-ha:arm-pod5:fuel": [1.497],
+
+ "os-odl-sfc-noha:huawei-virtual3:compass": [1.504],
+
+ "os-odl-nofeature-ha:arm-pod5:fuel": [1.519],
+
+ "os-nosdn-nofeature-noha:intel-pod5:joid": [1.5415],
+
+ "os-nosdn-nofeature-noha:huawei-virtual3:compass": [1.5785],
+
+ "os-nosdn-nofeature-ha:intel-pod5:joid": [1.604],
+
+ "os-nosdn-kvm-noha:huawei-virtual3:compass": [1.61],
+
+ "os-nosdn-nofeature-noha:intel-pod18:joid": [1.633],
+
+ "os-nosdn-openbaton-ha:intel-pod18:joid": [1.6485],
+
+ "os-odl_l3-nofeature-ha:huawei-virtual2:compass": [1.7085],
+
+ "os-nosdn-nofeature-ha:intel-pod18:joid": [1.71],
+
+ "os-nosdn-nofeature-ha:huawei-virtual2:compass": [1.7955],
+
+ "os-odl-nofeature-ha:arm-pod6:fuel": [1.838],
+
+ "os-odl_l3-nofeature-ha:huawei-virtual4:compass": [1.88],
+
+ "os-odl_l2-moon-ha:huawei-virtual3:compass": [1.8975],
+
+ "os-nosdn-kvm-noha:huawei-virtual8:compass": [1.923],
+
+ "os-odl_l2-moon-ha:huawei-virtual4:compass": [1.944],
+
+ "os-odl-sfc-ha:huawei-virtual3:compass": [1.968],
+
+ "os-odl_l3-nofeature-ha:huawei-virtual3:compass": [1.986],
+
+ "os-nosdn-bar-ha:huawei-virtual4:compass": [2.0415],
+
+ "os-nosdn-nofeature-ha:huawei-virtual4:compass": [2.071],
+
+ "os-nosdn-nofeature-ha:arm-pod6:fuel": [2.0855],
+
+ "os-odl-sfc-ha:huawei-virtual4:compass": [2.1085],
+
+ "os-nosdn-nofeature-ha:huawei-virtual3:compass": [2.1135],
+
+ "os-nosdn-nofeature-noha:ericsson-virtual3:fuel": [2.234],
+
+ "os-nosdn-nofeature-ha:huawei-virtual9:compass": [2.294],
+
+ "os-nosdn-kvm-ha:huawei-virtual3:compass": [2.304],
+
+ "os-nosdn-bar-ha:huawei-virtual3:compass": [2.378],
+
+ "os-nosdn-kvm-ha:huawei-virtual4:compass": [2.397],
+
+ "os-nosdn-nofeature-ha:huawei-virtual1:compass": [2.472],
+
+ "os-nosdn-nofeature-noha:huawei-virtual1:compass": [2.603],
+
+ "os-nosdn-nofeature-noha:huawei-virtual2:compass": [2.635],
+
+ "os-odl-nofeature-noha:ericsson-virtual3:fuel": [2.9055],
+
+ "os-odl-nofeature-noha:ericsson-virtual2:fuel": [3.1295],
+
+ "os-nosdn-nofeature-noha:ericsson-virtual2:fuel": [3.337],
+
+ "os-odl-nofeature-noha:ericsson-virtual4:fuel": [3.634],
+
+ "os-nosdn-nofeature-noha:ericsson-virtual1:fuel": [3.875],
+
+ "os-odl-nofeature-noha:ericsson-virtual1:fuel": [3.9655],
+
+ "os-nosdn-nofeature-noha:ericsson-virtual4:fuel": [3.9795]
+
+}
+
+
+The influence of the scenario
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. image:: images/tc002_scenario.png
+ :width: 800px
+ :alt: TC002 influence of scenario
+
+{
+
+ "os-odl_l2-moon-ha": [0.3415],
+
+ "os-nosdn-ovs-ha": [0.3625],
+
+ "os-nosdn-ovs_dpdk-noha": [0.378],
+
+ "os-nosdn-ovs_dpdk-ha": [0.5265],
+
+ "os-nosdn-bar-noha": [0.632],
+
+ "os-odl-bgpvpn-ha": [0.658],
+
+ "os-ovn-nofeature-noha": [0.668],
+
+ "os-odl_l3-nofeature-ha": [0.8545],
+
+ "os-nosdn-ovs-noha": [0.8575],
+
+ "os-nosdn-bar-ha": [0.903],
+
+ "os-odl-sfc-ha": [1.127],
+
+ "os-nosdn-lxd-noha": [1.131],
+
+ "os-nosdn-nofeature-ha": [1.152],
+
+ "os-odl_l2-moon-noha": [1.1825],
+
+ "os-nosdn-lxd-ha": [1.2245],
+
+ "os-odl_l3-nofeature-noha": [1.337],
+
+ "os-odl-nofeature-ha": [1.352],
+
+ "os-odl-sfc-noha": [1.4255],
+
+ "os-nosdn-kvm-noha": [1.5045],
+
+ "os-nosdn-openbaton-ha": [1.5665],
+
+ "os-nosdn-nofeature-noha": [1.729],
+
+ "os-nosdn-kvm-ha": [1.7745],
+
+ "os-odl-nofeature-noha": [3.106]
+
+}
+
+
+The influence of the POD
+^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. image:: images/tc002_pod.png
+ :width: 800px
+ :alt: TC002 influence of the POD
+
+{
+
+ "huawei-pod2": [0.3925],
+
+ "lf-pod2": [0.5315],
+
+ "lf-pod1": [0.62],
+
+ "flex-pod2": [0.795],
+
+ "huawei-pod12": [0.87],
+
+ "intel-pod5": [1.25],
+
+ "ericsson-virtual3": [1.2655],
+
+ "ericsson-pod1": [1.372],
+
+ "arm-pod5": [1.518],
+
+ "huawei-virtual4": [1.5355],
+
+ "huawei-virtual3": [1.606],
+
+ "intel-pod18": [1.6575],
+
+ "huawei-virtual8": [1.709],
+
+ "huawei-virtual2": [1.872],
+
+ "arm-pod6": [1.895],
+
+ "huawei-virtual9": [2.0745],
+
+ "huawei-virtual1": [2.495],
+
+ "ericsson-virtual2": [2.7895],
+
+ "ericsson-virtual4": [3.768],
+
+ "ericsson-virtual1": [3.8035]
+
+}
+
+
+Fraser release
+--------------
diff --git a/docs/release/results/tc010-memory-read-latency.rst b/docs/release/results/tc010-memory-read-latency.rst
new file mode 100644
index 000000000..9a296b7a0
--- /dev/null
+++ b/docs/release/results/tc010-memory-read-latency.rst
@@ -0,0 +1,299 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+
+
+==========================================
+Test results for TC010 memory read latency
+==========================================
+
+.. toctree::
+ :maxdepth: 2
+
+
+Overview of test case
+=====================
+
+TC010 measures the memory read latency for varying memory sizes and strides.
+The test results shown below are for a memory size of 16 MB.
+
+Metric: Memory read latency
+Unit: ns
+
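+The lmbench ``lat_mem_rd`` benchmark behind this test case walks a buffer in a
+dependent, pointer-chasing pattern so that every load must wait for the
+previous one. The sketch below illustrates that principle only; it is not the
+Yardstick implementation, and because Python interpreter overhead dominates,
+its numbers are not comparable to the lmbench results reported here:
+
+.. code-block:: python
+
+    import random
+    import time
+
+    def chase_latency_ns(size_bytes=16 * 1024 * 1024, stride=64, iters=1_000_000):
+        """Rough per-load latency estimate via a randomized pointer chase."""
+        n = size_bytes // stride            # one slot per stride-sized block
+        order = list(range(n))
+        random.shuffle(order)               # random chain defeats prefetching
+        chain = [0] * n
+        for i in range(n - 1):
+            chain[order[i]] = order[i + 1]
+        chain[order[-1]] = order[0]         # close the cycle
+        idx, start = 0, time.perf_counter_ns()
+        for _ in range(iters):
+            idx = chain[idx]                # each step is one dependent read
+        return (time.perf_counter_ns() - start) / iters
+
+    print("approx. %.1f ns per dependent load" % chase_latency_ns())
+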
+
+Euphrates release
+-----------------
+
+Test results per scenario and pod (lower is better):
+
+{
+
+ "os-nosdn-nofeature-ha:ericsson-pod1:fuel": [5.3165],
+
+ "os-nosdn-nofeature-ha:flex-pod2:apex": [5.908],
+
+ "os-nosdn-ovs-noha:ericsson-virtual1:fuel": [6.412],
+
+ "os-nosdn-nofeature-noha:intel-pod18:joid": [6.545],
+
+ "os-nosdn-nofeature-ha:intel-pod18:joid": [6.592],
+
+ "os-nosdn-nofeature-noha:intel-pod5:joid": [6.5975],
+
+ "os-nosdn-ovs-ha:ericsson-pod1:fuel": [6.7675],
+
+ "os-odl-nofeature-ha:ericsson-pod1:fuel": [6.7675],
+
+ "os-nosdn-openbaton-ha:intel-pod18:joid": [6.7945],
+
+ "os-nosdn-nofeature-ha:intel-pod5:joid": [6.839],
+
+ "os-nosdn-ovs-noha:ericsson-virtual4:fuel": [6.9695],
+
+ "os-nosdn-nofeature-noha:ericsson-virtual4:fuel": [7.123],
+
+ "os-odl-nofeature-noha:ericsson-virtual4:fuel": [7.289],
+
+ "os-nosdn-ovs-noha:ericsson-virtual2:fuel": [7.4315],
+
+ "os-nosdn-nofeature-noha:ericsson-virtual2:fuel": [7.9],
+
+ "os-nosdn-ovs_dpdk-ha:huawei-pod2:compass": [8.178],
+
+ "os-nosdn-ovs_dpdk-noha:huawei-virtual3:compass": [8.616],
+
+ "os-nosdn-ovs_dpdk-noha:huawei-virtual4:compass": [8.646],
+
+ "os-odl_l3-nofeature-ha:huawei-pod2:compass": [8.8615],
+
+ "os-odl-sfc-ha:huawei-pod2:compass": [8.87],
+
+ "os-nosdn-bar-ha:huawei-pod2:compass": [8.877],
+
+ "os-odl_l2-moon-ha:huawei-pod2:compass": [8.892],
+
+ "os-nosdn-ovs-noha:ericsson-virtual3:fuel": [8.898],
+
+ "os-nosdn-nofeature-ha:huawei-pod2:compass": [8.952],
+
+ "os-nosdn-kvm-ha:huawei-pod2:compass": [8.9745],
+
+ "os-nosdn-ovs_dpdk-ha:huawei-virtual3:compass": [9.0375],
+
+ "os-nosdn-openbaton-ha:huawei-pod12:joid": [9.083],
+
+ "os-nosdn-nofeature-noha:huawei-pod12:joid": [9.09],
+
+ "os-nosdn-nofeature-ha:huawei-pod12:joid": [9.094],
+
+ "os-odl_l2-moon-noha:huawei-virtual4:compass": [9.293],
+
+ "os-odl_l2-moon-noha:huawei-virtual3:compass": [9.3525],
+
+ "os-odl-sfc-noha:huawei-virtual4:compass": [9.477],
+
+ "os-odl_l3-nofeature-noha:huawei-virtual3:compass": [9.5445],
+
+ "os-odl_l3-nofeature-noha:huawei-virtual4:compass": [9.5575],
+
+ "os-nosdn-nofeature-noha:huawei-virtual4:compass": [9.6435],
+
+ "os-nosdn-nofeature-noha:huawei-virtual1:compass": [9.68],
+
+ "os-nosdn-ovs_dpdk-ha:huawei-virtual4:compass": [9.728],
+
+ "os-nosdn-nofeature-noha:huawei-virtual3:compass": [9.751],
+
+ "os-nosdn-nofeature-noha:ericsson-virtual3:fuel": [9.8645],
+
+ "os-nosdn-kvm-noha:huawei-virtual3:compass": [9.969],
+
+ "os-odl-sfc-noha:huawei-virtual3:compass": [10.029],
+
+ "os-nosdn-kvm-noha:huawei-virtual4:compass": [10.088],
+
+ "os-odl-nofeature-noha:ericsson-virtual2:fuel": [10.2985],
+
+ "os-nosdn-nofeature-ha:huawei-virtual9:compass": [10.318],
+
+ "os-nosdn-nofeature-noha:huawei-virtual2:compass": [10.3215],
+
+ "os-nosdn-nofeature-ha:huawei-virtual4:compass": [10.617],
+
+ "os-odl-nofeature-noha:ericsson-virtual3:fuel": [10.762],
+
+ "os-nosdn-bar-ha:huawei-virtual3:compass": [10.7715],
+
+ "os-nosdn-nofeature-ha:huawei-virtual1:compass": [10.866],
+
+ "os-odl-sfc-ha:huawei-virtual3:compass": [10.871],
+
+ "os-odl_l3-nofeature-ha:huawei-virtual3:compass": [11.1605],
+
+ "os-nosdn-nofeature-ha:huawei-virtual3:compass": [11.227],
+
+ "os-nosdn-bar-ha:huawei-virtual4:compass": [11.348],
+
+ "os-odl-sfc-ha:huawei-virtual4:compass": [11.453],
+
+ "os-odl_l3-nofeature-ha:huawei-virtual2:compass": [11.571],
+
+ "os-odl_l2-moon-ha:huawei-virtual3:compass": [11.5925],
+
+ "os-nosdn-nofeature-ha:huawei-virtual2:compass": [11.689],
+
+ "os-odl_l2-moon-ha:huawei-virtual4:compass": [11.8695],
+
+ "os-odl_l3-nofeature-ha:huawei-virtual4:compass": [12.199],
+
+ "os-nosdn-kvm-ha:huawei-virtual4:compass": [12.433],
+
+ "os-nosdn-kvm-ha:huawei-virtual3:compass": [12.713],
+
+ "os-nosdn-ovs-ha:lf-pod2:fuel": [15.328],
+
+ "os-odl-nofeature-ha:lf-pod1:apex": [15.4265],
+
+ "os-odl-nofeature-noha:lf-pod1:apex": [15.428],
+
+ "os-ovn-nofeature-noha:lf-pod1:apex": [15.545],
+
+ "os-nosdn-nofeature-noha:lf-pod1:apex": [15.55],
+
+ "os-nosdn-nofeature-ha:lf-pod1:apex": [15.6395],
+
+ "os-odl-sfc-noha:lf-pod1:apex": [15.696],
+
+ "os-odl-sfc-ha:lf-pod1:apex": [15.774],
+
+ "os-nosdn-bar-ha:lf-pod1:apex": [16.6455],
+
+ "os-nosdn-bar-noha:lf-pod1:apex": [16.861],
+
+ "os-odl-nofeature-ha:arm-pod5:fuel": [18.071],
+
+ "os-nosdn-nofeature-ha:arm-pod5:fuel": [18.116],
+
+ "os-odl-nofeature-ha:lf-pod2:fuel": [18.8365],
+
+ "os-nosdn-nofeature-ha:lf-pod2:fuel": [18.927],
+
+ "os-nosdn-nofeature-noha:huawei-virtual8:compass": [29.557],
+
+ "os-odl-sfc-ha:huawei-virtual8:compass": [32.492],
+
+ "os-nosdn-kvm-noha:huawei-virtual8:compass": [37.623],
+
+ "os-odl-nofeature-ha:arm-pod6:fuel": [41.345],
+
+ "os-nosdn-nofeature-ha:arm-pod6:fuel": [42.3795],
+
+}
+
+
+The influence of the scenario
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. image:: images/tc010_scenario.png
+ :width: 800px
+ :alt: TC010 influence of scenario
+
+{
+
+ "os-nosdn-ovs-noha": [7.9],
+
+ "os-nosdn-ovs_dpdk-noha": [8.641],
+
+ "os-nosdn-ovs_dpdk-ha": [8.6815],
+
+ "os-nosdn-openbaton-ha": [8.882],
+
+ "os-odl_l2-moon-ha": [8.948],
+
+ "os-odl_l3-nofeature-ha": [8.992],
+
+ "os-nosdn-nofeature-ha": [9.118],
+
+ "os-nosdn-nofeature-noha": [9.174],
+
+ "os-odl_l2-moon-noha": [9.312],
+
+ "os-odl_l3-nofeature-noha": [9.5535],
+
+ "os-odl-nofeature-noha": [9.673],
+
+ "os-odl-sfc-noha": [9.8385],
+
+ "os-odl-sfc-ha": [9.98],
+
+ "os-nosdn-kvm-noha": [10.088],
+
+ "os-nosdn-kvm-ha": [11.1705],
+
+ "os-nosdn-bar-ha": [12.1395],
+
+ "os-nosdn-ovs-ha": [15.3195],
+
+ "os-ovn-nofeature-noha": [15.545],
+
+ "os-odl-nofeature-ha": [16.301],
+
+ "os-nosdn-bar-noha": [16.861]
+
+}
+
+
+The influence of the POD
+^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. image:: images/tc010_pod.png
+ :width: 800px
+ :alt: TC010 influence of the POD
+
+{
+
+ "ericsson-pod1": [5.7785],
+
+ "flex-pod2": [5.908],
+
+ "ericsson-virtual1": [6.412],
+
+ "intel-pod18": [6.5905],
+
+ "intel-pod5": [6.6975],
+
+ "ericsson-virtual4": [7.183],
+
+ "ericsson-virtual2": [8.4985],
+
+ "huawei-pod2": [8.877],
+
+ "huawei-pod12": [9.091],
+
+ "ericsson-virtual3": [9.719],
+
+ "huawei-virtual4": [10.1195],
+
+ "huawei-virtual3": [10.19],
+
+ "huawei-virtual1": [10.3045],
+
+ "huawei-virtual9": [10.318],
+
+ "huawei-virtual2": [11.274],
+
+ "lf-pod1": [15.7025],
+
+ "lf-pod2": [15.8495],
+
+ "arm-pod5": [18.092],
+
+ "huawei-virtual8": [33.999],
+
+ "arm-pod6": [41.5605]
+
+}
+
+
+Fraser release
+--------------
diff --git a/docs/release/results/tc011-packet-delay-variation.rst b/docs/release/results/tc011-packet-delay-variation.rst
new file mode 100644
index 000000000..b07ea8980
--- /dev/null
+++ b/docs/release/results/tc011-packet-delay-variation.rst
@@ -0,0 +1,262 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+
+
+=============================================
+Test results for TC011 packet delay variation
+=============================================
+
+.. toctree::
+ :maxdepth: 2
+
+
+Overview of test case
+=====================
+
+TC011 measures the packet delay variation (jitter) of packets sent from one VM to another.
+
+Metric: packet delay variation (jitter)
+Unit: ms
+
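+As a rough illustration of what the jitter number represents, the sketch below applies the
+RFC 3550 interarrival-jitter smoothing to a list of per-packet transit times. It is not the
+traffic tool Yardstick runs between the VMs, and the sample values are made up.
+
+.. code-block:: python
+
+    def interarrival_jitter(transit_ms):
+        """RFC 3550-style smoothed jitter from per-packet transit times (ms)."""
+        jitter = 0.0
+        for prev, cur in zip(transit_ms, transit_ms[1:]):
+            delta = abs(cur - prev)
+            jitter += (delta - jitter) / 16.0   # exponential smoothing per RFC 3550
+        return jitter
+
+    # Hypothetical transit times for a short UDP stream between two VMs (ms).
+    print(interarrival_jitter([10.2, 10.9, 10.4, 12.1, 10.3, 10.8]))
+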
+
+Euphrates release
+-----------------
+
+Test results per scenario and pod (lower is better):
+
+{
+
+ "os-nosdn-kvm-noha:huawei-virtual3:compass": [2996],
+
+ "os-nosdn-nofeature-noha:huawei-virtual2:compass": [2996],
+
+ "os-nosdn-ovs_dpdk-noha:huawei-virtual4:compass": [2996],
+
+ "os-odl_l3-nofeature-noha:huawei-virtual4:compass": [2996],
+
+ "os-nosdn-kvm-ha:huawei-virtual3:compass": [2997],
+
+ "os-nosdn-nofeature-ha:huawei-virtual2:compass": [2997],
+
+ "os-nosdn-ovs_dpdk-ha:huawei-virtual3:compass": [2997],
+
+ "os-nosdn-ovs_dpdk-ha:huawei-virtual4:compass": [2997],
+
+ "os-odl-sfc-ha:huawei-virtual4:compass": [2997],
+
+ "os-nosdn-nofeature-ha:flex-pod2:apex": [2997.5],
+
+ "os-nosdn-bar-ha:huawei-virtual3:compass": [2998],
+
+ "os-odl-sfc-ha:huawei-virtual8:compass": [2998],
+
+ "os-nosdn-nofeature-ha:intel-pod18:joid": [2999],
+
+ "os-odl_l2-moon-ha:huawei-virtual4:compass": [2999.5],
+
+ "os-nosdn-nofeature-ha:huawei-virtual9:compass": [3000],
+
+ "os-nosdn-nofeature-noha:huawei-virtual1:compass": [3001],
+
+ "os-nosdn-bar-ha:huawei-virtual4:compass": [3002],
+
+ "os-nosdn-nofeature-ha:huawei-virtual4:compass": [3002],
+
+ "os-nosdn-ovs_dpdk-noha:huawei-virtual3:compass": [3002],
+
+ "os-odl-sfc-ha:huawei-virtual3:compass": [3002],
+
+ "os-odl_l3-nofeature-ha:huawei-virtual4:compass": [3003],
+
+ "os-nosdn-openbaton-ha:intel-pod18:joid": [3003.5],
+
+ "os-nosdn-kvm-noha:huawei-virtual4:compass": [3004],
+
+ "os-nosdn-kvm-noha:huawei-virtual8:compass": [3004],
+
+ "os-nosdn-nofeature-ha:huawei-virtual3:compass": [3004.5],
+
+ "os-odl_l3-nofeature-ha:huawei-virtual3:compass": [3005],
+
+ "os-nosdn-nofeature-noha:ericsson-virtual3:fuel": [3006],
+
+ "os-nosdn-kvm-ha:huawei-virtual4:compass": [3006.5],
+
+ "os-nosdn-nofeature-noha:ericsson-virtual2:fuel": [3009],
+
+ "os-nosdn-nofeature-noha:huawei-virtual3:compass": [3010],
+
+ "os-odl_l3-nofeature-ha:huawei-virtual2:compass": [3010],
+
+ "os-odl_l3-nofeature-noha:huawei-virtual3:compass": [3012],
+
+ "os-nosdn-nofeature-ha:huawei-virtual1:compass": [3017],
+
+ "os-nosdn-nofeature-noha:ericsson-virtual4:fuel": [3017],
+
+ "os-odl-sfc-noha:huawei-virtual4:compass": [3017],
+
+ "os-nosdn-nofeature-noha:intel-pod18:joid": [3018],
+
+ "os-nosdn-nofeature-ha:intel-pod5:joid": [3020],
+
+ "os-nosdn-nofeature-ha:lf-pod2:fuel": [3021],
+
+ "os-nosdn-bar-ha:huawei-pod2:compass": [3022],
+
+ "os-nosdn-bar-ha:lf-pod1:apex": [3022],
+
+ "os-nosdn-bar-noha:lf-pod1:apex": [3022],
+
+ "os-nosdn-kvm-ha:huawei-pod2:compass": [3022],
+
+ "os-nosdn-nofeature-ha:arm-pod5:fuel": [3022],
+
+ "os-nosdn-nofeature-ha:arm-pod6:fuel": [3022],
+
+ "os-nosdn-nofeature-ha:ericsson-pod1:fuel": [3022],
+
+ "os-nosdn-nofeature-ha:huawei-pod12:joid": [3022],
+
+ "os-nosdn-nofeature-ha:huawei-pod2:compass": [3022],
+
+ "os-nosdn-nofeature-ha:lf-pod1:apex": [3022],
+
+ "os-nosdn-nofeature-noha:huawei-pod12:joid": [3022],
+
+ "os-nosdn-nofeature-noha:intel-pod5:joid": [3022],
+
+ "os-nosdn-nofeature-noha:lf-pod1:apex": [3022],
+
+ "os-nosdn-openbaton-ha:huawei-pod12:joid": [3022],
+
+ "os-nosdn-ovs_dpdk-ha:huawei-pod2:compass": [3022],
+
+ "os-odl-nofeature-ha:arm-pod5:fuel": [3022],
+
+ "os-odl-sfc-ha:huawei-pod2:compass": [3022],
+
+ "os-odl-sfc-ha:lf-pod1:apex": [3022],
+
+ "os-odl-sfc-noha:huawei-virtual3:compass": [3022],
+
+ "os-odl-sfc-noha:lf-pod1:apex": [3022],
+
+ "os-odl_l2-moon-ha:huawei-pod2:compass": [3022],
+
+ "os-odl_l2-moon-ha:huawei-virtual3:compass": [3022],
+
+ "os-odl_l2-moon-noha:huawei-virtual3:compass": [3022],
+
+ "os-odl_l3-nofeature-ha:huawei-pod2:compass": [3022],
+
+ "os-ovn-nofeature-noha:lf-pod1:apex": [3022],
+
+ "os-nosdn-nofeature-noha:huawei-virtual4:compass": [3023],
+
+ "os-odl_l2-moon-noha:huawei-virtual4:compass": [3023],
+
+ "os-nosdn-nofeature-noha:huawei-virtual8:compass": [3024]
+
+}
+
+
+The influence of the scenario
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. image:: images/tc011_scenario.png
+ :width: 800px
+ :alt: TC011 influence of scenario
+
+{
+
+ "os-nosdn-ovs_dpdk-noha": [2996],
+
+ "os-odl_l3-nofeature-noha": [2997],
+
+ "os-nosdn-kvm-noha": [2999],
+
+ "os-nosdn-ovs_dpdk-ha": [3002],
+
+ "os-nosdn-kvm-ha": [3014.5],
+
+ "os-odl-sfc-noha": [3018],
+
+ "os-nosdn-nofeature-noha": [3020],
+
+ "os-nosdn-openbaton-ha": [3020],
+
+ "os-nosdn-bar-ha": [3022],
+
+ "os-nosdn-bar-noha": [3022],
+
+ "os-nosdn-nofeature-ha": [3022],
+
+ "os-odl-nofeature-ha": [3022],
+
+ "os-odl-sfc-ha": [3022],
+
+ "os-odl_l2-moon-ha": [3022],
+
+ "os-odl_l2-moon-noha": [3022],
+
+ "os-odl_l3-nofeature-ha": [3022],
+
+ "os-ovn-nofeature-noha": [3022]
+
+}
+
+
+The influence of the POD
+^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. image:: images/tc011_pod.png
+ :width: 800px
+ :alt: TC011 influence of the POD
+
+{
+
+ "huawei-virtual2": [2997],
+
+ "flex-pod2": [2997.5],
+
+ "huawei-virtual3": [2998],
+
+ "huawei-virtual9": [3000],
+
+ "huawei-virtual8": [3001],
+
+ "huawei-virtual4": [3002],
+
+ "ericsson-virtual3": [3006],
+
+ "huawei-virtual1": [3007],
+
+ "ericsson-virtual2": [3009],
+
+ "intel-pod18": [3010],
+
+ "ericsson-virtual4": [3017],
+
+ "lf-pod2": [3021],
+
+ "arm-pod5": [3022],
+
+ "arm-pod6": [3022],
+
+ "ericsson-pod1": [3022],
+
+ "huawei-pod12": [3022],
+
+ "huawei-pod2": [3022],
+
+ "intel-pod5": [3022],
+
+ "lf-pod1": [3022]
+
+}
+
+
+Fraser release
+--------------
diff --git a/docs/release/results/tc012-memory-read-write-bandwidth.rst b/docs/release/results/tc012-memory-read-write-bandwidth.rst
new file mode 100644
index 000000000..c28eb1f3c
--- /dev/null
+++ b/docs/release/results/tc012-memory-read-write-bandwidth.rst
@@ -0,0 +1,299 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+
+
+==================================================
+Test results for TC012 memory read/write bandwidth
+==================================================
+
+.. toctree::
+ :maxdepth: 2
+
+
+Overview of test case
+=====================
+
+TC012 measures the rate at which data can be read from and written to memory (this includes all levels of the memory hierarchy).
+In this test case, the bandwidth to read data from memory and then write data back to the same memory location is measured.
+
+Metric: memory bandwidth
+Unit: MBps
+
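+For intuition about the metric, the sketch below times rewriting a buffer in place and converts
+that into an approximate read-and-write bandwidth figure. It is only an illustration; the
+published numbers come from the dedicated benchmark tool, not from a snippet like this.
+
+.. code-block:: python
+
+    import time
+    import numpy as np
+
+    block = np.ones(64 * 1024 * 1024 // 8)        # ~64 MB of float64
+    passes = 10
+
+    start = time.perf_counter()
+    for _ in range(passes):
+        block *= 1.0000001                        # read every element, write it back
+    elapsed = time.perf_counter() - start
+
+    bytes_moved = 2 * block.nbytes * passes       # each pass reads and writes the block
+    print("approximate bandwidth: %.0f MBps" % (bytes_moved / elapsed / 1e6))
+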
+
+Euphrates release
+-----------------
+
+Test results per scenario and pod (higher is better):
+
+{
+
+ "os-nosdn-nofeature-ha:lf-pod1:apex": [23126.325],
+
+ "os-odl-nofeature-noha:lf-pod1:apex": [23123.975],
+
+ "os-odl-nofeature-ha:lf-pod1:apex": [23068.965],
+
+ "os-odl-nofeature-ha:lf-pod2:fuel": [22972.46],
+
+ "os-nosdn-nofeature-ha:lf-pod2:fuel": [22912.015],
+
+ "os-nosdn-nofeature-noha:lf-pod1:apex": [22911.35],
+
+ "os-ovn-nofeature-noha:lf-pod1:apex": [22900.93],
+
+ "os-nosdn-bar-ha:lf-pod1:apex": [22767.56],
+
+ "os-nosdn-bar-noha:lf-pod1:apex": [22721.83],
+
+ "os-odl-sfc-noha:lf-pod1:apex": [22511.565],
+
+ "os-nosdn-ovs-ha:lf-pod2:fuel": [22071.235],
+
+ "os-odl-sfc-ha:lf-pod1:apex": [21646.415],
+
+ "os-nosdn-nofeature-ha:flex-pod2:apex": [20229.99],
+
+ "os-nosdn-ovs-noha:ericsson-virtual4:fuel": [17491.18],
+
+ "os-nosdn-ovs-noha:ericsson-virtual1:fuel": [17474.965],
+
+ "os-nosdn-ovs-ha:ericsson-pod1:fuel": [17141.375],
+
+ "os-nosdn-nofeature-ha:ericsson-pod1:fuel": [17134.99],
+
+ "os-odl-nofeature-ha:ericsson-pod1:fuel": [17124.27],
+
+ "os-nosdn-ovs-noha:ericsson-virtual2:fuel": [16599.325],
+
+ "os-nosdn-nofeature-noha:ericsson-virtual4:fuel": [16309.13],
+
+ "os-odl-nofeature-noha:ericsson-virtual4:fuel": [16137.48],
+
+ "os-nosdn-nofeature-noha:ericsson-virtual2:fuel": [15960.76],
+
+ "os-nosdn-ovs-noha:ericsson-virtual3:fuel": [15685.505],
+
+ "os-nosdn-nofeature-noha:ericsson-virtual3:fuel": [15536.65],
+
+ "os-odl-nofeature-noha:ericsson-virtual3:fuel": [15431.795],
+
+ "os-odl-nofeature-noha:ericsson-virtual2:fuel": [15129.27],
+
+ "os-nosdn-ovs_dpdk-ha:huawei-pod2:compass": [15125.51],
+
+ "os-odl_l3-nofeature-ha:huawei-pod2:compass": [15030.65],
+
+ "os-nosdn-nofeature-ha:huawei-pod2:compass": [15019.89],
+
+ "os-odl-sfc-ha:huawei-pod2:compass": [15005.11],
+
+ "os-nosdn-bar-ha:huawei-pod2:compass": [14975.645],
+
+ "os-nosdn-kvm-ha:huawei-pod2:compass": [14968.97],
+
+ "os-odl_l2-moon-ha:huawei-pod2:compass": [14968.97],
+
+ "os-nosdn-ovs_dpdk-noha:huawei-virtual4:compass": [14741.425],
+
+ "os-nosdn-ovs_dpdk-noha:huawei-virtual3:compass": [14714.28],
+
+ "os-odl_l2-moon-noha:huawei-virtual4:compass": [14674.38],
+
+ "os-odl_l2-moon-noha:huawei-virtual3:compass": [14664.12],
+
+ "os-odl-sfc-noha:huawei-virtual4:compass": [14587.62],
+
+ "os-nosdn-nofeature-noha:huawei-virtual3:compass": [14539.94],
+
+ "os-nosdn-nofeature-noha:huawei-virtual4:compass": [14534.54],
+
+ "os-odl_l3-nofeature-noha:huawei-virtual3:compass": [14511.925],
+
+ "os-nosdn-nofeature-noha:huawei-virtual1:compass": [14496.875],
+
+ "os-odl_l2-moon-ha:huawei-virtual3:compass": [14378.87],
+
+ "os-odl_l3-nofeature-noha:huawei-virtual4:compass": [14366.69],
+
+ "os-nosdn-nofeature-ha:huawei-virtual4:compass": [14356.695],
+
+ "os-odl_l3-nofeature-ha:huawei-virtual3:compass": [14341.605],
+
+ "os-nosdn-ovs_dpdk-ha:huawei-virtual3:compass": [14327.78],
+
+ "os-nosdn-ovs_dpdk-ha:huawei-virtual4:compass": [14313.81],
+
+ "os-nosdn-nofeature-ha:intel-pod18:joid": [14284.365],
+
+ "os-nosdn-nofeature-noha:huawei-pod12:joid": [14157.99],
+
+ "os-nosdn-nofeature-ha:huawei-pod12:joid": [14144.86],
+
+ "os-nosdn-openbaton-ha:huawei-pod12:joid": [14138.9],
+
+ "os-nosdn-kvm-noha:huawei-virtual3:compass": [14117.7],
+
+ "os-nosdn-nofeature-ha:huawei-virtual3:compass": [14097.255],
+
+ "os-nosdn-nofeature-noha:huawei-virtual2:compass": [14085.675],
+
+ "os-odl-sfc-noha:huawei-virtual3:compass": [14071.605],
+
+ "os-nosdn-openbaton-ha:intel-pod18:joid": [14059.51],
+
+ "os-odl-sfc-ha:huawei-virtual4:compass": [14057.155],
+
+ "os-odl-sfc-ha:huawei-virtual3:compass": [14051.945],
+
+ "os-nosdn-bar-ha:huawei-virtual3:compass": [14020.74],
+
+ "os-nosdn-kvm-noha:huawei-virtual4:compass": [14017.915],
+
+ "os-nosdn-nofeature-noha:intel-pod18:joid": [13954.27],
+
+ "os-odl_l3-nofeature-ha:huawei-virtual4:compass": [13915.87],
+
+ "os-odl_l3-nofeature-ha:huawei-virtual2:compass": [13874.59],
+
+ "os-nosdn-nofeature-noha:intel-pod5:joid": [13812.215],
+
+ "os-odl_l2-moon-ha:huawei-virtual4:compass": [13777.59],
+
+ "os-nosdn-bar-ha:huawei-virtual4:compass": [13765.36],
+
+ "os-nosdn-nofeature-ha:huawei-virtual1:compass": [13559.905],
+
+ "os-nosdn-nofeature-ha:huawei-virtual2:compass": [13477.52],
+
+ "os-nosdn-kvm-ha:huawei-virtual3:compass": [13255.17],
+
+ "os-nosdn-nofeature-ha:intel-pod5:joid": [13189.64],
+
+ "os-nosdn-kvm-ha:huawei-virtual4:compass": [12718.545],
+
+ "os-nosdn-nofeature-ha:huawei-virtual9:compass": [12559.445],
+
+ "os-nosdn-nofeature-noha:huawei-virtual8:compass": [12409.66],
+
+ "os-nosdn-kvm-noha:huawei-virtual8:compass": [8832.515],
+
+ "os-odl-sfc-ha:huawei-virtual8:compass": [8823.955],
+
+ "os-odl-nofeature-ha:arm-pod5:fuel": [4398.08],
+
+ "os-nosdn-nofeature-ha:arm-pod5:fuel": [4375.75],
+
+ "os-nosdn-nofeature-ha:arm-pod6:fuel": [4260.77],
+
+ "os-odl-nofeature-ha:arm-pod6:fuel": [4259.62]
+
+}
+
+
+The influence of the scenario
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. image:: images/tc012_scenario.png
+ :width: 800px
+ :alt: TC012 influence of scenario
+
+{
+
+ "os-ovn-nofeature-noha": [22900.93],
+
+ "os-nosdn-bar-noha": [22721.83],
+
+ "os-nosdn-ovs-ha": [22063.67],
+
+ "os-odl-nofeature-ha": [17146.05],
+
+ "os-odl-nofeature-noha": [16017.41],
+
+ "os-nosdn-ovs-noha": [16005.74],
+
+ "os-nosdn-nofeature-noha": [15290.94],
+
+ "os-nosdn-nofeature-ha": [15038.74],
+
+ "os-nosdn-bar-ha": [14972.975],
+
+ "os-odl_l2-moon-ha": [14956.955],
+
+ "os-odl_l3-nofeature-ha": [14839.21],
+
+ "os-odl-sfc-ha": [14823.48],
+
+ "os-nosdn-ovs_dpdk-ha": [14822.17],
+
+ "os-nosdn-ovs_dpdk-noha": [14725.9],
+
+ "os-odl_l2-moon-noha": [14665.4],
+
+ "os-odl_l3-nofeature-noha": [14483.09],
+
+ "os-odl-sfc-noha": [14373.21],
+
+ "os-nosdn-openbaton-ha": [14135.325],
+
+ "os-nosdn-kvm-noha": [14020.26],
+
+ "os-nosdn-kvm-ha": [13996.02]
+
+}
+
+
+The influence of the POD
+^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. image:: images/tc012_pod.png
+ :width: 800px
+ :alt: TC012 influence of the POD
+
+{
+
+ "lf-pod1": [22912.39],
+
+ "lf-pod2": [22637.67],
+
+ "flex-pod2": [20229.99],
+
+ "ericsson-virtual1": [17474.965],
+
+ "ericsson-pod1": [17127.38],
+
+ "ericsson-virtual4": [16219.97],
+
+ "ericsson-virtual2": [15652.28],
+
+ "ericsson-virtual3": [15551.26],
+
+ "huawei-pod2": [15017.2],
+
+ "huawei-virtual4": [14266.34],
+
+ "huawei-virtual1": [14233.035],
+
+ "huawei-virtual3": [14227.63],
+
+ "huawei-pod12": [14147.245],
+
+ "intel-pod18": [14058.33],
+
+ "huawei-virtual2": [13862.85],
+
+ "intel-pod5": [13280.32],
+
+ "huawei-virtual9": [12559.445],
+
+ "huawei-virtual8": [8998.02],
+
+ "arm-pod5": [4388.875],
+
+ "arm-pod6": [4260.2]
+
+}
+
+
+Fraser release
+--------------
diff --git a/docs/release/results/tc014-cpu-processing-speed.rst b/docs/release/results/tc014-cpu-processing-speed.rst
new file mode 100644
index 000000000..34d4ad0f9
--- /dev/null
+++ b/docs/release/results/tc014-cpu-processing-speed.rst
@@ -0,0 +1,298 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+
+
+===========================================
+Test results for TC014 cpu processing speed
+===========================================
+
+.. toctree::
+ :maxdepth: 2
+
+
+Overview of test case
+=====================
+
+TC014 measures the processing speed of a single CPU by running the UnixBench benchmark and recording its score.
+
+Metric: score of single CPU running
+Unit: N/A
+
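+The sketch below shows one way such a score could be collected and parsed. The UnixBench
+location and the ``./Run -c 1`` invocation are assumptions for illustration, not the exact
+Yardstick scenario.
+
+.. code-block:: python
+
+    import re
+    import subprocess
+
+    # Assumed UnixBench checkout location and single-copy run; adjust for the real setup.
+    out = subprocess.run(["./Run", "-c", "1"], cwd="/opt/unixbench",
+                         capture_output=True, text=True).stdout
+
+    match = re.search(r"System Benchmarks Index Score\s+([\d.]+)", out)
+    if match:
+        print("single CPU score:", float(match.group(1)))
+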
+
+Euphrates release
+-----------------
+
+Test results per scenario and pod (higher is better):
+
+{
+
+ "os-odl-sfc-noha:lf-pod1:apex": [3735.2],
+
+ "os-nosdn-ovs-ha:lf-pod2:fuel": [3725.5],
+
+ "os-odl-nofeature-ha:lf-pod2:fuel": [3711],
+
+ "os-odl-nofeature-ha:lf-pod1:apex": [3708.4],
+
+ "os-nosdn-nofeature-noha:lf-pod1:apex": [3705.7],
+
+ "os-nosdn-nofeature-ha:lf-pod2:fuel": [3704],
+
+ "os-nosdn-nofeature-ha:lf-pod1:apex": [3703.2],
+
+ "os-odl-nofeature-noha:lf-pod1:apex": [3702.8],
+
+ "os-odl-sfc-ha:lf-pod1:apex": [3698.7],
+
+ "os-ovn-nofeature-noha:lf-pod1:apex": [3654.8],
+
+ "os-nosdn-bar-ha:lf-pod1:apex": [3635.55],
+
+ "os-nosdn-bar-noha:lf-pod1:apex": [3633.2],
+
+ "os-nosdn-nofeature-noha:intel-pod18:joid": [3450.3],
+
+ "os-nosdn-nofeature-noha:intel-pod5:joid": [3406.4],
+
+ "os-nosdn-nofeature-ha:intel-pod5:joid": [3360.4],
+
+ "os-nosdn-openbaton-ha:intel-pod18:joid": [3340.65],
+
+ "os-nosdn-nofeature-ha:flex-pod2:apex": [3208.6],
+
+ "os-nosdn-nofeature-ha:ericsson-pod1:fuel": [3134.8],
+
+ "os-nosdn-nofeature-ha:intel-pod18:joid": [3056.2],
+
+ "os-nosdn-ovs-noha:ericsson-virtual1:fuel": [2988.9],
+
+ "os-nosdn-ovs-ha:ericsson-pod1:fuel": [2773.7],
+
+ "os-nosdn-ovs-noha:ericsson-virtual4:fuel": [2645.85],
+
+ "os-nosdn-ovs-noha:ericsson-virtual2:fuel": [2625.3],
+
+ "os-nosdn-nofeature-noha:ericsson-virtual4:fuel": [2601.3],
+
+ "os-odl-nofeature-noha:ericsson-virtual4:fuel": [2590.4],
+
+ "os-nosdn-nofeature-noha:ericsson-virtual2:fuel": [2570.2],
+
+ "os-nosdn-ovs-noha:ericsson-virtual3:fuel": [2558.8],
+
+ "os-odl-nofeature-ha:ericsson-pod1:fuel": [2556.5],
+
+ "os-nosdn-nofeature-noha:ericsson-virtual3:fuel": [2554.6],
+
+ "os-odl-nofeature-noha:ericsson-virtual3:fuel": [2536.75],
+
+ "os-nosdn-ovs_dpdk-ha:huawei-pod2:compass": [2533.55],
+
+ "os-nosdn-nofeature-ha:huawei-pod2:compass": [2531.85],
+
+ "os-odl-sfc-ha:huawei-pod2:compass": [2531.7],
+
+ "os-odl_l3-nofeature-ha:huawei-pod2:compass": [2531.2],
+
+ "os-odl_l2-moon-ha:huawei-pod2:compass": [2531],
+
+ "os-nosdn-bar-ha:huawei-pod2:compass": [2529.6],
+
+ "os-nosdn-kvm-ha:huawei-pod2:compass": [2520.5],
+
+ "os-odl-nofeature-noha:ericsson-virtual2:fuel": [2481.15],
+
+ "os-nosdn-ovs_dpdk-noha:huawei-virtual4:compass": [2474],
+
+ "os-nosdn-ovs_dpdk-noha:huawei-virtual3:compass": [2472.6],
+
+ "os-odl_l2-moon-noha:huawei-virtual4:compass": [2471],
+
+ "os-odl_l2-moon-noha:huawei-virtual3:compass": [2470.6],
+
+ "os-nosdn-nofeature-noha:huawei-virtual3:compass": [2464.15],
+
+ "os-odl-sfc-noha:huawei-virtual4:compass": [2455.9],
+
+ "os-nosdn-nofeature-noha:huawei-virtual4:compass": [2455.3],
+
+ "os-odl_l3-nofeature-noha:huawei-virtual3:compass": [2446.85],
+
+ "os-odl_l2-moon-ha:huawei-virtual3:compass": [2444.75],
+
+ "os-odl_l3-nofeature-noha:huawei-virtual4:compass": [2430.9],
+
+ "os-nosdn-nofeature-ha:huawei-virtual4:compass": [2421.3],
+
+ "os-nosdn-ovs_dpdk-ha:huawei-virtual4:compass": [2415.7],
+
+ "os-nosdn-kvm-noha:huawei-virtual3:compass": [2399.4],
+
+ "os-odl-sfc-ha:huawei-virtual3:compass": [2391.85],
+
+ "os-nosdn-kvm-noha:huawei-virtual4:compass": [2391.45],
+
+ "os-nosdn-nofeature-noha:huawei-virtual1:compass": [2380.7],
+
+ "os-odl-sfc-ha:huawei-virtual4:compass": [2379.6],
+
+ "os-nosdn-ovs_dpdk-ha:huawei-virtual3:compass": [2371.9],
+
+ "os-odl-sfc-noha:huawei-virtual3:compass": [2364.6],
+
+ "os-nosdn-bar-ha:huawei-virtual3:compass": [2363.4],
+
+ "os-nosdn-nofeature-ha:huawei-virtual3:compass": [2362],
+
+ "os-nosdn-kvm-ha:huawei-virtual4:compass": [2358.5],
+
+ "os-odl_l3-nofeature-ha:huawei-virtual3:compass": [2358.45],
+
+ "os-odl_l3-nofeature-ha:huawei-virtual2:compass": [2336],
+
+ "os-odl_l3-nofeature-ha:huawei-virtual4:compass": [2326.6],
+
+ "os-nosdn-nofeature-ha:huawei-virtual9:compass": [2324.95],
+
+ "os-nosdn-nofeature-noha:huawei-virtual8:compass": [2320.2],
+
+ "os-nosdn-bar-ha:huawei-virtual4:compass": [2318.5],
+
+ "os-odl_l2-moon-ha:huawei-virtual4:compass": [2312.8],
+
+ "os-nosdn-nofeature-noha:huawei-virtual2:compass": [2311.7],
+
+ "os-nosdn-nofeature-ha:huawei-virtual1:compass": [2301.15],
+
+ "os-nosdn-nofeature-ha:huawei-virtual2:compass": [2297.7],
+
+ "os-nosdn-nofeature-noha:huawei-pod12:joid": [2232.8],
+
+ "os-nosdn-nofeature-ha:huawei-pod12:joid": [2232.1],
+
+ "os-nosdn-openbaton-ha:huawei-pod12:joid": [2230],
+
+ "os-nosdn-kvm-ha:huawei-virtual3:compass": [2154],
+
+ "os-odl-sfc-ha:huawei-virtual8:compass": [2150.1],
+
+ "os-nosdn-kvm-noha:huawei-virtual8:compass": [2004.3],
+
+ "os-odl-nofeature-ha:arm-pod5:fuel": [1754.5],
+
+ "os-nosdn-nofeature-ha:arm-pod5:fuel": [1754.15],
+
+ "os-odl-nofeature-ha:arm-pod6:fuel": [716.15],
+
+ "os-nosdn-nofeature-ha:arm-pod6:fuel": [716.05]
+
+}
+
+
+The influence of the scenario
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. image:: images/tc014_scenario.png
+ :width: 800px
+ :alt: TC014 influence of scenario
+
+{
+
+ "os-nosdn-ovs-ha": [3725.5],
+
+ "os-ovn-nofeature-noha": [3654.8],
+
+ "os-nosdn-bar-noha": [3633.2],
+
+ "os-odl-nofeature-ha": [3407.8],
+
+ "os-nosdn-ovs-noha": [2583.2],
+
+ "os-odl-nofeature-noha": [2578.9],
+
+ "os-nosdn-nofeature-noha": [2553.2],
+
+ "os-nosdn-nofeature-ha": [2532.8],
+
+ "os-odl_l2-moon-ha": [2530.5],
+
+ "os-nosdn-bar-ha": [2527],
+
+ "os-odl_l3-nofeature-ha": [2501.5],
+
+ "os-nosdn-ovs_dpdk-noha": [2473.65],
+
+ "os-odl-sfc-ha": [2472.9],
+
+ "os-odl_l2-moon-noha": [2470.8],
+
+ "os-nosdn-ovs_dpdk-ha": [2461.9],
+
+ "os-odl_l3-nofeature-noha": [2442.8],
+
+ "os-nosdn-kvm-noha": [2392.9],
+
+ "os-odl-sfc-noha": [2370.5],
+
+ "os-nosdn-kvm-ha": [2358.5],
+
+ "os-nosdn-openbaton-ha": [2231.8]
+
+}
+
+
+The influence of the POD
+^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. image:: images/tc014_pod.png
+ :width: 800px
+ :alt: TC014 influence of the POD
+
+{
+
+ "lf-pod2": [3723.95],
+
+ "lf-pod1": [3669],
+
+ "intel-pod5": [3388.6],
+
+ "intel-pod18": [3298.4],
+
+ "flex-pod2": [3208.6],
+
+ "ericsson-virtual1": [2988.9],
+
+ "ericsson-pod1": [2669.1],
+
+ "ericsson-virtual4": [2598.5],
+
+ "ericsson-virtual3": [2553.15],
+
+ "huawei-pod2": [2531.2],
+
+ "ericsson-virtual2": [2526.9],
+
+ "huawei-virtual4": [2407.4],
+
+ "huawei-virtual3": [2374.6],
+
+ "huawei-virtual2": [2326.4],
+
+ "huawei-virtual9": [2324.95],
+
+ "huawei-virtual1": [2302.6],
+
+ "huawei-pod12": [2232.2],
+
+ "huawei-virtual8": [2085.3],
+
+ "arm-pod5": [1754.4],
+
+ "arm-pod6": [716.15]
+
+}
+
+
+Fraser release
+--------------
diff --git a/docs/release/results/tc069-memory-write-bandwidth.rst b/docs/release/results/tc069-memory-write-bandwidth.rst
new file mode 100644
index 000000000..06e2ec922
--- /dev/null
+++ b/docs/release/results/tc069-memory-write-bandwidth.rst
@@ -0,0 +1,300 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+
+
+=============================================
+Test results for TC069 memory write bandwidth
+=============================================
+
+.. toctree::
+ :maxdepth: 2
+
+
+Overview of test case
+=====================
+
+TC069 measures the maximum possible cache and memory performance while reading and writing
+blocks of data (starting from 1 KB and doubling in powers of 2) continuously through the ALU
+and FPU respectively. It measures different aspects of memory performance via synthetic
+simulations, each consisting of four operations (Copy, Scale, Add, Triad).
+The test results shown below are for writing a 32 MB integer block.
+
+Metric: memory write bandwidth
+Unit: MBps
+
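+The four operations named above are the classic streaming kernels. The short sketch below
+shows what each of them does to three arrays; it is an illustration only, not the benchmark
+tool itself.
+
+.. code-block:: python
+
+    import numpy as np
+
+    n = 4 * 1024 * 1024                  # 4M float64 elements, ~32 MB per array
+    a, b, c = (np.ones(n) for _ in range(3))
+    s = 3.0
+
+    c[:] = a                             # Copy:  c = a
+    b[:] = s * c                         # Scale: b = s * c
+    c[:] = a + b                         # Add:   c = a + b
+    a[:] = b + s * c                     # Triad: a = b + s * c
+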
+
+Euphrates release
+-----------------
+
+Test results per scenario and pod (higher is better):
+
+{
+
+ "os-nosdn-nofeature-noha:intel-pod18:joid": [20113.395],
+
+ "os-nosdn-openbaton-ha:intel-pod18:joid": [19183.58],
+
+ "os-nosdn-nofeature-ha:intel-pod18:joid": [17851.35],
+
+ "os-nosdn-nofeature-noha:intel-pod5:joid": [16312.37],
+
+ "os-nosdn-nofeature-ha:intel-pod5:joid": [15633.245],
+
+ "os-nosdn-nofeature-ha:arm-pod6:fuel": [13332.065],
+
+ "os-odl-nofeature-ha:arm-pod6:fuel": [13327.02],
+
+ "os-nosdn-nofeature-ha:ericsson-pod1:fuel": [9462.74],
+
+ "os-nosdn-nofeature-ha:flex-pod2:apex": [9384.585],
+
+ "os-odl-nofeature-ha:ericsson-pod1:fuel": [9235.98],
+
+ "os-nosdn-nofeature-noha:huawei-pod12:joid": [9213.6],
+
+ "os-nosdn-openbaton-ha:huawei-pod12:joid": [9152.18],
+
+ "os-nosdn-nofeature-ha:huawei-pod12:joid": [9079.45],
+
+ "os-odl_l2-moon-ha:huawei-pod2:compass": [9071.13],
+
+ "os-nosdn-nofeature-ha:huawei-pod2:compass": [9068.06],
+
+ "os-odl-sfc-ha:huawei-pod2:compass": [9031.24],
+
+ "os-odl_l3-nofeature-ha:huawei-pod2:compass": [9019.53],
+
+ "os-nosdn-bar-ha:huawei-pod2:compass": [8977.3],
+
+ "os-nosdn-ovs_dpdk-ha:huawei-pod2:compass": [8960.635],
+
+ "os-nosdn-nofeature-ha:huawei-virtual9:compass": [8825.805],
+
+ "os-nosdn-kvm-ha:huawei-pod2:compass": [8282.75],
+
+ "os-odl_l2-moon-noha:huawei-virtual4:compass": [8116.33],
+
+ "os-nosdn-ovs-noha:ericsson-virtual4:fuel": [8083.97],
+
+ "os-odl_l2-moon-noha:huawei-virtual3:compass": [8083.52],
+
+ "os-nosdn-nofeature-noha:huawei-virtual3:compass": [7799.145],
+
+ "os-odl_l3-nofeature-noha:huawei-virtual3:compass": [7776.12],
+
+ "os-nosdn-ovs_dpdk-noha:huawei-virtual3:compass": [7680.37],
+
+ "os-nosdn-ovs-noha:ericsson-virtual1:fuel": [7615.97],
+
+ "os-nosdn-nofeature-noha:ericsson-virtual4:fuel": [7612.62],
+
+ "os-nosdn-nofeature-noha:ericsson-virtual2:fuel": [7518.62],
+
+ "os-nosdn-nofeature-noha:huawei-virtual2:compass": [7489.67],
+
+ "os-nosdn-ovs-noha:ericsson-virtual2:fuel": [7478.57],
+
+ "os-nosdn-ovs_dpdk-ha:huawei-virtual4:compass": [7465.82],
+
+ "os-nosdn-kvm-noha:huawei-virtual3:compass": [7443.16],
+
+ "os-odl-nofeature-noha:ericsson-virtual4:fuel": [7442.855],
+
+ "os-nosdn-nofeature-ha:arm-pod5:fuel": [7440.65],
+
+ "os-odl-sfc-noha:huawei-virtual4:compass": [7401.16],
+
+ "os-nosdn-nofeature-ha:huawei-virtual3:compass": [7389.505],
+
+ "os-odl-nofeature-ha:arm-pod5:fuel": [7385.76],
+
+ "os-nosdn-nofeature-noha:huawei-virtual1:compass": [7382.345],
+
+ "os-odl_l2-moon-ha:huawei-virtual3:compass": [7286.385],
+
+ "os-odl_l3-nofeature-noha:huawei-virtual4:compass": [7272.06],
+
+ "os-odl_l3-nofeature-ha:huawei-virtual4:compass": [7261.73],
+
+ "os-nosdn-nofeature-noha:ericsson-virtual3:fuel": [7253.64],
+
+ "os-odl-sfc-noha:huawei-virtual3:compass": [7247.89],
+
+ "os-odl_l3-nofeature-ha:huawei-virtual2:compass": [7214.01],
+
+ "os-nosdn-ovs_dpdk-ha:huawei-virtual3:compass": [7207.39],
+
+ "os-nosdn-ovs_dpdk-noha:huawei-virtual4:compass": [7205.565],
+
+ "os-nosdn-ovs-noha:ericsson-virtual3:fuel": [7201.005],
+
+ "os-nosdn-nofeature-ha:huawei-virtual4:compass": [7132.835],
+
+ "os-odl-nofeature-noha:ericsson-virtual3:fuel": [7117.05],
+
+ "os-odl_l3-nofeature-ha:huawei-virtual3:compass": [7064.18],
+
+ "os-odl_l2-moon-ha:huawei-virtual4:compass": [6997.295],
+
+ "os-odl-nofeature-ha:lf-pod1:apex": [6992.21],
+
+ "os-odl-sfc-ha:huawei-virtual4:compass": [6975.63],
+
+ "os-odl-nofeature-noha:lf-pod1:apex": [6972.63],
+
+ "os-nosdn-nofeature-noha:lf-pod1:apex": [6955],
+
+ "os-ovn-nofeature-noha:lf-pod1:apex": [6954.5],
+
+ "os-nosdn-nofeature-ha:lf-pod1:apex": [6953.35],
+
+ "os-odl-sfc-noha:lf-pod1:apex": [6951.89],
+
+ "os-nosdn-nofeature-ha:huawei-virtual2:compass": [6932.29],
+
+ "os-nosdn-nofeature-noha:huawei-virtual4:compass": [6929.54],
+
+ "os-nosdn-kvm-noha:huawei-virtual4:compass": [6921.6],
+
+ "os-nosdn-ovs-ha:lf-pod2:fuel": [6913.355],
+
+ "os-odl-nofeature-ha:lf-pod2:fuel": [6848.58],
+
+ "os-odl-sfc-ha:lf-pod1:apex": [6818.74],
+
+ "os-nosdn-bar-noha:lf-pod1:apex": [6812.16],
+
+ "os-nosdn-nofeature-ha:lf-pod2:fuel": [6808.18],
+
+ "os-odl-nofeature-noha:ericsson-virtual2:fuel": [6807.565],
+
+ "os-nosdn-bar-ha:lf-pod1:apex": [6774.76],
+
+ "os-nosdn-bar-ha:huawei-virtual4:compass": [6759.4],
+
+ "os-nosdn-nofeature-noha:huawei-virtual8:compass": [6756.9],
+
+ "os-nosdn-bar-ha:huawei-virtual3:compass": [6543.46],
+
+ "os-nosdn-kvm-ha:huawei-virtual3:compass": [6504.34],
+
+ "os-odl-sfc-ha:huawei-virtual3:compass": [6481.005],
+
+ "os-nosdn-kvm-ha:huawei-virtual4:compass": [6461.5],
+
+ "os-nosdn-nofeature-ha:huawei-virtual1:compass": [6152.375],
+
+ "os-odl-sfc-ha:huawei-virtual8:compass": [5941.7],
+
+ "os-nosdn-kvm-noha:huawei-virtual8:compass": [4564.515]
+
+}
+
+
+The influence of the scenario
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. image:: images/tc069_scenario.png
+ :width: 800px
+ :alt: TC069 influence of scenario
+
+{
+
+ "os-nosdn-openbaton-ha": [9187.16],
+
+ "os-odl_l2-moon-ha": [9010.57],
+
+ "os-nosdn-nofeature-ha": [8886.75],
+
+ "os-odl_l3-nofeature-ha": [8779.67],
+
+ "os-odl_l2-moon-noha": [8114.995],
+
+ "os-nosdn-ovs_dpdk-ha": [7864.07],
+
+ "os-odl_l3-nofeature-noha": [7632.11],
+
+ "os-odl-sfc-ha": [7624.67],
+
+ "os-nosdn-nofeature-noha": [7470.66],
+
+ "os-odl-nofeature-ha": [7372.23],
+
+ "os-nosdn-ovs_dpdk-noha": [7311.54],
+
+ "os-odl-sfc-noha": [7300.56],
+
+ "os-nosdn-ovs-noha": [7280.005],
+
+ "os-odl-nofeature-noha": [7162.67],
+
+ "os-nosdn-kvm-ha": [7130.775],
+
+ "os-nosdn-kvm-noha": [7041.13],
+
+ "os-ovn-nofeature-noha": [6954.5],
+
+ "os-nosdn-ovs-ha": [6913.355],
+
+ "os-nosdn-bar-ha": [6829.17],
+
+ "os-nosdn-bar-noha": [6812.16]
+
+}
+
+
+The influence of the POD
+^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. image:: images/tc069_pod.png
+ :width: 800px
+ :alt: TC069 influence of the POD
+
+{
+
+ "intel-pod18": [18871.79],
+
+ "intel-pod5": [16055.79],
+
+ "arm-pod6": [13327.02],
+
+ "flex-pod2": [9384.585],
+
+ "ericsson-pod1": [9331.535],
+
+ "huawei-pod12": [9164.88],
+
+ "huawei-pod2": [9026.52],
+
+ "huawei-virtual9": [8825.805],
+
+ "ericsson-virtual1": [7615.97],
+
+ "ericsson-virtual4": [7539.23],
+
+ "arm-pod5": [7403.38],
+
+ "huawei-virtual3": [7247.89],
+
+ "huawei-virtual2": [7205.35],
+
+ "huawei-virtual1": [7196.405],
+
+ "ericsson-virtual3": [7173.72],
+
+ "huawei-virtual4": [7131.47],
+
+ "ericsson-virtual2": [7129.08],
+
+ "lf-pod1": [6928.18],
+
+ "lf-pod2": [6875.88],
+
+ "huawei-virtual8": [5729.705]
+
+}
+
+
+Fraser release
+--------------
diff --git a/docs/release/results/tc082-context-switches-under-load.rst b/docs/release/results/tc082-context-switches-under-load.rst
new file mode 100644
index 000000000..d8a9f5493
--- /dev/null
+++ b/docs/release/results/tc082-context-switches-under-load.rst
@@ -0,0 +1,129 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+
+
+==================================================
+Test results for TC082 context switches under load
+==================================================
+
+.. toctree::
+ :maxdepth: 2
+
+
+Overview of test case
+=====================
+
+TC082 measures various software performance events using perf.
+The test results shown below are for context-switches.
+
+Metric: context switches
+Unit: N/A
+
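+As an illustration of the kind of measurement behind this metric, the sketch below counts
+context switches system-wide for ten seconds with ``perf``. The exact event list, duration and
+load generation used by the test case may differ.
+
+.. code-block:: python
+
+    import subprocess
+
+    result = subprocess.run(
+        ["perf", "stat", "-e", "context-switches", "-a", "sleep", "10"],
+        capture_output=True, text=True)
+
+    # perf prints its counter summary to stderr.
+    print(result.stderr)
+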
+
+Euphrates release
+-----------------
+
+Test results per scenario and pod (lower is better):
+
+{
+
+ "os-nosdn-nofeature-ha:huawei-pod12:joid": [316],
+
+ "os-nosdn-nofeature-ha:intel-pod18:joid": [340],
+
+ "os-nosdn-nofeature-ha:intel-pod5:joid": [357.5],
+
+ "os-nosdn-nofeature-ha:ericsson-pod1:fuel": [384],
+
+ "os-nosdn-nofeature-ha:lf-pod2:fuel": [394.5],
+
+ "os-nosdn-nofeature-ha:lf-pod1:apex": [435],
+
+ "os-nosdn-nofeature-ha:flex-pod2:apex": [476],
+
+ "os-nosdn-nofeature-ha:huawei-pod2:compass": [518],
+
+ "os-odl-nofeature-ha:arm-pod5:fuel": [863],
+
+ "os-nosdn-nofeature-ha:arm-pod5:fuel": [871],
+
+ "os-nosdn-nofeature-ha:huawei-virtual9:compass": [1002],
+
+ "os-nosdn-nofeature-ha:huawei-virtual4:compass": [1174],
+
+ "os-nosdn-nofeature-ha:huawei-virtual3:compass": [1239],
+
+ "os-nosdn-nofeature-ha:huawei-virtual2:compass": [1430],
+
+ "os-nosdn-nofeature-ha:huawei-virtual1:compass": [1489],
+
+ "os-nosdn-nofeature-ha:arm-pod6:fuel": [1883.5]
+
+}
+
+
+The influence of the scenario
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. image:: images/tc082_scenario.png
+ :width: 800px
+ :alt: TC082 influence of scenario
+
+{
+
+ "os-nosdn-nofeature-ha": [505],
+
+ "os-odl-nofeature-ha": [863]
+
+}
+
+
+The influence of the POD
+^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. image:: images/tc082_pod.png
+ :width: 800px
+ :alt: TC082 influence of the POD
+
+{
+
+ "huawei-pod12": [316],
+
+ "intel-pod18": [340],
+
+ "intel-pod5": [357.5],
+
+ "ericsson-pod1": [384],
+
+ "lf-pod2": [394.5],
+
+ "lf-pod1": [435],
+
+ "flex-pod2": [476],
+
+ "huawei-pod2": [518],
+
+ "arm-pod5": [869.5],
+
+ "huawei-virtual9": [1002],
+
+ "huawei-virtual4": [1174],
+
+ "huawei-virtual3": [1239],
+
+ "huawei-virtual2": [1430],
+
+ "huawei-virtual1": [1489],
+
+ "arm-pod6": [1883.5]
+
+}
+
+
+Fraser release
+--------------
diff --git a/docs/release/results/tc083-network-throughput-between-vm.rst b/docs/release/results/tc083-network-throughput-between-vm.rst
new file mode 100644
index 000000000..f846571a5
--- /dev/null
+++ b/docs/release/results/tc083-network-throughput-between-vm.rst
@@ -0,0 +1,129 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+
+
+=====================================================
+Test results for TC083 network throughput between VMs
+=====================================================
+
+.. toctree::
+ :maxdepth: 2
+
+
+Overview of test case
+=====================
+
+TC083 measures network latency and throughput between VMs using netperf.
+The test results shown below are for UDP throughput.
+
+Metric: UDP stream throughput
+Unit: 10^6bits/s
+
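+For orientation, the sketch below runs a netperf UDP_STREAM test against a peer VM that is
+assumed to be running netserver. The peer address is a placeholder and the options used by the
+actual test case may differ.
+
+.. code-block:: python
+
+    import subprocess
+
+    peer = "10.0.0.5"    # hypothetical address of the target VM running netserver
+    out = subprocess.run(
+        ["netperf", "-H", peer, "-t", "UDP_STREAM", "-l", "20"],
+        capture_output=True, text=True).stdout
+
+    # The throughput column of the UDP_STREAM report is in 10^6 bits/s.
+    print(out)
+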
+
+Euphrates release
+-----------------
+
+Test results per scenario and pod (higher is better):
+
+{
+
+ "os-nosdn-nofeature-ha:lf-pod1:apex": [2204.42],
+
+ "os-nosdn-nofeature-ha:intel-pod18:joid": [1835.55],
+
+ "os-nosdn-nofeature-ha:lf-pod2:fuel": [1676.705],
+
+ "os-nosdn-nofeature-ha:intel-pod5:joid": [1612.555],
+
+ "os-nosdn-nofeature-ha:flex-pod2:apex": [1370.23],
+
+ "os-nosdn-nofeature-ha:huawei-pod12:joid": [1300.12],
+
+ "os-nosdn-nofeature-ha:huawei-pod2:compass": [1070.455],
+
+ "os-nosdn-nofeature-ha:ericsson-pod1:fuel": [1004.32],
+
+ "os-nosdn-nofeature-ha:huawei-virtual9:compass": [753.46],
+
+ "os-nosdn-nofeature-ha:huawei-virtual4:compass": [735.07],
+
+ "os-odl-nofeature-ha:arm-pod5:fuel": [531.63],
+
+ "os-nosdn-nofeature-ha:huawei-virtual3:compass": [493.985],
+
+ "os-nosdn-nofeature-ha:arm-pod5:fuel": [448.82],
+
+ "os-nosdn-nofeature-ha:arm-pod6:fuel": [193.43],
+
+ "os-nosdn-nofeature-ha:huawei-virtual1:compass": [189.99],
+
+ "os-nosdn-nofeature-ha:huawei-virtual2:compass": [80.15]
+
+}
+
+
+The influence of the scenario
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. image:: images/tc083_scenario.png
+ :width: 800px
+ :alt: TC083 influence of scenario
+
+{
+
+ "os-nosdn-nofeature-ha": [1109.12],
+
+ "os-odl-nofeature-ha": [531.63]
+
+}
+
+
+The influence of the POD
+^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. image:: images/tc083_pod.png
+ :width: 800px
+ :alt: TC083 influence of the POD
+
+{
+
+ "lf-pod1": [2204.42],
+
+ "intel-pod18": [1835.55],
+
+ "lf-pod2": [1676.705],
+
+ "intel-pod5": [1612.555],
+
+ "flex-pod2": [1370.23],
+
+ "huawei-pod12": [1300.12],
+
+ "huawei-pod2": [1070.455],
+
+ "ericsson-pod1": [1004.32],
+
+ "huawei-virtual9": [753.46],
+
+ "huawei-virtual4": [735.07],
+
+ "huawei-virtual3": [493.985],
+
+ "arm-pod5": [451.38],
+
+ "arm-pod6": [193.43],
+
+ "huawei-virtual1": [189.99],
+
+ "huawei-virtual2": [80.15]
+
+}
+
+
+Fraser release
+--------------
diff --git a/docs/testing/developer/devguide/devguide.rst b/docs/testing/developer/devguide/devguide.rst
index dade49b75..04d5350be 100755
--- a/docs/testing/developer/devguide/devguide.rst
+++ b/docs/testing/developer/devguide/devguide.rst
@@ -432,7 +432,8 @@ Yardstick committers and contributors to review your codes.
:width: 800px
:alt: Gerrit for code review
-You can find Yardstick people info `here <https://wiki.opnfv.org/display/yardstick/People>`_.
+You can find a list of Yardstick people `here <https://wiki.opnfv.org/display/yardstick/People>`_,
+or use the ``yardstick-reviewers`` and ``yardstick-committers`` groups in Gerrit.
Modify the code under review in Gerrit
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc088.rst b/docs/testing/user/userguide/opnfv_yardstick_tc088.rst
new file mode 100644
index 000000000..2423a6b31
--- /dev/null
+++ b/docs/testing/user/userguide/opnfv_yardstick_tc088.rst
@@ -0,0 +1,129 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, Yin Kanglin and others.
+.. 14_ykl@tongji.edu.cn
+
+*************************************
+Yardstick Test Case Description TC088
+*************************************
+
++-----------------------------------------------------------------------------+
+|Control Node Openstack Service High Availability - Nova Scheduler |
+| |
++--------------+--------------------------------------------------------------+
+|test case id | OPNFV_YARDSTICK_TC088: Control node Openstack service down - |
+| | nova scheduler |
++--------------+--------------------------------------------------------------+
+|test purpose | This test case will verify the high availability of the |
+| | compute scheduler service provided by OpenStack (nova- |
+| | scheduler) on a control node. |
+| | |
++--------------+--------------------------------------------------------------+
+|test method | This test case kills the processes of the nova-scheduler |
+| | service on a selected control node, then checks whether the |
+| | related OpenStack command requests still succeed and whether |
+| | the killed processes are recovered. |
+| | |
++--------------+--------------------------------------------------------------+
+|attackers | In this test case, an attacker called "kill-process" is |
+| | needed. This attacker includes three parameters: |
+| | 1) fault_type: which is used for finding the attacker's |
+| | scripts. It should always be set to "kill-process" in this |
+| | test case. |
+| | 2) process_name: which is the process name of the specified |
+| | OpenStack service. If multiple processes on the host use the |
+| | same name, all of them are killed by this attacker. |
+| | In this test case, this parameter should always be set to |
+| | "nova-scheduler". |
+| | 3) host: which is the name of a control node being attacked. |
+| | |
+| | e.g. |
+| | -fault_type: "kill-process" |
+| | -process_name: "nova-scheduler" |
+| | -host: node1 |
+| | |
++--------------+--------------------------------------------------------------+
+|monitors | In this test case, one kind of monitor is needed: |
+| | 1. the "process" monitor check whether a process is running |
+| | on a specific node, which needs three parameters: |
+| | 1) monitor_type: which used for finding the monitor class and|
+| | related scripts. It should be always set to "process" |
+| | for this monitor. |
+| | 2) process_name: which is the process name for monitor |
+| | 3) host: which is the name of the node running the process |
+| | |
+| | e.g. |
+| | monitor: |
+| | -monitor_type: "process" |
+| | -process_name: "nova-scheduler" |
+| | -host: node1 |
+| | |
++--------------+--------------------------------------------------------------+
+|operations | In this test case, the following operations are needed: |
+| | 1. "nova-create-instance": create a VM instance to check |
+| | whether the nova-scheduler works normally. |
+| | |
++--------------+--------------------------------------------------------------+
+|metrics | In this test case, there is one metric: |
+| | 1) process_recover_time: which indicates the maximum time |
+| | (seconds) from the process being killed until it is |
+| | recovered (a rough sketch of this measurement is shown |
+| | after the table). |
+| | |
++--------------+--------------------------------------------------------------+
+|test tool | Developed by the project. Please see folder: |
+| | "yardstick/benchmark/scenarios/availability/ha_tools" |
+| | |
++--------------+--------------------------------------------------------------+
+|references | ETSI NFV REL001 |
+| | |
++--------------+--------------------------------------------------------------+
+|configuration | This test case needs two configuration files: |
+| | 1) test case file: opnfv_yardstick_tc088.yaml |
+| | -Attackers: see above "attackers" description |
+| | -waiting_time: which is the time (seconds) from the process |
+| | being killed until the monitors are stopped |
+| | -Monitors: see above "monitors" description |
+| | -SLA: see above "metrics" description |
+| | |
+| | 2) POD file: pod.yaml |
+| | The POD configuration should be recorded in pod.yaml first. |
+| | The "host" item in this test case uses the node name from |
+| | pod.yaml. |
+| | |
++--------------+--------------------------------------------------------------+
+|test sequence | description and expected result |
+| | |
++--------------+--------------------------------------------------------------+
+|step 1 | do attacker: connect to the host through SSH, and then |
+| | execute the kill-process script with the parameter value |
+| | specified by "process_name" |
+| | |
+| | Result: Process will be killed. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 2 | start monitors: |
+| | each monitor will run in an independent process |
+| | |
+| | Result: The monitor info will be collected. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 3 | create a new instance to check whether the nova scheduler |
+| | works normally. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 4 | stop the monitor after a period of time specified by |
+| | "waiting_time" |
+| | |
+| | Result: The monitor info will be aggregated. |
+| | |
++--------------+--------------------------------------------------------------+
+|post-action | This is the action taken when the test case exits. It checks |
+| | the status of the specified process on the host, and |
+| | restarts the process if it is not running, so that the next |
+| | test cases can run. |
+| | |
++--------------+--------------------------------------------------------------+
+|test verdict | Fails only if SLA is not passed, or if there is a test case |
+| | execution problem. |
+| | |
++--------------+--------------------------------------------------------------+
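+
+Below is a rough, illustrative sketch of how the "process" monitor idea and the
+process_recover_time metric fit together: poll the attacked host until the killed process
+reappears and report how long that took. The host name, the SSH access and the use of
+``pgrep`` are assumptions for illustration, not the Yardstick implementation in ``ha_tools``.
+
+.. code-block:: python
+
+    import subprocess
+    import time
+
+    def wait_for_recovery(host, process_name, timeout=120):
+        """Poll `host` over SSH until `process_name` is running again."""
+        start = time.time()
+        while time.time() - start < timeout:
+            check = subprocess.run(["ssh", host, "pgrep", "-f", process_name],
+                                   capture_output=True)
+            if check.returncode == 0:           # at least one matching process found
+                return time.time() - start
+            time.sleep(1)
+        return None                             # not recovered within the timeout
+
+    # Hypothetical usage right after the attacker kills nova-scheduler on node1:
+    print("process_recover_time:", wait_for_recovery("node1", "nova-scheduler"))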
diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc089.rst b/docs/testing/user/userguide/opnfv_yardstick_tc089.rst
new file mode 100644
index 000000000..0a8b2570b
--- /dev/null
+++ b/docs/testing/user/userguide/opnfv_yardstick_tc089.rst
@@ -0,0 +1,129 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, Yin Kanglin and others.
+.. 14_ykl@tongji.edu.cn
+
+*************************************
+Yardstick Test Case Description TC089
+*************************************
+
++-----------------------------------------------------------------------------+
+|Control Node Openstack Service High Availability - Nova Conductor |
+| |
++--------------+--------------------------------------------------------------+
+|test case id | OPNFV_YARDSTICK_TC089: Control node Openstack service down - |
+| | nova conductor |
++--------------+--------------------------------------------------------------+
+|test purpose | This test case will verify the high availability of the |
+| | compute database proxy service provided by OpenStack (nova- |
+| | conductor) on a control node. |
+| | |
++--------------+--------------------------------------------------------------+
+|test method | This test case kills the processes of the nova-conductor |
+| | service on a selected control node, then checks whether the |
+| | related OpenStack command requests still succeed and whether |
+| | the killed processes are recovered. |
+| | |
++--------------+--------------------------------------------------------------+
+|attackers | In this test case, an attacker called "kill-process" is |
+| | needed. This attacker includes three parameters: |
+| | 1) fault_type: which is used for finding the attacker's |
+| | scripts. It should always be set to "kill-process" in this |
+| | test case. |
+| | 2) process_name: which is the process name of the specified |
+| | OpenStack service. If multiple processes on the host use the |
+| | same name, all of them are killed by this attacker. |
+| | In this test case, this parameter should always be set to |
+| | "nova-conductor". |
+| | 3) host: which is the name of a control node being attacked. |
+| | |
+| | e.g. |
+| | -fault_type: "kill-process" |
+| | -process_name: "nova-conductor" |
+| | -host: node1 |
+| | |
++--------------+--------------------------------------------------------------+
+|monitors | In this test case, one kind of monitor is needed: |
+| | 1. the "process" monitor check whether a process is running |
+| | on a specific node, which needs three parameters: |
+| | 1) monitor_type: which used for finding the monitor class and|
+| | related scripts. It should be always set to "process" |
+| | for this monitor. |
+| | 2) process_name: which is the process name for monitor |
+| | 3) host: which is the name of the node running the process |
+| | |
+| | e.g. |
+| | monitor: |
+| | -monitor_type: "process" |
+| | -process_name: "nova-conductor" |
+| | -host: node1 |
+| | |
++--------------+--------------------------------------------------------------+
+|operations | In this test case, the following operations are needed: |
+| | 1. "nova-create-instance": create a VM instance to check |
+| | whether the nova-conductor works normally. |
+| | |
++--------------+--------------------------------------------------------------+
+|metrics | In this test case, there is one metric: |
+| | 1) process_recover_time: which indicates the maximum time |
+| | (seconds) from the process being killed until it is recovered|
+| | |
++--------------+--------------------------------------------------------------+
+|test tool | Developed by the project. Please see folder: |
+| | "yardstick/benchmark/scenarios/availability/ha_tools" |
+| | |
++--------------+--------------------------------------------------------------+
+|references | ETSI NFV REL001 |
+| | |
++--------------+--------------------------------------------------------------+
+|configuration | This test case needs two configuration files: |
+| | 1) test case file: opnfv_yardstick_tc089.yaml |
+| | -Attackers: see above "attackers" description |
+| | -waiting_time: which is the time (seconds) from the process |
+| | being killed until the monitors are stopped |
+| | -Monitors: see above "monitors" description |
+| | -SLA: see above "metrics" description |
+| | |
+| | 2) POD file: pod.yaml |
+| | The POD configuration should be recorded in pod.yaml first. |
+| | The "host" item in this test case uses the node name from |
+| | pod.yaml. |
+| | |
++--------------+--------------------------------------------------------------+
+|test sequence | description and expected result |
+| | |
++--------------+--------------------------------------------------------------+
+|step 1 | do attacker: connect to the host through SSH, and then |
+| | execute the kill-process script with the parameter value |
+| | specified by "process_name" |
+| | |
+| | Result: Process will be killed. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 2 | start monitors: |
+| | each monitor will run in an independent process |
+| | |
+| | Result: The monitor info will be collected. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 3 | create a new instance to check whether the nova conductor |
+| | works normally. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 4 | stop the monitor after a period of time specified by |
+| | "waiting_time" |
+| | |
+| | Result: The monitor info will be aggregated. |
+| | |
++--------------+--------------------------------------------------------------+
+|post-action | This is the action taken when the test case exits. It checks |
+| | the status of the specified process on the host, and |
+| | restarts the process if it is not running, so that the next |
+| | test cases can run. |
+| | |
++--------------+--------------------------------------------------------------+
+|test verdict | Fails only if SLA is not passed, or if there is a test case |
+| | execution problem. |
+| | |
++--------------+--------------------------------------------------------------+