author    rexlee8776 <limingjiang@huawei.com>  2017-03-08 07:12:55 +0000
committer rexlee8776 <limingjiang@huawei.com>  2017-03-08 07:12:55 +0000
commit    fd54fcc22170aa880fc49730730ad80896e2e608 (patch)
tree      025941493c552421e46f4c323bab1694c6d7fe01 /docs/release
parent    536076de790aed38b462edd8f8b2f079d3e828b2 (diff)
Yardstick Preliminary Documentation
JIRA: YARDSTICK-554
Align with opnfvdocs path structure about testing projects.

Change-Id: I6c2f2d37e41447dccd76b9f4426d00fd85cb1e3b
Signed-off-by: rexlee8776 <limingjiang@huawei.com>
Diffstat (limited to 'docs/release')
-rw-r--r--  docs/release/release-notes/index.rst (renamed from docs/release/index.rst)                 0
-rw-r--r--  docs/release/release-notes/release-notes.rst (renamed from docs/release/release-notes.rst)  0
-rw-r--r--  docs/release/results/index.rst                     14
-rw-r--r--  docs/release/results/os-nosdn-kvm-ha.rst          270
-rw-r--r--  docs/release/results/os-nosdn-nofeature-ha.rst    492
-rw-r--r--  docs/release/results/os-nosdn-nofeature-noha.rst  259
-rw-r--r--  docs/release/results/os-odl_l2-bgpvpn-ha.rst       53
-rw-r--r--  docs/release/results/os-odl_l2-nofeature-ha.rst   743
-rw-r--r--  docs/release/results/os-odl_l2-sfc-ha.rst         231
-rw-r--r--  docs/release/results/os-onos-nofeature-ha.rst     257
-rw-r--r--  docs/release/results/os-onos-sfc-ha.rst           517
-rw-r--r--  docs/release/results/overview.rst                 106
-rw-r--r--  docs/release/results/results.rst                   57
-rw-r--r--  docs/release/results/yardstick-opnfv-ha.rst       118
-rw-r--r--  docs/release/results/yardstick-opnfv-kvm.rst       38
-rw-r--r--  docs/release/results/yardstick-opnfv-parser.rst    38
-rw-r--r--  docs/release/results/yardstick-opnfv-vtc.rst      248
17 files changed, 3441 insertions, 0 deletions
diff --git a/docs/release/index.rst b/docs/release/release-notes/index.rst
index c9cadc539..c9cadc539 100644
--- a/docs/release/index.rst
+++ b/docs/release/release-notes/index.rst
diff --git a/docs/release/release-notes.rst b/docs/release/release-notes/release-notes.rst
index 8df0776df..8df0776df 100644
--- a/docs/release/release-notes.rst
+++ b/docs/release/release-notes/release-notes.rst
diff --git a/docs/release/results/index.rst b/docs/release/results/index.rst
new file mode 100644
index 000000000..2b67f1b22
--- /dev/null
+++ b/docs/release/results/index.rst
@@ -0,0 +1,14 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, Ericsson AB and others.
+
+======================
+Yardstick test results
+======================
+
+.. toctree::
+ :maxdepth: 4
+
+.. include:: ./overview.rst
+.. include:: ./results.rst
diff --git a/docs/release/results/os-nosdn-kvm-ha.rst b/docs/release/results/os-nosdn-kvm-ha.rst
new file mode 100644
index 000000000..a8a56f80e
--- /dev/null
+++ b/docs/release/results/os-nosdn-kvm-ha.rst
@@ -0,0 +1,270 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+
+
+================================
+Test Results for os-nosdn-kvm-ha
+================================
+
+.. toctree::
+ :maxdepth: 2
+
+
+fuel
+====
+
+.. _Grafana: http://testresults.opnfv.org/grafana/dashboard/db/yardstick-main
+.. _POD2: https://wiki.opnfv.org/pharos?&#community_test_labs
+
+Overview of test results
+------------------------
+
+See Grafana_ for viewing test result metrics for each respective test case. It
+is possible to choose which specific scenarios to look at, and then to zoom in
+on the details of each test scenario run as well.
+
+All of the test case results below are based on 4 scenario test
+runs, each run on the Ericsson POD2_ or LF POD2_ between August 24 and 30 in
+2016.
+
+TC002
+-----
+The round-trip-time (RTT) between 2 VMs on different blades is measured using
+ping. Most test run measurements average between 0.44 and 0.75 ms.
+A few runs start with an RTT spike of 0.65 - 0.68 ms (possibly caused by
+normal ARP handling). One test run has a greater RTT spike of 1.49 ms.
+To be able to draw conclusions more runs should be made. SLA set to 10 ms.
+The SLA value is used as a reference, it has not been defined by OPNFV.
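+
+As an illustration only, a ping-based RTT sample of this kind could be collected
+and parsed roughly as below. This is a minimal sketch, not the Yardstick TC002
+implementation; it assumes a Linux iputils ``ping`` and uses a placeholder
+target address.
+
+.. code-block:: python
+
+    import re
+    import subprocess
+
+    TARGET = "10.0.0.5"  # placeholder VM address
+
+    # Send 10 ICMP echo requests and parse the iputils summary line, e.g.
+    # "rtt min/avg/max/mdev = 0.442/0.581/0.747/0.095 ms".
+    out = subprocess.run(["ping", "-c", "10", TARGET],
+                         capture_output=True, text=True, check=True).stdout
+    match = re.search(r"= ([\d.]+)/([\d.]+)/([\d.]+)/([\d.]+) ms", out)
+    rtt_min, rtt_avg, rtt_max, rtt_mdev = (float(v) for v in match.groups())
+
+    print("avg RTT %.3f ms (SLA reference: 10 ms)" % rtt_avg)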
+
+TC005
+-----
+The IO read bandwidth looks similar between different dates, with an
+average between approx. 92 and 204 MB/s. Within each test run the results
+vary, with a minimum 2 MB/s and maximum 819 MB/s on the totality. Most runs
+have a minimum BW of 3 MB/s (one run at 2 MB/s). The maximum BW varies more in
+absolute numbers between the dates, between 238 and 819 MB/s.
+SLA set to 400 MB/s. The SLA value is used as a reference, it has not been
+defined by OPNFV.
+
+TC010
+-----
+The measurements for memory latency are similar between test dates and result
+in approx. 2.07 ns. The variations within each test run are similar, between
+1.41 and 3.53 ns.
+SLA set to 30 ns. The SLA value is used as a reference, it has not been defined
+by OPNFV.
+
+TC011
+-----
+Packet delay variation between 2 VMs on different blades is measured using
+Iperf3. The reported packet delay variation varies between 0.0051 and 0.0243 ms,
+with an average delay variation between 0.0081 ms and 0.0195 ms.
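+
+For illustration, a packet delay variation (jitter) sample of this kind could be
+taken via iperf3's JSON output, as sketched below. This is not the Yardstick
+TC011 code; it assumes iperf3 is installed, a server ("iperf3 -s") is running in
+the peer VM, and the address and bandwidth values are placeholders.
+
+.. code-block:: python
+
+    import json
+    import subprocess
+
+    SERVER = "10.0.0.6"  # placeholder address of the VM running "iperf3 -s"
+
+    # UDP test for 10 seconds; --json makes iperf3 emit machine-readable output.
+    out = subprocess.run(
+        ["iperf3", "-c", SERVER, "-u", "-b", "100M", "-t", "10", "--json"],
+        capture_output=True, text=True, check=True).stdout
+
+    summary = json.loads(out)["end"]["sum"]
+    print("jitter: %.4f ms, lost packets: %d"
+          % (summary["jitter_ms"], summary["lost_packets"]))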
+
+TC012
+-----
+Between test dates, the average measurements for memory bandwidth result in
+approx. 13.6 GB/s. Within each test run the results vary more, with a minimal
+BW of 6.09 GB/s and maximum of 16.47 GB/s on the totality.
+SLA set to 15 GB/s. The SLA value is used as a reference, it has not been
+defined by OPNFV.
+
+TC014
+-----
+The Unixbench processor test run results vary between scores 2316 and 3619,
+one result each date.
+No SLA set.
+
+TC037
+-----
+The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
+on different blades are measured when increasing the amount of UDP flows sent
+between the VMs using pktgen as packet generator tool.
+
+Round trip times and packet throughput between VMs can typically be affected by
+the amount of flows set up and result in higher RTT and less PPS throughput.
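+
+For readers unfamiliar with pktgen: it is the Linux kernel packet generator,
+driven through files under ``/proc/net/pktgen``. The sketch below shows the
+general idea of setting up a flow-based UDP load; it is not the Yardstick TC037
+traffic script, it needs root privileges and ``modprobe pktgen``, and the
+interface name and destination address are placeholders.
+
+.. code-block:: python
+
+    import pathlib
+
+    PKTGEN = pathlib.Path("/proc/net/pktgen")
+    DEV = "eth0"        # placeholder interface in the sender VM
+    DST = "10.0.0.7"    # placeholder address of the receiving VM
+
+    def pg_write(path, cmd):
+        # Each pktgen setting is a single line written to the proc file.
+        path.write_text(cmd + "\n")
+
+    # Bind the interface to the first pktgen kernel thread.
+    pg_write(PKTGEN / "kpktgend_0", "rem_device_all")
+    pg_write(PKTGEN / "kpktgend_0", "add_device " + DEV)
+
+    # Configure packet count, packet size and the number of concurrent flows.
+    dev = PKTGEN / DEV
+    for cmd in ("count 1000000", "pkt_size 64", "flows 10000",
+                "flowlen 32", "dst " + DST):
+        pg_write(dev, cmd)
+
+    # "start" blocks until the configured count has been sent; the device file
+    # then contains a result block including the achieved packets per second.
+    pg_write(PKTGEN / "pgctrl", "start")
+    print(dev.read_text())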
+
+The RTT results are similar throughout the different test dates and runs at
+approx. 15 ms. Some test runs show an increase with many flows, in the range
+towards 16 to 17 ms. One exception standing out is Feb. 15 where the average
+RTT is stable at approx. 13 ms. The PPS results are not as consistent as the
+RTT results.
+In some test runs when running with less than approx. 10000 flows the PPS
+throughput is normally flatter compared to when running with more flows, after
+which the PPS throughput decreases, by around 20 percent in the worst case.
+For the other test runs there is however no significant change to the PPS
+throughput when the number of flows is increased. In some test runs the PPS
+is also greater with 1000000 flows compared to other test runs where the PPS
+result is less with only 2 flows.
+
+The average PPS throughput in the different runs varies between 414000 and
+452000 PPS. The total amount of packets in each test run is approx. 7500000 to
+8200000 packets. One test run Feb. 15 sticks out with a PPS average of
+558000 and approx. 1100000 packets in total (the same run as mentioned earlier
+for RTT results).
+
+There are lost packets reported in most of the test runs. There is no observed
+correlation between the amount of flows and the amount of lost packets.
+The amount of lost packets normally ranges between 100 and 1000 per test run,
+but there are spikes in the range of 10000 lost packets as well, and even
+more in rare cases.
+
+CPU utilization statistics are collected during UDP flows sent between the VMs
+using pktgen as packet generator tool. The average measurements for CPU
+utilization ratio vary between 1% and 2%. The peak of CPU utilization ratio
+appears around 7%.
+
+TC069
+-----
+Between test dates, the average measurements for memory bandwidth vary between
+22.6 and 29.1 GB/s. Within each test run the results vary more, with a minimal
+BW of 20.0 GB/s and maximum of 29.5 GB/s on the totality.
+SLA set to 6 GB/s. The SLA value is used as a reference, it has not been
+defined by OPNFV.
+
+
+TC070
+-----
+The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
+on different blades are measured when increasing the amount of UDP flows sent
+between the VMs using pktgen as packet generator tool.
+
+Round trip times and packet throughput between VMs can typically be affected by
+the amount of flows set up and result in higher RTT and less PPS throughput.
+
+The RTT results are similar throughout the different test dates and runs at
+approx. 15 ms. Some test runs show an increase with many flows, in the range
+towards 16 to 17 ms. One exception standing out is Feb. 15 where the average
+RTT is stable at approx. 13 ms. The PPS results are not as consistent as the
+RTT results.
+In some test runs when running with less than approx. 10000 flows the PPS
+throughput is normally flatter compared to when running with more flows, after
+which the PPS throughput decreases, by around 20 percent in the worst case.
+For the other test runs there is however no significant change to the PPS
+throughput when the number of flows is increased. In some test runs the PPS
+is also greater with 1000000 flows compared to other test runs where the PPS
+result is less with only 2 flows.
+
+The average PPS throughput in the different runs varies between 414000 and
+452000 PPS. The total amount of packets in each test run is approx. 7500000 to
+8200000 packets. One test run Feb. 15 sticks out with a PPS average of
+558000 and approx. 1100000 packets in total (the same run as mentioned earlier
+for RTT results).
+
+There are lost packets reported in most of the test runs. There is no observed
+correlation between the amount of flows and the amount of lost packets.
+The amount of lost packets normally ranges between 100 and 1000 per test run,
+but there are spikes in the range of 10000 lost packets as well, and even
+more in rare cases.
+
+Memory utilization statistics are collected during UDP flows sent between the
+VMs using pktgen as packet generator tool. The average measurements for memory
+utilization vary between 225MB and 246MB. The peak of memory utilization appears
+around 340MB.
+
+TC071
+-----
+The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
+on different blades are measured when increasing the amount of UDP flows sent
+between the VMs using pktgen as packet generator tool.
+
+Round trip times and packet throughput between VMs can typically be affected by
+the amount of flows set up and result in higher RTT and less PPS throughput.
+
+The RTT results are similar throughout the different test dates and runs at
+approx. 15 ms. Some test runs show an increase with many flows, in the range
+towards 16 to 17 ms. One exception standing out is Feb. 15 where the average
+RTT is stable at approx. 13 ms. The PPS results are not as consistent as the
+RTT results.
+In some test runs when running with less than approx. 10000 flows the PPS
+throughput is normally flatter compared to when running with more flows, after
+which the PPS throughput decreases, by around 20 percent in the worst case.
+For the other test runs there is however no significant change to the PPS
+throughput when the number of flows is increased. In some test runs the PPS
+is also greater with 1000000 flows compared to other test runs where the PPS
+result is less with only 2 flows.
+
+The average PPS throughput in the different runs varies between 414000 and
+452000 PPS. The total amount of packets in each test run is approx. 7500000 to
+8200000 packets. One test run Feb. 15 sticks out with a PPS average of
+558000 and approx. 1100000 packets in total (the same run as mentioned earlier
+for RTT results).
+
+There are lost packets reported in most of the test runs. There is no observed
+correlation between the amount of flows and the amount of lost packets.
+The amount of lost packets normally ranges between 100 and 1000 per test run,
+but there are spikes in the range of 10000 lost packets as well, and even
+more in rare cases.
+
+Cache utilization statistics are collected during UDP flows sent between the
+VMs using pktgen as packet generator tool. The average measurements for cache
+utilization vary between 205MB and 212MB.
+
+TC072
+-----
+The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
+on different blades are measured when increasing the amount of UDP flows sent
+between the VMs using pktgen as packet generator tool.
+
+Round trip times and packet throughput between VMs can typically be affected by
+the amount of flows set up and result in higher RTT and less PPS throughput.
+
+The RTT results are similar throughout the different test dates and runs at
+approx. 15 ms. Some test runs show an increase with many flows, in the range
+towards 16 to 17 ms. One exception standing out is Feb. 15 where the average
+RTT is stable at approx. 13 ms. The PPS results are not as consistent as the
+RTT results.
+In some test runs when running with less than approx. 10000 flows the PPS
+throughput is normally flatter compared to when running with more flows, after
+which the PPS throughput decreases, by around 20 percent in the worst case.
+For the other test runs there is however no significant change to the PPS
+throughput when the number of flows is increased. In some test runs the PPS
+is also greater with 1000000 flows compared to other test runs where the PPS
+result is less with only 2 flows.
+
+The average PPS throughput in the different runs varies between 414000 and
+452000 PPS. The total amount of packets in each test run is approx. 7500000 to
+8200000 packets. One test run Feb. 15 sticks out with a PPS average of
+558000 and approx. 1100000 packets in total (the same run as mentioned earlier
+for RTT results).
+
+There are lost packets reported in most of the test runs. There is no observed
+correlation between the amount of flows and the amount of lost packets.
+The amount of lost packets normally ranges between 100 and 1000 per test run,
+but there are spikes in the range of 10000 lost packets as well, and even
+more in rare cases.
+
+Network utilization statistics are collected during UDP flows sent between the
+VMs using pktgen as packet generator tool. The total number of packets received
+per second averaged around 200 kpps and the total number of packets transmitted
+per second averaged around 600 kpps.
+
+Detailed test results
+---------------------
+The scenario was run on Ericsson POD2_ and LF POD2_ with:
+
+* Fuel 9.0
+* OpenStack Mitaka
+* OpenVirtualSwitch 2.5.90
+* OpenDayLight Beryllium
+
+Rationale for decisions
+-----------------------
+Pass
+
+Tests were successfully executed and metrics collected.
+No SLA was verified. To be decided on in next release of OPNFV.
+
+Conclusions and recommendations
+-------------------------------
+The pktgen test configuration has a relatively large base effect on RTT in
+TC037 compared to TC002, where there is no background load at all: approx.
+15 ms compared to approx. 0.5 ms, which is a difference of more than 3000
+percent in RTT results.
+In particular, RTT and throughput come out with better results than, for
+instance, the *fuel-os-nosdn-nofeature-ha* scenario does. The reason for this
+should probably be further analyzed and understood. It would also be of
+interest to make further analyses to find patterns and reasons for lost
+traffic, and to see if there are continuous variations where some test cases
+stand out with better or worse results than the general test case.
+
diff --git a/docs/release/results/os-nosdn-nofeature-ha.rst b/docs/release/results/os-nosdn-nofeature-ha.rst
new file mode 100644
index 000000000..9e52731d5
--- /dev/null
+++ b/docs/release/results/os-nosdn-nofeature-ha.rst
@@ -0,0 +1,492 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+
+
+======================================
+Test Results for os-nosdn-nofeature-ha
+======================================
+
+.. toctree::
+ :maxdepth: 2
+
+
+apex
+====
+
+.. _Grafana: http://testresults.opnfv.org/grafana/dashboard/db/yardstick-main
+.. _POD1: https://wiki.opnfv.org/pharos?&#community_test_labs
+
+
+Overview of test results
+------------------------
+
+See Grafana_ for viewing test result metrics for each respective test case. It
+is possible to choose which specific scenarios to look at, and then to zoom in
+on the details of each test scenario run as well.
+
+All of the test case results below are based on 4 scenario test
+runs, each run on the LF POD1_ between August 25 and 28 in
+2016.
+
+TC002
+-----
+The round-trip-time (RTT) between 2 VMs on different blades is measured using
+ping. Most test run measurements average between 0.74 and 1.08 ms.
+A few runs start with an RTT spike of 0.99 - 1.07 ms (possibly caused by
+normal ARP handling). One test run has a greater RTT spike of 1.35 ms.
+To be able to draw conclusions more runs should be made. SLA set to 10 ms.
+The SLA value is used as a reference, it has not been defined by OPNFV.
+
+TC005
+-----
+The IO read bandwidth looks similar between different dates, with an
+average between approx. 128 and 136 MB/s. Within each test run the results
+vary, with a minimum 5 MB/s and maximum 446 MB/s on the totality. Most runs
+have a minimum BW of 5 MB/s (one run at 6 MB/s). The maximum BW varies more in
+absolute numbers between the dates, between 416 and 446 MB/s.
+SLA set to 400 MB/s. The SLA value is used as a reference, it has not been
+defined by OPNFV.
+
+TC010
+-----
+The measurements for memory latency are similar between test dates and result
+in approx. 1.09 ns. The variations within each test run are similar, between
+1.0860 and 1.0880 ns.
+SLA set to 30 ns. The SLA value is used as a reference, it has not been defined
+by OPNFV.
+
+TC011
+-----
+Packet delay variation between 2 VMs on different blades is measured using
+Iperf3. The reported packet delay variation varies between 0.0025 and 0.0148 ms,
+with an average delay variation between 0.0056 ms and 0.0157 ms.
+
+TC012
+-----
+Between test dates, the average measurements for memory bandwidth result in
+approx. 19.70 GB/s. Within each test run the results vary more, with a minimal
+BW of 18.16 GB/s and maximum of 20.13 GB/s on the totality.
+SLA set to 15 GB/s. The SLA value is used as a reference, it has not been
+defined by OPNFV.
+
+TC014
+-----
+The Unixbench processor test run results vary between scores 3224.4 and 3842.8,
+one result each date. The average score on the total is 3659.5.
+No SLA set.
+
+TC037
+-----
+The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
+on different blades are measured when increasing the amount of UDP flows sent
+between the VMs using pktgen as packet generator tool.
+
+Round trip times and packet throughput between VMs can typically be affected by
+the amount of flows set up and result in higher RTT and less PPS throughput.
+
+The RTT results are similar throughout the different test dates and runs at
+approx. 15 ms. Some test runs show an increase with many flows, in the range
+towards 16 to 17 ms. One exception standing out is Feb. 15 where the average
+RTT is stable at approx. 13 ms. The PPS results are not as consistent as the
+RTT results.
+In some test runs when running with less than approx. 10000 flows the PPS
+throughput is normally flatter compared to when running with more flows, after
+which the PPS throughput decreases, by around 20 percent in the worst case.
+For the other test runs there is however no significant change to the PPS
+throughput when the number of flows is increased. In some test runs the PPS
+is also greater with 1000000 flows compared to other test runs where the PPS
+result is less with only 2 flows.
+
+The average PPS throughput in the different runs varies between 414000 and
+452000 PPS. The total amount of packets in each test run is approx. 7500000 to
+8200000 packets. One test run Feb. 15 sticks out with a PPS average of
+558000 and approx. 1100000 packets in total (the same run as mentioned earlier
+for RTT results).
+
+There are lost packets reported in most of the test runs. There is no observed
+correlation between the amount of flows and the amount of lost packets.
+The amount of lost packets normally ranges between 100 and 1000 per test run,
+but there are spikes in the range of 10000 lost packets as well, and even
+more in rare cases.
+
+CPU utilization statistics are collected during UDP flows sent between the VMs
+using pktgen as packet generator tool. The average measurements for CPU
+utilization ratio vary between 1% and 2%. The peak of CPU utilization ratio
+appears around 7%.
+
+TC069
+-----
+Between test dates, the average measurements for memory bandwidth vary between
+22.6 and 29.1 GB/s. Within each test run the results vary more, with a minimal
+BW of 20.0 GB/s and maximum of 29.5 GB/s on the totality.
+SLA set to 6 GB/s. The SLA value is used as a reference, it has not been
+defined by OPNFV.
+
+TC070
+-----
+The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
+on different blades are measured when increasing the amount of UDP flows sent
+between the VMs using pktgen as packet generator tool.
+
+Round trip times and packet throughput between VMs can typically be affected by
+the amount of flows set up and result in higher RTT and less PPS throughput.
+
+The RTT results are similar throughout the different test dates and runs at
+approx. 15 ms. Some test runs show an increase with many flows, in the range
+towards 16 to 17 ms. One exception standing out is Feb. 15 where the average
+RTT is stable at approx. 13 ms. The PPS results are not as consistent as the
+RTT results.
+In some test runs when running with less than approx. 10000 flows the PPS
+throughput is normally flatter compared to when running with more flows, after
+which the PPS throughput decreases, by around 20 percent in the worst case.
+For the other test runs there is however no significant change to the PPS
+throughput when the number of flows is increased. In some test runs the PPS
+is also greater with 1000000 flows compared to other test runs where the PPS
+result is less with only 2 flows.
+
+The average PPS throughput in the different runs varies between 414000 and
+452000 PPS. The total amount of packets in each test run is approx. 7500000 to
+8200000 packets. One test run Feb. 15 sticks out with a PPS average of
+558000 and approx. 1100000 packets in total (the same run as mentioned earlier
+for RTT results).
+
+There are lost packets reported in most of the test runs. There is no observed
+correlation between the amount of flows and the amount of lost packets.
+The amount of lost packets normally ranges between 100 and 1000 per test run,
+but there are spikes in the range of 10000 lost packets as well, and even
+more in rare cases.
+
+Memory utilization statistics are collected during UDP flows sent between the
+VMs using pktgen as packet generator tool. The average measurements for memory
+utilization vary between 225MB and 246MB. The peak of memory utilization appears
+around 340MB.
+
+TC071
+-----
+The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
+on different blades are measured when increasing the amount of UDP flows sent
+between the VMs using pktgen as packet generator tool.
+
+Round trip times and packet throughput between VMs can typically be affected by
+the amount of flows set up and result in higher RTT and less PPS throughput.
+
+The RTT results are similar throughout the different test dates and runs at
+approx. 15 ms. Some test runs show an increase with many flows, in the range
+towards 16 to 17 ms. One exception standing out is Feb. 15 where the average
+RTT is stable at approx. 13 ms. The PPS results are not as consistent as the
+RTT results.
+In some test runs when running with less than approx. 10000 flows the PPS
+throughput is normally flatter compared to when running with more flows, after
+which the PPS throughput decreases, by around 20 percent in the worst case.
+For the other test runs there is however no significant change to the PPS
+throughput when the number of flows is increased. In some test runs the PPS
+is also greater with 1000000 flows compared to other test runs where the PPS
+result is less with only 2 flows.
+
+The average PPS throughput in the different runs varies between 414000 and
+452000 PPS. The total amount of packets in each test run is approx. 7500000 to
+8200000 packets. One test run Feb. 15 sticks out with a PPS average of
+558000 and approx. 1100000 packets in total (the same run as mentioned earlier
+for RTT results).
+
+There are lost packets reported in most of the test runs. There is no observed
+correlation between the amount of flows and the amount of lost packets.
+The amount of lost packets normally ranges between 100 and 1000 per test run,
+but there are spikes in the range of 10000 lost packets as well, and even
+more in rare cases.
+
+Cache utilization statistics are collected during UDP flows sent between the
+VMs using pktgen as packet generator tool. The average measurements for cache
+utilization vary between 205MB and 212MB.
+
+TC072
+-----
+The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
+on different blades are measured when increasing the amount of UDP flows sent
+between the VMs using pktgen as packet generator tool.
+
+Round trip times and packet throughput between VMs can typically be affected by
+the amount of flows set up and result in higher RTT and less PPS throughput.
+
+The RTT results are similar throughout the different test dates and runs at
+approx. 15 ms. Some test runs show an increase with many flows, in the range
+towards 16 to 17 ms. One exception standing out is Feb. 15 where the average
+RTT is stable at approx. 13 ms. The PPS results are not as consistent as the
+RTT results.
+In some test runs when running with less than approx. 10000 flows the PPS
+throughput is normally flatter compared to when running with more flows, after
+which the PPS throughput decreases, by around 20 percent in the worst case.
+For the other test runs there is however no significant change to the PPS
+throughput when the number of flows is increased. In some test runs the PPS
+is also greater with 1000000 flows compared to other test runs where the PPS
+result is less with only 2 flows.
+
+The average PPS throughput in the different runs varies between 414000 and
+452000 PPS. The total amount of packets in each test run is approx. 7500000 to
+8200000 packets. One test run Feb. 15 sticks out with a PPS average of
+558000 and approx. 1100000 packets in total (the same run as mentioned earlier
+for RTT results).
+
+There are lost packets reported in most of the test runs. There is no observed
+correlation between the amount of flows and the amount of lost packets.
+The amount of lost packets normally ranges between 100 and 1000 per test run,
+but there are spikes in the range of 10000 lost packets as well, and even
+more in rare cases.
+
+Network utilization statistics are collected during UDP flows sent between the
+VMs using pktgen as packet generator tool. The total number of packets received
+per second averaged around 200 kpps and the total number of packets transmitted
+per second averaged around 600 kpps.
+
+Detailed test results
+---------------------
+The scenario was run on LF POD1_ with:
+
+* Apex
+* OpenStack Mitaka
+* OpenVirtualSwitch 2.5.90
+* OpenDayLight Beryllium
+
+Rationale for decisions
+-----------------------
+Pass
+
+Tests were successfully executed and metrics collected.
+No SLA was verified. To be decided on in next release of OPNFV.
+
+
+Joid
+====
+
+.. _Grafana: http://testresults.opnfv.org/grafana/dashboard/db/yardstick-main
+.. _POD5: https://wiki.opnfv.org/pharos?&#community_test_labs
+
+
+Overview of test results
+------------------------
+
+See Grafana_ for viewing test result metrics for each respective test case. It
+is possible to choose which specific scenarios to look at, and then to zoom in
+on the details of each test scenario run as well.
+
+All of the test case results below are based on 4 scenario test runs, each run
+on the Intel POD5_ between September 11 and 14 in 2016.
+
+TC002
+-----
+The round-trip-time (RTT) between 2 VMs on different blades is measured using
+ping. Most test run measurements average between 1.59 and 1.70 ms.
+Two test runs (with averages of 1.66 and 1.70 ms) reached the same greatest RTT
+spike of 3.06 ms, but only one of them also has the lowest RTT of 1.35 ms. The
+other two runs have no similar spike at all. To be able to draw conclusions more runs
+should be made. SLA set to be 10 ms. The SLA value is used as a reference, it
+has not been defined by OPNFV.
+
+TC005
+-----
+The IO read bandwidth actually refers to the storage throughput, and the
+greatest IO read bandwidth of the four runs is 173.3 MB/s. The IO read
+bandwidth of the four runs looks similar on the four different days, with an
+average between 32.7 and 60.4 MB/s. One of the runs has a minimum BW of 429
+KB/s and another has a maximum BW of 173.3 MB/s. The SLA of read bandwidth is
+set to 400 MB/s, which is used as a reference, and it has not been defined by
+OPNFV.
+
+TC010
+-----
+The tool we use to measure memory read latency is lmbench, which is a series of
+micro benchmarks intended to measure basic operating system and hardware system
+metrics. The memory read latency of the four runs is 1.1 ns on average. The
+variations within each test run are different: some vary over a large range and
+others change only a little. For example, the largest change is on September 14,
+where the memory read latency ranges from 1.12 ns to 1.22 ns. However,
+the results on September 12 change very little and range from 1.14 ns to
+1.17 ns. The SLA is set to 30 ns. The SLA value is used as a reference, it has
+not been defined by OPNFV.
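+
+As a rough illustration of such a measurement (not the Yardstick TC010 job
+definition), lmbench's ``lat_mem_rd`` prints size/latency pairs on stderr; the
+last pairs, for working sets larger than the CPU caches, approximate main
+memory latency:
+
+.. code-block:: python
+
+    import subprocess
+
+    # Walk arrays up to 128 MB with a 128-byte stride; result lines on stderr
+    # look like "<size_MB> <latency_ns>".
+    proc = subprocess.run(["lat_mem_rd", "128", "128"],
+                          capture_output=True, text=True)
+
+    pairs = [line.split() for line in proc.stderr.splitlines()
+             if len(line.split()) == 2]
+    size_mb, latency_ns = pairs[-1]
+    print("latency at %s MB working set: %s ns (SLA reference: 30 ns)"
+          % (size_mb, latency_ns))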
+
+TC011
+-----
+Iperf3 is a tool for evaluating the packet delay variation between 2 VMs on
+different blades. The reported packet delay variations of the four test runs
+differ from each other. The results on September 13 within the date look
+similar and the values are between 0.0087 and 0.0190 ms, which is 0.0126 ms on
+average. However, on the fourth day, the packet delay variation has a wide
+range within the date, going from 0.0032 ms to 0.0121 ms, and has
+the minimum average value. The packet delay variations of the other two test
+runs look relatively similar, at 0.0076 ms and 0.0152 ms on average. The SLA
+value is set to 10 ms. The SLA value is used as a reference, it has not been
+defined by OPNFV.
+
+TC012
+-----
+Lmbench is also used to measure the memory read and write bandwidth, in which
+we use bw_mem to obtain the results. Among the four test runs, the memory
+bandwidth within the second day keeps almost stable, at 11.58 GB/s on
+average. The memory bandwidth of the fourth day looks similar to that of the
+second day, both of which remain stable. The other two test runs vary
+over a wide range, in which the minimum memory bandwidth is 11.22
+GB/s and the maximum bandwidth is 16.65 GB/s with an average bandwidth of about
+12.20 GB/s. Here SLA set to be 15 GB/s. The SLA value is used as a reference,
+it has not been defined by OPNFV.
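+
+For illustration, a single ``bw_mem`` sample could be taken as below (a sketch
+under the assumption that lmbench is installed; not the Yardstick TC012 job):
+
+.. code-block:: python
+
+    import subprocess
+
+    def mem_bandwidth(size="128m", op="rd"):
+        # bw_mem reports "<size_MB> <MB/s>" on stderr; "rd" measures reads,
+        # "wr" writes.
+        proc = subprocess.run(["bw_mem", size, op],
+                              capture_output=True, text=True)
+        size_mb, mb_per_s = proc.stderr.split()
+        return float(mb_per_s) / 1024.0  # rough conversion to GB/s
+
+    print("read bandwidth: %.2f GB/s (SLA reference: 15 GB/s)"
+          % mem_bandwidth())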
+
+TC014
+-----
+Unixbench is used to measure processing speed, that is, instructions per
+second. It can be seen from the dashboard that the processing test results
+vary from scores 3272 to 3444, with only one result per date. The
+overall average score is 3371. No SLA set.
+
+TC037
+-----
+The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
+on different blades are measured when increasing the amount of UDP flows sent
+between the VMs using pktgen as packet generator tool.
+
+Round trip times and packet throughput between VMs can typically be affected by
+the amount of flows set up and result in higher RTT and less PPS throughput.
+
+The mean packet throughput of the four test runs is 119.85, 128.02, 121.40 and
+126.08 kpps, of which the result of the second is the highest. The RTT results
+of all the test runs keep flat at approx. 37 ms. It is obvious that the PPS
+results are not as consistent as the RTT results.
+
+The number of flows in the four test runs is 240 k on average and the PPS
+results look a little uneven, since the largest packet throughput is 184 kpps
+and the minimum throughput is 49 kpps.
+
+There are no packet receive errors in the four runs, but there are still
+lost packets in all the test runs. The RTT values obtained by ping in the four
+runs have a similar average value, that is 38 ms, of which the worst RTT is
+93 ms on Sep. 14th.
+
+The CPU load of the four test runs varies a lot, since the minimum and
+the peak of the CPU load are 0 percent and 51 percent respectively. The best
+result is obtained on Sep. 14th.
+
+TC069
+-----
+With the block size changing from 1 kb to 512 kb, the memory write bandwidth
+tends to become larger first and then smaller within every test run, ranging
+from 22.3 GB/s to 26.8 GB/s and then to 18.5 GB/s on average. Since the
+test id is one, only the INT memory write bandwidth is tested. On
+the whole, when the block size is 8 kb and 16 kb, the memory write bandwidths
+look similar, with a minimal BW of 22.5 GB/s and a peak value of 28.7 GB/s. The
+SLA is set to 7 GB/s. The SLA value is used as a reference, it has not been
+defined by OPNFV.
+
+TC070
+-----
+The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
+on different blades are measured when increasing the amount of UDP flows sent
+between the VMs using pktgen as packet generator tool.
+
+Round trip times and packet throughput between VMs can typically be affected by
+the amount of flows set up and result in higher RTT and less PPS throughput.
+
+The network latency is measured by ping, and the results of the four test runs
+look similar with each other. Within each test run, the maximum RTT can reach
+more than 80 ms and the average RTT is usually approx. 38 ms. On the whole, the
+average RTTs of the four runs keep flat.
+
+Memory utilization is measured by free, which can display amount of free and
+used memory in the system. The largest amount of used memory is 268 MiB on Sep
+14, which also has the largest minimum memory usage. The remaining three test
+runs have similar used memory. On the other hand, the free memory of the
+four runs has the same smallest minimum value, that is about 223 MiB, and the
+maximum free memory of three runs is similar, that is 337 MiB,
+except on Sep. 14th, where the maximum free memory is 254 MiB. On the whole,
+all the test runs have similar average free memory.
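+
+A single sample of the kind summarised above can be taken by parsing ``free -m``
+(a sketch only, not the Yardstick monitoring code):
+
+.. code-block:: python
+
+    import subprocess
+
+    # "free -m" prints a "Mem:" row whose first three numeric columns are
+    # total, used and free memory in MiB.
+    out = subprocess.run(["free", "-m"], capture_output=True, text=True).stdout
+    mem_row = next(line.split() for line in out.splitlines()
+                   if line.startswith("Mem:"))
+
+    total_mib, used_mib, free_mib = (int(v) for v in mem_row[1:4])
+    print("used: %d MiB, free: %d MiB of %d MiB"
+          % (used_mib, free_mib, total_mib))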
+
+Network throughput and packet loss can be measured by pktgen, which is a tool
+in the network for generating traffic loads for network experiments. The mean
+network throughput of the four test runs seem quite different, ranging from
+119.85 kpps to 128.02 kpps. The average number of flows in these tests is
+24000, and each run has a minimum number of flows of 2 and a maximum number
+of flows of 1.001 Mil. At the same time, the corresponding packet throughput
+differ between 49.4k and 193.3k with an average packet throughput of approx.
+125k. On the whole, the PPS results seem consistent. Within each of the four
+test runs, the packet throughput does not grow correspondingly as the number
+of flows becomes larger.
+
+TC071
+-----
+The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
+on different blades are measured when increasing the amount of UDP flows sent
+between the VMs using pktgen as packet generator tool.
+
+Round trip times and packet throughput between VMs can typically be affected by
+the amount of flows set up and result in higher RTT and less PPS throughput.
+
+The network latency is measured by ping, and the results of the four test runs
+look similar with each other. Within each test run, the maximum RTT can reach
+more than 94 ms and the average RTT is usually approx. 35 ms. On the whole, the
+average RTTs of the four runs keep flat.
+
+Cache utilization is measured by cachestat, which can display size of cache and
+buffer in the system. Cache utilization statistics are collected during UDP
+flows sent between the VMs using pktgen as packet generator tool. The largest
+cache size is 212 MiB in the four runs, and the smallest cache size is 75 MiB.
+On the whole, the average cache size of the four runs is approx. 208 MiB.
+Meanwhile, the trends of the buffer size look similar to each other.
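+
+cachestat itself hooks kernel counters; a much simpler way to watch the cache
+and buffer sizes quoted above is to sample ``/proc/meminfo``, as in this
+illustrative sketch (not the tool actually used by the test):
+
+.. code-block:: python
+
+    def meminfo_mib(*fields):
+        # /proc/meminfo lines look like "Cached:   123456 kB".
+        values = {}
+        with open("/proc/meminfo") as f:
+            for line in f:
+                key, rest = line.split(":", 1)
+                if key in fields:
+                    values[key] = int(rest.split()[0]) // 1024  # kB -> MiB
+        return values
+
+    sample = meminfo_mib("Cached", "Buffers")
+    print("cache: %d MiB, buffers: %d MiB"
+          % (sample["Cached"], sample["Buffers"]))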
+
+Packet throughput can be measured by pktgen, which is a tool in the network for
+generating traffic loads for network experiments. The mean packet throughput of
+the four test runs seem quite different, ranging from 119.85 kpps to 128.02
+kpps. The average number of flows in these tests is 239.7k, and each run has a
+minimum number of flows of 2 and a maximum number of flows of 1.001 Mil. At the
+same time, the corresponding packet throughput differ between 49.4k and 193.3k
+with an average packet throughput of approx. 125k. On the whole, the PPS results
+seem consistent. Within each of the four test runs, the packet throughput does
+not grow correspondingly as the number of flows becomes larger.
+
+TC072
+-----
+The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
+on different blades are measured when increasing the amount of UDP flows sent
+between the VMs using pktgen as packet generator tool.
+
+Round trip times and packet throughput between VMs can typically be affected by
+the amount of flows set up and result in higher RTT and less PPS throughput.
+
+The RTT results are similar throughout the different test dates and runs at
+approx. 32 ms. The PPS results are not as consistent as the RTT results.
+
+Network utilization is measured by sar, that is system activity reporter, which
+can display the average statistics for the time since the system was started.
+Network utilization statistics are collected during UDP flows sent between the
+VMs using pktgen as packet generator tool. The largest total number of packets
+transmitted per second differs between runs, in which the smallest number of
+packets transmitted per second is 6 pps on Sep. 12th and the largest is
+210.8 kpps. Meanwhile, the largest total number of packets received per second
+also differs between runs, in which the smallest number of packets received per
+second is 2 pps on Sep. 13th and the largest is 250.2 kpps.
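+
+The per-second packet rates that sar reports can be approximated by sampling
+``/proc/net/dev`` twice, as in this sketch (illustrative only; the interface
+name is a placeholder and this is not how Yardstick itself collects the data):
+
+.. code-block:: python
+
+    import time
+
+    def packet_counters(iface="eth0"):  # placeholder interface name
+        # /proc/net/dev: per interface, 8 RX fields then 8 TX fields;
+        # field 1 is RX packets and field 9 is TX packets.
+        with open("/proc/net/dev") as f:
+            for line in f:
+                if line.strip().startswith(iface + ":"):
+                    fields = line.split(":", 1)[1].split()
+                    return int(fields[1]), int(fields[9])
+        raise ValueError("interface %s not found" % iface)
+
+    rx0, tx0 = packet_counters()
+    time.sleep(1)
+    rx1, tx1 = packet_counters()
+    print("rx: %d pps, tx: %d pps" % (rx1 - rx0, tx1 - tx0))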
+
+In some test runs when running with less than approx. 90000 flows the PPS
+throughput is normally flatter compared to when running with more flows, after
+which the PPS throughput decreases. For the other test runs there is however no
+significant change to the PPS throughput when the number of flows are
+increased. In some test runs the PPS is also greater with 1000000 flows
+compared to other test runs where the PPS result is less with only 2 flows.
+
+There are lost packets reported in most of the test runs. There is no observed
+correlation between the amount of flows and the amount of lost packets.
+The lost amount of packets normally differs a lot per test run.
+
+Detailed test results
+---------------------
+The scenario was run on Intel POD5_ with:
+
+* Joid
+* OpenStack Mitaka
+* OpenVirtualSwitch 2.5.90
+* OpenDayLight Beryllium
+
+Rationale for decisions
+-----------------------
+Pass
+
+Conclusions and recommendations
+-------------------------------
+Tests were successfully executed and metrics collected.
+No SLA was verified. To be decided on in next release of OPNFV.
+
+
diff --git a/docs/release/results/os-nosdn-nofeature-noha.rst b/docs/release/results/os-nosdn-nofeature-noha.rst
new file mode 100644
index 000000000..8b7c184bb
--- /dev/null
+++ b/docs/release/results/os-nosdn-nofeature-noha.rst
@@ -0,0 +1,259 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+
+
+========================================
+Test Results for os-nosdn-nofeature-noha
+========================================
+
+.. toctree::
+ :maxdepth: 2
+
+
+Joid
+=====
+
+.. _Grafana: http://testresults.opnfv.org/grafana/dashboard/db/yardstick-main
+.. _POD5: https://wiki.opnfv.org/pharos?&#community_test_labs
+
+Overview of test results
+------------------------
+
+See Grafana_ for viewing test result metrics for each respective test case. It
+is possible to choose which specific scenarios to look at, and then to zoom in
+on the details of each test scenario run as well.
+
+All of the test case results below are based on 4 scenario test runs, each run
+on the Intel POD5_ between September 12 and 15 in 2016.
+
+TC002
+-----
+The round-trip-time (RTT) between 2 VMs on different blades is measured using
+ping. Most test run measurements average between 1.50 and 1.68 ms.
+Only one test run has reached the greatest RTT spike of 2.92 ms, and that run
+also has the smallest RTT of 1.06 ms. The other three runs have no similar
+spike at all; their minimum and average RTTs are approx. 1.50 ms and 1.68 ms.
+SLA set to
+be 10 ms. The SLA value is used as a reference, it has not been defined by
+OPNFV.
+
+TC005
+-----
+The IO read bandwidth actually refers to the storage throughput, which is
+measured by fio and the greatest IO read bandwidth of the four runs is 177.5
+MB/s. The IO read bandwidth of the four runs looks similar on the four different
+days, with an average between 46.7 and 62.5 MB/s. One of the runs has a minimum
+BW of 680 KB/s and another has a maximum BW of 177.5 MB/s. The SLA of read
+bandwidth is set to 400 MB/s, which is used as a reference, and it has not
+been defined by OPNFV.
+
+The results of storage IOPS for the four runs look similar to each other. The
+test runs all show approx. 1.55 K IO read operations per second, with a minimum
+value of less than 60 operations per second.
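+
+As an illustration of an fio read measurement of this kind (a sketch with
+made-up job parameters, not the Yardstick TC005 job file):
+
+.. code-block:: python
+
+    import json
+    import subprocess
+
+    # Sequential-read job against a scratch file; --output-format=json exposes
+    # the aggregated bandwidth (KiB/s) and IOPS.
+    out = subprocess.run(
+        ["fio", "--name=readtest", "--rw=read", "--bs=4k", "--size=256M",
+         "--filename=/tmp/fio.test", "--output-format=json"],
+        capture_output=True, text=True, check=True).stdout
+
+    read = json.loads(out)["jobs"][0]["read"]
+    print("read BW: %.1f MB/s, IOPS: %.0f"
+          % (read["bw"] / 1024.0, read["iops"]))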
+
+TC010
+-----
+The tool we use to measure memory read latency is lmbench, which is a series of
+micro benchmarks intended to measure basic operating system and hardware system
+metrics. The memory read latency of the four runs is between 1.134 ns and 1.227
+ns on average. The variations within each test run are quite different: some
+vary over a large range and others change only a little. For example, the
+largest change is on September 15, where the memory read latency ranges
+from 1.116 ns to 1.393 ns. However, the results on September 12 change very
+little, mainly keeping flat and ranging from 1.124 ns to 1.55 ns. The SLA is set
+to be 30 ns. The SLA value is used as a reference, it has not been defined by
+OPNFV.
+
+TC011
+-----
+Iperf3 is a tool for evaluating the packet delay variation between 2 VMs on
+different blades. The reported packet delay variations of the four test runs
+differ from each other. The results on September 13 within the date look
+similar and the values are between 0.0213 and 0.0225 ms, which is 0.0217 ms on
+average. However, on the third day, the packet delay variation has a wide
+range within the date, going from 0.008 ms to 0.0225 ms, and has
+the minimum value. On Sep. 12, the packet delay is quite long, for the value is
+between 0.0236 and 0.0287 ms and it also has the maximum packet delay of 0.0287
+ms. The packet delay of the last test run is 0.0151 ms on average. The SLA
+value sets to be 10 ms. The SLA value is used as a reference, it has not been
+defined by OPNFV.
+
+TC012
+-----
+Lmbench is also used to measure the memory read and write bandwidth, in which
+we use bw_mem to obtain the results. Among the four test runs, the memory
+bandwidth of three test runs keeps almost stable within each run, at
+11.65, 11.57 and 11.64 GB/s on average. However, the memory read and write
+bandwidth on Sep. 14 has a large range, for it ranges from 11.36 GB/s to 16.68
+GB/s. Here SLA set to be 15 GB/s. The SLA value is used as a reference, it has
+not been defined by OPNFV.
+
+TC014
+-----
+The Unixbench is used to evaluate the IaaS processing speed with regards to
+score of single cpu running and parallel running. It can be seen from the
+dashboard that the processing test results vary from scores 3222 to 3585, with
+only one result per date. No SLA set.
+
+TC037
+-----
+The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
+on different blades are measured when increasing the amount of UDP flows sent
+between the VMs using pktgen as packet generator tool.
+
+Round trip times and packet throughput between VMs can typically be affected by
+the amount of flows set up and result in higher RTT and less PPS throughput.
+
+The mean packet throughput of the four test runs is 124.8, 160.1, 113.8 and
+137.3 kpps, of which the result of the second is the highest. The RTT results
+of all the test runs keep flat at approx. 37 ms. It is obvious that the PPS
+results are not as consistent as the RTT results.
+
+The number of flows in the four test runs is 240 k on average and the PPS
+results look a little uneven, since the largest packet throughput is 243.1 kpps
+and the minimum throughput is 37.6 kpps.
+
+There are no packet receive errors in the four runs, but there are still
+lost packets in all the test runs. The RTT values obtained by ping in the four
+runs have a similar average value, that is between 32 ms and 41 ms, of which
+the worst RTT is 155 ms on Sep. 14th.
+
+CPU load is measured by mpstat, and the CPU load of the four test runs seems
+fairly similar, since the minimum and the peak of the CPU load are between 0
+percent and 9 percent respectively. The best result is obtained on Sep.
+15th, with a CPU load of nine percent.
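+
+The CPU load figures that mpstat reports can be approximated by sampling
+``/proc/stat`` twice, as sketched below (illustrative only, not the Yardstick
+monitoring code):
+
+.. code-block:: python
+
+    import time
+
+    def cpu_sample():
+        # First line of /proc/stat: "cpu  user nice system idle iowait irq ...".
+        with open("/proc/stat") as f:
+            fields = [int(v) for v in f.readline().split()[1:]]
+        idle = fields[3] + fields[4]  # idle + iowait
+        return idle, sum(fields)
+
+    idle0, total0 = cpu_sample()
+    time.sleep(1)
+    idle1, total1 = cpu_sample()
+
+    busy_pct = 100.0 * (1 - (idle1 - idle0) / float(total1 - total0))
+    print("CPU load over the interval: %.1f%%" % busy_pct)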
+
+TC069
+-----
+With the block size changing from 1 kb to 512 kb, the memory write bandwidth
+tends to become larger first and then smaller within every test run, ranging
+from 22.4 GB/s to 26.5 GB/s and then to 18.6 GB/s on average. Since the
+test id is one, only the INT memory write bandwidth is tested. On
+the whole, when the block size is 8 kb and 16 kb, the memory write bandwidths
+look similar, with a minimal BW of 22.5 GB/s and a peak value of 28.7 GB/s.
+With the block size becoming larger, the memory write bandwidth then tends to
+decrease. The SLA is set to 7 GB/s. The SLA value is used as a reference, it has
+not been defined by OPNFV.
+
+TC070
+-----
+The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
+on different blades are measured when increasing the amount of UDP flows sent
+between the VMs using pktgen as packet generator tool.
+
+Round trip times and packet throughput between VMs can typically be affected by
+the amount of flows set up and result in higher RTT and less PPS throughput.
+
+The network latency is measured by ping, and the results of three test runs look
+similar to each other. Within these test runs, the maximum RTT can reach
+95 ms and the average RTT is usually approx. 36 ms. The network latency tested
+on Sep. 14 shows that it has a peak latency of 155 ms. But on the whole, the
+average RTTs of the four runs keep flat.
+
+Memory utilization is measured by free, which can display amount of free and
+used memory in the system. The largest amount of used memory is 270 MiB on Sep
+13, which also has the smallest minimum memory utilization. The remaining
+three test runs have similar used memory, with an average memory usage of
+264 MiB. On the other hand, the free memory of the four runs has the same
+smallest minimum value, that is about 223 MiB, and the maximum free memory of
+three runs is similar, that is 226 MiB, except on Sep. 13th,
+where the maximum free memory is 273 MiB. On the whole, all the test runs have
+similar average free memory.
+
+Network throughput and packet loss can be measured by pktgen, which is a tool
+in the network for generating traffic loads for network experiments. The mean
+network throughput of the four test runs seem quite different, ranging from
+119.85 kpps to 128.02 kpps. The average number of flows in these tests is
+240000, and each run has a minimum number of flows of 2 and a maximum number
+of flows of 1.001 Mil. At the same time, the corresponding packet throughput
+differ between 38k and 243k with an average packet throughput of approx. 134k.
+On the whole, the PPS results seem consistent. Within each of the four test
+runs, the packet throughput does not grow correspondingly as the number of
+flows becomes larger.
+
+TC071
+-----
+The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
+on different blades are measured when increasing the amount of UDP flows sent
+between the VMs using pktgen as packet generator tool.
+
+Round trip times and packet throughput between VMs can typically be affected by
+the amount of flows set up and result in higher RTT and less PPS throughput.
+
+The network latency is measured by ping, and the results of the four test runs
+look similar with each other. Within each test run, the maximum RTT can reach
+79 ms and the average RTT is usually approx. 35 ms. On the whole, the average
+RTTs of the four runs keep flat.
+
+Cache utilization is measured by cachestat, which can display size of cache and
+buffer in the system. Cache utilization statistics are collected during UDP
+flows sent between the VMs using pktgen as packet generator tool. The largest
+cache size is 214 MiB in the four runs, and the smallest cache size is 100 MiB.
+On the whole, the average cache size of the four runs is approx. 210 MiB.
+Meanwhile, the trends of the buffer size look similar to each other. On the
+other hand, the mean buffer size of the four runs keep flat, since they have a
+minimum value of approx. 7 MiB and a maximum value of 8 MiB, with an average
+value of about 8 MiB.
+
+Packet throughput can be measured by pktgen, which is a tool in the network for
+generating traffic loads for network experiments. The mean packet throughput of
+the four test runs seem quite different, ranging from 113.8 kpps to 124.8 kpps.
+The average number of flows in these tests is 240k, and each run has a minimum
+number of flows of 2 and a maximum number of flows of 1.001 Mil. At the same
+time, the corresponding packet throughput differ between 47.6k and 243.1k with
+an average packet throughput between 113.8k and 160.1k. Within each of the four
+test runs, the packet throughput does not grow correspondingly as the number of
+flows becomes larger.
+
+TC072
+-----
+The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
+on different blades are measured when increasing the amount of UDP flows sent
+between the VMs using pktgen as packet generator tool.
+
+Round trip times and packet throughput between VMs can typically be affected by
+the amount of flows set up and result in higher RTT and less PPS throughput.
+
+The RTT results are similar throughout the different test dates and runs
+between 0 ms and 79 ms with an average latency of approx. 35 ms. The PPS
+results are not as consistent as the RTT results, for the mean packet
+throughput of the four runs differ from 113.8 kpps to 124.8 kpps.
+
+Network utilization is measured by sar, that is system activity reporter, which
+can display the average statistics for the time since the system was started.
+Network utilization statistics are collected during UDP flows sent between the
+VMs using pktgen as packet generator tool. The largest total number of packets
+transmitted per second looks similar in the first three runs, with a minimum
+number of 10 pps and a maximum number of 97 kpps, except for the run on Sep.
+15th, in which the number of packets transmitted per second is 10 pps.
+Meanwhile, the largest total number of packets received per second differs
+between runs, in which the smallest number of packets received per second is
+1 pps and the largest is 276 kpps.
+
+In some test runs when running with less than approx. 90000 flows the PPS
+throughput is normally flatter compared to when running with more flows, after
+which the PPS throughput decreases. For the other test runs there is however no
+significant change to the PPS throughput when the number of flows are
+increased. In some test runs the PPS is also greater with 1000000 flows
+compared to other test runs where the PPS result is less with only 2 flows.
+
+There are lost packets reported in most of the test runs. There is no observed
+correlation between the amount of flows and the amount of lost packets.
+The lost amount of packets normally differs a lot per test run.
+
+Detailed test results
+---------------------
+The scenario was run on Intel POD5_ with:
+
+* Joid
+* OpenStack Mitaka
+* OpenVirtualSwitch 2.5.90
+* OpenDayLight Beryllium
+
+Rationale for decisions
+-----------------------
+Pass
+
+Conclusions and recommendations
+-------------------------------
+Tests were successfully executed and metrics collected.
+No SLA was verified. To be decided on in next release of OPNFV.
diff --git a/docs/release/results/os-odl_l2-bgpvpn-ha.rst b/docs/release/results/os-odl_l2-bgpvpn-ha.rst
new file mode 100644
index 000000000..2bd6dc35d
--- /dev/null
+++ b/docs/release/results/os-odl_l2-bgpvpn-ha.rst
@@ -0,0 +1,53 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+
+
+====================================
+Test Results for os-odl_l2-bgpvpn-ha
+====================================
+
+.. toctree::
+ :maxdepth: 2
+
+
+fuel
+====
+
+.. _Grafana: http://testresults.opnfv.org/grafana/dashboard/db/yardstick-main
+.. _POD2: https://wiki.opnfv.org/pharos?&#community_test_labs
+
+Overview of test results
+------------------------
+
+See Grafana_ for viewing test result metrics for each respective test case. It
+is possible to choose which specific scenarios to look at, and then to zoom in
+on the details of each test scenario run as well.
+
+All of the test case results below are based on 4 scenario test runs, each run
+on the Ericsson POD2_ between September 7 and 11 in 2016.
+
+TC043
+-----
+The round-trip-time (RTT) between 2 nodes is measured using
+ping. Most test run measurements average between 0.21 and 0.28 ms.
+A few runs start with an RTT spike of 0.32 - 0.35 ms (possibly caused by
+normal ARP handling). To be able to draw conclusions more runs should be made.
+SLA set to 10 ms. The SLA value is used as a reference, it has not been defined
+by OPNFV.
+
+Detailed test results
+---------------------
+The scenario was run on Ericsson POD2_ with:
+
+* Fuel 9.0
+* OpenStack Mitaka
+* OpenVirtualSwitch 2.5.90
+* OpenDayLight Beryllium
+
+Rationale for decisions
+-----------------------
+Pass
+
+Tests were successfully executed and metrics collected.
+No SLA was verified. To be decided on in next release of OPNFV.
+
diff --git a/docs/release/results/os-odl_l2-nofeature-ha.rst b/docs/release/results/os-odl_l2-nofeature-ha.rst
new file mode 100644
index 000000000..ac0c5bb59
--- /dev/null
+++ b/docs/release/results/os-odl_l2-nofeature-ha.rst
@@ -0,0 +1,743 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+
+
+=======================================
+Test Results for os-odl_l2-nofeature-ha
+=======================================
+
+.. toctree::
+ :maxdepth: 2
+
+
+apex
+====
+
+.. _Grafana: http://testresults.opnfv.org/grafana/dashboard/db/yardstick-main
+.. _POD1: https://wiki.opnfv.org/pharos?&#community_test_labs
+
+Overview of test results
+------------------------
+
+See Grafana_ for viewing test result metrics for each respective test case. It
+is possible to choose which specific scenarios to look at, and then to zoom in
+on the details of each test scenario run as well.
+
+All of the test case results below are based on 4 scenario test runs, each run
+on the LF POD1_ between September 14 and 17 in 2016.
+
+TC002
+-----
+The round-trip-time (RTT) between 2 VMs on different blades is measured using
+ping. Most test run measurements average between 0.49 ms and 0.60 ms.
+Only one test run has reached the greatest RTT spike of 0.93 ms. Meanwhile, the
+smallest network latency is 0.33 ms, which is obtained on Sep. 14th.
+SLA set to be 10 ms. The SLA value is used as a reference, it has not been
+defined by OPNFV.
+
+TC005
+-----
+The IO read bandwidth actually refers to the storage throughput, which is
+measured by fio; the greatest IO read bandwidth of the four runs is 416 MB/s.
+The IO read bandwidth of all four runs looks similar, with an average between
+128 and 131 MB/s. One of the runs has a minimum BW of 497 KB/s. The SLA for
+read bandwidth is set to 400 MB/s, which is used as a reference; it has not
+been defined by OPNFV.
+
+The results of storage IOPS for the four runs look similar to each other. The
+IO read operations per second of the four test runs have an average value of
+about 1k per second, while the minimum result is only 45 operations per second.
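+
+For reference only: assuming fio is run with ``--output-format=json`` and the
+usual ``jobs``/``read`` layout, the read bandwidth and IOPS figures discussed
+above can be pulled out as in this sketch (the file name is a placeholder)::
+
+    import json
+
+    def read_bw_and_iops(json_path="fio_output.json"):
+        """Return (read bandwidth in MB/s, read IOPS) from a fio JSON result."""
+        with open(json_path) as f:
+            result = json.load(f)
+        job = result["jobs"][0]        # first (and here only) fio job
+        bw_kib_s = job["read"]["bw"]   # fio reports read bandwidth in KiB/s
+        return bw_kib_s / 1024.0, job["read"]["iops"]
+
+    if __name__ == "__main__":
+        bw, iops = read_bw_and_iops()
+        print("IO read bandwidth: %.1f MB/s, IOPS: %.0f" % (bw, iops))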
+
+TC010
+-----
+The tool we use to measure memory read latency is lmbench, which is a series of
+micro benchmarks intended to measure basic operating system and hardware system
+metrics. The memory read latency of the four runs is between 1.0859 ns and
+1.0869 ns on average. The variations within each test run are quite different:
+some vary over a large range and others change very little. For example, the
+largest variation is on September 14th, where the memory read latency ranges
+from 1.086 ns to 1.091 ns.
+The SLA is set to 30 ns. The SLA value is used as a reference; it has not been
+defined by OPNFV.
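+
+As a hedged illustration, lmbench's ``lat_mem_rd`` prints pairs of array size
+(MB) and read latency (ns); a captured output file could be reduced to the
+per-run average latency roughly as follows (the file name and the way the
+output was captured are assumptions)::
+
+    def average_latency_ns(output_file="lat_mem_rd.txt"):
+        """Average the latency column of captured lat_mem_rd output."""
+        latencies = []
+        with open(output_file) as f:
+            for line in f:
+                parts = line.split()
+                if len(parts) != 2:
+                    continue          # skip headers and blank lines
+                try:
+                    latencies.append(float(parts[1]))
+                except ValueError:
+                    continue          # not a data line
+        return sum(latencies) / len(latencies) if latencies else None
+
+    if __name__ == "__main__":
+        print("average memory read latency: %s ns" % average_latency_ns())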
+
+TC011
+-----
+Packet delay variation between 2 VMs on different blades is measured using
+Iperf3. In the first two test runs the reported packet delay variation varies
+between 0.0037 and 0.0740 ms, with an average delay variation between 0.0096 ms
+and 0.0321 ms. On the second date the delay variation varies between 0.0063 and
+0.0096 ms, with an average delay variation of 0.0124 - 0.0141 ms.
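+
+A minimal sketch of how such a packet delay variation figure can be obtained
+with iperf3 in UDP mode, assuming JSON output and the ``end/sum/jitter_ms``
+field it normally contains (the server address is a placeholder)::
+
+    import json
+    import subprocess
+
+    def udp_jitter_ms(server="10.0.0.2", seconds=20):
+        """Run an iperf3 UDP test and return the reported jitter in ms."""
+        out = subprocess.run(
+            ["iperf3", "-c", server, "-u", "-t", str(seconds), "-J"],
+            capture_output=True, text=True, check=True).stdout
+        report = json.loads(out)
+        return report["end"]["sum"]["jitter_ms"]
+
+    if __name__ == "__main__":
+        print("packet delay variation: %s ms" % udp_jitter_ms())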
+
+TC012
+-----
+Lmbench is also used to measure the memory read and write bandwidth, in which
+we use bw_mem to obtain the results. Among the four test runs, the memory
+bandwidth trends of three runs look similar, all within a narrow range, and
+the average result is 19.88 GB/s. Here the SLA is set to 15 GB/s. The SLA
+value is used as a reference; it has not been defined by OPNFV.
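+
+For illustration, ``bw_mem`` takes a buffer size and an operation (``rd``,
+``wr``, ...) and prints the buffer size and the measured bandwidth in MB/s; a
+small wrapper could look like the sketch below (output handling is simplified
+and assumed, since lmbench tools usually write results to stderr)::
+
+    import subprocess
+
+    def memory_bandwidth_gb_s(size="128m", operation="rd"):
+        """Run lmbench bw_mem and return the measured bandwidth in GB/s."""
+        proc = subprocess.run(["bw_mem", size, operation],
+                              capture_output=True, text=True, check=True)
+        # Expected result: "<size_MB> <bandwidth_MB/s>" on stderr or stdout.
+        fields = (proc.stderr or proc.stdout).split()
+        return float(fields[1]) / 1000.0   # MB/s -> GB/s (approximate)
+
+    if __name__ == "__main__":
+        print("memory read bandwidth: %.2f GB/s" % memory_bandwidth_gb_s())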
+
+TC014
+-----
+Unixbench is used to evaluate the IaaS processing speed with regard to the
+scores of single CPU running and parallel running. It can be seen from the
+dashboard that the processing test results vary from scores 3754k to 3831k,
+with only one result per date. No SLA is set.
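+
+As a rough sketch (assuming the score line of a standard UnixBench ``Run``
+log), the index score referred to above could be scraped like this; the log
+file name is a placeholder::
+
+    import re
+
+    def unixbench_score(report_file="unixbench_run.log"):
+        """Return the System Benchmarks Index Score from a captured run log."""
+        pattern = re.compile(r"System Benchmarks Index Score\s+([\d.]+)")
+        with open(report_file) as f:
+            for line in f:
+                match = pattern.search(line)
+                if match:
+                    return float(match.group(1))
+        return None
+
+    if __name__ == "__main__":
+        print("UnixBench index score:", unixbench_score())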
+
+TC037
+-----
+The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
+on different blades are measured when increasing the amount of UDP flows sent
+between the VMs using pktgen as packet generator tool.
+
+Round trip times and packet throughput between VMs can typically be affected by
+the amount of flows set up and result in higher RTT and less PPS throughput.
+
+The mean packet throughput of the four test runs is between 307.3 kpps and
+447.1 kpps, of which the result of the third run is the highest. The RTT
+results of all the test runs keep flat at approx. 15 ms. It is obvious that the
+PPS results are not as consistent as the RTT results.
+
+The number of flows of the four test runs is 240 k on average, and the PPS
+results fluctuate somewhat, since the largest packet throughput is 418.1 kpps
+and the minimum throughput is 326.5 kpps.
+
+There are no packet receive errors in the four runs, but there are still
+lost packets in all the test runs. The RTT values obtained by ping in the four
+runs have a similar average value, approx. 15 ms.
+
+CPU load is measured by mpstat, and the CPU load of the four test runs looks
+similar, with the minimum and the peak of the CPU load between 0 percent and
+nine percent respectively. The highest value is obtained on Sep. 1, with a CPU
+load of nine percent. On the whole, the CPU load is very low, since the
+average value is quite small.
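+
+As a rough sketch of how such a CPU load figure can be derived, the idle
+percentage reported by mpstat can be inverted; the parsing below assumes the
+usual column layout where ``%idle`` is the last field of the ``all`` line::
+
+    import subprocess
+
+    def cpu_load_percent(interval=1, count=1):
+        """Sample mpstat and return overall CPU load as 100 - %idle."""
+        out = subprocess.run(["mpstat", str(interval), str(count)],
+                             capture_output=True, text=True, check=True).stdout
+        for line in out.splitlines():
+            fields = line.split()
+            # The all-CPU summary line contains "all" and ends with %idle.
+            if "all" in fields:
+                return 100.0 - float(fields[-1].replace(",", "."))
+        return None
+
+    if __name__ == "__main__":
+        print("CPU load: %.1f%%" % cpu_load_percent())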
+
+TC069
+-----
+With the block size changing from 1 kb to 512 kb, the memory write bandwidth
+tends to become larger first and then smaller within every test run, ranging
+from 28.2 GB/s to 29.5 GB/s and then to 29.2 GB/s on average. Since the test
+id is one, only the INT memory write bandwidth is tested. On the whole, when
+the block size is 2 kb or 16 kb, the memory write bandwidth looks similar,
+with a minimal BW of 25.8 GB/s and a peak value of 28.3 GB/s. Then, with the
+block size becoming larger, the memory write bandwidth tends to decrease. The
+SLA is set to 7 GB/s. The SLA value is used as a reference; it has not been
+defined by OPNFV.
+
+TC070
+-----
+The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
+on different blades are measured when increasing the amount of UDP flows sent
+between the VMs using pktgen as packet generator tool.
+
+Round trip times and packet throughput between VMs can typically be affected by
+the amount of flows set up and result in higher RTT and less PPS throughput.
+
+The network latency is measured by ping, and the results of the four test runs
+look similar to each other. Within these test runs, the maximum RTT can reach
+39 ms and the average RTT is usually approx. 15 ms. The network latency tested
+on Sep. 1 and Sep. 8 has a peak latency of 39 ms. On the whole, the average
+RTTs of the four runs keep flat and the network latency is relatively short.
+
+Memory utilization is measured by free, which can display the amount of free
+and used memory in the system. The largest amount of used memory is 267 MiB
+for the four runs. In general, the four test runs have a very large memory
+utilization, which can reach 257 MiB on average. On the other hand, the mean
+free memory of the four test runs follows a similar trend to that of the mean
+used memory. In general, the mean free memory changes from 233 MiB to 241 MiB.
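+
+A minimal way to sample used and free memory values like the ones above
+(roughly equivalent to what free reports; free itself also accounts for
+buffers and cache) is to read ``/proc/meminfo`` directly, as in this sketch::
+
+    def used_and_free_mib():
+        """Return (used_MiB, free_MiB) computed from /proc/meminfo."""
+        info = {}
+        with open("/proc/meminfo") as f:
+            for line in f:
+                key, value = line.split(":", 1)
+                info[key] = int(value.split()[0])   # values are in kiB
+        free = info["MemFree"]
+        used = info["MemTotal"] - free
+        return used / 1024.0, free / 1024.0
+
+    if __name__ == "__main__":
+        used, free = used_and_free_mib()
+        print("used: %.0f MiB, free: %.0f MiB" % (used, free))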
+
+Packet throughput and packet loss can be measured by pktgen, which is a tool
+for generating traffic loads for network experiments. The mean packet
+throughput of the four test runs seems quite different, ranging from
+305.3 kpps to 447.1 kpps. The average number of flows in these tests is
+240000, and each run has a minimum number of flows of 2 and a maximum number
+of flows of 1.001 Mil. At the same time, the corresponding average packet
+throughput is between 354.4 kpps and 381.8 kpps. In summary, the PPS results
+seem consistent. Within each of the four test runs, the packet throughput
+does not grow as the number of flows becomes larger.
+
+TC071
+-----
+The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
+on different blades are measured when increasing the amount of UDP flows sent
+between the VMs using pktgen as packet generator tool.
+
+Round trip times and packet throughput between VMs can typically be affected by
+the amount of flows set up and result in higher RTT and less PPS throughput.
+
+The network latency is measured by ping, and the results of the four test runs
+look similar to each other. Within each test run, the maximum RTT is only 42
+ms and the average RTT is usually approx. 15 ms. On the whole, the average
+RTTs of the four runs keep stable and the network latency is relatively small.
+
+Cache utilization is measured by cachestat, which can display the size of the
+cache and buffers in the system. Cache utilization statistics are collected
+during UDP flows sent between the VMs using pktgen as the packet generator
+tool. The largest cache size is 212 MiB, which is the same for the four runs,
+and the smallest cache size is 75 MiB. On the whole, the average cache size of
+the four runs looks the same and is between 197 MiB and 211 MiB. Meanwhile,
+the trend of the buffer size keeps flat, with a minimum value of 7 MiB and a
+maximum value of 8 MiB, and an average value of about 7.9 MiB.
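+
+cachestat itself is a perf-tools script; as a rough, hedged equivalent of the
+cache and buffer sizes reported above (not the actual collection code), the
+corresponding counters can be read from ``/proc/meminfo``::
+
+    def cache_and_buffers_mib():
+        """Return (cached_MiB, buffers_MiB) read from /proc/meminfo."""
+        values = {}
+        with open("/proc/meminfo") as f:
+            for line in f:
+                key, rest = line.split(":", 1)
+                values[key] = int(rest.split()[0])   # values are in kiB
+        return values["Cached"] / 1024.0, values["Buffers"] / 1024.0
+
+    if __name__ == "__main__":
+        cached, buffers = cache_and_buffers_mib()
+        print("cache: %.0f MiB, buffers: %.0f MiB" % (cached, buffers))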
+
+Packet throughput can be measured by pktgen, which is a tool for generating
+traffic loads for network experiments. The mean packet throughput of the four
+test runs varies from 354.4 kpps to 381.8 kpps. The average number of flows in
+these tests is 240k, and each run has a minimum number of flows of 2 and a
+maximum number of flows of 1.001 Mil. At the same time, the corresponding
+packet throughput varies between 305.3 kpps and 447.1 kpps. Within each of the
+four test runs, the packet throughput does not grow as the number of flows
+becomes larger.
+
+TC072
+-----
+The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
+on different blades are measured when increasing the amount of UDP flows sent
+between the VMs using pktgen as packet generator tool.
+
+Round trip times and packet throughput between VMs can typically be affected by
+the amount of flows set up and result in higher RTT and less PPS throughput.
+
+The RTT results are similar throughout the different test dates and runs,
+between 0 ms and 42 ms with an average latency of less than 15 ms. The PPS
+results are not as consistent as the RTT results, as the mean packet
+throughput of the four runs varies from 354.4 kpps to 381.8 kpps.
+
+Network utilization is measured by sar, the system activity reporter, which
+can display the average statistics for the time since the system was started.
+Network utilization statistics are collected during UDP flows sent between the
+VMs using pktgen as the packet generator tool. The total number of packets
+transmitted per second looks similar for three test runs, whose values vary
+widely from 10 pps to 501 kpps, while the remaining test run keeps stable with
+an average of 10 packets transmitted per second. The total number of packets
+received per second of the four runs looks similar, with a wide range of
+2 pps to 815 kpps.
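+
+The per-second transmit and receive packet counts quoted here come from
+``sar -n DEV``; a sketch of turning that output into total rx/tx pps, summing
+over interfaces and assuming the standard column order (IFACE, rxpck/s,
+txpck/s, ...), could look like this::
+
+    import subprocess
+
+    def total_rx_tx_pps(interval=1, count=1):
+        """Sample 'sar -n DEV' and return (rx pps, tx pps) over all interfaces."""
+        out = subprocess.run(["sar", "-n", "DEV", str(interval), str(count)],
+                             capture_output=True, text=True, check=True).stdout
+        rx = tx = 0.0
+        for line in out.splitlines():
+            fields = line.split()
+            # Average lines look like: "Average: eth0 <rxpck/s> <txpck/s> ..."
+            if len(fields) >= 4 and fields[0].startswith("Average") \
+                    and fields[1] != "IFACE":
+                rx += float(fields[2].replace(",", "."))
+                tx += float(fields[3].replace(",", "."))
+        return rx, tx
+
+    if __name__ == "__main__":
+        rx_pps, tx_pps = total_rx_tx_pps()
+        print("rx: %.0f pps, tx: %.0f pps" % (rx_pps, tx_pps))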
+
+In some test runs when running with less than approx. 251000 flows the PPS
+throughput is normally flatter compared to when running with more flows, after
+which the PPS throughput decreases. For the other test runs there is however no
+significant change to the PPS throughput when the number of flows is
+increased. In some test runs the PPS is also greater with 251000 flows
+compared to other test runs where the PPS result is less with only 2 flows.
+
+There are lost packets reported in most of the test runs. There is no observed
+correlation between the amount of flows and the amount of lost packets.
+The lost amount of packets normally differs a lot per test run.
+
+Detailed test results
+---------------------
+The scenario was run on LF POD1_ with:
+
+* Apex
+* OpenStack Mitaka
+* Open vSwitch 2.5.90
+* OpenDaylight Beryllium
+
+Rationale for decisions
+-----------------------
+Pass
+
+Conclusions and recommendations
+-------------------------------
+Tests were successfully executed and metrics collected.
+No SLA was verified. To be decided on in next release of OPNFV.
+
+
+
+fuel
+====
+
+.. _Grafana: http://testresults.opnfv.org/grafana/dashboard/db/yardstick-main
+.. _POD2: https://wiki.opnfv.org/pharos?&#community_test_labs
+
+Overview of test results
+------------------------
+
+See Grafana_ for viewing test result metrics for each respective test case. It
+is possible to choose which specific scenarios to look at, and then to zoom in
+on the details of each test scenario run as well.
+
+All of the test case results below are based on 4 scenario test runs, each run
+on the Ericsson POD2_ or LF POD2_ between August 25 and 29 in 2016.
+
+TC002
+-----
+The round-trip-time (RTT) between 2 VMs on different blades is measured using
+ping. Most test run measurements result in an average RTT between 0.5 and
+0.6 ms. A few runs start with a 1 - 1.5 ms RTT spike (this could be caused by
+normal ARP handling). One test run has a greater RTT spike of 1.9 ms; it is
+the same run that has the 0.7 ms average. The other runs have no similar spike
+at all. To be able to draw conclusions, more runs should be made.
+The SLA is set to 10 ms. The SLA value is used as a reference; it has not
+been defined by OPNFV.
+
+TC005
+-----
+The IO read bandwidth looks similar between different dates, with an
+average between approx. 170 and 200 MB/s. Within each test run the results
+vary, with a minimum 2 MB/s and maximum 838 MB/s on the totality. Most runs
+have a minimum BW of 3 MB/s (two runs at 2 MB/s). The maximum BW varies more in
+absolute numbers between the dates, between 617 and 838 MB/s.
+SLA set to 400 MB/s. The SLA value is used as a reference, it has not been
+defined by OPNFV.
+
+TC010
+-----
+The measurements for memory latency are similar between test dates and result
+in approx. 1.2 ns. The variations within each test run are similar, between
+1.215 and 1.219 ns. One exception is February 16, where the average is 1.222
+and varies between 1.22 and 1.28 ns.
+SLA set to 30 ns. The SLA value is used as a reference, it has not been defined
+by OPNFV.
+
+TC011
+-----
+Packet delay variation between 2 VMs on different blades is measured using
+Iperf3. On the first date the reported packet delay variation varies between
+0.0025 and 0.011 ms, with an average delay variation of 0.0067 ms.
+On the second date the delay variation varies between 0.002 and 0.006 ms, with
+an average delay variation of 0.004 ms.
+
+TC012
+-----
+Between test dates, the average measurements for memory bandwidth vary between
+17.4 and 17.9 GB/s. Within each test run the results vary more, with a minimal
+BW of 16.4 GB/s and maximum of 18.2 GB/s on the totality.
+SLA set to 15 GB/s. The SLA value is used as a reference, it has not been
+defined by OPNFV.
+
+TC014
+-----
+The Unixbench processor test run results vary between scores 3080 and 3240,
+one result each date. The average score on the total is 3150.
+No SLA set.
+
+TC037
+-----
+The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
+on different blades are measured when increasing the amount of UDP flows sent
+between the VMs using pktgen as packet generator tool.
+
+Round trip times and packet throughput between VMs can typically be affected by
+the amount of flows set up and result in higher RTT and less PPS throughput.
+
+The RTT results are similar throughout the different test dates and runs at
+approx. 15 ms. Some test runs show an increase with many flows, in the range
+towards 16 to 17 ms. One exception standing out is Feb. 15 where the average
+RTT is stable at approx. 13 ms. The PPS results are not as consistent as the
+RTT results.
+In some test runs when running with less than approx. 10000 flows the PPS
+throughput is normally flatter compared to when running with more flows, after
+which the PPS throughput decreases. Around 20 percent decrease in the worst
+case. For the other test runs there is however no significant change to the PPS
+throughput when the number of flows is increased. In some test runs the PPS
+is also greater with 1000000 flows compared to other test runs where the PPS
+result is less with only 2 flows.
+
+The average PPS throughput in the different runs varies between 414000 and
+452000 PPS. The total amount of packets in each test run is approx. 7500000 to
+8200000 packets. One test run, Feb. 15, sticks out with a PPS average of
+558000 and approx. 1100000 packets in total (the same run as mentioned earlier
+for the RTT results).
+
+There are lost packets reported in most of the test runs. There is no observed
+correlation between the amount of flows and the amount of lost packets.
+The amount of lost packets normally ranges between 100 and 1000 per test run,
+but there are spikes in the range of 10000 lost packets as well, and even
+more in rare cases.
+
+CPU utilization statistics are collected during UDP flows sent between the VMs
+using pktgen as packet generator tool. The average measurements for CPU
+utilization ratio vary between 1% and 2%. The peak of CPU utilization ratio
+appears around 7%.
+
+TC069
+-----
+Between test dates, the average measurements for memory bandwidth vary between
+15.5 and 25.4 GB/s. Within each test run the results vary more, with a minimal
+BW of 9.7 GB/s and maximum of 29.5 GB/s on the totality.
+SLA set to 6 GB/s. The SLA value is used as a reference, it has not been
+defined by OPNFV.
+
+TC070
+-----
+The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
+on different blades are measured when increasing the amount of UDP flows sent
+between the VMs using pktgen as packet generator tool.
+
+Round trip times and packet throughput between VMs can typically be affected by
+the amount of flows set up and result in higher RTT and less PPS throughput.
+
+The RTT results are similar throughout the different test dates and runs at
+approx. 15 ms. Some test runs show an increase with many flows, in the range
+towards 16 to 17 ms. One exception standing out is Feb. 15 where the average
+RTT is stable at approx. 13 ms. The PPS results are not as consistent as the
+RTT results.
+In some test runs when running with less than approx. 10000 flows the PPS
+throughput is normally flatter compared to when running with more flows, after
+which the PPS throughput decreases. Around 20 percent decrease in the worst
+case. For the other test runs there is however no significant change to the PPS
+throughput when the number of flows is increased. In some test runs the PPS
+is also greater with 1000000 flows compared to other test runs where the PPS
+result is less with only 2 flows.
+
+The average PPS throughput in the different runs varies between 414000 and
+452000 PPS. The total amount of packets in each test run is approx. 7500000 to
+8200000 packets. One test run, Feb. 15, sticks out with a PPS average of
+558000 and approx. 1100000 packets in total (the same run as mentioned earlier
+for the RTT results).
+
+There are lost packets reported in most of the test runs. There is no observed
+correlation between the amount of flows and the amount of lost packets.
+The amount of lost packets normally ranges between 100 and 1000 per test run,
+but there are spikes in the range of 10000 lost packets as well, and even
+more in rare cases.
+
+Memory utilization statistics are collected during UDP flows sent between the
+VMs using pktgen as packet generator tool. The average measurements for memory
+utilization vary between 225MB and 246MB. The peak of memory utilization appears
+around 340MB.
+
+TC071
+-----
+The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
+on different blades are measured when increasing the amount of UDP flows sent
+between the VMs using pktgen as packet generator tool.
+
+Round trip times and packet throughput between VMs can typically be affected by
+the amount of flows set up and result in higher RTT and less PPS throughput.
+
+The RTT results are similar throughout the different test dates and runs at
+approx. 15 ms. Some test runs show an increase with many flows, in the range
+towards 16 to 17 ms. One exception standing out is Feb. 15 where the average
+RTT is stable at approx. 13 ms. The PPS results are not as consistent as the
+RTT results.
+In some test runs when running with less than approx. 10000 flows the PPS
+throughput is normally flatter compared to when running with more flows, after
+which the PPS throughput decreases. Around 20 percent decrease in the worst
+case. For the other test runs there is however no significant change to the PPS
+throughput when the number of flows is increased. In some test runs the PPS
+is also greater with 1000000 flows compared to other test runs where the PPS
+result is less with only 2 flows.
+
+The average PPS throughput in the different runs varies between 414000 and
+452000 PPS. The total amount of packets in each test run is approx. 7500000 to
+8200000 packets. One test run, Feb. 15, sticks out with a PPS average of
+558000 and approx. 1100000 packets in total (the same run as mentioned earlier
+for the RTT results).
+
+There are lost packets reported in most of the test runs. There is no observed
+correlation between the amount of flows and the amount of lost packets.
+The amount of lost packets normally ranges between 100 and 1000 per test run,
+but there are spikes in the range of 10000 lost packets as well, and even
+more in rare cases.
+
+Cache utilization statistics are collected during UDP flows sent between the
+VMs using pktgen as packet generator tool. The average measurements for cache
+utilization vary between 205MB and 212MB.
+
+TC072
+-----
+The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
+on different blades are measured when increasing the amount of UDP flows sent
+between the VMs using pktgen as packet generator tool.
+
+Round trip times and packet throughput between VMs can typically be affected by
+the amount of flows set up and result in higher RTT and less PPS throughput.
+
+The RTT results are similar throughout the different test dates and runs at
+approx. 15 ms. Some test runs show an increase with many flows, in the range
+towards 16 to 17 ms. One exception standing out is Feb. 15 where the average
+RTT is stable at approx. 13 ms. The PPS results are not as consistent as the
+RTT results.
+In some test runs when running with less than approx. 10000 flows the PPS
+throughput is normally flatter compared to when running with more flows, after
+which the PPS throughput decreases. Around 20 percent decrease in the worst
+case. For the other test runs there is however no significant change to the PPS
+throughput when the number of flows is increased. In some test runs the PPS
+is also greater with 1000000 flows compared to other test runs where the PPS
+result is less with only 2 flows.
+
+The average PPS throughput in the different runs varies between 414000 and
+452000 PPS. The total amount of packets in each test run is approx. 7500000 to
+8200000 packets. One test run, Feb. 15, sticks out with a PPS average of
+558000 and approx. 1100000 packets in total (the same run as mentioned earlier
+for the RTT results).
+
+There are lost packets reported in most of the test runs. There is no observed
+correlation between the amount of flows and the amount of lost packets.
+The amount of lost packets normally ranges between 100 and 1000 per test run,
+but there are spikes in the range of 10000 lost packets as well, and even
+more in rare cases.
+
+Network utilization statistics are collected during UDP flows sent between the
+VMs using pktgen as the packet generator tool. The total number of packets
+received per second averaged 200 kpps and the total number of packets
+transmitted per second averaged 600 kpps.
+
+Detailed test results
+---------------------
+The scenario was run on Ericsson POD2_ and LF POD2_ with:
+
+* Fuel 9.0
+* OpenStack Mitaka
+* Open vSwitch 2.5.90
+* OpenDaylight Beryllium
+
+Rationale for decisions
+-----------------------
+Pass
+
+Tests were successfully executed and metrics collected.
+No SLA was verified. To be decided on in next release of OPNFV.
+
+Conclusions and recommendations
+-------------------------------
+The pktgen test configuration has a relatively large base effect on RTT in
+TC037 compared to TC002, where there is no background load at all: approx.
+15 ms compared to approx. 0.5 ms, which is more than a 3000 percent difference
+in RTT results.
+Notably, RTT and throughput come out with better results than, for instance,
+in the *fuel-os-nosdn-nofeature-ha* scenario. The reason for this should
+probably be further analyzed and understood. Also of interest could be
+further analyses to find patterns and reasons for lost traffic, and to see
+whether there are continuous variations where some test cases stand out with
+better or worse results than the general test case.
+
+
+
+Joid
+=====
+
+.. _Grafana: http://testresults.opnfv.org/grafana/dashboard/db/yardstick-main
+.. _POD6: https://wiki.opnfv.org/pharos?&#community_test_labs
+
+Overview of test results
+------------------------
+
+See Grafana_ for viewing test result metrics for each respective test case. It
+is possible to choose which specific scenarios to look at, and then to zoom in
+on the details of each test scenario run as well.
+
+All of the test case results below are based on 4 scenario test runs, each run
+on the Intel POD6_ between September 1 and 8 in 2016.
+
+TC002
+-----
+The round-trip-time (RTT) between 2 VMs on different blades is measured using
+ping. Most test run measurements result in an average RTT between 1.01 ms and
+1.88 ms. Only one test run reached the greatest RTT spike of 1.88 ms, while
+the smallest network latency, 1.01 ms, was obtained on Sep. 1st. In general,
+the average network latency of the four test runs is between 1.29 ms and
+1.34 ms. The SLA is set to 10 ms. The SLA value is used as a reference; it has
+not been defined by OPNFV.
+
+TC005
+-----
+The IO read bandwidth actually refers to the storage throughput, which is
+measured by fio and the greatest IO read bandwidth of the four runs is 183.65
+MB/s. The IO read bandwidth of three of the runs looks similar, with an average
+between 62.9 and 64.3 MB/s, except the one on Sep. 1, whose maximum storage
+throughput is only 159.1 MB/s. One of the runs has a minimum BW of 685 KB/s and
+another has a maximum BW of 183.6 MB/s. The SLA for read bandwidth is set to
+400 MB/s, which is used as a reference; it has not been defined by OPNFV.
+
+The results of storage IOPS for the four runs look similar to each other. The
+IO read operations per second of the four test runs have an average value
+between 1.41k per second and 1.64k per second, while the minimum result is
+only 55 operations per second.
+
+TC010
+-----
+The tool we use to measure memory read latency is lmbench, which is a series of
+micro benchmarks intended to measure basic operating system and hardware system
+metrics. The memory read latency of the four runs is between 1.152 ns and 1.179
+ns on average. The variations within each test run are quite different: some
+vary over a large range and others change very little. For example, the
+largest change is on September 8, where the memory read latency ranges from
+1.120 ns to 1.221 ns, while the results on September 7 change very little.
+The SLA is set to 30 ns. The SLA value is used as a reference; it has
+not been defined by OPNFV.
+
+TC011
+-----
+Iperf3 is a tool for evaluating the packet delay variation between 2 VMs on
+different blades. The reported packet delay variations of the four test runs
+differ from each other. In general, the packet delay of the first two runs
+looks similar, as they both stay stable within each run, and their mean packet
+delays are 0.0087 ms and 0.0127 ms respectively. Of the four runs, the fourth
+has the worst result, as the packet delay reaches 0.0187 ms. The SLA value is
+set to 10 ms. The SLA value is used as a reference; it has not been defined
+by OPNFV.
+
+TC012
+-----
+Lmbench is also used to measure the memory read and write bandwidth, in which
+we use bw_mem to obtain the results. Among the four test runs, the memory
+bandwidth trends of three runs look similar, all within a narrow range, and
+the average result is 11.78 GB/s. Here the SLA is set to 15 GB/s. The SLA
+value is used as a reference; it has not been defined by OPNFV.
+
+TC014
+-----
+Unixbench is used to evaluate the IaaS processing speed with regard to the
+scores of single CPU running and parallel running. It can be seen from the
+dashboard that the processing test results vary from scores 3260k to 3328k,
+with only one result per date. No SLA is set.
+
+TC037
+-----
+The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
+on different blades are measured when increasing the amount of UDP flows sent
+between the VMs using pktgen as packet generator tool.
+
+Round trip times and packet throughput between VMs can typically be affected by
+the amount of flows set up and result in higher RTT and less PPS throughput.
+
+The mean packet throughput of the four test runs is between 307.3 kpps and
+447.1 kpps, of which the result of the third run is the highest. The RTT
+results of all the test runs keep flat at approx. 15 ms. It is obvious that the
+PPS results are not as consistent as the RTT results.
+
+The number of flows of the four test runs is 240 k on average, and the PPS
+results fluctuate somewhat, since the largest packet throughput is 418.1 kpps
+and the minimum throughput is 326.5 kpps.
+
+There are no packet receive errors in the four runs, but there are still
+lost packets in all the test runs. The RTT values obtained by ping in the four
+runs have a similar average value, approx. 15 ms.
+
+CPU load is measured by mpstat, and the CPU load of the four test runs looks
+similar, with the minimum and the peak of the CPU load between 0 percent and
+nine percent respectively. The highest value is obtained on Sep. 1, with a CPU
+load of nine percent. On the whole, the CPU load is very low, since the
+average value is quite small.
+
+TC069
+-----
+With the block size changing from 1 kb to 512 kb, the memory write bandwidth
+tends to become larger first and then smaller within every test run, ranging
+from 21.9 GB/s to 25.9 GB/s and then to 17.8 GB/s on average. Since the test
+id is one, only the INT memory write bandwidth is tested. On the whole, when
+the block size is 2 kb or 16 kb, the memory write bandwidth looks similar,
+with a minimal BW of 24.8 GB/s and a peak value of 27.8 GB/s. Then, with the
+block size becoming larger, the memory write bandwidth tends to decrease. The
+SLA is set to 7 GB/s. The SLA value is used as a reference; it has not been
+defined by OPNFV.
+
+TC070
+-----
+The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
+on different blades are measured when increasing the amount of UDP flows sent
+between the VMs using pktgen as packet generator tool.
+
+Round trip times and packet throughput between VMs can typically be affected by
+the amount of flows set up and result in higher RTT and less PPS throughput.
+
+The network latency is measured by ping, and the results of the four test runs
+look similar to each other. Within these test runs, the maximum RTT can reach
+39 ms and the average RTT is usually approx. 15 ms. The network latency tested
+on Sep. 1 and Sep. 8 has a peak latency of 39 ms. On the whole, the average
+RTTs of the four runs keep flat and the network latency is relatively short.
+
+Memory utilization is measured by free, which can display the amount of free
+and used memory in the system. The largest amount of used memory is 267 MiB
+for the four runs. In general, the four test runs have a very large memory
+utilization, which can reach 257 MiB on average. On the other hand, the mean
+free memory of the four test runs follows a similar trend to that of the mean
+used memory. In general, the mean free memory changes from 233 MiB to 241 MiB.
+
+Packet throughput and packet loss can be measured by pktgen, which is a tool
+for generating traffic loads for network experiments. The mean packet
+throughput of the four test runs seems quite different, ranging from
+305.3 kpps to 447.1 kpps. The average number of flows in these tests is
+240000, and each run has a minimum number of flows of 2 and a maximum number
+of flows of 1.001 Mil. At the same time, the corresponding average packet
+throughput is between 354.4 kpps and 381.8 kpps. In summary, the PPS results
+seem consistent. Within each of the four test runs, the packet throughput
+does not grow as the number of flows becomes larger.
+
+TC071
+-----
+The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
+on different blades are measured when increasing the amount of UDP flows sent
+between the VMs using pktgen as packet generator tool.
+
+Round trip times and packet throughput between VMs can typically be affected by
+the amount of flows set up and result in higher RTT and less PPS throughput.
+
+The network latency is measured by ping, and the results of the four test runs
+look similar to each other. Within each test run, the maximum RTT is only 42
+ms and the average RTT is usually approx. 15 ms. On the whole, the average
+RTTs of the four runs keep stable and the network latency is relatively small.
+
+Cache utilization is measured by cachestat, which can display the size of the
+cache and buffers in the system. Cache utilization statistics are collected
+during UDP flows sent between the VMs using pktgen as the packet generator
+tool. The largest cache size is 212 MiB, which is the same for the four runs,
+and the smallest cache size is 75 MiB. On the whole, the average cache size of
+the four runs looks the same and is between 197 MiB and 211 MiB. Meanwhile,
+the trend of the buffer size keeps flat, with a minimum value of 7 MiB and a
+maximum value of 8 MiB, and an average value of about 7.9 MiB.
+
+Packet throughput can be measured by pktgen, which is a tool for generating
+traffic loads for network experiments. The mean packet throughput of the four
+test runs varies from 354.4 kpps to 381.8 kpps. The average number of flows in
+these tests is 240k, and each run has a minimum number of flows of 2 and a
+maximum number of flows of 1.001 Mil. At the same time, the corresponding
+packet throughput varies between 305.3 kpps and 447.1 kpps. Within each of the
+four test runs, the packet throughput does not grow as the number of flows
+becomes larger.
+
+TC072
+-----
+The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
+on different blades are measured when increasing the amount of UDP flows sent
+between the VMs using pktgen as packet generator tool.
+
+Round trip times and packet throughput between VMs can typically be affected by
+the amount of flows set up and result in higher RTT and less PPS throughput.
+
+The RTT results are similar throughout the different test dates and runs,
+between 0 ms and 42 ms with an average latency of less than 15 ms. The PPS
+results are not as consistent as the RTT results, as the mean packet
+throughput of the four runs varies from 354.4 kpps to 381.8 kpps.
+
+Network utilization is measured by sar, the system activity reporter, which
+can display the average statistics for the time since the system was started.
+Network utilization statistics are collected during UDP flows sent between the
+VMs using pktgen as the packet generator tool. The total number of packets
+transmitted per second looks similar for three test runs, whose values vary
+widely from 10 pps to 501 kpps, while the remaining test run keeps stable with
+an average of 10 packets transmitted per second. The total number of packets
+received per second of the four runs looks similar, with a wide range of
+2 pps to 815 kpps.
+
+In some test runs when running with less than approx. 251000 flows the PPS
+throughput is normally flatter compared to when running with more flows, after
+which the PPS throughput decreases. For the other test runs there is however no
+significant change to the PPS throughput when the number of flows is
+increased. In some test runs the PPS is also greater with 251000 flows
+compared to other test runs where the PPS result is less with only 2 flows.
+
+There are lost packets reported in most of the test runs. There is no observed
+correlation between the amount of flows and the amount of lost packets.
+The lost amount of packets normally differs a lot per test run.
+
+Detailed test results
+---------------------
+The scenario was run on Intel POD6_ with:
+
+* Joid
+* OpenStack Mitaka
+* Open vSwitch 2.5.90
+* OpenDaylight Beryllium
+
+Rationale for decisions
+-----------------------
+Pass
+
+Conclusions and recommendations
+-------------------------------
+Tests were successfully executed and metrics collected.
+No SLA was verified. To be decided on in next release of OPNFV.
+
diff --git a/docs/release/results/os-odl_l2-sfc-ha.rst b/docs/release/results/os-odl_l2-sfc-ha.rst
new file mode 100644
index 000000000..e27562cae
--- /dev/null
+++ b/docs/release/results/os-odl_l2-sfc-ha.rst
@@ -0,0 +1,231 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+
+
+==================================
+Test Results for os-odl_l2-sfc-ha
+==================================
+
+.. toctree::
+ :maxdepth: 2
+
+
+Fuel
+=====
+
+.. _Grafana: http://testresults.opnfv.org/grafana/dashboard/db/yardstick-main
+.. _POD2: https://wiki.opnfv.org/pharos?&#community_test_labs
+
+Overview of test results
+------------------------
+
+See Grafana_ for viewing test result metrics for each respective test case. It
+is possible to choose which specific scenarios to look at, and then to zoom in
+on the details of each test scenario run as well.
+
+All of the test case results below are based on 4 scenario test runs, each run
+on the LF POD2_ or Ericsson POD2_ between September 16 and 20 in 2016.
+
+TC002
+-----
+The round-trip-time (RTT) between 2 VMs on different blades is measured using
+ping. Most test run measurements result in an average RTT between 0.32 ms and
+1.42 ms. Only one test run, on Sep. 20, reached the greatest RTT spike of
+4.66 ms, while the smallest network latency, 0.16 ms, was obtained on Sep.
+17th. To sum up, the network latency curve varies very little and stays below
+5 ms. The SLA is set to 10 ms. The SLA value is used as a reference; it has
+not been defined by OPNFV.
+
+TC005
+-----
+The IO read bandwidth actually refers to the storage throughput, which is
+measured by fio and the greatest IO read bandwidth of the four runs is 734
+MB/s. The IO read bandwidth of the first three runs looks similar, with an
+average of less than 100 KB/s, except the one on Sep. 20, whose maximum storage
+throughput can reach 734 MB/s. The SLA for read bandwidth is set to 400 MB/s,
+which is used as a reference; it has not been defined by OPNFV.
+
+The results of storage IOPS for the four runs look similar to each other. The
+IO read operations per second of the four test runs have an average value
+between 1.8k per second and 3.27k per second, while the minimum result is
+only 60 operations per second.
+
+TC010
+-----
+The tool we use to measure memory read latency is lmbench, which is a series of
+micro benchmarks intended to measure basic operating system and hardware system
+metrics. The memory read latency of the four runs is between 1.085 ns and
+1.218 ns on average. The variations within each test run are quite small. For
+Ericsson POD2, the average memory latency is approx. 1.217 ns, while for LF
+POD2 the average value is about 1.085 ns. It can be seen that the performance
+of LF is better than Ericsson's. The SLA is set to 30 ns. The SLA value is
+used as a reference; it has not been defined by OPNFV.
+
+TC012
+-----
+Lmbench is also used to measure the memory read and write bandwidth, in which
+we use bw_mem to obtain the results. The four test runs all have a narrow
+range of variation, with an average memory read and write BW of 18.5 GB/s.
+Here the SLA is set to 15 GB/s. The SLA value is used as a reference; it has
+not been defined by OPNFV.
+
+TC014
+-----
+Unixbench is used to evaluate the IaaS processing speed with regard to the
+scores of single CPU running and parallel running. It can be seen from the
+dashboard that the processing test results vary from scores 3209k to 3843k,
+with only one result per date. No SLA is set.
+
+TC037
+-----
+The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
+on different blades are measured when increasing the amount of UDP flows sent
+between the VMs using pktgen as packet generator tool.
+
+Round trip times and packet throughput between VMs can typically be affected by
+the amount of flows set up and result in higher RTT and less PPS throughput.
+
+The mean packet throughput of three of the test runs is between 439 kpps and
+582 kpps, while the test run on Sep. 17th has the lowest average value of 371
+kpps. The RTT results of all the test runs keep flat at approx. 10 ms. It is
+obvious that the PPS results are not as consistent as the RTT results.
+
+The number of flows of the four test runs is 240 k on average, and the PPS
+results fluctuate somewhat, since the largest packet throughput is 680 kpps
+and the minimum throughput is 319 kpps.
+
+There are no packet receive errors in the four runs, but there are still lost
+packets in all the test runs. The RTT values obtained by ping in the four runs
+follow a similar trend, with an average value of approx. 12 ms.
+
+CPU load is measured by mpstat, and the CPU load of the four test runs looks
+similar, with the minimum and the peak of the CPU load between 0 percent and
+ten percent respectively. The highest value is obtained on Sep. 17th, with a
+CPU load of ten percent. On the whole, the CPU load is very low, since the
+average value is quite small.
+
+TC069
+-----
+With the block size changing from 1 kb to 512 kb, the average memory write
+bandwidth tends to become larger first and then smaller within every test run
+for the two pods, ranging from 25.1 GB/s to 29.4 GB/s and then to 19.2 GB/s
+on average. Since the test id is one, only the INT memory write bandwidth is
+tested. On the whole, with the block size becoming larger, the memory write
+bandwidth tends to decrease. The SLA is set to 7 GB/s. The SLA value is used
+as a reference; it has not been defined by OPNFV.
+
+TC070
+-----
+The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
+on different blades are measured when increasing the amount of UDP flows sent
+between the VMs using pktgen as packet generator tool.
+
+Round trip times and packet throughput between VMs can typically be affected by
+the amount of flows set up and result in higher RTT and less PPS throughput.
+
+The network latency is measured by ping, and the results of the four test runs
+look similar to each other. Within these test runs, the maximum RTT can reach
+27 ms and the average RTT is usually approx. 12 ms. The network latency tested
+on Sep. 27th has a peak latency of 27 ms, but on the whole the average RTTs of
+the four runs keep flat.
+
+Memory utilization is measured by free, which can display the amount of free
+and used memory in the system. The largest amount of used memory is 269 MiB
+for the four runs. In general, the four test runs have a very large memory
+utilization, which can reach 251 MiB on average. On the other hand, the mean
+free memory of the four test runs follows a similar trend to that of the mean
+used memory. In general, the mean free memory changes from 231 MiB to 248 MiB.
+
+Packet throughput and packet loss can be measured by pktgen, which is a tool
+for generating traffic loads for network experiments. The mean packet
+throughput of the four test runs seems quite different, ranging from
+371 kpps to 582 kpps. The average number of flows in these tests is
+240000, and each run has a minimum number of flows of 2 and a maximum number
+of flows of 1.001 Mil. At the same time, the corresponding average packet
+throughput is between 319 kpps and 680 kpps. In summary, the PPS results
+seem consistent. Within each of the four test runs, the packet throughput
+does not grow as the number of flows becomes larger.
+
+TC071
+-----
+The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
+on different blades are measured when increasing the amount of UDP flows sent
+between the VMs using pktgen as packet generator tool.
+
+Round trip times and packet throughput between VMs can typically be affected by
+the amount of flows set up and result in higher RTT and less PPS throughput.
+
+The network latency is measured by ping, and the results of the four test runs
+look similar to each other. Within each test run, the maximum RTT is only 24
+ms and the average RTT is usually approx. 12 ms. On the whole, the average
+RTTs of the four runs keep stable and the network latency is relatively small.
+
+Cache utilization is measured by cachestat, which can display the size of the
+cache and buffers in the system. Cache utilization statistics are collected
+during UDP flows sent between the VMs using pktgen as the packet generator
+tool. The largest cache size is 213 MiB and the smallest cache size is 99 MiB,
+which is the same for the four runs. On the whole, the average cache size of
+the four runs looks the same and is between 184 MiB and 205 MiB. Meanwhile,
+the trend of the buffer size keeps stable, with a minimum value of 7 MiB and a
+maximum value of 8 MiB.
+
+Packet throughput can be measured by pktgen, which is a tool for generating
+traffic loads for network experiments. The mean packet throughput of the four
+test runs varies from 371 kpps to 582 kpps. The average number of flows in
+these tests is 240k, and each run has a minimum number of flows of 2 and a
+maximum number of flows of 1.001 Mil. At the same time, the corresponding
+packet throughput varies between 319 kpps and 680 kpps. Within each of the
+four test runs, the packet throughput does not grow as the number of flows
+becomes larger.
+
+TC072
+-----
+The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
+on different blades are measured when increasing the amount of UDP flows sent
+between the VMs using pktgen as packet generator tool.
+
+Round trip times and packet throughput between VMs can typically be affected by
+the amount of flows set up and result in higher RTT and less PPS throughput.
+
+The RTT results are similar throughout the different test dates and runs,
+between 0 ms and 24 ms with an average latency of less than 13 ms. The PPS
+results are not as consistent as the RTT results, as the mean packet
+throughput of the four runs varies from 370 kpps to 582 kpps.
+
+Network utilization is measured by sar, the system activity reporter, which
+can display the average statistics for the time since the system was started.
+Network utilization statistics are collected during UDP flows sent between the
+VMs using pktgen as the packet generator tool. The total number of packets
+transmitted per second looks similar for the four test runs, whose values vary
+widely from 10 pps to 697 kpps. The total number of packets received per
+second of three runs looks similar, with a wide range of 2 pps to 1.497 Mpps,
+while the results on Sep. 18th and 20th have a smaller maximum number of
+packets received per second of 817 kpps.
+
+In some test runs when running with less than approx. 251000 flows the PPS
+throughput is normally flatter compared to when running with more flows, after
+which the PPS throughput decreases. For the other test runs there is however no
+significant change to the PPS throughput when the number of flows is
+increased. In some test runs the PPS is also greater with 251000 flows
+compared to other test runs where the PPS result is less with only 2 flows.
+
+There are lost packets reported in most of the test runs. There is no observed
+correlation between the amount of flows and the amount of lost packets.
+The lost amount of packets normally differs a lot per test run.
+
+Detailed test results
+---------------------
+The scenario was run on Ericsson POD2_ and LF POD2_ with:
+
+* Fuel 9.0
+* OpenStack Mitaka
+* Open vSwitch 2.5.90
+* OpenDaylight Beryllium
+
+Rationale for decisions
+-----------------------
+Pass
+
+Conclusions and recommendations
+-------------------------------
+Tests were successfully executed and metrics collected.
+No SLA was verified. To be decided on in next release of OPNFV.
diff --git a/docs/release/results/os-onos-nofeature-ha.rst b/docs/release/results/os-onos-nofeature-ha.rst
new file mode 100644
index 000000000..d8b3ace5f
--- /dev/null
+++ b/docs/release/results/os-onos-nofeature-ha.rst
@@ -0,0 +1,257 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+
+
+======================================
+Test Results for os-onos-nofeature-ha
+======================================
+
+.. toctree::
+ :maxdepth: 2
+
+
+Joid
+=====
+
+.. _Grafana: http://testresults.opnfv.org/grafana/dashboard/db/yardstick-main
+.. _POD6: https://wiki.opnfv.org/pharos?&#community_test_labs
+
+Overview of test results
+------------------------
+
+See Grafana_ for viewing test result metrics for each respective test case. It
+is possible to choose which specific scenarios to look at, and then to zoom in
+on the details of each test scenario run as well.
+
+All of the test case results below are based on 5 scenario test runs, each run
+on the Intel POD6_ between September 13 and 16 in 2016.
+
+TC002
+-----
+The round-trip-time (RTT) between 2 VMs on different blades is measured using
+ping. Most test run measurements result in an average RTT between 1.50 and
+1.68 ms. Only one test run reached the greatest RTT spike of 2.62 ms, and the
+same run also has the smallest RTT of 1.00 ms. The other four runs have no
+similar spike at all; their minimum and average RTTs are approx. 1.06 ms and
+1.32 ms. The SLA is set to 10 ms. The SLA value is used as a reference; it has
+not been defined by OPNFV.
+
+TC005
+-----
+The IO read bandwidth actually refers to the storage throughput, which is
+measured by fio and the greatest IO read bandwidth of the four runs is 175.4
+MB/s. The IO read bandwidth of the four runs looks similar on the four
+different days, with an average between 58.1 and 62.0 MB/s, except the one on
+Sep. 14, whose maximum storage throughput is only 133.0 MB/s. One of the runs
+has a minimum BW of 497 KB/s and another has a maximum BW of 177.4 MB/s. The
+SLA for read bandwidth is set to 400 MB/s, which is used as a reference; it
+has not been defined by OPNFV.
+
+The results of storage IOPS for the five runs look similar to each other. The
+IO read operations per second of the five test runs have an average value
+between 1.20 K/s and 1.61 K/s, while the minimum result is only 41 operations
+per second.
+
+TC010
+-----
+The tool we use to measure memory read latency is lmbench, which is a series of
+micro benchmarks intended to measure basic operating system and hardware system
+metrics. The memory read latency of the five runs is between 1.146 ns and 1.172
+ns on average. The variations within each test run are quite different: some
+vary over a large range and others change very little. For example, the
+largest change is on September 13, where the memory read latency ranges from
+1.152 ns to 1.221 ns, while the results on September 14 change very little.
+The SLA is set to 30 ns. The SLA value is used as a reference; it has
+not been defined by OPNFV.
+
+TC011
+-----
+Iperf3 is a tool for evaluating the packet delay variation between 2 VMs on
+different blades. The reported packet delay variations of the five test runs
+differ from each other. In general, the packet delay of the first two runs
+looks similar, as they both stay stable within each run, and their mean packet
+delays are 0.07714 ms and 0.07982 ms respectively. Of the five runs, the third
+has the worst result, as the packet delay reaches 0.08384 ms. The trends of
+the rest of the runs look the same, with average packet delays of 0.07808 ms
+and 0.07727 ms respectively. The SLA value is set to 10 ms. The SLA value is
+used as a reference; it has not been defined by OPNFV.
+
+TC012
+-----
+Lmbench is also used to measure the memory read and write bandwidth, in which
+we use bw_mem to obtain the results. Among the five test runs, the memory
+bandwidth of the last three test runs stays almost stable within each run, at
+11.64, 11.71 and 11.61 GB/s on average. However, the memory read and write
+bandwidth on Sep. 13 has a large range, from 6.68 GB/s to 11.73 GB/s. Here the
+SLA is set to 15 GB/s. The SLA value is used as a reference; it has not been
+defined by OPNFV.
+
+TC014
+-----
+Unixbench is used to evaluate the IaaS processing speed with regard to the
+scores of single CPU running and parallel running. It can be seen from the
+dashboard that the processing test results vary from scores 3208 to 3314,
+with only one result per date. No SLA is set.
+
+TC037
+-----
+The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
+on different blades are measured when increasing the amount of UDP flows sent
+between the VMs using pktgen as packet generator tool.
+
+Round trip times and packet throughput between VMs can typically be affected by
+the amount of flows set up and result in higher RTT and less PPS throughput.
+
+The mean packet throughput of the five test runs is between 259.6 kpps and
+318.4 kpps, of which the result of the second run is the highest. The RTT
+results of all the test runs keep flat at approx. 20 ms. It is obvious that the
+PPS results are not as consistent as the RTT results.
+
+The number of flows of the five test runs is 240 k on average, and the PPS
+results fluctuate somewhat, since the largest packet throughput is 398.9 kpps
+and the minimum throughput is 250.6 kpps.
+
+There are no packet receive errors in the five runs, but there are still lost
+packets in all the test runs. The RTT values obtained by ping in the five runs
+have a similar average value, between 17 ms and 22 ms, of which the worst RTT
+is 53 ms on Sep. 14th.
+
+CPU load is measured by mpstat, and the CPU load of the four test runs looks
+similar, with the minimum and the peak of the CPU load between 0 percent and
+10 percent respectively. The highest value is obtained on Sep. 13th, with a
+CPU load of 10 percent.
+
+TC069
+-----
+With the block size changing from 1 kb to 512 kb, the memory write bandwidth
+tends to become larger first and then smaller within every test run, ranging
+from 21.6 GB/s to 26.8 GB/s and then to 18.4 GB/s on average. Since the test
+id is one, only the INT memory write bandwidth is tested. On the whole, when
+the block size is 8 kb or 16 kb, the memory write bandwidth looks similar,
+with a minimal BW of 23.0 GB/s and a peak value of 28.6 GB/s. Then, with the
+block size becoming larger, the memory write bandwidth tends to decrease. The
+SLA is set to 7 GB/s. The SLA value is used as a reference; it has not been
+defined by OPNFV.
+
+TC070
+-----
+The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
+on different blades are measured when increasing the amount of UDP flows sent
+between the VMs using pktgen as packet generator tool.
+
+Round trip times and packet throughput between VMs can typically be affected by
+the amount of flows set up and result in higher RTT and less PPS throughput.
+
+The network latency is measured by ping, and the results of the five test runs
+look similar to each other. Within these test runs, the maximum RTT can reach
+53 ms and the average RTT is usually approx. 18 ms. The network latency tested
+on Sep. 14 shows a peak latency of 53 ms, but on the whole the average RTTs of
+the five runs keep flat and the network latency is relatively short.
+
+Memory utilization is measured by free, which can display the amount of free
+and used memory in the system. The largest amount of used memory is 272 MiB on
+Sep. 14. In general, the mean used memory of the five test runs follows a
+similar trend; the minimum used memory size is approx. 150 MiB and the average
+used memory size is about 250 MiB. On the other hand, the mean free memory of
+the five test runs follows a similar trend, changing from 218 MiB to 342 MiB,
+with an average value of approx. 38 MiB.
+
+Packet throughput and packet loss can be measured by pktgen, which is a tool
+for generating traffic loads for network experiments. The mean packet
+throughput of the five test runs seems quite different, ranging from
+285.29 kpps to 297.76 kpps. The average number of flows in these tests is
+240000, and each run has a minimum number of flows of 2 and a maximum number
+of flows of 1.001 Mil. At the same time, the corresponding packet throughput
+differs between 250.6k and 398.9k, with an average packet throughput between
+277.2 K and 318.4 K. In summary, the PPS results seem consistent. Within each
+of the five test runs, the packet throughput does not grow as the number of
+flows becomes larger.
+
+TC071
+-----
+The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
+on different blades are measured when increasing the amount of UDP flows sent
+between the VMs using pktgen as packet generator tool.
+
+Round trip times and packet throughput between VMs can typically be affected by
+the amount of flows set up and result in higher RTT and less PPS throughput.
+
+The network latency is measured by ping, and the results of the five test runs
+look similar to each other. Within each test run, the maximum RTT is only 49
+ms and the average RTT is usually approx. 20 ms. On the whole, the average
+RTTs of the five runs stay stable and the network latency is relatively short.
+
+Cache utilization is measured by cachestat, which can display the size of the
+cache and buffer in the system. Cache utilization statistics are collected
+during UDP flows sent between the VMs using pktgen as packet generator tool.
+The largest cache size is 215 MiB in the four runs, and the smallest cache
+size is 95 MiB. On the whole, the average cache size of the five runs changes
+little and is about 200 MiB, except for the one on Sep. 14th, whose mean cache
+size is much smaller, at about 102 MiB. Meanwhile, the trend of the buffer
+size keeps flat, since it has a minimum value of 7 MiB and a maximum value of
+8 MiB, with an average value of about 7.8 MiB.
+
+Packet throughput can be measured by pktgen, which is a tool in the network for
+generating traffic loads for network experiments. The mean packet throughput of
+the four test runs seems quite different, ranging from 285.29 kpps to 297.76
+kpps. The average number of flows in these tests is 239.7k, and each run has a
+minimum number of flows of 2 and a maximum number of flows of 1.001 Mil. At the
+same time, the corresponding packet throughput differs between 227.3 kpps and
+398.9 kpps with an average packet throughput between 277.2 kpps and 318.4 kpps.
+Within each test run of the five runs, when the number of flows becomes larger,
+the packet throughput does not necessarily increase at the same time.
+
+TC072
+-----
+The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
+on different blades are measured when increasing the amount of UDP flows sent
+between the VMs using pktgen as packet generator tool.
+
+Round trip times and packet throughput between VMs can typically be affected by
+the amount of flows set up and result in higher RTT and less PPS throughput.
+
+The RTT results are similar throughout the different test dates and runs,
+between 0 ms and 49 ms with an average latency of less than 22 ms. The PPS
+results are not as consistent as the RTT results, for the mean packet
+throughput of the five runs differs from 250.6 kpps to 398.9 kpps.
+
+Network utilization is measured by sar, the system activity reporter, which
+can display the average statistics for the time since the system was started.
+Network utilization statistics are collected during UDP flows sent between the
+VMs using pktgen as packet generator tool. The total number of packets
+transmitted per second looks similar for four test runs, whose values vary
+widely from 10 pps to 399 kpps, except for the one on Sep. 14th, whose number
+of packets transmitted per second stays stable at 10 pps. Similarly, the total
+number of packets received per second looks the same for four runs, except the
+one on Sep. 14th, whose value is only 10 pps.
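+
+Packets transmitted and received per second, as reported above, are typically
+collected with ``sar -n DEV``. The following sketch illustrates one way to
+sample them; the interface name and column layout are assumptions about a
+typical sysstat installation and locale:
+
+.. code-block:: python
+
+    import subprocess
+
+    def sample_pps(interface="eth0"):
+        """Return (rxpck/s, txpck/s) from a single 1-second sar sample."""
+        output = subprocess.check_output(["sar", "-n", "DEV", "1", "1"],
+                                         text=True)
+        for line in output.splitlines():
+            fields = line.split()
+            # Data lines look like: time IFACE rxpck/s txpck/s rxkB/s ...
+            if len(fields) > 3 and fields[1] == interface:
+                return float(fields[2]), float(fields[3])
+        return None
+
+    print(sample_pps("eth0"))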
+
+In some test runs when running with less than approx. 90000 flows the PPS
+throughput is normally flatter compared to when running with more flows, after
+which the PPS throughput decreases. For the other test runs there is however no
+significant change to the PPS throughput when the number of flows are
+increased. In some test runs the PPS is also greater with 250000 flows
+compared to other test runs where the PPS result is less with only 2 flows.
+
+There are lost packets reported in most of the test runs. There is no observed
+correlation between the amount of flows and the amount of lost packets.
+The lost amount of packets normally differs a lot per test run.
+
+Detailed test results
+---------------------
+The scenario was run on Intel POD6_ with:
+
+- Joid
+- OpenStack Mitaka
+- Onos Goldeneye
+- OpenVirtualSwitch 2.5.90
+- OpenDayLight Beryllium
+
+Rationale for decisions
+-----------------------
+Pass
+
+Conclusions and recommendations
+-------------------------------
+Tests were successfully executed and metrics collected.
+No SLA was verified. To be decided on in next release of OPNFV.
diff --git a/docs/release/results/os-onos-sfc-ha.rst b/docs/release/results/os-onos-sfc-ha.rst
new file mode 100644
index 000000000..e52ae3d55
--- /dev/null
+++ b/docs/release/results/os-onos-sfc-ha.rst
@@ -0,0 +1,517 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+
+
+===============================
+Test Results for os-onos-sfc-ha
+===============================
+
+.. toctree::
+ :maxdepth: 2
+
+
+fuel
+====
+
+.. _Grafana: http://testresults.opnfv.org/grafana/dashboard/db/yardstick-main
+.. _POD2: https://wiki.opnfv.org/pharos?&#community_test_labs
+
+Overview of test results
+------------------------
+
+See Grafana_ for viewing test result metrics for each respective test case. It
+is possible to chose which specific scenarios to look at, and then to zoom in
+on the details of each run test scenario as well.
+
+All of the test case results below are based on 4 scenario test runs, each run
+on the Ericsson POD2_ or LF POD2_ between September 5 and 10 in 2016.
+
+TC002
+-----
+The round-trip-time (RTT) between 2 VMs on different blades is measured using
+ping. Most test run measurements average between 0.5 and 0.6 ms.
+A few runs start with a 1 - 1.5 ms RTT spike (this could be because of normal
+ARP handling). One test run has a greater RTT spike of 1.9 ms, which is the same
+one with the 0.7 ms average. The other runs have no similar spike at all.
+To be able to draw conclusions more runs should be made.
+SLA set to 10 ms. The SLA value is used as a reference, it has not
+been defined by OPNFV.
+
+TC005
+-----
+The IO read bandwidth looks similar between different dates, with an
+average between approx. 170 and 200 MB/s. Within each test run the results
+vary, with a minimum of 2 MB/s and a maximum of 838 MB/s overall. Most runs
+have a minimum BW of 3 MB/s (two runs at 2 MB/s). The maximum BW varies more in
+absolute numbers between the dates, between 617 and 838 MB/s.
+SLA set to 400 MB/s. The SLA value is used as a reference, it has not been
+defined by OPNFV.
+
+TC010
+-----
+The measurements for memory latency are similar between test dates and result
+in approx. 1.2 ns. The variations within each test run are similar, between
+1.215 and 1.219 ns. One exception is February 16, where the average is 1.222 ns
+and the results vary between 1.22 and 1.28 ns.
+SLA set to 30 ns. The SLA value is used as a reference, it has not been defined
+by OPNFV.
+
+TC011
+-----
+Packet delay variation between 2 VMs on different blades is measured using
+Iperf3. On the first date the reported packet delay variation varies between
+0.0025 and 0.011 ms, with an average delay variation of 0.0067 ms.
+On the second date the delay variation varies between 0.002 and 0.006 ms, with
+an average delay variation of 0.004 ms.
+
+TC012
+-----
+Between test dates, the average measurements for memory bandwidth vary between
+17.4 and 17.9 GB/s. Within each test run the results vary more, with a minimum
+BW of 16.4 GB/s and a maximum of 18.2 GB/s overall.
+SLA set to 15 GB/s. The SLA value is used as a reference, it has not been
+defined by OPNFV.
+
+TC014
+-----
+The Unixbench processor test run results vary between scores 3080 and 3240,
+one result per date. The average score overall is 3150.
+No SLA set.
+
+TC037
+-----
+The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
+on different blades are measured when increasing the amount of UDP flows sent
+between the VMs using pktgen as packet generator tool.
+
+Round trip times and packet throughput between VMs can typically be affected by
+the amount of flows set up and result in higher RTT and less PPS throughput.
+
+The RTT results are similar throughout the different test dates and runs at
+approx. 15 ms. Some test runs show an increase with many flows, in the range
+towards 16 to 17 ms. One exception standing out is Feb. 15 where the average
+RTT is stable at approx. 13 ms. The PPS results are not as consistent as the
+RTT results.
+In some test runs when running with less than approx. 10000 flows the PPS
+throughput is normally flatter compared to when running with more flows, after
+which the PPS throughput decreases. Around 20 percent decrease in the worst
+case. For the other test runs there is however no significant change to the PPS
+throughput when the number of flows are increased. In some test runs the PPS
+is also greater with 1000000 flows compared to other test runs where the PPS
+result is less with only 2 flows.
+
+The average PPS throughput in the different runs varies between 414000 and
+452000 PPS. The total amount of packets in each test run is approx. 7500000 to
+8200000 packets. One test run on Feb. 15 sticks out with a PPS average of
+558000 and approx. 1100000 packets in total (same as the one mentioned earlier
+for RTT results).
+
+There are lost packets reported in most of the test runs. There is no observed
+correlation between the amount of flows and the amount of lost packets.
+The lost amount of packets normally ranges between 100 and 1000 per test run,
+but there are spikes in the range of 10000 lost packets as well, and even
+more in rare cases.
+
+CPU utilization statistics are collected during UDP flows sent between the VMs
+using pktgen as packet generator tool. The average measurements for CPU
+utilization ratio varies between 1% and 2%. The peak of CPU utilization ratio
+appears around 7%.
+
+TC069
+-----
+Between test dates, the average measurements for memory bandwidth vary between
+15.5 and 25.4 GB/s. Within each test run the results vary more, with a minimum
+BW of 9.7 GB/s and a maximum of 29.5 GB/s overall.
+SLA set to 6 GB/s. The SLA value is used as a reference, it has not been
+defined by OPNFV.
+
+TC070
+-----
+The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
+on different blades are measured when increasing the amount of UDP flows sent
+between the VMs using pktgen as packet generator tool.
+
+Round trip times and packet throughput between VMs can typically be affected by
+the amount of flows set up and result in higher RTT and less PPS throughput.
+
+The RTT results are similar throughout the different test dates and runs at
+approx. 15 ms. Some test runs show an increase with many flows, in the range
+towards 16 to 17 ms. One exception standing out is Feb. 15 where the average
+RTT is stable at approx. 13 ms. The PPS results are not as consistent as the
+RTT results.
+In some test runs when running with less than approx. 10000 flows the PPS
+throughput is normally flatter compared to when running with more flows, after
+which the PPS throughput decreases. Around 20 percent decrease in the worst
+case. For the other test runs there is however no significant change to the PPS
+throughput when the number of flows are increased. In some test runs the PPS
+is also greater with 1000000 flows compared to other test runs where the PPS
+result is less with only 2 flows.
+
+The average PPS throughput in the different runs varies between 414000 and
+452000 PPS. The total amount of packets in each test run is approx. 7500000 to
+8200000 packets. One test run on Feb. 15 sticks out with a PPS average of
+558000 and approx. 1100000 packets in total (same as the one mentioned earlier
+for RTT results).
+
+There are lost packets reported in most of the test runs. There is no observed
+correlation between the amount of flows and the amount of lost packets.
+The lost amount of packets normally ranges between 100 and 1000 per test run,
+but there are spikes in the range of 10000 lost packets as well, and even
+more in rare cases.
+
+Memory utilization statistics are collected during UDP flows sent between the
+VMs using pktgen as packet generator tool. The average measurements for memory
+utilization vary between 225 MB and 246 MB. The peak of memory utilization
+appears around 340 MB.
+
+TC071
+-----
+The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
+on different blades are measured when increasing the amount of UDP flows sent
+between the VMs using pktgen as packet generator tool.
+
+Round trip times and packet throughput between VMs can typically be affected by
+the amount of flows set up and result in higher RTT and less PPS throughput.
+
+The RTT results are similar throughout the different test dates and runs at
+approx. 15 ms. Some test runs show an increase with many flows, in the range
+towards 16 to 17 ms. One exception standing out is Feb. 15 where the average
+RTT is stable at approx. 13 ms. The PPS results are not as consistent as the
+RTT results.
+In some test runs when running with less than approx. 10000 flows the PPS
+throughput is normally flatter compared to when running with more flows, after
+which the PPS throughput decreases. Around 20 percent decrease in the worst
+case. For the other test runs there is however no significant change to the PPS
+throughput when the number of flows are increased. In some test runs the PPS
+is also greater with 1000000 flows compared to other test runs where the PPS
+result is less with only 2 flows.
+
+The average PPS throughput in the different runs varies between 414000 and
+452000 PPS. The total amount of packets in each test run is approx. 7500000 to
+8200000 packets. One test run on Feb. 15 sticks out with a PPS average of
+558000 and approx. 1100000 packets in total (same as the one mentioned earlier
+for RTT results).
+
+There are lost packets reported in most of the test runs. There is no observed
+correlation between the amount of flows and the amount of lost packets.
+The lost amount of packets normally ranges between 100 and 1000 per test run,
+but there are spikes in the range of 10000 lost packets as well, and even
+more in rare cases.
+
+Cache utilization statistics are collected during UDP flows sent between the
+VMs using pktgen as packet generator tool. The average measurements for cache
+utilization vary between 205 MB and 212 MB.
+
+TC072
+-----
+The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
+on different blades are measured when increasing the amount of UDP flows sent
+between the VMs using pktgen as packet generator tool.
+
+Round trip times and packet throughput between VMs can typically be affected by
+the amount of flows set up and result in higher RTT and less PPS throughput.
+
+The RTT results are similar throughout the different test dates and runs at
+approx. 15 ms. Some test runs show an increase with many flows, in the range
+towards 16 to 17 ms. One exception standing out is Feb. 15 where the average
+RTT is stable at approx. 13 ms. The PPS results are not as consistent as the
+RTT results.
+In some test runs when running with less than approx. 10000 flows the PPS
+throughput is normally flatter compared to when running with more flows, after
+which the PPS throughput decreases. Around 20 percent decrease in the worst
+case. For the other test runs there is however no significant change to the PPS
+throughput when the number of flows are increased. In some test runs the PPS
+is also greater with 1000000 flows compared to other test runs where the PPS
+result is less with only 2 flows.
+
+The average PPS throughput in the different runs varies between 414000 and
+452000 PPS. The total amount of packets in each test run is approx. 7500000 to
+8200000 packets. One test run on Feb. 15 sticks out with a PPS average of
+558000 and approx. 1100000 packets in total (same as the one mentioned earlier
+for RTT results).
+
+There are lost packets reported in most of the test runs. There is no observed
+correlation between the amount of flows and the amount of lost packets.
+The lost amount of packets normally ranges between 100 and 1000 per test run,
+but there are spikes in the range of 10000 lost packets as well, and even
+more in rare cases.
+
+Network utilization statistics are collected during UDP flows sent between the
+VMs using pktgen as packet generator tool. The total number of packets received
+per second averaged 200 kpps and the total number of packets transmitted per
+second averaged 600 kpps.
+
+Detailed test results
+---------------------
+The scenario was run on Ericsson POD2_ and LF POD2_ with:
+
+- Fuel 9.0
+- OpenStack Mitaka
+- Onos Goldeneye
+- OpenVirtualSwitch 2.5.90
+- OpenDayLight Beryllium
+
+Rationale for decisions
+-----------------------
+Pass
+
+Tests were successfully executed and metrics collected.
+No SLA was verified. To be decided on in next release of OPNFV.
+
+Conclusions and recommendations
+-------------------------------
+The pktgen test configuration has a relatively large base effect on RTT in
+TC037 compared to TC002, where there is no background load at all: approx.
+15 ms compared to approx. 0.5 ms, which is more than a 3000 percent
+difference in RTT results.
+Especially RTT and throughput come out with better results than for instance
+the *fuel-os-nosdn-nofeature-ha* scenario does. The reason for this should
+probably be further analyzed and understood. Also of interest could be
+to make further analyses to find patterns and reasons for lost traffic.
+Also of interest could be to see if there are continuous variations where
+some test cases stand out with better or worse results than the general test
+case.
+
+
+Joid
+=====
+
+.. _Grafana: http://testresults.opnfv.org/grafana/dashboard/db/yardstick-main
+.. _POD6: https://wiki.opnfv.org/pharos?&#community_test_labs
+
+Overview of test results
+------------------------
+
+See Grafana_ for viewing test result metrics for each respective test case. It
+is possible to chose which specific scenarios to look at, and then to zoom in
+on the details of each run test scenario as well.
+
+All of the test case results below are based on 4 scenario test runs, each run
+on the Intel POD6_ between September 8 and 11 in 2016.
+
+TC002
+-----
+The round-trip-time (RTT) between 2 VMs on different blades is measured using
+ping. Most test run measurements average between 1.35 ms and 1.57 ms. Only
+one test run has reached the greatest RTT spike of 2.58 ms. Meanwhile, the
+smallest network latency is 1.11 ms, which is obtained on Sep. 11th. In
+general, the average network latency of the four test runs is between 1.35
+ms and 1.57 ms. The SLA is set to 10 ms. The SLA value is used as a reference,
+it has not been defined by OPNFV.
+
+TC005
+-----
+The IO read bandwidth actually refers to the storage throughput, which is
+measured by fio, and the greatest IO read bandwidth of the four runs is 175.4
+MB/s. The IO read bandwidth of three of the runs looks similar, with an average
+between 43.7 and 56.3 MB/s, while the one on Sep. 8 differs, as its maximum
+storage throughput is only 107.9 MB/s. One of the runs has a minimum BW of
+478 KB/s and another has a maximum BW of 168.6 MB/s. The SLA of read bandwidth
+is set to 400 MB/s, which is used as a reference; it has not been defined by
+OPNFV.
+
+The results of storage IOPS for the four runs look similar to each other. The
+IO read times per second of the four test runs have an average value between
+978 per second and 1.20 K/s, and meanwhile, the minimum result is only 36 times
+per second.
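+
+Both the read bandwidth and the IOPS numbers above come from fio. A minimal,
+illustrative invocation is sketched below; the job parameters and JSON field
+names are assumptions and not the exact Yardstick TC005 options:
+
+.. code-block:: python
+
+    import json
+    import subprocess
+
+    cmd = ["fio", "--name=readtest", "--rw=read", "--bs=4k", "--size=512m",
+           "--runtime=30", "--time_based", "--direct=1",
+           "--output-format=json"]
+    report = json.loads(subprocess.check_output(cmd, text=True))
+    read = report["jobs"][0]["read"]
+    # fio reports bandwidth in KiB/s and IOPS as read operations per second.
+    print("read BW: %.1f MB/s, IOPS: %.0f" % (read["bw"] / 1024.0,
+                                              read["iops"]))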
+
+TC010
+-----
+The tool we use to measure memory read latency is lmbench, which is a series of
+micro benchmarks intended to measure basic operating system and hardware system
+metrics. The memory read latency of the four runs is between 1.164 ns and 1.244
+ns on average. The variations within each test run are quite different: some
+vary over a large range and others change little. For example, the largest
+change is on September 10, where the memory read latency ranges from 1.128 ns
+to 1.381 ns. However, the results on September 11 change very little. The SLA
+is set to 30 ns. The SLA value is used as a reference, it has not been defined
+by OPNFV.
+
+TC011
+-----
+Iperf3 is a tool for evaluating the packet delay variation between 2 VMs on
+different blades. The reported packet delay variations of the four test runs
+differ from each other. In general, the packet delay of two runs looks similar,
+for they both stay stable within each run, and their mean packet delays are
+0.0772 ms and 0.0788 ms respectively. Of the four runs, the fourth has the
+worst result, because the packet delay reaches 0.0838 ms. The remaining run
+has a wide range from 0.0666 ms to 0.0798 ms. The SLA value is set to 10 ms.
+The SLA value is used as a reference, it has not been defined by OPNFV.
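+
+The packet delay variation reported here is the jitter value of an iperf3 UDP
+run. A sketch of how such a measurement could be collected is shown below; the
+server address, bandwidth and JSON field names are assumptions and may differ
+between iperf3 versions:
+
+.. code-block:: python
+
+    import json
+    import subprocess
+
+    # One UDP stream for 20 seconds against an iperf3 server in the peer VM.
+    cmd = ["iperf3", "-c", "192.168.0.2", "-u", "-b", "10M", "-t", "20",
+           "--json"]
+    report = json.loads(subprocess.check_output(cmd, text=True))
+    print("packet delay variation: %.4f ms"
+          % report["end"]["sum"]["jitter_ms"])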
+
+TC012
+-----
+Lmbench is also used to measure the memory read and write bandwidth, in which
+we use bw_mem to obtain the results. Among the four test runs, the trend of the
+memory bandwidth looks similar; they all have a wide range, and the minimum and
+maximum results are 9.02 GB/s and 18.14 GB/s. Here the SLA is set to 15 GB/s.
+The SLA value is used as a reference, it has not been defined by OPNFV.
+
+TC014
+-----
+Unixbench is used to evaluate the IaaS processing speed with regard to the
+scores of single CPU running and parallel running. It can be seen from the
+dashboard that the processing test results vary from scores 3395 to 3475, and
+there is only one result per date. No SLA set.
+
+TC037
+-----
+The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
+on different blades are measured when increasing the amount of UDP flows sent
+between the VMs using pktgen as packet generator tool.
+
+Round trip times and packet throughput between VMs can typically be affected by
+the amount of flows set up and result in higher RTT and less PPS throughput.
+
+The mean packet throughput of the four test runs is between 362.1 kpps and
+363.5 kpps, of which the result of the third run is the highest. The RTT
+results of all the test runs keep flat at approx. 17 ms. It is obvious that the
+PPS results are not as consistent as the RTT results.
+
+The number of flows in the four test runs is 240 k on average, and the PPS
+results fluctuate a little, since the largest packet throughput is 418.1 kpps
+and the minimum throughput is 326.5 kpps.
+
+There are no packet receive errors in the four runs, but there are still
+lost packets in all the test runs. The RTT values obtained by ping of the four
+runs have similar average values, approx. 17 ms, of which the worst
+RTT is 39 ms on Sep. 11th.
+
+CPU load is measured by mpstat, and the CPU load of the four test runs seems
+similar, since the minimum value and the peak of CPU load are 0 percent and
+nine percent respectively. The best result is obtained on Sep. 10, with a CPU
+load of nine percent.
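+
+CPU load values such as these can be derived from mpstat by subtracting the
+idle percentage from 100. A small sketch is given below; the column positions
+assume a typical sysstat ``mpstat`` output and may vary with version and
+locale:
+
+.. code-block:: python
+
+    import subprocess
+
+    def sample_cpu_load():
+        """Return the overall CPU load in percent from a 1-second sample."""
+        output = subprocess.check_output(["mpstat", "1", "1"], text=True)
+        for line in output.splitlines():
+            fields = line.split()
+            # The "Average:" line ends with %idle; load = 100 - %idle.
+            if fields and fields[0] == "Average:":
+                return 100.0 - float(fields[-1])
+        return None
+
+    print("CPU load: %.1f%%" % sample_cpu_load())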
+
+TC069
+-----
+With the block size changing from 1 kb to 512 kb, the memory write bandwidth
+tends to become larger first and then smaller within every test run, ranging
+from 25.9 GB/s to 26.6 GB/s and then down to 18.1 GB/s on average. Since the
+test id is one, only the INT memory write bandwidth is tested. On the whole,
+when the block size is from 2 kb to 16 kb, the memory write bandwidth looks
+similar, with a minimum BW of 22.1 GB/s and a peak value of 28.6 GB/s. Then,
+as the block size becomes larger, the memory write bandwidth tends to
+decrease. The SLA is set to 7 GB/s. The SLA value is used as a reference,
+it has not been defined by OPNFV.
+
+TC070
+-----
+The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
+on different blades are measured when increasing the amount of UDP flows sent
+between the VMs using pktgen as packet generator tool.
+
+Round trip times and packet throughput between VMs can typically be affected by
+the amount of flows set up and result in higher RTT and less PPS throughput.
+
+The network latency is measured by ping, and the results of the four test runs
+look similar to each other. Within these test runs, the maximum RTT can reach
+39 ms and the average RTT is usually approx. 17 ms. The network latency tested
+on Sep. 11 shows a peak latency of 39 ms, but on the whole the average RTTs of
+the four runs stay flat and the network latency is relatively short.
+
+Memory utilization is measured by free, which can display the amount of free
+and used memory in the system. The largest amount of used memory is 270 MiB in
+the first two runs. In general, two of the test runs have very high mean used
+memory, reaching 264 MiB on average, while the other two runs have a wide range
+of memory usage, with a minimum value of 150 MiB and a maximum value of 270
+MiB. On the other hand, the mean free memory of the four test runs follows a
+similar trend to that of the mean used memory. In general, the mean free memory
+changes from 220 MiB to 342 MiB.
+
+Packet throughput and packet loss can be measured by pktgen, which is a tool
+in the network for generating traffic loads for network experiments. The mean
+packet throughput of the four test runs seems quite different, ranging from
+326.5 kpps to 418.1 kpps. The average number of flows in these tests is
+240000, and each run has a minimum number of flows of 2 and a maximum number
+of flows of 1.001 Mil. At the same time, the corresponding packet throughput
+differs between 326.5 kpps and 418.1 kpps with an average packet throughput
+between 361.7 kpps and 363.5 kpps. In summary, the PPS results seem consistent.
+Within each test run of the four runs, when the number of flows becomes larger,
+the packet throughput does not necessarily increase at the same time.
+
+TC071
+-----
+The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
+on different blades are measured when increasing the amount of UDP flows sent
+between the VMs using pktgen as packet generator tool.
+
+Round trip times and packet throughput between VMs can typically be affected by
+the amount of flows set up and result in higher RTT and less PPS throughput.
+
+The network latency is measured by ping, and the results of the four test runs
+look similar to each other. Within each test run, the maximum RTT is only 47
+ms and the average RTT is usually approx. 15 ms. On the whole, the average
+RTTs of the four runs keep stable and the network latency is relatively small.
+
+Cache utilization is measured by cachestat, which can display size of cache and
+buffer in the system. Cache utilization statistics are collected during UDP
+flows sent between the VMs using pktgen as packet generator tool. The largest
+cache size is 214 MiB, which is the same for the four runs, and the smallest
+cache size is 94 MiB. On the whole, the average cache size of the four runs
+looks the same and is between 198 MiB and 207 MiB. Meanwhile, the trend of the
+buffer size keeps flat, since it has a minimum value of 7 MiB and a maximum
+value of 8 MiB, with an average value of about 7.9 MiB.
+
+Packet throughput can be measured by pktgen, which is a tool in the network for
+generating traffic loads for network experiments. The mean packet throughput of
+the four test runs seems about the same, approx. 363 kpps. The average
+number of flows in these tests is 240k, and each run has a minimum number of
+flows of 2 and a maximum number of flows of 1.001 Mil. At the same time, the
+corresponding packet throughput differs between 327 kpps and 418 kpps with an
+average packet throughput of about 363 kpps. Within each test run of the four
+runs, when the number of flows becomes larger, the packet throughput does not
+necessarily increase at the same time.
+
+TC072
+-----
+The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
+on different blades are measured when increasing the amount of UDP flows sent
+between the VMs using pktgen as packet generator tool.
+
+Round trip times and packet throughput between VMs can typically be affected by
+the amount of flows set up and result in higher RTT and less PPS throughput.
+
+The RTT results are similar throughout the different test dates and runs,
+between 0 ms and 47 ms with an average latency of less than 16 ms. The PPS
+results are not as consistent as the RTT results, for the mean packet
+throughput of the four runs differs from 361.7 kpps to 365.0 kpps.
+
+Network utilization is measured by sar, the system activity reporter, which
+can display the average statistics for the time since the system was started.
+Network utilization statistics are collected during UDP flows sent between the
+VMs using pktgen as packet generator tool. The total number of packets
+transmitted per second looks similar for two test runs, whose values vary
+widely from 10 pps to 432 kpps, while the results of the other test runs seem
+the same and stay stable, with an average number of packets transmitted per
+second of 10 pps. However, the total number of packets received per second of
+the four runs looks similar, with a wide range of 2 pps to 657 kpps.
+
+In some test runs when running with less than approx. 250000 flows the PPS
+throughput is normally flatter compared to when running with more flows, after
+which the PPS throughput decreases. For the other test runs there is however no
+significant change to the PPS throughput when the number of flows are
+increased. In some test runs the PPS is also greater with 250000 flows
+compared to other test runs where the PPS result is less with only 2 flows.
+
+There are lost packets reported in most of the test runs. There is no observed
+correlation between the amount of flows and the amount of lost packets.
+The lost amount of packets normally differs a lot per test run.
+
+Detailed test results
+---------------------
+The scenario was run on Intel POD6_ with:
+
+- Joid
+- OpenStack Mitaka
+- Onos Goldeneye
+- OpenVirtualSwitch 2.5.90
+- OpenDayLight Beryllium
+
+Rationale for decisions
+-----------------------
+Pass
+
+Conclusions and recommendations
+-------------------------------
+Tests were successfully executed and metrics collected.
+No SLA was verified. To be decided on in next release of OPNFV.
+
diff --git a/docs/release/results/overview.rst b/docs/release/results/overview.rst
new file mode 100644
index 000000000..b4a050545
--- /dev/null
+++ b/docs/release/results/overview.rst
@@ -0,0 +1,106 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, Ericsson AB and others.
+
+Yardstick test result document overview
+=======================================
+
+.. _`Yardstick user guide`: artifacts.opnfv.org/yardstick/docs/userguide/index.html
+.. _Dashboard: http://testresults.opnfv.org/grafana/dashboard/db/yardstick-main
+.. _Jenkins: https://build.opnfv.org/ci/view/yardstick/
+.. _Scenarios: http://testresults.opnfv.org/grafana/dashboard/db/yardstick-scenarios
+
+This document provides an overview of the results of test cases developed by
+the OPNFV Yardstick Project, executed on OPNFV community labs.
+
+Yardstick project is described in `Yardstick user guide`_.
+
+Yardstick is run systematically at the end of an OPNFV fresh installation.
+The system under test (SUT) is installed by the installer Apex, Compass, Fuel
+or Joid on a Performance Optimized Datacenter (POD); one single installer per
+POD. All the runnable test cases are run sequentially. The installer and the
+POD are considered when evaluating whether a test case can be run or not. That
+is why the number of test cases may vary from one installer to another and
+from one POD to another.
+
+OPNFV CI provides automated build, deploy and testing for
+the software developed in OPNFV. Unless stated otherwise, the reported tests
+are automated via Jenkins jobs. Yardstick test results from OPNFV Continuous
+Integration can be found in the following dashboard:
+
+* *Yardstick* Dashboard_: uses InfluxDB to store Yardstick CI test results and
+  Grafana for visualization (user: opnfv / password: opnfv)
+
+The results of executed test cases are available in Dashboard_ and all logs are
+stored in Jenkins_.
+
+It was not possible to execute the entire Yardstick test cases suite on the
+PODs assigned for release verification over a longer period of time, due to
+continuous work on the software components and blocking faults either on
+environment, features or test framework.
+
+The list of scenarios supported by each installer can be described as follows:
+
++-------------------------+---------+---------+---------+---------+
+| Scenario | Apex | Compass | Fuel | Joid |
++=========================+=========+=========+=========+=========+
+| os-nosdn-nofeature-noha | | | X | X |
++-------------------------+---------+---------+---------+---------+
+| os-nosdn-nofeature-ha | X | X | X | X |
++-------------------------+---------+---------+---------+---------+
+| os-odl_l2-nofeature-ha | X | X | X | X |
++-------------------------+---------+---------+---------+---------+
+| os-odl_l2-nofeature-noha| | | X | |
++-------------------------+---------+---------+---------+---------+
+| os-odl_l3-nofeature-ha | X | X | X | |
++-------------------------+---------+---------+---------+---------+
+| os-odl_l3-nofeature-noha| | | X | |
++-------------------------+---------+---------+---------+---------+
+| os-onos-sfc-ha | X | X | X | X |
++-------------------------+---------+---------+---------+---------+
+| os-onos-sfc-noha | | | X | |
++-------------------------+---------+---------+---------+---------+
+| os-onos-nofeature-ha | X | X | X | X |
++-------------------------+---------+---------+---------+---------+
+| os-onos-nofeature-noha | | | X | |
++-------------------------+---------+---------+---------+---------+
+| os-odl_l2-sfc-ha | | | X | |
++-------------------------+---------+---------+---------+---------+
+| os-odl_l2-sfc-noha | X | X | X | |
++-------------------------+---------+---------+---------+---------+
+| os-odl_l2-bgpvpn-ha | X | | X | |
++-------------------------+---------+---------+---------+---------+
+| os-odl_l2-bgpvpn-noha | | X | X | |
++-------------------------+---------+---------+---------+---------+
+| os-nosdn-kvm-ha | | | X | |
++-------------------------+---------+---------+---------+---------+
+| os-nosdn-kvm-noha | | X | X | |
++-------------------------+---------+---------+---------+---------+
+| os-nosdn-ovs-ha | | | X | |
++-------------------------+---------+---------+---------+---------+
+| os-nosdn-ovs-noha | X | | X | |
++-------------------------+---------+---------+---------+---------+
+| os-ocl-nofeature-ha | | | | |
++-------------------------+---------+---------+---------+---------+
+| os-nosdn-lxd-ha | | | | X |
++-------------------------+---------+---------+---------+---------+
+| os-nosdn-lxd-noha | | | | X |
++-------------------------+---------+---------+---------+---------+
+| os-odl_l2-fdio-noha | X | | | |
++-------------------------+---------+---------+---------+---------+
+| os-odl_l2-moon-ha | | X | | |
++-------------------------+---------+---------+---------+---------+
+
+To qualify for release, the scenarios must have been deployed and successfully
+tested in four consecutive installations to establish stability of deployment
+and feature capability. It is recommended to run Yardstick test cases over a
+longer period of time in order to better understand the behavior of the system
+under test.
+
+References
+----------
+
+* IEEE Std 829-2008. "Standard for Software and System Test Documentation".
+
+* OPNFV Colorado release note for Yardstick.
diff --git a/docs/release/results/results.rst b/docs/release/results/results.rst
new file mode 100644
index 000000000..04c6b9f87
--- /dev/null
+++ b/docs/release/results/results.rst
@@ -0,0 +1,57 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+
+Results listed by scenario
+==========================
+
+The following sections describe the Yardstick results as evaluated for the
+Colorado release scenario validation runs. Each section describes the
+determined state of the specific scenario as deployed in the Colorado
+release process.
+
+Scenario Results
+================
+
+.. _Dashboard: http://testresults.opnfv.org/grafana/dashboard/db/yardstick-main
+.. _Jenkins: https://build.opnfv.org/ci/view/yardstick/
+
+The following documents contain results of Yardstick test cases executed on
+OPNFV labs, triggered by OPNFV CI pipeline, documented per scenario.
+
+
+.. toctree::
+ :maxdepth: 1
+
+ os-nosdn-nofeature-ha.rst
+ os-nosdn-nofeature-noha.rst
+ os-odl_l2-nofeature-ha.rst
+ os-odl_l2-bgpvpn-ha.rst
+ os-odl_l2-sfc-ha.rst
+ os-nosdn-kvm-ha.rst
+ os-onos-nofeature-ha.rst
+ os-onos-sfc-ha.rst
+
+Test results of executed tests are available in Dashboard_ and logs in Jenkins_.
+
+
+Feature Test Results
+====================
+
+The following features were verified by Yardstick test cases:
+
+ * IPv6
+
+ * HA (see :doc:`yardstick-opnfv-ha`)
+
+ * KVM
+
+ * Parser
+
+ * Virtual Traffic Classifier (see :doc:`yardstick-opnfv-vtc`)
+
+ * StorPerf
+
+.. note:: The test cases for IPv6 and Parser Projects are included in the
+ compass scenario.
+
diff --git a/docs/release/results/yardstick-opnfv-ha.rst b/docs/release/results/yardstick-opnfv-ha.rst
new file mode 100644
index 000000000..ef1617342
--- /dev/null
+++ b/docs/release/results/yardstick-opnfv-ha.rst
@@ -0,0 +1,118 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+
+
+===================================
+Test Results for yardstick-opnfv-ha
+===================================
+
+.. toctree::
+ :maxdepth: 2
+
+Details
+=======
+There are two test cases, TC019 and TC025, for high availability (HA) testing
+of the OPNFV platform, and both test cases were executed in CMCC's lab with a
+3+2 HA deployment, where the installer is the Arno SR1 release of fuel.
+
+
+TC019
+-----
+This test case verifies the high availability of the openstack service, i.e.
+"nova-api", on the controller node.
+There is one attacker, "kill-process", which kills all "nova-api" processes,
+and two monitors: "openstack-cmd", which monitors the "nova-api" service via
+the openstack command "nova image-list", and "process", which checks whether
+the "nova-api" process is running. Please see the test case description
+document for details.
+
+Overview of test results
+------------------------
+The service_outage_time of "nova image-list" is 0 seconds, while the
+process_recover_time of "nova-api" is 300 seconds, which equals the running
+time of this test case; this means the "nova-api" service can't automatically
+recover itself.
+
+Detailed test results
+---------------------
+All "nova-api" processes on the selected controller node were killed, and the
+results of the two monitors were collected. Specifically, the results of the
+"nova image-list" requests were collected from the compute node and the status
+of the "nova-api" process was collected from the selected controller node.
+
+Each monitor was running in a single process. The running time of each monitor
+was about 300 seconds, with no waiting time between two consecutive monitor
+runs. For "nova image-list", the number of runs is 127, that is to say there is
+one openstack command request every 2.36 seconds; while the number of runs is
+141 for "nova-api" process checking, i.e. one check about every 2.13 seconds.
+
+The outage time of each monitor, named "service_outage_time" for the
+"openstack-cmd" monitor and "process_recover_time" for the "process" monitor,
+is defined as the duration from the begin time of the first failed request to
+the end time of the last failed request.
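+
+In other words, each monitor produces a sequence of timestamped request
+results and the outage time is derived from the failed ones. The sketch below
+only illustrates that calculation; the data structure is hypothetical, not the
+actual Yardstick monitor implementation:
+
+.. code-block:: python
+
+    def outage_time(results):
+        """results: list of (start_time, end_time, success) per request.
+
+        Returns the duration from the begin time of the first failed request
+        to the end time of the last failed request, or 0 if all succeeded.
+        """
+        failed = [(start, end) for start, end, ok in results if not ok]
+        if not failed:
+            return 0.0
+        return failed[-1][1] - failed[0][0]
+
+    # TC019: all 127 "nova image-list" requests succeed -> 0 s outage, while
+    # the "process" monitor never sees "nova-api" running -> approx. 300 s.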
+
+All "nova image-list" requests were successful, so the service_outage_time of
+"nova image-list" is 0 seconds, while the "nova-api" process was not running
+for any "process" check, so the process_recover_time of "nova-api" is 300 s.
+
+Rationale for decisions
+-----------------------
+The service_outage_time is 0 seconds, which means the failover time of the
+openstack service is less than 2.36 s, the period of each request. However, the
+process_recover_time equals the test case running time, which means the process
+is not automatically recovered, so this test case fails.
+
+
+TC025
+-----
+This test case verifies the high availability of the controller node. When one
+of the controller nodes is abnormally shut down, the provided services should
+still be OK.
+There is one attacker, "kill-process", which kills all "nova-api" processes,
+and two "openstack-cmd" monitors, one monitoring the openstack command
+"nova image-list" and the other monitoring "neutron router-list".
+Please see the test case description document for details.
+
+Overview of test results
+------------------------
+Both the service_outage_time of "nova image-list" and that of
+"neutron router-list" were 0 seconds.
+
+Detailed test results
+---------------------
+A selected controller node was shut down, and the results of the two monitors
+were collected from the compute node.
+
+The return results of the "nova image-list" and "neutron router-list" requests
+from the compute node were collected, and the durations of the failed requests
+were used to calculate the service_outage_time of the corresponding service.
+
+Each monitor was running in a single process. The running time of each monitor
+was about 300 seconds, with no waiting time between two consecutive monitor
+runs. For "nova image-list", the number of runs is 49, that is to say there is
+one openstack command request every 6.12 seconds; while the number of runs is
+28 for "neutron router-list", i.e. one request about every 10.71 seconds.
+
+The "service_outage_time" for two monitors is defined as the duration from the
+begin time of the first failure request to the end time of the last failure
+request.
+
+All "nova image-list" and "neutron router-list" requests were successful, so
+the service_outage_time of both monitors was 0 seconds.
+
+Rationale for decisions
+-----------------------
+As the service_outage_time of all monitors is 0 seconds, meaning there were no
+failed requests during the test case running time, this test case is passed.
+
+
+Conclusions and recommendations
+-------------------------------
+TC019 shows that the killed process is not automatically recovered, which
+should be improved.
+
+There are several improvement points for HA tests:
+
+a) Running test cases in different environments deployed by different
+   installers, such as compass4nfv, apex and joid, with different versions.
+b) The period of each request is a little long; a more accurate test method is
+   needed.
+c) More test cases with different faults and different monitors are needed.
diff --git a/docs/release/results/yardstick-opnfv-kvm.rst b/docs/release/results/yardstick-opnfv-kvm.rst
new file mode 100644
index 000000000..ee4c6390b
--- /dev/null
+++ b/docs/release/results/yardstick-opnfv-kvm.rst
@@ -0,0 +1,38 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+
+
+====================================
+Test Results for yardstick-opnfv-kvm
+====================================
+
+.. toctree::
+ :maxdepth: 2
+
+
+Details
+=======
+
+.. after this doc is filled, remove all comments and include the scenario in
+.. results.rst by removing the comment on the file name.
+
+
+Overview of test results
+------------------------
+
+.. general on metrics collected, number of iterations
+
+Detailed test results
+---------------------
+
+.. info on lab, installer, scenario
+
+Rationale for decisions
+-----------------------
+.. result analysis, pass/fail
+
+Conclusions and recommendations
+-------------------------------
+
+.. did the expected behavior occur?
diff --git a/docs/release/results/yardstick-opnfv-parser.rst b/docs/release/results/yardstick-opnfv-parser.rst
new file mode 100644
index 000000000..520d867ef
--- /dev/null
+++ b/docs/release/results/yardstick-opnfv-parser.rst
@@ -0,0 +1,38 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+
+
+=======================================
+Test Results for yardstick-opnfv-parser
+=======================================
+
+.. toctree::
+ :maxdepth: 2
+
+
+Details
+=======
+
+.. after this doc is filled, remove all comments and include the scenario in
+.. results.rst by removing the comment on the file name.
+
+
+Overview of test results
+------------------------
+
+.. general on metrics collected, number of iterations
+
+Detailed test results
+---------------------
+
+.. info on lab, installer, scenario
+
+Rationale for decisions
+-----------------------
+.. result analysis, pass/fail
+
+Conclusions and recommendations
+-------------------------------
+
+.. did the expected behavior occur?
diff --git a/docs/release/results/yardstick-opnfv-vtc.rst b/docs/release/results/yardstick-opnfv-vtc.rst
new file mode 100644
index 000000000..059b5491f
--- /dev/null
+++ b/docs/release/results/yardstick-opnfv-vtc.rst
@@ -0,0 +1,248 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+
+.. _Dashboard006: http://testresults.opnfv.org/grafana/dashboard/db/yardstick-tc006
+.. _Dashboard007: http://testresults.opnfv.org/grafana/dashboard/db/yardstick-tc007
+.. _Dashboard020: http://testresults.opnfv.org/grafana/dashboard/db/yardstick-tc020
+.. _Dashboard021: http://testresults.opnfv.org/grafana/dashboard/db/yardstick-tc021
+.. _DashboardVTC: http://testresults.opnfv.org/grafana/dashboard/db/vtc-dashboard
+
+====================================
+Test Results for yardstick-opnfv-vtc
+====================================
+
+.. toctree::
+ :maxdepth: 2
+
+
+Details
+=======
+
+.. after this doc is filled, remove all comments and include the scenario in
+.. results.rst by removing the comment on the file name.
+
+
+Overview of test results
+------------------------
+
+.. general on metrics collected, number of iterations
+
+The virtual Traffic Classifier (vtc) Scenario supported by Yardstick is used by 4 Test Cases:
+
+- TC006
+- TC007
+- TC020
+- TC021
+
+
+* TC006
+
+TC006 is the Virtual Traffic Classifier Data Plane Throughput Benchmarking Test.
+It collects measurements of the end-to-end throughput supported by the
+virtual Traffic Classifier (vTC).
+Results of the test are shown in Dashboard006_.
+The throughput is expressed as a percentage of the available bandwidth on the
+NIC.
+
+
+* TC007
+
+TC007 is the Virtual Traffic Classifier Data Plane Throughput Benchmarking in
+presence of noisy neighbors Test.
+It collects measurements of the end-to-end throughput supported by the virtual
+Traffic Classifier when a user-defined number of noisy neighbors is deployed.
+Results of the test are shown in Dashboard007_.
+The throughput is expressed as a percentage of the available bandwidth on the
+NIC.
+
+
+* TC020
+
+TC020 is the Virtual Traffic Classifier Instantiation Test.
+It verifies that a newly instantiated vTC is alive and functional and its instantiation
+is correctly supported by the underlying infrastructure.
+Results of the test are shown in Dashboard020_.
+
+
+* TC021
+
+TC021 is the Virtual Traffic Classifier Instantiation in presence of noisy neighbors Test.
+It verifies that a newly instantiated vTC is alive and functional and its instantiation
+is correctly supported by the underlying infrastructure when noisy neighbors are present.
+Results of the test are shown in Dashboard021_.
+
+* Generic
+
+In the Generic scenario the Virtual Traffic Classifier is running on a standard
+Openstack setup and traffic is being replayed from a neighbor VM. The traffic
+sent contains various protocols and applications, and the vTC identifies them
+and exports the data.
+Results of the test are shown in the DashboardVTC_.
+
+Detailed test results
+---------------------
+
+* TC006
+
+The results for TC006 have been obtained using the following test case
+configuration:
+
+- Context: Dummy
+- Scenario: vtc_throughput
+- Network Technology: SR-IOV
+- vTC Flavor: m1.large
+
+
+* TC007
+
+The results for TC007 have been obtained using the following test case
+configuration:
+
+- Context: Dummy
+- Scenario: vtc_throughput_noisy
+- Network Technology: SR-IOV
+- vTC Flavor: m1.large
+- Number of noisy neighbors: 2
+- Number of cores per neighbor: 2
+- Amount of RAM per neighbor: 1G
+
+
+* TC020
+
+The results for TC020 have been obtained using the following test case
+configuration:
+
+- Context: Dummy
+- Scenario: vtc_instantiation_validation
+- Network Technology: SR-IOV
+- vTC Flavor: m1.large
+
+
+* TC021
+
+The results listed in the previous section have been obtained using the
+following test case configuration:
+
+- Context: Dummy
+- Scenario: vtc_instantiation_validation
+- Network Technology: SR-IOV
+- vTC Flavor: m1.large
+- Number of noisy neighbors: 2
+- Number of cores per neighbor: 2
+- Amount of RAM per neighbor: 1G
+
+
+For all the test cases, the user can specify different values for the parameters.
+
+* Generic
+
+The results listed in the previous section have been obtained using a
+standard Openstack setup.
+The user can replay his/her own traffic and see the corresponding results.
+
+Rationale for decisions
+-----------------------
+
+* TC006
+
+The result of the test is a number between 0 and 100 which represents the percentage of bandwidth
+available on the NIC that corresponds to the supported throughput by the vTC.
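+
+Since the metric is a percentage of the NIC line rate, converting it to an
+absolute throughput requires knowing the NIC speed. The trivial sketch below
+assumes a 10 Gbps NIC, consistent with the 4 Gbps / 40% figure quoted in the
+conclusions:
+
+.. code-block:: python
+
+    def percent_to_gbps(percent, nic_speed_gbps=10.0):
+        """Convert the TC006/TC007 percentage result to Gbps."""
+        return nic_speed_gbps * percent / 100.0
+
+    print(percent_to_gbps(40))   # -> 4.0 Gbps, as reported for TC006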
+
+
+* TC007
+
+The result of the test is a number between 0 and 100 which represents the percentage of bandwidth
+available on the NIC that corresponds to the supported throughput by the vTC.
+
+* TC020
+
+The execution of the test is done as described in the following:
+
+- The vTC is deployed on the OpenStack testbed;
+- Some traffic is sent to the vTC;
+- The vTC changes the header of the packets and sends them back to the packet generator;
+- The packet generator checks that all the packets are received correctly and
+  have been changed correctly by the vTC.
+
+The test is declared as PASSED if all the packets are correctly received by the
+packet generator and they have been modified by the virtual Traffic Classifier
+as required.
+
+
+* TC021
+
+The execution of the test is done as described in the following:
+
+- The vTC is deployed on the OpenStack testbed;
+- The noisy neighbors are deployed as requested by the user;
+- Some traffic is sent to the vTC;
+- The vTC changes the header of the packets and sends them back to the packet
+  generator;
+- The packet generator checks that all the packets are received correctly and
+  have been changed correctly by the vTC.
+
+The test is declared as PASSED if all the packets are correctly received by the
+packet generator and they have been modified by the virtual Traffic Classifier
+as required.
+
+* Generic
+
+The execution of the test consists of the following actions:
+
+- The vTC is deployed on the OpenStack testbed;
+- The traffic generator VM is deployed on the Openstack Testbed;
+- Traffic data are relevant to the network setup;
+- Traffic is sent to the vTC;
+
+
+
+Conclusions and recommendations
+-------------------------------
+
+* TC006
+
+The obtained results show that the virtual Traffic Classifier can support up to
+4 Gbps (40% of the available bandwidth), which corresponds to the expected
+behaviour of the virtual Traffic Classifier.
+Using the configuration with SR-IOV and large flavor, the expected throughput
+should generally be in the range between 3 and 4 Gbps.
+
+
+* TC007
+
+These results correspond to the configuration in which the virtual Traffic
+Classifier uses SR-IOV Virtual Functions and the flavor is set to large for the
+virtual machine.
+The throughput is in the range between 2.5 Gbps and 3.7 Gbps.
+This shows that the effect of 2 noisy neighbors reduces the throughput of
+the service by between 10 and 20%.
+Increasing the number of neighbors would have a higher impact on performance.
+
+
+* TC020
+
+The obtained results correspond to the expected behaviour of the virtual Traffic Classifier.
+Using the configuration with SR-IOV and large flavor, the expected result is that the vTC is
+correctly instantiated, it is able to receive and send packets using SR-IOV technology
+and to forward packets back to the packet generator changing the TCP/IP header as required.
+
+
+* TC021
+
+The obtained results correspond to the expected behaviour of the virtual Traffic Classifier.
+Using the configuration with SR-IOV and large flavor, the expected result is that the vTC is
+correctly instantiated, it is able to receive and send packets using SR-IOV technology
+and to forward packets back to the packet generator changing the TCP/IP header as required,
+also in presence of noisy neighbors.
+
+* Generic
+
+The obtained results correspond to the expected behaviour of the virtual Traffic Classifier.
+Using the aforementioned configuration, the expected application protocols are
+identified and their traffic statistics are demonstrated in the DashboardVTC_;
+a group of popular applications is selected to demonstrate the sound operation
+of the vTC. The demonstrated application protocols are:
+
+- HTTP
+- Skype
+- Bittorrent
+- Youtube
+- Dropbox
+- Twitter
+- Viber
+- iCloud