path: root/docs/results
author     wulin wang <wangwulin@huawei.com>     2016-09-21 00:52:22 +0000
committer  liang gao <jean.gaoliang@huawei.com>  2016-09-21 09:19:47 +0000
commit     f20f1a81c62d44e17fb8ce9c7fa814fcc2c3aa98 (patch)
tree       823719294b224ee54338300ce1ae59ec77f14f3f /docs/results
parent     84d6e1a32fc07603f7b6fa5990f0f4dc15e1a9dc (diff)
Update scenario test results file for Colorado release
JIRA: YARDSTICK-351

Change-Id: I1770bb6f5fe24bc43ee4c50776bcfd3ba89360d9
Signed-off-by: wulin wang <wangwulin@huawei.com>
(cherry picked from commit 77fc34b53bfcfd2c218d02c75dcd25235489aea9)
Diffstat (limited to 'docs/results')
-rw-r--r--  docs/results/os-odl_l2-nofeature-ha.rst  237
-rw-r--r--  docs/results/os-odl_l2-sfc-ha.rst        231
-rw-r--r--  docs/results/os-onos-sfc-ha.rst          243
-rw-r--r--  docs/results/results.rst                   1
4 files changed, 712 insertions, 0 deletions
diff --git a/docs/results/os-odl_l2-nofeature-ha.rst b/docs/results/os-odl_l2-nofeature-ha.rst
index 6eb6252af..53b1c11fe 100644
--- a/docs/results/os-odl_l2-nofeature-ha.rst
+++ b/docs/results/os-odl_l2-nofeature-ha.rst
@@ -272,3 +272,240 @@ Also of interest could be to see if there are continuous variations where
some test cases stand out with better or worse results than the general test
case.
+
+
+Joid
+=====
+
+.. _Grafana: http://testresults.opnfv.org/grafana/dashboard/db/yardstick-main
+.. _POD6: https://wiki.opnfv.org/pharos?&#community_test_labs
+
+Overview of test results
+------------------------
+
+See Grafana_ for viewing test result metrics for each respective test case. It
+is possible to choose which specific scenarios to look at, and then to zoom in
+on the details of each test scenario run as well.
+
+All of the test case results below are based on 4 scenario test runs, each run
+on the Intel POD6_ between September 1 and 8, 2016.
+
+TC002
+-----
+The round-trip-time (RTT) between 2 VMs on different blades is measured using
+ping. Most test run measurements average between 1.01 ms and 1.88 ms. Only one
+test run reached the greatest RTT spike of 1.88 ms, while the smallest network
+latency, 1.01 ms, was obtained on Sep. 1. In general, the average network
+latency of the four test runs lies between 1.29 ms and 1.34 ms. The SLA is set
+to 10 ms. The SLA value is used as a reference; it has not
+been defined by OPNFV.
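+
+For illustration only (this is not part of the Yardstick test definition), a
+minimal Python sketch of how such a ping-based RTT sample could be collected
+and compared against the 10 ms reference SLA; the target address is a
+placeholder::
+
+    import re
+    import subprocess
+
+    def ping_rtt_avg(target="10.0.0.2", count=10):
+        """Run ping and return the average RTT in ms from its summary line."""
+        out = subprocess.check_output(
+            ["ping", "-c", str(count), target], universal_newlines=True)
+        # Summary looks like: rtt min/avg/max/mdev = 0.31/1.29/1.88/0.12 ms
+        match = re.search(r"= [\d.]+/([\d.]+)/", out)
+        return float(match.group(1))
+
+    if __name__ == "__main__":
+        avg = ping_rtt_avg()
+        print("average RTT: %.2f ms, reference SLA: 10 ms" % avg)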
+
+TC005
+-----
+The IO read bandwidth actually refers to the storage throughput, which is
+measured by fio, and the greatest IO read bandwidth of the four runs is 183.65
+MB/s. The IO read bandwidth of three of the runs looks similar, with an average
+between 62.9 and 64.3 MB/s, except for one run on Sep. 1, whose maximum storage
+throughput is only 159.1 MB/s. One of the runs has a minimum BW of 685 KB/s and
+another has a maximum BW of 183.6 MB/s. The read bandwidth SLA is set to
+400 MB/s, which is used as a reference; it has not been defined by OPNFV.
+
+The results of storage IOPS for the four runs look similar to each other. The
+IO read operations per second of the four test runs have an average value
+between 1.41k and 1.64k per second, while the minimum result is only 55 reads
+per second.
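+
+A hedged sketch (not the actual test definition) of how an fio read job could
+be launched from Python and its bandwidth and IOPS read back; the job
+parameters and JSON field names are assumptions to verify against the local
+fio version::
+
+    import json
+    import subprocess
+
+    def fio_read_stats(path="/tmp/fio.test", runtime=30):
+        """Run a sequential read job; return (bandwidth in KiB/s, IOPS)."""
+        out = subprocess.check_output([
+            "fio", "--name=readjob", "--filename=" + path, "--rw=read",
+            "--bs=4k", "--size=64m", "--runtime=" + str(runtime),
+            "--time_based", "--output-format=json"],
+            universal_newlines=True)
+        job = json.loads(out)["jobs"][0]["read"]
+        # "bw" is assumed to be reported in KiB/s in fio's JSON output.
+        return job["bw"], job["iops"]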
+
+TC010
+-----
+The tool we use to measure memory read latency is lmbench, which is a series of
+micro benchmarks intended to measure basic operating system and hardware system
+metrics. The memory read latency of the four runs is between 1.152 ns and 1.179
+ns on average. The variations within the test runs differ: some runs span a
+large range while others change very little. For example, the largest change is
+on September 8, where the memory read latency ranges from 1.120 ns to 1.221 ns,
+whereas the results on September 7 change very little. The SLA is set to 30 ns.
+The SLA value is used as a reference; it has not been defined by OPNFV.
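+
+For illustration, a minimal sketch of how lmbench's ``lat_mem_rd`` output
+could be collected; the binary name, array size and stride arguments are
+assumptions, and the results are read from stderr as "size latency" pairs::
+
+    import subprocess
+
+    def mem_read_latency(size_mb=128, stride=128):
+        """Return a list of (array size in MB, latency in ns) samples."""
+        proc = subprocess.run(
+            ["lat_mem_rd", str(size_mb), str(stride)],
+            stderr=subprocess.PIPE, universal_newlines=True)
+        samples = []
+        for line in proc.stderr.splitlines():
+            parts = line.split()
+            if len(parts) == 2:
+                samples.append((float(parts[0]), float(parts[1])))
+        return samples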
+
+TC011
+-----
+Iperf3 is a tool for evaluating the packet delay variation between 2 VMs on
+different blades. The reported packet delay variations of the four test runs
+differ from each other. In general, the packet delay of the first two runs
+looks similar, as both stay stable within each run, with mean packet delays of
+0.0087 ms and 0.0127 ms respectively. Of the four runs, the fourth has the
+worst result, with the packet delay reaching 0.0187 ms. The SLA value is set
+to 10 ms. The SLA value is used as a reference; it has not been defined
+by OPNFV.
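+
+A minimal, hedged sketch of how an iperf3 UDP run could be used to read the
+packet delay variation (jitter); the server address is a placeholder and the
+JSON field path is an assumption to verify against the installed iperf3::
+
+    import json
+    import subprocess
+
+    def udp_jitter_ms(server="10.0.0.2", bandwidth="20M", seconds=30):
+        """Run an iperf3 UDP client and return the reported jitter in ms."""
+        out = subprocess.check_output(
+            ["iperf3", "-c", server, "-u", "-b", bandwidth,
+             "-t", str(seconds), "--json"],
+            universal_newlines=True)
+        return json.loads(out)["end"]["sum"]["jitter_ms"]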
+
+TC012
+-----
+Lmbench is also used to measure the memory read and write bandwidth, in which
+we use bw_mem to obtain the results. Among the four test runs, three of the
+memory bandwidth results follow a similar trend, all within a narrow range, and
+the average result is 11.78 GB/s. Here the SLA is set to 15 GB/s. The SLA value
+is used as a reference; it has not been defined by OPNFV.
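+
+For illustration, a hedged sketch of how lmbench's ``bw_mem`` could be driven
+to obtain read and write bandwidth figures; the buffer size is arbitrary and
+``bw_mem`` is assumed to print "size_MB bandwidth_MB/s" on stderr::
+
+    import subprocess
+
+    def mem_bw_gbs(operation, size="128m"):
+        """Run bw_mem for 'rd' or 'wr' and return the bandwidth in GB/s."""
+        proc = subprocess.run(
+            ["bw_mem", size, operation],
+            stderr=subprocess.PIPE, universal_newlines=True)
+        # The last field of the output line is the bandwidth in MB/s.
+        mb_per_s = float(proc.stderr.split()[-1])
+        return mb_per_s / 1000.0
+
+    if __name__ == "__main__":
+        print("read: %.2f GB/s, write: %.2f GB/s"
+              % (mem_bw_gbs("rd"), mem_bw_gbs("wr")))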
+
+TC014
+-----
+Unixbench is used to evaluate the IaaS processing speed with regard to the
+scores of single CPU running and parallel running. It can be seen from the
+dashboard that the processing test scores vary from 3260k to 3328k, with only
+one result per date. No SLA is set.
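+
+A hedged sketch of how the UnixBench ``Run`` script could be invoked for the
+single-CPU and parallel cases; the copy counts and the install directory are
+assumptions::
+
+    import multiprocessing
+    import subprocess
+
+    def run_unixbench(unixbench_dir="/opt/unixbench"):
+        """Run UnixBench once with 1 copy and once with one copy per CPU."""
+        ncpu = multiprocessing.cpu_count()
+        subprocess.check_call(
+            ["./Run", "-c", "1", "-c", str(ncpu)], cwd=unixbench_dir)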
+
+TC037
+-----
+The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
+on different blades are measured when increasing the amount of UDP flows sent
+between the VMs using pktgen as the packet generator tool.
+
+Round trip times and packet throughput between VMs can typically be affected by
+the amount of flows set up and result in higher RTT and less PPS throughput.
+
+The mean packet throughput of the four test runs is between 307.3 kpps and
+447.1 kpps, of which the result of the third run is the highest. The RTT
+results of all the test runs stay flat at approx. 15 ms. It is obvious that the
+PPS results are not as consistent as the RTT results.
+
+The number of flows in the four test runs is 240 k on average and the PPS
+results fluctuate somewhat, with the largest packet throughput at 418.1 kpps
+and the minimum throughput at 326.5 kpps.
+
+There are no errors of packets received in the four runs, but there are still
+lost packets in all the test runs. The RTT values obtained by ping of the four
+runs have a similar average value, that is, approx. 15 ms.
+
+CPU load is measured by mpstat, and the CPU load of the four test runs looks
+similar, with the minimum value and the peak of the CPU load between 0 percent
+and nine percent respectively. The highest result is obtained on Sep. 1, with a
+CPU load of nine percent. On the whole, the CPU load is very low, since the
+average value is quite small.
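+
+For illustration, a hedged sketch of how the CPU load could be sampled with
+mpstat and summarised; it assumes the last column of the "Average:" line is
+%idle, as in common sysstat versions::
+
+    import subprocess
+
+    def avg_cpu_load(interval=1, samples=10):
+        """Return the average CPU utilisation in percent over the window."""
+        out = subprocess.check_output(
+            ["mpstat", str(interval), str(samples)], universal_newlines=True)
+        for line in out.splitlines():
+            if line.startswith("Average:") and "all" in line:
+                # Last column of the Average line is assumed to be %idle.
+                idle = float(line.split()[-1].replace(",", "."))
+                return 100.0 - idle
+        return None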
+
+TC069
+-----
+With the block size changing from 1 kb to 512 kb, the memory write bandwidth
+tends to become larger first and then smaller within every test run, ranging
+from 21.9 GB/s to 25.9 GB/s and then down to 17.8 GB/s on average. Since the
+test id is one, only the INT memory write bandwidth is tested. On the whole,
+when the block size is 2 kb or 16 kb, the memory write bandwidth looks similar,
+with a minimum BW of 24.8 GB/s and a peak value of 27.8 GB/s. As the block size
+becomes larger, the memory write bandwidth tends to decrease. The SLA is set to
+7 GB/s. The SLA value is used as a reference; it has not been defined by OPNFV.
+
+TC070
+-----
+The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
+on different blades are measured when increasing the amount of UDP flows sent
+between the VMs using pktgen as the packet generator tool.
+
+Round trip times and packet throughput between VMs can typically be affected by
+the amount of flows set up and result in higher RTT and less PPS throughput.
+
+The network latency is measured by ping, and the results of the four test runs
+look similar to each other; within these test runs, the maximum RTT can reach
+39 ms and the average RTT is usually approx. 15 ms. The network latency tested
+on Sep. 1 and Sep. 8 has a peak latency of 39 ms. But on the whole, the average
+RTTs of the four runs stay flat and the network latency is relatively short.
+
+Memory utilization is measured by free, which can display the amount of free
+and used memory in the system. The largest amount of used memory is 267 MiB for
+the four runs. In general, the four test runs have very large memory
+utilization, which can reach 257 MiB on average. On the other hand, the mean
+free memory of the four test runs follows a similar trend to that of the mean
+used memory. In general, the mean free memory varies from 233 MiB to 241 MiB.
+
+Packet throughput and packet loss can be measured by pktgen, which is a tool
+for generating traffic loads for network experiments. The mean packet
+throughput of the four test runs seems quite different, ranging from 305.3 kpps
+to 447.1 kpps. The average number of flows in these tests is 240000, and each
+run has a minimum number of flows of 2 and a maximum number of flows of 1.001
+million. At the same time, the corresponding average packet throughput is
+between 354.4 kpps and 381.8 kpps. In summary, the PPS results seem consistent.
+Within each of the four test runs, the packet throughput does not grow as the
+number of flows becomes larger.
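+
+A minimal sketch of how the used and free memory figures could be read from
+``free``; it assumes the standard "Mem:" line layout with used in the third
+column and free in the fourth::
+
+    import subprocess
+
+    def memory_usage_mib():
+        """Return (used, free) memory in MiB as reported by free -m."""
+        out = subprocess.check_output(["free", "-m"], universal_newlines=True)
+        for line in out.splitlines():
+            if line.startswith("Mem:"):
+                fields = line.split()
+                return int(fields[2]), int(fields[3])
+        return None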
+
+TC071
+-----
+The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
+on different blades are measured when increasing the amount of UDP flows sent
+between the VMs using pktgen as the packet generator tool.
+
+Round trip times and packet throughput between VMs can typically be affected by
+the amount of flows set up and result in higher RTT and less PPS throughput.
+
+The network latency is measured by ping, and the results of the four test runs
+look similar to each other. Within each test run, the maximum RTT is only 42 ms
+and the average RTT is usually approx. 15 ms. On the whole, the average RTTs of
+the four runs stay stable and the network latency is relatively small.
+
+Cache utilization is measured by cachestat, which can display the size of the
+cache and buffer in the system. Cache utilization statistics are collected
+during UDP flows sent between the VMs using pktgen as the packet generator
+tool. The largest cache size is 212 MiB, which is the same for all four runs,
+and the smallest cache size is 75 MiB. On the whole, the average cache size of
+the four runs looks about the same, between 197 MiB and 211 MiB. Meanwhile, the
+trend of the buffer size stays flat, with a minimum value of 7 MiB, a maximum
+value of 8 MiB and an average value of about 7.9 MiB.
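+
+A hedged sketch of how cache and buffer sizes could be sampled with the
+perf-tools ``cachestat`` script; the column layout (buffer and cache size in
+MB as the last two columns) is an assumption about that particular tool::
+
+    import subprocess
+
+    def cache_buffer_mb(interval=1, count=5):
+        """Return a list of (buffers_mb, cached_mb) samples from cachestat."""
+        out = subprocess.check_output(
+            ["cachestat", str(interval), str(count)], universal_newlines=True)
+        samples = []
+        for line in out.splitlines():
+            fields = line.split()
+            # Skip the header; keep rows whose last two columns are numeric.
+            if len(fields) >= 2 and fields[-1].isdigit() and fields[-2].isdigit():
+                samples.append((int(fields[-2]), int(fields[-1])))
+        return samples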
+
+Packet throughput can be measured by pktgen, which is a tool for generating
+traffic loads for network experiments. The mean packet throughput of the four
+test runs varies from 354.4 kpps to 381.8 kpps. The average number of flows in
+these tests is 240k, and each run has a minimum number of flows of 2 and a
+maximum number of flows of 1.001 million. At the same time, the corresponding
+packet throughput varies between 305.3 kpps and 447.1 kpps. Within each of the
+four test runs, the packet throughput does not grow as the number of flows
+becomes larger.
+
+TC072
+-----
+The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
+on different blades are measured when increasing the amount of UDP flows sent
+between the VMs using pktgen as the packet generator tool.
+
+Round trip times and packet throughput between VMs can typically be affected by
+the amount of flows set up and result in higher RTT and less PPS throughput.
+
+The RTT results are similar throughout the different test dates and runs,
+between 0 ms and 42 ms, with an average latency of less than 15 ms. The PPS
+results are not as consistent as the RTT results, as the mean packet
+throughput of the four runs varies from 354.4 kpps to 381.8 kpps.
+
+Network utilization is measured by sar, the system activity reporter, which
+can display the average statistics for the time since the system was started.
+Network utilization statistics are collected during UDP flows sent between the
+VMs using pktgen as the packet generator tool. The total number of packets
+transmitted per second looks similar for three of the test runs, whose values
+vary widely from 10 pps to 501 kpps, while the remaining test run stays stable
+with an average of 10 packets transmitted per second. However, the total number
+of packets received per second of the four runs looks similar, spanning a wide
+range from 2 pps to 815 kpps.
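+
+For illustration, a hedged sketch of how the transmitted and received packet
+rates could be read from sar; the interface name is a placeholder and the
+rxpck/s and txpck/s column positions are assumptions about the local sysstat::
+
+    import subprocess
+
+    def packet_rates(iface="eth0", interval=1, samples=5):
+        """Return (rx packets/s, tx packets/s) from the sar Average line."""
+        out = subprocess.check_output(
+            ["sar", "-n", "DEV", str(interval), str(samples)],
+            universal_newlines=True)
+        for line in out.splitlines():
+            fields = line.split()
+            if line.startswith("Average:") and iface in fields:
+                # Columns: Average: IFACE rxpck/s txpck/s rxkB/s txkB/s ...
+                return float(fields[2]), float(fields[3])
+        return None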
+
+In some test runs, when running with less than approx. 251000 flows the PPS
+throughput is normally flatter compared to when running with more flows, after
+which the PPS throughput decreases. For the other test runs there is, however,
+no significant change to the PPS throughput when the number of flows is
+increased. In some test runs the PPS is also greater with 251000 flows
+compared to other test runs where the PPS result is lower with only 2 flows.
+
+There are lost packets reported in most of the test runs. There is no observed
+correlation between the amount of flows and the amount of lost packets.
+The amount of lost packets normally differs a lot between test runs.
+
+Detailed test results
+---------------------
+The scenario was run on Intel POD6_ with:
+
+* Joid
+* OpenStack Mitaka
+* OpenVirtualSwitch 2.5.90
+* OpenDayLight Beryllium
+
+Rationale for decisions
+-----------------------
+Pass
+
+Conclusions and recommendations
+-------------------------------
+Tests were successfully executed and metrics collected.
+No SLA was verified. To be decided on in the next release of OPNFV.
diff --git a/docs/results/os-odl_l2-sfc-ha.rst b/docs/results/os-odl_l2-sfc-ha.rst
new file mode 100644
index 000000000..e27562cae
--- /dev/null
+++ b/docs/results/os-odl_l2-sfc-ha.rst
@@ -0,0 +1,231 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+
+
+==================================
+Test Results for os-odl_l2-sfc-ha
+==================================
+
+.. toctree::
+ :maxdepth: 2
+
+
+Fuel
+=====
+
+.. _Grafana: http://testresults.opnfv.org/grafana/dashboard/db/yardstick-main
+.. _POD2: https://wiki.opnfv.org/pharos?&#community_test_labs
+
+Overview of test results
+------------------------
+
+See Grafana_ for viewing test result metrics for each respective test case. It
+is possible to choose which specific scenarios to look at, and then to zoom in
+on the details of each test scenario run as well.
+
+All of the test case results below are based on 4 scenario test runs, each run
+on the LF POD2_ or Ericsson POD2_ between September 16 and 20, 2016.
+
+TC002
+-----
+The round-trip-time (RTT) between 2 VMs on different blades is measured using
+ping. Most test run measurements average between 0.32 ms and 1.42 ms. Only one
+test run, on Sep. 20, reached the greatest RTT spike of 4.66 ms, while the
+smallest network latency, 0.16 ms, was obtained on Sep. 17. To sum up, the
+curve of network latency shows very little variation, staying below 5 ms. The
+SLA is set to 10 ms. The SLA value is used as a reference; it
+has not been defined by OPNFV.
+
+TC005
+-----
+The IO read bandwidth actually refers to the storage throughput, which is
+measured by fio, and the greatest IO read bandwidth of the four runs is 734
+MB/s. The IO read bandwidth of the first three runs looks similar, with an
+average of less than 100 KB/s, except for one run on Sep. 20, whose maximum
+storage throughput can reach 734 MB/s. The read bandwidth SLA is set to
+400 MB/s, which is used as a reference; it has not been defined by OPNFV.
+
+The results of storage IOPS for the four runs look similar to each other. The
+IO read operations per second of the four test runs have an average value
+between 1.8k and 3.27k per second, while the minimum result is only 60 reads
+per second.
+
+TC010
+-----
+The tool we use to measure memory read latency is lmbench, which is a series of
+micro benchmarks intended to measure basic operating system and hardware system
+metrics. The memory read latency of the four runs is between 1.085 ns and 1.218
+ns on average. The variations within each test run are quite small. For
+Ericsson POD2, the average memory latency is approx. 1.217 ns, while for LF
+POD2 the average value is about 1.085 ns. It can be seen that the performance
+of LF is better than Ericsson's. The SLA is set to 30 ns. The SLA value is
+used as a reference; it has not been defined by OPNFV.
+
+TC012
+-----
+Lmbench is also used to measure the memory read and write bandwidth, in which
+we use bw_mem to obtain the results. The four test runs all have a narrow range
+of variation, with an average memory read and write BW of 18.5 GB/s. Here the
+SLA is set to 15 GB/s. The SLA value is used as a reference; it has not been
+defined by OPNFV.
+
+TC014
+-----
+Unixbench is used to evaluate the IaaS processing speed with regard to the
+scores of single CPU running and parallel running. It can be seen from the
+dashboard that the processing test scores vary from 3209k to 3843k, with only
+one result per date. No SLA is set.
+
+TC037
+-----
+The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
+on different blades are measured when increasing the amount of UDP flows sent
+between the VMs using pktgen as the packet generator tool.
+
+Round trip times and packet throughput between VMs can typically be affected by
+the amount of flows set up and result in higher RTT and less PPS throughput.
+
+The mean packet throughput of three of the test runs is between 439 kpps and
+582 kpps, while the test run on Sep. 17 has the lowest average value of 371
+kpps. The RTT results of all the test runs stay flat at approx. 10 ms. It is
+obvious that the PPS results are not as consistent as the RTT results.
+
+The number of flows in the four test runs is 240 k on average and the PPS
+results fluctuate somewhat, with the largest packet throughput at 680 kpps and
+the minimum throughput at 319 kpps.
+
+There are no errors of packets received in the four runs, but there are still
+lost packets in all the test runs. The RTT values obtained by ping of the four
+runs show a similar RTT trend, with an average value of approx. 12 ms.
+
+CPU load is measured by mpstat, and the CPU load of the four test runs looks
+similar, with the minimum value and the peak of the CPU load between 0 percent
+and ten percent respectively. The highest result is obtained on Sep. 17, with a
+CPU load of ten percent. On the whole, the CPU load is very low, since the
+average value is quite small.
+
+TC069
+-----
+With the block size changing from 1 kb to 512 kb, the average memory write
+bandwidth tends to become larger first and then smaller within every test run
+for the two pods, ranging from 25.1 GB/s to 29.4 GB/s and then down to 19.2
+GB/s on average. Since the test id is one, only the INT memory write bandwidth
+is tested. On the whole, with the block size becoming larger, the memory write
+bandwidth tends to decrease. The SLA is set to 7 GB/s. The SLA value is used as
+a reference; it has not been defined by OPNFV.
+
+TC070
+-----
+The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
+on different blades are measured when increasing the amount of UDP flows sent
+between the VMs using pktgen as the packet generator tool.
+
+Round trip times and packet throughput between VMs can typically be affected by
+the amount of flows set up and result in higher RTT and less PPS throughput.
+
+The network latency is measured by ping, and the results of the four test runs
+look similar to each other; within these test runs, the maximum RTT can reach
+27 ms and the average RTT is usually approx. 12 ms. The network latency tested
+on Sep. 27 has a peak latency of 27 ms. But on the whole, the average RTTs of
+the four runs stay flat.
+
+Memory utilization is measured by free, which can display the amount of free
+and used memory in the system. The largest amount of used memory is 269 MiB for
+the four runs. In general, the four test runs have very large memory
+utilization, which can reach 251 MiB on average. On the other hand, the mean
+free memory of the four test runs follows a similar trend to that of the mean
+used memory. In general, the mean free memory varies from 231 MiB to 248 MiB.
+
+Packet throughput and packet loss can be measured by pktgen, which is a tool
+for generating traffic loads for network experiments. The mean packet
+throughput of the four test runs seems quite different, ranging from 371 kpps
+to 582 kpps. The average number of flows in these tests is 240000, and each run
+has a minimum number of flows of 2 and a maximum number of flows of 1.001
+million. At the same time, the corresponding average packet throughput is
+between 319 kpps and 680 kpps. In summary, the PPS results seem consistent.
+Within each of the four test runs, the packet throughput does not grow as the
+number of flows becomes larger.
+
+TC071
+-----
+The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
+on different blades are measured when increasing the amount of UDP flows sent
+between the VMs using pktgen as the packet generator tool.
+
+Round trip times and packet throughput between VMs can typically be affected by
+the amount of flows set up and result in higher RTT and less PPS throughput.
+
+The network latency is measured by ping, and the results of the four test runs
+look similar to each other. Within each test run, the maximum RTT is only 24 ms
+and the average RTT is usually approx. 12 ms. On the whole, the average RTTs of
+the four runs stay stable and the network latency is relatively small.
+
+Cache utilization is measured by cachestat, which can display the size of the
+cache and buffer in the system. Cache utilization statistics are collected
+during UDP flows sent between the VMs using pktgen as the packet generator
+tool. The largest cache size is 213 MiB and the smallest cache size is 99 MiB,
+which is the same for all four runs. On the whole, the average cache size of
+the four runs looks about the same, between 184 MiB and 205 MiB. Meanwhile, the
+trend of the buffer size stays stable, with a minimum value of 7 MiB and a
+maximum value of 8 MiB.
+
+Packet throughput can be measured by pktgen, which is a tool for generating
+traffic loads for network experiments. The mean packet throughput of the four
+test runs varies from 371 kpps to 582 kpps. The average number of flows in
+these tests is 240k, and each run has a minimum number of flows of 2 and a
+maximum number of flows of 1.001 million. At the same time, the corresponding
+packet throughput varies between 319 kpps and 680 kpps. Within each of the
+four test runs, the packet throughput does not grow as the number of flows
+becomes larger.
+
+TC072
+-----
+The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
+on different blades are measured when increasing the amount of UDP flows sent
+between the VMs using pktgen as the packet generator tool.
+
+Round trip times and packet throughput between VMs can typically be affected by
+the amount of flows set up and result in higher RTT and less PPS throughput.
+
+The RTT results are similar throughout the different test dates and runs,
+between 0 ms and 24 ms, with an average latency of less than 13 ms. The PPS
+results are not as consistent as the RTT results, as the mean packet
+throughput of the four runs varies from 370 kpps to 582 kpps.
+
+Network utilization is measured by sar, the system activity reporter, which
+can display the average statistics for the time since the system was started.
+Network utilization statistics are collected during UDP flows sent between the
+VMs using pktgen as the packet generator tool. The total number of packets
+transmitted per second looks similar for the four test runs, whose values vary
+widely from 10 pps to 697 kpps. However, the total number of packets received
+per second of three runs looks similar, spanning a wide range from 2 pps to
+1.497 Mpps, while the results on Sep. 18 and 20 have a smaller maximum number
+of packets received per second of 817 kpps.
+
+In some test runs, when running with less than approx. 251000 flows the PPS
+throughput is normally flatter compared to when running with more flows, after
+which the PPS throughput decreases. For the other test runs there is, however,
+no significant change to the PPS throughput when the number of flows is
+increased. In some test runs the PPS is also greater with 251000 flows
+compared to other test runs where the PPS result is lower with only 2 flows.
+
+There are lost packets reported in most of the test runs. There is no observed
+correlation between the amount of flows and the amount of lost packets.
+The amount of lost packets normally differs a lot between test runs.
+
+Detailed test results
+---------------------
+The scenario was run on Ericsson POD2_ and LF POD2_ with:
+
+* Fuel 9.0
+* OpenStack Mitaka
+* OpenVirtualSwitch 2.5.90
+* OpenDayLight Beryllium
+
+Rationale for decisions
+-----------------------
+Pass
+
+Conclusions and recommendations
+-------------------------------
+Tests were successfully executed and metrics collected.
+No SLA was verified. To be decided on in the next release of OPNFV.
diff --git a/docs/results/os-onos-sfc-ha.rst b/docs/results/os-onos-sfc-ha.rst
index 1a09f53d7..e52ae3d55 100644
--- a/docs/results/os-onos-sfc-ha.rst
+++ b/docs/results/os-onos-sfc-ha.rst
@@ -272,3 +272,246 @@ Also of interest could be to see if there are continuous variations where
some test cases stand out with better or worse results than the general test
case.
+
+Joid
+=====
+
+.. _Grafana: http://testresults.opnfv.org/grafana/dashboard/db/yardstick-main
+.. _POD6: https://wiki.opnfv.org/pharos?&#community_test_labs
+
+Overview of test results
+------------------------
+
+See Grafana_ for viewing test result metrics for each respective test case. It
+is possible to choose which specific scenarios to look at, and then to zoom in
+on the details of each test scenario run as well.
+
+All of the test case results below are based on 4 scenario test runs, each run
+on the Intel POD6_ between September 8 and 11, 2016.
+
+TC002
+-----
+The round-trip-time (RTT) between 2 VMs on different blades is measured using
+ping. Most test run measurements average between 1.35 ms and 1.57 ms. Only one
+test run reached the greatest RTT spike of 2.58 ms, while the smallest network
+latency, 1.11 ms, was obtained on Sep. 11. In general, the average network
+latency of the four test runs lies between 1.35 ms and 1.57 ms. The SLA is set
+to 10 ms. The SLA value is used as a reference, it
+has not been defined by OPNFV.
+
+TC005
+-----
+The IO read bandwidth actually refers to the storage throughput, which is
+measured by fio, and the greatest IO read bandwidth of the four runs is 175.4
+MB/s. The IO read bandwidth of three of the runs looks similar, with an average
+between 43.7 and 56.3 MB/s, except for one run on Sep. 8, whose maximum storage
+throughput is only 107.9 MB/s. One of the runs has a minimum BW of 478 KB/s and
+another has a maximum BW of 168.6 MB/s. The read bandwidth SLA is set to
+400 MB/s, which is used as a reference; it has not been defined by OPNFV.
+
+The results of storage IOPS for the four runs look similar to each other. The
+IO read operations per second of the four test runs have an average value
+between 978 and 1.20k per second, while the minimum result is only 36 reads per
+second.
+
+TC010
+-----
+The tool we use to measure memory read latency is lmbench, which is a series of
+micro benchmarks intended to measure basic operating system and hardware system
+metrics. The memory read latency of the four runs is between 1.164 ns and 1.244
+ns on average. The variations within the test runs differ: some runs span a
+large range while others change very little. For example, the largest change is
+on September 10, where the memory read latency ranges from 1.128 ns to 1.381
+ns, whereas the results on September 11 change very little. The SLA is set to
+30 ns. The SLA value is used as a reference; it has not been defined by OPNFV.
+
+TC011
+-----
+Iperf3 is a tool for evaluating the packet delay variation between 2 VMs on
+different blades. The reported packet delay variations of the four test runs
+differ from each other. In general, the packet delay of two runs looks similar,
+as both stay stable within each run, with mean packet delays of 0.0772 ms and
+0.0788 ms respectively. Of the four runs, the fourth has the worst result, with
+the packet delay reaching 0.0838 ms. The remaining run has a wide range from
+0.0666 ms to 0.0798 ms. The SLA value is set to 10 ms.
+The SLA value is used as a reference; it has not been defined by OPNFV.
+
+TC012
+-----
+Lmbench is also used to measure the memory read and write bandwidth, in which
+we use bw_mem to obtain the results. Among the four test runs, the memory
+bandwidth results follow an almost similar trend, all spanning a wide range,
+with minimum and maximum results of 9.02 GB/s and 18.14 GB/s. Here the SLA is
+set to 15 GB/s. The SLA value is used as a reference; it has not been defined
+by OPNFV.
+
+TC014
+-----
+Unixbench is used to evaluate the IaaS processing speed with regard to the
+scores of single CPU running and parallel running. It can be seen from the
+dashboard that the processing test scores vary from 3395 to 3475, with only one
+result per date. No SLA is set.
+
+TC037
+-----
+The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
+on different blades are measured when increasing the amount of UDP flows sent
+between the VMs using pktgen as the packet generator tool.
+
+Round trip times and packet throughput between VMs can typically be affected by
+the amount of flows set up and result in higher RTT and less PPS throughput.
+
+The mean packet throughput of the four test runs is between 362.1 kpps and
+363.5 kpps, of which the result of the third run is the highest. The RTT
+results of all the test runs stay flat at approx. 17 ms. It is obvious that the
+PPS results are not as consistent as the RTT results.
+
+The number of flows in the four test runs is 240 k on average and the PPS
+results fluctuate somewhat, with the largest packet throughput at 418.1 kpps
+and the minimum throughput at 326.5 kpps.
+
+There are no errors of packets received in the four runs, but there are still
+lost packets in all the test runs. The RTT values obtained by ping of the four
+runs have a similar average value, that is, approx. 17 ms, of which the worst
+RTT is 39 ms on Sep. 11.
+
+CPU load is measured by mpstat, and the CPU load of the four test runs looks
+similar, with the minimum value and the peak of the CPU load between 0 percent
+and nine percent respectively. The highest result is obtained on Sep. 10, with
+a CPU load of nine percent.
+
+TC069
+-----
+With the block size changing from 1 kb to 512 kb, the memory write bandwidth
+tends to become larger first and then smaller within every test run, ranging
+from 25.9 GB/s to 26.6 GB/s and then down to 18.1 GB/s on average. Since the
+test id is one, only the INT memory write bandwidth is tested. On the whole,
+when the block size is from 2 kb to 16 kb, the memory write bandwidth looks
+similar, with a minimum BW of 22.1 GB/s and a peak value of 28.6 GB/s. As the
+block size becomes larger, the memory write bandwidth tends to decrease. The
+SLA is set to 7 GB/s. The SLA value is used as a reference;
+it has not been defined by OPNFV.
+
+TC070
+-----
+The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
+on different blades are measured when increasing the amount of UDP flows sent
+between the VMs using pktgen as the packet generator tool.
+
+Round trip times and packet throughput between VMs can typically be affected by
+the amount of flows set up and result in higher RTT and less PPS throughput.
+
+The network latency is measured by ping, and the results of the four test runs
+look similar to each other; within these test runs, the maximum RTT can reach
+39 ms and the average RTT is usually approx. 17 ms. The network latency tested
+on Sep. 11 shows a peak latency of 39 ms. But on the whole, the average RTTs of
+the four runs stay flat and the network latency is relatively short.
+
+Memory utilization is measured by free, which can display the amount of free
+and used memory in the system. The largest amount of used memory is 270 MiB in
+the first two runs. In general, the mean used memory of two of the test runs
+shows very large memory utilization, reaching 264 MiB on average, while the
+other two runs have a wide range of memory usage, with a minimum value of 150
+MiB and a maximum value of 270 MiB. On the other hand, the mean free memory of
+the four test runs follows a similar trend to that of the mean used memory. In
+general, the mean free memory varies from 220 MiB to 342 MiB.
+
+Packet throughput and packet loss can be measured by pktgen, which is a tool
+for generating traffic loads for network experiments. The mean packet
+throughput of the four test runs seems quite different, ranging from 326.5 kpps
+to 418.1 kpps. The average number of flows in these tests is 240000, and each
+run has a minimum number of flows of 2 and a maximum number of flows of 1.001
+million. At the same time, the corresponding packet throughput varies between
+326.5 kpps and 418.1 kpps, with an average packet throughput between 361.7 kpps
+and 363.5 kpps. In summary, the PPS results seem consistent. Within each of the
+four test runs, the packet throughput does not grow as the number of flows
+becomes larger.
+
+TC071
+-----
+The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
+on different blades are measured when increasing the amount of UDP flows sent
+between the VMs using pktgen as the packet generator tool.
+
+Round trip times and packet throughput between VMs can typically be affected by
+the amount of flows set up and result in higher RTT and less PPS throughput.
+
+The network latency is measured by ping, and the results of the four test runs
+look similar to each other. Within each test run, the maximum RTT is only 47 ms
+and the average RTT is usually approx. 15 ms. On the whole, the average RTTs of
+the four runs stay stable and the network latency is relatively small.
+
+Cache utilization is measured by cachestat, which can display the size of the
+cache and buffer in the system. Cache utilization statistics are collected
+during UDP flows sent between the VMs using pktgen as the packet generator
+tool. The largest cache size is 214 MiB, which is the same for all four runs,
+and the smallest cache size is 94 MiB. On the whole, the average cache size of
+the four runs looks about the same, between 198 MiB and 207 MiB. Meanwhile, the
+trend of the buffer size stays flat, with a minimum value of 7 MiB, a maximum
+value of 8 MiB and an average value of about 7.9 MiB.
+
+Packet throughput can be measured by pktgen, which is a tool for generating
+traffic loads for network experiments. The mean packet throughput of the four
+test runs seems about the same, at approx. 363 kpps. The average number of
+flows in these tests is 240k, and each run has a minimum number of flows of 2
+and a maximum number of flows of 1.001 million. At the same time, the
+corresponding packet throughput varies between 327 kpps and 418 kpps, with an
+average packet throughput of about 363 kpps. Within each of the four test runs,
+the packet throughput does not grow as the number of flows becomes larger.
+
+TC072
+-----
+The amount of packets per second (PPS) and round trip times (RTT) between 2 VMs
+on different blades are measured when increasing the amount of UDP flows sent
+between the VMs using pktgen as the packet generator tool.
+
+Round trip times and packet throughput between VMs can typically be affected by
+the amount of flows set up and result in higher RTT and less PPS throughput.
+
+The RTT results are similar throughout the different test dates and runs,
+between 0 ms and 47 ms, with an average latency of less than 16 ms. The PPS
+results are not as consistent as the RTT results, as the mean packet
+throughput of the four runs varies from 361.7 kpps to 365.0 kpps.
+
+Network utilization is measured by sar, the system activity reporter, which
+can display the average statistics for the time since the system was started.
+Network utilization statistics are collected during UDP flows sent between the
+VMs using pktgen as the packet generator tool. The total number of packets
+transmitted per second looks similar for two of the test runs, whose values
+vary widely from 10 pps to 432 kpps, while the results of the other test runs
+seem the same and stay stable with an average of 10 packets transmitted per
+second. However, the total number of packets received per second of the four
+runs looks similar, spanning a wide range from 2 pps to 657 kpps.
+
+In some test runs, when running with less than approx. 250000 flows the PPS
+throughput is normally flatter compared to when running with more flows, after
+which the PPS throughput decreases. For the other test runs there is, however,
+no significant change to the PPS throughput when the number of flows is
+increased. In some test runs the PPS is also greater with 250000 flows
+compared to other test runs where the PPS result is lower with only 2 flows.
+
+There are lost packets reported in most of the test runs. There is no observed
+correlation between the amount of flows and the amount of lost packets.
+The amount of lost packets normally differs a lot between test runs.
+
+Detailed test results
+---------------------
+The scenario was run on Intel POD6_ with:
+
+* Joid
+* OpenStack Mitaka
+* Onos Goldeneye
+* OpenVirtualSwitch 2.5.90
+* OpenDayLight Beryllium
+
+Rationale for decisions
+-----------------------
+Pass
+
+Conclusions and recommendations
+-------------------------------
+Tests were successfully executed and metrics collected.
+No SLA was verified. To be decided on in the next release of OPNFV.
+
diff --git a/docs/results/results.rst b/docs/results/results.rst
index c5598a069..d51704252 100644
--- a/docs/results/results.rst
+++ b/docs/results/results.rst
@@ -30,6 +30,7 @@ OPNFV labs, triggered by OPNFV CI pipeline, documented per scenario.
os-nosdn-nofeature-noha.rst
os-onos-nofeature-h.rst
os-onos-sfc-ha.rst
+ os-odl_l2-sfc-ha.rst
Test results of executed tests are available in Dashboard_ and logs in Jenkins_.