.. This work is licensed under a Creative Commons Attribution 4.0 International
.. License.
.. http://creativecommons.org/licenses/by/4.0


===========================================
Test Results for fuel-os-nosdn-nofeature-ha
===========================================

.. toctree::
   :maxdepth: 2


Details
=======

.. _Grafana: http://130.211.154.108/grafana/dashboard/db/yardstick-main
.. _POD2: https://wiki.opnfv.org/pharos?&#community_test_labs

Overview of test results
------------------------

See Grafana_ for viewing test result metrics for each respective test case. It
is possible to choose which specific scenarios to look at, and then to zoom in
on the details of each test scenario run as well.

All of the test case results below are based on 5 consecutive scenario test
runs, each run on the Ericsson POD2_ between February 13 and 18, 2016. More
runs would be needed to draw firmer conclusions, but these are the only runs
available at the time of the OPNFV R2 release.

TC002
-----
The round-trip time (RTT) between 2 VMs on different blades is measured using
ping. The measurements vary on average between 0.5 and 1.1 ms, with an
initial 2 - 2.5 ms RTT spike at the beginning of each run (possibly caused by
normal ARP handling). The last two runs give very similar results, but more
runs would be needed to draw any further conclusions. One measurement taken
on February 16 lacks the initial RTT spike and shows less RTT variation; the
reason for this is unknown. Another test measurement made on February 16 is
discussed in TC037_.
SLA set to 10 ms. The SLA value is used as a reference; it has not
been defined by OPNFV.
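
The first-packet spike described above can be illustrated with a short
parsing sketch. The sample ping output below is hypothetical, shaped after
the reported runs; it is not actual POD2 data.

.. code-block:: python

   import re

   # Hypothetical ping output lines (not actual POD2 data), shaped after
   # the TC002 runs: an initial RTT spike, possibly from ARP resolution,
   # followed by steady-state values.
   lines = [
       "64 bytes from 10.0.0.5: icmp_seq=1 ttl=64 time=2.31 ms",
       "64 bytes from 10.0.0.5: icmp_seq=2 ttl=64 time=0.71 ms",
       "64 bytes from 10.0.0.5: icmp_seq=3 ttl=64 time=0.58 ms",
       "64 bytes from 10.0.0.5: icmp_seq=4 ttl=64 time=1.02 ms",
   ]

   def parse_rtts(ping_lines):
       """Extract the RTT values (in ms) from ping output lines."""
       return [float(m.group(1)) for line in ping_lines
               for m in [re.search(r"time=([\d.]+) ms", line)] if m]

   rtts = parse_rtts(lines)
   steady = rtts[1:]              # drop the first (possibly ARP-affected) sample
   avg = sum(steady) / len(steady)
   has_spike = rtts[0] > 2 * avg  # crude first-packet spike criterion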

TC005
-----
The IO read bandwidth looks similar between different test runs, with an
average at approx. 160-170 MB/s. Within each run the results vary widely,
with an overall minimum of 2 MB/s and maximum of 630 MB/s. Most runs have a
minimum of 3 MB/s (one run at 2 MB/s). The maximum BW varies more in
absolute numbers, between 566 and 630 MB/s.
SLA set to 400 MB/s. The SLA value is used as a reference; it has not been
defined by OPNFV.

TC010
-----
The measurements for memory latency are consistent among test runs, at
approx. 1.2 ns. The variation between runs is small, between 1.215 and
1.219 ns. One exception is February 16, where the variation is greater,
between 1.22 and 1.28 ns.
SLA set to 30 ns. The SLA value is used as a reference; it has not been
defined by OPNFV.

TC011
-----
For this scenario no results are available to report on. The probable reason
is an integer/floating point issue regarding how InfluxDB is populated with
result data from the test runs.

TC012
-----
The average measurements for memory bandwidth are consistent among most of
the test runs, at 17.2 - 17.3 GB/s. The very first test run averages
17.7 GB/s. Within each run the results vary, with an overall minimum BW of
15.4 GB/s and maximum of 18.2 GB/s.
SLA set to 15 GB/s. The SLA value is used as a reference; it has not been
defined by OPNFV.

TC014
-----
The Unixbench single-CPU and parallel processor speed scores show similar
results, at approx. 3200. The scores vary between 3160 and 3240 across runs.
No SLA set.

TC037
-----
The number of packets per second (PPS) and the round-trip time (RTT) between
2 VMs on different blades are measured while increasing the number of UDP
flows sent between the VMs, using pktgen as the packet generator tool.

Round-trip time and packet throughput between VMs are typically affected by
the number of flows set up, resulting in higher RTT and lower PPS throughput.

When running with less than 10000 flows the results are flat and consistent:
RTT is approx. 30 ms and throughput remains at approx. 250000 PPS. From
approx. 10000 flows up to 1000000 (one million), RTT and PPS performance
degrade steadily, eventually ending up at approx. 150-250 ms and 40000 PPS
respectively.
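
The trend described above can be sketched as a simple model. The breakpoints
and endpoint values are the approximate figures reported here, and the
log-linear interpolation is an illustrative simplification, not a measured
relation.

.. code-block:: python

   import math

   def expected_performance(flows):
       """Approximate (RTT in ms, PPS) following the observed trend:
       flat below ~10000 flows, degrading towards 1000000 flows."""
       if flows <= 10_000:
           return 30.0, 250_000.0
       # Interpolate in log10(flows) between the reported endpoints;
       # purely illustrative, not a fit to the measurement data.
       frac = min((math.log10(flows) - 4.0) / 2.0, 1.0)   # 0 at 1e4, 1 at 1e6
       rtt = 30.0 + frac * (200.0 - 30.0)                 # towards ~150-250 ms
       pps = 250_000.0 - frac * (250_000.0 - 40_000.0)    # towards ~40000 PPS
       return rtt, pps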

One measurement made on February 16 has slightly worse results than the other
4 measurements. The reason for this is unknown; for instance, someone being
logged onto the POD could cause such a disturbance.

Detailed test results
---------------------
The scenario was run on Ericsson POD2_ with:

* Fuel 8.0
* OpenStack Liberty
* OVS 2.3.1
* No SDN controller installed

Rationale for decisions
-----------------------
Pass

Tests were successfully executed and metrics collected (apart from TC011_).
No SLA was verified; SLA verification is to be decided on in the next release
of OPNFV.

Conclusions and recommendations
-------------------------------
The pktgen test configuration has a relatively large base effect on RTT in
TC037 compared to TC002, where there is no background load at all (30 ms
compared to 1 ms or less, a roughly 30-fold or greater increase in RTT). The
larger numbers of flows in TC037 produce worse RTT results, in the magnitude
of several hundred milliseconds. It would be interesting to also make all
these measurements on completely (optimized) bare metal machines running
native Linux, with all other relevant tools available, e.g. lmbench, pktgen
etc., and compare the results.
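
As a quick arithmetic check on the base-effect claim above (using the
approximate figures reported here, not exact measurements):

.. code-block:: python

   tc002_rtt_ms = 1.0    # upper end of the TC002 steady-state RTT
   tc037_rtt_ms = 30.0   # flat TC037 base RTT below approx. 10000 flows

   ratio = tc037_rtt_ms / tc002_rtt_ms      # 30-fold increase at baseline 1 ms
   increase_pct = (ratio - 1.0) * 100.0     # 2900% over the baseline; larger
                                            # still when the TC002 RTT is below 1 ms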