path: root/samples/vnf_samples/nsut/vfw/vfw.cfg
# Copyright (c) 2016-2017 Intel Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

[PIPELINE0]
type = MASTER
core = 0
[PIPELINE1]
type =  ARPICMP
core = 1
pktq_in  = SWQ4
pktq_out = TXQ0.0 TXQ1.0 TXQ2.0 TXQ3.0
pktq_in_prv =  RXQ0.0
prv_to_pub_map = (0,1)
[PIPELINE2]
type = TXRX
core = 2
pipeline_txrx_type = RXRX
dest_if_offset = 176
pktq_in  = RXQ0.0 RXQ1.0 RXQ2.0 RXQ3.0
pktq_out = SWQ0   SWQ1   SWQ2   SWQ3   SWQ4
[PIPELINE3]
type = LOADB
core = 3
pktq_in  = SWQ0 SWQ1 SWQ2 SWQ3
pktq_out = SWQ4 SWQ5 SWQ6 SWQ7 SWQ8 SWQ9 SWQ10 SWQ11
outport_offset = 136
n_vnf_threads = 2
n_lb_tuples = 5
loadb_debug = 0
lib_arp_debug = 0
[PIPELINE4]
type = VFW
core = 4
pktq_in  = SWQ2 SWQ3
pktq_out = SWQ4 SWQ5
n_rules = 10
prv_que_handler = (0)
n_flows = 2000000
traffic_type = 4
pkt_type = ipv4
tcp_be_liberal = 0
[PIPELINE5]
type = TXRX
core = 5
pipeline_txrx_type = TXTX
dest_if_offset = 176
pktq_in  = SWQ8 SWQ9 SWQ10 SWQ11 SWQ12 SWQ13
pktq_out = TXQ0.0 TXQ1.0 TXQ0.1 TXQ1.1 TXQ0.2 TXQ1.2
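The `[PIPELINEn]` sections above wire the pipelines together through software queues (`SWQn`) and hardware RX/TX queues (`RXQp.q`/`TXQp.q`). A minimal sketch of how that wiring could be inspected with Python's standard `configparser`; the helper name `swq_topology` is ours, not part of the sample:

```python
# Hypothetical helper (not part of the sample): map each software queue
# (SWQn) in an ip_pipeline-style config to the pipelines that write it
# and the pipelines that read it, to make the wiring easier to review.
import configparser

def swq_topology(cfg_text):
    cp = configparser.ConfigParser()
    cp.read_string(cfg_text)
    producers, consumers = {}, {}
    for section in cp.sections():
        # pktq_out lists the queues a pipeline writes to ...
        for q in cp.get(section, "pktq_out", fallback="").split():
            if q.startswith("SWQ"):
                producers.setdefault(q, []).append(section)
        # ... and pktq_in the queues it reads from.
        for q in cp.get(section, "pktq_in", fallback="").split():
            if q.startswith("SWQ"):
                consumers.setdefault(q, []).append(section)
    return producers, consumers
```

For example, `producers["SWQ4"]` would list every pipeline whose `pktq_out` contains `SWQ4`, which can then be checked against the intended topology.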
ch processor single and parallel speed scores show similar results at
approx. 3200. The runs vary between scores of 3160 and 3240. No SLA set.

TC037
-----

The number of packets per second (PPS) and the round-trip times (RTT) between
two VMs on different blades are measured while increasing the number of UDP
flows sent between the VMs, using pktgen as the packet generator tool.

Round-trip times and packet throughput between VMs are typically affected by
the number of flows set up, resulting in higher RTT and lower PPS throughput.

When running with fewer than 10000 flows the results are flat and consistent:
RTT stays at approx. 30 ms and throughput remains flat at approx. 250000 PPS.
From approx. 10000 flows up to 1000000 (one million) flows there is a steady
degradation in RTT and PPS, eventually ending up at approx. 150-250 ms and
40000 PPS respectively.

One measurement, made on February 16, shows slightly worse results than the
other four measurements. The reason for this is unknown; for instance, another
user being logged onto the POD could cause such a disturbance.

Detailed test results
---------------------

The scenario was run on Ericsson POD2_ with:

* Fuel 8.0
* OpenStack Liberty
* OVS 2.3.1
* No SDN controller installed

Rationale for decisions
-----------------------

Pass

Tests were successfully executed and metrics collected (apart from TC011_).
No SLA was verified; this is to be decided on in the next release of OPNFV.

Conclusions and recommendations
-------------------------------

The pktgen test configuration has a relatively large base effect on RTT in
TC037 compared to TC002, where there is no background load at all: approx.
30 ms compared to 1 ms or less, i.e. more than a 3000 percent difference in
RTT results. The larger numbers of flows in TC037 give worse RTT results, in
the magnitude of several hundred milliseconds.

It would also be interesting to make all of these measurements on completely
(optimized) bare-metal machines running native Linux, with all other relevant
tools available (e.g. lmbench, pktgen), and compare the results.
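The roughly "3000 percent" figure cited in the conclusions is just the ratio of the two approximate base RTTs; a quick sanity check using the values from the text above:

```python
# Approximate base RTT values quoted in the report (milliseconds).
tc002_rtt_ms = 1.0   # TC002: no background load ("1 ms or less")
tc037_rtt_ms = 30.0  # TC037: pktgen background load ("approx. 30 ms")

# Ratio expressed as a percentage; with TC002 below 1 ms the ratio
# exceeds this value, hence "more than 3000 percent".
ratio_pct = tc037_rtt_ms / tc002_rtt_ms * 100
print(f"{ratio_pct:.0f}%")  # 3000%
```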