##
## Copyright (c) 2010-2019 Intel Corporation
##
## Licensed under the Apache License, Version 2.0 (the "License");
## you may not use this file except in compliance with the License.
## You may obtain a copy of the License at
##
## http://www.apache.org/licenses/LICENSE-2.0
##
## Unless required by applicable law or agreed to in writing, software
## distributed under the License is distributed on an "AS IS" BASIS,
## WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
## See the License for the specific language governing permissions and
## limitations under the License.
##
rapid (Rapid Automated Performance Indication for Dataplane)
************************************************************
rapid is a set of files offering an easy way to do a sanity check of the
dataplane performance of an OpenStack or container environment.
Most of the information below is now available on wiki.opnfv.org/display/SAM/Rapid+scripting
In case of OpenStack, copy the files into a directory on a machine that can run the OpenStack CLI
commands and that can reach the networks used to connect to the VMs.
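If the OpenStack CLI is not installed on that machine yet, one common way to get it (assuming a Python
environment with pip is available) is:
pip install python-openstackclient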
You will need an image that has the PROX tool installed.
A good way to create such an image is to use the packer tool to build an image for a target of your choice.
You can also build this image manually by executing all the commands described in deploycentostools.sh.
The default name of the qcow2 file is rapidVM.qcow2.
When using the packer tool, the first step is to upload an
existing CentOS cloud image from the internet into OpenStack.
Check out: https://cloud.centos.org/centos/7/images/
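For example (the exact image file name may differ over time; CentOS-7-x86_64-GenericCloud.qcow2 is used here
as an illustration and "CentOS-7-cloud" is just a placeholder image name):
wget https://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud.qcow2
openstack image create --disk-format qcow2 --container-format bare --file CentOS-7-x86_64-GenericCloud.qcow2 CentOS-7-cloud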
You should now create a proper clouds.yaml file so Packer can connect to your OpenStack environment.
A sample clouds.yaml could look like this:
client:
  force_ipv4: true
clouds:
  overcloud:
    verify: False
    interface: "public"
    auth:
      username: "admin"
      password: "your_password"
      project_name: "admin"
      tenant_name: "admin"
      auth_url: "https://192.168.1.1:5000/v3"
      user_domain_name: "Default"
      domain_name: "Default"
    identity_api_version: "3"
Packer can be run from a Docker image; to do so, you will need to create the following alias:
alias packer='docker run -it --env OS_CLOUD=$OS_CLOUD -v "$PWD":/root/project -w /root/project hashicorp/packer:light $@'
and make sure the OS_CLOUD variable is set to the correct cloud: with the clouds.yaml example above, you would first run
export OS_CLOUD=overcloud
There are 2 files: centos.json and deploycentostools.sh, allowing you to create
an image automatically. Run
# packer build centos.json
Before running the build, edit centos.json to reflect the settings of your environment. The following fields
need to be populated with the values of your system (example commands to look them up are shown below):
- "source_image_name": Needs to be the name of the CentOS cloud image you uploaded into OpenStack
- "flavor": Needs to be the ID or name of an existing flavor in your OpenStack environment that will be used
  to start the VM in which all the tools will be installed
- "network_discovery_cidrs": Should contain the CIDR of the network you want to use, e.g. "10.6.6.0/24"
- "floating_ip_network": ID or name of the floating IP network in case floating IPs are being used
- "security_groups": ID or name of the security group being used
Refer to Packer docs for more details:
https://www.packer.io/docs/builders/openstack.html
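To look up suitable values for the fields above, you can for example use the OpenStack CLI; the commands
below only list existing resources and are safe to run:
openstack image list
openstack flavor list
openstack network list
openstack subnet list
openstack security group list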
Note that this procedure not only installs the necessary tools to run PROX,
but also applies some system optimizations (tuned). Check deploycentostools.sh for more details.
Now you can run the createrapid.py file. Use help for more info on the usage:
# ./createrapid.py --help
createrapid.py will use the OpenStack CLI to create the flavor, key-pair, network, image,
servers, ...
It will create a <STACK>.env file containing all the info that will be used by runrapid.py
to actually run the tests. Logging can be found in the CREATE<STACK>.log file.
You can use floating IP addresses by specifying the floating IP network
--floating_network NETWORK
or directly connect through the INTERNAL_NETWORK by using the following parameter:
--floating_network NO
/etc/resolv.conf will contain DNS info from the "best" interface. Since we are
deploying VMs with multiple interfaces on different networks, this info might be
taken from the "wrong" network (e.g. the dataplane network).
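A quick way to check this on a deployed VM (a hedged example, assuming the key and user from the [ssh]
section shown further below, with <ADMIN_IP> as a placeholder for the VM's admin IP) is:
ssh -i prox.pem centos@<ADMIN_IP> cat /etc/resolv.conf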
Now you can run the runrapid.py file. Use help for more info on the usage:
# ./runrapid.py --help
The script will connect to all machines that have been instantiated and will launch
PROX on all of them. This will be done through the admin IP assigned to the machines.
Once that is done it will connect to the PROX tcp socket and start sending
commands to run the actual test.
Make sure the security groups allow TCP access (ssh & the PROX port); an example rule is shown after this paragraph.
It will print test results on the screen while running.
The actual test that is running is described in <TEST>.test.
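As a hedged example, assuming PROX listens on its default control port 8474 and the VMs use a security group
called prox-secgroup (a placeholder name), the required rules could be added with:
openstack security group rule create --protocol tcp --dst-port 22 prox-secgroup
openstack security group rule create --protocol tcp --dst-port 8474 prox-secgroup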
Notes about the prox_user_data.sh script:
- The script contains commands that will be executed using cloud-init at
startup of the VMs.
- Huge pages are allocated for DPDK on node 0 (hard-coded) in the VM; a sketch of typical commands is shown below.
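As an illustration only (the actual commands live in prox_user_data.sh and may differ), allocating 2MB huge
pages on NUMA node 0 and mounting hugetlbfs typically looks like this:
echo 1024 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
mkdir -p /mnt/huge
mount -t hugetlbfs nodev /mnt/huge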
Note on using SRIOV ports:
Before running createrapid.py, make sure the network, subnet and ports are already created.
This can be done as follows (change the parameters to your needs):
openstack network create --share --external --provider-network-type flat --provider-physical-network physnet2 fast-network
openstack subnet create --network fast-network --subnet-range 20.20.20.0/24 --gateway none fast-subnet
openstack port create --network fast-network --vnic-type direct --fixed-ip subnet=fast-subnet Port1
openstack port create --network fast-network --vnic-type direct --fixed-ip subnet=fast-subnet Port2
openstack port create --network fast-network --vnic-type direct --fixed-ip subnet=fast-subnet Port3
Make sure to use the network and subnet in the createrapid parameters list. Port1, Port2 and Port3
are being used in the *.env file.
Note when doing tests using the gateway functionality on OVS:
When a GW VM is sending packets on behalf of another VM (e.g. the generator), we need to make sure that
OVS will allow those packets to go through. Therefore you need to add the IP address of the generator to the
"allowed address pairs" of the GW VM's port.
Note when doing tests using encryption on OVS:
Your OVS configuration might block encrypted packets. To allow packets to go through,
you can disable port_security. You can do this by using the following commands:
neutron port-update xxxxxx --no-security-groups
neutron port-update xxxxxx --port_security_enabled=False
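The neutron client is deprecated in newer OpenStack releases; on such deployments the same result can
typically be achieved with the unified client (xxxxxx again being the port ID):
openstack port set --no-security-group --disable-port-security xxxxxx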
An example of the env file generated by createrapid.py can be found below.
Note that this file can be created manually in case the stack is created in a
different way (not using createrapid.py). This can be useful in case you are
not using OpenStack as a VIM or when using special configurations that cannot be
achieved using createrapid.py. Fields needed for runrapid are:
* all info in the [Mx] sections
* the key information in the [ssh] section
* the total_number_of_machines information in the [rapid] section
[rapid]
loglevel = DEBUG
version = 19.6.30
total_number_of_machines = 3
[M1]
name = rapid-VM1
admin_ip = 10.25.1.109
dp_ip = 10.10.10.4
dp_mac = fa:16:3e:25:be:25
[M2]
name = rapid-VM2
admin_ip = 10.25.1.110
dp_ip = 10.10.10.7
dp_mac = fa:16:3e:72:bf:e8
[M3]
name = rapid-VM3
admin_ip = 10.25.1.125
dp_ip = 10.10.10.15
dp_mac = fa:16:3e:69:f3:e7
[ssh]
key = prox.pem
user = centos
[Varia]
vim = OpenStack
stack = rapid
vms = rapid.vms
image = rapidVM
image_file = rapidVM.qcow2
dataplane_network = dataplane-network
subnet = dpdk-subnet
subnet_cidr = 10.10.10.0/24
internal_network = admin_internal_net
floating_network = admin_floating_net