.. _DPDK: http://dpdk.org/doc/nics
.. _DPDK-pktgen: https://github.com/Pktgen/Pktgen-DPDK/
.. _SRIOV: https://wiki.openstack.org/wiki/SR-IOV-Passthrough-For-Networking

===========================
Apexlake installation guide
===========================
ApexLake is a framework that provides automatic execution of experiments and related data collection to help
the user validate the infrastructure from the perspective of a Virtual Network Function.
In the context of Yardstick, the virtual Traffic Classifier network function is used for this purpose.


Hardware dependencies to run the framework
==========================================
ApexLake requires some specific hardware in order to run.

The framework needs to be installed on a physical node where the DPDK packet generator DPDK-pktgen_
can be correctly installed and executed.
This requires the packet generator node to have 2 DPDK_ compatible NICs.

The 2 NICs will be connected to the switch where the OpenStack VM network is managed.

The switch is required to support multicast traffic and IGMP snooping.

The switch ports to which the cables are connected will be configured as VLAN trunks
using two of the VLAN IDs available to Neutron.
These VLAN IDs will be required in further configuration steps.


Software dependencies to run the framework
==========================================
Before starting the framework, a set of dependencies needs to be installed.
In the following, the instructions to be executed on the Linux shell to install the dependencies
and configure the environment are presented.

1. Install dependencies.

To install the dependencies required by the framework, it is necessary to install the following packages.
The following example is provided for Ubuntu and needs to be executed as root.
::

    apt-get install python-dev
    apt-get install python-pip
    apt-get install python-mock
    apt-get install tcpreplay
    apt-get install libpcap-dev

2. Install the framework on the system.

The installation of the framework on the system requires the setup of the project.
After entering the apexlake directory, it is sufficient to run the following command.
::

    python setup.py install
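
To quickly verify the installation, the following check can be run. This is a minimal sketch that assumes
the setup installs the "experimental_framework" Python package, which is the package name suggested by the
directory layout used later in this guide.
::

    # assumes setup.py installs the "experimental_framework" package
    python -c "import experimental_framework"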

3. Source OpenStack openrc file.

::

    source openrc
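
To confirm that the OpenStack credentials have been loaded correctly, a simple read-only command can be
issued, for example:
::

    neutron net-list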

4. Create 2 Networks based on VLANs in Neutron.

In order for the network communication between the packet generator and the compute node to
work correctly, two networks need to be created through Neutron and mapped onto the VLAN IDs
that have previously been used for the configuration of the physical switch.
The underlying switch needs to be configured accordingly.
::

    VLAN_1=2025
    VLAN_2=2021
    neutron net-create apexlake_inbound_network \
            --provider:network_type vlan \
            --provider:segmentation_id $VLAN_1 \
            --provider:physical_network physnet1

    neutron subnet-create apexlake_inbound_network \
            192.168.0.0/24 --name apexlake_inbound_subnet

    neutron net-create apexlake_outbound_network \
            --provider:network_type vlan \
            --provider:segmentation_id $VLAN_2 \
            --provider:physical_network physnet1

    neutron subnet-create apexlake_outbound_network 192.168.1.0/24 \
            --name apexlake_outbound_subnet
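
To verify that the two networks have been created and mapped to the expected VLAN segmentation IDs,
the following commands can be used:
::

    neutron net-show apexlake_inbound_network
    neutron net-show apexlake_outbound_network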

5. Configure the Test Cases.

The VLAN tags are also required as parameters in the Yardstick yaml files of the following test cases (a sketch of how they can be provided is shown after the list):
    - TC 006
    - TC 007
    - TC 020
    - TC 021
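
The snippet below is a purely illustrative sketch of where the VLAN tags fit in a test case yaml file;
the scenario type and the remaining options come from the original test case file, and the parameter
names "vlan_sender" and "vlan_receiver" are assumed here to match the interface roles configured for
the packet generator later in this guide.
::

    scenarios:
      -
        # scenario type and other options as defined in the original test case file
        options:
          vlan_sender: 2025      # VLAN_1 used for the inbound network
          vlan_receiver: 2021    # VLAN_2 used for the outbound network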


Install and configure DPDK Pktgen
+++++++++++++++++++++++++++++++++
The execution of the framework is based on DPDK Pktgen.
If DPDK Pktgen has not been installed on the system by the user, it is necessary to download, compile and configure it.
The user can create a directory and download the dpdk packet generator source code:
::

    cd experimental_framework/libraries
    mkdir dpdk_pktgen
    git clone https://github.com/pktgen/Pktgen-DPDK.git

For the installation and configuration of DPDK and DPDK Pktgen, please follow the official DPDK Pktgen README file.
Once the installation is completed, it is necessary to load the DPDK kernel driver, as follows:
::

    modprobe uio
    insmod DPDK_DIR/x86_64-native-linuxapp-gcc/kmod/igb_uio.ko
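
Whether the igb_uio driver has been loaded correctly can be verified with:
::

    lsmod | grep igb_uio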

It is required to properly set the configuration file according to the system Pktgen runs on.
A description of the required configuration parameters, together with examples, is provided in the following:
::

    [PacketGen]
    packet_generator = dpdk_pktgen

    # This is the directory where the packet generator is installed
    # (if the user previously installed dpdk-pktgen,
    # it is required to provide the directory where it is installed).
    pktgen_directory = /home/user/software/dpdk_pktgen/dpdk/examples/pktgen/

    # This is the directory where DPDK is installed
    dpdk_directory = /home/user/apexlake/experimental_framework/libraries/Pktgen-DPDK/dpdk/

    # Name of the dpdk-pktgen program that starts the packet generator
    program_name = app/app/x86_64-native-linuxapp-gcc/pktgen

    # DPDK coremask (see DPDK-Pktgen readme)
    coremask = 1f

    # DPDK memory channels (see DPDK-Pktgen readme)
    memory_channels = 3

    # Name of the interface of the pktgen to be used to send traffic (vlan_sender)
    name_if_1 = p1p1

    # Name of the interface of the pktgen to be used to receive traffic (vlan_receiver)
    name_if_2 = p1p2

    # PCI bus address correspondent to if_1
    bus_slot_nic_1 = 01:00.0

    # PCI bus address correspondent to if_2
    bus_slot_nic_2 = 01:00.1


To find the names of the NICs and the addresses of the PCI buses,
the user may find it useful to run the DPDK tool nic_bind as follows:
::

    DPDK_DIR/tools/dpdk_nic_bind.py --status

which lists the NICs available on the system and shows the available drivers and bus addresses for each interface.
Please make sure to select NICs which are DPDK compatible.
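
Once the DPDK compatible NICs have been identified, they can be bound to the DPDK driver with the same
tool. The following is a sketch that assumes the PCI addresses used in the example configuration above:
::

    # PCI addresses as in the example configuration above
    DPDK_DIR/tools/dpdk_nic_bind.py --bind=igb_uio 01:00.0 01:00.1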

Installation and configuration of smcroute
++++++++++++++++++++++++++++++++++++++++++
The user is required to install smcroute, which is used by the framework to support multicast communications.
In the following, the list of commands to be run to download and install smcroute is provided.
::

    cd ~
    git clone https://github.com/troglobit/smcroute.git
    cd smcroute
    sed -i 's/aclocal-1.11/aclocal/g' ./autogen.sh
    sed -i 's/automake-1.11/automake/g' ./autogen.sh
    ./autogen.sh
    ./configure
    make
    sudo make install
    cd ..

It is also required to create a configuration file using the following command:
::

    SMCROUTE_NIC=(name of the nic)

where "name of the nic" is the interface name previously used for the variable "name_if_2".
In the example it would be:
::

    SMCROUTE_NIC=p1p2

Then create the smcroute configuration file /etc/smcroute.conf
::

    echo mgroup from $SMCROUTE_NIC group 224.192.16.1 > /etc/smcroute.conf
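
After the configuration file has been created, the smcroute daemon can be started so that the multicast
route is applied. Depending on the smcroute version installed, the daemon is started either with the
"smcroute -d" command or with the separate "smcrouted" binary, for example:
::

    # starts the daemon (use "smcrouted" with newer smcroute versions)
    smcroute -d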


At the end of this procedure it will be necessary to perform the following actions to add the user to the sudoers:
::

    adduser USERNAME sudo
    echo "USERNAME ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers


Experiment using SR-IOV configuration on the compute node
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++
In order to enable SR-IOV interfaces on the physical NIC of the compute node, a compatible NIC is required.
The NIC configuration depends on the model and vendor. After the NIC has been properly configured to support SR-IOV,
a proper configuration of OpenStack is also required.
For further information, please refer to the SRIOV_ configuration guide.