Getting Started with 'vsperf'
=============================

Hardware Requirements
---------------------

VSPERF requires one of the following traffic generators to run tests:

- IXIA traffic generator (IxNetwork hardware) and a machine that runs the IXIA client software
- Spirent traffic generator (TestCenter hardware chassis or TestCenter virtual in a VM) and a
  VM to run the Spirent Virtual Deployment Service image, formerly known as "Spirent LabServer".

Both of the above test configurations also require a host running CentOS Linux release 7.1.1503 (Core).

vSwitch Requirements
--------------------

The vSwitch must support OpenFlow 1.3 or greater.
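
As a quick sanity check, assuming Open vSwitch is the vSwitch under test and
that a bridge named ``br0`` already exists (both are illustrative assumptions),
you can verify and enable OpenFlow 1.3 support with:

  .. code-block:: console

     # show the OpenFlow versions this ovs-ofctl build supports
     ovs-ofctl --version

     # allow OpenFlow 1.3 (in addition to 1.0) on an existing bridge
     ovs-vsctl set bridge br0 protocols=OpenFlow10,OpenFlow13

     # query the bridge explicitly over OpenFlow 1.3
     ovs-ofctl -O OpenFlow13 show br0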

Installation
------------

Follow the `installation instructions <installation.html>`__ to install VSPERF.

IXIA Setup
----------

On the CentOS 7 system
~~~~~~~~~~~~~~~~~~~~~~

You need to install IxNetworkTclClient$(VER\_NUM)Linux.bin.tgz.

On the IXIA client software system
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Find the IxNetwork TCL Server application (Start -> All Programs -> IXIA ->
IxNetwork -> IxNetwork\_$(VER\_NUM) -> IxNetwork TCL Server).

Right-click on IxNetwork TCL Server and select Properties. Under the Shortcut
tab, make sure the Target field contains the argument "-tclport xxxx",
where xxxx is your port number. Take note of this port number; you will
need it for the 10\_custom.conf file.

|Alt text|

Click OK and start the IxNetwork TCL Server application.
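
The noted TCL port is later referenced from your VSPERF configuration. A
minimal sketch of the related ``10_custom.conf`` entries is shown below; the
variable names and values are assumptions for illustration, so check the
traffic-generator settings in your ``./conf`` files for the exact names:

  .. code-block:: console

     # illustrative only -- verify the exact variable names in ./conf
     TRAFFICGEN_IXNET_MACHINE = '10.10.10.10'   # IP of the IXIA client machine
     TRAFFICGEN_IXNET_PORT = '9127'             # the -tclport value noted above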

Spirent Setup
-------------

Spirent installation files and instructions are available on the
Spirent support website at:

http://support.spirent.com

Select a version of the Spirent TestCenter software to use. This guide
uses Spirent TestCenter v4.57 as an example; substitute the appropriate
version in place of 'v4.57' in the examples below.

On the CentOS 7 System
~~~~~~~~~~~~~~~~~~~~~~

Download and install the following:

Spirent TestCenter Application, v4.57 for 64-bit Linux Client

Spirent Virtual Deployment Service (VDS)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Spirent VDS is required for both TestCenter hardware and virtual
chassis in the vsperf environment. For installation, select the version
that matches the Spirent TestCenter Application version. For v4.57,
the matching VDS version is 1.0.55. Download either the ova (VMware)
or qcow2 (QEMU) image and create a VM with it. Initialize the VM
according to Spirent installation instructions.

Using Spirent TestCenter Virtual (STCv)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

STCv is available in both ova (VMware) and qcow2 (QEMU) formats. For
VMware, download:

Spirent TestCenter Virtual Machine for VMware, v4.57 for Hypervisor - VMware ESX.ESXi

Virtual test port performance is affected by the hypervisor configuration. For
best results when deploying STCv, the following practices are suggested:

- Create a single VM with two test ports rather than two VMs with one port each
- Set STCv in DPDK mode
- Give STCv 2*n + 1 cores, where n = the number of ports. For vsperf, cores = 5.
- Turn off hyperthreading and pin the STCv cores to improve performance (see the pinning sketch after this list)
- Give STCv 2 GB of RAM
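
A minimal pinning sketch, assuming the STCv guest is managed by libvirt under
the hypothetical domain name ``stcv`` and has 5 vCPUs; adjust the host core
numbers to match your CPU topology:

  .. code-block:: console

     # pin each STCv vCPU to a dedicated host core (host cores 2-6 are examples)
     virsh vcpupin stcv 0 2
     virsh vcpupin stcv 1 3
     virsh vcpupin stcv 2 4
     virsh vcpupin stcv 3 5
     virsh vcpupin stcv 4 6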

To get the highest performance and accuracy, Spirent TestCenter hardware is
recommended. vsperf can run with either type of test port.

Cloning and building src dependencies
-------------------------------------

In order to run VSPERF, you will need to download DPDK and OVS. You can
do this manually and build them in a preferred location, or you can
use vswitchperf/src. The vswitchperf/src directory contains makefiles
that allow you to clone and build the libraries that VSPERF depends
on, such as DPDK and OVS. To clone and build them, simply run:

  .. code-block:: console

    cd src
    make

VSPERF can be used with OVS without DPDK support. In this case you have
to specify the path to the kernel sources using the WITH\_LINUX parameter:

  .. code-block:: console

     cd src
     make WITH_LINUX=/lib/modules/`uname -r`/build

To build DPDK and OVS for PVP and PVVP testing with vhost_user as the guest
access method, use:

  .. code-block:: console

     make VHOST_USER=y

To build everything (Vanilla OVS, OVS with vhost_user as the guest access
method, and OVS with vhost_cuse access), simply run:

  .. code-block:: console

     make

- The vhost_user build will reside in src/ovs/
- The vhost_cuse build will reside in vswitchperf/src_cuse
- The Vanilla OVS build will reside in vswitchperf/src_vanilla

To delete a src subdirectory and its contents so that you can re-clone, simply
use:

  .. code-block:: console

     make clobber

Configure the ``./conf/10_custom.conf`` file
--------------------------------------------
The ``10_custom.conf`` file is the configuration file that overrides
default configurations in all the other configuration files in ``./conf``.
The supplied ``10_custom.conf`` file must be modified, as it contains
configuration items for which there are no reasonable default values.

The configuration items that can be added are not limited to the initial
contents. Any configuration item mentioned in any .conf file in the
``./conf`` directory can be added, and that item will be overridden by
the custom configuration value.
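
For illustration, a minimal ``10_custom.conf`` might override only a couple of
items; the values below are taken from examples later in this guide, not
recommendations:

  .. code-block:: console

     # example overrides -- any item from the other ./conf files can go here
     VSWITCH = 'OvsVanilla'
     GUEST_LOOPBACK = ['testpmd', 'testpmd']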

Using a custom settings file
----------------------------

If your ``10_custom.conf`` doesn't reside in the ``./conf`` directory,
or if you want to use an alternative configuration file, the file can
be passed to ``vsperf`` via the ``--conf-file`` argument.

  .. code-block:: console

    ./vsperf --conf-file <path_to_settings_py> ...

Note that configuration passed in via the environment (``--load-env``)
or via another command line argument will override both the default and
your custom configuration files. This "priority hierarchy" can be
described like so (1 = max priority):

1. Command line arguments
2. Environment variables
3. Configuration file(s)
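
For example, even if ``user_settings.py`` defines a different vSwitch, a
``--vswitch`` argument on the command line takes precedence (values taken
from examples later in this guide):

  .. code-block:: console

    # the CLI value overrides any VSWITCH setting in user_settings.py
    ./vsperf --conf-file user_settings.py --vswitch OvsVanilla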

--------------

Executing tests
---------------

Before running any tests, make sure you have root permissions by adding
the following line to /etc/sudoers:

  .. code-block:: console

    username ALL=(ALL)       NOPASSWD: ALL

Replace ``username`` in the example above with your actual username.
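
To confirm that passwordless sudo is working before starting a run (a simple
shell check, not part of VSPERF itself):

  .. code-block:: console

    # should print "ok" without prompting for a password
    sudo -n true && echo ok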

To list the available tests:

  .. code-block:: console

    ./vsperf --list

To run a single test:

  .. code-block:: console

    ./vsperf $TESTNAME

Where $TESTNAME is the name of the vsperf test you would like to run.
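
For instance, if ``./vsperf --list`` reports a test named ``phy2phy_tput``
(the name is shown here only as an example; use one from your own listing):

  .. code-block:: console

    ./vsperf phy2phy_tput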

To run a group of tests, for example all tests with a name containing
'RFC2544':

  .. code-block:: console

    ./vsperf --conf-file=user_settings.py --tests="RFC2544"

To run all tests:

  .. code-block:: console

    ./vsperf --conf-file=user_settings.py

Some tests allow for configurable parameters, including test duration
(in seconds) as well as packet sizes (in bytes).

.. code:: bash

    ./vsperf --conf-file user_settings.py \
        --tests RFC2544Tput \
        --test-param "duration=10;pkt_sizes=128"

For all available options, check out the help dialog:

  .. code-block:: console

    ./vsperf --help

Executing Vanilla OVS tests
----------------------------
If you have compiled all of the OVS variants in ``src/``, please skip
step 1.

1. Recompile src for Vanilla OVS testing

  .. code-block:: console

     cd src
     make cleanse
     make WITH_LINUX=/lib/modules/`uname -r`/build

2. Update your ``10_custom.conf`` file to use the appropriate variables
   for Vanilla OVS:

  .. code-block:: console

   VSWITCH = 'OvsVanilla'
   VSWITCH_VANILLA_PHY_PORT_NAMES = ['$PORT1', '$PORT2']

Where $PORT1 and $PORT2 are the Linux interfaces you'd like to bind
to the vswitch.
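
For example, assuming the traffic generator is cabled to host interfaces
``eth1`` and ``eth2`` (illustrative interface names):

  .. code-block:: console

   VSWITCH_VANILLA_PHY_PORT_NAMES = ['eth1', 'eth2']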

3. Run test:

  .. code-block:: console

     ./vsperf --conf-file <path_to_settings_py>

Please note that if you don't want to configure Vanilla OVS through the
configuration file, you can pass it as a CLI argument, but you must
still set the ports.

  .. code-block:: console

    ./vsperf --vswitch OvsVanilla


Executing PVP and PVVP tests
----------------------------
To run tests using vhost-user as the guest access method:

1. Set VHOST_METHOD and VNF in your settings file to:

  .. code-block:: console

   VHOST_METHOD='user'
   VNF = 'QemuDpdkVhost'

2. Recompile src for VHOST USER testing

  .. code-block:: console

     cd src
     make cleanse
     make VHOST_USER=y

3. Run test:

  .. code-block:: console

     ./vsperf --conf-file <path_to_settings_py>

To run tests using vhost-cuse as the guest access method:

1. Set VHOST_METHOD and VNF in your settings file to:

  .. code-block:: console

     VHOST_METHOD='cuse'
     VNF = 'QemuDpdkVhostCuse'

2. Recompile src for VHOST CUSE testing

  .. code-block:: console

     cd src
     make cleanse
     make VHOST_USER=n

3. Run test:

  .. code-block:: console

     ./vsperf --conf-file <path_to_settings_py>

Executing PVP tests using Vanilla OVS
-------------------------------------
To run tests using Vanilla OVS:

1. Set the following variables:

  .. code-block:: console

   VSWITCH = 'OvsVanilla'
   VNF = 'QemuVirtioNet'

   VANILLA_TGEN_PORT1_IP = n.n.n.n
   VANILLA_TGEN_PORT1_MAC = nn:nn:nn:nn:nn:nn

   VANILLA_TGEN_PORT2_IP = n.n.n.n
   VANILLA_TGEN_PORT2_MAC = nn:nn:nn:nn:nn:nn

   VANILLA_BRIDGE_IP = n.n.n.n

   or use --test-param

   ./vsperf --conf-file user_settings.py \
            --test-param "vanilla_tgen_tx_ip=n.n.n.n;vanilla_tgen_tx_mac=nn:nn:nn:nn:nn:nn"


2. Recompile src for Vanilla OVS testing

  .. code-block:: console

     cd src
     make cleanse
     make WITH_LINUX=/lib/modules/`uname -r`/build

3. Run test:

  .. code-block:: console

     ./vsperf --conf-file <path_to_settings_py>

Selection of loopback application for PVP and PVVP tests
--------------------------------------------------------
To select the loopback application that will perform traffic forwarding
inside the VM, configure the following parameter:

  .. code-block:: console

     GUEST_LOOPBACK = ['testpmd', 'testpmd']

     or use --test-param

     ./vsperf --conf-file user_settings.py \
              --test-param "guest_loopback=testpmd"

Supported loopback applications are:

  .. code-block:: console

     'testpmd'       - testpmd from dpdk will be built and used
     'l2fwd'         - l2fwd module provided by Huawei will be built and used
     'linux_bridge'  - linux bridge will be configured
     'buildin'       - nothing will be configured by vsperf; VM image must
                       ensure traffic forwarding between its interfaces
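
Assuming the list entries map to the individual guests, as the two-element
default above suggests, a PVVP run could for example combine different
loopback applications:

  .. code-block:: console

     GUEST_LOOPBACK = ['l2fwd', 'testpmd']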

A guest loopback application must be configured; otherwise traffic
will not be forwarded by the VM and test cases with PVP and PVVP deployments
will fail. The guest loopback application is set to 'testpmd' by default.

Code change verification by pylint
----------------------------------
Every developer participating in the VSPERF project should run
pylint before their Python code is submitted for review. Project-specific
configuration for pylint is available in 'pylintrc'.

Example of manual pylint invocation:

  .. code-block:: console

          pylint --rcfile ./pylintrc ./vsperf

GOTCHAs
-------

OVS with DPDK and QEMU
~~~~~~~~~~~~~~~~~~~~~~~
If you encounter the following error with the PVP or PVVP deployment
scenario: "before (last 100 chars): '-path=/dev/hugepages,share=on:
unable to map backing store for hugepages: Cannot allocate
memory\r\n\r\n'", check the amount of hugepages on your system:

.. code:: bash

    cat /proc/meminfo | grep HugePages

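If the pool is too small, hugepages can be added at runtime. This is only a
sketch: 2048 x 2 MB pages (4 GB) is an example size, and the setting should be
made persistent via sysctl.conf or kernel boot parameters for real deployments:

.. code:: bash

    # allocate 2048 x 2 MB hugepages on the running system
    sudo sysctl -w vm.nr_hugepages=2048

    # verify the new pool size
    grep HugePages_Total /proc/meminfo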

By default, vswitchd is launched with 1 GB of memory. To change this,
modify the --socket-mem parameter in conf/02_vswitch.conf to allocate
an appropriate amount of memory:

.. code:: bash

    VSWITCHD_DPDK_ARGS = ['-c', '0x4', '-n', '4', '--socket-mem 1024,0']
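
For instance, to give vswitchd 2 GB on NUMA node 0 instead, keep the same
arguments and change only the --socket-mem value:

.. code:: bash

    VSWITCHD_DPDK_ARGS = ['-c', '0x4', '-n', '4', '--socket-mem 2048,0']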

--------------

.. |Alt text| image:: ../images/TCLServerProperties.png