.. This work is licensed under a Creative Commons Attribution 4.0 International License.
.. SPDX-License-Identifier: CC-BY-4.0
.. (c) Cisco Systems, Inc

Requirements for running NFVbench
=================================

.. _requirements:

Hardware Requirements
---------------------
To run NFVbench you need the following hardware:

- a Linux server
- a DPDK-compatible NIC with at least 2 ports (preferably 10Gbps or higher)
- 2 Ethernet cables between the NIC and the OpenStack pod under test (usually through a top-of-rack switch)

The DPDK-compatible NIC must be one supported by the TRex traffic generator (such as the Intel X710; refer to the `TRex Installation Guide <https://trex-tgn.cisco.com/trex/doc/trex_manual.html#_download_and_installation>`_ for the complete list of supported NICs).
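
To confirm that the NIC is detected and to note the PCI addresses of its ports (needed later when binding the ports to DPDK), you can list the Ethernet devices on the server. This is a generic check, not specific to NFVbench:

.. code-block:: bash

    # List all Ethernet devices with their PCI addresses
    # (e.g. look for the X710 ports and note addresses such as 04:00.0)
    lspci | grep -i ethernet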

To run the TRex traffic generator (which is bundled with NFVbench), you will need to wire 2 physical interfaces of the NIC to the TOR switch(es):

    - if you have only 1 TOR, wire both interfaces to that same TOR
    - if you have 2 TORs and want to use bonded links to your compute nodes, wire 1 interface to each TOR

.. image:: images/nfvbench-trex-setup.svg


Switch Configuration
--------------------
For VLAN encapsulation, the 2 corresponding ports on the switch(es) facing the TRex ports on the Linux server should be configured in trunk mode (NFVbench will instruct TRex to insert the appropriate VLAN tag).
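
The exact commands depend on the switch vendor and model. As an illustrative sketch only, a trunk-mode configuration for one port on a Cisco-style switch could look like the following (the interface name and VLAN range are assumptions):

.. code-block:: none

    interface Ethernet1/1
      switchport mode trunk
      switchport trunk allowed vlan 100-200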

For VxLAN encapsulation, the switch(es) must support the VTEP feature (VxLAN Tunnel End Point) with the ability to attach an interface to a VTEP (this is an advanced feature that requires an NFVbench plugin for the switch).

Using a TOR switch is more representative of a real deployment: it allows measuring packet flows on any compute node in the rack without rewiring, and it includes the overhead of the TOR switch in the measurement.

Although not the primary targeted use case, NFVbench can also support wiring the traffic generator directly to
a compute node without a switch (although that will limit some of the features that involve multiple compute nodes in the packet path).

Software Requirements
---------------------

Docker must be installed on the Linux server.
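
A quick way to check that Docker is installed and that the daemon is running:

.. code-block:: bash

    # Print client and server versions; fails if the daemon is not running
    docker version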

TRex uses the DPDK interface to interact with the DPDK-compatible NIC for sending and receiving frames. The Linux server will
need to be configured properly to enable DPDK.

DPDK requires a uio (User space I/O) or vfio (Virtual Function I/O) kernel module to be installed on the host to work.
There are 2 main uio kernel module implementations (igb_uio and uio_pci_generic) and one vfio kernel module implementation.

To check if a uio or vfio is already loaded on the host:

.. code-block:: bash

    lsmod | grep -e igb_uio -e uio_pci_generic -e vfio


If no such module is loaded, it is necessary to install a uio/vfio kernel module on the host server:

- find a suitable kernel module for your host server (any uio or vfio kernel module built with the same Linux kernel version should work)
- load it using the modprobe and insmod commands

Example of installation of the igb_uio kernel module:

.. code-block:: bash

    modprobe uio
    insmod ./igb_uio.ko
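
Once the module is loaded, the 2 NIC ports used by TRex can be bound to it using the dpdk-devbind.py script shipped with DPDK. This is a sketch with placeholder PCI addresses; substitute the addresses of your own ports:

.. code-block:: bash

    # Show which driver each network device is currently bound to
    dpdk-devbind.py --status

    # Bind the 2 TRex ports to igb_uio (PCI addresses are placeholders)
    dpdk-devbind.py --bind=igb_uio 0000:04:00.0 0000:04:00.1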

Finally, the correct IOMMU options and huge pages must be configured on the kernel boot command line of the Linux server (see the example below):

- enable Intel IOMMU and IOMMU pass-through: "intel_iommu=on iommu=pt"
- for TRex, pre-allocate 1024 huge pages of 2MB each (for a total of 2GB): "hugepagesz=2M hugepages=1024"
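
On distributions that use GRUB, one way to apply these options is to append them to GRUB_CMDLINE_LINUX in /etc/default/grub and regenerate the GRUB configuration (the exact command and output path vary by distribution):

.. code-block:: bash

    # In /etc/default/grub, append to GRUB_CMDLINE_LINUX, e.g.:
    # GRUB_CMDLINE_LINUX="... intel_iommu=on iommu=pt hugepagesz=2M hugepages=1024"

    # Regenerate the GRUB configuration and reboot
    # (on Debian/Ubuntu, use update-grub instead)
    grub2-mkconfig -o /boot/grub2/grub.cfg
    reboot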

More detailed instructions can be found in the DPDK documentation (https://media.readthedocs.org/pdf/dpdk/latest/dpdk.pdf).
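
After rebooting, you can verify that the options were applied:

.. code-block:: bash

    # The iommu and huge page options should appear on the kernel command line
    cat /proc/cmdline

    # HugePages_Total should report the 1024 pre-allocated 2MB huge pages
    grep Huge /proc/meminfo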


NFVbench loopback VM image Upload
---------------------------------

The NFVbench loopback VM image should be uploaded to OpenStack prior to running NFVbench.
The NFVbench VM qcow2 image can be rebuilt from the build script or copied from the OPNFV artifact repository [URL TBP].
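
As a hedged example, assuming the OpenStack CLI is installed and configured with the right credentials, and that the image file is named nfvbench.qcow2 (both are assumptions), the upload could look like:

.. code-block:: bash

    # Upload the loopback VM qcow2 image to the OpenStack image service
    # (file name and image name are illustrative)
    openstack image create --disk-format qcow2 --container-format bare \
        --file nfvbench.qcow2 nfvbenchvm

    # Verify the image is listed and active
    openstack image list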