author     Christian Trautman <ctrautma@redhat.com>   2016-06-28 12:27:17 -0400
committer  Christian Trautman <ctrautma@redhat.com>   2016-06-29 17:49:52 -0400
commit     095fa73e80f7a9485e72a7f3ba23c4e4608627cd (patch)
tree       ef727e2aba11f9a15fd29099a07ced1a85ed6607 /docs/userguide
parent     e04b1b9a22f93bb1783ff9e82486aec38dcb0efb (diff)
multi-queue: Add basic multi-queue functionality
Adds support for multi-queue using the following config.
* VNF = QemuDpdkVhostUser
* VSWITCH = OvsDpdkVhost
* Guest Loopback as testpmd
Adds CPU mask, nbcore, rxq, and txq options for testpmd.
Adds option for guest NIC multi-queue.
Adds option to enable multi-queue on dpdkvhostuser and dpdk ports.
JIRA: VSPERF-309
Change-Id: I5296fc18b430eace598d8c51620fc27a6c46a65e
Signed-off-by: Christian Trautman <ctrautma@redhat.com>
Diffstat (limited to 'docs/userguide')
-rwxr-xr-x   docs/userguide/testusage.rst   58
1 file changed, 58 insertions, 0 deletions
diff --git a/docs/userguide/testusage.rst b/docs/userguide/testusage.rst
index 104723e3..d807590d 100755
--- a/docs/userguide/testusage.rst
+++ b/docs/userguide/testusage.rst
@@ -437,6 +437,64 @@ Guest loopback application must be configured, otherwise traffic
 will not be forwarded by VM and testcases with PVP and PVVP deployments
 will fail. Guest loopback application is set to 'testpmd' by default.
 
+Multi-Queue Configuration
+^^^^^^^^^^^^^^^^^^^^^^^^^
+
+VSPerf currently supports multi-queue with the following limitations:
+
+ 1. Execution of PVP/PVVP tests requires testpmd as the loopback if multi-queue
+    is enabled at the guest.
+
+ 2. Requires QemuDpdkVhostUser as the VNF.
+
+ 3. Requires the switch to be set to OvsDpdkVhost.
+
+ 4. Requires QEMU 2.5 or greater and any OVS version higher than 2.5. The
+    default upstream package versions installed by VSPerf satisfy this
+    requirement.
+
+To enable multi-queue, modify the ``02_vswitch.conf`` file to enable multi-queue
+on the switch.
+
+  .. code-block:: console
+
+     VSWITCH_MULTI_QUEUES = 2
+
+**NOTE:** You should consider using the switch affinity to set a PMD CPU mask
+that can optimize your performance. Consider the NUMA node of the NIC in use,
+if applicable, by checking /sys/class/net/<eth_name>/device/numa_node and
+setting an appropriate mask to create PMD threads on the same NUMA node.
+
+When multi-queue is enabled, each dpdk or dpdkvhostuser port that is created
+on the switch will set the option for multiple queues.
+
+To enable multi-queue on the guest, modify the ``04_vnf.conf`` file.
+
+  .. code-block:: console
+
+     GUEST_NIC_QUEUES = 2
+
+Enabling multi-queue at the guest will add multiple queues to each NIC port when
+QEMU launches the guest.
+
+Testpmd should be configured to take advantage of multi-queue on the guest. This
+can be done by modifying the ``04_vnf.conf`` file.
+
+  .. code-block:: console
+
+     GUEST_TESTPMD_CPU_MASK = '-l 0,1,2,3,4'
+
+     GUEST_TESTPMD_NB_CORES = 4
+     GUEST_TESTPMD_TXQ = 2
+     GUEST_TESTPMD_RXQ = 2
+
+**NOTE:** The guest SMP cores must be configured to allow testpmd to use the
+optimal number of cores to take advantage of the multiple guest queues.
+
+**NOTE:** For optimal performance, guest SMP cores should be on the same NUMA
+node as the NIC in use, if possible/applicable. Testpmd should be assigned at
+least (nb_cores + 1) total cores with the CPU mask.
+
 Executing Packet Forwarding tests
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
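The NOTE above about pinning PMD threads can be made concrete. The following is
a minimal sketch, not part of this patch: the interface name ``eth3``, the CPU
layout, and the ``0x30`` mask are placeholders, and in a VSPerf run the PMD mask
would normally be supplied through the vswitch configuration rather than set by
hand with ovs-vsctl.

  .. code-block:: console

     # Find the NUMA node the NIC is attached to (interface name is an example)
     $ cat /sys/class/net/eth3/device/numa_node
     0

     # List the CPUs that belong to that NUMA node
     $ lscpu | grep "NUMA node0"
     NUMA node0 CPU(s):   0-13,28-41

     # Example only: pin the OVS PMD threads to cores 4 and 5 (mask 0x30),
     # which sit on the same NUMA node as the NIC
     $ ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x30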
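For reference, ``GUEST_NIC_QUEUES = 2`` corresponds at the QEMU level to
launching each vhost-user NIC with multiple queues, roughly as sketched below.
The socket path, ids, and MAC address are placeholders rather than the exact
arguments VSPerf generates; ``queues=N`` on the netdev together with ``mq=on``
and ``vectors=2N+2`` on the virtio-net device is the standard multi-queue
pattern.

  .. code-block:: console

     # Illustrative QEMU command-line fragment for one vhost-user NIC with 2 queues
     -chardev socket,id=char0,path=/tmp/dpdkvhostuser0 \
     -netdev type=vhost-user,id=net0,chardev=char0,vhostforce,queues=2 \
     -device virtio-net-pci,netdev=net0,mac=00:00:00:00:00:01,mq=on,vectors=6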
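Similarly, the ``GUEST_TESTPMD_*`` values shown above map onto testpmd's EAL and
application options roughly as follows. This is a sketch only; the exact command
VSPerf builds inside the guest (forwarding mode, port options, and so on) is not
shown in this patch.

  .. code-block:: console

     # 5 cores for EAL (-l 0,1,2,3,4): core 0 runs the testpmd main thread,
     # cores 1-4 are the 4 forwarding cores; 2 RX and 2 TX queues per port.
     $ testpmd -l 0,1,2,3,4 -n 4 -- -i --nb-cores=4 --rxq=2 --txq=2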