diff --git a/docs/testing/user/userguide/testlist.rst b/docs/testing/user/userguide/testlist.rst
index 4b535494..2b0e9d7f 100644
--- a/docs/testing/user/userguide/testlist.rst
+++ b/docs/testing/user/userguide/testlist.rst
@@ -107,3 +107,284 @@ List of integration testcases above can be obtained by execution of:
.. code-block:: bash
$ ./vsperf --integration --list
+
+OVS/DPDK Regression TestCases
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+These regression tests verify several DPDK features used internally by Open vSwitch.
+They can be used to verify the performance and correct functionality of upcoming DPDK
+and OVS releases and release candidates.
+
+These tests are part of the integration testcases and must be executed with the
+``--integration`` CLI parameter.
+
+Example of execution of all OVS/DPDK regression tests:
+
+.. code-block:: bash
+
+ $ ./vsperf --integration --tests ovsdpdk_
+
+Testcases are defined in the file ``conf/integration/01b_dpdk_regression_tests.conf``. This file
+contains a set of configuration options with the prefix ``OVSDPDK_``. These parameters can be used
+to customize the regression tests and they will override some of the standard VSPERF configuration
+options. It is recommended to check the OVSDPDK configuration parameters and to modify them in
+accordance with the VSPERF configuration.
+
+At least the following parameters should be examined. Their values must ensure that DPDK and
+QEMU threads are pinned to CPU cores of the NUMA node to which the tested NICs are connected.
+
+.. code-block:: python
+
+ _OVSDPDK_1st_PMD_CORE
+ _OVSDPDK_2nd_PMD_CORE
+ _OVSDPDK_GUEST_5_CORES
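
As a sketch only (the core numbers below are illustrative and depend on the host's NUMA topology), these parameters could be overridden in a custom configuration file passed to vsperf via its ``--conf-file`` option:

```bash
# 99_custom.conf is a hypothetical file; the values inside are examples only
# and must match the NUMA node of the tested NICs:
#   _OVSDPDK_1st_PMD_CORE = 4
#   _OVSDPDK_2nd_PMD_CORE = 5
#   _OVSDPDK_GUEST_5_CORES = '6,7,8,9,10'
$ ./vsperf --integration --conf-file ./99_custom.conf --tests ovsdpdk_
```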
+
+DPDK NIC Support
+++++++++++++++++
+
+A set of performance tests to verify support of DPDK accelerated network interface cards.
+Testcases use the standard physical to physical network scenario with several vSwitch and
+traffic configurations, including one or two PMD threads, unidirectional or bidirectional
+traffic, and the RFC2544 Continuous or RFC2544 Throughput with 0% packet loss traffic types.
+
+======================================== ======================================================================================
+Testcase Name Description
+======================================== ======================================================================================
+ovsdpdk_nic_p2p_single_pmd_unidir_cont   P2P with a single PMD in OVS and unidirectional continuous traffic.
+ovsdpdk_nic_p2p_single_pmd_bidir_cont    P2P with a single PMD in OVS and bidirectional continuous traffic.
+ovsdpdk_nic_p2p_two_pmd_bidir_cont       P2P with two PMDs in OVS and bidirectional continuous traffic.
+ovsdpdk_nic_p2p_single_pmd_unidir_tput   P2P with a single PMD in OVS and unidirectional RFC2544 Throughput traffic.
+ovsdpdk_nic_p2p_single_pmd_bidir_tput    P2P with a single PMD in OVS and bidirectional RFC2544 Throughput traffic.
+ovsdpdk_nic_p2p_two_pmd_bidir_tput       P2P with two PMDs in OVS and bidirectional RFC2544 Throughput traffic.
+======================================== ======================================================================================
+
+DPDK Hotplug Support
+++++++++++++++++++++
+
+A set of functional tests to verify DPDK hotplug support. Tests verify that it is possible
+to use a port which was not bound to the DPDK driver during vSwitch startup. There is also
+a test which verifies the possibility to detach a port from the DPDK driver. However, support
+for manual detachment of a port from DPDK has been removed from recent OVS versions and
+thus this testcase is expected to fail.
+
+======================================== ======================================================================================
+Testcase Name Description
+======================================== ======================================================================================
+ovsdpdk_hotplug_attach Ensure successful port-add after binding a device to igb_uio after
+ ovs-vswitchd is launched.
+ovsdpdk_hotplug_detach                   Same as ovsdpdk_hotplug_attach, but delete and detach the device
+                                         after the hotplug. Note: support of netdev-dpdk/detach has been
+                                         removed from OVS, so the testcase will fail with recent OVS/DPDK
+                                         versions.
+======================================== ======================================================================================
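
As an illustration of what ovsdpdk_hotplug_attach exercises (the PCI address, bridge and port names below are hypothetical), a device can be bound to ``igb_uio`` while ovs-vswitchd is already running and then hot-added:

```bash
# Bind the NIC to igb_uio after ovs-vswitchd has been started
$ dpdk-devbind.py --bind=igb_uio 0000:05:00.0
# Hot-add the device as a DPDK port
$ ovs-vsctl add-port br0 dpdk0 -- set Interface dpdk0 type=dpdk \
      options:dpdk-devargs=0000:05:00.0
```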
+
+RX Checksum Support
++++++++++++++++++++
+
+A set of functional tests for verification of RX checksum calculation for tunneled traffic.
+Open vSwitch enables RX checksum offloading by default if the NIC supports it. Note that
+it is not possible to manually enable or disable RX checksum offloading. In order to verify
+correct RX checksum calculation in software, the user has to execute these testcases
+on a NIC without HW offloading capabilities.
+
+Testcases utilize the existing overlay physical to physical (op2p) network deployment
+implemented in vsperf. This deployment expects that the traffic generator sends unidirectional
+tunneled traffic (e.g. VXLAN) and that Open vSwitch decapsulates the data and sends it
+back to the traffic generator via the second port.
+
+======================================== ======================================================================================
+Testcase Name Description
+======================================== ======================================================================================
+ovsdpdk_checksum_l3 Test verifies RX IP header checksum (offloading) validation for
+ tunneling protocols.
+ovsdpdk_checksum_l4 Test verifies RX UDP header checksum (offloading) validation for
+ tunneling protocols.
+======================================== ======================================================================================
+
+Flow Control Support
+++++++++++++++++++++
+
+A set of functional testcases for the validation of flow control support in Open vSwitch
+with DPDK support. If flow control is enabled in both OVS and the traffic generator, then
+whenever a network endpoint (OVS or TGEN) is not able to process incoming data, it detects
+an RX buffer overflow and sends an Ethernet PAUSE frame (as defined in IEEE 802.3x)
+to the TX side. This mechanism ensures that the TX side slows down traffic transmission
+and thus no data is lost at the RX side.
+
+Introduced testcases use the physical to physical scenario to forward data between
+traffic generator ports. It is expected that OVS processes small frames slower than
+the line rate. This means that with flow control disabled, the traffic generator will
+report frame loss. On the other hand, with flow control enabled, the traffic generator
+should report 0% frame loss.
+
+======================================== ======================================================================================
+Testcase Name Description
+======================================== ======================================================================================
+ovsdpdk_flow_ctrl_rx Test the rx flow control functionality of DPDK PHY ports.
+ovsdpdk_flow_ctrl_rx_dynamic Change the rx flow control support at run time and ensure the system
+ honored the changes.
+======================================== ======================================================================================
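
For reference, rx flow control of a DPDK physical port can be changed with ``ovs-vsctl`` (the port name below is illustrative); the ``_dynamic`` testcase relies on this option being changeable at run time:

```bash
# Enable rx flow control on a DPDK physical port
$ ovs-vsctl set Interface dpdk0 options:rx-flow-ctrl=true
# Disable it again at run time
$ ovs-vsctl set Interface dpdk0 options:rx-flow-ctrl=false
```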
+
+Multiqueue Support
+++++++++++++++++++
+
+A set of functional testcases for validation of multiqueue support for both physical
+and vHost User DPDK ports. Testcases utilize P2P and PVP network deployments and
+native support of multiqueue configuration available in VSPERF.
+
+======================================== ======================================================================================
+Testcase Name Description
+======================================== ======================================================================================
+ovsdpdk_mq_p2p_rxqs Setup rxqs on NIC port.
+ovsdpdk_mq_p2p_rxqs_same_core_affinity Affinitize rxqs to the same core.
+ovsdpdk_mq_p2p_rxqs_multi_core_affinity Affinitize rxqs to separate cores.
+ovsdpdk_mq_pvp_rxqs Setup rxqs on vhost user port.
+ovsdpdk_mq_pvp_rxqs_linux_bridge Confirm traffic received over vhost RXQs with Linux virtio device in
+ guest.
+ovsdpdk_mq_pvp_rxqs_testpmd Confirm traffic received over vhost RXQs with DPDK device in guest.
+======================================== ======================================================================================
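
A minimal sketch of the OVS configuration exercised by these testcases (port name and core IDs are illustrative): two RX queues are created on a port and affinitized to PMD cores:

```bash
# Create two RX queues on the port
$ ovs-vsctl set Interface dpdk0 options:n_rxq=2
# Pin queue 0 to core 4 and queue 1 to core 5
$ ovs-vsctl set Interface dpdk0 other_config:pmd-rxq-affinity="0:4,1:5"
```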
+
+Vhost User
+++++++++++
+
+A set of functional testcases for validation of vHost User Client and vHost User
+Server modes in OVS.
+
+**NOTE:** Vhost User Server mode is deprecated and will be removed from OVS
+in the future.
+
+======================================== ======================================================================================
+Testcase Name Description
+======================================== ======================================================================================
+ovsdpdk_vhostuser_client                 Test vhost-user client mode.
+ovsdpdk_vhostuser_client_reconnect       Test the vhost-user client mode reconnect feature.
+ovsdpdk_vhostuser_server                 Test vhost-user server mode.
+ovsdpdk_vhostuser_sock_dir               Verify the functionality of the vhost-sock-dir flag.
+======================================== ======================================================================================
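
The difference between the two modes lies in which side creates the socket; for example, a vhost-user client port (names and socket path below are illustrative) is created as:

```bash
# OVS acts as the client; QEMU creates and listens on the socket
$ ovs-vsctl add-port br0 vhu0 -- set Interface vhu0 type=dpdkvhostuserclient \
      options:vhost-server-path=/tmp/vhu0.sock
# The deprecated server mode uses type=dpdkvhostuser instead
```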
+
+Virtual Devices Support
++++++++++++++++++++++++
+
+A set of functional testcases for verification of correct functionality of virtual
+device PMD drivers.
+
+======================================== ======================================================================================
+Testcase Name Description
+======================================== ======================================================================================
+ovsdpdk_vdev_add_null_pmd Test addition of port using the null DPDK PMD driver.
+ovsdpdk_vdev_del_null_pmd Test deletion of port using the null DPDK PMD driver.
+ovsdpdk_vdev_add_af_packet_pmd Test addition of port using the af_packet DPDK PMD driver.
+ovsdpdk_vdev_del_af_packet_pmd Test deletion of port using the af_packet DPDK PMD driver.
+======================================== ======================================================================================
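
Assuming an OVS build that accepts DPDK virtual devices via ``dpdk-devargs`` (bridge, port and interface names below are illustrative), such ports are created as:

```bash
# Port backed by the null DPDK PMD
$ ovs-vsctl add-port br0 null0 -- set Interface null0 type=dpdk \
      options:dpdk-devargs=net_null0
# Port backed by the af_packet DPDK PMD, attached to kernel interface eth0
$ ovs-vsctl add-port br0 afp0 -- set Interface afp0 type=dpdk \
      options:dpdk-devargs=net_af_packet0,iface=eth0
```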
+
+NUMA Support
+++++++++++++
+
+A functional testcase for validation of NUMA awareness feature in OVS.
+
+======================================== ======================================================================================
+Testcase Name Description
+======================================== ======================================================================================
+ovsdpdk_numa                             Test vhost-user NUMA support. vHost User PMD threads should migrate
+                                         to the NUMA node where QEMU is executed.
+======================================== ======================================================================================
+
+Jumbo Frame Support
++++++++++++++++++++
+
+A set of functional testcases for verification of jumbo frame support in OVS.
+Testcases utilize P2P and PVP network deployments and native support of jumbo
+frames available in VSPERF.
+
+============================================ ==================================================================================
+Testcase Name Description
+============================================ ==================================================================================
+ovsdpdk_jumbo_increase_mtu_phy_port_ovsdb Ensure that the increased MTU for a DPDK physical port is updated in
+ OVSDB.
+ovsdpdk_jumbo_increase_mtu_vport_ovsdb Ensure that the increased MTU for a DPDK vhost-user port is updated in
+ OVSDB.
+ovsdpdk_jumbo_reduce_mtu_phy_port_ovsdb Ensure that the reduced MTU for a DPDK physical port is updated in
+ OVSDB.
+ovsdpdk_jumbo_reduce_mtu_vport_ovsdb Ensure that the reduced MTU for a DPDK vhost-user port is updated in
+ OVSDB.
+ovsdpdk_jumbo_increase_mtu_phy_port_datapath Ensure that the MTU for a DPDK physical port is updated in the
+ datapath itself when increased to a valid value.
+ovsdpdk_jumbo_increase_mtu_vport_datapath Ensure that the MTU for a DPDK vhost-user port is updated in the
+ datapath itself when increased to a valid value.
+ovsdpdk_jumbo_reduce_mtu_phy_port_datapath   Ensure that the MTU for a DPDK physical port is updated in the
+                                             datapath itself when decreased to a valid value.
+ovsdpdk_jumbo_reduce_mtu_vport_datapath Ensure that the MTU for a DPDK vhost-user port is updated in the
+ datapath itself when decreased to a valid value.
+ovsdpdk_jumbo_mtu_upper_bound_phy_port Verify that the upper bound limit is enforced for OvS DPDK Phy ports.
+ovsdpdk_jumbo_mtu_upper_bound_vport Verify that the upper bound limit is enforced for OvS DPDK vhost-user
+ ports.
+ovsdpdk_jumbo_mtu_lower_bound_phy_port Verify that the lower bound limit is enforced for OvS DPDK Phy ports.
+ovsdpdk_jumbo_mtu_lower_bound_vport Verify that the lower bound limit is enforced for OvS DPDK vhost-user
+ ports.
+ovsdpdk_jumbo_p2p Ensure that jumbo frames are received, processed and forwarded
+ correctly by DPDK physical ports.
+ovsdpdk_jumbo_pvp Ensure that jumbo frames are received, processed and forwarded
+ correctly by DPDK vhost-user ports.
+ovsdpdk_jumbo_p2p_upper_bound                Ensure that jumbo frames above the configured Rx port's MTU are not
+                                             accepted.
+============================================ ==================================================================================
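
The testcases above are built around the ``mtu_request`` column; as a sketch (port name and MTU value are illustrative), an MTU change and its effect on the datapath can be checked with:

```bash
# Request a jumbo MTU on a DPDK port
$ ovs-vsctl set Interface dpdk0 mtu_request=9000
# Read back the MTU the datapath actually applied
$ ovs-vsctl get Interface dpdk0 mtu
```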
+
+Rate Limiting
++++++++++++++
+
+A set of functional testcases for validation of rate limiting support. This feature
+allows the user to configure ingress policing for both physical and vHost User DPDK
+ports.
+
+**NOTE:** The desired maximum rate is specified in kilobits per second and it defines
+the rate of the payload only.
+
+======================================== ======================================================================================
+Testcase Name Description
+======================================== ======================================================================================
+ovsdpdk_rate_create_phy_port Ensure a rate limiting interface can be created on a physical DPDK
+ port.
+ovsdpdk_rate_delete_phy_port Ensure a rate limiting interface can be destroyed on a physical DPDK
+ port.
+ovsdpdk_rate_create_vport Ensure a rate limiting interface can be created on a vhost-user port.
+ovsdpdk_rate_delete_vport Ensure a rate limiting interface can be destroyed on a vhost-user
+ port.
+ovsdpdk_rate_no_policing                 Ensure that when a user attempts to create a rate limiting interface
+                                         but the policing rate argument is missing, no rate limiter is
+                                         created.
+ovsdpdk_rate_no_burst                    Ensure that when a user attempts to create a rate limiting interface
+                                         but the policing burst argument is missing, the rate limiter is
+                                         created.
+ovsdpdk_rate_p2p Ensure when a user creates a rate limiting physical interface that
+ the traffic is limited to the specified policer rate in a p2p setup.
+ovsdpdk_rate_pvp Ensure when a user creates a rate limiting vHost User interface that
+ the traffic is limited to the specified policer rate in a pvp setup.
+ovsdpdk_rate_p2p_multi_pkt_sizes Ensure that rate limiting works for various frame sizes.
+======================================== ======================================================================================
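
For reference, ingress policing is configured via the ``ingress_policing_rate`` (kbps) and ``ingress_policing_burst`` (kb) columns; the values and port name below are illustrative:

```bash
# Limit ingress traffic to roughly 10 Mbps with a 1000 kb burst
$ ovs-vsctl set Interface dpdk0 ingress_policing_rate=10000 \
      ingress_policing_burst=1000
# Remove the policer again by setting both values to 0
$ ovs-vsctl set Interface dpdk0 ingress_policing_rate=0 \
      ingress_policing_burst=0
```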
+
+Quality of Service
+++++++++++++++++++
+
+A set of functional testcases for validation of QoS support. This feature
+allows the user to configure egress policing for both physical and vHost User DPDK
+ports.
+
+**NOTE:** The desired maximum rate is specified in bytes per second and it defines
+the rate of the payload only.
+
+======================================== ======================================================================================
+Testcase Name Description
+======================================== ======================================================================================
+ovsdpdk_qos_create_phy_port              Ensure a QoS policy can be created on a physical DPDK port.
+ovsdpdk_qos_delete_phy_port Ensure an existing QoS policy can be destroyed on a physical DPDK
+ port.
+ovsdpdk_qos_create_vport Ensure a QoS policy can be created on a virtual vhost user port.
+ovsdpdk_qos_delete_vport Ensure an existing QoS policy can be destroyed on a vhost user port.
+ovsdpdk_qos_create_no_cir Ensure that a QoS policy cannot be created if the egress policer cir
+ argument is missing.
+ovsdpdk_qos_create_no_cbs Ensure that a QoS policy cannot be created if the egress policer cbs
+ argument is missing.
+ovsdpdk_qos_p2p In a p2p setup, ensure when a QoS egress policer is created that the
+ traffic is limited to the specified rate.
+ovsdpdk_qos_pvp In a pvp setup, ensure when a QoS egress policer is created that the
+ traffic is limited to the specified rate.
+======================================== ======================================================================================
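
For reference, an egress policer is configured through the OVSDB QoS table with ``cir`` (bytes/s) and ``cbs`` (bytes) parameters; the values and port name below are illustrative:

```bash
# Attach an egress policer limiting the port to ~10 Mbps of payload
$ ovs-vsctl set port dpdk0 qos=@newq -- --id=@newq create qos \
      type=egress-policer other-config:cir=1250000 other-config:cbs=2048
# Remove the policy again
$ ovs-vsctl destroy QoS dpdk0 -- clear Port dpdk0 qos
```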