Diffstat (limited to 'docs/testing/user/userguide')
-rw-r--r--  docs/testing/user/userguide/12-nsb-overview.rst                      8
-rw-r--r--  docs/testing/user/userguide/13-nsb-installation.rst                252
-rw-r--r--  docs/testing/user/userguide/14-nsb-operation.rst                    34
-rw-r--r--  docs/testing/user/userguide/nsb/nsb-list-of-tcs.rst                  2
-rw-r--r--  docs/testing/user/userguide/nsb/tc_vims_context_sipp.rst            96
-rw-r--r--  docs/testing/user/userguide/nsb/tc_vpp_baremetal_crypto_ipsec.rst  113
6 files changed, 487 insertions, 18 deletions
diff --git a/docs/testing/user/userguide/12-nsb-overview.rst b/docs/testing/user/userguide/12-nsb-overview.rst
index 70aba1e37..012ea6c3d 100644
--- a/docs/testing/user/userguide/12-nsb-overview.rst
+++ b/docs/testing/user/userguide/12-nsb-overview.rst
@@ -65,6 +65,14 @@ NSB extension includes:
* L4-L7 state-full traffic profiles
* Tunneling protocol/network overlay support
+* Scenarios that handle NSB test case execution
+
+  * NSPerf - scenario that handles generic NSB test case execution
+    (set up and initialize the TG/VNF, trigger traffic on the TG, collect KPIs)
+  * NSPerf-RFC2544 - scenario that allows repeated triggering of traffic on the
+    traffic generators until the test case acceptance criteria are met
+    (for example, RFC2544 binary search)
+
* Test case samples
* Ping
diff --git a/docs/testing/user/userguide/13-nsb-installation.rst b/docs/testing/user/userguide/13-nsb-installation.rst
index 0487dad9a..694521d2b 100644
--- a/docs/testing/user/userguide/13-nsb-installation.rst
+++ b/docs/testing/user/userguide/13-nsb-installation.rst
@@ -21,6 +21,7 @@ NSB Installation
.. _OVS-DPDK: http://docs.openvswitch.org/en/latest/intro/install/dpdk/
.. _devstack: https://docs.openstack.org/devstack/pike/>
+.. _OVS-DPDK-versions: http://docs.openvswitch.org/en/latest/faq/releases/
Abstract
--------
@@ -95,9 +96,10 @@ The ``nsb_setup.sh`` allows to:
1. Install Yardstick in specified mode: bare metal or container.
Refer :doc:`04-installation`.
2. Install package dependencies on remote servers used as traffic generator or
- sample VNF. Add such servers to ``install-inventory.ini`` file to either
+   sample VNF. Install DPDK, sample VNFs, TREX and collectd.
+   Add such servers to the ``install-inventory.ini`` file, under either the
``yardstick-standalone`` or ``yardstick-baremetal`` server groups.
- Configures IOMMU, hugepages, open file limits, CPU isolation, etc.
+ It configures IOMMU, hugepages, open file limits, CPU isolation, etc.
3. Build VM image either nsb or normal. The nsb VM image is used to run
Yardstick sample VNF tests, like vFW, vACL, vCGNAPT, etc.
The normal VM image is used to run Yardstick ping tests in OpenStack context.
@@ -136,21 +138,25 @@ Modify the Yardstick installation inventory used by Ansible::
[yardstick:children]
jumphost
- [yardstick-standalone]
- standalone ansible_host=192.168.2.51 ansible_connection=ssh
-
[yardstick-baremetal]
- baremetal ansible_host=192.168.2.52 ansible_connection=ssh
+ baremetal ansible_host=192.168.2.51 ansible_connection=ssh
+
+ [yardstick-standalone]
+ standalone ansible_host=192.168.2.52 ansible_connection=ssh
[all:vars]
- arch_amd64=amd64
- arch_arm64=arm64
- inst_mode_baremetal=baremetal
- inst_mode_container=container
- inst_mode_container_pull=container_pull
- ubuntu_archive={"amd64": "http://archive.ubuntu.com/ubuntu/", "arm64": "http://ports.ubuntu.com/ubuntu-ports/"}
- ansible_user=root
- ansible_ssh_pass=root # OR ansible_ssh_private_key_file=/root/.ssh/id_rsa
+ # Uncomment credentials below if needed
+ ansible_user=root
+ ansible_ssh_pass=root
+ # ansible_ssh_private_key_file=/root/.ssh/id_rsa
+ # When IMG_PROPERTY is passed as neither "normal" nor "nsb", set
+ # "path_to_vm=/path/to/image" to add the image to OpenStack
+ # path_to_img=/tmp/workspace/yardstick-image.img
+
+ # List of CPUs to be isolated (not used by default)
+ # The GRUB command line will be extended with:
+ # "isolcpus=<ISOL_CPUS> nohz=on nohz_full=<ISOL_CPUS> rcu_nocbs=<ISOL_CPUS>"
+ # ISOL_CPUS=2-27,30-55 # physical CPUs of all NUMA nodes, four CPUs reserved
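+ # For illustration (a sketch based on the line above), setting
+ # ISOL_CPUS=2-27,30-55 would extend the kernel command line with:
+ # "isolcpus=2-27,30-55 nohz=on nohz_full=2-27,30-55 rcu_nocbs=2-27,30-55"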
.. warning::
@@ -178,14 +184,18 @@ Modify the Yardstick installation inventory used by Ansible::
.. note::
CPU isolation can be applied to the remote servers, like:
- ISOL_CPUS=2-27,30-55
- Uncomment and modify accordingly in ``install-inventory.ini`` file.
+ ISOL_CPUS=2-27,30-55. Uncomment and modify accordingly in
+ ``install-inventory.ini`` file.
By default ``nsb_setup.sh`` pulls Yardstick image based on Ubuntu 16.04 from
docker hub and starts container, builds NSB VM image based on Ubuntu 16.04,
installs packages to the servers given in ``yardstick-standalone`` and
``yardstick-baremetal`` host groups.
+To pull the Yardstick image based on Ubuntu 18.04, run::
+
+ ./nsb_setup.sh -i opnfv/yardstick-ubuntu-18.04:latest
+
To change default behavior modify parameters for ``install.yaml`` in
``nsb_setup.sh`` file.
@@ -196,11 +206,15 @@ To execute an installation for a **BareMetal** or a **Standalone context**::
./nsb_setup.sh
-
To execute an installation for an **OpenStack** context::
./nsb_setup.sh <path to admin-openrc.sh>
+.. note::
+
+  Yardstick may stop being operational after a distribution kernel update on
+  the host if Yardstick was installed before the update. Run ``nsb_setup.sh``
+  again to resolve this.
+
.. warning::
The Yardstick VM image (NSB or normal) cannot be built inside a VM.
@@ -217,11 +231,75 @@ execute::
docker exec -it yardstick bash
+.. note::
+
+  It may be necessary to configure the tty in the docker container to extend
+  the command line character length, for example:
+
+  stty rows 58 cols 234
+
It will also automatically download all the packages needed for NSB Testing
-setup. Refer chapter :doc:`04-installation` for more on Docker
+setup. Refer to chapter :doc:`04-installation` for more on Docker.
**Install Yardstick using Docker (recommended)**
+Bare Metal context example
+^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Let's assume there are three servers acting as TG, sample VNF DUT and jump host.
+
+Perform the following steps to install NSB:
+
+1. Clone the Yardstick repo to the jump host.
+2. Add the TG and DUT servers to the ``yardstick-baremetal`` group in the
+   ``install-inventory.ini`` file to install NSB and its dependencies. Install
+   Python on the servers.
+3. Start the deployment using the docker image based on Ubuntu 16.04:
+
+.. code-block:: console
+
+ ./nsb_setup.sh
+
+4. Reboot the bare metal servers.
+5. Enter the yardstick container, modify the ``pod.yaml`` file (see the sketch
+   below) and run the tests.
+
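+A minimal ``pod.yaml`` sketch for the bare metal context is shown below, based
+on the sample ``pod.yaml`` files; only one test interface (``xe0``) per node is
+shown. The management IPs, credentials, PCI addresses, drivers, dataplane IPs
+and MAC addresses are placeholders and must be replaced with the values of the
+actual TG and DUT servers (full ``pod.yaml`` examples are given later in this
+chapter):
+
+.. code-block:: yaml
+
+   nodes:
+   -
+     name: trafficgen_1
+     role: TrafficGen
+     ip: 10.10.10.2
+     user: root
+     password: r00t
+     interfaces:
+       xe0:
+         vpci: "0000:07:00.0"
+         driver: i40e
+         dpdk_port_num: 0
+         local_ip: "152.16.100.20"
+         netmask: "255.255.255.0"
+         local_mac: "00:00:00:00:00:01"
+   -
+     name: vnf
+     role: vnf
+     ip: 10.10.10.3
+     user: root
+     password: r00t
+     interfaces:
+       xe0:
+         vpci: "0000:07:00.0"
+         driver: i40e
+         dpdk_port_num: 0
+         local_ip: "152.16.100.19"
+         netmask: "255.255.255.0"
+         local_mac: "00:00:00:00:00:02"
+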
+Standalone context example for Ubuntu 18
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Let's assume there are three servers acting as TG, sample VNF DUT and jump host.
+Ubuntu 18 is installed on all servers.
+
+Perform the following steps to install NSB:
+
+1. Clone the Yardstick repo to the jump host.
+2. Add the TG server to the ``yardstick-baremetal`` group in the
+   ``install-inventory.ini`` file to install NSB and its dependencies.
+   Add the server where the VM with the sample VNF will be deployed to the
+   ``yardstick-standalone`` group in the ``install-inventory.ini`` file.
+   The target VM image named ``yardstick-nsb-image.img`` will be placed in
+   ``/var/lib/libvirt/images/``.
+   Install Python on the servers.
+3. Modify ``nsb_setup.sh`` on the jump host:
+
+.. code-block:: console
+
+ ansible-playbook \
+ -e IMAGE_PROPERTY='nsb' \
+ -e OS_RELEASE='bionic' \
+ -e INSTALLATION_MODE='container_pull' \
+ -e YARD_IMAGE_ARCH='amd64' ${extra_args} \
+ -i install-inventory.ini install.yaml
+
+4. Start the deployment with the Yardstick docker image based on Ubuntu 18.04:
+
+.. code-block:: console
+
+ ./nsb_setup.sh -i opnfv/yardstick-ubuntu-18.04:latest -o <openrc_file>
+
+5. Reboot the servers.
+6. Enter the yardstick container, modify the ``pod.yaml`` file and run the
+   tests.
+
+
System Topology
---------------
@@ -881,6 +959,144 @@ Update contexts section
cidr: '152.16.40.10/24'
gateway_ip: '152.16.100.20'
+OVS-DPDK configuration options
+++++++++++++++++++++++++++++++
+
+There are a number of configuration options available for the OVS-DPDK context
+in a test case. They are mostly used for performance tuning.
+
+OVS-DPDK properties:
+''''''''''''''''''''
+
+OVS-DPDK properties example under *ovs_properties* section:
+
+ .. code-block:: console
+
+   ovs_properties:
+     version:
+       ovs: 2.8.1
+       dpdk: 17.05.2
+     pmd_threads: 4
+     pmd_cpu_mask: "0x3c"
+     ram:
+       socket_0: 2048
+       socket_1: 2048
+     queues: 2
+     vpath: "/usr/local"
+     max_idle: 30000
+     lcore_mask: 0x02
+     dpdk_pmd-rxq-affinity:
+       0: "0:2,1:2"
+       1: "0:2,1:2"
+       2: "0:3,1:3"
+       3: "0:3,1:3"
+     vhost_pmd-rxq-affinity:
+       0: "0:3,1:3"
+       1: "0:3,1:3"
+       2: "0:4,1:4"
+       3: "0:4,1:4"
+
+OVS-DPDK properties description:
+
+ +-------------------------+-------------------------------------------------+
+ | Parameters | Detail |
+ +=========================+=================================================+
+ | version || Version of OVS and DPDK to be installed |
+ | || There is a relation between OVS and DPDK |
+ | | version which can be found at |
+ | | `OVS-DPDK-versions`_ |
+ | || By default OVS: 2.6.0, DPDK: 16.07.2 |
+ +-------------------------+-------------------------------------------------+
+ | lcore_mask || Core bitmask used during DPDK initialization |
+ | | where the non-datapath OVS-DPDK threads such |
+ | | as handler and revalidator threads run |
+ +-------------------------+-------------------------------------------------+
+ | pmd_cpu_mask || Core bitmask that sets which cores are used by |
+ | || OVS-DPDK for datapath packet processing |
+ +-------------------------+-------------------------------------------------+
+ | pmd_threads || Number of PMD threads used by OVS-DPDK for |
+ | | datapath |
+ | || Yardstick computes the corresponding core mask from this value |
+ | || It is used only if pmd_cpu_mask is not given |
+ | || Default is 2 |
+ +-------------------------+-------------------------------------------------+
+ | ram || Amount of RAM to be used for each socket, MB |
+ | || Default is 2048 MB |
+ +-------------------------+-------------------------------------------------+
+ | queues || Number of RX queues used for DPDK physical |
+ | | interface |
+ +-------------------------+-------------------------------------------------+
+ | dpdk_pmd-rxq-affinity || RX queue assignment to PMD threads for DPDK |
+ | || e.g.: <port number> : <queue-id>:<core-id> |
+ +-------------------------+-------------------------------------------------+
+ | vhost_pmd-rxq-affinity || RX queue assignment to PMD threads for vhost |
+ | || e.g.: <port number> : <queue-id>:<core-id> |
+ +-------------------------+-------------------------------------------------+
+ | vpath || User path for openvswitch files |
+ | || Default is ``/usr/local`` |
+ +-------------------------+-------------------------------------------------+
+ | max_idle || The maximum time that idle flows will remain |
+ | | cached in the datapath, ms |
+ +-------------------------+-------------------------------------------------+
+
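+For reference, several of the properties above correspond to standard OVS-DPDK
+``other_config`` options. The sketch below shows roughly equivalent manual
+``ovs-vsctl`` commands for the example values given earlier (for illustration
+only; the interface name ``dpdk0`` is an assumption):
+
+ .. code-block:: console
+
+   # cores for non-datapath OVS-DPDK threads (lcore_mask)
+   ovs-vsctl set Open_vSwitch . other_config:dpdk-lcore-mask=0x02
+   # cores used by the PMD datapath threads (pmd_cpu_mask)
+   ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x3c
+   # hugepage memory per NUMA socket in MB (ram)
+   ovs-vsctl set Open_vSwitch . other_config:dpdk-socket-mem="2048,2048"
+   # maximum idle time of datapath flows in ms (max_idle)
+   ovs-vsctl set Open_vSwitch . other_config:max-idle=30000
+   # RX queue to PMD core pinning for a DPDK port (dpdk_pmd-rxq-affinity)
+   ovs-vsctl set Interface dpdk0 other_config:pmd-rxq-affinity="0:2,1:2"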
+
+VM image properties
+'''''''''''''''''''
+
+VM image properties example under *flavor* section:
+
+ .. code-block:: console
+
+   flavor:
+     images: <path>
+     ram: 8192
+     extra_specs:
+       machine_type: 'pc-i440fx-xenial'
+       hw:cpu_sockets: 1
+       hw:cpu_cores: 6
+       hw:cpu_threads: 2
+       hw_socket: 0
+       cputune: |
+         <cputune>
+           <vcpupin vcpu="0" cpuset="7"/>
+           <vcpupin vcpu="1" cpuset="8"/>
+           ...
+           <vcpupin vcpu="11" cpuset="18"/>
+           <emulatorpin cpuset="11"/>
+         </cputune>
+
+VM image properties description:
+
+ +-------------------------+-------------------------------------------------+
+ | Parameters | Detail |
+ +=========================+=================================================+
+ | images || Path to the VM image generated by |
+ | | ``nsb_setup.sh`` |
+ | || Default path is ``/var/lib/libvirt/images/`` |
+ | || Default file name ``yardstick-nsb-image.img`` |
+ | | or ``yardstick-image.img`` |
+ +-------------------------+-------------------------------------------------+
+ | ram || Amount of RAM to be used for VM |
+ | || Default is 4096 MB |
+ +-------------------------+-------------------------------------------------+
+ | hw:cpu_sockets || Number of sockets provided to the guest VM |
+ | || Default is 1 |
+ +-------------------------+-------------------------------------------------+
+ | hw:cpu_cores || Number of cores provided to the guest VM |
+ | || Default is 2 |
+ +-------------------------+-------------------------------------------------+
+ | hw:cpu_threads || Number of threads provided to the guest VM |
+ | || Default is 2 |
+ +-------------------------+-------------------------------------------------+
+ | hw_socket || Generate vcpu cpuset from given HW socket |
+ | || Default is 0 |
+ +-------------------------+-------------------------------------------------+
+ | cputune || Maps virtual cpu with logical cpu |
+ +-------------------------+-------------------------------------------------+
+ | machine_type || Machine type to be emulated in VM |
+ | || Default is 'pc-i440fx-xenial' |
+ +-------------------------+-------------------------------------------------+
+
OpenStack with SR-IOV support
-----------------------------
diff --git a/docs/testing/user/userguide/14-nsb-operation.rst b/docs/testing/user/userguide/14-nsb-operation.rst
index 941a0bb65..69ffb8a3b 100644
--- a/docs/testing/user/userguide/14-nsb-operation.rst
+++ b/docs/testing/user/userguide/14-nsb-operation.rst
@@ -640,3 +640,37 @@ A testcase can be started with the following command as an example:
.. code-block:: bash
yardstick task start /yardstick/samples/vnf_samples/nsut/vpe/tc_baremetal_rfc2544_ipv4_1flow_64B_ixia.yaml
+
+Preparing test run of vIPSEC test case
+----------------------------------------
+
+Location of vIPSEC test cases: ``samples/vnf_samples/nsut/ipsec/``.
+
+Before running a specific vIPSEC test case using NSB, the following
+dependencies have to be preinstalled and properly configured:
+
+- VPP
+
+.. code-block:: console
+
+ export UBUNTU="xenial"
+ export RELEASE=".stable.1810"
+ sudo rm /etc/apt/sources.list.d/99fd.io.list
+ echo "deb [trusted=yes] https://nexus.fd.io/content/repositories/fd.io$RELEASE.ubuntu.$UBUNTU.main/ ./" | sudo tee -a /etc/apt/sources.list.d/99fd.io.list
+ sudo apt-get update
+ sudo apt-get install vpp vpp-lib vpp-plugin vpp-dbg vpp-dev vpp-api-java vpp-api-python vpp-api-lua
+
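+Optionally, verify that VPP installed correctly and that the service is
+running (this assumes the packages provide a ``vpp`` system service):
+
+.. code-block:: console
+
+   sudo service vpp status
+   vppctl show version
+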
+- VAT templates
+
+ VAT templates are required for the VPP API.
+
+.. code-block:: console
+
+ mkdir -p /opt/nsb_bin/vpp/templates/
+ echo 'exec trace add dpdk-input 50' > /opt/nsb_bin/vpp/templates/enable_dpdk_traces.vat
+ echo 'exec trace add vhost-user-input 50' > /opt/nsb_bin/vpp/templates/enable_vhost_user_traces.vat
+ echo 'exec trace add memif-input 50' > /opt/nsb_bin/vpp/templates/enable_memif_traces.vat
+ cat > /opt/nsb_bin/vpp/templates/dump_interfaces.vat << EOL
+ sw_interface_dump
+ dump_interface_table
+ quit
+ EOL
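+
+Once the dependencies are in place, a vIPSEC test case can be started from the
+Yardstick container in the usual way, for example (a sketch; the sample file
+is one of the test cases shipped in ``samples/vnf_samples/nsut/ipsec/`` and,
+depending on the container layout, the path may need the ``/yardstick/``
+prefix):
+
+.. code-block:: console
+
+   yardstick task start samples/vnf_samples/nsut/ipsec/tc_baremetal_rfc2544_ipv4_hw_aesgcm_trex.yaml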
diff --git a/docs/testing/user/userguide/nsb/nsb-list-of-tcs.rst b/docs/testing/user/userguide/nsb/nsb-list-of-tcs.rst
index 6c18c7d89..1a4bf32b5 100644
--- a/docs/testing/user/userguide/nsb/nsb-list-of-tcs.rst
+++ b/docs/testing/user/userguide/nsb/nsb-list-of-tcs.rst
@@ -36,3 +36,5 @@ NSB PROX Test Case Descriptions
tc_vfw_rfc2544
tc_vfw_rfc2544_correlated
tc_vfw_rfc3511
+ tc_vpp_baremetal_crypto_ipsec
+ tc_vims_context_sipp
diff --git a/docs/testing/user/userguide/nsb/tc_vims_context_sipp.rst b/docs/testing/user/userguide/nsb/tc_vims_context_sipp.rst
new file mode 100644
index 000000000..6df4ab880
--- /dev/null
+++ b/docs/testing/user/userguide/nsb/tc_vims_context_sipp.rst
@@ -0,0 +1,96 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) 2019 Viosoft Corporation.
+
+**********************************************
+Yardstick Test Case Description: NSB VIMS
+**********************************************
+
++-----------------------------------------------------------------------------+
+|NSB VIMS test for vIMS characterization |
+| |
++--------------+--------------------------------------------------------------+
+|test case id | tc_vims_{context}_sipp |
+| | |
+| | * context = baremetal or heat; |
+| | |
++--------------+--------------------------------------------------------------+
+|metric | * Successful registrations per second; |
+| | * Total number of active registrations per server; |
+| | * Successful de-registrations per second; |
+| | * Successful session establishments per second; |
+| | * Total number of active sessions per server; |
+| | * Mean session setup time; |
+| | * Successful re-registrations per second; |
+| | |
++--------------+--------------------------------------------------------------+
+|test purpose | The vIMS test handles registration rate, call rate, |
+| | round trip delay, and message statistics of vIMS system. |
+| | |
+| | The vIMS test cases are implemented to run in the baremetal |
+| | and heat contexts with the default configuration. |
+| | |
++--------------+--------------------------------------------------------------+
+|configuration | The vIMS test cases are listed below: |
+| | |
+| | * tc_vims_baremetal_sipp.yaml |
+| | * tc_vims_heat_sipp.yaml |
+| | |
+| | Each test runs one time and collects all the KPIs. |
+| | The configuration of vIMS and SIPp can be changed in each |
+| | test. |
++--------------+--------------------------------------------------------------+
+|test tool | SIPp |
+| | |
+| | SIPp is an application that can simulate SIP scenarios and |
+| | generate RTP traffic; it is used for vIMS characterization. |
+| | |
++--------------+--------------------------------------------------------------+
+|applicability | The SIPp test cases can be configured with different: |
+| | |
+| | * number of accounts; |
+| | * the calls per second (cps) rate of the SIP test; |
+| | * the holding time; |
+| | * RTP configuration; |
+| | |
++--------------+--------------------------------------------------------------+
+|pre-test | For the OpenStack test case, only vIMS is deployed by an |
+|conditions | external Heat template; SIPp needs a pod.yaml file with the |
+| | necessary system and NIC information. |
+| | |
+| | For bare metal test cases, SIPp and vIMS must be installed |
+| | on the hosts where the test is executed. The pod.yaml file |
+| | must contain the necessary system and NIC information. |
+| | |
++--------------+--------------------------------------------------------------+
+|test sequence | description and expected result |
+| | |
++--------------+--------------------------------------------------------------+
+|step 1 | For Baremetal test: The TG and VNF are started on the hosts |
+| | based on the pod file. |
+| | |
+| | For the Heat test: One host VM for vIMS is booted, based on |
+| | the test flavor. Another host for SIPp is booted as the |
+| | traffic generator, based on the pod.yaml file. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 2 | Yardstick is connected with the vIMS and SIPp via ssh. |
+| | The test will resolve the topology, instantiate the vIMS and |
+| | SIPp and collect the KPIs/metrics. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 3 | The SIPp will run scenario tests with parameters configured |
+| | in test case files (tc_vims_baremetal_sipp.yaml and |
+| | tc_vims_heat_sipp.yaml files). |
+| | This is done until the KPIs of SIPp are within an acceptable |
+| | threshold. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 4 | In Baremetal test: The test quits the application. |
+| | |
+| | In Heat test: The host VM of vIMS is deleted on test |
+| | completion. |
++--------------+--------------------------------------------------------------+
+|test verdict | The test case will collect the KPIs and plot them on Grafana.|
++--------------+--------------------------------------------------------------+
diff --git a/docs/testing/user/userguide/nsb/tc_vpp_baremetal_crypto_ipsec.rst b/docs/testing/user/userguide/nsb/tc_vpp_baremetal_crypto_ipsec.rst
new file mode 100644
index 000000000..6a4a37697
--- /dev/null
+++ b/docs/testing/user/userguide/nsb/tc_vpp_baremetal_crypto_ipsec.rst
@@ -0,0 +1,113 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, 2019 Viosoft Corporation.
+
+***********************************************
+Yardstick Test Case Description: NSB VPP IPSEC
+***********************************************
+
++------------------------------------------------------------------------------+
+|NSB VPP test for vIPSEC characterization |
+| |
++--------------+---------------------------------------------------------------+
+|test case id | tc_baremetal_rfc2544_ipv4_{crypto_dev}_{crypto_alg} |
+| | |
+| | * crypto_dev = HW_cryptodev or SW_cryptodev; |
+| | * crypto_alg = aes-gcm or cbc-sha1; |
+| | |
++--------------+---------------------------------------------------------------+
+|metric | * Network Throughput NDR or PDR; |
+| | * Connections Per Second (CPS); |
+| | * Latency; |
+| | * Number of tunnels; |
+| | * TG Packets Out; |
+| | * TG Packets In; |
+| | * VNF Packets Out; |
+| | * VNF Packets In; |
+| | * Dropped packets; |
+| | |
++--------------+---------------------------------------------------------------+
+|test purpose | IPv4 IPsec tunnel mode performance test: |
+| | |
+| | * Finds and reports throughput NDR (Non Drop Rate) with zero |
+| | packet loss tolerance or throughput PDR (Partial Drop Rate) |
+| | with non-zero packet loss tolerance (LT) expressed in |
+| | number of packets transmitted. |
+| | |
+| | * The IPSEC test cases are implemented to run in the |
+| | baremetal context. |
+| | |
++--------------+---------------------------------------------------------------+
+|configuration | The IPSEC test cases are listed below: |
+| | |
+| | * tc_baremetal_rfc2544_ipv4_hw_aesgcm_IMIX_trex.yaml |
+| | * tc_baremetal_rfc2544_ipv4_hw_aesgcm_trex.yaml |
+| | * tc_baremetal_rfc2544_ipv4_hw_cbcsha1_IMIX_trex.yaml |
+| | * tc_baremetal_rfc2544_ipv4_hw_cbcsha1_trex.yaml |
+| | * tc_baremetal_rfc2544_ipv4_sw_aesgcm_IMIX_trex.yaml |
+| | * tc_baremetal_rfc2544_ipv4_sw_aesgcm_trex.yaml |
+| | * tc_baremetal_rfc2544_ipv4_sw_cbcsha1_IMIX_trex.yaml |
+| | * tc_baremetal_rfc2544_ipv4_sw_cbcsha1_trex.yaml |
+| | |
+| | The test duration is set to 500 seconds for each test. |
+| | The packet size is set to 64 bytes or higher. |
+| | The number of tunnels is set to 1 or higher. |
+| | The number of connections is set to 1 or higher. |
+| | These values can be configured in the test case files. |
+| | |
++--------------+---------------------------------------------------------------+
+|test tool | Vector Packet Processing (VPP) |
+| | The VPP platform is an extensible framework that provides |
+| | out-of-the-box production quality switch/router functionality.|
+| | It offers high performance, proven technology, modularity, |
+| | flexibility and a rich feature set. |
+| | |
++--------------+---------------------------------------------------------------+
+|applicability | The VPP IPSEC test cases can be configured with different: |
+| | |
+| | * packet sizes; |
+| | * test durations; |
+| | * tolerated loss; |
+| | * crypto device type; |
+| | * number of physical cores; |
+| | * number of tunnels; |
+| | * number of connections; |
+| | * encryption algorithms - integrity algorithm; |
+| | |
+| | Default values exist. |
+| | |
++--------------+---------------------------------------------------------------+
+|pre-test | For bare metal test cases, VPP and DPDK must be installed on |
+|conditions | the hosts where the test is executed. The pod.yaml file must |
+| | contain the necessary system and NIC information. |
+| | |
++--------------+---------------------------------------------------------------+
+|test sequence | description and expected result |
+| | |
++--------------+---------------------------------------------------------------+
+|step 1 | For Baremetal test: The TG and VNF are started on the hosts |
+| | based on the pod file. |
+| | |
++--------------+---------------------------------------------------------------+
+|step 2 | Yardstick is connected with the TG and VNF by using ssh. |
+| | The test will resolve the topology and instantiate the VNF |
+| | and TG and collect the KPIs/metrics. |
+| | |
++--------------+---------------------------------------------------------------+
+|step 3 | Test packets are generated by TG on links to DUTs. If the |
+| | number of dropped packets is more than the tolerated loss |
+| | the line rate or throughput is halved. This is done until |
+| | the dropped packets are within an acceptable tolerated loss. |
+| | |
+| | The KPI is the number of packets per second for a packet size |
+| | specified in the test case with an accepted minimal packet |
+| | loss for the default configuration. |
+| | |
++--------------+---------------------------------------------------------------+
+|step 4 | In the bare metal test: The test quits the application and |
+| | unbinds the DPDK ports. |
+| | |
++--------------+---------------------------------------------------------------+
+|test verdict | The test case will achieve a Throughput with an accepted |
+| | minimal tolerated packet loss. |
++--------------+---------------------------------------------------------------+ \ No newline at end of file