Diffstat (limited to 'docs')
-rw-r--r--  docs/testing/user/userguide/13-nsb_operation.rst      |  54
-rw-r--r--  docs/testing/user/userguide/15-list-of-tcs.rst        |   1
-rw-r--r--  docs/testing/user/userguide/opnfv_yardstick_tc087.rst | 182
-rw-r--r--  docs/testing/user/userguide/opnfv_yardstick_tc090.rst | 151
-rw-r--r--  docs/testing/user/userguide/opnfv_yardstick_tc091.rst | 138
5 files changed, 517 insertions, 9 deletions
diff --git a/docs/testing/user/userguide/13-nsb_operation.rst b/docs/testing/user/userguide/13-nsb_operation.rst
index 8c477fa3f..e791b048d 100644
--- a/docs/testing/user/userguide/13-nsb_operation.rst
+++ b/docs/testing/user/userguide/13-nsb_operation.rst
@@ -126,7 +126,7 @@ To collectd KPIs from the NFVi compute nodes:
Scale-Up
-------------------
+--------
VNFs performance data with scale-up
@@ -137,21 +137,59 @@ VNFs performance data with scale-up
Heat
^^^^
-For VNF scale-up tests we increase the number for VNF worker threads. In the case of VNFs
+For VNF scale-up tests we increase the number of VNF worker threads and ports. In the case of VNFs
we also need to increase the number of VCPUs and memory allocated to the VNF.
An example scale-up Heat testcase is:
+.. literalinclude:: /submodules/yardstick/samples/vnf_samples/nsut/vfw/tc_heat_rfc2544_ipv4_1rule_1flow_64B_trex_scale-up.yaml
+ :language: yaml
+
+This testcase template requires specifying the number of VCPUs, memory and ports.
+We set the VCPUs, memory and ports using the ``--task-args`` option:
+
.. code-block:: console
- <repo>/samples/vnf_samples/nsut/acl/tc_heat_rfc2544_ipv4_1rule_1flow_64B_trex_scale_up.yaml
+ yardstick task start --task-args='{"mem": 10480, "vcpus": 4, "ports": 2}' \
+ samples/vnf_samples/nsut/vfw/tc_heat_rfc2544_ipv4_1rule_1flow_64B_trex_scale-up.yaml
-This testcase template requires specifying the number of VCPUs and Memory.
-We set the VCPUs and memory using the --task-args options
+In order to support port scale-up, traffic and topology templates need to be used in the testcase.
-.. code-block:: console
+An example topology template is:
+
+.. literalinclude:: /submodules/yardstick/samples/vnf_samples/nsut/vfw/vfw-tg-topology-scale-up.yaml
+ :language: yaml
+
+This template has ``vports`` as an argument. To pass this argument it needs to
+be configured in the ``extra_args`` scenario definition. Please note that more
+arguments can be defined in that section. All of them will be passed to the
+topology and traffic profile templates.
+
+For example:
+
+.. code-block:: yaml
- yardstick --debug task start --task-args='{"mem": 20480, "vcpus": 10}' samples/vnf_samples/nsut/acl/tc_heat_rfc2544_ipv4_1rule_1flow_64B_trex_scale_up.yaml
+ schema: yardstick:task:0.1
+ scenarios:
+ - type: NSPerf
+ traffic_profile: ../../traffic_profiles/ipv4_throughput-scale-up.yaml
+ extra_args:
+ vports: {{ vports }}
+ topology: vfw-tg-topology-scale-up.yaml
+
+An example traffic profile template is:
+
+.. literalinclude:: /submodules/yardstick/samples/vnf_samples/traffic_profiles/ipv4_throughput-scale-up.yaml
+ :language: yaml
+
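+Both the topology and the traffic profile templates are rendered as Jinja2
+templates, so the ``vports`` value passed through ``extra_args`` can drive
+loops that generate the per-port entries. The fragment below is only an
+illustrative sketch of that pattern; the connection point names and the
+default value are assumptions, not excerpts from the shipped templates:
+
+.. code-block:: yaml
+
+  {# illustrative sketch only -- not the shipped scale-up template #}
+  {% set vports = vports or 2 %}
+  connection-point:
+  {% for vport in range(vports|int) %}
+    - name: xe{{ vport }}
+      type: VPORT
+  {% endfor %}
+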
+There is an option to provide a predefined config for SampleVNFs. The path to
+the config file may be specified in the ``vnf_config`` scenario section.
+
+.. code-block:: yaml
+
+ vnf__0:
+ rules: acl_1rule.yaml
+ vnf_config: {lb_config: 'SW', file: vfw_vnf_pipeline_cores_4_ports_2_lb_1_sw.conf }
Baremetal
@@ -266,5 +304,3 @@ To enable multiple queue set the queues_per_port value in the TG VNF options sec
options:
tg_0:
queues_per_port: 2
-
-
diff --git a/docs/testing/user/userguide/15-list-of-tcs.rst b/docs/testing/user/userguide/15-list-of-tcs.rst
index 47526cdda..cb99c49cf 100644
--- a/docs/testing/user/userguide/15-list-of-tcs.rst
+++ b/docs/testing/user/userguide/15-list-of-tcs.rst
@@ -83,6 +83,7 @@ H A
opnfv_yardstick_tc056.rst
opnfv_yardstick_tc057.rst
opnfv_yardstick_tc058.rst
+ opnfv_yardstick_tc087.rst
IPv6
----
diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc087.rst b/docs/testing/user/userguide/opnfv_yardstick_tc087.rst
new file mode 100644
index 000000000..99bfeebfc
--- /dev/null
+++ b/docs/testing/user/userguide/opnfv_yardstick_tc087.rst
@@ -0,0 +1,182 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, Ericsson and others.
+
+*************************************
+Yardstick Test Case Description TC087
+*************************************
+
++-----------------------------------------------------------------------------+
+|SDN Controller resilience in non-HA configuration |
+| |
++--------------+--------------------------------------------------------------+
+|test case id | OPNFV_YARDSTICK_TC087: SDN controller resilience in |
+| | non-HA configuration |
+| | |
++--------------+--------------------------------------------------------------+
+|test purpose | This test validates that network data plane services are |
+| | highly available in the event of an SDN Controller failure, |
+| | even if the SDN controller is deployed in a non-HA |
+| | configuration. Specifically, the test verifies that |
+| | existing data plane connectivity is not impacted, i.e. all |
+| | configured network services such as DHCP, ARP, L2, |
+|              | L3, Security Groups should continue to operate              |
+| | between the existing VMs while the SDN controller is |
+| | offline or rebooting. |
+| | |
+| | The test also validates that new network service operations |
+| | (creating a new VM in the existing L2/L3 network or in a new |
+| | network, etc.) are operational after the SDN controller |
+| | has recovered from a failure. |
+| | |
++--------------+--------------------------------------------------------------+
+|test method | This test case fails the SDN controller service running |
+| | on the OpenStack controller node, then checks if already |
+| | configured DHCP/ARP/L2/L3/SNAT connectivity is not |
+| | impacted between VMs and the system is able to execute |
+| | new virtual network operations once the SDN controller |
+| | is restarted and has fully recovered |
+| | |
++--------------+--------------------------------------------------------------+
+|attackers | In this test case, an attacker called “kill-process” is |
+| | needed. This attacker includes three parameters: |
+| | 1. fault_type: which is used for finding the attacker's |
+| | scripts. It should be set to 'kill-process' in this test |
+| | |
+| | 2. process_name: should be set to the name of the SDN |
+| | controller process |
+| | |
+| | 3. host: which is the name of a control node where the |
+| | SDN controller process is running |
+| | |
+| | e.g. -fault_type: "kill-process" |
+| | -process_name: "opendaylight" |
+| | -host: node1 |
+| | |
++--------------+--------------------------------------------------------------+
+|monitors | This test case utilizes two monitors of type "ip-status" |
+| | and one monitor of type "process" to track the following |
+| | conditions: |
+| | 1. "ping_same_network_l2": monitor ICMP traffic between |
+| | VMs in the same Neutron network |
+| | |
+| | 2. "ping_external_snat": monitor ICMP traffic from VMs to |
+| | an external host on the Internet to verify SNAT |
+| | functionality. |
+| | |
+| | 3. "SDN controller process monitor": a monitor checking the |
+| | state of a specified SDN controller process. It measures |
+| | the recovery time of the given process. |
+| | |
+| | Monitors of type "ip-status" use the "ping" utility to |
+| | verify reachability of a given target IP. |
+| | |
++--------------+--------------------------------------------------------------+
+|operations | In this test case, the following operations are needed: |
+| | 1. "nova-create-instance-in_network": create a VM instance |
+|              |    in one of the existing Neutron networks.                 |
+| | |
++--------------+--------------------------------------------------------------+
+|metrics | In this test case, there are two metrics: |
+|              | 1. process_recover_time: which indicates the maximum        |
+|              |    time (seconds) from the process being killed to its      |
+|              |    recovery                                                  |
+| | |
+|              | 2. packet_drop: measures the packets that have been dropped |
+| | by the monitors using pktgen. |
+| | |
++--------------+--------------------------------------------------------------+
+|test tool | Developed by the project. Please see folder: |
+| | "yardstick/benchmark/scenarios/availability/ha_tools" |
+| | |
++--------------+--------------------------------------------------------------+
+|references | none |
+| | |
++--------------+--------------------------------------------------------------+
+|configuration | This test case needs two configuration files: |
+| | 1. test case file: opnfv_yardstick_tc087.yaml |
+|              |   - Attackers: see above “attackers” description            |
+|              |   - waiting_time: which is the time (seconds) from the      |
+|              |     process being killed to stopping the monitors           |
+|              |   - Monitors: see above “monitors” description              |
+|              |   - SLA: see above “metrics” description                    |
+| | |
+|              | 2. POD file: pod.yaml. The POD configuration should be      |
+|              |    recorded in pod.yaml first. The “host” item in this test |
+|              |    case will use the node name in the pod.yaml.             |
+| | |
++--------------+--------------------------------------------------------------+
+|test sequence | Description and expected result |
+| | |
++--------------+--------------------------------------------------------------+
+|pre-action | 1. The OpenStack cluster is set up with a single SDN |
+| | controller in a non-HA configuration. |
+| | |
+| | 2. One or more Neutron networks are created with two or |
+| | more VMs attached to each of the Neutron networks. |
+| | |
+| | 3. The Neutron networks are attached to a Neutron router |
+| | which is attached to an external network towards the |
+| | DCGW. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 1 | Start IP connectivity monitors: |
+| | 1. Check the L2 connectivity between the VMs in the same |
+| | Neutron network. |
+| | |
+| | 2. Check connectivity from one VM to an external host on |
+|              |    the Internet to verify SNAT functionality.               |
+| | |
+| | Result: The monitor info will be collected. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 2 | Start attacker: |
+| | SSH connect to the VIM node and kill the SDN controller |
+| | process |
+| | |
+|              | Result: the SDN controller service will be shut down        |
+| | |
++--------------+--------------------------------------------------------------+
+|step 3 | Verify the results of the IP connectivity monitors. |
+| | |
+| | Result: The outage_time metric reported by the monitors |
+| | is zero. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 4 | Restart the SDN controller. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 5 | Create a new VM in the existing Neutron network |
+| | |
++--------------+--------------------------------------------------------------+
+|step 6 | Verify connectivity between VMs as follows: |
+| | 1. Check the L2 connectivity between the previously |
+| | existing VM and the newly created VM on the same |
+| | Neutron network by sending ICMP messages |
+| | |
++--------------+--------------------------------------------------------------+
+|step 7 | Stop IP connectivity monitors after a period of time |
+| | specified by “waiting_time” |
+| | |
+| | Result: The monitor info will be aggregated |
+| | |
++--------------+--------------------------------------------------------------+
+|step 8 | Verify the IP connectivity monitor results |
+| | |
+| | Result: IP connectivity monitor should not have any packet |
+| | drop failures reported |
+| | |
++--------------+--------------------------------------------------------------+
+|test verdict | This test fails if the SLAs are not met or if there is a |
+|              | test case execution problem. The SLAs are defined as follows |
+| | for this test: |
+| | * SDN Controller recovery |
+| | * process_recover_time <= 30 sec |
+| | |
+| | * no impact on data plane connectivity during SDN |
+| | controller failure and recovery. |
+| | * packet_drop == 0 |
+| | |
++--------------+--------------------------------------------------------------+
+
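+For orientation, the snippet below sketches how the attacker and the process
+monitor described above could look in the test case YAML. The field names
+follow the parameter descriptions in this table; the surrounding structure
+and the SLA value are illustrative assumptions rather than an excerpt from
+the shipped ``opnfv_yardstick_tc087.yaml``:
+
+.. code-block:: yaml
+
+  # illustrative fragment -- consult the shipped opnfv_yardstick_tc087.yaml
+  # for the authoritative definition
+  attackers:
+    - fault_type: "kill-process"
+      process_name: "opendaylight"   # name of the SDN controller process
+      host: node1                    # control node running the controller
+
+  monitors:
+    # the two "ip-status" monitors (ping_same_network_l2 and
+    # ping_external_snat) are defined here as well; their parameters are
+    # given by the shipped test case file
+    - monitor_type: "process"
+      process_name: "opendaylight"
+      host: node1
+      sla:
+        max_recover_time: 30         # matches process_recover_time <= 30 sec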
diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc090.rst b/docs/testing/user/userguide/opnfv_yardstick_tc090.rst
new file mode 100644
index 000000000..1f8747b2b
--- /dev/null
+++ b/docs/testing/user/userguide/opnfv_yardstick_tc090.rst
@@ -0,0 +1,151 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, Yin Kanglin and others.
+.. 14_ykl@tongji.edu.cn
+
+*************************************
+Yardstick Test Case Description TC090
+*************************************
+
++-----------------------------------------------------------------------------+
+|Control Node OpenStack Service High Availability - Database Instances |
+| |
++--------------+--------------------------------------------------------------+
+|test case id | OPNFV_YARDSTICK_TC090: Control node OpenStack service down - |
+| | database instances |
++--------------+--------------------------------------------------------------+
+|test purpose | This test case will verify the high availability of the |
+|              | database instances used by OpenStack (mysql) on control     |
+| | node. |
+| | |
++--------------+--------------------------------------------------------------+
+|test method | This test case kills the processes of database service on a |
+| | selected control node, then checks whether the request of |
+| | the related OpenStack command is OK and the killed processes |
+| | are recovered. |
+| | |
++--------------+--------------------------------------------------------------+
+|attackers | In this test case, an attacker called "kill-process" is |
+| | needed. This attacker includes three parameters: |
+| | 1) fault_type: which is used for finding the attacker's |
+| | scripts. It should be always set to "kill-process" in this |
+| | test case. |
+| | 2) process_name: which is the process name of the specified |
+|              | OpenStack service. If there are multiple processes using    |
+|              | the same name on the host, all of them are killed by this   |
+|              | attacker.                                                    |
+|              | In this case, this parameter should always be set to the    |
+|              | name of the database service of OpenStack.                  |
+| | 3) host: which is the name of a control node being attacked. |
+| | |
+| | e.g. |
+| | -fault_type: "kill-process" |
+| | -process_name: "mysql" |
+| | -host: node1 |
+| | |
++--------------+--------------------------------------------------------------+
+|monitors | In this test case, two kinds of monitor are needed: |
+|              | 1. the "openstack-cmd" monitor constantly requests a        |
+|              | specific OpenStack command, which needs two parameters:     |
+| | 1) monitor_type: which is used for finding the monitor class |
+|              | and related scripts. It should be always set to             |
+| | "openstack-cmd" for this monitor. |
+| | 2) command_name: which is the command name used for request. |
+|              | In this case, the command names should be commands of       |
+|              | OpenStack components that depend on the database.           |
+| | |
+|              | 2. the "process" monitor checks whether a process is running |
+| | on a specific node, which needs three parameters: |
+|              | 1) monitor_type: which is used for finding the monitor      |
+|              | class and related scripts. It should be always set to       |
+|              | "process" for this monitor.                                  |
+| | 2) process_name: which is the process name for monitor |
+| | 3) host: which is the name of the node running the process |
+| | |
+|              | The examples of monitors are shown below; there are four    |
+|              | instances of the "openstack-cmd" monitor, in order to check |
+| | the database connection of different OpenStack components. |
+| | |
+| | monitor1: |
+| | -monitor_type: "openstack-cmd" |
+| | -api_name: "openstack image list" |
+| | monitor2: |
+| | -monitor_type: "openstack-cmd" |
+| | -api_name: "openstack router list" |
+| | monitor3: |
+| | -monitor_type: "openstack-cmd" |
+| | -api_name: "openstack stack list" |
+| | monitor4: |
+| | -monitor_type: "openstack-cmd" |
+| | -api_name: "openstack volume list" |
+| | monitor5: |
+| | -monitor_type: "process" |
+| | -process_name: "mysql" |
+| | -host: node1 |
+| | |
++--------------+--------------------------------------------------------------+
+|metrics | In this test case, there are two metrics: |
+|              | 1) service_outage_time: which indicates the maximum outage  |
+|              | time (seconds) of the specified OpenStack command request.  |
+|              | 2) process_recover_time: which indicates the maximum time   |
+|              | (seconds) from the process being killed to its recovery.    |
+| | |
++--------------+--------------------------------------------------------------+
+|test tool | Developed by the project. Please see folder: |
+| | "yardstick/benchmark/scenarios/availability/ha_tools" |
+| | |
++--------------+--------------------------------------------------------------+
+|references | ETSI NFV REL001 |
+| | |
++--------------+--------------------------------------------------------------+
+|configuration | This test case needs two configuration files: |
+| | 1) test case file: opnfv_yardstick_tc090.yaml |
+| | -Attackers: see above "attackers" description |
+| | -waiting_time: which is the time (seconds) from the process |
+|              | being killed to stopping the monitors                       |
+| | -Monitors: see above "monitors" description |
+| | -SLA: see above "metrics" description |
+| | |
+|              | 2) POD file: pod.yaml                                        |
+|              | The POD configuration should be recorded in pod.yaml first. |
+|              | The "host" item in this test case will use the node name in |
+| | the pod.yaml. |
+| | |
++--------------+--------------------------------------------------------------+
+|test sequence | description and expected result |
+| | |
++--------------+--------------------------------------------------------------+
+|step 1 | start monitors: |
+|              | each monitor will run in an independent process             |
+| | |
+| | Result: The monitor info will be collected. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 2        | do attacker: connect to the host through SSH, and then      |
+|              | execute the kill process script with param value specified by|
+| | "process_name" |
+| | |
+| | Result: Process will be killed. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 3 | stop monitors after a period of time specified by |
+| | "waiting_time" |
+| | |
+| | Result: The monitor info will be aggregated. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 4 | verify the SLA |
+| | |
+| | Result: The test case is passed or not. |
+| | |
++--------------+--------------------------------------------------------------+
+|post-action   | It is the action when the test cases exit. It will check the |
+|              | status of the specified process on the host, and restart the |
+|              | process if it is not running, for the next test cases.      |
+| | |
++--------------+--------------------------------------------------------------+
+|test verdict | Fails only if SLA is not passed, or if there is a test case |
+| | execution problem. |
+| | |
++--------------+--------------------------------------------------------------+
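+
+For orientation, the snippet below sketches how the attacker and the monitors
+described above could be expressed in the test case YAML, using the parameter
+names from the descriptions in this table. The surrounding structure is an
+illustrative assumption, not an excerpt from the shipped
+``opnfv_yardstick_tc090.yaml``:
+
+.. code-block:: yaml
+
+  # illustrative fragment -- consult the shipped opnfv_yardstick_tc090.yaml
+  # for the authoritative definition
+  attackers:
+    - fault_type: "kill-process"
+      process_name: "mysql"
+      host: node1
+
+  monitors:
+    - monitor_type: "openstack-cmd"
+      command_name: "openstack image list"   # the examples above label this field "api_name"
+    - monitor_type: "openstack-cmd"
+      command_name: "openstack router list"
+    - monitor_type: "openstack-cmd"
+      command_name: "openstack stack list"
+    - monitor_type: "openstack-cmd"
+      command_name: "openstack volume list"
+    - monitor_type: "process"
+      process_name: "mysql"
+      host: node1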
diff --git a/docs/testing/user/userguide/opnfv_yardstick_tc091.rst b/docs/testing/user/userguide/opnfv_yardstick_tc091.rst
new file mode 100644
index 000000000..8e89b6425
--- /dev/null
+++ b/docs/testing/user/userguide/opnfv_yardstick_tc091.rst
@@ -0,0 +1,138 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, Yin Kanglin and others.
+.. 14_ykl@tongji.edu.cn
+
+*************************************
+Yardstick Test Case Description TC091
+*************************************
+
++-----------------------------------------------------------------------------+
+|Control Node OpenStack Service High Availability - Heat API                 |
+| |
++--------------+--------------------------------------------------------------+
+|test case id | OPNFV_YARDSTICK_TC091: Control node OpenStack service down - |
+| | heat api |
++--------------+--------------------------------------------------------------+
+|test purpose | This test case will verify the high availability of the |
+| | orchestration service provided by OpenStack (heat-api) on |
+| | control node. |
+| | |
++--------------+--------------------------------------------------------------+
+|test method | This test case kills the processes of heat-api service on a |
+| | selected control node, then checks whether the request of |
+| | the related OpenStack command is OK and the killed processes |
+| | are recovered. |
+| | |
++--------------+--------------------------------------------------------------+
+|attackers | In this test case, an attacker called "kill-process" is |
+| | needed. This attacker includes three parameters: |
+| | 1) fault_type: which is used for finding the attacker's |
+| | scripts. It should be always set to "kill-process" in this |
+| | test case. |
+| | 2) process_name: which is the process name of the specified |
+|              | OpenStack service. If there are multiple processes using    |
+|              | the same name on the host, all of them are killed by this   |
+|              | attacker.                                                    |
+|              | In this case, this parameter should always be set to        |
+|              | "heat-api".                                                  |
+| | 3) host: which is the name of a control node being attacked. |
+| | |
+| | e.g. |
+| | -fault_type: "kill-process" |
+| | -process_name: "heat-api" |
+| | -host: node1 |
+| | |
++--------------+--------------------------------------------------------------+
+|monitors | In this test case, two kinds of monitor are needed: |
+|              | 1. the "openstack-cmd" monitor constantly requests a        |
+|              | specific OpenStack command, which needs two parameters:     |
+| | 1) monitor_type: which is used for finding the monitor class |
+| | and related scripts. It should be always set to |
+| | "openstack-cmd" for this monitor. |
+| | 2) command_name: which is the command name used for request. |
+|              | In this case, the command name should be a heat related     |
+|              | command.                                                     |
+| | |
+|              | 2. the "process" monitor checks whether a process is running |
+| | on a specific node, which needs three parameters: |
+|              | 1) monitor_type: which is used for finding the monitor      |
+|              | class and related scripts. It should be always set to       |
+|              | "process" for this monitor.                                  |
+| | 2) process_name: which is the process name for monitor |
+| | 3) host: which is the name of the node running the process |
+| | |
+| | e.g. |
+| | monitor1: |
+| | -monitor_type: "openstack-cmd" |
+| | -command_name: "heat stack list" |
+| | monitor2: |
+| | -monitor_type: "process" |
+| | -process_name: "heat-api" |
+| | -host: node1 |
+| | |
++--------------+--------------------------------------------------------------+
+|metrics | In this test case, there are two metrics: |
+|              | 1) service_outage_time: which indicates the maximum outage  |
+|              | time (seconds) of the specified OpenStack command request.  |
+|              | 2) process_recover_time: which indicates the maximum time   |
+|              | (seconds) from the process being killed to its recovery.    |
+| | |
++--------------+--------------------------------------------------------------+
+|test tool | Developed by the project. Please see folder: |
+| | "yardstick/benchmark/scenarios/availability/ha_tools" |
+| | |
++--------------+--------------------------------------------------------------+
+|references | ETSI NFV REL001 |
+| | |
++--------------+--------------------------------------------------------------+
+|configuration | This test case needs two configuration files: |
+| | 1) test case file: opnfv_yardstick_tc091.yaml |
+| | -Attackers: see above "attackers" description |
+| | -waiting_time: which is the time (seconds) from the process |
+|              | being killed until the monitors are stopped                 |
+| | -Monitors: see above "monitors" description |
+| | -SLA: see above "metrics" description |
+| | |
+|              | 2) POD file: pod.yaml                                        |
+|              | The POD configuration should be recorded in pod.yaml first. |
+|              | The "host" item in this test case will use the node name in |
+| | the pod.yaml. |
+| | |
++--------------+--------------------------------------------------------------+
+|test sequence | description and expected result |
+| | |
++--------------+--------------------------------------------------------------+
+|step 1 | start monitors: |
+|              | each monitor will run in an independent process             |
+| | |
+| | Result: The monitor info will be collected. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 2        | do attacker: connect to the host through SSH, and then      |
+|              | execute the kill process script with param value specified by|
+| | "process_name" |
+| | |
+| | Result: Process will be killed. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 3 | stop monitors after a period of time specified by |
+| | "waiting_time" |
+| | |
+| | Result: The monitor info will be aggregated. |
+| | |
++--------------+--------------------------------------------------------------+
+|step 4 | verify the SLA |
+| | |
+| | Result: The test case is passed or not. |
+| | |
++--------------+--------------------------------------------------------------+
+|post-action   | It is the action when the test cases exit. It will check the |
+|              | status of the specified process on the host, and restart the |
+|              | process if it is not running, for the next test cases.      |
+| | |
++--------------+--------------------------------------------------------------+
+|test verdict | Fails only if SLA is not passed, or if there is a test case |
+| | execution problem. |
+| | |
++--------------+--------------------------------------------------------------+
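+
+As with TC090, the snippet below is a minimal illustrative sketch of the
+attacker and monitors in YAML form; field names follow the descriptions in
+this table, while the surrounding structure is an assumption rather than an
+excerpt from the shipped ``opnfv_yardstick_tc091.yaml``:
+
+.. code-block:: yaml
+
+  # illustrative fragment -- consult the shipped opnfv_yardstick_tc091.yaml
+  # for the authoritative definition
+  attackers:
+    - fault_type: "kill-process"
+      process_name: "heat-api"
+      host: node1
+
+  monitors:
+    - monitor_type: "openstack-cmd"
+      command_name: "heat stack list"
+    - monitor_type: "process"
+      process_name: "heat-api"
+      host: node1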