.. This work is licensed under a Creative Commons Attribution 4.0 International
.. License.
.. http://creativecommons.org/licenses/by/4.0
.. (c) OPNFV, Intracom Telecom and others.
.. mardim@intracom-telecom.com

*************************************
Yardstick Test Case Description TC093
*************************************

+-----------------------------------------------------------------------------+
|SDN Vswitch resilience in non-HA or HA configuration                         |
|                                                                             |
+--------------+--------------------------------------------------------------+
|test case id  | OPNFV_YARDSTICK_TC093: SDN Vswitch resilience in             |
|              | non-HA or HA configuration                                   |
+--------------+--------------------------------------------------------------+
|test purpose  | This test validates that network data plane services are     |
|              | resilient in the event of Virtual Switch failure             |
|              | in compute nodes. Specifically, the test verifies that       |
|              | existing data plane connectivity is not permanently          |
|              | impacted, i.e. all configured network services such as       |
|              | DHCP, ARP, L2 and L3 Security Groups continue to operate     |
|              | between the existing VMs once the Virtual Switches have      |
|              | finished rebooting.                                          |
|              |                                                              |
|              | The test also validates that new network service operations  |
|              | (creating a new VM in the existing L2/L3 network or in a new |
|              | network, etc.) are operational after the Virtual Switches    |
|              | have recovered from a failure.                               |
|              |                                                              |
+--------------+--------------------------------------------------------------+
|test method   | This test case first checks that the already configured      |
|              | DHCP/ARP/L2/L3/SNAT connectivity works properly. It then     |
|              | kills and restarts the Vswitch services running on both      |
|              | OpenStack compute nodes, and verifies that the already       |
|              | configured DHCP/ARP/L2/L3/SNAT connectivity between VMs is   |
|              | not permanently impacted (even if some packet loss events    |
|              | occur) and that the system is able to execute new virtual    |
|              | network operations once the Vswitch services have been       |
|              | restarted and fully recovered.                               |
|              |                                                              |
+--------------+--------------------------------------------------------------+
|attackers     | In this test case, two attackers called “kill-process” are   |
|              | needed. These attackers include three parameters:            |
|              |                                                              |
|              | 1. fault_type: which is used for finding the attacker's      |
|              |    scripts. It should be set to 'kill-process' in this test  |
|              |                                                              |
|              | 2. process_name: should be set to the name of the Vswitch    |
|              |    process                                                   |
|              |                                                              |
|              | 3. host: which is the name of the compute node where the     |
|              |    Vswitch process is running                                |
|              |                                                              |
|              | e.g. -fault_type: "kill-process"                             |
|              |      -process_name: "openvswitch"                            |
|              |      -host: node1                                            |
|              |                                                              |
+--------------+--------------------------------------------------------------+
|monitors      | This test case utilizes two monitors of type "ip-status"     |
|              | and one monitor of type "process" to track the following     |
|              | conditions:                                                  |
|              |                                                              |
|              | 1. "ping_same_network_l2": monitor ICMP traffic between      |
|              |    VMs in the same Neutron network                           |
|              |                                                              |
|              | 2. "ping_external_snat": monitor ICMP traffic from VMs to    |
|              |    an external host on the Internet to verify SNAT           |
|              |    functionality.                                            |
|              |                                                              |
|              | 3. "Vswitch process monitor": a monitor checking the         |
|              |    state of the specified Vswitch process. It measures       |
|              |    the recovery time of the given process.                   |
|              |                                                              |
|              | Monitors of type "ip-status" use the "ping" utility to       |
|              | verify reachability of a given target IP.                    |
|              |                                                              |
+--------------+--------------------------------------------------------------+
|operations    | In this test case, the following operations are needed:      |
|              |  1. "nova-create-instance-in_network": create a VM instance  |
|              |     in one of the existing Neutron networks.                 |
|              |                                                              |
+--------------+--------------------------------------------------------------+
|metrics       | In this test case, there are two metrics:                    |
|              |  1. process_recover_time: which indicates the maximum        |
|              |     time (seconds) from the process being killed until it    |
|              |     is recovered                                             |
|              |                                                              |
|              |  2. outage_time: measures the total time in which            |
|              |     monitors were failing in their tasks (e.g. total time of |
|              |     Ping failure)                                            |
|              |                                                              |
+--------------+--------------------------------------------------------------+
|test tool     | Developed by the project. Please see folder:                 |
|              | "yardstick/benchmark/scenarios/availability/ha_tools"        |
|              |                                                              |
+--------------+--------------------------------------------------------------+
|references    | none                                                         |
|              |                                                              |
+--------------+--------------------------------------------------------------+
|configuration | This test case needs two configuration files:                |
|              |  1. test case file: opnfv_yardstick_tc093.yaml               |
|              |                                                              |
|              |     - Attackers: see above “attackers” description           |
|              |     - monitor_time: which is the time (seconds) from         |
|              |       starting to stopping the monitors                      |
|              |     - Monitors: see above “monitors” description             |
|              |     - SLA: see above “metrics” description                   |
|              |                                                              |
|              |  2. POD file: pod.yaml. The POD configuration should be      |
|              |     recorded in pod.yaml first. The “host” item in this      |
|              |     test case will use the node name in the pod.yaml.        |
|              |                                                              |
+--------------+--------------------------------------------------------------+
|test sequence | Description and expected result                              |
|              |                                                              |
+--------------+--------------------------------------------------------------+
|pre-action    |  1. The Vswitches are set up in both compute nodes.          |
|              |                                                              |
|              |  2. One or more Neutron networks are created with two or     |
|              |     more VMs attached to each of the Neutron networks.       |
|              |                                                              |
|              |  3. The Neutron networks are attached to a Neutron router    |
|              |     which is attached to an external network towards the     |
|              |     DCGW.                                                    |
|              |                                                              |
+--------------+--------------------------------------------------------------+
|step 1        | Start IP connectivity monitors:                              |
|              |  1. Check the L2 connectivity between the VMs in the same    |
|              |     Neutron network.                                         |
|              |                                                              |
|              |  2. Check connectivity from one VM to an external host on    |
|              |     the Internet to verify SNAT functionality.               |
|              |                                                              |
|              | Result: The monitor info will be collected.                  |
|              |                                                              |
+--------------+--------------------------------------------------------------+
|step 2        | Start attackers:                                             |
|              | SSH connect to the VIM compute nodes and kill the Vswitch    |
|              | processes                                                    |
|              |                                                              |
|              | Result: the SDN Vswitch services will be shut down           |
|              |                                                              |
+--------------+--------------------------------------------------------------+
|step 3        | Verify the results of the IP connectivity monitors.          |
|              |                                                              |
|              | Result: The outage_time metric reported by the monitors      |
|              | is not greater than the max_outage_time.                     |
|              |                                                              |
+--------------+--------------------------------------------------------------+
|step 4        | Restart the SDN Vswitch services.                            |
|              |                                                              |
+--------------+--------------------------------------------------------------+
|step 5        | Create a new VM in the existing Neutron network              |
|              |                                                              |
+--------------+--------------------------------------------------------------+
|step 6        | Verify connectivity between VMs as follows:                  |
|              |  1. Check the L2 connectivity between the previously         |
|              |     existing VM and the newly created VM on the same         |
|              |     Neutron network by sending ICMP messages                 |
|              |                                                              |
+--------------+--------------------------------------------------------------+
|step 7        | Stop IP connectivity monitors after a period of time         |
|              | specified by “monitor_time”                                  |
|              |                                                              |
|              | Result: The monitor info will be aggregated                  |
|              |                                                              |
+--------------+--------------------------------------------------------------+
|step 8        | Verify the IP connectivity monitor results                   |
|              |                                                              |
|              | Result: IP connectivity monitor should not have any packet   |
|              | drop failures reported                                       |
|              |                                                              |
+--------------+--------------------------------------------------------------+
|test verdict  | This test fails if the SLAs are not met or if there is a     |
|              | test case execution problem. The SLAs are defined as follows |
|              | for this test:                                               |
|              | * SDN Vswitch recovery                                       |
|              |                                                              |
|              |   * process_recover_time <= 30 sec                           |
|              |                                                              |
|              | * no impact on data plane connectivity during SDN            |
|              |   Vswitch failure and recovery.                              |
|              |                                                              |
|              |   * packet_drop == 0                                         |
|              |                                                              |
+--------------+--------------------------------------------------------------+
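
The attacker, monitor and SLA parameters described in the table above are
tied together in the test case file. The fragment below is an illustrative
sketch only: the key names and example values follow the generic Yardstick
availability ("GeneralHA") scenario format and are assumptions, not a
verbatim copy of opnfv_yardstick_tc093.yaml.

.. code-block:: yaml

   # Sketch of one attacker/monitor pairing; values are examples only.
   scenarios:
   - type: "GeneralHA"
     options:
       attackers:
       - fault_type: "kill-process"    # selects the attacker script
         process_name: "openvswitch"   # Vswitch process to kill
         key: "kill-process-node1"
         host: node1                   # node name taken from pod.yaml
       monitors:
       - monitor_type: "process"
         process_name: "openvswitch"
         host: node1
         key: "monitor-vswitch-process"
         sla:
           max_recover_time: 30        # process_recover_time SLA
       - monitor_type: "ip-status"
         key: "ping_same_network_l2"
         host: node1
         monitor_time: 480             # see "monitor_time" above
         sla:
           max_outage_time: 0          # packet_drop == 0 SLA

A second "kill-process" attacker with host set to the other compute node
would be defined in the same way, matching the two attackers described in
the "attackers" row.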