*************************************
Yardstick Test Case Description TC025
*************************************
+-----------------------------------------------------------------------------+
|OpenStack Controller Node abnormally shutdown High Availability |
| |
+--------------+--------------------------------------------------------------+
|test case id | OPNFV_YARDSTICK_TC025_HA: OpenStack Controller Node |
| | abnormally shutdown |
| | |
+--------------+--------------------------------------------------------------+
|test purpose  | This test case will verify the high availability of the      |
|              | controller node. When one of the controller nodes is         |
|              | abnormally shut down, the services provided by it should     |
|              | still be available.                                          |
| | |
+--------------+--------------------------------------------------------------+
|test method   | This test case shuts down a specified controller node with   |
|              | a fault injection tool, then checks with some monitor tools  |
|              | whether all services provided by the controller node are     |
|              | still OK.                                                    |
| | |
+--------------+--------------------------------------------------------------+
|attackers | In this test case, an attacker called "host-shutdown" is |
| | needed. This attacker includes two parameters: |
| | 1) fault_type: which is used for finding the attacker's |
|              | scripts. It should always be set to "host-shutdown" in       |
| | this test case. |
| | 2) host: the name of a controller node being attacked. |
| | |
| | e.g. |
| | -fault_type: "host-shutdown" |
| | -host: node1 |
| | |
+--------------+--------------------------------------------------------------+
|monitors      | In this test case, one kind of monitor is needed:            |
|              | 1. the "openstack-cmd" monitor constantly requests a         |
|              | specific OpenStack command, which needs two parameters:      |
|              | 1) monitor_type: which is used for finding the monitor class |
|              | and related scripts. It should always be set to              |
|              | "openstack-cmd" for this monitor.                            |
|              | 2) command_name: which is the command name used for the      |
|              | request                                                      |
| | |
|              | There are four instances of the "openstack-cmd" monitor:     |
| | monitor1: |
| | -monitor_type: "openstack-cmd" |
|              | -command_name: "nova image-list"                             |
| | monitor2: |
| | -monitor_type: "openstack-cmd" |
|              | -command_name: "neutron router-list"                         |
| | monitor3: |
| | -monitor_type: "openstack-cmd" |
|              | -command_name: "heat stack-list"                             |
| | monitor4: |
| | -monitor_type: "openstack-cmd" |
|              | -command_name: "cinder list"                                 |
| | |
+--------------+--------------------------------------------------------------+
|metrics | In this test case, there is one metric: |
|              | 1) service_outage_time: which indicates the maximum outage   |
|              | time (seconds) of the specified OpenStack command request.   |
| | |
+--------------+--------------------------------------------------------------+
|test tool | Developed by the project. Please see folder: |
| | "yardstick/benchmark/scenarios/availability/ha_tools" |
| | |
+--------------+--------------------------------------------------------------+
|references | ETSI NFV REL001 |
| | |
+--------------+--------------------------------------------------------------+
|configuration | This test case needs two configuration files: |
|              | 1) test case file: opnfv_yardstick_tc025.yaml                |
|              | -Attackers: see above "attackers" description                |
|              | -waiting_time: which is the time (seconds) from the host     |
|              | being shut down to stopping the monitors                     |
|              | -Monitors: see above "monitors" description                  |
|              | -SLA: see above "metrics" description                        |
| | |
|              | 2) POD file: pod.yaml                                        |
|              | The POD configuration should be recorded in pod.yaml first.  |
|              | The "host" item in this test case will use the node name     |
|              | given in pod.yaml.                                           |
| | |
+--------------+--------------------------------------------------------------+
|test sequence | description and expected result |
| | |
+--------------+--------------------------------------------------------------+
|step 1 | start monitors: |
|              | each monitor will run in an independent process              |
| | |
| | Result: The monitor info will be collected. |
| | |
+--------------+--------------------------------------------------------------+
|step 2        | do attacker: connect to the host through SSH, and then       |
|              | execute the shutdown script on the host                      |
| | |
|              | Result: The host will be shut down.                          |
| | |
+--------------+--------------------------------------------------------------+
|step 3 | stop monitors after a period of time specified by |
| | "waiting_time" |
| | |
|              | Result: All monitor results will be aggregated.              |
| | |
+--------------+--------------------------------------------------------------+
|step 4 | verify the SLA |
| | |
| | Result: The test case is passed or not. |
| | |
+--------------+--------------------------------------------------------------+
|post-action   | It is the action taken when the test case exits. It restarts |
|              | the specified controller node if it has not been restarted.  |
| | |
+--------------+--------------------------------------------------------------+
|test verdict | Fails only if SLA is not passed, or if there is a test case |
| | execution problem. |
| | |
+--------------+--------------------------------------------------------------+
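The attacker, monitors, and SLA described above are wired together in the test case YAML file. The fragment below is a hedged sketch of that layout; the exact keys, nesting, and SLA threshold vary between Yardstick releases, so treat the field names as illustrative rather than authoritative:

```yaml
# Illustrative fragment of a Yardstick HA scenario configuration.
# Key names follow the parameters described in this test case, but
# the exact schema depends on the Yardstick release in use.
scenarios:
-
  type: ServiceHA
  options:
    attackers:
    - fault_type: "host-shutdown"
      host: node1
    monitors:
    - monitor_type: "openstack-cmd"
      command_name: "nova image-list"
    - monitor_type: "openstack-cmd"
      command_name: "neutron router-list"
    - monitor_type: "openstack-cmd"
      command_name: "heat stack-list"
    - monitor_type: "openstack-cmd"
      command_name: "cinder list"
  sla:
    outage_time: 5          # assumed threshold, in seconds
    action: monitor
```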
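Steps 3 and 4 aggregate the monitor results into the service_outage_time metric and compare it against the SLA. A minimal sketch of that aggregation, assuming each monitor yields time-ordered pass/fail samples (the function names and sample format are illustrative, not Yardstick's actual implementation):

```python
# Hedged sketch: compute the service_outage_time metric from a
# time-ordered list of (timestamp, ok) monitor samples.  The sample
# format is assumed for illustration only.
def service_outage_time(samples):
    """Return the longest continuous failure window, in seconds."""
    max_outage = 0.0
    outage_start = None
    for timestamp, ok in samples:
        if ok:
            if outage_start is not None:
                # The service recovered: close the current outage window.
                max_outage = max(max_outage, timestamp - outage_start)
                outage_start = None
        elif outage_start is None:
            # First failed request after a success: open an outage window.
            outage_start = timestamp
    if outage_start is not None and samples:
        # Still failing when monitoring stopped.
        max_outage = max(max_outage, samples[-1][0] - outage_start)
    return max_outage


def sla_passed(samples, max_allowed_outage):
    """Step 4: verify the aggregated metric against the SLA."""
    return service_outage_time(samples) <= max_allowed_outage
```

For example, samples taken at one-second intervals with failures at t=1 and t=2 and recovery at t=3 give an outage of 2.0 seconds.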