.. This work is licensed under a Creative Commons Attribution 4.0 International
.. License.
.. http://creativecommons.org/licenses/by/4.0
.. (c) OPNFV, Yin Kanglin and others.
.. 14_ykl@tongji.edu.cn

*************************************
Yardstick Test Case Description TC050
*************************************

+-----------------------------------------------------------------------------+
|OpenStack Controller Node Network High Availability                          |
|                                                                             |
+--------------+--------------------------------------------------------------+
|test case id  | OPNFV_YARDSTICK_TC050: OpenStack Controller Node Network     |
|              | High Availability                                            |
+--------------+--------------------------------------------------------------+
|test purpose  | This test case will verify the high availability of the      |
|              | controller node. When one of the controller nodes fails to   |
|              | connect to the network, the OpenStack services on that node  |
|              | break down. These OpenStack services should still be         |
|              | accessible from the other controller nodes, and the services |
|              | on the failed controller node should be isolated.            |
+--------------+--------------------------------------------------------------+
|test method   | This test case turns off the network interfaces of a         |
|              | specified control node, then checks whether all services     |
|              | provided by the control node are still available, using      |
|              | several monitor tools.                                       |
+--------------+--------------------------------------------------------------+
|attackers     | In this test case, an attacker called "close-interface" is   |
|              | needed. This attacker includes three parameters:             |
|              | 1) fault_type: which is used for finding the attacker's      |
|              | scripts. It should always be set to "close-interface" in     |
|              | this test case.                                              |
|              | 2) host: which is the name of a control node being attacked. |
|              | 3) interface: the network interface to be turned off.        |
|              |                                                              |
|              | There are four instances of the "close-interface" attacker   |
|              | (a YAML sketch of all four follows this table):              |
|              | attacker1 (for the public network):                          |
|              | -fault_type: "close-interface"                               |
|              | -host: node1                                                 |
|              | -interface: "br-ex"                                          |
|              | attacker2 (for the management network):                      |
|              | -fault_type: "close-interface"                               |
|              | -host: node1                                                 |
|              | -interface: "br-mgmt"                                        |
|              | attacker3 (for the storage network):                         |
|              | -fault_type: "close-interface"                               |
|              | -host: node1                                                 |
|              | -interface: "br-storage"                                     |
|              | attacker4 (for the private network):                         |
|              | -fault_type: "close-interface"                               |
|              | -host: node1                                                 |
|              | -interface: "br-mesh"                                        |
+--------------+--------------------------------------------------------------+
|monitors      | In this test case, the monitor named "openstack-cmd" is      |
|              | needed. The monitor needs two parameters:                    |
|              | 1) monitor_type: which is used for finding the monitor class |
|              | and related scripts. It should always be set to              |
|              | "openstack-cmd" for this monitor.                            |
|              | 2) command_name: which is the command name used for request. |
|              |                                                              |
|              | There are four instances of the "openstack-cmd" monitor      |
|              | (also sketched in YAML after this table):                    |
|              | monitor1:                                                    |
|              | -monitor_type: "openstack-cmd"                               |
|              | -command_name: "nova image-list"                             |
|              | monitor2:                                                    |
|              | -monitor_type: "openstack-cmd"                               |
|              | -command_name: "neutron router-list"                         |
|              | monitor3:                                                    |
|              | -monitor_type: "openstack-cmd"                               |
|              | -command_name: "heat stack-list"                             |
|              | monitor4:                                                    |
|              | -monitor_type: "openstack-cmd"                               |
|              | -command_name: "cinder list"                                 |
+--------------+--------------------------------------------------------------+
|metrics       | In this test case, there is one metric:                      |
|              | 1) service_outage_time: which indicates the maximum outage   |
|              | time (seconds) of the specified OpenStack command request.   |
+--------------+--------------------------------------------------------------+
|test tool     | Developed by the project. Please see folder:                 |
|              | "yardstick/benchmark/scenarios/availability/ha_tools"        |
|              |                                                              |
+--------------+--------------------------------------------------------------+
|references    | ETSI NFV REL001                                              |
|              |                                                              |
+--------------+--------------------------------------------------------------+
|configuration | This test case needs two configuration files:                |
|              | 1) test case file: opnfv_yardstick_tc050.yaml                |
|              | -Attackers: see above "attackers" description                |
|              | -waiting_time: which is the time (seconds) from the attack   |
|              | being started to the monitors being stopped                  |
|              | -Monitors: see above "monitors" description                  |
|              | -SLA: see above "metrics" description                        |
|              |                                                              |
|              | 2) POD file: pod.yaml                                        |
|              | The POD configuration should be recorded in pod.yaml first.  |
|              | The "host" item in this test case will use the node name in  |
|              | the pod.yaml (a pod.yaml sketch follows this table).         |
|              |                                                              |
+--------------+--------------------------------------------------------------+
|test sequence | description and expected result                              |
|              |                                                              |
+--------------+--------------------------------------------------------------+
|step 1        | start monitors:                                              |
|              | each monitor will run in an independent process              |
|              |                                                              |
|              | Result: The monitor info will be collected.                  |
|              |                                                              |
+--------------+--------------------------------------------------------------+
|step 2        | do attacker: connect to the host through SSH, and then       |
|              | execute the script that turns off the network interface with |
|              | the parameter value specified by "interface".                |
|              |                                                              |
|              | Result: Network interfaces will be turned down.              |
|              |                                                              |
+--------------+--------------------------------------------------------------+
|step 3        | stop monitors after a period of time specified by            |
|              | "waiting_time"                                               |
|              |                                                              |
|              | Result: The monitor info will be aggregated.                 |
|              |                                                              |
+--------------+--------------------------------------------------------------+
|step 4        | verify the SLA                                               |
|              |                                                              |
|              | Result: The test case is passed or not.                      |
|              |                                                              |
+--------------+--------------------------------------------------------------+
|post-action   | It is the action taken when the test case exits. It brings   |
|              | the network interface of the control node back up if it is   |
|              | still down.                                                  |
+--------------+--------------------------------------------------------------+
|test verdict  | Fails only if SLA is not passed, or if there is a test case  |
|              | execution problem.                                           |
|              |                                                              |
+--------------+--------------------------------------------------------------+
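Below is a minimal YAML sketch of the "attackers" section described in the
table, assuming the list-of-mappings layout used by Yardstick HA test case
files; the surrounding scenario keys and the exact layout of
opnfv_yardstick_tc050.yaml may differ.

.. code-block:: yaml

    attackers:
      -
        fault_type: "close-interface"
        host: node1              # node name from pod.yaml
        interface: "br-ex"       # public network
      -
        fault_type: "close-interface"
        host: node1
        interface: "br-mgmt"     # management network
      -
        fault_type: "close-interface"
        host: node1
        interface: "br-storage"  # storage network
      -
        fault_type: "close-interface"
        host: node1
        interface: "br-mesh"     # private network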
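The four "openstack-cmd" monitors, and an SLA built from the
service_outage_time metric, could be sketched as follows. The "outage_time"
key name and the 5-second threshold are assumptions for illustration, not
values taken from this test case.

.. code-block:: yaml

    monitors:
      -
        monitor_type: "openstack-cmd"
        command_name: "nova image-list"
      -
        monitor_type: "openstack-cmd"
        command_name: "neutron router-list"
      -
        monitor_type: "openstack-cmd"
        command_name: "heat stack-list"
      -
        monitor_type: "openstack-cmd"
        command_name: "cinder list"

    sla:
      outage_time: 5             # assumed max outage threshold (seconds)
      action: monitor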
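Finally, the "host" value (node1) must match a node entry recorded in
pod.yaml. A minimal sketch of such an entry is shown below; the role, IP
address, user and key path are placeholders, not values taken from this test
case.

.. code-block:: yaml

    nodes:
      -
        name: node1
        role: Controller                 # placeholder role
        ip: 192.168.0.11                 # placeholder address
        user: root                       # placeholder credentials
        key_filename: /root/.ssh/id_rsa  # placeholder SSH key path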