Diffstat (limited to 'docs/userguide')

 docs/userguide/opnfv_yardstick_tc024.rst |  23
 docs/userguide/opnfv_yardstick_tc050.rst | 135
 docs/userguide/opnfv_yardstick_tc051.rst | 117
 docs/userguide/opnfv_yardstick_tc053.rst | 142
 docs/userguide/opnfv_yardstick_tc061.rst |  88
 docs/userguide/opnfv_yardstick_tc073.rst |  81
 6 files changed, 578 insertions(+), 8 deletions(-)
diff --git a/docs/userguide/opnfv_yardstick_tc024.rst b/docs/userguide/opnfv_yardstick_tc024.rst
index ffdacb106..8d15e8d2f 100644
--- a/docs/userguide/opnfv_yardstick_tc024.rst
+++ b/docs/userguide/opnfv_yardstick_tc024.rst
@@ -22,17 +22,17 @@ Yardstick Test Case Description TC024
 |test purpose  | To evaluate the CPU load performance of the IaaS. This test  |
 |              | case should be run in parallel to other Yardstick test cases |
 |              | and not run as a stand-alone test case.                      |
-|              |                                                              |
-|              | The purpose is also to be able to spot trends. Test results, |
-|              | graphs ans similar shall be stored for comparison reasons and|
-|              | product evolution understanding between different OPNFV      |
-|              | versions and/or configurations.                              |
+|              | Average, minimum and maximum values are obtained.            |
+|              | The purpose is also to be able to spot trends.               |
+|              | Test results, graphs and similar shall be stored for         |
+|              | comparison reasons and product evolution understanding       |
+|              | between different OPNFV versions and/or configurations.      |
 |              |                                                              |
 +--------------+--------------------------------------------------------------+
 |configuration | file: cpuload.yaml (in the 'samples' directory)              |
 |              |                                                              |
-|              | There is are no additional configurations to be set for this |
-|              | TC.                                                          |
+|              | * interval: 1 - repeat, pausing 1 second in-between.         |
+|              | * count: 10 - display statistics 10 times, then exit.        |
 |              |                                                              |
 +--------------+--------------------------------------------------------------+
 |test tool     | mpstat                                                       |
@@ -46,7 +46,14 @@ Yardstick Test Case Description TC024
 |references    | man-pages_                                                   |
 |              |                                                              |
 +--------------+--------------------------------------------------------------+
-|applicability | Run in background with other test cases.                     |
+|applicability | Test can be configured with different:                       |
+|              |                                                              |
+|              | * interval;                                                  |
+|              | * count;                                                     |
+|              | * runner Iteration and intervals.                            |
+|              |                                                              |
+|              | There are default values for each above-mentioned option.    |
+|              | Run in background with other test cases.                     |
 |              |                                                              |
 +--------------+--------------------------------------------------------------+
 |pre-test      | The test case image needs to be installed into Glance        |
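For orientation, a task file exercising these options would be shaped roughly
as follows. This is a hedged sketch only: the scenario type, host name and
runner values are illustrative assumptions based on the table above, not
verified contents of samples/cpuload.yaml::

    ---
    # Sketch of a CPU-load task; "CPUload", "host.demo" and the runner
    # values below are illustrative assumptions.
    schema: "yardstick:task:0.1"

    scenarios:
    -
      type: CPUload          # scenario wrapping the mpstat measurement
      options:
        interval: 1          # pause 1 second between samples
        count: 10            # display statistics 10 times, then exit
      host: host.demo        # VM defined in the task's context section
      runner:
        type: Iteration      # "runner Iteration and intervals" above
        iterations: 10
        interval: 1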
diff --git a/docs/userguide/opnfv_yardstick_tc050.rst b/docs/userguide/opnfv_yardstick_tc050.rst
new file mode 100644
index 000000000..8890c9d53
--- /dev/null
+++ b/docs/userguide/opnfv_yardstick_tc050.rst
@@ -0,0 +1,135 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, Yin Kanglin and others.
+.. 14_ykl@tongji.edu.cn
+
+*************************************
+Yardstick Test Case Description TC050
+*************************************
+
++-----------------------------------------------------------------------------+
+|OpenStack Controller Node Network High Availability                          |
+|                                                                             |
++--------------+--------------------------------------------------------------+
+|test case id  | OPNFV_YARDSTICK_TC050: OpenStack Controller Node Network     |
+|              | High Availability                                            |
++--------------+--------------------------------------------------------------+
+|test purpose  | This test case will verify the high availability of the      |
+|              | control node. When one of the controller nodes fails to      |
+|              | connect to the network, the OpenStack services on this node  |
+|              | break down. These OpenStack services should still be         |
+|              | accessible through other controller nodes, and the services  |
+|              | on the failed controller node should be isolated.            |
++--------------+--------------------------------------------------------------+
+|test method   | This test case turns off the network interfaces of a         |
+|              | specified control node, then checks whether all services     |
+|              | provided by the control node are OK with some monitor tools. |
++--------------+--------------------------------------------------------------+
+|attackers     | In this test case, an attacker called "close-interface" is   |
+|              | needed. This attacker includes three parameters:             |
+|              | 1) fault_type: which is used for finding the attacker's      |
+|              | scripts. It should always be set to "close-interface" in    |
+|              | this test case.                                              |
+|              | 2) host: which is the name of a control node being attacked. |
+|              | 3) interface: the network interface to be turned off.        |
+|              |                                                              |
+|              | There are four instances of the "close-interface" attacker:  |
+|              | attacker1 (for the public network):                          |
+|              | -fault_type: "close-interface"                               |
+|              | -host: node1                                                 |
+|              | -interface: "br-ex"                                          |
+|              | attacker2 (for the management network):                      |
+|              | -fault_type: "close-interface"                               |
+|              | -host: node1                                                 |
+|              | -interface: "br-mgmt"                                        |
+|              | attacker3 (for the storage network):                         |
+|              | -fault_type: "close-interface"                               |
+|              | -host: node1                                                 |
+|              | -interface: "br-storage"                                     |
+|              | attacker4 (for the private network):                         |
+|              | -fault_type: "close-interface"                               |
+|              | -host: node1                                                 |
+|              | -interface: "br-mesh"                                        |
++--------------+--------------------------------------------------------------+
+|monitors      | In this test case, the monitor named "openstack-cmd" is      |
+|              | needed. The monitor needs two parameters:                    |
+|              | 1) monitor_type: which is used for finding the monitor class |
+|              | and related scripts. It should always be set to              |
+|              | "openstack-cmd" for this monitor.                            |
+|              | 2) command_name: the command name used for the request.      |
+|              |                                                              |
+|              | There are four instances of the "openstack-cmd" monitor:     |
+|              | monitor1:                                                    |
+|              | -monitor_type: "openstack-cmd"                               |
+|              | -command_name: "nova image-list"                             |
+|              | monitor2:                                                    |
+|              | -monitor_type: "openstack-cmd"                               |
+|              | -command_name: "neutron router-list"                         |
+|              | monitor3:                                                    |
+|              | -monitor_type: "openstack-cmd"                               |
+|              | -command_name: "heat stack-list"                             |
+|              | monitor4:                                                    |
+|              | -monitor_type: "openstack-cmd"                               |
+|              | -command_name: "cinder list"                                 |
++--------------+--------------------------------------------------------------+
+|metrics       | In this test case, there is one metric:                      |
+|              | 1) service_outage_time: which indicates the maximum outage   |
+|              | time (seconds) of the specified OpenStack command request.   |
++--------------+--------------------------------------------------------------+
+|test tool     | Developed by the project. Please see folder:                 |
+|              | "yardstick/benchmark/scenarios/availability/ha_tools"        |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|references    | ETSI NFV REL001                                              |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|configuration | This test case needs two configuration files:                |
+|              | 1) test case file: opnfv_yardstick_tc050.yaml                |
+|              | -Attackers: see above "attackers" description                |
+|              | -waiting_time: which is the time (seconds) from the attack   |
+|              | being performed to stopping the monitors                     |
+|              | -Monitors: see above "monitors" description                  |
+|              | -SLA: see above "metrics" description                        |
+|              |                                                              |
+|              | 2) POD file: pod.yaml                                        |
+|              | The POD configuration should be recorded in pod.yaml first.  |
+|              | The "host" item in this test case will use the node name in  |
+|              | the pod.yaml.                                                |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|test sequence | description and expected result                              |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|step 1        | start monitors:                                              |
+|              | each monitor will run in an independent process              |
+|              |                                                              |
+|              | Result: The monitor info will be collected.                  |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|step 2        | do attacker: connect the host through SSH, and then execute  |
+|              | the turn-off-network-interface script with the parameter     |
+|              | value specified by "interface".                              |
+|              |                                                              |
+|              | Result: The network interface will be turned down.           |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|step 3        | stop monitors after a period of time specified by            |
+|              | "waiting_time"                                               |
+|              |                                                              |
+|              | Result: The monitor info will be aggregated.                 |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|step 4        | verify the SLA                                               |
+|              |                                                              |
+|              | Result: The test case is passed or not.                      |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|post-action   | It is the action taken when the test case exits. It turns    |
+|              | up the network interface of the control node if it is not    |
+|              | turned up.                                                   |
++--------------+--------------------------------------------------------------+
+|test verdict  | Fails only if SLA is not passed, or if there is a test case  |
+|              | execution problem.                                           |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
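The attacker and monitor parameters above translate into YAML along these
lines. Only the parameter names documented in the table are given; the
scenario type ("ServiceHA"), the nesting, the "wait_time" key and the SLA
threshold in opnfv_yardstick_tc050.yaml are assumptions for illustration::

    ---
    # Hedged sketch; only fault_type/host/interface and
    # monitor_type/command_name come from the table above.
    scenarios:
    -
      type: ServiceHA              # assumed scenario type
      options:
        attackers:
        -
          fault_type: "close-interface"
          host: node1
          interface: "br-mgmt"     # br-ex/br-mgmt/br-storage/br-mesh
        monitors:
        -
          monitor_type: "openstack-cmd"
          command_name: "nova image-list"
        wait_time: 10              # "waiting_time" above; key name assumed
      nodes:
        node1: node1.LF            # resolved against pod.yaml; assumed
      sla:
        outage_time: 5             # service_outage_time bound; value assumed
        action: monitor

The pod.yaml referenced here records the physical nodes. A minimal entry
might look like the following; the field names follow the Yardstick node-file
samples and all values are placeholders::

    nodes:
    -
      name: node1                      # referenced by "host" above
      role: Controller
      ip: 10.1.0.50                    # placeholder address
      user: root                       # placeholder credentials
      key_filename: /root/.ssh/id_rsa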
| +| | | ++--------------+--------------------------------------------------------------+ +|test sequence | description and expected result | +| | | ++--------------+--------------------------------------------------------------+ +|step 1 | start monitors: | +| | each monitor will run with independently process | +| | | +| | Result: The monitor info will be collected. | +| | | ++--------------+--------------------------------------------------------------+ +|step 2 | do attacker: connect the host through SSH, and then execute | +| | the turnoff network interface script with param value | +| | specified by "interface". | +| | | +| | Result: Network interfaces will be turned down. | +| | | ++--------------+--------------------------------------------------------------+ +|step 3 | stop monitors after a period of time specified by | +| | "waiting_time" | +| | | +| | Result: The monitor info will be aggregated. | +| | | ++--------------+--------------------------------------------------------------+ +|step 4 | verify the SLA | +| | | +| | Result: The test case is passed or not. | +| | | ++--------------+--------------------------------------------------------------+ +|post-action | It is the action when the test cases exist. It turns up the | +| | network interface of the control node if it is not turned | +| | up. | ++--------------+--------------------------------------------------------------+ +|test verdict | Fails only if SLA is not passed, or if there is a test case | +| | execution problem. | +| | | ++--------------+--------------------------------------------------------------+ diff --git a/docs/userguide/opnfv_yardstick_tc051.rst b/docs/userguide/opnfv_yardstick_tc051.rst new file mode 100644 index 000000000..3402ccd92 --- /dev/null +++ b/docs/userguide/opnfv_yardstick_tc051.rst @@ -0,0 +1,117 @@ +.. This work is licensed under a Creative Commons Attribution 4.0 International +.. License. +.. http://creativecommons.org/licenses/by/4.0 +.. (c) OPNFV, Yin Kanglin and others. +.. 14_ykl@tongji.edu.cn + +************************************* +Yardstick Test Case Description TC051 +************************************* + ++-----------------------------------------------------------------------------+ +|OpenStack Controller Node CPU Overload High Availability | +| | ++--------------+--------------------------------------------------------------+ +|test case id | OPNFV_YARDSTICK_TC051: OpenStack Controller Node CPU | +| | Overload High Availability | ++--------------+--------------------------------------------------------------+ +|test purpose | This test case will verify the high availability of control | +| | node. When the CPU usage of a specified controller node is | +| | stressed to 100%, which breaks down the Openstack services | +| | on this node. These Openstack service should able to be | +| | accessed by other controller nodes, and the services on | +| | failed controller node should be isolated. | ++--------------+--------------------------------------------------------------+ +|test method | This test case stresses the CPU uasge of a specified control | +| | node to 100%, then checks whether all services provided by | +| | the environment are OK with some monitor tools. | ++--------------+--------------------------------------------------------------+ +|attackers | In this test case, an attacker called "stress-cpu" is | +| | needed. This attacker includes two parameters: | +| | 1) fault_type: which is used for finding the attacker's | +| | scripts. 
diff --git a/docs/userguide/opnfv_yardstick_tc053.rst b/docs/userguide/opnfv_yardstick_tc053.rst
new file mode 100644
index 000000000..8808d12d9
--- /dev/null
+++ b/docs/userguide/opnfv_yardstick_tc053.rst
@@ -0,0 +1,142 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, Yin Kanglin and others.
+.. 14_ykl@tongji.edu.cn
+
+*************************************
+Yardstick Test Case Description TC053
+*************************************
+
++-----------------------------------------------------------------------------+
+|OpenStack Controller Load Balance Service High Availability                  |
+|                                                                             |
++--------------+--------------------------------------------------------------+
+|test case id  | OPNFV_YARDSTICK_TC053: OpenStack Controller Load Balance     |
+|              | Service High Availability                                    |
++--------------+--------------------------------------------------------------+
+|test purpose  | This test case will verify the high availability of the      |
+|              | load balancing service (currently HAProxy) that supports     |
+|              | OpenStack on the controller nodes. When the load balancing   |
+|              | service of a specified controller node is killed, the test   |
+|              | checks whether the load balancers on the other controller    |
+|              | nodes still work, and whether the controller node restarts   |
+|              | the killed load balancer.                                    |
++--------------+--------------------------------------------------------------+
+|test method   | This test case kills the processes of the load balancing     |
+|              | service on a selected control node, then checks whether the  |
+|              | request of the related OpenStack command is OK and the       |
+|              | killed processes are recovered.                              |
++--------------+--------------------------------------------------------------+
+|attackers     | In this test case, an attacker called "kill-process" is      |
+|              | needed. This attacker includes three parameters:             |
+|              | 1) fault_type: which is used for finding the attacker's      |
+|              | scripts. It should always be set to "kill-process" in this   |
+|              | test case.                                                   |
+|              | 2) process_name: which is the process name of the specified  |
+|              | OpenStack service. If multiple processes with the same name  |
+|              | are running on the host, all of them are killed by this      |
+|              | attacker.                                                    |
+|              | In this case, this parameter should always be set to         |
+|              | "haproxy".                                                   |
+|              | 3) host: which is the name of a control node being attacked. |
+|              |                                                              |
+|              | e.g.                                                         |
+|              | -fault_type: "kill-process"                                  |
+|              | -process_name: "haproxy"                                     |
+|              | -host: node1                                                 |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|monitors      | In this test case, two kinds of monitor are needed:          |
+|              | 1. the "openstack-cmd" monitor constantly requests a         |
+|              | specific OpenStack command, which needs two parameters:      |
+|              | 1) monitor_type: which is used for finding the monitor class |
+|              | and related scripts. It should always be set to              |
+|              | "openstack-cmd" for this monitor.                            |
+|              | 2) command_name: the command name used for the request.      |
+|              |                                                              |
+|              | 2. the "process" monitor checks whether a process is running |
+|              | on a specific node, which needs three parameters:            |
+|              | 1) monitor_type: which is used for finding the monitor class |
+|              | and related scripts. It should always be set to "process"    |
+|              | for this monitor.                                            |
+|              | 2) process_name: which is the process name for the monitor   |
+|              | 3) host: which is the name of the node running the process   |
+|              | In this case, the command_name of monitor1 should be a       |
+|              | command of a service supported by the load balancer, and     |
+|              | the process_name of monitor2 should be "haproxy",            |
+|              | for example:                                                 |
+|              |                                                              |
+|              | e.g.                                                         |
+|              | monitor1:                                                    |
+|              | -monitor_type: "openstack-cmd"                               |
+|              | -command_name: "nova image-list"                             |
+|              | monitor2:                                                    |
+|              | -monitor_type: "process"                                     |
+|              | -process_name: "haproxy"                                     |
+|              | -host: node1                                                 |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|metrics       | In this test case, there are two metrics:                    |
+|              | 1) service_outage_time: which indicates the maximum outage   |
+|              | time (seconds) of the specified OpenStack command request.   |
+|              | 2) process_recover_time: which indicates the maximum time    |
+|              | (seconds) from the process being killed to being recovered   |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|test tool     | Developed by the project. Please see folder:                 |
+|              | "yardstick/benchmark/scenarios/availability/ha_tools"        |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|references    | ETSI NFV REL001                                              |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|configuration | This test case needs two configuration files:                |
+|              | 1) test case file: opnfv_yardstick_tc053.yaml                |
+|              | -Attackers: see above "attackers" description                |
+|              | -waiting_time: which is the time (seconds) from the process  |
+|              | being killed to stopping the monitors                        |
+|              | -Monitors: see above "monitors" description                  |
+|              | -SLA: see above "metrics" description                        |
+|              |                                                              |
+|              | 2) POD file: pod.yaml                                        |
+|              | The POD configuration should be recorded in pod.yaml first.  |
+|              | The "host" item in this test case will use the node name in  |
+|              | the pod.yaml.                                                |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|test sequence | description and expected result                              |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|step 1        | start monitors:                                              |
+|              | each monitor will run in an independent process              |
+|              |                                                              |
+|              | Result: The monitor info will be collected.                  |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|step 2        | do attacker: connect the host through SSH, and then execute  |
+|              | the kill-process script with the parameter value specified   |
+|              | by "process_name"                                            |
+|              |                                                              |
+|              | Result: Process will be killed.                              |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|step 3        | stop monitors after a period of time specified by            |
+|              | "waiting_time"                                               |
+|              |                                                              |
+|              | Result: The monitor info will be aggregated.                 |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|step 4        | verify the SLA                                               |
+|              |                                                              |
+|              | Result: The test case is passed or not.                      |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|post-action   | It is the action taken when the test case exits. It will     |
+|              | check the status of the specified process on the host, and   |
+|              | restart the process if it is not running, for later test     |
+|              | cases.                                                       |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|test verdict  | Fails only if SLA is not passed, or if there is a test case  |
+|              | execution problem.                                           |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
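Putting the documented attacker, monitors and metrics together, the relevant
fragment of opnfv_yardstick_tc053.yaml might read as follows; the parameter
names come from the table above, while the nesting, the SLA key names and
the threshold values are illustrative assumptions::

    attackers:
    -
      fault_type: "kill-process"
      process_name: "haproxy"
      host: node1

    monitors:
    -
      monitor_type: "openstack-cmd"
      command_name: "nova image-list"  # any command served via the LB
    -
      monitor_type: "process"
      process_name: "haproxy"
      host: node1

    sla:
      outage_time: 5             # bound on service_outage_time; assumed
      recover_time: 30           # bound on process_recover_time; assumed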
diff --git a/docs/userguide/opnfv_yardstick_tc061.rst b/docs/userguide/opnfv_yardstick_tc061.rst
new file mode 100644
index 000000000..1d424414e
--- /dev/null
+++ b/docs/userguide/opnfv_yardstick_tc061.rst
@@ -0,0 +1,88 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International
+.. License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV, Huawei Technologies Co.,Ltd and others.
+
+*************************************
+Yardstick Test Case Description TC061
+*************************************
+
+.. _man-pages: http://linux.die.net/man/1/sar
+
++-----------------------------------------------------------------------------+
+|Network Utilization                                                          |
+|                                                                             |
++--------------+--------------------------------------------------------------+
+|test case id  | OPNFV_YARDSTICK_TC061_Network Utilization                    |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|metric        | Network utilization                                          |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|test purpose  | To evaluate the IaaS network capability with regards to      |
+|              | network utilization, including: total number of packets      |
+|              | received per second, total number of packets transmitted per |
+|              | second, total number of kilobytes received per second, total |
+|              | number of kilobytes transmitted per second, number of        |
+|              | compressed packets received per second (for cslip etc.),     |
+|              | number of compressed packets transmitted per second, number  |
+|              | of multicast packets received per second, and utilization    |
+|              | percentage of the network interface.                         |
+|              | This test case should be run in parallel to other Yardstick  |
+|              | test cases and not run as a stand-alone test case.           |
+|              | Measure the network usage statistics from the network        |
+|              | devices. Average, minimum and maximum values are obtained.   |
+|              | The purpose is also to be able to spot trends.               |
+|              | Test results, graphs and similar shall be stored for         |
+|              | comparison reasons and product evolution understanding       |
+|              | between different OPNFV versions and/or configurations.      |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|configuration | File: netutilization.yaml (in the 'samples' directory)       |
+|              |                                                              |
+|              | * interval: 1 - repeat, pausing 1 second in-between.         |
+|              | * count: 1 - display statistics once, then exit.             |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|test tool     | sar                                                          |
+|              |                                                              |
+|              | The sar command writes to standard output the contents of    |
+|              | selected cumulative activity counters in the operating       |
+|              | system.                                                      |
+|              | sar is normally part of a Linux distribution, hence it       |
+|              | does not need to be installed.                               |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|references    | man-pages_                                                   |
+|              |                                                              |
+|              | ETSI-NFV-TST001                                              |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|applicability | Test can be configured with different:                       |
+|              |                                                              |
+|              | * interval;                                                  |
+|              | * count;                                                     |
+|              | * runner Iteration and intervals.                            |
+|              |                                                              |
+|              | There are default values for each above-mentioned option.    |
+|              | Run in background with other test cases.                     |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|pre-test      | The test case image needs to be installed into Glance        |
+|conditions    | with sar included in the image.                              |
+|              |                                                              |
+|              | No POD specific requirements have been identified.           |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|test sequence | description and expected result.                             |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|step 1        | The host is installed as client. The related TC, or TCs, is  |
+|              | invoked and sar logs are produced and stored.                |
+|              |                                                              |
+|              | Result: logs are stored.                                     |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
+|test verdict  | None. Network utilization results are fetched and stored.    |
+|              |                                                              |
++--------------+--------------------------------------------------------------+
| +| | | ++--------------+--------------------------------------------------------------+ +|pre-test | The test case image needs to be installed into Glance | +|conditions | with sar included in the image. | +| | | +| | No POD specific requirements have been identified. | +| | | ++--------------+--------------------------------------------------------------+ +|test sequence | description and expected result. | +| | | ++--------------+--------------------------------------------------------------+ +|step 1 | The host is installed as client. The related TC, or TCs, is | +| | invoked and sar logs are produced and stored. | +| | | +| | Result: logs are stored. | +| | | ++--------------+--------------------------------------------------------------+ +|test verdict | None. Network utilization results are fetched and stored. | +| | | ++--------------+--------------------------------------------------------------+ diff --git a/docs/userguide/opnfv_yardstick_tc073.rst b/docs/userguide/opnfv_yardstick_tc073.rst new file mode 100644 index 000000000..a6499eabb --- /dev/null +++ b/docs/userguide/opnfv_yardstick_tc073.rst @@ -0,0 +1,81 @@ +.. This work is licensed under a Creative Commons Attribution 4.0 International +.. License. +.. http://creativecommons.org/licenses/by/4.0 +.. (c) OPNFV, Huawei Technologies Co.,Ltd and others. + +************************************* +Yardstick Test Case Description TC073 +************************************* + +.. _netperf: http://www.netperf.org/netperf/training/Netperf.html + ++-----------------------------------------------------------------------------+ +|Throughput per NFVI node test | +| | ++--------------+--------------------------------------------------------------+ +|test case id | OPNFV_YARDSTICK_TC073_Network latency and throughput between | +| | nodes | +| | | ++--------------+--------------------------------------------------------------+ +|metric | Network latency and throughput | +| | | ++--------------+--------------------------------------------------------------+ +|test purpose | To evaluate the IaaS network performance with regards to | +| | flows and throughput, such as if and how different amounts | +| | of packet sizes and flows matter for the throughput between | +| | nodes in one pod. | +| | | ++--------------+--------------------------------------------------------------+ +|configuration | file: opnfv_yardstick_tc073.yaml | +| | | +| | Packet size: default 1024 bytes. | +| | | +| | Test length: default 20 seconds. | +| | | +| | The client and server are distributed on different nodes. | +| | | +| | For SLA max_mean_latency is set to 100. | +| | | ++--------------+--------------------------------------------------------------+ +|test tool | netperf | +| | Netperf is a software application that provides network | +| | bandwidth testing between two hosts on a network. It | +| | supports Unix domain sockets, TCP, SCTP, DLPI and UDP via | +| | BSD Sockets. Netperf provides a number of predefined tests | +| | e.g. to measure bulk (unidirectional) data transfer or | +| | request response performance. | +| | (netperf is not always part of a Linux distribution, hence | +| | it needs to be installed.) | +| | | ++--------------+--------------------------------------------------------------+ +|references | netperf Man pages | +| | ETSI-NFV-TST001 | +| | | ++--------------+--------------------------------------------------------------+ +|applicability | Test can be configured with different packet sizes and | +| | test duration. Default values exist. 
| +| | | +| | SLA (optional): max_mean_latency | +| | | ++--------------+--------------------------------------------------------------+ +|pre-test | The POD can be reached by external ip and logged on via ssh | +|conditions | | ++--------------+--------------------------------------------------------------+ +|test sequence | description and expected result | +| | | ++--------------+--------------------------------------------------------------+ +|step 1 | Install netperf tool on each specified node, one is as the | +| | server, and the other as the client. | +| | | ++--------------+--------------------------------------------------------------+ +|step 2 | Log on to the client node and use the netperf command to | +| | execute the network performance test | +| | | ++--------------+--------------------------------------------------------------+ +|step 3 | The throughput results stored. | +| | | ++--------------+--------------------------------------------------------------+ +|test verdict | Fails only if SLA is not passed, or if there is a test case | +| | execution problem. | +| | | ++--------------+--------------------------------------------------------------+ |