path: root/tests/opnfv/test_suites/opnfv_os-odl_l2-nofeature-ha_daily.yaml
##############################################################################
# Copyright (c) 2017 Huawei Technologies Co.,Ltd and others.
#
# All rights reserved. This program and the accompanying materials
# are made available under the terms of the Apache License, Version 2.0
# which accompanies this distribution, and is available at
# http://www.apache.org/licenses/LICENSE-2.0
##############################################################################
---
# os-odl_l2-nofeature-ha daily task suite

schema: "yardstick:suite:0.1"

name: "os-odl_l2-nofeature-ha"
test_cases_dir: "tests/opnfv/test_cases/"
test_cases:
-
  file_name: opnfv_yardstick_tc002.yaml
-
  file_name: opnfv_yardstick_tc005.yaml
-
  file_name: opnfv_yardstick_tc010.yaml
-
  file_name: opnfv_yardstick_tc011.yaml
  constraint:
      installer: compass
-
  file_name: opnfv_yardstick_tc012.yaml
-
  file_name: opnfv_yardstick_tc014.yaml
-
  file_name: opnfv_yardstick_tc037.yaml
-
  file_name: opnfv_yardstick_tc055.yaml
  constraint:
      installer: compass
      pod: huawei-pod1
  task_args:
      huawei-pod1: '{"file": "etc/yardstick/nodes/compass_sclab_physical/pod.yaml",
      "host": "node5.yardstick-TC055"}'
-
  file_name: opnfv_yardstick_tc063.yaml
  constraint:
      installer: compass
      pod: huawei-pod1
  task_args:
      huawei-pod1: '{"file": "etc/yardstick/nodes/compass_sclab_physical/pod.yaml",
      "host": "node5.yardstick-TC063"}'
-
  file_name: opnfv_yardstick_tc069.yaml
-
  file_name: opnfv_yardstick_tc070.yaml
-
  file_name: opnfv_yardstick_tc071.yaml
-
  file_name: opnfv_yardstick_tc072.yaml
-
  file_name: opnfv_yardstick_tc075.yaml
  constraint:
      installer: compass
      pod: huawei-pod1
  task_args:
      huawei-pod1: '{"file": "etc/yardstick/nodes/compass_sclab_physical/pod.yaml",
      "host": "node1.LF"}'
pod description file) in the /tmp directory. Edit admin_rc.sh and add the
following line:

.. code-block:: bash

    export OS_CACERT=/tmp/os_cacert

If you are using compass, fuel, apex or joid to deploy your OpenStack
environment, you can use the following command to get the required files:

.. code-block:: bash

    bash /utils/env_prepare/config_prepare.sh -i <installer> [--debug]

Note that if we execute the command above, admin_rc.sh and pod.yml are
created automatically in the /tmp folder, with the line
``export OS_CACERT=/tmp/os_cacert`` already added to admin_rc.sh.

Executing Specified Testcase
----------------------------

1. Bottlenecks provides a CLI interface to run the tests, which is the most
   convenient way since it is closest to natural language. A GUI interface
   with a REST API will be provided in a later update.

   .. code-block:: bash

       bottlenecks testcase|teststory run <testname>

   For the *testcase* command, testname should match the name of the test
   case configuration file located in testsuites/posca/testcase_cfg. For
   stress tests in Danube/Euphrates, *testcase* should be replaced by either
   *posca_factor_ping* or *posca_factor_system_bandwidth*.

   For the *teststory* command, a user can specify the test cases to be
   executed by defining them in a teststory configuration file located in
   testsuites/posca/testsuite_story. An example named *posca_factor_test*
   can be found there.

2. There are also two other ways to run test cases and test stories.

   The first one is to use a shell script:

   .. code-block:: bash

       bash run_tests.sh [-h|--help] -s <testsuite>|-c <testcase>

   The second is to use the Python interpreter:

   .. code-block:: bash

       REPORT=False
       opts="--privileged=true -id"
       docker_volume="-v /var/run/docker.sock:/var/run/docker.sock -v /tmp:/tmp"
       docker run $opts --name bottlenecks-load-master $docker_volume \
           opnfv/bottlenecks:latest /bin/bash
       sleep 5
       POSCA_SCRIPT="/home/opnfv/bottlenecks/testsuites/posca"
       docker exec bottlenecks-load-master python ${POSCA_SCRIPT}/../run_posca.py \
           testcase|teststory <testname> ${REPORT}

Showing Report
--------------

Bottlenecks uses ELK to illustrate the testing results. Assuming the IP of
the SUT (System Under Test) is denoted as ipaddr, the address of Kibana is
http://[ipaddr]:5601. One can visit this address to see the illustrations.
The address of elasticsearch is http://[ipaddr]:9200; one can use any REST
tool to access the testing data stored in elasticsearch.

Cleaning Up Environment
-----------------------

.. code-block:: bash

    . rm_virt_env.sh

If you want to clean up the docker containers created during the test, you
can execute the additional command below:

.. code-block:: bash

    bash run_tests.sh --cleanup

Note that you can also add the cleanup parameter when you run a test case;
the environment will then be cleaned up automatically when the test
completes.

Run POSCA through Community CI
==============================

POSCA test cases are now run by OPNFV CI. See https://build.opnfv.org for
details of the build jobs. Each build job is set up to execute a single test
case. The test results/logs are printed on the web page and reported
automatically to the community MongoDB. There are two ways to report the
results.

1. Report testing results via the shell script:

   .. code-block:: bash

       bash run_tests.sh [-h|--help] -s <testsuite>|-c <testcase> --report

2. Report testing results via the Python interpreter:

   .. code-block:: bash

       REPORT=True
       opts="--privileged=true -id"
       docker_volume="-v /var/run/docker.sock:/var/run/docker.sock -v /tmp:/tmp"
       docker run $opts --name bottlenecks-load-master $docker_volume \
           opnfv/bottlenecks:latest /bin/bash
       sleep 5
       POSCA_SCRIPT="/home/opnfv/bottlenecks/testsuites/posca"
       docker exec bottlenecks-load-master python ${POSCA_SCRIPT}/../run_posca.py \
           testcase|teststory <testcase> ${REPORT}

Test Result Description
=======================

* Please refer to the release notes and also
  https://wiki.opnfv.org/display/testing/Result+alignment+for+ELK+post-processing
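The multi-line Python-interpreter invocations above build the final ``docker exec`` command out of several shell variables. Purely as an illustration, the dry run below assembles (but does not execute) that command, using *posca_factor_ping*, one of the stress-test case names mentioned earlier, as an example test name:

```shell
# Dry run only: assemble the final command so its pieces are visible.
# Nothing is started; no docker daemon is contacted.
REPORT=True
POSCA_SCRIPT="/home/opnfv/bottlenecks/testsuites/posca"
TESTNAME=posca_factor_ping   # example test case name from the guide
CMD="docker exec bottlenecks-load-master python ${POSCA_SCRIPT}/../run_posca.py testcase ${TESTNAME} ${REPORT}"
echo "$CMD"
```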