.. This work is licensed under a Creative Commons Attribution 4.0 International License.
.. http://creativecommons.org/licenses/by/4.0
.. (c) Huawei Technologies Co.,Ltd, and others

====================================
Dovetail as a Generic Test Framework
====================================

.. toctree::
   :maxdepth: 2


Overview
========

Dovetail is responsible for the technical realization of the OPNFV Verification
Program (OVP) and other compliance verification projects within the scope of
the Linux Foundation Networking (LFN) umbrella projects.
Dovetail provides a generic framework for executing a specific set of test cases
which define the scope of a given compliance verification program, such as OVP.

This document introduces the Dovetail generic framework and explains how to
develop within it.


Introduction of Dovetail Framework
==================================

The following diagram illustrates the Dovetail generic framework.

.. image:: ../../../images/dovetail_generic_framework.png
    :align: center
    :scale: 50%

The diagram shows five main parts: `TestcaseFactory`, `TestRunnerFactory`,
`CrawlerFactory`, `CheckerFactory` and `test case groups`.

- **TestcaseFactory**: Each project needs its own test case class, such as
  `FunctestTestcase` or `OnapVtpTestcase`. All of these classes derive from the
  base class `Testcase`, which already provides many functions, mainly for
  parsing test case configuration files. If a project has no special
  requirements, its class only needs to be initialized with a different type;
  otherwise, some functions have to be overridden or added.

- **TestRunnerFactory**: Similar to `TestcaseFactory`, each project has its own
  test runner class. Dovetail supports two kinds of test runners, `DockerRunner`
  and `ShellRunner`. Projects based on Docker need their own runner classes,
  such as `FunctestRunner`, which inherit from `DockerRunner`. Projects based on
  shell commands can use `ShellRunner` directly. Test runners provide many
  functions to support test case runs, such as preparing each project's test
  tool, running all the commands defined by each test case and cleaning up the
  environment.

- **Test case groups**: Each group is composed of one project configuration file
  and a set of test cases belonging to that project. These groups are used as
  the input of the test runners to provide project and test case information.
  `ShellRunner` only needs the test case configurations as input.

- **CrawlerFactory**: This is used to parse the results of test cases and record
  them in a unified format. Since the original result data reported by each
  project differs, each project needs its own crawler class to parse its
  results.

- **CheckerFactory**: This is used to check the result data generated by the
  crawler. Each project should have its own checker class because the
  requirements of the projects differ.


Development with Dovetail Framework
===================================

Anyone interested in extending the Dovetail framework to integrate new upstream
test cases will face one of the two following scenarios:

- **Adding test cases that belong to integrated projects**: Some projects are
  already integrated in Dovetail. These projects come from the OPNFV (Open
  Platform for NFV) and ONAP (Open Network Automation Platform) communities. It
  is much easier to add new test cases that belong to these projects.

- **Adding test cases that do not belong to integrated projects**: The test
  cases may belong to other projects that haven't been integrated into Dovetail
  yet. These projects could be in OPNFV, ONAP or other communities. This
  scenario is a little more complicated.


Test cases belonging to integrated projects
-------------------------------------------

The Dovetail framework already includes a large number of test cases, all
implemented by upstream projects in OPNFV and ONAP. The upstream projects
already integrated in Dovetail are Functest, Yardstick and Bottlenecks from
OPNFV, and VNF SDK and VVP from ONAP.

In order to add a test case belonging to one of these projects, only one test
case configuration file, in YAML format, needs to be added. The following
describes how to use this file to add a new test case. Please refer to the
`Dovetail test case github
<https://github.com/opnfv/dovetail/tree/master/etc/testcase>`_
for the configuration files of all existing test cases.

.. code-block:: yaml

   ---
   Test case name in Dovetail:
     name: Test case name in Dovetail
     objective: Test case description
     validate:
       type: 'shell' or name of the project already integrated in Dovetail
       testcase: The original test case name called in the project that it is developed
       image_name: Name of the Docker image used to run this test
       pre_condition:
         - 'Commands needed to be executed before running this test'
         - 'e.g. cp src_file dest_file'
       cmds:
         - 'Commands used to run this test case'
       post_condition:
         - 'Commands needed to be executed after running this test case'
     report:
       source_archive_files:
         - test.log
       dest_archive_files:
         - path/to/archive/test.log
       check_results_files:
         - results.json
       portal_key_file: path/to/key/logs/xxx.log
       sub_testcase_list:
         - sub_test_1
         - sub_test_2
         - sub_test_3

This is the complete format of a test case configuration file. Here is a
detailed description of each of the configuration options.

- **Test case name in Dovetail**: All test cases should be named 'xxx.yyy.zzz'.
  This is the alias in Dovetail and has no relationship with the test case's
  name in its own project. The first part identifies the project this test case
  comes from (e.g. functest, onap-vtp). The second part classifies the test case
  by test area (e.g. healthcheck, ha). Dovetail can run all the test cases in a
  test suite that belong to the same test area, and the area is also used to
  group the test cases and generate the summary report at the end of the test.
  The last part is specific to the test case itself (e.g. image, haproxy, csar).
  It is better to keep the file name the same as the test case name, so that the
  configuration file can easily be found from the test case alias in Dovetail.

- **validate**: This is the main section to define how to run this test case.

  - **type**: This is the type of this test case. It can be `shell`, which means
    the test case is run with Linux shell commands within the Dovetail
    container, or it can be one of the projects already integrated in Dovetail
    (functest, yardstick, bottlenecks, onap-vtp and onap-vvp). In the latter
    case, the type is used to map to the project's configuration YAML file. For
    example, in order to add a test case of the OPNFV project Functest to the
    Dovetail framework, the type here should be `functest`, which maps to
    `functest_config.yml` for further project-level configuration. Please refer
    to the `Dovetail project config github
    <https://github.com/opnfv/dovetail/tree/master/etc/conf>`_ for more details.

  - **testcase**: This is the name defined in its own project. A test case can
    be uniquely identified by `type` and `testcase`. Take the test case
    `functest.vping.ssh` as an example: its `type` is 'functest' and its
    `testcase` is 'vping_ssh', and with these two properties it is uniquely
    identified. End users only need to know that there is a test case named
    `functest.vping.ssh` in the OVP compliance test scope; the Dovetail
    framework will run `vping_ssh` within the Functest Docker container.

  - **image_name**: [optional] If the type is `shell`, there is no need to
    provide this. For other types, default Docker images are defined in the
    project configuration files. If this test case uses a different Docker
    image, it needs to be overridden by adding `image_name` here. The
    `image_name` should be the Docker image name only, without a tag; the tag is
    defined in the project's configuration file for all test cases belonging to
    that project.

  - **pre_condition**: [optional] A list of all preparations needed by this
    test case. If the list is the same as the default one in its project
    configuration file, there is no need to repeat it here. Otherwise, it needs
    to be overridden. If the type is `shell`, all commands in `pre_condition`,
    `cmds` and `post_condition` should be executable within the Dovetail
    Ubuntu 14.04 Docker container. If the type is one of the Docker runner
    projects, all commands should be executable within that project's own
    container: Alpine 3.8 for Functest, Ubuntu 16.04 for Yardstick and
    Bottlenecks, and Ubuntu 14.04 for VNF SDK. Also, none of these commands
    should require a network connection, because some commercial platforms may
    be offline environments in private labs.

  - **cmds**: [optional] A list of all commands used to run this test case.

  - **post_condition**: [optional] A list of all commands needed after executing
    this test case such as some clean up operations.

- **report**: This section defines which log files the test case archives and
  which result files are parsed to report PASS or FAIL.

  - **source_archive_files**: [optional] If there is no need to archive any
    files, this section can be removed. Otherwise, this is a list of all source
    files that need to be archived. All files generated by the integrated
    projects are put under `$DOVETAIL_HOME/results`. In order to classify them
    and avoid overwriting, some important files need to be renamed or moved to
    new directories. Browse the directory `$DOVETAIL_HOME/results` to find all
    files that need to be archived. The paths here should be relative to
    `$DOVETAIL_HOME/results`.

  - **dest_archive_files**: [optional] This should be a list corresponding to
    the list of `source_archive_files`. All paths here should also be relative
    to `$DOVETAIL_HOME/results`.

  - **check_results_files**: This should be a list of relative paths of
    the result files generated by this test case. Dovetail will parse these files
    to get the result (PASS or FAIL).

  - **portal_key_file**: This should be the key log file of this test case which will
    be used by the OVP portal for review.

  - **sub_testcase_list**: [optional] This section is used almost exclusively
    for Tempest tests in Functest. Take `functest.tempest.osinterop` as an
    example: the `sub_testcase_list` is a checklist for this kind of Tempest
    test, and only when all sub test cases listed here pass can the test case be
    considered PASS. The other kind of Tempest test is `tempest_custom`, such as
    `functest.tempest.image`. Besides being used as the checklist, its
    `sub_testcase_list` is also used to generate a Functest input file that
    defines the list of sub test cases to be run.
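
As a concrete illustration, here is a minimal sketch of what a configuration
file for the `functest.vping.ssh` test case described above could look like.
The `objective` text and the log and result file paths in the `report` section
are illustrative assumptions; the real values should be taken from the files
actually produced under `$DOVETAIL_HOME/results`.

.. code-block:: yaml

   ---
   functest.vping.ssh:
     name: functest.vping.ssh
     objective: Verify SSH connectivity between two instances using vping
     validate:
       type: functest
       testcase: vping_ssh
     report:
       # Illustrative paths; check $DOVETAIL_HOME/results for the real names.
       source_archive_files:
         - functest.log
       dest_archive_files:
         - vping_ssh/functest.log
       check_results_files:
         - 'functest_results/vping_ssh.json'
       portal_key_file: vping_ssh/functest.log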


Test cases not belonging to integrated projects
-----------------------------------------------

If the test cases to be added to Dovetail do not belong to any project that is
already integrated into the Dovetail framework, then besides adding the test
case configuration files introduced above, some other files need to be added or
modified.


Step 1: Add a project configuration file
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

For a new test case that belongs to a new project, a project configuration file
must first be created to define this new project in Dovetail. Currently,
Dovetail only supports integrating projects through their Docker images. If the
test case should be run with the shell runner, just add a test case
configuration file with `type` 'shell' as described before and skip the
following steps. The following describes how to use a project configuration
file to add a new project to Dovetail. Please refer to the `Dovetail projects
configuration github
<https://github.com/opnfv/dovetail/tree/master/etc/conf>`_ for the configuration
files of all integrated projects.

.. code-block:: yaml

   ---

   {% set validate_testcase = validate_testcase or '' %}
   {% set testcase = testcase or '' %}
   {% set dovetail_home = dovetail_home or '' %}
   {% set debug = debug or 'false' %}
   {% set build_tag = build_tag or '' %}
   {% set userconfig_dir = '/tmp/userconfig' %}
   {% set patches_dir = '/tmp/patches' %}
   {% set result_dir = '/tmp/results' %}
   {% set openrc_file = '/home/conf/env_file' %}

   project name:
     image_name: name of the docker image
     docker_tag: tag of the docker image
     opts:
       detach: true
       stdin_open: true
       privileged: true
     shell: '/bin/bash'
     envs:
       - 'CI_DEBUG={{debug}}'
       - 'DEPLOY_SCENARIO={{deploy_scenario}}'
       - 'ENV_NAME=env_value'
     volumes:
       - '{{dovetail_home}}/userconfig:{{userconfig_dir}}'
       - '{{dovetail_home}}/results:{{result_dir}}'
       - '/path/on/host:/path/in/container'
       - '/path/of/host/file:/file/path/in/container'
     mounts:
       - 'source={{dovetail_home}}/pre_config/env_config.sh,target={{openrc_file}}'
       - 'source={{dovetail_home}}/pre_config,target=/home/opnfv/pre_config'
       - 'source=/file/or/directory/on/host,target=/file/or/directory/in/container'
     patches_dir: {{patches_dir}}
     pre_condition:
       - 'Commands needed to be executed before running this test'
     cmds:
       - 'Commands used to run this test case'
     post_condition:
       - 'Commands needed to be executed after running this test case'
     openrc: absolute path of openstack credential files
     extra_container:
       - container1_name
       - container2_name

This is the complete format of a project configuration file. Here is a detailed
description of each of the configuration options.

- **Jinja Template**: At the beginning of this YAML file, Jinja templating is
  used to define some parameters that are used elsewhere in the file (e.g.
  result_dir and openrc_file). Besides those, there are other parameters
  provided by the Dovetail framework as input to this file, and further
  parameters can be defined using them (e.g. testcase and dovetail_home). The
  full list of input parameters that can be used is given below.

  - **attack_host**: This is the attack host name of the test case which calls
    this project configuration file. It is only for HA test cases and can be
    given in the HA configuration file `pod.yaml`.

  - **attack_process**: This is the attack process name of the test case which
    calls this project configuration file. It is only for HA test cases and can
    be given in the HA configuration file `pod.yaml`.

  - **build_tag**: This is a string that includes the UUID generated by Dovetail.

  - **cacert**: This is only for OpenStack test cases. It is the absolute path
    of the OpenStack certificate provided in the `env_config.sh` file.

  - **deploy_scenario**: This is the input when running Dovetail with option
    `--deploy-scenario`.

  - **debug**: This is `True` or `False` depending on whether the test cases are
    run with or without the option `--debug`.

  - **dovetail_home**: This is the `DOVETAIL_HOME` path taken from the environment.

  - **os_insecure**: This is only for test cases targeting OpenStack. It is
    `True` or `False` according to the `env_config.sh` file.

  - **testcase**: This is the name of the test case which calls this project
    configuration file. Different from `validate_testcase`, this is the alias
    defined in Dovetail, not the name in its own project.

  - **validate_testcase**: This is the name of the test case instance which calls this
    project configuration file. The name is provided by the configuration file
    of this test case (validate -> testcase).

- **project name**: This is the project name defined in Dovetail. For example,
  the OPNFV Functest project is named 'functest' in Dovetail. This project name
  is used by the test case configuration files as well as in some places in the
  Dovetail source code.

- **image_name**: This is the name of the default Docker image for most test cases
  within this project. Each test case can overwrite it with its own configuration.

- **docker_tag**: This is the tag of all Docker images for all test cases within
  this project. For each release, one Docker image with a stable and officially
  released version should be used.

- **opts**: These are all the options used to run Docker containers, except for
  'image', 'command', 'environment', 'volumes', 'mounts' and 'extra_hosts'. For
  example, the options include 'detach', 'privileged' and 'tty'. The full list
  of options can be found in the `Docker python SDK docs
  <https://docker-py.readthedocs.io/en/stable/containers.html>`_.

- **shell**: This is the command used to run inside the container.

- **envs**: This is a list of all environment variables used to run the Docker
  containers.

- **volumes**: A list of volume mappings used to run the Docker containers. The
  source volumes listed here may be nonexistent, in which case Docker creates
  new directories for them on the host. Every project should at least map
  `$DOVETAIL_HOME/results` on the Test Host into the containers to collect all
  result files.

- **mounts**: A list of mount mappings used to run the Docker containers, a more
  powerful alternative to **volumes**. The sources listed here must already
  exist on the host. Every project should at least mount
  `$DOVETAIL_HOME/pre_config` on the Test Host into the containers to get the
  config files.

- **patches_dir**: [optional] This is an absolute path of the patches applied to
  the containers.

- **pre_condition**: A list of all default preparations needed by this project.
  It can be overwritten by configurations of test cases.

- **cmds**: A list of all default commands used to run all test cases within
  this project. Also it can be overwritten by configurations of test cases.

- **post_condition**: A list of all default cleaning commands needed by this
  project.

- **openrc**: [optional] If the system under test is OpenStack, the absolute
  path must be provided here so that the credential file on the Test Host can be
  copied into the containers.

- **extra_container**: [optional] The extra containers that need to be removed
  at the end of the test. These containers are created by the test cases
  themselves at runtime rather than by Dovetail.
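
To make the template above more concrete, here is a minimal, hypothetical
project configuration sketch. The project name `newproject`, the image name,
tag, environment variables and commands are invented for illustration only; an
existing file such as `functest_config.yml` should be consulted for real
values.

.. code-block:: yaml

   ---

   {% set validate_testcase = validate_testcase or '' %}
   {% set dovetail_home = dovetail_home or '' %}
   {% set build_tag = build_tag or '' %}
   {% set result_dir = '/tmp/results' %}

   newproject:
     image_name: example/newproject-testing
     docker_tag: 1.0.0
     opts:
       detach: true
       stdin_open: true
       privileged: true
     shell: '/bin/bash'
     envs:
       - 'BUILD_TAG={{build_tag}}'
     volumes:
       - '{{dovetail_home}}/results:{{result_dir}}'
     mounts:
       - 'source={{dovetail_home}}/pre_config,target=/home/opnfv/pre_config'
     pre_condition:
       - 'echo preparing newproject test tool'
     cmds:
       - 'run_tests.sh -t {{validate_testcase}} -r {{result_dir}}'
     post_condition:
       - 'echo cleaning up newproject'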


Step 2: Add related classes
^^^^^^^^^^^^^^^^^^^^^^^^^^^

After adding the project and test case configuration files, some related classes
also need to be added to the source code.

- **Test Case class**: Each project should have its own test case class in
  `testcase.py` for `TestcaseFactory`.

- **Test Runner class**: Each project should have its own test runner class in
  `test_runner.py` for `TestRunnerFactory`.

- **Crawler class**: Each project should have its own test results crawler class
  in `report.py` for `CrawlerFactory`.

- **Checker class**: Each project should have its own test results checker class
  in `report.py` for `CheckerFactory`.


Step 3: Create related logs
^^^^^^^^^^^^^^^^^^^^^^^^^^^

If the classes added in Step 2 have a `create_log` function, it needs to be
called in `run.py` to initialize the log instances at the very beginning.


Step 4: Update unit tests
^^^^^^^^^^^^^^^^^^^^^^^^^

A patch will not be verified by the acceptance check unless it has 100% unit
test coverage.