.. This work is licensed under a Creative Commons Attribution 4.0 International License.
.. SPDX-License-Identifier: CC-BY-4.0

=======
License
=======

The OPNFV Euphrates release notes for Functest
are licensed under a Creative Commons Attribution 4.0 International License.
You should have received a copy of the license along with this document.
If not, see <http://creativecommons.org/licenses/by/4.0/>.

=============================================
OPNFV Euphrates 5.0 release note for Functest
=============================================

Abstract
========

This document contains the release notes of the Functest project.


OPNFV Euphrates Release
=======================

Functest deals with functional testing of the OPNFV solution.
It includes test cases developed within the project and in other OPNFV
projects, and it also integrates test cases from other upstream
communities.

The internal test cases are:

 * connection_check
 * api_check
 * snaps_health_check
 * vping_ssh
 * vping_userdata
 * tempest_smoke_serial
 * refstack_defcore
 * snaps_smoke
 * rally_sanity
 * odl
 * tempest_full_parallel
 * rally_full
 * cloudify_ims
 * vyos_vrouter

The OPNFV projects integrated into the Functest framework for automation are:

 * barometer
 * bgpvpn
 * doctor
 * domino
 * fds
 * odl-sfc
 * odl-netvirt
 * parser
 * promise
 * orchestra_openims
 * orchestra_clearwaterims


Release Data
============

+--------------------------------------+--------------------------------------+
| **Project**                          | functest                             |
|                                      |                                      |
+--------------------------------------+--------------------------------------+
| **Repo/tag**                         | opnfv-5.0.0                          |
|                                      |                                      |
+--------------------------------------+--------------------------------------+
| **Release designation**              | Euphrates initial release            |
|                                      |                                      |
+--------------------------------------+--------------------------------------+
| **Release date**                     | October 20th 2017                    |
|                                      |                                      |
+--------------------------------------+--------------------------------------+
| **Purpose of the delivery**          | Euphrates first release              |
|                                      |                                      |
+--------------------------------------+--------------------------------------+

Deliverables
============

Software
--------

 Functest Docker images:

 * https://hub.docker.com/r/opnfv/functest
 * https://hub.docker.com/r/opnfv/functest-healthcheck
 * https://hub.docker.com/r/opnfv/functest-smoke
 * https://hub.docker.com/r/opnfv/functest-features
 * https://hub.docker.com/r/opnfv/functest-components
 * https://hub.docker.com/r/opnfv/functest-vnf
 * https://hub.docker.com/r/opnfv/functest-parser
 * https://hub.docker.com/r/opnfv/functest-restapi

 TestAPI Docker image:

 * https://hub.docker.com/r/opnfv/testapi

Docker tag to be pulled: opnfv-5.0.0
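
For example, the healthcheck image for this release can be pulled as follows::

  docker pull opnfv/functest-healthcheck:opnfv-5.0.0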

Documents
---------

 - Installation/configuration guide: http://docs.opnfv.org/en/stable-euphrates/submodules/functest/docs/testing/user/configguide/index.html

 - User Guide: http://docs.opnfv.org/en/stable-euphrates/submodules/functest/docs/testing/user/userguide/index.html

 - Developer Guide: http://docs.opnfv.org/en/stable-euphrates/submodules/functest/docs/testing/developer/devguide/index.html

 - API Docs: http://artifacts.opnfv.org/functest/docs/index.html

 - Functest Framework presentation: http://testresults.opnfv.org/functest/framework/index.html


Version change
==============

Functest now delivers lightweight Docker images based on Alpine 3.6. The test cases are grouped into several categories,
or tiers, and must be run from the corresponding container. For example, to run the healthcheck test cases, the
opnfv/functest-healthcheck image shall be used. The tiers and the tests within them are explained in detail in the User Guide.

For ARM (aarch64), the former Ubuntu-based opnfv/functest image shall be used, since no Alpine images have been built
for this architecture yet. Alpine images will probably be supported in Euphrates 5.1.

The Parser test case has its own dedicated Docker image since it requires libraries released for OpenStack Pike,
whereas Euphrates is based on Ocata.

The Docker images no longer contain the OS images (Cirros, Ubuntu, CentOS, ...) needed by the tests. A script,
download_images.sh, has been created under the ci directory; it downloads all the images needed by all the tests.
This file can be modified by the user, since not all the images might be needed. It must be executed before starting
Functest, and the downloaded images must be attached to the container as a Docker volume, as sketched below. See the
Configuration Guide for more information.
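
A minimal sketch of this flow (the local directory and the mount point inside the container are
assumptions; check the exact paths in the Configuration Guide)::

  # download the OS images; edit the script first to skip unneeded ones
  bash download_images.sh

  # attach the downloaded images to the container as a Docker volume
  docker run -it -v $(pwd)/images:/home/opnfv/functest/images \
      opnfv/functest-healthcheck:opnfv-5.0.0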

The requirements have been split into 3 files:

 * requirements.txt: lists all abstract dependencies of the OPNFV packages
 * test-requirements.txt: lists all abstract dependencies required by the Functest unit tests
 * upper-constraints.txt: lists all concrete upstream dependencies required by the Functest Docker containers

The OPNFV requirements.txt and test-requirements.txt have been updated according to the stable/ocata
global-requirements.txt. Functest uses (and completes) the stable/ocata upper-constraints.txt in its Dockerfiles and
tox configuration, as illustrated below. The project relies on pbr, which injects the requirements into the
install_requires, tests_require and/or dependency_links arguments to setup. It also supports conditional
dependencies, which can be added to the requirements (e.g. dnspython>=1.14.0;python_version=='2.7').
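
For instance, installing the Functest requirements with the upper constraints applied, as the
Dockerfiles and tox do, boils down to the standard pip constraints mechanism::

  pip install -c upper-constraints.txt -r requirements.txt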

Logging management has been centralized in a configuration file (logging.ini) which may be modified by the user. By
default, the output of the test cases is redirected to log files and is not displayed on the console; only result
messages and summary tables are displayed.
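
A minimal sketch of such a file, in the standard Python logging fileConfig format (the handler
names and the log file path below are illustrative, not the shipped configuration)::

  [loggers]
  keys=root

  [handlers]
  keys=console,file

  [formatters]
  keys=standard

  [logger_root]
  level=DEBUG
  handlers=console,file

  [handler_console]
  class=StreamHandler
  level=INFO
  formatter=standard
  args=(sys.stdout,)

  [handler_file]
  class=FileHandler
  level=DEBUG
  formatter=standard
  args=('functest.log', 'w')

  [formatter_standard]
  format=%(asctime)s - %(name)s - %(levelname)s - %(message)s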

The framework has been refactored and all the test cases now inherit from a core class, TestCase. For feature
projects that develop test cases, 2 sub-classes have been created (see the sketch after this list):

 - Feature: it implements all the needed functions; the developer only has to override the method "execute"
   (e.g. Barometer)
 - BashFeature: it is used when the third-party test case is a shell script; the execution command must then be
   specified in testcases.yaml as an argument (e.g. Domino, Doctor)
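
A minimal sketch of a Feature-based test case, assuming the Euphrates module layout
(functest.core.feature) and the return-0-on-success convention; both are assumptions to be checked
against the Developer Guide::

  from functest.core import feature


  class MyFeature(feature.Feature):
      """Hypothetical third-party test case plugged into Functest."""

      def execute(self, **kwargs):
          # Run the real test logic here; return 0 on success and a
          # non-zero value on failure so the framework can set the status.
          return 0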

An internal REST API has been introduced in Euphrates. The goal is to trigger Functest operations through an API in
addition to the CLI. This can be considered a first step towards a pseudo-microservices approach where the different
test projects expose APIs to, and consume APIs from, the other test projects.
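
As a purely illustrative example, listing the declared test cases through this API could look like
the following call (the port and endpoint path are assumptions, not a documented contract)::

  curl http://localhost:5000/api/v1/functest/testcases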


Euphrates known restrictions/issues
===================================

+--------------+-----------+----------------------------------------------+
| Installer    | Scenario  |  Issue                                       |
+==============+===========+==============================================+
| fuel@aarch64 |    any    |  Alpine containers not supported yet for ARM |
|              |           |  The former Ubuntu Docker image shall still  |
|              |           |  be used for this architecture.              |
+--------------+-----------+----------------------------------------------+
| fuel@aarch64 |    any    |  VNF tier not supported yet.                 |
+--------------+-----------+----------------------------------------------+
|              |           |  The test cases belonging to the VNF tier    |
|     any      |    any    |  have only been tested on os-nosdn-nofeature |
|              |           |  scenarios and baremetal deployments.        |
+--------------+-----------+----------------------------------------------+
|     Joid     |    k8     |  Functest does not offer test suites for     |
|    Compass   |           |  Kubernetes scenarios yet.                   |
+--------------+-----------+----------------------------------------------+


Test and installer/scenario dependencies
========================================

It is not always possible to run all the test cases on all the scenarios.
The test case dependencies (installer or scenario) are detailed
in the testcases.yaml of each tier (a sample entry is sketched after the list below):

 * https://git.opnfv.org/functest/tree/docker/healthcheck/testcases.yaml?h=stable/euphrates
 * https://git.opnfv.org/functest/tree/docker/smoke/testcases.yaml?h=stable/euphrates
 * https://git.opnfv.org/functest/tree/docker/features/testcases.yaml?h=stable/euphrates
 * https://git.opnfv.org/functest/tree/docker/components/testcases.yaml?h=stable/euphrates
 * https://git.opnfv.org/functest/tree/docker/vnf/testcases.yaml?h=stable/euphrates
 * https://git.opnfv.org/functest/tree/docker/parser/testcases.yaml?h=stable/euphrates
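
A sample entry, sketched from the Euphrates layout (the values are illustrative; the dependencies
fields are regular expressions matched against the installer and scenario names)::

  tiers:
      -
          name: smoke
          testcases:
              -
                  case_name: vping_ssh
                  project_name: functest
                  criteria: 100
                  blocking: true
                  description: ''
                  dependencies:
                      installer: ''
                      scenario: '^((?!lxd).)*$'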


Test results
============

The Functest scenario status on October 20, 2017 can be seen on
http://testresults.opnfv.org/functest/euphrates/

Test logs are available in:

 - test results logs from CI: http://artifacts.opnfv.org (within different directories 'logs_functest_X')

 - jenkins logs on CI: https://build.opnfv.org/ci/view/functest/

 - jenkins logs on ARM CI: https://build.opnfv.org/ci/view/armband/



Open JIRA tickets
=================

+------------------+-----------------------------------------------+
|   JIRA           |         Description                           |
+==================+===============================================+
|                  |                                               |
|                  |                                               |
+------------------+-----------------------------------------------+

All the non-blocking tickets have been either fixed or postponed to
the next release.


Useful links
============

 - wiki project page: https://wiki.opnfv.org/opnfv_functional_testing

 - wiki Functest Euphrates page: https://wiki.opnfv.org/display/functest/5.+Euphrates

 - Functest repo: https://git.opnfv.org/cgit/functest

 - Functest CI dashboard: https://build.opnfv.org/ci/view/functest/

 - JIRA dashboard: https://jira.opnfv.org/secure/Dashboard.jspa?selectPageId=10611

 - Functest IRC chan: #opnfv-functest

 - Reporting page: http://testresults.opnfv.org/reporting/euphrates.html
Yardstick test case description
===============================

The "context" section describes the pre-condition environment for testing. Yardstick automatically
sets up the stack described in this section: it converts the section into a Heat template and boots
the VMs through the Heat client (it can also convert the section into a Kubernetes template to set
up containers). The two test VMs (athena and ares) are configured with the keyword "servers".
"flavor" determines how many vCPUs and how much memory the test VMs get. "yardstick-flavor" is a
basic flavor (1 vCPU, 1 GB RAM, 3 GB disk) which is created automatically when you run the command
"yardstick env prepare". "image" is the image name of the test VMs; if you use cirros-0.3.5, you
need to put the username of this image into "user". The "policy" for the placement of the test VMs
takes two values, affinity and availability, where "availability" means anti-affinity. In the
"network" section, you can configure which provider network and physical_network the test VMs
should use; you may need to configure segmentation_id when your network is a VLAN.

Moreover, you can configure your own flavor as below and Yardstick will set up the stack for you::

  flavor:
    name: yardstick-new-flavor
    vcpus: 12
    ram: 1024
    disk: 2

Besides the default Heat stack, Yardstick also allows you to set up two other context types, "Node"
and "Kubernetes"::

  context:
    type: Kubernetes
    name: k8s

and::

  context:
    type: Node
    name: LF

The "scenarios" section describes the testing steps; you can orchestrate complex testing by
combining scenarios. Each scenario performs one testing step. Within a scenario, you configure the
scenario type (the operation), the runner type and the SLA. TC002 has only one step: ping from the
host VM to the target VM. This step involves several detailed operations (ssh to the VM, ping from
VM1 to VM2, get the latency, verify the SLA, report the result). For the implementation details,
check the scenario's Python module; the Ping scenario lives in the Yardstick repo at
yardstick/yardstick/benchmark/scenarios/networking/ping.py.

After selecting the scenario type (such as Ping), you select a runner. There are 4 types of runner;
"Iteration" and "Duration" are the most commonly used, and "Iteration" is the default. For
Iteration, you can specify the number of iterations and the interval between them::

  runner:
    type: Iteration
    iterations: 10
    interval: 1

This makes Yardstick iterate the ping test 10 times with an interval of one second between
iterations. For Duration, you can specify the total duration of the scenario and the interval of
each ping test::

  runner:
    type: Duration
    duration: 60
    interval: 10

This makes Yardstick run the ping test in a loop until the total time of the scenario reaches 60
seconds, with an interval of ten seconds between loops.

The SLA is the pass criterion of the scenario; it depends on the scenario, and different scenarios
can have different SLA metrics.
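
As an illustration, a ping scenario can bound the measured round-trip time and tell the runner what
to do when the bound is violated; a minimal SLA block with illustrative values looks like this::

  sla:
    max_rtt: 10
    action: monitor
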
**How to write a new test case**

Yardstick already provides a library of testing steps, i.e. many scenario types. Basically, what
you need to do is orchestrate scenarios from this library. Two cases are shown here: how to write a
simple test case, and how to write a rather complex one.

Write a new simple test case

First, imagine a basic test case description as below.

+-----------------------------------------------------------------------------+
|Storage Performance                                                          |
|                                                                             |
+--------------+--------------------------------------------------------------+
|metric        | IOPS (Average IOs performed per second),                     |
|              | Throughput (Average disk read/write bandwidth rate),         |
|              | Latency (Average disk read/write latency)                    |
|              |                                                              |
+--------------+--------------------------------------------------------------+
|test purpose  | The purpose of TC005 is to evaluate the IaaS storage         |
|              | performance with regards to IOPS, throughput and latency.    |
|              |                                                              |
+--------------+--------------------------------------------------------------+
|test          | fio test is invoked in a host VM on a compute blade, a job   |
|description   | file as well as parameters are passed to fio and fio will    |
|              | start doing what the job file tells it to do.                |
|              |                                                              |
+--------------+--------------------------------------------------------------+
|configuration | file: opnfv_yardstick_tc005.yaml                             |
|              |                                                              |
|              | IO types is set to read, write, randwrite, randread, rw.     |
|              | IO block size is set to 4KB, 64KB, 1024KB.                   |
|              | fio is run for each IO type and IO block size scheme,        |
|              | each iteration runs for 30 seconds (10 for ramp time, 20 for |
|              | runtime).                                                    |
|              |                                                              |
|              | For SLA, minimum read/write iops is set to 100,              |
|              | minimum read/write throughput is set to 400 KB/s,            |
|              | and maximum read/write latency is set to 20000 usec.         |
|              |                                                              |
+--------------+--------------------------------------------------------------+
|applicability | This test case can be configured with different:             |
|              |                                                              |
|              | * IO types;                                                  |
|              | * IO block size;                                             |
|              | * IO depth;                                                  |
|              | * ramp time;                                                 |
|              | * test duration.                                             |
|              |                                                              |
|              | Default values exist.                                        |
|              |                                                              |
|              | SLA is optional. The SLA in this test case serves as an      |
|              | example. Considerably higher throughput and lower latency    |
|              | are expected. However, to cover most configurations, both    |
|              | baremetal and fully virtualized ones, this value should be   |
|              | possible to achieve and acceptable for black box testing.    |
|              | Many heavy IO applications start to suffer badly if the      |
|              | read/write bandwidths are lower than this.                   |
|              |                                                              |
+--------------+--------------------------------------------------------------+
|pre-test      | The test case image needs to be installed into Glance        |
|conditions    | with fio included in it.                                     |
|              |                                                              |
|              | No POD specific requirements have been identified.           |
|              |                                                              |
+--------------+--------------------------------------------------------------+
|test sequence | description and expected result                              |
|              |                                                              |
+--------------+--------------------------------------------------------------+
|step 1        | A host VM with fio installed is booted.                      |
|              |                                                              |
+--------------+--------------------------------------------------------------+
|step 2        | Yardstick is connected with the host VM by using ssh.        |
|              | 'fio_benchmark' bash script is copied from Jump Host to      |
|              | the host VM via the ssh tunnel.                              |
|              |                                                              |
+--------------+--------------------------------------------------------------+
|step 3        | 'fio_benchmark' script is invoked. Simulated IO operations   |
|              | are started. IOPS, disk read/write bandwidth and latency are |
|              | recorded and checked against the SLA. Logs are produced and  |
|              | stored.                                                      |
|              |                                                              |
|              | Result: Logs are stored.                                     |
|              |                                                              |
+--------------+--------------------------------------------------------------+
|step 4        | The host VM is deleted.                                      |
|              |                                                              |
+--------------+--------------------------------------------------------------+
|test verdict  | Fails only if SLA is not passed, or if there is a test case  |
|              | execution problem.                                           |
|              |                                                              |
+--------------+--------------------------------------------------------------+

TODO

How can I contribute to Yardstick?
----------------------------------

If you are already a contributor to any OPNFV project, you can contribute to Yardstick. If you are
totally new to OPNFV, you must first create your Linux Foundation account, then contact us so that
we can declare you in the repository database.

We distinguish 2 levels of contributors:

 * the standard contributor can push patches and vote +1/0/-1 on any Yardstick patch
 * the committer can vote -2/-1/0/+1/+2 and merge

Yardstick committers are promoted by the Yardstick contributors.

Gerrit & JIRA introduction
^^^^^^^^^^^^^^^^^^^^^^^^^^

.. _Gerrit: https://www.gerritcodereview.com/
.. _`OPNFV Gerrit`: http://gerrit.opnfv.org/
.. _link: https://identity.linuxfoundation.org/
.. _JIRA: https://jira.opnfv.org/secure/Dashboard.jspa

OPNFV uses Gerrit_ for web-based code review and repository management for the Git version control
system. You can access `OPNFV Gerrit`_. Please note that you need a Linux Foundation ID in order
to use OPNFV Gerrit. You can get one from this link_.

OPNFV uses JIRA_ for issue management. An important principle of change management is two-way
traceability between issue management (i.e. JIRA_) and the code repository (via Gerrit_). In this
way, individual commits can be traced to JIRA issues, and we also know which commits were used to
resolve a JIRA issue.

If you want to contribute to Yardstick, you can pick an issue from Yardstick's JIRA dashboard or
create your own issue and submit it to JIRA.

Install Git and Git-review
^^^^^^^^^^^^^^^^^^^^^^^^^^

Installing and configuring Git and git-review is necessary in order to submit code to Gerrit. The
`Getting to the code <https://wiki.opnfv.org/display/DEV/Developer+Getting+Started>`_ page
provides some help for that.

Verify your patch locally before submitting
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Once you have finished a patch, you can submit it to Gerrit for code review. A developer sending a
new patch to Gerrit triggers the patch verify job on Jenkins CI. The Yardstick patch verify job
includes a Python flake8 check, unit tests and a code coverage test. Before you submit your patch,
it is recommended to run the patch verification in your local environment first.

Open a terminal window and set the project's directory as the working directory using the ``cd``
command. Assume that ``YARDSTICK_REPO_DIR`` is the path to the Yardstick project folder on your
computer::

  cd $YARDSTICK_REPO_DIR

Verify your patch::

  tox

It is used in CI but also by the CLI.

Submit the code with Git
^^^^^^^^^^^^^^^^^^^^^^^^

Tell Git which files you would like to take into account for the next commit. This is called
'staging' the files, by placing them into the staging area, using the ``git add`` command (or the
synonym ``git stage`` command)::

  git add $YARDSTICK_REPO_DIR/samples/sample.yaml

Alternatively, you can choose to stage all files that have been modified (that is, the files you
have worked on) since the last time you generated a commit, by using the ``-A`` argument::

  git add -A

Git won't let you push (upload) any code to Gerrit if you haven't pulled the latest changes first.
So the next step is to pull (download) the latest changes made to the project by other
collaborators using the ``pull`` command::

  git pull

Now that you have the latest version of the project and you have staged the files you wish to
push, it is time to actually commit your work to your local Git repository::

  git commit --signoff -m "Title of change

  Text of change that describes in high level what was done. There is a lot of
  documentation in code so you do not need to repeat it here.

  JIRA: YARDSTICK-XXX"

.. _`this document`: http://chris.beams.io/posts/git-commit/

The message that is required for the commit should follow a specific set of rules. This practice
standardizes the description messages attached to the commits and, eventually, makes navigating
among them easier. `This document`_ is very clear and useful to get started with that.

Push the code to Gerrit for review
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Now that the code has been committed into your local Git repository, the next step is to push it
online to Gerrit for it to be reviewed. The command we will use is ``git review``::

  git review

This will automatically push your local commit to Gerrit. You can add Yardstick committers and
contributors to review your code.

.. image:: images/review.PNG
   :width: 800px
   :alt: Gerrit for code review

You can find Yardstick people info `here <https://wiki.opnfv.org/display/yardstick/People>`_.

Modify the code under review in Gerrit
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

While the code is being reviewed in Gerrit, you may need to edit it to make some changes and then
send it back for review. The following steps go through the procedure.

Once you have modified/edited your code files under your IDE, you have to stage them. The 'status'
command is very helpful at this point as it provides an overview of Git's current state::

  git status

The output of the command lists the files that have been modified after the latest commit. You can
now stage the files that have been modified as part of the Gerrit code review
edition/modification/improvement using the ``git add`` command.

It is now time to commit the newly modified files, but the objective here is not to create a new
commit; we simply want to inject the new changes into the previous commit. You can achieve that
with the ``--amend`` option on the ``git commit`` command::

  git commit --amend

If the commit was successful, the ``git status`` command should not return the updated files as
about to be committed. The final step consists in pushing the newly modified commit to Gerrit::

  git review

Plugins
=======

For information about Yardstick plugins, refer to the chapter **Installing a plug-in into
Yardstick** in the `user guide`_.