 docs/devguide/index.rst                                 | 423
 docs/userguide/index.rst                                | 314
 testcases/VIM/OpenStack/CI/libraries/run_rally-cert.py  |  93
 testcases/features/doctor.py                            |  30
 4 files changed, 513 insertions(+), 347 deletions(-)
diff --git a/docs/devguide/index.rst b/docs/devguide/index.rst
index bdf0e5b26..7dd5cc790 100644
--- a/docs/devguide/index.rst
+++ b/docs/devguide/index.rst
@@ -11,7 +11,8 @@ OPNFV FUNCTEST developer guide
Introduction
============
-This document describes how feature projects aiming to run functional tests can be integrated into FuncTest framework.
+This document describes how feature projects aiming to run functional tests can
+be integrated into the Functest framework.
================================
@@ -46,16 +47,25 @@ The Internal test cases in Brahmaputra are:
* vIMS
* Rally
+The external test cases are:
+
+ * Promise
+ * Doctor
+ * Onos
+ * BGPVPN
+
see `[2]`_ for details.
-Functest can also be considered as a framework that may include external OPNFV projects.
+Functest can also be considered as a framework that may include external OPNFV
+projects.
This framework will ease the integration of the feature test suite to the CI.
==================
How Functest works
==================
-The installation and the launch of the Functest docker image is described in `[1]`_.
+The installation and the launch of the Functest docker image is described in
+`[1]`_.
The Functest docker directories are::
@@ -80,46 +90,50 @@ The Functest docker directories are::
::
- +--------------+-------------------+---------------------------------------------------------+
- | Directory | Subdirectory | Comments |
- +--------------+-------------------+---------------------------------------------------------+
- | | <project>/conf | All the useful configuration file(s) for <project> |
- | | | the openstack creds are put there for CI |
- | | | It is recommended to push it there when passing the |
- | | | credentials to container through the -v option |
- | +-------------------+---------------------------------------------------------+
- | opnfv | <project>/data | Usefull data, images for <projects> |
- | | | By default we put an image cirros-0.3.4-x86_64-disk.img |
- | | | This image can be used by any projects |
- | +-------------------+---------------------------------------------------------+
- | | <project>/results | Local result directory of project <project> |
- +--------------+-------------------+---------------------------------------------------------+
- | | bgpvpn | |
- | +-------------------+ +
- | repos | doctor | |
- | +-------------------+ +
- | | functest | |
- | +-------------------+ +
- | | odl_integration | |
- | +-------------------+ Clone of the useful repositories +
- | | onos | These repositories may include: |
- | +-------------------+ - tooling +
- | | promise | - scenario |
- | +-------------------+ - scripts +
- | | rally | |
- | +-------------------+ +
- | | releng | |
- | +-------------------+ +
- | | vims-test | |
- | +-------------------+ +
- | | <your project> | |
- +--------------+-------------------+---------------------------------------------------------+
+ +-----------+-------------------+---------------------------------------------+
+ | Directory | Subdirectory | Comments |
+ +-----------+-------------------+---------------------------------------------+
+ | | <project>/conf | All the useful configuration file(s) for |
+ | | | <project>. The OpenStack creds are put there|
+ | | | for CI. |
+ | | | It is recommended to push them there when |
+ | | | passing the credentials to the container |
+ | | | through the -v option |
+ | +-------------------+---------------------------------------------+
+ | opnfv | <project>/data | Useful data, images for <projects> |
+ | | | By default we put a cirros image: |
+ | | | cirros-0.3.4-x86_64-disk.img |
+ | | | This image can be used by any project |
+ | +-------------------+---------------------------------------------+
+ | | <project>/results | Local result directory of project <project> |
+ +-----------+-------------------+---------------------------------------------+
+ | | bgpvpn | |
+ | +-------------------+ +
+ | repos | doctor | |
+ | +-------------------+ +
+ | | functest | |
+ | +-------------------+ +
+ | | odl_integration | |
+ | +-------------------+ Clone of the useful repositories +
+ | | onos | These repositories may include: |
+ | +-------------------+ - tooling +
+ | | promise | - scenario |
+ | +-------------------+ - scripts +
+ | | rally | |
+ | +-------------------+ +
+ | | releng | |
+ | +-------------------+ +
+ | | vims-test | |
+ | +-------------------+ +
+ | | <your project> | |
+ +-----------+-------------------+---------------------------------------------+
Before running the test suite, you must prepare the environment by running::
$ /home/opnfv/repos/functest/docker/prepare_env.sh
-By running prepare_env.sh, you build the test environement required by the tests including the retrieval and sourcing of OpenStack credentials.
+By running prepare_env.sh, you build the test environment required by the tests
+including the retrieval and sourcing of OpenStack credentials.
This is an example of the env variables we have in the docker container:
* HOSTNAME=373f77816eb0
@@ -134,7 +148,8 @@ This is an example of the env variables we have in the docker container:
* creds=/home/opnfv/functest/conf/openstack.creds
* _=/usr/bin/env
-The prepare_env.sh will source the credentials, so once run you should have access to the following env variables::
+The prepare_env.sh script will source the credentials, so once it has run you
+should have access to the following env variables::
root@373f77816eb0:~# env|grep OS_
OS_REGION_NAME=RegionOne
@@ -173,7 +188,8 @@ The files of the Functest repository you must modify to integrate Functest are:
Dockerfile
==========
-This file lists the repositories to be cloned in the Functest container. The repositories can be internal or external::
+This file lists the repositories to be cloned in the Functest container.
+The repositories can be internal or external::
RUN git clone https://gerrit.opnfv.org/gerrit/<your project> ${repos_dir}/<your project>
@@ -181,7 +197,8 @@ This file lists the repositories to be cloned in the Functest container. The rep
Common.sh
==========
-This file can be used to declare the branch and commit variables of your projects::
+This file can be used to declare the branch and commit variables of your
+projects::
<YOUR_PROJECT>_BRANCH=$(cat $config_file | grep -w <your project>_branch | awk 'END {print $NF}')
<YOUR_PROJECT>_COMMIT=$(cat $config_file | grep -w <your project>_commit | awk 'END {print $NF}')
@@ -193,7 +210,8 @@ This file can be used to declare the branch and commit variables of your projec
requirements.pip
================
-This file can be used to preloaded all the needed Python libraries (and avoid that each project does it)
+This file can be used to preload all the needed Python libraries (and avoid
+each project doing it).
The current libraries used in Functest container are::
# cat requirements.pip
@@ -213,17 +231,22 @@ The current libraries used in Functest container are::
robotframework-sshlibrary==2.1.1
configObj==5.0.6
Flask==0.10.1
+ xmltodict==0.9.2
+ scp==0.10.2
+ paramiko==1.16.0
prepare_env.sh
==============
-This script can be adapted if you need to set up a specific environment before running the tests.
+This script can be adapted if you need to set up a specific environment before
+running the tests.
run_tests.sh
============
-This script is used to run the tests. You must thus complete the cases with you own project::
+This script is used to run the tests. You must thus complete the cases with your
+own project::
;;
"promise")
@@ -248,7 +271,8 @@ And do not forget to update also the help line::
config_functest.yaml
====================
-This file is the main configuration file of Functest. You must add the references to your project::
+This file is the main configuration file of Functest. You must add the
+references to your project::
general:
directories:
@@ -259,12 +283,317 @@ This file is the main configuration file of Functest. You must add the reference
<your project>_commit: latest
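+
+For reference, a feature script can read its settings back from this file. The
+snippet below is only a sketch: the path and keys are illustrative, except the
+test_db_url lookup, which follows the same pattern as the doctor test case::
+
+    import yaml
+
+    # path of the configuration file inside the Functest container (illustrative)
+    with open("/home/opnfv/repos/functest/testcases/config_functest.yaml") as f:
+        functest_yaml = yaml.safe_load(f)
+
+    # same lookup pattern as doctor.py to retrieve the result DB URL
+    test_db_url = functest_yaml.get("results").get("test_db_url")
+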
+====================
+Test Dashboard & API
+====================
+
+The OPNFV testing group created a test collection database to collect the test
+results from CI.
+Any test project running on any lab integrated in CI can push the results to
+this database.
+This database can be used afterwards to see the evolution of the tests and
+compare the results across installers, scenarios or labs.
+
+You can find more information about the dashboard on the Testing Dashboard wiki
+page `[3]`_.
+
+
+Overall Architecture
+--------------------
+
+The Test result management in Brahmaputra can be summarized as follows::
+
+   +-------------+    +-------------+    +-------------+
+   |             |    |             |    |             |
+   |    Test     |    |    Test     |    |    Test     |
+   |  Project #1 |    |  Project #2 |    |  Project #N |
+   |             |    |             |    |             |
+   +-------------+    +-------------+    +-------------+
+          |                  |                  |
+          V                  V                  V
+        +-----------------------------------------+
+        |                                         |
+        |         Test Rest API front end         |
+        |  http://testresults.opnfv.org/testapi   |
+        |                                         |
+        +-----------------------------------------+
+            A                |
+            |                V
+            |        +-------------------------+
+            |        |                         |
+            |        |     Test Results DB     |
+            |        |         Mongo DB        |
+            |        |                         |
+            |        +-------------------------+
+            |
+            |
+      +----------------------+
+      |                      |
+      |    test Dashboard    |
+      |                      |
+      +----------------------+
+
+The Test dashboard URL is: TODO LF
+An alternate Test dashboard has been set up to provide a view per installer and
+per scenario for Brahmaputra release::
+
+ http://testresults.opnfv.org/proto/brahma/
+
+This dashboard consumes the results retrieved through the Test API.
+
+Test API description
+--------------------
+
+The Test API is used to declare pods, projects, test cases and test results. An
+additional dashboard method has been added to post-process the raw results. The
+data model is very basic; 4 objects are defined:
+
+ * Pods
+ * Test projects
+ * Test cases
+ * Test results
+
+Pods::
+
+    {
+        "id": <ID>,
+        "details": <URL description of the POD>,
+        "creation_date": "YYYY-MM-DD HH:MM:SS",
+        "name": <The POD Name>,
+        "mode": <metal or virtual>
+    },
+
+Test project::
+
+    {
+        "id": <ID>,
+        "name": <Name of the Project>,
+        "creation_date": "YYYY-MM-DD HH:MM:SS",
+        "description": <Short description>
+    },
+
+Test case::
+
+    {
+        "id": <ID>,
+        "name": <Name of the test case>,
+        "creation_date": "YYYY-MM-DD HH:MM:SS",
+        "description": <Short description>,
+        "url": <URL for longer description>
+    },
+
+Test results::
+
+    {
+        "_id": <ID>,
+        "project_name": <Reference to project>,
+        "pod_name": <Reference to POD where the test was executed>,
+        "version": <Scenario on which the test was executed>,
+        "installer": <Installer Apex or Compass or Fuel or Joid>,
+        "description": <Short description>,
+        "creation_date": "YYYY-MM-DD HH:MM:SS",
+        "case_name": <Reference to the test case>,
+        "details": {
+            <- the results to be put here ->
+        }
+    }
+
+
+For Brahmaputra, the database contains:
+
+ * 16 pods
+ * 18 projects
+ * 101 test cases
+
+The projects and the test cases were frozen in December, but not all of them
+were ready for Brahmaputra.
+
+
+
+The API can be described as follows:
+
+Version:
+
+ +--------+--------------------------+-----------------------------------------+
+ | Method | Path | Description |
+ +========+==========================+=========================================+
+ | GET | /version | Get API version |
+ +--------+--------------------------+-----------------------------------------+
+
+
+Pods:
+
+ +--------+--------------------------+-----------------------------------------+
+ | Method | Path | Description |
+ +========+==========================+=========================================+
+ | GET | /pods | Get the list of declared Labs (PODs) |
+ +--------+--------------------------+-----------------------------------------+
+ | POST | /pods | Declare a new POD |
+ | | | Content-Type: application/json |
+ | | | { |
+ | | | "name": "pod_foo", |
+ | | | "creation_date": "YYYY-MM-DD HH:MM:SS"|
+ | | | } |
+ +--------+--------------------------+-----------------------------------------+
+
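+A POD can for instance be declared with a few lines of Python. This is only a
+sketch based on the table above; the pod name and date are illustrative::
+
+    import json
+    import requests
+
+    url = "http://testresults.opnfv.org/testapi/pods"
+    headers = {'Content-Type': 'application/json'}
+    body = {"name": "pod_foo", "creation_date": "2016-01-01 00:00:00"}
+
+    r = requests.post(url, data=json.dumps(body), headers=headers)
+    print r.status_code
+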
+Projects:
+
+ +--------+--------------------------+-----------------------------------------+
+ | Method | Path | Description |
+ +========+==========================+=========================================+
+ | GET | /test_projects | Get the list of test projects |
+ +--------+--------------------------+-----------------------------------------+
+ | GET |/test_projects/{project} | Get details on {project} |
+ | | | |
+ +--------+--------------------------+-----------------------------------------+
+ | POST | /test_projects | Add a new test project |
+ | | | Content-Type: application/json |
+ | | | { |
+ | | | "name": "project_foo", |
+ | | | "description": "whatever you want" |
+ | | | } |
+ +--------+--------------------------+-----------------------------------------+
+ | PUT | /test_projects/{project} | Update a test project |
+ | | | |
+ | | | Content-Type: application/json |
+ | | | { |
+ | | | <the field(s) you want to modify> |
+ | | | } |
+ +--------+--------------------------+-----------------------------------------+
+ | DELETE | /test_projects/{project} | Delete a test project |
+ +--------+--------------------------+-----------------------------------------+
+
+
+Test cases:
+
+ +--------+--------------------------+-----------------------------------------+
+ | Method | Path | Description |
+ +========+==========================+=========================================+
+ | GET | /test_projects/{project}/| Get the list of test cases of {project} |
+ | | cases | |
+ +--------+--------------------------+-----------------------------------------+
+ | POST | /test_projects/{project}/| Add a new test case to {project} |
+ | | cases | Content-Type: application/json |
+ | | | { |
+ | | | "name": "case_foo", |
+ | | | "description": "whatever you want" |
+ | | | "creation_date": "YYYY-MM-DD HH:MM:SS"|
+ | | | "url": "whatever you want" |
+ | | | } |
+ +--------+--------------------------+-----------------------------------------+
+ | PUT | /test_projects/{project}?| Modify a test case of {project} |
+ | | case_name={case} | |
+ | | | Content-Type: application/json |
+ | | | { |
+ | | | <the field(s) you want to modify> |
+ | | | } |
+ +--------+--------------------------+-----------------------------------------+
+ | DELETE | /test_projects/{project}/| Delete a test case |
+ | | case_name={case} | |
+ +--------+--------------------------+-----------------------------------------+
+
+Test Results:
+
+ +--------+--------------------------+-----------------------------------------+
+ | Method | Path | Description |
+ +========+==========================+=========================================+
+ | GET |/results/project={project}| Get the test results of {project} |
+ +--------+--------------------------+-----------------------------------------+
+ | GET |/results/case={case} | Get the test results of {case} |
+ +--------+--------------------------+-----------------------------------------+
+ | GET |/results?pod={pod} | Get the results on pod {pod} |
+ +--------+--------------------------+-----------------------------------------+
+ | GET |/results?installer={inst} | Get the test results of installer {inst}|
+ +--------+--------------------------+-----------------------------------------+
+ | GET |/results?version={version}| Get the test results of scenario |
+ | | | {version}. Initially the version param |
+ | | | reflected the git version; Functest now |
+ | | | uses the scenario instead |
+ +--------+--------------------------+-----------------------------------------+
+ | GET |/results?project={project}| Get all the results of the test case |
+ | |&case={case} | {case} of the project {project} with |
+ | |&version={scenario} | version {scenario} installed by |
+ | |&installer={installer} | {installer} on POD {pod} stored since |
+ | |&pod={pod} | {days} days |
+ | | | {project_name} and {case_name} are |
+ | |&period={days} | mandatory, the other parameters are |
+ | | | optional. |
+ +--------+--------------------------+-----------------------------------------+
+ | POST | /results | Add a new test result |
+ | | | Content-Type: application/json |
+ | | | { |
+ | | | "project_name": "project_foo", |
+ | | | "case_name": "case_foo", |
+ | | | "pod_name": "pod_foo", |
+ | | | "installer": "installer_foo", |
+ | | | "version": "scenario_foo", |
+ | | | "details": <your results> |
+ | | | } |
+ +--------+--------------------------+-----------------------------------------+
+
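+As an illustration, the results of a given test case can be retrieved with a
+simple GET request (a sketch; the project, case and period values below are
+only examples)::
+
+    import requests
+
+    url = "http://testresults.opnfv.org/testapi/results"
+    params = {"project": "functest", "case": "vPing", "period": 30}
+
+    r = requests.get(url, params=params)
+    print r.json()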
+
+Dashboard:
+
+ +--------+--------------------------+-----------------------------------------+
+ | Method | Path | Description |
+ +========+==========================+=========================================+
+ | GET |/dashboard? | Get all the dashboard ready results of |
+ | |&project={project} | {case} of the project {project} |
+ | |&case={case} | version {scenario} installed by |
+ | |&version={scenario} | {installer} on POD {pod} stored since |
+ | |&installer={installer} | {days} days |
+ | |&pod={pod} | |
+ | |&period={days} | {project_name} and {case_name} are |
+ | | | mandatory, the other parameters are |
+ | | | optional. |
+ +--------+--------------------------+-----------------------------------------+
+
+
+Results served by the dashboard method are post-processed from the raw results.
+Please note that dashboard results are not stored. Only raw results are stored.
+
+
+===============================================
+How to push your results into the Test Database
+===============================================
+
+The test database is used to collect test results. By default it is enabled only
+in Continuous Integration.
+The architecture and the associated API are described in `[2]`_.
+If you want to push your results from CI, you just have to use the API at the
+end of your script.
+
+You can also reuse the Python function defined in functest_utils.py::
+
+    def push_results_to_db(db_url, case_name, logger, pod_name, version, payload):
+        """
+        POST results to the Result target DB
+        """
+        url = db_url + "/results"
+        installer = get_installer_type(logger)
+        params = {"project_name": "functest", "case_name": case_name,
+                  "pod_name": pod_name, "installer": installer,
+                  "version": version, "details": payload}
+
+        headers = {'Content-Type': 'application/json'}
+        try:
+            r = requests.post(url, data=json.dumps(params), headers=headers)
+            if logger:
+                logger.debug(r)
+            return True
+        except Exception, e:
+            print "Error [push_results_to_db('%s', '%s', '%s', '%s', '%s')]:" \
+                % (db_url, case_name, pod_name, version, payload), e
+            return False
+
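+A typical call at the end of a test script, similar to what the doctor test
+case does, then looks like (the variables are assumed to be set earlier in
+your script)::
+
+    details = {
+        'timestart': start_time_ts,
+        'duration': duration,
+        'status': test_status,
+    }
+    push_results_to_db(TEST_DB_URL, '<your case name>', logger,
+                       pod_name, scenario, details)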
+
==========
References
==========
-.. _`[1]`: Functest configuration guide URL
-.. _`[2]`: functest user guide URL
+.. _`[1]`: http://artifacts.opnfv.org/functest/docs/configguide/index.html
+.. _`[2]`: http://artifacts.opnfv.org/functest/docs/userguide/index.html
+.. _`[3]`: https://wiki.opnfv.org/opnfv_test_dashboard
OPNFV main site: opnfvmain_.
diff --git a/docs/userguide/index.rst b/docs/userguide/index.rst
index d52f0210c..39d795261 100644
--- a/docs/userguide/index.rst
+++ b/docs/userguide/index.rst
@@ -324,9 +324,6 @@ The available 33 test cases can be grouped into 7 test suites:
#. Cleanup test allocations: Destroys all allocations in OpenStack.
-The test results are pushed into the LF test DB:
- * Duration of the Promise test case
- * Number of tests / failures
The specific parameters for Promise can be found in config_functest.yaml and
include::
@@ -397,9 +394,6 @@ vPing_userdata results are displayed in the console::
2016-01-06 16:07:12,661 - vPing- INFO - Deleting network 'vping-net'...
2016-01-06 16:07:14,748 - vPing- INFO - vPing OK
-A json file is produced and pushed into the test result database.
-
-
Tempest
^^^^^^^
@@ -436,10 +430,6 @@ The Tempest results are displayed in the console::
In order to check all the available test cases related debug information, please
inspect tempest.log file stored into related Rally deployment folder.
-The Tempest log is also automatically pushed to OPNFV artifact server in
-Continuous Integration.
-The Tempest results are also pushed to the Test Database from Continuous
-Integration.
Rally
@@ -479,8 +469,6 @@ summary::
2016-02-04 12:50:18,382 - run_rally - INFO - Test scenario: "requests" OK.
-The raw results are pushed into the Test Database from Continuous Integration.
-
Controllers
-----------
@@ -703,279 +691,11 @@ The results can be observed in the console::
**********************************
-
-Functest in test Dashboard
-==========================
-
-The OPNFV testing group created a test collection database to collect the test
-results from CI.
-Any test project running on any lab integrated in CI can push the results to
-this database.
-This database can be used afterwards to see the evolution of the tests and
-compare the results versus the installers, the scenario or the labs.
-
-You can find more information about the dashboard from Testing Dashboard wiki
-page `[6]`_.
-
-
-Overall Architecture
---------------------
-
-The Test result management in Brahmaputra can be summarized as follows::
-
- +-------------+ +-------------+ +-------------+
- | | | | | |
- | Test | | Test | | Test |
- | Project #1 | | Project #2 | | Project #N |
- | | | | | |
- +-------------+ +-------------+ +-------------+
- | | |
- V V V
- +-----------------------------------------+
- | |
- | Test Rest API front end |
- | http://testresults.opnfv.org/testapi |
- | |
- +-----------------------------------------+
- A |
- | V
- | +-------------------------+
- | | |
- | | Test Results DB |
- | | Mongo DB |
- | | |
- | +-------------------------+
- |
- |
- +----------------------+
- | |
- | test Dashboard |
- | |
- +----------------------+
-
-The Test dashboard URL is: TODO LF
-A proto Test dashboard has been realized: http://testresults.opnfv.org/proto/
-
-Test API description
---------------------
-
-The Test API is used to declare pods, projects, test cases and test results. An
-additional method dashboard has been added to post-process the raw results. The
-data model is very basic, 4 objects are created:
-
- * Pods
- * Test projects
- * Test cases
- * Test results
-
-Pods::
-
- {
- "id": <ID>,
- "details": <URL description of the POD>,
- "creation_date": YYYY-MM-DD HH:MM:SS ,
- "name": <The POD Name>,
- "mode": <metal or virtual>
- },
-
-Test project::
-
- {
- "id": <ID>,
- "name": <Name of the Project>,
- "creation_date": "YYYY-MM-DD HH:MM:SS",
- "description": <Short description>
- },
-
-Test case::
-
- {
- "id": <ID>,
- "name":<Name of the test case>,
- "creation_date": "YYYY-MM-DD HH:MM:SS",
- "description": <short description>,
- "url":<URL for longer description>
- },
-
-Test results::
-
- {
- "_id": <ID,
- "project_name": <Reference to project>,
- "pod_name": <Reference to POD where the test was executed>,
- "version": <Scenario on which the test was executed>,
- "installer": <Installer Apex or Compass or Fuel or Joid>,
- "description": <Short description>,
- "creation_date": "YYYY-MM-DD HH:MM:SS",
- "case_name": <Reference to the test case>
- "details":{
- <- the results to be put here ->
- }
-
-
-For Brahmaputra, we got:
-
- * 16 pods
- * 18 projects
- * 101 test cases
-
-The projects and the test cases have been frozen in December.
-But all were not ready for Brahmaputra.
-
-
-
-The API can described as follows:
-
-Version:
-
- +--------+--------------------------+-----------------------------------------+
- | Method | Path | Description |
- +========+==========================+=========================================+
- | GET | /version | Get API version |
- +--------+--------------------------+-----------------------------------------+
-
-
-Pods:
-
- +--------+--------------------------+-----------------------------------------+
- | Method | Path | Description |
- +========+==========================+=========================================+
- | GET | /pods | Get the list of declared Labs (PODs) |
- +--------+--------------------------+-----------------------------------------+
- | POST | /pods | Declare a new POD |
- | | | Content-Type: application/json |
- | | | { |
- | | | "name": "pod_foo", |
- | | | "creation_date": "YYYY-MM-DD HH:MM:SS"|
- | | | } |
- +--------+--------------------------+-----------------------------------------+
-
-Projects:
-
- +--------+--------------------------+-----------------------------------------+
- | Method | Path | Description |
- +========+==========================+=========================================+
- | GET | /test_projects | Get the list of test projects |
- +--------+--------------------------+-----------------------------------------+
- | GET |/test_projects/{project} | Get details on {project} |
- | | | |
- +--------+--------------------------+-----------------------------------------+
- | POST | /test_projects | Add a new test project |
- | | | Content-Type: application/json |
- | | | { |
- | | | "name": "project_foo", |
- | | | "description": "whatever you want" |
- | | | } |
- +--------+--------------------------+-----------------------------------------+
- | PUT | /test_projects/{project} | Update a test project |
- | | | |
- | | | Content-Type: application/json |
- | | | { |
- | | | <the field(s) you want to modify> |
- | | | } |
- +--------+--------------------------+-----------------------------------------+
- | DELETE | /test_projects/{project} | Delete a test project |
- +--------+--------------------------+-----------------------------------------+
-
-
-Test cases:
-
- +--------+--------------------------+-----------------------------------------+
- | Method | Path | Description |
- +========+==========================+=========================================+
- | GET | /test_projects/{project}/| Get the list of test cases of {project} |
- | | cases | |
- +--------+--------------------------+-----------------------------------------+
- | POST | /test_projects/{project}/| Add a new test case to {project} |
- | | cases | Content-Type: application/json |
- | | | { |
- | | | "name": "case_foo", |
- | | | "description": "whatever you want" |
- | | | "creation_date": "YYYY-MM-DD HH:MM:SS"|
- | | | "url": "whatever you want" |
- | | | } |
- +--------+--------------------------+-----------------------------------------+
- | PUT | /test_projects/{project}?| Modify a test case of {project} |
- | | case_name={case} | |
- | | | Content-Type: application/json |
- | | | { |
- | | | <the field(s) you want to modify> |
- | | | } |
- +--------+--------------------------+-----------------------------------------+
- | DELETE | /test_projects/{project}/| Delete a test case |
- | | case_name={case} | |
- +--------+--------------------------+-----------------------------------------+
-
-Test Results:
-
- +--------+--------------------------+-----------------------------------------+
- | Method | Path | Description |
- +========+==========================+=========================================+
- | GET |/results/project={project}| Get the test results of {project} |
- +--------+--------------------------+-----------------------------------------+
- | GET |/results/case={case} | Get the test results of {case} |
- +--------+--------------------------+-----------------------------------------+
- | GET |/results?pod={pod} | get the results on pod {pod} |
- +--------+--------------------------+-----------------------------------------+
- | GET |/results?installer={inst} | Get the test results of installer {inst}|
- +--------+--------------------------+-----------------------------------------+
- | GET |/results?version={version}| Get the test results of scenario |
- | | | {version}. Initially the version param |
- | | | was reflecting git version, in Functest |
- | | | it was decided to move to scenario |
- +--------+--------------------------+-----------------------------------------+
- | GET |/results?project={project}| Get all the results of the test case |
- | |&case={case} | {case} of the project {project} with |
- | |&version={scenario} | version {scenario} installed by |
- | |&installer={installer} | {installer} on POD {pod} stored since |
- | |&pod={pod} | {days} days |
- | | | {project_name} and {case_name} are |
- | |&period={days} | mandatory, the other parameters are |
- | | | optional. |
- +--------+--------------------------+-----------------------------------------+
- | POST | /results | Add a new test results |
- | | | Content-Type: application/json |
- | | | { |
- | | | "project_name": "project_foo", |
- | | | "case_name": "case_foo", |
- | | | "pod_name": "pod_foo", |
- | | | "installer": "installer_foo", |
- | | | "version": "scenario_foo", |
- | | | "details": <your results> |
- | | | } |
- +--------+--------------------------+-----------------------------------------+
-
-
-Dashboard:
-
- +--------+--------------------------+-----------------------------------------+
- | Method | Path | Description |
- +========+==========================+=========================================+
- | GET |/dashboard? | Get all the dashboard ready results of |
- | |&project={project} | {case} of the project {project} |
- | |&case={case} | version {scenario} installed by |
- | |&version={scenario} | {installer} on POD {pod} stored since |
- | |&installer={installer} | {days} days |
- | |&pod={pod} | |
- | |&period={days} | {project_name} and {case_name} are |
- | | | mandatory, the other parameters are |
- | | | optional. |
- +--------+--------------------------+-----------------------------------------+
-
-
-The results with dashboard method are post-processed from raw results.
-Please note that dashboard results are not stored. Only raw results are stored.
-
-
-
Test Dashboard
---------------
-
-Based on dashboard post-porcessed results, a Test dashboard is automatically
-generated.
+==============
-TODO LF
-or http://testresults.opnfv.org/proto/
+Based on results collected in CI, a test dashboard is dynamically generated.
+The URL of this dashboard is TODO LF
Troubleshooting
@@ -1067,8 +787,32 @@ Feature
vIMS
^^^^
-
-TODO
+vIMS deployment may fail for several reasons; the most frequent ones are
+described in the following table:
+
++-----------------------------------+------------------------------------+
+| Error | Comments |
++===================================+====================================+
+| Keystone admin API not reachable | Impossible to create vIMS user and |
+| | tenant |
++-----------------------------------+------------------------------------+
+| Impossible to retrieve admin role | Impossible to create vIMS user and |
+| id | tenant |
++-----------------------------------+------------------------------------+
+| Error when uploading the image to | Impossible to deploy the VNF |
+| Glance | |
++-----------------------------------+------------------------------------+
+| Cinder quota cannot be updated | Default quotas not sufficient, they|
+| | are adapted in the script |
++-----------------------------------+------------------------------------+
+| Impossible to create a volume | VNF cannot be deployed |
++-----------------------------------+------------------------------------+
+| SSH connection issue between the | If vPing test fails, vIMS test will|
+| Test container and the VM | fail... |
++-----------------------------------+------------------------------------+
+| No Internet access from a VM | The VMs of the VNF must have |
+| | external access to the Internet |
++-----------------------------------+------------------------------------+
Promise
diff --git a/testcases/VIM/OpenStack/CI/libraries/run_rally-cert.py b/testcases/VIM/OpenStack/CI/libraries/run_rally-cert.py
index de1d11d87..293ef18e5 100755
--- a/testcases/VIM/OpenStack/CI/libraries/run_rally-cert.py
+++ b/testcases/VIM/OpenStack/CI/libraries/run_rally-cert.py
@@ -13,16 +13,17 @@
# 0.3 (19/10/2015) remove Tempest from run_rally
# and push result into test DB
#
-
-import re
-import json
-import os
import argparse
+import json
import logging
-import yaml
+import os
+import re
import requests
import subprocess
import sys
+import time
+import yaml
+
from novaclient import client as novaclient
from glanceclient import client as glanceclient
from keystoneclient.v2_0 import client as keystoneclient
@@ -123,6 +124,8 @@ GLANCE_IMAGE_PATH = functest_yaml.get("general"). \
CINDER_VOLUME_TYPE_NAME = "volume_test"
+SUMMARY = []
+
def push_results_to_db(payload):
url = TEST_DB + "/results"
@@ -198,8 +201,13 @@ def build_task_args(test_file_name):
return task_args
-def get_output(proc):
+def get_output(proc, test_name):
+ global SUMMARY
result = ""
+ nb_tests = 0
+ overall_duration = 0.0
+ success = 0.0
+
if args.verbose:
while proc.poll() is None:
line = proc.stdout.readline()
@@ -215,12 +223,26 @@ def get_output(proc):
"+-" in line or \
"|" in line:
result += line
+ if "| " in line and \
+ "| action" not in line and \
+ "| " not in line and \
+ "| total" not in line:
+ nb_tests += 1
+ percentage = ((line.split('|')[8]).strip(' ')).strip('%')
+ success += float(percentage)
+
elif "test scenario" in line:
result += "\n" + line
elif "Full duration" in line:
result += line + "\n\n"
+ overall_duration += float(line.split(': ')[1])
logger.info("\n" + result)
+ overall_duration="{:10.2f}".format(overall_duration)
+ success_avg = success / nb_tests
+ scenario_summary = {'test_name': test_name, 'overall_duration':overall_duration, \
+ 'nb_tests': nb_tests, 'success': success_avg}
+ SUMMARY.append(scenario_summary)
return result
@@ -230,7 +252,6 @@ def run_task(test_name):
# :param test_name: name for the rally test
# :return: void
#
-
logger.info('Starting test scenario "{}" ...'.format(test_name))
task_file = '{}task.yaml'.format(SCENARIOS_DIR)
@@ -251,7 +272,7 @@ def run_task(test_name):
logger.debug('running command line : {}'.format(cmd_line))
p = subprocess.Popen(cmd_line, stdout=subprocess.PIPE, stderr=RALLY_STDERR, shell=True)
- output = get_output(p)
+ output = get_output(p, test_name)
task_id = get_task_id(output)
logger.debug('task_id : {}'.format(task_id))
@@ -296,13 +317,19 @@ def run_task(test_name):
else:
logger.info('Test scenario: "{}" Failed.'.format(test_name) + "\n")
+def push_results_to_db(payload):
+ # TODO
+ pass
+
def main():
+ global SUMMARY
# configure script
if not (args.test_name in tests):
logger.error('argument not valid')
exit(-1)
+ SUMMARY = []
creds_nova = functest_utils.get_credentials("nova")
nova_client = novaclient.Client('2',**creds_nova)
creds_neutron = functest_utils.get_credentials("neutron")
@@ -361,6 +388,56 @@ def main():
print(args.test_name)
run_task(args.test_name)
+ report="\n"\
+ " \n"\
+ " Rally Summary Report\n"\
+ "+===================+============+===============+===========+\n"\
+ "| Module | Duration | nb. Test Run | Success |\n"\
+ "+===================+============+===============+===========+\n"
+
+ #for each scenario we draw a row for the table
+ total_duration = 0.0
+ total_nb_tests = 0
+ total_success = 0.0
+ for s in SUMMARY:
+ name = "{0:<17}".format(s['test_name'])
+ duration = float(s['overall_duration'])
+ total_duration += duration
+ duration = time.strftime("%M:%S", time.gmtime(duration))
+ duration = "{0:<10}".format(duration)
+ nb_tests = "{0:<13}".format(s['nb_tests'])
+ total_nb_tests += int(s['nb_tests'])
+ success = "{0:<10}".format(str(s['success'])+'%')
+ total_success += float(s['success'])
+ report += ""\
+ "| " + name + " | " + duration + " | " + nb_tests + " | " + success + "|\n"\
+ "+-------------------+------------+---------------+-----------+\n"
+
+
+
+ total_duration_str = time.strftime("%H:%M:%S", time.gmtime(total_duration))
+ total_duration_str2 = "{0:<10}".format(total_duration_str)
+ total_nb_tests_str = "{0:<13}".format(total_nb_tests)
+ total_success = total_success / len(SUMMARY)
+ total_success_str = "{0:<10}".format(str(total_success)+'%')
+ report += "+===================+============+===============+===========+\n"
+ report += "| TOTAL: | " + total_duration_str2 + " | " + \
+ total_nb_tests_str + " | " + total_success_str + "|\n"
+ report += "+===================+============+===============+===========+\n"
+
+ logger.info("\n"+report)
+
+
+ # Generate json results for DB
+ #json_results = {"timestart": time_start, "duration": total_duration,
+ # "tests": int(total_nb_tests), "success": int(total_success)}
+ #logger.info("Results: "+str(json_results))
+
+ #if args.report:
+ # logger.debug("Pushing result into DB...")
+ # push_results_to_db(json_results)
+
+
logger.debug("Deleting image '%s' with ID '%s'..." \
% (GLANCE_IMAGE_NAME, image_id))
if not functest_utils.delete_glance_image(nova_client, image_id):
diff --git a/testcases/features/doctor.py b/testcases/features/doctor.py
index a68c31cd0..8eb85a808 100644
--- a/testcases/features/doctor.py
+++ b/testcases/features/doctor.py
@@ -14,9 +14,10 @@
#
#
+import logging
import os
-import time
import sys
+import time
import yaml
@@ -31,18 +32,28 @@ TEST_DB_URL = functest_yaml.get('results').get('test_db_url')
sys.path.append('%s/testcases' % FUNCTEST_REPO)
import functest_utils
+logger = logging.getLogger('doctor')
+logger.setLevel(logging.DEBUG)
+ch = logging.StreamHandler()
+ch.setLevel(logging.DEBUG)
+formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
+ch.setFormatter(formatter)
+logger.addHandler(ch)
+
def main():
cmd = 'cd %s/tests && ./run.sh' % DOCTOR_REPO
start_time_ts = time.time()
- ret = functest_utils.execute_command(cmd, exit_on_error=False)
+ ret = functest_utils.execute_command(cmd, logger, exit_on_error=False)
end_time_ts = time.time()
duration = round(end_time_ts - start_time_ts, 1)
if ret:
+ logger.info("doctor OK")
test_status = 'OK'
else:
+ logger.info("doctor FAILED")
test_status = 'NOK'
details = {
@@ -50,13 +61,18 @@ def main():
'duration': duration,
'status': test_status,
}
- pod_name = functest_utils.get_pod_name()
- git_version = functest_utils.get_git_branch(DOCTOR_REPO)
+ pod_name = functest_utils.get_pod_name(logger)
+ scenario = functest_utils.get_scenario(logger)
+ logger.info("Pushing result: TEST_DB_URL=%(db)s pod_name=%(pod)s "
+ "scenario=%(s)s details=%(d)s" % {
+ 'db': TEST_DB_URL,
+ 'pod': pod_name,
+ 's': scenario,
+ 'd': details,
+ })
functest_utils.push_results_to_db(TEST_DB_URL,
'doctor-notification',
- None,
- pod_name,
- git_version,
+ logger, pod_name, scenario,
details)