-rw-r--r--  docs/development/overview/functest_scenario/doctor-scenario-in-functest.rst    | 171
-rw-r--r--  docs/development/overview/functest_scenario/images/Fault-management-design.png | bin 0 -> 237110 bytes
-rw-r--r--  docs/development/overview/functest_scenario/images/Maintenance-design.png      | bin 0 -> 316640 bytes
-rw-r--r--  docs/development/overview/functest_scenario/images/Maintenance-workflow.png    | bin 0 -> 81286 bytes
-rwxr-xr-x  docs/development/overview/functest_scenario/images/figure-p1.png               | bin 60756 -> 0 bytes
-rw-r--r--  docs/release/release-notes/index.rst                                           | 2
-rw-r--r--  docs/release/release-notes/release-notes.rst                                   | 294
-rw-r--r--  docs/release/release-notes/releasenotes_fraser.rst (renamed from docs/release/release-notes/releasenotes.rst) | 0
-rw-r--r--  doctor_tests/inspector/congress.py                                             | 19
-rw-r--r--  doctor_tests/scenario/fault_management.py                                      | 4
10 files changed, 461 insertions, 29 deletions
diff --git a/docs/development/overview/functest_scenario/doctor-scenario-in-functest.rst b/docs/development/overview/functest_scenario/doctor-scenario-in-functest.rst
index b3d73d5c..9f92b5bf 100644
--- a/docs/development/overview/functest_scenario/doctor-scenario-in-functest.rst
+++ b/docs/development/overview/functest_scenario/doctor-scenario-in-functest.rst
@@ -6,7 +6,7 @@ Platform overview
 """""""""""""""""
 
-Doctor platform provides these features in `Danube Release <https://wiki.opnfv.org/display/SWREL/Danube>`_:
+The Doctor platform provides these features since the `Danube Release <https://wiki.opnfv.org/display/SWREL/Danube>`_:
 
 * Immediate Notification
 * Consistent resource state awareness for compute host down
 * Valid compute host status given to VM owner
@@ -15,6 +15,8 @@ Doctor platform provides these features in `Danube Release <https://wiki.opnfv.o
 These features enable high availability of Network Services on top of the
 virtualized infrastructure. Immediate notification allows VNF managers (VNFM)
 to process recovery actions promptly once a failure has occurred.
+The same framework can also be utilized to make the VNFM aware of
+infrastructure maintenance.
 
 Consistency of resource state is necessary to execute recovery actions
 properly in the VIM.
@@ -26,18 +28,20 @@ fault.
 
 The Doctor platform consists of the following components:
 
 * OpenStack Compute (Nova)
+* OpenStack Networking (Neutron)
 * OpenStack Telemetry (Ceilometer)
-* OpenStack Alarming (Aodh)
-* Doctor Inspector
-* Doctor Monitor
+* OpenStack Alarming (AODH)
+* Doctor Sample Inspector, OpenStack Congress or OpenStack Vitrage
+* Doctor Sample Monitor or any monitor supported by Congress or Vitrage
 
 .. note::
-    Doctor Inspector and Monitor are sample implementations for reference.
+    The Doctor Sample Monitor is used in Doctor testing. However, a real
+    implementation such as Vitrage supports several other monitors.
 
 You can see an overview of the Doctor platform and how components interact in
 :numref:`figure-p1`.
 
-.. figure:: ./images/figure-p1.png
+.. figure:: ./images/Fault-management-design.png
    :name: figure-p1
    :width: 100%
 
@@ -47,8 +51,19 @@ Detailed information on the Doctor architecture can be found in the Doctor
 requirements documentation:
 http://artifacts.opnfv.org/doctor/docs/requirements/05-implementation.html
 
-Use case
-""""""""
+Running test cases
+""""""""""""""""""
+
+Functest will call "doctor_tests/main.py" in Doctor to run the test job.
+Doctor testing can also be triggered by tox on the OPNFV installer jumphost.
+Tox is normally used for functional, module and coding style testing in
+Python projects.
+
+Currently, the 'Apex', 'Daisy', 'Fuel' and 'local' installers are supported.
+
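+
+The following minimal sketch (not part of the Doctor code base) illustrates
+how a run could be triggered from the jumphost with tox. The environment
+variable names are the ones used by the Doctor CI jobs; the tox environment
+name and the chosen values are illustrative assumptions.
+
+.. code-block:: python
+
+    import os
+    import subprocess
+
+    # Select the target environment for this Doctor run (illustrative values).
+    env = dict(os.environ)
+    env['INSTALLER_TYPE'] = 'local'        # 'Apex', 'Daisy', 'Fuel' or 'local'
+    env['INSPECTOR_TYPE'] = 'sample'       # 'sample', 'congress' or 'vitrage'
+    env['TEST_CASE'] = 'fault_management'  # or 'maintenance'
+
+    # tox creates a virtualenv and runs the Doctor test suite inside it.
+    subprocess.check_call(['tox', '-e', 'py34'], env=env)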
+
+Fault management use case
+"""""""""""""""""""""""""
 
 * A consumer of the NFVI wants to receive immediate notifications about faults
   in the NFVI affecting the proper functioning of the virtual resources.
@@ -67,7 +82,8 @@ configuration.
 Detailed workflow information is as follows:
 
 * Consumer(VNFM): (step 0) creates resources (network, server/instance) and an
-  event alarm on state down notification of that server/instance
+  event alarm on the state down notification of that server/instance or
+  Neutron port.
 
 * Monitor: (step 1) periodically checks nodes, such as ping from/to each
   dplane nic to/from gw of node, (step 2) once it fails to send out event
@@ -75,29 +91,26 @@
 * Inspector: when it receives an event, it will (step 3) mark the host down
   ("mark-host-down"), (step 4) map the PM to VM, and change the VM status to
-  down
+  down. In the network failure case, the Neutron port is also changed to down
+  (see the sketch after this list).
 
-* Controller: (step 5) sends out instance update event to Ceilometer
+* Controller: (step 5) sends out an instance update event to Ceilometer. In the
+  network failure case, the Neutron port is also changed to down and the
+  corresponding event is sent to Ceilometer.
 
-* Notifier: (step 6) Ceilometer transforms and passes the event to Aodh,
-  (step 7) Aodh will evaluate event with the registered alarm definitions,
+* Notifier: (step 6) Ceilometer transforms and passes the events to AODH,
+  (step 7) AODH evaluates the events against the registered alarm definitions,
   then (step 8) it will fire the alarm to the "consumer" who owns the
   instance
 
 * Consumer(VNFM): (step 9) receives the event and (step 10) recreates a new
   instance
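+
+The sketch below illustrates the kind of Nova API calls that steps (3) and (4)
+map to, using python-novaclient. It is a simplified example, not the actual
+Sample Inspector, Congress or Vitrage implementation; the credentials and the
+host name are placeholders.
+
+.. code-block:: python
+
+    from keystoneauth1 import loading
+    from keystoneauth1 import session
+    from novaclient import client
+
+    # Placeholder admin credentials; a real Inspector reads them from its
+    # configuration.
+    loader = loading.get_plugin_loader('password')
+    auth = loader.load_from_options(auth_url='http://keystone:5000/v3',
+                                    username='admin', password='secret',
+                                    project_name='admin',
+                                    user_domain_id='default',
+                                    project_domain_id='default')
+    nova = client.Client('2.11', session=session.Session(auth=auth))
+
+    failed_host = 'compute-1'  # placeholder, reported by the Monitor
+
+    # step 3: "mark-host-down" so that Nova stops scheduling to the host
+    nova.services.force_down(failed_host, 'nova-compute', forced_down=True)
+
+    # step 4: map the PM to its VMs and mark each affected VM as errored
+    for server in nova.servers.list(search_opts={'host': failed_host,
+                                                 'all_tenants': 1}):
+        nova.servers.reset_state(server, 'error')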
 
-Test case
-"""""""""
-
-Functest will call the "run.sh" script in Doctor to run the test job.
+Fault management test case
+""""""""""""""""""""""""""
 
-Currently, only 'Apex' and 'local' installer are supported. The test also
-can run successfully in 'fuel' installer with the modification of some
-configurations of OpenStack in the script. But still need 'fuel' installer
-to support these configurations.
+Functest will call the 'doctor-test' command in Doctor to run the test job.
 
-The "run.sh" script will execute the following steps.
+The following steps are executed:
 
 Firstly, get the installer ip according to the installer type. Then ssh to the
 installer node to get the private key for accessing to the cloud. As
@@ -124,3 +137,117 @@ is calculated.
 
 According to the Doctor requirements, the Doctor test is successful if the
 notification time is below 1 second.
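+
+The snippet below is a simplified, self-contained sketch of that measurement;
+the real consumer in "doctor_tests" is more elaborate, and the port number and
+the way the fault-injection timestamp is obtained are illustrative assumptions.
+
+.. code-block:: python
+
+    import time
+    from http.server import BaseHTTPRequestHandler, HTTPServer
+
+    fault_injected_at = time.time()  # taken when the link is forced down
+
+    class Consumer(BaseHTTPRequestHandler):
+        """Receives the alarm notification that AODH posts to the consumer."""
+
+        def do_POST(self):
+            notification_time = time.time() - fault_injected_at
+            # Drain the request body and acknowledge the notification.
+            self.rfile.read(int(self.headers.get('Content-Length', 0)))
+            self.send_response(200)
+            self.end_headers()
+            # Doctor requires the notification to arrive within one second.
+            print('notification_time=%.3fs passed=%s'
+                  % (notification_time, notification_time < 1.0))
+
+    HTTPServer(('0.0.0.0', 12346), Consumer).serve_forever()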
+
+Maintenance use case
+""""""""""""""""""""
+
+* A consumer of the NFVI wants to interact with NFVI maintenance, upgrade and
+  scaling operations, and to have graceful retirement. By receiving
+  notifications about these NFVI events and responding to them within a given
+  time window, the consumer can guarantee zero downtime for its service.
+
+The maintenance use case adds an `admin tool` and an `app manager` component
+to the Doctor platform. An overview of the maintenance components can be seen
+in :numref:`figure-p2`.
+
+.. figure:: ./images/Maintenance-design.png
+   :name: figure-p2
+   :width: 100%
+
+   Doctor platform components in maintenance use case
+
+In the maintenance use case, the `app manager` (VNFM) subscribes to
+maintenance notifications triggered by project-specific alarms through AODH.
+This is how it learns about the NFVI maintenance, upgrade and scaling
+operations that affect its instances. The `app manager` can perform the
+actions depicted in `green color` or tell the `admin tool` to perform the
+admin actions depicted in `orange color`.
+
+Any infrastructure component, such as the `Inspector`, can subscribe to
+maintenance notifications triggered by host-specific alarms through AODH.
+Subscribing to these notifications requires admin privileges; they tell when a
+host is out of use for maintenance and when it is taken back into production.
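+
+A minimal sketch of that idea, using the host-specific `IN_MAINTENANCE` and
+`MAINTENANCE_COMPLETE` state names described later in this document; the
+handler functions and the payload layout are illustrative assumptions, not the
+Sample Inspector API.
+
+.. code-block:: python
+
+    hosts_in_maintenance = set()
+
+    def on_host_maintenance_notification(state, hostname):
+        """Track which hosts the admin tool has taken out of production."""
+        if state == 'IN_MAINTENANCE':
+            hosts_in_maintenance.add(hostname)
+        elif state == 'MAINTENANCE_COMPLETE':
+            hosts_in_maintenance.discard(hostname)
+
+    def on_monitor_event(hostname, event):
+        """Skip automatic fault handling for hosts that are down on purpose."""
+        if hostname in hosts_in_maintenance:
+            return  # planned downtime, not a fault
+        print('handling fault event from %s: %s' % (hostname, event))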
+
+Maintenance test case
+"""""""""""""""""""""
+
+The maintenance test case currently runs in our Apex CI and is executed by
+tox. This is because of the special limitations mentioned below and the fact
+that we currently have only a sample implementation as a proof of concept.
+The environment variable TEST_CASE='maintenance' needs to be set when
+executing "doctor_tests/main.py". The test case workflow can be seen in
+:numref:`figure-p3`.
+
+.. figure:: ./images/Maintenance-workflow.png
+   :name: figure-p3
+   :width: 100%
+
+   Maintenance test case workflow
+
+In the test case, all compute capacity will be consumed by project (VNF)
+instances. Because the test needs redundant services on the instances and an
+empty compute node for maintenance, it requires at least 3 compute nodes in
+the system. There will be 2 instances on each compute node, so the minimum
+number of VCPUs per compute node is also 2. Regardless of how many compute
+nodes there are, the application will always have 2 redundant instances
+(ACT-STDBY) on different compute nodes, and the rest of the compute capacity
+will be filled with non-redundant instances.
+
+For each project-specific maintenance message there is a time window for the
+`app manager` to take any needed action. This guarantees zero downtime for
+its service. All replies are sent back by calling the `admin tool` API given
+in the message.
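+
+A minimal sketch of such a reply handler, using the project-specific state
+names of this test case; the notification payload keys and the reply format
+are illustrative assumptions, not the actual `admin tool` API.
+
+.. code-block:: python
+
+    import requests
+
+    # Reply expected from the app manager for each project-specific state.
+    REPLIES = {
+        'MAINTENANCE': 'ACK_MAINTENANCE',
+        'DOWN_SCALE': 'ACK_DOWN_SCALE',
+        'PREPARE_MAINTENANCE': 'ACK_PREPARE_MAINTENANCE',
+        'PLANNED_MAINTENANCE': 'ACK_PLANNED_MAINTENANCE',
+    }
+
+    def on_maintenance_alarm(payload):
+        """Handle one maintenance notification delivered through AODH."""
+        state = payload['state']
+        if state not in REPLIES:
+            return  # e.g. ADMIN_ACTION_DONE needs no reply from the project
+        # ... application-side work comes first: scale down, prepare the
+        # instance migration or switch over the active instance, and so on ...
+        requests.put(payload['reply_url'],  # admin tool API given in the message
+                     json={'state': REPLIES[state],
+                           'instance_ids': payload.get('instance_ids', [])})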
+
+The following steps are executed:
+
+The infrastructure admin calls the `admin tool` API to trigger maintenance
+for the compute hosts that have instances belonging to a VNF.
+
+A project-specific `MAINTENANCE` notification is triggered to tell the
+`app manager` that its instances are going to be affected by infrastructure
+maintenance at a specific point in time. The `app manager` calls the
+`admin tool` API to answer back with `ACK_MAINTENANCE`.
+
+When the time comes to start the actual maintenance workflow in the
+`admin tool`, a `DOWN_SCALE` notification is triggered, as there is no empty
+compute node for maintenance (or compute upgrade). The project receives the
+corresponding alarm, scales down its instances and calls the `admin tool` API
+to answer back with `ACK_DOWN_SCALE`.
+
+Since the removed instances might not all come from a single compute node,
+the `admin tool` might need to figure out which compute node should be
+emptied first and send `PREPARE_MAINTENANCE` to the project, telling which
+instance needs to be migrated to obtain the needed empty compute node. The
+`app manager` makes sure it is ready to migrate the instance and calls the
+`admin tool` API to answer back with `ACK_PREPARE_MAINTENANCE`. The
+`admin tool` performs the migration and answers `ADMIN_ACTION_DONE`, so the
+`app manager` knows the instance can be used again.
+
+Next, :numref:`figure-p3` has a light blue section of actions to be done for
+each compute node. As we now have one empty compute node, we maintain/upgrade
+that one first. So on the first round, we can put the compute node straight
+into maintenance and send the admin-level, host-specific `IN_MAINTENANCE`
+message. This is caught by the `Inspector`, so it knows the host is down for
+maintenance. The `Inspector` can now disable any automatic fault management
+actions for the host, as it is down on purpose. After the `admin tool` has
+completed the maintenance/upgrade, a `MAINTENANCE_COMPLETE` message is sent
+to tell that the host is back in production.
+
+On the following rounds we always have instances on the compute node, so we
+need a `PLANNED_MAINTENANCE` message to tell that those instances are now
+going to be affected by maintenance. When the `app manager` receives this
+message, it knows that the instances to be moved away from the compute node
+will now move to an already maintained/upgraded host. In the test case no
+upgrade is done on the application side to adapt the instances to new
+infrastructure capabilities, but this could be done here, as this information
+is also passed in the message. This might mean just upgrading some RPMs, but
+could also mean completely re-instantiating the instance with a new flavor.
+If the application runs the active side of a redundant instance on this
+compute node, a switchover is done. After the `app manager` is ready, it
+calls the `admin tool` API to answer back with `ACK_PLANNED_MAINTENANCE`. In
+the test case the answer is `migrate`, so the `admin tool` migrates the
+instances and replies `ADMIN_ACTION_DONE`, and then the `app manager` knows
+the instances can be used again. Then we are ready to do the actual
+maintenance as before, through the `IN_MAINTENANCE` and
+`MAINTENANCE_COMPLETE` steps.
+
+After all compute nodes are maintained, the `admin tool` can send
+`MAINTENANCE_COMPLETE` to tell that the maintenance/upgrade is now complete.
+For the `app manager` this means it can scale back to full capacity.
+
+This is the current sample implementation and test case. A real-life
+implementation has been started in the OpenStack Fenix project, where we
+should eventually address the requirements more deeply and update the test
+case to use the Fenix implementation.
diff --git a/docs/development/overview/functest_scenario/images/Fault-management-design.png b/docs/development/overview/functest_scenario/images/Fault-management-design.png
new file mode 100644
index 00000000..6d98cdec
--- /dev/null
+++ b/docs/development/overview/functest_scenario/images/Fault-management-design.png
Binary files differ
diff --git a/docs/development/overview/functest_scenario/images/Maintenance-design.png b/docs/development/overview/functest_scenario/images/Maintenance-design.png
new file mode 100644
index 00000000..8f21db6a
--- /dev/null
+++ b/docs/development/overview/functest_scenario/images/Maintenance-design.png
Binary files differ
diff --git a/docs/development/overview/functest_scenario/images/Maintenance-workflow.png b/docs/development/overview/functest_scenario/images/Maintenance-workflow.png
new file mode 100644
index 00000000..9b65fd59
--- /dev/null
+++ b/docs/development/overview/functest_scenario/images/Maintenance-workflow.png
Binary files differ
diff --git a/docs/development/overview/functest_scenario/images/figure-p1.png b/docs/development/overview/functest_scenario/images/figure-p1.png
deleted file mode 100755
index e963d8bd..00000000
--- a/docs/development/overview/functest_scenario/images/figure-p1.png
+++ /dev/null
Binary files differ
diff --git a/docs/release/release-notes/index.rst b/docs/release/release-notes/index.rst
index 2e6d46e1..a0e30501 100644
--- a/docs/release/release-notes/index.rst
+++ b/docs/release/release-notes/index.rst
@@ -10,4 +10,4 @@ Doctor Release Notes
 .. toctree::
    :maxdepth: 2
 
-   releasenotes.rst
+   release-notes.rst
diff --git a/docs/release/release-notes/release-notes.rst b/docs/release/release-notes/release-notes.rst
new file mode 100644
index 00000000..ad690bb3
--- /dev/null
+++ b/docs/release/release-notes/release-notes.rst
@@ -0,0 +1,294 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International License.
+.. http://creativecommons.org/licenses/by/4.0
+
+
+This document provides the release notes for the Gambia release of Doctor.
+
+.. contents::
+   :depth: 3
+   :local:
+
+
+Version history
+---------------
+
++--------------------+--------------------+--------------------+-------------+
+| **Date**           | **Ver.**           | **Author**         | **Comment** |
++--------------------+--------------------+--------------------+-------------+
+| 2018-09-20         | 7.0.0              | Tomi Juvonen       |             |
++--------------------+--------------------+--------------------+-------------+
+
+Important notes
+===============
+
+In the Gambia release, Doctor has been working on our second use case,
+maintenance. The design guideline is now done, and a test case exists with a
+sample maintenance workflow implemented in Doctor. Work has also started on
+the real implementation in the OpenStack Fenix project:
+https://wiki.openstack.org/wiki/Fenix.
+
+Doctor CI testing has now moved to use tox instead of Functest.
+
+In this release, Doctor has not been working on the fault management use
+case, as the basic framework is already done. However, we might need to get
+back to it later to better meet the tough industry requirements as well as
+requirements from edge, containers and 5G.
+
+
+Summary
+=======
+
+The Gambia Doctor framework integrates OpenStack Queens into its test cases.
+Compared to the previous release, the Heat project is also being used in the
+maintenance test case.
+
+Release Data
+============
+
+Doctor changes
+
++------------------------------------------+-----------------------------------------------------------+
+| **commit-ID**                            | **Subject**                                               |
++------------------------------------------+-----------------------------------------------------------+
+| 825a0a0dd5e8028129b782ed21c549586257b1c5 | delete doctor datasource in congress when cleanup         |
++------------------------------------------+-----------------------------------------------------------+
+| fcf53129ab2b18b84571faff13d7cb118b3a41b3 | run profile even the notification time is larger than 1S  |
++------------------------------------------+-----------------------------------------------------------+
+| 495965d0336d42fc36494c81fd15cee2f34c96e9 | Update and add test case                                  |
++------------------------------------------+-----------------------------------------------------------+
+| da25598a6a31abe0579ffed12d1719e5ff75f9a7 | bugfix: add doctor datasource in congress                 |
++------------------------------------------+-----------------------------------------------------------+
+| f9e1e3b1ae4be80bc2dc61d9c4213c81c091ea72 | Update the maintenance design document                    |
++------------------------------------------+-----------------------------------------------------------+
+| 4639f15e6db2f1480b41f6fbfd11d70312d4e421 | Add maintenance test code                                 |
++------------------------------------------+-----------------------------------------------------------+
+| b54cbc5dd2d32fcb27238680b4657ed384d021c5 | Add setup and cleanup for maintenance test                |
++------------------------------------------+-----------------------------------------------------------+
+| b2bb504032ac81a2ed3f404113b097d9ce3d7f14 | bugfix: kill the stunnel when cleanup                     |
++------------------------------------------+-----------------------------------------------------------+
+| eaeb3c0f9dc9e6645a159d0a78b9fc181fce53d4 | add ssh_keyfile for connect to installer in Apex          |
++------------------------------------------+-----------------------------------------------------------+
+| dcbe7bf1c26052b0e95d209254e7273aa1eaace1 | Add tox and test case to testing document                 |
++------------------------------------------+-----------------------------------------------------------+
+| 0f607cb5efd91ee497346b7f792dfa844d15595c | enlarge the time of link down                             |
++------------------------------------------+-----------------------------------------------------------+
+| 1351038a65739b8d799820de515178326ad05f7b | bugfix: fix the filename of ssh tunnel                    |
++------------------------------------------+-----------------------------------------------------------+
+| e70bf248daac03eee6b449cd1654d2ee6265dd8c | Use py34 instead of py35                                  |
++------------------------------------------+-----------------------------------------------------------+
+| 2a60d460eaf018951456451077b7118b60219b32 | add INSPECTOR_TYPE and TEST_CASE to tox env               |
++------------------------------------------+-----------------------------------------------------------+
+| 2043ceeb08c1eca849daeb2b3696d385425ba061 | [consumer] fix default value for port number              |
++------------------------------------------+-----------------------------------------------------------+
+
+Releng changes
+
++------------------------------------------+-----------------------------------------------------------------------+
+| **commit-ID**                            | **Subject**                                                           |
++------------------------------------------+-----------------------------------------------------------------------+
+| c87309f5a75ccc5d595f708817b97793c24c4387 | Add Doctor maintenance job                                            |
++------------------------------------------+-----------------------------------------------------------------------+
+| bd16a9756ffd0743e143f0f2f966da8dd666c7a3 | remove congress test in Daisy                                         |
++------------------------------------------+-----------------------------------------------------------------------+
+| c47aaaa53c91aae93877f2532c72374beaa4eabe | remove fuel job in Doctor                                             |
++------------------------------------------+-----------------------------------------------------------------------+
+| ab2fed2522eaf82ea7c63dd05008a37c56e825d0 | use 'workspace-cleanup' plugin in publisher                           |
++------------------------------------------+-----------------------------------------------------------------------+
+| 3aaed5cf40092744f1b87680b9205a2901baecf3 | clean the workspace in the publisher                                  |
++------------------------------------------+-----------------------------------------------------------------------+
+| 50151eb3717edd4ddd996f3705fbe1732de7f3b7 | run tox with 'sudo'                                                   |
++------------------------------------------+-----------------------------------------------------------------------+
+| a3adc85ecb52f5d19ec4e9c49ca1ac35aa429ff9 | remove inspector variable form job template                           |
++------------------------------------------+-----------------------------------------------------------------------+
+| adfbaf2a3e8487e4c9152bf864a653a0425b8582 | run doctor tests with different inspectors in sequence                |
++------------------------------------------+-----------------------------------------------------------------------+
+| 2e98e56224cd550cb3bf9798e420eece28139bd9 | add the ssh_key info if the key_file is exist                         |
++------------------------------------------+-----------------------------------------------------------------------+
+| c109c271018e9a85d94be1b9b468338d64589684 | prepare installer info for doctor test                                |
++------------------------------------------+-----------------------------------------------------------------------+
+| 57cbefc7160958eae1d49e4753779180a25864af | use py34 for tox                                                      |
++------------------------------------------+-----------------------------------------------------------------------+
+| 3547754e808a581b09c9d22e013a7d986d9f6cd1 | specify the cacert file when it exits                                 |
++------------------------------------------+-----------------------------------------------------------------------+
+| ef4f36aa1c2ff0819d73cde44f84b99a42e15c7e | bugfix: wrong usage of '!include-raw'                                 |
++------------------------------------------+-----------------------------------------------------------------------+
+| 0e0e0d4cb71fb27b1789a2bef2d3c4ff313e67ff | use tox instead of functest for doctor CI jobs                        |
++------------------------------------------+-----------------------------------------------------------------------+
+| 5b22f1b95feacaec0380f6a7543cbf510b628451 | pass value to parameters                                              |
++------------------------------------------+-----------------------------------------------------------------------+
+| 44ab0cea07fa2a734c4f6b80776ad48fd006d1b8 | Doctor job bugfix: fix the scenario                                   |
++------------------------------------------+-----------------------------------------------------------------------+
+| 17617f1c0a78c7bdad0d11d329a6c7e119cbbddd | bugfix: run doctor tests parallelly                                   |
++------------------------------------------+-----------------------------------------------------------------------+
+| 811e4ef7f4c37b7bc246afc34ff880c014ecc05d | delete 'opnfv-build-ubuntu-defaults' parameters for doctor verify job |
++------------------------------------------+-----------------------------------------------------------------------+
+| 0705f31ab5bc54c073df120cbe0fe62cf10f9a81 | delete the 'node' parameter in 'doctor-slave-parameter' macro         |
++------------------------------------------+-----------------------------------------------------------------------+
+| 304151b15f9d7241db8c5fea067cafe048287d84 | fix the default node label for doctor test                            |
++------------------------------------------+-----------------------------------------------------------------------+
+| a6963f92f015a33b44b27199886952205499b44c | Fix project name                                                      |
++------------------------------------------+-----------------------------------------------------------------------+
+| f122bfed998b3b0e0178106a7538377c609c6512 | add a default value for SSH_KEY                                       |
++------------------------------------------+-----------------------------------------------------------------------+
+
+Version change
+^^^^^^^^^^^^^^
+
+Module version changes
+~~~~~~~~~~~~~~~~~~~~~~
+
+- OpenStack has changed from Pike-1 to Queens-1
+
+Document version changes
+~~~~~~~~~~~~~~~~~~~~~~~~
+
+These documents have been updated in the Gambia release:
+
+- Testing document
+  docs/development/overview/testing.rst
+- Doctor scenario in functest
+  docs/development/overview/functest_scenario/doctor-scenario-in-functest.rst
+- Maintenance design guideline
+  docs/development/design/maintenance-design-guideline.rst
+
+Reason for version
+^^^^^^^^^^^^^^^^^^
+
+Documentation is updated due to the use of tox in testing and the addition of
+maintenance use case related documentation.
+
+Feature additions
+~~~~~~~~~~~~~~~~~
+
++--------------------+---------------------------------------------------------+
+| **JIRA REFERENCE** | **SLOGAN**                                              |
++--------------------+---------------------------------------------------------+
+| DOCTOR-106         | Maintenance scenario                                    |
++--------------------+---------------------------------------------------------+
+| DOCTOR-125         | Maintenance design document according to our test case  |
++--------------------+---------------------------------------------------------+
+| DOCTOR-126         | Use Tox instead of Functest for doctor CI jobs          |
++--------------------+---------------------------------------------------------+
+| DOCTOR-127         | Maintenance test POD                                    |
++--------------------+---------------------------------------------------------+
+
+
+Deliverables
+------------
+
+
+Software deliverables
+=====================
+
+None
+
+Documentation deliverables
+==========================
+
+https://git.opnfv.org/doctor/tree/docs
+
+Known Limitations, Issues and Workarounds
+=========================================
+
+System Limitations
+^^^^^^^^^^^^^^^^^^
+
+Maintenance test case requirements:
+
+- Minimum number of nodes: 1 Controller, 3 Computes
+- Minimum number of VCPUs: 2 VCPUs for each compute node
+
+Known issues
+^^^^^^^^^^^^
+
+None
+
+Workarounds
+^^^^^^^^^^^
+
+None
+
+Test Result
+===========
+
+Doctor CI results with TEST_CASE='fault_management' and INSPECTOR_TYPE=sample
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
++--------------------------------------+--------------+
+| **TEST-SUITE**                       | **Results:** |
++--------------------------------------+--------------+
+| INSTALLER_TYPE='Apex'                | SUCCESS      |
++--------------------------------------+--------------+
+| INSTALLER_TYPE='Compass'             | N/A          |
++--------------------------------------+--------------+
+| INSTALLER_TYPE='Daisy'               | SUCCESS      |
++--------------------------------------+--------------+
+| INSTALLER_TYPE='Fuel'                | No POD       |
++--------------------------------------+--------------+
+| INSTALLER_TYPE='Joid'                | N/A          |
++--------------------------------------+--------------+
+| INSTALLER_TYPE='Local'               | N/A          |
++--------------------------------------+--------------+
+
+Doctor CI results with TEST_CASE='fault_management' and INSPECTOR_TYPE=congress
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
++--------------------------------------+--------------+
+| **TEST-SUITE**                       | **Results:** |
++--------------------------------------+--------------+
+| INSTALLER_TYPE='Apex'                | FAILED       |
++--------------------------------------+--------------+
+| INSTALLER_TYPE='Compass'             | N/A          |
++--------------------------------------+--------------+
+| INSTALLER_TYPE='Daisy'               | N/A          |
++--------------------------------------+--------------+
+| INSTALLER_TYPE='Fuel'                | No POD       |
++--------------------------------------+--------------+
+| INSTALLER_TYPE='Joid'                | N/A          |
++--------------------------------------+--------------+
+| INSTALLER_TYPE='Local'               | N/A          |
++--------------------------------------+--------------+
+
+
+Doctor Functest results with TEST_CASE='fault_management'
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
++--------------------------------------+--------------+
+| **TEST-SUITE**                       | **Results:** |
++--------------------------------------+--------------+
+| INSTALLER_TYPE='Apex'                | skipped      |
++--------------------------------------+--------------+
+| INSTALLER_TYPE='Compass'             | N/A          |
++--------------------------------------+--------------+
+| INSTALLER_TYPE='Daisy'               | skipped      |
++--------------------------------------+--------------+
+| INSTALLER_TYPE='Fuel'                | skipped      |
++--------------------------------------+--------------+
+| INSTALLER_TYPE='Joid'                | N/A          |
++--------------------------------------+--------------+
+| INSTALLER_TYPE='Local'               | N/A          |
++--------------------------------------+--------------+
+
+Note: Functest run by the installers currently either does not test feature
+projects or skips running the project test cases.
+
+Doctor CI results with TEST_CASE='maintenance'
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
++--------------------------------------+--------------+
+| **TEST-SUITE**                       | **Results:** |
++--------------------------------------+--------------+
+| INSTALLER_TYPE='Apex'                | SUCCESS      |
++--------------------------------------+--------------+
+
+Doctor Functest results with TEST_CASE='maintenance'
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+N/A - Needs a special target, and currently there is only a sample
+implementation
+
+References
+==========
+
+For more information about the latest OPNFV Doctor work, please see:
+
+https://wiki.opnfv.org/display/doctor/Doctor+Home
diff --git a/docs/release/release-notes/releasenotes.rst b/docs/release/release-notes/releasenotes_fraser.rst
index f1cf9d7e..f1cf9d7e 100644
--- a/docs/release/release-notes/releasenotes.rst
+++ b/docs/release/release-notes/releasenotes_fraser.rst
diff --git a/doctor_tests/inspector/congress.py b/doctor_tests/inspector/congress.py
index fb747ec5..7f918fb2 100644
--- a/doctor_tests/inspector/congress.py
+++ b/doctor_tests/inspector/congress.py
@@ -31,6 +31,8 @@ class CongressInspector(BaseInspector):
 
     def __init__(self, conf, log):
         super(CongressInspector, self).__init__(conf, log)
+        self.is_create_doctor_datasource = False
+        self.doctor_datasource_id = None
         self.auth = get_identity_auth()
         self.congress = congress_client(get_session(auth=self.auth))
         self._init_driver_and_ds()
@@ -48,12 +50,6 @@ class CongressInspector(BaseInspector):
                             'version < nova_api_min_version(%s)'
                             % self.nova_api_min_version)
 
-        # create doctor datasource if it's not exist
-        if self.doctor_datasource not in datasources:
-            self.congress.create_datasource(
-                body={'driver': self.doctor_driver,
-                      'name': self.doctor_datasource})
-
         # check whether doctor driver exist
         drivers = \
             {driver['id']: driver for driver in
@@ -61,6 +57,14 @@ class CongressInspector(BaseInspector):
         if self.doctor_driver not in drivers:
             raise Exception('Do not support doctor driver in congress')
 
+        # create the doctor datasource if it does not exist yet
+        if self.doctor_datasource not in datasources:
+            response = self.congress.create_datasource(
+                body={'driver': self.doctor_driver,
+                      'name': self.doctor_datasource})
+            self.doctor_datasource_id = response['id']
+            self.is_create_doctor_datasource = True
+
         self.policy_rules = \
             {rule['name']: rule for rule in
              self.congress.list_policy_rules(self.policy)['results']}
@@ -86,6 +90,9 @@ class CongressInspector(BaseInspector):
         for rule_name in self.rules.keys():
             self._del_rule(rule_name)
 
+        if self.is_create_doctor_datasource:
+            self.congress.delete_datasource(self.doctor_datasource_id)
+
     def _add_rule(self, rule_name, rule):
         if rule_name not in self.policy_rules:
             self.congress.create_policy_rule(self.policy,
diff --git a/doctor_tests/scenario/fault_management.py b/doctor_tests/scenario/fault_management.py
index f8f53e8e..ee3bf5f1 100644
--- a/doctor_tests/scenario/fault_management.py
+++ b/doctor_tests/scenario/fault_management.py
@@ -184,6 +184,10 @@ class FaultManagement(object):
             self.log.info('doctor fault management test successfully,'
                           'notification_time=%s' % notification_time)
         else:
+            if self.conf.profiler_type:
+                self.log.info('run doctor fault management profile.......')
+                self.run_profiler()
+
             raise Exception('doctor fault management test failed, '
                             'notification_time=%s' % notification_time)