author    Tomi Juvonen <tomi.juvonen@nokia.com>    2019-11-28 12:31:51 +0200
committer Tomi Juvonen <tomi.juvonen@nokia.com>    2020-01-08 12:22:50 +0200
commit    d8eb12f4200c21f569df5bc01d378a846b4c0db0 (patch)
tree      acf0a67ef2a9a0e89d63e5863e9dc7bc53190478 /doctor_tests/inspector
parent    7822d631bc2fd2e8faf36d2b809e1e5b69f5251c (diff)
DevStack support
Support running Doctor testing in a DevStack multi-node controller.

JIRA: DOCTOR-136
Signed-off-by: Tomi Juvonen <tomi.juvonen@nokia.com>
Change-Id: I1569f3f77d889420b3b8f3c2724c10253e509c28
Diffstat (limited to 'doctor_tests/inspector')
-rw-r--r--  doctor_tests/inspector/sample.py  4
1 file changed, 2 insertions, 2 deletions
diff --git a/doctor_tests/inspector/sample.py b/doctor_tests/inspector/sample.py
index 70156b20..c44db95d 100644
--- a/doctor_tests/inspector/sample.py
+++ b/doctor_tests/inspector/sample.py
@@ -52,7 +52,7 @@ class SampleInspector(BaseInspector):
driver='messaging',
topics=['notifications'])
self.notif = self.notif.prepare(publisher_id='sample')
- except:
+ except Exception:
self.notif = None
def _init_novaclients(self):
@@ -135,7 +135,7 @@ class SampleInspector(BaseInspector):
def maintenance(self, data):
try:
payload = self._alarm_traits_decoder(data)
- except:
+ except Exception:
payload = ({t[0]: t[2] for t in
data['reason_data']['event']['traits']})
self.log.error('cannot parse alarm data: %s' % payload)
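
The two hunks above narrow bare "except:" clauses to "except Exception:". A bare except also traps BaseException subclasses such as KeyboardInterrupt and SystemExit, which should normally propagate (flake8 flags bare excepts as E722). Below is a minimal standalone sketch of the pattern; parse_traits() and handle() are hypothetical stand-ins for the inspector's decode-with-fallback logic, not code from this repository.

def parse_traits(data):
    # Hypothetical stand-in for the inspector's _alarm_traits_decoder().
    return {t[0]: t[2] for t in data['reason_data']['event']['traits']}

def handle(data):
    try:
        payload = parse_traits(data)
    except Exception:
        # Catches ordinary errors (KeyError, TypeError, ...) but lets
        # KeyboardInterrupt and SystemExit escape, which a bare
        # "except:" would silently swallow.
        payload = None
    return payload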
of OpenStack, multiple OpenStack clouds will have to be deployed in the DC to manage thousands of servers. In such a DC, it should be possible to deploy VNFs across OpenStack clouds.

Another typical use case is Geographic Redundancy (GR). A GR deployment deals with more catastrophic failures (flood, earthquake, propagating software fault, etc.) of a single site. In the GR use case, VNFs are deployed in two sites, which are geographically separated and run on NFVI managed by separate VIMs. When such a catastrophic failure happens, the VNFs at the failed site can fail over to the redundant site so as to continue the service. Different VNFs may have different requirements for such failover. Some VNFs may need stateful failover, while others may just need their VMs restarted on the redundant site in their initial state. The former creates the overhead of state replication; the latter may still replicate state through the storage. Accordingly, for storage we do not want to lose any data, and for networking the network functions should be connected the same way as they were in the original site. We probably also want the same number of VMs for the VNFs to come up on the redundant site.

The other use case is maintenance. When one site is planned for maintenance, the service should first be replicated to another site before it is stopped. Such replication should not disturb the service, nor should it cause any data loss. The service at the second site should be executing before the first site is stopped and maintenance begins. In such cases, the multisite schemes may be used.

The multisite scenario is also captured by the Multisite project, in which specific requirements on OpenStack are proposed for different use cases. However, the Multisite project mainly focuses on the requirements these multisite use cases place on OpenStack; HA requirements are not necessarily requirements for the approaches discussed in Multisite, whereas the HA project tries to capture the HA requirements in these use cases. The following links are the scenarios and use cases discussed in the Multisite project:

https://gerrit.opnfv.org/gerrit/#/c/2123/
https://gerrit.opnfv.org/gerrit/#/c/1438/