Diffstat (limited to 'docs/release')
-rw-r--r--  docs/release/Calipso-usage-stories.rst   446
-rw-r--r--  docs/release/admin-guide.rst   16
-rw-r--r--  docs/release/apex-scenario-guide.rst (renamed from docs/release/scenarios/os-nosdn-calipso-noha/apex-scenario-guide.rst)   564
-rw-r--r--  docs/release/developer-guide.pdf   bin 0 -> 252310 bytes
-rw-r--r--  docs/release/developer-guide.rst   1338
-rw-r--r--  docs/release/index.rst   20
-rw-r--r--  docs/release/media/image101.png   bin 0 -> 119090 bytes
-rw-r--r--  docs/release/media/image102.png   bin 0 -> 104849 bytes
-rw-r--r--  docs/release/media/image103.png   bin 0 -> 10664 bytes
-rw-r--r--  docs/release/media/image104.png   bin 0 -> 37854 bytes
-rw-r--r--  docs/release/media/image105.png   bin 0 -> 23555 bytes
-rw-r--r--  docs/release/media/image106.png   bin 0 -> 58686 bytes
-rw-r--r--  docs/release/media/image107.png   bin 0 -> 89583 bytes
-rw-r--r--  docs/release/scenarios/os-nosdn-calipso-noha/index.rst   15
14 files changed, 2074 insertions, 325 deletions
diff --git a/docs/release/Calipso-usage-stories.rst b/docs/release/Calipso-usage-stories.rst
new file mode 100644
index 0000000..4c0c753
--- /dev/null
+++ b/docs/release/Calipso-usage-stories.rst
@@ -0,0 +1,446 @@
+***The following are fictional stories, but they provide real examples
+of real problems that cloud providers face today and show possible
+resolutions provided by Calipso:***
+
+***Enterprise use-case story (Calipso ‘S’ release):***
+
+Moz is a website publishing and management product. Moz provides
+reputation and popularity tracking, helps with distribution, listings
+and ratings, and provides content distribution for industry marketing.
+
+Moz is considering moving their main content distribution application
+to https://www.dreamhost.com/, which provides shared and dedicated IaaS
+and PaaS hosting based on OpenStack.
+
+As a major milestone in Moz’s due diligence for choosing Dreamhost, Moz
+acquires a cost-effective and stable shared hosting facility from
+Dreamhost. It includes 4 mid-sized web servers, 4 large-sized
+application servers and 2 large-sized DB servers, connected using
+several networks, with some security services.
+
+Dreamhost executives instruct their infrastructure operations department
+to make sure proper SLA enforcement and monitoring are in place, so that
+the due diligence and final production deployment of Moz’s services in
+the Dreamhost datacenter go well and Moz’s engineers receive an
+excellent service experience.
+
+Moz received the following SLA with their current VPS contract:
+
+- 97-day money back guarantee, in case of a single service down event
+ or any dissatisfaction.
+
+- 99.5 % uptime/availability with a weekly total downtime of 30
+ minutes.
+
+- 24/7/365 on-call service with a total of 6 hours MTTR.
+
+- Full HA for all networking services.
+
+- Managed VPS using own Control Panel IaaS provisioning with overall
+ health visibility.
+
+- Scalable RAM, starting at 1GB and growing per request to 16GB from
+  within the control panel.
+
+- Guaranteed usage of SSD or equivalent speeds, storage capacity from
+ 30GB to 240GB.
+
+- Backup service based on cinder-backup and Ceph’s dedicated backup
+ volumes, with restoration time below 4 hours.
+
+Dreamhost‘s operations team factored in all requirements and decided to
+include real-time monitoring and analysis of the VPS for Moz.
+
+One of the tools now used for the Moz environment at Dreamhost is
+Calipso, for virtual networking.
+
+Here are some benefits provided by Calipso for Dreamhost operations
+during service cycles:
+
+*Reporting:*
+
+Special handling of virtual networking is in place:
+
+- Dreamhost designed a certain virtual networking setup and
+ connectivity that provides the HA and performance required by the SLA
+ and decided on several physical locations for Moz’s virtual servers
+ in different availability zones.
+
+- A discovery schedule has been created: Calipso takes a snapshot of
+  Moz’s environment every Sunday at midnight, reporting on connectivity
+  among all 20 servers (10 main and 10 backup) and on the overall
+  health of that connectivity.
+
+- Every Sunday morning at 8am, before the week’s automatic
+  snapshotting, the NOC administrator runs a manual discovery and saves
+  that snapshot. She then runs a comparison check against last week’s
+  snapshot and against the initial design to find any gaps or changes
+  that might have happened due to other shared services deployments;
+  virtual instances and their connectivity are analyzed and reported
+  with Calipso’s topology and health monitoring (a comparison sketch
+  follows this list).
+
+- Reports are saved for a bi-weekly reporting sent to Moz’s networking
+ engineers.
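+
+As an illustration of the weekly comparison above, here is a minimal
+sketch of diffing two saved snapshots, assuming each snapshot was
+exported as a JSON list of inventory objects keyed by *id* (the file
+names and the export format are hypothetical)::
+
+    import json
+
+    def load_snapshot(path):
+        # Load an exported snapshot: a JSON list of objects with an 'id' field
+        with open(path) as f:
+            return {obj["id"]: obj for obj in json.load(f)}
+
+    last_week = load_snapshot("moz_snapshot_last_sunday.json")  # hypothetical export
+    this_week = load_snapshot("moz_snapshot_this_sunday.json")  # hypothetical export
+
+    added = this_week.keys() - last_week.keys()
+    removed = last_week.keys() - this_week.keys()
+    changed = [oid for oid in this_week.keys() & last_week.keys()
+               if this_week[oid] != last_week[oid]]
+
+    print("added:", sorted(added))
+    print("removed:", sorted(removed))
+    print("changed:", sorted(changed))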
+
+ *Change management:*
+
+  If infrastructure changes need to happen on any virtual service
+  (routers, switches, firewalls etc.) or on any physical server or
+  physical switch, the following special guidelines apply:
+
+- Run a search on Calipso for the name of the virtual service, switch
+  or host. Look up whether the Moz environment is using this object
+  (using the object’s attributes).
+
+- Using Calipso’s impact analysis, fill in a report stating all of
+  Moz’s objects affected by the planned change, on which host they run
+  and to which switch they are connected.
+
+- Run a clique-type scan, using the specific object as ‘focal point’,
+  to create a dedicated topology with an accompanying health report
+  before conducting the change itself; use this as a *pre snapshot*.
+
+- Simulate the change, using Moz’s testing environment only; make sure
+  HA services are in place and downtime is confirmed to be within the
+  SLA boundaries.
+
+- Using all reports provided by Calipso, along with application and
+ storage reports, send a detailed change request to NOC and later to
+ the end-customer for review.
+
+- During the change, make sure HA is operational by running the same
+  clique-type snapshotting every 10 minutes and running a comparison.
+
+- NOC, while waiting for the change to complete, looks at Calipso’s
+  dashboard focused on Moz’s environment, monitoring results for the
+  service-down event (as expected) and for impact on other objects in
+  the service chain - the entire Calipso clique for that object (as
+  expected).
+
+- Once operations has reported back to NOC that the change is done, run
+  the same snapshotting again as a *post snapshot* and run a comparison
+  to make sure all virtual networking is back to the ‘as designed’
+  state and all networking services are back.
+
+**Example snapshot taken at one stage on Calipso for the Moz virtual
+networking:**
+
+|image0|
+
+ *Troubleshooting:*
+
+  Dreamhost NOC uses Calipso dashboards for Moz’s environment for
+  their daily health-check. Troubleshooting starts in the following
+  cases:
+
+1. When a failure is detected on Calipso for any of Moz’s objects on
+   their virtual networking topologies.
+
+2. When a service case has been opened by Moz with “High Priority,
+ service down” flag.
+
+3. When the networking department needs to know which virtual services
+   are connected to which ACI switch ports.
+
+ The following actions are taken, using Calipso dashboards:
+
+- Kick off a discovery through the Calipso API for all objects related
+  to Moz (an API sketch follows this list).
+
+- For a service request with no Calipso error detected: using Calipso’s
+  impact analysis, create cliques for all objects as focal points.
+
+- For an error detected by Calipso: using Calipso’s impact analysis,
+  create cliques for the objects with errors as focal points.
+
+- Resulting cliques are then analyzed using the detailed messaging
+  facility in Calipso (looking deeply into any message generated
+  regarding the related objects).
+
+- A report with ACI port-to-virtual-service mappings is sent to the
+  networking department for further analysis.
+
+ |image1|
+
+- If this is a failure on any physical device (host or switch) and/or
+  on any physical NIC (switch or host side), Calipso immediately points
+  this out, and using the specific set of messages generated, the
+  administrator can figure out the root cause (such as an optical
+  failure, a driver issue or a disconnect).
+
+- For virtual object failures, Calipso saves time by pinpointing the
+  servers where the erroneous objects are running, along with their
+  previous and new connectivity details.
+
+- Calipso alerts on dependencies for:
+
+1. All related objects in the clique for that object.
+
+2. Related hosts
+
+3. Related projects and networks
+
+4. Related applications (\* in case the Murano app has been added)
+
+- Administrators connect directly to the specific servers and, using
+  the specific object attributes, can start their manual
+  troubleshooting (actually fixing the software issues is not currently
+  part of the Calipso feature set).
+
+- The NOC operators approve closing the service ticket only when all
+  related Calipso cliques show up as healthy and connectivity is back
+  to its original “as designed” state, verified using older Calipso
+  snapshots.
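+
+The discovery kickoff mentioned in the list above could look roughly
+like the following sketch, assuming the Calipso API server is running
+on its default bind address; the endpoint names, payload fields and
+credentials here are illustrative assumptions, not documented API::
+
+    import requests
+
+    API = "http://127.0.0.1:8000"  # default bind address of the API server
+
+    # Obtain an auth token first (field names are assumptions)
+    token = requests.post(API + "/auth/tokens",
+                          json={"username": "admin",
+                                "password": "secret"}).json()["token"]
+
+    # Request a full discovery of the Moz environment (payload is hypothetical)
+    resp = requests.post(API + "/scans",
+                         headers={"X-Auth-Token": token},
+                         json={"environment": "Moz-Production"})
+    resp.raise_for_status()
+    print("scan request accepted:", resp.json())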
+
+**Lookup of message – to – graph object in messaging facility:**
+
+|image2|
+
+**Finding the right object related to a specific logging/monitoring
+message**:
+
+|image3|
+
+***Service Provider use-case story (Calipso ‘P’ release):***
+
+BoingBoing is a specialized video casting service and blogging site. It
+uses several locations to run its service (regional hubs and a central
+corporate campus, some hosted and some private).
+
+BoingBoing contracted AT&T to build an NFV service for them, deployed on
+2 new hosted regional hubs, to be brought up dynamically for special
+sporting, news or cultural events. On each of the 2 hosted virtual
+environments the following service chain is created:
+
+1. Two Vyatta 5600 virtual routers providing the front-end routing
+   aggregation function.
+
+2. Two Steelhead virtual WAN acceleration appliances connected to the
+   central campus for accelerating and caching video casting
+   services.
+
+3. Two F5 BIG-IP Traffic Management (load balancing) virtual appliances.
+
+4. Two Cisco vASA appliances for virtual firewall and remote-access VPN
+   services.
+
+As a major milestone in BoingBoing’s due diligence for choosing the
+AT&T NFV service, BoingBoing acquires 2 shared hosting facilities and an
+automated service from AT&T that is cost-effective and stable. This NFV
+service consists of a total of 16 virtual appliances across those 2
+sites, created on-demand and maintained with a certain SLA once
+provisioned; all NFV devices are connected using several networks,
+provisioned using the VPP ml2 driver on an OpenStack based
+environment.
+
+AT&T executives instruct their infrastructure operations department to
+make sure proper SLA enforcement and monitoring are in place, so that
+the due diligence and final production deployment of BoingBoing’s
+services in the AT&T datacenters go well and BoingBoing’s engineers
+receive an excellent service experience.
+
+BoingBoing received the following SLA with their current VPS contract:
+
+- 30-day money back guarantee, in case of a single service down event
+ or any dissatisfaction.
+
+- 99.9 % uptime/availability with a weekly total downtime of 10
+ minutes.
+
+- 24/7/365 on-call service with a total of 2 hours MTTR.
+
+- Full HA for all networking services.
+
+- Managed service using Control Panel IaaS provisioning with overall
+ health visibility.
+
+- Dedicated RAM, from 16GB to 64GB, from within the control panel.
+
+- Guaranteed usage of SSD or equivalent speeds, storage capacity from
+ 10GB to 80GB.
+
+- Backup service based on cinder-backup and Ceph’s dedicated backup
+ volumes, with restoration time below 4 hours.
+
+- End-to-end throughput from the central campus to the dynamically
+  created regional sites to always be above 2Gbps, including all
+  devices on the service chain and the virtual networking in place.
+
+AT&T’s operations team factored in all requirements and decided to
+include real-time monitoring and analysis of the NFV environment for
+BoingBoing.
+
+One of the tools now used for the BoingBoing environment at AT&T is
+Calipso, for virtual networking.
+
+Here are some benefits provided by Calipso for AT&T operations during
+service cycles:
+
+*Reporting:*
+
+Special handling of virtual networking is in place:
+
+- AT&T designed a certain virtual networking (SFC) setup and
+ connectivity that provides the HA and performance required by the SLA
+ and decided on several physical locations for BoingBoing’s virtual
+ appliances in different availability zones.
+
+- A discovery schedule has been created: Calipso takes a snapshot of
+  BoingBoing’s environment every Sunday at midnight, reporting on
+  connectivity among all 16 instances (8 per regional site, 4 pairs on
+  each) and on the overall health of that connectivity.
+
+- Every Sunday morning at 8am, before the week’s automatic
+  snapshotting, the NOC administrator runs a manual discovery and saves
+  that snapshot. She then runs a comparison check against last week’s
+  snapshot and against the initial design to find any gaps or changes
+  that might have happened due to other shared services deployments;
+  virtual instances and their connectivity are analyzed and reported
+  with Calipso’s topology and health monitoring.
+
+- Reports are saved for a bi-weekly reporting sent to BoingBoing’s
+ networking engineers.
+
+- Throughput is measured by a special traffic sampling technology
+  inside the VPP virtual switches and sent back to Calipso, where it is
+  correlated with the virtual objects in the topological inventory.
+  Dependencies are analyzed so that SFC topologies are visualized
+  across all sites, with a graphing facility on the Calipso UI to
+  visualize the throughput.
+
+ *Change management:*
+
+  If infrastructure changes need to happen on any virtual service
+  (NFV virtual appliances, internal routers, switches, firewalls etc.)
+  or on any physical server or physical switch, the following special
+  guidelines apply:
+
+- Run a lookup on the Calipso search engine for the name of the virtual
+  service, switch or host, including the names of NFV appliances as
+  updated in the Calipso inventory by the NFV provisioning application.
+  Look up whether the BoingBoing environment is using this object
+  (using the object’s attributes).
+
+ **Running a lookup on Calipso search-engine**
+
+|image4|
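+
+Such a lookup can also be done programmatically rather than in the UI;
+the sketch below queries the API’s inventory resource by object name
+(the endpoint, query parameters and token handling are illustrative
+assumptions, not documented API)::
+
+    import requests
+
+    API = "http://127.0.0.1:8000"   # default bind address of the API server
+    HEADERS = {"X-Auth-Token": "my-token"}  # token obtained as in the earlier sketch
+
+    # Query the inventory for objects whose name matches an NFV appliance
+    # ('env_name' and 'name' parameters are assumptions, not documented API)
+    resp = requests.get(API + "/inventory", headers=HEADERS,
+                        params={"env_name": "BoingBoing-East",
+                                "name": "vyatta-5600-1"})
+    resp.raise_for_status()
+    for obj in resp.json():
+        print(obj.get("id"), obj.get("type"), obj.get("host"))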
+
+- Using Calipso’s impact analysis, fill in a report stating all of
+  BoingBoing’s objects affected by the planned change, on which host
+  they run and to which switch they are connected.
+
+- Run a clique-type scan, using the specific object as ‘focal point’,
+  to create a dedicated topology with an accompanying health report
+  before conducting the change itself; use this as a *pre snapshot*.
+
+- Simulate the change, using BoingBoing’s testing environment only;
+  make sure HA services are in place and downtime is confirmed to be
+  within the SLA boundaries.
+
+- Using all reports provided by Calipso, along with application and
+ storage reports, send a detailed change request to NOC and later to
+ the end-customer for review.
+
+- During the change, make sure HA is operational by running the same
+  clique-type snapshotting every 10 minutes and running a comparison.
+
+- NOC, while waiting for the change to complete, looks at Calipso’s
+  dashboard focused on BoingBoing’s environment, monitoring results for
+  the SFC service-down event (as expected) and for impact on other
+  objects in the service chain - the entire Calipso clique for that
+  object (as expected).
+
+- Once operations has reported back to NOC that the change is done, run
+  the same snapshotting again as a *post snapshot* and run a comparison
+  to make sure all virtual networking is back to the ‘as designed’
+  state and all networking services are back.
+
+**Example snapshot taken at one stage for the BoingBoing virtual
+networking and SFC:**
+
+|image5|
+
+ *Troubleshooting:*
+
+ AT&T NOC uses Calipso dashboards for BoingBoing’s environment for
+ their daily health-check. Troubleshooting starts in two cases:
+
+1. When a failure is detected on Calipso for any of BoingBoing’s
+   objects on their virtual networking topologies.
+
+2. When a service case has been opened by BoingBoing with “High
+ Priority, SFC down” flag.
+
+ The following actions are taken, using Calipso dashboards:
+
+- Kick off a discovery through the Calipso API for all objects related
+  to BoingBoing.
+
+- For a service request with no Calipso error detected: using Calipso’s
+  impact analysis, create cliques for all objects as focal points.
+
+- For an error detected by Calipso: using Calipso’s impact analysis,
+  create cliques for the objects with errors as focal points.
+
+- Resulting cliques are then analyzed using the detailed messaging
+  facility in Calipso (looking deeply into any message generated
+  regarding the related objects).
+
+- If this is a failure on any physical device (host or switch) and/or
+  on any physical NIC (switch or host side), Calipso immediately points
+  this out, and using the specific set of messages generated, the
+  administrator can figure out the root cause (such as an optical
+  failure, a driver issue or a disconnect).
+
+- For virtual object failures, Calipso saves time by pinpointing the
+  servers where the erroneous objects are running, along with their
+  previous and new connectivity details.
+
+- \*Sources of alerts: OpenStack, Calipso’s own checks and Sensu are
+  built-in sources; other NFV-related monitoring and alerting sources
+  can be added to the Calipso messaging system (a sketch follows this
+  list).
+
+- Calipso alerts on dependencies for:
+
+1. All related objects in the clique for that object.
+
+2. Related hosts
+
+3. Related projects and networks
+
+4. Related NFV services and SFCs (\* in case NFV Tacker has been added)
+
+- Administrators connect directly to the specific servers and, using
+  the specific object attributes, can start their manual
+  troubleshooting (actually fixing the software issues is not currently
+  part of the Calipso feature set).
+
+- The NOC operators approve closing the service ticket only when all
+  related Calipso cliques show up as healthy and connectivity is back
+  to its original “as designed” state, verified using older Calipso
+  snapshots.
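+
+As a rough sketch of feeding an external NFV alert source into the
+Calipso messaging system mentioned above, the snippet below writes a
+message document into the Calipso MongoDB; the collection name and the
+document fields are illustrative assumptions, not a documented schema::
+
+    from datetime import datetime, timezone
+    from pymongo import MongoClient
+
+    client = MongoClient("localhost", 27017)  # Calipso mongo container
+    messages = client["calipso"]["messages"]  # collection name is an assumption
+
+    # A hypothetical alert coming from an external NFV monitoring source
+    messages.insert_one({
+        "environment": "BoingBoing-East",     # field names are assumptions
+        "source_system": "external-nfv-monitor",
+        "level": "error",
+        "object_id": "vyatta-5600-1",
+        "display_context": "SFC throughput below SLA threshold",
+        "timestamp": datetime.now(timezone.utc),
+    })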
+
+**Calipso’s monitoring dashboard shows virtual services are back to
+operational state:**
+
+|image6|
+
+.. |image0| image:: media/image101.png
+ :width: 7.14372in
+ :height: 2.84375in
+.. |image1| image:: media/image102.png
+ :width: 6.99870in
+ :height: 2.87500in
+.. |image2| image:: media/image103.png
+ :width: 6.50000in
+ :height: 0.49444in
+.. |image3| image:: media/image104.png
+ :width: 6.50000in
+ :height: 5.43472in
+.. |image4| image:: media/image105.png
+ :width: 7.24398in
+ :height: 0.77083in
+.. |image5| image:: media/image106.png
+ :width: 6.50000in
+ :height: 3.58611in
+.. |image6| image:: media/image107.png
+ :width: 7.20996in
+ :height: 2.94792in
diff --git a/docs/release/admin-guide.rst b/docs/release/admin-guide.rst
index edf7b00..3529e77 100644
--- a/docs/release/admin-guide.rst
+++ b/docs/release/admin-guide.rst
@@ -923,7 +923,7 @@ Event-based handling details
+--------------------------+---------------------------+-------------------------------------+-----------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| # | Event name | AMQP event | Handler | Workflow | Scans | Notes |
+==========================+===========================+=====================================+=========================================+==================================================================================================================================================================================================================================================================================+======================================================================================================+==========================================================================================================================================================================================================================================================================================================================================+
-| **Instance** |
+| **Instance** |
+--------------------------+---------------------------+-------------------------------------+-----------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| 1 | Create Instance | compute.instance.create.end | EventInstanceAdd | 1. Get *instances\_root* from inventory | **Yes** | ** ** |
| | | | | | | |
@@ -969,7 +969,7 @@ Event-based handling details
| | | | | | | |
| | | | | 2. Execute *self.delete\_handler()* | | |
+--------------------------+---------------------------+-------------------------------------+-----------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
-| **Instance Lifecycle** |
+| **Instance Lifecycle** |
+--------------------------+---------------------------+-------------------------------------+-----------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| 4 | Instance Down | compute.instance.shutdown.start | **Not implemented** | | | |
| | | | | | | |
@@ -981,7 +981,7 @@ Event-based handling details
| | | | | | | |
| | | compute.instance.suspend.end | | | | |
+--------------------------+---------------------------+-------------------------------------+-----------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
-| **Region** |
+| **Region** |
+--------------------------+---------------------------+-------------------------------------+-----------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| 6 | Add Region | servergroup.create | **Not implemented** | | | |
+--------------------------+---------------------------+-------------------------------------+-----------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
@@ -991,7 +991,7 @@ Event-based handling details
+--------------------------+---------------------------+-------------------------------------+-----------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| 8 | Delete Region | servergroup.delete | **Not implemented** | ** ** | ** ** | ** ** |
+--------------------------+---------------------------+-------------------------------------+-----------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
-| **Network** |
+| **Network** |
+--------------------------+---------------------------+-------------------------------------+-----------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| 9 | Add Network | network.create.end | EventNetworkAdd | 1. If network with specified *id* already exists, log error and **return None** | **No** | ** ** |
| | | | | | | |
@@ -1015,7 +1015,7 @@ Event-based handling details
| | | | | | | |
| | | | | 2. Execute *self.delete\_handler()* | | |
+--------------------------+---------------------------+-------------------------------------+-----------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
-| **Subnet** |
+| **Subnet** |
+--------------------------+---------------------------+-------------------------------------+-----------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| 12 | Add Subnet | subnet.create.end | EventSubnetAdd | 1. Get *network\_document* from db | **Yes** {cliques: 1} | 1. I don’t fully understand what `*these lines* <https://cto-github.cisco.com/OSDNA/OSDNA/blob/b8246e3b19732d2f30922791ade23a94b4f52426/app/discover/events/event_subnet_add.py#L123-L126>`__ do. We make sure *ApiAccess.regions* variable is not empty, but why? The widespread usage of static variables is not a good sign anyway. |
| | | | | | | |
@@ -1069,7 +1069,7 @@ Event-based handling details
| | | | | | | |
| | | | | 6. If no subnets are left in *network\_document*, delete related vservice dhcp, port and vnic documents | | |
+--------------------------+---------------------------+-------------------------------------+-----------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
-| **Port** |
+| **Port** |
+--------------------------+---------------------------+-------------------------------------+-----------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| 15 | Create Port | port.create.end | EventPortAdd | 1. Check if ports folder exists, create if not. | **Yes** {cliques: 1} | 1. The port and (maybe) port folder will still persist in db even if we abort the execution on step 6. See idea 1 for details. |
| | | | | | | |
@@ -1127,7 +1127,7 @@ Event-based handling details
| | | | | | | |
| | | | | 6. Execute *self.delete\_handler(vnic)* *for vnic* | | |
+--------------------------+---------------------------+-------------------------------------+-----------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
-| **Router** |
+| **Router** |
+--------------------------+---------------------------+-------------------------------------+-----------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| 18 | Add Router | router.create.end | EventRouterAdd | 1. Get *host* by id from db | **Yes** {cliques: 1} | 1. Looks like code author confused a lot of stuff here. This class needs to be reviewed thoroughly. |
| | | | | | | |
@@ -1193,7 +1193,7 @@ Event-based handling details
| | | | | | | |
| | | | | 2. Execute *self.delete\_handler()* | | |
+--------------------------+---------------------------+-------------------------------------+-----------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
-| **Router Interface** |
+| **Router Interface** |
+--------------------------+---------------------------+-------------------------------------+-----------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| 21 | Add Router Interface | router.interface.create | EventInterfaceAdd | 1. Get *network\_doc* from db based on subnet id from interface payload | **Yes** {cliques: 1} | 1. Log message states that we should abort interface adding, though the code does nothing to support that. Moreover, router\_doc can’t be empty at that moment because it’s referenced before. |
| | | | | | | |
diff --git a/docs/release/scenarios/os-nosdn-calipso-noha/apex-scenario-guide.rst b/docs/release/apex-scenario-guide.rst
index 50e4c60..c240b0a 100644
--- a/docs/release/scenarios/os-nosdn-calipso-noha/apex-scenario-guide.rst
+++ b/docs/release/apex-scenario-guide.rst
@@ -1,282 +1,282 @@
+| Calipso.io
+| Installation Guide
+
+|image0|
+
+Project “Calipso” tries to illuminate complex virtual networking with
+real time operational state visibility for large and highly distributed
+Virtual Infrastructure Management (VIM).
+
+We believe that Stability is driven by accurate Visibility.
+
+Calipso provides visible insights using smart discovery and virtual
+topological representation in graphs, with monitoring per object in the
+graph inventory to reduce error vectors and troubleshooting, maintenance
+cycles for VIM operators and administrators.
+
+Table of Contents
+
+Calipso.io Installation Guide
+
+1 Pre Requisites
+
+1.1 Pre Requisites for Calipso “all in one” application
+
+1.2 Pre Requisites for Calipso UI application
+
+2 Installation Option used with Apex
+
+2.1 Micro Services App, single line install
+
+3 OPNFV Scenario
+
+3.1 APEX automatic configurator and setup
+
+3.2 Apex scenario
+
+3.3 Calipso functest
+
+TBD
+
+Pre Requisites
+===============
+
+Pre Requisites for Calipso “all in one” application
+----------------------------------------------------
+
+ Calipso’s main application is written in Python 3.5 for Linux
+ servers, tested successfully on CentOS 7.3 and Ubuntu 16.04. When
+ running as micro-services, many of the required software packages
+ and libraries are delivered per micro-service, but for the “all in
+ one” application case there are several dependencies.
+
+ Here is a list of the required software packages, and the officially
+ supported steps required to install them:
+
+1. Python3.5.x for Linux :
+ https://docs.python.org/3.5/using/unix.html#on-linux
+
+2. Pip for Python3 : https://docs.python.org/3/installing/index.html
+
+3. Python3 packages to install using pip3 :
+
+ **sudo pip3 install falcon (>1.1.0)**
+
+ **sudo pip3 install pymongo (>3.4.0)**
+
+ **sudo pip3 install gunicorn (>19.6.0)**
+
+ **sudo pip3 install ldap3 (>2.1.1)**
+
+ **sudo pip3 install setuptools (>34.3.2)**
+
+ **sudo pip3 install python3-dateutil (>2.5.3-2)**
+
+ **sudo pip3 install bcrypt (>3.1.1)**
+
+ **sudo pip3 install bson**
+
+ **sudo pip3 install websocket**
+
+ **sudo pip3 install datetime**
+
+ **sudo pip3 install typing**
+
+ **sudo pip3 install kombu**
+
+ **sudo pip3 install boltons**
+
+ **sudo pip3 install paramiko**
+
+ **sudo pip3 install requests**
+
+ **sudo pip3 install httplib2**
+
+ **sudo pip3 install mysql.connector**
+
+ **sudo pip3 install xmltodict**
+
+ **sudo pip3 install cryptography**
+
+ **sudo pip3 install docker**
+
+4. Git : https://git-scm.com/book/en/v2/Getting-Started-Installing-Git
+
+5. Docker : https://docs.docker.com/engine/installation/
+
+Pre Requisites for Calipso UI application
+------------------------------------------
+
+ Calipso UI is developed and maintained using Meteor Framework
+ (https://www.meteor.com/tutorials). For stability and manageability
+ reasons we decided to always build the latest Calipso UI as a Docker
+ container pre-parameterized for stable and supported behavior. The
+ required steps for installing the Calipso UI with several options
+ are listed below.
+
+Installation Option used with Apex
+==================================
+
+Micro Services App, single line install
+---------------------------------------
+
+ For most users, this will be the fastest and most reliable install
+ option. We currently have Calipso divided into 7 major containers,
+ which are installed using a single installer. The Calipso containers
+ are pre-packaged and fully customized per our design needs. Here are
+ the required steps for installation using this option:
+
+1. Follow steps 1-5 per section 1.1 above.
+
+2. Install Docker : https://docs.docker.com/engine/installation/
+
+3. Install the following python3 libraries using pip3 : docker, pymongo
+
+4. Although the Calipso installer can download all needed containers if
+   they don’t already exist locally, we recommend manually downloading
+   all 7 containers, which provides better control and logging:
+
+ **sudo docker login** # use your DockerHub username and password to
+ login.
+
+ **sudo docker pull korenlev/calipso:scan** # scan container used to
+ scan VIM
+
+ **sudo docker pull korenlev/calipso:listen** # listen container to
+ attach to VIM’s BUS.
+
+ **sudo docker pull korenlev/calipso:api** # api container for
+ application integration
+
+ **sudo docker pull korenlev/calipso:sensu** # sensu server container
+ for monitoring
+
+ **sudo docker pull korenlev/calipso:mongo** # calipso mongo DB
+ container
+
+ **sudo docker pull korenlev/calipso:ui** # calipso ui container
+
+ **sudo docker pull korenlev/calipso:ldap** # calipso ldap container
+
+5. Check that all containers were downloaded and registered
+   successfully:
+
+ **sudo docker images**
+
+ Expected results (As of Aug 2017):
+
+ **REPOSITORY TAG IMAGE ID CREATED SIZE**
+
+ **korenlev/calipso listen 12086aaedbc3 6 hours ago 1.05GB**
+
+ **korenlev/calipso api 34c4c6c1b03e 6 hours ago 992MB**
+
+ **korenlev/calipso scan 1ee60c4e61d5 6 hours ago 1.1GB**
+
+ **korenlev/calipso sensu a8a17168197a 6 hours ago 1.65GB**
+
+ **korenlev/calipso mongo 17f2d62f4445 22 hours ago 1.31GB**
+
+ **korenlev/calipso ui ab37b366e812 11 days ago 270MB**
+
+ **korenlev/calipso ldap 316bc94b25ad 2 months ago 269MB**
+
+6. Run the Calipso installer using single-line arguments:
+
+ **python3 calipso/app/install/calipso-installer.py --command
+ start-all --copy q**
+
+ This should launch all calipso modules in sequence along with all
+ needed configuration files placed in /home/calipso.
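+
+ To verify that the modules came up, list the running containers with
+ the standard Docker command (a generic Docker check, not a
+ Calipso-specific one):
+
+ **sudo docker ps**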
+
+OPNFV Scenario
+===============
+
+Although Calipso is designed for any VIM and for enterprise use-cases
+too, service providers can use an additional capability to install
+Calipso with Apex for OPNFV.
+
+APEX automatic configurator and setup
+-------------------------------------
+
+ When using Apex to install OPNFV, the TripleO-based OpenStack is
+ installed automatically, and Calipso installation can be initiated
+ automatically after Apex completes the VIM installation process for
+ a certain scenario.
+
+ In this case setup\_apex\_environment.py can be used to create a
+ new environment automatically in Calipso’s MongoDB and UI
+ (instead of using the Calipso UI to do that, as a typical user
+ would); detailed scanning can then start immediately. The following
+ options are available for setup\_apex\_environment.py:
+
+ **-m [MONGO\_CONFIG], --mongo\_config [MONGO\_CONFIG]**
+
+ **name of config file with MongoDB server access details**
+
+ **(Default: /local\_dir/calipso\_mongo\_access.conf)**
+
+ **-d [CONFIG\_DIR], --config\_dir [CONFIG\_DIR]**
+
+ **path to directory with config data (Default:**
+
+ **/home/calipso/apex\_setup\_files)**
+
+ **-i [INSTALL\_DB\_DIR], --install\_db\_dir [INSTALL\_DB\_DIR]**
+
+ **path to directory with DB data (Default:**
+
+ **/home/calipso/Calipso/app/install/db)**
+
+ **-a [APEX], --apex [APEX]**
+
+ **name of environment to Apex host**
+
+ **-e [ENV], --env [ENV]**
+
+ **name of environment to create (Default: Apex-Euphrates)**
+
+ **-l [LOGLEVEL], --loglevel [LOGLEVEL]**
+
+ **logging level (default: "INFO")**
+
+ **-f [LOGFILE], --logfile [LOGFILE]**
+
+ **log file (default:**
+
+ **"/home/calipso/log/apex\_environment\_fetch.log")**
+
+ **-g [GIT], --git [GIT]**
+
+ **URL to clone Git repository (default:**
+
+ **https://git.opnfv.org/calipso)**
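+
+ An illustrative invocation combining several of the options above
+ (the Apex host name here is a placeholder, not a default):
+
+ **python3 setup\_apex\_environment.py -a undercloud.example.org -e Apex-Euphrates -l DEBUG**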
+
+Apex scenario
+-------------
+
+ Starting with Euphrates 1.0, the following scenario is added with the
+ Apex installer:
+
+ **os-nosdn-calipso-noha**
+
+ The following CI jobs are defined:
+
+ https://build.opnfv.org/ci/job/calipso-verify-euphrates/
+
+ https://build.opnfv.org/ci/job/apex-testsuite-os-nosdn-calipso-noha-baremetal-euphrates/
+
+ https://build.opnfv.org/ci/job/apex-os-nosdn-calipso-noha-baremetal-euphrates/
+
+ Note: the destination deploy server needs to have the pre-requisites
+ detailed above.
+
+Calipso functest
+----------------
+
+TBD
+----
+
+.. |image0| image:: media/image1.png
+ :width: 6.50000in
+ :height: 4.27153in
diff --git a/docs/release/developer-guide.pdf b/docs/release/developer-guide.pdf
new file mode 100644
index 0000000..2ed302e
--- /dev/null
+++ b/docs/release/developer-guide.pdf
Binary files differ
diff --git a/docs/release/developer-guide.rst b/docs/release/developer-guide.rst
new file mode 100644
index 0000000..0de3f57
--- /dev/null
+++ b/docs/release/developer-guide.rst
@@ -0,0 +1,1338 @@
+| Calipso
+| Developer Guide
+
+|image0|
+
+Project “Calipso” tries to illuminate complex virtual networking with
+real time operational state visibility for large and highly distributed
+Virtual Infrastructure Management (VIM).
+
+We believe that Stability is driven by accurate Visibility.
+
+Calipso provides visible insights using smart discovery and virtual
+topological representation in graphs, with monitoring per object in the
+graph inventory to reduce error vectors and troubleshooting, maintenance
+cycles for VIM operators and administrators.
+
+Project architecture
+====================
+
+Calipso comprises two major parts: application and UI. We’ll focus on
+the former in this developer guide.
+
+Current project structure is as follows:
+
+- root/
+
+ - app/
+
+ - api/
+
+ - responders/
+
+ - auth/
+
+ - resource/
+
+ - *server.py*
+
+ - config/
+
+ - *events.json*
+
+ - *scanners.json*
+
+ - discover/
+
+ - events/
+
+ - listeners/
+
+ - *default\_listener.py*
+
+ - *listener\_base.py*
+
+ - handlers/
+
+ - *event\_base.py*
+
+ - *event\_\*.py*
+
+ - fetchers/
+
+ - aci/
+
+ - api/
+
+ - cli/
+
+ - db/
+
+ - *event\_manager.py*
+
+ - *scan.py*
+
+ - *scan\_manager.py*
+
+ - monitoring/
+
+ - checks/
+
+ - handlers/
+
+ - *monitor.py*
+
+ - setup/
+
+ - *monitoring\_setup\_manager.py*
+
+ - test/
+
+ - api/
+
+ - event\_based\_scan/
+
+ - fetch/
+
+ - scan/
+
+ - utils/
+
+ - ui/
+
+Application structure
+---------------------
+
+‘API’ package
+~~~~~~~~~~~~~
+
+The Calipso API is designed to be used by native and third-party
+applications that plan to use the Calipso discovery application.
+
+***api/responders***
+
+This package contains all exposed API endpoint handlers:
+
+The *auth* package contains token management handlers, and the
+*resource* package contains resource handlers.
+
+***server.py***
+
+API server startup script. In order for it to work correctly, connection
+arguments for a Mongo database used by a Calipso application instance
+are required:
+
+-m [MONGO\_CONFIG], --mongo\_config [MONGO\_CONFIG]
+
+name of config file with mongo access details
+
+--ldap\_config [LDAP\_CONFIG]
+
+name of the config file with ldap server config details
+
+-l [LOGLEVEL], --loglevel [LOGLEVEL]
+
+logging level (default: 'INFO')
+
+-b [BIND], --bind [BIND]
+
+binding address of the API server (default
+
+127.0.0.1:8000)
+
+-y [INVENTORY], --inventory [INVENTORY]
+
+name of inventory collection (default: 'inventory')
+
+-t [TOKEN\_LIFETIME], --token-lifetime [TOKEN\_LIFETIME]
+
+lifetime of the token
+
+For detailed reference and endpoints guide, see the API Guide document.
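+
+For illustration, a hypothetical startup command might look like this
+(the file names and values below are examples only, not defaults)::
+
+    python3 app/api/server.py -m my_mongo.conf -b 0.0.0.0:8000 -l DEBUG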
+
+‘Discover’ package
+~~~~~~~~~~~~~~~~~~
+
+‘Discover’ package contains the core Calipso functionality which
+involves:
+
+- scanning a network topology using a defined suite of scanners (see
+ `Scanning concepts <#scanning-concepts>`__, `Scanners configuration
+ file structure <#the-scanners-configuration-file-structure>`__) that
+ use fetchers to get all needed data on objects of the topology;
+
+- tracking live events that modify the topology in any way (by adding
+  new objects, updating existing ones or deleting them) using a suite of
+  event handlers and event listeners;
+
+- managing the aforementioned suites using specialized manager scripts
+ (*scan\_manager.py* and *event\_manager.py*)
+
+‘Tests’ package
+~~~~~~~~~~~~~~~
+
+‘Tests’ package contains unit tests for main Calipso components: API,
+event handlers, fetchers, scanners and utils.
+
+Other packages
+~~~~~~~~~~~~~~
+
+***Install***
+
+Installation and deployment scripts (with initial data for Calipso
+database).
+
+***Monitoring***
+
+Monitoring configurations, checks and handlers (see
+`Monitoring <#monitoring>`__ section and Monitoring Guide document).
+
+***Utils***
+
+Utility modules for app-wide use (inventory manager, mongo access,
+loggers, etc.).
+
+Scanning Guide
+==============
+
+Introduction to scanning
+------------------------
+
+Architecture overview
+~~~~~~~~~~~~~~~~~~~~~
+
+Calipso backend will scan any OpenStack environment to discover the
+objects that it is made of, and place the objects it discovered in a
+MongoDB database.
+
+Following discovery of objects, Calipso will:
+
+| Find what links exist between these objects, and save these links to
+ MongoDB as well.
+| For example, it will create a pnic-network link from a pNIC (physical
+ NIC) and the network it is connected to.
+
+Based on user definitions, it will create a 'clique' for each object
+using the links it previously found. These cliques are later used to
+present graphs for objects being viewed in the Calipso UI. This is not a
+clique by graph theory definition, but more like the social definition
+of clique: a graph of related, interconnected nodes.
+
+
+OpenStack Scanning is done using the following methods, in order of
+preference:
+
+1. OpenStack API
+
+2. MySQL DB - fetch any extra detail we can from the infrastructure
+ MySQL DB used by OpenStack
+
+3. CLI - connect by SSH to the hosts in the OpenStack environment to run
+ commands, e.g. ifconfig, that will provide the most in-depth details.
+
+
+| *Note*: 'environment' in Calipso means a single deployment of
+ OpenStack, possibly containing multiple tenants (projects), hosts and
+ instances (VMs). A single Calipso instance can handle multiple
+ OpenStack environments. 
+| However, we expect that typically Calipso will reside inside an
+ OpenStack control node and will handle just that node's OpenStack
+ environment.
+
+
+***Environment***
+
+| The Calipso scan script, written in Python, is called scan.py.
+| It uses Python 3, along with the following libraries:
+
+- pymongo - for MongoDB access
+
+- mysql-connector - For MySQL DB access
+
+- paramiko - for SSH access
+
+- requests - For handling HTTP requests and responses to the OpenStack
+ API
+
+- xmltodict - for handling XML output of CLI commands
+
+- cryptography - used by Paramiko
+
+See Calipso installation guide for environment setup instructions.
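+
+For reference, these libraries could be installed with pip (package
+names per the list above; pinned versions are deployment-specific, see
+the installation guide)::
+
+    pip3 install pymongo mysql-connector paramiko requests xmltodict cryptography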
+
+***Configuration***
+
+The configuration for accessing the OpenStack environment, by API, DB or
+SSH, is saved in the Calipso MongoDB *“environments\_config”*
+collection.
+
+Calipso can work with a remote MongoDB instance, the details of which
+are read from a configuration file (default: */etc/calipso/mongo.conf*).
+
+| Each line of this file holds a configuration key in the first column
+  and the configuration value in the second; in this case the value is
+  the server host name or IP address.
+| Other possible keys for MongoDB access:
+
+- port: IP port number
+
+- Other parameters for the PyMongo MongoClient class constructor
+
+Alternate file location can be specified using the CLI -m parameter.
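+
+For illustration only, a minimal *mongo.conf* might look like the
+following (the key names and the whitespace-separated layout are
+assumptions based on the description above; check your deployment for
+the exact keys)::
+
+    host 10.0.0.1
+    port 27017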
+
+Scanning concepts
+~~~~~~~~~~~~~~~~~
+
+***DB Schema***
+
+Objects are stored in the inventory collection, named *“inventory”* by
+default, along with the accompanying collections, named by
+default: \ *"links", "cliques", "clique\_types" and
+"clique\_constraints"*. For development, separate sets of collections
+can be defined per environment (collection names are created by
+appending the default collection name to the alternative inventory
+collection name).
+
+The inventory, links and cliques collections are all designed to work
+with a multi-environment scenario, so documents are marked with an
+*"environment"* attribute.
+
+The clique\_types collection allows Calipso users (typically
+administrators) to define how the "clique" graphs are to be defined. 
+
+It defines a set of link types to be traversed when an object such as an
+instance is clicked in the UI (therefore referred to as the focal
+point). See "Clique Scanning" below. This definition can differ between
+environments.
+
+Example: for focal point type "instance", the link types are often set
+to
+
+- instance-vnic
+
+- vnic-vconnector
+
+- vconnector-vedge
+
+- vedge-pnic
+
+- pnic-network
+
+| The clique\_constraints collection defines a constraint on links
+  traversed for a specific clique when starting from a given focal
+  point.
+| For example: instance cliques are constrained to a specific
+  network. Without this constraint, the resulting graph
+  would stretch to include objects from neighboring networks that are
+  not really related to the instance.
+
+***Hierarchy of Scanning***
+
+The initial scanning is done hierarchically, starting from the
+environment level and discovering lower levels in turn.
+
+Examples:
+
+- Under environment we scan for regions and projects (tenants).
+
+- Under availability zone we have hosts, and under hosts we have
+ instances and host services
+
+To improve scanning performance, the actual scanning order is not always
+the same as the logical hierarchical order of objects.
+
+Some objects are referenced multiple times in the hierarchy. For
+example, hosts are always in an availability zone, but can also be part
+of a host aggregate. Such extra references are saved as references to
+the main object.
+
+***Clique Scanning***
+
+| For creating cliques based on the discovered objects and links, clique
+ types need to be defined for the given environment.
+| A clique type specifies the list of link types used in building a
+ clique for a specific focal point object type.
+
+For example, it can define that for instance objects we want to have the
+following link types:
+
+- instance-vnic
+
+- vnic-vconnector
+
+- vconnector-vedge
+
+- vedge-pnic
+
+- pnic-network
+
+
+As in many cases the same clique types are used, default clique types
+will be provided with a new Calipso deployment.
+
+***Clique creation algorithm*** (a Python sketch follows the outline below)
+
+- For each clique type CT:
+
+ - For each focal point object F of the type specified as the clique
+ type focal point type:
+
+ - Create a new clique C
+
+ - Add F to the list of objects included in the clique
+
+ - For each link type X-Y of the link types in CT:
+
+        - Find all the source objects of type X that are already
+          included in the clique
+
+ - For each such source object S:
+
+ - for all links L of type X-Y that have S as their source
+
+ - Add the object T of type Y that is the target in L to the
+ list of objects included in the clique
+
+ - Add L to the list of links in the clique C
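+
+A minimal, self-contained Python sketch of the algorithm above. The
+data shapes (dicts with *type*, *link\_type*, *source* and *target*
+keys) are simplified placeholders, not the actual Calipso inventory
+documents::
+
+    def build_cliques(clique_types, objects, links):
+        """Build one clique per focal point of each clique type."""
+        cliques = []
+        for ct in clique_types:
+            # focal points: all objects of the clique type's focal point type
+            focal_points = [o for o in objects
+                            if o["type"] == ct["focal_point_type"]]
+            for focal in focal_points:
+                clique = {"objects": [focal], "links": []}
+                for link_type in ct["link_types"]:
+                    source_type = link_type.split("-", 1)[0]
+                    # source objects of type X already in the clique
+                    sources = [o for o in clique["objects"]
+                               if o["type"] == source_type]
+                    for link in links:
+                        # follow links of this type that start at a source
+                        if link["link_type"] == link_type \
+                                and link["source"] in sources:
+                            clique["objects"].append(link["target"])
+                            clique["links"].append(link)
+                cliques.append(clique)
+        return cliques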
+
+How to run scans
+----------------
+
+For running environment scans, Calipso uses a specialized daemon script
+called *scan manager*. If the Calipso application is deployed in Docker
+containers, scan manager runs inside the *calipso-scan* container.
+
+Scan manager uses a MongoDB connection to fetch requests for environment
+scans and executes them by running a *scan* script. It also performs
+extra checks and procedures on scan failure/completion, such
+as marking the *environment* as scanned and reporting errors (see
+`details <#scan-manager>`__).
+
+Scan script workflow:
+
+1. Loads specific scanners definitions from a predefined metadata file
+ (which can be extended in order to support scanning of new object
+ types).
+
+2. Runs the root scanner and then children scanners recursively (see
+ `Hierarchy of scanning <#Hierarchy_of_scanning>`__)
+
+ a. Scanners do all necessary work to insert objects in *inventory*.
+
+3. Finalizes the scan and publishes successful scan completion.
+
+Scan manager
+~~~~~~~~~~~~
+
+Scan manager is a script whose purpose is to manage the full lifecycle
+of scans requested through the API. It runs indefinitely while:
+
+1. Polling the database (*scans* and *scheduled\_scans* collections) for
+ new and scheduled scan requests;
+
+2. Parsing their configurations;
+
+3. Running the scans;
+
+4. Logging the results.
+
+Scan manager can be run in a separate container provided that it has
+connection to the database and the topology source system.
+
+Monitoring
+----------
+
+***Monitoring Subsystem Overview***
+
+Calipso monitoring uses Sensu to remotely track the actual state of hosts.
+
+A Sensu server is installed as a Docker image along with the other
+Calipso components.
+
+
+Remote hosts send check events to the Sensu server. 
+
+Events are filtered so that the first occurrence of a check is always
+processed; subsequent events whose status is unchanged are ignored.
+
+When handling a check event, the Calipso Sensu handlers will find the
+matching Calipso object, and update its status.
+
+We also keep the timestamp of the last status update, along with the
+full check output.
+
+Setup of checks and handlers code on the server and the remote hosts can
+be done by Calipso. It is also possible to have this done using another
+tool, e.g. Ansible or Puppet.
+
+More info is available in Monitoring Guide document.
+
+
+***Package Structure***
+
+The monitoring package is divided as follows:
+
+1. Checks: the actual check scripts that are run on the hosts;
+
+2. Handlers: the code that handles check events;
+
+3. Setup: code for setting up handlers and checks.
+
+Events Guide
+============
+
+Introduction
+------------
+
+Events
+~~~~~~
+
+Events, in the general sense, are any changes to the monitored topology
+objects that are trackable by Calipso. We currently support subscription
+to Neutron notification queues for several OpenStack distributions as a
+source of events.
+
+The two core concepts of working with events are *listening to events*
+and *event handling*, so the main module groups in play are the *event
+listener* and *event handlers*.
+
+Event listeners
+~~~~~~~~~~~~~~~
+
+An event listener is a module that handles the connection to the event
+source, listens for new events and routes them to the respective event
+handlers.
+
+An event listener class should be designed to run indefinitely in
+foreground or background (daemon) while maintaining a connection to the
+source of events (generally a message queue like RabbitMQ or Apache
+Kafka). Each incoming event is examined and, if it has the correct
+format, is routed to the corresponding event handler class. The routing
+can be accomplished through a dedicated event router class using a
+metadata file and a metadata parser (see `Metadata
+parsers <#metadata-parsers>`__).
+
+Event handlers
+~~~~~~~~~~~~~~
+
+An event handler is a specific class that parses the incoming event
+payload and performs a certain CUD (Create/Update/Delete) operation on
+zero or more database objects. An event handler should be independent of
+the event listener implementation.
+
+Event manager
+~~~~~~~~~~~~~
+
+Event manager is a script whose purpose is to manage event listeners. It
+runs indefinitely and performs the following operations:
+
+1. Starts a process for each valid entry in *environments\_config*
+ collection that is scanned (*scanned == true*) and has the *listen*
+ flag set to *true*;
+
+2. Checks the *operational* statuses of event listeners and updates
+   them in *environments\_config* collection;
+
+3. Stops the event listeners that no longer qualify for listening (see
+ step 1);
+
+4. Restarts the event listeners that quit unexpectedly;
+
+5. Repeats steps 1-4.
+
+Event manager can be run in a separate container provided that it has
+connection to the database and to all events source systems that event
+listeners use.
+
+Contribution
+~~~~~~~~~~~~
+
+You can contribute to Calipso *events* system in several ways:
+
+- create custom event handlers for an existing listener;
+
+- create custom event listeners and reuse existing handlers;
+
+- create custom event handlers and listeners.
+
+See `Creating new event handlers <#creating-new-event-handlers>`__ and
+`Creating new event listeners <#creating-new-event-listeners>`__ for the
+respective guides.
+
+Contribution
+============
+
+This section covers the designed approach to contribution to Calipso.
+
+The main scenario of contribution consists of introducing a new *object*
+type to the discovery engine, defining *links* that connect this new
+object to existing ones, and describing a *clique* (or cliques) that
+makes use of the object and its links. Below we describe how this
+scenario should be implemented, step-by-step.
+
+*Note*: Before writing any new code, you need to create your own
+environment using UI (see User Guide document) or API (see the API guide
+doc). Creating an entry directly in *“environments\_config”* collection
+is not recommended.
+
+Creating new object types
+-------------------------
+
+Before you proceed with the creation of a new object type, you need to
+make sure the following requirements are met:
+
+- New object type has a unique name and purpose
+
+- New object type has an existing parent object type
+
+First of all, you need to create a fetcher that will take care of
+getting info on objects of the new type, processing it and adding new
+entries in Calipso database.
+
+Creating new fetchers
+~~~~~~~~~~~~~~~~~~~~~
+
+A fetcher is a common name for a class that handles fetching of all
+objects of a certain type that have a common parent object. The source
+of this data may be already implemented in Calipso (like OpenStack API,
+CLI and DB sources) or you may create one yourself.
+
+***Common fetchers***
+
+Fetchers package structure should adhere to the following pattern (where
+*%source\_name%* is a short prefix like *api, cli, db*):
+
+- app
+
+ - discover
+
+ - fetchers
+
+ - *%source\_name%*
+
+ - *%source\_name%*\ \_\ *%fetcher\_name%.*\ py
+
+If you reuse the existing data source, your new fetcher should subclass
+the class located in *%source\_name%\_access* module inside the
+*%source\_name%* directory.
+
+Fetcher class name should repeat the module name, except in CamelCase
+instead of snake\_case.
+
+Example: if you are adding a new CLI fetcher, you should subclass the
+*CliAccess* class found at *app/discover/fetchers/cli/cli\_access.py*.
+If the module is named *cli\_fetch\_new\_objects.py*, the fetcher
+class should be named *CliFetchNewObjects*.
+
+If you are creating a fetcher that uses new data source, you may
+consider adding an “access” class for this data source to store
+convenience methods. In this case, the “access” class should subclass
+the base Fetcher class (found in *app/discover/fetcher.py*) and the
+fetcher class should subclass the “access” class.
+
+All business logic of a fetcher should be defined inside the overridden
+*get(self, parent\_id)* method of the base Fetcher class. You should use
+the second argument, which is automatically passed by the parent
+scanner, to get the parent entity from the database along with any other
+data you may need. This method has to return a list of new objects
+(dicts) to be inserted in the Calipso database. Their parent object
+should be passed along with the other fields (see example).
+
+*Note*: the type of the returned objects should match the one their
+fetcher is designed for.
+
+***Example***:
+
+**app/discover/fetchers/cli/cli\_fetch\_new\_objects.py**
+
+::
+
+    from discover.fetchers.cli.cli_access import CliAccess
+    from utils.inventory_mgr import InventoryMgr
+
+
+    class CliFetchNewObjects(CliAccess):
+        def __init__(self):
+            super().__init__()
+            self.inv = InventoryMgr()
+
+        def get(self, parent_id):
+            # fetch the parent object using the id passed by the scanner
+            parent = self.inv.get_by_id(self.env, parent_id)
+            # do something to discover the new objects, then return them
+            objects = [{"type": "new_type", "id": "1234", "parent": parent},
+                       {"type": "new_type", "id": "2345", "parent": parent}]
+            return objects
+
+This is an example of a fetcher that deals with the objects of type
+*“new\_type”*. It uses the parent id to fetch the parent object, then
+performs some operations in order to fetch the new objects and
+ultimately returns the objects list, at which point it has gathered all
+required information.
+
+\ ***Folder fetcher***
+
+A special type of fetcher is the folder fetcher. It serves as a dummy
+object used to aggregate objects at a specific point in the object
+hierarchy. If you would like to logically separate child objects from
+their parent, you may use the folder fetcher found at
+*app/discover/fetchers/folder\_fetcher.py*.
+
+Usage is described `here <#Folder_scanner>`__.
+
+The scanners configuration file structure
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+**Scanners.json** (full path *app/config/scanners.json*) is an essential
+configuration file that defines scanners hierarchy. It has a forest
+structure, meaning that it is a set of trees, where each tree has a
+*root* scanner, potentially many levels of *children* scanners and
+pointers from parent scanners to children scanners. Scanning hierarchy
+is described `here <#Hierarchy_of_scanning>`__.
+
+A scanner is essentially a list of fetchers with configuration (we’ll
+call those **Fetch types**). Fetch types can be **Simple** and
+**Folder**, described below.
+
+***Simple fetch type***
+
+A simple fetch type looks like this::
+
+    {
+        "type": "project",
+        "fetcher": "ApiFetchProjects",
+        "object_id_to_use_in_child": "name",
+        "environment_condition": {
+            "mechanism_drivers": "VPP"
+        },
+        "children_scanner": "ScanProject"
+    }
+
+Supported fields include:
+
+- *“fetcher”* – class name of fetcher that the scanner uses;
+
+- *“type”* – object type that the fetcher works with;
+
+- *“children\_scanner”* – (optional) full name of a scanner that should
+ run after current one finishes;
+
+- *“environment\_condition”* – (optional) specific constraints that
+ should be checked against the environment in *environments\_config*
+ collection before execution;
+
+- *“object\_id\_to\_use\_in\_child”* – (optional) which parent field
+  should be passed as parent id to the fetcher (default: “id”).
+
+***Folder fetch type***
+
+Folder fetch types deal with folder fetchers (described
+`here <#Folder_fetcher>`__) and have a slightly different structure::
+
+    {
+        "type": "aggregates_folder",
+        "fetcher": {
+            "folder": true,
+            "types_name": "aggregates",
+            "parent_type": "region"
+        },
+        "object_id_to_use_in_child": "name",
+        "environment_condition": {
+            "mechanism_drivers": "VPP"
+        },
+        "children_scanner": "ScanAggregatesRoot"
+    }
+
+The only difference is that *“fetcher”* field is now a dictionary with
+the following fields:
+
+- *“folder”* – should always be **true**;
+
+- *“types\_name”* – type name in plural (with added ‘s’) of objects
+ that serve as folder’s children
+
+- *“parent\_type”* – folder’s parent type (basically the parent type of
+ folder’s objects).
+
+Updating scanners
+~~~~~~~~~~~~~~~~~
+
+After creating a new fetcher, you should integrate it into scanners
+hierarchy. There are several possible courses of action:
+
+***Add new scanner as a child of an existing one***
+
+If the parent type of your newly added object type already has a
+scanner, you can add your new scanner as a child of an existing one.
+There are two ways to do that:
+
+1. Add new scanner as a *“children\_scanner”* field to parent scanner
+
+ ***Example***
+
+    Before::
+
+        "ScanHost": [
+            {
+                "type": "host",
+                "fetcher": "ApiFetchProjectHosts"
+            }
+        ],
+
+    After::
+
+        "ScanHost": [
+            {
+                "type": "host",
+                "fetcher": "ApiFetchProjectHosts",
+                "children_scanner": "NewTypeScanner"
+            }
+        ],
+        "NewTypeScanner": [
+            {
+                "type": "new_type",
+                "fetcher": "CliFetchNewType"
+            }
+        ]
+
+2. Add new fetch type to parent scanner (in case the children scanner
+   already exists)
+
+ ***Example***
+
+    Before::
+
+        "ScanHost": [
+            {
+                "type": "host",
+                "fetcher": "ApiFetchProjectHosts",
+                "children_scanner": "ScanHostPnic"
+            }
+        ],
+
+    After::
+
+        "ScanHost": [
+            {
+                "type": "host",
+                "fetcher": "ApiFetchProjectHosts",
+                "children_scanner": "ScanHostPnic"
+            },
+            {
+                "type": "new_type",
+                "fetcher": "CliFetchNewType"
+            }
+        ],
+
+***Add new scanner and set an existing one as a child***
+
+***Example***
+
+    Before::
+
+        "ScanHost": [
+            {
+                "type": "host",
+                "fetcher": "ApiFetchProjectHosts",
+                "children_scanner": "ScanHostPnic"
+            }
+        ],
+
+    After::
+
+        "NewTypeScanner": [
+            {
+                "type": "new_type",
+                "fetcher": "CliFetchNewType",
+                "children_scanner": "ScanHost"
+            }
+        ],
+        "ScanHost": [
+            {
+                "type": "host",
+                "fetcher": "ApiFetchProjectHosts",
+                "children_scanner": "ScanHostPnic"
+            }
+        ],
+
+***Other cases***
+
+You may choose to combine approaches or use none of them and create an
+isolated scanner if needed.
+
+Updating constants collection
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Before testing your new scanner and fetcher, you need to add the newly
+created object type to the *“constants”* collection in the Calipso
+database (a hypothetical pymongo sketch follows the list):
+
+1. **constants.object\_types** document
+
+ Append a *{“value”: “new\_type”, “label”: “new\_type”}* object to
+ **data** list.
+
+2. **constants.scan\_object\_types** document
+
+ Append a *{“value”: “new\_type”, “label”: “new\_type”}* object to
+ **data** list.
+
+3. **constants.object\_types\_for\_links** document
+
+ If you’re planning to build links using this object type (you
+ probably are), append a *{“value”: “new\_type”, “label”:
+ “new\_type”}* object to **data** list.
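+
+A minimal pymongo sketch of the three updates above. The connection
+details and the assumption that each *constants* document is keyed by a
+*name* field are illustrative only::
+
+    from pymongo import MongoClient
+
+    db = MongoClient("localhost", 27017).calipso  # assumed DB name
+    new_type = {"value": "new_type", "label": "new_type"}
+    for doc_name in ("object_types", "scan_object_types",
+                     "object_types_for_links"):
+        # append the new type to the "data" list, avoiding duplicates
+        db.constants.update_one({"name": doc_name},
+                                {"$addToSet": {"data": new_type}})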
+
+Setting up monitoring
+~~~~~~~~~~~~~~~~~~~~~
+
+In order to set up monitoring for the new object type you have defined,
+you’ll need to add a Sensu check:
+
+1. Add a check script in app/monitoring/checks:
+
+ a. | Checks should return the following values:
+ | 0: **OK**
+ | 1: **Warning**
+ | 2: **Error**
+
+    b. Checks can print the underlying query results to stdout. Do so
+       within reason: this output is later stored in the DB, so avoid
+       producing too much output;
+
+ c. Test your script on a remote host:
+
+       i. Place it in the */etc/sensu/plugins* directory;
+
+ ii. Update the Sensu configuration on the remote host to run this
+ check;
+
+ iii. Add the check in the “checks” section of
+ */etc/sensu/conf.d/client.json*;
+
+ iv. The name under which you save the check will be used by the
+ handler to determine the DB object that it relates to;
+
+ v. Restart the client with the command: *sudo service
+ sensu-client restart*;
+
+        vi. Check the client log file to verify that the check runs and
+            produces the expected output (in the */var/log/sensu* directory).
+
+ d. Add the script to the source directory (*app/monitoring/checks*).
+
+2. Add a handler in app/monitoring/handlers:
+
+ a. If you use a standard check naming scheme and check an object, the
+ *BasicCheckHandler* can take care of this, but add the object type
+ in *basic\_handling\_types* list in *get\_handler()*;
+
+ b. If you have a more complex naming scheme, override
+ MonitoringCheckHandler. See HandleOtep for example.
+
+3. If you deploy monitoring using Calipso:
+
+ a. Add the check in the *monitoring\_config\_templates* collection.
+
+*Check Naming*
+
+The check name should start with the type of the related object,
+followed by an underscore (“\_”). For example, the name for a check
+related to an OTEP (type “otep”) will start with “otep\_”. It should
+then be followed by the object ID.
+
+For checks related to links, the check name has the following format::
+
+    link_<link type>_<from_id>_<to_id>
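+
+For illustration, a minimal check script might look like the sketch
+below (the checked command is a placeholder; real checks live in
+*app/monitoring/checks*)::
+
+    #!/usr/bin/env python
+    import subprocess
+    import sys
+
+    OK, WARNING, ERROR = 0, 1, 2  # Sensu check exit codes
+
+    try:
+        # replace with a real query on the monitored object
+        output = subprocess.check_output(["echo", "status ok"])
+    except subprocess.CalledProcessError as e:
+        print("check failed: {}".format(e))
+        sys.exit(ERROR)
+
+    # keep the output short - the handler stores it in the DB
+    print(output.decode())
+    sys.exit(OK)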
+
+Creating new link types
+-----------------------
+
+After you’ve added a new object type, you may consider adding new link
+types to connect objects of the new type to existing objects in the
+topology. Your new object type may serve as a *source* and/or *target*
+type for the new link type.
+
+The process of new link type creation includes several steps:
+
+1. Write a link finder class;
+
+2. Add the link finder class to the link finders configuration file;
+
+3. Update *“constants”* collection with the new link types.
+
+Writing link finder classes
+~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+A new link finder class should:
+
+1. Subclass *app.discover.link\_finders.FindLinks* class;
+
+2. Be located in the *app.discover.link\_finders* package;
+
+3. Define an instance method called *add\_links(self)* with no
+ additional arguments. This method is the only entry point for link
+ finder classes.
+
+The *FindLinks* class gives its subclasses access to the inventory
+manager, which they should use to their advantage. It also provides a
+convenience method, *create\_link(self, …)*, for saving links to the
+database (see the example below). It is reasonable to call this method
+at the end of the *add\_links* method.
+
+You may opt to add more than one link type at a time in a single link
+finder.
+
+***Example***
+
+::
+
+    from discover.find_links import FindLinks
+
+
+    class FindLinksForNewType(FindLinks):
+        def add_links(self):
+            new_objects = self.inv.find_items({"environment": self.get_env(),
+                                               "type": "new_type"})
+            for new_object in new_objects:
+                old_object = self.inv.get_by_id(
+                    environment=self.get_env(),
+                    item_id=new_object["old_object_id"])
+                link_type = "old_type-new_type"
+                link_name = "{}-{}".format(old_object["name"],
+                                           new_object["name"])
+                state = "up"  # TBD
+                link_weight = 0  # TBD
+                self.create_link(env=self.get_env(),
+                                 source=old_object["_id"],
+                                 source_id=old_object["id"],
+                                 target=new_object["_id"],
+                                 target_id=new_object["id"],
+                                 link_type=link_type,
+                                 link_name=link_name,
+                                 state=state,
+                                 link_weight=link_weight)
+
+Updating the link finders configuration file
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The default link finders configuration file can be found at
+*/app/config/link\_finders.json* and has the following structure::
+
+    {
+        "finders_package": "discover.link_finders",
+        "base_finder": "FindLinks",
+        "link_finders": [
+            "FindLinksForInstanceVnics",
+            "FindLinksForOteps",
+            "FindLinksForPnics",
+            "FindLinksForVconnectors",
+            "FindLinksForVedges",
+            "FindLinksForVserviceVnics"
+        ]
+    }
+
+File contents:
+
+- *finders\_package* – python path to the package that contains link
+ finders (relative to $PYTHONPATH environment variable);
+
+- *base\_finder* – base link finder class name;
+
+- *link\_finders* – class names of actual link finders.
+
+If your new link finder meets the requirements described in the `Writing
+link finder classes <#writing-link-finder-classes>`__ section, you can
+append its class name to the *“link\_finders”* list in the
+*link\_finders.json* file.
+
+Updating constants collection
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Before testing your new link finder, you need to add the newly created
+link types to the *“constants”* collection in the Calipso database:
+
+1. **constants.link\_types** document
+
+ Append a *{“value”: “source\_type-target\_type”, “label”:
+ “source\_type-target\_type”}* object to **data** list for each new
+ link type.
+
+Creating custom link finders configuration file
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+If you consider writing a custom link finders configuration file, you
+should also follow the guidelines from 4.2.1-4.2.3 while designing link
+finder classes and including them in the new link finders source file.
+
+The general approach is the following (a short sketch follows the list):
+
+1. The custom configuration file should have the same key structure as
+   the basic one;
+
+2. You should create a *base\_finder* class that subclasses the basic
+ FindLinks class (see `Writing link finder
+ classes <#writing-link-finder-classes>`__);
+
+3. Your link finder classes should be located in the same package with
+ your *base\_finder* class;
+
+4. Your link finder classes should subclass your *base\_finder* class
+ and override the *add\_links(self)* method.
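+
+A minimal sketch of guidelines 2-4, with hypothetical class names (both
+classes would live in the same custom finders package)::
+
+    from discover.find_links import FindLinks
+
+
+    class MyBaseFinder(FindLinks):
+        """Hypothetical base_finder for a custom configuration file."""
+        # shared convenience methods for the custom finders go here
+
+
+    class FindLinksForMyType(MyBaseFinder):
+        def add_links(self):
+            # find new objects and save links, e.g. via self.create_link(...)
+            pass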
+
+Creating new clique types
+-------------------------
+
+Two steps in creating new clique types and including them in clique
+finder are:
+
+1. Designing new clique types
+
+2. Updating clique types collection
+
+Designing new clique types
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+A clique type is basically a list of links that will be traversed during
+clique scans (see `Clique creation algorithm <#clique_creation>`__). The
+process of coming up with clique types involves general networking
+concepts knowledge as well as expertise in monitored system details
+(e.g. OpenStack distribution specifics). In a nutshell, it is not a
+trivial process, so the clique design should be considered carefully.
+
+The predefined clique types (in *clique\_types* collection) may give you
+some idea about the rationale behind clique design.
+
+Updating clique types collection
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+After designing the new clique type, you need to update the
+*clique\_types* collection in order for the clique finder to use it. For
+this purpose, you should add a document of the following structure::
+
+    {
+        "environment": "ANY",
+        "link_types": [
+            "instance-vnic",
+            "vnic-vconnector",
+            "vconnector-vedge",
+            "vedge-otep",
+            "otep-vconnector",
+            "vconnector-host_pnic",
+            "host_pnic-network"
+        ],
+        "name": "instance",
+        "focal_point_type": "instance"
+    }
+
+Document fields are:
+
+- *environment* – can either hold the environment name, for which the
+ new clique type is designed, or **“ANY”** if the new clique type
+ should be added to all environments;
+
+- *name* – display name for the new clique type;
+
+- *focal\_point\_type* – the object type that the new clique type uses
+  as its starting (focal) point;
+
+- *link\_types* – a list of links that constitute the new clique type.
+
+Creating new event handlers
+---------------------------
+
+There are three steps to creating a new event handler:
+
+1. Determining *event types* that will be handled by the new handler;
+
+2. Writing the new handler module and class;
+
+3. Adding the (event type -> handler) mapping to the event handlers
+ configuration file.
+
+Writing custom handler classes
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Each event handler should adhere to the following design:
+
+1. Event handler class should subclass
+ the app.discover.events.event\_base.EventBase class;
+
+2. Event handler class should override the *handle* method of EventBase.
+   Business logic of the event handler should be placed inside
+   the *handle* method;
+
+ a. Handle method accepts two arguments: environment name (str) and
+ notification contents (dict). No other event data will be provided
+ to the method;
+
+ b. Handle method returns an EventResult object, which accepts the
+ following arguments in its constructor:
+
+ i. *result* (mandatory) - determines whether the event handling
+ was successful;
+
+ ii. *retry* (optional) - determines whether the message should be
+ put back in the queue in order to be processed later. This
+ argument is checked only if result was set to False;
+
+ iii. *message* (optional) - (Currently unused) a string comment on
+ handling status;
+
+ iv. *related\_object* (optional) – id of the object related to
+ the handled event;
+
+ v. *display\_context* (optional) – (Calipso UI requirement).
+
+3. Module containing event handler class should have the same name as
+ the relevant handler class except translated
+ from UpperCamelCase to snake\_case.
+
+ ***Example:***
+
+ **app/discover/events/event\_new\_object\_add.py**
+
+    ::
+
+        from discover.events.event_base import EventBase, EventResult
+
+
+        class EventNewObjectAdd(EventBase):
+            def handle(self, env: str, notification: dict) -> EventResult:
+                obj_id = notification['payload']['new_object']['id']
+                obj = {
+                    'id': obj_id,
+                    'type': 'new_object'
+                }
+                self.inv.set(obj)
+                return EventResult(result=True)
+
+    Modifications in *events.json*::
+
+        <...>
+
+        "event_handlers": {
+            <...>
+            "new_object.create": "EventNewObjectAdd",
+            <...>
+        }
+
+        <...>
+
+After these changes are implemented, any event of type
+new\_object.create will be consumed by the event listener and the
+payload will be passed to EventNewObjectAdd handler which will insert a
+new document in the database.
+
+Event handlers configuration file structure
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+**Events.json** (full path *app/config/events.json*) is a configuration
+file that contains information about events and event handlers,
+including:
+
+- Event subscription details (queues and exchanges for Neutron
+ notifications);
+
+- Location of event handlers package;
+
+- Mappings between event types and respective event handlers.
+
+The structure of *events.json* is as follows::
+
+    {
+        "handlers_package": "discover.events",
+        "queues": [
+            {
+                "queue": "notifications.nova",
+                "exchange": "nova"
+            },
+            <...>
+        ],
+        "event_handlers": {
+            "compute.instance.create.end": "EventInstanceAdd",
+            "compute.instance.update": "EventInstanceUpdate",
+            "compute.instance.delete.end": "EventInstanceDelete",
+            "network.create.end": "EventNetworkAdd",
+            <...>
+        }
+    }
+
+The root object contains the following fields:
+
+- **handlers\_package** - python path to the package that contains
+ event handlers (relative to $PYTHONPATH environment variable)
+
+- **queues –** RabbitMQ queues and exchanges to consume messages from
+ (for Neutron notifications case)
+
+- **event\_handlers** – mappings of event types to the respective
+ handlers. The structure suggests that any event can have only one
+ handler.
+
+In order to add a new event handler to the configuration file, you
+should add another mapping to the event\_handlers object, where key is
+the event type being handled and value is the handler class name (module
+name will be determined automatically).
+
+If your event is being published to a queue and/or exchange that the
+listener is not subscribed to, you should add another entry to the
+queues list.
+
+Creating new event listeners
+----------------------------
+
+At the moment, the only guideline for creation of new event listeners is
+that they should subclass the *ListenerBase* class (full path
+*app/discover/events/listeners/listener\_base.py*) and override the
+*listen(self)* method that listens to incoming events indefinitely
+(until terminated by a signal).
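+
+A minimal skeleton that follows this guideline (the class name and the
+helper calls are hypothetical, not part of the actual *ListenerBase*
+API)::
+
+    from discover.events.listeners.listener_base import ListenerBase
+
+
+    class MyQueueListener(ListenerBase):
+        def listen(self):
+            while True:
+                # connect to the event source, consume one event and
+                # route it to the matching handler (hypothetical helpers)
+                event = self.get_next_event()
+                self.route_event(event)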
+
+In future versions, a comprehensive guide to listeners structure is
+planned.
+
+Metadata parsers
+----------------
+
+Metadata parsers are specialized classes that are designed to verify
+metadata files (found in the *app/config* directory), use data from them
+to load instances of implementation classes (e.g. scanners, event
+handlers, link finders) in memory, and supply them by request. The
+scanners and link finders configuration files are used by the scanner;
+the event handlers configuration file is used by the event listener.
+
+In order to create a new metadata parser, you should consider
+subclassing *MetadataParser* class (found in
+*app/utils/metadata\_parser.py*). *MetadataParser* supports parsing and
+validating of json files out of the box. Entry point for the class is
+the *parse\_metadata\_file* method, which requires the abstract
+*get\_required\_fields* method to be overridden in subclasses. This
+method should return a list of keys that the metadata file is required
+to contain.
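+
+A minimal subclass might look like this (the required keys are
+hypothetical)::
+
+    from utils.metadata_parser import MetadataParser
+
+
+    class MyMetadataParser(MetadataParser):
+        def get_required_fields(self):
+            # keys that the metadata file must contain
+            return ['my_package', 'my_class_names']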
+
+For different levels of customization you may consider:
+
+1. Overriding *validate\_metadata* method to provide more precise
+ validation of metadata;
+
+2. Overriding *parse\_metadata\_file* to provide custom metadata
+ handling required by your use case.
+
+.. |image0| image:: media/image1.png
+ :width: 6.50000in
+ :height: 4.27153in
diff --git a/docs/release/index.rst b/docs/release/index.rst
deleted file mode 100644
index 4c6c80c..0000000
--- a/docs/release/index.rst
+++ /dev/null
@@ -1,20 +0,0 @@
-.. This work is licensed under a Creative Commons Attribution 4.0 International License.
-.. http://creativecommons.org/licenses/by/4.0
-.. (c) OPNFV and others.
-
-.. _calipso-release-guide:
-
-=====================
-Calipso Release Guide
-=====================
-
-.. toctree::
- :maxdepth: 2
-
- about.rst
- admin-guide.rst
- api-guide.rst
- calipso-model.rst
- install-guide-rst
- monotoring-guide.rst
- quickstart-guide.rst
diff --git a/docs/release/media/image101.png b/docs/release/media/image101.png
new file mode 100644
index 0000000..b0a8a5c
--- /dev/null
+++ b/docs/release/media/image101.png
Binary files differ
diff --git a/docs/release/media/image102.png b/docs/release/media/image102.png
new file mode 100644
index 0000000..8c8d413
--- /dev/null
+++ b/docs/release/media/image102.png
Binary files differ
diff --git a/docs/release/media/image103.png b/docs/release/media/image103.png
new file mode 100644
index 0000000..cc65824
--- /dev/null
+++ b/docs/release/media/image103.png
Binary files differ
diff --git a/docs/release/media/image104.png b/docs/release/media/image104.png
new file mode 100644
index 0000000..2418dcf
--- /dev/null
+++ b/docs/release/media/image104.png
Binary files differ
diff --git a/docs/release/media/image105.png b/docs/release/media/image105.png
new file mode 100644
index 0000000..1d7fc26
--- /dev/null
+++ b/docs/release/media/image105.png
Binary files differ
diff --git a/docs/release/media/image106.png b/docs/release/media/image106.png
new file mode 100644
index 0000000..029589a
--- /dev/null
+++ b/docs/release/media/image106.png
Binary files differ
diff --git a/docs/release/media/image107.png b/docs/release/media/image107.png
new file mode 100644
index 0000000..7ac129d
--- /dev/null
+++ b/docs/release/media/image107.png
Binary files differ
diff --git a/docs/release/scenarios/os-nosdn-calipso-noha/index.rst b/docs/release/scenarios/os-nosdn-calipso-noha/index.rst
deleted file mode 100644
index 0e65a74..0000000
--- a/docs/release/scenarios/os-nosdn-calipso-noha/index.rst
+++ /dev/null
@@ -1,15 +0,0 @@
-.. _os-nosdn-calipso-noha:
-
-.. This work is licensed under a Creative Commons Attribution 4.0 International Licence.
-.. http://creativecommons.org/licenses/by/4.0
-.. (c) <optionally add copywriters name>
-
-==============================================
-os-nosdn-calipso-noha overview and description
-==============================================
-
-.. toctree::
- :numbered:
- :maxdepth: 4
-
- apex-scenario-guide.rst