-rw-r--r--  docs/requirements/use_cases.rst                                  3
-rw-r--r--  docs/requirements/use_cases/georedundancy.rst                   50
-rw-r--r--  docs/requirements/use_cases/georedundancy_cells.rst             60
-rw-r--r--  docs/requirements/use_cases/georedundancy_regions_insances.rst  13
4 files changed, 69 insertions, 57 deletions
diff --git a/docs/requirements/use_cases.rst b/docs/requirements/use_cases.rst
index 0488d69..a39e6f0 100644
--- a/docs/requirements/use_cases.rst
+++ b/docs/requirements/use_cases.rst
@@ -10,5 +10,4 @@ The following sections address networking use cases that have been identified to
use_cases/l3vpn.rst
use_cases/port_abstraction.rst
use_cases/programmable_provisioning.rst
- use_cases/georedundancy_cells.rst
- use_cases/georedundancy_regions_insances.rst
+ use_cases/georedundancy.rst
diff --git a/docs/requirements/use_cases/georedundancy.rst b/docs/requirements/use_cases/georedundancy.rst
new file mode 100644
index 0000000..47bd9ca
--- /dev/null
+++ b/docs/requirements/use_cases/georedundancy.rst
@@ -0,0 +1,50 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International License.
+.. http://creativecommons.org/licenses/by/4.0
+
+Georedundancy Use Cases
+=======================
+Georedundancy refers to a configuration which ensures the service continuity of
+the VNFs even if a whole datacenter fails [Q: Do we include or exclude VNF
+pooling?].
+
+This can be achieved by redundant VNFs in hot standby (the spare VNF is
+running, and its configuration and internal state are synchronised with the
+active VNF), warm standby (the spare VNF is running, and its configuration is
+synchronised with the active VNF) or cold standby (the spare VNF is not
+running; the active VNF's configuration is stored in a database and loaded
+into the spare VNF during its activation) in a different datacenter from where
+the active VNFs are running.
+The synchronisation and data transfer can be handled by the application or by
+the infrastructure. In all of these georedundancy setups a network connection
+is needed between the datacenter running the active VNF and the datacenter
+running the spare VNF.
+
+In case of a distributed cloud it is possible that the georedundant cloud of an
+application is not predefined, or changes over time, and the change requires
+configuration in the underlay networks.
+
+This set of georedundancy use cases is about enabling the possibility to
+select a datacenter as backup datacenter and to build the connectivity between
+the NFVIs in the different datacenters in a programmable way.
+
+As an example, the following figure (:numref:`georedundancy-before`) shows a
+multi-cell cloud setup where the underlay network is not fully meshed.
+
+.. figure:: images/georedundancy-before.png
+ :name: georedundancy-before
+ :width: 25%
+
+Each datacenter (DC) is a separate OpenStack cell, region or instance. Let's
+assume that a new VNF is started in DC b with a redundant VNF in DC d. In this
+case a direct underlay network connection is needed between DC b and DC d. The
+configuration of this connection should be programmable in both DC b and DC d.
+The result of the deployment is shown in the following figure
+(:numref:`georedundancy-after`):
+
+.. figure:: images/georedundancy-after.png
+ :name: georedundancy-after
+ :width: 25%
+
+
+
+.. toctree::
+ georedundancy_cells.rst
+ georedundancy_regions_insances.rst
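
The hot/warm/cold standby taxonomy introduced in the new file could be sketched
as a small data model. This is a hypothetical illustration only, not part of
any OpenStack or Neutron API; all names are invented:

```python
from dataclasses import dataclass
from enum import Enum


class StandbyMode(Enum):
    """Standby modes for a georedundant spare VNF, as described above."""
    HOT = "hot"    # spare running; configuration and internal state synchronised
    WARM = "warm"  # spare running; only configuration synchronised
    COLD = "cold"  # spare not running; configuration loaded on activation


@dataclass
class GeoredundantPair:
    """Hypothetical record pairing an active VNF with its spare in another DC."""
    active_dc: str
    spare_dc: str
    mode: StandbyMode

    def needs_state_sync(self) -> bool:
        # Only hot standby keeps the internal state synchronised.
        return self.mode is StandbyMode.HOT

    def spare_is_running(self) -> bool:
        # Hot and warm standby both keep the spare VNF running.
        return self.mode in (StandbyMode.HOT, StandbyMode.WARM)


pair = GeoredundantPair(active_dc="DC b", spare_dc="DC d", mode=StandbyMode.WARM)
print(pair.spare_is_running(), pair.needs_state_sync())  # True False
```

In every mode the two datacenters still need the underlay connection the use
cases below ask for; only the amount of data crossing it differs.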
diff --git a/docs/requirements/use_cases/georedundancy_cells.rst b/docs/requirements/use_cases/georedundancy_cells.rst
index 95ffc6f..34269dc 100644
--- a/docs/requirements/use_cases/georedundancy_cells.rst
+++ b/docs/requirements/use_cases/georedundancy_cells.rst
@@ -1,71 +1,31 @@
.. This work is licensed under a Creative Commons Attribution 4.0 International License.
.. http://creativecommons.org/licenses/by/4.0
-Georedundancy: Connection between different OpenStack cells
------------------------------------------------------------
-Georedundancy refers to a configuration which ensures the service continuity of
-the VNF-s even if a whole datacenter fails [Q: Do we include or exclude VNF
-pooling?].
-
-This can be achieved by redundant VNF-s in a hot (spare VNF is running its
-configuration and internal state is synchronised to the active VNF),
-warm (spare VNF is running, its configuration is synchronised to the active VNF)
-or cold (spare VNF is not running, active VNF-s configuration is stored in a
-database and dropped to the spare VNF during its activation) standby state in a
-different datacenter from where the active VNF-s are running.
-The synchronisation and data transfer can be handled by the application or the infrastructure.
-In all of these georedundancy setups there is a need for a network connection
-between the datacenter running the active VNF and the datacenter running the
-spare VNF.
-
-In case of a distributed cloud it is possible that the georedundant cloud of an application
-is not predefined or changed and the change requires configuration in the underlay networks.
-
-This set of georedundancy use cases is about enabling the possiblity to select a datacenter as
-backup datacenter and build the connectivity between the NFVI-s in the
-different datacenters in a programmable way.
-
-As an example the following picture (:numref:`georedundancy-before`) shows a
-multicell cloud setup where the underlay network is not fully mashed.
-
-.. figure:: images/georedundancy-before.png
- :name: georedundancy-before
- :width: 25%
-
-Each datacenter (DC) is a separate OpenStack cell, region or instance. Let's
-assume that a new VNF is started in DC b with a Redundant VNF in DC d. In this
-case a direct underlay network connection is needed between DC b and DC d. The
-configuration of this connection should be programable in both DC b and DC d.
-The result of the deployment is shown in the following figure
-(:numref:`georedundancy-after`):
-
-.. figure:: images/georedundancy-after.png
- :name: georedundancy-after
- :width: 25%
-
+Connection between different OpenStack cells
+--------------------------------------------
Description
^^^^^^^^^^^
There should be an API to manage the infrastructure's networks between two
OpenStack cells.
(Note: In the Mitaka release of OpenStack only cells v1 is considered, as
cells v2 functionality is still under implementation.)
+This capability exists in the different SDN controllers, for example the Add
+New BGP Neighbour API of OpenDaylight. OpenStack Neutron should provide an
+abstracted API for this functionality, which then calls the given SDN
+controller's related API.
-- Maybe the existing capability of Neutron to have several subnets associated
- to an external network is enough?
-
-Requirements
-^^^^^^^^^^^^
+Derived Requirements
+^^^^^^^^^^^^^^^^^^^^
- Possibility to define a remote and a local endpoint
- As the nova-api service is shared between cells, it should be possible
  to identify the cell in the API calls
-Northbound API / Workflow
-"""""""""""""""""""""""""
+Northbound API
+""""""""""""""
- An infrastructure network management API is needed
- When the endpoints are created, Neutron is configured to use the new network.
(Note: Nova networking is not considered as it is deprecated.)
-
Data model objects
""""""""""""""""""
- TBD
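
The derived requirements above (a local and a remote endpoint, plus a cell
identifier, since nova-api is shared) could translate into a request body
like the following. This is a sketch of a hypothetical abstracted Neutron-style
API; no such API exists today, and every field name is invented for
illustration:

```python
def build_intercell_link_request(local_cell, local_endpoint,
                                 remote_cell, remote_endpoint):
    """Build a request body for a hypothetical inter-cell underlay link API.

    The shape mirrors the derived requirements only: a local and a remote
    endpoint can be defined, and the cell is identified explicitly in the
    call because the nova-api service is shared between cells.
    """
    return {
        "intercell_link": {
            "local": {"cell": local_cell, "endpoint": local_endpoint},
            "remote": {"cell": remote_cell, "endpoint": remote_endpoint},
        }
    }


# Example: connect DC b and DC d from the figures above (addresses invented).
req = build_intercell_link_request("cell-b", "192.0.2.1",
                                   "cell-d", "198.51.100.1")
```

An abstracted API of this shape would let Neutron translate one call into the
relevant SDN controller operation without the caller knowing which controller
is deployed.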
diff --git a/docs/requirements/use_cases/georedundancy_regions_insances.rst b/docs/requirements/use_cases/georedundancy_regions_insances.rst
index 408b425..9e74f74 100644
--- a/docs/requirements/use_cases/georedundancy_regions_insances.rst
+++ b/docs/requirements/use_cases/georedundancy_regions_insances.rst
@@ -1,17 +1,21 @@
.. This work is licensed under a Creative Commons Attribution 4.0 International License.
.. http://creativecommons.org/licenses/by/4.0
-Georedundancy: Connection between different OpenStack regions or cloud instances
---------------------------------------------------------------------------------
+Connection between different OpenStack regions or cloud instances
+-----------------------------------------------------------------
Description
^^^^^^^^^^^
There should be an API to manage the infrastructure's networks between two
OpenStack regions or between two OpenStack cloud instances.
(The only difference is the shared Keystone in case of regions.)
+This capability exists in the different SDN controllers, for example the Add
+New BGP Neighbour API of OpenDaylight. OpenStack Neutron should provide an
+abstracted API for this functionality, which then calls the given SDN
+controller's related API.
-Requirements
-^^^^^^^^^^^^
+Derived Requirements
+^^^^^^^^^^^^^^^^^^^^
- Possibility to define a remote and a local endpoint
- Possibility to define an overlay/segregation technology
@@ -21,7 +25,6 @@ Northbound API / Workflow
- When the endpoints are created, Neutron is configured to use the new network.
(Note: Nova networking is not considered as it is deprecated.)
-
Data model objects
""""""""""""""""""
- TBD
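
For the region/instance case, the "add BGP neighbour" capability the patch
refers to would typically be driven over the controller's REST interface. The
sketch below only composes such a call; the URL path and field names are
assumptions for illustration, not the real OpenDaylight RESTCONF resource,
which should be taken from the controller's own documentation. The parameters
mirror the derived requirements: a local and a remote endpoint plus an
overlay/segregation technology.

```python
def build_bgp_neighbour_call(controller_url, local_as, neighbour_ip, remote_as,
                             overlay="vxlan"):
    """Compose URL and body for an SDN controller's "add BGP neighbour" API.

    Hypothetical sketch: the path and JSON schema are invented here; an
    abstracted Neutron API would hide these controller-specific details
    from the caller entirely.
    """
    url = f"{controller_url}/restconf/config/bgp-neighbours/{neighbour_ip}"
    body = {
        "neighbour": {
            "address": neighbour_ip,     # remote endpoint
            "local-as": local_as,        # local endpoint's AS
            "remote-as": remote_as,
            "overlay-technology": overlay,
        }
    }
    return url, body


# Example values (all invented): peer the local cloud with a remote instance.
url, body = build_bgp_neighbour_call("http://controller.example:8181",
                                     local_as=64512,
                                     neighbour_ip="198.51.100.1",
                                     remote_as=64513)
```

Whether the two clouds share a Keystone (regions) or not (separate instances),
the underlay-facing call itself would look the same, which is what makes a
single abstracted API plausible for both sub-cases.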