author     csatari <gergely.csatari@nokia.com>    2016-07-01 11:19:54 +0200
committer  csatari <gergely.csatari@nokia.com>    2016-07-05 11:22:57 +0200
commit     791354a57e2e6c6e049877a95cbaecd9d27d6966 (patch)
tree       e835a62d984bcef2c59ad822220f7799314cd741 /docs
parent     d3d984a25934207b9c181e6ce775dae0305252e4 (diff)
Polishing of georedundancy and provider network uc-s
Adding the final touch to my use cases:

- Clarification of the introduction text in the general georedundancy chapter
- Clarification of the description of both the georedundancy and the provider
  networks use cases
- Adding references.

Change-Id: I9734154dc7fbb5f3a86af17c451b02cca839a741
Signed-off-by: csatari <gergely.csatari@nokia.com>
Diffstat (limited to 'docs')
-rw-r--r--  docs/requirements/references.rst                                |  2
-rw-r--r--  docs/requirements/use_cases/georedundancy.rst                   | 56
-rw-r--r--  docs/requirements/use_cases/georedundancy_cells.rst             | 59
-rw-r--r--  docs/requirements/use_cases/georedundancy_regions_insances.rst  | 60
-rw-r--r--  docs/requirements/use_cases/programmable_provisioning.rst       | 18
5 files changed, 118 insertions, 77 deletions
diff --git a/docs/requirements/references.rst b/docs/requirements/references.rst
index a777c01..d752e9e 100644
--- a/docs/requirements/references.rst
+++ b/docs/requirements/references.rst
@@ -6,3 +6,5 @@
.. [BGPVPN] http://docs.openstack.org/developer/networking-bgpvpn/
.. [NETWORKING-SFC] https://wiki.openstack.org/wiki/Neutron/ServiceInsertionAndChaining
+.. [MULTISITE] https://wiki.opnfv.org/display/multisite/Multisite
+.. [TRICIRCLE] https://wiki.openstack.org/wiki/Tricircle#Requirements
diff --git a/docs/requirements/use_cases/georedundancy.rst b/docs/requirements/use_cases/georedundancy.rst
index 47bd9ca..bc0dd29 100644
--- a/docs/requirements/use_cases/georedundancy.rst
+++ b/docs/requirements/use_cases/georedundancy.rst
@@ -1,36 +1,53 @@
.. This work is licensed under a Creative Commons Attribution 4.0 International License.
.. http://creativecommons.org/licenses/by/4.0
-Georedundancy Use Cases
-=======================
+Georedundancy
+=============
Georedundancy refers to a configuration which ensures the service continuity of
-the VNF-s even if a whole datacenter fails [Q: Do we include or exclude VNF
-pooling?].
+the VNFs even if a whole datacenter fails.
-This can be achieved by redundant VNF-s in a hot (spare VNF is running its
+It is possible that the VNF application layer provides additional redundancy
+with VNF pooling on top of the georedundancy functionality described here.
+
+It is possible that either the VNFCs of a single VNF are spread across several
+datacenters (this case is covered by the OPNFV Multisite project [MULTISITE]_)
+or that different, redundant VNFs are started in different datacenters.
+
+When the different VNFs are started in different datacenters, redundancy
+can be achieved by redundant VNFs in a hot (the spare VNF is running and its
configuration and internal state are synchronised to the active VNF),
warm (spare VNF is running, its configuration is synchronised to the active VNF)
or cold (the spare VNF is not running, the active VNF's configuration is stored in a
-database and dropped to the spare VNF during its activation) standby state in a
-different datacenter from where the active VNF-s are running.
-The synchronisation and data transfer can be handled by the application or the infrastructure.
+persistent, central store and applied to the spare VNF during its activation)
+standby state in a datacenter different from the one running the active VNFs.
+The synchronisation and data transfer can be handled by the application or by
+the infrastructure.
+
In all of these georedundancy setups there is a need for a network connection
between the datacenter running the active VNF and the datacenter running the
spare VNF.
-In case of a distributed cloud it is possible that the georedundant cloud of an application
-is not predefined or changed and the change requires configuration in the underlay networks.
+In case of a distributed cloud it is possible that the georedundant cloud of an
+application is not predefined, or it may change, and such a change requires
+configuration in the underlay networks when the network operator uses network
+isolation. Isolation of the traffic between the datacenters might be needed due
+to the multitenant usage of the NFVI/VIM or due to the IP pool management of the
+network operator.
+
+This set of georedundancy use cases is about enabling the possibility to select
+a datacenter as a backup datacenter and to build the connectivity between the
+NFVIs in the different datacenters in a programmable way.
-This set of georedundancy use cases is about enabling the possiblity to select a datacenter as
-backup datacenter and build the connectivity between the NFVI-s in the
-different datacenters in a programmable way.
+The focus of these use cases is on the functionality of OpenStack; it is not
+considered how the SDN controllers provision the physical resources to
+interconnect the two datacenters.
As an example the following picture (:numref:`georedundancy-before`) shows a
multicell cloud setup where the underlay network is not fully meshed.
.. figure:: images/georedundancy-before.png
:name: georedundancy-before
- :width: 25%
+ :width: 50%
Each datacenter (DC) is a separate OpenStack cell, region or instance. Let's
assume that a new VNF is started in DC b with a redundant VNF in DC d. In this
@@ -41,10 +58,15 @@ The result of the deployment is shown in the following figure
.. figure:: images/georedundancy-after.png
:name: georedundancy-after
- :width: 25%
-
-
+ :width: 50%
.. toctree::
georedundancy_cells.rst
georedundancy_regions_insances.rst
+
+Conclusion
+----------
+ An API is needed which provides the possibility to set up the local and remote
+ endpoints for the underlay network. This API is present in the SDN solutions,
+ but OpenStack does not provide an abstracted API for this functionality to hide
+ the differences between the SDN solutions.
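
To make the conclusion above more concrete, the following is a minimal, hypothetical sketch of what such an abstracted underlay endpoint API could look like. The :code:`/v2.0/underlay-endpoints` resource, its fields and the token handling are assumptions for illustration only; no such extension exists in Neutron today.

.. code-block:: python

    # Hypothetical sketch only: Neutron offers no such extension today.
    # The resource name, fields and endpoint URL below are assumptions.
    import requests

    NEUTRON_URL = "http://neutron.dc-b.example.com:9696"  # assumed Neutron endpoint in DC b
    TOKEN = "<keystone-token>"                            # assumed valid Keystone token

    payload = {
        "underlay_endpoint": {
            "name": "dc-b-to-dc-d",
            "local_endpoint": {"ip_address": "192.0.2.10"},     # DC b side of the underlay link
            "remote_endpoint": {"ip_address": "198.51.100.20"}, # DC d side of the underlay link
        }
    }

    # The abstracted call; an SDN backend (e.g. OpenDaylight) would translate
    # this into its own configuration, such as adding a BGP neighbour.
    resp = requests.post(
        NEUTRON_URL + "/v2.0/underlay-endpoints",
        json=payload,
        headers={"X-Auth-Token": TOKEN},
    )
    resp.raise_for_status()
    print(resp.json())
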
diff --git a/docs/requirements/use_cases/georedundancy_cells.rst b/docs/requirements/use_cases/georedundancy_cells.rst
index 34269dc..1a98c77 100644
--- a/docs/requirements/use_cases/georedundancy_cells.rst
+++ b/docs/requirements/use_cases/georedundancy_cells.rst
@@ -4,40 +4,51 @@
Connection between different OpenStack cells
--------------------------------------------
Description
-^^^^^^^^^^^
+~~~~~~~~~~~
There should be an API to manage the infrastructure networks between two
-OpenStack cells.
-(Note: In the Mitaka release of OpenStack cells v1 are considered as, cells v2
-functionaity is under implementation)
-This capability exists in the different SDN controllers, like the Add New BGP
-neighbour API of OpenDaylight. OpenStack Neutron should provide and abstracted
-API for this functionality what later calls the given SDN controllers related
-API.
+OpenStack cells. (Note: in the Mitaka release of OpenStack, cells v1 is
+considered experimental, while the cells v2 functionality is under
+implementation.) Cells are considered to be problematic from a maintainability
+perspective, as the sub-cells use only the internal message bus and there is
+no API (and CLI) to perform maintenance actions in case of a network
+connectivity problem between the main cell and the sub-cells.
+
+The functionality behind the API depends on the underlying network providers (SDN
+controllers) and the networking setup.
+(For example, OpenDaylight has an API to add a new BGP neighbour.)
+
+OpenStack Neutron should provide an abstracted API for this functionality which
+calls the underlying SDN controller's API.
Derived Requirements
-^^^^^^^^^^^^^^^^^^^^^
+~~~~~~~~~~~~~~~~~~~~~
- Possibility to define a remote and a local endpoint
- As the nova-api service is shared in the case of cells, it should be possible
to identify the cell in the API calls
-Northbound API
-""""""""""""""
+Northbound API / Workflow
++++++++++++++++++++++++++
- An infrastructure network management API is needed
+ - API call to define the remote and local infrastructure endpoints
- When the endpoints are created, Neutron is configured to use the new network.
- (Note: Nova networking is not considered as it is deprecated.)
-
-Data model objects
-""""""""""""""""""
- - TBD
-
-Orchestration
-"""""""""""""
- - TBD
Dependencies on compute services
-""""""""""""""""""""""""""""""""
+++++++++++++++++++++++++++++++++
None.
-Potential implementation
-""""""""""""""""""""""""
- - TBD
+Data model objects
+++++++++++++++++++
+ - local and remote endpoint objects (most probably IP addresses with some
+   additional properties).
+
+Current implementation
+~~~~~~~~~~~~~~~~~~~~~~
+ The current OpenStack implementation provides no way to set up the underlay
+ network connection.
+ The OpenStack Tricircle project [TRICIRCLE]_
+ has plans to build up inter-datacenter L2 and L3 networks.
+
+Gaps in the current solution
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ An infrastructure management API is missing from Neutron where the local and
+ remote endpoints of the underlay network could be configured.
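
To illustrate the data model objects and the cell identification requirement above, the sketch below shows one possible shape of the endpoint objects. Every field name, including the :code:`cell` key, is an assumption rather than an existing Neutron data model.

.. code-block:: python

    # Hypothetical data model sketch for the local/remote endpoint objects.
    # Because the nova-api service is shared between cells, the object carries
    # an explicit cell identifier; all names below are assumptions.
    underlay_connection = {
        "cell": "cell-dc-b",                   # identifies the sub-cell the call applies to
        "local_endpoint": {
            "ip_address": "192.0.2.10",
            "properties": {"bgp_asn": 64512},  # example additional property
        },
        "remote_endpoint": {
            "ip_address": "198.51.100.20",
            "properties": {"bgp_asn": 64513},
        },
    }
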
diff --git a/docs/requirements/use_cases/georedundancy_regions_insances.rst b/docs/requirements/use_cases/georedundancy_regions_insances.rst
index 9e74f74..6683c27 100644
--- a/docs/requirements/use_cases/georedundancy_regions_insances.rst
+++ b/docs/requirements/use_cases/georedundancy_regions_insances.rst
@@ -5,38 +5,42 @@ Connection between different OpenStack regions or cloud instances
-----------------------------------------------------------------
Description
-^^^^^^^^^^^
+~~~~~~~~~~~
There should be an API to manage the infrastructure networks between two
-OpenStack regions or between two OpenStack cloud instances.
-(The only difference is the shared keystone in case of a region)
-This capability exists in the different SDN controllers, like the Add New BGP
-neighbour API of OpenDaylight. OpenStack Neutron should provide and abstracted
-API for this functionality what later calls the given SDN controllers related
-API.
+OpenStack regions or instances.
+
+The functionality behind the API depends on the underlying network providers (SDN
+controllers) and the networking setup.
+(For example, OpenDaylight has an API to add a new BGP neighbour.)
+
+OpenStack Neutron should provide an abstracted API for this functionality which
+calls the underlying SDN controller's API.
Derived Requirements
-^^^^^^^^^^^^^^^^^^^^^
- - Possibility to define a remote and a local endpoint
- - Possiblity to define an overlay/segregation technology
+~~~~~~~~~~~~~~~~~~~~~
+- Possibility to define a remote and a local endpoint
+- As the nova-api service is shared in the case of cells, it should be possible
+  to identify the cell in the API calls
Northbound API / Workflow
-"""""""""""""""""""""""""
- - An infrastructure network management API is needed
- - When the endpoints are created neutron is configured to use the new network.
- (Note: Nova networking is not considered as it is deprecated.)
++++++++++++++++++++++++++
+- An infrastructure network management API is needed
+- API call to define the remote and local infrastructure endpoints
+- When the endpoints are created, Neutron is configured to use the new network.
Data model objects
-""""""""""""""""""
- - TBD
-
-Orchestration
-"""""""""""""
- - TBD
-
-Dependencies on compute services
-""""""""""""""""""""""""""""""""
- - TBD
-
-Potential implementation
-""""""""""""""""""""""""
- - TBD
+++++++++++++++++++
+- local and remote endpoint objects (most probably IP addresses with some
+  additional properties).
+
+Current implementation
+~~~~~~~~~~~~~~~~~~~~~~
+ The current OpenStack implementation provides no way to set up the underlay
+ network connection.
+ The OpenStack Tricircle project [TRICIRCLE]_
+ has plans to build up inter-datacenter L2 and L3 networks.
+
+Gaps in the current solution
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ An infrastructure management API is missing from Neutron where the local and
+ remote endpoints of the underlay network could be configured.
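
Because regions and separate cloud instances each run their own Neutron (only Keystone is shared between regions), the hypothetical endpoint API sketched earlier would have to be invoked on both sides with mirrored local and remote roles. The URLs, resource name and fields below are assumptions for illustration only.

.. code-block:: python

    # Hypothetical sketch: create the (assumed) underlay endpoint pair on both
    # regions/cloud instances, mirroring the local and remote roles.
    import requests

    SIDES = {
        # region or cloud instance: (assumed Neutron URL, local IP, remote IP)
        "RegionOne": ("http://neutron.region-one.example.com:9696",
                      "192.0.2.10", "198.51.100.20"),
        "RegionTwo": ("http://neutron.region-two.example.com:9696",
                      "198.51.100.20", "192.0.2.10"),
    }

    for region, (url, local_ip, remote_ip) in SIDES.items():
        payload = {"underlay_endpoint": {
            "name": "georedundancy-" + region,
            "local_endpoint": {"ip_address": local_ip},
            "remote_endpoint": {"ip_address": remote_ip},
        }}
        resp = requests.post(url + "/v2.0/underlay-endpoints", json=payload,
                             headers={"X-Auth-Token": "<token-for-" + region + ">"})
        resp.raise_for_status()
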
diff --git a/docs/requirements/use_cases/programmable_provisioning.rst b/docs/requirements/use_cases/programmable_provisioning.rst
index eb9b6f5..4093a27 100644
--- a/docs/requirements/use_cases/programmable_provisioning.rst
+++ b/docs/requirements/use_cases/programmable_provisioning.rst
@@ -5,18 +5,20 @@ Programmable Provisioning of Provider networks
----------------------------------------------
Description
~~~~~~~~~~~
-In NFV environment the VNFM (consumer of OpenStack IaaS API) have no administrative
-rights, however in this environment provider networks are used in some cases.
-When a provider network is ceated administrative rights are needed what in the
-case of non admin VNFM leeds to manual work.
+In an NFV environment the VNFM (the consumer of the OpenStack IaaS API) has no
+administrative rights; however, in the telco domain provider networks are used
+in some cases. When a provider network is created, administrative rights are
+needed, which in the case of a VNFM without administrative rights leads to
+manual work.
It shall be possible to configure provider networks without administrative rights.
-It should be possible to assign the capability to create provider networks to any roles.
+It should be possible to assign the capability to create provider networks to
+any role.
Derived Requirements
~~~~~~~~~~~~~~~~~~~~~
- Authorize the possibility of provider network creation based on policy
- There should be a new entry in :code:`policy.json` which controls the provider network creation
- - Default policy of this new enrty should be :code:`rule:admin_or_owner`.
+ - Default policy of this new entry should be :code:`rule:admin_or_owner`.
+ - This policy should be respected by the Neutron API
Northbound API / Workflow
+++++++++++++++++++++++++
@@ -28,11 +30,11 @@ Data model objects
Orchestration
+++++++++++++
- - TBD
+ None.
Dependencies on compute services
++++++++++++++++++++++++++++++++
- - TBD
+ None.
Potential implementation
++++++++++++++++++++++++
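
As background for the :code:`policy.json` requirement above: Neutron already guards the provider attributes of a network with per-attribute policy entries, which default to :code:`rule:admin_only`, while the base :code:`create_network` action is unrestricted. A sketch of the requested relaxation is shown below; whether the existing per-attribute rules are changed or a single new entry is introduced is left open here.

.. code-block:: json

    {
        "create_network": "",
        "create_network:provider:network_type": "rule:admin_or_owner",
        "create_network:provider:physical_network": "rule:admin_or_owner",
        "create_network:provider:segmentation_id": "rule:admin_or_owner"
    }
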