author     joehuang <joehuang@huawei.com>    2017-02-07 04:17:31 -0500
committer  joehuang <joehuang@huawei.com>    2017-02-16 04:11:13 -0500
commit     7dbbb63739db4aac973fb6d5f3f16b5e9206ce14 (patch)
tree       47747f6e2c42ca5c0be7e025110bf40eac8a65ea /docs/requirements
parent     a45633054f93a24401847c3a54e88e9a3344250a (diff)
Update the multisite documentation to reflect the progress in D
Several OpenStack projects have changed since the initial requirements
discussion: KeyStone deprecated the PKI token, L2GW moved away from the
Neutron stadium, Tricircle shrank its scope and became an OpenStack big-tent
project, and Kingbird has made great progress in feature development. The
documents need to be updated to reflect these recent changes.
python-kingbirdclient was introduced recently, so the usage guide is updated
to use python-kingbirdclient. The new key pair synchronization feature is also
included in the usage guide.

Change-Id: Iad9fbd441d191defa5e8793633a626ab5a24f217
Signed-off-by: joehuang <joehuang@huawei.com>
Diffstat (limited to 'docs/requirements')
-rw-r--r--  docs/requirements/VNF_high_availability_across_VIM.rst        92
-rw-r--r--  docs/requirements/multisite-centralized-service.rst           109
-rw-r--r--  docs/requirements/multisite-identity-service-management.rst    23
3 files changed, 161 insertions, 63 deletions
diff --git a/docs/requirements/VNF_high_availability_across_VIM.rst b/docs/requirements/VNF_high_availability_across_VIM.rst
index 6c2e9f1..42c479e 100644
--- a/docs/requirements/VNF_high_availability_across_VIM.rst
+++ b/docs/requirements/VNF_high_availability_across_VIM.rst
@@ -1,21 +1,21 @@
.. This work is licensed under a Creative Commons Attribution 4.0 International License.
.. http://creativecommons.org/licenses/by/4.0
-=======================================
+================================
VNF high availability across VIM
-=======================================
+================================
Problem description
===================
Abstract
-------------
+--------
A VNF (telecom application) should be able to realize high availability
deployment across OpenStack instances.
Description
-------------
+-----------
VNF (Telecom application running over cloud) may (already) be designed as
Active-Standby/Active-Active/N-Way to achieve high availability,
@@ -64,7 +64,7 @@ the potential for correlated failure to very low levels (at least as low as the
required overall application availability).
Analysis of requirements to OpenStack
-===========================
+=====================================
The VNF often has different networking planes for different purposes:
external network plane: used for communication with other VNFs
@@ -76,24 +76,37 @@ between the component's active/standby or active/active or N-way cluster.
management plane: this plane is mainly for management purposes
Generally these planes are separated from each other. And for legacy telecom
-application, each internal plane will have its fixed or flexsible IP addressing
-plane.
+application, each internal plane will have its fixed or flexible IP addressing
+plan.
-to make the VNF can work with HA mode across different OpenStack instances in
+To make the VNF work in HA mode across different OpenStack instances in
one site (but not limited to), we need to support at least the backup plane across
different OpenStack instances:
-1) Overlay L2 networking or shared L2 provider networks as the backup plance for
-heartbeat or state replication. Overlay L2 network is preferred, the reason is:
-a. Support legacy compatibility: Some telecom app with built-in internal L2
-network, for easy to move these app to VNF, it would be better to provide L2
-network b. Support IP overlapping: multiple VNFs may have overlaping IP address
-for cross OpenStack instance networking
+1) L2 networking across OpenStack instances for heartbeat or state
+replication. Overlay L2 networking or shared L2 provider networks can work as
+the backup plane for heartbeat or state replication. Overlay L2 networking is
+preferred, for the following reasons:
+
+   a. Legacy compatibility: some telecom applications have a built-in
+      internal L2 network; to make it easy to move these applications to
+      VNFs, it is better to provide an L2 network.
+   b. An isolated L2 network simplifies security management between
+      different network planes.
+   c. It is easy to support IP/MAC floating across OpenStack instances.
+   d. IP overlapping: multiple VNFs may have overlapping IP addresses for
+      cross-OpenStack-instance networking.
+
Therefore, an overlay L2 networking across Neutron feature is required in OpenStack.
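+
+As an illustration of the shared L2 provider network option, a minimal
+sketch (assuming openstacksdk, clouds.yaml entries for the two OpenStack
+instances, a VLAN provider physical network physnet1 reachable from both,
+and an illustrative segmentation ID) could look like::
+
+  # Hypothetical sketch: create the same VLAN provider network (same physical
+  # network and segmentation ID) in two OpenStack instances so that it can be
+  # used as the shared backup plane. Cloud names, physnet and VLAN ID are
+  # assumptions, not values required by any project mentioned here.
+  import openstack
+
+  for cloud in ('openstack-instance-1', 'openstack-instance-2'):
+      conn = openstack.connect(cloud=cloud)
+      conn.network.create_network(
+          name='vnf-backup-plane',
+          provider_network_type='vlan',
+          provider_physical_network='physnet1',
+          provider_segmentation_id=1000,   # must match in both instances
+          is_shared=True)
+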
-2) L3 networking cross OpenStack instance for heartbeat or state replication.
-For L3 networking, we can leverage the floating IP provided in current Neutron,
-so no new feature requirement to OpenStack.
+2) L3 networking across OpenStack instances for heartbeat or state
+replication. For L3 networking, we can leverage the floating IPs provided by
+current Neutron, or use VPN or BGPVPN (networking-bgpvpn) to set up the
+connection.
+
+Using L3 networking to support VNF HA will consume more resources and
+requires more security factors to be taken into consideration, which makes
+the networking more complex. L3 networking is also not able to provide IP
+floating across OpenStack instances.
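+
+As an illustration of the BGPVPN option, a minimal sketch (assuming
+networking-bgpvpn is installed in both regions, and using hypothetical
+credentials, region names, network IDs and route target) could look like::
+
+  # Hypothetical sketch: create a BGPVPN with the same route target in two
+  # regions and associate the region-local backup network with it, using the
+  # networking-bgpvpn REST API through a keystoneauth adapter. All names,
+  # IDs and the route target below are illustrative assumptions.
+  from keystoneauth1 import session
+  from keystoneauth1.adapter import Adapter
+  from keystoneauth1.identity import v3
+
+  auth = v3.Password(auth_url='http://keystone:5000/v3',
+                     username='admin', password='secret',
+                     project_name='vnf-tenant',
+                     user_domain_id='default', project_domain_id='default')
+  sess = session.Session(auth=auth)
+
+  for region, backup_net_id in (('RegionOne', 'NET_ID_1'),
+                                ('RegionTwo', 'NET_ID_2')):
+      neutron = Adapter(session=sess, service_type='network',
+                        interface='public', region_name=region)
+      # create a BGPVPN carrying the same import/export route target
+      bgpvpn = neutron.post(
+          '/v2.0/bgpvpn/bgpvpns',
+          json={'bgpvpn': {'name': 'vnf-backup-plane',
+                           'route_targets': ['64512:100']}}).json()['bgpvpn']
+      # associate the region-local backup network with the BGPVPN
+      neutron.post('/v2.0/bgpvpn/bgpvpns/%s/network_associations'
+                   % bgpvpn['id'],
+                   json={'network_association': {'network_id': backup_net_id}})
+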
3) The IP address used for the VNF to connect with other VNFs should be able
to float across OpenStack instances. For example, if the master fails, the IP
@@ -103,48 +116,20 @@ external IP, so no new feature will be added to OpenStack.
Prototype
------------
+---------
None.
Proposed solution
------------
-
- requirements perspective It's up to application descision to use L2 or L3
-networking across Neutron.
-
- For Neutron, a L2 network is consisted of lots of ports. To make the cross
-Neutron L2 networking is workable, we need some fake remote ports in local
-Neutron to represent VMs in remote site ( remote OpenStack ).
-
- the fake remote port will reside on some VTEP ( for VxLAN ), the tunneling
-IP address of the VTEP should be the attribute of the fake remote port, so that
-the local port can forward packet to correct tunneling endpoint.
-
- the idea is to add one more ML2 mechnism driver to capture the fake remote
-port CRUD( creation, retievement, update, delete)
-
- when a fake remote port is added/update/deleted, then the ML2 mechanism
-driver for these fake ports will activate L2 population, so that the VTEP
-tunneling endpoint information could be understood by other local ports.
-
- it's also required to be able to query the port's VTEP tunneling endpoint
-information through Neutron API, in order to use these information to create
-fake remote port in another Neutron.
-
- In the past, the port's VTEP ip address is the host IP where the VM resides.
-But the this BP https://review.openstack.org/#/c/215409/ will make the port free
-of binding to host IP as the tunneling endpoint, you can even specify L2GW ip
-address as the tunneling endpoint.
-
- Therefore a new BP will be registered to processing the fake remote port, in
-order make cross Neutron L2 networking is feasible. RFE is registered first:
-https://bugs.launchpad.net/neutron/+bug/1484005
-
+-----------------
+Several projects are addressing the networking requirements:
+
+ * Tricircle: https://github.com/openstack/tricircle/
+ * Networking-BGPVPN: https://github.com/openstack/networking-bgpvpn/
+ * VPNaaS: https://github.com/openstack/neutron-vpnaas
Gaps
====
- 1) fake remote port for cross Neutron L2 networking
-
+   Inter-networking among OpenStack clouds for application HA is lacking
+   in Neutron, and is covered by several newly created projects.
**NAME-THE-MODULE issues:**
@@ -156,4 +141,3 @@ Affected By
References
==========
-
diff --git a/docs/requirements/multisite-centralized-service.rst b/docs/requirements/multisite-centralized-service.rst
new file mode 100644
index 0000000..5dbbfc8
--- /dev/null
+++ b/docs/requirements/multisite-centralized-service.rst
@@ -0,0 +1,109 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International License.
+.. http://creativecommons.org/licenses/by/4.0
+
+==============================
+ Multisite centralized service
+==============================
+
+
+Problem description
+===================
+
+Abstract
+--------
+
+A user should have one centralized service for resource management and/or
+replication (syncing tenant resources like images, ssh keys, etc.) across
+multiple OpenStack clouds.
+
+Description
+-----------
+
+For multisite management use cases, some common requirements in terms of
+centralized or shared services over multiple OpenStack instances can
+be summarized here.
+
+A user should be able to manage all their virtual resources from one
+centralized management interface, at least to have a summarized view of
+the total resource capacity and the live utilization of their virtual
+resources, for example:
+
+- Centralized Quota Management
+  Currently all quotas are set for each region separately, and different
+  services (Nova, Cinder, Neutron, Glance, ...) have different quotas to
+  be set. The requirement is to provide a global view of quotas per tenant
+  across multiple regions, and soft/hard quotas based on the current usage
+  across all regions for this tenant.
+
+- A service to clone ssh keys across regions
+  A user may upload a keypair to access the VMs allocated for her. But if
+  her VMs are spread across multiple regions, the user has to upload the
+  keypair separately to each region. A service is needed to clone the SSH
+  key to the desired OpenStack clouds (a minimal sketch is given after this
+  list).
+
+- A service to sync images across regions
+  In a multi-site scenario, a user has to upload an image separately to each
+  region. There are 4 cases to be considered:
+
+  - No image sync
+  - Auto-sync of images
+  - Lazy sync - clone the requested image on demand
+  - Controlled sync, where propagation and rollback can be controlled if
+    problems occur
+
+- Global view for tenant-level IP address / MAC address space management
+  If a tenant has networks in multiple regions, and these networks are
+  routable (for example, connected with VPN), then IP addresses may be
+  duplicated. A global view for IP address space management is needed.
+  If IPv4 is used, this issue needs to be considered. For IPv6, it should
+  also be managed. This requirement is important not just for the prevention
+  of duplicate addresses.
+  For security and other reasons it's important to know which IP addresses
+  (IPv4 and IPv6) are used in which region.
+  Such a requirement should be extended to floating and public IP addresses.
+
+- A service to clone security groups across regions
+  There is no appropriate service to clone security groups across multiple
+  regions; if the tenant has distributed resources, the security groups have
+  to be set in each region manually.
+
+- A user should be able to access all the logs and indicators produced by
+  multiple OpenStack instances, in a centralized way.
+
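+The keypair requirement above, for example, amounts to re-creating the same
+public key in every region. A minimal sketch of the manual equivalent
+(assuming openstacksdk, a clouds.yaml entry named mycloud and illustrative
+region and keypair names; Kingbird offers this as a service) could look
+like::
+
+  # Hypothetical sketch: read a keypair from one region and re-create it in
+  # the other regions. Cloud, region and keypair names are assumptions.
+  import openstack
+
+  SOURCE_REGION = 'RegionOne'
+  TARGET_REGIONS = ['RegionTwo', 'RegionThree']
+  KEYPAIR = 'my-keypair'
+
+  src = openstack.connect(cloud='mycloud', region_name=SOURCE_REGION)
+  keypair = src.compute.get_keypair(KEYPAIR)   # only the public key is kept
+
+  for region in TARGET_REGIONS:
+      dst = openstack.connect(cloud='mycloud', region_name=region)
+      if dst.compute.find_keypair(KEYPAIR) is None:
+          # re-create the keypair with the same name and public key
+          dst.compute.create_keypair(name=keypair.name,
+                                     public_key=keypair.public_key)
+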
+Requirement analysis
+====================
+
+All the problems mentioned here are not covered by existing projects in
+OpenStack.
+
+Candidate solution analysis
+---------------------------
+
+- Kingbird[1][2]
+  Kingbird is a centralized OpenStack service that provides resource
+  operation and management across multiple OpenStack instances in a
+  multi-region OpenStack deployment. Kingbird provides features like
+  centralized quota management, a centralized view of distributed virtual
+  resources, and synchronisation of ssh keys, images, flavors etc. across
+  regions.
+
+- Tricircle[3][4]
+  Tricircle provides networking automation across Neutron in multi-region
+  OpenStack deployments. Tricircle can address the challenges mentioned here:
+  tenant-level IP/MAC address management to avoid conflicts across OpenStack
+  clouds, global L2 network segment management and cross-OpenStack L2
+  networking, and keeping security groups synchronized across OpenStack
+  clouds.
+
+
+Affected By
+-----------
+ OPNFV multisite cloud.
+
+Conclusion
+----------
+ Kingbird and Tricircle are candidate solutions for these centralized
+ services in OpenStack multi-region clouds.
+
+References
+==========
+[1] Kingbird repository: https://github.com/openstack/kingbird
+[2] Kingbird launchpad: https://launchpad.net/kingbird
+[3] Tricircle wiki: https://wiki.openstack.org/wiki/Tricircle
+[4] Tricircle repository: https://github.com/openstack/tricircle/
diff --git a/docs/requirements/multisite-identity-service-management.rst b/docs/requirements/multisite-identity-service-management.rst
index ad2cea1..c1eeb2b 100644
--- a/docs/requirements/multisite-identity-service-management.rst
+++ b/docs/requirements/multisite-identity-service-management.rst
@@ -9,12 +9,12 @@ Glossary
========
There are 3 types of token supported by OpenStack KeyStone
+ **FERNET**
+
**UUID**
**PKI/PKIZ**
- **FERNET**
-
Please refer to the reference section for these token formats, benchmarks and
comparison.
@@ -189,7 +189,7 @@ cover very well.
multi-cluster mode).
We may have several KeyStone clusters with Fernet tokens, for example,
-cluster1 ( site1, site2, … site 10 ), cluster 2 ( site11, site 12,..,site 20).
+cluster1 (site1, site2, ..., site10), cluster2 (site11, site12, ..., site20).
Then do DB replication among the different clusters asynchronously.
A prototype of this has been done. In some blogs they call it
@@ -208,14 +208,16 @@ http://lbragstad.com/?p=156
- KeyStone service (Distributed) with Fernet token + Async replication
(star-mode).
- one master KeyStone cluster with Fernet token in two sites (for site level
-high availability purpose), other sites will be installed with at least 2 slave
-nodes where the node is configured with DB async replication from the master
-cluster members, and one slave’s mater node in site1, another slave’s master
-node in site 2.
+  One master KeyStone cluster with Fernet tokens in one or two sites (two
+sites if site-level high availability is required); other sites will be
+installed with at least 2 slave nodes, where each node is configured with
+DB async replication from a master cluster member. The async replication
+data sources should preferably be different members of the master cluster;
+if the KeyStone cluster spans two sites, it is better that the source
+members for async replication are located in different sites.
Only the master cluster nodes are allowed to write; the other slave nodes
-waiting for replication from the master cluster ( very little delay) member.
+wait for replication (with very little delay) from the master cluster member.
But the challenge of key distribution and rotation for Fernet tokens should be
settled; you can refer to these two blogs: http://lbragstad.com/?p=133,
http://lbragstad.com/?p=156
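+
+To make the key distribution challenge concrete, a minimal sketch (an
+illustration only, assuming hypothetical slave host names, the default key
+repository path, and rsync over ssh; production deployments usually drive
+this from configuration management or cron) could look like::
+
+  # Hypothetical sketch: rotate Fernet keys on the master node and copy the
+  # key repository to every slave KeyStone node so that all nodes can
+  # validate tokens issued by the master. Hosts and paths are assumptions.
+  import subprocess
+
+  SLAVE_NODES = ['keystone-site2-a', 'keystone-site2-b', 'keystone-site3-a']
+  KEY_REPO = '/etc/keystone/fernet-keys/'
+
+  # rotate keys only on the master cluster
+  subprocess.run(['keystone-manage', 'fernet_rotate',
+                  '--keystone-user', 'keystone',
+                  '--keystone-group', 'keystone'], check=True)
+
+  # distribute the whole key repository to the slave nodes before the next
+  # rotation
+  for node in SLAVE_NODES:
+      subprocess.run(['rsync', '-a', '--delete', KEY_REPO,
+                      'keystone@%s:%s' % (node, KEY_REPO)], check=True)
+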
@@ -349,6 +351,9 @@ in deployment and maintenance, with better scalability.
token + Async replication (star-mode)" for multisite OPNFV cloud is
recommended.
+   PKI tokens have been deprecated, so all proposals based on PKI tokens are
+not recommended.
+
References
==========