author | joehuang <joehuang@huawei.com> | 2017-02-07 04:17:31 -0500
---|---|---
committer | joehuang <joehuang@huawei.com> | 2017-02-16 04:11:13 -0500
commit | 7dbbb63739db4aac973fb6d5f3f16b5e9206ce14 |
tree | 47747f6e2c42ca5c0be7e025110bf40eac8a65ea |
parent | a45633054f93a24401847c3a54e88e9a3344250a |
Update the multisite documentation to reflect the progress in D
Several related OpenStack projects have changed: KeyStone deprecated
the PKI token format, L2GW moved out of the Neutron stadium, Tricircle
narrowed its scope and became an OpenStack big-tent project, and
Kingbird has made great progress in feature development since the
initial requirements discussion. The documents are updated to reflect
these recent changes.
python-kingbirdclient was introduced recently, so the usage guide is
updated to use python-kingbirdclient. The new key pair synchronization
feature is also covered in the usage guide.
Change-Id: Iad9fbd441d191defa5e8793633a626ab5a24f217
Signed-off-by: joehuang <joehuang@huawei.com>
15 files changed, 623 insertions, 369 deletions
diff --git a/docs/installationprocedure/index.rst b/docs/release/configguide/index.rst index 746f819..2ee37cb 100644 --- a/docs/installationprocedure/index.rst +++ b/docs/release/configguide/index.rst @@ -1,19 +1,17 @@ .. This work is licensed under a Creative Commons Attribution 4.0 International License. .. http://creativecommons.org/licenses/by/4.0 .. (c) Sofia Wallin Ericsson AB +.. (c) Chaoyi Huang, Huawei Technologies Co., Ltd. -********************** -Installation procedure -********************** -Colorado 1.0 ------------- +***************************** +Multisite Configuration Guide +***************************** .. toctree:: :numbered: :maxdepth: 2 abstract.rst - multisite.kingbird.installation.rst multisite.configuration.rst multisite.kingbird.configuration.rst diff --git a/docs/installationprocedure/multisite.configuration.rst b/docs/release/configguide/multisite.configuration.rst index c005e8d..0a38505 100644 --- a/docs/installationprocedure/multisite.configuration.rst +++ b/docs/release/configguide/multisite.configuration.rst @@ -1,10 +1,6 @@ .. This work is licensed under a Creative Commons Attribution 4.0 International License. .. http://creativecommons.org/licenses/by/4.0 -============================= -Multisite configuration guide -============================= - Multisite identity service management ===================================== diff --git a/docs/installationprocedure/multisite.kingbird.configuration.rst b/docs/release/configguide/multisite.kingbird.configuration.rst index 7eb6106..7eb6106 100644 --- a/docs/installationprocedure/multisite.kingbird.configuration.rst +++ b/docs/release/configguide/multisite.kingbird.configuration.rst diff --git a/docs/release/installation/index.rst b/docs/release/installation/index.rst new file mode 100644 index 0000000..0687f6c --- /dev/null +++ b/docs/release/installation/index.rst @@ -0,0 +1,15 @@ +.. This work is licensed under a Creative Commons Attribution 4.0 International License. +.. 
http://creativecommons.org/licenses/by/4.0 +.. (c) Sofia Wallin Ericsson AB +.. (c) Chaoyi Huang, Huawei Technologies Co., Ltd. + +******************************** +Multisite Installation procedure +******************************** + +.. toctree:: + :numbered: + :maxdepth: 2 + + abstract.rst + multisite.kingbird.installation.rst diff --git a/docs/installationprocedure/multisite.kingbird.installation.rst b/docs/release/installation/multisite.kingbird.installation.rst index 9abb669..54b622d 100644 --- a/docs/installationprocedure/multisite.kingbird.installation.rst +++ b/docs/release/installation/multisite.kingbird.installation.rst @@ -1,9 +1,9 @@ .. This work is licensed under a Creative Commons Attribution 4.0 International License. .. http://creativecommons.org/licenses/by/4.0 -=========================================== -Multisite Kingbird installation instruction -=========================================== +================================= +Kingbird installation instruction +================================= Abstract -------- @@ -142,6 +142,9 @@ By default, the bind_host of kingbird-api is local_host(127.0.0.1), and the port for the service is 8118, you can leave it as the default if no port conflict happened. +Please replace the address of Kingbird service "127.0.0.1" which is mentioned +below to the address you get from OpenStack Kingbird endpoint. + To make the Kingbird work normally, you have to edit these configuration items. The [cache] section is used by kingbird engine to access the quota information of Nova, Cinder, Neutron in each region, replace the @@ -229,16 +232,7 @@ bus configuration in Nova, Cinder, Neutron configuration file. .. 
code-block:: bash [DEFAULT] - rpc_backend = rabbit - control_exchange = openstack - transport_url = None - - [oslo_messaging_rabbit] - rabbit_host = 127.0.0.1 - rabbit_port = 5671 - rabbit_userid = guest - rabbit_password = guest - rabbit_virtual_host = / + transport_url = rabbit://stackrabbit:password@127.0.0.1:5672/ After these basic configuration items configured, now the database schema of "kingbird" should be created: @@ -253,10 +247,9 @@ according to your cloud planning: .. code-block:: bash openstack service create --name=kingbird synchronization - openstack endpoint create --region=RegionOne \ - --publicurl=http://127.0.0.1:8118/v1.0 \ - --adminurl=http://127.0.0.1:8118/v1.0 \ - --internalurl=http://127.0.0.1:8118/v1.0 kingbird + openstack endpoint create --region=RegionOne kingbird public http://127.0.0.1:8118/v1.0 + openstack endpoint create --region=RegionOne kingbird admin http://127.0.0.1:8118/v1.0 + openstack endpoint create --region=RegionOne kingbird internal http://127.0.0.1:8118/v1.0 Now it's ready to run kingbird-api and kingbird-engine: @@ -277,12 +270,12 @@ Post-installation activities ---------------------------- Run the following commands to check whether kingbird-api is serving, please -replace $token to the token you get from "openstack token issue": +replace $mytoken to the token you get from "openstack token issue": .. code-block:: bash openstack token issue - curl -H "Content-Type: application/json" -H "X-Auth-Token: $token" \ + curl -H "Content-Type: application/json" -H "X-Auth-Token: $mytoken" \ http://127.0.0.1:8118/ If the response looks like following: {"versions": [{"status": "CURRENT", @@ -291,12 +284,12 @@ If the response looks like following: {"versions": [{"status": "CURRENT", then that means the kingbird-api is working normally. 
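The kingbird-api health check in the hunk above boils down to issuing a token and looking for the `CURRENT` status marker in the versions document. A minimal sketch of that check as a script (the endpoint address and the sample response are placeholders taken from the guide, not live output):

```shell
#!/bin/sh
# Hypothetical values - substitute your Kingbird endpoint and a real token
# obtained from "openstack token issue".
KB_ENDPOINT="http://127.0.0.1:8118"
MYTOKEN="replace-with-token-from-openstack-token-issue"

# Version discovery is a plain GET on the service root, as in the guide:
#   curl -H "Content-Type: application/json" -H "X-Auth-Token: $MYTOKEN" $KB_ENDPOINT/
echo "checking kingbird-api at $KB_ENDPOINT/"

# A healthy kingbird-api answers with a JSON versions document; the grep
# below tests a canned sample of that response for the CURRENT marker.
SAMPLE_RESPONSE='{"versions": [{"status": "CURRENT", "id": "v1.0"}]}'
if printf '%s' "$SAMPLE_RESPONSE" | grep -q '"status": "CURRENT"'; then
    echo "kingbird-api is working normally"
fi
```

In a real deployment the `SAMPLE_RESPONSE` assignment would be replaced by the curl call shown in the patch.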
Run the following commands to check whether kingbird-engine is serving, please -replace $token to the token you get from "openstack token issue", and the +replace $mytoken to the token you get from "openstack token issue", and the $admin_project_id to the admin project id in your environment: .. code-block:: bash - curl -H "Content-Type: application/json" -H "X-Auth-Token: $token" \ + curl -H "Content-Type: application/json" -H "X-Auth-Token: $mytoken" \ -X PUT \ http://127.0.0.1:8118/v1.0/$admin_project_id/os-quota-sets/$admin_project_id/sync diff --git a/docs/releasenotes/index.rst b/docs/release/overview/index.rst index df1e186..716f5a0 100644 --- a/docs/releasenotes/index.rst +++ b/docs/release/overview/index.rst @@ -1,9 +1,9 @@ .. This work is licensed under a Creative Commons Attribution 4.0 International License. .. http://creativecommons.org/licenses/by/4.0 -************************** +*********************** Multisite Release Notes -************************** +*********************** .. toctree:: :numbered: diff --git a/docs/releasenotes/multisite.release.notes.rst b/docs/release/overview/multisite.release.notes.rst index d90a064..85b9561 100644 --- a/docs/releasenotes/multisite.release.notes.rst +++ b/docs/release/overview/multisite.release.notes.rst @@ -1,14 +1,11 @@ .. This work is licensed under a Creative Commons Attribution 4.0 International License. .. http://creativecommons.org/licenses/by/4.0 -Release Notes of Multisite project -================================== - Multisite is to identify the requirements and gaps for the VIM(OpenStack) to support multi-site NFV cloud. The documentation of requirements, installation, configuration and usage guide for multi-site and Kingbird are provided. -It's the first release for Kingbird service, known bugs are registered at +For Kingbird service, known bugs are registered at https://bugs.launchpad.net/kingbird. 
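The kingbird-engine check above issues a PUT against the quota sync URL. A sketch of how that URL is composed from the admin project id (the id shown is a made-up placeholder; take yours from `openstack project list`):

```shell
#!/bin/sh
# Hypothetical admin project id for illustration only.
ADMIN_PROJECT_ID="d0bfeb68a1c344fa8c8e0c16db214e26"
KB_ENDPOINT="http://127.0.0.1:8118"

# The engine check targets:
#   /v1.0/<admin_project_id>/os-quota-sets/<admin_project_id>/sync
SYNC_URL="$KB_ENDPOINT/v1.0/$ADMIN_PROJECT_ID/os-quota-sets/$ADMIN_PROJECT_ID/sync"
echo "PUT $SYNC_URL"
```

The resulting URL is what the curl `-X PUT` call in the patch sends, with the token passed in the `X-Auth-Token` header.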
diff --git a/docs/userguide/index.rst b/docs/release/userguide/index.rst index 25de482..2726184 100644 --- a/docs/userguide/index.rst +++ b/docs/release/userguide/index.rst @@ -1,5 +1,6 @@ .. This work is licensed under a Creative Commons Attribution 4.0 International License. .. http://creativecommons.org/licenses/by/4.0 +.. (c) Chaoyi Huang, Huawei Technologies Co., Ltd. ************************** Multisite Admin User Guide @@ -11,3 +12,4 @@ Multisite Admin User Guide multisite.admin.usage.rst multisite.kingbird.usage.rst + multisite.tricircle.usage.rst diff --git a/docs/userguide/multisite.admin.usage.rst b/docs/release/userguide/multisite.admin.usage.rst index 41f23c0..544c9b1 100644 --- a/docs/userguide/multisite.admin.usage.rst +++ b/docs/release/userguide/multisite.admin.usage.rst @@ -1,10 +1,6 @@ .. This work is licensed under a Creative Commons Attribution 4.0 International License. .. http://creativecommons.org/licenses/by/4.0 -========================== -Multisite admin user guide -========================== - Multisite identity service management ===================================== @@ -19,15 +15,17 @@ Token Format There are 3 types of token format supported by OpenStack KeyStone + * **FERNET** * **UUID** * **PKI/PKIZ** - * **FERNET** It's very important to understand these token format before we begin the mutltisite identity service management. Please refer to the OpenStack official site for the identity management. http://docs.openstack.org/admin-guide-cloud/identity_management.html +Please note that PKI/PKIZ token format has been deprecated. + Key consideration in multisite scenario --------------------------------------- @@ -53,6 +51,13 @@ region as the service itself. The challenge to distribute KeyStone service into each region is the KeyStone backend. Different token format has different data persisted in the backend. +* Fernet: Tokens are non persistent cryptographic based tokens and validated + online by the Keystone service. 
Fernet tokens are more lightweight + than PKI tokens and have a fixed size. Fernet tokens require Keystone + deployed in a distributed manner, again to avoid inter region traffic. The + data synchronization cost for the Keystone backend is smaller due to the non- + persisted token. + * UUID: UUID tokens have a fixed size. Tokens are persistently stored and create a lot of database traffic, the persistence of token is for the revoke purpose. UUID tokens are validated online by Keystone, call to service will @@ -61,25 +66,6 @@ backend. Different token format has different data persisted in the backend. for use in multi region clouds, no matter the Keystone database replicates or not. -* PKI: Tokens are non persistent cryptographic based tokens and validated - offline (not by the Keystone service) by Keystone middleware which is part - of other services such as Nova. Since PKI tokens include endpoint for all - services in all regions, the token size can become big. There are - several ways to reduce the token size such as no catalog policy, endpoint - filter to make a project binding with limited endpoints, and compressed PKI - token - PKIZ, but the size of token is still unpredictable, making it difficult - to manage. If catalog is not applied, that means the user can access all - regions, in some scenario, it's not allowed to do like this. Centralized - Keystone with PKI token to reduce inter region backend synchronization traffic. - PKI tokens do produce Keystone traffic for revocation lists. - -* Fernet: Tokens are non persistent cryptographic based tokens and validated - online by the Keystone service. Fernet tokens are more lightweight - than PKI tokens and have a fixed size. Fernet tokens require Keystone - deployed in a distributed manner, again to avoid inter region traffic. The - data synchronization cost for the Keystone backend is smaller due to the non- - persisted token. 
- Cryptographic tokens bring new (compared to UUID tokens) issues/use-cases like key rotation, certificate revocation. Key management is out of scope for this use case. @@ -110,7 +96,7 @@ Only the Keystone database can be replicated to other sites. Replicating databases for other services will cause those services to get of out sync and malfunction. -Since only the Keystone database is to be sync or replicated to another +Since only the Keystone database is to be replicated sync. or async. to another region/site, it's better to deploy Keystone database into its own database server with extra networking requirement, cluster or replication configuration. How to support this by installer is out of scope. @@ -121,40 +107,6 @@ used, if global transaction identifiers GTID is enabled. Deployment options ------------------ -**Distributed KeyStone service with PKI token** - -Deploy KeyStone service in two sites with database replication. If site -level failure impact is not considered, then KeyStone service can only be -deployed into one site. - -The PKI token has one great advantage is that the token validation can be -done locally, without sending token validation request to KeyStone server. -The drawback of PKI token is -the endpoint list size in the token. If a project will be only spread in -very limited site number(region number), then we can use the endpoint -filter to reduce the token size, make it workable even a lot of sites -in the cloud. -KeyStone middleware(which is co-located in the service like -Nova-API/xxx-API) will have to send the request to the KeyStone server -frequently for the revoke-list, in order to reject some malicious API -request, for example, a user has to be deactivated, but use an old token -to access OpenStack service. - -For this option, needs to leverage database replication to provide -KeyStone Active-Active mode across sites to reduce the impact of site failure. 
-And the revoke-list request is very frequently asked, so the performance of the -KeyStone server needs also to be taken care. - -Site level keystone load balance is required to provide site level -redundancy, otherwise the KeyStone middleware will not switch request to the -healthy KeyStone server in time. - -And also the cert distribution/revoke to each site / API server for token -validation is required. - -This option can be used for some scenario where there are very limited -sites, especially if each project only spreads into limited sites ( regions ). - **Distributed KeyStone service with Fernet token** Fernet token is a very new format, and just introduced recently,the biggest @@ -186,11 +138,13 @@ cover very well. **Distributed KeyStone service with Fernet token + Async replication (star-mode)** -One master KeyStone cluster with Fernet token in two sites (for site level -high availability purpose), other sites will be installed with at least 2 slave -nodes where the node is configured with DB async replication from the master -cluster members, and one slave’s mater node in site1, another slave’s master -node in site 2. +One master KeyStone cluster with Fernet token in one or two sites(for site +level high availability purpose), other sites will be installed with at least +2 slave nodes where the node is configured with DB async replication from the +master cluster members. The async. replication data source is better to be +from different member of the master cluster, if there are two sites for the +KeyStone cluster, it'll be better that source members for async. replication +are located in different site. Only the master cluster nodes are allowed to write, other slave nodes waiting for replication from the master cluster member( very little delay). @@ -211,8 +165,6 @@ Cons: * Need to be aware of the chanllenge of key distribution and rotation for Fernet token. -Note: PKI token will be deprecated soon, so Fernet token is encouraged. 
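The guide notes that Fernet deployments must deal with key distribution and rotation. A toy illustration of the rotation bookkeeping Keystone uses, run against a scratch directory (this is NOT `keystone-manage`, just a sketch of the index scheme: key `0` is the staged key, the highest index is the primary signing key, older indices remain for validation only):

```shell
#!/bin/sh
# Scratch key repository with a staged key (0) and a primary key (1).
KEYDIR=$(mktemp -d)
printf 'staged-key' > "$KEYDIR/0"
printf 'primary-key' > "$KEYDIR/1"

# Rotation: promote the staged key 0 to the next free index (it becomes the
# new primary), then create a fresh staged key 0.
next=$(( $(ls "$KEYDIR" | sort -n | tail -n 1) + 1 ))
mv "$KEYDIR/0" "$KEYDIR/$next"
printf 'new-staged-key' > "$KEYDIR/0"

# After one rotation the repository holds keys 0, 1 and 2.
ls "$KEYDIR" | sort -n
rm -rf "$KEYDIR"
```

In a multi-region deployment the rotated repository then has to be distributed to every Keystone node before the next rotation, which is the operational challenge the text refers to.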
- Multisite VNF Geo site disaster recovery ======================================== @@ -364,27 +316,50 @@ purpose: configuration Generally these planes are separated with each other. And for legacy telecom -application, each internal plane will have its fixed or flexible IP addressing -plane. There are some interesting/hard requirements on the networking (L2/L3) +application, each internal plane will have its fixed or flexble IP addressing +plan. + +There are some interesting/hard requirements on the networking (L2/L3) between OpenStack instances, at lease the backup plane across different OpenStack instances: -1) Overlay L2 networking is prefered as the backup plane for heartbeat or state - replication, the reason is: - - a) Support legacy compatibility: Some telecom app with built-in internal L2 - network, for easy to move these app to virtualized telecom application, it - would be better to provide L2 network. - - b) Support IP overlapping: multiple telecom applications may have - overlapping IP address for cross OpenStack instance networking. - Therefore over L2 networking across Neutron feature is required - in OpenStack. - -2) L3 networking cross OpenStack instance for heartbeat or state replication. - Can leverage FIP or vRouter inter-connected with overlay L2 network to - establish overlay L3 networking. - -Note: L2 border gateway spec was merged in L2GW project: -https://review.openstack.org/#/c/270786/. Code will be availabe in later -release. +To make the VNF can work with HA mode across different OpenStack instances in +one site (but not limited to), need to support at lease the backup plane across +different OpenStack instances: + +1) L2 networking across OpenStack instance for heartbeat or state replication. +Overlay L2 networking or shared L2 provider networks can work as the backup +plance for heartbeat or state replication. Overlay L2 network is preferred, +the reason is: + + a. 
Support legacy compatibility: Some telecom app with built-in internal L2 + network, for easy to move these app to VNF, it would be better to provide + L2 network. + b. Isolated L2 network will simplify the security management between + different network planes. + c. Easy to support IP/mac floating across OpenStack. + d. Support IP overlapping: multiple VNFs may have overlaping IP address for + cross OpenStack instance networking. + +Therefore, over L2 networking across Neutron feature is required in OpenStack. + +2) L3 networking across OpenStack instance for heartbeat or state replication. +For L3 networking, we can leverage the floating IP provided in current +Neutron, or use VPN or BGPVPN(networking-bgpvpn) to setup the connection. + +L3 networking to support the VNF HA will consume more resources and need to +take more security factors into consideration, this make the networking +more complex. And L3 networking is also not possible to provide IP floating +across OpenStack instances. + +3) The IP address used for VNF to connect with other VNFs should be able to be +floating cross OpenStack instance. For example, if the master failed, the IP +address should be used in the standby which is running in another OpenStack +instance. There are some method like VRRP/GARP etc can help the movement of the +external IP, so no new feature will be added to OpenStack. + +Several projects are addressing the networking requirements, deployment should +consider the factors mentioned above. + * Tricircle: https://github.com/openstack/tricircle/ + * Networking-BGPVPN: https://github.com/openstack/networking-bgpvpn/ + * VPNaaS: https://github.com/openstack/neutron-vpnaas diff --git a/docs/release/userguide/multisite.kingbird.usage.rst b/docs/release/userguide/multisite.kingbird.usage.rst new file mode 100644 index 0000000..e9ead90 --- /dev/null +++ b/docs/release/userguide/multisite.kingbird.usage.rst @@ -0,0 +1,349 @@ +.. 
This work is licensed under a Creative Commons Attribution 4.0 International License. +.. http://creativecommons.org/licenses/by/4.0 + +============================= +Multisite.Kingbird user guide +============================= + +Quota management for OpenStack multi-region deployments +------------------------------------------------------- +Kingbird is centralized synchronization service for multi-region OpenStack +deployments. In OPNFV Colorado release, Kingbird provides centralized quota +management feature. Administrator can set quota per project based in Kingbird +and sync the quota limit to multi-region OpenStack periodiclly or on-demand. +The tenant can check the total quota limit and usage from Kingbird for all +regions. Administrator can also manage the default quota by quota class +setting. + +Following quota items are supported to be managed in Kingbird: + +- **instances**: Number of instances allowed per project. +- **cores**: Number of instance cores allowed per project. +- **ram**: Megabytes of instance RAM allowed per project. +- **metadata_items**: Number of metadata items allowed per instance. +- **key_pairs**: Number of key pairs per user. +- **fixed_ips**: Number of fixed IPs allowed per project, + valid if Nova Network is used. +- **security_groups**: Number of security groups per project, + valid if Nova Network is used. +- **floating_ips**: Number of floating IPs allowed per project, + valid if Nova Network is used. +- **network**: Number of networks allowed per project, + valid if Neutron is used. +- **subnet**: Number of subnets allowed per project, + valid if Neutron is used. +- **port**: Number of ports allowed per project, + valid if Neutron is used. +- **security_group**: Number of security groups allowed per project, + valid if Neutron is used. +- **security_group_rule**: Number of security group rules allowed per project, + valid if Neutron is used. +- **router**: Number of routers allowed per project, + valid if Neutron is used. 
+- **floatingip**: Number of floating IPs allowed per project, + valid if Neutron is used. +- **volumes**: Number of volumes allowed per project. +- **snapshots**: Number of snapshots allowed per project. +- **gigabytes**: Total amount of storage, in gigabytes, allowed for volumes + and snapshots per project. +- **backups**: Number of volume backups allowed per project. +- **backup_gigabytes**: Total amount of storage, in gigabytes, allowed for volume + backups per project. + +Key pair is the only resource type supported in resource synchronization. + +Only restful APIs are provided for Kingbird in Colorado release, so curl or +other http client can be used to call Kingbird API. + +Before use the following command, get token, project id, and kingbird service +endpoint first. Use $kb_token to repesent the token, and $admin_tenant_id as +administrator project_id, and $tenant_id as the target project_id for quota +management and $kb_ip_addr for the kingbird service endpoint ip address. + +Note: +To view all tenants (projects), run: + + .. code-block:: bash + + openstack project list + +To get token, run: + + .. code-block:: bash + + openstack token issue + +To get Kingbird service endpoint, run: + + .. code-block:: bash + + openstack endpoint list + +Quota Management API +-------------------- + +1. Update global limit for a tenant + + Use python-kingbirdclient: + + .. code-block:: bash + + kingbird quota update b8eea2ceda4c47f1906fda7e7152a322 --port 10 --security_groups 10 + + Use curl: + + .. code-block:: bash + + curl \ + -H "Content-Type: application/json" \ + -H "X-Auth-Token: $kb_token" \ + -X PUT \ + -d '{"quota_set":{"cores": 10,"ram": 51200, "metadata_items": 100,"key_pairs": 100, "network":20,"security_group": 20,"security_group_rule": 20}}' \ + http://$kb_ip_addr:8118/v1.0/$admin_tenant_id/os-quota-sets/$tenant_id + +2. Get global limit for a tenant + + Use python-kingbirdclient: + + .. 
code-block:: bash + + kingbird quota show --tenant $tenant_id + + Use curl: + + .. code-block:: bash + + curl \ + -H "Content-Type: application/json" \ + -H "X-Auth-Token: $kb_token" \ + http://$kb_ip_addr:8118/v1.0/$admin_tenant_id/os-quota-sets/$tenant_id + +3. A tenant can also get the global limit by himself + + Use python-kingbirdclient: + + .. code-block:: bash + + kingbird quota show + + Use curl: + + .. code-block:: bash + + curl \ + -H "Content-Type: application/json" \ + -H "X-Auth-Token: $kb_token" \ + http://$kb_ip_addr:8118/v1.0/$tenant_id/os-quota-sets/$tenant_id + +4. Get defaults limits + + Use python-kingbirdclient: + + .. code-block:: bash + + kingbird quota defaults + + Use curl: + + .. code-block:: bash + + curl \ + -H "Content-Type: application/json" \ + -H "X-Auth-Token: $kb_token" \ + http://$kb_ip_addr:8118/v1.0/$admin_tenant_id/os-quota-sets/defaults + +5. Get total usage for a tenant + + Use python-kingbirdclient: + + .. code-block:: bash + + kingbird quota detail --tenant $tenant_id + + Use curl: + + .. code-block:: bash + + curl \ + -H "Content-Type: application/json" \ + -H "X-Auth-Token: $kb_token" \ + -X GET \ + http://$kb_ip_addr:8118/v1.0/$admin_tenant_id/os-quota-sets/$tenant_id/detail + +6. A tenant can also get the total usage by himself + + Use python-kingbirdclient: + + .. code-block:: bash + + kingbird quota detail + + Use curl: + + .. code-block:: bash + + curl \ + -H "Content-Type: application/json" \ + -H "X-Auth-Token: $kb_token" \ + -X GET \ + http://$kb_ip_addr:8118/v1.0/$tenant_id/os-quota-sets/$tenant_id/detail + +7. On demand quota sync + + Use python-kingbirdclient: + + .. code-block:: bash + + kingbird quota sync $tenant_id + + Use curl: + + .. code-block:: bash + + curl \ + -H "Content-Type: application/json" \ + -H "X-Auth-Token: $kb_token" \ + -X PUT \ + http://$kb_ip_addr:8118/v1.0/$admin_tenant_id/os-quota-sets/$tenant_id/sync + + +8. Delete specific global limit for a tenant + + Use curl: + + .. 
code-block:: bash + + curl \ + -H "Content-Type: application/json" \ + -H "X-Auth-Token: $kb_token" \ + -X DELETE \ + -d '{"quota_set": [ "cores", "ram"]}' \ + http://$kb_ip_addr:8118/v1.0/$admin_tenant_id/os-quota-sets/$tenant_id + +9. Delete all kingbird global limit for a tenant + + Use python-kingbirdclient: + + .. code-block:: bash + + kingbird quota delete $tenant_id + + Use curl: + + .. code-block:: bash + + curl \ + -H "Content-Type: application/json" \ + -H "X-Auth-Token: $kb_token" \ + -X DELETE \ + http://$kb_ip_addr:8118/v1.0/$admin_tenant_id/os-quota-sets/$tenant_id + + +Quota Class API +--------------- + +1. Update default quota class + + Use python-kingbirdclient: + + .. code-block:: bash + + kingbird quota-class update --port 10 --security_groups 10 <quota class> + + Use curl: + + .. code-block:: bash + + curl \ + -H "Content-Type: application/json" \ + -H "X-Auth-Token: $kb_token" \ + -X PUT \ + -d '{"quota_class_set":{"cores": 100, "network":50,"security_group": 50,"security_group_rule": 50}}' \ + http://$kb_ip_addr:8118/v1.0/$admin_tenant_id/os-quota-class-sets/default + +2. Get default quota class + + Use python-kingbirdclient: + + .. code-block:: bash + + kingbird quota-class show default + + Use curl: + + .. code-block:: bash + + curl \ + -H "Content-Type: application/json" \ + -H "X-Auth-Token: $kb_token" \ + http://$kb_ip_addr:8118/v1.0/$admin_tenant_id/os-quota-class-sets/default + +3. Delete default quota class + + Use python-kingbirdclient: + + .. code-block:: bash + + kingbird quota-class delete default + + Use curl: + + .. code-block:: bash + + curl \ + -H "Content-Type: application/json" \ + -H "X-Auth-Token: $kb_token" \ + -X DELETE \ + http://$kb_ip_addr:8118/v1.0/$admin_tenant_id/os-quota-class-sets/default + + +Resource Synchronization API +----------------------------- + +1. Create synchronization job + + .. 
code-block:: bash + + curl \ + -H "Content-Type: application/json" \ + -H "X-Auth-Token: $kb_token" \ + -X POST -d \ + '{"resource_set":{"resources": ["<Keypair_name>"],"force":<True/False>,"resource_type": "keypair","source": <"Source_Region">,"target": [<"List_of_target_regions">]}}' \ + http://$kb_ip_addr:8118/v1.0/$tenant_id/os-sync + +2. Get synchronization job + + .. code-block:: bash + + curl \ + -H "Content-Type: application/json" \ + -H "X-Auth-Token: $kb_token" \ + http://$kb_ip_addr:8118/v1.0/$tenant_id/os-sync/ + +3. Get active synchronization job + + .. code-block:: bash + + curl \ + -H "Content-Type: application/json" \ + -H "X-Auth-Token: $kb_token" \ + http://$kb_ip_addr:8118/v1.0/$tenant_id/os-sync/active + +4. Get detail information of a synchronization job + + .. code-block:: bash + + curl \ + -H "Content-Type: application/json" \ + -H "X-Auth-Token: $kb_token" \ + http://$kb_ip_addr:8118/v1.0/$tenant_id/os-sync/$job_id + +5. Delete a synchronization job + + .. code-block:: bash + + curl \ + -H "Content-Type: application/json" \ + -H "X-Auth-Token: $kb_token" \ + -X DELETE \ + http://$kb_ip_addr:8118/v1.0/$tenant_id/os-sync/job_id diff --git a/docs/release/userguide/multisite.tricircle.usage.rst b/docs/release/userguide/multisite.tricircle.usage.rst new file mode 100644 index 0000000..d42f5b0 --- /dev/null +++ b/docs/release/userguide/multisite.tricircle.usage.rst @@ -0,0 +1,13 @@ +.. This work is licensed under a Creative Commons Attribution 4.0 International License. +.. http://creativecommons.org/licenses/by/4.0 + +============================== +Multisite.Tricircle user guide +============================== + +Tricircle is one OpenStack big-tent project. 
All user guide related documents +could be found from OpenStack website: + * Developer Guide: http://docs.openstack.org/developer/tricircle/ + * Installation Guide: http://docs.openstack.org/developer/tricircle/installation-guide.html + * Configuration Guide: http://docs.openstack.org/developer/tricircle/configuration.html + * Networking Guide: http://docs.openstack.org/developer/tricircle/networking-guide.html diff --git a/docs/requirements/VNF_high_availability_across_VIM.rst b/docs/requirements/VNF_high_availability_across_VIM.rst index 6c2e9f1..42c479e 100644 --- a/docs/requirements/VNF_high_availability_across_VIM.rst +++ b/docs/requirements/VNF_high_availability_across_VIM.rst @@ -1,21 +1,21 @@ .. This work is licensed under a Creative Commons Attribution 4.0 International License. .. http://creativecommons.org/licenses/by/4.0 -======================================= +================================ VNF high availability across VIM -======================================= +================================ Problem description =================== Abstract ------------- +-------- a VNF (telecom application) should, be able to realize high availability deloyment across OpenStack instances. Description ------------- +----------- VNF (Telecom application running over cloud) may (already) be designed as Active-Standby/Active-Active/N-Way to achieve high availability, @@ -64,7 +64,7 @@ the potential for correlated failure to very low levels (at least as low as the required overall application availability). Analysis of requirements to OpenStack -=========================== +===================================== The VNF often has different networking plane for different purpose: external network plane: using for communication with other VNF @@ -76,24 +76,37 @@ between the component's active/standy or active/active or N-way cluster. management plane: this plane is mainly for the management purpose Generally these planes are seperated with each other. 
And for legacy telecom -application, each internal plane will have its fixed or flexsible IP addressing -plane. +application, each internal plane will have its fixed or flexible IP addressing +plan. -to make the VNF can work with HA mode across different OpenStack instances in +To make the VNF can work with HA mode across different OpenStack instances in one site (but not limited to), need to support at lease the backup plane across different OpenStack instances: -1) Overlay L2 networking or shared L2 provider networks as the backup plance for -heartbeat or state replication. Overlay L2 network is preferred, the reason is: -a. Support legacy compatibility: Some telecom app with built-in internal L2 -network, for easy to move these app to VNF, it would be better to provide L2 -network b. Support IP overlapping: multiple VNFs may have overlaping IP address -for cross OpenStack instance networking +1) L2 networking across OpenStack instance for heartbeat or state replication. +Overlay L2 networking or shared L2 provider networks can work as the backup +plance for heartbeat or state replication. Overlay L2 network is preferred, +the reason is: + + a. Support legacy compatibility: Some telecom app with built-in internal L2 + network, for easy to move these app to VNF, it would be better to provide + L2 network. + b. Isolated L2 network will simplify the security management between + different network planes. + c. Easy to support IP/mac floating across OpenStack. + d. Support IP overlapping: multiple VNFs may have overlaping IP address for + cross OpenStack instance networking. + Therefore, over L2 networking across Neutron feature is required in OpenStack. -2) L3 networking cross OpenStack instance for heartbeat or state replication. -For L3 networking, we can leverage the floating IP provided in current Neutron, -so no new feature requirement to OpenStack. +2) L3 networking across OpenStack instance for heartbeat or state replication. 
+For L3 networking, we can leverage the floating IP provided in current
+Neutron, or use VPN or BGPVPN (networking-bgpvpn) to set up the connection.
+
+L3 networking to support VNF HA will consume more resources and needs to
+take more security factors into consideration, which makes the networking
+more complex. L3 networking also cannot provide IP floating across
+OpenStack instances.
 
 3) The IP address used for VNF to connect with other VNFs should be able to be
 floating cross OpenStack instance. For example, if the master failed, the IP
@@ -103,48 +116,20 @@ external IP, so no new feature will be added to OpenStack.
 
 Prototype
------------
+---------
 None.
 
 Proposed solution
------------
-
-  requirements perspective It's up to application descision to use L2 or L3
-networking across Neutron.
-
-  For Neutron, a L2 network is consisted of lots of ports. To make the cross
-Neutron L2 networking is workable, we need some fake remote ports in local
-Neutron to represent VMs in remote site ( remote OpenStack ).
-
-  the fake remote port will reside on some VTEP ( for VxLAN ), the tunneling
-IP address of the VTEP should be the attribute of the fake remote port, so that
-the local port can forward packet to correct tunneling endpoint.
-
-  the idea is to add one more ML2 mechnism driver to capture the fake remote
-port CRUD( creation, retievement, update, delete)
-
-  when a fake remote port is added/update/deleted, then the ML2 mechanism
-driver for these fake ports will activate L2 population, so that the VTEP
-tunneling endpoint information could be understood by other local ports.
-
-  it's also required to be able to query the port's VTEP tunneling endpoint
-information through Neutron API, in order to use these information to create
-fake remote port in another Neutron.
-
-  In the past, the port's VTEP ip address is the host IP where the VM resides.
-But the this BP https://review.openstack.org/#/c/215409/ will make the port free
-of binding to host IP as the tunneling endpoint, you can even specify L2GW ip
-address as the tunneling endpoint.
-
-  Therefore a new BP will be registered to processing the fake remote port, in
-order make cross Neutron L2 networking is feasible. RFE is registered first:
-https://bugs.launchpad.net/neutron/+bug/1484005
-
+-----------------
+Several projects are addressing the networking requirements:
+  * Tricircle: https://github.com/openstack/tricircle/
+  * Networking-BGPVPN: https://github.com/openstack/networking-bgpvpn/
+  * VPNaaS: https://github.com/openstack/neutron-vpnaas
 
 Gaps
 ====
-  1) fake remote port for cross Neutron L2 networking
-
+  Inter-networking among OpenStack clouds for the application HA need is
+  lacking in Neutron, and is covered by several newly created projects.
 
 **NAME-THE-MODULE issues:**
 
@@ -156,4 +141,3 @@ Affected By
 
 References
 ==========
-
diff --git a/docs/requirements/multisite-centralized-service.rst b/docs/requirements/multisite-centralized-service.rst
new file mode 100644
index 0000000..5dbbfc8
--- /dev/null
+++ b/docs/requirements/multisite-centralized-service.rst
@@ -0,0 +1,109 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International License.
+.. http://creativecommons.org/licenses/by/4.0
+
+==============================
+ Multisite centralized service
+==============================
+
+
+Problem description
+===================
+
+Abstract
+--------
+
+A user should have one centralized service for resource management and/or
+replication (sync tenant resources like images, ssh-keys, etc.) across
+multiple OpenStack clouds.
+
+Description
+-----------
+
+For multisite management use cases, some common requirements in terms of
+centralized or shared services over the multiple OpenStack instances are
+summarized here.
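To make the idea of a centralized, summarized view concrete, the following is a small hypothetical sketch (region names and figures are illustrative, not from any real deployment, and this is not part of Kingbird or Tricircle) that merges per-region capacity and usage reports into one tenant-level view:

```python
# Hypothetical sketch: merge per-region resource reports into one
# summarized view, as a centralized multisite service would present it.
# Region names and figures below are illustrative only.

def summarize(per_region):
    """per_region maps region -> {resource: {"capacity": n, "used": n}};
    return totals per resource across all regions."""
    total = {}
    for report in per_region.values():
        for resource, figures in report.items():
            entry = total.setdefault(resource, {"capacity": 0, "used": 0})
            entry["capacity"] += figures["capacity"]
            entry["used"] += figures["used"]
    return total

view = summarize({
    "RegionOne": {"cores": {"capacity": 100, "used": 40}},
    "RegionTwo": {"cores": {"capacity": 50, "used": 10}},
})
# view["cores"] == {"capacity": 150, "used": 50}
```

In a real deployment the per-region figures would come from each region's Nova/Cinder/Neutron APIs; the aggregation step itself stays this simple.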
+
+A user should be able to manage all their virtual resources from one
+centralized management interface, at least to have a summarized view of
+the total resource capacity and the live utilization of their virtual
+resources, for example:
+
+- Centralized Quota Management
+  Currently all quotas are set for each region separately, and different
+  services (Nova, Cinder, Neutron, Glance, ...) have different quotas to
+  be set. The requirement is to provide a global view of the quota per
+  tenant across multiple regions, and soft/hard quotas based on the current
+  usage in all regions for this tenant.
+
+- A service to clone ssh keys across regions
+  A user may upload a keypair to access the VMs allocated for her. But if
+  her VMs are spread over multiple regions, the user has to upload the
+  keypair separately to each region. A service is needed to clone the SSH
+  key to the desired OpenStack clouds.
+
+- A service to sync images across regions
+  In a multi-site scenario, a user has to upload an image separately to
+  each region. Four cases need to be considered:
+    No image sync
+    Auto-sync of images
+    Lazy sync - clone the requested image on demand.
+    Controlled sync, where you can control propagation and roll back if
+    there are problems.
+
+- Global view for tenant level IP address / mac address space management
+  If a tenant has networks in multiple regions, and these networks are
+  routable (for example, connected with VPN), then IP addresses may be
+  duplicated. A global view for IP address space management is needed.
+  If IPv4 is used, this issue needs to be considered; for IPv6, it should
+  also be managed. This requirement is important not only for the
+  prevention of duplicate addresses.
+  For security and other reasons it's important to know which IP addresses
+  (IPv4 and IPv6) are used in which region.
+  This requirement should be extended to floating and public IP addresses.
+
+- A service to clone security groups across regions
+  There is no appropriate service to clone security groups across multiple
+  regions; a tenant with distributed resources has to set the security
+  groups in each region manually.
+
+- A user should be able to access all the logs and indicators produced by
+  multiple openstack instances, in a centralized way.
+
+Requirement analysis
+====================
+
+The problems mentioned here are not covered by existing projects in
+OpenStack.
+
+Candidate solution analysis
+---------------------------
+
+- Kingbird[1][2]
+  Kingbird is a centralized OpenStack service that provides resource
+  operation and management across multiple OpenStack instances in a
+  multi-region OpenStack deployment. Kingbird provides features like
+  centralized quota management, a centralized view for distributed virtual
+  resources, and synchronisation of ssh keys, images, flavors etc. across
+  regions.
+
+- Tricircle[3][4]
+  Tricircle provides networking automation across Neutron in multi-region
+  OpenStack deployments. Tricircle can address the challenges mentioned
+  here: tenant level IP/mac address management to avoid conflicts across
+  OpenStack clouds, global L2 network segment management and cross
+  OpenStack L2 networking, and keeping security groups synchronized across
+  OpenStack clouds.
+
+
+Affected By
+-----------
+  OPNFV multisite cloud.
+
+Conclusion
+----------
+  Kingbird and Tricircle are candidate solutions for these centralized
+  services in OpenStack multi-region clouds.
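The "global view for tenant level IP address space management" requirement above can be illustrated with a short sketch. This is a hypothetical example, not Kingbird or Tricircle code; the region names and CIDRs are invented, and only Python's standard library is used:

```python
# Hypothetical sketch: given each region's tenant subnets, detect CIDRs
# that overlap across regions (a conflict if the networks are routable,
# e.g. connected with VPN). Region names/CIDRs are illustrative only.
from ipaddress import ip_network
from itertools import combinations

def find_overlaps(region_subnets):
    """region_subnets maps region -> list of CIDR strings; return
    (region_a, cidr_a, region_b, cidr_b) for every cross-region overlap."""
    flat = [(region, ip_network(cidr))
            for region, cidrs in region_subnets.items()
            for cidr in cidrs]
    return [(ra, str(na), rb, str(nb))
            for (ra, na), (rb, nb) in combinations(flat, 2)
            if ra != rb and na.overlaps(nb)]

overlaps = find_overlaps({
    "RegionOne": ["10.0.0.0/24", "192.168.1.0/24"],
    "RegionTwo": ["10.0.0.128/25", "172.16.0.0/16"],
})
# 10.0.0.0/24 in RegionOne overlaps 10.0.0.128/25 in RegionTwo
```

A centralized service would run this kind of check over subnet data pulled from every region's Neutron, for IPv6 prefixes as well as IPv4.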
+
+References
+==========
+[1] Kingbird repository: https://github.com/openstack/kingbird
+[2] Kingbird launchpad: https://launchpad.net/kingbird
+[3] Tricircle wiki: https://wiki.openstack.org/wiki/Tricircle
+[4] Tricircle repository: https://github.com/openstack/tricircle/
diff --git a/docs/requirements/multisite-identity-service-management.rst b/docs/requirements/multisite-identity-service-management.rst
index ad2cea1..c1eeb2b 100644
--- a/docs/requirements/multisite-identity-service-management.rst
+++ b/docs/requirements/multisite-identity-service-management.rst
@@ -9,12 +9,12 @@ Glossary
 ========
 
 There are 3 types of token supported by OpenStack KeyStone
 
+  **FERNET**
+  **UUID**
 **PKI/PKIZ**
-  **FERNET**
-
 Please refer to reference section for these token formats, benchmark and
 comparison.
@@ -189,7 +189,7 @@ cover very well.
 multi-cluster mode).
 
 We may have several KeyStone clusters with Fernet token, for example,
-cluster1 ( site1, site2, … site 10 ), cluster 2 ( site11, site 12,..,site 20).
+cluster1(site1, site2, .., site 10), cluster 2(site11, site 12,.., site 20).
 
 Then do the DB async replication among the different clusters asynchronously.
 
 A prototype of this has been done. In some blogs they call it
@@ -208,14 +208,16 @@ http://lbragstad.com/?p=156
 
 - KeyStone service(Distributed) with Fernet token + Async replication ( star-mode).
-  one master KeyStone cluster with Fernet token in two sites (for site level
-high availability purpose), other sites will be installed with at least 2 slave
-nodes where the node is configured with DB async replication from the master
-cluster members, and one slave’s mater node in site1, another slave’s master
-node in site 2.
+  one master KeyStone cluster with Fernet token in one or two sites (two
+sites if site level high availability is required), other sites will be
+installed with at least 2 slave nodes where each node is configured with
+DB async replication from the master cluster member. The async. replication
+data source should preferably be a different member of the master cluster;
+if there are two sites for the KeyStone cluster, it is better that the
+source members for async. replication are located in different sites.
 
 Only the master cluster nodes are allowed to write, other slave nodes
-waiting for replication from the master cluster ( very little delay) member.
+waiting for ( very little delay) replication from the master cluster member.
 But the challenge of key distribution and rotation for Fernet token should
 be settled, you can refer to these two blogs: http://lbragstad.com/?p=133,
 http://lbragstad.com/?p=156
@@ -349,6 +351,9 @@ in deployment and maintenance, with better scalability.
 token + Async replication ( star-mode)" for multisite OPNFV cloud is
 recommended.
 
+  PKI token has been deprecated, so all proposals about PKI token are not
+recommended.
+
 References
 ==========
 
diff --git a/docs/userguide/multisite.kingbird.usage.rst b/docs/userguide/multisite.kingbird.usage.rst
deleted file mode 100644
index 4cdab4f..0000000
--- a/docs/userguide/multisite.kingbird.usage.rst
+++ /dev/null
@@ -1,182 +0,0 @@
-.. This work is licensed under a Creative Commons Attribution 4.0 International License.
-.. http://creativecommons.org/licenses/by/4.0
-
-=============================
-Multisite.Kingbird user guide
-=============================
-
-Quota management for OpenStack multi-region deployments
--------------------------------------------------------
-Kingbird is centralized synchronization service for multi-region OpenStack
-deployments. In OPNFV Colorado release, Kingbird provides centralized quota
-management feature. Administrator can set quota per project based in Kingbird
-and sync the quota limit to multi-region OpenStack periodiclly or on-demand.
-The tenant can check the total quota limit and usage from Kingbird for all
-regions. Administrator can aslo manage the default quota by quota class
-setting.
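The centralized quota model described above (one global limit per tenant, synced out to regions) can be sketched as follows. This is an illustration of the concept only, assuming an equal split of unused headroom; it is not Kingbird's actual allocation algorithm, and the region names are invented:

```python
# Hypothetical sketch of the centralized-quota idea: a global per-tenant
# limit is periodically recomputed into per-region limits so that the sum
# of regional limits never exceeds the global one. Not Kingbird's actual
# algorithm; assumes a non-empty region_usage mapping.

def sync_region_limits(global_limit, region_usage):
    """Give each region its current usage plus an equal share of the
    remaining headroom; return a dict of per-region limits."""
    total_used = sum(region_usage.values())
    headroom = max(global_limit - total_used, 0)
    share, extra = divmod(headroom, len(region_usage))
    limits = {}
    for i, (region, used) in enumerate(sorted(region_usage.items())):
        # Distribute any indivisible remainder to the first regions.
        limits[region] = used + share + (1 if i < extra else 0)
    return limits

limits = sync_region_limits(10, {"RegionOne": 3, "RegionTwo": 1})
# RegionOne: 3 used + 3 headroom = 6; RegionTwo: 1 + 3 = 4; sum == 10
```

An on-demand sync (as in the API below) would simply re-run this computation with fresh usage figures and push each regional limit to that region's Nova/Cinder/Neutron quota APIs.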
- -Following quota items are supported to be managed in Kingbird: - -- **instances**: Number of instances allowed per project. -- **cores**: Number of instance cores allowed per project. -- **ram**: Megabytes of instance RAM allowed per project. -- **metadata_items**: Number of metadata items allowed per instance. -- **key_pairs**: Number of key pairs per user. -- **fixed_ips**: Number of fixed IPs allowed per project, - valid if Nova Network is used. -- **security_groups**: Number of security groups per project, - valid if Nova Network is used. -- **floating_ips**: Number of floating IPs allowed per project, - valid if Nova Network is used. -- **network**: Number of networks allowed per project, - valid if Neutron is used. -- **subnet**: Number of subnets allowed per project, - valid if Neutron is used. -- **port**: Number of ports allowed per project, - valid if Neutron is used. -- **security_group**: Number of security groups allowed per project, - valid if Neutron is used. -- **security_group_rule**: Number of security group rules allowed per project, - valid if Neutron is used. -- **router**: Number of routers allowed per project, - valid if Neutron is used. -- **floatingip**: Number of floating IPs allowed per project, - valid if Neutron is used. -- **volumes**: Number of volumes allowed per project. -- **snapshots**: Number of snapshots allowed per project. -- **gigabytes**: Total amount of storage, in gigabytes, allowed for volumes - and snapshots per project. -- **backups**: Number of volume backups allowed per project. -- **backup_gigabytes**: Total amount of storage, in gigabytes, allowed for volume - backups per project. - -Only restful APIs are provided for Kingbird in Colorado release, so curl or -other http client can be used to call Kingbird API. - -Before use the following command, get token, project id, and kingbird service -endpoint first. 
Use $kb_token to repesent the token, and $admin_tenant_id as -administrator project_id, and $tenant_id as the target project_id for quota -management and $kb_ip_addr for the kingbird service endpoint ip address. - -Note: -To view all tenants (projects), run: - -.. code-block:: bash - - openstack project list - -To get token, run: - -.. code-block:: bash - - openstack token issue - -To get Kingbird service endpoint, run: - -.. code-block:: bash - - openstack endpoint list - -Quota Management API --------------------- - -1. Update global limit for a tenant - - curl \ - -H "Content-Type: application/json" \ - -H "X-Auth-Token: $kb_token" \ - -X PUT \ - -d '{"quota_set":{"cores": 10,"ram": 51200, "metadata_items": 100,"key_pairs": 100, "network":20,"security_group": 20,"security_group_rule": 20}}' \ - http://$kb_ip_addr:8118/v1.0/$admin_tenant_id/os-quota-sets/$tenant_id - -2. Get global limit for a tenant - - curl \ - -H "Content-Type: application/json" \ - -H "X-Auth-Token: $kb_token" \ - http://$kb_ip_addr:8118/v1.0/$admin_tenant_id/os-quota-sets/$tenant_id - -3. A tenant can also get the global limit by himself - - curl \ - -H "Content-Type: application/json" \ - -H "X-Auth-Token: $kb_token" \ - http://$kb_ip_addr:8118/v1.0/$tenant_id/os-quota-sets/$tenant_id - -4. Get defaults limits - - curl \ - -H "Content-Type: application/json" \ - -H "X-Auth-Token: $kb_token" \ - http://$kb_ip_addr:8118/v1.0/$admin_tenant_id/os-quota-sets/defaults - -5. Get total usage for a tenant - - curl \ - -H "Content-Type: application/json" \ - -H "X-Auth-Token: $kb_token" \ - -X GET \ - http://$kb_ip_addr:8118/v1.0/$admin_tenant_id/os-quota-sets/$tenant_id/detail - -6. A tenant can also get the total usage by himself - - curl \ - -H "Content-Type: application/json" \ - -H "X-Auth-Token: $kb_token" \ - -X GET \ - http://$kb_ip_addr:8118/v1.0/$tenant_id/os-quota-sets/$tenant_id/detail - -7. 
On demand quota sync - - curl \ - -H "Content-Type: application/json" \ - -H "X-Auth-Token: $kb_token" \ - -X PUT \ - http://$kb_ip_addr:8118/v1.0/$admin_tenant_id/os-quota-sets/$tenant_id/sync - - -8. Delete specific global limit for a tenant - - curl \ - -H "Content-Type: application/json" \ - -H "X-Auth-Token: $kb_token" \ - -X DELETE \ - -d '{"quota_set": [ "cores", "ram"]}' \ - http://$kb_ip_addr:8118/v1.0/$admin_tenant_id/os-quota-sets/$tenant_id - -9. Delete all kingbird global limit for a tenant - - curl \ - -H "Content-Type: application/json" \ - -H "X-Auth-Token: $kb_token" \ - -X DELETE \ - http://$kb_ip_addr:8118/v1.0/$admin_tenant_id/os-quota-sets/$tenant_id - - -Quota Class API ---------------- - -1. Update default quota class - - curl \ - -H "Content-Type: application/json" \ - -H "X-Auth-Token: $kb_token" \ - -X PUT \ - -d '{"quota_class_set":{"cores": 100, "network":50,"security_group": 50,"security_group_rule": 50}}' \ - http://$kb_ip_addr:8118/v1.0/$admin_tenant_id/os-quota-class-sets/default - -2. Get default quota class - - curl \ - -H "Content-Type: application/json" \ - -H "X-Auth-Token: $kb_token" \ - http://$kb_ip_addr:8118/v1.0/$admin_tenant_id/os-quota-class-sets/default - -3. Delete default quota class - - curl \ - -H "Content-Type: application/json" \ - -H "X-Auth-Token: $kb_token" \ - -X DELETE \ - http://$kb_ip_addr:8118/v1.0/$admin_tenant_id/os-quota-class-sets/default - |
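All of the curl examples in the removed usage guide above target the same endpoint family on port 8118. As a convenience, the URL and header construction can be sketched in Python; only the port, paths, and headers mirror the documented API, while the helper and variable names are purely illustrative:

```python
# Sketch of a thin helper around the Kingbird quota API endpoints shown in
# the curl examples above. Helper names are hypothetical; only the URL
# layout (port 8118, /v1.0/<tenant>/os-quota-sets/...) and the headers
# mirror the documented API.

def kb_url(kb_ip_addr, admin_tenant_id, tenant_id, action=None):
    """Build an os-quota-sets URL, optionally with a sub-action such as
    'sync' or 'detail' appended."""
    url = "http://%s:8118/v1.0/%s/os-quota-sets/%s" % (
        kb_ip_addr, admin_tenant_id, tenant_id)
    if action:
        url += "/" + action
    return url

def kb_headers(kb_token):
    """Headers carried by every request in the examples above."""
    return {"Content-Type": "application/json", "X-Auth-Token": kb_token}

# e.g. the on-demand quota sync (item 7 above) would PUT to:
sync_url = kb_url("192.0.2.10", "admin_tid", "tenant_tid", "sync")
```

In later releases python-kingbirdclient wraps these endpoints, so such a helper is only needed when driving the REST API directly.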