From 7dbbb63739db4aac973fb6d5f3f16b5e9206ce14 Mon Sep 17 00:00:00 2001
From: joehuang
Date: Tue, 7 Feb 2017 04:17:31 -0500
Subject: Update the multisite documentation to reflect the progress in D

Several changes have happened in related OpenStack projects: the KeyStone
PKI token format was deprecated, L2GW moved away from the Neutron stadium,
Tricircle shrank its scope and became an OpenStack big-tent project, and
Kingbird has made great progress in feature development after the initial
requirements discussion. The documents need to be updated to reflect these
recent changes.

python-kingbirdclient was introduced recently, so the usage guide is
updated to use python-kingbirdclient. The new key pair synchronization
feature is also included in the usage guide.

Change-Id: Iad9fbd441d191defa5e8793633a626ab5a24f217
Signed-off-by: joehuang
---
 docs/installationprocedure/index.rst               |  19 -
 .../multisite.configuration.rst                    | 110 ------
 .../multisite.kingbird.configuration.rst           | 264 --------------
 .../multisite.kingbird.installation.rst            | 305 ----------------
 docs/release/configguide/index.rst                 |  17 +
 .../configguide/multisite.configuration.rst        | 106 ++++++
 .../multisite.kingbird.configuration.rst           | 264 ++++++++++++++
 docs/release/installation/index.rst                |  15 +
 .../multisite.kingbird.installation.rst            | 298 ++++++++++++++++
 docs/release/overview/index.rst                    |  12 +
 docs/release/overview/multisite.release.notes.rst  |  11 +
 docs/release/userguide/index.rst                   |  15 +
 docs/release/userguide/multisite.admin.usage.rst   | 365 +++++++++++++++++++
 .../release/userguide/multisite.kingbird.usage.rst | 349 ++++++++++++++++++
 .../userguide/multisite.tricircle.usage.rst        |  13 +
 docs/releasenotes/index.rst                        |  12 -
 docs/releasenotes/multisite.release.notes.rst      |  14 -
 .../VNF_high_availability_across_VIM.rst           |  92 ++---
 .../requirements/multisite-centralized-service.rst | 109 ++++++
 .../multisite-identity-service-management.rst      |  23 +-
 docs/userguide/index.rst                           |  13 -
 docs/userguide/multisite.admin.usage.rst           | 390 ---------------------
 docs/userguide/multisite.kingbird.usage.rst        | 182 ----------
 23 files changed, 1626 insertions(+), 1372 deletions(-)
 delete mode 100644 docs/installationprocedure/index.rst
 delete mode 100644 docs/installationprocedure/multisite.configuration.rst
 delete mode 100644 docs/installationprocedure/multisite.kingbird.configuration.rst
 delete mode 100644 docs/installationprocedure/multisite.kingbird.installation.rst
 create mode 100644 docs/release/configguide/index.rst
 create mode 100644 docs/release/configguide/multisite.configuration.rst
 create mode 100644 docs/release/configguide/multisite.kingbird.configuration.rst
 create mode 100644 docs/release/installation/index.rst
 create mode 100644 docs/release/installation/multisite.kingbird.installation.rst
 create mode 100644 docs/release/overview/index.rst
 create mode 100644 docs/release/overview/multisite.release.notes.rst
 create mode 100644 docs/release/userguide/index.rst
 create mode 100644 docs/release/userguide/multisite.admin.usage.rst
 create mode 100644 docs/release/userguide/multisite.kingbird.usage.rst
 create mode 100644 docs/release/userguide/multisite.tricircle.usage.rst
 delete mode 100644 docs/releasenotes/index.rst
 delete mode 100644 docs/releasenotes/multisite.release.notes.rst
 create mode 100644 docs/requirements/multisite-centralized-service.rst
 delete mode 100644 docs/userguide/index.rst
 delete mode 100644 docs/userguide/multisite.admin.usage.rst
 delete mode 100644 docs/userguide/multisite.kingbird.usage.rst

(limited to 'docs')

diff --git a/docs/installationprocedure/index.rst
b/docs/installationprocedure/index.rst deleted file mode 100644 index 746f819..0000000 --- a/docs/installationprocedure/index.rst +++ /dev/null @@ -1,19 +0,0 @@ -.. This work is licensed under a Creative Commons Attribution 4.0 International License. -.. http://creativecommons.org/licenses/by/4.0 -.. (c) Sofia Wallin Ericsson AB - -********************** -Installation procedure -********************** -Colorado 1.0 ------------- - -.. toctree:: - :numbered: - :maxdepth: 2 - - abstract.rst - multisite.kingbird.installation.rst - multisite.configuration.rst - multisite.kingbird.configuration.rst - diff --git a/docs/installationprocedure/multisite.configuration.rst b/docs/installationprocedure/multisite.configuration.rst deleted file mode 100644 index c005e8d..0000000 --- a/docs/installationprocedure/multisite.configuration.rst +++ /dev/null @@ -1,110 +0,0 @@ -.. This work is licensed under a Creative Commons Attribution 4.0 International License. -.. http://creativecommons.org/licenses/by/4.0 - -============================= -Multisite configuration guide -============================= - -Multisite identity service management -===================================== - -Goal ----- - -A user should, using a single authentication point be able to manage virtual -resources spread over multiple OpenStack regions. - -Before you read ---------------- - -This chapter does not intend to cover all configuration of KeyStone and other -OpenStack services to work together with KeyStone. - -This chapter focuses only on the configuration part should be taken into -account in multi-site scenario. - -Please read the configuration documentation related to identity management -of OpenStack for all configuration items. - -http://docs.openstack.org/liberty/config-reference/content/ch_configuring-openstack-identity.html - -How to configure the database cluster for synchronization or asynchrounous -repliation in multi-site scenario is out of scope of this document. The only -remainder is that for the synchronization or replication, only Keystone -database is required. If you are using MySQL, you can configure like this: - -In the master: - - .. code-block:: bash - - binlog-do-db=keystone - -In the slave: - - .. code-block:: bash - - replicate-do-db=keystone - - -Deployment options ------------------- - -For each detail description of each deployment option, please refer to the -admin-user-guide. - -- Distributed KeyStone service with PKI token - - In KeyStone configuration file, PKI token format should be configured - - .. code-block:: bash - - provider = pki - - or - - .. code-block:: bash - - provider = pkiz - - In the [keystone_authtoken] section of each OpenStack service configuration - file in each site, configure the identity_url and auth_uri to the address - of KeyStone service - - .. code-block:: bash - - identity_uri = https://keystone.your.com:35357/ - auth_uri = http://keystone.your.com:5000/v2.0 - - It's better to use domain name for the KeyStone service, but not to use IP - address directly, especially if you deployed KeyStone service in at least - two sites for site level high availability. - -- Distributed KeyStone service with Fernet token -- Distributed KeyStone service with Fernet token + Async replication ( - star-mode). - - In these two deployment options, the token validation is planned to be done - in local site. - - In KeyStone configuration file, Fernet token format should be configured - - .. 
code-block:: bash - - provider = fernet - - In the [keystone_authtoken] section of each OpenStack service configuration - file in each site, configure the identity_url and auth_uri to the address - of local KeyStone service - - .. code-block:: bash - - identity_uri = https://local-keystone.your.com:35357/ - auth_uri = http://local-keystone.your.com:5000/v2.0 - - and especially, configure the region_name to your local region name, for - example, if you are configuring services in RegionOne, and there is local - KeyStone service in RegionOne, then - - .. code-block:: bash - - region_name = RegionOne diff --git a/docs/installationprocedure/multisite.kingbird.configuration.rst b/docs/installationprocedure/multisite.kingbird.configuration.rst deleted file mode 100644 index 7eb6106..0000000 --- a/docs/installationprocedure/multisite.kingbird.configuration.rst +++ /dev/null @@ -1,264 +0,0 @@ -.. This work is licensed under a Creative Commons Attribution 4.0 International License. -.. http://creativecommons.org/licenses/by/4.0 - - -Configuration of Multisite.Kingbird -=================================== - -A brief introduction to configure Multisite Kingbird service. Only the -configuration items for Kingbird will be described here. Logging, -messaging, database, keystonemiddleware etc configuration which are -generated from OpenStack OSLO libary, will not be described here, for -these configuration items are common to Nova, Cinder, Neutron. So please -refer to corresponding description from Nova or Cinder or Neutron. - - -Configuration in [DEFAULT] --------------------------- - -configuration items for kingbird-api -"""""""""""""""""""""""""""""""""""" - -bind_host -********* -- default value: *bind_host = 0.0.0.0* -- description: The host IP to bind for kingbird-api service - -bind_port -********* -- default value: *bind_port = 8118* -- description: The port to bind for kingbird-api service - -api_workers -*********** -- default value: *api_workers = 2* -- description: Number of kingbird-api workers - -configuration items for kingbird-engine -""""""""""""""""""""""""""""""""""""""" - -host -**** -- default value: *host = localhost* -- description: The host name kingbird-engine service is running on - -workers -******* -- default value: *workers = 1* -- description: Number of kingbird-engine workers - -report_interval -*************** -- default value: *report_interval = 60* -- description: Seconds between running periodic reporting tasks to - keep the engine alive in the DB. If the engine doesn't report its - aliveness to the DB more than two intervals, then the lock accquired - by the engine will be removed by other engines. - -common configuration items for kingbird-api and kingbird-engine -""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""" - -use_default_quota_class -*********************** -- default value: *use_default_quota_class = true* -- description: Enables or disables use of default quota class with default - quota, boolean value - -Configuration in [kingbird_global_limit] ----------------------------------------- - -For quota limit, a negative value means unlimited. - -configuration items for kingbird-api and kingbird-engine -"""""""""""""""""""""""""""""""""""""""""""""""""""""""" - -quota_instances -*************** -- default value: *quota_instances = 10* -- description: Number of instances allowed per project, integer value. - -quota_cores -*********** -- default value: *quota_cores = 20* -- description: Number of instance cores allowed per project, integer value. 
- -quota_ram -********* -- default value: *quota_ram = 512* -- description: Megabytes of instance RAM allowed per project, integer value. - -quota_metadata_items -******************** -- default value: *quota_metadata_items = 128* -- description: Number of metadata items allowed per instance, integer value. - -quota_key_pairs -*************** -- default value: *quota_key_pairs = 10* -- description: Number of key pairs per user, integer value. - -quota_fixed_ips -*************** -- default value: *quota_fixed_ips = -1* -- description: Number of fixed IPs allowed per project, this should be at - least the number of instances allowed, integer value. - -quota_security_groups -********************* -- default value: *quota_security_groups = 10* -- description: Number of security groups per project, integer value. - -quota_floating_ips -****************** -- default value: *quota_floating_ips = 10* -- description: Number of floating IPs allowed per project, integer value. - -quota_network -*************** -- default value: *quota_network = 10* -- description: Number of networks allowed per project, integer value. - -quota_subnet -*************** -- default value: *quota_subnet = 10* -- description: Number of subnets allowed per project, integer value. - -quota_port -*************** -- default value: *quota_port = 50* -- description: Number of ports allowed per project, integer value. - -quota_security_group -******************** -- default value: *quota_security_group = 10* -- description: Number of security groups allowed per project, integer value. - -quota_security_group_rule -************************* -- default value: *quota_security_group_rule = 100* -- description: Number of security group rules allowed per project, integer - value. - -quota_router -************ -- default value: *quota_router = 10* -- description: Number of routers allowed per project, integer value. - -quota_floatingip -**************** -- default value: *quota_floatingip = 50* -- description: Number of floating IPs allowed per project, integer value. - -quota_volumes -*************** -- default value: *quota_volumes = 10* -- description: Number of volumes allowed per project, integer value. - -quota_snapshots -*************** -- default value: *quota_snapshots = 10* -- description: Number of snapshots allowed per project, integer value. - -quota_gigabytes -*************** -- default value: *quota_gigabytes = 1000* -- description: Total amount of storage, in gigabytes, allowed for volumes - and snapshots per project, integer value. - -quota_backups -************* -- default value: *quota_backups = 10* -- description: Number of volume backups allowed per project, integer value. - -quota_backup_gigabytes -********************** -- default value: *quota_backup_gigabytes = 1000* -- description: Total amount of storage, in gigabytes, allowed for volume - backups per project, integer value. - -Configuration in [cache] ----------------------------------------- - -The [cache] section is used by kingbird engine to access the quota -information for Nova, Cinder, Neutron in each region in order to reduce -the KeyStone load while retrieving the endpoint information each time. - -configuration items for kingbird-engine -""""""""""""""""""""""""""""""""""""""" - -auth_uri -*************** -- default value: -- description: Keystone authorization url, for example, http://127.0.0.1:5000/v3. - -admin_username -************** -- default value: -- description: Username of admin account, for example, admin. 
- -admin_password -************** -- default value: -- description: Password for admin account, for example, password. - -admin_tenant -************ -- default value: -- description: Tenant name of admin account, for example, admin. - -admin_user_domain_name -********************** -- default value: *admin_user_domain_name = Default* -- description: User domain name of admin account. - -admin_project_domain_name -************************* -- default value: *admin_project_domain_name = Default* -- description: Project domain name of admin account. - -Configuration in [scheduler] ----------------------------------------- - -The [scheduler] section is used by kingbird engine to periodically synchronize -and rebalance the quota for each project. - -configuration items for kingbird-engine -""""""""""""""""""""""""""""""""""""""" - -periodic_enable -*************** -- default value: *periodic_enable = True* -- description: Boolean value for enable/disable periodic tasks. - -periodic_interval -***************** -- default value: *periodic_interval = 900* -- description: Periodic time interval for automatic quota sync job, unit is - seconds. - -Configuration in [batch] ----------------------------------------- - -The [batch] section is used by kingbird engine to periodicly synchronize -and rebalance the quota for each project. - -batch_size -*************** -- default value: *batch_size = 3* -- description: Batch size number of projects will be synced at a time. - -Configuration in [locks] ----------------------------------------- - -The [locks] section is used by kingbird engine to periodically synchronize -and rebalance the quota for each project. - -lock_retry_times -**************** -- default value: *lock_retry_times = 3* -- description: Number of times trying to grab a lock. - -lock_retry_interval -******************* -- default value: *lock_retry_interval =10* -- description: Number of seconds between lock retries. diff --git a/docs/installationprocedure/multisite.kingbird.installation.rst b/docs/installationprocedure/multisite.kingbird.installation.rst deleted file mode 100644 index 9abb669..0000000 --- a/docs/installationprocedure/multisite.kingbird.installation.rst +++ /dev/null @@ -1,305 +0,0 @@ -.. This work is licensed under a Creative Commons Attribution 4.0 International License. -.. http://creativecommons.org/licenses/by/4.0 - -=========================================== -Multisite Kingbird installation instruction -=========================================== - -Abstract --------- -This document will give the user instructions on how to deploy -available scenarios verified for the Colorado release of OPNFV -platform. - - -Preparing the installation --------------------------- -Kingbird is centralized synchronization service for multi-region OpenStack -deployments. In OPNFV Colorado release, Kingbird provides centralized quota -management feature. At least two OpenStack regions with shared KeyStone should -be installed first. - -Kingbird includes kingbird-api and kingbird-engine, kingbird-api and -kingbird-engine which talk to each other through message bus, and both -services access the database. Kingbird-api receives the RESTful -API request for quota management and forward the request to kingbird-engine -to do quota synchronization etc task. - -Therefore install Kingbird on the controller nodes of one of the OpenStack -region, these two services could be deployed in same node or different node. 
-Both kingbird-api and kingbird-engine can run in multiple nodes with -multi-workers mode. It's up to you how many nodes you want to deploy -kingbird-api and kingbird-engine and they can work in same node or -different nodes. - -HW requirements ---------------- -No special hardware requirements - -Installation instruction ------------------------- - -In colorado release, Kingbird is recommended to be installed in a python -virtual environment. So install and activate virtualenv first. - -.. code-block:: bash - - sudo pip install virtualenv - virtualenv venv - source venv/bin/activate - -Get the latest code of Kingbird from git repository: - -.. code-block:: bash - - git clone https://github.com/openstack/kingbird.git - cd kingbird/ - pip install -e . - - -or get the stable release from PyPI repository: - -.. code-block:: bash - - pip install kingbird - -In case of the database package are not installed, you may need to install: - -.. code-block:: bash - - pip install mysql - pip install pymysql - -In the Kingbird root folder, where you can find the source code of Kingbird, -generate the configuration sample file for Kingbird: - -.. code-block:: bash - - oslo-config-generator --config-file=./tools/config-generator.conf - -prepare the folder used for cache, log and configuration for Kingbird: - -.. code-block:: bash - - sudo rm -rf /var/cache/kingbird - sudo mkdir -p /var/cache/kingbird - sudo chown `whoami` /var/cache/kingbird - sudo rm -rf /var/log/kingbird - sudo mkdir -p /var/log/kingbird - sudo chown `whoami` /var/log/kingbird - sudo rm -rf /etc/kingbird - sudo mkdir -p /etc/kingbird - sudo chown `whoami` /etc/kingbird - -Copy the sample configuration to the configuration folder /etc/kingbird: - -.. code-block:: bash - - cp etc/kingbird/kingbird.conf.sample /etc/kingbird/kingbird.conf - -Before editing the configuration file, prepare the database info for Kingbird. - -.. code-block:: bash - - mysql -uroot -e "CREATE DATABASE $kb_db CHARACTER SET utf8;" - mysql -uroot -e "GRANT ALL PRIVILEGES ON $kb_db.* TO '$kb_db_user'@'%' IDENTIFIED BY '$kb_db_pwd';" - -For example, the following command will create database "kingbird", and grant the -privilege for the db user "kingbird" with password "password": - -.. code-block:: bash - - mysql -uroot -e "CREATE DATABASE kingbird CHARACTER SET utf8;" - mysql -uroot -e "GRANT ALL PRIVILEGES ON kingbird.* TO 'kingbird'@'%' IDENTIFIED BY 'password';" - -Create the service user in OpenStack: - -.. code-block:: bash - - source openrc admin admin - openstack user create --project=service --password=$kb_svc_pwd $kb_svc_user - openstack role add --user=$kb_svc_user --project=service admin - -For example, the following command will create service user "kingbird", -and grant the user "kingbird" with password "password" the role of admin -in service project: - -.. code-block:: bash - - source openrc admin admin - openstack user create --project=service --password=password kingbird - openstack role add --user=kingbird --project=service admin - - - -Then edit the configuration file for Kingbird: - -.. code-block:: bash - - vim /etc/kingbird/kingbird.conf - -By default, the bind_host of kingbird-api is local_host(127.0.0.1), and the -port for the service is 8118, you can leave it as the default if no port -conflict happened. - -To make the Kingbird work normally, you have to edit these configuration -items. 
The [cache] section is used by kingbird engine to access the quota -information of Nova, Cinder, Neutron in each region, replace the -auth_uri to the keystone service in your environment, -especially if the keystone service is not located in the same node, and -also for the account to access the Nova, Cinder, Neutron in each region, -in the following configuration, user "admin" with password "password" of -the tenant "admin" is configured to access other Nova, Cinder, Neutron in -each region: - -.. code-block:: bash - - [cache] - auth_uri = http://127.0.0.1:5000/v3 - admin_tenant = admin - admin_password = password - admin_username = admin - -Configure the database section with the service user "kingbird" and its -password, to access database "kingbird". For detailed database section -configuration, please refer to http://docs.openstack.org/developer/oslo.db/opts.html, -and change the following configuration accordingly based on your -environment. - -.. code-block:: bash - - [database] - connection = mysql+pymysql://$kb_db_user:$kb_db_pwd@127.0.0.1/$kb_db?charset=utf8 - -For example, if the database is "kingbird", and the db user "kingbird" with -password "password", then the configuration is as following: - -.. code-block:: bash - - [database] - connection = mysql+pymysql://kingbird:password@127.0.0.1/kingbird?charset=utf8 - -The [keystone_authtoken] section is used by keystonemiddleware for token -validation during the API request to the kingbird-api, please refer to -http://docs.openstack.org/developer/keystonemiddleware/middlewarearchitecture.html -on how to configure the keystone_authtoken section for the keystonemiddleware -in detail, and change the following configuration accordingly based on your -environment: - -*please specify the region_name where you want the token will be validated if the -KeyStone is deployed in multiple regions* - -.. code-block:: bash - - [keystone_authtoken] - signing_dir = /var/cache/kingbird - cafile = /opt/stack/data/ca-bundle.pem - auth_uri = http://127.0.0.1:5000/v3 - project_domain_name = Default - project_name = service - user_domain_name = Default - password = $kb_svc_pwd - username = $kb_svc_user - auth_url = http://127.0.0.1:35357/v3 - auth_type = password - region_name = RegionOne - -For example, if the service user is "kingbird, and the password for the user -is "password", then the configuration will look like this: - -.. code-block:: bash - - [keystone_authtoken] - signing_dir = /var/cache/kingbird - cafile = /opt/stack/data/ca-bundle.pem - auth_uri = http://127.0.0.1:5000/v3 - project_domain_name = Default - project_name = service - user_domain_name = Default - password = password - username = kingbird - auth_url = http://127.0.0.1:35357/v3 - auth_type = password - region_name = RegionOne - - -And also configure the message bus connection, you can refer to the message -bus configuration in Nova, Cinder, Neutron configuration file. - -.. code-block:: bash - - [DEFAULT] - rpc_backend = rabbit - control_exchange = openstack - transport_url = None - - [oslo_messaging_rabbit] - rabbit_host = 127.0.0.1 - rabbit_port = 5671 - rabbit_userid = guest - rabbit_password = guest - rabbit_virtual_host = / - -After these basic configuration items configured, now the database schema of -"kingbird" should be created: - -.. code-block:: bash - - python kingbird/cmd/manage.py --config-file=/etc/kingbird/kingbird.conf db_sync - -And create the service and endpoint for Kingbird, please change the endpoint url -according to your cloud planning: - -.. 
code-block:: bash
-
-    openstack service create --name=kingbird synchronization
-    openstack endpoint create --region=RegionOne \
-        --publicurl=http://127.0.0.1:8118/v1.0 \
-        --adminurl=http://127.0.0.1:8118/v1.0 \
-        --internalurl=http://127.0.0.1:8118/v1.0 kingbird
-
-Now it's ready to run kingbird-api and kingbird-engine:
-
-.. code-block:: bash
-
-    nohup python kingbird/cmd/api.py --config-file=/etc/kingbird/kingbird.conf &
-    nohup python kingbird/cmd/engine.py --config-file=/etc/kingbird/kingbird.conf &
-
-Run the following command to check whether kingbird-api and kingbird-engine
-are running:
-
-.. code-block:: bash
-
-    ps aux|grep python
-
-
-Post-installation activities
-----------------------------
-
-Run the following commands to check whether kingbird-api is serving, please
-replace $token to the token you get from "openstack token issue":
-
-.. code-block:: bash
-
-    openstack token issue
-    curl -H "Content-Type: application/json" -H "X-Auth-Token: $token" \
-        http://127.0.0.1:8118/
-
-If the response looks like following: {"versions": [{"status": "CURRENT",
-"updated": "2016-03-07", "id": "v1.0", "links": [{"href":
-"http://127.0.0.1:8118/v1.0/", "rel": "self"}]}]},
-then that means the kingbird-api is working normally.
-
-Run the following commands to check whether kingbird-engine is serving, please
-replace $token to the token you get from "openstack token issue", and the
-$admin_project_id to the admin project id in your environment:
-
-.. code-block:: bash
-
-    curl -H "Content-Type: application/json" -H "X-Auth-Token: $token" \
-        -X PUT \
-        http://127.0.0.1:8118/v1.0/$admin_project_id/os-quota-sets/$admin_project_id/sync
-
-If the response looks like following: "triggered quota sync for
-0320065092b14f388af54c5bd18ab5da", then that means the kingbird-engine
-is working normally.
diff --git a/docs/release/configguide/index.rst b/docs/release/configguide/index.rst
new file mode 100644
index 0000000..2ee37cb
--- /dev/null
+++ b/docs/release/configguide/index.rst
@@ -0,0 +1,17 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) Sofia Wallin Ericsson AB
+.. (c) Chaoyi Huang, Huawei Technologies Co., Ltd.
+
+*****************************
+Multisite Configuration Guide
+*****************************
+
+.. toctree::
+   :numbered:
+   :maxdepth: 2
+
+   abstract.rst
+   multisite.configuration.rst
+   multisite.kingbird.configuration.rst
+
diff --git a/docs/release/configguide/multisite.configuration.rst b/docs/release/configguide/multisite.configuration.rst
new file mode 100644
index 0000000..0a38505
--- /dev/null
+++ b/docs/release/configguide/multisite.configuration.rst
@@ -0,0 +1,106 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International License.
+.. http://creativecommons.org/licenses/by/4.0
+
+Multisite identity service management
+=====================================
+
+Goal
+----
+
+A user should, using a single authentication point, be able to manage virtual
+resources spread over multiple OpenStack regions.
+
+Before you read
+---------------
+
+This chapter does not intend to cover all configuration of KeyStone and other
+OpenStack services to work together with KeyStone.
+
+This chapter focuses only on the configuration that should be taken into
+account in a multi-site scenario.
+
+Please read the configuration documentation related to identity management
+of OpenStack for all configuration items.
+
+http://docs.openstack.org/liberty/config-reference/content/ch_configuring-openstack-identity.html
+
+How to configure the database cluster for synchronous or asynchronous
+replication in a multi-site scenario is out of the scope of this document.
+The only reminder is that only the Keystone database needs to be
+synchronized or replicated. If you are using MySQL, you can configure it
+like this:
+
+In the master:
+
+    .. code-block:: bash
+
+       binlog-do-db=keystone
+
+In the slave:
+
+    .. code-block:: bash
+
+       replicate-do-db=keystone
+
+
+Deployment options
+------------------
+
+For a detailed description of each deployment option, please refer to the
+admin-user-guide.
+
+- Distributed KeyStone service with PKI token
+
+  Note that the PKI/PKIZ token format has been deprecated.
+
+  In the KeyStone configuration file, the PKI token format should be
+  configured
+
+  .. code-block:: bash
+
+     provider = pki
+
+  or
+
+  .. code-block:: bash
+
+     provider = pkiz
+
+  In the [keystone_authtoken] section of each OpenStack service configuration
+  file in each site, configure the identity_uri and auth_uri to the address
+  of the KeyStone service
+
+  .. code-block:: bash
+
+     identity_uri = https://keystone.your.com:35357/
+     auth_uri = http://keystone.your.com:5000/v2.0
+
+  It's better to use a domain name rather than an IP address for the
+  KeyStone service, especially if you deployed the KeyStone service in at
+  least two sites for site-level high availability.
+
+- Distributed KeyStone service with Fernet token
+- Distributed KeyStone service with Fernet token + Async replication (
+  star-mode).
+
+  In these two deployment options, the token validation is planned to be done
+  in the local site.
+
+  In the KeyStone configuration file, the Fernet token format should be
+  configured
+
+  .. code-block:: bash
+
+     provider = fernet
+
+  In the [keystone_authtoken] section of each OpenStack service configuration
+  file in each site, configure the identity_uri and auth_uri to the address
+  of the local KeyStone service
+
+  .. code-block:: bash
+
+     identity_uri = https://local-keystone.your.com:35357/
+     auth_uri = http://local-keystone.your.com:5000/v2.0
+
+  In particular, configure the region_name to your local region name; for
+  example, if you are configuring services in RegionOne and there is a local
+  KeyStone service in RegionOne, then
+
+  .. code-block:: bash
+
+     region_name = RegionOne
diff --git a/docs/release/configguide/multisite.kingbird.configuration.rst b/docs/release/configguide/multisite.kingbird.configuration.rst
new file mode 100644
index 0000000..7eb6106
--- /dev/null
+++ b/docs/release/configguide/multisite.kingbird.configuration.rst
@@ -0,0 +1,264 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International License.
+.. http://creativecommons.org/licenses/by/4.0
+
+
+Configuration of Multisite.Kingbird
+===================================
+
+This is a brief introduction to configuring the Multisite Kingbird service.
+Only the configuration items for Kingbird are described here. Logging,
+messaging, database, keystonemiddleware and other configuration generated
+from the OpenStack Oslo libraries is not described here, because these
+configuration items are common to Nova, Cinder and Neutron; please refer to
+the corresponding descriptions for those projects.
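+
+As an orientation before the item-by-item reference below, here is a minimal
+kingbird.conf sketch assembled from the default values documented in this
+guide; treat it as an illustrative starting point rather than a complete
+configuration (the [cache] credentials and URL are placeholders to adapt to
+your environment):
+
+.. code-block:: bash
+
+   [DEFAULT]
+   # kingbird-api listening address, port and workers
+   bind_host = 0.0.0.0
+   bind_port = 8118
+   api_workers = 2
+   # kingbird-engine host name, workers and liveness reporting interval
+   host = localhost
+   workers = 1
+   report_interval = 60
+
+   [kingbird_global_limit]
+   # global quota limits; a negative value means unlimited
+   quota_instances = 10
+   quota_cores = 20
+
+   [cache]
+   # admin credentials used to read quota from each region (placeholders)
+   auth_uri = http://127.0.0.1:5000/v3
+   admin_username = admin
+   admin_password = password
+   admin_tenant = admin
+
+   [scheduler]
+   # periodic quota synchronization job
+   periodic_enable = True
+   periodic_interval = 900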
+
+
+Configuration in [DEFAULT]
+--------------------------
+
+configuration items for kingbird-api
+""""""""""""""""""""""""""""""""""""
+
+bind_host
+*********
+- default value: *bind_host = 0.0.0.0*
+- description: The host IP to bind for the kingbird-api service
+
+bind_port
+*********
+- default value: *bind_port = 8118*
+- description: The port to bind for the kingbird-api service
+
+api_workers
+***********
+- default value: *api_workers = 2*
+- description: Number of kingbird-api workers
+
+configuration items for kingbird-engine
+"""""""""""""""""""""""""""""""""""""""
+
+host
+****
+- default value: *host = localhost*
+- description: The host name the kingbird-engine service is running on
+
+workers
+*******
+- default value: *workers = 1*
+- description: Number of kingbird-engine workers
+
+report_interval
+***************
+- default value: *report_interval = 60*
+- description: Seconds between running periodic reporting tasks to
+  keep the engine alive in the DB. If the engine doesn't report its
+  aliveness to the DB for more than two intervals, then the locks acquired
+  by the engine will be removed by other engines.
+
+common configuration items for kingbird-api and kingbird-engine
+"""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""
+
+use_default_quota_class
+***********************
+- default value: *use_default_quota_class = true*
+- description: Enables or disables use of the default quota class with
+  default quota, boolean value
+
+Configuration in [kingbird_global_limit]
+----------------------------------------
+
+For quota limits, a negative value means unlimited.
+
+configuration items for kingbird-api and kingbird-engine
+""""""""""""""""""""""""""""""""""""""""""""""""""""""""
+
+quota_instances
+***************
+- default value: *quota_instances = 10*
+- description: Number of instances allowed per project, integer value.
+
+quota_cores
+***********
+- default value: *quota_cores = 20*
+- description: Number of instance cores allowed per project, integer value.
+
+quota_ram
+*********
+- default value: *quota_ram = 512*
+- description: Megabytes of instance RAM allowed per project, integer value.
+
+quota_metadata_items
+********************
+- default value: *quota_metadata_items = 128*
+- description: Number of metadata items allowed per instance, integer value.
+
+quota_key_pairs
+***************
+- default value: *quota_key_pairs = 10*
+- description: Number of key pairs per user, integer value.
+
+quota_fixed_ips
+***************
+- default value: *quota_fixed_ips = -1*
+- description: Number of fixed IPs allowed per project; this should be at
+  least the number of instances allowed, integer value.
+
+quota_security_groups
+*********************
+- default value: *quota_security_groups = 10*
+- description: Number of security groups per project, integer value.
+
+quota_floating_ips
+******************
+- default value: *quota_floating_ips = 10*
+- description: Number of floating IPs allowed per project, integer value.
+
+quota_network
+*************
+- default value: *quota_network = 10*
+- description: Number of networks allowed per project, integer value.
+
+quota_subnet
+************
+- default value: *quota_subnet = 10*
+- description: Number of subnets allowed per project, integer value.
+
+quota_port
+**********
+- default value: *quota_port = 50*
+- description: Number of ports allowed per project, integer value.
+
+quota_security_group
+********************
+- default value: *quota_security_group = 10*
+- description: Number of security groups allowed per project, integer value.
+
+quota_security_group_rule
+*************************
+- default value: *quota_security_group_rule = 100*
+- description: Number of security group rules allowed per project, integer
+  value.
+
+quota_router
+************
+- default value: *quota_router = 10*
+- description: Number of routers allowed per project, integer value.
+
+quota_floatingip
+****************
+- default value: *quota_floatingip = 50*
+- description: Number of floating IPs allowed per project, integer value.
+
+quota_volumes
+*************
+- default value: *quota_volumes = 10*
+- description: Number of volumes allowed per project, integer value.
+
+quota_snapshots
+***************
+- default value: *quota_snapshots = 10*
+- description: Number of snapshots allowed per project, integer value.
+
+quota_gigabytes
+***************
+- default value: *quota_gigabytes = 1000*
+- description: Total amount of storage, in gigabytes, allowed for volumes
+  and snapshots per project, integer value.
+
+quota_backups
+*************
+- default value: *quota_backups = 10*
+- description: Number of volume backups allowed per project, integer value.
+
+quota_backup_gigabytes
+**********************
+- default value: *quota_backup_gigabytes = 1000*
+- description: Total amount of storage, in gigabytes, allowed for volume
+  backups per project, integer value.
+
+Configuration in [cache]
+------------------------
+
+The [cache] section is used by the kingbird engine to access the quota
+information of Nova, Cinder and Neutron in each region; the endpoint
+information is cached in order to reduce the load on KeyStone from
+retrieving it for every request.
+
+configuration items for kingbird-engine
+"""""""""""""""""""""""""""""""""""""""
+
+auth_uri
+********
+- default value:
+- description: Keystone authorization URL, for example, http://127.0.0.1:5000/v3.
+
+admin_username
+**************
+- default value:
+- description: Username of the admin account, for example, admin.
+
+admin_password
+**************
+- default value:
+- description: Password for the admin account, for example, password.
+
+admin_tenant
+************
+- default value:
+- description: Tenant name of the admin account, for example, admin.
+
+admin_user_domain_name
+**********************
+- default value: *admin_user_domain_name = Default*
+- description: User domain name of the admin account.
+
+admin_project_domain_name
+*************************
+- default value: *admin_project_domain_name = Default*
+- description: Project domain name of the admin account.
+
+Configuration in [scheduler]
+----------------------------
+
+The [scheduler] section is used by the kingbird engine to periodically
+synchronize and rebalance the quota for each project.
+
+configuration items for kingbird-engine
+"""""""""""""""""""""""""""""""""""""""
+
+periodic_enable
+***************
+- default value: *periodic_enable = True*
+- description: Boolean value to enable/disable periodic tasks.
+
+periodic_interval
+*****************
+- default value: *periodic_interval = 900*
+- description: Periodic time interval for the automatic quota sync job, in
+  seconds.
+
+Configuration in [batch]
+------------------------
+
+The [batch] section controls how many projects the kingbird engine
+synchronizes in one batch during the periodic quota sync.
+
+batch_size
+**********
+- default value: *batch_size = 3*
+- description: Number of projects to be synced in one batch.
+
+Configuration in [locks]
+------------------------
+
+The [locks] section controls how the kingbird engine retries acquiring the
+lock that serializes the quota synchronization among engines.
+
+lock_retry_times
+****************
+- default value: *lock_retry_times = 3*
+- description: Number of times to try to grab a lock.
+
+lock_retry_interval
+*******************
+- default value: *lock_retry_interval = 10*
+- description: Number of seconds between lock retries.
diff --git a/docs/release/installation/index.rst b/docs/release/installation/index.rst
new file mode 100644
index 0000000..0687f6c
--- /dev/null
+++ b/docs/release/installation/index.rst
@@ -0,0 +1,15 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) Sofia Wallin Ericsson AB
+.. (c) Chaoyi Huang, Huawei Technologies Co., Ltd.
+
+********************************
+Multisite Installation procedure
+********************************
+
+.. toctree::
+   :numbered:
+   :maxdepth: 2
+
+   abstract.rst
+   multisite.kingbird.installation.rst
diff --git a/docs/release/installation/multisite.kingbird.installation.rst b/docs/release/installation/multisite.kingbird.installation.rst
new file mode 100644
index 0000000..54b622d
--- /dev/null
+++ b/docs/release/installation/multisite.kingbird.installation.rst
@@ -0,0 +1,298 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International License.
+.. http://creativecommons.org/licenses/by/4.0
+
+=================================
+Kingbird installation instruction
+=================================
+
+Abstract
+--------
+This document gives the user instructions on how to deploy the available
+scenarios verified for the Colorado release of the OPNFV platform.
+
+
+Preparing the installation
+--------------------------
+Kingbird is a centralized synchronization service for multi-region OpenStack
+deployments. In the OPNFV Colorado release, Kingbird provides a centralized
+quota management feature. At least two OpenStack regions with a shared
+KeyStone should be installed first.
+
+Kingbird consists of kingbird-api and kingbird-engine, which talk to each
+other through the message bus; both services access the database.
+Kingbird-api receives the RESTful API requests for quota management and
+forwards them to kingbird-engine for tasks such as quota synchronization.
+
+Install Kingbird on the controller nodes of one of the OpenStack regions;
+the two services can be deployed on the same node or on different nodes.
+Both kingbird-api and kingbird-engine can run on multiple nodes in
+multi-worker mode; it's up to you on how many nodes you want to deploy
+them.
+
+HW requirements
+---------------
+No special hardware requirements
+
+Installation instruction
+------------------------
+
+In the Colorado release, it is recommended to install Kingbird in a Python
+virtual environment, so install and activate virtualenv first.
+
+.. code-block:: bash
+
+    sudo pip install virtualenv
+    virtualenv venv
+    source venv/bin/activate
+
+Get the latest code of Kingbird from the git repository:
+
+.. code-block:: bash
+
+    git clone https://github.com/openstack/kingbird.git
+    cd kingbird/
+    pip install -e .
+
+
+or get the stable release from the PyPI repository:
+
+.. code-block:: bash
+
+    pip install kingbird
+
+In case the database packages are not installed, you may need to install
+them:
+
+.. code-block:: bash
+
+    pip install mysql
+    pip install pymysql
+
+In the Kingbird root folder, where you can find the source code of Kingbird,
+generate the configuration sample file for Kingbird:
+
+.. code-block:: bash
+
+    oslo-config-generator --config-file=./tools/config-generator.conf
+
+Prepare the folders used for the cache, log and configuration of Kingbird:
+
+.. code-block:: bash
+
+    sudo rm -rf /var/cache/kingbird
+    sudo mkdir -p /var/cache/kingbird
+    sudo chown `whoami` /var/cache/kingbird
+    sudo rm -rf /var/log/kingbird
+    sudo mkdir -p /var/log/kingbird
+    sudo chown `whoami` /var/log/kingbird
+    sudo rm -rf /etc/kingbird
+    sudo mkdir -p /etc/kingbird
+    sudo chown `whoami` /etc/kingbird
+
+Copy the sample configuration to the configuration folder /etc/kingbird:
+
+.. code-block:: bash
+
+    cp etc/kingbird/kingbird.conf.sample /etc/kingbird/kingbird.conf
+
+Before editing the configuration file, prepare the database info for
+Kingbird.
+
+.. code-block:: bash
+
+    mysql -uroot -e "CREATE DATABASE $kb_db CHARACTER SET utf8;"
+    mysql -uroot -e "GRANT ALL PRIVILEGES ON $kb_db.* TO '$kb_db_user'@'%' IDENTIFIED BY '$kb_db_pwd';"
+
+For example, the following commands will create the database "kingbird" and
+grant the privileges for the db user "kingbird" with password "password":
+
+.. code-block:: bash
+
+    mysql -uroot -e "CREATE DATABASE kingbird CHARACTER SET utf8;"
+    mysql -uroot -e "GRANT ALL PRIVILEGES ON kingbird.* TO 'kingbird'@'%' IDENTIFIED BY 'password';"
+
+Create the service user in OpenStack:
+
+.. code-block:: bash
+
+    source openrc admin admin
+    openstack user create --project=service --password=$kb_svc_pwd $kb_svc_user
+    openstack role add --user=$kb_svc_user --project=service admin
+
+For example, the following commands will create the service user "kingbird"
+and grant the user "kingbird" with password "password" the admin role in
+the service project:
+
+.. code-block:: bash
+
+    source openrc admin admin
+    openstack user create --project=service --password=password kingbird
+    openstack role add --user=kingbird --project=service admin
+
+
+
+Then edit the configuration file for Kingbird:
+
+.. code-block:: bash
+
+    vim /etc/kingbird/kingbird.conf
+
+By default, the bind_host of kingbird-api is localhost (127.0.0.1) and the
+port for the service is 8118; you can leave these as the default if no port
+conflict occurs.
+
+Please replace the Kingbird service address "127.0.0.1" mentioned below
+with the address of the OpenStack Kingbird endpoint in your environment.
+
+To make Kingbird work normally, you have to edit these configuration
+items. The [cache] section is used by the kingbird engine to access the
+quota information of Nova, Cinder and Neutron in each region. Replace the
+auth_uri with the keystone service in your environment, especially if the
+keystone service is not located on the same node, and set the account used
+to access Nova, Cinder and Neutron in each region. In the following
+configuration, the user "admin" with password "password" in the tenant
+"admin" is configured to access Nova, Cinder and Neutron in each region:
+
+.. code-block:: bash
+
+    [cache]
+    auth_uri = http://127.0.0.1:5000/v3
+    admin_tenant = admin
+    admin_password = password
+    admin_username = admin
+
+Configure the database section with the service user "kingbird" and its
+password, to access the database "kingbird".
+For the detailed database section configuration, please refer to
+http://docs.openstack.org/developer/oslo.db/opts.html, and change the
+following configuration accordingly, based on your environment.
+
+.. code-block:: bash
+
+    [database]
+    connection = mysql+pymysql://$kb_db_user:$kb_db_pwd@127.0.0.1/$kb_db?charset=utf8
+
+For example, if the database is "kingbird" and the db user is "kingbird"
+with password "password", then the configuration is as follows:
+
+.. code-block:: bash
+
+    [database]
+    connection = mysql+pymysql://kingbird:password@127.0.0.1/kingbird?charset=utf8
+
+The [keystone_authtoken] section is used by keystonemiddleware for token
+validation during API requests to the kingbird-api. Please refer to
+http://docs.openstack.org/developer/keystonemiddleware/middlewarearchitecture.html
+for how to configure the keystone_authtoken section for the
+keystonemiddleware in detail, and change the following configuration
+accordingly, based on your environment:
+
+*Please specify the region_name where you want the token to be validated
+if KeyStone is deployed in multiple regions.*
+
+.. code-block:: bash
+
+    [keystone_authtoken]
+    signing_dir = /var/cache/kingbird
+    cafile = /opt/stack/data/ca-bundle.pem
+    auth_uri = http://127.0.0.1:5000/v3
+    project_domain_name = Default
+    project_name = service
+    user_domain_name = Default
+    password = $kb_svc_pwd
+    username = $kb_svc_user
+    auth_url = http://127.0.0.1:35357/v3
+    auth_type = password
+    region_name = RegionOne
+
+For example, if the service user is "kingbird" and the password for the
+user is "password", then the configuration will look like this:
+
+.. code-block:: bash
+
+    [keystone_authtoken]
+    signing_dir = /var/cache/kingbird
+    cafile = /opt/stack/data/ca-bundle.pem
+    auth_uri = http://127.0.0.1:5000/v3
+    project_domain_name = Default
+    project_name = service
+    user_domain_name = Default
+    password = password
+    username = kingbird
+    auth_url = http://127.0.0.1:35357/v3
+    auth_type = password
+    region_name = RegionOne
+
+
+Also configure the message bus connection; you can refer to the message
+bus configuration in the Nova, Cinder or Neutron configuration files.
+
+.. code-block:: bash
+
+    [DEFAULT]
+    transport_url = rabbit://stackrabbit:password@127.0.0.1:5672/
+
+After these basic configuration items are configured, the database schema
+of "kingbird" should be created:
+
+.. code-block:: bash
+
+    python kingbird/cmd/manage.py --config-file=/etc/kingbird/kingbird.conf db_sync
+
+Then create the service and endpoints for Kingbird; please change the
+endpoint URL according to your cloud planning:
+
+.. code-block:: bash
+
+    openstack service create --name=kingbird synchronization
+    openstack endpoint create --region=RegionOne kingbird public http://127.0.0.1:8118/v1.0
+    openstack endpoint create --region=RegionOne kingbird admin http://127.0.0.1:8118/v1.0
+    openstack endpoint create --region=RegionOne kingbird internal http://127.0.0.1:8118/v1.0
+
+Now it's ready to run kingbird-api and kingbird-engine:
+
+.. code-block:: bash
+
+    nohup python kingbird/cmd/api.py --config-file=/etc/kingbird/kingbird.conf &
+    nohup python kingbird/cmd/engine.py --config-file=/etc/kingbird/kingbird.conf &
+
+Run the following command to check whether kingbird-api and kingbird-engine
+are running:
+
+.. code-block:: bash
+
+    ps aux|grep python
+
+
+Post-installation activities
+----------------------------
+
+Run the following commands to check whether kingbird-api is serving; please
+replace $mytoken with the token you get from "openstack token issue":
+
+.. code-block:: bash
+
+    openstack token issue
+    curl -H "Content-Type: application/json" -H "X-Auth-Token: $mytoken" \
+        http://127.0.0.1:8118/
+
+If the response looks like the following: {"versions": [{"status": "CURRENT",
+"updated": "2016-03-07", "id": "v1.0", "links": [{"href":
+"http://127.0.0.1:8118/v1.0/", "rel": "self"}]}]},
+then the kingbird-api is working normally.
+
+Run the following command to check whether kingbird-engine is serving;
+please replace $mytoken with the token you get from "openstack token
+issue", and $admin_project_id with the admin project id in your
+environment:
+
+.. code-block:: bash
+
+    curl -H "Content-Type: application/json" -H "X-Auth-Token: $mytoken" \
+        -X PUT \
+        http://127.0.0.1:8118/v1.0/$admin_project_id/os-quota-sets/$admin_project_id/sync
+
+If the response looks like the following: "triggered quota sync for
+0320065092b14f388af54c5bd18ab5da", then the kingbird-engine is working
+normally.
diff --git a/docs/release/overview/index.rst b/docs/release/overview/index.rst
new file mode 100644
index 0000000..716f5a0
--- /dev/null
+++ b/docs/release/overview/index.rst
@@ -0,0 +1,12 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International License.
+.. http://creativecommons.org/licenses/by/4.0
+
+***********************
+Multisite Release Notes
+***********************
+
+.. toctree::
+   :numbered:
+   :maxdepth: 4
+
+   multisite.release.notes.rst
diff --git a/docs/release/overview/multisite.release.notes.rst b/docs/release/overview/multisite.release.notes.rst
new file mode 100644
index 0000000..85b9561
--- /dev/null
+++ b/docs/release/overview/multisite.release.notes.rst
@@ -0,0 +1,11 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International License.
+.. http://creativecommons.org/licenses/by/4.0
+
+The Multisite project identifies the requirements and gaps for the VIM
+(OpenStack) to support a multi-site NFV cloud.
+
+Documentation of the requirements, installation, configuration and usage
+guides for multi-site and Kingbird is provided.
+
+For the Kingbird service, known bugs are registered at
+https://bugs.launchpad.net/kingbird.
diff --git a/docs/release/userguide/index.rst b/docs/release/userguide/index.rst
new file mode 100644
index 0000000..2726184
--- /dev/null
+++ b/docs/release/userguide/index.rst
@@ -0,0 +1,15 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) Chaoyi Huang, Huawei Technologies Co., Ltd.
+
+**************************
+Multisite Admin User Guide
+**************************
+
+.. toctree::
+   :numbered:
+   :maxdepth: 4
+
+   multisite.admin.usage.rst
+   multisite.kingbird.usage.rst
+   multisite.tricircle.usage.rst
diff --git a/docs/release/userguide/multisite.admin.usage.rst b/docs/release/userguide/multisite.admin.usage.rst
new file mode 100644
index 0000000..544c9b1
--- /dev/null
+++ b/docs/release/userguide/multisite.admin.usage.rst
@@ -0,0 +1,365 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International License.
+.. http://creativecommons.org/licenses/by/4.0
+
+Multisite identity service management
+=====================================
+
+Goal
+----
+
+A user should, using a single authentication point, be able to manage virtual
+resources spread over multiple OpenStack regions.
+
+Token Format
+------------
+
+There are three token formats supported by OpenStack KeyStone
+
+  * **FERNET**
+  * **UUID**
+  * **PKI/PKIZ**
+
+It's very important to understand these token formats before we begin
+multisite identity service management. Please refer to the OpenStack
+official site for identity management.
+http://docs.openstack.org/admin-guide-cloud/identity_management.html
+
+Please note that the PKI/PKIZ token format has been deprecated.
+
+Key consideration in multisite scenario
+---------------------------------------
+
+A user is provided with a single authentication URL to the Identity (Keystone)
+service. Using that URL, the user authenticates with Keystone by
+requesting a token, typically using username/password credentials. The
+Keystone server validates the credentials, possibly with an external
+LDAP/AD server, and returns a token to the user. The user sends a request
+to a service in a selected region, including the token. Now the service in
+the region, say Nova, needs to validate the token. The service uses its
+configured keystone endpoint and service credentials to request token
+validation from Keystone. After the token is validated by KeyStone, the
+user is authorized to use the service.
+
+The key considerations for token validation in a multisite scenario are:
+  * Site level failure: the impact on authN and authZ should be as minimal
+    as possible
+  * Scalable: as more and more sites are added, there should be no
+    bottleneck in token validation
+  * Amount of inter-region traffic: should be kept as low as possible
+
+Hence, Keystone token validation should preferably be done in the same
+region as the service itself.
+
+The challenge in distributing the KeyStone service into each region is the
+KeyStone backend. Different token formats persist different data in the
+backend.
+
+* Fernet: Tokens are non-persistent, cryptography-based tokens validated
+  online by the Keystone service. Fernet tokens are more lightweight
+  than PKI tokens and have a fixed size. Fernet tokens require Keystone to
+  be deployed in a distributed manner, again to avoid inter-region traffic.
+  The data synchronization cost for the Keystone backend is smaller due to
+  the non-persisted tokens.
+
+* UUID: UUID tokens have a fixed size. Tokens are persistently stored and
+  create a lot of database traffic; the persistence of tokens is for
+  revocation purposes. UUID tokens are validated online by Keystone, so
+  every call to a service triggers a token validation request to Keystone.
+  Keystone can become a bottleneck in a large system. Due to this, the UUID
+  token type is not suitable for use in multi-region clouds, whether the
+  Keystone database is replicated or not.
+
+Cryptographic tokens bring new (compared to UUID tokens) issues/use-cases
+like key rotation and certificate revocation. Key management is out of
+scope for this use case.
+
+Database deployment as the backend for KeyStone service
+-------------------------------------------------------
+
+Database replication:
+  - Master/slave asynchronous: supported by the database server itself
+    (mysql/mariadb etc), works over WAN, and is more scalable. But only the
+    master provides write functionality (domain/project/role provisioning).
+  - Multi master synchronous: Galera (or alternatives such as Percona);
+    not as scalable for multi-master writing, and needs more parameter
+    tuning for WAN latency. It can provide a limited multi-site multi-write
+    capability for a distributed KeyStone service.
+  - Symmetrical/asymmetrical: data replicated to all regions or a subset;
+    in the latter case it means some regions need to access Keystone in
+    another region.
+
+Database server sharing:
+In an OpenStack controller, normally many databases from different
+services are provided from the same database server instance. For HA reasons,
+the database server is usually synchronously replicated to a few other nodes
+(controllers) to form a cluster. Note that *all* databases are replicated in
+this case, for example when Galera sync replication is used.
+
+Only the Keystone database can be replicated to other sites. Replicating
+databases for other services will cause those services to get out of sync
+and malfunction.
+
+Since only the Keystone database is to be replicated sync. or async. to
+another region/site, it's better to deploy the Keystone database into its
+own database server, with the extra networking requirements and cluster or
+replication configuration. How to support this in an installer is out of
+scope.
+
+The database server can be shared when async master/slave replication is
+used, if global transaction identifiers (GTID) are enabled.
+
+Deployment options
+------------------
+
+**Distributed KeyStone service with Fernet token**
+
+Fernet is a fairly new token format, introduced only recently. The biggest
+gains of this token format are: 1) it is lightweight, small enough to be
+carried in API requests, unlike PKI tokens (as sites are added, the
+endpoint list grows and the token becomes too long to carry in the API
+request); 2) no token persistence, which also means the DB does not change
+much and stays lightweight (just project, role, domain, endpoint data etc).
+The drawback of the Fernet token is that each token has to be validated by
+KeyStone for every API request.
+
+This allows the KeyStone DB to work as a cluster in multisite (for
+example, using a MySQL Galera cluster). That means installing a KeyStone
+API server in each site while sharing the same backend DB cluster. Because
+the DB cluster synchronizes data in real time across sites, all KeyStone
+servers can see the same data.
+
+Because each site has KeyStone installed and all data is kept identical,
+all token validation can be done locally in the same site.
+
+The challenge for this solution is how many sites the DB cluster can
+support. This question was asked of the MySQL Galera developers; their
+answer is that there is no number/distance/network latency limitation in
+the code, but in practice they have seen a case using a MySQL cluster
+across 5 data centers, each data center with 3 nodes.
+
+This solution will be very good for a limited number of sites which the DB
+cluster can cover very well.
+
+**Distributed KeyStone service with Fernet token + Async replication (star-mode)**
+
+One master KeyStone cluster with Fernet tokens in one or two sites (for
+site-level high availability purposes); other sites will be installed with
+at least 2 slave nodes, where each node is configured with DB async
+replication from the master cluster members. The async replication data
+sources should preferably be different members of the master cluster; if
+the KeyStone cluster spans two sites, it's better that the source members
+for async replication are located in different sites.
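+
+As an illustrative sketch (not part of the verified installation), a slave
+node in a leaf site could be set up for GTID-based async replication of the
+keystone database as follows, assuming MySQL 5.6 or later; the server IDs,
+host name and credentials are placeholders to adapt to your deployment:
+
+.. code-block:: bash
+
+    # my.cnf on the chosen master cluster member
+    [mysqld]
+    server-id = 1
+    log-bin = mysql-bin
+    binlog-do-db = keystone
+    gtid_mode = ON
+    enforce_gtid_consistency = ON
+
+    # my.cnf on the slave node in the leaf site
+    [mysqld]
+    server-id = 101
+    replicate-do-db = keystone
+    gtid_mode = ON
+    enforce_gtid_consistency = ON
+
+    # on the slave, point replication at the master member and start it
+    mysql -uroot -e "CHANGE MASTER TO MASTER_HOST='master1.keystone.your.com', \
+        MASTER_USER='repl', MASTER_PASSWORD='replpass', MASTER_AUTO_POSITION=1; \
+        START SLAVE;"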
+
+Only the master cluster nodes are allowed to write; the other slave nodes
+wait for replication from the master cluster members (with very little delay).
+
+Pros:
+  * Deploying a database cluster in the master site(s) provides more master
+    nodes, so that more slaves can be fed by async replication in parallel.
+    Two sites for the master cluster provide higher (site level) reliability
+    for write requests, while at the same time reducing the maintenance
+    challenge by keeping the cluster from spreading over too many sites.
+  * Multiple slaves are used in the other sites because a slave has no
+    knowledge of other slaves, so it is easier to manage multiple slaves in
+    one site than a cluster, and the slaves work independently while providing
+    multi-instance redundancy (like a cluster, but independent).
+
+Cons:
+  * Need to be aware of the challenge of key distribution and rotation
+    for Fernet tokens.
+
+Multisite VNF Geo site disaster recovery
+========================================
+
+Goal
+----
+
+A VNF (telecom application) should be able to be restored in another site
+after a catastrophic failure has happened.
+
+Key consideration in multisite scenario
+---------------------------------------
+
+Geo site disaster recovery deals with the more catastrophic failures
+(flood, earthquake, propagating software fault), where loss of calls, or
+even temporary loss of service, is acceptable. It also seems more common
+to accept/expect manual/administrator intervention to drive the process, not
+least because you don’t want to trigger the transfer by mistake.
+
+In terms of coordination/replication or backup/restore between geographic
+sites, discussion often (but not always) seems to focus on limited application
+level data/config replication, as opposed to backup/restore or replication
+of the cloud infrastructure between different sites.
+
+And finally, the lack of a requirement to do fast media transfer (without
+resignalling) generally removes the need for special networking behavior, with
+slower DNS-style redirection being acceptable.
+
+Below are the main concerns about cloud infrastructure level capabilities to
+support VNF geo site disaster recovery.
+
+Option1, Consistency application backup
+---------------------------------------
+
+The disaster recovery process will work like this:
+
+1) DR (geo site disaster recovery) software gets the volumes of each VM
+   in the VNF from Nova
+2) DR software calls the Nova quiesce API to guarantee quiescing the VMs in
+   the desired order
+3) DR software takes snapshots of these volumes in Cinder (NOTE: because
+   storage often provides fast snapshots, the duration between quiesce and
+   unquiesce is a short interval)
+4) DR software calls the Nova unquiesce API to unquiesce the VMs of the VNF
+   in reverse order
+5) DR software creates volumes from the snapshots just taken in Cinder
+6) DR software creates backups (incremental) of these volumes to remote
+   backup storage (Swift or Ceph, etc.) in Cinder
+7) If this site fails,
+   1) DR software restores these backup volumes in the remote Cinder in the
+      backup site.
+   2) DR software boots VMs from the bootable volumes in the remote Cinder in
+      the backup site and attaches the corresponding data volumes.
+
+Note: The Quiesce/Unquiesce spec was approved in Mitaka, but the code did not
+get merged in time,
+https://blueprints.launchpad.net/nova/+spec/expose-quiesce-unquiesce-api
+The spec was rejected in Newton when it was reproposed:
+https://review.openstack.org/#/c/295595/. So this option will not work any
+more.
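+
+Although the quiesce/unquiesce part of this flow is unavailable, the Cinder
+side (steps 5 and 6) can still be sketched with the standard OpenStack
+client; the volume and snapshot names used here are hypothetical:
+
+   .. code-block:: bash
+
+      # step 5: create a volume from a snapshot already taken
+      openstack volume create --snapshot vnf-vol1-snap vnf-vol1-clone
+
+      # step 6: back up that volume to the configured backup store
+      # (Swift, Ceph, ...); an initial full backup must exist before
+      # an incremental one can be taken
+      openstack volume backup create --incremental vnf-vol1-clone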
+
+Option2, Virtual Machine Snapshot
+---------------------------------
+1) DR software creates a VM snapshot in Nova
+2) Nova quiesces the VM internally
+   (NOTE: the upper level application or DR software should take care of
+   avoiding a VNF outage induced by an infra level outage)
+3) Nova creates an image in Glance
+4) Nova creates a snapshot of the VM, including its volumes
+5) If the VM is a volume backed VM, a volume snapshot is created in Cinder
+6) No image data is uploaded to Glance, but the snapshot is added in the
+   metadata of the image in Glance
+7) DR software gets the snapshot information from Glance
+8) DR software creates volumes from these snapshots
+9) DR software creates backups (incremental) of these volumes to backup
+   storage (Swift or Ceph, etc.) in Cinder
+10) If this site fails,
+    1) DR software restores these backup volumes to Cinder in the backup site.
+    2) DR software boots VMs from the bootable volumes in Cinder in the backup
+       site and attaches the data volumes.
+
+This option only provides single VM level consistency disaster recovery.
+
+This feature is already available in the current OPNFV release.
+
+Option3, Consistency volume replication
+---------------------------------------
+1) DR software creates a datastore (Block/Cinder, Object/Swift, App Custom
+   storage) with replication enabled at the relevant scope, used to
+   selectively backup/replicate the desired data to the GR backup site
+2) DR software gets the reference of the storage in the remote site storage
+3) If the primary site fails,
+   1) the DR software managing recovery in the backup site gets references to
+      the relevant storage and passes them to new software instances
+   2) the software attaches (or has attached) the replicated storage, in the
+      case of volumes promoting them to writable.
+
+Pros:
+  * Replication is done at the storage level automatically; there is no need
+    to create backups regularly, for example, daily.
+  * Application selection of a limited amount of data to replicate reduces
+    the risk of replicating failed state and generates less overhead.
+  * The type of replication and the model (active/backup, active/active,
+    etc.) can be tailored to application needs
+
+Cons:
+  * Applications need to be designed with support in mind, including both
+    selection of data to be replicated and consideration of consistency
+  * "Standard" support in OpenStack for disaster recovery is currently fairly
+    limited, though there is active work in this area.
+
+Note: Volume replication v2.1 supports project level replication.
+
+
+VNF high availability across VIM
+================================
+
+Goal
+----
+
+A VNF (telecom application) should be able to realize a high availability
+deployment across OpenStack instances.
+
+Key consideration in multisite scenario
+---------------------------------------
+
+Most telecom applications have already been designed as
+Active-Standby/Active-Active/N-Way to achieve high availability
+(99.999%, corresponding to 5.26 minutes of unplanned downtime in a year);
+typically state replication or heartbeat between the
+Active-Standby/Active-Active/N-Way instances (directly, via replicated
+database services, or via privately designed message formats) is required.
+
+We have to accept the currently limited availability (99.99%) of a
+given OpenStack instance, and intend to provide the availability of the
+telecom application by spreading its functions across multiple OpenStack
+instances. To help with this, many people appear willing to provide multiple
+“independent” OpenStack instances in a single geographic site, with special
+networking (L2/L3) between clouds in that physical site.
+
+The telecom application often has different networking planes for different
+purposes:
+
+1) external network plane: used for communication with other telecom
+   applications.
+
+2) components inter-communication plane: one VNF often consists of several
+   components; this plane is designed for the components to communicate with
+   each other
+
+3) backup plane: this plane is used for the heartbeat or state replication
+   between the component's active/standby or active/active or N-way cluster.
+
+4) management plane: this plane is mainly for management purposes, like
+   configuration
+
+Generally these planes are separated from each other. And for a legacy
+telecom application, each internal plane will have its fixed or flexible IP
+addressing plan.
+
+There are some interesting/hard requirements on the networking (L2/L3)
+between OpenStack instances. To make the VNF work in HA mode across different
+OpenStack instances in one site (but not limited to one site), at least the
+backup plane needs to be supported across the different OpenStack instances:
+
+1) L2 networking across OpenStack instances for heartbeat or state
+replication. Overlay L2 networking or shared L2 provider networks can work as
+the backup plane for heartbeat or state replication. An overlay L2 network is
+preferred, for the following reasons:
+
+   a. Legacy compatibility: some telecom apps have a built-in internal L2
+      network; to ease moving these apps to VNFs, it is better to provide an
+      L2 network.
+   b. An isolated L2 network will simplify the security management between
+      the different network planes.
+   c. It is easy to support IP/MAC floating across OpenStack.
+   d. IP overlapping is supported: multiple VNFs may have overlapping IP
+      addresses for cross OpenStack instance networking.
+
+Therefore, an overlay L2 networking across Neutron feature is required in
+OpenStack.
+
+2) L3 networking across OpenStack instances for heartbeat or state
+replication. For L3 networking, we can leverage the floating IP provided in
+current Neutron, or use VPN or BGPVPN (networking-bgpvpn) to set up the
+connection.
+
+L3 networking to support the VNF HA will consume more resources and needs to
+take more security factors into consideration; this makes the networking
+more complex. L3 networking is also not able to provide IP floating
+across OpenStack instances.
+
+3) The IP address used by the VNF to connect with other VNFs should be able
+to float across OpenStack instances. For example, if the master fails, the IP
+address should be usable by the standby which is running in another OpenStack
+instance. Some methods like VRRP/GARP etc. can take care of the movement of
+the external IP, so no new feature needs to be added to OpenStack.
+
+Several projects are addressing these networking requirements; a deployment
+should consider the factors mentioned above.
+ * Tricircle: https://github.com/openstack/tricircle/
+ * Networking-BGPVPN: https://github.com/openstack/networking-bgpvpn/
+ * VPNaaS: https://github.com/openstack/neutron-vpnaas
diff --git a/docs/release/userguide/multisite.kingbird.usage.rst b/docs/release/userguide/multisite.kingbird.usage.rst
new file mode 100644
index 0000000..e9ead90
--- /dev/null
+++ b/docs/release/userguide/multisite.kingbird.usage.rst
@@ -0,0 +1,349 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International License.
+..
http://creativecommons.org/licenses/by/4.0
+
+=============================
+Multisite.Kingbird user guide
+=============================
+
+Quota management for OpenStack multi-region deployments
+-------------------------------------------------------
+Kingbird is a centralized synchronization service for multi-region OpenStack
+deployments. In the OPNFV Colorado release, Kingbird provides a centralized
+quota management feature. The administrator can set quotas per project in
+Kingbird and sync the quota limits to the multi-region OpenStack periodically
+or on demand. A tenant can check the total quota limit and usage from
+Kingbird for all regions. The administrator can also manage the default
+quotas by quota class settings.
+
+The following quota items are supported to be managed in Kingbird:
+
+- **instances**: Number of instances allowed per project.
+- **cores**: Number of instance cores allowed per project.
+- **ram**: Megabytes of instance RAM allowed per project.
+- **metadata_items**: Number of metadata items allowed per instance.
+- **key_pairs**: Number of key pairs per user.
+- **fixed_ips**: Number of fixed IPs allowed per project,
+  valid if Nova Network is used.
+- **security_groups**: Number of security groups per project,
+  valid if Nova Network is used.
+- **floating_ips**: Number of floating IPs allowed per project,
+  valid if Nova Network is used.
+- **network**: Number of networks allowed per project,
+  valid if Neutron is used.
+- **subnet**: Number of subnets allowed per project,
+  valid if Neutron is used.
+- **port**: Number of ports allowed per project,
+  valid if Neutron is used.
+- **security_group**: Number of security groups allowed per project,
+  valid if Neutron is used.
+- **security_group_rule**: Number of security group rules allowed per project,
+  valid if Neutron is used.
+- **router**: Number of routers allowed per project,
+  valid if Neutron is used.
+- **floatingip**: Number of floating IPs allowed per project,
+  valid if Neutron is used.
+- **volumes**: Number of volumes allowed per project.
+- **snapshots**: Number of snapshots allowed per project.
+- **gigabytes**: Total amount of storage, in gigabytes, allowed for volumes
+  and snapshots per project.
+- **backups**: Number of volume backups allowed per project.
+- **backup_gigabytes**: Total amount of storage, in gigabytes, allowed for
+  volume backups per project.
+
+Key pair is the only resource type supported in resource synchronization.
+
+Besides the restful APIs provided since the Colorado release,
+python-kingbirdclient is now also available, so the kingbird CLI, curl or any
+other HTTP client can be used to call the Kingbird API.
+
+Before using the following commands, get the token, project id, and Kingbird
+service endpoint first. Use $kb_token to represent the token, $admin_tenant_id
+as the administrator project_id, $tenant_id as the target project_id for quota
+management, and $kb_ip_addr for the Kingbird service endpoint IP address.
+
+Note:
+To view all tenants (projects), run:
+
+   .. code-block:: bash
+
+      openstack project list
+
+To get a token, run:
+
+   .. code-block:: bash
+
+      openstack token issue
+
+To get the Kingbird service endpoint, run:
+
+   .. code-block:: bash
+
+      openstack endpoint list
+
+Quota Management API
+--------------------
+
+1. Update global limit for a tenant
+
+   Use python-kingbirdclient:
+
+   .. code-block:: bash
+
+      kingbird quota update $tenant_id --port 10 --security_groups 10
+
+   Use curl:
+
+   ..
code-block:: bash
+
+      curl \
+        -H "Content-Type: application/json" \
+        -H "X-Auth-Token: $kb_token" \
+        -X PUT \
+        -d '{"quota_set":{"cores": 10,"ram": 51200, "metadata_items": 100,"key_pairs": 100, "network":20,"security_group": 20,"security_group_rule": 20}}' \
+        http://$kb_ip_addr:8118/v1.0/$admin_tenant_id/os-quota-sets/$tenant_id
+
+2. Get global limit for a tenant
+
+   Use python-kingbirdclient:
+
+   .. code-block:: bash
+
+      kingbird quota show --tenant $tenant_id
+
+   Use curl:
+
+   .. code-block:: bash
+
+      curl \
+        -H "Content-Type: application/json" \
+        -H "X-Auth-Token: $kb_token" \
+        http://$kb_ip_addr:8118/v1.0/$admin_tenant_id/os-quota-sets/$tenant_id
+
+3. A tenant can also get the global limit by themselves
+
+   Use python-kingbirdclient:
+
+   .. code-block:: bash
+
+      kingbird quota show
+
+   Use curl:
+
+   .. code-block:: bash
+
+      curl \
+        -H "Content-Type: application/json" \
+        -H "X-Auth-Token: $kb_token" \
+        http://$kb_ip_addr:8118/v1.0/$tenant_id/os-quota-sets/$tenant_id
+
+4. Get default limits
+
+   Use python-kingbirdclient:
+
+   .. code-block:: bash
+
+      kingbird quota defaults
+
+   Use curl:
+
+   .. code-block:: bash
+
+      curl \
+        -H "Content-Type: application/json" \
+        -H "X-Auth-Token: $kb_token" \
+        http://$kb_ip_addr:8118/v1.0/$admin_tenant_id/os-quota-sets/defaults
+
+5. Get total usage for a tenant
+
+   Use python-kingbirdclient:
+
+   .. code-block:: bash
+
+      kingbird quota detail --tenant $tenant_id
+
+   Use curl:
+
+   .. code-block:: bash
+
+      curl \
+        -H "Content-Type: application/json" \
+        -H "X-Auth-Token: $kb_token" \
+        -X GET \
+        http://$kb_ip_addr:8118/v1.0/$admin_tenant_id/os-quota-sets/$tenant_id/detail
+
+6. A tenant can also get the total usage by themselves
+
+   Use python-kingbirdclient:
+
+   .. code-block:: bash
+
+      kingbird quota detail
+
+   Use curl:
+
+   .. code-block:: bash
+
+      curl \
+        -H "Content-Type: application/json" \
+        -H "X-Auth-Token: $kb_token" \
+        -X GET \
+        http://$kb_ip_addr:8118/v1.0/$tenant_id/os-quota-sets/$tenant_id/detail
+
+7. On demand quota sync
+
+   Use python-kingbirdclient:
+
+   .. code-block:: bash
+
+      kingbird quota sync $tenant_id
+
+   Use curl:
+
+   .. code-block:: bash
+
+      curl \
+        -H "Content-Type: application/json" \
+        -H "X-Auth-Token: $kb_token" \
+        -X PUT \
+        http://$kb_ip_addr:8118/v1.0/$admin_tenant_id/os-quota-sets/$tenant_id/sync
+
+
+8. Delete specific global limits for a tenant
+
+   Use curl:
+
+   .. code-block:: bash
+
+      curl \
+        -H "Content-Type: application/json" \
+        -H "X-Auth-Token: $kb_token" \
+        -X DELETE \
+        -d '{"quota_set": [ "cores", "ram"]}' \
+        http://$kb_ip_addr:8118/v1.0/$admin_tenant_id/os-quota-sets/$tenant_id
+
+9. Delete all Kingbird global limits for a tenant
+
+   Use python-kingbirdclient:
+
+   .. code-block:: bash
+
+      kingbird quota delete $tenant_id
+
+   Use curl:
+
+   .. code-block:: bash
+
+      curl \
+        -H "Content-Type: application/json" \
+        -H "X-Auth-Token: $kb_token" \
+        -X DELETE \
+        http://$kb_ip_addr:8118/v1.0/$admin_tenant_id/os-quota-sets/$tenant_id
+
+
+Quota Class API
+---------------
+
+1. Update default quota class
+
+   Use python-kingbirdclient:
+
+   .. code-block:: bash
+
+      kingbird quota-class update --port 10 --security_groups 10
+
+   Use curl:
+
+   .. code-block:: bash
+
+      curl \
+        -H "Content-Type: application/json" \
+        -H "X-Auth-Token: $kb_token" \
+        -X PUT \
+        -d '{"quota_class_set":{"cores": 100, "network":50,"security_group": 50,"security_group_rule": 50}}' \
+        http://$kb_ip_addr:8118/v1.0/$admin_tenant_id/os-quota-class-sets/default
+
+2. Get default quota class
+
+   Use python-kingbirdclient:
+
+   ..
code-block:: bash
+
+      kingbird quota-class show default
+
+   Use curl:
+
+   .. code-block:: bash
+
+      curl \
+        -H "Content-Type: application/json" \
+        -H "X-Auth-Token: $kb_token" \
+        http://$kb_ip_addr:8118/v1.0/$admin_tenant_id/os-quota-class-sets/default
+
+3. Delete default quota class
+
+   Use python-kingbirdclient:
+
+   .. code-block:: bash
+
+      kingbird quota-class delete default
+
+   Use curl:
+
+   .. code-block:: bash
+
+      curl \
+        -H "Content-Type: application/json" \
+        -H "X-Auth-Token: $kb_token" \
+        -X DELETE \
+        http://$kb_ip_addr:8118/v1.0/$admin_tenant_id/os-quota-class-sets/default
+
+
+Resource Synchronization API
+----------------------------
+
+1. Create a synchronization job
+
+   .. code-block:: bash
+
+      curl \
+        -H "Content-Type: application/json" \
+        -H "X-Auth-Token: $kb_token" \
+        -X POST -d \
+        '{"resource_set":{"resources": [<"list_of_keypair_names">],"force": <"True_or_False">,"resource_type": "keypair","source": <"source_region">,"target": [<"list_of_target_regions">]}}' \
+        http://$kb_ip_addr:8118/v1.0/$tenant_id/os-sync
+
+2. Get synchronization jobs
+
+   .. code-block:: bash
+
+      curl \
+        -H "Content-Type: application/json" \
+        -H "X-Auth-Token: $kb_token" \
+        http://$kb_ip_addr:8118/v1.0/$tenant_id/os-sync/
+
+3. Get active synchronization jobs
+
+   .. code-block:: bash
+
+      curl \
+        -H "Content-Type: application/json" \
+        -H "X-Auth-Token: $kb_token" \
+        http://$kb_ip_addr:8118/v1.0/$tenant_id/os-sync/active
+
+4. Get detailed information of a synchronization job
+
+   .. code-block:: bash
+
+      curl \
+        -H "Content-Type: application/json" \
+        -H "X-Auth-Token: $kb_token" \
+        http://$kb_ip_addr:8118/v1.0/$tenant_id/os-sync/$job_id
+
+5. Delete a synchronization job
+
+   .. code-block:: bash
+
+      curl \
+        -H "Content-Type: application/json" \
+        -H "X-Auth-Token: $kb_token" \
+        -X DELETE \
+        http://$kb_ip_addr:8118/v1.0/$tenant_id/os-sync/$job_id
diff --git a/docs/release/userguide/multisite.tricircle.usage.rst b/docs/release/userguide/multisite.tricircle.usage.rst
new file mode 100644
index 0000000..d42f5b0
--- /dev/null
+++ b/docs/release/userguide/multisite.tricircle.usage.rst
@@ -0,0 +1,13 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International License.
+.. http://creativecommons.org/licenses/by/4.0
+
+==============================
+Multisite.Tricircle user guide
+==============================
+
+Tricircle is an OpenStack big-tent project. All user guide related documents
+can be found on the OpenStack website:
+ * Developer Guide: http://docs.openstack.org/developer/tricircle/
+ * Installation Guide: http://docs.openstack.org/developer/tricircle/installation-guide.html
+ * Configuration Guide: http://docs.openstack.org/developer/tricircle/configuration.html
+ * Networking Guide: http://docs.openstack.org/developer/tricircle/networking-guide.html
diff --git a/docs/releasenotes/index.rst b/docs/releasenotes/index.rst
deleted file mode 100644
index df1e186..0000000
--- a/docs/releasenotes/index.rst
+++ /dev/null
@@ -1,12 +0,0 @@
-.. This work is licensed under a Creative Commons Attribution 4.0 International License.
-.. http://creativecommons.org/licenses/by/4.0
-
-**************************
-Multisite Release Notes
-**************************
-
-.. toctree::
-   :numbered:
-   :maxdepth: 4
-
-   multisite.release.notes.rst
diff --git a/docs/releasenotes/multisite.release.notes.rst b/docs/releasenotes/multisite.release.notes.rst
deleted file mode 100644
index d90a064..0000000
--- a/docs/releasenotes/multisite.release.notes.rst
+++ /dev/null
@@ -1,14 +0,0 @@
-..
This work is licensed under a Creative Commons Attribution 4.0 International License.
-.. http://creativecommons.org/licenses/by/4.0
-
-Release Notes of Multisite project
-==================================
-
-Multisite is to identify the requirements and gaps for the VIM(OpenStack)
-to support multi-site NFV cloud.
-
-The documentation of requirements, installation, configuration and usage
-guide for multi-site and Kingbird are provided.
-
-It's the first release for Kingbird service, known bugs are registered at
-https://bugs.launchpad.net/kingbird.
diff --git a/docs/requirements/VNF_high_availability_across_VIM.rst b/docs/requirements/VNF_high_availability_across_VIM.rst
index 6c2e9f1..42c479e 100644
--- a/docs/requirements/VNF_high_availability_across_VIM.rst
+++ b/docs/requirements/VNF_high_availability_across_VIM.rst
@@ -1,21 +1,21 @@
 .. This work is licensed under a Creative Commons Attribution 4.0 International License.
 .. http://creativecommons.org/licenses/by/4.0
 
-=======================================
+================================
 VNF high availability across VIM
-=======================================
+================================
 
 Problem description
 ===================
 
 Abstract
-------------
+--------
 
 a VNF (telecom application) should, be able to realize high availability
 deloyment across OpenStack instances.
 
 Description
-------------
+-----------
 
 VNF (Telecom application running over cloud) may (already) be designed as
 Active-Standby/Active-Active/N-Way to achieve high availability,
@@ -64,7 +64,7 @@ the potential for correlated failure to very low levels (at least as low as
 the required overall application availability).
 
 Analysis of requirements to OpenStack
-===========================
+=====================================
 
 The VNF often has different networking plane for different purpose:
 
 external network plane: using for communication with other VNF
@@ -76,24 +76,37 @@ between the component's active/standy or active/active or N-way cluster.
 management plane: this plane is mainly for the management purpose
 
-Generally these planes are seperated with each other. And for legacy telecom
-application, each internal plane will have its fixed or flexsible IP addressing
-plane.
+Generally these planes are separated from each other. And for a legacy telecom
+application, each internal plane will have its fixed or flexible IP addressing
+plan.
 
-to make the VNF can work with HA mode across different OpenStack instances in
-one site (but not limited to), need to support at lease the backup plane across
-different OpenStack instances:
+To make the VNF work in HA mode across different OpenStack instances in
+one site (but not limited to one site), at least the backup plane needs to
+be supported across the different OpenStack instances:
 
-1) Overlay L2 networking or shared L2 provider networks as the backup plance for
-heartbeat or state replication. Overlay L2 network is preferred, the reason is:
-a. Support legacy compatibility: Some telecom app with built-in internal L2
-network, for easy to move these app to VNF, it would be better to provide L2
-network b. Support IP overlapping: multiple VNFs may have overlaping IP address
-for cross OpenStack instance networking
+1) L2 networking across OpenStack instances for heartbeat or state replication.
+Overlay L2 networking or shared L2 provider networks can work as the backup
+plane for heartbeat or state replication. An overlay L2 network is preferred,
+for the following reasons:
+
+  a. Legacy compatibility: some telecom apps have a built-in internal L2
+  network; to ease moving these apps to VNFs, it is better to provide an
+  L2 network.
+  b. An isolated L2 network will simplify the security management between
+  the different network planes.
+  c. It is easy to support IP/MAC floating across OpenStack.
+  d. IP overlapping is supported: multiple VNFs may have overlapping IP
+  addresses for cross OpenStack instance networking.
+
 Therefore, over L2 networking across Neutron feature is required in OpenStack.
 
-2) L3 networking cross OpenStack instance for heartbeat or state replication.
-For L3 networking, we can leverage the floating IP provided in current Neutron,
-so no new feature requirement to OpenStack.
+2) L3 networking across OpenStack instances for heartbeat or state replication.
+For L3 networking, we can leverage the floating IP provided in current
+Neutron, or use VPN or BGPVPN (networking-bgpvpn) to set up the connection.
+
+L3 networking to support the VNF HA will consume more resources and needs to
+take more security factors into consideration; this makes the networking
+more complex. L3 networking is also not able to provide IP floating
+across OpenStack instances.
 
 3) The IP address used for VNF to connect with other VNFs should be able to be
 floating cross OpenStack instance. For example, if the master failed, the IP
@@ -103,48 +116,20 @@ external IP, so no new feature will be added to OpenStack.
 
 Prototype
------------
+---------
 None.
 
 Proposed solution
------------
- - requirements perspective It's up to application descision to use L2 or L3
-networking across Neutron.
-
- For Neutron, a L2 network is consisted of lots of ports. To make the cross
-Neutron L2 networking is workable, we need some fake remote ports in local
-Neutron to represent VMs in remote site ( remote OpenStack ).
-
- the fake remote port will reside on some VTEP ( for VxLAN ), the tunneling
-IP address of the VTEP should be the attribute of the fake remote port, so that
-the local port can forward packet to correct tunneling endpoint.
-
- the idea is to add one more ML2 mechnism driver to capture the fake remote
-port CRUD( creation, retievement, update, delete)
-
- when a fake remote port is added/update/deleted, then the ML2 mechanism
-driver for these fake ports will activate L2 population, so that the VTEP
-tunneling endpoint information could be understood by other local ports.
-
- it's also required to be able to query the port's VTEP tunneling endpoint
-information through Neutron API, in order to use these information to create
-fake remote port in another Neutron.
-
- In the past, the port's VTEP ip address is the host IP where the VM resides.
-But the this BP https://review.openstack.org/#/c/215409/ will make the port free
-of binding to host IP as the tunneling endpoint, you can even specify L2GW ip
-address as the tunneling endpoint.
-
- Therefore a new BP will be registered to processing the fake remote port, in
-order make cross Neutron L2 networking is feasible. RFE is registered first:
-https://bugs.launchpad.net/neutron/+bug/1484005
-
+-----------------
+Several projects are addressing the networking requirements:
+  * Tricircle: https://github.com/openstack/tricircle/
+  * Networking-BGPVPN: https://github.com/openstack/networking-bgpvpn/
+  * VPNaaS: https://github.com/openstack/neutron-vpnaas
 
 Gaps
 ====
- 1) fake remote port for cross Neutron L2 networking
-
+  Inter-networking among OpenStack clouds for application HA needs is lacking
+  in Neutron, and is covered by several newly created projects.
**NAME-THE-MODULE issues:**
@@ -156,4 +141,3 @@ Affected By
 
 References
 ==========
-
diff --git a/docs/requirements/multisite-centralized-service.rst b/docs/requirements/multisite-centralized-service.rst
new file mode 100644
index 0000000..5dbbfc8
--- /dev/null
+++ b/docs/requirements/multisite-centralized-service.rst
@@ -0,0 +1,109 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International License.
+.. http://creativecommons.org/licenses/by/4.0
+
+==============================
+ Multisite centralized service
+==============================
+
+
+Problem description
+===================
+
+Abstract
+--------
+
+A user should have one centralized service for resource management and/or
+replication (syncing tenant resources like images, ssh keys, etc.) across
+multiple OpenStack clouds.
+
+Description
+-----------
+
+For multisite management use cases, some common requirements in terms of
+centralized or shared services over the multiple OpenStack instances can
+be summarized here.
+
+A user should be able to manage all their virtual resources from one
+centralized management interface, at least to have a summarized view of
+the total resource capacity and the live utilization of their virtual
+resources, for example:
+
+- Centralized quota management
+  Currently all quotas are set for each region separately, and different
+  services (Nova, Cinder, Neutron, Glance, ...) have different quotas to
+  be set. The requirement is to provide a global view of the quotas per
+  tenant across multiple regions, and soft/hard quotas based on the current
+  usage in all regions for this tenant.
+
+- A service to clone ssh keys across regions
+  A user may upload a keypair to access the VMs allocated for her. But if her
+  VMs are spread over multiple regions, the user has to upload the keypair
+  separately to each region. A service is needed to clone the SSH key to the
+  desired OpenStack clouds.
+
+- A service to sync images across regions
+  In a multi-site scenario, a user has to upload an image separately to each
+  region. There are 4 cases to be considered:
+  no image sync;
+  auto-sync of images;
+  lazy sync - clone the requested image on demand;
+  controlled sync, where you can control propagation and roll back if
+  problems occur.
+
+- Global view for tenant level IP address / MAC address space management
+  If a tenant has networks in multiple regions, and these networks are
+  routable (for example, connected with VPN), then IP addresses may be
+  duplicated. A global view for IP address space management is needed.
+  For IPv4 this issue needs to be considered, and IPv6 should also
+  be managed. This requirement is important not just for the prevention of
+  duplicate addresses:
+  for security and other reasons it's important to know which IP addresses
+  (IPv4 and IPv6) are used in which region.
+  Such a requirement also needs to extend to floating and public IP addresses.
+
+- A service to clone security groups across regions
+  There is no appropriate service to clone security groups across multiple
+  regions; a tenant with distributed resources has to set the security groups
+  in each region manually.
+
+- A user should be able to access all the logs and indicators produced by
+  multiple OpenStack instances, in a centralized way.
+
+Requirement analysis
+====================
+
+All problems mentioned here are not covered by existing projects in OpenStack.
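+
+To illustrate the kind of per-region repetition these requirements aim to
+remove, consider the ssh key case: today the same public key has to be
+registered in every region by hand. A minimal sketch (the region and key
+names here are hypothetical):
+
+  .. code-block:: bash
+
+     # the same public key must be created once per region
+     for region in RegionOne RegionTwo RegionThree; do
+         openstack --os-region-name "$region" \
+             keypair create --public-key ~/.ssh/id_rsa.pub mykey
+     done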
+
+Candidate solution analysis
+---------------------------
+
+- Kingbird[1][2]
+  Kingbird is a centralized OpenStack service that provides resource
+  operation and management across multiple OpenStack instances in a
+  multi-region OpenStack deployment. Kingbird provides features like
+  centralized quota management, a centralized view of distributed virtual
+  resources, and synchronization of ssh keys, images, flavors etc. across
+  regions.
+
+- Tricircle[3][4]
+  Tricircle provides networking automation across Neutron in multi-region
+  OpenStack deployments. Tricircle can address the challenges mentioned here:
+  tenant level IP/MAC address management to avoid conflicts across OpenStack
+  clouds, global L2 network segment management and cross OpenStack L2
+  networking, and keeping security groups synchronized across OpenStack
+  clouds.
+
+
+Affected By
+-----------
+  OPNFV multisite cloud.
+
+Conclusion
+----------
+  Kingbird and Tricircle are candidate solutions for these centralized
+  services in OpenStack multi-region clouds.
+
+References
+==========
+[1] Kingbird repository: https://github.com/openstack/kingbird
+[2] Kingbird launchpad: https://launchpad.net/kingbird
+[3] Tricircle wiki: https://wiki.openstack.org/wiki/Tricircle
+[4] Tricircle repository: https://github.com/openstack/tricircle/
diff --git a/docs/requirements/multisite-identity-service-management.rst b/docs/requirements/multisite-identity-service-management.rst
index ad2cea1..c1eeb2b 100644
--- a/docs/requirements/multisite-identity-service-management.rst
+++ b/docs/requirements/multisite-identity-service-management.rst
@@ -9,12 +9,12 @@ Glossary
 ========
 
 There are 3 types of token supported by OpenStack KeyStone
 
+  **FERNET**
+
   **UUID**
 
   **PKI/PKIZ**
 
-  **FERNET**
-
 Please refer to reference section for these token formats, benchmark and
 comparation.
@@ -189,7 +189,7 @@ cover very well.
 multi-cluster mode).
 
 We may have several KeyStone cluster with Fernet token, for example,
-cluster1 ( site1, site2, … site 10 ), cluster 2 ( site11, site 12,..,site 20).
+cluster1(site1, site2, .., site 10), cluster 2(site11, site 12,.., site 20).
 Then do the DB async replication among different cluster asynchronously.
 
 A prototype of this has been down on this. In some blogs they call it
@@ -208,14 +208,16 @@ http://lbragstad.com/?p=156
 - KeyStone service(Distributed) with Fernet token + Async replication (
 star-mode).
 
- one master KeyStone cluster with Fernet token in two sites (for site level
-high availability purpose), other sites will be installed with at least 2 slave
-nodes where the node is configured with DB async replication from the master
-cluster members, and one slave’s mater node in site1, another slave’s master
-node in site 2.
+ one master KeyStone cluster with Fernet token in one or two sites (two
+sites if site level high availability is required); other sites will be
+installed with at least 2 slave nodes, where each node is configured with
+DB async replication from a master cluster member. The async replication
+data sources are better taken from different members of the master cluster;
+if there are two sites for the KeyStone cluster, it's better that the source
+members for async replication are located in different sites.
 
 Only the master cluster nodes are allowed to write, other slave nodes
-waiting for replication from the master cluster ( very little delay) member.
+waiting for (very little delay) replication from the master cluster member.
But the chanllenge of key distribution and rotation for Fernet token should be settled, you can refer to these two blogs: http://lbragstad.com/?p=133, http://lbragstad.com/?p=156 @@ -349,6 +351,9 @@ in deployment and maintenance, with better scalability. token + Async replication ( star-mode)" for multsite OPNFV cloud is recommended. + PKI token has been deprecated, so all proposals about PKI token are not +recommended. + References ========== diff --git a/docs/userguide/index.rst b/docs/userguide/index.rst deleted file mode 100644 index 25de482..0000000 --- a/docs/userguide/index.rst +++ /dev/null @@ -1,13 +0,0 @@ -.. This work is licensed under a Creative Commons Attribution 4.0 International License. -.. http://creativecommons.org/licenses/by/4.0 - -************************** -Multisite Admin User Guide -************************** - -.. toctree:: - :numbered: - :maxdepth: 4 - - multisite.admin.usage.rst - multisite.kingbird.usage.rst diff --git a/docs/userguide/multisite.admin.usage.rst b/docs/userguide/multisite.admin.usage.rst deleted file mode 100644 index 41f23c0..0000000 --- a/docs/userguide/multisite.admin.usage.rst +++ /dev/null @@ -1,390 +0,0 @@ -.. This work is licensed under a Creative Commons Attribution 4.0 International License. -.. http://creativecommons.org/licenses/by/4.0 - -========================== -Multisite admin user guide -========================== - -Multisite identity service management -===================================== - -Goal ----- - -A user should, using a single authentication point be able to manage virtual -resources spread over multiple OpenStack regions. - -Token Format ------------- - -There are 3 types of token format supported by OpenStack KeyStone - - * **UUID** - * **PKI/PKIZ** - * **FERNET** - -It's very important to understand these token format before we begin the -mutltisite identity service management. Please refer to the OpenStack -official site for the identity management. -http://docs.openstack.org/admin-guide-cloud/identity_management.html - -Key consideration in multisite scenario ---------------------------------------- - -A user is provided with a single authentication URL to the Identity (Keystone) -service. Using that URL, the user authenticates with Keystone by -requesting a token typically using username/password credentials. Keystone -server validates the credentials, possibly with an external LDAP/AD server and -returns a token to the user. The user sends a request to a service in a -selected region including the token. Now the service in the region, say Nova -needs to validate the token. The service uses its configured keystone endpoint -and service credentials to request token validation from Keystone. After the -token is validated by KeyStone, the user is authorized to use the service. - -The key considerations for token validation in multisite scenario are: - * Site level failure: impact on authN and authZ shoulde be as minimal as - possible - * Scalable: as more and more sites added, no bottleneck in token validation - * Amount of inter region traffic: should be kept as little as possible - -Hence, Keystone token validation should preferably be done in the same -region as the service itself. - -The challenge to distribute KeyStone service into each region is the KeyStone -backend. Different token format has different data persisted in the backend. - -* UUID: UUID tokens have a fixed size. Tokens are persistently stored and - create a lot of database traffic, the persistence of token is for the revoke - purpose. 
UUID tokens are validated online by Keystone, call to service will - request keystone for token validation. Keystone can become a - bottleneck in a large system. Due to this, UUID token type is not suitable - for use in multi region clouds, no matter the Keystone database - replicates or not. - -* PKI: Tokens are non persistent cryptographic based tokens and validated - offline (not by the Keystone service) by Keystone middleware which is part - of other services such as Nova. Since PKI tokens include endpoint for all - services in all regions, the token size can become big. There are - several ways to reduce the token size such as no catalog policy, endpoint - filter to make a project binding with limited endpoints, and compressed PKI - token - PKIZ, but the size of token is still unpredictable, making it difficult - to manage. If catalog is not applied, that means the user can access all - regions, in some scenario, it's not allowed to do like this. Centralized - Keystone with PKI token to reduce inter region backend synchronization traffic. - PKI tokens do produce Keystone traffic for revocation lists. - -* Fernet: Tokens are non persistent cryptographic based tokens and validated - online by the Keystone service. Fernet tokens are more lightweight - than PKI tokens and have a fixed size. Fernet tokens require Keystone - deployed in a distributed manner, again to avoid inter region traffic. The - data synchronization cost for the Keystone backend is smaller due to the non- - persisted token. - -Cryptographic tokens bring new (compared to UUID tokens) issues/use-cases -like key rotation, certificate revocation. Key management is out of scope for -this use case. - -Database deployment as the backend for KeyStone service ------------------------------------------------------- - -Database replication: - - Master/slave asynchronous: supported by the database server itself - (mysql/mariadb etc), works over WAN, it's more scalable. But only master will - provide write functionality, domain/project/role provisioning. - - Multi master synchronous: Galera(others like percona), not so scalable, - for multi-master writing, and need more parameter tunning for WAN latency.It - can provide the capability for limited multi-sites multi-write - function for distributed KeyStone service. - - Symmetrical/asymmetrical: data replicated to all regions or a subset, - in the latter case it means some regions needs to access Keystone in another - region. - -Database server sharing: -In an OpenStack controller, normally many databases from different -services are provided from the same database server instance. For HA reasons, -the database server is usually synchronously replicated to a few other nodes -(controllers) to form a cluster. Note that _all_ database are replicated in -this case, for example when Galera sync repl is used. - -Only the Keystone database can be replicated to other sites. Replicating -databases for other services will cause those services to get of out sync and -malfunction. - -Since only the Keystone database is to be sync or replicated to another -region/site, it's better to deploy Keystone database into its own -database server with extra networking requirement, cluster or replication -configuration. How to support this by installer is out of scope. - -The database server can be shared when async master/slave replication is -used, if global transaction identifiers GTID is enabled. 
- -Deployment options ------------------- - -**Distributed KeyStone service with PKI token** - -Deploy KeyStone service in two sites with database replication. If site -level failure impact is not considered, then KeyStone service can only be -deployed into one site. - -The PKI token has one great advantage is that the token validation can be -done locally, without sending token validation request to KeyStone server. -The drawback of PKI token is -the endpoint list size in the token. If a project will be only spread in -very limited site number(region number), then we can use the endpoint -filter to reduce the token size, make it workable even a lot of sites -in the cloud. -KeyStone middleware(which is co-located in the service like -Nova-API/xxx-API) will have to send the request to the KeyStone server -frequently for the revoke-list, in order to reject some malicious API -request, for example, a user has to be deactivated, but use an old token -to access OpenStack service. - -For this option, needs to leverage database replication to provide -KeyStone Active-Active mode across sites to reduce the impact of site failure. -And the revoke-list request is very frequently asked, so the performance of the -KeyStone server needs also to be taken care. - -Site level keystone load balance is required to provide site level -redundancy, otherwise the KeyStone middleware will not switch request to the -healthy KeyStone server in time. - -And also the cert distribution/revoke to each site / API server for token -validation is required. - -This option can be used for some scenario where there are very limited -sites, especially if each project only spreads into limited sites ( regions ). - -**Distributed KeyStone service with Fernet token** - -Fernet token is a very new format, and just introduced recently,the biggest -gain for this token format is :1) lightweight, size is small to be carried in -the API request, not like PKI token( as the sites increased, the endpoint-list -will grows and the token size is too long to carry in the API request) 2) no -token persistence, this also make the DB not changed too much and with light -weight data size (just project, Role, domain, endpoint etc). The drawback for -the Fernet token is that token has to be validated by KeyStone for each API -request. - -This makes that the DB of KeyStone can work as a cluster in multisite (for -example, using MySQL galera cluster). That means install KeyStone API server in -each site, but share the same the backend DB cluster.Because the DB cluster -will synchronize data in real time to multisite, all KeyStone server can see -the same data. - -Because each site with KeyStone installed, and all data kept same, -therefore all token validation could be done locally in the same site. - -The challenge for this solution is how many sites the DB cluster can -support. Question is aksed to MySQL galera developers, their answer is that no -number/distance/network latency limitation in the code. But in the practice, -they have seen a case to use MySQL cluster in 5 data centers, each data centers -with 3 nodes. - -This solution will be very good for limited sites which the DB cluster can -cover very well. 
- -**Distributed KeyStone service with Fernet token + Async replication (star-mode)** - -One master KeyStone cluster with Fernet token in two sites (for site level -high availability purpose), other sites will be installed with at least 2 slave -nodes where the node is configured with DB async replication from the master -cluster members, and one slave’s mater node in site1, another slave’s master -node in site 2. - -Only the master cluster nodes are allowed to write, other slave nodes -waiting for replication from the master cluster member( very little delay). - -Pros: - * Deploy database cluster in the master sites is to provide more master - nodes, in order to provide more slaves could be done with async. replication - in parallel. Two sites for the master cluster is to provide higher - reliability (site level) for writing request, but reduce the maintaince - challenge at the same time by limiting the cluster spreading over too many - sites. - * Multi-slaves in other sites is because of the slave has no knowledge of - other slaves, so easy to manage multi-slaves in one site than a cluster, and - multi-slaves work independently but provide multi-instance redundancy(like a - cluster, but independent). - -Cons: - * Need to be aware of the chanllenge of key distribution and rotation - for Fernet token. - -Note: PKI token will be deprecated soon, so Fernet token is encouraged. - -Multisite VNF Geo site disaster recovery -======================================== - -Goal ----- - -A VNF (telecom application) should, be able to restore in another site for -catastrophic failures happened. - -Key consideration in multisite scenario ---------------------------------------- - -Geo site disaster recovery is to deal with more catastrophic failures -(flood, earthquake, propagating software fault), and that loss of calls, or -even temporary loss of service, is acceptable. It is also seems more common -to accept/expect manual / administrator intervene into drive the process, not -least because you don’t want to trigger the transfer by mistake. - -In terms of coordination/replication or backup/restore between geographic -sites, discussion often (but not always) seems to focus on limited application -level data/config replication, as opposed to replication backup/restore between -of cloud infrastructure between different sites. - -And finally, the lack of a requirement to do fast media transfer (without -resignalling) generally removes the need for special networking behavior, with -slower DNS-style redirection being acceptable. - -Here is more concerns about cloud infrastructure level capability to -support VNF geo site disaster recovery - -Option1, Consistency application backup ---------------------------------------- - -The disater recovery process will work like this: - -1) DR(Geo site disaster recovery )software get the volumes for each VM - in the VNF from Nova -2) DR software call Nova quiesce API to quarantee quiecing VMs in desired order -3) DR software takes snapshots of these volumes in Cinder (NOTE: Because - storage often provides fast snapshot, so the duration between quiece and - unquiece is a short interval) -4) DR software call Nova unquiece API to unquiece VMs of the VNF in reverse order -5) DR software create volumes from the snapshots just taken in Cinder -6) DR software create backup (incremental) for these volumes to remote - backup storage ( swift or ceph, or.. ) in Cinder -7) If this site failed, - 1) DR software restore these backup volumes in remote Cinder in the backup site. 
- 2) DR software boot VMs from bootable volumes from the remote Cinder in - the backup site and attach the regarding data volumes. - -Note: Quiesce/Unquiesce spec was approved in Mitaka, but code not get merged in -time, https://blueprints.launchpad.net/nova/+spec/expose-quiesce-unquiesce-api -The spec was rejected in Newton when it was reproposed: -https://review.openstack.org/#/c/295595/. So this option will not work any more. - -Option2, Vitrual Machine Snapshot ---------------------------------- -1) DR software create VM snapshot in Nova -2) Nova quiece the VM internally - (NOTE: The upper level application or DR software should take care of - avoiding infra level outage induced VNF outage) -3) Nova create image in Glance -4) Nova create a snapshot of the VM, including volumes -5) If the VM is volume backed VM, then create volume snapshot in Cinder -5) No image uploaded to glance, but add the snapshot in the meta data of the - image in Glance -6) DR software to get the snapshot information from the Glance -7) DR software create volumes from these snapshots -9) DR software create backup (incremental) for these volumes to backup storage - ( swift or ceph, or.. ) in Cinder -10) If this site failed, - 1) DR software restore these backup volumes to Cinder in the backup site. - 2) DR software boot vm from bootable volume from Cinder in the backup site - and attach the data volumes. - -This option only provides single VM level consistency disaster recovery. - -This feature is already available in current OPNFV release. - -Option3, Consistency volume replication ---------------------------------------- -1) DR software creates datastore (Block/Cinder, Object/Swift, App Custom - storage) with replication enabled at the relevant scope, for use to - selectively backup/replicate desire data to GR backup site -2) DR software get the reference of storage in the remote site storage -3) If primary site failed, - 1) DR software managing recovery in backup site gets references to relevant - storage and passes to new software instances - 2) Software attaches (or has attached) replicated storage, in the case of - volumes promoting to writable. - -Pros: - * Replication will be done in the storage level automatically, no need to - create backup regularly, for example, daily. - * Application selection of limited amount of data to replicate reduces - risk of replicating failed state and generates less overhear. - * Type of replication and model (active/backup, active/active, etc) can - be tailored to application needs - -Cons: - * Applications need to be designed with support in mind, including both - selection of data to be replicated and consideration of consistency - * "Standard" support in Openstack for Disaster Recovery currently fairly - limited, though active work in this area. - -Note: Volume replication v2.1 support project level replication. - - -VNF high availability across VIM -================================ - -Goal ----- - -A VNF (telecom application) should, be able to realize high availability -deloyment across OpenStack instances. 
- -Key consideration in multisite scenario ---------------------------------------- - -Most of telecom applications have already been designed as -Active-Standby/Active-Active/N-Way to achieve high availability -(99.999%, corresponds to 5.26 minutes of unplanned downtime in a year), -typically state replication or heart beat between -Active-Active/Active-Active/N-Way (directly or via replicated database -services, or via private designed message format) are required. - -We have to accept the currently limited availability ( 99.99%) of a -given OpenStack instance, and intend to provide the availability of the -telecom application by spreading its function across multiple OpenStack -instances.To help with this, many people appear willing to provide multiple -“independent” OpenStack instances in a single geographic site, with special -networking (L2/L3) between clouds in that physical site. - -The telecom application often has different networking plane for different -purpose: - -1) external network plane: using for communication with other telecom - application. - -2) components inter-communication plane: one VNF often consisted of several - components, this plane is designed for components inter-communication with - each other - -3) backup plane: this plane is used for the heart beat or state replication - between the component's active/standby or active/active or N-way cluster. - -4) management plane: this plane is mainly for the management purpose, like - configuration - -Generally these planes are separated with each other. And for legacy telecom -application, each internal plane will have its fixed or flexible IP addressing -plane. There are some interesting/hard requirements on the networking (L2/L3) -between OpenStack instances, at lease the backup plane across different -OpenStack instances: - -1) Overlay L2 networking is prefered as the backup plane for heartbeat or state - replication, the reason is: - - a) Support legacy compatibility: Some telecom app with built-in internal L2 - network, for easy to move these app to virtualized telecom application, it - would be better to provide L2 network. - - b) Support IP overlapping: multiple telecom applications may have - overlapping IP address for cross OpenStack instance networking. - Therefore over L2 networking across Neutron feature is required - in OpenStack. - -2) L3 networking cross OpenStack instance for heartbeat or state replication. - Can leverage FIP or vRouter inter-connected with overlay L2 network to - establish overlay L3 networking. - -Note: L2 border gateway spec was merged in L2GW project: -https://review.openstack.org/#/c/270786/. Code will be availabe in later -release. diff --git a/docs/userguide/multisite.kingbird.usage.rst b/docs/userguide/multisite.kingbird.usage.rst deleted file mode 100644 index 4cdab4f..0000000 --- a/docs/userguide/multisite.kingbird.usage.rst +++ /dev/null @@ -1,182 +0,0 @@ -.. This work is licensed under a Creative Commons Attribution 4.0 International License. -.. http://creativecommons.org/licenses/by/4.0 - -============================= -Multisite.Kingbird user guide -============================= - -Quota management for OpenStack multi-region deployments -------------------------------------------------------- -Kingbird is centralized synchronization service for multi-region OpenStack -deployments. In OPNFV Colorado release, Kingbird provides centralized quota -management feature. 
Administrator can set quota per project based in Kingbird -and sync the quota limit to multi-region OpenStack periodiclly or on-demand. -The tenant can check the total quota limit and usage from Kingbird for all -regions. Administrator can aslo manage the default quota by quota class -setting. - -Following quota items are supported to be managed in Kingbird: - -- **instances**: Number of instances allowed per project. -- **cores**: Number of instance cores allowed per project. -- **ram**: Megabytes of instance RAM allowed per project. -- **metadata_items**: Number of metadata items allowed per instance. -- **key_pairs**: Number of key pairs per user. -- **fixed_ips**: Number of fixed IPs allowed per project, - valid if Nova Network is used. -- **security_groups**: Number of security groups per project, - valid if Nova Network is used. -- **floating_ips**: Number of floating IPs allowed per project, - valid if Nova Network is used. -- **network**: Number of networks allowed per project, - valid if Neutron is used. -- **subnet**: Number of subnets allowed per project, - valid if Neutron is used. -- **port**: Number of ports allowed per project, - valid if Neutron is used. -- **security_group**: Number of security groups allowed per project, - valid if Neutron is used. -- **security_group_rule**: Number of security group rules allowed per project, - valid if Neutron is used. -- **router**: Number of routers allowed per project, - valid if Neutron is used. -- **floatingip**: Number of floating IPs allowed per project, - valid if Neutron is used. -- **volumes**: Number of volumes allowed per project. -- **snapshots**: Number of snapshots allowed per project. -- **gigabytes**: Total amount of storage, in gigabytes, allowed for volumes - and snapshots per project. -- **backups**: Number of volume backups allowed per project. -- **backup_gigabytes**: Total amount of storage, in gigabytes, allowed for volume - backups per project. - -Only restful APIs are provided for Kingbird in Colorado release, so curl or -other http client can be used to call Kingbird API. - -Before use the following command, get token, project id, and kingbird service -endpoint first. Use $kb_token to repesent the token, and $admin_tenant_id as -administrator project_id, and $tenant_id as the target project_id for quota -management and $kb_ip_addr for the kingbird service endpoint ip address. - -Note: -To view all tenants (projects), run: - -.. code-block:: bash - - openstack project list - -To get token, run: - -.. code-block:: bash - - openstack token issue - -To get Kingbird service endpoint, run: - -.. code-block:: bash - - openstack endpoint list - -Quota Management API --------------------- - -1. Update global limit for a tenant - - curl \ - -H "Content-Type: application/json" \ - -H "X-Auth-Token: $kb_token" \ - -X PUT \ - -d '{"quota_set":{"cores": 10,"ram": 51200, "metadata_items": 100,"key_pairs": 100, "network":20,"security_group": 20,"security_group_rule": 20}}' \ - http://$kb_ip_addr:8118/v1.0/$admin_tenant_id/os-quota-sets/$tenant_id - -2. Get global limit for a tenant - - curl \ - -H "Content-Type: application/json" \ - -H "X-Auth-Token: $kb_token" \ - http://$kb_ip_addr:8118/v1.0/$admin_tenant_id/os-quota-sets/$tenant_id - -3. A tenant can also get the global limit by himself - - curl \ - -H "Content-Type: application/json" \ - -H "X-Auth-Token: $kb_token" \ - http://$kb_ip_addr:8118/v1.0/$tenant_id/os-quota-sets/$tenant_id - -4. 
Get defaults limits - - curl \ - -H "Content-Type: application/json" \ - -H "X-Auth-Token: $kb_token" \ - http://$kb_ip_addr:8118/v1.0/$admin_tenant_id/os-quota-sets/defaults - -5. Get total usage for a tenant - - curl \ - -H "Content-Type: application/json" \ - -H "X-Auth-Token: $kb_token" \ - -X GET \ - http://$kb_ip_addr:8118/v1.0/$admin_tenant_id/os-quota-sets/$tenant_id/detail - -6. A tenant can also get the total usage by himself - - curl \ - -H "Content-Type: application/json" \ - -H "X-Auth-Token: $kb_token" \ - -X GET \ - http://$kb_ip_addr:8118/v1.0/$tenant_id/os-quota-sets/$tenant_id/detail - -7. On demand quota sync - - curl \ - -H "Content-Type: application/json" \ - -H "X-Auth-Token: $kb_token" \ - -X PUT \ - http://$kb_ip_addr:8118/v1.0/$admin_tenant_id/os-quota-sets/$tenant_id/sync - - -8. Delete specific global limit for a tenant - - curl \ - -H "Content-Type: application/json" \ - -H "X-Auth-Token: $kb_token" \ - -X DELETE \ - -d '{"quota_set": [ "cores", "ram"]}' \ - http://$kb_ip_addr:8118/v1.0/$admin_tenant_id/os-quota-sets/$tenant_id - -9. Delete all kingbird global limit for a tenant - - curl \ - -H "Content-Type: application/json" \ - -H "X-Auth-Token: $kb_token" \ - -X DELETE \ - http://$kb_ip_addr:8118/v1.0/$admin_tenant_id/os-quota-sets/$tenant_id - - -Quota Class API ---------------- - -1. Update default quota class - - curl \ - -H "Content-Type: application/json" \ - -H "X-Auth-Token: $kb_token" \ - -X PUT \ - -d '{"quota_class_set":{"cores": 100, "network":50,"security_group": 50,"security_group_rule": 50}}' \ - http://$kb_ip_addr:8118/v1.0/$admin_tenant_id/os-quota-class-sets/default - -2. Get default quota class - - curl \ - -H "Content-Type: application/json" \ - -H "X-Auth-Token: $kb_token" \ - http://$kb_ip_addr:8118/v1.0/$admin_tenant_id/os-quota-class-sets/default - -3. Delete default quota class - - curl \ - -H "Content-Type: application/json" \ - -H "X-Auth-Token: $kb_token" \ - -X DELETE \ - http://$kb_ip_addr:8118/v1.0/$admin_tenant_id/os-quota-class-sets/default - -- cgit 1.2.3-korg