author    blsaws <bryan.sullivan@att.com>  2016-01-26 04:55:08 -0800
committer blsaws <bryan.sullivan@att.com>  2016-01-26 05:19:40 -0800
commit    3554b60d431300274f2c004618c33c59508460a4 (patch)
tree      d05c56d2301dad058826d467c2e986b7f08790ac /docs
parent    1074117e1c3e7fc6a44f00f9389fec47ad70150f (diff)
Address RST formatting issues (extra spaces etc)

JIRA: COPPER-1
Fix more pesky "errors"
Change-Id: I65e722f8969a58e32cde1e4651dfc62fd45c18a7
Signed-off-by: blsaws <bryan.sullivan@att.com>
Diffstat (limited to 'docs')
-rw-r--r--  docs/design/architecture.rst |  22
-rw-r--r--  docs/design/definitions.rst  |   4
-rw-r--r--  docs/design/introduction.rst |  14
-rw-r--r--  docs/design/requirements.rst | 120
-rw-r--r--  docs/design/usecases.rst     | 156
5 files changed, 158 insertions, 158 deletions
diff --git a/docs/design/architecture.rst b/docs/design/architecture.rst
index 9949c1a..d5ab9ff 100644
--- a/docs/design/architecture.rst
+++ b/docs/design/architecture.rst
@@ -4,7 +4,7 @@ Architecture
Architectural Concept
---------------------
The following example diagram illustrates a "relationship diagram" type view of an NFVI platform, in which the roles of components focused on policy management, services, and infrastructure are shown. This view illustrates that a large-scale deployment of NFVI may leverage multiple components of the same "type" (e.g. SDN Controller), which fulfill specific purposes for which they are optimized. For example, a global SDN controller and cloud orchestrator can act as directed by a service orchestrator in the provisioning of VNFs per intent, while various components at a local and global level handle policy-related events directly and/or feed them back through a closed-loop policy design that responds as needed, directly or through the service orchestrator.
-
+
.. image:: ./images/policy_architecture.png
:width: 700 px
:alt: policy_architecture.png
@@ -15,12 +15,12 @@ The following example diagram illustrates a "relationship diagram" type view of
Architectural Aspects
---------------------
* Policies are reflected in two high-level goals
-
+
* Ensure resource requirements of VNFs and services are applied per VNF designer, service, and tenant intent
* Ensure that generic policies are not violated, e.g. *networks connected to VMs must either be public or owned by the VM owner*
* Policies are distributed through two main means
-
+
* As part of VNF packages, customized if needed by Service Design tools, expressing intent of the VNF designer and service provider, and possibly customized or supplemented by service orchestrators per the intent of specific tenants
* As generic policies provisioned into VIMs (SDN controllers and cloud orchestrators), expressing the intent of the service provider regarding which states/events need to be policy-governed independently of specific VNFs
@@ -35,28 +35,28 @@ Architectural Aspects
* Delegated policy syntaxes are likely VIM-specific, e.g. Datalog (Congress), YANG (ODL-based SDNC), or other schemas specific to other SDNCs (Contrail, ONOS)
* Closed-loop policy and VNF-lifecycle event handling are *somewhat* distinct
-
+
* Closed-loop policy is mostly about resolving conditions that can't be handled locally, but as above in some cases the conditions may be of relevance and either delivered directly or forwarded to service orchestrators
* VNF-lifecycle events that can't be handled by the VIM locally are delivered directly to the service orchestrator
* Some events/analytics need to be collected into a more "open-loop" system which can enable other actions, e.g.
-
+
* audits and manual interventions
* machine-learning focused optimizations of policies (largely a future objective)
-
+
Issues to be investigated as part of establishing an overall cohesive/adaptive policy architecture:
* For the various components which may fulfill a specific purpose, what capabilities (e.g. APIs) do they have/need to
-
+
* handle events locally
* enable closed-loop policy handling components to subscribe/optimize policy-related events that are of interest
-
+
* For global controllers and cloud orchestrators
-
+
* How do they support correlation of events impacting resources in different scopes (network and cloud)
* What event/response flows apply to various policy use cases
-
+
* What specific policy use cases can/should fall into each overall class
-
+
* locally handled by NFVI components
* handled by a closed-loop policy system, either VNF/service-specific or VNF-independent
diff --git a/docs/design/definitions.rst b/docs/design/definitions.rst
index 4423d45..7f0628a 100644
--- a/docs/design/definitions.rst
+++ b/docs/design/definitions.rst
@@ -11,7 +11,7 @@ Definitions
- Information that can be used to convey or imply the state of something, e.g. an application, resource, entity, etc. This can include data held inside OPNFV components, "events" that have occurred (e.g. "policy violation"), etc.
* - Event
- - An item of significance to the policy engine, for which the engine has become aware through some method of discovery e.g. polling or notification.
+ - An item of significance to the policy engine, for which the engine has become aware through some method of discovery e.g. polling or notification.
Abbreviations
=============
@@ -30,7 +30,7 @@ Abbreviations
* - NF
- Network Function
-
+
* - SFC
- Service Function Chaining
diff --git a/docs/design/introduction.rst b/docs/design/introduction.rst
index 7f16248..f82d47e 100644
--- a/docs/design/introduction.rst
+++ b/docs/design/introduction.rst
@@ -5,7 +5,7 @@ Introduction
This work is licensed under a Creative Commons Attribution 3.0 Unported License.
http://creativecommons.org/licenses/by/3.0/legalcode
-
+
.. NOTE::
This is the working documentation for the Copper project.
@@ -19,10 +19,10 @@ As focused on by Copper, configuration policy helps ensure that the NFV service
* what I want
* what I don't want
-Using road-based transportation as an analogy, some examples of this are shown below.
+Using road-based transportation as an analogy, some examples of this are shown below.
.. list-table:: Configuration Intent Example
- :widths: 10 45 45
+ :widths: 10 45 45
:header-rows: 1
* - Who I Am
@@ -54,11 +54,11 @@ According to their role, service providers may apply more specific configuration
- clustering, auto-scaling, anti-affinity, live migration
* - disaster recovery
- geo-diverse anti-affinity
- * - high compute/storage performance
+ * - high compute/storage performance
- clustering, affinity
- * - high network performance
+ * - high network performance
- data plane acceleration
- * - resource reclamation
+ * - resource reclamation
- low-usage monitoring
Although such intent to capability translation is conceptually useful, it is unclear how it can address the variety of aspects that may affect the choice of an applicable configuration capability. For that reason, the Copper project will initially focus on more specific configuration requirements as fulfilled by specific configuration capabilities, and how those requirements and capabilities are expressed in VNF and service design and packaging, or as generic policies for the NFVI.
@@ -69,4 +69,4 @@ OPNFV Brahmaputra will be the initial OPNFV release for Copper, with the goals:
* Add the OpenStack Congress service to OPNFV, through at least one installer project
* If possible, add Congress support to the OPNFV CI/CD pipeline for all Genesis project installers (Apex, Fuel, JOID, Compass)
* Integrate Congress tests into Functest and develop additional use case tests for post-OPNFV-install
- * Extend with other OpenStack components for testing, as time permits \ No newline at end of file
+ * Extend with other OpenStack components for testing, as time permits
diff --git a/docs/design/requirements.rst b/docs/design/requirements.rst
index ee88b3c..61e9e08 100644
--- a/docs/design/requirements.rst
+++ b/docs/design/requirements.rst
@@ -1,72 +1,72 @@
Requirements
============
This section outlines general requirements for configuration policies, per the two main aspects in the Copper project scope:
- * Ensuring resource requirements of VNFs and services are applied per VNF designer, service, and tenant intent
- * Ensuring that generic policies are not violated, e.g. *networks connected to VMs must either be public or owned by the VM owner*
-
+ * Ensuring resource requirements of VNFs and services are applied per VNF designer, service, and tenant intent
+ * Ensuring that generic policies are not violated, e.g. *networks connected to VMs must either be public or owned by the VM owner*
+
Resource Requirements
+++++++++++++++++++++
Resource requirements describe the characteristics of virtual resources (compute, storage, network) that are needed for VNFs and services, and how those resources should be managed over the lifecycle of a VNF/service. Upstream projects already include multiple ways in which resource requirements can be expressed and fulfilled, e.g.:
- * OpenStack Nova
- * the `image <http://docs.openstack.org/openstack-ops/content/user_facing_images.html>`_ feature, enabling "VM templates" to be defined for NFs, and referenced by name as a specific NF version to be used
- * the `flavor <http://docs.openstack.org/openstack-ops/content/flavors.html>`_ feature, addressing basic compute and storage requirements, with extensibility for custom attributes
- * OpenStack Heat
- * the `Heat Orchestration Template <http://docs.openstack.org/developer/heat/template_guide/index.html>`_ feature, enabling a variety of VM aspects to be defined and managed by Heat throughout the VM lifecycle, notably
- * alarm handling (requires `Ceilometer <https://wiki.openstack.org/wiki/Ceilometer>`_)
- * attached volumes (requires `Cinder <https://wiki.openstack.org/wiki/Cinder>`_)
- * domain name assignment (requires `Designate <https://wiki.openstack.org/wiki/Designate>`_)
- * images (requires `Glance <https://wiki.openstack.org/wiki/Glance>`_)
- * autoscaling
- * software configuration associated with VM "lifecycle hooks (CREATE, UPDATE, SUSPEND, RESUME, DELETE"
- * wait conditions and signaling for sequencing orchestration steps
- * orchestration service user management (requires `Keystone <http://docs.openstack.org/developer/keystone/>`_)
- * shared storage (requires `Manila <https://wiki.openstack.org/wiki/Manila>`_)
- * load balancing (requires Neutron `LBaaS <http://docs.openstack.org/admin-guide-cloud/content/section_lbaas-overview.html>`_)
- * firewalls (requires Neutron `FWaaS <http://docs.openstack.org/admin-guide-cloud/content/install_neutron-fwaas-agent.html>`_)
- * various Neutron-based network and security configuration items
- * Nova flavors
- * Nova server attributes including access control
- * Nova server group affinity and anti-affinity
- * "Data-intensive application clustering" (requires `Sahara <https://wiki.openstack.org/wiki/Sahara>`_)
- * DBaaS (requires `Trove <http://docs.openstack.org/developer/trove/>`_)
- * "multi-tenant cloud messaging and notification service" (requires `Zaqar <http://docs.openstack.org/developer/zaqar/>`_)
- * OpenStack `Group-Based Policy <https://wiki.openstack.org/wiki/GroupBasedPolicy>`_
- * API-based grouping of endpoints with associated contractual expectations for data flow processing and service chaining
- * OpenStack `Tacker <https://wiki.openstack.org/wiki/Tacker>`_
- * "a fully functional ETSI MANO based general purpose NFV Orchestrator and VNF Manager for OpenStack"
- * OpenDaylight `Group-Based Policy <https://wiki.opendaylight.org/view/Group_Based_Policy_(GBP)>`_
- * model-based grouping of endpoints with associated contractual expectations for data flow processing
- * OpenDaylight `Service Function Chaining (SFC) <https://wiki.opendaylight.org/view/Service_Function_Chaining:Main>`_
- * model-based management of "service chains" and the infrastucture that enables them
- * Additional projects that are commonly used for configuration management, implemented as client-server frameworks using model-based, declarative, or scripted configuration management data.
- * `Puppet <https://puppetlabs.com/puppet/puppet-open-source>`_
- * `Chef <https://www.chef.io/chef/>`_
- * `Ansible <http://docs.ansible.com/ansible/index.html>`_
- * `Salt <http://saltstack.com/community/>`_
-
+ * OpenStack Nova
+ * the `image <http://docs.openstack.org/openstack-ops/content/user_facing_images.html>`_ feature, enabling "VM templates" to be defined for NFs, and referenced by name as a specific NF version to be used
+ * the `flavor <http://docs.openstack.org/openstack-ops/content/flavors.html>`_ feature, addressing basic compute and storage requirements, with extensibility for custom attributes
+ * OpenStack Heat
+ * the `Heat Orchestration Template <http://docs.openstack.org/developer/heat/template_guide/index.html>`_ feature, enabling a variety of VM aspects to be defined and managed by Heat throughout the VM lifecycle, notably
+ * alarm handling (requires `Ceilometer <https://wiki.openstack.org/wiki/Ceilometer>`_)
+ * attached volumes (requires `Cinder <https://wiki.openstack.org/wiki/Cinder>`_)
+ * domain name assignment (requires `Designate <https://wiki.openstack.org/wiki/Designate>`_)
+ * images (requires `Glance <https://wiki.openstack.org/wiki/Glance>`_)
+ * autoscaling
+ * software configuration associated with VM "lifecycle hooks" (CREATE, UPDATE, SUSPEND, RESUME, DELETE)
+ * wait conditions and signaling for sequencing orchestration steps
+ * orchestration service user management (requires `Keystone <http://docs.openstack.org/developer/keystone/>`_)
+ * shared storage (requires `Manila <https://wiki.openstack.org/wiki/Manila>`_)
+ * load balancing (requires Neutron `LBaaS <http://docs.openstack.org/admin-guide-cloud/content/section_lbaas-overview.html>`_)
+ * firewalls (requires Neutron `FWaaS <http://docs.openstack.org/admin-guide-cloud/content/install_neutron-fwaas-agent.html>`_)
+ * various Neutron-based network and security configuration items
+ * Nova flavors
+ * Nova server attributes including access control
+ * Nova server group affinity and anti-affinity
+ * "Data-intensive application clustering" (requires `Sahara <https://wiki.openstack.org/wiki/Sahara>`_)
+ * DBaaS (requires `Trove <http://docs.openstack.org/developer/trove/>`_)
+ * "multi-tenant cloud messaging and notification service" (requires `Zaqar <http://docs.openstack.org/developer/zaqar/>`_)
+ * OpenStack `Group-Based Policy <https://wiki.openstack.org/wiki/GroupBasedPolicy>`_
+ * API-based grouping of endpoints with associated contractual expectations for data flow processing and service chaining
+ * OpenStack `Tacker <https://wiki.openstack.org/wiki/Tacker>`_
+ * "a fully functional ETSI MANO based general purpose NFV Orchestrator and VNF Manager for OpenStack"
+ * OpenDaylight `Group-Based Policy <https://wiki.opendaylight.org/view/Group_Based_Policy_(GBP)>`_
+ * model-based grouping of endpoints with associated contractual expectations for data flow processing
+ * OpenDaylight `Service Function Chaining (SFC) <https://wiki.opendaylight.org/view/Service_Function_Chaining:Main>`_
+ * model-based management of "service chains" and the infrastructure that enables them
+ * Additional projects that are commonly used for configuration management, implemented as client-server frameworks using model-based, declarative, or scripted configuration management data.
+ * `Puppet <https://puppetlabs.com/puppet/puppet-open-source>`_
+ * `Chef <https://www.chef.io/chef/>`_
+ * `Ansible <http://docs.ansible.com/ansible/index.html>`_
+ * `Salt <http://saltstack.com/community/>`_
+
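The flavor- and image-based expression of resource requirements above can be illustrated with a small sketch. All names and data below are hypothetical; this is not the Nova API or any real OpenStack data model, just a picture of matching a VNF's declared needs against available flavors:

```python
# Illustrative sketch only: flavor catalog and requirement keys are invented.
flavors = {
    "m1.micro":  {"vcpus": 1, "ram_mb": 512,  "disk_gb": 1},
    "m1.small":  {"vcpus": 1, "ram_mb": 2048, "disk_gb": 20},
    "m1.medium": {"vcpus": 2, "ram_mb": 4096, "disk_gb": 40},
}

def pick_flavor(requirements, flavors):
    """Return the name of the smallest (by RAM) flavor meeting every requirement."""
    candidates = [
        (spec["ram_mb"], name)
        for name, spec in flavors.items()
        if all(spec[k] >= v for k, v in requirements.items())
    ]
    return min(candidates)[1] if candidates else None

# A VNF designer's resource intent, expressed as minimum needs:
vnf_needs = {"vcpus": 2, "ram_mb": 4096}
print(pick_flavor(vnf_needs, flavors))  # m1.medium
```

In a real deployment this matching is done by Nova's scheduler against flavor definitions; the sketch only shows the shape of the intent-to-capability mapping.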
Generic Policy Requirements
-+++++++++++++++++++++++++++
++++++++++++++++++++++++++++
Generic policy requirements address conditions related to resource state and events which need to be monitored for, and optionally responded to or prevented. These conditions are typically expected to be VNF/service-independent, as VNF/service-dependent condition handling (e.g. scale in/out) are considered to be addressed by VNFM/NFVO/VIM functions as described under Resource Requirements or as FCAPS related functions. However the general capabilities below can be applied to VNF/service-specific policy handling as well, or in particular to invocation of VNF/service-specific management/orchestration actions. The high-level required capabilities include:
- * Polled monitoring: Exposure of state via request-response APIs.
- * Notifications: Exposure of state via pub-sub APIs.
- * Realtime/near-realtime notifications: Notifications that occur in actual or near realtime.
- * Delegated policy: CRUD operations on policies that are distributed to specific components for local handling, including one/more of monitoring, violation reporting, and enforcement.
- * Violation reporting: Reporting of conditions that represent a policy violation.
- * Reactive enforcement: Enforcement actions taken in response to policy violation events.
- * Proactive enforcement: Enforcement actions taken in advance of policy violation events, e.g. blocking actions that could result in a policy violation.
- * Compliance auditing: Periodic auditing of state against policies.
-
+ * Polled monitoring: Exposure of state via request-response APIs.
+ * Notifications: Exposure of state via pub-sub APIs.
+ * Realtime/near-realtime notifications: Notifications that occur in actual or near realtime.
+ * Delegated policy: CRUD operations on policies that are distributed to specific components for local handling, including one/more of monitoring, violation reporting, and enforcement.
+ * Violation reporting: Reporting of conditions that represent a policy violation.
+ * Reactive enforcement: Enforcement actions taken in response to policy violation events.
+ * Proactive enforcement: Enforcement actions taken in advance of policy violation events, e.g. blocking actions that could result in a policy violation.
+ * Compliance auditing: Periodic auditing of state against policies.
+
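The distinction between violation reporting, proactive enforcement, and compliance auditing in the list above can be shown with a toy sketch. Everything here is hypothetical (no real policy engine API is implied), using the document's running example policy that networks connected to VMs must be public or owned by the VM owner:

```python
# Toy policy sketch; structures and function names are invented for illustration.
def violates(vm, network):
    """The generic policy: a private network must be owned by the VM owner."""
    return not network["public"] and network["owner"] != vm["owner"]

def proactive_check(vm, network):
    """Proactive enforcement: block the attach before the violation can occur."""
    if violates(vm, network):
        raise PermissionError("attach would violate network-ownership policy")

def audit(attachments):
    """Violation reporting / compliance audit over already-existing state."""
    return [(vm["name"], net["name"])
            for vm, net in attachments if violates(vm, net)]

vm1 = {"name": "vm1", "owner": "alice"}
net_priv = {"name": "net-bob", "public": False, "owner": "bob"}
print(audit([(vm1, net_priv)]))  # [('vm1', 'net-bob')]
```

Reactive enforcement would be the same check driven by a violation event, followed by a corrective action rather than a report.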
Upstream projects already include multiple ways in which configuration conditions can be monitored and responded to:
- * OpenStack `Congress <https://wiki.openstack.org/wiki/Congress>`_ provides a table-based mechanism for state monitoring and proactive/reactive policy enforcement, including (as of the Kilo release) data obtained from internal databases of Nova, Neutron, Ceilometer, Cinder, Glance, Keystone, and Swift. The Congress design approach is also extensible to other VIMs (e.g. SDNCs) through development of data source drivers for the new monitored state information. See `Stackforge Congress Data Source Translators <https://github.com/stackforge/congress/tree/master/congress/datasources>`_, `congress.readthedocs.org <http://congress.readthedocs.org/en/latest/cloudservices.html#drivers>`_, and the `Congress specs <https://github.com/stackforge/congress-specs>`_ for more info.
- * OpenStack `Ceilometer <https://wiki.openstack.org/wiki/Ceilometer>`_ provides means to trigger alarms upon a wide variety of conditions derived from its monitored OpenStack analytics.
- * `Nagios <https://www.nagios.org/#/>`_ "offers complete monitoring and alerting for servers, switches, applications, and services".
-
+ * OpenStack `Congress <https://wiki.openstack.org/wiki/Congress>`_ provides a table-based mechanism for state monitoring and proactive/reactive policy enforcement, including (as of the Kilo release) data obtained from internal databases of Nova, Neutron, Ceilometer, Cinder, Glance, Keystone, and Swift. The Congress design approach is also extensible to other VIMs (e.g. SDNCs) through development of data source drivers for the new monitored state information. See `Stackforge Congress Data Source Translators <https://github.com/stackforge/congress/tree/master/congress/datasources>`_, `congress.readthedocs.org <http://congress.readthedocs.org/en/latest/cloudservices.html#drivers>`_, and the `Congress specs <https://github.com/stackforge/congress-specs>`_ for more info.
+ * OpenStack `Ceilometer <https://wiki.openstack.org/wiki/Ceilometer>`_ provides means to trigger alarms upon a wide variety of conditions derived from its monitored OpenStack analytics.
+ * `Nagios <https://www.nagios.org/#/>`_ "offers complete monitoring and alerting for servers, switches, applications, and services".
+
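The alarm-style monitoring that Ceilometer offers can be sketched generically. The evaluation below is illustrative only (not Ceilometer's implementation or API): fire when a sliding window of samples all exceed a threshold, so a single spike does not trigger a response.

```python
def evaluate_alarm(samples, threshold, periods=3):
    """Illustrative threshold alarm: fire only when the last `periods`
    samples all exceed the threshold (window length is an assumption)."""
    window = samples[-periods:]
    return len(window) == periods and all(s > threshold for s in window)

cpu_util = [40.0, 55.0, 72.0, 81.0, 76.0]
print(evaluate_alarm(cpu_util, threshold=70.0))  # True: last 3 samples > 70
```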
Requirements Validation Approach
++++++++++++++++++++++++++++++++
The Copper project will assess the completeness of the upstream project solutions for requirements in scope through a process of:
- * developing configuration policy use cases to focus solution assessment tests
- * integrating the projects into the OPNFV platform for testing
- * executing functional and performance tests for the solutions
- * assessing overall requirements coverage and gaps in the most complete upstream solutions
-
-Depending upon the priority of discovered gaps, new requirements will be submitted to upstream projects for the next available release cycle. \ No newline at end of file
+ * developing configuration policy use cases to focus solution assessment tests
+ * integrating the projects into the OPNFV platform for testing
+ * executing functional and performance tests for the solutions
+ * assessing overall requirements coverage and gaps in the most complete upstream solutions
+
+Depending upon the priority of discovered gaps, new requirements will be submitted to upstream projects for the next available release cycle.
diff --git a/docs/design/usecases.rst b/docs/design/usecases.rst
index ca37e13..91b616c 100644
--- a/docs/design/usecases.rst
+++ b/docs/design/usecases.rst
@@ -18,26 +18,26 @@ As implemented by OpenStack Heat using server groups:
.. code::
- resources:
- servgrp1:
- type: OS::Nova::ServerGroup
- properties:
- policies:
- - affinity
- serv1:
- type: OS::Nova::Server
- properties:
- image: { get_param: image }
- flavor: { get_param: flavor }
- networks:
- - network: {get_param: network}
- serv2:
- type: OS::Nova::Server
- properties:
- image: { get_param: image }
- flavor: { get_param: flavor }
- networks:
- - network: {get_param: network}
+ resources:
+ servgrp1:
+ type: OS::Nova::ServerGroup
+ properties:
+ policies:
+ - affinity
+ serv1:
+ type: OS::Nova::Server
+ properties:
+ image: { get_param: image }
+ flavor: { get_param: flavor }
+ networks:
+ - network: {get_param: network}
+ serv2:
+ type: OS::Nova::Server
+ properties:
+ image: { get_param: image }
+ flavor: { get_param: flavor }
+ networks:
+ - network: {get_param: network}
Anti-Affinity
.............
@@ -50,34 +50,34 @@ As implemented by OpenStack Heat using scheduler hints:
.. code::
- heat template version: 2013-05-23
- parameters:
- image:
- type: string
- default: TestVM
- flavor:
- type: string
- default: m1.micro
- network:
- type: string
- default: cirros_net2
- resources:
- serv1:
- type: OS::Nova::Server
- properties:
- image: { get_param: image }
- flavor: { get_param: flavor }
- networks:
- - network: {get_param: network}
- scheduler_hints: {different_host: {get_resource: serv2}}
- serv2:
- type: OS::Nova::Server
- properties:
- image: { get_param: image }
- flavor: { get_param: flavor }
- networks:
- - network: {get_param: network}
- scheduler_hints: {different_host: {get_resource: serv1}}
+ heat_template_version: 2013-05-23
+ parameters:
+ image:
+ type: string
+ default: TestVM
+ flavor:
+ type: string
+ default: m1.micro
+ network:
+ type: string
+ default: cirros_net2
+ resources:
+ serv1:
+ type: OS::Nova::Server
+ properties:
+ image: { get_param: image }
+ flavor: { get_param: flavor }
+ networks:
+ - network: {get_param: network}
+ scheduler_hints: {different_host: {get_resource: serv2}}
+ serv2:
+ type: OS::Nova::Server
+ properties:
+ image: { get_param: image }
+ flavor: { get_param: flavor }
+ networks:
+ - network: {get_param: network}
+ scheduler_hints: {different_host: {get_resource: serv1}}
DMZ Deployment
..............
@@ -108,47 +108,47 @@ This use case captures the intent of the following sub-use-cases:
* Unauthorized Link Mirroring Prevention: As a NFVaaS tenant, I need to be able to prevent other tenants from mirroring traffic on my virtual network ports so that I can protect the privacy of my service users.
* Link Mirroring Delegation: As a NFVaaS tenant, I need to be able to allow my NFVaaS SP customer support to mirror traffic on my virtual network ports so that they can assist in investigating trouble reports.
-As implemented through OpenStack Congress:
+As implemented through OpenStack Congress:
*Note: untested example...*
-.. code::
+.. code::
- error :-
- nova:vm(vm),
- neutron:network(network),
- nova:network(vm, network),
- neutron:private(network),
- nova:owner(vm, vm-own),
- neutron:owner(network, net-own),
+ error :-
+ nova:vm(vm),
+ neutron:network(network),
+ nova:network(vm, network),
+ neutron:private(network),
+ nova:owner(vm, vm-own),
+ neutron:owner(network, net-own),
-same-group(vm-own, net-own)
-
- same-group(user1, user2) :-
- ldap:group(user1, g),
+
+ same-group(user1, user2) :-
+ ldap:group(user1, g),
ldap:group(user2, g)
-
+
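The Congress rule above is essentially a join over several data-source tables. Its logic can be mimicked in plain Python to show what the engine evaluates; the tables and data below are mock values for illustration, not the Congress API:

```python
# Mock data-source tables mirroring the Datalog join above (hypothetical data).
nova_vm = {"vm1"}
nova_network = {("vm1", "netA")}                    # (vm, attached network)
neutron_private = {"netA"}                          # private (non-public) networks
nova_owner = {("vm1", "alice")}                     # (vm, owner)
neutron_net_owner = {("netA", "bob")}               # (network, owner)
ldap_group = {("alice", "eng"), ("bob", "sales")}   # (user, group)

def same_group(u1, u2):
    """True when both users share at least one LDAP group."""
    return any((u1, g) in ldap_group and (u2, g) in ldap_group
               for _, g in ldap_group)

def errors():
    """VMs attached to a private network owned outside the VM owner's group."""
    vm_owner = dict(nova_owner)
    net_owner = dict(neutron_net_owner)
    return {vm for vm, net in nova_network
            if vm in nova_vm and net in neutron_private
            and not same_group(vm_owner[vm], net_owner[net])}

print(errors())  # {'vm1'}: netA is private and bob shares no group with alice
```

Congress evaluates such rules declaratively over live data-source tables; the imperative version only illustrates the semantics.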
Storage Access Control
......................
Storage resources connected to VMs must be owned by someone in the VM owner's group.
-As implemented through OpenStack Congress:
+As implemented through OpenStack Congress:
*Note: untested example...*
-.. code::
+.. code::
- error :-
- nova:vm(vm),
- cinder:volumes(volume),
- nova:volume(vm, volume),
- nova:owner(vm, vm-own),
- neutron:owner(volume, vol-own),
+ error :-
+ nova:vm(vm),
+ cinder:volumes(volume),
+ nova:volume(vm, volume),
+ nova:owner(vm, vm-own),
+ neutron:owner(volume, vol-own),
-same-group(vm-own, vol-own)
-
- same-group(user1, user2) :-
- ldap:group(user1, g),
+
+ same-group(user1, user2) :-
+ ldap:group(user1, g),
ldap:group(user2, g)
Resource Management
@@ -157,11 +157,11 @@ Resource Management
Resource Reclamation
....................
-As a service provider or tenant, I need to be informed of VMs that are under-utilized so that I can reclaim the VI resources. (example from `RuleYourCloud blog <http://ruleyourcloud.com/2015/03/12/scaling-up-congress.html>`_)
+As a service provider or tenant, I need to be informed of VMs that are under-utilized so that I can reclaim the VI resources. (example from `RuleYourCloud blog <http://ruleyourcloud.com/2015/03/12/scaling-up-congress.html>`_)
-As implemented through OpenStack Congress:
+As implemented through OpenStack Congress:
-.. code::
+.. code::
reclaim_server(vm) :-
ceilometer:stats("cpu_util",vm, avg_cpu),
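The rule above (truncated by the hunk boundary) flags under-utilized VMs based on Ceilometer CPU statistics. The idea can be sketched in Python with mock statistics; the 1% threshold is an assumption for illustration, since the rule's actual cutoff is not shown here:

```python
# Mock Ceilometer-style statistics: (meter, resource, average value).
stats = [
    ("cpu_util", "vm1", 0.3),
    ("cpu_util", "vm2", 42.0),
]

def reclaim_candidates(stats, threshold=1.0):
    """VMs whose average cpu_util falls below the threshold
    (threshold value assumed for illustration)."""
    return [vm for meter, vm, avg in stats
            if meter == "cpu_util" and avg < threshold]

print(reclaim_candidates(stats))  # ['vm1']
```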
@@ -175,11 +175,11 @@ As implemented through OpenStack Congress:
Resource Use Limits
...................
-As a tenant or service provider, I need to be automatically terminate an instance that has run for a pre-agreed maximum duration.
+As a tenant or service provider, I need to be able to automatically terminate an instance that has run for a pre-agreed maximum duration.
-As implemented through OpenStack Congress:
+As implemented through OpenStack Congress:
-.. code::
+.. code::
terminate_server(vm) :-
ceilometer:statistics("duration",vm, avg_cpu),
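The rule above (also truncated by the hunk boundary) selects VMs whose running duration exceeds the agreed maximum. An illustrative Python analogue, with mock statistics and an assumed 24-hour limit:

```python
# Mock Ceilometer-style statistics: (meter, resource, value in seconds).
stats = [
    ("duration", "vm1", 90000.0),   # ~25 hours
    ("duration", "vm2", 3600.0),    # 1 hour
]

def terminate_candidates(stats, max_duration_s):
    """VMs whose running duration exceeds a pre-agreed maximum
    (illustrative analogue of the truncated rule above)."""
    return [vm for meter, vm, value in stats
            if meter == "duration" and value > max_duration_s]

print(terminate_candidates(stats, max_duration_s=86400))  # ['vm1']
```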