Diffstat (limited to 'docs/design')
-rw-r--r--  docs/design/architecture.rst                 100
-rw-r--r--  docs/design/conf.py                           50
-rw-r--r--  docs/design/definitions.rst                   46
-rw-r--r--  docs/design/featureconfig_link.rst             2
-rw-r--r--  docs/design/featureusage_link.rst              2
-rw-r--r--  docs/design/images/policy_architecture.png   bin 324028 -> 0 bytes
-rw-r--r--  docs/design/images/policy_architecture.pptx  bin 33988 -> 0 bytes
-rw-r--r--  docs/design/index.rst                         22
-rw-r--r--  docs/design/introduction.rst                  99
-rw-r--r--  docs/design/postinstall_link.rst               2
-rw-r--r--  docs/design/releasenotes_link.rst              2
-rw-r--r--  docs/design/requirements.rst                 135
-rw-r--r--  docs/design/usecases.rst                     330
13 files changed, 0 insertions, 790 deletions
diff --git a/docs/design/architecture.rst b/docs/design/architecture.rst
deleted file mode 100644
index 02d8335..0000000
--- a/docs/design/architecture.rst
+++ /dev/null
@@ -1,100 +0,0 @@
-.. This work is licensed under a
-.. Creative Commons Attribution 4.0 International License.
-.. http://creativecommons.org/licenses/by/4.0
-.. (c) 2015-2017 AT&T Intellectual Property, Inc
-
-Architecture
-============
-
-Architectural Concept
----------------------
-The following example diagram illustrates a "relationship diagram" type view of
-an NFVI platform, in which the roles of components focused on policy management,
-services, and infrastructure are shown.
-
-This view illustrates that a large-scale deployment of NFVI may leverage multiple
-components of the same "type" (e.g. SDN Controller), which fulfill specific
-purposes for which they are optimized. For example, a global SDN controller and
-cloud orchestrator can act as directed by a service orchestrator in the
-provisioning of VNFs per intent, while various components at a local and global
-level handle policy-related events directly and/or feed them back through a
-closed-loop policy design that responds as needed, directly or through the
-service orchestrator.
-
-.. image:: ./images/policy_architecture.png
- :width: 700 px
- :alt: policy_architecture.png
- :align: center
-
-(source of the diagram above:
-https://git.opnfv.org/cgit/copper/plain/design_docs/images/policy_architecture.pptx)
-
-Architectural Aspects
----------------------
- * Policies are reflected in two high-level goals
-
- * Ensure resource requirements of VNFs and services are applied per VNF
- designer, service, and tenant intent
- * Ensure that generic policies are not violated, e.g. *networks connected to
- VMs must either be public or owned by the VM owner*
-
- * Policies are distributed through two main means
-
- * As part of VNF packages, customized if needed by Service Design tools,
- expressing intent of the VNF designer and service provider, and possibly
- customized or supplemented by service orchestrators per the intent of
- specific tenants
- * As generic policies provisioned into VIMs (SDN controllers and cloud
-     orchestrators), expressing intent of the service provider regarding what
- states/events need to be policy-governed independently of specific VNFs
-
- * Policies are applied locally and in closed-loop systems per the capabilities
- of the local policy enforcer and the impact of the related state/event conditions
-
- * VIMs should be able to execute most policies locally
- * VIMs may need to pass policy-related state/events to a closed-loop system,
- where those events are relevant to other components in the architecture
- (e.g. service orchestrator), or some additional data/arbitration is needed
- to resolve the state/event condition
-
- * Policies are localized as they are distributed/delegated
-
- * High-level policies (e.g. expressing "intent") can be translated into VNF
- package elements or generic policies, perhaps using distinct syntaxes
- * Delegated policy syntaxes are likely VIM-specific, e.g. Datalog (Congress)
-
- * Closed-loop policy and VNF-lifecycle event handling are *somewhat* distinct
-
- * Closed-loop policy is mostly about resolving conditions that can't be
- handled locally, but as above in some cases the conditions may be of
- relevance and either delivered directly or forwarded to service orchestrators
- * VNF-lifecycle events that can't be handled by the VIM locally are delivered
- directly to the service orchestrator
-
- * Some events/analytics need to be collected into a more "open-loop" system
- which can enable other actions, e.g.
-
- * audits and manual interventions
- * machine-learning focused optimizations of policies (largely a future objective)
-
-Issues to be investigated as part of establishing an overall cohesive/adaptive
-policy architecture:
-
- * For the various components which may fulfill a specific purpose, what
- capabilities (e.g. APIs) do they have/need to
-
- * handle events locally
-    * enable closed-loop policy handling components to subscribe to, and
-      optimize the handling of, policy-related events that are of interest
-
- * For global controllers and cloud orchestrators
-
- * How do they support correlation of events impacting resources in different
- scopes (network and cloud)
- * What event/response flows apply to various policy use cases
-
- * What specific policy use cases can/should fall into each overall class
-
- * locally handled by NFVI components
- * handled by a closed-loop policy system, either VNF/service-specific or
- VNF-independent
diff --git a/docs/design/conf.py b/docs/design/conf.py
deleted file mode 100644
index 99b1ca2..0000000
--- a/docs/design/conf.py
+++ /dev/null
@@ -1,50 +0,0 @@
-# Copyright 2015 Open Platform for NFV Project, Inc. and its contributors
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-#
-# What this is: Configuration file for OPNFV Copper project documentation
-# generation via Sphinx.
-#
-# Status: Tested and used in production of Copper project documentation.
-#
-
-import datetime
-import sys
-import os
-
-try:
-    __import__('sphinx.ext.numfig')
-    extensions = ['sphinx.ext.numfig']
-except ImportError:
-    # fall back to the standalone package: 'pip install sphinx_numfig'
-    extensions = ['sphinx_numfig']
-
-# numfig:
-number_figures = True
-figure_caption_prefix = "Fig."
-
-source_suffix = '.rst'
-master_doc = 'index'
-pygments_style = 'sphinx'
-html_use_index = False
-html_theme = 'sphinx_rtd_theme'
-
-pdf_documents = [('index', u'OPNFV', u'OPNFV Copper Project', u'OPNFV')]
-pdf_fit_mode = "shrink"
-pdf_stylesheets = ['sphinx','kerning','a4']
-#latex_domain_indices = False
-#latex_use_modindex = False
-
-latex_elements = {
- 'printindex': '',
-}
diff --git a/docs/design/definitions.rst b/docs/design/definitions.rst
deleted file mode 100644
index 5552696..0000000
--- a/docs/design/definitions.rst
+++ /dev/null
@@ -1,46 +0,0 @@
-.. This work is licensed under a
-.. Creative Commons Attribution 4.0 International License.
-.. http://creativecommons.org/licenses/by/4.0
-.. (c) 2015-2017 AT&T Intellectual Property, Inc
-
-Definitions
-===========
-.. list-table:: Definitions
- :widths: 15 85
- :header-rows: 1
-
- * - Term
- - Meaning
-
- * - State
- - Information that can be used to convey or imply the state of something, e.g. an application, resource, entity, etc. This can include data held inside OPNFV components, "events" that have occurred (e.g. "policy violation"), etc.
-
- * - Event
-     - An item of significance to the policy engine, of which the engine has become aware through some method of discovery, e.g. polling or notification.
-
-Abbreviations
-=============
-.. list-table:: Abbreviations
- :widths: 15 85
- :header-rows: 1
-
- * - Term
- - Meaning
-
- * - CRUD
- - Create, Read, Update, Delete (database operation types)
-
- * - FCAPS
- - Fault, Configuration, Accounting, Performance, Security
-
- * - NF
- - Network Function
-
- * - SFC
- - Service Function Chaining
-
- * - VNF
- - Virtual Network Function
-
- * - NFVI
- - Network Function Virtualization Infrastructure
diff --git a/docs/design/featureconfig_link.rst b/docs/design/featureconfig_link.rst
deleted file mode 100644
index 6af66f7..0000000
--- a/docs/design/featureconfig_link.rst
+++ /dev/null
@@ -1,2 +0,0 @@
-.. include::
- ../installationprocedure/feature.configuration.rst
diff --git a/docs/design/featureusage_link.rst b/docs/design/featureusage_link.rst
deleted file mode 100644
index c3ac10b..0000000
--- a/docs/design/featureusage_link.rst
+++ /dev/null
@@ -1,2 +0,0 @@
-.. include::
- ../userguide/feature.usage.rst
diff --git a/docs/design/images/policy_architecture.png b/docs/design/images/policy_architecture.png
deleted file mode 100644
index eb37e36..0000000
--- a/docs/design/images/policy_architecture.png
+++ /dev/null
Binary files differ
diff --git a/docs/design/images/policy_architecture.pptx b/docs/design/images/policy_architecture.pptx
deleted file mode 100644
index 5739f0f..0000000
--- a/docs/design/images/policy_architecture.pptx
+++ /dev/null
Binary files differ
diff --git a/docs/design/index.rst b/docs/design/index.rst
deleted file mode 100644
index b1bc74b..0000000
--- a/docs/design/index.rst
+++ /dev/null
@@ -1,22 +0,0 @@
-.. This work is licensed under a
-.. Creative Commons Attribution 4.0 International License.
-.. http://creativecommons.org/licenses/by/4.0
-.. (c) 2015-2017 AT&T Intellectual Property, Inc
-
-********************
-OPNFV Copper Project
-********************
-
-.. toctree::
- :numbered:
- :maxdepth: 4
-
- introduction.rst
- releasenotes_link.rst
- definitions.rst
- usecases.rst
- architecture.rst
- requirements.rst
- featureconfig_link.rst
- postinstall_link.rst
- featureusage_link.rst
diff --git a/docs/design/introduction.rst b/docs/design/introduction.rst
deleted file mode 100644
index cc2ceee..0000000
--- a/docs/design/introduction.rst
+++ /dev/null
@@ -1,99 +0,0 @@
-.. This work is licensed under a
-.. Creative Commons Attribution 4.0 International License.
-.. http://creativecommons.org/licenses/by/4.0
-.. (c) 2015-2017 AT&T Intellectual Property, Inc
-
-Introduction
-============
-
-.. NOTE::
- This is the working documentation for the Copper project.
-
-The `OPNFV Copper <https://wiki.opnfv.org/copper>`_ project aims to help ensure
-that virtualized infrastructure and application deployments comply with goals of
-the NFV service provider or the VNF designer/user.
-
-This is the third ("Danube") release of the Copper project. The documentation
-provided here focuses on the overall goals of the Copper project and the
-specific features supported in the Danube release.
-
-Overall Goals for Configuration Policy
---------------------------------------
-
-As focused on by Copper, configuration policy helps ensure that the NFV service
-environment meets the requirements of the variety of stakeholders which will
-provide or use NFV platforms.
-
-These requirements can be expressed as an *intent* of the stakeholder,
-in specific terms or more abstractly, but at the highest level they express:
-
- * what I want
- * what I don't want
-
-Using road-based transportation as an analogy, some examples of this are shown
-below:
-
-.. list-table:: Configuration Intent Example
- :widths: 10 45 45
- :header-rows: 1
-
- * - Who I Am
- - What I Want
- - What I Don't Want
- * - user
- - a van, wheelchair-accessible, electric powered
- - someone driving off with my van
- * - road provider
- - keep drivers moving at an optimum safe speed
- - four-way stops
- * - public safety
- - shoulder warning strips, center media barriers
- - speeding, tractors on the freeway
-
-According to their role, service providers may apply more specific configuration
-requirements than users, since service providers are more likely to be managing
-specific types of infrastructure capabilities.
-
-Developers and users may also express their requirements more specifically,
-based upon the type of application or how the user intends to use it.
-
-For users, a high-level intent can also be translated into a more or less specific
-configuration capability by the service provider, taking into consideration
-aspects such as the type of application or its constraints.
-
-Examples of such translation are:
-
-.. list-table:: Intent Translation into Configuration Capability
- :widths: 40 60
- :header-rows: 1
-
- * - Intent
- - Configuration Capability
- * - network security
- - firewall, DPI, private subnets
- * - compute/storage security
- - vulnerability monitoring, resource access controls
- * - high availability
- - clustering, auto-scaling, anti-affinity, live migration
- * - disaster recovery
- - geo-diverse anti-affinity
- * - high compute/storage performance
- - clustering, affinity
- * - high network performance
- - data plane acceleration
- * - resource reclamation
- - low-usage monitoring
-
-Although such intent-to-capability translation is conceptually useful, it is
-unclear how it can address the variety of aspects that may affect the choice of
-an applicable configuration capability.
-
-For that reason, the Copper project will initially focus on more specific
-configuration requirements as fulfilled by specific configuration capabilities,
-as well as how those requirements and capabilities are expressed in VNF and service
-design and packaging or as generic policies for the NFV Infrastructure.
diff --git a/docs/design/postinstall_link.rst b/docs/design/postinstall_link.rst
deleted file mode 100644
index ca8d99c..0000000
--- a/docs/design/postinstall_link.rst
+++ /dev/null
@@ -1,2 +0,0 @@
-.. include::
- ../installationprocedure/installation.instruction.rst
diff --git a/docs/design/releasenotes_link.rst b/docs/design/releasenotes_link.rst
deleted file mode 100644
index 6ee81a3..0000000
--- a/docs/design/releasenotes_link.rst
+++ /dev/null
@@ -1,2 +0,0 @@
-.. include::
- ../releasenotes/release.notes.rst
diff --git a/docs/design/requirements.rst b/docs/design/requirements.rst
deleted file mode 100644
index 87894cf..0000000
--- a/docs/design/requirements.rst
+++ /dev/null
@@ -1,135 +0,0 @@
-.. This work is licensed under a
-.. Creative Commons Attribution 4.0 International License.
-.. http://creativecommons.org/licenses/by/4.0
-.. (c) 2015-2017 AT&T Intellectual Property, Inc
-
-Requirements
-============
-This section outlines general requirements for configuration policies,
-per the two main aspects in the Copper project scope:
-
- * Ensuring resource requirements of VNFs and services are applied per VNF
- designer, service, and tenant intent
- * Ensuring that generic policies are not violated,
- e.g. *networks connected to VMs must either be public or owned by the VM owner*
-
-Resource Requirements
-+++++++++++++++++++++
-Resource requirements describe the characteristics of virtual resources (compute,
-storage, network) that are needed for VNFs and services, and how those resources
-should be managed over the lifecycle of a VNF/service. Upstream projects already
-include multiple ways in which resource requirements can be expressed and fulfilled, e.g.:
-
- * OpenStack Nova
-
- * the Image feature, enabling "VM templates" to be defined for NFs and referenced by name as a specific NF version to be used
- * the Flavor feature, addressing basic compute and storage requirements, with extensibility for custom attributes
-
- * OpenStack Heat
-
- * the `Heat Orchestration Template <http://docs.openstack.org/developer/heat/template_guide/index.html>`_
- feature, enabling a variety of VM aspects to be defined and managed by
- Heat throughout the VM lifecycle, notably
-
- * alarm handling (requires `Ceilometer <https://wiki.openstack.org/wiki/Ceilometer>`_)
- * attached volumes (requires `Cinder <https://wiki.openstack.org/wiki/Cinder>`_)
- * domain name assignment (requires `Designate <https://wiki.openstack.org/wiki/Designate>`_)
- * images (requires `Glance <https://wiki.openstack.org/wiki/Glance>`_)
- * autoscaling
- * software configuration associated with VM lifecycle hooks (CREATE,
- UPDATE, SUSPEND, RESUME, DELETE)
- * wait conditions and signaling for sequencing orchestration steps
- * orchestration service user management (requires
- `Keystone <http://docs.openstack.org/developer/keystone/>`_)
- * shared storage (requires `Manila <https://wiki.openstack.org/wiki/Manila>`_)
- * load balancing (requires `Neutron LBaaS <http://docs.openstack.org/admin-guide/networking.html>`_)
- * firewalls (requires `Neutron FWaaS <http://docs.openstack.org/admin-guide/networking.html>`_)
- * various Neutron-based network and security configuration items
- * Nova flavors
- * Nova server attributes including access control
- * Nova server group affinity and anti-affinity
- * "Data-intensive application clustering" (requires
- `Sahara <https://wiki.openstack.org/wiki/Sahara>`_)
- * DBaaS (requires `Trove <http://docs.openstack.org/developer/trove/>`_)
- * "multi-tenant cloud messaging and notification service" (requires
- `Zaqar <http://docs.openstack.org/developer/zaqar/>`_)
-
- * `OpenStack Group-Based Policy <https://wiki.openstack.org/wiki/GroupBasedPolicy>`_
-
- * API-based grouping of endpoints with associated contractual expectations for data flow processing and service chaining
-
- * `OpenStack Tacker <https://wiki.openstack.org/wiki/Tacker>`_
-
- * "a fully functional ETSI MANO based general purpose NFV Orchestrator and VNF Manager for OpenStack"
-
- * `OpenDaylight Group-Based Policy <https://wiki.opendaylight.org/view/Group_Based_Policy_(GBP)>`_
-
- * model-based grouping of endpoints with associated contractual expectations for data flow processing
-
- * `OpenDaylight Service Function Chaining (SFC) <https://wiki.opendaylight.org/view/Service_Function_Chaining:Main>`_
-
-    * model-based management of "service chains" and the infrastructure that enables them
-
- * Additional projects that are commonly used for configuration management,
- implemented as client-server frameworks using model-based, declarative, or
- scripted configuration management data.
-
- * `Puppet <https://puppetlabs.com/puppet/puppet-open-source>`_
- * `Chef <https://www.chef.io/chef/>`_
- * `Ansible <http://docs.ansible.com/ansible/index.html>`_
- * `Salt <http://saltstack.com/community/>`_
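As a rough illustration of the declarative pattern these mechanisms share, a flavor-style resource requirement can be modeled as a set of minimum resource values checked against available capacity. This is a hypothetical sketch, not Nova or Heat code; the data model and names are invented for illustration:

```python
# Hypothetical sketch of a flavor-style resource requirement (cf. Nova
# Flavors): a declaration of minimum resources, checked against the free
# capacity of a candidate host. Not OpenStack code; names are invented.

flavor = {"vcpus": 2, "ram_mb": 4096, "disk_gb": 20}

def host_satisfies(host_free, requirement):
    # A host can fulfil the requirement only if it has enough of every
    # declared resource; missing resource types count as zero.
    return all(host_free.get(k, 0) >= v for k, v in requirement.items())

print(host_satisfies({"vcpus": 8, "ram_mb": 16384, "disk_gb": 100}, flavor))  # True
print(host_satisfies({"vcpus": 1, "ram_mb": 16384, "disk_gb": 100}, flavor))  # False
```

The point is only that the requirement is data, separate from the code that fulfils it, which is what lets orchestrators and policy engines reason about it.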
-
-Generic Policy Requirements
-+++++++++++++++++++++++++++
-Generic policy requirements address conditions related to resource state and
-events which need to be monitored for, and optionally responded to or prevented.
-These conditions are typically expected to be VNF/service-independent, as
-VNF/service-dependent condition handling (e.g. scale in/out) is considered to
-be addressed by VNFM/NFVO/VIM functions as described under Resource Requirements
-or as FCAPS-related functions. However, the general capabilities below can be
-applied to VNF/service-specific policy handling as well, or in particular to
-invocation of VNF/service-specific management/orchestration actions. The
-high-level required capabilities include:
-
- * Polled monitoring: Exposure of state via request-response APIs.
- * Notifications: Exposure of state via pub-sub APIs.
- * Realtime/near-realtime notifications: Notifications that occur in actual or
- near realtime.
- * Delegated policy: CRUD operations on policies that are distributed to
- specific components for local handling, including one/more of monitoring,
- violation reporting, and enforcement.
- * Violation reporting: Reporting of conditions that represent a policy violation.
- * Reactive enforcement: Enforcement actions taken in response to policy
- violation events.
- * Proactive enforcement: Enforcement actions taken in advance of policy
- violation events,
- e.g. blocking actions that could result in a policy violation.
- * Compliance auditing: Periodic auditing of state against policies.
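To make the distinction between these capabilities concrete, the following standalone sketch models one polling cycle that combines polled monitoring, violation reporting, and reactive enforcement. The server model, the bridging policy, and the "pause" action are hypothetical stand-ins for VIM APIs; this is not Copper or Congress code:

```python
# Illustrative sketch only: one polling cycle combining polled monitoring,
# violation reporting, and reactive enforcement. The Server model, the
# bridging policy, and the "pause" action are hypothetical stand-ins for
# VIM APIs (e.g. Nova); this is not Copper or Congress code.

from dataclasses import dataclass, field

@dataclass
class Server:
    id: str
    networks: set = field(default_factory=set)
    status: str = "ACTIVE"

def violates_bridging_policy(server):
    # Generic policy: a server may not bridge the DMZ and Admin networks.
    return {"DMZ", "Admin"} <= server.networks

def poll_and_enforce(servers):
    # Polled monitoring: inspect the current state on request.
    violations = []
    for s in servers:
        if s.status == "ACTIVE" and violates_bridging_policy(s):
            violations.append(s.id)  # violation reporting
            s.status = "PAUSED"      # reactive enforcement
    return violations

servers = [Server("vm-1", networks={"DMZ", "Admin"}),
           Server("vm-2", networks={"DMZ"})]
print(poll_and_enforce(servers))  # ['vm-1']
```

Notification-based and proactive variants differ only in when the check runs: on a state-change event, or before the state change is committed.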
-
-Upstream projects already include multiple ways in which configuration conditions
-can be monitored and responded to:
-
- * OpenStack `Congress <http://docs.openstack.org/developer/congress/index.html>`_ provides a
- table-based mechanism for state monitoring and proactive/reactive policy
- enforcement, including data obtained from internal databases of OpenStack
- core and optional services. The Congress design approach is also extensible
- to other VIMs (e.g. SDNCs) through development of data source drivers for
- the new monitored state information.
- * OpenStack `Aodh <https://wiki.openstack.org/wiki/Telemetry#Aodh>`_
- provides means to trigger alarms upon a wide variety of conditions derived
- from its monitored OpenStack analytics.
- * `Nagios <https://www.nagios.org/#/>`_ "offers complete monitoring and alerting for servers, switches, applications, and services".
-
-Requirements Validation Approach
-++++++++++++++++++++++++++++++++
-The Copper project will assess the completeness of the upstream project solutions
-for requirements in scope through a process of:
-
- * developing configuration policy use cases to focus solution assessment tests
- * integrating the projects into the OPNFV platform for testing
- * executing functional and performance tests for the solutions
- * assessing overall requirements coverage and gaps in the most complete
- upstream solutions
-
-Depending upon the priority of discovered gaps, new requirements will be
-submitted to upstream projects for the next available release cycle.
diff --git a/docs/design/usecases.rst b/docs/design/usecases.rst
deleted file mode 100644
index 431590d..0000000
--- a/docs/design/usecases.rst
+++ /dev/null
@@ -1,330 +0,0 @@
-.. This work is licensed under a
-.. Creative Commons Attribution 4.0 International License.
-.. http://creativecommons.org/licenses/by/4.0
-.. (c) 2015-2017 AT&T Intellectual Property, Inc
-
-Use Cases
-=========
-
-Implemented in Current Release
-------------------------------
-
-Network Bridging
-................
-
-As a service provider, I need to prevent tenants from bridging networks I have
-created with different security levels, e.g. a DMZ network and an Admin
-network.
-
-An example implementation is shown in the Congress use case test "Network
-Bridging" (bridging.sh) in the Copper repo under the tests folder. This test:
-
- * Identifies VMs that are connected to Service Provider (SP) defined networks via floating IPs
- * Identifies VMs that are connected to two such networks with different security levels
- * For VMs that are thus connected, identifies those that are not owned by the Service Provider
- * Reactively enforces the network bridging rule by pausing VMs found to be in violation of the policy
-
-Note the assumptions related to the following example:
-
- * "SP" is the service provider tenant, and only the SP can create tenants
-
-As implemented through OpenStack Congress:
-
-.. code::
-
- sp_dmz_connected(x) :-
- nova:floating_ips(fixed_ip, id, ip=y, instance_id=x, pool),
- neutronv2:floating_ips(id, router_id, tenant_id, floating_network_id=z,
- fixed_ip_address, floating_ip_address=y, port_id, status),
- neutronv2:networks(id=z, tenant_id=w, name="DMZ", status, admin_state_up, shared),
- keystone:tenants(enabled, name="SP", id=w)
-
- sp_admin_connected(x) :-
- nova:floating_ips(fixed_ip, id, ip=y, instance_id=x, pool),
- neutronv2:floating_ips(id, router_id, tenant_id, floating_network_id=z,
- fixed_ip_address, floating_ip_address=y, port_id, status),
- neutronv2:networks(id=z, tenant_id=w, name="Admin", status, admin_state_up, shared),
- keystone:tenants(enabled, name="SP", id=w)
-
-    dmz_admin_connected(x) :-
-      sp_dmz_connected(x), sp_admin_connected(x)
-
-    dmz_admin_bridging_error(id) :-
-      nova:servers(id,name,hostId,status,tenant_id=x,user_id,image,flavor,az,hh),
-      dmz_admin_connected(id),
-      not keystone:tenants(enabled, name="SP", id=x)
-
- execute[nova:servers.pause(id)] :-
- dmz_admin_bridging_error(id),
- nova:servers(id,status='ACTIVE')
-
-DMZ Deployment
-..............
-
-As a service provider, I need to ensure that applications which have not been
-designed for exposure in a DMZ zone are not attached to DMZ networks.
-
-An example implementation is shown in the Congress use case test "DMZ Placement"
-(dmz.sh) in the Copper repo under the tests folder. This test:
-
- * Identifies VMs connected to a DMZ (currently identified through a specifically-named security group)
- * Identifies VMs connected to a DMZ, which are by policy not allowed to be (currently implemented through an image tag intended to identify images that are "authorized" i.e. tested and secure, to be DMZ-connected)
- * Reactively enforces the dmz placement rule by pausing VMs found to be in violation of the policy.
-
-As implemented through OpenStack Congress:
-
-.. code::
-
- dmz_server(x) :-
- nova:servers(id=x,status='ACTIVE'),
- neutronv2:ports(id, device_id, status='ACTIVE'),
- neutronv2:security_group_port_bindings(id, sg),
-      neutronv2:security_groups(sg,name='dmz')
-
- dmz_placement_error(id) :-
- nova:servers(id,name,hostId,status,tenant_id,user_id,image,flavor,az,hh),
- not glancev2:tags(image,'dmz'),
-      dmz_server(id)
-
- execute[nova:servers.pause(id)] :-
- dmz_placement_error(id),
-      nova:servers(id,status='ACTIVE')
-
-Configuration Auditing
-......................
-
-As a service provider or tenant, I need to periodically verify that resource
-configuration requirements have not been violated, as a backup means to proactive
-or reactive policy enforcement.
-
-An example implementation is shown in the Congress use case test "SMTP Ingress"
-(smtp_ingress.sh) in the Copper repo under the tests folder. This test:
-
- * Detects that a VM is associated with a security group that allows SMTP
- ingress (TCP port 25)
- * Adds a policy table row entry for the VM, which can be later investigated
- for appropriate use of the security group
-
-As implemented through OpenStack Congress:
-
-.. code::
-
- smtp_ingress(x) :-
- nova:servers(id=x,status='ACTIVE'),
- neutronv2:ports(port_id, status='ACTIVE'),
- neutronv2:security_groups(sg, tenant_id, sgn, sgd),
- neutronv2:security_group_port_bindings(port_id, sg),
- neutronv2:security_group_rules(sg, rule_id, tenant_id, remote_group_id,
- 'ingress', ethertype, 'tcp', port_range_min, port_range_max, remote_ip),
- lt(port_range_min, 26),
- gt(port_range_max, 24)
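The final two conditions of the rule above (``lt(port_range_min, 26)`` and ``gt(port_range_max, 24)``) are simply an interval test that the rule's port range covers TCP port 25. A standalone sketch of the same check (illustrative Python, not Congress builtins):

```python
# Illustrative sketch only: the lt/gt pair in the Congress rule above is
# an interval test that a security-group rule's port range covers SMTP
# (TCP port 25). Plain Python, not Congress code.

def covers_port(port_range_min, port_range_max, port=25):
    # lt(port_range_min, 26), gt(port_range_max, 24)
    # is equivalent to: port_range_min <= 25 <= port_range_max
    return port_range_min < port + 1 and port_range_max > port - 1

print(covers_port(20, 30))  # True: the range 20-30 includes port 25
print(covers_port(80, 80))  # False: an HTTP-only rule does not
```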
-
-Reserved Resources
-..................
-
-As an NFV Infrastructure provider, I need to ensure that my admins do not inadvertently
-enable VMs to connect to reserved subnets.
-
-An example implementation is shown in the Congress use case test "Reserved Subnet"
-(reserved_subnet.sh) in the Copper repo under the tests folder. This test:
-
- * Detects that a subnet has been created in a reserved range
- * Reactively deletes the subnet
-
-As implemented through OpenStack Congress:
-
-.. code::
-
- reserved_subnet_error(x) :-
- neutronv2:subnets(id=x, cidr='10.7.1.0/24')
-
- execute[neutronv2:delete_subnet(x)] :-
- reserved_subnet_error(x)
-
-
-For Further Analysis and Implementation
----------------------------------------
-
-Affinity
-........
-
-Ensures that the VM instance is launched "with affinity to" specific resources,
-e.g. within a compute or storage cluster. Examples include: "Same Host Filter",
-i.e. place on the same compute node as a given set of instances, e.g. as defined
-in a scheduler hint list.
-
-As implemented by OpenStack Heat using server groups:
-
-*Note: untested example...*
-
-.. code::
-
- resources:
- servgrp1:
- type: OS::Nova::ServerGroup
- properties:
- policies:
- - affinity
-    serv1:
-      type: OS::Nova::Server
-      properties:
-        image: { get_param: image }
-        flavor: { get_param: flavor }
-        networks:
-          - network: {get_param: network}
-        scheduler_hints: {group: {get_resource: servgrp1}}
-    serv2:
-      type: OS::Nova::Server
-      properties:
-        image: { get_param: image }
-        flavor: { get_param: flavor }
-        networks:
-          - network: {get_param: network}
-        scheduler_hints: {group: {get_resource: servgrp1}}
-
-Anti-Affinity
-.............
-
-Ensures that the VM instance is launched "with anti-affinity to" specific resources,
-e.g. outside a compute or storage cluster, or geographic location.
-Examples include: "Different Host Filter", i.e. ensures that the VM instance is
-launched on a different compute node from a given set of instances, as defined
-in a scheduler hint list.
-
-As implemented by OpenStack Heat using scheduler hints:
-
-*Note: untested example...*
-
-.. code::
-
-  heat_template_version: 2013-05-23
- parameters:
- image:
- type: string
- default: TestVM
- flavor:
- type: string
- default: m1.micro
- network:
- type: string
- default: cirros_net2
- resources:
- serv1:
- type: OS::Nova::Server
- properties:
- image: { get_param: image }
- flavor: { get_param: flavor }
- networks:
- - network: {get_param: network}
- scheduler_hints: {different_host: {get_resource: serv2}}
- serv2:
- type: OS::Nova::Server
- properties:
- image: { get_param: image }
- flavor: { get_param: flavor }
- networks:
- - network: {get_param: network}
- scheduler_hints: {different_host: {get_resource: serv1}}
-
-Network Access Control
-......................
-
-Networks connected to VMs must be public or owned by someone in the VM owner's group.
-
-This use case captures the intent of the following sub-use-cases:
-
- * Link Mirroring: As a troubleshooter,
- I need to mirror traffic from physical or virtual network ports so that I
- can investigate trouble reports.
- * Link Mirroring: As an NFVaaS tenant,
- I need to be able to mirror traffic on my virtual network ports so that I
- can investigate trouble reports.
- * Unauthorized Link Mirroring Prevention: As an NFVaaS tenant,
- I need to be able to prevent other tenants from mirroring traffic on my
- virtual network ports so that I can protect the privacy of my service users.
- * Link Mirroring Delegation: As an NFVaaS tenant,
- I need to be able to allow my NFVaaS SP customer support to mirror traffic
- on my virtual network ports so that they can assist in investigating trouble
- reports.
-
-As implemented through OpenStack Congress:
-
-*Note: untested example...*
-
-.. code::
-
- error :-
- nova:vm(vm),
- neutron:network(network),
- nova:network(vm, network),
- neutron:private(network),
- nova:owner(vm, vm-own),
- neutron:owner(network, net-own),
- -same-group(vm-own, net-own)
-
- same-group(user1, user2) :-
- ldap:group(user1, g),
- ldap:group(user2, g)
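The ``same-group`` predicate above holds when some LDAP group contains both users. A standalone sketch of that check (illustrative Python with a hypothetical group table, not the Congress LDAP driver):

```python
# Illustrative sketch only: same-group(user1, user2) holds when some
# group g contains both users. The group table is a hypothetical
# stand-in for the Congress LDAP data source.

groups = {
    "admins":  {"alice", "bob"},
    "tenants": {"carol"},
}

def same_group(user1, user2):
    # exists g: group(user1, g) and group(user2, g)
    return any(user1 in members and user2 in members
               for members in groups.values())

print(same_group("alice", "bob"))    # True: both in "admins"
print(same_group("alice", "carol"))  # False: no shared group
```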
-
-
-Storage Access Control
-......................
-
-Storage resources connected to VMs must be owned by someone in the VM owner's group.
-
-As implemented through OpenStack Congress:
-
-*Note: untested example...*
-
-.. code::
-
- error :-
- nova:vm(vm),
- cinder:volumes(volume),
- nova:volume(vm, volume),
- nova:owner(vm, vm-own),
-      cinder:owner(volume, vol-own),
- -same-group(vm-own, vol-own)
-
- same-group(user1, user2) :-
- ldap:group(user1, g),
- ldap:group(user2, g)
-
-Resource Reclamation
-....................
-
-As a service provider or tenant, I need to be informed of VMs that are under-utilized
-so that I can reclaim the VI resources. (example from `RuleYourCloud blog <http://ruleyourcloud.com/2015/03/12/scaling-up-congress.html>`_)
-
-As implemented through OpenStack Congress:
-
-*Note: untested example...*
-
-.. code::
-
- reclaim_server(vm) :-
- ceilometer:stats("cpu_util",vm, avg_cpu),
- lessthan(avg_cpu, 1)
-
- error(user_id, email, vm_name) :-
- reclaim_server(vm),
- nova:servers(vm, vm_name, user_id),
- keystone:users(user_id, email)
-
-Resource Use Limits
-...................
-
-As a tenant or service provider, I need to automatically terminate an instance
-that has run for a pre-agreed maximum duration.
-
-As implemented through OpenStack Congress:
-
-*Note: untested example...*
-
-.. code::
-
-    terminate_server(vm) :-
-      ceilometer:statistics("duration", vm, duration),
-      gt(duration, 86400)
-
-    error(user_id, email, vm_name) :-
-      terminate_server(vm),
- nova:servers(vm, vm_name, user_id),
- keystone:users(user_id, email)