From f033459b5fbb9c354ba61682a462c7a95f999421 Mon Sep 17 00:00:00 2001 From: Ryota MIBU Date: Mon, 8 Feb 2016 21:12:10 +0900 Subject: fix docs - doc8 violations - use latest index.rst format - move userguide.rst to featureusage.rst Change-Id: I73e0a8a45418e197b973537d4f2856bd98c9a06c Signed-off-by: Ryota MIBU (cherry picked from commit 7dee5427d770340526afc7ffa642d223b53cfdf1) --- docs/configguide/featureconfig.rst | 10 ++++----- docs/configguide/postinstall.rst | 4 +++- docs/design/architecture.rst | 34 ++++++++++++++++++++-------- docs/design/index.rst | 18 ++------------- docs/design/introduction.rst | 31 ++++++++++++++++++++----- docs/design/requirements.rst | 43 ++++++++++++++++++++++++----------- docs/design/usecases.rst | 46 +++++++++++++++++++++++++++++--------- docs/userguide/featureusage.rst | 5 +++++ docs/userguide/userguide.rst | 5 ----- 9 files changed, 131 insertions(+), 65 deletions(-) create mode 100644 docs/userguide/featureusage.rst delete mode 100644 docs/userguide/userguide.rst (limited to 'docs') diff --git a/docs/configguide/featureconfig.rst b/docs/configguide/featureconfig.rst index 99bafd5..1b3dd5a 100644 --- a/docs/configguide/featureconfig.rst +++ b/docs/configguide/featureconfig.rst @@ -117,7 +117,7 @@ Clone congress git clone https://github.com/openstack/congress.git cd congress git checkout stable/liberty - + Create virtualenv ................. @@ -362,8 +362,8 @@ Run Congress Tempest Tests Restarting after server power loss etc ...................................... -Currently this install procedure is manual. Automated install and restoral \ -after host recovery is TBD. For now, this procedure will get the Congress \ +Currently this install procedure is manual. Automated install and restoral +after host recovery is TBD. For now, this procedure will get the Congress service running again. .. code:: @@ -377,9 +377,9 @@ service running again. sudo lxc-start -n juju-trusty-congress -d # Verify the Congress container status sudo lxc-ls -f juju-trusty-congress - NAME STATE IPV4 IPV6 GROUPS AUTOSTART + NAME STATE IPV4 IPV6 GROUPS AUTOSTART ---------------------------------------------------------------------- - juju-trusty-congress RUNNING 192.168.10.117 - - NO + juju-trusty-congress RUNNING 192.168.10.117 - - NO # exit back to the Jumphost, wait a minute, and go back to the \ "SSH to Congress server" step above # On the Congress server that you have logged into diff --git a/docs/configguide/postinstall.rst b/docs/configguide/postinstall.rst index 4fd8148..9252d95 100644 --- a/docs/configguide/postinstall.rst +++ b/docs/configguide/postinstall.rst @@ -207,4 +207,6 @@ Using the Test Webapp ..................... Browse to the trusty-copper server IP address. -Interactive options are meant to be self-explanatory given a basic familiarity with the Congress service and data model. But the app will be developed with additional features and UI elements. +Interactive options are meant to be self-explanatory given a basic familiarity +with the Congress service and data model. +But the app will be developed with additional features and UI elements. 
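As a quick sanity check once the Congress container is running again, the API can be queried from the
jumphost using the python-congressclient OpenStack client plugin. This is a minimal sketch; it assumes the
client plugin is installed on the jumphost and that admin credentials have been sourced (the openrc path
below is only an example).

.. code::

    # On the jumphost: verify the Congress API answers after the restart sequence above
    source ~/admin-openrc.sh            # assumed location of admin credentials
    openstack congress datasource list  # configured datasources (nova, neutron, ...) should appear
    openstack congress policy list      # the built-in "classification" policy should be listed
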
diff --git a/docs/design/architecture.rst b/docs/design/architecture.rst index d5ab9ff..d55c208 100644 --- a/docs/design/architecture.rst +++ b/docs/design/architecture.rst @@ -3,7 +3,13 @@ Architecture Architectural Concept --------------------- -The following example diagram illustrates a "relationship diagram" type view of an NFVI platform, in which the roles of components focused on policy management, services, and infrastructure are shown. This view illustrates that a large-scale deployment of NFVI may leverage multiple components of the same "type" (e.g. SDN Controller), which fulfill specific purposes for which they are optimized. For example, a global SDN controller and cloud orchestrator can act as directed by a service orchestrator in the provisioning of VNFs per intent, while various components at a local and global level handle policy-related events directly and/or feed them back through a closed-loop policy design that responds as needed, directly or through the service orchestrator. +The following example diagram illustrates a "relationship diagram" type view of an NFVI platform, +in which the roles of components focused on policy management, services, and infrastructure are shown. +This view illustrates that a large-scale deployment of NFVI may leverage multiple components of the same "type" +(e.g. SDN Controller), which fulfill specific purposes for which they are optimized. For example, a global SDN +controller and cloud orchestrator can act as directed by a service orchestrator in the provisioning of VNFs per +intent, while various components at a local and global level handle policy-related events directly and/or feed +them back through a closed-loop policy design that responds as needed, directly or through the service orchestrator. .. image:: ./images/policy_architecture.png :width: 700 px @@ -17,26 +23,36 @@ Architectural Aspects * Policies are reflected in two high-level goals * Ensure resource requirements of VNFs and services are applied per VNF designer, service, and tenant intent - * Ensure that generic policies are not violated, e.g. *networks connected to VMs must either be public or owned by the VM owner* + * Ensure that generic policies are not violated, + e.g. 
*networks connected to VMs must either be public or owned by the VM owner* * Policies are distributed through two main means - * As part of VNF packages, customized if needed by Service Design tools, expressing intent of the VNF designer and service provider, and possibly customized or supplemented by service orchestrators per the intent of specific tenants - * As generic policies provisioned into VIMs (SDN controllers and cloud orchestrators), expressing intent of the service provider re what states/events need to be policy-governed independently of specific VNFs + * As part of VNF packages, customized if needed by Service Design tools, expressing intent of the VNF designer and + service provider, and possibly customized or supplemented by service orchestrators per the intent of specific + tenants + * As generic policies provisioned into VIMs (SDN controllers and cloud orchestrators), expressing intent of the + service provider re what states/events need to be policy-governed independently of specific VNFs - * Policies are applied locally and in closed-loop systems per the capabilities of the local policy enforcer and the impact of the related state/event conditions + * Policies are applied locally and in closed-loop systems per the capabilities of the local policy enforcer and + the impact of the related state/event conditions * VIMs should be able to execute most policies locally - * VIMs may need to pass policy-related state/events to a closed-loop system, where those events are relevant to other components in the architecture (e.g. service orchestrator), or some additional data/arbitration is needed to resolve the state/event condition + * VIMs may need to pass policy-related state/events to a closed-loop system, + where those events are relevant to other components in the architecture (e.g. service orchestrator), + or some additional data/arbitration is needed to resolve the state/event condition * Policies are localized as they are distributed/delegated - * High-level policies (e.g. expressing “intent”) can be translated into VNF package elements or generic policies, perhaps using distinct syntaxes - * Delegated policy syntaxes are likely VIM-specific, e.g. Datalog (Congress), YANG (ODL-based SDNC), or other schemas specific to other SDNCs (Contrail, ONOS) + * High-level policies (e.g. expressing "intent") can be translated into VNF package elements or generic policies, + perhaps using distinct syntaxes + * Delegated policy syntaxes are likely VIM-specific, e.g. Datalog (Congress), YANG (ODL-based SDNC), + or other schemas specific to other SDNCs (Contrail, ONOS) * Closed-loop policy and VNF-lifecycle event handling are //somewhat// distinct - * Closed-loop policy is mostly about resolving conditions that can't be handled locally, but as above in some cases the conditions may be of relevance and either delivered directly or forwarded to service orchestrators + * Closed-loop policy is mostly about resolving conditions that can't be handled locally, but as above in some cases + the conditions may be of relevance and either delivered directly or forwarded to service orchestrators * VNF-lifecycle events that can't be handled by the VIM locally are delivered directly to the service orchestrator * Some events/analytics need to be collected into a more "open-loop" system which can enable other actions, e.g. diff --git a/docs/design/index.rst b/docs/design/index.rst index 9423b31..19e2262 100644 --- a/docs/design/index.rst +++ b/docs/design/index.rst @@ -1,12 +1,6 @@ -.. 
Copper documentation master file, created by - sphinx-quickstart on Tue Jun 9 19:12:31 2015. - You can adapt this file completely to your liking, but it should at least - contain the root `toctree` directive. - +******************** OPNFV Copper Project -==================== - -Contents: +******************** .. toctree:: :numbered: @@ -17,11 +11,3 @@ Contents: usecases.rst architecture.rst requirements.rst - -Indices and tables -================== - -* :ref:`genindex` -* :ref:`modindex` -* :ref:`search` - diff --git a/docs/design/introduction.rst b/docs/design/introduction.rst index f82d47e..a7cbb02 100644 --- a/docs/design/introduction.rst +++ b/docs/design/introduction.rst @@ -9,12 +9,22 @@ Introduction .. NOTE:: This is the working documentation for the Copper project. -The `OPNFV Copper `_ project aims to help ensure that virtualized infrastructure deployments comply with goals of the VNF designer/user, e.g. re affinity and partitioning (e.g. per regulation, control/user plane separation, cost…). This is a "requirements" project with initial goal to assess "off the shelf" basic OPNFV platform support for policy management, using existing open source projects such as OpenStack Congress and OpenDaylight Group-Based Policy (GBP). The project will assess what policy-related features are currently supported through research into the related projects in OpenStack and ODL, and testing of integrated vanilla distributions of those and other dependent open source projects in the OPNFV’s NFVI platform scope. +The `OPNFV Copper `_ project aims to help ensure that virtualized infrastructure +deployments comply with goals of the VNF designer/user, e.g. re affinity and partitioning (e.g. per regulation, +control/user plane separation, cost...). +This is a "requirements" project with initial goal to assess "off the shelf" basic OPNFV platform support for policy +management, using existing open source projects such as OpenStack Congress and OpenDaylight Group-Based Policy (GBP). +The project will assess what policy-related features are currently supported through research into the related projects +in OpenStack and ODL, and testing of integrated vanilla distributions of those and other dependent open source projects +in the OPNFV's NFVI platform scope. Configuration Policy -------------------- -As focused on by Copper, configuration policy helps ensure that the NFV service environment meets the requirements of the variety of stakeholders which will provide or use NFV platforms. These requirements can be expressed as an *intent* of the stakeholder, in specific terms or more abstractly, but at the highest level they express: +As focused on by Copper, configuration policy helps ensure that the NFV service environment meets the requirements of +the variety of stakeholders which will provide or use NFV platforms. +These requirements can be expressed as an *intent* of the stakeholder, +in specific terms or more abstractly, but at the highest level they express: * what I want * what I don't want @@ -38,7 +48,13 @@ Using road-based transportation as an analogy, some examples of this are shown b - shoulder warning strips, center media barriers - speeding, tractors on the freeway -According to their role, service providers may apply more specific configuration requirements than users, since service providers are more likely to be managing specific types of infrastructure capabilities. 
Developers and users may also express their requirements more specifically, based upon the type of application or how the user intends to use it. For users, a high-level intent can be also translated into a more or less specific configuration capability by the service provider, taking into consideration aspects such as the type of application or its constraints. Examples of such translation are: +According to their role, service providers may apply more specific configuration requirements than users, +since service providers are more likely to be managing specific types of infrastructure capabilities. +Developers and users may also express their requirements more specifically, +based upon the type of application or how the user intends to use it. +For users, a high-level intent can be also translated into a more or less specific configuration capability +by the service provider, taking into consideration aspects such as the type of application or its constraints. +Examples of such translation are: .. list-table:: Intent Translation into Configuration Capability :widths: 40 60 @@ -61,12 +77,17 @@ According to their role, service providers may apply more specific configuration * - resource reclamation - low-usage monitoring -Although such intent to capability translation is conceptually useful, it is unclear how it can address the variety of aspects that may affect the choice of an applicable configuration capability. For that reason, the Copper project will initially focus on more specific configuration requirements as fulfilled by specific configuration capabilities, and how those requirements and capabilities are expressed in VNF and service design and packaging, or as generic poicies for the NFVI. +Although such intent to capability translation is conceptually useful, it is unclear how it can address the variety of +aspects that may affect the choice of an applicable configuration capability. +For that reason, the Copper project will initially focus on more specific configuration requirements as fulfilled by +specific configuration capabilities, and how those requirements and capabilities are expressed in VNF and service +design and packaging, or as generic poicies for the NFVI. Release 1 Scope --------------- OPNFV Brahmaputra will be the initial OPNFV release for Copper, with the goals: * Add the OpenStack Congress service to OPNFV, through at least one installer project - * If possible, add Congress support to the OPNFV CI/CD pipeline for all Genesis project installers (Apex, Fuel, JOID, Compass) + * If possible, add Congress support to the OPNFV CI/CD pipeline for all Genesis project installers + (Apex, Fuel, JOID, Compass) * Integrate Congress tests into Functest and develop additional use case tests for post-OPNFV-install * Extend with other OpenStack components for testing, as time permits diff --git a/docs/design/requirements.rst b/docs/design/requirements.rst index 61e9e08..a9b5bc0 100644 --- a/docs/design/requirements.rst +++ b/docs/design/requirements.rst @@ -1,17 +1,24 @@ Requirements ============ -This section outlines general requirements for configuration policies, per the two main aspects in the Copper project scope: +This section outlines general requirements for configuration policies, +per the two main aspects in the Copper project scope: * Ensuring resource requirements of VNFs and services are applied per VNF designer, service, and tenant intent - * Ensuring that generic policies are not violated, e.g. 
*networks connected to VMs must either be public or owned by the VM owner* + * Ensuring that generic policies are not violated, + e.g. *networks connected to VMs must either be public or owned by the VM owner* Resource Requirements +++++++++++++++++++++ -Resource requirements describe the characteristics of virtual resources (compute, storage, network) that are needed for VNFs and services, and how those resources should be managed over the lifecycle of a VNF/service. Upstream projects already include multiple ways in which resource requirements can be expressed and fulfilled, e.g.: +Resource requirements describe the characteristics of virtual resources (compute, storage, network) that are needed for +VNFs and services, and how those resources should be managed over the lifecycle of a VNF/service. Upstream projects +already include multiple ways in which resource requirements can be expressed and fulfilled, e.g.: * OpenStack Nova - * the `image `_ feature, enabling "VM templates" to be defined for NFs, and referenced by name as a specific NF version to be used - * the `flavor `_ feature, addressing basic compute and storage requirements, with extensibility for custom attributes + * the `image `_ feature, enabling + "VM templates" to be defined for NFs, and referenced by name as a specific NF version to be used + * the `flavor `_ feature, addressing basic compute + and storage requirements, with extensibility for custom attributes * OpenStack Heat - * the `Heat Orchestration Template `_ feature, enabling a variety of VM aspects to be defined and managed by Heat throughout the VM lifecycle, notably + * the `Heat Orchestration Template `_ feature, + enabling a variety of VM aspects to be defined and managed by Heat throughout the VM lifecycle, notably * alarm handling (requires `Ceilometer `_) * attached volumes (requires `Cinder `_) * domain name assignment (requires `Designate `_) @@ -31,7 +38,8 @@ Resource requirements describe the characteristics of virtual resources (compute * DBaaS (requires `Trove `_) * "multi-tenant cloud messaging and notification service" (requires `Zaqar `_) * OpenStack `Group-Based Policy `_ - * API-based grouping of endpoints with associated contractual expectations for data flow processing and service chaining + * API-based grouping of endpoints with associated contractual expectations for data flow processing and + service chaining * OpenStack `Tacker `_ * "a fully functional ETSI MANO based general purpose NFV Orchestrator and VNF Manager for OpenStack" * OpenDaylight `Group-Based Policy `_ @@ -46,27 +54,36 @@ Resource requirements describe the characteristics of virtual resources (compute Generic Policy Requirements +++++++++++++++++++++++++++ -Generic policy requirements address conditions related to resource state and events which need to be monitored for, and optionally responded to or prevented. These conditions are typically expected to be VNF/service-independent, as VNF/service-dependent condition handling (e.g. scale in/out) are considered to be addressed by VNFM/NFVO/VIM functions as described under Resource Requirements or as FCAPS related functions. However the general capabilities below can be applied to VNF/service-specific policy handling as well, or in particular to invocation of VNF/service-specific management/orchestration actions. The high-level required capabilities include: +Generic policy requirements address conditions related to resource state and events which need to be monitored for, +and optionally responded to or prevented. 
These conditions are typically expected to be VNF/service-independent, +as VNF/service-dependent condition handling (e.g. scale in/out) are considered to be addressed by VNFM/NFVO/VIM +functions as described under Resource Requirements or as FCAPS related functions. However the general capabilities +below can be applied to VNF/service-specific policy handling as well, or in particular to invocation of +VNF/service-specific management/orchestration actions. The high-level required capabilities include: * Polled monitoring: Exposure of state via request-response APIs. * Notifications: Exposure of state via pub-sub APIs. * Realtime/near-realtime notifications: Notifications that occur in actual or near realtime. - * Delegated policy: CRUD operations on policies that are distributed to specific components for local handling, including one/more of monitoring, violation reporting, and enforcement. + * Delegated policy: CRUD operations on policies that are distributed to specific components for local handling, + including one/more of monitoring, violation reporting, and enforcement. * Violation reporting: Reporting of conditions that represent a policy violation. * Reactive enforcement: Enforcement actions taken in response to policy violation events. - * Proactive enforcement: Enforcement actions taken in advance of policy violation events, e.g. blocking actions that could result in a policy violation. + * Proactive enforcement: Enforcement actions taken in advance of policy violation events, + e.g. blocking actions that could result in a policy violation. * Compliance auditing: Periodic auditing of state against policies. - Upstream projects already include multiple ways in which configuration conditions can be monitored and responded to: +Upstream projects already include multiple ways in which configuration conditions can be monitored and responded to: * OpenStack `Congress `_ provides a table-based mechanism for state monitoring and proactive/reactive policy enforcement, including (as of the Kilo release) data obtained from internal databases of Nova, Neutron, Ceilometer, Cinder, Glance, Keystone, and Swift. The Congress design approach is also extensible to other VIMs (e.g. SDNCs) through development of data source drivers for the new monitored state information. See `Stackforge Congress Data Source Translators `_, `congress.readthedocs.org `_, and the `Congress specs `_ for more info. * OpenStack `Ceilometer `_ provides means to trigger alarms upon a wide variety of conditions derived from its monitored OpenStack analytics. * `Nagios `_ "offers complete monitoring and alerting for servers, switches, applications, and services". Requirements Validation Approach ++++++++++++++++++++++++++++++++ -The Copper project will assess the completeness of the upstream project solutions for requirements in scope though a process of: +The Copper project will assess the completeness of the upstream project solutions for requirements in scope though +a process of: * developing configuration policy use cases to focus solution assessment tests * integrating the projects into the OPNFV platform for testing * executing functional and performance tests for the solutions * assessing overall requirements coverage and gaps in the most complete upstream solutions -Depending upon the priority of discovered gaps, new requirements will be submitted to upstream projects for the next available release cycle. 
+Depending upon the priority of discovered gaps, new requirements will be submitted to upstream projects for the next +available release cycle. diff --git a/docs/design/usecases.rst b/docs/design/usecases.rst index e37aa17..ef9e82d 100644 --- a/docs/design/usecases.rst +++ b/docs/design/usecases.rst @@ -10,7 +10,12 @@ Workload Placement Affinity ........ -Ensures that the VM instance is launched "with affinity to" specific resources, e.g. within a compute or storage cluster. This is analogous to the affinity rules in `VMWare vSphere DRS `_. Examples include: "Same Host Filter", i.e. place on the same compute node as a given set of instances, e.g. as defined in a scheduler hint list. +Ensures that the VM instance is launched "with affinity to" specific resources, +e.g. within a compute or storage cluster. +This is analogous to the affinity rules in +`VMWare vSphere DRS `_. +Examples include: "Same Host Filter", i.e. place on the same compute node as a given set of instances, +e.g. as defined in a scheduler hint list. As implemented by OpenStack Heat using server groups: @@ -42,7 +47,11 @@ As implemented by OpenStack Heat using server groups: Anti-Affinity ............. -Ensures that the VM instance is launched "with anti-affinity to" specific resources, e.g. outside a compute or storage cluster, or geographic location. This filter is analogous to the anti-affinity rules in vSphere DRS. Examples include: "Different Host Filter", i.e. ensures that the VM instance is launched on a different compute node from a given set of instances, as defined in a scheduler hint list. +Ensures that the VM instance is launched "with anti-affinity to" specific resources, +e.g. outside a compute or storage cluster, or geographic location. +This filter is analogous to the anti-affinity rules in vSphere DRS. +Examples include: "Different Host Filter", i.e. ensures that the VM instance is launched +on a different compute node from a given set of instances, as defined in a scheduler hint list. As implemented by OpenStack Heat using scheduler hints: @@ -81,12 +90,16 @@ As implemented by OpenStack Heat using scheduler hints: DMZ Deployment .............. -As a service provider, I need to ensure that applications which have not been designed for exposure in a DMZ zone, are not attached to DMZ networks. +As a service provider, +I need to ensure that applications which have not been designed for exposure in a DMZ zone, +are not attached to DMZ networks. Configuration Auditing ---------------------- -As a service provider or tenant, I need to periodically verify that resource configuration requirements have not been violated, as a backup means to proactive or reactive policy enforcement. +As a service provider or tenant, +I need to periodically verify that resource configuration requirements have not been violated, +as a backup means to proactive or reactive policy enforcement. Generic Policy Requirements +++++++++++++++++++++++++++ @@ -94,7 +107,9 @@ Generic Policy Requirements NFVI Self-Service Constraints ----------------------------- -As an NFVI provider, I need to ensure that my self-service tenants are not able to configure their VNFs in ways that would impact other tenants or the reliability, security, etc of the NFVI. +As an NFVI provider, +I need to ensure that my self-service tenants are not able to configure their VNFs in ways +that would impact other tenants or the reliability, security, etc of the NFVI. Network Access Control ...................... 
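The anti-affinity intent expressed in the Heat snippets above can also be exercised directly from the
command line. The following is an illustrative sketch using the Liberty-era nova CLI; the image and flavor
names are placeholders, and <GROUP_UUID> stands for the id returned by the first command.

.. code::

    # Create a server group whose policy forces members onto different hosts
    nova server-group-create anti-group anti-affinity
    # Boot two instances into the group via a scheduler hint;
    # the filter scheduler will refuse to co-locate them.
    # A --nic net-id=<NET_UUID> option may be needed if more than one network exists.
    nova boot --image cirros-0.3.4 --flavor m1.tiny --hint group=<GROUP_UUID> vm1
    nova boot --image cirros-0.3.4 --flavor m1.tiny --hint group=<GROUP_UUID> vm2
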
@@ -103,10 +118,16 @@ Networks connected to VMs must be public, or owned by someone in the VM owner's This use case captures the intent of the following sub-use-cases: - * Link Mirroring: As a troubleshooter, I need to mirror traffic from physical or virtual network ports so that I can investigate trouble reports. - * Link Mirroring: As a NFVaaS tenant, I need to be able to mirror traffic on my virtual network ports so that I can investigate trouble reports. - * Unauthorized Link Mirroring Prevention: As a NFVaaS tenant, I need to be able to prevent other tenants from mirroring traffic on my virtual network ports so that I can protect the privacy of my service users. - * Link Mirroring Delegation: As a NFVaaS tenant, I need to be able to allow my NFVaaS SP customer support to mirror traffic on my virtual network ports so that they can assist in investigating trouble reports. + * Link Mirroring: As a troubleshooter, + I need to mirror traffic from physical or virtual network ports so that I can investigate trouble reports. + * Link Mirroring: As a NFVaaS tenant, + I need to be able to mirror traffic on my virtual network ports so that I can investigate trouble reports. + * Unauthorized Link Mirroring Prevention: As a NFVaaS tenant, + I need to be able to prevent other tenants from mirroring traffic on my virtual network ports + so that I can protect the privacy of my service users. + * Link Mirroring Delegation: As a NFVaaS tenant, + I need to be able to allow my NFVaaS SP customer support to mirror traffic on my virtual network ports + so that they can assist in investigating trouble reports. As implemented through OpenStack Congress: @@ -157,7 +178,9 @@ Resource Management Resource Reclamation .................... -As a service provider or tenant, I need to be informed of VMs that are under-utilized so that I can reclaim the VI resources. (example from `RuleYourCloud blog `_) +As a service provider or tenant, +I need to be informed of VMs that are under-utilized so that I can reclaim the VI resources. +(example from `RuleYourCloud blog `_) As implemented through OpenStack Congress: @@ -175,7 +198,8 @@ As implemented through OpenStack Congress: Resource Use Limits ................... -As a tenant or service provider, I need to be automatically terminate an instance that has run for a pre-agreed maximum duration. +As a tenant or service provider, +I need to be automatically terminate an instance that has run for a pre-agreed maximum duration. As implemented through OpenStack Congress: diff --git a/docs/userguide/featureusage.rst b/docs/userguide/featureusage.rst new file mode 100644 index 0000000..00bf78c --- /dev/null +++ b/docs/userguide/featureusage.rst @@ -0,0 +1,5 @@ +Copper capabilities and usage +============================= +This release focused on use of the OpenStack Congress service for managing +configuration policy. See the `Congress intro guide on readthedocs `_ for information on the capabilities and usage of Congress. + diff --git a/docs/userguide/userguide.rst b/docs/userguide/userguide.rst deleted file mode 100644 index 00bf78c..0000000 --- a/docs/userguide/userguide.rst +++ /dev/null @@ -1,5 +0,0 @@ -Copper capabilities and usage -============================= -This release focused on use of the OpenStack Congress service for managing -configuration policy. See the `Congress intro guide on readthedocs `_ for information on the capabilities and usage of Congress. - -- cgit 1.2.3-korg
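To complement the pointer to the Congress introductory guide, the following sketch shows how a simple
audit-style policy could be created and inspected through the OpenStack client plugin. The policy name,
rule, and column references are assumptions for illustration only; the actual table schemas of a deployment
should be confirmed first (e.g. with ``openstack congress datasource schema show nova``).

.. code::

    # Create a policy to hold the illustrative audit rule
    openstack congress policy create audit_sketch
    # Flag servers reported by the nova datasource as being in ERROR state
    openstack congress policy rule create audit_sketch \
        'error_vm(vm) :- nova:servers(id=vm, status="ERROR")'
    # List current violations (rows of the error_vm table)
    openstack congress policy row list audit_sketch error_vm
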