author    | blsaws <bryan.sullivan@att.com> | 2016-08-18 22:35:08 -0700
committer | blsaws <bryan.sullivan@att.com> | 2016-08-18 22:35:08 -0700
commit    | 816fc353daf07ffe8f2adc12d565c070fe4f3746 (patch)
tree      | 46c8fa7fca2e7514e5581bf2fd1dc73cbb620201
parent    | 4e504df222285d26bee5365c166d690ae6997353 (diff)
Update docs for Colorado
JIRA: COPPER-1
Change-Id: I2c0b1fa29c7129aceff6779ca48f7fb26f288063
Signed-off-by: blsaws <bryan.sullivan@att.com>
-rw-r--r-- | docs/configguide/featureconfig.rst | 105
-rw-r--r-- | docs/configguide/postinstall.rst   | 230
-rw-r--r-- | docs/design/architecture.rst       | 83
-rw-r--r-- | docs/design/definitions.rst        | 7
-rw-r--r-- | docs/design/introduction.rst       | 81
-rw-r--r-- | docs/design/requirements.rst       | 145
-rw-r--r-- | docs/design/usecases.rst           | 169
-rw-r--r-- | docs/userguide/featureusage.rst    | 7
8 files changed, 417 insertions, 410 deletions
diff --git a/docs/configguide/featureconfig.rst b/docs/configguide/featureconfig.rst index f871258..5c900d7 100644 --- a/docs/configguide/featureconfig.rst +++ b/docs/configguide/featureconfig.rst @@ -1,15 +1,13 @@ Copper configuration ==================== -This release focused on use of the OpenStack Congress service for managing -configuration policy. The Congress install procedure described here is largely -manual. This procedure, as well as the longer-term goal of automated installer -support, is a work in progress. The procedure is further specific to one OPNFV -installer (JOID, i.e. MAAS/JuJu) based environment. Support for other OPNFV -installer deployed environments is also a work in progress. +This release includes installer support for the OpenStack Congress service under +JOID and Apex installers. Congress is installed by default for all JOID and Apex +scenarios. Support for other OPNFV installer deployed environments is planned +for the next release. Pre-configuration activities ---------------------------- -This procedure assumes OPNFV has been installed via the JOID installer. +None required. Hardware configuration ---------------------- @@ -17,55 +15,60 @@ There is no specific hardware configuration required for the Copper project. Feature configuration --------------------- -Following are instructions for installing Congress on an Ubuntu 14.04 LXC -container in the OPNFV Controller node, as installed by the JOID installer. -This guide uses instructions from the -`Congress intro guide on readthedocs <http://congress.readthedocs.org/en/latest/readme.html#installing-congress|Congress>`_. -Specific values below will need to be modified if you intend to repeat this -procedure in your JOID-based install environment. -Install Procedure -................. -The install currently occurs via four bash scripts provided in the copper repo. 
See these files for the detailed steps: - * `install_congress_1.sh <https://git.opnfv.org/cgit/copper/tree/components/congress/joid/install_congress_1.sh>`_ - * creates and starts the linux container for congress on the controller node - * copies install_congress_2.sh to the controller node and invokes it via ssh - * `install_congress_2.sh <https://git.opnfv.org/cgit/copper/tree/components/congress/joid/install_congress_2.sh>`_ - * installs congress on the congress server. +OPNFV installer support +....................... -Cleanup Procedure -................. -If there is an error during installation, use the bash script -`clean_congress.sh <https://git.opnfv.org/cgit/copper/tree/components/congress/joid/clean_congress.sh>`_ -which stops the congress server if running, and removes the congress user and -service from the controller database. +The Congress service is automatically configured as required by the JOID and +Apex installers, including creation of datasources per the installed datasource +drivers. This release includes default support for the following datasource drivers: + * nova + * neutronv2 + * ceilometer + * cinder + * glancev2 + * keystone -Restarting after server power loss etc -...................................... +For JOID, Congress is installed through a JuJu Charm, and for Apex through a +Puppet Module. Both the Charm and Module are being upstreamed to OpenStack for +future maintenance. -Currently this install procedure is manual. Automated install and restoral after host -recovery is TBD. For now, this procedure will get the Congress service running again. +Other project installer support (e.g. Doctor) may install additional datasource +drivers once Congress is installed. + +Manual installation +................... + +NOTE: This section describes a manual install procedure that had been tested +under the JOID and Apex base installs, prior to the integration of native +installer support through JuJu (JOID) and Puppet (Apex). 
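Since Congress is installed by default for JOID and Apex scenarios, a quick sanity check can confirm that the default datasource drivers listed above were registered before any manual procedure is attempted. The following is a hedged sketch: `LIST_CMD` is stubbed for illustration, and in a real deployment it would be the congress client call shown in the comment, run with admin credentials sourced.

```shell
#!/bin/sh
# Sanity-check sketch: confirm the default Congress datasources are registered.
# LIST_CMD is stubbed here for illustration; on a real deployment replace it
# with the actual client call:
#   LIST_CMD="openstack congress datasource list"
LIST_CMD="echo nova neutronv2 ceilometer cinder glancev2 keystone"

check_datasources() {
    out=$($LIST_CMD) || return 1
    for ds in nova neutronv2 ceilometer cinder glancev2 keystone; do
        # grep -w matches whole words, so "nova" will not match "novaclient"
        if ! printf '%s\n' "$out" | grep -qw "$ds"; then
            echo "missing datasource: $ds"
            return 1
        fi
    done
    echo "all default datasources present"
}

check_datasources
```

With the stub in place the check succeeds; against a live deployment a missing driver is reported by name.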
This procedure is being +maintained as a basis for additional installer support in future releases. +However, since Congress is pre-installed for JOID and Apex, this procedure is not +necessary and not recommended for use if Congress is already installed. + +Copper provides a set of bash scripts to automatically install Congress based +upon a JOID or Apex install which does not already have Congress installed. +These scripts are in the Copper repo at: + * components/congress/install/bash/install_congress_1.sh + * components/congress/install/bash/install_congress_2.sh + +Prerequisites to using these scripts: + * OPNFV installed via JOID or Apex + * For Apex installs, on the jumphost, ssh to the undercloud VM and "su stack". + * For JOID installs, admin-openrc.sh saved from Horizon to ~/admin-openrc.sh + * Retrieve the copper install script as below, optionally specifying the branch + to use as a URL parameter, e.g. ?h=stable%2Fbrahmaputra + +To invoke the procedure, enter the following shell commands, optionally +specifying the branch identifier to use for OpenStack. ..
code:: - # On jumphost, SSH to Congress server - source ~/env.sh - juju ssh ubuntu@$CONGRESS_HOST - # If that fails - # On jumphost, SSH to controller node - juju ssh ubuntu@node1-control - # Start the Congress container - sudo lxc-start -n juju-trusty-congress -d - # Verify the Congress container status - sudo lxc-ls -f juju-trusty-congress - NAME STATE IPV4 IPV6 GROUPS AUTOSTART - ---------------------------------------------------------------------- - juju-trusty-congress RUNNING 192.168.10.117 - - NO - # exit back to the Jumphost, wait a minute, and go back to the "SSH to Congress server" step above - # On the Congress server that you have logged into - source ~/admin-openrc.sh - cd ~/git/congress - source bin/activate - bin/congress-server & - disown -h %1 +cd ~ +wget https://git.opnfv.org/cgit/copper/plain/components/congress/install/bash/install_congress_1.sh +wget https://git.opnfv.org/cgit/copper/plain/components/congress/install/bash/install_congress_2.sh +bash install_congress_1.sh [openstack-branch] +Copper post configuration procedures +------------------------------------ +No configuration procedures are required beyond the basic install procedure. diff --git a/docs/configguide/postinstall.rst b/docs/configguide/postinstall.rst index 9252d95..69c38c3 100644 --- a/docs/configguide/postinstall.rst +++ b/docs/configguide/postinstall.rst @@ -1,212 +1,58 @@ Copper post installation procedures =================================== -This release focused on use of the OpenStack Congress service for managing -configuration policy. The Congress install verify procedure described here -is largely manual. This procedure, as well as the longer-term goal of -automated verification support, is a work in progress. The procedure is -further specific to one OPNFV installer (JOID, i.e. MAAS/JuJu) based -environment. -Automated post installation activities --------------------------------------- -No automated procedures are provided at this time. 
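The optional branch selection above works by appending cgit's `?h=` query parameter to the plain-file URL. A small helper illustrating how such a URL is assembled (the function name is illustrative; the script path and the `stable%2Fbrahmaputra` example come from this patch):

```shell
#!/bin/sh
# Build a cgit "plain" URL for a Copper install script, optionally pinned to a
# branch via the ?h= query parameter. The branch must already be URL-encoded,
# e.g. stable%2Fbrahmaputra for stable/brahmaputra.
copper_script_url() {
    name=$1
    branch=$2   # optional, already URL-encoded
    url="https://git.opnfv.org/cgit/copper/plain/components/congress/install/bash/${name}"
    if [ -n "$branch" ]; then
        url="${url}?h=${branch}"
    fi
    printf '%s\n' "$url"
}

# Example: the URL that "wget ... install_congress_1.sh" would fetch for the
# Brahmaputra stable branch.
copper_script_url install_congress_1.sh stable%2Fbrahmaputra
```

With no branch argument the helper returns the plain-file URL for the default branch, matching the wget commands shown above.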
+This section describes optional procedures for verifying that the Congress +service is operational, and additional test tools developed for the Colorado +release. -Copper post configuration procedures ------------------------------------- -No configuration procedures are required beyond the basic install procedure. +Copper functional tests +----------------------- -Platform components validation ------------------------------- +This release includes the following test cases which are integrated into OPNFV +Functest for the JOID and Apex installers: + * DMZ Placement: dmz.sh + * SMTP Ingress: smtp_ingress.sh + * Reserved Subnet: reserved_subnet.sh -Following are notes on creating a container as test driver for Congress. -This is based upon an Ubuntu host as installed by JOID. +These scripts, related scripts that clean up the OpenStack environment afterward, +and a combined test runner (run.sh) are in the Copper repo under the "tests" +folder. Instructions for using the tests are provided as script comments. -Create and Activate the Container -................................. -On the jumphost: +Further description of the tests is provided on the Copper wiki at +https://wiki.opnfv.org/display/copper/testing. -.. code:: - sudo lxc-create -n trusty-copper -t /usr/share/lxc/templates/lxc-ubuntu \ - -- -b ubuntu ~/opnfv - sudo lxc-start -n trusty-copper -d - sudo lxc-info --name trusty-copper - (typical output) - Name: trusty-copper - State: RUNNING - PID: 4563 - IP: 10.0.3.44 - CPU use: 28.77 seconds - BlkIO use: 522.79 MiB - Memory use: 559.75 MiB - KMem use: 0 bytes - Link: vethDMFOAN - TX bytes: 2.62 MiB - RX bytes: 88.48 MiB - Total bytes: 91.10 MiB - -Login and configure the test server -................................... +Congress test webapp +-------------------- -.. code:: +This release also provides a webapp that can be automatically installed in a +docker container on the jumphost. 
This script is in the Copper repo at: + * components/congress/test-webapp/setup/install_congress_testserver.sh - ssh ubuntu@10.0.3.44 - sudo apt-get update - sudo apt-get upgrade -y - - # Install pip - sudo apt-get install python-pip -y - - # Install java - sudo apt-get install default-jre -y - - # Install other dependencies - sudo apt-get install git gcc python-dev libxml2 libxslt1-dev \ - libzip-dev php5-curl -y - - # Setup OpenStack environment variables per your OPNFV install - export CONGRESS_HOST=192.168.10.117 - export KEYSTONE_HOST=192.168.10.108 - export CEILOMETER_HOST=192.168.10.105 - export CINDER_HOST=192.168.10.101 - export GLANCE_HOST=192.168.10.106 - export HEAT_HOST=192.168.10.107 - export NEUTRON_HOST=192.168.10.111 - export NOVA_HOST=192.168.10.112 - source ~/admin-openrc.sh - - # Install and test OpenStack client - mkdir ~/git - cd git - git clone https://github.com/openstack/python-openstackclient.git - cd python-openstackclient - git checkout stable/liberty - sudo pip install -r requirements.txt - sudo python setup.py install - openstack service list - (typical output) - +----------------------------------+------------+----------------+ - | ID | Name | Type | - +----------------------------------+------------+----------------+ - | 2f8799ae50f24c928c021fabf8a50f5f | keystone | identity | - | 351b13f56d9a4e25849406ec1d5a2726 | cinder | volume | - | 5129510c3143454f9ba8ec7e6735e267 | cinderv2 | volumev2 | - | 5ee1e220460f41dea9be06921400ce9b | congress | policy | - | 78e73a7789a14f56a5d248a0cd141201 | quantum | network | - | 9d5a00fb475a45b2ae6767528299ed6b | ceilometer | metering | - | 9e4b1624ef0b434abc0b82f607c5045c | heat | orchestration | - | b6c01ceb5023442d9f394b83f2a18e01 | heat-cfn | cloudformation | - | ba6199e3505045ad87e2a7175bd0c57f | glance | image | - | d753f304a0d541dbb989780ae70328a8 | nova | compute | - +----------------------------------+------------+----------------+ - - # Install and test Congress client - cd ~/git - git 
clone https://github.com/openstack/python-congressclient.git - cd python-congressclient - git checkout stable/liberty - sudo pip install -r requirements.txt - sudo python setup.py install - openstack congress driver list - (typical output) - +------------+--------------------------------------------------------------------------+ - | id | description | - +------------+--------------------------------------------------------------------------+ - | ceilometer | Datasource driver that interfaces with ceilometer. | - | neutronv2 | Datasource driver that interfaces with OpenStack Networking aka Neutron. | - | nova | Datasource driver that interfaces with OpenStack Compute aka nova. | - | keystone | Datasource driver that interfaces with keystone. | - | cinder | Datasource driver that interfaces with OpenStack cinder. | - | glancev2 | Datasource driver that interfaces with OpenStack Images aka Glance. | - +------------+--------------------------------------------------------------------------+ - - # Install and test Glance client - cd ~/git - git clone https://github.com/openstack/python-glanceclient.git - cd python-glanceclient - git checkout stable/liberty - sudo pip install -r requirements.txt - sudo python setup.py install - glance image-list - (typical output) - +--------------------------------------+---------------------+ - | ID | Name | - +--------------------------------------+---------------------+ - | 6ce4433e-65c0-4cd8-958d-b06e30c76241 | cirros-0.3.3-x86_64 | - +--------------------------------------+---------------------+ - - # Install and test Neutron client - cd ~/git - git clone https://github.com/openstack/python-neutronclient.git - cd python-neutronclient - git checkout stable/liberty - sudo pip install -r requirements.txt - sudo python setup.py install - neutron net-list - (typical output) - +--------------------------------------+----------+------------------------------------------------------+ - | id | name | subnets | - 
+--------------------------------------+----------+------------------------------------------------------+ - | dc6227df-af41-439f-bd2c-c2c2f0fe7fc5 | public | 5745846c-dd79-4900-a7da-bf506348ceac 192.168.10.0/24 | - | a3f9f13a-5de9-4d3b-98c8-d2e40a2ef8e9 | internal | 5e0be862-90da-44ab-af43-56d5c65aa049 10.0.0.0/24 | - +--------------------------------------+----------+------------------------------------------------------+ - - # Install and test Nova client - cd ~/git - git clone https://github.com/openstack/python-novaclient.git - cd python-novaclient - git checkout stable/liberty - sudo pip install -r requirements.txt - sudo python setup.py install - nova hypervisor-list - (typical output) - +----+---------------------+-------+---------+ - | ID | Hypervisor hostname | State | Status | - +----+---------------------+-------+---------+ - | 1 | compute1.maas | up | enabled | - +----+---------------------+-------+---------+ - - # Install and test Keystone client - cd ~/git - git clone https://github.com/openstack/python-keystoneclient.git - cd python-keystoneclient - git checkout stable/liberty - sudo pip install -r requirements.txt - sudo python setup.py install - -Setup the Congress Test Webapp -.............................. +Prerequisites to using this script: + * OPNFV installed per JOID or Apex installer + * For Apex installs, on the jumphost, ssh to the undercloud VM and "su stack" -.. code:: +To invoke the procedure, enter the following shell commands, optionally +specifying the branch identifier to use for Copper. - # Clone Copper (if not already cloned in user home) - cd ~/git - if [ ! -d ~/git/copper ]; then \ - git clone https://gerrit.opnfv.org/gerrit/copper; fi +..
code:: - # Copy the Apache config - sudo cp ~/git/copper/components/congress/test-webapp/www/ubuntu-apache2.conf \ - /etc/apache2/apache2.conf +wget https://git.opnfv.org/cgit/copper/plain/components/congress/test-webapp/setup/install_congress_testserver.sh +bash install_congress_testserver.sh [copper-branch] - # Point proxy.php to the Congress server per your install - sed -i -- "s/192.168.10.117/$CONGRESS_HOST/g" \ - ~/git/copper/components/congress/test-webapp/www/html/proxy/index.php +Using the test webapp +..................... - # Copy the webapp to the Apache root directory and fix permissions - sudo cp -R ~/git/copper/components/congress/test-webapp/www/html /var/www - sudo chmod 755 /var/www/html -R +Browse to the webapp IP address provided at the end of the install +procedure. - # Make webapp log directory and set permissions - mkdir ~/logs - chmod 777 ~/logs +Interactive options are meant to be self-explanatory given a basic familiarity +with the Congress service and data model. - # Restart Apache - sudo service apache2 restart +Removing the test webapp +........................ -Using the Test Webapp -..................... -Browse to the trusty-copper server IP address. +The webapp can be removed by running this script from the Copper repo: + * components/congress/test-webapp/setup/clean_congress_testserver.sh -Interactive options are meant to be self-explanatory given a basic familiarity -with the Congress service and data model. -But the app will be developed with additional features and UI elements. diff --git a/docs/design/architecture.rst b/docs/design/architecture.rst index d55c208..8c6d04b 100644 --- a/docs/design/architecture.rst +++ b/docs/design/architecture.rst @@ -3,76 +3,93 @@ Architecture Architectural Concept --------------------- -The following example diagram illustrates a "relationship diagram" type view of an NFVI platform, -in which the roles of components focused on policy management, services, and infrastructure are shown. 
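The functional test scripts added in this patch (dmz.sh, smtp_ingress.sh, reserved_subnet.sh) are driven by a combined runner, run.sh, in the "tests" folder. A minimal runner of the same shape might look like the sketch below; treating each script's exit status as pass/fail is an assumption here, and run.sh in the repo remains the supported way to run the tests:

```shell
#!/bin/sh
# Minimal runner sketch for the Copper functional tests (tests/ folder).
# The script names come from the docs; the pass/fail-by-exit-status protocol
# is an assumption, not taken from run.sh itself.
run_tests() {
    failed=0
    for t in "$@"; do
        if sh "$t" >/dev/null 2>&1; then
            echo "PASS $t"
        else
            echo "FAIL $t"
            failed=1
        fi
    done
    return $failed
}

# Intended use (requires an OPNFV deployment with Congress installed):
#   run_tests tests/dmz.sh tests/smtp_ingress.sh tests/reserved_subnet.sh
```

The runner exits nonzero if any script fails, so it can gate a CI job the same way the Functest integration does.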
-This view illustrates that a large-scale deployment of NFVI may leverage multiple components of the same "type" -(e.g. SDN Controller), which fulfill specific purposes for which they are optimized. For example, a global SDN -controller and cloud orchestrator can act as directed by a service orchestrator in the provisioning of VNFs per -intent, while various components at a local and global level handle policy-related events directly and/or feed -them back through a closed-loop policy design that responds as needed, directly or through the service orchestrator. +The following example diagram illustrates a "relationship diagram" type view of +an NFVI platform, in which the roles of components focused on policy management, +services, and infrastructure are shown. + +This view illustrates that a large-scale deployment of NFVI may leverage multiple +components of the same "type" (e.g. SDN Controller), which fulfill specific +purposes for which they are optimized. For example, a global SDN controller and +cloud orchestrator can act as directed by a service orchestrator in the +provisioning of VNFs per intent, while various components at a local and global +level handle policy-related events directly and/or feed them back through a +closed-loop policy design that responds as needed, directly or through the +service orchestrator. .. image:: ./images/policy_architecture.png :width: 700 px :alt: policy_architecture.png :align: center -(source of the diagram above: https://git.opnfv.org/cgit/copper/plain/design_docs/images/policy_architecture.pptx) +(source of the diagram above: +https://git.opnfv.org/cgit/copper/plain/design_docs/images/policy_architecture.pptx) Architectural Aspects --------------------- * Policies are reflected in two high-level goals - * Ensure resource requirements of VNFs and services are applied per VNF designer, service, and tenant intent - * Ensure that generic policies are not violated, - e.g. 
*networks connected to VMs must either be public or owned by the VM owner* + * Ensure resource requirements of VNFs and services are applied per VNF + designer, service, and tenant intent + * Ensure that generic policies are not violated, e.g. *networks connected to + VMs must either be public or owned by the VM owner* * Policies are distributed through two main means - * As part of VNF packages, customized if needed by Service Design tools, expressing intent of the VNF designer and - service provider, and possibly customized or supplemented by service orchestrators per the intent of specific - tenants - * As generic policies provisioned into VIMs (SDN controllers and cloud orchestrators), expressing intent of the - service provider re what states/events need to be policy-governed independently of specific VNFs + * As part of VNF packages, customized if needed by Service Design tools, + expressing intent of the VNF designer and service provider, and possibly + customized or supplemented by service orchestrators per the intent of + specific tenants + * As generic policies provisioned into VIMs (SDN controllers and cloud + orchestrators), expressing intent of the service provider re what + states/events need to be policy-governed independently of specific VNFs - * Policies are applied locally and in closed-loop systems per the capabilities of the local policy enforcer and - the impact of the related state/event conditions + * Policies are applied locally and in closed-loop systems per the capabilities + of the local policy enforcer and the impact of the related state/event conditions * VIMs should be able to execute most policies locally * VIMs may need to pass policy-related state/events to a closed-loop system, - where those events are relevant to other components in the architecture (e.g. 
service orchestrator), - or some additional data/arbitration is needed to resolve the state/event condition + where those events are relevant to other components in the architecture + (e.g. service orchestrator), or some additional data/arbitration is needed + to resolve the state/event condition * Policies are localized as they are distributed/delegated - * High-level policies (e.g. expressing "intent") can be translated into VNF package elements or generic policies, - perhaps using distinct syntaxes - * Delegated policy syntaxes are likely VIM-specific, e.g. Datalog (Congress), YANG (ODL-based SDNC), - or other schemas specific to other SDNCs (Contrail, ONOS) + * High-level policies (e.g. expressing "intent") can be translated into VNF + package elements or generic policies, perhaps using distinct syntaxes + * Delegated policy syntaxes are likely VIM-specific, e.g. Datalog (Congress) * Closed-loop policy and VNF-lifecycle event handling are //somewhat// distinct - * Closed-loop policy is mostly about resolving conditions that can't be handled locally, but as above in some cases - the conditions may be of relevance and either delivered directly or forwarded to service orchestrators - * VNF-lifecycle events that can't be handled by the VIM locally are delivered directly to the service orchestrator + * Closed-loop policy is mostly about resolving conditions that can't be + handled locally, but as above in some cases the conditions may be of + relevance and either delivered directly or forwarded to service orchestrators + * VNF-lifecycle events that can't be handled by the VIM locally are delivered + directly to the service orchestrator - * Some events/analytics need to be collected into a more "open-loop" system which can enable other actions, e.g. + * Some events/analytics need to be collected into a more "open-loop" system + which can enable other actions, e.g. 
* audits and manual interventions * machine-learning focused optimizations of policies (largely a future objective) -Issues to be investigated as part of establishing an overall cohesive/adaptive policy architecture: +Issues to be investigated as part of establishing an overall cohesive/adaptive +policy architecture: - * For the various components which may fulfill a specific purpose, what capabilities (e.g. APIs) do they have/need to + * For the various components which may fulfill a specific purpose, what + capabilities (e.g. APIs) do they have/need to * handle events locally - * enable closed-loop policy handling components to subscribe/optimize policy-related events that are of interest + * enable closed-loop policy handling components to subscribe/optimize + policy-related events that are of interest * For global controllers and cloud orchestrators - * How do they support correlation of events impacting resources in different scopes (network and cloud) + * How do they support correlation of events impacting resources in different + scopes (network and cloud) * What event/response flows apply to various policy use cases * What specific policy use cases can/should fall into each overall class * locally handled by NFVI components - * handled by a closed-loop policy system, either VNF/service-specific or VNF-independent + * handled by a closed-loop policy system, either VNF/service-specific or + VNF-independent diff --git a/docs/design/definitions.rst b/docs/design/definitions.rst index 7f0628a..daf7a48 100644 --- a/docs/design/definitions.rst +++ b/docs/design/definitions.rst @@ -8,10 +8,13 @@ Definitions - Meaning * - State - - Information that can be used to convey or imply the state of something, e.g. an application, resource, entity, etc. This can include data held inside OPNFV components, "events" that have occurred (e.g. "policy violation"), etc. + - Information that can be used to convey or imply the state of something, +e.g. 
an application, resource, entity, etc. This can include data held inside +OPNFV components, "events" that have occurred (e.g. "policy violation"), etc. * - Event - - An item of significance to the policy engine, for which the engine has become aware thr ough some method of discovery e.g. polling or notification. + - An item of significance to the policy engine, for which the engine has +become aware through some method of discovery e.g. polling or notification. Abbreviations ============= diff --git a/docs/design/introduction.rst b/docs/design/introduction.rst index a7cbb02..676adab 100644 --- a/docs/design/introduction.rst +++ b/docs/design/introduction.rst @@ -9,27 +9,29 @@ Introduction .. NOTE:: This is the working documentation for the Copper project. -The `OPNFV Copper <https://wiki.opnfv.org/copper>`_ project aims to help ensure that virtualized infrastructure -deployments comply with goals of the VNF designer/user, e.g. re affinity and partitioning (e.g. per regulation, -control/user plane separation, cost...). -This is a "requirements" project with initial goal to assess "off the shelf" basic OPNFV platform support for policy -management, using existing open source projects such as OpenStack Congress and OpenDaylight Group-Based Policy (GBP). -The project will assess what policy-related features are currently supported through research into the related projects -in OpenStack and ODL, and testing of integrated vanilla distributions of those and other dependent open source projects -in the OPNFV's NFVI platform scope. - -Configuration Policy --------------------- - -As focused on by Copper, configuration policy helps ensure that the NFV service environment meets the requirements of -the variety of stakeholders which will provide or use NFV platforms. 
+The `OPNFV Copper <https://wiki.opnfv.org/copper>`_ project aims to help ensure +that virtualized infrastructure and application deployments comply with goals of +the NFV service provider or the VNF designer/user. + +This is the second ("Colorado") release of the Copper project. The documentation +provided here focuses on the overall goals of the Copper project, and the +specific features supported in the Colorado release. + +Overall Goals for Configuration Policy +-------------------------------------- + +As focused on by Copper, configuration policy helps ensure that the NFV service +environment meets the requirements of the variety of stakeholders which will +provide or use NFV platforms. + These requirements can be expressed as an *intent* of the stakeholder, in specific terms or more abstractly, but at the highest level they express: * what I want * what I don't want -Using road-based transportation as an analogy, some examples of this are shown below. +Using road-based transportation as an analogy, some examples of this are shown +below. .. list-table:: Configuration Intent Example :widths: 10 45 45 @@ -48,12 +50,17 @@ Using road-based transportation as an analogy, some examples of this are shown b - shoulder warning strips, center media barriers - speeding, tractors on the freeway -According to their role, service providers may apply more specific configuration requirements than users, -since service providers are more likely to be managing specific types of infrastructure capabilities. +According to their role, service providers may apply more specific configuration +requirements than users, since service providers are more likely to be managing +specific types of infrastructure capabilities. + Developers and users may also express their requirements more specifically, based upon the type of application or how the user intends to use it.
-For users, a high-level intent can be also translated into a more or less specific configuration capability -by the service provider, taking into consideration aspects such as the type of application or its constraints. + +For users, a high-level intent can also be translated into a more or less specific +configuration capability by the service provider, taking into consideration +aspects such as the type of application or its constraints. + Examples of such translation are: .. list-table:: Intent Translation into Configuration Capability :widths: 40 60 :header-rows: 1 @@ -77,17 +84,29 @@ Examples of such translation are: * - resource reclamation - low-usage monitoring -Although such intent to capability translation is conceptually useful, it is unclear how it can address the variety of -aspects that may affect the choice of an applicable configuration capability. -For that reason, the Copper project will initially focus on more specific configuration requirements as fulfilled by -specific configuration capabilities, and how those requirements and capabilities are expressed in VNF and service +Although such intent to capability translation is conceptually useful, it is +unclear how it can address the variety of aspects that may affect the choice of +an applicable configuration capability. + +For that reason, the Copper project will initially focus on more specific +configuration requirements as fulfilled by specific configuration capabilities, +and how those requirements and capabilities are expressed in VNF and service design and packaging, or as generic policies for the NFVI.
-Release 1 Scope ---------------- -OPNFV Brahmaputra will be the initial OPNFV release for Copper, with the goals: - * Add the OpenStack Congress service to OPNFV, through at least one installer project - * If possible, add Congress support to the OPNFV CI/CD pipeline for all Genesis project installers - (Apex, Fuel, JOID, Compass) - * Integrate Congress tests into Functest and develop additional use case tests for post-OPNFV-install - * Extend with other OpenStack components for testing, as time permits +Copper Release 1 Scope +---------------------- +OPNFV Brahmaputra was the initial OPNFV release for Copper, and achieved the +goals: + * Add the OpenStack Congress service to OPNFV, through at least one installer +project, through post-install configuration. + * Provide basic test scripts and tools to exercise the Congress service + +Copper Release 2 Scope +---------------------- +OPNFV Colorado includes the additional features: + * Congress support in the OPNFV CI/CD pipeline for the JOID and Apex +installers, through the following projects being upstreamed to OpenStack: + * For JOID, a JuJu Charm for Congress + * For Apex, a Puppet Module for Congress + * Congress use case tests integrated into Functest and as manual tests + * Further enhancements of Congress test tools diff --git a/docs/design/requirements.rst b/docs/design/requirements.rst index a9b5bc0..a3f32d8 100644 --- a/docs/design/requirements.rst +++ b/docs/design/requirements.rst @@ -2,51 +2,77 @@ Requirements ============ This section outlines general requirements for configuration policies, per the two main aspects in the Copper project scope: - * Ensuring resource requirements of VNFs and services are applied per VNF designer, service, and tenant intent + * Ensuring resource requirements of VNFs and services are applied per VNF + designer, service, and tenant intent * Ensuring that generic policies are not violated, e.g.
*networks connected to VMs must either be public or owned by the VM owner* Resource Requirements +++++++++++++++++++++ -Resource requirements describe the characteristics of virtual resources (compute, storage, network) that are needed for -VNFs and services, and how those resources should be managed over the lifecycle of a VNF/service. Upstream projects -already include multiple ways in which resource requirements can be expressed and fulfilled, e.g.: +Resource requirements describe the characteristics of virtual resources (compute, +storage, network) that are needed for VNFs and services, and how those resources +should be managed over the lifecycle of a VNF/service. Upstream projects already +include multiple ways in which resource requirements can be expressed and fulfilled, e.g.: * OpenStack Nova - * the `image <http://docs.openstack.org/openstack-ops/content/user_facing_images.html>`_ feature, enabling - "VM templates" to be defined for NFs, and referenced by name as a specific NF version to be used - * the `flavor <http://docs.openstack.org/openstack-ops/content/flavors.html>`_ feature, addressing basic compute - and storage requirements, with extensibility for custom attributes + * the `image + <http://docs.openstack.org/openstack-ops/content/user_facing_images.html>`_ + feature, enabling "VM templates" to be defined for NFs, and referenced by + name as a specific NF version to be used + * the `flavor <http://docs.openstack.org/openstack-ops/content/flavors.html>`_ + feature, addressing basic compute and storage requirements, with + extensibility for custom attributes * OpenStack Heat - * the `Heat Orchestration Template <http://docs.openstack.org/developer/heat/template_guide/index.html>`_ feature, - enabling a variety of VM aspects to be defined and managed by Heat throughout the VM lifecycle, notably - * alarm handling (requires `Ceilometer <https://wiki.openstack.org/wiki/Ceilometer>`_) - * attached volumes (requires `Cinder 
<https://wiki.openstack.org/wiki/Cinder>`_)
-      * domain name assignment (requires `Designate <https://wiki.openstack.org/wiki/Designate>`_)
+    * the `Heat Orchestration Template
+      <http://docs.openstack.org/developer/heat/template_guide/index.html>`_
+      feature, enabling a variety of VM aspects to be defined and managed by
+      Heat throughout the VM lifecycle, notably
+      * alarm handling (requires
+        `Ceilometer <https://wiki.openstack.org/wiki/Ceilometer>`_)
+      * attached volumes (requires
+        `Cinder <https://wiki.openstack.org/wiki/Cinder>`_)
+      * domain name assignment (requires
+        `Designate <https://wiki.openstack.org/wiki/Designate>`_)
       * images (requires `Glance <https://wiki.openstack.org/wiki/Glance>`_)
       * autoscaling
-      * software configuration associated with VM "lifecycle hooks (CREATE, UPDATE, SUSPEND, RESUME, DELETE"
+      * software configuration associated with VM "lifecycle hooks" (CREATE,
+        UPDATE, SUSPEND, RESUME, DELETE)
       * wait conditions and signaling for sequencing orchestration steps
-      * orchestration service user management (requires `Keystone <http://docs.openstack.org/developer/keystone/>`_)
+      * orchestration service user management (requires
+        `Keystone <http://docs.openstack.org/developer/keystone/>`_)
       * shared storage (requires `Manila <https://wiki.openstack.org/wiki/Manila>`_)
-      * load balancing (requires Neutron `LBaaS <http://docs.openstack.org/admin-guide-cloud/content/section_lbaas-overview.html>`_)
-      * firewalls (requires Neutron `FWaaS <http://docs.openstack.org/admin-guide-cloud/content/install_neutron-fwaas-agent.html>`_)
+      * load balancing (requires Neutron
+        `LBaaS
+        <http://docs.openstack.org/admin-guide-cloud/content/section_lbaas-overview.html>`_)
+      * firewalls (requires Neutron
+        `FWaaS
+        <http://docs.openstack.org/admin-guide-cloud/content/install_neutron-fwaas-agent.html>`_)
       * various Neutron-based network and security configuration items
       * Nova flavors
       * Nova server attributes including access control
       * Nova server group affinity and
anti-affinity - * "Data-intensive application clustering" (requires `Sahara <https://wiki.openstack.org/wiki/Sahara>`_) + * "Data-intensive application clustering" (requires + `Sahara <https://wiki.openstack.org/wiki/Sahara>`_) * DBaaS (requires `Trove <http://docs.openstack.org/developer/trove/>`_) - * "multi-tenant cloud messaging and notification service" (requires `Zaqar <http://docs.openstack.org/developer/zaqar/>`_) - * OpenStack `Group-Based Policy <https://wiki.openstack.org/wiki/GroupBasedPolicy>`_ - * API-based grouping of endpoints with associated contractual expectations for data flow processing and - service chaining + * "multi-tenant cloud messaging and notification service" (requires + `Zaqar <http://docs.openstack.org/developer/zaqar/>`_) + * OpenStack `Group-Based Policy + <https://wiki.openstack.org/wiki/GroupBasedPolicy>`_ + * API-based grouping of endpoints with associated contractual expectations + for data flow processing and service chaining * OpenStack `Tacker <https://wiki.openstack.org/wiki/Tacker>`_ - * "a fully functional ETSI MANO based general purpose NFV Orchestrator and VNF Manager for OpenStack" - * OpenDaylight `Group-Based Policy <https://wiki.opendaylight.org/view/Group_Based_Policy_(GBP)>`_ - * model-based grouping of endpoints with associated contractual expectations for data flow processing - * OpenDaylight `Service Function Chaining (SFC) <https://wiki.opendaylight.org/view/Service_Function_Chaining:Main>`_ - * model-based management of "service chains" and the infrastucture that enables them - * Additional projects that are commonly used for configuration management, implemented as client-server frameworks using model-based, declarative, or scripted configuration management data. 
+    * "a fully functional ETSI MANO based general purpose NFV Orchestrator and
+      VNF Manager for OpenStack"
+  * OpenDaylight `Group-Based Policy
+    <https://wiki.opendaylight.org/view/Group_Based_Policy_(GBP)>`_
+    * model-based grouping of endpoints with associated contractual expectations
+      for data flow processing
+  * OpenDaylight `Service Function Chaining (SFC)
+    <https://wiki.opendaylight.org/view/Service_Function_Chaining:Main>`_
+    * model-based management of "service chains" and the infrastructure that
+      enables them
+  * Additional projects that are commonly used for configuration management,
+    implemented as client-server frameworks using model-based, declarative, or
+    scripted configuration management data.
     * `Puppet <https://puppetlabs.com/puppet/puppet-open-source>`_
     * `Chef <https://www.chef.io/chef/>`_
     * `Ansible <http://docs.ansible.com/ansible/index.html>`_
@@ -54,36 +80,59 @@ already include multiple ways in which resource requirements can be expressed an

 Generic Policy Requirements
 +++++++++++++++++++++++++++
-Generic policy requirements address conditions related to resource state and events which need to be monitored for,
-and optionally responded to or prevented. These conditions are typically expected to be VNF/service-independent,
-as VNF/service-dependent condition handling (e.g. scale in/out) are considered to be addressed by VNFM/NFVO/VIM
-functions as described under Resource Requirements or as FCAPS related functions. However the general capabilities
-below can be applied to VNF/service-specific policy handling as well, or in particular to invocation of
-VNF/service-specific management/orchestration actions. The high-level required capabilities include:
+Generic policy requirements address conditions related to resource state and
+events which need to be monitored for, and optionally responded to or prevented.
+These conditions are typically expected to be VNF/service-independent, as
+VNF/service-dependent condition handling (e.g. scale in/out) is considered to
+be addressed by VNFM/NFVO/VIM functions as described under Resource Requirements
+or as FCAPS-related functions. However, the general capabilities below can be
+applied to VNF/service-specific policy handling as well, or in particular to
+invocation of VNF/service-specific management/orchestration actions. The
+high-level required capabilities include:
   * Polled monitoring: Exposure of state via request-response APIs.
   * Notifications: Exposure of state via pub-sub APIs.
-  * Realtime/near-realtime notifications: Notifications that occur in actual or near realtime.
-  * Delegated policy: CRUD operations on policies that are distributed to specific components for local handling,
-    including one/more of monitoring, violation reporting, and enforcement.
+  * Realtime/near-realtime notifications: Notifications that occur in actual or
+    near realtime.
+  * Delegated policy: CRUD operations on policies that are distributed to
+    specific components for local handling, including one/more of monitoring,
+    violation reporting, and enforcement.
   * Violation reporting: Reporting of conditions that represent a policy violation.
-  * Reactive enforcement: Enforcement actions taken in response to policy violation events.
-  * Proactive enforcement: Enforcement actions taken in advance of policy violation events,
+  * Reactive enforcement: Enforcement actions taken in response to policy
+    violation events.
+  * Proactive enforcement: Enforcement actions taken in advance of policy
+    violation events,
     e.g. blocking actions that could result in a policy violation.
   * Compliance auditing: Periodic auditing of state against policies.
-Upstream projects already include multiple ways in which configuration conditions can be monitored and responded to: - * OpenStack `Congress <https://wiki.openstack.org/wiki/Congress>`_ provides a table-based mechanism for state monitoring and proactive/reactive policy enforcement, including (as of the Kilo release) data obtained from internal databases of Nova, Neutron, Ceilometer, Cinder, Glance, Keystone, and Swift. The Congress design approach is also extensible to other VIMs (e.g. SDNCs) through development of data source drivers for the new monitored state information. See `Stackforge Congress Data Source Translators <https://github.com/stackforge/congress/tree/master/congress/datasources>`_, `congress.readthedocs.org <http://congress.readthedocs.org/en/latest/cloudservices.html#drivers>`_, and the `Congress specs <https://github.com/stackforge/congress-specs>`_ for more info. - * OpenStack `Ceilometer <https://wiki.openstack.org/wiki/Ceilometer>`_ provides means to trigger alarms upon a wide variety of conditions derived from its monitored OpenStack analytics. - * `Nagios <https://www.nagios.org/#/>`_ "offers complete monitoring and alerting for servers, switches, applications, and services". +Upstream projects already include multiple ways in which configuration conditions +can be monitored and responded to: + * OpenStack `Congress <https://wiki.openstack.org/wiki/Congress>`_ provides a + table-based mechanism for state monitoring and proactive/reactive policy + enforcement, including data obtained from internal databases of OpenStack + core and optional services. The Congress design approach is also extensible + to other VIMs (e.g. SDNCs) through development of data source drivers for + the new monitored state information. 
See `Stackforge Congress Data Source
+    Translators
+    <https://github.com/stackforge/congress/tree/master/congress/datasources>`_,
+    `congress.readthedocs.org
+    <http://congress.readthedocs.org/en/latest/cloudservices.html#drivers>`_,
+    and the `Congress specs <https://github.com/stackforge/congress-specs>`_ for
+    more info.
+  * OpenStack `Ceilometer <https://wiki.openstack.org/wiki/Ceilometer>`_
+    provides means to trigger alarms upon a wide variety of conditions derived
+    from its monitored OpenStack analytics.
+  * `Nagios <https://www.nagios.org/#/>`_ "offers complete monitoring and
+    alerting for servers, switches, applications, and services".

 Requirements Validation Approach
 ++++++++++++++++++++++++++++++++
-The Copper project will assess the completeness of the upstream project solutions for requirements in scope though
-a process of:
+The Copper project will assess the completeness of the upstream project solutions
+for requirements in scope through a process of:
   * developing configuration policy use cases to focus solution assessment tests
   * integrating the projects into the OPNFV platform for testing
   * executing functional and performance tests for the solutions
-  * assessing overall requirements coverage and gaps in the most complete upstream solutions
+  * assessing overall requirements coverage and gaps in the most complete
+    upstream solutions

-Depending upon the priority of discovered gaps, new requirements will be submitted to upstream projects for the next
-available release cycle.
+Depending upon the priority of discovered gaps, new requirements will be
+submitted to upstream projects for the next available release cycle.
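The generic policy capabilities named in the requirements diff above (polled monitoring, violation reporting, reactive enforcement) can be pictured, outside the patch itself, as a minimal control loop over tabular state. The sketch below is not part of this change set or of Congress: the table rows, the "forbidden flavor" rule, and the `pause` callable are hypothetical stand-ins for Congress data-source tables and actions.

```python
# Minimal sketch (illustration only, not from the Copper patch) of the generic
# policy capabilities listed above: monitoring tabular state, reporting
# violations, and reactively enforcing a policy. All rows and the pause()
# action are hypothetical stand-ins for Congress data sources and actions.

def find_violations(servers, flavors):
    """Violation reporting: ACTIVE servers using a disallowed flavor."""
    disallowed = {f["id"] for f in flavors if f["name"] == "forbidden"}
    return [s["id"] for s in servers
            if s["status"] == "ACTIVE" and s["flavor"] in disallowed]

def enforce(servers, flavors, pause):
    """Reactive enforcement: invoke the pause action on each violating server."""
    for server_id in find_violations(servers, flavors):
        pause(server_id)

# Polled monitoring: a real deployment would re-read this state from
# request-response APIs on an interval; here it runs once over static tables.
servers = [{"id": "vm1", "status": "ACTIVE", "flavor": "f1"},
           {"id": "vm2", "status": "ACTIVE", "flavor": "f2"}]
flavors = [{"id": "f1", "name": "forbidden"}, {"id": "f2", "name": "small"}]
paused = []
enforce(servers, flavors, paused.append)
print(paused)  # ['vm1']
```

In Congress the same separation appears as a Datalog rule defining the violation table plus an `execute[...]` rule bound to it, as in the use case examples later in this patch.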
diff --git a/docs/design/usecases.rst b/docs/design/usecases.rst
index ef9e82d..ae046f3 100644
--- a/docs/design/usecases.rst
+++ b/docs/design/usecases.rst
@@ -1,21 +1,105 @@ Use Cases
 =========
-Resource Requirements
-+++++++++++++++++++++
+Implemented as of this release
+------------------------------

-Workload Placement
-------------------
+DMZ Deployment
+..............
+
+As a service provider, I need to ensure that applications which have not been
+designed for exposure in a DMZ zone are not attached to DMZ networks.
+
+An example implementation is shown in the Congress use case test "DMZ Placement"
+(dmz.sh) in the Copper repo under the tests folder. This test:
+  * Identifies VMs connected to a DMZ (currently identified through a
+    specifically-named security group)
+  * Identifies VMs connected to a DMZ, which are by policy not allowed to be
+    (currently implemented through an image tag intended to identify images
+    that are "authorized", i.e. tested and secure, to be DMZ-connected)
+  * Reactively enforces the DMZ placement rule by pausing VMs found to be in
+    violation of the policy.
+
+As implemented through OpenStack Congress:
+
+.. code::
+
+  dmz_server(x) :-
+    nova:servers(id=x,status='ACTIVE'),
+    neutronv2:ports(id, device_id, status='ACTIVE'),
+    neutronv2:security_group_port_bindings(id, sg),
+    neutronv2:security_groups(sg,name='dmz')
+
+  dmz_placement_error(id) :-
+    nova:servers(id,name,hostId,status,tenant_id,user_id,image,flavor,az,hh),
+    not glancev2:tags(image,'dmz'),
+    dmz_server(id)
+
+  execute[nova:servers.pause(id)] :-
+    dmz_placement_error(id),
+    nova:servers(id,status='ACTIVE')
+
+Configuration Auditing
+......................
+
+As a service provider or tenant, I need to periodically verify that resource
+configuration requirements have not been violated, as a backup means to proactive
+or reactive policy enforcement.
+ +An example implementation is shown in the Congress use case test "SMTP Ingress" +(smtp_ingress.sh) in the Copper repo under the tests folder. This test: + * Detects that a VM is associated with a security group that allows SMTP + ingress (TCP port 25) + * Adds a policy table row entry for the VM, which can be later investigated + for appropriate use of the security group, etc + +As implemented through OpenStack Congress: + +.. code:: + + smtp_ingress(x) :- + nova:servers(id=x,status='ACTIVE'), + neutronv2:ports(port_id, status='ACTIVE'), + neutronv2:security_groups(sg, tenant_id, sgn, sgd), + neutronv2:security_group_port_bindings(port_id, sg), + neutronv2:security_group_rules(sg, rule_id, tenant_id, remote_group_id, + 'ingress', ethertype, 'tcp', port_range_min, port_range_max, remote_ip), + lt(port_range_min, 26), + gt(port_range_max, 24) + +Reserved Resources +.................. + +As an NFVI provider, I need to ensure that my admins do not inadvertently +enable VMs to connect to reserved subnets. + +An example implementation is shown in the Congress use case test "Reserved +Subnet" (reserved_subnet.sh) in the Copper repo under the tests folder. This +test: + * Detects that a subnet has been created in a reserved range + * Reactively deletes the subnet + +As implemented through OpenStack Congress: + +.. code:: + + reserved_subnet_error(x) :- + neutronv2:subnets(id=x, cidr='10.7.1.0/24') + + execute[neutronv2:delete_subnet(x)] :- + reserved_subnet_error(x) + + +For further analysis and implementation +--------------------------------------- Affinity ........ Ensures that the VM instance is launched "with affinity to" specific resources, -e.g. within a compute or storage cluster. -This is analogous to the affinity rules in -`VMWare vSphere DRS <https://pubs.vmware.com/vsphere-50/topic/com.vmware.vsphere.resmgmt.doc_50/GUID-FF28F29C-8B67-4EFF-A2EF-63B3537E6934.html>`_. -Examples include: "Same Host Filter", i.e. 
place on the same compute node as a given set of instances, -e.g. as defined in a scheduler hint list. +e.g. within a compute or storage cluster. Examples include: "Same Host Filter", +i.e. place on the same compute node as a given set of instances, e.g. as defined +in a scheduler hint list. As implemented by OpenStack Heat using server groups: @@ -48,10 +132,10 @@ Anti-Affinity ............. Ensures that the VM instance is launched "with anti-affinity to" specific resources, -e.g. outside a compute or storage cluster, or geographic location. -This filter is analogous to the anti-affinity rules in vSphere DRS. -Examples include: "Different Host Filter", i.e. ensures that the VM instance is launched -on a different compute node from a given set of instances, as defined in a scheduler hint list. +e.g. outside a compute or storage cluster, or geographic location. Examples +include: "Different Host Filter", i.e. ensures that the VM instance is launched +on a different compute node from a given set of instances, as defined in a +scheduler hint list. As implemented by OpenStack Heat using scheduler hints: @@ -88,46 +172,27 @@ As implemented by OpenStack Heat using scheduler hints: - network: {get_param: network} scheduler_hints: {different_host: {get_resource: serv1}} -DMZ Deployment -.............. -As a service provider, -I need to ensure that applications which have not been designed for exposure in a DMZ zone, -are not attached to DMZ networks. - -Configuration Auditing ----------------------- - -As a service provider or tenant, -I need to periodically verify that resource configuration requirements have not been violated, -as a backup means to proactive or reactive policy enforcement. 
- -Generic Policy Requirements -+++++++++++++++++++++++++++ - -NFVI Self-Service Constraints ------------------------------ - -As an NFVI provider, -I need to ensure that my self-service tenants are not able to configure their VNFs in ways -that would impact other tenants or the reliability, security, etc of the NFVI. - Network Access Control ...................... -Networks connected to VMs must be public, or owned by someone in the VM owner's group. +Networks connected to VMs must be public, or owned by someone in the VM owner's +group. This use case captures the intent of the following sub-use-cases: * Link Mirroring: As a troubleshooter, - I need to mirror traffic from physical or virtual network ports so that I can investigate trouble reports. + I need to mirror traffic from physical or virtual network ports so that I + can investigate trouble reports. * Link Mirroring: As a NFVaaS tenant, - I need to be able to mirror traffic on my virtual network ports so that I can investigate trouble reports. + I need to be able to mirror traffic on my virtual network ports so that I + can investigate trouble reports. * Unauthorized Link Mirroring Prevention: As a NFVaaS tenant, - I need to be able to prevent other tenants from mirroring traffic on my virtual network ports - so that I can protect the privacy of my service users. + I need to be able to prevent other tenants from mirroring traffic on my + virtual network ports so that I can protect the privacy of my service users. * Link Mirroring Delegation: As a NFVaaS tenant, - I need to be able to allow my NFVaaS SP customer support to mirror traffic on my virtual network ports - so that they can assist in investigating trouble reports. + I need to be able to allow my NFVaaS SP customer support to mirror traffic + on my virtual network ports so that they can assist in investigating trouble + reports. 
As implemented through OpenStack Congress:

@@ -172,18 +237,17 @@ As implemented through OpenStack Congress:
     ldap:group(user1, g),
     ldap:group(user2, g)

-Resource Management
--------------------
-
 Resource Reclamation
 ....................
-As a service provider or tenant,
-I need to be informed of VMs that are under-utilized so that I can reclaim the VI resources.
-(example from `RuleYourCloud blog <http://ruleyourcloud.com/2015/03/12/scaling-up-congress.html>`_)
+As a service provider or tenant, I need to be informed of VMs that are
+under-utilized so that I can reclaim the VI resources. (example from
+`RuleYourCloud blog <http://ruleyourcloud.com/2015/03/12/scaling-up-congress.html>`_)

 As implemented through OpenStack Congress:

+*Note: untested example...*
+
 .. code::

   reclaim_server(vm) :-
@@ -198,11 +262,13 @@ As implemented through OpenStack Congress:

 Resource Use Limits
 ...................
-As a tenant or service provider,
-I need to be automatically terminate an instance that has run for a pre-agreed maximum duration.
+As a tenant or service provider, I need to automatically terminate an
+instance that has run for a pre-agreed maximum duration.

 As implemented through OpenStack Congress:

+*Note: untested example...*
+
 .. code::

   terminate_server(vm) :-
@@ -214,4 +280,3 @@ As implemented through OpenStack Congress:
     nova:servers(vm, vm_name, user_id),
     keystone:users(user_id, email)

-
diff --git a/docs/userguide/featureusage.rst b/docs/userguide/featureusage.rst
index 00bf78c..6fd60c6 100644
--- a/docs/userguide/featureusage.rst
+++ b/docs/userguide/featureusage.rst
@@ -1,5 +1,10 @@ Copper capabilities and usage
 =============================
 This release focused on use of the OpenStack Congress service for managing
-configuration policy.
See the `Congress intro guide on readthedocs
+<http://congress.readthedocs.io/en/latest/index.html>`_ for general information
+on the capabilities and usage of Congress.
+
+Examples of Congress API usage can be found in the Copper tests as described
+on the OPNFV wiki at https://wiki.opnfv.org/display/copper/testing.
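As a companion to the test pointers above, the "DMZ Placement" Datalog rules shown earlier in this patch can be paraphrased as plain table joins. The sketch below is an illustration only, not code from the Copper repo: each function mirrors one Congress table (`dmz_server`, `dmz_placement_error`), and all sample rows are invented; nothing here calls real OpenStack APIs.

```python
# Pure-Python analogue (illustration only, not from the Copper repo) of the
# "DMZ Placement" Congress rules in this patch. Each function mirrors one
# Datalog table; the sample rows below are invented.

def dmz_servers(servers, ports, bindings, groups):
    """dmz_server(x): ACTIVE server x with an ACTIVE port bound to a 'dmz' group."""
    dmz_sgs = {g["id"] for g in groups if g["name"] == "dmz"}
    dmz_devices = {p["device_id"] for p in ports
                   if p["status"] == "ACTIVE"
                   and any(b["port_id"] == p["id"] and b["sg"] in dmz_sgs
                           for b in bindings)}
    return {s["id"] for s in servers
            if s["status"] == "ACTIVE" and s["id"] in dmz_devices}

def dmz_placement_errors(servers, image_tags, dmz_ids):
    """dmz_placement_error(id): DMZ-connected server whose image lacks the 'dmz' tag."""
    return {s["id"] for s in servers
            if s["id"] in dmz_ids
            and "dmz" not in image_tags.get(s["image"], set())}

servers = [{"id": "vm1", "status": "ACTIVE", "image": "img1"},
           {"id": "vm2", "status": "ACTIVE", "image": "img2"}]
ports = [{"id": "p1", "device_id": "vm1", "status": "ACTIVE"}]
bindings = [{"port_id": "p1", "sg": "sg-dmz"}]
groups = [{"id": "sg-dmz", "name": "dmz"}]
image_tags = {"img2": {"dmz"}}  # img1 is not authorized for DMZ attachment

dmz = dmz_servers(servers, ports, bindings, groups)
print(sorted(dmz_placement_errors(servers, image_tags, dmz)))  # ['vm1']
```

In the actual test (dmz.sh), the third rule, `execute[nova:servers.pause(id)]`, then pauses each server appearing in the error table; here that step is left out to keep the sketch free of side effects.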