Diffstat (limited to 'docs')
-rw-r--r--  docs/release/configguide/Auto-featureconfig.rst              | 101
-rw-r--r--  docs/release/configguide/auto-OPFNV-fuel.jpg                 | Bin 189899 -> 0 bytes
-rw-r--r--  docs/release/configguide/auto-OPFNV-fuel.png                 | Bin 0 -> 41457 bytes
-rw-r--r--  docs/release/configguide/auto-installTarget-ONAP-B.png       | Bin 0 -> 50086 bytes
-rw-r--r--  docs/release/configguide/auto-installTarget-generic.jpg      | Bin 154476 -> 0 bytes
-rw-r--r--  docs/release/configguide/auto-installTarget-generic.png      | Bin 0 -> 41926 bytes
-rw-r--r--  docs/release/configguide/auto-installTarget-initial.jpg      | Bin 118641 -> 0 bytes
-rw-r--r--  docs/release/configguide/auto-installTarget-initial.png      | Bin 0 -> 31484 bytes
-rw-r--r--  docs/release/configguide/auto-repo-folders.jpg               | Bin 162411 -> 0 bytes
-rw-r--r--  docs/release/configguide/auto-repo-folders.png               | Bin 0 -> 36136 bytes
-rw-r--r--  docs/release/release-notes/Auto-release-notes.rst            | 222
-rw-r--r--  docs/release/release-notes/ONAP-toplevel-beijing.png         | Bin 0 -> 383760 bytes
-rw-r--r--  docs/release/release-notes/auto-proj-openstacksummit1805.png | Bin 0 -> 10928 bytes
-rw-r--r--  docs/release/release-notes/auto-proj-tests.png               | Bin 0 -> 33348 bytes
-rw-r--r--  docs/release/release-notes/auto-project-activities.png       | Bin 55670 -> 58789 bytes
15 files changed, 222 insertions(+), 101 deletions(-)
diff --git a/docs/release/configguide/Auto-featureconfig.rst b/docs/release/configguide/Auto-featureconfig.rst
index fe51be3..ed68069 100644
--- a/docs/release/configguide/Auto-featureconfig.rst
+++ b/docs/release/configguide/Auto-featureconfig.rst
@@ -14,22 +14,29 @@ and provides guidelines on how to perform configurations and additional installa
Goal
====
-The goal of `Auto <http://docs.opnfv.org/en/latest/submodules/auto/docs/release/release-notes/index.html#auto-releasenotes>`_ installation and configuration is to prepare an environment where the `Auto use cases <http://docs.opnfv.org/en/latest/submodules/auto/docs/release/userguide/index.html#auto-userguide>`_ can be assessed, i.e. where the corresponding test cases can be executed and their results can be collected.
+The goal of `Auto <http://docs.opnfv.org/en/latest/submodules/auto/docs/release/release-notes/index.html#auto-releasenotes>`_
+installation and configuration is to prepare an environment where the
+`Auto use cases <http://docs.opnfv.org/en/latest/submodules/auto/docs/release/userguide/index.html#auto-userguide>`_
+can be assessed, i.e. where the corresponding test cases can be executed and their results can be collected.
An instance of ONAP needs to be present, as well as a number of deployed VNFs, in the scope of the use cases.
The initial Auto use cases cover:
-* Edge Cloud (increased autonomy and automation for managing Edge VNFs)
-* Resilience Improvements through ONAP (reduced recovery time for VNFs and end-to-end services in case of failure or suboptimal performance)
-* Enterprise vCPE (automation, cost optimization, and performance assurance of enterprise connectivity to Data Centers and the Internet)
+* **Edge Cloud** (increased autonomy and automation for managing Edge VNFs)
+* **Resilience Improvements through ONAP** (reduced recovery time for VNFs and end-to-end services in case of failure
+ or suboptimal performance)
+* **Enterprise vCPE** (automation, cost optimization, and performance assurance of enterprise connectivity to Data Centers
+ and the Internet)
The general idea of Auto is to install an OPNFV environment (comprising at least one Cloud Manager),
an ONAP instance, ONAP-deployed VNFs as required by use cases, possibly additional cloud managers not
already installed during the OPNFV environment setup, traffic generators, and the Auto-specific software
-for the use cases (which can include test frameworks such as `Robot <http://robotframework.org/>`_ or `Functest <http://docs.opnfv.org/en/latest/submodules/functest/docs/release/release-notes/index.html#functest-releasenotes>`_).
+for the use cases (which can include test frameworks such as `Robot <http://robotframework.org/>`_ or
+`Functest <http://docs.opnfv.org/en/latest/submodules/functest/docs/release/release-notes/index.html#functest-releasenotes>`_).
The ONAP instance needs to be configured with policies and closed-loop controls (also as required by use cases),
-and the test framework controls the execution and result collection of all the test cases.
+and the test framework controls the execution and result collection of all the test cases. Then, test case execution
+results are analyzed, so as to fine-tune policies and closed-loop controls.
The following diagram illustrates two execution environments, for x86 architectures and for Arm architectures.
The installation process depends on the underlying architecture, since certain components may require a
@@ -39,7 +46,7 @@ manager), for x86 or for Arm. The initial VM-based VNFs will cover OpenStack, an
additional cloud managers will be considered. The configuration of ONAP and of test cases should not depend
on the architecture.
-.. image:: auto-installTarget-generic.jpg
+.. image:: auto-installTarget-generic.png
For each component, various installer tools will be selected (based on simplicity and performance), and
@@ -47,12 +54,18 @@ may change from one Auto release to the next. For example, the most natural inst
OOM (ONAP Operations Manager).
The initial version of Auto will focus on OpenStack VM-based VNFs, onboarded and deployed via ONAP API
-(not by ONAP GUI, for the purpose of automation). ONAP is installed on Kubernetes. Two servers from LaaS
-are used: one to support an OpenStack instance as provided by the OPNFV installation via Fuel/MCP, and
-the other to support ONAP with Kubernetes and Docker. Therefore, the VNF execution environment is the
-server with the OpenStack instance.
+(not by ONAP GUI, for the purpose of automation). ONAP is installed on Kubernetes. Two or more servers from LaaS
+are used: one or more to support an OpenStack instance as provided by the OPNFV installation via Fuel/MCP or other
+OPNFV installers (Compass4NFV, Apex/TripleO, Daisy4NFV, JOID), and the other(s) to support ONAP with Kubernetes
+and Docker. Therefore, the VNF execution environment is composed of the server(s) with the OpenStack instance(s).
-.. image:: auto-installTarget-initial.jpg
+.. image:: auto-installTarget-initial.png
+
+ONAP/K8S has several variants. The initial variant considered by Auto is the basic one recommended by ONAP,
+which relies on the Rancher installer and on OpenStack VMs providing VMs for the Rancher master and for the
+Kubernetes cluster workers, as illustrated below for ONAP-Beijing release:
+
+.. image:: auto-installTarget-ONAP-B.png
The OpenStack instance running VNFs may need to be configured as per ONAP expectations, for example creating
@@ -64,16 +77,16 @@ SDK, or OpenStack CLI, or even OpenStack Heat templates) would populate the Open
-Jenkins (or more precisely JJB: Jenkins Job Builder) will be used for Continuous Integration in OPNFV releases, to ensure that the latest master
-branch of Auto is always working. The first 3 tasks in the pipeline would be: install OpenStack instance via OPNFV
-installer (Fuel/MCP for example), configure the OpenStack instance for ONAP, install ONAP (using the OpenStack
-instance network IDs in the ONAP YAMP file).
+Jenkins (or more precisely JJB: Jenkins Job Builder) will be used for Continuous Integration in OPNFV releases,
+to ensure that the latest master branch of Auto is always working. The first 3 tasks in the pipeline would be:
+install OpenStack instance via OPNFV installer (Fuel/MCP for example), configure the OpenStack instance for ONAP,
+install ONAP (using the OpenStack instance network IDs in the ONAP YAML file).
Moreover, Auto will offer an API, which can be imported as a module, and can be accessed for example
by a web application. The following diagram shows the planned structure for the Auto Git repository,
supporting this module, as well as the installation scripts, test case software, utilities, and documentation.
-.. image:: auto-repo-folders.jpg
+.. image:: auto-repo-folders.png
@@ -82,15 +95,18 @@ Pre-configuration activities
The following resources will be required for the initial version of Auto:
-* two LaaS (OPNFV Lab-as-a-Service) pods, with their associated network information. Later, other types of target pods will be supported.
-* the `Auto Git repository <https://git.opnfv.org/auto/tree/>`_ (clone from `Gerrit Auto <https://gerrit.opnfv.org/gerrit/#/admin/projects/auto>`_)
+* at least two LaaS (OPNFV Lab-as-a-Service) pods (or equivalent in another lab), with their associated network
+ information. Later, other types of target pods will be supported, such as clusters (physical bare metal or virtual).
+ The pods can be either x86 or Arm CPU architectures.
+* the `Auto Git repository <https://git.opnfv.org/auto/tree/>`_
+ (clone from `Gerrit Auto <https://gerrit.opnfv.org/gerrit/#/admin/projects/auto>`_)
Hardware configuration
======================
-<TBC>
+<TBC; large servers, at least 512G RAM, 1TB storage, 80-100 CPU threads>
@@ -100,7 +116,8 @@ Feature configuration
Environment installation
^^^^^^^^^^^^^^^^^^^^^^^^
-Current Auto work in progress is captured in the `Auto Lab Deployment wiki page <https://wiki.opnfv.org/display/AUTO/Auto+Lab+Deployment>`_.
+Current Auto work in progress is captured in the
+`Auto Lab Deployment wiki page <https://wiki.opnfv.org/display/AUTO/Auto+Lab+Deployment>`_.
OPNFV with OpenStack
@@ -114,7 +131,7 @@ This OPNFV installer starts with installing a Salt Master, which then configures
subnets and bridges, and installs VMs (e.g., for controllers and compute nodes)
and an OpenStack instance with predefined credentials.
-.. image:: auto-OPFNV-fuel.jpg
+.. image:: auto-OPFNV-fuel.png
The Auto version of OPNFV installation configures additional resources for the OpenStack virtual pod,
@@ -136,21 +153,37 @@ These lines can be added to configure more resources:
gtw01:
ram: 2048
+ cmp01:
- + vcpus: 16
- + ram: 65536
- + disk: 40
+ + vcpus: 32
+ + ram: 196608
+ cmp02:
- + vcpus: 16
- + ram: 65536
- + disk: 40
+ + vcpus: 32
+ + ram: 196608
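For orientation, the compute-node values above line up with the quota adjustments applied later in this guide. A quick illustrative sketch of the arithmetic (not part of the installer; node names and keys simply mirror the snippet above):

```python
# With two compute nodes (cmp01, cmp02) sized as in the snippet above, the
# aggregate capacity available to Nova guests is the sum over both nodes.
# These totals are what the "openstack quota set" values should match.
compute_nodes = {
    "cmp01": {"vcpus": 32, "ram_mb": 196608},
    "cmp02": {"vcpus": 32, "ram_mb": 196608},
}

total_vcpus = sum(n["vcpus"] for n in compute_nodes.values())
total_ram_mb = sum(n["ram_mb"] for n in compute_nodes.values())

print(total_vcpus)   # 64     -> openstack quota set --cores 64 admin
print(total_ram_mb)  # 393216 -> openstack quota set --ram 393216 admin
```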
-The final step deploys OpenStack (duration: approximately between 30 and 45 minutes).
+The final steps deploy OpenStack (duration: approximately between 30 and 45 minutes).
.. code-block:: console
- 6. ci/deploy.sh -l UNH-LaaS -p virtual1 -s os-nosdn-nofeature-noha -D |& tee deploy.log
+ # The following change gives the VMs more disk space. Default is 100G per cmp0x; this gives 350G each (700G total).
+ 6. sed -i mcp/scripts/lib.sh -e 's/\(qemu-img create.*\) 100G/\1 350G/g'
+
+ # Then deploy OpenStack. It should take between 30 and 45 minutes:
+ 7. ci/deploy.sh -l UNH-LaaS -p virtual1 -s os-nosdn-nofeature-noha -D |& tee deploy.log
+
+ # Lastly, to get access to the extra RAM and vCPUs, adjust the quotas (done on the controller at 172.16.10.36):
+ 8. openstack quota set --cores 64 admin
+ 9. openstack quota set --ram 393216 admin
+
+
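The quota commands in steps 8-9 can also be issued programmatically. A minimal sketch using the openstacksdk Cloud layer (the cloud name ``auto-cloud`` is an illustrative assumption for a ``clouds.yaml`` entry, not a value from this guide):

```python
def admin_quota_payload(cores=64, ram_mb=393216):
    """Quota values matching steps 8-9 above (2 x 32 vCPUs, 2 x 192G RAM)."""
    return {"cores": cores, "ram": ram_mb}

def apply_admin_quotas(project="admin", cloud="auto-cloud"):
    """Apply the quotas via openstacksdk; requires a reachable cloud
    named 'auto-cloud' in clouds.yaml (an assumption for this sketch)."""
    import openstack  # openstacksdk
    conn = openstack.connect(cloud=cloud)
    conn.set_compute_quotas(project, **admin_quota_payload())
```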
+Note:
+
+* with Linux Kernel 4.4, the installation of OPNFV does not work properly (this seems to be a known bug in 4.4, as it
+  works correctly with 4.13): neither qemu-nbd nor kpartx is able to create a mapping to the /dev/nbd0p1 partition in
+  order to resize it to 3G (see the Fuel repository,
+  file `mcp/scripts/lib.sh <https://git.opnfv.org/fuel/tree/mcp/scripts/lib.sh>`_ , function mount_image).
+* this is not a major issue on x86, because it is still possible to update the image and complete the installation even
+  with the original partition size.
+* on Arm, however, the OPNFV installation will fail, because there is not enough space to install all required packages
+  into the cloud image.
ONAP on Kubernetes
@@ -166,10 +199,9 @@ with all components except DCAE running on Kubernetes, and DCAE running separate
For Arm architectures, specialized Docker images are being developed to provide Arm architecture
binary compatibility.
-The goal for the first release of Auto is to use an ONAP instance where DCAE also runs on Kubernetes,
-for both architectures.
+The goal for Auto is to use an ONAP instance where DCAE also runs on Kubernetes, for both architectures.
-The ONAP reference for this installation is detailed `here <https://wiki.onap.org/display/DW/ONAP+on+Kubernetes>`_.
+The ONAP reference for this installation is detailed `here <http://onap.readthedocs.io/en/latest/submodules/oom.git/docs/oom_user_guide.html>`_.
Examples of manual steps for the deploy procedure are as follows:
@@ -223,7 +255,8 @@ Test Case software installation and execution control
Installation health-check
=========================
-<TBC; the Auto installation will self-check, but indicate here manual steps to double-check that the installation was successful>
+<TBC; the Auto installation will self-check, but indicate here manual steps to double-check that the
+installation was successful>
diff --git a/docs/release/configguide/auto-OPFNV-fuel.jpg b/docs/release/configguide/auto-OPFNV-fuel.jpg
deleted file mode 100644
index 706d997..0000000
--- a/docs/release/configguide/auto-OPFNV-fuel.jpg
+++ /dev/null
Binary files differ
diff --git a/docs/release/configguide/auto-OPFNV-fuel.png b/docs/release/configguide/auto-OPFNV-fuel.png
new file mode 100644
index 0000000..3100d40
--- /dev/null
+++ b/docs/release/configguide/auto-OPFNV-fuel.png
Binary files differ
diff --git a/docs/release/configguide/auto-installTarget-ONAP-B.png b/docs/release/configguide/auto-installTarget-ONAP-B.png
new file mode 100644
index 0000000..dc069fe
--- /dev/null
+++ b/docs/release/configguide/auto-installTarget-ONAP-B.png
Binary files differ
diff --git a/docs/release/configguide/auto-installTarget-generic.jpg b/docs/release/configguide/auto-installTarget-generic.jpg
deleted file mode 100644
index 3f94871..0000000
--- a/docs/release/configguide/auto-installTarget-generic.jpg
+++ /dev/null
Binary files differ
diff --git a/docs/release/configguide/auto-installTarget-generic.png b/docs/release/configguide/auto-installTarget-generic.png
new file mode 100644
index 0000000..6740933
--- /dev/null
+++ b/docs/release/configguide/auto-installTarget-generic.png
Binary files differ
diff --git a/docs/release/configguide/auto-installTarget-initial.jpg b/docs/release/configguide/auto-installTarget-initial.jpg
deleted file mode 100644
index edc6509..0000000
--- a/docs/release/configguide/auto-installTarget-initial.jpg
+++ /dev/null
Binary files differ
diff --git a/docs/release/configguide/auto-installTarget-initial.png b/docs/release/configguide/auto-installTarget-initial.png
new file mode 100644
index 0000000..380738d
--- /dev/null
+++ b/docs/release/configguide/auto-installTarget-initial.png
Binary files differ
diff --git a/docs/release/configguide/auto-repo-folders.jpg b/docs/release/configguide/auto-repo-folders.jpg
deleted file mode 100644
index ee88866..0000000
--- a/docs/release/configguide/auto-repo-folders.jpg
+++ /dev/null
Binary files differ
diff --git a/docs/release/configguide/auto-repo-folders.png b/docs/release/configguide/auto-repo-folders.png
new file mode 100644
index 0000000..1c9d6a4
--- /dev/null
+++ b/docs/release/configguide/auto-repo-folders.png
Binary files differ
diff --git a/docs/release/release-notes/Auto-release-notes.rst b/docs/release/release-notes/Auto-release-notes.rst
index 2c2b6d0..e10f497 100644
--- a/docs/release/release-notes/Auto-release-notes.rst
+++ b/docs/release/release-notes/Auto-release-notes.rst
@@ -13,32 +13,49 @@ This document provides the release notes for the Fraser release of Auto.
Important notes for this release
================================
-The initial release for Auto was in Fraser 6.0 (project inception: July 2017). This is the first point release, in Fraser 6.1.
+The initial release for Auto was in Fraser 6.0 (project inception: July 2017). This is the second point release, in Fraser 6.2.
Summary
=======
-OPNFV is a SDNFV system integration project for open-source components, which so far have been mostly limited to the NFVI+VIM as generally described by ETSI.
+Overview
+^^^^^^^^
+
+OPNFV is an SDNFV system integration project for open-source components, which so far have been mostly limited to
+the NFVI+VIM as generally described by ETSI.
In particular, OPNFV has yet to integrate higher-level automation features for VNFs and end-to-end Services.
-Auto ("ONAP-Automated OPNFV") will focus on ONAP component integration and verification with OPNFV reference
-platforms/scenarios, through primarily a post-install process in order to avoid impact to OPNFV installer projects.
-As much as possible, this will use a generic installation/integration process (not specific to any OPNFV installer's
-technology).
+As an OPNFV project, Auto ("ONAP-Automated OPNFV") will focus on ONAP component integration and verification with
+OPNFV reference platforms/scenarios, through primarily a post-install process, in order to avoid impact to OPNFV
+installer projects. As much as possible, this will use a generic installation/integration process (not specific to
+any OPNFV installer's technology).
+
+* `ONAP <https://www.onap.org/>`_ (a Linux Foundation Project) is an open source software platform that delivers
+ robust capabilities for the design, creation, orchestration, monitoring, and life cycle management of
+ Software-Defined Networks (SDNs). The current release of ONAP is B (Beijing).
+
+Auto aims at validating the business value of ONAP in general, but especially within an OPNFV infrastructure
+(integration of ONAP and OPNFV). Business value is measured in terms of improved service quality (performance,
+reliability, ...) and OPEX reduction (VNF management simplification, power consumption reduction, ...).
-* `ONAP <https://www.onap.org/>`_ (a Linux Foundation Project) is an open source software platform that delivers robust capabilities for the design, creation, orchestration, monitoring, and life cycle management of Software-Defined Networks (SDNs).
-While all of ONAP is in scope, as it proceeds, the project will focus on specific aspects of this integration and verification in each release. Some example topics and work items include:
+While all of ONAP is in scope, as it proceeds, the Auto project will focus on specific aspects of this integration
+and verification in each release. Some example topics and work items include:
* How ONAP meets VNFM standards, and interacts with VNFs from different vendors
-* How ONAP SDN-C uses OPNFV existing features, e.g. NetReady, in a two-layer controller architecture in which the upper
- layer (global controller) is replaceable, and the lower layer can use different vendor’s local controller to interact
- with SDN-C
+* How ONAP SDN-C uses OPNFV existing features, e.g. NetReady, in a two-layer controller architecture in which the
+ upper layer (global controller) is replaceable, and the lower layer can use different vendor’s local controller to
+ interact with SDN-C. For interaction with multiple cloud infrastructures, the MultiVIM ONAP component will be used.
+* How ONAP leverages OPNFV installers (Fuel/MCP, Compass4NFV, Apex/TripleO, Daisy4NFV, JOID) to provide a cloud
+ instance (starting with OpenStack) on which to install the tool ONAP
* What data collection interface VNF and controllers provide to ONAP DCAE, and (through DCAE), to closed-loop control
functions such as Policy Tests which verify interoperability of ONAP automation/lifecycle features with specific NFVI
- and VIM features, as prioritized by the project with technical community and EUAG input. Examples include:
+ and VIM features, as prioritized by the project with OPNFV technical community and
+ EUAG (`End User Advisory Group <https://www.opnfv.org/end-users/end-user-advisory-group>`_) input.
+
+ Examples:
* Abstraction of networking tech/features e.g. through NetReady/Gluon
* Blueprint-based VNF deployment (HOT, TOSCA, YANG)
@@ -46,7 +63,8 @@ While all of ONAP is in scope, as it proceeds, the project will focus on specifi
* Policy (through DCAE)
* Telemetry (through VES/DCAE)
-Initial areas of focus for Auto (in orange dotted lines; this scope can be expanded for future releases). It is understood that:
+Initial areas of focus for Auto (in orange dotted lines; this scope can be expanded for future releases).
+It is understood that:
* ONAP scope extends beyond the lines drawn below
* ONAP architecture does not necessarily align with the ETSI NFV inspired diagrams this is based upon
@@ -54,22 +72,85 @@ Initial areas of focus for Auto (in orange dotted lines; this scope can be expan
.. image:: auto-proj-rn01.png
-Testability:
+The current ONAP architecture overview can be found `here <http://onap.readthedocs.io/en/latest/guides/onap-developer/architecture/onap-architecture.html>`_.
+
+For reference, the ONAP-Beijing architecture diagram is replicated here:
+
+.. image:: ONAP-toplevel-beijing.png
+
+
+Within OPNFV, Auto leverages tools and collaborates with other projects:
+
+* use clouds/VIMs as installed in OPNFV infrastructure (e.g. OpenStack as installed by Fuel/MCP, Compass4NFV, etc.)
+* include VNFs developed by OPNFV data plane groups (e.g., accelerated by VPP (Vector Packet Processing) with DPDK support, ...)
+* validate ONAP+VNFs+VIMs on two major CPU architectures: x86 (CISC), Arm (RISC); collaborate with OPNFV/Armband
+* work with other related groups in OPNFV:
+
+ * FuncTest for software verification (CI/CD, Pass/Fail)
+ * Yardstick for metric management (quantitative measurements)
+ * VES (VNF Event Stream) and Barometer for VNF monitoring (feed to ONAP/DCAE)
+
+* leverage OPNFV tools and infrastructure:
-* Tests will be developed for use cases within the project scope.
+ * Pharos as LaaS: transient pods (3-week bookings) and permanent Arm pod (6 servers)
+ * possibly other labs from the community
+ * JJB/Jenkins for CI/CD (and follow OPNFV scenario convention)
+ * Gerrit/Git for code and documents reviewing and archiving (similar to ONAP: Linux Foundation umbrella)
+ * follow OPNFV releases (Releng group)
+
+
+
+Testability
+^^^^^^^^^^^
+
+* Tests (test cases) will be developed for use cases within the project scope.
* In future releases, tests will be added to Functest runs for supporting scenarios.
Auto’s goals include the standup and tests for integrated ONAP-Cloud platforms (“Cloud” here being OPNFV “scenarios”
-or other cloud environments). Thus, the artifacts would be tools to deploy ONAP (leveraging OOM whenever possible
-(starting with Beijing release of ONAP), and a preference for the containerized version of ONAP), to integrate it with
+or other cloud environments). Thus, the artifacts would be tools to deploy ONAP (leveraging OOM whenever possible,
+starting with Beijing release of ONAP, and a preference for the containerized version of ONAP), to integrate it with
clouds, to onboard and deploy test VNFs, to configure policies and closed-loop controls, and to run use-case defined
tests against that integrated environment. OPNFV scenarios would be a possible component in the above.
+Installing Auto components and running a battery of tests will be automated, with some or all of the tests being
+integrated in OPNFV CI/CD (depending on the execution length and resource consumption).
+
+Combining all potential parameters, a full set of Auto test case executions can result in thousands of individual results.
+The analysis of these results can be performed by humans, or even by ML/AI (Machine Learning, Artificial Intelligence).
+Test results will be used to fine-tune policies and closed-loop controls configured in ONAP, for increased ONAP business
+value (i.e., find/determine policies and controls which yield optimized ONAP business value metrics such as OPEX).
+
+More precisely, the following list shows parameters that could be applied to an Auto full run of test cases:
+
+* Auto test cases for given use cases
+* OPNFV installer {Fuel/MCP, Compass4NFV, Apex/TripleO, Daisy4NFV, JOID}
+* OPNFV availability scenario {HA, noHA}
+* cloud where ONAP runs {OpenStack, AWS, GCP, Azure, ...}
+* ONAP installation type {bare metal or virtual server, VM or container, ...} and options {MultiVIM single|distributed, ...}
+* VNFs {vFW, vCPE, vAAA, vDHCP, vDNS, vHSS, ...} and VNF-based services {vIMS, vEPC, ...}
+* cloud where VNFs run {OpenStack, AWS, GCP, Azure, ...}
+* VNF type {VM-based, container}
+* CPU architectures {x86/AMD64, ARM/aarch64} for ONAP software and for VNFs
+* pod size and technology (RAM, storage, CPU cores/threads, NICs)
+* traffic types and amounts/volumes
+* ONAP configuration {especially policies and closed-loop controls; monitoring types for DCAE: VES, ...}
+* versions of every component {Linux OS (Ubuntu, CentOS), OPNFV release, clouds, ONAP, VNFs, ...}
+
+
+Illustration of Auto analysis loop based on test case executions:
+
+.. image:: auto-proj-tests.png
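The combinatorial growth implied by the parameter list above can be sketched with a small enumeration (the dimensions below are an illustrative subset of the list; a full Auto run would add many more axes, such as VNFs, traffic profiles, and component versions):

```python
from itertools import product

# Illustrative subset of the test-matrix dimensions listed above.
installers = ["Fuel/MCP", "Compass4NFV", "Apex/TripleO", "Daisy4NFV", "JOID"]
scenarios = ["HA", "noHA"]
architectures = ["x86/AMD64", "Arm/aarch64"]
vnf_types = ["VM-based", "container"]

# Cartesian product of just these four axes already yields 5*2*2*2 = 40 runs.
combinations = list(product(installers, scenarios, architectures, vnf_types))
print(len(combinations))  # 40
```

This is why the document anticipates thousands of individual results once the remaining axes (VNFs, ONAP configurations, versions, traffic) are factored in.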
+
+
Auto currently defines three use cases: Edge Cloud (UC1), Resiliency Improvements (UC2), and Enterprise vCPE (UC3). These use cases aim to show:
-* increased autonomy of Edge Cloud management (automation, catalog-based deployment). This use case relates to the `OPNFV Edge Cloud <https://wiki.opnfv.org/display/PROJ/Edge+cloud>`_ initiative.
-* increased resilience (i.e. fast VNF recovery in case of failure or problem, thanks to closed-loop control), including end-to-end composite services of which a Cloud Manager may not be aware.
-* enterprise-grade performance of vCPEs (certification during onboarding, then real-time performance assurance with SLAs and HA as well as scaling).
+* increased autonomy of Edge Cloud management (automation, catalog-based deployment). This use case relates to the
+ `OPNFV Edge Cloud <https://wiki.opnfv.org/display/PROJ/Edge+cloud>`_ initiative.
+* increased resilience (i.e. fast VNF recovery in case of failure or problem, thanks to closed-loop control),
+ including end-to-end composite services of which a Cloud Manager may not be aware (VMs or containers could be
+ recovered by a Cloud Manager, but not necessarily an end-to-end service built on top of VMs or containers).
+* enterprise-grade performance of vCPEs (certification during onboarding, then real-time performance assurance with
+ SLAs and HA as well as scaling).
The use cases define test cases, which initially will be independent, but which might eventually be integrated to `FuncTest <https://wiki.opnfv.org/display/functest/Opnfv+Functional+Testing>`_.
@@ -78,37 +159,57 @@ Home Gateways). The interest for vHGW is to reduce overall power consumption: ev
residential premises consume a lot of energy. Virtualizing that service to the Service Provider edge data center would
allow to minimize that consumption.
+
+Lab environment
+^^^^^^^^^^^^^^^
+
Target architectures for all Auto use cases and test cases include x86 and Arm. Power consumption analysis will be
-performed, leveraging Functest tools.
+performed, leveraging Functest tools (based on Redfish/IPMI/iLO).
+
+Initially, an ONAP-Amsterdam instance (without DCAE) was installed over Kubernetes on bare metal on a single-server
+x86 pod at UNH IOL.
-An ONAP instance (without DCAE) has been installed over Kubernetes on bare metal on an x86 pod of 6 servers at UNH IOL.
A transition is in progress, to leverage OPNFV LaaS (Lab-as-a-Service) pods (`Pharos <https://labs.opnfv.org/>`_).
-These pods can be booked for 3 weeks only (with an extension for a maximum of 2 weeks), so are not a permanent resource.
-A repeatable automated installation procedure is being developed. An installation of ONAP on Kubernetes in a public
-OpenStack cloud on an Arm server has been done, and demonstrated during OpenStack Summit in Vancouver on May 21st 2018
-(see link in references below).
+These pods can be booked for 3 weeks only (with an extension for a maximum of 2 weeks), so they are not a permanent resource.
+
+A repeatable automated installation procedure is being developed.
-ONAP-based onboarding and deployment of VNFs is in progress (ONAP pre-loading of VNFs must still done outside of ONAP:
-for VM-based VNFs, need to prepare OpenStack stacks (using Heat templates), then make an instance snapshot which serves
-as the binary image of the VNF).
+ONAP-based onboarding and deployment of VNFs is in progress (ONAP-Amsterdam pre-loading of VNFs must still be done outside
+of ONAP: for VM-based VNFs, users need to prepare OpenStack stacks (using Heat templates), then make an instance snapshot
+which serves as the binary image of the VNF).
-An initial version of a script to prepare an OpenStack instance for ONAP (creation of a public and a private network, with a router) has been developed.
+An initial version of a script to prepare an OpenStack instance for ONAP (creation of a public and a private network,
+with a router) has been developed. It leverages OpenStack SDK.
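The preparation script itself is not reproduced in this diff; a minimal sketch of what such an OpenStack SDK script might look like is shown below. All resource names, the cloud name ``auto-cloud``, and the CIDR are illustrative assumptions, not the Auto script's actual values:

```python
def private_subnet_spec(cidr="10.0.0.0/24", name="onap-private-subnet"):
    """IPv4 subnet attributes; CIDR and name are illustrative placeholders."""
    return {"ip_version": 4, "cidr": cidr, "name": name}

def prepare_onap_networks(cloud="auto-cloud"):
    """Create a private network/subnet and a router, as ONAP expects.

    Requires a reachable cloud named 'auto-cloud' in clouds.yaml and a
    pre-existing external network named 'public' (both assumptions).
    """
    import openstack  # openstacksdk
    conn = openstack.connect(cloud=cloud)

    net = conn.network.create_network(name="onap-private-net")
    subnet = conn.network.create_subnet(network_id=net.id,
                                        **private_subnet_spec())

    # Uplink a router to the pre-existing public/external network.
    public = conn.network.find_network("public")
    router = conn.network.create_router(
        name="onap-router",
        external_gateway_info={"network_id": public.id})
    conn.network.add_interface_to_router(router, subnet_id=subnet.id)
    return net, subnet, router
```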
Integration with Arm servers has started (exploring binary compatibility):
-* Openstack is currently installed on a 6-server pod of Arm servers
+* OpenStack is currently installed on a 6-server pod of Arm servers
* A set of 14 additional Arm servers was deployed at UNH, for increased capacity
-* Arm-compatible Docker images are in the process of being developed (they were used in the OpenStack Summit demo)
+* Arm-compatible Docker images are in the process of being developed
Test case implementation for the three use cases has started.
-OPNFV CI/CD integration with JJD (Jenkins Job Description) has started: see the Auto plan description `here <https://wiki.opnfv.org/display/AUTO/CI+Plan+for+Auto>`_. The permanent resource for that is a 6-server Arm pod, hosted at UNH.
+OPNFV CI/CD integration with JJB (Jenkins Job Builder) has started: see the Auto plan description
+`here <https://wiki.opnfv.org/display/AUTO/CI+Plan+for+Auto>`_. The permanent resource for that is the 6-server Arm
+pod, hosted at UNH. The CI directory from the Auto repository is `here <https://git.opnfv.org/auto/tree/ci>`_.
Finally, the following figure illustrates Auto in terms of project activities:
.. image:: auto-project-activities.png
+Note: a demo was delivered at the OpenStack Summit in Vancouver on May 21st 2018, illustrating the deployment of a WordPress
+application (WordPress is a platform for websites and blogs) on a multi-architecture cloud (mix of x86 and Arm servers).
+This shows how service providers and enterprises can diversify their data centers with servers of different architectures,
+and select architectures best suited to each use case (mapping application components to architectures: DBs, interactive servers,
+number-crunching modules, ...).
+This prefigures how other examples such as ONAP, VIMs, and VNFs could also be deployed on heterogeneous multi-architecture
+environments (open infrastructure), orchestrated by Kubernetes. The Auto installation scripts could expand on that approach.
+
+.. image:: auto-proj-openstacksummit1805.png
+
+
+
Release Data
============
@@ -117,13 +218,13 @@ Release Data
| **Project** | Auto |
| | |
+--------------------------------------+--------------------------------------+
-| **Repo/commit-ID** | auto/opnfv-6.1.0 |
+| **Repo/commit-ID** | auto/opnfv-6.2.0 |
| | |
+--------------------------------------+--------------------------------------+
-| **Release designation** | Fraser 6.1 |
+| **Release designation** | Fraser 6.2 |
| | |
+--------------------------------------+--------------------------------------+
-| **Release date** | 2018-05-25 |
+| **Release date** | 2018-06-29 |
| | |
+--------------------------------------+--------------------------------------+
| **Purpose of the delivery** | Official OPNFV release |
@@ -162,46 +263,33 @@ Point release 6.1:
* added Gambia release plan
* started integration with CI/CD (JJB) on permanent Arm pod
* Arm demo at OpenStack Summit
-* initial script for configuring OpenStack instance for ONAP, using OpenStack SDK
+* initial script for configuring OpenStack instance for ONAP, using OpenStack SDK 0.13
* initial attempts to install ONAP Beijing
* alignment with OPNFV Edge Cloud
* initial contacts with Functest
+
+Point release 6.2:
+
-**JIRA TICKETS:**
+* initial scripts for OPNFV CI/CD, registration of Jenkins slave on `Arm pod <https://build.opnfv.org/ci/view/auto/>`_
+* updated script for configuring OpenStack instance for ONAP, using OpenStack SDK 0.14
+
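
The configuration script mentioned above (in the Auto repository, using OpenStack SDK 0.14) prepares an OpenStack instance with the resources ONAP expects. A minimal sketch of the kind of resource plan such a script declares; the names, CIDR, and port list here are assumptions for illustration, not the script's actual values, and with openstacksdk each entry would map to a call such as ``conn.network.create_network(...)``.

```python
# Illustrative sketch of preparing an OpenStack tenant for ONAP.
# All resource names, the CIDR, and the open ports are hypothetical.

def onap_resource_plan(network_name="onap-net", cidr="10.1.0.0/24"):
    """Build the set of OpenStack resources an ONAP install would need."""
    return {
        "network": {"name": network_name, "admin_state_up": True},
        "subnet": {"name": network_name + "-subnet",
                   "cidr": cidr, "ip_version": 4},
        "router": {"name": network_name + "-router",
                   "external_gateway": "public"},
        "security_group_rules": [
            # SSH plus HTTP(S) access to the ONAP portal and APIs.
            {"protocol": "tcp", "port_range": (22, 22)},
            {"protocol": "tcp", "port_range": (80, 80)},
            {"protocol": "tcp", "port_range": (443, 443)},
        ],
        "keypair": {"name": "onap-key"},
    }

plan = onap_resource_plan()
```
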
+Notable activities since release 6.1, which may result in new features for Gambia 7.0:
+
+* researching how to configure multiple Pharos servers in a cluster for Kubernetes
+* started to evaluate Compass4nfv as another OpenStack installer; encountered issues with the Python version (2 vs. 3)
+* common meeting with Functest
+* Plugfest: initiated collaboration with ONAP/MultiVIM (including support for ONAP installation)
+
+
+**JIRA TICKETS for this release:**
+
+--------------------------------------+--------------------------------------+
| **JIRA REFERENCE** | **SLOGAN** |
| | |
+--------------------------------------+--------------------------------------+
-| AUTO-1, UC1 definition | Define Auto-UC-01 Service Provider's |
-| | Management of Edge Cloud |
+| AUTO-38, auto-resiliency-vif-001: | UC2: validate VM suspension command |
+| 2/3 Test Logic | and measurement of Recovery Time |
+--------------------------------------+--------------------------------------+
-| AUTO-2, UC2 definition | Define Auto-UC-02 Resilience |
-| | Improvements through ONAP |
-+--------------------------------------+--------------------------------------+
-| AUTO-7, UC3 definition | Define Auto-UC-03 Enterprise vCPE |
| | |
-+--------------------------------------+--------------------------------------+
-| AUTO-3, UC1 test case definition | Develop test cases for Auto-UC-01 |
-| | SP's Management of Edge Cloud |
-+--------------------------------------+--------------------------------------+
-| AUTO-4, UC2 test case definition | Develop test cases for Auto-UC-02 |
-| | Resilience Improvements through ONAP |
-+--------------------------------------+--------------------------------------+
-| AUTO-8, UC3 test case definition | Develop test cases for Auto-UC-03 |
-| | Enterprise vCPE |
-+--------------------------------------+--------------------------------------+
-| AUTO-5, install ONAP | Getting ONAP running onto Pharos |
-| | deployment (without DCAE) |
-+--------------------------------------+--------------------------------------+
-| AUTO-31, UC1 test case progress | auto-edge-pif-001 Basic OpenStack |
-| | environment check |
-+--------------------------------------+--------------------------------------+
-| AUTO-13, UC2 test case progress | Develop test script for vif-001: |
-| | Data Management |
-+--------------------------------------+--------------------------------------+
-| AUTO-20, UC3 test case progress | Onboarding of VNFs via SDC GUI |
| | |
+--------------------------------------+--------------------------------------+
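
The AUTO-38 ticket above (auto-resiliency-vif-001) validates the VM suspension command and the measurement of Recovery Time. A minimal sketch of such a measurement loop, with a simulated probe standing in for the real VNF health check; the function names and timings are illustrative, not the Auto test code.

```python
import time

def measure_recovery_time(probe, timeout=300.0, interval=0.01):
    """Poll `probe` until it returns True; return elapsed seconds, or
    None if the service did not recover within the timeout."""
    start = time.monotonic()
    while time.monotonic() - start < timeout:
        if probe():
            return time.monotonic() - start
        time.sleep(interval)
    return None

# Simulated fault injection: the "service" recovers after ~50 ms.
# In the real test, the fault is a VM suspension and the probe is a
# request against the VNF.
_recover_at = time.monotonic() + 0.05
recovery_time = measure_recovery_time(lambda: time.monotonic() >= _recover_at)
```
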
@@ -230,7 +318,7 @@ Deliverables
Software deliverables
^^^^^^^^^^^^^^^^^^^^^
-6.0 and 6.1 releases: in-progress install scripts and test case implementations.
+6.2 release: in-progress install scripts, CI scripts, and test case implementations.
Documentation deliverables
@@ -252,8 +340,8 @@ Known Limitations, Issues and Workarounds
System Limitations
^^^^^^^^^^^^^^^^^^
-* ONAP still to be validated for Arm servers
-* DCAE still to be validated for Kubernetes
+* ONAP still to be validated for Arm servers (many Docker images are ready)
+* ONAP installation still to be automated in a repeatable way; a cluster of Pharos servers still needs to be configured
diff --git a/docs/release/release-notes/ONAP-toplevel-beijing.png b/docs/release/release-notes/ONAP-toplevel-beijing.png
new file mode 100644
index 0000000..62a9d47
--- /dev/null
+++ b/docs/release/release-notes/ONAP-toplevel-beijing.png
Binary files differ
diff --git a/docs/release/release-notes/auto-proj-openstacksummit1805.png b/docs/release/release-notes/auto-proj-openstacksummit1805.png
new file mode 100644
index 0000000..339365a
--- /dev/null
+++ b/docs/release/release-notes/auto-proj-openstacksummit1805.png
Binary files differ
diff --git a/docs/release/release-notes/auto-proj-tests.png b/docs/release/release-notes/auto-proj-tests.png
new file mode 100644
index 0000000..6b3be10
--- /dev/null
+++ b/docs/release/release-notes/auto-proj-tests.png
Binary files differ
diff --git a/docs/release/release-notes/auto-project-activities.png b/docs/release/release-notes/auto-project-activities.png
index a946372..c50bd72 100644
--- a/docs/release/release-notes/auto-project-activities.png
+++ b/docs/release/release-notes/auto-project-activities.png
Binary files differ