Diffstat (limited to 'docs/release/configguide')
 -rw-r--r--  docs/release/configguide/Auto-featureconfig.rst          | 100
 -rw-r--r--  docs/release/configguide/auto-installTarget-initial.png  | bin 31484 -> 35994 bytes
 2 files changed, 73 insertions, 27 deletions
diff --git a/docs/release/configguide/Auto-featureconfig.rst b/docs/release/configguide/Auto-featureconfig.rst
index ed68069..8a32300 100644
--- a/docs/release/configguide/Auto-featureconfig.rst
+++ b/docs/release/configguide/Auto-featureconfig.rst
@@ -17,9 +17,14 @@ Goal
The goal of `Auto <http://docs.opnfv.org/en/latest/submodules/auto/docs/release/release-notes/index.html#auto-releasenotes>`_
installation and configuration is to prepare an environment where the
`Auto use cases <http://docs.opnfv.org/en/latest/submodules/auto/docs/release/userguide/index.html#auto-userguide>`_
-can be assessed, i.e. where the corresponding test cases can be executed and their results can be collected.
+can be assessed, i.e. where the corresponding test cases can be executed and their results can be collected for analysis.
+See the `Auto Release Notes <https://docs.opnfv.org/en/latest/submodules/auto/docs/release/release-notes/index.html#auto-releasenotes>`_
+for a discussion of the test results analysis loop.
An instance of ONAP needs to be present, as well as a number of deployed VNFs, in the scope of the use cases.
+Simulated traffic needs to be generated, and then test cases can be executed. The Auto environment is characterized
+by multiple parameters, and the same set of test cases will be executed on each environment, so that the influence
+of each environment parameter can be evaluated.
The initial Auto use cases cover:
@@ -29,35 +34,44 @@ The initial Auto use cases cover:
* **Enterprise vCPE** (automation, cost optimization, and performance assurance of enterprise connectivity to Data Centers
and the Internet)
-The general idea of Auto is to install an OPNFV environment (comprising at least one Cloud Manager),
+The general idea of the Auto feature configuration is to install an OPNFV environment (comprising at least one Cloud Manager),
an ONAP instance, ONAP-deployed VNFs as required by use cases, possibly additional cloud managers not
already installed during the OPNFV environment setup, traffic generators, and the Auto-specific software
for the use cases (which can include test frameworks such as `Robot <http://robotframework.org/>`_ or
`Functest <http://docs.opnfv.org/en/latest/submodules/functest/docs/release/release-notes/index.html#functest-releasenotes>`_).
+
The ONAP instance needs to be configured with policies and closed-loop controls (also as required by use cases),
and the test framework controls the execution and result collection of all the test cases. Then, test case execution
-results are analyzed, so as to fine-tune policies and closed-loop controls.
+results can be analyzed, so as to fine-tune policies and closed-loop controls, and to compare environment parameters.
-The following diagram illustrates two execution environments, for x86 architectures and for Arm architectures.
+The following diagram illustrates execution environments for x86 architectures and for Arm architectures,
+as well as other environment parameters (see the Release Notes for a more detailed discussion of these parameters).
The installation process depends on the underlying architecture, since certain components may require a
specific binary-compatible version for a given x86 or Arm architecture. The preferred variant of ONAP is one
that runs on Kubernetes, while all VNF types are of interest to Auto: VM-based or containerized (on any cloud
-manager), for x86 or for Arm. The initial VM-based VNFs will cover OpenStack, and in future versions,
-additional cloud managers will be considered. The configuration of ONAP and of test cases should not depend
-on the architecture.
+manager), for x86 or for Arm. In fact, even PNFs could be considered, to support the evaluation of hybrid PNF/VNF
+transition deployments (ONAP can also manage legacy PNFs).
+
+The initial VM-based VNFs will cover OpenStack, and in future Auto releases, additional cloud managers will be considered.
+The configuration of ONAP and of test cases should not depend on the underlying architecture and infrastructure.
.. image:: auto-installTarget-generic.png
-For each component, various installer tools will be selected (based on simplicity and performance), and
-may change from one Auto release to the next. For example, the most natural installer for ONAP should be
-OOM (ONAP Operations Manager).
+For each component, various installer tools will be considered (as environment parameters), both to enable comparisons
+and to provide ready-to-use setups for Auto end-users. For example, the most natural installer for ONAP would be
+OOM (ONAP Operations Manager). For the OPNFV infrastructure, supported installer projects will be used: Fuel/MCP,
+Compass4NFV, Apex/TripleO, Daisy4NFV. Note that JOID was last supported in OPNFV Fraser 6.2, and is no longer
+supported as of Gambia 7.0.
The initial version of Auto will focus on OpenStack VM-based VNFs, onboarded and deployed via ONAP API
(not by ONAP GUI, for the purpose of automation). ONAP is installed on Kubernetes. Two or more servers from LaaS
are used: one or more to support an OpenStack instance as provided by the OPNFV installation via Fuel/MCP or other
-OPNFV installers (Compass4NFV, Apex/TripleO, Daisy4NFV, JOID), and the other(s) to support ONAP with Kubernetes
+OPNFV installers (Compass4NFV, Apex/TripleO, Daisy4NFV), and the other(s) to support ONAP with Kubernetes
and Docker. Therefore, the VNF execution environment is composed of the server(s) with the OpenStack instance(s).
+Initial tests will also include ONAP instances installed on bare-metal servers (i.e. not directly on an OPNFV
+infrastructure): the ONAP/OPNFV integration can start at the VNF environment level, but ultimately ONAP should
+be installed within an OPNFV infrastructure, for full integration.
.. image:: auto-installTarget-initial.png
@@ -75,12 +89,17 @@ SDK, or OpenStack CLI, or even OpenStack Heat templates) would populate the Open
.. image:: auto-OS-config4ONAP.png
+That script can also delete the created objects, so it can be used in tear-down procedures as well
+(with the -del or --delete option). It is located in the `Auto repository <https://git.opnfv.org/auto/tree/>`_,
+under the setup/VIMs/OpenStack directory:
+
+* auto_script_config_openstack_for_onap.py
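+
+For illustration, a few of the operations automated by that script correspond to OpenStack CLI calls such as the
+following (the exact objects and names are defined by the script itself; the values below are indicative only):
+
+.. code-block:: console
+
+   # indicative examples only: network resources for ONAP-deployed VNFs
+   openstack network create onap-vnf-net
+   openstack subnet create onap-vnf-subnet --network onap-vnf-net --subnet-range 10.0.10.0/24
+
+   # indicative examples only: a VNF image and a flavor
+   openstack image create ubuntu-16.04 --file ubuntu-16.04.qcow2 --disk-format qcow2 --container-format bare
+   openstack flavor create onap.small --vcpus 2 --ram 4096 --disk 40
+
+   # tear-down deletes the same objects (automated by the script's -del/--delete option)
+   openstack network delete onap-vnf-net
+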
Jenkins (or more precisely JJB: Jenkins Job Builder) will be used for Continuous Integration in OPNFV releases,
to ensure that the latest master branch of Auto is always working. The first 3 tasks in the pipeline would be:
-install OpenStack instance via OPNFV installer (Fuel/MCP for example), configure the OpenStack instance for ONAP,
-install ONAP (using the OpenStack instance network IDs in the ONAP YAML file).
+install OpenStack instance via an OPNFV installer (Fuel/MCP, Compass4NFV, Apex/TripleO, Daisy4NFV), configure
+the OpenStack instance for ONAP, install ONAP (using the OpenStack instance network IDs in the ONAP YAML file).
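+
+As a sketch, these three pipeline tasks could map to the Auto scripts described later in this document (the actual
+JJB job definitions may differ; the task-to-script mapping below is indicative only):
+
+.. code-block:: console
+
+   # task 1: install an OpenStack instance via an OPNFV installer (Fuel/MCP shown here)
+   bash ci/deploy-opnfv-fuel-ubuntu.sh
+   # task 2: configure the OpenStack instance for ONAP
+   python setup/VIMs/OpenStack/auto_script_config_openstack_for_onap.py
+   # task 3: install ONAP on Kubernetes
+   bash ci/deploy-onap.sh
+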
Moreover, Auto will offer an API, which can be imported as a module, and can be accessed for example
by a web application. The following diagram shows the planned structure for the Auto Git repository,
@@ -96,8 +115,9 @@ Pre-configuration activities
The following resources will be required for the initial version of Auto:
* at least two LaaS (OPNFV Lab-as-a-Service) pods (or equivalent in another lab), with their associated network
- information. Later, other types of target pods will be supported, such as clusters (physical bare metal or virtual).
- The pods can be either x86 or Arm CPU architectures.
+ information. Later, other types of target pods will be supported, such as clusters (physical bare-metal or virtual).
+ The pods can be based on either x86 or Arm CPU architectures. An effort is currently ongoing (by the ONAP Integration
+ team and the Auto team) to ensure that Arm binaries are available for all ONAP components in the official ONAP Docker registry.
* the `Auto Git repository <https://git.opnfv.org/auto/tree/>`_
(clone from `Gerrit Auto <https://gerrit.opnfv.org/gerrit/#/admin/projects/auto>`_)
@@ -106,7 +126,14 @@ The following resources will be required for the initial version of Auto:
Hardware configuration
======================
-<TBC; large servers, at least 512G RAM, 1TB storage, 80-100 CPU threads>
+ONAP needs relatively large servers (at least 512G RAM, 1TB storage, 80-100 CPU threads). Initial deployment
+attempts on single servers did not complete. Current attempts use 3-server clusters, on bare-metal.
+
+For initial VNF deployment environments, virtual deployments by OPNFV installers on a single server should suffice.
+Later, if many large VNFs are deployed for the Auto test cases, and if heavy traffic is generated, more servers
+might be necessary. Also, if many environment parameters are considered, full executions of all test cases
+on all environment configurations could take a long time, so parallel executions of independent test case batches
+on multiple sets of servers and clusters might be considered.
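+
+As a quick sanity check before attempting an ONAP deployment, the resources of a candidate server can be inspected,
+for example:
+
+.. code-block:: console
+
+   nproc     # number of CPU threads (80-100 recommended for ONAP)
+   free -g   # available RAM in GB (at least 512G recommended)
+   df -h     # available storage (at least 1TB recommended)
+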
@@ -123,10 +150,10 @@ Current Auto work in progress is captured in the
OPNFV with OpenStack
~~~~~~~~~~~~~~~~~~~~
-The Auto installation uses the Fuel/MCP installer for the OPNFV environment (see the
+The first Auto installation used the Fuel/MCP installer for the OPNFV environment (see the
`OPNFV download page <https://www.opnfv.org/software/downloads>`_).
-The following figure summarizes the two installation cases: virtual or baremetal.
+The following figure summarizes the two installation cases for Fuel: virtual or bare-metal.
This OPNFV installer starts by installing a Salt Master, which then configures
subnets and bridges, and installs VMs (e.g., for controllers and compute nodes)
and an OpenStack instance with predefined credentials.
@@ -134,8 +161,8 @@ and an OpenStack instance with predefined credentials.
.. image:: auto-OPFNV-fuel.png
-The Auto version of OPNFV installation configures additional resources for the OpenStack virtual pod,
-as compared to the default installation. Examples of manual steps are as follows:
+The Auto version of OPNFV installation configures additional resources for the OpenStack virtual pod
+(more virtual CPUs and more RAM), as compared to the default installation. Examples of manual steps are as follows:
.. code-block:: console
@@ -185,6 +212,17 @@ Note:
* however, in the case of ARM, the OPNFV installation will fail, because there isn't enough space to install all required packages into
the cloud image.
+Using the above as a starting point, Auto-specific scripts have been developed, one for each of the 4 OPNFV installers:
+Fuel/MCP, Compass4NFV, Apex/TripleO, and Daisy4NFV. The instructions for virtual deployments from each of these installers
+have been used, and sometimes expanded and clarified (adding missing details or steps).
+They can be found in the `Auto repository <https://git.opnfv.org/auto/tree/>`_ , under the ci directory:
+
+* deploy-opnfv-fuel-ubuntu.sh
+* deploy-opnfv-compass-ubuntu.sh
+* deploy-opnfv-apex-centos.sh
+* deploy-opnfv-daisy-centos.sh
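+
+These scripts are intended to be run from a jump host with access to the target pod; the usage below is indicative
+only (prerequisites and parameters depend on the chosen installer):
+
+.. code-block:: console
+
+   git clone https://gerrit.opnfv.org/gerrit/auto
+   cd auto/ci
+   bash deploy-opnfv-fuel-ubuntu.sh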
+
+
ONAP on Kubernetes
~~~~~~~~~~~~~~~~~~
@@ -193,13 +231,13 @@ An ONAP installation on OpenStack has also been investigated, but we focus here
the ONAP on Kubernetes version.
The initial focus is on x86 architectures. The ONAP DCAE component for a while was not operational
-on Kubernetes, and had to be installed separately on OpenStack. So the ONAP instance was a hybrid,
-with all components except DCAE running on Kubernetes, and DCAE running separately on OpenStack.
+on Kubernetes (with ONAP Amsterdam), and had to be installed separately on OpenStack. So the ONAP
+instance was a hybrid, with all components except DCAE running on Kubernetes, and DCAE running
+separately on OpenStack. Starting with ONAP Beijing, DCAE also runs on Kubernetes.
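+
+Whether the DCAE components are effectively running on Kubernetes can be verified with standard kubectl commands,
+for example:
+
+.. code-block:: console
+
+   kubectl get pods --all-namespaces | grep -i dcae
+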
For Arm architectures, specialized Docker images are being developed to provide Arm architecture
-binary compatibility.
-
-The goal for Auto is to use an ONAP instance where DCAE also runs on Kubernetes, for both architectures.
+binary compatibility. See the `Auto Release Notes <https://docs.opnfv.org/en/latest/submodules/auto/docs/release/release-notes/index.html#auto-releasenotes>`_
+for more details on the availability status of these Arm images in the ONAP Docker registry.
The ONAP reference for this installation is detailed `here <http://onap.readthedocs.io/en/latest/submodules/oom.git/docs/oom_user_guide.html>`_.
@@ -218,6 +256,12 @@ Examples of manual steps for the deploy procedure are as follows:
9 cd ../oneclick
10 ./createAll.bash -n onap
+Several automation efforts to integrate the ONAP installation in Auto CI are in progress.
+One effort involves using a 3-server cluster at OPNFV Pharos LaaS (Lab-as-a-Service).
+The script is available in the `Auto repository <https://git.opnfv.org/auto/tree/>`_, under the ci directory:
+
+* deploy-onap.sh
+
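+Once that script completes, the progress of the ONAP deployment can be monitored with standard Kubernetes and Helm
+commands, for example (indicative only):
+
+.. code-block:: console
+
+   kubectl get pods --all-namespaces      # ONAP pods should eventually reach the Running or Completed state
+   kubectl get services --all-namespaces  # exposed ONAP service endpoints
+   helm list                              # Helm releases, when ONAP is deployed via OOM Helm charts
+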
ONAP configuration
@@ -248,7 +292,9 @@ Traffic Generator configuration
Test Case software installation and execution control
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-<TBC>
+<TBC; mention the management of multiple environments (characterized by their parameters), execution of all test cases
+in each environment, only a subset in official OPNFV CI/CD Jenkins due to size and time limits; then posting and analysis
+of results; failures lead to bug-fixing, successes lead to analysis for comparisons and fine-tuning>
@@ -272,7 +318,7 @@ Auto Wiki pages:
OPNFV documentation on Auto:
-* `Auto release notes <http://docs.opnfv.org/en/latest/release/release-notes.html>`_
+* `Auto release notes <https://docs.opnfv.org/en/latest/submodules/auto/docs/release/release-notes/index.html#auto-releasenotes>`_
* `Auto use case user guides <http://docs.opnfv.org/en/latest/submodules/auto/docs/release/userguide/index.html#auto-userguide>`_
diff --git a/docs/release/configguide/auto-installTarget-initial.png b/docs/release/configguide/auto-installTarget-initial.png
index 380738d..465b468 100644
--- a/docs/release/configguide/auto-installTarget-initial.png
+++ b/docs/release/configguide/auto-installTarget-initial.png
Binary files differ