-rw-r--r--  ci/deploy-opnfv-daisy-centos.sh                             179
-rw-r--r--  docs/release/configguide/Auto-featureconfig.rst             100
-rw-r--r--  docs/release/configguide/auto-installTarget-initial.png     bin 31484 -> 35994 bytes
-rw-r--r--  docs/release/release-notes/Auto-release-notes.rst           173
-rw-r--r--  docs/release/release-notes/auto-proj-parameters.png         bin 0 -> 32716 bytes
-rw-r--r--  docs/release/release-notes/auto-project-activities.png      bin 58789 -> 25995 bytes
6 files changed, 373 insertions, 79 deletions
diff --git a/ci/deploy-opnfv-daisy-centos.sh b/ci/deploy-opnfv-daisy-centos.sh
new file mode 100644
index 0000000..664ba55
--- /dev/null
+++ b/ci/deploy-opnfv-daisy-centos.sh
@@ -0,0 +1,179 @@
+#!/usr/bin/env bash
+
+# /usr/bin/env bash or /bin/bash ? /usr/bin/env bash is more environment-independent
+# beware of files which were edited in Windows, and have invisible \r end-of-line characters, causing Linux errors
+
+##############################################################################
+# Copyright (c) 2018 Wipro Limited and others.
+#
+# All rights reserved. This program and the accompanying materials
+# are made available under the terms of the Apache License, Version 2.0
+# which accompanies this distribution, and is available at
+# http://www.apache.org/licenses/LICENSE-2.0
+##############################################################################
+
+# OPNFV contribution guidelines Wiki page:
+# https://wiki.opnfv.org/display/DEV/Contribution+Guidelines
+
+# OPNFV/Auto project:
+# https://wiki.opnfv.org/pages/viewpage.action?pageId=12389095
+
+
+# localization control: force script to use default language for output, and force sorting to be bytewise
+# ("C" is from C language, represents "safe" locale everywhere)
+# (result: the script will consider only basic ASCII characters and disable UTF-8 multibyte match)
+export LANG=C
+export LC_ALL=C
+
+
+###############################################################################
+## installation of OpenStack via OPNFV Daisy4nfv, on CentOS, virtual deployment
+###############################################################################
+# reference manual: https://docs.opnfv.org/en/stable-fraser/submodules/daisy/docs/release/installation/index.html#daisy-installation
+# page for virtual deployment: https://docs.opnfv.org/en/stable-fraser/submodules/daisy/docs/release/installation/vmdeploy.html
+
+echo "*** begin AUTO install: OPNFV Daisy4nfv"
+
+# check OS version
+echo "*** print OS version (must be CentOS 7.2 or later)"
+cat /etc/*release
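The check above only prints the release; a minimal sketch of an explicit version guard (the helper name `os_major_version` is illustrative, not part of the script):

```shell
# hedged sketch: extract the major version from an os-release VERSION_ID value,
# so the "CentOS 7.2 or later" requirement can be checked programmatically
os_major_version() {
    # $1 = VERSION_ID string, e.g. "7.4.1708"
    echo "${1%%.*}"
}

# example usage against the live system (commented out; file assumed present):
# . /etc/os-release
# [ "$(os_major_version "$VERSION_ID")" -ge 7 ] || echo "CentOS 7+ required" >&2
```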
+
+# make sure cp is not aliased or a function; same for mv and rm
+# (2>/dev/null avoids an error if no alias exists)
+unalias cp 2>/dev/null
+unset -f cp
+unalias mv 2>/dev/null
+unset -f mv
+unalias rm 2>/dev/null
+unset -f rm
+
+# Manage Nested Virtualization
+echo "*** ensure Nested Virtualization is enabled on Intel x86"
+echo "*** nested flag before:"
+cat /sys/module/kvm_intel/parameters/nested
+sudo rm -f /etc/modprobe.d/kvm-nested.conf
+# write the modprobe options via sudo tee, since a plain shell redirection
+# to /etc would fail for a non-root user
+{ printf "options kvm-intel nested=1\n";\
+  printf "options kvm-intel enable_shadow_vmcs=1\n";\
+  printf "options kvm-intel enable_apicv=1\n";\
+  printf "options kvm-intel ept=1\n"; } | sudo tee /etc/modprobe.d/kvm-nested.conf >/dev/null
+sudo modprobe -r kvm_intel
+sudo modprobe -a kvm_intel
+echo "*** nested flag after:"
+cat /sys/module/kvm_intel/parameters/nested
+
+echo "*** verify status of modules in the Linux Kernel: kvm_intel module should be loaded for x86_64 machines"
+lsmod | grep kvm_
+grep kvm_ < /proc/modules
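The script assumes an Intel x86 host (kvm_intel). Should it ever be generalized to AMD hosts, vendor detection could be sketched as follows (the helper name is illustrative, not part of the script):

```shell
# map a /proc/cpuinfo vendor_id line to the matching KVM kernel module name
pick_kvm_module() {
    case "$1" in
        *GenuineIntel*) echo kvm_intel ;;
        *AuthenticAMD*) echo kvm_amd ;;
        *) echo unknown ;;
    esac
}

# example usage against the live CPU info (commented out):
# pick_kvm_module "$(grep -m1 vendor_id /proc/cpuinfo)"
```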
+
+# download tools: git, kvm, libvirt, python-yaml
+sudo yum -y install git
+sudo yum -y install kvm
+sudo yum -y install libvirt
+sudo yum info libvirt
+sudo yum info qemu-kvm
+sudo yum -y install python-yaml
+
+
+# make sure SELinux is enforced (Security-Enhanced Linux)
+sudo setenforce 1
+echo "getenforce: $(getenforce)"
+
+# Restart the libvirtd daemon:
+sudo service libvirtd restart
+# Verify that the kvm module is loaded; you should see amd or intel depending on the hardware:
+lsmod | grep kvm
+# Note: to test, issue a virsh command to ensure local root connectivity:
+# sudo virsh sysinfo
+
+
+
+# update everything ("yum upgrade" would be riskier than "yum update", as it also removes obsolete packages)
+# (note: can take several minutes; may not be necessary)
+sudo yum -y update
+
+# prepare Daisy installation directory
+export INSTALLDIR=/opt/opnfv-daisy
+mkdir -p "$INSTALLDIR"
+cd "$INSTALLDIR" || exit 1
+
+# oslo-config, needed in daisy/deploy/get_conf.py
+sudo curl -O https://bootstrap.pypa.io/get-pip.py
+hash -r
+python get-pip.py --no-warn-script-location
+pip install --upgrade oslo-config
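The get-pip.py fetch above is a single-shot curl; a hedged sketch of a retry wrapper for flaky networks (the helper name is illustrative, not part of the script):

```shell
# retry a download a few times before giving up
fetch_with_retries() {
    # $1 = URL, $2 = output file; up to 3 attempts, short pause between them
    local attempt
    for attempt in 1 2 3; do
        curl -fsSL "$1" -o "$2" && return 0
        echo "download attempt ${attempt} failed, retrying..." >&2
        sleep 2
    done
    return 1
}

# example usage (commented out):
# fetch_with_retries https://bootstrap.pypa.io/get-pip.py get-pip.py
```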
+
+
+# retrieve Daisy4nfv repository
+git clone https://gerrit.opnfv.org/gerrit/daisy
+cd daisy
+
+
+
+# OPTION 1: master repo and latest bin file: May 17th 2018
+# Download latest bin file from http://artifacts.opnfv.org/daisy.html and name it opnfv.bin
+curl http://artifacts.opnfv.org/daisy/opnfv-2018-05-17_14-00-32.bin -o opnfv.bin
+# make opnfv.bin executable
+chmod +x opnfv.bin
+
+# OPTION 2: stable release: Fraser 6.0 (so, checkout to stable Fraser release opnfv-6.0)
+# Download matching bin file from http://artifacts.opnfv.org/daisy.html and name it opnfv.bin
+#git checkout opnfv.6.0 # as per Daisy4nfv instructions, but does not work
+#git checkout stable/fraser
+#curl http://artifacts.opnfv.org/daisy/fraser/opnfv-6.0.iso -o opnfv.bin
+# make opnfv.bin executable
+#chmod +x opnfv.bin
+
+
+
+# The deploy.yaml file is the inventory template of deployment nodes:
+# error from doc:  "./deploy/conf/vm_environment/zte-virtual1/deploy.yml"
+# correct path:    "./deploy/config/vm_environment/zte-virtual1/deploy.yml"
+# You can write your own name/roles reference into it:
+# name – Host name for deployment node after installation.
+# roles – Components deployed.
+# note: ./templates/virtual_environment/ contains xml files, for networks and VMs
+
+
+# prepare config dir for Auto lab in daisy dir, and copy deploy and network YAML files from default files (virtual1 or virtual2)
+export AUTO_DAISY_LAB_CONFIG1=labs/auto_daisy_lab/virtual1/daisy/config
+export DAISY_DEFAULT_ENV1=deploy/config/vm_environment/zte-virtual1
+mkdir -p $AUTO_DAISY_LAB_CONFIG1
+cp $DAISY_DEFAULT_ENV1/deploy.yml $AUTO_DAISY_LAB_CONFIG1
+cp $DAISY_DEFAULT_ENV1/network.yml $AUTO_DAISY_LAB_CONFIG1
+
+export AUTO_DAISY_LAB_CONFIG2=labs/auto_daisy_lab/virtual2/daisy/config
+export DAISY_DEFAULT_ENV2=deploy/config/vm_environment/zte-virtual2
+mkdir -p $AUTO_DAISY_LAB_CONFIG2
+cp $DAISY_DEFAULT_ENV2/deploy.yml $AUTO_DAISY_LAB_CONFIG2
+cp $DAISY_DEFAULT_ENV2/network.yml $AUTO_DAISY_LAB_CONFIG2
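A minimal sanity check over the copied files could look like this (the helper name is illustrative; the file names are the ones copied above):

```shell
# verify each expected config file exists and is non-empty in a lab config dir
check_lab_config() {
    local dir="$1" missing=0 f
    for f in deploy.yml network.yml; do
        if [ ! -s "${dir}/${f}" ]; then
            echo "missing or empty: ${dir}/${f}" >&2
            missing=1
        fi
    done
    return "$missing"
}

check_lab_config "$AUTO_DAISY_LAB_CONFIG1" || echo "virtual1 config incomplete" >&2
check_lab_config "$AUTO_DAISY_LAB_CONFIG2" || echo "virtual2 config incomplete" >&2
```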
+
+# Note:
+# - zte-virtual1 config files deploy OpenStack with five nodes (3 LB nodes and 2 compute nodes).
+# - zte-virtual2 config files deploy an all-in-one OpenStack.
+
+# run deploy script, scenario os-nosdn-nofeature-ha, multinode OpenStack
+sudo ./ci/deploy/deploy.sh -L "$(cd ./;pwd)" -l auto_daisy_lab -p virtual1 -s os-nosdn-nofeature-ha
+
+# run deploy script, scenario os-nosdn-nofeature-noha, all-in-one OpenStack
+# sudo ./ci/deploy/deploy.sh -L "$(cd ./;pwd)" -l auto_daisy_lab -p virtual2 -s os-nosdn-nofeature-noha
+
+
+# Notes about deploy.sh:
+# The value after -L should be an absolute path which points to the directory which includes $AUTO_DAISY_LAB_CONFIG directory.
+# The value after -p parameter (virtual1 or virtual2) should match the one selected for $AUTO_DAISY_LAB_CONFIG.
+# The value after -l parameter (e.g. auto_daisy_lab) should match the lab name selected for $AUTO_DAISY_LAB_CONFIG, after labs/ .
+# Scenario (-s parameter): "os-nosdn-nofeature-ha" is used for deploying multinode OpenStack (virtual1)
+# Scenario (-s parameter): "os-nosdn-nofeature-noha" is used for deploying all-in-one OpenStack (virtual2)
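The pod/scenario pairing described above can be encoded in a small helper, as a sketch only (the function name is illustrative, not part of deploy.sh):

```shell
# map a POD name to its matching deploy scenario, per the pairing above
scenario_for_pod() {
    case "$1" in
        virtual1) echo os-nosdn-nofeature-ha ;;
        virtual2) echo os-nosdn-nofeature-noha ;;
        *) return 1 ;;
    esac
}

# example usage (commented out):
# sudo ./ci/deploy/deploy.sh -L "$(pwd)" -l auto_daisy_lab -p virtual1 \
#      -s "$(scenario_for_pod virtual1)"
```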
+
+# more details on deploy.sh OPTIONS:
+# -B PXE Bridge for booting Daisy Master, optional
+# -D Dry-run, does not perform deployment, will be deleted later
+# -L Securelab repo absolute path, optional
+# -l LAB name, necessary
+# -p POD name, necessary
+# -r Remote workspace in target server, optional
+# -w Workdir for temporary usage, optional
+# -h Print this message and exit
+# -s Deployment scenario
+# -S Skip recreate Daisy VM during deployment
+
+# When deployed successfully, the floating IP of OpenStack is 10.20.11.11, the login account is "admin" and the password is "keystone"
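As a hedged post-deployment check, one could poll the floating IP noted above until it accepts TCP connections (the port, 80, and the helper name are assumptions; /dev/tcp requires bash):

```shell
# poll host:port until a TCP connection succeeds, or give up after N tries
wait_for_endpoint() {
    local host="$1" port="$2" tries="${3:-30}" i
    for i in $(seq 1 "$tries"); do
        # open (and immediately close) fd 3 to the endpoint inside a subshell
        if (exec 3<>"/dev/tcp/${host}/${port}") 2>/dev/null; then
            echo "up: ${host}:${port}"
            return 0
        fi
        sleep 5
    done
    echo "timeout waiting for ${host}:${port}" >&2
    return 1
}

# example usage (commented out; IP from the note above, port assumed):
# wait_for_endpoint 10.20.11.11 80
```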
diff --git a/docs/release/configguide/Auto-featureconfig.rst b/docs/release/configguide/Auto-featureconfig.rst
index ed68069..8a32300 100644
--- a/docs/release/configguide/Auto-featureconfig.rst
+++ b/docs/release/configguide/Auto-featureconfig.rst
@@ -17,9 +17,14 @@ Goal
The goal of `Auto <http://docs.opnfv.org/en/latest/submodules/auto/docs/release/release-notes/index.html#auto-releasenotes>`_
installation and configuration is to prepare an environment where the
`Auto use cases <http://docs.opnfv.org/en/latest/submodules/auto/docs/release/userguide/index.html#auto-userguide>`_
-can be assessed, i.e. where the corresponding test cases can be executed and their results can be collected.
+can be assessed, i.e. where the corresponding test cases can be executed and their results can be collected for analysis.
+See the `Auto Release Notes <https://docs.opnfv.org/en/latest/submodules/auto/docs/release/release-notes/index.html#auto-releasenotes>`_
+for a discussion of the test results analysis loop.
An instance of ONAP needs to be present, as well as a number of deployed VNFs, in the scope of the use cases.
+Simulated traffic needs to be generated, and then test cases can be executed. There are multiple parameters to
+the Auto environment, and the same set of test cases will be executed on each environment, so as to be able to
+evaluate the influence of each environment parameter.
The initial Auto use cases cover:
@@ -29,35 +34,44 @@ The initial Auto use cases cover:
* **Enterprise vCPE** (automation, cost optimization, and performance assurance of enterprise connectivity to Data Centers
and the Internet)
-The general idea of Auto is to install an OPNFV environment (comprising at least one Cloud Manager),
+The general idea of the Auto feature configuration is to install an OPNFV environment (comprising at least one Cloud Manager),
an ONAP instance, ONAP-deployed VNFs as required by use cases, possibly additional cloud managers not
already installed during the OPNFV environment setup, traffic generators, and the Auto-specific software
for the use cases (which can include test frameworks such as `Robot <http://robotframework.org/>`_ or
`Functest <http://docs.opnfv.org/en/latest/submodules/functest/docs/release/release-notes/index.html#functest-releasenotes>`_).
+
The ONAP instance needs to be configured with policies and closed-loop controls (also as required by use cases),
and the test framework controls the execution and result collection of all the test cases. Then, test case execution
-results are analyzed, so as to fine-tune policies and closed-loop controls.
+results can be analyzed, so as to fine-tune policies and closed-loop controls, and to compare environment parameters.
-The following diagram illustrates two execution environments, for x86 architectures and for Arm architectures.
+The following diagram illustrates execution environments, for x86 architectures and for Arm architectures,
+and other environment parameters (see the Release Notes for a more detailed discussion on the parameters).
The installation process depends on the underlying architecture, since certain components may require a
specific binary-compatible version for a given x86 or Arm architecture. The preferred variant of ONAP is one
that runs on Kubernetes, while all VNF types are of interest to Auto: VM-based or containerized (on any cloud
-manager), for x86 or for Arm. The initial VM-based VNFs will cover OpenStack, and in future versions,
-additional cloud managers will be considered. The configuration of ONAP and of test cases should not depend
-on the architecture.
+manager), for x86 or for Arm. In fact, even PNFs could be considered, to support the evaluation of hybrid PNF/VNF
+transition deployments (ONAP can also manage legacy PNFs).
+
+The initial VM-based VNFs will cover OpenStack, and in future Auto releases, additional cloud managers will be considered.
+The configuration of ONAP and of test cases should not depend on the underlying architecture and infrastructure.
.. image:: auto-installTarget-generic.png
-For each component, various installer tools will be selected (based on simplicity and performance), and
-may change from one Auto release to the next. For example, the most natural installer for ONAP should be
-OOM (ONAP Operations Manager).
+For each component, various installer tools will be considered (as environment parameters), so as to enable comparison,
+as well as ready-to-use setups for Auto end-users. For example, the most natural installer for ONAP would be
+OOM (ONAP Operations Manager). For the OPNFV infrastructure, supported installer projects will be used: Fuel/MCP,
+Compass4NFV, Apex/TripleO, Daisy4NFV. Note that JOID was last supported in OPNFV Fraser 6.2, and is not supported
+anymore as of Gambia 7.0.
The initial version of Auto will focus on OpenStack VM-based VNFs, onboarded and deployed via ONAP API
(not by ONAP GUI, for the purpose of automation). ONAP is installed on Kubernetes. Two or more servers from LaaS
are used: one or more to support an OpenStack instance as provided by the OPNFV installation via Fuel/MCP or other
-OPNFV installers (Compass4NFV, Apex/TripleO, Daisy4NFV, JOID), and the other(s) to support ONAP with Kubernetes
+OPNFV installers (Compass4NFV, Apex/TripleO, Daisy4NFV), and the other(s) to support ONAP with Kubernetes
and Docker. Therefore, the VNF execution environment is composed of the server(s) with the OpenStack instance(s).
+Initial tests will also include ONAP instances installed on bare-metal servers (i.e. not directly on an OPNFV
+infrastructure; the ONAP/OPNFV integration can start at the VNF environment level; but ultimately, ONAP should
+be installed within an OPNFV infrastructure, for full integration).
.. image:: auto-installTarget-initial.png
@@ -75,12 +89,17 @@ SDK, or OpenStack CLI, or even OpenStack Heat templates) would populate the Open
.. image:: auto-OS-config4ONAP.png
+That script can also delete these created objects, so it can be used in tear-down procedures as well
+(use the -del or --delete option). It is located in the `Auto repository <https://git.opnfv.org/auto/tree/>`_ ,
+under the setup/VIMs/OpenStack directory:
+
+* auto_script_config_openstack_for_onap.py
Jenkins (or more precisely JJB: Jenkins Job Builder) will be used for Continuous Integration in OPNFV releases,
to ensure that the latest master branch of Auto is always working. The first 3 tasks in the pipeline would be:
-install OpenStack instance via OPNFV installer (Fuel/MCP for example), configure the OpenStack instance for ONAP,
-install ONAP (using the OpenStack instance network IDs in the ONAP YAML file).
+install OpenStack instance via an OPNFV installer (Fuel/MCP, Compass4NFV, Apex/TripleO, Daisy4NFV), configure
+the OpenStack instance for ONAP, install ONAP (using the OpenStack instance network IDs in the ONAP YAML file).
Moreover, Auto will offer an API, which can be imported as a module, and can be accessed for example
by a web application. The following diagram shows the planned structure for the Auto Git repository,
@@ -96,8 +115,9 @@ Pre-configuration activities
The following resources will be required for the initial version of Auto:
* at least two LaaS (OPNFV Lab-as-a-Service) pods (or equivalent in another lab), with their associated network
- information. Later, other types of target pods will be supported, such as clusters (physical bare metal or virtual).
- The pods can be either x86 or Arm CPU architectures.
+ information. Later, other types of target pods will be supported, such as clusters (physical bare-metal or virtual).
+ The pods can be either x86 or Arm CPU architectures. An effort is currently ongoing (ONAP Integration team and Auto team)
+ to ensure Arm binaries are available for all ONAP components in the official ONAP Docker registry.
* the `Auto Git repository <https://git.opnfv.org/auto/tree/>`_
(clone from `Gerrit Auto <https://gerrit.opnfv.org/gerrit/#/admin/projects/auto>`_)
@@ -106,7 +126,14 @@ The following resources will be required for the initial version of Auto:
Hardware configuration
======================
-<TBC; large servers, at least 512G RAM, 1TB storage, 80-100 CPU threads>
+ONAP needs relatively large servers (at least 512G RAM, 1TB storage, 80-100 CPU threads). Initial deployment
+attempts on single servers did not complete. Current attempts use 3-server clusters, on bare-metal.
+
+For initial VNF deployment environments, virtual deployments by OPNFV installers on a single server should suffice.
+Later, if many large VNFs are deployed for the Auto test cases, and if heavy traffic is generated, more servers
+might be necessary. Also, if many environment parameters are considered, full executions of all test cases
+on all environment configurations could take a long time, so parallel executions of independent test case batches
+on multiple sets of servers and clusters might be considered.
@@ -123,10 +150,10 @@ Current Auto work in progress is captured in the
OPNFV with OpenStack
~~~~~~~~~~~~~~~~~~~~
-The Auto installation uses the Fuel/MCP installer for the OPNFV environment (see the
+The first Auto installation used the Fuel/MCP installer for the OPNFV environment (see the
`OPNFV download page <https://www.opnfv.org/software/downloads>`_).
-The following figure summarizes the two installation cases: virtual or baremetal.
+The following figure summarizes the two installation cases for Fuel: virtual or bare-metal.
This OPNFV installer starts with installing a Salt Master, which then configures
subnets and bridges, and install VMs (e.g., for controllers and compute nodes)
and an OpenStack instance with predefined credentials.
@@ -134,8 +161,8 @@ and an OpenStack instance with predefined credentials.
.. image:: auto-OPFNV-fuel.png
-The Auto version of OPNFV installation configures additional resources for the OpenStack virtual pod,
-as compared to the default installation. Examples of manual steps are as follows:
+The Auto version of OPNFV installation configures additional resources for the OpenStack virtual pod
+(more virtual CPUs and more RAM), as compared to the default installation. Examples of manual steps are as follows:
.. code-block:: console
@@ -185,6 +212,17 @@ Note:
* however, in the case of ARM, the OPNFV installation will fail, because there isn't enough space to install all required packages into
the cloud image.
+Using the above as starting point, Auto-specific scripts have been developed, for each of the 4 OPNFV installers Fuel/MCP,
+Compass4NFV, Apex/TripleO, Daisy4NFV. Instructions for virtual deployments from each of these installers have been used, and
+sometimes expanded and clarified (adding missing details or steps).
+They can be found in the `Auto repository <https://git.opnfv.org/auto/tree/>`_ , under the ci directory:
+
+* deploy-opnfv-fuel-ubuntu.sh
+* deploy-opnfv-compass-ubuntu.sh
+* deploy-opnfv-apex-centos.sh
+* deploy-opnfv-daisy-centos.sh
+
+
ONAP on Kubernetes
~~~~~~~~~~~~~~~~~~
@@ -193,13 +231,13 @@ An ONAP installation on OpenStack has also been investigated, but we focus here
the ONAP on Kubernetes version.
The initial focus is on x86 architectures. The ONAP DCAE component for a while was not operational
-on Kubernetes, and had to be installed separately on OpenStack. So the ONAP instance was a hybrid,
-with all components except DCAE running on Kubernetes, and DCAE running separately on OpenStack.
+on Kubernetes (with ONAP Amsterdam), and had to be installed separately on OpenStack. So the ONAP
+instance was a hybrid, with all components except DCAE running on Kubernetes, and DCAE running
+separately on OpenStack. Starting with ONAP Beijing, DCAE also runs on Kubernetes.
For Arm architectures, specialized Docker images are being developed to provide Arm architecture
-binary compatibility.
-
-The goal for Auto is to use an ONAP instance where DCAE also runs on Kubernetes, for both architectures.
+binary compatibility. See the `Auto Release Notes <https://docs.opnfv.org/en/latest/submodules/auto/docs/release/release-notes/index.html#auto-releasenotes>`_
+for more details on the availability status of these Arm images in the ONAP Docker registry.
The ONAP reference for this installation is detailed `here <http://onap.readthedocs.io/en/latest/submodules/oom.git/docs/oom_user_guide.html>`_.
@@ -218,6 +256,12 @@ Examples of manual steps for the deploy procedure are as follows:
9 cd ../oneclick
10 ./createAll.bash -n onap
+Several automation efforts to integrate the ONAP installation in Auto CI are in progress.
+One effort involves using a 3-server cluster at OPNFV Pharos LaaS (Lab-as-a-Service).
+The script is available in the `Auto repository <https://git.opnfv.org/auto/tree/>`_ , under the ci directory:
+
+* deploy-onap.sh
+
ONAP configuration
@@ -248,7 +292,9 @@ Traffic Generator configuration
Test Case software installation and execution control
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-<TBC>
+<TBC; mention the management of multiple environments (characterized by their parameters), execution of all test cases
+in each environment, only a subset in official OPNFV CI/CD Jenkins due to size and time limits; then posting and analysis
+of results; failures lead to bug-fixing, successes lead to analysis for comparisons and fine-tuning>
@@ -272,7 +318,7 @@ Auto Wiki pages:
OPNFV documentation on Auto:
-* `Auto release notes <http://docs.opnfv.org/en/latest/release/release-notes.html>`_
+* `Auto release notes <https://docs.opnfv.org/en/latest/submodules/auto/docs/release/release-notes/index.html#auto-releasenotes>`_
* `Auto use case user guides <http://docs.opnfv.org/en/latest/submodules/auto/docs/release/userguide/index.html#auto-userguide>`_
diff --git a/docs/release/configguide/auto-installTarget-initial.png b/docs/release/configguide/auto-installTarget-initial.png
index 380738d..465b468 100644
--- a/docs/release/configguide/auto-installTarget-initial.png
+++ b/docs/release/configguide/auto-installTarget-initial.png
Binary files differ
diff --git a/docs/release/release-notes/Auto-release-notes.rst b/docs/release/release-notes/Auto-release-notes.rst
index 26445b6..c30bd4c 100644
--- a/docs/release/release-notes/Auto-release-notes.rst
+++ b/docs/release/release-notes/Auto-release-notes.rst
@@ -7,13 +7,13 @@
Auto Release Notes
==================
-This document provides the release notes for the Fraser release of Auto.
+This document provides the release notes for the Gambia 7.0 release of Auto.
Important notes for this release
================================
-The initial release for Auto was in Fraser 6.0 (project inception: July 2017). This is the second point release, in Fraser 6.2.
+The initial release for Auto was in Fraser 6.0 (project inception: July 2017).
Summary
@@ -23,14 +23,14 @@ Overview
^^^^^^^^
OPNFV is an SDNFV system integration project for open-source components, which so far have been mostly limited to
-the NFVI+VIM as generally described by ETSI.
+the NFVI+VIM as generally described by `ETSI <https://www.etsi.org/technologies-clusters/technologies/nfv>`_.
In particular, OPNFV has yet to integrate higher-level automation features for VNFs and end-to-end Services.
-As an OPNFV project, Auto ("ONAP-Automated OPNFV") will focus on ONAP component integration and verification with
+As an OPNFV project, Auto (*ONAP-Automated OPNFV*) will focus on ONAP component integration and verification with
OPNFV reference platforms/scenarios, through primarily a post-install process, in order to avoid impact to OPNFV
-installer projects. As much as possible, this will use a generic installation/integration process (not specific to
-any OPNFV installer's technology).
+installer projects (Fuel/MCP, Compass4NFV, Apex/TripleO, Daisy4NFV). As much as possible, this will use a generic
+installation/integration process (not specific to any OPNFV installer's technology).
* `ONAP <https://www.onap.org/>`_ (a Linux Foundation Project) is an open source software platform that delivers
robust capabilities for the design, creation, orchestration, monitoring, and life cycle management of
@@ -38,7 +38,15 @@ any OPNFV installer's technology).
Auto aims at validating the business value of ONAP in general, but especially within an OPNFV infrastructure
(integration of ONAP and OPNFV). Business value is measured in terms of improved service quality (performance,
-reliability, ...) and OPEX reduction (VNF management simplification, power consumption reduction, ...).
+reliability, ...) and OPEX reduction (VNF management simplification, power consumption reduction, ...), as
+demonstrated by use cases.
+
+Auto also validates the availability of multi-architecture software (binary images and containers) for ONAP and OPNFV:
+CPUs (x86, ARM) and clouds (MultiVIM).
+
+In other words, Auto is a turnkey approach to automatically deploy an integrated open-source virtual network
+based on OPNFV (as infrastructure) and ONAP (as end-to-end service manager), that demonstrates business value
+to end-users (IT/Telco service providers, enterprises).
While all of ONAP is in scope, as it proceeds, the Auto project will focus on specific aspects of this integration
@@ -48,7 +56,7 @@ and verification in each release. Some example topics and work items include:
* How ONAP SDN-C uses OPNFV existing features, e.g. NetReady, in a two-layer controller architecture in which the
upper layer (global controller) is replaceable, and the lower layer can use different vendor’s local controller to
interact with SDN-C. For interaction with multiple cloud infrastructures, the MultiVIM ONAP component will be used.
-* How ONAP leverages OPNFV installers (Fuel/MCP, Compass4NFV, Apex/TripleO, Daisy4NFV, JOID) to provide a cloud
+* How ONAP leverages OPNFV installers (Fuel/MCP, Compass4NFV, Apex/TripleO, Daisy4NFV) to provide a cloud
instance (starting with OpenStack) on which to install the tool ONAP
* What data collection interface VNF and controllers provide to ONAP DCAE, and (through DCAE), to closed-loop control
functions such as Policy Tests which verify interoperability of ONAP automation/lifecycle features with specific NFVI
@@ -72,7 +80,7 @@ It is understood that:
.. image:: auto-proj-rn01.png
-The current ONAP architecture overview can be found `here <http://onap.readthedocs.io/en/latest/guides/onap-developer/architecture/onap-architecture.html>`_.
+The current ONAP architecture overview can be found `here <https://onap.readthedocs.io/en/latest/guides/onap-developer/architecture/onap-architecture.html>`_.
For reference, the ONAP-Beijing architecture diagram is replicated here:
@@ -89,17 +97,18 @@ Within OPNFV, Auto leverages tools and collaborates with other projects:
* FuncTest for software verification (CI/CD, Pass/Fail)
* Yardstick for metric management (quantitative measurements)
* VES (VNF Event Stream) and Barometer for VNF monitoring (feed to ONAP/DCAE)
+ * Edge Cloud as use case
* leverage OPNFV tools and infrastructure:
* Pharos as LaaS: transient pods (3-week bookings) and permanent Arm pod (6 servers)
- * possibly other labs from the community
+ * `WorksOnArm <http://worksonarm.com/cluster>`_ (`GitHub link <http://github.com/worksonarm/cluster>`_)
+ * possibly other labs from the community (Huawei pod-12, 6 servers, x86)
* JJB/Jenkins for CI/CD (and follow OPNFV scenario convention)
* Gerrit/Git for code and documents reviewing and archiving (similar to ONAP: Linux Foundation umbrella)
* follow OPNFV releases (Releng group)
-
Testability
^^^^^^^^^^^
@@ -123,21 +132,26 @@ value (i.e., find/determine policies and controls which yield optimized ONAP bus
More precisely, the following list shows parameters that could be applied to an Auto full run of test cases:
* Auto test cases for given use cases
-* OPNFV installer {Fuel/MCP, Compass4NFV, Apex/TripleO, Daisy4NFV, JOID}
+* OPNFV installer {Fuel/MCP, Compass4NFV, Apex/TripleO, Daisy4NFV}
* OPNFV availability scenario {HA, noHA}
-* cloud where ONAP runs {OpenStack, AWS, GCP, Azure, ...}
-* ONAP installation type {bare metal or virtual server, VM or container, ...} and options {MultiVIM single|distributed, ...}
-* VNFs {vFW, vCPE, vAAA, vDHCP, vDNS, vHSS, ...} and VNF-based services {vIMS, vEPC, ...}
+* environment where ONAP runs {bare metal servers, VMs from clouds (OpenStack, AWS, GCP, Azure, ...), containers}
+* ONAP installation type {bare metal, VM, or container, ...} and options {MultiVIM single|distributed, ...}
+* VNF types {vFW, vCPE, vAAA, vDHCP, vDNS, vHSS, ...} and VNF-based services {vIMS, vEPC, ...}
* cloud where VNFs run {OpenStack, AWS, GCP, Azure, ...}
-* VNF type {VM-based, container}
-* CPU architectures {x86/AMD64, ARM/aarch64} for ONAP software and for VNFs
+* VNF host type {VM, container}
+* CPU architectures {x86/AMD64, ARM/aarch64} for ONAP software and for VNF software; not really important for Auto software;
* pod size and technology (RAM, storage, CPU cores/threads, NICs)
-* traffic types and amounts/volumes
+* traffic types and amounts/volumes; traffic generators (although that should not really matter);
* ONAP configuration {especially policies and closed-loop controls; monitoring types for DCAE: VES, ...}
* versions of every component {Linux OS (Ubuntu, CentOS), OPNFV release, clouds, ONAP, VNFs, ...}
+The diagram below shows Auto parameters:
+
+.. image:: auto-proj-parameters.png
-Illustration of Auto analysis loop based on test case executions:
+
+The next figure is an illustration of the Auto analysis loop (design, configuration, execution, result analysis)
+based on test cases covering as many parameters as possible:
.. image:: auto-proj-tests.png
@@ -150,14 +164,14 @@ Auto currently defines three use cases: Edge Cloud (UC1), Resiliency Improvement
including end-to-end composite services of which a Cloud Manager may not be aware (VMs or containers could be
recovered by a Cloud Manager, but not necessarily an end-to-end service built on top of VMs or containers).
* enterprise-grade performance of vCPEs (certification during onboarding, then real-time performance assurance with
- SLAs and HA as well as scaling).
+ SLAs and HA, as well as scaling).
The use cases define test cases, which initially will be independent, but which might eventually be integrated to `FuncTest <https://wiki.opnfv.org/display/functest/Opnfv+Functional+Testing>`_.
-Additional use cases can be added in the future, such as vIMS (example: project Clearwater) or residential vHGW (virtual
-Home Gateways). The interest for vHGW is to reduce overall power consumption: even in idle mode, physical HGWs in
-residential premises consume a lot of energy. Virtualizing that service to the Service Provider edge data center would
-allow to minimize that consumption.
+Additional use cases can be added in the future, such as vIMS (example: project `Clearwater <http://www.projectclearwater.org/>`_)
+or residential vHGW (virtual Home Gateways). The interest for vHGW is to reduce overall power consumption: even in idle mode,
+physical HGWs in residential premises consume a lot of energy. Virtualizing that service to the Service Provider edge data center
+would help minimize that consumption.
Lab environment
@@ -172,39 +186,53 @@ x86 pod at UNH IOL.
A transition is in progress, to leverage OPNFV LaaS (Lab-as-a-Service) pods (`Pharos <https://labs.opnfv.org/>`_).
These pods can be booked for 3 weeks only (with an extension for a maximum of 2 weeks), so they are not a permanent resource.
-A repeatable automated installation procedure is being developed.
+For ONAP-Beijing, a repeatable automated installation procedure is being developed, using 3 Pharos servers (x86 for now).
+Also, a more permanent ONAP installation is in progress at a Huawei lab (pod-12, consisting of 6 x86 servers,
+1 as jump server, the other 5 with this example allocation: 3 for ONAP components, and 2 for an OPNFV infrastructure:
+OpenStack installed by Compass4NFV).
ONAP-based onboarding and deployment of VNFs is in progress (ONAP-Amsterdam pre-loading of VNFs must still be done outside
of ONAP: for VM-based VNFs, users need to prepare OpenStack stacks (using Heat templates), then make an instance snapshot
which serves as the binary image of the VNF).
-An initial version of a script to prepare an OpenStack instance for ONAP (creation of a public and a private network,
-with a router) has been developed. It leverages OpenStack SDK.
+A script to prepare an OpenStack instance for ONAP (creation of a public and a private network, with a router,
+pre-loading of images and flavors, creation of a security group and an ONAP user) has been developed. It leverages
+OpenStack SDK. It has a delete option, so it can be invoked to delete these objects for example in a tear-down procedure.
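The preparation and tear-down steps described above can be sketched with the standard OpenStack CLI. This is a hypothetical illustration only: the actual Auto script uses the OpenStack SDK (Python), and all resource names, the subnet range, and the password variable below are invented for the example.

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the OpenStack preparation flow described above.
# All names (onap-public-net, onap-router, ...) and the 10.0.0.0/24 range
# are illustrative, not those of the actual Auto script.
# Dry-run by default: set DRY_RUN=0 to actually execute against a cloud.
set -e

run() {
  if [ "${DRY_RUN:-1}" = "1" ]; then echo "$*"; else "$@"; fi
}

prepare() {
  run openstack network create onap-public-net
  run openstack network create onap-private-net
  run openstack subnet create --network onap-private-net \
      --subnet-range 10.0.0.0/24 onap-private-subnet
  run openstack router create onap-router
  run openstack router add subnet onap-router onap-private-subnet
  run openstack security group create onap-secgroup
  run openstack user create --password "${ONAP_PASS:-changeme}" onap-user
  # pre-loading of images and flavors would follow the same pattern
}

# Mirror of the script's delete option, e.g. for a tear-down procedure.
teardown() {
  run openstack user delete onap-user
  run openstack security group delete onap-secgroup
  run openstack router remove subnet onap-router onap-private-subnet
  run openstack router delete onap-router
  run openstack subnet delete onap-private-subnet
  run openstack network delete onap-private-net onap-public-net
}

"${1:-prepare}"
```

Defining tear-down as the exact mirror of preparation (in reverse order, since the subnet must be detached from the router before either can be deleted) is what makes the script safely re-runnable between test campaigns.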
Integration with Arm servers has started (exploring binary compatibility):
-* OpenStack is currently installed on a 6-server pod of Arm servers
+* The Auto project has a specific 6-server pod of Arm servers, which is currently loaned to the ONAP integration team
+  to build ONAP images
* A set of 14 additional Arm servers was deployed at UNH, for increased capacity
-* Arm-compatible Docker images are in the process of being developed
+* ONAP Docker registry: ONAP-specific images for ARM are being built, with the purpose of populating ONAP nexus2
+  (Maven2 artifacts) and nexus3 (Docker containers) repositories at the Linux Foundation. Docker images can be
+  multi-architecture: the manifest list of an image may contain one entry per architecture (for example two:
+  x86/AMD64 and ARM/aarch64). One of the ONAP-Casablanca architectural requirements is to be CPU-architecture
+  independent. There are almost 150 Docker containers in a complete ONAP instance. Currently, more disk space
+  is being added to the ARM nodes (Nova configuration, and/or additional physical storage).
+
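As background on the multi-architecture mechanism mentioned above: a Docker manifest list ("fat manifest") ties one image tag to several per-platform image manifests, and the client pulls the entry matching its CPU architecture. The following sketch parses a hand-written manifest list; the digests are placeholders, not real ONAP images.

```shell
# Illustrative only: a minimal Docker manifest list covering two
# architectures. Digests are placeholders, not real ONAP images.
manifest_list='{
  "schemaVersion": 2,
  "mediaType": "application/vnd.docker.distribution.manifest.list.v2+json",
  "manifests": [
    {"digest": "sha256:aaa", "platform": {"os": "linux", "architecture": "amd64"}},
    {"digest": "sha256:bbb", "platform": {"os": "linux", "architecture": "arm64"}}
  ]
}'

# List the architectures covered by the tag
# (a pulling client selects the matching entry automatically).
echo "$manifest_list" | python3 -c '
import json, sys
for entry in json.load(sys.stdin)["manifests"]:
    print(entry["platform"]["architecture"])
'
```

On a live registry, the same information can be read with `docker manifest inspect <image>` (an experimental Docker CLI feature in the versions of that period).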
-Test case implementation for the three use cases has started.
+Test case design and implementation for the three use cases have started.
OPNFV CI/CD integration with JJD (Jenkins Job Description) has started: see the Auto plan description
-`here <https://wiki.opnfv.org/display/AUTO/CI+Plan+for+Auto>`_. The permanent resource for that is the 6-server Arm
+`here <https://wiki.opnfv.org/display/AUTO/CI+for+Auto>`_. The permanent resource for that is the 6-server Arm
pod, hosted at UNH. The CI directory from the Auto repository is `here <https://git.opnfv.org/auto/tree/ci>`_.
+
Finally, the following figure illustrates Auto in terms of project activities:
.. image:: auto-project-activities.png
-Note: a demo was delivered at the OpenStack Summit in Vancouver on May 21st 2018, to illustrate the deployment of a WordPress application
-(WordPress is a platform for websites and blogs) deployed on a multi-architecture cloud (mix of x86 and Arm servers).
+Note: a demo was delivered at the OpenStack Summit in Vancouver on May 21st 2018, to illustrate the deployment of
+a WordPress application (WordPress is a platform for websites and blogs) deployed on a multi-architecture cloud (mix
+of x86 and Arm servers).
This shows how service providers and enterprises can diversify their data centers with servers of different architectures,
-and select architectures best suited to each use case (mapping application components to architectures: DBs, interactive servers,
-number-crunching modules, ...).
+and select architectures best suited to each use case (mapping application components to architectures: DBs,
+interactive servers, number-crunching modules, ...).
This prefigures how other examples such as ONAP, VIMs, and VNFs could also be deployed on heterogeneous multi-architecture
-environments (open infrastructure), orchestrated by Kubernetes. The Auto installation scripts could expand on that approach.
+environments (open infrastructure), orchestrated by Kubernetes. The Auto installation scripts covering all the parameters
+described above could expand on that approach.
.. image:: auto-proj-openstacksummit1805.png
@@ -218,13 +246,13 @@ Release Data
| **Project** | Auto |
| | |
+--------------------------------------+--------------------------------------+
-| **Repo/commit-ID** | auto/opnfv-6.2.0 |
+| **Repo/commit-ID** | auto/opnfv-7.0.0 |
| | |
+--------------------------------------+--------------------------------------+
-| **Release designation** | Fraser 6.2 |
+| **Release designation** | Gambia 7.0 |
| | |
+--------------------------------------+--------------------------------------+
-| **Release date** | 2018-06-29 |
+| **Release date** | 2018-11-02 |
| | |
+--------------------------------------+--------------------------------------+
| **Purpose of the delivery** | Official OPNFV release |
@@ -273,22 +301,66 @@ Point release 6.2:
* initial scripts for OPNFV CI/CD, registration of Jenkins slave on `Arm pod <https://build.opnfv.org/ci/view/auto/>`_
* updated script for configuring OpenStack instance for ONAP, using OpenStack SDK 0.14
-Notable activities since release 6.1, which may result in new features for Gambia 7.0:
+Point release 7.0:
+
+* progress on Docker registry of ONAP's Arm images
+* progress on ONAP installation script for 3-server cluster of UNH servers
+* CI scripts for OPNFV installers: Fuel/MCP (x86), Compass, Apex/TripleO (must run twice)
+* initial CI script for Daisy4NFV (work in progress)
+* JOID script, supported only up to release 6.2 (not maintained for Gambia 7.0)
+* completed script for configuring an OpenStack instance for ONAP, using OpenStack SDK 0.17
+* use of an additional lab resource for Auto development: 6-server x86 pod (huawei-pod12)
+
+
-* researching how to configure multiple Pharos servers in a cluster for Kubernetes
-* started to evaluate Compass4nfv as another OpenStack installer; issues with Python version (2 or 3)
-* common meeting with Functest
-* Plugfest: initiated collaboration with ONAP/MultiVIM (including support for ONAP installation)
**JIRA TICKETS for this release:**
+
+`JIRA Auto Gambia 7.0.0 Done <https://jira.opnfv.org/issues/?filter=12403>`_
+
+Manual selection of significant JIRA tickets for this version's highlights:
+
+--------------------------------------+--------------------------------------+
| **JIRA REFERENCE** | **SLOGAN** |
| | |
+--------------------------------------+--------------------------------------+
-| AUTO-38, auto-resiliency-vif-001: | UC2: validate VM suspension command |
-| 2/3 Test Logic | and measurement of Recovery Time |
+| AUTO-37 | Get DCAE running onto Pharos |
+| | deployment |
++--------------------------------------+--------------------------------------+
+| AUTO-42 | Use Compass4NFV to create an |
+| | OpenStack instance on a UNH pod |
++--------------------------------------+--------------------------------------+
+| AUTO-43 | String together scripts for Fuel, |
+| | Tool installation, ONAP preparation |
++--------------------------------------+--------------------------------------+
+| AUTO-44 | Build ONAP components for arm64 |
+| | platform |
++--------------------------------------+--------------------------------------+
+| AUTO-45 | CI: Jenkins definition of verify and |
+| | merge jobs |
++--------------------------------------+--------------------------------------+
+| AUTO-46 | Use Apex to create an OpenStack |
+| | instance on a UNH pod |
++--------------------------------------+--------------------------------------+
+| AUTO-47 | Install ONAP with Kubernetes on LaaS |
+| | |
++--------------------------------------+--------------------------------------+
+| AUTO-48 | Create documentation for ONAP |
+| | deployment with Kubernetes on LaaS |
++--------------------------------------+--------------------------------------+
+| AUTO-49 | Automate ONAP deployment with |
+| | Kubernetes on LaaS |
++--------------------------------------+--------------------------------------+
+| AUTO-51 | huawei-pod12: Prepare IDF and PDF |
+| | files |
++--------------------------------------+--------------------------------------+
+| AUTO-52 | Deploy a running ONAP instance on |
+| | huawei-pod12 |
++--------------------------------------+--------------------------------------+
+| AUTO-54 | Use Daisy4nfv to create an OpenStack |
+| | instance on a UNH pod |
+--------------------------------------+--------------------------------------+
| | |
| | |
@@ -319,7 +391,7 @@ Deliverables
Software deliverables
^^^^^^^^^^^^^^^^^^^^^
-6.2 release: in-progress install scripts, CI scripts, and test case implementations.
+7.0 release: in-progress Docker ARM images, install scripts, CI scripts, and test case implementations.
Documentation deliverables
@@ -341,9 +413,6 @@ Known Limitations, Issues and Workarounds
System Limitations
^^^^^^^^^^^^^^^^^^
-* ONAP still to be validated for Arm servers (many Docker images are ready)
-* ONAP installation still to be automated in a repeatable way, and need to configure cluster of Pharos servers
-
Known issues
@@ -393,8 +462,8 @@ None at this point.
References
==========
-For more information on the OPNFV Fraser release, please see:
-http://opnfv.org/fraser
+For more information on the OPNFV Gambia release, please see:
+http://opnfv.org/gambia
Auto Wiki pages:
diff --git a/docs/release/release-notes/auto-proj-parameters.png b/docs/release/release-notes/auto-proj-parameters.png
new file mode 100644
index 0000000..a0cbe2e
--- /dev/null
+++ b/docs/release/release-notes/auto-proj-parameters.png
Binary files differ
diff --git a/docs/release/release-notes/auto-project-activities.png b/docs/release/release-notes/auto-project-activities.png
index c50bd72..d25ac2a 100644
--- a/docs/release/release-notes/auto-project-activities.png
+++ b/docs/release/release-notes/auto-project-activities.png
Binary files differ