.. This work is licensed under a Creative Commons Attribution 4.0 International License.
.. http://creativecommons.org/licenses/by/4.0
.. SPDX-License-Identifier: CC-BY-4.0
.. (c) Open Platform for NFV Project, Inc. and its contributors


Introduction
============

This document describes the software and hardware reference frameworks used by Auto,
and provides guidelines on how to perform configurations and additional installations.


Goal
====

The goal of :ref:`Auto <auto-releasenotes>` installation and configuration is to prepare an environment
where the :ref:`Auto use cases <auto-userguide>` can be assessed, i.e. where the corresponding test cases
can be executed and their results can be collected for analysis. See the
:ref:`Auto Release Notes <auto-releasenotes>` for a discussion of the test results analysis loop.

An instance of ONAP needs to be present, as well as a number of deployed VNFs, in the scope of the use cases.
Simulated traffic needs to be generated, and then test cases can be executed. The Auto environment has multiple
parameters, and the same set of test cases will be executed in each environment, so as to be able to
evaluate the influence of each environment parameter.

The initial Auto use cases cover:

* **Edge Cloud** (increased autonomy and automation for managing Edge VNFs)
* **Resilience Improvements through ONAP** (reduced recovery time for VNFs and end-to-end services in case of failure
  or suboptimal performance)
* **Enterprise vCPE** (automation, cost optimization, and performance assurance of enterprise connectivity to Data Centers
  and the Internet)

The general idea of the Auto feature configuration is to install an OPNFV environment (comprising at least
one Cloud Manager), an ONAP instance, ONAP-deployed VNFs as required by the use cases, possibly additional
cloud managers not already installed during the OPNFV environment setup, traffic generators, and the
Auto-specific software for the use cases (which can include test frameworks such as
`Robot <http://robotframework.org/>`_ or :doc:`Functest <functest:release/release-notes>`).

The ONAP instance needs to be configured with policies and closed-loop controls (also as required by use cases),
and the test framework controls the execution and result collection of all the test cases. Then, test case execution
results can be analyzed, so as to fine-tune policies and closed-loop controls, and to compare environment parameters.

The following diagram illustrates execution environments, for x86 architectures and for Arm architectures,
and other environment parameters (see the Release Notes for a more detailed discussion on the parameters).
The installation process depends on the underlying architecture, since certain components may require a
specific binary-compatible version for a given x86 or Arm architecture. The preferred variant of ONAP is one
that runs on Kubernetes, while all VNF types are of interest to Auto: VM-based or containerized (on any cloud
manager), for x86 or for Arm. In fact, even PNFs could be considered, to support the evaluation of hybrid PNF/VNF
transition deployments (ONAP also has the ability to manage legacy PNFs).

The initial VM-based VNFs will cover OpenStack, and in future Auto releases, additional cloud managers will be considered.
The configuration of ONAP and of test cases should not depend on the underlying architecture and infrastructure.

.. image:: auto-installTarget-generic.png


For each component, various installer tools will be considered (as environment parameters), so as to enable comparison,
as well as ready-to-use setups for Auto end-users. For example, the most natural installer for ONAP would be
OOM (ONAP Operations Manager). For the OPNFV infrastructure, supported installer projects will be used: Fuel/MCP,
Compass4NFV, Apex/TripleO, Daisy4NFV. Note that JOID was last supported in OPNFV Fraser 6.2, and is no longer
supported as of Gambia 7.0.

The initial version of Auto will focus on OpenStack VM-based VNFs, onboarded and deployed via the ONAP API
(not the ONAP GUI, for the purpose of automation). ONAP is installed on Kubernetes. Two or more servers from LaaS
are used: one or more to support an OpenStack instance as provided by the OPNFV installation via Fuel/MCP or other
OPNFV installers (Compass4NFV, Apex/TripleO, Daisy4NFV), and the other(s) to support ONAP with Kubernetes
and Docker. Therefore, the VNF execution environment is composed of the server(s) with the OpenStack instance(s).
Initial tests will also include ONAP instances installed on bare-metal servers (i.e. not directly on an OPNFV
infrastructure; the ONAP/OPNFV integration can start at the VNF environment level; but ultimately, ONAP should
be installed within an OPNFV infrastructure, for full integration).

.. image:: auto-installTarget-initial.png

ONAP/K8S has several variants. The initial variant considered by Auto is the basic one recommended by ONAP,
which relies on the Rancher installer and on an OpenStack instance providing VMs for the Rancher master and for the
Kubernetes cluster workers, as illustrated below for the ONAP Beijing release:

.. image:: auto-installTarget-ONAP-B.png


The OpenStack instance running VNFs may need to be configured as per ONAP expectations, for example creating
instances of ONAP projects/tenants, users, security groups, networks (private, public), connected to the
Internet by a Router, and making sure expected VM images and flavors are present. A script (using OpenStack
SDK, or OpenStack CLI, or even OpenStack Heat templates) would populate the OpenStack instance, as illustrated below:

.. image:: auto-OS-config4ONAP.png

That script can also delete the created objects, so it can be used in tear-down procedures as well
(using the -del or --delete option). It is located in the `Auto repository <https://git.opnfv.org/auto/tree/>`_ ,
under the setup/VIMs/OpenStack directory (an illustrative CLI sketch follows the list below):

* auto_script_config_openstack_for_onap.py
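
As an illustration only, part of what such a script automates could be performed manually with the OpenStack CLI.
All names below (project, user, networks, flavor, image) are examples, not the values used by the script:

.. code-block:: console

    # Illustrative CLI sketch; names and sizes are examples only.
    openstack project create onap
    openstack user create --project onap --password <password> onap_user
    openstack network create onap_private
    openstack subnet create --network onap_private --subnet-range 10.0.0.0/24 onap_subnet
    openstack router create onap_router
    openstack router set --external-gateway <public-network> onap_router
    openstack router add subnet onap_router onap_subnet
    openstack security group create onap_sg
    openstack flavor create --ram 8192 --vcpus 4 --disk 40 onap.large
    openstack image create --disk-format qcow2 --file <image-file>.qcow2 <image-name>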


Jenkins (or more precisely JJB: Jenkins Job Builder) will be used for Continuous Integration in OPNFV releases,
to ensure that the latest master branch of Auto is always working. The first 3 tasks in the pipeline would be:
install an OpenStack instance via an OPNFV installer (Fuel/MCP, Compass4NFV, Apex/TripleO, or Daisy4NFV), configure
the OpenStack instance for ONAP, and install ONAP (using the OpenStack instance network IDs in the ONAP YAML file).
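
As a rough sketch only (the actual job definitions are JJB YAML files; the lab/pod names and script paths below
are illustrative), these 3 tasks correspond to something like:

.. code-block:: console

    # Illustrative pipeline outline; paths and options are examples only.
    ci/deploy.sh -l <lab> -p <pod> -s os-nosdn-nofeature-noha -D |& tee deploy.log   # 1. install OpenStack
    python setup/VIMs/OpenStack/auto_script_config_openstack_for_onap.py             # 2. configure OpenStack for ONAP
    ci/deploy-onap.sh                                                                # 3. install ONAP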

Moreover, Auto will offer an API, which can be imported as a module, and can be accessed for example
by a web application. The following diagram shows the planned structure for the Auto Git repository,
supporting this module, as well as the installation scripts, test case software, utilities, and documentation.

.. image:: auto-repo-folders.png



Pre-configuration activities
============================

The following resources will be required for the initial version of Auto:

* at least two LaaS (OPNFV Lab-as-a-Service) pods (or the equivalent in another lab), with their associated network
  information. Later, other types of target pods will be supported, such as clusters (physical bare-metal or virtual).
  The pods can be of either x86 or Arm CPU architecture. An effort is currently ongoing (by the ONAP Integration team
  and the Auto team) to ensure that Arm binaries are available for all ONAP components in the official ONAP Docker registry.
* the `Auto Git repository <https://git.opnfv.org/auto/tree/>`_
  (clone from `Gerrit Auto <https://gerrit.opnfv.org/gerrit/#/admin/projects/auto>`_; an example clone command follows this list)
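
For example (anonymous HTTPS clone; the same pattern as the Fuel clone used later in this document):

.. code-block:: console

    git clone https://git.opnfv.org/auto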



Hardware configuration
======================

ONAP needs relatively large servers (at least 512G RAM, 1TB storage, 80-100 CPU threads). Initial deployment
attempts on single servers did not complete. Current attempts use 3-server clusters, on bare-metal.

For initial VNF deployment environments, virtual deployments by OPNFV installers on a single server should suffice.
Later, if many large VNFs are deployed for the Auto test cases, and if heavy traffic is generated, more servers
might be necessary. Also, if many environment parameters are considered, full executions of all test cases
on all environment configurations could take a long time, so parallel executions of independent test case batches
on multiple sets of servers and clusters might be considered.



Feature configuration
=====================

Environment installation
^^^^^^^^^^^^^^^^^^^^^^^^

Current Auto work in progress is captured in the
`Auto Lab Deployment wiki page <https://wiki.opnfv.org/display/AUTO/Auto+Lab+Deployment>`_.


OPNFV with OpenStack
~~~~~~~~~~~~~~~~~~~~

The first Auto installation used the Fuel/MCP installer for the OPNFV environment (see the
`OPNFV download page <https://www.opnfv.org/software/downloads>`_).

The following figure summarizes the two installation cases for Fuel: virtual or bare-metal.
This OPNFV installer starts by installing a Salt Master, which then configures
subnets and bridges, and installs VMs (e.g., for controllers and compute nodes)
and an OpenStack instance with predefined credentials.

.. image:: auto-OPFNV-fuel.png


The Auto version of the OPNFV installation configures additional resources for the OpenStack virtual pod
(more virtual CPUs and more RAM), as compared to the default installation. Examples of manual steps are as follows:

.. code-block:: console

    1. mkdir /opt/fuel
    2. cd /opt/fuel
    3. git clone https://git.opnfv.org/fuel
    4. cd fuel
    5. vi /opt/fuel/fuel/mcp/config/scenario/os-nosdn-nofeature-noha.yaml


These lines can be added to configure more resources:

.. code-block:: diff

       gtw01:
         ram: 2048
    +  cmp01:
    +    vcpus: 32
    +    ram: 196608
    +  cmp02:
    +    vcpus: 32
    +    ram: 196608


The final steps deploy OpenStack (duration: approximately 30 to 45 minutes).

.. code-block:: console

    # The following change provides more space to the VMs. The default is 100G per cmp0x node; this gives 350G each (700G total).
    6. sed -i mcp/scripts/lib.sh -e 's/\(qemu-img create.*\) 100G/\1 350G/g'

    # Then deploy OpenStack. It should take between 30 and 45 minutes:
    7. ci/deploy.sh -l UNH-LaaS -p virtual1 -s os-nosdn-nofeature-noha -D |& tee deploy.log

    # Lastly, to get access to the extra RAM and vCPUs, adjust the quotas (done on the controller at 172.16.10.36):
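    # (64 cores and 393216 MB of RAM cover the 2 x 32 vCPUs and 2 x 196608 MB configured above)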
    8. openstack quota set --cores 64 admin
    9. openstack quota set --ram 393216 admin


Note:

* With Linux kernel 4.4, the OPNFV installation does not work properly (this appears to be a known bug in 4.4, as it works correctly with 4.13):
  neither qemu-nbd nor kpartx is able to correctly create a mapping to the /dev/nbd0p1 partition in order to resize it to 3G (see the Fuel repository,
  file `mcp/scripts/lib.sh <https://git.opnfv.org/fuel/tree/mcp/scripts/lib.sh>`_ , function mount_image).
* On x86, this is not a significant issue, because it is still possible to update the image and complete the installation even with the
  original partition size.
* On Arm, however, the OPNFV installation will fail, because there is not enough space to install all required packages into
  the cloud image.
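
For reference, a hedged outline of the kind of image-mount sequence involved (the real logic is in the
mount_image function of mcp/scripts/lib.sh; the commands below are illustrative only):

.. code-block:: console

    # Illustrative outline only; not the exact lib.sh logic.
    qemu-img resize ubuntu.qcow2 3G            # grow the image file
    qemu-nbd --connect=/dev/nbd0 ubuntu.qcow2  # expose it as a network block device
    kpartx -av /dev/nbd0                       # map /dev/nbd0p1 (this step misbehaves on kernel 4.4)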

Using the above as a starting point, Auto-specific scripts have been developed for each of the 4 OPNFV installers: Fuel/MCP,
Compass4NFV, Apex/TripleO, and Daisy4NFV. The instructions for virtual deployments from each of these installers have been used,
and sometimes expanded and clarified (where details or steps were missing from the original instructions).
They can be found in the `Auto repository <https://git.opnfv.org/auto/tree/>`_ , under the ci directory:

* deploy-opnfv-fuel-ubuntu.sh
* deploy-opnfv-compass-ubuntu.sh
* deploy-opnfv-apex-centos.sh
* deploy-opnfv-daisy-centos.sh



ONAP on Kubernetes
~~~~~~~~~~~~~~~~~~

An ONAP installation on OpenStack has also been investigated, but we focus here on
the ONAP on Kubernetes version.

The initial focus is on x86 architectures. For a while, the ONAP DCAE component was not operational
on Kubernetes (with ONAP Amsterdam), and had to be installed separately on OpenStack. The ONAP
instance was therefore a hybrid, with all components except DCAE running on Kubernetes, and DCAE running
separately on OpenStack. Starting with ONAP Beijing, DCAE also runs on Kubernetes.

For Arm architectures, specialized Docker images are being developed to provide Arm architecture
binary compatibility. See the :ref:`Auto Release Notes <auto-releasenotes>`
for more details on the availability status of these Arm images in the ONAP Docker registry.

The ONAP reference documentation for this installation can be found `here <http://onap.readthedocs.io/en/latest/submodules/oom.git/docs/oom_user_guide.html>`_.

Examples of manual steps for the deploy procedure are as follows:

.. code-block:: console

    1  git clone https://gerrit.onap.org/r/oom
    2  cd oom
    3  git pull https://gerrit.onap.org/r/oom refs/changes/19/32019/6   # fetch a specific Gerrit change
    4  cd install/rancher
    5  ./oom_rancher_setup.sh -b master -s <your external ip> -e onap   # set up Rancher and the Kubernetes cluster
    6  cd oom/kubernetes/config
    7  (modify onap-parameters.yaml for the VIM connection; manual step)
    8  ./createConfig.sh -n onap                                        # generate the ONAP configuration
    9  cd ../oneclick
    10 ./createAll.bash -n onap                                         # deploy all ONAP components
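
Once createAll.bash completes, a quick sanity check can be performed with kubectl (assuming kubectl is
configured against the Rancher-created cluster); this is a suggested check, not part of the official procedure:

.. code-block:: console

    kubectl get namespaces               # expect namespaces prefixed with "onap"
    kubectl get pods --all-namespaces    # ONAP component pods should reach the Running state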

Several automation efforts to integrate the ONAP installation in Auto CI are in progress.
One effort involves using a 3-server cluster at OPNFV Pharos LaaS (Lab-as-a-Service).
The script is available in the `Auto repository <https://git.opnfv.org/auto/tree/>`_ , under the ci directory:

* deploy-onap.sh



ONAP configuration
^^^^^^^^^^^^^^^^^^

This section describes the logical steps performed by the Auto scripts to prepare ONAP and VNFs.


VNF deployment
~~~~~~~~~~~~~~

<TBC; pre-onboarding, onboarding, deployment>


Policy and closed-loop control configuration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

<TBC>


Traffic Generator configuration
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

<TBC>



Test Case software installation and execution control
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

<TBC; mention the management of multiple environments (characterized by their parameters), execution of all test cases
in each environment, only a subset in official OPNFV CI/CD Jenkins due to size and time limits; then posting and analysis
of results; failures lead to bug-fixing, successes lead to analysis for comparisons and fine-tuning>



Installation health-check
=========================

<TBC; the Auto installation will self-check, but indicate here manual steps to double-check that the
installation was successful>




References
==========

Auto Wiki pages:

* `Auto wiki main page <https://wiki.opnfv.org/pages/viewpage.action?pageId=12389095>`_
* `Auto Lab Deployment wiki page <https://wiki.opnfv.org/display/AUTO/Auto+Lab+Deployment>`_


OPNFV documentation on Auto:

* :ref:`Auto Release Notes <auto-releasenotes>`
* :ref:`Auto use case user guides <auto-userguide>`


Git&Gerrit Auto repositories:

* `Auto Git repository <https://git.opnfv.org/auto/tree/>`_
* `Gerrit for Auto project <https://gerrit.opnfv.org/gerrit/#/admin/projects/auto>`_