author     Gerard Damm <gerard.damm@wipro.com>   2018-04-11 22:39:10 -0500
committer  Gerard Damm <gerard.damm@wipro.com>   2018-04-12 11:09:12 -0500
commit     ed52cec1928d0ff93538c9bb7a57f8c52e83b99e (patch)
tree       c6a3b557de2bbf68ad07cdedade08f2fab06d4d8 /docs
parent     c74e914c628819473625c3bfd03d4535e6e64bb8 (diff)
first draft of Auto configuration guide; patched;
JIRA: AUTO-27

First draft of Auto configuration guide, referencing long-term goals and current progress (from wiki page).
Patch to fix typos and address comments.

Change-Id: I9a15bbed9b71a7f351be274401c0ae033befb245
Signed-off-by: Gerard Damm <gerard.damm@wipro.com>
Diffstat (limited to 'docs')
-rw-r--r--  docs/release/configguide/Auto-featureconfig.rst               |  241
-rw-r--r--  docs/release/configguide/Auto-postinstall.rst                 |   28
-rw-r--r--  docs/release/configguide/auto-OPFNV-fuel.jpg                  |  bin 0 -> 189899 bytes
-rw-r--r--  docs/release/configguide/auto-installTarget-generic.jpg      |  bin 0 -> 154476 bytes
-rw-r--r--  docs/release/configguide/auto-installTarget-initial.jpg      |  bin 0 -> 118641 bytes
-rw-r--r--  docs/release/configguide/auto-repo-folders.jpg                |  bin 0 -> 162411 bytes
-rw-r--r--  docs/release/configguide/index.rst                            |   17
-rw-r--r--  docs/release/installation/UC01-feature.userguide.rst          |   84
-rw-r--r--  docs/release/installation/UC01-installation.instruction.rst  |  212
-rw-r--r--  docs/release/installation/UC02-feature.userguide.rst          |  145
-rw-r--r--  docs/release/installation/UC02-installation.instruction.rst  |  195
-rw-r--r--  docs/release/installation/UC03-feature.userguide.rst          |  100
-rw-r--r--  docs/release/installation/UC03-installation.instruction.rst  |  212
-rw-r--r--  docs/release/installation/index.rst                           |   15
14 files changed, 286 insertions, 963 deletions
diff --git a/docs/release/configguide/Auto-featureconfig.rst b/docs/release/configguide/Auto-featureconfig.rst
new file mode 100644
index 0000000..4e9705f
--- /dev/null
+++ b/docs/release/configguide/Auto-featureconfig.rst
@@ -0,0 +1,241 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International License.
+.. http://creativecommons.org/licenses/by/4.0
+.. SPDX-License-Identifier: CC-BY-4.0
+.. (c) Open Platform for NFV Project, Inc. and its contributors
+
+
+Introduction
+============
+
+This document describes the software and hardware reference frameworks used by Auto,
+and provides guidelines on how to perform configurations and additional installations.
+
+
+Goal
+====
+
+The goal of `Auto <http://docs.opnfv.org/en/latest/release/release-notes.html>`_ installation and configuration is to prepare
+an environment where the `Auto use cases <http://docs.opnfv.org/en/latest/submodules/auto/docs/release/userguide/index.html#auto-userguide>`_
+can be assessed, i.e. where the corresponding test cases can be executed and their results can be collected.
+
+An instance of ONAP needs to be present, as well as a number of deployed VNFs, in the scope of the use cases.
+
+The initial Auto use cases cover:
+
+* Edge Cloud (increased autonomy and automation for managing Edge VNFs)
+* Resilience Improvements through ONAP (reduced recovery time for VNFs and end-to-end services in case of failure or suboptimal performance)
+* Enterprise vCPE (automation, cost optimization, and performance assurance of enterprise connectivity to Data Centers and the Internet)
+
+The general idea of Auto is to install an OPNFV environment (comprising at least one Cloud Manager),
+an ONAP instance, ONAP-deployed VNFs as required by use cases, possibly additional cloud managers not
+already installed during the OPNFV environment setup, traffic generators, and the Auto-specific software
+for the use cases (which can include test frameworks such as `Robot <http://robotframework.org/>`_ or `Functest <http://docs.opnfv.org/en/latest/submodules/functest/docs/release/release-notes/index.html#functest-releasenotes>`_).
+The ONAP instance needs to be configured with policies and closed-loop controls (also as required by use cases),
+and the test framework controls the execution and result collection of all the test cases.
+
+The following diagram illustrates two execution environments, for x86 architectures and for Arm architectures.
+The installation process depends on the underlying architecture, since certain components may require a
+specific binary-compatible version for a given x86 or Arm architecture. The preferred variant of ONAP is one
+that runs on Kubernetes, while all VNF types are of interest to Auto: VM-based or containerized (on any cloud
+manager), for x86 or for Arm. The initial VM-based VNFs will cover OpenStack, and in future versions,
+additional cloud managers will be considered. The configuration of ONAP and of test cases should not depend
+on the architecture.
+
+.. image:: auto-installTarget-generic.jpg
+
+
+For each component, various installer tools will be selected (based on simplicity and performance), and
+may change from one Auto release to the next. For example, the most natural installer for ONAP should be
+OOM (ONAP Operations Manager).
+
+The initial version of Auto will focus on OpenStack VM-based VNFs, onboarded and deployed via ONAP API
+(not by ONAP GUI, for the purpose of automation). ONAP is installed on Kubernetes. Two servers from LaaS
+are used: one to support an OpenStack instance as provided by the OPNFV installation via Fuel/MCP, and
+the other to support ONAP with Kubernetes and Docker. Therefore, the VNF execution environment is the
+server with the OpenStack instance.
+
+.. image:: auto-installTarget-initial.jpg
+
+
+Jenkins will be used for Continuous Integration in OPNFV releases, to ensure that the latest master
+branch of Auto is always working.
+
+Moreover, Auto will offer an API, which can be imported as a module, and can be accessed for example
+by a web application. The following diagram shows the planned structure for the Auto Git repository,
+supporting this module, as well as the installation scripts, test case software, utilities, and documentation.
+
+.. image:: auto-repo-folders.jpg
+
+
+
+Pre-configuration activities
+============================
+
+The following resources will be required for the initial version of Auto:
+
+* two LaaS (OPNFV Lab-as-a-Service) pods, with their associated network information. Later, other types of target pods will be supported.
+* the `Auto Git repository <https://git.opnfv.org/auto/tree/>`_ (clone from `Gerrit Auto <https://gerrit.opnfv.org/gerrit/#/admin/projects/auto>`_); an example clone command is shown below
+
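+As an example, the repository can be cloned as follows (this assumes anonymous HTTP access to the
+OPNFV cgit server; committers would typically clone over SSH from Gerrit instead):
+
+.. code-block:: console
+
+    git clone https://git.opnfv.org/auto
+    cd auto
+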
+
+
+Hardware configuration
+======================
+
+<TBC>
+
+
+
+Feature configuration
+=====================
+
+Environment installation
+^^^^^^^^^^^^^^^^^^^^^^^^
+
+Current Auto work in progress is captured in the `Auto Lab Deployment wiki page <https://wiki.opnfv.org/display/AUTO/Auto+Lab+Deployment>`_.
+
+
+OPNFV with OpenStack
+~~~~~~~~~~~~~~~~~~~~
+
+The Auto installation uses the Fuel/MCP installer for the OPNFV environment (see the
+`OPNFV download page <https://www.opnfv.org/software/downloads>`_).
+
+The following figure summarizes the two installation cases: virtual or baremetal.
+This OPNFV installer starts by installing a Salt Master, which then configures
+subnets and bridges, installs VMs (e.g., for controllers and compute nodes),
+and deploys an OpenStack instance with predefined credentials.
+
+.. image:: auto-OPFNV-fuel.jpg
+
+
+The Auto version of the OPNFV installation configures additional resources for the OpenStack virtual pod,
+as compared to the default installation. Examples of manual steps are as follows:
+
+.. code-block:: console
+
+ 1. mkdir /opt/fuel
+ 2. cd /opt/fuel
+ 3. git clone https://git.opnfv.org/fuel
+ 4. cd fuel
+ 5. vi /opt/fuel/fuel/mcp/config/scenario/os-nosdn-nofeature-noha.yaml
+
+
+These lines can be added to configure more resources:
+
+.. code-block:: yaml
+
+ gtw01:
+ ram: 2048
+ + cmp01:
+ + vcpus: 16
+ + ram: 65536
+ + disk: 40
+ + cmp02:
+ + vcpus: 16
+ + ram: 65536
+ + disk: 40
+
+
+The final step deploys OpenStack (duration: approximately 30 to 45 minutes).
+
+.. code-block:: console
+
+ 6. ci/deploy.sh -l UNH-LaaS -p virtual1 -s os-nosdn-nofeature-noha -D |& tee deploy.log
+
+
+
+ONAP on Kubernetes
+~~~~~~~~~~~~~~~~~~
+
+An ONAP installation on OpenStack has also been investigated, but we focus here on
+the ONAP on Kubernetes version.
+
+The initial focus is on x86 architectures. For a while, the ONAP DCAE component was not operational
+on Kubernetes and had to be installed separately on OpenStack. The ONAP instance was therefore a hybrid,
+with all components except DCAE running on Kubernetes, and DCAE running separately on OpenStack.
+
+For Arm architectures, specialized Docker images are being developed to provide Arm architecture
+binary compatibility.
+
+The goal for the first release of Auto is to use an ONAP instance where DCAE also runs on Kubernetes,
+for both architectures.
+
+The ONAP reference for this installation is the `ONAP on Kubernetes wiki page <https://wiki.onap.org/display/DW/ONAP+on+Kubernetes>`_.
+
+Examples of manual steps for the deploy procedure are as follows:
+
+.. code-block:: console
+
+ 1 git clone https://gerrit.onap.org/r/oom
+ 2 cd oom
+ 3 git pull https://gerrit.onap.org/r/oom refs/changes/19/32019/6
+ 4 cd install/rancher
+ 5 ./oom_rancher_setup.sh -b master -s <your external ip> -e onap
+ 6 cd oom/kubernetes/config
+ 7 (modify onap-parameters.yaml for VIM connection (manual))
+ 8 ./createConfig.sh -n onap
+ 9 cd ../oneclick
+ 10 ./createAll.bash -n onap
+
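+Once ``createAll.bash`` has completed, the state of the ONAP deployment can be checked with standard
+Kubernetes commands, for example as sketched below (namespace names and pod counts depend on the OOM
+version and on the selected components):
+
+.. code-block:: console
+
+    # list the namespaces created for ONAP components
+    kubectl get namespaces
+
+    # list all pods and their status across namespaces
+    kubectl get pods --all-namespaces
+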
+
+
+ONAP configuration
+^^^^^^^^^^^^^^^^^^
+
+This section describes the logical steps performed by the Auto scripts to prepare ONAP and VNFs.
+
+
+VNF deployment
+~~~~~~~~~~~~~~
+
+<TBC; pre-onboarding, onboarding, deployment>
+
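+Pending the detailed procedure, the general pattern is to drive pre-onboarding, onboarding, and deployment
+programmatically through the ONAP APIs (rather than through the GUI). The sketch below is purely
+illustrative: the host, port, endpoint path, credentials, and payload file are placeholders, and the
+actual values must be taken from the ONAP API documentation for the release in use.
+
+.. code-block:: console
+
+    # illustrative only: POST a VNF/VSP descriptor payload to an ONAP API endpoint
+    # (<onap-host>, <port>, <endpoint-path>, <user>, <password> and vsp.json are placeholders)
+    curl -k -u <user>:<password> -X POST \
+         -H "Content-Type: application/json" \
+         -d @vsp.json \
+         https://<onap-host>:<port>/<endpoint-path>
+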
+
+Policy and closed-loop control configuration
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+<TBC>
+
+
+Traffic Generator configuration
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+<TBC>
+
+
+
+Test Case software installation and execution control
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+<TBC>
+
+
+
+Installation health-check
+=========================
+
+<TBC; the Auto installation will self-check, but indicate here manual steps to double-check that the installation was successful>
+
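+As a minimal sketch of such manual double-checks (assuming OpenStack admin credentials have been sourced
+on the OpenStack side, and kubectl access is available on the ONAP/Kubernetes side):
+
+.. code-block:: console
+
+    # OpenStack side: verify that the core services and the compute services are up
+    openstack service list
+    openstack compute service list
+
+    # ONAP/Kubernetes side: verify that nodes are Ready and that pods are Running
+    kubectl get nodes
+    kubectl get pods --all-namespaces
+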
+
+
+
+References
+==========
+
+Auto Wiki pages:
+
+* `Auto wiki main page <https://wiki.opnfv.org/pages/viewpage.action?pageId=12389095>`_
+* `Auto Lab Deployment wiki page <https://wiki.opnfv.org/display/AUTO/Auto+Lab+Deployment>`_
+
+
+OPNFV documentation on Auto:
+
+* `Auto release notes <http://docs.opnfv.org/en/latest/release/release-notes.html>`_
+* `Auto use case user guides <http://docs.opnfv.org/en/latest/submodules/auto/docs/release/userguide/index.html#auto-userguide>`_
+
+
+Git & Gerrit Auto repositories:
+
+* `Auto Git repository <https://git.opnfv.org/auto/tree/>`_
+* `Gerrit for Auto project <https://gerrit.opnfv.org/gerrit/#/admin/projects/auto>`_
+
diff --git a/docs/release/configguide/Auto-postinstall.rst b/docs/release/configguide/Auto-postinstall.rst
new file mode 100644
index 0000000..500a99d
--- /dev/null
+++ b/docs/release/configguide/Auto-postinstall.rst
@@ -0,0 +1,28 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International License.
+.. http://creativecommons.org/licenses/by/4.0
+.. SPDX-License-Identifier: CC-BY-4.0
+.. (c) Open Platform for NFV Project, Inc. and its contributors
+
+
+********************************
+Auto Post Installation Procedure
+********************************
+
+<TBC; normally, the installation is self-contained and there should be no need for post-installation manual steps;
+possibly input for CI toolchain and deployment pipeline in first section>
+
+
+Automated post installation activities
+======================================
+<TBC if needed>
+
+
+<Project> post configuration procedures
+=======================================
+<TBC if needed>
+
+
+Platform components validation
+==============================
+<TBC if needed>
+
diff --git a/docs/release/configguide/auto-OPFNV-fuel.jpg b/docs/release/configguide/auto-OPFNV-fuel.jpg
new file mode 100644
index 0000000..706d997
--- /dev/null
+++ b/docs/release/configguide/auto-OPFNV-fuel.jpg
Binary files differ
diff --git a/docs/release/configguide/auto-installTarget-generic.jpg b/docs/release/configguide/auto-installTarget-generic.jpg
new file mode 100644
index 0000000..3f94871
--- /dev/null
+++ b/docs/release/configguide/auto-installTarget-generic.jpg
Binary files differ
diff --git a/docs/release/configguide/auto-installTarget-initial.jpg b/docs/release/configguide/auto-installTarget-initial.jpg
new file mode 100644
index 0000000..edc6509
--- /dev/null
+++ b/docs/release/configguide/auto-installTarget-initial.jpg
Binary files differ
diff --git a/docs/release/configguide/auto-repo-folders.jpg b/docs/release/configguide/auto-repo-folders.jpg
new file mode 100644
index 0000000..ee88866
--- /dev/null
+++ b/docs/release/configguide/auto-repo-folders.jpg
Binary files differ
diff --git a/docs/release/configguide/index.rst b/docs/release/configguide/index.rst
new file mode 100644
index 0000000..ba1a3da
--- /dev/null
+++ b/docs/release/configguide/index.rst
@@ -0,0 +1,17 @@
+.. _auto-configguide:
+
+.. This work is licensed under a Creative Commons Attribution 4.0 International License.
+.. http://creativecommons.org/licenses/by/4.0
+.. SPDX-License-Identifier: CC-BY-4.0
+.. (c) Open Platform for NFV Project, Inc. and its contributors
+
+*****************************************************
+OPNFV Auto (ONAP-Automated OPNFV) Configuration Guide
+*****************************************************
+
+.. toctree::
+ :numbered:
+ :maxdepth: 3
+
+ Auto-featureconfig.rst
+ Auto-postinstall.rst
diff --git a/docs/release/installation/UC01-feature.userguide.rst b/docs/release/installation/UC01-feature.userguide.rst
deleted file mode 100644
index 5da0865..0000000
--- a/docs/release/installation/UC01-feature.userguide.rst
+++ /dev/null
@@ -1,84 +0,0 @@
-.. This work is licensed under a Creative Commons Attribution 4.0 International License.
-.. http://creativecommons.org/licenses/by/4.0
-.. SPDX-License-Identifier CC-BY-4.0
-.. (c) optionally add copywriters name
-
-
-================================================================
-Auto User Guide: Use Case 1 Edge Cloud
-================================================================
-
-This document provides the user guide for Fraser release of Auto,
-specifically for Use Case 1: Edge Cloud.
-
-.. contents::
- :depth: 3
- :local:
-
-
-Description
-===========
-
-This use case aims at showcasing the benefits of using ONAP for autonomous Edge Cloud management.
-
-A high level of automation of VNF lifecycle event handling after launch is enabled by ONAP policies
-and closed-loop controls, which take care of most lifecycle events (start, stop, scale up/down/in/out,
-recovery/migration for HA) as well as their monitoring and SLA management.
-
-Multiple types of VNFs, for different execution environments, are first approved in the catalog thanks
-to the onboarding process, and then can be deployed and handled by multiple controllers in a systematic way.
-
-This results in management efficiency (lower control/automation overhead) and high degree of autonomy.
-
-
-Preconditions:
-#. hardware environment in which Edge cloud may be deployed
-#. an Edge cloud has been deployed and is ready for operation
-#. ONAP has been deployed onto a Cloud, and is interfaced (i.e. provisioned for API access) to the Edge cloud
-
-
-
-Main Success Scenarios:
-
-* lifecycle management - stop, stop, scale (dependent upon telemetry)
-
-* recovering from faults (detect, determine appropriate response, act); i.e. exercise closed-loop policy engine in ONAP
-
- * verify mechanics of control plane interaction
-
-* collection of telemetry for machine learning
-
-
-Details on the test cases corresponding to this use case:
-
-* Environment check
-
- * Basic environment check: Create test script to check basic VIM (OpenStack), ONAP, and VNF are up and running
-
-* VNF lifecycle management
-
- * VNF Instance Management: Validation of VNF Instance Management which includes VNF instantiation, VNF State Management and termination
-
- * Tacker Monitoring Driver (VNFMonitorPing):
-
- * Write Tacker Monitor driver to handle monitor_call and based on return state value create custom events
- * If Ping to VNF fails, trigger below events
-
- * Event 1 : Collect failure logs from VNF
- * Event 2 : Soft restart/respawn the VNF
-
- * Integrate with Telemetry
-
- * Create TOSCA template policies to implement ceilometer data collection service
- * Collect CPU utilization data, compare with threshold, and perform action accordingly (respawn, scale-in/scale-out)
-
-
-
-Test execution high-level description
-=====================================
-
-<TBC>
-
-
-
-
diff --git a/docs/release/installation/UC01-installation.instruction.rst b/docs/release/installation/UC01-installation.instruction.rst
deleted file mode 100644
index 9ecb8bd..0000000
--- a/docs/release/installation/UC01-installation.instruction.rst
+++ /dev/null
@@ -1,212 +0,0 @@
-.. This work is licensed under a Creative Commons Attribution 4.0 International License.
-.. http://creativecommons.org/licenses/by/4.0
-.. SPDX-License-Identifier CC-BY-4.0
-.. (c) optionally add copywriters name
-
-========
-Abstract
-========
-
-This document describes how to install OPNFV Auto Use Case 1: Edge Cloud, its dependencies and required system resources.
-
-.. contents::
- :depth: 3
- :local:
-
-Version history
----------------------
-
-+--------------------+--------------------+--------------------+--------------------+
-| **Date** | **Ver.** | **Author** | **Comment** |
-| | | | |
-+--------------------+--------------------+--------------------+--------------------+
-| 2015-04-14 | 0.1.0 | Jonas Bjurel | First draft |
-| | | | |
-+--------------------+--------------------+--------------------+--------------------+
-| | 0.1.1 | | |
-| | | | |
-+--------------------+--------------------+--------------------+--------------------+
-| | 1.0 | | |
-| | | | |
-| | | | |
-+--------------------+--------------------+--------------------+--------------------+
-
-
-Introduction
-============
-<INTRODUCTION TO THE SCOPE AND INTENTION OF THIS DOCUMENT AS WELL AS TO THE SYSTEM TO BE INSTALLED>
-
-<EXAMPLE>:
-
-This document describes the supported software and hardware configurations for the
-Fuel OPNFV reference platform as well as providing guidelines on how to install and
-configure such reference system.
-
-Although the available installation options gives a high degree of freedom in how the system is set-up,
-with what architecture, services and features, etc., not nearly all of those permutations provides
-a OPNFV compliant reference architecture. Following the guidelines in this document ensures
-a result that is OPNFV compliant.
-
-The audience of this document is assumed to have good knowledge in network and Unix/Linux administration.
-
-
-Preface
-=======
-<DESCRIBE NEEDED PREREQUISITES, PLANNING, ETC.>
-
-<EXAMPLE>:
-
-Before starting the installation of Fuel@OPNFV, some planning must preceed.
-
-First of all, the Fuel@OPNFV .iso image needs to be retrieved,
-the Latest stable Arno release of Fuel@OPNFV can be found here: <www.opnfv.org/abc/def>
-
-Alternatively, you may build the .iso from source by cloning the opnfv/genesis git repository:
-<git clone https://<linux foundation uid>@gerrit.opnf.org/gerrit/genesis>
-Check-out the Arno release:
-<cd genesis; git checkout arno>
-Goto the fuel directory and build the .iso
-<cd fuel/build; make all>
-
-Familiarize yourself with the Fuel 6.0.1 version by reading the following documents:
-- abc <http://wiki.openstack.org/abc>
-- def <http://wiki.openstack.org/def>
-- ghi <http://wiki.openstack.org/ghi>
-
-Secondly, a number of deployment specific parameters must be collected, those are:
-
-1. Provider sub-net and gateway information
-
-2. Provider VLAN information
-
-3. Provider DNS addresses
-
-4. Provider NTP addresses
-
-This information will be needed for the configuration procedures provided in this document.
-
-
-Hardware requirements
-=====================
-<PROVIDE A LIST OF MINIMUM HARDWARE REQUIREMENTS NEEDED FOR THE INSTALL>
-
-<EXAMPLE>:
-
-Following minimum hardware requirements must be met for installation of Fuel@OPNFV:
-
-+--------------------+----------------------------------------------------+
-| **HW Aspect** | **Requirement** |
-| | |
-+--------------------+----------------------------------------------------+
-| **# of servers** | Minimum 5 (3 for non redundant deployment) |
-| | 1 Fuel deployment master (may be virtualized) |
-| | 3(1) Controllers |
-| | 1 Compute |
-+--------------------+----------------------------------------------------+
-| **CPU** | Minimum 1 socket x86_AMD64 Ivy bridge 1.6 GHz |
-| | |
-+--------------------+----------------------------------------------------+
-| **RAM** | Minimum 16GB/server (Depending on VNF work load) |
-| | |
-+--------------------+----------------------------------------------------+
-| **Disk** | Minimum 256GB 10kRPM spinning disks |
-| | |
-+--------------------+----------------------------------------------------+
-| **NICs** | 2(1)x10GE Niantec for Private/Public (Redundant) |
-| | |
-| | 2(1)x10GE Niantec for SAN (Redundant) |
-| | |
-| | 2(1)x1GE for admin (PXE) and control (RabitMQ,etc) |
-| | |
-+--------------------+----------------------------------------------------+
-
-
-Top of the rack (TOR) Configuration requirements
-================================================
-<DESCRIBE NEEDED NETWORK TOPOLOGY SETUP IN THE TORs>
-
-<EXAMPLE>:
-
-The switching infrastructure provides connectivity for the OPNFV infra-structure operations as well as
-for the tenant networks (East/West) and provider connectivity (North/South bound connectivity).
-The switching connectivity can (but does not need to) be fully redundant,
-in case it and comprises a redundant 10GE switch pair for "Traffic/Payload/SAN" purposes as well as
-a 1GE switch pair for "infrastructure control-, management and administration"
-
-The switches are **not** automatically configured from the OPNFV reference platform.
-All the networks involved in the OPNFV infra-structure as well as the provider networks
-and the private tenant VLANs needs to be manually configured.
-
-This following sections guides through required black-box switch configurations.
-
-VLAN considerations and blue-print
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-IP Address plan considerations and blue-print
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-
-OPNFV Software installation and deployment
-==========================================
-<DESCRIBE THE FULL PROCEDURES FOR THE INSTALLATION OF THE OPNFV COMPONENT INSTALLATION AND DEPLOYMENT>
-
-<EXAMPLE>:
-
-This section describes the installation of the Fuel@OPNFV installation server (Fuel master)
-as well as the deployment of the full OPNFV reference platform stack across a server cluster.
-Etc.
-
-Install Fuel master
-^^^^^^^^^^^^^^^^^^^^^
-
-Create an OPNV (Fuel Environment)
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-Configure the OPNFV environment
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-Deploy the OPNFV environment
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-
-Installation health-check
-=========================
-<DESCRIBE ANY MEANS TO DO VERIFY THE INTEGRITY AND HEALTHYNESS OF THE INSTALL>
-
-<EXAMPLE>:
-
-Now that the OPNFV environment has been created, and before the post installation configurations is started,
-perform a system health check from the Fuel GUI:
-
-- Select the "Health check" TAB.
-- Select all test-cases
-- And click "Run tests"
-
-All test cases except the following should pass:
-
-Post installation and deployment actions
-------------------------------------------
-<DESCRIBE ANY POST INSTALLATION ACTIONS/CONFIGURATIONS NEEDED>
-
-<EXAMPLE>:
-After the OPNFV deployment is completed, the following manual changes needs to be performed in order
-for the system to work according OPNFV standards.
-
-**Change host OS password:**
-Change the Host OS password by......
-
-
-References
-==========
-<PROVIDE NEEDED/USEFUL REFERENCES>
-
-<EXAMPLES>:
-
-OPNFV
-^^^^^^^^^^
-
-OpenStack
-^^^^^^^^^^^
-
-OpenDaylight
-^^^^^^^^^^^^^^^
diff --git a/docs/release/installation/UC02-feature.userguide.rst b/docs/release/installation/UC02-feature.userguide.rst
deleted file mode 100644
index 32a6df8..0000000
--- a/docs/release/installation/UC02-feature.userguide.rst
+++ /dev/null
@@ -1,145 +0,0 @@
-.. This work is licensed under a Creative Commons Attribution 4.0 International License.
-.. http://creativecommons.org/licenses/by/4.0
-.. SPDX-License-Identifier CC-BY-4.0
-.. (c) optionally add copywriters name
-
-
-================================================================
-Auto User Guide: Use Case 2 Resiliency Improvements Through ONAP
-================================================================
-
-This document provides the user guide for Fraser release of Auto,
-specifically for Use Case 2: Resiliency Improvements Through ONAP.
-
-.. contents::
- :depth: 3
- :local:
-
-
-Description
-===========
-
-This use case illustrates VNF failure recovery time reduction with ONAP, thanks to its automated monitoring and management.
-It simulates an underlying problem (failure, stress, etc.: any adverse condition in the network that can impact VNFs),
-tracks a VNF, and measures the amount of time it takes for ONAP to restore the VNF functionality.
-
-The benefit for NFV edge service providers is to assess what degree of added VIM+NFVI platform resilience for VNFs is obtained by
-leveraging ONAP closed-loop control, vs. VIM+NFVI self-managed resilience (which may not be aware of the VNF or the corresponding
-end-to-end Service, but only of underlying resources such as VMs and servers).
-
-
-Preconditions:
-
-#. hardware environment in which Edge cloud may be deployed
-#. Edge cloud has been deployed and is ready for operation
-#. ONAP has been deployed onto a cloud and is interfaced (i.e. provisioned for API access) to the Edge cloud
-#. Components of ONAP have been deployed on the Edge cloud as necessary for specific test objectives
-
-In future releases, Auto Use cases will also include the deployment of ONAP (if not already installed), the deployment
-of test VNFs (pre-existing VNFs in pre-existing ONAP can be used in the test as well), the configuration of ONAP for
-monitoring these VNFs (policies, CLAMP, DCAE), in addition to the test scripts which simulate a problem and measures recovery time.
-
-Different types of problems can be simulated, hence the identification of multiple test cases corresponding to this use case,
-as illustrated in this diagram:
-
-.. image:: auto-UC02-testcases.jpg
-
-Description of simulated problems/challenges:
-
-* Physical Infra Failure
-
- * Migration upon host failure: Compute host power is interrupted, and affected workloads are migrated to other available hosts.
- * Migration upon disk failure: Disk volumes are unmounted, and affected workloads are migrated to other available hosts.
- * Migration upon link failure: Traffic on links is interrupted/corrupted, and affected workloads are migrated to other available hosts.
- * Migration upon NIC failure: NIC ports are disabled by host commands, and affected workloads are migrated to other available hosts.
-
-* Virtual Infra Failure
-
- * OpenStack compute host service fail: Core OpenStack service processes on compute hosts are terminated, and auto-restored, or affected workloads are migrated to other available hosts.
- * SDNC service fail: Core SDNC service processes are terminated, and auto-restored.
- * OVS fail: OVS bridges are disabled, and affected workloads are migrated to other available hosts.
- * etc.
-
-* Security
-
- * Host tampering: Host tampering is detected, the host is fenced, and affected workloads are migrated to other available hosts.
- * Host intrusion: Host intrusion attempts are detected, an offending workload, device, or flow is identified and fenced, and as needed affected workloads are migrated to other available hosts.
- * Network intrusion: Network intrusion attempts are detected, and an offending flow is identified and fenced.
-
-
-
-
-Test execution high-level description
-=====================================
-
-The following two MSCs (Message Sequence Charts) show the actors and high-level interactions.
-
-The first MSC shows the preparation activities (assuming the hardware, network, cloud, and ONAP have already been installed):
-onboarding and deployment of VNFs (via ONAP portal and modules in sequence: SDC, VID, SO), and ONAP configuration
-(policy framework, closed-loops in CLAMP, activation of DCAE).
-
-.. image:: auto-UC02-preparation.jpg
-
-The second MSC illustrates the pattern of all test cases for the Resiliency Improvements:
-* simulate the chosen problem (a.k.a. a "Challenge") for this test case, for example suspend a VM which may be used by a VNF
-* start tracking the target VNF of this test case
-* measure the ONAP-orchestrated VNF Recovery Time
-* then the test stops simulating the problem (for example: resume the VM that was suspended),
-
-In parallel, the MSC also shows the sequence of events happening in ONAP, thanks to its configuration to provide Service
-Assurance for the VNF.
-
-.. image:: auto-UC02-pattern.jpg
-
-
-Test design: data model, implementation modules
-===============================================
-
-The high-level design of classes shows the identification of several entities:
-* Test Case: as identified above, each is a special case of the overall use case (e.g., categorized by challenge type)
-* Test Definition: gathers all the information necessary to run a certain test case
-* Metric Definition: describes a certain metric that may be measured, in addition to Recovery Time
-* Challenge Definition: describe the challenge (problem, failure, stress, ...) simulated by the test case
-* Recipient: entity that can receive commands and send responses, and that is queried by the Test Definition or Challenge Definition
-(a recipient would be typically a management service, with interfaces (CLI or API) for clients to query)
-* Resources: with 3 types (VNF, cloud virtual resource such as a VM, physical resource such as a server)
-
-Three of these entities have execution-time corresponding classes:
-* Test Execution, which captures all the relevant data of the execution of a Test Definition
-* Challenge Execution, which captures all the relevant data of the execution of a Challenge Definition
-* Metric Value, which captures the a quantitative measurement of a Metric Definition (with a timestamp)
-
-.. image:: auto-UC02-data1.jpg
-
-The following diagram illustrates an implementation-independent design of the attributes of these entities:
-.. image:: auto-UC02-data2.jpg
-
-This next diagram shows the Python classes and attributes, as implemented by this Use Case (for all test cases):
-
-.. image:: auto-UC02-data3.jpg
-
-Test definition data is stored in serialization files (Python pickles), while test execution data is stored in CSV
-files, for easier post-analysis.
-
-The module design is straightforward: functions and classes for managing data, for interfacing with recipients,
-for executing tests, and for interacting with the test user (choosing a Test Definition, showing the details
-of a Test Definition, starting the execution).
-
-.. image:: auto-UC02-module1.jpg
-
-This last diagram shows the test user menu functions:
-
-.. image:: auto-UC02-module2.jpg
-
-In future releases of Auto, testing environments such as FuncTest and Yardstick might be leveraged.
-
-Also, anonymized test results could be collected from users willing to share them, and aggregates could be
-maintained as benchmarks.
-
-
-
-
-
-
-
-
diff --git a/docs/release/installation/UC02-installation.instruction.rst b/docs/release/installation/UC02-installation.instruction.rst
deleted file mode 100644
index 0e126dd..0000000
--- a/docs/release/installation/UC02-installation.instruction.rst
+++ /dev/null
@@ -1,195 +0,0 @@
-.. This work is licensed under a Creative Commons Attribution 4.0 International License.
-.. http://creativecommons.org/licenses/by/4.0
-.. SPDX-License-Identifier CC-BY-4.0
-.. (c) optionally add copywriters name
-
-========
-Abstract
-========
-
-This document describes how to install OPNFV Auto Use Case 2: Resiliency Improvements Through ONAP, its dependencies and required system resources.
-
-.. contents::
- :depth: 3
- :local:
-
-
-
-Introduction
-============
-<INTRODUCTION TO THE SCOPE AND INTENTION OF THIS DOCUMENT AS WELL AS TO THE SYSTEM TO BE INSTALLED>
-
-<EXAMPLE>:
-
-This document describes the supported software and hardware configurations for the
-Fuel OPNFV reference platform as well as providing guidelines on how to install and
-configure such reference system.
-
-Although the available installation options gives a high degree of freedom in how the system is set-up,
-with what architecture, services and features, etc., not nearly all of those permutations provides
-a OPNFV compliant reference architecture. Following the guidelines in this document ensures
-a result that is OPNFV compliant.
-
-The audience of this document is assumed to have good knowledge in network and Unix/Linux administration.
-
-
-Preface
-=======
-<DESCRIBE NEEDED PREREQUISITES, PLANNING, ETC.>
-
-<EXAMPLE>:
-
-Before starting the installation of Fuel@OPNFV, some planning must preceed.
-
-First of all, the Fuel@OPNFV .iso image needs to be retrieved,
-the Latest stable Arno release of Fuel@OPNFV can be found here: <www.opnfv.org/abc/def>
-
-Alternatively, you may build the .iso from source by cloning the opnfv/genesis git repository:
-<git clone https://<linux foundation uid>@gerrit.opnf.org/gerrit/genesis>
-Check-out the Arno release:
-<cd genesis; git checkout arno>
-Goto the fuel directory and build the .iso
-<cd fuel/build; make all>
-
-Familiarize yourself with the Fuel 6.0.1 version by reading the following documents:
-- abc <http://wiki.openstack.org/abc>
-- def <http://wiki.openstack.org/def>
-- ghi <http://wiki.openstack.org/ghi>
-
-Secondly, a number of deployment specific parameters must be collected, those are:
-
-1. Provider sub-net and gateway information
-
-2. Provider VLAN information
-
-3. Provider DNS addresses
-
-4. Provider NTP addresses
-
-This information will be needed for the configuration procedures provided in this document.
-
-
-Hardware requirements
-=====================
-<PROVIDE A LIST OF MINIMUM HARDWARE REQUIREMENTS NEEDED FOR THE INSTALL>
-
-<EXAMPLE>:
-
-Following minimum hardware requirements must be met for installation of Fuel@OPNFV:
-
-+--------------------+----------------------------------------------------+
-| **HW Aspect** | **Requirement** |
-| | |
-+--------------------+----------------------------------------------------+
-| **# of servers** | Minimum 5 (3 for non redundant deployment) |
-| | 1 Fuel deployment master (may be virtualized) |
-| | 3(1) Controllers |
-| | 1 Compute |
-+--------------------+----------------------------------------------------+
-| **CPU** | Minimum 1 socket x86_AMD64 Ivy bridge 1.6 GHz |
-| | |
-+--------------------+----------------------------------------------------+
-| **RAM** | Minimum 16GB/server (Depending on VNF work load) |
-| | |
-+--------------------+----------------------------------------------------+
-| **Disk** | Minimum 256GB 10kRPM spinning disks |
-| | |
-+--------------------+----------------------------------------------------+
-| **NICs** | 2(1)x10GE Niantec for Private/Public (Redundant) |
-| | |
-| | 2(1)x10GE Niantec for SAN (Redundant) |
-| | |
-| | 2(1)x1GE for admin (PXE) and control (RabitMQ,etc) |
-| | |
-+--------------------+----------------------------------------------------+
-
-
-Top of the rack (TOR) Configuration requirements
-================================================
-<DESCRIBE NEEDED NETWORK TOPOLOGY SETUP IN THE TORs>
-
-<EXAMPLE>:
-
-The switching infrastructure provides connectivity for the OPNFV infra-structure operations as well as
-for the tenant networks (East/West) and provider connectivity (North/South bound connectivity).
-The switching connectivity can (but does not need to) be fully redundant,
-in case it and comprises a redundant 10GE switch pair for "Traffic/Payload/SAN" purposes as well as
-a 1GE switch pair for "infrastructure control-, management and administration"
-
-The switches are **not** automatically configured from the OPNFV reference platform.
-All the networks involved in the OPNFV infra-structure as well as the provider networks
-and the private tenant VLANs needs to be manually configured.
-
-This following sections guides through required black-box switch configurations.
-
-VLAN considerations and blue-print
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-IP Address plan considerations and blue-print
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-
-OPNFV Software installation and deployment
-==========================================
-<DESCRIBE THE FULL PROCEDURES FOR THE INSTALLATION OF THE OPNFV COMPONENT INSTALLATION AND DEPLOYMENT>
-
-<EXAMPLE>:
-
-This section describes the installation of the Fuel@OPNFV installation server (Fuel master)
-as well as the deployment of the full OPNFV reference platform stack across a server cluster.
-Etc.
-
-Install Fuel master
-^^^^^^^^^^^^^^^^^^^^^
-
-Create an OPNV (Fuel Environment)
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-Configure the OPNFV environment
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-Deploy the OPNFV environment
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-
-Installation health-check
-=========================
-<DESCRIBE ANY MEANS TO DO VERIFY THE INTEGRITY AND HEALTHYNESS OF THE INSTALL>
-
-<EXAMPLE>:
-
-Now that the OPNFV environment has been created, and before the post installation configurations is started,
-perform a system health check from the Fuel GUI:
-
-- Select the "Health check" TAB.
-- Select all test-cases
-- And click "Run tests"
-
-All test cases except the following should pass:
-
-Post installation and deployment actions
-------------------------------------------
-<DESCRIBE ANY POST INSTALLATION ACTIONS/CONFIGURATIONS NEEDED>
-
-<EXAMPLE>:
-After the OPNFV deployment is completed, the following manual changes needs to be performed in order
-for the system to work according OPNFV standards.
-
-**Change host OS password:**
-Change the Host OS password by......
-
-
-References
-==========
-<PROVIDE NEEDED/USEFUL REFERENCES>
-
-<EXAMPLES>:
-
-OPNFV
-^^^^^^^^^^
-
-OpenStack
-^^^^^^^^^^^
-
-OpenDaylight
-^^^^^^^^^^^^^^^
diff --git a/docs/release/installation/UC03-feature.userguide.rst b/docs/release/installation/UC03-feature.userguide.rst
deleted file mode 100644
index 354d052..0000000
--- a/docs/release/installation/UC03-feature.userguide.rst
+++ /dev/null
@@ -1,100 +0,0 @@
-.. This work is licensed under a Creative Commons Attribution 4.0 International License.
-.. http://creativecommons.org/licenses/by/4.0
-.. SPDX-License-Identifier CC-BY-4.0
-.. (c) optionally add copywriters name
-
-
-================================================================
-Auto User Guide: Use Case 3 Enterprise vCPE
-================================================================
-
-This document provides the user guide for Fraser release of Auto,
-specifically for Use Case 3: Enterprise vCPE.
-
-.. contents::
- :depth: 3
- :local:
-
-
-Description
-===========
-
-This Use Case shows how ONAP can help ensuring that virtual CPEs (including vFW: virtual firewalls) in Edge Cloud are enterprise-grade.
-
-ONAP operations include a verification process for VNF onboarding (i.e. inclusion in the ONAP catalog),
-with multiple Roles (designer, tester, governor, operator), responsible for approving proposed VNFs
-(as VSPs (Vendor Software Products), and eventually as end-to-end Services).
-
-This process guarantees a minimum level of quality of onboarded VNFs. If all deployed vCPEs are only
-chosen from such an approved ONAP catalog, the resulting deployed end-to-end vCPE services will meet
-enterprise-grade requirements. ONAP provides a NBI in addition to a standard portal, thus enabling
-a programmatic deployment of VNFs, still conforming to ONAP processes.
-
-Moreover, ONAP also comprises real-time monitoring (by the DCAE component), which monitors performance for SLAs,
-can adjust allocated resources accordingly (elastic adjustment at VNF level), and can ensure High Availability.
-
-DCAE executes directives coming from policies described in the Policy Framework, and closed-loop controls
-described in the CLAMP component.
-
-Finally, this automated approach also reduces costs, since repetitive actions are designed once and executed multiple times,
-as vCPEs are instantiated and decommissioned (frequent events, given the variability of business activity,
-and a Small Business market similar to the Residential market: many contract updates resulting in many vCPE changes).
-
-NFV edge service providers need to provide site2site, site2dc (Data Center) and site2internet services to tenants
-both efficiently and safely, by deploying such qualified enterprise-grade vCPE.
-
-
-Preconditions:
-
-#. hardware environment in which Edge cloud may be deployed
-#. an Edge cloud has been deployed and is ready for operation
-#. enterprise edge devices, such as ThinCPE, have access to the Edge cloud with WAN interfaces
-#. ONAP components (MSO, SDN-C, APP-C and VNFM) have been deployed onto a cloud and are interfaced (i.e. provisioned for API access) to the Edge cloud
-
-
-Main Success Scenarios:
-
-* VNF spin-up
-
- * vCPE spin-up: MSO calls the VNFM to spin up a vCPE instance from the catalog and then updates the active VNF list
- * vFW spin-up: MSO calls the VNFM to spin up a vFW instance from the catalog and then updates the active VNF list
-
-* site2site
-
- * L3VPN service subscribing: MSO calls the SDNC to create VXLAN tunnels to carry L2 traffic between client's ThinCPE and SP's vCPE, and enables vCPE to route between different sites.
- * L3VPN service unsubscribing: MSO calls the SDNC to destroy tunnels and routes, thus disable traffic between different sites.
-
-
-See `ONAP description of vCPE use case <https://wiki.onap.org/display/DW/Use+Case+proposal%3A+Enterprise+vCPE>`_ for more details, including MSCs.
-
-
-Details on the test cases corresponding to this use case:
-
-* VNF Management
-
- * Spin up a vCPE instance: Spin up a vCPE instance, by calling NBI of the orchestrator.
- * Spin up a vFW instance: Spin up a vFW instance, by calling NBI of the orchestrator.
-
-* VPN as a Service
- * Subscribe to a VPN service: Subscribe to a VPN service, by calling NBI of the orchestrator.
- * Unsubscribe to a VPN service: Unsubscribe to a VPN service, by calling NBI of the orchestrator.
-
-* Internet as a Service
-
- * Subscribe to an Internet service: Subscribe to an Internet service, by calling NBI of the orchestrator.
- * Unsubscribe to an Internet service: Unsubscribe to an Internet service, by calling NBI of the orchestrator.
-
-
-Test execution high-level description
-=====================================
-
-<TBC>
-
-
-
-
-
-
-
-
-
diff --git a/docs/release/installation/UC03-installation.instruction.rst b/docs/release/installation/UC03-installation.instruction.rst
deleted file mode 100644
index 0221885..0000000
--- a/docs/release/installation/UC03-installation.instruction.rst
+++ /dev/null
@@ -1,212 +0,0 @@
-.. This work is licensed under a Creative Commons Attribution 4.0 International License.
-.. http://creativecommons.org/licenses/by/4.0
-.. SPDX-License-Identifier CC-BY-4.0
-.. (c) optionally add copywriters name
-
-========
-Abstract
-========
-
-This document describes how to install OPNFV Auto Use Case 3: Enterprise vCPE, its dependencies and required system resources.
-
-.. contents::
- :depth: 3
- :local:
-
-Version history
----------------------
-
-+--------------------+--------------------+--------------------+--------------------+
-| **Date** | **Ver.** | **Author** | **Comment** |
-| | | | |
-+--------------------+--------------------+--------------------+--------------------+
-| 2015-04-14 | 0.1.0 | Jonas Bjurel | First draft |
-| | | | |
-+--------------------+--------------------+--------------------+--------------------+
-| | 0.1.1 | | |
-| | | | |
-+--------------------+--------------------+--------------------+--------------------+
-| | 1.0 | | |
-| | | | |
-| | | | |
-+--------------------+--------------------+--------------------+--------------------+
-
-
-Introduction
-============
-<INTRODUCTION TO THE SCOPE AND INTENTION OF THIS DOCUMENT AS WELL AS TO THE SYSTEM TO BE INSTALLED>
-
-<EXAMPLE>:
-
-This document describes the supported software and hardware configurations for the
-Fuel OPNFV reference platform as well as providing guidelines on how to install and
-configure such reference system.
-
-Although the available installation options gives a high degree of freedom in how the system is set-up,
-with what architecture, services and features, etc., not nearly all of those permutations provides
-a OPNFV compliant reference architecture. Following the guidelines in this document ensures
-a result that is OPNFV compliant.
-
-The audience of this document is assumed to have good knowledge in network and Unix/Linux administration.
-
-
-Preface
-=======
-<DESCRIBE NEEDED PREREQUISITES, PLANNING, ETC.>
-
-<EXAMPLE>:
-
-Before starting the installation of Fuel@OPNFV, some planning must preceed.
-
-First of all, the Fuel@OPNFV .iso image needs to be retrieved,
-the Latest stable Arno release of Fuel@OPNFV can be found here: <www.opnfv.org/abc/def>
-
-Alternatively, you may build the .iso from source by cloning the opnfv/genesis git repository:
-<git clone https://<linux foundation uid>@gerrit.opnf.org/gerrit/genesis>
-Check-out the Arno release:
-<cd genesis; git checkout arno>
-Goto the fuel directory and build the .iso
-<cd fuel/build; make all>
-
-Familiarize yourself with the Fuel 6.0.1 version by reading the following documents:
-- abc <http://wiki.openstack.org/abc>
-- def <http://wiki.openstack.org/def>
-- ghi <http://wiki.openstack.org/ghi>
-
-Secondly, a number of deployment specific parameters must be collected, those are:
-
-1. Provider sub-net and gateway information
-
-2. Provider VLAN information
-
-3. Provider DNS addresses
-
-4. Provider NTP addresses
-
-This information will be needed for the configuration procedures provided in this document.
-
-
-Hardware requirements
-=====================
-<PROVIDE A LIST OF MINIMUM HARDWARE REQUIREMENTS NEEDED FOR THE INSTALL>
-
-<EXAMPLE>:
-
-Following minimum hardware requirements must be met for installation of Fuel@OPNFV:
-
-+--------------------+----------------------------------------------------+
-| **HW Aspect** | **Requirement** |
-| | |
-+--------------------+----------------------------------------------------+
-| **# of servers** | Minimum 5 (3 for non redundant deployment) |
-| | 1 Fuel deployment master (may be virtualized) |
-| | 3(1) Controllers |
-| | 1 Compute |
-+--------------------+----------------------------------------------------+
-| **CPU** | Minimum 1 socket x86_AMD64 Ivy bridge 1.6 GHz |
-| | |
-+--------------------+----------------------------------------------------+
-| **RAM** | Minimum 16GB/server (Depending on VNF work load) |
-| | |
-+--------------------+----------------------------------------------------+
-| **Disk** | Minimum 256GB 10kRPM spinning disks |
-| | |
-+--------------------+----------------------------------------------------+
-| **NICs** | 2(1)x10GE Niantec for Private/Public (Redundant) |
-| | |
-| | 2(1)x10GE Niantec for SAN (Redundant) |
-| | |
-| | 2(1)x1GE for admin (PXE) and control (RabitMQ,etc) |
-| | |
-+--------------------+----------------------------------------------------+
-
-
-Top of the rack (TOR) Configuration requirements
-================================================
-<DESCRIBE NEEDED NETWORK TOPOLOGY SETUP IN THE TORs>
-
-<EXAMPLE>:
-
-The switching infrastructure provides connectivity for the OPNFV infra-structure operations as well as
-for the tenant networks (East/West) and provider connectivity (North/South bound connectivity).
-The switching connectivity can (but does not need to) be fully redundant,
-in case it and comprises a redundant 10GE switch pair for "Traffic/Payload/SAN" purposes as well as
-a 1GE switch pair for "infrastructure control-, management and administration"
-
-The switches are **not** automatically configured from the OPNFV reference platform.
-All the networks involved in the OPNFV infra-structure as well as the provider networks
-and the private tenant VLANs needs to be manually configured.
-
-This following sections guides through required black-box switch configurations.
-
-VLAN considerations and blue-print
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-IP Address plan considerations and blue-print
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-
-OPNFV Software installation and deployment
-==========================================
-<DESCRIBE THE FULL PROCEDURES FOR THE INSTALLATION OF THE OPNFV COMPONENT INSTALLATION AND DEPLOYMENT>
-
-<EXAMPLE>:
-
-This section describes the installation of the Fuel@OPNFV installation server (Fuel master)
-as well as the deployment of the full OPNFV reference platform stack across a server cluster.
-Etc.
-
-Install Fuel master
-^^^^^^^^^^^^^^^^^^^^^
-
-Create an OPNV (Fuel Environment)
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-Configure the OPNFV environment
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-Deploy the OPNFV environment
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-
-Installation health-check
-=========================
-<DESCRIBE ANY MEANS TO DO VERIFY THE INTEGRITY AND HEALTHYNESS OF THE INSTALL>
-
-<EXAMPLE>:
-
-Now that the OPNFV environment has been created, and before the post installation configurations is started,
-perform a system health check from the Fuel GUI:
-
-- Select the "Health check" TAB.
-- Select all test-cases
-- And click "Run tests"
-
-All test cases except the following should pass:
-
-Post installation and deployment actions
-------------------------------------------
-<DESCRIBE ANY POST INSTALLATION ACTIONS/CONFIGURATIONS NEEDED>
-
-<EXAMPLE>:
-After the OPNFV deployment is completed, the following manual changes needs to be performed in order
-for the system to work according OPNFV standards.
-
-**Change host OS password:**
-Change the Host OS password by......
-
-
-References
-==========
-<PROVIDE NEEDED/USEFUL REFERENCES>
-
-<EXAMPLES>:
-
-OPNFV
-^^^^^^^^^^
-
-OpenStack
-^^^^^^^^^^^
-
-OpenDaylight
-^^^^^^^^^^^^^^^
diff --git a/docs/release/installation/index.rst b/docs/release/installation/index.rst
deleted file mode 100644
index 0120e92..0000000
--- a/docs/release/installation/index.rst
+++ /dev/null
@@ -1,15 +0,0 @@
-.. _auto-configguide:
-
-.. This work is licensed under a Creative Commons Attribution 4.0 International License.
-.. http://creativecommons.org/licenses/by/4.0
-
-=====================================================
-OPNFV Auto (ONAP-Automated OPNFV) Configuration Guide
-=====================================================
-
-.. toctree::
- :maxdepth: 1
-
- UC01-installation.instruction.rst
- UC02-installation.instruction.rst
- UC03-installation.instruction.rst