Diffstat (limited to 'docs/installationprocedure')
-rw-r--r--  docs/installationprocedure/abstract.rst          16
-rw-r--r--  docs/installationprocedure/architecture.rst     143
-rw-r--r--  docs/installationprocedure/baremetal.rst        273
-rw-r--r--  docs/installationprocedure/index.rst             28
-rw-r--r--  docs/installationprocedure/introduction.rst      42
-rw-r--r--  docs/installationprocedure/references.rst        40
-rw-r--r--  docs/installationprocedure/requirements.rst      78
-rw-r--r--  docs/installationprocedure/troubleshooting.rst  144
-rw-r--r--  docs/installationprocedure/verification.rst      89
-rw-r--r--  docs/installationprocedure/virtualinstall.rst    69
10 files changed, 0 insertions, 922 deletions
diff --git a/docs/installationprocedure/abstract.rst b/docs/installationprocedure/abstract.rst
deleted file mode 100644
index 70814cea..00000000
--- a/docs/installationprocedure/abstract.rst
+++ /dev/null
@@ -1,16 +0,0 @@
-Abstract
-========
-
-This document describes how to install the Colorado release of OPNFV when
-using Apex as a deployment tool, covering its limitations, dependencies,
-and required system resources.
-
-License
-=======
-Colorado release of OPNFV when using Apex as a deployment tool Docs
-(c) by Tim Rozet (Red Hat) and Dan Radez (Red Hat)
-
-Colorado release of OPNFV when using Apex as a deployment tool Docs
-are licensed under a Creative Commons Attribution 4.0 International License.
-You should have received a copy of the license along with this.
-If not, see <http://creativecommons.org/licenses/by/4.0/>.
diff --git a/docs/installationprocedure/architecture.rst b/docs/installationprocedure/architecture.rst
deleted file mode 100644
index 38806391..00000000
--- a/docs/installationprocedure/architecture.rst
+++ /dev/null
@@ -1,143 +0,0 @@
-Triple-O Deployment Architecture
-================================
-
-Apex is based on the OpenStack Triple-O project as distributed by
-the RDO Project. It is important to understand the basics
-of a Triple-O deployment to help make decisions that will assist in
-successfully deploying OPNFV.
-
-Triple-O stands for OpenStack On OpenStack. This means that OpenStack
-will be used to install OpenStack. The target OPNFV deployment is an
-OpenStack cloud with NFV features built-in that will be deployed by a
-smaller all-in-one deployment of OpenStack. In this deployment
-methodology there are two OpenStack installations. They are referred
-to as the undercloud and the overcloud. The undercloud is used to
-deploy the overcloud.
-
-The undercloud is the all-in-one installation of OpenStack that includes
-baremetal provisioning capability. The undercloud will be deployed as a
-virtual machine on a jumphost. This VM is pre-built and distributed as part
-of the Apex RPM.
-
-The overcloud is OPNFV. Configuration will be passed into the undercloud, and
-the undercloud will use OpenStack's orchestration component, named Heat, to
-execute a deployment that provisions the target OPNFV nodes.
-
-Apex High Availability Architecture
-===================================
-
-Undercloud
-----------
-
-The undercloud is not Highly Available. End users do not depend on the
-undercloud. It is only for management purposes.
-
-Overcloud
----------
-
-Apex will deploy three control nodes in an HA deployment. Each of these nodes
-will run the following services:
-
-- Stateless OpenStack services
-- MariaDB / Galera
-- RabbitMQ
-- OpenDaylight
-- HA Proxy
-- Pacemaker & VIPs
-- Ceph Monitors and OSDs
-
-Stateless OpenStack services
- All running stateless OpenStack services are load balanced by HA Proxy.
- Pacemaker monitors the services and ensures that they are running.
-
-Stateful OpenStack services
- All running stateful OpenStack services are load balanced by HA Proxy.
- They are monitored by Pacemaker in an active/passive failover configuration.
-
-MariaDB / Galera
- The MariaDB database is replicated across the control nodes using Galera.
- Pacemaker is responsible for the proper start up of the Galera cluster. HA
- Proxy provides an active/passive failover methodology for connections to
- the database.
-
-RabbitMQ
- The message bus is managed by Pacemaker to ensure proper start up and
- establishment of clustering across cluster members.
-
-OpenDaylight
- OpenDaylight is currently installed on all three control nodes but only
- started on the first control node. OpenDaylight's HA capabilities are not yet
- mature enough to be enabled.
-
-HA Proxy
- HA Proxy is monitored by Pacemaker to ensure it is running across all nodes
- and available to balance connections.
-
-Pacemaker & VIPs
- Pacemaker has relationships and constraints set up to ensure the proper
- service start-up order and that the Virtual IPs associated with specific
- services are running on the proper host.
-
-Ceph Monitors & OSDs
- The Ceph monitors run on each of the control nodes. Each control node also
- has a Ceph OSD running on it. By default the OSDs use an autogenerated
- virtual disk as their target device. A non-autogenerated device can be
- specified in the deploy file.
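-
- For illustration, a deploy-settings fragment pointing the OSDs at a real
- device might look like the following (the ``ceph_device`` key name is an
- assumption; the comments in your deploy file are authoritative)::
-
-   deploy_options:
-     # use a physical disk instead of the autogenerated virtual disk
-     ceph_device: /dev/sdb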
-
-VM migration is configured, and VMs can be evacuated as needed or as invoked
-by tools such as Heat as part of a monitored stack deployment in the overcloud.
-
-
-OPNFV Scenario Architecture
-===========================
-
-OPNFV groups different types of SDN controllers, deployment options, and
-features into "scenarios". These scenarios are universal across all OPNFV
-installers, although each installer may or may not support a given scenario.
-
-The standard naming convention for a scenario is:
-<VIM platform>-<SDN type>-<feature>-<ha/noha>
-
-The only supported VIM type is "OS" (OpenStack), while the SDN type can be any
-supported SDN controller. "feature" includes things like ovs_dpdk, sfc, etc.
-"ha" or "noha" determines whether the deployment will be highly available. If
-"ha" is used, at least 3 control nodes are required.
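-
-For example, ``os-odl_l2-sfc-noha`` breaks down as::
-
-    OS      -> OpenStack VIM
-    odl_l2  -> OpenDaylight (Layer 2) SDN controller
-    sfc     -> Service Function Chaining feature
-    noha    -> single control node, no high availability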
-
-OPNFV Scenarios in Apex
-=======================
-
-Apex provides pre-built scenario files in /etc/opnfv-apex from which a user
-can select the desired scenario. Simply pass the desired file to the installer
-as a (-d) deploy setting. Read further in the Apex documentation to learn more
-about invoking the deploy command. Below is a quick reference matrix for the
-OPNFV scenarios supported in Apex. Please refer to the respective OPNFV
-documentation for each scenario for a full scenario description. Also, please
-refer to the release notes for information about known issues per scenario.
-The following scenarios correspond to a supported <Scenario>.yaml deploy
-settings file:
-
-+-------------------------+------------+-----------------+
-| **Scenario** | **Owner** | **Supported** |
-+-------------------------+------------+-----------------+
-| os-nosdn-nofeature-ha | Apex | Yes |
-+-------------------------+------------+-----------------+
-| os-nosdn-nofeature-noha | Apex | Yes |
-+-------------------------+------------+-----------------+
-| os-nosdn-ovs-noha | OVS for NFV| Yes |
-+-------------------------+------------+-----------------+
-| os-nosdn-fdio-noha | FDS | Yes |
-+-------------------------+------------+-----------------+
-| os-odl_l2-nofeature-ha | Apex | Yes |
-+-------------------------+------------+-----------------+
-| os-odl_l3-nofeature-ha | Apex | Yes |
-+-------------------------+------------+-----------------+
-| os-odl_l2-sfc-noha | SFC | Yes |
-+-------------------------+------------+-----------------+
-| os-odl-bgpvpn-ha | SDNVPN | No |
-+-------------------------+------------+-----------------+
-| os-odl_l2-fdio-noha | FDS | Yes |
-+-------------------------+------------+-----------------+
-| os-onos-nofeature-ha | ONOSFW | Yes |
-+-------------------------+------------+-----------------+
-| os-onos-sfc-ha | ONOSFW | Yes |
-+-------------------------+------------+-----------------+
diff --git a/docs/installationprocedure/baremetal.rst b/docs/installationprocedure/baremetal.rst
deleted file mode 100644
index 83cda326..00000000
--- a/docs/installationprocedure/baremetal.rst
+++ /dev/null
@@ -1,273 +0,0 @@
-Installation High-Level Overview - Bare Metal Deployment
-========================================================
-
-The setup presumes that you have 6 or more bare metal servers already set up,
-with network connectivity on at least one network interface for all servers,
-via a TOR switch or other network implementation.
-
-The physical TOR switches are **not** automatically configured from the OPNFV
-reference platform. All the networks involved in the OPNFV infrastructure as
-well as the provider networks and the private tenant VLANs need to be manually
-configured.
-
-The Jumphost can be installed using the bootable ISO or by using the
-(``opnfv-apex*.rpm``) RPMs and their dependencies. The Jumphost should then be
-configured with an IP gateway on its admin or public interface and configured
-with a working DNS server. The Jumphost should also have routable access
-to the lights out network for the overcloud nodes.
-
-``opnfv-deploy`` is then executed in order to deploy the undercloud VM and to
-provision the overcloud nodes. ``opnfv-deploy`` uses three configuration files
-in order to know how to install and provision the OPNFV target system.
-The information gathered under section
-`Execution Requirements (Bare Metal Only)`_ is put into the
-``/etc/opnfv-apex/inventory.yaml`` configuration file. Deployment options are
-put into the YAML file ``/etc/opnfv-apex/deploy_settings.yaml``. Alternatively
-there are pre-baked deploy_settings files available in ``/etc/opnfv-apex/``.
-These files are named with the naming convention
-os-sdn_controller-enabled_feature-[no]ha.yaml. These files can be used in place
-of the ``/etc/opnfv-apex/deploy_settings.yaml`` file if one suits your
-deployment needs. Networking definitions gathered under section
-`Network Requirements`_ are put into the YAML file
-``/etc/opnfv-apex/network_settings.yaml``. ``opnfv-deploy`` will boot the
-undercloud VM and load the target deployment configuration into the
-provisioning toolchain. This information includes MAC address, IPMI,
-Networking Environment and OPNFV deployment options.
-
-Once configuration is loaded and the undercloud is configured it will then
-reboot the overcloud nodes via IPMI. The nodes should already be set to PXE
-boot first off the admin interface. The nodes will first PXE off of the
-undercloud PXE server and go through a discovery/introspection process.
-
-Introspection boots off of custom introspection PXE images. These images are
-designed to look at the properties of the hardware that is being booted
-and report those properties back to the undercloud node.
-
-After introspection the undercloud will execute a Heat Stack Deployment to
-continue node provisioning and configuration. The nodes will reboot and PXE
-from the undercloud PXE server again to provision each node using Glance disk
-images provided by the undercloud. These disk images include all the necessary
-packages and configuration for an OPNFV deployment to execute. Once the disk
-images have been written to the nodes' disks, the nodes will boot locally and
-execute cloud-init, which performs the final node configuration. This
-configuration is largely completed by executing a puppet apply on each node.
-
-Installation High-Level Overview - VM Deployment
-================================================
-
-The VM nodes deployment operates almost the same way as the bare metal
-deployment with a few differences mainly related to power management.
-``opnfv-deploy`` still deploys an undercloud VM. In addition to the undercloud
-VM a collection of VMs (3 control nodes + 2 compute for an HA deployment or 1
-control node and 1 or more compute nodes for a Non-HA Deployment) will be
-defined for the target OPNFV deployment. The part of the toolchain that
-executes IPMI power instructions calls into libvirt instead of the IPMI
-interfaces on baremetal servers to operate the power management. These VMs are
-then provisioned with the same disk images and configuration that baremetal
-nodes would be.
-
-To Triple-O these nodes look as though they were just built and registered the
-same way as bare metal nodes; the main difference is the use of a libvirt
-driver for the power management.
-
-Installation Guide - Bare Metal Deployment
-==========================================
-
-This section goes step-by-step on how to correctly install and provision the
-OPNFV target system to bare metal nodes.
-
-Install Bare Metal Jumphost
----------------------------
-
-1a. If your Jumphost does not have CentOS 7 already on it, or you would like to
- do a fresh install, then download the Apex bootable ISO from the OPNFV
- artifacts site <http://artifacts.opnfv.org/apex.html>. There have been
- isolated reports of the ISO having trouble completing installation
- successfully. In the unexpected event the ISO does not work, please work
- around this by downloading the CentOS 7 DVD and performing a
- "Virtualization Host" install. If you perform a "Minimal Install" or an
- install type other than "Virtualization Host", simply run
- ``sudo yum groupinstall "Virtualization Host"``
- ``chkconfig libvirtd on && reboot``
- to install virtualization support and enable libvirt on boot. If you use the
- CentOS 7 DVD, proceed to step 1b once the CentOS 7 install with
- "Virtualization Host" support is complete.
-
-1b. If your Jump host already has CentOS 7 with libvirt running on it then
- install the RDO Newton release RPM and epel-release:
-
- ``sudo yum install https://repos.fedorapeople.org/repos/openstack/openstack-newton/rdo-release-newton-4.noarch.rpm``
- ``sudo yum install epel-release``
-
- The RDO Project release repository is needed to install OpenVSwitch, which
- is a dependency of opnfv-apex. If you do not have external connectivity to
- use this repository you need to download the OpenVSwitch RPM from the RDO
- Project repositories and install it with the opnfv-apex RPM.
-
-2a. Boot the ISO off of a USB or other installation media and walk through
- installing OPNFV CentOS 7. The ISO comes prepared to be written directly
- to a USB drive with dd as such:
-
- ``dd if=opnfv-apex.iso of=/dev/sdX bs=4M``
-
- Replace /dev/sdX with the device assigned to your USB drive. Then select
- the USB device as the boot media on your Jumphost.
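-
- If you are unsure which device name your USB drive was assigned, compare
- the output of ``lsblk`` before and after inserting the drive:
-
- ``lsblk -o NAME,SIZE,TYPE``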
-
-2b. If your Jump host already has CentOS 7 with libvirt running on it then
- install the opnfv-apex RPMs using the OPNFV artifacts yum repo. This yum
- repo is created at release. It will not exist before release day.
-
- ``sudo yum install http://artifacts.opnfv.org/apex/danube/opnfv-apex-release-danube.noarch.rpm``
-
- Once you have installed the repo definitions for Apex, RDO and EPEL then
- yum install Apex:
-
- ``sudo yum install opnfv-apex``
-
- If ONOS will be used, install the ONOS rpm instead of the opnfv-apex rpm.
-
- ``sudo yum install opnfv-apex-onos``
-
-2c. If you choose not to use the Apex yum repo or you choose to use
- pre-released RPMs you can download and install the required RPMs from the
- artifacts site <http://artifacts.opnfv.org/apex.html>. The following RPMs
- are available for installation:
-
- - opnfv-apex - OpenDaylight L2 / L3 and ODL SFC support *
- - opnfv-apex-onos - ONOS support *
- - opnfv-apex-undercloud - (required) Undercloud Image
- - opnfv-apex-common - (required) Supporting config files and scripts
- - python34-markupsafe - (required) Dependency of opnfv-apex-common **
- - python3-jinja2 - (required) Dependency of opnfv-apex-common **
- - python3-ipmi - (required) Dependency of opnfv-apex-common **
-
- \* Only one of the opnfv-apex and opnfv-apex-onos RPMs is required. It is
- safe to leave the unneeded SDN controller's RPM uninstalled if you do not
- intend to use it.
-
- ** These RPMs are not yet distributed by CentOS or EPEL, so Apex builds and
- distributes them itself. Once they are carried in an upstream channel, Apex
- will no longer carry them and they will not need special handling for
- installation.
-
-
- The EPEL and RDO yum repos are still required:
- ``sudo yum install epel-release``
- ``sudo yum install https://repos.fedorapeople.org/repos/openstack/openstack-newton/rdo-release-newton-4.noarch.rpm``
-
- Once the apex RPMs are downloaded install them by passing the file names
- directly to yum:
- ``sudo yum install python34-markupsafe-<version>.rpm
- python3-jinja2-<version>.rpm python3-ipmi-<version>.rpm``
- ``sudo yum install opnfv-apex-<version>.rpm
- opnfv-apex-undercloud-<version>.rpm opnfv-apex-common-<version>.rpm``
-
-3. After the operating system and the opnfv-apex RPMs are installed, login to
- your Jumphost as root.
-
-4. Configure IP addresses on the interfaces that you have selected as your
- networks.
-
-5. Configure the IP gateway to the Internet, preferably on the public
- interface.
-
-6. Configure your ``/etc/resolv.conf`` to point to a DNS server
- (8.8.8.8 is provided by Google).
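-
- For example (a sketch; substitute your own DNS server if you have one):
-
- ``echo "nameserver 8.8.8.8" >> /etc/resolv.conf``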
-
-Creating a Node Inventory File
-------------------------------
-
-IPMI configuration information gathered in section
-`Execution Requirements (Bare Metal Only)`_ needs to be added to the
-``inventory.yaml`` file.
-
-1. Copy ``/usr/share/doc/opnfv/inventory.yaml.example`` as your inventory file
- template to ``/etc/opnfv-apex/inventory.yaml``.
-
-2. The nodes dictionary contains a definition block for each baremetal host
- that will be deployed. 1 or more compute nodes and 3 controller nodes are
- required. (The example file contains blocks for each of these already).
- It is optional at this point to add more compute nodes into the node list.
-
-3. Edit the following values for each node (a minimal sketch of a complete
- node entry follows the notes below):
-
- - ``mac_address``: MAC of the interface that will PXE boot from undercloud
- - ``ipmi_ip``: IPMI IP Address
- - ``ipmi_user``: IPMI username
- - ``ipmi_password``: IPMI password
- - ``pm_type``: Power Management driver to use for the node
- values: pxe_ipmitool (tested) or pxe_wol (untested) or pxe_amt (untested)
- - ``cpus``: (Introspected*) CPU cores available
- - ``memory``: (Introspected*) Memory available in MiB
- - ``disk``: (Introspected*) Disk space available in GB
- - ``disk_device``: (Opt***) Root disk device to use for installation
- - ``arch``: (Introspected*) System architecture
- - ``capabilities``: (Opt**) Node's role in deployment
- values: profile:control or profile:compute
-
- \* Introspection looks up the overcloud node's resources and overrides these
- values. You can leave the default values, and Apex will fill in the correct
- ones when it runs introspection on the nodes.
-
- ** If a capabilities profile is not specified, Apex will assign the nodes'
- roles in the OPNFV cluster in a non-deterministic fashion.
-
- \*** disk_device declares which hard disk to use as the root device for
- installation. The format is a comma delimited list of devices, such as
- "sda,sdb,sdc". The disk chosen will be the first device in the list which
- introspection finds to exist on the system. Currently, only a single
- definition is allowed for all nodes: if multiple disk_device definitions
- occur within the inventory, only the last definition will be used for all
- nodes.
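-
- A minimal sketch of one node entry (values are illustrative only; the
- exact nesting follows the packaged inventory.yaml.example)::
-
-   nodes:
-     node1:
-       mac_address: "00:1e:67:00:00:01"
-       ipmi_ip: 192.168.1.11
-       ipmi_user: admin
-       ipmi_password: changeme
-       pm_type: pxe_ipmitool
-       cpus: 4
-       memory: 16384
-       disk: 256
-       arch: x86_64
-       capabilities: profile:control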
-
-Creating the Settings Files
----------------------------
-
-Edit the 2 settings files in /etc/opnfv-apex/. These files have comments to
-help you customize them.
-
-1. deploy_settings.yaml
- This file includes basic configuration options for deployment and also
- documents all available options.
- Alternatively, there are pre-built deploy_settings files available in
- (``/etc/opnfv-apex/``). These files are named with the naming convention
- os-sdn_controller-enabled_feature-[no]ha.yaml. If a pre-built
- deploy_settings file suits your deployment needs, it can be used in place
- of the (``/etc/opnfv-apex/deploy_settings.yaml``) file and there is no
- need to customize the latter.
-
-2. network_settings.yaml
- This file provides Apex with the networking information that satisfies the
- prerequisite `Network Requirements`_. These are specific to your
- environment.
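-
- For orientation, the file is organized per network (a sketch; the key
- names here are illustrative and the packaged file's comments are
- authoritative)::
-
-   admin_network:
-     enabled: true
-     cidr: 192.0.2.0/24
-   public_network:
-     enabled: true
-     cidr: 192.168.37.0/24
-     gateway: 192.168.37.1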
-
-Running ``opnfv-deploy``
-------------------------
-
-You are now ready to deploy OPNFV using Apex!
-``opnfv-deploy`` will use the inventory and settings files to deploy OPNFV.
-
-Follow the steps below to execute:
-
-1. Execute opnfv-deploy
- ``sudo opnfv-deploy -n network_settings.yaml
- -i inventory.yaml -d deploy_settings.yaml``
- If you need more information about the options that can be passed to
- opnfv-deploy, use ``opnfv-deploy --help``. The ``-n network_settings.yaml``
- option allows you to customize your networking topology.
-
-2. Wait while deployment is executed.
- If something goes wrong during this part of the process, start by reviewing
- your network or the information in your configuration files. It's not
- uncommon for something small to be overlooked or mistyped.
- You will also notice outputs in your shell as the deployment progresses.
-
-3. When the deployment is complete the undercloud IP and overcloud dashboard
- URL will be printed. OPNFV has now been deployed using Apex.
-
-.. _`Execution Requirements (Bare Metal Only)`: index.html#execution-requirements-bare-metal-only
-.. _`Network Requirements`: index.html#network-requirements
diff --git a/docs/installationprocedure/index.rst b/docs/installationprocedure/index.rst
deleted file mode 100644
index 83e9292e..00000000
--- a/docs/installationprocedure/index.rst
+++ /dev/null
@@ -1,28 +0,0 @@
-**************************************
-OPNFV Installation instructions (Apex)
-**************************************
-
-Contents:
-
-.. toctree::
- :numbered:
- :maxdepth: 4
-
- abstract.rst
- introduction.rst
- architecture.rst
- requirements.rst
- baremetal.rst
- virtualinstall.rst
- verification.rst
- troubleshooting.rst
- references.rst
-
-:Authors: Tim Rozet (trozet@redhat.com)
-:Authors: Dan Radez (dradez@redhat.com)
-:Version: 3.0
-
-Indices and tables
-==================
-
-* :ref:`search`
diff --git a/docs/installationprocedure/introduction.rst b/docs/installationprocedure/introduction.rst
deleted file mode 100644
index cc489917..00000000
--- a/docs/installationprocedure/introduction.rst
+++ /dev/null
@@ -1,42 +0,0 @@
-Introduction
-============
-
-This document describes the steps to install an OPNFV Colorado reference
-platform, as defined by the Genesis Project using the Apex installer.
-
-The audience is assumed to have a good background in networking
-and Linux administration.
-
-Preface
-=======
-
-Apex uses Triple-O from the RDO Project OpenStack distribution as a
-provisioning tool. The Triple-O image based life cycle installation
-tool provisions an OPNFV Target System (3 controllers, 2 or more
-compute nodes) with OPNFV specific configuration provided by the Apex
-deployment tool chain.
-
-The Apex deployment artifacts contain the necessary tools to deploy and
-configure an OPNFV target system using the Apex deployment toolchain.
-These artifacts offer the choice of using the Apex bootable ISO
-(``opnfv-apex-colorado.iso``), which installs both CentOS 7 and the
-materials necessary to deploy, or the Apex RPMs (``opnfv-apex*.rpm``)
-and their associated dependencies, which expect installation onto a
-CentOS 7 libvirt-enabled host. The RPM contains a collection of
-configuration files, prebuilt disk images, and the automatic deployment
-script (``opnfv-deploy``).
-
-An OPNFV install requires a "Jumphost" in order to operate. The bootable
-ISO will allow you to install a customized CentOS 7 release to the Jumphost,
-which includes the required packages needed to run ``opnfv-deploy``.
-If you already have a Jumphost with CentOS 7 installed, you may choose to
-skip the ISO step and simply install the (``opnfv-apex*.rpm``) RPMs. The RPMs
-are the same RPMs included in the ISO and include all the necessary disk
-images and configuration files to execute an OPNFV deployment. Either method
-will prepare a host to the same ready state for OPNFV deployment.
-
-``opnfv-deploy`` instantiates a Triple-O Undercloud VM server using libvirt
-as its provider. This VM is then configured and used to provision the
-OPNFV target deployment (3 controllers, n compute nodes). These nodes can
-be either virtual or bare metal. This guide contains instructions for
-both methods.
diff --git a/docs/installationprocedure/references.rst b/docs/installationprocedure/references.rst
deleted file mode 100644
index a63a8421..00000000
--- a/docs/installationprocedure/references.rst
+++ /dev/null
@@ -1,40 +0,0 @@
-Frequently Asked Questions
-==========================
-
-License
-=======
-
-All Apex and "common" entities are protected by the `Apache 2.0 License <http://www.apache.org/licenses/>`_.
-
-References
-==========
-
-OPNFV
------
-
-`OPNFV Home Page <http://www.opnfv.org>`_
-
-`OPNFV Genesis project page <https://wiki.opnfv.org/get_started>`_
-
-`OPNFV Apex project page <https://wiki.opnfv.org/apex>`_
-
-`OPNFV Apex release notes <http://artifacts.opnfv.org/apex/colorado/docs/releasenotes/release-notes.html#references>`_
-
-OpenStack
----------
-
-`OpenStack Mitaka Release artifacts <http://www.openstack.org/software/mitaka>`_
-
-`OpenStack documentation <http://docs.openstack.org>`_
-
-OpenDaylight
-------------
-
-Upstream OpenDaylight provides `a number of packaging and deployment options <https://wiki.opendaylight.org/view/Deployment>`_ meant for consumption by downstream projects like OPNFV.
-
-Currently, OPNFV Apex uses `OpenDaylight's Puppet module <https://github.com/dfarrell07/puppet-opendaylight>`_, which in turn depends on `OpenDaylight's RPM <http://cbs.centos.org/repos/nfv7-opendaylight-4-release/>`_.
-
-RDO Project
------------
-
-`RDO Project website <https://www.rdoproject.org/>`_
diff --git a/docs/installationprocedure/requirements.rst b/docs/installationprocedure/requirements.rst
deleted file mode 100644
index 507b671e..00000000
--- a/docs/installationprocedure/requirements.rst
+++ /dev/null
@@ -1,78 +0,0 @@
-Setup Requirements
-==================
-
-Jumphost Requirements
----------------------
-
-The Jumphost requirements are outlined below:
-
-1. CentOS 7 (from ISO or self-installed).
-
-2. Root access.
-
-3. libvirt virtualization support.
-
-4. A minimum of 1 and a maximum of 5 networks; multiple NIC and/or VLAN
- combinations are supported. This is virtualized for a VM deployment.
-
-5. The Colorado Apex RPMs and their dependencies.
-
-6. 16 GB of RAM for a bare metal deployment, 64 GB of RAM for a VM
- deployment.
-
-Network Requirements
---------------------
-
-Network requirements include:
-
-1. No DHCP or TFTP server running on networks used by OPNFV.
-
-2. 1-5 separate networks with connectivity between Jumphost and nodes.
-
- - Control Plane (Provisioning)
-
- - Private Tenant-Networking Network*
-
- - External Network*
-
- - Storage Network*
-
- - Internal API Network* (required for IPv6 \*\*)
-
-3. Lights out OOB network access from the Jumphost, with IPMI enabled on the
- nodes (bare metal deployment only).
-
-4. The External network is a network routable from outside the cloud
- deployment. The External network is where public internet access would
- reside, if available.
-
-\*These networks can be combined with each other or all combined on the
-Control Plane network.
-
-\*\*The Internal API network is, by default, collapsed with the provisioning
-network in IPv4 deployments. This is not possible in IPv6 due to the current
-lack of IPv6 PXE boot support, and therefore the API network is required to
-be its own network in an IPv6 deployment.
-
-Bare Metal Node Requirements
-----------------------------
-
-Bare metal nodes require:
-
-1. IPMI enabled on OOB interface for power control.
-
-2. BIOS boot priority should be PXE first then local hard disk.
-
-3. BIOS PXE interface should include Control Plane network mentioned above.
-
-Execution Requirements (Bare Metal Only)
-----------------------------------------
-
-In order to execute a deployment, one must gather the following information:
-
-1. IPMI IP addresses for the nodes.
-
-2. IPMI login information for the nodes (user/pass).
-
-3. MAC address of Control Plane / Provisioning interfaces of the overcloud
- nodes.
diff --git a/docs/installationprocedure/troubleshooting.rst b/docs/installationprocedure/troubleshooting.rst
deleted file mode 100644
index ed0d1ff6..00000000
--- a/docs/installationprocedure/troubleshooting.rst
+++ /dev/null
@@ -1,144 +0,0 @@
-Developer Guide and Troubleshooting
-===================================
-
-This section aims to explain in more detail the steps that Apex follows
-to make a deployment. It also tries to explain possible issues you might find
-in the process of building or deploying an environment.
-
-After installing the Apex RPMs on the jumphost, several files will be placed
-on the system:
-1. /etc/opnfv-apex: this directory contains a number of scenarios to be
- deployed with different characteristics such as HA (High Availability), SDN
- controller integration (OpenDaylight/ONOS), BGPVPN, FDIO, etc. Having a
- look at any of these files will give you an idea of how to make a
- customized scenario by setting different flags.
-
-2. /usr/bin/: it contains the binaries for the commands opnfv-deploy,
- opnfv-clean and opnfv-util.
-
-3. /var/opt/opnfv/: it contains several files and directories.
-
- 3.1. images/: this folder contains the images that will be deployed
- according to the chosen scenario.
-
- 3.2. lib/: a collection of scripts that are executed in the different
- phases of deployment.
-
-
-Utilization of Images
----------------------
-
-As mentioned earlier in this guide, the Undercloud VM will be in charge of
-deploying OPNFV (Overcloud VMs). Since the Undercloud is an all-in-one
-OpenStack deployment, it will use Glance to manage the images that will be
-deployed as the Overcloud.
-
-Any customization done to the images located on the jumpserver
-(/var/opt/opnfv/images) will be uploaded to the undercloud and, consequently,
-to the overcloud.
-
-Make sure the customization is performed on the right image. For example,
-suppose you virt-customize the image overcloud-full-opendaylight.qcow2, but
-then deploy OPNFV with the following command:
-
- ``sudo opnfv-deploy -n network_settings.yaml -d
- /etc/opnfv-apex/os-onos-nofeature-ha.yaml``
-
-The customization will have no effect on the deployment, since the customized
-image is the OpenDaylight one, while the scenario indicates that the image to
-be deployed is overcloud-full-onos.qcow2.
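-
-For example, to customize the image the ONOS scenario actually deploys (a
-sketch; ``virt-customize`` is provided by the libguestfs tools)::
-
-    virt-customize -a /var/opt/opnfv/images/overcloud-full-onos.qcow2 \
-      --run-command 'echo customized > /etc/motd'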
-
-
-Post-deployment Configuration
------------------------------
-
-Post-deployment scripts will perform configuration tasks such as ssh-key
-injection, network configuration, NAT setup and Open vSwitch creation. They
-will also take care of some OpenStack tasks such as the creation of endpoints,
-external networks, users, projects, etc.
-
-If any of these steps fails, the execution will be interrupted. In some cases,
-the interruption occurs at very early stages, so a new deployment must be
-executed. In other cases, however, it may be worth trying to debug it.
-
- 1. There is no external connectivity from the overcloud nodes:
-
- Post-deployment scripts will configure the routing, nameservers
- and a number of other things between the overcloud and the
- undercloud. If local connectivity, like pinging between the
- different nodes, is working fine, the script most likely failed when
- configuring the NAT via iptables. The main rules to enable
- external connectivity would look like these:
-
- ``iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE``
- ``iptables -t nat -A POSTROUTING -s ${external_cidr} -o eth0 -j
- MASQUERADE``
- ``iptables -A FORWARD -i eth2 -j ACCEPT``
- ``iptables -A FORWARD -s ${external_cidr} -m state --state
- ESTABLISHED,RELATED -j ACCEPT``
- ``service iptables save``
-
- These rules must be executed as root (or sudo) in the
- undercloud machine.
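-
- To check whether the rules are already in place, list the relevant
- tables on the undercloud (as root)::
-
-   iptables -t nat -L POSTROUTING -n -v
-   iptables -L FORWARD -n -v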
-
-OpenDaylight Integration
-------------------------
-
-When a user deploys any of the following scenarios:
-
- - os-odl-bgpvpn-ha.yaml
- - os-odl_l2-fdio-ha.yaml
- - os-odl_l2-fdio-noha.yaml
- - os-odl_l2-nofeature-ha.yaml
- - os-odl_l2-sfc-noha.yaml
- - os-odl_l3-nofeature-ha.yaml
-
-the OpenDaylight (ODL) SDN controller will also be deployed and completely
-integrated with OpenStack. ODL runs as a systemd service, so you can
-manage it as a regular service:
-
- ``systemctl start/restart/stop opendaylight.service``
-
-This command must be executed as root in the controller node of the overcloud,
-where OpenDaylight is running. ODL files are located in /opt/opendaylight. ODL
-uses Karaf as a Java container management system that allows users to
-install new features, check logs and configure many things. In order to
-connect to Karaf's console, use the following command:
-
- ``opnfv-util opendaylight``
-
-This command is very easy to use, but in case it fails to connect to Karaf,
-this is the command it executes underneath:
-
- ``ssh -p 8101 -o UserKnownHostsFile=/dev/null -o
- StrictHostKeyChecking=no karaf@localhost``
-
-Use localhost when the command is executed on the overcloud controller
-itself, or the controller's public IP to connect from elsewhere.
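-
-Once in the Karaf console, a few standard Karaf commands are useful for a
-first look around::
-
-    feature:list -i   # list the features currently installed
-    log:tail          # follow the OpenDaylight log (Ctrl-C to stop)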
-
-Debugging Failures
-------------------
-
-This section gathers different types of failures, their root causes and
-some possible solutions or workarounds to get the process moving again.
-
-1. Post-deployment error messages appear in the output log:
-
- Heat resources will apply puppet manifests during this phase. If one of
- these processes fails, you could try to examine the error and, after
- that, re-run puppet to apply that manifest. Log into the controller (see
- the verification section for that) and check /var/log/messages as root.
- Search for the error you have encountered and see if you can fix it. In
- order to re-run the puppet manifest, search for "puppet apply" in that
- same log. You will have to run the last "puppet apply" executed before
- the error. It should look like this:
-
- ``FACTER_heat_outputs_path="/var/run/heat-config/heat-config-puppet/5b4c7a01-0d63-4a71-81e9-d5ee6f0a1f2f" FACTER_fqdn="overcloud-controller-0.localdomain.com" \
- FACTER_deploy_config_name="ControllerOvercloudServicesDeployment_Step4" puppet apply --detailed-exitcodes -l syslog -l console \
- /var/lib/heat-config/heat-config-puppet/5b4c7a01-0d63-4a71-81e9-d5ee6f0a1f2f.pp``
-
- Note that Heat triggers the puppet run via os-apply-config and
- passes a different value for step each time. There is a total of
- five steps. Some of these steps will not be executed, depending on the
- type of scenario that is being deployed.
diff --git a/docs/installationprocedure/verification.rst b/docs/installationprocedure/verification.rst
deleted file mode 100644
index 81e4c8e4..00000000
--- a/docs/installationprocedure/verification.rst
+++ /dev/null
@@ -1,89 +0,0 @@
-Verifying the Setup
--------------------
-
-Once the deployment has finished, the OPNFV deployment can be accessed via the
-undercloud node. From the jump host, ssh to the undercloud host and become the
-stack user. Alternatively, ssh keys have been set up such that the root user
-on the jump host can ssh to the undercloud directly as the stack user. For
-convenience a utility script has been provided to look up the undercloud's IP
-address and ssh to the undercloud, all in one command. An optional user name
-can be passed to indicate whether to connect as the stack or root user. The
-stack user is the default if no username is specified.
-
-| ``opnfv-util undercloud root``
-| ``su - stack``
-
-Once connected to the undercloud as the stack user, look for the two keystone
-RC files that can be used to interact with the undercloud and the overcloud.
-Source the appropriate RC file to interact with the respective OpenStack
-deployment.
-
-| ``source stackrc`` (undercloud)
-| ``source overcloudrc`` (overcloud / OPNFV)
-
-The contents of these files include the credentials for the administrative
-user of the undercloud and OPNFV respectively. At this point both the
-undercloud and OPNFV can be interacted with just as any OpenStack
-installation can be. Start by
-listing the nodes in the undercloud that were used to deploy the overcloud.
-
-| ``source stackrc``
-| ``openstack server list``
-
-The control and compute nodes will be listed in the output of this server list
-command. The IP addresses that are listed are the control plane addresses that
-were used to provision the nodes. Use these IP addresses to connect to these
-nodes. Initial authentication requires using the user heat-admin.
-
-| ``ssh heat-admin@192.0.2.7``
-
-To begin creating users, images, networks, servers, etc. in OPNFV, source the
-overcloudrc file, or retrieve the admin user's credentials from the
-overcloudrc file and connect to the web Dashboard.
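-
-For example, to pull the admin credentials for the Dashboard out of the RC
-file::
-
-    grep -E 'OS_USERNAME|OS_PASSWORD' overcloudrc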
-
-
-You are now able to follow the `OpenStack Verification`_ section.
-
-OpenStack Verification
-----------------------
-
-Once connected to the OPNFV Dashboard make sure the OPNFV target system is
-working correctly:
-
-1. In the left pane, click Compute -> Images, click Create Image.
-
-2. Enter the name "cirros" and the Image Location
- ``http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img``.
-
-3. Select format "QCOW2", select Public, then click Create Image.
-
-4. Now click Project -> Network -> Networks, click Create Network.
-
-5. Enter a name "internal", click Next.
-
-6. Enter a subnet name "internal_subnet", and enter Network Address
- ``172.16.1.0/24``, click Next.
-
-7. Now go to Project -> Compute -> Instances, click Launch Instance.
-
-8. Enter Instance Name "first_instance", select Instance Boot Source
- "Boot from image", and then select Image Name "cirros".
-
-9. Click Launch; the status will cycle through a couple of states before
- becoming "Active".
-
-10. Steps 7 through 9 can be repeated to launch more instances.
-
-11. Once an instance becomes "Active", its IP address will be displayed on
- the Instances page.
-
-12. Click the name of an instance, then the "Console" tab and login as
- "cirros"/"cubswin:)"
-
-13. To verify storage is working,
- click Project -> Compute -> Volumes, Create Volume
-
-14. Give the volume a name and a size of 1 GB.
-
-15. Once the volume becomes "Available" click the dropdown arrow and attach it
- to an instance.
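-
-The same verification can be performed from the CLI after sourcing the
-overcloudrc file (a sketch using the era-appropriate clients; the flavor
-name is an assumption, check ``nova flavor-list``)::
-
-    source overcloudrc
-    curl -O http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img
-    glance image-create --name cirros --visibility public \
-      --disk-format qcow2 --container-format bare \
-      --file cirros-0.3.4-x86_64-disk.img
-    neutron net-create internal
-    neutron subnet-create internal 172.16.1.0/24 --name internal_subnet
-    nova boot --image cirros --flavor m1.tiny \
-      --nic net-id=$(neutron net-show internal -f value -c id) first_instance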
-
-Congratulations, you have successfully installed OPNFV!
diff --git a/docs/installationprocedure/virtualinstall.rst b/docs/installationprocedure/virtualinstall.rst
deleted file mode 100644
index 5da2ee3c..00000000
--- a/docs/installationprocedure/virtualinstall.rst
+++ /dev/null
@@ -1,69 +0,0 @@
-Installation High-Level Overview - Virtual Deployment
-=====================================================
-
-The VM nodes deployment operates almost the same way as the bare metal
-deployment with a few differences. ``opnfv-deploy`` still deploys an
-undercloud VM. In addition to the undercloud VM a collection of VMs
-(3 control nodes + 2 compute for an HA deployment or 1 control node and 1
-or more compute nodes for a non-HA Deployment) will be defined for the target
-OPNFV deployment. The part of the toolchain that executes IPMI power
-instructions calls into libvirt instead of the IPMI interfaces on baremetal
-servers to operate the power management. These VMs are then provisioned with
-the same disk images and configuration that baremetal nodes would be. To
-Triple-O these nodes look as though they were just built and registered the
-same way as bare metal nodes; the main difference is the use of a libvirt
-driver for the power management. Finally, the default network_settings file
-will deploy without modification. Customizations are welcome but not needed
-if a generic set of network_settings is acceptable.
-
-Installation Guide - Virtual Deployment
-=======================================
-
-This section goes step-by-step on how to correctly install and provision the
-OPNFV target system to VM nodes.
-
-Install Jumphost
-----------------
-
-Follow the instructions in the `Install Bare Metal Jumphost`_ section.
-
-Running ``opnfv-deploy``
-------------------------
-
-You are now ready to deploy OPNFV!
-``opnfv-deploy`` has virtual deployment capability that includes all of
-the configuration necessary to deploy OPNFV with no modifications.
-
-If no modifications are made to the included configurations the target
-environment will deploy with the following architecture:
-
- - 1 undercloud VM
-
- - The option of 3 control and 2 or more compute VMs (HA Deploy / default)
- or 1 control and 1 or more compute VM (Non-HA deploy / pass -n)
-
- - 1-5 networks: provisioning, private tenant networking, external, storage
- and internal API. The API, storage and tenant networking networks can be
- collapsed onto the provisioning network.
-
-Follow the steps below to execute:
-
-1. ``sudo opnfv-deploy -v [ --virtual-computes n ]
- [ --virtual-cpus n ] [ --virtual-ram n ]
- -n network_settings.yaml -d deploy_settings.yaml``
-
-2. It will take approximately 45 minutes to an hour to stand up the
- undercloud, define the target virtual machines, configure the deployment
- and execute the deployment. You will notice different outputs in your shell.
-
-3. When the deployment is complete, the IP of the undercloud and a URL for
- the OpenStack dashboard will be displayed.
-
-Verifying the Setup - VMs
--------------------------
-
-To verify the setup, you can follow the instructions in the
-`Verifying the Setup`_ section.
-
-.. _`Install Bare Metal Jumphost`: index.html#install-bare-metal-jumphost
-.. _`Verifying the Setup`: index.html#verifying-the-setup