Diffstat (limited to 'docs')
-rw-r--r--  docs/installation-instructions/abstract.rst        |  6
-rw-r--r--  docs/installation-instructions/architecture.rst    |  4
-rw-r--r--  docs/installation-instructions/baremetal.rst       | 95
-rw-r--r--  docs/installation-instructions/introduction.rst    | 24
-rw-r--r--  docs/installation-instructions/references.rst      |  8
-rw-r--r--  docs/installation-instructions/requirements.rst    |  8
-rw-r--r--  docs/installation-instructions/verification.rst    |  2
-rw-r--r--  docs/installation-instructions/virtualinstall.rst  | 43
8 files changed, 98 insertions, 92 deletions
diff --git a/docs/installation-instructions/abstract.rst b/docs/installation-instructions/abstract.rst
index 185ec43e..70814cea 100644
--- a/docs/installation-instructions/abstract.rst
+++ b/docs/installation-instructions/abstract.rst
@@ -1,16 +1,16 @@
Abstract
========
-This document describes how to install the Bramaputra release of OPNFV when
+This document describes how to install the Colorado release of OPNFV when
using Apex as a deployment tool, covering its limitations, dependencies
and required system resources.
License
=======
-Bramaputra release of OPNFV when using Apex as a deployment tool Docs
+Colorado release of OPNFV when using Apex as a deployment tool Docs
(c) by Tim Rozet (Red Hat) and Dan Radez (Red Hat)
-Bramaputra release of OPNFV when using Apex as a deployment tool Docs
+Colorado release of OPNFV when using Apex as a deployment tool Docs
are licensed under a Creative Commons Attribution 4.0 International License.
You should have received a copy of the license along with this.
If not, see <http://creativecommons.org/licenses/by/4.0/>.
diff --git a/docs/installation-instructions/architecture.rst b/docs/installation-instructions/architecture.rst
index f38afcc9..e9e83105 100644
--- a/docs/installation-instructions/architecture.rst
+++ b/docs/installation-instructions/architecture.rst
@@ -1,8 +1,8 @@
Triple-O Deployment Architecture
================================
-Apex is based on RDO Manager which is the RDO Project's implementation of
-the OpenStack Triple-O project. It is important to understand the basics
+Apex is based on the OpenStack Triple-O project as distributed by
+the RDO Project. It is important to understand the basics
of a Triple-O deployment to help make decisions that will assist in
successfully deploying OPNFV.
diff --git a/docs/installation-instructions/baremetal.rst b/docs/installation-instructions/baremetal.rst
index af4ced11..002b224d 100644
--- a/docs/installation-instructions/baremetal.rst
+++ b/docs/installation-instructions/baremetal.rst
@@ -1,32 +1,35 @@
Installation High-Level Overview - Bare Metal Deployment
========================================================
-The setup presumes that you have 6 bare metal servers and have already setup network
-connectivity on at least 2 interfaces for all servers via a TOR switch or other
-network implementation.
+The setup presumes that you have 6 or more bare metal servers already set up with
+network connectivity on at least 2 interfaces for all servers via a TOR switch or
+other network implementation.
The physical TOR switches are **not** automatically configured from the OPNFV reference
platform. All the networks involved in the OPNFV infrastructure as well as the provider
networks and the private tenant VLANs need to be manually configured.
The Jumphost can be installed using the bootable ISO or by other means including the
-(``opnfv-apex``) RPMs and virtualization capabilities. The Jumphost should then be
+(``opnfv-apex*.rpm``) RPMs and virtualization capabilities. The Jumphost should then be
configured with an IP gateway on its admin or public interface and configured with a
working DNS server. The Jumphost should also have routable access to the lights out network.
``opnfv-deploy`` is then executed in order to deploy the Undercloud VM. ``opnfv-deploy`` uses
three configuration files in order to know how to install and provision the OPNFV target system.
The information gathered under section `Execution Requirements (Bare Metal Only)`_ is put
-into the YAML file (``/etc/opnfv-apex/inventory.yaml``) configuration file. Deployment
-options are put into the YAML file (``/etc/opnfv-apex/deploy_settings.yaml``). Networking
-definitions gathered under section `Network Requirements`_ are put into the YAML file
-(``/etc/opnfv-apex/network_settings.yaml``). ``opnfv-deploy`` will boot the Undercloud VM
-and load the target deployment configuration into the provisioning toolchain. This includes
-MAC address, IPMI, Networking Environment and OPNFV deployment options.
-
-Once configuration is loaded and Undercloud is configured it will then reboot the nodes via IPMI.
-The nodes should already be set to PXE boot first off the admin interface. The nodes will
-first PXE off of the Undercloud PXE server and go through a discovery/introspection process.
+into the YAML configuration file ``/etc/opnfv-apex/inventory.yaml``. Deployment
+options are put into the YAML file ``/etc/opnfv-apex/deploy_settings.yaml``. Alternatively,
+there are pre-baked deploy_settings files available in ``/etc/opnfv-apex/``. These files
+follow the naming convention ``os-sdn_controller-enabled_feature-[no]ha.yaml`` and can be
+used in place of the ``/etc/opnfv-apex/deploy_settings.yaml`` file if one suits your
+deployment needs. Networking definitions gathered under section `Network Requirements`_ are put
+into the YAML file ``/etc/opnfv-apex/network_settings.yaml``. ``opnfv-deploy`` will boot
+the Undercloud VM and load the target deployment configuration into the provisioning toolchain.
+This includes MAC address, IPMI, Networking Environment and OPNFV deployment options.
+
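+As an illustration only, a single node entry in ``inventory.yaml`` might look roughly like the
+following sketch (the field names here are hypothetical; the example file shipped in
+``/etc/opnfv-apex/`` documents the exact schema)::
+
+  nodes:
+    node1:
+      mac_address: "aa:bb:cc:dd:ee:ff"  # MAC of the admin (PXE) interface
+      ipmi_ip: 10.0.1.11                # address on the lights out network
+      ipmi_user: admin
+      ipmi_pass: password
+      cpus: 16
+      memory: 32768                     # MB
+      disk: 40                          # GB
+      arch: x86_64
+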
+Once configuration is loaded and the Undercloud is configured, it will then reboot the nodes
+via IPMI. The nodes should already be set to PXE boot first off the admin interface. The nodes
+will first PXE off of the Undercloud PXE server and go through a discovery/introspection process.
Introspection boots off of custom introspection PXE images. These images are designed to look
at the properties of the hardware that is booting off of them and report the properties of
@@ -51,15 +54,12 @@ The part of the toolchain that executes IPMI power instructions calls into libvirt instead of
the IPMI interfaces on baremetal servers to operate the power management. These VMs are then
provisioned with the same disk images and configuration that baremetal would be.
-To RDO Manager these nodes look like they have just built and registered the same way as
+To Triple-O these nodes look like they have just been built and registered the same way as
bare metal nodes; the main difference is the use of a libvirt driver for the power management.
Installation Guide - Bare Metal Deployment
==========================================
-**WARNING: Baremetal documentation is not complete. WARNING: The main missing instructions are r elated to bridging
-the networking for the undercloud to the physical underlay network for the overcloud to be deployed to.**
-
This section goes step-by-step on how to correctly install and provision the OPNFV target
system to bare metal nodes.
@@ -72,26 +72,18 @@ Install Bare Metal Jumphost
installation successfully. In the unexpected event the ISO does not work, please work around
this by downloading the CentOS 7 DVD and performing a "Virtualization Host" install.
If you perform a "Minimal Install" or an install type other than "Virtualization Host", simply
- run ``sudo yum groupinstall "Virtualization Host" && chkconfig libvird on`` and reboot
- the host. Once you have completed the base CentOS install proceed to step 1b.
+   run ``sudo yum groupinstall "Virtualization Host" && chkconfig libvirtd on && reboot``
+   to install virtualization support and enable libvirt on boot. If you use the CentOS 7 DVD,
+   proceed to step 1b once the CentOS 7 install with "Virtualization Host" support is complete.
-1b. If your Jump host already has CentOS 7 with libvirt running on it then install the
- opnfv-apex RPMs from OPNFV artifacts <http://artifacts.opnfv.org/>. The following RPMS
- are available for installation:
+1b. If your Jump host already has CentOS 7 with libvirt running on it then install
+    the RDO Release RPM:
- - opnfv-apex - OpenDaylight L2 / L3 and ONOS support **
- - opnfv-apex-opendaylight-sfc - OpenDaylight SFC support **
- - opnfv-apex-undercloud (required)
- - opnfv-apex-common (required)
-
- ** One or more of these RPMs is required
- If you only want the experimental SFC support then the opnfv-apex RPM is not required.
- If you only want OpenDaylight or ONOS support then the opnfv-apex-opendaylight-sfc RPM is
- not required.
+ ``sudo yum install -y https://www.rdoproject.org/repos/rdo-release.rpm opnfv-apex-{version}.rpm``
- To install these RPMs download them to the local disk on your CentOS 7 install and pass the
- file names directly to yum:
- ``sudo yum install opnfv-apex-<version>.rpm opnfv-apex-undercloud-<version>.rpm opnfv-apex-common-<version>.rpm``
+ The RDO Project release repository is needed to install OpenVSwitch, which is a dependency of
+ opnfv-apex. If you do not have external connectivity to use this repository you need to download
+ the OpenVSwitch RPM from the RDO Project repositories and install it with the opnfv-apex RPM.
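+
+   For example (illustrative only; exact package names may differ), on a machine that does
+   have connectivity you could fetch the package with ``yumdownloader openvswitch`` (from
+   the yum-utils package), copy the resulting RPM to the Jumphost, and then run
+   ``sudo yum install -y ./openvswitch-*.rpm opnfv-apex-{version}.rpm``.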
2a. Boot the ISO off of a USB or other installation media and walk through installing OPNFV CentOS 7.
The ISO comes prepared to be written directly to a USB drive with dd as such:
@@ -101,13 +93,24 @@ Install Bare Metal Jumphost
Replace /dev/sdX with the device assigned to your USB drive. Then select the USB device as the
boot media on your Jumphost.
-2b. Install the RDO Release RPM and the opnfv-apex RPM:
+2b. If your Jump host already has CentOS 7 with libvirt running on it then install the
+ opnfv-apex RPMs from OPNFV artifacts <http://artifacts.opnfv.org/>. The following RPMS
+ are available for installation:
- ``sudo yum install -y https://www.rdoproject.org/repos/rdo-release.rpm opnfv-apex-{version}.rpm``
+   - opnfv-apex - OpenDaylight L2 / L3 support **
+ - opnfv-apex-onos - ONOS support **
+ - opnfv-apex-opendaylight-sfc - OpenDaylight SFC support **
+ - opnfv-apex-undercloud - (required) Undercloud Image
+ - opnfv-apex-common - (required) Supporting config files and scripts
+
+ ** One or more of these RPMs is required
+ Only one of opnfv-apex, opnfv-apex-onos and opnfv-apex-opendaylight-sfc is required. It is
+   safe to leave the RPMs for unneeded SDN controllers uninstalled if you do not intend to use them.
+
+ To install these RPMs download them to the local disk on your CentOS 7 install and pass the
+ file names directly to yum:
+ ``sudo yum install opnfv-apex-<version>.rpm opnfv-apex-undercloud-<version>.rpm opnfv-apex-common-<version>.rpm``
- The RDO Project release repository is needed to install OpenVSwitch, which is a dependency of
- opnfv-apex. If you do not have external connectivity to use this repository you need to download
- the OpenVSwitch RPM from the RDO Project repositories and install it with the opnfv-apex RPM.
3. After the operating system and the opnfv-apex RPMs are installed, log in to your Jumphost as root.
@@ -156,6 +159,12 @@ Edit the 2 settings files in /etc/opnfv-apex/. These files have comments to help you customize them.
1. deploy_settings.yaml
This file includes basic configuration options for deployment.
+   Alternatively, there are pre-built deploy_settings files available in ``/etc/opnfv-apex/``.
+   These files follow the naming convention ``os-sdn_controller-enabled_feature-[no]ha.yaml``
+   and can be used in place of the ``/etc/opnfv-apex/deploy_settings.yaml`` file if one suits
+   your deployment needs. If a pre-built deploy_settings file is chosen, there is no need to
+   customize ``/etc/opnfv-apex/deploy_settings.yaml``.
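+
+   As a purely illustrative example: under this convention an OpenDaylight-based HA
+   deployment with no extra features enabled would map to a file named something like::
+
+     /etc/opnfv-apex/os-odl_l2-nofeature-ha.yaml
+
+   The exact set of files present depends on which opnfv-apex RPMs are installed; list the
+   contents of ``/etc/opnfv-apex/`` to see what is available.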
2. network_settings.yaml
This file provides Apex with the networking information that satisfies the
@@ -170,10 +179,10 @@ You are now ready to deploy OPNFV using Apex!
Follow the steps below to execute:
1. Execute opnfv-deploy
- ``sudo opnfv-deploy [ --flat | -n network_settings.yaml ] -i inventory.yaml -d deploy_settings.yaml``
+ ``sudo opnfv-deploy [ --flat ] -n network_settings.yaml -i inventory.yaml -d deploy_settings.yaml``
If you need more information about the options that can be passed to opnfv-deploy, use ``opnfv-deploy --help``.
- --flat will collapse all networks onto a single nic, -n network_settings.yaml allows you to customize your
- networking topology.
+   --flat collapses all networks onto a single NIC and uses only the admin network from the network settings file.
+ -n network_settings.yaml allows you to customize your networking topology.
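+
+   As a concrete, illustrative example (the deploy settings file name here is hypothetical),
+   a deployment driven entirely by files in ``/etc/opnfv-apex/`` might be invoked as:
+
+   ``sudo opnfv-deploy -n /etc/opnfv-apex/network_settings.yaml -i /etc/opnfv-apex/inventory.yaml -d /etc/opnfv-apex/os-odl_l2-nofeature-ha.yaml``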
2. Wait while deployment is executed.
If something goes wrong during this part of the process,
diff --git a/docs/installation-instructions/introduction.rst b/docs/installation-instructions/introduction.rst
index 25452614..3d5d1bc4 100644
--- a/docs/installation-instructions/introduction.rst
+++ b/docs/installation-instructions/introduction.rst
@@ -1,7 +1,7 @@
Introduction
============
-This document describes the steps to install an OPNFV Bramaputra reference
+This document describes the steps to install an OPNFV Colorado reference
platform, as defined by the Genesis Project using the Apex installer.
The audience is assumed to have a good background in networking
@@ -10,31 +10,31 @@ and Linux administration.
Preface
=======
-Apex uses the RDO Manager Open Source project as a server provisioning tool.
-RDO Manager is the RDO Project implimentation of OpenStack's Triple-O project.
-The Triple-O image based life cycle installation tool provisions an OPNFV
-Target System (3 controllers, n number of compute nodes) with OPNFV specific
-configuration provided by the Apex deployment tool chain.
+Apex uses Triple-O from the RDO Project OpenStack distribution as a
+provisioning tool. The Triple-O image based life cycle installation
+tool provisions an OPNFV Target System (3 controllers, n number of
+compute nodes) with OPNFV specific configuration provided by the Apex
+deployment tool chain.
The Apex deployment artifacts contain the necessary tools to deploy and
configure an OPNFV target system using the Apex deployment toolchain.
These artifacts offer the choice of using the Apex bootable ISO
-(``opnfv-apex-bramaputra.iso``) to both install CentOS 7 and the
-nessesary materials to deploy or the Apex RPM (``opnfv-apex.rpm``)
+(``opnfv-apex-colorado.iso``) to both install CentOS 7 and the
+necessary materials to deploy or the Apex RPMs (``opnfv-apex*.rpm``)
which expects installation to a CentOS 7 libvirt enabled host. The RPM
-contains a collection of configuration file, prebuilt disk images,
+contains a collection of configuration files, prebuilt disk images,
and the automatic deployment script (``opnfv-deploy``).
An OPNFV install requires a "Jumphost" in order to operate. The bootable
ISO will allow you to install a customized CentOS 7 release to the Jumphost,
which includes the required packages needed to run ``opnfv-deploy``.
If you already have a Jumphost with CentOS 7 installed, you may choose to
-skip the ISO step and simply install the (``opnfv-apex.rpm``) RPM. The RPM
-is the same RPM included in the ISO and includes all the necessary disk
+skip the ISO step and simply install the (``opnfv-apex*.rpm``) RPMs. The RPMs
+are the same RPMs included in the ISO and include all the necessary disk
images and configuration files to execute an OPNFV deployment. Either method
will prepare a host to the same ready state for OPNFV deployment.
-``opnfv-deploy`` instantiates an RDO Manager Undercloud VM server using libvirt
+``opnfv-deploy`` instantiates a Triple-O Undercloud VM server using libvirt
as its provider. This VM is then configured and used to provision the
OPNFV target deployment (3 controllers, n compute nodes). These nodes can
be either virtual or bare metal. This guide contains instructions for
diff --git a/docs/installation-instructions/references.rst b/docs/installation-instructions/references.rst
index e58b4182..5ff2a542 100644
--- a/docs/installation-instructions/references.rst
+++ b/docs/installation-instructions/references.rst
@@ -21,7 +21,7 @@ OPNFV
OpenStack
---------
-`OpenStack Liberty Release artifacts <http://www.openstack.org/software/liberty>`_
+`OpenStack Mitaka Release artifacts <http://www.openstack.org/software/mitaka>`_
`OpenStack documentation <http://docs.openstack.org>`_
@@ -30,9 +30,9 @@ OpenDaylight
Upstream OpenDaylight provides `a number of packaging and deployment options <https://wiki.opendaylight.org/view/Deployment>`_ meant for consumption by downstream projects like OPNFV.
-Currently, OPNFV Apex uses `OpenDaylight's Puppet module <https://github.com/dfarrell07/puppet-opendaylight>`_, which in turn depends on `OpenDaylight's RPM <http://cbs.centos.org/repos/nfv7-opendaylight-3-candidate/x86_64/os/Packages/opendaylight-3.0.0-2.el7.noarch.rpm>`_.
+Currently, OPNFV Apex uses `OpenDaylight's Puppet module <https://github.com/dfarrell07/puppet-opendaylight>`_, which in turn depends on `OpenDaylight's RPM <http://cbs.centos.org/repos/nfv7-opendaylight-4-release/>`_.
-RDO Manager
+RDO Project
-----------
-`RDO Manager website <https://www.rdoproject.org/rdo-manager>`_
+`RDO Project website <https://www.rdoproject.org/>`_
diff --git a/docs/installation-instructions/requirements.rst b/docs/installation-instructions/requirements.rst
index 46dca2a0..ff9dc2ad 100644
--- a/docs/installation-instructions/requirements.rst
+++ b/docs/installation-instructions/requirements.rst
@@ -15,9 +15,9 @@ The Jumphost requirements are outlined below:
4. Minimum 2 networks and maximum 6 networks; multiple NIC and/or VLAN combinations are supported.
This is virtualized for a VM deployment.
-5. The Bramaputra Apex RPM.
+5. The Colorado Apex RPMs.
-6. 16 GB of RAM for a bare metal deployment, 56 GB of RAM for a VM deployment.
+6. 16 GB of RAM for a bare metal deployment, 64 GB of RAM for a VM deployment.
Network Requirements
--------------------
@@ -28,9 +28,9 @@ Network requirements include:
2. 2-6 separate networks with connectivity between Jumphost and nodes.
- - Control Plane Network (Provisioning)
+ - Control Plane (Provisioning) / Private (API) Network
- - Private / Internal Network*
+ - Internal (Tenant Networking) Network
- External Network
diff --git a/docs/installation-instructions/verification.rst b/docs/installation-instructions/verification.rst
index bf3356bc..42dcb8c5 100644
--- a/docs/installation-instructions/verification.rst
+++ b/docs/installation-instructions/verification.rst
@@ -13,7 +13,7 @@ interact with the undercloud and the overcloud. Source the appropriate RC file to interact with
the respective OpenStack deployment.
| ``source stackrc`` (Undercloud)
-| ``source overcloudrc`` (overcloud / OPNFV)
+| ``source overcloudrc`` (Overcloud / OPNFV)
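+
+As an illustrative smoke test (this assumes the standard OpenStack command line clients are
+installed, as they are on the Undercloud):
+
+| ``source overcloudrc``
+| ``openstack service list``
+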
The contents of these files include the credentials for the administrative user for Undercloud and
OPNFV respectively. At this point both Undercloud and OPNFV can be interacted with just as any
diff --git a/docs/installation-instructions/virtualinstall.rst b/docs/installation-instructions/virtualinstall.rst
index c568e135..c2ee66d8 100644
--- a/docs/installation-instructions/virtualinstall.rst
+++ b/docs/installation-instructions/virtualinstall.rst
@@ -1,24 +1,20 @@
Installation High-Level Overview - Virtual Deployment
=====================================================
-The VM nodes deployment operates almost the same way as the bare metal deploymen
-t with a
-few differences. ``opnfv-deploy`` still deploys an Undercloud VM. In addition to t
-he Undercloud VM
-a collection of VMs (3 control nodes + 2 compute for an HA deployment or 1 contr
-ol node and
-1 compute node for a Non-HA Deployment) will be defined for the target OPNFV dep
-loyment.
-The part of the toolchain that executes IPMI power instructions calls into libvi
-rt instead of
+The VM nodes deployment operates almost the same way as the bare metal
+deployment with a few differences. ``opnfv-deploy`` still deploys an Undercloud
+VM. In addition to the Undercloud VM a collection of VMs (3 control nodes + 2
+compute for an HA deployment or 1 control node and 1 compute node for a Non-HA
+Deployment) will be defined for the target OPNFV deployment. The part of the
+toolchain that executes IPMI power instructions calls into libvirt instead of
the IPMI interfaces on baremetal servers to operate the power management. These
-VMs are then
-provisioned with the same disk images and configuration that baremetal would be.
-
-To RDO Manager these nodes look like they have just built and registered the sam
-e way as
-bare metal nodes, the main difference is the use of a libvirt driver for the pow
-er management.
+VMs are then provisioned with the same disk images and configuration that
+baremetal would be. To Triple-O these nodes look like they have just been built
+and registered the same way as bare metal nodes; the main difference is the use
+of a libvirt driver for the power management. Finally, the default
+network_settings file will deploy without modification. Customizations
+are welcome but not needed if a generic set of network settings is
+acceptable.
Installation Guide - Virtual Deployment
=======================================
@@ -42,17 +38,18 @@ will deploy with the following architecture:
- 1 Undercloud VM
- - The option of 3 control and 2 compute VMs (HA Deploy / default)
- or 1 control and 1 compute VM (Non-HA deploy / pass -n)
+ - The option of 3 control and 2 or more compute VMs (HA Deploy / default)
+     or 1 control and 1 or more compute VMs (Non-HA deploy / pass --no-ha)
- - 2 networks, one for provisioning, internal API,
- storage and tenant networking traffic and a second for the external network
+ - 2-4 networks: provisioning / internal API, storage, private tenant networking
+     and the external network. The storage and tenant networks
+ can be collapsed onto the provisioning network.
Follow the steps below to execute:
-1. ``sudo opnfv-deploy --virtual [ --no-ha ]``
+1. ``sudo opnfv-deploy --virtual [ --flat ] -n network_settings.yaml -d deploy_settings.yaml``
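+
+   For example (illustrative; the deploy settings file name is hypothetical), accepting the
+   default network settings shipped with the RPMs:
+
+   ``sudo opnfv-deploy --virtual -n /etc/opnfv-apex/network_settings.yaml -d /etc/opnfv-apex/os-odl_l2-nofeature-ha.yaml``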
-2. It will take approximately 30 minutes to stand up Undercloud,
+2. It will take approximately 45 minutes to an hour to stand up Undercloud,
define the target virtual machines, configure the deployment and execute the deployment.
You will notice different outputs in your shell.