author    Sofia Wallin <sofia.wallin@ericsson.com>    2017-02-17 14:00:36 +0100
committer Sofia Wallin <sofia.wallin@ericsson.com>    2017-02-17 14:00:36 +0100
commit    4bdb2d6a128664082bddad51091e0d9fb63068fd (patch)
tree      7063348d03a3c6464226ca7fffd85f41e52042a5 /docs/release
parent    5e4c2ffc86d0426113f60b8069e81482f82bbc8d (diff)
Doc updates for MS6
Updated the docs structure according to directives and MS6

Change-Id: I36e92cbc58328528ebb91ff4f54ee701f5477443
Signed-off-by: Sofia Wallin <sofia.wallin@ericsson.com>
Diffstat (limited to 'docs/release')
-rw-r--r--  docs/release/configguide/installerconfig.rst   |  11
-rw-r--r--  docs/release/installation/abstract.rst         |  16
-rw-r--r--  docs/release/installation/architecture.rst     | 143
-rw-r--r--  docs/release/installation/baremetal.rst        | 273
-rw-r--r--  docs/release/installation/index.rst            |  28
-rw-r--r--  docs/release/installation/introduction.rst     |  42
-rw-r--r--  docs/release/installation/references.rst       |  40
-rw-r--r--  docs/release/installation/requirements.rst     |  78
-rw-r--r--  docs/release/installation/troubleshooting.rst  | 144
-rw-r--r--  docs/release/installation/verification.rst     |  89
-rw-r--r--  docs/release/installation/virtualinstall.rst   |  69
-rw-r--r--  docs/release/release-notes/index.rst           |  11
-rw-r--r--  docs/release/release-notes/release-notes.rst   | 411
13 files changed, 1355 insertions, 0 deletions
diff --git a/docs/release/configguide/installerconfig.rst b/docs/release/configguide/installerconfig.rst
new file mode 100644
index 00000000..0cbb00f6
--- /dev/null
+++ b/docs/release/configguide/installerconfig.rst
@@ -0,0 +1,11 @@
+.. This work is licensed under a
+.. Creative Commons Attribution 4.0 International License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) OPNFV
+
+==================
+Apex configuration
+==================
+
+.. include:: ../installation/introduction.rst
+.. include:: ../installation/baremetal.rst
diff --git a/docs/release/installation/abstract.rst b/docs/release/installation/abstract.rst
new file mode 100644
index 00000000..70814cea
--- /dev/null
+++ b/docs/release/installation/abstract.rst
@@ -0,0 +1,16 @@
+Abstract
+========
+
+This document describes how to install the Colorado release of OPNFV when
+using Apex as a deployment tool, covering its limitations, dependencies
+and required system resources.
+
+License
+=======
+Colorado release of OPNFV when using Apex as a deployment tool Docs
+(c) by Tim Rozet (Red Hat) and Dan Radez (Red Hat)
+
+Colorado release of OPNFV when using Apex as a deployment tool Docs
+are licensed under a Creative Commons Attribution 4.0 International License.
+You should have received a copy of the license along with this.
+If not, see <http://creativecommons.org/licenses/by/4.0/>.
diff --git a/docs/release/installation/architecture.rst b/docs/release/installation/architecture.rst
new file mode 100644
index 00000000..38806391
--- /dev/null
+++ b/docs/release/installation/architecture.rst
@@ -0,0 +1,143 @@
+Triple-O Deployment Architecture
+================================
+
+Apex is based on the OpenStack Triple-O project as distributed by
+the RDO Project. It is important to understand the basics
+of a Triple-O deployment to help make decisions that will assist in
+successfully deploying OPNFV.
+
+Triple-O stands for OpenStack On OpenStack. This means that OpenStack
+will be used to install OpenStack. The target OPNFV deployment is an
+OpenStack cloud with NFV features built-in that will be deployed by a
+smaller all-in-one deployment of OpenStack. In this deployment
+methodology there are two OpenStack installations. They are referred
+to as the undercloud and the overcloud. The undercloud is used to
+deploy the overcloud.
+
+The undercloud is the all-in-one installation of OpenStack that includes
+baremetal provisioning capability. The undercloud will be deployed as a
+virtual machine on a jumphost. This VM is pre-built and distributed as part
+of the Apex RPM.
+
+The overcloud is OPNFV. Configuration will be passed into the undercloud and
+the undercloud will use OpenStack's orchestration component, named Heat, to
+execute a deployment that will provision the target OPNFV nodes.
+
+Apex High Availability Architecture
+===================================
+
+Undercloud
+----------
+
+The undercloud is not Highly Available. End users do not depend on the
+undercloud. It is only for management purposes.
+
+Overcloud
+---------
+
+Apex will deploy three control nodes in an HA deployment. Each of these nodes
+will run the following services:
+
+- Stateless OpenStack services
+- MariaDB / Galera
+- RabbitMQ
+- OpenDaylight
+- HA Proxy
+- Pacemaker & VIPs
+- Ceph Monitors and OSDs
+
+Stateless OpenStack services
+ All running stateless OpenStack services are load balanced by HA Proxy.
+ Pacemaker monitors the services and ensures that they are running.
+
+Stateful OpenStack services
+ All running stateful OpenStack services are load balanced by HA Proxy.
+ They are monitored by Pacemaker in an active/passive failover configuration.
+
+MariaDB / Galera
+ The MariaDB database is replicated across the control nodes using Galera.
+ Pacemaker is responsible for a proper start up of the Galera cluster. HA
+ Proxy provides an active/passive failover methodology for connections to the
+ database.
+
+RabbitMQ
+ The message bus is managed by Pacemaker to ensure proper start up and
+ establishment of clustering across cluster members.
+
+OpenDaylight
+ OpenDaylight is currently installed on all three control nodes but only
+ started on the first control node. OpenDaylight's HA capabilities are not yet
+ mature enough to be enabled.
+
+HA Proxy
+ HA Proxy is monitored by Pacemaker to ensure it is running across all nodes
+ and available to balance connections.
+
+Pacemaker & VIPs
+ Pacemaker has relationships and constraints set up to ensure proper service
+ start up order and that Virtual IPs associated with specific services are
+ running on the proper host.
+
+Ceph Monitors & OSDs
+ The Ceph monitors run on each of the control nodes. Each control node also
+ has a Ceph OSD running on it. By default the OSDs use an autogenerated
+ virtual disk as their target device. A non-autogenerated device can be
+ specified in the deploy file.
+
+VM Migration is configured and VMs can be evacuated as needed or as invoked
+by tools such as Heat as part of a monitored stack deployment in the overcloud.
+
+
+OPNFV Scenario Architecture
+===========================
+
+OPNFV groups different types of SDN controllers, deployment options, and
+features into "scenarios". These scenarios are universal across all OPNFV
+installers, although not every scenario is supported by every installer.
+
+The standard naming convention for a scenario is
+``<VIM platform>-<SDN type>-<feature>-<ha/noha>``.
+
+The only supported VIM type is "OS" (OpenStack), while SDN types can be any
+supported SDN controller. "feature" includes things like ovs_dpdk, sfc, etc.
+"ha" or "noha" determines if the deployment will be highly available. If "ha"
+is used, at least 3 control nodes are required. For example,
+``os-odl_l2-sfc-noha`` is an OpenStack deployment with the OpenDaylight (L2)
+SDN controller, the SFC feature enabled, and no high availability.
+
+OPNFV Scenarios in Apex
+=======================
+
+Apex provides pre-built scenario files in /etc/opnfv-apex which a user can
+select from to deploy the desired scenario. Simply pass the desired file to
+the installer as a (-d) deploy setting. Read further in the Apex documentation
+to learn more about invoking the deploy command. Below is a quick reference
+matrix for OPNFV scenarios supported in Apex. Please refer to the respective
+OPNFV Docs documentation for each scenario for a full scenario description.
+Also, please refer to the release notes for information about known issues
+per scenario. The following scenarios correspond to a supported
+<Scenario>.yaml deploy settings file:
+
++-------------------------+------------+-----------------+
+| **Scenario** | **Owner** | **Supported** |
++-------------------------+------------+-----------------+
+| os-nosdn-nofeature-ha | Apex | Yes |
++-------------------------+------------+-----------------+
+| os-nosdn-nofeature-noha | Apex | Yes |
++-------------------------+------------+-----------------+
+| os-nosdn-ovs-noha | OVS for NFV| Yes |
++-------------------------+------------+-----------------+
+| os-nosdn-fdio-noha | FDS | Yes |
++-------------------------+------------+-----------------+
+| os-odl_l2-nofeature-ha | Apex | Yes |
++-------------------------+------------+-----------------+
+| os-odl_l3-nofeature-ha | Apex | Yes |
++-------------------------+------------+-----------------+
+| os-odl_l2-sfc-noha | SFC | Yes |
++-------------------------+------------+-----------------+
+| os-odl-bgpvpn-ha | SDNVPN | No |
++-------------------------+------------+-----------------+
+| os-odl_l2-fdio-noha | FDS | Yes |
++-------------------------+------------+-----------------+
+| os-onos-nofeature-ha | ONOSFW | Yes |
++-------------------------+------------+-----------------+
+| os-onos-sfc-ha | ONOSFW | Yes |
++-------------------------+------------+-----------------+
diff --git a/docs/release/installation/baremetal.rst b/docs/release/installation/baremetal.rst
new file mode 100644
index 00000000..83cda326
--- /dev/null
+++ b/docs/release/installation/baremetal.rst
@@ -0,0 +1,273 @@
+Installation High-Level Overview - Bare Metal Deployment
+========================================================
+
+The setup presumes that you have 6 or more bare metal servers already set up
+with network connectivity on at least one network interface for all servers
+via a TOR switch or other network implementation.
+
+The physical TOR switches are **not** automatically configured from the OPNFV
+reference platform. All the networks involved in the OPNFV infrastructure as
+well as the provider networks and the private tenant VLANs need to be manually
+configured.
+
+The Jumphost can be installed using the bootable ISO or by using the
+(``opnfv-apex*.rpm``) RPMs and their dependencies. The Jumphost should then be
+configured with an IP gateway on its admin or public interface and configured
+with a working DNS server. The Jumphost should also have routable access
+to the lights out network for the overcloud nodes.
+
+``opnfv-deploy`` is then executed in order to deploy the undercloud VM and to
+provision the overcloud nodes. ``opnfv-deploy`` uses three configuration files
+in order to know how to install and provision the OPNFV target system.
+The information gathered under section
+`Execution Requirements (Bare Metal Only)`_ is put into the YAML configuration
+file ``/etc/opnfv-apex/inventory.yaml``. Deployment options are
+put into the YAML file ``/etc/opnfv-apex/deploy_settings.yaml``. Alternatively
+there are pre-baked deploy_settings files available in ``/etc/opnfv-apex/``.
+These files are named with the naming convention
+os-sdn_controller-enabled_feature-[no]ha.yaml. These files can be used in place
+of the ``/etc/opnfv-apex/deploy_settings.yaml`` file if one suits your
+deployment needs. Networking definitions gathered under section
+`Network Requirements`_ are put into the YAML file
+``/etc/opnfv-apex/network_settings.yaml``. ``opnfv-deploy`` will boot the
+undercloud VM and load the target deployment configuration into the
+provisioning toolchain. This information includes MAC address, IPMI,
+Networking Environment and OPNFV deployment options.
+
+Once the configuration is loaded and the undercloud is configured, it will
+then reboot the overcloud nodes via IPMI. The nodes should already be set to
+PXE boot first off the admin interface. The nodes will first PXE off of the
+undercloud PXE server and go through a discovery/introspection process.
+
+Introspection boots off of custom introspection PXE images. These images are
+designed to look at the properties of the hardware that is being booted
+and report those properties back to the undercloud node.
+
+After introspection the undercloud will execute a Heat Stack Deployment to
+continue node provisioning and configuration. The nodes will reboot and PXE
+from the undercloud PXE server again to provision each node using Glance disk
+images provided by the undercloud. These disk images include all the necessary
+packages and configuration for an OPNFV deployment to execute. Once the disk
+images have been written to the nodes' disks the nodes will boot locally and
+execute cloud-init which will execute the final node configuration. This
+configuration is largely completed by executing a puppet apply on each node.
+
+Installation High-Level Overview - VM Deployment
+================================================
+
+The VM nodes deployment operates almost the same way as the bare metal
+deployment with a few differences mainly related to power management.
+``opnfv-deploy`` still deploys an undercloud VM. In addition to the undercloud
+VM a collection of VMs (3 control nodes + 2 compute for an HA deployment or 1
+control node and 1 or more compute nodes for a Non-HA Deployment) will be
+defined for the target OPNFV deployment. The part of the toolchain that
+executes IPMI power instructions calls into libvirt instead of the IPMI
+interfaces on baremetal servers to operate the power management. These VMs are
+then provisioned with the same disk images and configuration that baremetal
+would be.
+
+To Triple-O these nodes look like they have just been built and registered the
+same way as bare metal nodes; the main difference is the use of a libvirt
+driver for the power management.
+
+Installation Guide - Bare Metal Deployment
+==========================================
+
+This section goes step-by-step on how to correctly install and provision the
+OPNFV target system to bare metal nodes.
+
+Install Bare Metal Jumphost
+---------------------------
+
+1a. If your Jumphost does not have CentOS 7 already on it, or you would like to
+ do a fresh install, then download the Apex bootable ISO from the OPNFV
+ artifacts site <http://artifacts.opnfv.org/apex.html>. There have been
+ isolated reports of the ISO having trouble completing installation
+ successfully. In the unexpected event the ISO does not work, please work
+ around this by downloading the CentOS 7 DVD and performing a
+ "Virtualization Host" install. If you perform a "Minimal Install" or an
+ install type other than "Virtualization Host" simply run
+ ``sudo yum groupinstall "Virtualization Host"``
+ ``chkconfig libvirtd on && reboot``
+ to install virtualization support and enable libvirt on boot. If you use the
+ CentOS 7 DVD, proceed to step 1b once the CentOS 7 install with
+ "Virtualization Host" support is complete.
+
+1b. If your Jumphost already has CentOS 7 with libvirt running on it then
+ install the RDO Newton release RPM and epel-release:
+
+ ``sudo yum install https://repos.fedorapeople.org/repos/openstack/openstack-newton/rdo-release-newton-4.noarch.rpm``
+ ``sudo yum install epel-release``
+
+ The RDO Project release repository is needed to install Open vSwitch, which
+ is a dependency of opnfv-apex. If you do not have external connectivity to
+ use this repository you need to download the Open vSwitch RPM from the RDO
+ Project repositories and install it with the opnfv-apex RPM.
+
+2a. Boot the ISO off of a USB or other installation media and walk through
+ installing OPNFV CentOS 7. The ISO comes prepared to be written directly
+ to a USB drive with dd as such:
+
+ ``dd if=opnfv-apex.iso of=/dev/sdX bs=4M``
+
+ Replace /dev/sdX with the device assigned to your USB drive, then select
+ the USB device as the boot media on your Jumphost.
+
+2b. If your Jumphost already has CentOS 7 with libvirt running on it then
+ install the opnfv-apex RPMs using the OPNFV artifacts yum repo. This yum
+ repo is created at release. It will not exist before release day.
+
+ ``sudo yum install http://artifacts.opnfv.org/apex/danube/opnfv-apex-release-danube.noarch.rpm``
+
+ Once you have installed the repo definitions for Apex, RDO and EPEL then
+ yum install Apex:
+
+ ``sudo yum install opnfv-apex``
+
+ If ONOS will be used, install the ONOS rpm instead of the opnfv-apex rpm.
+
+ ``sudo yum install opnfv-apex-onos``
+
+2c. If you choose not to use the Apex yum repo or you choose to use
+ pre-released RPMs you can download and install the required RPMs from the
+ artifacts site <http://artifacts.opnfv.org/apex.html>. The following RPMs
+ are available for installation:
+
+ - opnfv-apex - OpenDaylight L2 / L3 and ODL SFC support *
+ - opnfv-apex-onos - ONOS support *
+ - opnfv-apex-undercloud - (required) Undercloud Image
+ - opnfv-apex-common - (required) Supporting config files and scripts
+ - python34-markupsafe - (required) Dependency of opnfv-apex-common **
+ - python3-jinja2 - (required) Dependency of opnfv-apex-common **
+ - python3-ipmi - (required) Dependency of opnfv-apex-common **
+
+ \* Only one of opnfv-apex or opnfv-apex-onos is required. It is safe to
+ leave the unneeded SDN controller's RPM uninstalled if you do not intend
+ to use it.
+
+ ** These RPMs are not yet distributed by CentOS or EPEL, so Apex has built
+ them for distribution with Apex. Once they are carried in an upstream
+ channel Apex will no longer carry them and they will not need special
+ handling for installation.
+
+
+ The EPEL and RDO yum repos are still required:
+ ``sudo yum install epel-release``
+ ``sudo yum install https://repos.fedorapeople.org/repos/openstack/openstack-newton/rdo-release-newton-4.noarch.rpm``
+
+ Once the apex RPMs are downloaded install them by passing the file names
+ directly to yum:
+ ``sudo yum install python34-markupsafe-<version>.rpm
+ python3-jinja2-<version>.rpm python3-ipmi-<version>.rpm``
+ ``sudo yum install opnfv-apex-<version>.rpm
+ opnfv-apex-undercloud-<version>.rpm opnfv-apex-common-<version>.rpm``
+
+3. After the operating system and the opnfv-apex RPMs are installed, login to
+ your Jumphost as root.
+
+4. Configure IP addresses on the interfaces that you have selected as your
+ networks.
+
+5. Configure the IP gateway to the Internet, preferably on the public
+ interface.
+
+6. Configure your ``/etc/resolv.conf`` to point to a DNS server
+ (8.8.8.8 is provided by Google).
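+
+ A minimal ``/etc/resolv.conf`` contains a single nameserver line, for
+ example using the Google DNS server mentioned above:
+
+ ``nameserver 8.8.8.8``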
+
+Creating a Node Inventory File
+------------------------------
+
+IPMI configuration information gathered in section
+`Execution Requirements (Bare Metal Only)`_ needs to be added to the
+``inventory.yaml`` file.
+
+1. Copy ``/usr/share/doc/opnfv/inventory.yaml.example`` as your inventory file
+ template to ``/etc/opnfv-apex/inventory.yaml``.
+
+2. The nodes dictionary contains a definition block for each baremetal host
+ that will be deployed. 1 or more compute nodes and 3 controller nodes are
+ required. (The example file contains blocks for each of these already).
+ It is optional at this point to add more compute nodes into the node list.
+
+3. Edit the following values for each node:
+
+ - ``mac_address``: MAC of the interface that will PXE boot from undercloud
+ - ``ipmi_ip``: IPMI IP Address
+ - ``ipmi_user``: IPMI username
+ - ``ipmi_password``: IPMI password
+ - ``pm_type``: Power Management driver to use for the node
+ values: pxe_ipmitool (tested) or pxe_wol (untested) or pxe_amt (untested)
+ - ``cpus``: (Introspected*) CPU cores available
+ - ``memory``: (Introspected*) Memory available in MiB
+ - ``disk``: (Introspected*) Disk space available in GB
+ - ``disk_device``: (Opt***) Root disk device to use for installation
+ - ``arch``: (Introspected*) System architecture
+ - ``capabilities``: (Opt**) Node's role in deployment
+ values: profile:control or profile:compute
+
+ \* Introspection looks up the overcloud node's resources and overrides these
+ values. You can leave the default values and Apex will get the correct values
+ when it runs introspection on the nodes.
+
+ ** If the capabilities profile is not specified then Apex will select the
+ nodes' roles in the OPNFV cluster in a non-deterministic fashion.
+
+ \*** disk_device declares which hard disk to use as the root device for
+ installation. The format is a comma delimited list of devices, such as
+ "sda,sdb,sdc". The disk chosen will be the first device in the list which
+ is found by introspection to exist on the system. Currently, only a single
+ definition is allowed for all nodes. Therefore if multiple disk_device
+ definitions occur within the inventory, only the last definition will be
+ used for all nodes.
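+
+ As an illustration only, a single node definition using the fields above
+ might look like the sketch below; the values are placeholders and the exact
+ nesting should be verified against the shipped
+ ``/usr/share/doc/opnfv/inventory.yaml.example``::
+
+   nodes:
+     node1:
+       mac_address: "52:54:00:aa:bb:cc"
+       ipmi_ip: 192.168.1.101
+       ipmi_user: admin
+       ipmi_password: password
+       pm_type: pxe_ipmitool
+       cpus: 2
+       memory: 8192
+       disk: 40
+       arch: x86_64
+       capabilities: profile:control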
+
+Creating the Settings Files
+---------------------------
+
+Edit the 2 settings files in /etc/opnfv-apex/. These files have comments to
+help you customize them.
+
+1. deploy_settings.yaml
+ This file includes basic configuration options for deployment, and also
+ documents all available options.
+ Alternatively, there are pre-built deploy_settings files available in
+ (``/etc/opnfv-apex/``). These files are named with the naming convention
+ os-sdn_controller-enabled_feature-[no]ha.yaml. A pre-built file can be used
+ in place of the (``/etc/opnfv-apex/deploy_settings.yaml``) file if one suits
+ your deployment needs, in which case there is no need to customize
+ (``/etc/opnfv-apex/deploy_settings.yaml``).
+
+2. network_settings.yaml
+ This file provides Apex with the networking information that satisfies the
+ prerequisite `Network Requirements`_. These are specific to your
+ environment.
+
+Running ``opnfv-deploy``
+------------------------
+
+You are now ready to deploy OPNFV using Apex!
+``opnfv-deploy`` will use the inventory and settings files to deploy OPNFV.
+
+Follow the steps below to execute:
+
+1. Execute opnfv-deploy
+ ``sudo opnfv-deploy -n network_settings.yaml
+ -i inventory.yaml -d deploy_settings.yaml``
+ If you need more information about the options that can be passed to
+ opnfv-deploy use ``opnfv-deploy --help``. The ``-n network_settings.yaml``
+ option allows you to customize your networking topology.
+
+2. Wait while deployment is executed.
+ If something goes wrong during this part of the process, start by reviewing
+ your network or the information in your configuration files. It's not
+ uncommon for something small to be overlooked or mistyped.
+ You will also notice outputs in your shell as the deployment progresses.
+
+3. When the deployment is complete the undercloud IP and overcloud dashboard
+ URL will be printed. OPNFV has now been deployed using Apex.
+
+.. _`Execution Requirements (Bare Metal Only)`: index.html#execution-requirements-bare-metal-only
+.. _`Network Requirements`: index.html#network-requirements
diff --git a/docs/release/installation/index.rst b/docs/release/installation/index.rst
new file mode 100644
index 00000000..83e9292e
--- /dev/null
+++ b/docs/release/installation/index.rst
@@ -0,0 +1,28 @@
+**************************************
+OPNFV Installation instructions (Apex)
+**************************************
+
+Contents:
+
+.. toctree::
+ :numbered:
+ :maxdepth: 4
+
+ abstract.rst
+ introduction.rst
+ architecture.rst
+ requirements.rst
+ baremetal.rst
+ virtualinstall.rst
+ verification.rst
+ troubleshooting.rst
+ references.rst
+
+:Authors: Tim Rozet (trozet@redhat.com)
+:Authors: Dan Radez (dradez@redhat.com)
+:Version: 3.0
+
+Indices and tables
+==================
+
+* :ref:`search`
diff --git a/docs/release/installation/introduction.rst b/docs/release/installation/introduction.rst
new file mode 100644
index 00000000..cc489917
--- /dev/null
+++ b/docs/release/installation/introduction.rst
@@ -0,0 +1,42 @@
+Introduction
+============
+
+This document describes the steps to install an OPNFV Colorado reference
+platform, as defined by the Genesis Project using the Apex installer.
+
+The audience is assumed to have a good background in networking
+and Linux administration.
+
+Preface
+=======
+
+Apex uses Triple-O from the RDO Project OpenStack distribution as a
+provisioning tool. The Triple-O image based life cycle installation
+tool provisions an OPNFV Target System (3 controllers, 2 or more
+compute nodes) with OPNFV specific configuration provided by the Apex
+deployment tool chain.
+
+The Apex deployment artifacts contain the necessary tools to deploy and
+configure an OPNFV target system using the Apex deployment toolchain.
+These artifacts offer the choice of using the Apex bootable ISO
+(``opnfv-apex-colorado.iso``) to both install CentOS 7 and the
+necessary materials to deploy or the Apex RPMs (``opnfv-apex*.rpm``),
+and their associated dependencies, which expects installation to a
+CentOS 7 libvirt enabled host. The RPM contains a collection of
+configuration files, prebuilt disk images, and the automatic deployment
+script (``opnfv-deploy``).
+
+An OPNFV install requires a "Jumphost" in order to operate. The bootable
+ISO will allow you to install a customized CentOS 7 release to the Jumphost,
+which includes the required packages needed to run ``opnfv-deploy``.
+If you already have a Jumphost with CentOS 7 installed, you may choose to
+skip the ISO step and simply install the (``opnfv-apex*.rpm``) RPMs. The RPMs
+are the same RPMs included in the ISO and include all the necessary disk
+images and configuration files to execute an OPNFV deployment. Either method
+will prepare a host to the same ready state for OPNFV deployment.
+
+``opnfv-deploy`` instantiates a Triple-O Undercloud VM server using libvirt
+as its provider. This VM is then configured and used to provision the
+OPNFV target deployment (3 controllers, n compute nodes). These nodes can
+be either virtual or bare metal. This guide contains instructions for
+installing either method.
diff --git a/docs/release/installation/references.rst b/docs/release/installation/references.rst
new file mode 100644
index 00000000..a63a8421
--- /dev/null
+++ b/docs/release/installation/references.rst
@@ -0,0 +1,40 @@
+Frequently Asked Questions
+==========================
+
+License
+=======
+
+All Apex and "common" entities are protected by the `Apache 2.0 License <http://www.apache.org/licenses/>`_.
+
+References
+==========
+
+OPNFV
+-----
+
+`OPNFV Home Page <http://www.opnfv.org>`_
+
+`OPNFV Genesis project page <https://wiki.opnfv.org/get_started>`_
+
+`OPNFV Apex project page <https://wiki.opnfv.org/apex>`_
+
+`OPNFV Apex release notes <http://artifacts.opnfv.org/apex/colorado/docs/releasenotes/release-notes.html#references>`_
+
+OpenStack
+---------
+
+`OpenStack Mitaka Release artifacts <http://www.openstack.org/software/mitaka>`_
+
+`OpenStack documentation <http://docs.openstack.org>`_
+
+OpenDaylight
+------------
+
+Upstream OpenDaylight provides `a number of packaging and deployment options <https://wiki.opendaylight.org/view/Deployment>`_ meant for consumption by downstream projects like OPNFV.
+
+Currently, OPNFV Apex uses `OpenDaylight's Puppet module <https://github.com/dfarrell07/puppet-opendaylight>`_, which in turn depends on `OpenDaylight's RPM <http://cbs.centos.org/repos/nfv7-opendaylight-4-release/>`_.
+
+RDO Project
+-----------
+
+`RDO Project website <https://www.rdoproject.org/>`_
diff --git a/docs/release/installation/requirements.rst b/docs/release/installation/requirements.rst
new file mode 100644
index 00000000..507b671e
--- /dev/null
+++ b/docs/release/installation/requirements.rst
@@ -0,0 +1,78 @@
+Setup Requirements
+==================
+
+Jumphost Requirements
+---------------------
+
+The Jumphost requirements are outlined below:
+
+1. CentOS 7 (from ISO or self-installed).
+
+2. Root access.
+
+3. libvirt virtualization support.
+
+4. A minimum of 1 and a maximum of 5 networks; multiple NIC and/or VLAN
+ combinations are supported. This is virtualized for a VM deployment.
+
+5. The Colorado Apex RPMs and their dependencies.
+
+6. 16 GB of RAM for a bare metal deployment, 64 GB of RAM for a VM
+ deployment.
+
+Network Requirements
+--------------------
+
+Network requirements include:
+
+1. No DHCP or TFTP server running on networks used by OPNFV.
+
+2. 1-5 separate networks with connectivity between Jumphost and nodes.
+
+ - Control Plane (Provisioning)
+
+ - Private Tenant-Networking Network*
+
+ - External Network*
+
+ - Storage Network*
+
+ - Internal API Network* (required for IPv6 \*\*)
+
+3. Lights out OOB network access from Jumphost with IPMI node enabled
+ (bare metal deployment only).
+
+4. The External network is a routable network from outside the cloud
+ deployment. The External network is where public internet access would
+ reside if available.
+
+\*These networks can be combined with each other or all combined on the
+Control Plane network.
+
+\*\*The Internal API network, by default, is collapsed with the provisioning
+network in IPv4 deployments. This is not possible in an IPv6 deployment due
+to the current lack of IPv6 PXE boot support, and therefore the Internal API
+network is required to be its own network in an IPv6 deployment.
+
+Bare Metal Node Requirements
+----------------------------
+
+Bare metal nodes require:
+
+1. IPMI enabled on OOB interface for power control.
+
+2. BIOS boot priority should be PXE first then local hard disk.
+
+3. BIOS PXE interface should include Control Plane network mentioned above.
+
+Execution Requirements (Bare Metal Only)
+----------------------------------------
+
+In order to execute a deployment, one must gather the following information:
+
+1. IPMI IP addresses for the nodes.
+
+2. IPMI login information for the nodes (user/pass).
+
+3. MAC address of Control Plane / Provisioning interfaces of the overcloud
+ nodes.
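+
+Before starting a deployment it can be useful to verify this information.
+Assuming the ``ipmitool`` utility is installed on the Jumphost, a power
+status query against each node would look like:
+
+ ``ipmitool -I lanplus -H <ipmi_ip> -U <ipmi_user> -P <ipmi_password>
+ chassis power status``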
diff --git a/docs/release/installation/troubleshooting.rst b/docs/release/installation/troubleshooting.rst
new file mode 100644
index 00000000..ed0d1ff6
--- /dev/null
+++ b/docs/release/installation/troubleshooting.rst
@@ -0,0 +1,144 @@
+Developer Guide and Troubleshooting
+===================================
+
+This section aims to explain in more detail the steps that Apex follows
+to make a deployment. It also tries to explain possible issues you might find
+in the process of building or deploying an environment.
+
+After installing the Apex RPMs in the jumphost, some files will be located
+around the system.
+
+1. /etc/opnfv-apex: this directory contains a set of scenarios to be
+ deployed with different characteristics such as HA (High Availability), SDN
+ controller integration (OpenDaylight/ONOS), BGPVPN, FDIO, etc. Having a
+ look at any of these files will give you an idea of how to build a
+ customized scenario by setting different flags (see the sketch after this
+ list).
+
+2. /usr/bin/: it contains the binaries for the commands opnfv-deploy,
+ opnfv-clean and opnfv-util.
+
+3. /var/opt/opnfv/: it contains several files and directories.
+
+ 3.1. images/: this folder contains the images that will be deployed
+ according to the chosen scenario.
+
+ 3.2. lib/: a set of scripts that are executed in the different phases
+ of deployment.
+
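+As an illustration of the scenario files mentioned in item 1 above, their
+general shape is sketched below. This is a sketch only and the exact keys
+are assumptions; treat the files shipped in /etc/opnfv-apex/ as
+authoritative::
+
+  global_params:
+    ha_enabled: true
+
+  deploy_options:
+    sdn_controller: opendaylight
+    sfc: false
+    vpn: false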
+
+Utilization of Images
+---------------------
+
+As mentioned earlier in this guide, the Undercloud VM will be in charge of
+deploying OPNFV (Overcloud VMs). Since the Undercloud is an all-in-one
+OpenStack deployment, it will use Glance to manage the images that will be
+deployed as the Overcloud.
+
+Any customization done to the images located on the jumpserver
+(/var/opt/opnfv/images) will be uploaded to the undercloud and, consequently,
+to the overcloud.
+
+Make sure the customization is performed on the right image. For example, if
+you virt-customize the image overcloud-full-opendaylight.qcow2 but then
+deploy OPNFV with the following command:
+
+ ``sudo opnfv-deploy -n network_settings.yaml -d
+ /etc/opnfv-apex/os-onos-nofeature-ha.yaml``
+
+the customization will have no effect on the deployment, since the customized
+image is the OpenDaylight one, and the scenario indicates that the image to be
+deployed is overcloud-full-onos.qcow2.
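+
+For reference, a typical image customization uses virt-customize (from
+libguestfs-tools, which is assumed to be installed); the package installed
+here is only an example, and the image name must match the scenario that
+will be deployed:
+
+ ``virt-customize -a /var/opt/opnfv/images/overcloud-full-onos.qcow2
+ --run-command 'yum install -y tcpdump'``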
+
+
+Post-deployment Configuration
+-----------------------------
+
+Post-deployment scripts will perform some configuration tasks such as ssh-key
+injection, network configuration, NATing and OpenVswitch creation. They will
+take care of some OpenStack tasks such as the creation of endpoints, external
+networks, users, projects, etc.
+
+If any of these steps fail, the execution will be interrupted. In some cases,
+the interruption occurs at very early stages, so a new deployment must be
+executed. However, in other cases it may be worth trying to debug it.
+
+ 1. There is no external connectivity from the overcloud nodes:
+
+ Post-deployment scripts will configure the routing, nameservers
+ and a bunch of other things between the overcloud and the
+ undercloud. If local connectivity, like pinging between the
+ different nodes, is working fine, the script must have failed when
+ configuring the NAT via iptables. The main rules to enable
+ external connectivity would look like these:
+
+ ``iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE``
+ ``iptables -t nat -A POSTROUTING -s ${external_cidr} -o eth0 -j
+ MASQUERADE``
+ ``iptables -A FORWARD -i eth2 -j ACCEPT``
+ ``iptables -A FORWARD -s ${external_cidr} -m state --state
+ ESTABLISHED,RELATED -j ACCEPT``
+ ``service iptables save``
+
+ These rules must be executed as root (or sudo) in the
+ undercloud machine.
+
+OpenDaylight Integration
+------------------------
+
+When a user deploys any of the following scenarios:
+
+ - os-odl-bgpvpn-ha.yaml
+ - os-odl_l2-fdio-ha.yaml
+ - os-odl_l2-fdio-noha.yaml
+ - os-odl_l2-nofeature-ha.yaml
+ - os-odl_l2-sfc-noha.yaml
+ - os-odl_l3-nofeature-ha.yaml
+
+the OpenDaylight (ODL) SDN controller will be deployed too and completely
+integrated with OpenStack. ODL runs as a systemd service, so you can
+manage it as a regular service:
+
+ ``systemctl start/restart/stop opendaylight.service``
+
+This command must be executed as root in the controller node of the overcloud,
+where OpenDaylight is running. ODL files are located in /opt/opendaylight. ODL
+uses Karaf as a Java container management system that allows users to
+install new features, check logs and configure many things. In order to
+connect to Karaf's console, use the following command:
+
+ ``opnfv-util opendaylight``
+
+This command is very easy to use, but in case it fails to connect to Karaf,
+this is the command that it executes underneath:
+
+ ``ssh -p 8101 -o UserKnownHostsFile=/dev/null -o
+ StrictHostKeyChecking=no karaf@localhost``
+
+Use localhost when the command is executed on the overcloud controller; use
+the controller's public IP to connect from elsewhere.
+
+Debugging Failures
+------------------
+
+This section gathers different types of failures, their root causes and
+some possible solutions or workarounds to get the process going again.
+
+1. I can see post-deployment error messages in the output log:
+
+ Heat resources will apply puppet manifests during this phase. If one of
+ these processes fails, you could try to identify the error and, after
+ that, re-run puppet to apply that manifest. Log into the controller (see
+ the verification section for that) and check /var/log/messages as root.
+ Search for the error you have encountered and see if you can fix it. In
+ order to re-run the puppet manifest, search for "puppet apply" in that
+ same log. You will have to run the last "puppet apply" before the
+ error. It should look like this:
+
+ ``FACTER_heat_outputs_path="/var/run/heat-config/heat-config-puppet/5b4c7a01-0d63-4a71-81e9-d5ee6f0a1f2f" FACTER_fqdn="overcloud-controller-0.localdomain.com" \
+ FACTER_deploy_config_name="ControllerOvercloudServicesDeployment_Step4" puppet apply --detailed-exitcodes -l syslog -l console \
+ /var/lib/heat-config/heat-config-puppet/5b4c7a01-0d63-4a71-81e9-d5ee6f0a1f2f.pp``
+
+ Note that Heat will trigger the puppet run via os-apply-config and
+ it will pass a different value for step each time. There is a total of
+ five steps. Some of these steps will not be executed depending on the
+ type of scenario that is being deployed.
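+
+ For example, the last "puppet apply" entry before the error can be
+ located with a command like:
+
+ ``sudo grep "puppet apply" /var/log/messages | tail -n 1``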
diff --git a/docs/release/installation/verification.rst b/docs/release/installation/verification.rst
new file mode 100644
index 00000000..81e4c8e4
--- /dev/null
+++ b/docs/release/installation/verification.rst
@@ -0,0 +1,89 @@
+Verifying the Setup
+-------------------
+
+Once the deployment has finished, the OPNFV deployment can be accessed via the
+undercloud node. From the jump host, ssh to the undercloud host and become the
+stack user. Alternatively, ssh keys have been set up such that the root user
+on the jump host can ssh to the undercloud directly as the stack user. For
+convenience a utility script has been provided to look up the undercloud's IP
+address and ssh to the undercloud all in one command. An optional user name
+can be passed to indicate whether to connect as the stack or root user. The
+stack user is the default if a username is not specified.
+
+| ``opnfv-util undercloud root``
+| ``su - stack``
+
+Once connected to undercloud as the stack user look for two keystone files that
+can be used to interact with the undercloud and the overcloud. Source the
+appropriate RC file to interact with the respective OpenStack deployment.
+
+| ``source stackrc`` (undercloud)
+| ``source overcloudrc`` (overcloud / OPNFV)
+
+The contents of these files include the credentials for the administrative
+user for the undercloud and OPNFV respectively. At this point both the
+undercloud and OPNFV can be interacted with just as any OpenStack
+installation can be. Start by listing the nodes in the undercloud that were
+used to deploy the overcloud.
+
+| ``source stackrc``
+| ``openstack server list``
+
+The control and compute nodes will be listed in the output of this server list
+command. The IP addresses that are listed are the control plane addresses that
+were used to provision the nodes. Use these IP addresses to connect to these
+nodes. Initial authentication requires using the user heat-admin.
+
+| ``ssh heat-admin@192.0.2.7``
+
+To begin creating users, images, networks, servers, etc. in OPNFV, source the
+overcloudrc file or retrieve the admin user's credentials from the overcloudrc
+file and connect to the web Dashboard.
+
+
+You are now able to follow the `OpenStack Verification`_ section.
+
+OpenStack Verification
+----------------------
+
+Once connected to the OPNFV Dashboard make sure the OPNFV target system is
+working correctly:
+
+1. In the left pane, click Compute -> Images, click Create Image.
+
+2. Insert a name "cirros" and an Image Location
+ ``http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img``.
+
+3. Select format "QCOW2", select Public, then click Create Image.
+
+4. Now click Project -> Network -> Networks, click Create Network.
+
+5. Enter a name "internal", click Next.
+
+6. Enter a subnet name "internal_subnet", and enter Network Address
+ ``172.16.1.0/24``, click Next.
+
+7. Now go to Project -> Compute -> Instances, click Launch Instance.
+
+8. Enter Instance Name "first_instance", select Instance Boot Source
+ "Boot from image", and then select Image Name "cirros".
+
+9. Click Launch; the status will cycle through a couple of states before
+ becoming "Active".
+
+10. Steps 7 through 9 can be repeated to launch more instances.
+
+11. Once an instance becomes "Active" its IP address will display on the
+ Instances page.
+
+12. Click the name of an instance, then the "Console" tab, and log in as
+ "cirros"/"cubswin:)".
+
+13. To verify storage is working,
+ click Project -> Compute -> Volumes, Create Volume
+
+14. Give the volume a name and a size of 1 GB.
+
+15. Once the volume becomes "Available" click the dropdown arrow and attach it
+ to an instance.
+
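+If you prefer the command line, roughly equivalent steps can be performed
+with the OpenStack and Neutron CLIs after sourcing overcloudrc. This is a
+sketch only; the flavor name m1.tiny and the downloaded image file are
+assumed to exist in your environment:
+
+| ``source overcloudrc``
+| ``openstack image create --disk-format qcow2 --container-format bare --public --file cirros-0.3.4-x86_64-disk.img cirros``
+| ``neutron net-create internal``
+| ``neutron subnet-create internal 172.16.1.0/24 --name internal_subnet``
+| ``openstack server create --image cirros --flavor m1.tiny first_instance``
+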
+Congratulations, you have successfully installed OPNFV!
diff --git a/docs/release/installation/virtualinstall.rst b/docs/release/installation/virtualinstall.rst
new file mode 100644
index 00000000..5da2ee3c
--- /dev/null
+++ b/docs/release/installation/virtualinstall.rst
@@ -0,0 +1,69 @@
+Installation High-Level Overview - Virtual Deployment
+=====================================================
+
+The VM nodes deployment operates almost the same way as the bare metal
+deployment with a few differences. ``opnfv-deploy`` still deploys an
+undercloud VM. In addition to the undercloud VM a collection of VMs
+(3 control nodes + 2 compute for an HA deployment or 1 control node and 1
+or more compute nodes for a non-HA Deployment) will be defined for the target
+OPNFV deployment. The part of the toolchain that executes IPMI power
+instructions calls into libvirt instead of the IPMI interfaces on baremetal
+servers to operate the power management. These VMs are then provisioned with
+the same disk images and configuration that baremetal would be. To Triple-O
+these nodes look like they have just been built and registered the same way
+as bare metal nodes; the main difference is the use of a libvirt driver for
+the power management. Finally, the default network_settings file will deploy
+without modification. Customizations are welcome but not needed if a generic
+set of network_settings are acceptable.
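+
+Once defined, the undercloud and node VMs can be inspected on the jumphost
+with libvirt's standard tooling, for example:
+
+ ``sudo virsh list --all``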
+
+Installation Guide - Virtual Deployment
+=======================================
+
+This section goes step-by-step on how to correctly install and provision the
+OPNFV target system to VM nodes.
+
+Install Jumphost
+----------------
+
+Follow the instructions in the `Install Bare Metal Jumphost`_ section.
+
+Running ``opnfv-deploy``
+------------------------
+
+You are now ready to deploy OPNFV!
+``opnfv-deploy`` has virtual deployment capability that includes all of
+the configuration necessary to deploy OPNFV with no modifications.
+
+If no modifications are made to the included configurations the target
+environment will deploy with the following architecture:
+
+ - 1 undercloud VM
+
+ - The option of 3 control and 2 or more compute VMs (HA Deploy / default)
+ or 1 control and 1 or more compute VM (Non-HA deploy / pass -n)
+
+ - 1-5 networks: provisioning, private tenant networking, external, storage
+ and internal API. The API, storage and tenant networking networks can be
+ collapsed onto the provisioning network.
+
+Follow the steps below to execute:
+
+1. ``sudo opnfv-deploy -v [ --virtual-computes n ]
+ [ --virtual-cpus n ] [ --virtual-ram n ]
+ -n network_settings.yaml -d deploy_settings.yaml``
+
+2. It will take approximately 45 minutes to an hour to stand up the
+ undercloud, define the target virtual machines, configure the deployment
+ and execute it. You will notice different outputs in your shell.
+
+3. When the deployment is complete the IP for the undercloud and a URL for
+ the OpenStack dashboard will be displayed.
+
+Verifying the Setup - VMs
+-------------------------
+
+To verify the setup you can follow the instructions in the
+`Verifying the Setup`_ section.
+
+.. _`Install Bare Metal Jumphost`: index.html#install-bare-metal-jumphost
+.. _`Verifying the Setup`: index.html#verifying-the-setup
diff --git a/docs/release/release-notes/index.rst b/docs/release/release-notes/index.rst
new file mode 100644
index 00000000..1f723960
--- /dev/null
+++ b/docs/release/release-notes/index.rst
@@ -0,0 +1,11 @@
+************************
+OPNFV Apex Release Notes
+************************
+
+Contents:
+
+.. toctree::
+ :numbered:
+ :maxdepth: 4
+
+ release-notes.rst
diff --git a/docs/release/release-notes/release-notes.rst b/docs/release/release-notes/release-notes.rst
new file mode 100644
index 00000000..a5d52f0c
--- /dev/null
+++ b/docs/release/release-notes/release-notes.rst
@@ -0,0 +1,411 @@
+==========================================================================
+OPNFV Release Notes for the Colorado release of OPNFV Apex deployment tool
+==========================================================================
+
+
+.. contents:: Table of Contents
+ :backlinks: none
+
+
+Abstract
+========
+
+This document provides the release notes for Colorado release with the Apex
+deployment toolchain.
+
+License
+=======
+
+All Apex and "common" entities are protected by the Apache License
+( http://www.apache.org/licenses/ )
+
+
+Version History
+===============
+
+
++-------------+-----------+-----------------+----------------------+
+| **Date** | **Ver.** | **Authors** | **Comment** |
+| | | | |
++-------------+-----------+-----------------+----------------------+
+| 2016-09-20 | 2.1.0 | Tim Rozet | More updates for |
+| | | | Colorado |
++-------------+-----------+-----------------+----------------------+
+| 2016-08-11 | 2.0.0 | Dan Radez | Updates for Colorado |
++-------------+-----------+-----------------+----------------------+
+| 2015-09-17 | 1.0.0 | Dan Radez | Rewritten for |
+| | | | RDO Manager update |
++-------------+-----------+-----------------+----------------------+
+
+Important Notes
+===============
+
+This is the OPNFV Colorado release that implements the deploy stage of the
+OPNFV CI pipeline via Apex.
+
+Apex is based on RDO's Triple-O installation tool chain.
+More information at http://rdoproject.org
+
+Carefully follow the installation instructions which guide a user on how to
+deploy OPNFV using the Apex installer.
+
+Summary
+=======
+
+The Colorado release with the Apex deployment toolchain will establish an
+OPNFV target system on a Pharos compliant lab infrastructure. The current
+definition of an OPNFV target system is OpenStack Mitaka combined with an SDN
+controller, such as OpenDaylight. The system is deployed with OpenStack High
+Availability (HA) for most OpenStack services. SDN controllers are deployed
+only on the first controller (see HAIssues_ for known HA SDN issues). Ceph
+storage is used as the Cinder backend, and is the only supported storage for
+Colorado. Ceph is set up as 3 OSDs and 3 Monitors, one OSD+Mon per Controller
+node in an HA setup. Apex also supports non-HA deployments, which deploy a
+single controller and n compute nodes. Furthermore, Apex is
+capable of deploying scenarios in a bare metal or virtual fashion. Virtual
+deployments use multiple VMs on the jump host and internal networking to
+simulate a bare metal deployment.
+
+- Documentation is built by Jenkins
+- .iso image is built by Jenkins
+- .rpm packages are built by Jenkins
+- Jenkins deploys a Colorado release with the Apex deployment toolchain on
+ bare metal, which includes 3 control+network nodes, and 2 compute nodes.
+
+Release Data
+============
+
++--------------------------------------+--------------------------------------+
+| **Project** | apex |
+| | |
++--------------------------------------+--------------------------------------+
+| **Repo/tag** | apex/colorado.1.0 |
+| | |
++--------------------------------------+--------------------------------------+
+| **Release designation** | colorado.1.0 |
+| | |
++--------------------------------------+--------------------------------------+
+| **Release date** | 2016-09-22 |
+| | |
++--------------------------------------+--------------------------------------+
+| **Purpose of the delivery** | OPNFV Colorado release |
+| | |
++--------------------------------------+--------------------------------------+
+
+Version change
+--------------
+
+Module version changes
+~~~~~~~~~~~~~~~~~~~~~~
+This is the first tracked version of the Colorado release with the Apex
+deployment toolchain. It is based on the following upstream versions:
+
+- OpenStack (Mitaka release)
+
+- OpenDaylight (Beryllium/Boron releases)
+
+- CentOS 7
+
+Document Version Changes
+~~~~~~~~~~~~~~~~~~~~~~~~
+
+This is the first tracked version of the Colorado release with the Apex
+deployment toolchain.
+The following documentation is provided with this release:
+
+- OPNFV Installation instructions for the Colorado release with the Apex
+ deployment toolchain - ver. 1.0.0
+- OPNFV Release Notes for the Colorado release with the Apex deployment
+ toolchain - ver. 1.0.0 (this document)
+
+Feature Additions
+~~~~~~~~~~~~~~~~~
+
++--------------------------------------+--------------------------------------+
+| **JIRA REFERENCE** | **SLOGAN** |
+| | |
++--------------------------------------+--------------------------------------+
+| JIRA: APEX-107 | OpenDaylight HA - OVSDB Clustering |
++--------------------------------------+--------------------------------------+
+| JIRA: APEX-108 | Migrate to OpenStack Mitaka |
++--------------------------------------+--------------------------------------+
+| JIRA: APEX-30 | Support VLAN tagged deployments |
++--------------------------------------+--------------------------------------+
+| JIRA: APEX-105 | Enable Huge Page Configuration |
+| | Options |
++--------------------------------------+--------------------------------------+
+| JIRA: APEX-111 | Allow RAM to be specified for |
+| | Control/Compute in Virtual |
+| | Deployments |
++--------------------------------------+--------------------------------------+
+| JIRA: APEX-119 | Enable OVS DPDK as a deployment |
+| | Scenario in Apex |
++--------------------------------------+--------------------------------------+
+| JIRA: APEX-126 | Tacker Service deployed by Apex |
++--------------------------------------+--------------------------------------+
+| JIRA: APEX-135 | Congress Service deployed by Apex |
++--------------------------------------+--------------------------------------+
+| JIRA: APEX-127 | Nova Instance CPU Pinning |
++--------------------------------------+--------------------------------------+
+| JIRA: APEX-130 | IPv6 Underlay Deployment |
++--------------------------------------+--------------------------------------+
+| JIRA: APEX-133 | FDIO with Honeycomb Agent |
++--------------------------------------+--------------------------------------+
+| JIRA: APEX-141 | Integrate VSPERF into Apex |
++--------------------------------------+--------------------------------------+
+| JIRA: APEX-172 | Enable ONOS SFC |
++--------------------------------------+--------------------------------------+
+
+Bug Corrections
+~~~~~~~~~~~~~~~
+
+**JIRA TICKETS:**
+
++--------------------------------------+--------------------------------------+
+| **JIRA REFERENCE** | **SLOGAN** |
+| | |
++--------------------------------------+--------------------------------------+
+| JIRA: APEX-86 | Need ability to specify number of |
+| | compute nodes |
++--------------------------------------+--------------------------------------+
+| JIRA: APEX-180 | Baremetal deployment error: Failed to|
+| | mount root partition /dev/sda on |
+| | /mnt/rootfs |
++--------------------------------------+--------------------------------------+
+| JIRA: APEX-161 | Heat autoscaling stack creation fails|
+| | for non-admin users |
++--------------------------------------+--------------------------------------+
+| JIRA: APEX-198 | Missing NAT iptables rule for public |
+| | network in instack VM |
++--------------------------------------+--------------------------------------+
+| JIRA: APEX-147 | Installer doesn't generate/distribute|
+| | SSH keys between compute nodes |
++--------------------------------------+--------------------------------------+
+| JIRA: APEX-109 | ONOS routes local subnet traffic to |
+| | GW |
++--------------------------------------+--------------------------------------+
+| JIRA: APEX-146 | Swift service present in available |
+| | endpoints |
++--------------------------------------+--------------------------------------+
+| JIRA: APEX-160 | Enable force_metadata to support |
+| | subnets with VM as the router |
++--------------------------------------+--------------------------------------+
+| JIRA: APEX-114 | OpenDaylight GUI is not available |
++--------------------------------------+--------------------------------------+
+| JIRA: APEX-100 | DNS1 and DNS2 should be handled in |
+| | nic bridging |
++--------------------------------------+--------------------------------------+
+| JIRA: APEX-155 | NIC Metric value not used when |
+| | bridging NICs |
++--------------------------------------+--------------------------------------+
+| JIRA: APEX-136 | 2 network deployment fails |
++--------------------------------------+--------------------------------------+
+| JIRA: APEX-89 | Deploy Ceph OSDs on compute nodes |
++--------------------------------------+--------------------------------------+
+| JIRA: APEX-137 | Added arping as dependency for |
+| | ONOS deployments |
++--------------------------------------+--------------------------------------+
+| JIRA: APEX-121 | VM Storage deletion intermittently |
+| | fails |
++--------------------------------------+--------------------------------------+
+| JIRA: APEX-182 | Nova services not correctly deployed |
++--------------------------------------+--------------------------------------+
+| JIRA: APEX-153 | brbm bridge not created in jumphost |
++--------------------------------------+--------------------------------------+
+
+Deliverables
+------------
+
+Software Deliverables
+~~~~~~~~~~~~~~~~~~~~~
+- Apex .iso file
+- Apex release .rpm (opnfv-apex-release)
+- Apex overcloud .rpm (opnfv-apex) - For nosdn and OpenDaylight Scenarios
+- Apex overcloud onos .rpm (opnfv-apex-onos) - ONOS Scenarios
+- Apex undercloud .rpm (opnfv-apex-undercloud)
+- Apex common .rpm (opnfv-apex-common)
+- build.sh - Builds the above artifacts
+- opnfv-deploy - Automatically deploys Target OPNFV System
+- opnfv-clean - Automatically resets a Target OPNFV Deployment
+- opnfv-util - Utility to connect to or debug Overcloud nodes + OpenDaylight
+
+Documentation Deliverables
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+- OPNFV Installation instructions for the Colorado release with the Apex
+ deployment toolchain - ver. 1.0.0
+- OPNFV Release Notes for the Colorado release with the Apex deployment
+ toolchain - ver. 1.0.0 (this document)
+
+Known Limitations, Issues and Workarounds
+=========================================
+
+System Limitations
+------------------
+
+**Max number of blades:** 1 Apex undercloud, 3 Controllers, 20 Compute blades
+
+**Min number of blades:** 1 Apex undercloud, 1 Controller, 1 Compute blade
+
+**Storage:** Ceph is the only supported storage configuration.
+
+**Min master requirements:** At least 16GB of RAM for baremetal jumphost,
+24GB for virtual deployments (noHA).
+
+
+Known Issues
+------------
+
+**JIRA TICKETS:**
+
++--------------------------------------+--------------------------------------+
+| **JIRA REFERENCE** | **SLOGAN** |
+| | |
++--------------------------------------+--------------------------------------+
+| JIRA: APEX-203 | Swift proxy enabled and fails in noha|
+| | deployments |
++--------------------------------------+--------------------------------------+
+| JIRA: APEX-215 | Keystone services not configured and |
+| | the error is silently ignored (VLAN |
+| | Deployments) |
++--------------------------------------+--------------------------------------+
+| JIRA: APEX-208 | Need ability to specify which NIC to |
+| | place VLAN on |
++--------------------------------------+--------------------------------------+
+| JIRA: APEX-254 | Add dynamic hugepages configuration |
++--------------------------------------+--------------------------------------+
+| JIRA: APEX-138 | Unclear error message when interface |
+| | set to dhcp |
++--------------------------------------+--------------------------------------+
+
+
+Workarounds
+-----------
+**-**
+
+Scenario specific release notes
+===============================
+
+Scenario os-odl_l3-nofeature known issues
+-----------------------------------------
+
+* `APEX-112 <https://jira.opnfv.org/browse/APEX-112>`_:
+ ODL routes local subnet traffic to GW
+
+Scenario os-odl_l2-nofeature known issues
+-----------------------------------------
+
+* `APEX-149 <https://jira.opnfv.org/browse/APEX-149>`_:
+ Openflow rules are populated very slowly
+
+Scenario os-odl-bgpvpn known issues
+--------------------------------------
+
+* `APEX-278 <https://jira.opnfv.org/browse/APEX-278>`_:
+ Duplicate neutron config class declaration for SDNVPN
+
+Scenario os-onos-nofeatures/os-onos-sfc known issues
+----------------------------------------------------
+
+* `APEX-281 <https://jira.opnfv.org/browse/APEX-281>`_:
+ ONOS sometimes fails to provide addresses to instances
+
+Scenario os-odl_l2-sfc-noha known issues
+----------------------------------------
+
+* `APEX-275 <https://jira.opnfv.org/browse/APEX-275>`_:
+ Metadata fails in Boron
+
+Scenario os-nosdn-ovs known issues
+----------------------------------
+
+* `APEX-274 <https://jira.opnfv.org/browse/APEX-274>`_:
+ OVS DPDK scenario does not create vhost user ports
+
+Scenario os-odl_l2-fdio-noha known issues
+-----------------------------------------
+
+* `FDS-16 <https://jira.opnfv.org/browse/FDS-16>`_:
+ Security group configuration through nova leads
+ to vhostuser port connection issues
+* `FDS-62 <https://jira.opnfv.org/browse/FDS-62>`_:
+ APEX - Increase number of files MariaDB can open
+* `FDS-79 <https://jira.opnfv.org/browse/FDS-79>`_:
+ Sometimes (especially in bulk create/delete operations
+ when multiple networks/ports are created within a short time)
+ OpenDaylight doesn't accept creation requests
+* `FDS-80 <https://jira.opnfv.org/browse/FDS-80>`_:
+ After launching a VM it stayed forever in BUILD status.
+ Also further operations related to this VM (volume attachment etc.)
+ caused problems
+* `FDS-81 <https://jira.opnfv.org/browse/FDS-81>`_:
+ After functest finishes there are two bds on computes and
+ none on controller
+* `FDS-82 <https://jira.opnfv.org/browse/FDS-82>`_:
+ Nova list shows no vms but there are some on computes in paused state
+* `APEX-217 <https://jira.opnfv.org/browse/APEX-217>`_:
+ qemu not configured with correct group:user
+
+Scenario os-nosdn-fdio-noha known issues
+----------------------------------------
+
+Note that a set of manual configuration steps needs to be performed
+after an automated deployment for the scenario to be fully functional.
+Please refer to `FDS-159 <https://jira.opnfv.org/browse/FDS-159>`_ and
+`FDS-160 <https://jira.opnfv.org/browse/FDS-160>`_ for details.
+
+* `FDS-155 <https://jira.opnfv.org/browse/FDS-155>`_:
+ os-nosdn-fdio-noha scenario: tempest_smoke_serial causes
+ mariadb/mysqld process to hang
+* `FDS-156 <https://jira.opnfv.org/browse/FDS-156>`_:
+ os-nosdn-fdio-noha scenario: Race conditions for
+ network-vif-plugged notification
+* `FDS-157 <https://jira.opnfv.org/browse/FDS-157>`_:
+ os-nosdn-fdio-noha scenario: Intermittently VMs
+ would get assigned 2 IPs instead of 1
+* `FDS-158 <https://jira.opnfv.org/browse/FDS-158>`_:
+ os-nosdn-fdio-noha scenario: VM start/launch fails with
+ "no more IP addresses" in neutron logs
+* `FDS-159 <https://jira.opnfv.org/browse/FDS-159>`_:
+ os-nosdn-fdio-noha scenario: Security groups not yet supported
+* `FDS-160 <https://jira.opnfv.org/browse/FDS-160>`_:
+ os-nosdn-fdio-noha scenario: Vlan fix on controller
+* `FDS-161 <https://jira.opnfv.org/browse/FDS-161>`_:
+ os-nosdn-fdio-noha scenario: VPP fails with certain UCS B-series blades
+
+.. _HAIssues:
+
+General HA scenario known issues
+--------------------------------
+
+* `COPPER-22 <https://jira.opnfv.org/browse/COPPER-22>`_:
+ Congress service HA deployment is not yet supported/verified.
+* `APEX-276 <https://jira.opnfv.org/browse/APEX-276>`_:
+ ODL HA unstable and crashes frequently
+
+Test Result
+===========
+
+The Colorado release with the Apex deployment toolchain has undergone QA
+test runs with the following results:
+
++--------------------------------------+--------------------------------------+
+| **TEST-SUITE** | **Results:** |
+| | |
++--------------------------------------+--------------------------------------+
+| **-** | **-** |
++--------------------------------------+--------------------------------------+
+
+
+References
+==========
+
+For more information on the OPNFV Colorado release, please see:
+
+http://wiki.opnfv.org/releases/Colorado
+
+:Authors: Tim Rozet (trozet@redhat.com)
+:Authors: Dan Radez (dradez@redhat.com)
+:Version: 2.1.0