Diffstat (limited to 'docs/release/installation')
-rw-r--r--  docs/release/installation/abstract.rst         |   8
-rw-r--r--  docs/release/installation/architecture.rst     |  18
-rw-r--r--  docs/release/installation/baremetal.rst        |   2
-rw-r--r--  docs/release/installation/index.rst            |   3
-rw-r--r--  docs/release/installation/introduction.rst     |   6
-rw-r--r--  docs/release/installation/references.rst       |   4
-rw-r--r--  docs/release/installation/requirements.rst     |   2
-rw-r--r--  docs/release/installation/troubleshooting.rst  |  10
-rw-r--r--  docs/release/installation/upstream.rst         | 107
9 files changed, 139 insertions, 21 deletions
diff --git a/docs/release/installation/abstract.rst b/docs/release/installation/abstract.rst
index 534d8a89..aeef1246 100644
--- a/docs/release/installation/abstract.rst
+++ b/docs/release/installation/abstract.rst
@@ -1,16 +1,16 @@
Abstract
========
-This document describes how to install the Euphrates release of OPNFV when
+This document describes how to install the Fraser release of OPNFV when
using Apex as a deployment tool, covering its limitations, dependencies
and required system resources.
License
=======
-Euphrates release of OPNFV when using Apex as a deployment tool Docs
-(c) by Tim Rozet (Red Hat) and Dan Radez (Red Hat)
+Fraser release of OPNFV when using Apex as a deployment tool Docs
+(c) by Tim Rozet (Red Hat)
-Euphrates release of OPNFV when using Apex as a deployment tool Docs
+Fraser release of OPNFV when using Apex as a deployment tool Docs
are licensed under a Creative Commons Attribution 4.0 International License.
You should have received a copy of the license along with this.
If not, see <http://creativecommons.org/licenses/by/4.0/>.
diff --git a/docs/release/installation/architecture.rst b/docs/release/installation/architecture.rst
index 70067ed0..1ab7c7fc 100644
--- a/docs/release/installation/architecture.rst
+++ b/docs/release/installation/architecture.rst
@@ -131,22 +131,22 @@ issues per scenario. The following scenarios correspond to a supported
+-------------------------+-------------+---------------+
| os-nosdn-bar-noha | Barometer | Yes |
+-------------------------+-------------+---------------+
-| os-nosdn-calipso-noha | Calipso | Yes |
+| os-nosdn-calipso-noha | Calipso | No |
+-------------------------+-------------+---------------+
-| os-nosdn-ovs_dpdk-ha | Apex | Yes |
+| os-nosdn-ovs_dpdk-ha | Apex | No |
+-------------------------+-------------+---------------+
-| os-nosdn-ovs_dpdk-noha | Apex | Yes |
+| os-nosdn-ovs_dpdk-noha | Apex | No |
+-------------------------+-------------+---------------+
| os-nosdn-fdio-ha | FDS | No |
+-------------------------+-------------+---------------+
| os-nosdn-fdio-noha | FDS | No |
+-------------------------+-------------+---------------+
-| os-nosdn-kvm_ovs_dpdk-ha| KVM for NFV | Yes |
+| os-nosdn-kvm_ovs_dpdk-ha| KVM for NFV | No |
+-------------------------+-------------+---------------+
-| os-nosdn-kvm_ovs_dpdk | KVM for NFV | Yes |
+| os-nosdn-kvm_ovs_dpdk | KVM for NFV | No |
| -noha | | |
+-------------------------+-------------+---------------+
-| os-nosdn-performance-ha | Apex | Yes |
+| os-nosdn-performance-ha | Apex | No |
+-------------------------+-------------+---------------+
| os-odl-nofeature-ha | Apex | Yes |
+-------------------------+-------------+---------------+
@@ -170,15 +170,15 @@ issues per scenario. The following scenarios correspond to a supported
+-------------------------+-------------+---------------+
| os-odl-sfc-ha | SFC | No |
+-------------------------+-------------+---------------+
-| os-odl-sfc-noha | SFC | Yes |
+| os-odl-sfc-noha | SFC | No |
+-------------------------+-------------+---------------+
| os-odl-gluon-noha | Gluon | No |
+-------------------------+-------------+---------------+
| os-odl-csit-noha | Apex | No |
+-------------------------+-------------+---------------+
-| os-odl-fdio-ha | FDS | Yes |
+| os-odl-fdio-ha | FDS | No |
+-------------------------+-------------+---------------+
-| os-odl-fdio-noha | FDS | Yes |
+| os-odl-fdio-noha | FDS | No |
+-------------------------+-------------+---------------+
| os-odl-fdio_dvr-ha | FDS | No |
+-------------------------+-------------+---------------+
diff --git a/docs/release/installation/baremetal.rst b/docs/release/installation/baremetal.rst
index 703d1692..d8f90792 100644
--- a/docs/release/installation/baremetal.rst
+++ b/docs/release/installation/baremetal.rst
@@ -88,7 +88,7 @@ Install Bare Metal Jump Host
``sudo yum install https://repos.fedorapeople.org/repos/openstack/openstack-pike/rdo-release-pike-1.noarch.rpm``
``sudo yum install epel-release``
- ``sudo curl -o /etc/yum.repos.d/opnfv-apex.repo http://artifacts.opnfv.org/apex/euphrates/opnfv-apex.repo``
+ ``sudo curl -o /etc/yum.repos.d/opnfv-apex.repo http://artifacts.opnfv.org/apex/fraser/opnfv-apex.repo``
The RDO Project release repository is needed to install Open vSwitch, which
is a dependency of opnfv-apex. If you do not have external connectivity to
diff --git a/docs/release/installation/index.rst b/docs/release/installation/index.rst
index 8fb49464..82b9d25c 100644
--- a/docs/release/installation/index.rst
+++ b/docs/release/installation/index.rst
@@ -16,13 +16,14 @@ Contents:
requirements.rst
baremetal.rst
virtual.rst
+ upstream.rst
verification.rst
troubleshooting.rst
references.rst
:Authors: Tim Rozet (trozet@redhat.com)
:Authors: Dan Radez (dradez@redhat.com)
-:Version: 5.0
+:Version: 6.0
Indices and tables
==================
diff --git a/docs/release/installation/introduction.rst b/docs/release/installation/introduction.rst
index bb220b7d..8dbf8f2f 100644
--- a/docs/release/installation/introduction.rst
+++ b/docs/release/installation/introduction.rst
@@ -1,8 +1,8 @@
Introduction
============
-This document describes the steps to install an OPNFV Euphrates reference
-platform, as defined by the Genesis Project using the Apex installer.
+This document describes the steps to install an OPNFV Fraser reference
+platform using the Apex installer.
The audience is assumed to have a good background in networking
and Linux administration.
@@ -19,7 +19,7 @@ deployment tool chain.
The Apex deployment artifacts contain the necessary tools to deploy and
configure an OPNFV target system using the Apex deployment toolchain.
These artifacts offer the choice of using the Apex bootable ISO
-(``opnfv-apex-euphrates.iso``) to both install CentOS 7 and the
+(``opnfv-apex-fraser.iso``) to both install CentOS 7 and the
necessary materials to deploy, or the Apex RPMs (``opnfv-apex*.rpm``)
and their associated dependencies, which expect installation onto a
CentOS 7 libvirt-enabled host. The RPM contains a collection of
diff --git a/docs/release/installation/references.rst b/docs/release/installation/references.rst
index 249da226..935be038 100644
--- a/docs/release/installation/references.rst
+++ b/docs/release/installation/references.rst
@@ -21,7 +21,7 @@ OPNFV
OpenStack
---------
-`OpenStack Newton Release artifacts <http://www.openstack.org/software/newton>`_
+`OpenStack Pike Release artifacts <http://www.openstack.org/software/pike>`_
`OpenStack documentation <http://docs.openstack.org>`_
@@ -30,7 +30,7 @@ OpenDaylight
Upstream OpenDaylight provides `a number of packaging and deployment options <https://wiki.opendaylight.org/view/Deployment>`_ meant for consumption by downstream projects like OPNFV.
-Currently, OPNFV Apex uses `OpenDaylight's Puppet module <https://github.com/dfarrell07/puppet-opendaylight>`_, which in turn depends on `OpenDaylight's RPM <http://cbs.centos.org/repos/nfv7-opendaylight-4-release/>`_.
+Currently, OPNFV Apex uses `OpenDaylight's Puppet module <https://git.opendaylight.org/gerrit/#/admin/projects/integration/packaging/puppet-opendaylight>`_, which in turn depends on `OpenDaylight's RPM <https://nexus.opendaylight.org/content/repositories/opendaylight-nitrogen-epel-7-x86_64-devel/>`_.
RDO Project
-----------
diff --git a/docs/release/installation/requirements.rst b/docs/release/installation/requirements.rst
index 8d441404..9aefa21d 100644
--- a/docs/release/installation/requirements.rst
+++ b/docs/release/installation/requirements.rst
@@ -15,7 +15,7 @@ The Jump Host requirements are outlined below:
4. Minimum of 1 network and maximum of 5 networks; multiple NIC and/or VLAN
combinations are supported. This is virtualized for a VM deployment.
-5. The Euphrates Apex RPMs and their dependencies.
+5. The Fraser Apex RPMs and their dependencies.
6. 16 GB of RAM for a bare metal deployment, 64 GB of RAM for a Virtual
Deployment.
diff --git a/docs/release/installation/troubleshooting.rst b/docs/release/installation/troubleshooting.rst
index 6a81bef6..f5b42089 100644
--- a/docs/release/installation/troubleshooting.rst
+++ b/docs/release/installation/troubleshooting.rst
@@ -132,3 +132,13 @@ some possible solutions or workarounds to get the process continued.
it will pass a different value for step each time. There is a total of
five steps. Some of these steps will not be executed depending on the
type of scenario that is being deployed.
+
+Reporting a Bug
+---------------
+
+Please report bugs via the `OPNFV Apex JIRA <https://wiki.opnfv.org/apex>`_
+page. You may now use the log collection utility provided by Apex to gather
+all of the logs from the overcloud after a deployment failure. To do this,
+run the ``opnfv-pyutil --fetch-logs`` command. The log file location will be
+displayed at the end of the script's execution. Please attach this log to
+the JIRA bug.
diff --git a/docs/release/installation/upstream.rst b/docs/release/installation/upstream.rst
new file mode 100644
index 00000000..b98b0c19
--- /dev/null
+++ b/docs/release/installation/upstream.rst
@@ -0,0 +1,107 @@
+Deploying Directly from Upstream - (Beta)
+=========================================
+
+In addition to deploying with OPNFV tested artifacts included in the
+opnfv-apex-undercloud and opnfv-apex RPMs, it is now possible to deploy
+directly from upstream artifacts. Essentially, this deployment pulls the
+latest RDO overcloud and undercloud artifacts at deploy time. This option is
+useful for deploying newer versions of OpenStack than those included with
+this release, and offers the additional advantages described below.
+Please note this feature is currently in beta for the Fraser release and will
+be fully supported in the next OPNFV release.
+
+Upstream Deployment Key Features
+--------------------------------
+
+In addition to being able to install newer versions of OpenStack, the upstream
+deployment option allows the use of a newer version of TripleO, which provides
+overcloud container support. Therefore, when deploying from upstream with an
+OpenStack version newer than Pike, every OpenStack service (as well as
+OpenDaylight) will be running as a Docker container. Furthermore, deploying
+upstream gives the user the flexibility to include any upstream OpenStack
+patches they may need by simply adding them to the deploy settings file. The
+patches will be applied live during deployment.
+
+Installation Guide - Upstream Deployment
+========================================
+
+This section describes, step by step, how to correctly install and provision
+the OPNFV target system using a direct upstream deployment.
+
+Special Requirements for Upstream Deployments
+---------------------------------------------
+
+Upstream deployments require internet access. In addition, the upstream
+artifacts will be cached under the root partition of the jump host, so at
+least 10GB of free space is required in the root partition in order to
+download and prepare the cached artifacts.
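+
+A quick way to verify this on the jump host before deploying (a minimal
+check; adjust if your root filesystem is mounted differently) is:
+
+.. code-block:: bash
+
+  # Confirm at least 10GB of free space is available on the root partition
+  df -h /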
+
+Scenarios and Deploy Settings for Upstream Deployments
+------------------------------------------------------
+
+Some deploy settings files which have been tested by the Apex team are
+already provided. These include (under ``/etc/opnfv-apex/``):
+
+ - os-nosdn-queens_upstream-noha.yaml
+ - os-nosdn-master_upstream-noha.yaml
+ - os-odl-queens_upstream-noha.yaml
+ - os-odl-master_upstream-noha.yaml
+
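+As an illustrative check (assuming the default package install location),
+you can confirm which upstream scenario files are available on the jump
+host:
+
+.. code-block:: bash
+
+  # List the upstream scenario deploy settings shipped with opnfv-apex
+  ls /etc/opnfv-apex/*upstream*.yaml
+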
+Each of these scenarios has been tested by Apex over the Fraser release, but
+none are guaranteed to work, as upstream is a moving target and this feature
+is relatively new. Still, it is the goal of the Apex team to provide support
+and move to upstream-based deployments in the future, so please file a bug
+when encountering any issues.
+
+Including Upstream Patches with Deployment
+------------------------------------------
+
+With upstream deployments it is possible to include any pending patch from
+OpenStack Gerrit in the deployment. These patches are applicable to either
+the undercloud or the overcloud. This feature is useful in the case where
+a developer or user wants to pull in an unmerged patch for testing with a
+deployment. In order to use this feature, include the following in the deploy
+settings file, under the ``global_params`` section:
+
+.. code-block:: yaml
+
+ patches:
+ undercloud:
+ - change-id: <gerrit change id>
+ project: openstack/<project name>
+ branch: <branch where commit is proposed>
+ overcloud:
+ - change-id: <gerrit change id>
+ project: openstack/<project name>
+ branch: <branch where commit is proposed>
+
+You may include as many patches as needed. If the patch is already merged or
+abandoned, then it will not be included in the deployment.
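+
+As an illustration only (the change-id, project and branch below are
+hypothetical placeholders, not a real review), a single overcloud patch
+entry could look like:
+
+.. code-block:: yaml
+
+  patches:
+    overcloud:
+      - change-id: I0123456789abcdef0123456789abcdef01234567
+        project: openstack/tripleo-heat-templates
+        branch: stable/queens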
+
+Running ``opnfv-deploy``
+------------------------
+
+Deploying is similar to the typical method used for bare metal and virtual
+deployments, with the addition of a few new arguments to the ``opnfv-deploy``
+command. In order to use an upstream deployment, please use the ``--upstream``
+argument. Also, the artifacts for each upstream deployment are only
+downloaded when a newer version is detected upstream. In order to explicitly
+disable downloading new artifacts from upstream if previous artifacts are
+already cached, please use the ``--no-fetch`` argument.
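+
+As a sketch, a virtual upstream deployment using one of the provided scenario
+files might look like the following (the ``-v``, ``-n`` and ``-d`` options
+and the network settings file are the same ones used in the virtual
+deployment instructions; adjust paths for your environment):
+
+.. code-block:: bash
+
+  # Virtual NOHA deployment from upstream using the ODL Queens scenario;
+  # --no-fetch reuses previously cached upstream artifacts if present
+  opnfv-deploy -v --upstream --no-fetch \
+    -d /etc/opnfv-apex/os-odl-queens_upstream-noha.yaml \
+    -n /etc/opnfv-apex/network_settings.yaml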
+
+Interacting with Containerized Overcloud
+----------------------------------------
+
+Upstream deployments will use a containerized overcloud. These containers are
+Docker images built by the Kolla project. The containers themselves are run
+and controlled through Docker as the root user. In order to access logs for
+each service, examine the ``/var/log/containers`` directory or use the
+``docker logs <container name>`` command. To see a list of services running
+on the node, use the ``docker ps`` command. Each container uses host
+networking, which means that the networking of the overcloud node will behave
+the same way as in a traditional deployment. In order to attach to a
+container, use ``docker exec -it <container name/id> /bin/bash``. This will
+log in to the container with a bash shell. Note that the containers do not
+use systemd, unlike the traditional deployment model, and are instead started
+as the first process in the container. To restart a service, use the
+``docker restart <container>`` command.
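+
+The commands above are summarized below for a hypothetical ``nova_api``
+container (actual container names on your overcloud nodes will vary; check
+``docker ps`` first):
+
+.. code-block:: bash
+
+  # List the containerized services running on this overcloud node
+  docker ps
+
+  # Inspect the logs of a single service container
+  docker logs nova_api
+
+  # Open an interactive bash shell inside the container
+  docker exec -it nova_api /bin/bash
+
+  # Restart a service by restarting its container
+  docker restart nova_api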