Diffstat (limited to 'docs/installationprocedure')
-rw-r--r--  docs/installationprocedure/architecture.rst    7
-rw-r--r--  docs/installationprocedure/baremetal.rst       47
-rw-r--r--  docs/installationprocedure/requirements.rst    2
3 files changed, 43 insertions, 13 deletions
diff --git a/docs/installationprocedure/architecture.rst b/docs/installationprocedure/architecture.rst
index c2b38d00..33536788 100644
--- a/docs/installationprocedure/architecture.rst
+++ b/docs/installationprocedure/architecture.rst
@@ -44,6 +44,7 @@ will run the following services:
- OpenDaylight
- HA Proxy
- Pacemaker & VIPs
+- Ceph Monitors and OSDs
Stateless OpenStack services
All running stateless OpenStack services are load balanced by HA Proxy.
@@ -77,6 +78,12 @@ Pacemaker & VIPs
start up order and Virtual IPs associated with specific services are running
on the proper host.
+Ceph Monitors & OSDs
+ The Ceph monitors run on each of the control nodes. Each control node also
+ has a Ceph OSD running on it. By default the OSDs use an autogenerated
+ virtual disk as their target device. A non-autogenerated device can be
+ specified in the deploy file.
+
VM Migration is configured and VMs can be evacuated as needed or as invoked
by tools such as heat as part of a monitored stack deployment in the overcloud.
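The Ceph paragraph added above notes that a non-autogenerated device can be
specified in the deploy file. A minimal sketch of such a deploy-settings
fragment is shown below; the ``ceph_device`` option name and the surrounding
structure are assumptions and should be checked against the deploy file
templates shipped with your Apex release::

    deploy_options:
      # Assumed option: point the control-node OSDs at a physical block
      # device instead of the default autogenerated virtual disk.
      ceph_device: /dev/sdb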
diff --git a/docs/installationprocedure/baremetal.rst b/docs/installationprocedure/baremetal.rst
index 878a49d7..83cda326 100644
--- a/docs/installationprocedure/baremetal.rst
+++ b/docs/installationprocedure/baremetal.rst
@@ -94,9 +94,10 @@ Install Bare Metal Jumphost
support is completed.
1b. If your Jump host already has CentOS 7 with libvirt running on it then
- install the install the RDO Release RPM:
+ install the RDO Newton Release RPM and epel-release:
- ``sudo yum install -y https://www.rdoproject.org/repos/rdo-release.rpm``
+ ``sudo yum install https://repos.fedorapeople.org/repos/openstack/openstack-newton/rdo-release-newton-4.noarch.rpm``
+ ``sudo yum install epel-release``
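As an optional sanity check before continuing, the newly added repositories
can be confirmed as enabled (the repository ids matched by the grep pattern
are assumptions and vary by release)::

    sudo yum repolist enabled | grep -i -E 'rdo|epel'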
The RDO Project release repository is needed to install OpenVSwitch, which
is a dependency of opnfv-apex. If you do not have external connectivity to
@@ -113,11 +114,26 @@ Install Bare Metal Jumphost
the USB device as the boot media on your Jumphost
2b. If your Jump host already has CentOS 7 with libvirt running on it then
- install the opnfv-apex RPMs from the OPNFV artifacts site
- <http://artifacts.opnfv.org/apex.html>. The following RPMS are available
- for installation:
+ install the opnfv-apex RPMs using the OPNFV artifacts yum repo. This yum
+ repo is created at release time and will not exist before release day.
+
+ ``sudo yum install http://artifacts.opnfv.org/apex/danube/opnfv-apex-release-danube.noarch.rpm``
+
+ Once you have installed the repo definitions for Apex, RDO and EPEL,
+ install Apex with yum:
+
+ ``sudo yum install opnfv-apex``
+
+ If ONOS will be used, install the opnfv-apex-onos RPM instead of the
+ opnfv-apex RPM.
- - opnfv-apex - OpenDaylight L2 / L3 and ONOS support *
+ ``sudo yum install opnfv-apex-onos``
+
+2c. If you choose not to use the Apex yum repo or you wish to use
+ pre-release RPMs, you can download and install the required RPMs from the
+ artifacts site <http://artifacts.opnfv.org/apex.html>. The following RPMs
+ are available for installation:
+
+ - opnfv-apex - OpenDaylight L2 / L3 and ODL SFC support *
- opnfv-apex-onos - ONOS support *
- opnfv-apex-undercloud - (required) Undercloud Image
- opnfv-apex-common - (required) Supporting config files and scripts
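As a sketch of the 2c flow, the RPMs listed above can be fetched to local disk
before installation; the URLs below are placeholders only, and the actual
paths and file names must be read off <http://artifacts.opnfv.org/apex.html>::

    # Placeholder URLs: substitute the real artifact paths and versions.
    curl -O http://artifacts.opnfv.org/apex/<path>/opnfv-apex-<version>.noarch.rpm
    curl -O http://artifacts.opnfv.org/apex/<path>/opnfv-apex-undercloud-<version>.noarch.rpm
    curl -O http://artifacts.opnfv.org/apex/<path>/opnfv-apex-common-<version>.noarch.rpm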
@@ -136,20 +152,18 @@ Install Bare Metal Jumphost
no longer carry them and they will not need special handling for
installation.
- Python 3.4 is also required and it needs to be installed if you are using
- the Centos 7 base image:
+ The EPEL and RDO yum repos are still required:
``sudo yum install epel-release``
- ``sudo yum install python34``
+ ``sudo yum install https://repos.fedorapeople.org/repos/openstack/openstack-newton/rdo-release-newton-4.noarch.rpm``
- To install these RPMs download them to the local disk on your CentOS 7
- install and pass the file names directly to yum:
+ Once the apex RPMs are downloaded, install them by passing the file names
+ directly to yum:
``sudo yum install python34-markupsafe-<version>.rpm
python3-jinja2-<version>.rpm python3-ipmi-<version>.rpm``
``sudo yum install opnfv-apex-<version>.rpm
opnfv-apex-undercloud-<version>.rpm opnfv-apex-common-<version>.rpm``
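After the yum install completes, a quick check that the core packages from
the list in 2c are present can be run (package names only, no versions
needed)::

    rpm -q opnfv-apex opnfv-apex-undercloud opnfv-apex-common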
-
3. After the operating system and the opnfv-apex RPMs are installed, log in to
your Jumphost as root.
@@ -188,6 +202,7 @@ IPMI configuration information gathered in section
- ``cpus``: (Introspected*) CPU cores available
- ``memory``: (Introspected*) Memory available in MiB
- ``disk``: (Introspected*) Disk space available in GB
+ - ``disk_device``: (Opt***) Root disk device to use for installation
- ``arch``: (Introspected*) System architecture
- ``capabilities``: (Opt**) Node's role in deployment
values: profile:control or profile:compute
@@ -199,6 +214,14 @@ IPMI configuration information gathered in section
** If the capabilities profile is not specified then Apex will select the
nodes' roles in the OPNFV cluster in a non-deterministic fashion.
+ \*** disk_device declares which hard disk to use as the root device for
+ installation. The format is a comma delimited list of devices, such as
+ "sda,sdb,sdc". The disk chosen will be the first device in the list which
+ is found by introspection to exist on the system. Currently, only a single
+ definition is allowed for all nodes. Therefore, if multiple disk_device
+ definitions occur within the inventory, only the last definition encountered
+ will be applied to all nodes.
+
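A hypothetical fragment of an inventory entry using the fields described above
is sketched below; the ``nodes``/``node1`` wrapper and any fields not listed
here (MAC address, IPMI credentials, and so on) are assumptions, and the
authoritative layout is the example inventory shipped with Apex::

    nodes:
      node1:
        cpus: 2                        # Introspected
        memory: 8192                   # Introspected, in MiB
        disk: 40                       # Introspected, in GB
        disk_device: sda,sdb           # Opt: first device found by introspection is used as the root disk
        arch: x86_64                   # Introspected
        capabilities: profile:control  # Opt: node's role in the deployment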
Creating the Settings Files
---------------------------
diff --git a/docs/installationprocedure/requirements.rst b/docs/installationprocedure/requirements.rst
index 1b3fe87d..507b671e 100644
--- a/docs/installationprocedure/requirements.rst
+++ b/docs/installationprocedure/requirements.rst
@@ -33,7 +33,7 @@ Network requirements include:
- Private Tenant-Networking Network*
- - External Network
+ - External Network*
- Storage Network*