Diffstat (limited to 'docs/release/installation/baremetal.rst')
 docs/release/installation/baremetal.rst | 273
 1 file changed, 273 insertions, 0 deletions
diff --git a/docs/release/installation/baremetal.rst b/docs/release/installation/baremetal.rst
new file mode 100644
index 00000000..83cda326
--- /dev/null
+++ b/docs/release/installation/baremetal.rst
@@ -0,0 +1,273 @@
+Installation High-Level Overview - Bare Metal Deployment
+========================================================
+
+The setup presumes that you have 6 or more bare metal servers already set up
+with network connectivity on at least one network interface for all servers
+via a TOR switch or other network implementation.
+
+The physical TOR switches are **not** automatically configured from the OPNFV
+reference platform. All the networks involved in the OPNFV infrastructure as
+well as the provider networks and the private tenant VLANs need to be manually
+configured.
+
+The Jumphost can be installed using the bootable ISO or by using the
+(``opnfv-apex*.rpm``) RPMs and their dependencies. The Jumphost should then be
+configured with an IP gateway on its admin or public interface and with a
+working DNS server. The Jumphost should also have routable access
+to the lights out network for the overcloud nodes.
+
+``opnfv-deploy`` is then executed in order to deploy the undercloud VM and to
+provision the overcloud nodes. ``opnfv-deploy`` uses three configuration files
+in order to know how to install and provision the OPNFV target system.
+The information gathered under section
+`Execution Requirements (Bare Metal Only)`_ is put into the
+``/etc/opnfv-apex/inventory.yaml`` configuration file. Deployment options are
+put into the YAML file ``/etc/opnfv-apex/deploy_settings.yaml``. Alternatively
+there are pre-baked deploy_settings files available in ``/etc/opnfv-apex/``.
+These files follow the naming convention
+``os-sdn_controller-enabled_feature-[no]ha.yaml`` and can be used in place
+of the ``/etc/opnfv-apex/deploy_settings.yaml`` file if one suits your
+deployment needs. Networking definitions gathered under section
+`Network Requirements`_ are put into the YAML file
+``/etc/opnfv-apex/network_settings.yaml``. ``opnfv-deploy`` will boot the
+undercloud VM and load the target deployment configuration into the
+provisioning toolchain. This information includes MAC address, IPMI,
+Networking Environment and OPNFV deployment options.
+
+Once configuration is loaded and the undercloud is configured it will then
+reboot the overcloud nodes via IPMI. The nodes should already be set to PXE
+boot first off the admin interface. The nodes will first PXE off of the
+undercloud PXE server and go through a discovery/introspection process.
+
+Introspection boots off of custom introspection PXE images. These images are
+designed to look at the properties of the hardware that is being booted
+and report them back to the undercloud node.
+
+After introspection the undercloud will execute a Heat Stack Deployment to
+continue node provisioning and configuration. The nodes will reboot and PXE
+from the undercloud PXE server again to provision each node using Glance disk
+images provided by the undercloud. These disk images include all the necessary
+packages and configuration for an OPNFV deployment to execute. Once the disk
+images have been written to the nodes' disks the nodes will boot locally and
+execute cloud-init, which will execute the final node configuration. This
+configuration is largely completed by executing a puppet apply on each node.
+
+Installation High-Level Overview - VM Deployment
+================================================
+
+The VM nodes deployment operates almost the same way as the bare metal
+deployment with a few differences mainly related to power management.
+``opnfv-deploy`` still deploys an undercloud VM. In addition to the undercloud
+VM a collection of VMs (3 control nodes + 2 compute for an HA deployment or 1
+control node and 1 or more compute nodes for a Non-HA Deployment) will be
+defined for the target OPNFV deployment. The part of the toolchain that
+executes IPMI power instructions calls into libvirt instead of the IPMI
+interfaces on baremetal servers to operate the power management. These VMs are
+then provisioned with the same disk images and configuration that baremetal
+would be.
+
+To TripleO these nodes look as if they have just been built and registered the
+same way as bare metal nodes; the main difference is the use of a libvirt
+driver for the power management.
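+
+As an illustrative check (domain names vary by release), the undercloud and
+overcloud VMs created for a VM deployment can be listed with libvirt
+directly::
+
+  sudo virsh list --all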
+
+Installation Guide - Bare Metal Deployment
+==========================================
+
+This section goes step-by-step on how to correctly install and provision the
+OPNFV target system to bare metal nodes.
+
+Install Bare Metal Jumphost
+---------------------------
+
+1a. If your Jumphost does not have CentOS 7 already on it, or you would like to
+    do a fresh install, then download the Apex bootable ISO from the OPNFV
+    `artifacts site <http://artifacts.opnfv.org/apex.html>`_. There have been
+    isolated reports of the ISO failing to complete installation successfully.
+    In the unexpected event the ISO does not work, please work around this by
+    downloading the CentOS 7 DVD and performing a "Virtualization Host"
+    install. If you perform a "Minimal Install" or an install type other than
+    "Virtualization Host", simply run
+    ``sudo yum groupinstall "Virtualization Host"`` followed by
+    ``chkconfig libvirtd on && reboot``
+    to install virtualization support and enable libvirt on boot. If you use
+    the CentOS 7 DVD, proceed to step 1b once the CentOS 7 install with
+    "Virtualization Host" support is completed.
+
+1b. If your Jumphost already has CentOS 7 with libvirt running on it then
+    install the RDO Newton release RPM and epel-release:
+
+ ``sudo yum install https://repos.fedorapeople.org/repos/openstack/openstack-newton/rdo-release-newton-4.noarch.rpm``
+ ``sudo yum install epel-release``
+
+ The RDO Project release repository is needed to install OpenVSwitch, which
+ is a dependency of opnfv-apex. If you do not have external connectivity to
+ use this repository you need to download the OpenVSwitch RPM from the RDO
+ Project repositories and install it with the opnfv-apex RPM.
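+
+    For an offline install, this might look like the following minimal sketch
+    (the file name is a placeholder for the RPM you downloaded)::
+
+      sudo yum install ./openvswitch-<version>.rpm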
+
+2a. Boot the ISO off of a USB or other installation media and walk through
+ installing OPNFV CentOS 7. The ISO comes prepared to be written directly
+ to a USB drive with dd as such:
+
+ ``dd if=opnfv-apex.iso of=/dev/sdX bs=4M``
+
+    Replace ``/dev/sdX`` with the device assigned to your USB drive. Then
+    select the USB device as the boot media on your Jumphost.
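+
+    If you are unsure which device corresponds to the USB drive, listing the
+    block devices first can help (illustrative; names and sizes will differ)::
+
+      lsblk -d -o NAME,SIZE,MODEL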
+
+2b. If your Jumphost already has CentOS 7 with libvirt running on it then
+ install the opnfv-apex RPMs using the OPNFV artifacts yum repo. This yum
+ repo is created at release. It will not exist before release day.
+
+ ``sudo yum install http://artifacts.opnfv.org/apex/danube/opnfv-apex-release-danube.noarch.rpm``
+
+    Once you have installed the repo definitions for Apex, RDO and EPEL,
+    install Apex with yum:
+
+ ``sudo yum install opnfv-apex``
+
+    If ONOS will be used, install the ONOS RPM instead of the opnfv-apex RPM.
+
+ ``sudo yum install opnfv-apex-onos``
+
+2c. If you choose not to use the Apex yum repo or you choose to use
+    pre-released RPMs, you can download and install the required RPMs from the
+    `artifacts site <http://artifacts.opnfv.org/apex.html>`_. The following
+    RPMs are available for installation:
+
+    - opnfv-apex - OpenDaylight L2 / L3 and ODL SFC support *
+    - opnfv-apex-onos - ONOS support *
+    - opnfv-apex-undercloud - (required) Undercloud image
+    - opnfv-apex-common - (required) Supporting config files and scripts
+    - python34-markupsafe - (required) Dependency of opnfv-apex-common **
+    - python3-jinja2 - (required) Dependency of opnfv-apex-common **
+    - python3-ipmi - (required) Dependency of opnfv-apex-common **
+
+    \* One or more of these RPMs is required. Only one of opnfv-apex or
+    opnfv-apex-onos is needed; it is safe to leave the unneeded SDN
+    controller's RPM uninstalled if you do not intend to use it.
+
+    \** These RPMs are not yet distributed by CentOS or EPEL. Apex has built
+    them for distribution with Apex while CentOS and EPEL do not carry them.
+    Once they are available in an upstream channel Apex will no longer carry
+    them and they will not need special handling for installation.
+
+
+ The EPEL and RDO yum repos are still required:
+ ``sudo yum install epel-release``
+ ``sudo yum install https://repos.fedorapeople.org/repos/openstack/openstack-newton/rdo-release-newton-4.noarch.rpm``
+
+    Once the Apex RPMs are downloaded install them by passing the file names
+    directly to yum::
+
+      sudo yum install python34-markupsafe-<version>.rpm \
+          python3-jinja2-<version>.rpm python3-ipmi-<version>.rpm
+      sudo yum install opnfv-apex-<version>.rpm \
+          opnfv-apex-undercloud-<version>.rpm opnfv-apex-common-<version>.rpm
+
+3. After the operating system and the opnfv-apex RPMs are installed, log in to
+   your Jumphost as root.
+
+4. Configure IP addresses on the interfaces that you have selected as your
+   networks.
+
+5. Configure the IP gateway to the Internet, preferably on the public
+   interface.
+
+6. Configure your ``/etc/resolv.conf`` to point to a DNS server
+   (8.8.8.8 is provided by Google). An illustrative sketch of steps 4-6
+   follows this list.
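+
+For example, on a Jumphost whose public interface is ``em1`` (a minimal,
+non-persistent sketch; the interface name and addresses are placeholders for
+your environment, and permanent settings belong in the ifcfg files under
+``/etc/sysconfig/network-scripts/``)::
+
+  ip addr add 192.0.2.10/24 dev em1
+  ip route add default via 192.0.2.1
+  echo "nameserver 8.8.8.8" > /etc/resolv.conf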
+
+Creating a Node Inventory File
+------------------------------
+
+IPMI configuration information gathered in section
+`Execution Requirements (Bare Metal Only)`_ needs to be added to the
+``inventory.yaml`` file.
+
+1. Copy ``/usr/share/doc/opnfv/inventory.yaml.example`` as your inventory file
+ template to ``/etc/opnfv-apex/inventory.yaml``.
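+
+   For example::
+
+     cp /usr/share/doc/opnfv/inventory.yaml.example /etc/opnfv-apex/inventory.yaml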
+
+2. The nodes dictionary contains a definition block for each baremetal host
+   that will be deployed. One or more compute nodes and 3 controller nodes
+   are required (the example file already contains blocks for each of these).
+   It is optional at this point to add more compute nodes into the node list.
+
+3. Edit the following values for each node (an illustrative entry follows the
+   notes below):
+
+ - ``mac_address``: MAC of the interface that will PXE boot from undercloud
+ - ``ipmi_ip``: IPMI IP Address
+ - ``ipmi_user``: IPMI username
+ - ``ipmi_password``: IPMI password
+ - ``pm_type``: Power Management driver to use for the node
+ values: pxe_ipmitool (tested) or pxe_wol (untested) or pxe_amt (untested)
+ - ``cpus``: (Introspected*) CPU cores available
+    - ``memory``: (Introspected*) Memory available in MiB
+    - ``disk``: (Introspected*) Disk space available in GB
+ - ``disk_device``: (Opt***) Root disk device to use for installation
+ - ``arch``: (Introspected*) System architecture
+ - ``capabilities``: (Opt**) Node's role in deployment
+ values: profile:control or profile:compute
+
+   \* Introspection looks up the overcloud node's resources and overrides
+   these values. You can leave the default values and Apex will get the
+   correct values when it runs introspection on the nodes.
+
+   \** If the capabilities profile is not specified then Apex will select the
+   nodes' roles in the OPNFV cluster in a non-deterministic fashion.
+
+ \*** disk_device declares which hard disk to use as the root device for
+ installation. The format is a comma delimited list of devices, such as
+ "sda,sdb,sdc". The disk chosen will be the first device in the list which
+ is found by introspection to exist on the system. Currently, only a single
+ definition is allowed for all nodes. Therefore if multiple disk_device
+   definitions occur within the inventory, only the last definition specified
+   will be used for all nodes.
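+
+   For illustration, a single entry in the nodes dictionary might look like
+   the following minimal sketch (the MAC, IPMI address and credentials are
+   placeholder values; consult the example file for the authoritative
+   layout)::
+
+     nodes:
+       node1:
+         mac_address: "52:54:00:aa:bb:cc"
+         ipmi_ip: 192.0.2.21
+         ipmi_user: admin
+         ipmi_password: password
+         pm_type: "pxe_ipmitool"
+         cpus: 2
+         memory: 8192
+         disk: 40
+         arch: "x86_64"
+         capabilities: "profile:control"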
+
+Creating the Settings Files
+---------------------------
+
+Edit the 2 settings files in ``/etc/opnfv-apex/``. These files have comments to
+help you customize them.
+
+1. deploy_settings.yaml
+   This file includes basic configuration options for deployment, and also
+   documents all available options.
+   Alternatively, there are pre-built deploy_settings files available in
+   ``/etc/opnfv-apex/``. These files follow the naming convention
+   ``os-sdn_controller-enabled_feature-[no]ha.yaml`` and can be used in
+   place of the ``/etc/opnfv-apex/deploy_settings.yaml`` file if one suits
+   your deployment needs. If a pre-built deploy_settings file is chosen,
+   there is no need to customize ``/etc/opnfv-apex/deploy_settings.yaml``;
+   see the example after this list.
+
+2. network_settings.yaml
+ This file provides Apex with the networking information that satisfies the
+ prerequisite `Network Requirements`_. These are specific to your
+ environment.
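+
+For example, to deploy with one of the pre-built deploy_settings files (the
+exact file names available depend on your Apex release)::
+
+  ls /etc/opnfv-apex/os-*.yaml
+  sudo opnfv-deploy -n network_settings.yaml -i inventory.yaml \
+      -d /etc/opnfv-apex/os-nosdn-nofeature-ha.yaml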
+
+Running ``opnfv-deploy``
+------------------------
+
+You are now ready to deploy OPNFV using Apex!
+``opnfv-deploy`` will use the inventory and settings files to deploy OPNFV.
+
+Follow the steps below to execute:
+
+1. Execute opnfv-deploy::
+
+     sudo opnfv-deploy -n network_settings.yaml -i inventory.yaml \
+         -d deploy_settings.yaml
+
+   If you need more information about the options that can be passed to
+   opnfv-deploy use ``opnfv-deploy --help``. The ``-n network_settings.yaml``
+   option allows you to customize your networking topology.
+
+2. Wait while deployment is executed.
+ If something goes wrong during this part of the process, start by reviewing
+ your network or the information in your configuration files. It's not
+   uncommon for something small to be overlooked or mistyped.
+ You will also notice outputs in your shell as the deployment progresses.
+
+3. When the deployment is complete the undercloud IP and overcloud dashboard
+   URL will be printed. OPNFV has now been deployed using Apex.
+
+.. _`Execution Requirements (Bare Metal Only)`: index.html#execution-requirements-bare-metal-only
+.. _`Network Requirements`: index.html#network-requirements