Installation High-Level Overview - Bare Metal Deployment
========================================================

The setup presumes that you have 6 or more bare metal servers already set up
with network connectivity on at least one network interface per server via a
TOR switch or other network implementation.

The physical TOR switches are **not** automatically configured from the OPNFV
reference platform.  All the networks involved in the OPNFV infrastructure as
well as the provider networks and the private tenant VLANs need to be manually
configured.

The Jumphost can be installed using the bootable ISO or by using the
(``opnfv-apex*.rpm``) RPMs and their dependencies.  The Jumphost should then be
configured with an IP gateway on its admin or public interface and with a
working DNS server.  The Jumphost should also have routable access
to the lights out network for the overcloud nodes.

``opnfv-deploy`` is then executed in order to deploy the undercloud VM and to
provision the overcloud nodes.  ``opnfv-deploy`` uses three configuration files
in order to know how to install and provision the OPNFV target system.
The information gathered under section
`Execution Requirements (Bare Metal Only)`_ is put into the
``/etc/opnfv-apex/inventory.yaml`` configuration file.  Deployment options are
put into the YAML file ``/etc/opnfv-apex/deploy_settings.yaml``.  Alternatively
there are pre-baked deploy_settings files available in ``/etc/opnfv-apex/``.
These files are named with the naming convention
``os-sdn_controller-enabled_feature-[no]ha.yaml`` and can be used in place
of the ``/etc/opnfv-apex/deploy_settings.yaml`` file if one suits your
deployment needs.  Networking definitions gathered under section
`Network Requirements`_ are put into the YAML file
``/etc/opnfv-apex/network_settings.yaml``.  ``opnfv-deploy`` will boot the
undercloud VM and load the target deployment configuration into the
provisioning toolchain.  This information includes MAC addresses, IPMI
credentials, the networking environment and the OPNFV deployment options.
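
To illustrate the deploy settings naming convention, here are two hypothetical
file names and how they decompose; check ``/etc/opnfv-apex/`` for the files
actually shipped with your release::

    os-odl-nofeature-ha.yaml      # OpenStack, OpenDaylight SDN, no extra feature, HA
    os-nosdn-nofeature-noha.yaml  # OpenStack, no SDN controller, no extra feature, non-HA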

Once the configuration is loaded and the undercloud is configured, it will
then reboot the overcloud nodes via IPMI.  The nodes should already be set to
PXE boot first off the admin interface.  The nodes will first PXE boot off of
the undercloud PXE server and go through a discovery/introspection process.

Introspection boots off of custom introspection PXE images.  These images are
designed to look at the properties of the hardware that is being booted
and report them back to the undercloud node.

After introspection the undercloud will execute a Heat stack deployment to
continue node provisioning and configuration.  The nodes will reboot and PXE
boot from the undercloud PXE server again to provision each node using Glance
disk images provided by the undercloud.  These disk images include all the
necessary packages and configuration for an OPNFV deployment to execute.  Once
the disk images have been written to the nodes' disks, the nodes will boot
locally and execute cloud-init, which will perform the final node
configuration.  This configuration is largely completed by executing a puppet
apply on each node.
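
If you want to follow provisioning while it runs, one option is to watch the
Heat stack from the undercloud.  This is a minimal sketch, assuming you can
reach the undercloud VM as the standard TripleO ``stack`` user and that the
usual ``stackrc`` credentials file is in place; the exact access method may
differ in your environment::

    # On the undercloud VM, load the undercloud credentials
    source ~/stackrc

    # The overcloud deployment appears here with its current status
    openstack stack list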

Installation High-Level Overview - VM Deployment
================================================

The VM node deployment operates almost the same way as the bare metal
deployment, with a few differences mainly related to power management.
``opnfv-deploy`` still deploys an undercloud VM.  In addition to the undercloud
VM, a collection of VMs (3 control nodes + 2 compute nodes for an HA
deployment, or 1 control node and 1 or more compute nodes for a non-HA
deployment) will be defined for the target OPNFV deployment.  The part of the
toolchain that executes IPMI power instructions calls into libvirt instead of
the IPMI interfaces on bare metal servers to operate the power management.
These VMs are then provisioned with the same disk images and configuration
that bare metal nodes would be.

To TripleO these nodes look like they have been built and registered the same
way as bare metal nodes; the main difference is the use of a libvirt driver
for the power management.
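
Because power management goes through libvirt, you can inspect the node VMs
directly on the Jumphost.  The check below is a minimal sketch assuming the
default libvirt connection; the domain names shown in the example output are
placeholders, as the actual names are assigned by Apex::

    # List all libvirt domains, running or not
    sudo virsh list --all

    # Illustrative output (names and IDs will differ):
    #  Id   Name         State
    # ----------------------------
    #  1    undercloud   running
    #  2    baremetal0   running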

Installation Guide - Bare Metal Deployment
==========================================

This section goes step-by-step on how to correctly install and provision the
OPNFV target system to bare metal nodes.

Install Bare Metal Jumphost
---------------------------

1a. If your Jumphost does not have CentOS 7 already on it, or you would like to
    do a fresh install, then download the Apex bootable ISO from the OPNFV
    artifacts site <http://artifacts.opnfv.org/apex.html>.  There have been
    isolated reports of the ISO having trouble completing installation
    successfully.  In the unexpected event the ISO does not work, please work
    around this by downloading the CentOS 7 DVD and performing a
    "Virtualization Host" install.  If you perform a "Minimal Install" or an
    install type other than "Virtualization Host", simply run

    ``sudo yum groupinstall "Virtualization Host"``
    ``chkconfig libvirtd on && reboot``

    to install virtualization support and enable libvirt on boot.  If you use
    the CentOS 7 DVD, proceed to step 1b once the CentOS 7 install with
    "Virtualization Host" support is completed.
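
    Either way, before moving on it is worth confirming that virtualization
    support is actually in place.  A quick sanity check, assuming an x86 host::

        # Non-zero count means hardware virtualization is exposed by the CPU
        egrep -c '(vmx|svm)' /proc/cpuinfo

        # The libvirt daemon should be installed and running
        sudo systemctl status libvirtd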

1b. If your Jumphost already has CentOS 7 with libvirt running on it, then
    install the RDO Newton release RPM and epel-release:

    ``sudo yum install https://repos.fedorapeople.org/repos/openstack/openstack-newton/rdo-release-newton-4.noarch.rpm``
    ``sudo yum install epel-release``

    The RDO Project release repository is needed to install Open vSwitch, which
    is a dependency of opnfv-apex. If you do not have external connectivity to
    use this repository, you need to download the Open vSwitch RPM from the RDO
    Project repositories and install it with the opnfv-apex RPM.
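
    For the offline case, one way to fetch the RPM on a connected machine is
    ``yumdownloader`` from yum-utils.  This is a sketch only; the package name
    ``openvswitch`` and the repositories enabled on the downloading machine are
    assumptions to verify against your environment::

        sudo yum install yum-utils
        yumdownloader openvswitch    # downloads the RPM to the current directory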

2a. Boot the ISO off of a USB drive or other installation media and walk
    through installing OPNFV CentOS 7.  The ISO comes prepared to be written
    directly to a USB drive with dd as such:

    ``dd if=opnfv-apex.iso of=/dev/sdX bs=4M``

    Replace ``/dev/sdX`` with the device assigned to your USB drive. Then
    select the USB device as the boot media on your Jumphost.
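
    If you are unsure which device was assigned to the USB drive, ``lsblk`` can
    help identify it before writing; double-check the target, as dd will
    overwrite it (the ``TRAN`` column shows ``usb`` for USB devices)::

        lsblk -d -o NAME,SIZE,MODEL,TRAN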

2b. If your Jumphost already has CentOS 7 with libvirt running on it, then
    install the opnfv-apex RPMs using the OPNFV artifacts yum repo. This yum
    repo is created at release time; it will not exist before release day.

    ``sudo yum install http://artifacts.opnfv.org/apex/euphrates/opnfv-apex-release-euphrates.noarch.rpm``

    Once you have installed the repo definitions for Apex, RDO and EPEL,
    install Apex with yum:

    ``sudo yum install opnfv-apex``

2c. If you choose not to use the Apex yum repo, or you choose to use
    pre-released RPMs, you can download and install the required RPMs from the
    artifacts site <http://artifacts.opnfv.org/apex.html>. The following RPMs
    are available for installation:

    - opnfv-apex                 - OpenDaylight, OVN, and nosdn support *
    - opnfv-apex-undercloud      - (required) Undercloud Image
    - python34-opnfv-apex        - (required) OPNFV Apex Python package
    - python34-markupsafe        - (required) Dependency of python34-opnfv-apex **
    - python34-jinja2            - (required) Dependency of python34-opnfv-apex **
    - python3-ipmi               - (required) Dependency of python34-opnfv-apex **
    - python34-pbr               - (required) Dependency of python34-opnfv-apex **
    - python34-virtualbmc        - (required) Dependency of python34-opnfv-apex **
    - python34-iptables          - (required) Dependency of python34-opnfv-apex **
    - python34-cryptography      - (required) Dependency of python34-opnfv-apex **
    - python34-libvirt           - (required) Dependency of python34-opnfv-apex **

    \* One or more of these RPMs is required.  Only one of opnfv-apex or
    opnfv-apex-onos is required; it is safe to leave the RPMs for SDN
    controllers you do not intend to use uninstalled.

    ** These RPMs are not yet distributed by CentOS or EPEL, so Apex builds
    and distributes them itself.  Once they are carried in an upstream channel
    Apex will no longer carry them and they will not need special handling for
    installation.


    The EPEL and RDO yum repos are still required:

    ``sudo yum install epel-release``
    ``sudo yum install https://repos.fedorapeople.org/repos/openstack/openstack-newton/rdo-release-newton-4.noarch.rpm``

    Once the Apex RPMs are downloaded, install them by passing the file names
    directly to yum, along with the remaining dependency RPMs listed above:

    ``sudo yum install python34-markupsafe-<version>.rpm
    python34-jinja2-<version>.rpm python3-ipmi-<version>.rpm``
    ``sudo yum install opnfv-apex-<version>.rpm
    opnfv-apex-undercloud-<version>.rpm python34-opnfv-apex-<version>.rpm``

3.  After the operating system and the opnfv-apex RPMs are installed, log in to
    your Jumphost as root.

4.  Configure IP addresses on the interfaces that you have selected for your
    networks.

5.  Configure the IP gateway to the Internet, preferably on the public
    interface.

6.  Configure your ``/etc/resolv.conf`` to point to a DNS server
    (8.8.8.8 is provided by Google).
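
    For example, a minimal ``/etc/resolv.conf`` pointing at Google's public
    DNS would contain::

        nameserver 8.8.8.8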

Creating a Node Inventory File
------------------------------

IPMI configuration information gathered in section
`Execution Requirements (Bare Metal Only)`_ needs to be added to the
``inventory.yaml`` file.

1.  Copy ``/usr/share/doc/opnfv/inventory.yaml.example`` as your inventory file
    template to ``/etc/opnfv-apex/inventory.yaml``.

2.  The nodes dictionary contains a definition block for each baremetal host
    that will be deployed. One or more compute nodes and 3 controller nodes
    are required (the example file already contains blocks for each of these).
    It is optional at this point to add more compute nodes into the node list.

3.  Edit the following values for each node:

    - ``mac_address``: MAC of the interface that will PXE boot from undercloud
    - ``ipmi_ip``: IPMI IP Address
    - ``ipmi_user``: IPMI username
    - ``ipmi_password``: IPMI password
    - ``pm_type``: Power Management driver to use for the node
        values: pxe_ipmitool (tested) or pxe_wol (untested) or pxe_amt (untested)
    - ``cpus``: (Introspected*) CPU cores available
    - ``memory``: (Introspected*) Memory available in MiB
    - ``disk``: (Introspected*) Disk space available in GB
    - ``disk_device``: (Optional***) Root disk device to use for installation
    - ``arch``: (Introspected*) System architecture
    - ``capabilities``: (Optional**) Node's role in deployment
        values: profile:control or profile:compute

    \* Introspection looks up the overcloud node's resources and overrides
    these values.  You can leave the default values and Apex will get the
    correct values when it runs introspection on the nodes.

    ** If a capabilities profile is not specified then Apex will select each
    node's role in the OPNFV cluster in a non-deterministic fashion.

    \*** disk_device declares which hard disk to use as the root device for
    installation.  The format is a comma delimited list of devices, such as
    "sda,sdb,sdc".  The disk chosen will be the first device in the list which
    is found by introspection to exist on the system.  Currently, only a single
    definition is allowed for all nodes.  Therefore if multiple disk_device
    definitions occur within the inventory, only the last definition
    encountered will be used for all nodes.
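
    A minimal sketch of a single node entry is shown below, with placeholder
    values.  The keys follow the list above; treat the shipped
    ``/usr/share/doc/opnfv/inventory.yaml.example`` as the authoritative
    reference for the exact syntax::

        nodes:
          node1:
            mac_address: "00:1e:67:00:00:01"   # PXE interface MAC (placeholder)
            ipmi_ip: 192.0.2.11                # IPMI address (placeholder)
            ipmi_user: admin
            ipmi_password: password
            pm_type: pxe_ipmitool
            cpus: 2                            # overridden by introspection
            memory: 8192                       # MiB, overridden by introspection
            disk: 40                           # GB, overridden by introspection
            arch: x86_64
            capabilities: profile:control
          # node2 through node5 follow the same pattern for an HA deployment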

Creating the Settings Files
---------------------------

Edit the two settings files in ``/etc/opnfv-apex/``. These files have comments
to help you customize them.

1. deploy_settings.yaml
   This file includes basic configuration options for the deployment and also
   documents all available options.
   Alternatively, there are pre-built deploy_settings files available in
   (``/etc/opnfv-apex/``). These files are named with the naming convention
   os-sdn_controller-enabled_feature-[no]ha.yaml and can be used in place of
   the (``/etc/opnfv-apex/deploy_settings.yaml``) file if one suits your
   deployment needs. If a pre-built deploy_settings file is chosen there
   is no need to customize (``/etc/opnfv-apex/deploy_settings.yaml``).
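
   To give a feel for the shape of this file, the fragment below is
   illustrative only; the key names shown (``global_params``,
   ``deploy_options``, ``ha_enabled``, ``sdn_controller``) are assumptions to
   verify against the comments in the shipped file::

       global_params:
         ha_enabled: true               # HA (3 controllers) vs non-HA

       deploy_options:
         sdn_controller: opendaylight   # SDN controller to deploy, or false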

2. network_settings.yaml
   This file provides Apex with the networking information that satisfies the
   prerequisite `Network Requirements`_. These are specific to your
   environment.

Running ``opnfv-deploy``
------------------------

You are now ready to deploy OPNFV using Apex!
``opnfv-deploy`` will use the inventory and settings files to deploy OPNFV.

Follow the steps below to execute:

1.  Execute opnfv-deploy:

    ``sudo opnfv-deploy -n network_settings.yaml
    -i inventory.yaml -d deploy_settings.yaml``

    If you need more information about the options that can be passed to
    opnfv-deploy, use ``opnfv-deploy --help``.  The ``-n network_settings.yaml``
    option allows you to customize your networking topology.
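
    As a concrete example using absolute paths and one of the pre-built
    deploy_settings files (``os-odl-nofeature-ha.yaml`` is assumed here as an
    instance of the naming convention; confirm it exists in your release), the
    invocation could look like::

        sudo opnfv-deploy -n /etc/opnfv-apex/network_settings.yaml \
                          -i /etc/opnfv-apex/inventory.yaml \
                          -d /etc/opnfv-apex/os-odl-nofeature-ha.yaml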

2.  Wait while the deployment is executed.
    If something goes wrong during this part of the process, start by reviewing
    your network or the information in your configuration files. It is not
    uncommon for something small to be overlooked or mistyped.
    You will also see output in your shell as the deployment progresses.

3.  When the deployment is complete the undercloud IP and overcloud dashboard
    URL will be printed. OPNFV has now been deployed using Apex.

.. _`Execution Requirements (Bare Metal Only)`: index.html#execution-requirements-bare-metal-only
.. _`Network Requirements`: index.html#network-requirements