author     Alexandru Avadanii <Alexandru.Avadanii@enea.com>  2018-09-28 16:35:10 +0200
committer  Alexandru Avadanii <Alexandru.Avadanii@enea.com>  2018-11-05 16:42:15 +0100
commit     170d2d1c195d001d6ca786364aaf3c10e714ae36 (patch)
tree       c057ed1c6d32c719e28d06ea0efd7f1d030de54f /docs/release
parent     532427ad43e1c1728bf21317aea6af00d9758227 (diff)
[docs] Refresh for Gambia release
- s/Fuel@OPNFV/OPNFV Fuel/g;
- added README files for ci/scenarios/patches directories;
- refresh & simplify cluster overview diagrams;
- unify labels across docs;
- fix TOC numbering;
- remove local labs PDF/IDF files, as they are merely duplicates of
  Pharos files included as a git submodule;

JIRA: FUEL-397

Change-Id: I87f61938eeb67f13fd9205d5226a30f02e55d267
Signed-off-by: Alexandru Avadanii <Alexandru.Avadanii@enea.com>
Diffstat (limited to 'docs/release')
l---------  docs/release/developer-guide/img/README.rst | 1
-rwxr-xr-x  docs/release/developer-guide/img/detail_fuel.png | bin 0 -> 40664 bytes
-rwxr-xr-x  docs/release/developer-guide/img/overview_fuel.png | bin 0 -> 53292 bytes
-rwxr-xr-x  docs/release/developer-guide/img/overview_mcp.png | bin 0 -> 50856 bytes
-rwxr-xr-x  docs/release/developer-guide/img/symbol_gerrit.png | bin 0 -> 20012 bytes
-rwxr-xr-x  docs/release/developer-guide/img/symbol_git_blue.png | bin 0 -> 2373 bytes
-rwxr-xr-x  docs/release/developer-guide/img/symbol_git_orange.png | bin 0 -> 1956 bytes
-rwxr-xr-x  docs/release/developer-guide/img/symbol_git_red.png | bin 0 -> 2396 bytes
-rwxr-xr-x  docs/release/developer-guide/img/symbol_jenkins.png | bin 0 -> 2666 bytes
-rwxr-xr-x  docs/release/developer-guide/img/symbol_k8.png | bin 0 -> 2142 bytes
-rwxr-xr-x  docs/release/developer-guide/img/symbol_os.png | bin 0 -> 2396 bytes
-rwxr-xr-x  docs/release/developer-guide/img/symbol_salt.png | bin 0 -> 5339 bytes
-rwxr-xr-x  docs/release/developer-guide/img/symbol_trigger.png | bin 0 -> 1917 bytes
-rwxr-xr-x  docs/release/developer-guide/img/symbol_user.png | bin 0 -> 2521 bytes
-rw-r--r--  docs/release/installation/img/README.rst | 14
-rw-r--r--  docs/release/installation/img/arm_pod5.png | bin 178079 -> 0 bytes
-rw-r--r--  docs/release/installation/img/fuel_baremetal.png | bin 272115 -> 0 bytes
-rw-r--r--  docs/release/installation/img/fuel_baremetal_ha.png | bin 0 -> 289121 bytes
-rw-r--r--  docs/release/installation/img/fuel_baremetal_noha.png | bin 0 -> 197550 bytes
-rw-r--r--  docs/release/installation/img/fuel_hybrid_noha.png | bin 0 -> 191144 bytes
-rw-r--r--  docs/release/installation/img/fuel_virtual.png | bin 216442 -> 0 bytes
-rw-r--r--  docs/release/installation/img/fuel_virtual_noha.png | bin 0 -> 236222 bytes
-rw-r--r--  docs/release/installation/img/lf_pod2.png | bin 178795 -> 0 bytes
-rw-r--r--  docs/release/installation/index.rst | 16
-rw-r--r--  docs/release/installation/installation.instruction.rst | 1634
-rw-r--r--  docs/release/release-notes/index.rst | 9
-rw-r--r--  docs/release/release-notes/release-notes.rst | 258
-rw-r--r--  docs/release/scenarios/index.rst | 7
-rw-r--r--  docs/release/scenarios/os-nosdn-ovs-ha/index.rst | 4
-rw-r--r--  docs/release/scenarios/os-nosdn-ovs-ha/os-nosdn-ovs-ha.rst | 1
-rw-r--r--  docs/release/scenarios/os-nosdn-ovs-noha/index.rst | 4
-rw-r--r--  docs/release/scenarios/os-nosdn-ovs-noha/os-nosdn-ovs-noha.rst | 1
-rw-r--r--  docs/release/scenarios/os-nosdn-vpp-ha/index.rst | 4
-rw-r--r--  docs/release/scenarios/os-nosdn-vpp-ha/os-nosdn-vpp-ha.rst | 1
-rw-r--r--  docs/release/scenarios/os-nosdn-vpp-noha/index.rst | 4
-rw-r--r--  docs/release/scenarios/os-nosdn-vpp-noha/os-nosdn-vpp-noha.rst | 1
-rw-r--r--  docs/release/scenarios/os-ovn-nofeature-ha/index.rst | 4
-rw-r--r--  docs/release/scenarios/os-ovn-nofeature-ha/os-ovn-nofeature-ha.rst | 1
-rw-r--r--  docs/release/scenarios/os-ovn-nofeature-noha/index.rst | 4
-rw-r--r--  docs/release/scenarios/os-ovn-nofeature-noha/os-ovn-nofeature-noha.rst | 1
-rw-r--r--  docs/release/userguide/img/saltstack.png | bin 14373 -> 0 bytes
-rw-r--r--  docs/release/userguide/index.rst | 10
-rw-r--r--  docs/release/userguide/userguide.rst | 1164
43 files changed, 2252 insertions, 891 deletions
diff --git a/docs/release/developer-guide/img/README.rst b/docs/release/developer-guide/img/README.rst
new file mode 120000
index 000000000..1104109df
--- /dev/null
+++ b/docs/release/developer-guide/img/README.rst
@@ -0,0 +1 @@
+../../installation/img/README.rst
\ No newline at end of file
diff --git a/docs/release/developer-guide/img/detail_fuel.png b/docs/release/developer-guide/img/detail_fuel.png
new file mode 100755
index 000000000..02af61aa7
--- /dev/null
+++ b/docs/release/developer-guide/img/detail_fuel.png
Binary files differ
diff --git a/docs/release/developer-guide/img/overview_fuel.png b/docs/release/developer-guide/img/overview_fuel.png
new file mode 100755
index 000000000..6b879d756
--- /dev/null
+++ b/docs/release/developer-guide/img/overview_fuel.png
Binary files differ
diff --git a/docs/release/developer-guide/img/overview_mcp.png b/docs/release/developer-guide/img/overview_mcp.png
new file mode 100755
index 000000000..037b293b9
--- /dev/null
+++ b/docs/release/developer-guide/img/overview_mcp.png
Binary files differ
diff --git a/docs/release/developer-guide/img/symbol_gerrit.png b/docs/release/developer-guide/img/symbol_gerrit.png
new file mode 100755
index 000000000..aea346e25
--- /dev/null
+++ b/docs/release/developer-guide/img/symbol_gerrit.png
Binary files differ
diff --git a/docs/release/developer-guide/img/symbol_git_blue.png b/docs/release/developer-guide/img/symbol_git_blue.png
new file mode 100755
index 000000000..569ed3f7b
--- /dev/null
+++ b/docs/release/developer-guide/img/symbol_git_blue.png
Binary files differ
diff --git a/docs/release/developer-guide/img/symbol_git_orange.png b/docs/release/developer-guide/img/symbol_git_orange.png
new file mode 100755
index 000000000..32f672985
--- /dev/null
+++ b/docs/release/developer-guide/img/symbol_git_orange.png
Binary files differ
diff --git a/docs/release/developer-guide/img/symbol_git_red.png b/docs/release/developer-guide/img/symbol_git_red.png
new file mode 100755
index 000000000..f288afe0b
--- /dev/null
+++ b/docs/release/developer-guide/img/symbol_git_red.png
Binary files differ
diff --git a/docs/release/developer-guide/img/symbol_jenkins.png b/docs/release/developer-guide/img/symbol_jenkins.png
new file mode 100755
index 000000000..20fde4141
--- /dev/null
+++ b/docs/release/developer-guide/img/symbol_jenkins.png
Binary files differ
diff --git a/docs/release/developer-guide/img/symbol_k8.png b/docs/release/developer-guide/img/symbol_k8.png
new file mode 100755
index 000000000..0cbc31005
--- /dev/null
+++ b/docs/release/developer-guide/img/symbol_k8.png
Binary files differ
diff --git a/docs/release/developer-guide/img/symbol_os.png b/docs/release/developer-guide/img/symbol_os.png
new file mode 100755
index 000000000..c2c8b262b
--- /dev/null
+++ b/docs/release/developer-guide/img/symbol_os.png
Binary files differ
diff --git a/docs/release/developer-guide/img/symbol_salt.png b/docs/release/developer-guide/img/symbol_salt.png
new file mode 100755
index 000000000..e9011ae0c
--- /dev/null
+++ b/docs/release/developer-guide/img/symbol_salt.png
Binary files differ
diff --git a/docs/release/developer-guide/img/symbol_trigger.png b/docs/release/developer-guide/img/symbol_trigger.png
new file mode 100755
index 000000000..e7dc10ffd
--- /dev/null
+++ b/docs/release/developer-guide/img/symbol_trigger.png
Binary files differ
diff --git a/docs/release/developer-guide/img/symbol_user.png b/docs/release/developer-guide/img/symbol_user.png
new file mode 100755
index 000000000..6384f8205
--- /dev/null
+++ b/docs/release/developer-guide/img/symbol_user.png
Binary files differ
diff --git a/docs/release/installation/img/README.rst b/docs/release/installation/img/README.rst
index 4cb1f77d2..bf630445b 100644
--- a/docs/release/installation/img/README.rst
+++ b/docs/release/installation/img/README.rst
@@ -1,12 +1,18 @@
.. This work is licensed under a Creative Commons Attribution 4.0 International License.
.. SPDX-License-Identifier: CC-BY-4.0
-.. (c) 2017 Ericsson AB, Mirantis Inc., Enea AB and others.
+.. (c) 2018 Ericsson AB, Mirantis Inc., Enea AB and others.
+
+:orphan:
Image Editor
============
-All files in this directory have been created using `draw.io <https://draw.io>`_.
+
+All files in this directory have been created using `draw.io`_.
Image Sources
=============
-Image sources are embedded in each `png` file.
-To edit an image, import the `png` file using `draw.io <https://draw.io>`_.
+
+Image sources are embedded in each ``png`` file.
+To edit an image, import the ``png`` file using `draw.io`_.
+
+.. _`draw.io`: https://draw.io
diff --git a/docs/release/installation/img/arm_pod5.png b/docs/release/installation/img/arm_pod5.png
deleted file mode 100644
index 87edb8f45..000000000
--- a/docs/release/installation/img/arm_pod5.png
+++ /dev/null
Binary files differ
diff --git a/docs/release/installation/img/fuel_baremetal.png b/docs/release/installation/img/fuel_baremetal.png
deleted file mode 100644
index 27e762021..000000000
--- a/docs/release/installation/img/fuel_baremetal.png
+++ /dev/null
Binary files differ
diff --git a/docs/release/installation/img/fuel_baremetal_ha.png b/docs/release/installation/img/fuel_baremetal_ha.png
new file mode 100644
index 000000000..f2ed6106f
--- /dev/null
+++ b/docs/release/installation/img/fuel_baremetal_ha.png
Binary files differ
diff --git a/docs/release/installation/img/fuel_baremetal_noha.png b/docs/release/installation/img/fuel_baremetal_noha.png
new file mode 100644
index 000000000..5a3b42919
--- /dev/null
+++ b/docs/release/installation/img/fuel_baremetal_noha.png
Binary files differ
diff --git a/docs/release/installation/img/fuel_hybrid_noha.png b/docs/release/installation/img/fuel_hybrid_noha.png
new file mode 100644
index 000000000..51449a777
--- /dev/null
+++ b/docs/release/installation/img/fuel_hybrid_noha.png
Binary files differ
diff --git a/docs/release/installation/img/fuel_virtual.png b/docs/release/installation/img/fuel_virtual.png
deleted file mode 100644
index d7664865d..000000000
--- a/docs/release/installation/img/fuel_virtual.png
+++ /dev/null
Binary files differ
diff --git a/docs/release/installation/img/fuel_virtual_noha.png b/docs/release/installation/img/fuel_virtual_noha.png
new file mode 100644
index 000000000..7d05a9dcd
--- /dev/null
+++ b/docs/release/installation/img/fuel_virtual_noha.png
Binary files differ
diff --git a/docs/release/installation/img/lf_pod2.png b/docs/release/installation/img/lf_pod2.png
deleted file mode 100644
index da419d87c..000000000
--- a/docs/release/installation/img/lf_pod2.png
+++ /dev/null
Binary files differ
diff --git a/docs/release/installation/index.rst b/docs/release/installation/index.rst
index 00332262f..866044eb5 100644
--- a/docs/release/installation/index.rst
+++ b/docs/release/installation/index.rst
@@ -1,24 +1,10 @@
-.. _fuel-installation:
-
.. This work is licensed under a Creative Commons Attribution 4.0 International License.
.. http://creativecommons.org/licenses/by/4.0
.. (c) Open Platform for NFV Project, Inc. and its contributors
-.. _fuel-release-installation-label:
-
-****************************************
-Installation instruction for Fuel\@OPNFV
-****************************************
-
-Contents:
+.. _fuel-installation:
.. toctree::
- :numbered:
:maxdepth: 2
installation.instruction.rst
-
-Indices and tables
-==================
-
-* :ref:`search`
diff --git a/docs/release/installation/installation.instruction.rst b/docs/release/installation/installation.instruction.rst
index 9aaebdd7c..40f9d26ae 100644
--- a/docs/release/installation/installation.instruction.rst
+++ b/docs/release/installation/installation.instruction.rst
@@ -2,637 +2,1395 @@
.. http://creativecommons.org/licenses/by/4.0
.. (c) Open Platform for NFV Project, Inc. and its contributors
-========
+***********************************
+OPNFV Fuel Installation Instruction
+***********************************
+
Abstract
========
-This document describes how to install the Fraser release of
+This document describes how to install the ``Gambia`` release of
OPNFV when using Fuel as a deployment tool, covering its usage,
limitations, dependencies and required system resources.
-This is an unified documentation for both x86_64 and aarch64
+
+This is unified documentation for both ``x86_64`` and ``aarch64``
architectures. All information is common for both architectures
except when explicitly stated.
-============
Introduction
============
This document provides guidelines on how to install and
-configure the Fraser release of OPNFV when using Fuel as a
+configure the ``Gambia`` release of OPNFV when using Fuel as a
deployment tool, including required software and hardware configurations.
Although the available installation options provide a high degree of
freedom in how the system is set up, including architecture, services
and features, etc., said permutations may not provide an OPNFV
compliant reference architecture. This document provides a
-step-by-step guide that results in an OPNFV Fraser compliant
+step-by-step guide that results in an OPNFV ``Gambia`` compliant
deployment.
The audience of this document is assumed to have good knowledge of
networking and Unix/Linux administration.
-=======
-Preface
-=======
-
-Before starting the installation of the Fraser release of
+Before starting the installation of the ``Gambia`` release of
OPNFV, using Fuel as a deployment tool, some planning must be
done.
Preparations
============
-Prior to installation, a number of deployment specific parameters must be collected, those are:
+Prior to installation, a number of deployment-specific parameters must be
+collected; these are:
#. Provider sub-net and gateway information
-#. Provider VLAN information
-
-#. Provider DNS addresses
+#. Provider ``VLAN`` information
-#. Provider NTP addresses
+#. Provider ``DNS`` addresses
-#. Network overlay you plan to deploy (VLAN, VXLAN, FLAT)
-
-#. How many nodes and what roles you want to deploy (Controllers, Storage, Computes)
-
-#. Monitoring options you want to deploy (Ceilometer, Syslog, etc.).
-
-#. Other options not covered in the document are available in the links above
+#. Provider ``NTP`` addresses
+#. How many nodes and what roles you want to deploy (Controllers, Computes)
This information will be needed for the configuration procedures
provided in this document.
-=========================================
-Hardware Requirements for Virtual Deploys
-=========================================
-
-The following minimum hardware requirements must be met for the virtual
-installation of Fraser using Fuel:
-
-+----------------------------+--------------------------------------------------------+
-| **HW Aspect** | **Requirement** |
-| | |
-+============================+========================================================+
-| **1 Jumpserver** | A physical node (also called Foundation Node) that |
-| | will host a Salt Master VM and each of the VM nodes in |
-| | the virtual deploy |
-+----------------------------+--------------------------------------------------------+
-| **CPU** | Minimum 1 socket with Virtualization support |
-+----------------------------+--------------------------------------------------------+
-| **RAM** | Minimum 32GB/server (Depending on VNF work load) |
-+----------------------------+--------------------------------------------------------+
-| **Disk** | Minimum 100GB (SSD or SCSI (15krpm) highly recommended)|
-+----------------------------+--------------------------------------------------------+
-
-
-===========================================
-Hardware Requirements for Baremetal Deploys
-===========================================
-
-The following minimum hardware requirements must be met for the baremetal
-installation of Fraser using Fuel:
-
-+-------------------------+------------------------------------------------------+
-| **HW Aspect** | **Requirement** |
-| | |
-+=========================+======================================================+
-| **# of nodes** | Minimum 5 |
-| | |
-| | - 3 KVM servers which will run all the controller |
-| | services |
-| | |
-| | - 2 Compute nodes |
-| | |
-+-------------------------+------------------------------------------------------+
-| **CPU** | Minimum 1 socket with Virtualization support |
-+-------------------------+------------------------------------------------------+
-| **RAM** | Minimum 16GB/server (Depending on VNF work load) |
-+-------------------------+------------------------------------------------------+
-| **Disk** | Minimum 256GB 10kRPM spinning disks |
-+-------------------------+------------------------------------------------------+
-| **Networks** | 4 VLANs (PUBLIC, MGMT, STORAGE, PRIVATE) - can be |
-| | a mix of tagged/native |
-| | |
-| | 1 Un-Tagged VLAN for PXE Boot - ADMIN Network |
-| | |
-| | Note: These can be allocated to a single NIC - |
-| | or spread out over multiple NICs |
-+-------------------------+------------------------------------------------------+
-| **1 Jumpserver** | A physical node (also called Foundation Node) that |
-| | hosts the Salt Master and MaaS VMs |
-+-------------------------+------------------------------------------------------+
-| **Power management** | All targets need to have power management tools that |
-| | allow rebooting the hardware and setting the boot |
-| | order (e.g. IPMI) |
-+-------------------------+------------------------------------------------------+
+Hardware Requirements
+=====================
-.. NOTE::
-
- All nodes including the Jumpserver must have the same architecture (either x86_64 or aarch64).
+Minimum hardware requirements depend on the deployment type.
-.. NOTE::
+.. WARNING::
- For aarch64 deployments an UEFI compatible firmware with PXE support is needed (e.g. EDK2).
+ If ``baremetal`` nodes are present in the cluster, the architecture of the
+ nodes running the control plane (``kvm01``, ``kvm02``, ``kvm03`` for
+ ``HA`` scenarios, respectively ``ctl01``, ``gtw01``, ``odl01`` for
+ ``noHA`` scenarios) and the ``jumpserver`` architecture must be the same
+ (either ``x86_64`` or ``aarch64``).
+
+.. TIP::
+
+ The compute nodes may have different architectures, but extra
+ configuration might be required for scheduling VMs on the appropriate host.
+ This use-case is not tested in OPNFV CI, so it is considered experimental.
+
+Hardware Requirements for ``virtual`` Deploys
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The following minimum hardware requirements must be met for the ``virtual``
+installation of ``Gambia`` using Fuel:
+
++------------------+------------------------------------------------------+
+| **HW Aspect** | **Requirement** |
+| | |
++==================+======================================================+
+| **1 Jumpserver** | A physical node (also called Foundation Node) that |
+| | will host a Salt Master container and each of the VM |
+| | nodes in the virtual deploy |
++------------------+------------------------------------------------------+
+| **CPU** | Minimum 1 socket with Virtualization support |
++------------------+------------------------------------------------------+
+| **RAM** | Minimum 32GB/server (Depending on VNF work load) |
++------------------+------------------------------------------------------+
+| **Disk** | Minimum 100GB (SSD or 15krpm SCSI highly recommended)|
++------------------+------------------------------------------------------+
+
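+.. TIP::
+
+   The virtualization support and available resources on a candidate
+   Jumpserver can be quickly sanity-checked with common Linux tools.
+   The commands below are only an illustrative sketch (the ``vmx``/``svm``
+   CPU flags apply to ``x86_64`` hosts only), not part of the official
+   procedure:
+
+   .. code-block:: console
+
+      jenkins@jumpserver:~$ egrep -c '(vmx|svm)' /proc/cpuinfo
+      jenkins@jumpserver:~$ free -g
+      jenkins@jumpserver:~$ df -h /
+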
+Hardware Requirements for ``baremetal`` Deploys
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The following minimum hardware requirements must be met for the ``baremetal``
+installation of ``Gambia`` using Fuel:
+
++------------------+------------------------------------------------------+
+| **HW Aspect** | **Requirement** |
+| | |
++==================+======================================================+
+| **1 Jumpserver** | A physical node (also called Foundation Node) that |
+| | hosts the Salt Master container and MaaS VM |
++------------------+------------------------------------------------------+
+| **# of nodes** | Minimum 5 |
+| | |
+| | - 3 KVM servers which will run all the controller |
+| | services |
+| | |
+| | - 2 Compute nodes |
+| | |
+| | .. WARNING:: |
+| | |
+| | ``kvm01``, ``kvm02``, ``kvm03`` nodes and the |
+| | ``jumpserver`` must have the same architecture |
+| | (either ``x86_64`` or ``aarch64``). |
+| | |
+| | .. NOTE:: |
+| | |
+| | ``aarch64`` nodes should run an ``UEFI`` |
+| | compatible firmware with PXE support |
+| | (e.g. ``EDK2``). |
++------------------+------------------------------------------------------+
+| **CPU** | Minimum 1 socket with Virtualization support |
++------------------+------------------------------------------------------+
+| **RAM** | Minimum 16GB/server (Depending on VNF work load) |
++------------------+------------------------------------------------------+
+| **Disk** | Minimum 256GB 10kRPM spinning disks |
++------------------+------------------------------------------------------+
+| **Networks** | Minimum 4 |
+| | |
+| | - 3 VLANs (``public``, ``mgmt``, ``private``) - |
+| | can be a mix of tagged/native |
+| | |
+| | - 1 Un-Tagged VLAN for PXE Boot - |
+| | ``PXE/admin`` Network |
+| | |
+| | .. NOTE:: |
+| | |
+| | These can be allocated to a single NIC |
+| | or spread out over multiple NICs. |
++------------------+------------------------------------------------------+
+| **Power mgmt** | All targets need to have power management tools that |
+| | allow rebooting the hardware (e.g. ``IPMI``). |
++------------------+------------------------------------------------------+
+
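+.. TIP::
+
+   Power management reachability can be verified from the ``jumpserver``
+   before starting a deployment. The command below is only an illustrative
+   sketch using ``ipmitool``; the ``BMC`` address and credentials are
+   placeholders to be replaced with the values collected for the POD:
+
+   .. code-block:: console
+
+      jenkins@jumpserver:~$ ipmitool -I lanplus -H 192.168.0.10 \
+                                     -U admin -P password chassis power status
+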
+Hardware Requirements for ``hybrid`` (``baremetal`` + ``virtual``) Deploys
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The following minimum hardware requirements must be met for the ``hybrid``
+installation of ``Gambia`` using Fuel:
+
++------------------+------------------------------------------------------+
+| **HW Aspect** | **Requirement** |
+| | |
++==================+======================================================+
+| **1 Jumpserver** | A physical node (also called Foundation Node) that |
+| | hosts the Salt Master container, MaaS VM and |
+| | each of the virtual nodes defined in ``PDF`` |
++------------------+------------------------------------------------------+
+| **# of nodes** | .. NOTE:: |
+| | |
+| | Depends on ``PDF`` configuration. |
+| | |
+| | If the control plane is virtualized, minimum |
+| | baremetal requirements are: |
+| | |
+| | - 2 Compute nodes |
+| | |
+| | If the computes are virtualized, minimum |
+| | baremetal requirements are: |
+| | |
+| | - 3 KVM servers which will run all the controller |
+| | services |
+| | |
+| | .. WARNING:: |
+| | |
+| | ``kvm01``, ``kvm02``, ``kvm03`` nodes and the |
+| | ``jumpserver`` must have the same architecture |
+| | (either ``x86_64`` or ``aarch64``). |
+| | |
+| | .. NOTE:: |
+| | |
+| | ``aarch64`` nodes should run an ``UEFI`` |
+| | compatible firmware with PXE support |
+| | (e.g. ``EDK2``). |
++------------------+------------------------------------------------------+
+| **CPU** | Minimum 1 socket with Virtualization support |
++------------------+------------------------------------------------------+
+| **RAM** | Minimum 16GB/server (Depending on VNF work load) |
++------------------+------------------------------------------------------+
+| **Disk** | Minimum 256GB 10kRPM spinning disks |
++------------------+------------------------------------------------------+
+| **Networks** | Same as for ``baremetal`` deployments |
++------------------+------------------------------------------------------+
+| **Power mgmt** | Same as for ``baremetal`` deployments |
++------------------+------------------------------------------------------+
-===============================
Help with Hardware Requirements
-===============================
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Calculate hardware requirements:
-For information on compatible hardware types available for use,
-please see `Fuel OpenStack Hardware Compatibility List <https://www.mirantis.com/software/hardware-compatibility/>`_
-
When choosing the hardware on which you will deploy your OpenStack
environment, you should think about:
-- CPU -- Consider the number of virtual machines that you plan to deploy in your cloud environment and the CPUs per virtual machine.
+- CPU -- Consider the number of virtual machines that you plan to deploy in
+ your cloud environment and the CPUs per virtual machine.
-- Memory -- Depends on the amount of RAM assigned per virtual machine and the controller node.
+- Memory -- Depends on the amount of RAM assigned per virtual machine and the
+ controller node.
-- Storage -- Depends on the local drive space per virtual machine, remote volumes that can be attached to a virtual machine, and object storage.
+- Storage -- Depends on the local drive space per virtual machine, remote
+ volumes that can be attached to a virtual machine, and object storage.
-- Networking -- Depends on the Choose Network Topology, the network bandwidth per virtual machine, and network storage.
+- Networking -- Depends on the chosen network topology, the network bandwidth
+ per virtual machine, and network storage.
-================================================
-Top of the Rack (TOR) Configuration Requirements
-================================================
+Top of the Rack (``TOR``) Configuration Requirements
+====================================================
The switching infrastructure provides connectivity for the OPNFV
infrastructure operations, tenant networks (East/West) and provider
connectivity (North/South); it also provides needed connectivity for
the Storage Area Network (SAN).
+
To avoid traffic congestion, it is strongly suggested that three
physically separated networks are used, that is: 1 physical network
for administration and control, one physical network for tenant private
and public networks, and one physical network for SAN.
+
The switching connectivity can (but does not need to) be fully redundant,
in such case it comprises a redundant 10GE switch pair for each of the
three physically separated networks.
-The physical TOR switches are **not** automatically configured from
-the Fuel OPNFV reference platform. All the networks involved in the OPNFV
-infrastructure as well as the provider networks and the private tenant
-VLANs needs to be manually configured.
+.. WARNING::
-Manual configuration of the Fraser hardware platform should
-be carried out according to the `OPNFV Pharos Specification
-<https://wiki.opnfv.org/display/pharos/Pharos+Specification>`_.
+ The physical ``TOR`` switches are **not** automatically configured from
+ the OPNFV Fuel reference platform. All the networks involved in the OPNFV
+ infrastructure as well as the provider networks and the private tenant
+ VLANs need to be manually configured.
+
+Manual configuration of the ``Gambia`` hardware platform should
+be carried out according to the `OPNFV Pharos Specification`_.
-============================
OPNFV Software Prerequisites
============================
-The Jumpserver node should be pre-provisioned with an operating system,
-according to the Pharos specification. Relevant network bridges should
-also be pre-configured (e.g. admin_br, mgmt_br, public_br).
+.. NOTE::
-- The admin bridge (admin_br) is mandatory for the baremetal nodes PXE booting during Fuel installation.
-- The management bridge (mgmt_br) is required for testing suites (e.g. functest/yardstick), it is
- suggested to pre-configure it for debugging purposes.
-- The public bridge (public_br) is also nice to have for debugging purposes, but not mandatory.
+ All prerequisites described in this chapter apply to the ``jumpserver``
+ node.
-The user running the deploy script on the Jumpserver should belong to ``sudo`` and ``libvirt`` groups,
-and have passwordless sudo access.
+OS Distribution Support
+~~~~~~~~~~~~~~~~~~~~~~~
-The following example adds the groups to the user ``jenkins``
+The Jumpserver node should be pre-provisioned with an operating system,
+according to the `OPNFV Pharos specification`_.
-.. code-block:: bash
+OPNFV Fuel has been validated by CI using the following distributions
+installed on the Jumpserver:
- $ sudo usermod -aG sudo jenkins
- $ sudo usermod -aG libvirt jenkins
- $ reboot
- $ groups
- jenkins sudo libvirt
+- ``CentOS 7`` (recommended by Pharos specification);
+- ``Ubuntu Xenial 16.04``;
- $ sudo visudo
- ...
- %jenkins ALL=(ALL) NOPASSWD:ALL
+.. TOPIC:: ``aarch64`` notes
-The folder containing the temporary deploy artifacts (``/home/jenkins/tmpdir`` in the examples below)
-needs to have mask 777 in order for libvirt to be able to use them.
+ For an ``aarch64`` Jumpserver, the ``libvirt`` minimum required
+ version is ``3.x``, with ``3.5`` or newer highly recommended.
-.. code-block:: bash
+ .. TIP::
- $ mkdir -p -m 777 /home/jenkins/tmpdir
+ ``CentOS 7`` (``aarch64``) distro provided packages are already new
+ enough.
-For an AArch64 Jumpserver, the ``libvirt`` minimum required version is 3.x, 3.5 or newer highly recommended.
-While not mandatory, upgrading the kernel and QEMU on the Jumpserver is also highly recommended
-(especially on AArch64 Jumpservers).
+ .. WARNING::
-For CentOS 7.4 (AArch64), distro provided packages are already new enough.
-For Ubuntu 16.04 (arm64), distro packages are too old and 3rd party repositories should be used.
-For convenience, Armband provides a DEB repository holding all the required packages.
+ For ``Ubuntu 16.04`` (``arm64``), distro packages are too old and 3rd party
+ repositories should be used.
-To add and enable the Armband repository on an Ubuntu 16.04 system,
-create a new sources list file ``/apt/sources.list.d/armband.list`` with the following contents:
+ For convenience, Armband provides a DEB repository holding all the
+ required packages.
-.. code-block:: bash
+ To add and enable the Armband repository on an Ubuntu 16.04 system,
+ create a new sources list file ``/apt/sources.list.d/armband.list``
+ with the following contents:
- $ cat /etc/apt/sources.list.d/armband.list
- //for OpenStack Queens release
- deb http://linux.enea.com/mcp-repos/queens/xenial queens-armband main
+ .. code-block:: console
- $ apt-get update
+ jenkins@jumpserver:~$ cat /etc/apt/sources.list.d/armband.list
+ deb http://linux.enea.com/mcp-repos/queens/xenial queens-armband main
-Fuel@OPNFV has been validated by CI using the following distributions
-installed on the Jumpserver:
+ jenkins@jumpserver:~$ sudo apt-key adv --keyserver keys.gnupg.net \
+ --recv 798AB1D1
+ jenkins@jumpserver:~$ sudo apt-get update
-- CentOS 7 (recommended by Pharos specification);
-- Ubuntu Xenial;
+OS Distribution Packages
+~~~~~~~~~~~~~~~~~~~~~~~~
-.. WARNING::
+By default, the ``deploy.sh`` script will automatically install the required
+distribution package dependencies on the Jumpserver, so the end user does
+not have to manually install them before starting the deployment.
- The install script expects ``libvirt`` to be already running on the Jumpserver.
- In case ``libvirt`` packages are missing, the script will install them; but
- depending on the OS distribution, the user might have to start the ``libvirtd``
- service manually, then run the deploy script again. Therefore, it
- is recommended to install libvirt-bin explicitly on the Jumpserver before the deployment.
+This includes Python, QEMU, libvirt etc.
-.. NOTE::
+.. SEEALSO::
- It is also recommended to install the newer kernel on the Jumpserver before the deployment.
+ To disable automatic package installation (and/or upgrade) during
+ deployment, check out the ``-P`` deploy argument.
.. WARNING::
- The install script will automatically install the rest of required distro package
- dependencies on the Jumpserver, unless explicitly asked not to (via ``-P`` deploy arg).
- This includes Python, QEMU, libvirt etc.
+ The install script expects ``libvirt`` to be already running on the
+ Jumpserver.
-.. WARNING::
+In case ``libvirt`` packages are missing, the script will install them; but
+depending on the OS distribution, the user might have to start the
+``libvirt`` daemon service manually, then run the deploy script again.
- The install script will alter Jumpserver sysconf and disable ``net.bridge.bridge-nf-call``.
+Therefore, it is recommended to install ``libvirt`` explicitly on the
+Jumpserver before the deployment.
-.. code-block:: bash
+While not mandatory, upgrading the kernel on the Jumpserver is also highly
+recommended.
- $ apt-get install linux-image-generic-hwe-16.04-edge libvirt-bin
+.. code-block:: console
+ jenkins@jumpserver:~$ sudo apt-get install \
+ linux-image-generic-hwe-16.04-edge libvirt-bin
+ jenkins@jumpserver:~$ sudo reboot
-==========================================
-OPNFV Software Installation and Deployment
-==========================================
+User Requirements
+~~~~~~~~~~~~~~~~~
-This section describes the process of installing all the components needed to
-deploy the full OPNFV reference platform stack across a server cluster.
+The user running the deploy script on the Jumpserver should belong to
+``sudo`` and ``libvirt`` groups, and have passwordless sudo access.
-The installation is done with Mirantis Cloud Platform (MCP), which is based on
-a reclass model. This model provides the formula inputs to Salt, to make the deploy
-automatic based on deployment scenario.
-The reclass model covers:
+.. NOTE::
- - Infrastructure node definition: Salt Master node (cfg01) and MaaS node (mas01)
- - OpenStack node definition: Controller nodes (ctl01, ctl02, ctl03) and Compute nodes (cmp001, cmp002)
- - Infrastructure components to install (software packages, services etc.)
- - OpenStack components and services (rabbitmq, galera etc.), as well as all configuration for them
+ Throughout this documentation, we will use the ``jenkins`` username for
+ this role.
+The following example adds the groups to the user ``jenkins``:
-Automatic Installation of a Virtual POD
-=======================================
+.. code-block:: console
-For virtual deploys all the targets are VMs on the Jumpserver. The deploy script will:
+ jenkins@jumpserver:~$ sudo usermod -aG sudo jenkins
+ jenkins@jumpserver:~$ sudo usermod -aG libvirt jenkins
+ jenkins@jumpserver:~$ sudo reboot
+ jenkins@jumpserver:~$ groups
+ jenkins sudo libvirt
- - Create a Salt Master VM on the Jumpserver which will drive the installation
- - Create the bridges for networking with virsh (only if a real bridge does not already exist for a given network)
- - Install OpenStack on the targets
- - Leverage Salt to install & configure OpenStack services
+ jenkins@jumpserver:~$ sudo visudo
+ ...
+ %jenkins ALL=(ALL) NOPASSWD:ALL
-.. figure:: img/fuel_virtual.png
- :align: center
- :alt: Fuel@OPNFV Virtual POD Network Layout Examples
+Local Artifact Storage
+~~~~~~~~~~~~~~~~~~~~~~
+
+The folder containing the temporary deploy artifacts (``/home/jenkins/tmpdir``
+in the examples below) needs to have mask ``777`` in order for ``libvirt`` to
+be able to use them.
+
+.. code-block:: console
+
+ jenkins@jumpserver:~$ mkdir -p -m 777 /home/jenkins/tmpdir
+
+Network Configuration
+~~~~~~~~~~~~~~~~~~~~~
+
+Relevant Linux bridges should also be pre-configured for certain networks,
+depending on the type of the deployment.
+
++------------+---------------+----------------------------------------------+
+| Network | Linux Bridge | Linux Bridge necessity based on deploy type |
+| | +--------------+---------------+---------------+
+| | | ``virtual`` | ``baremetal`` | ``hybrid`` |
++============+===============+==============+===============+===============+
+| PXE/admin | ``admin_br`` | absent | present | present |
++------------+---------------+--------------+---------------+---------------+
+| management | ``mgmt_br`` | optional | optional, | optional, |
+| | | | recommended, | recommended, |
+| | | | required for | required for |
+| | | | ``functest``, | ``functest``, |
+| | | | ``yardstick`` | ``yardstick`` |
++------------+---------------+--------------+---------------+---------------+
+| internal | ``int_br`` | optional | optional | present |
++------------+---------------+--------------+---------------+---------------+
+| public | ``public_br`` | optional | optional, | optional, |
+| | | | recommended, | recommended, |
+| | | | useful for | useful for |
+| | | | debugging | debugging |
++------------+---------------+--------------+---------------+---------------+
+
+.. TIP::
+
+ IP addresses should be assigned to the created bridge interfaces (not
+ to one of their ports).
- Fuel@OPNFV Virtual POD Network Layout Examples
+.. WARNING::
- +-----------------------+------------------------------------------------------------------------+
- | cfg01 | Salt Master VM |
- +-----------------------+------------------------------------------------------------------------+
- | ctl01 | Controller VM |
- +-----------------------+------------------------------------------------------------------------+
- | cmp001/cmp002 | Compute VMs |
- +-----------------------+------------------------------------------------------------------------+
- | gtw01 | Gateway VM with neutron services (dhcp agent, L3 agent, metadata, etc) |
- +-----------------------+------------------------------------------------------------------------+
- | odl01 | VM on which ODL runs (for scenarios deployed with ODL) |
- +-----------------------+------------------------------------------------------------------------+
+ ``PXE/admin`` bridge (``admin_br``) **must** have an IP address.
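+
+.. TIP::
+
+   As an illustrative sketch only, a non-persistent ``admin_br`` could be
+   created on the ``jumpserver`` using ``iproute2``; the member port
+   ``eth1`` and the address ``192.168.11.1/24`` are placeholders that
+   depend on the POD at hand, and a persistent configuration via the
+   distribution's own network configuration mechanism is preferable:
+
+   .. code-block:: console
+
+      jenkins@jumpserver:~$ sudo ip link add name admin_br type bridge
+      jenkins@jumpserver:~$ sudo ip link set eth1 master admin_br
+      jenkins@jumpserver:~$ sudo ip addr add 192.168.11.1/24 dev admin_br
+      jenkins@jumpserver:~$ sudo ip link set eth1 up
+      jenkins@jumpserver:~$ sudo ip link set admin_br up
+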
+Changes ``deploy.sh`` Will Perform to Jumpserver OS
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-In this figure there are examples of two virtual deploys:
- - Jumphost 1 has only virsh bridges, created by the deploy script
- - Jumphost 2 has a mix of Linux and virsh bridges; When Linux bridge exists for a specified network,
- the deploy script will skip creating a virsh bridge for it
+.. WARNING::
-.. NOTE::
+ The install script will alter Jumpserver sysconf and disable
+ ``net.bridge.bridge-nf-call``.
- A virtual network ``mcpcontrol`` is always created for initial connection of the VMs on Jumphost.
+.. WARNING::
+ The install script will automatically install and/or upgrade the
+ required distribution package dependencies on the Jumpserver,
+ unless explicitly asked not to (via the ``-P`` deploy arg).
-Automatic Installation of a Baremetal POD
-=========================================
+OPNFV Software Configuration (``XDF``)
+======================================
-The baremetal installation process can be done by editing the information about
-hardware and environment in the reclass files, or by using the files Pod Descriptor
-File (PDF) and Installer Descriptor File (IDF) as described in the OPNFV Pharos project.
-These files contain all the information about the hardware and network of the deployment
-that will be fed to the reclass model during deployment.
+.. versionadded:: 5.0.0
+.. versionchanged:: 7.0.0
-The installation is done automatically with the deploy script, which will:
+Unlike the old approach based on OpenStack Fuel, OPNFV Fuel no longer has a
+graphical user interface for configuring the environment, but instead
+switched to OPNFV specific descriptor files that we will call generically
+``XDF``:
- - Create a Salt Master VM on the Jumpserver which will drive the installation
- - Create a MaaS Node VM on the Jumpserver which will provision the targets
- - Install OpenStack on the targets
- - Leverage MaaS to provision baremetal nodes with the operating system
- - Leverage Salt to configure the operating system on the baremetal nodes
- - Leverage Salt to install & configure OpenStack services
+- ``PDF`` (POD Descriptor File) provides an abstraction of the target POD
+ with all its hardware characteristics and required parameters;
+- ``IDF`` (Installer Descriptor File) extends the ``PDF`` with POD related
+ parameters required by the OPNFV Fuel installer;
+- ``SDF`` (Scenario Descriptor File, **not** yet adopted) will later
+ replace embedded scenario definitions, describing the roles and layout of
+ the cluster environment for a given reference architecture;
-.. figure:: img/fuel_baremetal.png
- :align: center
- :alt: Fuel@OPNFV Baremetal POD Network Layout Example
-
- Fuel@OPNFV Baremetal POD Network Layout Example
-
- +-----------------------+---------------------------------------------------------+
- | cfg01 | Salt Master VM |
- +-----------------------+---------------------------------------------------------+
- | mas01 | MaaS Node VM |
- +-----------------------+---------------------------------------------------------+
- | kvm01..03 | Baremetals which hold the VMs with controller functions |
- +-----------------------+---------------------------------------------------------+
- | cmp001/cmp002 | Baremetal compute nodes |
- +-----------------------+---------------------------------------------------------+
- | prx01/prx02 | Proxy VMs for Nginx |
- +-----------------------+---------------------------------------------------------+
- | msg01..03 | RabbitMQ Service VMs |
- +-----------------------+---------------------------------------------------------+
- | dbs01..03 | MySQL service VMs |
- +-----------------------+---------------------------------------------------------+
- | mdb01..03 | Telemetry VMs |
- +-----------------------+---------------------------------------------------------+
- | odl01 | VM on which ODL runs (for scenarios deployed with ODL) |
- +-----------------------+---------------------------------------------------------+
- | Tenant VM | VM running in the cloud |
- +-----------------------+---------------------------------------------------------+
-
-In the baremetal deploy all bridges but "mcpcontrol" are Linux bridges. For the Jumpserver, it is
-required to pre-configure at least the admin_br bridge for the PXE/Admin.
-For the targets, the bridges are created by the deploy script.
+.. TIP::
+
+ For ``virtual`` deployments, if the ``public`` network will be accessed
+ from outside the ``jumpserver`` node, a custom ``PDF``/``IDF`` pair is
+ required for customizing ``idf.net_config.public`` and
+ ``idf.fuel.jumphost.bridges.public``.
.. NOTE::
- A virtual network ``mcpcontrol`` is always created for initial connection of the VMs on Jumphost.
+ For OPNFV CI PODs, as well as simple (no ``public`` bridge) ``virtual``
+ deployments, ``PDF``/``IDF`` files are already available in the
+ `pharos git repo`_. They can be used as a reference for user-supplied
+ inputs or to kick off a deployment right away.
++----------+------------------------------------------------------------------+
+| LAB/POD | ``PDF``/``IDF`` availability based on deploy type |
+| +------------------------+--------------------+--------------------+
+| | ``virtual`` | ``baremetal`` | ``hybrid`` |
++==========+========================+====================+====================+
+| OPNFV CI | available in | available in | N/A, as currently |
+| POD | `pharos git repo`_ | `pharos git repo`_ | there are 0 hybrid |
+| | (e.g. | (e.g. ``lf-pod2``, | PODs in OPNFV CI |
+| | ``ericsson-virtual1``) | ``arm-pod5``) | |
++----------+------------------------+--------------------+--------------------+
+| local or | ``user-supplied`` | ``user-supplied`` | ``user-supplied`` |
+| new POD | | | |
++----------+------------------------+--------------------+--------------------+
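+
+For example, a simple ``virtual`` deployment reusing one of the OPNFV CI
+``PDF``/``IDF`` pairs could be kicked off along the lines of the illustrative
+sketch below (lab, POD and scenario names are examples only):
+
+.. code-block:: console
+
+   jenkins@jumpserver:~/fuel$ ci/deploy.sh -l ericsson -p virtual1 \
+                                           -s os-nosdn-nofeature-noha \
+                                           -D |& tee deploy.log
+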
-Steps to Start the Automatic Deploy
-===================================
+.. TIP::
-These steps are common both for virtual and baremetal deploys.
+ Both ``PDF`` and ``IDF`` structures are modelled as ``yaml`` schemas in the
+ `pharos git repo`_, also included as a git submodule in OPNFV Fuel.
-#. Clone the Fuel code from gerrit
+ .. SEEALSO::
- For x86_64
+ - ``mcp/scripts/pharos/config/pdf/pod1.schema.yaml``
+ - ``mcp/scripts/pharos/config/pdf/idf-pod1.schema.yaml``
- .. code-block:: bash
+ Schema files are also used during the initial deployment phase to validate
+ the user-supplied input ``PDF``/``IDF`` files.
- $ git clone https://git.opnfv.org/fuel
- $ cd fuel
+``PDF``
+~~~~~~~
- For aarch64
+The Pod Descriptor File is a hardware description of the POD
+infrastructure. The information is modeled under a ``yaml`` structure.
- .. code-block:: bash
+The hardware description covers the ``jumphost`` node and a set of ``nodes``
+for the cluster target boards. For each node the following characteristics
+are defined:
- $ git clone https://git.opnfv.org/armband
- $ cd armband
+- Node parameters including ``CPU`` features and total memory;
+- A list of available disks;
+- Remote management parameters;
+- Network interfaces list including name, ``MAC`` address, link speed,
+ advanced features;
-#. Checkout the Fraser release
+.. SEEALSO::
- .. code-block:: bash
+ A reference file with the expected ``yaml`` structure is available at:
- $ git checkout opnfv-6.2.1
+ - ``mcp/scripts/pharos/config/pdf/pod1.yaml``
-#. Start the deploy script
+ For more information on ``PDF``, see the `OPNFV PDF Wiki Page`_.
- Besides the basic options, there are other recommended deploy arguments:
+.. WARNING::
- - use ``-D`` option to enable the debug info
- - use ``-S`` option to point to a tmp dir where the disk images are saved. The images will be
- re-used between deploys
- - use ``|& tee`` to save the deploy log to a file
+ The fixed IPs defined in ``PDF`` are ignored by the OPNFV Fuel installer
+ script and it will instead assign addresses based on the network ranges
+ defined in ``IDF``.
+
+ For more details on the way IP addresses are assigned, see
+ :ref:`OPNFV Fuel User Guide <fuel-userguide>`.
+
+``PDF``/``IDF`` Role (hostname) Mapping
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Upcoming ``SDF`` support will introduce a series of possible node roles.
+Until that happens, the role mapping logic is hardcoded, based on node index
+in ``PDF``/``IDF`` (which should also be in sync, i.e. the parameters of the
+``n``-th cluster node defined in ``PDF`` should be the ``n``-th node in
+``IDF`` structures too).
+
++-------------+------------------+----------------------+
+| Node index | ``HA`` scenario | ``noHA`` scenario |
++=============+==================+======================+
+| 1st | ``kvm01`` | ``ctl01`` |
++-------------+------------------+----------------------+
+| 2nd | ``kvm02`` | ``gtw01`` |
++-------------+------------------+----------------------+
+| 3rd | ``kvm03`` | ``odl01``/``unused`` |
++-------------+------------------+----------------------+
+| 4th, | ``cmp001``, | ``cmp001``, |
+| 5th, | ``cmp002``, | ``cmp002``, |
+| ... | ``...`` | ``...`` |
++-------------+------------------+----------------------+
+
+.. TIP::
+
+ To switch node role(s), simply reorder the node definitions in
+ ``PDF``/``IDF`` (make sure to keep them in sync).
+
+``IDF``
+~~~~~~~
+
+The Installer Descriptor File extends the ``PDF`` with POD related parameters
+required by the installer. This information may differ per each installer type
+and it is not considered part of the POD infrastructure.
+
+``idf.*`` Overview
+------------------
+
+The ``IDF`` file must be named after the ``PDF`` it attaches to, with the
+prefix ``idf-``.
+
+.. SEEALSO::
+
+ A reference file with the expected ``yaml`` structure is available at:
+
+ - ``mcp/scripts/pharos/config/pdf/idf-pod1.yaml``
+
+The file follows a ``yaml`` structure and at least two sections
+(``idf.net_config`` and ``idf.fuel``) are expected.
+
+The ``idf.fuel`` section defines several sub-sections required by the OPNFV
+Fuel installer:
+
+- ``jumphost``: List of bridge names for each network on the Jumpserver;
+- ``network``: List of device name and bus address info of all the target nodes.
+ The order must be aligned with the order defined in the ``PDF`` file.
+ The OPNFV Fuel installer relies on the ``IDF`` model to set up all node NICs
+ by defining the expected device name and bus address;
+- ``maas``: Defines the target nodes commission timeout and deploy timeout;
+- ``reclass``: Defines compute parameter tuning, including huge pages, ``CPU``
+ pinning and other ``DPDK`` settings;
+
+.. code-block:: yaml
+
+ ---
+ idf:
+   version: 0.1                          # fixed, the only supported version (mandatory)
+   net_config:                           # POD network configuration overview (mandatory)
+     oob: ...                            # mandatory
+     admin: ...                          # mandatory
+     mgmt: ...                           # mandatory
+     storage: ...                        # mandatory
+     private: ...                        # mandatory
+     public: ...                         # mandatory
+   fuel:                                 # OPNFV Fuel specific section (mandatory)
+     jumphost:                           # OPNFV Fuel jumpserver bridge configuration (mandatory)
+       bridges:                          # Bridge name mapping (mandatory)
+         admin: 'admin_br'               # <PXE/admin bridge name> or ~
+         mgmt: 'mgmt_br'                 # <mgmt bridge name> or ~
+         private: ~                      # <private bridge name> or ~
+         public: 'public_br'             # <public bridge name> or ~
+       trunks: ...                       # Trunked networks (optional)
+     maas:                               # MaaS timeouts (optional)
+       timeout_comissioning: 10          # commissioning timeout in minutes
+       timeout_deploying: 15             # deploy timeout in minutes
+     network:                            # Cluster nodes network (mandatory)
+       ntp_strata_host1: 1.pool.ntp.org  # NTP1 (optional)
+       ntp_strata_host2: 0.pool.ntp.org  # NTP2 (optional)
+       node: ...                         # List of per-node cfg (mandatory)
+     reclass:                            # Additional params (mandatory)
+       node: ...                         # List of per-node cfg (mandatory)
+
+``idf.net_config``
+------------------
+
+``idf.net_config`` was introduced as a mechanism to map all the usual cluster
+networks (internal and provider networks, e.g. ``mgmt``) to their ``VLAN``
+tags, ``CIDR`` and a physical interface index (used to match networks to
+interface names, like ``eth0``, on the cluster nodes).
- .. code-block:: bash
- $ ci/deploy.sh -l <lab_name> \
- -p <pod_name> \
- -b <URI to configuration repo containing the PDF file> \
- -s <scenario> \
- -D \
- -S <Storage directory for disk images> |& tee deploy.log
+.. WARNING::
-.. NOTE::
+ The mapping between one network segment (e.g. ``mgmt``) and its ``CIDR``/
+ ``VLAN`` is not configurable on a per-node basis, but instead applies to
+ all the nodes in the cluster.
+
+For each network, the following parameters are currently supported:
+
++--------------------------+--------------------------------------------------+
+| ``idf.net_config.*`` key | Details |
++==========================+==================================================+
+| ``interface`` | The index of the interface to use for this net. |
+| | For each cluster node (if network is present), |
+| | OPNFV Fuel will determine the underlying physical|
+| | interface by picking the element at index |
+| | ``interface`` from the list of network interface |
+| | names defined in |
+| | ``idf.fuel.network.node.*.interfaces``. |
+| | Required for each network. |
+| | |
+| | .. NOTE:: |
+| | |
+| | The interface index should be the |
+| | same on all cluster nodes. This can be |
+| | achieved by ordering them accordingly in |
+| | ``PDF``/``IDF``. |
++--------------------------+--------------------------------------------------+
+| ``vlan`` | ``VLAN`` tag (integer) or the string ``native``. |
+| | Required for each network. |
++--------------------------+--------------------------------------------------+
+| ``ip-range`` | When specified, all cluster IPs dynamically |
+| | allocated by OPNFV Fuel for that network will be |
+| | assigned inside this range. |
+| | Required for ``oob``, optional for others. |
+| | |
+| | .. NOTE:: |
+| | |
+| | For now, only range start address is used. |
++--------------------------+--------------------------------------------------+
+| ``network`` | Network segment address. |
+| | Required for each network, except ``oob``. |
++--------------------------+--------------------------------------------------+
+| ``mask`` | Network segment mask. |
+| | Required for each network, except ``oob``. |
++--------------------------+--------------------------------------------------+
+| ``gateway`` | Gateway IP address. |
+| | Required for ``public``, N/A for others. |
++--------------------------+--------------------------------------------------+
+| ``dns`` | List of DNS IP addresses. |
+| | Required for ``public``, N/A for others. |
++--------------------------+--------------------------------------------------+
+
+Sample ``public`` network configuration block:
+
+.. code-block:: yaml
+
+ idf:
+   net_config:
+     public:
+       interface: 1
+       vlan: native
+       network: 10.0.16.0
+       ip-range: 10.0.16.100-10.0.16.253
+       mask: 24
+       gateway: 10.0.16.254
+       dns:
+         - 8.8.8.8
+         - 8.8.4.4
+
+.. TOPIC:: ``hybrid`` POD notes
+
+ Interface indexes must be the same for all nodes, which is problematic
+ when mixing ``virtual`` nodes (where all interfaces were untagged
+ so far) with ``baremetal`` nodes (where interfaces usually carry
+ tagged VLANs).
+
+ .. TIP::
+
+ To achieve this, a special ``jumpserver`` network layout is used:
+ ``mgmt``, ``storage``, ``private``, ``public`` are trunked together
+ in a single ``trunk`` bridge:
+
+ - without decapsulating them (if they are also tagged on ``baremetal``);
+ a ``trunk.<vlan_tag>`` interface should be created on the
+ ``jumpserver`` for each tagged VLAN so the kernel won't drop the
+ packets;
+ - by decapsulating them first (if they are also untagged on
+ ``baremetal`` nodes);
+
+ The ``trunk`` bridge is then used for all bridges OPNFV Fuel
+ is aware of in ``idf.fuel.jumphost.bridges``, e.g. for a ``trunk`` where
+ only ``mgmt`` network is not decapsulated:
+
+ .. code-block:: yaml
+
+   idf:
+     fuel:
+       jumphost:
+         bridges:
+           admin: 'admin_br'
+           mgmt: 'trunk'
+           private: 'trunk'
+           public: 'trunk'
+         trunks:
+           # mgmt network is not decapsulated for jumpserver infra VMs,
+           # to align with the VLAN configuration of baremetal nodes.
+           mgmt: True
- The deployment uses the OPNFV Pharos project as input (PDF and IDF files)
- for hardware and network configuration of all current OPNFV PODs.
- When deploying a new POD, one can pass the ``-b`` flag to the deploy script to override
- the path for the labconfig directory structure containing the PDF and IDF (see below).
+.. WARNING::
-Examples
---------
-#. Virtual deploy
+ The Linux kernel limits the name of network interfaces to 16 characters.
+ Extra care is required when choosing bridge names, so appending the
+ ``VLAN`` tag won't lead to an interface name length exceeding that limit.
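+
+.. TIP::
+
+   As an illustrative sketch, a tagged network could be exposed on the
+   ``jumpserver`` ``trunk`` bridge by creating a ``VLAN`` sub-interface
+   (the ``300`` tag below is a placeholder):
+
+   .. code-block:: console
+
+      jenkins@jumpserver:~$ sudo ip link add link trunk name trunk.300 \
+                                        type vlan id 300
+      jenkins@jumpserver:~$ sudo ip link set trunk.300 up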
+
+``idf.fuel.network``
+--------------------
+
+``idf.fuel.network`` allows mapping the cluster networks (e.g. ``mgmt``) to
+their physical interface name (e.g. ``eth0``) and bus address on the cluster
+nodes.
+
+``idf.fuel.network.node`` should be a list with the same number (and order) of
+elements as the cluster nodes defined in ``PDF``, e.g. the second cluster node
+in ``PDF`` will use the interface name and bus address defined in the second
+list element.
+
+Below is a sample configuration block for a single node with two interfaces:
+
+.. code-block:: yaml
+
+ idf:
+   fuel:
+     network:
+       node:
+         # Ordered-list, index should be in sync with node index in PDF
+         - interfaces:
+             # Ordered-list, index should be in sync with interface index
+             # in PDF
+             - 'ens3'
+             - 'ens4'
+           busaddr:
+             # Bus-info reported by `ethtool -i ethX`
+             - '0000:00:03.0'
+             - '0000:00:04.0'
+
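+The interface names and bus addresses above can be collected on each target
+node beforehand. The following is only an illustrative sketch (the node
+prompt and the ``ens3`` interface name are placeholders):
+
+.. code-block:: console
+
+   user@node:~$ sudo ethtool -i ens3 | grep bus-info
+   bus-info: 0000:00:03.0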
+
+``idf.fuel.reclass``
+--------------------
+
+``idf.fuel.reclass`` provides a way of overriding default values in the
+reclass cluster model.
+
+This currently covers strictly compute parameter tuning, including huge
+pages, ``CPU`` pinning and other ``DPDK`` settings.
+
+``idf.fuel.reclass.node`` should be a list with the same number (and order) of
+elements as the cluster nodes defined in ``PDF``, e.g. the second cluster node
+in ``PDF`` will use the parameters defined in the second list element.
+
+The following parameters are currently supported:
+
++---------------------------------+-------------------------------------------+
+| ``idf.fuel.reclass.node.*`` | Details |
+| key | |
++=================================+===========================================+
+| ``nova_cpu_pinning`` | List of CPU cores nova will be pinned to. |
+| | |
+| | .. WARNING:: |
+| | |
+| | Currently disabled. |
++---------------------------------+-------------------------------------------+
+| ``compute_hugepages_size`` | Size of each persistent huge page. |
+| | |
+| | Usual values are ``2M`` and ``1G``. |
++---------------------------------+-------------------------------------------+
+| ``compute_hugepages_count`` | Total number of persistent huge pages. |
++---------------------------------+-------------------------------------------+
+| ``compute_hugepages_mount`` | Mount point to use for huge pages. |
++---------------------------------+-------------------------------------------+
+| ``compute_kernel_isolcpu`` | List of CPU cores to be isolated from the |
+| | Linux scheduler. |
++---------------------------------+-------------------------------------------+
+| ``compute_dpdk_driver`` | Kernel module to provide userspace I/O |
+| | support. |
++---------------------------------+-------------------------------------------+
+| ``compute_ovs_pmd_cpu_mask`` | Hexadecimal mask of CPUs to run ``DPDK`` |
+| | Poll-mode drivers. |
++---------------------------------+-------------------------------------------+
+| ``compute_ovs_dpdk_socket_mem`` | Amount of hugepage memory in ``MB`` to    |
+|                                 | be used by the ``OVS-DPDK`` daemon for    |
+|                                 | each ``NUMA`` node. One element per       |
+|                                 | ``NUMA`` node, separated by commas.       |
++---------------------------------+-------------------------------------------+
+| ``compute_ovs_dpdk_lcore_mask`` | Hexadecimal mask of ``DPDK`` lcore |
+| | parameter used to run ``DPDK`` processes. |
++---------------------------------+-------------------------------------------+
+| ``compute_ovs_memory_channels`` | Number of memory channels to be used. |
++---------------------------------+-------------------------------------------+
+| ``dpdk0_driver`` | NIC driver to use for physical network |
+| | interface. |
++---------------------------------+-------------------------------------------+
+| ``dpdk0_n_rxq`` | Number of ``RX`` queues. |
++---------------------------------+-------------------------------------------+
+
+Sample ``compute_params`` configuration block (for a single node):
+
+.. code-block:: yaml
+
+ idf:
+ fuel:
+ reclass:
+ node:
+ - compute_params:
+ common: &compute_params_common
+ compute_hugepages_size: 2M
+ compute_hugepages_count: 2048
+ compute_hugepages_mount: /mnt/hugepages_2M
+ dpdk:
+ <<: *compute_params_common
+ compute_dpdk_driver: uio
+ compute_ovs_pmd_cpu_mask: "0x6"
+ compute_ovs_dpdk_socket_mem: "1024"
+ compute_ovs_dpdk_lcore_mask: "0x8"
+ compute_ovs_memory_channels: "2"
+ dpdk0_driver: igb_uio
+ dpdk0_n_rxq: 2
+
+``SDF``
+~~~~~~~
+
+Scenario Descriptor Files are not yet implemented in the OPNFV Fuel ``Gambia``
+release.
+
+Instead, embedded OPNFV Fuel scenario files are locally available in
+``mcp/config/scenario``.
- To start a virtual deployment, it is required to have the **virtual** keyword
- while specifying the pod name to the installer script.
+OPNFV Software Installation and Deployment
+==========================================
- It will create the required bridges and networks, configure Salt Master and
- install OpenStack.
+This section describes the process of installing all the components needed to
+deploy the full OPNFV reference platform stack across a server cluster.
- .. code-block:: bash
+Deployment Types
+~~~~~~~~~~~~~~~~
- $ ci/deploy.sh -l ericsson \
- -p virtual3 \
- -s os-nosdn-nofeature-noha \
- -D \
- -S /home/jenkins/tmpdir |& tee deploy.log
+.. WARNING::
- Once the deployment is complete, the OpenStack Dashboard, Horizon, is
- available at ``http://<controller VIP>:8078``
- The administrator credentials are **admin** / **opnfv_secret**.
+   OPNFV releases prior to ``Gambia`` relied on the ``virtual`` keyword
+   being part of the POD name (e.g. ``ericsson-virtual2``) to set the
+   deployment type to ``virtual``; otherwise ``baremetal`` was implied.
- A simple (and generic) sample PDF/IDF set of configuration files may
- be used for virtual deployments by setting lab/POD name to ``local-virtual1``.
- This sample configuration is x86_64 specific and hardcodes certain parameters,
- like public network address space, so a dedicated PDF/IDF is highly recommended.
+``Gambia`` and newer releases are more flexible and support a mix of
+``baremetal`` and ``virtual`` nodes, so the deployment type is now
+determined automatically, based on the cluster node types in the ``PDF``:
- .. code-block:: bash
++---------------------------------+-------------------------------------------+
+| ``PDF`` has nodes of type | Deployment type |
++---------------+-----------------+ |
+| ``baremetal`` | ``virtual`` | |
++===============+=================+===========================================+
+| yes | no | ``baremetal`` |
++---------------+-----------------+-------------------------------------------+
+| yes | yes | ``hybrid`` |
++---------------+-----------------+-------------------------------------------+
+| no | yes | ``virtual`` |
++---------------+-----------------+-------------------------------------------+
- $ ci/deploy.sh -l local \
- -p virtual1 \
- -s os-nosdn-nofeature-noha \
- -D \
- -S /home/jenkins/tmpdir |& tee deploy.log
+Based on that, the deployment script will later enable/disable certain extra
+nodes (e.g. ``mas01``) and/or ``STATE`` files (e.g. ``maas``).
-#. Baremetal deploy
+``HA`` vs ``noHA``
+~~~~~~~~~~~~~~~~~~
- A x86 deploy on pod2 from Linux Foundation lab
+High availability of OpenStack services is determined by the scenario name,
+e.g. ``os-nosdn-nofeature-noha`` vs ``os-nosdn-nofeature-ha``.
- .. code-block:: bash
+.. TIP::
- $ ci/deploy.sh -l lf \
- -p pod2 \
- -s os-nosdn-nofeature-ha \
- -D \
- -S /home/jenkins/tmpdir |& tee deploy.log
+ ``HA`` scenarios imply a virtualized control plane (``VCP``) for the
+ OpenStack services running on the 3 ``kvm`` nodes.
- .. figure:: img/lf_pod2.png
- :align: center
- :alt: Fuel@OPNFV LF POD2 Network Layout
+ .. SEEALSO::
- Fuel@OPNFV LF POD2 Network Layout
+ An experimental feature argument (``-N``) is supported by the deploy
+ script for disabling ``VCP``, although it might not be supported by
+      all scenarios and is not being continuously validated by OPNFV CI/CD.
- An aarch64 deploy on pod5 from Arm lab
+.. WARNING::
- .. code-block:: bash
+ ``virtual`` ``HA`` deployments are not officially supported, due to
+ poor performance and various limitations of nested virtualization on
+ both ``x86_64`` and ``aarch64`` architectures.
- $ ci/deploy.sh -l arm \
- -p pod5 \
- -s os-nosdn-nofeature-ha \
- -D \
- -S /home/jenkins/tmpdir |& tee deploy.log
+ .. TIP::
- .. figure:: img/arm_pod5.png
- :align: center
- :alt: Fuel@OPNFV ARM POD5 Network Layout
+ ``virtual`` ``HA`` deployments without ``VCP`` are supported, but
+ highly experimental.
- Fuel@OPNFV ARM POD5 Network Layout
++-------------------------------+-------------------------+-------------------+
+| Feature | ``HA`` scenario | ``noHA`` scenario |
++===============================+=========================+===================+
+| ``VCP`` | yes, | no |
+| (Virtualized Control Plane) | disabled with ``-N`` | |
++-------------------------------+-------------------------+-------------------+
+| OpenStack APIs SSL | yes | no |
++-------------------------------+-------------------------+-------------------+
+| Storage | ``GlusterFS`` | ``NFS`` |
++-------------------------------+-------------------------+-------------------+
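+
+For example, an ``HA`` scenario could be deployed without ``VCP`` by appending
+the experimental ``-N`` argument to the usual deploy command described later
+in this document (lab/POD names below are just placeholders):
+
+.. code-block:: console
+
+   jenkins@jumpserver:~/fuel$ ci/deploy.sh -l <lab_name> \
+                                           -p <pod_name> \
+                                           -s os-nosdn-nofeature-ha \
+                                           -N \
+                                           -D \
+                                           -S /home/jenkins/tmpdir |& tee deploy.log
+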
- Once the deployment is complete, the SaltStack Deployment Documentation is
- available at ``http://<proxy public VIP>:8090``.
+Steps to Start the Automatic Deploy
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
- When deploying a new POD, one can pass the ``-b`` flag to the deploy script to override
- the path for the labconfig directory structure containing the PDF and IDF.
+These steps are common for ``virtual``, ``baremetal`` or ``hybrid`` deploys,
+``x86_64``, ``aarch64`` or ``mixed`` (``x86_64`` and ``aarch64``):
- .. code-block:: bash
+- Clone the OPNFV Fuel code from gerrit
+- Checkout the ``Gambia`` release tag
+- Start the deploy script
- $ ci/deploy.sh -b file://<absolute_path_to_labconfig> \
- -l <lab_name> \
- -p <pod_name> \
- -s <scenario> \
- -D \
- -S <tmp_folder> |& tee deploy.log
+.. NOTE::
- - <absolute_path_to_labconfig> is the absolute path to a local directory, populated
- similar to Pharos, i.e. PDF/IDF reside in ``<absolute_path_to_labconfig>/labs/<lab_name>``
- - <lab_name> is the same as the directory in the path above
- - <pod_name> is the name used for the PDF (``<pod_name>.yaml``) and IDF (``idf-<pod_name>.yaml``) files
+ The deployment uses the OPNFV Pharos project as input (``PDF`` and
+ ``IDF`` files) for hardware and network configuration of all current
+ OPNFV PODs.
+ When deploying a new POD, one may pass the ``-b`` flag to the deploy
+ script to override the path for the labconfig directory structure
+ containing the ``PDF`` and ``IDF`` (``<URI to configuration repo ...>`` is
+ the absolute path to a local or remote directory structure, populated
+   similarly to the `pharos git repo`_, i.e. ``PDF``/``IDF`` reside in a
+ subdirectory called ``labs/<lab_name>``).
+.. code-block:: console
-Pod and Installer Descriptor Files
-==================================
+ jenkins@jumpserver:~$ git clone https://git.opnfv.org/fuel
+ jenkins@jumpserver:~$ cd fuel
+ jenkins@jumpserver:~/fuel$ git checkout opnfv-7.0.0
+ jenkins@jumpserver:~/fuel$ ci/deploy.sh -l <lab_name> \
+ -p <pod_name> \
+ -b <URI to configuration repo containing the PDF/IDF files> \
+ -s <scenario> \
+ -D \
+ -S <Storage directory for deploy artifacts> |& tee deploy.log
-Descriptor files provide the installer with an abstraction of the target pod
-with all its hardware characteristics and required parameters. This information
-is split into two different files:
-Pod Descriptor File (PDF) and Installer Descriptor File (IDF).
+.. TIP::
-The Pod Descriptor File is a hardware description of the pod
-infrastructure. The information is modeled under a yaml structure.
-A reference file with the expected yaml structure is available at
-``mcp/config/labs/local/pod1.yaml``.
+ Besides the basic options, there are other recommended deploy arguments:
-The hardware description is arranged into a main "jumphost" node and a "nodes"
-set for all target boards. For each node the following characteristics
-are defined:
+   - use the ``-D`` option to enable debug information;
+   - use the ``-S`` option to point to a temporary directory where the disk
+     images are saved; the deploy artifacts will be reused on subsequent
+     (re)deployments;
+   - use ``|& tee`` to save the deploy log to a file;
+
+Typical Cluster Examples
+~~~~~~~~~~~~~~~~~~~~~~~~
+
+Common cluster layouts usually fall into one of the cases described below,
+categorized by deployment type (``baremetal``, ``virtual`` or ``hybrid``) and
+high availability (``HA`` or ``noHA``).
-- Node parameters including CPU features and total memory.
-- A list of available disks.
-- Remote management parameters.
-- Network interfaces list including mac address, speed, advanced features and name.
+A simplified overview of the steps ``deploy.sh`` will automatically perform is:
+
+- create a Salt Master Docker container on the jumpserver, which will drive
+ the rest of the installation;
+- ``baremetal`` or ``hybrid`` only: create a ``MaaS`` infrastructure node VM,
+ which will be leveraged using Salt to handle OS provisioning on the
+ ``baremetal`` nodes;
+- leverage Salt to install & configure OpenStack;
.. NOTE::
- The fixed IPs are ignored by the MCP installer script and it will instead
- assign based on the network ranges defined in IDF.
+   A virtual network, ``mcpcontrol``, is always created for the initial
+   connection of the VMs on the Jumphost.
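+
+A quick way to eyeball these infrastructure pieces on the ``jumpserver`` once
+the deploy has started (names and output are only indicative and will differ
+between deployments):
+
+.. code-block:: console
+
+   jenkins@jumpserver:~$ sudo docker ps          # Salt Master (cfg01) container
+   jenkins@jumpserver:~$ sudo virsh list --all   # infrastructure VMs (e.g. mas01)
+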
-The Installer Descriptor File extends the PDF with pod related parameters
-required by the installer. This information may differ per each installer type
-and it is not considered part of the pod infrastructure.
-The IDF file must be named after the PDF with the prefix "idf-". A reference file with the expected
-structure is available at ``mcp/config/labs/local/idf-pod1.yaml``.
-
-The file follows a yaml structure and two sections "net_config" and "fuel" are expected.
-
-The "net_config" section describes all the internal and provider networks
-assigned to the pod. Each used network is expected to have a vlan tag, IP subnet and
-attached interface on the boards. Untagged vlans shall be defined as "native".
-
-The "fuel" section defines several sub-sections required by the Fuel installer:
-
-- jumphost: List of bridge names for each network on the Jumpserver.
-- network: List of device name and bus address info of all the target nodes.
- The order must be aligned with the order defined in PDF file. Fuel installer relies on the IDF model
- to setup all node NICs by defining the expected device name and bus address.
-- maas: Defines the target nodes commission timeout and deploy timeout. (optional)
-- reclass: Defines compute parameter tuning, including huge pages, cpu pinning
- and other DPDK settings. (optional)
-
-The following parameters can be defined in the IDF files under "reclass". Those value will
-overwrite the default configuration values in Fuel repository:
-
-- nova_cpu_pinning: List of CPU cores nova will be pinned to. Currently disabled.
-- compute_hugepages_size: Size of each persistent huge pages. Usual values are '2M' and '1G'.
-- compute_hugepages_count: Total number of persistent huge pages.
-- compute_hugepages_mount: Mount point to use for huge pages.
-- compute_kernel_isolcpu: List of certain CPU cores that are isolated from Linux scheduler.
-- compute_dpdk_driver: Kernel module to provide userspace I/O support.
-- compute_ovs_pmd_cpu_mask: Hexadecimal mask of CPUs to run DPDK Poll-mode drivers.
-- compute_ovs_dpdk_socket_mem: Set of amount huge pages in MB to be used by OVS-DPDK daemon
- taken for each NUMA node. Set size is equal to NUMA nodes count, elements are divided by comma.
-- compute_ovs_dpdk_lcore_mask: Hexadecimal mask of DPDK lcore parameter used to run DPDK processes.
-- compute_ovs_memory_channels: Number of memory channels to be used.
-- dpdk0_driver: NIC driver to use for physical network interface.
-- dpdk0_n_rxq: Number of RX queues.
-
-
-The full description of the PDF and IDF file structure are available as yaml schemas.
-The schemas are defined as a git submodule in Fuel repository. Input files provided
-to the installer will be validated against the schemas.
-
-- ``mcp/scripts/pharos/config/pdf/pod1.schema.yaml``
-- ``mcp/scripts/pharos/config/pdf/idf-pod1.schema.yaml``
+.. WARNING::
-=============
-Release Notes
-=============
+   A single cluster deployment per ``jumpserver`` node is currently
+   supported, regardless of its type (``virtual``, ``baremetal`` or
+   ``hybrid``).
-Please refer to the :ref:`Release Notes <fuel-release-notes-label>` article.
+Once the deployment is complete, the following should be accessible:
-==========
-References
-==========
++---------------+----------------------------------+---------------------------+
+| Resource | ``HA`` scenario | ``noHA`` scenario |
++===============+==================================+===========================+
+| ``Horizon`` | ``https://<prx public VIP>`` | ``http://<ctl VIP>:8078`` |
+| (OpenStack    |                                  |                           |
+| Dashboard) | | |
++---------------+----------------------------------+---------------------------+
+| ``SaltStack`` | ``http://<prx public VIP>:8090`` | N/A |
+| Deployment | | |
+| Documentation | | |
++---------------+----------------------------------+---------------------------+
-OPNFV
+.. SEEALSO::
-1) `OPNFV Home Page <https://www.opnfv.org>`_
-2) `OPNFV documentation <https://docs.opnfv.org>`_
-3) `Software downloads <https://www.opnfv.org/software/download>`_
+ For more details on locating and importing the generated SSL certificate,
+ see :ref:`OPNFV Fuel User Guide <fuel-userguide>`.
-OpenStack
+``virtual`` ``noHA`` POD
+------------------------
-4) `OpenStack Queens Release Artifacts <https://www.openstack.org/software/queens>`_
-5) `OpenStack Documentation <https://docs.openstack.org>`_
+In the following figure there are two generic examples of ``virtual`` deploys,
+each on a separate Jumphost node, both behind the same ``TOR`` switch:
-OpenDaylight
+- Jumphost 1 has only ``libvirt`` managed bridges (created by the deploy
+  script);
+- Jumphost 2 has a mix of Linux (manually created) and ``libvirt`` managed
+ bridges (created by the deploy script);
-6) `OpenDaylight Artifacts <https://www.opendaylight.org/software/downloads>`_
+.. figure:: img/fuel_virtual_noha.png
+ :align: center
+ :width: 60%
+ :alt: OPNFV Fuel Virtual noHA POD Network Layout Examples
+
+ OPNFV Fuel Virtual noHA POD Network Layout Examples
+
+ +-------------+------------------------------------------------------------+
+ | ``cfg01`` | Salt Master Docker container |
+ +-------------+------------------------------------------------------------+
+ | ``ctl01`` | Controller VM |
+ +-------------+------------------------------------------------------------+
+ | ``gtw01`` | Gateway VM with neutron services |
+ | | (``DHCP`` agent, ``L3`` agent, ``metadata`` agent etc) |
+ +-------------+------------------------------------------------------------+
+ | ``odl01`` | VM on which ``ODL`` runs |
+ | | (for scenarios deployed with ODL) |
+ +-------------+------------------------------------------------------------+
+ | ``cmp001``, | Compute VMs |
+ | ``cmp002`` | |
+ +-------------+------------------------------------------------------------+
+
+.. TIP::
+
+ If external access to the ``public`` network is not required, there is
+ little to no motivation to create a custom ``PDF``/``IDF`` set for a
+ virtual deployment.
+
+   Instead, the existing virtual POD definitions in the `pharos git repo`_
+   can be used as-is:
+
+ - ``ericsson-virtual1`` for ``x86_64``;
+ - ``arm-virtual2`` for ``aarch64``;
+
+.. code-block:: console
+
+ # example deploy cmd for an x86_64 virtual cluster
+ jenkins@jumpserver:~/fuel$ ci/deploy.sh -l ericsson \
+ -p virtual1 \
+ -s os-nosdn-nofeature-noha \
+ -D \
+ -S /home/jenkins/tmpdir |& tee deploy.log
+
+``baremetal`` ``noHA`` POD
+--------------------------
-Fuel
+.. WARNING::
-7) `Mirantis Cloud Platform Documentation <https://docs.mirantis.com/mcp/latest>`_
+ These scenarios are not tested in OPNFV CI, so they are considered
+ experimental.
-Salt
+.. figure:: img/fuel_baremetal_noha.png
+ :align: center
+ :width: 60%
+ :alt: OPNFV Fuel Baremetal noHA POD Network Layout Example
+
+ OPNFV Fuel Baremetal noHA POD Network Layout Example
+
+ +-------------+------------------------------------------------------------+
+ | ``cfg01`` | Salt Master Docker container |
+ +-------------+------------------------------------------------------------+
+ | ``mas01`` | MaaS Node VM |
+ +-------------+------------------------------------------------------------+
+ | ``ctl01`` | Baremetal controller node |
+ +-------------+------------------------------------------------------------+
+ | ``gtw01`` | Baremetal Gateway with neutron services |
+   |             | (``DHCP`` agent, ``L3`` agent, ``metadata`` agent etc)    |
+ +-------------+------------------------------------------------------------+
+ | ``odl01`` | Baremetal node on which ODL runs |
+   |             | (for scenarios deployed with ODL, otherwise unused)       |
+ +-------------+------------------------------------------------------------+
+ | ``cmp001``, | Baremetal Computes |
+ | ``cmp002`` | |
+ +-------------+------------------------------------------------------------+
+ | Tenant VM | VM running in the cloud |
+ +-------------+------------------------------------------------------------+
+
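+If such an experimental deploy is attempted anyway, the command line is the
+same as for the other examples in this document, only the scenario name
+changes (lab/POD names below are just placeholders):
+
+.. code-block:: console
+
+   jenkins@jumpserver:~/fuel$ ci/deploy.sh -l <lab_name> \
+                                           -p <pod_name> \
+                                           -s os-nosdn-nofeature-noha \
+                                           -D \
+                                           -S /home/jenkins/tmpdir |& tee deploy.log
+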
+``baremetal`` ``HA`` POD
+------------------------
+
+.. figure:: img/fuel_baremetal_ha.png
+ :align: center
+ :width: 60%
+ :alt: OPNFV Fuel Baremetal HA POD Network Layout Example
+
+ OPNFV Fuel Baremetal HA POD Network Layout Example
+
+ +---------------------------+----------------------------------------------+
+ | ``cfg01`` | Salt Master Docker container |
+ +---------------------------+----------------------------------------------+
+ | ``mas01`` | MaaS Node VM |
+ +---------------------------+----------------------------------------------+
+   | ``kvm01``,                | Baremetal nodes hosting the VMs with         |
+ | ``kvm02``, | controller functions |
+ | ``kvm03`` | |
+ +---------------------------+----------------------------------------------+
+ | ``prx01``, | Proxy VMs for Nginx |
+ | ``prx02`` | |
+ +---------------------------+----------------------------------------------+
+ | ``msg01``, | RabbitMQ Service VMs |
+ | ``msg02``, | |
+ | ``msg03`` | |
+ +---------------------------+----------------------------------------------+
+ | ``dbs01``, | MySQL service VMs |
+ | ``dbs02``, | |
+ | ``dbs03`` | |
+ +---------------------------+----------------------------------------------+
+ | ``mdb01``, | Telemetry VMs |
+ | ``mdb02``, | |
+ | ``mdb03`` | |
+ +---------------------------+----------------------------------------------+
+ | ``odl01`` | VM on which ``OpenDaylight`` runs |
+ | | (for scenarios deployed with ``ODL``) |
+ +---------------------------+----------------------------------------------+
+ | ``cmp001``, | Baremetal Computes |
+ | ``cmp002`` | |
+ +---------------------------+----------------------------------------------+
+ | Tenant VM | VM running in the cloud |
+ +---------------------------+----------------------------------------------+
+
+.. code-block:: console
+
+ # x86_x64 baremetal deploy on pod2 from Linux Foundation lab (lf-pod2)
+ jenkins@jumpserver:~/fuel$ ci/deploy.sh -l lf \
+ -p pod2 \
+ -s os-nosdn-nofeature-ha \
+ -D \
+ -S /home/jenkins/tmpdir |& tee deploy.log
+
+.. code-block:: console
+
+ # aarch64 baremetal deploy on pod5 from Enea ARM lab (arm-pod5)
+ jenkins@jumpserver:~/fuel$ ci/deploy.sh -l arm \
+ -p pod5 \
+ -s os-nosdn-nofeature-ha \
+ -D \
+ -S /home/jenkins/tmpdir |& tee deploy.log
+
+``hybrid`` ``noHA`` POD
+-----------------------
+
+.. figure:: img/fuel_hybrid_noha.png
+ :align: center
+ :width: 60%
+ :alt: OPNFV Fuel Hybrid noHA POD Network Layout Examples
+
+ OPNFV Fuel Hybrid noHA POD Network Layout Examples
+
+ +-------------+------------------------------------------------------------+
+ | ``cfg01`` | Salt Master Docker container |
+ +-------------+------------------------------------------------------------+
+ | ``mas01`` | MaaS Node VM |
+ +-------------+------------------------------------------------------------+
+ | ``ctl01`` | Controller VM |
+ +-------------+------------------------------------------------------------+
+ | ``gtw01`` | Gateway VM with neutron services |
+ | | (``DHCP`` agent, ``L3`` agent, ``metadata`` agent etc) |
+ +-------------+------------------------------------------------------------+
+ | ``odl01`` | VM on which ``ODL`` runs |
+ | | (for scenarios deployed with ODL) |
+ +-------------+------------------------------------------------------------+
+ | ``cmp001``, | Baremetal Computes |
+ | ``cmp002`` | |
+ +-------------+------------------------------------------------------------+
+
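+A ``hybrid`` cluster is deployed using the same command line as any other
+cluster; the mix of ``baremetal`` and ``virtual`` nodes is determined solely
+by the node definitions in the ``PDF``, typically passed in via a custom
+``-b`` URI (all values below are just placeholders):
+
+.. code-block:: console
+
+   jenkins@jumpserver:~/fuel$ ci/deploy.sh -b file:///home/jenkins/labconfig \
+                                           -l <lab_name> \
+                                           -p <pod_name> \
+                                           -s os-nosdn-nofeature-noha \
+                                           -D \
+                                           -S /home/jenkins/tmpdir |& tee deploy.log
+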
+Automatic Deploy Breakdown
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+When an automatic deploy is started, the following operations are performed
+sequentially by the deploy script:
+
++------------------+----------------------------------------------------------+
+| **Deploy stage** | **Details** |
++==================+==========================================================+
+| Argument         | environment variables and command line arguments passed |
+| Parsing | to ``deploy.sh`` are interpreted |
++------------------+----------------------------------------------------------+
+| Distribution | Install and/or configure mandatory requirements on the |
+| Package | ``jumpserver`` node: |
+| Installation | |
+| | - ``Docker`` (from upstream and not distribution repos, |
+| | as the version included in ``Ubuntu`` ``Xenial`` is |
+| | outdated); |
+| | - ``docker-compose`` (from upstream, as the version |
+| | included in both ``CentOS 7`` and |
+| | ``Ubuntu Xenial 16.04`` has dependency issues on most |
+| | systems); |
+| | - ``virt-inst`` (from upstream, as the version included |
+| | in ``Ubuntu Xenial 16.04`` is outdated and lacks |
+| | certain required features); |
+|                  | - other miscellaneous requirements, depending on         |
+| | ``jumpserver`` distribution OS; |
+| | |
+| | .. SEEALSO:: |
+| | |
+| | - ``mcp/scripts/requirements_deb.yaml`` (``Ubuntu``) |
+| | - ``mcp/scripts/requirements_rpm.yaml`` (``CentOS``) |
+| | |
+| | .. WARNING:: |
+| | |
+|                  |    Minimum required ``Docker`` version is ``17.x``.      |
+| | |
+| | .. WARNING:: |
+| | |
+|                  |    Minimum required ``virt-inst`` version is ``1.4``.    |
++------------------+----------------------------------------------------------+
+| Patch | For each ``git`` submodule in OPNFV Fuel repository, |
+| Apply | if a subdirectory with the same name exists under |
+| | ``mcp/patches``, all patches in that subdirectory are |
+| | applied using ``git-am`` to the respective ``git`` |
+| | submodule. |
+| | |
+| | This allows OPNFV Fuel to alter upstream repositories |
+| | contents before consuming them, including: |
+| | |
+| | - ``Docker`` container build process customization; |
+| | - ``salt-formulas`` customization; |
+| | - ``reclass.system`` customization; |
+| | |
+| | .. SEEALSO:: |
+| | |
+| | - ``mcp/patches/README.rst`` |
++------------------+----------------------------------------------------------+
+| SSH RSA Keypair | If not already present, a RSA keypair is generated on |
+| Generation | the ``jumpserver`` node at: |
+| | |
+| | - ``/var/lib/opnfv/mcp.rsa{,.pub}`` |
+| | |
+| | The public key will be added to the ``authorized_keys`` |
+| | list for ``ubuntu`` user, so the private key can be used |
+| | for key-based logins on: |
+| | |
+| | - ``cfg01``, ``mas01`` infrastructure nodes; |
+| | - all cluster nodes (``baremetal`` and/or ``virtual``), |
+| | including ``VCP`` VMs; |
++------------------+----------------------------------------------------------+
+| ``j2`` | Based on ``XDF`` (``PDF``, ``IDF``, ``SDF``) and |
+| Expansion | additional deployment configuration determined during |
+| | ``argument parsing`` stage described above, all jinja2 |
+| | templates are expanded, including: |
+| | |
+| | - various classes in ``reclass.cluster``; |
+| | - docker-compose ``yaml`` for Salt Master bring-up; |
+| | - ``libvirt`` network definitions (``xml``); |
++------------------+----------------------------------------------------------+
+| Jumpserver | Basic validation that common ``jumpserver`` requirements |
+| Requirements     | are satisfied, e.g. ``PXE/admin`` is a Linux bridge if   |
+| Check | ``baremetal`` nodes are defined in the ``PDF``. |
++------------------+----------------------------------------------------------+
+| Infrastructure   | .. NOTE::                                                |
+| Setup | |
+| | All steps apply to and only to the ``jumpserver``. |
+| | |
+| | - prepare virtual machines; |
+| | - (re)create ``libvirt`` managed networks; |
+| | - apply ``sysctl`` configuration; |
+| | - apply ``udev`` configuration; |
+| | - create & start virtual machines prepared earlier; |
+| | - create & start Salt Master (``cfg01``) Docker |
+| | container; |
++------------------+----------------------------------------------------------+
+| ``STATE`` | Based on deployment type, scenario and other parameters, |
+| Files | a ``STATE`` file list is constructed, then executed |
+| | sequentially. |
+| | |
+| | .. TIP:: |
+| | |
+| | The table below lists all current ``STATE`` files |
+| | and their intended action. |
+| | |
+| | .. SEEALSO:: |
+| | |
+| | For more information on how the list of ``STATE`` |
+| | files is constructed, see |
+| | :ref:`OPNFV Fuel User Guide <fuel-userguide>`. |
++------------------+----------------------------------------------------------+
+| Log | Contents of ``/var/log`` are recursively gathered from |
+| Collection | all the nodes, then archived together for later |
+| | inspection. |
++------------------+----------------------------------------------------------+
+
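+As a quick pre-flight sanity check of the minimum version requirements called
+out above, the tool versions present on the ``jumpserver`` can also be queried
+manually (exact output format depends on the distribution and package origin):
+
+.. code-block:: console
+
+   jenkins@jumpserver:~$ docker --version           # expected: 17.x or newer
+   jenkins@jumpserver:~$ docker-compose --version
+   jenkins@jumpserver:~$ virt-install --version     # expected: 1.4 or newer
+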
+``STATE`` Files Overview
+------------------------
+
++---------------------------+-------------------------------------------------+
+| ``STATE`` file | Targets involved and main intended action |
++===========================+=================================================+
+| ``virtual_init`` | ``cfg01``: reclass node generation |
+| | |
+| | ``jumpserver`` VMs (e.g. ``mas01``): basic OS |
+| | config |
++---------------------------+-------------------------------------------------+
+| ``maas`` | ``mas01``: OS, MaaS installation, |
+| | ``baremetal`` node commissioning and deploy |
+| | |
+| | .. NOTE:: |
+| | |
+| | Skipped if no ``baremetal`` nodes are |
+| | defined in ``PDF`` (``virtual`` deploy). |
++---------------------------+-------------------------------------------------+
+| ``baremetal_init`` | ``kvm``, ``cmp``: OS install, config |
++---------------------------+-------------------------------------------------+
+| ``dpdk`` | ``cmp``: configure OVS-DPDK |
++---------------------------+-------------------------------------------------+
+| ``networks`` | ``ctl``: create OpenStack networks |
++---------------------------+-------------------------------------------------+
+| ``neutron_gateway`` | ``gtw01``: configure Neutron gateway |
++---------------------------+-------------------------------------------------+
+| ``opendaylight`` | ``odl01``: install & configure ``ODL`` |
++---------------------------+-------------------------------------------------+
+| ``openstack_noha`` | cluster nodes: install OpenStack without ``HA`` |
++---------------------------+-------------------------------------------------+
+| ``openstack_ha`` | cluster nodes: install OpenStack with ``HA`` |
++---------------------------+-------------------------------------------------+
+| ``virtual_control_plane`` | ``kvm``: create ``VCP`` VMs |
+| | |
+| | ``VCP`` VMs: basic OS config |
+| | |
+| | .. NOTE:: |
+| | |
+| | Skipped if ``-N`` deploy argument is used. |
++---------------------------+-------------------------------------------------+
+| ``tacker`` | ``ctl``: install & configure Tacker |
++---------------------------+-------------------------------------------------+
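+
+For example, once the ``virtual_init`` state has finished, the generated SSH
+keypair can be used to log into the Salt Master and check which minions
+respond (the address below assumes the default ``mcpcontrol`` CIDR):
+
+.. code-block:: console
+
+   jenkins@jumpserver:~$ ssh -o StrictHostKeyChecking=no \
+                             -i /var/lib/opnfv/mcp.rsa ubuntu@10.20.0.2
+   ubuntu@cfg01:~$ sudo salt '*' test.ping
+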
-8) `Saltstack Documentation <https://docs.saltstack.com/en/latest/topics>`_
-9) `Saltstack Formulas <https://salt-formulas.readthedocs.io/en/latest/develop/overview-reclass.html>`_
+Release Notes
+=============
+
+Please refer to the :ref:`OPNFV Fuel Release Notes <fuel-releasenotes>`
+article.
-Reclass
+References
+==========
-10) `Reclass model <https://reclass.pantsfullofunix.net>`_
+For more information on the OPNFV ``Gambia`` 7.0 release, please see:
+
+#. `OPNFV Home Page`_
+#. `OPNFV Documentation`_
+#. `OPNFV Software Downloads`_
+#. `OPNFV Gambia Wiki Page`_
+#. `OpenStack Queens Release Artifacts`_
+#. `OpenStack Documentation`_
+#. `OpenDaylight Artifacts`_
+#. `Mirantis Cloud Platform Documentation`_
+#. `Saltstack Documentation`_
+#. `Saltstack Formulas`_
+#. `Reclass`_
+
+.. FIXME: cleanup unused refs, extend above list
+.. _`OpenDaylight`: https://www.opendaylight.org/software
+.. _`OpenDaylight Artifacts`: https://www.opendaylight.org/software/downloads
+.. _`MCP`: https://www.mirantis.com/software/mcp/
+.. _`Mirantis Cloud Platform Documentation`: https://docs.mirantis.com/mcp/latest/
+.. _`fuel git repository`: https://git.opnfv.org/fuel
+.. _`pharos git repo`: https://git.opnfv.org/pharos
+.. _`OpenStack Documentation`: https://docs.openstack.org
+.. _`OpenStack Queens Release Artifacts`: https://www.openstack.org/software/queens
+.. _`OPNFV Home Page`: https://www.opnfv.org
+.. _`OPNFV Gambia Wiki Page`: https://wiki.opnfv.org/releases/Gambia
+.. _`OPNFV Documentation`: https://docs.opnfv.org
+.. _`OPNFV Software Downloads`: https://www.opnfv.org/software/download
+.. _`Apache License 2.0`: https://www.apache.org/licenses/LICENSE-2.0
+.. _`Saltstack Documentation`: https://docs.saltstack.com/en/latest/topics/
+.. _`Saltstack Formulas`: https://salt-formulas.readthedocs.io/en/latest/
+.. _`Reclass`: https://reclass.pantsfullofunix.net
+.. _`OPNFV Pharos Specification`: https://wiki.opnfv.org/display/pharos/Pharos+Specification
+.. _`OPNFV PDF Wiki Page`: https://wiki.opnfv.org/display/INF/POD+Descriptor
diff --git a/docs/release/release-notes/index.rst b/docs/release/release-notes/index.rst
index 4b1e4fa77..d4560558b 100644
--- a/docs/release/release-notes/index.rst
+++ b/docs/release/release-notes/index.rst
@@ -1,17 +1,10 @@
-.. _fuel-releasenotes:
-
.. This work is licensed under a Creative Commons Attribution 4.0 International License.
.. http://creativecommons.org/licenses/by/4.0
.. (c) Open Platform for NFV Project, Inc. and its contributors
-.. _fuel-release-notes-label:
-
-*****************************
-Release notes for Fuel\@OPNFV
-*****************************
+.. _fuel-releasenotes:
.. toctree::
- :numbered:
:maxdepth: 2
release-notes.rst
diff --git a/docs/release/release-notes/release-notes.rst b/docs/release/release-notes/release-notes.rst
index 6fd007e76..909963c9b 100644
--- a/docs/release/release-notes/release-notes.rst
+++ b/docs/release/release-notes/release-notes.rst
@@ -2,152 +2,192 @@
.. http://creativecommons.org/licenses/by/4.0
.. (c) Open Platform for NFV Project, Inc. and its contributors
-========
+************************
+OPNFV Fuel Release Notes
+************************
+
Abstract
========
-This document compiles the release notes for the Fraser release of
-OPNFV when using Fuel as a deployment tool. This is an unified documentation
-for both x86_64 and aarch64 architectures. All information is common for
-both architectures except when explicitly stated.
+This document provides the release notes for the ``Gambia`` release with the
+Fuel deployment toolchain.
+Starting with this release, both ``x86_64`` and ``aarch64`` architectures
+are supported at the same time by the ``fuel`` codebase.
+
+License
+=======
+
+All Fuel and "common" entities are protected by the `Apache License 2.0`_.
-===============
Important Notes
===============
-These notes provides release information for the use of Fuel as deployment
-tool for the Fraser release of OPNFV.
+This is the OPNFV ``Gambia`` release that implements the deploy stage of the
+OPNFV CI pipeline via Fuel.
-The goal of the Fraser release and this Fuel-based deployment process is
+Fuel is based on the `MCP`_ installation tool chain.
+More information available at `Mirantis Cloud Platform Documentation`_.
+
+The goal of the ``Gambia`` release and this Fuel-based deployment process is
to establish a lab ready platform accelerating further development
of the OPNFV infrastructure.
-Carefully follow the installation-instructions.
+Carefully follow the installation instructions.
-=======
Summary
=======
-For Fraser, the typical use of Fuel as an OpenStack installer is
-supplemented with OPNFV unique components such as:
-
-- `OpenDaylight <https://www.opendaylight.org/software>`_
-- `Open vSwitch for NFV <https://wiki.opnfv.org/ovsnfv>`_
+``Gambia`` release with the Fuel deployment toolchain will establish an OPNFV
+target system on a Pharos compliant lab infrastructure. The current definition
+of an OPNFV target system is OpenStack Queens combined with an SDN
+controller, such as OpenDaylight. The system is deployed with OpenStack High
+Availability (HA) for most OpenStack services.
-As well as OPNFV-unique configurations of the Hardware and Software stack.
+Fuel also supports non-HA deployments, which deploy a single controller,
+one gateway node and a number of compute nodes.
-This Fraser artifact provides Fuel as the deployment stage tool in the
-OPNFV CI pipeline including:
+Fuel supports ``x86_64``, ``aarch64`` or ``mixed`` architecture clusters.
-- Documentation built by Jenkins
+Furthermore, Fuel is capable of deploying scenarios in a ``baremetal``,
+``virtual`` or ``hybrid`` fashion. ``virtual`` deployments use multiple VMs on
+the Jump Host and internal networking to simulate the ``baremetal`` deployment.
- - overall OPNFV documentation
+For ``Gambia``, the typical use of Fuel as an OpenStack installer is
+supplemented with OPNFV unique components such as:
- - this document (release notes)
+- `OpenDaylight`_
+- Open Virtual Network (``OVN``)
- - installation instructions
+As well as OPNFV-unique configurations of the Hardware and Software stack.
-- Automated deployment of Fraser with running on baremetal or a nested
- hypervisor environment (KVM)
+This ``Gambia`` artifact provides Fuel as the deployment stage tool in the
+OPNFV CI pipeline including:
-- Automated validation of the Fraser deployment
+- Automated (Jenkins, RTD) documentation build & publish (multiple documents);
+- Automated (Jenkins) build & publish of Salt Master Docker image;
+- Automated (Jenkins) deployment of ``Gambia`` running on baremetal or a nested
+ hypervisor environment (KVM);
+- Automated (Jenkins) validation of the ``Gambia`` deployment.
-============
Release Data
============
+--------------------------------------+--------------------------------------+
-| **Project** | fuel/armband |
+| **Project** | fuel |
| | |
+--------------------------------------+--------------------------------------+
-| **Repo/tag** | opnfv-6.2.1 |
+| **Repo/tag** | opnfv-7.0.0 |
| | |
+--------------------------------------+--------------------------------------+
-| **Release designation** | Fraser 6.2 |
+| **Release designation** | Gambia 7.0 |
| | |
+--------------------------------------+--------------------------------------+
-| **Release date** | June 29 2018 |
+| **Release date** | November 2nd, 2018 |
| | |
+--------------------------------------+--------------------------------------+
-| **Purpose of the delivery** | Fraser alignment to Released |
-| | MCP baseline + features and |
-| | bug-fixes for the following |
-| | feaures: |
-| | |
-| | - Open vSwitch for NFV |
-| | - OpenDaylight |
-| | - DPDK |
+| **Purpose of the delivery** | OPNFV Gambia 7.0 release |
+--------------------------------------+--------------------------------------+
Version Change
-==============
+--------------
Module Version Changes
-----------------------
-This is the Fraser 6.2 release.
-It is based on following upstream versions:
+~~~~~~~~~~~~~~~~~~~~~~
+
+This is the first tracked version of the ``Gambia`` release with the Fuel
+deployment toolchain. It is based on the following upstream versions:
-- MCP Base Release
+- MCP (``Q2`18`` GA release)
-- OpenStack Pike Release
+- OpenStack (``Queens`` release)
-- OpenDaylight Oxygen Release
+- OpenDaylight (``Fluorine`` release)
+
+- Ubuntu (``16.04`` release)
Document Changes
-----------------
-This is the Fraser 6.2 release.
+~~~~~~~~~~~~~~~~
+
+This is the ``Gambia`` 7.0 release.
It comes with the following documentation:
-- :ref:`fuel-release-installation-label`
+- :ref:`OPNFV Fuel Installation Instruction <fuel-installation>`
- Release notes (This document)
-- :ref:`fuel-release-userguide-label`
+- :ref:`OPNFV Fuel Userguide <fuel-userguide>`
Reason for Version
-==================
+------------------
Feature Additions
------------------
-
-**JIRA TICKETS:**
-None
-
-Bug Corrections
----------------
+~~~~~~~~~~~~~~~~~
-**JIRA TICKETS:**
+- ``multiarch`` cluster support;
+- ``hybrid`` cluster support;
+- ``PDF``/``IDF`` support for ``virtual`` PODs;
+- ``baremetal`` support for noHA deployments;
+- containerized Salt Master;
+- ``OVN`` scenarios;
-`Fraser 6.2 bug fixes <https://jira.opnfv.org/issues/?filter=12318>`_
+For an exhaustive list, see the `OPNFV Fuel JIRA: Gambia New features`_ filter.
-(Also See respective Integrated feature project's bug tracking)
+Bug Corrections
+~~~~~~~~~~~~~~~
-Deliverables
-============
+For an exhaustive list, see the `OPNFV Fuel JIRA: Gambia Bugs (fixed)`_ filter.
Software Deliverables
----------------------
-
-- `Fuel@x86_64 installer script files <https://git.opnfv.org/fuel>`_
+~~~~~~~~~~~~~~~~~~~~~
-- `Fuel@aarch64 installer script files <https://git.opnfv.org/armband>`_
+- `fuel git repository`_ with multiarch (``x86_64``, ``aarch64`` or ``mixed``)
+ installer script files
Documentation Deliverables
---------------------------
+~~~~~~~~~~~~~~~~~~~~~~~~~~
-- :ref:`fuel-release-installation-label`
+- :ref:`OPNFV Fuel Installation Instruction <fuel-installation>`
- Release notes (This document)
-- :ref:`fuel-release-userguide-label`
+- :ref:`OPNFV Fuel Userguide <fuel-userguide>`
+
+Scenario Matrix
+---------------
+
++-------------------------+---------------+-------------+------------+
+| | ``baremetal`` | ``virtual`` | ``hybrid`` |
++=========================+===============+=============+============+
+| os-nosdn-nofeature-noha | | ``x86_64`` | |
++-------------------------+---------------+-------------+------------+
+| os-nosdn-nofeature-ha | ``x86_64``, | | |
+| | ``aarch64`` | | |
++-------------------------+---------------+-------------+------------+
+| os-nosdn-ovs-noha | | ``x86_64`` | |
++-------------------------+---------------+-------------+------------+
+| os-nosdn-ovs-ha | ``x86_64``, | | |
+| | ``aarch64`` | | |
++-------------------------+---------------+-------------+------------+
+| os-odl-nofeature-noha | | ``x86_64`` | |
++-------------------------+---------------+-------------+------------+
+| os-odl-nofeature-ha | ``x86_64``, | | |
+| | ``aarch64`` | | |
++-------------------------+---------------+-------------+------------+
+| os-odl-ovs-noha | | ``x86_64`` | |
++-------------------------+---------------+-------------+------------+
+| os-odl-ovs-ha | ``x86_64`` | | |
++-------------------------+---------------+-------------+------------+
+| os-ovn-nofeature-noha | | ``x86_64`` | |
++-------------------------+---------------+-------------+------------+
+| os-ovn-nofeature-ha | ``aarch64`` | | |
++-------------------------+---------------+-------------+------------+
-=========================================
Known Limitations, Issues and Workarounds
=========================================
System Limitations
-==================
+------------------
- **Max number of blades:** 1 Jumpserver, 3 Controllers, 20 Compute blades
@@ -159,54 +199,50 @@ System Limitations
Known Issues
-============
-
-**JIRA TICKETS:**
+------------
-`Known issues <https://jira.opnfv.org/issues/?filter=12317>`_
-
-(Also See respective Integrated feature project's bug tracking)
+For an exhaustive list, see the `OPNFV Fuel JIRA: Gambia Known issues`_ filter.
Workarounds
-===========
-
-**JIRA TICKETS:**
-
-None
+-----------
-(Also See respective Integrated feature project's bug tracking)
+For an exhaustive list, see the `OPNFV Fuel JIRA: Gambia Workarounds`_ filter.
-============
Test Results
============
-The Fraser 6.2 release with the Fuel deployment tool has undergone QA test
+
+The ``Gambia`` 7.0 release with the Fuel deployment tool has undergone QA test
runs, see separate test results.
-==========
References
==========
-For more information on the OPNFV Fraser 6.2 release, please see:
-
-OPNFV
-=====
-
-1) `OPNFV Home Page <https://www.opnfv.org>`_
-2) `OPNFV Documentation <https://docs.opnfv.org>`_
-3) `OPNFV Software Downloads <https://www.opnfv.org/software/download>`_
-
-OpenStack
-=========
-
-4) `OpenStack Pike Release Artifacts <https://www.openstack.org/software/pike>`_
-
-5) `OpenStack Documentation <https://docs.openstack.org>`_
-
-OpenDaylight
-============
-
-6) `OpenDaylight Artifacts <https://www.opendaylight.org/software/downloads>`_
-
-Fuel
-====
-7) `Mirantis Cloud Platform Documentation <https://docs.mirantis.com/mcp/latest>`_
+For more information on the OPNFV ``Gambia`` 7.0 release, please see:
+
+#. `OPNFV Home Page`_
+#. `OPNFV Documentation`_
+#. `OPNFV Software Downloads`_
+#. `OPNFV Gambia Wiki Page`_
+#. `OpenStack Queens Release Artifacts`_
+#. `OpenStack Documentation`_
+#. `OpenDaylight Artifacts`_
+#. `Mirantis Cloud Platform Documentation`_
+
+.. FIXME: cleanup unused refs, extend above list
+.. _`OpenDaylight`: https://www.opendaylight.org/software
+.. _`OpenDaylight Artifacts`: https://www.opendaylight.org/software/downloads
+.. _`MCP`: https://www.mirantis.com/software/mcp/
+.. _`Mirantis Cloud Platform Documentation`: https://docs.mirantis.com/mcp/latest/
+.. _`fuel git repository`: https://git.opnfv.org/fuel
+.. _`OpenStack Documentation`: https://docs.openstack.org
+.. _`OpenStack Queens Release Artifacts`: https://www.openstack.org/software/queens
+.. _`OPNFV Home Page`: https://www.opnfv.org
+.. _`OPNFV Gambia Wiki Page`: https://wiki.opnfv.org/releases/Gambia
+.. _`OPNFV Documentation`: https://docs.opnfv.org
+.. _`OPNFV Software Downloads`: https://www.opnfv.org/software/download
+.. _`Apache License 2.0`: https://www.apache.org/licenses/LICENSE-2.0
+.. OPNFV Fuel Gambia JIRA filters
+.. _`OPNFV Fuel JIRA: Gambia Bugs (fixed)`: https://jira.opnfv.org/issues/?filter=12503
+.. _`OPNFV Fuel JIRA: Gambia New features`: https://jira.opnfv.org/issues/?filter=12504
+.. _`OPNFV Fuel JIRA: Gambia Known issues`: https://jira.opnfv.org/issues/?filter=12505
+.. _`OPNFV Fuel JIRA: Gambia Workarounds`: https://jira.opnfv.org/issues/?filter=12506
diff --git a/docs/release/scenarios/index.rst b/docs/release/scenarios/index.rst
index dc12fd05a..29509c0e2 100644
--- a/docs/release/scenarios/index.rst
+++ b/docs/release/scenarios/index.rst
@@ -4,11 +4,12 @@
.. http://creativecommons.org/licenses/by/4.0
.. (c) Open Platform for NFV Project, Inc. and its contributors
-*************************
-Scenarios for Fuel\@OPNFV
-*************************
+********************
+OPNFV Fuel Scenarios
+********************
.. toctree::
+ :maxdepth: 2
os-nosdn-ovs-noha/index.rst
os-nosdn-ovs-ha/index.rst
diff --git a/docs/release/scenarios/os-nosdn-ovs-ha/index.rst b/docs/release/scenarios/os-nosdn-ovs-ha/index.rst
index 723e83be4..c9c9b9985 100644
--- a/docs/release/scenarios/os-nosdn-ovs-ha/index.rst
+++ b/docs/release/scenarios/os-nosdn-ovs-ha/index.rst
@@ -9,8 +9,6 @@ os-nosdn-ovs-ha overview and description
========================================
.. toctree::
- :numbered:
:maxdepth: 2
- os-nosdn-ovs-ha.rst
-
+.. include:: os-nosdn-ovs-ha.rst
diff --git a/docs/release/scenarios/os-nosdn-ovs-ha/os-nosdn-ovs-ha.rst b/docs/release/scenarios/os-nosdn-ovs-ha/os-nosdn-ovs-ha.rst
index 6841c620c..e653a6232 100644
--- a/docs/release/scenarios/os-nosdn-ovs-ha/os-nosdn-ovs-ha.rst
+++ b/docs/release/scenarios/os-nosdn-ovs-ha/os-nosdn-ovs-ha.rst
@@ -5,7 +5,6 @@
This document provides scenario level details for Gambia 7.0 of
deployment with no SDN controller and no extra features enabled.
-============
Introduction
============
diff --git a/docs/release/scenarios/os-nosdn-ovs-noha/index.rst b/docs/release/scenarios/os-nosdn-ovs-noha/index.rst
index 9726dd07e..135cefca0 100644
--- a/docs/release/scenarios/os-nosdn-ovs-noha/index.rst
+++ b/docs/release/scenarios/os-nosdn-ovs-noha/index.rst
@@ -9,8 +9,6 @@ os-nosdn-ovs-noha overview and description
==========================================
.. toctree::
- :numbered:
:maxdepth: 2
- os-nosdn-ovs-noha.rst
-
+.. include:: os-nosdn-ovs-noha.rst
diff --git a/docs/release/scenarios/os-nosdn-ovs-noha/os-nosdn-ovs-noha.rst b/docs/release/scenarios/os-nosdn-ovs-noha/os-nosdn-ovs-noha.rst
index edda710c1..42f6ccc36 100644
--- a/docs/release/scenarios/os-nosdn-ovs-noha/os-nosdn-ovs-noha.rst
+++ b/docs/release/scenarios/os-nosdn-ovs-noha/os-nosdn-ovs-noha.rst
@@ -5,7 +5,6 @@
This document provides scenario level details for Gambia 7.0 of
deployment with no SDN controller and no extra features enabled.
-============
Introduction
============
diff --git a/docs/release/scenarios/os-nosdn-vpp-ha/index.rst b/docs/release/scenarios/os-nosdn-vpp-ha/index.rst
index a17c272a8..d4d5a46ef 100644
--- a/docs/release/scenarios/os-nosdn-vpp-ha/index.rst
+++ b/docs/release/scenarios/os-nosdn-vpp-ha/index.rst
@@ -9,8 +9,6 @@ os-nosdn-vpp-ha overview and description
========================================
.. toctree::
- :numbered:
:maxdepth: 2
- os-nosdn-vpp-ha.rst
-
+.. include:: os-nosdn-vpp-ha.rst
diff --git a/docs/release/scenarios/os-nosdn-vpp-ha/os-nosdn-vpp-ha.rst b/docs/release/scenarios/os-nosdn-vpp-ha/os-nosdn-vpp-ha.rst
index eb49e3d34..80c829acc 100644
--- a/docs/release/scenarios/os-nosdn-vpp-ha/os-nosdn-vpp-ha.rst
+++ b/docs/release/scenarios/os-nosdn-vpp-ha/os-nosdn-vpp-ha.rst
@@ -5,7 +5,6 @@
This document provides scenario level details for Gambia 7.0 of
deployment with no SDN controller and VPP enabled as virtual switch.
-============
Introduction
============
diff --git a/docs/release/scenarios/os-nosdn-vpp-noha/index.rst b/docs/release/scenarios/os-nosdn-vpp-noha/index.rst
index d6576a5e6..35059859d 100644
--- a/docs/release/scenarios/os-nosdn-vpp-noha/index.rst
+++ b/docs/release/scenarios/os-nosdn-vpp-noha/index.rst
@@ -9,8 +9,6 @@ os-nosdn-vpp-noha overview and description
==========================================
.. toctree::
- :numbered:
:maxdepth: 2
- os-nosdn-vpp-noha.rst
-
+.. include:: os-nosdn-vpp-noha.rst
diff --git a/docs/release/scenarios/os-nosdn-vpp-noha/os-nosdn-vpp-noha.rst b/docs/release/scenarios/os-nosdn-vpp-noha/os-nosdn-vpp-noha.rst
index 51a0000b4..a699779ba 100644
--- a/docs/release/scenarios/os-nosdn-vpp-noha/os-nosdn-vpp-noha.rst
+++ b/docs/release/scenarios/os-nosdn-vpp-noha/os-nosdn-vpp-noha.rst
@@ -5,7 +5,6 @@
This document provides scenario level details for Gambia 7.0 of
deployment with no SDN controller and VPP enabled as virtual switch.
-============
Introduction
============
diff --git a/docs/release/scenarios/os-ovn-nofeature-ha/index.rst b/docs/release/scenarios/os-ovn-nofeature-ha/index.rst
index 704172235..5a9b2cdfe 100644
--- a/docs/release/scenarios/os-ovn-nofeature-ha/index.rst
+++ b/docs/release/scenarios/os-ovn-nofeature-ha/index.rst
@@ -9,8 +9,6 @@ os-ovn-nofeature-ha overview and description
============================================
.. toctree::
- :numbered:
:maxdepth: 2
- os-ovn-nofeature-ha.rst
-
+.. include:: os-ovn-nofeature-ha.rst
diff --git a/docs/release/scenarios/os-ovn-nofeature-ha/os-ovn-nofeature-ha.rst b/docs/release/scenarios/os-ovn-nofeature-ha/os-ovn-nofeature-ha.rst
index cb469cb3c..0317c4b5b 100644
--- a/docs/release/scenarios/os-ovn-nofeature-ha/os-ovn-nofeature-ha.rst
+++ b/docs/release/scenarios/os-ovn-nofeature-ha/os-ovn-nofeature-ha.rst
@@ -6,7 +6,6 @@ This document provides scenario level details for Gambia 7.0 of deployment
with Open Virtual Network (OVN) providing Layers 2 and 3 networking and no
extra features enabled.
-============
Introduction
============
diff --git a/docs/release/scenarios/os-ovn-nofeature-noha/index.rst b/docs/release/scenarios/os-ovn-nofeature-noha/index.rst
index 7c5baf5bb..ba823f3b5 100644
--- a/docs/release/scenarios/os-ovn-nofeature-noha/index.rst
+++ b/docs/release/scenarios/os-ovn-nofeature-noha/index.rst
@@ -9,8 +9,6 @@ os-ovn-nofeature-noha overview and description
==============================================
.. toctree::
- :numbered:
:maxdepth: 2
- os-ovn-nofeature-noha.rst
-
+.. include:: os-ovn-nofeature-noha.rst
diff --git a/docs/release/scenarios/os-ovn-nofeature-noha/os-ovn-nofeature-noha.rst b/docs/release/scenarios/os-ovn-nofeature-noha/os-ovn-nofeature-noha.rst
index 0005f7549..44bcbfa7c 100644
--- a/docs/release/scenarios/os-ovn-nofeature-noha/os-ovn-nofeature-noha.rst
+++ b/docs/release/scenarios/os-ovn-nofeature-noha/os-ovn-nofeature-noha.rst
@@ -6,7 +6,6 @@ This document provides scenario level details for Gambia 7.0 of deployment
with Open Virtual Network (OVN) providing Layers 2 and 3 networking and no
extra features enabled.
-============
Introduction
============
diff --git a/docs/release/userguide/img/saltstack.png b/docs/release/userguide/img/saltstack.png
deleted file mode 100644
index d57452c65..000000000
--- a/docs/release/userguide/img/saltstack.png
+++ /dev/null
Binary files differ
diff --git a/docs/release/userguide/index.rst b/docs/release/userguide/index.rst
index d4330d08c..ab616d317 100644
--- a/docs/release/userguide/index.rst
+++ b/docs/release/userguide/index.rst
@@ -1,18 +1,10 @@
-.. _fuel-userguide:
-
.. This work is licensed under a Creative Commons Attribution 4.0 International License.
.. http://creativecommons.org/licenses/by/4.0
.. (c) Open Platform for NFV Project, Inc. and its contributors
-.. _fuel-release-userguide-label:
-
-**************************
-User guide for Fuel\@OPNFV
-**************************
+.. _fuel-userguide:
.. toctree::
- :numbered:
:maxdepth: 2
userguide.rst
-
diff --git a/docs/release/userguide/userguide.rst b/docs/release/userguide/userguide.rst
index 76639abcf..c6602f3cb 100644
--- a/docs/release/userguide/userguide.rst
+++ b/docs/release/userguide/userguide.rst
@@ -2,410 +2,1016 @@
.. http://creativecommons.org/licenses/by/4.0
.. (c) Open Platform for NFV Project, Inc. and its contributors
-========
+*********************
+OPNFV Fuel User Guide
+*********************
+
Abstract
========
-This document contains details about how to use OPNFV Fuel - Fraser
-release - after it was deployed. For details on how to deploy check the
-installation instructions in the :ref:`fuel_userguide_references` section.
+This document contains details about using the OPNFV Fuel ``Gambia`` release
+after it has been deployed. For details on how to deploy OpenStack, check
+the installation instructions in the :ref:`fuel_userguide_references` section.
-This is an unified documentation for both x86_64 and aarch64
+This is a unified documentation for both ``x86_64`` and ``aarch64``
architectures. All information is common for both architectures
except when explicitly stated.
-
-
-================
Network Overview
================
Fuel uses several networks to deploy and administer the cloud:
-+------------------+---------------------------------------------------------+
-| Network name | Description |
-| | |
-+==================+=========================================================+
-| **PXE/ADMIN** | Used for booting the nodes via PXE and/or Salt |
-| | control network |
-+------------------+---------------------------------------------------------+
-| **MCPCONTROL** | Used to provision the infrastructure VMs (Salt & MaaS) |
-+------------------+---------------------------------------------------------+
-| **Mgmt** | Used for internal communication between |
-| | OpenStack components |
-+------------------+---------------------------------------------------------+
-| **Internal** | Used for VM data communication within the |
-| | cloud deployment |
-+------------------+---------------------------------------------------------+
-| **Public** | Used to provide Virtual IPs for public endpoints |
-| | that are used to connect to OpenStack services APIs. |
-| | Used by Virtual machines to access the Internet |
-+------------------+---------------------------------------------------------+
++------------------+----------------------------------------------------------+
+| Network name | Description |
+| | |
++==================+==========================================================+
+| **PXE/admin** | Used for booting the nodes via PXE and/or Salt |
+| | control network |
++------------------+----------------------------------------------------------+
+| **mcpcontrol** | Used to provision the infrastructure hosts (Salt & MaaS) |
++------------------+----------------------------------------------------------+
+| **management** | Used for internal communication between |
+| | OpenStack components |
++------------------+----------------------------------------------------------+
+| **internal** | Used for VM data communication within the |
+| | cloud deployment |
++------------------+----------------------------------------------------------+
+| **public** | Used to provide Virtual IPs for public endpoints |
+| | that are used to connect to OpenStack services APIs. |
+| | Used by Virtual machines to access the Internet |
++------------------+----------------------------------------------------------+
+
+These networks - except ``mcpcontrol`` - can be Linux bridges configured
+on the Jumpserver before the deploy.
+If they don't exist at deploy time, they will be created by the scripts as
+``libvirt`` managed networks.
+
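+A minimal sketch of manually pre-creating such a Linux bridge on the
+Jumpserver is shown below; the bridge name and the physical port attached to
+it are lab-specific and only illustrative here:
+
+.. code-block:: console
+
+   jenkins@jumpserver:~$ sudo brctl addbr br-mgmt
+   jenkins@jumpserver:~$ sudo brctl addif br-mgmt eth1.300
+   jenkins@jumpserver:~$ sudo ip link set br-mgmt up
+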
+Network ``mcpcontrol``
+~~~~~~~~~~~~~~~~~~~~~~
+
+``mcpcontrol`` is a virtual network, managed by libvirt. Its only purpose is to
+provide a simple method of assigning an arbitrary ``INSTALLER_IP`` to the Salt
+master node (``cfg01``), to maintain backwards compatibility with old OPNFV
+Fuel behavior. Normally, end-users only need to change the ``INSTALLER_IP`` if
+the default CIDR (``10.20.0.0/24``) overlaps with existing lab networks.
+
+``mcpcontrol`` has both NAT and DHCP enabled, so the Salt master (``cfg01``)
+and the MaaS VM (``mas01``, when present) get assigned predefined IPs
+(``.2`` and ``.3`` respectively), while the jumpserver bridge port gets ``.1``.
+
++------------------+---------------------------+-----------------------------+
+| Host | Offset in IP range | Default address |
++==================+===========================+=============================+
+| ``jumpserver`` | 1st | ``10.20.0.1`` |
++------------------+---------------------------+-----------------------------+
+| ``cfg01`` | 2nd | ``10.20.0.2`` |
++------------------+---------------------------+-----------------------------+
+| ``mas01`` | 3rd | ``10.20.0.3`` |
++------------------+---------------------------+-----------------------------+
+
+This network is limited to the ``jumpserver`` host and does not require any
+manual setup.
+
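+Should it ever need inspection, the network is visible on the ``jumpserver``
+through the regular ``libvirt`` tooling, e.g.:
+
+.. code-block:: console
+
+   jenkins@jumpserver:~$ sudo virsh net-list --all
+   jenkins@jumpserver:~$ sudo virsh net-dumpxml mcpcontrol
+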
+Network ``PXE/admin``
+~~~~~~~~~~~~~~~~~~~~~
+
+.. TIP::
+
+ ``PXE/admin`` does not usually use an IP range offset in ``IDF``.
+
+.. NOTE::
+
+ During ``MaaS`` commissioning phase, IP addresses are handed out by
+ ``MaaS``'s DHCP.
+
+.. NOTE::
-These networks - except mcpcontrol - can be linux bridges configured before the deploy on the
-Jumpserver. If they don't exists at deploy time, they will be created by the scripts as virsh
-networks.
+   Default addresses in the table below correspond to a ``PXE/admin`` CIDR of
+ ``192.168.11.0/24`` (the usual value used in OPNFV labs).
+
+ This is defined in ``IDF`` and can easily be changed to something else.
+
+.. TODO: detail MaaS DHCP range start/end
+
++------------------+-----------------------+---------------------------------+
+| Host | Offset in IP range | Default address |
++==================+=======================+=================================+
+| ``jumpserver`` | 1st | ``192.168.11.1`` |
+| | | (manual assignment) |
++------------------+-----------------------+---------------------------------+
+| ``cfg01`` | 2nd | ``192.168.11.2`` |
++------------------+-----------------------+---------------------------------+
+| ``mas01`` | 3rd | ``192.168.11.3`` |
++------------------+-----------------------+---------------------------------+
+| ``prx01``, | 4th, | ``192.168.11.4``, |
+| ``prx02`` | 5th | ``192.168.11.5`` |
++------------------+-----------------------+---------------------------------+
+| ``gtw01``, | ... | ``...`` |
+| ``gtw02``, | | |
+| ``gtw03`` | | |
++------------------+-----------------------+---------------------------------+
+| ``kvm01``, | | |
+| ``kvm02``, | | |
+| ``kvm03`` | | |
++------------------+-----------------------+---------------------------------+
+| ``dbs01``, | | |
+| ``dbs02``, | | |
+| ``dbs03`` | | |
++------------------+-----------------------+---------------------------------+
+| ``msg01``, | | |
+| ``msg02``, | | |
+| ``msg03`` | | |
++------------------+-----------------------+---------------------------------+
+| ``mdb01``, | | |
+| ``mdb02``, | | |
+| ``mdb03`` | | |
++------------------+-----------------------+---------------------------------+
+| ``ctl01``, | | |
+| ``ctl02``, | | |
+| ``ctl03`` | | |
++------------------+-----------------------+---------------------------------+
+| ``odl01``, | | |
+| ``odl02``, | | |
+| ``odl03`` | | |
++------------------+-----------------------+---------------------------------+
+| ``mon01``, | | |
+| ``mon02``, | | |
+| ``mon03``, | | |
+| ``log01``, | | |
+| ``log02``, | | |
+| ``log03``, | | |
+| ``mtr01``, | | |
+| ``mtr02``, | | |
+| ``mtr03`` | | |
++------------------+-----------------------+---------------------------------+
+| ``cmp001``, | | |
+| ``cmp002``, | | |
+| ``...`` | | |
++------------------+-----------------------+---------------------------------+
+
+Network ``management``
+~~~~~~~~~~~~~~~~~~~~~~
+
+.. TIP::
+
+ ``management`` often has an IP range offset defined in ``IDF``.
-Mcpcontrol exists only on the Jumpserver and needs to be virtual because a DHCP server runs
-on this network and associates static host entry IPs for Salt and Maas VMs.
+
+.. NOTE::
+
+   Default addresses in the table below correspond to a ``management`` CIDR of
+ ``172.16.10.0/24`` (one of the commonly used values in OPNFV labs).
+ This is defined in ``IDF`` and can easily be changed to something else.
+
+.. WARNING::
+
+   Default addresses in the table below correspond to a ``management`` IP range of
+ ``172.16.10.10-172.16.10.254`` (one of the commonly used values in OPNFV
+ labs). This is defined in ``IDF`` and can easily be changed to something
+ else. Since the ``jumpserver`` address is manually assigned, this is
+ usually not subject to the IP range restriction in ``IDF``.
+
++------------------+-----------------------+---------------------------------+
+| Host | Offset in IP range | Default address |
++==================+=======================+=================================+
+| ``jumpserver`` | N/A | ``172.16.10.1`` |
+| | | (manual assignment) |
++------------------+-----------------------+---------------------------------+
+| ``cfg01`` | 1st | ``172.16.10.2`` |
+| | | (IP range ignored for now) |
++------------------+-----------------------+---------------------------------+
+| ``mas01`` | 2nd | ``172.16.10.12`` |
++------------------+-----------------------+---------------------------------+
+| ``prx`` | 3rd, | ``172.16.10.13``, |
+| | | |
+| ``prx01``, | 4th, | ``172.16.10.14``, |
+| ``prx02`` | 5th | ``172.16.10.15`` |
++------------------+-----------------------+---------------------------------+
+| ``gtw01``, | ... | ``...`` |
+| ``gtw02``, | | |
+| ``gtw03`` | | |
++------------------+-----------------------+---------------------------------+
+| ``kvm``, | | |
+| | | |
+| ``kvm01``, | | |
+| ``kvm02``, | | |
+| ``kvm03`` | | |
++------------------+-----------------------+---------------------------------+
+| ``dbs``, | | |
+| | | |
+| ``dbs01``, | | |
+| ``dbs02``, | | |
+| ``dbs03`` | | |
++------------------+-----------------------+---------------------------------+
+| ``msg``, | | |
+| | | |
+| ``msg01``, | | |
+| ``msg02``, | | |
+| ``msg03`` | | |
++------------------+-----------------------+---------------------------------+
+| ``mdb``, | | |
+| | | |
+| ``mdb01``, | | |
+| ``mdb02``, | | |
+| ``mdb03`` | | |
++------------------+-----------------------+---------------------------------+
+| ``ctl``, | | |
+| | | |
+| ``ctl01``, | | |
+| ``ctl02``, | | |
+| ``ctl03`` | | |
++------------------+-----------------------+---------------------------------+
+| ``odl``, | | |
+| | | |
+| ``odl01``, | | |
+| ``odl02``, | | |
+| ``odl03`` | | |
++------------------+-----------------------+---------------------------------+
+| ``mon``, | | |
+| | | |
+| ``mon01``, | | |
+| ``mon02``, | | |
+| ``mon03``, | | |
+| | | |
+| ``log``, | | |
+| | | |
+| ``log01``, | | |
+| ``log02``, | | |
+| ``log03``, | | |
+| | | |
+| ``mtr``, | | |
+| | | |
+| ``mtr01``, | | |
+| ``mtr02``, | | |
+| ``mtr03`` | | |
++------------------+-----------------------+---------------------------------+
+| ``cmp001``, | | |
+| ``cmp002``, | | |
+| ``...`` | | |
++------------------+-----------------------+---------------------------------+
+
+Network ``internal``
+~~~~~~~~~~~~~~~~~~~~
+
+.. TIP::
+
+ ``internal`` does not usually use an IP range offset in ``IDF``.
+
+.. NOTE::
-===================
-Accessing the Cloud
-===================
+   Default addresses in the table below correspond to an ``internal`` CIDR of
+ ``10.1.0.0/24`` (the usual value used in OPNFV labs).
+ This is defined in ``IDF`` and can easily be changed to something else.
+
++------------------+------------------------+--------------------------------+
+| Host | Offset in IP range | Default address |
++==================+========================+================================+
+| ``jumpserver`` | N/A | ``10.1.0.1`` |
+| | | (manual assignment, optional) |
++------------------+------------------------+--------------------------------+
+| ``gtw01``, | 1st, | ``10.1.0.2``, |
+| ``gtw02``, | 2nd, | ``10.1.0.3``, |
+| ``gtw03`` | 3rd | ``10.1.0.4`` |
++------------------+------------------------+--------------------------------+
+| ``cmp001``, | 4th, | ``10.1.0.5``, |
+| ``cmp002``, | 5th, | ``10.1.0.6``, |
+| ``...`` | ... | ``...`` |
++------------------+------------------------+--------------------------------+
-Access to any component of the deployed cloud is done from Jumpserver to user *ubuntu* with
-ssh key ``/var/lib/opnfv/mcp.rsa``. The example below is a connection to Salt master.
+Network ``public``
+~~~~~~~~~~~~~~~~~~
- .. code-block:: bash
+.. TIP::
- $ ssh -o StrictHostKeyChecking=no -i /var/lib/opnfv/mcp.rsa -l ubuntu 10.20.0.2
+ ``public`` often has an IP range offset defined in ``IDF``.
.. NOTE::
- The Salt master IP is not hard set, it is configurable via ``INSTALLER_IP`` during deployment
+   Default addresses in the table below correspond to a ``public`` CIDR of
+   ``172.30.10.0/24`` (one of the values used in OPNFV labs).
+ This is defined in ``IDF`` and can easily be changed to something else.
+
+.. WARNING::
+
+   Default addresses in the table below correspond to a ``public`` IP range of
+   ``172.30.10.100-172.30.10.254`` (one of the values used in OPNFV
+ labs). This is defined in ``IDF`` and can easily be changed to something
+ else. Since the ``jumpserver`` address is manually assigned, this is
+ usually not subject to the IP range restriction in ``IDF``.
+
++------------------+------------------------+--------------------------------+
+| Host | Offset in IP range | Default address |
++==================+========================+================================+
+| ``jumpserver`` | N/A | ``172.30.10.72`` |
+| | | (manual assignment, optional) |
++------------------+------------------------+--------------------------------+
+| ``prx``, | 1st, | ``172.30.10.101``, |
+| | | |
+| ``prx01``, | 2nd, | ``172.30.10.102``, |
+| ``prx02`` | 3rd | ``172.30.10.103`` |
++------------------+------------------------+--------------------------------+
+| ``gtw01``, | 4th, | ``172.30.10.104``, |
+| ``gtw02``, | 5th, | ``172.30.10.105``, |
+| ``gtw03`` | 6th | ``172.30.10.106`` |
++------------------+------------------------+--------------------------------+
+| ``ctl01``, | ... | ``...`` |
+| ``ctl02``, | | |
+| ``ctl03`` | | |
++------------------+------------------------+--------------------------------+
+| ``odl``, | | |
++------------------+------------------------+--------------------------------+
+| ``cmp001``, | | |
+| ``cmp002``, | | |
+| ``...`` | | |
++------------------+------------------------+--------------------------------+
+
+Accessing the Salt Master Node (``cfg01``)
+==========================================
+
+The Salt Master node (``cfg01``) runs an ``sshd`` server listening on
+``0.0.0.0:22``.
+
+To log in as the ``ubuntu`` user, use the RSA private key
+``/var/lib/opnfv/mcp.rsa``:
+
+.. code-block:: console
+
+ jenkins@jumpserver:~$ ssh -o StrictHostKeyChecking=no \
+ -i /var/lib/opnfv/mcp.rsa \
+ -l ubuntu 10.20.0.2
+ ubuntu@cfg01:~$
-Logging in to cluster nodes is possible from the Jumpserver and from Salt master. On the Salt master
-cluster hostnames can be used instead of IP addresses:
+
+.. NOTE::
- .. code-block:: bash
+ User ``ubuntu`` has sudo rights.
- $ sudo -i
- $ ssh -i mcp.rsa ubuntu@ctl01
+
+.. TIP::
-User *ubuntu* has sudo rights.
+ The Salt master IP (``10.20.0.2``) is not hard set, it is configurable via
+ ``INSTALLER_IP`` during deployment.
+
+.. TIP::
-=============================
-Exploring the Cloud with Salt
-=============================
+ Starting with the ``Gambia`` release, ``cfg01`` is containerized, so this
+ also works (from ``jumpserver`` only):
-To gather information about the cloud, the salt commands can be used. It is based
-around a master-minion idea where the salt-master pushes config to the minions to
-execute actions.
+
+.. code-block:: console
-For example tell salt to execute a ping to ``8.8.8.8`` on all the nodes.
+ jenkins@jumpserver:~$ docker exec -it fuel bash
+ root@cfg01:~$
-.. figure:: img/saltstack.png
+
+Accessing Cluster Nodes
+=======================
-Complex filters can be done to the target like compound queries or node roles.
-For more information about Salt see the :ref:`fuel_userguide_references` section.
+
+Logging in to cluster nodes is possible from the jumpserver, the Salt master,
+etc.
+
-Some examples are listed below. Note that these commands are issued from Salt master
-as *root* user.
+.. code-block:: console
+ jenkins@jumpserver:~$ ssh -i /var/lib/opnfv/mcp.rsa ubuntu@192.168.11.52
-#. View the IPs of all the components
-
- .. code-block:: bash
-
- root@cfg01:~$ salt "*" network.ip_addrs
- cfg01.mcp-pike-odl-ha.local:
- - 10.20.0.2
- - 172.16.10.100
- mas01.mcp-pike-odl-ha.local:
- - 10.20.0.3
- - 172.16.10.3
- - 192.168.11.3
- .........................
+
+.. TIP::
+ ``/etc/hosts`` on ``cfg01`` has all the cluster hostnames, which can be
+ used instead of IP addresses.
-#. View the interfaces of all the components and put the output in a file with yaml format
+
+.. code-block:: console
- .. code-block:: bash
+ root@cfg01:~$ ssh -i ~/fuel/mcp/scripts/mcp.rsa ubuntu@ctl01
- root@cfg01:~$ salt "*" network.interfaces --out yaml --output-file interfaces.yaml
- root@cfg01:~# cat interfaces.yaml
- cfg01.mcp-pike-odl-ha.local:
- enp1s0:
- hwaddr: 52:54:00:72:77:12
- inet:
- - address: 10.20.0.2
- broadcast: 10.20.0.255
- label: enp1s0
- netmask: 255.255.255.0
- inet6:
- - address: fe80::5054:ff:fe72:7712
- prefixlen: '64'
- scope: link
- up: true
- .........................
+
+Debugging ``MaaS`` Commissioning/Deployment Issues
+==================================================
+One of the most common issues when setting up a new POD is ``MaaS`` failing to
+commission/deploy the nodes, usually timing out after a couple of retries.
-#. View installed packages in MaaS node
+
+Such failures might indicate a misconfiguration in ``PDF``/``IDF``, in the
+``TOR`` switch configuration, or even faulty hardware.
- .. code-block:: bash
+
+Here are a couple of pointers for isolating the problem.
- root@cfg01:~# salt "mas*" pkg.list_pkgs
- mas01.mcp-pike-odl-ha.local:
- ----------
- accountsservice:
- 0.6.40-2ubuntu11.3
- acl:
- 2.2.52-3
- acpid:
- 1:2.0.26-1ubuntu2
- adduser:
- 3.113+nmu3ubuntu4
- anerd:
- 1
- .........................
+
+Accessing the ``MaaS`` Dashboard
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The ``MaaS`` web-based dashboard is available at
+``http://<mas01 IP address>:5240/MAAS``, e.g.
+``http://172.16.10.12:5240/MAAS``.
-#. Execute any linux command on all nodes (list the content of ``/var/log`` in this example)
+The administrator credentials are ``opnfv``/``opnfv_secret``.
- .. code-block:: bash
+.. NOTE::
- root@cfg01:~# salt "*" cmd.run 'ls /var/log'
- cfg01.mcp-pike-odl-ha.local:
- alternatives.log
- apt
- auth.log
- boot.log
- btmp
- cloud-init-output.log
- cloud-init.log
- .........................
+   The ``mas01`` VM does not automatically get an IP address assigned in the
+   public network segment. If the ``MaaS`` dashboard should be accessible
+   from the public network, such an address can be manually added to the
+   last VM NIC interface in ``mas01`` (which is already connected to the
+   public network bridge).
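+
+   As a minimal sketch (the NIC name inside the ``mas01`` VM and the chosen
+   address are assumptions and may differ on a given setup), such an address
+   could be added manually:
+
+   .. code-block:: console
+
+      jenkins@jumpserver:~$ ssh -i /var/lib/opnfv/mcp.rsa ubuntu@172.16.10.12
+      ubuntu@mas01:~$ sudo ip addr add 172.30.10.11/24 dev ens6  # assumed NIC
+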
+Ensure Commission/Deploy Timeouts Are Not Too Small
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-#. Execute any linux command on nodes using compound queries filter
+
+Some hardware takes longer to boot or to run the initial scripts during
+commissioning/deployment phases. If that's the case, ``MaaS`` will time out
+waiting for the process to finish. ``MaaS`` logs will reflect that, and the
+issue is usually easy to observe on the nodes' serial console: if the node
+PXE-boots the OS live image and starts executing cloud-init/curtin hooks
+without any critical errors, but is then powered down/shut off, the timeout
+was most likely hit.
- .. code-block:: bash
+
+To access the serial console of a node, see your board manufacturer's
+documentation. Some modern hardware no longer has a physical serial
+connector; it is usually replaced by a vendor-specific software-based
+interface.
- root@cfg01:~# salt -C '* and cfg01*' cmd.run 'ls /var/log'
- cfg01.mcp-pike-odl-ha.local:
- alternatives.log
- apt
- auth.log
- boot.log
- btmp
- cloud-init-output.log
- cloud-init.log
- .........................
+
+If the board supports ``SOL`` (Serial Over LAN) over the ``IPMI`` lanplus
+protocol, a simpler way to hook into the serial console is to use
+``ipmitool``.
+
+.. TIP::
-#. Execute any linux command on nodes using role filter
+ Early boot stage output might not be shown over ``SOL``, but only over
+ the video console provided by the (vendor-specific) interface.
- .. code-block:: bash
+
+.. code-block:: console
- root@cfg01:~# salt -I 'nova:compute' cmd.run 'ls /var/log'
- cmp001.mcp-pike-odl-ha.local:
- alternatives.log
- apache2
- apt
- auth.log
- btmp
- ceilometer
- cinder
- cloud-init-output.log
- cloud-init.log
- .........................
+ jenkins@jumpserver:~$ ipmitool -H <host BMC IP> -U <user> -P <pass> \
+ -I lanplus sol activate
+
+To bypass this, simply set a larger timeout in the ``IDF``.
+
+Check Jumpserver Network Configuration
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
-===================
-Accessing Openstack
-===================
+.. code-block:: console
-Once the deployment is complete, Openstack CLI is accessible from controller VMs (ctl01..03).
-Openstack credentials are at ``/root/keystonercv3``.
+ jenkins@jumpserver:~$ brctl show
+   jenkins@jumpserver:~$ ifconfig -a
+
- .. code-block:: bash
++-----------------------+------------------------------------------------+
+| Configuration item | Expected behavior |
++=======================+================================================+
+| IP addresses assigned | IP addresses should be assigned to the bridge, |
+| to bridge ports | and not to individual bridge ports |
++-----------------------+------------------------------------------------+
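+
+For instance, a quick way to verify this (the bridge and port names below are
+examples; the actual names depend on the local setup):
+
+.. code-block:: console
+
+   jenkins@jumpserver:~$ ip -4 addr show dev mgmt_br   # IP expected here
+   jenkins@jumpserver:~$ ip -4 addr show dev em2       # no IP expected here
+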
- root@ctl01:~# source keystonercv3
- root@ctl01:~# openstack image list
- +--------------------------------------+-----------------------------------------------+--------+
- | ID | Name | Status |
- +======================================+===============================================+========+
- | 152930bf-5fd5-49c2-b3a1-cae14973f35f | CirrosImage | active |
- | 7b99a779-78e4-45f3-9905-64ae453e3dcb | Ubuntu16.04 | active |
- +--------------------------------------+-----------------------------------------------+--------+
+Check Network Connectivity Between Nodes on the Jumpserver
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+``cfg01`` is a Docker container running on the ``jumpserver``, connected to
+Docker networks (created by docker-compose automatically on container up),
+which in turn are connected using veth pairs to their ``libvirt`` managed
+counterparts.
-The OpenStack Dashboard, Horizon, is available at ``http://<proxy public VIP>``.
-The administrator credentials are **admin**/**opnfv_secret**.
+
+For example, the ``mcpcontrol`` network(s) should look like the output below.
+
-.. figure:: img/horizon_login.png
+.. code-block:: console
+ jenkins@jumpserver:~$ brctl show mcpcontrol
+ bridge name bridge id STP enabled interfaces
+ mcpcontrol 8000.525400064f77 yes mcpcontrol-nic
+ veth_mcp0
+ vnet8
-A full list of IPs/services is available at ``<proxy public VIP>:8090`` for baremetal deploys.
+ jenkins@jumpserver:~$ docker network ls
+ NETWORK ID NAME DRIVER SCOPE
+ 81a0fdb3bd78 docker-compose_docker-mcpcontrol macvlan local
+ [...]
-.. figure:: img/salt_services_ip.png
+ jenkins@jumpserver:~$ docker network inspect docker-compose_mcpcontrol
+ [
+ {
+ "Name": "docker-compose_mcpcontrol",
+ [...]
+ "Options": {
+ "parent": "veth_mcp1"
+ },
+ }
+ ]
-==============================
-Guest Operating System Support
-==============================
+
+Before investigating the rest of the cluster networking configuration, the
+first thing to check is that ``cfg01`` has network connectivity to other
+jumpserver hosted nodes, e.g. ``mas01``, and to the jumpserver itself
+(provided that the jumpserver has an IP address in that particular network
+segment).
-There are a number of possibilities regarding the guest operating systems which can be spawned
-on the nodes. The current system spawns virtual machines for VCP VMs on the KVM nodes and VMs
-requested by users in OpenStack compute nodes. Currently the system supports the following
-UEFI-images for the guests:
-
-+------------------+-------------------+------------------+
-| OS name | x86_64 status | aarch64 status |
-+==================+===================+==================+
-| Ubuntu 17.10 | untested | Full support |
-+------------------+-------------------+------------------+
-| Ubuntu 16.04 | Full support | Full support |
-+------------------+-------------------+------------------+
-| Ubuntu 14.04 | untested | Full support |
-+------------------+-------------------+------------------+
-| Fedora atomic 27 | untested | Full support |
-+------------------+-------------------+------------------+
-| Fedora cloud 27 | untested | Full support |
-+------------------+-------------------+------------------+
-| Debian | untested | Full support |
-+------------------+-------------------+------------------+
-| Centos 7 | untested | Not supported |
-+------------------+-------------------+------------------+
-| Cirros 0.3.5 | Full support | Full support |
-+------------------+-------------------+------------------+
-| Cirros 0.4.0 | Full support | Full support |
-+------------------+-------------------+------------------+
-
-
-The above table covers only UEFI image and implies OVMF/AAVMF firmware on the host. An x86 deployment
-also supports non-UEFI images, however that choice is up to the underlying hardware and the administrator
-to make.
-
-The images for the above operating systems can be found in their respective websites.
+
+.. code-block:: console
+ jenkins@jumpserver:~$ docker exec -it fuel bash
+ root@cfg01:~# ifconfig -a | grep inet
+ inet addr:10.20.0.2 Bcast:0.0.0.0 Mask:255.255.255.0
+ inet addr:172.16.10.2 Bcast:0.0.0.0 Mask:255.255.255.0
+ inet addr:192.168.11.2 Bcast:0.0.0.0 Mask:255.255.255.0
-=================
-OpenStack Storage
-=================
+
+For each network of interest (``mcpcontrol``, ``mgmt``, ``PXE/admin``), check
+that ``cfg01`` can ping the jumpserver IP in that network segment, as well as
+the ``mas01`` IP in that network.
+
+.. NOTE::
-OpenStack Cinder is the project behind block storage in OpenStack and Fuel@OPNFV supports LVM out of the box.
-By default x86 supports 2 additional block storage devices and ARMBand supports only one.
-More devices can be supported if the OS-image created has additional properties allowing block storage devices
-to be spawned as SCSI drives. To do this, add the properties below to the server:
+ ``mcpcontrol`` is set up at VM bringup, so it should always be available,
+ while the other networks are configured by Salt as part of the
+ ``virtual_init`` STATE file.
- .. code-block:: bash
+
+.. code-block:: console
- $ openstack image set --property hw_disk_bus='scsi' --property hw_scsi_model='virtio-scsi' <image>
+ root@cfg01:~# ping -c1 10.20.0.1 # mcpcontrol jumpserver IP
+ root@cfg01:~# ping -c1 10.20.0.3 # mcpcontrol mas01 IP
-The choice regarding which bus to use for the storage drives is an important one. Virtio-blk is the default
-choice for Fuel@OPNFV which attaches the drives in ``/dev/vdX``. However, since we want to be able to attach a
-larger number of volumes to the virtual machines, we recommend the switch to SCSI drives which are attached
-in ``/dev/sdX`` instead. Virtio-scsi is a little worse in terms of performance but the ability to add a larger
-number of drives combined with added features like ZFS, Ceph et al, leads us to suggest the use of virtio-scsi in Fuel@OPNFV for both architectures.
+
+.. TIP::
-More details regarding the differences and performance of virtio-blk vs virtio-scsi are beyond the scope
-of this manual but can be easily found in other sources online like `4`_ or `5`_.
+ ``mcpcontrol`` CIDR is configurable via ``INSTALLER_IP`` env var during
+ deployment. However, IP offsets inside that segment are hard set to ``.1``
+ for the jumpserver, ``.2`` for ``cfg01``, respectively to ``.3`` for
+ ``mas01`` node.
-.. _4: https://mpolednik.github.io/2017/01/23/virtio-blk-vs-virtio-scsi/
+
+.. code-block:: console
-.. _5: https://www.ovirt.org/develop/release-management/features/storage/virtio-scsi/
+ root@cfg01:~# salt 'mas*' pillar.item --out yaml \
+ _param:infra_maas_node01_deploy_address \
+ _param:infra_maas_node01_address
+ mas01.mcp-ovs-noha.local:
+ _param:infra_maas_node01_address: 172.16.10.12
+ _param:infra_maas_node01_deploy_address: 192.168.11.3
-Additional configuration for configuring images in openstack can be found in the OpenStack Glance documentation.
+ root@cfg01:~# ping -c1 192.168.11.1 # PXE/admin jumpserver IP
+ root@cfg01:~# ping -c1 192.168.11.3 # PXE/admin mas01 IP
+ root@cfg01:~# ping -c1 172.16.10.1 # mgmt jumpserver IP
+ root@cfg01:~# ping -c1 172.16.10.12 # mgmt mas01 IP
+
+.. TIP::
+
+   Jumpserver IP addresses for ``PXE/admin``, ``mgmt`` and ``public`` bridges
+   are user-chosen and manually set, so the snippets above should be adjusted
+   accordingly if the user chose an IP other than ``.1`` in each CIDR.
-===================
-Openstack Endpoints
-===================
+
+Alternatively, a quick ``nmap`` scan would work just as well.
+
-For each Openstack service three endpoints are created: ``admin``, ``internal`` and ``public``.
+.. code-block:: console
- .. code-block:: bash
+ root@cfg01:~# apt update && apt install -y nmap
+ root@cfg01:~# nmap -sn 10.20.0.0/24 # expected: cfg01, mas01, jumpserver
+ root@cfg01:~# nmap -sn 192.168.11.0/24 # expected: cfg01, mas01, jumpserver
+ root@cfg01:~# nmap -sn 172.16.10.0/24 # expected: cfg01, mas01, jumpserver
- ubuntu@ctl01:~$ openstack endpoint list --service keystone
- +----------------------------------+-----------+--------------+--------------+---------+-----------+------------------------------+
- | ID | Region | Service Name | Service Type | Enabled | Interface | URL |
- +----------------------------------+-----------+--------------+--------------+---------+-----------+------------------------------+
- | 008fec57922b4e9e8bf02c770039ae77 | RegionOne | keystone | identity | True | internal | http://172.16.10.26:5000/v3 |
- | 1a1f3c3340484bda9ef7e193f50599e6 | RegionOne | keystone | identity | True | admin | http://172.16.10.26:35357/v3 |
- | b0a47d42d0b6491b995d7e6230395de8 | RegionOne | keystone | identity | True | public | https://10.0.15.2:5000/v3 |
- +----------------------------------+-----------+--------------+--------------+---------+-----------+------------------------------+
+
+Check ``DHCP`` Reaches Cluster Nodes
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-MCP sets up all Openstack services to talk to each other over unencrypted
-connections on the internal management network. All admin/internal endpoints use
-plain http, while the public endpoints are https connections terminated via nginx
-at the VCP proxy VMs.
+One common symptom observed during failed commissioning is that ``DHCP`` does
+not work as expected between cluster nodes (baremetal nodes in the cluster; or
+virtual machines on the jumpserver in case of ``hybrid`` deployments) and
+the ``MaaS`` node.
-To access the public endpoints an SSL certificate has to be provided. For
-convenience, the installation script will copy the required certificate into
-to the cfg01 node at ``/etc/ssl/certs/os_cacert``.
+
+To confirm or rule out this possibility, monitor the serial console output of
+one (or more) cluster nodes during ``MaaS`` commissioning. If the node is
+properly configured to attempt PXE boot, yet it times out waiting for an IP
+address from ``mas01`` ``DHCP``, it's worth checking that ``DHCP`` packets
+reach the ``jumpserver``, respectively the ``mas01`` VM.
-Copy the certificate from the cfg01 node to the client that will access the https
-endpoints and place it under ``/etc/ssl/certs/``. The SSL connection will be established
-automatically after.
+
+.. code-block:: console
- .. code-block:: bash
+ jenkins@jumpserver:~$ sudo apt update && sudo apt install -y dhcpdump
+ jenkins@jumpserver:~$ sudo dhcpdump -i admin_br
- $ ssh -o StrictHostKeyChecking=no -i /var/lib/opnfv/mcp.rsa -l ubuntu 10.20.0.2 \
- "cat /etc/ssl/certs/os_cacert" | sudo tee /etc/ssl/certs/os_cacert
+
+.. TIP::
+ If ``DHCP`` requests are present, but no replies are sent, ``iptables``
+ might be interfering on the jumpserver.
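+
+   A quick way to check for that (chains and rules are site-specific) is to
+   inspect the firewall counters on the jumpserver while a node attempts to
+   PXE boot:
+
+   .. code-block:: console
+
+      jenkins@jumpserver:~$ sudo iptables -L FORWARD -n -v
+      jenkins@jumpserver:~$ sudo iptables -t nat -L -n -v
+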
+Check ``MaaS`` Logs
+~~~~~~~~~~~~~~~~~~~
+
+If networking looks fine, yet nodes still fail to commission and/or deploy,
+``MaaS`` logs might offer more details about the failure:
+
+* ``/var/log/maas/maas.log``
+* ``/var/log/maas/rackd.log``
+* ``/var/log/maas/regiond.log``
+
+.. TIP::
+
+ If the problem is with the cluster node and not on the ``MaaS`` server,
+   the node's kernel logs usually contain useful information.
+ These are saved via rsyslog on the ``mas01`` node in
+ ``/var/log/maas/rsyslog``.
+
+Recovering Failed Deployments
=============================
-Reclass model viewer tutorial
-=============================
+The first deploy attempt might fail due to various reasons. If the problem
+is not systemic (i.e. fixing it will not introduce incompatible configuration
+changes, like setting a different ``INSTALLER_IP``), the environment is safe
+to be reused and the deployment process can pick up from where it left off.
+
+Leveraging these mechanisms requires a minimum understanding of how the
+deploy process works, at least for manual ``STATE`` runs.
+
+Automatic (re)deploy
+~~~~~~~~~~~~~~~~~~~~
+
+OPNFV Fuel's ``deploy.sh`` script offers a dedicated argument for this, ``-f``,
+which will skip executing the first ``N`` ``STATE`` files, where ``N`` is the
+number of ``-f`` occurrences in the argument list.
+
+.. TIP::
+
+ The list of ``STATE`` files to be executed for a specific environment
+ depends on the OPNFV scenario chosen, deployment type (``virtual``,
+ ``baremetal`` or ``hybrid``) and the presence/absence of a ``VCP``
+ (virtualized control plane).
+
+e.g.: Let's consider a ``baremetal`` environment, with ``VCP`` and a simple
+scenario ``os-nosdn-nofeature-ha``, where ``deploy.sh`` failed executing the
+``openstack_ha`` ``STATE`` file.
+
+
+The simplest redeploy approach (which usually works for **any** combination of
+deployment type/VCP/scenario) is to issue the same deploy command as the
+original attempt used, then adding a single ``-f``:
+
+.. code-block:: console
+
+ jenkins@jumpserver:~/fuel$ ci/deploy.sh -l <lab_name> -p <pod_name> \
+ -s <scenario> [...] \
+ -f # skips running the virtual_init STATE file
-In order to get a better understanding on the reclass model Fuel uses, the `reclass-doc
-<https://github.com/jirihybek/reclass-doc>`_ can be used to visualise the reclass model.
-A simplified installation can be done with the use of a docker ubuntu container. This
-approach will avoid installing packages on the host, which might collide with other packages.
-After the installation is done, a webbrowser on the host can be used to view the results.
+
+All ``STATE`` files are re-entrant, so the above is equivalent (but a little
+slower) to skipping all ``STATE`` files before the ``openstack_ha`` one, like:
+
+.. code-block:: console
+
+ jenkins@jumpserver:~/fuel$ ci/deploy.sh -l <lab_name> -p <pod_name> \
+ -s <scenario> [...] \
+ -ffff # skips virtual_init, maas, baremetal_init, virtual_control_plane
+
+.. TIP::
+
+ For fine tuning the infrastructure setup steps executed during deployment,
+ see also the ``-e`` and ``-P`` deploy arguments.
.. NOTE::
- The host can be any device with Docker package already installed.
- The user which runs the docker needs to have root priviledges.
+   On rare occasions, the cluster cannot be idempotently redeployed (e.g.
+ broken MySQL/Galera cluster), in which case some cleanup is due before
+ (re)running the ``STATE`` files. See ``-E`` deploy arg, which allows
+ either forcing a ``MaaS`` node deletion, then redeployment of all
+ baremetal nodes, if used twice (``-EE``); or only erasing the ``VCP`` VMs
+ if used only once (``-E``).
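+
+   For example, re-running the original deploy command with ``-E`` appended
+   (all other arguments below are placeholders) erases the ``VCP`` VMs
+   before the ``STATE`` files are executed again:
+
+   .. code-block:: console
+
+      jenkins@jumpserver:~/fuel$ ci/deploy.sh -l <lab_name> -p <pod_name> \
+                                 -s <scenario> [...] \
+                                 -E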
+
+Manual ``STATE`` Run
+~~~~~~~~~~~~~~~~~~~~
+
+Instead of leveraging the full ``deploy.sh``, one could execute the ``STATE``
+files one by one (or partially) from the ``cfg01``.
+
+However, this requires a better understanding of how the list of ``STATE``
+files to be executed is constructed for a specific scenario, depending on the
+deployment type and on whether the cluster has baremetal nodes; the logic is
+implemented in:
+
+* ``mcp/config/scenario/defaults.yaml.j2``
+* ``mcp/config/scenario/<scenario-name>.yaml``
-**Instructions**
+
+e.g.: For the example presented above (baremetal with ``VCP``,
+``os-nosdn-nofeature-ha``), the list of ``STATE`` files would be:
+
+* ``virtual_init``
+* ``maas``
+* ``baremetal_init``
+* ``virtual_control_plane``
+* ``openstack_ha``
+* ``networks``
-#. Create a new directory at any location
+
+To execute one (or more) of the remaining ``STATE`` files after a failure:
+
- .. code-block:: bash
+.. code-block:: console
- $ mkdir -p modeler
+ jenkins@jumpserver:~$ docker exec -it fuel bash
+ root@cfg01:~$ cd ~/fuel/mcp/config/states
+ root@cfg01:~/fuel/mcp/config/states$ ./openstack_ha
+ root@cfg01:~/fuel/mcp/config/states$ CI_DEBUG=true ./networks
+
+For even finer granularity, one can also run the commands in a ``STATE`` file
+one by one manually, e.g. if the execution failed applying the ``rabbitmq``
+sls:
-#. Place fuel repo in the above directory
+
+.. code-block:: console
- .. code-block:: bash
+ root@cfg01:~$ salt -I 'rabbitmq:server' state.sls rabbitmq
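+
+Salt's ``test=True`` argument can be used first to preview what a state would
+change, without actually applying it:
+
+.. code-block:: console
+
+   root@cfg01:~$ salt -I 'rabbitmq:server' state.sls rabbitmq test=True
+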
- $ cd modeler
- $ git clone https://gerrit.opnfv.org/gerrit/fuel && cd fuel
+Exploring the Cloud with Salt
+=============================
+
+To gather information about the cloud, the ``salt`` commands can be used.
+Salt is based around a master-minion model, where the Salt master pushes
+configuration to the minions and instructs them to execute actions.
-#. Create a container and mount the above host directory
+
+For example, tell Salt to execute a ping to ``8.8.8.8`` on all the nodes.
+
- .. code-block:: bash
+.. code-block:: console
+
+ root@cfg01:~$ salt "*" network.ping 8.8.8.8
+ ^^^ target
+ ^^^^^^^^^^^^ function to execute
+ ^^^^^^^ argument passed to the function
+
+.. TIP::
+
+ Complex filters can be done to the target like compound queries or node roles.
+
+For more information about Salt see the :ref:`fuel_userguide_references`
+section.
+
+Some examples are listed below. Note that these commands are issued from Salt
+master as ``root`` user.
+
+View the IPs of All the Components
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+.. code-block:: console
+
+ root@cfg01:~$ salt "*" network.ip_addrs
+ cfg01.mcp-odl-ha.local:
+ - 10.20.0.2
+ - 172.16.10.100
+ mas01.mcp-odl-ha.local:
+ - 10.20.0.3
+ - 172.16.10.3
+ - 192.168.11.3
+ .........................
+
+View the Interfaces of All the Components and Put the Output in a ``yaml`` File
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+.. code-block:: console
+
+ root@cfg01:~$ salt "*" network.interfaces --out yaml --output-file interfaces.yaml
+ root@cfg01:~# cat interfaces.yaml
+ cfg01.mcp-odl-ha.local:
+ enp1s0:
+ hwaddr: 52:54:00:72:77:12
+ inet:
+ - address: 10.20.0.2
+ broadcast: 10.20.0.255
+ label: enp1s0
+ netmask: 255.255.255.0
+ inet6:
+ - address: fe80::5054:ff:fe72:7712
+ prefixlen: '64'
+ scope: link
+ up: true
+ .........................
+
+View Installed Packages on MaaS Node
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+.. code-block:: console
+
+ root@cfg01:~# salt "mas*" pkg.list_pkgs
+ mas01.mcp-odl-ha.local:
+ ----------
+ accountsservice:
+ 0.6.40-2ubuntu11.3
+ acl:
+ 2.2.52-3
+ acpid:
+ 1:2.0.26-1ubuntu2
+ adduser:
+ 3.113+nmu3ubuntu4
+ anerd:
+ 1
+ .........................
+
+Execute Any Linux Command on All Nodes (e.g. ``ls /var/log``)
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+.. code-block:: console
+
+ root@cfg01:~# salt "*" cmd.run 'ls /var/log'
+ cfg01.mcp-odl-ha.local:
+ alternatives.log
+ apt
+ auth.log
+ boot.log
+ btmp
+ cloud-init-output.log
+ cloud-init.log
+ .........................
+
+Execute Any Linux Command on Nodes Using Compound Queries Filter
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+.. code-block:: console
+
+ root@cfg01:~# salt -C '* and cfg01*' cmd.run 'ls /var/log'
+ cfg01.mcp-odl-ha.local:
+ alternatives.log
+ apt
+ auth.log
+ boot.log
+ btmp
+ cloud-init-output.log
+ cloud-init.log
+ .........................
+
+Execute Any Linux Command on Nodes Using Role Filter
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+.. code-block:: console
+
+ root@cfg01:~# salt -I 'nova:compute' cmd.run 'ls /var/log'
+ cmp001.mcp-odl-ha.local:
+ alternatives.log
+ apache2
+ apt
+ auth.log
+ btmp
+ ceilometer
+ cinder
+ cloud-init-output.log
+ cloud-init.log
+ .........................
- $ docker run --privileged -it -v <absolute_path>/modeler:/host ubuntu bash
+
+Accessing OpenStack
+===================
+
+Once the deployment is complete, the OpenStack CLI is accessible from the
+controller VMs (``ctl01`` ... ``ctl03``).
+OpenStack credentials are at ``/root/keystonercv3``.
+
-#. Install all the required packages inside the container.
+.. code-block:: console
- .. code-block:: bash
+ root@ctl01:~# source keystonercv3
+ root@ctl01:~# openstack image list
+ +--------------------------------------+-----------------------------------------------+--------+
+ | ID | Name | Status |
+ +======================================+===============================================+========+
+ | 152930bf-5fd5-49c2-b3a1-cae14973f35f | CirrosImage | active |
+ | 7b99a779-78e4-45f3-9905-64ae453e3dcb | Ubuntu16.04 | active |
+ +--------------------------------------+-----------------------------------------------+--------+
- $ apt-get update
- $ apt-get install -y npm nodejs
- $ npm install -g reclass-doc
- $ cd /host/fuel/mcp/reclass
- $ ln -s /usr/bin/nodejs /usr/bin/node
- $ reclass-doc --output /host /host/fuel/mcp/reclass
+
+The OpenStack Dashboard, Horizon, is available at ``http://<proxy public VIP>``.
+The administrator credentials are ``admin``/``opnfv_secret``.
+
+.. figure:: img/horizon_login.png
+ :width: 60%
+ :align: center
+
+A full list of IPs/services is available at ``<proxy public VIP>:8090`` for
+``baremetal`` deploys.
+
+.. figure:: img/salt_services_ip.png
+ :width: 60%
+ :align: center
+
+Guest Operating System Support
+==============================
+
+There are a number of possibilities regarding the guest operating systems
+which can be spawned on the nodes.
+The current system spawns virtual machines for the ``VCP`` VMs on the KVM
+nodes and for the VMs requested by users in OpenStack compute nodes.
+Currently the system supports the following ``UEFI`` images for the guests:
+
++------------------+-------------------+--------------------+
+| OS name | ``x86_64`` status | ``aarch64`` status |
++==================+===================+====================+
+| Ubuntu 17.10 | untested | Full support |
++------------------+-------------------+--------------------+
+| Ubuntu 16.04 | Full support | Full support |
++------------------+-------------------+--------------------+
+| Ubuntu 14.04 | untested | Full support |
++------------------+-------------------+--------------------+
+| Fedora atomic 27 | untested | Full support |
++------------------+-------------------+--------------------+
+| Fedora cloud 27 | untested | Full support |
++------------------+-------------------+--------------------+
+| Debian | untested | Full support |
++------------------+-------------------+--------------------+
+| Centos 7 | untested | Not supported |
++------------------+-------------------+--------------------+
+| Cirros 0.3.5 | Full support | Full support |
++------------------+-------------------+--------------------+
+| Cirros 0.4.0 | Full support | Full support |
++------------------+-------------------+--------------------+
+
+The above table covers only ``UEFI`` images and implies ``OVMF``/``AAVMF``
+firmware on the host. An ``x86_64`` deployment also supports ``non-UEFI``
+images, however that choice is up to the underlying hardware and the
+administrator to make.
+
+The images for the above operating systems can be found on their respective
+websites.
+
+OpenStack Storage
+=================
+OpenStack Cinder is the project behind block storage in OpenStack and OPNFV
+Fuel supports LVM out of the box.
-#. View the results from the host by using a browser. The file to open should be now at modeler/index.html
+
+By default ``x86_64`` supports 2 additional block storage devices, while
+``aarch64`` supports only one.
- .. figure:: img/reclass_doc.png
+
+More devices can be supported if the OS image used has additional
+properties allowing block storage devices to be spawned as ``SCSI`` drives.
+To do this, add the properties below to the image:
+
+.. code-block:: console
+
+ root@ctl01:~$ openstack image set --property hw_disk_bus='scsi' \
+ --property hw_scsi_model='virtio-scsi' \
+ <image>
+
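+To double-check that the properties were applied (the image name below is a
+placeholder):
+
+.. code-block:: console
+
+   root@ctl01:~$ openstack image show <image> -c properties
+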
+The choice regarding which bus to use for the storage drives is an important
+one. ``virtio-blk`` is the default choice for OPNFV Fuel, which attaches the
+drives in ``/dev/vdX``. However, since we want to be able to attach a
+larger number of volumes to the virtual machines, we recommend the switch to
+``SCSI`` drives which are attached in ``/dev/sdX`` instead.
+
+``virtio-scsi`` is a little worse in terms of performance, but the ability to
+add a larger number of drives, combined with support for features like ZFS,
+Ceph et al., leads us to suggest the use of ``virtio-scsi`` in OPNFV Fuel for
+both architectures.
+
+More details regarding the differences and performance of ``virtio-blk`` vs
+``virtio-scsi`` are beyond the scope of this manual but can be easily found
+in other sources online like `VirtIO SCSI`_ or `VirtIO performance`_.
+
+Additional information on configuring images in OpenStack can be found in
+the OpenStack Glance documentation.
+
+OpenStack Endpoints
+===================
+
+For each OpenStack service three endpoints are created: ``admin``, ``internal``
+and ``public``.
+
+.. code-block:: console
+
+ ubuntu@ctl01:~$ openstack endpoint list --service keystone
+ +----------------------------------+-----------+--------------+--------------+---------+-----------+------------------------------+
+ | ID | Region | Service Name | Service Type | Enabled | Interface | URL |
+ +----------------------------------+-----------+--------------+--------------+---------+-----------+------------------------------+
+ | 008fec57922b4e9e8bf02c770039ae77 | RegionOne | keystone | identity | True | internal | http://172.16.10.26:5000/v3 |
+ | 1a1f3c3340484bda9ef7e193f50599e6 | RegionOne | keystone | identity | True | admin | http://172.16.10.26:35357/v3 |
+ | b0a47d42d0b6491b995d7e6230395de8 | RegionOne | keystone | identity | True | public | https://10.0.15.2:5000/v3 |
+ +----------------------------------+-----------+--------------+--------------+---------+-----------+------------------------------+
+
+MCP sets up all OpenStack services to talk to each other over unencrypted
+connections on the internal management network. All admin/internal endpoints
+use plain http, while the public endpoints are https connections terminated
+via nginx at the ``VCP`` proxy VMs.
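+
+For example, to list only the public endpoints:
+
+.. code-block:: console
+
+   ubuntu@ctl01:~$ openstack endpoint list --interface public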
+
+To access the public endpoints an SSL certificate has to be provided. For
+convenience, the installation script will copy the required certificate
+to the ``cfg01`` node at ``/etc/ssl/certs/os_cacert``.
+
+Copy the certificate from the ``cfg01`` node to the client that will access
+the https endpoints and place it under ``/etc/ssl/certs/``.
+The SSL connection will then be established automatically.
+
+.. code-block:: console
+
+ jenkins@jumpserver:~$ ssh -o StrictHostKeyChecking=no -i /var/lib/opnfv/mcp.rsa -l ubuntu 10.20.0.2 \
+ "cat /etc/ssl/certs/os_cacert" | sudo tee /etc/ssl/certs/os_cacert
+
+Reclass Model Viewer Tutorial
+=============================
+
+In order to get a better understanding of the ``reclass`` model Fuel uses, the
+`reclass-doc`_ tool can be used to visualise the ``reclass`` model.
+
+To avoid installing packages on the ``jumpserver`` or another host, the
+``cfg01`` Docker container can be used. Since the ``fuel`` git repository
+located on the ``jumpserver`` is already mounted inside ``cfg01`` container,
+the results can be visualized using a web browser on the ``jumpserver`` at the
+end of the procedure.
+
+.. code-block:: console
+
+ jenkins@jumpserver:~$ docker exec -it fuel bash
+ root@cfg01:~$ apt-get update
+ root@cfg01:~$ apt-get install -y npm nodejs
+ root@cfg01:~$ npm install -g reclass-doc
+ root@cfg01:~$ ln -s /usr/bin/nodejs /usr/bin/node
+ root@cfg01:~$ reclass-doc --output ~/fuel/mcp/reclass/modeler \
+ ~/fuel/mcp/reclass
+
+The generated documentation should be available on the ``jumpserver`` inside
+``fuel`` git repo subpath ``mcp/reclass/modeler/index.html``.
+
+.. figure:: img/reclass_doc.png
+ :width: 60%
+ :align: center
.. _fuel_userguide_references:
-==========
References
==========
-1) :ref:`fuel-release-installation-label`
-2) `Saltstack Documentation <https://docs.saltstack.com/en/latest/topics/>`_
-3) `Saltstack Formulas <https://salt-formulas.readthedocs.io/en/latest/>`_
-4) `Virtio performance <https://mpolednik.github.io/2017/01/23/virtio-blk-vs-virtio-scsi/>`_
-5) `Virtio SCSI <https://www.ovirt.org/develop/release-management/features/storage/virtio-scsi/>`_
+#. :ref:`OPNFV Fuel Installation Instruction <fuel-installation>`
+#. `Saltstack Documentation`_
+#. `Saltstack Formulas`_
+#. `VirtIO performance`_
+#. `VirtIO SCSI`_
+
+.. _`Saltstack Documentation`: https://docs.saltstack.com/en/latest/topics/
+.. _`Saltstack Formulas`: https://salt-formulas.readthedocs.io/en/latest/
+.. _`VirtIO performance`: https://mpolednik.github.io/2017/01/23/virtio-blk-vs-virtio-scsi/
+.. _`VirtIO SCSI`: https://www.ovirt.org/develop/release-management/features/storage/virtio-scsi/
+.. _`reclass-doc`: https://github.com/jirihybek/reclass-doc