author      Alexandru Avadanii <Alexandru.Avadanii@enea.com>    2018-09-28 16:35:10 +0200
committer   Alexandru Avadanii <Alexandru.Avadanii@enea.com>    2018-11-05 16:42:15 +0100
commit      170d2d1c195d001d6ca786364aaf3c10e714ae36 (patch)
tree        c057ed1c6d32c719e28d06ea0efd7f1d030de54f /docs/release/installation
parent      532427ad43e1c1728bf21317aea6af00d9758227 (diff)
[docs] Refresh for Gambia release
- s/Fuel@OPNFV/OPNFV Fuel/g;
- added README files for ci/scenarios/patches directories;
- refresh & simplify cluster overview diagrams;
- unify labels across docs;
- fix TOC numbering;
- remove local labs PDF/IDF files, as they are merely duplicates of
Pharos files included as a git submodule;
JIRA: FUEL-397
Change-Id: I87f61938eeb67f13fd9205d5226a30f02e55d267
Signed-off-by: Alexandru Avadanii <Alexandru.Avadanii@enea.com>
Diffstat (limited to 'docs/release/installation')
-rw-r--r--   docs/release/installation/img/README.rst                 |   14
-rw-r--r--   docs/release/installation/img/arm_pod5.png               |  bin 178079 -> 0 bytes
-rw-r--r--   docs/release/installation/img/fuel_baremetal.png         |  bin 272115 -> 0 bytes
-rw-r--r--   docs/release/installation/img/fuel_baremetal_ha.png      |  bin 0 -> 289121 bytes
-rw-r--r--   docs/release/installation/img/fuel_baremetal_noha.png    |  bin 0 -> 197550 bytes
-rw-r--r--   docs/release/installation/img/fuel_hybrid_noha.png       |  bin 0 -> 191144 bytes
-rw-r--r--   docs/release/installation/img/fuel_virtual.png           |  bin 216442 -> 0 bytes
-rw-r--r--   docs/release/installation/img/fuel_virtual_noha.png      |  bin 0 -> 236222 bytes
-rw-r--r--   docs/release/installation/img/lf_pod2.png                |  bin 178795 -> 0 bytes
-rw-r--r--   docs/release/installation/index.rst                      |   16
-rw-r--r--   docs/release/installation/installation.instruction.rst   | 1634
11 files changed, 1207 insertions, 457 deletions
diff --git a/docs/release/installation/img/README.rst b/docs/release/installation/img/README.rst
index 4cb1f77d2..bf630445b 100644
--- a/docs/release/installation/img/README.rst
+++ b/docs/release/installation/img/README.rst
@@ -1,12 +1,18 @@
 .. This work is licensed under a Creative Commons Attribution 4.0 International License.
 .. SPDX-License-Identifier: CC-BY-4.0
-.. (c) 2017 Ericsson AB, Mirantis Inc., Enea AB and others.
+.. (c) 2018 Ericsson AB, Mirantis Inc., Enea AB and others.
+
+:orphan:
 
 Image Editor
 ============
-All files in this directory have been created using `draw.io <https://draw.io>`_.
+
+All files in this directory have been created using `draw.io`_.
 
 Image Sources
 =============
-Image sources are embedded in each `png` file.
-To edit an image, import the `png` file using `draw.io <https://draw.io>`_.
+
+Image sources are embedded in each ``png`` file.
+To edit an image, import the ``png`` file using `draw.io`_.
+
+.. _`draw.io`: https://draw.io
diff --git a/docs/release/installation/img/arm_pod5.png b/docs/release/installation/img/arm_pod5.png
deleted file mode 100644
index 87edb8f45..000000000
--- a/docs/release/installation/img/arm_pod5.png
+++ /dev/null
Binary files differ
diff --git a/docs/release/installation/img/fuel_baremetal.png b/docs/release/installation/img/fuel_baremetal.png
deleted file mode 100644
index 27e762021..000000000
--- a/docs/release/installation/img/fuel_baremetal.png
+++ /dev/null
Binary files differ
diff --git a/docs/release/installation/img/fuel_baremetal_ha.png b/docs/release/installation/img/fuel_baremetal_ha.png
new file mode 100644
index 000000000..f2ed6106f
--- /dev/null
+++ b/docs/release/installation/img/fuel_baremetal_ha.png
Binary files differ
diff --git a/docs/release/installation/img/fuel_baremetal_noha.png b/docs/release/installation/img/fuel_baremetal_noha.png
new file mode 100644
index 000000000..5a3b42919
--- /dev/null
+++ b/docs/release/installation/img/fuel_baremetal_noha.png
Binary files differ
diff --git a/docs/release/installation/img/fuel_hybrid_noha.png b/docs/release/installation/img/fuel_hybrid_noha.png
new file mode 100644
index 000000000..51449a777
--- /dev/null
+++ b/docs/release/installation/img/fuel_hybrid_noha.png
Binary files differ
diff --git a/docs/release/installation/img/fuel_virtual.png b/docs/release/installation/img/fuel_virtual.png
deleted file mode 100644
index d7664865d..000000000
--- a/docs/release/installation/img/fuel_virtual.png
+++ /dev/null
Binary files differ
diff --git a/docs/release/installation/img/fuel_virtual_noha.png b/docs/release/installation/img/fuel_virtual_noha.png
new file mode 100644
index 000000000..7d05a9dcd
--- /dev/null
+++ b/docs/release/installation/img/fuel_virtual_noha.png
Binary files differ
diff --git a/docs/release/installation/img/lf_pod2.png b/docs/release/installation/img/lf_pod2.png
deleted file mode 100644
index da419d87c..000000000
--- a/docs/release/installation/img/lf_pod2.png
+++ /dev/null
Binary files differ
diff --git a/docs/release/installation/index.rst b/docs/release/installation/index.rst
index 00332262f..866044eb5 100644
--- a/docs/release/installation/index.rst
+++ b/docs/release/installation/index.rst
@@ -1,24 +1,10 @@
-.. _fuel-installation:
-
 .. This work is licensed under a Creative Commons Attribution 4.0 International License.
 .. http://creativecommons.org/licenses/by/4.0
 .. (c) Open Platform for NFV Project, Inc. and its contributors
 
-.. _fuel-release-installation-label:
-
-****************************************
-Installation instruction for Fuel\@OPNFV
-****************************************
-
-Contents:
+.. _fuel-installation:
 
 .. toctree::
-   :numbered:
    :maxdepth: 2
 
    installation.instruction.rst
-
-Indices and tables
-==================
-
-* :ref:`search`
diff --git a/docs/release/installation/installation.instruction.rst b/docs/release/installation/installation.instruction.rst
index 9aaebdd7c..40f9d26ae 100644
--- a/docs/release/installation/installation.instruction.rst
+++ b/docs/release/installation/installation.instruction.rst
@@ -2,637 +2,1395 @@
 .. http://creativecommons.org/licenses/by/4.0
 .. (c) Open Platform for NFV Project, Inc. and its contributors
 
-========
+***********************************
+OPNFV Fuel Installation Instruction
+***********************************
+
 Abstract
 ========
 
-This document describes how to install the Fraser release of
+This document describes how to install the ``Gambia`` release of
 OPNFV when using Fuel as a deployment tool, covering its usage,
 limitations, dependencies and required system resources.
-This is an unified documentation for both x86_64 and aarch64
+
+This is a unified documentation for both ``x86_64`` and ``aarch64``
 architectures. All information is common for both architectures
 except when explicitly stated.
 
-============
 Introduction
 ============
 
 This document provides guidelines on how to install and
-configure the Fraser release of OPNFV when using Fuel as a
+configure the ``Gambia`` release of OPNFV when using Fuel as a
 deployment tool, including required software and hardware
 configurations.
 
 Although the available installation options provide a high degree
 of freedom in how the system is set up, including architecture,
 services and features, etc., said permutations may not provide
 an OPNFV compliant reference architecture. This document provides a
-step-by-step guide that results in an OPNFV Fraser compliant
+step-by-step guide that results in an OPNFV ``Gambia`` compliant
 deployment.
 
 The audience of this document is assumed to have good knowledge of
 networking and Unix/Linux administration.
 
-=======
-Preface
-=======
-
-Before starting the installation of the Fraser release of
+Before starting the installation of the ``Gambia`` release of
 OPNFV, using Fuel as a deployment tool, some planning must be
 done.
 
 Preparations
 ============
 
-Prior to installation, a number of deployment specific parameters must be collected, those are:
+Prior to installation, a number of deployment specific parameters must be
+collected, those are:
 
 #. Provider sub-net and gateway information
 
-#. Provider VLAN information
-
-#. Provider DNS addresses
+#. Provider ``VLAN`` information
 
-#. Provider NTP addresses
+#. Provider ``DNS`` addresses
 
-#. Network overlay you plan to deploy (VLAN, VXLAN, FLAT)
-
-#. How many nodes and what roles you want to deploy (Controllers, Storage, Computes)
-
-#. Monitoring options you want to deploy (Ceilometer, Syslog, etc.).
-
-#. Other options not covered in the document are available in the links above
+#. Provider ``NTP`` addresses
 
+#. How many nodes and what roles you want to deploy (Controllers, Computes)
 
 This information will be needed for the configuration procedures
 provided in this document.
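As a preview of where this collected information ends up, the values map onto
the ``idf.net_config`` and ``idf.fuel.network`` sections of the ``IDF`` file
described later in this document. The snippet below is a minimal sketch of
that mapping; every address, VLAN tag and NTP host in it is a placeholder to
be replaced with the data gathered for the target POD.

.. code-block:: yaml

    idf:
      net_config:
        public:
          vlan: native            # provider VLAN information
          network: 10.0.16.0      # provider sub-net
          mask: 24
          gateway: 10.0.16.254    # provider gateway
          dns:                    # provider DNS addresses
            - 8.8.8.8
            - 8.8.4.4
      fuel:
        network:
          ntp_strata_host1: 1.pool.ntp.org   # provider NTP address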
-========================================= -Hardware Requirements for Virtual Deploys -========================================= - -The following minimum hardware requirements must be met for the virtual -installation of Fraser using Fuel: - -+----------------------------+--------------------------------------------------------+ -| **HW Aspect** | **Requirement** | -| | | -+============================+========================================================+ -| **1 Jumpserver** | A physical node (also called Foundation Node) that | -| | will host a Salt Master VM and each of the VM nodes in | -| | the virtual deploy | -+----------------------------+--------------------------------------------------------+ -| **CPU** | Minimum 1 socket with Virtualization support | -+----------------------------+--------------------------------------------------------+ -| **RAM** | Minimum 32GB/server (Depending on VNF work load) | -+----------------------------+--------------------------------------------------------+ -| **Disk** | Minimum 100GB (SSD or SCSI (15krpm) highly recommended)| -+----------------------------+--------------------------------------------------------+ - - -=========================================== -Hardware Requirements for Baremetal Deploys -=========================================== - -The following minimum hardware requirements must be met for the baremetal -installation of Fraser using Fuel: - -+-------------------------+------------------------------------------------------+ -| **HW Aspect** | **Requirement** | -| | | -+=========================+======================================================+ -| **# of nodes** | Minimum 5 | -| | | -| | - 3 KVM servers which will run all the controller | -| | services | -| | | -| | - 2 Compute nodes | -| | | -+-------------------------+------------------------------------------------------+ -| **CPU** | Minimum 1 socket with Virtualization support | -+-------------------------+------------------------------------------------------+ -| **RAM** | Minimum 16GB/server (Depending on VNF work load) | -+-------------------------+------------------------------------------------------+ -| **Disk** | Minimum 256GB 10kRPM spinning disks | -+-------------------------+------------------------------------------------------+ -| **Networks** | 4 VLANs (PUBLIC, MGMT, STORAGE, PRIVATE) - can be | -| | a mix of tagged/native | -| | | -| | 1 Un-Tagged VLAN for PXE Boot - ADMIN Network | -| | | -| | Note: These can be allocated to a single NIC - | -| | or spread out over multiple NICs | -+-------------------------+------------------------------------------------------+ -| **1 Jumpserver** | A physical node (also called Foundation Node) that | -| | hosts the Salt Master and MaaS VMs | -+-------------------------+------------------------------------------------------+ -| **Power management** | All targets need to have power management tools that | -| | allow rebooting the hardware and setting the boot | -| | order (e.g. IPMI) | -+-------------------------+------------------------------------------------------+ +Hardware Requirements +===================== -.. NOTE:: - - All nodes including the Jumpserver must have the same architecture (either x86_64 or aarch64). +Mininum hardware requirements depend on the deployment type. -.. NOTE:: +.. WARNING:: - For aarch64 deployments an UEFI compatible firmware with PXE support is needed (e.g. EDK2). 
+ If ``baremetal`` nodes are present in the cluster, the architecture of the + nodes running the control plane (``kvm01``, ``kvm02``, ``kvm03`` for + ``HA`` scenarios, respectively ``ctl01``, ``gtw01``, ``odl01`` for + ``noHA`` scenarios) and the ``jumpserver`` architecture must be the same + (either ``x86_64`` or ``aarch64``). + +.. TIP:: + + The compute nodes may have different architectures, but extra + configuration might be required for scheduling VMs on the appropiate host. + This use-case is not tested in OPNFV CI, so it is considered experimental. + +Hardware Requirements for ``virtual`` Deploys +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +The following minimum hardware requirements must be met for the ``virtual`` +installation of ``Gambia`` using Fuel: + ++------------------+------------------------------------------------------+ +| **HW Aspect** | **Requirement** | +| | | ++==================+======================================================+ +| **1 Jumpserver** | A physical node (also called Foundation Node) that | +| | will host a Salt Master container and each of the VM | +| | nodes in the virtual deploy | ++------------------+------------------------------------------------------+ +| **CPU** | Minimum 1 socket with Virtualization support | ++------------------+------------------------------------------------------+ +| **RAM** | Minimum 32GB/server (Depending on VNF work load) | ++------------------+------------------------------------------------------+ +| **Disk** | Minimum 100GB (SSD or 15krpm SCSI highly recommended)| ++------------------+------------------------------------------------------+ + +Hardware Requirements for ``baremetal`` Deploys +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +The following minimum hardware requirements must be met for the ``baremetal`` +installation of ``Gambia`` using Fuel: + ++------------------+------------------------------------------------------+ +| **HW Aspect** | **Requirement** | +| | | ++==================+======================================================+ +| **1 Jumpserver** | A physical node (also called Foundation Node) that | +| | hosts the Salt Master container and MaaS VM | ++------------------+------------------------------------------------------+ +| **# of nodes** | Minimum 5 | +| | | +| | - 3 KVM servers which will run all the controller | +| | services | +| | | +| | - 2 Compute nodes | +| | | +| | .. WARNING:: | +| | | +| | ``kvm01``, ``kvm02``, ``kvm03`` nodes and the | +| | ``jumpserver`` must have the same architecture | +| | (either ``x86_64`` or ``aarch64``). | +| | | +| | .. NOTE:: | +| | | +| | ``aarch64`` nodes should run an ``UEFI`` | +| | compatible firmware with PXE support | +| | (e.g. ``EDK2``). | ++------------------+------------------------------------------------------+ +| **CPU** | Minimum 1 socket with Virtualization support | ++------------------+------------------------------------------------------+ +| **RAM** | Minimum 16GB/server (Depending on VNF work load) | ++------------------+------------------------------------------------------+ +| **Disk** | Minimum 256GB 10kRPM spinning disks | ++------------------+------------------------------------------------------+ +| **Networks** | Mininum 4 | +| | | +| | - 3 VLANs (``public``, ``mgmt``, ``private``) - | +| | can be a mix of tagged/native | +| | | +| | - 1 Un-Tagged VLAN for PXE Boot - | +| | ``PXE/admin`` Network | +| | | +| | .. NOTE:: | +| | | +| | These can be allocated to a single NIC | +| | or spread out over multiple NICs. 
| ++------------------+------------------------------------------------------+ +| **Power mgmt** | All targets need to have power management tools that | +| | allow rebooting the hardware (e.g. ``IPMI``). | ++------------------+------------------------------------------------------+ + +Hardware Requirements for ``hybrid`` (``baremetal`` + ``virtual``) Deploys +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +The following minimum hardware requirements must be met for the ``hybrid`` +installation of ``Gambia`` using Fuel: + ++------------------+------------------------------------------------------+ +| **HW Aspect** | **Requirement** | +| | | ++==================+======================================================+ +| **1 Jumpserver** | A physical node (also called Foundation Node) that | +| | hosts the Salt Master container, MaaS VM and | +| | each of the virtual nodes defined in ``PDF`` | ++------------------+------------------------------------------------------+ +| **# of nodes** | .. NOTE:: | +| | | +| | Depends on ``PDF`` configuration. | +| | | +| | If the control plane is virtualized, minimum | +| | baremetal requirements are: | +| | | +| | - 2 Compute nodes | +| | | +| | If the computes are virtualized, minimum | +| | baremetal requirements are: | +| | | +| | - 3 KVM servers which will run all the controller | +| | services | +| | | +| | .. WARNING:: | +| | | +| | ``kvm01``, ``kvm02``, ``kvm03`` nodes and the | +| | ``jumpserver`` must have the same architecture | +| | (either ``x86_64`` or ``aarch64``). | +| | | +| | .. NOTE:: | +| | | +| | ``aarch64`` nodes should run an ``UEFI`` | +| | compatible firmware with PXE support | +| | (e.g. ``EDK2``). | ++------------------+------------------------------------------------------+ +| **CPU** | Minimum 1 socket with Virtualization support | ++------------------+------------------------------------------------------+ +| **RAM** | Minimum 16GB/server (Depending on VNF work load) | ++------------------+------------------------------------------------------+ +| **Disk** | Minimum 256GB 10kRPM spinning disks | ++------------------+------------------------------------------------------+ +| **Networks** | Same as for ``baremetal`` deployments | ++------------------+------------------------------------------------------+ +| **Power mgmt** | Same as for ``baremetal`` deployments | ++------------------+------------------------------------------------------+ -=============================== Help with Hardware Requirements -=============================== +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Calculate hardware requirements: -For information on compatible hardware types available for use, -please see `Fuel OpenStack Hardware Compatibility List <https://www.mirantis.com/software/hardware-compatibility/>`_ - When choosing the hardware on which you will deploy your OpenStack environment, you should think about: -- CPU -- Consider the number of virtual machines that you plan to deploy in your cloud environment and the CPUs per virtual machine. +- CPU -- Consider the number of virtual machines that you plan to deploy in + your cloud environment and the CPUs per virtual machine. -- Memory -- Depends on the amount of RAM assigned per virtual machine and the controller node. +- Memory -- Depends on the amount of RAM assigned per virtual machine and the + controller node. -- Storage -- Depends on the local drive space per virtual machine, remote volumes that can be attached to a virtual machine, and object storage. 
+- Storage -- Depends on the local drive space per virtual machine, remote + volumes that can be attached to a virtual machine, and object storage. -- Networking -- Depends on the Choose Network Topology, the network bandwidth per virtual machine, and network storage. +- Networking -- Depends on the Choose Network Topology, the network bandwidth + per virtual machine, and network storage. -================================================ -Top of the Rack (TOR) Configuration Requirements -================================================ +Top of the Rack (``TOR``) Configuration Requirements +==================================================== The switching infrastructure provides connectivity for the OPNFV infrastructure operations, tenant networks (East/West) and provider connectivity (North/South); it also provides needed connectivity for the Storage Area Network (SAN). + To avoid traffic congestion, it is strongly suggested that three physically separated networks are used, that is: 1 physical network for administration and control, one physical network for tenant private and public networks, and one physical network for SAN. + The switching connectivity can (but does not need to) be fully redundant, in such case it comprises a redundant 10GE switch pair for each of the three physically separated networks. -The physical TOR switches are **not** automatically configured from -the Fuel OPNFV reference platform. All the networks involved in the OPNFV -infrastructure as well as the provider networks and the private tenant -VLANs needs to be manually configured. +.. WARNING:: -Manual configuration of the Fraser hardware platform should -be carried out according to the `OPNFV Pharos Specification -<https://wiki.opnfv.org/display/pharos/Pharos+Specification>`_. + The physical ``TOR`` switches are **not** automatically configured from + the OPNFV Fuel reference platform. All the networks involved in the OPNFV + infrastructure as well as the provider networks and the private tenant + VLANs needs to be manually configured. + +Manual configuration of the ``Gambia`` hardware platform should +be carried out according to the `OPNFV Pharos Specification`_. -============================ OPNFV Software Prerequisites ============================ -The Jumpserver node should be pre-provisioned with an operating system, -according to the Pharos specification. Relevant network bridges should -also be pre-configured (e.g. admin_br, mgmt_br, public_br). +.. NOTE:: -- The admin bridge (admin_br) is mandatory for the baremetal nodes PXE booting during Fuel installation. -- The management bridge (mgmt_br) is required for testing suites (e.g. functest/yardstick), it is - suggested to pre-configure it for debugging purposes. -- The public bridge (public_br) is also nice to have for debugging purposes, but not mandatory. + All prerequisites described in this chapter apply to the ``jumpserver`` + node. -The user running the deploy script on the Jumpserver should belong to ``sudo`` and ``libvirt`` groups, -and have passwordless sudo access. +OS Distribution Support +~~~~~~~~~~~~~~~~~~~~~~~ -The following example adds the groups to the user ``jenkins`` +The Jumpserver node should be pre-provisioned with an operating system, +according to the `OPNFV Pharos specification`_. -.. 
code-block:: bash +OPNFV Fuel has been validated by CI using the following distributions +installed on the Jumpserver: - $ sudo usermod -aG sudo jenkins - $ sudo usermod -aG libvirt jenkins - $ reboot - $ groups - jenkins sudo libvirt +- ``CentOS 7`` (recommended by Pharos specification); +- ``Ubuntu Xenial 16.04``; - $ sudo visudo - ... - %jenkins ALL=(ALL) NOPASSWD:ALL +.. TOPIC:: ``aarch64`` notes -The folder containing the temporary deploy artifacts (``/home/jenkins/tmpdir`` in the examples below) -needs to have mask 777 in order for libvirt to be able to use them. + For an ``aarch64`` Jumpserver, the ``libvirt`` minimum required + version is ``3.x``, ``3.5`` or newer highly recommended. -.. code-block:: bash + .. TIP:: - $ mkdir -p -m 777 /home/jenkins/tmpdir + ``CentOS 7`` (``aarch64``) distro provided packages are already new + enough. -For an AArch64 Jumpserver, the ``libvirt`` minimum required version is 3.x, 3.5 or newer highly recommended. -While not mandatory, upgrading the kernel and QEMU on the Jumpserver is also highly recommended -(especially on AArch64 Jumpservers). + .. WARNING:: -For CentOS 7.4 (AArch64), distro provided packages are already new enough. -For Ubuntu 16.04 (arm64), distro packages are too old and 3rd party repositories should be used. -For convenience, Armband provides a DEB repository holding all the required packages. + ``Ubuntu 16.04`` (``arm64``), distro packages are too old and 3rd party + repositories should be used. -To add and enable the Armband repository on an Ubuntu 16.04 system, -create a new sources list file ``/apt/sources.list.d/armband.list`` with the following contents: + For convenience, Armband provides a DEB repository holding all the + required packages. -.. code-block:: bash + To add and enable the Armband repository on an Ubuntu 16.04 system, + create a new sources list file ``/apt/sources.list.d/armband.list`` + with the following contents: - $ cat /etc/apt/sources.list.d/armband.list - //for OpenStack Queens release - deb http://linux.enea.com/mcp-repos/queens/xenial queens-armband main + .. code-block:: console - $ apt-get update + jenkins@jumpserver:~$ cat /etc/apt/sources.list.d/armband.list + deb http://linux.enea.com/mcp-repos/queens/xenial queens-armband main -Fuel@OPNFV has been validated by CI using the following distributions -installed on the Jumpserver: + jenkins@jumpserver:~$ sudo apt-key adv --keyserver keys.gnupg.net \ + --recv 798AB1D1 + jenkins@jumpserver:~$ sudo apt-get update -- CentOS 7 (recommended by Pharos specification); -- Ubuntu Xenial; +OS Distribution Packages +~~~~~~~~~~~~~~~~~~~~~~~~ -.. WARNING:: +By default, the ``deploy.sh`` script will automatically install the required +distribution package dependencies on the Jumpserver, so the end user does +not have to manually install them before starting the deployment. - The install script expects ``libvirt`` to be already running on the Jumpserver. - In case ``libvirt`` packages are missing, the script will install them; but - depending on the OS distribution, the user might have to start the ``libvirtd`` - service manually, then run the deploy script again. Therefore, it - is recommended to install libvirt-bin explicitly on the Jumpserver before the deployment. +This includes Python, QEMU, libvirt etc. -.. NOTE:: +.. SEEALSO:: - It is also recommended to install the newer kernel on the Jumpserver before the deployment. + To disable automatic package installation (and/or upgrade) during + deployment, check out the ``-P`` deploy argument. .. 
WARNING:: - The install script will automatically install the rest of required distro package - dependencies on the Jumpserver, unless explicitly asked not to (via ``-P`` deploy arg). - This includes Python, QEMU, libvirt etc. + The install script expects ``libvirt`` to be already running on the + Jumpserver. -.. WARNING:: +In case ``libvirt`` packages are missing, the script will install them; but +depending on the OS distribution, the user might have to start the +``libvirt`` daemon service manually, then run the deploy script again. - The install script will alter Jumpserver sysconf and disable ``net.bridge.bridge-nf-call``. +Therefore, it is recommended to install ``libvirt`` explicitly on the +Jumpserver before the deployment. -.. code-block:: bash +While not mandatory, upgrading the kernel on the Jumpserver is also highly +recommended. - $ apt-get install linux-image-generic-hwe-16.04-edge libvirt-bin +.. code-block:: console + jenkins@jumpserver:~$ sudo apt-get install \ + linux-image-generic-hwe-16.04-edge libvirt-bin + jenkins@jumpserver:~$ sudo reboot -========================================== -OPNFV Software Installation and Deployment -========================================== +User Requirements +~~~~~~~~~~~~~~~~~ -This section describes the process of installing all the components needed to -deploy the full OPNFV reference platform stack across a server cluster. +The user running the deploy script on the Jumpserver should belong to +``sudo`` and ``libvirt`` groups, and have passwordless sudo access. -The installation is done with Mirantis Cloud Platform (MCP), which is based on -a reclass model. This model provides the formula inputs to Salt, to make the deploy -automatic based on deployment scenario. -The reclass model covers: +.. NOTE:: - - Infrastructure node definition: Salt Master node (cfg01) and MaaS node (mas01) - - OpenStack node definition: Controller nodes (ctl01, ctl02, ctl03) and Compute nodes (cmp001, cmp002) - - Infrastructure components to install (software packages, services etc.) - - OpenStack components and services (rabbitmq, galera etc.), as well as all configuration for them + Throughout this documentation, we will use the ``jenkins`` username for + this role. +The following example adds the groups to the user ``jenkins``: -Automatic Installation of a Virtual POD -======================================= +.. code-block:: console -For virtual deploys all the targets are VMs on the Jumpserver. The deploy script will: + jenkins@jumpserver:~$ sudo usermod -aG sudo jenkins + jenkins@jumpserver:~$ sudo usermod -aG libvirt jenkins + jenkins@jumpserver:~$ sudo reboot + jenkins@jumpserver:~$ groups + jenkins sudo libvirt - - Create a Salt Master VM on the Jumpserver which will drive the installation - - Create the bridges for networking with virsh (only if a real bridge does not already exist for a given network) - - Install OpenStack on the targets - - Leverage Salt to install & configure OpenStack services + jenkins@jumpserver:~$ sudo visudo + ... + %jenkins ALL=(ALL) NOPASSWD:ALL -.. figure:: img/fuel_virtual.png - :align: center - :alt: Fuel@OPNFV Virtual POD Network Layout Examples +Local Artifact Storage +~~~~~~~~~~~~~~~~~~~~~~ + +The folder containing the temporary deploy artifacts (``/home/jenkins/tmpdir`` +in the examples below) needs to have mask ``777`` in order for ``libvirt`` to +be able to use them. + +.. 
code-block:: console + + jenkins@jumpserver:~$ mkdir -p -m 777 /home/jenkins/tmpdir + +Network Configuration +~~~~~~~~~~~~~~~~~~~~~ + +Relevant Linux bridges should also be pre-configured for certain networks, +depending on the type of the deployment. + ++------------+---------------+----------------------------------------------+ +| Network | Linux Bridge | Linux Bridge necessity based on deploy type | +| | +--------------+---------------+---------------+ +| | | ``virtual`` | ``baremetal`` | ``hybrid`` | ++============+===============+==============+===============+===============+ +| PXE/admin | ``admin_br`` | absent | present | present | ++------------+---------------+--------------+---------------+---------------+ +| management | ``mgmt_br`` | optional | optional, | optional, | +| | | | recommended, | recommended, | +| | | | required for | required for | +| | | | ``functest``, | ``functest``, | +| | | | ``yardstick`` | ``yardstick`` | ++------------+---------------+--------------+---------------+---------------+ +| internal | ``int_br`` | optional | optional | present | ++------------+---------------+--------------+---------------+---------------+ +| public | ``public_br`` | optional | optional, | optional, | +| | | | recommended, | recommended, | +| | | | useful for | useful for | +| | | | debugging | debugging | ++------------+---------------+--------------+---------------+---------------+ + +.. TIP:: + + IP addresses should be assigned to the created bridge interfaces (not + to one of its ports). - Fuel@OPNFV Virtual POD Network Layout Examples +.. WARNING:: - +-----------------------+------------------------------------------------------------------------+ - | cfg01 | Salt Master VM | - +-----------------------+------------------------------------------------------------------------+ - | ctl01 | Controller VM | - +-----------------------+------------------------------------------------------------------------+ - | cmp001/cmp002 | Compute VMs | - +-----------------------+------------------------------------------------------------------------+ - | gtw01 | Gateway VM with neutron services (dhcp agent, L3 agent, metadata, etc) | - +-----------------------+------------------------------------------------------------------------+ - | odl01 | VM on which ODL runs (for scenarios deployed with ODL) | - +-----------------------+------------------------------------------------------------------------+ + ``PXE/admin`` bridge (``admin_br``) **must** have an IP address. +Changes ``deploy.sh`` Will Perform to Jumpserver OS +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -In this figure there are examples of two virtual deploys: - - Jumphost 1 has only virsh bridges, created by the deploy script - - Jumphost 2 has a mix of Linux and virsh bridges; When Linux bridge exists for a specified network, - the deploy script will skip creating a virsh bridge for it +.. WARNING:: -.. NOTE:: + The install script will alter Jumpserver sysconf and disable + ``net.bridge.bridge-nf-call``. - A virtual network ``mcpcontrol`` is always created for initial connection of the VMs on Jumphost. +.. WARNING:: + The install script will automatically install and/or upgrade the + required distribution package dependencies on the Jumpserver, + unless explicitly asked not to (via the ``-P`` deploy arg). 
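To illustrate the bridge prerequisites above, the sketch below creates an
``admin_br`` and a ``mgmt_br`` bridge with ``iproute2`` and assigns an IP
address to the ``PXE/admin`` bridge itself. The physical port names
(``eno2``, ``eno3.300``) and the address are placeholders for this example
only; on a real jumpserver this configuration would normally live in the
distribution's persistent network configuration (e.g.
``/etc/network/interfaces`` on Ubuntu) rather than in ad-hoc commands.

.. code-block:: console

    jenkins@jumpserver:~$ sudo ip link add admin_br type bridge
    jenkins@jumpserver:~$ sudo ip link add mgmt_br type bridge
    # enslave the physical ports wired to the PXE/admin and mgmt networks
    jenkins@jumpserver:~$ sudo ip link set eno2 master admin_br
    jenkins@jumpserver:~$ sudo ip link set eno3.300 master mgmt_br
    # the PXE/admin bridge must carry an IP address on the bridge itself
    jenkins@jumpserver:~$ sudo ip addr add 192.168.11.1/24 dev admin_br
    jenkins@jumpserver:~$ sudo ip link set admin_br up
    jenkins@jumpserver:~$ sudo ip link set mgmt_br up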
-Automatic Installation of a Baremetal POD -========================================= +OPNFV Software Configuration (``XDF``) +====================================== -The baremetal installation process can be done by editing the information about -hardware and environment in the reclass files, or by using the files Pod Descriptor -File (PDF) and Installer Descriptor File (IDF) as described in the OPNFV Pharos project. -These files contain all the information about the hardware and network of the deployment -that will be fed to the reclass model during deployment. +.. versionadded:: 5.0.0 +.. versionchanged:: 7.0.0 -The installation is done automatically with the deploy script, which will: +Unlike the old approach based on OpenStack Fuel, OPNFV Fuel no longer has a +graphical user interface for configuring the environment, but instead +switched to OPNFV specific descriptor files that we will call generically +``XDF``: - - Create a Salt Master VM on the Jumpserver which will drive the installation - - Create a MaaS Node VM on the Jumpserver which will provision the targets - - Install OpenStack on the targets - - Leverage MaaS to provision baremetal nodes with the operating system - - Leverage Salt to configure the operating system on the baremetal nodes - - Leverage Salt to install & configure OpenStack services +- ``PDF`` (POD Descriptor File) provides an abstraction of the target POD + with all its hardware characteristics and required parameters; +- ``IDF`` (Installer Descriptor File) extends the ``PDF`` with POD related + parameters required by the OPNFV Fuel installer; +- ``SDF`` (Scenario Descriptor File, **not** yet adopted) will later + replace embedded scenario definitions, describing the roles and layout of + the cluster enviroment for a given reference architecture; -.. figure:: img/fuel_baremetal.png - :align: center - :alt: Fuel@OPNFV Baremetal POD Network Layout Example - - Fuel@OPNFV Baremetal POD Network Layout Example - - +-----------------------+---------------------------------------------------------+ - | cfg01 | Salt Master VM | - +-----------------------+---------------------------------------------------------+ - | mas01 | MaaS Node VM | - +-----------------------+---------------------------------------------------------+ - | kvm01..03 | Baremetals which hold the VMs with controller functions | - +-----------------------+---------------------------------------------------------+ - | cmp001/cmp002 | Baremetal compute nodes | - +-----------------------+---------------------------------------------------------+ - | prx01/prx02 | Proxy VMs for Nginx | - +-----------------------+---------------------------------------------------------+ - | msg01..03 | RabbitMQ Service VMs | - +-----------------------+---------------------------------------------------------+ - | dbs01..03 | MySQL service VMs | - +-----------------------+---------------------------------------------------------+ - | mdb01..03 | Telemetry VMs | - +-----------------------+---------------------------------------------------------+ - | odl01 | VM on which ODL runs (for scenarios deployed with ODL) | - +-----------------------+---------------------------------------------------------+ - | Tenant VM | VM running in the cloud | - +-----------------------+---------------------------------------------------------+ - -In the baremetal deploy all bridges but "mcpcontrol" are Linux bridges. For the Jumpserver, it is -required to pre-configure at least the admin_br bridge for the PXE/Admin. 
-For the targets, the bridges are created by the deploy script. +.. TIP:: + + For ``virtual`` deployments, if the ``public`` network will be accessed + from outside the ``jumpserver`` node, a custom ``PDF``/``IDF`` pair is + required for customizing ``idf.net_config.public`` and + ``idf.fuel.jumphost.bridges.public``. .. NOTE:: - A virtual network ``mcpcontrol`` is always created for initial connection of the VMs on Jumphost. + For OPNFV CI PODs, as well as simple (no ``public`` bridge) ``virtual`` + deployments, ``PDF``/``IDF`` files are already available in the + `pharos git repo`_. They can be used as a reference for user-supplied + inputs or to kick off a deployment right away. ++----------+------------------------------------------------------------------+ +| LAB/POD | ``PDF``/``IDF`` availability based on deploy type | +| +------------------------+--------------------+--------------------+ +| | ``virtual`` | ``baremetal`` | ``hybrid`` | ++==========+========================+====================+====================+ +| OPNFV CI | available in | available in | N/A, as currently | +| POD | `pharos git repo`_ | `pharos git repo`_ | there are 0 hybrid | +| | (e.g. | (e.g. ``lf-pod2``, | PODs in OPNFV CI | +| | ``ericsson-virtual1``) | ``arm-pod5``) | | ++----------+------------------------+--------------------+--------------------+ +| local or | ``user-supplied`` | ``user-supplied`` | ``user-supplied`` | +| new POD | | | | ++----------+------------------------+--------------------+--------------------+ -Steps to Start the Automatic Deploy -=================================== +.. TIP:: -These steps are common both for virtual and baremetal deploys. + Both ``PDF`` and ``IDF`` structure are modelled as ``yaml`` schemas in the + `pharos git repo`_, also included as a git submodule in OPNFV Fuel. -#. Clone the Fuel code from gerrit + .. SEEALSO:: - For x86_64 + - ``mcp/scripts/pharos/config/pdf/pod1.schema.yaml`` + - ``mcp/scripts/pharos/config/pdf/idf-pod1.schema.yaml`` - .. code-block:: bash + Schema files are also used during the initial deployment phase to validate + the user-supplied input ``PDF``/``IDF`` files. - $ git clone https://git.opnfv.org/fuel - $ cd fuel +``PDF`` +~~~~~~~ - For aarch64 +The Pod Descriptor File is a hardware description of the POD +infrastructure. The information is modeled under a ``yaml`` structure. - .. code-block:: bash +The hardware description covers the ``jumphost`` node and a set of ``nodes`` +for the cluster target boards. For each node the following characteristics +are defined: - $ git clone https://git.opnfv.org/armband - $ cd armband +- Node parameters including ``CPU`` features and total memory; +- A list of available disks; +- Remote management parameters; +- Network interfaces list including name, ``MAC`` address, link speed, + advanced features; -#. Checkout the Fraser release +.. SEEALSO:: - .. code-block:: bash + A reference file with the expected ``yaml`` structure is available at: - $ git checkout opnfv-6.2.1 + - ``mcp/scripts/pharos/config/pdf/pod1.yaml`` -#. Start the deploy script + For more information on ``PDF``, see the `OPNFV PDF Wiki Page`_. - Besides the basic options, there are other recommended deploy arguments: +.. WARNING:: - - use ``-D`` option to enable the debug info - - use ``-S`` option to point to a tmp dir where the disk images are saved. 
The images will be - re-used between deploys - - use ``|& tee`` to save the deploy log to a file + The fixed IPs defined in ``PDF`` are ignored by the OPNFV Fuel installer + script and it will instead assign addresses based on the network ranges + defined in ``IDF``. + + For more details on the way IP addresses are assigned, see + :ref:`OPNFV Fuel User Guide <fuel-userguide>`. + +``PDF``/``IDF`` Role (hostname) Mapping +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Upcoming ``SDF`` support will introduce a series of possible node roles. +Until that happens, the role mapping logic is hardcoded, based on node index +in ``PDF``/``IDF`` (which should also be in sync, i.e. the parameters of the +``n``-th cluster node defined in ``PDF`` should be the ``n``-th node in +``IDF`` structures too). + ++-------------+------------------+----------------------+ +| Node index | ``HA`` scenario | ``noHA`` scenario | ++=============+==================+======================+ +| 1st | ``kvm01`` | ``ctl01`` | ++-------------+------------------+----------------------+ +| 2nd | ``kvm02`` | ``gtw01`` | ++-------------+------------------+----------------------+ +| 3rd | ``kvm03`` | ``odl01``/``unused`` | ++-------------+------------------+----------------------+ +| 4th, | ``cmp001``, | ``cmp001``, | +| 5th, | ``cmp002``, | ``cmp002``, | +| ... | ``...`` | ``...`` | ++-------------+------------------+----------------------+ + +.. TIP:: + + To switch node role(s), simply reorder the node definitions in + ``PDF``/``IDF`` (make sure to keep them in sync). + +``IDF`` +~~~~~~~ + +The Installer Descriptor File extends the ``PDF`` with POD related parameters +required by the installer. This information may differ per each installer type +and it is not considered part of the POD infrastructure. + +``idf.*`` Overview +------------------ + +The ``IDF`` file must be named after the ``PDF`` it attaches to, with the +prefix ``idf-``. + +.. SEEALSO:: + + A reference file with the expected ``yaml`` structure is available at: + + - ``mcp/scripts/pharos/config/pdf/idf-pod1.yaml`` + +The file follows a ``yaml`` structure and at least two sections +(``idf.net_config`` and ``idf.fuel``) are expected. + +The ``idf.fuel`` section defines several sub-sections required by the OPNFV +Fuel installer: + +- ``jumphost``: List of bridge names for each network on the Jumpserver; +- ``network``: List of device name and bus address info of all the target nodes. + The order must be aligned with the order defined in the ``PDF`` file. + The OPNFV Fuel installer relies on the ``IDF`` model to setup all node NICs + by defining the expected device name and bus address; +- ``maas``: Defines the target nodes commission timeout and deploy timeout; +- ``reclass``: Defines compute parameter tuning, including huge pages, ``CPU`` + pinning and other ``DPDK`` settings; + +.. code-block:: yaml + + --- + idf: + version: 0.1 # fixed, the only supported version (mandatory) + net_config: # POD network configuration overview (mandatory) + oob: ... # mandatory + admin: ... # mandatory + mgmt: ... # mandatory + storage: ... # mandatory + private: ... # mandatory + public: ... # mandatory + fuel: # OPNFV Fuel specific section (mandatory) + jumphost: # OPNFV Fuel jumpserver bridge configuration (mandatory) + bridges: # Bridge name mapping (mandatory) + admin: 'admin_br' # <PXE/admin bridge name> or ~ + mgmt: 'mgmt_br' # <mgmt bridge name> or ~ + private: ~ # <private bridge name> or ~ + public: 'public_br' # <public bridge name> or ~ + trunks: ... 
# Trunked networks (optional) + maas: # MaaS timeouts (optional) + timeout_comissioning: 10 # commissioning timeout in minutes + timeout_deploying: 15 # deploy timeout in minutes + network: # Cluster nodes network (mandatory) + ntp_strata_host1: 1.pool.ntp.org # NTP1 (optional) + ntp_strata_host2: 0.pool.ntp.org # NTP2 (optional) + node: ... # List of per-node cfg (mandatory) + reclass: # Additional params (mandatory) + node: ... # List of per-node cfg (mandatory) + +``idf.net_config`` +------------------ + +``idf.net_config`` was introduced as a mechanism to map all the usual cluster +networks (internal and provider networks, e.g. ``mgmt``) to their ``VLAN`` +tags, ``CIDR`` and a physical interface index (used to match networks to +interface names, like ``eth0``, on the cluster nodes). - .. code-block:: bash - $ ci/deploy.sh -l <lab_name> \ - -p <pod_name> \ - -b <URI to configuration repo containing the PDF file> \ - -s <scenario> \ - -D \ - -S <Storage directory for disk images> |& tee deploy.log +.. WARNING:: -.. NOTE:: + The mapping between one network segment (e.g. ``mgmt``) and its ``CIDR``/ + ``VLAN`` is not configurable on a per-node basis, but instead applies to + all the nodes in the cluster. + +For each network, the following parameters are currently supported: + ++--------------------------+--------------------------------------------------+ +| ``idf.net_config.*`` key | Details | ++==========================+==================================================+ +| ``interface`` | The index of the interface to use for this net. | +| | For each cluster node (if network is present), | +| | OPNFV Fuel will determine the underlying physical| +| | interface by picking the element at index | +| | ``interface`` from the list of network interface | +| | names defined in | +| | ``idf.fuel.network.node.*.interfaces``. | +| | Required for each network. | +| | | +| | .. NOTE:: | +| | | +| | The interface index should be the | +| | same on all cluster nodes. This can be | +| | achieved by ordering them accordingly in | +| | ``PDF``/``IDF``. | ++--------------------------+--------------------------------------------------+ +| ``vlan`` | ``VLAN`` tag (integer) or the string ``native``. | +| | Required for each network. | ++--------------------------+--------------------------------------------------+ +| ``ip-range`` | When specified, all cluster IPs dynamically | +| | allocated by OPNFV Fuel for that network will be | +| | assigned inside this range. | +| | Required for ``oob``, optional for others. | +| | | +| | .. NOTE:: | +| | | +| | For now, only range start address is used. | ++--------------------------+--------------------------------------------------+ +| ``network`` | Network segment address. | +| | Required for each network, except ``oob``. | ++--------------------------+--------------------------------------------------+ +| ``mask`` | Network segment mask. | +| | Required for each network, except ``oob``. | ++--------------------------+--------------------------------------------------+ +| ``gateway`` | Gateway IP address. | +| | Required for ``public``, N/A for others. | ++--------------------------+--------------------------------------------------+ +| ``dns`` | List of DNS IP addresses. | +| | Required for ``public``, N/A for others. | ++--------------------------+--------------------------------------------------+ + +Sample ``public`` network configuration block: + +.. 
code-block:: yaml + + idf: + net_config: + public: + interface: 1 + vlan: native + network: 10.0.16.0 + ip-range: 10.0.16.100-10.0.16.253 + mask: 24 + gateway: 10.0.16.254 + dns: + - 8.8.8.8 + - 8.8.4.4 + +.. TOPIC:: ``hybrid`` POD notes + + Interface indexes must be the same for all nodes, which is problematic + when mixing ``virtual`` nodes (where all interfaces were untagged + so far) with ``baremetal`` nodes (where interfaces usually carry + tagged VLANs). + + .. TIP:: + + To achieve this, a special ``jumpserver`` network layout is used: + ``mgmt``, ``storage``, ``private``, ``public`` are trunked together + in a single ``trunk`` bridge: + + - without decapsulating them (if they are also tagged on ``baremetal``); + a ``trunk.<vlan_tag>`` interface should be created on the + ``jumpserver`` for each tagged VLAN so the kernel won't drop the + packets; + - by decapsulating them first (if they are also untagged on + ``baremetal`` nodes); + + The ``trunk`` bridge is then used for all bridges OPNFV Fuel + is aware of in ``idf.fuel.jumphost.bridges``, e.g. for a ``trunk`` where + only ``mgmt`` network is not decapsulated: + + .. code-block:: yaml + + idf: + fuel: + jumphost: + bridges: + admin: 'admin_br' + mgmt: 'trunk' + private: 'trunk' + public: 'trunk' + trunks: + # mgmt network is not decapsulated for jumpserver infra VMs, + # to align with the VLAN configuration of baremetal nodes. + mgmt: True - The deployment uses the OPNFV Pharos project as input (PDF and IDF files) - for hardware and network configuration of all current OPNFV PODs. - When deploying a new POD, one can pass the ``-b`` flag to the deploy script to override - the path for the labconfig directory structure containing the PDF and IDF (see below). +.. WARNING:: -Examples --------- -#. Virtual deploy + The Linux kernel limits the name of network interfaces to 16 characters. + Extra care is required when choosing bridge names, so appending the + ``VLAN`` tag won't lead to an interface name length exceeding that limit. + +``idf.fuel.network`` +-------------------- + +``idf.fuel.network`` allows mapping the cluster networks (e.g. ``mgmt``) to +their physical interface name (e.g. ``eth0``) and bus address on the cluster +nodes. + +``idf.fuel.network.node`` should be a list with the same number (and order) of +elements as the cluster nodes defined in ``PDF``, e.g. the second cluster node +in ``PDF`` will use the interface name and bus address defined in the second +list element. + +Below is a sample configuration block for a single node with two interfaces: + +.. code-block:: yaml + + idf: + fuel: + network: + node: + # Ordered-list, index should be in sync with node index in PDF + - interfaces: + # Ordered-list, index should be in sync with interface index + # in PDF + - 'ens3' + - 'ens4' + busaddr: + # Bus-info reported by `ethtool -i ethX` + - '0000:00:03.0' + - '0000:00:04.0' + + +``idf.fuel.reclass`` +-------------------- + +``idf.fuel.reclass`` provides a way of overriding default values in the +reclass cluster model. + +This currently covers strictly compute parameter tuning, including huge +pages, ``CPU`` pinning and other ``DPDK`` settings. + +``idf.fuel.reclass.node`` should be a list with the same number (and order) of +elements as the cluster nodes defined in ``PDF``, e.g. the second cluster node +in ``PDF`` will use the parameters defined in the second list element. 
+ +The following parameters are currently supported: + ++---------------------------------+-------------------------------------------+ +| ``idf.fuel.reclass.node.*`` | Details | +| key | | ++=================================+===========================================+ +| ``nova_cpu_pinning`` | List of CPU cores nova will be pinned to. | +| | | +| | .. WARNING:: | +| | | +| | Currently disabled. | ++---------------------------------+-------------------------------------------+ +| ``compute_hugepages_size`` | Size of each persistent huge pages. | +| | | +| | Usual values are ``2M`` and ``1G``. | ++---------------------------------+-------------------------------------------+ +| ``compute_hugepages_count`` | Total number of persistent huge pages. | ++---------------------------------+-------------------------------------------+ +| ``compute_hugepages_mount`` | Mount point to use for huge pages. | ++---------------------------------+-------------------------------------------+ +| ``compute_kernel_isolcpu`` | List of certain CPU cores that are | +| | isolated from Linux scheduler. | ++---------------------------------+-------------------------------------------+ +| ``compute_dpdk_driver`` | Kernel module to provide userspace I/O | +| | support. | ++---------------------------------+-------------------------------------------+ +| ``compute_ovs_pmd_cpu_mask`` | Hexadecimal mask of CPUs to run ``DPDK`` | +| | Poll-mode drivers. | ++---------------------------------+-------------------------------------------+ +| ``compute_ovs_dpdk_socket_mem`` | Set of amount huge pages in ``MB`` to be | +| | used by ``OVS-DPDK`` daemon taken for each| +| | ``NUMA`` node. Set size is equal to | +| | ``NUMA`` nodes count, elements are | +| | divided by comma. | ++---------------------------------+-------------------------------------------+ +| ``compute_ovs_dpdk_lcore_mask`` | Hexadecimal mask of ``DPDK`` lcore | +| | parameter used to run ``DPDK`` processes. | ++---------------------------------+-------------------------------------------+ +| ``compute_ovs_memory_channels`` | Number of memory channels to be used. | ++---------------------------------+-------------------------------------------+ +| ``dpdk0_driver`` | NIC driver to use for physical network | +| | interface. | ++---------------------------------+-------------------------------------------+ +| ``dpdk0_n_rxq`` | Number of ``RX`` queues. | ++---------------------------------+-------------------------------------------+ + +Sample ``compute_params`` configuration block (for a single node): + +.. code-block:: yaml + + idf: + fuel: + reclass: + node: + - compute_params: + common: &compute_params_common + compute_hugepages_size: 2M + compute_hugepages_count: 2048 + compute_hugepages_mount: /mnt/hugepages_2M + dpdk: + <<: *compute_params_common + compute_dpdk_driver: uio + compute_ovs_pmd_cpu_mask: "0x6" + compute_ovs_dpdk_socket_mem: "1024" + compute_ovs_dpdk_lcore_mask: "0x8" + compute_ovs_memory_channels: "2" + dpdk0_driver: igb_uio + dpdk0_n_rxq: 2 + +``SDF`` +~~~~~~~ + +Scenario Descriptor Files are not yet implemented in the OPNFV Fuel ``Gambia`` +release. + +Instead, embedded OPNFV Fuel scenarios files are locally available in +``mcp/config/scenario``. - To start a virtual deployment, it is required to have the **virtual** keyword - while specifying the pod name to the installer script. 
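The embedded scenario definitions can be inspected directly in a cloned
repository; the listing below is illustrative and abridged, as the exact set
of files shipped varies between releases.

.. code-block:: console

    jenkins@jumpserver:~/fuel$ ls mcp/config/scenario/
    os-nosdn-nofeature-ha.yaml    os-odl-nofeature-ha.yaml    ...
    os-nosdn-nofeature-noha.yaml  os-odl-nofeature-noha.yaml  ...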
+OPNFV Software Installation and Deployment +========================================== - It will create the required bridges and networks, configure Salt Master and - install OpenStack. +This section describes the process of installing all the components needed to +deploy the full OPNFV reference platform stack across a server cluster. - .. code-block:: bash +Deployment Types +~~~~~~~~~~~~~~~~ - $ ci/deploy.sh -l ericsson \ - -p virtual3 \ - -s os-nosdn-nofeature-noha \ - -D \ - -S /home/jenkins/tmpdir |& tee deploy.log +.. WARNING:: - Once the deployment is complete, the OpenStack Dashboard, Horizon, is - available at ``http://<controller VIP>:8078`` - The administrator credentials are **admin** / **opnfv_secret**. + OPNFV releases previous to ``Gambia`` used to rely on the ``virtual`` + keyword being part of the POD name (e.g. ``ericsson-virtual2``) to + configure the deployment type as ``virtual``. Otherwise ``baremetal`` + was implied. - A simple (and generic) sample PDF/IDF set of configuration files may - be used for virtual deployments by setting lab/POD name to ``local-virtual1``. - This sample configuration is x86_64 specific and hardcodes certain parameters, - like public network address space, so a dedicated PDF/IDF is highly recommended. +``Gambia`` and newer releases are more flexbile towards supporting a mix +of ``baremetal`` and ``virtual`` nodes, so the type of deployment is +now automatically determined based on the cluster nodes types in ``PDF``: - .. code-block:: bash ++---------------------------------+-------------------------------------------+ +| ``PDF`` has nodes of type | Deployment type | ++---------------+-----------------+ | +| ``baremetal`` | ``virtual`` | | ++===============+=================+===========================================+ +| yes | no | ``baremetal`` | ++---------------+-----------------+-------------------------------------------+ +| yes | yes | ``hybrid`` | ++---------------+-----------------+-------------------------------------------+ +| no | yes | ``virtual`` | ++---------------+-----------------+-------------------------------------------+ - $ ci/deploy.sh -l local \ - -p virtual1 \ - -s os-nosdn-nofeature-noha \ - -D \ - -S /home/jenkins/tmpdir |& tee deploy.log +Based on that, the deployment script will later enable/disable certain extra +nodes (e.g. ``mas01``) and/or ``STATE`` files (e.g. ``maas``). -#. Baremetal deploy +``HA`` vs ``noHA`` +~~~~~~~~~~~~~~~~~~ - A x86 deploy on pod2 from Linux Foundation lab +High availability of OpenStack services is determined based on scenario name, +e.g. ``os-nosdn-nofeature-noha`` vs ``os-nosdn-nofeature-ha``. - .. code-block:: bash +.. TIP:: - $ ci/deploy.sh -l lf \ - -p pod2 \ - -s os-nosdn-nofeature-ha \ - -D \ - -S /home/jenkins/tmpdir |& tee deploy.log + ``HA`` scenarios imply a virtualized control plane (``VCP``) for the + OpenStack services running on the 3 ``kvm`` nodes. - .. figure:: img/lf_pod2.png - :align: center - :alt: Fuel@OPNFV LF POD2 Network Layout + .. SEEALSO:: - Fuel@OPNFV LF POD2 Network Layout + An experimental feature argument (``-N``) is supported by the deploy + script for disabling ``VCP``, although it might not be supported by + all scenarios and is not being continuosly validated by OPNFV CI/CD. - An aarch64 deploy on pod5 from Arm lab +.. WARNING:: - .. code-block:: bash + ``virtual`` ``HA`` deployments are not officially supported, due to + poor performance and various limitations of nested virtualization on + both ``x86_64`` and ``aarch64`` architectures. 
- $ ci/deploy.sh -l arm \ - -p pod5 \ - -s os-nosdn-nofeature-ha \ - -D \ - -S /home/jenkins/tmpdir |& tee deploy.log + .. TIP:: - .. figure:: img/arm_pod5.png - :align: center - :alt: Fuel@OPNFV ARM POD5 Network Layout + ``virtual`` ``HA`` deployments without ``VCP`` are supported, but + highly experimental. - Fuel@OPNFV ARM POD5 Network Layout ++-------------------------------+-------------------------+-------------------+ +| Feature | ``HA`` scenario | ``noHA`` scenario | ++===============================+=========================+===================+ +| ``VCP`` | yes, | no | +| (Virtualized Control Plane) | disabled with ``-N`` | | ++-------------------------------+-------------------------+-------------------+ +| OpenStack APIs SSL | yes | no | ++-------------------------------+-------------------------+-------------------+ +| Storage | ``GlusterFS`` | ``NFS`` | ++-------------------------------+-------------------------+-------------------+ - Once the deployment is complete, the SaltStack Deployment Documentation is - available at ``http://<proxy public VIP>:8090``. +Steps to Start the Automatic Deploy +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - When deploying a new POD, one can pass the ``-b`` flag to the deploy script to override - the path for the labconfig directory structure containing the PDF and IDF. +These steps are common for ``virtual``, ``baremetal`` or ``hybrid`` deploys, +``x86_64``, ``aarch64`` or ``mixed`` (``x86_64`` and ``aarch64``): - .. code-block:: bash +- Clone the OPNFV Fuel code from gerrit +- Checkout the ``Gambia`` release tag +- Start the deploy script - $ ci/deploy.sh -b file://<absolute_path_to_labconfig> \ - -l <lab_name> \ - -p <pod_name> \ - -s <scenario> \ - -D \ - -S <tmp_folder> |& tee deploy.log +.. NOTE:: - - <absolute_path_to_labconfig> is the absolute path to a local directory, populated - similar to Pharos, i.e. PDF/IDF reside in ``<absolute_path_to_labconfig>/labs/<lab_name>`` - - <lab_name> is the same as the directory in the path above - - <pod_name> is the name used for the PDF (``<pod_name>.yaml``) and IDF (``idf-<pod_name>.yaml``) files + The deployment uses the OPNFV Pharos project as input (``PDF`` and + ``IDF`` files) for hardware and network configuration of all current + OPNFV PODs. + When deploying a new POD, one may pass the ``-b`` flag to the deploy + script to override the path for the labconfig directory structure + containing the ``PDF`` and ``IDF`` (``<URI to configuration repo ...>`` is + the absolute path to a local or remote directory structure, populated + similar to `pharos git repo`_, i.e. ``PDF``/``IDF`` reside in a + subdirectory called ``labs/<lab_name>``). +.. code-block:: console -Pod and Installer Descriptor Files -================================== + jenkins@jumpserver:~$ git clone https://git.opnfv.org/fuel + jenkins@jumpserver:~$ cd fuel + jenkins@jumpserver:~/fuel$ git checkout opnfv-7.0.0 + jenkins@jumpserver:~/fuel$ ci/deploy.sh -l <lab_name> \ + -p <pod_name> \ + -b <URI to configuration repo containing the PDF/IDF files> \ + -s <scenario> \ + -D \ + -S <Storage directory for deploy artifacts> |& tee deploy.log -Descriptor files provide the installer with an abstraction of the target pod -with all its hardware characteristics and required parameters. This information -is split into two different files: -Pod Descriptor File (PDF) and Installer Descriptor File (IDF). +.. TIP:: -The Pod Descriptor File is a hardware description of the pod -infrastructure. The information is modeled under a yaml structure. 
-A reference file with the expected yaml structure is available at -``mcp/config/labs/local/pod1.yaml``. + Besides the basic options, there are other recommended deploy arguments: -The hardware description is arranged into a main "jumphost" node and a "nodes" -set for all target boards. For each node the following characteristics -are defined: + - use ``-D`` option to enable the debug info + - use ``-S`` option to point to a tmp dir where the disk images are saved. + The deploy artifacts will be re-used on subsequent (re)deployments. + - use ``|& tee`` to save the deploy log to a file + +Typical Cluster Examples +~~~~~~~~~~~~~~~~~~~~~~~~ + +Common cluster layouts usually fall into one of the cases described below, +categorized by deployment type (``baremetal``, ``virtual`` or ``hybrid``) and +high availability (``HA`` or ``noHA``). -- Node parameters including CPU features and total memory. -- A list of available disks. -- Remote management parameters. -- Network interfaces list including mac address, speed, advanced features and name. +A simplified overview of the steps ``deploy.sh`` will automatically perform is: + +- create a Salt Master Docker container on the jumpserver, which will drive + the rest of the installation; +- ``baremetal`` or ``hybrid`` only: create a ``MaaS`` infrastructure node VM, + which will be leveraged using Salt to handle OS provisioning on the + ``baremetal`` nodes; +- leverage Salt to install & configure OpenStack; .. NOTE:: - The fixed IPs are ignored by the MCP installer script and it will instead - assign based on the network ranges defined in IDF. + A virtual network ``mcpcontrol`` is always created for initial connection + of the VMs on Jumphost. -The Installer Descriptor File extends the PDF with pod related parameters -required by the installer. This information may differ per each installer type -and it is not considered part of the pod infrastructure. -The IDF file must be named after the PDF with the prefix "idf-". A reference file with the expected -structure is available at ``mcp/config/labs/local/idf-pod1.yaml``. - -The file follows a yaml structure and two sections "net_config" and "fuel" are expected. - -The "net_config" section describes all the internal and provider networks -assigned to the pod. Each used network is expected to have a vlan tag, IP subnet and -attached interface on the boards. Untagged vlans shall be defined as "native". - -The "fuel" section defines several sub-sections required by the Fuel installer: - -- jumphost: List of bridge names for each network on the Jumpserver. -- network: List of device name and bus address info of all the target nodes. - The order must be aligned with the order defined in PDF file. Fuel installer relies on the IDF model - to setup all node NICs by defining the expected device name and bus address. -- maas: Defines the target nodes commission timeout and deploy timeout. (optional) -- reclass: Defines compute parameter tuning, including huge pages, cpu pinning - and other DPDK settings. (optional) - -The following parameters can be defined in the IDF files under "reclass". Those value will -overwrite the default configuration values in Fuel repository: - -- nova_cpu_pinning: List of CPU cores nova will be pinned to. Currently disabled. -- compute_hugepages_size: Size of each persistent huge pages. Usual values are '2M' and '1G'. -- compute_hugepages_count: Total number of persistent huge pages. -- compute_hugepages_mount: Mount point to use for huge pages. 
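+
+As a quick sanity check (illustrative only, assuming the default ``libvirt``
+setup on the ``jumpserver``), the virtual networks created by the deploy
+script, including ``mcpcontrol``, can later be listed with:
+
+.. code-block:: console
+
+   jenkins@jumpserver:~$ virsh net-list --all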
-- compute_kernel_isolcpu: List of certain CPU cores that are isolated from Linux scheduler. -- compute_dpdk_driver: Kernel module to provide userspace I/O support. -- compute_ovs_pmd_cpu_mask: Hexadecimal mask of CPUs to run DPDK Poll-mode drivers. -- compute_ovs_dpdk_socket_mem: Set of amount huge pages in MB to be used by OVS-DPDK daemon - taken for each NUMA node. Set size is equal to NUMA nodes count, elements are divided by comma. -- compute_ovs_dpdk_lcore_mask: Hexadecimal mask of DPDK lcore parameter used to run DPDK processes. -- compute_ovs_memory_channels: Number of memory channels to be used. -- dpdk0_driver: NIC driver to use for physical network interface. -- dpdk0_n_rxq: Number of RX queues. - - -The full description of the PDF and IDF file structure are available as yaml schemas. -The schemas are defined as a git submodule in Fuel repository. Input files provided -to the installer will be validated against the schemas. - -- ``mcp/scripts/pharos/config/pdf/pod1.schema.yaml`` -- ``mcp/scripts/pharos/config/pdf/idf-pod1.schema.yaml`` +.. WARNING:: -============= -Release Notes -============= + A single cluster deployment per ``jumpserver`` node is currently supported, + regardless of its type (``virtual``, ``baremetal`` or ``hybrid``). -Please refer to the :ref:`Release Notes <fuel-release-notes-label>` article. +Once the deployment is complete, the following should be accessible: -========== -References -========== ++---------------+----------------------------------+---------------------------+ +| Resource | ``HA`` scenario | ``noHA`` scenario | ++===============+==================================+===========================+ +| ``Horizon`` | ``https://<prx public VIP>`` | ``http://<ctl VIP>:8078`` | +| (OpenStack | | | +| Dashboard) | | | ++---------------+----------------------------------+---------------------------+ +| ``SaltStack`` | ``http://<prx public VIP>:8090`` | N/A | +| Deployment | | | +| Documentation | | | ++---------------+----------------------------------+---------------------------+ -OPNFV +.. SEEALSO:: -1) `OPNFV Home Page <https://www.opnfv.org>`_ -2) `OPNFV documentation <https://docs.opnfv.org>`_ -3) `Software downloads <https://www.opnfv.org/software/download>`_ + For more details on locating and importing the generated SSL certificate, + see :ref:`OPNFV Fuel User Guide <fuel-userguide>`. -OpenStack +``virtual`` ``noHA`` POD +------------------------ -4) `OpenStack Queens Release Artifacts <https://www.openstack.org/software/queens>`_ -5) `OpenStack Documentation <https://docs.openstack.org>`_ +The following figure shows two generic examples of ``virtual`` deploys, +each on a separate Jumphost node, both behind the same ``TOR`` switch: -OpenDaylight +- Jumphost 1 has only virsh bridges (created by the deploy script); +- Jumphost 2 has a mix of Linux (manually created) and ``libvirt`` managed + bridges (created by the deploy script); -6) `OpenDaylight Artifacts <https://www.opendaylight.org/software/downloads>`_ +.. 
figure:: img/fuel_virtual_noha.png + :align: center + :width: 60% + :alt: OPNFV Fuel Virtual noHA POD Network Layout Examples + + OPNFV Fuel Virtual noHA POD Network Layout Examples + + +-------------+------------------------------------------------------------+ + | ``cfg01`` | Salt Master Docker container | + +-------------+------------------------------------------------------------+ + | ``ctl01`` | Controller VM | + +-------------+------------------------------------------------------------+ + | ``gtw01`` | Gateway VM with neutron services | + | | (``DHCP`` agent, ``L3`` agent, ``metadata`` agent etc) | + +-------------+------------------------------------------------------------+ + | ``odl01`` | VM on which ``ODL`` runs | + | | (for scenarios deployed with ODL) | + +-------------+------------------------------------------------------------+ + | ``cmp001``, | Compute VMs | + | ``cmp002`` | | + +-------------+------------------------------------------------------------+ + +.. TIP:: + + If external access to the ``public`` network is not required, there is + little to no motivation to create a custom ``PDF``/``IDF`` set for a + virtual deployment. + + Instead, the existing virtual POD definitions in `pharos git repo`_ can + be used as-is: + + - ``ericsson-virtual1`` for ``x86_64``; + - ``arm-virtual2`` for ``aarch64``; + +.. code-block:: console + + # example deploy cmd for an x86_64 virtual cluster + jenkins@jumpserver:~/fuel$ ci/deploy.sh -l ericsson \ + -p virtual1 \ + -s os-nosdn-nofeature-noha \ + -D \ + -S /home/jenkins/tmpdir |& tee deploy.log + +``baremetal`` ``noHA`` POD +-------------------------- -Fuel +.. WARNING:: -7) `Mirantis Cloud Platform Documentation <https://docs.mirantis.com/mcp/latest>`_ + These scenarios are not tested in OPNFV CI, so they are considered + experimental. -Salt +.. figure:: img/fuel_baremetal_noha.png + :align: center + :width: 60% + :alt: OPNFV Fuel Baremetal noHA POD Network Layout Example + + OPNFV Fuel Baremetal noHA POD Network Layout Example + + +-------------+------------------------------------------------------------+ + | ``cfg01`` | Salt Master Docker container | + +-------------+------------------------------------------------------------+ + | ``mas01`` | MaaS Node VM | + +-------------+------------------------------------------------------------+ + | ``ctl01`` | Baremetal controller node | + +-------------+------------------------------------------------------------+ + | ``gtw01`` | Baremetal Gateway with neutron services | + | | (``DHCP`` agent, ``L3`` agent, ``metadata`` agent etc) | + +-------------+------------------------------------------------------------+ + | ``odl01`` | Baremetal node on which ODL runs | + | | (for scenarios deployed with ODL, otherwise unused) | + +-------------+------------------------------------------------------------+ + | ``cmp001``, | Baremetal Computes | + | ``cmp002`` | | + +-------------+------------------------------------------------------------+ + | Tenant VM | VM running in the cloud | + +-------------+------------------------------------------------------------+ + +``baremetal`` ``HA`` POD +------------------------ + +.. 
figure:: img/fuel_baremetal_ha.png + :align: center + :width: 60% + :alt: OPNFV Fuel Baremetal HA POD Network Layout Example + + OPNFV Fuel Baremetal HA POD Network Layout Example + + +---------------------------+----------------------------------------------+ + | ``cfg01`` | Salt Master Docker container | + +---------------------------+----------------------------------------------+ + | ``mas01`` | MaaS Node VM | + +---------------------------+----------------------------------------------+ + | ``kvm01``, | Baremetal nodes which host the VMs with | + | ``kvm02``, | controller functions | + | ``kvm03`` | | + +---------------------------+----------------------------------------------+ + | ``prx01``, | Proxy VMs for Nginx | + | ``prx02`` | | + +---------------------------+----------------------------------------------+ + | ``msg01``, | RabbitMQ Service VMs | + | ``msg02``, | | + | ``msg03`` | | + +---------------------------+----------------------------------------------+ + | ``dbs01``, | MySQL service VMs | + | ``dbs02``, | | + | ``dbs03`` | | + +---------------------------+----------------------------------------------+ + | ``mdb01``, | Telemetry VMs | + | ``mdb02``, | | + | ``mdb03`` | | + +---------------------------+----------------------------------------------+ + | ``odl01`` | VM on which ``OpenDaylight`` runs | + | | (for scenarios deployed with ``ODL``) | + +---------------------------+----------------------------------------------+ + | ``cmp001``, | Baremetal Computes | + | ``cmp002`` | | + +---------------------------+----------------------------------------------+ + | Tenant VM | VM running in the cloud | + +---------------------------+----------------------------------------------+ + +.. code-block:: console + + # x86_64 baremetal deploy on pod2 from Linux Foundation lab (lf-pod2) + jenkins@jumpserver:~/fuel$ ci/deploy.sh -l lf \ + -p pod2 \ + -s os-nosdn-nofeature-ha \ + -D \ + -S /home/jenkins/tmpdir |& tee deploy.log + +.. code-block:: console + + # aarch64 baremetal deploy on pod5 from Enea ARM lab (arm-pod5) + jenkins@jumpserver:~/fuel$ ci/deploy.sh -l arm \ + -p pod5 \ + -s os-nosdn-nofeature-ha \ + -D \ + -S /home/jenkins/tmpdir |& tee deploy.log + +``hybrid`` ``noHA`` POD +----------------------- + +.. 
figure:: img/fuel_hybrid_noha.png + :align: center + :width: 60% + :alt: OPNFV Fuel Hybrid noHA POD Network Layout Examples + + OPNFV Fuel Hybrid noHA POD Network Layout Examples + + +-------------+------------------------------------------------------------+ + | ``cfg01`` | Salt Master Docker container | + +-------------+------------------------------------------------------------+ + | ``mas01`` | MaaS Node VM | + +-------------+------------------------------------------------------------+ + | ``ctl01`` | Controller VM | + +-------------+------------------------------------------------------------+ + | ``gtw01`` | Gateway VM with neutron services | + | | (``DHCP`` agent, ``L3`` agent, ``metadata`` agent etc) | + +-------------+------------------------------------------------------------+ + | ``odl01`` | VM on which ``ODL`` runs | + | | (for scenarios deployed with ODL) | + +-------------+------------------------------------------------------------+ + | ``cmp001``, | Baremetal Computes | + | ``cmp002`` | | + +-------------+------------------------------------------------------------+ + +Automatic Deploy Breakdown +~~~~~~~~~~~~~~~~~~~~~~~~~~ + +When an automatic deploy is started, the following operations are performed +sequentially by the deploy script: + ++------------------+----------------------------------------------------------+ +| **Deploy stage** | **Details** | ++==================+==========================================================+ +| Argument | Environment variables and command line arguments passed | +| Parsing | to ``deploy.sh`` are interpreted | ++------------------+----------------------------------------------------------+ +| Distribution | Install and/or configure mandatory requirements on the | +| Package | ``jumpserver`` node: | +| Installation | | +| | - ``Docker`` (from upstream and not distribution repos, | +| | as the version included in ``Ubuntu`` ``Xenial`` is | +| | outdated); | +| | - ``docker-compose`` (from upstream, as the version | +| | included in both ``CentOS 7`` and | +| | ``Ubuntu Xenial 16.04`` has dependency issues on most | +| | systems); | +| | - ``virt-inst`` (from upstream, as the version included | +| | in ``Ubuntu Xenial 16.04`` is outdated and lacks | +| | certain required features); | +| | - other miscellaneous requirements, depending on | +| | ``jumpserver`` distribution OS; | +| | | +| | .. SEEALSO:: | +| | | +| | - ``mcp/scripts/requirements_deb.yaml`` (``Ubuntu``) | +| | - ``mcp/scripts/requirements_rpm.yaml`` (``CentOS``) | +| | | +| | .. WARNING:: | +| | | +| | Minimum required ``Docker`` version is ``17.x``. | +| | | +| | .. WARNING:: | +| | | +| | Minimum required ``virt-inst`` version is ``1.4``. | ++------------------+----------------------------------------------------------+ +| Patch | For each ``git`` submodule in the OPNFV Fuel repository, | +| Apply | if a subdirectory with the same name exists under | +| | ``mcp/patches``, all patches in that subdirectory are | +| | applied using ``git-am`` to the respective ``git`` | +| | submodule. | +| | | +| | This allows OPNFV Fuel to alter upstream repositories' | +| | contents before consuming them, including: | +| | | +| | - ``Docker`` container build process customization; | +| | - ``salt-formulas`` customization; | +| | - ``reclass.system`` customization; | +| | | +| | .. 
SEEALSO:: | +| | | +| | - ``mcp/patches/README.rst`` | ++------------------+----------------------------------------------------------+ +| SSH RSA Keypair | If not already present, an RSA keypair is generated on | +| Generation | the ``jumpserver`` node at: | +| | | +| | - ``/var/lib/opnfv/mcp.rsa{,.pub}`` | +| | | +| | The public key will be added to the ``authorized_keys`` | +| | list for the ``ubuntu`` user, so the private key can be | +| | used for key-based logins on: | +| | | +| | - ``cfg01``, ``mas01`` infrastructure nodes; | +| | - all cluster nodes (``baremetal`` and/or ``virtual``), | +| | including ``VCP`` VMs; | ++------------------+----------------------------------------------------------+ +| ``j2`` | Based on ``XDF`` (``PDF``, ``IDF``, ``SDF``) and | +| Expansion | additional deployment configuration determined during | +| | the ``argument parsing`` stage described above, all | +| | jinja2 templates are expanded, including: | +| | | +| | - various classes in ``reclass.cluster``; | +| | - docker-compose ``yaml`` for Salt Master bring-up; | +| | - ``libvirt`` network definitions (``xml``); | ++------------------+----------------------------------------------------------+ +| Jumpserver | Basic validation that common ``jumpserver`` requirements | +| Requirements | are satisfied, e.g. ``PXE/admin`` is a Linux bridge if | +| Check | ``baremetal`` nodes are defined in the ``PDF``. | ++------------------+----------------------------------------------------------+ +| Infrastructure | .. NOTE:: | +| Setup | | +| | All steps in this stage apply only to the | +| | ``jumpserver``. | +| | | +| | - prepare virtual machines; | +| | - (re)create ``libvirt`` managed networks; | +| | - apply ``sysctl`` configuration; | +| | - apply ``udev`` configuration; | +| | - create & start virtual machines prepared earlier; | +| | - create & start Salt Master (``cfg01``) Docker | +| | container; | ++------------------+----------------------------------------------------------+ +| ``STATE`` | Based on deployment type, scenario and other parameters, | +| Files | a ``STATE`` file list is constructed, then executed | +| | sequentially. | +| | | +| | .. TIP:: | +| | | +| | The table below lists all current ``STATE`` files | +| | and their intended action. | +| | | +| | .. SEEALSO:: | +| | | +| | For more information on how the list of ``STATE`` | +| | files is constructed, see | +| | :ref:`OPNFV Fuel User Guide <fuel-userguide>`. | ++------------------+----------------------------------------------------------+ +| Log | Contents of ``/var/log`` are recursively gathered from | +| Collection | all the nodes, then archived together for later | +| | inspection. | ++------------------+----------------------------------------------------------+ + +``STATE`` Files Overview +------------------------ + ++---------------------------+-------------------------------------------------+ +| ``STATE`` file | Targets involved and main intended action | ++===========================+=================================================+ +| ``virtual_init`` | ``cfg01``: reclass node generation | +| | | +| | ``jumpserver`` VMs (e.g. ``mas01``): basic OS | +| | config | ++---------------------------+-------------------------------------------------+ +| ``maas`` | ``mas01``: OS, MaaS installation, | +| | ``baremetal`` node commissioning and deploy | +| | | +| | .. NOTE:: | +| | | +| | Skipped if no ``baremetal`` nodes are | +| | defined in the ``PDF`` (``virtual`` deploy). 
| ++---------------------------+-------------------------------------------------+ +| ``baremetal_init`` | ``kvm``, ``cmp``: OS install, config | ++---------------------------+-------------------------------------------------+ +| ``dpdk`` | ``cmp``: configure OVS-DPDK | ++---------------------------+-------------------------------------------------+ +| ``networks`` | ``ctl``: create OpenStack networks | ++---------------------------+-------------------------------------------------+ +| ``neutron_gateway`` | ``gtw01``: configure Neutron gateway | ++---------------------------+-------------------------------------------------+ +| ``opendaylight`` | ``odl01``: install & configure ``ODL`` | ++---------------------------+-------------------------------------------------+ +| ``openstack_noha`` | cluster nodes: install OpenStack without ``HA`` | ++---------------------------+-------------------------------------------------+ +| ``openstack_ha`` | cluster nodes: install OpenStack with ``HA`` | ++---------------------------+-------------------------------------------------+ +| ``virtual_control_plane`` | ``kvm``: create ``VCP`` VMs | +| | | +| | ``VCP`` VMs: basic OS config | +| | | +| | .. NOTE:: | +| | | +| | Skipped if ``-N`` deploy argument is used. | ++---------------------------+-------------------------------------------------+ +| ``tacker`` | ``ctl``: install & configure Tacker | ++---------------------------+-------------------------------------------------+ -8) `Saltstack Documentation <https://docs.saltstack.com/en/latest/topics>`_ -9) `Saltstack Formulas <https://salt-formulas.readthedocs.io/en/latest/develop/overview-reclass.html>`_ +Release Notes +============= + +Please refer to the :ref:`OPNFV Fuel Release Notes <fuel-releasenotes>` +article. -Reclass +References +========== -10) `Reclass model <https://reclass.pantsfullofunix.net>`_ +For more information on the OPNFV ``Gambia`` 7.0 release, please see: + +#. `OPNFV Home Page`_ +#. `OPNFV Documentation`_ +#. `OPNFV Software Downloads`_ +#. `OPNFV Gambia Wiki Page`_ +#. `OpenStack Queens Release Artifacts`_ +#. `OpenStack Documentation`_ +#. `OpenDaylight Artifacts`_ +#. `Mirantis Cloud Platform Documentation`_ +#. `Saltstack Documentation`_ +#. `Saltstack Formulas`_ +#. `Reclass`_ + +.. FIXME: cleanup unused refs, extend above list +.. _`OpenDaylight`: https://www.opendaylight.org/software +.. _`OpenDaylight Artifacts`: https://www.opendaylight.org/software/downloads +.. _`MCP`: https://www.mirantis.com/software/mcp/ +.. _`Mirantis Cloud Platform Documentation`: https://docs.mirantis.com/mcp/latest/ +.. _`fuel git repository`: https://git.opnfv.org/fuel +.. _`pharos git repo`: https://git.opnfv.org/pharos +.. _`OpenStack Documentation`: https://docs.openstack.org +.. _`OpenStack Queens Release Artifacts`: https://www.openstack.org/software/queens +.. _`OPNFV Home Page`: https://www.opnfv.org +.. _`OPNFV Gambia Wiki Page`: https://wiki.opnfv.org/releases/Gambia +.. _`OPNFV Documentation`: https://docs.opnfv.org +.. _`OPNFV Software Downloads`: https://www.opnfv.org/software/download +.. _`Apache License 2.0`: https://www.apache.org/licenses/LICENSE-2.0 +.. _`Saltstack Documentation`: https://docs.saltstack.com/en/latest/topics/ +.. _`Saltstack Formulas`: https://salt-formulas.readthedocs.io/en/latest/ +.. _`Reclass`: https://reclass.pantsfullofunix.net +.. _`OPNFV Pharos Specification`: https://wiki.opnfv.org/display/pharos/Pharos+Specification +.. _`OPNFV PDF Wiki Page`: https://wiki.opnfv.org/display/INF/POD+Descriptor |