authorCristina Pauna <cristina.pauna@enea.com>2018-02-01 15:30:01 +0200
committerCristina Pauna <cristina.pauna@enea.com>2018-02-05 18:25:21 +0200
commit32bba0e639eaca254e18b5e4d36969f21f9e3d0e (patch)
tree847969c5c4a18eedc66aef74060647d2b9b1dad5
parentf7a780a85429d1975e3109e67760865d2ee2226d (diff)
[docs] Remove redundant information
Armband and Fuel documentation have been the same for the E release. Instead of duplicating everything, make references to Fuel from the Armband docs. I removed the scenario folder as it should not have been here in the first place; otherwise the folder structure is kept as is. JIRA: ARMBAND-357 Change-Id: I060f22aee60713cabfd09ccf2fc0201e68a03c2a Signed-off-by: Cristina Pauna <cristina.pauna@enea.com>
-rw-r--r--docs/release/installation/img/README.rst12
-rw-r--r--docs/release/installation/img/arm_pod5.pngbin168862 -> 0 bytes
-rw-r--r--docs/release/installation/img/fuel_baremetal.pngbin245916 -> 0 bytes
-rw-r--r--docs/release/installation/img/fuel_virtual.pngbin216442 -> 0 bytes
-rw-r--r--docs/release/installation/img/lf_pod2.pngbin167832 -> 0 bytes
-rw-r--r--docs/release/installation/index.rst4
-rw-r--r--docs/release/installation/installation.instruction.rst552
-rw-r--r--docs/release/release-notes/index.rst4
-rw-r--r--docs/release/release-notes/release-notes.rst210
-rw-r--r--docs/release/scenarios/os-nosdn-ovs-ha/index.rst16
-rw-r--r--docs/release/scenarios/os-nosdn-ovs-ha/os-nosdn-ovs-ha.rst42
-rw-r--r--docs/release/scenarios/os-nosdn-ovs-noha/index.rst16
-rw-r--r--docs/release/scenarios/os-nosdn-ovs-noha/os-nosdn-ovs-noha.rst41
-rw-r--r--docs/release/userguide/img/horizon_login.pngbin32205 -> 0 bytes
-rw-r--r--docs/release/userguide/img/reclass_doc.pngbin78645 -> 0 bytes
-rw-r--r--docs/release/userguide/img/salt_services_ip.pngbin149270 -> 0 bytes
-rw-r--r--docs/release/userguide/img/saltstack.pngbin14373 -> 0 bytes
-rw-r--r--docs/release/userguide/index.rst4
-rw-r--r--docs/release/userguide/userguide.rst326
19 files changed, 31 insertions, 1196 deletions
diff --git a/docs/release/installation/img/README.rst b/docs/release/installation/img/README.rst
deleted file mode 100644
index bc8d9be..0000000
--- a/docs/release/installation/img/README.rst
+++ /dev/null
@@ -1,12 +0,0 @@
-.. This work is licensed under a Creative Commons Attribution 4.0 International License.
-.. SPDX-License-Identifier: CC-BY-4.0
-.. (c) 2017 Ericsson AB, Mirantis Inc., Enea AB and others.
-
-Image Editor
-============
-All files in this directory have been created using `draw.io <http://draw.io>`_.
-
-Image Sources
-=============
-Image sources are embedded in each `png` file.
-To edit an image, import the `png` file using `draw.io <http://draw.io>`_.
diff --git a/docs/release/installation/img/arm_pod5.png b/docs/release/installation/img/arm_pod5.png
deleted file mode 100644
index b35b661..0000000
--- a/docs/release/installation/img/arm_pod5.png
+++ /dev/null
Binary files differ
diff --git a/docs/release/installation/img/fuel_baremetal.png b/docs/release/installation/img/fuel_baremetal.png
deleted file mode 100644
index aee42ac..0000000
--- a/docs/release/installation/img/fuel_baremetal.png
+++ /dev/null
Binary files differ
diff --git a/docs/release/installation/img/fuel_virtual.png b/docs/release/installation/img/fuel_virtual.png
deleted file mode 100644
index d766486..0000000
--- a/docs/release/installation/img/fuel_virtual.png
+++ /dev/null
Binary files differ
diff --git a/docs/release/installation/img/lf_pod2.png b/docs/release/installation/img/lf_pod2.png
deleted file mode 100644
index b6c9b8e..0000000
--- a/docs/release/installation/img/lf_pod2.png
+++ /dev/null
Binary files differ
diff --git a/docs/release/installation/index.rst b/docs/release/installation/index.rst
index 784eec2..93bbc40 100644
--- a/docs/release/installation/index.rst
+++ b/docs/release/installation/index.rst
@@ -1,10 +1,8 @@
-.. _fuel-installation:
-
.. This work is licensed under a Creative Commons Attribution 4.0 International License.
.. http://creativecommons.org/licenses/by/4.0
.. (c) Open Platform for NFV Project, Inc. and its contributors
-.. _fuel-release-installation-label:
+.. _armband-release-installation-label:
****************************************
Installation instruction for Fuel\@OPNFV
diff --git a/docs/release/installation/installation.instruction.rst b/docs/release/installation/installation.instruction.rst
index cad1b10..52a5b35 100644
--- a/docs/release/installation/installation.instruction.rst
+++ b/docs/release/installation/installation.instruction.rst
@@ -6,545 +6,13 @@
Abstract
========
-This document describes how to install the Euphrates release of
-OPNFV when using Fuel as a deployment tool, covering its usage,
-limitations, dependencies and required system resources.
-This is a unified document for both x86_64 and aarch64
-architectures. All information is common for both architectures
-except when explicitly stated.
-
-============
-Introduction
-============
-
-This document provides guidelines on how to install and
-configure the Euphrates release of OPNFV when using Fuel as a
-deployment tool, including required software and hardware configurations.
-
-Although the available installation options provide a high degree of
-freedom in how the system is set up, including architecture, services
-and features, etc., said permutations may not provide an OPNFV
-compliant reference architecture. This document provides a
-step-by-step guide that results in an OPNFV Euphrates compliant
-deployment.
-
-The audience of this document is assumed to have good knowledge of
-networking and Unix/Linux administration.
-
-=======
-Preface
-=======
-
-Before starting the installation of the Euphrates release of
-OPNFV, using Fuel as a deployment tool, some planning must be
-done.
-
-Preparations
-============
-
-Prior to installation, a number of deployment-specific parameters must be collected:
-
-#. Provider sub-net and gateway information
-
-#. Provider VLAN information
-
-#. Provider DNS addresses
-
-#. Provider NTP addresses
-
-#. Network overlay you plan to deploy (VLAN, VXLAN, FLAT)
-
-#. How many nodes and what roles you want to deploy (Controllers, Storage, Computes)
-
-#. Monitoring options you want to deploy (Ceilometer, Syslog, etc.).
-
-#. Other options not covered in this document are available in the links above
-
-
-This information will be needed for the configuration procedures
-provided in this document.
-
-=========================================
-Hardware Requirements for Virtual Deploys
-=========================================
-
-The following minimum hardware requirements must be met for the virtual
-installation of Euphrates using Fuel:
-
-+----------------------------+--------------------------------------------------------+
-| **HW Aspect** | **Requirement** |
-| | |
-+============================+========================================================+
-| **1 Jumpserver** | A physical node (also called Foundation Node) that |
-| | will host a Salt Master VM and each of the VM nodes in |
-| | the virtual deploy |
-+----------------------------+--------------------------------------------------------+
-| **CPU** | Minimum 1 socket with Virtualization support |
-+----------------------------+--------------------------------------------------------+
-| **RAM** | Minimum 32GB/server (Depending on VNF work load) |
-+----------------------------+--------------------------------------------------------+
-| **Disk**                  | Minimum 100GB; SSD or SCSI (15krpm) highly recommended |
-+----------------------------+--------------------------------------------------------+
-
-
-===========================================
-Hardware Requirements for Baremetal Deploys
-===========================================
-
-The following minimum hardware requirements must be met for the baremetal
-installation of Euphrates using Fuel:
-
-+-------------------------+------------------------------------------------------+
-| **HW Aspect** | **Requirement** |
-| | |
-+=========================+======================================================+
-| **# of nodes** | Minimum 5 |
-| | |
-| | - 3 KVM servers which will run all the controller |
-| | services |
-| | |
-| | - 2 Compute nodes |
-| | |
-+-------------------------+------------------------------------------------------+
-| **CPU** | Minimum 1 socket with Virtualization support |
-+-------------------------+------------------------------------------------------+
-| **RAM** | Minimum 16GB/server (Depending on VNF work load) |
-+-------------------------+------------------------------------------------------+
-| **Disk** | Minimum 256GB 10kRPM spinning disks |
-+-------------------------+------------------------------------------------------+
-| **Networks** | 4 VLANs (PUBLIC, MGMT, STORAGE, PRIVATE) - can be |
-| | a mix of tagged/native |
-| | |
-| | 1 Un-Tagged VLAN for PXE Boot - ADMIN Network |
-| | |
-| | Note: These can be allocated to a single NIC - |
-| | or spread out over multiple NICs |
-+-------------------------+------------------------------------------------------+
-| **1 Jumpserver** | A physical node (also called Foundation Node) that |
-| | hosts the Salt Master and MaaS VMs |
-+-------------------------+------------------------------------------------------+
-| **Power management** | All targets need to have power management tools that |
-| | allow rebooting the hardware and setting the boot |
-| | order (e.g. IPMI) |
-+-------------------------+------------------------------------------------------+
-
-**NOTE:** All nodes, including the Jumpserver, must have the same architecture (either x86_64 or aarch64).
-
-**NOTE:** For aarch64 deployments, a UEFI-compatible firmware with PXE support is needed (e.g. EDK2).
-
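
Per the notes above, a quick pre-flight check of the Jumpserver architecture can catch a mismatch early. This is an illustrative sketch, not part of the deploy tooling:

```shell
# Illustrative pre-flight sketch (not part of the deploy scripts):
# all nodes must share one architecture, so check the Jumpserver first.
arch="$(uname -m)"
case "$arch" in
  x86_64|aarch64) echo "architecture ${arch}: supported" ;;
  *)              echo "architecture ${arch}: unsupported" ;;
esac
```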
-===============================
-Help with Hardware Requirements
-===============================
-
-Calculate hardware requirements:
-
-For information on compatible hardware types available for use,
-please see `Fuel OpenStack Hardware Compatibility List <https://www.mirantis.com/software/hardware-compatibility/>`_
-
-When choosing the hardware on which you will deploy your OpenStack
-environment, you should think about:
-
-- CPU -- Consider the number of virtual machines that you plan to deploy in your cloud environment and the CPUs per virtual machine.
-
-- Memory -- Depends on the amount of RAM assigned per virtual machine and the controller node.
-
-- Storage -- Depends on the local drive space per virtual machine, remote volumes that can be attached to a virtual machine, and object storage.
-
-- Networking -- Depends on the chosen network topology, the network bandwidth per virtual machine, and network storage.
-
-================================================
-Top of the Rack (TOR) Configuration Requirements
-================================================
-
-The switching infrastructure provides connectivity for the OPNFV
-infrastructure operations, tenant networks (East/West) and provider
-connectivity (North/South); it also provides needed connectivity for
-the Storage Area Network (SAN).
-To avoid traffic congestion, it is strongly suggested that three
-physically separated networks are used, that is: 1 physical network
-for administration and control, one physical network for tenant private
-and public networks, and one physical network for SAN.
-The switching connectivity can (but does not need to) be fully redundant,
-in such case it comprises a redundant 10GE switch pair for each of the
-three physically separated networks.
-
-The physical TOR switches are **not** automatically configured from
-the Fuel OPNFV reference platform. All the networks involved in the OPNFV
-infrastructure as well as the provider networks and the private tenant
-VLANs needs to be manually configured.
-
-Manual configuration of the Euphrates hardware platform should
-be carried out according to the `OPNFV Pharos Specification
-<https://wiki.opnfv.org/display/pharos/Pharos+Specification>`_.
-
-============================
-OPNFV Software Prerequisites
-============================
-
-The Jumpserver node should be pre-provisioned with an operating system,
-according to the Pharos specification. Relevant network bridges should
-also be pre-configured (e.g. admin_br, mgmt_br, public_br).
-
-  - The admin bridge (admin_br) is mandatory for the baremetal nodes' PXE booting during Fuel installation.
-  - The management bridge (mgmt_br) is required for testing suites (e.g. functest/yardstick); it is
-    suggested to pre-configure it for debugging purposes.
-  - The public bridge (public_br) is also nice to have for debugging purposes, but it is not mandatory.
-
-The user running the deploy script on the Jumpserver should belong to "sudo" and "libvirt" groups,
-and have passwordless sudo access.
-
-The following example adds the groups to the user "jenkins"
-
-.. code-block:: bash
-
- $ sudo usermod -aG sudo jenkins
- $ sudo usermod -aG libvirt jenkins
- $ reboot
- $ groups
- jenkins sudo libvirt
-
- $ sudo visudo
- ...
- %jenkins ALL=(ALL) NOPASSWD:ALL
-
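
A minimal sketch (hypothetical, not part of the repo) to verify the group membership and passwordless sudo described above before launching a deploy:

```shell
# Hypothetical pre-deploy check: confirm the current user is in the
# "libvirt" group and can sudo without a password prompt.
user="$(id -un)"
if id -nG "$user" | grep -qw libvirt; then libvirt_ok=yes; else libvirt_ok=no; fi
# "sudo -n" fails instead of prompting when a password would be required
if sudo -n true 2>/dev/null; then sudo_ok=yes; else sudo_ok=no; fi
echo "user ${user}: libvirt group=${libvirt_ok}, passwordless sudo=${sudo_ok}"
```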
-For an AArch64 Jumpserver, the minimum required "libvirt" version is 3.x; 3.5 or newer is highly recommended.
-While not mandatory, upgrading the kernel and QEMU on the Jumpserver is also highly recommended
-(especially on AArch64 Jumpservers).
-
-For CentOS 7.4 (AArch64), distro provided packages are already new enough.
-For Ubuntu 16.04 (arm64), distro packages are too old and 3rd party repositories should be used.
-For convenience, Armband provides a DEB repository holding all the required packages.
-
-To add and enable the Armband repository on an Ubuntu 16.04 system,
-create a new sources list file `/etc/apt/sources.list.d/armband.list` with the following contents:
-
-.. code-block:: bash
-
- $ cat /etc/apt/sources.list.d/armband.list
-   # for OpenStack Pike release
- deb http://linux.enea.com/mcp-repos/pike/xenial pike-armband main
-
- $ apt-get update
-
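
The same repository entry can also be written non-interactively. A hedged sketch (on a real Jumpserver the target would be `/etc/apt/sources.list.d/armband.list` and would need root; a local file stands in for it here):

```shell
# Sketch only: write the Armband repo entry shown above to a file.
# Real target path: /etc/apt/sources.list.d/armband.list (requires root).
armband_list="./armband.list"
printf '%s\n' \
  "# for OpenStack Pike release" \
  "deb http://linux.enea.com/mcp-repos/pike/xenial pike-armband main" \
  > "$armband_list"
cat "$armband_list"
```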
-Fuel@OPNFV has been validated by CI using the following distributions
-installed on the Jumpserver:
-
- - CentOS 7 (recommended by Pharos specification);
- - Ubuntu Xenial;
-
-**NOTE**: The install script expects 'libvirt' to be already running on the Jumpserver. In case libvirt
-packages are missing, the script will install them; but depending on the OS distribution, the user
-might have to start the 'libvirtd' service manually, then run the deploy script again. Therefore, it
-is recommended to install libvirt-bin explicitly on the Jumpserver before the deployment.
-
-**NOTE**: It is also recommended to install a newer kernel on the Jumpserver before the deployment.
-
-**NOTE**: The install script will automatically install the rest of required distro package
-dependencies on the Jumpserver, unless explicitly asked not to (via -P deploy arg). This includes
-Python, QEMU, libvirt etc.
-
-.. code-block:: bash
-
- $ apt-get install linux-image-generic-hwe-16.04-edge libvirt-bin
-
-
-==========================================
-OPNFV Software Installation and Deployment
-==========================================
-
-This section describes the process of installing all the components needed to
-deploy the full OPNFV reference platform stack across a server cluster.
-
-The installation is done with Mirantis Cloud Platform (MCP), which is based on
-a reclass model. This model provides the formula inputs to Salt, to make the deploy
-automatic based on deployment scenario.
-The reclass model covers:
-
-  - Infrastructure node definition: Salt Master node (cfg01) and MaaS node (mas01)
- - OpenStack node definition: Controller nodes (ctl01, ctl02, ctl03) and Compute nodes (cmp001, cmp002)
- - Infrastructure components to install (software packages, services etc.)
- - OpenStack components and services (rabbitmq, galera etc.), as well as all configuration for them
-
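
The node-naming convention used by the model above can be summarized as follows (illustrative only; names and roles are taken from the list above):

```shell
# Illustrative mapping of reclass node-name prefixes to roles.
node_role() {
  case "$1" in
    cfg*) echo "Salt Master node" ;;
    mas*) echo "MaaS node" ;;
    ctl*) echo "OpenStack controller node" ;;
    cmp*) echo "OpenStack compute node" ;;
    *)    echo "unknown" ;;
  esac
}
for n in cfg01 mas01 ctl01 ctl02 ctl03 cmp001 cmp002; do
  printf '%s -> %s\n' "$n" "$(node_role "$n")"
done
```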
-
-Automatic Installation of a Virtual POD
-=======================================
-
-For virtual deploys all the targets are VMs on the Jumpserver. The deploy script will:
-
- - Create a Salt Master VM on the Jumpserver which will drive the installation
- - Create the bridges for networking with virsh (only if a real bridge does not already exist for a given network)
- - Install OpenStack on the targets
- - Leverage Salt to install & configure OpenStack services
-
-.. figure:: img/fuel_virtual.png
- :align: center
- :alt: Fuel@OPNFV Virtual POD Network Layout Examples
-
- Fuel@OPNFV Virtual POD Network Layout Examples
-
- +-----------------------+------------------------------------------------------------------------+
- | cfg01 | Salt Master VM |
- +-----------------------+------------------------------------------------------------------------+
- | ctl01 | Controller VM |
- +-----------------------+------------------------------------------------------------------------+
- | cmp01/cmp02 | Compute VMs |
- +-----------------------+------------------------------------------------------------------------+
- | gtw01 | Gateway VM with neutron services (dhcp agent, L3 agent, metadata, etc) |
- +-----------------------+------------------------------------------------------------------------+
- | odl01 | VM on which ODL runs (for scenarios deployed with ODL) |
- +-----------------------+------------------------------------------------------------------------+
-
-
-In this figure there are examples of two virtual deploys:
- - Jumphost 1 has only virsh bridges, created by the deploy script
-  - Jumphost 2 has a mix of Linux and virsh bridges; when a Linux bridge exists for a specified network,
-    the deploy script will skip creating a virsh bridge for it
-
-**Note**: A virtual network "mcpcontrol" is always created. For virtual deploys, "mcpcontrol" is also
- used for Admin, leaving the PXE/Admin bridge unused.
-
-
-Automatic Installation of a Baremetal POD
-=========================================
-
-The baremetal installation process can be done by editing the information about
-hardware and environment in the reclass files, or by using a Pod Descriptor File (PDF).
-This file contains all the information about the hardware and network of the deployment
-that will be fed to the reclass model during deployment.
-
-The installation is done automatically with the deploy script, which will:
-
- - Create a Salt Master VM on the Jumpserver which will drive the installation
- - Create a MaaS Node VM on the Jumpserver which will provision the targets
- - Install OpenStack on the targets
- - Leverage MaaS to provision baremetal nodes with the operating system
- - Leverage Salt to configure the operating system on the baremetal nodes
- - Leverage Salt to install & configure OpenStack services
-
-.. figure:: img/fuel_baremetal.png
- :align: center
- :alt: Fuel@OPNFV Baremetal POD Network Layout Example
-
- Fuel@OPNFV Baremetal POD Network Layout Example
-
- +-----------------------+---------------------------------------------------------+
- | cfg01 | Salt Master VM |
- +-----------------------+---------------------------------------------------------+
- | mas01 | MaaS Node VM |
- +-----------------------+---------------------------------------------------------+
- | kvm01..03 | Baremetals which hold the VMs with controller functions |
- +-----------------------+---------------------------------------------------------+
- | cmp001/cmp002 | Baremetal compute nodes |
- +-----------------------+---------------------------------------------------------+
- | prx01/prx02 | Proxy VMs for Nginx |
- +-----------------------+---------------------------------------------------------+
- | msg01..03 | RabbitMQ Service VMs |
- +-----------------------+---------------------------------------------------------+
- | dbs01..03 | MySQL service VMs |
- +-----------------------+---------------------------------------------------------+
- | mdb01..03 | Telemetry VMs |
- +-----------------------+---------------------------------------------------------+
- | odl01 | VM on which ODL runs (for scenarios deployed with ODL) |
- +-----------------------+---------------------------------------------------------+
- | Tenant VM | VM running in the cloud |
- +-----------------------+---------------------------------------------------------+
-
-In the baremetal deploy all bridges but "mcpcontrol" are Linux bridges. On the Jumpserver, it is
-required to pre-configure at least the admin_br bridge for the PXE/Admin network.
-For the targets, the bridges are created by the deploy script.
-
-**Note**: A virtual network "mcpcontrol" is always created. For baremetal deploys, the PXE bridge is
-used for baremetal node provisioning, while "mcpcontrol" is used to provision the infrastructure VMs only.
-
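
Since admin_br must be pre-configured on the Jumpserver for baremetal deploys, a hypothetical pre-flight check might look like this (bridge name taken from the text above):

```shell
# Hypothetical check: is the PXE/Admin bridge already present?
bridge="admin_br"
if [ -d "/sys/class/net/${bridge}" ]; then
  echo "bridge ${bridge}: present"
else
  echo "bridge ${bridge}: missing - pre-configure it before deploying"
fi
```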
-
-Steps to Start the Automatic Deploy
-===================================
-
-These steps are common both for virtual and baremetal deploys.
-
-#. Clone the Fuel code from gerrit
-
- For x86_64
-
- .. code-block:: bash
-
- $ git clone https://git.opnfv.org/fuel
- $ cd fuel
-
- For aarch64
-
- .. code-block:: bash
-
- $ git clone https://git.opnfv.org/armband
- $ cd armband
-
-#. Checkout the Euphrates release
-
- .. code-block:: bash
-
- $ git checkout opnfv-5.0.2
-
-#. Start the deploy script
-
- Besides the basic options, there are other recommended deploy arguments:
-
-  - use the **-D** option to enable debug info
-  - use the **-S** option to point to a tmp dir where the disk images are saved. The images will be
-    re-used between deploys
-  - use **|& tee** to save the deploy log to a file
-
- .. code-block:: bash
-
- $ ci/deploy.sh -l <lab_name> \
- -p <pod_name> \
- -b <URI to configuration repo containing the PDF file> \
- -s <scenario> \
- -B <list of admin, management, private and public bridges> \
- -D \
- -S <Storage directory for disk images> |& tee deploy.log
-
-Examples
---------
-#. Virtual deploy
-
-   To start a virtual deployment, the pod name passed to the installer
-   script must contain the `virtual` keyword.
-
- It will create the required bridges and networks, configure Salt Master and
- install OpenStack.
-
- .. code-block:: bash
-
- $ ci/deploy.sh -b file:///home/jenkins/tmpdir/securedlab \
- -l ericsson \
- -p virtual_kvm \
- -s os-nosdn-nofeature-noha \
- -D \
- -S /home/jenkins/tmpdir |& tee deploy.log
-
-   Once the deployment is complete, the OpenStack Dashboard (Horizon) is
-   available at http://<controller VIP>:8078, e.g. http://10.16.0.101:8078.
- The administrator credentials are **admin** / **opnfv_secret**.
-
-#. Baremetal deploy
-
-   An x86 deploy on pod2 from the Linux Foundation lab
-
- .. code-block:: bash
-
- $ ci/deploy.sh -b file:///home/jenkins/tmpdir/securedlab \
- -l lf \
- -p pod2 \
- -s os-nosdn-nofeature-ha \
-                  -B pxebr,br-ctl \
- -D \
- -S /home/jenkins/tmpdir |& tee deploy.log
-
- .. figure:: img/lf_pod2.png
- :align: center
- :alt: Fuel@OPNFV LF POD2 Network Layout
-
- Fuel@OPNFV LF POD2 Network Layout
-
- Once the deployment is complete, the SaltStack Deployment Documentation is
- available at http://<Proxy VIP>:8090, e.g. http://172.30.10.103:8090.
-
- An aarch64 deploy on pod5 from Arm lab
-
- .. code-block:: bash
-
- $ ci/deploy.sh -b file:///home/jenkins/tmpdir/securedlab \
- -l arm \
- -p pod5 \
- -s os-nosdn-nofeature-ha \
- -D \
- -S /home/jenkins/tmpdir |& tee deploy.log
-
- .. figure:: img/arm_pod5.png
- :align: center
- :alt: Fuel@OPNFV ARM POD5 Network Layout
-
- Fuel@OPNFV ARM POD5 Network Layout
-
-Pod Descriptor Files
-====================
-
-Descriptor files provide the installer with an abstraction of the target pod
-with all its hardware characteristics and required parameters. This information
-is split into two different files:
-Pod Descriptor File (PDF) and Installer Descriptor File (IDF).
-
-
-The Pod Descriptor File is a hardware and network description of the pod
-infrastructure. The information is modeled under a yaml structure.
-A reference file with the expected yaml structure is available at
-*mcp/config/labs/local/pod1.yaml*
-
-A common network section describes all the internal and provider networks
-assigned to the pod. Each network is expected to have a vlan tag, IP subnet and
-attached interface on the boards. Untagged vlans shall be defined as "native".
-
-The hardware description is arranged into a main "jumphost" node and a "nodes"
-set for all target boards. For each node the following characteristics
-are defined:
-
-- Node parameters including CPU features and total memory.
-- A list of available disks.
-- Remote management parameters.
-- Network interfaces list including mac address, speed and advanced features.
-- IP list of fixed IPs for the node
-
-**Note**: the fixed IPs are ignored by the MCP installer script, which will instead
-assign addresses based on the network ranges defined under the pod network configuration.
-
-
-The Installer Descriptor File extends the PDF with pod related parameters
-required by the installer. This information may differ per each installer type
-and it is not considered part of the pod infrastructure. Fuel installer relies
-on the IDF model to map the networks to the bridges on the foundation node and
-to setup all node NICs by defining the expected OS device name and bus address.
-
-
-The file follows a yaml structure and a "fuel" section is expected. Contents and
-references must be aligned with the PDF file. The IDF file must be named after
-the PDF with the prefix "idf-". A reference file with the expected structure
-is available at *mcp/config/labs/local/idf-pod1.yaml*
-
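
The "idf-" naming convention can be expressed mechanically; a trivial sketch using the reference file name from above:

```shell
# Derive the IDF file name from a PDF file name ("idf-" prefix rule).
pdf_name="pod1.yaml"
idf_name="idf-${pdf_name}"
echo "${idf_name}"
```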
-
-=============
-Release Notes
-=============
-
-Please refer to the :ref:`Release Notes <fuel-release-notes-label>` article.
-
-==========
-References
-==========
-
-OPNFV
-
-1) `OPNFV Home Page <http://www.opnfv.org>`_
-2) `OPNFV documentation <http://docs.opnfv.org>`_
-3) `Software downloads <https://www.opnfv.org/software/download>`_
-
-OpenStack
-
-4) `OpenStack Ocata Release Artifacts <http://www.openstack.org/software/ocata>`_
-5) `OpenStack Documentation <http://docs.openstack.org>`_
-
-OpenDaylight
-
-6) `OpenDaylight Artifacts <http://www.opendaylight.org/software/downloads>`_
-
-Fuel
-
-7) `Mirantis Cloud Platform Documentation <https://docs.mirantis.com/mcp/latest>`_
-
-Salt
-
-8) `Saltstack Documentation <https://docs.saltstack.com/en/latest/topics>`_
-9) `Saltstack Formulas <http://salt-formulas.readthedocs.io/en/latest/develop/overview-reclass.html>`_
-
-Reclass
-
-10) `Reclass model <http://reclass.pantsfullofunix.net>`_
+The Armband project aims to integrate and test all aspects of OPNFV releases
+on ARM-based servers. The goal is to replicate all OPNFV software build,
+continuous integration, lab provisioning, and testing processes of each
+standard OPNFV release, such that the release is available on both
+Intel Architecture-based and ARM Architecture-based servers.
+
+The armband repo contains the patches necessary for the Fuel installer to run
+on aarch64 hardware. For more information on how to install the Euphrates
+release of OPNFV when using Fuel as a deployment tool, see
+:ref:`fuel-release-installation-label`.
diff --git a/docs/release/release-notes/index.rst b/docs/release/release-notes/index.rst
index 4b1e4fa..0e0f8e3 100644
--- a/docs/release/release-notes/index.rst
+++ b/docs/release/release-notes/index.rst
@@ -1,10 +1,8 @@
-.. _fuel-releasenotes:
-
.. This work is licensed under a Creative Commons Attribution 4.0 International License.
.. http://creativecommons.org/licenses/by/4.0
.. (c) Open Platform for NFV Project, Inc. and its contributors
-.. _fuel-release-notes-label:
+.. _armband-release-notes-label:
*****************************
Release notes for Fuel\@OPNFV
diff --git a/docs/release/release-notes/release-notes.rst b/docs/release/release-notes/release-notes.rst
index 0052ab6..710d2f3 100644
--- a/docs/release/release-notes/release-notes.rst
+++ b/docs/release/release-notes/release-notes.rst
@@ -6,207 +6,13 @@
Abstract
========
-This document compiles the release notes for the Euphrates release of
-OPNFV when using Fuel as a deployment tool. This is a unified document
-for both x86_64 and aarch64 architectures. All information is common for
-both architectures except when explicitly stated.
+The Armband project aims to integrate and test all aspects of OPNFV releases
+on ARM-based servers. The goal is to replicate all OPNFV software build,
+continuous integration, lab provisioning, and testing processes of each
+standard OPNFV release, such that the release is available on both
+Intel Architecture-based and ARM Architecture-based servers.
+The armband repo contains the patches necessary for the Fuel installer to run
+on aarch64 hardware. For more information on the Euphrates release notes, see
+:ref:`fuel-release-notes-label`.
-===============
-Important Notes
-===============
-
-These notes provide release information for the use of Fuel as a deployment
-tool for the Euphrates release of OPNFV.
-
-The goal of the Euphrates release and this Fuel-based deployment process is
-to establish a lab ready platform accelerating further development
-of the OPNFV infrastructure.
-
-Carefully follow the installation-instructions.
-
-=======
-Summary
-=======
-
-For Euphrates, the typical use of Fuel as an OpenStack installer is
-supplemented with OPNFV unique components such as:
-
-- `OpenDaylight <http://www.opendaylight.org/software>`_
-- `Open vSwitch for NFV <https://wiki.opnfv.org/ovsnfv>`_
-
-As well as OPNFV-unique configurations of the Hardware and Software stack.
-
-This Euphrates artifact provides Fuel as the deployment stage tool in the
-OPNFV CI pipeline including:
-
-- Documentation built by Jenkins
-
- - overall OPNFV documentation
-
- - this document (release notes)
-
- - installation instructions
-
-- Automated deployment of Euphrates running on bare metal or a nested
-  hypervisor environment (KVM)
-
-- Automated validation of the Euphrates deployment
-
-============
-Release Data
-============
-
-+--------------------------------------+--------------------------------------+
-| **Project** | fuel/armband |
-| | |
-+--------------------------------------+--------------------------------------+
-| **Repo/tag** | opnfv-5.1.0 |
-| | |
-+--------------------------------------+--------------------------------------+
-| **Release designation** | Euphrates 5.1 |
-| | |
-+--------------------------------------+--------------------------------------+
-| **Release date** | December 15 2017 |
-| | |
-+--------------------------------------+--------------------------------------+
-| **Purpose of the delivery** | Euphrates alignment to Released |
-| | MCP 1.0 baseline + features and |
-| | bug-fixes for the following |
-|                                      | features:                            |
-| | |
-| | - Open vSwitch for NFV |
-| | - OpenDaylight |
-+--------------------------------------+--------------------------------------+
-
-Version Change
-==============
-
-Module Version Changes
-----------------------
-This is the Euphrates 5.1 release.
-It is based on following upstream versions:
-
-- MCP 1.0 Base Release
-
-- OpenStack Ocata Release
-
-- OpenDaylight
-
-Document Changes
-----------------
-This is the Euphrates 5.1 release.
-It comes with the following documentation:
-
-- `Installation instructions <http://docs.opnfv.org/en/stable-euphrates/submodules/armband/docs/release/installation/installation.instruction.html>`_
-
-- Release notes (This document)
-
-- `User guide <http://docs.opnfv.org/en/stable-euphrates/submodules/fuel/docs/release/userguide/userguide.html>`_
-
-Reason for Version
-==================
-
-Feature Additions
------------------
-
-**JIRA TICKETS:**
-`Euphrates 5.1 new features <https://jira.opnfv.org/issues/?filter=12114>`_
-
-Bug Corrections
----------------
-
-**JIRA TICKETS:**
-
-`Euphrates 5.1 bug fixes <https://jira.opnfv.org/issues/?filter=12115>`_
-
-(Also See respective Integrated feature project's bug tracking)
-
-Deliverables
-============
-
-Software Deliverables
----------------------
-
-- `Fuel@x86_64 installer script files <https://git.opnfv.org/fuel>`_
-
-- `Fuel@aarch64 installer script files <https://git.opnfv.org/armband>`_
-
-Documentation Deliverables
---------------------------
-
-- `Installation instructions <http://docs.opnfv.org/en/stable-euphrates/submodules/armband/docs/release/installation/installation.instruction.html>`_
-
-- Release notes (This document)
-
-- `User guide <http://docs.opnfv.org/en/stable-euphrates/submodules/fuel/docs/release/userguide/userguide.html>`_
-
-
-=========================================
-Known Limitations, Issues and Workarounds
-=========================================
-
-System Limitations
-==================
-
-- **Max number of blades:** 1 Jumpserver, 3 Controllers, 20 Compute blades
-
-- **Min number of blades:** 1 Jumpserver
-
-- **Storage:** Cinder is the only supported storage configuration
-
-- **Max number of networks:** 65k
-
-
-Known Issues
-============
-
-**JIRA TICKETS:**
-
-`Known issues <https://jira.opnfv.org/issues/?filter=12116>`_
-
-(Also see the respective integrated feature project's bug tracking)
-
-Workarounds
-===========
-
-**JIRA TICKETS:**
-
--
-
-(Also see the respective integrated feature project's bug tracking)
-
-============
-Test Results
-============
-The Euphrates 5.1 release with the Fuel deployment tool has undergone QA test
-runs; see the separate test results.
-
-==========
-References
-==========
-For more information on the OPNFV Euphrates 5.1 release, please see:
-
-OPNFV
-=====
-
-1) `OPNFV Home Page <http://www.opnfv.org>`_
-2) `OPNFV Documentation <http://docs.opnfv.org>`_
-3) `OPNFV Software Downloads <https://www.opnfv.org/software/download>`_
-
-OpenStack
-=========
-
-4) `OpenStack Ocata Release Artifacts <http://www.openstack.org/software/ocata>`_
-
-5) `OpenStack Documentation <http://docs.openstack.org>`_
-
-OpenDaylight
-============
-
-6) `OpenDaylight Artifacts <http://www.opendaylight.org/software/downloads>`_
-
-Fuel
-====
-
-7) `Mirantis Cloud Platform Documentation <https://docs.mirantis.com/mcp/latest>`_
diff --git a/docs/release/scenarios/os-nosdn-ovs-ha/index.rst b/docs/release/scenarios/os-nosdn-ovs-ha/index.rst
deleted file mode 100644
index af0105b..0000000
--- a/docs/release/scenarios/os-nosdn-ovs-ha/index.rst
+++ /dev/null
@@ -1,16 +0,0 @@
-.. _os-nosdn-ovs-ha1:
-
-.. This work is licensed under a Creative Commons Attribution 4.0 International Licence.
-.. http://creativecommons.org/licenses/by/4.0
-.. (c) 2017 Mirantis Inc., Enea Software AB and others
-
-========================================
-os-nosdn-ovs-ha overview and description
-========================================
-
-.. toctree::
- :numbered:
- :maxdepth: 2
-
- os-nosdn-ovs-ha.rst
-
diff --git a/docs/release/scenarios/os-nosdn-ovs-ha/os-nosdn-ovs-ha.rst b/docs/release/scenarios/os-nosdn-ovs-ha/os-nosdn-ovs-ha.rst
deleted file mode 100644
index 5e30ab5..0000000
--- a/docs/release/scenarios/os-nosdn-ovs-ha/os-nosdn-ovs-ha.rst
+++ /dev/null
@@ -1,42 +0,0 @@
-.. This work is licensed under a Creative Commons Attribution 4.0 International License.
-.. http://creativecommons.org/licenses/by/4.0
-.. (c)2017 Mirantis Inc., Enea Software AB and others
-
-This document provides scenario-level details for the Euphrates 5.0
-deployment with no SDN controller and no extra features enabled.
-
-============
-Introduction
-============
-
-This scenario is used primarily to validate and deploy an Ocata OpenStack
-deployment without any NFV features or SDN controller enabled.
-
-Scenario components and composition
-===================================
-
-This scenario is composed of common OpenStack services enabled by default,
-including Nova, Neutron, Glance, Cinder, Keystone, Horizon. It also installs
-the DPDK-enabled Open vSwitch component.
-
-All services are in HA, meaning that there are multiple cloned instances of
-each service, load-balanced by HAProxy using a Virtual IP address per
-service.
-
-
-Scenario usage overview
-=======================
-
-To deploy this scenario, use the os-nosdn-ovs-ha.yaml deploy settings file.
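-
-The exact invocation depends on the lab configuration; as a sketch (the lab
-and pod names below are placeholders), the deploy script is called with the
-scenario name:
-
-.. code-block:: bash
-
-    $ ci/deploy.sh -b file:///<path-to-lab-configs> \
-                   -l <lab_name> -p <pod_name> \
-                   -s os-nosdn-ovs-ha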
-
-Limitations, Issues and Workarounds
-===================================
-
-None
-
-References
-==========
-
-For more information on the OPNFV Euphrates release, please visit
-http://www.opnfv.org/euphrates
diff --git a/docs/release/scenarios/os-nosdn-ovs-noha/index.rst b/docs/release/scenarios/os-nosdn-ovs-noha/index.rst
deleted file mode 100644
index 066abc9..0000000
--- a/docs/release/scenarios/os-nosdn-ovs-noha/index.rst
+++ /dev/null
@@ -1,16 +0,0 @@
-.. _os-nosdn-ovs-noha1:
-
-.. This work is licensed under a Creative Commons Attribution 4.0 International Licence.
-.. http://creativecommons.org/licenses/by/4.0
-.. (c) 2017 Mirantis Inc., Enea Software AB and others
-
-==========================================
-os-nosdn-ovs-noha overview and description
-==========================================
-
-.. toctree::
- :numbered:
- :maxdepth: 2
-
- os-nosdn-ovs-noha.rst
-
diff --git a/docs/release/scenarios/os-nosdn-ovs-noha/os-nosdn-ovs-noha.rst b/docs/release/scenarios/os-nosdn-ovs-noha/os-nosdn-ovs-noha.rst
deleted file mode 100644
index 7ac4e11..0000000
--- a/docs/release/scenarios/os-nosdn-ovs-noha/os-nosdn-ovs-noha.rst
+++ /dev/null
@@ -1,41 +0,0 @@
-.. This work is licensed under a Creative Commons Attribution 4.0 International License.
-.. http://creativecommons.org/licenses/by/4.0
-.. (c) 2017 Mirantis Inc., Enea Software AB and others
-
-This document provides scenario-level details for the Euphrates 5.0
-deployment with no SDN controller and no extra features enabled.
-
-============
-Introduction
-============
-
-This scenario is used primarily to validate and deploy an Ocata OpenStack
-deployment without any NFV features or SDN controller enabled.
-
-
-Scenario components and composition
-===================================
-
-This scenario is composed of common OpenStack services enabled by default,
-including Nova, Neutron, Glance, Cinder, Keystone, Horizon. It also installs
-the DPDK-enabled Open vSwitch component.
-
-
-Scenario usage overview
-=======================
-
-To deploy this scenario, use the os-nosdn-ovs-noha.yaml deploy settings file.
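-
-The exact invocation depends on the lab configuration; as a sketch (the lab
-and pod names below are placeholders), the deploy script is called with the
-scenario name:
-
-.. code-block:: bash
-
-    $ ci/deploy.sh -b file:///<path-to-lab-configs> \
-                   -l <lab_name> -p <pod_name> \
-                   -s os-nosdn-ovs-noha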
-
-
-Limitations, Issues and Workarounds
-===================================
-
-Tested on virtual deploy only.
-
-References
-==========
-
-For more information on the OPNFV Euphrates release, please visit
-http://www.opnfv.org/euphrates
-
diff --git a/docs/release/userguide/img/horizon_login.png b/docs/release/userguide/img/horizon_login.png
deleted file mode 100644
index 641ca6c..0000000
--- a/docs/release/userguide/img/horizon_login.png
+++ /dev/null
Binary files differ
diff --git a/docs/release/userguide/img/reclass_doc.png b/docs/release/userguide/img/reclass_doc.png
deleted file mode 100644
index 374f92a..0000000
--- a/docs/release/userguide/img/reclass_doc.png
+++ /dev/null
Binary files differ
diff --git a/docs/release/userguide/img/salt_services_ip.png b/docs/release/userguide/img/salt_services_ip.png
deleted file mode 100644
index 504beb3..0000000
--- a/docs/release/userguide/img/salt_services_ip.png
+++ /dev/null
Binary files differ
diff --git a/docs/release/userguide/img/saltstack.png b/docs/release/userguide/img/saltstack.png
deleted file mode 100644
index d57452c..0000000
--- a/docs/release/userguide/img/saltstack.png
+++ /dev/null
Binary files differ
diff --git a/docs/release/userguide/index.rst b/docs/release/userguide/index.rst
index d4330d0..58baffd 100644
--- a/docs/release/userguide/index.rst
+++ b/docs/release/userguide/index.rst
@@ -1,10 +1,8 @@
-.. _fuel-userguide:
-
.. This work is licensed under a Creative Commons Attribution 4.0 International License.
.. http://creativecommons.org/licenses/by/4.0
.. (c) Open Platform for NFV Project, Inc. and its contributors
-.. _fuel-release-userguide-label:
+.. _armband-release-userguide-label:
**************************
User guide for Fuel\@OPNFV
diff --git a/docs/release/userguide/userguide.rst b/docs/release/userguide/userguide.rst
index 2b46a84..7b86c8e 100644
--- a/docs/release/userguide/userguide.rst
+++ b/docs/release/userguide/userguide.rst
@@ -6,319 +6,13 @@
Abstract
========
-This document contains details about how to use OPNFV Fuel - Euphrates
-release - after it has been deployed. For details on how to deploy, check the
-installation instructions in the :ref:`references` section.
-
-This is a unified document for both x86_64 and aarch64
-architectures. All information is common to both architectures
-except when explicitly stated.
-
-
-
-================
-Network Overview
-================
-
-Fuel uses several networks to deploy and administer the cloud:
-
-+------------------+-------------------+---------------------------------------------------------+
-| Network name | Deploy Type | Description |
-| | | |
-+==================+===================+=========================================================+
-| **PXE/ADMIN** | baremetal only | Used for booting the nodes via PXE |
-+------------------+-------------------+---------------------------------------------------------+
-| **MCPCONTROL** | baremetal & | Used to provision the infrastructure VMs (Salt & MaaS). |
-| | virtual | On virtual deploys, it is used for Admin too (on target |
-| | | VMs) leaving the PXE/Admin bridge unused |
-+------------------+-------------------+---------------------------------------------------------+
-| **Mgmt** | baremetal & | Used for internal communication between |
-| | virtual | OpenStack components |
-+------------------+-------------------+---------------------------------------------------------+
-| **Internal** | baremetal & | Used for VM data communication within the |
-| | virtual | cloud deployment |
-+------------------+-------------------+---------------------------------------------------------+
-| **Public** | baremetal & | Used to provide Virtual IPs for public endpoints |
-| | virtual | that are used to connect to OpenStack services APIs. |
-| | | Used by Virtual machines to access the Internet |
-+------------------+-------------------+---------------------------------------------------------+
-
-
-These networks - except MCPCONTROL - can be Linux bridges configured on the
-Jumpserver before the deploy. If they do not exist at deploy time, the deploy
-scripts will create them as virsh networks.
-
-MCPCONTROL exists only on the Jumpserver and needs to be a virtual network,
-because a DHCP server runs on it and assigns static host entry IPs to the
-Salt and MaaS VMs.
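-
-For example, the virsh networks present on the Jumpserver can be inspected
-with the libvirt CLI (assuming the libvirt client tools are installed there):
-
-.. code-block:: bash
-
-    $ virsh net-list --all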
-
-
-
-===================
-Accessing the Cloud
-===================
-
-Any component of the deployed cloud can be accessed from the Jumpserver as
-user *ubuntu*, using the SSH key */var/lib/opnfv/mcp.rsa*. The example below
-shows a connection to the Salt master.
-
- .. code-block:: bash
-
- $ ssh -o StrictHostKeyChecking=no -i /var/lib/opnfv/mcp.rsa -l ubuntu 10.20.0.2
-
-**Note**: The Salt master IP is not hardcoded; it is configurable via
-INSTALLER_IP during deployment.
-
-
-The Fuel baremetal deploy has a Virtualized Control Plane (VCP), which means
-that the controller services are installed in VMs on the baremetal targets
-(kvm servers). These VMs can also be accessed via virsh console, with user
-*opnfv* and password *opnfv_secret*. This method does not apply to the
-infrastructure VMs (Salt master and MaaS).
-
-The example below shows a connection to a controller VM, made from the
-baremetal server kvm01.
-
- .. code-block:: bash
-
- $ ssh -o StrictHostKeyChecking=no -i /var/lib/opnfv/mcp.rsa -l ubuntu x.y.z.141
- ubuntu@kvm01:~$ virsh console ctl01
-
-User *ubuntu* has sudo rights. User *opnfv* has sudo rights only on aarch64 deploys.
-
-
-=============================
-Exploring the Cloud with Salt
-=============================
-
-Salt commands can be used to gather information about the cloud. Salt is
-based on a master-minion model, in which the salt-master pushes configuration
-to the minions and instructs them to execute actions.
-
-For example, Salt can be told to execute a ping to 8.8.8.8 on all the nodes:
-
-.. figure:: img/saltstack.png
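-
-The command shown in the figure can be reproduced on the Salt master; a
-minimal sketch, assuming the standard ``cmd.run`` execution module:
-
-.. code-block:: bash
-
-    root@cfg01:~# salt "*" cmd.run 'ping -c 4 8.8.8.8'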
-
-Targets can be selected with complex filters, such as compound queries or
-node roles. For more information about Salt, see the :ref:`references`
-section.
-
-Some examples are listed below. Note that these commands are issued from the
-Salt master as the *root* user.
-
-
-#. View the IPs of all the components
-
- .. code-block:: bash
-
- root@cfg01:~$ salt "*" network.ip_addrs
- cfg01.baremetal-mcp-ocata-odl-ha.local:
- - 10.20.0.2
- - 172.16.10.100
- mas01.baremetal-mcp-ocata-odl-ha.local:
- - 10.20.0.3
- - 172.16.10.3
- - 192.168.11.3
- .........................
-
-
-#. View the interfaces of all the components and save the output to a file in YAML format
-
- .. code-block:: bash
-
- root@cfg01:~$ salt "*" network.interfaces --out yaml --output-file interfaces.yaml
- root@cfg01:~# cat interfaces.yaml
- cfg01.baremetal-mcp-ocata-odl-ha.local:
- enp1s0:
- hwaddr: 52:54:00:72:77:12
- inet:
- - address: 10.20.0.2
- broadcast: 10.20.0.255
- label: enp1s0
- netmask: 255.255.255.0
- inet6:
- - address: fe80::5054:ff:fe72:7712
- prefixlen: '64'
- scope: link
- up: true
- .........................
-
-
-#. View the packages installed on the MaaS node
-
- .. code-block:: bash
-
- root@cfg01:~# salt "mas*" pkg.list_pkgs
- mas01.baremetal-mcp-ocata-odl-ha.local:
- ----------
- accountsservice:
- 0.6.40-2ubuntu11.3
- acl:
- 2.2.52-3
- acpid:
- 1:2.0.26-1ubuntu2
- adduser:
- 3.113+nmu3ubuntu4
- anerd:
- 1
- .........................
-
-
-#. Execute any Linux command on all nodes (list the contents of */var/log* in this example)
-
- .. code-block:: bash
-
- root@cfg01:~# salt "*" cmd.run 'ls /var/log'
- cfg01.baremetal-mcp-ocata-odl-ha.local:
- alternatives.log
- apt
- auth.log
- boot.log
- btmp
- cloud-init-output.log
- cloud-init.log
- .........................
-
-
-#. Execute any Linux command on nodes selected with a compound query filter
-
- .. code-block:: bash
-
- root@cfg01:~# salt -C '* and cfg01*' cmd.run 'ls /var/log'
- cfg01.baremetal-mcp-ocata-odl-ha.local:
- alternatives.log
- apt
- auth.log
- boot.log
- btmp
- cloud-init-output.log
- cloud-init.log
- .........................
-
-
-#. Execute any Linux command on nodes selected with a role filter
-
- .. code-block:: bash
-
- root@cfg01:~# salt -I 'nova:compute' cmd.run 'ls /var/log'
- cmp001.baremetal-mcp-ocata-odl-ha.local:
- alternatives.log
- apache2
- apt
- auth.log
- btmp
- ceilometer
- cinder
- cloud-init-output.log
- cloud-init.log
- .........................
-
-
-
-===================
-Accessing Openstack
-===================
-
-Once the deployment is complete, the OpenStack CLI is accessible from the
-controller VMs (ctl01..ctl03).
-The OpenStack credentials are in */root/keystonercv3*.
-
- .. code-block:: bash
-
- root@ctl01:~# source keystonercv3
- root@ctl01:~# openstack image list
- +--------------------------------------+-----------------------------------------------+--------+
- | ID | Name | Status |
- +======================================+===============================================+========+
- | 152930bf-5fd5-49c2-b3a1-cae14973f35f | CirrosImage | active |
- | 7b99a779-78e4-45f3-9905-64ae453e3dcb | Ubuntu16.04 | active |
- +--------------------------------------+-----------------------------------------------+--------+
-
-
-The OpenStack Dashboard, Horizon, is available at http://<controller VIP>:8078,
-e.g. http://10.16.0.101:8078.
-The administrator credentials are *admin*/*opnfv_secret*.
-
-.. figure:: img/horizon_login.png
-
-
-A full list of IPs/services is available at <proxy public VIP>:8090 for baremetal deploys.
-
-.. figure:: img/salt_services_ip.png
-
-For virtual deploys, the most commonly used IPs are listed in the table below.
-
-+-----------+--------------+---------------+
-| Component | IP | Default value |
-+===========+==============+===============+
-| gtw01 | x.y.z.110 | 172.16.10.110 |
-+-----------+--------------+---------------+
-| ctl01 | x.y.z.100 | 172.16.10.100 |
-+-----------+--------------+---------------+
-| cmp001 | x.y.z.105 | 172.16.10.105 |
-+-----------+--------------+---------------+
-| cmp002 | x.y.z.106 | 172.16.10.106 |
-+-----------+--------------+---------------+
-
-
-=============================
-Reclass model viewer tutorial
-=============================
-
-
-In order to get a better understanding of the reclass model Fuel uses, the
-`reclass-doc <https://github.com/jirihybek/reclass-doc>`_ tool can be used to
-visualise it. For a simplified installation, reclass-doc can run inside an
-Ubuntu Docker container; this avoids installing packages on the host that
-might collide with existing ones. After the installation is done, a web
-browser on the host can be used to view the results.
-
-**NOTE**: The host can be any device with the Docker package already installed.
- The user who runs Docker needs to have root privileges.
-
-
-**Instructions**
-
-
-#. Create a new directory at any location
-
- .. code-block:: bash
-
- $ mkdir -p modeler
-
-
-#. Clone the fuel repo into the above directory
-
- .. code-block:: bash
-
- $ cd modeler
- $ git clone https://gerrit.opnfv.org/gerrit/fuel && cd fuel
-
-
-#. Create a container and mount the above host directory
-
- .. code-block:: bash
-
- $ docker run --privileged -it -v <absolute_path>/modeler:/host ubuntu bash
-
-
-#. Install all the required packages inside the container.
-
- .. code-block:: bash
-
- $ apt-get update
- $ apt-get install -y npm nodejs
- $ npm install -g reclass-doc
- $ cd /host/fuel/mcp/reclass
- $ ln -s /usr/bin/nodejs /usr/bin/node
- $ reclass-doc --output /host /host/fuel/mcp/reclass
-
-
-#. View the results from the host by using a browser. The file to open should now be at *modeler/index.html*
-
- .. figure:: img/reclass_doc.png
-
-
-.. _references:
-
-==========
-References
-==========
-
-1) `Installation instructions <http://docs.opnfv.org/en/stable-euphrates/submodules/fuel/docs/release/installation/installation.instruction.html>`_
-2) `Saltstack Documentation <https://docs.saltstack.com/en/latest/topics>`_
-3) `Saltstack Formulas <http://salt-formulas.readthedocs.io/en/latest/develop/overview-reclass.html>`_
-
-
+The Armband project aims to integrate and test all aspects of OPNFV releases
+on ARM-based servers. The goal is to replicate the OPNFV software build,
+continuous integration, lab provisioning, and testing processes of each
+standard OPNFV release, such that every release is available on both
+Intel Architecture-based and ARM Architecture-based servers.
+
+The armband repo contains the patches necessary for the Fuel installer to run
+on aarch64 hardware. For more information on how to use Fuel@OPNFV - Euphrates
+release - after it has been deployed, check
+:ref:`fuel-release-userguide-label`.