Diffstat (limited to 'docs')
-rw-r--r--  docs/release/installation/installation.instruction.rst  | 132
-rw-r--r--  docs/release/release-notes/release-notes.rst            |  29
-rw-r--r--  docs/release/userguide/img/reclass_doc.png              | bin 0 -> 78645 bytes
-rw-r--r--  docs/release/userguide/userguide.rst                    |  57
4 files changed, 172 insertions, 46 deletions
diff --git a/docs/release/installation/installation.instruction.rst b/docs/release/installation/installation.instruction.rst
index 502c7509..af00d46b 100644
--- a/docs/release/installation/installation.instruction.rst
+++ b/docs/release/installation/installation.instruction.rst
@@ -132,14 +132,14 @@ installation of Euphrates using Fuel:
**NOTE:** For aarch64 deployments an UEFI compatible firmware with PXE support is needed (e.g. EDK2).
-
===============================
Help with Hardware Requirements
===============================
Calculate hardware requirements:
-For information on compatible hardware types available for use, please see `Fuel OpenStack Hardware Compatibility List <https://www.mirantis.com/software/hardware-compatibility/>`_.
+For information on compatible hardware types available for use,
+please see `Fuel OpenStack Hardware Compatibility List <https://www.mirantis.com/software/hardware-compatibility/>`_.
When choosing the hardware on which you will deploy your OpenStack
environment, you should think about:
@@ -183,7 +183,49 @@ OPNFV Software Prerequisites
The Jumpserver node should be pre-provisioned with an operating system,
according to the Pharos specification. Relevant network bridges should
-also be pre-configured (e.g. admin, management, public).
+also be pre-configured (e.g. admin_br, mgmt_br, public_br).
+
+ - The admin bridge (admin_br) is mandatory for PXE booting of the baremetal nodes during the Fuel
+   installation (see the sketch below).
+ - The management bridge (mgmt_br) is required by the testing suites (e.g. functest/yardstick); it is
+ suggested to pre-configure it for debugging purposes.
+ - The public bridge (public_br) is also nice to have for debugging purposes, but not mandatory.
+
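+A minimal, non-persistent sketch of creating the admin bridge manually with brctl (from the
+bridge-utils package) is shown below; in practice the bridges are usually made persistent via the
+distro network configuration, and the physical interface name (eth1) is only an example that depends
+on the actual Jumpserver hardware.
+
+.. code-block:: bash
+
+ # eth1 is an example physical interface name; replace with the actual admin/PXE interface
+ $ sudo brctl addbr admin_br
+ $ sudo brctl addif admin_br eth1
+ $ sudo ip link set admin_br up
+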
+The user running the deploy script on the Jumpserver should belong to the "sudo" and "libvirt" groups,
+and have passwordless sudo access.
+
+The following example adds the user "jenkins" to these groups:
+
+.. code-block:: bash
+
+ $ sudo usermod -aG sudo jenkins
+ $ sudo usermod -aG libvirt jenkins
+ $ reboot
+ $ groups
+ jenkins sudo libvirt
+
+ $ sudo visudo
+ ...
+ %jenkins ALL=(ALL) NOPASSWD:ALL
+
+For an AArch64 Jumpserver, the minimum required "libvirt" version is 3.x; 3.5 or newer is highly recommended.
+While not mandatory, upgrading the kernel and QEMU on the Jumpserver is also highly recommended
+(especially on AArch64 Jumpservers).
+
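+If libvirt is already installed, the currently available version can be checked with (a quick,
+optional check):
+
+.. code-block:: bash
+
+ $ libvirtd --version
+ $ virsh --version
+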
+For CentOS 7.4 (AArch64), distro-provided packages are already new enough.
+For Ubuntu 16.04 (arm64), distro packages are too old and third-party repositories should be used.
+For convenience, Armband provides a DEB repository holding all the required packages.
+
+To add and enable the Armband repository on an Ubuntu 16.04 system,
+create a new sources list file `/etc/apt/sources.list.d/armband.list` with the following contents:
+
+.. code-block:: bash
+
+ $ cat /etc/apt/sources.list.d/armband.list
+ # for OpenStack Ocata release
+ deb http://linux.enea.com/mcp-repos/ocata/xenial ocata main
+ deb http://linux.enea.com/apt-mk/xenial nightly ocata
+
+ $ apt-get update
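+
+A quick, optional check that the Armband repository is now being used for the libvirt packages
+(the package name libvirt-bin applies to Ubuntu 16.04; exact versions will differ):
+
+.. code-block:: bash
+
+ $ apt-cache policy libvirt-bin
+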
Fuel@OPNFV has been validated by CI using the following distributions
installed on the Jumpserver:
@@ -191,10 +233,21 @@ installed on the Jumpserver:
- CentOS 7 (recommended by Pharos specification);
- Ubuntu Xenial;
-**NOTE:** The install script expects 'libvirt' to be installed and running
-on the Jumpserver. In case the packages are missing, the script will install
-them; but depending on the OS distribution, the user might have to start the
-'libvirtd' service manually.
+**NOTE**: The install script expects 'libvirt' to be already installed and running on the Jumpserver.
+In case the libvirt packages are missing, the script will install them; but depending on the OS
+distribution, the user might have to start the 'libvirtd' service manually, then run the deploy script
+again. Therefore, it is recommended to install 'libvirt-bin' explicitly on the Jumpserver before the
+deployment.
+
+**NOTE**: It is also recommended to install a newer kernel on the Jumpserver before the deployment.
+
+**NOTE**: The install script will automatically install the rest of the required distro package
+dependencies on the Jumpserver, unless explicitly asked not to (via the -P deploy argument). These
+include Python, QEMU, libvirt etc.
+
+.. code-block:: bash
+
+ $ apt-get install linux-image-generic-hwe-16.04-edge libvirt-bin
+
==========================================
OPNFV Software Installation and Deployment
@@ -209,9 +262,9 @@ automatic based on deployment scenario.
The reclass model covers:
- Infrastucture node definition: Salt Master node (cfg01) and MaaS node (mas01)
- - Openstack node defition: Controler nodes (ctl01, ctl02, ctl03) and Compute nodes (cmp001, cmp002)
+ - OpenStack node definition: Controller nodes (ctl01, ctl02, ctl03) and Compute nodes (cmp001, cmp002)
- Infrastructure components to install (software packages, services etc.)
- - Openstack components and services (rabbitmq, galera etc.), as well as all configuration for them
+ - OpenStack components and services (rabbitmq, galera etc.), as well as all configuration for them
Automatic Installation of a Virtual POD
@@ -220,9 +273,9 @@ Automatic Installation of a Virtual POD
For virtual deploys all the targets are VMs on the Jumpserver. The deploy script will:
- Create a Salt Master VM on the Jumpserver which will drive the installation
- - Create the bridges for networking with virsh (only if a real bridge does not already exists for a given network)
- - Install Openstack on the targets
- - Leverage Salt to install & configure Openstack services
+ - Create the bridges for networking with virsh (only if a real bridge does not already exist for a given network)
+ - Install OpenStack on the targets
+ - Leverage Salt to install & configure OpenStack services
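+
+As an optional sanity check once a virtual deploy has finished, the VMs and virtual networks created
+on the Jumpserver can be listed (names depend on the deployment):
+
+.. code-block:: bash
+
+ $ virsh list --all
+ $ virsh net-list --all
+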
.. figure:: img/fuel_virtual.png
:align: center
@@ -245,18 +298,18 @@ For virtual deploys all the targets are VMs on the Jumpserver. The deploy script
In this figure there are examples of two virtual deploys:
- Jumphost 1 has only virsh bridges, created by the deploy script
- - Jumphost 2 has a mix of linux and virsh briges; when linux bridge exist for a specified network,
+ - Jumphost 2 has a mix of Linux and virsh bridges; when a Linux bridge exists for a specified network,
the deploy script will skip creating a virsh bridge for it
-**Note**: A virtual network "mcpcontrol" is always created. For virtual deploys, "mcpcontrol" is also used
-for Admin, leaving the PXE/Admin bridge unused.
+**Note**: A virtual network "mcpcontrol" is always created. For virtual deploys, "mcpcontrol" is also
+used for Admin, leaving the PXE/Admin bridge unused.
Automatic Installation of a Baremetal POD
=========================================
The baremetal installation process can be done by editing the information about
-hardware and enviroment in the reclass files, or by using a Pod Descriptor File (PDF).
+hardware and environment in the reclass files, or by using a Pod Descriptor File (PDF).
This file contains all the information about the hardware and network of the deployment
that will be fed to the reclass model during deployment.
@@ -264,10 +317,10 @@ The installation is done automatically with the deploy script, which will:
- Create a Salt Master VM on the Jumpserver which will drive the installation
- Create a MaaS Node VM on the Jumpserver which will provision the targets
- - Install Openstack on the targets
+ - Install OpenStack on the targets
- Leverage MaaS to provision baremetal nodes with the operating system
- - Leverage Salt to configure the operatign system on the baremetal nodes
- - Leverage Salt to install & configure Openstack services
+ - Leverage Salt to configure the operating system on the baremetal nodes
+ - Leverage Salt to install & configure OpenStack services
.. figure:: img/fuel_baremetal.png
:align: center
@@ -297,11 +350,12 @@ The installation is done automatically with the deploy script, which will:
| Tenant VM | VM running in the cloud |
+-----------------------+---------------------------------------------------------+
-In the baremetal deploy all bridges but "mcpcontrol" are linux bridges. For the Jumpserver, if they are already created
-they will be used; otherwise they will be created. For the targets, the bridges are created by the deploy script.
+In the baremetal deploy all bridges but "mcpcontrol" are Linux bridges. For the Jumpserver, it is
+required to pre-configure at least the admin_br bridge for the PXE/Admin network.
+For the targets, the bridges are created by the deploy script.
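+
+The bridges already present on the Jumpserver can be listed with (a quick check; brctl is provided by
+the bridge-utils package):
+
+.. code-block:: bash
+
+ $ brctl show
+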
-**Note**: A virtual network "mcpcontrol" is always created. For baremetal deploys, PXE bridge is used for
-baremetal node provisioning, while "mcpcontrol" is used to provision the infrastructure VMs only.
+**Note**: A virtual network "mcpcontrol" is always created. For baremetal deploys, PXE bridge is used
+for baremetal node provisioning, while "mcpcontrol" is used to provision the infrastructure VMs only.
Steps to Start the Automatic Deploy
@@ -333,13 +387,22 @@ These steps are common both for virtual and baremetal deploys.
#. Start the deploy script
+ Besides the basic options, there are other recommended deploy arguments:
+
+ - use the **-D** option to enable debug info
+ - use the **-S** option to point to a temporary directory where the disk images are saved; the images
+ will be re-used between deploys
+ - use **|& tee** to save the deploy log to a file
+
.. code-block:: bash
$ ci/deploy.sh -l <lab_name> \
-p <pod_name> \
-b <URI to configuration repo containing the PDF file> \
-s <scenario> \
- -B <list of admin, management, private and public bridges>
+ -B <list of admin, management, private and public bridges> \
+ -D \
+ -S <Storage directory for disk images> |& tee deploy.log
Examples
--------
@@ -356,7 +419,9 @@ Examples
$ ci/deploy.sh -b file:///home/jenkins/tmpdir/securedlab \
-l ericsson \
-p virtual_kvm \
- -s os-nosdn-nofeature-noha
+ -s os-nosdn-nofeature-noha \
+ -D \
+ -S /home/jenkins/tmpdir |& tee deploy.log
Once the deployment is complete, the OpenStack Dashboard, Horizon is
available at http://<controller VIP>:8078, e.g. http://10.16.0.101:8078.
@@ -373,6 +438,8 @@ Examples
-p pod2 \
-s os-nosdn-nofeature-ha \
- -B pxebr,br-ctl
+ -B pxebr,br-ctl \
+ -D \
+ -S /home/jenkins/tmpdir |& tee deploy.log
.. figure:: img/lf_pod2.png
:align: center
@@ -387,11 +454,12 @@ Examples
.. code-block:: bash
- $ ci/deploy.sh -b file:///home/jenkins/tmpdir/securedlab \
- -l arm \
- -p pod5 \
- -s os-nosdn-nofeature-ha \
- -B admin7_br0,mgmt7_br0,,public7_br0
+ $ ci/deploy.sh -b file:///home/jenkins/tmpdir/securedlab \
+ -l arm \
+ -p pod5 \
+ -s os-nosdn-nofeature-ha \
+ -D \
+ -S /home/jenkins/tmpdir |& tee deploy.log
.. figure:: img/arm_pod5.png
:align: center
@@ -399,10 +467,6 @@ Examples
Fuel@OPNFV ARM POD5 Network Layout
- Once the deployment is complete, the SaltStack Deployment Documentation is
- available at http://<Proxy VIP>:8090, e.g. http://10.0.8.103:8090.
-
-
Pod Descriptor Files
====================
diff --git a/docs/release/release-notes/release-notes.rst b/docs/release/release-notes/release-notes.rst
index 6e1b5342..0052ab63 100644
--- a/docs/release/release-notes/release-notes.rst
+++ b/docs/release/release-notes/release-notes.rst
@@ -61,13 +61,13 @@ Release Data
| **Project** | fuel/armband |
| | |
+--------------------------------------+--------------------------------------+
-| **Repo/tag** | opnfv-5.0.2 |
+| **Repo/tag** | opnfv-5.1.0 |
| | |
+--------------------------------------+--------------------------------------+
-| **Release designation** | Euphrates 5.0 |
+| **Release designation** | Euphrates 5.1 |
| | |
+--------------------------------------+--------------------------------------+
-| **Release date** | October 20 2017 |
+| **Release date** | December 15 2017 |
| | |
+--------------------------------------+--------------------------------------+
| **Purpose of the delivery** | Euphrates alignment to Released |
@@ -84,7 +84,7 @@ Version Change
Module Version Changes
----------------------
-This is the Euphrates 5.0 release.
+This is the Euphrates 5.1 release.
It is based on following upstream versions:
- MCP 1.0 Base Release
@@ -95,13 +95,15 @@ It is based on following upstream versions:
Document Changes
----------------
-This is the Euphrates 5.0 release.
+This is the Euphrates 5.1 release.
It comes with the following documentation:
-- Installation instructions
+- `Installation instructions <http://docs.opnfv.org/en/stable-euphrates/submodules/armband/docs/release/installation/installation.instruction.html>`_
- Release notes (This document)
+- `User guide <http://docs.opnfv.org/en/stable-euphrates/submodules/fuel/docs/release/userguide/userguide.html>`_
+
Reason for Version
==================
@@ -109,14 +111,14 @@ Feature Additions
-----------------
**JIRA TICKETS:**
-`Euphrates 5.0 new features <https://jira.opnfv.org/issues/?filter=12029>`_
+`Euphrates 5.1 new features <https://jira.opnfv.org/issues/?filter=12114>`_
Bug Corrections
---------------
**JIRA TICKETS:**
-`Euphrates 5.0 bug fixes <https://jira.opnfv.org/issues/?filter=12027>`_
+`Euphrates 5.1 bug fixes <https://jira.opnfv.org/issues/?filter=12115>`_
(Also See respective Integrated feature project's bug tracking)
@@ -133,10 +135,13 @@ Software Deliverables
Documentation Deliverables
--------------------------
-- Installation instructions
+- `Installation instructions <http://docs.opnfv.org/en/stable-euphrates/submodules/armband/docs/release/installation/installation.instruction.html>`_
- Release notes (This document)
+- `User guide <http://docs.opnfv.org/en/stable-euphrates/submodules/fuel/docs/release/userguide/userguide.html>`_
+
+
=========================================
Known Limitations, Issues and Workarounds
=========================================
@@ -158,7 +163,7 @@ Known Issues
**JIRA TICKETS:**
-`Known issues <https://jira.opnfv.org/issues/?filter=12028>`_
+`Known issues <https://jira.opnfv.org/issues/?filter=12116>`_
(Also See respective Integrated feature project's bug tracking)
@@ -174,13 +179,13 @@ Workarounds
============
Test Results
============
-The Euphrates 5.0 release with the Fuel deployment tool has undergone QA test
+The Euphrates 5.1 release with the Fuel deployment tool has undergone QA test
runs, see separate test results.
==========
References
==========
-For more information on the OPNFV Euphrates 5.0 release, please see:
+For more information on the OPNFV Euphrates 5.1 release, please see:
OPNFV
=====
diff --git a/docs/release/userguide/img/reclass_doc.png b/docs/release/userguide/img/reclass_doc.png
new file mode 100644
index 00000000..374f92a6
--- /dev/null
+++ b/docs/release/userguide/img/reclass_doc.png
Binary files differ
diff --git a/docs/release/userguide/userguide.rst b/docs/release/userguide/userguide.rst
index f00e6635..2b46a84a 100644
--- a/docs/release/userguide/userguide.rst
+++ b/docs/release/userguide/userguide.rst
@@ -254,6 +254,63 @@ For Virtual deploys, the most commonly used IPs are in the table below.
+-----------+--------------+---------------+
+=============================
+Reclass model viewer tutorial
+=============================
+
+
+In order to get a better understanding of the reclass model Fuel uses, `reclass-doc
+<https://github.com/jirihybek/reclass-doc>`_ can be used to visualise the reclass model.
+A simplified installation can be done by using a Docker Ubuntu container. This approach
+avoids installing packages on the host, which might collide with other packages.
+After the installation is done, a web browser on the host can be used to view the results.
+
+**NOTE**: The host can be any device with the Docker package already installed. The user
+running Docker needs to have root privileges.
+
+
+**Instructions**
+
+
+#. Create a new directory at any location
+
+ .. code-block:: bash
+
+ $ mkdir -p modeler
+
+
+#. Place the fuel repo in the above directory
+
+ .. code-block:: bash
+
+ $ cd modeler
+ $ git clone https://gerrit.opnfv.org/gerrit/fuel && cd fuel
+
+
+#. Create a container and mount the above host directory
+
+ .. code-block:: bash
+
+ $ docker run --privileged -it -v <absolute_path>/modeler:/host ubuntu bash
+
+
+#. Install all the required packages inside the container.
+
+ .. code-block:: bash
+
+ $ apt-get update
+ $ apt-get install -y npm nodejs
+ $ npm install -g reclass-doc
+ $ cd /host/fuel/mcp/reclass
+ $ ln -s /usr/bin/nodejs /usr/bin/node
+ $ reclass-doc --output /host /host/fuel/mcp/reclass
+
+
+#. View the results from the host by using a browser. The file to open should now be at modeler/index.html (see the example after the figure below).
+
+ .. figure:: img/reclass_doc.png
+
+
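+ One minimal way to open the generated page from the host (an example; xdg-open assumes a desktop
+ environment, and the path depends on where the modeler directory was created):
+
+ .. code-block:: bash
+
+ $ xdg-open modeler/index.html
+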
.. _references:
==========