Diffstat (limited to 'docs')
-rw-r--r--  docs/release/installation/installation.instruction.rst          | 179
-rw-r--r--  docs/release/release-notes/release-notes.rst                     |  56
-rw-r--r--  docs/release/scenarios/os-nosdn-ovs-ha/os-nosdn-ovs-ha.rst       |   8
-rw-r--r--  docs/release/scenarios/os-nosdn-ovs-noha/os-nosdn-ovs-noha.rst   |  11
-rw-r--r--  docs/release/userguide/userguide.rst                             | 328
5 files changed, 342 insertions, 240 deletions
diff --git a/docs/release/installation/installation.instruction.rst b/docs/release/installation/installation.instruction.rst
index 6c0bf4cb8..acf4aac4b 100644
--- a/docs/release/installation/installation.instruction.rst
+++ b/docs/release/installation/installation.instruction.rst
@@ -6,7 +6,7 @@
Abstract
========
-This document describes how to install the Euphrates release of
+This document describes how to install the Fraser release of
OPNFV when using Fuel as a deployment tool, covering its usage,
limitations, dependencies and required system resources.
This is a unified documentation for both x86_64 and aarch64
@@ -18,14 +18,14 @@ Introduction
============
This document provides guidelines on how to install and
-configure the Euphrates release of OPNFV when using Fuel as a
+configure the Fraser release of OPNFV when using Fuel as a
deployment tool, including required software and hardware configurations.
Although the available installation options provide a high degree of
freedom in how the system is set up, including architecture, services
and features, etc., said permutations may not provide an OPNFV
compliant reference architecture. This document provides a
-step-by-step guide that results in an OPNFV Euphrates compliant
+step-by-step guide that results in an OPNFV Fraser compliant
deployment.
The audience of this document is assumed to have good knowledge of
@@ -35,7 +35,7 @@ networking and Unix/Linux administration.
Preface
=======
-Before starting the installation of the Euphrates release of
+Before starting the installation of the Fraser release of
OPNFV, using Fuel as a deployment tool, some planning must be
done.
@@ -69,7 +69,7 @@ Hardware Requirements for Virtual Deploys
=========================================
The following minimum hardware requirements must be met for the virtual
-installation of Euphrates using Fuel:
+installation of Fraser using Fuel:
+----------------------------+--------------------------------------------------------+
| **HW Aspect** | **Requirement** |
@@ -83,7 +83,7 @@ installation of Euphrates using Fuel:
+----------------------------+--------------------------------------------------------+
| **RAM** | Minimum 32GB/server (Depending on VNF work load) |
+----------------------------+--------------------------------------------------------+
-| **Disk** | Minimum 100GB (SSD or SCSI (15krpm) highly recommended |
+| **Disk** | Minimum 100GB (SSD or SCSI (15krpm) highly recommended)|
+----------------------------+--------------------------------------------------------+
@@ -92,7 +92,7 @@ Hardware Requirements for Baremetal Deploys
===========================================
The following minimum hardware requirements must be met for the baremetal
-installation of Euphrates using Fuel:
+installation of Fraser using Fuel:
+-------------------------+------------------------------------------------------+
| **HW Aspect** | **Requirement** |
@@ -173,7 +173,7 @@ the Fuel OPNFV reference platform. All the networks involved in the OPNFV
infrastructure as well as the provider networks and the private tenant
VLANs need to be manually configured.
-Manual configuration of the Euphrates hardware platform should
+Manual configuration of the Fraser hardware platform should
be carried out according to the `OPNFV Pharos Specification
<https://wiki.opnfv.org/display/pharos/Pharos+Specification>`_.
@@ -185,10 +185,10 @@ The Jumpserver node should be pre-provisioned with an operating system,
according to the Pharos specification. Relevant network bridges should
also be pre-configured (e.g. admin_br, mgmt_br, public_br).
- - The admin bridge (admin_br) is mandatory for the baremetal nodes PXE booting during fuel installation.
- - The management bridge (mgmt_br) is required for testing suites (e.g. functest/yardstick), it is
- suggested to pre-configure it for debugging purposes.
- - The public bridge (public_br) is also nice to have for debugging purposes, but not mandatory.
+- The admin bridge (admin_br) is mandatory for the baremetal nodes PXE booting during Fuel installation.
+- The management bridge (mgmt_br) is required for testing suites (e.g. functest/yardstick); it is
+ suggested to pre-configure it for debugging purposes.
+- The public bridge (public_br) is also nice to have for debugging purposes, but not mandatory.
The user running the deploy script on the Jumpserver should belong to "sudo" and "libvirt" groups,
and have passwordless sudo access.
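
A minimal sketch of adding an existing user to these groups (assuming the user is named "jenkins"):

 .. code-block:: bash

    $ sudo usermod -aG sudo,libvirt jenkins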
@@ -207,6 +207,13 @@ The following example adds the groups to the user "jenkins"
...
%jenkins ALL=(ALL) NOPASSWD:ALL
+The folder containing the temporary deploy artifacts (/home/jenkins/tmpdir in the examples below)
+needs to have permissions 777 in order for libvirt to be able to access the artifacts.
+
+.. code-block:: bash
+
+ $ mkdir -p -m 777 /home/jenkins/tmpdir
+
For an AArch64 Jumpserver, the minimum required "libvirt" version is 3.x; 3.5 or newer is highly recommended.
While not mandatory, upgrading the kernel and QEMU on the Jumpserver is also highly recommended
(especially on AArch64 Jumpservers).
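
The currently installed libvirt versions can be checked as follows (a quick sketch):

 .. code-block:: bash

    $ virsh --version
    $ libvirtd --version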
@@ -229,15 +236,15 @@ create a new sources list file `/apt/sources.list.d/armband.list` with the follo
Fuel@OPNFV has been validated by CI using the following distributions
installed on the Jumpserver:
- - CentOS 7 (recommended by Pharos specification);
- - Ubuntu Xenial;
+- CentOS 7 (recommended by Pharos specification);
+- Ubuntu Xenial;
-**NOTE**: The install script expects 'libvirt' to be already running on the Jumpserver.In case libvirt
+**NOTE**: The install script expects 'libvirt' to be already running on the Jumpserver. In case libvirt
packages are missing, the script will install them; but depending on the OS distribution, the user
might have to start the 'libvirtd' service manually, then run the deploy script again. Therefore, it
-is recommened to install libvirt-bin explicitly on the Jumpserver before the deployment.
+is recommended to install libvirt-bin explicitly on the Jumpserver before the deployment.
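+
+For example, on an Ubuntu Xenial Jumpserver this could be done as follows (a sketch; package
+and service names may differ on other distributions):
+
+ .. code-block:: bash
+
+    $ sudo apt-get install -y libvirt-bin qemu-kvm
+    $ sudo systemctl enable libvirtd
+    $ sudo systemctl start libvirtd
+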
-**NOTE**: It is also recommened to install the newer kernel on the Jumpserver before the deployment.
+**NOTE**: It is also recommended to install the newer kernel on the Jumpserver before the deployment.
**NOTE**: The install script will automatically install the rest of the required distro package
dependencies on the Jumpserver, unless explicitly asked not to (via -P deploy arg). This includes
@@ -262,7 +269,7 @@ a reclass model. This model provides the formula inputs to Salt, to make the dep
automatic based on deployment scenario.
The reclass model covers:
- - Infrastucture node definition: Salt Master node (cfg01) and MaaS node (mas01)
+ - Infrastructure node definition: Salt Master node (cfg01) and MaaS node (mas01)
- OpenStack node definition: Controller nodes (ctl01, ctl02, ctl03) and Compute nodes (cmp001, cmp002)
- Infrastructure components to install (software packages, services etc.)
- OpenStack components and services (rabbitmq, galera etc.), as well as all configuration for them
@@ -302,17 +309,18 @@ In this figure there are examples of two virtual deploys:
- Jumphost 2 has a mix of Linux and virsh bridges; when a Linux bridge exists for a specified network,
  the deploy script will skip creating a virsh bridge for it
-**Note**: A virtual network "mcpcontrol" is always created. For virtual deploys, "mcpcontrol" is also
- used for Admin, leaving the PXE/Admin bridge unused.
+**Note**: A virtual network "mcpcontrol" is always created for initial connection
+of the VMs on the Jumphost.
Automatic Installation of a Baremetal POD
=========================================
The baremetal installation process can be done by editing the information about
-hardware and environment in the reclass files, or by using a Pod Descriptor File (PDF).
-This file contains all the information about the hardware and network of the deployment
-the will be fed to the reclass model during deployment.
+hardware and environment in the reclass files, or by using a Pod Descriptor File (PDF)
+and an Installer Descriptor File (IDF), as described in the OPNFV Pharos project.
+These files contain all the information about the hardware and network of the deployment
+that will be fed to the reclass model during deployment.
The installation is done automatically with the deploy script, which will:
@@ -355,8 +363,8 @@ In the baremetal deploy all bridges but "mcpcontrol" are Linux bridges. For the
required to pre-configure at least the admin_br bridge for the PXE/Admin.
For the targets, the bridges are created by the deploy script.
-**Note**: A virtual network "mcpcontrol" is always created. For baremetal deploys, PXE bridge is used
-for baremetal node provisioning, while "mcpcontrol" is used to provision the infrastructure VMs only.
+**Note**: A virtual network "mcpcontrol" is always created for initial connection
+of the VMs on the Jumphost.
Steps to Start the Automatic Deploy
@@ -380,11 +388,11 @@ These steps are common both for virtual and baremetal deploys.
$ git clone https://git.opnfv.org/armband
$ cd armband
-#. Checkout the Euphrates release
+#. Checkout the Fraser release
.. code-block:: bash
- $ git checkout opnfv-5.0.2
+ $ git checkout opnfv-6.0.0
#. Start the deploy script
@@ -416,15 +424,14 @@ Examples
.. code-block:: bash
- $ ci/deploy.sh -b file:///home/jenkins/tmpdir/securedlab \
- -l ericsson \
- -p virtual_kvm \
- -s os-nosdn-nofeature-noha \
- -D \
- -S /home/jenkins/tmpdir |& tee deploy.log
+ $ ci/deploy.sh -l ericsson \
+ -p virtual3 \
+ -s os-nosdn-nofeature-noha \
+ -D \
+ -S /home/jenkins/tmpdir |& tee deploy.log
- Once the deployment is complete, the OpenStack Dashboard, Horizon is
- available at http://<controller VIP>:8078, e.g. http://10.16.0.101:8078.
+ Once the deployment is complete, the OpenStack Dashboard, Horizon, is
+ available at http://<controller VIP>:8078
The administrator credentials are **admin** / **opnfv_secret**.
#. Baremetal deploy
@@ -433,8 +440,7 @@ Examples
.. code-block:: bash
- $ ci/deploy.sh -b file:///home/jenkins/tmpdir/securedlab \
- -l lf \
+ $ ci/deploy.sh -l lf \
-p pod2 \
-s os-nosdn-nofeature-ha \
-D \
@@ -446,15 +452,11 @@ Examples
Fuel@OPNFV LF POD2 Network Layout
- Once the deployment is complete, the SaltStack Deployment Documentation is
- available at http://<Proxy VIP>:8090, e.g. http://172.30.10.103:8090.
-
An aarch64 deploy on pod5 from Arm lab
.. code-block:: bash
- $ ci/deploy.sh -b file:///home/jenkins/tmpdir/securedlab \
- -l arm \
+ $ ci/deploy.sh -l arm \
-p pod5 \
-s os-nosdn-nofeature-ha \
-D \
@@ -466,24 +468,43 @@ Examples
Fuel@OPNFV ARM POD5 Network Layout
-Pod Descriptor Files
-====================
+ Once the deployment is complete, the SaltStack Deployment Documentation is
+ available at http://<proxy public VIP>:8090
+
+**NOTE**: The deployment uses the OPNFV Pharos project as input (PDF and IDF files)
+for hardware and network configuration of all current OPNFV PODs.
+When deploying a new POD, one can pass the `-b` flag to the deploy script to override
+the path for the labconfig directory structure containing the PDF and IDF.
+
+ .. code-block:: bash
+
+ $ ci/deploy.sh -b file://<absolute_path_to_labconfig> \
+ -l <lab_name> \
+ -p <pod_name> \
+ -s <scenario> \
+ -D \
+ -S <tmp_folder> |& tee deploy.log
+
+ - <absolute_path_to_labconfig> is the absolute path to a local directory, populated
+ similarly to Pharos, i.e. PDF/IDF reside in <absolute_path_to_labconfig>/labs/<lab_name>
+ - <lab_name> is the same as the directory in the path above
+ - <pod_name> is the name used for the PDF (<pod_name>.yaml) and IDF (idf-<pod_name>.yaml) files
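+
+A hypothetical layout of such a local labconfig directory (lab and pod names below are
+placeholders, not real OPNFV labs):
+
+ .. code-block:: bash
+
+    /home/jenkins/labconfig/
+    `-- labs/
+        `-- mylab/
+            |-- pod1.yaml        # PDF for pod "pod1"
+            `-- idf-pod1.yaml    # IDF for pod "pod1"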
+
+
+
+Pod and Installer Descriptor Files
+==================================
Descriptor files provide the installer with an abstraction of the target pod
with all its hardware characteristics and required parameters. This information
is split into two different files:
Pod Descriptor File (PDF) and Installer Descriptor File (IDF).
-
-The Pod Descriptor File is a hardware and network description of the pod
+The Pod Descriptor File is a hardware description of the pod
infrastructure. The information is modeled under a yaml structure.
A reference file with the expected yaml structure is available at
*mcp/config/labs/local/pod1.yaml*
-A common network section describes all the internal and provider networks
-assigned to the pod. Each network is expected to have a vlan tag, IP subnet and
-attached interface on the boards. Untagged vlans shall be defined as "native".
-
The hardware description is arranged into a main "jumphost" node and a "nodes"
set for all target boards. For each node the following characteristics
are defined:
@@ -491,25 +512,57 @@ are defined:
- Node parameters including CPU features and total memory.
- A list of available disks.
- Remote management parameters.
-- Network interfaces list including mac address, speed and advanced features.
-- IP list of fixed IPs for the node
-
-**Note**: the fixed IPs are ignored by the MCP installer script and it will instead
-assign based on the network ranges defined under the pod network configuration.
+- Network interfaces list including mac address, speed, advanced features and name.
+**Note**: The fixed IPs are ignored by the MCP installer script, which will instead
+assign addresses based on the network ranges defined in the IDF.
The Installer Descriptor File extends the PDF with pod related parameters
required by the installer. This information may differ per each installer type
-and it is not considered part of the pod infrastructure. Fuel installer relies
-on the IDF model to map the networks to the bridges on the foundation node and
-to setup all node NICs by defining the expected OS device name and bus address.
+and it is not considered part of the pod infrastructure.
+The IDF file must be named after the PDF with the prefix "idf-". A reference file with the expected
+structure is available at *mcp/config/labs/local/idf-pod1.yaml*
+The file follows a yaml structure and two sections "net_config" and "fuel" are expected.
-The file follows a yaml structure and a "fuel" section is expected. Contents and
-references must be aligned with the PDF file. The IDF file must be named after
-the PDF with the prefix "idf-". A reference file with the expected structure
-is available at *mcp/config/labs/local/idf-pod1.yaml*
+The "net_config" section describes all the internal and provider networks
+assigned to the pod. Each used network is expected to have a vlan tag, IP subnet and
+attached interface on the boards. Untagged vlans shall be defined as "native".
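+
+A minimal "net_config" sketch (vlan tags, subnets and interface indices are illustrative only):
+
+ .. code-block:: yaml
+
+    net_config:
+      admin:
+        interface: 0
+        vlan: native
+        network: 192.168.11.0
+        mask: 24
+      mgmt:
+        interface: 1
+        vlan: 300
+        network: 172.16.10.0
+        mask: 24
+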
+The "fuel" section defines several sub-sections required by the Fuel installer:
+
+- jumphost: List of bridge names for each network on the Jumpserver.
+- network: List of device name and bus address info of all the target nodes.
+ The order must be aligned with the order defined in the PDF file. The Fuel installer relies on the IDF model
+ to set up all node NICs by defining the expected device name and bus address.
+- maas: Defines the target nodes' commission timeout and deploy timeout. (optional)
+- reclass: Defines compute parameter tuning, including huge pages, cpu pinning
+ and other DPDK settings. (optional)
+
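+A condensed sketch of the "fuel" section illustrating the sub-sections above (bridge names,
+device names and bus addresses are illustrative only):
+
+ .. code-block:: yaml
+
+    fuel:
+      jumphost:
+        bridges:
+          admin: admin_br
+          mgmt: mgmt_br
+          private: ~
+          public: public_br
+      network:
+        node:
+          - interfaces:
+              - enp6s0
+            busaddr:
+              - '0000:06:00.0'
+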
+The following parameters can be defined in the IDF files under "reclass". Those values will
+override the default configuration values in the Fuel repository:
+
+- nova_cpu_pinning: List of CPU cores nova will be pinned to. Currently disabled.
+- compute_hugepages_size: Size of each persistent huge page. Usual values are '2M' and '1G'.
+- compute_hugepages_count: Total number of persistent huge pages.
+- compute_hugepages_mount: Mount point to use for huge pages.
+- compute_kernel_isolcpu: List of CPU cores to be isolated from the Linux scheduler.
+- compute_dpdk_driver: Kernel module to provide userspace I/O support.
+- compute_ovs_pmd_cpu_mask: Hexadecimal mask of CPUs to run DPDK Poll-mode drivers.
+- compute_ovs_dpdk_socket_mem: Comma-separated list of huge page memory amounts (in MB) reserved
+  for the OVS-DPDK daemon on each NUMA node; the list length equals the NUMA node count.
+- compute_ovs_dpdk_lcore_mask: Hexadecimal mask of DPDK lcore parameter used to run DPDK processes.
+- compute_ovs_memory_channels: Number of memory channels to be used.
+- dpdk0_driver: NIC driver to use for physical network interface.
+- dpdk0_n_rxq: Number of RX queues.
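+
+A hedged "reclass" override sketch (values are illustrative and depend on the target hardware;
+the exact nesting should be checked against the IDF schema referenced below):
+
+ .. code-block:: yaml
+
+    reclass:
+      node:
+        - compute_params:
+            dpdk:
+              compute_hugepages_size: 1G
+              compute_hugepages_count: 16
+              compute_hugepages_mount: /mnt/hugepages_1G
+              compute_dpdk_driver: uio
+              compute_ovs_pmd_cpu_mask: '0x6'
+              compute_ovs_dpdk_socket_mem: '1024,1024'
+              compute_ovs_dpdk_lcore_mask: '0x8'
+              compute_ovs_memory_channels: '2'
+              dpdk0_driver: igb_uio
+              dpdk0_n_rxq: 2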
+
+
+The full descriptions of the PDF and IDF file structures are available as yaml schemas.
+The schemas are defined as a git submodule in the Fuel repository. Input files provided
+to the installer will be validated against the schemas.
+
+- *mcp/scripts/pharos/config/pdf/pod1.schema.yaml*
+- *mcp/scripts/pharos/config/pdf/idf-pod1.schema.yaml*
=============
Release Notes
@@ -529,7 +582,7 @@ OPNFV
OpenStack
-4) `OpenStack Ocata Release Artifacts <http://www.openstack.org/software/ocata>`_
+4) `OpenStack Pike Release Artifacts <http://www.openstack.org/software/pike>`_
5) `OpenStack Documentation <http://docs.openstack.org>`_
OpenDaylight
diff --git a/docs/release/release-notes/release-notes.rst b/docs/release/release-notes/release-notes.rst
index 0052ab63f..5ee3bc0ff 100644
--- a/docs/release/release-notes/release-notes.rst
+++ b/docs/release/release-notes/release-notes.rst
@@ -6,7 +6,7 @@
Abstract
========
-This document compiles the release notes for the Euphrates release of
+This document compiles the release notes for the Fraser release of
+OPNFV when using Fuel as a deployment tool. This is a unified documentation
for both x86_64 and aarch64 architectures. All information is common for
both architectures except when explicitly stated.
@@ -17,9 +17,9 @@ Important Notes
===============
These notes provide release information for the use of Fuel as a deployment
-tool for the Euphrates release of OPNFV.
+tool for the Fraser release of OPNFV.
-The goal of the Euphrates release and this Fuel-based deployment process is
+The goal of the Fraser release and this Fuel-based deployment process is
to establish a lab ready platform accelerating further development
of the OPNFV infrastructure.
@@ -29,7 +29,7 @@ Carefully follow the installation-instructions.
Summary
=======
-For Euphrates, the typical use of Fuel as an OpenStack installer is
+For Fraser, the typical use of Fuel as an OpenStack installer is
supplemented with OPNFV unique components such as:
- `OpenDaylight <http://www.opendaylight.org/software>`_
@@ -37,7 +37,7 @@ supplemented with OPNFV unique components such as:
As well as OPNFV-unique configurations of the Hardware and Software stack.
-This Euphrates artifact provides Fuel as the deployment stage tool in the
+This Fraser artifact provides Fuel as the deployment stage tool in the
OPNFV CI pipeline including:
- Documentation built by Jenkins
@@ -48,10 +48,10 @@ OPNFV CI pipeline including:
- installation instructions
-- Automated deployment of Euphrates with running on bare metal or a nested
+- Automated deployment of Fraser running on baremetal or a nested
hypervisor environment (KVM)
-- Automated validation of the Euphrates deployment
+- Automated validation of the Fraser deployment
============
Release Data
@@ -61,22 +61,23 @@ Release Data
| **Project** | fuel/armband |
| | |
+--------------------------------------+--------------------------------------+
-| **Repo/tag** | opnfv-5.1.0 |
+| **Repo/tag** | opnfv-6.0.0 |
| | |
+--------------------------------------+--------------------------------------+
-| **Release designation** | Euphrates 5.1 |
+| **Release designation** | Fraser 6.0 |
| | |
+--------------------------------------+--------------------------------------+
-| **Release date** | December 15 2017 |
+| **Release date** | April 27 2018 |
| | |
+--------------------------------------+--------------------------------------+
-| **Purpose of the delivery** | Euphrates alignment to Released |
-| | MCP 1.0 baseline + features and |
+| **Purpose of the delivery** | Fraser alignment to Released |
+| | MCP baseline + features and |
| | bug-fixes for the following |
|                                      | features:                            |
| | |
| | - Open vSwitch for NFV |
| | - OpenDaylight |
+| | - DPDK |
+--------------------------------------+--------------------------------------+
Version Change
@@ -84,25 +85,25 @@ Version Change
Module Version Changes
----------------------
-This is the Euphrates 5.1 release.
+This is the Fraser 6.0 release.
It is based on the following upstream versions:
-- MCP 1.0 Base Release
+- MCP Base Release
-- OpenStack Ocata Release
+- OpenStack Pike Release
-- OpenDaylight
+- OpenDaylight Oxygen Release
Document Changes
----------------
-This is the Euphrates 5.1 release.
+This is the Fraser 6.0 release.
It comes with the following documentation:
-- `Installation instructions <http://docs.opnfv.org/en/stable-euphrates/submodules/armband/docs/release/installation/installation.instruction.html>`_
+- :ref:`fuel-release-installation-label`
- Release notes (This document)
-- `User guide <http://docs.opnfv.org/en/stable-euphrates/submodules/fuel/docs/release/userguide/userguide.html>`_
+- :ref:`fuel-release-userguide-label`
Reason for Version
==================
@@ -111,14 +112,14 @@ Feature Additions
-----------------
**JIRA TICKETS:**
-`Euphrates 5.1 new features <https://jira.opnfv.org/issues/?filter=12114>`_
+`Fraser 6.0 new features <https://jira.opnfv.org/issues/?filter=12302>`_
Bug Corrections
---------------
**JIRA TICKETS:**
-`Euphrates 5.1 bug fixes <https://jira.opnfv.org/issues/?filter=12115>`_
+`Fraser 6.0 bug fixes <https://jira.opnfv.org/issues/?filter=12303>`_
(Also See respective Integrated feature project's bug tracking)
@@ -135,12 +136,11 @@ Software Deliverables
Documentation Deliverables
--------------------------
-- `Installation instructions <http://docs.opnfv.org/en/stable-euphrates/submodules/armband/docs/release/installation/installation.instruction.html>`_
+- :ref:`fuel-release-installation-label`
- Release notes (This document)
-- `User guide <http://docs.opnfv.org/en/stable-euphrates/submodules/fuel/docs/release/userguide/userguide.html>`_
-
+- :ref:`fuel-release-userguide-label`
=========================================
Known Limitations, Issues and Workarounds
@@ -163,7 +163,7 @@ Known Issues
**JIRA TICKETS:**
-`Known issues <https://jira.opnfv.org/issues/?filter=12116>`_
+`Known issues <https://jira.opnfv.org/issues/?filter=12304>`_
(Also See respective Integrated feature project's bug tracking)
@@ -179,13 +179,13 @@ Workarounds
============
Test Results
============
-The Euphrates 5.1 release with the Fuel deployment tool has undergone QA test
+The Fraser 6.0 release with the Fuel deployment tool has undergone QA test
runs, see separate test results.
==========
References
==========
-For more information on the OPNFV Euphrates 5.1 release, please see:
+For more information on the OPNFV Fraser 6.0 release, please see:
OPNFV
=====
@@ -197,7 +197,7 @@ OPNFV
OpenStack
=========
-4) `OpenStack Ocata Release Artifacts <http://www.openstack.org/software/ocata>`_
+4) `OpenStack Pike Release Artifacts <http://www.openstack.org/software/pike>`_
5) `OpenStack Documentation <http://docs.openstack.org>`_
diff --git a/docs/release/scenarios/os-nosdn-ovs-ha/os-nosdn-ovs-ha.rst b/docs/release/scenarios/os-nosdn-ovs-ha/os-nosdn-ovs-ha.rst
index 5e30ab542..c51a5f5b8 100644
--- a/docs/release/scenarios/os-nosdn-ovs-ha/os-nosdn-ovs-ha.rst
+++ b/docs/release/scenarios/os-nosdn-ovs-ha/os-nosdn-ovs-ha.rst
@@ -2,14 +2,14 @@
.. http://creativecommons.org/licenses/by/4.0
.. (c)2017 Mirantis Inc., Enea Software AB and others
-This document provides scenario level details for Euphrates 5.0 of
+This document provides scenario-level details for the Fraser 6.0
deployment with no SDN controller and no extra features enabled.
============
Introduction
============
-This scenario is used primarily to validate and deploy a Ocata OpenStack
+This scenario is used primarily to validate and deploy a Pike OpenStack
deployment without any NFV features or SDN controller enabled.
Scenario components and composition
@@ -38,5 +38,5 @@ None
References
==========
-For more information on the OPNFV Euphrates release, please visit
-http://www.opnfv.org/euphrates
+For more information on the OPNFV Fraser release, please visit
+http://www.opnfv.org/software
diff --git a/docs/release/scenarios/os-nosdn-ovs-noha/os-nosdn-ovs-noha.rst b/docs/release/scenarios/os-nosdn-ovs-noha/os-nosdn-ovs-noha.rst
index 7ac4e111a..99c4de041 100644
--- a/docs/release/scenarios/os-nosdn-ovs-noha/os-nosdn-ovs-noha.rst
+++ b/docs/release/scenarios/os-nosdn-ovs-noha/os-nosdn-ovs-noha.rst
@@ -2,14 +2,14 @@
.. http://creativecommons.org/licenses/by/4.0
.. (c) 2017 Mirantis Inc., Enea Software AB and others
-This document provides scenario level details for Euphrates 5.0 of
+This document provides scenario-level details for the Fraser 6.0
deployment with no SDN controller and no extra features enabled.
============
Introduction
============
-This scenario is used primarily to validate and deploy a Ocata OpenStack
+This scenario is used primarily to validate and deploy a Pike OpenStack
deployment without any NFV features or SDN controller enabled.
@@ -24,7 +24,7 @@ the DPDK-enabled Open vSwitch component.
Scenario usage overview
=======================
-Simply deploy this scenario by using the os-nosdn-ovs-ha.yaml deploy
+Simply deploy this scenario by using the os-nosdn-ovs-noha.yaml deploy
settings file.
@@ -36,6 +36,5 @@ Tested on virtual deploy only.
References
==========
-For more information on the OPNFV Euphrates release, please visit
-http://www.opnfv.org/euphrates
-
+For more information on the OPNFV Fraser release, please visit
+http://www.opnfv.org/software
diff --git a/docs/release/userguide/userguide.rst b/docs/release/userguide/userguide.rst
index 2b46a84ac..fd9dfa736 100644
--- a/docs/release/userguide/userguide.rst
+++ b/docs/release/userguide/userguide.rst
@@ -6,9 +6,9 @@
Abstract
========
-This document contains details about how to use OPNFV Fuel - Euphrates
+This document contains details about how to use OPNFV Fuel - Fraser
release - after it was deployed. For details on how to deploy check the
-installation instructions in the :ref:`references` section.
+installation instructions in the :ref:`fuel_userguide_references` section.
This is a unified documentation for both x86_64 and aarch64
architectures. All information is common for both architectures
@@ -22,26 +22,25 @@ Network Overview
Fuel uses several networks to deploy and administer the cloud:
-+------------------+-------------------+---------------------------------------------------------+
-| Network name | Deploy Type | Description |
-| | | |
-+==================+===================+=========================================================+
-| **PXE/ADMIN** | baremetal only | Used for booting the nodes via PXE |
-+------------------+-------------------+---------------------------------------------------------+
-| **MCPCONTROL** | baremetal & | Used to provision the infrastructure VMs (Salt & MaaS). |
-| | virtual | On virtual deploys, it is used for Admin too (on target |
-| | | VMs) leaving the PXE/Admin bridge unused |
-+------------------+-------------------+---------------------------------------------------------+
-| **Mgmt** | baremetal & | Used for internal communication between |
-| | virtual | OpenStack components |
-+------------------+-------------------+---------------------------------------------------------+
-| **Internal** | baremetal & | Used for VM data communication within the |
-| | virtual | cloud deployment |
-+------------------+-------------------+---------------------------------------------------------+
-| **Public** | baremetal & | Used to provide Virtual IPs for public endpoints |
-| | virtual | that are used to connect to OpenStack services APIs. |
-| | | Used by Virtual machines to access the Internet |
-+------------------+-------------------+---------------------------------------------------------+
++------------------+---------------------------------------------------------+
+| Network name | Description |
+| | |
++==================+=========================================================+
+| **PXE/ADMIN** | Used for booting the nodes via PXE and/or Salt |
+| | control network |
++------------------+---------------------------------------------------------+
+| **MCPCONTROL** | Used to provision the infrastructure VMs (Salt & MaaS) |
++------------------+---------------------------------------------------------+
+| **Mgmt** | Used for internal communication between |
+| | OpenStack components |
++------------------+---------------------------------------------------------+
+| **Internal** | Used for VM data communication within the |
+| | cloud deployment |
++------------------+---------------------------------------------------------+
+| **Public** | Used to provide Virtual IPs for public endpoints |
+| | that are used to connect to OpenStack services APIs. |
+| | Used by Virtual machines to access the Internet |
++------------------+---------------------------------------------------------+
These networks - except mcpcontrol - can be linux bridges configured before the deploy on the
@@ -60,27 +59,21 @@ Accessing the Cloud
Access to any component of the deployed cloud is done from the Jumpserver, as user *ubuntu*, with
ssh key */var/lib/opnfv/mcp.rsa*. The example below is a connection to the Salt master.
- .. code-block:: bash
+ .. code-block:: bash
- $ ssh -o StrictHostKeyChecking=no -i /var/lib/opnfv/mcp.rsa -l ubuntu 10.20.0.2
+ $ ssh -o StrictHostKeyChecking=no -i /var/lib/opnfv/mcp.rsa -l ubuntu 10.20.0.2
**Note**: The Salt master IP is not hardcoded; it is configurable via INSTALLER_IP during deployment
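
For example, a different IP could be set before launching the deploy script (a sketch; the
variable name is taken from the note above):

 .. code-block:: bash

    $ export INSTALLER_IP=10.20.0.3
    $ ci/deploy.sh ... |& tee deploy.log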
+Logging in to cluster nodes is possible from the Jumpserver and from the Salt master. On the Salt master,
+cluster hostnames can be used instead of IP addresses:
-The Fuel baremetal deploy has a Virtualized Control Plane (VCP) which means that the controller
-services are installed in VMs on the baremetal targets (kvm servers). These VMs can also be
-accessed with virsh console: user *opnfv*, password *opnfv_secret*. This method does not apply
-to infrastructure VMs (Salt master and MaaS).
+ .. code-block:: bash
-The example below is a connection to a controller VM. The connection is made from the baremetal
-server kvm01.
+ $ sudo -i
+ $ ssh -i mcp.rsa ubuntu@ctl01
- .. code-block:: bash
-
- $ ssh -o StrictHostKeyChecking=no -i /var/lib/opnfv/mcp.rsa -l ubuntu x.y.z.141
- ubuntu@kvm01:~$ virsh console ctl01
-
-User *ubuntu* has sudo rights. User *opnfv* has sudo rights only on aarch64 deploys.
+User *ubuntu* has sudo rights.
=============================
@@ -96,34 +89,34 @@ For example tell salt to execute a ping to 8.8.8.8 on all the nodes.
.. figure:: img/saltstack.png
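
The command behind this example would look roughly like the following (a sketch):

 .. code-block:: bash

    root@cfg01:~# salt "*" cmd.run 'ping -c 4 8.8.8.8'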
Complex target filters can be used, such as compound queries or node roles.
-For more information about Salt see the :ref:`references` section.
+For more information about Salt see the :ref:`fuel_userguide_references` section.
Some examples are listed below. Note that these commands are issued from Salt master
-with *root* user.
+as *root* user.
#. View the IPs of all the components
- .. code-block:: bash
+ .. code-block:: bash
- root@cfg01:~$ salt "*" network.ip_addrs
- cfg01.baremetal-mcp-ocata-odl-ha.local:
+ root@cfg01:~$ salt "*" network.ip_addrs
+ cfg01.mcp-pike-odl-ha.local:
- 10.20.0.2
- 172.16.10.100
- mas01.baremetal-mcp-ocata-odl-ha.local:
+ mas01.mcp-pike-odl-ha.local:
- 10.20.0.3
- 172.16.10.3
- 192.168.11.3
- .........................
+ .........................
#. View the interfaces of all the components and save the output to a file in yaml format
- .. code-block:: bash
+ .. code-block:: bash
- root@cfg01:~$ salt "*" network.interfaces --out yaml --output-file interfaces.yaml
- root@cfg01:~# cat interfaces.yaml
- cfg01.baremetal-mcp-ocata-odl-ha.local:
+ root@cfg01:~$ salt "*" network.interfaces --out yaml --output-file interfaces.yaml
+ root@cfg01:~# cat interfaces.yaml
+ cfg01.mcp-pike-odl-ha.local:
enp1s0:
hwaddr: 52:54:00:72:77:12
inet:
@@ -136,77 +129,77 @@ with *root* user.
prefixlen: '64'
scope: link
up: true
- .........................
+ .........................
#. View installed packages on the MaaS node
- .. code-block:: bash
+ .. code-block:: bash
- root@cfg01:~# salt "mas*" pkg.list_pkgs
- mas01.baremetal-mcp-ocata-odl-ha.local:
- ----------
- accountsservice:
- 0.6.40-2ubuntu11.3
- acl:
- 2.2.52-3
- acpid:
- 1:2.0.26-1ubuntu2
- adduser:
- 3.113+nmu3ubuntu4
- anerd:
- 1
- .........................
+ root@cfg01:~# salt "mas*" pkg.list_pkgs
+ mas01.mcp-pike-odl-ha.local:
+ ----------
+ accountsservice:
+ 0.6.40-2ubuntu11.3
+ acl:
+ 2.2.52-3
+ acpid:
+ 1:2.0.26-1ubuntu2
+ adduser:
+ 3.113+nmu3ubuntu4
+ anerd:
+ 1
+ .........................
#. Execute any Linux command on all nodes (list the content of */var/log* in this example)
- .. code-block:: bash
+ .. code-block:: bash
- root@cfg01:~# salt "*" cmd.run 'ls /var/log'
- cfg01.baremetal-mcp-ocata-odl-ha.local:
- alternatives.log
- apt
- auth.log
- boot.log
- btmp
- cloud-init-output.log
- cloud-init.log
- .........................
+ root@cfg01:~# salt "*" cmd.run 'ls /var/log'
+ cfg01.mcp-pike-odl-ha.local:
+ alternatives.log
+ apt
+ auth.log
+ boot.log
+ btmp
+ cloud-init-output.log
+ cloud-init.log
+ .........................
#. Execute any Linux command on nodes using a compound query filter
- .. code-block:: bash
+ .. code-block:: bash
- root@cfg01:~# salt -C '* and cfg01*' cmd.run 'ls /var/log'
- cfg01.baremetal-mcp-ocata-odl-ha.local:
- alternatives.log
- apt
- auth.log
- boot.log
- btmp
- cloud-init-output.log
- cloud-init.log
- .........................
+ root@cfg01:~# salt -C '* and cfg01*' cmd.run 'ls /var/log'
+ cfg01.mcp-pike-odl-ha.local:
+ alternatives.log
+ apt
+ auth.log
+ boot.log
+ btmp
+ cloud-init-output.log
+ cloud-init.log
+ .........................
#. Execute any Linux command on nodes using a role filter
- .. code-block:: bash
+ .. code-block:: bash
- root@cfg01:~# salt -I 'nova:compute' cmd.run 'ls /var/log'
- cmp001.baremetal-mcp-ocata-odl-ha.local:
- alternatives.log
- apache2
- apt
- auth.log
- btmp
- ceilometer
- cinder
- cloud-init-output.log
- cloud-init.log
- .........................
+ root@cfg01:~# salt -I 'nova:compute' cmd.run 'ls /var/log'
+ cmp001.mcp-pike-odl-ha.local:
+ alternatives.log
+ apache2
+ apt
+ auth.log
+ btmp
+ ceilometer
+ cinder
+ cloud-init-output.log
+ cloud-init.log
+ .........................
@@ -217,19 +210,19 @@ Accessing Openstack
Once the deployment is complete, the OpenStack CLI is accessible from the controller VMs (ctl01..03).
OpenStack credentials are at */root/keystonercv3*.
- .. code-block:: bash
+ .. code-block:: bash
- root@ctl01:~# source keystonercv3
- root@ctl01:~# openstack image list
- +--------------------------------------+-----------------------------------------------+--------+
- | ID | Name | Status |
- +======================================+===============================================+========+
- | 152930bf-5fd5-49c2-b3a1-cae14973f35f | CirrosImage | active |
- | 7b99a779-78e4-45f3-9905-64ae453e3dcb | Ubuntu16.04 | active |
- +--------------------------------------+-----------------------------------------------+--------+
+ root@ctl01:~# source keystonercv3
+ root@ctl01:~# openstack image list
+ +--------------------------------------+-----------------------------------------------+--------+
+ | ID | Name | Status |
+ +======================================+===============================================+========+
+ | 152930bf-5fd5-49c2-b3a1-cae14973f35f | CirrosImage | active |
+ | 7b99a779-78e4-45f3-9905-64ae453e3dcb | Ubuntu16.04 | active |
+ +--------------------------------------+-----------------------------------------------+--------+
-The OpenStack Dashboard, Horizon is available at http://<controller VIP>:8078, e.g. http://10.16.0.101:8078.
+The OpenStack Dashboard, Horizon, is available at http://<proxy public VIP>
The administrator credentials are *admin*/*opnfv_secret*.
.. figure:: img/horizon_login.png
@@ -239,19 +232,78 @@ A full list of IPs/services is available at <proxy public VIP>:8090 for baremeta
.. figure:: img/salt_services_ip.png
-For Virtual deploys, the most commonly used IPs are in the table below.
+==============================
+Guest Operating System Support
+==============================
+
+There are a number of possibilities regarding the guest operating systems that can be spawned
+on the nodes. The current system spawns virtual machines for the VCP on the KVM nodes, and
+user-requested VMs on the OpenStack compute nodes. Currently the system supports the following
+UEFI images for the guests:
+
++------------------+-------------------+------------------+
+| OS name | x86_64 status | aarch64 status |
++==================+===================+==================+
+| Ubuntu 17.10 | untested | Full support |
++------------------+-------------------+------------------+
+| Ubuntu 16.04 | Full support | Full support |
++------------------+-------------------+------------------+
+| Ubuntu 14.04 | untested | Full support |
++------------------+-------------------+------------------+
+| Fedora atomic 27 | untested | Not supported |
++------------------+-------------------+------------------+
+| Fedora cloud 27 | untested | Not supported |
++------------------+-------------------+------------------+
+| Debian | untested | Full support |
++------------------+-------------------+------------------+
+| CentOS 7         | untested          | Not supported    |
++------------------+-------------------+------------------+
+| Cirros 0.3.5 | Full support | Full support |
++------------------+-------------------+------------------+
+| Cirros 0.4.0 | Full support | Full support |
++------------------+-------------------+------------------+
+
+
+The above table covers only UEFI images and assumes OVMF/AAVMF firmware on the host. An x86 deployment
+also supports non-UEFI images; however, that choice is up to the underlying hardware and the administrator
+to make.
+
+The images for the above operating systems can be found on their respective websites.
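+
+As an illustration, a downloaded UEFI cloud image could be registered in Glance as follows
+(a sketch; the image file, name and properties are examples only):
+
+ .. code-block:: bash
+
+    root@ctl01:~# openstack image create --disk-format qcow2 --container-format bare \
+                  --file ./xenial-server-cloudimg-arm64-uefi1.img \
+                  --property hw_firmware_type=uefi Ubuntu16.04-UEFI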
+
+===================
+Openstack Endpoints
+===================
+
+For each OpenStack service, three endpoints are created: admin, internal and public.
+
+ .. code-block:: bash
-+-----------+--------------+---------------+
-| Component | IP | Default value |
-+===========+==============+===============+
-| gtw01 | x.y.z.110 | 172.16.10.110 |
-+-----------+--------------+---------------+
-| ctl01 | x.y.z.100 | 172.16.10.100 |
-+-----------+--------------+---------------+
-| cmp001 | x.y.z.105 | 172.16.10.105 |
-+-----------+--------------+---------------+
-| cmp002 | x.y.z.106 | 172.16.10.106 |
-+-----------+--------------+---------------+
+ ubuntu@ctl01:~$ openstack endpoint list --service keystone
+ +----------------------------------+-----------+--------------+--------------+---------+-----------+------------------------------+
+ | ID | Region | Service Name | Service Type | Enabled | Interface | URL |
+ +----------------------------------+-----------+--------------+--------------+---------+-----------+------------------------------+
+ | 008fec57922b4e9e8bf02c770039ae77 | RegionOne | keystone | identity | True | internal | http://172.16.10.26:5000/v3 |
+ | 1a1f3c3340484bda9ef7e193f50599e6 | RegionOne | keystone | identity | True | admin | http://172.16.10.26:35357/v3 |
+ | b0a47d42d0b6491b995d7e6230395de8 | RegionOne | keystone | identity | True | public | https://10.0.15.2:5000/v3 |
+ +----------------------------------+-----------+--------------+--------------+---------+-----------+------------------------------+
+
+MCP sets up all OpenStack services to talk to each other over unencrypted
+connections on the internal management network. All admin/internal endpoints use
+plain http, while the public endpoints are https connections terminated via nginx
+at the VCP proxy VMs.
+
+To access the public endpoints an SSL certificate has to be provided. For
+convenience, the installation script will copy the required certificate
+to the cfg01 node at /etc/ssl/certs/os_cacert.
+
+Copy the certificate from the cfg01 node to the client that will access the https
+endpoints and place it under /etc/ssl/certs. The SSL connection will be established
+automatically afterwards.
+
+ .. code-block:: bash
+
+ $ ssh -o StrictHostKeyChecking=no -i /var/lib/opnfv/mcp.rsa -l ubuntu 10.20.0.2 \
+ "cat /etc/ssl/certs/os_cacert" | sudo tee /etc/ssl/certs/os_cacert
=============================
@@ -274,36 +326,36 @@ After the installation is done, a webbrowser on the host can be used to view the
#. Create a new directory at any location
- .. code-block:: bash
+ .. code-block:: bash
- $ mkdir -p modeler
+ $ mkdir -p modeler
#. Place fuel repo in the above directory
- .. code-block:: bash
+ .. code-block:: bash
- $ cd modeler
- $ git clone https://gerrit.opnfv.org/gerrit/fuel && cd fuel
+ $ cd modeler
+ $ git clone https://gerrit.opnfv.org/gerrit/fuel && cd fuel
#. Create a container and mount the above host directory
- .. code-block:: bash
+ .. code-block:: bash
- $ docker run --privileged -it -v <absolute_path>/modeler:/host ubuntu bash
+ $ docker run --privileged -it -v <absolute_path>/modeler:/host ubuntu bash
#. Install all the required packages inside the container.
- .. code-block:: bash
+ .. code-block:: bash
- $ apt-get update
- $ apt-get install -y npm nodejs
- $ npm install -g reclass-doc
- $ cd /host/fuel/mcp/reclass
- $ ln -s /usr/bin/nodejs /usr/bin/node
- $ reclass-doc --output /host /host/fuel/mcp/reclass
+ $ apt-get update
+ $ apt-get install -y npm nodejs
+ $ npm install -g reclass-doc
+ $ cd /host/fuel/mcp/reclass
+ $ ln -s /usr/bin/nodejs /usr/bin/node
+ $ reclass-doc --output /host /host/fuel/mcp/reclass
#. View the results from the host by using a browser. The file to open should now be at modeler/index.html
@@ -311,14 +363,12 @@ After the installation is done, a webbrowser on the host can be used to view the
.. figure:: img/reclass_doc.png
-.. _references:
+.. _fuel_userguide_references:
==========
References
==========
-1) `Installation instructions <http://docs.opnfv.org/en/stable-euphrates/submodules/fuel/docs/release/installation/installation.instruction.html>`_
+1) :ref:`fuel-release-installation-label`
2) `Saltstack Documentation <https://docs.saltstack.com/en/latest/topics>`_
-3) `Saltstack Formulas <http://salt-formulas.readthedocs.io/en/latest/develop/overview-reclass.html>`_
-
-
+3) `Saltstack Formulas <http://salt-formulas.readthedocs.io/en/latest/>`_