author    Alexandru Avadanii <Alexandru.Avadanii@enea.com>    2017-11-26 05:43:39 +0100
committer Alexandru Avadanii <Alexandru.Avadanii@enea.com>    2017-11-26 23:56:20 +0000
commit    b49021e1227573fbde13c82f432635ef187b1e83 (patch)
tree      a0ef84e1ab70b5f1aca9254c3664847c056ab474 /docs
parent    3293d28284b3d2d140f3b82452cfb6b16a7f2f2a (diff)
u/fuel: Bump & rebase for image pre-install
1. Bump to latest Fuel@OPNFV to include:
   - Bring in newer glusterfs for mtime unsplit brain
     * Requires adding arch "arm64" to PPA definition in reclass:
       - (reclass-system) linux.system.repo.glusterfs: Add arm64 arch
   - Switch nofeature-ha compute nodes to UCA repo
     * Requires an alternative way of adding linux.enea.com repos;
     * linux.enea.com repos will now be pre-installed into VM images;
     * Requires refresh on repo arch list handled by Armband patch:
       - (fuel) baremetal, virtual: Extend arch list for UCA repo
2. Staging proposed patches from upstream Fuel@OPNFV:
   - Add pre-{install,purge} support for base image
     * Reference implementation adds pre-installed Armband specifics:
       - Enea public GPG to APT keys (for below repos);
       - repos (linux.enea.com/{apt-mk,mcp-repos}/*);
       - linux-{image,headers}-generic-hwe-16.04-edge;
       - cloud-init: datasource from NoCloud only;
     * Allows us to drop kernel installation from state files,
       installing the kernel only once during image prep, instead of
       two stages of parallel installs (5 baremetal, 14 VCP);
     * Ensures Armband repos are pre-configured for infrastructure
       VMs, allowing us to drop more reclass repo definitions;
     * Rework armband patch to install kernel only on kvm, cmp:
       - (fuel) baremetal: linux-image-generic-hwe-16.04-edge
3. Sync reclass repo definitions with upstream change, drop duplicates:
   - [linux][repos] Remove unused repositories [1]
     * Upstream dropped all "ocata-{security,hotfix,...}" repo comps,
       which are also empty for Armband, so drop them too;
     * Rework following armband patches:
       - (reclass-system) linux/system/repo/mcp: Add Armband repos
         * Move Armband repos to new dedicated reclass classes:
           - linux.system.repo.mcp.armband.extra (currently empty);
           - linux.system.repo.mcp.armband.openstack;
         * Use HTTPS for fetching Enea Armband GPG key;
       - (fuel) baremetal: Add Armband Openstack repos to kvm, cmp
         * Consume defs introduced above only on baremetal nodes;
4. Sync documentation with Fuel@OPNFV (cp)
5. Add vim swap files to .gitignore

[1] https://github.com/Mirantis/reclass-system-salt-model/commit/1dd1b31

Change-Id: Ibab56279de86f08ad7cd9bc6761f4c525532f811
Signed-off-by: Alexandru Avadanii <Alexandru.Avadanii@enea.com>
(cherry picked from commit 37083673d6cdddbb9b710f4dd5efe832753e5856)
Diffstat (limited to 'docs')
-rw-r--r--  docs/release/installation/img/README.rst               |  12
-rw-r--r--  docs/release/installation/img/arm_pod5.png              | bin 0 -> 168862 bytes
-rw-r--r--  docs/release/installation/img/fuel_baremetal.png        | bin 0 -> 245916 bytes
-rw-r--r--  docs/release/installation/img/fuel_virtual.png          | bin 0 -> 216442 bytes
-rw-r--r--  docs/release/installation/img/lf_pod2.png               | bin 0 -> 167832 bytes
-rw-r--r--  docs/release/installation/installation.instruction.rst | 215
-rw-r--r--  docs/release/userguide/img/horizon_login.png            | bin 0 -> 32205 bytes
-rw-r--r--  docs/release/userguide/img/salt_services_ip.png         | bin 0 -> 149270 bytes
-rw-r--r--  docs/release/userguide/img/saltstack.png                | bin 0 -> 14373 bytes
-rw-r--r--  docs/release/userguide/index.rst                        |  18
-rw-r--r--  docs/release/userguide/userguide.rst                    | 267
11 files changed, 481 insertions, 31 deletions
diff --git a/docs/release/installation/img/README.rst b/docs/release/installation/img/README.rst
new file mode 100644
index 00000000..bc8d9bed
--- /dev/null
+++ b/docs/release/installation/img/README.rst
@@ -0,0 +1,12 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International License.
+.. SPDX-License-Identifier: CC-BY-4.0
+.. (c) 2017 Ericsson AB, Mirantis Inc., Enea AB and others.
+
+Image Editor
+============
+All files in this directory have been created using `draw.io <http://draw.io>`_.
+
+Image Sources
+=============
+Image sources are embedded in each `png` file.
+To edit an image, import the `png` file using `draw.io <http://draw.io>`_.
diff --git a/docs/release/installation/img/arm_pod5.png b/docs/release/installation/img/arm_pod5.png
new file mode 100644
index 00000000..b35b661a
--- /dev/null
+++ b/docs/release/installation/img/arm_pod5.png
Binary files differ
diff --git a/docs/release/installation/img/fuel_baremetal.png b/docs/release/installation/img/fuel_baremetal.png
new file mode 100644
index 00000000..aee42ac3
--- /dev/null
+++ b/docs/release/installation/img/fuel_baremetal.png
Binary files differ
diff --git a/docs/release/installation/img/fuel_virtual.png b/docs/release/installation/img/fuel_virtual.png
new file mode 100644
index 00000000..d7664865
--- /dev/null
+++ b/docs/release/installation/img/fuel_virtual.png
Binary files differ
diff --git a/docs/release/installation/img/lf_pod2.png b/docs/release/installation/img/lf_pod2.png
new file mode 100644
index 00000000..b6c9b8e3
--- /dev/null
+++ b/docs/release/installation/img/lf_pod2.png
Binary files differ
diff --git a/docs/release/installation/installation.instruction.rst b/docs/release/installation/installation.instruction.rst
index 1b624d26..502c7509 100644
--- a/docs/release/installation/installation.instruction.rst
+++ b/docs/release/installation/installation.instruction.rst
@@ -21,7 +21,7 @@ This document provides guidelines on how to install and
configure the Euphrates release of OPNFV when using Fuel as a
deployment tool, including required software and hardware configurations.
-Although the available installation options provide a high de.g.ee of
+Although the available installation options provide a high degree of
freedom in how the system is set up, including architecture, services
and features, etc., said permutations may not provide an OPNFV
compliant reference architecture. This document provides a
@@ -40,7 +40,7 @@ OPNFV, using Fuel as a deployment tool, some planning must be
done.
Preparations
-==================
+============
Prior to installation, a number of deployment-specific parameters must be collected; those are:
@@ -65,7 +65,7 @@ This information will be needed for the configuration procedures
provided in this document.
=========================================
-Hardware requirements for virtual deploys
+Hardware Requirements for Virtual Deploys
=========================================
The following minimum hardware requirements must be met for the virtual
@@ -76,7 +76,7 @@ installation of Euphrates using Fuel:
| | |
+============================+========================================================+
| **1 Jumpserver** | A physical node (also called Foundation Node) that |
-| | hosts a Salt Master VM and each of the VM nodes in |
+| | will host a Salt Master VM and each of the VM nodes in |
| | the virtual deploy |
+----------------------------+--------------------------------------------------------+
| **CPU** | Minimum 1 socket with Virtualization support |
@@ -88,7 +88,7 @@ installation of Euphrates using Fuel:
===========================================
-Hardware requirements for baremetal deploys
+Hardware Requirements for Baremetal Deploys
===========================================
The following minimum hardware requirements must be met for the baremetal
@@ -153,7 +153,7 @@ environment, you should think about:
- Networking -- Depends on the chosen network topology, the network bandwidth per virtual machine, and network storage.
================================================
-Top of the rack (TOR) Configuration requirements
+Top of the Rack (TOR) Configuration Requirements
================================================
The switching infrastructure provides connectivity for the OPNFV
@@ -177,8 +177,27 @@ Manual configuration of the Euphrates hardware platform should
be carried out according to the `OPNFV Pharos Specification
<https://wiki.opnfv.org/display/pharos/Pharos+Specification>`_.
+============================
+OPNFV Software Prerequisites
+============================
+
+The Jumpserver node should be pre-provisioned with an operating system,
+according to the Pharos specification. Relevant network bridges should
+also be pre-configured (e.g. admin, management, public).
+
+Fuel@OPNFV has been validated by CI using the following distributions
+installed on the Jumpserver:
+
+ - CentOS 7 (recommended by Pharos specification);
+ - Ubuntu Xenial;
+
+**NOTE:** The install script expects 'libvirt' to be installed and running
+on the Jumpserver. In case the packages are missing, the script will install
+them; but depending on the OS distribution, the user might have to start the
+'libvirtd' service manually.
+
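+A minimal sketch of checking this manually (assuming an Ubuntu Xenial
+Jumpserver; package and service names may differ on other distributions):
+
+.. code-block:: bash
+
+    # Install libvirt & KVM packages if they are not already present
+    $ sudo apt-get install -y libvirt-bin qemu-kvm
+    # Make sure the libvirt daemon is running
+    $ sudo systemctl start libvirtd
+    $ sudo systemctl status libvirtd
+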
==========================================
-OPNFV Software installation and deployment
+OPNFV Software Installation and Deployment
==========================================
This section describes the process of installing all the components needed to
@@ -205,6 +224,33 @@ For virtual deploys all the targets are VMs on the Jumpserver. The deploy script
- Install Openstack on the targets
- Leverage Salt to install & configure Openstack services
+.. figure:: img/fuel_virtual.png
+ :align: center
+ :alt: Fuel@OPNFV Virtual POD Network Layout Examples
+
+ Fuel@OPNFV Virtual POD Network Layout Examples
+
+ +-----------------------+------------------------------------------------------------------------+
+ | cfg01 | Salt Master VM |
+ +-----------------------+------------------------------------------------------------------------+
+ | ctl01 | Controller VM |
+ +-----------------------+------------------------------------------------------------------------+
+ | cmp01/cmp02 | Compute VMs |
+ +-----------------------+------------------------------------------------------------------------+
+ | gtw01 | Gateway VM with neutron services (dhcp agent, L3 agent, metadata, etc) |
+ +-----------------------+------------------------------------------------------------------------+
+ | odl01 | VM on which ODL runs (for scenarios deployed with ODL) |
+ +-----------------------+------------------------------------------------------------------------+
+
+
+In this figure there are examples of two virtual deploys:
+ - Jumphost 1 has only virsh bridges, created by the deploy script
+ - Jumphost 2 has a mix of linux and virsh bridges; when a linux bridge exists for a specified network,
+ the deploy script will skip creating a virsh bridge for it
+
+**Note**: A virtual network "mcpcontrol" is always created. For virtual deploys, "mcpcontrol" is also used
+for Admin, leaving the PXE/Admin bridge unused.
+
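+Once the deploy script has created them, the virsh networks can be listed on
+the Jumpserver (a quick check; requires the libvirt client mentioned above):
+
+.. code-block:: bash
+
+    # "mcpcontrol" should appear among the active networks
+    $ virsh net-list --all
+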
Automatic Installation of a Baremetal POD
=========================================
@@ -223,8 +269,42 @@ The installation is done automatically with the deploy script, which will:
- Leverage Salt to configure the operating system on the baremetal nodes
- Leverage Salt to install & configure Openstack services
-
-Steps to start the automatic deploy
+.. figure:: img/fuel_baremetal.png
+ :align: center
+ :alt: Fuel@OPNFV Baremetal POD Network Layout Example
+
+ Fuel@OPNFV Baremetal POD Network Layout Example
+
+ +-----------------------+---------------------------------------------------------+
+ | cfg01 | Salt Master VM |
+ +-----------------------+---------------------------------------------------------+
+ | mas01 | MaaS Node VM |
+ +-----------------------+---------------------------------------------------------+
+ | kvm01..03 | Baremetals which hold the VMs with controller functions |
+ +-----------------------+---------------------------------------------------------+
+ | cmp001/cmp002 | Baremetal compute nodes |
+ +-----------------------+---------------------------------------------------------+
+ | prx01/prx02 | Proxy VMs for Nginx |
+ +-----------------------+---------------------------------------------------------+
+ | msg01..03 | RabbitMQ Service VMs |
+ +-----------------------+---------------------------------------------------------+
+ | dbs01..03 | MySQL service VMs |
+ +-----------------------+---------------------------------------------------------+
+ | mdb01..03 | Telemetry VMs |
+ +-----------------------+---------------------------------------------------------+
+ | odl01 | VM on which ODL runs (for scenarios deployed with ODL) |
+ +-----------------------+---------------------------------------------------------+
+ | Tenant VM | VM running in the cloud |
+ +-----------------------+---------------------------------------------------------+
+
+In the baremetal deploy, all bridges but "mcpcontrol" are linux bridges. On the Jumpserver, if they are already created
+they will be used; otherwise they will be created. On the targets, the bridges are created by the deploy script.
+
+**Note**: A virtual network "mcpcontrol" is always created. For baremetal deploys, the PXE bridge is used for
+baremetal node provisioning, while "mcpcontrol" is used to provision the infrastructure VMs only.
+
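+The linux bridges present on the Jumpserver can be reviewed before starting
+the deploy (a sketch; bridge names are lab-specific):
+
+.. code-block:: bash
+
+    # Show existing linux bridges and their attached interfaces
+    $ brctl show
+    # Alternative, using iproute2 only
+    $ ip link show type bridge
+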
+
+Steps to Start the Automatic Deploy
===================================
These steps are common both for virtual and baremetal deploys.
@@ -249,7 +329,7 @@ These steps are common both for virtual and baremetal deploys.
.. code-block:: bash
- $ git checkout 5.0.2
+ $ git checkout opnfv-5.0.2
#. Start the deploy script
@@ -257,42 +337,115 @@ These steps are common both for virtual and baremetal deploys.
$ ci/deploy.sh -l <lab_name> \
-p <pod_name> \
- -b <URI to the PDF file> \
+ -b <URI to configuration repo containing the PDF file> \
-s <scenario> \
- -B <list of admin, public and management bridges>
+ -B <list of admin, management, private and public bridges>
Examples
--------
#. Virtual deploy
- .. code-block:: bash
+ To start a virtual deployment, the pod name passed to the installer script
+ must contain the keyword `virtual`.
+
+ The script will create the required bridges and networks, configure the
+ Salt Master and install OpenStack.
- $ ci/deploy.sh -b file:///home/jenkins/tmpdir/securedlab \
- -l ericsson \
- -p virtual_kvm \
- -s os-nosdn-nofeature-noha
+ .. code-block:: bash
+
+ $ ci/deploy.sh -b file:///home/jenkins/tmpdir/securedlab \
+ -l ericsson \
+ -p virtual_kvm \
+ -s os-nosdn-nofeature-noha
+
+ Once the deployment is complete, the OpenStack Dashboard (Horizon) is
+ available at http://<controller VIP>:8078, e.g. http://10.16.0.101:8078.
+ The administrator credentials are **admin** / **opnfv_secret**.
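+
+ A quick reachability check of the dashboard from the Jumpserver (a sketch,
+ using the example VIP above):
+
+ .. code-block:: bash
+
+     $ curl -I http://10.16.0.101:8078
+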
#. Baremetal deploy
-A x86 deploy on pod1 from Ericsson lab
+ An x86 deploy on pod2 from the Linux Foundation lab
- .. code-block:: bash
+ .. code-block:: bash
- $ ci/deploy.sh -b file:///home/jenkins/tmpdir/securedlab \
- -l ericsson \
- -p pod1 \
- -s os-nosdn-nofeature-ha \
- -B pxebr
+ $ ci/deploy.sh -b file:///home/jenkins/tmpdir/securedlab \
+ -l lf \
+ -p pod2 \
+ -s os-nosdn-nofeature-ha \
+ -B pxebr,br-ctl
-An aarch64 deploy on pod5 from Arm lab
+ .. figure:: img/lf_pod2.png
+ :align: center
+ :alt: Fuel@OPNFV LF POD2 Network Layout
+
+ Fuel@OPNFV LF POD2 Network Layout
+
+ Once the deployment is complete, the SaltStack Deployment Documentation is
+ available at http://<Proxy VIP>:8090, e.g. http://172.30.10.103:8090.
+
+ An aarch64 deploy on pod5 from the Arm lab
+
+ .. code-block:: bash
+
+ $ ci/deploy.sh -b file:///home/jenkins/tmpdir/securedlab \
+ -l arm \
+ -p pod5 \
+ -s os-nosdn-nofeature-ha \
+ -B admin7_br0,mgmt7_br0,,public7_br0
+
+ .. figure:: img/arm_pod5.png
+ :align: center
+ :alt: Fuel@OPNFV ARM POD5 Network Layout
+
+ Fuel@OPNFV ARM POD5 Network Layout
+
+ Once the deployment is complete, the SaltStack Deployment Documentation is
+ available at http://<Proxy VIP>:8090, e.g. http://10.0.8.103:8090.
+
+
+Pod Descriptor Files
+====================
+
+Descriptor files provide the installer with an abstraction of the target pod
+with all its hardware characteristics and required parameters. This information
+is split into two different files:
+Pod Descriptor File (PDF) and Installer Descriptor File (IDF).
+
+
+The Pod Descriptor File is a hardware and network description of the pod
+infrastructure. The information is modeled under a yaml structure.
+A reference file with the expected yaml structure is available at
+*mcp/config/labs/local/pod1.yaml*
+
+A common network section describes all the internal and provider networks
+assigned to the pod. Each network is expected to have a vlan tag, IP subnet and
+attached interface on the boards. Untagged vlans shall be defined as "native".
+
+The hardware description is arranged into a main "jumphost" node and a "nodes"
+set for all target boards. For each node the following characteristics
+are defined:
+
+- Node parameters including CPU features and total memory.
+- A list of available disks.
+- Remote management parameters.
+- Network interfaces list including mac address, speed and advanced features.
+- A list of fixed IPs for the node
+
+**Note**: The fixed IPs are ignored by the MCP installer script, which will instead
+assign IPs based on the network ranges defined under the pod network configuration.
+
+
+The Installer Descriptor File extends the PDF with pod-related parameters
+required by the installer. This information may differ per installer type
+and is not considered part of the pod infrastructure. The Fuel installer relies
+on the IDF model to map the networks to the bridges on the foundation node and
+to set up all node NICs by defining the expected OS device name and bus address.
- .. code-block:: bash
- $ ci/deploy.sh -b file:///home/jenkins/tmpdir/securedlab \
- -l arm \
- -p pod5 \
- -s os-nosdn-nofeature-ha \
- -B pxebr
+The file follows a yaml structure and a "fuel" section is expected. Contents and
+references must be aligned with the PDF file. The IDF file must be named after
+the PDF with the prefix "idf-". A reference file with the expected structure
+is available at *mcp/config/labs/local/idf-pod1.yaml*
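+
+The expected pairing can be illustrated by listing the reference files
+shipped in-tree (a sketch of the layout described above):
+
+.. code-block:: bash
+
+    $ ls mcp/config/labs/local/
+    idf-pod1.yaml  pod1.yaml
+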
=============
diff --git a/docs/release/userguide/img/horizon_login.png b/docs/release/userguide/img/horizon_login.png
new file mode 100644
index 00000000..641ca6c6
--- /dev/null
+++ b/docs/release/userguide/img/horizon_login.png
Binary files differ
diff --git a/docs/release/userguide/img/salt_services_ip.png b/docs/release/userguide/img/salt_services_ip.png
new file mode 100644
index 00000000..504beb3e
--- /dev/null
+++ b/docs/release/userguide/img/salt_services_ip.png
Binary files differ
diff --git a/docs/release/userguide/img/saltstack.png b/docs/release/userguide/img/saltstack.png
new file mode 100644
index 00000000..d57452c6
--- /dev/null
+++ b/docs/release/userguide/img/saltstack.png
Binary files differ
diff --git a/docs/release/userguide/index.rst b/docs/release/userguide/index.rst
new file mode 100644
index 00000000..d4330d08
--- /dev/null
+++ b/docs/release/userguide/index.rst
@@ -0,0 +1,18 @@
+.. _fuel-userguide:
+
+.. This work is licensed under a Creative Commons Attribution 4.0 International License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) Open Platform for NFV Project, Inc. and its contributors
+
+.. _fuel-release-userguide-label:
+
+**************************
+User guide for Fuel\@OPNFV
+**************************
+
+.. toctree::
+ :numbered:
+ :maxdepth: 2
+
+ userguide.rst
+
diff --git a/docs/release/userguide/userguide.rst b/docs/release/userguide/userguide.rst
new file mode 100644
index 00000000..f00e6635
--- /dev/null
+++ b/docs/release/userguide/userguide.rst
@@ -0,0 +1,267 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) Open Platform for NFV Project, Inc. and its contributors
+
+========
+Abstract
+========
+
+This document contains details about how to use OPNFV Fuel - Euphrates
+release - after it has been deployed. For details on how to deploy, check
+the installation instructions in the :ref:`references` section.
+
+This is a unified document for both x86_64 and aarch64
+architectures. All information is common for both architectures
+except when explicitly stated.
+
+
+
+================
+Network Overview
+================
+
+Fuel uses several networks to deploy and administer the cloud:
+
++------------------+-------------------+---------------------------------------------------------+
+| Network name | Deploy Type | Description |
+| | | |
++==================+===================+=========================================================+
+| **PXE/ADMIN** | baremetal only | Used for booting the nodes via PXE |
++------------------+-------------------+---------------------------------------------------------+
+| **MCPCONTROL** | baremetal & | Used to provision the infrastructure VMs (Salt & MaaS). |
+| | virtual | On virtual deploys, it is used for Admin too (on target |
+| | | VMs) leaving the PXE/Admin bridge unused |
++------------------+-------------------+---------------------------------------------------------+
+| **Mgmt** | baremetal & | Used for internal communication between |
+| | virtual | OpenStack components |
++------------------+-------------------+---------------------------------------------------------+
+| **Internal** | baremetal & | Used for VM data communication within the |
+| | virtual | cloud deployment |
++------------------+-------------------+---------------------------------------------------------+
+| **Public** | baremetal & | Used to provide Virtual IPs for public endpoints |
+| | virtual | that are used to connect to OpenStack services APIs. |
+| | | Used by Virtual machines to access the Internet |
++------------------+-------------------+---------------------------------------------------------+
+
+
+These networks - except mcpcontrol - can be linux bridges configured on the
+Jumpserver before the deploy. If they don't exist at deploy time, they will be
+created by the scripts as virsh networks.
+
+Mcpcontrol exists only on the Jumpserver and needs to be virtual because a DHCP server runs
+on this network and assigns static host entry IPs to the Salt and MaaS VMs.
+
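+The network definition, including its DHCP static host entries, can be
+inspected on the Jumpserver (assuming the deploy has already created it):
+
+ .. code-block:: bash
+
+     $ virsh net-dumpxml mcpcontrol
+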
+
+
+===================
+Accessing the Cloud
+===================
+
+Access to any component of the deployed cloud is done from the Jumpserver, as user *ubuntu*, using
+the ssh key */var/lib/opnfv/mcp.rsa*. The example below shows a connection to the Salt master.
+
+ .. code-block:: bash
+
+ $ ssh -o StrictHostKeyChecking=no -i /var/lib/opnfv/mcp.rsa -l ubuntu 10.20.0.2
+
+**Note**: The Salt master IP is not hard-coded; it is configurable via INSTALLER_IP during deployment.
+
+
+The Fuel baremetal deploy has a Virtualized Control Plane (VCP), which means that the controller
+services are installed in VMs on the baremetal targets (kvm servers). These VMs can also be
+accessed with virsh console: user *opnfv*, password *opnfv_secret*. This method does not apply
+to infrastructure VMs (Salt master and MaaS).
+
+The example below shows a connection to a controller VM; the connection is made from the
+baremetal server kvm01.
+
+ .. code-block:: bash
+
+ $ ssh -o StrictHostKeyChecking=no -i /var/lib/opnfv/mcp.rsa -l ubuntu x.y.z.141
+ ubuntu@kvm01:~$ virsh console ctl01
+
+User *ubuntu* has sudo rights. User *opnfv* has sudo rights only on aarch64 deploys.
+
+
+=============================
+Exploring the Cloud with Salt
+=============================
+
+To gather information about the cloud, the salt commands can be used. Salt is based
+on a master-minion model, where the salt-master pushes configuration to the minions and
+instructs them to execute actions.
+
+For example, salt can be told to execute a ping to 8.8.8.8 on all the nodes.
+
+.. figure:: img/saltstack.png
+
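+A plausible form of that command is sketched below (the exact invocation is
+the one shown in the figure; *cmd.run* executes the shell command on every
+targeted minion):
+
+.. code-block:: bash
+
+    root@cfg01:~# salt "*" cmd.run 'ping -c 4 8.8.8.8'
+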
+Complex filters can be applied to the target, such as compound queries or node roles.
+For more information about Salt, see the :ref:`references` section.
+
+Some examples are listed below. Note that these commands are issued from the Salt master
+as the *root* user.
+
+
+#. View the IPs of all the components
+
+ .. code-block:: bash
+
+ root@cfg01:~$ salt "*" network.ip_addrs
+ cfg01.baremetal-mcp-ocata-odl-ha.local:
+ - 10.20.0.2
+ - 172.16.10.100
+ mas01.baremetal-mcp-ocata-odl-ha.local:
+ - 10.20.0.3
+ - 172.16.10.3
+ - 192.168.11.3
+ .........................
+
+
+#. View the interfaces of all the components and write the output to a file in yaml format
+
+ .. code-block:: bash
+
+ root@cfg01:~$ salt "*" network.interfaces --out yaml --output-file interfaces.yaml
+ root@cfg01:~# cat interfaces.yaml
+ cfg01.baremetal-mcp-ocata-odl-ha.local:
+ enp1s0:
+ hwaddr: 52:54:00:72:77:12
+ inet:
+ - address: 10.20.0.2
+ broadcast: 10.20.0.255
+ label: enp1s0
+ netmask: 255.255.255.0
+ inet6:
+ - address: fe80::5054:ff:fe72:7712
+ prefixlen: '64'
+ scope: link
+ up: true
+ .........................
+
+
+#. View installed packages on the MaaS node
+
+ .. code-block:: bash
+
+ root@cfg01:~# salt "mas*" pkg.list_pkgs
+ mas01.baremetal-mcp-ocata-odl-ha.local:
+ ----------
+ accountsservice:
+ 0.6.40-2ubuntu11.3
+ acl:
+ 2.2.52-3
+ acpid:
+ 1:2.0.26-1ubuntu2
+ adduser:
+ 3.113+nmu3ubuntu4
+ anerd:
+ 1
+ .........................
+
+
+#. Execute any linux command on all nodes (list the content of */var/log* in this example)
+
+ .. code-block:: bash
+
+ root@cfg01:~# salt "*" cmd.run 'ls /var/log'
+ cfg01.baremetal-mcp-ocata-odl-ha.local:
+ alternatives.log
+ apt
+ auth.log
+ boot.log
+ btmp
+ cloud-init-output.log
+ cloud-init.log
+ .........................
+
+
+#. Execute any linux command on nodes using compound queries filter
+
+ .. code-block:: bash
+
+ root@cfg01:~# salt -C '* and cfg01*' cmd.run 'ls /var/log'
+ cfg01.baremetal-mcp-ocata-odl-ha.local:
+ alternatives.log
+ apt
+ auth.log
+ boot.log
+ btmp
+ cloud-init-output.log
+ cloud-init.log
+ .........................
+
+
+#. Execute any linux command on nodes using role filter
+
+ .. code-block:: bash
+
+ root@cfg01:~# salt -I 'nova:compute' cmd.run 'ls /var/log'
+ cmp001.baremetal-mcp-ocata-odl-ha.local:
+ alternatives.log
+ apache2
+ apt
+ auth.log
+ btmp
+ ceilometer
+ cinder
+ cloud-init-output.log
+ cloud-init.log
+ .........................
+
+
+
+===================
+Accessing Openstack
+===================
+
+Once the deployment is complete, the OpenStack CLI is accessible from the controller VMs (ctl01..03).
+OpenStack credentials are at */root/keystonercv3*.
+
+ .. code-block:: bash
+
+ root@ctl01:~# source keystonercv3
+ root@ctl01:~# openstack image list
+ +--------------------------------------+-----------------------------------------------+--------+
+ | ID | Name | Status |
+ +--------------------------------------+-----------------------------------------------+--------+
+ | 152930bf-5fd5-49c2-b3a1-cae14973f35f | CirrosImage | active |
+ | 7b99a779-78e4-45f3-9905-64ae453e3dcb | Ubuntu16.04 | active |
+ +--------------------------------------+-----------------------------------------------+--------+
+
+
+The OpenStack Dashboard (Horizon) is available at http://<controller VIP>:8078, e.g. http://10.16.0.101:8078.
+The administrator credentials are *admin*/*opnfv_secret*.
+
+.. figure:: img/horizon_login.png
+
+
+A full list of IPs/services is available at <proxy public VIP>:8090 for baremetal deploys.
+
+.. figure:: img/salt_services_ip.png
+
+For virtual deploys, the most commonly used IPs are listed in the table below.
+
++-----------+--------------+---------------+
+| Component | IP | Default value |
++===========+==============+===============+
+| gtw01 | x.y.z.110 | 172.16.10.110 |
++-----------+--------------+---------------+
+| ctl01 | x.y.z.100 | 172.16.10.100 |
++-----------+--------------+---------------+
+| cmp001 | x.y.z.105 | 172.16.10.105 |
++-----------+--------------+---------------+
+| cmp002 | x.y.z.106 | 172.16.10.106 |
++-----------+--------------+---------------+
+
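+
+Using the defaults above, a connection to *ctl01* from the Jumpserver could
+look like this (same user and ssh key as described in the cloud access
+section):
+
+ .. code-block:: bash
+
+     $ ssh -o StrictHostKeyChecking=no -i /var/lib/opnfv/mcp.rsa -l ubuntu 172.16.10.100
+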
+
+.. _references:
+
+==========
+References
+==========
+
+1) `Installation instructions <http://docs.opnfv.org/en/stable-euphrates/submodules/fuel/docs/release/installation/installation.instruction.html>`_
+2) `Saltstack Documentation <https://docs.saltstack.com/en/latest/topics>`_
+3) `Saltstack Formulas <http://salt-formulas.readthedocs.io/en/latest/develop/overview-reclass.html>`_
+
+