Diffstat (limited to 'docs/release/userguide')
-rw-r--r--  docs/release/userguide/userguide.rst  328
1 file changed, 189 insertions(+), 139 deletions(-)
diff --git a/docs/release/userguide/userguide.rst b/docs/release/userguide/userguide.rst
index 2b46a84ac..fd9dfa736 100644
--- a/docs/release/userguide/userguide.rst
+++ b/docs/release/userguide/userguide.rst
@@ -6,9 +6,9 @@
Abstract
========
-This document contains details about how to use OPNFV Fuel - Euphrates
+This document contains details about how to use OPNFV Fuel - Fraser
release - after it was deployed. For details on how to deploy, check the
-installation instructions in the :ref:`references` section.
+installation instructions in the :ref:`fuel_userguide_references` section.
This is a unified documentation for both x86_64 and aarch64
architectures. All information is common for both architectures
@@ -22,26 +22,25 @@ Network Overview
Fuel uses several networks to deploy and administer the cloud:
-+------------------+-------------------+---------------------------------------------------------+
-| Network name | Deploy Type | Description |
-| | | |
-+==================+===================+=========================================================+
-| **PXE/ADMIN** | baremetal only | Used for booting the nodes via PXE |
-+------------------+-------------------+---------------------------------------------------------+
-| **MCPCONTROL** | baremetal & | Used to provision the infrastructure VMs (Salt & MaaS). |
-| | virtual | On virtual deploys, it is used for Admin too (on target |
-| | | VMs) leaving the PXE/Admin bridge unused |
-+------------------+-------------------+---------------------------------------------------------+
-| **Mgmt** | baremetal & | Used for internal communication between |
-| | virtual | OpenStack components |
-+------------------+-------------------+---------------------------------------------------------+
-| **Internal** | baremetal & | Used for VM data communication within the |
-| | virtual | cloud deployment |
-+------------------+-------------------+---------------------------------------------------------+
-| **Public** | baremetal & | Used to provide Virtual IPs for public endpoints |
-| | virtual | that are used to connect to OpenStack services APIs. |
-| | | Used by Virtual machines to access the Internet |
-+------------------+-------------------+---------------------------------------------------------+
++------------------+---------------------------------------------------------+
+| Network name | Description |
++==================+=========================================================+
+| **PXE/ADMIN** | Used for booting the nodes via PXE and/or as the |
+| | Salt control network |
++------------------+---------------------------------------------------------+
+| **MCPCONTROL** | Used to provision the infrastructure VMs (Salt & MaaS) |
++------------------+---------------------------------------------------------+
+| **Mgmt** | Used for internal communication between |
+| | OpenStack components |
++------------------+---------------------------------------------------------+
+| **Internal** | Used for VM data communication within the |
+| | cloud deployment |
++------------------+---------------------------------------------------------+
+| **Public** | Used to provide Virtual IPs for public endpoints |
+| | that are used to connect to OpenStack service APIs. |
+| | Used by virtual machines to access the Internet |
++------------------+---------------------------------------------------------+
These networks - except mcpcontrol - can be Linux bridges configured before the deploy on the
@@ -60,27 +59,21 @@ Accessing the Cloud
Access to any component of the deployed cloud is done from the Jumpserver as user *ubuntu* with
the ssh key */var/lib/opnfv/mcp.rsa*. The example below is a connection to the Salt master.
- .. code-block:: bash
+ .. code-block:: bash
- $ ssh -o StrictHostKeyChecking=no -i /var/lib/opnfv/mcp.rsa -l ubuntu 10.20.0.2
+ $ ssh -o StrictHostKeyChecking=no -i /var/lib/opnfv/mcp.rsa -l ubuntu 10.20.0.2
**Note**: The Salt master IP is not hardcoded; it is configurable via INSTALLER_IP during deployment.
+Logging in to cluster nodes is possible from the Jumpserver and from the Salt master. On the Salt master,
+cluster hostnames can be used instead of IP addresses:
-The Fuel baremetal deploy has a Virtualized Control Plane (VCP) which means that the controller
-services are installed in VMs on the baremetal targets (kvm servers). These VMs can also be
-accessed with virsh console: user *opnfv*, password *opnfv_secret*. This method does not apply
-to infrastructure VMs (Salt master and MaaS).
+ .. code-block:: bash
-The example below is a connection to a controller VM. The connection is made from the baremetal
-server kvm01.
+ $ sudo -i
+ $ ssh -i mcp.rsa ubuntu@ctl01
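+
+To find out which hostnames are available, the minion keys accepted by the Salt master
+can be listed (a quick sketch; *salt-key* is standard Salt tooling):
+
+ .. code-block:: bash
+
+ $ salt-key -L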
- .. code-block:: bash
-
- $ ssh -o StrictHostKeyChecking=no -i /var/lib/opnfv/mcp.rsa -l ubuntu x.y.z.141
- ubuntu@kvm01:~$ virsh console ctl01
-
-User *ubuntu* has sudo rights. User *opnfv* has sudo rights only on aarch64 deploys.
+User *ubuntu* has sudo rights.
=============================
@@ -96,34 +89,34 @@ For example tell salt to execute a ping to 8.8.8.8 on all the nodes.
.. figure:: img/saltstack.png
Complex filters can be applied to the target, like compound queries or node roles.
-For more information about Salt see the :ref:`references` section.
+For more information about Salt see the :ref:`fuel_userguide_references` section.
Some examples are listed below. Note that these commands are issued from the Salt master
-with *root* user.
+as the *root* user.
#. View the IPs of all the components
- .. code-block:: bash
+ .. code-block:: bash
- root@cfg01:~$ salt "*" network.ip_addrs
- cfg01.baremetal-mcp-ocata-odl-ha.local:
+ root@cfg01:~$ salt "*" network.ip_addrs
+ cfg01.mcp-pike-odl-ha.local:
- 10.20.0.2
- 172.16.10.100
- mas01.baremetal-mcp-ocata-odl-ha.local:
+ mas01.mcp-pike-odl-ha.local:
- 10.20.0.3
- 172.16.10.3
- 192.168.11.3
- .........................
+ .........................
#. View the interfaces of all the components and save the output to a file in YAML format
- .. code-block:: bash
+ .. code-block:: bash
- root@cfg01:~$ salt "*" network.interfaces --out yaml --output-file interfaces.yaml
- root@cfg01:~# cat interfaces.yaml
- cfg01.baremetal-mcp-ocata-odl-ha.local:
+ root@cfg01:~$ salt "*" network.interfaces --out yaml --output-file interfaces.yaml
+ root@cfg01:~# cat interfaces.yaml
+ cfg01.mcp-pike-odl-ha.local:
enp1s0:
hwaddr: 52:54:00:72:77:12
inet:
@@ -136,77 +129,77 @@ with *root* user.
prefixlen: '64'
scope: link
up: true
- .........................
+ .........................
#. View installed packages on the MaaS node
- .. code-block:: bash
+ .. code-block:: bash
- root@cfg01:~# salt "mas*" pkg.list_pkgs
- mas01.baremetal-mcp-ocata-odl-ha.local:
- ----------
- accountsservice:
- 0.6.40-2ubuntu11.3
- acl:
- 2.2.52-3
- acpid:
- 1:2.0.26-1ubuntu2
- adduser:
- 3.113+nmu3ubuntu4
- anerd:
- 1
- .........................
+ root@cfg01:~# salt "mas*" pkg.list_pkgs
+ mas01.mcp-pike-odl-ha.local:
+ ----------
+ accountsservice:
+ 0.6.40-2ubuntu11.3
+ acl:
+ 2.2.52-3
+ acpid:
+ 1:2.0.26-1ubuntu2
+ adduser:
+ 3.113+nmu3ubuntu4
+ anerd:
+ 1
+ .........................
#. Execute any Linux command on all nodes (list the content of */var/log* in this example)
- .. code-block:: bash
+ .. code-block:: bash
- root@cfg01:~# salt "*" cmd.run 'ls /var/log'
- cfg01.baremetal-mcp-ocata-odl-ha.local:
- alternatives.log
- apt
- auth.log
- boot.log
- btmp
- cloud-init-output.log
- cloud-init.log
- .........................
+ root@cfg01:~# salt "*" cmd.run 'ls /var/log'
+ cfg01.mcp-pike-odl-ha.local:
+ alternatives.log
+ apt
+ auth.log
+ boot.log
+ btmp
+ cloud-init-output.log
+ cloud-init.log
+ .........................
#. Execute any Linux command on nodes using a compound query filter
- .. code-block:: bash
+ .. code-block:: bash
- root@cfg01:~# salt -C '* and cfg01*' cmd.run 'ls /var/log'
- cfg01.baremetal-mcp-ocata-odl-ha.local:
- alternatives.log
- apt
- auth.log
- boot.log
- btmp
- cloud-init-output.log
- cloud-init.log
- .........................
+ root@cfg01:~# salt -C '* and cfg01*' cmd.run 'ls /var/log'
+ cfg01.mcp-pike-odl-ha.local:
+ alternatives.log
+ apt
+ auth.log
+ boot.log
+ btmp
+ cloud-init-output.log
+ cloud-init.log
+ .........................
#. Execute any Linux command on nodes using a role filter
- .. code-block:: bash
+ .. code-block:: bash
- root@cfg01:~# salt -I 'nova:compute' cmd.run 'ls /var/log'
- cmp001.baremetal-mcp-ocata-odl-ha.local:
- alternatives.log
- apache2
- apt
- auth.log
- btmp
- ceilometer
- cinder
- cloud-init-output.log
- cloud-init.log
- .........................
+ root@cfg01:~# salt -I 'nova:compute' cmd.run 'ls /var/log'
+ cmp001.mcp-pike-odl-ha.local:
+ alternatives.log
+ apache2
+ apt
+ auth.log
+ btmp
+ ceilometer
+ cinder
+ cloud-init-output.log
+ cloud-init.log
+ .........................
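+
+#. Execute any Linux command on nodes using a grain filter (a quick sketch; *-G*
+ grain targeting is standard Salt functionality)
+
+ .. code-block:: bash
+
+ root@cfg01:~# salt -G 'os:Ubuntu' cmd.run 'uname -a'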
@@ -217,19 +210,19 @@ Accessing Openstack
Once the deployment is complete, the OpenStack CLI is accessible from the controller VMs (ctl01..03).
OpenStack credentials are found in */root/keystonercv3*.
- .. code-block:: bash
+ .. code-block:: bash
- root@ctl01:~# source keystonercv3
- root@ctl01:~# openstack image list
- +--------------------------------------+-----------------------------------------------+--------+
- | ID | Name | Status |
- +======================================+===============================================+========+
- | 152930bf-5fd5-49c2-b3a1-cae14973f35f | CirrosImage | active |
- | 7b99a779-78e4-45f3-9905-64ae453e3dcb | Ubuntu16.04 | active |
- +--------------------------------------+-----------------------------------------------+--------+
+ root@ctl01:~# source keystonercv3
+ root@ctl01:~# openstack image list
+ +--------------------------------------+-----------------------------------------------+--------+
+ | ID | Name | Status |
+ +--------------------------------------+-----------------------------------------------+--------+
+ | 152930bf-5fd5-49c2-b3a1-cae14973f35f | CirrosImage | active |
+ | 7b99a779-78e4-45f3-9905-64ae453e3dcb | Ubuntu16.04 | active |
+ +--------------------------------------+-----------------------------------------------+--------+
-The OpenStack Dashboard, Horizon is available at http://<controller VIP>:8078, e.g. http://10.16.0.101:8078.
+The OpenStack Dashboard, Horizon, is available at http://<proxy public VIP>.
The administrator credentials are *admin*/*opnfv_secret*.
.. figure:: img/horizon_login.png
@@ -239,19 +232,78 @@ A full list of IPs/services is available at <proxy public VIP>:8090 for baremeta
.. figure:: img/salt_services_ip.png
-For Virtual deploys, the most commonly used IPs are in the table below.
+==============================
+Guest Operating System Support
+==============================
+
+There are a number of possibilities regarding the guest operating systems that can be spawned
+on the nodes. The current system spawns virtual machines for the VCP VMs on the KVM nodes and
+for user-requested VMs on the OpenStack compute nodes. Currently the system supports the
+following UEFI images for the guests:
+
++------------------+-------------------+------------------+
+| OS name | x86_64 status | aarch64 status |
++==================+===================+==================+
+| Ubuntu 17.10 | untested | Full support |
++------------------+-------------------+------------------+
+| Ubuntu 16.04 | Full support | Full support |
++------------------+-------------------+------------------+
+| Ubuntu 14.04 | untested | Full support |
++------------------+-------------------+------------------+
+| Fedora Atomic 27 | untested | Not supported |
++------------------+-------------------+------------------+
+| Fedora Cloud 27 | untested | Not supported |
++------------------+-------------------+------------------+
+| Debian | untested | Full support |
++------------------+-------------------+------------------+
+| CentOS 7 | untested | Not supported |
++------------------+-------------------+------------------+
+| Cirros 0.3.5 | Full support | Full support |
++------------------+-------------------+------------------+
+| Cirros 0.4.0 | Full support | Full support |
++------------------+-------------------+------------------+
+
+
+The above table covers only UEFI images and implies OVMF/AAVMF firmware on the host. An x86
+deployment also supports non-UEFI images; however, that choice is up to the underlying hardware
+and the administrator to make.
+
+The images for the above operating systems can be found on their respective websites.
+
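+As an illustration, a downloaded image can be registered in Glance as follows (a quick
+sketch, assuming the public Cirros 0.4.0 x86_64 image and the *keystonercv3* credentials
+on a controller VM; adjust the image and disk format to your deployment):
+
+ .. code-block:: bash
+
+ root@ctl01:~# source keystonercv3
+ root@ctl01:~# wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img
+ root@ctl01:~# openstack image create --disk-format qcow2 --container-format bare \
+               --file cirros-0.4.0-x86_64-disk.img CirrosImage
+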
+===================
+OpenStack Endpoints
+===================
+
+For each OpenStack service, three endpoints are created: admin, internal and public.
+
+ .. code-block:: bash
-+-----------+--------------+---------------+
-| Component | IP | Default value |
-+===========+==============+===============+
-| gtw01 | x.y.z.110 | 172.16.10.110 |
-+-----------+--------------+---------------+
-| ctl01 | x.y.z.100 | 172.16.10.100 |
-+-----------+--------------+---------------+
-| cmp001 | x.y.z.105 | 172.16.10.105 |
-+-----------+--------------+---------------+
-| cmp002 | x.y.z.106 | 172.16.10.106 |
-+-----------+--------------+---------------+
+ ubuntu@ctl01:~$ openstack endpoint list --service keystone
+ +----------------------------------+-----------+--------------+--------------+---------+-----------+------------------------------+
+ | ID | Region | Service Name | Service Type | Enabled | Interface | URL |
+ +----------------------------------+-----------+--------------+--------------+---------+-----------+------------------------------+
+ | 008fec57922b4e9e8bf02c770039ae77 | RegionOne | keystone | identity | True | internal | http://172.16.10.26:5000/v3 |
+ | 1a1f3c3340484bda9ef7e193f50599e6 | RegionOne | keystone | identity | True | admin | http://172.16.10.26:35357/v3 |
+ | b0a47d42d0b6491b995d7e6230395de8 | RegionOne | keystone | identity | True | public | https://10.0.15.2:5000/v3 |
+ +----------------------------------+-----------+--------------+--------------+---------+-----------+------------------------------+
+
+MCP sets up all OpenStack services to talk to each other over unencrypted
+connections on the internal management network. All admin/internal endpoints use
+plain http, while the public endpoints are https connections terminated via nginx
+at the VCP proxy VMs.
+
+To access the public endpoints an SSL certificate has to be provided. For
+convenience, the installation script will copy the required certificate
+to the cfg01 node at /etc/ssl/certs/os_cacert.
+
+Copy the certificate from the cfg01 node to the client that will access the https
+endpoints and place it under /etc/ssl/certs. The SSL connection will then be
+established automatically.
+
+ .. code-block:: bash
+
+ $ ssh -o StrictHostKeyChecking=no -i /var/lib/opnfv/mcp.rsa -l ubuntu 10.20.0.2 \
+ "cat /etc/ssl/certs/os_cacert" | sudo tee /etc/ssl/certs/os_cacert
=============================
@@ -274,36 +326,36 @@ After the installation is done, a webbrowser on the host can be used to view the
#. Create a new directory at any location
- .. code-block:: bash
+ .. code-block:: bash
- $ mkdir -p modeler
+ $ mkdir -p modeler
#. Place the fuel repo in the above directory
- .. code-block:: bash
+ .. code-block:: bash
- $ cd modeler
- $ git clone https://gerrit.opnfv.org/gerrit/fuel && cd fuel
+ $ cd modeler
+ $ git clone https://gerrit.opnfv.org/gerrit/fuel && cd fuel
#. Create a container and mount the above host directory
- .. code-block:: bash
+ .. code-block:: bash
- $ docker run --privileged -it -v <absolute_path>/modeler:/host ubuntu bash
+ $ docker run --privileged -it -v <absolute_path>/modeler:/host ubuntu bash
#. Install all the required packages inside the container.
- .. code-block:: bash
+ .. code-block:: bash
- $ apt-get update
- $ apt-get install -y npm nodejs
- $ npm install -g reclass-doc
- $ cd /host/fuel/mcp/reclass
- $ ln -s /usr/bin/nodejs /usr/bin/node
- $ reclass-doc --output /host /host/fuel/mcp/reclass
+ $ apt-get update
+ $ apt-get install -y npm nodejs
+ $ npm install -g reclass-doc
+ $ cd /host/fuel/mcp/reclass
+ $ ln -s /usr/bin/nodejs /usr/bin/node
+ $ reclass-doc --output /host /host/fuel/mcp/reclass
#. View the results from the host by using a browser. The file to open should now be at *modeler/index.html*
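+
+For example (a quick sketch, assuming Firefox is available on the host):
+
+ .. code-block:: bash
+
+ $ firefox modeler/index.html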
@@ -311,14 +363,12 @@ After the installation is done, a webbrowser on the host can be used to view the
.. figure:: img/reclass_doc.png
-.. _references:
+.. _fuel_userguide_references:
==========
References
==========
-1) `Installation instructions <http://docs.opnfv.org/en/stable-euphrates/submodules/fuel/docs/release/installation/installation.instruction.html>`_
+1) :ref:`fuel-release-installation-label`
2) `Saltstack Documentation <https://docs.saltstack.com/en/latest/topics>`_
-3) `Saltstack Formulas <http://salt-formulas.readthedocs.io/en/latest/develop/overview-reclass.html>`_
-
-
+3) `Saltstack Formulas <http://salt-formulas.readthedocs.io/en/latest/>`_