path: root/docs/release/userguide/userguide.rst
author     Alexandru Avadanii <Alexandru.Avadanii@enea.com>    2017-11-26 05:43:39 +0100
committer  Alexandru Avadanii <Alexandru.Avadanii@enea.com>    2017-11-26 22:26:00 +0100
commit     37083673d6cdddbb9b710f4dd5efe832753e5856 (patch)
tree       a0ef84e1ab70b5f1aca9254c3664847c056ab474 /docs/release/userguide/userguide.rst
parent     d8dd9847b7a534b3c78af1b3eb772f07a3543d0c (diff)
u/fuel: Bump & rebase for image pre-install
1. Bump to latest Fuel@OPNFV to include:
   - Bring in newer glusterfs for mtime unsplit brain
     * Requires adding arch "arm64" to PPA definition in reclass:
       - (reclass-system) linux.system.repo.glusterfs: Add arm64 arch
   - Switch nofeature-ha compute nodes to UCA repo
     * Requires an alternative way of adding linux.enea.com repos;
     * linux.enea.com repos will now be pre-installed into VM images;
     * Requires refresh on repo arch list handled by Armband patch:
       - (fuel) baremetal, virtual: Extend arch list for UCA repo
2. Staging proposed patches from upstream Fuel@OPNFV:
   - Add pre-{install,purge} support for base image
     * Reference implementation adds pre-installed Armband specifics:
       - Enea public GPG to APT keys (for below repos);
       - repos (linux.enea.com/{apt-mk,mcp-repos}/*);
       - linux-{image,headers}-generic-hwe-16.04-edge;
       - cloud-init: datasource from NoCloud only;
     * Allows us to drop kernel installation from state files, installing the kernel
       only once during image prep, instead of two stages of parallel installs
       (5 baremetal, 14 VCP);
     * Ensures Armband repos are pre-configured for infrastructure VMs, allowing us
       to drop more reclass repo definitions;
     * Rework armband patch to install kernel only on kvm, cmp:
       - (fuel) baremetal: linux-image-generic-hwe-16.04-edge
3. Sync reclass repo definitions with upstream change, drop duplicates
   - [linux][repos] Remove unused repositories [1]
     * Upstream dropped all "ocata-{security,hotfix,...}" repo comps, which are also
       empty for Armband, so drop them too;
     * Rework following armband patches:
       - (reclass-system) linux/system/repo/mcp: Add Armband repos
         * Move Armband repos to new dedicated reclass classes:
           - linux.system.repo.mcp.armband.extra (currently empty);
           - linux.system.repo.mcp.armband.openstack;
         * Use HTTPS for fetching Enea Armband GPG key;
       - (fuel) baremetal: Add Armband Openstack repos to kvm, cmp
         * Consume defs introduced above only on baremetal nodes;
4. Sync documentation with Fuel@OPNFV (cp)
5. Add vim swap files to .gitignore

[1] https://github.com/Mirantis/reclass-system-salt-model/commit/1dd1b31

Change-Id: Ibab56279de86f08ad7cd9bc6761f4c525532f811
Signed-off-by: Alexandru Avadanii <Alexandru.Avadanii@enea.com>
Diffstat (limited to 'docs/release/userguide/userguide.rst')
-rw-r--r--    docs/release/userguide/userguide.rst    267
1 files changed, 267 insertions, 0 deletions
diff --git a/docs/release/userguide/userguide.rst b/docs/release/userguide/userguide.rst
new file mode 100644
index 00000000..f00e6635
--- /dev/null
+++ b/docs/release/userguide/userguide.rst
@@ -0,0 +1,267 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) Open Platform for NFV Project, Inc. and its contributors
+
+========
+Abstract
+========
+
+This document contains details about how to use OPNFV Fuel - Euphrates
+release - after it has been deployed. For details on how to deploy it, check
+the installation instructions in the :ref:`references` section.
+
+This is a unified document for both x86_64 and aarch64
+architectures. All information is common to both architectures,
+except where explicitly stated otherwise.
+
+
+
+================
+Network Overview
+================
+
+Fuel uses several networks to deploy and administer the cloud:
+
++------------------+----------------+----------------------------------------------------------+
+| Network name     | Deploy Type    | Description                                              |
++==================+================+==========================================================+
+| **PXE/ADMIN**    | baremetal only | Used for booting the nodes via PXE                       |
++------------------+----------------+----------------------------------------------------------+
+| **MCPCONTROL**   | baremetal &    | Used to provision the infrastructure VMs (Salt & MaaS).  |
+|                  | virtual        | On virtual deploys, it is used for Admin too (on target  |
+|                  |                | VMs) leaving the PXE/Admin bridge unused                 |
++------------------+----------------+----------------------------------------------------------+
+| **Mgmt**         | baremetal &    | Used for internal communication between                  |
+|                  | virtual        | OpenStack components                                     |
++------------------+----------------+----------------------------------------------------------+
+| **Internal**     | baremetal &    | Used for VM data communication within the                |
+|                  | virtual        | cloud deployment                                         |
++------------------+----------------+----------------------------------------------------------+
+| **Public**       | baremetal &    | Used to provide Virtual IPs for public endpoints         |
+|                  | virtual        | that are used to connect to OpenStack services APIs.     |
+|                  |                | Used by Virtual machines to access the Internet          |
++------------------+----------------+----------------------------------------------------------+
+
+
+These networks - except mcpcontrol - can be Linux bridges configured on the Jumpserver before
+the deploy. If they do not exist at deploy time, they will be created by the deploy scripts as
+virsh networks.
+
+Mcpcontrol exists only on the Jumpserver and needs to be virtual because a DHCP server runs
+on this network and assigns static IP addresses (host entries) to the Salt and MaaS VMs.
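+
+Before starting a deploy, the existing bridges and virsh networks on the Jumpserver can be
+inspected with the standard tools; a minimal sketch:
+
+.. code-block:: bash
+
+    # Linux bridges already configured on the Jumpserver
+    $ brctl show
+    # virsh networks (created by the deploy scripts when a bridge is missing)
+    $ virsh net-list --all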
+
+
+
+===================
+Accessing the Cloud
+===================
+
+Access to any component of the deployed cloud is done from the Jumpserver, using the *ubuntu*
+user and the SSH key */var/lib/opnfv/mcp.rsa*. The example below shows a connection to the
+Salt master.
+
+ .. code-block:: bash
+
+ $ ssh -o StrictHostKeyChecking=no -i /var/lib/opnfv/mcp.rsa -l ubuntu 10.20.0.2
+
+**Note**: The Salt master IP is not hardcoded; it is configurable via the INSTALLER_IP
+environment variable during deployment.
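+
+A minimal sketch, assuming the variable is exported in the environment that launches the deploy
+(10.20.0.3 is only an example value); the same address is then used for the SSH connection:
+
+.. code-block:: bash
+
+    $ export INSTALLER_IP=10.20.0.3
+    # ... run the deploy ...
+    $ ssh -o StrictHostKeyChecking=no -i /var/lib/opnfv/mcp.rsa -l ubuntu ${INSTALLER_IP}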
+
+
+The Fuel baremetal deploy has a Virtualized Control Plane (VCP), which means that the controller
+services are installed in VMs on the baremetal targets (kvm servers). These VMs can also be
+accessed via virsh console, using the *opnfv* user and the *opnfv_secret* password. This method
+does not apply to the infrastructure VMs (Salt master and MaaS).
+
+The example below shows a connection to a controller VM, made from the baremetal server kvm01.
+
+ .. code-block:: bash
+
+ $ ssh -o StrictHostKeyChecking=no -i /var/lib/opnfv/mcp.rsa -l ubuntu x.y.z.141
+ ubuntu@kvm01:~$ virsh console ctl01
+
+User *ubuntu* has sudo rights. User *opnfv* has sudo rights only on aarch64 deploys.
+
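+For example, root privileges on a controller VM can be obtained with the *ubuntu* user through
+sudo (a minimal sketch):
+
+.. code-block:: bash
+
+    ubuntu@ctl01:~$ sudo -i
+    root@ctl01:~#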
+
+=============================
+Exploring the Cloud with Salt
+=============================
+
+To gather information about the cloud, Salt commands can be used. Salt is based on a
+master-minion model, where the Salt master pushes configuration to the minions and
+instructs them to execute actions.
+
+For example, Salt can be told to execute a ping to 8.8.8.8 on all the nodes, as shown below.
+
+.. figure:: img/saltstack.png
+
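+The equivalent command line (a sketch; the exact invocation shown in the figure may differ)
+uses Salt's *cmd.run* function:
+
+.. code-block:: bash
+
+    root@cfg01:~# salt "*" cmd.run 'ping -c 4 8.8.8.8'
+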
+Complex target filters can be used, such as compound queries or node roles.
+For more information about Salt, see the :ref:`references` section.
+
+Some examples are listed below. Note that these commands are issued from the Salt master
+as the *root* user.
+
+
+#. View the IPs of all the components
+
+ .. code-block:: bash
+
+ root@cfg01:~$ salt "*" network.ip_addrs
+ cfg01.baremetal-mcp-ocata-odl-ha.local:
+ - 10.20.0.2
+ - 172.16.10.100
+ mas01.baremetal-mcp-ocata-odl-ha.local:
+ - 10.20.0.3
+ - 172.16.10.3
+ - 192.168.11.3
+ .........................
+
+
+#. View the interfaces of all the components and write the output to a file in YAML format
+
+ .. code-block:: bash
+
+ root@cfg01:~$ salt "*" network.interfaces --out yaml --output-file interfaces.yaml
+ root@cfg01:~# cat interfaces.yaml
+ cfg01.baremetal-mcp-ocata-odl-ha.local:
+ enp1s0:
+ hwaddr: 52:54:00:72:77:12
+ inet:
+ - address: 10.20.0.2
+ broadcast: 10.20.0.255
+ label: enp1s0
+ netmask: 255.255.255.0
+ inet6:
+ - address: fe80::5054:ff:fe72:7712
+ prefixlen: '64'
+ scope: link
+ up: true
+ .........................
+
+
+#. View installed packages on the MaaS node
+
+ .. code-block:: bash
+
+ root@cfg01:~# salt "mas*" pkg.list_pkgs
+ mas01.baremetal-mcp-ocata-odl-ha.local:
+ ----------
+ accountsservice:
+ 0.6.40-2ubuntu11.3
+ acl:
+ 2.2.52-3
+ acpid:
+ 1:2.0.26-1ubuntu2
+ adduser:
+ 3.113+nmu3ubuntu4
+ anerd:
+ 1
+ .........................
+
+
+#. Execute any Linux command on all nodes (list the contents of */var/log* in this example)
+
+ .. code-block:: bash
+
+ root@cfg01:~# salt "*" cmd.run 'ls /var/log'
+ cfg01.baremetal-mcp-ocata-odl-ha.local:
+ alternatives.log
+ apt
+ auth.log
+ boot.log
+ btmp
+ cloud-init-output.log
+ cloud-init.log
+ .........................
+
+
+#. Execute any Linux command on nodes using a compound query filter
+
+ .. code-block:: bash
+
+ root@cfg01:~# salt -C '* and cfg01*' cmd.run 'ls /var/log'
+ cfg01.baremetal-mcp-ocata-odl-ha.local:
+ alternatives.log
+ apt
+ auth.log
+ boot.log
+ btmp
+ cloud-init-output.log
+ cloud-init.log
+ .........................
+
+
+#. Execute any Linux command on nodes using a role filter
+
+ .. code-block:: bash
+
+ root@cfg01:~# salt -I 'nova:compute' cmd.run 'ls /var/log'
+ cmp001.baremetal-mcp-ocata-odl-ha.local:
+ alternatives.log
+ apache2
+ apt
+ auth.log
+ btmp
+ ceilometer
+ cinder
+ cloud-init-output.log
+ cloud-init.log
+ .........................
+
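+In addition to the compound and role filters shown above, minions can also be targeted by
+grain values with the *-G* option; a minimal sketch, assuming the standard *virtual* grain
+is used to select only the baremetal machines:
+
+.. code-block:: bash
+
+    root@cfg01:~# salt -G 'virtual:physical' cmd.run 'uptime'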
+
+
+===================
+Accessing OpenStack
+===================
+
+Once the deployment is complete, the OpenStack CLI is accessible from the controller VMs (ctl01..ctl03).
+OpenStack credentials are found at */root/keystonercv3*.
+
+ .. code-block:: bash
+
+ root@ctl01:~# source keystonercv3
+ root@ctl01:~# openstack image list
+ +--------------------------------------+-----------------------------------------------+--------+
+ | ID | Name | Status |
+ +======================================+===============================================+========+
+ | 152930bf-5fd5-49c2-b3a1-cae14973f35f | CirrosImage | active |
+ | 7b99a779-78e4-45f3-9905-64ae453e3dcb | Ubuntu16.04 | active |
+ +--------------------------------------+-----------------------------------------------+--------+
+
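+Other OpenStack CLI commands work the same way once *keystonercv3* has been sourced,
+for example (output omitted):
+
+.. code-block:: bash
+
+    root@ctl01:~# openstack server list
+    root@ctl01:~# openstack network list
+    root@ctl01:~# openstack endpoint list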
+
+The OpenStack Dashboard, Horizon, is available at http://<controller VIP>:8078, e.g. http://10.16.0.101:8078.
+The administrator credentials are *admin*/*opnfv_secret*.
+
+.. figure:: img/horizon_login.png
+
+
+A full list of IPs/services is available at <proxy public VIP>:8090 for baremetal deploys.
+
+.. figure:: img/salt_services_ip.png
+
+For virtual deploys, the most commonly used IPs are listed in the table below.
+
++-----------+------------+---------------+
+| Component | IP         | Default value |
++===========+============+===============+
+| gtw01     | x.y.z.110  | 172.16.10.110 |
++-----------+------------+---------------+
+| ctl01     | x.y.z.100  | 172.16.10.100 |
++-----------+------------+---------------+
+| cmp001    | x.y.z.105  | 172.16.10.105 |
++-----------+------------+---------------+
+| cmp002    | x.y.z.106  | 172.16.10.106 |
++-----------+------------+---------------+
+
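+With this default addressing, a node in a virtual deploy can be reached directly from the
+Jumpserver with the same SSH key; a sketch, assuming the default 172.16.10.0/24 range:
+
+.. code-block:: bash
+
+    $ ssh -o StrictHostKeyChecking=no -i /var/lib/opnfv/mcp.rsa -l ubuntu 172.16.10.100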
+
+.. _references:
+
+==========
+References
+==========
+
+1) `Installation instructions <http://docs.opnfv.org/en/stable-euphrates/submodules/fuel/docs/release/installation/installation.instruction.html>`_
+2) `Saltstack Documentation <https://docs.saltstack.com/en/latest/topics>`_
+3) `Saltstack Formulas <http://salt-formulas.readthedocs.io/en/latest/develop/overview-reclass.html>`_
+
+