author     Alexandru Avadanii <Alexandru.Avadanii@enea.com>  2019-02-07 19:51:04 +0100
committer  Alexandru Avadanii <Alexandru.Avadanii@enea.com>  2019-02-14 16:58:51 +0100
commit     58af9a94ef78bbcf3f0593d4170d32ebce721455 (patch)
tree       895f9cd9620d4509b86d281fcfc5fce9a69a5e15 /docs/release
parent     494c436572aed0b739bcfcc3fbf5b78ea34318b2 (diff)
[baremetal] Containerize MaaS
- replace mas01 VM with a Docker container;
- drop `mcpcontrol` virsh-managed network, including special handling
previously required for it across all scripts;
- drop infrastructure VM handling from scripts; the only VMs we still
handle are cluster VMs for virtual and/or hybrid deployments;
- drop SSH server from mas01;
- stop running linux state on mas01, as all prerequisites are properly
handled during Docker build or via entrypoint.sh - for completeness,
we still keep pillar data in sync with the actual contents of the mas01
configuration, so running the state manually would still work;
- make port 5240 available on the jumpserver for MaaS dashboard access;
- docs: update diagrams and text to reflect the new changes;
Change-Id: I6d9424995e9a90c530fd7577edf401d552bab929
Signed-off-by: Alexandru Avadanii <Alexandru.Avadanii@enea.com>
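
The gist of the containerized setup, sketched as plain Docker commands for
illustration only (the image name `example/opnfv-fuel-maas` is hypothetical
and the exact flags are an assumption; the real values are generated by the
deploy scripts, while the subnet, IP offset and port match the docs below):

    # Docker-managed bridge network replacing the virsh-managed mcpcontrol
    jumpserver$ docker network create --driver bridge --subnet 10.20.0.0/24 mcpcontrol
    # MaaS container replacing the mas01 VM; port 5240 is published on the
    # jumpserver so the MaaS dashboard is reachable without a public mas01 IP
    jumpserver$ docker run --detach --name maas --network mcpcontrol --ip 10.20.0.3 \
        --publish 5240:5240 --privileged example/opnfv-fuel-maas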
Diffstat (limited to 'docs/release')
-rwxr-xr-x [-rw-r--r--]  docs/release/installation/img/fuel_baremetal_ha.png    | bin | 289121 -> 279736 bytes
-rwxr-xr-x [-rw-r--r--]  docs/release/installation/img/fuel_baremetal_noha.png  | bin | 197550 -> 187877 bytes
-rwxr-xr-x [-rw-r--r--]  docs/release/installation/img/fuel_hybrid_noha.png     | bin | 191144 -> 186931 bytes
-rwxr-xr-x [-rw-r--r--]  docs/release/installation/img/fuel_virtual_noha.png    | bin | 236222 -> 234038 bytes
-rw-r--r--               docs/release/installation/installation.instruction.rst |  30 |
-rw-r--r--               docs/release/userguide/userguide.rst                    |  67 |
6 files changed, 55 insertions, 42 deletions
diff --git a/docs/release/installation/img/fuel_baremetal_ha.png b/docs/release/installation/img/fuel_baremetal_ha.png
index f2ed6106f..af5f00f8a 100644..100755
Binary files differ
diff --git a/docs/release/installation/img/fuel_baremetal_noha.png b/docs/release/installation/img/fuel_baremetal_noha.png
index 5a3b42919..4b5aef050 100644..100755
Binary files differ
diff --git a/docs/release/installation/img/fuel_hybrid_noha.png b/docs/release/installation/img/fuel_hybrid_noha.png
index 51449a777..f2debfef3 100644..100755
Binary files differ
diff --git a/docs/release/installation/img/fuel_virtual_noha.png b/docs/release/installation/img/fuel_virtual_noha.png
index 7d05a9dcd..710988acb 100644..100755
Binary files differ
diff --git a/docs/release/installation/installation.instruction.rst b/docs/release/installation/installation.instruction.rst
index b0efd57ab..46a4350f5 100644
--- a/docs/release/installation/installation.instruction.rst
+++ b/docs/release/installation/installation.instruction.rst
@@ -108,7 +108,7 @@ installation of ``Gambia`` using Fuel:
 |                  |                                                      |
 +==================+======================================================+
 | **1 Jumpserver** | A physical node (also called Foundation Node) that  |
-|                  | hosts the Salt Master container and MaaS VM          |
+|                  | hosts the Salt Master and MaaS containers            |
 +------------------+------------------------------------------------------+
 | **# of nodes**   | Minimum 5                                            |
 |                  |                                                      |
@@ -170,7 +170,7 @@ installation of ``Gambia`` using Fuel:
 |                  |                                                      |
 +==================+======================================================+
 | **1 Jumpserver** | A physical node (also called Foundation Node) that  |
-|                  | hosts the Salt Master container, MaaS VM and         |
+|                  | hosts the Salt Master and MaaS containers, and       |
 |                  | each of the virtual nodes defined in ``PDF``         |
 +------------------+------------------------------------------------------+
 | **# of nodes**   | .. NOTE::                                            |
@@ -426,6 +426,14 @@ Changes ``deploy.sh`` Will Perform to Jumpserver OS

 .. WARNING::

+    On Jumpservers running Ubuntu with AppArmor enabled, when deploying
+    on baremetal nodes (i.e. when MaaS is used), the install script
+    will disable certain conflicting AppArmor profiles that interfere with
+    MaaS services inside the container, e.g. ``ntpd``, ``named``, ``dhcpd``,
+    ``tcpdump``.
+
+.. WARNING::
+
     The install script will automatically install and/or upgrade the
     required distribution package dependencies on the Jumpserver,
     unless explicitly asked not to (via the ``-P`` deploy arg).
@@ -729,7 +737,7 @@ Sample ``public`` network configuration block:
       private: 'trunk'
       public: 'trunk'
       trunks:
-        # mgmt network is not decapsulated for jumpserver infra VMs,
+        # mgmt network is not decapsulated for jumpserver infra nodes,
         # to align with the VLAN configuration of baremetal nodes.
         mgmt: True
@@ -991,15 +999,15 @@ A simplified overview of the steps ``deploy.sh`` will automatically perform is:
 - create a Salt Master Docker container on the jumpserver, which will drive
   the rest of the installation;
-- ``baremetal`` or ``hybrid`` only: create a ``MaaS`` infrastructure node VM,
+- ``baremetal`` or ``hybrid`` only: create a ``MaaS`` container node,
   which will be leveraged using Salt to handle OS provisioning on the
   ``baremetal`` nodes;
 - leverage Salt to install & configure OpenStack;

 .. NOTE::

-    A virtual network ``mcpcontrol`` is always created for initial connection
-    of the VMs on Jumphost.
+    A Docker network ``mcpcontrol`` is always created for initial connection
+    of the infrastructure containers (``cfg01``, ``mas01``) on Jumphost.

 .. WARNING::
@@ -1096,7 +1104,7 @@ each on a separate Jumphost node, both behind the same ``TOR`` switch:
 +-------------+------------------------------------------------------------+
 | ``cfg01``   | Salt Master Docker container                               |
 +-------------+------------------------------------------------------------+
-| ``mas01``   | MaaS Node VM                                               |
+| ``mas01``   | MaaS Node Docker container                                 |
 +-------------+------------------------------------------------------------+
 | ``ctl01``   | Baremetal controller node                                  |
 +-------------+------------------------------------------------------------+
@@ -1125,7 +1133,7 @@ each on a separate Jumphost node, both behind the same ``TOR`` switch:
 +---------------------------+----------------------------------------------+
 | ``cfg01``                 | Salt Master Docker container                 |
 +---------------------------+----------------------------------------------+
-| ``mas01``                 | MaaS Node VM                                 |
+| ``mas01``                 | MaaS Node Docker container                   |
 +---------------------------+----------------------------------------------+
 | ``kvm01``,                | Baremetals which hold the VMs with           |
 | ``kvm02``,                | controller functions                         |
@@ -1186,7 +1194,7 @@ each on a separate Jumphost node, both behind the same ``TOR`` switch:
 +-------------+------------------------------------------------------------+
 | ``cfg01``   | Salt Master Docker container                               |
 +-------------+------------------------------------------------------------+
-| ``mas01``   | MaaS Node VM                                               |
+| ``mas01``   | MaaS Node Docker container                                 |
 +-------------+------------------------------------------------------------+
 | ``ctl01``   | Controller VM                                              |
 +-------------+------------------------------------------------------------+
@@ -1324,10 +1332,10 @@ sequentially by the deploy script:
 +===========================+=================================================+
 | ``virtual_init``          | ``cfg01``: reclass node generation              |
 |                           |                                                 |
-|                           | ``jumpserver`` VMs (e.g. ``mas01``): basic OS   |
+|                           | ``jumpserver`` VMs (if present): basic OS       |
 |                           | config                                          |
 +---------------------------+-------------------------------------------------+
-| ``maas``                  | ``mas01``: OS, MaaS installation,               |
+| ``maas``                  | ``mas01``: OS, MaaS configuration               |
 |                           | ``baremetal`` node commissioning and deploy     |
 |                           |                                                 |
 |                           | .. NOTE::                                       |
diff --git a/docs/release/userguide/userguide.rst b/docs/release/userguide/userguide.rst
index 25b5e13be..50acf6feb 100644
--- a/docs/release/userguide/userguide.rst
+++ b/docs/release/userguide/userguide.rst
@@ -29,7 +29,8 @@ Fuel uses several networks to deploy and administer the cloud:
 | **PXE/admin**    | Used for booting the nodes via PXE and/or Salt           |
 |                  | control network                                          |
 +------------------+----------------------------------------------------------+
-| **mcpcontrol**   | Used to provision the infrastructure hosts (Salt & MaaS) |
+| **mcpcontrol**   | Docker network used to provision the infrastructure      |
+|                  | hosts (Salt & MaaS)                                      |
 +------------------+----------------------------------------------------------+
 | **management**   | Used for internal communication between                  |
 |                  | OpenStack components                                     |
@@ -45,20 +46,21 @@ Fuel uses several networks to deploy and administer the cloud:
 These networks - except ``mcpcontrol`` - can be Linux bridges configured
 before the deploy on the Jumpserver.
 If they don't exists at deploy time, they will be created by the scripts as
-``libvirt`` managed networks.
+``libvirt`` managed networks (except ``mcpcontrol``, which will be handled by
+Docker using the ``bridge`` driver).

 Network ``mcpcontrol``
 ~~~~~~~~~~~~~~~~~~~~~~

-``mcpcontrol`` is a virtual network, managed by libvirt. Its only purpose is to
+``mcpcontrol`` is a virtual network, managed by Docker. Its only purpose is to
 provide a simple method of assigning an arbitrary ``INSTALLER_IP`` to the Salt
 master node (``cfg01``), to maintain backwards compatibility with old OPNFV
 Fuel behavior. Normally, end-users only need to change the ``INSTALLER_IP`` if
 the default CIDR (``10.20.0.0/24``) overlaps with existing lab networks.

-``mcpcontrol`` has both NAT and DHCP enabled, so the Salt master (``cfg01``)
-and the MaaS VM (``mas01``, when present) get assigned predefined IPs (``.2``,
-``.3``, while the jumpserver bridge port gets ``.1``).
+``mcpcontrol`` uses the Docker bridge driver, so the Salt master (``cfg01``)
+and the MaaS containers (``mas01``, when present) get assigned predefined IPs
+(``.2``, ``.3``, while the jumpserver gets ``.1``).

 +------------------+---------------------------+-----------------------------+
 | Host             | Offset in IP range        | Default address             |
@@ -346,6 +348,18 @@ To login as ``ubuntu`` user, use the RSA private key ``/var/lib/opnfv/mcp.rsa``:

     jenkins@jumpserver:~$ docker exec -it fuel bash
     root@cfg01:~$

+Accessing the MaaS Node (``mas01``)
+===================================
+
+Starting with the ``Hunter`` release, the MaaS node (``mas01``) is
+containerized and no longer runs a ``sshd`` server. To access it (from
+``jumpserver`` only):
+
+.. code-block:: console
+
+    jenkins@jumpserver:~$ docker exec -it maas bash
+    root@mas01:~$
+
 Accessing Cluster Nodes
 =======================
@@ -382,19 +396,10 @@ Accessing the ``MaaS`` Dashboard
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

 ``MaaS`` web-based dashboard is available at
-``http://<mas01 IP address>:5240/MAAS``, e.g.
-``http://172.16.10.12:5240/MAAS``.
+``http://<jumpserver IP address>:5240/MAAS``.

 The administrator credentials are ``opnfv``/``opnfv_secret``.

-.. NOTE::
-
-    ``mas01`` VM does not automatically get assigned an IP address in the
-    public network segment. If ``MaaS`` dashboard should be accesiable from
-    the public network, such an address can be manually added to the last
-    VM NIC interface in ``mas01`` (which is already connected to the public
-    network bridge).
-
 Ensure Commission/Deploy Timeouts Are Not Too Small
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -446,30 +451,31 @@ Check Network Connectivity Between Nodes on the Jumpserver
 ``cfg01`` is a Docker container running on the ``jumpserver``, connected to
 Docker networks (created by docker-compose automatically on container up),
 which in turn are connected using veth pairs to their ``libvirt`` managed
-counterparts.
+counterparts (or manually created bridges).

-For example, the ``mcpcontrol`` network(s) should look like below.
+For example, the ``mgmt`` network(s) should look like below for a ``virtual``
+deployment.

 .. code-block:: console

-    jenkins@jumpserver:~$ brctl show mcpcontrol
+    jenkins@jumpserver:~$ brctl show mgmt
     bridge name     bridge id           STP enabled     interfaces
-    mcpcontrol      8000.525400064f77   yes             mcpcontrol-nic
-                                                        veth_mcp0
+    mgmt            8000.525400064f77   yes             mgmt-nic
+                                                        veth_mcp2
                                                         vnet8

     jenkins@jumpserver:~$ docker network ls
     NETWORK ID          NAME                              DRIVER   SCOPE
-    81a0fdb3bd78        docker-compose_docker-mcpcontrol  macvlan  local
+    81a0fdb3bd78        docker-compose_mgmt               macvlan  local
     [...]

-    jenkins@jumpserver:~$ docker network inspect docker-compose_mcpcontrol
+    jenkins@jumpserver:~$ docker network inspect docker-compose_mgmt
     [
         {
-            "Name": "docker-compose_mcpcontrol",
+            "Name": "docker-compose_mgmt",
             [...]
             "Options": {
-                "parent": "veth_mcp1"
+                "parent": "veth_mcp3"
             },
         }
     ]
@@ -488,14 +494,13 @@ segment).
     inet addr:172.16.10.2  Bcast:0.0.0.0  Mask:255.255.255.0
     inet addr:192.168.11.2  Bcast:0.0.0.0  Mask:255.255.255.0

-For each network of interest (``mcpcontrol``, ``mgmt``, ``PXE/admin``), check
-that ``cfg01`` can ping the jumpserver IP in that network segment, as well as
-the ``mas01`` IP in that network.
+For each network of interest (``mgmt``, ``PXE/admin``), check
+that ``cfg01`` can ping the jumpserver IP in that network segment.

 .. NOTE::

-    ``mcpcontrol`` is set up at VM bringup, so it should always be available,
-    while the other networks are configured by Salt as part of the
+    ``mcpcontrol`` is set up at container bringup, so it should always be
+    available, while the other networks are configured by Salt as part of the
     ``virtual_init`` STATE file.

 .. code-block:: console
@@ -552,7 +557,7 @@ To confirm or rule out this possibility, monitor the serial console output of
 one (or more) cluster nodes during ``MaaS`` commissioning. If the node is
 properly configured to attempt PXE boot, yet it times out waiting for an IP
 address from ``mas01`` ``DHCP``, it's worth checking that ``DHCP`` packets
-reach the ``jumpserver``, respectively the ``mas01`` VM.
+reach the ``jumpserver``, respectively the ``mas01`` container.

 .. code-block:: console
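
Once a deployment using the containerized MaaS is up, the changes documented
above can be spot-checked from the jumpserver. A brief example; the first
command's grep filter and the final curl check are illustrative additions,
not part of the documented procedure, while the docker exec access and the
dashboard port come straight from the updated userguide:

    jenkins@jumpserver:~$ docker network ls | grep mcpcontrol    # Docker bridge network, replacing the virsh one
    jenkins@jumpserver:~$ docker exec -it maas bash              # shell access, replacing SSH to the mas01 VM
    root@mas01:~$ exit
    jenkins@jumpserver:~$ curl -sI http://localhost:5240/MAAS    # dashboard now exposed on the jumpserver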