author     Tim Rozet <trozet@redhat.com>  2018-08-08 17:43:55 -0400
committer  Tim Rozet <trozet@redhat.com>  2018-08-10 20:40:16 -0400
commit     c5959cc14b95e9d10b78ebf3c8e2525c672fc0c7 (patch)
tree       1ab5b0be3e893ac3f77f951abe0c8d7bdf07e6d6 /docs/release
parent     7bbbc905908be356fd1cf2f869b43d7e4d87c12b (diff)
Allow all in one deployments
This patch adds the ability to deploy all-in-one single nodes
(Control + Compute). To enable this functionality, do the following for
each deployment type:
  - Baremetal: do not tag any nodes as compute in the inventory file
    (see the inventory sketch below)
  - Virtual: use the argument '--virtual-computes 0'
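As a rough illustration of the baremetal case, below is a minimal inventory
sketch with a single control node and no compute-tagged nodes. The key names
mirror the example inventory shipped with Apex, but the values (and the exact
set of keys) are placeholders and may vary per release:

    nodes:
      node1:
        mac_address: "aa:bb:cc:dd:ee:ff"   # placeholder provisioning NIC MAC
        ipmi_ip: 192.0.2.10                # placeholder IPMI address
        ipmi_user: admin
        ipmi_pass: password
        pm_type: "pxe_ipmitool"
        cpus: 4
        memory: 16384
        disk: 40
        arch: "x86_64"
        capabilities: "profile:control"    # no node is tagged profile:compute,
                                           # so the deploy becomes all-in-one

Because no node is tagged as compute, the deployment proceeds as all-in-one,
with compute services running alongside the controller on that single node.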
JIRA: APEX-548
Change-Id: I22525c9eb21d331129c819449316c26a6fcf522d
Signed-off-by: Tim Rozet <trozet@redhat.com>
Diffstat (limited to 'docs/release')
-rw-r--r--  docs/release/installation/baremetal.rst     |  8
-rw-r--r--  docs/release/installation/introduction.rst  |  7
-rw-r--r--  docs/release/installation/virtual.rst       | 12
3 files changed, 17 insertions, 10 deletions
diff --git a/docs/release/installation/baremetal.rst b/docs/release/installation/baremetal.rst
index d8f90792..ff55bc16 100644
--- a/docs/release/installation/baremetal.rst
+++ b/docs/release/installation/baremetal.rst
@@ -150,9 +150,13 @@ IPMI configuration information gathered in section
    template to ``/etc/opnfv-apex/inventory.yaml``.
 
 2. The nodes dictionary contains a definition block for each baremetal host
-   that will be deployed. 1 or more compute nodes and 3 controller nodes are
-   required. (The example file contains blocks for each of these already).
+   that will be deployed. 0 or more compute nodes and 1 or 3 controller nodes
+   are required. (The example file contains blocks for each of these already).
    It is optional at this point to add more compute nodes into the node list.
+   By specifying 0 compute nodes in the inventory file, the deployment will
+   automatically deploy "all-in-one" nodes which means the compute will run
+   along side the controller in a single overcloud node. Specifying 3 control
+   nodes will result in a highly-available service model.
 
 3. Edit the following values for each node:
diff --git a/docs/release/installation/introduction.rst b/docs/release/installation/introduction.rst
index 8dbf8f2f..76ed0acb 100644
--- a/docs/release/installation/introduction.rst
+++ b/docs/release/installation/introduction.rst
@@ -12,7 +12,7 @@ Preface
 Apex uses Triple-O from the RDO Project OpenStack distribution as a
 provisioning tool. The Triple-O image based life cycle installation
-tool provisions an OPNFV Target System (3 controllers, 2 or more
+tool provisions an OPNFV Target System (1 or 3 controllers, 0 or more
 compute nodes) with OPNFV specific configuration provided by the Apex
 deployment tool chain.
@@ -37,6 +37,5 @@ will prepare a host to the same ready state for OPNFV deployment.
 ``opnfv-deploy`` instantiates a Triple-O Undercloud VM server using libvirt
 as its provider. This VM is then configured and used to provision the
-OPNFV target deployment (3 controllers, n compute nodes). These nodes can
-be either virtual or bare metal. This guide contains instructions for
-installing either method.
+OPNFV target deployment. These nodes can be either virtual or bare metal.
+This guide contains instructions for installing either method.
diff --git a/docs/release/installation/virtual.rst b/docs/release/installation/virtual.rst
index af8aece2..5682f364 100644
--- a/docs/release/installation/virtual.rst
+++ b/docs/release/installation/virtual.rst
@@ -12,11 +12,14 @@ The virtual deployment operates almost the same way as the bare metal
 deployment with a few differences mainly related to power management.
 ``opnfv-deploy`` still deploys an undercloud VM. In addition to the undercloud
 VM a collection of VMs (3 control nodes + 2 compute for an HA deployment or 1
-control node and 1 or more compute nodes for a Non-HA Deployment) will be
+control node and 0 or more compute nodes for a Non-HA Deployment) will be
 defined for the target OPNFV deployment. All overcloud VMs are registered
 with a Virtual BMC emulator which will service power management (IPMI)
 commands. The overcloud VMs are still provisioned with the same disk images
-and configuration that baremetal would use.
+and configuration that baremetal would use. Using 0 nodes for a virtual
+deployment will automatically deploy "all-in-one" nodes which means the compute
+will run along side the controller in a single overcloud node. Specifying 3
+control nodes will result in a highly-available service model.
 
 To Triple-O these nodes look like they have just built and registered the same
 way as bare metal nodes, the main difference is the use of a libvirt driver for
@@ -67,7 +70,7 @@ environment will deploy with the following architecture:
 - 1 undercloud VM
 
 - The option of 3 control and 2 or more compute VMs (HA Deploy / default)
-  or 1 control and 1 or more compute VM (Non-HA deploy / pass -n)
+  or 1 control and 0 or more compute VMs (Non-HA deploy)
 
 - 1-5 networks: provisioning, private tenant networking, external, storage
   and internal API. The API, storage and tenant networking networks can be
@@ -83,7 +86,8 @@ Follow the steps below to execute:
    password: 'opnfvapex'. It is also useful in some cases to surround the
    deploy command with ``nohup``. For example: ``nohup <deploy command> &``,
    will allow a deployment to continue even if
-   ssh access to the Jump Host is lost during deployment.
+   ssh access to the Jump Host is lost during deployment. By specifying
+   ``--virtual-computes 0``, the deployment will proceed as all-in-one.
 
 2. It will take approximately 45 minutes to an hour to stand up undercloud,
    define the target virtual machines, configure the deployment and execute
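For the virtual case, a rough sketch of the all-in-one invocation described
above: ``--virtual-computes 0`` is the flag this patch documents, while the
``-v``, ``-d`` and ``-n`` arguments and the settings-file paths shown are the
usual Apex deploy arguments and are only assumed examples here:

    # Assumed example: run a virtual deployment with zero compute VMs so the
    # single control node also hosts the compute services (all-in-one).
    nohup opnfv-deploy -v \
        -d /etc/opnfv-apex/os-nosdn-nofeature-noha.yaml \
        -n /etc/opnfv-apex/network_settings.yaml \
        --virtual-computes 0 &

Wrapping the command in ``nohup`` matches the guidance in virtual.rst above:
the deployment continues even if ssh access to the Jump Host is lost.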