Remove NovaDSN from overcloud compute.
When using the Conductor, the Nova compute service
does not need access to the database. This patch
removes all references to the Nova DSN in the overcloud
compute templates.
Change-Id: If75f480489b84002dd061c183dbee3572a8b63f1
|
|
The params were added in I2997d23c584055c40034827e9beb58e6542ea11c
as a means to pass undercloud image data to overcloud instances
so they could perform an update via takeovernode. We've
never actually made use of them via takeovernode... furthermore,
these params are a bit stale in that they haven't been applied
to other instance types (storage, etc.).
I propose we remove them entirely and start with a fresh plan for
how these would get used (perhaps a blueprint). As-is, these don't
appear to have ever been fully wired up to do anything, so removing
them should have no effect on end users.
Change-Id: I96f91fb0d67e7fe203d3767c8ab89ce82adbe331
|
|
This change adds the necessary elements to overcloud-source.yaml,
nova-compute-config.yaml and nova-compute-instance.yaml to allow Neutron
Distributed Virtual Routers (DVR) to be enabled. The added elements
default to values that leave DVR disabled, in keeping with
backwards compatibility.
Change-Id: I422c65e7d941593083d52ad7fdf0dfd1d2fb3155
blueprint: support-neutron-dvr
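A minimal sketch of the kind of parameter this adds, in HOT-style
syntax; the name NeutronDVR and its default are assumptions for
illustration:

    NeutronDVR:
      type: string
      default: 'False'    # DVR stays disabled unless explicitly requested
      description: Whether to enable Neutron distributed virtual routing.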
|
|
Allows Heat to have more control over the parallelism of the deploy,
and allows easy integration of the new Heat dependencies required for
nova compute integration. At present the template is difficult to
understand and has unnecessarily complex dependencies.
Change-Id: Ie566b8b14cbd98fe29cc2368a96d45cc74ca4715
Co-Authored-By: Nicholas Randon <nicholas.randon@hp.com>
|
|
They're mostly rather higgledy-piggledy at the moment, which makes it
quite difficult to compare against files where these are sorted, e.g.
compute.yaml from I687a00c7dc164ba044f9f2dfca96a02401427855.
Change-Id: I508a3d0f6a79810d2100fdd1ad143bcd37bf8c00
|
|
Remove the hardcoding of gre as the Neutron tenant network type for the
Overcloud. This enables deploying an Overcloud that uses vxlan instead
of gre tunnels. A new parameter, NeutronTunnelTypes, is added to allow
configuring the tunnel_types option in the Neutron ML2 configuration.
This change is required by https://review.openstack.org/#/c/92913
Change-Id: I2c2e2153a61349e58ada28c87aa2338c9f00e7bd
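Sketched in HOT-style syntax, the new parameter could look like this
(only the NeutronTunnelTypes name comes from the message; the type and
default are assumptions):

    NeutronTunnelTypes:
      type: string
      default: 'gre'      # preserves the previous hardcoded behaviour
      description: Tunnel types for the Neutron ML2 tunnel_types option, e.g. gre or vxlan.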
|
|
The existing examples for the overcloud ExtraConfig options
use an Ironic setting that would likely never apply (Ironic
isn't for the overcloud).
This patch modifies the example to set the Nova
force_config_drive option in the DEFAULT section instead.
Change-Id: Ieb893552fe9466b90b9d9a831a676d114efb6db1
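Illustrative only - the exact metadata layout is an assumption - but
the revised example is along these lines:

    ExtraConfig:
      type: json
      default:
        nova:
          force_config_drive: always   # a setting that actually applies to the overcloud
      description: Additional configuration to inject into the cluster.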
|
|
Supplement ExtraConfig with role-specific versions - ControllerExtraConfig
and NovaComputeExtraConfig. This allows the user to specify a different
configuration for each role.
Change-Id: Ieaee80e414130504a5e40e878a5a4ca1c196ca2b
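One way the split could look (the types and defaults here are
assumptions):

    ControllerExtraConfig:
      type: json
      default: {}
      description: Controller-specific configuration, applied on top of ExtraConfig.
    NovaComputeExtraConfig:
      type: json
      default: {}
      description: Compute-specific configuration, applied on top of ExtraConfig.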
|
|
The address for the VNC proxy is incorrectly configured in the nova
configuration file.
The correct IP address is the Public Virtual IP address of the
controller node as created by:
I9649ee74ebaf62b6b929b28243a07c789a08867c
The nova image element's nova.conf already has:
novncproxy_base_url=http://{{nova.public_ip}}:6080/vnc_auto.html
but nothing was setting nova.public_ip - until now.
Closes-Bug: #1332554
Change-Id: I41214834511680170393dd4325b510f549373141
|
|
There may be times when an update needs to change this without changing
the template, such as when updates will be done by something other than
Heat (e.g. Ansible).
Change-Id: I89d1153acab697b64468f841b3f2d17c169da649
|
|
To support underclouds and seeds running something older than the very
latest Heat: heat_template_version 2013-05-23 lacks the list_join
function, so this change reverts to using the equivalent Fn::Join
function.
Change-Id: I039f57ab39c1fcfc319a7a34265ba4fabf4ccd08
Closes-Bug: #1354305
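The two forms are equivalent; for example (values illustrative):

    # Needs heat_template_version 2014-10-16 or later:
    list_join: [',', ['192.0.2.5', '192.0.2.6']]
    # Accepted by heat_template_version 2013-05-23:
    Fn::Join: [',', ['192.0.2.5', '192.0.2.6']]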
|
|
To balance load over the rabbit cluster we want to route access
to it via haproxy.
As an additional benefit, this also helps work around bug #856764.
This change sets rabbit.host to the ControlVirtualIP (to be used by
the elements) and adds an haproxy listener for the rabbit nodes.
Related to blueprint tripleo-icehouse-ha-production-configuration
Depends on I3ff37ec18b9191ca8e861519bed142cbdbd5faa2
Change-Id: I49b622a604542f456bd9a37da8dae3353218e640
Related-Bug: 856764
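A rough sketch of the metadata this implies; the structure is an
assumption, not taken from the actual change:

    rabbit:
      host: {get_param: ControlVirtualIP}   # clients reach rabbit via the VIP
    haproxy:
      services:
        - name: rabbitmq
          port: 5672                        # proxied across the rabbit cluster nodes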
|
|
This change renames a few NovaCompute resources so that the naming
is consistent with the controller resources naming choice.
Change-Id: I8c22867b208c5e16fd52bb3157f838f762b71470
|
|
The overcloud bootstrap_nodeid is now specified by the parameter
BootStrapNodeResource, with default value controller0.
This avoids the need to use Fn::Select on the merge.py-built
list of controllers to specify the first controller.
Change-Id: Id9cfeab50b90ceeeae51ea0e35997b7495b28cc4
Partial-Blueprint: tripleo-juno-remove-mergepy
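The parameter itself, per the description above, would be roughly
(HOT-style syntax for illustration):

    BootStrapNodeResource:
      type: string
      default: controller0
      description: Name of the resource to treat as the bootstrap node.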
|
|
This change was generated and validated by running the following:
make hot clean all validate-all
This converts all templates to be valid HOT.
Fn::Select is not converted in this change, but it will actually
work with heat_template_version 2013-05-23; Fn::Select is converted
manually in the next change in this series.
This change also sets heat_template_version to 2014-10-16, which
includes the list_join intrinsic function used throughout these
templates.
Partial-Blueprint: tripleo-juno-remove-mergepy
Change-Id: Ib3cbb83f6ae94adb7b793ab1b662bd5c55cbb5b3
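For example, the version declaration and the sort of list_join use it
enables (values illustrative):

    heat_template_version: 2014-10-16   # first version with list_join
    outputs:
      hosts_csv:
        value:
          list_join: [',', ['controller0', 'novacompute0']]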
|
|
Currently there is very weak (i.e. no) ordering of StructuredDeployments
during heat stack creation (and, importantly, update) on the overcloud.
This can cause the deployment which sends the completion signal back to
Heat to happen before all the others have completed, which in turn leads
Heat to state that the stack is ready while os-refresh-config is still
configuring services.
The only workaround for this is to wait an unknown amount of time after
the heat stack completes before the system is usable.
This patch prevents the completion signal from being returned early, by
ensuring these are strictly ordered:
controller0Deploy
controller0Passthrough
controller0AllNodesDeploy
NovaCompute0Deploy
NovaCompute0Passthrough
NovaCompute0AllNodesDeploy
Change-Id: I0a549370b7aca55b1145de521ad51218428deaf5
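In resource terms the strict ordering amounts to a depends_on chain
between the deployments, along these lines (HOT syntax shown for
illustration; properties omitted):

    controller0Passthrough:
      type: OS::Heat::StructuredDeployment
      depends_on: controller0Deploy
    controller0AllNodesDeploy:
      type: OS::Heat::StructuredDeployment
      depends_on: controller0Passthrough
    NovaCompute0Deploy:
      type: OS::Heat::StructuredDeployment
      depends_on: controller0AllNodesDeploy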
|
|
Without this, when there are multiple admin networks (e.g. a VLAN)
Nova will refuse to guess and we'll fail to deploy.
Change-Id: Id1dca43ef287fda2adcfdf5b5d30145b055dbe76
|
|
Previously the completion signal was just based on the first run of
os-refresh-config. But in this case, we actually need to wait until it
runs successfully with all hosts computed. That way we can know that
services aren't in an unstable state while that configuration rolls out.
Change-Id: I3b965c19c92b366df3069cb8e1daffa18252c884
Closes-Bug: #1337230
|
|
This ensures that:
* rabbit.nodes is a list of all control nodes
* rabbit_hosts in OS config files points to all nodes in the
  rabbitmq cluster
* overcloud control nodes are joined into a cluster
This works for both single and multiple control nodes, and it's needed
for scaling out control nodes.
The rabbit.nodes property is very similar to the generated list of all
hosts, so it uses the same StructuredConfig block. This block (and a
couple of references) is renamed to allNodesConfig to make it more general.
Related to blueprint tripleo-icehouse-ha-production-configuration
Change-Id: Ice1a34ba7a52c41c1bb0c63350438971c651e7b6
|
|
Then feed it in through separate deployments. This reduces the exponential
growth of calculating the entire list for every server.
Change-Id: Ib1187eabeb91b46e29ddcf5065056e43a69bb2a0
|
|
This change makes the glance protocol and port configurable
via the Heat template. Presently the port is hard-coded in the
elements' nova.conf file, and the protocol is assumed to be
the default (http).
This change will allow glance_api_servers
to be set (in nova.conf) using the constituent parts:
glance_protocol://glance_host:glance_port
The change to nova.conf to read this value is:
Idccc0d60c9f6b17a853c6de1bbea64bfc7e028b2
The default port value is set to the nova default (9292), which is
currently hard-coded in the elements' nova.conf file.
The default protocol value is set to the nova default (http).
Change-Id: I3c7218292797c62c36e2aaab4f325bf053ef140b
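A sketch of the parameters and the composed value; GlanceHost is a
hypothetical name used only to complete the example:

    GlanceProtocol:
      type: string
      default: http       # the nova default
    GlancePort:
      type: string
      default: '9292'     # the nova default
    # composed by the element as glance_protocol://glance_host:glance_port
    glance:
      protocol: {get_param: GlanceProtocol}
      host: {get_param: GlanceHost}
      port: {get_param: GlancePort}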
|
|
The control plane has to be up before the compute deployments can
work. By sequencing these we permit stopping the o-r-c scripts on
the overcloud rather than trying and failing to configure things.
It also reduces the total deploy time by front-loading control
plane configuration - Heat has some sequencing code which prevents
parallel instantiation of deployments, and the control plane bring-up
is the critical path for deploying OpenStack.
Change-Id: I0bb2f8ab41c4af1443af60f7547673d495e4e0fb
|
|
Updates the overcloud nova-compute templates so that
the NTP server is properly configured.
Change-Id: I4fc407153da5e031dcf5e5e5e1b3b74d932dba45
Partial-bug: #1309677
|
|
- Undercloud Ceilometer has to have access to the SNMPd credentials
  so that it can poll the Overcloud nodes.
- On every Overcloud node, we need to set the same credentials
  in snmpd.conf.
Change-Id: Icf7c0c1772b6380b7136108e61c15cafe17274ba
|
|
This provides a means for users to pass configuration through to the
machines they are deploying without us modelling that.
Change-Id: I7134eb0c6be2d5cb1795b2f03cfba4afb69dc837
blueprint: passthrough-config
|
|
This migrates the overcloud to using OS::Heat::StructuredConfig and
OS::Heat::StructuredDeployment. With those tools, we can decouple
servers from software configuration and begin to deprecate features in
tripleo_heat_merge.
Change-Id: Ice85f0711e90d0fabf1d1bc4698201c4d6758508
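The basic decoupling pattern is a config resource bound to a server
only through a deployment resource - a minimal sketch, not the actual
template:

    notcomputeConfig:
      type: OS::Heat::StructuredConfig
      properties:
        config:
          nova:
            compute_driver: libvirt.LibvirtDriver   # illustrative key only
    notcomputeDeploy:
      type: OS::Heat::StructuredDeployment
      properties:
        config: {get_resource: notcomputeConfig}
        server: {get_resource: notcompute}          # server attached only here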
|
|
Updates all references for notCompute and notcompute
to use 'controller' instead.
Change-Id: I70ef83f35064ab388bdc7e1a6da62b6585580010
Partial-bug: #1300324
|
|
Updated heat templates to default to preserving the ephemeral partition
on a heat stack update. This allows the undercloud and overcloud to be
re-imaged/updated with state preserved.
Change-Id: I6626af48d8c55672022a46fe48e5725ad2619f0c
|
|
Capture some undercloud metadata into the overcloud compute
configs, so the overcloud nodes can pull updated images from
the undercloud glance. Unset by default, but able to be set
during stack-create or stack-update.
Change-Id: I2997d23c584055c40034827e9beb58e6542ea11c
|
|
Currently our wait conditions are racing with Heat resolving
configurations. There should be plenty of time but sometimes Heat may
be dealing with a temporarily problematic Nova API and spinning on that.
While that is happening, the in-instance tools will not have their full
configuration available to them. We don't want the wait condition timeout
to start until the box has had its actual config exposed to it.
Change-Id: I0eab8fe7547d3cbcebb1559fd3d06206b1750e96
|
|
- different Flavors for control, compute and storage nodes
- for devtest use, they default to 'baremetal', so nothing
changes
- for Tuskar, there is a possibility to have them different for
every role
Change-Id: I8c1b80f55a91c7a7fd5e560ccdb8da82ec374084
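The per-role parameters would be along these lines (the names are
assumptions; the default comes from the message above):

    OvercloudControlFlavor:
      type: string
      default: baremetal
    OvercloudComputeFlavor:
      type: string
      default: baremetal
    OvercloudStorageFlavor:
      type: string
      default: baremetal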
|
|
Username is currently assumed to be guest in the configuration
files. This change makes it more explicit.
Configuration files in tripleo-image-elements will be updated
to use this parameter in an upcoming patch.
Change-Id: Ia176f4d573a3a293560c72236a4181befa678301
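Sketched as a parameter (the exact name is an assumption; the default
reflects the current guest behaviour):

    RabbitUserName:
      type: string
      default: guest
      description: The username for RabbitMQ.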
|
|
This makes it possible to have SSL connections to APIs from compute
hosts with no DNS or external connectivity - something the
ci-overcloud has.
Change-Id: I089ef8fdfb4a59279f09bf3cd2a4474000e4bfa6
|
|
This patch makes it possible to set up physical networks for VMs,
separate from the control plane configuration which is needed for
routed/NATted access to physical networks.
Future work is needed to automate the ci-overcloud configuration of the
control plane, where we need two distinct bridges, but this is enough
to stop folk dying of boredom setting up a sizeable ci-overcloud.
Change-Id: I6ac7129f22bb797467adb0408638781d20081f19
|
|
This is needed to allow configuring the template in
I9fa923b63033edb694720bfe5fc756a7c0fbfd2a.
Change-Id: I65810db156cb3d93291ac56fcf96e3ed2c87e1b2
|
|
This is complete as far as it goes, but it isn't enough to make running
a scaled-out control plane actually work. Specifically, the constructs
to point at API hosts based on looking up a network address aren't
suitable for scale-out - we need to be using the virtual IP, DNS
round robin, or other such resilient configurations, but that is
largely / entirely orthogonal to making the template ready for
scaling.
Change-Id: Ib9e6db5e7d5db84e4746afdabea046d2b8702bbb
|
|
This uses the new merge feature from earlier in this series.
Exporting COMPUTESCALE before running make will build a different
template. Note that since Make doesn't depend on variable values, you
need to delete overcloud.yaml between building with different scales.
Change-Id: If05b99ae3596bcc794e3a899ab1443aeb14ec754
|
|
Change-Id: I205bb2c0bb7c9b956fd3e0d6b266bdf5afb48864
|
|
This is the first step towards preserving state on stack updates when the image
id has changed.
I chose REPLACE as the default value because that is the current behaviour and
we can override it from the command line.
Change-Id: I64eab51892922ab51a89a9f389457fd1ed979fb2
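A sketch of how such a parameter plugs into OS::Nova::Server, whose
image_update_policy property accepts REPLACE among other values (the
parameter name here is hypothetical):

    parameters:
      ImageUpdatePolicy:
        type: string
        default: REPLACE   # the current behaviour, per the message above
    resources:
      NovaCompute0:
        type: OS::Nova::Server
        properties:
          image_update_policy: {get_param: ImageUpdatePolicy}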
|
|
We have seen situations where nova-compute is not ready when notcompute
has run its waitcondition. That leads to errors where we fail to boot
instances until there is at least one nova-compute available.
We also update nova-compute-instance.yaml so that it continues to work
stand-alone.
Change-Id: Iadea7a34e2cd4576cc78659b99c12e1041af5b45
|
|
o Adds the required swift metadata (in swift-source.yaml).
o Sets up glance to use the swift backend on the overcloud.
o Sets up glance to use the file backend on the undercloud and seed,
  i.e. maintains the status quo.
Change-Id: I4a70ffbf9c51f1fea5cfc84d8718d3d30d36b3f2
|
|
Change-Id: I3a84cf52cc46f0c338319a046d77edb2a9b29c45
|
|
The overcloud compute node makes requests to the Neutron API
and requires the quantum_admin_password option in nova.conf
to be set (it is defined in the nova image element as
quantum_admin_password={{neutron.service-password}}).
Without this, booting a user instance in the overcloud
fails because the nova-compute service can't authorize
requests to the Neutron API.
Change-Id: Ie726d0c3d54abc6c24a45fde3f5af03fd2cf9e37
|