Age | Commit message | Author | Files | Lines |
|
Add a new roles data YAML file and environment to help
create the undercloud via t-h-t.
Partially-implements: blueprint heat-undercloud
Change-Id: I36df7fa86c2ff40026d59f02248af529a4a81861
|
|
Heat now supports release name aliases, so we can replace
the inconsistent mix of date-related versions with one consistent
version that aligns with the supported version of heat for this
t-h-t branch.
This should also help new users who sometimes copy/paste old templates
and discover intrinsic functions in the t-h-t docs don't work because
their template version is too old.
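For example, a template on this branch would now declare the release alias
instead of a date-based version; a minimal illustrative line (the alias must
match the Heat release supported by the branch):
  heat_template_version: ocata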
Change-Id: Ib415e7290fea27447460baa280291492df197e54
|
|
|
|
This enables the deployer to dynamically add nova metadata to the
servers based on the output of service profiles that implement the
metadata_settings key in the role_data output for the profiles.
One can set an implementation via the OS::TripleO::ServerMetadataHook
resource, which currently defaults to OS::Heat::None, so if left
untouched it does nothing.
Besides the metadata_settings list, this hook also takes the name of
the node the metadata is being set for.
This is useful for nova vendordata plugins that can parse said metadata.
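A deployer could wire in an implementation via the resource registry along
these lines (the hook template path is hypothetical):
  resource_registry:
    OS::TripleO::ServerMetadataHook: /home/stack/templates/metadata_hook.yaml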
Change-Id: I8a937f711f0b90156fbb6c4632760435ef846474
|
|
In order to call commands that need to be run on a single node, we
create a new per-service variable that will contain the first node of
each role containing the service.
Change-Id: I03e8685f939e8ae1fcd8b16883b559615042505d
Partial-Bug: #1615983
|
|
|
|
For some upgrade scenarios, e.g. all-in-one deployments, it may
be possible to run the upgrade steps, then apply puppet in one
stack update, so reverse the order here. For normal deployments
the upgrade steps are mapped to OS::Heat::None so this will have
no effect.
Partially-Implements: blueprint overcloud-upgrades-per-service
Change-Id: I3c78751349a6ac2bc5dff82f67bffe13750ac21c
|
|
This patch adds a new type called
OS::TripleO::Network::Ports::ControlPlaneVipPort.
This defaults to a normal OS::Neutron::Port resource but can
be mocked out for some implementations, such as when installing
the undercloud, where Neutron doesn't exist.
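For example, such an environment could remap the new type to a noop port
implementation (sketch only; the exact noop template path may differ):
  resource_registry:
    OS::TripleO::Network::Ports::ControlPlaneVipPort: network/ports/noop.yaml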
Change-Id: Iebf2428432a98a9d789b206ce973599adbc0af8f
|
|
|
|
This shows how we could wire in the upgrade steps using Ansible,
as was previously proposed e.g. in https://review.openstack.org/#/c/321416/,
but more closely integrated with the new composable services
architecture.
It's also very similar to the approach taken by SpinalStack, where
per-service Ansible snippets were combined and then run in a series of
steps using Ansible tags.
This patch just enables upgrade of keystone - we'll add support for
other services in subsequent patches.
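As a rough sketch of the shape of such a per-service snippet (the key name,
service and step tag shown here are illustrative, not the final interface):
  upgrade_tasks:
    - name: Stop keystone (running under httpd)
      service: name=httpd state=stopped
      tags: step2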
Partially-Implements: blueprint overcloud-upgrades-per-service
Change-Id: I39f5426cb9da0b40bec4a7a3a4a353f69319bdf9
|
|
This patch moves the t-i-e element code for hosts configuration
into a t-h-t shell script that gets driven by an os-collect-config
script hook.
This helps accomplish several goals:
- moves us away from t-i-e
- gives us better signal handling in the error case (where the
previous element relied on 99-refresh-completed)
- allows the t-h-t undercloud installer to more easily consume this,
since it doesn't rely on the old os-apply-config metadata (which
that installer doesn't support).
Change-Id: I73c3d4818ef531a3559fab272521f44519e2f486
|
|
This patch drops use of the vip-hosts.yaml service, which can
cause issues during deployment because puppet 'hosts' resources
overwrite the data in /etc/hosts. The only reason things seem to work
at all at the moment is because our hosts element in t-i-e runs
on each os-refresh-config iteration and re-adds the dropped hosts
entries.
To work around the issue we add a conditional which selectively
adds the extra hosts entries only if AddVipsToEtcHosts is set
to true.
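One way to express such a switch in a Heat template, as a minimal sketch
(the actual change may implement the conditional differently):
  parameters:
    AddVipsToEtcHosts:
      type: boolean
      default: true  # default shown here is illustrative
  conditions:
    add_vips_to_etc_hosts: {equals: [{get_param: AddVipsToEtcHosts}, true]}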
Closes-bug: 1645123
Change-Id: Ic6aaeb249a127df83894f32a704219683a6382b2
|
|
This change modifies the template interface to support containers and
converts the compute services to composable roles.
Co-Authored-By: Dan Prince <dprince@redhat.com>
Co-Authored-By: Flavio Percoco <flavio@redhat.com>
Co-Authored-By: Martin André <m.andre@redhat.com>
Co-Authored-By: Steve Baker <sbaker@redhat.com>
Change-Id: I82fa58e19de94ec78ca242154bc6ecc592112d1b
|
|
This is currently wrong: it should loop to create a list for the
depends_on, not emit multiple depends_on statements.
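A sketch of the corrected pattern, building one list in the j2 loop
(resource names are illustrative):
  depends_on:
{% for role in roles %}
    - {{role.name}}Deployment
{% endfor %}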
Note this was first corrected in https://review.openstack.org/#/c/330659/
but we need it as a standalone patch that can be backported.
Change-Id: I4d1d6346f2147e573fc0900038f1ad1d782e75ee
Closes-Bug: #1642069
|
|
|
|
Modify the syntax used to access the ResourceGroup attributes so we
always select the first node from the group, e.g. even if the node
named "0" in the ResourceGroup nested stack has been removed due to
the removal policy.
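As a sketch of the kind of expression this enables (resource and attribute
names are illustrative), taking the first entry from the group-wide
attribute list rather than addressing the member named "0":
  bootstrap_nodeid:
    yaql:
      expression: $.data.first()
      data: {get_attr: [ControllerServers, hostname]}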
Change-Id: I8b1c9538976a1518b220187a0034ad41a738d5a6
Closes-Bug: #1640449
|
|
These VIPs were previously used to create endpoints, but are no longer
used. The one exception is KeystoneAdminVip, which is used by the
python-client.
Closes-Bug: 1639956
Change-Id: Iafdf37b6ee91806d683592a99e025a8de4c0ff20
Signed-off-by: Tim Rozet <trozet@redhat.com>
|
|
Parameter merge strategies work by merging multiple environment
files, which doesn't take into account the defaults defined in the heat
templates. Moving where we define these defaults enables the merge
strategies applied when appending services to roles in environment files
to work.
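For example, with the defaults defined in an environment file, an extra
environment can append to a role's service list using a merge strategy
(service name hypothetical):
  parameter_defaults:
    ControllerServices:
      - OS::TripleO::Services::MyExtraService
  parameter_merge_strategies:
    ControllerServices: merge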
Change-Id: I1ef1ad685c8a15308d051665c576a98b277f2496
Closes-Bug: #1635409
|
|
|
|
Adds new puppet-specific services for Mistral
API and Mistral Engine.
This submission enables the mistral service by default in the
overcloud; a follow-up submission will disable it and make it
optional, enabling it on demand via an environment file.
Depends-On: Iae42ffa37c4c9b1e070b7c3753e04c45bb97703f
Depends-On: I942d419be951651e305d01460f394870c30a9878
Depends-On: I6cb2cbf4a2abf494668d24b8c36b0d525643f0af
Implements: blueprint composable-services-within-roles
Co-Authored-By: Carlos Camacho <ccamacho@redhat.com>
Change-Id: Id5ff9cb498b5a47af38413d211ff0ed6ccd0015b
|
|
Co-Authored-By: Juan Antonio Osorio Robles <jaosorior@redhat.com>
Depends-On: If2804b469eb3ee08f3f194c7dd3290d23a245a7a
Depends-On: I091ecfbcb2e38fe77203244ac7a597aedcb558fb
Change-Id: Iacc504fc4fa2d06893917024ce2340d3fb80b626
|
|
This makes sure that the Host settings for all deployments are finished
before starting the AllNodesDeployments which execute puppet.
Change-Id: Ibe604472255ce905ca2c1dca2a9b07a6f8f40e47
Related-bug: #1633565
|
|
This patch moves the hosts configuration into its own deployment.
It will continue to use os-apply-config as something that is
required early on in the bootstrapping (it needs to be
configured before puppet runs for example).
The motivation here is so we can refactor all-nodes-config.yaml to use a
new hiera hook that avoids os-apply-config entirely.
Change-Id: Ib3e4380f205358b27d22a1102b663cf300b1ed86
Partial-bug: #1596373
|
|
|
|
Closes-Bug: #1631277
Change-Id: I126b3ed2afdf03ffabb7e57f8792b9f7ecc06a09
|
|
Otherwise there may be a race between updating the hieradata
and running the UpdateWorkflow.
Change-Id: I22cd893e0db3df6d39504fbd61d7d9024cebb1c5
Related-Bug: 1631297
|
|
|
|
In the great rebase following the JINJA ALL THE THINGS changes we lost
critical functionality in the fluentd client service. This review
restores the missing features.
Change-Id: I7c23f16f81e75f3da6a24587b2eb8385b3e920a4
Closes-bug: 1630692
|
|
Co-Authored-By: Juan Antonio Osorio Robles <jaosorior@redhat.com>
Depends-On: Ic6fec1057439ed9122d44ef294be890d3ff8a8ee
Change-Id: I754c4a41d8a294a4c7c18bd282ae014efd4b9b16
Closes-Bug: #1628521
|
|
These hard-coded references to the Controller role mean that
things won't work if the keystone service is moved to any other
role, so we need to generate the lists dynamically based on the
enabled services for each role.
Change-Id: I5f1250a8a1a38cb3909feeb7d4c1000fd0fabd14
Closes-Bug: #1629096
|
|
As noted in the bug, predictable placement is broken right now
because the %index% in the scheduler hint isn't being interpolated.
This is because the parameter was moved from overcloud.yaml to the
service-specific files, which doesn't provide the index value.
Because the Compute role's parameter is named NovaCompute... we also
have to include some backwards compatibility logic to handle the
mismatch.
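For reference, predictable placement relies on hints along these lines,
with %index% substituted per node (values illustrative):
  parameter_defaults:
    NovaComputeSchedulerHints:
      'capabilities:node': 'compute-%index%'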
Change-Id: Ibee2949fe4c6c707203d7250e2ce169c769b1dcd
Closes-Bug: 1627858
|
|
|
|
This patch moves the keystone::auth settings for all
services into the new service_config_settings section. This
is important because we execute the keystone commands via
puppet only on the role containing the keystone service
and without these settings it will fail.
Note that yaql merging/filtering is used here to ensure that
service_config_settings is optional in service templates,
and also that we'll only deploy hieradata for a given
service on a node running the service (the key in
the service_config_settings map must match the service_name
in the service template for this to work).
e.g. the following will result in only deploying keystone: 123
in hiera on the role running the "keystone" service,
regardless of which service template defines it.
  service_config_settings:
    keystone:
      keystone: 123
Co-Authored-By: Steven Hardy <shardy@redhat.com>
Change-Id: I0c2fce037a1a38772f998d582a816b4b703f8265
Closes-bug: 1620829
|
|
This was missed during custom-roles work, and will mean deployments
break if any of the existing roles are removed from roles_data.yaml
Change-Id: Ia737b48a0dd272f8d706b7458764201fa47cb0bb
Closes-Bug: #1625755
|
|
The previous logic left out the default Count completely when it was
zero, which breaks nested validation, and similar problems likely
exist with the other optional defaults. So rework it so the
defaulting happens in the jinja2 logic, and document the interfaces
better in roles_data.yaml.
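The generated per-role parameter can then carry the default itself, roughly
along these lines (sketch):
  {{role.name}}Count:
    type: number
    default: {{role.CountDefault|default(0)}}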
Change-Id: I7f2eb4a3a0b43c5d2cd0d001ed3c73f783c95c74
Closes-Bug: #1625760
|
|
|
|
This implements support for installing fluentd agents as a composable
service on the overcloud.
Depends-On: I2e1abe4d8c8359e56ff626255ee50c9cacca1940
Implements: tripleo-opstools-centralized-logging
Change-Id: I23b0e23881b742158fcfb6b8c145a3211d45086e
|
|
This adjusts the interface to OS::TripleO::AllNodesExtraConfig so
it supports custom/composable/optional roles.
Note this does break backwards compatibility, and I can't see any way
to avoid that. I've converted the in-tree templates, and we'll have
to document carefully and/or provide a script (or automated conversion
via mistral perhaps?) to allow folks to easily adjust any out-of-tree
templates to the new format.
Basically you just have to:
1. Remove all the *_servers parameters, replacing them with one "servers"
json parameter
2. Replace references to e.g. "controller_servers" with "servers, Controller",
which does a path-based lookup into the json map provided by overcloud.yaml
(see the sketch below)
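A minimal illustration of the change (property names are illustrative):
  # old interface
  controller_servers: {get_param: controller_servers}
  # new interface, path-based lookup into the servers json map
  servers: {get_param: [servers, Controller]}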
Change-Id: I5eebf853646b2f6300d6b542fcd4f43e82d3b413
Partially-Implements: blueprint custom-roles
|
|
We need to remove the hard-coded roles from overcloud.j2.yaml
as now it's valid to e.g. remove BlockStorage completely.
The previous behavior for the per-role upgrade scripts is maintained
but we'll need to rework this for newton->ocata upgrades where we
can no longer be sure the servers mapping will contain all roles.
Change-Id: I25e6c84757e3c00fba2aae834cd8206c62e44acf
Partially-Implements: blueprint custom-roles
|
|
Refactor so the post-deploy steps recently moved into
puppet/post.yaml are generated by jinja2 instead of being hard-coded.
Change-Id: I488e46aaa449c95571bd3d1de9513c3d0730baf3
Partially-Implements: blueprint custom-roles
|
|
|
|
|
|
To support custom roles we need to generate these lists of role
specific data.
Change-Id: Ide97cd57d1c07f7f7ff260ff7a6bbe2b71753bd0
Partially-Implements: blueprint custom-roles
|
|
This moves the now nearly identical group resources inside the loop;
there's a FIXME related to some deprecated compute parameters we'll
need to work around.
Change-Id: Iddd63c42754867125e65e7721ab9d9f46f4d6afb
Partially-Implements: blueprint custom-roles
|
|
Change-Id: I8fc855833e8c602e94d0e8b330a713de1c98f901
|
|
This patch add support for deploying Ceph RGW.
Co-Authored-By: Giulio Fidente <gfidente@redhat.com>
Change-Id: I88c8659a36c2435834e8646c75880b0adc52e964
|
|
These are identical for all roles, so move them into the per-role
loop
Partially-Implements: blueprint custom-roles
Change-Id: Id85b830a0e225912a3ea8c8b17a11fc424f68bb0
|
|
These are identical for all roles, so move them into the per-role
loop
Partially-Implements: blueprint custom-roles
Change-Id: I0a9918d5a2e9a73fe3ac68a96bdee02e95799bc1
|
|
This is the first step of generating the Service chain resources via j2;
we'll then incrementally convert other resources to be created
in a similar way.
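Roughly, the generated per-role resources take this shape (a sketch; details
in the real template may differ):
{% for role in roles %}
  {{role.name}}ServiceChain:
    type: OS::TripleO::Services
    properties:
      Services: {get_param: {{role.name}}Services}
      ServiceNetMap: {get_param: ServiceNetMap}
      EndpointMap: {get_param: EndpointMap}
{% endfor %}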
Partially-Implements: blueprint custom-roles
Depends-On: I81239991f36ed5f6453184bf9cffe930832cb68b
Change-Id: Iafa9b2afddf18a5a9833ec472a552fb256338b38
|