|
When implementing custom roles, we lost an implicit dependency that
ensured AllNodesExtraConfig is applied before AllNodesDeploySteps,
which causes problems if you need to write hieradata via the
AllNodesExtraConfig hook (some Cisco integrations we have in tree
do this, and are now broken because the ordering is no longer ensured).
Change-Id: Ie78ecbb4e135ab7f196867ef9d8d271049a9cd10
Closes-Bug: #1687597
(cherry picked from commit 4efc067a7e2965fc7a07eb05b019d0e3e8160606)
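A minimal sketch of how the ordering can be made explicit in overcloud.j2.yaml (the resource names follow the message; the type and properties shown are assumptions):

    AllNodesDeploySteps:
      type: OS::TripleO::PostDeploySteps
      depends_on:
        - AllNodesExtraConfig    # guarantees the hieradata hooks run first
      properties:
        servers: {get_param: servers}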
|
|
Fetch the host public keys from each node, combine them all and write them
to the system-wide SSH known hosts file. The alternative of disabling host
key verification is vulnerable to a MITM attack.
Change-Id: Ib572b5910720b1991812256e68c975f7fbe2239c
(cherry picked from commit 7d3552a105ad5aa62cad0998c11df5ec6bd06ed6)
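A sketch of the write side, assuming the combined key list arrives as a deployment input (names and wiring are illustrative, not the exact templates):

    SshKnownHostsConfig:
      type: OS::Heat::SoftwareConfig
      properties:
        group: script
        inputs:
          - name: known_hosts    # combined host keys from all nodes
        config: |
          #!/bin/bash
          echo "${known_hosts}" > /etc/ssh/ssh_known_hosts
          chmod 0644 /etc/ssh/ssh_known_hosts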
|
|
This patch again removes the hard-coded role references from the
overcloud.yaml template that were added in
fd15a091f7ab6927833275df17b96ecacc2b1827. They
break the composable undercloud work (and the undercloud-containers CI
job as well).
Change-Id: Ie30b2573dc4d2b45ebc0afc0e0d73bfdf41e4d4b
Closes-bug: #1676528
(cherry picked from commit f7f1a8a6d8cfd4c78ffd256497b32daa5908641e)
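The general shape of the jinja2 approach that replaces hard-coded role names (an illustrative excerpt, not the exact template):

    {% for role in roles %}
      {{role.name}}ServiceChain:
        type: OS::TripleO::Services
        properties:
          Services: {get_param: {{role.name}}Services}
    {% endfor %}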
|
|
When replacing the controller node with resource id 0,
AllNodesValidation will fail because there is a hardcoded reference
to resource.0. With this commit the id for validation is extracted
dynamically with a yaql query, picking the first available one.
Thanks to Steven Hardy for pointing in the right direction.
Change-Id: I8f2bfacbc005d948bd31ebd51c3d3df3182d5a3c
Closes-Bug: #1673439
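A sketch of the yaql extraction (attribute names are illustrative):

    first_node_id:
      yaql:
        expression: $.data.ids.first()    # first available member, not literal "0"
        data:
          ids: {get_attr: [Controller, ID]}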
|
|
As of Ocata, whenever Heat needs to get the value of an output from a
nested Stack it will still load the Stack in memory and re-resolve the
output value. This means that the EndpointMap's endpoint_map output, which
is huge, gets loaded and recalculated whenever showing the EndpointMap or
KeystoneUrl outputs of the main (overcloud) stack. To avoid this, store the
value locally in an OS::Heat::Value resource. This means that the
EndpointMap will only be resolved once, during the stack create/update, and
the outputs can refer to that value.
Related-Bug: #1661728
Change-Id: Ia79eceeea309f5508713a310849f5d366a035430
Depends-On: If0f80cab94c28514d1569b1025362ab9d9d31512
(cherry picked from commit b2ee58c7f6883011b4ba8b387eedc63d3600aea0)
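Sketch of the caching pattern, following the message (surrounding template omitted):

    resources:
      EndpointMapValue:
        type: OS::Heat::Value
        properties:
          type: json
          value: {get_attr: [EndpointMap, endpoint_map]}   # resolved once, at create/update
    outputs:
      EndpointMap:
        value: {get_attr: [EndpointMapValue, value]}       # cheap lookup, no re-resolution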
|
|
This delivers a /root/tripleo_upgrade_node.sh to those nodes
that have the disable_upgrade_deployment flag set to true.
They will later be upgraded manually by the operator who will
invoke the script delivered here using upgrade-non-controller.sh
We can also deliver any service specific upgrade configuration,
such as configuring nova-compute to use the placement API as this
is required in order for placement to be configured and installed
during the subsequent upgrade steps for controller services.
This removes the compute and swift specific upgrade scripts as
they are now merged into the common
tripleo_upgrade_node.sh - removing any hard-coded
reference to a particular role name (compute/objectstorage) and
relying only on the disable_upgrade_deployment flag in roles_data.yaml.
Change-Id: I4531a4038b78087ef4a1a62c35f1328822427817
Co-Authored-By: Mathieu Bultel <mbultel@redhat.com>
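Hedged example of the flag in roles_data.yaml (other role keys omitted):

    - name: Compute
      disable_upgrade_deployment: True   # delivers /root/tripleo_upgrade_node.sh instead
      ServicesDefault:
        - OS::TripleO::Services::NovaCompute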
|
|
Where the role has disabled upgrades, we need to skip both the ansible and
puppet steps. To do this we refactor the post.j2.yaml so that it can be
included in the upgrade template with an adjusted list of roles.
Note this requires https://review.openstack.org/#/c/425220/ - that
change will be required for local testing of this patch
(run mistral-db-manage populate after updating tripleo-common
and restart the mistral services, or update your repos and re-run
openstack undercloud install).
Partially-Implements: blueprint overcloud-upgrades-per-service
Change-Id: Ie7d0fa6fef3528bd93e6cde076b964ea8de3185a
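One plausible way for the shared template to skip roles with upgrades disabled; this is a sketch, not the exact refactoring, and the registry name is assumed:

    {% for role in roles if not role.disable_upgrade_deployment|default(false) %}
      {{role.name}}UpgradeConfig:
        type: OS::TripleO::UpgradeConfig
    {% endfor %}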
|
|
files/partitions
This submission:
- Fix an error in the AllNodesExtraConfig resource
(servers can't be merged multiple times).
- Add environment files to deploy a swap file/partition
without manually editing the templates.
- Mounting a swap partition that is not available makes
the deployment fail; the fix checks whether the partition
was created and, if not, lets the deployment continue.
- Remove extra empty lines in the swap templates.
- Adjust descriptions and remove unnecessary comments in
the swap templates.
Closes-Bug: 1652184
Change-Id: I828bbbbd4c178956aac74af49f80fcd4f62fa16b
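Hypothetical shape of one of the added environment files (the path and parameter name are illustrative):

    resource_registry:
      OS::TripleO::AllNodesExtraConfig: ../extraconfig/all_nodes/swap.yaml
    parameter_defaults:
      swap_size_megabytes: 4096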
|
|
Add a new roles data YAML file and environment to help
create the undercloud via t-h-t.
Partially-implements: blueprint heat-undercloud
Change-Id: I36df7fa86c2ff40026d59f02248af529a4a81861
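An illustrative minimal role entry for such a file (service list abridged):

    - name: Undercloud
      CountDefault: 1
      disable_constraints: True
      ServicesDefault:
        - OS::TripleO::Services::Keystone
        - OS::TripleO::Services::Ntp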
|
|
Heat now supports release name aliases, so we can replace
the inconsistent mix of date-based versions with one consistent
version that aligns with the supported version of heat for this
t-h-t branch.
This should also help new users who sometimes copy/paste old templates
and discover intrinsic functions in the t-h-t docs don't work because
their template version is too old.
Change-Id: Ib415e7290fea27447460baa280291492df197e54
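The change in a template header, for illustration:

    # previously: a date-based version, varying between files
    #   heat_template_version: 2016-10-14
    # now: one release alias across the branch
    heat_template_version: ocata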
|
|
This enables the deployer to dynamically add nova metadata to the
servers based on the output of service profiles that implement the
metadata_settings key in the role_data output for the profiles.
One can set an implementation via the OS::TripleO::ServerMetadataHook
resource, which currently defaults to OS::Heat::None, so if left
untouched it does nothing.
Besides the metadata_settings list, this hook also takes the name
of the node that it's setting the metadata for. This is useful for
nova vendordata plugins that can parse said metadata.
Change-Id: I8a937f711f0b90156fbb6c4632760435ef846474
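The default no-op mapping, plus the interface a real hook implementation would accept (the parameter names below are assumptions based on the message):

    resource_registry:
      OS::TripleO::ServerMetadataHook: OS::Heat::None

    # a hook template would take roughly:
    parameters:
      metadata_settings:   # list aggregated from the service profiles
        type: json
      NodeName:            # assumed name for the per-node parameter
        type: string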
|
|
In order to call commands that need to be run on a single node, we
create a new per-service variable that will contain the first node of
each role containing the service.
Change-Id: I03e8685f939e8ae1fcd8b16883b559615042505d
Partial-Bug: #1615983
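A generic sketch of extracting the first node of a role with yaql (resource and attribute names are illustrative):

    value:
      yaql:
        expression: $.data.nodes.first()
        data:
          nodes: {get_attr: [ControllerServers, name]}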
|
|
For some upgrade scenarios, e.g all-in-one deployments, it may
be possible to run the upgrade steps, then apply puppet in one
stack update, so reverse the order here. For normal deployments
the upgrade steps are mapped to OS::Heat::None so this will have
no effect.
Partially-Implements: blueprint overcloud-upgrades-per-service
Change-Id: I3c78751349a6ac2bc5dff82f67bffe13750ac21c
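The no-op mapping mentioned above (registry key assumed from context):

    resource_registry:
      OS::TripleO::UpgradeSteps: OS::Heat::None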
|
|
This patch adds a new type called:
OS::TripleO::Network::Ports::ControlPlaneVipPort
This defaults to a normal OS::Neutron::Port object but can
be mocked out for some implementations like when installing
the undercloud where neutron doesn't exist.
Change-Id: Iebf2428432a98a9d789b206ce973599adbc0af8f
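Default and mocked-out mappings (the noop template path is illustrative):

    resource_registry:
      # default: a real neutron port
      OS::TripleO::Network::Ports::ControlPlaneVipPort: OS::Neutron::Port
      # undercloud: no neutron available, so map this to a noop template
      # instead, e.g. ../network/ports/noop.yaml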
|
|
This shows how we could wire in the upgrade steps using Ansible
as was previously proposed e.g in https://review.openstack.org/#/c/321416/
but it's more closely integrated with the new composable services
architecture.
It's also very similar to the approach taken by SpinalStack where
ansible snippets per-service were combined then run in a series of
steps using Ansible tags.
This patch just enables upgrade of keystone - we'll add support for
other services in subsequent patches.
Partially-Implements: blueprint overcloud-upgrades-per-service
Change-Id: I39f5426cb9da0b40bec4a7a3a4a353f69319bdf9
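The shape of a per-service snippet combined into tagged steps (the task content is illustrative):

    upgrade_tasks:
      - name: Stop keystone
        tags: step1
        service: name=httpd state=stopped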
|
|
This patch moves the t-i-e element code for hosts configuration
into a t-h-t shell script that gets driven by a os-collect-config
script hook.
This helps accomplish several goals:
- moves us away from t-i-e
- gives us better signal handling in the error case (where the
previous element relied on 99-refresh-completed)
- Allows the t-h-t undercloud installer to more easily consume this
since it doesn't rely on the old os-apply-config metadata (which
that installer doesn't support).
Change-Id: I73c3d4818ef531a3559fab272521f44519e2f486
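Sketch of the script-hook wiring (resource and file names illustrative):

    HostsConfig:
      type: OS::Heat::SoftwareConfig
      properties:
        group: script                            # handled by the heat-config script hook
        config: {get_file: scripts/hosts-config.sh}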
|
|
This patch drops use of the vip-hosts.yaml service which can
cause issues during deployment because puppet 'hosts' resources
overwrite the data in /etc/hosts. The only reason things seem to work
at all at the moment is because our hosts element in t-i-e runs
on each os-refresh-config iteration and re-adds the dropped hosts
entries.
To work around the issue we add a conditional which selectively
adds the extra hosts entries only if AddVipsToEtcHosts is set
to true.
Closes-bug: 1645123
Change-Id: Ic6aaeb249a127df83894f32a704219683a6382b2
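A sketch of the conditional using a Heat condition (the exact wiring in the template may differ):

    conditions:
      add_vips_to_etc_hosts: {equals: [{get_param: AddVipsToEtcHosts}, true]}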
|
|
This change modifies the template interface to support containers and
converts the compute services to composable roles.
Co-Authored-By: Dan Prince <dprince@redhat.com>
Co-Authored-By: Flavio Percoco <flavio@redhat.com>
Co-Authored-By: Martin André <m.andre@redhat.com>
Co-Authored-By: Steve Baker <sbaker@redhat.com>
Change-Id: I82fa58e19de94ec78ca242154bc6ecc592112d1b
|
|
This is currently wrong: it should loop to create a list for the
depends_on, not emit multiple depends_on statements.
Note this was first corrected in https://review.openstack.org/#/c/330659/
but we need it as a standalone patch that can be backported.
Change-Id: I4d1d6346f2147e573fc0900038f1ad1d782e75ee
Closes-Bug: #1642069
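The corrected form, for illustration - one depends_on whose value is a jinja2-generated list:

    depends_on:
    {% for role in roles %}
      - {{role.name}}AllNodesDeployment
    {% endfor %}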
|
|
Modify the syntax used to access the ResourceGroup attributes so we
always select the first node from the group, e.g even if the node
named "0" in the ResourceGroup nested stack has been removed due to
the removal policy.
Change-Id: I8b1c9538976a1518b220187a0034ad41a738d5a6
Closes-Bug: #1640449
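Illustration of the syntax change (the attribute name is illustrative):

    # fragile: breaks if the member literally named "0" was removed
    #   {get_attr: [ServerGroup, resource.0.hosts_entry]}
    # robust: take the first element of the aggregated attribute list
    value:
      yaql:
        expression: $.data.first()
        data: {get_attr: [ServerGroup, hosts_entry]}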
|
|
These VIPs were previously used to create endpoints, but are no longer
used. The one exception is KeystoneAdminVip, which is used by the
python-client.
Closes-Bug: 1639956
Change-Id: Iafdf37b6ee91806d683592a99e025a8de4c0ff20
Signed-off-by: Tim Rozet <trozet@redhat.com>
|
|
Parameter merge strategies are applied only when merging multiple
environment files; they don't consider the defaults defined in the heat
template. Moving these defaults into an environment file enables the
merge strategies applied when appending services to roles in environment
files to work.
Change-Id: I1ef1ad685c8a15308d051665c576a98b277f2496
Closes-Bug: #1635409
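With the defaults in an environment file, a merge strategy can combine appends from several environments, e.g. (a sketch):

    parameter_defaults:
      ControllerServices:
        - OS::TripleO::Services::CephMon
    parameter_merge_strategies:
      ControllerServices: merge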
|
|
Adds new puppet specific services for Mistral
API and Mistral Engine.
This submission enables the mistral service by default in the
overcloud; a follow-up submission will disable it and make it
optional, enabling it on demand via an environment file.
Depends-On: Iae42ffa37c4c9b1e070b7c3753e04c45bb97703f
Depends-On: I942d419be951651e305d01460f394870c30a9878
Depends-On: I6cb2cbf4a2abf494668d24b8c36b0d525643f0af
Implements: blueprint composable-services-within-roles
Co-Authored-By: Carlos Camacho <ccamacho@redhat.com>
Change-Id: Id5ff9cb498b5a47af38413d211ff0ed6ccd0015b
|
|
Co-Authored-By: Juan Antonio Osorio Robles <jaosorior@redhat.com>
Depends-On: If2804b469eb3ee08f3f194c7dd3290d23a245a7a
Depends-On: I091ecfbcb2e38fe77203244ac7a597aedcb558fb
Change-Id: Iacc504fc4fa2d06893917024ce2340d3fb80b626
|
|
This makes sure that the Host settings for all deployments are finished
before starting the AllNodesDeployments which execute puppet.
Change-Id: Ibe604472255ce905ca2c1dca2a9b07a6f8f40e47
Related-bug: #1633565
|
|
This patch moves the hosts configuration into its own deployment.
It will continue to use os-apply-config as something that is
required early on in the bootstrapping (it needs to be
configured before puppet runs for example).
The motivation here is so we can refactor all-nodes-config.yaml to use a
new hiera hook that avoids os-apply-config entirely.
Change-Id: Ib3e4380f205358b27d22a1102b663cf300b1ed86
Partial-bug: #1596373
|
|
Closes-Bug: #1631277
Change-Id: I126b3ed2afdf03ffabb7e57f8792b9f7ecc06a09
|
|
Otherwise there may be a race between updating the hieradata
and running the UpdateWorkflow.
Change-Id: I22cd893e0db3df6d39504fbd61d7d9024cebb1c5
Related-Bug: 1631297
|
|
In the great rebase following the JINJA ALL THE THINGS changes we lost
critical functionality in the fluentd client service. This review
restores the missing features.
Change-Id: I7c23f16f81e75f3da6a24587b2eb8385b3e920a4
Closes-bug: 1630692
|
|
Co-Authored-By: Juan Antonio Osorio Robles <jaosorior@redhat.com>
Depends-On: Ic6fec1057439ed9122d44ef294be890d3ff8a8ee
Change-Id: I754c4a41d8a294a4c7c18bd282ae014efd4b9b16
Closes-Bug: #1628521
|
|
These hard-coded references to the Controller role mean that
things won't work if the keystone service is moved to any other
role, so we need to generate the lists dynamically based on the
enabled services for each role.
Change-Id: I5f1250a8a1a38cb3909feeb7d4c1000fd0fabd14
Closes-Bug: #1629096
|
|
As noted in the bug, predictable placement is broken right now
because the %index% in the scheduler hint isn't being interpolated.
This is because the parameter was moved from overcloud.yaml to the
service-specific files, which don't provide the index value.
Because the Compute role's parameter is named NovaCompute... we also
have to include some backwards compatibility logic to handle the
mismatch.
Change-Id: Ibee2949fe4c6c707203d7250e2ce169c769b1dcd
Closes-Bug: 1627858
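How predictable placement uses the interpolated index, for reference:

    parameter_defaults:
      NovaComputeSchedulerHints:
        'capabilities:node': 'compute-%index%'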
|
|
This patch moves the keystone::auth settings for all
services into the new service_config_settings section. This
is important because we execute the keystone commands via
puppet only on the role containing the keystone service
and without these settings it will fail.
Note that yaql merging/filtering is used here to ensure that
service_config_settings is optional in service templates,
and also that we'll only deploy hieradata for a given
service on a node running the service (the key in
the service_config_settings map must match the service_name
in the service template for this to work).
e.g. the following will result in deploying keystone: 123 in hiera
only on the role running the "keystone" service, regardless of which
service template defines it:
  service_config_settings:
    keystone:
      keystone: 123
Co-Authored-By: Steven Hardy <shardy@redhat.com>
Change-Id: I0c2fce037a1a38772f998d582a816b4b703f8265
Closes-bug: 1620829
|
|
This was missed during custom-roles work, and will mean deployments
break if any of the existing roles are removed from roles_data.yaml
Change-Id: Ia737b48a0dd272f8d706b7458764201fa47cb0bb
Closes-Bug: #1625755
|
|
The previous logic left out the default Count completely when it was
zero, which breaks nested validation, and similar problems likely
exist with the other optional defaults. So rework it so that the
defaulting happens in the jinja2 logic, and document the interfaces
better in roles_data.yaml.
Change-Id: I7f2eb4a3a0b43c5d2cd0d001ed3c73f783c95c74
Closes-Bug: #1625760
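Sketch of defaulting in the jinja2 logic, so Count is emitted even when zero:

    {{role.name}}Count:
      type: number
      default: {{role.CountDefault|default(0)}}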
|
|
This implements support for installing fluentd agents as a composable
service on the overcloud.
Depends-On: I2e1abe4d8c8359e56ff626255ee50c9cacca1940
Implements: tripleo-opstools-centralized-logging
Change-Id: I23b0e23881b742158fcfb6b8c145a3211d45086e
|
|
This adjusts the interface to OS::TripleO::AllNodesExtraConfig so
it supports custom/composable/optional roles.
Note this does break backwards compatibility, and I can't see any way
to avoid that. I've converted the in-tree templates, and we'll have
to document this carefully and/or provide a script (or automated conversion
via mistral perhaps?) to allow folks to easily adjust any out-of-tree
templates to the new format.
Basically you just have to:
1. Remove all the *_servers parameters, replace with one "servers"
json parameter
2. Replace references to e.g. "controller_servers" with "servers, Controller",
which does a path-based lookup into the json map provided by overcloud.yaml
(see the sketch below)
Change-Id: I5eebf853646b2f6300d6b542fcd4f43e82d3b413
Partially-Implements: blueprint custom-roles
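Side-by-side sketch of the interface change:

    # before: one parameter per role, e.g.
    #   controller_servers: {type: json}
    # after: a single json map keyed by role name
    parameters:
      servers:
        type: json
    # lookups become path-based:
    #   {get_param: [servers, Controller]}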
|
|
We need to remove the hard-coded roles from overcloud.j2.yaml
as now it's valid to e.g remove BlockStorage completely.
The previous behavior for the per-role upgrade scripts is maintained
but we'll need to rework this for newton->ocata upgrades where we
can no longer be sure the servers mapping will contain all roles.
Change-Id: I25e6c84757e3c00fba2aae834cd8206c62e44acf
Partially-Implements: blueprint custom-roles
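The servers map can then be generated from whatever roles are enabled (the attribute path is illustrative):

    servers:
    {% for role in roles %}
      {{role.name}}: {get_attr: [{{role.name}}Servers, value]}
    {% endfor %}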
|
|
Refactor so that the post-deploy steps recently moved into
puppet/post.yaml are generated by jinja2 instead of being hard-coded.
Change-Id: I488e46aaa449c95571bd3d1de9513c3d0730baf3
Partially-Implements: blueprint custom-roles
|
|