In Newton, we used to construct the fqdn_$NETWORK hiera keys in
puppet-tripleo for external, internal_api, storage, storage_mgmt,
tenant, management, and ctlplane. When this was moved into THT, we
accidentally dropped external, which leads to deployment failures if a
service is moved to the external network and its configuration
consumes the fqdn_external hiera key.
Specifically, this is reproduced if MysqlNetwork is switched to
external: the deployment then fails because the bind address, which is
set from fqdn_external, is blank.
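A minimal reproducer sketch, assuming a ServiceNetMap override in an
environment file (the network assignment is illustrative):

    parameter_defaults:
      ServiceNetMap:
        MysqlNetwork: external  # bind address is then read from fqdn_external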
Change-Id: I01ad0c14cb3dc38aad7528345c928b86628433c1
Closes-Bug: #1697722
|
The existing host_config_and_reboot.role.j2.yaml was added in Ocata to
configure kernel args. This patch enhances that mechanism with the use
of role-specific parameters. The earlier method is deprecated and will
be removed in the Q release.
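A sketch of the role-specific parameter usage, assuming a DPDK compute
role whose parameters follow the {{role}}Parameters pattern (parameter
name and values are illustrative):

    parameter_defaults:
      ComputeOvsDpdkParameters:
        KernelArgs: "default_hugepagesz=1GB hugepagesz=1G hugepages=32 iommu=pt intel_iommu=on"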
Implements: blueprint ovs-2-6-dpdk
Change-Id: Ib864f065527167a49a0f60812d7ad4ad12c836d1
|
Instead of using the Heat condition directly on the Deployment
resources, use it to set the action list to an empty list when the
server is blacklisted.
This has a couple of advantages over the previous approach: the actual
resources are not deleted and recreated when servers are added to and
removed from the blacklist.
Recreating the resources can be problematic, as it would force the
Deployments to re-run when a server is removed from the blacklist. That
is likely not always desirable, especially in the case of
NetworkDeployment.
Additionally, you will still see the resources for a blacklisted server
in the stack, just with an empty set of actions. This has the benefit of
preserving the history of the previous time the Deployment was
triggered.
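A minimal sketch of the pattern, assuming a per-server condition named
server_not_blacklisted (resource and condition names illustrative):

    NetworkDeployment:
      type: OS::TripleO::SoftwareDeployment
      properties:
        config: {get_resource: NetworkConfig}
        server: {get_resource: NovaCompute}
        actions:
          if:
            - server_not_blacklisted
            - ['CREATE', 'UPDATE']
            - []  # blacklisted: the resource remains but does nothing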
Implements: blueprint disable-deployments
Change-Id: I3d0263a6319ae4871b1ae11383ae838bd2540d36
|
Replace the multiple SoftwareDeployment resources with a common
playbook that runs on all roles, consuming the configuration data
written via the HostPrepAnsible tasks.
This hopefully simplifies things, and will enable re-running the deploy
steps for minor updates (we'll need some way to detect that a container
should be replaced, but that will be done via a follow-up patch).
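A rough sketch of the idea, with an illustrative placeholder task (the
real playbook consumes the data written by the HostPrepAnsible tasks):

    - hosts: overcloud
      tasks:
        - name: Run the deploy steps common to all roles
          debug:
            msg: "Running deploy step {{ item }}"
          with_sequence: start=1 end=5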
Change-Id: I674a4d9d2c77d1f6fbdb0996f6c9321848e32662
|
Adds the ability to blacklist servers from all SoftwareDeployment
resources. The servers are specified in a new list parameter,
DeploymentServerBlacklist, by their Heat-assigned names
(overcloud-compute-0, etc.).
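For example, to exclude one compute node from all deployments (server
name illustrative):

    parameter_defaults:
      DeploymentServerBlacklist:
        - overcloud-compute-0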
Implements: blueprint disable-deployments
Change-Id: I46941e54a476c7cc8645cd1aff391c9c6c5434de
|
Master is now the development branch for Pike, so change the release
alias name accordingly.
Change-Id: I938e4a983e361aefcaa0bd9a4226c296c5823127
|
At scale, having all the os-collect-config instances check in at the
same time can cause performance problems. This change enables splay and
sets a default maximum random sleep of 30 seconds before each
os-collect-config poll.
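A sketch of the resulting collector configuration, assuming the value
ends up in os-collect-config's oslo.config file (the splay option comes
from the depends-on change; its placement here is an assumption):

    [DEFAULT]
    command = os-refresh-config
    splay = 30  # assumed option: max random sleep in seconds before polling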
Change-Id: Iab8b51f4e5fb4727b8aa7e081f5cbfcbf11f7fcb
Depends-On: I88f623c9e8db9ed4a186918206a63faec8f7f673
Closes-Bug: #1677314
|
Fetch the host public keys from each node, combine them all, and write
them to the system-wide SSH known hosts file. The alternative of
disabling host key verification is vulnerable to a MITM attack.
Change-Id: Ib572b5910720b1991812256e68c975f7fbe2239c
|
The server resource type, OS::TripleO::Server, can now be mapped per
role instead of globally. This allows users to mix baremetal
(OS::Nova::Server) and deployed-server (OS::Heat::DeployedServer) server
resources in the same deployment.
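For example, in the resource_registry (role names illustrative; shown
here as a type-to-type mapping sketch):

    resource_registry:
      OS::TripleO::ControllerServer: OS::Nova::Server
      OS::TripleO::ComputeServer: OS::Heat::DeployedServer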
Implements: blueprint pluggable-server-type-per-role
Change-Id: Ib9e9abe2ba5103db221f0b485c46704b1e260dbf
|
Prior to https://review.openstack.org/#/c/271450/ os-net-config was
applied via os-refresh-config directly, which meant that even though
UpdateDeployment and NetworkDeployment can be created concurrently,
we'd always do the os-net-config step first.
However, now that we apply both steps via scripts (which are both
handled by the same heat-config hook), we should add an explicit
dependency to ensure the network is always fully configured before
attempting to run any update. This should avoid the risk of, e.g.,
running an update on initial deployment before the network connectivity
needed to reach yum repos is in place.
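A minimal sketch of the ordering, with illustrative resource names:

    UpdateDeployment:
      type: OS::Heat::SoftwareDeployment
      # the network must be fully configured before any update runs
      depends_on: NetworkDeployment
      properties:
        config: {get_resource: UpdateConfig}
        server: {get_resource: NovaCompute}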
Change-Id: Idff7a95afe7b49b6384b1d0c78e76522fb1f8eb7
Related-Bug: #1666227
|
This adds the UpgradeInitCommonCommand for common newton..ocata
UpgradeInit commands. These run before the Ansible upgrade steps, so we
need to do things like remove the old Newton hieradata and install the
ansible-pacemaker module and the Ansible heat-agent plugin.
This defaults to '' and is set in the major-upgrade-composable-steps
environment file and unset in the major-upgrade-converge environment
file.
Change-Id: I0c7a32194c0069b63a501a913c17907b47c9cc16
|
Heat now supports release name aliases, so we can replace the
inconsistent mix of date-based versions with one consistent version
that aligns with the supported version of Heat for this t-h-t branch.
This should also help new users who sometimes copy/paste old templates
and discover intrinsic functions in the t-h-t docs don't work because
their template version is too old.
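For example, at the top of each template:

    heat_template_version: ocata  # replaces date versions such as 2016-10-14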
Change-Id: Ib415e7290fea27447460baa280291492df197e54
|
This enables the deployer to dynamically add nova metadata to the
servers, based on the output of service profiles that implement the
metadata_settings key in their role_data output.
An implementation can be set via the OS::TripleO::ServerMetadataHook
resource, which currently maps to OS::Heat::None, so with the default
implementation left untouched it does nothing.
Besides the metadata_settings list, this hook also takes the name of
the node it's setting the metadata for.
This is useful for nova vendordata plugins that can parse said metadata.
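A sketch of wiring in a custom hook; the template path is illustrative,
and the hook is expected to accept the metadata_settings list plus the
node name as parameters:

    resource_registry:
      # the default mapping is OS::Heat::None, i.e. a no-op
      OS::TripleO::ServerMetadataHook: /home/stack/templates/metadata-hook.yaml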
Change-Id: I8a937f711f0b90156fbb6c4632760435ef846474
|
Currently, when the docker environments are invoked, every node runs
the boot script which replaces os-collect-config with the heat-agents
container. This should only be happening on Compute nodes currently,
and each role will be converted to heat-agents one at a time.
This change implements a role-specific NodeUserData resource and uses
that mechanism to run docker/firstboot/install_docker_agents.yaml only
on Compute nodes.
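The role-specific mapping then only needs to be set for Compute, e.g.
(registry key pattern assumed from the description above):

    resource_registry:
      OS::TripleO::Compute::NodeUserData: docker/firstboot/install_docker_agents.yaml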
Change-Id: Id81811dbcaf0e661c3980aa25f3ca80db5ef0954
|
We can't run this during the upgrade steps, because some things need to
happen before any role configuration happens, e.g. installing the new
hiera heat-config hook, which must be done before e.g.
"ControllerDeployment" runs, or the stack update hangs.
Partially-Implements: blueprint overcloud-upgrades-per-service
Change-Id: I365b57513590662c3f78a33dc625747f457c48c5
|
We could already pass metadata to the nova server instances (on
creation) via the ServerMetadata parameter; however, there was no way
of doing this per role. This introduces that by adding a
{{role}}ServerMetadata parameter for each role. This parameter gets
merged with the ServerMetadata parameter, enabling role-specific
metadata.
Note that both default to {}, and so does the result of merging those
parameters with their default values. So nothing changes for the
default settings.
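For example (keys and values illustrative):

    parameter_defaults:
      ServerMetadata:
        owner: cloud-ops        # applied to servers of every role
      ComputeServerMetadata:
        workload: nfv           # merged in for Compute servers only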
Change-Id: I334edcc51ce7ee82fc13b6cf4c0d74ccb7db099c
|
There are some requirements for early configuration that involve, e.g.,
setting kernel parameters and then rebooting. Currently this can be
done via cloud-init, e.g. the firstboot templates, but there's been
discussion around enabling a SoftwareDeployment approach instead.
The main advantage of doing it this way is that there's an error path
if something goes wrong with the config (except for triggering the
reboot, as we have to use NO_SIGNAL for that).
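A minimal sketch of such a deployment (resource names illustrative);
NO_SIGNAL is needed because the reboot prevents signalling back:

    HostConfigDeployment:
      type: OS::Heat::SoftwareDeployment
      properties:
        config: {get_resource: HostConfigAndRebootConfig}
        server: {get_param: server}
        actions: ['CREATE']           # run on first boot only
        signal_transport: NO_SIGNAL   # the reboot can't signal completion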
Change-Id: Ia54ee654f755631b8062eb5c209a60c6f9161500
|
There were several instances where the short names/FQDNs were being
obtained in the same way in the roles' templates, so this introduces a
mapping to get these values and reduce clutter.
Change-Id: Ie7df360bb69d56655f3e0fcbbf4d297db39b7a26
|
Currently, one can get the network-based FQDNs via a custom puppet
fact. This is unreliable, as it's based on the ::hostname fact, which
we assume is set correctly by nova. However, this is not necessarily
the case (for instance, if you use pre-deployed servers, as we do with
the multinode jobs). In these cases, the ::hostname fact will return
something other than what we specified in nova, which effectively
breaks the configuration if we rely too much on the network-based FQDN
facts.
By using hiera instead, we avoid this issue, as we set those values to
be exactly what we expect (we set them in the OS::TripleO::Server
resource).
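A sketch of the resulting hiera keys (hostnames and domain
illustrative):

    fqdn_internal_api: overcloud-controller-0.internalapi.localdomain
    fqdn_storage: overcloud-controller-0.storage.localdomain
    fqdn_external: overcloud-controller-0.external.localdomain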
Change-Id: I6ce31237098f57bdc0adfd3c42feef0073c224fb
|
This patch optimizes how we deploy hiera by using a new
heat hook specifically designed to help compose hiera
within heat templates. As part of this change:
- we update all the 'hiera' software configurations to set the group to hiera
instead of os-apply-config.
- The new format uses JSON instead of YAML. The hook actually writes
out the hiera JSON directly so no conversion takes place. Arrays,
Strings, Booleans all stay in their native formats. As such we can avoid
having to do many of the awkward string and list conversions in t-h-t to
support the previous YAML formatting.
- The new hook prefers JSON over YAML, so upgrading users will have the
  new files preferred. (We will post a cleanup routine for the old
  files soon, but this isn't a new behavior; JSON is now simply
  preferred.)
- A lot of services required edits to account for default settings that
  worked in YAML but no longer work correctly in the native JSON
  format. In almost all of these cases the resulting code looks cleaner
  and is more explicit about what is getting configured in hiera on the
  actual nodes.
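A sketch of a hiera software config in the new style (resource name,
datafile name, and keys illustrative):

    RoleExtraConfig:
      type: OS::Heat::StructuredConfig
      properties:
        group: hiera  # previously os-apply-config
        config:
          datafiles:
            service_configs:
              ntp::servers: ['0.pool.ntp.org', '1.pool.ntp.org']
              tripleo::firewall::manage_firewall: true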
Depends-On: I6a383b1ad4ec29458569763bd3f56fd3f2bd726b
Closes-Bug: #1596373
Change-Id: Ibe7e2044e200e2c947223286fdf4fd5bcf98c2e1
|
Not having the default easily accessible is causing issues for the UI,
as it cannot guess at it and can accidentally overwrite the value with
an empty string (the expected default when unset). The default is
already helpfully spelled out in the doc string for each file; this
updates the parameter to match it.
Change-Id: Ic284f9904e8f1d01cc717d59a0759f679d94106d
Closes-Bug: #1643670
|
The update configuration is generated into ceph.yaml and into
{rolename}.yaml. We should ensure puppet hiera looks for these
files.
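A sketch of where these land in the hiera hierarchy (hiera 3 syntax;
entry names and order illustrative):

    :hierarchy:
      - heat_config_%{::deploy_config_name}
      - compute   # the {rolename}.yaml datafile
      - ceph      # the ceph.yaml datafile
      - common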
Change-Id: I261d16bc365b3d19adc502385edcc509a53ffc2a
Closes-Bug: #1638346
Resolves: rhbz#1388977
|
In the great rebase following the JINJA ALL THE THINGS changes, we lost
critical functionality in the fluentd client service. This review
restores the missing features.
Change-Id: I7c23f16f81e75f3da6a24587b2eb8385b3e920a4
Closes-Bug: #1630692
|
Co-Authored-By: Juan Antonio Osorio Robles <jaosorior@redhat.com>
Depends-On: Ic6fec1057439ed9122d44ef294be890d3ff8a8ee
Change-Id: I754c4a41d8a294a4c7c18bd282ae014efd4b9b16
Closes-Bug: #1628521
|
When generating these templates, we should create them with "-role"
appended, as they will be generated from a role.role.j2.yaml file,
i.e. role.role.j2.yaml will generate <service>-role.yaml and
config.role.j2.yaml will generate <service>-config.yaml.
Partial-Bug: #1626976
Change-Id: I614dc462fd7fc088b67634d489d8e7b68e7d4ab1