Closes-Bug: #1668919
Change-Id: Ie750caa34c6fa22ca6eae6834b9ca20e15d97f7f
In I7831d20eae6ab9668a919b451301fe669e2b1346 we removed some of
the old upgrade bits but left behind the environment files that are
removed here.
Change-Id: Ib3eca5687285b280832d19b647c3b4aa3d9ac36d
When configuring nova containers via puppet, the puppet class chain
includes a class for live migration, which configures live migration
aspects of nova and libvirt.
Some of the libvirt config parts try to notify Service[libvirt], but
that service definition is only included in the nova-libvirt service;
it is not included in the control plane nova services. However, our
hieradata is currently global to the node, not per-service, which means
that even though only the nova-compute and nova-libvirt services set
tripleo::profile::base::nova::manage_migration: true
this hiera setting is applied to all containers running puppet, most
notably the ones which configure nova control plane services. As a
result, configuration of the nova control plane services failed, and in
turn the whole deployment failed.
This commit disables the libvirt part of the live migration config
until we implement a better solution (e.g. hieradata separation between
different puppet containers, or moving the libvirt config parts into
the nova-compute manifests only, in puppet-tripleo).
Change-Id: I0328406607d451e6bdce4d92c441c03648925fa7
Closes-Bug: #1684107
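A minimal hieradata sketch of the issue: hiera is scoped to the whole
node, so this key (taken from the message above) is visible to every
container's puppet run, not only to nova-compute and nova-libvirt:

    # Node-scoped hieradata (sketch): intended for nova-compute and
    # nova-libvirt only, but seen by every container running puppet.
    tripleo::profile::base::nova::manage_migration: true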
Currently we're referencing some steps that don't exist in the
output from the OS::Heat::Value resource, but as noted in heat bug
#1681749 I think this isn't valid and probably should not be allowed,
so instead we merge the defaults with the non-empty step tasks. To
avoid further duplication of the loop variables, the max step is now
a variable.
Change-Id: Icf3d639b53c97006a0c370c12600449fba6f3323
Related-Bug: #1681749
In two places during upgrade we manually trigger puppet.
This can be a problem when new puppet modules are added and their
corresponding symlinks in /etc/puppet/modules are not created during
installation, as the modules are installed in
/usr/share/openstack-puppet/modules. To prevent this issue, TripleO
sets the modulepath in the templates.
We must use the same modulepath in the manual puppet runs to make sure
we don't fail because of a missing module.
This particularly happens when you upgrade from M->N->O, as the base
image in Mitaka doesn't have the proper symlinks and they are not
created during the installation of the package.
Closes-Bug: #1684587
Change-Id: I79df6ea33f1c58e13309176a6de41b7572541fd6
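A sketch of how such a manual run can pin the modulepath, written as a
hypothetical upgrade task (the manifest path is a placeholder, not from
the change):

    # Hypothetical upgrade_tasks snippet: run puppet with the same
    # modulepath the templates set, so modules installed only under
    # /usr/share/openstack-puppet/modules are still found.
    upgrade_tasks:
      - name: manual puppet run with explicit modulepath
        # /root/manifest.pp is a placeholder for the actual manifest
        shell: >-
          puppet apply
          --modulepath /etc/puppet/modules:/usr/share/openstack-puppet/modules
          /root/manifest.pp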
This switches Zaqar to run with httpd when configured by puppet.
Change-Id: I69b923dd76a60e9ec786cae886c137ba572ec906
* Switch auth_uri to point to Keystone versionless endpoint.
* Switch Swift auth url to use Keystone versionless endpoint and
Keystone v3 API.
Co-Authored-By: Juan Antonio Osorio Robles <jaosorior@redhat.com>
Change-Id: I78cdd2286b5a5094f36d4f3c7c58340745664449
Partial-blueprint: keystone-v3
This change implements a MOTD message and provides a hash of
sshd config options that is passed through to the puppet-ssh module.
The SSHD puppet service is enabled by default, as it is
required for Idb56acd1e1ecb5a5fd4d942969be428cc9cbe293.
Also added the service to the CI roles.
Change-Id: Ie2e01d93082509b8ede37297067eab03bb1ab06e
Depends-On: I1d09530d69e42c0c36311789166554a889e46556
Closes-Bug: #1668543
Co-Authored-By: Oliver Walsh <owalsh@redhat.com>
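A hypothetical parameter_defaults sketch; both parameter names and
values below are illustrative, not taken from the merged template:

    parameter_defaults:
      # Illustrative MOTD text shown at login.
      BannerText: |
        Authorized use only. Activity may be monitored.
      # Illustrative hash of sshd_config options handed to puppet-ssh.
      SshServerOptions:
        PrintMotd: 'yes'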
There's no need for puppet to download the rabbitmqadmin script from
rabbitmq, as the script would be deleted immediately together with the
ephemeral puppet container. Also, since rabbitmq isn't running at the
time we run the puppet container (rabbitmq doesn't have config files
generated at that point), puppet couldn't connect to it anyway.
Change-Id: Ia59e1013c24ab02037246135024418cc9b674606
Closes-Bug: #1684104
This is not really an issue, but it makes this image consistent with
the other images and is generally nicer to people with OCD.
Also, this helps generate a consistent overcloud_containers.yaml from
parsing the heat templates.
Change-Id: I24b41dea51d2a8e862f43e9092c94ba07431af4a
According to [1] we need os_region_name, not region_name. Furthermore,
the os_interface is configured as well. The hard check on this
parameter was introduced in Ocata [2], explaining why the Newton
version did not choke on it.
[1] https://docs.openstack.org/ocata/config-reference/compute/config-options.html
[2] https://github.com/openstack/nova/commit/d486315e0
Closes-Bug: #1684058
Change-Id: If6118bf03e832fe3fa5ea4fcb1b436afd2adf80a
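A hieradata sketch of the intended nova.conf outcome, using
puppet-nova's generic nova_config mechanism (the region and interface
values are examples; the change may set these through different keys):

    # Sets [cinder] os_region_name and os_interface in nova.conf.
    nova::config::nova_config:
      cinder/os_region_name:
        value: regionOne
      cinder/os_interface:
        value: internal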
This covers aodh, gnocchi and panko.
cp tls-via-certmonger-containers
Change-Id: I6dabb0d82755c28b8940c0baab0e23cfcc587c42
This relies on using the default paths for certs/keys used by libvirt
and is only enabled if TLS-everywhere is enabled.
bp tls-via-certmonger
Depends-On: If18206d89460f6660a81aabc4ff8b97f1f99bba7
Depends-On: I0a1684397ebefaa8dc00237e0b7952e9296381fa
Change-Id: I0538bbdd54fd0b82518585f4f270b4be684f0ec4
We disabled it because it stopped working. Let's see how it works now.
Change-Id: If1efb86cb1d6ada357d4562408a566ac702fb6be
Closes-Bug: #1646506
Running this job once a day has proven problematic for large
deployments, as seen in the bug report. Setting it to run hourly
is an improvement over the current situation, as each flush
doesn't need to process as much data.
Note that this only affects people using UUID as the token provider.
Change-Id: I462e4da2bfdbcba0403ecde5d613386938e2283a
Related-Bug: #1649616
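A hieradata sketch of an hourly schedule, assuming puppet-keystone's
token_flush cron parameters; the exact values in the change may differ:

    # Run the token flush at minute 1 of every hour instead of once a day.
    keystone::cron::token_flush::hour: '*'
    keystone::cron::token_flush::minute: 1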
When TLS is enabled, the containers need to trust the CAs that the
host trusts.
Change-Id: I0434b0ac10290970857cad3d1a89d00f5b054196
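One way to achieve that is mounting the host trust store read-only
into the containers; the path below is an assumption, not taken from
the change:

    volumes:
      # Host CA trust store, visible to services inside the container.
      - /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro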
This enables common resources that the docker templates might need.
The only initial resource is common volumes, with two volumes
introduced (localtime and hosts).
Change-Id: Ic55af32803f9493a61f9b57aff849bfc6187d992
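A sketch of the kind of shared mounts this introduces; the exact mount
options are assumptions, though the two volume names come from the
message above:

    # Common volumes any docker service template can include.
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /etc/hosts:/etc/hosts:ro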
Users may already have an external swift proxy available (e.g. radosgw
from an existing ceph deployment, or a hardware appliance implementing
the swift proxy). With this change, users may specify an environment
file that registers the specified URLs as the endpoints for the
object-store service. The internal swift proxy is left unconfigured.
Change-Id: I5e6f0a50f26d4296565f0433f720bfb40c5d2109
Depends-On: Ia568c3a5723d8bd8c2c37dbba094fc8a83b9d67e
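A hypothetical environment-file sketch; the resource path, parameter
names, and URLs are all illustrative:

    resource_registry:
      # Register an externally managed proxy for the object-store service.
      OS::TripleO::Services::ExternalSwiftProxy: ../puppet/services/external-swift-proxy.yaml
    parameter_defaults:
      ExternalPublicUrl: https://swift.example.com:8080/v1/AUTH_%(tenant_id)s
      ExternalInternalUrl: http://192.0.2.10:8080/v1/AUTH_%(tenant_id)s
      ExternalAdminUrl: http://192.0.2.10:8080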
Previously only the VIPs and their associated hostnames were present
in the HostsEntry output, because the hosts_entries output on the
hosts-config.yaml nested stack was empty: it was referencing an
invalid attribute.
Change-Id: Iec41926e27bdbf86eb30f230f904df1b7dbfa9c2
Closes-Bug: #1683517
aodh::auth::auth_region in aodh-base.yaml is hardcoded to regionOne
instead of using the available KeystoneRegion parameter.
Change-Id: I521b7e226675062225085e1d5f0296e53b152e81