In two places during the upgrade we manually trigger puppet.
A problem can occur when new puppet modules are added and their
corresponding symlinks in /etc/puppet/modules are not created during
the installation, because the modules are installed in
/usr/share/openstack-puppet/modules. To prevent this issue TripleO sets
the modulepath in the templates.
We must use the same modulepath in the manual puppet run to make sure
it does not fail because of a missing module.
This particularly happens when you upgrade from M->N->O, as the base
image in Mitaka does not have the proper symlinks and they are not
created during the installation of the package.
Closes-Bug: #1684587
Change-Id: I79df6ea33f1c58e13309176a6de41b7572541fd6
According to [1] we need os_region_name, not region_name. Furthermore,
os_interface is configured as well. The hard check on this parameter
was introduced in Ocata [2], which explains why the Newton version did
not choke on it.
[1] https://docs.openstack.org/ocata/config-reference/compute/config-options.html
[2] https://github.com/openstack/nova/commit/d486315e0
Closes-Bug: #1684058
Change-Id: If6118bf03e832fe3fa5ea4fcb1b436afd2adf80a
This relies on using the default paths for certs/keys used by libvirt
and is only enabled if TLS-everywhere is enabled.
bp tls-via-certmonger
Depends-On: If18206d89460f6660a81aabc4ff8b97f1f99bba7
Depends-On: I0a1684397ebefaa8dc00237e0b7952e9296381fa
Change-Id: I0538bbdd54fd0b82518585f4f270b4be684f0ec4
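As a rough sketch, the TLS-everywhere switch that gates this behaviour
can be turned on from an environment file; the EnableInternalTLS
parameter name is an assumption here, not taken from this commit:

    parameter_defaults:
      # Assumed TripleO toggle for TLS on internal services; when set,
      # libvirt is expected to use its default cert/key paths.
      EnableInternalTLS: true
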
Running the keystone token flush cron job once a day has proven
problematic for large deployments, as seen in the bug report. Setting
it to run hourly improves on the current situation, as each flush does
not need to process as much data.
Note that this only affects people using UUID as the token provider.
Change-Id: I462e4da2bfdbcba0403ecde5d613386938e2283a
Related-Bug: #1649616
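A minimal sketch of what the hourly schedule amounts to, expressed as a
hieradata override; the hiera key follows puppet-keystone's
keystone::cron::token_flush class and is illustrative only:

    parameter_defaults:
      ControllerExtraConfig:
        # '*' in the hour field makes the token_flush cron run every hour
        keystone::cron::token_flush::hour: '*'
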
When TLS is enabled, the containers need to trust the CAs that the
host trusts.
Change-Id: I0434b0ac10290970857cad3d1a89d00f5b054196
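A sketch of the kind of bind mounts this implies for a containerized
service; the service name and exact host paths are placeholders, not
the ones used by the change:

    docker_config:
      step_1:
        example_service:
          volumes:
            # Share the host's trusted CA material with the container
            - /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro
            - /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro
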
This enables common resources that the docker templates might need.
The only initial resource is a set of common volumes; two volumes are
introduced (localtime and hosts).
Change-Id: Ic55af32803f9493a61f9b57aff849bfc6187d992
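Roughly, the shared volume list covers the two volumes named above (the
read-only bind-mount form is an assumption for the sketch):

    volumes:
      # Keep container time and name resolution consistent with the host
      - /etc/hosts:/etc/hosts:ro
      - /etc/localtime:/etc/localtime:ro
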
Users may already have an external swift proxy available (e.g. radosgw
from an existing ceph deployment, or a hardware appliance implementing
the swift proxy). With this change the user may specify an environment
file that registers the specified URLs as the endpoints for the
object-store service. The internal swift proxy is left unconfigured.
Change-Id: I5e6f0a50f26d4296565f0433f720bfb40c5d2109
Depends-On: Ia568c3a5723d8bd8c2c37dbba094fc8a83b9d67e
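An illustrative environment file for this case could look as follows;
the resource and parameter names are assumptions for the sketch and may
not match the change exactly:

    resource_registry:
      # Disable the internal swift proxy and register an external one instead
      OS::TripleO::Services::SwiftProxy: OS::Heat::None
      OS::TripleO::Services::ExternalSwiftProxy: ../puppet/services/external-swift-proxy.yaml

    parameter_defaults:
      # URLs of the already existing swift/radosgw endpoints to register
      ExternalPublicUrl: 'https://swift.example.com:8080/v1/AUTH_%(tenant_id)s'
      ExternalInternalUrl: 'http://192.0.2.10:8080/v1/AUTH_%(tenant_id)s'
      ExternalAdminUrl: 'http://192.0.2.10:8080'
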
This reverts commit 57a26486128982c9887edd02eb8897045215b10a.
Change-Id: I1bbe16a1a7a382ae0c898bd19cd64d3d49aa84c7
Closes-bug: #1683210
This enables nova cold migration.
This also switches to SSH as the default transport for live-migration.
The tripleo-common mistral action that generates passwords supplies the
MigrationSshKey parameter that enables this.
The TCP transport is no longer used for live-migration and the firewall
port has been closed.
Change-Id: I4e55a987c93673796525988a2e4cc264a6b5c24f
Depends-On: I367757cbe8757d11943af7e41af620f9ce919a06
Depends-On: I9e7a1862911312ad942233ac8fc828f4e1be1dcf
Depends-On: Iac1763761c652bed637cb7cf85bc12347b5fe7ec
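The MigrationSshKey parameter mentioned above is normally generated by
the tripleo-common workflow; if supplied by hand it is a key pair,
roughly of this shape (the private_key/public_key layout is an
assumption, values elided):

    parameter_defaults:
      MigrationSshKey:
        private_key: |
          -----BEGIN RSA PRIVATE KEY-----
          ...
          -----END RSA PRIVATE KEY-----
        public_key: ssh-rsa AAAA... nova-migration
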
* Split it into REQUIRED/OPTIONAL
* Move puppet_tags to OPTIONAL, as it already has a
  default set of tags that need not be repeated
  explicitly.
Change-Id: Ib70176f1edf61228771c983b0c3231fb7939a316
Signed-off-by: Bogdan Dobrelya <bdobreli@redhat.com>
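For context, a docker service template's puppet_config section then
only has to provide the required keys, with puppet_tags left out unless
the defaults are not enough; the names below are placeholders and the
section layout is assumed:

    puppet_config:
      # REQUIRED
      config_volume: example
      step_config: {get_attr: [ExampleServiceBase, role_data, step_config]}
      config_image: {get_param: DockerExampleConfigImage}
      # OPTIONAL - the default tag set already covers the common cases
      # puppet_tags: example_config
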
Fetch the host public keys from each node, combine them all and write
them to the system-wide SSH known hosts file. The alternative of
disabling host key verification is vulnerable to a MITM attack.
Change-Id: Ib572b5910720b1991812256e68c975f7fbe2239c
The server resource type, OS::TripleO::Server can now be mapped per role
instead of globally. This allows users to mix baremetal
(OS::Nova::Server) and deployed-server (OS::Heat::DeployedServer) server
resources in the same deployment.
blueprint pluggable-server-type-per-role
Change-Id: Ib9e9abe2ba5103db221f0b485c46704b1e260dbf
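For example, an environment file could keep the default baremetal
mapping for most roles while putting one role on pre-provisioned
servers; the per-role resource name below follows the usual
OS::TripleO::<RoleName>Server pattern and is given as an illustration:

    resource_registry:
      # Compute nodes use pre-provisioned (deployed) servers...
      OS::TripleO::ComputeServer: OS::Heat::DeployedServer
      # ...while other roles keep the default OS::Nova::Server mapping
      # via the global OS::TripleO::Server resource type.
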
Previously Ansible upgrade steps failed with: Could not find the
requested service nova-compute: cannot disable.
Change-Id: I14e8bc89aca0a3f7308d88488b431e23251cc043
Closes-Bug: #1682373
The rest of the services use underscores, so this improves
uniformity.
Change-Id: I4ce3cc76f430a19fa08c77b004b86ecad02119ae
Change-Id: I99b96343742ee5c40d8786e26b2336427e225c82
Implements: blueprint update-plan-environment-yaml
Prior to Ocata, the Controller role was hardcoded for various lookups.
When we switched to pulling the primary role name dynamically from
roles_data.yaml, using the first role listed as the primary role (as
part of I36df7fa86c2ff40026d59f02248af529a4a81861), it introduced a
regression for folks who had previously been using a custom roles file
without the Controller listed first.
Instead of relying on the position of the role in the roles data, this
change adds the concept of tags to the role data that can be used when
looking for specific functionality within the deployment process. If
no role carries the tags indicating a 'primary' 'controller', it falls
back to using the first role listed in the roles data as the primary
role.
Change-Id: Id3377e7d7dcc88ba9a61ca9ef1fb669949714f65
Closes-Bug: #1677374
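A roles_data.yaml entry using the new tags would look roughly like this
(the services list is truncated for the sketch):

    - name: Controller
      # Tags let the deployment process identify the primary/controller
      # role regardless of its position in the file
      tags:
        - primary
        - controller
      ServicesDefault:
        - OS::TripleO::Services::Keystone
        # ...
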
The containers upgrade CI is broken because all multinode jobs deploy
pacemaker environments. We currently cannot upgrade pacemaker-managed
deployments anyway (containerization of pacemaker-managed services is
WIP); upgrades have only been tested with non-pacemaker deployments so
far.
We need a new environment which does not try to deploy with pacemaker.
Once pacemaker-managed services are containerized, we can change the
job to upgrade an HA deployment (or single-node "HA" at least), and
perhaps even get rid of the environment file introduced here and reuse
multinode.yaml.
Change-Id: Ie635b1b3a0b91ed5305f38d3c76f6a961efc1d30
Closes-Bug: #1682051
This allows us to better configure these parameters, e.g. we could set
the cron job to run several times per day instead of just once.
Change-Id: I0a151808804809c0742bcfa8ac876e22f5ce5570
Closes-Bug: #1682097
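In practice this exposes the cron schedule as Heat parameters, so an
operator can override it from an environment file; the parameter names
below assume this concerns the keystone token-flush cron and are
illustrative only:

    parameter_defaults:
      # Run the flush every 30 minutes instead of once a day
      KeystoneCronTokenFlushMinute: '*/30'
      KeystoneCronTokenFlushHour: '*'
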