Age | Commit message | Author | Files | Lines |
|
This relies on the default paths for the certs/keys used by libvirt,
and is only enabled if TLS-everywhere is enabled.
bp tls-via-certmonger
Depends-On: If18206d89460f6660a81aabc4ff8b97f1f99bba7
Depends-On: I0a1684397ebefaa8dc00237e0b7952e9296381fa
Change-Id: I0538bbdd54fd0b82518585f4f270b4be684f0ec4
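For reference, the libvirt defaults in question, sketched as YAML (the
enclosing key is ours for illustration; the paths are libvirt's
well-known defaults):

    # Default libvirt TLS paths relied on by this change (illustrative key)
    libvirt_default_tls_paths:
      ca_cert: /etc/pki/CA/cacert.pem
      server_cert: /etc/pki/libvirt/servercert.pem
      server_key: /etc/pki/libvirt/private/serverkey.pem
      client_cert: /etc/pki/libvirt/clientcert.pem
      client_key: /etc/pki/libvirt/private/clientkey.pem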
|
We disabled it because it stopped working. Let's see how it works now.
Change-Id: If1efb86cb1d6ada357d4562408a566ac702fb6be
Closes-Bug: #1646506
|
Running this job once a day has proven problematic for large
deployments, as seen in the bug report. Setting it to run hourly
is an improvement over the current situation, as each flush has
less data to process.
Note that this only affects people using UUID as the token provider.
Change-Id: I462e4da2bfdbcba0403ecde5d613386938e2283a
Related-Bug: #1649616
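For illustration, the hourly schedule expressed as puppet-keystone
hieradata (the class and parameter names are our assumption about the
underlying interface, not taken from this change):

    # Hieradata sketch: run keystone-manage token_flush hourly
    keystone::cron::token_flush::hour: '*'
    keystone::cron::token_flush::minute: '0'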
|
When TLS is enabled, the containers need to trust the CAs that the
host trusts.
Change-Id: I0434b0ac10290970857cad3d1a89d00f5b054196
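One plausible shape for this is bind-mounting the host trust store into
the containers read-only (the paths and list name below are assumptions,
not the exact ones from the change):

    # Illustrative: expose the host's CA trust store inside a container
    volumes:
      - /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro
      - /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro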
|
This enables common resources that the docker templates might need.
The only initial resource is common volumes, and two volumes are
introduced (localtime and hosts).
Change-Id: Ic55af32803f9493a61f9b57aff849bfc6187d992
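A sketch of the idea (the list name is ours; the two volumes match the
commit message):

    # Illustrative common volumes shared by containerized services
    common_volumes:
      - /etc/hosts:/etc/hosts:ro
      - /etc/localtime:/etc/localtime:ro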
|
Users may have an external swift proxy already available (e.g. radosgw
from an existing ceph deployment, or a hardware appliance implementing
the swift proxy API). With this change, users may specify an environment
file that registers the specified URLs as endpoints for the object-store
service. The internal swift proxy is left unconfigured.
Change-Id: I5e6f0a50f26d4296565f0433f720bfb40c5d2109
Depends-On: Ia568c3a5723d8bd8c2c37dbba094fc8a83b9d67e
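An environment file along these lines would do it (the parameter names
and the OS::Heat::None mapping are assumptions based on the description,
not the exact contents of the change):

    # Illustrative environment file for an external object-store endpoint
    resource_registry:
      # leave the internal swift proxy unconfigured
      OS::TripleO::Services::SwiftProxy: OS::Heat::None

    parameter_defaults:
      ExternalSwiftPublicUrl: 'http://radosgw.example.com:8080/swift/v1'
      ExternalSwiftInternalUrl: 'http://radosgw.internal.example.com:8080/swift/v1'
      ExternalSwiftAdminUrl: 'http://radosgw.admin.example.com:8080/swift/v1'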
|
Previously only the VIPs and their associated hostnames were present in
the HostsEntry output, because the hosts_entries output on the
hosts-config.yaml nested stack was empty: it was referencing an
invalid attribute.
Change-Id: Iec41926e27bdbf86eb30f230f904df1b7dbfa9c2
Closes-Bug: #1683517
|
This reverts commit 57a26486128982c9887edd02eb8897045215b10a.
Change-Id: I1bbe16a1a7a382ae0c898bd19cd64d3d49aa84c7
Closes-bug: #1683210
|
This enables nova cold migration.
This also switches to SSH as the default transport for live-migration.
The tripleo-common mistral action that generates passwords supplies the
MigrationSshKey parameter that enables this.
The TCP transport is no longer used for live-migration and the firewall
port has been closed.
Change-Id: I4e55a987c93673796525988a2e4cc264a6b5c24f
Depends-On: I367757cbe8757d11943af7e41af620f9ce919a06
Depends-On: I9e7a1862911312ad942233ac8fc828f4e1be1dcf
Depends-On: Iac1763761c652bed637cb7cf85bc12347b5fe7ec
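Conceptually, the transport switch boils down to hieradata like the
following (the class and parameter names are our assumption about the
puppet-nova interface):

    # Illustrative: libvirt live migration over SSH instead of TCP
    nova::migration::libvirt::transport: 'ssh'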
|
* Split it to REQUIRED/OPTIONAL
* Move puppet_tags to OPTIONAL, as it already has a
  default set of tags that need not be repeated
  explicitly.
Change-Id: Ib70176f1edf61228771c983b0c3231fb7939a316
Signed-off-by: Bogdan Dobrelya <bdobreli@redhat.com>
|
Fetch the host public keys from each node, combine them all, and write
them to the system-wide SSH known hosts file. The alternative of
disabling host key verification is vulnerable to a MITM attack.
Change-Id: Ib572b5910720b1991812256e68c975f7fbe2239c
|
The server resource type, OS::TripleO::Server can now be mapped per role
instead of globally. This allows users to mix baremetal
(OS::Nova::Server) and deployed-server (OS::Heat::DeployedServer) server
resources in the same deployment.
blueprint pluggable-server-type-per-role
Change-Id: Ib9e9abe2ba5103db221f0b485c46704b1e260dbf
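For example, a resource_registry along these lines mixes both server
types in one deployment (the per-role resource names follow the
OS::TripleO::{Role}Server pattern, which is our assumption; the role
names are illustrative):

    resource_registry:
      # Controllers are pre-provisioned servers
      OS::TripleO::ControllerServer: OS::Heat::DeployedServer
      # Computes are provisioned by Nova/Ironic as before
      OS::TripleO::ComputeServer: OS::Nova::Server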
|
Previously, Ansible upgrade steps failed with: "Could not find the
requested service nova-compute: cannot disable".
Change-Id: I14e8bc89aca0a3f7308d88488b431e23251cc043
Closes-Bug: #1682373
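The error suggests the upgrade task referenced the unit by its short
name; the fix presumably uses the full unit name, along these lines
(the exact task and unit name are assumptions):

    # Illustrative Ansible upgrade task using the full systemd unit name
    - name: Stop and disable nova-compute
      service: name=openstack-nova-compute state=stopped enabled=no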
|
The rest of the services use underscores, so this helps
uniformity.
Change-Id: I4ce3cc76f430a19fa08c77b004b86ecad02119ae
|
When containerizing the ceilometer agents, keystone auth was not being
set correctly because we were not including the service config settings.
Change-Id: Ic17d64eb39e1fcb64c198410f27adbe94c84b7d4
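In the docker service templates this typically means pulling the base
puppet service's config_settings through, roughly as follows (the
resource name is an assumption; the attribute path follows the tripleo
convention):

    # Illustrative: reuse the base service's config_settings in the
    # docker service's role_data
    config_settings:
      get_attr: [CeilometerAgentCentralBase, role_data, config_settings]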
|
Change-Id: I99b96343742ee5c40d8786e26b2336427e225c82
Implements: blueprint update-plan-environment-yaml
|
Prior to Ocata, the Controller role was hardcoded for various lookups.
When we switched to pulling the primary role name dynamically from
roles_data.yaml, using the first role as the primary role (as part of
I36df7fa86c2ff40026d59f02248af529a4a81861), we introduced a regression
for folks who had previously been using a custom roles file without the
Controller listed first.
Instead of relying on the position of the role in the roles data, this
change adds the concept of tags to the role data, which can be used when
looking for specific functionality within the deployment process. If
no role is tagged as a 'primary' 'controller', it falls back to using
the first role listed in the roles data as the primary role.
Change-Id: Id3377e7d7dcc88ba9a61ca9ef1fb669949714f65
Closes-Bug: #1677374
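A roles_data.yaml entry using the new tags would look roughly like this
(the tag names come from the message above; the rest of the entry is
illustrative):

    - name: Controller
      tags:
        - primary
        - controller
      ServicesDefault:
        - OS::TripleO::Services::Keystone
        - OS::TripleO::Services::Pacemaker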
|
The containers upgrade CI is not working because all multinode jobs
deploy pacemaker environments. Currently we cannot upgrade Pacemakerized
deployments anyway (containerization of pacemakerized services is WIP);
upgrades have only been tested with non-Pacemaker deployments so far.
We need a new environment which will not try deploying in a
pacemakerized way. When pacemaker-managed services are containerized, we
can change the job to upgrade an HA deployment (or single-node "HA" at
least), and perhaps even get rid of the environment file introduced
here and reuse multinode.yaml.
Change-Id: Ie635b1b3a0b91ed5305f38d3c76f6a961efc1d30
Closes-Bug: #1682051
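One common way to express "no pacemaker" in a tripleo environment file
is to map the service to OS::Heat::None; whether the new environment
does exactly this is an assumption:

    resource_registry:
      # Deploy without pacemaker management
      OS::TripleO::Services::Pacemaker: OS::Heat::None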
|
This allows us to better configure these parameters; e.g. we could set
the cron job to run more than once per day.
Change-Id: I0a151808804809c0742bcfa8ac876e22f5ce5570
Closes-Bug: #1682097
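The resulting knobs presumably surface as Heat parameters along these
lines (the parameter names are assumptions patterned on tripleo naming
conventions, not the exact ones from this change):

    parameter_defaults:
      # Illustrative: flush keystone tokens every hour
      KeystoneCronTokenFlushHour: '*'
      KeystoneCronTokenFlushMinute: '0'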
|
This is only done when TLS-everywhere is enabled, and depends on those
directories being exclusive to services that run over httpd, which is
what the commit this change builds on provides.
Also, an environment file was added that is similar to
environments/docker.yaml. The difference is that this one contains
the services that can run containerized with TLS-everywhere. This file
will be updated as more services gain support for this.
bp tls-via-certmonger-containers
Change-Id: I87bf59f2c33de6cf2d4ce0679a5e0e22bc24bf78
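The new environment file presumably mirrors the shape of
environments/docker.yaml, mapping only the TLS-ready services to their
docker templates; the concrete mapping below is illustrative:

    resource_registry:
      # Only services verified to run containerized with TLS-everywhere
      OS::TripleO::Services::Keystone: ../docker/services/keystone.yaml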
|
The containers also need to trust the CAs that the overcloud node
trusts, otherwise we'll get SSL verification failures.
bp tls-via-certmonger-containers
Change-Id: I7d3412a6273777712db2c90522e365c413567c49
|
We pass the short hostname to docker-puppet.py. In order to satisfy the
facter FQDN check for the short hostname, we need to run the container
with --net=host in some cases.
Change-Id: I2929f122f23ee33e8ea5d4c5006d2bbb8b928b67
Closes-bug: #1681903
|