Pass a mode parameter to ceph-ansible in place of the ACLs parameter,
because the UID inside the container is not the same as on the
container host and because ACLs are not passed through by kolla_config.
Change-Id: I7e3433eab8e2a62963b623531f223d5abd301d16
Closes-Bug: #1709683
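For illustration only, a sketch of what passing a mode to ceph-ansible (instead of an ACL list) can look like; the key name, caps and values below are assumptions, not the exact template content:

  openstack_keys:
    - name: client.openstack
      key: "<cephx key placeholder>"
      mon_cap: "allow r"
      osd_cap: "allow class-read object_prefix rbd_children, allow rwx pool=vms"
      mode: "0600"   # plain file mode readable by the in-container UID, instead of acls: [...]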
Bind mounts the cert and key that are used for TLS and adds the
appropriate permissions for them.
bp tls-via-certmonger-containers
Change-Id: I7fae4083604c7dc89ca04141080a228ebfc44ac9
This bind mounts the certificates if TLS is enabled on the internal
network. It also disables CRL usage, since we can't restart haproxy
at the rate the CRL is updated. This will be addressed later and is a
known limitation of using containerized haproxy (the same issue exists
in the HA scenario). To address the different UID that the certs and
keys will have inside the container, an extra step changes the
ownership of these files; this step is only included if TLS on the
internal network is enabled.
bp tls-via-certmonger-containers
Depends-On: I2078da7757ff3af1d05d36315fcebd54bb4ca3ec
Change-Id: Ic6ca88ee7b6b256ae6182e60e07498a8a793d66a
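For illustration only, a sketch of conditional bind mounts plus an ownership-fixing init container; the condition name, image parameter and paths are assumptions:

  volumes:
    if:
      - internal_tls_enabled
      - - /etc/pki/tls/certs/haproxy:/etc/pki/tls/certs/haproxy:ro
        - /etc/pki/tls/private/haproxy:/etc/pki/tls/private/haproxy:ro
      - []
  docker_config:
    step_1:
      haproxy_fix_tls_ownership:
        image: {get_param: DockerHAProxyImage}
        user: root
        command: ['/bin/bash', '-c', 'chown -R haproxy:haproxy /etc/pki/tls/certs/haproxy /etc/pki/tls/private/haproxy']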
The containerized version of the mongodb service omits the
metadata_settings definition [1], which confuses certmonger when
internal TLS is enabled and makes the generation of certificates fail.
Use the right setting from the non-containerized profile.
[1] https://review.openstack.org/#/c/461780/
Change-Id: I50a9a3a822ba5ef5d2657a12c359b51b7a3a42f2
Closes-Bug: #1709553
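For illustration only, a sketch of reusing the value from the non-containerized (base) profile; the resource name is an assumption:

  role_data:
    value:
      metadata_settings:
        get_attr: [MongodbBase, role_data, metadata_settings]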
Various containerized services (e.g. nova, neutron, heat) run initial
setup steps in ephemeral containers that don't use kolla_start. The
tripleo.cnf file is not copied into /etc/my.cnf.d, and this can break
some deployments (e.g. with internal TLS, services lack the SSL
settings). Fix the configuration of these transient containers by bind
mounting the tripleo.cnf file when kolla_start is not used.
Change-Id: I5246f9d52fcf8c8af81de7a0dd8281169c971577
Closes-Bug: #1710127
Co-Authored-By: Juan Antonio Osorio Robles <jaosorior@redhat.com>
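For illustration only, a sketch of such a bind mount on a transient db-sync container; the service, image parameter and source path are assumptions:

  nova_api_db_sync:
    image: {get_param: DockerNovaApiImage}
    volumes:
      # tripleo.cnf is mounted directly since kolla_start does not copy it in
      - /var/lib/config-data/nova/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro
    command: "/usr/bin/nova-manage api_db sync"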
So far we've been using virtlogd running on the host; we should now
be using virtlogd from a container.
Co-Authored-By: Martin André <m.andre@redhat.com>
Co-Authored-By: Jiri Stransky <jistr@redhat.com>
Change-Id: I998c69ea1f7480ebb90afb44d6006953a84a1c04
Splitting on colons with the native str_split function did not work
well because we needed a right split.
This change replaces the str_split calls with yaql's rightSplit().
Change-Id: Iab2f69a5fadc6b02e2eacf3c9d1a9024b0212ac6
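For illustration only, a sketch of the yaql form; the parameter name is an assumption. A right split keeps any earlier colons (e.g. a registry host:port prefix) intact while splitting off the trailing tag:

  yaql:
    expression: $.data.rightSplit(':', 1)
    data: {get_param: DockerNovaComputeImage}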
The IP address which clients and other nodes use to connect to the
monitors is derived from the monitor_interface parameter unless
a monitor_address or monitor_address_block is given (to set mon_host
in ceph.conf); this change adds a setting for monitor_address_block to
match the public_network, so that clients attempt to connect to the
mons on the appropriate network.
Change-Id: I7187e739e9f777eab724fbc09e8b2c8ddedc552d
Closes-Bug: #1709485
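For illustration only, a sketch of the ceph-ansible variables involved; the interface and CIDR values are placeholders:

  monitor_interface: br-ex                # previously the only hint used to build mon_host
  monitor_address_block: 172.16.1.0/24    # now aligned with the public network
  public_network: 172.16.1.0/24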
After merging commit 488796, the single quotation marks were missing.
This causes the upgrade to fail, as the --sacks-number flag is
interpreted as a flag of the su command itself.
Also mount the Ceph config data into the container, which appears to
be needed by the gnocchi-upgrade command when configured to use Ceph.
Also move the gnocchi db sync to step 4, so that Ceph is ready. Add a
retry loop to the ceilometer-upgrade command so it doesn't fail while
apache is restarted.
Closes-Bug: #1709322
Change-Id: I62f3a5fa2d43a2cd579f72286661d503e9f08b90
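For illustration only, a sketch of the quoting issue (command abridged, sack count illustrative):

  # Broken: su consumes --sacks-number as its own option
  #   command: "su gnocchi -s /bin/bash -c /usr/bin/gnocchi-upgrade --sacks-number 128"
  # Fixed: the whole upgrade command is a single, quoted -c argument
  command: "su gnocchi -s /bin/bash -c '/usr/bin/gnocchi-upgrade --sacks-number 128'"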
This bind mounts the necessary files for the mongodb container to serve
TLS in the internal network.
bp tls-via-certmonger-containers
Change-Id: Ieef2a456a397f7d5df368ddd5003273cb0bb7259
Co-Authored-By: Damien Ciabrini <dciabrin@redhat.com>
With these two services running over httpd in the containers, we can now
enable TLS for them.
bp tls-via-certmonger-containers
Change-Id: Ib8fc37a391e3b32feef0ac6492492c0088866d21
The non-containerized version will run over httpd [1]; for the
containerized TLS work, the container version needs to do so as well.
[1] Iac35b7ddcd8a800901548c75ca8d5083ad17e4d3
bp tls-via-certmonger-containers
Depends-On: I1c5f13039414f17312f91a5e0fd02019aa08e00e
Change-Id: I2c39a2957fd95dd261b5b8c4df5e66e00a68d2f7
In non-containerized deployments, Galera can be configured to use TLS
for gcomm group communication when enable_internal_tls is set to true.
Fix the metadata service definition and update the Kolla configuration
to make gcomm use TLS in containers, if configured.
bp tls-via-certmonger-containers
Change-Id: Ibead27be81910f946d64b8e5421bcc41210d7430
Co-Authored-By: Juan Antonio Osorio Robles <jaosorior@redhat.com>
Closes-Bug: #1708135
Depends-On: If845baa7b0a437c28148c817b7f94d540ca15814
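For illustration only, a sketch of the Galera option that has to end up in the container's generated configuration; the certificate paths are assumptions:

  wsrep_provider_options: "socket.ssl_key=/etc/pki/tls/private/mysql.key;socket.ssl_cert=/etc/pki/tls/certs/mysql.crt;socket.ssl_ca=/etc/ipa/ca.crt"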
In non-containerized deployments, HAProxy can be configured to use TLS for
proxying internal services.
Fix the creation of the haproxy bundle resource to enable TLS when
configured. The key and cert files are all passed as configuration files and
must be copied in by Kolla at container startup.
For the time being, disable the use of the CRL file until we find a means
of restarting the containerized HAProxy service when that file expires.
Change-Id: If307e3357dccb7e96bdb80c9c06d66a09b55f3bd
Depends-On: I4b72739446c63f0f0ac9f859314a4d6746e20255
Closes-Bug: #1709563
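For illustration only, a sketch of a Kolla config_files entry that copies the key/cert material in at startup; the source directory and command are assumptions:

  kolla_config:
    /var/lib/kolla/config_files/haproxy.json:
      command: /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg
      config_files:
        - source: /var/lib/kolla/config_files/src-tls/*
          dest: /
          merge: true
          preserve_properties: true
          optional: true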
Run virsh secret-define and secret-set-value in an init step
instead of relying on the puppet-nova exec.
Co-Authored-By: Jiri Stransky <jistr@redhat.com>
Change-Id: Ic950e290af1c66d34b40791defbdf4f8afaa11da
Closes-Bug: #1709583
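For illustration only, a sketch of such an init step; the image parameter, secret XML path and placeholder values are assumptions:

  docker_config:
    step_4:
      nova_libvirt_init_secret:
        image: {get_param: DockerNovaLibvirtImage}
        user: root
        command:
          - /bin/bash
          - -c
          - |
            # UUID and key values are placeholders resolved at deploy time
            virsh secret-define --file /etc/nova/libvirt-secret.xml
            virsh secret-set-value --secret "${LIBVIRT_SECRET_UUID}" --base64 "${CEPH_CLIENT_KEY}"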
Deploying a multipathd container gives the following error:
failed: [localhost] (item={'key': u'config_files', 'value': [{u'dest': u'/', u'merge': True, u'source':
u'/var/lib/kolla/config_files/src-iscsid/*', u'preserve_properties': True}]}) =>
{\"checksum\": \"72ad81489381571c5043b7613f6828b06ae364bd\", \"failed\": true, \"item\":
{\"key\": \"config_files\", \"value\": [{\"dest\": \"/\", \"merge\": true, \"preserve_properties\": true,
\"source\": \"/var/lib/kolla/config_files/src-iscsid/*\"}]}, \"msg\": \"Destination directory does not exist\"}
The reason is the wrong indentation of the config_files key in the
multipath docker service.
Change-Id: I0e1fbb9eb188a903994b9e5da90ab4a6fb81f00a
Closes-Bug: #1708129
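For illustration only, a sketch of the corrected nesting (the source path is taken from the error above; the command is an assumption): config_files must live inside the per-service Kolla config map, not one level higher:

  kolla_config:
    /var/lib/kolla/config_files/multipathd.json:
      command: /usr/sbin/multipathd -d
      config_files:
        - source: /var/lib/kolla/config_files/src-iscsid/*
          dest: /
          merge: true
          preserve_properties: true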
Docker refuses to start the container because config_files/src-ceph:ro
is mounted at both /etc/ceph and config-data/puppet-generated/ceph.
The mount to /var/lib/config-data/puppet-generated/ceph should have
been removed in commit ed0b77ff93a1a1e071d32f6a758e04c6d0b041ef.
Change-Id: I411b4764a54fc21e97e4c41a5fef00c7e6e2b64d
Closes-Bug: #1707956
Without this, gnocchi does not initialize the sacks the way puppet
does, and the gnocchi containers don't respond properly.
Change-Id: I2c53b00793f99420fd12ccc0b5646cf21d528e46
The cron containers need to run as root in order to create PID files
correctly.
Additionally, the keystone_cron container was misconfigured to
use /usr/bin/cron instead of the correct /usr/bin/crond.
We also have an issue where the Kolla keystone container has
hard-coded ARGS for the docker container, which causes -DFOREGROUND
(an Apache-specific argument) to be appended to the kolla_start
command and thus causes crond to fail to start up correctly. This
change works around the issue by overriding the command and calling
kolla_set_configs manually. Once we fix this in Kolla we can
revisit this.
Change-Id: Ib8fb2bef9a3bb89131265051e9ea304525b58374
Related-Bug: #1707785
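For illustration only, a sketch of the workaround; the image parameter is an assumption. kolla_set_configs is called by hand and crond is started in the foreground, so the image's hard-coded ARGS never get appended:

  keystone_cron:
    image: {get_param: DockerKeystoneImage}
    user: root
    command:
      - /bin/bash
      - -c
      - /usr/local/bin/kolla_set_configs && /usr/sbin/crond -s -n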
The syntax was wrong and wasn't actually bind mounting the CA file.
This fixes it.
Change-Id: Icfa2118ccd2a32fdc3d1af27e3e3ee02bdfbb13b
metadata_settings is meant to have a specific format or be completely
absent. Unfortunately, the hook [1] doesn't handle an empty value for
this, so we remove it as an easy fix before figuring out how to add
such functionality to the hook.
[1] https://github.com/openstack/tripleo-heat-templates/blob/master/extraconfig/nova_metadata/krb-service-principals.yaml
Co-Authored-By: Thomas Herve <therve@redhat.com>
Change-Id: Ieac62a8076e421b5c4843a3cbe1c8fa9e3825b38
In HA overclouds, the helper script clustercheck is called by HAProxy to poll
the state of the galera cluster. Make sure that a dedicated clustercheck user
is created at deployment, as is currently done in Ocata.
The creation of the clustercheck user happens on all controller nodes, right
after the database creation. This way, it does not need to wait for the galera
cluster to be up and running.
Partial-Bug: #1707683
Change-Id: If8e0b3f9e4f317fde5328e71115aab87a5fa655f
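For illustration only, a sketch of the grants such a dedicated user typically needs; the username, password handling and step placement are assumptions:

  command:
    - /bin/bash
    - -c
    - |
      mysql -e "CREATE USER IF NOT EXISTS 'clustercheck'@'localhost' IDENTIFIED BY '${CLUSTERCHECK_PASSWORD}';"
      mysql -e "GRANT PROCESS ON *.* TO 'clustercheck'@'localhost';"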
These are needed for the TLS everywhere bits.
Change-Id: I81fcf453fc1aaa2545e0ed24013f0f13b240a102
Services that access the database have to read an extra MySQL configuration
file, /etc/my.cnf.d/tripleo.cnf, which holds client-only settings such as the
client bind address and SSL configuration. The configuration file is thus used
by containerized services, but also by non-containerized services that still
run on the host.
In order to generate that client configuration file appropriately both on the
host and in containers, 1) the MySQLClient service must be included by the
role; 2) every containerized service which uses the database must include the
mysql::client profile in the docker-puppet config generation step.
By including the mysql::client profile in each containerized service, we ensure
that any change in the configuration file will be reflected in the service's
/var/lib/config-data/{service}, and that paunch will restart the service's
container automatically.
We now rely only on MySQLClient from puppet/services, to make it possible to
generate /etc/my.cnf.d/tripleo.cnf on the host, and to set the hiera keys that
drive the generation of that config file in containers via docker-puppet.
We include a new YAML validation step to ensure that any service which depends
on MySQL will initialize the mysql::client profile during the docker-puppet
step.
Change-Id: I0dab1dc9caef1e749f1c42cfefeba179caebc8d7
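For illustration only, a sketch of a service pulling the client profile into its docker-puppet step; the service and parameter names are only examples:

  puppet_config:
    config_volume: keystone
    step_config:
      list_join:
        - "\n"
        - - {get_attr: [KeystoneBase, role_data, step_config]}
          - "include ::tripleo::profile::base::database::mysql::client"
    config_image: {get_param: DockerKeystoneConfigImage}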
Once an Ocata overcloud is upgraded to Pike, clustercheck should only be
running in a dedicated container, and xinetd should no longer manage it on
the host. Fix the mysql upgrade_task accordingly.
Change-Id: I01acacc2ff7bcc867760b298fad6ff11742a2afb
Closes-Bug: #1706612
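For illustration only, a sketch of what the adjusted upgrade task could look like; the task name, step tag and module arguments are assumptions:

  upgrade_tasks:
    - name: Stop and disable the xinetd-managed clustercheck on the host
      tags: step2
      service: name=xinetd state=stopped enabled=no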