The IP address which clients and other nodes use to connect to the
monitors is derived from the monitor_interface parameter unless
a monitor_address or monitor_address_block is given (to set mon_host
in ceph.conf); this change sets monitor_address_block to match the
public_network so that clients attempt to connect to the mons on the
appropriate network.
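A minimal sketch of the resulting ceph-ansible input, with an illustrative
CIDR (the real value comes from the deployment's public_network):

  monitor_interface: br-ex                 # only used when no address/block is set
  public_network: 192.168.24.0/24
  monitor_address_block: 192.168.24.0/24   # now follows public_network, so
                                           # mon_host ends up on that network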
Change-Id: I7187e739e9f777eab724fbc09e8b2c8ddedc552d
Closes-Bug: #1709485
|
|
After commit 488796 was merged, the single quotation marks around the
upgrade command were missing. This causes the upgrade to fail, as the
--sacks-number flag is parsed as a su command flag.
Also mount the Ceph config data into the container, which the
gnocchi-upgrade command appears to need when configured to use Ceph.
Also move the gnocchi db sync to step 4, so that Ceph is ready. Add a
retry loop to the ceilometer-upgrade command so it doesn't fail while
Apache is restarted.
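A sketch of the quoting this depends on (image name, step and sack count are
placeholders): the whole gnocchi-upgrade invocation must stay inside the
single-quoted string handed to su -c, otherwise su tries to parse
--sacks-number itself.

  docker_config:
    step_4:
      gnocchi_db_sync:
        image: gnocchi-api-image   # placeholder
        command: "su gnocchi -s /bin/bash -c '/usr/bin/gnocchi-upgrade --sacks-number=128'"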
Closes-Bug: #1709322
Change-Id: I62f3a5fa2d43a2cd579f72286661d503e9f08b90
|
|
If we consolidate these, we can focus on one implementation (the new
ansible-based one used for docker-steps).
Change-Id: Iec0ad2278d62040bf03613fc9556b199c6a80546
Depends-On: Ifa2afa915e0fee368fb2506c02de75bf5efe82d5
|
|
Run virsh secret-define and secret-set-value in an init step
instead of relying on the puppet-nova exec.
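Roughly what such an init entry looks like; the container name, step number
and file paths below are assumptions, and $FSID/$CEPH_CLIENT_KEY stand in for
the real secret UUID and key:

  docker_config:
    step_4:
      nova_libvirt_init_secret:
        image: nova-libvirt-image   # placeholder
        user: root
        command:
          - /bin/bash
          - -c
          - virsh secret-define --file /etc/nova/libvirt-secret.xml &&
            virsh secret-set-value --secret "$FSID" --base64 "$CEPH_CLIENT_KEY"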
Co-Authored-By: Jiri Stransky <jistr@redhat.com>
Change-Id: Ic950e290af1c66d34b40791defbdf4f8afaa11da
Closes-Bug: #1709583
|
|
Deploying a multipathd container gives the following error:
failed: [localhost] (item={'key': u'config_files', 'value': [{u'dest': u'/', u'merge': True, u'source':
u'/var/lib/kolla/config_files/src-iscsid/*', u'preserve_properties': True}]}) =>
{"checksum": "72ad81489381571c5043b7613f6828b06ae364bd", "failed": true, "item":
{"key": "config_files", "value": [{"dest": "/", "merge": true, "preserve_properties": true,
"source": "/var/lib/kolla/config_files/src-iscsid/*"}]}, "msg": "Destination directory does not exist"}
The reason is the wrong indentation of the config_files key in the
multipath docker service.
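For reference, a sketch of the corrected nesting (command and JSON file name
are illustrative): config_files must sit inside the kolla config entry, not
one level above it.

  kolla_config:
    /var/lib/kolla/config_files/multipathd.json:
      command: /usr/sbin/multipathd -d
      config_files:
        - source: "/var/lib/kolla/config_files/src-iscsid/*"
          dest: "/"
          merge: true
          preserve_properties: true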
Change-Id: I0e1fbb9eb188a903994b9e5da90ab4a6fb81f00a
Closes-Bug: #1708129
|
|
Docker refuses to start the container because config_files/src-ceph:ro
is mounted at both /etc/ceph and config-data/puppet-generated/ceph.
The mount to /var/lib/config-data/puppet-generated/ceph should have
been removed in commit ed0b77ff93a1a1e071d32f6a758e04c6d0b041ef.
Change-Id: I411b4764a54fc21e97e4c41a5fef00c7e6e2b64d
Closes-Bug: #1707956
|
|
Without this, gnocchi does not initialize the sacks the way puppet does,
and the gnocchi containers don't respond properly.
Change-Id: I2c53b00793f99420fd12ccc0b5646cf21d528e46
|
|
The cron containers need to run as root in order to create PID files
correctly.
Additionally, the keystone_cron container was misconfigured to
use /usr/bin/cron instead of the correct /usr/bin/crond.
We also have an issue where the Kolla keystone container has
hard-coded ARGS for the docker container, which causes -DFOREGROUND
(an Apache-specific argument) to get appended onto the kolla_start
command, thus causing crond to fail to start up correctly. This
works around the issue by overriding the command and calling
kolla_set_configs manually. Once we fix this in Kolla we can
revisit this.
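A sketch of the workaround with assumed names (the crond path is the one
mentioned above and may differ between images):

  keystone_cron:
    image: keystone-image   # placeholder
    user: root              # needed so the PID file can be created
    command:
      - /bin/bash
      - -c
      - /usr/local/bin/kolla_set_configs && /usr/bin/crond -n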
Change-Id: Ib8fb2bef9a3bb89131265051e9ea304525b58374
Related-Bug: #1707785
|
|
The syntax was wrong and wasn't actually bind mounting the CA file.
This fixes it.
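For illustration, the kind of entry the fix produces (the CA file path is a
placeholder): the host path, the container path and the ro flag have to form
a single colon-separated string.

  volumes:
    - /etc/pki/ca-trust/source/anchors/deployment-ca.pem:/etc/pki/ca-trust/source/anchors/deployment-ca.pem:ro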
Change-Id: Icfa2118ccd2a32fdc3d1af27e3e3ee02bdfbb13b
|
|
metadata_settings is meant to have a specific format or be completely
absent. Unfortunately the hook [1] doesn't handle an empty value for this,
so we remove it as an easy fix before figuring out how to add such
functionality to the hook.
[1] https://github.com/openstack/tripleo-heat-templates/blob/master/extraconfig/nova_metadata/krb-service-principals.yaml
Co-Authored-By: Thomas Herve <therve@redhat.com>
Change-Id: Ieac62a8076e421b5c4843a3cbe1c8fa9e3825b38
|
|
In HA overclouds, the helper script clustercheck is called by HAProxy to poll
the state of the Galera cluster. Make sure that a dedicated clustercheck user
is created at deployment, as is currently done in Ocata.
The creation of the clustercheck user happens on all controller nodes, right
after the database creation. This way, it does not need to wait for the Galera
cluster to be up and running.
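A minimal sketch of the idea, with an assumed step number, image and
credential variable (the actual change wires this through the templates):

  docker_config:
    step_2:
      clustercheck_account:
        image: mariadb-image   # placeholder
        command:
          - /bin/bash
          - -c
          - >-
            mysql -e "CREATE USER IF NOT EXISTS 'clustercheck'@'localhost'
            IDENTIFIED BY '$CLUSTERCHECK_PASSWORD';
            GRANT PROCESS ON *.* TO 'clustercheck'@'localhost';"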
Partial-Bug: #1707683
Change-Id: If8e0b3f9e4f317fde5328e71115aab87a5fa655f
|
|
These are needed for the TLS everywhere bits.
Change-Id: I81fcf453fc1aaa2545e0ed24013f0f13b240a102
|
|
Running puppet apply with --logdest syslog results in all the output
being redirected to syslog. You get no error messages. In the case where
this fails, the subsequent debug task shows nothing useful as there was
no stdout/stderr.
Also pass --logdest console to docker-puppet's puppet apply so that
we get the output for the debug task.
Related-Bug: #1707030
Change-Id: I67df5eee9916237420ca646a16e188f26c828c0e
Signed-off-by: Bogdan Dobrelya <bdobreli@redhat.com>
|
|
Running puppet apply with --logdest syslog results in all the output
being redirected to syslog. You get no error messages. In the case where
this ansible task fails, the subsequent debug task shows nothing useful
as there was no stdout/stderr.
Also pass --logdest console to puppet apply so that we get the output
for the debug task. My local testing showed that when specifying logdest
twice, both values were honored, and the output went to syslog and the
console.
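A sketch of the resulting task shape; the task name, manifest path and the
flags other than --logdest are assumptions:

  - name: Run puppet host configuration for the step
    shell: >-
      puppet apply --detailed-exitcodes
      --logdest syslog --logdest console
      /var/lib/tripleo-config/puppet_step_config.pp
    register: outputs
    failed_when: outputs.rc not in [0, 2]

  - debug: var=outputs.stdout_lines
    when: outputs is defined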
Change-Id: Id5212b3ed27b6299e33e81ecf71ead554f9bdd29
Closes-Bug: #1707030
|
|
Services that access the database have to read an extra MySQL configuration
file, /etc/my.cnf.d/tripleo.cnf, which holds client-only settings such as the
client bind address and SSL configuration. The configuration file is thus used
by containerized services, but also by non-containerized services that still
run on the host.
In order to generate that client configuration file appropriately both on the
host and for containers, 1) the MySQLClient service must be included by the
role; 2) every containerized service which uses the database must include the
mysql::client profile in the docker-puppet config generation step.
By including the mysql::client profile in each containerized service, we ensure
that any change in the configuration file will be reflected in the service's
/var/lib/config-data/{service}, and that paunch will restart the service's
container automatically.
We now only rely on MySQLClient from puppet/services, to make it possible to
generate /etc/my.cnf.d/tripleo.cnf on the host, and to set the hiera keys that
drive the generation of that config file in containers via docker-puppet.
We include a new YAML validation step to ensure that any service which depends
on MySQL will initialize the mysql::client profile during the docker-puppet
step.
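An illustrative excerpt of what a database-using containerized service now
feeds to docker-puppet (the service shown is only an example):

  puppet_config:
    config_volume: keystone
    puppet_tags: keystone_config
    step_config:
      list_join:
        - "\n"
        - - {get_attr: [KeystoneBase, role_data, step_config]}
          - "include ::tripleo::profile::base::database::mysql::client"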
Change-Id: I0dab1dc9caef1e749f1c42cfefeba179caebc8d7
|
|
This sets the SSL flag in the docker service and exposes the corresponding
parameter there.
Depends-On: I4c68a662c2433398249f770ac50ba0791449fe71
Change-Id: Ic3df2b9ab7432ffbed5434943e04085a781774a0
|
|
Add docker profiles to deploy Ceph in containers via ceph-ansible. This is
implemented by triggering a Mistral workflow during one of the overcloud
deployment steps, as provided by [1].
Some new service-specific parameters are available to determine the workflow to
execute and the ansible playbook to use. A new `CephAnsibleExtraConfig`
parameter can be used to provide arbitrary config variables consumed by `ceph-ansible`.
The pre-existing template params consumed up until the Pike release to
drive `puppet-ceph` continue to work and are translated, when possible, into
the equivalent `ceph-ansible` variable.
A new environment file is added to enable use of ceph-ansible;
the pre-existing puppet-ceph implementation remains unchanged and usable
for non-containerized deployments.
1. https://review.openstack.org/#/c/463324/
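A hypothetical environment snippet showing how the pieces fit together; the
registry paths and the extra-config values are placeholders:

  resource_registry:
    OS::TripleO::Services::CephMon: ../docker/services/ceph-ansible/ceph-mon.yaml
    OS::TripleO::Services::CephOSD: ../docker/services/ceph-ansible/ceph-osd.yaml

  parameter_defaults:
    CephAnsibleExtraConfig:
      journal_size: 512
      osd_scenario: collocated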
Change-Id: I81d44a1e198c83a4ef8b109b4eb6c611555dcdc5
|
|
The introduction of I90253412a5e2cd8e56e74cce3548064c06d022b1 broke the HAProxy
service due to some HAProxy-specific iptables rules being executed during the
puppet config step.
Ensure that no iptables call is performed during the generation of configuration
files. Move those calls to step 1, as implemented in the pacemaker-based
HAProxy service (Ib5a083ba3299a82645f1a0f9da0d482c6b89ee23).
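A sketch of the intent, assuming the HAProxy profile exposes a
manage_firewall-style knob (the exact wiring in the template may differ):

  puppet_config:
    config_volume: haproxy
    step_config: |
      class {'::tripleo::haproxy':
        # only generate configuration files here; iptables rules are applied
        # on the host at step 1 instead
        manage_firewall => false,
      }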
Depends-On: I2d6274d061039a9793ad162ed8e750bd87bf71e9
Closes-Bug: #1697921
Change-Id: Ica3a432ff4a9e7a46df22cddba9ad96e1390b665
|
|
Given that ceph-ansible or puppet-ceph will have created the Ceph config
files and keyrings in /etc/ceph on baremetal, this change copies the files
that the services need in order to connect to the Ceph cluster into the
OpenStack containers.
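A sketch of the per-service addition (the service and the kolla source
directory are illustrative): the host's /etc/ceph is bind mounted read-only
and kolla copies it into place inside the container.

  # 1) bind mount the host files into the kolla "src" location
  nova_libvirt:
    volumes:
      - /etc/ceph:/var/lib/kolla/config_files/src-ceph:ro

  # 2) let kolla copy them to /etc/ceph inside the container
  /var/lib/kolla/config_files/nova_libvirt.json:
    config_files:
      - source: "/var/lib/kolla/config_files/src-ceph/"
        dest: "/etc/ceph/"
        merge: true
        preserve_properties: true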
Change-Id: Ibc9964902637429209d4e1c1563b462c60090365
|
|
Required now that https://review.openstack.org/480289 has merged
Change-Id: I17f6c9b5a6e2120a53bae296042ece492210597a
Related-Bug: #1696504
|
|