metadata_settings is meant to have a specific format or be completely
absent. Unfortunately the hook [1] doesn't accept an empty value for it,
so we remove the empty setting as an easy fix before figuring out how to
add such functionality to the hook.
[1] https://github.com/openstack/tripleo-heat-templates/blob/master/extraconfig/nova_metadata/krb-service-principals.yaml
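For reference, a minimal sketch of what a non-empty metadata_settings
entry looks like in a service template (the keys shown are an assumption
based on the TLS-everywhere templates, not part of this change):

    metadata_settings:
      - service: mysql
        network: {get_param: [ServiceNetMap, MysqlNetwork]}
        type: vip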
Co-Authored-By: Thomas Herve <therve@redhat.com>
Change-Id: Ieac62a8076e421b5c4843a3cbe1c8fa9e3825b38
These are needed for the TLS everywhere bits.
Change-Id: I81fcf453fc1aaa2545e0ed24013f0f13b240a102
Services that access the database have to read an extra MySQL configuration file
/etc/my.cnf.d/tripleo.cnf which holds client-only settings, like client bind
address and SSL configuration. The configuration file is thus used by
containerized services, but also by non-containerized services that still
run on the host.
In order to generate that client configuration file appropriately both on the
host and for containers, 1) the MySQLClient service must be included by the
role; 2) every containerized service which uses the database must include the
mysql::client profile in the docker-puppet config generation step.
By including the mysql::client profile in each containerized service, we ensure
that any change in the configuration file will be reflected in the service's
/var/lib/config-data/{service}, and that paunch will restart the service's
container automatically.
We now only rely on MySQLClient from puppet/services, to make it possible to
generate /etc/my.cnf.d/tripleo.cnf on the host, and to set the hiera keys that
drive the generation of that config file in containers via docker-puppet.
We include a new YAML validation step to ensure that any service which depends
on MySQL will initialize the mysql::client profile during the docker-puppet
step.
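As an abbreviated sketch of the pattern (the service names here are only
an example, not the full set of templates touched):

    puppet_config:
      config_volume: cinder
      puppet_tags: cinder_config
      step_config:
        list_join:
          - "\n"
          - - {get_attr: [CinderBase, role_data, step_config]}
            - "include ::tripleo::profile::base::database::mysql::client"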
Change-Id: I0dab1dc9caef1e749f1c42cfefeba179caebc8d7
This sets the SSL flag and exposes the corresponding parameter in the
docker service.
Depends-On: I4c68a662c2433398249f770ac50ba0791449fe71
Change-Id: Ic3df2b9ab7432ffbed5434943e04085a781774a0
Add docker profiles to deploy Ceph in containers via ceph-ansible. This is
implemented by triggering a Mistral workflow during one of the overcloud
deployment steps, as provided by [1].
Some new service-specific parameters are available to determine the workflow to
execute and the ansible playbook to use. A new `CephAnsibleExtraConfig`
parameter can be used to provide arbitrary config variables consumed by `ceph-ansible`.
The pre-existing template params consumed up until the Pike release to
drive `puppet-ceph` continue to work and are translated, when possible, into
the equivalent `ceph-ansible` variable.
A new environment file is added to enable use of ceph-ansible;
the pre-existing puppet-ceph implementation remains unchanged and usable
for non-containerized deployments.
1. https://review.openstack.org/#/c/463324/
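A rough sketch of such an environment file (the template paths and the
extra-config key below are illustrative):

    resource_registry:
      OS::TripleO::Services::CephMon: ../docker/services/ceph-ansible/ceph-mon.yaml
      OS::TripleO::Services::CephOSD: ../docker/services/ceph-ansible/ceph-osd.yaml

    parameter_defaults:
      CephAnsibleExtraConfig:
        journal_size: 512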
Change-Id: I81d44a1e198c83a4ef8b109b4eb6c611555dcdc5
The introduction of I90253412a5e2cd8e56e74cce3548064c06d022b1 broke the HAproxy
service due to some HAproxy-specific iptables rules being executed during the
puppet config step.
Ensure that no iptables call is performed during the generation of configuration
files. Move those calls to step 1, as implemented in the pacemaker-based
HAproxy service (Ib5a083ba3299a82645f1a0f9da0d482c6b89ee23).
Depends-On: I2d6274d061039a9793ad162ed8e750bd87bf71e9
Closes-Bug: #1697921
Change-Id: Ica3a432ff4a9e7a46df22cddba9ad96e1390b665
Given that ceph-ansible or puppet-ceph will have created the Ceph
config files and keyrings in /etc/ceph on baremetal, this change
copies the files the services need to connect to the Ceph cluster
into the OpenStack containers.
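One way to picture this is a read-only bind mount in the affected
container definitions, e.g. (sketch only; the actual change may rely on
kolla's config-file copy mechanism instead):

    nova_libvirt:
      volumes:
        - /etc/ceph:/etc/ceph:ro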
Change-Id: Ibc9964902637429209d4e1c1563b462c60090365
Required now that https://review.openstack.org/480289 has merged
Change-Id: I17f6c9b5a6e2120a53bae296042ece492210597a
Related-Bug: #1696504
This patch adds parameters to configure alternative Zaqar messaging
and management backends.
The intent is to use these settings in the containerized undercloud,
defaulting to the swift/mysql backends and thus avoiding the
dependency on MongoDB.
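For illustration, an environment could then select the backends along
these lines (the parameter names are an assumption, check the patch for
the exact ones):

    parameter_defaults:
      ZaqarMessageStore: swift
      ZaqarManagementStore: sqlalchemy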
Change-Id: Ifd6a561737184c9322192ffc9a412c77d6eac3e9
Depends-On: Ie6a56b9163950cee2c0341afa0c0ddce665f3704
Depends-On: I3598e39c0a3cdf80b96e728d9aa8a7e6505e0690
Updates hieradata for changes in https://review.openstack.org/471950.
Creates a new service - NovaMigrationTarget. On baremetal this just
configures live/cold migration. On docker it includes a container running
a second sshd service on an alternative port.
Configures /var/lib/nova/.ssh/config and mounts it into the nova-compute
and libvirtd containers.
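Sketch of the kind of mount involved (paths illustrative):

    volumes:
      - /var/lib/nova:/var/lib/nova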
Change-Id: Ic4b810ff71085b73ccd08c66a3739f94e6c0c427
Implements: blueprint tripleo-cold-migration
Depends-On: I6c04cebd1cf066c79c5b4335011733d32ac208dc
Depends-On: I063a84a8e6da64ae3b09125cfa42e48df69adc12
The metadata agent creates the domain socket /var/lib/neutron/metadata_proxy
that is used for communication with haproxy in the L3 and DHCP agents.
This patch adds creation of /var/lib/neutron if it doesn't exist and
mounts it into the L3, DHCP and metadata agent containers.
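Abbreviated sketch of the relevant template bits (step and container
names are illustrative):

    host_prep_tasks:
      - name: create /var/lib/neutron for the metadata proxy socket
        file:
          path: /var/lib/neutron
          state: directory
    docker_config:
      step_4:
        neutron_metadata_agent:
          volumes:
            - /var/lib/neutron:/var/lib/neutron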
Change-Id: Id8b8487b5a6a288e5ef1ca1c7d5b47a59cc8dea2
Closes-Bug: #1705289
Since these are global parameters used in multiple places, they
shouldn't specify what will be using them.
Change-Id: I5054c2d67dffe802e37f8391dd7bad4721e29831
Partial-Bug: 1700664
Change-Id: I3ea7c0c7ea049043668e68c6e637fd2aaf992622
Partial-Bug: 1700664
Now that ODL clustering is fixed to not use an exec by
https://git.opendaylight.org/gerrit/#/c/60491,
we no longer need the workaround puppet-tripleo tag to configure
clustering.
Change-Id: I21c1eb2eff6d4cb855eff4a1122f55ad625d84cc
Signed-off-by: Tim Rozet <trozet@redhat.com>
This is required when the bundles run on pacemaker remote nodes;
otherwise the cluster won't be able to connect to the control-ports
of each bundle. The only services that need this are rabbit, redis and
galera, because those run pacemaker_remote inside the container
(A/P resources and haproxy do not).
Change-Id: I6a56d79319ef3d14973a0586dcda4d523adda7aa
Co-Authored-By: Damien Ciabrini <dciabrin@redhat.com>
The token-flush cron job is created in /var/spool/cron/keystone
by puppet. This patch creates a cron container to run that
in an environment where it has access to keystone.conf
and the keystone-manage binaries.
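Rough sketch of the added container (image parameter and mounts are
illustrative):

    keystone_cron:
      image: {get_param: DockerKeystoneImage}
      user: root
      command: ['/usr/sbin/crond', '-n']
      volumes:
        - /var/lib/config-data/keystone/etc/keystone:/etc/keystone:ro
        - /var/log/containers/keystone:/var/log/keystone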
Change-Id: Ie305ee9990657c66938250d1d6e19fef94675997
Partial-bug: 1701254
The purge-deleted cron job is created by puppet in
/var/spool/cron/heat. This creates a cron container
to run that in an environment where it has access to the
heat.conf and heat-manage binaries.
Change-Id: Ib9fe8e4f6dbd41021df7cf152fd18569c189d2e2
Partial-bug: #1701254
The cinder db purge cron job is created by puppet in
/var/spool/cron/cinder. This creates a cron container to run
that in an environment where it has access to cinder.conf
and the cinder-manage binaries.
Change-Id: I02ae32a6dcd8569e2e2390063d4d935d05545a78
Partial-bug: #1701254
This patch reworks the nova_api_cron container so that it contains
the /var/spool/cron crontab for the nova user and also the correct
nova.conf file.
This should allow kolla-start to copy the correct config files
into place and then start the cron service to run the nova
tasks periodically.
Change-Id: Ib6b2ca5af5419130fb9c83f83d6f4bf97410e870
Related-bug: #1701254
This patch removes more of the DockerNamespace references as part
of the cleanup/reorg of the container configuration patches.
This also adds a centos-rdo environment file for use with
the new interface. This file was generated with the command
"openstack overcloud container image prepare"
Depends-On: I729fa00175cb36b02b882d729aae5ff06d0e3fbc
Depends-On: I292162d66880278de09f7acbdbf02e2312c5bb2b
Co-Authored-By: Dan Prince <dprince@redhat.com>
Change-Id: Ice7b57c25248634240a6dd6e14e6d411e7806326
Adds upgrade_tasks to remove the pacemaker resources using the
ansible-pacemaker module.
Resources are disabled and removed in step2 (run only on the
bootstrap node), and the cluster stop is moved to step3.
The existing systemd/service call is kept, but only to disable
the services after they are disabled/deleted from the cluster.
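The tasks follow roughly this pattern (the resource name is just an
example):

    upgrade_tasks:
      - name: Disable the rabbitmq cluster resource
        tags: step2
        pacemaker_resource:
          resource: rabbitmq
          state: disable
          wait_for_resource: true
      - name: Delete the rabbitmq cluster resource
        tags: step2
        pacemaker_resource:
          resource: rabbitmq
          state: delete
          wait_for_resource: true
      - name: Disable the rabbitmq systemd service
        tags: step2
        service: name=rabbitmq-server enabled=no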
Related-Bug: 1701485
Co-Authored-By: Damien Ciabrini <dciabrin@redhat.com>
Change-Id: Ia597d240ea5834c50a8f6c4fac0b6ed417b8535c