This allows any ssh client spawned from a container to validate the ssh host key.
Change-Id: I86d95848e5f049e8af98107cd7027098d6cdee7c
Closes-bug: #1693841
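A minimal sketch of the idea, assuming the fix works by bind-mounting the
host's known-hosts data into the container (service name and step are
placeholders, not the actual template content):

  docker_config:
    step_4:
      some_service:
        volumes:
          # expose the host's known-hosts file read-only so ssh clients
          # inside the container can validate host keys
          - /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro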
Works around the issue encountered in bug 1696283.
Change-Id: I1947d9d1e3cabc5dfe25ee1af994d684425bdbf7
Resolves-Bug: #1696283
This service is missing the task to stop/disable the service on
the host before it is started in a container.
Change-Id: I33d70d32c3b55e1f2738441f57c74b007e7bd766
Closes-Bug: #1695017
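A hedged sketch of the missing task, in the usual upgrade_tasks shape
(the unit name and step tag are placeholders):

  upgrade_tasks:
    - name: Stop and disable the service on the host before containerization
      tags: step2                 # illustrative step
      service:
        name: openstack-example   # placeholder; depends on the service
        state: stopped
        enabled: no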
Add ServiceDebug parameters for each service, allowing operators to
enable or disable Debug for specific services.
We keep the Debug parameter for backward compatibility.
Operators who want to enable Debug everywhere:
  Debug: true
Operators who want to disable Debug everywhere:
  Debug: false
Operators who want to disable Debug everywhere except Glance:
  Debug: false
  GlanceDebug: true
Operators who want to enable Debug everywhere except Glance:
  Debug: true
  GlanceDebug: false
New parameters: AodhDebug, BarbicanDebug, CeilometerDebug, CinderDebug,
CongressDebug, GlanceDebug, GnocchiDebug, HeatDebug, HorizonDebug,
IronicDebug, KeystoneDebug, ManilaDebug, MistralDebug, NeutronDebug,
NovaDebug, OctaviaDebug, PankoDebug, SaharaDebug, TackerDebug,
ZaqarDebug.
Note: for backward compatibility in Horizon, HorizonDebug is set to
false, so we maintain previous behavior.
Change-Id: Icbf4a38afcdbd8471d1afc11743df9705451db52
Implement-blueprint: composable-debug
Closes-Bug: #1634567
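For example, to disable Debug everywhere except Glance, an environment
file would carry (standard Heat environment layout):

  parameter_defaults:
    Debug: false
    GlanceDebug: true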
Change-Id: I124d7f5bb0e662585a034668cd82ef1fa6db2fa1
Change-Id: I149ca7cdd939ed7c1767a416bb9569ada163e820
Closes-bug: #1696089
Replace the multiple SoftwareDeployment resources with a common
playbook that runs on all roles, consuming the configuration data
written via the HostPrepAnsible tasks.
This hopefully simplifies things, and will enable re-running the
deploy steps for minor updates (we'll need some way to detect that a
container should be replaced, but that will be done via a follow-up
patch).
Change-Id: I674a4d9d2c77d1f6fbdb0996f6c9321848e32662
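A rough sketch of the shape of the common playbook (paths, commands, and
flags are approximations of the t-h-t tooling, not the exact template
output):

  - hosts: overcloud
    tasks:
      # illustrative: each deploy step re-applies configuration and
      # (re)starts containers using the data written during host prep
      - name: Generate per-service config for this step
        command: python /var/lib/docker-puppet/docker-puppet.py
        environment:
          STEP: "{{ step }}"
      - name: Start containers for this step
        command: >-
          paunch apply
          --file /var/lib/tripleo-config/docker-container-startup-config-step_{{ step }}.json
          --config-id tripleo_step{{ step }}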
This change implements an initial container for haproxy in the non-HA
case (i.e. when the container is not spawned by pacemaker).
We tested this using a stock kolla haproxy container image and were
able to get haproxy running in a container with net=host correctly.
Change-Id: I90253412a5e2cd8e56e74cce3548064c06d022b1
Co-Authored-By: Michele Baldessari <michele@acksyn.org>
Depends-on: I51c482b70731f15fee4025bbce14e46a49a49938
Closes-Bug: #1668936
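A hedged sketch of the haproxy entry under 'docker_config' (the image
name and volume paths are illustrative, not the template's exact values):

  docker_config:
    step_1:
      haproxy:
        image: kolla/centos-binary-haproxy:latest   # illustrative image
        net: host          # haproxy binds the VIPs directly on the host network
        restart: always
        volumes:
          # kolla config wrapper and the puppet-generated haproxy config
          - /var/lib/kolla/config_files/haproxy.json:/var/lib/kolla/config_files/config.json:ro
          - /var/lib/config-data/haproxy/etc/haproxy/:/etc/haproxy/:ro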
Change-Id: Ie8010969443324dc76be8ade8edc1390b073345b
This helps with processing the backlog, so let's update
the default out of the box.
Change-Id: I06d4ca95f4a1da2864f4845ef3e7a74a1bce9e41
This service allows configuring and deploying Redis containers
in an HA overcloud managed by pacemaker.
The containers are managed and run by pacemaker. Inside them runs
pacemaker_remote, which invokes the resource agent managing redis.
The resources themselves are created via puppet-pacemaker inside a
short-lived container used for this purpose (redis_init_bundle).
This container needs to use the 'docker_config' section to invoke
puppet (as opposed to 'docker_puppet_tasks') because, due to the HA
composability, each resource creation needs to happen on the bootstrap
node of that service, and 'docker_puppet_tasks' will only run on the
controller/primary role.
Co-Authored-By: Michele Baldessari <michele@acksyn.org>
Closes-Bug: #1692924
Depends-On: Ia1131611d15670190b7b6654f72e6290bf7f8b9e
Change-Id: Ie045954fcc86ef2b3e4562b6f012853177f03948
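A hedged sketch of what that short-lived resource-creation container
looks like under 'docker_config' (the step number, image, and puppet
class are illustrative, not the template's exact values):

  docker_config:
    step_2:
      redis_init_bundle:
        start_order: 2
        detach: false      # short-lived: runs puppet once and exits
        net: host
        user: root
        image: kolla/centos-binary-redis:latest   # illustrative; same image as the service
        command:
          # illustrative: apply only the pacemaker resource-creation bits
          - puppet
          - apply
          - --tags
          - pacemaker::resource::bundle
          - -e
          - include ::tripleo::profile::pacemaker::redis_bundle   # class name illustrative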
Ie9bdd9b16bcb9f11107ece614b010e87d3ae98a9 improperly used
CephPoolDefaultSite for the puppet-ceph-devel environment. The correct
configuration item is CephPoolDefaultSize.
Change-Id: If3a23f8d000061da62e4a7565a7fb6cf1ac97a4a
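The corrected environment entry then reads (the value shown is only
illustrative for a single-OSD dev setup):

  parameter_defaults:
    CephPoolDefaultSize: 1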
Idle compute nodes are found to already consume ~1.5GB of memory, so
2GB is a bit tight. Increasing to 4GB to be on the safe side. Also
see https://bugzilla.redhat.com/show_bug.cgi?id=1341178
Change-Id: Ic95984b62a748593992446271b197439fa12b376
Adds the ability to blacklist servers from all SoftwareDeployment
resources. The servers are specified in a new list parameter,
DeploymentServerBlacklist, by their Heat-assigned names
(overcloud-compute-0, etc.).
implements blueprint disable-deployments
Change-Id: I46941e54a476c7cc8645cd1aff391c9c6c5434de
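For example (server names follow the Heat-assigned naming shown above):

  parameter_defaults:
    DeploymentServerBlacklist:
      - overcloud-compute-0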
This fix needs to be backported to ocata.
Change-Id: I5938761efa4f56e576f41929e0bc12df246ac81a
Signed-off-by: Karthik S <ksundara@redhat.com>
Closes-Bug: #1694703
When gnocchi-upgrade runs, we need to ensure the storage is upgraded so
that the necessary storage sacks are initialized.
Closes-bug: #1693621
Change-Id: I84e4fc3b6ad7fd966c4097a29678a0fd5b7a20a5
environment file."
When running disabled/ceilometer-expirer.yaml, we want to remove the
crontab that used to run the ceilometer-expirer binary periodically.
Let's use Puppet to remove this crontab.
We can't easily use Ansible tasks this time, because the Ansible cron
module can only remove crontabs previously managed by Ansible:
https://docs.ansible.com/ansible/cron_module.html#examples
In this case, Puppet will erase the crontab in Pike. In Queens, we'll
be able to remove these environment files since we won't need them
anymore.
Change-Id: Idb050c3b281d258aea52d6a3ef40441bb9c8bcbe
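A hedged sketch of the approach, with the disabled service template
carrying a bit of step_config for Puppet (the service name and manifest
shape are illustrative; the real change goes through the ceilometer
puppet module):

  outputs:
    role_data:
      value:
        service_name: ceilometer_expirer_disabled   # name illustrative
        step_config: |
          # remove the expirer cron entry that puppet previously managed
          cron { 'ceilometer-expirer':
            ensure => absent,
            user   => 'ceilometer',
          }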
When using the Deployed Server feature, we rely on Puppet to install
packages. But the nova-compute/libvirt puppet runs in a container, so
it cannot install anything on the host. We rely on virtlogd on the
host, so we need to install it there somehow. This patch uses
host_prep_tasks for that, conditionally based on the
EnablePackageInstall stack parameter value.
Also, the multinode-container-upgrade.yaml env is copied as
multinode-containers.yaml to remove the naming confusion, as the
environment file can be used for more than just upgrades. The old env
file will be removed once we make the upgrade job use the new one
(a catch-22 type of issue).
Change-Id: Ia9b3071daa15bc30792110e5f34cd859cc205fb8
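A minimal sketch of the host_prep_tasks addition (the package name and
condition variable are assumptions, not the patch's exact content):

  host_prep_tasks:
    - name: Install virtlogd on the host when package installs are enabled
      package:
        name: libvirt       # assumed package providing virtlogd
        state: present
      when: package_install_enabled | bool   # derived from EnablePackageInstall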
This adds the sshd puppet service to the containerized compute role.
All other roles already include this service from the default roles
data; it is only missing from the compute role.
As the sshd service runs on the docker host, this must remain a
traditional puppet service. NB: the sshd puppet service does not
enable sshd; it just enables the management of the sshd config via
t-h-t/puppet.
Closes-bug: #1693837
Change-Id: I86ff749245ac791e870528ad4b410f3c1fd812e0
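The fix amounts to listing the service in the compute role's
ServicesDefault in the roles data, e.g.:

  - name: Compute
    ServicesDefault:
      - OS::TripleO::Services::Sshd
      # ... existing compute services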