This change implements an initial container for haproxy in the non-HA
case (i.e. when the container is not spawned by pacemaker).
We tested this using a stock Kolla haproxy container image and were
able to get haproxy running correctly in a container with net=host.
Change-Id: I90253412a5e2cd8e56e74cce3548064c06d022b1
Co-Authored-By: Michele Baldessari <michele@acksyn.org>
Depends-on: I51c482b70731f15fee4025bbce14e46a49a49938
Closes-Bug: #1668936
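
Presumably the service is wired up through the 'docker_config' section
used by containerized services. A minimal sketch of such an entry; the
image name and volume list are illustrative assumptions, not copied
from the change:

    docker_config:
      step_1:
        haproxy:
          image: tripleoupstream/centos-binary-haproxy:latest
          net: host        # share the host network namespace, as tested above
          restart: always
          volumes:
            - /var/lib/kolla/config_files/haproxy.json:/var/lib/kolla/config_files/config.json:ro
            - /var/lib/config-data/haproxy/:/var/lib/kolla/config_files/src:ro
          environment:
            - KOLLA_CONFIG_STRATEGY=COPY_ALWAYS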
Change-Id: Ie8010969443324dc76be8ade8edc1390b073345b
This helps with processing the backlog, so let's update
the default out of the box.
Change-Id: I06d4ca95f4a1da2864f4845ef3e7a74a1bce9e41
This service allows configuring and deploying Redis containers
in an HA overcloud managed by pacemaker.
The containers are managed and run by pacemaker. Inside them runs
pacemaker_remote, which invokes the resource agent managing redis.
The resources themselves are created via puppet-pacemaker inside a
short-lived container used for this purpose (redis_init_bundle).
This container needs to use the 'docker_config' section to invoke
puppet (as opposed to 'docker_puppet_tasks') because, due to the HA
composability, each resource creation needs to happen on the bootstrap
node of that service, and 'docker_puppet_tasks' only runs on the
controller/primary role.
Co-Authored-By: Michele Baldessari <michele@acksyn.org>
Closes-Bug: #1692924
Depends-On: Ia1131611d15670190b7b6654f72e6290bf7f8b9e
Change-Id: Ie045954fcc86ef2b3e4562b6f012853177f03948
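
To make the distinction concrete, here is a rough sketch of the kind
of 'docker_config' init-bundle entry involved; the container name,
parameter name, puppet classes, and volume list are assumptions for
illustration, not the merged template. The same pattern appears in the
RabbitMQ, MySQL/galera, and HAProxy bundle commits below:

    docker_config:
      step_2:
        redis_init_bundle:
          # runs once, on the service's bootstrap node, to create the
          # pacemaker bundle resource via puppet-pacemaker
          image: {get_param: DockerRedisImage}   # parameter name is illustrative
          net: host
          detach: false
          user: root
          command:
            - /bin/bash
            - -c
            - puppet apply -e "include tripleo::profile::base::pacemaker;
              include tripleo::profile::pacemaker::database::redis_bundle"
          volumes:
            - /etc/puppet:/etc/puppet:ro
            - /etc/corosync/corosync.conf:/etc/corosync/corosync.conf:ro
            - /dev/shm:/dev/shm:rw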
Idle compute nodes already consume ~1.5GB of memory, so 2GB is a bit
tight. Increase to 4GB to be on the safe side. Also see
https://bugzilla.redhat.com/show_bug.cgi?id=1341178
Change-Id: Ic95984b62a748593992446271b197439fa12b376
Adds the ability to blacklist servers from all SoftwareDeployment
resources. The servers are specified in a new list parameter,
DeploymentServerBlacklist, by their Heat-assigned names
(overcloud-compute-0, etc.).
implements blueprint disable-deployments
Change-Id: I46941e54a476c7cc8645cd1aff391c9c6c5434de
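
For example, an environment file using the new parameter might look
like this (the server names are illustrative):

    parameter_defaults:
      DeploymentServerBlacklist:
        - overcloud-compute-0
        - overcloud-compute-1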
This fix needs to be backported to ocata.
Change-Id: I5938761efa4f56e576f41929e0bc12df246ac81a
Signed-off-by: Karthik S <ksundara@redhat.com>
Closes-Bug: #1694703
When gnocchi-upgrade runs, we need to ensure the storage is upgraded
so that the necessary storage sacks are initialized.
Closes-bug: #1693621
Change-Id: I84e4fc3b6ad7fd966c4097a29678a0fd5b7a20a5
environment file."
When running disabled/ceilometer-expirer.yaml, we want to remove the
crontab that used to run the ceilometer-expirer binary periodically.
Let's use Puppet to remove this crontab.
We can't easily use Ansible tasks this time, because the Ansible cron
module can only remove crontabs previously managed by Ansible:
https://docs.ansible.com/ansible/cron_module.html#examples
In this case, Puppet will remove the crontab in Pike. In Queens, we'll
be able to remove these environment files since we won't need them
anymore.
Change-Id: Idb050c3b281d258aea52d6a3ef40441bb9c8bcbe
When using the Deployed Server feature, we rely on Puppet to install
packages. But nova-compute/libvirt puppet runs in a container, so it
cannot install anything on the host. We rely on virtlogd on the host,
so we need to install it there somehow. This patch uses
host_prep_tasks for that, conditionally based on the
EnablePackageInstall stack parameter value.
Also, the multinode-container-upgrade.yaml env is copied as
multinode-containers.yaml to remove the naming confusion, as the
environment file can be used for more than just upgrades. The old env
file will be removed once we make the upgrade job use the new one (a
catch-22 type of issue).
Change-Id: Ia9b3071daa15bc30792110e5f34cd859cc205fb8
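
A sketch of what such a host_prep_tasks entry can look like; the
package name and the variable wiring for the conditional are
assumptions (on RHEL/CentOS, virtlogd ships in libvirt-daemon):

    host_prep_tasks:
      - name: install virtlogd on the host
        package:
          name: libvirt-daemon    # provides /usr/sbin/virtlogd
          state: present
        when: enable_package_install|bool   # derived from EnablePackageInstall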
This adds the sshd puppet service to the containerized compute role.
All other roles already include this service from the default roles
data; it is only missing from the compute role.
As the sshd service runs on the docker host, this must remain a
traditional puppet service. NB: the sshd puppet service does not
enable sshd, it just enables management of the sshd config via
t-h-t/puppet.
Closes-bug: #1693837
Change-Id: I86ff749245ac791e870528ad4b410f3c1fd812e0
Add upgrade tasks for cinder-volume when it's controlled by pacemaker:
o Stop the service before the entire pacemaker cluster is stopped.
  This ensures the service is stopped before infrastructure services
  (e.g. rabbitmq) go away.
o Migrate the cinder DB prior to restarting the service. This covers
  the situation where puppet-cinder (which would otherwise handle the
  db sync) isn't managing the service.
o Start the service after the rest of the pacemaker cluster has been
  started.
Closes-Bug: #1691851
Change-Id: I5874ab862964fadb68320d5c4de39b20f53dc25c
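
A hedged sketch of this ordering expressed as upgrade_tasks, using the
pacemaker_resource module and cinder-manage; the step tags and the
resource name are illustrative assumptions:

    upgrade_tasks:
      - name: Stop cinder-volume before the pacemaker cluster is stopped
        tags: step1
        pacemaker_resource:
          resource: openstack-cinder-volume
          state: disable
          wait_for_resource: true
      - name: Sync the cinder DB prior to restarting the service
        tags: step5
        command: cinder-manage db sync
      - name: Start cinder-volume after the cluster is back up
        tags: step5
        pacemaker_resource:
          resource: openstack-cinder-volume
          state: enable
          wait_for_resource: true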
This will be used in our HA OVB CI job, which is currently failing
due to running out of memory. Telemetry will still be tested via the
scenario jobs, but this frees up a large chunk of memory in the most
memory-intensive job.
Closes-Bug: #1693174
Change-Id: Idefe9f0de47c5b0f29b7326642d697ed179e2eb8
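
The environment presumably maps the telemetry services to
OS::Heat::None; a sketch, where the exact service list is an
assumption:

    resource_registry:
      OS::TripleO::Services::CeilometerApi: OS::Heat::None
      OS::TripleO::Services::CeilometerCollector: OS::Heat::None
      OS::TripleO::Services::AodhApi: OS::Heat::None
      OS::TripleO::Services::GnocchiApi: OS::Heat::None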
OpenStack heavily relies on gratuitous ARP updates when moving floating
IP addresses between devices. When a floating IP moves, Neutron L3 agent
issues a burst of gratuitous ARP packets that should update any existing
ARP table entries on all nodes that belong to the same network segment.
Due to the kernel's locktime behavior, some gratuitous ARP packets may
be ignored [1], leaving ARP table entries broken for some time. Due to
a kernel bug [2], that time may be as long as hours, depending on
other traffic flowing to the node.
With the current EL7 kernel, the only way to make sure that nodes
honor all sent gratuitous ARP updates is to set arp_accept to 1; this
disables the locktime mechanism for packets sent by the Neutron L3
agent and ensures ARP tables are always updated.
[1] https://patchwork.ozlabs.org/patch/762732/
[2] https://bugzilla.redhat.com/show_bug.cgi?id=1450203
Related-Bug: #1690165
Change-Id: I863b240e0ab4c4d5bb844f91b607fd0937d5cedf
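
One way to express this sysctl in t-h-t is via an environment file; a
sketch in which the hiera key for the kernel service's sysctl settings
is an assumption:

    parameter_defaults:
      ExtraConfig:
        kernel::sysctl_settings:   # hiera key name is illustrative
          net.ipv4.conf.default.arp_accept:
            value: 1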
Without this, the ceilometer db gets hammered with gnocchi swift
events. Keystone creds are required so the middleware can query for
the id.
Related change: I5c0f4f1a2c7fe7eb39ea6441970e9ac0946a4ec1
Change-Id: I9a7a80252703e470a69dc10352e7ece45ab23150
Add missing optional services for docker, if present in the
non-docker optional services, and vice versa.
Fix an issue where non-containerized Mongo resources are missing when
deploying the optional containerized Zaqar service.
Add non-containerized Ironic-PXE resources to the optional Ironic
services, as is done for the containerized Ironic.
Change-Id: I56675e015fa4bbd6d9809dbf7c21453939321410
Signed-off-by: Bogdan Dobrelya <bdobreli@redhat.com>
Currently TripleO does not support the LinuxBridge driver; setting
NeutronMechanismDrivers to linuxbridge will not force the ml2 plugin
to use linuxbridge.
This commit adds a new environment file which replaces the default OVS
agent with linuxbridge on Compute and Controller nodes.
Change-Id: I433b60a551c1eeb9d956df4d0ffb6eeffe980071
Closes-Bug: #1652211
Depends-On: Iae87dc7811bc28fe86db0c422c363eaed5e5285b
Depends-On: Ie3ac03052f341c26735b423701e1decf7233d935
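
The shape of such an environment file, sketched under the assumption
of the usual t-h-t service paths:

    resource_registry:
      OS::TripleO::Services::NeutronOvsAgent: ../puppet/services/neutron-linuxbridge-agent.yaml
      OS::TripleO::Services::ComputeNeutronOvsAgent: ../puppet/services/neutron-linuxbridge-agent.yaml

    parameter_defaults:
      NeutronMechanismDrivers: linuxbridge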
Since we disable mongodb by default, zaqar needs it in the scenario002
job. Let's explicitly include it so it doesn't fail with:
Error while evaluating a Function Call, Could not find data item
mongodb_node_ips in any Hiera data file and no default supplied at
/etc/puppet/modules/tripleo/manifests/profile/base/zaqar.pp
Change-Id: I8f66def467d0c0175ad76f2ba5256b6a431934a8
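
Explicitly including the service amounts to a one-line
resource_registry mapping; the template path is assumed from t-h-t
conventions:

    resource_registry:
      OS::TripleO::Services::MongoDb: ../../puppet/services/database/mongodb.yaml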
This service allows configuring and deploying RabbitMQ containers
in an HA overcloud managed by pacemaker.
The containers are managed and run by pacemaker. Inside them runs
pacemaker_remote, which invokes the resource agent managing rabbitmq.
The resources themselves are created via puppet-pacemaker inside a
short-lived container used for this purpose (rabbitmq_init_bundle).
This container needs to use the 'docker_config' section to invoke
puppet (as opposed to 'docker_puppet_tasks') because, due to the HA
composability, each resource creation needs to happen on the bootstrap
node of that service, and 'docker_puppet_tasks' only runs on the
controller/primary role.
Co-Authored-By: Michele Baldessari <michele@acksyn.org>
Co-Authored-By: John Eckersberg <jeckersb@redhat.com>
Closes-Bug: #1692909
Depends-On: I0722e4a4d4716f477e8304cfa1aadd3eef7c2f31
Change-Id: I942737134385af775cade40c2d69516d4fe31a99
This service allows configuring and deploying MySQL/galera containers
in an HA overcloud managed by pacemaker.
The containers are managed and run by pacemaker. Inside them runs
pacemaker_remote, which invokes the resource agent managing galera.
The resources themselves are created via puppet-pacemaker inside a
short-lived container used for this purpose (mysql_init_bundle).
This container needs to use the 'docker_config' section to invoke
puppet (as opposed to 'docker_puppet_tasks') because, due to the HA
composability, each resource creation needs to happen on the bootstrap
node of that service, and 'docker_puppet_tasks' only runs on the
controller/primary role.
Co-Authored-By: Michele Baldessari <michele@acksyn.org>
Closes-Bug: #1692842
Depends-On: I3b4d8ad2eec70080419882d5d822f78ebd3721ae
Change-Id: I790dbc30b3de1c1a3fe76d3d8f060e4d7f95e2e7
This service allows configuring and deploying HAProxy containers
in an HA overcloud managed by pacemaker.
The containers are managed and run by pacemaker. Pacemaker runs the
standard Kolla image but overrides the initial command so that it
explicitly calls HAProxy. This way, we shield ourselves from any
unexpected future change in Kolla.
This container needs to use the 'docker_config' section to invoke
puppet (as opposed to 'docker_puppet_tasks') because, due to the HA
composability, each resource creation needs to happen on the bootstrap
node of that service, and 'docker_puppet_tasks' only runs on the
controller/primary role.
Co-Authored-By: Michele Baldessari <michele@acksyn.org>
Closes-Bug: #1692908
Depends-On: Ifcf890a88ef003d3ab754cb677cbf34ba8db9312
Change-Id: I2f679bfe195733f4507e9b9e920b678e1370bb82