This adds the sshd puppet service to the containerized compute role.
All other roles already include this service from the default roles
data; it is only missing from the compute role.
As the sshd service runs on the docker host, this must remain as a
traditional puppet service. NB the sshd puppet service does not enable
sshd, it just enables the management of the sshd config via t-h-t/puppet.
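Illustratively, the change amounts to one line in the Compute role's
service list (list heavily abbreviated here):

    - name: Compute
      ServicesDefault:
        - OS::TripleO::Services::NovaCompute
        - OS::TripleO::Services::ComputeNeutronOvsAgent
        - OS::TripleO::Services::Sshd   # added; manages sshd config only, does not enable sshd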
Closes-Bug: #1693837
Change-Id: I86ff749245ac791e870528ad4b410f3c1fd812e0
Add container-specific variants of scenarios. These variants are
meant to be temporary: their only purpose is to let us CI the
scenarios with containers while we don't have pacemaker containerized
yet. Once we can deploy and upgrade containerized deployments with
pacemaker, these should be deleted and the normal scenarios used.
An alternative approach would be to edit the scenarios on the fly
within the CI job to remove the pacemaker parts, which would be more
DRY, but perhaps more surprising when trying to debug issues.
Change-Id: I36ef3f4edf83ed06a75bc82940152e60f9a0941f
Add upgrade tasks for cinder-volume when it's controlled by pacemaker:
o Stop the service before the entire pacemaker cluster is stopped.
This ensures the service is stopped before infrastructure services
(e.g. rabbitmq) go away.
o Migrate the cinder DB prior to restarting the service. This covers
the situation when puppet-cinder (which would otherwise handle the db
sync) isn't managing the service.
o Start the service after the rest of the pacemaker cluster has been
started.
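A rough sketch of what these upgrade_tasks could look like in the
service template; the step tags and the pacemaker_resource module
invocation are illustrative, not the exact implementation:

    upgrade_tasks:
      - name: Stop cinder-volume before the pacemaker cluster is stopped
        tags: step2
        pacemaker_resource:
          resource: openstack-cinder-volume
          state: disable
      - name: Sync the cinder DB while the service is down
        tags: step5
        command: cinder-manage db sync
      - name: Start cinder-volume once the pacemaker cluster is back up
        tags: step5
        pacemaker_resource:
          resource: openstack-cinder-volume
          state: enable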
Closes-Bug: #1691851
Change-Id: I5874ab862964fadb68320d5c4de39b20f53dc25c
This will be used in our HA OVB CI job, which is currently failing
due to running out of memory. Telemetry will still be tested via
scenarios, but this will free up a large chunk of memory in the most
memory-intensive job.
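The environment file boils down to mapping the telemetry services to
OS::Heat::None, along these lines (list abbreviated; the exact set of
mapped services is not spelled out here):

    resource_registry:
      OS::TripleO::Services::CeilometerApi: OS::Heat::None
      OS::TripleO::Services::CeilometerCollector: OS::Heat::None
      OS::TripleO::Services::CeilometerAgentNotification: OS::Heat::None
      OS::TripleO::Services::GnocchiApi: OS::Heat::None
      OS::TripleO::Services::AodhApi: OS::Heat::None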
Closes-Bug: #1693174
Change-Id: Idefe9f0de47c5b0f29b7326642d697ed179e2eb8
OpenStack heavily relies on gratuitous ARP updates when moving floating
IP addresses between devices. When a floating IP moves, Neutron L3 agent
issues a burst of gratuitous ARP packets that should update any existing
ARP table entries on all nodes that belong to the same network segment.
Due to the kernel's locktime behavior, some gratuitous ARP packets may
be ignored [1], leaving ARP table entries broken for some time. Due to
a kernel bug [2], that time may be as long as hours, depending on other
traffic flowing to the node.
With the current EL7 kernel, the only way to make sure that nodes honor
all sent gratuitous ARP updates is to set arp_accept to 1; this
disables the locktime mechanism for the packets sent by the Neutron L3
agent and makes sure ARP tables are always updated.
[1] https://patchwork.ozlabs.org/patch/762732/
[2] https://bugzilla.redhat.com/show_bug.cgi?id=1450203
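A sketch of one way to apply this, assuming the sysctl_settings hiera
hash consumed by puppet-tripleo is the delivery mechanism:

    parameter_defaults:
      ExtraConfig:
        sysctl_settings:
          net.ipv4.conf.default.arp_accept:
            value: 1
          net.ipv4.conf.all.arp_accept:
            value: 1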
Related-Bug: #1690165
Change-Id: I863b240e0ab4c4d5bb844f91b607fd0937d5cedf
Without this, the ceilometer db gets hammered with gnocchi swift
events. Keystone creds are required so the middleware can query for the
id.
Related change: I5c0f4f1a2c7fe7eb39ea6441970e9ac0946a4ec1
Change-Id: I9a7a80252703e470a69dc10352e7ece45ab23150
Add missing optional services for docker, if present in the
non-docker optional services, and vice versa.
Fix an issue where non-containerized Mongo resources are missing
when deploying the optional containerized zaqar service.
Add non-containerized Ironic PXE resources to the optional
Ironic services, as is done for the containerized Ironic.
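As a sketch of the pattern (template paths are assumptions), the
optional zaqar environment ends up registering both the containerized
service and its non-containerized dependency:

    resource_registry:
      OS::TripleO::Services::Zaqar: ../docker/services/zaqar.yaml
      # non-containerized dependency kept available for the optional service
      OS::TripleO::Services::MongoDb: ../puppet/services/database/mongodb.yaml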
Change-Id: I56675e015fa4bbd6d9809dbf7c21453939321410
Signed-off-by: Bogdan Dobrelya <bdobreli@redhat.com>
Currently TripleO does not support the LinuxBridge driver; setting
NeutronMechanismDrivers to linuxbridge will not force the ml2 plugin
to use linuxbridge.
This commit adds a new environment file which replaces the default ovs
agent with linuxbridge on Compute and Controller nodes.
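A sketch of the new environment file's contents (the template paths and
mapped service names are assumptions):

    resource_registry:
      OS::TripleO::Services::NeutronOvsAgent: ../puppet/services/neutron-linuxbridge-agent.yaml
      OS::TripleO::Services::ComputeNeutronOvsAgent: ../puppet/services/neutron-linuxbridge-agent.yaml

    parameter_defaults:
      NeutronMechanismDrivers: linuxbridge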
Change-Id: I433b60a551c1eeb9d956df4d0ffb6eeffe980071
Closes-Bug: #1652211
Depends-On: Iae87dc7811bc28fe86db0c422c363eaed5e5285b
Depends-On: Ie3ac03052f341c26735b423701e1decf7233d935
Since we disable mongodb by default, zaqar needs it in the scenario002
job. Let's explicitly include it so it doesn't fail with:
Error while evaluating a Function Call, Could not find data item mongodb_node_ips
in any Hiera data file and no default supplied at
/etc/puppet/modules/tripleo/manifests/profile/base/zaqar.pp
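The fix is a one-line registry addition in the scenario environment,
roughly (relative path illustrative):

    resource_registry:
      # pull mongodb back in explicitly for zaqar, since it is disabled by default
      OS::TripleO::Services::MongoDb: ../../puppet/services/database/mongodb.yaml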
Change-Id: I8f66def467d0c0175ad76f2ba5256b6a431934a8
This service allows configuring and deploying RabbitMQ containers
in an HA overcloud managed by pacemaker.
The containers are managed and run by pacemaker. Inside them runs
pacemaker_remote, which invokes the resource agent managing rabbitmq.
The resources themselves are created via puppet-pacemaker inside a
short-lived container used for this purpose (rabbitmq_init_bundle).
This container needs to use the 'docker_config' section to invoke
puppet (as opposed to 'docker_puppet_tasks') because, due to HA
composability, each resource creation needs to happen on the bootstrap
node of that service, and 'docker_puppet_tasks' will only run on the
controller/primary role.
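A hedged sketch of the docker_config pattern described above: a
short-lived init container that runs puppet with pacemaker tags on the
service's bootstrap node. The image, step number and puppet invocation
are illustrative, and the profile class name is hypothetical:

    docker_config:
      step_1:
        rabbitmq_init_bundle:
          # short-lived container; exits after creating the pacemaker resources
          image: tripleoupstream/centos-binary-rabbitmq:latest   # image illustrative
          net: host
          detach: false
          command:
            - /usr/bin/bootstrap_host_exec
            - rabbitmq
            - puppet
            - apply
            - --tags
            - pacemaker
            - -e
            - include tripleo::profile::pacemaker::rabbitmq_bundle   # class name hypothetical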
Co-Authored-By: Michele Baldessari <michele@acksyn.org>
Co-Authored-By: John Eckersberg <jeckersb@redhat.com>
Closes-Bug: #1692909
Depends-On: I0722e4a4d4716f477e8304cfa1aadd3eef7c2f31
Change-Id: I942737134385af775cade40c2d69516d4fe31a99
This service allows configuring and deploying MySQL/galera containers
in an HA overcloud managed by pacemaker.
The containers are managed and run by pacemaker. Inside them runs
pacemaker_remote, which invokes the resource agent managing galera.
The resources themselves are created via puppet-pacemaker inside a
short-lived container used for this purpose (mysql_init_bundle).
This container needs to use the 'docker_config' section to invoke
puppet (as opposed to 'docker_puppet_tasks') because, due to HA
composability, each resource creation needs to happen on the bootstrap
node of that service, and 'docker_puppet_tasks' will only run on the
controller/primary role.
Co-Authored-By: Michele Baldessari <michele@acksyn.org>
Closes-Bug: #1692842
Depends-On: I3b4d8ad2eec70080419882d5d822f78ebd3721ae
Change-Id: I790dbc30b3de1c1a3fe76d3d8f060e4d7f95e2e7
This service allows configuring and deploying HAProxy containers
in an HA overcloud managed by pacemaker.
The containers are managed and run by pacemaker. Pacemaker runs the
standard Kolla image but overrides the initial command so that it
explicitly calls HAProxy. This way, we shield ourselves from any
unexpected future change in Kolla.
This container needs to use the 'docker_config' section to invoke
puppet (as opposed to 'docker_puppet_tasks') because, due to HA
composability, each resource creation needs to happen on the bootstrap
node of that service, and 'docker_puppet_tasks' will only run on the
controller/primary role.
Co-Authored-By: Michele Baldessari <michele@acksyn.org>
Closes-Bug: #1692908
Depends-On: Ifcf890a88ef003d3ab754cb677cbf34ba8db9312
Change-Id: I2f679bfe195733f4507e9b9e920b678e1370bb82
We had two identical definitions of PreConfig in docker_steps.j2.yaml;
one of them should be removed.
I chose to remove the first definition, as the second is amended by
change I674a4d9d2c77d1f6fbdb0996f6c9321848e32662, so we'll avoid a
conflict.
Change-Id: If65e30daefcf6552e085c7648c6691b7068834d4
GenerateConfigDeployment wasn't anchored with dependencies anywhere. If
it took too long to complete and step 1 of container creation had
already started executing, problems occurred. This is now fixed by
adding the required dependency relationship.
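In heat terms the fix is an explicit depends_on; a minimal sketch with
illustrative resource names:

    ContainersStep1:
      type: OS::Heat::StructuredDeploymentGroup
      depends_on: GenerateConfigDeployment   # ensure config generation finishes first
      properties:
        config: {get_resource: ContainersConfigStep1}
        servers: {get_param: servers}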
Change-Id: Ie7dfd2a965e704ba278d4c2fad67f14a3a62799e
Closes-Bug: #1692503
In HA overcloud deployments, HAProxy makes use of a helper service
called "clustercheck" to check whether galera nodes are available to
serve traffic.
This change implements a dedicated service for clustercheck, which was
originally part of the pacemaker mysql service. The service is
configured by tripleo and the container's lifecycle is managed by docker,
like other containerized services.
Closes-Bug: #1692969
Change-Id: I8a5b30429f8ec3e484256a62a29ab7dee33ab291
Co-Authored-By: Michele Baldessari <michele@acksyn.org>
Depends-On: I1aabe34fa6a9c8c705a4405f275b66502c313cf2
right thing by default"
The service_provider is configured to point to networking-odl.
Change-Id: Icdb1c1414b237a9409e8e7dc55bb3c01da41841c
Signed-off-by: Ricardo Noriega <rnoriega@redhat.com>
by default
The default value is 0, which causes the minimum number to be
calculated based on the replica count from osd_pool_default_size. The
default replica count is 3 and the calculated min_size is 2. If the
replica count is 1 then the min_size is 1, i.e.: min_size = replica - (replica/2).
Add a CephPoolDefaultSize parameter to ceph-mon.yaml. This parameter
defaults to 3 but can be overridden. See puppet-ceph-devel.yaml for an
example.
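For example, a single-replica development setup (cf.
puppet-ceph-devel.yaml) would override it like this:

    parameter_defaults:
      # default 3 => min_size = 3 - (3/2) = 2; with 1 replica => min_size = 1
      CephPoolDefaultSize: 1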
Change-Id: Ie9bdd9b16bcb9f11107ece614b010e87d3ae98a9
This patch guards db syncs and initialization code against executing
on multiple nodes at the same time by using the new
bootstrap_host_exec script. This helper script checks that the
container is executing on the "bootstrap host" for the
specified service (arg 0) and, if it matches, runs the
specified command.
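Usage inside a docker_config entry looks roughly like this (service,
image and command are illustrative):

    keystone_db_sync:
      image: tripleoupstream/centos-binary-keystone:latest   # illustrative
      net: host
      detach: false
      # on non-bootstrap hosts the wrapper exits without running the command
      command: ['/usr/bin/bootstrap_host_exec', 'keystone', 'keystone-manage', 'db_sync']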
Depends-On: If25f217bbb592edab4e1dde53ca99ed93c0e146c
Depends-On: Ic1585bae27c318bd6bafc287e905f2ed250cce0f
Change-Id: I0c864ca093ea476248b619d8c88477ef0b64e2eb
Closes-Bug: #1688380
Change-Id: I468f654fa75b22aef398e2cba9c3625eb8f0767c
This is needed since it's what writes the service metadata to the nova
server in order to create the kerberos principals. It worked on a base
controller since the keystone template does have this output. But if we
were to deploy these services on a separate role, it would break. So
this output is needed.
bp tls-via-certmonger-containers
Change-Id: I3ee8c65d356dcd092a3fbf79041e5c69ef23b721
The agent service is disabled and the service_provider is configured
to point to networking-odl.
Change-Id: I570d15a092cff66666a74e95dee69f6531a58b22
Signed-off-by: Ricardo Noriega <rnoriega@redhat.com>
It's not used by any service that we enable by default. So instead, I
added it to the environment that enables the services that use it.
Change-Id: Id2e6550fb7c319fc52469644ea022cf35757e0ce
This changes both the service names and the file names for disabled
services, adding the 'disabled' suffix to them.
The reasoning: if a service requires a disabled service and checks for
the name in the "service_names" hiera entry, it will appear as if the
service were enabled when it's actually not. Changing the name and
using this convention prevents that issue.
Change-Id: I308d6680a4d9b526f22ba0d7d20e5db638aadb9a
From Ocata on, the vhost socket directory requires a different set of
permissions from the default directory (/var/run/openvswitch). The
directory is changed to a newly agreed location, which will be created
by puppet.
Closes-Bug: #1687993
Depends-On: I255f98c40869e7508ed01a03a96294284ecdc6a8
Change-Id: I77250ca84c9da2fb5a8381e6f60234f8a05cbf12
Even though this service was disabled by default in Pike[1], we still
want the entry in roles_data since it will actually disable the service
on upgrade.
A comment was added so we remember to remove it in Queens.
[1] Icffb7d1bb2cf7bd61026be7d2dcfbd70cd3bcbda
Change-Id: I2012d7494207bf3239f589bf80b8048abf72428f
This submission installs the Neutron L2 Gateway service in
scenario 004.
This only checks that the service is installed correctly;
no sanity check is run yet.
Change-Id: I421802e9aa1a9f192860a6d72b4bb7c729666c3a
file.
During a deployment on lower-spec systems, the "db sync" can take longer than five minutes.
The solution is to increase the default value of DatabaseSyncTimeout from 300 to 900
by using the environment file "low-memory-usage.yaml".
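The resulting override in low-memory-usage.yaml is simply:

    parameter_defaults:
      DatabaseSyncTimeout: 900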
Change-Id: I6463dbdd4dfe1d6f2dd283211cc496fe3a628fb0
Closes-Bug: #1689318