Without this, the agent evidently logs I/O errors.
Change-Id: I3031212c582381ae6b6147a48101bf83a05caa8a
This got missed in the patch which added host logging for most
other services.
Change-Id: I0be8a5bce6558ebaf5b4830138d1f6c31aec6394
This allows any ssh client spawned from a container to validate the ssh host key.
Change-Id: I86d95848e5f049e8af98107cd7027098d6cdee7c
Closes-Bug: #1693841
Works around the issue encountered in bug 1696283.
Change-Id: I1947d9d1e3cabc5dfe25ee1af994d684425bdbf7
Resolves-Bug: #1696283
This service is missing the task to stop/disable the service on
the host before it is started in a container.
Change-Id: I33d70d32c3b55e1f2738441f57c74b007e7bd766
Closes-Bug: #1695017
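
A sketch of what such a missing task could look like as a host_prep_tasks
entry; the service name is a placeholder and the exact hook used by the
patch may differ:

  host_prep_tasks:
    - name: Stop and disable the service on the host before containerizing it
      service:
        name: <host service name>   # placeholder for the affected service
        state: stopped
        enabled: no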
Change-Id: I149ca7cdd939ed7c1767a416bb9569ada163e820
Closes-Bug: #1696089
Replace the multiple SoftwareDeployment resources with a common
playbook that runs on all roles, consuming the configuration data
written via the HostPrepAnsible tasks.
This hopefully simplifies things, and will enable re-running the
deploy steps for minor updates (we'll need some way to detect that
a container should be replaced, but that will be done via a
follow-up patch).
Change-Id: I674a4d9d2c77d1f6fbdb0996f6c9321848e32662
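
A minimal sketch of the common-playbook idea, under assumed file and
variable names rather than the literal playbook from this change:

  # deploy_steps_playbook.yaml - illustrative only
  - hosts: all
    vars:
      step: 1                            # re-run with step 1..N per deploy step
    tasks:
      # consume the per-service configuration written via HostPrepAnsible
      - include: common/host_prep_tasks.yaml
        when: step == 1
      # apply the container/config tasks for the current step
      - include: common/deploy_step_tasks.yaml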
This change implements an initial container for haproxy in the non-HA
case (i.e. when the container is not spawned by pacemaker).
We tested this using a stock kolla haproxy container image and were
able to get haproxy running correctly in a container with net=host.
Change-Id: I90253412a5e2cd8e56e74cce3548064c06d022b1
Co-Authored-By: Michele Baldessari <michele@acksyn.org>
Depends-On: I51c482b70731f15fee4025bbce14e46a49a49938
Closes-Bug: #1668936
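
A minimal sketch of the resulting service entry; the container name,
image reference, and mounts are assumptions, not taken verbatim from
the patch:

  docker_config:
    step_1:
      haproxy:                                     # hypothetical container name
        image: kolla/centos-binary-haproxy:latest  # assumed image reference
        net: host                                  # share the host network namespace
        restart: always
        privileged: false
        volumes:
          - /var/lib/kolla/config_files/haproxy.json:/var/lib/kolla/config_files/config.json:ro
          - /var/lib/config-data/haproxy/etc/:/etc/:ro
        environment:
          - KOLLA_CONFIG_STRATEGY=COPY_ALWAYS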
This service allows configuring and deploying Redis containers
in a HA overcloud managed by pacemaker.
The containers are managed and run by pacemaker. Inside there is
pacemaker_remote which will invoke the resource agent managing redis.
The resources themselves are created via puppet-pacemaker inside a
short-lived container used for this purpose (redis_init_bundle).
This container needs to use the 'docker_config' section to invoke
puppet (as opposed to 'docker_puppet_tasks'), because due to the HA
composability each resource creation needs to happen on the bootstrap
node of that service and 'docker_puppet_tasks' will only run on the
controller/primary role.
Co-Authored-By: Michele Baldessari <michele@acksyn.org>
Closes-Bug: #1692924
Depends-On: Ia1131611d15670190b7b6654f72e6290bf7f8b9e
Change-Id: Ie045954fcc86ef2b3e4562b6f012853177f03948
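
The docker_config-driven puppet invocation these HA services share
looks roughly like this; the image placeholder, puppet class name, and
mounts are assumptions for illustration, not the literal template:

  docker_config:
    step_2:
      redis_init_bundle:               # the short-lived puppet container
        image: <redis service image>   # placeholder
        net: host
        detach: false                  # run to completion, then exit
        command:
          - '/bin/bash'
          - '-c'
          # create the pacemaker bundle resource; effectively runs only
          # on the bootstrap node of this service
          - 'puppet apply --tags pacemaker -e "include ::tripleo::profile::pacemaker::database::redis_bundle"'
        volumes:
          - /etc/puppet:/etc/puppet:ro
          - /dev/shm:/dev/shm:rw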
When using the Deployed Server feature, we rely on Puppet to install
packages. But the nova-compute/libvirt puppet runs in a container, so
it cannot install anything on the host. We rely on virtlogd on the host,
so we need to install it there by some other means. This patch uses
host_prep_tasks for that, conditionally based on the EnablePackageInstall
stack parameter value.
Also, the multinode-container-upgrade.yaml env is copied as
multinode-containers.yaml to remove the naming confusion, as the
environment file can be used for more than just upgrades. The old env
file will be removed once we make the upgrade job use the new one (a
catch-22 type of issue).
Change-Id: Ia9b3071daa15bc30792110e5f34cd859cc205fb8
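
A sketch of the host_prep_tasks approach described above; the package
name and the wiring of the condition are assumptions for illustration:

  host_prep_tasks:
    - name: Install virtlogd on the host      # runs on the host, not in a container
      package:
        name: libvirt                         # assumed package providing virtlogd
        state: installed
      when: enable_package_install | bool     # assumed mapping of EnablePackageInstall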
This service allows configuring and deploying RabbitMQ containers
in a HA overcloud managed by pacemaker.
The containers are managed and run by pacemaker. Inside there is
pacemaker_remote which will invoke the resource agent managing
rabbitmq. The resources themselves are created via puppet-pacemaker
inside a short-lived container used for this purpose
(rabbitmq_init_bundle).
This container needs to use the 'docker_config' section to invoke
puppet (as opposed to 'docker_puppet_tasks'), because due to the HA
composability each resource creation needs to happen on the bootstrap
node of that service and 'docker_puppet_tasks' will only run on the
controller/primary role.
Co-Authored-By: Michele Baldessari <michele@acksyn.org>
Co-Authored-By: John Eckersberg <jeckersb@redhat.com>
Closes-Bug: #1692909
Depends-On: I0722e4a4d4716f477e8304cfa1aadd3eef7c2f31
Change-Id: I942737134385af775cade40c2d69516d4fe31a99
This service allows configuring and deploying MySQL/galera containers
in a HA overcloud managed by pacemaker.
The containers are managed and run by pacemaker. Inside there is
pacemaker_remote which will invoke the resource agent managing galera.
The resources themselves are created via puppet-pacemaker inside a
short-lived container used for this purpose (mysql_init_bundle).
This container needs to use the 'docker_config' section to invoke
puppet (as opposed to 'docker_puppet_tasks'), because due to the HA
composability each resource creation needs to happen on the bootstrap
node of that service and 'docker_puppet_tasks' will only run on the
controller/primary role.
Co-Authored-By: Michele Baldessari <michele@acksyn.org>
Closes-Bug: #1692842
Depends-On: I3b4d8ad2eec70080419882d5d822f78ebd3721ae
Change-Id: I790dbc30b3de1c1a3fe76d3d8f060e4d7f95e2e7
This service allows configuring and deploying HAProxy containers
in a HA overcloud managed by pacemaker.
The containers are managed and run by pacemaker. Pacemaker runs the
standard Kolla image but overrides the initial command so that
it explicitly calls HAProxy. This way, we shield ourselves from any
unexpected future change in Kolla.
This container needs to use the 'docker_config' section to invoke
puppet (as opposed to 'docker_puppet_tasks'), because due to the HA
composability each resource creation needs to happen on the bootstrap
node of that service and 'docker_puppet_tasks' will only run on the
controller/primary role.
Co-Authored-By: Michele Baldessari <michele@acksyn.org>
Closes-Bug: #1692908
Depends-On: Ifcf890a88ef003d3ab754cb677cbf34ba8db9312
Change-Id: I2f679bfe195733f4507e9b9e920b678e1370bb82
We had two identical definitions of PreConfig in
docker_steps.j2.yaml, so one of them should be removed.
I chose to remove the first definition, as the second is amended
by change I674a4d9d2c77d1f6fbdb0996f6c9321848e32662, so we avoid a
conflict.
Change-Id: If65e30daefcf6552e085c7648c6691b7068834d4
GenerateConfigDeployment wasn't anchored with dependencies anywhere. If
it took too long to complete and step 1 of container creation had
already started executing, problems occurred. This is now fixed by
adding the required dependency relationship.
Change-Id: Ie7dfd2a965e704ba278d4c2fad67f14a3a62799e
Closes-Bug: #1692503
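
In Heat terms, the fix amounts to a dependency edge of roughly this
shape; the step-1 resource name here is illustrative:

  ContainersStep1Deployment:                 # hypothetical resource name
    type: OS::Heat::StructuredDeploymentGroup
    # ensure config generation completes before step 1 container creation
    depends_on: GenerateConfigDeployment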
In HA overcloud deployments, HAProxy makes use of a helper service called
"clustercheck", to check whether galera nodes are available for serving
traffic.
This change implements a dedicated service for clustercheck, which was
originally part of the pacemaker mysql service. The service is
configured by tripleo and the container's lifecycle is managed by docker,
like other containerized services.
Closes-Bug: #1692969
Change-Id: I8a5b30429f8ec3e484256a62a29ab7dee33ab291
Co-Authored-By: Michele Baldessari <michele@acksyn.org>
Depends-On: I1aabe34fa6a9c8c705a4405f275b66502c313cf2
This patch guards db syncs and initialization code from executing
on multiple nodes at the same time by using the new
bootstrap_host_exec script. This helper script checks to make
sure the container is executing on the "bootstrap host" for the
specified service (arg 0) and, if it matches, runs the specified
command.
Depends-On: If25f217bbb592edab4e1dde53ca99ed93c0e146c
Depends-On: Ic1585bae27c318bd6bafc287e905f2ed250cce0f
Change-Id: I0c864ca093ea476248b619d8c88477ef0b64e2eb
Closes-Bug: #1688380
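
Usage inside a container definition would look roughly like this; the
keystone example, container name, and image placeholder are hypothetical:

  docker_config:
    step_3:
      keystone_db_sync:                  # hypothetical one-off container
        image: <keystone service image>
        net: host
        detach: false
        command:
          # only the bootstrap node for the 'keystone' service runs the
          # sync; elsewhere bootstrap_host_exec exits without running it
          - '/usr/bin/bootstrap_host_exec'
          - 'keystone'
          - 'keystone-manage'
          - 'db_sync'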
This is needed since it's what writes the service metadata to the nova
server in order to create the kerberos principals. It worked on a base
controller since the keystone template does have this output. But if we
deployed these services on a separate role, it would break. So this
output is needed.
bp tls-via-certmonger-containers
Change-Id: I3ee8c65d356dcd092a3fbf79041e5c69ef23b721
Master is now the development branch for Pike, so change the
release alias name.
Change-Id: I938e4a983e361aefcaa0bd9a4226c296c5823127
This patch adds support for running the neutron metadata agent in a
container.
Change-Id: I53c62516c95d62f5ced70818d4eb4c2c341df0d7
Partial-Bug: #1668922
We already have an ansible deployment that applies the per-service
host_prep_tasks, so we can simplify the dependencies here by just
doing the docker-steps host preparation at the same time.
The motivation behind this is to simplify the depends_on web we
have here, reduce the number of discrete deployments, and
potentially make running ansible directly (e.g. for debugging) easier.
In a future patch we'll convert the configuration steps to work in
a similar way, such that they can be more easily reapplied, e.g. for
rolling minor updates, possibly outside of heat.
Change-Id: I9a201fc5a9e82c7fba4c2de36eb5332e21a81d37
This helps a bit with debugging issues, and the container will be
deleted on the next run when the same volume is configured.
Change-Id: I4f2f219bd7e40abafd0eb31c1275fdd8ed4db4da
Depends-On: I30ba93f76171e5993b5f0e1d7f1f5533acb25740
Closes-Bug: #1668925
Change-Id: I3cb61d2d8765f9c2601bb00c4bfa24162883b96a
This spawns an extra container running httpd to act as the TLS proxy
that will go in front of neutron server.
bp tls-via-certmonger-containers
Change-Id: I2529d78e889835f48c51e12d28ecd7c48739b02b
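
Schematically, the proxy is just one more container entry; the name,
image placeholder, and mounts below are assumptions for illustration:

  docker_config:
    step_4:
      neutron_server_tls_proxy:            # hypothetical container name
        image: <neutron-server image>      # needs httpd installed (see the next entry)
        net: host
        restart: always
        volumes:
          # httpd config generated by puppet: terminates TLS on the public
          # endpoint and proxies plain HTTP to neutron-server on localhost
          - /var/lib/config-data/neutron/etc/httpd:/etc/httpd:ro
          - /etc/pki/tls:/etc/pki/tls:ro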
For TLS everywhere, neutron-server needs httpd in the image, since
it'll use a separate container that runs a TLS proxy to terminate
the connection. This requires the image where the configuration is
run to have httpd installed, since several directories and the
user/group it provides are needed.
So we switch the image used from the openvswitch-agent image to the
neutron-server one.
Change-Id: Ie16de3004925b7624f106d6c015ec04ef6031a06
Depends-On: I82f10ac0e7e692e6ba4a06dc10da9eaf79c60e7e
This was forgotten in I72376a803ec6b2ed93903cc0c95a6ffce718b6dc and
broke containerized deployment.
Change-Id: I599a87bf06efbfefd3067c77ed6ca866505900f9
Closes-Bug: #1690870
When a service is enabled on multiple roles, the parameters for the
service are global. This change adds an option to provide
role-specific parameters to services and other templates.
Two new parameters, RoleName and RoleParameters, are added to the
service template. RoleName provides the name of the role on which the
current instance of the service is being applied. RoleParameters
provides the list of parameters which are configured specifically for
that role in the environment file, like below:
parameter_defaults:
  # Default value, applied to all roles
  NovaReservedHostMemory: 2048
  ComputeDpdkParameters:
    # Applied only to the ComputeDpdk role
    NovaReservedHostMemory: 4096
In the above sample, the cluster contains two roles - Compute and
ComputeDpdk. The values of ComputeDpdkParameters will be passed on to
the templates as RoleParameters while creating the stack for the
ComputeDpdk role. A parameter which supports role-specific
configuration should be looked up first in the RoleParameters list;
if not found there, the default (for all roles) should be used.
Implements: blueprint tripleo-derive-parameters
Change-Id: I72376a803ec6b2ed93903cc0c95a6ffce718b6dc
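
In a service template the two new parameters would be declared roughly
as follows; the descriptions and defaults shown are assumptions:

  parameters:
    RoleName:
      default: ''
      description: Role name on which the service is applied
      type: string
    RoleParameters:
      default: {}
      description: Parameters specific to the role
      type: json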
This spawns an extra container running httpd to act as the TLS proxy
that will go in front of glance-api.
bp tls-via-certmonger-containers
Change-Id: If902ac732479832b9aa3e4a8d063b5be68a42a9b