The containerized cinder service was merged a bit too soon and it
caused several issues in CI. Disable it temporarily to unblock CI until
it matures.
Change-Id: I8c6c0ce0011fddfec1e2de798d4fc6f34ae78de2
Related-Bug: #1700333
It was removed by mistake from the docker.yaml environment file in
I76f188438bfc6449b152c2861d99738e6eb3c61b.
Change-Id: If8df98e1ddd0961ab0c9e5df917fef8200db65e6
Closes-Bug: #1698749
The previous fix Ib10e4f18d967d356a15b97f58c488f8402a73356 made
multinode CI pass, but there was still an error during volume
scheduling on OVB:
OSError: [Errno 13] Permission denied: '/var/lib/cinder/conversion'
This was most likely because cinder-volume was running on the host and
used the host's cinder user, while we still deployed a containerized
cinder-backup, which chowned /var/lib/cinder to kolla's cinder user,
whose UID doesn't match the baremetal one.
We didn't hit this issue in the multinode job because it presently
doesn't deploy the cinder-backup service at all.
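A quick way to confirm a mismatch like this is to compare the cinder
UID on the host with the one inside the kolla image; the container
name below is hypothetical:
    id -u cinder                              # on the baremetal host
    docker exec cinder_backup id -u cinder    # inside the container
If the two numbers differ, a chown performed from inside the container
leaves the host-side cinder-volume unable to write to /var/lib/cinder.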
Co-Authored-By: Martin André <m.andre@redhat.com>
Change-Id: I9ac74d6717533f59945694b4a43fe56d7ca768c6
Closes-Bug: #1698136
CI was stuck on collecting logs. The collect-logs playbook, which
normally takes just a few minutes, took more than an hour and was
eventually killed.
The playbook was stuck on collecting LVM info on the overcloud node,
which runs this command:
(vgs; pvs; lvs) &> /var/log/extra/lvm.txt
Therefore it's very likely that the problematic part is the LVM setup
in the containerized cinder-volume service, and falling back to
non-containerized for the time being should get the CI going again.
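As a defensive measure for the log collection itself (not the fix made
here), that command could be bounded with a timeout so a hung LVM scan
cannot stall the whole playbook; a minimal sketch:
    timeout 120 bash -c '(vgs; pvs; lvs) &> /var/log/extra/lvm.txt'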
Change-Id: Ib10e4f18d967d356a15b97f58c488f8402a73356
Closes-Bug: #1698136
Depends-On: I3e865f2e9b6935eb3dfa4b4579c803f0127848ae
Change-Id: I09327a63d238a130b6ac0f2361f80e2b244b4b52
This service generates the /etc/my.cnf.d/tripleo.cnf file which is
used to configure MySQL clients (e.g. client bind address, client SSL
configuration...).
We generate the config file in this service and let containerized
MySQL clients mount
/var/lib/config-data/mysql_client/etc/my.cnf.d/tripleo.cnf into their
own container. This way, when this MySQLClient service is updated, the
other containers will automatically pick up the updated configuration
at their next restart.
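For illustration, a consuming container would bind-mount the generated
file read-only over its own client config; the exact mount below is an
assumption, not the literal template contents:
    -v /var/lib/config-data/mysql_client/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro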
Partial-Bug: #1692317
Change-Id: Idc56d27fb9645ad3b07df8ef08b7e2ce29e6d499
Depends-On: I037858a445742de58bd2f8d879f2b1272b07f481
Change-Id: Ifd138ea553a45a637a1a9fe3d0e946f8be51e119
Depends-On: I037858a445742de58bd2f8d879f2b1272b07f481
Change-Id: I808a5513decab1bd2cce949d05fd1acb17612a42
In change I90253412a5e2cd8e56e74cce3548064c06d022b1 we merged the
containerized HAProxy setup, but because of a typo in the resource
registry, CI kept using the non-containerized variant and it went
unnoticed that the containerized HAProxy doesn't work yet.
We merged a resource registry fix in
Ibcbacff16c3561b75e29b48270d60b60c1eb1083 and it brought down the CI,
which now used the non-working HAProxy.
After adding the missing haproxy container image to tripleo-common
in I41c1064bbf5f26c8819de6d241dd0903add1bbaa we got further, but CI
still fails on an HAProxy-related problem, so we should revert back to
using non-containerized HAProxy for the time being.
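The revert amounts to pointing the registry entry back at the puppet
variant, along these lines (the exact path is assumed):
    resource_registry:
      OS::TripleO::Services::HAproxy: ../puppet/services/haproxy.yaml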
Change-Id: If73bf28288de10812f430619115814494618860f
Closes-Bug: #1697645
Adds docker service for Cinder Volume
Co-Authored-By: Jon Bernard <jobernar@redhat.com>
Depends-On: Ic1585bae27c318bd6bafc287e905f2ed250cce0f
Partial-bug: #1668920
Change-Id: Ifadb007897f3455b90de6800751a0d08991ebca2
Adds docker services for Cinder Backup
Co-Authored-By: Gorka Eguileor <geguileo@redhat.com>
Co-Authored-By: Jon Bernard <jobernar@redhat.com>
Co-Authored-By: Martin André <m.andre@redhat.com>
Co-Authored-By: Alan Bishop <abishop@redhat.com>
Partial-bug: #1668920
Change-Id: I26fc31e59b28da017f0b028b74bde40aaac53ad5
Adds docker services for Cinder API and Scheduler.
Co-Authored-By: Gorka Eguileor <geguileo@redhat.com>
Co-Authored-By: Jon Bernard <jobernar@redhat.com>
Co-Authored-By: Martin André <m.andre@redhat.com>
Co-Authored-By: Alan Bishop <abishop@redhat.com>
Depends-On: Ic1585bae27c318bd6bafc287e905f2ed250cce0f
Change-Id: I5cff9587626a3b2a147e03146d5268242d1c9658
Partial-bug: #1668920
Co-Authored-By: Jon Bernard <jobernar@redhat.com>
Depends-On: I486de8b6ab2f4235bb4a21c3650f6b9e52a83b80
Change-Id: I6cf70fa05ad1c8aa6d9f837ddcd370eb26e45f97
This configures iscsid so that it runs as a container on the relevant
roles (undercloud, controller, compute, and volume).
When the iscsid docker service is provisioned, it also runs an ansible
snippet that disables iscsid.socket on the host OS, preventing the
host's systemd from auto-starting iscsid as it normally would.
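The host-side snippet boils down to a systemd toggle along these lines
(a sketch, not the exact task from this change):
    - name: Stop and disable iscsid.socket on the host
      systemd:
        name: iscsid.socket
        state: stopped
        enabled: no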
Co-Authored-By: Jon Bernard <jobernar@redhat.com>
Change-Id: I2ea741ad978f166e199d47ed1b52369e9b031f1f
Moving to one common services.yaml not only reduces duplication, it
should also improve performance for the docker/services.yaml case,
because we were creating two ResourceChains with $many services, which
we know can be really slow (especially since we seem to be missing
concurrent: true on one).
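For context, the chain in question is a Heat OS::Heat::ResourceChain;
with concurrent left at its default of false, the nested stacks are
created serially, which is what makes a long service list slow. A
minimal sketch:
    ServiceChain:
      type: OS::Heat::ResourceChain
      properties:
        resources: {get_param: Services}
        concurrent: true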
Change-Id: I76f188438bfc6449b152c2861d99738e6eb3c61b
The resource registry key is 'HAproxy' and not 'HAProxy'. This needs
fixing so that the proper service is instantiated when a role includes
the HAproxy service.
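An illustrative registry mapping with the correct key spelling (the
template path is an assumption):
    OS::TripleO::Services::HAproxy: ../docker/services/haproxy.yaml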
Change-Id: Ibcbacff16c3561b75e29b48270d60b60c1eb1083
This change implements an initial container for haproxy in the non-HA
case (i.e. when the container is not spawned by pacemaker).
We tested this using a stock kolla haproxy container image and were
able to get haproxy running correctly in a container with net=host.
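net=host is what lets the containerized haproxy bind the same VIPs and
ports as the host-based one; a rough hand-run equivalent, with the
image name and config path as assumptions:
    docker run --detach --net=host \
      -v /var/lib/kolla/config_files/haproxy.json:/var/lib/kolla/config_files/config.json:ro \
      kolla/centos-binary-haproxy:latest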
Change-Id: I90253412a5e2cd8e56e74cce3548064c06d022b1
Co-Authored-By: Michele Baldessari <michele@acksyn.org>
Depends-on: I51c482b70731f15fee4025bbce14e46a49a49938
Closes-Bug: #1668936
This adds the sshd puppet service to the containerized compute role.
All other roles already include this service from the default roles
data; it was only missing from the compute role.
As the sshd service runs on the docker host, this must remain a
traditional puppet service. NB the sshd puppet service does not enable
sshd, it just enables management of the sshd config via t-h-t/puppet.
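Concretely, the role just gains the service in its ServicesDefault
list; an abbreviated roles-data sketch:
    - name: Compute
      ServicesDefault:
        - OS::TripleO::Services::Sshd
        # ... the rest of the compute services ...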
Closes-bug: #1693837
Change-Id: I86ff749245ac791e870528ad4b410f3c1fd812e0
This patch adds support for running the neutron metadata agent in a
container.
Change-Id: I53c62516c95d62f5ced70818d4eb4c2c341df0d7
Partial-Bug: #1668922
These duplicate the defaults in puppet/services/docker.yaml and break
things if you include an environment file (e.g. the one quickstart
generates, containers-default-parameters.yaml) before docker.yaml.
Instead it's probably more helpful to include commented lines showing
how to enable use of a local docker registry.
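Something along these lines, with the values commented out as hints
(the sample registry address and namespace are placeholders):
    parameter_defaults:
      # DockerNamespace: 192.168.24.1:8787/tripleoupstream
      # DockerNamespaceIsRegistry: true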
Change-Id: I3896fa2ea7caa603186f0af04f6d8382d50dd97a
Closes-Bug: #1691524
Adds a service definition for Horizon running inside a docker container.
Co-Authored-By: Martin André <m.andre@redhat.com>
Closes-Bug: #1668926
Depends-On: I677ad57672215f6afe918e13b28c9ce2e1de5a81
Change-Id: I29f18722f4da48dab18f9e5c51b01fba42316734
Depends-on: I30ba93f76171e5993b5f0e1d7f1f5533acb25740
Closes-bug: #1668925
Change-Id: I3cb61d2d8765f9c2601bb00c4bfa24162883b96a
Closes-bug: #1668919
Change-Id: Ie750caa34c6fa22ca6eae6834b9ca20e15d97f7f
Co-Authored-By: Pradeep Kilambi <pkilambi@redhat.com>
Closes-bug: #1668918
Change-Id: Ie1ebd25965bd2dbad2a22161da0022bad0b9e554
Closes-bug: #1668928
Change-Id: I291df31be97c3d55cddb3924482aa5976a79c2b1
Closes-bug: #1668930
Change-Id: If5dff4388b255373083e164a74aaacd529a94111
This patch moves enabling Zaqar docker services into
a separate environment in the environments/services-docker
directory.
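The environment file itself is just a small registry override of this
shape (the relative path is assumed):
    resource_registry:
      OS::TripleO::Services::Zaqar: ../../docker/services/zaqar.yaml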
Change-Id: I6755eb7ae2abb2b9c8b213ff6fd21b0392353ef5
This patch moves enabling Mistral docker services into
a separate environment in the environments/services-docker
directory.
Change-Id: I8b484532de5f5d61fc0240defbc5fc27789a1279
This patch moves enabling Ironic docker services into
a separate environment in the environments/services-docker
directory.
Change-Id: I236de47d422b3563a0192359f2327610fc1714ca
A recent commit [1] changed how docker is installed and configured on
the overcloud nodes, from a cloud-init script to a proper puppet
profile in puppet-tripleo, but forgot to enable the docker service on
the compute nodes.
[1] Ia50169819cb959025866348b11337728f8ed5c9e
Change-Id: I202723d0e48f110e5b0dbfe3dcf6646da9f37948
This aligns the docker-based services with the new composable upgrades
architecture we landed for ocata, and takes a first pass at adding
upgrade_tasks for the services (these may change; at the moment we
only disable the service on the host).
To run the upgrade workflow you basically do two steps:
openstack overcloud deploy --templates \
-e environments/major-upgrade-composable-steps-docker.yaml
This runs the ansible upgrade steps we define via upgrade_tasks, then
runs the normal docker PostDeploySteps to bring up the containers.
As with the puppet workflow, there's then an operator-driven step
where compute nodes (and potentially storage nodes) are upgraded in
batches, and finally you do:
openstack overcloud deploy --templates \
-e environments/major-upgrade-converge-docker.yaml
In the puppet case this re-applies puppet to unpin the nova RPC API,
so it will presumably restart the nova containers this affects but
otherwise be a no-op (we also disable the ansible steps at this
point).
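For reference, an upgrade_tasks entry in a service template looks
roughly like this (the service name and step tag are illustrative):
    upgrade_tasks:
      - name: Stop and disable the service on the host
        tags: step2
        service: name=openstack-cinder-volume state=stopped enabled=no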
Depends-On: I9057d47eea15c8ba92ca34717b6b5965d4425ab1
Change-Id: Ia50169819cb959025866348b11337728f8ed5c9e