The previous fix Ib10e4f18d967d356a15b97f58c488f8402a73356 made
multinode CI pass, but there was still an error during volume
scheduling on OVB:
OSError: [Errno 13] Permission denied: '/var/lib/cinder/conversion'
This was most likely because cinder-volume was running on the host and
used the host's cinder user, while we still deployed containerized
cinder-backup, which chowned /var/lib/cinder to kolla's cinder user,
whose UID doesn't match the baremetal one.
We didn't hit this issue in the multinode job because it presently
doesn't deploy the cinder-backup service at all.
Co-Authored-By: Martin André <m.andre@redhat.com>
Change-Id: I9ac74d6717533f59945694b4a43fe56d7ca768c6
Closes-Bug: #1698136
|
|
CI was stuck on collecting logs. The collect-logs playbook, which
normally takes just a few minutes, took more than an hour and was
eventually killed.
The playbook was stuck on collecting LVM info on the overcloud node,
which runs this command:
(vgs; pvs; lvs) &> /var/log/extra/lvm.txt
Therefore it's very likely that the problematic part is the LVM setup
in the containerized cinder-volume service, and falling back to
non-containerized for the time being should get the CI going
again.
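For illustration only, a fallback like this is typically just a
resource_registry override that points the service back at the puppet
template; the mapping below is a hedged sketch and the relative path is
an assumption, not the literal change:

  resource_registry:
    # assumed mapping: use the baremetal puppet service rather than the
    # containerized docker/services variant for now
    OS::TripleO::Services::CinderVolume: ../puppet/services/cinder-volume.yaml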
Change-Id: Ib10e4f18d967d356a15b97f58c488f8402a73356
Closes-Bug: #1698136
|
|
Depends-On: I3e865f2e9b6935eb3dfa4b4579c803f0127848ae
Change-Id: I09327a63d238a130b6ac0f2361f80e2b244b4b52
|
|
Depends-On: I037858a445742de58bd2f8d879f2b1272b07f481
Change-Id: Ifd138ea553a45a637a1a9fe3d0e946f8be51e119
|
|
Depends-On: I037858a445742de58bd2f8d879f2b1272b07f481
Change-Id: I808a5513decab1bd2cce949d05fd1acb17612a42
|
|
In change I90253412a5e2cd8e56e74cce3548064c06d022b1 we merged
containerized HAProxy setup, but because of a typo in the resource
registry, CI kept using the non-containerized variant, and it went
unnoticed that the containerized HAProxy doesn't work yet.
We merged a resource registry fix in
Ibcbacff16c3561b75e29b48270d60b60c1eb1083 and it brought down the CI,
which now used the non-working HAProxy.
After adding the missing haproxy container image to tripleo-common
in I41c1064bbf5f26c8819de6d241dd0903add1bbaa we got further, but the
CI still fails on an HAProxy-related problem, so we should revert to
using non-containerized HAProxy for the time being.
Change-Id: If73bf28288de10812f430619115814494618860f
Closes-Bug: #1697645
|
|
Adds docker service for Cinder Volume
Co-Authored-By: Jon Bernard <jobernar@redhat.com>
Depends-On: Ic1585bae27c318bd6bafc287e905f2ed250cce0f
Partial-bug: #1668920
Change-Id: Ifadb007897f3455b90de6800751a0d08991ebca2
|
|
Adds docker services for Cinder Backup
Co-Authored-By: Gorka Eguileor <geguileo@redhat.com>
Co-Authored-By: Jon Bernard <jobernar@redhat.com>
Co-Authored-By: Martin André <m.andre@redhat.com>
Co-Authored-By: Alan Bishop <abishop@redhat.com>
Partial-bug: #1668920
Change-Id: I26fc31e59b28da017f0b028b74bde40aaac53ad5
|
|
Adds docker services for Cinder API and Scheduler.
Co-Authored-By: Gorka Eguileor <geguileo@redhat.com>
Co-Authored-By: Jon Bernard <jobernar@redhat.com>
Co-Authored-By: Martin André <m.andre@redhat.com>
Co-Authored-By: Alan Bishop <abishop@redhat.com>
Depends-On: Ic1585bae27c318bd6bafc287e905f2ed250cce0f
Change-Id: I5cff9587626a3b2a147e03146d5268242d1c9658
Partial-bug: #1668920
|
|
Co-Authored-By: Jon Bernard <jobernar@redhat.com>
Depends-On: I486de8b6ab2f4235bb4a21c3650f6b9e52a83b80
Change-Id: I6cf70fa05ad1c8aa6d9f837ddcd370eb26e45f97
|
|
This configures iscsid so that it runs as a container on
relevant roles (undercloud, controller, compute, and volume).
When the iscsid docker service is provisioned, it will also run
an ansible snippet that disables iscsid.socket on the host OS,
thus preventing the host's systemd from auto-starting iscsid
as it normally would.
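A hedged sketch of what such an ansible snippet could look like (the
task below is illustrative and uses the standard systemd module; it is
not the exact code in this change):

  - name: Stop and disable iscsid.socket on the host OS
    # illustrative only; the real snippet may differ
    systemd:
      name: iscsid.socket
      state: stopped
      enabled: no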
Co-Authored-By: Jon Bernard <jobernar@redhat.com>
Change-Id: I2ea741ad978f166e199d47ed1b52369e9b031f1f
|
|
The service name is 'HAproxy', not 'HAProxy'. This needs fixing so that
the proper service is instantiated when a role includes the HAproxy
service.
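Assuming the typo sat in a resource_registry mapping, the fix amounts
to using the exact service key; a sketch (the target path is an
assumption):

  resource_registry:
    # note the lowercase 'p' in 'HAproxy'
    OS::TripleO::Services::HAproxy: ../docker/services/haproxy.yaml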
Change-Id: Ibcbacff16c3561b75e29b48270d60b60c1eb1083
|
|
This change implements an initial container for haproxy in the non-HA
case (i.e. when the container is not spawned by pacemaker).
We tested this using a stock kolla haproxy container image and were
able to get haproxy running correctly in a container with net=host.
Change-Id: I90253412a5e2cd8e56e74cce3548064c06d022b1
Co-Authored-By: Michele Baldessari <michele@acksyn.org>
Depends-on: I51c482b70731f15fee4025bbce14e46a49a49938
Closes-Bug: #1668936
|
|
This adds the sshd puppet service to the containerized compute role.
All other roles already include this service via the default roles data;
it is only missing from the compute role.
As the sshd service runs on the docker host, this must remain as a
traditional puppet service. NB the sshd puppet service does not enable
sshd, it just enables the management of the sshd config via t-h-t/puppet.
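For reference, including a composable service in a role is just an
entry in that role's ServicesDefault list in the roles data; a trimmed,
illustrative sketch:

  - name: Compute
    ServicesDefault:
      # ... existing compute services ...
      - OS::TripleO::Services::Sshd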
Closes-bug: #1693837
Change-Id: I86ff749245ac791e870528ad4b410f3c1fd812e0
|
|
This patch adds support for running the neutron metadata agent in a
container.
Change-Id: I53c62516c95d62f5ced70818d4eb4c2c341df0d7
Partial-Bug: #1668922
|
|
These duplicate the defaults in puppet/services/docker.yaml and
break things if you include an environment file (e.g. the one generated
by quickstart, containers-default-parameters.yaml) before the
docker.yaml environment.
Instead it's probably more helpful to include the commented lines
showing how to enable use of a local docker registry.
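A hedged sketch of the kind of commented-out hint meant here; the
parameter names follow the docker environment's conventions but are
assumptions, and the values are placeholders:

  parameter_defaults:
    # Uncomment and adjust to pull images from a local docker registry:
    # DockerNamespace: 192.168.24.1:8787/tripleoupstream
    # DockerNamespaceIsRegistry: true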
Change-Id: I3896fa2ea7caa603186f0af04f6d8382d50dd97a
Closes-Bug: #1691524
|
|
Adds a service definition for Horizon running inside a docker container.
Co-Authored-By: Martin André <m.andre@redhat.com>
Closes-Bug: #1668926
Depends-On: I677ad57672215f6afe918e13b28c9ce2e1de5a81
Change-Id: I29f18722f4da48dab18f9e5c51b01fba42316734
|
|
Depends-on: I30ba93f76171e5993b5f0e1d7f1f5533acb25740
Closes-bug: #1668925
Change-Id: I3cb61d2d8765f9c2601bb00c4bfa24162883b96a
|
|
Closes-bug: #1668919
Change-Id: Ie750caa34c6fa22ca6eae6834b9ca20e15d97f7f
|
|
Co-Authored-By: Pradeep Kilambi <pkilambi@redhat.com>
Closes-bug: #1668918
Change-Id: Ie1ebd25965bd2dbad2a22161da0022bad0b9e554
|
|
Closes-bug: #1668928
Change-Id: I291df31be97c3d55cddb3924482aa5976a79c2b1
|
|
Closes-bug: #1668930
Change-Id: If5dff4388b255373083e164a74aaacd529a94111
|
|
This patch moves enabling Zaqar docker services into
a separate environment in the environments/services-docker
directory.
Change-Id: I6755eb7ae2abb2b9c8b213ff6fd21b0392353ef5
|
|
This patch moves enabling Mistral docker services into
a separate environment in the environments/services-docker
directory.
Change-Id: I8b484532de5f5d61fc0240defbc5fc27789a1279
|
|
This patch moves enabling Ironic docker services into
a separate environment in the environments/services-docker
directory.
Change-Id: I236de47d422b3563a0192359f2327610fc1714ca
|
|
A recent commit [1] changed how docker is installed and configured on
the overcloud nodes, from a cloud-init script to a proper puppet
profile in puppet-tripleo, but forgot to enable the docker service on
the compute nodes.
[1] Ia50169819cb959025866348b11337728f8ed5c9e
Change-Id: I202723d0e48f110e5b0dbfe3dcf6646da9f37948
|
|
This aligns the docker-based services with the new composable upgrades
architecture we landed for Ocata, and does a first pass at adding
upgrade_tasks for the services (these may change; at the moment we only
disable the service on the host).
To run the upgrade workflow you basically do two steps:
openstack overcloud deploy --templates \
-e environments/major-upgrade-composable-steps-docker.yaml
This will run the ansible upgrade steps we define via upgrade_tasks,
then run the normal docker PostDeploySteps to bring up the containers.
As in the puppet workflow, there's then an operator-driven step where
compute nodes (and potentially storage nodes) are upgraded in batches,
and finally you do:
openstack overcloud deploy --templates \
-e environments/major-upgrade-converge-docker.yaml
In the puppet case this re-applies puppet to unpin the nova RPC API,
so presumably it'll restart the nova containers this affects, but
otherwise it will be a no-op (we also disable the ansible steps at this
point).
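Since the upgrade_tasks currently only disable the service on the host,
a representative task for one service would be roughly the following
(illustrative only; the service name and step tag are assumptions):

  upgrade_tasks:
    - name: Stop and disable the openstack-cinder-api service on the host
      tags: step2
      service: name=openstack-cinder-api state=stopped enabled=no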
Depends-On: I9057d47eea15c8ba92ca34717b6b5965d4425ab1
Change-Id: Ia50169819cb959025866348b11337728f8ed5c9e
|
|
This allows running a containerized neutron on the overcloud.
Co-Authored-By: Martin André <m.andre@redhat.com>
Depends-On: Iaf6536b1c4d0b2b118af92295136378cdfeee9d1
Change-Id: I86a12248d4f28f4dbe7708be928bcd8a45968d01
|
|
A recent patch enabled a few containerized services on the Controller
node. We need to enable docker for all the roles.
Change-Id: I99fc0c2d29db3514a439b717d14367ad2252e450
|
|
This patch avoids conflicts when cherry-picking docker services
in our ad-hoc docker services patch series, and enables them
all at once.
Change-Id: Ia4f7c8071d8b4f3c9a7d73173e9120eb1e79ce53
|
|
This patch implements a new docker deployment architecture that
should allow us to install docker services in a stepwise manner
alongside baremetal puppet services. This works by using Yaql to select
docker-specific services (docker/services/*.yaml) vs the puppet-specific
ones, and then applying the selected JSON to the relevant Heat
software deployments for docker and baremetal puppet in a stepwise
fashion.
Additionally, the new architecture
leverages the new composable services interfaces from Newton to
allow configuration of per-service container configuration
sets (directories that are bind-mounted into kolla containers) by
using the Kolla containers themselves. It does this by spinning up
a throwaway "configuration only" version of the container being
configured, then running puppet apply in that container and
copying the generated config files into /var/lib/config-data. This
avoids having to install all of the OpenStack dependency packages
in the heat-agent-container itself (our previous approach) and should
allow us to configure a much wider variety of container config files
that would otherwise be impossible with the previous shared approach.
The new approach (combined) should allow us to configure containers in
both the undercloud and overcloud and incrementally add CI coverage to
services as we containerize them.
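As a rough, hedged illustration of the shape this takes in a
docker/services/*.yaml template, a per-service section might look
something like the fragment below; the key names follow the pattern
described above but are assumptions, not the exact schema:

  puppet_config:
    # run puppet inside a throwaway copy of the service's own kolla
    # image and persist the result under /var/lib/config-data
    config_volume: example_service
    step_config: {get_attr: [ExampleServiceBase, role_data, step_config]}
    config_image: tripleoupstream/centos-binary-example-service:latest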
Co-Authored-By: Martin André <m.andre@redhat.com>
Co-Authored-By: Ian Main <imain@redhat.com>
Co-Authored-By: Flavio Percoco <flavio@redhat.com>
Change-Id: Ibcff99f03e6751fbf3197adefd5d344178b71fc2
|
|
This switches to using overcloud-full as the OS image for
containerized compute. It includes the following changes:
- install docker, until this change lands
I1eab2a6de721c8f3c21c7df0019f2d4d1cc3775f
- agent image pull has been removed. This avoids a race between docker
starting and the current call to pull. This relies on "docker run"
to do the initial pull and leaves open the option of some other
prefetch mechanism to do the initial pull
- rely on unit Conflicts= to ensure heat-docker-agents and
os-collect-config do not run at the same time
- tweaks to host bind mounts
- removal of commands which only apply to atomic
Co-Authored-By: Martin André <m.andre@redhat.com>
Change-Id: I2e82634785834a877a4dbdbdcd788a9ac1c14a9d
|
|
Currently, when the docker environments are invoked, every node runs
the boot script which replaces os-collect-config with the heat-agents
container. For now this should only happen on Compute nodes,
and each role will be converted to heat-agents one at a time.
This change implements a role-specific NodeUserData resource and uses
that mechanism to run docker/firstboot/install_docker_agents.yaml only
on Compute nodes.
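Assuming the usual per-role first-boot hook, the role-specific mapping
is a resource_registry entry along these lines (the relative path
prefix is an assumption):

  resource_registry:
    # only Compute nodes get the docker/heat-agents first-boot script
    OS::TripleO::Compute::NodeUserData: ../docker/firstboot/install_docker_agents.yaml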
Change-Id: Id81811dbcaf0e661c3980aa25f3ca80db5ef0954
|