Use mounts instead of docker volumes, and preserve existing data when
moving from baremetal to containerized ironic-conductor.
We cannot keep the data in the same directory, because ironic would then
hit hard-linking errors caused by this Docker issue:
https://github.com/docker/docker/issues/7457
This means we need to copy the data over to a new location before we
start the containers.
Change-Id: If98460120212f887b06adf117c5d88b97682638e
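To make the data migration concrete, a rough sketch of what such a copy
could look like as host-level Ansible tasks; the directory names and
task wording are illustrative, not the exact change:

  host_prep_tasks:
    # Create the directory that will be bind mounted into the
    # containerized ironic-conductor.
    - name: create the new ironic data directory
      file:
        path: /var/lib/ironic
        state: directory
    # Copy the data left behind by the baremetal service so it is
    # preserved; cp -a keeps ownership and permissions.
    - name: copy existing ironic data into the new location
      command: cp -a /var/lib/ironic-old/. /var/lib/ironic/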
Co-Authored-By: Pradeep Kilambi <pkilambi@redhat.com>
Closes-bug: #1668918
Change-Id: Ie1ebd25965bd2dbad2a22161da0022bad0b9e554
This is used for the TLS-everywhere bits. It will be taken into account
by a metadata hook that outputs the relevant entries for the nova-metadata
service, and Kerberos principals will subsequently be created from these
entries.
Subsequent patches will add support for TLS in the internal network for
the containerized keystone.
Change-Id: Ic747ad9c8d6e76c8c16e347c1cdcabc899dd9f9a
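For reference, the entries in question are small per-service records
describing what a service listens on; a hypothetical example of what a
service template might expose (key names and values below are
illustrative, not the exact interface):

  outputs:
    role_data:
      value:
        service_name: keystone
        # Consumed by the nova-metadata hook; an agent such as novajoin
        # can then create Kerberos principals from these entries.
        metadata_settings:
          - service: keystone
            network: internal_api
            type: vip
          - service: keystone
            network: internal_api
            type: node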
Use mounts instead of docker volumes to preserve existing data when
moving from baremetal to containerized Libvirt.
Change-Id: I2215d451a4ef4023741f0750ac1b45a94652026a
Use mounts instead of docker volumes to preserve existing data when
moving from baremetal to containerized Swift.
Change-Id: Ib7cbca2ef674a0245a67b69ee2c77f574d74c181
Change-Id: I936b31fd24c43e35092b3bfef4454a8da81d19c8
Since the 'file' resource is included in the tags that puppet takes into
account, the fernet keys are already generated when fernet is enabled as
the token provider.
This merely adds the keys to the container; the file addition is marked
optional so that nothing breaks when fernet is not the provider.
Change-Id: Id92039b3bad9ecda169323e01de7bebae70f2ba0
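A minimal sketch of how the optional file can be expressed in the
service's kolla_config (paths and the httpd command are illustrative);
Kolla's config handling skips an entry marked optional when the source
file does not exist, i.e. when fernet is not the token provider:

  kolla_config:
    /var/lib/kolla/config_files/keystone.json:
      command: /usr/sbin/httpd -DFOREGROUND
      config_files:
        # Copy the puppet-generated fernet keys into the container,
        # but do not fail when they were never generated.
        - source: /var/lib/kolla/config_files/src/etc/keystone/fernet-keys
          dest: /etc/keystone/fernet-keys
          owner: keystone
          perm: '0600'
          optional: true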
Use mounts instead of docker volumes to preserve existing data when
moving from baremetal to containerized RabbitMQ.
Change-Id: I8de6610d13d2d878ffba12eb742880eed694eb3e
We used a named Docker volume for MongoDB storage, which meant that when
moving from bare metal to containerized, we lost data and reinitialized
the storage from scratch.
With this commit we keep the data by mounting the original data into the
container. We also need to make sure that file ownership is correct
according to the uid/gid used within the MongoDB container image.
Change-Id: I86ef2cb37a068b767462d6d50fe451389b7cbb58
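One way to express the ownership fix is a short-lived container run as
root from the same image, before the real MongoDB container starts; this
sketch uses illustrative parameter names, step and paths:

  docker_config:
    step_2:
      mongodb_data_ownership:
        image: {get_param: DockerMongodbImage}   # parameter name illustrative
        user: root
        # Make the bare metal data readable by the uid/gid that mongodb
        # runs as inside the container image.
        command: ['chown', '-R', 'mongodb:', '/var/lib/mongodb']
        volumes:
          - /var/lib/mongodb:/var/lib/mongodb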
We used a named Docker volume for MariaDB storage, which meant that when
moving from bare metal to containerized MariaDB, we lost data and
reinitialized the storage from scratch.
With this commit we keep the data by mounting the original data into the
container.
We also need to make sure that file ownership is correct according to
the MariaDB container image used, and that Kolla bootstrap mechanisms
aren't retriggered, as they aren't idempotent.
Change-Id: I1fc955021c6dd83f1a366495dd8c7281fb9e7cc5
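A rough sketch of the bootstrap-guard idea (parameter name, step and
paths are illustrative): the non-idempotent Kolla bootstrap only runs
when the MySQL data directory has not been initialized yet, so data
carried over from bare metal is left untouched:

  docker_config:
    step_2:
      mysql_bootstrap:
        image: {get_param: DockerMysqlImage}   # parameter name illustrative
        net: host
        environment:
          - KOLLA_CONFIG_STRATEGY=COPY_ALWAYS
          - KOLLA_BOOTSTRAP=True
        # Only bootstrap a brand new database; an existing datadir
        # (e.g. migrated from bare metal) is used as-is.
        command: ['bash', '-c', 'test -e /var/lib/mysql/mysql || kolla_start']
        volumes:
          - /var/lib/mysql:/var/lib/mysql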
Closes-bug: #1668928
Change-Id: I291df31be97c3d55cddb3924482aa5976a79c2b1
This implements a host_prep_tasks hook where we can specify Ansible
tasks to perform on the host before deploying containerized
services. The hook runs in a single step; the assumption is that we will
mostly use it for creating per-service directories on the host so that
we are able to mount them into the containers. (We cannot do this via
Puppet because all containerized services run their Puppet within a
config container, so Puppet doesn't have access to the host's
filesystem.)
Change-Id: I7d8bac39e0cd422fd651eefe29f7d10941ab4a1a
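A minimal sketch of what this looks like in a service template's
role_data output (service name and path are just examples):

  outputs:
    role_data:
      value:
        service_name: example
        # Ansible tasks executed on the host, in a single step, before
        # the containers are started.
        host_prep_tasks:
          - name: create persistent directory to bind mount into the container
            file:
              path: /var/lib/example
              state: directory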
Add an example build script, and link the current overrides template,
which can be used to build Kolla images that work with TripleO.
Change-Id: I2ca122bfe797ed7c38a01ed9462cd880681d21f1
Signed-off-by: Bogdan Dobrelya <bdobreli@redhat.com>
Closes-bug: #1668930
Change-Id: If5dff4388b255373083e164a74aaacd529a94111
Use yaml anchors wherever possible for image definitions and drop unused
anchors.
Rename parameters to Docker*ConfigImage to clarify that the image is
specifically used to generate configuration files.
Change-Id: I388bd59de7f1d36a3a881fbb723ba5bcba09e637
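As a generic illustration of the anchor mechanism (the parameter names
and image below are examples, not the exact list from the templates):

  parameter_defaults:
    # Define the image once with an anchor...
    DockerKeystoneImage: &keystone_image tripleoupstream/centos-binary-keystone:latest
    # ...and reuse it for the container that only generates config files.
    DockerKeystoneConfigImage: *keystone_image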
We don't use docker_image for anything. It is a remnant of the
pre-composable docker templates and we can now remove it.
This patch removes references to the 'docker_image' section
from docker/post.yaml and all of the docker/services* templates.
Change-Id: I208c1ef1550ab39ab0ee47ab282f9b1937379810
This updates the docker/service README so that it
correctly documents the current requirements of the new
puppet_config interface.
Change-Id: I0f3e00ea3cce24152475abf6df34f4836e32c9c8
Currently, return codes from the docker-puppet workers are ignored, so a
failed puppet apply may not manifest until later in the stack
deployment, surfacing as some other failure.
This change logs any failures at the end of the run and returns a
failure code if any worker returns a failure code.
Change-Id: I6a504dbeb4c0ac465ce10e7647830524fe0a1160
This is now required per the puppet_config interfaces for docker
services (per I208c1ef1550ab39ab0ee47ab282f9b1937379810)
Change-Id: Iab96919cb0a6b15942f3c19f8d28205261174edc
A recent commit [1] changed how docker is installed and configured on
the overcloud nodes, from a cloud-init script to a proper puppet
profile in puppet-tripleo, but forgot to enable the docker service on
the compute nodes.
[1] Ia50169819cb959025866348b11337728f8ed5c9e
Change-Id: I202723d0e48f110e5b0dbfe3dcf6646da9f37948
This patch makes the neutron-l3 docker service adhere
to the new puppet_config interface.
Change-Id: If5b73ec90637e878af55c8404d1eff8c18e857c3
This aligns the docker-based services with the new composable upgrades
architecture we landed for Ocata, and does a first pass at adding
upgrade_tasks for the services (these may change; at the moment we only
disable the service on the host).
To run the upgrade workflow you basically do two steps:
openstack overcloud deploy --templates \
-e environments/major-upgrade-composable-steps-docker.yaml
This will run the ansible upgrade steps we define via upgrade_tasks and
then run the normal docker PostDeploySteps to bring up the containers.
As in the puppet workflow, there is then an operator-driven step where
compute nodes (and potentially storage nodes) are upgraded in batches,
and finally you do:
openstack overcloud deploy --templates \
-e environments/major-upgrade-converge-docker.yaml
As in the puppet case, this re-applies puppet to unpin the nova RPC API,
so it will restart the nova containers this affects but will otherwise
be a no-op (we also disable the ansible steps at this point).
Depends-On: I9057d47eea15c8ba92ca34717b6b5965d4425ab1
Change-Id: Ia50169819cb959025866348b11337728f8ed5c9e
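For illustration, a first-pass upgrade_tasks entry of the kind described
above (service name and step tag are examples) simply stops and disables
the bare metal service on the host before the containers take over:

  upgrade_tasks:
    - name: Stop and disable the nova-compute service on the host
      tags: step2
      service:
        name: openstack-nova-compute
        state: stopped
        enabled: no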
In cases where /var/log/httpd already exists, the keystone-init-log
container exits with error code 1:
$ sudo docker logs keystone-init-log
mkdir: cannot create directory '/var/log/httpd': File exists
Change-Id: I62bf08d9fc9e02d5f3016bd14bb0a090b76ac837
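The straightforward fix (sketched here with an illustrative container
definition) is to make the directory creation idempotent with mkdir -p:

  docker_config:
    step_1:
      keystone_init_log:
        image: {get_param: DockerKeystoneImage}   # parameter name illustrative
        user: root
        # -p succeeds even when the directory is already present from a
        # previous bare metal deployment.
        command: ['mkdir', '-p', '/var/log/httpd']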
This updates the kolla config to overwrite the stock version with the
puppet-nova generated one.
Depends-On: Ie16a60c604ecf9f4012b0630f91e6ece2b6855db
Change-Id: I320f024adc88102ea24c0212702fe2dce826874f
Closes-bug: #440612
This approach removes the need for the yaql zip to build the
docker-puppet data by building the data in a puppet_config dict.
This allows a future change to make docker-puppet.py only accept dict
data.
Currently the step_config is left where it is and referenced inside
puppet_config, but feedback is welcome on whether this is necessary or
desirable.
Change-Id: I4a4d7a6fd2735cb841174af305dbb62e0b3d3e8c
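To show the shape of the data, a sketch of the puppet_config dict a
service template ends up exposing (values are examples):

  puppet_config:
    # Named volume where the generated configuration is written.
    config_volume: keystone
    # Only puppet resources matching these tags are applied.
    puppet_tags: keystone_config
    # Manifest snippet applied inside the config container.
    step_config: include ::tripleo::profile::base::keystone
    # Image used purely for generating the configuration files.
    config_image: {get_param: DockerKeystoneConfigImage}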
This allows running a containerized neutron on the overcloud.
Co-Authored-By: Martin André <m.andre@redhat.com>
Depends-On: Iaf6536b1c4d0b2b118af92295136378cdfeee9d1
Change-Id: I86a12248d4f28f4dbe7708be928bcd8a45968d01
Otherwise the containerized nova running in the overcloud fails with
"Host 'overcloud-novacompute-0' is not mapped to any cell, Code: 400".
Co-Authored-By: Martin André <m.andre@redhat.com>
Change-Id: I9ff77f25bfd1f37167b0638a32fe5049951bc5b4
We should always pass the `DOCKER_*` env vars to all the `docker`
commands executed in the various scripts, as those variables may
contain the connection details for the docker daemon.
Change-Id: Ie719f451350e6ea35cb22d97a8f090ad81fa8141
This change allows docker-puppet.py data to be provided as a dict as
well as a list, which lets docker_puppet_tasks data use the same keys
as the top-level puppet config data.
If the yaql needed to build the top-level data can be worked out,
docker-puppet.py can later drop the list format entirely.
Change-Id: I7e2294c6c898d2340421c93516296ccf120aa6d2
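Roughly, the two accepted shapes are sketched below with illustrative
values; the older form is positional, while the dict form reuses the
keys of the top-level puppet config data:

  docker_puppet_tasks:
    # Previously this entry had to be a positional list:
    #   [config_volume, puppet_tags, manifest, config_image]
    # With this change a dict using the same keys as the top-level
    # puppet config data is also accepted:
    keystone_init:
      config_volume: keystone
      puppet_tags: keystone_config
      step_config: include ::tripleo::profile::base::keystone
      config_image: tripleoupstream/centos-binary-keystone:latest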
tool to use it."
This allows you to show the changes made to a container during the
configuration stage, to speed up development.
Change-Id: Id9c72cf2b07486f0a80bf3572a7ba349888d877f
Neutron DB sync didn't have permission to read the config files, so we
now run neutron-db-manage as root until we can find a more permanent
solution.
Change-Id: I502a8514adc523c7cac1da059be10480eef71cb9
Closes-Bug: #1667300
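A sketch of the interim workaround in the neutron db sync container
definition (parameter name, step and mount paths are illustrative):

  docker_config:
    step_3:
      neutron_db_sync:
        image: {get_param: DockerNeutronApiImage}   # parameter name illustrative
        net: host
        # Temporary workaround: run as root so neutron-db-manage can read
        # the puppet-generated configuration files.
        user: root
        command: ['neutron-db-manage', 'upgrade', 'heads']
        volumes:
          - /var/lib/config-data/neutron/etc/neutron:/etc/neutron:ro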
This patch sets the step correctly for docker_puppet_tasks.
This is now required in order to match the 'step' in some
puppet manifests explicitly so that things like keystone
initialization run correctly.
Closes-bug: #1667454
Change-Id: If2bdd0b1051125674f116f895832b48723d82b3a
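For example, keystone's bootstrap puppet code is guarded by the step
hieradata, so the docker_puppet_tasks entry has to be attached to the
matching step; a sketch with illustrative tags and parameter names:

  docker_puppet_tasks:
    # Keyed by the step at which the task must run, so manifests guarded
    # by "if hiera('step') >= 3" (e.g. keystone endpoint creation) apply.
    step_3:
      config_volume: keystone_init_tasks
      puppet_tags: keystone_config,keystone_user,keystone_endpoint,keystone_service,keystone_role
      step_config: include ::tripleo::profile::base::keystone
      config_image: {get_param: DockerKeystoneConfigImage}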