This removes the default container names from all the templates
and uses a single environment file to specify the full container
name and registry from which to pull. Also does away with most
of DockerNamespace.
Change-Id: Ieaedac33f0a25a352ab432cdb00b5c888be4ba27
Depends-On: Ibc108871ebc2beb1baae437105b2da1d0123ba60
Co-Authored-By: Dan Prince <dprince@redhat.com>
Co-Authored-By: Steve Baker <sbaker@redhat.com>
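For illustration, such an environment file might look as follows; the parameter name, registry and image reference are placeholders rather than values taken from this commit:

    parameter_defaults:
      # full image reference, including the registry to pull from
      DockerNovaApiImage: 192.168.24.1:8787/tripleoupstream/centos-binary-nova-api:latest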
|
|
Makes it possible to resolve network subnets within a service
template; the data is transported into a new property, ServiceData,
wired into every service, which hopefully is generic enough to
be extended in the future to transport more data.
The data can be consumed in service templates to set config values
which need to know the subnet on which a daemon operates (for
example the Ceph public vs. cluster network).
Change-Id: I28e21c46f1ef609517175f7e7ee19e28d1c0cba2
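As a rough sketch (the ServiceData key layout and the Ceph hiera key below are assumptions for illustration, not taken from this commit), a service template could consume the new property like this:

    parameters:
      ServiceData:
        default: {}
        description: Dictionary packing service data
        type: json

    outputs:
      role_data:
        value:
          config_settings:
            # assumed: ServiceData carries a map of network name -> CIDR
            ceph::profile::params::cluster_network: {get_param: [ServiceData, net_cidr_map, storage_mgmt]}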
|
|
This solves a problem with bind mounts when the containers are holding
file descriptors open.
At the same time this makes the templates more robust to puppet changes,
since new config files will be available in the containers without
needing to update the templates.
Partial-Bug: #1698323
Change-Id: Ia4ad6d77387e3dc354cd131c2f9756939fb8f736
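A minimal sketch of the idea (the service and paths are illustrative): mount the puppet-generated config directory as a whole, so updated files are picked up even while containers hold file descriptors open and new files appear without template changes:

    volumes:
      - /var/lib/config-data/glance_api/etc/glance/:/etc/glance/:ro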
|
|
This commit consistently defines a heat template parameter in the form
of DockerXXXConfigImage where XXX represents the name of the
config_volume that is used by docker-puppet.
The goal is to mitigate hard-to-debug errors where the templates would
set different defaults, for the same config_volume name, for the image
docker-puppet.py uses to run.
This fixes a couple of inconsistencies along the way.
Change-Id: I212020a76622a03521385a6cae4ce73e51ce5b6b
Closes-Bug: #1699791
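A sketch of what one such parameter definition looks like (the service name is only an example; the default image is supplied elsewhere in the environment):

    parameters:
      DockerNovaConfigImage:
        description: image used by docker-puppet.py for the nova config_volume
        type: string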
|
|
On many occasions we had log directory initialization containers
without `detach: false`, which didn't guarantee that they would finish
before the containers depending on them started using the log
directory.
This is now fixed by moving the initialization container one global
step earlier, so that we can keep the concurrency when creating the
log dirs. (Using `detach: false` makes paunch handle just one
container at a time, and as such it can have a negative performance
impact.)
For services which have their container(s) starting in step_1,
initialization cannot be moved to an earlier step, so the solution
here was to just add `detach: false`.
As a minor related change, the cinder DB sync container now mounts the
log directory from the host to put cinder-manage.log into the expected
location.
Change-Id: I1340de4f68dd32c2412d9385cf3a8ca202b48556
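A sketch of the two patterns, shown together for brevity (service names, steps and image parameters are illustrative): a step_1 init container that uses `detach: false`, and an init container placed one global step ahead of its service so it can still run concurrently:

    docker_config:
      step_1:
        mysql_init_logs:
          # the mysql server also starts in step_1, so detach: false
          # guarantees this container finishes first
          image: {get_param: DockerMysqlImage}
          detach: false
          user: root
          volumes:
            - /var/log/containers/mysql:/var/log/mariadb
          command: ['/bin/bash', '-c', 'chown -R mysql:mysql /var/log/mariadb']
      step_2:
        nova_api_init_logs:
          # the nova-api container starts in step_3, so its log init
          # container runs one global step earlier and stays detached
          image: {get_param: DockerNovaApiImage}
          user: root
          volumes:
            - /var/log/containers/nova:/var/log/nova
          command: ['/bin/bash', '-c', 'chown -R nova:nova /var/log/nova']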
|
|
This change modifies these mounts to be more specific mounts based on
the files which puppet actually modifies.
The result is something a bit more self-documenting, and allows for
trying other techniques for populating /etc other than directly mounting
config-data directories.
Change-Id: Ied1eab99d43afcd34c00af25b7e36e7e55ff88e6
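For example (file paths are illustrative), instead of mounting a whole config-data directory a service would mount just the files puppet is known to modify:

    volumes:
      - /var/lib/config-data/keystone/etc/keystone/keystone.conf:/etc/keystone/keystone.conf:ro
      - /var/lib/config-data/keystone/etc/httpd/conf.d/10-keystone_wsgi_main.conf:/etc/httpd/conf.d/10-keystone_wsgi_main.conf:ro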
|
|
Master is now the development branch for Pike, so change the release
alias name accordingly.
Change-Id: I938e4a983e361aefcaa0bd9a4226c296c5823127
|
|
This was forgotten in I72376a803ec6b2ed93903cc0c95a6ffce718b6dc and
broke containerized deployment.
Change-Id: I599a87bf06efbfefd3067c77ed6ca866505900f9
Closes-Bug: #1690870
|
|
When a service is enabled on multiple roles, the parameters for the
service are global. This change adds an option to provide
role-specific parameters to services and other templates.
Two new parameters, RoleName and RoleParameters, are added to the
service template. RoleName provides the name of the role on which the
current instance of the service is being applied. RoleParameters
provides the list of parameters which are configured specifically for
that role in the environment file, like below:

    parameter_defaults:
      # Default value applied to all roles
      NovaReservedHostMemory: 2048
      ComputeDpdkParameters:
        # Applied only to the ComputeDpdk role
        NovaReservedHostMemory: 4096

In the above sample, the cluster contains two roles - Compute and
ComputeDpdk. The values of ComputeDpdkParameters will be passed on to
the templates as RoleParameters while creating the stack for the
ComputeDpdk role. A parameter which supports role-specific
configuration should be looked up first in the RoleParameters list;
if not found there, the default (for all roles) should be used.
Implements: blueprint tripleo-derive-parameters
Change-Id: I72376a803ec6b2ed93903cc0c95a6ffce718b6dc
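In the service templates the two parameters are declared roughly as follows (a sketch consistent with the description above):

    parameters:
      RoleName:
        description: Role name on which the service is applied
        default: ''
        type: string
      RoleParameters:
        description: Parameters specific to the role
        default: {}
        type: json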
|
|
Some containers are using the logs named volume for collecting logs
written to `/var/log`. We should make this consistent for all the
containers.
This patch also cleans up some mounts that weren't needed for some
services. For example, glance-api doesn't need `/run` to be mounted.
Other changes:
* Rework log volumes into hostpath mounts to avoid slow COW writes
  (sketched below).
* Add kolla_config permissions and host_prep_tasks to create and
  manage the permissions of hostpath mounted log dirs.
* Rework data-owning init containers into kolla_config permissions.
* When a step wants KOLLA_BOOTSTRAP or a DB sync, use log data-owning
  init containers to set permissions for logs. This is required
  because kolla bootstrap and DB sync run before the kolla config
  stage, when no permissions have been set for logs yet.
* In order to address hybrid cases where host services and
  containerized ones access logs with different UIDs, persist
  containerized services' logs into separate directories (an upgrade
  impact).
* Ensure host prep tasks create /var/log/containers/ and /var/lib/
  sub-directories for services.
* Fix missing /etc/httpd and /var/www config-data mounts for
  zaqar/ironic.
* Fix YAML indentation and drop string quotations.
Co-authored-by: Bogdan Dobrelya <bdobreli@redhat.com>
Partial blueprint containerized-services-logs
Change-Id: I53e737120bf0121bd28667f355b6f29f1b2a6b82
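A sketch of the hostpath pattern from the first bullet (the heat-api service, paths and ownership are illustrative): the log directory is created from host_prep_tasks, bind mounted into the container, and owned via kolla_config permissions:

    host_prep_tasks:
      - name: create persistent heat-api logs directory
        file:
          path: /var/log/containers/heat
          state: directory
    kolla_config:
      /var/lib/kolla/config_files/heat_api.json:
        command: /usr/bin/heat-api
        permissions:
          - path: /var/log/heat
            owner: heat:heat
            recurse: true
    docker_config:
      step_4:
        heat_api:
          image: {get_param: DockerHeatApiImage}
          volumes:
            - /var/log/containers/heat:/var/log/heat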
|
|
Kolla provides a way to set ownership of files and directories inside
the containers. Use it instead of running an additional container to do
the job.
Change-Id: I554faf7c797f3997dd3ca854da032437acecf490
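Kolla's per-service config.json accepts a permissions list; roughly (path and owner are examples), the fragment below would sit inside the service's kolla_config entry instead of a dedicated chown container:

    permissions:
      - path: /var/log/glance
        owner: glance:glance
        recurse: true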
|
|
Simplify the config of the containerized services by bind mounting in
the configurations instead of specifying them all in kolla config.
This change is useful to limit the side effects of generating the
config files and running the container in two separate steps, as config
directories are now bind mounted inside the container instead of having
files copied into the container. We've seen examples of Apache's
mod_ssl configuration file being present in the container and preventing
it from starting when puppet configured Apache not to load the ssl
module (in case TLS is disabled).
Co-Authored-By: Ian Main <imain@redhat.com>
Change-Id: I4ec5dd8b360faea71a044894a61790997f54d48a
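The effect on a service definition, roughly (keystone and the exact paths are just examples): the kolla config entry no longer enumerates config_files, and the generated config directories are bind mounted read-only instead:

    kolla_config:
      /var/lib/kolla/config_files/keystone.json:
        command: /usr/sbin/httpd -DFOREGROUND
    docker_config:
      step_3:
        keystone:
          image: {get_param: DockerKeystoneImage}
          volumes:
            - /var/lib/config-data/keystone/etc/keystone/:/etc/keystone/:ro
            - /var/lib/config-data/keystone/etc/httpd/:/etc/httpd/:ro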
|
|
|
|
We used a named Docker volume for MariaDB storage, which meant that
when moving from bare metal to containerized MariaDB, we lost the data
and reinitialized the storage from scratch.
With this commit we keep the data by mounting the original data into the
container.
We also need to make sure that file ownership is correct according to
the MariaDB container image used, and that Kolla bootstrap mechanisms
aren't retriggered, as they aren't idempotent.
Change-Id: I1fc955021c6dd83f1a366495dd8c7281fb9e7cc5
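In practice this amounts to replacing the named volume with a hostpath mount of the existing data directory, roughly (the container name and image parameter are illustrative):

    mysql:
      image: {get_param: DockerMysqlImage}
      volumes:
        # reuse the data that was written by the bare metal MariaDB
        - /var/lib/mysql:/var/lib/mysql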
|
|
Use yaml anchors wherever possible for image definitions and drop
unused anchors.
Rename parameters to Docker*ConfigImage to clarify that the image is
specifically used to generate configuration files.
Change-Id: I388bd59de7f1d36a3a881fbb723ba5bcba09e637
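The anchor pattern looks roughly like this (the glance containers are only an example):

    docker_config:
      step_4:
        glance_api:
          # the anchor is defined at the first use of the image...
          image: &glance_api_image {get_param: DockerGlanceApiImage}
        glance_api_cron:
          # ...and reused afterwards instead of repeating get_param
          image: *glance_api_image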
|
|
We don't use docker_image for anything. It is a remnant of the
pre-composable docker templates and we can now remove it.
This patch removes references to the 'docker_image' section
from docker/post.yaml and all of the docker/services* templates.
Change-Id: I208c1ef1550ab39ab0ee47ab282f9b1937379810
|
|
This aligns the docker based services with the new composable upgrades
architecture we landed for ocata, and does a first pass at adding
upgrade_tasks for the services (these may change; at the moment we only
disable the service on the host).
To run the upgrade workflow you basically do two steps:

    openstack overcloud deploy --templates \
      -e environments/major-upgrade-composable-steps-docker.yaml

This will run the ansible upgrade steps we define via upgrade_tasks,
then run the normal docker PostDeploySteps to bring up the containers.
For the puppet workflow there's then an operator driven step where
compute nodes (and potentially storage nodes) are upgraded in batches,
and finally you do:

    openstack overcloud deploy --templates \
      -e environments/major-upgrade-converge-docker.yaml

In the puppet case this re-applies puppet to unpin the nova RPC API,
so I guess it'll restart the nova containers this affects but otherwise
will be a no-op (we also disable the ansible steps at this point).
Depends-On: I9057d47eea15c8ba92ca34717b6b5965d4425ab1
Change-Id: Ia50169819cb959025866348b11337728f8ed5c9e
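A minimal sketch of such an upgrade_task as described above (the service name and step tag are illustrative): disable the corresponding host service before the containers take over:

    upgrade_tasks:
      - name: Stop and disable nova-compute service on the host
        tags: step2
        service: name=openstack-nova-compute state=stopped enabled=no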
|
|
This approach removes the need for the yaql zip to build the
docker-puppet data by building the data in a puppet_config dict.
This allows a future change to make docker-puppet.py only accept dict
data.
Currently the step_config is left where it is and referenced inside
puppet_config, but feedback is welcome on whether this is necessary or
desirable.
Change-Id: I4a4d7a6fd2735cb841174af305dbb62e0b3d3e8c
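The puppet_config dict bundles what docker-puppet.py needs per config_volume; a sketch with illustrative resource, parameter and tag names:

    puppet_config:
      config_volume: glance_api
      puppet_tags: glance_api_config
      step_config: {get_attr: [GlanceApiPuppetBase, role_data, step_config]}
      config_image: {get_param: DockerGlanceConfigImage}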
|
|
This change allows docker-puppet.py data to be in a dict as well as a
list, so that docker_puppet_tasks data can use the same keys as the
top level puppet config data.
If the yaql fu can be worked out to build the top level data,
docker-puppet.py can later drop the list format entirely.
Change-Id: I7e2294c6c898d2340421c93516296ccf120aa6d2
|
|
Co-Authored-By: Dan Prince <dprince@redhat.com>
Co-Authored-By: Martin André <m.andre@redhat.com>
Change-Id: If0ee671acbf6a9931622003a859089d61e2050b3
|