|
|
|
The containerized version of the mongodb service omits the
metadata_settings definition [1], which confuses certmonger when
internal TLS is enabled and makes the generation of certificates fail.
Use the right setting from the non-containerized profile.
[1] https://review.openstack.org/#/c/461780/
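For reference, a minimal sketch of the kind of wiring this points to,
assuming the containerized template exposes the base profile as a
MongodbPuppetBase resource (the resource name is an assumption):
  outputs:
    role_data:
      value:
        # reuse the metadata_settings generated by the non-containerized
        # profile so certmonger knows which hostnames/networks to use
        metadata_settings:
          get_attr: [MongodbPuppetBase, role_data, metadata_settings]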
Change-Id: I50a9a3a822ba5ef5d2657a12c359b51b7a3a42f2
Closes-Bug: #1709553
|
|
This bind mounts the necessary files for the mongodb container to serve
TLS in the internal network.
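For illustration only, the bind mounts would look roughly like this; the
certificate/key paths are assumptions, not taken from this change:
  docker_config:
    step_2:
      mongodb:
        volumes:
          - /etc/pki/tls/certs/mongodb.pem:/etc/pki/tls/certs/mongodb.pem:ro
          - /etc/pki/tls/private/mongodb.key:/etc/pki/tls/private/mongodb.key:ro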
bp tls-via-certmonger-containers
Change-Id: Ieef2a456a397f7d5df368ddd5003273cb0bb7259
Co-Authored-By: Damien Ciabrini <dciabrin@redhat.com>
|
|
Services that access the database have to read an extra MySQL configuration file
/etc/my.cnf.d/tripleo.cnf which holds client-only settings, like client bind
address and SSL configuration. The configuration file is thus used by
containerized services, but also by non-containerized services that still
run on the host.
In order to generate that client configuration file appropriately both on the
host and for containers, 1) the MySQLClient service must be included by the
role; 2) every containerized service which uses the database must include the
mysql::client profile in the docker-puppet config generation step.
By including the mysql::client profile in each containerized service, we ensure
that any change in the configuration file will be reflected in the service's
/var/lib/config-data/{service}, and that paunch will restart the service's
container automatically.
We now only rely on MySQLClient from puppet/services, to make it possible to
generate /etc/my.cnf.d/tripleo.cnf on the host, and to set the hiera keys that
drive the generation of that config file in containers via docker-puppet.
We include a new YAML validation step to ensure that any service which depends
on MySQL will initialize the mysql::client profile during the docker-puppet
step.
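As a rough sketch (the nova service and resource names are only an
example), a containerized service appends the client profile to its
docker-puppet step_config along these lines:
  puppet_config:
    config_volume: nova
    puppet_tags: nova_config
    step_config:
      list_join:
        - "\n"
        - - {get_attr: [NovaApiBase, role_data, step_config]}
          # regenerate /etc/my.cnf.d/tripleo.cnf in the service's
          # config-data volume whenever client settings change
          - "include ::tripleo::profile::base::database::mysql::client"
    config_image: {get_param: DockerNovaConfigImage}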
Change-Id: I0dab1dc9caef1e749f1c42cfefeba179caebc8d7
|
|
|
|
This removes the default container names from all the templates
and uses a single environment file to specify the full container
name and registry from which to pull. Also does away with most
of DockerNamespace.
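The environment file would then carry entries along these lines
(registry address and tag are made up for the example):
  parameter_defaults:
    DockerNovaApiImage: 192.168.24.1:8787/tripleoupstream/centos-binary-nova-api:latest
    DockerNovaConfigImage: 192.168.24.1:8787/tripleoupstream/centos-binary-nova-api:latest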
Change-Id: Ieaedac33f0a25a352ab432cdb00b5c888be4ba27
Depends-On: Ibc108871ebc2beb1baae437105b2da1d0123ba60
Co-Authored-By: Dan Prince <dprince@redhat.com>
Co-Authored-By: Steve Baker <sbaker@redhat.com>
|
|
Makes it possible to resolve network subnets within a service
template; the data is transported into a new property ServiceData
wired into every service, which hopefully is generic enough to
be extended in the future to transport more data.
Data can be consumed in service templates to set config values
which need to know the subnet on which a daemon operates (for
example the Ceph Public vs Cluster network).
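A hedged sketch: every service template grows a ServiceData json
parameter, and config settings can then look up subnet data from it
(the net_cidr_map key is an assumption used for illustration):
  parameters:
    ServiceData:
      default: {}
      description: Dictionary packing service data
      type: json

  # later, under role_data config_settings:
  ceph::profile::params::cluster_network:
    get_param: [ServiceData, net_cidr_map, storage_mgmt]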
Change-Id: I28e21c46f1ef609517175f7e7ee19e28d1c0cba2
|
|
This solves a problem with bind-mounts when the containers are holding
file descriptors open.
At the same time this makes the template more robust to puppet changes
since new config files will be available in the containers without
needing to update the templates.
Partial-Bug: #1698323
Change-Id: Ia4ad6d77387e3dc354cd131c2f9756939fb8f736
|
|
This commit consistently defines a heat template parameter in the form
of DockerXXXConfigImage where XXX represents the name of the
config_volume that is used by docker-puppet.
The goal is to mitigate hard-to-debug errors where the templates would
set different defaults for the image docker-puppet.py uses to run, for
the same config_volume name.
This fixes a couple of inconsistencies on the way.
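In other words, every template that generates the 'keystone'
config_volume (picked purely as an example) declares the same parameter:
  parameters:
    DockerKeystoneConfigImage:
      description: image used by docker-puppet to generate the 'keystone' config_volume
      type: string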
Change-Id: I212020a76622a03521385a6cae4ce73e51ce5b6b
Closes-Bug: #1699791
|
|
|
|
On many occasions we had log directory initialization containers
without `detach: false`, which didn't guarantee that they would finish
before the containers depending on them started using the log
directory.
This is now fixed by moving the initialization container one global
step earlier, so that we can keep the concurrency when creating the
log dirs. (Using `detach: false` makes paunch handle just one
container at a time, and as such it can have negative performance
impact.)
For services which have their container(s) starting in step_1,
initialization cannot be moved to an earlier step, so the solution
here was to just add `detach: false`.
As a minor related change, cinder DB sync container now mounts the log
directory from host to put cinder-manage.log into the expected
location.
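Roughly, for a service whose containers start in step_4, the pattern
becomes something like this (heat parameter and service names are
illustrative):
  docker_config:
    # one global step earlier than the service's own containers, so the
    # log dir init containers can still run concurrently across services
    step_3:
      heat_api_init_log:
        image: {get_param: DockerHeatApiImage}
        user: root
        volumes:
          - /var/log/containers/heat:/var/log/heat
        command: ['/bin/bash', '-c', 'chown -R heat:heat /var/log/heat']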
Change-Id: I1340de4f68dd32c2412d9385cf3a8ca202b48556
|
|
This service generates the /etc/my.cnf.d/tripleo.cnf file which is
being used to configure MySQL clients (e.g. client bind address,
client SSL configuration...)
We generate the config file in this service and let containerized MySQL clients
mount /var/lib/config-data/mysql_client/etc/my.cnf.d/tripleo.cnf in their
own container. This way, when this MySQLClient service is updated, the other
containers will automatically pick up the updated configuration at the next restart.
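A containerized client then picks the file up with a read-only bind
mount along these lines:
  volumes:
    - /var/lib/config-data/mysql_client/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro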
Partial-Bug: #1692317
Change-Id: Idc56d27fb9645ad3b07df8ef08b7e2ce29e6d499
|
|
This change modifies these mounts to be more specific mounts based on
the files which puppet actually modifies.
The result is something a bit more self-documenting, and allows for
trying other techniques for populating /etc other than directly mounting
config-data directories.
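That is, instead of mounting a whole config-data directory over /etc,
the template lists only the files puppet actually writes, for example
(paths illustrative):
  volumes:
    - /var/lib/config-data/keystone/etc/keystone/keystone.conf:/etc/keystone/keystone.conf:ro
    - /var/lib/config-data/keystone/etc/httpd/conf.d/10-keystone_wsgi_main.conf:/etc/httpd/conf.d/10-keystone_wsgi_main.conf:ro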
Change-Id: Ied1eab99d43afcd34c00af25b7e36e7e55ff88e6
|
|
This got missed in the patch which added host logging for most
other services.
Change-Id: I0be8a5bce6558ebaf5b4830138d1f6c31aec6394
|
|
Master is now the development branch for Pike, so change the
release alias name accordingly.
Change-Id: I938e4a983e361aefcaa0bd9a4226c296c5823127
|
|
This was forgotten in I72376a803ec6b2ed93903cc0c95a6ffce718b6dc and
broke containerized deployment.
Change-Id: I599a87bf06efbfefd3067c77ed6ca866505900f9
Closes-Bug: #1690870
|
|
When a service is enabled on multiple roles, the parameters for the
service will be global. This change enables an option to provide
role specific parameter to services and other templates.
Two new parameters, RoleName and RoleParameters, are added to the
service template. RoleName provides the name of the role on which the
current instance of the service is being applied. RoleParameters
provides the list of parameters which are configured specifically for
that role in the environment file, like below:
  parameter_defaults:
    # Default value applied to all roles
    NovaReservedHostMemory: 2048
    ComputeDpdkParameters:
      # Applied only to the ComputeDpdk role
      NovaReservedHostMemory: 4096
In the above sample, the cluster contains two roles - Compute and
ComputeDpdk. The values of ComputeDpdkParameters will be passed on to
the templates as RoleParameters while creating the stack for the
ComputeDpdk role. A parameter which supports role-specific
configuration should be looked up first in the RoleParameters list;
if not found there, the default (for all roles) should be used.
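On the template side this boils down to two extra parameters accepted
by every service template, roughly:
  parameters:
    RoleName:
      default: ''
      description: Role name on which the service is applied
      type: string
    RoleParameters:
      default: {}
      description: Parameters specific to the role
      type: json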
Implements: blueprint tripleo-derive-parameters
Change-Id: I72376a803ec6b2ed93903cc0c95a6ffce718b6dc
|
|
Some containers are using the logs named volume for collecting logs
written to `/var/log`. We should make this consistent for all the
containers.
This patch also cleans up some mounts that weren't needed for some
services. For example, glance-api doesn't need `/run` to be mounted.
Other changes:
* Rework log volumes to hostpath mounts to avoid slow COW writes.
* Add kolla_config permissions and host_prep_tasks to create and
  manage the permissions of hostpath-mounted log dirs.
* Rework data-owning init containers to kolla_config permissions.
* When a step wants KOLLA_BOOTSTRAP or DB sync, use log data-owning
  init containers to set permissions for logs. This is required
  because kolla bootstrap and DB sync run before the kolla config
  stage and no permissions have been set for logs yet.
* In order to address hybrid cases where host services and
  containerized ones access logs with different UIDs, persist
  containerized services' logs into separate directories (an upgrade
  impact).
* Ensure host prep tasks create the /var/log/containers/ and /var/lib/
  sub-directories for services.
* Fix missing /etc/httpd and /var/www config-data mounts for zaqar/ironic.
* Fix YAML indentation and drop string quotation.
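A condensed sketch of the resulting pattern for a single service
(glance-api picked arbitrarily, details approximate):
  kolla_config:
    /var/lib/kolla/config_files/glance_api.json:
      command: /usr/bin/glance-api
      permissions:
        - path: /var/log/glance
          owner: glance:glance
          recurse: true
  docker_config:
    step_4:
      glance_api:
        image: {get_param: DockerGlanceApiImage}
        volumes:
          # hostpath mount instead of a named 'logs' volume
          - /var/log/containers/glance:/var/log/glance
  host_prep_tasks:
    - name: create the persistent glance log directory on the host
      file:
        path: /var/log/containers/glance
        state: directory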
Co-authored-by: Bogdan Dobrelya <bdobreli@redhat.com>
Partial blueprint containerized-services-logs
Change-Id: I53e737120bf0121bd28667f355b6f29f1b2a6b82
|
|
The puppet-redis module makes use of the exec puppet tag to copy the
/etc/redis.conf.puppet file to /etc/redis.conf. We need to explicitly
enable it, otherwise our redis container will pick up the default redis
configuration and not the one that was generated with puppet.
Also creates the /var/run/redis directory on the host since we bind
mount /run, and ensure the container sets the correct ownership on the
directory.
Finally, configure redis not to daemonize, otherwise the container ends
up in a restart loop.
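Putting those three points together, the relevant template bits would
look something like this sketch (the hiera key and image parameter
names are assumptions):
  puppet_config:
    config_volume: redis
    # allow the exec that copies /etc/redis.conf.puppet to /etc/redis.conf
    puppet_tags: 'exec'
    step_config: "include ::tripleo::profile::base::database::redis"
    config_image: {get_param: DockerRedisConfigImage}
  config_settings:
    # keep redis in the foreground so the container doesn't exit
    redis::daemonize: false
  host_prep_tasks:
    - name: create /var/run/redis on the host since /run is bind mounted
      file:
        path: /var/run/redis
        state: directory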
Change-Id: Ia1dce2120ca7479eef8bc77dedf9431adbe210cc
Closes-Bug: #1686707
|
|
Closes-bug: #1668919
Change-Id: Ie750caa34c6fa22ca6eae6834b9ca20e15d97f7f
|
|
Kolla provides a way to set ownership of files and directories inside the
containers. Use it instead of running an additional container to do the
job.
Change-Id: I554faf7c797f3997dd3ca854da032437acecf490
|
|
Simplify the config of the containerized services by bind mounting in
the configurations instead of specifying them all in kolla config.
This change is useful to limit the side effects of generating the
config files and running the container in two separate steps, as config
directories are now bind-mounted inside the container instead of having
files copied into the container. We've seen examples of Apache's
mod_ssl configuration file present in the container preventing it from
starting when puppet configured apache not to load the ssl module (in
case TLS is disabled).
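In practice this means mounting the generated config directories
read-only rather than listing every file in kolla's config_files, e.g.
(service picked for illustration):
  volumes:
    - /var/lib/config-data/heat_api/etc/heat/:/etc/heat/:ro
    - /var/lib/config-data/heat_api/etc/httpd/:/etc/httpd/:ro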
Co-Authored-By: Ian Main <imain@redhat.com>
Change-Id: I4ec5dd8b360faea71a044894a61790997f54d48a
|
|
Also add upgrade_tasks to disable the corresponding host
services in order to avoid data races with the containers.
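The upgrade_tasks in question are small ansible snippets of roughly
this shape (service name and step tag are illustrative):
  upgrade_tasks:
    - name: Stop and disable the nova-compute service on the host
      tags: step2
      service:
        name: openstack-nova-compute
        state: stopped
        enabled: no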
Change-Id: I19c16aaa3e5a73436ca7aa7d06facf64feee2327
Signed-off-by: Bogdan Dobrelya <bdobreli@redhat.com>
|
|
|
|
We used a named Docker volume for MongoDB storage, which meant that when
moving from bare metal to containerized, we lost data and reinitialized
the storage from scratch.
With this commit we keep the data by mounting the original data into the
container. We also need to make sure that file ownership is correct
according to the uid/gid used within the MongoDB container image.
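Sketched out, that is a hostpath mount plus an ownership fix-up; the
uid/gid (mongodb:mongodb) is an assumption about the image, not taken
from this change:
  docker_config:
    step_2:
      mongodb:
        volumes:
          # reuse the pre-existing bare metal data instead of a named volume
          - /var/lib/mongodb:/var/lib/mongodb
  host_prep_tasks:
    - name: make the data directory match the uid/gid used in the image
      file:
        path: /var/lib/mongodb
        owner: mongodb
        group: mongodb
        recurse: true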
Change-Id: I86ef2cb37a068b767462d6d50fe451389b7cbb58
|
|
We used a named Docker volume for MariaDB storage, which meant that when
moving from bare metal to containerized MariaDB, we lost data and
reinitialized the storage from scratch.
With this commit we keep the data by mounting the original data into the
container.
We also need to make sure that file ownership is correct according to
the MariaDB container image used, and that Kolla bootstrap mechanisms
aren't retriggered, as they aren't idempotent.
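The non-idempotent bootstrap is typically guarded by checking whether
the data directory is already populated, something like this sketch:
  mysql_bootstrap:
    detach: false
    image: {get_param: DockerMysqlImage}
    volumes:
      - /var/lib/mysql:/var/lib/mysql
    # only run kolla's bootstrap when the datadir has not been initialized yet
    command: ['bash', '-c', 'test -e /var/lib/mysql/mysql || kolla_start']
    environment:
      - KOLLA_BOOTSTRAP=True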
Change-Id: I1fc955021c6dd83f1a366495dd8c7281fb9e7cc5
|
|
Use yaml anchors wherever possible for image definition and drop unused
anchors.
Rename parameters to Docker*ConfigImage to clarify that an image is
specifically used to generate configuration files.
Change-Id: I388bd59de7f1d36a3a881fbb723ba5bcba09e637
|
|
We don't use docker_image for anything. It is a remnant of the
pre-composable docker templates and we can now remove it.
This patch removes references to the 'docker_image' section
from docker/post.yaml and all of the docker/services* templates.
Change-Id: I208c1ef1550ab39ab0ee47ab282f9b1937379810
|
|
This aligns the docker based services with the new composable upgrades
architecture we landed for ocata, and does a first pass at adding
upgrade_tasks for the services (these may change; at the moment we only
disable the service on the host).
To run the upgrade workflow you basically do two steps:
  openstack overcloud deploy --templates \
    -e environments/major-upgrade-composable-steps-docker.yaml
This will run the ansible upgrade steps we define via upgrade_tasks,
then run the normal docker PostDeploySteps to bring up the containers.
For the puppet workflow there's then an operator driven step where
compute nodes (and potentially storage nodes) are upgraded in batches,
and finally you do:
  openstack overcloud deploy --templates \
    -e environments/major-upgrade-converge-docker.yaml
In the puppet case this re-applies puppet to unpin the nova RPC API,
so I guess it'll restart the nova containers this affects, but otherwise
it will be a no-op (we also disable the ansible steps at this point).
Depends-On: I9057d47eea15c8ba92ca34717b6b5965d4425ab1
Change-Id: Ia50169819cb959025866348b11337728f8ed5c9e
|
|
This approach removes the need for the yaql zip to build the
docker-puppet data by building the data in a puppet_config dict.
This allows a future change to make docker-puppet.py only accept dict
data.
Currently the step_config is left where it is and referenced inside
puppet_config, but feedback is welcome on whether this is necessary or
desirable.
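The per-service data then looks roughly like this, and docker-puppet
consumes the dict directly instead of a yaql-zipped list (names
illustrative):
  puppet_config:
    config_volume: heat_api
    puppet_tags: heat_config
    step_config: {get_attr: [HeatApiBase, role_data, step_config]}
    config_image: {get_param: DockerHeatConfigImage}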
Change-Id: I4a4d7a6fd2735cb841174af305dbb62e0b3d3e8c
|
|
This change gives docker-puppet.py the option of accepting its data as
a dict as well as a list. This allows docker_puppet_tasks data to use the
same keys as the top level puppet config data.
If the yaql fu can be worked out to build the top level data,
docker-puppet.py can later drop the list format entirely.
Change-Id: I7e2294c6c898d2340421c93516296ccf120aa6d2
|
|
|
|
Co-Authored-By: Dan Prince <dprince@redhat.com>
Co-Authored-By: Martin André <m.andre@redhat.com>
Change-Id: If0ee671acbf6a9931622003a859089d61e2050b3
|
|
Change-Id: Ic3fd3bfd76d31ba515dbabdda7dfd06b9833a2ca
|