Age | Commit message | Author | Files | Lines |
|
|
|
The cron containers need to run as root in order to create PID files
correctly.
Additionally, the keystone_cron container was misconfigured to
use /usr/bin/cron instead of the correct /usr/bin/crond.
We also have an issue where the Kolla keystone container has
hard-coded ARGS for the docker container, which causes -DFOREGROUND
(an Apache-specific argument) to be appended to the kolla_start
command, making crond fail to start correctly. This
works around the issue by overriding the command and calling
kolla_set_configs manually. Once we fix this in Kolla we can
revisit this.
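As a rough sketch only (the container name, image parameter and paths are
assumptions, not the exact template contents), the workaround overrides the
command along these lines:
  keystone_cron:
    image: {get_param: DockerKeystoneImage}
    user: root
    command: ['/bin/bash', '-c', 'kolla_set_configs && crond -n']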
Change-Id: Ib8fb2bef9a3bb89131265051e9ea304525b58374
Related-bug: 1707785
|
|
Services that access the database have to read an extra MySQL configuration file
/etc/my.cnf.d/tripleo.cnf which holds client-only settings, like client bind
address and SSL configuration. The configuration file is thus used by
containerized services, but also by non-containerized services that still
run on the host.
In order to generate that client configuration file appropriately both on the
host and for containers, 1) the MySQLClient service must be included by the
role; 2) every containerized service which uses the database must include the
mysql::client profile in the docker-puppet config generation step.
By including the mysql::client profile in each containerized service, we ensure
that any change to the configuration file will be reflected in the service's
/var/lib/config-data/{service}, and that paunch will restart the service's
container automatically.
We now only rely on MySQLClient from puppet/services, to make it possible to
generate /etc/my.cnf.d/tripleo.cnf on the host, and to set the hiera keys that
drive the generation of that config file in containers via docker-puppet.
We include a new YAML validation step to ensure that any service which depends
on MySQL will initialize the mysql::client profile during the docker-puppet
step.
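As an illustrative sketch (keystone is used as an example; the exact resource
names are assumptions), a containerized service pulls the profile into its
docker-puppet step_config roughly like this:
  puppet_config:
    config_volume: keystone
    step_config:
      list_join:
        - "\n"
        - - "include ::tripleo::profile::base::database::mysql::client"
          - {get_attr: [KeystoneBase, role_data, step_config]}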
Change-Id: I0dab1dc9caef1e749f1c42cfefeba179caebc8d7
|
|
The token-flush cron job is created in /var/spool/cron/keystone
by puppet. This patch creates a cron container to run that job
in an environment where it has access to keystone.conf
and the keystone-manage binaries.
Change-Id: Ie305ee9990657c66938250d1d6e19fef94675997
Partial-bug: 1701254
|
|
|
|
This removes the default container names from all the templates
and uses a single environment file to specify the full container
name and registry from which to pull. Also does away with most
of DockerNamespace.
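For illustration only (parameter names and registry are hypothetical here),
the environment file sets fully qualified image names per service:
  parameter_defaults:
    DockerKeystoneImage: 192.168.24.1:8787/tripleoupstream/centos-binary-keystone:latest
    DockerNovaApiImage: 192.168.24.1:8787/tripleoupstream/centos-binary-nova-api:latest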
Change-Id: Ieaedac33f0a25a352ab432cdb00b5c888be4ba27
Depends-On: Ibc108871ebc2beb1baae437105b2da1d0123ba60
Co-Authored-By: Dan Prince <dprince@redhat.com>
Co-Authored-By: Steve Baker <sbaker@redhat.com>
|
|
Makes it possible to resolve network subnets within a service
template; the data is transported into a new property ServiceData
wired into every service which hopefully is generic enough to
be extended in the future and transport more data.
Data can be consumed in service templates to set config values
which need to know the subnet on which a daemon operates (for
example the Ceph public vs. cluster network).
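For example, a service template can look up a subnet CIDR from the new
property; this is a sketch and the exact map keys are assumptions:
  config_settings:
    ceph::profile::params::public_network: {get_param: [ServiceData, net_cidr_map, storage]}
    ceph::profile::params::cluster_network: {get_param: [ServiceData, net_cidr_map, storage_mgmt]}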
Change-Id: I28e21c46f1ef609517175f7e7ee19e28d1c0cba2
|
|
This solves a problem with bind-mounts when the containers are holding
file descriptors open.
At the same time this makes the template more robust to puppet changes
since new config files will be available in the containers without
needing to update the templates.
Partial-Bug: #1698323
Change-Id: Ia4ad6d77387e3dc354cd131c2f9756939fb8f736
|
|
This is necessary for accessing the bind mounted hieradata in the
container in order to determine if the node is the primary node.
With the new validation added to yaml-validate.py, we could spot
potential issues in sahara-api and keystone bootstrap tasks.
The keystone one is a false positive, as the image defaults to the root
user in order to be able to run apache. Still, it is better to be
consistent here and specify the root user nonetheless.
Change-Id: Ib0ff9748d5406f507261e506c19b96750b10e846
Closes-Bug: #1697917
|
|
This commit consistently defines a heat template parameter in the form
of DockerXXXConfigImage where XXX represents the name of the
config_volume that is used by docker-puppet.
The goal is to mitigate hard-to-debug errors where the templates would
set different defaults for the image docker-puppet.py uses to run, for
the same config_volume name.
This fixes a couple of inconsistencies along the way.
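A sketch of the convention, with keystone as a hypothetical example:
  parameters:
    DockerKeystoneConfigImage:
      description: image used by docker-puppet for the 'keystone' config_volume
      type: string
  ...
  puppet_config:
    config_volume: keystone
    config_image: {get_param: DockerKeystoneConfigImage}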
Change-Id: I212020a76622a03521385a6cae4ce73e51ce5b6b
Closes-Bug: #1699791
|
|
On many occasions we had log directory initialization containers
without `detach: false`, which didn't guarantee that they would finish
before the containers depending on them started using the log
directory.
This is now fixed by moving the initialization container one global
step earlier, so that we can keep the concurrency when creating the
log dirs. (Using `detach: false` makes paunch handle just one
container at a time, and as such it can have negative performance
impact.)
For services which have their container(s) starting in step_1,
initialization cannot be moved to an earlier step, so the solution
here was to just add `detach: false`.
As a minor related change, cinder DB sync container now mounts the log
directory from host to put cinder-manage.log into the expected
location.
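For a step_1 service the fix looks roughly like the following (the container,
image parameter and command are illustrative, not the exact template contents):
  step_1:
    rabbitmq_init_logs:
      detach: false
      image: {get_param: DockerRabbitmqImage}
      user: root
      command: ['/bin/bash', '-c', 'chown -R rabbitmq:rabbitmq /var/log/rabbitmq']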
Change-Id: I1340de4f68dd32c2412d9385cf3a8ca202b48556
|
|
This change modifies these mounts to be more specific mounts based on
the files which puppet actually modifies.
The result is something a bit more self-documenting, and allows for
trying other techniques for populating /etc other than directly mounting
config-data directories.
Change-Id: Ied1eab99d43afcd34c00af25b7e36e7e55ff88e6
|
|
This patch guards db syncs and initialization code from executing
on multiple nodes at the same time by using the new
bootstrap_host_exec script. This helper script checks to make
sure the container is executing on the "bootstrap host" for the
specified service (arg 0) and, if it matches, runs the
specified command.
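A minimal sketch of how a db sync container uses the helper (service name,
image parameter and command are illustrative):
  cinder_api_db_sync:
    image: {get_param: DockerCinderApiImage}
    command: "/usr/bin/bootstrap_host_exec cinder_api cinder-manage db sync"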
Depends-On: If25f217bbb592edab4e1dde53ca99ed93c0e146c
Depends-On: Ic1585bae27c318bd6bafc287e905f2ed250cce0f
Change-Id: I0c864ca093ea476248b619d8c88477ef0b64e2eb
Closes-Bug: 1688380
|
|
This was forgotten in I72376a803ec6b2ed93903cc0c95a6ffce718b6dc and
broke containerized deployment.
Change-Id: I599a87bf06efbfefd3067c77ed6ca866505900f9
Closes-Bug: #1690870
|
|
When a service is enabled on multiple roles, the parameters for the
service will be global. This change enables an option to provide
role specific parameter to services and other templates.
Two new parameters, RoleName and RoleParameters, are added to the
service template. RoleName provides the name of the role on which the
current instance of the service is being applied. RoleParameters
provides the list of parameters which are configured specific to the
role in the environment file, like below:
parameter_defaults:
  # Default value applied to all roles
  NovaReservedHostMemory: 2048
  ComputeDpdkParameters:
    # Applied only to the ComputeDpdk role
    NovaReservedHostMemory: 4096
In the above sample, the cluster contains two roles - Compute and
ComputeDpdk. The values of ComputeDpdkParameters will be passed to the
templates as RoleParameters when creating the stack for the ComputeDpdk
role. A parameter which supports role-specific configuration should be
looked up first in the RoleParameters list; if not found there, the
default (for all roles) should be used.
Implements: blueprint tripleo-derive-parameters
Change-Id: I72376a803ec6b2ed93903cc0c95a6ffce718b6dc
|
|
Some containers are using the logs named volume for collecting logs
written to `/var/log`. We should make this consistent for all the
containers.
This patch also cleans up some mounts that weren't needed for some
services. For example, glance-api doesn't need `/run` to be mounted.
Other changes:
* Rework log volumes to hostpath mounts to avoid slow COW writes.
* Use kolla_config permissions and host_prep_tasks to create
  hostpath-mounted log dirs and manage their permissions.
* Rework data-owning init containers to use kolla_config permissions.
* When a step wants KOLLA_BOOTSTRAP or a DB sync, use log data-owning
  init containers to set permissions for logs. This is required
  because kolla bootstrap and DB sync run before the kolla config
  stage, when no permissions have been set for logs yet.
* In order to address hybrid cases where host services and containerized
  ones access logs with different UIDs, persist containerized
  services' logs into separate directories (an upgrade impact).
* Ensure host prep tasks create /var/log/containers/ and /var/lib/
  sub-directories for services.
* Fix missing /etc/httpd, /var/www config-data mounts for zaqar/ironic
* Fix YAML indentation and drop strings quotation.
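A sketch of the host prep task pattern (the service path is just an example):
  host_prep_tasks:
    - name: create persistent logs directory
      file:
        path: /var/log/containers/keystone
        state: directory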
Co-authored-by: Bogdan Dobrelya <bdobreli@redhat.com>
Partial blueprint containerized-services-logs
Change-Id: I53e737120bf0121bd28667f355b6f29f1b2a6b82
|
|
list_concat was introduced recently and is able to replace the yaql
calls for concatenating lists.
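As a sketch with hypothetical resource and volume names, the heat function
replaces yaql expressions such as:
  yaql:
    expression: $.data.common + $.data.service
    data:
      common: {get_attr: [ContainersCommon, volumes]}
      service: [/var/lib/keystone:/var/lib/keystone]
with the simpler:
  list_concat:
    - {get_attr: [ContainersCommon, volumes]}
    - [/var/lib/keystone:/var/lib/keystone]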
Change-Id: Id3a80a0e1e4c25b6d838898757c69ec99d0cd826
|
|
This enables common resources that the docker templates might need.
The only initial resource is common volumes, and two volumes are
introduced (localtime and hosts).
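A sketch of what the shared volumes output looks like (assumed, not verbatim):
  outputs:
    volumes:
      description: Common volumes for the docker containers
      value:
        - /etc/hosts:/etc/hosts:ro
        - /etc/localtime:/etc/localtime:ro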
Change-Id: Ic55af32803f9493a61f9b57aff849bfc6187d992
|
|
This is only done when TLS-everywhere is enabled, and depends on those
directories being exclusive to services that run over httpd, which is
what the commit this one is based on provides.
Also, an environment file was added that's similar to
environments/docker.yaml. The difference is that this one will contain
the services that can run containerized with TLS-everywhere. This file
will be updated as more services get support for this.
bp tls-via-certmonger-containers
Change-Id: I87bf59f2c33de6cf2d4ce0679a5e0e22bc24bf78
|
|
Simplify the config of the containerized services by bind mounting in
the configurations instead of specifying them all in kolla config.
This change is useful to limit the side effects of generating the
config files and running the container in two separate steps, as config
directories are now bind-mounted inside the container instead of having
files copied into the container. We've seen examples of Apache's
mod_ssl configuration file being present in the container and preventing
it from starting when puppet configured Apache not to load the ssl
module (when TLS is disabled).
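Roughly, instead of listing every file in kolla_config, the container mounts
the generated config directories read-only (paths here are illustrative):
  volumes:
    - /var/lib/config-data/keystone/etc/keystone/:/etc/keystone/:ro
    - /var/lib/config-data/keystone/etc/httpd/:/etc/httpd/:ro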
Co-Authored-By: Ian Main <imain@redhat.com>
Change-Id: I4ec5dd8b360faea71a044894a61790997f54d48a
|
|
Simplify the config of the keystone service by mounting in the
configurations instead of specifying them all in kolla config.
This change is useful to limit the side effects of generating the
config files and running the container in two separate steps, as config
directories are now bind-mounted inside the container instead of having
files copied into the container. We've seen examples of Apache's
mod_ssl configuration file being present in the container and preventing
it from starting when puppet configured Apache not to load the ssl
module (when TLS is disabled).
Co-Authored-By: Martin André <m.andre@redhat.com>
Change-Id: Ie33ffc7c2b1acf3e4e505d38efb104bf013f2ce6
|
|
Previously only the first two initial fernet keys were mounted into the
container. This is not practical, however, as key rotation will
generate more entries in this repository. So instead we mount the whole
directory, which allows us to do rotation on the base host and
seamlessly affect the container as well.
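The mount switches from individual key files to the repository directory,
roughly as follows (paths are assumed for illustration):
  volumes:
    - /var/lib/config-data/keystone/etc/keystone/fernet-keys:/etc/keystone/fernet-keys:ro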
Change-Id: I7763a09e57fe6a7867ffd079ab0b9222374c38c8
|
|
A previous commit [1] added support for fernet in the keystone docker
service; however, this was not set as the default token provider. This
patch makes it the default.
[1] Id92039b3bad9ecda169323e01de7bebae70f2ba0
Change-Id: Ib44ab61eba0be8ba54bc7d0bdb22437d769cb960
|
|
|
|
This is used for the TLS-everywhere bits. It will be taken into account
by a metadata hook that outputs relevant entries for the nova-metadata
service; and subsequently kerberos principals will be created from
these.
Subsequent patches will add support for TLS in the internal network for
the containerized keystone.
Change-Id: Ic747ad9c8d6e76c8c16e347c1cdcabc899dd9f9a
|
|
Since the 'file' resource is included in the tags that puppet takes into
account, we already generate the fernet keys if fernet is enabled as the
token provider.
This merely adds the keys to the container. However, if fernet is not
the provider, we make this file addition optional.
Change-Id: Id92039b3bad9ecda169323e01de7bebae70f2ba0
|
|
Use yaml anchors wherever possible for image definition and drop unused
anchors.
Rename parameters to Docker*ConfigImage to clarify that an image is
specifically used to generate configuration files.
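A sketch of the anchor pattern (container and parameter names are illustrative):
  step_3:
    keystone:
      image: &keystone_image {get_param: DockerKeystoneImage}
  step_4:
    keystone_cron:
      image: *keystone_image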
Change-Id: I388bd59de7f1d36a3a881fbb723ba5bcba09e637
|
|
We don't use docker_image for anything. It is a remnant of the
pre-composable docker templates and we can now remove it.
This patch removes references to the 'docker_image' section
from docker/post.yaml and all of the docker/services* templates.
Change-Id: I208c1ef1550ab39ab0ee47ab282f9b1937379810
|
|
This aligns the docker-based services with the new composable upgrades
architecture we landed for ocata, and does a first pass at adding
upgrade_tasks for the services (these may change; at the moment we only
disable the service on the host).
To run the upgrade workflow you basically do two steps:
  openstack overcloud deploy --templates \
    -e environments/major-upgrade-composable-steps-docker.yaml
This will run the ansible upgrade steps we define via upgrade_tasks,
then run the normal docker PostDeploySteps to bring up the containers.
For the puppet workflow there's then an operator-driven step where
compute nodes (and potentially storage nodes) are upgraded in batches,
and finally you do:
  openstack overcloud deploy --templates \
    -e environments/major-upgrade-converge-docker.yaml
In the puppet case this re-applies puppet to unpin the nova RPC API,
so I guess it'll restart the nova containers this affects but otherwise
will be a no-op (we also disable the ansible steps at this point).
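A sketch of what an upgrade_tasks entry looks like at this point (only the
host service is disabled; the service name here is an example):
  upgrade_tasks:
    - name: Stop and disable keystone service (running under httpd)
      tags: step2
      service: name=httpd state=stopped enabled=no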
Depends-On: I9057d47eea15c8ba92ca34717b6b5965d4425ab1
Change-Id: Ia50169819cb959025866348b11337728f8ed5c9e
|
|
|
|
In cases where /var/log/httpd already exists, this exits with error
code 1.
$ sudo docker logs keystone-init-log
mkdir: cannot create directory '/var/log/httpd': File exists
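The fix is to make the directory creation idempotent, roughly (the exact
command is a sketch):
  command: ['/bin/bash', '-c', 'mkdir -p /var/log/httpd']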
Change-Id: I62bf08d9fc9e02d5f3016bd14bb0a090b76ac837
|
|
This approach removes the need for the yaql zip to build the
docker-puppet data by building the data in a puppet_config dict.
This allows a future change to make docker-puppet.py only accept dict
data.
Currently the step_config is left where it is and referenced inside
puppet_config, but feedback is welcome on whether this is necessary or
desirable.
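A sketch of the resulting per-service dict (keys as described above; the
values shown are illustrative):
  puppet_config:
    config_volume: keystone
    puppet_tags: keystone_config
    step_config: {get_attr: [KeystoneBase, role_data, step_config]}
    config_image: {get_param: DockerKeystoneImage}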
Change-Id: I4a4d7a6fd2735cb841174af305dbb62e0b3d3e8c
|
|
This change allows the docker-puppet.py data to be in a dict
as well as a list. This allows docker_puppet_tasks data to use the
same keys as the top level puppet config data.
If the yaql fu can be worked out to build the top level data,
docker-puppet.py can later drop the list format entirely.
Change-Id: I7e2294c6c898d2340421c93516296ccf120aa6d2
|
|
This patch sets the step correctly for docker_puppet_tasks.
This is now required in order to match the 'step' in some
puppet manifests explicitly so that things like keystone
initialization run correctly.
Closes-bug: #1667454
Change-Id: If2bdd0b1051125674f116f895832b48723d82b3a
|
|
Depends-On: Icabdb30369c8ca15e77d169dc441bee8cfd3631f
Change-Id: Icec07f75f81953c4bf81ca21b4b02bc02e157562
Co-Authored-By: Martin André <m.andre@redhat.com>
Co-Authored-By: Ian Main <imain@redhat.com>
|