Various containerized services (e.g. nova, neutron, heat) run initial setup
steps in ephemeral containers that don't use kolla_start. In that case the
tripleo.cnf file is not copied into /etc/my.cnf.d, which can break some
deployments (e.g. with internal TLS, services lack their SSL settings).
Fix the configuration of transient containers by bind mounting the
tripleo.cnf file when kolla_start is not used.
Change-Id: I5246f9d52fcf8c8af81de7a0dd8281169c971577
Closes-Bug: #1710127
Co-Authored-By: Juan Antonio Osorio Robles <jaosorior@redhat.com>
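A minimal sketch of what this looks like in a service's docker_config, using a
heat DB-sync style ephemeral container as an example (container name, image
parameter and host paths are illustrative of the pattern, not the exact change):
docker_config:
  step_3:
    heat_engine_db_sync:
      image: {get_param: DockerHeatEngineImage}
      net: host
      detach: false
      volumes:
        # expose the MySQL client settings (bind address, SSL) to a container
        # that bypasses kolla_start and therefore never copies tripleo.cnf
        - /etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro
        - /var/lib/config-data/heat/etc/heat/:/etc/heat/:ro
      command: "/usr/bin/heat-manage db_sync"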
With these two services running over httpd in the containers, we can now
enable TLS for them.
bp tls-via-certmonger-containers
Change-Id: Ib8fc37a391e3b32feef0ac6492492c0088866d21
The non-containerized version will run over httpd [1], and for the
containerized TLS work, the container version needs to run over httpd
as well.
[1] Iac35b7ddcd8a800901548c75ca8d5083ad17e4d3
bp tls-via-certmonger-containers
Depends-On: I1c5f13039414f17312f91a5e0fd02019aa08e00e
Change-Id: I2c39a2957fd95dd261b5b8c4df5e66e00a68d2f7
These are needed for the TLS everywhere bits.
Change-Id: I81fcf453fc1aaa2545e0ed24013f0f13b240a102
Services that access the database have to read an extra MySQL configuration file
/etc/my.cnf.d/tripleo.cnf which holds client-only settings, like client bind
address and SSL configuration. The configuration file is thus used by
containerized services, but also by non-containerized services that still
run on the host.
In order to generate that client configuration file appropriately both on the
host and for containers, 1) the MySQLClient service must be included by the
role; 2) every containerized service which uses the database must include the
mysql::client profile in the docker-puppet config generation step.
By including the mysql::client profile in each containerized service, we ensure
that any change in configuration file will be reflected in the service's
/var/lib/config-data/{service}, and that paunch will restart the service's
container automatically.
We now only rely on MySQLClient from puppet/services, to make it possible to
generate /etc/my.cnf.d/tripleo.cnf on the host, and to set the hiera keys that
drive the generation of that config file in containers via docker-puppet.
We include a new YAML validation step to ensure that any service which depends
on MySQL will initialize the mysql::client profile during the docker-puppet
step.
Change-Id: I0dab1dc9caef1e749f1c42cfefeba179caebc8d7
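For illustration, a containerized service can pull the client profile into its
config generation roughly like this (the base resource name and class path
follow the usual tripleo naming and may differ per release):
puppet_config:
  config_volume: nova
  puppet_tags: nova_config
  step_config:
    list_join:
      - "\n"
      - - {get_attr: [NovaApiBase, role_data, step_config]}
        # regenerates /etc/my.cnf.d/tripleo.cnf inside the nova config volume
        - 'include ::tripleo::profile::base::database::mysql::client'
  config_image: {get_param: DockerNovaConfigImage}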
This patch reworks the nova_api_cron container so that it contains
the /var/spool/cron crontab for the nova user as well as the
correct nova.conf file.
This should allow kolla_start to copy the correct config files
into place and then start the cron service to run the nova
tasks periodically.
Change-Id: Ib6b2ca5af5419130fb9c83f83d6f4bf97410e870
Related-bug: #1701254
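A rough sketch of the resulting kolla_config (source paths, ownership and the
crond flags are assumptions about the pattern, not the literal diff):
kolla_config:
  /var/lib/kolla/config_files/nova_api_cron.json:
    command: /usr/sbin/crond -s -n
    config_files:
      - dest: /etc/nova/nova.conf
        owner: nova
        perm: '0600'
        source: /var/lib/kolla/config_files/src/etc/nova/nova.conf
      - dest: /var/spool/cron/nova
        owner: nova
        perm: '0600'
        source: /var/lib/kolla/config_files/src/var/spool/cron/nova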
This removes the default container names from all the templates
and uses a single environment file to specify the full container
name and registry from which to pull. Also does away with most
of DockerNamespace.
Change-Id: Ieaedac33f0a25a352ab432cdb00b5c888be4ba27
Depends-On: Ibc108871ebc2beb1baae437105b2da1d0123ba60
Co-Authored-By: Dan Prince <dprince@redhat.com>
Co-Authored-By: Steve Baker <sbaker@redhat.com>
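The single environment file then carries fully qualified image references,
along these lines (registry, namespace and tag shown here are placeholders):
parameter_defaults:
  DockerNovaApiImage: 192.168.24.1:8787/tripleoupstream/centos-binary-nova-api:latest
  DockerNovaConfigImage: 192.168.24.1:8787/tripleoupstream/centos-binary-nova-api:latest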
Makes it possible to resolve network subnets within a service
template; the data is transported into a new property ServiceData
wired into every service, which is hopefully generic enough to
be extended in the future to transport more data.
The data can be consumed in service templates to set config values
that need to know the subnet where a daemon operates (for
example the Ceph public vs cluster network).
Change-Id: I28e21c46f1ef609517175f7e7ee19e28d1c0cba2
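Purely as an illustration of the idea, a service could consume it roughly like
this, assuming ServiceData carries a net_cidr_map keyed by network name (both
key names and the Ceph setting shown are assumptions, not the literal change):
parameters:
  ServiceData:
    default: {}
    description: Dictionary packing service data
    type: json
outputs:
  role_data:
    value:
      config_settings:
        # e.g. pin the Ceph cluster network to the storage_mgmt subnet CIDR
        ceph::profile::params::cluster_network:
          get_param: [ServiceData, net_cidr_map, storage_mgmt]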
This change enables the puppet cron resource in docker-puppet.py and adds user
crontabs to the paths copied from the config containers.
Only the nova crontab is configured for now. Other services will require
similar changes to run their crontabs.
Partial-Bug: 1701254
Change-Id: I2d1d0f0d77908a132472cf4bc475f8bd526af504
Depends-On: Ie16fb4539481a3c192cff8220a97daa4c70467fc
This solves a problem with bind-mounts when the containers are holding
file descriptors open.
At the same time this makes the template more robust to puppet changes
since new config files will be available in the containers without
needing to update the templates.
Partial-Bug: #1698323
Change-Id: Ia4ad6d77387e3dc354cd131c2f9756939fb8f736
This is needed for TLS everywhere.
Change-Id: Iac35b7ddcd8a800901548c75ca8d5083ad17e4d3
Depends-On: I426bfdb9e6c852eb32d10a12e521bb8b47701c41
This commit consistently defines a heat template parameter in the form
of DockerXXXConfigImage where XXX represents the name of the
config_volume that is used by docker-puppet.
The goal is to mitigate hard-to-debug errors where the templates would
set different defaults for the image docker-puppet.py uses to run for
the same config_volume name.
This fixes a couple of inconsistencies along the way.
Change-Id: I212020a76622a03521385a6cae4ce73e51ce5b6b
Closes-Bug: #1699791
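Illustrative shape of the convention for one config_volume (nova is just an
example):
parameters:
  DockerNovaConfigImage:
    description: image used by docker-puppet to generate the nova config_volume
    type: string
Every template that generates the nova config_volume then references the same
parameter, e.g. config_image: {get_param: DockerNovaConfigImage}, instead of
setting its own default.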
On many occasions we had log directory initialization containers
without `detach: false`, which didn't guarantee that they would finish
before the containers depending on them started using the log
directory.
This is now fixed by moving the initialization container one global
step earlier, so that we can keep the concurrency when creating the
log dirs. (Using `detach: false` makes paunch handle just one
container at a time, and as such it can have negative performance
impact.)
For services which have their container(s) starting in step_1,
initialization cannot be moved to an earlier step, so the solution
here was to just add `detach: false`.
As a minor related change, cinder DB sync container now mounts the log
directory from host to put cinder-manage.log into the expected
location.
Change-Id: I1340de4f68dd32c2412d9385cf3a8ca202b48556
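A minimal sketch of such a log-ownership init container, assuming a
nova-flavoured service whose main container starts in a later step (names,
paths and the step number are illustrative):
docker_config:
  step_1:
    nova_api_init_logs:
      image: {get_param: DockerNovaApiImage}
      privileged: false
      user: root
      volumes:
        - /var/log/containers/nova:/var/log/nova
      # running one global step earlier keeps paunch concurrency; services
      # that already start in step_1 add `detach: false` instead
      command: ['/bin/bash', '-c', 'chown -R nova:nova /var/log/nova']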
This patch guards db syncs and initialization code from executing
on multiple nodes at the same time by using the new
bootstrap_host_exec script. This helper script checks that the
container is executing on the "bootstrap host" for the specified
service (its first argument) and, if it matches, runs the
specified command.
Depends-On: If25f217bbb592edab4e1dde53ca99ed93c0e146c
Depends-On: Ic1585bae27c318bd6bafc287e905f2ed250cce0f
Change-Id: I0c864ca093ea476248b619d8c88477ef0b64e2eb
Closes-Bug: 1688380
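For instance, a DB sync container can be guarded roughly like this (the
nova_api example mirrors the pattern; the exact step and mounts vary per
service):
docker_config:
  step_3:
    nova_api_db_sync:
      image: {get_param: DockerNovaApiImage}
      net: host
      detach: false
      volumes:
        - /var/lib/config-data/nova/etc/nova/:/etc/nova/:ro
      # only the node elected as "bootstrap host" for nova_api runs the sync
      command: "/usr/bin/bootstrap_host_exec nova_api su nova -s /bin/bash -c '/usr/bin/nova-manage api_db sync'"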
This was forgotten in I72376a803ec6b2ed93903cc0c95a6ffce718b6dc and
broke containerized deployment.
Change-Id: I599a87bf06efbfefd3067c77ed6ca866505900f9
Closes-Bug: #1690870
When a service is enabled on multiple roles, the parameters for the
service are global. This change adds an option to provide role-specific
parameters to services and other templates.
Two new parameters - RoleName and RoleParameters - are added to the
service template. RoleName provides the name of the role on which the
current instance of the service is being applied. RoleParameters
provides the list of parameters which are configured specifically for
that role in the environment file, like below:
parameter_defaults:
  # Default value applied to all roles
  NovaReservedHostMemory: 2048
  ComputeDpdkParameters:
    # Applied only to the ComputeDpdk role
    NovaReservedHostMemory: 4096
In the above sample, the cluster contains two roles - Compute and
ComputeDpdk. The values of ComputeDpdkParameters will be passed on to
the templates as RoleParameters while creating the stack for the
ComputeDpdk role. A parameter which supports role-specific
configuration should be looked up first in the RoleParameters list;
if not found, the default (for all roles) is used.
Implements: blueprint tripleo-derive-parameters
Change-Id: I72376a803ec6b2ed93903cc0c95a6ffce718b6dc
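In a service template the two new parameters are declared roughly as follows
(descriptions paraphrased):
parameters:
  RoleName:
    default: ''
    description: Role name on which the service is applied
    type: string
  RoleParameters:
    default: {}
    description: Parameters specific to the role
    type: json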
Some containers are using the logs named volume for collecting logs
written to `/var/log`. We should make this consistent for all the
containers.
This patch also cleans up some mounts that weren't needed for some
services. For example, glance-api doesn't need `/run` to be mounted.
Other changes:
* Rework log volumes to hostpath mounts to avoid slow COW writes.
* Add kolla_config permissions and host_prep_tasks to create and
manage permissions of hostpath-mounted log dirs.
* Rework data-owning init containers to use kolla_config permissions.
* When a step wants KOLLA_BOOTSTRAP or a DB sync, use log data-owning
init containers to set permissions for logs. This is required
because kolla bootstrap and DB sync run before the kolla config
stage, so no permissions have been set for logs yet.
* To address hybrid cases where host services and containerized ones
access logs with different UIDs, persist containerized services'
logs into separate directories (an upgrade impact).
* Ensure host prep tasks create /var/log/containers/ and /var/lib/
sub-directories for services.
* Fix missing /etc/httpd, /var/www config-data mounts for zaqar/ironic.
* Fix YAML indentation and drop string quotation.
Co-authored-by: Bogdan Dobrelya <bdobreli@redhat.com>
Partial blueprint containerized-services-logs
Change-Id: I53e737120bf0121bd28667f355b6f29f1b2a6b82
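A hedged sketch of the resulting per-service pattern (the nova paths and step
number are illustrative):
host_prep_tasks:
  - name: create persistent nova logs directory
    file:
      path: /var/log/containers/nova
      state: directory
docker_config:
  step_4:
    nova_api:
      image: {get_param: DockerNovaApiImage}
      volumes:
        # hostpath mount of a per-service directory instead of the shared
        # "logs" named volume
        - /var/log/containers/nova:/var/log/nova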
list_concat was introduced recently and is able to replace the yaql
calls for concatenating lists.
Change-Id: Id3a80a0e1e4c25b6d838898757c69ec99d0cd826
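For example, a concatenation of the common volumes with service-specific
mounts can drop its yaql expression (names below are placeholders for the
pattern):
# before: yaql
volumes:
  yaql:
    expression: $.data.common + $.data.service
    data:
      common: {get_attr: [ContainersCommon, volumes]}
      service:
        - /var/lib/kolla/config_files/nova_api.json:/var/lib/kolla/config_files/config.json:ro
# after: list_concat
volumes:
  list_concat:
    - {get_attr: [ContainersCommon, volumes]}
    - - /var/lib/kolla/config_files/nova_api.json:/var/lib/kolla/config_files/config.json:ro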
This enables common resources that the docker templates might need.
The only initial resource is the common volumes, and two volumes are
introduced (localtime and hosts).
Change-Id: Ic55af32803f9493a61f9b57aff849bfc6187d992
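Conceptually, the shared resource exposes something like this, which service
templates can then merge into their own volume lists (sketch only):
# docker/services/containers-common.yaml (sketch)
outputs:
  volumes:
    description: Common volumes for all containerized services
    value:
      - /etc/hosts:/etc/hosts:ro
      - /etc/localtime:/etc/localtime:ro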
Per puppet-nova commit 2c743a6bff5b17a85d1e0500f3a9ecb21468204e
there is now a custom resource for Nova_cell_v2 configuration.
As this resource runs automatically regardless of our use
of puppet tags we need to explicitly disable it to be able to
generate Nova API configs for docker.
Change-Id: Id675dc124464acddc3fc5a88b017a351e93ba685
Closes-bug: #1681841
Simplify the config of the containerized services by bind mounting
the configurations instead of specifying them all in kolla config.
This change is useful to limit the side effects of generating the
config files and running the container in two separate steps, as config
directories are now bind-mounted inside the container instead of having
files copied into the container. We've seen examples of Apache's
mod_ssl configuration file being present in the container and preventing
it from starting when puppet configured Apache not to load the ssl
module (when TLS is disabled).
Co-Authored-By: Ian Main <imain@redhat.com>
Change-Id: I4ec5dd8b360faea71a044894a61790997f54d48a
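Schematically, the kolla config shrinks to the command while a bind mount of
the puppet-generated directory takes over (paths illustrate the pattern, not
the literal diff):
kolla_config:
  /var/lib/kolla/config_files/nova_api.json:
    command: /usr/bin/nova-api
    # no per-file copy list needed any more
docker_config:
  step_4:
    nova_api:
      image: {get_param: DockerNovaApiImage}
      volumes:
        - /var/lib/kolla/config_files/nova_api.json:/var/lib/kolla/config_files/config.json:ro
        # whole generated config directory, mounted read-only
        - /var/lib/config-data/nova/etc/nova/:/etc/nova/:ro
      environment:
        - KOLLA_CONFIG_STRATEGY=COPY_ALWAYS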
The previous code had a race condition where nova-api host discovery
and nova-compute were run at the same step. This commit ensures host
discovery happens after nova-compute has started.
Change-Id: Id2fc795a64783d958d98d4ac523a19079e8a4fab
Closes-Bug: #1675011
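In docker_config terms this just means discovery runs in a later step than the
compute container; shown together here for brevity even though the containers
live in different service templates (step numbers are illustrative):
docker_config:
  step_4:
    nova_compute:
      image: {get_param: DockerNovaComputeImage}
      net: host
      privileged: true
      restart: always
  step_5:
    nova_api_discover_hosts:
      image: {get_param: DockerNovaApiImage}
      net: host
      detach: false
      volumes:
        - /var/lib/config-data/nova/etc/nova/:/etc/nova/:ro
      command: "/usr/bin/nova-manage cell_v2 discover_hosts"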
Use yaml anchors wherever possible for image definition and drop unused
anchors.
Renamed parameters to Docker*ConfigImage to clarify that an image is
specifically used to generate configuration files.
Change-Id: I388bd59de7f1d36a3a881fbb723ba5bcba09e637
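A YAML anchor lets a template spell an image reference once and reuse it,
e.g. (purely illustrative):
docker_config:
  step_4:
    nova_api:
      image: &nova_api_image {get_param: DockerNovaApiImage}
  step_5:
    nova_api_discover_hosts:
      image: *nova_api_image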
We don't use docker_image for anything. It is a remnant of the
pre-composable docker templates and we can now remove it.
This patch removes references to the 'docker_image' section
from docker/post.yaml and all of the docker/services* templates.
Change-Id: I208c1ef1550ab39ab0ee47ab282f9b1937379810
This aligns the docker-based services with the new composable upgrades
architecture we landed for ocata, and does a first pass at adding
upgrade_tasks for the services (these may change; at the moment we only
disable the service on the host).
To run the upgrade workflow you basically do two steps:
openstack overcloud deploy --templates \
-e environments/major-upgrade-composable-steps-docker.yaml
This will run the ansible upgrade steps we define via upgrade_tasks
then run the normal docker PostDeploySteps to bring up the containers.
For the puppet workflow there's then an operator-driven step where
compute nodes (and potentially storage nodes) are upgraded in batches,
and finally you do:
openstack overcloud deploy --templates \
-e environments/major-upgrade-converge-docker.yaml
In the puppet case this re-applies puppet to unpin the nova RPC API,
so it will presumably restart the affected nova containers but otherwise
be a no-op (we also disable the ansible steps at this point).
Depends-On: I9057d47eea15c8ba92ca34717b6b5965d4425ab1
Change-Id: Ia50169819cb959025866348b11337728f8ed5c9e
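A first-pass upgrade_tasks entry in a docker service template looks roughly
like this; at this stage it only stops and disables the corresponding host
service (the service name is an example):
upgrade_tasks:
  - name: Stop and disable nova-api service
    tags: step2
    service: name=openstack-nova-api state=stopped enabled=no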
This approach removes the need for the yaql zip to build the
docker-puppet data by building the data in a puppet_config dict.
This allows a future change to make docker-puppet.py only accept dict
data.
Currently the step_config is left where it is and referenced inside
puppet_config, but feedback is welcome on whether this is necessary or
desirable.
Change-Id: I4a4d7a6fd2735cb841174af305dbb62e0b3d3e8c
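The puppet_config dict groups everything docker-puppet.py needs per
config_volume, roughly like this (the service names are examples):
puppet_config:
  config_volume: heat_api
  puppet_tags: heat_config
  step_config: {get_attr: [HeatApiBase, role_data, step_config]}
  config_image: {get_param: DockerHeatApiConfigImage}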
Otherwise the containerized nova running in the overcloud fails with
"Host 'overcloud-novacompute-0' is not mapped to any cell, Code: 400".
Co-Authored-By: Martin André <m.andre@redhat.com>
Change-Id: I9ff77f25bfd1f37167b0638a32fe5049951bc5b4
This patch adds docker services for Nova: API, conductor, scheduler,
ironic, placement, and pass-through configuration for metadata (it
simply enables metadata to be configured as part of nova-api).
The nova-api DB initialization commands depend on a new heat-agent
feature (see the patch below) to accommodate exit codes returned by
the new cells setup commands.
Change-Id: I39436783409ed752b08619b07b0a0c592bce0456
Depends-On: Ia6ca4b01982a0b33b26eca2a907d9d9f87c19922
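Enabling the containerized services is then a matter of mapping the service
resources to the docker templates in an environment file, along these lines
(exact file names may differ):
resource_registry:
  OS::TripleO::Services::NovaApi: ../docker/services/nova-api.yaml
  OS::TripleO::Services::NovaConductor: ../docker/services/nova-conductor.yaml
  OS::TripleO::Services::NovaScheduler: ../docker/services/nova-scheduler.yaml
  OS::TripleO::Services::NovaPlacement: ../docker/services/nova-placement.yaml
  OS::TripleO::Services::NovaMetadata: ../docker/services/nova-metadata.yaml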