The service templates' parameter documentation has been updated by
correcting some of the incorrect information and adding more
information with examples.
Change-Id: I2d92fd01cbeb6fdc6f030255dc4b71166509b4f6
When a service is enabled on multiple roles, the parameters for the
service will be global. This change adds an option to provide
role-specific parameters to services and other templates.
Two new parameters, RoleName and RoleParameters, are added to the
service template. RoleName provides the name of the role on which the
current instance of the service is being applied. RoleParameters
provides the map of parameters configured specifically for that role
in the environment file, like below:
parameter_defaults:
  # Default value applied to all roles
  NovaReservedHostMemory: 2048
  ComputeDpdkParameters:
    # Applied only to the ComputeDpdk role
    NovaReservedHostMemory: 4096
In the above sample, the cluster contains two roles: Compute and
ComputeDpdk. The values of ComputeDpdkParameters will be passed on to
the templates as RoleParameters while creating the stack for the
ComputeDpdk role. A parameter which supports role-specific
configuration should be looked up in the RoleParameters map first; if
it is not found there, the default (for all roles) should be used, as
sketched below.
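As an illustrative sketch (not part of this change; the hiera key
shown is just an example), a service template can resolve the
role-specific value with a nested map_replace, falling back to the
global parameter when the role does not override it:
  RoleParametersValue:
    type: OS::Heat::Value
    properties:
      type: json
      value:
        map_replace:
          - map_replace:
            - nova::compute::reserved_host_memory: NovaReservedHostMemory
            - values: {get_param: [RoleParameters]}
          - values:
              NovaReservedHostMemory: {get_param: NovaReservedHostMemory}
The inner map_replace substitutes the placeholder with the role's
value when RoleParameters contains it; the outer one fills any
remaining placeholder with the global default.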
Implements: blueprint tripleo-derive-parameters
Change-Id: I72376a803ec6b2ed93903cc0c95a6ffce718b6dc
This spawns an extra container running httpd to provide the TLS proxy
that will go in front of glance-api.
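A minimal sketch of such a proxy container in the service's
docker_config (the image parameter and mount paths are assumptions):
  docker_config:
    step_4:
      glance_api_tls_proxy:
        image: {get_param: DockerGlanceApiImage}
        net: host
        restart: always
        volumes:
          - /var/lib/config-data/glance_api/etc/httpd:/etc/httpd:ro
          - /etc/pki/tls/certs/httpd:/etc/pki/tls/certs/httpd:ro
        environment:
          - KOLLA_CONFIG_STRATEGY=COPY_ALWAYS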
bp tls-via-certmonger-containers
Change-Id: If902ac732479832b9aa3e4a8d063b5be68a42a9b
This spawns an extra container running httpd to provide the TLS proxy
that will go in front of swift.
bp tls-via-certmonger-containers
Depends-On: Ib01137cd0d98e6f5a3e49579c080ab18d8905b0d
Change-Id: I9639af8b46b8e865cc1fa7249bf1d8b1b978adfe
We don't need the expirer unless we have the collector and standard
storage enabled. Let's turn it off by default and make it
an optional service. In the upgrade scenario, we will kill the
process and stop the expirer, unless it is explicitly enabled.
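With the service optional, the default can map it to a no-op in the
resource_registry, along these lines:
  resource_registry:
    OS::TripleO::Services::CeilometerExpirer: OS::Heat::None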
Change-Id: Icffb7d1bb2cf7bd61026be7d2dcfbd70cd3bcbda
Once puppet has written the initial fernet keys, if a deployer wants to
rotate them, the keys will be overwritten when another overcloud deploy
is executed (for instance, for updates or upgrades). This disables
replacing these keys via puppet, so the operator can now rotate the
keys out of band.
Change-Id: I01fd46ba7c5e0db12524095dc9fe29e90cb0de57
Change-Id: I3583a9a3bb04df2aebf06a566a2bdc4afdbfc9f3
The neutron-metadata number of workers will be taken from the
NeutronWorkers parameter if it is not empty. When it is empty, all keys
related to the NeutronWorkers value will be set to an empty dictionary
({}) instead of an empty string, as sketched below.
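A sketch of the intended shape (the condition name is illustrative):
  conditions:
    neutron_workers_unset:
      equals: [{get_param: NeutronWorkers}, '']
  ...
  config_settings:
    map_merge:
      - if:
        - neutron_workers_unset
        - {}
        - neutron::agents::metadata::metadata_workers: {get_param: NeutronWorkers}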
Change-Id: I18347639c188bbf085e2f3c739465e52c94b9d77
Closes-bug: #1689571
We need the Docker service mapping defined and set to OS::Heat::None so
that we can reuse the multinode-container-upgrade.yaml service list both
for initial deployment and for the upgrade. The upgrade will not be
broken by this, as its env files are passed later on the command line,
so they take priority and effectively enable the Docker service on
upgrade.
Another change we need for mixed upgrade is to add the TripleoPackages
service, which will take care of updating RPMs on the bare metal and
prevent docker installation from failing with outdated
puppet-tripleo ("Could not find class ::tripleo::profile::base::docker").
Related-Bug: #1685795
Closes-Bug: #1689772
Change-Id: Idb6917f22d0e9f74f8853972c6a08bffb01be410
Variables are now passed in with --env in the docker run call.
This will allow docker-puppet.sh to be baked into the image instead of
having it as a custom entrypoint.
Change-Id: Icbaefe033becc6b2226535f28ee202917bdc1074
Move the Zaqar WSGI service to use httpd in docker deployment.
Depends-On: I35cfd1c2320eb972890b44668c8f9f0a047a65dc
Change-Id: I56a6469a9179b5c023738f447e7665d0d3c73d0b
Co-Authored-By: Martin André <m.andre@redhat.com>
Co-Authored-By: Thomas Herve <therve@redhat.com>
Change I5c8b0c4abfc0607f42fd3f2da9f5ef2702b1bbe1 introduced conditions
to optimize upgrade times and fix related bugs. Unfortunately the
conditional inclusion would have to be paired with support in depends_on
to work as we need. Currently we can hit this bug if the batch upgrade
steps are undefined for some role, but upgrade steps are defined:
The specified reference "ControllerUpgradeBatch_Step2" (in
ControllerUpgradeConfig_Step0) is incorrect.
To fix this we have to make the steps unconditional. This isn't fully
reverting the original change because that change also addressed
ordering issues.
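An illustrative sketch of the failure mode (resource names come from
the error above; the resource types and condition name are
assumptions):
  ControllerUpgradeBatch_Step2:
    type: OS::Heat::SoftwareDeploymentGroup
    condition: batch_upgrade_steps_defined  # omitted for roles without batch steps
  ControllerUpgradeConfig_Step0:
    type: OS::Heat::SoftwareDeploymentGroup
    depends_on: ControllerUpgradeBatch_Step2  # dangling reference when the
                                              # target is conditionally omitted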
Change-Id: I369591f4757c10142f5b455e64aa778e1a9a5611
Closes-Bug: #1689553
This is only done when TLS-everywhere is enabled, and depends on those
directories being exclusive for services that run over httpd.
bp tls-via-certmonger-containers
Change-Id: I194c33992c7f3628f7858ecf5e472ecfdee969ed
Partial blueprint containerized-services-logs
Change-Id: Idbf1884226503aca9072b12d050500af407973cf
Signed-off-by: Bogdan Dobrelya <bdobreli@redhat.com>
Via https://github.com/arioch/puppet-redis/pull/192 puppet-redis gained
ulimit support also for pacemaker-managed redis instances. To be able
to use it we need to set redis::managed_by_cluster_manager to true.
We also make redis::ulimit configurable and set a default of 10420,
which was the default value before the above change.
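An operator can then override the limit via extra hieradata, along
these lines (the role-specific parameter name is assumed):
  parameter_defaults:
    ControllerExtraConfig:
      redis::ulimit: 20480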
Change-Id: I06129870665d7d3bfa09057fd9f0a33a99f98397
Depends-On: I4ffccfe3e3ba862d445476c14c8f2cb267fa108d
Closes-Bug: #1688464
Change-Id: I2b23d92c85d5ecc889a7ee597b90e930bde9028e
Depends-On: I72f84e737b042ecfaabf5639c6164d46a072b423
Some containers are using the logs named volume for collecting logs
written to `/var/log`. We should make this consistent for all the
containers.
This patch also cleans up some mounts that weren't needed for some
services. For example, glance-api doesn't need `/run` to be mounted.
Other changes:
* Rework log volumes to hostpath mounts to avoid slow COW writes.
* Add kolla_config permissions, and have host_prep_tasks create and
  manage permissions for hostpath-mounted log dirs (see the sketch
  below).
* Rework data-owning init containers to use kolla_config permissions.
* When a step wants KOLLA_BOOTSTRAP or DB sync, use logs data-owning
  init containers to set permissions for logs. This is required
  because kolla bootstrap and DB sync run before the kolla config
  stage, so no permissions have been set for logs yet.
* To address hybrid cases where host services and containerized ones
  access logs with different UIDs, persist containerized services'
  logs into separate directories (an upgrade impact).
* Ensure host prep tasks create /var/log/containers/ and /var/lib/
  sub-directories for services.
* Fix missing /etc/httpd, /var/www config-data mounts for zaqar/ironic.
* Fix YAML indentation and drop string quotation.
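A sketch of the resulting pattern for a single service (the paths are
illustrative):
  host_prep_tasks:
    - name: create persistent glance logs directory
      file:
        path: /var/log/containers/glance
        state: directory
  ...
  volumes:
    - /var/log/containers/glance:/var/log/glance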
Co-authored-by: Bogdan Dobrelya <bdobreli@redhat.com>
Partial blueprint containerized-services-logs
Change-Id: I53e737120bf0121bd28667f355b6f29f1b2a6b82
This will enable those consuming the stack_update_type hieradata
set by this parameter to differentiate an update from a major upgrade.
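For example, an upgrade environment might set (parameter name assumed
from the hieradata key):
  parameter_defaults:
    StackUpdateType: UPGRADE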
Change-Id: I38469f4b7d04165ea5371aeb0cbd2e9349d70c79
In change I2aae4e2fdfec526c835f8967b54e1db3757bca17 we did the
following:
-pacemaker_status=$(systemctl is-active pacemaker || :)
+pacemaker_status=""
+if hiera -c /etc/puppet/hiera.yaml service_names | grep -q pacemaker;
then
+ pacemaker_status=$(systemctl is-active pacemaker)
+fi
We did that due to LP#1668266: we did not want systemctl is-active to
fail on non-pacemaker nodes. The problem with the above hiera check is
that it will match on pacemaker_remote nodes as well.
We cannot piggyback on the pacemaker_enabled hiera key because that is
true on all nodes. So let's make the test check only for the pacemaker
service without matching pacemaker_remote. Tested with:
1) Test on a controller node with pacemaker service enabled
[root@overcloud-controller-0 ~]# hiera -c /etc/puppet/hiera.yaml -a service_names |grep '\bpacemaker\b'
"pacemaker",
[root@overcloud-controller-0 ~]# echo $?
0
2) Test on a compute node without pacemaker:
[root@overcloud-novacompute-0 puppet]# hiera -c /etc/puppet/hiera.yaml service_names |grep '\bpacemaker\b'
[root@overcloud-novacompute-0 puppet]# echo $?
1
3) Test on a node with pacemaker_remote in the service_names key:
[root@overcloud-novacompute-0 puppet]# hiera -c /etc/puppet/hiera.yaml service_names |grep '\bpacemaker\b'
[root@overcloud-novacompute-0 puppet]# echo $?
1
[root@overcloud-novacompute-0 puppet]# hiera -c /etc/puppet/hiera.yaml service_names |grep '\bpacemaker_remote\b'
"pacemaker_remote"]
[root@overcloud-novacompute-0 puppet]# echo $?
0
Change-Id: I54c5756ba6dea791aef89a79bc0b538ba02ae48a
Closes-Bug: #1688214