by default
The default value is 0, which causes the minimum number to be calculated based on the
replica count from osd_pool_default_size. The default replica count is 3 and the calculated
min_size is 2. If the replica count is 1 then the min_size is 1, i.e. min_size = replica - (replica/2).
Add CephPoolDefaultSize parameter to ceph-mon.yaml. This parameter defaults to 3 but can
be overridden. See puppet-ceph-devel.yaml for an example
Change-Id: Ie9bdd9b16bcb9f11107ece614b010e87d3ae98a9
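As a rough sketch of the arithmetic above, an environment file overriding the
new parameter (the exact contents of puppet-ceph-devel.yaml may differ) could
look like:

# Illustrative override: with a replica count of 3, Ceph derives
# min_size = 3 - (3/2) = 2; with a replica count of 1, min_size = 1 - (1/2) = 1.
parameter_defaults:
  CephPoolDefaultSize: 3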
|
This is only done when TLS-everywhere is enabled, and depends on those
directories being exclusive to services that run over httpd.
bp tls-via-certmonger-containers
Change-Id: I194c33992c7f3628f7858ecf5e472ecfdee969ed
|
Partial blueprint containerized-services-logs
Change-Id: Idbf1884226503aca9072b12d050500af407973cf
Signed-off-by: Bogdan Dobrelya <bdobreli@redhat.com>
|
Via https://github.com/arioch/puppet-redis/pull/192 puppet-redis gained
ulimit support for pacemaker-managed redis instances as well. To be able to
use that we need to set redis::managed_by_cluster_manager to true.
We also allow redis::ulimit to be configurable and we set a default of
10420, which was the default value before the above change.
Change-Id: I06129870665d7d3bfa09057fd9f0a33a99f98397
Depends-On: I4ffccfe3e3ba862d445476c14c8f2cb267fa108d
Closes-Bug: #1688464
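A minimal sketch of tuning the new knob from an environment file; the
redis::ulimit key comes from the message, while routing it through the
ExtraConfig hieradata parameter is only an illustrative assumption:

# Illustrative: raise the redis ulimit above the 10420 default.
parameter_defaults:
  ExtraConfig:
    redis::ulimit: 16384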
|
Change-Id: I2b23d92c85d5ecc889a7ee597b90e930bde9028e
Depends-On: I72f84e737b042ecfaabf5639c6164d46a072b423
|
Some containers are using the logs named volume for collecting logs
written to `/var/log`. We should make this consistent for all the
containers.
This patch also cleans up some mounts that weren't needed for some
services. For example, glance-api doesn't need `/run` to be mounted.
Other changes:
* Rework log volumes to hostpath mounts to avoid slow COW writes (a
  simplified fragment illustrating the pattern is sketched below).
* Add kolla_config permissions and host_prep_tasks to create and
  manage the permissions of hostpath-mounted log dirs.
* Rework data-owning init containers to use kolla_config permissions.
* When a step wants KOLLA_BOOTSTRAP or DB sync, use the logs data-owning
  init containers to set permissions for logs. This is required
  because kolla bootstrap and DB sync run before the kolla config
  stage and no permissions have been set for logs yet.
* To address hybrid cases where host services and containerized
  ones access logs with different UIDs, persist containerized
  services' logs into separate directories (an upgrade impact).
* Ensure host prep tasks create the /var/log/containers/ and /var/lib/
  sub-directories for services.
* Fix missing /etc/httpd, /var/www config-data mounts for zaqar/ironic.
* Fix YAML indentation and drop string quotation.
Co-authored-by: Bogdan Dobrelya <bdobreli@redhat.com>
Partial blueprint containerized-services-logs
Change-Id: I53e737120bf0121bd28667f355b6f29f1b2a6b82
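A simplified, hypothetical fragment of a containerized service template
following the pattern above (the glance_api name, step and paths are
illustrative, not the exact contents of this change):

host_prep_tasks:
  # pre-create the per-service hostpath log dir referenced below
  - name: create persistent logs directory
    file:
      path: /var/log/containers/glance
      state: directory
docker_config:
  step_4:
    glance_api:
      volumes:
        # hostpath mount instead of the shared "logs" named volume
        - /var/log/containers/glance:/var/log/glance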
|
This will enable those consuming the stack_update_type hieradata
set by this parameter to differentiate an update from a major upgrade.
Change-Id: I38469f4b7d04165ea5371aeb0cbd2e9349d70c79
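A hedged sketch of what consuming this could look like from an environment
file; the StackUpdateType parameter name and the UPDATE value are assumptions,
not taken from this message:

# Hypothetical: mark the stack operation as a minor update so services reading
# the stack_update_type hieradata key can branch on it.
parameter_defaults:
  StackUpdateType: UPDATE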
|
In change I2aae4e2fdfec526c835f8967b54e1db3757bca17 we did the
following:
-pacemaker_status=$(systemctl is-active pacemaker || :)
+pacemaker_status=""
+if hiera -c /etc/puppet/hiera.yaml service_names | grep -q pacemaker; then
+ pacemaker_status=$(systemctl is-active pacemaker)
+fi
We did that due to LP#1668266: we did not want systemctl is-active to
fail on non-pacemaker nodes. The problem with the above hiera check is
that it will match on pacemaker_remote nodes as well.
We cannot piggyback on the pacemaker_enabled hiera key because that is
true on all nodes. So let's make the test check only for the pacemaker
service without matching pacemaker_remote. Tested with:
1) Test on a controller node with pacemaker service enabled
[root@overcloud-controller-0 ~]# hiera -c /etc/puppet/hiera.yaml -a service_names |grep '\bpacemaker\b'
"pacemaker",
[root@overcloud-controller-0 ~]# echo $?
0
2) Test on a compute node without pacemaker:
[root@overcloud-novacompute-0 puppet]# hiera -c /etc/puppet/hiera.yaml service_names |grep '\bpacemaker\b'
[root@overcloud-novacompute-0 puppet]# echo $?
1
3) Test on a node with pacemaker_remote in the service_names key:
[root@overcloud-novacompute-0 puppet]# hiera -c /etc/puppet/hiera.yaml service_names |grep '\bpacemaker\b'
[root@overcloud-novacompute-0 puppet]# echo $?
1
[root@overcloud-novacompute-0 puppet]# hiera -c /etc/puppet/hiera.yaml service_names |grep '\bpacemaker_remote\b'
"pacemaker_remote"]
[root@overcloud-novacompute-0 puppet]# echo $?
0
Change-Id: I54c5756ba6dea791aef89a79bc0b538ba02ae48a
Closes-Bug: #1688214
|
This adds openstack-nova-migration on the compute nodes during the upgrade.
Closes-Bug: #1687081
Depends-on: Iab022bdfb655e3c52fecebf416e75c9e981072ab
Depends-on: I02dc8934521340f42ac44a7d16889f6d79620c33
Change-Id: I3db2a3188e538eeaef61769d38f0166545444cfe
|
Specify the allowed networks for migration ssh tunneling.
bp tripleo-cold-migration
Change-Id: Iab022bdfb655e3c52fecebf416e75c9e981072ab
Depends-on: Idb56acd1e1ecb5a5fd4d942969be428cc9cbe293
|
By adding back the conditions we avoid the deployment of unneeded
software configs on nodes where we don't have any upgrade tasks to
run, speeding up the upgrade process.
Related-Bug: #1679486
Related-Bug: #1678101
Change-Id: I5c8b0c4abfc0607f42fd3f2da9f5ef2702b1bbe1
|
Depends-On: I55ac06e1a561d29d7e1c928a1684989c9654b95d
Change-Id: Id29e96979b937593efe244f46ce2dd74df3aaa7f
|
By default, we let the data live forever, which isn't very efficient.
Let's expose params to tweak this and use a reasonable default.
Change-Id: I145fa73a7af9cb4135ba910d3659853b3baa893d
|
For performance reasons we might want to tweak this param, so let's
expose it via TripleO. The puppet changes were added in
patch I5de5283d1b14e0bba63d6d9a440611914ba86ca4.
Change-Id: I72f1fe3a47060fe37602a70b8a74fba72209127c
|
Instead of using the CA bundle, this sets the mysql client configuration
file to use a specific file for validating the certificate of the
database server. This helps in two ways:
* Improves performance since validation will check only one certificate.
* Improves security since only the certificates signed by one CA are
  valid, instead of any certificate that the system trusts (which could
  include potentially compromised public certs).
Change-Id: I46f7cb6da73715f8f331337e0161418450d5afd7
Depends-On: I75bdaf71d88d169e64687a180cb13c1f63418a0f
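A sketch of pointing clients at a single CA file; the InternalTLSCAFile
parameter name and the path are assumptions based on common TLS-everywhere
setups, not taken from this message:

# Hypothetical override: validate the database server certificate against one
# specific CA file instead of the system-wide CA bundle.
parameter_defaults:
  InternalTLSCAFile: /etc/ipa/ca.crt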
|
This switches heat-api and heat-api-cfn to use httpd in containerized
overcloud.
Co-Authored-By: Martin André <m.andre@redhat.com>
Change-Id: I2fe6e25474279c7c91a69d9df7b28e12b1d8ac00
|
libvirt has its own parameter for setting the CA; however, if we have a
common CA for all services in the internal network (which we do), it's
more consistent to use the common parameter for configuring that CA
file.
The previous parameter was left in case the deployer wants to use a
specific CA file for the compute nodes.
Change-Id: I3d132d3d257d7ea9f43e49593f8509c3cd205ca5
|
Instead of using the CA bundle, this sets HAProxy to use a specific file
for validating the certificates of the services it's proxying. This
helps in two ways:
* Improves performance since validation will check only one certificate.
* Improves security since only the certificates signed by one CA are
  valid, instead of any certificate that the system trusts (which could
  include potentially compromised public certs).
Change-Id: Id6de045b3c93c82d37e0b0657c17a3108516016a
|
Change-Id: Ic218a753e0cede2ba3951bcaec843f487dce0c71
|
To test this change we deployed a stock master with IPv6, which created a bunch
of IPv6 VIPs with a /64 netmask:
[root@overcloud-controller-0 ~]# pcs resource show ip-fd00.fd00.fd00.2000..18
Resource: ip-fd00.fd00.fd00.2000..18 (class=ocf provider=heartbeat type=IPaddr2)
Attributes: ip=fd00:fd00:fd00:2000::18 cidr_netmask=64
Operations: start interval=0s timeout=20s (ip-fd00.fd00.fd00.2000..18-start-interval-0s)
stop interval=0s timeout=20s (ip-fd00.fd00.fd00.2000..18-stop-interval-0s)
monitor interval=10s timeout=20s (ip-fd00.fd00.fd00.2000..18-monitor-interval-10s)
Then we update the THT folder with this patch and upload the new scripts to the undercloud via:
openstack overcloud deploy --update-plan-only ....
Then we kick off the minor update workflow:
openstack overcloud update stack -i overcloud
Once the controller-0 node (the pacemaker bootstrap node) has completed, we have the
correct VIP configuration:
[root@overcloud-controller-0 heat-config-script]# pcs resource show ip-fd00.fd00.fd00.2000..18
Resource: ip-fd00.fd00.fd00.2000..18 (class=ocf provider=heartbeat type=IPaddr2)
Attributes: ip=fd00:fd00:fd00:2000::18 cidr_netmask=128 nic=vlan20 lvs_ipv6_addrlabel=true lvs_ipv6_addrlabel_value=99
Operations: start interval=0s timeout=20s (ip-fd00.fd00.fd00.2000..18-start-interval-0s)
stop interval=0s timeout=20s (ip-fd00.fd00.fd00.2000..18-stop-interval-0s)
monitor interval=10s timeout=20s (ip-fd00.fd00.fd00.2000..18-monitor-interval-10s)
Also verified that running the script a second time does not alter the
(already fixed) VIPs.
Co-Authored-By: Damien Ciabrini <dciabrin@redhat.com>
Change-Id: I765cd5c9b57134dff61f67ce726bf88af90f8090
|
SnmpdBindHost will be useful for users who want to change the binding
options for the SNMP daemon.
It has to be an array, and by default the value is
['udp:161','udp6:[::1]:161'], as it was in the puppet-tripleo profile.
Change-Id: Iccf0a8d35cc05d34272c078c97a5dddfb8e7d614
Closes-Bug: #1687628
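For example, restricting snmpd to IPv4 could look like the following
environment snippet (illustrative value; the default stays as documented
above):

parameter_defaults:
  # default is ['udp:161','udp6:[::1]:161']; bind to UDP over IPv4 only
  SnmpdBindHost:
    - 'udp:161'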
|
When implementing custom roles, we lost an implicit dependency that
ensured AllNodesExtraConfig is applied before AllNodesDeploySteps,
which causes problems if you need to write hieradata via the
AllNodesExtraConfig hook (some cisco integrations we have in tree
do this, and are now broken because the ordering is no longer ensured).
Change-Id: Ie78ecbb4e135ab7f196867ef9d8d271049a9cd10
Closes-Bug: #1687597
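A minimal sketch of the kind of explicit ordering that restores this
guarantee, assuming the resource names from the message; the actual fix may be
structured differently:

# Hypothetical overcloud template fragment: make the deploy steps wait for
# AllNodesExtraConfig so its hieradata lands first.
resources:
  AllNodesDeploySteps:
    type: OS::TripleO::PostDeploySteps
    depends_on:
      - AllNodesExtraConfig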
|
Closes-Bug: #1686619
Change-Id: I7c32ca39a456de9833d30c31d41fcb727d2b0a34
|