author | Michele Baldessari <michele@acksyn.org> | 2016-11-09 09:05:08 +0100 |
---|---|---|
committer | Michele Baldessari <michele@acksyn.org> | 2016-11-09 14:51:51 +0100 |
commit | dde12b075ff51d4def4f49e635dd390a7f1f2cac (patch) | |
tree | f0a7183a7e5d454a97d66f3df1adce08ca434fc3 /environments/network-isolation-no-tunneling.yaml | |
parent | 465324cb6aa85b56b4390b01425190b23a64acf3 (diff) | |
Fix race during major-upgrade-pacemaker step
Currently, when we call the major-upgrade step, we do the following:
"""
...
if [[ -n $(is_bootstrap_node) ]]; then
check_clean_cluster
fi
...
if [[ -n $(is_bootstrap_node) ]]; then
migrate_full_to_ng_ha
fi
...
for service in $(services_to_migrate); do
manage_systemd_service stop "${service%%-clone}"
...
done
"""
The problem with the above code is that it is open to the following race
condition:
1. The code runs first on a non-bootstrap controller node, so we start
stopping a bunch of services
2. Pacemaker notices that those services are down and marks them as
stopped
3. The code then runs on the bootstrap node (controller-0), where the
check_clean_cluster function fails and exits (a sketch of this check
follows the list)
4. Eventually the script on the non-bootstrap controller node also times
out and exits, because the cluster never shut down (the shutdown never
actually started, since we failed at step 3)
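For illustration, here is a minimal sketch of the kind of sanity check
that trips over this race; the body below is an assumption, not the real
check_clean_cluster helper from the upgrade scripts:
"""
# Hypothetical sketch: assume check_clean_cluster refuses to proceed if
# any pacemaker resource is not running cleanly. If another node has
# already begun stopping services, this check fails on the bootstrap node.
check_clean_cluster() {
    if pcs status resources | grep -q Stopped; then
        echo "Error: cluster has stopped resources, aborting upgrade" >&2
        exit 1
    fi
}
"""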
Let's make sure we first call only the HA NG migration, as a separate
heat step. Only afterwards do we start shutting down the systemd
services on all nodes.
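A rough sketch of the intended ordering (the concrete heat step
boundaries live in the upgrade templates; this only illustrates the
split):
"""
# Earlier heat step: only the bootstrap node migrates the cluster to the
# next-generation HA architecture, while the cluster is still clean.
if [[ -n $(is_bootstrap_node) ]]; then
    check_clean_cluster
    migrate_full_to_ng_ha
fi

# Later, separate heat step: the migration has already completed, so
# every node can now stop its systemd-managed services.
for service in $(services_to_migrate); do
    manage_systemd_service stop "${service%%-clone}"
done
"""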
We also need to move the STONITH_STATE variable into a file, because it
is used across two different scripts (1 and 2) and that state has to
persist between them.
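As a minimal sketch of that change (the file path is illustrative, not
necessarily the one used in the patch), the first script writes the
state to a file and the second script reads it back:
"""
# Script 1: remember whether stonith was enabled, then disable it for
# the duration of the upgrade.
STONITH_STATE=$(pcs property show stonith-enabled | grep stonith-enabled | awk '{ print $2 }')
echo "$STONITH_STATE" > /var/tmp/stonith-state
pcs property set stonith-enabled=false

# Script 2: a different process, so the shell variable from script 1 is
# gone; read the recorded state back from the file and restore it.
STONITH_STATE=$(cat /var/tmp/stonith-state)
pcs property set stonith-enabled="$STONITH_STATE"
"""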
Co-Authored-By: Athlan-Guyot Sofer <sathlang@redhat.com>
Closes-Bug: #1640407
Change-Id: Ifb9b9e633fcc77604cca2590071656f4b2275c60
Diffstat (limited to 'environments/network-isolation-no-tunneling.yaml')
0 files changed, 0 insertions, 0 deletions