|
The bootstrap_nodeid can have capital letters while the hostname may
not. In puppet we use downcase for this comparison, so let's follow a
similar pattern for scripts from THT.
Change-Id: I8a0bec4a6f3ed0b4f2289cbe7023344fb284edf7
Closes-Bug: #16998201
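A minimal sketch of such a case-insensitive comparison in bash; the variable
names are illustrative, not the exact ones used by the THT scripts:
if [ "$(echo "$bootstrap_nodeid" | tr '[:upper:]' '[:lower:]')" = \
     "$(hostname | tr '[:upper:]' '[:lower:]')" ]; then
    # this node is the bootstrap node
    echo "running bootstrap-only tasks"
fi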
|
|
We need to ensure that the pacemaker cluster restarts
at the end of the deployment.
Due to the resource renaming, the postconfig resource
was no longer added at the end of the deployment,
as the old *postpuppet resources were.
Closes-bug: 1695904
Change-Id: Ic6978fcff591635223b354831cd6cbe0802316cf
|
|
Master is now the development branch for Pike,
so change the release alias name accordingly.
Change-Id: I938e4a983e361aefcaa0bd9a4226c296c5823127
|
|
In change I2aae4e2fdfec526c835f8967b54e1db3757bca17 we did the
following:
-pacemaker_status=$(systemctl is-active pacemaker || :)
+pacemaker_status=""
+if hiera -c /etc/puppet/hiera.yaml service_names | grep -q pacemaker; then
+ pacemaker_status=$(systemctl is-active pacemaker)
+fi
We did that because of LP#1668266: we did not want systemctl is-active to
fail on non-pacemaker nodes. The problem with the above hiera check is
that it will match on pacemaker_remote nodes as well.
We cannot piggyback on the pacemaker_enabled hiera key because that is true
on all nodes. So let's make the test check only for the pacemaker service
without matching pacemaker_remote. Tested with:
1) Test on a controller node with pacemaker service enabled
[root@overcloud-controller-0 ~]# hiera -c /etc/puppet/hiera.yaml -a service_names |grep '\bpacemaker\b'
"pacemaker",
[root@overcloud-controller-0 ~]# echo $?
0
2) Test on a compute node without pacemaker:
[root@overcloud-novacompute-0 puppet]# hiera -c /etc/puppet/hiera.yaml service_names |grep '\bpacemaker\b'
[root@overcloud-novacompute-0 puppet]# echo $?
1
3) Test on a node with pacemaker_remote in the service_names key:
[root@overcloud-novacompute-0 puppet]# hiera -c /etc/puppet/hiera.yaml service_names |grep '\bpacemaker\b'
[root@overcloud-novacompute-0 puppet]# echo $?
1
[root@overcloud-novacompute-0 puppet]# hiera -c /etc/puppet/hiera.yaml service_names |grep '\bpacemaker_remote\b'
"pacemaker_remote"]
[root@overcloud-novacompute-0 puppet]# echo $?
0
Change-Id: I54c5756ba6dea791aef89a79bc0b538ba02ae48a
Closes-Bug: #1688214
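With that, the check in yum_update.sh becomes roughly the following; a sketch
based on the tests above, not a verbatim quote of the patch:
pacemaker_status=""
if hiera -c /etc/puppet/hiera.yaml service_names | grep -q '\bpacemaker\b'; then
    pacemaker_status=$(systemctl is-active pacemaker)
fi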
|
|
This adds openstack-nova-migration on the compute nodes during the upgrade.
Closes-Bug: #1687081
Depends-on: Iab022bdfb655e3c52fecebf416e75c9e981072ab
Depends-on: I02dc8934521340f42ac44a7d16889f6d79620c33
Change-Id: I3db2a3188e538eeaef61769d38f0166545444cfe
|
|
To test this change we deployed a stock master with IPv6, which created a number
of IPv6 VIPs with a /64 netmask:
[root@overcloud-controller-0 ~]# pcs resource show ip-fd00.fd00.fd00.2000..18
Resource: ip-fd00.fd00.fd00.2000..18 (class=ocf provider=heartbeat type=IPaddr2)
Attributes: ip=fd00:fd00:fd00:2000::18 cidr_netmask=64
Operations: start interval=0s timeout=20s (ip-fd00.fd00.fd00.2000..18-start-interval-0s)
stop interval=0s timeout=20s (ip-fd00.fd00.fd00.2000..18-stop-interval-0s)
monitor interval=10s timeout=20s (ip-fd00.fd00.fd00.2000..18-monitor-interval-10s)
Then we updated the THT folder with this patch and uploaded the new scripts to the undercloud via:
openstack overcloud deploy --update-plan-only ....
Then we kicked off the minor update workflow:
openstack overcloud update stack -i overcloud
Once the controller-0 node (the bootstrap node for pacemaker) completed, we had the
correct VIP configuration:
[root@overcloud-controller-0 heat-config-script]# pcs resource show ip-fd00.fd00.fd00.2000..18
Resource: ip-fd00.fd00.fd00.2000..18 (class=ocf provider=heartbeat type=IPaddr2)
Attributes: ip=fd00:fd00:fd00:2000::18 cidr_netmask=128 nic=vlan20 lvs_ipv6_addrlabel=true lvs_ipv6_addrlabel_value=99
Operations: start interval=0s timeout=20s (ip-fd00.fd00.fd00.2000..18-start-interval-0s)
stop interval=0s timeout=20s (ip-fd00.fd00.fd00.2000..18-stop-interval-0s)
monitor interval=10s timeout=20s (ip-fd00.fd00.fd00.2000..18-monitor-interval-10s)
Also verified that running the script a second time does not alter the
(already fixed) VIPs.
Co-Authored-By: Damien Ciabrini <dciabrin@redhat.com>
Change-Id: I765cd5c9b57134dff61f67ce726bf88af90f8090
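The fix itself amounts to the bootstrap node reconfiguring each IPv6 VIP
resource; a hedged sketch using the attributes from the output above (the
exact invocation in the script may differ):
pcs resource update ip-fd00.fd00.fd00.2000..18 \
    ip=fd00:fd00:fd00:2000::18 cidr_netmask=128 nic=vlan20 \
    lvs_ipv6_addrlabel=true lvs_ipv6_addrlabel_value=99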
|
|
Closes-Bug: 1686619
Change-Id: I7c32ca39a456de9833d30c31d41fcb727d2b0a34
|
|
We fixed pcs resource start/stop timeouts via
I587136d8d045d213875c657ea5a405074f80c8ad in Nov 2015 for Mitaka.
And there we stated:
This can be removed once updates from deployments made prior to
I6fc18f1ad876c5a25723710a3b20d8ec9519dcba are no longer supported.
We can now safely remove these updates as they are useless and cost time
anyway.
Change-Id: Ibad2b3eed0d08560d52d5ebe700746b61e5b8f51
|
|
The [Pre|Post]Puppet resources were renamed in
https://review.openstack.org/#/c/365763.
This was intended to provide pre/post deployment
steps under an agnostic name instead of
being tied to a particular technology.
The renaming was unintentionally reverted in
https://review.openstack.org/#/c/393644/ and
https://review.openstack.org/#/c/434451.
This submission merges both resources into one
and removes the old pre|post hooks.
Closes-bug: #1669756
Change-Id: Ic9d97f172efd2db74255363679b60f1d2dc4e064
|
|
In two places during the upgrade we manually trigger puppet.
There can be a problem when new puppet modules are added and their
corresponding symlinks in /etc/puppet/modules are not created during
the installation, as they are installed in
/usr/share/openstack-puppet/modules. To prevent this issue TripleO sets
the modulepath in the templates.
We must use the same modulepath to make sure that we don't fail
because of a missing module in the manual puppet run.
This particularly happens when you upgrade from M->N->O, as the base
image in Mitaka doesn't have the proper symlinks and they are not
created during the installation of the package.
Closes-Bug: #1684587
Change-Id: I79df6ea33f1c58e13309176a6de41b7572541fd6
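A hedged sketch of a manual puppet run with an explicit modulepath; the path
list and manifest name are illustrative and should mirror whatever the
templates set:
MODULEPATH=/etc/puppet/modules:/opt/stack/puppet-modules:/usr/share/openstack-puppet/modules
# --detailed-exitcodes: exit code 2 means changes were applied successfully
puppet apply --detailed-exitcodes --modulepath "$MODULEPATH" /root/example_manifest.pp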
|
|
Fetch the host public keys from each node, combine them all and write to the
system-wide ssh known hosts. The alternative of disabling host key
verification is vulnerable to a MITM attack.
Change-Id: Ib572b5910720b1991812256e68c975f7fbe2239c
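Conceptually the flow looks like the sketch below; this is not the exact
deployment mechanism, and the variable name is illustrative:
# on each node: the public host keys live in /etc/ssh/ssh_host_*_key.pub
cat /etc/ssh/ssh_host_*_key.pub
# on every node, once the keys from all nodes have been combined
# (done by the deployment), install them system wide
echo "$combined_known_hosts_entries" > /etc/ssh/ssh_known_hosts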
|
|
This reverts commit b323f8a16035549d84cdec4718380bde3d23d6c3 and uses
the new logic in puppet-tripleo (see Ifd6fa5b398d98e8998630ea0c9a2ce9867ceba2b),
which basically does the same.
Closes-Bug: 1665641
Change-Id: Ib5cb0578be2993af0a0b8675005d838640bdb139
|
|
The current check tends to produce a false positive causing unnecessary
service restarts. yum check-update will exit with return code 100 if
updated packages are available.
Change-Id: I8bd89f2b24bafc6c991382b9eb484cfa9a2f8968
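A sketch of handling the yum check-update exit codes without a false positive
(structure illustrative, not the verbatim script):
set +e
yum check-update -q > /dev/null
check_update_exit=$?
set -e

if [ "$check_update_exit" -eq 100 ]; then
    echo "Updated packages available"
elif [ "$check_update_exit" -ne 0 ]; then
    echo "Failed to check for package updates" >&2
else
    echo "No updated packages available"
fi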
|
|
In [1] we removed the previously used special case upgrade code.
However we have since discovered that for openvswitch 2.5.0-14
the special case is still required with an extra flag to prevent
the restart. This adds the upgrade code back into the minor
update and 'manual upgrade' scripts for compute/swift. The
review at If998704b3c4199bbae8a1d068c31a71763f5c8a2 is adding
this logic for the ansible upgrade steps.
Related-Bug: 1669714
[1] https://review.openstack.org/#/q/59e5f9597eb37f69045e470eb457b878728477d7
Change-Id: I3e5899e2d831b89745b2f37e61ff69dbf83ff595
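Roughly, the re-added special case looks like the following sketch; the
package version match, paths and rpm flags are assumptions, not verbatim from
the script:
# only for the affected openvswitch build: upgrade the package in place while
# skipping the %postun/trigger scriptlets so the service is not restarted
if rpm -q openvswitch | grep -q "^openvswitch-2.5.0-14"; then
    yumdownloader --destdir /tmp/ovs_upgrade openvswitch
    rpm -U --replacepkgs --nopostun --notriggerun /tmp/ovs_upgrade/openvswitch-*.rpm
fi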
|
|
The attempt to check galera's cluster status fails when the galera service
is not running on the same node.
Change-Id: I27fb0841d85cd0dc86e92ac2e21eedf5f8f863ab
|
|
Removes some of the no longer used scripts and templates used by
the upgrades workflow in previous versions.
Change-Id: I7831d20eae6ab9668a919b451301fe669e2b1346
|
|
The UpdateDeployment already depends on NetworkDeployment.
We should not run os-net-config unconditionally before update.
Closes-Bug: #1666227
Change-Id: I48cbf5de00d47c6fdad71ff24c00e9db05cec5d5
|
|
The workaround is removed from tripleo_upgrade_node.sh (major upgrade) and
yum_update.sh (minor update). It is no longer needed and in fact has the
opposite effect, killing connectivity to the node. The 'normal' yum
update on nodes delivers the latest openvswitch 2.6.1 with no drama.
Also adds a 'complete' message and some extra debug echo statements for the
logs, and removes the python-zaqarclient install, which is no longer needed.
Closes-Bug: 1669714
Change-Id: Icd1517bcade36781fa0da21d045ffd9ec68efc38
|
|
Package update fails on compute nodes when yum_update checks the
pacemaker status via the systemctl command. This started happening
because the exit-on-error (-e) option was enabled recently. Fix it
by executing the command only on nodes where pacemaker is enabled.
Closes-Bug: #1668266
Change-Id: I2aae4e2fdfec526c835f8967b54e1db3757bca17
|
|
During the upgrade from M to N, I encountered an error in a step
requiring the upgrade of the mysql version. The variable backup_flags
is undefined at that point.
Change-Id: Ic6681c40934b27a03d00a75007d7f12d6d540de3
Closes-Bug: #1667731
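A minimal way to make the script robust against the unset variable; a sketch,
with the surrounding backup command shown only for illustration:
# fall back to an empty value so the expansion cannot fail under 'set -u'
backup_flags=${backup_flags:-""}
mysqldump $backup_flags --all-databases > /var/tmp/mysql_backup.sql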
|
|
Adds two checks, one for the CephMon and one for the CephOSD upgrade
tasks borrowed from ceph-ansible.
Change-Id: I0a0e60d277240130c6bd76a74ccc13354b87a30a
Co-Authored-By: Sebastien Han <seb@redhat.com>
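A hedged sketch of the shape of such a check; timeout and poll interval are
illustrative:
# wait for the cluster to report HEALTH_OK before proceeding with the upgrade
timeout=300
until ceph health | grep -q HEALTH_OK; do
    sleep 5
    timeout=$((timeout - 5))
    if [ $timeout -le 0 ]; then
        echo "ceph cluster did not become healthy in time" >&2
        exit 1
    fi
done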
|
|
And change the conditional to use hiera instead.
Change-Id: Icf91dd91c0ab04e7919172fcfd130183bfd427b4
|
|
We want to apply a puppet manifest for the non-controller role, but we
need to apply it in stages. By loading the proper hieradata we get the
needed step configuration.
Change-Id: I07bfeee7b7d9a9b8c2c20e5d5c9ed735d0bfc842
Closes-Bug: #1664304
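A hedged sketch of the staged apply; the hieradata file, step count and
manifest path are illustrative assumptions:
for step in 1 2 3 4 5 6; do
    # expose the current step to hiera so the manifest only applies that stage
    echo "step: ${step}" > /etc/puppet/hieradata/deploy_steps.yaml
    # exit code 2 from puppet apply means changes were applied successfully
    puppet apply --detailed-exitcodes /root/non_controller_role.pp || [ $? -eq 2 ]
done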
|
|
We want to run puppet on each role which has the flag
disable_upgrade_deployment set to true. It will run after the upgrade
of the role and before running the whole converge step.
Change-Id: Ia85be688d070dfb5b8337e8ef3c4bc439fb6052e
|
|
We do not need the upgrade scripts used to migrate Ceph from
hammer to jewel. This submission removes that and the legacy
upgrade scripts used for the BlockStorage role.
Change-Id: I2674216dd9b5b849de6a2624ee1115420a254182
|
|
This delivers a /root/tripleo_upgrade_node.sh to those nodes
that have the disable_upgrade_deployment flag set to true.
They will later be upgraded manually by the operator who will
invoke the script delivered here using upgrade-non-controller.sh
We can also deliver any service specific upgrade configuration,
such as configuring nova-compute to use the placement API as this
is required in order for placement to be configured and installed
during the subsequent upgrade steps for controller services.
This removes the compute and swift specific upgrade scripts, as
they are now merged into the common
tripleo_upgrade_node.sh - removing any hard coded
reference to a particular role name (compute/objectstorage) and
relying only on the disable_upgrade_deployment flag in roles_data.yaml.
Change-Id: I4531a4038b78087ef4a1a62c35f1328822427817
Co-Authored-By: Mathieu Bultel <mbultel@redhat.com>
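For reference, the operator then drives the upgrade from the undercloud
roughly like this; the node name is illustrative and the exact
upgrade-non-controller.sh options may differ:
upgrade-non-controller.sh --upgrade overcloud-novacompute-0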
|
|
Swift rings created or updated on the overcloud nodes will now be
stored on the undercloud at the end of the deployment. An
additional consistency check is executed before storing them,
ensuring all rings within the cluster are identical.
These rings will be retrieved (before Puppet runs) by every node
when an UPDATE is executed, and by doing this they will be in a
consistent state across the cluster.
This makes it possible to add, remove or replace nodes in an
existing cluster without manual operator interaction.
Closes-Bug: 1609421
Depends-On: Ic3da38cffdd993c768bdb137c17d625dff1aa372
Change-Id: I758179182265da5160c06bb95f4c6258dc0edcd6
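The consistency check amounts to comparing ring checksums across the cluster;
a hedged sketch, with host names and ssh user as illustrative assumptions
rather than the exact mechanism used by the templates:
for host in overcloud-controller-0 overcloud-controller-1 overcloud-controller-2; do
    ssh heat-admin@"$host" "md5sum /etc/swift/*.ring.gz"
done
# all nodes must report identical checksums for each ring file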
|
|
We only need to know if the pacemaker service is in an active state.
Change-Id: Id5e16f2bbbe51b8a0c250eb5d35e89e61a7b3383
Resolves: rhbz#1414779
Closes-Bug: #1656980
|
|
Glance registry is not required for v2 of the API and there are
plans to deprecate it in the glance community.
Let's remove v1 support since it has been deprecated for a while in
Glance.
Depends-On: I77db1e1789fba0fb8ac014d6d1f8f5a8ae98ae84
Co-Authored: Flavio Percoco <flaper87@gmail.com>
Change-Id: I0cd722e8c5a43fd19336e23a7fada71c257a8e2d
|
|
Heat now supports release name aliases, so we can replace
the inconsistent mix of date related versions with one consistent
version that aligns with the supported version of heat for this
t-h-t branch.
This should also help new users who sometimes copy/paste old templates
and discover intrinsic functions in the t-h-t docs don't work because
their template version is too old.
Change-Id: Ib415e7290fea27447460baa280291492df197e54
|
|
For now, don't run anything in yum_update.sh when it is run from
inside the heat-agents container. A mechanism for doing a yum update
on the host can be worked out later, but for now a yum update should
never be run inside a container.
Change-Id: I73d37578f8b2dc9c3029b968b1ef74ef4894100a
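A hedged sketch of such a short-circuit; the container detection method shown
here is an assumption, not necessarily what yum_update.sh checks:
# running inside the heat-agents container: never touch packages from here
if [ -f /.dockerenv ]; then
    echo "Skipping yum update inside the container"
    exit 0
fi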
|