Age | Commit message | Author | Files | Lines |
|
|
|
|
|
This uses the coalesce function to take null values into account; otherwise
these resources fail validation.
Change-Id: Iaf4218dd731826f80b76ff8f7a902adc8c865be5
Closes-Bug: #1681332
|
|
This reverts commit b323f8a16035549d84cdec4718380bde3d23d6c3 and uses
the new logic in puppet-tripleo (see Ifd6fa5b398d98e8998630ea0c9a2ce9867ceba2b),
which does essentially the same thing.
Closes-Bug: 1665641
Change-Id: Ib5cb0578be2993af0a0b8675005d838640bdb139
|
|
|
|
The current check tends to produce a false positive, causing unnecessary
service restarts. yum check-update exits with return code 100 if
updated packages are available.
Change-Id: I8bd89f2b24bafc6c991382b9eb484cfa9a2f8968
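A minimal sketch of the stricter check this implies (illustrative; the exact logic lives in yum_update.sh), showing how the yum check-update return code can be distinguished from a real failure:

    # Exit codes for 'yum check-update': 100 = updates available, 0 = none, 1 = error.
    set +e
    yum check-update -q > /dev/null 2>&1
    check_update_exit=$?
    set -e

    if [ "$check_update_exit" -eq 100 ]; then
        echo "Updated packages available, proceeding with update"
    elif [ "$check_update_exit" -eq 0 ]; then
        echo "No packages require updating, skipping service restarts"
    else
        echo "Failed to check for package updates (exit code $check_update_exit)" >&2
        exit 1
    fi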
|
|
|
|
In [1] we removed the previously used special-case upgrade code.
However, we have since discovered that for openvswitch 2.5.0-14
the special case is still required, with an extra flag to prevent
the restart. This adds the upgrade code back into the minor
update and 'manual upgrade' scripts for compute/swift. The
review at If998704b3c4199bbae8a1d068c31a71763f5c8a2 adds
this logic for the ansible upgrade steps.
Related-Bug: 1669714
[1] https://review.openstack.org/#/q/59e5f9597eb37f69045e470eb457b878728477d7
Change-Id: I3e5899e2d831b89745b2f37e61ff69dbf83ff595
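For illustration, the special-case handling referred to above is roughly of the following shape (a hedged sketch; the exact version test and rpm flags are in the scripts and in the review mentioned):

    special_case_ovs_upgrade_if_needed() {
        # Only openvswitch 2.5.0-14 needs the manual path that avoids the
        # package scriptlets restarting (and thereby killing) the service.
        if rpm -q openvswitch | grep -q "2.5.0-14"; then
            echo "Manual upgrade of openvswitch, avoiding the service restart"
            mkdir -p /root/OVS_UPGRADE
            yumdownloader --destdir /root/OVS_UPGRADE --resolve openvswitch
            # --nopostun/--notriggerun skip the scriptlets that would restart the service
            rpm -U --replacepkgs --nopostun --notriggerun /root/OVS_UPGRADE/*.rpm
        fi
    }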
|
|
The attempt to check galera's cluster status fails when the galera service
is not running on the same node.
Change-Id: I27fb0841d85cd0dc86e92ac2e21eedf5f8f863ab
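A rough sketch of the kind of guard this implies (illustrative; the real yum_update.sh change may differ): only query the cluster status when a galera/mysql server is actually reachable on the node.

    if mysqladmin ping --silent 2>/dev/null; then
        # wsrep_cluster_status reports 'Primary' when the local node is part of
        # a healthy cluster; the query is skipped entirely on nodes without galera.
        mysql -e "SHOW STATUS LIKE 'wsrep_cluster_status';"
    else
        echo "galera is not running on this node, skipping cluster status check"
    fi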
|
|
|
|
|
|
Previously the rhel registration script disabled the satellite repo
after installing packages from it. This means those packages will
never be updated, which is not desirable from a long-term
maintenance perspective.
I believe this behavior is a holdover from the dib registration
script, where we don't want to leave repos enabled because the
image may be deployed many times and each instance needs to be
re-registered. In t-h-t we don't have that problem because the
script only runs at deploy time, so it's okay and desirable to leave
the repos enabled.
Change-Id: I5d760467b458d90d74507a55effc49b71d22eaa3
Closes-Bug: 1673116
|
|
Removes some of the scripts and templates that were used by
the upgrades workflow in previous versions and are no longer needed.
Change-Id: I7831d20eae6ab9668a919b451301fe669e2b1346
|
|
There were multiple issues in retry() in rhel-registration:
- There was no need for it to be recursive (local variables
got overwritten).
- There was no delay between attempts, leading to faster but
more frequent failures.
- The maximum number of attempts was set too low for some environments.
With this patch, rhel-registration now works more reliably over slow links
for portal registration and does not attempt to DDoS the portal or your
satellite server.
Change-Id: I594d3c94867b45a7a58766dbcc66edead78d6a4e
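An iterative retry helper along the lines described above (a sketch; the attempt count, delay and the variables in the usage line are illustrative, the real values live in the rhel-registration script):

    retry() {
        local max_attempts=10
        local delay=30
        local attempt=1
        while ! "$@"; do
            if [ "$attempt" -ge "$max_attempts" ]; then
                echo "Command failed after ${max_attempts} attempts: $*" >&2
                return 1
            fi
            echo "Attempt ${attempt}/${max_attempts} failed, retrying in ${delay}s" >&2
            attempt=$((attempt + 1))
            sleep "$delay"
        done
    }

    # example usage (credentials variables assumed for illustration)
    retry subscription-manager register --username "${REG_USER}" --password "${REG_PASSWORD}"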
|
|
The UpdateDeployment already depends on NetworkDeployment.
We should not run os-net-config unconditionally before the update.
Closes-Bug: #1666227
Change-Id: I48cbf5de00d47c6fdad71ff24c00e9db05cec5d5
|
|
|
|
|
|
Removed from tripleo_upgrade_node.sh (major upgrade) and yum_update.sh
(minor update). The workaround is no longer needed and in fact has the
opposite effect, killing connectivity to the node. The 'normal' yum
update on nodes delivers the latest openvswitch 2.6.1 with no drama.
Also adds a 'complete' message and some extra debug echo statements for
the logs, and removes the python-zaqarclient install, which is no longer needed.
Closes-Bug: 1669714
Change-Id: Icd1517bcade36781fa0da21d045ffd9ec68efc38
|
|
|
|
|
|
In extraconfig/pre_deploy/rhel-registration/scripts/rhel-registration,
there's a line that says:
retry subscription-manager repos --disable '*'
I believe this is broken and results in unwanted shell expansion.
The proper line should be:
retry subscription-manager repos --disable='*'
This regression came from commit 2b06ed8adce2bcc18480b71c0f20a0ec2d21de19.
(Also see https://review.openstack.org/#/c/381233 )
This patch fixes the regression while preserving the functionality
of the above change.
Closes-Bug: 1667316
Change-Id: I54f0db3f1f596f6356f7445cdc61737f20f14318
Signed-off-by: Vincent S. Cojot <vincent@cojot.name>
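To illustrate why the spelling matters (a standalone demonstration, not the script itself): if the arguments are ever re-expanded without quotes, for example inside a helper like retry, a bare * globs against the current directory, while --disable=* normally matches nothing and is passed through literally.

    cd /etc
    # note: $@ is intentionally left unquoted below to reproduce the problem
    set -- --disable '*'
    echo $@          # expands to: --disable adjtime aliases ... (every file in /etc)
    set -- --disable='*'
    echo $@          # stays literal: --disable=*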
|
|
Package update fails on compute nodes when yum_update checks the
pacemaker status via the systemctl command. The failure happens because
the exit-on-error (-e) option was enabled recently. Fix it by executing
the command only on nodes where pacemaker is enabled.
Closes-Bug: #1668266
Change-Id: I2aae4e2fdfec526c835f8967b54e1db3757bca17
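A sketch of the guard this describes (the hiera key and config path are assumptions; the authoritative code is in yum_update.sh): only ask systemctl about pacemaker on nodes where pacemaker is enabled, so the exit-on-error option does not abort the update on compute nodes.

    pacemaker_status=""
    if hiera -c /etc/puppet/hiera.yaml pacemaker_enabled 2>/dev/null | grep -qi true; then
        # '|| :' keeps a non-zero exit from is-active from killing the script under set -e
        pacemaker_status=$(systemctl is-active pacemaker || :)
    fi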
|
|
|
|
|
|
During the upgrade from M to N, I encountered an error in a step
requiring the upgrade of the mysql version. The variable backup_flags
is undefined at that point.
Change-Id: Ic6681c40934b27a03d00a75007d7f12d6d540de3
Closes-Bug: #1667731
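A minimal sketch of the fix this implies (the dump command and path are illustrative): give backup_flags a default value so the mysql upgrade step never references an unset variable.

    # default to an empty string when the variable was never set
    backup_flags=${backup_flags:-""}
    mysqldump $backup_flags --all-databases > /var/tmp/mysql_upgrade_backup.sql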
|
|
It is quite common in large enterprises that direct HTTP/HTTPS access to the
outside world is denied from nodes/systems, but reaching out through a proxy is
allowed. This change adds support for an HTTP proxy when RHEL overcloud nodes
reach out to either the RHSM portal or to a satellite server. This allows the
overcloud nodes to download updates even in locked-down environments.
The following variables are settable through templates:
rhel_reg_http_proxy_host:
rhel_reg_http_proxy_port:
rhel_reg_http_proxy_username:
rhel_reg_http_proxy_password:
Note the following restrictions:
- If setting rhel_reg_http_proxy_host,
then rhel_reg_http_proxy_port cannot be empty.
- If setting rhel_reg_http_proxy_port,
then rhel_reg_http_proxy_host cannot be empty.
- If setting rhel_reg_http_proxy_username,
then rhel_reg_http_proxy_password cannot be empty.
- If setting rhel_reg_http_proxy_password,
then rhel_reg_http_proxy_username cannot be empty.
- If setting either rhel_reg_http_proxy_username or
rhel_reg_http_proxy_password, then rhel_reg_http_proxy_host
AND rhel_reg_http_proxy_port cannot be empty
Change-Id: I003ad5449bd99c01376781ec0ce9074eca3e2704
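A hedged sketch of how these settings can be validated and applied (the shell variable names and the subscription-manager wiring are illustrative; the templates drive this through the registration script):

    # enforce the host/port pairing described above
    if [ -n "${REG_HTTP_PROXY_HOST}" ] && [ -z "${REG_HTTP_PROXY_PORT}" ]; then
        echo "ERROR: rhel_reg_http_proxy_host requires rhel_reg_http_proxy_port" >&2
        exit 1
    fi
    if [ -n "${REG_HTTP_PROXY_PORT}" ] && [ -z "${REG_HTTP_PROXY_HOST}" ]; then
        echo "ERROR: rhel_reg_http_proxy_port requires rhel_reg_http_proxy_host" >&2
        exit 1
    fi
    # credentials require each other, and require the proxy host/port to be set
    if [ -n "${REG_HTTP_PROXY_USERNAME}${REG_HTTP_PROXY_PASSWORD}" ]; then
        if [ -z "${REG_HTTP_PROXY_USERNAME}" ] || [ -z "${REG_HTTP_PROXY_PASSWORD}" ] || \
           [ -z "${REG_HTTP_PROXY_HOST}" ] || [ -z "${REG_HTTP_PROXY_PORT}" ]; then
            echo "ERROR: proxy credentials require username, password, host and port" >&2
            exit 1
        fi
    fi

    # point subscription-manager (RHSM portal or satellite) at the proxy
    if [ -n "${REG_HTTP_PROXY_HOST}" ]; then
        subscription-manager config \
            --server.proxy_hostname="${REG_HTTP_PROXY_HOST}" \
            --server.proxy_port="${REG_HTTP_PROXY_PORT}"
        if [ -n "${REG_HTTP_PROXY_USERNAME}" ]; then
            subscription-manager config \
                --server.proxy_user="${REG_HTTP_PROXY_USERNAME}" \
                --server.proxy_password="${REG_HTTP_PROXY_PASSWORD}"
        fi
    fi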
|
|
Adds two checks, one for the CephMon and one for the CephOSD upgrade
tasks, borrowed from ceph-ansible.
Change-Id: I0a0e60d277240130c6bd76a74ccc13354b87a30a
Co-Authored-By: Sebastien Han <seb@redhat.com>
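As an illustration of the kind of check borrowed from ceph-ansible (a sketch; the real tasks live in the CephMon/CephOSD upgrade_tasks, and the timeout shown is an assumption): wait for the cluster to report HEALTH_OK before taking a mon or an OSD down.

    # bail out if the cluster does not become healthy within the timeout
    timeout 300 bash -c "until ceph health | grep -qw HEALTH_OK; do sleep 10; done"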
|
|
|
|
|
|
|
|
And change the conditional to use hiera instead.
Change-Id: Icf91dd91c0ab04e7919172fcfd130183bfd427b4
|
|
We want to apply a puppet manifest for the non-controller role, but we
need to apply it in stages. By loading the proper hieradata we get the
needed step configuration.
Change-Id: I07bfeee7b7d9a9b8c2c20e5d5c9ed735d0bfc842
Closes-Bug: #1664304
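A rough sketch of what applying the manifest in stages looks like (the paths, step count and manifest name are assumptions for illustration): the current step is written into hieradata before each puppet run, so the manifest only applies the resources belonging to that step.

    for step in 1 2 3 4 5 6; do
        # expose the current step to the manifest via hieradata
        echo "step: ${step}" > /etc/puppet/hieradata/upgrade_step.yaml
        puppet apply /etc/puppet/manifests/overcloud_upgrade.pp
    done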
|
|
|
|
We want to run puppet on each role that has the
disable_upgrade_deployment flag set to true. It will run after the upgrade
of the role and before running the whole converge step.
Change-Id: Ia85be688d070dfb5b8337e8ef3c4bc439fb6052e
|
|
We do not need the upgrade scripts used to migrate Ceph from
hammer to jewel. This submission removes them, along with the legacy
upgrade scripts used for the BlockStorage role.
Change-Id: I2674216dd9b5b849de6a2624ee1115420a254182
|
|
This delivers a /root/tripleo_upgrade_node.sh to those nodes
that have the disable_upgrade_deployment flag set to true.
They will later be upgraded manually by the operator, who will
invoke the script delivered here using upgrade-non-controller.sh.
We can also deliver any service-specific upgrade configuration,
such as configuring nova-compute to use the placement API, as this
is required in order for placement to be configured and installed
during the subsequent upgrade steps for controller services.
This removes the compute- and swift-specific upgrade scripts, as
they are now merged into the common tripleo_upgrade_node.sh,
removing any hard-coded reference to a particular role name
(compute/objectstorage) and relying only on the
disable_upgrade_deployment flag in roles_data.yaml.
Change-Id: I4531a4038b78087ef4a1a62c35f1328822427817
Co-Authored-By: Mathieu Bultel <mbultel@redhat.com>
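The operator workflow this enables looks roughly like the following (run from the undercloud; the node names are illustrative, and upgrade-non-controller.sh comes from tripleo-common):

    source ~/stackrc
    # invokes the delivered /root/tripleo_upgrade_node.sh on each node that has
    # disable_upgrade_deployment set to true in roles_data.yaml
    upgrade-non-controller.sh --upgrade overcloud-novacompute-0
    upgrade-non-controller.sh --upgrade overcloud-objectstorage-0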
|
|
Swift rings created or updated on the overcloud nodes will now be
stored on the undercloud at the end of the deployment. An
additional consistency check is executed before storing them,
ensuring all rings within the cluster are identical.
These rings will be retrieved (before Puppet runs) by every node
when an UPDATE is executed, which keeps them in a
consistent state across the cluster.
This makes it possible to add, remove or replace nodes in an
existing cluster without manual operator interaction.
Closes-Bug: 1609421
Depends-On: Ic3da38cffdd993c768bdb137c17d625dff1aa372
Change-Id: I758179182265da5160c06bb95f4c6258dc0edcd6
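A hedged sketch of the consistency check and hand-off described above (the tarball path is an assumption; the deployment wires this up differently): swift-recon can compare ring md5sums across the cluster before the rings are stored on the undercloud.

    # refuse to store the rings if any storage node disagrees on the ring md5sums
    if ! swift-recon --md5 | grep -q "0 error"; then
        echo "Swift rings are not consistent across the cluster" >&2
        exit 1
    fi
    # bundle the rings and builder files for storage on the undercloud
    (cd /etc/swift && tar -czf /tmp/swift-rings.tar.gz ./*.builder ./*.ring.gz)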
|
|
|
|
|
|
|
|
These are only used for TLS-everywhere, and fill in the Kerberos
principals that will need to be created for the certs used by the
overcloud. With this, the metadata hook will format these principals
correctly and pass them on to the nova metadata service,
where they can be used if a plugin is enabled.
bp tls-via-certmonger
bp novajoin
Change-Id: I873094bb69200052febda629fda698a7a782c031
|
|
|
|
We only need to know whether the pacemaker service is in the active state.
Change-Id: Id5e16f2bbbe51b8a0c250eb5d35e89e61a7b3383
Resolves: rhbz#1414779
Closes-Bug: #1656980
|
|
|
|
|
|
Update pending templates to use the release name alias.
Change-Id: I39f9be212d3e9f3bec6f45d9757eca7a3b0ccc06
|
|
Glance registry is not required for v2 of the API, and there are
plans to deprecate it in the glance community.
Let's remove v1 support since it has been deprecated for a while in
Glance.
Depends-On: I77db1e1789fba0fb8ac014d6d1f8f5a8ae98ae84
Co-Authored: Flavio Percoco <flaper87@gmail.com>
Change-Id: I0cd722e8c5a43fd19336e23a7fada71c257a8e2d
|
|
This submission:
- Fixes an error in the AllNodesExtraConfig resource
(servers can't be merged multiple times).
- Adds environment files to deploy a swap file/partition
without manual edits to the templates.
- Ensures the deployment does not fail when the swap partition is
unavailable: if the partition has not been created, the deployment
simply continues.
- Removes empty extra lines in the swap templates.
- Adjusts the descriptions and removes unnecessary comments in the
swap templates.
Closes-Bug: 1652184
Change-Id: I828bbbbd4c178956aac74af49f80fcd4f62fa16b
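A sketch of the guard described in the third bullet above (the device label is illustrative): only enable the swap partition when it actually exists, so deployments without the partition still continue.

    swap_partition=$(realpath /dev/disk/by-label/swap1 2>/dev/null || true)
    if [ -b "${swap_partition}" ]; then
        swapon "${swap_partition}"
        echo "${swap_partition} swap swap defaults 0 0" >> /etc/fstab
    else
        echo "Swap partition not found, continuing without swap"
    fi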
|
|
|
|
Occasionally we can see transient network outages when attempting
to register with the Red Hat Portal or Satellite server. This causes
deployment or scale-out operations to fail. These outages are minimal
and retrying often resolves the issue. This becomes more prevalent
during testing, as we deploy infrastructure far more frequently.
Change-Id: If23785fbe2eea4643918b2e68915bbc13c1b1112
|