|
Prior to https://review.openstack.org/#/c/271450/, os-net-config was
applied via os-refresh-config directly, which meant that even though
UpdateDeployment and NetworkDeployment could be created concurrently,
we'd always run the os-net-config step first.
However, now that we apply both steps via scripts (both handled by the
same heat-config hook), we should add an explicit dependency to
ensure the network is always fully configured before attempting to run
any update. This should avoid the risk of, e.g., running an update on
initial deployment before the network connectivity to access yum repos
is in place.
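A minimal sketch of the shape of this dependency (resource types and the
NetworkConfig/UpdateConfig/Server names are simplified stand-ins, not
the exact template contents):

    resources:
      NetworkDeployment:
        type: OS::Heat::SoftwareDeployment
        properties:
          config: {get_resource: NetworkConfig}  # runs os-net-config
          server: {get_resource: Server}

      UpdateDeployment:
        type: OS::Heat::SoftwareDeployment
        # The explicit dependency added here: don't start the package
        # update until the network has been fully configured.
        depends_on: NetworkDeployment
        properties:
          config: {get_resource: UpdateConfig}
          server: {get_resource: Server}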
Change-Id: Idff7a95afe7b49b6384b1d0c78e76522fb1f8eb7
Related-Bug: #1666227
(cherry picked from commit 626b820b57498ff5002c5530962e6e4fd5644b51)
|
|
This doesn't exist in Newton images, so install it via the
ansible tasks during step 3 (when all other packages are updated).
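A sketch of such an upgrade task, following the step-gated pattern used
by the composable upgrade tasks (the package name and the exact step
gating are illustrative):

    upgrade_tasks:
      - name: Install package missing from Newton images
        yum: name=<the-missing-package> state=latest
        tags: step3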
Change-Id: I08fb7855b910ccc5a8ab2d73f1de15b695784abd
Closes-Bug: #1664265
(cherry picked from commit e6ed8a75eb8bebd22eef469bedeea7beae28037d)
|
|
Change-Id: Icc5fbf99301ae47344e1582767e1e7a4687f491b
(cherry picked from commit 7273a3de0296f6f75d4d549f72645ca916d967de)
|
|
Merge "..." into stable/ocata
|
|
In ocata we changed the rabbitmq ha policy to "ha-exactly" via the
following changes:
- tht: Iace6daf27a76cb8ef1050ada0de7ff1f530916c6
- puppet-tripleo: Ib62001c03e1e08f58cf0c6e0ba07a8879a584084
We took care of the upgrade path via I3a97505d2ae1ae27f3080ffe74c33fdabffd2420
With the move to the ansible-based composable upgrades we left this change out,
so an upgraded environment and a newly deployed one now end up with
different policies:
- Upgraded environment
Attributes: set_policy="ha-all ^(?!amq\.).* {"ha-mode":"all"}"
- New environment
Attributes: set_policy="ha-all ^(?!amq\.).* {"ha-mode":"exactly","ha-params":2}"
We need to add this pcs resource change to our upgrade scripts.
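A sketch of the kind of task this means in practice (the policy string
matches the attributes above; the bootstrap-node guard and task wiring
are assumptions for illustration):

    - name: Update the rabbitmq ha policy to ha-exactly
      shell: |
        pcs resource update rabbitmq \
          set_policy='ha-all ^(?!amq\.).* {"ha-mode":"exactly","ha-params":2}'
      # pcs changes are cluster-wide, so run this only once,
      # e.g. on the bootstrap node.
      when: is_bootstrap_node|bool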
Change-Id: I3c4113c207e9d0c45be43df7c2379ac26cb60692
Closes-Bug: #1668600
(cherry picked from commit 41514d0cd603194fecb327f96995c60a9fe6e67a)
|
|
Merge "..." into stable/ocata
|
|
Change-Id: I189edaf69c0e97a3399e6af939595f98322d7c03
Partially-Implements: blueprint overcloud-upgrades-per-service
(cherry picked from commit dedef90750827fd7b413eac32223f929c8ac5555)
|
|
During upgrades, validations test whether a service is running before
the upgrade process starts.
In some cases, the service doesn't exist yet, so we don't want to run
the validation.
This patch makes sure we check that the service is actually present on
the system before validating that it's running correctly.
It also makes sure that services are enabled before trying to stop them.
It allows use-cases where we want to add new services during an upgrade.
Also install new packages for services added in Ocata, so we can validate
upgrades on scenario jobs.
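A sketch of the resulting pattern (the service name is a placeholder):

    - name: Check if <service> is enabled
      command: systemctl is-enabled <service>
      ignore_errors: true
      register: service_enabled

    - name: Validate that <service> is running before the upgrade
      command: systemctl is-active <service>
      when: service_enabled.rc == 0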
Change-Id: Ib48fb6b1557be43956557cbde4cbe26b53a50bd8
(cherry picked from commit 7c84a9b390c469e716e5802eef078d2df3902c6a)
|
|
Merge "..." into stable/ocata
|
|
In the previous release[1], the services were stopped before the
pacemaker services, so that they got a chance to send their last
messages to the database/rabbitmq queue.
Let's do the upgrade in the same order.
[1] https://github.com/openstack/tripleo-heat-templates/blob/stable/newton/extraconfig/tasks/major_upgrade_controller_pacemaker_2.sh#L13-L71
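Roughly, expressed as ansible tasks (service names and the cluster stop
are illustrative, not the literal upgrade script):

    # First stop the OpenStack services so they can flush their last
    # messages to the database / rabbitmq...
    - name: Stop openstack services
      service: name={{ item }} state=stopped
      with_items:
        - openstack-nova-api
        - openstack-glance-api

    # ...and only then stop the pacemaker-managed core services.
    - name: Stop pacemaker cluster
      command: pcs cluster stop --all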
Change-Id: I1c4045e8b9167396c9dfa4da99973102f1af1218
(cherry picked from commit fb7821378242e595184a38e1e0cb7e9978c0f806)
|
|
Change-Id: I0d7e151a931d02068dea80d7cf57b99736e689e6
(cherry picked from commit 077c2eeb40bf1e9d5ad011c4c6036614d03886b6)
|
|
Change-Id: I79169baf4c59e9325355992288de2e9ad8088e3b
(cherry picked from commit bbe274862de5bfb317b9d44684556cb200c17f08)
|
|
Change-Id: I91c3c93c1571288daa78b6d24b0aa9824a2bb5c4
(cherry picked from commit db02313b2869aac0d0ddd41129eb9bebed1a24ad)
|
|
Adding etcd upgrade tasks
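A sketch of what such tasks typically look like (step tags and package
handling are illustrative):

    upgrade_tasks:
      - name: Stop etcd service
        service: name=etcd state=stopped
        tags: step2
      - name: Update etcd package
        yum: name=etcd state=latest
        tags: step3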
Change-Id: Ie891a1a03585b3aec1ed30c176b5fb6b67d7e4b7
(cherry picked from commit 489761e848ad4be0eb67bc405968ef2870b81f05)
|
|
Merge "..." into stable/ocata
|
|
Rename ec2-api_enabled to ec2_api_enabled so we avoid this error:
The conditional check 'ec2-api_enabled.rc == 0' failed.
The error was: error while evaluating conditional
(ec2-api_enabled.rc == 0): 'api_enabled' is undefined"}
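The hyphen is the problem: Jinja2 parses "ec2-api_enabled" as
"ec2 - api_enabled", hence the complaint about "api_enabled" being
undefined. A sketch of the corrected pattern (the systemd unit name is
an assumption for illustration):

    - name: Check if ec2-api is enabled
      command: systemctl is-enabled openstack-ec2-api
      ignore_errors: true
      register: ec2_api_enabled   # underscores only in registered names

    - name: Stop ec2-api service
      service: name=openstack-ec2-api state=stopped
      when: ec2_api_enabled.rc == 0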
Change-Id: Id325fd7eba397155eac7fb6c7410f88486173ba1
(cherry picked from commit d54532679edce04a5bdc3159489b77baf90b14ca)
|
|
Change-Id: I256d2fcb6353d029750113c1fec59a89c82583ca
(cherry picked from commit a9c64bd39d28cc073a7f2d19a17466d29be6cc0f)
|
|
Add base upgrade steps for auditd
Change-Id: Iaa56eb40ed80d20744cf8bab18504d700466d26e
(cherry picked from commit 5838d6f765a1ca9535b5d57c1299439040a5def2)
|
|
Change-Id: I316e14317e0586e895dcb4e084aa54e7665f6a20
(cherry picked from commit 2cebb99729005a31fbe24a957d2db84397f1952a)
|
|
Change-Id: I2703dd1a7e3eefa0ad6f7b74183101de6c1ad915
(cherry picked from commit b6214b0c5b92c85dbfa45007295db70888b509ab)
|
|
It is quite common in large enterprises that direct HTTP/HTTPS access to the
outside world is denied from nodes/systems, but reaching out through a proxy
is allowed.
This change adds support for an HTTP proxy when RHEL overcloud nodes reach
out to either the RHSM portal or to a satellite server. This allows the
overcloud nodes to download updates even in locked-down environments.
The following variables are settable through templates:
rhel_reg_http_proxy_host:
rhel_reg_http_proxy_port:
rhel_reg_http_proxy_username:
rhel_reg_http_proxy_password:
Note the following restrictions:
- If setting rhel_reg_http_proxy_host,
then rhel_reg_http_proxy_port cannot be empty.
- If setting rhel_reg_http_proxy_port,
then rhel_reg_http_proxy_host cannot be empty.
- If setting rhel_reg_http_proxy_username,
then rhel_reg_http_proxy_password cannot be empty.
- If setting rhel_reg_http_proxy_password,
then rhel_reg_http_proxy_username cannot be empty.
- If setting either rhel_reg_http_proxy_username or
rhel_reg_http_proxy_password, then rhel_reg_http_proxy_host
AND rhel_reg_http_proxy_port cannot be empty
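A sketch of how these could be set, e.g. via parameter_defaults in the
rhel-registration environment file (values are placeholders):

    parameter_defaults:
      rhel_reg_http_proxy_host: "proxy.example.com"
      rhel_reg_http_proxy_port: "8080"
      rhel_reg_http_proxy_username: "proxyuser"
      rhel_reg_http_proxy_password: "proxypass"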
Closes-Bug: #1668618
Change-Id: I003ad5449bd99c01376781ec0ce9074eca3e2704
(cherry picked from commit 3002edc90a631f3adb8ae0ee696062347f94ea52)
|
|
Change-Id: I162ed6aa2d1039096e4a90e8678e48894a7119c3
|
|
This patch updates the Cinder service to reference the correct
catalogue entries for Nova as configured by TripleO. The default
settings as set by TripleO do not match our catalogue entries,
and when Cinder attempts to callback to Nova in certain events
(such as a Cinder volume retype) it can raise an EndpointNotFound
error.
Out of the box we have settings in /etc/cinder/cinder.conf like:
nova_catalog_info = compute:Compute Service:internalURL
where the format is "<service_type>:<service_name>:<endpoint_type>".
Yet our catalogue has no mention of 'Compute Service'. This patch
also fixes the reference for the adminURL.
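A sketch of the corrected values, expressed as the kind of hieradata the
templates pass to puppet (the exact hiera keys are illustrative; "nova"
is the service name TripleO registers in the catalogue):

    config_settings:
      cinder::api::nova_catalog_info: 'compute:nova:internalURL'
      cinder::api::nova_catalog_admin_info: 'compute:nova:adminURL'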
Related-Bug: #1668281
Change-Id: I888ee07ef02d82578867e33608901c06e6478472
Co-Authored-By: Greg Charot <gcharot@redhat.com>
(cherry picked from commit 09d8c1278604cc2aec42b7284c01cf7eb8b074b6)
|
|
As of Ocata, whenever Heat needs to get the value of an output from a
nested Stack it will still load the Stack in memory and re-resolve the
output value. This means that the EndpointMap's endpoint_map output, which
is huge, gets loaded and recalculated whenever showing the EndpointMap or
KeystoneUrl outputs of the main (overcloud) stack. To avoid this, store the
value locally in an OS::Heat::Value resource. This means that the
EndpointMap will only be resolved once, during the stack create/update, and
the outputs can refer to that value.
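A sketch of the pattern (names simplified relative to the real
overcloud templates):

    resources:
      EndpointMapValue:
        type: OS::Heat::Value
        properties:
          # Resolved once at stack create/update time instead of on
          # every output lookup.
          value: {get_attr: [EndpointMap, endpoint_map]}

    outputs:
      EndpointMap:
        description: Mapping of the resolved service endpoints
        value: {get_attr: [EndpointMapValue, value]}
      KeystoneURL:
        description: Keystone public endpoint
        value: {get_attr: [EndpointMapValue, value, KeystonePublic, uri]}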
Related-Bug: #1661728
Change-Id: Ia79eceeea309f5508713a310849f5d366a035430
Depends-On: If0f80cab94c28514d1569b1025362ab9d9d31512
(cherry picked from commit b2ee58c7f6883011b4ba8b387eedc63d3600aea0)
|
|
This package wasn't installed in the Newton image and we need to
install it during upgrade to be able to skip preupgrade validations.
Change-Id: If6ee7a3801756ac445ae35534803eab175ad8e40
Closes-Bug: 1667967
(cherry picked from commit 96618f85e6b92a4d1d2413e72adafab2abcbddc6)
|
|
This doesn't exist in Newton images, so install it via the
ansible tasks during step 3 (when all other packages are updated).
Change-Id: I700a711473d10a50fad6b1797453a74c0cdff54b
Closes-Bug: 1667965
(cherry picked from commit 63cb515c602d8a231a086b1db098c129ed81eaff)
|
|
This needs to handle a ServiceNetMap containing non-default
network names when they are overridden via the *NetName parameters.
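For example, with an override like the following (parameter name as in
the network templates, value illustrative), the ServiceNetMap defaults
have to resolve to the renamed network rather than to the stock
"internal_api" name:

    parameter_defaults:
      InternalApiNetName: internal_api_cloud1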
Closes-Bug: #1651541
Change-Id: I95d808444642a37612a495e822e50449a7e7da63
(cherry picked from commit 47f2579fa24e722b451c29b5f6435c5b5fe65429)
|
|
Pacemaker is now deployed by default, and it would be great to have it
tested in all scenarios so that they deploy real environments as used in
production.
Change-Id: Iff879cd641f6207644b1b6309a6ec4129f1a255a
(cherry picked from commit 828788f1d17f5b14a058bf79aeafd526db842d9d)
|
|
I suspect this was forgotten in the initial commits where
we were doing the dbsync in ansible.
Change-Id: Ie337bfba4e61cf3d546d0b79b611b84211ac9d9d
(cherry picked from commit a6789350a292b68fa0c5d0668b4cf1a1f6831531)
|
|
To improve testing coverage in the upgrade CI job, add Pacemaker.
Change-Id: I855ed15642e28cdfda5a7cbd6ff6d01b591dff7e
(cherry picked from commit b352d687ba980eba5b492f5ef676bda20266794d)
|
|
Merge "..." into stable/ocata
|