|
There were two problems with this condition that left the
rhel-registration.yaml template broken:
- "conditions" should be "condition"
- The condition should refer to just a condition name defined in the
  "conditions:" section of the template.
Change-Id: I14d5c72cf86423808e81f1d8406098d5fd635e66
Closes-Bug: #1709916
|
|
There is a Heat patch posted (via Depends-On) that resolves the issue
that caused this to be reverted. This reverts the revert, and we need to
make sure all the upgrade jobs pass before we merge this patch.
This reverts commit 69936229f4def703cd44ab164d8d1989c9fa37cb.
Closes-Bug: #1699463
implements blueprint disable-deployments
Change-Id: Iedf680fddfbfc020d301bec8837a0cb98d481eb5
|
|
This reverts commit d6c0979eb3de79b8c3a79ea5798498f0241eb32d.
This seems to be causing issues in Heat during upgrades.
Change-Id: I379fb2133358ba9c3c989c98a2dd399ad064f706
Related-Bug: #1699463
|
|
Commit I46941e54a476c7cc8645cd1aff391c9c6c5434de added support for
blacklisting servers from triggered Heat deployments.
This commit adds that functionality to the remaining Deployments in
tripleo-heat-templates for the ExtraConfig interfaces.
Since we cannot (and should not) change the ExtraConfig interface, Heat
conditions are used on the actual <role>ExtraConfigPre and
NodeExtraConfig resources instead of using the actions approach on
Deployments.
Change-Id: I38fdb50d1d966a6c3651980c52298317fa3bece4
|
|
Master is now the development branch for Pike, so change the release
alias name accordingly.
Change-Id: I938e4a983e361aefcaa0bd9a4226c296c5823127
|
|
Adds the ability to perform a yum update after completing the RHEL
registration.
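A minimal sketch of the added step, assuming a boolean flag is wired in
from the template (REG_UPDATE is an illustrative name; the real
parameter may differ):

    # Run only when requested, after registration has enabled the
    # required repos, so the update pulls from those channels.
    if [ "${REG_UPDATE:-false}" = "true" ]; then
        yum -y update
    fi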
Change-Id: Id84d156cd28413309981d5943242292a3a6fa807
Partial-Bug: #1640894
|
|
|
|
Previously the rhel registration script disabled the satellite repo
after installing packages from it. This means those packages will
never be updated, which is not desirable from a long-term
maintenance perspective.
I believe this behavior is a holdover from the dib registration
script, where we don't want to leave repos enabled because the
image may be deployed many times and each instance needs to be
re-registered. In t-h-t we don't have that problem because the
script only runs at deploy time, so it's okay and desirable to leave
the repos enabled.
Change-Id: I5d760467b458d90d74507a55effc49b71d22eaa3
Closes-Bug: 1673116
|
|
There were multiple issues in retry() in rhel-registration:
- There was no need for it to be recursive (local variables
  got overwritten)
- There was no delay between attempts, leading to faster but
  more frequent failures.
- The maximum number of attempts was set too low for some environments.
With this patch, rhel-registration works more reliably over slow links
for portal registration and does not DDoS the portal or your
satellite server.
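A sketch of a helper reworked under those constraints (the attempt
count and delay values here are illustrative, not the script's exact
ones):

    retry() {
        local max_attempts=10
        local delay=30
        local attempt=1
        while ! "$@"; do
            if [ "$attempt" -ge "$max_attempts" ]; then
                echo "Failed after $max_attempts attempts: $*" >&2
                return 1
            fi
            attempt=$((attempt + 1))
            sleep "$delay"   # space attempts out instead of hammering
        done
    }

A plain loop keeps all state in one call frame, and the sleep is what
stops the script from hammering the portal or satellite server.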
Change-Id: I594d3c94867b45a7a58766dbcc66edead78d6a4e
|
|
|
|
In extraconfig/pre_deploy/rhel-registration/scripts/rhel-registration,
there's a line that says:
retry subscription-manager repos --disable '*'
I believe this is broken and will result in unintended shell expansion.
The proper line should be:
retry subscription-manager repos --disable='*'
This regression came from commit 2b06ed8adce2bcc18480b71c0f20a0ec2d21de19.
(Also see https://review.openstack.org/#/c/381233)
This patch fixes the regression while preserving functionality
of the above change.
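A hedged illustration of the failure mode (the real retry() internals
may differ): if the wrapper re-expands its arguments unquoted, a bare
'*' argument undergoes pathname expansion, while a pattern glued to
'--disable=' matches no files and stays literal:

    # Hypothetical wrapper that expands its arguments unquoted.
    broken_retry() {
        $@   # unquoted: words are re-split and glob-expanded
    }
    broken_retry echo --disable '*'    # '*' expands to the files in $PWD
    broken_retry echo --disable='*'    # '--disable=*' matches nothing, stays literal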
Closes-Bug: 1667316
Change-Id: I54f0db3f1f596f6356f7445cdc61737f20f14318
Signed-off-by: Vincent S. Cojot <vincent@cojot.name>
|
|
It is quite common in large enterprises that direct HTTP/HTTPS access to
the outside world is denied from nodes/systems, but reaching out through
a proxy is allowed.
This change adds support for an HTTP proxy when RHEL overcloud nodes reach
out to either the RHSM portal or to a satellite server. This allows the
overcloud nodes to download updates even in locked-down environments.
The following variables are settable through templates:
rhel_reg_http_proxy_host:
rhel_reg_http_proxy_port:
rhel_reg_http_proxy_username:
rhel_reg_http_proxy_password:
Note the following restrictions (illustrated in the sketch below):
- If setting rhel_reg_http_proxy_host, then rhel_reg_http_proxy_port
  cannot be empty, and vice versa.
- If setting rhel_reg_http_proxy_username, then
  rhel_reg_http_proxy_password cannot be empty, and vice versa.
- If setting either rhel_reg_http_proxy_username or
  rhel_reg_http_proxy_password, then rhel_reg_http_proxy_host
  AND rhel_reg_http_proxy_port cannot be empty.
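A hedged sketch of how the script might enforce these restrictions and
pass the proxy along (the environment variable names are illustrative;
subscription-manager does accept --proxy, --proxyuser and
--proxypassword):

    if [ -n "$REG_HTTP_PROXY_HOST" ] && [ -z "$REG_HTTP_PROXY_PORT" ]; then
        echo "ERROR: proxy host set but proxy port empty" >&2; exit 1
    fi
    if [ -z "$REG_HTTP_PROXY_HOST" ] && [ -n "$REG_HTTP_PROXY_PORT" ]; then
        echo "ERROR: proxy port set but proxy host empty" >&2; exit 1
    fi
    # Credentials must be set together, and require host AND port.
    if [ -n "$REG_HTTP_PROXY_USERNAME" ] || [ -n "$REG_HTTP_PROXY_PASSWORD" ]; then
        if [ -z "$REG_HTTP_PROXY_USERNAME" ] || [ -z "$REG_HTTP_PROXY_PASSWORD" ] || \
           [ -z "$REG_HTTP_PROXY_HOST" ] || [ -z "$REG_HTTP_PROXY_PORT" ]; then
            echo "ERROR: incomplete proxy credential configuration" >&2; exit 1
        fi
    fi
    proxy_opts=""
    if [ -n "$REG_HTTP_PROXY_HOST" ]; then
        proxy_opts="--proxy=$REG_HTTP_PROXY_HOST:$REG_HTTP_PROXY_PORT"
        if [ -n "$REG_HTTP_PROXY_USERNAME" ]; then
            proxy_opts="$proxy_opts --proxyuser=$REG_HTTP_PROXY_USERNAME"
            proxy_opts="$proxy_opts --proxypassword=$REG_HTTP_PROXY_PASSWORD"
        fi
    fi

The resulting $proxy_opts string can then be appended to the
subscription-manager (and, for satellite, curl) invocations.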
Change-Id: I003ad5449bd99c01376781ec0ce9074eca3e2704
|
|
|
|
Occasionally we can see transient network outages when attempting
to register with the Red Hat Portal or a Satellite server. These cause
deployment or scale-out operations to fail. The outages are brief,
and retrying often resolves the issue. This becomes more prevalent
during testing, as we deploy infrastructure far more frequently.
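With this, the network-sensitive calls can simply be wrapped in a retry
helper like the one sketched further up, e.g. (option names
illustrative):

    retry subscription-manager register --org="$REG_ORG" \
        --activationkey="$REG_ACTIVATION_KEY"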
Change-Id: If23785fbe2eea4643918b2e68915bbc13c1b1112
|
|
Heat now supports release name aliases, so we can replace the
inconsistent mix of date-based versions with one consistent
version that aligns with the supported version of Heat for this
t-h-t branch.
This should also help new users who sometimes copy/paste old templates
and discover intrinsic functions in the t-h-t docs don't work because
their template version is too old.
Change-Id: Ib415e7290fea27447460baa280291492df197e54
|
|
Some accounts get repos automatically enabled when the system is
registered, which is a problem because some of these repos pull in
things we don't want, like early beta versions of software. Since
a full list of desired repos is supposed to be included as part of
the registration config, let's just disable all repos at the start
to ensure we have a clean repo configuration.
I'm not sure whether satellite has the same problem (I would think
not, since satellite should only be providing the desired repos),
so I'm only making this change for portal registration.
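On the portal path this boils down to starting from a clean slate
before enabling the configured list; a sketch, where REG_REPOS stands
in for however the desired repo list reaches the script (the
--disable='*' spelling is the corrected form from the fix above):

    subscription-manager repos --disable='*'
    for repo in $REG_REPOS; do
        subscription-manager repos --enable="$repo"
    done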
Change-Id: I052080352e8b1c9b985e42d91a6c42b3258b0b11
Closes-Bug: 1629922
|
|
Change-Id: I60ab36b04b8932e4dbee58e21998dc984178b41c
Bugzilla: https://bugzilla.redhat.com/1275281
|
|
If the satellite registration url was specified with https, the curl
command used to detect the satellite version would not work as
expected: -L was not passed, and you get redirected to https when
testing the ping API.
To additionally handle the case where https is specified, also use curl
directly with -k to download the configuration rpm, instead of using
rpm with a url.
Also fixes a bug where a missing $ broke the reference to the
$satellite_version variable.
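A sketch of both fixes (the ping endpoint and rpm path are illustrative
and vary by Satellite version):

    # -L follows any http -> https redirect; -k tolerates the
    # satellite's self-signed certificate.
    sat_version=5
    if curl -L -k -s "$REG_SAT_URL/katello/api/ping" | grep -q ok; then
        sat_version=6
    fi
    # Download the configuration rpm with curl for the same reasons,
    # rather than pointing rpm at the url directly.
    curl -L -k -O "$REG_SAT_URL/pub/katello-ca-consumer-latest.noarch.rpm"
    rpm -Uvh katello-ca-consumer-latest.noarch.rpm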
Change-Id: I984fdfc415eeeed4ef29cc8d0812e1b67545d6b1
|
|
|
|
Add Satellite 5 support to the RHEL registration environment and
resources. The registration script is updated to support both Satellite
versions in place, given the similarity of the options for the two
scenarios.
The Satellite version is detected based on $REG_SAT_URL, and that
determines whether subscription-manager or rhnreg_ks is used.
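A sketch of the resulting branch (variable names are illustrative;
the detection itself is the curl probe refined by the fix further up):

    if [ "$satellite_version" = "6" ]; then
        subscription-manager register --org="$REG_ORG" \
            --activationkey="$REG_ACTIVATION_KEY"
    else
        rhnreg_ks --serverUrl="$REG_SAT_URL/XMLRPC" \
            --activationkey="$REG_ACTIVATION_KEY"
    fi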
Change-Id: Ic261c8a16a7d6d3978f8bfc6e53f75dbe1b716db
|
|
There are two reasons the name property should always be set for
deployment resources:
- The name often shows up in logs, files and API calls; the default
  derived name is long and unhelpful.
- Sorting by name determines the merge order of os-apply-config, and the
  execution order of puppet/shell scripts (note this is different from
  resource dependency order), so leaving the default name results in an
  undetermined order which could lead to unpredictable deployment of
  configs.
This change simply sets the name to the resource name, but a future
change should prepend each name with a run-parts-style two-digit prefix
so that the order is explicitly stated. Documentation for extraconfig
needs to clearly state what prefix is needed to override which
merge/execution order.
For existing overcloud stacks, Heat currently replaces deployment
resources when the name changes, so this change depends on the Heat fix
referenced below.
Depends-On: I95037191915ccd32b2efb72203b146897a4edbc9
Change-Id: Ic4bcd56aa65b981275c3d4214588bfc4de63b3b0
|
|
Currently, we have a problem because the unregistration happens in the
"post deploy" phase. This works fine when the top-level stack is being
deleted, but not when the ResourceGroup of servers is being scaled down,
because then the normal "post deploy" update ordering is respected and
we try to unregister after the corresponding server has been deleted.
So, instead, register/unregister each node inside the unit of scale,
i.e. the role template being scaled down, which is possible via the new
NodesExtraConfig interface. This means unregistration will take
place at the right time both on stack delete and on scale-down.
Change-Id: I8f117a49fd128f268659525dd03ad46ba3daa1bc
|