path: root/extraconfig/pre_deploy/rhel-registration/environment-rhel-registration.yaml
Age | Commit message | Author | Files / Lines
2017-11-02 | Upgrade rhel_reg_sat_repo to 6.2 | Emilien Macchi | 1 file / +1, -1
When deploying with RHSM, sat-tools 6.2 will be installed instead of 6.1. The new version is supported on RHEL 7.4 and provides the katello-agent package.

Change-Id: I04a9feab02bf606ad6ca923a17947dcca30258da
Closes-Bug: #1728638
(cherry picked from commit b248ae1447940f81513be9904a24197bd4af1126)
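For illustration, a minimal sketch of the relevant portion of environment-rhel-registration.yaml after this bump, assuming the usual parameter_defaults layout; the exact repository ID is an assumption based on Red Hat's naming scheme, not taken from this log:

  # Sketch only: rhel_reg_sat_repo default after the 6.2 bump.
  # The repository ID below is assumed; check the template for the real value.
  parameter_defaults:
    rhel_reg_sat_repo: "rhel-7-server-satellite-tools-6.2-rpms"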
2017-02-22 | Adds http proxy support for registering RHEL overcloud nodes | Vincent S. Cojot | 1 file / +4, -0
It is quite common in large enterprises that direct HTTP/HTTPS access to the outside world is denied from nodes/systems, but reaching out through a proxy is allowed. This change adds support for an HTTP proxy when RHEL overcloud nodes reach out to either the RHSM portal or to a Satellite server. This allows the overcloud nodes to download updates even in locked-down environments.

The following variables are settable through templates (see the sketch after this entry):
- rhel_reg_http_proxy_host
- rhel_reg_http_proxy_port
- rhel_reg_http_proxy_username
- rhel_reg_http_proxy_password

Note the following restrictions:
- If setting rhel_reg_http_proxy_host, then rhel_reg_http_proxy_port cannot be empty.
- If setting rhel_reg_http_proxy_port, then rhel_reg_http_proxy_host cannot be empty.
- If setting rhel_reg_http_proxy_username, then rhel_reg_http_proxy_password cannot be empty.
- If setting rhel_reg_http_proxy_password, then rhel_reg_http_proxy_username cannot be empty.
- If setting either rhel_reg_http_proxy_username or rhel_reg_http_proxy_password, then rhel_reg_http_proxy_host AND rhel_reg_http_proxy_port cannot be empty.

Change-Id: I003ad5449bd99c01376781ec0ce9074eca3e2704
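A minimal sketch of how these parameters might be set in the environment file, assuming the standard parameter_defaults layout used by this template; the host, port, and credential values are placeholders, not values from the commit:

  # Sketch only: proxy settings for RHEL registration; values are illustrative.
  parameter_defaults:
    rhel_reg_http_proxy_host: "proxy.example.com"   # hypothetical proxy host
    rhel_reg_http_proxy_port: "8080"                # required whenever host is set
    rhel_reg_http_proxy_username: "proxyuser"       # optional; requires password
    rhel_reg_http_proxy_password: "proxypass"       # optional; requires username

Per the restrictions above, host and port must always be set together, and a username/password pair additionally requires both host and port.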
2016-03-29 | change the default satellite tools rpm repo. | Mike Burns | 1 file / +1, -0
Change-Id: I60ab36b04b8932e4dbee58e21998dc984178b41c
Bugzilla: https://bugzilla.redhat.com/1275281
2015-10-01 | Move RHEL (un)registration to NodeExtraConfig | Steven Hardy | 1 file / +22, -0
Currently, we have a problem because the unregistration happens in the "post deploy" phase. This works fine when the top-level stack is being deleted, but not when the ResourceGroup of servers is being scaled down: in that case the normal "post deploy" update ordering is respected, and we try to unregister after the corresponding server has already been deleted.

So, instead, register/unregister each node inside the unit of scale, e.g. the role template being scaled down. This is possible via the new NodeExtraConfig interface, which means unregistration will take place at the right time both on stack delete and on scale-down (see the sketch below).

Change-Id: I8f117a49fd128f268659525dd03ad46ba3daa1bc
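For context, a minimal sketch of how such a per-node hook is typically wired up in a companion resource_registry environment; the registration template file name is an assumption based on the directory layout in the path above, not taken from this log:

  # Sketch only: map the per-node extra-config hook to the registration template.
  # The target file name is assumed; the actual mapping lives alongside
  # environment-rhel-registration.yaml in the rhel-registration directory.
  resource_registry:
    OS::TripleO::NodeExtraConfig: rhel-registration.yaml

The environment file tracked in this log then supplies the rhel_reg_* parameter_defaults consumed by that hook, so registration and unregistration run within each node's own unit of scale.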