This change adds an `include` statement to bring in the extra
functionality of the existing puppet-ssh module, which is already
available in RDO.
Using puppet-ssh provides a framework for passing server options in
via hiera values under ssh::server_options.
For example, the sshd_config Banner can now be passed as a server
option, as can all the new parameters outlined in the Launchpad issue
this patch references for closing. For this reason, the former augeas
setting for `Banner /etc/issue` is now managed by the main puppet-ssh
module instead (a sketch follows below).
The change also allows populating MOTD text into `/etc/motd` as well
as `issue.net`.
$bannertext is refactored in accordance with patch [1]
[1] https://review.openstack.org/#/c/442406/
Change-Id: Id329538fb7b623526f1d91d8a513cf3440c86a7c
Related-Bug: 1668543
(cherry picked from commit b35bc80ac2acf18463e4c18c8360862749aa0964)
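A minimal sketch of what this enables, assuming the saz/puppet-ssh
interface; the option names and values below are illustrative, not the
exact settings shipped by this change:
  # Equivalent to setting ssh::server_options in hiera.
  class { '::ssh':
    server_options => {
      'Banner'          => '/etc/issue',
      'PrintMotd'       => 'yes',
      'PermitRootLogin' => 'no',
    },
  }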
|
|
This patch configures SSH tunneling for nova cold-migration and reuses the
tunnel for libvirt live-migration unless TLS has been enabled.
Change-Id: I367757cbe8757d11943af7e41af620f9ce919a06
Depends-On: Iac1763761c652bed637cb7cf85bc12347b5fe7ec
(cherry picked from commit ccbcd11276c7bc3ffc8f013d9a5b2d3944bf76cf)
|
|
Every time we use the apache module, regardless of whether SSL is in
use, we have to configure mod_ssl from puppet-apache or we will hit an
issue during package updates. The file /etc/httpd/conf.d/ssl.conf from
the mod_ssl package contains Listen 443, while apache::mod::ssl just
configures the SSL bits and does not add a Listen directive. If
apache::mod::ssl is not included, the ssl.conf file is removed and then
recreated during a mod_ssl package update, which causes a conflict on
port 443 (see the sketch below).
Change-Id: Ic5a0719f67d3795a9edca25284d1cf6f088073e8
Related-Bug: 1682448
Resolves: rhbz#1441977
(cherry picked from commit 9e729c0db22865d036860346eb6b81c4c2108719)
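A minimal sketch of the inclusion pattern described above; the
surrounding profile class is omitted and only the includes matter:
  # Always pull in apache::mod::ssl wherever the apache class is used,
  # so puppet keeps owning /etc/httpd/conf.d/ssl.conf and a package
  # update cannot reintroduce a conflicting Listen 443.
  include ::apache
  include ::apache::mod::ssl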
|
|
Merge "…" into stable/ocata
|
|
puppet-tripleo" into stable/ocata
|
|
Apache is configured in step 3, so if we configure ceilometer in step 4,
the configuration is removed on updates. We need to configure it in step
3 with the other apache services to ensure we don't have issues on
updates (the step gating is sketched below).
Change-Id: Icc9d03cd8904c93cb6e17f662f141c6e4c0bf423
Related-Bug: #1664418
(cherry picked from commit 890178bd6f6f465ffcb8cf4ad9b8019a1d6dc653)
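A hedged sketch of the step-gating pattern this relies on; the profile
and included class names are assumptions for illustration, not the
exact code of this change:
  class tripleo::profile::base::ceilometer::api (
    $step = hiera('step'),
  ) {
    # Configure the API vhost in the same step as the Apache base
    # profile (step 3) so a later step does not tear it down and
    # recreate it.
    if $step >= 3 {
      include ::ceilometer::api
      include ::ceilometer::wsgi::apache
    }
  }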
|
|
We configure apache in step 3, so we need to configure the gnocchi api in
step 3 as well to prevent unnecessary service restarts during updates.
Change-Id: I30010c9cf0b0c23fde5d00b67472979d519a15be
Related-Bug: #1664418
(cherry picked from commit 9de4c92571fdbe342a20a68e4ee44feb55464007)
|
|
Currently, mongodb has no limit on how much memory it can consume. This
enforces a restriction on the mongodb service by setting limits through
systemd (sketched below).
The puppet-systemd module has support for limits. The MemoryLimit
support is added in the following pull request:
https://github.com/camptocamp/puppet-systemd/pull/23
Closes-bug: #1656558
Change-Id: Ie9391aa39532507c5de8dd668a70d5b66e17c891
(cherry picked from commit 3aa86a4ea3c2406f79d6283cbb158f67136b5e9a)
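A minimal sketch using camptocamp/puppet-systemd's service_limits
define, assuming the MemoryLimit support from the pull request above;
the unit name and value are illustrative only:
  ::systemd::service_limits { 'mongod.service':
    limits => {
      'MemoryLimit' => '20G',  # illustrative cap, not the shipped value
    },
  }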
|
|
This allows decoupling the Swift ringbuilding logic from the Controller
and ObjectStorage roles. A follow-up patch will modify
tripleo-heat-templates to use this modified class.
This now downloads the Swift rings even if ring building is disabled or
there is no need to rebalance. This is required because operators can
disable ring building but use the same mechanism to distribute
pre-built rings to the nodes.
If ring building is disabled, the rings won't be uploaded back to the
undercloud at the end.
Related-Bug: 1665641
Change-Id: Ifd6fa5b398d98e8998630ea0c9a2ce9867ceba2b
(cherry picked from commit 3412150d91dc7fe6e9f168b4ffdbb4d54c39fc55)
|
|
This sets the create_domain_entry flag for the ldap_backend resource,
which will create the domain for the LDAP backend (previously only the
configuration was created, not the domain). Furthermore, this flag will
also refresh the keystone service so the changes take effect.
Note that this is only done in step 3, so the domains are created and
the refresh happens in that step. Also, this is only done on the
bootstrap node: when the other nodes start, they will already have the
domains available in the keystone database and there won't be a need
to restart (see the sketch below).
Related-Bug: #1677603
Depends-On: Ib6c633b6a975e4b760c10a2aef3c252885b05e28
Change-Id: Id879cf5c5ae39d37bf58b73c78733001d2b03d9c
(cherry picked from commit 13ea87e658e36d1afcc3e4db7f43bcfc068e1f49)
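A hedged sketch of the call site; the step/bootstrap guards, backend
name and URL are assumptions, and create_domain_entry is the flag added
by the puppet-keystone change this depends on:
  if $step >= 3 and $sync_db {
    keystone::ldap_backend { 'tripleoldap':
      url                 => 'ldap://192.0.2.250',
      # Create the domain itself, not just its configuration, and
      # refresh keystone afterwards.
      create_domain_entry => true,
    }
  }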
|
|
Fixes a syntax error reported by the bundle rake syntax check:
  Could not parse for environment *root*: Syntax error at ')'; expected '}'
Change-Id: Idfb254df068b3d7342a6ea3c71dabd1316a61bdf
(cherry picked from commit 33e0fe959d849acdab4b084ffd31d242c58ff6b6)
|
|
This patch adds the appropriate include to make sure that the keystone
user, services, etc. are created when octavia is selected (see below).
Closes-bug: #1680588
Change-Id: I0b6d657a0300538292223923d8808c23f936c193
(cherry picked from commit 23e723255cf46fd730cae185a0dc1f7194a511e0)
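A minimal sketch, assuming the standard keystone::auth class shipped by
puppet-octavia; the step guard is illustrative:
  if $step >= 3 {
    # Creates the octavia keystone user, service and endpoints.
    include ::octavia::keystone::auth
  }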
|
|
Ldap_backend is a define, so we need to declare it as a resource. If
ldap_backend_enable is set by tripleo-heat-templates, we declare the
ldap_backend resources accordingly.
Given an environment such as the following:
  parameter_defaults:
    KeystoneLdapDomainEnable: true
    KeystoneLDAPBackendConfigs:
      tripleoldap:
        url: ldap://192.0.2.250
        user: cn=openstack,ou=Users,dc=redhat,dc=example,dc=com
        password: Secrete
        suffix: dc=redhat,dc=example,dc=com
        user_tree_dn: ou=Users,dc=redhat,dc=example,dc=com
        user_filter: "(memberOf=cn=OSuser,ou=Groups,dc=redhat,dc=example,dc=com)"
        user_objectclass: person
        user_id_attribute: cn
        user_allow_create: false
        user_allow_update: false
        user_allow_delete: false
    ControllerExtraConfig:
      nova::keystone::authtoken::auth_version: v3
      cinder::keystone::authtoken::auth_version: v3
It would then create a domain called tripleoldap with an LDAP
configuration as defined by the hash. The parameters in the hash are
those accepted by the keystone::ldap_backend resource in
puppet-keystone.
More backends can be added as additional entries in that hash (the
declaration pattern is sketched below).
Partial-Bug: 1677603
Co-Authored-By: Juan Antonio Osorio Robles <jaosorior@redhat.com>
Co-Authored-By: Guillaume Coré <gucore@redhat.com>
Signed-off-by: Cyril Lopez <cylopez@redhat.com>
Change-Id: I1593c6a33ed1a0ea51feda9dfb6e1690eaeac5db
(cherry picked from commit b8388e378a9151bccbac0db0478b1ef5d1e2e3fb)
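A hedged sketch of how the hash above might be turned into resources;
create_resources and the hiera keys are assumptions about the
implementation, not the exact code:
  if hiera('keystone_ldap_domain_enable', false) {
    # One keystone::ldap_backend resource per entry in the configs hash.
    $ldap_backends = hiera('keystone_ldap_backend_configs', {})
    create_resources('keystone::ldap_backend', $ldap_backends)
  }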
|
|
This change makes the global cluster-recheck-interval property
configurable and picks a lower default (60s) when a pacemaker remote
node is deployed.
Pacemaker defaults cluster-recheck-interval to 15 minutes. This value
is too high when a pacemaker remote service is deployed: with the
default, a rebooted pacemaker remote node can be reported as offline by
pacemaker for up to 15 minutes.
With this change we do the following (see the sketch below):
1) Do nothing in case pacemaker remote is not deployed.
2) When pacemaker remote is deployed and the operator has not
specified otherwise, set the recheck interval to 60s.
3) When the operator specifies the recheck interval, set that value.
Change-Id: I900952b33317b7998a1f26a65f4d70c1726df19c
Closes-Bug: #1679753
(cherry picked from commit f464e9f703b824f8971ade50c32884748caffefc)
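A sketch of the selection logic only, using an illustrative pcs exec;
the real profile presumably goes through a puppet-pacemaker resource
and uses different variable names:
  if $pacemaker_remote_deployed {
    # An operator-provided value wins; otherwise fall back to 60s.
    $recheck = pick($cluster_recheck_interval, '60s')
    exec { 'set-cluster-recheck-interval':
      command  => "pcs property set cluster-recheck-interval=${recheck}",
      unless   => "pcs property show cluster-recheck-interval | grep -q ${recheck}",
      path     => ['/usr/sbin', '/usr/bin', '/bin'],
      provider => shell,
    }
  }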
|
|
This avoids a useless apache restart and saves time during the
deployment.
Note: the backport is not 100% clean, as Heat API was not deployed in
WSGI during the Ocata cycle, so here it only applies to Aodh.
Related-Bug: #1664418
Change-Id: Ie00b717a6741e215e59d219710154f0d2ce6b39e
(cherry picked from commit 2272bcabba8752cd1876f85b1f9b83b0c7592c94)
|
|
This causes issues in deployments that are not using the ML2
ComputeNeutronCorePlugin or the OVS agent on the compute nodes.
Closes-Bug: 1679202
Change-Id: I9cdfd115add8c0d2d3ae6802e7bde007c1677c67
Signed-off-by: Tim Rozet <trozet@redhat.com>
(cherry picked from commit 1b93ca14c4d58c360424fbf34f669014b34d3b4b)
|
|
The ceilometer user is needed for other ceilometer services to
authenticate with keystone even when the API is not present, so that
data can be dispatched to gnocchi. Let's keep these separate so the
user always exists even when the API does not.
Depends-On: Iffebd40752eafb1d30b5962da8b5624fb9df7d48
Closes-bug: #1677354
Change-Id: I8f4e543a7cef5e50a35a191fe20e276d518daf20
(cherry picked from commit 38e4976b7b80487e26c75ece20bab631597240a3)
|
|
Add an explicit tunnel timeout configuration option to increase the
tunnel timeout for persistent socket connections from two minutes (2m)
to one hour (3600s). A configuration was already present to apply a
tunnel timeout to the zaqar_ws endpoint, but that only applies to
connections made directly to that endpoint. Since the UI now uses
mod_proxy to proxy WebSocket connections for Zaqar, the same timeout is
now applied to the ui haproxy server as well (sketched below).
Change-Id: If749dc9148ccf8f2fa12b56b6ed6740f42e65aeb
Closes-Bug: 1672826
(cherry picked from commit e8125cb3640e0fe74b8617aaf55686d5645c8f7f)
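A minimal sketch of the effect using puppetlabs-haproxy's listen
define; the actual change goes through the tripleo haproxy profile, and
the bind address and port here are assumptions:
  haproxy::listen { 'ui':
    bind    => { '192.0.2.10:3000' => [] },
    options => {
      # Keep proxied WebSocket connections open for up to an hour.
      'timeout' => ['client 3600s', 'tunnel 3600s'],
    },
  }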
|
|
We configure apache in step 3, so horizon should be configured at the
same time, or else updates will cause horizon to be unavailable during
the update process.
Change-Id: I4032f7c24edc0ff9ed637e213870cdd3beb9a54e
Closes-Bug: #1678338
(cherry picked from commit e2928717412242faa4eb15d778f1b5c0952edc08)
|
|
The eqlx_use_chap, eqlx_chap_login and eqlx_chap_password were
previously deprecated and are scheduled to be removed in Pike. This
change updates these parameters to use the replacement params.
See I295d8388ba17dd60e83995e7c82f64f02a3c4258 for more details.
Change-Id: I0f229ed2e7bb65d9da81c5caa69dbe1a4aded814
(cherry picked from commit 9cd4ddce32b4f14e7f6168416fcaee26a64f7a90)
|
|
The rabbitmq user check is moved from step >= 1 to step >= 2. There is
no guarantee that rabbitmq is running at step 1, especially if updating
a failed stack that never made it past step 1 to begin with.
Change-Id: I029193da4c180deff3ab516bc8dc2da14c279317
Closes-Bug: #1675194
(cherry picked from commit aa9af086f05e466e88ac2a85ecc9d39f5a6d1e2f)
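A hedged sketch of the relocated guard; the rabbitmq_user type comes
from puppetlabs-rabbitmq, and the user variables and attributes are
illustrative rather than the exact resource being moved:
  if $step >= 2 {
    # Only touch rabbitmq once the service is guaranteed to be running.
    rabbitmq_user { $rabbit_user:
      ensure   => present,
      password => $rabbit_password,
    }
  }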
|
|
Change-Id: Id933276fab16eebd72751dca136ad805547e6291
Related-Bug: #1676491
(cherry picked from commit f137661aa178a6b390976470ddec7ed77eb05cf5)
|
|
Without this, gnocchi resource types are not created (they are skipped
initially) and the resources from ceilometer won't make it to gnocchi.
Closes-bug: #1674421
Depends-On: I753f37e121b95813e345f200ad3f3e75ec4bd7e1
Change-Id: Ib45bf1b3e526a58f675d7555fe7bb5038dadeede
(cherry picked from commit aec471a78d46d839e98026c4cb98acb412a7b424)
|
|
We attempt to use iscsi-iname in an exec for our nova compute profile
but we do not ensure that the package providing this command is
installed. This change adds the package definition for
iscsi-initiator-utils to ensure it is installed before trying to use
iscsi-iname.
Change-Id: I1bfdb68170931fd05a09859cf8eefb50ed20915d
Closes-Bug: #1675462
(cherry picked from commit 2102a610c14d357f99a531250e676d6366559212)
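A minimal sketch of the ordering being added; the exec name and command
are illustrative stand-ins for the profile's actual iscsi-iname exec:
  package { 'iscsi-initiator-utils':
    ensure => present,
  }
  exec { 'generate-iscsi-initiator-name':
    command => '/bin/sh -c "echo InitiatorName=$(/usr/sbin/iscsi-iname) > /etc/iscsi/initiatorname.iscsi"',
    creates => '/etc/iscsi/initiatorname.iscsi',
    require => Package['iscsi-initiator-utils'],
  }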
|
|
services" into stable/ocata
|
|
We currently set the haproxy stats socket to /var/run/haproxy.sock.
On CentOS/RHEL with selinux enabled this will break:
  avc: denied { link } for pid=284010 comm="haproxy"
  name="haproxy.sock" dev="tmpfs" ino=330803
  scontext=system_u:system_r:haproxy_t:s0
  tcontext=system_u:object_r:var_run_t:s0 tclass=sock_file
The blessed, correctly-labeled path is /var/lib/haproxy/stats (see the
sketch below).
Note: I am setting only Partial-Bug because I would still like to make
this a parameter so other distros can simply override the path. But
that change is more appropriate for Pike than for Ocata.
Change-Id: I62aab6fb188a9103f1586edac1c2aa7949fdb08c
Partial-Bug: #1671119
(cherry picked from commit 5f8607711bb85150bb9631559f0538254ba5c5cc)
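A minimal sketch with the haproxy class from puppetlabs-haproxy; only
the stats socket line matters here and the remaining global options are
omitted:
  class { '::haproxy':
    global_options => {
      # /var/lib/haproxy carries the haproxy selinux label, unlike /var/run.
      'stats' => 'socket /var/lib/haproxy/stats mode 600 level user',
    },
  }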
|
|
The panko db_sync command comes from the panko-api package, so we move
the db_sync into the api manifest, as is done for other services such
as barbican.
This is necessary because, when the overcloud deploy relies on puppet
to do the installations, the previous setup failed: the command was not
yet available at the step where it was being run.
Change-Id: I20a549cbaa2ee4b2c762dbae97f5cbf4d0b517c8
Closes-Bug: #1671716
(cherry picked from commit d73c2630b534b277122db68620be8923c4d3a6b4)
|
|
Using keystone_authtoken credentials for this purpose is deprecated, and also
prevents ironic-conductor from being used as a separate role.
As a side effect, this change makes it possible to potentially enable
ironic-inspector support in the future (it's not enabled yet).
Change-Id: I21180678bec911f1be36e3b174bae81af042938c
Partial-Bug: #1661250
(cherry picked from commit ffe6ae2c24f82df620df14ee4be8bd292cb95075)
|
|
Changes include:
- Adds spec testing.
- Only raise limits in non-HA deployments. puppet-systemd will restart
the mariadb service, which breaks HA deployments, hence we only want to
do this in non-HA.
- Minor fix to a hiera value that was referenced directly rather than
passed as a parameter to mysql.pp.
Partial-Bug: #1648181
Related-Bug: #1524809
Co-Authored-By: Feng Pan <fpan@redhat.com>
Change-Id: Id063bf4b4ac229181b01f40965811cb8ac4230d5
Signed-off-by: Tim Rozet <trozet@redhat.com>
Signed-off-by: Feng Pan <fpan@redhat.com>
(cherry picked from commit c9acf8a687ea64686c1ecceeff45add014752121)
|
|
Since the norpm provider can prevent the chronyd package from actually
getting purged, we need to make sure the chronyd service is stopped and
disabled so that it does not conflict with ntpd.
Change-Id: I7a697aba7aa5a27ba4ab6e46018057f7f01dfab2
Closes-Bug: #1665426
(cherry picked from commit 37ba3a8db5e38955469e8bc9158388379d64abc8)
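A minimal sketch of the service handling described above:
  # Even if the package stays installed (norpm provider), make sure the
  # daemon cannot run and collide with ntpd.
  service { 'chronyd':
    ensure => 'stopped',
    enable => false,
  }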
|
|
Merge "…" into stable/ocata
|
|
Systemd starts mariadb as user mysql, so in order to allow a large
number of connections (e.g. max_connections=4096) it is necessary to
raise the file descriptor limit via a systemd drop-in file.
When installing an undercloud, such a drop-in file is currently
generated by instack-undercloud (in puppet-stack-config.pp), but
non-HA overclouds also need such a drop-in to be generated.
In order to avoid duplicating code, the drop-in creation code should
be provided by puppet-tripleo. By default, no drop-in is generated;
it has to be enabled by instack-undercloud or tripleo-heat-templates
once they use it (to create the undercloud or a non-HA overcloud,
respectively).
This patch does not aim at generating a dynamic file limit based on
the number of connections; that should land in another dedicated
patch. Instead, it just reuses the limit currently set for the
undercloud and HA overclouds (a sketch of the drop-in follows below).
Also, the generation of the drop-in does not force a mysql restart
like it currently does in instack-undercloud, to avoid unexpected
service disruption on a non-HA overcloud after a minor update.
Co-Authored-By: Tim Rozet <trozet@redhat.com>
Depends-On: I7ca7b5f7614971455cae2bf7c4bf8264b642b0dc
Change-Id: Ia0907b2ab6062a93fb9363e39c86535a490fbaf6
Partial-Bug: #1648181
Related-Bug: #1524809
(cherry picked from commit 09665170f6d0f4536a48dd4d1444e07aa064bed7)
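A hedged sketch of the drop-in generation; the path, limit value and
resource names are illustrative, and no service restart is triggered,
matching the intent above:
  file { '/etc/systemd/system/mariadb.service.d':
    ensure => directory,
  }
  file { '/etc/systemd/system/mariadb.service.d/limits.conf':
    ensure  => file,
    content => "[Service]\nLimitNOFILE=16384\n",
  }
  exec { 'mariadb-limits-daemon-reload':
    command     => '/usr/bin/systemctl daemon-reload',
    refreshonly => true,
    subscribe   => File['/etc/systemd/system/mariadb.service.d/limits.conf'],
  }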
|
|
This patch sets neutron's dhcp_agents_per_network equal to the number
of deployed neutron DHCP agents unless it is explicitly set otherwise
(sketched below).
Conflicts:
manifests/profile/base/neutron.pp
Note: spec/classes/tripleo_profile_base_neutron_spec.rb removed from
backport as it required defining the neutron class as a precondition to
satisfy a requirement for a rabbit password. This leads to a duplicate
definition.
Partial-bug: #1632721
Change-Id: I5533e42c5ba9f72cc70d80489a07e30ee2341198
(cherry picked from commit 52a68ffc8f060e1961458a524e5861cea02d1c1c)
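A hedged sketch of the defaulting logic; the hiera key holding the DHCP
agent node list is an assumption for illustration:
  if $dhcp_agents_per_network {
    $dhcp_agents_count = $dhcp_agents_per_network
  } else {
    # Default to the number of nodes running the neutron DHCP agent.
    $dhcp_agents_count = count(hiera('neutron_dhcp_agent_node_names', []))
  }
  class { '::neutron':
    dhcp_agents_per_network => $dhcp_agents_count,
  }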
|
|
Merge "…" into stable/ocata
|
|
By default, Puppet does virtual package matching if precise name
matching fails. The docker-distribution RPM "provides" docker-registry:
  bash-4.2# rpm -q --whatprovides docker-registry
  docker-distribution-2.5.1-1.el7.x86_64
This means that when we wanted to make the docker-registry package
absent, we were actually removing docker-distribution instead. This is
now fixed with allow_virtual => false, so only name matching is
performed (see the sketch below).
Change-Id: I1f93b404085f0bc2b6c063f573c801db6409c0bb
Closes-Bug: #1666459
(cherry picked from commit d12c004bc9c630c756a6b0df351916b9e04b9778)
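A minimal sketch of the resulting resource:
  package { 'docker-registry':
    ensure        => absent,
    # Match only the exact package name; never fall back to whatever
    # "provides" docker-registry (i.e. docker-distribution).
    allow_virtual => false,
  }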
|
|
When fixing LP#1643487 we added ?bind_address to all DB URIs.
Since this clashes with Cells v2, because the URIs become host
dependent, we need a new approach to pass bind_address to pymysql
that leaves the DB URIs host-independent.
We first create an /etc/my.cnf.d/tripleo.cnf file with a [tripleo]
section, and in this section we add the correct bind-address option
(see the sketch below).
Note that we use the puppet augeas lens and not the mysql one,
because the mysql one does not support custom sections *and* there
are older versions around which do not like the /etc/my.cnf.d/* path.
The reason for not reusing an existing mariadb file (my.cnf or
galera.cnf) is that pymysql's ini file support is not robust
enough at the moment: https://github.com/PyMySQL/PyMySQL/issues/548
The reason for putting this file creation code only on the controller
nodes is the following: the slow VIP failover only happens if a
service runs where the VIPs exist. The VIPs get created in the
haproxy profile, and that is why, in order to have fast VIP failovers,
the MySQLClient profile must live where the haproxy service is running.
Co-Authored-By: Damien Ciabrini <dciabrin@redhat.com>
Partial-Bug: #1663181
Change-Id: Iff8bd2d9ee85f7bb1445aa2e1b3cfbff1f397b18
(cherry picked from commit f6116ff0f350aeecdaa346e4e49d208be49ce6b9)
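A hedged sketch of the augeas resource; the resource title, change path
and bind address variable are assumptions about the implementation:
  augeas { 'tripleo-mysql-client-conf':
    incl    => '/etc/my.cnf.d/tripleo.cnf',
    # Ini-style lens that supports a custom [tripleo] section.
    lens    => 'Puppet.lns',
    changes => [
      "set tripleo/bind-address '${mysql_client_bind_address}'",
    ],
  }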
|
|
Which language options to offer to the UI users is determined in the
configuration file. Let's show all possible languages by default,
unless specified otherwise.
Change-Id: I513303bf82dca53e2291ab66f2385a2985a1846e
Related-Bug: #1663279
(cherry picked from commit 053ee06787539f6da07985968d6c3b0194e56008)
|