If os-collect-config/config.json is updated before an upgrade/update,
then the os-net-config run will automatically erase the
keepalived-managed IPs.
This is a hackish way to ensure that keepalived is restarted during the
next phase so that the IPs are recreated.
It basically adds a comment line to the keepalived.conf file (making it
different from the Puppet-managed one) if it is there. This will force
a Puppet restart of the keepalived service, putting the IPs back on the
undercloud.
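
A minimal sketch of the idea (the marker text and paths are
illustrative, not the exact implementation):

  exec { 'taint-keepalived-conf':
    # Append a marker so the file differs from what Puppet manages; the
    # next Puppet run rewrites it and restarts keepalived, restoring
    # the VIPs.
    command => '/bin/sed -i "1i # force keepalived restart on next puppet run" /etc/keepalived/keepalived.conf',
    onlyif  => '/usr/bin/test -f /etc/keepalived/keepalived.conf',
    unless  => '/bin/grep -q "force keepalived restart" /etc/keepalived/keepalived.conf',
  }
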
Change-Id: I56b706ff44ba31aa87a63f870940831ce02a6e77
Closes-Bug: #1640213
Related-Bug: #1638029
The latest patchset of https://review.openstack.org/393361 was actually
not working.
The `if defined` idiom depends on *evaluation* order.
At the time it is read in the haproxy.pp class, the line that loads the
class 'haproxy' has not yet been reached, so the `defined` result is
false and the constraint is not added.
For this reason, the use of `defined` in modules is not advised by
puppetlabs[1].
[1] https://docs.puppet.com/puppet/latest/reference/function.html#defined
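
A minimal illustration of the pitfall (class names are only
illustrative):

  # Parse-order dependent: if Class['haproxy'] has not been declared
  # yet when this line is evaluated, defined() returns false and the
  # ordering constraint is silently skipped.
  if defined(Class['haproxy']) {
    Class['keepalived'] -> Class['haproxy']
  }
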
Change-Id: Ibd352cb313f8863d62db8987419378bed5b87256
Relates-To: #1638029
Update the documented default for bind_host to match the previous change
http://review.openstack.org/386817/
Change-Id: Iff048ba7152c1b7e945f284311215c8f872c1409
Closes-bug: #1640104
aodh, ceilometer, gnocchi and neutron need the X-Forwarded-Proto header
in order to return links with the correct protocol when SSL is enabled.
This enables it in HAProxy.
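
The HAProxy directives involved look roughly like this, shown as the
kind of listen options hash passed to HAProxy (the exact parameter
wiring is assumed):

  $ssl_listen_options = {
    'http-request' => [
      'set-header X-Forwarded-Proto https if { ssl_fc }',
      'set-header X-Forwarded-Proto http if !{ ssl_fc }',
    ],
  }
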
Change-Id: Icceab92f86b1cc40d42195fa4ba0c75f302795b8
Closes-Bug: #1640126
In HA deployments, we now check mysql nodes every 1s and remove them
immediately if they fail. Previously we would check every 2s and allow
them to fail 5 checks before being removed, producing errors from other
OpenStack services for 10s, which causes confusion and delay for
operators.
Additionally, these check options are now exposed as a class parameter
so they can be overridden by operators.
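
Roughly, the new member options look like this (the variable name is
assumed; it is the list that ends up on the mysql balancermember
entries):

  $mysql_member_options = [
    'backup',
    'check inter 1s',                    # probe every second instead of every 2s
    'on-marked-down shutdown-sessions',  # close sessions as soon as a node is marked down
  ]
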
Closes-Bug: #1639189
Change-Id: I0b915f790ae5a4b018a212d3aa83cca507be05e9
In order for the browser to trust the certificate served by HAProxy
we need to include the CA cert in the PEM file that the endpoints
serve.
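
A sketch of the idea using puppetlabs-concat (paths are illustrative):

  concat { '/etc/pki/tls/private/overcloud_endpoint.pem':
    mode => '0440',
  }
  concat::fragment { 'endpoint-service-cert':
    target => '/etc/pki/tls/private/overcloud_endpoint.pem',
    source => '/etc/pki/tls/certs/overcloud-endpoint.crt',
    order  => '01',
  }
  concat::fragment { 'endpoint-service-key':
    target => '/etc/pki/tls/private/overcloud_endpoint.pem',
    source => '/etc/pki/tls/private/overcloud-endpoint.key',
    order  => '02',
  }
  concat::fragment { 'endpoint-ca-cert':
    target => '/etc/pki/tls/private/overcloud_endpoint.pem',
    source => '/etc/pki/tls/certs/ca-bundle.crt',
    order  => '03',
  }
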
Change-Id: Ibce76c1aa04bd3cb09a804c6e9789c55d8f2b417
Closes-Bug: #1639807
This patch changes the rabbit_hosts config generation to work properly
with IPv6 addresses.
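
A sketch of the address handling (variable names assumed):

  # Bare IPv6 addresses must be bracketed before the port is appended,
  # e.g. "fd00::1" becomes "[fd00::1]:5672".
  $rabbit_hosts = $rabbit_node_ips.map |$ip| {
    if $ip =~ /:/ { "[${ip}]:5672" } else { "${ip}:5672" }
  }
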
Closes-Bug: #1639881
Change-Id: I07cd983880a4a75a051e081dcb96134cb5c6f5e8
It's been proposed that this may help with the
('Connection aborted.', BadStatusLine("''",)) errors.
This patch increases the queue, server and client timeouts to 2m (the
default is 1m).
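
In puppetlabs-haproxy terms this is the sort of defaults change
involved (key names assumed):

  $haproxy_defaults_options = {
    'timeout' => [
      'queue 2m',
      'client 2m',
      'server 2m',
    ],
  }
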
Related-Bug: #1638908
Change-Id: Ie4f059f3fad2271bb472697e85ede296eee91f5d
Use rabbitmq_node_ips to find out where the rabbitmq nodes are, and use
the correct IPv6 syntax if required.
Closes-Bug: 1637443
Change-Id: Ibc0ed642931dd3ada7ee594bb8c70a1c3462206d
http://logs.openstack.org/08/393008/1/check/gate-puppet-openstack-integration-4-scenario002-tempest-ubuntu-xenial/283f87f/logs/puppet-version.txt.gz
Change-Id: Iae209b40bae184a9eee8f7bdbaa86140c08e74c5
Rather than using the heat::keystone::domain class, which also includes
the configuration options, we should just create the user for heat in
keystone independently of the configuration.
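
A rough sketch of what that means (user, domain and hiera key names are
illustrative; the real profile wiring may differ):

  # Create only the keystone user for heat's stack domain, without
  # pulling in the rest of heat::keystone::domain's configuration.
  keystone_user { 'heat_stack_domain_admin::heat_stack':
    ensure   => present,
    password => hiera('heat_stack_domain_admin_password'),
  }
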
Change-Id: I7d42d04ef0c53dc1e62d684d8edacfed9fd28fbe
Related-Bug: #1638350
Closes-Bug: #1638626
This optionally enables TLS for Cinder API in the internal network.
If internal TLS is enabled, each node that is serving the Cinder API
service will use certmonger to request its certificate.
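
Roughly what "use certmonger to request its certificate" amounts to,
expressed with the certmonger CLI in an exec (paths, the tracking
nickname and the FQDN are illustrative; the real profiles use dedicated
defines for this):

  exec { 'request-cinder-api-cert':
    command => join([
      '/usr/bin/getcert request',
      '-I cinder-api-cert',
      '-f /etc/pki/tls/certs/httpd-cinder-api.crt',
      '-k /etc/pki/tls/private/httpd-cinder-api.key',
      '-D overcloud-controller-0.internalapi.localdomain',
      '-C "systemctl reload httpd"',
    ], ' '),
    unless  => '/usr/bin/getcert list | /bin/grep -q cinder-api-cert',
  }
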
bp tls-via-certmonger
Change-Id: Ib4a9c8d3ca57f1b02e1bb0d150f333db501e9863
Change-Id: Id4dc2379b0c423012a0b3aaf49d1e1a7d633a03b
This optionally enables TLS for Nova API in the internal network.
If internal TLS is enabled, each node that is serving the Nova API
service will use certmonger to request its certificate.
Note that this doesn't enable internal TLS for the nova metadata
service since it doesn't run over httpd. This will be handled in
a later commit.
bp tls-via-certmonger
Change-Id: I88380a1ed8fd597a1a80488cbc6ce357f133bd70
When using an SSL setup for the undercloud, the admin and public VIPs
required for SSL binding by haproxy are created by keepalived.
This makes sure that keepalived is started before haproxy and thus that
the interfaces are indeed present.
This patch also ensures this happens for the overcloud SSL
configuration. The case where a load-balancing technology other than
haproxy is used is not covered.
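
The ordering itself is a plain, unconditional chain in the haproxy
profile (the guard shown here is illustrative):

  if hiera('keepalived_enabled', false) {
    Class['::keepalived'] -> Class['::haproxy']
  }
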
Closes-Bug: #1638029
Change-Id: I98cb0dcd7f389a1dd38ec8324429bfef4979aa66
New Newton release
Change-Id: I152fbd1dcaac37474183d60654db15a9a4918209
ODL was missing the transparent binding mode, which causes HA
deployments to fail since HAProxy will try to come up on every node
(even without the VIP).
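
In puppetlabs-haproxy terms the fix boils down to adding 'transparent'
to the ODL frontend bind options, so HAProxy can bind the VIP even on
nodes that do not currently hold it (address and port are illustrative):

  haproxy::listen { 'opendaylight':
    bind => {
      '192.0.2.10:8081' => ['transparent'],
    },
  }
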
Closes-Bug: 1637833
Change-Id: I0bb7839cdcfeacb4ca1a9fc6f878e8b51330be92
Signed-off-by: Tim Rozet <trozet@redhat.com>
Since the service_name is now being passed from t-h-t, we can clean
it up from the profile in puppet.
Change-Id: I724af8c355c3077be64cf472cedbca80af55da01
Depends-On: I13638cd1af52537bef8540f0d5fa5f5f7decd392
In order to make the zaqar service fully composable, the mongo IPs need
to be calculated without assuming that mongo and zaqar are on the same
node.
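
A sketch of the calculation (the hiera key and replica-set name follow
common TripleO usage but are assumed here):

  $mongo_node_ips      = hiera('mongodb_node_ips')
  $mongo_node_string   = join(suffix($mongo_node_ips, ':27017'), ',')
  $database_connection = "mongodb://${mongo_node_string}/zaqar?replicaSet=tripleo"
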
Change-Id: I0b077e85ba5fcd9fdfd33956cf33ce2403fcb088
In some cases, for instance, when updating from a non-SSL setup in
HAProxy to an SSL setup, we don't reload haproxy's configuration.
This is problematic since we need HAProxy to serve the certificates
and the new endpoints.
This forces the reload when puppet notices changes.
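
A minimal sketch of the mechanism (resource titles assumed):

  # Reload HAProxy whenever Puppet changes its configuration, so the
  # new certificates and endpoints are actually served.
  exec { 'haproxy-reload':
    command     => '/usr/bin/systemctl reload haproxy',
    refreshonly => true,
  }
  # e.g. chained from the resource that manages the configuration:
  # Concat['/etc/haproxy/haproxy.cfg'] ~> Exec['haproxy-reload']
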
Change-Id: Ie1dd809e6beef33fadad48de55e488219fb7d686
Closes-Bug: #1636921
If we upgrade a cloud that was configured with an external load
balancer, the process will fail during the convergence step because it
will try to restart haproxy, which is not configured when an external
load balancer is used.
Closes-Bug: #1636527
Change-Id: I6f6caec3e5c96e77437c1c83e625f39649a66c48
With the landing of HA NG in Newton we can actually remove the
pacemaker profiles we do not need. The only ones that are being
used in one form or another are:
$ grep -ir services\/pacemaker environments | awk '{ print $3 }' | sort | uniq
../puppet/services/pacemaker/cinder-backup.yaml
../puppet/services/pacemaker/cinder-volume.yaml
../puppet/services/pacemaker/database/mysql.yaml
../puppet/services/pacemaker/database/redis.yaml
../puppet/services/pacemaker/haproxy.yaml
../puppet/services/pacemaker/manila-share.yaml
../puppet/services/pacemaker/rabbitmq.yaml
../puppet/services/pacemaker.yaml
The only exception is profile/pacemaker/database/mongodbvalidator
because it is included by profile/base/database/mongodb.pp
Change-Id: I80c8559bb2d915385bcc20ae71fe144ddd6591c1
The current redis file descriptor limit is 4096 for two reasons:
- It is run via the redis user
- It is not started via systemd, which would set an explicit
  LimitNOFILE of 10240 (matching the default configuration of a
  maximum of 10000 clients)
Create an /etc/security/limits.d/redis.conf file in order to increase
the fd limit value. With this change we correctly get the following
limits:
[root@overcloud-controller-0 ~]# pcs status |grep -A2 redis
Master/Slave Set: redis-master [redis]
Masters: [ overcloud-controller-2 ]
Slaves: [ overcloud-controller-0 overcloud-controller-1 ]
[root@overcloud-controller-0 ~]# cat /proc/`pgrep redis`/limits | grep open
Max open files 10240 10240 files
Previously this limit was set to 4096.
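
A sketch of the drop-in, managed via Puppet (values taken from above):

  file { '/etc/security/limits.d/redis.conf':
    content => "redis soft nofile 10240\nredis hard nofile 10240\n",
    mode    => '0644',
  }
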
Change-Id: I7691581bad92ad9442cecd82cf44f5ac78ed169f
Closes-Bug: #1635334
proxy for the UI"
Previously we did this with Pacemaker, but with the move to the NG HA
architecture we lost the ability to use NFS mounts as image storage for
Glance. This reimplements the mounting without utilizing Pacemaker. The
mount is by default also written to /etc/fstab so that it persists over
reboot, but this behavior can be disabled.
This could also go to puppet-glance eventually, but not yet -- we need
this backported to Newton because it's a TripleO regression. I don't
think puppet-glance would allow backporting this to Newton, because
from their point of view it would be an RFE rather than a regression.
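
A sketch of the non-Pacemaker mount (share, options and mount point are
illustrative):

  # ensure => mounted both mounts the share and writes the /etc/fstab
  # entry; the profile exposes a flag to skip the fstab persistence.
  mount { '/var/lib/glance/images':
    ensure  => mounted,
    device  => 'nfs-server.example.com:/srv/glance',
    fstype  => 'nfs',
    options => 'defaults,_netdev',
  }
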
Change-Id: I45ad34c36587a8d695069368cf791f1efb68256c
Related-Bug: #1635606