|
Change-Id: If87cc4d55e8524246d2cd41a62805f84780006b2
|
|
Add Pacemaker resources for the Cinder services, along with the relevant
ordering and colocation constraints.
Change-Id: Idc2e1b5ec96d882543f7a1a4ec723a010020ab02
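For illustration, a minimal sketch of what such resources and constraints can look like with the puppet-pacemaker defines used by these manifests (pacemaker::resource::service, pacemaker::constraint::base and pacemaker::constraint::colocation); the service names and parameter values below are assumptions, not the exact ones added by this change:

    # Sketch only: clone the Cinder services under Pacemaker...
    pacemaker::resource::service { 'openstack-cinder-api':
      clone_params => 'interleave=true',
    }
    pacemaker::resource::service { 'openstack-cinder-scheduler':
      clone_params => 'interleave=true',
    }
    # ...start the API before the scheduler...
    pacemaker::constraint::base { 'cinder-api-then-cinder-scheduler':
      constraint_type => 'order',
      first_resource  => 'openstack-cinder-api-clone',
      second_resource => 'openstack-cinder-scheduler-clone',
      first_action    => 'start',
      second_action   => 'start',
    }
    # ...and keep the scheduler on nodes where the API is running.
    pacemaker::constraint::colocation { 'cinder-scheduler-with-cinder-api':
      source => 'openstack-cinder-scheduler-clone',
      target => 'openstack-cinder-api-clone',
      score  => 'INFINITY',
    }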
|
|
Previously we started non-pacemakerized services in step 3 on the
bootstrap node and in step 4 on the other nodes. Now that $sync_db in the
OpenStack Puppet modules is decoupled from $enabled and $manage_service [1],
we can start the services in step 4 on all nodes.
[1] https://bugs.launchpad.net/puppet-glance/+bug/1452278
Change-Id: I6351d972ab00f4661d98338d95310d33f271de2f
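With $sync_db decoupled, the same declaration can be evaluated in step 4 on every node, and only the bootstrap node runs the DB sync. A hedged sketch (the $is_bootstrap_node flag and the Glance class are illustrative; other required class parameters are omitted):

    if hiera('step') >= 4 {
      class { '::glance::api':
        sync_db        => $is_bootstrap_node, # DB sync only on the bootstrap node
        manage_service => true,
        enabled        => true,
        # other required parameters omitted for brevity
      }
    }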
|
|
This will configure the sysctl settings via Puppet instead of the
sysctl image element.
Change-Id: Ieb129d4cbe4b6d4184172631499ecd638073564f
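A minimal sketch, assuming the commonly used sysctl Puppet module that provides a sysctl::value define (the key and value are only an example):

    # Manage a kernel parameter via Puppet instead of the sysctl image element.
    sysctl::value { 'net.ipv4.tcp_keepalive_time':
      value => '5',
    }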
|
|
The exec timeout and retry attempts are configured so that the command is
retried for up to 30 minutes if it runs but is unsuccessful, and for up to
2 hours if it times out.
Change-Id: I4b6b77e878017bf92d7c59c868d393e74405a355
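As an arithmetic illustration of such bounds on a Puppet exec (the command and numbers are assumptions, chosen so that 180 tries x 10s sleep is roughly 30 minutes of quick failures, and 180 x (30s timeout + 10s sleep) is roughly 2 hours of timeouts):

    exec { 'wait-for-cluster-settle':
      command   => 'pcs status | grep -q "partition with quorum"',
      path      => ['/bin', '/usr/bin', '/sbin', '/usr/sbin'],
      timeout   => 30,   # per-attempt timeout in seconds
      tries     => 180,
      try_sleep => 10,
    }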
|
|
We need to write config for OpenStack services on all nodes in step 3 so
that we can then create Pacemaker resources in step 4. (If we wrote
config on non-bootstrap nodes in step 4, as is currently the case, services
on those nodes might be started unconfigured. This is an inter-node
ordering issue that cannot be easily solved from within Puppet
manifests, hence the use of steps to enforce the ordering.)
Change-Id: Ia78ec38520bd1295872ea2690e8d3f8d6b01c46c
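A sketch of the resulting split across steps (class and define names are illustrative, required parameters omitted):

    if hiera('step') >= 3 {
      # Every node writes the configuration, but nothing is started yet.
      class { '::cinder::api':
        manage_service => false,
        enabled        => false,
      }
    }
    if hiera('step') >= 4 {
      # All nodes are configured by now, so Pacemaker may start the
      # service on any of them.
      pacemaker::resource::service { 'openstack-cinder-api':
        clone_params => 'interleave=true',
      }
    }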
|
|
Set clone params according to [1].
[1] https://github.com/beekhof/osp-ha-deploy/blob/f8a65ab4c34f94737edde7db60337b830bfe6311/pcmk/rabbitmq.scenario
Change-Id: I5644de2d6253ab762a1420560ecb5bee2fd83092
Co-Authored-By: Giulio Fidente <gfidente@redhat.com>
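A hedged sketch of how such clone params can be expressed with the puppet-pacemaker ocf define; the agent and parameter strings follow [1] but should be treated as illustrative:

    pacemaker::resource::ocf { 'rabbitmq':
      ocf_agent_name  => 'heartbeat:rabbitmq-cluster',
      resource_params => 'set_policy=\'ha-all ^(?!amq\.).* {"ha-mode":"all"}\'',
      clone_params    => 'ordered=true interleave=true',
      require         => Class['::rabbitmq'],
    }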
|
|
This changes the way RabbitMQ clients reach the servers: they will no
longer go through HAProxy.
Change-Id: I522d7520b383a280505e0e7c8fecba9ac02d2c9b
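In practice this means pointing the services at the RabbitMQ nodes directly rather than at the HAProxy VIP; a sketch using the rabbit_hosts parameter of the OpenStack Puppet modules (host names are illustrative):

    class { '::nova':
      rabbit_hosts => ['overcloud-controller-0:5672',
                       'overcloud-controller-1:5672',
                       'overcloud-controller-2:5672'],
    }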
|
|
Aims at having the Pacemaker resource configuration happen within a
single if condition.
Change-Id: I497538510f80a356e876d476024671b787b77fc9
|
|
Change-Id: I724c341f148fedf725f3b3da778e491741b754ae
|
|
NTP synchronization is moved to step 1, where the initial Pacemaker
configuration is performed.
Memcached is moved to step 2 to make sure it is up before the
OpenStack services are started.
Change-Id: I84121a687ee5ddb522239ecefd4d1d76c2f910b5
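In the step-based manifest this roughly amounts to (sketch; class names per puppetlabs-ntp and puppet-memcached):

    if hiera('step') >= 1 {
      include ::ntp        # clocks in sync before the cluster is formed
    }
    if hiera('step') >= 2 {
      include ::memcached  # up before the OpenStack services that use it
    }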
|
|
Change-Id: Ia8cb04b214c71afc884647fb20be3cc1a309c194
|
|
As with RabbitMQ previously, we can hit the same race condition between
the config being written on all nodes and Pacemaker starting the
services. Configuring the services at least one step earlier than
starting them allows us to get rid of this race condition.
Change-Id: I78f47dfb82ca8609ed40f784d65ba92db3d411f3
|
|
Recently puppet-pacemaker changed in a backward-incompatible way; we
need to reflect the changes in TripleO.
This patch also addresses the non-deterministic ordering between the
corosync service and VIP creation.
Depends-On: Ia68fee38f99dba18badc07eb0adbc473cfcffdf3
Change-Id: Ia7fe14cfb1401be98b62afeed589bb9f1b8af761
Co-Authored-By: Yanis Guenane <yanis.guenane@enovance.com>
|
|
This ensures that the hosts in Corosync and in Pacemaker are the same,
to make our cluster setup compatible with the recommended architecture.
Change-Id: Id81f315768edd24b8978b8de7093e04904591ce2
Closes-Bug: #1447497
Depends-On: Idb9ad017ffb1048f38fedbd55cc974785f6b1c38
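A hedged sketch of driving both memberships from a single node list via puppet-pacemaker (the cluster_members parameter, the hiera key and the $pacemaker_master flag are assumptions):

    # The same host list defines Corosync and Pacemaker membership.
    class { '::pacemaker::corosync':
      cluster_members => hiera('controller_node_names'),
      setup_cluster   => $pacemaker_master,  # illustrative bootstrap-node flag
    }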
|
|
The Pacemaker resource agent might have attempted to start the
service before the rabbitmq-env.conf file was written, making it
attempt to bind on 0.0.0.0.
Co-Authored-By: Jason Guiditta <jguiditt@redhat.com>
Co-Authored-By: Jiri Stransky <jistr@redhat.com>
Change-Id: I081a0bfc6fc3943b8ade71799357022d29317d79
|
|
Increases max_connections, since it is currently set to 151.
This causes problems in a baremetal environment with multiple CPUs.
A related change for haproxy is at https://review.openstack.org/#/c/183046/2.
There is also a bug report at
https://bugzilla.redhat.com/show_bug.cgi?id=1218322
Change-Id: I9b4690191616cc04c4edc7b2402bd9ec54a7d17d
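A sketch of raising the limit through puppetlabs-mysql's override_options (the value shown is illustrative, not the one chosen by this change):

    class { '::mysql::server':
      override_options => {
        'mysqld' => {
          'max_connections' => '1024',  # illustrative; MySQL's default is 151
        },
      },
    }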
|
|
Change-Id: Icfe70de72eb2cf09fe2d00d9ae49baebc79e1886
|
|
This commit aims to support the creation of the Galera cluster via
Pacemaker. With this commit in, three use cases will be supported:
* Non-HA setup / non-Pacemaker setup: the deployment takes place as it
currently does in f20puppet-nonha. Nothing changes.
* Non-HA setup / Pacemaker setup: even though it is a non-HA setup, a
Galera cluster is deployed via Pacemaker with a cluster size of 1.
* HA setup / non-Pacemaker setup: N/A.
* HA setup / Pacemaker setup: it is assumed that an HA setup is always
with Pacemaker, so in this situation Pacemaker deploys a cluster of
3 Galera master nodes.
Depends-On: I7aed9acec11486e0f4f67e4d522727476c767d83
Change-Id: If0c37a86fa8b5aa6d452129bccf7341a3a3ba667
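A heavily hedged sketch of a Pacemaker-managed Galera resource; the heartbeat:galera OCF agent name is real, but every parameter value and the hiera key below are illustrative:

    $galera_node_names = hiera('galera_node_names')  # illustrative hiera key
    pacemaker::resource::ocf { 'galera':
      ocf_agent_name  => 'heartbeat:galera',
      op_params       => 'promote timeout=300s on-fail=block',
      resource_params => "enable_creation=true wsrep_cluster_address='gcomm://${galera_node_names}'",
      require         => Class['::mysql::server'],
    }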
|
|
Use some optimized configuration settings for RabbitMQ when
clustered. Data is ported from Astapor.
Change-Id: If54aff5654dbe75e68197588be12cb3995c77ec7
|
|
The puppet-pacemaker module already provides an abstraction for the
different service types in ::service.
Change-Id: Icd897e18fda01b1bf4722a975c991e26341ac129
Closes-Bug: 1449988
|
|
This patch adds support for using the Heat resource registry
so that end users can enable Pacemaker. This approach
allows us to isolate all of the Pacemaker logic for the
controller in a single template rather than using conditionals
for every service that must support it.
Change-Id: Ibefb80d0d8f98404133e4c31cf078d729b64dac3
|
|
This commit allows one to configure MongoDB as a Pacemaker resource when
EnablePacemaker is set to true.
Change-Id: Iedfba3eb851442d0ca3b8c0a7163a63285ab6071
|
|
This patch adds support for a new GlanceBackend setting
which can be set to one of swift, rbd, or file to control
which Glance backend is configured for use by default.
Change-Id: Id6a3fbc3477e85e8e2446e3dc13d424f9535d0ff
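In the controller manifest a setting like this typically turns into a case over the backend name; a sketch using puppet-glance's backend classes (treat the hiera key and default value as assumptions):

    $glance_backend = downcase(hiera('glance_backend', 'swift'))
    case $glance_backend {
      'swift': { include ::glance::backend::swift }
      'rbd':   { include ::glance::backend::rbd }
      'file':  { include ::glance::backend::file }
      default: { fail("Unrecognized glance_backend: ${glance_backend}") }
    }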
|
|
This reverts commit 7313930c22b9f18d67e630de084ffcc6fad5ebe7.
We are seeing errors when trying to create the keystone admin
role with packages (ImportError: No module named os_client_config).
Change-Id: I78796598ccb8d2ffd6bfca85dce7d18dc0fd768e
Related-bug: #1450786
|
|
Ceilometer can use different backends. A recent change moved backend
support for Ceilometer from MySQL to MongoDB. This commit introduces
greater flexibility, letting the deployer choose whether MySQL or MongoDB
should be used as the backend for Ceilometer.
Change-Id: I0d5bfb0763cbcee234df7ab13574d866743d5ddf
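A hedged sketch of the selection (the hiera key, connection strings and addresses are illustrative; ::ceilometer::db is the puppet-ceilometer class that takes the connection):

    $ceilometer_backend = downcase(hiera('ceilometer_backend', 'mongodb'))
    if $ceilometer_backend == 'mysql' {
      $ceilometer_db_conn = 'mysql://ceilometer:secret@192.0.2.10/ceilometer'
    } else {
      $ceilometer_db_conn = 'mongodb://192.0.2.10:27017/ceilometer'
    }
    class { '::ceilometer::db':
      database_connection => $ceilometer_db_conn,
    }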
|
|
Change-Id: I43a74c1db324144d33e96a94cb718db30e0fd243
|
|
Change-Id: I6bf5ada5a5298f4079594f3cc8b01ac0ef85876e
|
|
Change-Id: I45511569fda6b00ca35b1e590537a29271e56ce0
Depends-On: I98b9b3dbc48009ce255d964ac580e1a31f279f1e
|
|
Install the OpenStack Dashboard (Horizon) on the Overcloud Controller with
Puppet.
Co-Authored-By: Giulio Fidente <gfidente@redhat.com>
Depends-On: If9b12d373e407be8be8428d77145f131eb450e88
Change-Id: I254e895014f58a51dade3dcdc63eabbb5dc458ac
|
|
This patch adds support for configuring a Keystone domain for Heat
via the heat-keystone-setup-domain script. It should be reverted
as soon as Keystone v3 is fully functional.
Change-Id: I7397f49fac17c30262d02b70021d613aef5c6cad
|
|
Adds a new ControllerEnableSwiftStorage parameter that
can be used to enable/disable use of the controller node
as a Swift storage node.
Change-Id: Ic54144f4a46a671818c2f12e419cfa619b0dc1f9
|
|
This patch adds a new ControllerEnableCephStorage option
which can be used to install and configure Ceph storage
(OSD) on the controller node.
This is disabled by default (which is probably the more
production-like setting).
The motivation for this change is to help facilitate CI
jobs which actually use Ceph. Right now we have an issue
where once the Heat stack finishes Ceph is configured
and ready, but Cinder volume (required by our CI
devtest_overcloud.sh test) may or may not have had
enough time to recognize the amount of storage
on the remote Ceph storage nodes. Waiting another
periodic cycle for Cinder volume to recognize the
actual amount of storage on the remote OSD nodes
would work but there isn't a good way to do this
ATM. The right solution here is probably to
implement Heat breakpoints in our CI. As we haven't quite
landed that change, another option is to simply
make the controller node also be a Ceph storage node.
Since this runs as "step 2" within the controller,
it ensures that the OSD will be available and thus
Cinder volume will register the correct amount of
storage on startup.
Enabling this feature also matches what we do with Swift
storage on the Controller (although we should provide
an option to actually disable this as well).
Change-Id: Ic47d028591edbaab83a52d7f38283d7805b63042
|
|
Depends-On: Ia1bbf53c674e34ba7c70249895b106ec0af3c249
Change-Id: Ifa9f579d26a3cba9f8705226984c7b987ae0ad1c
|