This takes the cluster_host_map into use, which allows us to give
aliases to the pacemaker nodes (which are FQDNs) and to configure the
cluster using FQDNs.
We need FQDNs in order to request certificates, since the default CA
(FreeIPA) only issues certificates for FQDNs.
Change-Id: I2f146afdd32aef2d11cf25a65fa8d67428f621f5
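A rough hiera sketch of the idea; the exact key name and hash shape
here are an assumption for illustration, not the literal interface
added by this change:

    # Hypothetical hiera data: alias each pacemaker node to its FQDN so
    # the cluster is configured, and certificates requested, by FQDN.
    tripleo::profile::base::pacemaker::cluster_host_map:
      overcloud-controller-0: overcloud-controller-0.example.com
      overcloud-controller-1: overcloud-controller-1.example.com
      overcloud-controller-2: overcloud-controller-2.example.com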
The commit with change id [1] added pacemaker HA support for OVN DB
servers. That commit created a new VIP, which is not really required.
This patch removes the code that creates the new IP resource. Instead,
it expects the pacemaker IP resource (with the IP address from the
'ovn_dbs_vip' parameter and with the name "ip-$ovn_dbs_vip") to be
created before the ovn_northd class is called, which is the case anyway
if 'ovn_dbs_vip' is taken from the ServiceNetMapDefaults (in t-h-t).
[1] - I9dc366002ef5919339961e5deebbf8aa815c73db
Change-Id: I94d3960e6c5406e3af309cc8c787ac0a6c9b1756
Partial-bug: #1670564
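What the profile now assumes already exists, sketched with the
pacemaker module's IP resource type (values are illustrative):

    # A VIP named "ip-$ovn_dbs_vip" must pre-exist, e.g. for
    # ovn_dbs_vip = 172.16.2.9:
    pacemaker::resource::ip { 'ip-172.16.2.9':
      ip_address   => '172.16.2.9',
      cidr_netmask => '32',
    }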
In composable HA we bind resources to nodes that have special node
properties. We need to do this for bundle resources as well, otherwise
there is a potential race where, for a small window of time, the bundle
might be started on nodes where it is not supposed to run.
Tested with the depends-on and correctly obtained a containerized
composable HA deployment (a sketch of the location rule follows the
status output below):
Docker container set: rabbitmq-bundle
[192.168.24.1:8787/tripleoupstream/centos-binary-rabbitmq:latest]
rabbitmq-bundle-0 (ocf::heartbeat:rabbitmq-cluster): Started overcloud-rabbit-0
rabbitmq-bundle-1 (ocf::heartbeat:rabbitmq-cluster): Started overcloud-rabbit-1
rabbitmq-bundle-2 (ocf::heartbeat:rabbitmq-cluster): Started overcloud-rabbit-2
Docker container set: galera-bundle
[192.168.24.1:8787/tripleoupstream/centos-binary-mariadb:latest]
galera-bundle-0 (ocf::heartbeat:galera): Master overcloud-galera-0
galera-bundle-1 (ocf::heartbeat:galera): Master overcloud-galera-1
galera-bundle-2 (ocf::heartbeat:galera): Master overcloud-galera-2
Docker container set: redis-bundle
[192.168.24.1:8787/tripleoupstream/centos-binary-redis:latest]
redis-bundle-0 (ocf::heartbeat:redis): Master overcloud-controller-0
redis-bundle-1 (ocf::heartbeat:redis): Slave overcloud-controller-1
redis-bundle-2 (ocf::heartbeat:redis): Slave overcloud-controller-2
ip-192.168.24.11 (ocf::heartbeat:IPaddr2): Started overcloud-controller-0
ip-10.0.0.7 (ocf::heartbeat:IPaddr2): Started overcloud-controller-1
ip-172.16.2.11 (ocf::heartbeat:IPaddr2): Started overcloud-controller-2
ip-172.16.2.9 (ocf::heartbeat:IPaddr2): Started overcloud-controller-0
ip-172.16.1.6 (ocf::heartbeat:IPaddr2): Started overcloud-controller-1
ip-172.16.3.7 (ocf::heartbeat:IPaddr2): Started overcloud-controller-2
Docker container set: haproxy-bundle
[192.168.24.1:8787/tripleoupstream/centos-binary-haproxy:latest]
haproxy-bundle-docker-0 (ocf::heartbeat:docker): Started overcloud-controller-0
haproxy-bundle-docker-1 (ocf::heartbeat:docker): Started overcloud-controller-1
haproxy-bundle-docker-2 (ocf::heartbeat:docker): Started overcloud-controller-2
Depends-On: I44449861cbfe56304b8829c9ca10fd648353b3ae
Change-Id: I48fb490040497ba08cae19937159c0efdf99e3f8
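A minimal sketch of how a bundle is tied to property-tagged nodes; it
assumes the bundle type accepts the same location_rule hash as the
other pacemaker resource types (which is what the depends-on adds):

    # Start the bundle only on nodes carrying rabbitmq-role=true.
    pacemaker::resource::bundle { 'rabbitmq-bundle':
      image         => '192.168.24.1:8787/tripleoupstream/centos-binary-rabbitmq:latest',
      replicas      => 3,
      location_rule => {
        resource_discovery => 'exclusive',
        score              => 0,
        expression         => ['rabbitmq-role eq true'],
      },
    }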
This module is used by tripleo-heat-templates to configure and deploy
Kolla-based cinder-volume containers managed by pacemaker.
We use short-lived containers that call pcs via puppet to create
the needed pacemaker resources, properties and constraints.
Co-Authored-By: Michele Baldessari <michele@acksyn.org>
Partial-Bug: #1668920
Change-Id: I95ad4dd89b47396bea672813d87de35e64c04b2d
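The pattern shared by these container profiles, sketched with
illustrative names (the hiera key follows the per-service bootstrap
convention used in this module; the image value is an example):

    # Run from a short-lived container; only the bootstrap node calls
    # pcs (via the pacemaker puppet module) to create the bundle.
    if $::hostname == downcase(hiera('cinder_volume_short_bootstrap_node_name')) {
      pacemaker::resource::bundle { 'openstack-cinder-volume':
        image    => '192.168.24.1:8787/tripleoupstream/centos-binary-cinder-volume:latest',
        replicas => 1,
        options  => '--user=root --log-driver=journald',
      }
    }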
This module is used by tripleo-heat-templates to configure and deploy
Kolla-based cinder-backup containers managed by pacemaker.
We use short-lived containers that call pcs via puppet to create
the needed pacemaker resources, properties and constraints.
Co-Authored-By: Michele Baldessari <michele@acksyn.org>
Partial-Bug: #1668920
Change-Id: If53495ff75d4832cc6be80dc0dc9bd540ab6583b
This module is used by tripleo-heat-templates to configure and deploy
Kolla-based mysql containers managed by pacemaker.
We use short-lived containers that call pcs via puppet to create
the needed pacemaker resources, properties and constraints.
Co-Authored-By: Michele Baldessari <michele@acksyn.org>
Partial-Bug: #1692842
Depends-On: I44fbd7f89ab22b72e8d3fc0a0e3fe54a9418a60f
Depends-On: Ie9b7e7d2a3cec4b121915a17c1e809e4ec950e7f
Change-Id: I3b4d8ad2eec70080419882d5d822f78ebd3721ae
Since galera is configured to use rsync, we ought to make sure the
package is installed. Particularly when using deployed-server, the
package is not always installed by default, depending on what was used
to install the servers.
Change-Id: I92ee78f2dd2c0f7fd4d393b104166407d7c654e2
Closes-Bug: #1693003
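The fix itself is a one-resource sketch:

    # Galera is configured for rsync SST, so the tool must be present.
    package { 'rsync':
      ensure => present,
    }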
This patch enables OVN DB servers to be started in master/slave mode in
the pacemaker cluster.
A virtual IP resource is created first, and then the pacemaker OVN OCF
resource "ovn:ovndb-servers" is created. The OVN OCF resource is
configured to be colocated with the VIP resource. The ovn-controller
and the Neutron OVN ML2 mechanism driver, which depend on the OVN DB
servers, always connect to the VIP address on which the master OVN DB
servers listen.
The OVN OCF resource itself takes care of (re)starting the ovn-northd
service on the master node, so we don't have to manage it.
When HA is enabled for OVN DB servers, haproxy does not include the OVN
DB servers in its configuration.
This patch requires OVS 2.7 in the overcloud.
Co-Authored-By: Numan Siddique <nusiddiq@redhat.com>
Change-Id: I9dc366002ef5919339961e5deebbf8aa815c73db
Partial-bug: #1670564
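A condensed sketch of the resources described above, using the
pacemaker module types (parameter values are illustrative; master_ip is
a parameter of the ovndb-servers OCF agent):

    # Master/slave OVN DB servers, colocated with the VIP they listen on.
    $ovn_dbs_vip = hiera('ovn_dbs_vip')
    pacemaker::resource::ocf { 'ovndb_servers':
      ocf_agent_name  => 'ovn:ovndb-servers',
      master_params   => '',
      resource_params => "master_ip=${ovn_dbs_vip}",
    }
    pacemaker::constraint::colocation { 'ovndb_servers-with-vip':
      source => "ip-${ovn_dbs_vip}",
      target => 'ovndb_servers-master',
      score  => 'INFINITY',
    }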
This module is used by tripleo-heat-templates to configure and deploy
Kolla-based RabbitMQ containers managed by pacemaker.
We use short-lived containers that call pcs via puppet to create
the needed pacemaker resources, properties and constraints.
Co-Authored-By: Michele Baldessari <michele@acksyn.org>
Co-Authored-By: John Eckersberg <jeckersb@redhat.com>
Partial-Bug: #1692909
Change-Id: I0722e4a4d4716f477e8304cfa1aadd3eef7c2f31
Depends-On: I44fbd7f89ab22b72e8d3fc0a0e3fe54a9418a60f
Depends-On: Ie9b7e7d2a3cec4b121915a17c1e809e4ec950e7f
This module is used by tripleo-heat-templates to configure and deploy
Kolla-based haproxy containers managed by pacemaker.
We use short-lived containers that call pcs via puppet to create
the needed pacemaker resources, properties and constraints.
Co-Authored-By: Michele Baldessari <michele@acksyn.org>
Partial-Bug: #1692908
Depends-On: I44fbd7f89ab22b72e8d3fc0a0e3fe54a9418a60f
Depends-On: Ie9b7e7d2a3cec4b121915a17c1e809e4ec950e7f
Change-Id: Ifcf890a88ef003d3ab754cb677cbf34ba8db9312
This module is used by tripleo-heat-templates to configure and deploy
Kolla-based Redis containers managed by pacemaker.
We use short-lived containers that call pcs via puppet to create
the needed pacemaker resources, properties and constraints.
Co-Authored-By: Michele Baldessari <michele@acksyn.org>
Partial-Bug: #1692924
Depends-On: I44fbd7f89ab22b72e8d3fc0a0e3fe54a9418a60f
Depends-On: Ie9b7e7d2a3cec4b121915a17c1e809e4ec950e7f
Change-Id: Ia1131611d15670190b7b6654f72e6290bf7f8b9e
In HA overcloud deployments, HAProxy makes use of a helper service
called "clustercheck" to check whether galera nodes are available to
serve traffic.
This change implements a dedicated profile for clustercheck, which was
originally part of the pacemaker mysql profile. The profile generates
the necessary configuration files for clustercheck and lets the heat
templates manage the associated container's lifecycle.
Co-Authored-By: Michele Baldessari <michele@acksyn.org>
Partial-Bug: #1692969
Change-Id: I1aabe34fa6a9c8c705a4405f275b66502c313cf2
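The sort of file the profile generates, assuming clustercheck reads the
usual /etc/sysconfig/clustercheck variables (a sketch with an
illustrative hiera key, not the profile's literal contents):

    # Credentials clustercheck uses to poll the local galera node.
    $clustercheck_password = hiera('mysql_clustercheck_password')
    file { '/etc/sysconfig/clustercheck':
      mode    => '0600',
      content => "MYSQL_USERNAME=clustercheck\nMYSQL_PASSWORD=${clustercheck_password}\nMYSQL_HOST=localhost\n",
    }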
Add composable service interface for Neutron LBaaSv2 service.
Change-Id: Ieeb21fafd340fdfbaddbe7633946fe0f05c640c9
Now that puppet-redis supports ulimit for cluster-managed redis (via
https://github.com/arioch/puppet-redis/pull/192), we need to remove the
file snippet, as otherwise we would get a duplicate resource error.
We will need to create a THT change that, at the very least, sets the
redis::managed_by_cluster_manager key to true so that
/etc/security/limits.d/redis.conf gets created.
We also add code to avoid breaking backwards compatibility with the old
hiera key.
Change-Id: I4ffccfe3e3ba862d445476c14c8f2cb267fa108d
Partial-Bug: #1688464
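The THT-side hiera change mentioned above is a one-liner (sketch):

    # Let puppet-redis create /etc/security/limits.d/redis.conf.
    redis::managed_by_cluster_manager: true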
In change Ib62001c03e1e08f58cf0c6e0ba07a8879a584084 we switched the
rabbitmq queue HA mode from ha-all to ha-exactly. While this gives us a
nice performance boost with rabbitmq, it makes rabbit less resilient to
network glitches, as we painfully found out via
https://bugzilla.redhat.com/show_bug.cgi?id=1441635.
We will propose another THT change to change the default to -1 so that
we get ha-mode:all by default.
Change-Id: I9a90e71094b8d8d58b5be0a45a2979701b0ac21c
Partial-Bug: #1686337
Co-Authored-By: Damien Ciabrini <dciabrin@redhat.com>
Co-Authored-By: John Eckersberg <jeckersb@redhat.com>
Previously we ran the galera-ready exec at every step. This change
switches it to refreshonly, so we only wait when the service is set up
or restarted.
Change-Id: I5ff9d49c2590751913b96777bcd72c8a15627a01
Closes-Bug: #1680586
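A sketch of the resulting exec; the clustercheck-based command and the
retry values mirror what this module uses for the wait, but treat them
as illustrative:

    # Block until galera answers, but only when notified (refreshonly).
    exec { 'galera-ready':
      command     => '/usr/bin/clustercheck >/dev/null',
      timeout     => 30,
      tries       => 180,
      try_sleep   => 10,
      refreshonly => true,
      environment => ['AVAILABLE_WHEN_READONLY=0'],
    }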
This reverts commit 3f7e74ab24bb43f9ad7e24e0efd4206ac6a3dd4e.
After identifying how to work around the performance issues on the
undercloud, let's put this back in. Enabling innodb_file_per_table is
important for operators to be able to better manage their databases.
Change-Id: I435de381a0f0e3ef221e498f442335cdce3fb818
Depends-On: I77507c638237072e38d9888aff3da884aeff0b59
Closes-Bug: #1660722
This reverts commit 621ea892a299d2029348db2b56fea1338bd41c48.
We're getting performance problems on SATA disks.
Change-Id: I30312fd5ca3405694d57e6a4ff98b490de388b92
Closes-Bug: #1661396
Related-Bug: #1660722
InnoDB uses a single file by default, which can grow to tens or
hundreds of gigabytes and is not shrinkable even if data is deleted
from the database.
Best practice is to set innodb_file_per_table to ON, which instead
stores each database table in its own file, each of which is also
shrinkable by the InnoDB engine.
Closes-Bug: #1660722
Change-Id: I59ee53f6462a2eeddad72b1d75c77a69322d5de4
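With puppet-mysql this is a single override key (sketch; the
surrounding server class setup is assumed):

    # Store each InnoDB table in its own, shrinkable, .ibd file.
    class { '::mysql::server':
      override_options => {
        'mysqld' => {
          'innodb_file_per_table' => 'ON',
        },
      },
    }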
Follow-up patch for I63da4f48da14534fd76265764569e76300534472 to
support composable HA for the Ceph rbdmirror daemon.
Change-Id: I3767bee4b1c7849fa85e71bcc57534b393d2d415
This commit implements composable HA for the pacemaker profiles:
- Every time a pacemaker resource gets included on a node, that node
  adds a cluster node property with the name of the resource
  (e.g. galera-role=true).
- A location rule constraint is added to force running the resource
  only on the nodes that have that property.
- We also make sure that any pacemaker resource/property creation has a
  predefined number of tries (20 by default). The reason is that with
  composable HA it is possible to get "older CIB" errors when another
  node changed the CIB while we were doing an operation on it. Simply
  retrying fixes this.
- We also make sure to use the newly introduced
  pacemaker::constraint::order class instead of the older
  pacemaker::constraint::base class. The former uses the push_cib()
  function and hence behaves correctly when multiple nodes try to
  modify the CIB at the same time.
A sketch of the property and location rule follows the trailers below.
Change-Id: I63da4f48da14534fd76265764569e76300534472
Depends-On: Ib931adaff43dbc16220a90fb509845178d696402
Depends-On: I8d78cc1b14f0e18e034b979a826bf3cdb0878bae
Depends-On: Iba1017c33b1cd4d56a3ee8824d851b38cfdbc2d3
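A minimal sketch of the first two bullets, using the location_rule
support on the pacemaker resource types (resource names are
illustrative):

    # Tag this node as eligible for galera; retry to survive "older
    # CIB" errors from concurrent CIB writers.
    pacemaker::property { 'galera-role-node-property':
      property => 'galera-role',
      value    => true,
      tries    => 20,
      node     => $::hostname,
    }
    # Bind the resource to the property-tagged nodes only.
    pacemaker::resource::ocf { 'galera':
      ocf_agent_name => 'heartbeat:galera',
      master_params  => '',
      tries          => 20,
      location_rule  => {
        resource_discovery => 'exclusive',
        score              => 0,
        expression         => ['galera-role eq true'],
      },
    }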
Previously we failed to perform the basic Ceph client configuration on
a node where only the RBD mirror service was deployed.
Change-Id: Ie6a4284a88714bcee964a38636e12aa88bb95c9d
Co-Authored-By: Michele Baldessari <michele@acksyn.org>
Related-Bug: #1652177
There is a typo in the bootstrap check, which leads to:
Could not find data item ceph_rbdmirror_bootstrap_short_node_name in any
Hiera data file and no default supplied at
/etc/puppet/modules/tripleo/manifests/profile/pacemaker/ceph/rbdmirror.pp
We need to use the correct key:
$ hiera ceph_rbdmirror_short_bootstrap_node_name
overcloud-remote-0
Change-Id: Ic343e5f99e48360bdd2d2989781a4b6ca484e8fc
This change adds a profile for the Ceph RBD mirror service, which
should be managed by Pacemaker to make sure there is always a single
instance running.
Change-Id: Ic63dc5cffece38942d305f538f71dd58a5d50789
Partial-Bug: #1652177
When we create a pacemaker resource, it must happen from a single node.
If it happens from multiple nodes, pcs returns an immediate error.
For the pacemaker roles we enforce this by leveraging the recently
introduced <SERVICE_NAME>_bootstrap_short_node_name key, which gives us
the first hostname per service, regardless of the role (introduced via
I03e8685f939e8ae1fcd8b16883b559615042505d). A sketch of the check
follows below.
With this approach, if a pacemaker service belongs to two different
roles (say role Controller on node A and role galera on node B), it
will create the resource from only one of the two nodes, not both
(which would return an error).
Only setting Partial-Bug for this one, because it addresses the issue
from the pacemaker resource creation POV (which is always affected).
The issue itself is a race that has theoretically affected us since the
composable roles work landed. While I have tried to fix the more
general case in previous attempts, I think it is best if we start a
discussion on how to fix it, because each approach has a number of
potential drawbacks and is quite invasive in how we do things. A
discussion slot for this has been proposed for the Atlanta PTG.
Change-Id: I662398cab60d523d204b57a5674ca8f5c0f2e68a
Partial-Bug: #1615983
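The standard guard, as it appears throughout this module (the hiera key
shown is the rabbitmq one; substitute per service):

    # Only the per-service bootstrap node performs pcs resource creation.
    $bootstrap_node = hiera('rabbitmq_short_bootstrap_node_name')
    if $::hostname == downcase($bootstrap_node) {
      $pacemaker_master = true
    } else {
      $pacemaker_master = false
    }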
The Manila ceph driver reads ceph's client configuration (the keyring
being the most important part) from the ceph.conf file (or any other
file set by cephfs_conf_path), so ceph.conf should be updated with the
keyring location.
If ceph is deployed by tripleo, the manila ceph key is also added to
ceph and a ceph filesystem is created.
Depends-On: I18436a64fc991b9e697a1d79e369ac110cf8fe20
Change-Id: Iac4a260af6738ed6afd4bcb107221a736d07c1b5
Partial-Bug: #1644784
Closes-Bug: #1646147
Removed redundant 'the'
Change-Id: Ie2051f35ec1e7010423c46084f5512c02af85f33
The Manila pacemaker manifest (which sets up only the manila share
service) also includes the manila api and scheduler manifests. There is
no reason for this, and it causes the manila api and scheduler services
to be started on whichever node the manila share service runs.
Change-Id: Ia1b39ef36c5bc34813cd6430b69ad9b698acc3cf
Closes-Bug: #1653500
By default the galera-monitor xinetd service binds on all interfaces,
which means that port 9200 is exposed on the external network. Because
haproxy uses the same network for the backend and for the check, we can
reuse that network for the xinetd binding (see the sketch below).
Change-Id: If1a50515593e81f46d67309bdeecbe84c1d0ebe4
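A sketch with the xinetd module, assuming its service define accepts a
bind address (parameter values are illustrative):

    # Bind the monitor only on the address haproxy uses for the check.
    xinetd::service { 'galera-monitor':
      port   => '9200',
      server => '/usr/bin/clustercheck',
      bind   => hiera('mysql_bind_host'),
      user   => 'root',
      flags  => 'REUSE',
    }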
Puppet 4 makes ordering in the catalog more strict, which is good.
Resources have to be explicitly ordered, or Puppet will apply them in
the order they are found in the catalog.
This patch makes sure we create MySQL users only when Galera is
actually ready.
Closes-Bug: #1645787
Change-Id: I536a1a128c3a7eca49bcc4f34a1307bcd60b029e
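The ordering can be pinned with a chaining arrow over a resource
collector (a sketch; which resources participate is an assumption):

    # No MySQL user gets created before the cluster answers.
    Exec['galera-ready'] -> Mysql_user<| |>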
If we upgrade a cloud that was configured with an external load
balancer, the process fails during the convergence step because it
tries to restart haproxy, which is not configured when an external load
balancer is in use.
Closes-Bug: #1636527
Change-Id: I6f6caec3e5c96e77437c1c83e625f39649a66c48
With the landing of HA NG in Newton we can remove the pacemaker
profiles we no longer need. The only ones that are still used in one
form or another are:
$ grep -ir services\/pacemaker environments | awk '{ print $3 }' | sort | uniq
../puppet/services/pacemaker/cinder-backup.yaml
../puppet/services/pacemaker/cinder-volume.yaml
../puppet/services/pacemaker/database/mysql.yaml
../puppet/services/pacemaker/database/redis.yaml
../puppet/services/pacemaker/haproxy.yaml
../puppet/services/pacemaker/manila-share.yaml
../puppet/services/pacemaker/rabbitmq.yaml
../puppet/services/pacemaker.yaml
The only exception is profile/pacemaker/database/mongodbvalidator
because it is included by profile/base/database/mongodb.pp
Change-Id: I80c8559bb2d915385bcc20ae71fe144ddd6591c1