Age | Commit message | Author | Files | Lines |
|
2017-07-20 15:09:38.571317 | manifests/glance/nfs_mount.pp:65:WARNING: arrow should be on the right operand's line
2017-07-20 15:09:38.571430 | manifests/pacemaker/haproxy_with_vip.pp:107:WARNING: arrow should be on the right operand's line
2017-07-20 15:09:38.571473 | manifests/pacemaker/haproxy_with_vip.pp:108:WARNING: arrow should be on the right operand's line
2017-07-20 15:09:38.571511 | manifests/pacemaker/haproxy_with_vip.pp:109:WARNING: arrow should be on the right operand's line
2017-07-20 15:09:38.571551 | manifests/pacemaker/resource_restart_flag.pp:44:WARNING: arrow should be on the right operand's line
2017-07-20 15:09:38.571590 | manifests/profile/base/cinder/volume/nfs.pp:72:WARNING: arrow should be on the right operand's line
2017-07-20 15:09:38.571625 | manifests/profile/base/docker.pp:188:WARNING: arrow should be on the right operand's line
2017-07-20 15:09:38.571661 | manifests/profile/base/docker.pp:210:WARNING: arrow should be on the right operand's line
2017-07-20 15:09:38.571699 | manifests/profile/base/logging/fluentd.pp:79:WARNING: arrow should be on the right operand's line
2017-07-20 15:09:38.571735 | manifests/profile/base/pacemaker.pp:107:WARNING: arrow should be on the right operand's line
2017-07-20 15:09:38.571773 | manifests/profile/base/swift/ringbuilder.pp:97:WARNING: arrow should be on the right operand's line
2017-07-20 15:09:38.571811 | manifests/profile/base/swift/ringbuilder.pp:125:WARNING: arrow should be on the right operand's line
2017-07-20 15:09:38.571850 | manifests/profile/base/swift/ringbuilder.pp:130:WARNING: arrow should be on the right operand's line
2017-07-20 15:09:38.571889 | manifests/profile/pacemaker/ceph/rbdmirror.pp:79:WARNING: arrow should be on the right operand's line
2017-07-20 15:09:38.571927 | manifests/profile/pacemaker/cinder/backup.pp:66:WARNING: arrow should be on the right operand's line
2017-07-20 15:09:38.571965 | manifests/profile/pacemaker/ovn_northd.pp:96:WARNING: arrow should be on the right operand's line
Change-Id: I9393c5e04310cf84695531df9bb16f33e7e15abb
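As an illustration of puppet-lint's arrow placement check (the resources
below are hypothetical), the fix moves the chaining arrow from the end of
the left operand's line to the start of the right operand's line:
  # flagged: arrow sits on the left operand's line
  Exec['rebuild-ring'] ->
  Service['swift-proxy']
  # accepted: arrow starts the right operand's line
  Exec['rebuild-ring']
  -> Service['swift-proxy']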
|
|
This solves a problem with bind-mounts when the containers are holding
file descriptors open.
At the same time this makes the template more robust to puppet changes
since new config files will be available in the containers without
needing to update the templates.
Closes-Bug: #1698323
Change-Id: I857c94ba5f7f064d7c58df621ec5d477654b9166
Depends-On: I78dcec741a941dc21adba33ba33a6dc6ff1d217c
|
|
The step is typically set via hieradata as an integer value:
{"step": 1}
However it would be useful for the value to be a string so that
substitutions are possible, for example:
{"step": "%{::step}"}
This change ensures the step parameter defaults to an integer by
calling Integer(hiera('step')).
This change was made by manually removing the undef defaults from
fluentd.pp, uchiwa.pp, and sensu.pp, then bulk updating with:
find ./ -type f -print0 |xargs -0 sed -i "s/= hiera('step')/= Integer(hiera('step'))/"
Change-Id: I8a47ca53a7dea8391103abcb8960a97036a6f5b3
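As a hedged sketch of the resulting default (the profile class below is
hypothetical), a parameter now reads:
  class tripleo::profile::base::example (
    $step = Integer(hiera('step')),
  ) {
    if $step >= 3 {
      # step-gated configuration goes here
    }
  }
With the coercion in place, hieradata can supply either {"step": 1} or
{"step": "%{::step}"} and integer comparisons on $step keep working.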
|
|
|
|
This module is used by tripleo-heat-templates to configure and deploy
Kolla-based cinder-volume containers managed by pacemaker.
We use short-lived containers that call pcs via puppet to create
the needed pacemaker resources, properties and constraints.
Co-Authored-By: Michele Baldessari <michele@acksyn.org>
Partial-Bug: #1668920
Change-Id: I95ad4dd89b47396bea672813d87de35e64c04b2d
|
|
This module is used by tripleo-heat-templates to configure and deploy
Kolla-based cinder-backup containers managed by pacemaker.
We use short-lived containers that call pcs via puppet to create
the needed pacemaker resources, properties and constraints.
Co-Authored-By: Michele Baldessari <michele@acksyn.org>
Partial-Bug: #1668920
Change-Id: If53495ff75d4832cc6be80dc0dc9bd540ab6583b
|
|
This commit implements composable HA for the pacemaker profiles.
- Every time a pacemaker resource gets included on a node,
that node will add a node cluster property with the name of the resource
(e.g. galera-role=true)
- Add a location rule constraint to force running the resource only
on the nodes that have that property
- We also make sure that any pacemaker resource/property creation has a
predefined number of tries (20 by default). The reason for this is
that within composable HA, it might be possible to get "older CIB"
errors when another node changed the CIB while we were doing an
operation on it. Simply retrying fixes this.
- Also make sure that we use the newly introduced
pacemaker::constraint::order class instead of the older
pacemaker::constraint::base class. The former uses the push_cib()
function and hence behaves correctly in case multiple nodes try
to modify the CIB at the same time.
Change-Id: I63da4f48da14534fd76265764569e76300534472
Depends-On: Ib931adaff43dbc16220a90fb509845178d696402
Depends-On: I8d78cc1b14f0e18e034b979a826bf3cdb0878bae
Depends-On: Iba1017c33b1cd4d56a3ee8824d851b38cfdbc2d3
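A hedged sketch of the pattern described above; the resource and parameter
names are assumptions based on this message, not necessarily the exact
puppet-pacemaker interface:
  # tag the node that includes the profile with a cluster property
  pacemaker::property { 'galera-role-node-property':
    property => 'galera-role',
    value    => true,
    node     => $::hostname,
    tries    => 20,    # retry to ride out "older CIB" races
  }
  # and pass a location rule when the resource is created, e.g.
  #   location_rule => {
  #     resource_discovery => 'exclusive',
  #     score              => 0,
  #     expression         => ['galera-role eq true'],
  #   }
so the resource is only allowed to run on nodes carrying that property.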
|
|
When we create a pacemaker resource it must happen from a single node.
If it happens from multiple nodes an immediate error will be returned by
pcs.
For the pacemaker roles we enforce this by leveraging the recently
introduced <SERVICE_NAME_bootstrap_short_node_name> which gives us
the first hostname per-service, regardless of the role.
(introduced via I03e8685f939e8ae1fcd8b16883b559615042505d)
With this approach, if a pacemaker service belongs to two different
roles (say role Controller on node A and role galera on node B), the
resource will be created from only one of the two nodes and not both
(which would return an error).
Only setting Partial-Bug for this one, because it addresses the issue
from the pacemaker resource creation POV (which is always affected). But
the issue itself is a race that we're theoretically affected by since
the composable roles work landed. While I have tried to fix the more
general case in previous attempts, I think it is best if we start a
discussion on how to fix it, because each approach has a bunch of
potential drawbacks and is quite invasive on how we do things. A
discussion slot for this has been proposed for the Atlanta PTG.
Change-Id: I662398cab60d523d204b57a5674ca8f5c0f2e68a
Partial-Bug: #1615983
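A hedged sketch of the bootstrap check (the hiera key is a per-service
example following the naming quoted above; downcase() comes from stdlib):
  $service_bootstrap_node = hiera('mysql_bootstrap_short_node_name', undef)
  if $service_bootstrap_node and
      downcase($::hostname) == downcase($service_bootstrap_node) {
    $pacemaker_master = true     # only this node creates the pcs resource
  } else {
    $pacemaker_master = false
  }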
|
|
With the landing of HA NG in Newton we can actually remove the
pacemaker profiles we do not need. The only ones that are being
used in one form or the other are:
$ grep -ir services\/pacemaker environments | awk '{ print $3 }' | sort | uniq
../puppet/services/pacemaker/cinder-backup.yaml
../puppet/services/pacemaker/cinder-volume.yaml
../puppet/services/pacemaker/database/mysql.yaml
../puppet/services/pacemaker/database/redis.yaml
../puppet/services/pacemaker/haproxy.yaml
../puppet/services/pacemaker/manila-share.yaml
../puppet/services/pacemaker/rabbitmq.yaml
../puppet/services/pacemaker.yaml
The only exception is profile/pacemaker/database/mongodbvalidator
because it is included by profile/base/database/mongodb.pp
Change-Id: I80c8559bb2d915385bcc20ae71fe144ddd6591c1
|
|
Back in the Mitaka cycle via the change If6b43982c958f63bc78ad997400bf1279c23df7e
we made sure that the default start and stop timeouts for pacemaker
systemd resources are 200s (>= twice the default 90s DefaultTimeoutStopSec
in systemd). We did this change by setting puppet resource defaults for
the Pacemaker::Resource::Service class:
Pacemaker::Resource::Service {
op_params => 'start timeout=200s stop timeout=200s',
}
The problem is that after the composable services rework, this does not
work anymore and the pacemaker systemd resources that still exist do not
have these timeouts set.
We want to move away from resource defaults for this because their effect
depends on the inclusion order, which in tripleo is no longer guaranteed
(https://docs.puppet.com/puppet/latest/reference/lang_scope.html#scope-lookup-rules)
The only services affected in Newton are: cinder-volume,
cinder-backup, manila-share, haproxy. I preferred fixing all the
pacemaker resources because it seems the cleanest and most logical
commit.
Change-Id: If89a95706514e536a7a2949871a0002c79b6046e
Closes-Bug: #1629366
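A hedged sketch of the per-resource fix (the service name and clone_params
value are examples): instead of relying on resource defaults, each profile
now passes the timeouts explicitly when declaring the resource:
  pacemaker::resource::service { 'haproxy':
    op_params    => 'start timeout=200s stop timeout=200s',
    clone_params => true,
  }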
|
|
Write a restart flag file for services managed by Pacemaker into the
/var/lib/tripleo/pacemaker-restarts directory. The name of the file must
match the name of the clone resource defined in pacemaker. The
post-puppet restart script will restart each service having a restart
flag file and remove those files.
This approach focuses on $pacemaker_master only (we don't want to
restart the pacemaker services 3 times when we have 3 controllers), so
it relies on the assumption that we're making the matching config
changes across the pacemaker nodes.
Change-Id: I6369ab0c82dbf3c8f21043f8aa9ab810744ddc12
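A hedged sketch of the flag-file define; the real implementation lives in
manifests/pacemaker/resource_restart_flag.pp and the body below is only an
approximation (ensure_resource comes from stdlib):
  define tripleo::pacemaker::resource_restart_flag () {
    ensure_resource('file', '/var/lib/tripleo/pacemaker-restarts',
      { 'ensure' => 'directory' })
    exec { "${title} restart flag":
      command     => "touch /var/lib/tripleo/pacemaker-restarts/${title}",
      path        => ['/bin', '/usr/bin'],
      refreshonly => true,   # only runs when notified by a config change
      require     => File['/var/lib/tripleo/pacemaker-restarts'],
    }
  }
A profile would then have its config resources notify
Tripleo::Pacemaker::Resource_restart_flag['<clone resource name>'], and the
post-puppet restart script restarts each flagged clone and removes the flag.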
|
|
|
|
Adds a Cinder backup profile for Cinder backup service activation
(to be used in https://review.openstack.org/#/c/304563).
Cinder backup uses Swift as the default backend.
Change-Id: Ib1dfe52b83ab01819fc669312967950e75d8ddf1
Co-Authored-By: Jon Bernard <jobernar@redhat.com>
Co-Authored-By: Boris Kreitchman <bkreitch@gmail.com>
|
|
As we are starting to manually check overcloud services,
the first step is to check that the puppet profiles
are all aligned.
Changes applied:
No logic added or removed in this submission.
Removed unused parameters.
Aligned the header comments structure.
All profile parameters sorted following:
"Mandatory params first sorted alphabetically
then optional params sorted alphabetically."
Note: Following submissions will check the pacemaker,
cinder, mistral and redis services in the base profiles,
as some of them have the $pacemaker_master parameter
defaulted to true.
Change-Id: I2f91c3f6baa33f74b5625789eec83233179a9655
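For illustration, a hedged example of the ordering convention (the class
and parameters are hypothetical):
  class tripleo::profile::base::example (
    # Mandatory params, alphabetical
    $bind_ip,
    $servers,
    # Optional params, alphabetical
    $enable_ssl      = false,
    $manage_firewall = true,
  ) {
  }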
|
|
This change moves the cinder-volume/cinder-scheduler constraints into the
cinder-scheduler profile, as these can't be applied by the cinder-volume
service when cinder-scheduler isn't managed by Pacemaker.
Blueprint:
https://blueprints.launchpad.net/tripleo/+spec/ha-lightweight-architecture
Change-Id: I5e7585c08675d8a4bd071523b94210d325d79b59
Implements: blueprint ha-lightweight-architecture
Co-Author: cmsj@tenshu.net
|
|
In the Next Generation HA architecture a number of active/active services
will be run via systemd. In order for this to work we need to make sure that
the sync_db operation only takes place on the bootstrap node, just like it is
done today for the pacemaker profiles.
We do this by removing sync_db as a parameter and instead setting it to
true or false depending on whether the hostname matches the bootstrap
node, as is done today in the pacemaker role.
Note that we call hiera('bootstrap_nodeid', undef) because if a profile
is included on a non-controller node that variable will be undefined.
The following testing was done:
- HA puppet-pacemaker.yaml scenario with three computes
- NonHA with one controller
- NonHA with three controllers
Fixes-Bug: 1600149
Co-Author: cmsj@tenshu.net
Change-Id: I04a7b9e3c18627ea512000a34357acb7f27d6e0e
Implements: blueprint ha-lightweight-architecture
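A hedged sketch of the resulting check in a base profile (guarding the
downcase() call is an addition for safety; names are as quoted above):
  $bootstrap_node = hiera('bootstrap_nodeid', undef)
  if $bootstrap_node and $::hostname == downcase($bootstrap_node) {
    $sync_db = true     # only the bootstrap node runs the db sync
  } else {
    $sync_db = false
  }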
|
|
Includes both the base and the pacemaker roles.
Change-Id: I3c6d5226eed5f0f852b0ad9476c7cd9a959fda69
|