path: root/manifests/profile/pacemaker/haproxy.pp
2017-07-27  Prevent haproxy from running iptables during docker-puppet configuration  (Damien Ciabrini, 1 file, -1/+9)
When docker-puppet runs the tripleo::haproxy module to generate the haproxy configuration file and tripleo::firewall::manage_firewall is true, iptables is called to set up firewall rules for the proxied services and fails due to the lack of the NET_ADMIN capability. Make the generation of firewall rules configurable by exposing a new argument on the puppet module, so that firewall management can be temporarily disabled when running through docker-puppet.

Change-Id: I2d6274d061039a9793ad162ed8e750bd87bf71e9
Partial-Bug: #1697921
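A minimal sketch of the pattern described above, assuming the new argument is called $manage_firewall (the actual parameter name and class interface may differ):

    # Gate firewall rule generation behind a class parameter so that
    # docker-puppet can temporarily disable it ($manage_firewall is an
    # assumed name, not necessarily the one introduced by this change).
    class tripleo::haproxy (
      Boolean $manage_firewall = hiera('tripleo::firewall::manage_firewall', true),
    ) {
      if $manage_firewall {
        # iptables rules for the proxied services would be declared here
        include ::tripleo::firewall
      }
    }

docker-puppet could then pass manage_firewall: false in the hieradata it hands to the container, avoiding the NET_ADMIN failure.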
2017-06-14  Ensure hiera step value is an integer  (Steve Baker, 1 file, -1/+1)
The step is typically set with the hieradata setting an integer value:

    {"step": 1}

However, it would be useful for the value to be a string so that substitutions are possible, for example:

    {"step": "%{::step}"}

This change ensures the step parameter defaults to an integer by calling Integer(hiera('step')).

This change was made by manually removing the undef defaults from fluentd.pp, uchiwa.pp, and sensu.pp, then bulk updating with:

    find ./ -type f -print0 | xargs -0 sed -i "s/= hiera('step')/= Integer(hiera('step'))/"

Change-Id: I8a47ca53a7dea8391103abcb8960a97036a6f5b3
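The before/after shape of the change in each profile, matching the sed command above:

    # Before: $step may arrive as a string when the hieradata uses a
    # substitution such as {"step": "%{::step}"}
    $step = hiera('step')

    # After: coerce the value so that numeric comparisons keep working
    $step = Integer(hiera('step'))

    if $step >= 2 {
      # per-step resources declared from step 2 onwards
    }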
2017-01-25  Composable HA  (Michele Baldessari, 1 file, -20/+59)
This commit implements composable HA for the pacemaker profiles:

- Every time a pacemaker resource gets included on a node, that node adds a cluster node property with the name of the resource (e.g. galera-role=true).
- Add a location rule constraint to force running the resource only on the nodes that have that property.
- We also make sure that any pacemaker resource/property creation has a predefined number of tries (20 by default). The reason for this is that within composable HA it is possible to get "older CIB" errors when another node changed the CIB while we were doing an operation on it; simply retrying fixes this.
- Also make sure that we use the newly introduced pacemaker::constraint::order class instead of the older pacemaker::constraint::base class. The former uses the push_cib() function and hence behaves correctly when multiple nodes try to modify the CIB at the same time.

Change-Id: I63da4f48da14534fd76265764569e76300534472
Depends-On: Ib931adaff43dbc16220a90fb509845178d696402
Depends-On: I8d78cc1b14f0e18e034b979a826bf3cdb0878bae
Depends-On: Iba1017c33b1cd4d56a3ee8824d851b38cfdbc2d3
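A sketch of the composable HA pattern for this profile, with parameter and resource names assumed from the description above rather than taken from the actual diff:

    # Tag the local node with a property named after the resource...
    pacemaker::property { 'haproxy-role-node-property':
      property => 'haproxy-role',
      value    => true,
      tries    => 20,
      node     => $::hostname,
    }

    # ...then constrain the resource to nodes carrying that property, and
    # retry CIB operations to cope with "older CIB" races.
    pacemaker::resource::service { 'haproxy':
      clone_params  => true,
      tries         => 20,
      location_rule => {
        resource_discovery => 'exclusive',
        score              => 0,
        expression         => ['haproxy-role eq true'],
      },
    }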
2017-01-18  Do not depend on bootstrap_nodeid for any pacemaker profile  (Michele Baldessari, 1 file, -2/+2)
When we create a pacemaker resource, it must happen from a single node. If it happens from multiple nodes, pcs returns an immediate error. For the pacemaker roles we enforce this by leveraging the recently introduced <SERVICE_NAME_bootstrap_short_node_name>, which gives us the first hostname per service, regardless of the role (introduced via I03e8685f939e8ae1fcd8b16883b559615042505d). With this approach, if a pacemaker service belongs to two different roles (say role Controller on node A and role galera on node B), the resource will only be created from one of the two nodes and not both (which would return an error).

Only setting Partial-Bug for this one, because it addresses the issue from the pacemaker resource creation point of view (which is always affected). The issue itself is a race that we have theoretically been affected by since the composable roles work landed. While I have tried to fix the more general case in previous attempts, I think it is best if we start a discussion on how to fix it, because each approach has a bunch of potential drawbacks and is quite invasive in how we do things. A discussion slot for this has been proposed for the Atlanta PTG.

Change-Id: I662398cab60d523d204b57a5674ca8f5c0f2e68a
Partial-Bug: #1615983
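A sketch of how a profile can key resource creation off the per-service bootstrap node; the hiera key name is an assumption derived from the <SERVICE_NAME_bootstrap_short_node_name> convention mentioned above:

    # Only the first node listed for this service creates the resource,
    # so pcs is never called concurrently from several nodes.
    $haproxy_bootstrap = hiera('haproxy_bootstrap_short_node_name')
    $pacemaker_master  = downcase($::hostname) == downcase($haproxy_bootstrap)

    if $step >= 2 and $pacemaker_master {
      pacemaker::resource::service { 'haproxy':
        clone_params => true,
      }
    }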
2016-10-25  Only restart haproxy services when enable_load_balancer is defined  (Michele Baldessari, 1 file, -1/+1)
If we upgrade a cloud that was configured with an external load balancer, the process will fail during the convergence step because it will try to restart haproxy, which is not configured when an external load balancer is in use.

Closes-Bug: #1636527
Change-Id: I6f6caec3e5c96e77437c1c83e625f39649a66c48
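A sketch of the guard, assuming the restart is triggered through a flag-file exec as in the restart-flag mechanism described further down this log:

    # With an external load balancer, haproxy is not configured on the
    # overcloud nodes, so skip the restart trigger during convergence.
    if hiera('enable_load_balancer', true) {
      exec { 'haproxy-clone restart flag':
        command     => 'touch /var/lib/tripleo/pacemaker-restarts/haproxy-clone',
        path        => ['/usr/bin', '/bin'],
        refreshonly => true,
      }
    }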
2016-10-03  Fix the timeout for pacemaker systemd resources  (Michele Baldessari, 1 file, -0/+1)
Back in the Mitaka cycle, via change If6b43982c958f63bc78ad997400bf1279c23df7e, we made sure that the default start and stop timeouts for pacemaker systemd resources is 200s (>= twice the default 90s DefaultTimeoutStopSec in systemd). We did this by setting puppet resource defaults for the Pacemaker::Resource::Service class:

    Pacemaker::Resource::Service {
      op_params => 'start timeout=200s stop timeout=200s',
    }

The problem is that after the composable services rework this no longer works, and the pacemaker systemd resources that still exist do not have these timeouts set. We want to move away from resource defaults for this because the result depends on inclusion order, which in tripleo is no longer guaranteed (https://docs.puppet.com/puppet/latest/reference/lang_scope.html#scope-lookup-rules).

The only services affected in Newton are: cinder-volume, cinder-backup, manila-share, haproxy. I preferred fixing all the pacemaker resources because it seems the cleanest and most logical commit.

Change-Id: If89a95706514e536a7a2949871a0002c79b6046e
Closes-Bug: #1629366
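The shape of the fix, with the timeout string taken from the message above; the surrounding resource declaration is a sketch:

    # Explicit per-resource timeouts: no dependence on class-wide resource
    # defaults and therefore no dependence on inclusion order.
    pacemaker::resource::service { 'haproxy':
      op_params    => 'start timeout=200s stop timeout=200s',
      clone_params => true,
    }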
2016-08-30  Write restart flags to restart services only when necessary  (Jiri Stransky, 1 file, -0/+6)
Write a restart flag file for services managed by Pacemaker into the /var/lib/tripleo/pacemaker-restarts directory. The name of the file must match the name of the clone resource defined in pacemaker. The post-puppet restart script will restart each service that has a restart flag file and remove those files.

This approach focuses on $pacemaker_master only (we don't want to restart the pacemaker services three times when we have three controllers), so it relies on the assumption that we are making matching config changes across the pacemaker nodes.

Change-Id: I6369ab0c82dbf3c8f21043f8aa9ab810744ddc12
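A sketch of the mechanism, assuming a plain exec writes the flag; the actual implementation may use a dedicated defined type, and the subscribe target shown is hypothetical:

    if $pacemaker_master {
      file { '/var/lib/tripleo/pacemaker-restarts':
        ensure => directory,
      }

      # Flag file named after the pacemaker clone resource; the post-puppet
      # restart script restarts flagged services and removes the files.
      exec { 'haproxy-clone restart flag':
        command     => 'touch /var/lib/tripleo/pacemaker-restarts/haproxy-clone',
        path        => ['/usr/bin', '/bin'],
        refreshonly => true,
        require     => File['/var/lib/tripleo/pacemaker-restarts'],
        subscribe   => Concat['/etc/haproxy/haproxy.cfg'],  # hypothetical trigger
      }
    }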
2016-08-08  Fix parameter and header inconsistencies in the puppet manifests  (Carlos Camacho, 1 file, -6/+5)
As we are starting to manually check the overcloud services, the first step is to check that the puppet profiles are all aligned. Changes applied:

- No logic added or removed in this submission.
- Removed unused parameters.
- Aligned the header comment structure.
- Sorted all profile parameters as follows: mandatory parameters first, sorted alphabetically, then optional parameters, sorted alphabetically.

Note: Following submissions will check the pacemaker, cinder, mistral and redis services in the base profiles, as some of them have the $pacemaker_master parameter defaulted to true.

Change-Id: I2f91c3f6baa33f74b5625789eec83233179a9655
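A sketch of the layout the profiles are being aligned to; the class name is the one this log covers, but the parameters and header text here are illustrative only:

    # === Parameters
    #
    # [*bootstrap_node*]
    #   (Mandatory) Hostname of the node performing the bootstrap.
    #
    # [*enable_load_balancer*]
    #   (Optional) Whether HAProxy load balancing should be enabled.
    #   Defaults to hiera('enable_load_balancer', true)
    #
    # [*step*]
    #   (Optional) The current step of the deployment.
    #   Defaults to hiera('step')
    #
    class tripleo::profile::pacemaker::haproxy (
      $bootstrap_node,
      $enable_load_balancer = hiera('enable_load_balancer', true),
      $step                 = hiera('step'),
    ) {
      # mandatory parameters first (alphabetical), then optional ones
    }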
2016-06-07  Remove loadbalancer profile  (Emilien Macchi, 1 file, -1/+1)
We don't need the loadbalancer profile anymore; we now have haproxy and keepalived profiles that replace it.

Change-Id: I5bf57f88a85fa8180392e9dde7ab39f4eda63113
2016-06-04  Deprecate loadbalancer profiles  (Emilien Macchi, 1 file, -0/+99)
Deprecate the loadbalancer profiles so that we have one profile for HAProxy and another for keepalived. Once THT uses the new profiles, we'll remove the loadbalancer profiles here.

Change-Id: I8aa9045fc80205485abab723968b26084f60bf71
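A sketch of what the split means for callers; the exact profile class paths below are assumptions, not taken from this change:

    # Old, combined profile (deprecated by this change):
    #   include ::tripleo::profile::pacemaker::loadbalancer
    #
    # New, split profiles:
    include ::tripleo::profile::pacemaker::haproxy
    include ::tripleo::profile::base::keepalived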