Age  Commit message  [Author, files changed, lines -/+]
2016-10-18  Merge "Remove faulty migration logic to stop nova-api"  [Jenkins, 1 file changed, -13/+0]
2016-10-17  Merge "packages: run upgrade at 'setup' stage"  [Jenkins, 3 files changed, -30/+54]
2016-10-17  Remove faulty migration logic to stop nova-api  [Juan Antonio Osorio Robles, 1 file changed, -13/+0]
The patch making nova run over httpd had added migration logic to stop nova-api. However, this doesn't work, since nova-metadata runs in the same process. The fact that it was working at all seems to be just luck: the systemctl exec runs first, and then we start the service again via the nova::api resource, so this is fragile in its current state. This change removes the exec, as we don't need it for the migration.
Change-Id: I4603b81d30a704b07eef461b3cdbfe164614b04f
2016-10-14  Merge "Move heat domain/user creation into keystone profile"  [Jenkins, 2 files changed, -15/+24]
2016-10-14  packages: run upgrade at 'setup' stage  [Emilien Macchi, 3 files changed, -30/+54]
Instead of using an ordering operator to make sure we upgrade packages before any service, which causes dependency cycles with the iptables puppet module, take another approach and upgrade RPMs in the 'setup' stage, a stage that runs before services are configured and started. That way we remove the dependency cycles and still make sure packages are upgraded before TripleO services are configured and running.
Change-Id: I1be83f88be1959885c980ab4f428477d412751f7
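A minimal sketch of the stage-based approach, assuming a 'setup' stage declared to run before the default 'main' stage (class and parameter names here are illustrative, not necessarily the exact puppet-tripleo code):

    # Declare a 'setup' stage that runs before the default 'main' stage.
    stage { 'setup':
      before => Stage['main'],
    }

    # Assign the package-upgrade class to that stage so all RPMs are
    # upgraded before any service in 'main' is configured or started.
    class { '::tripleo::packages':
      enable_upgrade => true,
      stage          => 'setup',
    }

Because the upgrade happens in an earlier stage rather than through explicit before/require edges, no ordering arrows to individual services (and hence no cycles with the iptables module) are needed.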
2016-10-14  Merge "pacemaker: increase timeouts for rabbitmq and redis"  [Jenkins, 2 files changed, -0/+2]
2016-10-14  Move heat domain/user creation into keystone profile  [Steven Hardy, 2 files changed, -15/+24]
This needs to happen on the node running keystone, or things break when you try to deploy e.g. the heat_engine service on a non-Controller role. We check the enabled flag for heat_engine, so this only happens if the heat_engine service is running on some (any) role.
Partial-Bug: #1631130
Change-Id: Ib088a572b384b479f51d56555734d78ab840a1f3
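The check can be as simple as the following hedged sketch from the keystone profile's point of view (the hiera key follows the per-service *_enabled convention referenced elsewhere in this log and is illustrative):

    # Only manage the heat stack domain and user where keystone runs,
    # and only if heat_engine is enabled on some role in the deployment.
    if hiera('heat_engine_enabled', false) {
      include ::heat::keystone::domain
    }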
2016-10-14  Merge "Deploy nova over Apache httpd"  [Jenkins, 1 file changed, -2/+18]
2016-10-14  Merge "Add part_power and min_part_hours for Swift"  [Jenkins, 1 file changed, -0/+13]
2016-10-13  Merge "Only run ceilometer::db::sync on bootstrap node"  [Jenkins, 2 files changed, -7/+11]
2016-10-13  Add part_power and min_part_hours for Swift  [Christian Schwede, 1 file changed, -0/+13]
Change-Id: I78049105adf52226d47cc6764b1ba6c2c06e91e5
Related-Bug: 1631926
2016-10-13  Merge "Ensure presence of pacemaker restart directory."  [Jenkins, 2 files changed, -4/+11]
2016-10-13  Ensure presence of pacemaker restart directory.  [Sofer Athlan-Guyot, 2 files changed, -4/+11]
Currently the /var/lib/tripleo/pacemaker-restarts directory is created only when the base/pacemaker.pp file is included in the manifest, and a notification ensures the ordering and triggers the touch. The trigger and the dependency on base/pacemaker.pp should not be required: anyone using tripleo::pacemaker::resource_restart_flag would expect the file to be created no matter what. For instance, the Cinder upgrade in the convergence step has this defined:

    Cinder_config<||> ~> Tripleo::Pacemaker::Resource_restart_flag["${::cinder::params::volume_service}"]

but base/pacemaker.pp is not included in the convergence step, so the trigger fails because the directory has not been created. The same applies to manila.pp. This patch removes the trigger and ensures the directory is created when needed.
Change-Id: Ic3aa82c818662e9e88e21c8381d657adef5b43ac
Closes-Bug: #1632232
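A sketch of the intended behaviour (assumed shape of the defined type, not the exact merged code): the defined type ensures its own parent directory, so callers no longer have to include base/pacemaker.pp.

    define tripleo::pacemaker::resource_restart_flag {
      # Create the flag directory wherever this defined type is used;
      # ensure_resource() avoids duplicate declarations across services.
      ensure_resource('file', '/var/lib/tripleo/pacemaker-restarts', {
        'ensure' => 'directory',
      })

      # Touch the per-resource flag only when notified (e.g. by a config
      # change), marking the pacemaker resource for a later restart.
      exec { "pacemaker restart flag for ${title}":
        command     => "/usr/bin/touch /var/lib/tripleo/pacemaker-restarts/${title}",
        refreshonly => true,
        require     => File['/var/lib/tripleo/pacemaker-restarts'],
      }
    }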
2016-10-13  Deploy nova over Apache httpd  [Juan Antonio Osorio Robles, 1 file changed, -2/+18]
This adds the necessary resources to the manifest to migrate nova to run over httpd. The service name will be moved to t-h-t in a subsequent commit; since that t-h-t patch depends on this one, doing it this way avoids circular dependencies between the repos.
Change-Id: I91d430a3871672f90b0f885736f067ddae3c238c
Depends-On: I57fb20cf0d58b3376243ba4aeb04e995e7152ce3
2016-10-12  Merge "Fix eqlx chap password"  [Jenkins, 1 file changed, -1/+1]
2016-10-12  Merge "Add versioned_writes to Swift proxy config"  [Jenkins, 1 file changed, -0/+1]
2016-10-12  pacemaker: increase timeouts for rabbitmq and redis  [Emilien Macchi, 2 files changed, -0/+2]
When we look at the 'stop' timeout values of the rabbitmq and redis pacemaker resources, they are set to 90s, while for all other services they are set to 200s. The overcloud deployment sometimes fails because of this, with:

    Error: Could not complete shutdown of rabbitmq-clone, 1 resources remaining
    Error performing operation: Timer expired

This patch increases the timeouts for Redis and RabbitMQ to avoid this error.
Change-Id: I8a3b3951a896ee3e8e5e09778e8ea4717e76a1b4
2016-10-12  Merge "Update websocket service name in config template"  [Jenkins, 1 file changed, -1/+1]
2016-10-11  Add versioned_writes to Swift proxy config  [Christian Schwede, 1 file changed, -0/+1]
Tempest expects object versioning to be enabled by default in Swift; if it is not, it has to be disabled explicitly in the Tempest config. This is a commonly used middleware, so it should be enabled on the overcloud proxy nodes as well.
Closes-Bug: 1632215
Change-Id: I07a206473ff7939749e3eba1dfe3ea8c4526eb5c
2016-10-10  Merge "Fetch internal certificates for HAProxy based on network"  [Jenkins, 3 files changed, -80/+273]
2016-10-10  Merge "Use Heat role *_enabled hiera to check Manila backends"  [Jenkins, 1 file changed, -8/+20]
2016-10-07  Fix eqlx chap password  [Alex Schultz, 1 file changed, -1/+1]
The hiera key generated by THT is eqlx_chap_password and not eql_san_password.
https://github.com/openstack/tripleo-heat-templates/blob/master/puppet/extraconfig/pre_deploy/controller/cinder-eqlx.yaml#L63
Change-Id: Ic062d9060f0ce437336e2bd6aaca3887fc33c8cf
Closes-Bug: #1631527
2016-10-07  Only run ceilometer::db::sync on bootstrap node  [Alex Schultz, 2 files changed, -7/+11]
The ceilometer::db::sync class is included by default by ceilometer::db, but we only want it to run on the bootstrap node. This change passes the sync_db parameter to ceilometer::db to control the db sync process, rather than trying to manage the inclusion of ceilometer::db::sync within the profile class.
Change-Id: Ib56db1a90dd6fbfe7582fc57b7728df81942cce2
Closes-Bug: #1629373
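A minimal sketch of the bootstrap-node pattern commonly used in the puppet-tripleo profiles for this (the hiera key and exact wiring inside the profile class are assumptions for illustration):

    # Only the bootstrap node runs the database migration; every node
    # still gets the database configuration itself.
    if $::hostname == downcase(hiera('bootstrap_nodeid')) {
      $sync_db = true
    } else {
      $sync_db = false
    }

    class { '::ceilometer::db':
      sync_db => $sync_db,
    }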
2016-10-07  Merge "Release 5.3.0 (RC3)"  [Jenkins, 1 file changed, -1/+1]
2016-10-07  Release 5.3.0 (RC3)  [Emilien Macchi, 1 file changed, -1/+1]
Release Newton RC3 5.3.0.
Change-Id: I1b367dcaba4c2c0bffa9eae0b81ee81f1676d754
2016-10-07  Use Heat role *_enabled hiera to check Manila backends  [Giulio Fidente, 1 file changed, -8/+20]
Aligns the way we check for enabled backends in pacemaker/manila.pp with what we did in base/manila/api.pp in [1]. The benefit is that we no longer need to emit custom hiera from the templates.

1. I86ba8b9d5872c0f1a94e74215e97b796ad129bfb
Change-Id: I04e28a95e8d69a24cd3df109bf1802bfcbd941db
2016-10-07  Set enabled_share_protocols based on enabled backends  [Giulio Fidente, 1 file changed, -3/+32]
When deploying manila with cephfs, share creation fails because 'enabled_share_protocols' stays at NFS,CIFS and never gains CEPHFS. This change fixes that by building the list of enabled protocols from the list of enabled backends.
Co-Authored-By: Tom Barron <tbarron@redhat.com>
Closes-Bug: 1630564
Change-Id: I86ba8b9d5872c0f1a94e74215e97b796ad129bfb
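A hedged sketch of the idea (the hiera key and variable names are illustrative, not the exact merged code):

    # Start from the default protocols and add CEPHFS only when a
    # CephFS backend is actually enabled.
    $base_protocols = ['NFS', 'CIFS']

    if hiera('manila_backend_cephfs_enabled', false) {
      $share_protocols = concat($base_protocols, 'CEPHFS')
    } else {
      $share_protocols = $base_protocols
    }

    class { '::manila::api':
      enabled_share_protocols => join($share_protocols, ','),
    }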
2016-10-07  Merge "Add ceph profile rspec testing"  [Jenkins, 6 files changed, -0/+403]
2016-10-07  Merge "Add ceilometer profile rspec testing"  [Jenkins, 6 files changed, -23/+354]
2016-10-07  Merge "Add aodh profile rspec testing"  [Jenkins, 8 files changed, -1/+349]
2016-10-06  Merge "Enable usage of "short names" for Ceph cluster"  [Jenkins, 1 file changed, -6/+11]
2016-10-06  Update websocket service name in config template  [Julie Pichon, 1 file changed, -1/+1]
The name was changed to "zaqar-websocket" recently. Having the old name in the configuration file leads to errors and confusion when overriding URLs, as the override won't get picked up with the old name.
Change-Id: I7acf900d094e41862958b3cddbb66ff0d8a3e46f
Closes-Bug: #1630965
2016-10-06  Merge "Enable usage of "short names" for Galera cluster"  [Jenkins, 1 file changed, -1/+6]
2016-10-05  Merge "Explicitly use Keystone v2 endpoint in the UI"  [Jenkins, 1 file changed, -1/+1]
2016-10-05  Add ceph profile rspec testing  [Alex Schultz, 6 files changed, -0/+403]
This change adds rspec testing for the ceph profiles in puppet-tripleo.
Change-Id: I08954e011848d6b747735f11b3cbff5707460c26
2016-10-05  Fetch internal certificates for HAProxy based on network  [Juan Antonio Osorio Robles, 3 files changed, -80/+273]
The HAProxy service profile can create certificates based on a map. The idea is to standardize this, as some of those certificates should match the networks the services are listening on (with the exception of the external network, which is handled differently, and the tenant network, which doesn't need a certificate). So, based on which network a given service listens on, we fetch the appropriate certificate.

bp tls-via-certmonger
Change-Id: I89001ae32f46c9682aecc118753ef6cd647baa62
2016-10-05  Merge "Use service-specific servernames for haproxy"  [Jenkins, 1 file changed, -31/+31]
2016-10-05  Enable usage of "short names" for Ceph cluster  [Juan Antonio Osorio Robles, 1 file changed, -6/+11]
We're not able to use FQDNs yet, so to work around this, we give precedence to a "short name" list we'll get from t-h-t. We can migrate to using FQDNs in the next cycle.
Change-Id: Ic6fec1057439ed9122d44ef294be890d3ff8a8ee
Related-Bug: #1628521
2016-10-05  Merge "Change rabbitmq queues HA mode from ha-all to ha-exactly"  [Jenkins, 1 file changed, -1/+21]
2016-10-05  Explicitly use Keystone v2 endpoint in the UI  [Julie Pichon, 1 file changed, -1/+1]
The UI expects a Keystone endpoint URL that includes the version (without it, it is not possible to log in). Looking at the dist/tripleo_ui_config.js.sample configuration sample in the tripleo-ui repository, the current expectation is a v2.0 URL, so let's use that for now.
Change-Id: I4ca04b16251fbee264cd4ce5e5433c2c1cb6d2f0
Closes-Bug: #1630546
2016-10-05  Use service-specific servernames for haproxy  [Juan Antonio Osorio Robles, 1 file changed, -31/+31]
Right now we're hardcoding the server names for the services to be the controllers. This is problematic if we start using custom roles for services, which listen on nodes that are not controllers. We already have the server names for each service, so using this mapping instead fixes the issue.
Change-Id: Ic4b65edb3dc1b75abbc3421a87cab97425b058c4
Closes-Bug: #1629098
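A hedged sketch of what a per-service listener looks like with that mapping (parameter names follow the tripleo::haproxy::endpoint defined type; the service, port, and hiera keys are illustrative assumptions):

    # Use the per-service node lists from hiera rather than assuming the
    # controller nodes for every backend.
    ::tripleo::haproxy::endpoint { 'cinder':
      internal_ip  => hiera('cinder_api_vip'),
      service_port => 8776,
      ip_addresses => hiera('cinder_api_node_ips'),
      server_names => hiera('cinder_api_node_names'),
    }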
2016-10-05  Enable usage of "short names" for Galera cluster  [Juan Antonio Osorio Robles, 1 file changed, -1/+6]
We're not able to use FQDNs yet, so to work around this, we give precedence to a "short name" list we'll get from t-h-t.
Change-Id: I4ef7786474c229d5212a0deb2ca02ee992b030d8
Related-Bug: #1628521
2016-10-05  Change rabbitmq queues HA mode from ha-all to ha-exactly  [Michele Baldessari, 1 file changed, -1/+21]
It turns out that reducing the number of rabbitmq queue copies in the cluster significantly improves cluster performance, especially failover recovery time. Right now the cluster uses the ha-all mode for rabbitmq queues. It is better to change this to the "ha-exactly" mode and reduce the number of queue copies to ceil(N/2), where N is the number of controllers in the cluster; in the typical three-controller scenario that is 2 by default. It does not make much sense to keep copies of queues across the whole cluster, since if the quorum of nodes is lost the remaining cluster nodes will be stopped anyway. We let the user override this with a parameter.

For a 3-node controlplane cluster we go from this:

    pcs resource show rabbitmq
    Resource: rabbitmq (class=ocf provider=heartbeat type=rabbitmq-cluster)
    Attributes: set_policy="ha-all ^(?!amq\.).* {"ha-mode":"all"}"

to this:

    pcs resource show rabbitmq
    Resource: rabbitmq (class=ocf provider=heartbeat type=rabbitmq-cluster)
    Attributes: set_policy="ha-all ^(?!amq\.).* {"ha-mode":"exactly","ha-params":2}"

According to Marian Krcmarik's testing, recovery time from failure was reduced significantly.
Co-Authored-By: Marian Krcmarik <mkrcmari@redhat.com>
Change-Id: Ib62001c03e1e08f58cf0c6e0ba07a8879a584084
Partial-Bug: #1628998
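A minimal sketch of how the replica count and policy string can be derived in the profile (variable and hiera names are illustrative, not the exact merged code):

    # With N nodes running rabbitmq, keep ceil(N/2) copies of each queue
    # instead of one per node; (N + 1) / 2 == ceil(N / 2) for integers.
    $rabbit_nodes = hiera('rabbitmq_node_names')
    $queue_copies = (size($rabbit_nodes) + 1) / 2

    # Policy string passed to the rabbitmq-cluster OCF resource agent.
    $set_policy = "ha-all ^(?!amq\\.).* {\"ha-mode\":\"exactly\",\"ha-params\":${queue_copies}}"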
2016-10-04  Cleanup the firewall logic.  [Dan Prince, 1 file changed, -1/+1]
We added code in t-h-t to strip empty services from the service_names list (these are often the result of a service set to OS::Heat::None). As such we can now drop this puppet reject statement.
Change-Id: Ie66f14f183de7e44a1f69af862f7d4be9a14c904
2016-10-04  Merge "Fix the timeout for pacemaker systemd resources"  [Jenkins, 32 files changed, -10/+40]
2016-10-04  Clean out UI httpd configuration file  [Julie Pichon, 1 file changed, -0/+15]
When updating the package with yum directly, a new httpd config file is created with a different name than the one used by Puppet, causing httpd to fail. Cleaning out the package config file and keeping it around means it won't get overwritten on update, and is the way other projects such as puppet-horizon handle this.
Change-Id: I539729ce4cd0898f8b0f3f26266e4e6d55b99e37
Closes-Bug: #1628983
2016-10-03  Merge "Use FallbackResource instead of Rewrite for UI"  [Jenkins, 1 file changed, -13/+7]
2016-10-03  Fix the timeout for pacemaker systemd resources  [Michele Baldessari, 32 files changed, -10/+40]
Back in the Mitaka cycle, via change If6b43982c958f63bc78ad997400bf1279c23df7e, we made sure that the default start and stop timeouts for pacemaker systemd resources are 200s (>= twice the default 90s DefaultTimeoutStopSec in systemd). We did this by setting puppet resource defaults for the Pacemaker::Resource::Service class:

    Pacemaker::Resource::Service {
      op_params => 'start timeout=200s stop timeout=200s',
    }

The problem is that after the composable services rework this no longer works, and the pacemaker systemd resources that still exist do not get these timeouts. We want to move away from resource defaults for this because their effect depends on inclusion order, which in TripleO is no longer guaranteed (https://docs.puppet.com/puppet/latest/reference/lang_scope.html#scope-lookup-rules). The only services affected in Newton are cinder-volume, cinder-backup, manila-share, and haproxy. I preferred fixing all the pacemaker resources because it seems the cleanest and most logical commit.
Change-Id: If89a95706514e536a7a2949871a0002c79b6046e
Closes-Bug: #1629366
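A hedged sketch of the per-resource alternative (the resource title and clone_params value are illustrative; op_params is the puppet-pacemaker parameter quoted above):

    # Set the operation timeouts directly on each remaining systemd-backed
    # pacemaker resource rather than relying on scope-dependent defaults.
    pacemaker::resource::service { 'haproxy':
      op_params    => 'start timeout=200s stop timeout=200s',
      clone_params => true,
    }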
2016-10-03  Merge "Add swift proxy for ceilometer middleware"  [Jenkins, 1 file changed, -0/+1]
2016-10-03  Merge "Cinder: Add iSCSI protocol parameter"  [Jenkins, 1 file changed, -0/+6]