Age | Commit message | Author | Files | Lines
2016-10-21 | Remove the hardcoded tcp_keepalive false parameter | Michele Baldessari | 1 | -2/+0
In change I35921652bd84d1d6be0727051294983d4a0dde10 we want to remove all of those duplicate tcp_listen_options entries. One consequence is that we need to set rabbitmq::tcp_keepalive to true via hiera (as opposed to forcing it via the tcp_listen_options hash), and for that to work we need to remove this forced parameter override. Note that even if I35921652bd84d1d6be0727051294983d4a0dde10 and this change do not merge at exactly the same time, things are still okay because tcp_keepalive is still forced to true via tcp_listen_options. Change-Id: I608477d5714a5081b3b4ab3b9fc2932bdd598301
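A minimal sketch of the intended end state, assuming the puppetlabs-rabbitmq tcp_keepalive parameter; the actual hiera wiring lives in t-h-t and is not shown:

    # keepalive now comes from the module parameter (set to true via hiera)
    # instead of being forced inside the tcp_listen_options hash
    class { '::rabbitmq':
      tcp_keepalive => true,
    }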
2016-10-19Merge "Fix broken rabbitmqctl commands when using ipv6"Jenkins1-1/+2
2016-10-18Merge "Set memcached_servers for nova API"Jenkins1-0/+10
2016-10-18Merge "Remove explicit service_name setting from nova manifest"Jenkins1-3/+2
2016-10-18 | Set memcached_servers for nova API | Dan Prince | 1 | -0/+10
This patch updates the Nova profile so that we set memcached servers correctly for the Nova keystone auth_token middleware. Most of the hiera settings for ::nova::keystone::authtoken are already included in the t-h-t nova-api service. Change-Id: I3b7ff02abbd0d5e0c38232d02b33e4c7bc411120 Closes-bug: #1633595
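A hedged sketch of what the profile change amounts to; the memcached_node_ips hiera key and the stdlib suffix() helper are assumptions based on how other TripleO profiles wire memcached:

    # point the keystone auth_token middleware at the memcached cluster
    $memcached_servers = suffix(hiera('memcached_node_ips'), ':11211')
    class { '::nova::keystone::authtoken':
      memcached_servers => $memcached_servers,
    }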
2016-10-18Merge "Remove faulty migration logic to stop nova-api"Jenkins1-13/+0
2016-10-18Fix broken rabbitmqctl commands when using ipv6Michele Baldessari1-1/+2
When deploying via ipv6, rabbitmqctl commands have the following issues:
- `rabbitmqctl cluster_status` shows nodedown alerts
- list_queues / list_connections hang
- `rabbitmqctl node_health_check` fails with an error.
There is no issue when performing activity on the RHOS setup itself (from horizon/cli), i.e. the RHOS environment functions as expected. For example:

sudo rabbitmqctl node_health_check -n rabbit@node1
Checking health of node 'rabbit@node1' ...
Health check failed: health check of node 'rabbit@node1' fails: nodedown

The problem is that we are missing the following in /etc/rabbitmq/rabbitmq-env.conf:

RABBITMQ_CTL_ERL_ARGS="-proto_dist inet6_tcp"

Fix this by setting the appropriate RABBITMQ_CTL_ERL_ARGS when deploying ipv6.

Closes-Bug: #1633693
Change-Id: I53f4e76e687b3966fbb74fd0c2d83f05176630de
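A rough sketch of how the profile could wire this up; the rabbit_ipv6 hiera flag and the use of the module's environment_variables parameter are assumptions, the literal fix being the rabbitmq-env.conf line quoted above:

    # when the deployment is IPv6, tell rabbitmqctl's erlang node to use
    # the inet6 TCP distribution protocol (hiera flag name assumed)
    if str2bool(hiera('rabbit_ipv6', false)) {
      class { '::rabbitmq':
        environment_variables => {
          'RABBITMQ_CTL_ERL_ARGS' => '"-proto_dist inet6_tcp"',
        },
      }
    }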
2016-10-17Merge "packages: run upgrade at 'setup' stage"Jenkins3-30/+54
2016-10-17Remove faulty migration logic to stop nova-apiJuan Antonio Osorio Robles1-13/+0
The patch making nova run over httpd added migration logic to stop nova-api. However, this doesn't work since nova-metadata runs in the same process. The fact that it worked at all seems to have been luck: the systemctl stop runs, and then we start the service again via the nova::api resource. So this is fragile in its current state. This removes the exec, as we don't need it for the migration. Change-Id: I4603b81d30a704b07eef461b3cdbfe164614b04f
2016-10-14Merge "Move heat domain/user creation into keystone profile"Jenkins2-15/+24
2016-10-14packages: run upgrade at 'setup' stageEmilien Macchi3-30/+54
Instead of using an operator to make sure we upgrade packages before any service, which causes dependency cycles with the iptables puppet module, let's take another approach and upgrade the rpms in the 'setup' stage, a stage that runs before services are configured and started. That way we remove the dependency cycles and still make sure packages are upgraded before TripleO services are configured and run. Change-Id: I1be83f88be1959885c980ab4f428477d412751f7
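An illustrative sketch of the run-stage mechanics; the class shown is puppet-tripleo's tripleo::packages, while the stage wiring is simplified compared to the real manifests:

    # 'setup' runs before the default 'main' stage, so packages are
    # upgraded before any service is configured or started
    stage { 'setup':
      before => Stage['main'],
    }
    class { '::tripleo::packages':
      enable_upgrade => true,
      stage          => 'setup',
    }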
2016-10-14Merge "pacemaker: increase timeouts for rabbitmq and redis"Jenkins2-0/+2
2016-10-14Move heat domain/user creation into keystone profileSteven Hardy2-15/+24
This needs to happen on the node running keystone, or things break when you try to deploy e.g. the heat_engine service on a non-Controller role. We check the enabled flag for heat_engine so this only happens if the heat_engine service is running on some (any) role. Partial-Bug: #1631130 Change-Id: Ib088a572b384b479f51d56555734d78ab840a1f3
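A minimal sketch of the gating logic, assuming a per-role heat_engine_enabled hiera flag published by t-h-t:

    # only create the heat stack domain/user where keystone runs, and
    # only if heat_engine is enabled on some role
    if hiera('heat_engine_enabled', false) {
      include ::heat::keystone::domain
    }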
2016-10-14 | Remove explicit service_name setting from nova manifest | Juan Antonio Osorio Robles | 1 | -3/+2
We can now get this parameter from t-h-t, so it's not needed here. Change-Id: I014e7b3a6feb5609ace2e8ef1e4df11448b0a0cc Depends-On: Ic229182cc5c887b57f6182c3db1bac8bed330f7c
2016-10-14Merge "Deploy nova over Apache httpd"Jenkins1-2/+18
2016-10-14Merge "Add part_power and min_part_hours for Swift"Jenkins1-0/+13
2016-10-13Merge "Only run ceilometer::db::sync on bootstrap node"Jenkins2-7/+11
2016-10-13Add part_power and min_part_hours for SwiftChristian Schwede1-0/+13
Change-Id: I78049105adf52226d47cc6764b1ba6c2c06e91e5 Related-Bug: 1631926
2016-10-13Merge "Ensure presence of pacemaker restart directory."Jenkins2-4/+11
2016-10-13Ensure presence of pacemaker restart directory.Sofer Athlan-Guyot2-4/+11
Currently the /var/lib/tripleo/pacemaker-restarts directory is created only when the base/pacemaker.pp file is included in the manifest, with a notification that ensures the ordering and triggers the touch. Neither the trigger nor the dependency on base/pacemaker.pp should be required: anyone using tripleo::pacemaker::resource_restart_flag would expect the file to be created no matter what. For instance, the Cinder upgrade in the convergence step has this defined:

Cinder_config<||> ~> Tripleo::Pacemaker::Resource_restart_flag["${::cinder::params::volume_service}"]

but base/pacemaker.pp is not included in that step, so the trigger fails because the directory does not exist. The same applies to manila.pp. This patch removes the trigger and ensures the directory is created where needed.

Change-Id: Ic3aa82c818662e9e88e21c8381d657adef5b43ac
Closes-Bug: #1632232
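A simplified sketch of what the define now guarantees; the flag file name shown is a hypothetical example, the real define derives it from the resource title:

    # ensure the parent directory exists regardless of whether
    # base/pacemaker.pp is part of the catalog
    ensure_resource('file', '/var/lib/tripleo/pacemaker-restarts', {
      'ensure' => 'directory',
    })
    file { '/var/lib/tripleo/pacemaker-restarts/openstack-cinder-volume':
      ensure  => file,
      require => File['/var/lib/tripleo/pacemaker-restarts'],
    }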
2016-10-13 | Deploy nova over Apache httpd | Juan Antonio Osorio Robles | 1 | -2/+18
This adds the necessary resources to the manifest to migrate nova to run over httpd. The service name will be moved to t-h-t in a subsequent commit, but since this patch depends on t-h-t, we try to avoid circular dependencies of repos. Change-Id: I91d430a3871672f90b0f885736f067ddae3c238c Depends-On: I57fb20cf0d58b3376243ba4aeb04e995e7152ce3
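A hedged sketch of the shape of the change, using the puppet-nova Apache wsgi class with its defaults (vhost/TLS parameters omitted):

    # nova-api now runs as a WSGI application under Apache httpd;
    # nova-metadata is served by the same process
    include ::nova::api
    include ::nova::wsgi::apache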
2016-10-12Merge "Fix eqlx chap password"Jenkins1-1/+1
2016-10-12Merge "Add versioned_writes to Swift proxy config"Jenkins1-0/+1
2016-10-12pacemaker: increase timeouts for rabbitmq and redisEmilien Macchi2-0/+2
The 'stop timeout' values of the rabbitmq and redis pacemaker resources are set to 90s, while for all other services it is 200s. The overcloud deployment sometimes fails because of this with:

Error: Could not complete shutdown of rabbitmq-clone, 1 resources remaining
Error performing operation: Timer expired

This patch updates the timeout for Redis and RabbitMQ to avoid this error.

Change-Id: I8a3b3951a896ee3e8e5e09778e8ea4717e76a1b4
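A hedged sketch of the resulting resource declaration, assuming the puppet-pacemaker ocf define and its op_params parameter (other attributes such as the HA policy are left out):

    pacemaker::resource::ocf { 'rabbitmq':
      ocf_agent_name => 'heartbeat:rabbitmq-cluster',
      clone_params   => 'ordered=true interleave=true',
      # raise start/stop timeouts to match the other services (200s)
      op_params      => 'start timeout=200s stop timeout=200s',
    }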
2016-10-12Merge "Update websocket service name in config template"Jenkins1-1/+1
2016-10-11Add versioned_writes to Swift proxy configChristian Schwede1-0/+1
Tempest expects object versioning to be enabled by default in Swift; if not it has to be disabled explicitly in the Tempest config. This is a commonly used middleware, therefore it should be enabled in the overcloud proxy nodes as well. Closes-Bug: 1632215 Change-Id: I07a206473ff7939749e3eba1dfe3ea8c4526eb5c
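A minimal sketch, assuming the puppet-swift middleware class and that 'versioned_writes' is added to the proxy pipeline alongside it:

    # enable the object versioning middleware in the swift proxy
    class { '::swift::proxy::versioned_writes':
      allow_versioned_writes => true,
    }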
2016-10-10Merge "Fetch internal certificates for HAProxy based on network"Jenkins3-80/+273
2016-10-10Merge "Use Heat role *_enabled hiera to check Manila backends"Jenkins1-8/+20
2016-10-07Fix eqlx chap passwordAlex Schultz1-1/+1
The hiera key generated by THT is eqlx_chap_password and not eql_san_password. https://github.com/openstack/tripleo-heat-templates/blob/master/puppet/extraconfig/pre_deploy/controller/cinder-eqlx.yaml#L63 Change-Id: Ic062d9060f0ce437336e2bd6aaca3887fc33c8cf Closes-Bug: #1631527
2016-10-07 | Only run ceilometer::db::sync on bootstrap node | Alex Schultz | 2 | -7/+11
The ceilometer::db::sync is included by default in ceilometer::db but we only want it to run on the bootstrap node. This change passes the sync_db parameter to ceilometer::db to manage the db sync process rather than trying to manage the inclusion of ceilometer::db::sync within the profile class. Change-Id: Ib56db1a90dd6fbfe7582fc57b7728df81942cce2 Closes-Bug: #1629373
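Roughly, the profile now looks like this; the bootstrap_nodeid hiera key (the usual TripleO convention) is assumed:

    # only the bootstrap node performs the ceilometer db sync
    $sync_db = (downcase($::hostname) == downcase(hiera('bootstrap_nodeid')))
    class { '::ceilometer::db':
      sync_db => $sync_db,
    }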
2016-10-07Merge "Release 5.3.0 (RC3)"Jenkins1-1/+1
2016-10-07Release 5.3.0 (RC3)Emilien Macchi1-1/+1
Release Newton RC3 5.3.0 Change-Id: I1b367dcaba4c2c0bffa9eae0b81ee81f1676d754
2016-10-07 | Use Heat role *_enabled hiera to check Manila backends | Giulio Fidente | 1 | -8/+20
Aligns the way we check for enabled backends in pacemaker/manila.pp with what we did in base/manila/api.pp with [1]. The benefit is that we no longer need to emit custom hiera from the templates. 1. I86ba8b9d5872c0f1a94e74215e97b796ad129bfb Change-Id: I04e28a95e8d69a24cd3df109bf1802bfcbd941db
2016-10-07 | Set enabled_share_protocols based on enabled backends | Giulio Fidente | 1 | -3/+32
When deploying manila with cephfs, share creation fails because 'enabled_share_protocols' sticks to NFS,CIFS and does not get updated with CEPHFS. This change aims at fixing it by building the list of enabled protocols based on the list of enabled backends. Co-Authored-By: Tom Barron <tbarron@redhat.com> Closes-Bug: 1630564 Change-Id: I86ba8b9d5872c0f1a94e74215e97b796ad129bfb
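A hedged sketch of the idea; the backend flag name is an assumption, and the real change derives the list from all enabled backends rather than just cephfs:

    # extend the protocol list when the cephfs backend is enabled
    $enabled_protocols = hiera('manila_backend_cephfs_enabled', false) ? {
      true    => ['NFS', 'CIFS', 'CEPHFS'],
      default => ['NFS', 'CIFS'],
    }
    manila_config {
      'DEFAULT/enabled_share_protocols': value => join($enabled_protocols, ',');
    }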
2016-10-07Merge "Add ceph profile rspec testing"Jenkins6-0/+403
2016-10-07Merge "Add ceilometer profile rspec testing"Jenkins6-23/+354
2016-10-07Merge "Add aodh profile rspec testing"Jenkins8-1/+349
2016-10-06Merge "Enable usage of "short names" for Ceph cluster"Jenkins1-6/+11
2016-10-06Update websocket service name in config templateJulie Pichon1-1/+1
The name was changed to "zaqar-websocket" recently. Having the old name in the configuration file leads to errors and confusion when overriding URLs, as the override won't get picked up with the old name. Change-Id: I7acf900d094e41862958b3cddbb66ff0d8a3e46f Closes-Bug: #1630965
2016-10-06Merge "Enable usage of "short names" for Galera cluster"Jenkins1-1/+6
2016-10-05Merge "Explicitly use Keystone v2 endpoint in the UI"Jenkins1-1/+1
2016-10-05Add ceph profile rspec testingAlex Schultz6-0/+403
This change adds rspec testing for the ceph profiles in puppet-tripleo. Change-Id: I08954e011848d6b747735f11b3cbff5707460c26
2016-10-05 | Fetch internal certificates for HAProxy based on network | Juan Antonio Osorio Robles | 3 | -80/+273
The service profile in HAProxy has the capability of creating certificates based on a map. The idea is to standardize this, as some of those certificates should match certain networks the services are listening on (with the exception of the external network which is handled differently and the tenant network which doesn't need a certificate). So, based on which network a certain service is listening on, we fetch the appropriate certificate. bp tls-via-certmonger Change-Id: I89001ae32f46c9682aecc118753ef6cd647baa62
2016-10-05Merge "Use service-specific servernames for haproxy"Jenkins1-31/+31
2016-10-05Enable usage of "short names" for Ceph clusterJuan Antonio Osorio Robles1-6/+11
We're not able to use FQDNs yet, so to work around this, we give precedence to a "short name" list we'll get from t-h-t. We can migrate to using FQDNs in the next cycle. Change-Id: Ic6fec1057439ed9122d44ef294be890d3ff8a8ee Related-Bug: #1628521
2016-10-05Merge "Change rabbitmq queues HA mode from ha-all to ha-exactly"Jenkins1-1/+21
2016-10-05Explicitly use Keystone v2 endpoint in the UIJulie Pichon1-1/+1
The UI expects a Keystone endpoint URL that includes the version (without it, it is not possible to log in). Looking at the dist/tripleo_ui_config.js.sample configuration sample in the tripleo-ui repository, the current expectation is a v2.0 URL so let's use that for now. Change-Id: I4ca04b16251fbee264cd4ce5e5433c2c1cb6d2f0 Closes-Bug: #1630546
2016-10-05 | Use service-specific servernames for haproxy | Juan Antonio Osorio Robles | 1 | -31/+31
Right now we're hardcoding the server names for the services to be the controllers. This is problematic if we start using custom roles for services, which listen on nodes that are not controllers. We already have the server names for each service, so using this mapping instead fixes the issue. Change-Id: Ic4b65edb3dc1b75abbc3421a87cab97425b058c4 Closes-Bug: #1629098
2016-10-05 | Enable usage of "short names" for Galera cluster | Juan Antonio Osorio Robles | 1 | -1/+6
We're not able to use FQDNs yet, so to work around this, we give precedence to a "short name" list we'll get from t-h-t. Change-Id: I4ef7786474c229d5212a0deb2ca02ee992b030d8 Related-Bug: #1628521
2016-10-05 | Change rabbitmq queues HA mode from ha-all to ha-exactly | Michele Baldessari | 1 | -1/+21
It turns out that reducing the number of rabbitmq queue copies in the cluster significantly improves performance, especially failover recovery time. Right now the cluster uses the "ha-all" mode for rabbitmq queues. It is better to switch to "ha-exactly" mode and reduce the number of queue copies to ceil(N/2), where N is the number of controllers in the cluster - so in the typical 3-controller scenario it would be 2 by default. It does not make much sense to keep copies of the queues across the whole cluster, since if the quorum of nodes is lost the remaining cluster nodes will be stopped anyway. We let the user override this with a parameter. I.e. for a 3-node controlplane cluster we go from this:

pcs resource show rabbitmq
 Resource: rabbitmq (class=ocf provider=heartbeat type=rabbitmq-cluster)
  Attributes: set_policy="ha-all ^(?!amq\.).* {"ha-mode":"all"}"

To this:

pcs resource show rabbitmq
 Resource: rabbitmq (class=ocf provider=heartbeat type=rabbitmq-cluster)
  Attributes: set_policy="ha-all ^(?!amq\.).* {"ha-mode":"exactly","ha-params":2}"

According to Marian Krcmarik's testing, recovery time from failure was reduced significantly.

Co-Authored-By: Marian Krcmarik <mkrcmari@redhat.com>
Change-Id: Ib62001c03e1e08f58cf0c6e0ba07a8879a584084
Partial-Bug: #1628998
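A sketch of how the default copy count could be derived in the profile; the rabbitmq_node_names hiera key and the stdlib ceiling()/count() helpers are assumptions:

    # default the number of queue copies to ceil(N/2) of the rabbit nodes,
    # e.g. 2 copies for a 3-controller cluster
    $rabbit_nodes     = hiera('rabbitmq_node_names')
    $queue_copies     = ceiling(count($rabbit_nodes) / 2.0)
    $ha_queues_policy = "{\"ha-mode\":\"exactly\",\"ha-params\":${queue_copies}}"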