path: root/manifests
Age | Commit message | Author | Files | Lines
2016-10-19 | Enable TLS in the internal network for keystone | Juan Antonio Osorio Robles | 3 | -11/+156
This optionally enables TLS for keystone in the internal network. If internal TLS is enabled, each node serving the keystone service will use certmonger to request its certificate. This, in turn, also configures a command to be run when the certificate is refreshed (which requires the service to be restarted).

bp tls-via-certmonger

Change-Id: I303f6cf47859284785c0cdc65284a7eb89a4e039
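As an illustration of the pattern this commit describes, a minimal Puppet sketch, assuming the certmonger_certificate type from the certmonger puppet module; the file paths and the reload command are hypothetical:

    certmonger_certificate { 'keystone-internal':
      ensure       => 'present',
      certfile     => '/etc/pki/tls/certs/keystone.crt',    # hypothetical path
      keyfile      => '/etc/pki/tls/private/keystone.key',  # hypothetical path
      hostname     => $::fqdn,
      dnsname      => $::fqdn,
      # command certmonger runs after refreshing the certificate,
      # so the service picks up the new cert
      postsave_cmd => 'systemctl reload httpd',             # hypothetical command
      ca           => 'local',
    }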
2016-10-19Merge "Add port to rabbitmq node ip list"Jenkins12-14/+74
2016-10-19Merge "Include ::swift::config in Swift API and Storage roles"Jenkins2-0/+2
2016-10-19Merge "Fix broken rabbitmqctl commands when using ipv6"Jenkins1-1/+2
2016-10-18Merge "Set memcached_servers for nova API"Jenkins1-0/+10
2016-10-18Merge "Remove explicit service_name setting from nova manifest"Jenkins1-3/+2
2016-10-18Set memcached_servers for nova APIDan Prince1-0/+10
This patch updates the Nova profile so that we set memcached servers correctly for the Nova keystone auth_token middleware. Most of the hiera settings for ::nova::keystone::authtoken are already included in the t-h-t nova-api service.

Change-Id: I3b7ff02abbd0d5e0c38232d02b33e4c7bc411120
Closes-Bug: #1633595
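A sketch of the resulting wiring, assuming a memcached_node_ips hiera key and the default memcached port; suffix and any2array come from puppet-stdlib:

    $memcached_servers = suffix(any2array(hiera('memcached_node_ips')), ':11211')
    class { '::nova::keystone::authtoken':
      memcached_servers => $memcached_servers,
    }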
2016-10-18Merge "Remove faulty migration logic to stop nova-api"Jenkins1-13/+0
2016-10-18Fix broken rabbitmqctl commands when using ipv6Michele Baldessari1-1/+2
When deploying via ipv6, rabbitmqctl commands have the following issues:

- `rabbitmqctl cluster_status` shows nodedown alerts
- list_queues / list_connections hang
- `rabbitmqctl node_health_check` fails with an error

(There is no issue while performing activity on the RHOS setup itself, from horizon/cli; i.e. the RHOS environment functions as expected.)

For example:

    sudo rabbitmqctl node_health_check -n rabbit@node1
    Checking health of node 'rabbit@node1' ...
    Health check failed: health check of node 'rabbit@node1' fails: nodedown

The problem is that we are missing the following in /etc/rabbitmq/rabbitmq-env.conf:

    RABBITMQ_CTL_ERL_ARGS="-proto_dist inet6_tcp"

Fix these issues by setting the appropriate RABBITMQ_CTL_ERL_ARGS when deploying ipv6.

Closes-Bug: #1633693
Change-Id: I53f4e76e687b3966fbb74fd0c2d83f05176630de
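One way to lay that setting down from Puppet, sketched with the environment_variables parameter of the puppet-rabbitmq module (the surrounding class wiring is assumed):

    # puppet-rabbitmq writes these out to /etc/rabbitmq/rabbitmq-env.conf
    class { '::rabbitmq':
      environment_variables => {
        'RABBITMQ_CTL_ERL_ARGS' => '"-proto_dist inet6_tcp"',
      },
    }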
2016-10-17 | Add port to rabbitmq node ip list | Brent Eagles | 12 | -14/+74
We use the rabbit_hosts configuration for most of our services, but we haven't been adding the configured port. This patch appends the port provided to the service's heat template to the IPs in the list.

Note: while we could use the value set for the rabbitmq server in rabbitmq::port, it doesn't allow for dealing with SSL. This is also backwards compatible with the RabbitClientPort parameters used in the heat templates.

Change-Id: I0000f039144a6b0e98c0a148dc69324f60db3d8b
Closes-Bug: #1633580
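The transformation itself is a one-liner with puppet-stdlib; the hiera key and the port variable here are hypothetical:

    # ['10.0.0.1', '10.0.0.2'] becomes ['10.0.0.1:5672', '10.0.0.2:5672']
    $rabbit_hosts = suffix(any2array(hiera('rabbitmq_node_ips')), ":${rabbit_port}")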
2016-10-17Merge "packages: run upgrade at 'setup' stage"Jenkins2-27/+52
2016-10-17Remove faulty migration logic to stop nova-apiJuan Antonio Osorio Robles1-13/+0
The patch making nova run over httpd had added migration logic to stop nova-api. However, this doesn't work, since nova-metadata runs in the same process. The fact that it kept running at all seems to have been luck: systemctl stops the service, and then we start it again via the nova::api resource. So this is fragile in its current state. This patch therefore removes the exec, as we don't need it for the migration.

Change-Id: I4603b81d30a704b07eef461b3cdbfe164614b04f
2016-10-14Merge "Move heat domain/user creation into keystone profile"Jenkins2-15/+24
2016-10-14packages: run upgrade at 'setup' stageEmilien Macchi2-27/+52
Instead of using an operator to make sure we upgrade packages before any service, which causes dependency cycles with the iptables puppet module, let's take another approach and upgrade RPMs in the 'setup' stage, a stage that runs before services are configured and started. That way we remove the dependency cycles and make sure packages are upgraded before TripleO services are configured and run.

Change-Id: I1be83f88be1959885c980ab4f428477d412751f7
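A minimal sketch of the stage arrangement, assuming the tripleo::packages class and its enable_upgrade parameter:

    # 'setup' runs before 'main', so packages are upgraded before
    # any service is configured or started
    stage { 'setup':
      before => Stage['main'],
    }
    class { '::tripleo::packages':
      enable_upgrade => true,
      stage          => 'setup',
    }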
2016-10-14Merge "pacemaker: increase timeouts for rabbitmq and redis"Jenkins2-0/+2
2016-10-14Move heat domain/user creation into keystone profileSteven Hardy2-15/+24
This needs to happen on the node running keystone, or things break when you try to deploy e.g. the heat_engine service on a non-Controller role. We check the enabled flag for heat engine, so this only happens if the heat_engine service is running on some (any) role.

Partial-Bug: #1631130
Change-Id: Ib088a572b384b479f51d56555734d78ab840a1f3
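A sketch of the check, assuming t-h-t emits a heat_engine_enabled hiera flag for roles that run the service:

    if hiera('heat_engine_enabled', false) {
      # creates the heat stack domain and the domain admin user
      include ::heat::keystone::domain
    }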
2016-10-14 | Remove explicit service_name setting from nova manifest | Juan Antonio Osorio Robles | 1 | -3/+2
We can now get this parameter from t-h-t, so it's not needed here.

Change-Id: I014e7b3a6feb5609ace2e8ef1e4df11448b0a0cc
Depends-On: Ic229182cc5c887b57f6182c3db1bac8bed330f7c
2016-10-14Merge "Deploy nova over Apache httpd"Jenkins1-2/+18
2016-10-14Merge "Add part_power and min_part_hours for Swift"Jenkins1-0/+13
2016-10-13Merge "Only run ceilometer::db::sync on bootstrap node"Jenkins1-5/+4
2016-10-13Add part_power and min_part_hours for SwiftChristian Schwede1-0/+13
Change-Id: I78049105adf52226d47cc6764b1ba6c2c06e91e5 Related-Bug: 1631926
2016-10-13Merge "Ensure presence of pacemaker restart directory."Jenkins2-4/+11
2016-10-13Ensure presence of pacemaker restart directory.Sofer Athlan-Guyot2-4/+11
Currently the /var/lib/tripleo/pacemaker-restarts directory is created only when base/pacemaker.pp is included in the manifest, with a notification that ensures precedence order and triggers the touch. The trigger and the dependency on base/pacemaker.pp should not be required: someone using tripleo::pacemaker::resource_restart_flag would expect the file to be created no matter what.

For instance, the Cinder upgrade in the convergence step has this defined:

    Cinder_config<||> ~> Tripleo::Pacemaker::Resource_restart_flag["${::cinder::params::volume_service}"]

but in the convergence step base/pacemaker.pp is not included, so the above trigger fails because the directory is not created. The same applies to manila.pp.

This patch removes the trigger and ensures the directory is created when needed.

Change-Id: Ic3aa82c818662e9e88e21c8381d657adef5b43ac
Closes-Bug: #1632232
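A sketch of the fix inside the define, using puppet-stdlib's ensure_resource so repeated uses of the define don't declare the directory twice (the flag-file resource is illustrative):

    # make the directory exist wherever the flag is used, without
    # depending on base/pacemaker.pp being in the catalog
    ensure_resource('file', '/var/lib/tripleo/pacemaker-restarts', {
      'ensure' => 'directory',
    })
    # the per-service restart flag checked by the restart scripts
    file { "/var/lib/tripleo/pacemaker-restarts/${title}":
      ensure  => file,
      require => File['/var/lib/tripleo/pacemaker-restarts'],
    }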
2016-10-13 | Deploy nova over Apache httpd | Juan Antonio Osorio Robles | 1 | -2/+18
This adds the necessary resources to the manifest to migrate nova to run over httpd. The service name will be moved to t-h-t in a subsequent commit; since this patch depends on t-h-t, doing it in two steps avoids circular dependencies between the repos.

Change-Id: I91d430a3871672f90b0f885736f067ddae3c238c
Depends-On: I57fb20cf0d58b3376243ba4aeb04e995e7152ce3
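A minimal sketch of the switch, assuming puppet-nova's nova::wsgi::apache class; the bind-address lookup is illustrative:

    # serve nova-api via mod_wsgi instead of the standalone eventlet service
    class { '::nova::wsgi::apache':
      servername => $::fqdn,
      bind_host  => hiera('nova::api::api_bind_address', undef),  # assumed hiera key
      ssl        => false,
    }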
2016-10-12Merge "Fix eqlx chap password"Jenkins1-1/+1
2016-10-12Merge "Add versioned_writes to Swift proxy config"Jenkins1-0/+1
2016-10-12pacemaker: increase timeouts for rabbitmq and redisEmilien Macchi2-0/+2
The 'stop timeout' values of the rabbitmq and redis pacemaker resources are set to 90s, while for all other services it is 200s. The overcloud deployment sometimes fails because of this, with the error:

    Error: Could not complete shutdown of rabbitmq-clone, 1 resources remaining
    Error performing operation: Timer expired

This patch updates the timeout for Redis and RabbitMQ to avoid this error.

Change-Id: I8a3b3951a896ee3e8e5e09778e8ea4717e76a1b4
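A sketch of the change for rabbitmq, assuming puppet-pacemaker's pacemaker::resource::ocf define and its op_params parameter; the agent name follows the pcs output shown elsewhere in this log:

    pacemaker::resource::ocf { 'rabbitmq':
      ocf_agent_name => 'heartbeat:rabbitmq-cluster',
      # raise the stop timeout from 90s to the 200s used by other services
      op_params      => 'start timeout=200s stop timeout=200s',
      clone_params   => 'ordered=true interleave=true',
    }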
2016-10-11 | Add versioned_writes to Swift proxy config | Christian Schwede | 1 | -0/+1
Tempest expects object versioning to be enabled by default in Swift; if not, it has to be disabled explicitly in the Tempest config. This is a commonly used middleware, therefore it should be enabled in the overcloud proxy nodes as well.

Closes-Bug: 1632215
Change-Id: I07a206473ff7939749e3eba1dfe3ea8c4526eb5c
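A sketch of enabling the middleware, assuming puppet-swift's swift::proxy::versioned_writes class and that 'versioned_writes' is added to the proxy pipeline:

    class { '::swift::proxy::versioned_writes':
      allow_versioned_writes => true,
    }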
2016-10-10Merge "Fetch internal certificates for HAProxy based on network"Jenkins3-80/+273
2016-10-07Fix eqlx chap passwordAlex Schultz1-1/+1
The hiera key generated by THT is eqlx_chap_password and not eql_san_password.

https://github.com/openstack/tripleo-heat-templates/blob/master/puppet/extraconfig/pre_deploy/controller/cinder-eqlx.yaml#L63

Change-Id: Ic062d9060f0ce437336e2bd6aaca3887fc33c8cf
Closes-Bug: #1631527
2016-10-07 | Only run ceilometer::db::sync on bootstrap node | Alex Schultz | 1 | -5/+4
The ceilometer::db::sync class is included by default in ceilometer::db, but we only want it to run on the bootstrap node. This change passes the sync_db parameter to ceilometer::db to manage the db sync process, rather than trying to manage the inclusion of ceilometer::db::sync within the profile class.

Change-Id: Ib56db1a90dd6fbfe7582fc57b7728df81942cce2
Closes-Bug: #1629373
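A sketch of the bootstrap-node check and the pass-through; the bootstrap_nodeid hiera key is an assumption about how t-h-t exposes it:

    # only the bootstrap node runs the db sync
    $sync_db = downcase(hiera('bootstrap_nodeid')) == downcase($::hostname)
    class { '::ceilometer::db':
      sync_db => $sync_db,
    }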
2016-10-07 | Include ::swift::config in Swift API and Storage roles | Giulio Fidente | 2 | -0/+2
This change makes the Swift API and Storage roles include the ::swift::config class, as we do for the other OpenStack services, which is useful to push arbitrary config settings into Swift.

Change-Id: Iaf2c2f0f0103fe9264ce875099a1578b353a5558
2016-10-07 | Use Heat role *_enabled hiera to check Manila backends | Giulio Fidente | 1 | -8/+20
Aligns the way we check for enabled backends in pacemaker/manila.pp with what we did in base/manila/api.pp with [1]. The benefit is that we don't need to emit custom hiera from the templates.

1. I86ba8b9d5872c0f1a94e74215e97b796ad129bfb

Change-Id: I04e28a95e8d69a24cd3df109bf1802bfcbd941db
2016-10-07 | Set enabled_share_protocols based on enabled backends | Giulio Fidente | 1 | -3/+32
When deploying manila with cephfs, share creation fails because 'enabled_share_protocols' sticks to NFS,CIFS and does not get updated with CEPHFS. This change fixes that by building the list of enabled protocols based on the list of enabled backends.

Co-Authored-By: Tom Barron <tbarron@redhat.com>
Closes-Bug: 1630564
Change-Id: I86ba8b9d5872c0f1a94e74215e97b796ad129bfb
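A sketch of deriving the protocol list from backend flags; the *_enabled variables are hypothetical, and the manila::api parameter name follows the commit description (union and join come from puppet-stdlib):

    # NFS/CIFS-capable backends contribute NFS,CIFS; cephfs contributes CEPHFS
    $nfs_cifs_protocols = ($generic_enabled or $netapp_enabled) ? {
      true    => ['NFS', 'CIFS'],
      default => [],
    }
    $cephfs_protocols = $cephfs_enabled ? {
      true    => ['CEPHFS'],
      default => [],
    }
    class { '::manila::api':
      enabled_share_protocols => join(union($nfs_cifs_protocols, $cephfs_protocols), ','),
    }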
2016-10-07Merge "Add ceilometer profile rspec testing"Jenkins1-23/+45
2016-10-06Merge "Enable usage of "short names" for Ceph cluster"Jenkins1-6/+11
2016-10-06Merge "Enable usage of "short names" for Galera cluster"Jenkins1-1/+6
2016-10-05Merge "Explicitly use Keystone v2 endpoint in the UI"Jenkins1-1/+1
2016-10-05Fetch internal certificates for HAProxy based on networkJuan Antonio Osorio Robles3-80/+273
The service profile in HAProxy has the capability of creating certificates based on a map. The idea is to standardize this, as some of those certificates should match the networks the services are listening on (with the exception of the external network, which is handled differently, and the tenant network, which doesn't need a certificate). So, based on which network a given service is listening on, we fetch the appropriate certificate.

bp tls-via-certmonger

Change-Id: I89001ae32f46c9682aecc118753ef6cd647baa62
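A sketch of the per-network lookup; the hiera key and the map layout here are hypothetical:

    # each service knows which network it listens on; pick the matching cert
    $service_network = hiera("${service_name}_network", 'internal_api')            # hypothetical key
    $certificate     = $internal_certificates_specs["haproxy-${service_network}"]  # hypothetical map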
2016-10-05Merge "Use service-specific servernames for haproxy"Jenkins1-31/+31
2016-10-05Enable usage of "short names" for Ceph clusterJuan Antonio Osorio Robles1-6/+11
We're not able to use FQDNs yet, so to work around this, we give precedence to a "short name" list we'll get from t-h-t. We can migrate to using FQDNs in the next cycle. Change-Id: Ic6fec1057439ed9122d44ef294be890d3ff8a8ee Related-Bug: #1628521
2016-10-05Merge "Change rabbitmq queues HA mode from ha-all to ha-exactly"Jenkins1-1/+21
2016-10-05Explicitly use Keystone v2 endpoint in the UIJulie Pichon1-1/+1
The UI expects a Keystone endpoint URL that includes the version (without it, it is not possible to log in). Looking at the dist/tripleo_ui_config.js.sample configuration sample in the tripleo-ui repository, the current expectation is a v2.0 URL so let's use that for now. Change-Id: I4ca04b16251fbee264cd4ce5e5433c2c1cb6d2f0 Closes-Bug: #1630546
2016-10-05 | Use service-specific servernames for haproxy | Juan Antonio Osorio Robles | 1 | -31/+31
Right now we're hardcoding the server names for the services to be the controllers. This is problematic if we start using custom roles for services, which listen on nodes that are not controllers. We already have the server names for each service, so using this mapping instead fixes the issue.

Change-Id: Ic4b65edb3dc1b75abbc3421a87cab97425b058c4
Closes-Bug: #1629098
2016-10-05 | Enable usage of "short names" for Galera cluster | Juan Antonio Osorio Robles | 1 | -1/+6
We're not able to use FQDNs yet, so to work around this, we give precedence to a "short name" list we'll get from t-h-t.

Change-Id: I4ef7786474c229d5212a0deb2ca02ee992b030d8
Related-Bug: #1628521
2016-10-05 | Change rabbitmq queues HA mode from ha-all to ha-exactly | Michele Baldessari | 1 | -1/+21
It turns out that reducing the number of rabbitmq queue copies in the cluster significantly improves its performance, especially failover recovery time. Right now the cluster uses the ha-all mode for rabbitmq queues. It is best to change this to "ha-exactly" mode and reduce the number of queue copies to ceil(N/2), where N is the number of controllers in the cluster; in the typical scenario of 3 controllers it would be 2 by default. It does not make much sense to keep copies of queues across the whole cluster, since if the quorum of nodes is lost the remaining cluster nodes will be stopped anyway. We let the user override this with a parameter.

I.e. for a 3-node controlplane cluster we will go from this:

    pcs resource show rabbitmq
    Resource: rabbitmq (class=ocf provider=heartbeat type=rabbitmq-cluster)
    Attributes: set_policy="ha-all ^(?!amq\.).* {"ha-mode":"all"}"

to this:

    pcs resource show rabbitmq
    Resource: rabbitmq (class=ocf provider=heartbeat type=rabbitmq-cluster)
    Attributes: set_policy="ha-all ^(?!amq\.).* {"ha-mode":"exactly","ha-params":2}"

According to Marian Krcmarik's testing, recovery time from failure was reduced significantly.

Co-Authored-By: Marian Krcmarik <mkrcmari@redhat.com>
Change-Id: Ib62001c03e1e08f58cf0c6e0ba07a8879a584084
Partial-Bug: #1628998
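The replica-count computation is plain integer arithmetic, sketched here; the hiera key is illustrative and count comes from puppet-stdlib:

    # ceil(N/2) queue copies, e.g. 2 copies for a 3-controller cluster
    $nr_rabbit_nodes = count(hiera('rabbit_node_ips'))
    $nr_ha_queues    = $nr_rabbit_nodes / 2 + ($nr_rabbit_nodes % 2)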
2016-10-04 | Cleanup the firewall logic. | Dan Prince | 1 | -1/+1
We added code in t-h-t to strip empty services from the service_names list (these are often the result of a service set to OS::Heat::None). As such we can now drop this puppet reject statement.

Change-Id: Ie66f14f183de7e44a1f69af862f7d4be9a14c904
2016-10-04Merge "Fix the timeout for pacemaker systemd resources"Jenkins32-10/+40
2016-10-04Clean out UI httpd configuration fileJulie Pichon1-0/+15
When updating the package with yum directly, a new httpd config file is created with a different name than the one used by Puppet, causing httpd to fail. Cleaning out the package's config file while keeping it around means it won't get overwritten on update; this is how other projects such as puppet-horizon handle it.

Change-Id: I539729ce4cd0898f8b0f3f26266e4e6d55b99e37
Closes-Bug: #1628983
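A sketch of the clean-out; the config file path shipped by the tripleo-ui package is an assumption:

    # keep the packaged config present but empty, so yum won't recreate it
    # under a different name on update; the real vhost is managed by Puppet
    file { '/etc/httpd/conf.d/openstack-tripleo-ui.conf':  # assumed path
      ensure  => file,
      content => "# Cleared by Puppet: the TripleO UI vhost is configured elsewhere.\n",
    }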
2016-10-03Merge "Use FallbackResource instead of Rewrite for UI"Jenkins1-13/+7