Age | Commit message | Author | Files | Lines
2016-10-31 | Make sure keepalived is restarted before haproxy. | Sofer Athlan-Guyot | 1 | -1/+4
When using an SSL setup for the undercloud, the admin and public VIPs required for SSL binding by haproxy are created by keepalived. This makes sure that keepalived is started before haproxy and thus that the interfaces are indeed present. This patch also ensures the same happens for the overcloud SSL configuration. The case where a load-balancing technology other than haproxy is used is not covered. Closes-Bug: #1638029 Change-Id: I98cb0dcd7f389a1dd38ec8324429bfef4979aa66
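A minimal Puppet sketch of the ordering this implies, assuming profiles that include classes named keepalived and haproxy (class and hiera key names are illustrative, not the exact code of this patch):
    # Keepalived creates the admin/public VIPs, so apply it before haproxy
    # tries to bind to them (class and hiera key names are illustrative).
    if hiera('enable_keepalived', true) and hiera('enable_load_balancer', true) {
      Class['::keepalived'] -> Class['::haproxy']
    }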
2016-10-31Merge "Release 5.4.0"Jenkins1-1/+1
2016-10-31Merge "Calculate zaqar mongo from mongodb_node_ips"Jenkins1-2/+17
2016-10-31Merge "Reload haproxy if any configuration changes on HA"Jenkins1-1/+1
2016-10-31Merge "Enable TLS in the internal network for aodh"Jenkins2-3/+56
2016-10-31Release 5.4.0Emilien Macchi1-1/+1
New Newton release Change-Id: I152fbd1dcaac37474183d60654db15a9a4918209
2016-10-31Merge "Enable TLS in the internal network for ceilometer"Jenkins2-2/+54
2016-10-30Fixes transparent binding to OpenDaylight in HA ProxyTim Rozet1-2/+2
ODL was missing transparent binding mode, which causes HA deployments to fail since HA Proxy will try to come up on every node (even without VIP). Closes-Bug: 1637833 Change-Id: I0bb7839cdcfeacb4ca1a9fc6f878e8b51330be92 Signed-off-by: Tim Rozet <trozet@redhat.com>
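As a rough illustration of the fix (the VIP address and port are placeholders, not the template's actual values), the ODL frontend needs haproxy's transparent bind option so it can bind to a VIP that is not currently local:
    # Sketch only: the 'transparent' bind option lets haproxy start even
    # when the VIP is not currently assigned to this node.
    haproxy::listen { 'opendaylight':
      bind => { '192.0.2.251:8081' => ['transparent'] },
    }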
2016-10-27 | Clean up service name from cinder api | Juan Antonio Osorio Robles | 1 | -3/+1
Since the service_name is now being passed from t-h-t, we can clean it up from the profile in puppet. Change-Id: I724af8c355c3077be64cf472cedbca80af55da01 Depends-On: I13638cd1af52537bef8540f0d5fa5f5f7decd392
2016-10-27 | Calculate zaqar mongo from mongodb_node_ips | Brad P. Crochet | 1 | -2/+17
In order to make the zaqar service fully composable, the mongo IPs need to be calculated without assuming that mongo and zaqar are on the same node. Change-Id: I0b077e85ba5fcd9fdfd33956cf33ce2403fcb088
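A sketch of the calculation using the stdlib suffix/join functions; the zaqar class and its uri parameter are assumptions here, not the patch's exact code:
    # Build the MongoDB URI from every mongo node instead of assuming
    # that mongo and zaqar share a node; 27017 is the mongod default port.
    $mongo_node_ips = hiera('mongodb_node_ips')   # e.g. ['192.0.2.10', '192.0.2.11']
    $mongo_uri      = join(suffix($mongo_node_ips, ':27017'), ',')
    class { '::zaqar::messaging::mongodb':
      uri => "mongodb://${mongo_uri}",
    }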
2016-10-27Merge "Set redis file descriptor limit when run via pacemaker"Jenkins1-0/+17
2016-10-26Reload haproxy if any configuration changes on HAJuan Antonio Osorio Robles1-1/+1
In some cases, for instance when updating from a non-SSL HAProxy setup to an SSL setup, we don't reload haproxy's configuration. This is problematic since we need HAProxy to serve the certificates and the new endpoints. This forces a reload whenever Puppet detects configuration changes. Change-Id: Ie1dd809e6beef33fadad48de55e488219fb7d686 Closes-Bug: #1636921
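A hedged sketch of the idea, assuming haproxy.cfg is managed as a concat resource as in the haproxy module (the exact resource being subscribed to may differ in the real profile):
    # Reload (rather than fully restart) haproxy whenever Puppet changes
    # its configuration so new certificates and endpoints take effect.
    service { 'haproxy':
      ensure  => running,
      restart => 'systemctl reload haproxy',
    }
    Concat['/etc/haproxy/haproxy.cfg'] ~> Service['haproxy']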
2016-10-26Merge "Deploy cinder over Apache httpd"Jenkins1-1/+4
2016-10-26Merge "Only restart haproxy services when enable_load_balancer is defined"Jenkins1-1/+1
2016-10-26Merge "Remove the hardcoded tcp_keepalive false parameter"Jenkins1-2/+0
2016-10-25Only restart haproxy services when enable_load_balancer is definedMichele Baldessari1-1/+1
If we upgrade a cloud that was configured with external load balancer the process will fail during convergence step because it will try to restart haproxy which is not configured when an external load balancer is configured. Closes-Bug: #1636527 Change-Id: I6f6caec3e5c96e77437c1c83e625f39649a66c48
2016-10-25 | Set redis file descriptor limit when run via pacemaker | Michele Baldessari | 1 | -0/+17
The current redis file descriptor limit is 4096 for two reasons:
- it is run as the redis user;
- it is not started via systemd, which sets an explicit LimitNOFILE of 10240 (matching the default configuration of at most 10000 clients).
Create an /etc/security/limits.d/redis.conf file in order to increase the fd limit. With this change we correctly get the following limits:
    [root@overcloud-controller-0 ~]# pcs status | grep -A2 redis
    Master/Slave Set: redis-master [redis]
        Masters: [ overcloud-controller-2 ]
        Slaves: [ overcloud-controller-0 overcloud-controller-1 ]
    [root@overcloud-controller-0 ~]# cat /proc/`pgrep redis`/limits | grep open
    Max open files            10240                10240                files
Previously this limit was set to 4096. Change-Id: I7691581bad92ad9442cecd82cf44f5ac78ed169f Closes-Bug: #1635334
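A minimal Puppet sketch of such a limits file, using the values quoted above (how the profile actually writes the file may differ):
    # Raise the open-files limit for the redis user so pacemaker-managed
    # redis gets 10240 fds instead of the default 4096.
    file { '/etc/security/limits.d/redis.conf':
      ensure  => file,
      owner   => 'root',
      group   => 'root',
      mode    => '0644',
      content => "redis soft nofile 10240\nredis hard nofile 10240\n",
    }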
2016-10-23Merge "Enable communication between UI and the Undercloud by making HAProxy ↵Jenkins2-1/+21
proxy for the UI"
2016-10-23Merge "Enable haproxy statistics unix socket"Jenkins1-0/+4
2016-10-22Merge "Increase haproxy client/server timeout for swift-proxy"Jenkins1-0/+5
2016-10-22Merge "Use HAProxy for docker-registry endpoint"Jenkins1-0/+26
2016-10-21Merge "Deploy monitoring/logging agents sooner"Jenkins2-86/+82
2016-10-21Merge "Add zaqar profiles"Jenkins2-0/+52
2016-10-21NFS mounting for Glance file backendJiri Stransky2-4/+93
Previously we did this with Pacemaker, but with the move to the NG HA architecture we lost the ability to use NFS mounts as image storage for Glance. This reimplements the mounting without utilizing Pacemaker. The mount is by default also written to /etc/fstab so that it persists over reboot, but this behavior can be disabled. This could also go to puppet-glance eventually, but not yet -- we need this backported to Newton because it's a TripleO regression. I don't think puppet-glance would allow backporting this to Newton, because from their point of view it would be an RFE rather than a regression. Change-Id: I45ad34c36587a8d695069368cf791f1efb68256c Related-Bug: #1635606
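Roughly, the reimplementation amounts to a Puppet mount resource along these lines (the share, mount point and options are placeholders, not the profile's defaults):
    # Mount the NFS share on Glance's image directory; ensure => mounted
    # also writes the entry to /etc/fstab so it persists across reboots.
    mount { '/var/lib/glance/images':
      ensure  => mounted,
      device  => 'nfs.example.com:/export/glance',
      fstype  => 'nfs',
      options => 'rw,_netdev',
      atboot  => true,
    }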
2016-10-21 | Increase haproxy client/server timeout for swift-proxy | John Trowbridge | 1 | -0/+5
The upload and extraction of the plan tarball to swift can take longer than the default one minute in slower environments. Doubling the timeout to two minutes has proven to help. This is only a partial fix, because the error reporting for this issue also needs to be improved. Change-Id: I06592d38fdfefacc8bdf76289a0bfa20eb33a89b Partial-Bug: 1635269
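An illustrative haproxy listen definition with the doubled timeouts (the bind address is a placeholder and the rest of the frontend settings are elided):
    # Give the swift-proxy frontend two-minute client/server timeouts so
    # slow plan-tarball uploads and extractions are not cut off.
    haproxy::listen { 'swift_proxy_server':
      bind    => { '192.0.2.250:8080' => [] },
      options => {
        'timeout client' => '2m',
        'timeout server' => '2m',
      },
    }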
2016-10-21Merge "Removes logic dependent on 'odl_on_controller'"Jenkins3-15/+4
2016-10-21Merge "Enable TLS in the internal network for keystone"Jenkins3-11/+156
2016-10-21Deploy monitoring/logging agents soonerMartin Mágr2-86/+82
To be able to monitor during deployment, we need sensu clients and fluentd collectors be deployed as soon as it is possible. Change-Id: I952f0d6de6f6327d5c923b8f1d7a5979758dbc59
2016-10-21Remove the hardcoded tcp_keepalive false parameterMichele Baldessari1-2/+0
In change I35921652bd84d1d6be0727051294983d4a0dde10 we want to remove all those duplicate tcp_listen_option entries. One consequence of that is that we need to set rabbitmq::tcp_keepalive to true via hiera (as opposed to forcing it via the tcp_listen_option hash). For this to work we need to remove this forced parameter override. Note that even if I35921652bd84d1d6be0727051294983d4a0dde10 and this change don't merge at the exact same time it is still okay because we do force tcp_keepalive to true via the tcp_listen_options. Change-Id: I608477d5714a5081b3b4ab3b9fc2932bdd598301
2016-10-20Merge "pacemaker/mysql: wait step 2 to remove default accounts"Jenkins1-1/+11
2016-10-20Merge "Fixes missing ODL ML2 Authentication info"Jenkins1-4/+16
2016-10-20Use HAProxy for docker-registry endpointSteve Baker1-0/+26
The docker tooling has a preference for interacting with encrypted endpoints. Terminating the docker-registry endpoint with HAProxy allows the SSL VIP to be used for this purpose. Change-Id: Ifebfa7256e0887d6f26a478ff8dc82b0ef5f65f6
2016-10-19 | Enable TLS in the internal network for gnocchi | Juan Antonio Osorio Robles | 2 | -4/+56
This optionally enables TLS for gnocchi in the internal network. If internal TLS is enabled, each node that is serving the gnocchi service will use certmonger to request its certificate. bp tls-via-certmonger Change-Id: Ie983933e062ac6a7f0af4d88b32634e6ce17838b
2016-10-19 | Enable TLS in the internal network for aodh | Juan Antonio Osorio Robles | 2 | -3/+56
This optionally enables TLS for aodh in the internal network. If internal TLS is enabled, each node that is serving the aodh service will use certmonger to request its certificate. This, in turn, also configures a command to be run when the certificate is refreshed (which requires the service to be restarted). bp tls-via-certmonger Change-Id: I50ef0c8fbecb19d6597a28290daa61a91f3b13fc
2016-10-19 | Enable TLS in the internal network for ceilometer | Juan Antonio Osorio Robles | 2 | -2/+54
This optionally enables TLS for ceilometer in the internal network. If internal TLS is enabled, each node that is serving the ceilometer service will use certmonger to request its certificate. This, in turn, also configures a command to be run when the certificate is refreshed (which requires the service to be restarted). bp tls-via-certmonger Change-Id: Ib5609f77a31b17ed12baea419ecfab5d5f676496
2016-10-19 | Enable TLS in the internal network for keystone | Juan Antonio Osorio Robles | 3 | -11/+156
This optionally enables TLS for keystone in the internal network. If internal TLS is enabled, each node that is serving the keystone service will use certmonger to request its certificate. This, in turn, also configures a command to be run when the certificate is refreshed (which requires the service to be restarted). bp tls-via-certmonger Change-Id: I303f6cf47859284785c0cdc65284a7eb89a4e039
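The certmonger pattern these patches follow looks roughly like the sketch below; the getcert invocation, file paths, reload command and idempotency guard are assumptions for illustration, not the exact code added here:
    # Request a node certificate via certmonger and reload the service
    # that terminates TLS whenever the certificate is renewed.
    exec { 'request-keystone-internal-cert':
      command => join([
        'getcert request',
        '-f /etc/pki/tls/certs/keystone-internal.crt',
        '-k /etc/pki/tls/private/keystone-internal.key',
        "-D ${::fqdn}",
        "-C 'systemctl reload httpd'",
      ], ' '),
      path    => ['/usr/bin', '/usr/sbin', '/bin'],
      unless  => 'getcert list | grep -q keystone-internal.crt',
    }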
2016-10-19Merge "Add port to rabbitmq node ip list"Jenkins14-17/+77
2016-10-19Merge "Include ::swift::config in Swift API and Storage roles"Jenkins2-0/+2
2016-10-19Add barbican profile rspec testingAlex Schultz3-0/+166
This change adds rspec tests for the barbican profiles to ensure they function as expected. Change-Id: I73f5405ade2cc73024efbeb2cfbfc831a2120f51
2016-10-19 | Add barbican profile | Ade Lee | 4 | -0/+121
Co-Authored-By: Juan Antonio Osorio Robles <jaosorior@redhat.com> Change-Id: If2804b469eb3ee08f3f194c7dd3290d23a245a7a
2016-10-19Merge "Fix broken rabbitmqctl commands when using ipv6"Jenkins1-1/+2
2016-10-18Merge "Set memcached_servers for nova API"Jenkins1-0/+10
2016-10-18Fixes missing ODL ML2 Authentication infoTim Rozet1-4/+16
Without this, neutron-server fails to start and communication will not work to ODL REST. Parital-Bug: 1633630 Change-Id: Ifd906db4e6062ac271c2147fe1149b1009d06ae2 Signed-off-by: Tim Rozet <trozet@redhat.com>
2016-10-18Merge "Remove explicit service_name setting from nova manifest"Jenkins1-3/+2
2016-10-18Set memcached_servers for nova APIDan Prince1-0/+10
This patch updates the Nova profile so that we set memcached servers correctly for the Nova keystone auth_token middleware. Most of the hiera settings for ::nova::keystone::authtoken are already included in the t-h-t nova-api service. Change-Id: I3b7ff02abbd0d5e0c38232d02b33e4c7bc411120 Closes-bug: #1633595
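A hedged sketch of the wiring; the hiera key and memcached port follow common TripleO conventions but are assumptions here rather than the patch's exact hieradata:
    # Point nova's keystonemiddleware auth_token cache at the memcached
    # nodes; 11211 is memcached's default port.
    class { '::nova::keystone::authtoken':
      memcached_servers => suffix(hiera('memcached_node_ips'), ':11211'),
    }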
2016-10-18Merge "Remove faulty migration logic to stop nova-api"Jenkins1-13/+0
2016-10-18Fix broken rabbitmqctl commands when using ipv6Michele Baldessari1-1/+2
When deploying via ipv6, rabbitmq-ctl commands have the following issues: - `rabbitmq cluster_status` shows nodedown alerts - list_queues / list_connections hang - `rabbitmqctl node_health_check` fails with an error. * There is no any issue while performing activity on RHOS setup(From * horizon/cli). i.e. RHOS environment is functioning as expected. For example: sudo rabbitmqctl node_health_check -n rabbit@node1 Checking health of node 'rabbit@node1' ... Heath check failed: health check of node 'rabbit@node1' fails: nodedown The problem is that we are missing the following in /etc/rabbitmq/rabbitmq-env.conf: RABBITMQ_CTL_ERL_ARGS="-proto_dist inet6_tcp" Fix these by setting the appropriate RABBITMQ_CTL_ERL_ARGS when deploying ipv6. Closes-Bug: #1633693 Change-Id: I53f4e76e687b3966fbb74fd0c2d83f05176630de
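Through the rabbitmq Puppet module this roughly corresponds to the sketch below, assuming the module's environment_variables parameter is used to populate rabbitmq-env.conf; the exact quoting may differ from the patch:
    # Make rabbitmqctl use the inet6_tcp distribution protocol when the
    # broker is reachable over IPv6.
    class { '::rabbitmq':
      environment_variables => {
        'RABBITMQ_CTL_ERL_ARGS' => '"-proto_dist inet6_tcp"',
      },
    }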
2016-10-17 | Enable communication between UI and the Undercloud by making HAProxy proxy for the UI | Dan Trainor | 2 | -1/+21
Change-Id: I74eac4bbfc16720eeb6e2bf0ee251689dde3bafc Implements: enable-communication-ui-undercloud
2016-10-17 | Add port to rabbitmq node ip list | Brent Eagles | 14 | -17/+77
We use the rabbit_hosts configuration for most of our services, but we haven't been adding the configured port. This patch appends the port provided to the service's heat template to each IP in the list. Note: while we could use the value set for the rabbitmq server in rabbitmq::port, that doesn't allow for dealing with SSL. This is also backwards compatible with the RabbitClientPort parameters used in the heat templates. Change-Id: I0000f039144a6b0e98c0a148dc69324f60db3d8b Closes-Bug: #1633580
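A sketch of the append, using the stdlib suffix function (the hiera keys and port value are illustrative):
    # Turn ['192.0.2.20', '192.0.2.21'] into ['192.0.2.20:5672', '192.0.2.21:5672']
    # before handing the list to a service's rabbit_hosts setting.
    $rabbit_node_ips = hiera('rabbitmq_node_ips')
    $rabbit_port     = hiera('nova::rabbit_port', '5672')
    $rabbit_hosts    = suffix($rabbit_node_ips, ":${rabbit_port}")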
2016-10-17 | Removes logic dependent on 'odl_on_controller' | Tim Rozet | 3 | -15/+4
Since moving to composable services/roles, there was logic here that relied on a variable to enable ODL rather than letting the service itself decide where ODL is enabled. Now that the ODL and ODL OVS configuration are split into two different services, we can make these truly composable. Partial-Bug: 1633625 Change-Id: Ia55c05e12d5d434111a13e1ed795da530e3ff4a5 Signed-off-by: Tim Rozet <trozet@redhat.com>