Age | Commit message | Author | Files | Lines
2016-09-20 | RabbitMQ threads should be configured dynamically | Michele Baldessari | 1 | -1/+1
Currently in puppet/services/rabbitmq.yaml we hardcode the thread pool size to 30 (via the +A30 snippet):

  rabbitmq_environment:
    RABBITMQ_SERVER_ERL_ARGS: '"+K true +A30 +P 1048576 -kernel inet_default_connect_options [{nodelay,true},{raw,6,18,<<5000:64/native>>}] -kernel inet_default_listen_options [{raw,6,18,<<5000:64/native>>}]"'

Upstream RabbitMQ has been able to configure the number of threads dynamically since 3.6.2, via the following commit:
https://github.com/rabbitmq/rabbitmq-server/commit/41ce5ad808863944cd6d62ce7f7e2271f1010582

Given that the default was hardcoded in RabbitMQ from at least 3.4.0 up until 3.6.2 (see the LP bug associated with this commit), we can remove this hardcoded value, as it overrides a sane default.

Before the change:
  /usr/lib64/erlang/erts-7.3.1/bin/beam.smp -W w -A 64 -K true -A30 -P 1048576 ...

After the change:
  /usr/lib64/erlang/erts-7.3.1/bin/beam.smp -W w -A 64 -K true -P 1048576 ...

So effectively with this change:
- With older RabbitMQ versions we keep the +A30 default
- With RabbitMQ versions >= 3.6.2 the thread count is computed dynamically as nr_cpus * 16

Change-Id: I8d30c7d141c29fcc439d40fc767498520be7966e
Closes-Bug: #1625486
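For reference, a minimal sketch of what the entry looks like once the hardcoded +A30 is dropped (simply the value quoted above minus the +A30 flag):

  rabbitmq_environment:
    RABBITMQ_SERVER_ERL_ARGS: '"+K true +P 1048576 -kernel inet_default_connect_options [{nodelay,true},{raw,6,18,<<5000:64/native>>}] -kernel inet_default_listen_options [{raw,6,18,<<5000:64/native>>}]"'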
2016-09-19 | Merge "Add a function to upgrade from full HA to NG HA" | Jenkins | 3 | -16/+137
2016-09-19 | Merge "Set VNC URL parameters for nova-compute" | Jenkins | 1 | -0/+3
2016-09-19 | Add a function to upgrade from full HA to NG HA | Michele Baldessari | 3 | -16/+137
This is the initial work to have a function that migrates a full HA architecture as deployed in Mitaka to the HA architecture as deployed in Newton, where only a few resources are managed by pacemaker.

The sequence is the following:
1) We remove the desired services from pacemaker's control. The services at this point are still running normally via the systemd service as invoked by pacemaker.
2) We do a "systemctl stop <service>" on all controllers for all the services that were removed from pacemaker's control. We do this to make sure that during the yum upgrade, the %post sections that call "systemctl try-restart" do not take ages, because at this point during the upgrade rabbit is down. The only exceptions are "openstack-core" and "delay", which are dummy pacemaker resources that do not exist on the system.
3) We do a "systemctl start <service>" on all nodes for all the services mentioned above.

We should probably merge this patch only when newton has branched, as it is very specific to the M/N upgrade.

Closes-Bug: 1617520
Change-Id: I4c409ce58c1a57b6e0decc3cf168b62698b32e39
2016-09-19 | Use osd_pool_default_* puppet parameters when creating the pools | Giulio Fidente | 1 | -3/+6
While it is possible to override the pg_num, pgp_num and size for each pool, the defaults are hardcoded. This patch instead uses the values given via the ceph::profile::params::osd_pool_default_* parameters, if any, as the defaults.

Closes-Bug: 1623590
Change-Id: Iecde772e7f72fd9abedb54cff4b8f2605df8fedd
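A minimal sketch of how an operator could supply those defaults from an environment file (the ExtraConfig hook is the usual TripleO mechanism; the values shown are illustrative, the keys are the ceph::profile::params ones named above):

  parameter_defaults:
    ExtraConfig:
      ceph::profile::params::osd_pool_default_pg_num: 64
      ceph::profile::params::osd_pool_default_pgp_num: 64
      ceph::profile::params::osd_pool_default_size: 3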
2016-09-17 | Merge "M/N upgrade sahara-api fails to restart." | Jenkins | 1 | -0/+2
2016-09-17 | Merge "Add fluentd client service" | Jenkins | 66 | -0/+678
2016-09-17 | Merge "Move rabbit's clustering port away from the ephemeral port range" | Jenkins | 1 | -3/+3
2016-09-17 | M/N upgrade sahara-api fails to restart. | Sofer Athlan-Guyot | 1 | -0/+2
Change-Id: I7a041dab8b1b1edc9c80248e1eef3ce7ab272292
Closes-Bug: 1615056
2016-09-17 | Merge "Rework the pacemaker_common_functions for M..N upgrades" | Jenkins | 4 | -61/+290
2016-09-17 | Set VNC URL parameters for nova-compute | Juan Antonio Osorio Robles | 1 | -0/+3
These are needed so the computes can advertise the VNC URL correctly.

Change-Id: Ic3eba9fe929ce396b584249eb84415de09ab1b62
Closes-Bug: #1623607
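A rough sketch of the kind of hiera settings this adds to the compute service template (the nova::compute keys come from puppet-nova; the endpoint map entry name follows the usual TripleO pattern and is illustrative here):

  nova::compute::vncproxy_protocol: {get_param: [EndpointMap, NovaVNCProxyPublic, protocol]}
  nova::compute::vncproxy_port: {get_param: [EndpointMap, NovaVNCProxyPublic, port]}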
2016-09-17 | Merge "Add mongo config settings in collector service templates" | Jenkins | 1 | -1/+10
2016-09-17 | Rework the pacemaker_common_functions for M..N upgrades | marios | 4 | -61/+290
For N we cannot assume services are managed by pacemaker. This adds functions to check whether a service is systemd- or pcmk-managed, and starts/stops it accordingly. For pcmk, for example, we only stop/disable on the bootstrap node, whereas systemd services should be stopped/started on all controllers. There is also an equivalent change to check_resource, which has been reworked to allow both pcmk and systemd.

Implements: blueprint overcloud-upgrades-workflow-mitaka-to-newton
Change-Id: Ic8252736781dc906b3aef8fc756eb8b2f3bb1f02
2016-09-17 | Merge "Add NetApp Manila driver integration and tidy up generic" | Jenkins | 7 | -73/+245
2016-09-17 | Merge "Convert AllNodesExtraConfig to support composable roles" | Jenkins | 9 | -284/+130
2016-09-17 | Add fluentd client service | Lars Kellogg-Stedman | 66 | -0/+678
This implements support for installing fluentd agents as a composable service on the overcloud.

Depends-On: I2e1abe4d8c8359e56ff626255ee50c9cacca1940
Implements: tripleo-opstools-centralized-logging
Change-Id: I23b0e23881b742158fcfb6b8c145a3211d45086e
2016-09-16 | Merge "Expose parameter to enable combination alarms" | Jenkins | 1 | -0/+6
2016-09-16 | Merge "Refactor upgrade checks." | Jenkins | 3 | -62/+111
2016-09-16 | Merge "Add CephRgw to roles_data.yaml" | Jenkins | 1 | -0/+1
2016-09-16 | Merge "Convert UpdateWorkflow to support composable roles" | Jenkins | 4 | -89/+28
2016-09-16 | Merge "Fix use of batch_create in CephMon major upgrade template" | Jenkins | 1 | -1/+2
2016-09-16 | Merge "Add hyperconverged-ceph environment to include CephOSD on computes" | Jenkins | 1 | -0/+12
2016-09-16 | Merge "Fix _from_pool_v6.yaml str_split" | Jenkins | 6 | -6/+6
2016-09-16 | Move rabbit's clustering port away from the ephemeral port range | Michele Baldessari | 1 | -3/+3
Currently the RabbitMQ cluster uses the predefined port 35672 for clustering. This port belongs to the so-called ephemeral port range. Ephemeral ports are the ports the kernel assigns to an application if it does not specify which port to open, so there is a small chance that an application started before RabbitMQ itself could grab this port. While rather unlikely, we did see this happen.

The SELinux change should already be in place. On my CentOS 7 we have:
  rabbitmq_port_t tcp 25672
  corenet_tcp_bind_rabbitmq_port(rabbitmq_t)
  corenet_tcp_connect_rabbitmq_port(rabbitmq_t)

First noted via: https://bugzilla.redhat.com/show_bug.cgi?id=1357522

Closes-Bug: #1623818
Depends-On: I0bcd0d063a7a766483426fdd5ea81cbe1dfaa348
Change-Id: I995bd96c2a17614e954ea5bbae4d58998ef420dc
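A minimal sketch of the kind of setting involved, assuming the clustering (Erlang distribution) port is pinned via the standard inet_dist_listen_min/max kernel variables (the surrounding key layout is illustrative, not copied from the template):

  rabbitmq_kernel_variables:
    inet_dist_listen_min: '25672'
    inet_dist_listen_max: '25672'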
2016-09-16 | Add mongo config settings in collector service templates | Pradeep Kilambi | 1 | -1/+10
In a scenario where mongo and the collector are on separate nodes, as indicated in the bug, the collector should still be able to access the mongo replset and other hiera data.

Closes-Bug: #1620468
Depends-On: I0bcd0d063a7a766483426fdd5ea81cbe1dfaa348
Change-Id: Iadf4c78fb03da183d19e93c30f78817a3cfed425
2016-09-16 | Merge "Convert deploy steps to jinja2 loop" | Jenkins | 3 | -654/+145
2016-09-16 | Fix _from_pool_v6.yaml str_split | Giulio Fidente | 6 | -6/+6
Previously [1] we updated from_pool_v6 to use str_split, but mistakenly copy/pasted lines referencing an attribute which isn't created in these templates.

1. I282dbc025500b1628d4f08a49b54a2adefd38b5f

Closes-Bug: 1624412
Change-Id: I409ff5b36eab2a791db4d352dea5b68096c2dc21
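For reference, a generic sketch of the Heat str_split intrinsic these templates rely on (the parameter name here is illustrative):

  {str_split: ['/', {get_param: SubnetCidr}, 1]}

This splits a CIDR such as 'fd00:fd00:fd00:2000::/64' on '/' and returns the element at index 1, i.e. the prefix length '64'.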
2016-09-16 | Merge "Fixes the Ceph upgrade scripts" | Jenkins | 2 | -5/+5
2016-09-16 | Merge "Set client protocol for glance registry client" | Jenkins | 1 | -0/+1
2016-09-16 | Fix use of batch_create in CephMon major upgrade template | Mathieu Bultel | 1 | -1/+2
The batch_create and rolling_update keys were incorrectly defined as properties of the resource instead of update policies.

Change-Id: I19261adc78e4cdc3616f16221e85490a6b48d47b
Closes-Bug: 1623506
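A minimal sketch of the correct placement on a resource-group style resource (the resource name, config resource and values are illustrative, not taken from the template):

  CephMonUpgradeDeployment:
    type: OS::Heat::SoftwareDeploymentGroup
    # batch_create and rolling_update belong under update_policy, not properties
    update_policy:
      batch_create:
        max_batch_size: 1
      rolling_update:
        max_batch_size: 1
    properties:
      servers: {get_param: servers}
      config: {get_resource: CephMonUpgradeConfig}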
2016-09-16 | Add CephRgw to roles_data.yaml | Giulio Fidente | 1 | -0/+1
CephRgw defaults to None in the registry; seems like we missed it in roles_data after a rebase.

Change-Id: I4ce8b160edfb193f5f6226f8295861e6625ef37b
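For reference, a sketch of the two pieces this refers to (exact file placement illustrative): the registry default

  OS::TripleO::Services::CephRgw: OS::Heat::None

and the entry added to the Controller role's service list in roles_data.yaml

  - OS::TripleO::Services::CephRgw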
2016-09-16 | Fixes the Ceph upgrade scripts | Giulio Fidente | 2 | -5/+5
The Ceph upgrade scripts were failing on the following:
1. a syntax error in an if condition
2. an attempt to read a possibly unbound variable
3. an attempt to chown a directory which might not exist

This change aims at fixing all of the above.

Closes-Bug: 1623942
Change-Id: I9e9d63d4ab7626893aaf2a25dccfcafbb97ccbdf
2016-09-16 | Merge "Unset Keystone public_endpoint" | Jenkins | 1 | -1/+0
2016-09-16 | Merge "Populate vnc_api_lib.ini on compute nodes with OpenContrail" | Jenkins | 1 | -0/+12
2016-09-16 | Convert AllNodesExtraConfig to support composable roles | Steven Hardy | 9 | -284/+130
This adjusts the interface to OS::TripleO::AllNodesExtraConfig so it supports custom/composable/optional roles.

Note this does break backwards compatibility, and I can't see any way to avoid that. I've converted the in-tree templates, and we'll have to document carefully and/or provide a script (or automated conversion via mistral perhaps?) to allow folks to easily adjust any out-of-tree templates to the new format.

Basically you just have to:
1. Remove all the *_servers parameters and replace them with one "servers" json parameter
2. Replace references to e.g. "controller_servers" with "servers, Controller", which does a path-based lookup into the json map provided by overcloud.yaml

Change-Id: I5eebf853646b2f6300d6b542fcd4f43e82d3b413
Partially-Implements: blueprint custom-roles
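A minimal sketch of the new interface from a template consumer's point of view (the resource names are illustrative; the "servers" parameter and the path-based lookup are the pieces described above):

  parameters:
    servers:
      type: json

  resources:
    ControllerExtraDeployments:
      type: OS::Heat::SoftwareDeploymentGroup
      properties:
        # path-based lookup into the per-role map passed in by overcloud.yaml
        servers: {get_param: [servers, Controller]}
        config: {get_resource: SomeExtraConfig}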
2016-09-16 | Convert UpdateWorkflow to support composable roles | Steven Hardy | 4 | -89/+28
We need to remove the hard-coded roles from overcloud.j2.yaml, as it's now valid to e.g. remove BlockStorage completely.

The previous behavior for the per-role upgrade scripts is maintained, but we'll need to rework this for newton->ocata upgrades, where we can no longer be sure the servers mapping will contain all roles.

Change-Id: I25e6c84757e3c00fba2aae834cd8206c62e44acf
Partially-Implements: blueprint custom-roles
2016-09-16 | Convert deploy steps to jinja2 loop | Steven Hardy | 3 | -654/+145
Refactor so the post-deploy steps recently moved into puppet/post.yaml are generated by a jinja2 loop instead of being hard-coded.

Change-Id: I488e46aaa449c95571bd3d1de9513c3d0730baf3
Partially-Implements: blueprint custom-roles
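A rough sketch of the jinja2 looping pattern this refers to (the resource type, property names and step layout are illustrative, not copied from the template):

  {% for role in roles %}
  {{role.name}}Deployment_Step1:
    type: OS::Heat::StructuredDeploymentGroup
    properties:
      servers: {get_param: [servers, {{role.name}}]}
      config: {get_resource: RoleConfig}
  {% endfor %}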
2016-09-14 | Set client protocol for glance registry client | Juan Antonio Osorio Robles | 1 | -0/+1
To communicate with the glance registry, the glance API has several parameters that it uses to form the URI. Right now we default to http, which will break once we enable TLS everywhere. Setting the value from the endpoint map fixes this.

Closes-Bug: #1623477
Change-Id: Id86787cbaa6f87fdcf9c26111c228fd59fbba012
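A minimal sketch of the kind of hiera setting involved (the puppet-glance key and the endpoint map entry name follow the usual patterns and are illustrative here):

  glance::api::registry_client_protocol: {get_param: [EndpointMap, GlanceRegistryInternal, protocol]}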
2016-09-14 | Expose parameter to enable combination alarms | Pradeep Kilambi | 1 | -0/+6
The corresponding puppet-tripleo change, I9220b7d020dc8ed45dd6ca83ea9647efd67ea648, has merged.

Change-Id: Ic5309ada98c78a15aa3a47dd94acb9e68eb25295
2016-09-14 | Merge "Convert allNodesConfig properties to composable jinja2" | Jenkins | 1 | -28/+12
2016-09-14 | Merge "Add support for deploying Ceph RGW role" | Jenkins | 9 | -0/+357
2016-09-13 | Convert allNodesConfig properties to composable jinja2 | Steven Hardy | 1 | -28/+12
To support custom roles we need to generate these lists of role-specific data.

Change-Id: Ide97cd57d1c07f7f7ff260ff7a6bbe2b71753bd0
Partially-Implements: blueprint custom-roles
2016-09-13 | Move role ResourceGroups inside the jinja2 loop | Steven Hardy | 5 | -218/+61
This moves the now nearly identical group resources inside the loop. There's a FIXME related to some deprecated compute parameters we'll need to work around.

Change-Id: Iddd63c42754867125e65e7721ab9d9f46f4d6afb
Partially-Implements: blueprint custom-roles
2016-09-13 | Merge "Enable proxy header parsing for Manila" | Jenkins | 1 | -0/+1
2016-09-13 | Add NetApp Manila driver integration and tidy up generic | marios | 7 | -73/+245
Enables configuring a NetApp backend for the Manila service. This was created based on the review at https://review.openstack.org/#/c/188138/

This makes the netapp and generic backends disabled by default in the services/manila-backend-*.yaml files. A backend is then enabled via backend-specific environment files, which set any config parameters and enable that backend. It is expected that multiple Manila backend-specific environment files might be specified simultaneously.

Finally, the generic and netapp config is split into separate service files rather than using manila-base for everything.

Co-Authored-By: Ryan Hefner <rhefner@redhat.com>
Co-Authored-By: Ben Swartzlander <ben@swartzlander.org>
Closes-Bug: 1618479
Depends-On: Ic6f8e8d27ca20b9badddea5d16550aa18bff8418
Change-Id: I35fce32d0f6a5cc1c3382c2d0e0d6028928fd943
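A minimal sketch of the backend-specific environment file pattern described above (the registry entry, relative path and parameter names are illustrative, not copied from the change):

  resource_registry:
    OS::TripleO::Services::ManilaBackendNetapp: ../puppet/services/manila-backend-netapp.yaml

  parameter_defaults:
    ManilaNetappBackendName: tripleo_netapp
    ManilaNetappServerHostname: netapp.example.com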
2016-09-12 | Merge "De-bracket vncproxy_host in compute profile" | Jenkins | 2 | -9/+2
2016-09-12 | Merge "Configure Keystone credentials" | Jenkins | 1 | -0/+12
2016-09-12 | Merge "Add trunking plugin to list of default ML2 service plugins" | Jenkins | 1 | -1/+1
2016-09-12 | Unset Keystone public_endpoint | Adam Young | 1 | -1/+0
The keystone public_endpoint value should be deduced from the calling request and not hardcoded; otherwise it makes network isolation impossible.

Change-Id: Ide6a65aa9393cb84591b0015ec5966cc01ffbcf8
Closes-Bug: 1381961
2016-09-12 | De-bracket vncproxy_host in compute profile | Ben Nemec | 2 | -9/+2
This is done in the vncproxy profile, but for some reason is not in the compute one. Hiera fails when the brackets are left in place, so we need to do the bracket stripping here too.

This also switches both places to just use the host_nobrackets version of the endpoint instead of stripping the brackets with str_replace.

Change-Id: I7ccd84b575fd652f6412fdb1869c31c79a7bf53b
Closes-Bug: 1618623
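A minimal sketch of the host_nobrackets lookup this describes (the hiera key and endpoint name are illustrative of the pattern, not copied from the change):

  nova::compute::vncproxy_host: {get_param: [EndpointMap, NovaVNCProxyPublic, host_nobrackets]}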