Age | Commit message | Author | Files | Lines |
|
Makes it possible to resolve network subnets within a service
template; the data is transported in a new ServiceData property
wired into every service, which hopefully is generic enough to
be extended in the future and transport more data.
The data can be consumed in service templates to set config values
which need to know the subnet where a daemon operates (for
example the Ceph public vs. cluster network).
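For illustration, a service template could consume the new property roughly
as sketched below (the net_cidr_map key and the storage_mgmt network name are
assumptions for the sketch, not necessarily what this change wires up):
  parameters:
    ServiceData:
      default: {}
      description: Dictionary packing service data
      type: json
  outputs:
    role_data:
      value:
        config_settings:
          # assumed hiera key; resolves the Ceph cluster network subnet
          ceph::profile::params::cluster_network:
            get_param: [ServiceData, net_cidr_map, storage_mgmt]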
Change-Id: I28e21c46f1ef609517175f7e7ee19e28d1c0cba2
|
|
Master is now the development branch for Pike, so change the
release alias name.
Change-Id: I938e4a983e361aefcaa0bd9a4226c296c5823127
|
|
When a service is enabled on multiple roles, the parameters for the
service are global. This change adds an option to provide
role-specific parameters to services and other templates.
Two new parameters, RoleName and RoleParameters, are added to the
service template. RoleName provides the name of the role on which the
current instance of the service is being applied. RoleParameters
provides the list of parameters which are configured specifically for
that role in the environment file, like below:
  parameter_defaults:
    # Default value applied to all roles
    NovaReservedHostMemory: 2048
    ComputeDpdkParameters:
      # Applied only to the ComputeDpdk role
      NovaReservedHostMemory: 4096
In the above sample, the cluster contains two roles - Compute and ComputeDpdk.
The values of ComputeDpdkParameters will be passed to the templates
as RoleParameters while creating the stack for the ComputeDpdk role. A
parameter which supports role-specific configuration should be looked up
first in the RoleParameters list; if not found there, the
default (for all roles) should be used.
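For illustration, a service template could resolve the role-specific value
with the map_replace idiom roughly as sketched below (the hiera key and the
OS::Heat::Value indirection are assumptions for the sketch):
  parameters:
    RoleName:
      default: ''
      type: string
    RoleParameters:
      default: {}
      type: json
    NovaReservedHostMemory:
      default: 2048
      type: number
  resources:
    # Replace the value with the role-specific override if RoleParameters
    # carries one, otherwise fall back to the global parameter default.
    RoleParametersValue:
      type: OS::Heat::Value
      properties:
        type: json
        value:
          map_replace:
            - map_replace:
                - nova::compute::reserved_host_memory: NovaReservedHostMemory
                - values: {get_param: RoleParameters}
            - values:
                NovaReservedHostMemory: {get_param: NovaReservedHostMemory}
  outputs:
    role_data:
      value:
        config_settings:
          map_merge:
            - {get_attr: [RoleParametersValue, value]}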
Implements: blueprint tripleo-derive-parameters
Change-Id: I72376a803ec6b2ed93903cc0c95a6ffce718b6dc
|
|
In change Ib62001c03e1e08f58cf0c6e0ba07a8879a584084 we switched the
rabbitmq queues HA mode from ha-all to ha-exactly. While this gives us a
nice performance boost with rabbitmq, it makes rabbit less resilient to
network glitches, as we painfully found out via
https://bugzilla.redhat.com/show_bug.cgi?id=1441635.
This is the THT part of the change; it switches the default back to
ha-mode: all.
Closes-Bug: #1686337
Co-Authored-By: Damien Ciabrini <dciabrin@redhat.com>
Co-Authored-By: John Eckersberg <jeckersb@redhat.com>
Change-Id: I7afcf2b3c8deb13fc2134e4cae9c06a44e775384
Depends-On: I9a90e71094b8d8d58b5be0a45a2979701b0ac21c
|
|
Usually a nested stack is used that contains the TLS-everywhere bits
(config_settings and metadata_settings). Nested stacks are very
resource-intensive, so instead of using nested stacks this patch
switches to a conditional and outputs the necessary config_settings
and metadata_settings that way, in an attempt to save resources.
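A minimal sketch of the conditional approach, assuming an EnableInternalTLS
parameter gates the TLS bits (the settings shown are illustrative):
  parameters:
    EnableInternalTLS:
      type: boolean
      default: false
    ServiceNetMap:
      default: {}
      type: json
  conditions:
    internal_tls_enabled: {equals: [{get_param: EnableInternalTLS}, true]}
  outputs:
    role_data:
      value:
        config_settings:
          map_merge:
            - rabbitmq::port: 5672
            - if:
                - internal_tls_enabled
                - rabbitmq::ssl: true
                - {}
        metadata_settings:
          if:
            - internal_tls_enabled
            - - service: rabbitmq
                network: {get_param: [ServiceNetMap, RabbitmqNetwork]}
                type: node
            - null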
Change-Id: Ic25f84a81aefef91b3ab8db2bc864853ee82c8aa
|
|
As with other services, this passes the necessary hieradata to enable
TLS for RabbitMQ. This will mean (once we set it via puppet-tripleo)
that there will only be TLS connections, as the ssl_only option is being
used.
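A minimal sketch of the kind of hieradata this could end up setting, assuming
the ssl-related parameters of the puppet rabbitmq module (certificate paths
are illustrative):
  config_settings:
    rabbitmq::ssl: true
    rabbitmq::ssl_only: true
    rabbitmq::ssl_port: 5672
    rabbitmq::ssl_cert: /etc/pki/tls/certs/rabbitmq.crt
    rabbitmq::ssl_key: /etc/pki/tls/private/rabbitmq.key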
bp tls-via-certmonger
Change-Id: I960bf747cd5e3040f99b28e2fc5873ca3a7472b5
Depends-On: Ic2a7f877745a0a490ddc9315123bd1180b03c514
|
|
|
|
Change-Id: I9c6116ddb4475b798876635cbb701214759fa33b
Partially-Implements: blueprint overcloud-upgrades-per-service
|
|
When deploying with EnablePackageInstall:True, the rabbitmq puppet
module defaults to the rpm package provider, which then tries to "rpm -i
undef" since we are setting rabbitmq::package_source to undef. Instead
of using the rpm provider at all, we should just use the yum provider to
install whatever rabbitmq RPMs are found in the enabled repos.
Change-Id: I29365e675bfde676fde7a54dfc6c660c3970f50a
Partially-implements: blueprint split-stack-software-configuration
|
|
With this change we export ERL_EPMD_ADDRESS set to the
address rabbitmq is listening on. We need to explicitly
export it so that epmd can pick it up and bind to that address.
Closes-Bug: #1645898
Change-Id: Iacb2ee262da419f61ec3511f42b395f69f5d14da
|
|
Heat now supports release name aliases, so we can replace the
inconsistent mix of date-based versions with one consistent
version that aligns with the heat version supported by this
t-h-t branch.
This should also help new users who sometimes copy/paste old templates
and discover that intrinsic functions described in the t-h-t docs don't
work because their template version is too old.
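For illustration, a template header would move from a date-based version to
the release alias (the exact alias depends on the branch):
  # before
  heat_template_version: 2016-10-14
  # after
  heat_template_version: ocata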
Change-Id: Ib415e7290fea27447460baa280291492df197e54
|
|
RabbitMQ's puppet manifest configures the node's IP and port through
environment variables. While this would usually be fine, it doesn't
allow us to use TLS only, since a TCP listener will always be started.
By setting these values through the config file instead, they are
effectively discarded when ssl_only is set for rabbitmq, which allows
us to use an SSL listener on the same port.
Change-Id: I33d051a8c740baf69b99517378e1f9b0f3cc1681
|
|
|
|
This seems to have broken the updates job, causing it to fail
with the following error:
Can't set long node name!\nPlease check your configuration\n
Related-Bug: 1646873
This reverts commit 3e9fcfd09320ace07bc1bd4cb57feb98cd057332.
Change-Id: I72ba891cd9cd8c4f1bc204144f46aaabbdfd3647
|
|
|
|
This shows how we could wire in the upgrade steps using Ansible,
as was previously proposed e.g. in https://review.openstack.org/#/c/321416/,
but more closely integrated with the new composable services
architecture.
It's also very similar to the approach taken by SpinalStack, where
per-service ansible snippets were combined and then run in a series of
steps using Ansible tags.
This patch only enables upgrade of keystone - we'll add support for
other services in subsequent patches.
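A minimal sketch of what a per-service snippet could look like, assuming
tag-based steps (the tasks themselves are illustrative):
  upgrade_tasks:
    - name: Stop keystone (running under httpd)
      tags: step2
      service: name=httpd state=stopped
    - name: Update keystone packages
      tags: step3
      yum: name=openstack-keystone state=latest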
Partially-Implements: blueprint overcloud-upgrades-per-service
Change-Id: I39f5426cb9da0b40bec4a7a3a4a353f69319bdf9
|
|
Change-Id: Iee1afeced0b210a46b273aafc0d40e99d6ee6d4e
|
|
After a brand new deployment we have the following in rabbitmq.config:
...
  {rabbit, [
    {tcp_listen_options,
      [binary,
       {packet, raw},
       {reuseaddr, true},
       {backlog, 128},
       {nodelay, true},
       {exit_on_close, false}]},
    {tcp_listen_options,
      [binary, {packet, raw}, {reuseaddr, true}, {backlog, 128},
       {nodelay, true}, {exit_on_close, false}, {keepalive, true}]},
...
Let's remove these duplicate entries and make sure that we use the
puppet module's parameters to set the following value explicitly
(it's the only option where we do not use the default setting from
the puppet module):
keepalive = true -> rabbitmq::tcp_keepalive: true
All the other options that we set are the defaults in the puppet module:
{packet, raw}
{reuseaddr, true}
{backlog, 128}
{nodelay, true}
{exit_on_close, false}
Depends-On: I608477d5714a5081b3b4ab3b9fc2932bdd598301
Change-Id: I35921652bd84d1d6be0727051294983d4a0dde10
|
|
min-masters strategy"
|
|
It turns out that reducing the number of rabbitmq queue copies in the
cluster significantly improves performance, especially failover
recovery time. Right now the cluster uses the ha-all mode for rabbitmq
queues.
It is best to change this to "ha-exactly" mode and reduce the number
of queue copies to ceil(N/2), where N is the number of controllers in
the cluster - so in a typical scenario with 3 controllers it would be 2
by default.
It does not make much sense to keep copies of queues across the whole
cluster, since if the quorum of nodes is lost the remaining cluster
nodes will be stopped anyway. We let the user override this with a
parameter.
I.e. for a 3 node controlplane cluster we will go from this:
pcs resource show rabbitmq
Resource: rabbitmq (class=ocf provider=heartbeat type=rabbitmq-cluster)
Attributes: set_policy="ha-all ^(?!amq\.).* {"ha-mode":"all"}"
To this:
pcs resource show rabbitmq
Resource: rabbitmq (class=ocf provider=heartbeat type=rabbitmq-cluster)
Attributes: set_policy="ha-all ^(?!amq\.).* {"ha-mode":"exactly","ha-params":2}"
According to Marin Krcmarik's testing, recovery time from failure was
reduced significantly.
Partial-Bug: #1628998
Change-Id: Iace6daf27a76cb8ef1050ada0de7ff1f530916c6
|
|
strategy
It may happen that one of the controllers becomes unavailable and
queue masters will be located on the available controllers during queue
declarations. Once the lost controller becomes available again, masters
of newly declared queues are not preferentially placed on it, even
though it hosts an obviously lower number of queue masters, so the
distribution may become unbalanced and one of the controllers may come
under significantly higher load after multiple fail-overs.
RabbitMQ 3.6.0 introduced a new HA feature, queue master
distribution - one of the strategies is min-masters, which picks the
node hosting the minimum number of masters.
One way to turn the min-masters strategy on is by adding the
following to the configuration file, rabbitmq.config:
  {rabbit, [ ..
    {queue_master_locator, <<"min-masters">>},
  .. ]},
Change-Id: I61bcab0e93027282b62f2a97bec87cbb0a6e6551
Closes-Bug: #1629010
|
|
Currently in puppet/services/rabbitmq.yaml we hardcode the thread pool
size to 30 (via the +A30 snippet):
  rabbitmq_environment:
    RABBITMQ_SERVER_ERL_ARGS: '"+K true +A30 +P 1048576 -kernel inet_default_connect_options [{nodelay,true},{raw,6,18,<<5000:64/native>>}] -kernel inet_default_listen_options [{raw,6,18,<<5000:64/native>>}]"'
Upstream rabbit gained the ability to dynamically configure the
number of threads in 3.6.2, via the following commit:
https://github.com/rabbitmq/rabbitmq-server/commit/41ce5ad808863944cd6d62ce7f7e2271f1010582
Given that the default was hardcoded in rabbit from at least 3.4.0 up
until 3.6.2 (see the LP bug associated with this commit), we can actually
remove this hardcoded value as it overrides a sane default.
Before the change:
/usr/lib64/erlang/erts-7.3.1/bin/beam.smp -W w -A 64 -K true -A30 -P 1048576 ...
After the change:
/usr/lib64/erlang/erts-7.3.1/bin/beam.smp -W w -A 64 -K true -P 1048576 ...
So effectively with this change we will have the following:
- With older rabbitmq versions we keep the +A30 default
- With rabbitmq versions >= 3.6.2 the thread number is dynamically
computed to nr_cpus * 16
Change-Id: I8d30c7d141c29fcc439d40fc767498520be7966e
Closes-Bug: #1625486
|
|
Currently the RabbitMQ cluster uses a predefined port, 35672, for clustering.
This port belongs to the so-called ephemeral port range.
Ephemeral ports are the ports the kernel assigns to an application if it
doesn't specify which port to open, so there is a small chance that an
application started before RabbitMQ itself could grab this port.
While rather unlikely, we did see this happen.
The SELinux change should already be in place. On my CentOS 7 we have:
rabbitmq_port_t tcp 25672
corenet_tcp_bind_rabbitmq_port(rabbitmq_t)
corenet_tcp_connect_rabbitmq_port(rabbitmq_t)
First noted via:
https://bugzilla.redhat.com/show_bug.cgi?id=1357522
Closes-Bug: #1623818
Depends-On: I0bcd0d063a7a766483426fdd5ea81cbe1dfaa348
Change-Id: I995bd96c2a17614e954ea5bbae4d58998ef420dc
|
|
- adds the possibility to install sensu-client on all nodes
- each composable service has its own subscription
Co-Authored-By: Emilien Macchi <emilien@redhat.com>
Co-Authored-By: Michele Baldessari <michele@redhat.com>
Implements: blueprint tripleo-opstools-availability-monitoring
Change-Id: I6a215763fd0f0015285b3573305d18d0f56c7770
|
|
This moves the RabbitMQ config settings out of controller.yaml
and into puppet/services/rabbitmq.yaml.
Related-Bug: #1604414
Change-Id: I6b3d71653fb91b89b85dae7df4088afff22b71ac
|
|
This patch adds a new DefaultPasswords parameter to
composable services. This is needed to help provide
access to top-level password resources that overcloud.yaml
currently manages (passwords for Rabbit, MySQL, etc.).
Moving the RandomString resources into composable services
would cause them to regenerate within the stack. With this
approach we can leave them where they are while we deprecate
the top level mechanism and move the code that uses the
passwords into the composable services.
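A minimal sketch of the pattern, assuming a service template reads one of
the passwords through the new parameter (the rabbit_password key and the
hiera setting are illustrative):
  parameters:
    DefaultPasswords:
      default: {}
      type: json
  outputs:
    role_data:
      value:
        config_settings:
          rabbitmq::default_pass: {get_param: [DefaultPasswords, rabbit_password]}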
Change-Id: I4f21603c58a169a093962594e860933306879e3f
|
|
This will be needed to pick, from within the service template, the
network the service has to bind to.
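For illustration, a service template could then bind to the right network
roughly like this (the hiera key and the RabbitmqNetwork entry are
assumptions for the sketch):
  parameters:
    ServiceNetMap:
      default: {}
      type: json
  outputs:
    role_data:
      value:
        config_settings:
          rabbitmq::node_ip_address:
            str_replace:
              template: "%{hiera('$NETWORK')}"
              params:
                $NETWORK: {get_param: [ServiceNetMap, RabbitmqNetwork]}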
Change-Id: I52652e1ad8c7b360efd2c7af199e35932aaaea8c
|
|
Migrate puppet/hieradata/*.yaml parameters to puppet/services/*.yaml
except for some services that are not composable yet.
Co-Authored-By: Juan Antonio Osorio Robles <jaosorior@redhat.com>
Change-Id: I7e5f8b18ee9aa63a1dffc6facaf88315b07d5fd7
|
|
Split out the firewall rules in puppet/hieradata/controller.yaml
into the composable services
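For illustration, a composable service can then carry its own rules in
config_settings, assuming the tripleo.<service>.firewall_rules hieradata
convention (ports shown are the usual rabbitmq ones):
  config_settings:
    tripleo.rabbitmq.firewall_rules:
      '109 rabbitmq':
        dport:
          - 4369
          - 5672
          - 25672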
Depends-On: Id370362ab57347b75b1ab25afda877885b047263
Change-Id: Icaecab100d3f278035fbbb3facb9bf6c62c76c03
|
|
This patch adds a new service_name section to each composable
service. We now have an explicit unit test check to ensure that
service_name exists in tools/yaml-validate.py.
This patch also wires service_names into hieradata on each
of the roles so that tools can access the deployed services locally
during deployment and upgrades.
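For illustration, each service template's role_data output then carries
something like the following (rabbitmq is just an example service and the
config value is illustrative):
  outputs:
    role_data:
      description: Role data for the RabbitMQ service.
      value:
        service_name: rabbitmq
        config_settings:
          rabbitmq::port: 5672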
Change-Id: I60861c5aa760534db3e314bba16a13b90ea72f0c
|
|
We now allow 65536 open file descriptors to better reflect the
real-world settings of downstream consumers of TripleO.
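A minimal sketch of how this could be expressed, assuming the limit is
exposed as a template parameter and applied via the rabbitmq module's
file_limit setting (the parameter name is an assumption):
  parameters:
    RabbitFDLimit:
      default: 65536
      description: Configures RabbitMQ FD limit
      type: number
  outputs:
    role_data:
      value:
        config_settings:
          rabbitmq::file_limit: {get_param: RabbitFDLimit}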
Change-Id: Ib04ea6afb2da1a9101839d9d70bc8891d69700ec
|
|
By passing the MysqlVirtualIP via the EndpointMap we won't need it
to be provided as a parameter to the services.
This follows what is already happening for the glance registry
service with I9186e56cd4746a60e65dc5ac12e6595ac56505f0.
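For illustration, a service template can then resolve the database VIP from
the endpoint map instead of a dedicated parameter (the hiera key below is
hypothetical):
  parameters:
    EndpointMap:
      default: {}
      type: json
  outputs:
    role_data:
      value:
        config_settings:
          my_service::database_host: {get_param: [EndpointMap, MysqlInternal, host]}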
Change-Id: Iad2ab389bf64d0fc8b06eb0e7d29b5370ff27dff
Co-Authored-By: Juan Antonio Osorio Robles <jaosorior@redhat.com>
|
|
Change the way RabbitMQ is implemented, making it a composable role.
Change-Id: I5fed5c437ad492af75791a9163f99ae292f58895
|