This patch changes the default value and type of the NeutronWorkers
parameter, allowing it to be left unset so that a system-dependent value
is used (e.g. processorcount or some derivative value).
Change-Id: Ia385b3503fe405c4b981c451f131ac91e1af5602
Closes-Bug: #1626126
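As a rough sketch, the resulting parameter definition in the Neutron service
template could look something like this (the empty-string default and the
description wording are assumptions; only the parameter name comes from the
commit above):

    NeutronWorkers:
      default: ''
      description: Number of API/RPC workers for the Neutron service. Leave
                   unset to let a system-dependent default be chosen (e.g.
                   based on the processor count).
      type: string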
|
Currently we pass numbers in (hard-coded in post.j2.yaml), but the
SoftwareConfig input schema defaults to String. If Puppet requires an
integer, setting this type explicitly may help preserve it for
the hook.
Change-Id: Ie9227d7adb58ea3c791aa459a1ab5b17ad935919
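A minimal sketch of what the generated SoftwareConfig input could look like
(the resource name and the 'step' input name are assumptions based on the
deployment steps in post.j2.yaml):

    RoleConfig:
      type: OS::Heat::SoftwareConfig
      properties:
        group: puppet
        inputs:
          - name: step
            type: Number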
|
This patch changes the default value and type of the Glance worker
configuration, allowing it to be left unset so that a system-dependent
default is used (e.g. processorcount or some derivative value). The
previous default of 0 would result in a single self-contained process,
which, while suitable for debugging and testing, is not appropriate for
production deployments.
Partial-Bug: #1626126
Change-Id: I58a6a72a581e7083e1dc4e5ca568fdd3fdd6cdf1
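Analogous to the NeutronWorkers change above, the Glance parameter could be
sketched as follows (default and description are assumptions):

    GlanceWorkers:
      default: ''
      description: Number of Glance worker processes. Leave unset for a
                   system-dependent default (e.g. based on processor count).
      type: string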
|
We hit problems in environments that don't have a lot of RAM (e.g. dev
environments, and potentially CI) where Apache consumed too much memory
because too many worker processes were spawned.
This commit allows customizing the Apache MaxRequestWorkers and
ServerLimit directives via Heat parameters. The default stays 256, as
that's the default in the Puppet module and is suited for production
environments with powerful machines. A low-memory-usage.yaml environment
file is also added, which sets the limits to 32 and can be used to make
dev/test/CI overclouds less memory-hungry.
Change-Id: Ibcf1d9c3326df8bb5b380066166c4ae3c4bf8d96
Co-Authored-By: Carlos Camacho <ccamacho@redhat.com>
Closes-Bug: #1619205
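A sketch of the added environment file, assuming the Heat parameters are
named ApacheMaxRequestWorkers and ApacheServerLimit and the file lives under
environments/:

    # environments/low-memory-usage.yaml
    parameter_defaults:
      ApacheMaxRequestWorkers: 32
      ApacheServerLimit: 32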
|
The neutron metadata agent's metadata_ip field is meant to refer to the
nova metadata service, not the local address on the NeutronApiNetwork.
Change-Id: Ibb25a80ea3e66ab3f5cf63c197460d495939778d
Closes-Bug: #1625504
|
This is needed because currently we're not generating
nova_metadata_vip or nova_metadata_nodes_ip, and a service profile is
required for that. Unfortunately, puppet-nova currently only deploys
osapi and metadata through the same manifest, so this profile doesn't
really inject any puppet code. We can make this more elegant later.
Change-Id: Id7112111f16d0c749a6203b90e29e6d9f1e4d57e
Closes-Bug: #1625543
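Tying this back to the metadata agent fix above, the agent could then be
pointed at the nova metadata VIP roughly like this (the puppet parameter name
and hiera interpolation are assumptions):

    config_settings:
      neutron::agents::metadata::metadata_ip: "%{hiera('nova_metadata_vip')}"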
|
Currently in puppet/services/rabbitmq.yaml we hardcode the thread pool
size to 30 (via the +A30 snippet):
rabbitmq_environment:
RABBITMQ_SERVER_ERL_ARGS: '"+K true +A30 +P 1048576 -kernel inet_default_connect_options [{nodelay,true},{raw,6,18,<<5000:64/native>>}] -kernel inet_default_listen_options [{raw,6,18,<<5000:64/native>>}]"'
Since 3.6.2, upstream RabbitMQ has been able to configure the number of
threads dynamically, via the following commit:
https://github.com/rabbitmq/rabbitmq-server/commit/41ce5ad808863944cd6d62ce7f7e2271f1010582
Given that the default was hardcoded in RabbitMQ from at least 3.4.0 up
until 3.6.2 (see the LP bug associated with this commit), we can remove
this hardcoded value, as it overrides a sane default.
Before the change:
/usr/lib64/erlang/erts-7.3.1/bin/beam.smp -W w -A 64 -K true -A30 -P 1048576 ...
After the change:
/usr/lib64/erlang/erts-7.3.1/bin/beam.smp -W w -A 64 -K true -P 1048576 ...
So effectively with this change we will have the following:
- With older rabbitmq versions we keep the +A30 default
- With rabbitmq versions >= 3.6.2 the thread number is dynamically
computed to nr_cpus * 16
Change-Id: I8d30c7d141c29fcc439d40fc767498520be7966e
Closes-Bug: #1625486
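For reference, the resulting setting in puppet/services/rabbitmq.yaml would
then read roughly as follows (the snippet above with only the +A30 flag
dropped):

    rabbitmq_environment:
      RABBITMQ_SERVER_ERL_ARGS: '"+K true +P 1048576 -kernel inet_default_connect_options [{nodelay,true},{raw,6,18,<<5000:64/native>>}] -kernel inet_default_listen_options [{raw,6,18,<<5000:64/native>>}]"'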
|
This patch conditionally enables Neutron L3 HA if there are multiple
controllers and DVR has not been enabled. If those conditions are not met,
the value of the NeutronL3HA parameter is used as-is.
Change-Id: If1ebeaf417c0da99d833450e394b71cabff2c800
Closes-Bug: #1623155
|
While it is possible to override pg_num, pgp_num and size for
each pool, the defaults are hardcoded. This patch instead uses the
values given via the ceph::profile::params::osd_pool_default_*
parameters, if any, as the defaults.
Closes-Bug: 1623590
Change-Id: Iecde772e7f72fd9abedb54cff4b8f2605df8fedd
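Deployers could then drive the defaults via hiera overrides, for example
(the values here are purely illustrative; only the osd_pool_default_* key
names come from the commit above):

    parameter_defaults:
      ExtraConfig:
        ceph::profile::params::osd_pool_default_pg_num: 64
        ceph::profile::params::osd_pool_default_pgp_num: 64
        ceph::profile::params::osd_pool_default_size: 3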
|
These are needed so the computes can advertise the VNC URL correctly.
Change-Id: Ic3eba9fe929ce396b584249eb84415de09ab1b62
Closes-Bug: #1623607
|
This implements support for installing fluentd agents as a composable
service on the overcloud.
Depends-On: I2e1abe4d8c8359e56ff626255ee50c9cacca1940
Implements: tripleo-opstools-centralized-logging
Change-Id: I23b0e23881b742158fcfb6b8c145a3211d45086e
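Hypothetically, enabling the new composable service would then come down to a
resource_registry mapping along these lines (both the resource alias and the
template path are assumptions):

    resource_registry:
      OS::TripleO::Services::FluentdClient: ../puppet/services/logging/fluentd-client.yaml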
|
Currently the RabbitMQ cluster uses the predefined port 35672 for clustering.
This port belongs to the so-called ephemeral port range.
Ephemeral ports are the ports the kernel assigns to an application when it
doesn't specify which port to open, so there is a small chance that an
application started before RabbitMQ itself could grab this port.
While rather unlikely, we did see this happen.
The SELinux change should already be in place. On my CentOS 7 system we have:
rabbitmq_port_t tcp 25672
corenet_tcp_bind_rabbitmq_port(rabbitmq_t)
corenet_tcp_connect_rabbitmq_port(rabbitmq_t)
First noted via:
https://bugzilla.redhat.com/show_bug.cgi?id=1357522
Closes-Bug: #1623818
Depends-On: I0bcd0d063a7a766483426fdd5ea81cbe1dfaa348
Change-Id: I995bd96c2a17614e954ea5bbae4d58998ef420dc
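One possible way to pin the clustering port outside the ephemeral range,
sketched here via RabbitMQ's standard environment variable (whether the
change actually goes through rabbitmq_environment or a puppet parameter is an
assumption; the port number matches the SELinux policy quoted above):

    rabbitmq_environment:
      RABBITMQ_DIST_PORT: '25672'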
|
In a scenario where mongo and the collector are on separate nodes, as
indicated in the bug, the collector should be able to access the mongo
replset and other hiera data.
Closes-bug: #1620468
Depends-On: I0bcd0d063a7a766483426fdd5ea81cbe1dfaa348
Change-Id: Iadf4c78fb03da183d19e93c30f78817a3cfed425
|
This adjusts the interface to OS::TripleO::AllNodesExtraConfig so
it supports custom/composable/optional roles.
Note this does break backwards compatibility, and I can't see any way
to avoid that. I've converted the in-tree templates, and we'll have
to document the change carefully and/or provide a script (or perhaps
automated conversion via mistral?) so that folks can easily adjust any
out-of-tree templates to the new format.
Basically you just have to:
1. Remove all the *_servers parameters and replace them with one
"servers" json parameter
2. Replace references to e.g. "controller_servers" with "servers, Controller",
which does a path-based lookup into the json map provided by overcloud.yaml
(see the sketch below)
Change-Id: I5eebf853646b2f6300d6b542fcd4f43e82d3b413
Partially-Implements: blueprint custom-roles
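The sketch referenced in step 2 above, showing roughly how an out-of-tree
template would be converted (the surrounding resource and property context is
abbreviated and assumed):

    # old interface
    parameters:
      controller_servers:
        type: json
      compute_servers:
        type: json

    # new interface
    parameters:
      servers:
        type: json
    ...
        properties:
          servers: {get_param: [servers, Controller]}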
|
Refactor so the post-deploy steps recently moved into
puppet/post.yaml are generated by jinja2 instead of being hard-coded.
Change-Id: I488e46aaa449c95571bd3d1de9513c3d0730baf3
Partially-Implements: blueprint custom-roles
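A minimal sketch of the jinja2 pattern in post.j2.yaml (the role attribute
names and the exact resource types and names are assumptions):

    {% for role in roles %}
      {{role.name}}Deployment_Step1:
        type: OS::Heat::StructuredDeploymentGroup
        properties:
          servers: {get_param: [servers, {{role.name}}]}
          config: {get_resource: {{role.name}}Config}
    {% endfor %}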
|
To communicate with the glance registry, the glance API has several
parameters that it uses to form the URI. Right now we default to http;
when we enable TLS everywhere, this will break. Setting the value
from the endpoint map fixes that.
Closes-Bug: #1623477
Change-Id: Id86787cbaa6f87fdcf9c26111c228fd59fbba012
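A sketch of the fix in the Glance API service template, assuming the endpoint
map key is GlanceRegistryInternal and the hiera key is
glance::api::registry_client_protocol:

    config_settings:
      glance::api::registry_client_protocol:
        get_param: [EndpointMap, GlanceRegistryInternal, protocol]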
|
The corresponding puppet-tripleo change has merged:
I9220b7d020dc8ed45dd6ca83ea9647efd67ea648
Change-Id: Ic5309ada98c78a15aa3a47dd94acb9e68eb25295
|
This moves the now nearly identical group resources inside the loop.
There's a FIXME related to some deprecated compute parameters we'll
need to work around.
Change-Id: Iddd63c42754867125e65e7721ab9d9f46f4d6afb
Partially-Implements: blueprint custom-roles
|
Enables configuring a NetApp backend for the Manila service.
This was created based on the review at
https://review.openstack.org/#/c/188138/
This makes the netapp and generic backends disabled by default
in the services/manila-backend-*.yaml files. A backend is then
enabled via backend-specific environment files, which set
any config parameters and enable that backend.
It is expected that multiple Manila backend-specific environment
files might be specified simultaneously.
Finally, the generic and netapp configuration is split into separate
service files rather than using manila-base for everything.
Co-Authored-By: Ryan Hefner <rhefner@redhat.com>
Co-Authored-By: Ben Swartzlander <ben@swartzlander.org>
Closes-Bug: 1618479
Depends-On: Ic6f8e8d27ca20b9badddea5d16550aa18bff8418
Change-Id: I35fce32d0f6a5cc1c3382c2d0e0d6028928fd943
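A backend-specific environment file might then look roughly like this (the
resource alias and parameter names are assumptions; only the
services/manila-backend-*.yaml naming comes from the commit above):

    resource_registry:
      OS::TripleO::Services::ManilaBackendNetapp: ../puppet/services/manila-backend-netapp.yaml

    parameter_defaults:
      ManilaNetappBackendName: tripleo_netapp
      ManilaNetappServerHostname: 192.168.1.50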
|
The keystone public_endpoint value should be deduced from the calling
request and not hardcoded; otherwise network isolation becomes impossible.
Change-Id: Ide6a65aa9393cb84591b0015ec5966cc01ffbcf8
Closes-Bug: 1381961
|
This is done in the vncproxy profile, but for some reason not in the
compute one. Hiera breaks when the brackets are left in place, so we
need to do the bracket stripping here too.
This also switches both places to simply use the host_nobrackets version
of the endpoint instead of stripping the brackets with str_replace.
Change-Id: I7ccd84b575fd652f6412fdb1869c31c79a7bf53b
Closes-Bug: 1618623
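A sketch of the compute-side setting using the host_nobrackets field (the
puppet parameter and endpoint map key names are assumptions):

    nova::compute::vncproxy_host:
      get_param: [EndpointMap, NovaVNCProxyPublic, host_nobrackets]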
|
Configure Keystone credentials by installing 2 keys with dynamic content
generated by python-tripleoclient.
Note: this is a first iteration of managing Keystone credentials. It has
a few limitations:
- keys are not exported to external storage.
- keys are not rotated automatically.
Change-Id: I45cf8821eadf528dfcdc8d74e6e0484597b0d2c0
|
There was previously no way of getting it, and we can't ensure that the
primary IP will use it, so it needs to be set explicitly there.
Change-Id: Idb3ca22ac136691b0bff6f94524d133a4fa10617
|
This is necessary when HAProxy is terminating TLS for Manila; otherwise
we will get keystone discovery errors. This is the same thing we do
for several other services, as Manila uses the same middleware.
Change-Id: Ice78b0abceb6a956bb8c1dc6212ee1b56b62b43f
|
This patch adds support for deploying Ceph RGW.
Co-Authored-By: Giulio Fidente <gfidente@redhat.com>
Change-Id: I88c8659a36c2435834e8646c75880b0adc52e964
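Deploying it would then presumably be enabled via a resource_registry mapping
such as the following (both the resource alias and the template path are
assumptions):

    resource_registry:
      OS::TripleO::Services::CephRgw: ../puppet/services/ceph-rgw.yaml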
|