|
The http_proxy_to_wsgi middleware was recently added to Aodh [1]; in
order to make use of it, we need to enable it via hiera.
[1] If2ada8a94c8e1ceacd4509605b4cd766a78f71d5
Depends-On: I0981e152700ed4511b797011ebe18e857c1fed71
Related-Bug: #1590608
Change-Id: Ie9605ae1e5437f488802b03ca23a325866f0ceb5
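A minimal sketch of the kind of hieradata this implies, wired in through
the aodh-api service template's config_settings (the parameter name below
follows the usual puppet-openstack convention and is an assumption, not
taken from the patch):

  config_settings:
    # enable oslo.middleware http_proxy_to_wsgi header parsing (sketch)
    aodh::api::enable_proxy_headers_parsing: true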
|
|
These keys are already specified in nova-metadata.yaml
where they get set correctly per the network management
local IP (based on 'service_name' list).
Depends-On: I94f985e719a3bf7408655fbbb5ab1aeaf15e994e
Change-Id: I5d57561b732783118efd2a637aa137f5f7bcddbc
Partial-bug: #1631133
|
|
The glance-api and glance-registry services are currently coupled
in that some of the hiera settings in the API are required for
the registry to run correctly (the backend settings).
This patch moves some of the common settings into glance-base and
then updates the glance-api and glance-registry services to consume
that shared base service.
Change-Id: Ie3d7e24c7fd475e3f6ad542c1654eb7dbd9d9b35
Closes-bug: #1628582
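A rough sketch of the composition pattern described here, as it would
appear in glance-api.yaml (resource and attribute names follow the usual
composable-service layout and are illustrative, not copied from the patch):

  resources:
    GlanceBase:
      type: ./glance-base.yaml
  outputs:
    role_data:
      value:
        config_settings:
          map_merge:
            - {get_attr: [GlanceBase, role_data, config_settings]}
            # api-specific settings layered on top of the shared backend ones
            - glance::api::debug: {get_param: Debug}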
|
|
Need to set the right default notification driver for glance so that
telemetry receives its notifications. Without this, tempest tests
fail.
Closes-bug: #1631939
Change-Id: I1cee5467d077eea6142076925646f7d0cdae96c7
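A minimal sketch of the corresponding hieradata, assuming the usual
puppet-glance parameter for the notification driver (name and value are
assumptions, not copied from the patch):

  # send glance notifications over the message bus so telemetry can consume them
  glance::notify::rabbitmq::notification_driver: messagingv2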
|
|
This resolves the issue causing the 'step' hiera setting
to get written as a string (thus causing puppet failures)
on a pacemaker controller.
Change-Id: I70037889e499846460357928f8637a35ac97bc7a
Closes-bug: #1631488
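For context, a minimal sketch of the difference (values illustrative):

  # what puppet needs for comparisons like ($step >= 3) to work
  step: 2
  # what was being written on pacemaker controllers, breaking those comparisons
  step: "2"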
|
|
Depends-On: I04e28a95e8d69a24cd3df109bf1802bfcbd941db
Change-Id: I4ada033155e5fde0add08ec9aa8f6af7c31d53f3
|
|
role.role.j2.yaml"
|
|
The puppet-ceph module defaults the package name to 'ceph', but that is
a metapackage which isn't provided in all repos.
Depends-On: I13462219522386f8740b0d70916a44f3474115e4
Change-Id: Ie55d22301dd22102d471e6002dfcaad4bfadd5f6
Related-Bug: 1629933
|
|
- Move VXLAN and VRRP rules from Neutron Server to the right services.
- Enable Firewall by default on Compute nodes.
Change-Id: I99d172dcedaf6be297aad184cc51fe9f292a57e1
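A sketch of the per-service firewall hieradata format these rules use
(service names and rule numbers are illustrative):

  tripleo.neutron_l3.firewall_rules:
    '106 vrrp':
      proto: vrrp
  tripleo.neutron_ovs_agent.firewall_rules:
    '118 neutron vxlan networks':
      proto: udp
      dport: 4789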
|
|
This default setting got lost in the composable roles/services patches.
Re-enable the ManageFirewall setting by default, per what we did in
git commit 73c76b867ddc8a23a30b9a3cac4031189d4178c6.
We also fix a typo in neutron-api.yaml so that the firewall rules
match the service_name (otherwise they won't get loaded).
Also drops environments/manage-firewall.yaml, which is no longer
needed now that firewall management is enabled by default.
Change-Id: Ie198e4efd190131d0722085b10ef77da9005bc1b
Closes-bug: 1629934
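For reference, a sketch of what re-enabling the default looks like at the
parameter level (assuming the parameter keeps its boolean type):

  ManageFirewall:
    default: true
    type: boolean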
|
|
This will wire up the per-network hostnames in the generic role.
Needs to land after https://review.openstack.org/#/c/378764
Partial-Bug: #1626976
Change-Id: I595f35cce03d9f416a1768aa5c349a1bb20b0e19
|
|
This submission creates a generic template file to deploy custom roles.
It also adds a file specifying an exclusion list of roles for which
the generic template should not be generated.
Partial-Bug: #1626976
Depends-On: I6d7247bbb8702eb0ab9bdf133b5ab1c6e8349d98
Change-Id: I3e11c089023b793a5063d9e1714527a3fe2b7458
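A sketch of what such an exclusion file can look like (file and template
names here are illustrative, not taken from the patch):

  # j2_excludes.yaml - templates that should not be generated per-role
  name:
    - puppet/controller-role.yaml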
|
|
When deploying manila with cephfs backend,
/etc/manila/manila.conf should define
cephfs_conf_path = /etc/ceph/ceph.conf
in the cephfs native backend since this is
the conventional path that ceph operators expect
and since we document that path upstream.
Change-Id: I4abf5c33b675b1102413a84d64f4ce23b07b4485
Closes-Bug: 1630777
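The resulting manila.conf fragment would look roughly like this (the
backend section name is an assumption):

  [cephfsnative]
  cephfs_conf_path = /etc/ceph/ceph.conf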
|
|
In the great rebase following the JINJA ALL THE THINGS changes we lost
critical functionality in the fluentd client service. This review
restores the missing features.
Change-Id: I7c23f16f81e75f3da6a24587b2eb8385b3e920a4
Closes-bug: 1630692
|
|
This patch renames the service to the more appropriate "OpenDaylightApi",
alongside the "OpenDaylightOvs" service used to configure Open vSwitch.
It also splits out the OVS configuration for controller nodes into the
composable OpenDaylightOvs service.
Related-Bug: #1629408
Change-Id: I15221401acdfb2a9ef81107b54a8005348f8372f
Signed-off-by: Tim Rozet <trozet@redhat.com>
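The resulting service wiring would look roughly like this in the resource
registry (template paths are assumptions):

  OS::TripleO::Services::OpenDaylightApi: puppet/services/opendaylight-api.yaml
  OS::TripleO::Services::OpenDaylightOvs: puppet/services/opendaylight-ovs.yaml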
|
|
Co-Authored-By: Juan Antonio Osorio Robles <jaosorior@redhat.com>
Depends-On: Ic6fec1057439ed9122d44ef294be890d3ff8a8ee
Change-Id: I754c4a41d8a294a4c7c18bd282ae014efd4b9b16
Closes-Bug: #1628521
|
|
generation"
|
|
When generating these templates, we should create them with "-role"
appended, as they will be generated from a role.role.j2.yaml file, i.e.:
role.role.j2.yaml will generate <role>-role.yaml
config.role.j2.yaml will generate <role>-config.yaml
Partial-Bug: #1626976
Change-Id: I614dc462fd7fc088b67634d489d8e7b68e7d4ab1
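For a role named Compute, for example, the generation described above
works out as (paths illustrative):

  role.role.j2.yaml   -> compute-role.yaml
  config.role.j2.yaml -> compute-config.yaml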
|
|
This patch updates the pacemaker composable service templates for
mongo and redis to extend the proper base templates (redis.yaml and
mongo.yaml) instead of the -base.yaml versions. The previous wiring
left some hiera settings missing for these services, which caused
symptoms like missing firewall rules.
Change-Id: I3f94acbf4d1baadbb151b1c4d34b4a0ab28ad5e5
Partial-bug: #1629934
|
|
When updating a certificate for HAProxy, we only do a reload of the
configuration on non-HA setups. This means that if we try the same in
an HA setup, the cloud will still serve the old certificate, which
leads to several issues, such as serving a revoked or even a
compromised certificate for some time, or SSL errors because the
served certificate doesn't match. This enables a reload for HA cases too.
Change-Id: Ib8ca2fe91be345ef4324fc8265c45df8108add7a
Closes-Bug: #1629886
|
|
min-masters strategy"
|
|
It turns out that reducing the number of mirrored copies of rabbitmq
queues in the cluster significantly improves cluster performance,
especially failover recovery time. Right now the cluster uses the
ha-all mode for rabbitmq queues.
It is best to change this to "ha-exactly" mode and reduce the number
of queue copies to ceil(N/2), where N is the number of controllers in
the cluster - so in the typical scenario of 3 controllers it would be
2 by default.
It does not make much sense to keep copies of queues across the whole
cluster, since if the quorum of nodes is lost then the remaining
cluster nodes will be stopped anyway. We let the user override this
with a parameter.
I.e. for a 3 node controlplane cluster we will go from this:
pcs resource show rabbitmq
Resource: rabbitmq (class=ocf provider=heartbeat type=rabbitmq-cluster)
Attributes: set_policy="ha-all ^(?!amq\.).* {"ha-mode":"all"}"
To this:
pcs resource show rabbitmq
Resource: rabbitmq (class=ocf provider=heartbeat type=rabbitmq-cluster)
Attributes: set_policy="ha-all ^(?!amq\.).* {"ha-mode":"exactly","ha-params":2}"
According to Marin Krcmarik's testing, recovery time from failure was
reduced significantly.
Partial-Bug: #1628998
Change-Id: Iace6daf27a76cb8ef1050ada0de7ff1f530916c6
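For illustration, ceil(N/2) works out to:

  3 controllers -> 2 queue copies
  5 controllers -> 3 queue copies
  7 controllers -> 4 queue copies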
|
|
These hard-coded references to the Controller role mean that
things won't work if the keystone service is moved to any other
role, so we need to generate the lists dynamically based on the
enabled services for each role.
Change-Id: I5f1250a8a1a38cb3909feeb7d4c1000fd0fabd14
Closes-Bug: #1629096
|
|
This means the user won't have to manually specify e.g. the
OS::TripleO::ACustomRoleConfig resource.
Partial-Bug: 1626976
Change-Id: I063571d4c5cbc2f295a7a044d81c27d703bd0e10
Depends-On: I9f920e191344040a564214f3f9a1147b265e9ff3
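In other words, entries like the following end up being generated rather
than hand-written in an environment file (the template path is an
assumption):

  resource_registry:
    OS::TripleO::ACustomRoleConfig: puppet/acustomrole-config.yaml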
|
|
This removes the (nearly empty) per-role manifests and replaces them
with a generic manifest, where we use str_replace to substitute the
role name at runtime (or in some cases a subset of the name, for
backwards compatibility).
Change-Id: I79da0f523189959b783bbcbb3b0f37be778e02fe
Partial-Bug: #1626976
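A rough sketch of the str_replace pattern described (file name,
placeholder and parameter are hypothetical, not taken from the patch):

  config:
    str_replace:
      template: {get_file: manifests/overcloud_role.pp}
      params:
        __ROLE_NAME__: {get_param: RoleName}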
|
|
They are now normalized and set in puppet-tripleo.
Change-Id: I197481c577b85894178e7899a55869da47847755
Closes-Bug: #1629279
Depends-On: Ic6de09acf0d36ca90cc2041c0add1bc2b4a369a5
|
|
Add redis_password parameter in Hiera so we can re-use it from
puppet-tripleo later for Aodh, Ceilometer and Gnocchi.
Change-Id: I038e2bac22e3bfa5047d2e76e23cff664546464d
Partial-Bug: #1629279
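A minimal sketch of the hieradata being added (the key name is the one
stated above; the heat parameter it draws from is an assumption):

  redis_password: {get_param: RedisPassword}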
|
|
This will be used for internal (or even public) TLS, for when
certmonger is generating the certificates. This same setting is used
for the undercloud with the generate_service_certificate option.
Change-Id: Ic54fe512b9ed5c71417a66491b7954e653f660b6
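For comparison, the undercloud side of this is driven from
undercloud.conf, roughly like the following (the CA value shown is
illustrative):

  [DEFAULT]
  generate_service_certificate = true
  certificate_generation_ca = local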
|
|
strategy
It may happen that one of the controllers becomes unavailable and queue
masters are then placed on the available controllers during queue
declarations. Once the lost controller becomes available again, masters
of newly declared queues are not placed on it with any priority, even
though it hosts an obviously lower number of queue masters, so the
distribution may become unbalanced and one of the controllers may end
up under significantly higher load after multiple fail-overs.
With version 3.6.0, RabbitMQ introduced a new HA feature, queue master
distribution - one of the strategies is min-masters, which picks the
node hosting the minimum number of masters.
One way to turn the min-masters strategy on is to add the following to
the configuration file, rabbitmq.config:
{rabbit, [ ..
  {queue_master_locator, <<"min-masters">>},
.. ]},
Change-Id: I61bcab0e93027282b62f2a97bec87cbb0a6e6551
Closes-Bug: #1629010
|
|
This adds the necessary hieradata to run nova over httpd instead
of eventlet.
Change-Id: I57fb20cf0d58b3376243ba4aeb04e995e7152ce3
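A minimal sketch of the kind of hieradata involved, following the usual
puppet-openstack pattern for moving an API under httpd (the key name is
an assumption, not taken from the patch):

  # run nova-api as a WSGI app under apache instead of eventlet
  nova::api::service_name: httpd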
|