The puppet-ceph module defaults to the 'ceph' package, but that is a
metapackage which isn't provided in all repos.
Depends-On: I13462219522386f8740b0d70916a44f3474115e4
Change-Id: Ie55d22301dd22102d471e6002dfcaad4bfadd5f6
Related-Bug: 1629933
|
|
- Move VXLAN and VRRP rules from Neutron Server to the right services.
- Enable Firewall by default on Compute nodes.
Change-Id: I99d172dcedaf6be297aad184cc51fe9f292a57e1
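For illustration, composable services in tripleo-heat-templates declare firewall rules under config_settings; the moved rules would take roughly this shape (service names, rule numbers and port values here are illustrative, not copied from this change):
  config_settings:
    tripleo.neutron_l3.firewall_rules:
      '106 neutron vrrp':
        proto: vrrp
    tripleo.neutron_ovs_agent.firewall_rules:
      '118 neutron vxlan networks':
        proto: udp
        dport: 4789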
|
|
This default setting got lost in the composable roles/services patches.
Re-enable the ManageFirewall setting by default, per what we did in
git commit 73c76b867ddc8a23a30b9a3cac4031189d4178c6.
We also fix a typo in neutron-api.yaml so that the firewall rules key
matches the service_name (otherwise the rules won't get loaded).
Also, drop the environments/manage-firewall.yaml, which is
no longer needed if we enable firewall management by default.
Change-Id: Ie198e4efd190131d0722085b10ef77da9005bc1b
Closes-bug: 1629934
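For reference, the knob in question is a plain boolean parameter in the firewall service template, along these lines (a sketch, not the verbatim definition):
  ManageFirewall:
    default: true
    description: Whether to manage IPtables rules.
    type: boolean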
|
|
This will wire up the per-network hostnames in the generic role.
Needs to land after https://review.openstack.org/#/c/378764
Partial-Bug: #1626976
Change-Id: I595f35cce03d9f416a1768aa5c349a1bb20b0e19
|
|
This submission creates a generic template
file to deploy custom roles.
Also adds a file to specify a role exclusion list, so that
templates are not generated for those roles.
Partial-Bug: #1626976
Depends-On: I6d7247bbb8702eb0ab9bdf133b5ab1c6e8349d98
Change-Id: I3e11c089023b793a5063d9e1714527a3fe2b7458
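A minimal sketch of what the exclusion file could look like (file name and entry are illustrative):
  # j2_excludes.yaml - templates listed here are not generated from j2
  name:
    - puppet/controller-role.yaml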
|
|
When deploying manila with the CephFS backend,
/etc/manila/manila.conf should define
cephfs_conf_path = /etc/ceph/ceph.conf
for the cephfs native backend, since this is
the conventional path that Ceph operators expect
and the one we document upstream.
Change-Id: I4abf5c33b675b1102413a84d64f4ce23b07b4485
Closes-Bug: 1630777
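In hiera terms the fix boils down to a single key; a sketch, assuming the puppet-manila cephfsnative backend class exposes this parameter:
  manila::backend::cephfsnative::cephfs_conf_path: '/etc/ceph/ceph.conf'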
|
|
In the great rebase following the JINJA ALL THE THINGS changes we lost
critical functionality in the fluentd client service. This review
restores the missing features.
Change-Id: I7c23f16f81e75f3da6a24587b2eb8385b3e920a4
Closes-bug: 1630692
|
|
This patch modifies the service name to be more appropriately called
"OpenDaylightApi", alongside the "OpenDaylightOvs" service used to
configure Open vSwitch. It also splits out the OVS configuration for
controller nodes into the composable OpenDaylightOvs service.
Related-Bug: #1629408
Change-Id: I15221401acdfb2a9ef81107b54a8005348f8372f
Signed-off-by: Tim Rozet <trozet@redhat.com>
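With the split, the two services would be wired up in the resource registry roughly as follows (paths assumed from the usual THT layout):
  OS::TripleO::Services::OpenDaylightApi: puppet/services/opendaylight-api.yaml
  OS::TripleO::Services::OpenDaylightOvs: puppet/services/opendaylight-ovs.yaml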
|
|
Co-Authored-By: Juan Antonio Osorio Robles <jaosorior@redhat.com>
Depends-On: Ic6fec1057439ed9122d44ef294be890d3ff8a8ee
Change-Id: I754c4a41d8a294a4c7c18bd282ae014efd4b9b16
Closes-Bug: #1628521
|
|
generation"
|
|
When generating these templates, we should
create them with "-role" appended, as they will
be generated from a role.role.j2.yaml file, i.e.:
role.role.j2.yaml will generate <service>-role.yaml
config.role.j2.yaml will generate <service>-config.yaml
Partial-Bug: #1626976
Change-Id: I614dc462fd7fc088b67634d489d8e7b68e7d4ab1
|
|
This patch updates the pacemaker composable service templates for
mongo and redis to extend the proper base templates (redis.yaml and
mongo.yaml) instead of the -base.yaml versions. The old inheritance
left some hiera settings missing for these services, which caused
symptoms like missing firewall rules.
Change-Id: I3f94acbf4d1baadbb151b1c4d34b4a0ab28ad5e5
Partial-bug: #1629934
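The pacemaker variants compose the base template as a nested resource, a pattern of this shape (relative path is illustrative):
  resources:
    RedisBase:
      type: ../redis.yaml
      properties:
        EndpointMap: {get_param: EndpointMap}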
|
|
When updating a certificate for HAProxy, we only do a reload of the
configuration on non-HA setups. This means that if we try the same in
an HA setup, the cloud will keep serving the old certificate, which
leads to several issues, such as serving a revoked or even a
compromised certificate for some time, or SSL errors because the
served certificate doesn't match. This enables a reload for HA cases too.
Change-Id: Ib8ca2fe91be345ef4324fc8265c45df8108add7a
Closes-Bug: #1629886
|
|
min-masters strategy"
|
|
It turns out that reducing the number of RabbitMQ queue copies in the
cluster significantly improves performance, especially failover
recovery time. Right now the cluster uses the ha-all mode for RabbitMQ
queues.
It is best to change this to "ha-exactly" mode and reduce the number
of queue copies to ceil(N/2), where N is the number of controllers in
the cluster - so in the typical scenario of 3 controllers it would be
2 by default.
It does not make much sense to keep copies of queues across the whole
cluster, since if the quorum of nodes is lost then the rest of the
cluster nodes will be stopped anyway. We let the user override this
with a parameter.
I.e. for a 3 node controlplane cluster we will go from this:
pcs resource show rabbitmq
Resource: rabbitmq (class=ocf provider=heartbeat type=rabbitmq-cluster)
Attributes: set_policy="ha-all ^(?!amq\.).* {"ha-mode":"all"}"
To this:
pcs resource show rabbitmq
Resource: rabbitmq (class=ocf provider=heartbeat type=rabbitmq-cluster)
Attributes: set_policy="ha-all ^(?!amq\.).* {"ha-mode":"exactly","ha-params":2}"
According to Marin Krcmarik's testing, recovery time from failure was
reduced significantly.
Partial-Bug: #1628998
Change-Id: Iace6daf27a76cb8ef1050ada0de7ff1f530916c6
|
|
These hard-coded references to the Controller role mean that
things won't work if the keystone service is moved to any other
role, so we need to generate the lists dynamically based on the
enabled services for each role.
Change-Id: I5f1250a8a1a38cb3909feeb7d4c1000fd0fabd14
Closes-Bug: #1629096
|
|
This means the user won't have to manually specify e.g. the
OS::TripleO::ACustomRoleConfig resource.
Partial-Bug: 1626976
Change-Id: I063571d4c5cbc2f295a7a044d81c27d703bd0e10
Depends-On: I9f920e191344040a564214f3f9a1147b265e9ff3
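Concretely, the generated registry can default each role's config mapping with a j2 loop of this shape (a sketch):
  {% for role in roles %}
  OS::TripleO::{{role.name}}Config: puppet/{{role.name.lower()}}-config.yaml
  {% endfor %}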
|
|
This removes the (nearly empty) per role manifests, and
replaces them with a generic manifest, where we use str_replace
to substitute the role name at runtime (or in some cases a
subset of the name for backwards compatibility)
Change-Id: I79da0f523189959b783bbcbb3b0f37be778e02fe
Partial-Bug: #1626976
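Schematically, the generic manifest is wired in with Heat's str_replace; all names below are hypothetical placeholders for the pattern, not the actual resources:
  RoleConfig:
    type: OS::Heat::SoftwareConfig
    properties:
      group: puppet
      config:
        str_replace:
          template: {get_file: manifests/overcloud_role.pp}
          params:
            __ROLE_NAME__: {get_param: RoleName}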
|
|
They are now normalized and set in puppet-tripleo.
Change-Id: I197481c577b85894178e7899a55869da47847755
Closes-Bug: #1629279
Depends-On: Ic6de09acf0d36ca90cc2041c0add1bc2b4a369a5
|
|
Add a redis_password parameter in hiera so we can re-use it from
puppet-tripleo later for Aodh, Ceilometer and Gnocchi.
Change-Id: I038e2bac22e3bfa5047d2e76e23cff664546464d
Partial-Bug: #1629279
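A sketch of the wiring in the redis service template (assuming puppet-tripleo consumes the key as-is):
  config_settings:
    redis_password: {get_param: RedisPassword}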
|
|
This sets up a flag that tells the profiles to use TLS (this will happen
in the internal network).
bp tls-via-certmonger
Change-Id: If47febb5b38b1c65f60f9de87a34cb31936a7c0d
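The flag surfaces as a boolean parameter that an environment file can switch on, e.g.:
  parameter_defaults:
    EnableInternalTLS: true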
|
|
This adds some basic pieces to get certmonger to manage the
certificates for HAProxy. The aim is to be flexible enough that we
will be able to manage both public and internal certificates.
This also adds a relevant environment to get the endpoints to have
TLS everywhere.
bp tls-via-certmonger
Depends-On: I89001ae32f46c9682aecc118753ef6cd647baa62
Change-Id: Ife5f8c2f07233295bc15b4c605acf3d9bd62f162
|
|
This will be used for internal (or even public) TLS, for when
certmonger is generating the certificates. This same setting is used
for the undercloud with the generate_service_certificate option.
Change-Id: Ic54fe512b9ed5c71417a66491b7954e653f660b6
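A sketch of how the CA choice might be expressed (parameter name and value are assumptions based on the undercloud analogue):
  parameter_defaults:
    CertmongerCA: local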
|
|
strategy
It may happen that one of the controllers becomes unavailable and
Queue Masters will be located on the available controllers during
queue declarations. Once the lost controller becomes available again,
masters of newly declared queues are not placed with priority on that
controller, even though it hosts an obviously lower number of queue
masters; the distribution may thus be unbalanced, and after multiple
fail-overs one of the controllers may end up under significantly
higher load.
With version 3.6.0, RabbitMQ introduced a new HA feature of Queue
Master distribution - one of the strategies is min-masters, which
picks the node hosting the minimum number of masters.
One of the ways to turn the min-masters strategy on is by adding the
following to the configuration file, rabbitmq.config:
{rabbit,[ ..
{queue_master_locator, <<"min-masters">>},
.. ]},
Change-Id: I61bcab0e93027282b62f2a97bec87cbb0a6e6551
Closes-Bug: #1629010
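In the composable rabbitmq service this Erlang snippet is carried through hiera; a sketch, assuming the key is passed to puppet-rabbitmq's config_variables:
  rabbitmq_config_variables:
    queue_master_locator: '<<"min-masters">>'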
|
|
This adds the necessary hieradata to run nova over httpd instead
of eventlet.
Change-Id: I57fb20cf0d58b3376243ba4aeb04e995e7152ce3
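A sketch of the relevant hieradata (key names assumed from puppet-nova's apache/WSGI support):
  nova::api::service_name: 'httpd'
  nova::wsgi::apache::ssl: false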
|
|
We do not want cinder-volume to be managed by Pacemaker on
BlockStorage nodes, where Pacemaker is not running at all.
This change adds a new BlockStorageCinderVolume service name
which can (and is, by default) mapped to the non Pacemaker
implementation of the service.
The error was:
Could not find dependency Exec[wait-for-settle] for
Pacemaker::Resource::Systemd[openstack-cinder-volume]
Also moves cinder::host setting into the Pacemaker specific service
definition because we only want to set a shared host= string when
the service is managed by Pacemaker.
Closes-Bug: #1628912
Change-Id: I2f7e82db4fdfd5f161e44d65d17893c3e19a89c9
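In registry terms, the new service name maps to the plain implementation by default, while the pacemaker environment only remaps the Controller-side service; roughly:
  OS::TripleO::Services::BlockStorageCinderVolume: puppet/services/cinder-volume.yaml
  OS::TripleO::Services::CinderVolume: puppet/services/pacemaker/cinder-volume.yaml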
|
|
Move the rest of the static resource registry entries to j2; this
allows extending the content of the template based on the roles_list.
Also move the templates to correspond with the role names.
Partial-Bug: #1626976
Change-Id: I1cbe101eb4ce5a89cba5f2cc45cace43d3380f22
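For example, per-role entries can then be emitted with a j2 loop of this shape (a sketch):
  {% for role in roles %}
  OS::TripleO::{{role.name}}: puppet/{{role.name.lower()}}-role.yaml
  OS::TripleO::{{role.name}}::Net::SoftwareConfig: net-config-noop.yaml
  {% endfor %}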
|
|
This patch deprecates netapp_eseries_host_type in favor of netapp_host_type.
Change-Id: I113c770ca2e4dc54526d4262bacae48e223c54f4
Closes-Bug: 1579161
|
|
This patch moves the various db::mysql hiera settings into a
'mysql' specific service_config_settings section for each
service so that these will only get applied on the MySQL service
node. This follows a similar puppet-tripleo change where we
create the actual databases for all services locally on
the MySQL service node to avoid permission issues.
Change-Id: Ic0692b1f7aa8409699630ef3924c4be98ca6ffb2
Closes-bug: #1620595
Depends-On: I05cc0afa9373429a3197c194c3e8f784ae96de5f
Depends-On: I5e1ef2dc6de6f67d7c509e299855baec371f614d
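The pattern in a service template looks like this (nova shown as an illustrative example):
  service_config_settings:
    mysql:
      nova::db::mysql::user: nova
      nova::db::mysql::password: {get_param: NovaPassword}
      nova::db::mysql::host: {get_param: [EndpointMap, MysqlInternal, host_nobrackets]}
      nova::db::mysql::dbname: nova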
|
|
In upstream puppet-keystone, the bootstrap process should use an admin
password, not the admin token, for bootstrapping keystone. The
admin password option is being added to the upstream class, so we will
need to provide it to have keystone bootstrapped properly.
Change-Id: Icab4b0cb70d6caf2f2792c4fe730f060b807fbc1
Depends-On: I7a706d93b43ec025bdb4b29667f64ff2f7dd52a0
Related-Bug: #1621959
|
|
This patch enables correctly setting the NTP server passed via
--ntp-server in the overcloud nodes' /etc/ntp.conf.
Change-Id: Iff644b9da51fb8cd1946ad9d297ba0e94d3d782b
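The value passed via --ntp-server lands in the NtpServer parameter, e.g. (server value is just an example):
  parameter_defaults:
    NtpServer: pool.ntp.org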
|
|
Without setting this parameter, overcloud deploy fails and
'openstack stack failures list overcloud' reveals the
following error:
Error: Puppet::Type::Keystone_user_role::ProviderOpenstack: Could
not find project with name [services] and domain [Default]
Error:
/Stage[main]/Manila::Keystone::Auth/Keystone::Resource::Service_identity[manilav2]/Keystone_user_role[manilav2@services]:
Could not evaluate: undefined method `[]' for nil:NilClass
When we set manila::keystone::auth::tenant to 'service', analogous
to cinder, nova, etc., the overcloud deploy completes successfully.
Change-Id: I996ac2ff602c632a9f9ea9c293472a6f2f92fd72
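That is, the fix amounts to this hiera setting in the manila service template:
  manila::keystone::auth::tenant: 'service'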