This commit implements composable HA for the pacemaker profiles.
- Every time a pacemaker resource gets included on a node, that node
adds a cluster node property named after the resource
(e.g. galera-role=true).
- A location rule constraint is added to force the resource to run only
on the nodes that have that property (see the sketch after this list).
- We also make sure that any pacemaker resource/property creation has a
predefined number of tries (20 by default). The reason for this is
that within composable HA, it might be possible to get "older CIB"
errors when another node changed the CIB while we were doing an
operation on it. Simply retrying fixes this.
- Also make sure that we use the newly introduced
pacemaker::constraint::order class instead of the older
pacemaker::constraint::base class. The former uses the push_cib()
function and hence behaves correctly in case multiple nodes try
to modify the CIB at the same time.
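As a rough illustration of the first two points, a minimal sketch of the
per-node property and the location rule; the pacemaker::property type, the
location_rule parameter and the exact parameter names are assumptions about
the puppet-pacemaker interface:

pacemaker::property { 'galera-role-node-property':
  property => 'galera-role',
  value    => true,
  node     => $::hostname,
  tries    => $pcs_tries,   # e.g. hiera('pcs_tries', 20)
}

pacemaker::resource::ocf { 'galera':
  ocf_agent_name => 'heartbeat:galera',
  tries          => $pcs_tries,
  location_rule  => {
    resource_discovery => 'exclusive',
    score              => 0,
    expression         => ['galera-role eq true'],
  },
}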
Change-Id: I63da4f48da14534fd76265764569e76300534472
Depends-On: Ib931adaff43dbc16220a90fb509845178d696402
Depends-On: I8d78cc1b14f0e18e034b979a826bf3cdb0878bae
Depends-On: Iba1017c33b1cd4d56a3ee8824d851b38cfdbc2d3
|
Since the commit this depends on sets it up via hieradata, the
conditions here are no longer needed.
bp tls-via-certmonger
Change-Id: I66956f0b85e8e3bf1ab9562221d51d51c230b88e
Depends-On: I693213a1f35021b540202240e512d121cc1cd0eb
|
This adds a base profile called pacemaker_remote which allows the
operator to automatically configure the pacemaker_remote service on the
nodes where the profile is applied. The manifest also automatically adds
any pacemaker_remote nodes to the pacemaker cluster.
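A minimal sketch of the profile, assuming puppet-pacemaker's
::pacemaker::remote class (the parameter names are assumptions):

class tripleo::profile::base::pacemaker_remote (
  $remote_authkey,
  $step = hiera('step'),
) {
  if $step >= 1 {
    class { '::pacemaker::remote':
      remote_authkey => $remote_authkey,
    }
  }
}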
Depends-On: I0c01ecb7df1a0f9856fdc866b9d06acf0283fa4f
Depends-On: Ic0488f4fc63e35b9aede60fae1e2cab34b1fbdd5
Change-Id: I92953afcc7d536d387381f08164cae8b52f41605
|
This uses the tls_proxy resource added in the previous commit [1] in
front of the Glance API server when internal TLS is enabled. Right
now values are passed quite manually, but a subsequent commit will use
t-h-t to pass the appropriate hieradata, and then we'll be able to clean
it up from here.
Note that the proxy is only deployed when internal TLS is enabled.
[1] I82243fd3acfe4f23aab373116b78e1daf9d08467
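A minimal sketch of the wiring, assuming the parameters exposed by the
tripleo::tls_proxy define (the names here are illustrative):

if $enable_internal_tls {
  ::tripleo::tls_proxy { 'glance-proxy':
    servername => $tls_proxy_fqdn,
    ip         => $tls_proxy_bind_ip,
    port       => $glance_api_port,
    tls_cert   => $tls_certfile,
    tls_key    => $tls_keyfile,
  }
}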
bp tls-via-certmonger
Depends-On: Id5dfb38852cf2420f4195a3c1cb98d5c47bbd45e
Change-Id: Id35a846d43ecae8903a0d58306d9803d5ea00bee
|
Glance Registry has been removed in TripleO, so we can clean up
puppet-tripleo and remove the last bits that used to deploy this service.
Change-Id: Iea8f6340349ab366606205305a3ec9a6e4f11ba6
|
Change-Id: If4b091e1ca02f43aa9c65392baf8ceea007b7cfb
|
Currently the inter-cluster communication port listens to all ip
addresses:
tcp 0 0 0.0.0.0:25672 0.0.0.0:* LISTEN 25631/beam.smp
In order to limit it to listen only to the network assigned to rabbitmq
we need to add the following:
{kernel, [
...
{inet_dist_use_interface, {172,17,0,16}},
...
]}
To do the conversion from an IP address to the Erlang representation, we
add a function that takes a string and returns the converted output. The
(~400 randomly generated) IPv6/IPv4 addresses at [1] have been parsed
both via erl's built-in inet:parse_address() function and our Ruby
implementation, and all converted IP addresses resulted in the same
output [2], [3]. The only difference is that Erlang's parse_address()
considers network IP addresses (e.g. 10.0.0.0) invalid, whereas the Ruby
function does not. This should not be a problem, as the use case here is
to bind a service to a specific IP address on an interface, and if
anything we prefer the less strict behaviour, given that at least in
theory it is perfectly valid for an interface to have a network address
assigned to it.
[1] http://acksyn.org/files/tripleo/ip-addresses.txt
[2] http://acksyn.org/files/tripleo/ip-addresses-ruby.txt
[3] http://acksyn.org/files/tripleo/ip-addresses-erl.txt
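For reference, a minimal sketch of how the converted value can be fed to
the rabbitmq class through its config_kernel_variables parameter (the
conversion function name used here is illustrative):

$rabbit_ip = hiera('rabbitmq::interface')
class { '::rabbitmq':
  config_kernel_variables => {
    'inet_dist_use_interface' => ip_to_erl_format($rabbit_ip),  # e.g. "{172,17,0,16}"
  },
}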
Change-Id: I211c75b9bab25c545bcc7f90f34edebc92bba788
Partial-Bug: #1645898
|
Glance params are also used by cinder-volume. This patch aims to use
cinder::glance in the common roles for cinder, so we can split cinder-api
and cinder-volume.
Depends-On: Id81c029318016068481dd614ed62cc4bfaf0f3e8
Change-Id: I9703efb38c2a3166c7f21c5c1b942f33abb9e76c
|
nova::placement needs to be declared on more than just the Placement API
node, because its credentials are used by different services (at least
nova-compute now).
This patch moves the class to base/nova.pp, at the same step, so compute
nodes will have the credentials and will be able to use the Placement
API in multinode environments.
Change-Id: Iada8e9fcccec7dbfe7ac0ec0f9ec6eac1581290e
|
Adds initial base profile and profile for API service.
Partially-implements: blueprint octavia-service-integration
Change-Id: I77783029797be4fb488c6e743c51d228eba9c474
|
This Puppet manifest installs and configures the NTP service by default.
It also makes sure chrony is purged, because it is present in the EL7
images.
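A minimal sketch of the intent, assuming the stock puppetlabs-ntp module
is used underneath:

package { 'chrony':
  ensure => 'purged',
}
include ::ntp
Package['chrony'] -> Class['ntp']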
Change-Id: If3cf7d9690001b051465ea25cf8a8c3bc6f7c33a
|
Let's set a default number of retries for the stonith property creation
as well, just like we do for most of the composable HA resource
creation.
Change-Id: Ie6e19cc838a3f45100f6c98a350bdf6a37d40590
Depends-On: I20098c5d69cde356fe79f6d8dbdc03ae42ecb3ef
|
etcd is used by the networking-vpp ML2 driver as its messaging
mechanism. This patch adds an etcd service which can be used by other
services as well.
Implements: blueprint fdio-integration-tripleo
Change-Id: Idaa3e3deddf9be3d278e90b569466c2717e2d517
Signed-off-by: Feng Pan <fpan@redhat.com>
|
This change adds a profile for the Ceph RBD mirror service, which
should be managed by Pacemaker to make sure there is always a single
instance running.
Change-Id: Ic63dc5cffece38942d305f538f71dd58a5d50789
Partial-Bug: #1652177
|
We don't need this flag anymore, as we will disable the API using the
composable interface instead.
See I67900f7e6816212831aea8ed18f323652857fbd3
Closes-bug: #1656364
Change-Id: Ib6aea02bde6ad7e5223336579f0a99d6cd3ee98f
|
Based on Steve Hardy's comments in
https://review.openstack.org/#/c/413748/, we need to move handling of
the list of plugins out of the heat templates and into puppet. This
module now uses the service_names variable to look up information on
per-service collectd plugins.
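A minimal sketch of the lookup pattern (the hiera key layout and the
collectd plugin class names here are illustrative, not the module's exact
API):

$service_names = hiera('service_names', [])
$service_names.each |String $service| {
  $plugins = hiera("tripleo.collectd.plugins.${service}", [])
  $plugins.each |String $plugin| {
    include "::collectd::plugin::${plugin}"
  }
}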
Change-Id: Ie5fba01e1f91ffdc39eb0eb1be9b1464c797b04f
|
When we create a pacemaker resource it must happen from a single node.
If it happens from multiple nodes an immediate error will be returned by
pcs.
For the pacemaker roles we enforce this by leveraging the recently
introduced <SERVICE_NAME_bootstrap_short_node_name> which gives us
the first hostname per-service, regardless of the role.
(introduced via I03e8685f939e8ae1fcd8b16883b559615042505d)
With this approach, if a pacemaker service belongs to two different
roles (say role Controller on node A and role Galera on node B), the
resource will be created from only one of the two nodes and not both
(which would return an error).
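A minimal sketch of the resulting guard, using galera as the example
service and assuming stdlib's downcase function (the hiera key follows
the variable mentioned above):

$bootstrap_node = hiera('galera_bootstrap_short_node_name')
if downcase($::hostname) == downcase($bootstrap_node) {
  $pacemaker_master = true
} else {
  $pacemaker_master = false
}
# Resource creation in the profile is then wrapped in
# "if $pacemaker_master { ... }" so pcs is invoked from a single node only.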
Only setting Partial-Bug for this one, because it addresses the issue
from the pacemaker resource creation POV (which is always affected). But
the issue itself is a race that we're theoretically affected by since
the composable roles work landed. While I have tried to fix the more
general case in previous attempts, I think it is best if we start a
discussion on how to fix it, because each approach has a bunch of
potential drawbacks and is quite invasive on how we do things. A
discussion slot for this has been proposed for the Atlanta PTG.
Change-Id: I662398cab60d523d204b57a5674ca8f5c0f2e68a
Partial-Bug: #1615983
|
This feature is broken for us now and there is work in progress in Nova
to improve nova cell deployment.
Until it's fixed upstream, we need to disable cells deployment for now,
so we can promote our CI.
Change-Id: I379ba9e94a92ed225a03a67fc975b542447a9c8b
Related-Bug: #1649341
|
Since we include ::heat::keystone::domain at step 3, and that class
requires heat.conf because it uses the heat_config resource, we also
need to include ::heat at step 3. The ::heat class takes care of
installing openstack-heat-common, which provides heat.conf.
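A minimal sketch of the resulting step logic:

if $step >= 3 {
  include ::heat                    # installs openstack-heat-common, which provides heat.conf
  include ::heat::keystone::domain  # uses heat_config and therefore needs heat.conf present
}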
Closes-Bug: #165389
Partially-implements: blueprint split-stack-software-configuration
Change-Id: I5ba34ca96ca84d3f1cf3785ed8bbef6720f7bd42
|
The Manila Ceph driver reads Ceph's client configuration (the keyring
being the most important part) from the ceph.conf file (or any other
file set by cephfs_conf_path), so ceph.conf should be updated with the
keyring location.
If Ceph is deployed by TripleO, the Manila Ceph key is also added to
Ceph and the Ceph filesystem is created.
Depends-On: I18436a64fc991b9e697a1d79e369ac110cf8fe20
Change-Id: Iac4a260af6738ed6afd4bcb107221a736d07c1b5
Partial-Bug: #1644784
Closes-Bug: #1646147
|
Allow TripleO to deploy Nova Placement API with a new profile.
Change-Id: I5e25a50f3d7a9b39f4146a61cb528963ee09e90c
|
This change fixes the base nova profile to use the class parameter
rather than continuing to call hiera directly. Additionally, this change
includes basic test coverage for the various nova profiles.
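A minimal sketch of the pattern (the class body here is illustrative):

class tripleo::profile::base::nova (
  $step = hiera('step'),
) {
  # Use the $step parameter below instead of calling hiera('step') again.
  if $step >= 4 {
    include ::nova
  }
}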
Change-Id: If393606eeb3c39ed3a2655bd89c5c276a9cf106e
|
Having the db_sync code live in the mysql profile causes coupling that
doesn't work unless your MySQL server has the latest Nova packages
installed. This may not work for some baremetal setups (where an
isolated database exists) or with containers, where the MySQL container
definitely doesn't have the Nova packages installed.
Moving this code into the nova-api role also matches where we were
already db-syncing the normal API database, so it should be fine and
safe.
Change-Id: Ib625e2ac9c8d6bd1d335c58e291facc4ea5839ae
Co-Authored-By: Alex Schultz <aschultz@redhat.com>
|
This patch adds the option of using Keystone v3 authentication with
the Ceph/RGW service instead of using the admin_token.
Change-Id: I42861afcac221478dcb68be13b6dbc2533a7f158
|