Age | Commit message | Author | Files | Lines |
|
|
Without this, neutron-server fails to start and communication with the
ODL REST interface does not work.
Partial-Bug: 1633630
Change-Id: Ifd906db4e6062ac271c2147fe1149b1009d06ae2
Signed-off-by: Tim Rozet <trozet@redhat.com>
|
|
This patch updates the Nova profile so that we set memcached
servers correctly for the Nova keystone auth_token middleware.
Most of the hiera settings for ::nova::keystone::authtoken are
already included in the t-h-t nova-api service.
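As a rough illustration, the wiring looks something like this (hiera key
and exact profile code are assumed, not verbatim):
# Sketch: point the auth_token middleware at the memcached nodes.
$memcached_ips = hiera('memcached_node_ips')
class { '::nova::keystone::authtoken':
  memcached_servers => suffix($memcached_ips, ':11211'),
}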
Change-Id: I3b7ff02abbd0d5e0c38232d02b33e4c7bc411120
Closes-bug: #1633595
|
|
When deploying via ipv6, rabbitmq-ctl commands have the following
issues:
- `rabbitmqctl cluster_status` shows nodedown alerts
- list_queues / list_connections hang
- `rabbitmqctl node_health_check` fails with an error.
There is no issue when performing activity on the RHOS setup (from
horizon/CLI), i.e. the RHOS environment is functioning as expected.
For example:
sudo rabbitmqctl node_health_check -n rabbit@node1
Checking health of node 'rabbit@node1' ...
Health check failed:
health check of node 'rabbit@node1' fails: nodedown
The problem is that we are missing the following in
/etc/rabbitmq/rabbitmq-env.conf:
RABBITMQ_CTL_ERL_ARGS="-proto_dist inet6_tcp"
Fix these by setting the appropriate RABBITMQ_CTL_ERL_ARGS when
deploying ipv6.
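A minimal sketch of how this can be wired through the rabbitmq puppet
module (the IPv6 flag name is assumed):
# Sketch only: write the erl args into rabbitmq-env.conf on IPv6 deployments.
if hiera('rabbit_ipv6', false) {
  class { '::rabbitmq':
    environment_variables => {
      'RABBITMQ_CTL_ERL_ARGS' => '"-proto_dist inet6_tcp"',
    },
  }
}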
Closes-Bug: #1633693
Change-Id: I53f4e76e687b3966fbb74fd0c2d83f05176630de
|
|
We use the rabbit_hosts configuration for most of our services, but we
haven't been adding the configured port. This patch appends the port
provided to the service's heat template to each IP in the list.
Note: while we could use the value set for the rabbitmq server in
rabbitmq::port, that does not allow for dealing with SSL. This approach is
also backwards compatible with the RabbitClientPort parameters used in the
heat templates.
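For illustration, appending the port could look roughly like this (hiera
key names assumed):
# Sketch: build 'host:port' pairs for rabbit_hosts.
$rabbit_port  = hiera('nova::rabbit_port', 5672)
$rabbit_hosts = suffix(hiera('rabbitmq_node_ips'), ":${rabbit_port}")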
Change-Id: I0000f039144a6b0e98c0a148dc69324f60db3d8b
Closes-Bug: #1633580
|
|
Since moving to composable services/roles, some logic here relied on a
variable to enable ODL rather than letting the service itself decide
where ODL is enabled. Now that the ODL and ODL OVS configuration are
split into two different services, we can make these truly composable.
Partial-Bug: 1633625
Change-Id: Ia55c05e12d5d434111a13e1ed795da530e3ff4a5
Signed-off-by: Tim Rozet <trozet@redhat.com>
|
|
Change-Id: Ie215289a7be681a2b1aa5495d3f965c005d62f52
Depends-On: Ia863b38bbac1aceabe6b7deb6939c9db693ff16d
|
|
This adds the necessary resources to the manifest to migrate cinder
to run over httpd. The service name will be moved to t-h-t in a
subsequent commit, but since this patch depends on t-h-t, we try to
avoid circular dependencies of repos.
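For reference, the shape of the change is roughly the following (assuming
cinder::api exposes a service_name parameter like other puppet-openstack
modules):
# Sketch: run cinder-api under httpd via the wsgi class.
class { '::cinder::api':
  service_name => 'httpd',
}
include ::cinder::wsgi::apache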
Change-Id: I950257e3b5d8db071752e53557115429574e98e2
Depends-On: Ic1967a6f4f60a273965811516f33121115d518b4
|
|
The patch making nova run over httpd had added migration logic to
stop nova-api. However, this doesn't work since nova-metadata runs
in the same process. The fact that it was working at all seems to be
just luck: the systemctl stop runs, and then we start the service
again via the nova::api resource, so it is fragile in its current
state.
This removes the exec, as we don't need it for the migration.
Change-Id: I4603b81d30a704b07eef461b3cdbfe164614b04f
|
|
This needs to happen on the node running keystone, or things break
when you try to deploy e.g. the heat_engine service on a non-Controller
role. We check the enabled flag for heat engine so this only happens
if the heat_engine service is running on some (any) role.
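A minimal sketch of the check (hiera flag name assumed):
# Only manage the heat domain from the keystone node, and only when
# heat-engine is enabled on some (any) role.
if hiera('heat_engine_enabled', false) {
  include ::heat::keystone::domain
}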
Partial-Bug: #1631130
Change-Id: Ib088a572b384b479f51d56555734d78ab840a1f3
|
|
We can now get this parameter from t-h-t, so it's not needed here.
Change-Id: I014e7b3a6feb5609ace2e8ef1e4df11448b0a0cc
Depends-On: Ic229182cc5c887b57f6182c3db1bac8bed330f7c
|
|
Change-Id: I78049105adf52226d47cc6764b1ba6c2c06e91e5
Related-Bug: 1631926
|
|
remove_default_accounts is a mysql::server parameter that, when set to
true, executes some MySQL commands to clean up the default MySQL accounts
created by packaging.
In order to run the commands successfully, we need MySQL up and running,
which is not the case at step 1 but is at step 2.
This patch makes sure we run the commands at step 2, and on the Pacemaker
master only.
No change for scenarios without Pacemaker.
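A rough sketch of the intent (step and bootstrap-node plumbing assumed):
# Only clean up the default accounts once MySQL is up (step >= 2) and
# only on the pacemaker master.
$step             = hiera('step')
$pacemaker_master = (downcase(hiera('bootstrap_nodeid')) == $::hostname)
class { '::mysql::server':
  remove_default_accounts => ($step >= 2 and $pacemaker_master),
  # ... other parameters unchanged
}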
Change-Id: Ifad3cb40fd958d7ea606b9cd2ba4c8ec22a8e94e
Closes-Bug: #1633113
|
|
Currently the /var/lib/tripleo/pacemaker-restarts directory is created
only when the base/pacemaker.pp file is included in the manifest. There is
a notification that ensures ordering and triggers the touch.
The trigger and the dependency on base/pacemaker.pp should not be
required, as someone using tripleo::pacemaker::resource_restart_flag
would expect the file to be created no matter what.
For instance, the Cinder upgrade in the convergence step has this
defined:
Cinder_config<||> ~> Tripleo::Pacemaker::Resource_restart_flag["${::cinder::params::volume_service}"]
but in the convergence step base/pacemaker.pp is not included, so the
above trigger fails because the directory does not exist.
The same applies to manila.pp.
This patch removes the trigger and ensures the directory is created when
needed.
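A sketch of a self-sufficient define (not the exact implementation):
define tripleo::pacemaker::resource_restart_flag {
  # Create the parent directory on demand instead of relying on
  # base/pacemaker.pp having been included.
  exec { "create pacemaker-restarts dir for ${name}":
    command => 'mkdir -p /var/lib/tripleo/pacemaker-restarts',
    path    => ['/bin', '/usr/bin'],
    creates => '/var/lib/tripleo/pacemaker-restarts',
  }
  -> file { "/var/lib/tripleo/pacemaker-restarts/${name}":
    ensure => present,
  }
}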
Change-Id: Ic3aa82c818662e9e88e21c8381d657adef5b43ac
Closes-Bug: #1632232
|
|
This adds the necessary resources to the manifest to migrate nova
to run over httpd. The service name will be moved to t-h-t in a
subsequent commit, but since this patch depends on t-h-t, we try to
avoid circular dependencies of repos.
Change-Id: I91d430a3871672f90b0f885736f067ddae3c238c
Depends-On: I57fb20cf0d58b3376243ba4aeb04e995e7152ce3
|
|
The 'stop timeout' values of the rabbitmq and redis pacemaker resources
are set to 90s, while for all other services it is set to 200s.
The overcloud deployment sometimes fails due to this with the error:
Error: Could not complete shutdown of rabbitmq-clone, 1 resources
remaining
Error performing operation: Timer expired
This patch updates the timeout for Redis and RabbitMQ to avoid this
error.
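For illustration, the rabbitmq case looks roughly like this (resource
wiring assumed; redis is analogous):
pacemaker::resource::ocf { 'rabbitmq':
  ocf_agent_name => 'heartbeat:rabbitmq-cluster',
  op_params      => 'start timeout=200s stop timeout=200s',
}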
Change-Id: I8a3b3951a896ee3e8e5e09778e8ea4717e76a1b4
|
|
Tempest expects object versioning to be enabled by default in Swift;
if it is not, it has to be disabled explicitly in the Tempest config.
This is a commonly used middleware, therefore it should be enabled
on the overcloud proxy nodes as well.
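A minimal sketch of enabling it in the proxy (class name as found in
puppet-swift; treat as illustrative):
# Enable the versioned_writes middleware in the swift proxy pipeline.
class { '::swift::proxy::versioned_writes':
  allow_versioned_writes => true,
}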
Closes-Bug: 1632215
Change-Id: I07a206473ff7939749e3eba1dfe3ea8c4526eb5c
|
|
The hiera key generated by THT is eqlx_chap_password and not
eql_san_password.
https://github.com/openstack/tripleo-heat-templates/blob/master/puppet/extraconfig/pre_deploy/controller/cinder-eqlx.yaml#L63
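Illustrative only (backend title and remaining parameters assumed/omitted):
# Read the CHAP password under the key t-h-t actually generates.
cinder::backend::eqlx { 'tripleo_eqlx':
  eqlx_chap_password => hiera('eqlx_chap_password', undef),
  # ...
}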
Change-Id: Ic062d9060f0ce437336e2bd6aaca3887fc33c8cf
Closes-Bug: #1631527
|
|
The ceilometer::db::sync class is included by default in ceilometer::db, but we
only want it to run on the bootstrap node. This change passes the
sync_db parameter to ceilometer::db to manage the db sync process rather
than trying to manage the inclusion of ceilometer::db::sync within the
profile class.
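A minimal sketch of the new wiring (the $sync_db derivation is assumed):
# Run the db sync only on the bootstrap node.
$sync_db = ($::hostname == downcase(hiera('bootstrap_nodeid')))
class { '::ceilometer::db':
  sync_db => $sync_db,
}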
Change-Id: Ib56db1a90dd6fbfe7582fc57b7728df81942cce2
Closes-Bug: #1629373
|
|
This change makes the Swift API and Storage roles include the
::swift::config class, as we do for the other OpenStack services,
which is useful for pushing arbitrary config settings into Swift.
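For example (the hiera key shape is assumed, following the other *::config
classes):
# Arbitrary settings can then be injected from hiera, e.g.
#   swift::config::swift_config:
#     'DEFAULT/some_option':
#       value: some_value
include ::swift::config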
Change-Id: Iaf2c2f0f0103fe9264ce875099a1578b353a5558
|
|
Aligns how we check for enabled backends in pacemaker/manila.pp with
what we did in base/manila/api.pp with [1].
The benefit is that we don't need to emit custom hiera from the
templates.
1. I86ba8b9d5872c0f1a94e74215e97b796ad129bfb
Change-Id: I04e28a95e8d69a24cd3df109bf1802bfcbd941db
|
|
When deploying manila with cephfs, share creation fails because
'enabled_share_protocols' sticks to NFS,CIFS and does not get updated
with CEPHFS. This change aims at fixing it by building the list of
enabled protocols based on the list of enabled backends.
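A rough sketch of the idea (flag name assumed; the real change derives the
list from all of the enabled backends):
if hiera('manila_backend_cephfs_enabled', false) {
  $enabled_share_protocols = 'NFS,CIFS,CEPHFS'
} else {
  $enabled_share_protocols = 'NFS,CIFS'
}
class { '::manila::api':
  enabled_share_protocols => $enabled_share_protocols,
}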
Co-Authored-By: Tom Barron <tbarron@redhat.com>
Closes-Bug: 1630564
Change-Id: I86ba8b9d5872c0f1a94e74215e97b796ad129bfb
|
|
The service profile in HAProxy has the capability of creating
certificates based on a map. The idea is to standardize this, as
some of those certificates should match certain networks the services
are listening on (with the exception of the external network which is
handled differently and the tenant network which doesn't need a
certificate). So, based on which network a certain service is
listening on, we fetch the appropriate certificate.
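Conceptually (the map shape and lookup names here are assumed, purely
illustrative):
# Pick the certificate matching the network the service listens on.
$service_network  = hiera('glance_api_network', undef)
$certs_by_network = hiera('tripleo::haproxy::service_certificates', {})
if $service_network and $certs_by_network[$service_network] {
  $service_certificate = $certs_by_network[$service_network]
}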
bp tls-via-certmonger
Change-Id: I89001ae32f46c9682aecc118753ef6cd647baa62
|
|
We're not able to use FQDNs yet, so to work around this, we give
precedence to a "short name" list we'll get from t-h-t. We can
migrate to using FQDNs in the next cycle.
Change-Id: Ic6fec1057439ed9122d44ef294be890d3ff8a8ee
Related-Bug: #1628521
|
|
We're not able to use FQDNs yet, so to work around this, we give
precedence to a "short name" list we'll get from t-h-t.
Change-Id: I4ef7786474c229d5212a0deb2ca02ee992b030d8
Related-Bug: #1628521
|
|
It turns out that reducing the number of rabbitmq queue copies in the
cluster significantly improves performance, especially failover recovery
time. Right now the cluster uses the ha-all mode for rabbitmq queues.
It is best to change this to "ha-exactly" mode and reduce the number
of queue copies to ceil(N/2), where N is the number of controllers in the
cluster - so in the typical scenario of 3 controllers it would be 2 by
default.
It does not make much sense to keep copies of queues across the whole
cluster, since if the quorum of nodes is lost the remaining cluster
nodes will be stopped anyway. We let the user override this with a
parameter.
I.e. for a 3 node controlplane cluster we will go from this:
pcs resource show rabbitmq
Resource: rabbitmq (class=ocf provider=heartbeat type=rabbitmq-cluster)
Attributes: set_policy="ha-all ^(?!amq\.).* {"ha-mode":"all"}"
To this:
pcs resource show rabbitmq
Resource: rabbitmq (class=ocf provider=heartbeat type=rabbitmq-cluster)
Attributes: set_policy="ha-all ^(?!amq\.).* {"ha-mode":"exactly","ha-params":2}"
According to Marian Krcmarik's testing, recovery time from failure was
reduced significantly.
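As a rough illustration of the default computation (names assumed; a
user-facing parameter can override the result):
$rabbit_nodes = hiera('rabbitmq_node_names')
$queue_copies = ceiling(size($rabbit_nodes) / 2.0)
# 3 controllers => ceiling(1.5) => 2 queue copies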
Co-Authored-By: Marian Krcmarik <mkrcmari@redhat.com>
Change-Id: Ib62001c03e1e08f58cf0c6e0ba07a8879a584084
Partial-Bug: #1628998
|
|
Back in the Mitaka cycle, via the change If6b43982c958f63bc78ad997400bf1279c23df7e,
we made sure that the default start and stop timeouts for pacemaker
systemd resources are 200s (>= twice the default 90s DefaultTimeoutStopSec
in systemd). We made this change by setting puppet resource defaults for
the Pacemaker::Resource::Service class:
Pacemaker::Resource::Service {
op_params => 'start timeout=200s stop timeout=200s',
}
The problem is that after the composable services rework, this does not
work anymore and the pacemaker systemd resources that still exist do not
have these timeouts set.
We want to move away from resource defaults for this because their results
depend on inclusion order, which in TripleO is no longer guaranteed
(https://docs.puppet.com/puppet/latest/reference/lang_scope.html#scope-lookup-rules).
The only services affected in Newton are: cinder-volume,
cinder-backup, manila-share, haproxy. I preferred fixing all the
pacemaker resources because it seems the cleanest and most logical
commit.
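The replacement approach is to set the timeouts explicitly on each
remaining resource, roughly like this (exact call sites vary):
include ::cinder::params
pacemaker::resource::service { $::cinder::params::volume_service:
  op_params => 'start timeout=200s stop timeout=200s',
}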
Change-Id: If89a95706514e536a7a2949871a0002c79b6046e
Closes-Bug: #1629366
|
|
This change adds rspec testing for the ceilometer profiles. While
writing these tests, the tripleo::profile::base::ceilometer::collector
class needed to have the hiera lookups moved to class parameters to
allow for testing the possible options around the database backend.
These tests add coverage for ipv4 and ipv6 configurations of the
collector profile, as well as for excluding mongodb as the backend.
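The parameterization looks roughly like this (defaults assumed):
class tripleo::profile::base::ceilometer::collector (
  $step               = hiera('step'),
  $ceilometer_backend = hiera('ceilometer_backend', 'mongodb'),
  $mongodb_node_ips   = hiera('mongodb_node_ips', []),
) {
  # hiera lookups are now plain class parameters, so rspec can override them
}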
Change-Id: I1abae040104e8492a9fe266de74080e1e7701731
|
|
Normalize coordination_url for Telemetry services, so we can deploy them
with IPv6.
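For illustration (hiera key names assumed), normalization means wrapping
IPv6 addresses in brackets when building the URL:
$coordination_url = join([
  'redis://:', hiera('ceilometer_redis_password'), '@',
  normalize_ip_for_uri(hiera('redis_vip')), ':6379/'])
# e.g. fd00:fd00:fd00:2000::19 => redis://:<password>@[fd00:fd00:fd00:2000::19]:6379/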
Change-Id: Ic6de09acf0d36ca90cc2041c0add1bc2b4a369a5
Partial-Bug: #1629279
Depends-On: I038e2bac22e3bfa5047d2e76e23cff664546464d
|
|
This patch moves the various DB syncs into the MySQL role.
Database creation needs to occur on the MySQL server to
avoid permission issues.
This patch also moves database creation to step 2 so we can
guarantee that all per-service databases exist at this time.
This avoids complex ordering needed during step 3 where
services, on different hosts, can run their own db syncs
in a distributed fashion.
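The shape of the change is roughly the following (service list
abbreviated), with the per-service database classes now included on the
MySQL node:
if $step >= 2 {
  include ::keystone::db::mysql
  include ::glance::db::mysql
  include ::nova::db::mysql
  include ::neutron::db::mysql
  include ::cinder::db::mysql
}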
Change-Id: I05cc0afa9373429a3197c194c3e8f784ae96de5f
Partial-bug: #1620595
|
|