If we upgrade a cloud that was configured with an external load balancer,
the process will fail during the convergence step because it will try to
restart haproxy, which is not configured when an external load balancer
is in use.
Closes-Bug: #1636527
Change-Id: I6f6caec3e5c96e77437c1c83e625f39649a66c48
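A minimal sketch of the kind of guard this implies, assuming a
hiera-driven enable flag (the flag name and the resource shown are
illustrative, not the exact ones from the change):

  # Only touch haproxy when TripleO itself manages the load balancer.
  $internal_lb = hiera('enable_load_balancer', true)

  if $internal_lb {
    exec { 'haproxy-reload':
      command => 'systemctl reload haproxy',
      path    => ['/usr/bin', '/bin'],
    }
  }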
proxy for the UI"
The upload and extraction of the plan tarball to swift can take
longer than the default one minute in slower environments. Doubling
the timeout to two minutes has proven to help.
This is only a partial fix, because the error reporting for this
issue also needs to be improved.
Change-Id: I06592d38fdfefacc8bdf76289a0bfa20eb33a89b
Partial-Bug: 1635269
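As an illustration of the change's shape only (the exec below is a
hypothetical stand-in for the actual upload step, not the resource the
patch touches):

  exec { 'upload-plan-tarball':
    command => 'upload-swift-plan overcloud.tar.gz',
    path    => ['/usr/bin', '/bin'],
    timeout => 120, # doubled from the 60s default for slow environments
  }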
To be able to monitor during deployment, we need the sensu clients
and fluentd collectors to be deployed as soon as possible.
Change-Id: I952f0d6de6f6327d5c923b8f1d7a5979758dbc59
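A rough sketch of the intent, assuming the puppet-tripleo monitoring
and logging profiles (the step gating shown is illustrative):

  if hiera('step') >= 1 {
    include ::tripleo::profile::base::monitoring::sensu
    include ::tripleo::profile::base::logging::fluentd
  }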
In change I35921652bd84d1d6be0727051294983d4a0dde10 we want to remove
all those duplicate tcp_listen_options entries. One consequence of that
is that we need to set rabbitmq::tcp_keepalive to true via hiera
(as opposed to forcing it via the tcp_listen_options hash).
For this to work we need to remove this forced parameter override.
Note that even if I35921652bd84d1d6be0727051294983d4a0dde10 and this
change don't merge at exactly the same time, it is still okay, because
we still force tcp_keepalive to true via tcp_listen_options.
Change-Id: I608477d5714a5081b3b4ab3b9fc2932bdd598301
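With the override gone, the value flows in through Puppet's automatic
data binding; a minimal sketch:

  # hieradata (YAML): rabbitmq::tcp_keepalive: true
  include ::rabbitmq   # tcp_keepalive is then picked up from hiera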
The docker tooling prefers to interact with encrypted endpoints.
Terminating the docker-registry endpoint with HAProxy allows the
SSL VIP to be used for this purpose.
Change-Id: Ifebfa7256e0887d6f26a478ff8dc82b0ef5f65f6
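A hedged sketch of such a TLS-terminating frontend using the
puppetlabs-haproxy listen type (address, port, and cert path are
placeholders):

  haproxy::listen { 'docker-registry':
    collect_exported => false,
    bind             => {
      '192.0.2.10:8787' => ['ssl', 'crt', '/etc/pki/tls/private/overcloud.pem'],
    },
    options          => {
      'balance' => 'roundrobin',
    },
  }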
This optionally enables TLS for keystone in the internal network.
If internal TLS is enabled, each node that is serving the keystone
service will use certmonger to request its certificate.
This, in turn, should also configure a command to be run when the
certificate is refreshed (which requires the service to be
restarted).
bp tls-via-certmonger
Change-Id: I303f6cf47859284785c0cdc65284a7eb89a4e039
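A minimal sketch of that flow, assuming the certmonger_certificate
resource type (file paths, variable name, and the post-save command
are illustrative):

  certmonger_certificate { 'keystone-internal':
    ensure       => present,
    certfile     => '/etc/pki/tls/certs/keystone.crt',
    keyfile      => '/etc/pki/tls/private/keystone.key',
    hostname     => $keystone_internal_fqdn,
    postsave_cmd => 'systemctl reload httpd',
  }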
Without this, neutron-server fails to start and communication to
ODL REST will not work.
Partial-Bug: 1633630
Change-Id: Ifd906db4e6062ac271c2147fe1149b1009d06ae2
Signed-off-by: Tim Rozet <trozet@redhat.com>
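For context, a hedged sketch of pointing the ML2 OpenDaylight driver
at the ODL REST endpoint via puppet-neutron (values are placeholders,
and this may not be the exact setting the change touches):

  class { '::neutron::plugins::ml2::opendaylight':
    odl_url      => 'http://192.0.2.5:8081/controller/nb/v2/neutron',
    odl_username => 'admin',
    odl_password => 'admin',
  }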
This patch updates the Nova profile so that we set the memcached
servers correctly for the Nova keystone auth_token middleware.
Most of the hiera settings for ::nova::keystone::authtoken are
already included in the t-h-t nova-api service.
Change-Id: I3b7ff02abbd0d5e0c38232d02b33e4c7bc411120
Closes-bug: #1633595
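The shape of the setting, sketched with stdlib's suffix() (the hiera
key and port shown are the conventional ones, used here for
illustration):

  class { '::nova::keystone::authtoken':
    memcached_servers => suffix(hiera('memcached_node_ips'), ':11211'),
  }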
When deploying via ipv6, rabbitmq-ctl commands have the following
issues:
- `rabbitmqctl cluster_status` shows nodedown alerts
- list_queues / list_connections hang
- `rabbitmqctl node_health_check` fails with an error.
There is no issue when performing activity on the RHOS setup (from
horizon/CLI), i.e. the RHOS environment functions as expected.
For example:
sudo rabbitmqctl node_health_check -n rabbit@node1
Checking health of node 'rabbit@node1' ...
Health check failed:
health check of node 'rabbit@node1' fails: nodedown
The problem is that we are missing the following in
/etc/rabbitmq/rabbitmq-env.conf:
RABBITMQ_CTL_ERL_ARGS="-proto_dist inet6_tcp"
Fix these by setting the appropriate RABBITMQ_CTL_ERL_ARGS when
deploying ipv6.
Closes-Bug: #1633693
Change-Id: I53f4e76e687b3966fbb74fd0c2d83f05176630de
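A minimal sketch of delivering that setting through the rabbitmq
module's environment_variables hash (the wiring in the actual change
may differ):

  class { '::rabbitmq':
    environment_variables => {
      'RABBITMQ_CTL_ERL_ARGS' => '"-proto_dist inet6_tcp"',
    },
  }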
proxy for the UI
Change-Id: I74eac4bbfc16720eeb6e2bf0ee251689dde3bafc
Implements: enable-communication-ui-undercloud
We use the rabbit_hosts configuration for most of our services, but we
haven't been adding the configured port. This patch appends the port
provided to the service's heat template to each IP in the list.
Note: while we could use the value set for the rabbitmq server in
rabbitmq::port, it doesn't allow for dealing with SSL. This is also
backwards compatible with the RabbitClientPort parameters used in the
heat templates.
Change-Id: I0000f039144a6b0e98c0a148dc69324f60db3d8b
Closes-Bug: #1633580
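The transformation itself is a one-liner with stdlib, sketched here
with illustrative variable names:

  # e.g. ['10.0.0.1', '10.0.0.2'] with port 5672
  #   -> ['10.0.0.1:5672', '10.0.0.2:5672']
  $rabbit_hosts = suffix(any2array($rabbit_nodes), ":${rabbit_port}")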
Since moving to composable services/roles, some logic here relied on a
variable to enable ODL, rather than letting the enabled service itself
decide where ODL runs. Now that the ODL and ODL OVS configuration are
split into two different services, we can make these truly composable.
Partial-Bug: 1633625
Change-Id: Ia55c05e12d5d434111a13e1ed795da530e3ff4a5
Signed-off-by: Tim Rozet <trozet@redhat.com>
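A hedged sketch of the composable pattern: each profile keys off its
own service-enabled flag rather than a shared variable (the hiera key
shown is illustrative):

  if hiera('opendaylight_api_enabled', false) {
    include ::tripleo::profile::base::neutron::opendaylight
  }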
Change-Id: Ie215289a7be681a2b1aa5495d3f965c005d62f52
Depends-On: Ia863b38bbac1aceabe6b7deb6939c9db693ff16d
The patch making nova run over httpd had added migration logic to
stop nova-api. However, this doesn't work, since nova-metadata runs
in the same process. The fact that it was running at all seems to be
just luck: the systemctl stop runs, then we start the service via the
nova::api resource. So this is fragile in its current state.
This change therefore removes the exec, as we don't need it for the
migration.
Change-Id: I4603b81d30a704b07eef461b3cdbfe164614b04f
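For reference, the removed migration step had roughly this shape
(reconstructed for illustration, not verbatim from the patch):

  exec { 'stop-nova-api':
    command => 'systemctl stop openstack-nova-api',
    path    => ['/usr/bin', '/bin'],
    onlyif  => 'systemctl is-active openstack-nova-api',
  }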
By enabling the statistics socket we allow the collection
of statistics over time for haproxy.
This socket is set to "user" level, so it is limited to read-only
access. The "stats timeout" line is optional, but since the default
timeout of the stats socket is 10s, we set it higher.
Change-Id: I22d3ab771e981be0d2c74b60443d276973bc1639
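In haproxy's global section this amounts to two lines; a sketch via
the puppetlabs-haproxy class (socket path and timeout value are
illustrative):

  class { '::haproxy':
    global_options => {
      'stats' => [
        'socket /var/lib/haproxy/stats mode 600 level user',
        'timeout 2m',
      ],
    },
  }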
Instead of using an operator to make sure we upgrade packages before
any service, which causes dependency cycles with the iptables puppet
module, let's take another approach and upgrade RPMs in the 'setup'
stage, a stage that runs before services are configured and started.
That way, we remove the dependency cycles and make sure packages are
upgraded before TripleO services are configured and started.
Change-Id: I1be83f88be1959885c980ab4f428477d412751f7
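A minimal sketch of the run-stage idea (Puppet only predefines the
'main' stage, so 'setup' is declared explicitly here; the class and
parameter names are illustrative):

  stage { 'setup':
    before => Stage['main'],
  }

  class { '::tripleo::packages':
    enable_upgrade => true,
    stage          => 'setup',
  }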
This needs to happen on the node running keystone, or things break
when you try to deploy e.g. the heat_engine service on a non-Controller
role. We check the enabled flag for heat_engine so this only happens
if the heat_engine service is running on some (any) role.
Partial-Bug: #1631130
Change-Id: Ib088a572b384b479f51d56555734d78ab840a1f3
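Sketch of the gating, with illustrative hiera keys and class (the real
flag plumbing lives in t-h-t):

  if $keystone_enabled and hiera('heat_engine_enabled', false) {
    include ::heat::keystone::domain
  }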
We can now get this parameter from t-h-t, so it's not needed here.
Change-Id: I014e7b3a6feb5609ace2e8ef1e4df11448b0a0cc
Depends-On: Ic229182cc5c887b57f6182c3db1bac8bed330f7c
Change-Id: I78049105adf52226d47cc6764b1ba6c2c06e91e5
Related-Bug: 1631926
remove_default_accounts is a mysql::server parameter that, when set to
true, executes some MySQL commands to clean up the default accounts
created by packaging.
In order to run the commands successfully, we need MySQL up and
running, which is not the case at step 1 but is at step 2.
This patch makes sure we run the commands at step 2, and on the
pacemaker master only.
No change for scenarios without pacemaker.
Change-Id: Ifad3cb40fd958d7ea606b9cd2ba4c8ec22a8e94e
Closes-Bug: #1633113
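A minimal sketch of the gating, with illustrative variable names:

  $clean_defaults = ($step >= 2 and $pacemaker_master)

  class { '::mysql::server':
    remove_default_accounts => $clean_defaults,
  }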
Currently the /var/lib/tripleo/pacemaker-restarts directory is created
only when the base/pacemaker.pp file is included in the manifest. There
is a notification that ensures precedence ordering and triggers the
touch.
The trigger and the dependency on base/pacemaker.pp should not be
required, as someone using tripleo::pacemaker::resource_restart_flag
would expect the file to be created no matter what.
For instance, the Cinder upgrade in the convergence step has this
defined:
Cinder_config<||> ~> Tripleo::Pacemaker::Resource_restart_flag["${::cinder::params::volume_service}"]
but in the convergence step base/pacemaker.pp is not included, so the
above trigger fails because the directory is not created.
The same applies to manila.pp.
This patch removes the trigger and ensures the directory is created
when needed.
Change-Id: Ic3aa82c818662e9e88e21c8381d657adef5b43ac
Closes-Bug: #1632232
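A hedged reconstruction of the resulting shape (not the verbatim
change): the defined type ensures its parent directory on its own, so
the refresh-only touch works wherever it is used:

  define tripleo::pacemaker::resource_restart_flag {
    # ensure_resource avoids duplicate declarations when several
    # flags are created in one catalog
    ensure_resource('file', '/var/lib/tripleo/pacemaker-restarts',
      { 'ensure' => 'directory' })

    exec { "touch pacemaker restart flag ${title}":
      command     => "touch /var/lib/tripleo/pacemaker-restarts/${title}",
      path        => ['/usr/bin', '/bin'],
      refreshonly => true,
      require     => File['/var/lib/tripleo/pacemaker-restarts'],
    }
  }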
This adds the necessary resources to the manifest to migrate nova
to run over httpd. The service name will be moved to t-h-t in a
subsequent commit; since this patch depends on t-h-t, we do it in two
steps to avoid circular dependencies between the repos.
Change-Id: I91d430a3871672f90b0f885736f067ddae3c238c
Depends-On: I57fb20cf0d58b3376243ba4aeb04e995e7152ce3
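The core of such a migration, sketched with the puppet-nova Apache
WSGI class (parameters trimmed to the minimum):

  class { '::nova::wsgi::apache':
    ssl => false,
  }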
When we observe the 'stop timeout' values of the pacemaker resources
rabbitmq and redis, they are set to 90s, but for all other services
the value is 200s.
The overcloud deployment sometimes fails due to this with the error:
Error: Could not complete shutdown of rabbitmq-clone, 1 resources remaining
Error performing operation: Timer expired
This patch updates the timeout for Redis and RabbitMQ to avoid this
error.
Change-Id: I8a3b3951a896ee3e8e5e09778e8ea4717e76a1b4
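The kind of setting involved, sketched with the puppet-pacemaker
resource type (agent name and parameter strings are illustrative):

  pacemaker::resource::ocf { 'rabbitmq':
    ocf_agent_name => 'heartbeat:rabbitmq-cluster',
    op_params      => 'start timeout=200s stop timeout=200s',
    clone_params   => 'ordered=true interleave=true',
  }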