The bootstrap_nodeid comparison should be case insensitive.
Change-Id: I1e6672bb0219c1cf56ab21dd911c6f33e2436cc3
Closes-Bug: #1698190
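A minimal sketch of the kind of comparison this implies, using stdlib's downcase(); the variable names are illustrative, not taken from the change itself:

    # Sketch only: compare the bootstrap node id case-insensitively
    # (variable names illustrative; downcase() comes from puppetlabs-stdlib).
    if downcase($bootstrap_nodeid) == downcase($::hostname) {
      $pacemaker_master = true
    } else {
      $pacemaker_master = false
    }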
|
The gnocchi upgrade requires storage sacks to be initialized. This means
we need to ensure the storage backends are up before running the
upgrade and starting the API. Let's move the API to step 4 so we can
ensure the other dependencies are in place.
Co-Authored-By: Juan Antonio Osorio Robles <jaosorior@redhat.com>
Depends-On: Ibfa9fb39f60c1e4a802d189b32ff4c34476c93d3
Change-Id: If2ae48b21389e76fd638c0b48c148a5d4f227630
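A rough sketch of the step gating this describes, assuming the profile exposes a $step parameter as the other tripleo profiles do (the class name is from puppet-gnocchi):

    # Sketch only: include the API class once the storage backends set up in
    # earlier steps are in place ($step handling as in other tripleo profiles).
    if $step >= 4 {
      include ::gnocchi::api
    }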
|
It makes little sense to enforce the stonith property on remote nodes and/or
all cluster nodes: while that works in general, it creates extra CIB changes
for nothing and slows down the deployment. We can just enforce it once on the
pacemaker_master node, as it is a cluster-wide property anyway. We can also
remove the tripleo::fencing -> pacemaker::stonith constraint in the pacemaker
remote profile, since the fencing resources are set up at step 5 anyway and
the property is set at step 1.
Change-Id: Ifef08033043a4cc90a6261e962d2fdecdf275650
Closes-Bug: #1696336
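A hedged sketch of setting the cluster-wide property only from the bootstrap node; pacemaker::property is from puppet-pacemaker, and the surrounding variables and exact parameters used by this change are assumptions:

    # Sketch only: set the cluster-wide property once, on the master node
    # (parameter names assumed from puppet-pacemaker's pacemaker::property).
    if $step >= 1 and $pacemaker_master {
      pacemaker::property { 'stonith-enabled-property':
        property => 'stonith-enabled',
        value    => $enable_fencing,
      }
    }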
|
This puts the cluster_host_map to use, which makes it possible to give
aliases to the pacemaker nodes (which are FQDNs) and thus to configure the
cluster using FQDNs.
We need FQDNs in order to request certificates, since the default CA
(FreeIPA) only allows certificates for FQDNs.
Change-Id: I2f146afdd32aef2d11cf25a65fa8d67428f621f5
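As an illustration only, a host map of this kind might look like the following; the key/value orientation and the exact semantics are assumptions, not taken from the change itself:

    # Illustration only: map pacemaker node aliases to the FQDNs used to set
    # up the cluster (orientation of the map is an assumption).
    $cluster_host_map = {
      'overcloud-controller-0' => 'overcloud-controller-0.internalapi.example.com',
      'overcloud-controller-1' => 'overcloud-controller-1.internalapi.example.com',
      'overcloud-controller-2' => 'overcloud-controller-2.internalapi.example.com',
    }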
|
The horizon proxy should redirect all HTTP requests to HTTPS,
regardless of the 'Host' field in the header. The current rule causes
haproxy to redirect HTTP requests only if the 'Host' field contains
the public virtual IP address; it does not redirect if the 'Host'
field contains a hostname, FQDN, etc.
Change-Id: I6c8f58a30f97cdf4c668734793197ea976297733
Signed-off-by: Ryan O'Hara <rohara@redhat.com>
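A hedged sketch of an unconditional redirect of this kind, expressed through puppetlabs-haproxy; the listener name, bind addresses and certificate path are illustrative:

    # Sketch only: redirect every plain-HTTP request to HTTPS, without
    # matching on the Host header (names and addresses illustrative).
    haproxy::listen { 'horizon':
      bind    => {
        '192.0.2.10:80'  => [],
        '192.0.2.10:443' => ['ssl crt /etc/pki/tls/certs/overcloud.pem'],
      },
      options => {
        # Redirect any request that did not arrive over TLS, whatever its Host header.
        'redirect' => 'scheme https code 301 if !{ ssl_fc }',
      },
    }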
|
The commit with change id [1] added the pacemaker HA support
for OVN DB servers. That commit created a new VIP which is
really not required.
This patch removes the code that creates the new ip resource. Instead
it expects the pacemaker ip resource (with the ip address from the
'ovn_dbs_vip' parameter and with the name "ip-$ovn_dbs_vip") to be
created before the ovn_northd class is called, which is the case anyway
if 'ovn_dbs_vip' is taken from the ServiceNetMapDefaults (in t-h-t).
[1] - I9dc366002ef5919339961e5deebbf8aa815c73db
Change-Id: I94d3960e6c5406e3af309cc8c787ac0a6c9b1756
Partial-bug: #1670564
|
In composable HA we bind resources to nodes that have special
node properties. We need to do this for bundle resources as well,
otherwise there is a potential race where, for a small window of
time, the bundle might be started on nodes where it is not supposed
to run.
Tested with the depends-on and correctly obtained a containerized
composable HA deployment:
Docker container set: rabbitmq-bundle
[192.168.24.1:8787/tripleoupstream/centos-binary-rabbitmq:latest]
rabbitmq-bundle-0 (ocf::heartbeat:rabbitmq-cluster): Started overcloud-rabbit-0
rabbitmq-bundle-1 (ocf::heartbeat:rabbitmq-cluster): Started overcloud-rabbit-1
rabbitmq-bundle-2 (ocf::heartbeat:rabbitmq-cluster): Started overcloud-rabbit-2
Docker container set: galera-bundle
[192.168.24.1:8787/tripleoupstream/centos-binary-mariadb:latest]
galera-bundle-0 (ocf::heartbeat:galera): Master overcloud-galera-0
galera-bundle-1 (ocf::heartbeat:galera): Master overcloud-galera-1
galera-bundle-2 (ocf::heartbeat:galera): Master overcloud-galera-2
Docker container set: redis-bundle
[192.168.24.1:8787/tripleoupstream/centos-binary-redis:latest]
redis-bundle-0 (ocf::heartbeat:redis): Master overcloud-controller-0
redis-bundle-1 (ocf::heartbeat:redis): Slave overcloud-controller-1
redis-bundle-2 (ocf::heartbeat:redis): Slave overcloud-controller-2
ip-192.168.24.11 (ocf::heartbeat:IPaddr2): Started overcloud-controller-0
ip-10.0.0.7 (ocf::heartbeat:IPaddr2): Started overcloud-controller-1
ip-172.16.2.11 (ocf::heartbeat:IPaddr2): Started overcloud-controller-2
ip-172.16.2.9 (ocf::heartbeat:IPaddr2): Started overcloud-controller-0
ip-172.16.1.6 (ocf::heartbeat:IPaddr2): Started overcloud-controller-1
ip-172.16.3.7 (ocf::heartbeat:IPaddr2): Started overcloud-controller-2
Docker container set: haproxy-bundle
[192.168.24.1:8787/tripleoupstream/centos-binary-haproxy:latest]
haproxy-bundle-docker-0 (ocf::heartbeat:docker): Started overcloud-controller-0
haproxy-bundle-docker-1 (ocf::heartbeat:docker): Started overcloud-controller-1
haproxy-bundle-docker-2 (ocf::heartbeat:docker): Started overcloud-controller-2
Depends-On: I44449861cbfe56304b8829c9ca10fd648353b3ae
Change-Id: I48fb490040497ba08cae19937159c0efdf99e3f8
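As a hedged sketch of such a location rule (resource and property names are illustrative, and the real change presumably goes through the puppet-pacemaker wrappers rather than a raw exec):

    # Sketch only: constrain the bundle to nodes that carry the matching role
    # node property, mirroring the rules used for non-bundle resources.
    exec { 'galera-bundle-location-rule':
      command => 'pcs constraint location galera-bundle rule resource-discovery=exclusive score=0 galera-role eq true',
      path    => ['/usr/bin', '/usr/sbin'],
      unless  => 'pcs constraint location show --full | grep -q galera-bundle',
    }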
|
Change-Id: I097c494d3953b7d26d94aecc546ddef5225d1125
Depends-On: I2f0eb779b711e57f1532b1227896542d0ecffc89
|
The current order is broken if there were changes to the account and
container devices, but not to the object devices. In these cases the
rebalance can happen before the devices are modified.
Change-Id: I15641c32266939c9a00936cc471cc59b1bb54eec
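A sketch of the intended dependency using Puppet chaining over resource collectors; the ring device types and the rebalance define are from puppet-swift, and the exact resources touched by the change are assumed:

    # Sketch only: every device change must be applied before its ring is
    # rebalanced (collector chaining; titles follow puppet-swift conventions).
    Ring_account_device <| |>   -> Swift::Ringbuilder::Rebalance['account']
    Ring_container_device <| |> -> Swift::Ringbuilder::Rebalance['container']
    Ring_object_device <| |>    -> Swift::Ringbuilder::Rebalance['object']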
|
This sets up the CRL file handling to be triggered by the certmonger_user
resource. Furthermore, HAProxy uses this CRL file in the member options,
effectively enabling revocation checks for the proxied nodes.
So, if a node's certificate has been revoked by the CA, HAProxy will not
proxy requests to it.
bp tls-via-certmonger
Change-Id: I4f1edc551488aa5bf6033442c4fa1fb0d3f735cd
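A sketch of what adding a CRL to the member-side TLS options can look like; the variable names and paths are illustrative ('crl-file' and 'ca-file' are standard haproxy server options):

    # Sketch only: verify backend certificates against the CA and the CRL
    # (paths and variable names illustrative).
    $ca_file        = '/etc/ipa/ca.crt'
    $crl_file       = '/etc/pki/CA/crl/overcloud-crl.pem'
    $member_options = [
      'check',
      'ssl',
      'verify required',
      "ca-file ${ca_file}",
      "crl-file ${crl_file}",
    ]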
|
This will fetch the CRL file from the specified file or URL. Furthermore,
it will set up a cron job to refresh the CRL file once a week and notify
the services that need it.
bp tls-via-certmonger
Change-Id: I38e163e8ebb80ea5f79cfb8df44a71fdcd284e04
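A sketch of a weekly refresh job of this kind; the source URL, destination path, reload command and schedule are all illustrative:

    # Sketch only: fetch the CRL once a week and reload the consumers
    # (URL, destination and reload command are illustrative).
    cron { 'refresh-crl-file':
      command => '/usr/bin/curl -sL http://ca.example.com/ipa/crl/MasterCRL.bin -o /etc/pki/CA/crl/overcloud-crl.pem && /bin/systemctl reload haproxy',
      user    => 'root',
      weekday => 0,
      hour    => 0,
      minute  => 0,
    }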
|
This module is used by tripleo-heat-templates to configure and deploy
Kolla-based cinder-volume containers managed by pacemaker.
We use short-lived containers that call pcs via puppet to create
the needed pacemaker resources, properties and constraints.
Co-Authored-By: Michele Baldesari <michele@acksyn.org>
Partial-Bug: #1668920
Change-Id: I95ad4dd89b47396bea672813d87de35e64c04b2d
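The same pattern is used by the other bundle profiles in this log (cinder-backup, mysql). As a hedged sketch of the bundle resource these modules end up creating: pacemaker::resource::bundle is from puppet-pacemaker, and the image, replica count, options and storage map shown here are illustrative assumptions.

    # Sketch only: a pacemaker-managed Kolla container for cinder-volume
    # (values illustrative; parameter names assumed from puppet-pacemaker).
    pacemaker::resource::bundle { 'openstack-cinder-volume':
      image             => '192.0.2.1:8787/tripleoupstream/centos-binary-cinder-volume:latest',
      replicas          => 1,
      container_options => 'network=host',
      options           => '--user=root --log-driver=journald',
      run_command       => '/bin/bash /usr/local/bin/kolla_start',
      storage_maps      => {
        'cinder-volume-cfg-files' => {
          'source-dir' => '/var/lib/kolla/config_files/cinder_volume.json',
          'target-dir' => '/var/lib/kolla/config_files/config.json',
          'options'    => 'ro',
        },
      },
    }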
|
This module is used by tripleo-heat-templates to configure and deploy
Kolla-based cinder-backup containers managed by pacemaker.
We use short-lived containers that call pcs via puppet to create
the needed pacemaker resources, properties and constraints.
Co-Authored-By: Michele Baldesari <michele@acksyn.org>
Partial-Bug: #1668920
Change-Id: If53495ff75d4832cc6be80dc0dc9bd540ab6583b
|
docker host"
|
If tripleo::profile::base::neutron::sriov is included, the SR-IOV agent
is expected to be deployed and configured, so references to core plugin
configuration are out of place there and currently break the deployment.
Change-Id: Ie5d8cd7863c0d042cc6a4e1fc52602d8a03a1935
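A sketch of configuring just the agent, with no core-plugin settings; the class and parameter come from puppet-neutron, and the mapping value is illustrative:

    # Sketch only: configure the SR-IOV agent itself, nothing more
    # (mapping value illustrative).
    class { '::neutron::agents::ml2::sriov':
      physical_device_mappings => ['datacentre:ens1f0'],
    }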
|
The ring up- and downloading was never executed when running within a
containerized environment. This is due to the fact that this manifest
only gets executed within step 6(5). There is also an ordering issue,
which results in the tarballs being created before rebalancing.
This patch fixes the step conditions and also chains the tarball
creation to the rebalance.
The check that queries rings on all nodes can now be disabled. This is
required in containerized environments: the local ring will be modified
and rebalanced, but the rings on the existing servers are not yet
modified. Therefore a recon check would fail, and needs to be disabled.
Closes-Bug: 1694211
Change-Id: I51c5795b9893d797bd73e059910f17a98f04cdbe
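A sketch of the chaining described above; the step number, exec title and tarball path are illustrative, and the rebalance define is from puppet-swift:

    # Sketch only: build the ring tarball only after all local ring changes
    # have been rebalanced (names and paths illustrative).
    if $step >= 5 {
      exec { 'create_ring_tarball':
        command => 'tar -C /etc/swift -czf /tmp/swift-rings.tar.gz account.ring.gz container.ring.gz object.ring.gz',
        path    => ['/bin', '/usr/bin'],
      }
      Swift::Ringbuilder::Rebalance <| |> -> Exec['create_ring_tarball']
    }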
|
host
The polkit rules are currently evaluated in the context of the docker host.
As a result the check fails for the kolla nova compute user, since its uid is
not consistent with the host uids (in fact we probably can't assume a nova
user exists on the docker host).
As a short-term workaround, a 'docker_nova' user/group is created on the
docker host and the polkit rule is updated to grant this user access to the
libvirtd socket.
A longer-term solution probably requires running polkitd in a container too.
Change-Id: I91be1f1eacf8eed9017bbfef393ee2d66771e8d6
Related-bug: #1693844
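A sketch of the workaround as described; the uid/shell, rule file name and rule wording are illustrative (the rule body is standard polkit JavaScript embedded as file content):

    # Sketch only: create the docker_nova user/group on the host and let it
    # manage libvirt through polkit (names and paths illustrative).
    group { 'docker_nova':
      ensure => present,
    }
    user { 'docker_nova':
      ensure => present,
      gid    => 'docker_nova',
      shell  => '/sbin/nologin',
    }
    file { '/etc/polkit-1/rules.d/50-nova.rules':
      content => '
    polkit.addRule(function(action, subject) {
      if (action.id == "org.libvirt.unix.manage" &&
          subject.user == "docker_nova") {
        return polkit.Result.YES;
      }
    });
    ',
    }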
|
Future work in the UI requires Apache to proxy for the
ironic-inspector service, just as it does for other related
services. This adds support for ironic-inspector through
Apache's mod_proxy.
Closes-Bug: 1695202
Depends-On: Id395604f1dfbc4bf4f26adbe05f484a10227fd76
Change-Id: I9dcb0769ff90a2fc9561cb86bb822be8087ffe8e
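A hedged sketch using puppetlabs-apache's proxy support; the vhost name, listen address and backend address are illustrative (ironic-inspector listens on port 5050 by default):

    # Sketch only: front ironic-inspector with Apache via mod_proxy
    # (addresses illustrative).
    include ::apache::mod::proxy
    include ::apache::mod::proxy_http

    apache::vhost { 'ironic-inspector':
      servername => $::fqdn,
      ip         => '192.0.2.10',
      port       => 5050,
      docroot    => '/var/www/html',
      proxy_pass => [
        { 'path' => '/', 'url' => 'http://127.0.0.1:5050/' },
      ],
    }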
|
This is needed in order to deploy novajoin in a containerized undercloud
environment.
Change-Id: Iea461f66b8f4e3b01a0498e566a2c3684144df80
|
This module is used by tripleo-heat-templates to configure and deploy
Kolla-based mysql containers managed by pacemaker.
We use short-lived containers that call pcs via puppet to create
the needed pacemaker resources, properties and constraints.
Co-Authored-By: Michele Baldesari <michele@acksyn.org>
Partial-Bug: #1692842
Depends-On: I44fbd7f89ab22b72e8d3fc0a0e3fe54a9418a60f
Depends-On: Ie9b7e7d2a3cec4b121915a17c1e809e4ec950e7f
Change-Id: I3b4d8ad2eec70080419882d5d822f78ebd3721ae
|
If SELinux is enabled, the authlogin_nsswitch_use_ldap Boolean must
be enabled. This setting allows LDAP communication to the confined
LDAP server port. This change adds a conditional so the Boolean is
only enabled when SELinux is in use.
Change-Id: If985f2434d28fcd33198929bf61f2a3a82e601fe
Closes-Bug: #1695002
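A sketch of flipping the Boolean only when SELinux is active; selboolean is a core Puppet type, and the fact lookup shown may differ between facter versions:

    # Sketch only: enable the Boolean persistently when SELinux is in use.
    if $facts['os']['selinux']['enabled'] {
      selboolean { 'authlogin_nsswitch_use_ldap':
        persistent => true,
        value      => 'on',
      }
    }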
|
Since galera is configured to use rsync, we ought to make sure the
package is installed. Particularly when using deployed-server, the
package is not always installed by default depending on what was used to
install the servers.
Change-Id: I92ee78f2dd2c0f7fd4d393b104166407d7c654e2
Closes-Bug: #1693003
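A minimal sketch of ensuring the package is present; ensure_packages comes from puppetlabs-stdlib and avoids duplicate package declarations:

    # Sketch only: make sure rsync is installed for the galera SST method.
    ensure_packages(['rsync'], { 'ensure' => 'present' })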
|
This patch enables OVN DB servers to be started in master/slave
mode in the pacemaker cluster.
A virtual IP resource is created first, and then the pacemaker OVN OCF
resource "ovn:ovndb-servers" is created and configured to be colocated
with the VIP resource. The ovn-controller and the Neutron OVN ML2
mechanism driver, which depend on the OVN DB servers, will always
connect to the VIP address on which the master OVN DB servers listen.
The OVN OCF resource itself takes care of (re)starting the ovn-northd
service on the master node, so we don't have to manage it.
When HA is enabled for the OVN DB servers, haproxy does not include them
in its configuration.
This patch requires OVS 2.7 in the overcloud.
Co-Authored-By: Numan Siddique <nusiddiq@redhat.com>
Change-Id: I9dc366002ef5919339961e5deebbf8aa815c73db
Partial-bug: #1670564
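A hedged sketch of the master/slave resource and its colocation with the VIP; the defined types are from puppet-pacemaker, and the VIP value, resource names and OCF parameter string are illustrative assumptions:

    # Sketch only: create the OVN DB OCF resource in master/slave mode and
    # colocate its master with the VIP (values illustrative).
    $ovn_dbs_vip = '172.16.2.50'

    pacemaker::resource::ocf { 'ovndb_servers':
      ocf_agent_name  => 'ovn:ovndb-servers',
      master_params   => '',
      resource_params => "master_ip=${ovn_dbs_vip} manage_northd=yes",
    }

    pacemaker::constraint::colocation { "ovndb_servers-master-with-ip-${ovn_dbs_vip}":
      source => "ip-${ovn_dbs_vip}",
      target => 'ovndb_servers-master',
      score  => 'INFINITY',
    }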
|