Two PRs have been merged upstream that let us improve our current
implementation:
* service_manage[1]
* connection string as namevar[2]
[1] https://github.com/puppetlabs/puppetlabs-mongodb/pull/198
[2] https://github.com/puppetlabs/puppetlabs-mongodb/pull/200
Change-Id: Ia2247348a9e0292b5fcbc65ea1e41e6bc7c477fa
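As a rough illustration of the first improvement, a minimal sketch using
the service_manage parameter from puppetlabs-mongodb (the surrounding
class declaration is illustrative, not the exact change):

    class { '::mongodb::server':
      service_manage => false, # pacemaker, not the module, manages mongod
    }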
|
|
The list of drivers loaded by the ML2 plugin does not have to
match the list of tenant_network_types. This change makes ML2 load
the flat, gre, vxlan and vlan drivers so that provider networks
can be of flat (default) and vlan type as well.
Change-Id: I0b74f86acf5c1ff644deb46c0a1d14129c1882d4
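A sketch of the resulting settings via the puppet-neutron ML2 class
(the tenant_network_types value shown is an assumption for illustration):

    class { '::neutron::plugins::ml2':
      type_drivers         => ['flat', 'gre', 'vxlan', 'vlan'],
      tenant_network_types => ['vxlan'], # no longer constrains the loaded drivers
    }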
|
|
This patch updates the overcloud pacemaker role manifest so
that it optionally configures VIPs on isolated networks if
they are enabled.
Change-Id: I6123ee622abe4d8d7b5f76cf9bac43acd80c1f64
|
|
This patch refactors the puppet controller role so that it
makes use of per-service VIP settings for each service.
Previously the ctlplane VIP was hard-wired into many of the
controller services. With this patch we have the ability to
isolate traffic for services that previously used the ctlplane
and public VIPs for their settings.
The implementation:
* Stops the use of the VirtualIP and PublicVirtualIP within the
controller role. These parameters have been replaced with
per-service heat parameters for the controller nested stack, which
are determined via VipMap based on per-service settings in the heat
environment.
* Moves all VIP configuration into puppet/vip-config.yaml.
This made sense so we could deprecate the use of the VirtualIP
and PublicVirtualIP settings above.
* Cleans up the puppet manifests for the controller in several
places to use Hiera directly instead of constructing URLs based
on the static controller and public network VIPs. This improvement
was something we wanted to do anyway and made the implementation
cleaner.
Change-Id: I9b9a15be67f74bec97366408f7047acfd6ea0ec6
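A sketch of the manifest-side effect: instead of building URLs from a
hard-wired controller VIP, manifests read a per-service VIP from Hiera
(the key name below stands in for whichever key VipMap publishes):

    $public_vip = hiera('keystone_public_api_virtual_ip') # illustrative key
    $auth_uri   = "http://${public_vip}:5000/v2.0"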
|
|
This patch adds a new NetIpListMap abstraction which we can use
to make the all-nodes-config IP list network assignments
configurable. IP address lists for all overcloud services
which require IPs were added to all-nodes-config so
that the puppet manifests can be supplied the correct
network list for each service directly.
Change-Id: I209f2b4f97a4bb78648c54813dad8615770bcf1a
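A sketch of how a manifest consumes one of these per-service IP lists
from Hiera (the key name is illustrative; suffix() is from stdlib):

    $memcache_node_ips = hiera('memcache_node_ips') # published by all-nodes-config
    $memcache_servers  = suffix($memcache_node_ips, ':11211')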
|
|
With the current effort of creating isolated networks, the
controller_host hiera variable no longer exists, so we remove it;
otherwise the lookup will fail.
The hiera binding for neutron::agents::ml2::ovs::local_ip has been
written in another review[1]
[1] I1dc11987b4ea3c37775b14fbdddb75588499e9bb
Change-Id: I12777c512d379210e5cddb5e683be4d79808fa2c
|
|
Change-Id: I1c8fc6beacc8352ad2aabe44ff20614ac52c1795
|
|
Change-Id: I1243b68506f37d6b78807c03948874ae100fef65
|
|
Constraints based on vncproxy are commented out because it does not
start with websockify < 0.6; see [1]
1. http://lists.openstack.org/pipermail/openstack-dev/2014-October/048535.html
Co-Authored-By: Jiri Stransky <jistr@redhat.com>
Change-Id: Ie51014bf563920d2e75c5e38942bc42ddc2a3939
|
|
Adds neutron-server, neutron-l3-agent, neutron-dhcp-agent,
neutron-openvswitch-agent and neutron-metadata-agent as
pacemaker resources.
Change-Id: I4dcc6f56db4c27a2a4f627fa8303cbeb2bd563d4
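A sketch of one such resource, using the pacemaker::resource::service
defined type from puppet-pacemaker (clone_params value as typically
used for cloned services; treat the exact arguments as an assumption):

    pacemaker::resource::service { $::neutron::params::server_service:
      clone_params => 'interleave=true',
    }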
|
|
This change adds parameters to specify which network the MySQL
service will use. If the internal_api network exists, the MySQL
service will bind to the IP address on that network; otherwise
it will default to the IP on the Undercloud 'ctlplane'
network.
This patch also drops the old 'controller_host' variable from
the puppet controller template since it is no longer in use.
Change-Id: I4fba2c957f7db47e916bc269fb4bd32ccc99bd4c
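A minimal sketch of the fallback logic on the Puppet side; both hiera
key names here are hypothetical stand-ins for the new parameters:

    # Bind to the internal_api address when set, else the ctlplane IP.
    $mysql_bind_host = hiera('mysql_bind_host', hiera('ctlplane_ip'))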
|
|
Fixes the colocation order between glance-api and glance-registry to
match the ref-arch[1]
[1] https://github.com/beekhof/osp-ha-deploy/blob/master/pcmk/glance.scenario#L108
Change-Id: I40f35afedb3333d97c8b689538bb80a90a66afe8
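A sketch of the constraint shape with puppet-pacemaker; the
source/target direction is whatever the ref-arch dictates, so treat the
ordering below as illustrative:

    pacemaker::constraint::colocation { 'glance-api-with-glance-registry-colocation':
      source => "${::glance::params::api_service_name}-clone",
      target => "${::glance::params::registry_service_name}-clone",
      score  => 'INFINITY',
    }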
|
|
Make sure the keystone service starts before the glance-registry one.
Change-Id: Ia81df13682bf556a39cc36520def48105ee3e27d
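A sketch of such an ordering constraint with puppet-pacemaker (resource
names follow the *-clone convention used for cloned services):

    pacemaker::constraint::base { 'keystone-then-glance-registry-constraint':
      constraint_type => 'order',
      first_resource  => 'openstack-keystone-clone',
      second_resource => 'openstack-glance-registry-clone',
      first_action    => 'start',
      second_action   => 'start',
    }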
|
|
Make sure the keystone service starts before the cinder-api one.
Change-Id: I21549c066afcf051e52fc4bba4fae2f34ad2ba4b
|
|
The interface for pcmk_resource offers the master_params parameter
to set --master during resource creation.
Change-Id: I6fa769f14a6248b371810af3ba6819a1f9ed9442
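A sketch of a master/slave resource created this way (galera shown as
the typical consumer; the agent name and empty value are illustrative):

    pacemaker::resource::ocf { 'galera':
      ocf_agent_name => 'heartbeat:galera',
      master_params  => '', # its presence adds --master to 'pcs resource create'
    }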
|
|
Depends-On: I7b992450176595a89dba9fe2eccf619af2645d6b
Change-Id: I30cebb6d3a8670f49587bedaf51af18a87a8d24c
|
|
Previously the Glance Pacemaker resources were mistakenly defined
on all nodes, causing intermittent duplication errors.
Change-Id: I839ee49b153aa96ec08ebdb7e44aaeac28785963
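A sketch of the fix pattern, assuming a $pacemaker_master flag that is
true only on the bootstrap node:

    if $pacemaker_master {
      # Only the bootstrap node defines the resource, so the other
      # nodes no longer race to create it a second time.
      pacemaker::resource::service { $::glance::params::api_service_name:
        clone_params => 'interleave=true',
      }
    }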
|
|
Change-Id: I4631f962415164975143e94ec0b251ee5972c552
|
|
Change-Id: If87cc4d55e8524246d2cd41a62805f84780006b2
|
|
Add Pacemaker resources for Cinder services, also add relevant ordering
and colocation constraints.
Change-Id: Idc2e1b5ec96d882543f7a1a4ec723a010020ab02
|
|
With change I4b6b77e878017bf92d7c59c868d393e74405a355 we started
using the root user for the clustercheck script, so we no longer
need to create the clustercheck user.
Change-Id: Ic92bd12baeeeaf3f674e766fbc0a8badfb44822f
|
|
Previously we started non-pacemakerized services in step 3 on the
bootstrap node and in step 4 on the other nodes. Now that $sync_db in
OpenStack Puppet modules is decoupled from $enabled and
$manage_service [1], we can start the services in step 4 on all nodes.
[1] https://bugs.launchpad.net/puppet-glance/+bug/1452278
Change-Id: I6351d972ab00f4661d98338d95310d33f271de2f
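A sketch of what the decoupling permits on every node (glance-registry
shown as one example among the affected services):

    class { '::glance::registry':
      sync_db        => $sync_db, # true only on the bootstrap node
      manage_service => false,    # pacemaker starts the service later
      enabled        => false,
    }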
|
|
This will configure the sysctl settings via puppet instead of
the sysctl image element.
Change-Id: Ieb129d4cbe4b6d4184172631499ecd638073564f
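A minimal sketch, assuming a puppet sysctl module that provides a
sysctl::value type (the key shown is illustrative):

    sysctl::value { 'net.ipv4.ip_nonlocal_bind':
      value => '1',
    }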
|
|
The exec timeout/attempts is configured so that it is
left running for up to 30 minutes if the command runs but is
unsuccessful, and up to 2 hours if the command times out.
Change-Id: I4b6b77e878017bf92d7c59c868d393e74405a355
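One combination of exec parameters consistent with those bounds (values
are an illustration, not necessarily the exact ones in the change):

    exec { 'galera-ready':
      command   => '/usr/bin/clustercheck >/dev/null',
      timeout   => 30,  # per attempt: 180 x (30s + 10s) = 2 hours of hangs
      tries     => 180, # 180 x 10s sleep = 30 minutes of fast failures
      try_sleep => 10,
    }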
|
|
Change-Id: I7e9eb665275bd48d9c079934cc01ba62b5f59e16
|
|
We need to write config for OpenStack services on all nodes in step 3 so
that we can then create pacemaker resources in step 4. (If we wrote
config on non-bootstrap nodes in step 4, as is currently the case,
services on those nodes might be started unconfigured. This is an
inter-node ordering issue that cannot be easily solved from within
Puppet manifests, hence the use of steps to enforce this ordering.)
Change-Id: Ia78ec38520bd1295872ea2690e8d3f8d6b01c46c
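A sketch of the step gating in a role manifest, assuming the deployment
step is exposed to Puppet via hiera('step') and glance-api stands in
for any of the affected services:

    if hiera('step') >= 3 {
      # step 3: write config on all nodes, service left unmanaged here
      class { '::glance::api':
        manage_service => false,
        enabled        => false,
      }
    }
    if hiera('step') >= 4 {
      # step 4: pacemaker starts the now-configured services
      pacemaker::resource::service { $::glance::params::api_service_name: }
    }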
|
|
Set clone params according to [1].
[1] https://github.com/beekhof/osp-ha-deploy/blob/f8a65ab4c34f94737edde7db60337b830bfe6311/pcmk/rabbitmq.scenario
Change-Id: I5644de2d6253ab762a1420560ecb5bee2fd83092
Co-Authored-By: Giulio Fidente <gfidente@redhat.com>
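A sketch of the resulting resource; the clone_params follow the
referenced scenario, while the agent name is an assumption:

    pacemaker::resource::ocf { 'rabbitmq':
      ocf_agent_name => 'heartbeat:rabbitmq-cluster',
      clone_params   => 'ordered=true interleave=true',
    }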
|
|
Aims at having the Pacemaker resource configuration happen
within a single if condition.
Change-Id: I497538510f80a356e876d476024671b787b77fc9
|
|
Change-Id: I724c341f148fedf725f3b3da778e491741b754ae
|
|
NTP synchronization is moved to step 1, where initial Pacemaker
configuration is performed.
Memcached is moved to step 2 to make sure it is up before the
OpenStack services are started.
Change-Id: I84121a687ee5ddb522239ecefd4d1d76c2f910b5
|
|
Change-Id: Ia8cb04b214c71afc884647fb20be3cc1a309c194
|
|
As with RabbitMQ previously, we can hit the same race condition between
config being written on all nodes and pacemaker starting the
services. Configuring the services at least one step earlier than
starting them allows us to get rid of this race condition.
Change-Id: I78f47dfb82ca8609ed40f784d65ba92db3d411f3
|
|
Recently puppet-pacemaker has changed in a backward-incompatible way;
we need to reflect the changes in TripleO.
This patch also addresses the non-deterministic ordering between the
corosync service and VIP creation.
Depends-On: Ia68fee38f99dba18badc07eb0adbc473cfcffdf3
Change-Id: Ia7fe14cfb1401be98b62afeed589bb9f1b8af761
Co-Authored-By: Yanis Guenane <yanis.guenane@enovance.com>
|