This patch removes the custom config_id outputs and replaces
them with OS::stack_id, which allows us to just call get_resource
in the parent stack.
The motivation for this change is that we'll be adding more
os-net-config templates, and it would be nice to take advantage
of this newer template feature.
Change-Id: I6fcb26024b94420779b86766e16d8a24210c4f8e
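As a rough illustration of the pattern (resource and file names here
are placeholders, not the actual templates touched by this change),
a provider template exposes the special output and the parent stack
then consumes it directly:

  # provider template, e.g. a hypothetical net-config-noop.yaml
  heat_template_version: 2015-04-30

  resources:
    OsNetConfigImpl:
      type: OS::Heat::StructuredConfig
      properties:
        group: os-apply-config
        config:
          os_net_config:
            network_config: []

  outputs:
    # With this special output name, get_resource on the provider
    # resource in the parent returns this value instead of the
    # nested stack id.
    OS::stack_id:
      value: {get_resource: OsNetConfigImpl}

In the parent stack a deployment can then simply use
{get_resource: NetworkConfig} for its config property.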
|
|
This patch uses the new NetIpMap and ServiceMap abstractions
to assign the Neutron tenant tunneling network addresses.
By default this is associated with the tenant network. If no
tenant network is activated, this will still default to
the control plane IP address.
Change-Id: I9db7dd0c282af4e5f24947f31da2b89f231e6ae4
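A hedged sketch of the lookup this enables (resource, attribute and
map key names below illustrate the pattern rather than quote the
change itself):

  # Resolve the tunneling IP from whichever network the service map
  # assigns to Neutron tenant traffic; 'ctlplane' remains the result
  # when no tenant network is enabled.
  input_values:
    neutron_tenant_tunneling_ip:
      get_attr:
        - NetIpMap
        - net_ip_map
        - {get_param: [ServiceMap, NeutronTenantNetwork]}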
|
|
This patch adds a resource which constructs a JSON output
parameter called net_ip_map, which will allow us to easily
extract arbitrary IP addresses for each network using the
get_attr function in Heat.
The goal is to use this data construct in each role
template to obtain the correct IP address on each
network.
Change-Id: I1a8c382651f8096f606ad38f78bbd76314fbae5f
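To make the shape concrete, the output is roughly a map of network
name to assigned address, along these lines (network and parameter
names are illustrative):

  outputs:
    net_ip_map:
      description: Map of network name to assigned IP address.
      value:
        ctlplane: {get_param: ControlPlaneIp}
        internal_api: {get_param: InternalApiIp}
        storage: {get_param: StorageIp}
        storage_mgmt: {get_param: StorageMgmtIp}
        tenant: {get_param: TenantIp}

A role template can then pull any address it needs with e.g.
{get_attr: [NetIpMap, net_ip_map, internal_api]}.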
|
|
This patch updates the cinder block storage roles so that
they can optionally make use of isolated network
ports on the storage, storage management, and internal_api
networks.
-Multiple networks are created based upon settings in the heat
resource registry. These nets will either use the noop network (the
control plane pass-thru default) or create a custom Neutron port on
each of the configured networks.
-The ipaddress/subnet of each network is passed into the
NetworkConfig resource which drives os-net-config. This allows the
deployer to define a custom network template for static IPs, etc.
on each of the networks.
-The ipaddress is exposed as an output parameter. By exposing
the individual addresses as outputs we allow Heat to construct
collections of ports for various services.
Change-Id: I4e18cd4763455f815a8f8b82c93a598c99cc3842
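A sketch of how a role might wire one of these ports (type, property
and attribute names are indicative only; the registry-mapped port
type is what switches between the noop and the real Neutron port
implementation):

  resources:
    StoragePort:
      type: OS::TripleO::BlockStorage::Ports::StoragePort
      properties:
        ControlPlaneIP: {get_attr: [BlockStorage, networks, ctlplane, 0]}

    NetworkConfig:
      type: OS::TripleO::BlockStorage::Net::SoftwareConfig
      properties:
        StorageIpSubnet: {get_attr: [StoragePort, ip_subnet]}

  outputs:
    storage_ip_address:
      value: {get_attr: [StoragePort, ip_address]}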
|
|
This patch updates the swift roles so that
they can optionally make use of isolated network
ports on the storage, storage management, and internal API
networks.
-Multiple networks are created based upon settings in the heat
resource registry. These nets will either use the noop network (the
control plane pass-thru default) or create a custom Neutron port on
each of the configured networks.
-The ipaddress/subnet of each network is passed into the
NetworkConfig resource which drives os-net-config. This allows the
deployer to define a custom network template for static IPs, etc.
on each of the networks.
-The ipaddress is exposed as an output parameter. By exposing
the individual addresses as outputs we allow Heat to construct
collections of ports for various services.
Change-Id: I9984404331705f6ce569fb54a38b2838a8142faa
|
|
This patch updates the ceph roles so that
they can optionally make use of isolated network
ports on the storage and storage management networks.
-Multiple networks are created based upon settings in the heat
resource registry. These nets will either use the noop network (the
control plane pass-thru default) or create a custom Neutron port on
each of the configured networks.
-The ipaddress/subnet of each network is passed into the
NetworkConfig resource which drives os-net-config. This allows the
deployer to define a custom network template for static IPs, etc.
on each of the networks.
-The ipaddress is exposed as an output parameter. By exposing
the individual addresses as outputs we allow Heat to construct
collections of ports for various services.
Change-Id: I35cb8e7812202f8a7bc0379067bf33d483cd2aec
|
|
This patch updates the compute roles so that
they can optionally make use of isolated network
ports on the tenant, storage, and internal_api networks.
-Multiple networks are created based upon settings in the heat
resource registry. These nets will either use the noop network (the
control plane pass-thru default) or create a custom Neutron port on
each of the configured networks.
-The ipaddress/subnet of each network is passed into the
NetworkConfig resource which drives os-net-config. This allows the
deployer to define a custom network template for static IPs, etc.
on each of the networks.
-The ipaddress is exposed as an output parameter. By exposing
the individual addresses as outputs we allow Heat to construct
collections of ports for various services.
Change-Id: Ib07b4b7256ede7fb47ecc4eb5abe64b9144b9aa1
|
|
This patch updates the controller roles so that
they can optionally make use of isolated network
ports on each of 5 available overcloud networks.
-Multiple networks are created based upon settings in the heat
resource registry. These nets will either use the noop network (the
control plane pass-thru default) or create a custom Neutron port on
each of the configured networks.
-The ipaddress/subnet of each network is passed into the
NetworkConfig resource which drives os-net-config. This allows the
deployer to define a custom network template for static IPs, etc.
on each of the networks.
-The ipaddress is exposed as an output parameter. By exposing
the individual addresses as outputs we allow Heat to construct
collections of ports for various services.
Change-Id: I9bbd6c8f5b9697ab605bcdb5f84280bed74a8d66
|
|
This patch adds parameters so that we can pass in the
ipaddress/subnet for each of the isolated overcloud
traffic nets to os-net-config templates. This
interface change will allow deployers to plug
in a custom version of an os-net-config template
that drives isolated network configuration.
Change-Id: I35bbe9a0bd81e79f9bfd531fe89c700af8b354c4
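The added interface is roughly of this shape (parameter names are
shown for illustration; defaults stay empty so the control plane
remains the fallback):

  parameters:
    ControlPlaneIp:
      type: string
      default: ''
    InternalApiIpSubnet:
      type: string
      default: ''
      description: IP address/subnet on the internal API network
    StorageIpSubnet:
      type: string
      default: ''
      description: IP address/subnet on the storage network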
|
|
This patch adds a set of templates to create ports on isolated
networks via Heat. There are 5 port templates in total
which are split out according to the available overcloud
networks.
Change-Id: I5175ef48c1960ea0d13fc8518328db53921c70cd
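A minimal sketch of one such port template (the network name and
output names are assumptions for illustration):

  heat_template_version: 2015-04-30

  parameters:
    StorageNetName:
      type: string
      default: storage

  resources:
    StoragePort:
      type: OS::Neutron::Port
      properties:
        network: {get_param: StorageNetName}

  outputs:
    ip_address:
      description: IP address assigned on the storage network
      value: {get_attr: [StoragePort, fixed_ips, 0, ip_address]}

An ip_subnet output in CIDR form can be derived in the same way from
the port's subnet attributes.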
|
|
This patch allows users to selectively enable the creation
of split-out networks for the overcloud traffic. These
networks will be created on the undercloud's neutron
instance.
By default a noop network is used so that no extra networks
are created. This keeps the default behavior of carrying
all traffic on the control plane.
Change-Id: Ied49d9458c2d94e9d8e7d760d5b2d971c7c7ed2d
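The selection happens through the resource registry; an environment
along these lines (resource names and paths are indicative) either
keeps the noop default or opts a network in:

  resource_registry:
    # default: no extra network, traffic stays on the control plane
    OS::TripleO::Network::Storage: network/noop.yaml
    OS::TripleO::Network::InternalApi: network/noop.yaml
    # opt-in example: create a real storage network on the undercloud
    # OS::TripleO::Network::Storage: network/storage.yaml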
|
|
Align all Deployment resources so we can use a glob convention for
stepped deployments via heat hooks/breakpoints.
Since most resources already use a FooDeployment_StepN convention,
align those which deviate from this as a precursor to supporting
stepped deployment, e.g. stepping through "*Deployment_Step*".
Change-Id: I6bfee04649aa36116d1141ebe06d08b310ec8939
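With the names aligned, a breakpoint environment can target every
stepped deployment with a single glob; a sketch of the Heat hook
syntax (illustrative, not part of this change):

  resource_registry:
    resources:
      "*Deployment_Step*":
        hooks: pre-create   # pause each step until the hook is cleared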
|
|
Change-Id: If87cc4d55e8524246d2cd41a62805f84780006b2
|
|
Add Pacemaker resources for Cinder services; also add relevant ordering
and colocation constraints.
Change-Id: Idc2e1b5ec96d882543f7a1a4ec723a010020ab02
|
|
Previously we've been starting non-pacemakerized services in step 3 on
the bootstrap node and in step 4 on the others. Now that $sync_db in OpenStack
Puppet modules is decoupled from $enabled and $manage_service [1] we can
start the services in step 4 on all nodes.
[1] https://bugs.launchpad.net/puppet-glance/+bug/1452278
Change-Id: I6351d972ab00f4661d98338d95310d33f271de2f
|
|
This patch bumps the HOT version for the overcloud
to Kilo 2015-04-30. We should have already done this
since we are making use of OS::stack_id (a Kilo feature)
in some of the nested stacks. This will also give us
access to the new repeat function.
Change-Id: Ic534e5aeb03bd53296dc4d98c2ac5971464d7fe4
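For reference, the new version string and the repeat intrinsic it
unlocks look roughly like this (the security group is just a stock
example, not part of this change):

  heat_template_version: 2015-04-30

  parameters:
    service_ports:
      type: comma_delimited_list
      default: '80,443'

  resources:
    AccessGroup:
      type: OS::Neutron::SecurityGroup
      properties:
        rules:
          repeat:
            for_each:
              <%port%>: {get_param: service_ports}
            template:
              protocol: tcp
              port_range_min: <%port%>
              port_range_max: <%port%>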
|
|
This will configure the sysctl settings via puppet instead of
the sysctl image element.
Change-Id: Ieb129d4cbe4b6d4184172631499ecd638073564f
|
|
The exec timeout/attempts are configured so that it is
left running for up to 30 minutes if the command runs but is
unsuccessful, and up to 2 hours if the command times out.
Change-Id: I4b6b77e878017bf92d7c59c868d393e74405a355
|
|
We need to write config for OpenStack services on all nodes in step 3 so
that we can then create pacemaker resources in step 4. (If we wrote
config on non-bootstrap nodes in step 4, as is currently the case, services on
those nodes might be started unconfigured. This is an inter-node
ordering issue that cannot be easily solved from within Puppet
manifests, hence the use of steps to enforce this ordering.)
Change-Id: Ia78ec38520bd1295872ea2690e8d3f8d6b01c46c
|
|
Set clone params according to [1].
[1] https://github.com/beekhof/osp-ha-deploy/blob/f8a65ab4c34f94737edde7db60337b830bfe6311/pcmk/rabbitmq.scenario
Change-Id: I5644de2d6253ab762a1420560ecb5bee2fd83092
Co-Authored-By: Giulio Fidente <gfidente@redhat.com>
|
|
This changes the way RabbitMQ clients get to the servers:
they will no longer go through HAProxy.
Change-Id: I522d7520b383a280505e0e7c8fecba9ac02d2c9b
|
|
Aims at having the Pacemaker resource configuration happen
within a single if condition.
Change-Id: I497538510f80a356e876d476024671b787b77fc9
|
|
Change-Id: I724c341f148fedf725f3b3da778e491741b754ae
|
|
NTP synchronization is moved to step 1, where initial Pacemaker
configuration is performed.
Memcached is moved to step 2 to make sure it is up before the
OpenStack services are started.
Change-Id: I84121a687ee5ddb522239ecefd4d1d76c2f910b5
|
|
Change-Id: Ia8cb04b214c71afc884647fb20be3cc1a309c194
|
|
Use of Pacemaker is governed by the resource registry since
change Ibefb80d0d8f98404133e4c31cf078d729b64dac3
Change-Id: I2f1fa8d6d28ae009940be2c2c530066197aa543b
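The selection is an environment-level mapping, roughly like the
following (file and resource names are indicative):

  # e.g. environments/puppet-pacemaker.yaml
  resource_registry:
    OS::TripleO::ControllerConfig: ../puppet/controller-config-pacemaker.yaml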
|
|
As with RabbitMQ previously, we can hit the same race condition between
config being written on all nodes and Pacemaker starting the
services. Configuring the services at least one step earlier than
starting them will allow us to get rid of this race condition.
Change-Id: I78f47dfb82ca8609ed40f784d65ba92db3d411f3
|
|
Recently puppet-pacemaker has changed in a backward-incompatible way; we
need to reflect the changes in TripleO.
This patch also addresses non-deterministic order between corosync
service and VIP creation.
Depends-On: Ia68fee38f99dba18badc07eb0adbc473cfcffdf3
Change-Id: Ia7fe14cfb1401be98b62afeed589bb9f1b8af761
Co-Authored-By: Yanis Guenane <yanis.guenane@enovance.com>
|