This change adds parameters to specify which network the Neutron API should
use. If the internal_api network exists, Neutron will bind to the IP on that
network, otherwise the Undercloud 'ctlplane' network will be used. The
network that the Neutron API is bound to can be overridden in an environment
file.
Change-Id: I11bcebba3a22e8850095250a2ddfaf972339476b
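As an illustration of the pattern used here and in the Keystone, Glance, Cinder and Ceilometer entries below, the binding network could be overridden from an environment file roughly as sketched here; the parameter_defaults section is standard Heat, but the ServiceNetMap parameter and the NeutronApiNetwork key are assumptions for illustration rather than names taken from this change.

    # Hypothetical override placing the Neutron API on the internal_api
    # network; the parameter/key names below are illustrative only.
    parameter_defaults:
      ServiceNetMap:
        NeutronApiNetwork: internal_api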
|
This change adds parameters to specify which networks the Keystone API
services will use. If the external network exists, Keystone will bind to
the IP on that network for the public API, otherwise it will default to
the IP on the Undercloud 'ctlplane' network. If the internal_api network
exists it will be used for the Keystone Admin API, otherwise it will
default to the 'ctlplane' IP. The networks these APIs are bound to can
be overridden in an environment file.
Change-Id: I6694ef6ca3b9b7afbde5d4f9d173723b9ce71b20
|
This change adds parameters to specify which networks the Glance services
will use. If the internal_api network exists, Glance Registry will bind
to the IP on that network, otherwise it will default to the Undercloud
'ctlplane' network. If the storage network exists, Glance API will bind
to the IP on that network, otherwise it will default to 'ctlplane'. The
networks that these services use can be overridden with an environment
file.
Change-Id: I6114b2d898c5a0ba4cdb26a3da2dbf669666ba99
|
This change adds parameters to specify which networks the Cinder API and
Cinder iSCSI services will listen on. If the internal_api network exists,
Cinder API will be bound to the IP on that network, otherwise it will
default to the Undercloud 'ctlplane' network. The Cinder iSCSI service will
bind to the storage network if it exists; otherwise it will also default
to using the Undercloud 'ctlplane' network.
Change-Id: I98149f108baf28d46eb199b69a72d0f6914486fd
|
This change adds the parameters to specify which networks the Ceilometer
and MongoDB servers listen on. They are bound to the internal_api network
if present, and fall back to the default Undercloud 'ctlplane' network if not.
Change-Id: Ib646e4a34496966f9b1d454f04d07bf95543517f
|
This commit adds an environment file containing all of the relevant
resource registry entries needed to enable isolated overcloud networks.
Change-Id: I8c5e0ca300b86a38925f59c9df7831d69da9f787
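A rough sketch of what such an environment file looks like, assuming resource names of the OS::TripleO::Network::* and OS::TripleO::<Role>::Ports::* form and relative template paths; both are illustrative and not verified against this change.

    # network isolation environment (sketch)
    resource_registry:
      # Create the isolated networks on the undercloud Neutron
      OS::TripleO::Network::InternalApi: ../network/internal_api.yaml
      OS::TripleO::Network::Storage: ../network/storage.yaml
      # Give the roles real ports on those networks instead of the noop default
      OS::TripleO::Controller::Ports::InternalApiPort: ../network/ports/internal_api.yaml
      OS::TripleO::Controller::Ports::StoragePort: ../network/ports/storage.yaml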
|
This patch removes the custom config_id outputs and replaces
them with OS::stack_id, which allows us to simply call get_resource
in the parent stack.
The motivation for this change is that we'll be adding more os-net-config
templates, and it would be nice to take advantage of this newer
template feature.
Change-Id: I6fcb26024b94420779b86766e16d8a24210c4f8e
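The mechanics, sketched for a nested os-net-config template: when a nested template declares an output named OS::stack_id, get_resource on the corresponding resource in the parent returns that output's value instead of the nested stack identifier, so no custom config_id output is needed. The resource name OsNetConfigImpl below is an assumption for illustration.

    # Nested template (sketch)
    outputs:
      OS::stack_id:
        description: The software config ID to hand back to the parent
        value: {get_resource: OsNetConfigImpl}

    # In the parent stack, {get_resource: <this resource>} now returns that
    # value directly, e.g. config: {get_resource: NetworkConfig}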
|
This patch uses the new NetIpMap and ServiceMap abstractions
to assign the Neutron tenant tunneling network addresses.
By default this is associated with the tenant network. If no
tenant network is activated this will still default to
the control plane IP address.
Change-Id: I9db7dd0c282af4e5f24947f31da2b89f231e6ae4
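On the consuming side this amounts to a nested get_attr/get_param lookup, sketched below; the NeutronTenantIp name, the ServiceNetMap parameter name and its NeutronTenantNetwork key are assumptions for illustration. The NetIpMap construct itself is described in the next entry.

    # Look up the tunneling IP on whichever network the service map selects;
    # when the tenant network is not enabled, the map simply points this at
    # the control plane address.
    NeutronTenantIp:
      {get_attr: [NetIpMap, net_ip_map, {get_param: [ServiceNetMap, NeutronTenantNetwork]}]}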
|
This patch adds a resource which constructs a JSON output
parameter called net_ip_map, which will allow us to easily
extract arbitrary IP addresses for each network using the
get_attr function in Heat.
The goal is to use this data construct in each role
template to obtain the correct IP address on each
network.
Change-Id: I1a8c382651f8096f606ad38f78bbd76314fbae5f
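A minimal sketch of the idea, assuming the map is built in a small nested template that takes each network's address as a parameter; the parameter and network names are illustrative.

    parameters:
      ControlPlaneIp: {type: string}
      InternalApiIp: {type: string, default: ''}
      StorageIp: {type: string, default: ''}

    outputs:
      net_ip_map:
        description: A map of the assigned IP addresses, keyed by network name
        value:
          ctlplane: {get_param: ControlPlaneIp}
          internal_api: {get_param: InternalApiIp}
          storage: {get_param: StorageIp}

    # A role template can then look up an address with e.g.
    #   {get_attr: [NetIpMap, net_ip_map, internal_api]}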
|
This patch updates the cinder block storage roles so that
they can optionally make use of isolated network
ports on the storage, storage management, and internal_api
networks.
- Multiple networks are created based upon settings in the Heat
  resource registry. These nets will either use the noop network (the
  control plane pass-thru default) or create a custom Neutron port on
  each of the configured networks.
- The IP address/subnet of each network is passed into the
  NetworkConfig resource which drives os-net-config. This allows the
  deployer to define a custom network template for static IPs, etc.
  on each of the networks.
- The IP address is exposed as an output parameter. By exposing
  the individual addresses as outputs we allow Heat to construct
  collections of ports for various services.
Change-Id: I4e18cd4763455f815a8f8b82c93a598c99cc3842
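A sketch of the pattern, which is shared with the swift, ceph, compute and controller entries below; the registry names, properties and attributes shown are illustrative rather than taken from the patch.

    resources:
      StoragePort:
        # Resolves via the resource registry to either the noop port
        # (control plane pass-thru) or a template creating a Neutron port.
        type: OS::TripleO::BlockStorage::Ports::StoragePort
        properties:
          ControlPlaneIP: {get_attr: [BlockStorage, networks, ctlplane, 0]}

      NetworkConfig:
        type: OS::TripleO::BlockStorage::Net::SoftwareConfig
        properties:
          StorageIpSubnet: {get_attr: [StoragePort, ip_subnet]}

    outputs:
      storage_ip_address:
        description: IP address of the node on the storage network
        value: {get_attr: [StoragePort, ip_address]}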
|
This patch updates the swift roles so that
they can optionally make use of isolated network
ports on the storage, storage management, and internal API
networks.
- Multiple networks are created based upon settings in the Heat
  resource registry. These nets will either use the noop network (the
  control plane pass-thru default) or create a custom Neutron port on
  each of the configured networks.
- The IP address/subnet of each network is passed into the
  NetworkConfig resource which drives os-net-config. This allows the
  deployer to define a custom network template for static IPs, etc.
  on each of the networks.
- The IP address is exposed as an output parameter. By exposing
  the individual addresses as outputs we allow Heat to construct
  collections of ports for various services.
Change-Id: I9984404331705f6ce569fb54a38b2838a8142faa
|
This patch updates the ceph roles so that
they can optionally make use of isolated network
ports on the storage and storage management networks.
- Multiple networks are created based upon settings in the Heat
  resource registry. These nets will either use the noop network (the
  control plane pass-thru default) or create a custom Neutron port on
  each of the configured networks.
- The IP address/subnet of each network is passed into the
  NetworkConfig resource which drives os-net-config. This allows the
  deployer to define a custom network template for static IPs, etc.
  on each of the networks.
- The IP address is exposed as an output parameter. By exposing
  the individual addresses as outputs we allow Heat to construct
  collections of ports for various services.
Change-Id: I35cb8e7812202f8a7bc0379067bf33d483cd2aec
|
This patch updates the compute roles so that
they can optionally make use of isolated network
ports on the tenant, storage, and internal_api networks.
- Multiple networks are created based upon settings in the Heat
  resource registry. These nets will either use the noop network (the
  control plane pass-thru default) or create a custom Neutron port on
  each of the configured networks.
- The IP address/subnet of each network is passed into the
  NetworkConfig resource which drives os-net-config. This allows the
  deployer to define a custom network template for static IPs, etc.
  on each of the networks.
- The IP address is exposed as an output parameter. By exposing
  the individual addresses as outputs we allow Heat to construct
  collections of ports for various services.
Change-Id: Ib07b4b7256ede7fb47ecc4eb5abe64b9144b9aa1
|
This patch updates the controller roles so that
they can optionally make use of isolated network
ports on each of the 5 available overcloud networks.
- Multiple networks are created based upon settings in the Heat
  resource registry. These nets will either use the noop network (the
  control plane pass-thru default) or create a custom Neutron port on
  each of the configured networks.
- The IP address/subnet of each network is passed into the
  NetworkConfig resource which drives os-net-config. This allows the
  deployer to define a custom network template for static IPs, etc.
  on each of the networks.
- The IP address is exposed as an output parameter. By exposing
  the individual addresses as outputs we allow Heat to construct
  collections of ports for various services.
Change-Id: I9bbd6c8f5b9697ab605bcdb5f84280bed74a8d66
|
This patch adds parameters so that we can pass in the
ipaddress/subnet for each of the isolated overcloud
traffic nets to os-net-config templates. This
interface change will allow deployers to plug
in a custom version of an os-net-config template
that drives isolated network configuration.
Change-Id: I35bbe9a0bd81e79f9bfd531fe89c700af8b354c4
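The interface amounts to a set of per-network parameters that any custom os-net-config template is expected to accept, roughly like the sketch below; the parameter names follow a <Network>IpSubnet pattern and are illustrative.

    parameters:
      ControlPlaneIp:
        type: string
        description: IP address/subnet on the ctlplane network
      InternalApiIpSubnet:
        type: string
        default: ''
        description: IP address/subnet on the internal_api network
      StorageIpSubnet:
        type: string
        default: ''
        description: IP address/subnet on the storage network
      TenantIpSubnet:
        type: string
        default: ''
        description: IP address/subnet on the tenant network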
|
This patch adds a set of templates to create ports on isolated
networks via Heat. There are 5 port templates in total
which are split out according to the available overcloud
networks.
Change-Id: I5175ef48c1960ea0d13fc8518328db53921c70cd
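Each of these templates is essentially a thin wrapper around OS::Neutron::Port that returns the assigned address, along the lines of the sketch below; the network and resource names are illustrative.

    parameters:
      ControlPlaneIP:
        type: string
        description: Unused here; kept so the interface matches the noop port

    resources:
      InternalApiPort:
        type: OS::Neutron::Port
        properties:
          network: internal_api
          replacement_policy: AUTO

    outputs:
      ip_address:
        description: IP address assigned on the internal_api network
        value: {get_attr: [InternalApiPort, fixed_ips, 0, ip_address]}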
|
This patch allows users to selectively enable the creation
of split-out networks for the overcloud traffic. These
networks will be created on the undercloud's Neutron
instance.
By default a noop network is used so that no extra networks
are created. This allows our default to continue being
all traffic on the control plane.
Change-Id: Ied49d9458c2d94e9d8e7d760d5b2d971c7c7ed2d
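In other words, the default resource registry points every overcloud network at a noop template, and an environment file such as the one described above swaps in the real network definitions. A rough sketch of the default side, with illustrative names and paths:

    resource_registry:
      # Defaults: map every isolated network to the noop template,
      # so nothing extra is created unless an environment overrides this.
      OS::TripleO::Network::InternalApi: network/noop.yaml
      OS::TripleO::Network::Storage: network/noop.yaml
      OS::TripleO::Network::Tenant: network/noop.yaml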
|
Align all Deployment resources so we can use a glob convention for
stepped deployments via Heat hooks/breakpoints.
Since most resources already use a FooDeployment_StepN convention,
align those which deviate from it as a precursor to supporting
stepped deployment, e.g. stepping through "*Deployment_Step*".
Change-Id: I6bfee04649aa36116d1141ebe06d08b310ec8939
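With a uniform naming convention, a breakpoint can be set on every step with a single wildcard hook entry in the environment, roughly as sketched below using Heat's Kilo-era hook syntax; treat the exact form as an assumption rather than something taken from this change.

    resource_registry:
      resources:
        "*Deployment_Step*":
          hooks: pre-create   # pause before each matching deployment step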
|
Change-Id: If87cc4d55e8524246d2cd41a62805f84780006b2
|
Add Pacemaker resources for the Cinder services, along with the relevant
ordering and colocation constraints.
Change-Id: Idc2e1b5ec96d882543f7a1a4ec723a010020ab02
|
Previously we've been starting non-pacemakerized services in step 3 on
the bootstrap node and in step 4 on the others. Now that $sync_db in the
OpenStack Puppet modules is decoupled from $enabled and $manage_service [1],
we can start the services in step 4 on all nodes.
[1] https://bugs.launchpad.net/puppet-glance/+bug/1452278
Change-Id: I6351d972ab00f4661d98338d95310d33f271de2f
|
This patch bumps the HOT version for the overcloud
to Kilo (2015-04-30). We should have already done this,
since we are making use of OS::stack_id (a Kilo feature)
in some of the nested stacks. This will also give us access
to the new repeat function.
Change-Id: Ic534e5aeb03bd53296dc4d98c2ac5971464d7fe4
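For reference, the version bump itself is a one-line change at the top of each affected template, and the repeat function it unlocks works roughly as in this generic, non-TripleO sketch (the security group resource and values are purely illustrative):

    heat_template_version: 2015-04-30   # Kilo

    resources:
      web_security_group:
        type: OS::Neutron::SecurityGroup
        properties:
          rules:
            repeat:
              for_each:
                <%port%>: [80, 443]
              template:
                protocol: tcp
                port_range_min: <%port%>
                port_range_max: <%port%>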
|
This will configure the sysctl settings via Puppet instead of
the sysctl image element.
Change-Id: Ieb129d4cbe4b6d4184172631499ecd638073564f
|
The exec timeout/attempts are configured so that the command is left
running for up to 30 minutes if it runs but is unsuccessful, and for
up to 2 hours if it times out.
Change-Id: I4b6b77e878017bf92d7c59c868d393e74405a355
|
We need to write config for OpenStack services on all nodes in step 3 so
that we can then create Pacemaker resources in step 4. (If we wrote
config on non-bootstrap nodes in step 4, as is currently the case, services on
those nodes might be started unconfigured. This is an inter-node
ordering issue that cannot be easily solved from within Puppet
manifests, hence the use of steps to enforce this ordering.)
Change-Id: Ia78ec38520bd1295872ea2690e8d3f8d6b01c46c
|
Set clone params according to [1].
[1] https://github.com/beekhof/osp-ha-deploy/blob/f8a65ab4c34f94737edde7db60337b830bfe6311/pcmk/rabbitmq.scenario
Change-Id: I5644de2d6253ab762a1420560ecb5bee2fd83092
Co-Authored-By: Giulio Fidente <gfidente@redhat.com>
|
This changes the way RabbitMQ clients reach the servers: they
will no longer go through HAProxy.
Change-Id: I522d7520b383a280505e0e7c8fecba9ac02d2c9b
|
Aims at having the Pacemaker resource configuration happen
within a single if condition.
Change-Id: I497538510f80a356e876d476024671b787b77fc9
|
Change-Id: I724c341f148fedf725f3b3da778e491741b754ae