Adds support for an NFS backend for Cinder; it remains disabled by
default.
Change-Id: I9ebef072ed115efe980fa4904ea80f02384522af
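A minimal environment-file sketch for enabling it (the parameter names
CinderEnableNfsBackend, CinderNfsServers and CinderNfsMountOptions are
assumptions, not confirmed by this commit):

    parameter_defaults:
      # Assumed parameter names; verify against the Cinder template.
      CinderEnableNfsBackend: true
      CinderNfsServers: ['192.168.122.1:/export/cinder']
      CinderNfsMountOptions: 'rw,sync'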
This patch adds a new parameter to configure the
neutron external network bridge. The setting
applies to the bridge used in the Neutron l3_agent.ini file
and can be useful if you wish to set external_network_bridge = ''
in that file.
As part of this fix we also update the environment file for
network isolation so that we automatically set the new
NeutronExternalNetworkBridge to an empty string. This fixes
an issue where overcloud floating IPs did not work correctly
when using the external network interface for floating IP
traffic.
Change-Id: I3bfcda8746780ea0851d88ed6db8557e261cef0d
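A sketch of the resulting environment setting (the parameter name comes
from the commit; note the double quoting, since the value written into
l3_agent.ini must itself be the two-character string ''):

    parameter_defaults:
      # Empty bridge name: the L3 agent stops claiming a dedicated
      # external bridge, so floating IP traffic flows over the
      # configured external network interface instead.
      NeutronExternalNetworkBridge: "''"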
Add two new parameters: EnableFencing and FencingConfig.
FencingConfig is JSON with an expected structure documented in the
templates. It is passed through to puppet-tripleo, which configures
the fencing devices.
Fencing is configured and enabled in the last step, after all pacemaker
resources and constraints have been created, which should be a more
stable approach than the other way round.
Change-Id: Ifd432bfd2443b6d13e7efa006d4120bb0eaa2554
Depends-On: I819fc8c126ec47cd207c59b3dcf92ff699649c5a
Depends-On: I8b7adff6f05f864115071c51810b41efad887584
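A hedged sketch of the two parameters in an environment file (the
devices/agent/params layout follows the shape the commit says is
documented in the templates; the concrete agent and credentials are
illustrative assumptions):

    parameter_defaults:
      EnableFencing: true
      FencingConfig:
        devices:
        - agent: fence_ipmilan        # illustrative agent choice
          host_mac: '11:22:33:44:55:66'
          params:
            ipaddr: 10.0.0.101
            login: admin
            passwd: secret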
This patch updates the cinder block storage role
for Puppet so that it supports network isolation.
This includes using the (optional) isolated networks
for MySQL, Glance API, and iSCSI network traffic.
Change-Id: Icdfbf5fce7380e6049babca0cd50ca2e4008c1b0
Currently, we use the heat default server names, which results in some
fairly unreadable hostnames due to the level of nesting in the templates,
e.g. ov-sszdbj5rdne-0-bhseh65edxv6-Controller-zoqc6tlypbdp.
Instead, we allow the user to specify a format string per role, defaulted
to a string which formats the name as <stackname>-controller-<index>,
e.g. overcloud-controller-0.
Optionally, additional hostname components (not replaced by heat) can be
added, so that deployment-time customization of hostnames via firstboot
scripts (e.g. cloud-init) is possible.
Should anyone wish to keep the old heat-generated names, they can pass
an empty string via these parameters, which heat will treat as if no "name"
property was provided to OS::Nova::Server.
Change-Id: I1730caa0c2256f970da22ab21fa3aa1549b3f90b
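A minimal sketch of overriding the per-role format string (the parameter
name ControllerHostnameFormat and the %stackname%/%index% placeholders
are assumptions based on the description above):

    parameter_defaults:
      # Assumed name and placeholders; would yield e.g. overcloud-ctrl-0
      ControllerHostnameFormat: '%stackname%-ctrl-%index%'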
When you do a stack-update which affects e.g. ControllerDeployment
such that some value in hieradata is updated (for example, changing
the "Debug" parameter to True), we only write the hieradata file and
don't reapply the manifests.
So we introduce a dependency on the deploy_stdout values from all
hieradata-applying configs, such that the manifests will be re-applied
on update if the data is changed.
This requires https://review.openstack.org/#/c/190282/ so that
99-refresh-completed will return the derived config ID as part of the
deploy_stdout payload.
Closes-Bug: #1463092
Change-Id: I1175248c3236d0c42e37d062afce550efce8aadc
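A hedged sketch of the wiring (the resource and property names here are
illustrative, not necessarily the exact ones used in the templates):

    ControllerNodesPostDeployment:
      type: OS::TripleO::ControllerPostDeployment   # assumed registry name
      properties:
        # Re-apply the manifests whenever a hieradata deployment's
        # derived config ID (returned via deploy_stdout) changes.
        NodeConfigIdentifiers:
          controller_config: {get_attr: [ControllerDeployment, deploy_stdout]}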
The redis_vip should come from a Neutron Port, as its CIDR depends
on the Neutron Network configuration. This change adds 2 new files
and modifies 1 in the network/ports directory:
- noop.yaml - Passes through the ctlplane Controller IP (modified)
- ctlplane_vip.yaml - Creates a new VIP on the control plane
- vip.yaml - Creates a VIP on the named network (for isolated nets)
Also, changes to overcloud-without-mergepy.yaml create the
Redis Virtual IP. The standard resource registry was modified to
use noop.yaml for the new Redis VIP. The Puppet resource registry
was modified to use ctlplane_vip.yaml by default, but can be made
to use vip.yaml when network isolation is used by using an
environment file. vip.yaml will place the VIP according to the
ServiceNetMap, which can also be overridden.
We use this new VIP port definition to assign a VIP to Redis,
but follow-up patches will assign VIPs to the rest of the
services in a similar fashion.
Co-Authored-By: Dan Sneddon <dsneddon@redhat.com>
Change-Id: I2cb44ea7a057c4064d0e1999702623618ee3390c
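A hedged sketch of the registry switch (the RedisVipPort key name is an
assumption; the file names come from the commit):

    resource_registry:
      # Puppet default: create the VIP on the control plane
      OS::TripleO::Network::Ports::RedisVipPort: network/ports/ctlplane_vip.yaml
      # With network isolation, place the VIP per ServiceNetMap instead:
      # OS::TripleO::Network::Ports::RedisVipPort: network/ports/vip.yaml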
This patch renames the NeutronLocalIp option to be called
NeutronTenantNetwork. This is more consistent with
all of the other ServiceNetMap settings which end in
'Network' and initial end user feedback found the
old name a bit cryptic as well.
This is the network for neutron tenant traffic, so let's
just name it that.
Change-Id: Id49afe75c372887453413c092190a5775aa3e1ee
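A hedged example of the renamed key in use (the network name is
illustrative):

    parameter_defaults:
      ServiceNetMap:
        NeutronTenantNetwork: tenant   # formerly NeutronLocalIp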
This patch makes it possible to configure the isolated network
for the Nova VNC proxy client.
Change-Id: I462dfaea94e5fe9cb260ba91a42433a250f07984
This patch updates the Puppet Swift storage role
so that it supports network isolation. By default
all traffic still flows on the ctlplane network
but if network isolation is enabled then network
traffic will flow over the configured storage_mgmt
network interface.
This patch also fixes a few critical issues with
the swift storage role that prevented it from
working:
- oac_data for the swift devices was overriding the
data provided in the swift_devices_and_proxy
hieradata file.
- the role was missing declarations to load hieradata
files for swift_devices_and_proxy and all_nodes
- the required snmpd settings were not getting set
correctly in the 'object' hiera data file.
With all of these changes, the Swift storage role
works correctly both with and without network isolation.
Change-Id: I541abb2604380f603bba91ad88e54783ee450a8f
This change adds config and deployment resources to trigger package
updates on nodes. The deployments are triggered by doing a stack-update
and setting one of the parameters to a unique value.
The intent is that rolling updates will be controlled by setting
breakpoints on all of the UpdateDeployment resources inside the
role resource groups.
Change-Id: I56bbf944ecd6cbdbf116021b8a53f9f9111c134f
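A hedged sketch of triggering the update (the parameter name
UpdateIdentifier is an assumption; any value not previously used works):

    parameter_defaults:
      # Changing this to a fresh value on stack-update triggers the
      # package-update deployments on the nodes.
      UpdateIdentifier: '2015-07-01T12:00:00'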
Change-Id: I731b408f24da01c1bc897bfffe8fd4d5638932ed
Turns NeutronNetworkVLANRanges into a list and makes it consumable by
neutron::plugins::ml2::network_vlan_ranges as an array. Previously,
use of VLANs was impossible because puppet-neutron failed to
join() network_vlan_ranges.
Also fixes wiring of network_vlan_ranges on computes and adds a
sample environment file to test use of VLANs for tenant networks.
Change-Id: I8725cdb9591dd8d0b7125fdacbefdc9138703266
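A hedged sketch of the sample environment (the 'datacentre' physical
network name and the NeutronNetworkType parameter are illustrative
assumptions):

    parameter_defaults:
      NeutronNetworkType: vlan
      # A list of <physnet>:<vlan_min>:<vlan_max> entries
      NeutronNetworkVLANRanges:
      - 'datacentre:100:199'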
This patch updates the Ceph configuration for the puppet
implementation so that it isolates the Ceph traffic
for the public and cluster interfaces. By default public traffic
runs on the "storage" network and the cluster traffic runs on the
"storage mgmt" network.
If network isolation is not enabled then the default
ctlplane addresses will be used for both the public and
cluster interfaces.
Change-Id: I791244d72c8f42142d9de99e0cf0acdca19e62b0
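A hedged sketch of the resulting assignment (the ServiceNetMap key names
are assumptions consistent with the description above):

    parameter_defaults:
      ServiceNetMap:
        CephPublicNetwork: storage        # Ceph public (client) traffic
        CephClusterNetwork: storage_mgmt  # Ceph cluster (replication) traffic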
This patch refactors the puppet controller role so that it
makes use of per-service VIP settings for each service.
Previously the ctlplane VIP was hard-wired into
many of the controller services. With this patch we have
the ability to isolate traffic for services which
previously used the ctlplane and public VIPs for their
settings.
The implementation:
* Stops the use of VirtualIP and PublicVirtualIP within the
controller role. These parameters have now been replaced with
per-service heat parameters for the controller nested stack, which
are determined via VipMap based on per-service settings in the heat
environment.
* Moves all VIP configuration into puppet/vip-config.yaml.
This made sense so we could deprecate the use of the VirtualIP
and PublicVirtualIP settings above.
* Cleans up the puppet manifests for the controller in several
places to use Hiera directly instead of constructing URLs based on the
static controller and public network VIPs. This improvement
was something we wanted to do anyway and made the implementation
cleaner.
Change-Id: I9b9a15be67f74bec97366408f7047acfd6ea0ec6
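A hedged sketch of the per-service hieradata shape that
puppet/vip-config.yaml persists (the key names and addresses are
illustrative assumptions):

    # vip-config hieradata (illustrative keys)
    keystone_public_api_vip: 10.0.0.4
    keystone_admin_api_vip: 192.168.0.5
    glance_api_vip: 172.16.2.5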
We forgot to pass the NeutronEnableTunnelling param to controllers
(it was passed only to computes), making it unusable there.
Change-Id: I74756732deabd1c7ba9039832ea169fd322a569f
This hasn't been properly wired in for a while AFAICT, so it makes
sense to remove it, and introduce a value via parameter_defaults
which enables easier global selection of a particular transport
without passing the value down through all the nested stacks.
Change-Id: Icd830aea00768e65adc1df1287440fdab98058f9
We want to ensure this actually worked, or subsequent configuration
steps may fail.
Change-Id: Ia9ae12e70dd32dd3ae6c26cbfd3e3e2dba5d272f
We want to know this deployment succeeded. Again,
ControllerAllNodesPostDeployment depends_on this, which implies
it should actually be complete before doing the PostDeployment stuff,
something that is impossible to determine with NO_SIGNAL.
Change-Id: I46d23bce8762ac414e4de82cf42193694aebb763
We need to be sure the bootstrap node data has been propagated to the
cluster before proceeding with configuration, because
ControllerNodesPostDeployment consumes the data put in place by this
deployment and depends_on it for serialization, which is essentially
meaningless when combined with NO_SIGNAL.
Change-Id: I73a1e5a2cda4c79f457bfbd9ce2836dc5c1902cc
As most of the OpenStack services are already automatically bound
to the public virtual IP, we don't need to set
the default network for Horizon and Keystone to the 'external'
network. These should probably default to the internal_api
network like the rest of the OpenStack services...
Change-Id: I04cf64568c2fc7bb8a821b0de5ba56aa90158e2d
This patch adds VIPs for the internal_api, storage,
and storage management networks.
For puppet these are persisted into a local vip-config
hieradata file which is then used by puppet-tripleo's
loadbalancer module to apply per-service VIP settings.
Change-Id: I909c3bdc9d17a8e15351f4797287769e3f76c849
We probably don't need to split out separate networks
for Heat CFN and CloudWatch. Just having a single network
for the Heat API in the overcloud is probably fine.
Change-Id: I917b314e01227af72129645c9b72ad8e54f07865
This patch adds a new NetIpListMap abstraction which we can use
to make the all-nodes-config IP list network assignments
configurable. IP address lists for all overcloud services
which require IPs were added to all-nodes-config so
that puppet manifests can be directly supplied the
correct network list for each service.
Change-Id: I209f2b4f97a4bb78648c54813dad8615770bcf1a
This patch makes ServiceNetMap a top-level parameter.
This is helpful to tools like Tuskar which don't support Heat
environments that contain both a resource_registry and parameter_defaults.
ServiceNetMap will in fact be utilized at the top level in some of
the VIP-related patches that follow.
Change-Id: I375063dacf5f3fc68e6df93e11c3e88f48aa3c3a
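A hedged sketch of overriding the map at the top level (the service keys
shown are illustrative):

    parameter_defaults:
      ServiceNetMap:
        GlanceApiNetwork: storage
        NovaVncProxyNetwork: internal_api
        SwiftMgmtNetwork: storage_mgmt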
This patch enables users to selectively enable the creation
of split-out networks for the overcloud traffic. These
networks will be created on the undercloud's neutron
instance.
By default a noop network is used, so that no extra networks
are created. This allows our default to continue being
all traffic on the control plane.
Change-Id: Ied49d9458c2d94e9d8e7d760d5b2d971c7c7ed2d
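A hedged sketch of the opt-in mechanism (the registry key and file paths
are assumptions following the pattern described above):

    resource_registry:
      # Default: noop, keep all traffic on the control plane
      OS::TripleO::Network::InternalApi: network/noop.yaml
      # Opt in by pointing at a real network definition instead:
      # OS::TripleO::Network::InternalApi: network/internal_api.yaml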
Use of Pacemaker is governed by the resource registry since
change Ibefb80d0d8f98404133e4c31cf078d729b64dac3.
The param stayed longer in the template to prevent breakage of scripts
which could have passed it when launching stack-create, despite it being
ignored.
This change removes the param entirely.
Change-Id: I026ce391319a4306c4b81a15652e3cad470e5cb7
Depends-On: I775724b207c737043a2a418a3ec8ede2cbaa8fa0
This patch bumps the HOT version for the overcloud
to Kilo (2015-04-30). We should have already done this,
since we are making use of OS::stack_id (a Kilo feature)
in some of the nested stacks. Also, this gives us access to
the new repeat function.
Change-Id: Ic534e5aeb03bd53296dc4d98c2ac5971464d7fe4
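The corresponding header change in each affected template (this line is
standard HOT syntax):

    heat_template_version: 2015-04-30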
The exec timeout/attempts is configured so that the exec is
left running for up to 30 minutes if the command runs but is
unsuccessful, and up to 2 hours if the command times out.
Change-Id: I4b6b77e878017bf92d7c59c868d393e74405a355
This commit aims to support the creation of the galera cluster via
Pacemaker. With this commit in, three use-cases will be supported:
* Non-HA setup / non-Pacemaker setup: the deployment will take place
as it currently does in f20puppet-nonha. Nothing changes.
* Non-HA setup / Pacemaker setup: even though it is a non-HA setup, the
galera cluster will be deployed via pacemaker with a cluster size of 1.
* HA setup / non-Pacemaker setup: N/A.
* HA setup / Pacemaker setup: it is assumed that an HA setup will
always be with pacemaker, so in this situation pacemaker will deploy a
cluster of 3 galera master nodes.
Depends-On: I7aed9acec11486e0f4f67e4d522727476c767d83
Change-Id: If0c37a86fa8b5aa6d452129bccf7341a3a3ba667