This adds an option which enables package installation via
Yum when Puppet executes. Users might want to disable Yum
installation of packages via Puppet when using pre-installed
images.
The option is off by default, meaning that Puppet will no
longer install packages by default. Users will need to
enable the EnablePackageInstall parameter in order to get
the previous behavior.
The intent is to use the parameter_defaults section
of the Heat environment to allow users to cleanly enable this
feature without wiring it into the top level. This is because
the new parameter is Puppet specific and doesn't really apply to
other implementations. Kilo Heat already has support for
parameter_defaults and so does python-heatclient.
NOTE: most TripleO users do not yet have the heatclient
support because setup-clienttools in tripleo-incubator only
installs releases via pip. For this reason the parameter_defaults
section in overcloud-resource-registry-puppet.yaml is commented
out for now.
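A minimal sketch of the commented-out environment entry, assuming
the parameter_defaults spelling supported by Kilo Heat:

    # In overcloud-resource-registry-puppet.yaml, disabled for now:
    #parameter_defaults:
    #  EnablePackageInstall: true

Uncommenting these two lines (or passing an equivalent environment
file via heat stack-create -e) restores the previous
install-packages-by-default behavior.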
Change-Id: I3af71b801b87d080b367d9e4a1fb44c1bfea6e87
This patch adds NTP support to all roles.
As part of this change overcloud-without-mergepy.yaml has
also been updated so that it passes the NtpServer parameter into
the Swift and Cinder storage node templates, so that NTP can be
configured on those machines as well.
NOTE: the Puppet support here uses the puppetlabs-ntp module,
which we add in Ib10ccbfdb3140b19f40049707548c6655d250e1c.
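A minimal sketch of the kind of wiring this adds; the resource and
count names (SwiftStorage, SwiftStorageCount) are illustrative,
while NtpServer and swift-storage-puppet.yaml come from this
series:

    resources:
      SwiftStorage:
        type: OS::Heat::ResourceGroup
        properties:
          count: {get_param: SwiftStorageCount}
          resource_def:
            # Nested storage node template; now receives NtpServer.
            type: swift-storage-puppet.yaml
            properties:
              NtpServer: {get_param: NtpServer}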
Change-Id: If2ef236fa42a714e84c6944eee5fe4daddf3fedf
Our existing default (replicas == 1) means that only a single
copy of each object is stored, so no data is actually replicated
in a multi-node Swift environment. This seems like a bad
production default setting and could easily slip by if not set.
Setting it to 3 shouldn't hurt anything and follows suit with
what several production installers (based around Puppet)
actually use. If using an installation with fewer than 3 Swift
nodes I believe Swift will do its best and still work fine.
FWIW I noticed this when testing a multi-node Puppet Swift
installation and was surprised when I didn't see any *.data
files getting replicated across the storage cluster.
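A minimal sketch of the new default, assuming the ring replica
count is exposed as a template parameter named SwiftReplicas:

    parameters:
      SwiftReplicas:
        type: number
        default: 3
        description: How many replicas to use in the Swift rings.

Deployments with fewer than three storage nodes can still override
this per stack via an environment file.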
Change-Id: I44bdfff7aae6bdf845b79ca1f8f450c22113caed
In doing the Puppet version of the Swift role I noticed
4 parameters which we apply to storage nodes but which should
not be required. This patch drops the following parameters
from the swift-storage and swift-storage-puppet nested
stacks.
1) ControllerIP: There is no reason a storage node should need
the IP address of the controller. The swift proxy would need
this information in order to be able to contact keystone, but
the swift proxy is not installed on storage nodes, so we can
drop the parameter here.
2) NeutronEnableTunnelling: There is no reason for Neutron
to be installed on Swift storage nodes. No need to create
an OVS bridge either.
3) NeutronNetworkType: Similar to above. No Neutron requirements
exist here so this parameter is not required.
4) Password: This only applies to the swift proxy, which is
currently part of our controller role. Storage nodes shouldn't
need the keystone service password for any reason.
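A minimal sketch of the resulting nested stack call; the remaining
property names (Image, Flavor, HashSuffix) are illustrative, and
the four dropped parameters simply no longer appear:

    resources:
      SwiftStorage0:
        type: swift-storage-puppet.yaml
        properties:
          # ControllerIP, NeutronEnableTunnelling, NeutronNetworkType
          # and Password are intentionally no longer passed here.
          Image: {get_param: SwiftStorageImage}
          Flavor: {get_param: OvercloudSwiftStorageFlavor}
          HashSuffix: {get_param: SwiftHashSuffix}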
Change-Id: Icbf05363475c388fc722277da3d3d00a7355b19a
This patch implements the required changes to configure
swift storage nodes via Puppet. As with the rest of the
overcloud, we generate the rings on each node (with the same
seed), so every node independently builds identical rings
without having to copy ring files between nodes.
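A minimal hieradata-style sketch; the exact puppet-swift keys and
values are assumptions, but it shows the shared-seed idea:

    swift::ringbuilder::build_ring: true
    swift::ringbuilder::part_power: 10
    swift::ringbuilder::replicas: 3
    swift::ringbuilder::min_part_hours: 1
    # The same seed on every node makes rebalancing deterministic,
    # so each node computes an identical ring without copying files.
    swift::ringbuilder::ring_build_seed: 'shared-ring-seed'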
Change-Id: I677c85b09b6e656b3ac1f938a4bd6bc7daae1755