Currently the MysqlRootPassword parameter is retrieved from the
templates but not honored, which prevents users from specifying it.
This commit fixes that.
Change-Id: Ib6842736a37aea3cc16f1a7c75fc877408682bf7
|
|
The loadbalancer Puppet code moved to the puppet-tripleo (lightweight)
composition layer.
This patch uses it and refactors the loadbalancer.pp file.
Co-Authored-By: Dan Prince <dprince@redhat.com>
Change-Id: I1765ac9b6cb01cb64d5d28dad646674ddca859e9
|
|
Currently we have a hard-coded default for auth_encryption_key,
which isn't ideal as it's used as a salt for DB encryption.
Instead, reference an OS::Heat::RandomString resource so that a
random key is created for each deployment.
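Roughly, the change amounts to something like the following sketch (the
controller resource and property names are illustrative, not taken from
this commit):

  resources:
    # Generate a fresh key per deployment instead of shipping a
    # hard-coded default in the templates.
    HeatAuthEncryptionKey:
      type: OS::Heat::RandomString
      properties:
        length: 32

    Controller:
      type: OS::TripleO::Controller          # illustrative
      properties:
        HeatAuthEncryptionKey: {get_attr: [HeatAuthEncryptionKey, value]}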
Change-Id: Ic76b89db17603c114d98d28c01f75cc287fb2e90
|
|
Updates the puppet configuration for the Ceilometer auth agent
so that we do the join conversions in the Heat templates and
use only hiera for configuration of the ::ceilometer::agent::auth
class.
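The Nova glance and Nova neutron updates below follow the same pattern.
A rough sketch of what doing the join in the template looks like (the
parameter and hiera key shown are illustrative):

  # controller-puppet.yaml (sketch): compute the joined string with
  # list_join in Heat and hand the result to hiera as a plain value,
  # so the Puppet class only ever sees ordinary hiera data.
  ceilometer::agent::auth::auth_url:
    list_join:
      - ''
      - - 'http://'
        - {get_param: KeystoneHost}          # illustrative parameter
        - ':5000/v2.0'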
Change-Id: I932afafe21b2485a0581ac3910ac9d46161eee0d
|
|
Updates the puppet configuration for the Nova glance configs
so that we do the join conversions in the Heat templates and
use only hiera for configuration of the ::nova class.
Change-Id: Id12fb05470470558f1dccd45150bfce00a554466
|
|
Updates the puppet configuration for the Nova neutron configs
so that we do the join conversions in the Heat templates and
use only hiera for configuration of the ::nova::network::neutron
class. This updates the compute configuration to match what
we now do on the controller as well.
Change-Id: I2b352551777f64e0ceb119f48cc3b3ab1779f4d5
|
|
Currently the Cinder iSCSI backend is configured within the DEFAULT
section. Since we aim to support multiple backends, this commit moves
the iSCSI backend into its own section, configures it properly, and
enables it by default.
It also adds a parameter which can be used to disable the default
backend.
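The new knob is a plain Heat parameter along these lines (sketch; the
exact name and description are illustrative):

  parameters:
    CinderEnableIscsiBackend:
      type: boolean
      default: true
      description: Whether to enable and configure the default iSCSI backend.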
Co-Authored-By: Giulio Fidente <gfidente@redhat.com>
Change-Id: I05fb44b59829c0afa8a6588956a48320f2f65159
|
|
We're already configuring Neutron in the overcloud, but the controller
is still configured with the default Nova neutron_api_class for its
networking configuration, which means it uses Nova Network rather than
Neutron. This causes some of the Nova API is_neutron checks to behave
incorrectly.
This patch updates the controller to use nova::network::neutron (as we
already do in the overcloud_compute.pp role). As part of the change,
several of the compute-specific hiera settings for the
nova::network::neutron class have been moved to common.yaml.
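The shared settings end up as ordinary hiera keys in common.yaml,
something like the following (the exact keys are illustrative, not
copied from this commit):

  # common.yaml (sketch)
  nova::network::neutron::neutron_auth_strategy: 'keystone'
  nova::network::neutron::security_group_api: 'neutron'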
Change-Id: Id2d5a5a0aa1ca087de714880ef1ea98484b06849
|
|
Before starting the Neutron agents, we need to make sure neutron-server
is running, so we don't have a race when starting the services.
This patch adds the required ordering.
Change-Id: I24db069d6af1fadd302b0924f769db3f58f65685
|
|
Include ::cinder::glance in the controller manifest so that Cinder's
glance-related parameters get their proper (upstream) default values.
Change-Id: I9ac83b9e997d3c2502b08b642d4e41dba36ddf67
|
|
This patch updates the glance::backend::swift implementation to
use only hiera variables instead of a mix of hiera and inline
class parameters.
Nothing was functionally wrong with the previous approach, but now
that we can compose more freely using SoftwareDeployments, defining
all the variables in hiera makes sense and is cleaner.
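Concretely, the backend configuration becomes a handful of hiera keys
along these lines (values are obviously illustrative):

  glance::backend::swift::swift_store_user: 'service:glance'
  glance::backend::swift::swift_store_key: 'example-password'
  glance::backend::swift::swift_store_auth_address: 'http://192.0.2.10:5000/v2.0'
  glance::backend::swift::swift_store_create_container_on_put: true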
Change-Id: I6d319841488d2ed94e088a5ac21e41dcd964ed1a
Co-Authored-By: Dan Prince <dprince@redhat.com>
|
|
The puppet-heat module just added a new class
parameter to manage instance_user, in
I44fef59d3ed1f7851d8504855a7ae0d5460fdc84. This
broke us because we were already setting the option manually
via heat_config, and puppet doesn't allow the same setting to be
managed twice.
Change-Id: Ib25e8de8ca3849701d506a5d0c956a6f3317ac8a
Closes-bug: #1429328
|
|
Also, we can actually uncomment this now that heatclient 0.3
has been released.
Change-Id: I0b4ce13f1426c364ea7921596022e5165e025fdb
|
|
This is a first implementation of Ceph support in TripleO with Puppet:
* Install ceph-mon on the controller node
* Install ceph-osd on the cephstorage node
Co-Authored-By: Giulio Fidente <gfidente@redhat.com>
Change-Id: I48488cbe950047fae5e746e458106d6edb9a6183
|
|
This patch moves all the mergepy-related templates for the
overcloud into a deprecated directory. The Makefile has
been updated so that overcloud.yaml is still generated
at the top level, so this shouldn't break end users.
This is meant to reduce confusion for new users who are learning the
TripleO Heat templates and find having two full implementations
confusing.
Change-Id: I0848aca4dee3e37cb4c6089c5f655ad22ac6c5fd
|
|
This patch applies the allNodesConfig data to the Swift storage
nodes. It contains hosts information, which could be useful there.
Change-Id: Iaccfdc698e371d6618d561c33f256ccc3c166fb7
|
|
This patch adds a new BlockStoreNodesPostDeployment resource
which can be used along with the environment file to
specify a nested stack which is guaranteed to execute
after all the BlockStore config deployments have executed.
This is really useful for Puppet because Heat then controls where
puppet executes in the deployment process, and we want to ensure
puppet runs only after all hiera configuration data has been deployed
to the nodes. With the previous approach some of the data would be
there, but the allNodes data was not guaranteed to be there in time.
Because os-apply-config (tripleo-image-elements) deployments have
their ordering controlled within the elements themselves, an empty
stub nested stack has been added so that we don't break that
implementation.
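The ObjectStore, Compute, and Controller equivalents below follow the
same pattern. In outline (names are illustrative):

  resources:
    BlockStoreNodesPostDeployment:
      # The resource registry decides what this maps to: the puppet
      # environment points it at a stack that runs puppet, while the
      # os-apply-config environment points it at an empty stub.
      type: OS::TripleO::BlockStorePostDeployment   # illustrative
      # Only run once every config deployment has completed, so all
      # hiera data is on the nodes before puppet executes.
      depends_on: BlockStoreNodesDeployment         # illustrative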
Change-Id: I29b3574e341eecd53b2867788f415bff153cfa9f
|
|
This patch adds a new ObjectStoreNodesPostDeployment resource
which can be used along with the environment file to
specify a nested stack which is guaranteed to execute
after all the ObjectStore config deployments have executed.
This is really useful for Puppet because Heat then controls where
puppet executes in the deployment process, and we want to ensure
puppet runs only after all hiera configuration data has been deployed
to the nodes. With the previous approach some of the data would be
there, but the allNodes data was not guaranteed to be there in time.
Because os-apply-config (tripleo-image-elements) deployments have
their ordering controlled within the elements themselves, an empty
stub nested stack has been added so that we don't break that
implementation.
Change-Id: I778b87a17d5e6824233fdf9957c76549c36b3f78
|
|
This patch adds a new ComputeNodesPostDeployment resource
which can be used along with the environment file to
specify a nested stack which is guaranteed to execute
after all the Compute config deployments have executed.
This is really useful for Puppet because Heat then controls where
puppet executes in the deployment process, and we want to ensure
puppet runs only after all hiera configuration data has been deployed
to the nodes. With the previous approach some of the data would be
there, but the allNodes data was not guaranteed to be there in time.
Because os-apply-config (tripleo-image-elements) deployments have
their ordering controlled within the elements themselves, an empty
stub nested stack has been added so that we don't break that
implementation.
Change-Id: I80bccd692e45393f8250607073d1fe7beb0d7396
|
|
This patch splits out the BootstrapNode config so that alternate
implementations (puppet, for example) can provide their own
SoftwareConfigs via a nested stack.
This is controlled by the standard overcloud Heat environment.
For os-apply-config deployments the implementation should work the
same as before.
For puppet deployments the implementation uses hiera metadata
to configure bootstrap_nodeid.
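The switch between implementations happens in the resource registry of
the overcloud environment, roughly as follows (the registry key and
file names are illustrative):

  # overcloud-resource-registry.yaml (os-apply-config flavour, sketch)
  resource_registry:
    OS::TripleO::BootstrapNode::SoftwareConfig: bootstrap-config.yaml

  # overcloud-resource-registry-puppet.yaml (puppet flavour, sketch)
  resource_registry:
    OS::TripleO::BootstrapNode::SoftwareConfig: puppet/bootstrap-config-puppet.yaml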
Change-Id: I691a9d7c474866038a5d47beab295899b5479d03
|
|
Allow installing and configuring RabbitMQ in a cluster with Puppet on
the controller node.
Change-Id: Iebbf55c75b8c80453c7313bb41faf42c7fdf7159
|
|
This patch splits out the allNodesConfig config so that alternate
implementations (puppet, for example) can provide their own
SoftwareConfigs via a nested stack.
This is controlled by the standard overcloud Heat environment.
For os-apply-config deployments the implementation should work the
same as before.
For puppet deployments the implementation uses hiera metadata
to configure rabbit_nodes. The puppet deployment doesn't support
hosts or freeform sysctl metadata yet, so those remain unchanged
for now as well.
Change-Id: I34ae30b1f37aca8b39586f7e350511462d66f694
|
|
This reverts commit 4d470abc589c660cd55e4ced92de234fdf83d882,
in which we disabled Swift (and the Glance Swift backend) because
some of the Heat metadata wasn't showing up.
Change-Id: Ib0c01be5844aa79d74b7de02ba3d0657db5047ba
Closes-bug: 1418805
|
|
This patch splits out the SwiftDevicesAndProxy config so that
alternate implementations (puppet, for example) can provide their own
SoftwareConfigs via a nested stack.
This is controlled by the standard overcloud Heat environment.
For os-apply-config deployments the implementation should work the
same as before.
For puppet deployments the implementation uses hiera metadata
to configure the swift devices.
Partial-bug: 1418805
Change-Id: Ibf6038460f36279ad51a04947589d4a03a553f66
|
|
This patch adds a new ControllerNodesPostDeployment resource
which can be used along with the environment file to
specify a nested stack which is guaranteed to execute
after all the Controller config deployments (HA or otherwise) have
executed.
This is really useful for Puppet because Heat then controls where
puppet executes in the deployment process, and we want to ensure
puppet runs only after all hiera configuration data has been deployed
to the nodes. With the previous approach some of the data would be
there, but most of the HA data, which is composed outside of the
controller-puppet.yaml nested stack, was not guaranteed to be there
in time.
Because os-apply-config (tripleo-image-elements) deployments have
their ordering controlled within the elements themselves, an empty
stub nested stack has been added so that we don't break that
implementation.
Partial-bug: 1418805
Change-Id: Icd6b2c9c1f9b057c28649ee3bdce0039f3fd8422
|
|
This cleans up the top-level tree by moving all the puppet-related
bits into the puppet directory. The only exception is
overcloud-resource-registry-puppet.yaml, which is the puppet
environment file and is used externally.
Change-Id: Idb65a7143b0f29e5579d4e9d1642e4cda6f65d50
|
|
The new ceph-source.yaml file provides the config settings needed
by the elements which configure Ceph on the controllers (monitors) and
storage nodes (OSDs), as well as by the Cinder backend which uses it.
There is also a without-mergepy copy named ceph-storage.yaml.
Change-Id: I954861536c41b2a7e6cbd86a0f0b55004eed4c70
|
|
Not all installations have an NtpServer configured, and if
they don't, the ntp service will fail to start up correctly.
This patch makes it so that ntp is only enabled if
the ntp::servers array is non-empty.
Change-Id: I8417f87ad2a3c1237ebb00ee1232b5313cd45d46
|
|
We have an issue where the swift.devices metadata isn't showing
up on our controllers. This causes ring building to fail,
meaning swift-proxy won't start up.
This patch disables the swift proxy and the glance swift backend
until we can figure out exactly what caused this.
Change-Id: I723a4b703d979d7475ac48f41c4c0ac91c306884
Partial-bug: 1418805
|
|
This adds an option which enables package installation via
Yum when Puppet executes. Users might want to disable Yum
installation of packages via puppet when using pre-installed
images.
The option is off by default, meaning that Puppet will no
longer install packages unless told to. Users will need to
enable EnablePackageInstall in order to get
the previous behavior.
The intent is to use the default_parameters section
of the Heat environment to allow users to cleanly enable this
feature without wiring it into the top level. This is because
the new parameter is Puppet-specific and doesn't really apply to
other implementations. Kilo Heat already has support for
default_parameters, and so does python-heatclient.
NOTE: most TripleO users do not yet have the heatclient
feature because setup-clienttools in tripleo-incubator only installs
releases via pip. For these reasons the default_parameters
section in overcloud-resource-registry-puppet.yaml is commented out
for now.
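Once usable, the commented-out section would look roughly like this
(only EnablePackageInstall itself is introduced by this change):

  # overcloud-resource-registry-puppet.yaml (sketch)
  default_parameters:
    EnablePackageInstall: true    # opt back in to puppet-driven package installs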
Change-Id: I3af71b801b87d080b367d9e4a1fb44c1bfea6e87
|
|
This configures an SNMP agent for the undercloud
Ceilometer 'hardware' metering. This relies on the
razorsedge/puppet-snmp module, which we are adding in
I8ae104de7382767c3448a493cd37ff2994cf4f52.
Change-Id: If2b6b63279b9b0402c5136ff1635e10acad1de7e
|
|
This patch updates puppet on the controller so that it
configures the Neutron dnsmasq options file with
the value provided by the Heat NeutronDnsmasqOptions
parameter.
Properly configuring this setting can help resolve or tune
overcloud instance connectivity issues with SSH, etc.
Change-Id: If47ab3d3002ebe19fc980ca5d37f84f4d8851f9b
|
|
This updates the controller config for the Neutron metadata
agent so that it configures the correct nova_metadata_ip
in /etc/neutron/metadata_agent.ini.
Change-Id: I4d6658ba54e582673938fa14a8c7de287dcd6662
Closes-bug: #1416781
|
|
This moves the loadbalancer composition class parameters
into the loadbalancer-specific software deployment.
This keeps the resource contained and separate from the rest
of the OS hiera data configs.
Change-Id: I8af48b479348e431a8e563917e1345ca4b895a60
|
|
This patch adds the ability to configure the Heat API and
Heat engine on controller nodes via puppet.
Change-Id: Ie81090bceed3e18199a36ebb11d1cbcaea83c410
|
|
This patch adds NTP support to all roles.
As part of this change, overcloud-without-mergepy.yaml has
also been updated so that it passes the NtpServer parameter into
the Swift and Cinder storage node templates, so that NTP can
be configured on those machines as well.
NOTE: The puppet support here uses the puppetlabs-ntp module,
which we add in Ib10ccbfdb3140b19f40049707548c6655d250e1c.
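In the parent template this is just a property pass-through, along
these lines (resource and type names are illustrative):

  # overcloud-without-mergepy.yaml (sketch)
  BlockStorage0:
    type: OS::TripleO::BlockStorage        # illustrative storage node template
    properties:
      NtpServer: {get_param: NtpServer}    # hand the same NTP server down to the role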
Change-Id: If2ef236fa42a714e84c6944eee5fe4daddf3fedf
|
|
This patch adds support for the Ceilometer controller
role, including the Ceilometer:
- API
- central agent
- alarm notifier
- alarm evaluator
- collector
- expirer
In order to enable Swift metering, the swift::proxy ceilometer
middleware was added as well.
Also, a minor adjustment to the existing Ceilometer HAProxy setting
was made to accommodate the Ceilometer auth settings (not exactly
sure why, but this seems to be required).
Like upstream TripleO, Ceilometer currently uses a MySQL database
backend. A follow-on patch can add support for configuring MongoDB
for use with Ceilometer.
Change-Id: I4e171274bd7679d386d93492d13dfa7c5d37f6a8
|
|
Cinder block storage nodes shouldn't need to know the
AdminPassword and CinderPassword values. There are
no services on the block storage nodes which require
Keystone-related passwords.
Change-Id: I4aa89347c60ec6258bd66725a895f6fd2b4844f6
|
|
This patch implements the required changes to configure
common Cinder block storage nodes via Puppet.
Change-Id: Iac8b4679a00f58d5faac4a1d08b7a830f0360ba5
|
|
Our existing default (replicas == 1) means that no copies of the
data are replicated in a multi-node Swift
environment. This seems like a bad production default
setting and could easily slip by if not set explicitly.
Setting it to 3 shouldn't hurt anything and follows
what several production installers (based around Puppet)
actually use. With an installation of fewer than 3 Swift
nodes, I believe Swift will do its best and still work fine.
FWIW, I noticed this when testing a multi-node Puppet Swift
installation and was surprised when I didn't see any *data
files getting replicated across the storage cluster.
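The change itself is just the parameter default, roughly (the
parameter name is illustrative):

  SwiftReplicas:
    type: number
    default: 3      # was 1, i.e. no extra copies at all in a multi-node cluster
    description: How many replicas to use in the Swift rings.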
Change-Id: I44bdfff7aae6bdf845b79ca1f8f450c22113caed
|
|
While doing the Puppet version of the Swift role I noticed
4 parameters which we apply to storage nodes but which should
not be required. This patch drops the following parameters
from the swift-storage and swift-storage-puppet nested stacks.
1) ControllerIP: There is no reason a storage node should need
the IP address of the controller. The swift proxy would need
this information in order to contact keystone, but the swift
proxy is not installed on storage nodes, so we can drop the
parameter here.
2) NeutronEnableTunnelling: There is no reason for Neutron
to be installed on Swift storage nodes. No need to create
an OVS bridge either.
3) NeutronNetworkType: Similar to the above. No neutron requirements
exist here, so this parameter is not required.
4) Password: This only applies to the swift proxy, which is
currently part of our controller role. Storage nodes shouldn't need
the keystone service password for any reason.
|