This updates the controller config for the Neutron metadata
agent so that it configures the correct nova_metadata_ip
in /etc/neutron/metadata_agent.ini.
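For reference, the puppet-neutron class that manages this file
takes the metadata IP directly; a minimal sketch (the IP and
secrets are placeholder values):

    class { '::neutron::agents::metadata':
      # becomes nova_metadata_ip in /etc/neutron/metadata_agent.ini
      metadata_ip   => '192.0.2.10',
      shared_secret => 'example-secret',
      auth_password => 'example-password',
    }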
Change-Id: I4d6658ba54e582673938fa14a8c7de287dcd6662
Closes-bug: #1416781

This moves the loadbalancer composition class parameters
into the loadbalancer-specific software deployment.
This keeps the resource contained and separate from the rest
of the OS hiera data configs.
Change-Id: I8af48b479348e431a8e563917e1345ca4b895a60

This patch adds the ability to configure the Heat API and
Heat engine on controller nodes via puppet.
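Roughly, the new puppet configuration boils down to including
the puppet-heat classes for the two services (a sketch, omitting
the database and auth parameters):

    include ::heat::api
    include ::heat::engine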
Change-Id: Ie81090bceed3e18199a36ebb11d1cbcaea83c410

This patch adds NTP support to all roles.
As part of this change, overcloud-without-mergepy.yaml has
also been updated so that it passes the NtpServer parameter
into the Swift and Cinder storage node templates, allowing
NTP to be configured on those machines as well.
NOTE: The puppet support here uses the puppetlabs-ntp module,
which we add in Ib10ccbfdb3140b19f40049707548c6655d250e1c.
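For example, the puppetlabs-ntp module is driven with a single
class (the server list is a placeholder value):

    class { '::ntp':
      servers => ['0.pool.ntp.org', '1.pool.ntp.org'],
    }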
Change-Id: If2ef236fa42a714e84c6944eee5fe4daddf3fedf

This patch adds support for the Ceilometer controller
role, including the following Ceilometer services:
- API
- central agent
- alarm notifier
- alarm evaluator
- collector
- expirer
In order to enable Swift metering, the swift::proxy ceilometer
middleware was added in.
Also, a minor adjustment to the existing Ceilometer HAProxy
setting was made to accommodate the Ceilometer auth settings
(not exactly sure why, but this seems to be required).
Like upstream TripleO, Ceilometer currently uses a MySQL
database backend. A follow-on patch can add support for
configuring MongoDB for use with Ceilometer.
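In puppet terms this amounts to pulling in the puppet-ceilometer
classes for each of the services above (a sketch, omitting the
database, auth, and messaging parameters):

    include ::ceilometer::api
    include ::ceilometer::agent::central
    include ::ceilometer::alarm::notifier
    include ::ceilometer::alarm::evaluator
    include ::ceilometer::collector
    include ::ceilometer::expirer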
Change-Id: I4e171274bd7679d386d93492d13dfa7c5d37f6a8

Our existing default (replicas == 1) means that data is not
replicated at all in a multi-node Swift environment (only a
single copy of each object is kept). This seems like a bad
production default and could easily slip by if not set
explicitly. Setting it to 3 shouldn't hurt anything and follows
suit with what several production installers (based around
Puppet) actually use. If using an installation with fewer than
3 Swift nodes, I believe Swift will do its best and still work
fine.
FWIW, I noticed this when testing a multi-node Puppet Swift
installation and was surprised when I didn't see any *.data
files getting replicated across the storage cluster.
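The replica count feeds into ring creation; with puppet-swift
that is roughly the following (the part power and min part
hours are placeholder values):

    swift::ringbuilder::create { 'object':
      part_power     => 10,
      replicas       => 3,
      min_part_hours => 1,
    }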
Change-Id: I44bdfff7aae6bdf845b79ca1f8f450c22113caed

Now that we have Swift we can switch Glance over
to use it as its storage backend.
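With puppet-glance, pointing the image service at Swift looks
roughly like this (a sketch; the user and key are placeholder
values):

    class { '::glance::backend::swift':
      swift_store_user                    => 'service:glance',
      swift_store_key                     => 'example-key',
      swift_store_create_container_on_put => true,
    }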
Change-Id: I9513cb63079235337b684aa734af73a0f0cc0afd

This patch adds support for a Swift proxy and storage
node on the controller.
The implementation is fairly straightforward with the
exception of building the ring. I've followed an
upstream TripleO model here where we build the
actual ring on each node (rather than build once
and rsync). This works because Heat will always
know all the devices ahead of time. In the future
when we have Heat breakpoints it might be possible
to consider optimizing this by generating the ring
once and then rsyncing to all the nodes.
The ringbuilder logic is executed as a separate
Heat software deployment. On the controller, the ring
deployment runs in between the base services (mysql/rabbit)
and the OpenStack service steps. This ensures the
ring exists before the Swift proxy is started.
Having the ringbuilder.pp logic as a separate software
config should allow us to reuse it for the Storage
node role.
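Concretely, each node's devices end up declared via
puppet-swift's ring resources; a minimal sketch (the address,
device name, and weight are placeholder values):

    ring_object_device { '192.0.2.11:6000/d1':
      zone   => 1,
      weight => 100,
    }
    swift::ringbuilder::rebalance { 'object': }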
It should also be noted that swift.zones support is
added here, but we are missing an upstream Heat
template change for it to be wired
in properly. See: I0e0f5189da1575f2e1ed7fba4bbbe13a8fbf6221
Likewise, SwiftRingBuild needs to be wired in properly.
See: I01311ec3ca265b151f8740bf7dc57cdf0cf0df6f
The underlying puppet ringbuilder code is already wired
to support this change when it lands.
As is, this works today and provides a working Overcloud
Swift proxy/storage node config. I will follow this up with
a related Swift storage node patch which should allow
puppet to be used for configuration on the storage nodes
as well.
Change-Id: Id1272f796e2507a7357309e8cd6a51ad9e0160af

In I228216a0b55ff2d384b281d9ad2a61b93d58dab9 we split
out just the Controller software config in an effort
to provide hooks for alternate implementations (puppet).
This sort of worked, but caused quirky ordering issues
with signal handling. It also causes problems for Tuskar,
which would prefer to think in terms of these nested stacks
and not have us split out just the software configs like this.
This patch moves all the controller-related stuff into
our two implementations:
controller.yaml: used by os-apply-config (uses the
tripleo-image-elements)
controller-puppet.yaml: uses stackforge puppet-* modules for
configuration
By duplicating the entire controller in this manner we make
it much easier to create dependencies and implement proper
signal handling. The only (temporary) downside is the
duplication of parameters, most of which will eventually go
away when we move towards using global parameters via Heat
environment files instead.
Change-Id: Iaf3c889d7c8815f862308cd8e15ce1010059f5c6