Migrate puppet/hieradata/*.yaml parameters to puppet/services/*.yaml,
except for some services that are not yet composable.
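As a sketch of the pattern (the service and parameter names here are
purely illustrative): a key that previously lived in
puppet/hieradata/controller.yaml, such as

  glance::api::workers: 4

moves into the config_settings map exposed by the matching composable
service template, e.g. puppet/services/glance-api.yaml, assuming the
usual role_data layout:

  outputs:
    role_data:
      value:
        config_settings:
          glance::api::workers: 4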
Co-Authored-By: Juan Antonio Osorio Robles <jaosorior@redhat.com>
Change-Id: I7e5f8b18ee9aa63a1dffc6facaf88315b07d5fd7
This patch adds settings for swift::storage::all so
that the recommended incoming and outgoing chmod
permissions are set.
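A minimal sketch of the resulting hieradata (the chmod strings below
are the values recommended by the Swift rsync replication docs, shown
here for illustration):

  swift::storage::all::incoming_chmod: 'Du=rwx,g=rx,o=rx,Fu=rw,g=r,o=r'
  swift::storage::all::outgoing_chmod: 'Du=rwx,g=rx,o=rx,Fu=rw,g=r,o=r'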
Depends-On: I627ab2255087b0ebc2d3ddc9cd4a7a7d254abb65
Change-Id: I2f14c9afe7b7135ad1bfecb9db0a39bfc3b4d03a
Allows the manifests to include additional, arbitrary puppet classes
when these are defined in the *_classes hieradata.
Example: the Nova RAM allocation ratio is a parameter of
nova::scheduler::filter, but we do not include that class by default;
if needed, one can use:

  nova::scheduler::filter::ram_allocation_ratio: 1.8
  controller_classes:
    - nova::scheduler::filter
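On the manifest side this presumably reduces to a hiera_include call at
the end of each role manifest; a minimal sketch:

  # End of the controller manifest (sketch): include every class
  # listed under the controller_classes hieradata key.
  hiera_include('controller_classes')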
Change-Id: I61d64d2498bed5c49376dee917d106598392db51
This patch adds support for the Ceilometer controller
role, including the Ceilometer:
- API
- central agent
- alarm notifier
- alarm evaluator
- collector
- expirer
In order to enable swift metering, the swift::proxy ceilometer
middleware was added.
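A rough sketch of the proxy side in puppet (the pipeline contents are
illustrative; the point is the ceilometer entry sitting ahead of
proxy-server):

  # Sketch: enable the ceilometer middleware in the swift proxy.
  class { '::swift::proxy':
    pipeline => ['healthcheck', 'cache', 'authtoken', 'keystone',
                 'ceilometer', 'proxy-server'],
  }
  include ::swift::proxy::ceilometer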
Also, a minor adjustment to the existing ceilometer HAProxy setting was
made to accommodate the ceilometer auth settings (not exactly sure why,
but this seems to be required).
Like upstream TripleO, Ceilometer is currently using a MySQL database
backend. A follow-on patch can support configuring MongoDB for use
with Ceilometer.
Change-Id: I4e171274bd7679d386d93492d13dfa7c5d37f6a8
This patch adds support for running the Swift proxy and storage
services on the controller node.
The implementation is fairly straightforward, with the
exception of building the ring. I've followed the
upstream TripleO model here, where we build the
actual ring on each node (rather than building it once
and rsyncing). This works because Heat will always
know all the devices ahead of time. In the future,
when we have Heat breakpoints, it might be possible
to optimize this by generating the ring once and then
rsyncing it to all the nodes.
The ringbuilder logic is executed as a separate
Heat software deployment. On the controller, the ringbuilder
deployment runs between the base services (mysql/rabbit)
step and the OpenStack services step, which ensures the
ring exists before the Swift proxy is started.
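In Heat template terms the ordering is a simple depends_on chain; a
rough sketch with hypothetical resource names:

  ControllerRingbuilderDeployment:
    type: OS::Heat::StructuredDeployment
    depends_on: ControllerBaseDeployment  # base services (mysql/rabbit)
    properties:
      server: {get_resource: Controller}
      config: {get_resource: ControllerRingbuilderConfig}

  ControllerOpenstackDeployment:
    type: OS::Heat::StructuredDeployment
    depends_on: ControllerRingbuilderDeployment
    properties:
      server: {get_resource: Controller}
      config: {get_resource: ControllerOpenstackConfig}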
Having the ringbuilder.pp logic as a separate software
config should allow us to reuse it for the Storage
node role.
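For reference, a minimal sketch of what ringbuilder.pp can look like on
top of puppet-swift (the hiera keys, the device string, and its
zone/weight are illustrative; swift.zones would feed the zone value):

  # Create the ring locally and add the devices Heat knows about;
  # puppet-swift then triggers a rebalance (sketch).
  class { '::swift::ringbuilder':
    part_power     => hiera('swift::ringbuilder::part_power'),
    replicas       => hiera('swift::ringbuilder::replicas'),
    min_part_hours => hiera('swift::ringbuilder::min_part_hours'),
  }

  ring_object_device { '192.0.2.10:6000/d1':
    zone   => 1,
    weight => 100,
  }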
It should also be noted that swift.zones support is
added here, but an upstream Heat template change is
still missing for it to be wired in properly.
See: I0e0f5189da1575f2e1ed7fba4bbbe13a8fbf6221
Likewise, we need to properly wire in SwiftRingBuild as well.
See: I01311ec3ca265b151f8740bf7dc57cdf0cf0df6f
The underlying puppet ringbuilder code is already wired
to support these changes when they land.
As is, this works today and provides a working Overcloud
Swift proxy/storage node config. I will follow this up with
a related Swift storage node patch, which should allow
puppet to be used for configuration on the dedicated storage
nodes as well.
Change-Id: Id1272f796e2507a7357309e8cd6a51ad9e0160af