author     Dan Prince <dprince@redhat.com>  2015-04-10 18:52:14 -0400
committer  Dan Prince <dprince@redhat.com>  2015-04-21 09:14:02 -0400
commit     bf466bcd5189495be9783366440dbe2c3db1ef3d (patch)
tree       45c72a7f0129e834bd85b4752d7812ee5120150b /puppet/ceph-cluster-config.yaml
parent     5513b5e61c3c26b883d1ab7a9b356bb01d881101 (diff)
Add option to enable ceph storage on controller
This patch adds a new ControllerEnableCephStorage option
which can be used to install and configure Ceph storage
(OSD) on the controller node.
This is disabled by default (which is probably the more
production-like setting).
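As a minimal sketch of what the new option looks like as a
Heat template parameter (the type, default, and description
wording here are assumed, not copied from the patch):

    ControllerEnableCephStorage:
      type: boolean
      default: false
      description: >
        Whether to install and configure Ceph Storage (OSD)
        on the controller node.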
The motivation for this change is to help facilitate CI
jobs which actually use Ceph. Right now we have an issue
where, once the Heat stack finishes, Ceph is configured
and ready, but the Cinder volume service (required by our
devtest_overcloud.sh CI test) may not yet have had time
to recognize the amount of storage available on the
remote Ceph storage nodes. Waiting another periodic cycle
for Cinder volume to recognize the actual amount of
storage on the remote OSD nodes would work, but there
isn't a good way to do that at the moment. The right
solution is probably to implement Heat breakpoints in our
CI. Since that change hasn't landed yet, another option
is to simply make the controller node also act as a Ceph
storage node. Because this runs as "step 2" within the
controller, the OSD is available before the Cinder volume
service starts, so it registers the correct amount of
storage on startup.
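A CI job could then flip the option on through a Heat
environment file, along these lines (the file name is
hypothetical and the exact CI wiring is not part of this
patch):

    # enable-ceph-on-controller.yaml (hypothetical)
    parameters:
      ControllerEnableCephStorage: true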
Enabling this feature also matches what we do with Swift
storage on the controller (although we should provide an
option to disable that as well).
Change-Id: Ic47d028591edbaab83a52d7f38283d7805b63042
Diffstat (limited to 'puppet/ceph-cluster-config.yaml')
0 files changed, 0 insertions, 0 deletions