author Dan Radez <> 2016-11-14 12:36:27 -0500
committer Dan Radez <> 2016-11-22 12:36:23 -0500
commit 011c7a4b750b11c138728e5537a6e8f65a5d43fa (patch)
parent 4f05ece5c264d68b4f3edcbc8dfd9f0138bbea87 (diff)
Allow passing a device name to ceph
JIRA: APEX-347

Change-Id: Ibc6d141e20faf613e0f6314286b55aff01ce862e
Signed-off-by: Dan Radez <>
(cherry picked from commit e36f790d036c0bfb5d7ed81d656f9bb1f5200a1a)
4 files changed, 19 insertions(+), 1 deletion(-)
diff --git a/config/deploy/deploy_settings.yaml b/config/deploy/deploy_settings.yaml
index e7821f1..ee1dc14 100644
--- a/config/deploy/deploy_settings.yaml
+++ b/config/deploy/deploy_settings.yaml
@@ -48,6 +48,13 @@ deploy_options:
# Whether to run vsperf after the install has completed
#vsperf: false
+ # Specify a device for Ceph to use for the OSDs. By default a virtual
+ # disk is created for the OSDs. This setting allows you to specify a
+ # different target device. Because the controllers and the compute nodes
+ # all have OSDs set up on them, the device name must be valid on every
+ # overcloud node.
+ #ceph_device: /dev/sdb
# Set performance options on specific roles. The valid roles are 'Compute', 'Controller'
# and 'Storage', and the valid sections are 'kernel' and 'nova'
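
For reference, enabling the option shown above amounts to uncommenting the key and pointing it at a device that exists on every overcloud node. A minimal sketch, assuming a working copy named my_deploy_settings.yaml and /dev/sdb present on all nodes (both names are illustrative):

    #!/usr/bin/env bash
    # Hypothetical pre-deploy step: enable ceph_device in a copy of the
    # shipped deploy settings; file name and device path are examples only.
    cp config/deploy/deploy_settings.yaml my_deploy_settings.yaml
    sed -i 's|#ceph_device: /dev/sdb|ceph_device: /dev/sdb|' my_deploy_settings.yaml
    grep ceph_device my_deploy_settings.yaml  # should print the uncommented key
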
diff --git a/docs/installationprocedure/architecture.rst b/docs/installationprocedure/architecture.rst
index c2b38d0..3353678 100644
--- a/docs/installationprocedure/architecture.rst
+++ b/docs/installationprocedure/architecture.rst
@@ -44,6 +44,7 @@ will run the following services:
- OpenDaylight
- HA Proxy
- Pacemaker & VIPs
+- Ceph Monitors and OSDs
Stateless OpenStack services
All running stateless OpenStack services are load balanced by HA Proxy.
@@ -77,6 +78,12 @@ Pacemaker & VIPs
start up order and Virtual IPs associated with specific services are running
on the proper host.
+Ceph Monitors & OSDs
+ A Ceph monitor runs on each of the control nodes, and each control node
+ also runs a Ceph OSD. By default the OSDs use an autogenerated virtual
+ disk as their target device; an alternative device can be specified with
+ the ceph_device setting in the deploy settings file.
VM migration is configured, and VMs can be evacuated as needed, for example
when invoked by tools such as Heat as part of a monitored stack deployment in
the overcloud.
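
Not part of this change, but the monitor/OSD layout described above can be confirmed after a deploy with the standard Ceph CLI from one of the control nodes (assuming the admin keyring is readable there):

    # Run on an overcloud controller once deployment completes.
    sudo ceph -s          # overall health and monitor quorum
    sudo ceph mon stat    # one monitor per control node is expected
    sudo ceph osd tree    # OSDs should be listed for controllers and computes
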
diff --git a/lib/python/apex/ b/lib/python/apex/
index 10b7831..0d48bd8 100644
--- a/lib/python/apex/
+++ b/lib/python/apex/
@@ -21,7 +21,7 @@ REQ_DEPLOY_SETTINGS = ['sdn_controller',
-OPT_DEPLOY_SETTINGS = ['performance', 'vsperf']
+OPT_DEPLOY_SETTINGS = ['performance', 'vsperf', 'ceph_device']
VALID_ROLES = ['Controller', 'Compute', 'ObjectStorage']
VALID_PERF_OPTS = ['kernel', 'nova', 'vpp']
diff --git a/lib/ b/lib/
index 9512298..d034742 100755
--- a/lib/
+++ b/lib/
@@ -217,6 +217,10 @@ if [[ "$net_isolation_enabled" == "TRUE" ]]; then
+if [[ -n "${deploy_options_array['ceph_device']}" ]]; then
+  sed -i '/ExtraConfig/a\\    ceph::profile::params::osds: {\\x27${deploy_options_array['ceph_device']}\\x27: {}}' opnfv-environment.yaml
+fi
sudo sed -i '/CephClusterFSID:/c\\ CephClusterFSID: \\x27$(cat /proc/sys/kernel/random/uuid)\\x27' /usr/share/openstack-tripleo-heat-templates/environments/storage-environment.yaml
sudo sed -i '/CephMonKey:/c\\ CephMonKey: \\x27'"\$(ceph-authtool --gen-print-key)"'\\x27' /usr/share/openstack-tripleo-heat-templates/environments/storage-environment.yaml
sudo sed -i '/CephAdminKey:/c\\ CephAdminKey: \\x27'"\$(ceph-authtool --gen-print-key)"'\\x27' /usr/share/openstack-tripleo-heat-templates/environments/storage-environment.yaml
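
The doubled backslashes suggest these sed commands run inside a heredoc that consumes one level of escaping; GNU sed then turns \x27 into a single quote. A standalone sketch of what the new conditional produces, with that escaping level already consumed (the sample opnfv-environment.yaml content and device path are illustrative):

    #!/usr/bin/env bash
    # Stand-in for the deploy options parsed earlier in the script.
    declare -A deploy_options_array=( [ceph_device]=/dev/sdb )

    # Minimal stand-in for opnfv-environment.yaml.
    printf 'parameter_defaults:\n  ExtraConfig:\n' > opnfv-environment.yaml

    # GNU sed: 'a\' appends after the matched line; \x27 is a hex escape
    # for a single quote, keeping the device name quoted in the YAML.
    if [[ -n "${deploy_options_array[ceph_device]}" ]]; then
      sed -i '/ExtraConfig/a\    ceph::profile::params::osds: {\x27'"${deploy_options_array[ceph_device]}"'\x27: {}}' opnfv-environment.yaml
    fi

    cat opnfv-environment.yaml
    # Expected appended line:
    #     ceph::profile::params::osds: {'/dev/sdb': {}}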