author    wutianwei <wutianwei1@huawei.com>  2017-10-27 11:25:11 +0800
committer wutianwei <wutianwei1@huawei.com>  2017-10-27 14:19:52 +0800
commit    43b19eaefb31c9fc4c20e7d9402f52ab9542ef9b (patch)
tree      2d991b6c52181016e5b0f03521602aa58d8491ab /deploy
parent    de2ba2c73205c2580f87a73fe7974ae8a8368093 (diff)
fix ceph reboot issue
When storage nodes reboot or shut down, the partitions on the loop devices are lost. Add the command partprobe -s {{ loopdevice }} to rc.local so that the partitions are reloaded when the storage nodes boot up.

Change-Id: I31dfca953aa254fa516421a494318b01cd39675c
Signed-off-by: wutianwei <wutianwei1@huawei.com>
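For illustration only, after this change the storage node's rc.local is expected to contain lines along the following pattern (the image name ceph-volume and the device /dev/loop0 below are hypothetical placeholders, not values from this change):

    # re-attach the ceph image to a loop device at boot
    losetup -f /var/ceph-volume.img
    # re-read the partition table on that loop device
    partprobe -s /dev/loop0

so the loop device is re-created and its partitions become visible again on every boot.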
Diffstat (limited to 'deploy')
-rw-r--r--   deploy/adapters/ansible/roles/storage/tasks/ceph.yml   | 7
1 file changed, 7 insertions(+), 0 deletions(-)
diff --git a/deploy/adapters/ansible/roles/storage/tasks/ceph.yml b/deploy/adapters/ansible/roles/storage/tasks/ceph.yml
index e024c671..50476c7b 100644
--- a/deploy/adapters/ansible/roles/storage/tasks/ceph.yml
+++ b/deploy/adapters/ansible/roles/storage/tasks/ceph.yml
@@ -43,3 +43,10 @@
line: "losetup -f /var/{{ item }}.img"
insertbefore: "{{ rc_local_insert_before }}"
with_items: "{{ ceph_osd_images }}"
+
+- name: Create ceph partitions at boot time
+ lineinfile:
+ dest: "{{ rc_local }}"
+ line: "partprobe -s {{ item }}"
+ insertbefore: "{{ rc_local_insert_before }}"
+ with_items: "{{ ceph_loopback.results | map(attribute='stdout') | list }}"
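For context, the ceph_loopback variable referenced in with_items is presumably registered by an earlier task in this role that attaches the loopback devices; a minimal sketch of such a task, under that assumption (the task name and the use of losetup --show are guesses, not part of this diff), could look like:

    # Attach each ceph OSD image to a free loop device and capture the
    # resulting device path (e.g. /dev/loop0) in stdout for later use.
    - name: associate ceph OSD images with loopback devices
      shell: "losetup -f --show /var/{{ item }}.img"
      with_items: "{{ ceph_osd_images }}"
      register: ceph_loopback

Registering the losetup output is what lets the map(attribute='stdout') filter in the new task resolve to the list of loop device paths passed to partprobe.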