author     leonwang <wanghui71@huawei.com>   2018-03-15 08:25:05 +0000
committer  leonwang <wanghui71@huawei.com>   2018-03-15 08:39:22 +0000
commit     6bc7e08cc5d80941c80e8d36d3a2b1373f147a05 (patch)
tree       3e236cfc1f4ce35ad8ab09843010d16010da4054
parent     6018fcdd41c2074b2c94d8033f1434be028b054b (diff)
Merge nbp installation into opensds ansible script
In this update, nbp-ansible is removed from the stor4nfv repo and all of its code has been merged into the ansible repo. In addition, the latest update reduces a lot of the work needed to download and build the opensds source code, and some installation docs are also updated. Remove license statement for the moment. Change-Id: Ib8504d96e2d41e1c3ab7e0c94689111679d56abd Signed-off-by: leonwang <wanghui71@huawei.com>
-rw-r--r-- [-rwxr-xr-x]  ci/ansible/README.md  368
-rw-r--r-- [-rwxr-xr-x]  ci/ansible/clean.yml  26
-rw-r--r-- [-rwxr-xr-x]  ci/ansible/group_vars/ceph/all.yml  1002
-rw-r--r-- [-rwxr-xr-x]  ci/ansible/group_vars/ceph/ceph.hosts  16
-rw-r--r-- [-rwxr-xr-x]  ci/ansible/group_vars/ceph/ceph.yaml  13
-rw-r--r-- [-rwxr-xr-x]  ci/ansible/group_vars/ceph/osds.yml  518
-rw-r--r--  ci/ansible/group_vars/cinder/cinder.yaml  31
-rw-r--r-- [-rwxr-xr-x]  ci/ansible/group_vars/common.yml  73
-rw-r--r-- [-rwxr-xr-x]  ci/ansible/group_vars/lvm/lvm.yaml  13
-rw-r--r-- [-rwxr-xr-x]  ci/ansible/group_vars/osdsdb.yml  68
-rw-r--r-- [-rwxr-xr-x]  ci/ansible/group_vars/osdsdock.yml  157
-rw-r--r-- [-rwxr-xr-x]  ci/ansible/group_vars/osdslet.yml  39
-rw-r--r--  ci/ansible/install_ansible.sh  9
-rw-r--r-- [-rwxr-xr-x]  ci/ansible/local.hosts  13
-rw-r--r-- [-rwxr-xr-x]  ci/ansible/roles/cleaner/tasks/main.yml  355
-rw-r--r-- [-rwxr-xr-x]  ci/ansible/roles/common/tasks/main.yml  242
-rw-r--r--  ci/ansible/roles/nbp-installer/scenarios/csi.yml (renamed from ci/nbp-ansible/roles/installer/scenarios/csi.yml)  0
-rw-r--r--  ci/ansible/roles/nbp-installer/scenarios/flexvolume.yml (renamed from ci/nbp-ansible/roles/installer/scenarios/flexvolume.yml)  0
-rw-r--r--  ci/ansible/roles/nbp-installer/tasks/main.yml (renamed from ci/nbp-ansible/roles/installer/tasks/main.yml)  0
-rw-r--r--  ci/ansible/roles/osdsdb/scenarios/container.yml  20
-rw-r--r-- [-rwxr-xr-x]  ci/ansible/roles/osdsdb/scenarios/etcd.yml  78
-rw-r--r-- [-rwxr-xr-x]  ci/ansible/roles/osdsdb/tasks/main.yml  16
-rw-r--r-- [-rwxr-xr-x]  ci/ansible/roles/osdsdock/scenarios/ceph.yml  151
-rw-r--r-- [-rwxr-xr-x]  ci/ansible/roles/osdsdock/scenarios/cinder.yml  10
-rw-r--r--  ci/ansible/roles/osdsdock/scenarios/cinder_standalone.yml  291
-rw-r--r-- [-rwxr-xr-x]  ci/ansible/roles/osdsdock/scenarios/lvm.yml  47
-rw-r--r-- [-rwxr-xr-x]  ci/ansible/roles/osdsdock/tasks/main.yml  88
-rw-r--r-- [-rwxr-xr-x]  ci/ansible/roles/osdslet/tasks/main.yml  52
-rw-r--r-- [-rwxr-xr-x]  ci/ansible/site.yml  37
-rw-r--r--  ci/nbp-ansible/README.md  51
-rw-r--r--  ci/nbp-ansible/clean.yml  12
-rw-r--r--  ci/nbp-ansible/group_vars/common.yml  33
-rw-r--r--  ci/nbp-ansible/nbp.hosts  2
-rw-r--r--  ci/nbp-ansible/roles/cleaner/tasks/main.yml  22
-rw-r--r--  ci/nbp-ansible/roles/common/tasks/main.yml  24
-rw-r--r--  ci/nbp-ansible/site.yml  13
-rw-r--r--  tutorials/csi-plugin.md  5
-rw-r--r--  tutorials/flexvolume-plugin.md  2
38 files changed, 1900 insertions, 1997 deletions
diff --git a/ci/ansible/README.md b/ci/ansible/README.md
index 0e2a3d1..8e86694 100755..100644
--- a/ci/ansible/README.md
+++ b/ci/ansible/README.md
@@ -1,180 +1,188 @@
-# opensds-ansible
-This is an installation tool for opensds using ansible.
-
-## 1. How to install an opensds local cluster
-This installation document assumes there is a clean Ubuntu 16.04 environment. If golang is already installed in the environment, make sure the following parameters are configured in ```/etc/profile``` and run ``source /etc/profile``:
-```conf
-export GOROOT=/usr/local/go
-export GOPATH=$HOME/gopath
-export PATH=$PATH:$GOROOT/bin:$GOPATH/bin
-```
-
-### Pre-config (Ubuntu 16.04)
-First download some system packages:
-```
-sudo apt-get install -y openssh-server git make gcc
-```
-Then config ```/etc/ssh/sshd_config``` file and change one line:
-```conf
-PermitRootLogin yes
-```
-Next generate ssh-token:
-```bash
-ssh-keygen -t rsa
-ssh-copy-id -i ~/.ssh/id_rsa.pub <ip_address> # IP address of the target machine of the installation
-```
-
-### Install docker
-If use a standalone cinder as backend, you also need to install docker to run cinder service. Please see the [docker installation document](https://docs.docker.com/engine/installation/linux/docker-ce/ubuntu/) for details.
-
-### Install ansible tool
-```bash
-sudo add-apt-repository ppa:ansible/ansible # This step is needed to upgrade ansible to version 2.4.2 which is required for the ceph backend.
-sudo apt-get update
-sudo apt-get install ansible
-ansible --version # Ansible version 2.4.2 or higher is required for ceph; 2.0.0.2 or higher is needed for other backends.
-```
-
-### Download opensds source code
-```bash
-mkdir -p $HOME/gopath/src/github.com/opensds && cd $HOME/gopath/src/github.com/opensds
-git clone https://github.com/opensds/opensds.git -b <specified_branch_name>
-cd opensds/contrib/ansible
-```
-
-### Configure opensds cluster variables:
-##### System environment:
-Configure the `workplace` and `container_enabled` in `group_vars/common.yml`:
-```yaml
-workplace: /home/your_username # Change this field according to your username. If login as root, configure this parameter to '/root'
-
-container_enabled: <false_or_true>
-```
-
-##### LVM
-If `lvm` is chosen as storage backend, modify `group_vars/osdsdock.yml`:
-```yaml
-enabled_backend: lvm # Change it according to the chosen backend. Supported backends include 'lvm', 'ceph', and 'cinder'
-pv_device: "your_pv_device_path" # Specify a block device and ensure it exists if lvm is chosen
-vg_name: "specified_vg_name" # Specify a name for VG if choosing lvm
-```
-Modify ```group_vars/lvm/lvm.yaml```, change pool name to be the same as `vg_name` above:
-```yaml
-"vg001" # change pool name to be the same as vg_name
-```
-##### Ceph
-If `ceph` is chosen as storage backend, modify `group_vars/osdsdock.yml`:
-```yaml
-enabled_backend: ceph # Change it according to the chosen backend. Supported backends include 'lvm', 'ceph', and 'cinder'.
-ceph_pool_name: "specified_pool_name" # Specify a name for ceph pool if choosing ceph
-```
-Modify ```group_vars/ceph/ceph.yaml```, change pool name to be the same as `ceph_pool_name`:
-```yaml
-"rbd" # change pool name to be the same as ceph pool
-```
-Configure two files under ```group_vars/ceph```: `all.yml` and `osds.yml`. Here is an example:
-
-```group_vars/ceph/all.yml```:
-```yml
-ceph_origin: repository
-ceph_repository: community
-ceph_stable_release: luminous # Choose luminous as default version
-public_network: "192.168.3.0/24" # Run 'ip -4 address' to check the ip address
-cluster_network: "{{ public_network }}"
-monitor_interface: eth1 # Change to the network interface on the target machine
-```
-```group_vars/ceph/osds.yml```:
-```yml
-devices: # For ceph devices, append one or multiple devices like the example below:
- - '/dev/sda' # Ensure this device exists and available if ceph is chosen
- - '/dev/sdb' # Ensure this device exists and available if ceph is chosen
-osd_scenario: collocated
-```
-
-##### Cinder
-If `cinder` is chosen as storage backend, modify `group_vars/osdsdock.yml`:
-```yaml
-enabled_backend: cinder # Change it according to the chosen backend. Supported backends include 'lvm', 'ceph', and 'cinder'
-
-# Use block-box install cinder_standalone if true, see details in:
-use_cinder_standalone: true
-# If true, you can configure cinder_container_platform, cinder_image_tag,
-# cinder_volume_group.
-
-# Default: debian:stretch, and ubuntu:xenial, centos:7 is also supported.
-cinder_container_platform: debian:stretch
-# The image tag can be arbitrarily modified, as long as follow the image naming
-# conventions, default: debian-cinder
-cinder_image_tag: debian-cinder
-# The cinder standalone use lvm driver as default driver, therefore `volume_group`
-# should be configured, the default is: cinder-volumes. The volume group will be
-# removed when use ansible script clean environment.
-cinder_volume_group: cinder-volumes
-```
-
-Configure the auth and pool options to access cinder in `group_vars/cinder/cinder.yaml`. Do not need to make additional configure changes if using cinder standalone.
-
-### Check if the hosts can be reached
-```bash
-sudo ansible all -m ping -i local.hosts
-```
-
-### Run opensds-ansible playbook to start deploy
-```bash
-sudo ansible-playbook site.yml -i local.hosts
-```
-
-## 2. How to test opensds cluster
-
-### Configure opensds CLI tool
-```bash
-sudo cp $GOPATH/src/github.com/opensds/opensds/build/out/bin/osdsctl /usr/local/bin
-export OPENSDS_ENDPOINT=http://127.0.0.1:50040
-osdsctl pool list # Check if the pool resource is available
-```
-
-### Create a default profile first.
-```
-osdsctl profile create '{"name": "default", "description": "default policy"}'
-```
-
-### Create a volume.
-```
-osdsctl volume create 1 --name=test-001
-```
-For cinder, az needs to be specified.
-```
-osdsctl volume create 1 --name=test-001 --az nova
-```
-
-### List all volumes.
-```
-osdsctl volume list
-```
-
-### Delete the volume.
-```
-osdsctl volume delete <your_volume_id>
-```
-
-
-## 3. How to purge and clean opensds cluster
-
-### Run opensds-ansible playbook to clean the environment
-```bash
-sudo ansible-playbook clean.yml -i local.hosts
-```
-
-### Run ceph-ansible playbook to clean ceph cluster if ceph is deployed
-```bash
-cd /opt/ceph-ansible
-sudo ansible-playbook infrastructure-playbooks/purge-cluster.yml -i ceph.hosts
-```
-
-In addition, clean up the logical partition on the physical block device used by ceph, using the ```fdisk``` tool.
-
-### Remove ceph-ansible source code (optional)
-```bash
-cd ..
-sudo rm -rf /opt/ceph-ansible
-```
+# opensds-ansible
+This is an installation tool for opensds using ansible.
+
+## 1. How to install an opensds local cluster
+### Pre-config (Ubuntu 16.04)
+First, install some required system packages:
+```
+sudo apt-get install -y openssh-server git make gcc
+```
+Then edit the ```/etc/ssh/sshd_config``` file and change one line:
+```conf
+PermitRootLogin yes
+```
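+After changing ```/etc/ssh/sshd_config```, restart the ssh service so that the new setting takes effect (a small extra step not spelled out in the original guide):
+```bash
+sudo systemctl restart ssh
+```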
+Next, generate an ssh key and copy it to the target machine:
+```bash
+ssh-keygen -t rsa
+ssh-copy-id -i ~/.ssh/id_rsa.pub <ip_address> # IP address of the target machine of the installation
+```
+
+### Install docker
+If a standalone cinder is used as the backend, you also need to install docker to run the cinder service. Please see the [docker installation document](https://docs.docker.com/engine/installation/linux/docker-ce/ubuntu/) for details.
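+
+If you prefer to install docker-ce manually, a minimal sketch for Ubuntu 16.04 (following the linked Docker guide; the repository URL and package names come from that guide, not from this repo) is:
+```bash
+sudo apt-get update
+sudo apt-get install -y apt-transport-https ca-certificates curl software-properties-common
+curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
+sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
+sudo apt-get update && sudo apt-get install -y docker-ce
+docker --version   # verify the installation
+```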
+
+### Install ansible tool
+To install ansible, you can run `install_ansible.sh` directly or run the commands below:
+```bash
+sudo add-apt-repository ppa:ansible/ansible # This step is needed to upgrade ansible to version 2.4.2 which is required for the ceph backend.
+sudo apt-get update
+sudo apt-get install ansible
+ansible --version # Ansible version 2.4.2 or higher is required for ceph; 2.0.0.2 or higher is needed for other backends.
+```
+
+### Configure opensds cluster variables:
+##### System environment:
+Configure these variables below in `group_vars/common.yml`:
+```yaml
+opensds_release: v0.1.4 # The version should be at least v0.1.4.
+nbp_release: v0.1.0 # The version should be at least v0.1.0.
+
+container_enabled: <false_or_true>
+```
+
+If you want to integrate OpenSDS with a cloud platform (for example, k8s), please modify the `nbp_plugin_type` variable in `group_vars/common.yml`:
+```yaml
+nbp_plugin_type: standalone # standalone is the default integration mode, but you can change it to 'csi' or 'flexvolume'
+```
+
+#### Database configuration
+Currently OpenSDS uses `etcd` as the database backend, and the default db endpoint is `localhost:2379,localhost:2380`. To avoid conflicts with an existing environment (such as a local k8s cluster), we suggest changing the etcd cluster ports in `group_vars/osdsdb.yml`:
+```yaml
+db_endpoint: localhost:62379,localhost:62380
+
+etcd_host: 127.0.0.1
+etcd_port: 62379
+etcd_peer_port: 62380
+```
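+
+Before deploying, you can quickly confirm that the chosen ports are not already taken (a hedged convenience check, not part of the official steps):
+```bash
+ss -lnt | grep -E '62379|62380' || echo "etcd ports 62379/62380 are free"
+```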
+
+##### LVM
+If `lvm` is chosen as the storage backend, modify `group_vars/osdsdock.yml`:
+```yaml
+enabled_backend: lvm # Change it according to the chosen backend. Supported backends include 'lvm', 'ceph', and 'cinder'
+pv_devices: # Specify block devices and ensure they exist if you choose lvm
+ #- /dev/sdc
+ #- /dev/sdd
+vg_name: "specified_vg_name" # Specify a name for VG if choosing lvm
+```
+Modify ```group_vars/lvm/lvm.yaml``` and change the pool name to match `vg_name` above:
+```yaml
+"vg001" # change pool name to be the same as vg_name
+```
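+
+For reference, the lvm backend simply consumes an LVM volume group built from `pv_devices`; the commands below only illustrate what the variables map to (a sketch assuming `/dev/sdc` and `/dev/sdd` are the configured devices and `vg001` is the chosen `vg_name`; the playbook is expected to provision this for you):
+```bash
+sudo pvcreate /dev/sdc /dev/sdd
+sudo vgcreate vg001 /dev/sdc /dev/sdd   # vg001 must match vg_name and the pool name in lvm.yaml
+sudo vgs vg001                          # verify the volume group
+```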
+##### Ceph
+If `ceph` is chosen as the storage backend, modify `group_vars/osdsdock.yml`:
+```yaml
+enabled_backend: ceph # Change it according to the chosen backend. Supported backends include 'lvm', 'ceph', and 'cinder'.
+ceph_pools: # Specify the pool name(s) you want if choosing ceph
+ - rbd
+ #- ssd
+ #- sas
+```
+Modify ```group_vars/ceph/ceph.yaml``` and change the pool name to match the one specified in `ceph_pools`. If you enable multiple pools, append each additional pool in the same format:
+```yaml
+"rbd" # change pool name to be the same as ceph pool
+```
+Configure two files under ```group_vars/ceph```: `all.yml` and `osds.yml`. Here is an example:
+
+```group_vars/ceph/all.yml```:
+```yml
+ceph_origin: repository
+ceph_repository: community
+ceph_stable_release: luminous # Choose luminous as default version
+public_network: "192.168.3.0/24" # Run 'ip -4 address' to check the ip address
+cluster_network: "{{ public_network }}"
+monitor_interface: eth1 # Change to the network interface on the target machine
+```
+```group_vars/ceph/osds.yml```:
+```yml
+devices: # For ceph devices, append ONE or MULTIPLE devices like the example below:
+ - '/dev/sda' # Ensure this device exists and is available if ceph is chosen
+ - '/dev/sdb' # Ensure this device exists and is available if ceph is chosen
+osd_scenario: collocated
+```
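+
+Before deploying, you can check that the listed devices exist and are not in use (a hedged convenience check, not part of the official steps):
+```bash
+lsblk /dev/sda /dev/sdb          # the devices must exist
+mount | grep -E '/dev/sd[ab]'    # should print nothing if the devices are unused
+```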
+
+##### Cinder
+If `cinder` is chosen as the storage backend, modify `group_vars/osdsdock.yml`:
+```yaml
+enabled_backend: cinder # Change it according to the chosen backend. Supported backends include 'lvm', 'ceph', and 'cinder'
+
+# Use block-box to install cinder_standalone if true, see details in:
+use_cinder_standalone: true
+# If true, you can configure cinder_container_platform, cinder_image_tag,
+# cinder_volume_group.
+
+# Default: debian:stretch; ubuntu:xenial and centos:7 are also supported.
+cinder_container_platform: debian:stretch
+# The image tag can be arbitrarily modified, as long as it follows the image naming
+# conventions; default: debian-cinder
+cinder_image_tag: debian-cinder
+# The cinder standalone service uses the lvm driver as its default driver, so `volume_group`
+# should be configured; the default is cinder-volumes. The volume group will be
+# removed when the ansible clean script is run.
+cinder_volume_group: cinder-volumes
+```
+
+Configure the auth and pool options to access cinder in `group_vars/cinder/cinder.yaml`. No additional configuration changes are needed if using cinder standalone.
+
+### Check if the hosts can be reached
+```bash
+sudo ansible all -m ping -i local.hosts
+```
+
+### Run opensds-ansible playbook to start the deployment
+```bash
+sudo ansible-playbook site.yml -i local.hosts
+```
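+
+Once the playbook finishes, a quick sanity check (a hedged example; the process names are taken from the role names in this repo and may differ) is to confirm that the OpenSDS services are running and the API port is listening:
+```bash
+ps -ef | grep -E 'osdslet|osdsdock' | grep -v grep
+ss -lnt | grep 50040    # the osdslet API endpoint used in the next section
+```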
+
+## 2. How to test opensds cluster
+
+### Configure opensds CLI tool
+```bash
+sudo cp /opt/opensds-{opensds-release}-linux-amd64/bin/osdsctl /usr/local/bin
+export OPENSDS_ENDPOINT=http://127.0.0.1:50040
+export OPENSDS_AUTH_STRATEGY=noauth
+
+osdsctl pool list # Check if the pool resource is available
+```
+
+### Create a default profile first.
+```
+osdsctl profile create '{"name": "default", "description": "default policy"}'
+```
+
+### Create a volume.
+```
+osdsctl volume create 1 --name=test-001
+```
+For cinder, the availability zone (az) needs to be specified.
+```
+osdsctl volume create 1 --name=test-001 --az nova
+```
+
+### List all volumes.
+```
+osdsctl volume list
+```
+
+### Delete the volume.
+```
+osdsctl volume delete <your_volume_id>
+```
+
+
+## 3. How to purge and clean opensds cluster
+
+### Run opensds-ansible playbook to clean the environment
+```bash
+sudo ansible-playbook clean.yml -i local.hosts
+```
+
+### Run ceph-ansible playbook to clean ceph cluster if ceph is deployed
+```bash
+cd /opt/ceph-ansible
+sudo ansible-playbook infrastructure-playbooks/purge-cluster.yml -i ceph.hosts
+```
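+
+If the purge playbook leaves partitions or signatures behind on the OSD block devices, they can be wiped manually (a hedged example; replace the device names with the ones configured in ```group_vars/ceph/osds.yml```):
+```bash
+sudo wipefs -a /dev/sda /dev/sdb   # WARNING: destroys all data on these devices
+```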
+
+### Remove ceph-ansible source code (optional)
+```bash
+cd ..
+sudo rm -rf /opt/ceph-ansible
+```
diff --git a/ci/ansible/clean.yml b/ci/ansible/clean.yml
index 505c85b..fd2f1c9 100755..100644
--- a/ci/ansible/clean.yml
+++ b/ci/ansible/clean.yml
@@ -1,14 +1,14 @@
----
-# Defines some clean processes when banishing the cluster.
-
-- name: destory an opensds cluster
- hosts: all
- remote_user: root
- vars_files:
- - group_vars/common.yml
- - group_vars/osdsdb.yml
- - group_vars/osdsdock.yml
- gather_facts: false
- become: True
- roles:
+---
+# Defines some clean processes when banishing the cluster.
+
+- name: destory an opensds cluster
+ hosts: all
+ remote_user: root
+ vars_files:
+ - group_vars/common.yml
+ - group_vars/osdsdb.yml
+ - group_vars/osdsdock.yml
+ gather_facts: false
+ become: True
+ roles:
- cleaner \ No newline at end of file
diff --git a/ci/ansible/group_vars/ceph/all.yml b/ci/ansible/group_vars/ceph/all.yml
index 1d49e6c..9594d33 100755..100644
--- a/ci/ansible/group_vars/ceph/all.yml
+++ b/ci/ansible/group_vars/ceph/all.yml
@@ -1,501 +1,501 @@
----
-# Variables here are applicable to all host groups NOT roles
-
-# This sample file generated by generate_group_vars_sample.sh
-
-# Dummy variable to avoid error because ansible does not recognize the
-# file as a good configuration file when no variable in it.
-dummy:
-
-# You can override vars by using host or group vars
-
-###########
-# GENERAL #
-###########
-
-######################################
-# Releases name to number dictionary #
-######################################
-#ceph_release_num:
-# dumpling: 0.67
-# emperor: 0.72
-# firefly: 0.80
-# giant: 0.87
-# hammer: 0.94
-# infernalis: 9
-# jewel: 10
-# kraken: 11
-# luminous: 12
-# mimic: 13
-
-# Directory to fetch cluster fsid, keys etc...
-#fetch_directory: fetch/
-
-# The 'cluster' variable determines the name of the cluster.
-# Changing the default value to something else means that you will
-# need to change all the command line calls as well, for example if
-# your cluster name is 'foo':
-# "ceph health" will become "ceph --cluster foo health"
-#
-# An easier way to handle this is to use the environment variable CEPH_ARGS
-# So run: "export CEPH_ARGS="--cluster foo"
-# With that you will be able to run "ceph health" normally
-#cluster: ceph
-
-# Inventory host group variables
-#mon_group_name: mons
-#osd_group_name: osds
-#rgw_group_name: rgws
-#mds_group_name: mdss
-#nfs_group_name: nfss
-#restapi_group_name: restapis
-#rbdmirror_group_name: rbdmirrors
-#client_group_name: clients
-#iscsi_gw_group_name: iscsi-gws
-#mgr_group_name: mgrs
-
-# If check_firewall is true, then ansible will try to determine if the
-# Ceph ports are blocked by a firewall. If the machine running ansible
-# cannot reach the Ceph ports for some other reason, you may need or
-# want to set this to False to skip those checks.
-#check_firewall: False
-
-
-############
-# PACKAGES #
-############
-#debian_package_dependencies:
-# - python-pycurl
-# - hdparm
-
-#centos_package_dependencies:
-# - python-pycurl
-# - hdparm
-# - epel-release
-# - python-setuptools
-# - libselinux-python
-
-#redhat_package_dependencies:
-# - python-pycurl
-# - hdparm
-# - python-setuptools
-
-# Whether or not to install the ceph-test package.
-#ceph_test: false
-
-# Enable the ntp service by default to avoid clock skew on
-# ceph nodes
-#ntp_service_enabled: true
-
-# Set uid/gid to default '64045' for bootstrap directories.
-# '64045' is used for debian based distros. It must be set to 167 in case of rhel based distros.
-# These values have to be set according to the base OS used by the container image, NOT the host.
-#bootstrap_dirs_owner: "64045"
-#bootstrap_dirs_group: "64045"
-
-# This variable determines if ceph packages can be updated. If False, the
-# package resources will use "state=present". If True, they will use
-# "state=latest".
-#upgrade_ceph_packages: False
-
-#ceph_use_distro_backports: false # DEBIAN ONLY
-
-
-###########
-# INSTALL #
-###########
-#ceph_rhcs_cdn_install: False # backward compatibility with stable-2.2, will disappear in stable 3.1
-#ceph_repository_type: "{{ 'cdn' if ceph_rhcs_cdn_install else 'iso' if ceph_rhcs_iso_install else 'dummy' }}" # backward compatibility with stable-2.2, will disappear in stable 3.1
-#ceph_rhcs_iso_install: False # backward compatibility with stable-2.2, will disappear in stable 3.1
-#ceph_rhcs: False # backward compatibility with stable-2.2, will disappear in stable 3.1
-#ceph_stable: False # backward compatibility with stable-2.2, will disappear in stable 3.1
-#ceph_dev: False # backward compatibility with stable-2.2, will disappear in stable 3.1
-#ceph_stable_uca: False # backward compatibility with stable-2.2, will disappear in stable 3.1
-#ceph_custom: False # backward compatibility with stable-2.2, will disappear in stable 3.1
-
-# ORIGIN SOURCE
-#
-# Choose between:
-# - 'repository' means that you will get ceph installed through a new repository. Later below choose between 'community', 'rhcs' or 'dev'
-# - 'distro' means that no separate repo file will be added
-# you will get whatever version of Ceph is included in your Linux distro.
-# 'local' means that the ceph binaries will be copied over from the local machine
-#ceph_origin: "{{ 'repository' if ceph_rhcs or ceph_stable or ceph_dev or ceph_stable_uca or ceph_custom else 'dummy' }}" # backward compatibility with stable-2.2, will disappear in stable 3.1
-#valid_ceph_origins:
-# - repository
-# - distro
-# - local
-ceph_origin: repository
-ceph_repository: community
-
-#ceph_repository: "{{ 'community' if ceph_stable else 'rhcs' if ceph_rhcs else 'dev' if ceph_dev else 'uca' if ceph_stable_uca else 'custom' if ceph_custom else 'dummy' }}" # backward compatibility with stable-2.2, will disappear in stable 3.1
-#valid_ceph_repository:
-# - community
-# - rhcs
-# - dev
-# - uca
-# - custom
-
-
-# REPOSITORY: COMMUNITY VERSION
-#
-# Enabled when ceph_repository == 'community'
-#
-#ceph_mirror: http://download.ceph.com
-#ceph_stable_key: https://download.ceph.com/keys/release.asc
-ceph_stable_release: luminous
-#ceph_stable_repo: "{{ ceph_mirror }}/debian-{{ ceph_stable_release }}"
-
-#nfs_ganesha_stable: true # use stable repos for nfs-ganesha
-#nfs_ganesha_stable_branch: V2.5-stable
-#nfs_ganesha_stable_deb_repo: "{{ ceph_mirror }}/nfs-ganesha/deb-{{ nfs_ganesha_stable_branch }}/{{ ceph_stable_release }}"
-
-
-# Use the option below to specify your applicable package tree, eg. when using non-LTS Ubuntu versions
-# # for a list of available Debian distributions, visit http://download.ceph.com/debian-{{ ceph_stable_release }}/dists/
-# for more info read: https://github.com/ceph/ceph-ansible/issues/305
-#ceph_stable_distro_source: "{{ ansible_lsb.codename }}"
-
-# This option is needed for _both_ stable and dev version, so please always fill the right version
-# # for supported distros, see http://download.ceph.com/rpm-{{ ceph_stable_release }}/
-#ceph_stable_redhat_distro: el7
-
-
-# REPOSITORY: RHCS VERSION RED HAT STORAGE (from 1.3)
-#
-# Enabled when ceph_repository == 'rhcs'
-#
-# This version is only supported on RHEL >= 7.1
-# As of RHEL 7.1, libceph.ko and rbd.ko are now included in Red Hat's kernel
-# packages natively. The RHEL 7.1 kernel packages are more stable and secure than
-# using these 3rd-party kmods with RHEL 7.0. Please update your systems to RHEL
-# 7.1 or later if you want to use the kernel RBD client.
-#
-# The CephFS kernel client is undergoing rapid development upstream, and we do
-# not recommend running the CephFS kernel module on RHEL 7's 3.10 kernel at this
-# time. Please use ELRepo's latest upstream 4.x kernels if you want to run CephFS
-# on RHEL 7.
-#
-#
-#ceph_rhcs_version: "{{ ceph_stable_rh_storage_version | default(2) }}"
-#valid_ceph_repository_type:
-# - cdn
-# - iso
-#ceph_rhcs_iso_path: "{{ ceph_stable_rh_storage_iso_path | default('') }}"
-#ceph_rhcs_mount_path: "{{ ceph_stable_rh_storage_mount_path | default('/tmp/rh-storage-mount') }}"
-#ceph_rhcs_repository_path: "{{ ceph_stable_rh_storage_repository_path | default('/tmp/rh-storage-repo') }}" # where to copy iso's content
-
-# RHCS installation in Debian systems
-#ceph_rhcs_cdn_debian_repo: https://customername:customerpasswd@rhcs.download.redhat.com
-#ceph_rhcs_cdn_debian_repo_version: "/3-release/" # for GA, later for updates use /3-updates/
-
-
-# REPOSITORY: UBUNTU CLOUD ARCHIVE
-#
-# Enabled when ceph_repository == 'uca'
-#
-# This allows the install of Ceph from the Ubuntu Cloud Archive. The Ubuntu Cloud Archive
-# usually has newer Ceph releases than the normal distro repository.
-#
-#
-#ceph_stable_repo_uca: "http://ubuntu-cloud.archive.canonical.com/ubuntu"
-#ceph_stable_openstack_release_uca: liberty
-#ceph_stable_release_uca: "{{ansible_lsb.codename}}-updates/{{ceph_stable_openstack_release_uca}}"
-
-
-# REPOSITORY: DEV
-#
-# Enabled when ceph_repository == 'dev'
-#
-#ceph_dev_branch: master # development branch you would like to use e.g: master, wip-hack
-#ceph_dev_sha1: latest # distinct sha1 to use, defaults to 'latest' (as in latest built)
-
-#nfs_ganesha_dev: false # use development repos for nfs-ganesha
-
-# Set this to choose the version of ceph dev libraries used in the nfs-ganesha packages from shaman
-# flavors so far include: ceph_master, ceph_jewel, ceph_kraken, ceph_luminous
-#nfs_ganesha_flavor: "ceph_master"
-
-#ceph_iscsi_config_dev: true # special repo for deploying iSCSI gateways
-
-
-# REPOSITORY: CUSTOM
-#
-# Enabled when ceph_repository == 'custom'
-#
-# Use a custom repository to install ceph. For RPM, ceph_custom_repo should be
-# a URL to the .repo file to be installed on the targets. For deb,
-# ceph_custom_repo should be the URL to the repo base.
-#
-#ceph_custom_repo: https://server.domain.com/ceph-custom-repo
-
-
-# ORIGIN: LOCAL CEPH INSTALLATION
-#
-# Enabled when ceph_repository == 'local'
-#
-# Path to DESTDIR of the ceph install
-#ceph_installation_dir: "/path/to/ceph_installation/"
-# Whether or not to use installer script rundep_installer.sh
-# This script takes in rundep and installs the packages line by line onto the machine
-# If this is set to false then it is assumed that the machine ceph is being copied onto will already have
-# all runtime dependencies installed
-#use_installer: false
-# Root directory for ceph-ansible
-#ansible_dir: "/path/to/ceph-ansible"
-
-
-######################
-# CEPH CONFIGURATION #
-######################
-
-## Ceph options
-#
-# Each cluster requires a unique, consistent filesystem ID. By
-# default, the playbook generates one for you and stores it in a file
-# in `fetch_directory`. If you want to customize how the fsid is
-# generated, you may find it useful to disable fsid generation to
-# avoid cluttering up your ansible repo. If you set `generate_fsid` to
-# false, you *must* generate `fsid` in another way.
-# ACTIVATE THE FSID VARIABLE FOR NON-VAGRANT DEPLOYMENT
-#fsid: "{{ cluster_uuid.stdout }}"
-#generate_fsid: true
-
-#ceph_conf_key_directory: /etc/ceph
-
-#cephx: true
-
-## Client options
-#
-#rbd_cache: "true"
-#rbd_cache_writethrough_until_flush: "true"
-#rbd_concurrent_management_ops: 20
-
-#rbd_client_directories: true # this will create rbd_client_log_path and rbd_client_admin_socket_path directories with proper permissions
-
-# Permissions for the rbd_client_log_path and
-# rbd_client_admin_socket_path. Depending on your use case for Ceph
-# you may want to change these values. The default, which is used if
-# any of the variables are unset or set to a false value (like `null`
-# or `false`) is to automatically determine what is appropriate for
-# the Ceph version with non-OpenStack workloads -- ceph:ceph and 0770
-# for infernalis releases, and root:root and 1777 for pre-infernalis
-# releases.
-#
-# For other use cases, including running Ceph with OpenStack, you'll
-# want to set these differently:
-#
-# For OpenStack on RHEL, you'll want:
-# rbd_client_directory_owner: "qemu"
-# rbd_client_directory_group: "libvirtd" (or "libvirt", depending on your version of libvirt)
-# rbd_client_directory_mode: "0755"
-#
-# For OpenStack on Ubuntu or Debian, set:
-# rbd_client_directory_owner: "libvirt-qemu"
-# rbd_client_directory_group: "kvm"
-# rbd_client_directory_mode: "0755"
-#
-# If you set rbd_client_directory_mode, you must use a string (e.g.,
-# 'rbd_client_directory_mode: "0755"', *not*
-# 'rbd_client_directory_mode: 0755', or Ansible will complain: mode
-# must be in octal or symbolic form
-#rbd_client_directory_owner: null
-#rbd_client_directory_group: null
-#rbd_client_directory_mode: null
-
-#rbd_client_log_path: /var/log/ceph
-#rbd_client_log_file: "{{ rbd_client_log_path }}/qemu-guest-$pid.log" # must be writable by QEMU and allowed by SELinux or AppArmor
-#rbd_client_admin_socket_path: /var/run/ceph # must be writable by QEMU and allowed by SELinux or AppArmor
-
-## Monitor options
-#
-# You must define either monitor_interface, monitor_address or monitor_address_block.
-# These variables must be defined at least in all.yml and overrided if needed (inventory host file or group_vars/*.yml).
-# Eg. If you want to specify for each monitor which address the monitor will bind to you can set it in your **inventory host file** by using 'monitor_address' variable.
-# Preference will go to monitor_address if both monitor_address and monitor_interface are defined.
-# To use an IPv6 address, use the monitor_address setting instead (and set ip_version to ipv6)
-monitor_interface: ens3
-#monitor_address: 0.0.0.0
-#monitor_address_block: subnet
-# set to either ipv4 or ipv6, whichever your network is using
-#ip_version: ipv4
-#mon_use_fqdn: false # if set to true, the MON name used will be the fqdn in the ceph.conf
-
-## OSD options
-#
-journal_size: 100 # OSD journal size in MB
-public_network: 100.64.128.40/24
-cluster_network: "{{ public_network }}"
-#osd_mkfs_type: xfs
-#osd_mkfs_options_xfs: -f -i size=2048
-#osd_mount_options_xfs: noatime,largeio,inode64,swalloc
-#osd_objectstore: filestore
-
-# xattrs. by default, 'filestore xattr use omap' is set to 'true' if
-# 'osd_mkfs_type' is set to 'ext4'; otherwise it isn't set. This can
-# be set to 'true' or 'false' to explicitly override those
-# defaults. Leave it 'null' to use the default for your chosen mkfs
-# type.
-#filestore_xattr_use_omap: null
-
-## MDS options
-#
-#mds_use_fqdn: false # if set to true, the MDS name used will be the fqdn in the ceph.conf
-#mds_allow_multimds: false
-#mds_max_mds: 3
-
-## Rados Gateway options
-#
-#radosgw_dns_name: your.subdomain.tld # subdomains used by radosgw. See http://ceph.com/docs/master/radosgw/config/#enabling-subdomain-s3-calls
-#radosgw_resolve_cname: false # enable for radosgw to resolve DNS CNAME based bucket names
-#radosgw_civetweb_port: 8080
-#radosgw_civetweb_num_threads: 100
-# For additional civetweb configuration options available such as SSL, logging,
-# keepalive, and timeout settings, please see the civetweb docs at
-# https://github.com/civetweb/civetweb/blob/master/docs/UserManual.md
-#radosgw_civetweb_options: "num_threads={{ radosgw_civetweb_num_threads }}"
-# You must define either radosgw_interface, radosgw_address.
-# These variables must be defined at least in all.yml and overrided if needed (inventory host file or group_vars/*.yml).
-# Eg. If you want to specify for each radosgw node which address the radosgw will bind to you can set it in your **inventory host file** by using 'radosgw_address' variable.
-# Preference will go to radosgw_address if both radosgw_address and radosgw_interface are defined.
-# To use an IPv6 address, use the radosgw_address setting instead (and set ip_version to ipv6)
-#radosgw_interface: interface
-#radosgw_address: "{{ '0.0.0.0' if rgw_containerized_deployment else 'address' }}" # backward compatibility with stable-2.2, will disappear in stable 3.1
-#radosgw_address_block: subnet
-#radosgw_keystone: false # activate OpenStack Keystone options full detail here: http://ceph.com/docs/master/radosgw/keystone/
-# Rados Gateway options
-#email_address: foo@bar.com
-
-## REST API options
-#
-#restapi_interface: "{{ monitor_interface }}"
-#restapi_address: "{{ monitor_address }}"
-#restapi_port: 5000
-
-## Testing mode
-# enable this mode _only_ when you have a single node
-# if you don't want it keep the option commented
-#common_single_host_mode: true
-
-## Handlers - restarting daemons after a config change
-# if for whatever reasons the content of your ceph configuration changes
-# ceph daemons will be restarted as well. At the moment, we can not detect
-# which config option changed so all the daemons will be restarted. Although
-# this restart will be serialized for each node, in between a health check
-# will be performed so we make sure we don't move to the next node until
-# ceph is not healthy
-# Obviously between the checks (for monitors to be in quorum and for osd's pgs
-# to be clean) we have to wait. These retries and delays can be configurable
-# for both monitors and osds.
-#
-# Monitor handler checks
-#handler_health_mon_check_retries: 5
-#handler_health_mon_check_delay: 10
-#
-# OSD handler checks
-#handler_health_osd_check_retries: 40
-#handler_health_osd_check_delay: 30
-#handler_health_osd_check: true
-#
-# MDS handler checks
-#handler_health_mds_check_retries: 5
-#handler_health_mds_check_delay: 10
-#
-# RGW handler checks
-#handler_health_rgw_check_retries: 5
-#handler_health_rgw_check_delay: 10
-
-# NFS handler checks
-#handler_health_nfs_check_retries: 5
-#handler_health_nfs_check_delay: 10
-
-# RBD MIRROR handler checks
-#handler_health_rbd_mirror_check_retries: 5
-#handler_health_rbd_mirror_check_delay: 10
-
-# MGR handler checks
-#handler_health_mgr_check_retries: 5
-#handler_health_mgr_check_delay: 10
-
-###############
-# NFS-GANESHA #
-###############
-
-# Confiure the type of NFS gatway access. At least one must be enabled for an
-# NFS role to be useful
-#
-# Set this to true to enable File access via NFS. Requires an MDS role.
-#nfs_file_gw: false
-# Set this to true to enable Object access via NFS. Requires an RGW role.
-#nfs_obj_gw: true
-
-###################
-# CONFIG OVERRIDE #
-###################
-
-# Ceph configuration file override.
-# This allows you to specify more configuration options
-# using an INI style format.
-# The following sections are supported: [global], [mon], [osd], [mds], [rgw]
-#
-# Example:
-# ceph_conf_overrides:
-# global:
-# foo: 1234
-# bar: 5678
-#
-#ceph_conf_overrides: {}
-
-
-#############
-# OS TUNING #
-#############
-
-#disable_transparent_hugepage: true
-#os_tuning_params:
-# - { name: kernel.pid_max, value: 4194303 }
-# - { name: fs.file-max, value: 26234859 }
-# - { name: vm.zone_reclaim_mode, value: 0 }
-# - { name: vm.swappiness, value: 10 }
-# - { name: vm.min_free_kbytes, value: "{{ vm_min_free_kbytes }}" }
-
-# For Debian & Red Hat/CentOS installs set TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES
-# Set this to a byte value (e.g. 134217728)
-# A value of 0 will leave the package default.
-#ceph_tcmalloc_max_total_thread_cache: 0
-
-
-##########
-# DOCKER #
-##########
-#docker_exec_cmd:
-#docker: false
-#ceph_docker_image: "ceph/daemon"
-#ceph_docker_image_tag: latest
-#ceph_docker_registry: docker.io
-#ceph_docker_enable_centos_extra_repo: false
-#ceph_docker_on_openstack: false
-#ceph_mon_docker_interface: "{{ monitor_interface }}" # backward compatibility with stable-2.2, will disappear in stable 3.1
-#ceph_mon_docker_subnet: "{{ public_network }}" # backward compatibility with stable-2.2, will disappear in stable 3.1
-#mon_containerized_deployment: False # backward compatibility with stable-2.2, will disappear in stable 3.1
-#osd_containerized_deployment: False # backward compatibility with stable-2.2, will disappear in stable 3.1
-#mds_containerized_deployment: False # backward compatibility with stable-2.2, will disappear in stable 3.1
-#rgw_containerized_deployment: False # backward compatibility with stable-2.2, will disappear in stable 3.1
-#containerized_deployment: "{{ True if mon_containerized_deployment or osd_containerized_deployment or mds_containerized_deployment or rgw_containerized_deployment else False }}" # backward compatibility with stable-2.2, will disappear in stable 3.1
-
-
-############
-# KV store #
-############
-#containerized_deployment_with_kv: false
-#mon_containerized_default_ceph_conf_with_kv: false
-#kv_type: etcd
-#kv_endpoint: 127.0.0.1
-#kv_port: 2379
-
-
-# this is only here for usage with the rolling_update.yml playbook
-# do not ever change this here
-#rolling_update: false
-
-
+---
+# Variables here are applicable to all host groups NOT roles
+
+# This sample file generated by generate_group_vars_sample.sh
+
+# Dummy variable to avoid error because ansible does not recognize the
+# file as a good configuration file when no variable in it.
+dummy:
+
+# You can override vars by using host or group vars
+
+###########
+# GENERAL #
+###########
+
+######################################
+# Releases name to number dictionary #
+######################################
+#ceph_release_num:
+# dumpling: 0.67
+# emperor: 0.72
+# firefly: 0.80
+# giant: 0.87
+# hammer: 0.94
+# infernalis: 9
+# jewel: 10
+# kraken: 11
+# luminous: 12
+# mimic: 13
+
+# Directory to fetch cluster fsid, keys etc...
+#fetch_directory: fetch/
+
+# The 'cluster' variable determines the name of the cluster.
+# Changing the default value to something else means that you will
+# need to change all the command line calls as well, for example if
+# your cluster name is 'foo':
+# "ceph health" will become "ceph --cluster foo health"
+#
+# An easier way to handle this is to use the environment variable CEPH_ARGS
+# So run: "export CEPH_ARGS="--cluster foo"
+# With that you will be able to run "ceph health" normally
+#cluster: ceph
+
+# Inventory host group variables
+#mon_group_name: mons
+#osd_group_name: osds
+#rgw_group_name: rgws
+#mds_group_name: mdss
+#nfs_group_name: nfss
+#restapi_group_name: restapis
+#rbdmirror_group_name: rbdmirrors
+#client_group_name: clients
+#iscsi_gw_group_name: iscsi-gws
+#mgr_group_name: mgrs
+
+# If check_firewall is true, then ansible will try to determine if the
+# Ceph ports are blocked by a firewall. If the machine running ansible
+# cannot reach the Ceph ports for some other reason, you may need or
+# want to set this to False to skip those checks.
+#check_firewall: False
+
+
+############
+# PACKAGES #
+############
+#debian_package_dependencies:
+# - python-pycurl
+# - hdparm
+
+#centos_package_dependencies:
+# - python-pycurl
+# - hdparm
+# - epel-release
+# - python-setuptools
+# - libselinux-python
+
+#redhat_package_dependencies:
+# - python-pycurl
+# - hdparm
+# - python-setuptools
+
+# Whether or not to install the ceph-test package.
+#ceph_test: false
+
+# Enable the ntp service by default to avoid clock skew on
+# ceph nodes
+#ntp_service_enabled: true
+
+# Set uid/gid to default '64045' for bootstrap directories.
+# '64045' is used for debian based distros. It must be set to 167 in case of rhel based distros.
+# These values have to be set according to the base OS used by the container image, NOT the host.
+#bootstrap_dirs_owner: "64045"
+#bootstrap_dirs_group: "64045"
+
+# This variable determines if ceph packages can be updated. If False, the
+# package resources will use "state=present". If True, they will use
+# "state=latest".
+#upgrade_ceph_packages: False
+
+#ceph_use_distro_backports: false # DEBIAN ONLY
+
+
+###########
+# INSTALL #
+###########
+#ceph_rhcs_cdn_install: False # backward compatibility with stable-2.2, will disappear in stable 3.1
+#ceph_repository_type: "{{ 'cdn' if ceph_rhcs_cdn_install else 'iso' if ceph_rhcs_iso_install else 'dummy' }}" # backward compatibility with stable-2.2, will disappear in stable 3.1
+#ceph_rhcs_iso_install: False # backward compatibility with stable-2.2, will disappear in stable 3.1
+#ceph_rhcs: False # backward compatibility with stable-2.2, will disappear in stable 3.1
+#ceph_stable: False # backward compatibility with stable-2.2, will disappear in stable 3.1
+#ceph_dev: False # backward compatibility with stable-2.2, will disappear in stable 3.1
+#ceph_stable_uca: False # backward compatibility with stable-2.2, will disappear in stable 3.1
+#ceph_custom: False # backward compatibility with stable-2.2, will disappear in stable 3.1
+
+# ORIGIN SOURCE
+#
+# Choose between:
+# - 'repository' means that you will get ceph installed through a new repository. Later below choose between 'community', 'rhcs' or 'dev'
+# - 'distro' means that no separate repo file will be added
+# you will get whatever version of Ceph is included in your Linux distro.
+# 'local' means that the ceph binaries will be copied over from the local machine
+#ceph_origin: "{{ 'repository' if ceph_rhcs or ceph_stable or ceph_dev or ceph_stable_uca or ceph_custom else 'dummy' }}" # backward compatibility with stable-2.2, will disappear in stable 3.1
+#valid_ceph_origins:
+# - repository
+# - distro
+# - local
+ceph_origin: repository
+ceph_repository: community
+
+#ceph_repository: "{{ 'community' if ceph_stable else 'rhcs' if ceph_rhcs else 'dev' if ceph_dev else 'uca' if ceph_stable_uca else 'custom' if ceph_custom else 'dummy' }}" # backward compatibility with stable-2.2, will disappear in stable 3.1
+#valid_ceph_repository:
+# - community
+# - rhcs
+# - dev
+# - uca
+# - custom
+
+
+# REPOSITORY: COMMUNITY VERSION
+#
+# Enabled when ceph_repository == 'community'
+#
+#ceph_mirror: http://download.ceph.com
+#ceph_stable_key: https://download.ceph.com/keys/release.asc
+ceph_stable_release: luminous
+#ceph_stable_repo: "{{ ceph_mirror }}/debian-{{ ceph_stable_release }}"
+
+#nfs_ganesha_stable: true # use stable repos for nfs-ganesha
+#nfs_ganesha_stable_branch: V2.5-stable
+#nfs_ganesha_stable_deb_repo: "{{ ceph_mirror }}/nfs-ganesha/deb-{{ nfs_ganesha_stable_branch }}/{{ ceph_stable_release }}"
+
+
+# Use the option below to specify your applicable package tree, eg. when using non-LTS Ubuntu versions
+# # for a list of available Debian distributions, visit http://download.ceph.com/debian-{{ ceph_stable_release }}/dists/
+# for more info read: https://github.com/ceph/ceph-ansible/issues/305
+#ceph_stable_distro_source: "{{ ansible_lsb.codename }}"
+
+# This option is needed for _both_ stable and dev version, so please always fill the right version
+# # for supported distros, see http://download.ceph.com/rpm-{{ ceph_stable_release }}/
+#ceph_stable_redhat_distro: el7
+
+
+# REPOSITORY: RHCS VERSION RED HAT STORAGE (from 1.3)
+#
+# Enabled when ceph_repository == 'rhcs'
+#
+# This version is only supported on RHEL >= 7.1
+# As of RHEL 7.1, libceph.ko and rbd.ko are now included in Red Hat's kernel
+# packages natively. The RHEL 7.1 kernel packages are more stable and secure than
+# using these 3rd-party kmods with RHEL 7.0. Please update your systems to RHEL
+# 7.1 or later if you want to use the kernel RBD client.
+#
+# The CephFS kernel client is undergoing rapid development upstream, and we do
+# not recommend running the CephFS kernel module on RHEL 7's 3.10 kernel at this
+# time. Please use ELRepo's latest upstream 4.x kernels if you want to run CephFS
+# on RHEL 7.
+#
+#
+#ceph_rhcs_version: "{{ ceph_stable_rh_storage_version | default(2) }}"
+#valid_ceph_repository_type:
+# - cdn
+# - iso
+#ceph_rhcs_iso_path: "{{ ceph_stable_rh_storage_iso_path | default('') }}"
+#ceph_rhcs_mount_path: "{{ ceph_stable_rh_storage_mount_path | default('/tmp/rh-storage-mount') }}"
+#ceph_rhcs_repository_path: "{{ ceph_stable_rh_storage_repository_path | default('/tmp/rh-storage-repo') }}" # where to copy iso's content
+
+# RHCS installation in Debian systems
+#ceph_rhcs_cdn_debian_repo: https://customername:customerpasswd@rhcs.download.redhat.com
+#ceph_rhcs_cdn_debian_repo_version: "/3-release/" # for GA, later for updates use /3-updates/
+
+
+# REPOSITORY: UBUNTU CLOUD ARCHIVE
+#
+# Enabled when ceph_repository == 'uca'
+#
+# This allows the install of Ceph from the Ubuntu Cloud Archive. The Ubuntu Cloud Archive
+# usually has newer Ceph releases than the normal distro repository.
+#
+#
+#ceph_stable_repo_uca: "http://ubuntu-cloud.archive.canonical.com/ubuntu"
+#ceph_stable_openstack_release_uca: liberty
+#ceph_stable_release_uca: "{{ansible_lsb.codename}}-updates/{{ceph_stable_openstack_release_uca}}"
+
+
+# REPOSITORY: DEV
+#
+# Enabled when ceph_repository == 'dev'
+#
+#ceph_dev_branch: master # development branch you would like to use e.g: master, wip-hack
+#ceph_dev_sha1: latest # distinct sha1 to use, defaults to 'latest' (as in latest built)
+
+#nfs_ganesha_dev: false # use development repos for nfs-ganesha
+
+# Set this to choose the version of ceph dev libraries used in the nfs-ganesha packages from shaman
+# flavors so far include: ceph_master, ceph_jewel, ceph_kraken, ceph_luminous
+#nfs_ganesha_flavor: "ceph_master"
+
+#ceph_iscsi_config_dev: true # special repo for deploying iSCSI gateways
+
+
+# REPOSITORY: CUSTOM
+#
+# Enabled when ceph_repository == 'custom'
+#
+# Use a custom repository to install ceph. For RPM, ceph_custom_repo should be
+# a URL to the .repo file to be installed on the targets. For deb,
+# ceph_custom_repo should be the URL to the repo base.
+#
+#ceph_custom_repo: https://server.domain.com/ceph-custom-repo
+
+
+# ORIGIN: LOCAL CEPH INSTALLATION
+#
+# Enabled when ceph_repository == 'local'
+#
+# Path to DESTDIR of the ceph install
+#ceph_installation_dir: "/path/to/ceph_installation/"
+# Whether or not to use installer script rundep_installer.sh
+# This script takes in rundep and installs the packages line by line onto the machine
+# If this is set to false then it is assumed that the machine ceph is being copied onto will already have
+# all runtime dependencies installed
+#use_installer: false
+# Root directory for ceph-ansible
+#ansible_dir: "/path/to/ceph-ansible"
+
+
+######################
+# CEPH CONFIGURATION #
+######################
+
+## Ceph options
+#
+# Each cluster requires a unique, consistent filesystem ID. By
+# default, the playbook generates one for you and stores it in a file
+# in `fetch_directory`. If you want to customize how the fsid is
+# generated, you may find it useful to disable fsid generation to
+# avoid cluttering up your ansible repo. If you set `generate_fsid` to
+# false, you *must* generate `fsid` in another way.
+# ACTIVATE THE FSID VARIABLE FOR NON-VAGRANT DEPLOYMENT
+#fsid: "{{ cluster_uuid.stdout }}"
+#generate_fsid: true
+
+#ceph_conf_key_directory: /etc/ceph
+
+#cephx: true
+
+## Client options
+#
+#rbd_cache: "true"
+#rbd_cache_writethrough_until_flush: "true"
+#rbd_concurrent_management_ops: 20
+
+#rbd_client_directories: true # this will create rbd_client_log_path and rbd_client_admin_socket_path directories with proper permissions
+
+# Permissions for the rbd_client_log_path and
+# rbd_client_admin_socket_path. Depending on your use case for Ceph
+# you may want to change these values. The default, which is used if
+# any of the variables are unset or set to a false value (like `null`
+# or `false`) is to automatically determine what is appropriate for
+# the Ceph version with non-OpenStack workloads -- ceph:ceph and 0770
+# for infernalis releases, and root:root and 1777 for pre-infernalis
+# releases.
+#
+# For other use cases, including running Ceph with OpenStack, you'll
+# want to set these differently:
+#
+# For OpenStack on RHEL, you'll want:
+# rbd_client_directory_owner: "qemu"
+# rbd_client_directory_group: "libvirtd" (or "libvirt", depending on your version of libvirt)
+# rbd_client_directory_mode: "0755"
+#
+# For OpenStack on Ubuntu or Debian, set:
+# rbd_client_directory_owner: "libvirt-qemu"
+# rbd_client_directory_group: "kvm"
+# rbd_client_directory_mode: "0755"
+#
+# If you set rbd_client_directory_mode, you must use a string (e.g.,
+# 'rbd_client_directory_mode: "0755"', *not*
+# 'rbd_client_directory_mode: 0755', or Ansible will complain: mode
+# must be in octal or symbolic form
+#rbd_client_directory_owner: null
+#rbd_client_directory_group: null
+#rbd_client_directory_mode: null
+
+#rbd_client_log_path: /var/log/ceph
+#rbd_client_log_file: "{{ rbd_client_log_path }}/qemu-guest-$pid.log" # must be writable by QEMU and allowed by SELinux or AppArmor
+#rbd_client_admin_socket_path: /var/run/ceph # must be writable by QEMU and allowed by SELinux or AppArmor
+
+## Monitor options
+#
+# You must define either monitor_interface, monitor_address or monitor_address_block.
+# These variables must be defined at least in all.yml and overrided if needed (inventory host file or group_vars/*.yml).
+# Eg. If you want to specify for each monitor which address the monitor will bind to you can set it in your **inventory host file** by using 'monitor_address' variable.
+# Preference will go to monitor_address if both monitor_address and monitor_interface are defined.
+# To use an IPv6 address, use the monitor_address setting instead (and set ip_version to ipv6)
+monitor_interface: ens3
+#monitor_address: 0.0.0.0
+#monitor_address_block: subnet
+# set to either ipv4 or ipv6, whichever your network is using
+#ip_version: ipv4
+#mon_use_fqdn: false # if set to true, the MON name used will be the fqdn in the ceph.conf
+
+## OSD options
+#
+journal_size: 100 # OSD journal size in MB
+public_network: 100.64.128.40/24
+cluster_network: "{{ public_network }}"
+#osd_mkfs_type: xfs
+#osd_mkfs_options_xfs: -f -i size=2048
+#osd_mount_options_xfs: noatime,largeio,inode64,swalloc
+#osd_objectstore: filestore
+
+# xattrs. by default, 'filestore xattr use omap' is set to 'true' if
+# 'osd_mkfs_type' is set to 'ext4'; otherwise it isn't set. This can
+# be set to 'true' or 'false' to explicitly override those
+# defaults. Leave it 'null' to use the default for your chosen mkfs
+# type.
+#filestore_xattr_use_omap: null
+
+## MDS options
+#
+#mds_use_fqdn: false # if set to true, the MDS name used will be the fqdn in the ceph.conf
+#mds_allow_multimds: false
+#mds_max_mds: 3
+
+## Rados Gateway options
+#
+#radosgw_dns_name: your.subdomain.tld # subdomains used by radosgw. See http://ceph.com/docs/master/radosgw/config/#enabling-subdomain-s3-calls
+#radosgw_resolve_cname: false # enable for radosgw to resolve DNS CNAME based bucket names
+#radosgw_civetweb_port: 8080
+#radosgw_civetweb_num_threads: 100
+# For additional civetweb configuration options available such as SSL, logging,
+# keepalive, and timeout settings, please see the civetweb docs at
+# https://github.com/civetweb/civetweb/blob/master/docs/UserManual.md
+#radosgw_civetweb_options: "num_threads={{ radosgw_civetweb_num_threads }}"
+# You must define either radosgw_interface, radosgw_address.
+# These variables must be defined at least in all.yml and overrided if needed (inventory host file or group_vars/*.yml).
+# Eg. If you want to specify for each radosgw node which address the radosgw will bind to you can set it in your **inventory host file** by using 'radosgw_address' variable.
+# Preference will go to radosgw_address if both radosgw_address and radosgw_interface are defined.
+# To use an IPv6 address, use the radosgw_address setting instead (and set ip_version to ipv6)
+#radosgw_interface: interface
+#radosgw_address: "{{ '0.0.0.0' if rgw_containerized_deployment else 'address' }}" # backward compatibility with stable-2.2, will disappear in stable 3.1
+#radosgw_address_block: subnet
+#radosgw_keystone: false # activate OpenStack Keystone options full detail here: http://ceph.com/docs/master/radosgw/keystone/
+# Rados Gateway options
+#email_address: foo@bar.com
+
+## REST API options
+#
+#restapi_interface: "{{ monitor_interface }}"
+#restapi_address: "{{ monitor_address }}"
+#restapi_port: 5000
+
+## Testing mode
+# enable this mode _only_ when you have a single node
+# if you don't want it keep the option commented
+#common_single_host_mode: true
+
+## Handlers - restarting daemons after a config change
+# if for whatever reasons the content of your ceph configuration changes
+# ceph daemons will be restarted as well. At the moment, we can not detect
+# which config option changed so all the daemons will be restarted. Although
+# this restart will be serialized for each node, in between a health check
+# will be performed so we make sure we don't move to the next node until
+# ceph is not healthy
+# Obviously between the checks (for monitors to be in quorum and for osd's pgs
+# to be clean) we have to wait. These retries and delays can be configurable
+# for both monitors and osds.
+#
+# Monitor handler checks
+#handler_health_mon_check_retries: 5
+#handler_health_mon_check_delay: 10
+#
+# OSD handler checks
+#handler_health_osd_check_retries: 40
+#handler_health_osd_check_delay: 30
+#handler_health_osd_check: true
+#
+# MDS handler checks
+#handler_health_mds_check_retries: 5
+#handler_health_mds_check_delay: 10
+#
+# RGW handler checks
+#handler_health_rgw_check_retries: 5
+#handler_health_rgw_check_delay: 10
+
+# NFS handler checks
+#handler_health_nfs_check_retries: 5
+#handler_health_nfs_check_delay: 10
+
+# RBD MIRROR handler checks
+#handler_health_rbd_mirror_check_retries: 5
+#handler_health_rbd_mirror_check_delay: 10
+
+# MGR handler checks
+#handler_health_mgr_check_retries: 5
+#handler_health_mgr_check_delay: 10
+
+###############
+# NFS-GANESHA #
+###############
+
+# Confiure the type of NFS gatway access. At least one must be enabled for an
+# NFS role to be useful
+#
+# Set this to true to enable File access via NFS. Requires an MDS role.
+#nfs_file_gw: false
+# Set this to true to enable Object access via NFS. Requires an RGW role.
+#nfs_obj_gw: true
+
+###################
+# CONFIG OVERRIDE #
+###################
+
+# Ceph configuration file override.
+# This allows you to specify more configuration options
+# using an INI style format.
+# The following sections are supported: [global], [mon], [osd], [mds], [rgw]
+#
+# Example:
+# ceph_conf_overrides:
+# global:
+# foo: 1234
+# bar: 5678
+#
+#ceph_conf_overrides: {}
+
+
+#############
+# OS TUNING #
+#############
+
+#disable_transparent_hugepage: true
+#os_tuning_params:
+# - { name: kernel.pid_max, value: 4194303 }
+# - { name: fs.file-max, value: 26234859 }
+# - { name: vm.zone_reclaim_mode, value: 0 }
+# - { name: vm.swappiness, value: 10 }
+# - { name: vm.min_free_kbytes, value: "{{ vm_min_free_kbytes }}" }
+
+# For Debian & Red Hat/CentOS installs set TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES
+# Set this to a byte value (e.g. 134217728)
+# A value of 0 will leave the package default.
+#ceph_tcmalloc_max_total_thread_cache: 0
+
+
+##########
+# DOCKER #
+##########
+#docker_exec_cmd:
+#docker: false
+#ceph_docker_image: "ceph/daemon"
+#ceph_docker_image_tag: latest
+#ceph_docker_registry: docker.io
+#ceph_docker_enable_centos_extra_repo: false
+#ceph_docker_on_openstack: false
+#ceph_mon_docker_interface: "{{ monitor_interface }}" # backward compatibility with stable-2.2, will disappear in stable 3.1
+#ceph_mon_docker_subnet: "{{ public_network }}" # backward compatibility with stable-2.2, will disappear in stable 3.1
+#mon_containerized_deployment: False # backward compatibility with stable-2.2, will disappear in stable 3.1
+#osd_containerized_deployment: False # backward compatibility with stable-2.2, will disappear in stable 3.1
+#mds_containerized_deployment: False # backward compatibility with stable-2.2, will disappear in stable 3.1
+#rgw_containerized_deployment: False # backward compatibility with stable-2.2, will disappear in stable 3.1
+#containerized_deployment: "{{ True if mon_containerized_deployment or osd_containerized_deployment or mds_containerized_deployment or rgw_containerized_deployment else False }}" # backward compatibility with stable-2.2, will disappear in stable 3.1
+
+
+############
+# KV store #
+############
+#containerized_deployment_with_kv: false
+#mon_containerized_default_ceph_conf_with_kv: false
+#kv_type: etcd
+#kv_endpoint: 127.0.0.1
+#kv_port: 2379
+
+
+# this is only here for usage with the rolling_update.yml playbook
+# do not ever change this here
+#rolling_update: false
+
+
diff --git a/ci/ansible/group_vars/ceph/ceph.hosts b/ci/ansible/group_vars/ceph/ceph.hosts
index 42f5da8..34a7b26 100755..100644
--- a/ci/ansible/group_vars/ceph/ceph.hosts
+++ b/ci/ansible/group_vars/ceph/ceph.hosts
@@ -1,8 +1,8 @@
-[mons]
-localhost ansible_connection=local
-
-[osds]
-localhost ansible_connection=local
-
-[mgrs]
-localhost ansible_connection=local
+[mons]
+localhost ansible_connection=local
+
+[osds]
+localhost ansible_connection=local
+
+[mgrs]
+localhost ansible_connection=local
diff --git a/ci/ansible/group_vars/ceph/ceph.yaml b/ci/ansible/group_vars/ceph/ceph.yaml
index 8272cd1..5e70724 100755..100644
--- a/ci/ansible/group_vars/ceph/ceph.yaml
+++ b/ci/ansible/group_vars/ceph/ceph.yaml
@@ -1,5 +1,8 @@
-configFile: /etc/ceph/ceph.conf
-pool:
- "rbd": # change pool name same to ceph pool, but don't change it if you choose lvm backend
- diskType: SSD
- AZ: default \ No newline at end of file
+configFile: /etc/ceph/ceph.conf
+pool:
+  "rbd": # change the pool name to match your ceph pool, but don't change it if you choose the lvm backend
+ diskType: SSD
+ AZ: default
+ accessProtocol: rbd
+ thinProvisioned: true
+ compressed: false
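The new per-pool fields (accessProtocol, thinProvisioned, compressed) are set on each entry under pool. A sketch of ceph.yaml with a hypothetical second pool added ("ssd-pool" and its properties are placeholders):

```yaml
# Sketch: group_vars/ceph/ceph.yaml with a second, hypothetical pool entry.
configFile: /etc/ceph/ceph.conf
pool:
  "rbd":
    diskType: SSD
    AZ: default
    accessProtocol: rbd
    thinProvisioned: true
    compressed: false
  "ssd-pool":                     # placeholder name; must match a real Ceph pool
    diskType: SSD
    AZ: az-1
    accessProtocol: rbd
    thinProvisioned: true
    compressed: true
```

A pool added here presumably also has to exist on the Ceph side; the osdsdock ceph scenario creates the pools listed under ceph_pools in group_vars/osdsdock.yml.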
diff --git a/ci/ansible/group_vars/ceph/osds.yml b/ci/ansible/group_vars/ceph/osds.yml
index 1f12204..57cf581 100755..100644
--- a/ci/ansible/group_vars/ceph/osds.yml
+++ b/ci/ansible/group_vars/ceph/osds.yml
@@ -1,259 +1,259 @@
----
-# Variables here are applicable to all host groups NOT roles
-
-# This sample file generated by generate_group_vars_sample.sh
-
-# Dummy variable to avoid error because ansible does not recognize the
-# file as a good configuration file when no variable in it.
-dummy:
-
-# You can override default vars defined in defaults/main.yml here,
-# but I would advice to use host or group vars instead
-
-#raw_journal_devices: "{{ dedicated_devices }}" # backward compatibility with stable-2.2, will disappear in stable 3.1
-#journal_collocation: False # backward compatibility with stable-2.2, will disappear in stable 3.1
-#raw_multi_journal: False # backward compatibility with stable-2.2, will disappear in stable 3.1
-#dmcrytpt_journal_collocation: False # backward compatibility with stable-2.2, will disappear in stable 3.1
-#dmcrypt_dedicated_journal: False # backward compatibility with stable-2.2, will disappear in stable 3.1
-
-
-###########
-# GENERAL #
-###########
-
-# Even though OSD nodes should not have the admin key
-# at their disposal, some people might want to have it
-# distributed on OSD nodes. Setting 'copy_admin_key' to 'true'
-# will copy the admin key to the /etc/ceph/ directory
-#copy_admin_key: false
-
-
-####################
-# OSD CRUSH LOCATION
-####################
-
-# /!\
-#
-# BE EXTREMELY CAREFUL WITH THIS OPTION
-# DO NOT USE IT UNLESS YOU KNOW WHAT YOU ARE DOING
-#
-# /!\
-#
-# It is probably best to keep this option to 'false' as the default
-# suggests it. This option should only be used while doing some complex
-# CRUSH map. It allows you to force a specific location for a set of OSDs.
-#
-# The following options will build a ceph.conf with OSD sections
-# Example:
-# [osd.X]
-# osd crush location = "root=location"
-#
-# This works with your inventory file
-# To match the following 'osd_crush_location' option the inventory must look like:
-#
-# [osds]
-# osd0 ceph_crush_root=foo ceph_crush_rack=bar
-
-#crush_location: false
-#osd_crush_location: "\"root={{ ceph_crush_root }} rack={{ ceph_crush_rack }} host={{ ansible_hostname }}\""
-
-
-##############
-# CEPH OPTIONS
-##############
-
-# Devices to be used as OSDs
-# You can pre-provision disks that are not present yet.
-# Ansible will just skip them. Newly added disk will be
-# automatically configured during the next run.
-#
-
-
-# Declare devices to be used as OSDs
-# All scenario(except 3rd) inherit from the following device declaration
-
-devices:
-# - /dev/sda
-# - /dev/sdc
-# - /dev/sdd
-# - /dev/sde
-
-#devices: []
-
-
-#'osd_auto_discovery' mode prevents you from filling out the 'devices' variable above.
-# You can use this option with First and Forth and Fifth OSDS scenario.
-# Device discovery is based on the Ansible fact 'ansible_devices'
-# which reports all the devices on a system. If chosen all the disks
-# found will be passed to ceph-disk. You should not be worried on using
-# this option since ceph-disk has a built-in check which looks for empty devices.
-# Thus devices with existing partition tables will not be used.
-#
-#osd_auto_discovery: false
-
-# Encrypt your OSD device using dmcrypt
-# If set to True, no matter which osd_objecstore and osd_scenario you use the data will be encrypted
-#dmcrypt: "{{ True if dmcrytpt_journal_collocation or dmcrypt_dedicated_journal else False }}" # backward compatibility with stable-2.2, will disappear in stable 3.1
-
-
-# I. First scenario: collocated
-#
-# To enable this scenario do: osd_scenario: collocated
-#
-#
-# If osd_objectstore: filestore is enabled both 'ceph data' and 'ceph journal' partitions
-# will be stored on the same device.
-#
-# If osd_objectstore: bluestore is enabled 'ceph data', 'ceph block', 'ceph block.db', 'ceph block.wal' will be stored
-# on the same device. The device will get 2 partitions:
-# - One for 'data', called 'ceph data'
-# - One for 'ceph block', 'ceph block.db', 'ceph block.wal' called 'ceph block'
-#
-# Example of what you will get:
-# [root@ceph-osd0 ~]# blkid /dev/sda*
-# /dev/sda: PTTYPE="gpt"
-# /dev/sda1: UUID="9c43e346-dd6e-431f-92d8-cbed4ccb25f6" TYPE="xfs" PARTLABEL="ceph data" PARTUUID="749c71c9-ed8f-4930-82a7-a48a3bcdb1c7"
-# /dev/sda2: PARTLABEL="ceph block" PARTUUID="e6ca3e1d-4702-4569-abfa-e285de328e9d"
-#
-
-#osd_scenario: "{{ 'collocated' if journal_collocation or dmcrytpt_journal_collocation else 'non-collocated' if raw_multi_journal or dmcrypt_dedicated_journal else 'dummy' }}" # backward compatibility with stable-2.2, will disappear in stable 3.1
-#valid_osd_scenarios:
-# - collocated
-# - non-collocated
-# - lvm
-osd_scenario: collocated
-
-# II. Second scenario: non-collocated
-#
-# To enable this scenario do: osd_scenario: non-collocated
-#
-# If osd_objectstore: filestore is enabled 'ceph data' and 'ceph journal' partitions
-# will be stored on different devices:
-# - 'ceph data' will be stored on the device listed in 'devices'
-# - 'ceph journal' will be stored on the device listed in 'dedicated_devices'
-#
-# Let's take an example, imagine 'devices' was declared like this:
-#
-# devices:
-# - /dev/sda
-# - /dev/sdb
-# - /dev/sdc
-# - /dev/sdd
-#
-# And 'dedicated_devices' was declared like this:
-#
-# dedicated_devices:
-# - /dev/sdf
-# - /dev/sdf
-# - /dev/sdg
-# - /dev/sdg
-#
-# This will result in the following mapping:
-# - /dev/sda will have /dev/sdf1 as journal
-# - /dev/sdb will have /dev/sdf2 as a journal
-# - /dev/sdc will have /dev/sdg1 as a journal
-# - /dev/sdd will have /dev/sdg2 as a journal
-#
-#
-# If osd_objectstore: bluestore is enabled, both 'ceph block.db' and 'ceph block.wal' partitions will be stored
-# on a dedicated device.
-#
-# So the following will happen:
-# - The devices listed in 'devices' will get 2 partitions, one for 'block' and one for 'data'.
-# 'data' is only 100MB big and do not store any of your data, it's just a bunch of Ceph metadata.
-# 'block' will store all your actual data.
-# - The devices in 'dedicated_devices' will get 1 partition for RocksDB DB, called 'block.db'
-# and one for RocksDB WAL, called 'block.wal'
-#
-# By default dedicated_devices will represent block.db
-#
-# Example of what you will get:
-# [root@ceph-osd0 ~]# blkid /dev/sd*
-# /dev/sda: PTTYPE="gpt"
-# /dev/sda1: UUID="c6821801-2f21-4980-add0-b7fc8bd424d5" TYPE="xfs" PARTLABEL="ceph data" PARTUUID="f2cc6fa8-5b41-4428-8d3f-6187453464d0"
-# /dev/sda2: PARTLABEL="ceph block" PARTUUID="ea454807-983a-4cf2-899e-b2680643bc1c"
-# /dev/sdb: PTTYPE="gpt"
-# /dev/sdb1: PARTLABEL="ceph block.db" PARTUUID="af5b2d74-4c08-42cf-be57-7248c739e217"
-# /dev/sdb2: PARTLABEL="ceph block.wal" PARTUUID="af3f8327-9aa9-4c2b-a497-cf0fe96d126a"
-#dedicated_devices: []
-
-
-# More device granularity for Bluestore
-#
-# ONLY if osd_objectstore: bluestore is enabled.
-#
-# By default, if 'bluestore_wal_devices' is empty, it will get the content of 'dedicated_devices'.
-# If set, then you will have a dedicated partition on a specific device for block.wal.
-#
-# Example of what you will get:
-# [root@ceph-osd0 ~]# blkid /dev/sd*
-# /dev/sda: PTTYPE="gpt"
-# /dev/sda1: UUID="39241ae9-d119-4335-96b3-0898da8f45ce" TYPE="xfs" PARTLABEL="ceph data" PARTUUID="961e7313-bdb7-49e7-9ae7-077d65c4c669"
-# /dev/sda2: PARTLABEL="ceph block" PARTUUID="bff8e54e-b780-4ece-aa16-3b2f2b8eb699"
-# /dev/sdb: PTTYPE="gpt"
-# /dev/sdb1: PARTLABEL="ceph block.db" PARTUUID="0734f6b6-cc94-49e9-93de-ba7e1d5b79e3"
-# /dev/sdc: PTTYPE="gpt"
-# /dev/sdc1: PARTLABEL="ceph block.wal" PARTUUID="824b84ba-6777-4272-bbbd-bfe2a25cecf3"
-#bluestore_wal_devices: "{{ dedicated_devices }}"
-
-# III. Use ceph-volume to create OSDs from logical volumes.
-# Use 'osd_scenario: lvm' to enable this scenario. Currently we only support dedicated journals
-# when using lvm, not collocated journals.
-# lvm_volumes is a list of dictionaries. Each dictionary must contain a data, journal and vg_name
-# key. Any logical volume or logical group used must be a name and not a path.
-# data must be a logical volume
-# journal can be either a lv, device or partition. You can not use the same journal for many data lvs.
-# data_vg must be the volume group name of the data lv
-# journal_vg is optional and must be the volume group name of the journal lv, if applicable
-# For example:
-# lvm_volumes:
-# - data: data-lv1
-# data_vg: vg1
-# journal: journal-lv1
-# journal_vg: vg2
-# - data: data-lv2
-# journal: /dev/sda
-# data_vg: vg1
-# - data: data-lv3
-# journal: /dev/sdb1
-# data_vg: vg2
-#lvm_volumes: []
-
-
-##########
-# DOCKER #
-##########
-
-#ceph_config_keys: [] # DON'T TOUCH ME
-
-# Resource limitation
-# For the whole list of limits you can apply see: docs.docker.com/engine/admin/resource_constraints
-# Default values are based from: https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/2/html/red_hat_ceph_storage_hardware_guide/minimum_recommendations
-# These options can be passed using the 'ceph_osd_docker_extra_env' variable.
-#ceph_osd_docker_memory_limit: 1g
-#ceph_osd_docker_cpu_limit: 1
-
-# PREPARE DEVICE
-#
-# WARNING /!\ DMCRYPT scenario ONLY works with Docker version 1.12.5 and above
-#
-#ceph_osd_docker_devices: "{{ devices }}"
-#ceph_osd_docker_prepare_env: -e OSD_JOURNAL_SIZE={{ journal_size }}
-
-# ACTIVATE DEVICE
-#
-#ceph_osd_docker_extra_env:
-#ceph_osd_docker_run_script_path: "/usr/share" # script called by systemd to run the docker command
-
-
-###########
-# SYSTEMD #
-###########
-
-# ceph_osd_systemd_overrides will override the systemd settings
-# for the ceph-osd services.
-# For example,to set "PrivateDevices=false" you can specify:
-#ceph_osd_systemd_overrides:
-# Service:
-# PrivateDevices: False
-
+---
+# Variables here are applicable to all host groups NOT roles
+
+# This sample file generated by generate_group_vars_sample.sh
+
+# Dummy variable to avoid error because ansible does not recognize the
+# file as a good configuration file when no variable in it.
+dummy:
+
+# You can override default vars defined in defaults/main.yml here,
+# but I would advise using host or group vars instead
+
+#raw_journal_devices: "{{ dedicated_devices }}" # backward compatibility with stable-2.2, will disappear in stable 3.1
+#journal_collocation: False # backward compatibility with stable-2.2, will disappear in stable 3.1
+#raw_multi_journal: False # backward compatibility with stable-2.2, will disappear in stable 3.1
+#dmcrytpt_journal_collocation: False # backward compatibility with stable-2.2, will disappear in stable 3.1
+#dmcrypt_dedicated_journal: False # backward compatibility with stable-2.2, will disappear in stable 3.1
+
+
+###########
+# GENERAL #
+###########
+
+# Even though OSD nodes should not have the admin key
+# at their disposal, some people might want to have it
+# distributed on OSD nodes. Setting 'copy_admin_key' to 'true'
+# will copy the admin key to the /etc/ceph/ directory
+#copy_admin_key: false
+
+
+####################
+# OSD CRUSH LOCATION
+####################
+
+# /!\
+#
+# BE EXTREMELY CAREFUL WITH THIS OPTION
+# DO NOT USE IT UNLESS YOU KNOW WHAT YOU ARE DOING
+#
+# /!\
+#
+# It is probably best to keep this option to 'false' as the default
+# suggests it. This option should only be used while doing some complex
+# CRUSH map. It allows you to force a specific location for a set of OSDs.
+#
+# The following options will build a ceph.conf with OSD sections
+# Example:
+# [osd.X]
+# osd crush location = "root=location"
+#
+# This works with your inventory file
+# To match the following 'osd_crush_location' option the inventory must look like:
+#
+# [osds]
+# osd0 ceph_crush_root=foo ceph_crush_rack=bar
+
+#crush_location: false
+#osd_crush_location: "\"root={{ ceph_crush_root }} rack={{ ceph_crush_rack }} host={{ ansible_hostname }}\""
+
+
+##############
+# CEPH OPTIONS
+##############
+
+# Devices to be used as OSDs
+# You can pre-provision disks that are not present yet.
+# Ansible will just skip them. Newly added disk will be
+# automatically configured during the next run.
+#
+
+
+# Declare devices to be used as OSDs
+# All scenarios (except the 3rd) inherit from the following device declaration
+
+devices:
+# - /dev/sda
+# - /dev/sdc
+# - /dev/sdd
+# - /dev/sde
+
+#devices: []
+
+
+#'osd_auto_discovery' mode prevents you from filling out the 'devices' variable above.
+# You can use this option with the First, Fourth and Fifth OSD scenarios.
+# Device discovery is based on the Ansible fact 'ansible_devices'
+# which reports all the devices on a system. If chosen, all the disks
+# found will be passed to ceph-disk. You should not be worried about using
+# this option since ceph-disk has a built-in check which looks for empty devices.
+# Thus devices with existing partition tables will not be used.
+#
+#osd_auto_discovery: false
+
+# Encrypt your OSD device using dmcrypt
+# If set to True, no matter which osd_objectstore and osd_scenario you use, the data will be encrypted
+#dmcrypt: "{{ True if dmcrytpt_journal_collocation or dmcrypt_dedicated_journal else False }}" # backward compatibility with stable-2.2, will disappear in stable 3.1
+
+
+# I. First scenario: collocated
+#
+# To enable this scenario do: osd_scenario: collocated
+#
+#
+# If osd_objectstore: filestore is enabled both 'ceph data' and 'ceph journal' partitions
+# will be stored on the same device.
+#
+# If osd_objectstore: bluestore is enabled 'ceph data', 'ceph block', 'ceph block.db', 'ceph block.wal' will be stored
+# on the same device. The device will get 2 partitions:
+# - One for 'data', called 'ceph data'
+# - One for 'ceph block', 'ceph block.db', 'ceph block.wal' called 'ceph block'
+#
+# Example of what you will get:
+# [root@ceph-osd0 ~]# blkid /dev/sda*
+# /dev/sda: PTTYPE="gpt"
+# /dev/sda1: UUID="9c43e346-dd6e-431f-92d8-cbed4ccb25f6" TYPE="xfs" PARTLABEL="ceph data" PARTUUID="749c71c9-ed8f-4930-82a7-a48a3bcdb1c7"
+# /dev/sda2: PARTLABEL="ceph block" PARTUUID="e6ca3e1d-4702-4569-abfa-e285de328e9d"
+#
+
+#osd_scenario: "{{ 'collocated' if journal_collocation or dmcrytpt_journal_collocation else 'non-collocated' if raw_multi_journal or dmcrypt_dedicated_journal else 'dummy' }}" # backward compatibility with stable-2.2, will disappear in stable 3.1
+#valid_osd_scenarios:
+# - collocated
+# - non-collocated
+# - lvm
+osd_scenario: collocated
+
+# II. Second scenario: non-collocated
+#
+# To enable this scenario do: osd_scenario: non-collocated
+#
+# If osd_objectstore: filestore is enabled 'ceph data' and 'ceph journal' partitions
+# will be stored on different devices:
+# - 'ceph data' will be stored on the device listed in 'devices'
+# - 'ceph journal' will be stored on the device listed in 'dedicated_devices'
+#
+# Let's take an example, imagine 'devices' was declared like this:
+#
+# devices:
+# - /dev/sda
+# - /dev/sdb
+# - /dev/sdc
+# - /dev/sdd
+#
+# And 'dedicated_devices' was declared like this:
+#
+# dedicated_devices:
+# - /dev/sdf
+# - /dev/sdf
+# - /dev/sdg
+# - /dev/sdg
+#
+# This will result in the following mapping:
+# - /dev/sda will have /dev/sdf1 as journal
+# - /dev/sdb will have /dev/sdf2 as a journal
+# - /dev/sdc will have /dev/sdg1 as a journal
+# - /dev/sdd will have /dev/sdg2 as a journal
+#
+#
+# If osd_objectstore: bluestore is enabled, both 'ceph block.db' and 'ceph block.wal' partitions will be stored
+# on a dedicated device.
+#
+# So the following will happen:
+# - The devices listed in 'devices' will get 2 partitions, one for 'block' and one for 'data'.
+#   'data' is only 100MB and does not store any of your data, it's just a bunch of Ceph metadata.
+# 'block' will store all your actual data.
+# - The devices in 'dedicated_devices' will get 1 partition for RocksDB DB, called 'block.db'
+# and one for RocksDB WAL, called 'block.wal'
+#
+# By default dedicated_devices will represent block.db
+#
+# Example of what you will get:
+# [root@ceph-osd0 ~]# blkid /dev/sd*
+# /dev/sda: PTTYPE="gpt"
+# /dev/sda1: UUID="c6821801-2f21-4980-add0-b7fc8bd424d5" TYPE="xfs" PARTLABEL="ceph data" PARTUUID="f2cc6fa8-5b41-4428-8d3f-6187453464d0"
+# /dev/sda2: PARTLABEL="ceph block" PARTUUID="ea454807-983a-4cf2-899e-b2680643bc1c"
+# /dev/sdb: PTTYPE="gpt"
+# /dev/sdb1: PARTLABEL="ceph block.db" PARTUUID="af5b2d74-4c08-42cf-be57-7248c739e217"
+# /dev/sdb2: PARTLABEL="ceph block.wal" PARTUUID="af3f8327-9aa9-4c2b-a497-cf0fe96d126a"
+#dedicated_devices: []
+
+
+# More device granularity for Bluestore
+#
+# ONLY if osd_objectstore: bluestore is enabled.
+#
+# By default, if 'bluestore_wal_devices' is empty, it will get the content of 'dedicated_devices'.
+# If set, then you will have a dedicated partition on a specific device for block.wal.
+#
+# Example of what you will get:
+# [root@ceph-osd0 ~]# blkid /dev/sd*
+# /dev/sda: PTTYPE="gpt"
+# /dev/sda1: UUID="39241ae9-d119-4335-96b3-0898da8f45ce" TYPE="xfs" PARTLABEL="ceph data" PARTUUID="961e7313-bdb7-49e7-9ae7-077d65c4c669"
+# /dev/sda2: PARTLABEL="ceph block" PARTUUID="bff8e54e-b780-4ece-aa16-3b2f2b8eb699"
+# /dev/sdb: PTTYPE="gpt"
+# /dev/sdb1: PARTLABEL="ceph block.db" PARTUUID="0734f6b6-cc94-49e9-93de-ba7e1d5b79e3"
+# /dev/sdc: PTTYPE="gpt"
+# /dev/sdc1: PARTLABEL="ceph block.wal" PARTUUID="824b84ba-6777-4272-bbbd-bfe2a25cecf3"
+#bluestore_wal_devices: "{{ dedicated_devices }}"
+
+# III. Use ceph-volume to create OSDs from logical volumes.
+# Use 'osd_scenario: lvm' to enable this scenario. Currently we only support dedicated journals
+# when using lvm, not collocated journals.
+# lvm_volumes is a list of dictionaries. Each dictionary must contain data, journal and data_vg
+# keys. Any logical volume or volume group used must be referenced by name, not by path.
+# data must be a logical volume
+# journal can be either an lv, a device or a partition. You cannot use the same journal for multiple data lvs.
+# data_vg must be the volume group name of the data lv
+# journal_vg is optional and must be the volume group name of the journal lv, if applicable
+# For example:
+# lvm_volumes:
+# - data: data-lv1
+# data_vg: vg1
+# journal: journal-lv1
+# journal_vg: vg2
+# - data: data-lv2
+# journal: /dev/sda
+# data_vg: vg1
+# - data: data-lv3
+# journal: /dev/sdb1
+# data_vg: vg2
+#lvm_volumes: []
+
+
+##########
+# DOCKER #
+##########
+
+#ceph_config_keys: [] # DON'T TOUCH ME
+
+# Resource limitation
+# For the whole list of limits you can apply see: docs.docker.com/engine/admin/resource_constraints
+# Default values are based from: https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/2/html/red_hat_ceph_storage_hardware_guide/minimum_recommendations
+# These options can be passed using the 'ceph_osd_docker_extra_env' variable.
+#ceph_osd_docker_memory_limit: 1g
+#ceph_osd_docker_cpu_limit: 1
+
+# PREPARE DEVICE
+#
+# WARNING /!\ DMCRYPT scenario ONLY works with Docker version 1.12.5 and above
+#
+#ceph_osd_docker_devices: "{{ devices }}"
+#ceph_osd_docker_prepare_env: -e OSD_JOURNAL_SIZE={{ journal_size }}
+
+# ACTIVATE DEVICE
+#
+#ceph_osd_docker_extra_env:
+#ceph_osd_docker_run_script_path: "/usr/share" # script called by systemd to run the docker command
+
+
+###########
+# SYSTEMD #
+###########
+
+# ceph_osd_systemd_overrides will override the systemd settings
+# for the ceph-osd services.
+# For example, to set "PrivateDevices=false" you can specify:
+#ceph_osd_systemd_overrides:
+# Service:
+# PrivateDevices: False
+
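With osd_scenario: collocated, the only field that normally has to be filled in is the devices list, which is commented out above. A sketch with placeholder disks:

```yaml
# Sketch: group_vars/ceph/osds.yml for the collocated scenario.
# /dev/sdb and /dev/sdc are placeholders; use empty disks that exist on the OSD host.
osd_scenario: collocated
devices:
  - /dev/sdb
  - /dev/sdc
```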
diff --git a/ci/ansible/group_vars/cinder/cinder.yaml b/ci/ansible/group_vars/cinder/cinder.yaml
index bfb1d85..e7971d0 100644
--- a/ci/ansible/group_vars/cinder/cinder.yaml
+++ b/ci/ansible/group_vars/cinder/cinder.yaml
@@ -1,14 +1,17 @@
-authOptions:
- noAuth: true
- endpoint: "http://127.0.0.1/identity"
- cinderEndpoint: "http://127.0.0.1:8776/v2"
- domainId: "Default"
- domainName: "Default"
- username: ""
- password: ""
- tenantId: "myproject"
- tenantName: "myproject"
-pool:
- "cinder-lvm@lvm#lvm":
- AZ: nova
- thin: true
+authOptions:
+ noAuth: true
+ endpoint: "http://127.0.0.1/identity"
+ cinderEndpoint: "http://127.0.0.1:8776/v2"
+ domainId: "Default"
+ domainName: "Default"
+ username: ""
+ password: ""
+ tenantId: "myproject"
+ tenantName: "myproject"
+pool:
+ "cinder-lvm@lvm#lvm":
+ AZ: nova
+ thin: true
+ accessProtocol: iscsi
+ thinProvisioned: true
+ compressed: true
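The authOptions block is passed through to the cinder driver; with noAuth set to true the identity credentials above are left empty. A sketch of what an authenticated setup might look like (all addresses and credentials are placeholders, and the exact fields required depend on the cinder driver):

```yaml
# Sketch: group_vars/cinder/cinder.yaml for an authenticated deployment.
# Addresses and credentials are placeholders; required fields depend on the driver.
authOptions:
  noAuth: false
  endpoint: "http://192.168.0.20/identity"
  cinderEndpoint: "http://192.168.0.20:8776/v2"
  domainId: "Default"
  domainName: "Default"
  username: "cinder"
  password: "example-password"
  tenantId: "myproject"
  tenantName: "myproject"
pool:
  "cinder-lvm@lvm#lvm":
    AZ: nova
    thin: true
    accessProtocol: iscsi
    thinProvisioned: true
    compressed: true
```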
diff --git a/ci/ansible/group_vars/common.yml b/ci/ansible/group_vars/common.yml
index 734d2e3..cbdaaf6 100755..100644
--- a/ci/ansible/group_vars/common.yml
+++ b/ci/ansible/group_vars/common.yml
@@ -1,34 +1,39 @@
----
-# Dummy variable to avoid error because ansible does not recognize the
-# file as a good configuration file when no variable in it.
-dummy:
-
-
-###########
-# GENERAL #
-###########
-
-workplace: /home/krej # Change this field according to your username, use '/root' if you login as root.
-
-# These fields are NOT suggested to be modified
-remote_url: https://github.com/opensds/opensds.git
-opensds_root_dir: "{{ workplace }}/gopath/src/github.com/opensds/opensds"
-opensds_build_dir: "{{ opensds_root_dir }}/build"
-opensds_config_dir: /etc/opensds
-opensds_log_dir: /var/log/opensds
-
-###########
-# GOLANG #
-###########
-
-golang_release: 1.9.2
-
-# These fields are NOT suggested to be modified
-golang_tarball: go{{ golang_release }}.linux-amd64.tar.gz
-golang_download_url: https://storage.googleapis.com/golang/{{ golang_tarball }}
-
-###########
-#CONTAINER#
-###########
-
-container_enabled: false
+---
+# Dummy variable to avoid error because ansible does not recognize the
+# file as a good configuration file when no variable in it.
+dummy:
+
+
+###########
+# GENERAL #
+###########
+
+opensds_release: v0.1.4 # The version should be at least v0.1.4.
+nbp_release: v0.1.0 # The version should be at least v0.1.0.
+
+# These fields are not suggested to be modified
+opensds_download_url: https://github.com/opensds/opensds/releases/download/{{ opensds_release }}/opensds-{{ opensds_release }}-linux-amd64.tar.gz
+opensds_tarball_url: /opt/opensds-{{ opensds_release }}-linux-amd64.tar.gz
+opensds_dir: /opt/opensds-{{ opensds_release }}-linux-amd64
+nbp_download_url: https://github.com/opensds/nbp/releases/download/{{ nbp_release }}/opensds-k8s-{{ nbp_release }}-linux-amd64.tar.gz
+nbp_tarball_url: /opt/opensds-k8s-{{ nbp_release }}-linux-amd64.tar.gz
+nbp_dir: /opt/opensds-k8s-{{ nbp_release }}-linux-amd64
+
+opensds_config_dir: /etc/opensds
+opensds_log_dir: /var/log/opensds
+
+
+###########
+# PLUGIN #
+###########
+
+nbp_plugin_type: standalone # standalone is the default integration mode, but you can change it to 'csi' or 'flexvolume'
+
+flexvolume_plugin_dir: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/opensds.io~opensds
+
+
+###########
+#CONTAINER#
+###########
+
+container_enabled: false
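common.yml now points at prebuilt release tarballs instead of building from source, so the fields a user typically touches are the release versions, the NBP plugin type and the container switch. A sketch of a common override (example values only):

```yaml
# Sketch: fields in group_vars/common.yml that are typically adjusted (example values).
opensds_release: v0.1.4      # must be at least v0.1.4
nbp_release: v0.1.0          # must be at least v0.1.0
nbp_plugin_type: flexvolume  # 'standalone' (default), 'csi' or 'flexvolume'
container_enabled: false     # true runs osdslet/osdsdock/etcd as containers
```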
diff --git a/ci/ansible/group_vars/lvm/lvm.yaml b/ci/ansible/group_vars/lvm/lvm.yaml
index a5aecb8..a360891 100755..100644
--- a/ci/ansible/group_vars/lvm/lvm.yaml
+++ b/ci/ansible/group_vars/lvm/lvm.yaml
@@ -1,5 +1,8 @@
-tgtBindIp: 127.0.0.1
-pool:
- "vg001": # change pool name same to vg_name, but don't change it if you choose ceph backend
- diskType: SSD
- AZ: default \ No newline at end of file
+tgtBindIp: 127.0.0.1
+pool:
+  "vg001": # change the pool name to match vg_name, but don't change it if you choose the ceph backend
+ diskType: SSD
+ AZ: default
+ accessProtocol: iscsi
+ thinProvisioned: false
+ compressed: false
diff --git a/ci/ansible/group_vars/osdsdb.yml b/ci/ansible/group_vars/osdsdb.yml
index c8ef864..1b6b812 100755..100644
--- a/ci/ansible/group_vars/osdsdb.yml
+++ b/ci/ansible/group_vars/osdsdb.yml
@@ -1,33 +1,35 @@
----
-# Dummy variable to avoid error because ansible does not recognize the
-# file as a good configuration file when no variable in it.
-dummy:
-
-
-###########
-# GENERAL #
-###########
-
-db_driver: etcd
-db_endpoint: localhost:2379,localhost:2380
-#db_credential: opensds:password@127.0.0.1:3306/dbname
-
-###########
-# ETCD #
-###########
-
-etcd_release: v3.2.0
-etcd_host: 127.0.0.1
-etcd_port: 2379
-etcd_peer_port: 2380
-
-# These fields are not suggested to be modified
-etcd_tarball: etcd-{{ etcd_release }}-linux-amd64.tar.gz
-etcd_download_url: https://github.com/coreos/etcd/releases/download/{{ etcd_release }}/{{ etcd_tarball }}
-etcd_dir: /opt/etcd-{{ etcd_release }}-linux-amd64
-
-###########
-# DOCKER #
-###########
-
-etcd_docker_image: quay.io/coreos/etcd:latest
+---
+# Dummy variable to avoid error because ansible does not recognize the
+# file as a good configuration file when no variable in it.
+dummy:
+
+
+###########
+# GENERAL #
+###########
+
+db_driver: etcd
+db_endpoint: localhost:2379,localhost:2380
+#db_credential: opensds:password@127.0.0.1:3306/dbname
+
+
+###########
+# ETCD #
+###########
+
+etcd_release: v3.2.0
+etcd_host: 127.0.0.1
+etcd_port: 2379
+etcd_peer_port: 2380
+
+# These fields are not suggested to be modified
+etcd_tarball: etcd-{{ etcd_release }}-linux-amd64.tar.gz
+etcd_download_url: https://github.com/coreos/etcd/releases/download/{{ etcd_release }}/{{ etcd_tarball }}
+etcd_dir: /opt/etcd-{{ etcd_release }}-linux-amd64
+
+
+###########
+# DOCKER #
+###########
+
+etcd_docker_image: quay.io/coreos/etcd:latest
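db_endpoint and the etcd_* fields default to a local etcd. A sketch of pointing them at an etcd instance on another host (192.168.0.10 is a placeholder address):

```yaml
# Sketch: group_vars/osdsdb.yml pointing at an etcd instance on another host.
# 192.168.0.10 is a placeholder address.
db_driver: etcd
db_endpoint: 192.168.0.10:2379,192.168.0.10:2380
etcd_host: 192.168.0.10
etcd_port: 2379
etcd_peer_port: 2380
```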
diff --git a/ci/ansible/group_vars/osdsdock.yml b/ci/ansible/group_vars/osdsdock.yml
index a8c4ce9..1544c65 100755..100644
--- a/ci/ansible/group_vars/osdsdock.yml
+++ b/ci/ansible/group_vars/osdsdock.yml
@@ -1,76 +1,81 @@
----
-# Dummy variable to avoid error because ansible does not recognize the
-# file as a good configuration file when no variable in it.
-dummy:
-
-
-###########
-# GENERAL #
-###########
-
-# Change it according to your backend, currently support 'lvm', 'ceph', 'cinder'
-enabled_backend: lvm
-
-# These fields are NOT suggested to be modified
-dock_endpoint: localhost:50050
-dock_log_file: "{{ opensds_log_dir }}/osdsdock.log"
-
-###########
-# LVM #
-###########
-
-pv_device: /dev/sdc # Specify a block device and ensure it existed if you choose lvm
-vg_name: vg001 # Specify a name randomly
-
-# These fields are NOT suggested to be modified
-lvm_name: lvm backend
-lvm_description: This is a lvm backend service
-lvm_driver_name: lvm
-lvm_config_path: "{{ opensds_config_dir }}/driver/lvm.yaml"
-
-###########
-# CEPH #
-###########
-
-ceph_pool_name: rbd # Specify a name randomly
-
-# These fields are NOT suggested to be modified
-ceph_name: ceph backend
-ceph_description: This is a ceph backend service
-ceph_driver_name: ceph
-ceph_config_path: "{{ opensds_config_dir }}/driver/ceph.yaml"
-
-###########
-# CINDER #
-###########
-
-# Use block-box install cinder_standalone if true, see details in:
-# https://github.com/openstack/cinder/tree/master/contrib/block-box
-use_cinder_standalone: true
-# If true, you can configure cinder_container_platform, cinder_image_tag,
-# cinder_volume_group.
-
-# Default: debian:stretch, and ubuntu:xenial, centos:7 is also supported.
-cinder_container_platform: debian:stretch
-# The image tag can be arbitrarily modified, as long as follow the image naming
-# conventions, default: debian-cinder
-cinder_image_tag: debian-cinder
-# The cinder standalone use lvm driver as default driver, therefore `volume_group`
-# should be configured, the default is: cinder-volumes. The volume group will be
-# removed when use ansible script clean environment.
-cinder_volume_group: cinder-volumes
-# All source code and volume group file will be placed in the cinder_data_dir:
-cinder_data_dir: "{{ workplace }}/cinder_data_dir"
-
-
-# These fields are not suggested to be modified
-cinder_name: cinder backend
-cinder_description: This is a cinder backend service
-cinder_driver_name: cinder
-cinder_config_path: "{{ opensds_config_dir }}/driver/cinder.yaml"
-
-###########
-# DOCKER #
-###########
-
-dock_docker_image: opensdsio/opensds-dock:latest
+---
+# Dummy variable to avoid error because ansible does not recognize the
+# file as a good configuration file when no variable in it.
+dummy:
+
+
+###########
+# GENERAL #
+###########
+
+# Change it according to your backend; currently 'lvm', 'ceph' and 'cinder' are supported
+enabled_backend: lvm
+
+# These fields are NOT suggested to be modified
+dock_endpoint: localhost:50050
+dock_log_file: "{{ opensds_log_dir }}/osdsdock.log"
+
+###########
+# LVM #
+###########
+
+pv_devices: # Specify block devices and ensure they exist if you choose lvm
+ #- /dev/sdc
+ #- /dev/sdd
+vg_name: vg001 # Specify any name you like
+
+# These fields are NOT suggested to be modified
+lvm_name: lvm backend
+lvm_description: This is a lvm backend service
+lvm_driver_name: lvm
+lvm_config_path: "{{ opensds_config_dir }}/driver/lvm.yaml"
+
+###########
+# CEPH #
+###########
+
+ceph_pools: # Specify any pool names you like
+ - rbd
+ #- ssd
+ #- sas
+
+# These fields are NOT suggested to be modified
+ceph_name: ceph backend
+ceph_description: This is a ceph backend service
+ceph_driver_name: ceph
+ceph_config_path: "{{ opensds_config_dir }}/driver/ceph.yaml"
+
+###########
+# CINDER #
+###########
+
+# Use block-box to install cinder_standalone if true; see details in:
+# https://github.com/openstack/cinder/tree/master/contrib/block-box
+use_cinder_standalone: true
+# If true, you can configure cinder_container_platform, cinder_image_tag,
+# cinder_volume_group.
+
+# Default: debian:stretch; ubuntu:xenial and centos:7 are also supported.
+cinder_container_platform: debian:stretch
+# The image tag can be arbitrarily modified, as long as it follows the image naming
+# conventions; default: debian-cinder
+cinder_image_tag: debian-cinder
+# The cinder standalone service uses the lvm driver by default, therefore `volume_group`
+# should be configured; the default is cinder-volumes. The volume group will be
+# removed when the ansible script cleans the environment.
+cinder_volume_group: cinder-volumes
+# All source code and volume group file will be placed in the cinder_data_dir:
+cinder_data_dir: "{{ workplace }}/cinder_data_dir"
+
+
+# These fields are not suggested to be modified
+cinder_name: cinder backend
+cinder_description: This is a cinder backend service
+cinder_driver_name: cinder
+cinder_config_path: "{{ opensds_config_dir }}/driver/cinder.yaml"
+
+###########
+# DOCKER #
+###########
+
+dock_docker_image: opensdsio/opensds-dock:latest
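pv_device and ceph_pool_name have become the list-valued pv_devices and ceph_pools. A sketch with both lists populated (device paths and pool names are placeholders):

```yaml
# Sketch: group_vars/osdsdock.yml with the new list-valued fields populated.
# Device paths and pool names are placeholders.
enabled_backend: lvm        # or 'ceph' / 'cinder'
pv_devices:
  - /dev/sdc
  - /dev/sdd
vg_name: vg001
ceph_pools:
  - rbd
  - ssd
```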
diff --git a/ci/ansible/group_vars/osdslet.yml b/ci/ansible/group_vars/osdslet.yml
index f9be9de..a872449 100755..100644
--- a/ci/ansible/group_vars/osdslet.yml
+++ b/ci/ansible/group_vars/osdslet.yml
@@ -1,19 +1,20 @@
----
-# Dummy variable to avoid error because ansible does not recognize the
-# file as a good configuration file when no variable in it.
-dummy:
-
-
-###########
-# GENERAL #
-###########
-
-# These fields are NOT suggested to be modified
-controller_endpoint: 0.0.0.0:50040
-controller_log_file: "{{ opensds_log_dir }}/osdslet.log"
-
-###########
-# DOCKER #
-###########
-
-controller_docker_image: opensdsio/opensds-controller:latest
+---
+# Dummy variable to avoid error because ansible does not recognize the
+# file as a good configuration file when no variable in it.
+dummy:
+
+
+###########
+# GENERAL #
+###########
+
+# These fields are NOT suggested to be modified
+controller_endpoint: 0.0.0.0:50040
+controller_log_file: "{{ opensds_log_dir }}/osdslet.log"
+
+
+###########
+# DOCKER #
+###########
+
+controller_docker_image: opensdsio/opensds-controller:latest
diff --git a/ci/ansible/install_ansible.sh b/ci/ansible/install_ansible.sh
new file mode 100644
index 0000000..b3f43bb
--- /dev/null
+++ b/ci/ansible/install_ansible.sh
@@ -0,0 +1,9 @@
+#!/bin/bash
+
+sudo add-apt-repository ppa:ansible/ansible # This step is needed to upgrade ansible to version 2.4.2 which is required for the ceph backend.
+
+sudo apt-get update
+sudo apt-get install -y ansible
+sleep 3
+
+ansible --version # Ansible version 2.4.2 or higher is required for ceph; 2.0.0.2 or higher is needed for other backends.
diff --git a/ci/ansible/local.hosts b/ci/ansible/local.hosts
index a48639c..afdd826 100755..100644
--- a/ci/ansible/local.hosts
+++ b/ci/ansible/local.hosts
@@ -1,5 +1,8 @@
-[controllers]
-localhost ansible_connection=local
-
-[docks]
-localhost ansible_connection=local
+[controllers]
+localhost ansible_connection=local
+
+[docks]
+localhost ansible_connection=local
+
+[worker-nodes]
+localhost ansible_connection=local
diff --git a/ci/ansible/roles/cleaner/tasks/main.yml b/ci/ansible/roles/cleaner/tasks/main.yml
index c1c465c..4b3b0c2 100755..100644
--- a/ci/ansible/roles/cleaner/tasks/main.yml
+++ b/ci/ansible/roles/cleaner/tasks/main.yml
@@ -1,167 +1,188 @@
----
-- name: remove golang tarball
- file:
- path: "/opt/{{ golang_tarball }}"
- state: absent
- force: yes
- ignore_errors: yes
-
-- name: kill etcd daemon service
- shell: killall etcd
- ignore_errors: yes
- when: db_driver == "etcd" and container_enabled == false
-
-- name: kill etcd containerized service
- docker:
- image: quay.io/coreos/etcd:latest
- state: stopped
- when: container_enabled == true
-
-- name: remove etcd service data
- file:
- path: "{{ etcd_dir }}"
- state: absent
- force: yes
- ignore_errors: yes
- when: db_driver == "etcd"
-
-- name: remove etcd tarball
- file:
- path: "/opt/{{ etcd_tarball }}"
- state: absent
- force: yes
- ignore_errors: yes
- when: db_driver == "etcd"
-
-- name: kill osdslet daemon service
- shell: killall osdslet
- ignore_errors: yes
- when: container_enabled == false
-
-- name: kill osdslet containerized service
- docker:
- image: opensdsio/opensds-controller:latest
- state: stopped
- when: container_enabled == true
-
-- name: kill osdsdock daemon service
- shell: killall osdsdock
- ignore_errors: yes
- when: container_enabled == false
-
-- name: kill osdsdock containerized service
- docker:
- image: opensdsio/opensds-dock:latest
- state: stopped
- when: container_enabled == true
-
-- name: clean all opensds build files
- shell: . /etc/profile; make clean
- args:
- chdir: "{{ opensds_root_dir }}"
-
-- name: clean all opensds configuration files
- file:
- path: "{{ opensds_config_dir }}"
- state: absent
- force: yes
- ignore_errors: yes
-
-- name: clean all opensds log files
- file:
- path: "{{ opensds_log_dir }}"
- state: absent
- force: yes
- ignore_errors: yes
-
-- name: check if it existed before cleaning a volume group
- shell: vgdisplay {{ vg_name }}
- ignore_errors: yes
- register: vg_existed
- when: enabled_backend == "lvm"
-
-- name: remove a volume group if lvm backend specified
- shell: vgremove {{ vg_name }}
- when: enabled_backend == "lvm" and vg_existed.rc == 0
-
-- name: check if it existed before cleaning a physical volume
- shell: pvdisplay {{ pv_device }}
- ignore_errors: yes
- register: pv_existed
- when: enabled_backend == "lvm"
-
-- name: remove a physical volume if lvm backend specified
- shell: pvremove {{ pv_device }}
- when: enabled_backend == "lvm" and pv_existed.rc == 0
-
-- name: stop cinder-standalone service
- shell: docker-compose down
- become: true
- args:
- chdir: "{{ cinder_data_dir }}/cinder/contrib/block-box"
- when: enabled_backend == "cinder"
-
-- name: clean the volume group of cinder
- shell:
- _raw_params: |
-
- # _clean_lvm_volume_group removes all default LVM volumes
- #
- # Usage: _clean_lvm_volume_group $vg
- function _clean_lvm_volume_group {
- local vg=$1
-
- # Clean out existing volumes
- sudo lvremove -f $vg
- }
-
- # _remove_lvm_volume_group removes the volume group
- #
- # Usage: _remove_lvm_volume_group $vg
- function _remove_lvm_volume_group {
- local vg=$1
-
- # Remove the volume group
- sudo vgremove -f $vg
- }
-
- # _clean_lvm_backing_file() removes the backing file of the
- # volume group
- #
- # Usage: _clean_lvm_backing_file() $backing_file
- function _clean_lvm_backing_file {
- local backing_file=$1
-
- # If the backing physical device is a loop device, it was probably setup by DevStack
- if [[ -n "$backing_file" ]] && [[ -e "$backing_file" ]]; then
- local vg_dev
- vg_dev=$(sudo losetup -j $backing_file | awk -F':' '/'.img'/ { print $1}')
- if [[ -n "$vg_dev" ]]; then
- sudo losetup -d $vg_dev
- fi
- rm -f $backing_file
- fi
- }
-
- # clean_lvm_volume_group() cleans up the volume group and removes the
- # backing file
- #
- # Usage: clean_lvm_volume_group $vg
- function clean_lvm_volume_group {
- local vg=$1
-
- _clean_lvm_volume_group $vg
- _remove_lvm_volume_group $vg
- # if there is no logical volume left, it's safe to attempt a cleanup
- # of the backing file
- if [[ -z "$(sudo lvs --noheadings -o lv_name $vg 2>/dev/null)" ]]; then
- _clean_lvm_backing_file {{ cinder_data_dir }}/${vg}.img
- fi
- }
-
- clean_lvm_volume_group {{cinder_volume_group}}
-
- args:
- executable: /bin/bash
- become: true
- when: enabled_backend == "cinder"
+---
+- name: kill osdslet daemon service
+ shell: killall osdslet
+ ignore_errors: yes
+ when: container_enabled == false
+
+- name: kill osdslet containerized service
+ docker:
+ image: opensdsio/opensds-controller:latest
+ state: stopped
+ when: container_enabled == true
+
+- name: kill osdsdock daemon service
+ shell: killall osdsdock
+ ignore_errors: yes
+ when: container_enabled == false
+
+- name: kill osdsdock containerized service
+ docker:
+ image: opensdsio/opensds-dock:latest
+ state: stopped
+ when: container_enabled == true
+
+- name: kill etcd daemon service
+ shell: killall etcd
+ ignore_errors: yes
+ when: db_driver == "etcd" and container_enabled == false
+
+- name: kill etcd containerized service
+ docker:
+ image: quay.io/coreos/etcd:latest
+ state: stopped
+ when: db_driver == "etcd" and container_enabled == true
+
+- name: remove etcd service data
+ file:
+ path: "{{ etcd_dir }}"
+ state: absent
+ force: yes
+ ignore_errors: yes
+ when: db_driver == "etcd"
+
+- name: remove etcd tarball
+ file:
+ path: "/opt/{{ etcd_tarball }}"
+ state: absent
+ force: yes
+ ignore_errors: yes
+ when: db_driver == "etcd"
+
+- name: clean opensds release files
+ file:
+ path: "{{ opensds_dir }}"
+ state: absent
+ force: yes
+ ignore_errors: yes
+
+- name: clean opensds release tarball file
+ file:
+ path: "{{ opensds_tarball_url }}"
+ state: absent
+ force: yes
+ ignore_errors: yes
+
+- name: clean opensds flexvolume plugins binary file
+ file:
+ path: "{{ flexvolume_plugin_dir }}"
+ state: absent
+ force: yes
+ ignore_errors: yes
+ when: nbp_plugin_type == "flexvolume"
+
+- name: clean nbp release files
+ file:
+ path: "{{ nbp_dir }}"
+ state: absent
+ force: yes
+ ignore_errors: yes
+
+- name: clean nbp release tarball file
+ file:
+ path: "{{ nbp_tarball_url }}"
+ state: absent
+ force: yes
+ ignore_errors: yes
+
+- name: clean all opensds configuration files
+ file:
+ path: "{{ opensds_config_dir }}"
+ state: absent
+ force: yes
+ ignore_errors: yes
+
+- name: clean all opensds log files
+ file:
+ path: "{{ opensds_log_dir }}"
+ state: absent
+ force: yes
+ ignore_errors: yes
+
+- name: check whether the volume group exists before cleaning it
+ shell: vgdisplay {{ vg_name }}
+ ignore_errors: yes
+ register: vg_existed
+ when: enabled_backend == "lvm"
+
+- name: remove a volume group if lvm backend specified
+ lvg:
+ vg: "{{ vg_name }}"
+ state: absent
+ when: enabled_backend == "lvm" and vg_existed.rc == 0
+
+- name: remove physical volumes if lvm backend specified
+ shell: pvremove {{ item }}
+ with_items: "{{ pv_devices }}"
+ when: enabled_backend == "lvm"
+
+- name: stop cinder-standalone service
+ shell: docker-compose down
+ become: true
+ args:
+ chdir: "{{ cinder_data_dir }}/cinder/contrib/block-box"
+ when: enabled_backend == "cinder"
+
+- name: clean the volume group of cinder
+ shell:
+ _raw_params: |
+
+ # _clean_lvm_volume_group removes all default LVM volumes
+ #
+ # Usage: _clean_lvm_volume_group $vg
+ function _clean_lvm_volume_group {
+ local vg=$1
+
+ # Clean out existing volumes
+ sudo lvremove -f $vg
+ }
+
+ # _remove_lvm_volume_group removes the volume group
+ #
+ # Usage: _remove_lvm_volume_group $vg
+ function _remove_lvm_volume_group {
+ local vg=$1
+
+ # Remove the volume group
+ sudo vgremove -f $vg
+ }
+
+ # _clean_lvm_backing_file() removes the backing file of the
+ # volume group
+ #
+ # Usage: _clean_lvm_backing_file() $backing_file
+ function _clean_lvm_backing_file {
+ local backing_file=$1
+
+ # If the backing physical device is a loop device, it was probably setup by DevStack
+ if [[ -n "$backing_file" ]] && [[ -e "$backing_file" ]]; then
+ local vg_dev
+ vg_dev=$(sudo losetup -j $backing_file | awk -F':' '/'.img'/ { print $1}')
+ if [[ -n "$vg_dev" ]]; then
+ sudo losetup -d $vg_dev
+ fi
+ rm -f $backing_file
+ fi
+ }
+
+ # clean_lvm_volume_group() cleans up the volume group and removes the
+ # backing file
+ #
+ # Usage: clean_lvm_volume_group $vg
+ function clean_lvm_volume_group {
+ local vg=$1
+
+ _clean_lvm_volume_group $vg
+ _remove_lvm_volume_group $vg
+ # if there is no logical volume left, it's safe to attempt a cleanup
+ # of the backing file
+ if [[ -z "$(sudo lvs --noheadings -o lv_name $vg 2>/dev/null)" ]]; then
+ _clean_lvm_backing_file {{ cinder_data_dir }}/${vg}.img
+ fi
+ }
+
+ clean_lvm_volume_group {{cinder_volume_group}}
+
+ args:
+ executable: /bin/bash
+ become: true
+ when: enabled_backend == "cinder"
diff --git a/ci/ansible/roles/common/tasks/main.yml b/ci/ansible/roles/common/tasks/main.yml
index d6bef82..7ae2234 100755..100644
--- a/ci/ansible/roles/common/tasks/main.yml
+++ b/ci/ansible/roles/common/tasks/main.yml
@@ -1,121 +1,121 @@
----
-# If we can't get golang installed before any module is used we will fail
-# so just try what we can to get it installed
-- name: check for golang
- stat:
- path: /usr/local/go
- ignore_errors: yes
- register: systemgolang
-
-- name: install golang for debian based systems
- shell:
- cmd: |
- set -e
- set -x
-
- wget {{ golang_download_url }} -P /opt/
- tar xvf /opt/{{ golang_tarball }} -C /usr/local/
- cat >> /etc/profile <<GOLANG__CONFIG_DOC
- export GOROOT=/usr/local/go
- export GOPATH=\$HOME/gopath
- export PATH=\$PATH:\$GOROOT/bin:\$GOPATH/bin
- GOLANG__CONFIG_DOC
-
- executable: /bin/bash
- ignore_errors: yes
- when:
- - systemgolang.stat.exists is undefined or systemgolang.stat.exists == false
-
-- name: Run the equivalent of "apt-get update" as a separate step
- apt:
- update_cache: yes
-
-- name: install librados-dev external package
- apt:
- name: librados-dev
-
-- name: install librbd-dev external package
- apt:
- name: librbd-dev
-
-- pip:
- name: docker-py
- when: container_enabled == true
-
-- name: check for opensds source code existed
- stat:
- path: "{{ opensds_root_dir }}"
- ignore_errors: yes
- register: opensdsexisted
-
-- name: download opensds source code
- git:
- repo: "{{ remote_url }}"
- dest: "{{ opensds_root_dir }}"
- when:
- - opensdsexisted.stat.exists is undefined or opensdsexisted.stat.exists == false
-
-- name: check for opensds binary file existed
- stat:
- path: "{{ opensds_build_dir }}"
- ignore_errors: yes
- register: opensdsbuilt
-
-- name: build opensds binary file
- shell: . /etc/profile; make
- args:
- chdir: "{{ opensds_root_dir }}"
- when:
- - opensdsbuilt.stat.exists is undefined or opensdsbuilt.stat.exists == false
-
-- name: create opensds global config directory if it doesn't exist
- file:
- path: "{{ opensds_config_dir }}/driver"
- state: directory
- mode: 0755
-
-- name: create opensds log directory if it doesn't exist
- file:
- path: "{{ opensds_log_dir }}"
- state: directory
- mode: 0755
-
-- name: configure opensds global info
- shell: |
- cat > opensds.conf <<OPENSDS_GLOABL_CONFIG_DOC
- [osdslet]
- api_endpoint = {{ controller_endpoint }}
- graceful = True
- log_file = {{ controller_log_file }}
- socket_order = inc
-
- [osdsdock]
- api_endpoint = {{ dock_endpoint }}
- log_file = {{ dock_log_file }}
- # Specify which backends should be enabled, sample,ceph,cinder,lvm and so on.
- enabled_backends = {{ enabled_backend }}
-
- [lvm]
- name = {{ lvm_name }}
- description = {{ lvm_description }}
- driver_name = {{ lvm_driver_name }}
- config_path = {{ lvm_config_path }}
-
- [ceph]
- name = {{ ceph_name }}
- description = {{ ceph_description }}
- driver_name = {{ ceph_driver_name }}
- config_path = {{ ceph_config_path }}
-
- [cinder]
- name = {{ cinder_name }}
- description = {{ cinder_description }}
- driver_name = {{ cinder_driver_name }}
- config_path = {{ cinder_config_path }}
-
- [database]
- endpoint = {{ db_endpoint }}
- driver = {{ db_driver }}
- args:
- chdir: "{{ opensds_config_dir }}"
- ignore_errors: yes
+---
+- name: run the equivalent of "apt-get update" as a separate step
+ apt:
+ update_cache: yes
+
+- name: install librados-dev and librbd-dev external packages
+ apt:
+ name: "{{ item }}"
+ state: present
+ with_items:
+ - librados-dev
+ - librbd-dev
+
+- name: install docker-py package with pip when enabling containerized deployment
+ pip:
+ name: docker-py
+ when: container_enabled == true
+
+- name: check whether opensds release files exist
+ stat:
+ path: "{{ opensds_dir }}"
+ ignore_errors: yes
+ register: opensdsreleasesexisted
+
+- name: download opensds release files
+ get_url:
+ url={{ opensds_download_url }}
+ dest={{ opensds_tarball_url }}
+ when:
+ - opensdsreleasesexisted.stat.exists is undefined or opensdsreleasesexisted.stat.exists == false
+
+- name: extract the opensds release tarball
+ unarchive:
+ src={{ opensds_tarball_url }}
+ dest=/opt/
+ when:
+ - opensdsreleasesexisted.stat.exists is undefined or opensdsreleasesexisted.stat.exists == false
+
+- name: check whether nbp release files exist
+ stat:
+ path: "{{ nbp_dir }}"
+ ignore_errors: yes
+ register: nbpreleasesexisted
+
+- name: download nbp release files
+ get_url:
+ url={{ nbp_download_url }}
+ dest={{ nbp_tarball_url }}
+ when:
+ - nbpreleasesexisted.stat.exists is undefined or nbpreleasesexisted.stat.exists == false
+
+- name: extract the nbp release tarball
+ unarchive:
+ src={{ nbp_tarball_url }}
+ dest=/opt/
+ when:
+ - nbpreleasesexisted.stat.exists is undefined or nbpreleasesexisted.stat.exists == false
+
+- name: change the mode of all binary files in opensds release
+ file:
+ path: "{{ opensds_dir }}/bin"
+ mode: 0755
+ recurse: yes
+
+- name: change the mode of all binary files in nbp release
+ file:
+ path: "{{ nbp_dir }}/flexvolume"
+ mode: 0755
+ recurse: yes
+
+- name: create opensds global config directory if it doesn't exist
+ file:
+ path: "{{ opensds_config_dir }}/driver"
+ state: directory
+ mode: 0755
+
+- name: create opensds log directory if it doesn't exist
+ file:
+ path: "{{ opensds_log_dir }}"
+ state: directory
+ mode: 0755
+
+- name: configure opensds global info
+ shell: |
+ cat > opensds.conf <<OPENSDS_GLOABL_CONFIG_DOC
+ [osdslet]
+ api_endpoint = {{ controller_endpoint }}
+ graceful = True
+ log_file = {{ controller_log_file }}
+ socket_order = inc
+
+ [osdsdock]
+ api_endpoint = {{ dock_endpoint }}
+ log_file = {{ dock_log_file }}
+    # Specify which backends should be enabled: sample, ceph, cinder, lvm and so on.
+ enabled_backends = {{ enabled_backend }}
+
+ [lvm]
+ name = {{ lvm_name }}
+ description = {{ lvm_description }}
+ driver_name = {{ lvm_driver_name }}
+ config_path = {{ lvm_config_path }}
+
+ [ceph]
+ name = {{ ceph_name }}
+ description = {{ ceph_description }}
+ driver_name = {{ ceph_driver_name }}
+ config_path = {{ ceph_config_path }}
+
+ [cinder]
+ name = {{ cinder_name }}
+ description = {{ cinder_description }}
+ driver_name = {{ cinder_driver_name }}
+ config_path = {{ cinder_config_path }}
+
+ [database]
+ endpoint = {{ db_endpoint }}
+ driver = {{ db_driver }}
+ args:
+ chdir: "{{ opensds_config_dir }}"
+ ignore_errors: yes
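The "configure opensds global info" task above writes opensds.conf through a shell heredoc. An equivalent, more idiomatic sketch using the template module, assuming a hypothetical roles/common/templates/opensds.conf.j2 that carries the same [osdslet], [osdsdock], backend and [database] sections:

```yaml
# Alternative sketch: render opensds.conf with the template module instead of a
# shell heredoc. Assumes a hypothetical roles/common/templates/opensds.conf.j2
# containing the same sections with Jinja2 variables.
- name: configure opensds global info
  template:
    src: opensds.conf.j2
    dest: "{{ opensds_config_dir }}/opensds.conf"
    mode: 0644
```

Rendering through a template keeps the task idempotent and reports changes correctly in check mode.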
diff --git a/ci/nbp-ansible/roles/installer/scenarios/csi.yml b/ci/ansible/roles/nbp-installer/scenarios/csi.yml
index e69de29..e69de29 100644
--- a/ci/nbp-ansible/roles/installer/scenarios/csi.yml
+++ b/ci/ansible/roles/nbp-installer/scenarios/csi.yml
diff --git a/ci/nbp-ansible/roles/installer/scenarios/flexvolume.yml b/ci/ansible/roles/nbp-installer/scenarios/flexvolume.yml
index 0bba93b..0bba93b 100644
--- a/ci/nbp-ansible/roles/installer/scenarios/flexvolume.yml
+++ b/ci/ansible/roles/nbp-installer/scenarios/flexvolume.yml
diff --git a/ci/nbp-ansible/roles/installer/tasks/main.yml b/ci/ansible/roles/nbp-installer/tasks/main.yml
index 58057f1..58057f1 100644
--- a/ci/nbp-ansible/roles/installer/tasks/main.yml
+++ b/ci/ansible/roles/nbp-installer/tasks/main.yml
diff --git a/ci/ansible/roles/osdsdb/scenarios/container.yml b/ci/ansible/roles/osdsdb/scenarios/container.yml
index 8a75ef2..afbd15b 100644
--- a/ci/ansible/roles/osdsdb/scenarios/container.yml
+++ b/ci/ansible/roles/osdsdb/scenarios/container.yml
@@ -1,10 +1,10 @@
----
-- name: run etcd containerized service
- docker:
- name: myetcd
- image: quay.io/coreos/etcd:latest
- command: /usr/local/bin/etcd --advertise-client-urls http://{{ etcd_host }}:{{ etcd_port }} --listen-client-urls http://{{ etcd_host }}:{{ etcd_port }} -advertise-client-urls http://{{ etcd_host }}:{{ etcd_peer_port }} -listen-peer-urls http://{{ etcd_host }}:{{ etcd_peer_port }}
- state: started
- net: host
- volumes:
- - "/usr/share/ca-certificates/:/etc/ssl/certs"
+---
+- name: run etcd containerized service
+ docker:
+ name: myetcd
+ image: quay.io/coreos/etcd:latest
+ command: /usr/local/bin/etcd --advertise-client-urls http://{{ etcd_host }}:{{ etcd_port }} --listen-client-urls http://{{ etcd_host }}:{{ etcd_port }} -advertise-client-urls http://{{ etcd_host }}:{{ etcd_peer_port }} -listen-peer-urls http://{{ etcd_host }}:{{ etcd_peer_port }}
+ state: started
+ net: host
+ volumes:
+ - "/usr/share/ca-certificates/:/etc/ssl/certs"
diff --git a/ci/ansible/roles/osdsdb/scenarios/etcd.yml b/ci/ansible/roles/osdsdb/scenarios/etcd.yml
index b05f0e7..9c3352b 100755..100644
--- a/ci/ansible/roles/osdsdb/scenarios/etcd.yml
+++ b/ci/ansible/roles/osdsdb/scenarios/etcd.yml
@@ -1,39 +1,39 @@
----
-- name: check for etcd existed
- stat:
- path: "{{ etcd_dir }}/etcd"
- ignore_errors: yes
- register: etcdexisted
-
-- name: download etcd
- get_url:
- url={{ etcd_download_url }}
- dest=/opt/{{ etcd_tarball }}
- when:
- - etcdexisted.stat.exists is undefined or etcdexisted.stat.exists == false
-
-- name: extract the etcd tarball
- unarchive:
- src=/opt/{{ etcd_tarball }}
- dest=/opt/
- when:
- - etcdexisted.stat.exists is undefined or etcdexisted.stat.exists == false
-
-- name: Check if etcd is running
- shell: ps aux | grep etcd | grep -v grep
- ignore_errors: true
- register: service_etcd_status
-
-- name: run etcd daemon service
- shell: nohup ./etcd --advertise-client-urls http://{{ etcd_host }}:{{ etcd_port }} --listen-client-urls http://{{ etcd_host }}:{{ etcd_port }} -advertise-client-urls http://{{ etcd_host }}:{{ etcd_peer_port }} -listen-peer-urls http://{{ etcd_host }}:{{ etcd_peer_port }} &>>etcd.log &
- become: true
- args:
- chdir: "{{ etcd_dir }}"
- when: service_etcd_status.rc != 0
-
-- name: check etcd cluster health
- shell: ./etcdctl cluster-health
- become: true
- ignore_errors: true
- args:
- chdir: "{{ etcd_dir }}"
+---
+- name: check whether etcd exists
+ stat:
+ path: "{{ etcd_dir }}/etcd"
+ ignore_errors: yes
+ register: etcdexisted
+
+- name: download etcd
+ get_url:
+ url={{ etcd_download_url }}
+ dest=/opt/{{ etcd_tarball }}
+ when:
+ - etcdexisted.stat.exists is undefined or etcdexisted.stat.exists == false
+
+- name: extract the etcd tarball
+ unarchive:
+ src=/opt/{{ etcd_tarball }}
+ dest=/opt/
+ when:
+ - etcdexisted.stat.exists is undefined or etcdexisted.stat.exists == false
+
+- name: Check if etcd is running
+ shell: ps aux | grep etcd | grep -v grep
+ ignore_errors: true
+ register: service_etcd_status
+
+- name: run etcd daemon service
+ shell: nohup ./etcd --advertise-client-urls http://{{ etcd_host }}:{{ etcd_port }} --listen-client-urls http://{{ etcd_host }}:{{ etcd_port }} -advertise-client-urls http://{{ etcd_host }}:{{ etcd_peer_port }} -listen-peer-urls http://{{ etcd_host }}:{{ etcd_peer_port }} &>>etcd.log &
+ become: true
+ args:
+ chdir: "{{ etcd_dir }}"
+ when: service_etcd_status.rc != 0
+
+- name: check etcd cluster health
+ shell: ./etcdctl cluster-health
+ become: true
+ ignore_errors: true
+ args:
+ chdir: "{{ etcd_dir }}"
diff --git a/ci/ansible/roles/osdsdb/tasks/main.yml b/ci/ansible/roles/osdsdb/tasks/main.yml
index 03530b4..efbfba9 100755..100644
--- a/ci/ansible/roles/osdsdb/tasks/main.yml
+++ b/ci/ansible/roles/osdsdb/tasks/main.yml
@@ -1,8 +1,8 @@
----
-- name: include scenarios/etcd.yml
- include: scenarios/etcd.yml
- when: db_driver == "etcd" and container_enabled == false
-
-- name: include scenarios/container.yml
- include: scenarios/container.yml
- when: db_driver == "etcd" and container_enabled == true
+---
+- name: include scenarios/etcd.yml
+ include: scenarios/etcd.yml
+ when: db_driver == "etcd" and container_enabled == false
+
+- name: include scenarios/container.yml
+ include: scenarios/container.yml
+ when: db_driver == "etcd" and container_enabled == true
diff --git a/ci/ansible/roles/osdsdock/scenarios/ceph.yml b/ci/ansible/roles/osdsdock/scenarios/ceph.yml
index 2b6196c..b844a29 100755..100644
--- a/ci/ansible/roles/osdsdock/scenarios/ceph.yml
+++ b/ci/ansible/roles/osdsdock/scenarios/ceph.yml
@@ -1,74 +1,77 @@
----
-- name: install ceph-common external package when ceph backend enabled
- apt:
- name: ceph-common
- when: enabled_backend == "ceph"
-
-- name: copy opensds ceph backend file if specify ceph backend
- copy:
- src: ../../../group_vars/ceph/ceph.yaml
- dest: "{{ ceph_config_path }}"
-
-- name: check for ceph-ansible source code existed
- stat:
- path: /opt/ceph-ansible
- ignore_errors: yes
- register: cephansibleexisted
-
-- name: download ceph-ansible source code
- git:
- repo: https://github.com/ceph/ceph-ansible.git
- dest: /opt/ceph-ansible
- when:
- - cephansibleexisted.stat.exists is undefined or cephansibleexisted.stat.exists == false
-
-- name: copy ceph inventory host into ceph-ansible directory
- copy:
- src: ../../../group_vars/ceph/ceph.hosts
- dest: /opt/ceph-ansible/ceph.hosts
-
-- name: copy ceph all.yml file into ceph-ansible group_vars directory
- copy:
- src: ../../../group_vars/ceph/all.yml
- dest: /opt/ceph-ansible/group_vars/all.yml
-
-- name: copy ceph osds.yml file into ceph-ansible group_vars directory
- copy:
- src: ../../../group_vars/ceph/osds.yml
- dest: /opt/ceph-ansible/group_vars/osds.yml
-
-- name: copy site.yml.sample to site.yml in ceph-ansible
- copy:
- src: /opt/ceph-ansible/site.yml.sample
- dest: /opt/ceph-ansible/site.yml
-
-- name: ping all hosts
- shell: ansible all -m ping -i ceph.hosts
- become: true
- args:
- chdir: /opt/ceph-ansible
-
-- name: run ceph-ansible playbook
- shell: ansible-playbook site.yml -i ceph.hosts | tee /var/log/ceph_ansible.log
- become: true
- args:
- chdir: /opt/ceph-ansible
-
-#- name: Check if ceph osd is running
-# shell: ps aux | grep ceph-osd | grep -v grep
-# ignore_errors: false
-# changed_when: false
-# register: service_ceph_osd_status
-
-- name: Check if ceph mon is running
- shell: ps aux | grep ceph-mon | grep -v grep
- ignore_errors: false
- changed_when: false
- register: service_ceph_mon_status
-
-- name: Create a pool and initialize it.
- shell: ceph osd pool create {{ ceph_pool_name }} 100 && ceph osd pool set {{ ceph_pool_name }} size 1
- ignore_errors: yes
- changed_when: false
- register: ceph_init_pool
- when: service_ceph_mon_status.rc == 0 # and service_ceph_osd_status.rc == 0
+---
+- name: install ceph-common external package when ceph backend enabled
+ apt:
+ name: "{{ item }}"
+ state: present
+ with_items:
+ - ceph-common
+ when: enabled_backend == "ceph"
+
+- name: copy opensds ceph backend file if ceph backend is specified
+ copy:
+ src: ../../../group_vars/ceph/ceph.yaml
+ dest: "{{ ceph_config_path }}"
+
+- name: check whether ceph-ansible source code exists
+ stat:
+ path: /opt/ceph-ansible
+ ignore_errors: yes
+ register: cephansibleexisted
+
+- name: download ceph-ansible source code
+ git:
+ repo: https://github.com/ceph/ceph-ansible.git
+ dest: /opt/ceph-ansible
+ when:
+ - cephansibleexisted.stat.exists is undefined or cephansibleexisted.stat.exists == false
+
+- name: copy ceph inventory host into ceph-ansible directory
+ copy:
+ src: ../../../group_vars/ceph/ceph.hosts
+ dest: /opt/ceph-ansible/ceph.hosts
+
+- name: copy ceph all.yml file into ceph-ansible group_vars directory
+ copy:
+ src: ../../../group_vars/ceph/all.yml
+ dest: /opt/ceph-ansible/group_vars/all.yml
+
+- name: copy ceph osds.yml file into ceph-ansible group_vars directory
+ copy:
+ src: ../../../group_vars/ceph/osds.yml
+ dest: /opt/ceph-ansible/group_vars/osds.yml
+
+- name: copy site.yml.sample to site.yml in ceph-ansible
+ copy:
+ src: /opt/ceph-ansible/site.yml.sample
+ dest: /opt/ceph-ansible/site.yml
+
+- name: ping all hosts
+ shell: ansible all -m ping -i ceph.hosts
+ become: true
+ args:
+ chdir: /opt/ceph-ansible
+
+- name: run ceph-ansible playbook
+ shell: ansible-playbook site.yml -i ceph.hosts | tee /var/log/ceph_ansible.log
+ become: true
+ args:
+ chdir: /opt/ceph-ansible
+
+#- name: Check if ceph osd is running
+# shell: ps aux | grep ceph-osd | grep -v grep
+# ignore_errors: false
+# changed_when: false
+# register: service_ceph_osd_status
+
+- name: Check if ceph mon is running
+ shell: ps aux | grep ceph-mon | grep -v grep
+ ignore_errors: false
+ changed_when: false
+ register: service_ceph_mon_status
+
+- name: Create the specified pools and initialize them with the default pool size
+ shell: ceph osd pool create {{ item }} 100 && ceph osd pool set {{ item }} size 1
+ ignore_errors: yes
+ changed_when: false
+ with_items: "{{ ceph_pools }}"
+ when: service_ceph_mon_status.rc == 0 # and service_ceph_osd_status.rc == 0
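The pool-creation task above loops over a `ceph_pools` list rather than a single `ceph_pool_name`; a hedged sketch of how that variable might look in `group_vars/osdsdock.yml` (the pool names are illustrative, not taken from the repo):
```yaml
# group_vars/osdsdock.yml -- assumed example for the ceph backend
ceph_pools:
  - rbd   # each entry becomes one "ceph osd pool create <name> 100" invocation above
```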
diff --git a/ci/ansible/roles/osdsdock/scenarios/cinder.yml b/ci/ansible/roles/osdsdock/scenarios/cinder.yml
index 333c5c0..6136f25 100755..100644
--- a/ci/ansible/roles/osdsdock/scenarios/cinder.yml
+++ b/ci/ansible/roles/osdsdock/scenarios/cinder.yml
@@ -1,5 +1,5 @@
----
-- name: copy opensds cinder backend file if specify cinder backend
- copy:
- src: ../../../group_vars/cinder/cinder.yaml
- dest: "{{ cinder_config_path }}"
+---
+- name: copy opensds cinder backend file if cinder backend is specified
+ copy:
+ src: ../../../group_vars/cinder/cinder.yaml
+ dest: "{{ cinder_config_path }}"
diff --git a/ci/ansible/roles/osdsdock/scenarios/cinder_standalone.yml b/ci/ansible/roles/osdsdock/scenarios/cinder_standalone.yml
index 7bf2b97..49f4063 100644
--- a/ci/ansible/roles/osdsdock/scenarios/cinder_standalone.yml
+++ b/ci/ansible/roles/osdsdock/scenarios/cinder_standalone.yml
@@ -1,146 +1,145 @@
----
-
-- name: install python-pip
- apt:
- name: python-pip
-
-- name: install lvm2
- apt:
- name: lvm2
-
-- name: install thin-provisioning-tools
- apt:
- name: thin-provisioning-tools
-
-- name: install docker-compose
- pip:
- name: docker-compose
-
-- name: copy opensds cinder backend file if specify cinder backend
- copy:
- src: ../../../group_vars/cinder/cinder.yaml
- dest: "{{ cinder_config_path }}"
-
-- name: create directory to save source code and volume group file
- file:
- path: "{{ cinder_data_dir }}"
- state: directory
- recurse: yes
-
-- name: create volume group in thin mode
- shell:
- _raw_params: |
- function _create_lvm_volume_group {
- local vg=$1
- local size=$2
-
- local backing_file={{ cinder_data_dir }}/${vg}.img
- if ! sudo vgs $vg; then
- # Only create if the file doesn't already exists
- [[ -f $backing_file ]] || truncate -s $size $backing_file
- local vg_dev
- vg_dev=`sudo losetup -f --show $backing_file`
-
- # Only create volume group if it doesn't already exist
- if ! sudo vgs $vg; then
- sudo vgcreate $vg $vg_dev
- fi
- fi
- }
- modprobe dm_thin_pool
- _create_lvm_volume_group {{ cinder_volume_group }} 10G
- args:
- executable: /bin/bash
- become: true
-
-- name: check for python-cinderclient source code existed
- stat:
- path: "{{ cinder_data_dir }}/python-cinderclient"
- ignore_errors: yes
- register: cinderclient_existed
-
-- name: download python-cinderclient source code
- git:
- repo: https://github.com/openstack/python-cinderclient.git
- dest: "{{ cinder_data_dir }}/python-cinderclient"
- when:
- - cinderclient_existed.stat.exists is undefined or cinderclient_existed.stat.exists == false
-
-# Tested successfully in this version `ab0185bfc6e8797a35a2274c2a5ee03afb03dd60`
-# git checkout -b ab0185bfc6e8797a35a2274c2a5ee03afb03dd60
-- name: pip install cinderclinet
- shell: |
- pip install -e .
- become: true
- args:
- chdir: "{{ cinder_data_dir }}/python-cinderclient"
-
-- name: check for python-brick-cinderclient-ext source code existed
- stat:
- path: "{{ cinder_data_dir }}/python-brick-cinderclient-ext"
- ignore_errors: yes
- register: brick_existed
-
-- name: download python-brick-cinderclient-ext source code
- git:
- repo: https://github.com/openstack/python-brick-cinderclient-ext.git
- dest: "{{ cinder_data_dir }}/python-brick-cinderclient-ext"
- when:
- - brick_existed.stat.exists is undefined or brick_existed.stat.exists == false
-
-# Tested successfully in this version `a281e67bf9c12521ea5433f86cec913854826a33`
-# git checkout -b a281e67bf9c12521ea5433f86cec913854826a33
-- name: pip install python-brick-cinderclient-ext
- shell: |
- pip install -e .
- become: true
- args:
- chdir: "{{ cinder_data_dir }}/python-brick-cinderclient-ext"
-
-
-- name: check for cinder source code existed
- stat:
- path: "{{ cinder_data_dir }}/cinder"
- ignore_errors: yes
- register: cinder_existed
-
-- name: download cinder source code
- git:
- repo: https://github.com/openstack/cinder.git
- dest: "{{ cinder_data_dir }}/cinder"
- when:
- - cinder_existed.stat.exists is undefined or cinder_existed.stat.exists == false
-
-# Tested successfully in this version `7bbc95344d3961d0bf059252723fa40b33d4b3fe`
-# git checkout -b 7bbc95344d3961d0bf059252723fa40b33d4b3fe
-- name: update blockbox configuration
- shell: |
- sed -i "s/PLATFORM ?= debian:stretch/PLATFORM ?= {{ cinder_container_platform }}/g" Makefile
- sed -i "s/TAG ?= debian-cinder:latest/TAG ?= {{ cinder_image_tag }}:latest/g" Makefile
-
- sed -i "s/image: debian-cinder/image: {{ cinder_image_tag }}/g" docker-compose.yml
- sed -i "s/image: lvm-debian-cinder/image: lvm-{{ cinder_image_tag }}/g" docker-compose.yml
-
- sed -i "s/volume_group = cinder-volumes /volume_group = {{ cinder_volume_group }}/g" etc/cinder.conf
- become: true
- args:
- chdir: "{{ cinder_data_dir }}/cinder/contrib/block-box"
-
-- name: make blockbox
- shell: make blockbox
- become: true
- args:
- chdir: "{{ cinder_data_dir }}/cinder/contrib/block-box"
-
-- name: start cinder-standalone service
- shell: docker-compose up -d
- become: true
- args:
- chdir: "{{ cinder_data_dir }}/cinder/contrib/block-box"
-
-- name: wait for cinder service to start normally
- wait_for:
- host: 127.0.0.1
- port: 8776
- delay: 2
- timeout: 120
+---
+- name: install python-pip
+ apt:
+ name: python-pip
+
+- name: install lvm2
+ apt:
+ name: lvm2
+
+- name: install thin-provisioning-tools
+ apt:
+ name: thin-provisioning-tools
+
+- name: install docker-compose
+ pip:
+ name: docker-compose
+
+- name: copy opensds cinder backend file if cinder backend is specified
+ copy:
+ src: ../../../group_vars/cinder/cinder.yaml
+ dest: "{{ cinder_config_path }}"
+
+- name: create directory to save source code and volume group file
+ file:
+ path: "{{ cinder_data_dir }}"
+ state: directory
+ recurse: yes
+
+- name: create volume group in thin mode
+ shell:
+ _raw_params: |
+ function _create_lvm_volume_group {
+ local vg=$1
+ local size=$2
+
+ local backing_file={{ cinder_data_dir }}/${vg}.img
+ if ! sudo vgs $vg; then
+ # Only create the backing file if it doesn't already exist
+ [[ -f $backing_file ]] || truncate -s $size $backing_file
+ local vg_dev
+ vg_dev=`sudo losetup -f --show $backing_file`
+
+ # Only create volume group if it doesn't already exist
+ if ! sudo vgs $vg; then
+ sudo vgcreate $vg $vg_dev
+ fi
+ fi
+ }
+ modprobe dm_thin_pool
+ _create_lvm_volume_group {{ cinder_volume_group }} 10G
+ args:
+ executable: /bin/bash
+ become: true
+
+- name: check whether python-cinderclient source code exists
+ stat:
+ path: "{{ cinder_data_dir }}/python-cinderclient"
+ ignore_errors: yes
+ register: cinderclient_existed
+
+- name: download python-cinderclient source code
+ git:
+ repo: https://github.com/openstack/python-cinderclient.git
+ dest: "{{ cinder_data_dir }}/python-cinderclient"
+ when:
+ - cinderclient_existed.stat.exists is undefined or cinderclient_existed.stat.exists == false
+
+# Tested successfully in this version `ab0185bfc6e8797a35a2274c2a5ee03afb03dd60`
+# git checkout -b ab0185bfc6e8797a35a2274c2a5ee03afb03dd60
+- name: pip install cinderclient
+ shell: |
+ pip install -e .
+ become: true
+ args:
+ chdir: "{{ cinder_data_dir }}/python-cinderclient"
+
+- name: check whether python-brick-cinderclient-ext source code exists
+ stat:
+ path: "{{ cinder_data_dir }}/python-brick-cinderclient-ext"
+ ignore_errors: yes
+ register: brick_existed
+
+- name: download python-brick-cinderclient-ext source code
+ git:
+ repo: https://github.com/openstack/python-brick-cinderclient-ext.git
+ dest: "{{ cinder_data_dir }}/python-brick-cinderclient-ext"
+ when:
+ - brick_existed.stat.exists is undefined or brick_existed.stat.exists == false
+
+# Tested successfully in this version `a281e67bf9c12521ea5433f86cec913854826a33`
+# git checkout -b a281e67bf9c12521ea5433f86cec913854826a33
+- name: pip install python-brick-cinderclient-ext
+ shell: |
+ pip install -e .
+ become: true
+ args:
+ chdir: "{{ cinder_data_dir }}/python-brick-cinderclient-ext"
+
+
+- name: check whether cinder source code exists
+ stat:
+ path: "{{ cinder_data_dir }}/cinder"
+ ignore_errors: yes
+ register: cinder_existed
+
+- name: download cinder source code
+ git:
+ repo: https://github.com/openstack/cinder.git
+ dest: "{{ cinder_data_dir }}/cinder"
+ when:
+ - cinder_existed.stat.exists is undefined or cinder_existed.stat.exists == false
+
+# Tested successfully in this version `7bbc95344d3961d0bf059252723fa40b33d4b3fe`
+# git checkout -b 7bbc95344d3961d0bf059252723fa40b33d4b3fe
+- name: update blockbox configuration
+ shell: |
+ sed -i "s/PLATFORM ?= debian:stretch/PLATFORM ?= {{ cinder_container_platform }}/g" Makefile
+ sed -i "s/TAG ?= debian-cinder:latest/TAG ?= {{ cinder_image_tag }}:latest/g" Makefile
+
+ sed -i "s/image: debian-cinder/image: {{ cinder_image_tag }}/g" docker-compose.yml
+ sed -i "s/image: lvm-debian-cinder/image: lvm-{{ cinder_image_tag }}/g" docker-compose.yml
+
+ sed -i "s/volume_group = cinder-volumes /volume_group = {{ cinder_volume_group }}/g" etc/cinder.conf
+ become: true
+ args:
+ chdir: "{{ cinder_data_dir }}/cinder/contrib/block-box"
+
+- name: make blockbox
+ shell: make blockbox
+ become: true
+ args:
+ chdir: "{{ cinder_data_dir }}/cinder/contrib/block-box"
+
+- name: start cinder-standalone service
+ shell: docker-compose up -d
+ become: true
+ args:
+ chdir: "{{ cinder_data_dir }}/cinder/contrib/block-box"
+
+- name: wait for cinder service to start normally
+ wait_for:
+ host: 127.0.0.1
+ port: 8776
+ delay: 2
+ timeout: 120
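Once `docker-compose up -d` has run, the `wait_for` task above expects the Cinder API on port 8776; an assumed manual check (these commands are not part of the playbook):
```bash
# run from {{ cinder_data_dir }}/cinder/contrib/block-box after the playbook finishes
sudo docker-compose ps                         # the block-box containers should all be "Up"
curl -s http://127.0.0.1:8776/ | head -c 200   # a JSON version document indicates the API is answering
```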
diff --git a/ci/ansible/roles/osdsdock/scenarios/lvm.yml b/ci/ansible/roles/osdsdock/scenarios/lvm.yml
index 5847aa3..743fe3b 100755..100644
--- a/ci/ansible/roles/osdsdock/scenarios/lvm.yml
+++ b/ci/ansible/roles/osdsdock/scenarios/lvm.yml
@@ -1,27 +1,20 @@
----
-- name: install lvm2 external package when lvm backend enabled
- apt:
- name: lvm2
-
-- name: copy opensds lvm backend file if specify lvm backend
- copy:
- src: ../../../group_vars/lvm/lvm.yaml
- dest: "{{ lvm_config_path }}"
-
-- name: check if physical volume existed
- shell: pvdisplay {{ pv_device }}
- ignore_errors: yes
- register: pv_existed
-
-- name: create a physical volume
- shell: pvcreate {{ pv_device }}
- when: pv_existed is undefined or pv_existed.rc != 0
-
-- name: check if volume group existed
- shell: vgdisplay {{ vg_name }}
- ignore_errors: yes
- register: vg_existed
-
-- name: create a volume group
- shell: vgcreate {{ vg_name }} {{ pv_device }}
- when: vg_existed is undefined or vg_existed.rc != 0
+---
+- name: install lvm2 external package when lvm backend is enabled
+ apt:
+ name: lvm2
+
+- name: copy opensds lvm backend file if lvm backend is specified
+ copy:
+ src: ../../../group_vars/lvm/lvm.yaml
+ dest: "{{ lvm_config_path }}"
+
+- name: check if volume group exists
+ shell: vgdisplay {{ vg_name }}
+ ignore_errors: yes
+ register: vg_existed
+
+- name: create a volume group and initialize it
+ lvg:
+ vg: "{{ vg_name }}"
+ pvs: "{{ pv_devices }}"
+ when: vg_existed is undefined or vg_existed.rc != 0
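The `lvg` task above replaces the earlier `pvcreate`/`vgcreate` shell steps and takes a list of physical volumes; a minimal sketch of the variables it assumes from `group_vars/osdsdock.yml` (the device path and VG name are placeholders, adjust for your host):
```yaml
# group_vars/osdsdock.yml -- assumed example for the lvm backend
vg_name: opensds-volumes   # volume group consumed by the lvm driver
pv_devices:                # one or more raw block devices handed to the lvg module
  - /dev/sdb
```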
diff --git a/ci/ansible/roles/osdsdock/tasks/main.yml b/ci/ansible/roles/osdsdock/tasks/main.yml
index 68f9fdb..215cf00 100755..100644
--- a/ci/ansible/roles/osdsdock/tasks/main.yml
+++ b/ci/ansible/roles/osdsdock/tasks/main.yml
@@ -1,44 +1,44 @@
----
-- name: include scenarios/lvm.yml
- include: scenarios/lvm.yml
- when: enabled_backend == "lvm"
-
-- name: include scenarios/ceph.yml
- include: scenarios/ceph.yml
- when: enabled_backend == "ceph"
-
-- name: include scenarios/cinder.yml
- include: scenarios/cinder.yml
- when: enabled_backend == "cinder" and use_cinder_standalone == false
-
-- name: include scenarios/cinder_standalone.yml
- include: scenarios/cinder_standalone.yml
- when: enabled_backend == "cinder" and use_cinder_standalone == true
-
-- name: run osdsdock daemon service
- shell:
- cmd: |
- i=0
- while
- i="$((i+1))"
- [ "$i" -lt 4 ]
- do
- nohup bin/osdsdock &>/dev/null &
- sleep 5
- ps aux | grep osdsdock | grep -v grep && break
- done
- args:
- chdir: "{{ opensds_build_dir }}/out"
- when: container_enabled == false
-
-- name: run osdsdock containerized service
- docker:
- name: osdsdock
- image: opensdsio/opensds-dock:latest
- state: started
- net: host
- privileged: true
- volumes:
- - "/etc/opensds/:/etc/opensds"
- - "/etc/ceph/:/etc/ceph"
- when: container_enabled == true
+---
+- name: include scenarios/lvm.yml
+ include: scenarios/lvm.yml
+ when: enabled_backend == "lvm"
+
+- name: include scenarios/ceph.yml
+ include: scenarios/ceph.yml
+ when: enabled_backend == "ceph"
+
+- name: include scenarios/cinder.yml
+ include: scenarios/cinder.yml
+ when: enabled_backend == "cinder" and use_cinder_standalone == false
+
+- name: include scenarios/cinder_standalone.yml
+ include: scenarios/cinder_standalone.yml
+ when: enabled_backend == "cinder" and use_cinder_standalone == true
+
+- name: run osdsdock daemon service
+ shell:
+ cmd: |
+ i=0
+ while
+ i="$((i+1))"
+ [ "$i" -lt 4 ]
+ do
+ nohup bin/osdsdock > osdsdock.out 2> osdsdock.err < /dev/null &
+ sleep 5
+ ps aux | grep osdsdock | grep -v grep && break
+ done
+ args:
+ chdir: "{{ opensds_dir }}"
+ when: container_enabled == false
+
+- name: run osdsdock containerized service
+ docker:
+ name: osdsdock
+ image: opensdsio/opensds-dock:latest
+ state: started
+ net: host
+ privileged: true
+ volumes:
+ - "/etc/opensds/:/etc/opensds"
+ - "/etc/ceph/:/etc/ceph"
+ when: container_enabled == true
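Note the daemon task now launches `bin/osdsdock` from `{{ opensds_dir }}` instead of `{{ opensds_build_dir }}/out`; a hedged sketch of the assumed layout (the exact release path is defined in `group_vars/common.yml` and may differ):
```yaml
# group_vars/common.yml -- assumed value; opensds_dir must contain bin/osdsdock and bin/osdslet
opensds_dir: /opt/opensds-linux-amd64
```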
diff --git a/ci/ansible/roles/osdslet/tasks/main.yml b/ci/ansible/roles/osdslet/tasks/main.yml
index 14ab40e..02b71fc 100755..100644
--- a/ci/ansible/roles/osdslet/tasks/main.yml
+++ b/ci/ansible/roles/osdslet/tasks/main.yml
@@ -1,26 +1,26 @@
----
-- name: run osdslet daemon service
- shell:
- cmd: |
- i=0
- while
- i="$((i+1))"
- [ "$i" -lt 4 ]
- do
- nohup bin/osdslet > osdslet.out 2> osdslet.err < /dev/null &
- sleep 5
- ps aux | grep osdslet | grep -v grep && break
- done
- args:
- chdir: "{{ opensds_build_dir }}/out"
- when: container_enabled == false
-
-- name: run osdslet containerized service
- docker:
- name: osdslet
- image: opensdsio/opensds-controller:latest
- state: started
- net: host
- volumes:
- - "/etc/opensds/:/etc/opensds"
- when: container_enabled == true
+---
+- name: run osdslet daemon service
+ shell:
+ cmd: |
+ i=0
+ while
+ i="$((i+1))"
+ [ "$i" -lt 4 ]
+ do
+ nohup bin/osdslet > osdslet.out 2> osdslet.err < /dev/null &
+ sleep 5
+ ps aux | grep osdslet | grep -v grep && break
+ done
+ args:
+ chdir: "{{ opensds_dir }}"
+ when: container_enabled == false
+
+- name: run osdslet containerized service
+ docker:
+ name: osdslet
+ image: opensdsio/opensds-controller:latest
+ state: started
+ net: host
+ volumes:
+ - "/etc/opensds/:/etc/opensds"
+ when: container_enabled == true
diff --git a/ci/ansible/site.yml b/ci/ansible/site.yml
index ea43610..f0d2048 100755..100644
--- a/ci/ansible/site.yml
+++ b/ci/ansible/site.yml
@@ -1,18 +1,19 @@
----
-# Defines deployment design and assigns role to server groups
-
-- name: deploy an opensds local cluster
- hosts: all
- remote_user: root
- vars_files:
- - group_vars/common.yml
- - group_vars/osdsdb.yml
- - group_vars/osdslet.yml
- - group_vars/osdsdock.yml
- gather_facts: false
- become: True
- roles:
- - common
- - osdsdb
- - osdslet
- - osdsdock
+---
+# Defines deployment design and assigns roles to server groups
+
+- name: deploy an opensds local cluster
+ hosts: all
+ remote_user: root
+ vars_files:
+ - group_vars/common.yml
+ - group_vars/osdsdb.yml
+ - group_vars/osdslet.yml
+ - group_vars/osdsdock.yml
+ gather_facts: false
+ become: True
+ roles:
+ - common
+ - osdsdb
+ - osdslet
+ - osdsdock
+ - nbp-installer
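With `nbp-installer` appended to the role list, one playbook run now covers both the OpenSDS services and the northbound plugin; an assumed invocation, using the inventory file name from `ci/ansible/local.hosts`:
```bash
cd ci/ansible
sudo ansible all -m ping -i local.hosts        # verify the hosts are reachable first
sudo ansible-playbook site.yml -i local.hosts  # deploys common, osdsdb, osdslet, osdsdock and nbp-installer
```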
diff --git a/ci/nbp-ansible/README.md b/ci/nbp-ansible/README.md
deleted file mode 100644
index 5a4c5ab..0000000
--- a/ci/nbp-ansible/README.md
+++ /dev/null
@@ -1,51 +0,0 @@
-# nbp-ansible
-This is an installation tool for opensds northbound plugins using ansible.
-
-## Install work
-
-### Pre-config (Ubuntu 16.04)
-First download some system packages:
-```
-sudo apt-get install -y openssh-server git
-```
-Then config ```/etc/ssh/sshd_config``` file and change one line:
-```conf
-PermitRootLogin yes
-```
-Next generate ssh-token:
-```bash
-ssh-keygen -t rsa
-ssh-copy-id -i ~/.ssh/id_rsa.pub <ip_address> # IP address of the target machine of the installation
-```
-
-### Install ansible tool
-```bash
-sudo add-apt-repository ppa:ansible/ansible # This step is needed to upgrade ansible to version 2.4.2 which is required for the ceph backend.
-sudo apt-get update
-sudo apt-get install ansible
-ansible --version # Ansible version 2.4.2 or higher is required for ceph; 2.0.0.2 or higher is needed for other backends.
-```
-
-### Configure nbp plugin variable
-##### Common environment:
-Configure the ```nbp_plugin_type``` in `group_vars/common.yml` according to your environment:
-```yaml
-nbp_plugin_type: flexvolume # flexvolume is the default integration way, but you can change it from 'csi', 'flexvolume'
-```
-
-### Check if the hosts can be reached
-```bash
-sudo ansible all -m ping -i nbp.hosts
-```
-
-### Run opensds-ansible playbook to start deploy
-```bash
-sudo ansible-playbook site.yml -i nbp.hosts
-```
-
-## Uninstall work
-
-### Run nbp-ansible playbook to clean the environment
-```bash
-sudo ansible-playbook clean.yml -i nbp.hosts
-```
diff --git a/ci/nbp-ansible/clean.yml b/ci/nbp-ansible/clean.yml
deleted file mode 100644
index 6e5f629..0000000
--- a/ci/nbp-ansible/clean.yml
+++ /dev/null
@@ -1,12 +0,0 @@
----
-# Defines some clean processes when banishing the cluster.
-
-- name: destory all opensds nbp files
- hosts: worker-nodes
- remote_user: root
- vars_files:
- - group_vars/common.yml
- gather_facts: false
- become: True
- roles:
- - cleaner
diff --git a/ci/nbp-ansible/group_vars/common.yml b/ci/nbp-ansible/group_vars/common.yml
deleted file mode 100644
index 3860660..0000000
--- a/ci/nbp-ansible/group_vars/common.yml
+++ /dev/null
@@ -1,33 +0,0 @@
----
-# Variables here are applicable to all host groups NOT roles
-
-# This sample file generated by generate_group_vars_sample.sh
-
-# Dummy variable to avoid error because ansible does not recognize the
-# file as a good configuration file when no variable in it.
-dummy:
-
-# You can override default vars defined in defaults/main.yml here,
-# but I would advice to use host or group vars instead
-
-
-###########
-# GENERAL #
-###########
-
-nbp_release: v0.1.0
-
-# These fields are not suggested to be modified
-nbp_download_url: https://github.com/opensds/nbp/releases/download/{{ nbp_release }}/opensds-k8s-{{ nbp_release }}-linux-amd64.tar.gz
-nbp_tarball_url: /opt/opensds-k8s-{{ nbp_release }}-linux-amd64.tar.gz
-nbp_dir: /opt/opensds-k8s-{{ nbp_release }}-linux-amd64
-
-
-###########
-# PLUGIN #
-###########
-
-nbp_plugin_type: flexvolume # flexvolume is the default integration way, but you can change it from 'csi', 'flexvolume'
-
-flexvolume_plugin_dir: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/opensds.io~opensds
-
diff --git a/ci/nbp-ansible/nbp.hosts b/ci/nbp-ansible/nbp.hosts
deleted file mode 100644
index 84d0dc6..0000000
--- a/ci/nbp-ansible/nbp.hosts
+++ /dev/null
@@ -1,2 +0,0 @@
-[worker-nodes]
-localhost ansible_connection=local \ No newline at end of file
diff --git a/ci/nbp-ansible/roles/cleaner/tasks/main.yml b/ci/nbp-ansible/roles/cleaner/tasks/main.yml
deleted file mode 100644
index 9e81756..0000000
--- a/ci/nbp-ansible/roles/cleaner/tasks/main.yml
+++ /dev/null
@@ -1,22 +0,0 @@
----
-- name: clean opensds flexvolume plugins binary file
- file:
- path: "{{ flexvolume_plugin_dir }}"
- state: absent
- force: yes
- ignore_errors: yes
- when: nbp_plugin_type == "flexvolume"
-
-- name: clean nbp release files
- file:
- path: "{{ nbp_dir }}"
- state: absent
- force: yes
- ignore_errors: yes
-
-- name: clean nbp release tarball file
- file:
- path: "{{ nbp_tarball_url }}"
- state: absent
- force: yes
- ignore_errors: yes
diff --git a/ci/nbp-ansible/roles/common/tasks/main.yml b/ci/nbp-ansible/roles/common/tasks/main.yml
deleted file mode 100644
index b612e24..0000000
--- a/ci/nbp-ansible/roles/common/tasks/main.yml
+++ /dev/null
@@ -1,24 +0,0 @@
----
-- name: Run the equivalent of "apt-get update" as a separate step
- apt:
- update_cache: yes
-
-- name: check for nbp release files existed
- stat:
- path: "{{ nbp_dir }}"
- ignore_errors: yes
- register: releasesexisted
-
-- name: download nbp release files
- get_url:
- url={{ nbp_download_url }}
- dest={{ nbp_tarball_url }}
- when:
- - releasesexisted.stat.exists is undefined or releasesexisted.stat.exists == false
-
-- name: extract the nbp release tarball
- unarchive:
- src={{ nbp_tarball_url }}
- dest=/opt/
- when:
- - releasesexisted.stat.exists is undefined or releasesexisted.stat.exists == false
diff --git a/ci/nbp-ansible/site.yml b/ci/nbp-ansible/site.yml
deleted file mode 100644
index 7e22f83..0000000
--- a/ci/nbp-ansible/site.yml
+++ /dev/null
@@ -1,13 +0,0 @@
----
-# Defines deployment design and assigns role to server groups
-
-- name: deploy opensds flexvolume plugin in all kubelet nodes
- hosts: worker-nodes
- remote_user: root
- vars_files:
- - group_vars/common.yml
- gather_facts: false
- become: True
- roles:
- - common
- - installer
diff --git a/tutorials/csi-plugin.md b/tutorials/csi-plugin.md
index e3b0174..9750791 100644
--- a/tutorials/csi-plugin.md
+++ b/tutorials/csi-plugin.md
@@ -31,19 +31,20 @@
```
### [kubernetes](https://github.com/kubernetes/kubernetes) local cluster
-* You can startup the lastest k8s local cluster by executing commands blow:
+* You can start up a v1.9.0 k8s local cluster by executing the commands below:
```
cd $HOME
git clone https://github.com/kubernetes/kubernetes.git
cd $HOME/kubernetes
+ git checkout v1.9.0
make
echo alias kubectl='$HOME/kubernetes/cluster/kubectl.sh' >> /etc/profile
ALLOW_PRIVILEGED=true FEATURE_GATES=CSIPersistentVolume=true,MountPropagation=true RUNTIME_CONFIG="storage.k8s.io/v1alpha1=true" LOG_LEVEL=5 hack/local-up-cluster.sh
```
### [opensds](https://github.com/opensds/opensds) local cluster
-* For testing purposes you can deploy OpenSDS referring the [OpenSDS Cluster Installation through Ansible](https://github.com/opensds/opensds/wiki/OpenSDS-Cluster-Installation-through-Ansible) wiki. Besides, you need to deploy opensds csi plugin refering to ```nbp-ansible/README.md```.
+* For testing purposes you can deploy OpenSDS by referring to ```ansible/README.md```.
## Testing steps ##
diff --git a/tutorials/flexvolume-plugin.md b/tutorials/flexvolume-plugin.md
index c85d752..269da4b 100644
--- a/tutorials/flexvolume-plugin.md
+++ b/tutorials/flexvolume-plugin.md
@@ -51,7 +51,7 @@
### [opensds](https://github.com/opensds/opensds) local cluster
-* For testing purposes you can deploy OpenSDS local cluster referring to ```ansible/README.md```. Besides, you need to deploy opensds flexvolume plugin refering to ```nbp-ansible/README.md```.
+* For testing purposes you can deploy an OpenSDS local cluster by referring to ```ansible/README.md```.
## Testing steps ##