54 files changed, 2320 insertions, 869 deletions
diff --git a/ci/ansible/README.md b/ci/ansible/README.md
deleted file mode 100644
index 8e86694..0000000
--- a/ci/ansible/README.md
+++ /dev/null
@@ -1,188 +0,0 @@
-# opensds-ansible
-This is an installation tool for opensds using ansible.
-
-## 1. How to install an opensds local cluster
-### Pre-config (Ubuntu 16.04)
-First download some system packages:
-```
-sudo apt-get install -y openssh-server git make gcc
-```
-Then edit the ```/etc/ssh/sshd_config``` file and change one line:
-```conf
-PermitRootLogin yes
-```
-Next, generate an ssh key and copy it to the target machine:
-```bash
-ssh-keygen -t rsa
-ssh-copy-id -i ~/.ssh/id_rsa.pub <ip_address> # IP address of the target machine of the installation
-```
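Passwordless root login is what the playbook relies on later, so it is worth confirming the key was accepted before moving on; `<ip_address>` below is the same placeholder used above:
```bash
# Should print "ssh ok" without asking for a password.
ssh root@<ip_address> 'echo ssh ok'
```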
-
-### Install docker
-If you use standalone cinder as the backend, you also need to install docker to run the cinder service. Please see the [docker installation document](https://docs.docker.com/engine/installation/linux/docker-ce/ubuntu/) for details.
-
-### Install ansible tool
-To install ansible, you can run `install_ansible.sh` directly or enter the commands below:
-```bash
-sudo add-apt-repository ppa:ansible/ansible # This step is needed to upgrade ansible to version 2.4.2 which is required for the ceph backend.
-sudo apt-get update
-sudo apt-get install ansible
-ansible --version # Ansible version 2.4.2 or higher is required for ceph; 2.0.0.2 or higher is needed for other backends.
-```
-
-### Configure opensds cluster variables:
-##### System environment:
-Configure these variables below in `group_vars/common.yml`:
-```yaml
-opensds_release: v0.1.4 # The version should be at least v0.1.4.
-nbp_release: v0.1.0 # The version should be at least v0.1.0.
-
-container_enabled: <false_or_true>
-```
-
-If you want to integrate OpenSDS with a cloud platform (for example, k8s), please modify the `nbp_plugin_type` variable in `group_vars/common.yml`:
-```yaml
-nbp_plugin_type: standalone # standalone is the default integration way, but you can change it to 'csi' or 'flexvolume'
-```
-
-#### Database configuration
-Currently OpenSDS uses `etcd` as the database backend, and the default db endpoint is `localhost:2379,localhost:2380`. To avoid conflicts with an existing environment (such as a local k8s cluster), we suggest changing the etcd ports in `group_vars/osdsdb.yml`:
-```yaml
-db_endpoint: localhost:62379,localhost:62380
-
-etcd_host: 127.0.0.1
-etcd_port: 62379
-etcd_peer_port: 62380
-```
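Once the playbook has brought etcd up on the non-default ports, a quick way to confirm the endpoint is reachable is etcd's HTTP health check (a sketch assuming the port values above):
```bash
# Query the health endpoint on the custom client port configured above.
curl -s http://127.0.0.1:62379/health
# A healthy member answers with a small JSON document such as {"health": "true"}.
```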
-
-##### LVM
-If `lvm` is chosen as storage backend, modify `group_vars/osdsdock.yml`:
-```yaml
-enabled_backend: lvm # Change it according to the chosen backend. Supported backends include 'lvm', 'ceph', and 'cinder'
-pv_devices: # Specify block devices and ensure they exist if you choose lvm
- #- /dev/sdc
- #- /dev/sdd
-vg_name: "specified_vg_name" # Specify a name for VG if choosing lvm
-```
-Modify ```group_vars/lvm/lvm.yaml``` and change the pool name to match `vg_name` above:
-```yaml
-"vg001" # change pool name to be the same as vg_name
-```
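Before running the playbook it can help to verify that the devices listed under `pv_devices` are really unused, since the volume group is built from them; a rough check (device names are just the examples from above, `<vg_name>` is whatever you configured):
```bash
# Confirm the devices exist and carry no partitions or filesystem signatures.
lsblk /dev/sdc /dev/sdd
sudo blkid /dev/sdc /dev/sdd   # empty output usually means no existing signature
# Make sure no volume group with the chosen name exists yet.
sudo vgs <vg_name> || echo "volume group not present yet - the playbook will create it"
```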
-##### Ceph
-If `ceph` is chosen as storage backend, modify `group_vars/osdsdock.yml`:
-```yaml
-enabled_backend: ceph # Change it according to the chosen backend. Supported backends include 'lvm', 'ceph', and 'cinder'.
-ceph_pools: # Specify one or more pool names (any names) if choosing ceph
- - rbd
- #- ssd
- #- sas
-```
-Modify ```group_vars/ceph/ceph.yaml``` and change the pool name to match the pool defined in `ceph_pools`. If you enable multiple pools, append additional entries in the same format:
-```yaml
-"rbd" # change pool name to be the same as ceph pool
-```
-Configure two files under ```group_vars/ceph```: `all.yml` and `osds.yml`. Here is an example of each:
-
-```group_vars/ceph/all.yml```:
-```yml
-ceph_origin: repository
-ceph_repository: community
-ceph_stable_release: luminous # Choose luminous as default version
-public_network: "192.168.3.0/24" # Run 'ip -4 address' to check the ip address
-cluster_network: "{{ public_network }}"
-monitor_interface: eth1 # Change to the network interface on the target machine
-```
-```group_vars/ceph/osds.yml```:
-```yml
-devices: # For ceph devices, append one or more devices as in the example below:
- - '/dev/sda' # Ensure this device exists and is available if ceph is chosen
- - '/dev/sdb' # Ensure this device exists and is available if ceph is chosen
-osd_scenario: collocated
-```
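ceph-disk only uses empty devices (disks with existing partition tables are skipped), so checking the OSD candidates first avoids silently losing them; a sketch using the example devices above:
```bash
# List the candidate OSD disks and any existing partitions or signatures.
lsblk /dev/sda /dev/sdb
sudo blkid /dev/sda /dev/sdb
# If a disk was used before, wipe its signatures so it is accepted as an OSD.
# Destructive - double-check the device name first.
sudo wipefs --all /dev/sdb
```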
-
-##### Cinder
-If `cinder` is chosen as storage backend, modify `group_vars/osdsdock.yml`:
-```yaml
-enabled_backend: cinder # Change it according to the chosen backend. Supported backends include 'lvm', 'ceph', and 'cinder'
-
-# Use block-box to install cinder standalone if true, see details in:
-use_cinder_standalone: true
-# If true, you can configure cinder_container_platform, cinder_image_tag,
-# cinder_volume_group.
-
-# Default: debian:stretch; ubuntu:xenial and centos:7 are also supported.
-cinder_container_platform: debian:stretch
-# The image tag can be modified arbitrarily, as long as it follows the image naming
-# conventions; default: debian-cinder
-cinder_image_tag: debian-cinder
-# Cinder standalone uses the lvm driver as its default driver, so `volume_group`
-# should be configured; the default is cinder-volumes. The volume group will be
-# removed when the ansible script cleans the environment.
-cinder_volume_group: cinder-volumes
-```
-
-Configure the auth and pool options for accessing cinder in `group_vars/cinder/cinder.yaml`. No additional configuration changes are needed when using cinder standalone.
-
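The cinder standalone path is driven through block-box and docker-compose, so a short pre-flight check on the deployment node can save a failed run (a sketch; exact versions are not pinned by this guide):
```bash
# block-box (cinder standalone) is managed with docker-compose, so both tools must be present.
docker --version
docker-compose --version
# The docker daemon must be running before the playbook starts the cinder containers.
sudo systemctl is-active docker
```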
-### Check if the hosts can be reached
-```bash
-sudo ansible all -m ping -i local.hosts
-```
-
-### Run the opensds-ansible playbook to start the deployment
-```bash
-sudo ansible-playbook site.yml -i local.hosts
-```
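If the run fails partway through, re-running with standard ansible-playbook options often shows the failing task quickly; these flags are generic Ansible options, not something added by this repository:
```bash
# Verbose output prints each task result together with the module arguments.
sudo ansible-playbook site.yml -i local.hosts -vvv
# Dry run to preview changes; note that shell/command tasks are skipped in check mode.
sudo ansible-playbook site.yml -i local.hosts --check
```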
-
-## 2. How to test the opensds cluster
-
-### Configure opensds CLI tool
-```bash
-sudo cp /opt/opensds-{opensds-release}-linux-amd64/bin/osdsctl /usr/local/bin
-export OPENSDS_ENDPOINT=http://127.0.0.1:50040
-export OPENSDS_AUTH_STRATEGY=noauth
-
-osdsctl pool list # Check if the pool resource is available
-```
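The two exported variables only live in the current shell; to keep `osdsctl` usable in later sessions you can persist them, for example:
```bash
# Persist the osdsctl environment for future shells (same values as above).
cat >> ~/.bashrc <<'EOF'
export OPENSDS_ENDPOINT=http://127.0.0.1:50040
export OPENSDS_AUTH_STRATEGY=noauth
EOF
source ~/.bashrc
```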
-
-### Create a default profile first.
-```
-osdsctl profile create '{"name": "default", "description": "default policy"}'
-```
-
-### Create a volume.
-```
-osdsctl volume create 1 --name=test-001
-```
-For cinder, the availability zone (az) needs to be specified.
-```
-osdsctl volume create 1 --name=test-001 --az nova
-```
-
-### List all volumes.
-```
-osdsctl volume list
-```
-
-### Delete the volume.
-```
-osdsctl volume delete <your_volume_id>
-```
-
-
-## 3. How to purge and clean the opensds cluster
-
-### Run the opensds-ansible playbook to clean the environment
-```bash
-sudo ansible-playbook clean.yml -i local.hosts
-```
-
-### Run the ceph-ansible playbook to clean the ceph cluster if ceph is deployed
-```bash
-cd /opt/ceph-ansible
-sudo ansible-playbook infrastructure-playbooks/purge-cluster.yml -i ceph.hosts
-```
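A quick way to confirm the purge really removed the cluster before reinstalling (just a sanity check, not part of the playbook):
```bash
# No ceph daemons should be left running and the data directory should be gone.
ps -ef | grep [c]eph- || echo "no ceph daemons running"
ls /var/lib/ceph 2>/dev/null || echo "ceph data directory removed"
```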
-
-### Remove ceph-ansible source code (optional)
-```bash
-cd ..
-sudo rm -rf /opt/ceph-ansible
-```
diff --git a/ci/ansible/clean.yml b/ci/ansible/clean.yml
index fd2f1c9..0df69bf 100644
--- a/ci/ansible/clean.yml
+++ b/ci/ansible/clean.yml
@@ -1,3 +1,17 @@
+# Copyright (c) 2018 Huawei Technologies Co., Ltd. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
---
# Defines some clean processes when banishing the cluster.
@@ -6,9 +20,12 @@ remote_user: root
vars_files:
- group_vars/common.yml
+ - group_vars/auth.yml
- group_vars/osdsdb.yml
+ - group_vars/osdslet.yml
- group_vars/osdsdock.yml
+ - group_vars/dashboard.yml
gather_facts: false
become: True
roles:
- - cleaner
\ No newline at end of file
+ - cleaner
diff --git a/ci/ansible/group_vars/auth.yml b/ci/ansible/group_vars/auth.yml
new file mode 100644
index 0000000..a85a35a
--- /dev/null
+++ b/ci/ansible/group_vars/auth.yml
@@ -0,0 +1,39 @@
+# Copyright (c) 2018 Huawei Technologies Co., Ltd. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+---
+# Dummy variable to avoid error because ansible does not recognize the
+# file as a good configuration file when no variable in it.
+dummy:
+
+
+###########
+# GENERAL #
+###########
+
+# OpenSDS authentication strategy, support 'noauth' and 'keystone'.
+opensds_auth_strategy: keystone
+
+# The URL should be replaced with the keystone actual URL
+keystone_os_auth_url: http://127.0.0.1/identity
+
+############
+# KEYSTONE #
+############
+
+# Execute the unstack.sh
+uninstall_keystone: true
+
+# Execute the stack.sh
+cleanup_keystone: true
diff --git a/ci/ansible/group_vars/ceph/all.yml b/ci/ansible/group_vars/ceph/all.yml
index 9594d33..b5f630a 100644
--- a/ci/ansible/group_vars/ceph/all.yml
+++ b/ci/ansible/group_vars/ceph/all.yml
@@ -1,3 +1,17 @@
+# Copyright (c) 2018 Huawei Technologies Co., Ltd. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
---
# Variables here are applicable to all host groups NOT roles
@@ -9,6 +23,17 @@ dummy:
# You can override vars by using host or group vars
+ceph_origin: repository
+ceph_repository: community
+ceph_stable_release: luminous
+public_network: "192.168.3.0/24"
+cluster_network: "{{ public_network }}"
+monitor_interface: eth1
+devices:
+ - '/dev/sda'
+ #- '/dev/sdb'
+osd_scenario: collocated
+
###########
# GENERAL #
###########
@@ -125,8 +150,7 @@ dummy:
# - repository
# - distro
# - local
-ceph_origin: repository
-ceph_repository: community
+
#ceph_repository: "{{ 'community' if ceph_stable else 'rhcs' if ceph_rhcs else 'dev' if ceph_dev else 'uca' if ceph_stable_uca else 'custom' if ceph_custom else 'dummy' }}" # backward compatibility with stable-2.2, will disappear in stable 3.1
#valid_ceph_repository:
@@ -143,7 +167,7 @@ ceph_repository: community
#
#ceph_mirror: http://download.ceph.com
#ceph_stable_key: https://download.ceph.com/keys/release.asc
-ceph_stable_release: luminous
+#ceph_stable_release: dummy
#ceph_stable_repo: "{{ ceph_mirror }}/debian-{{ ceph_stable_release }}"
#nfs_ganesha_stable: true # use stable repos for nfs-ganesha
@@ -313,8 +337,7 @@ ceph_stable_release: luminous
# These variables must be defined at least in all.yml and overrided if needed (inventory host file or group_vars/*.yml).
# Eg. If you want to specify for each monitor which address the monitor will bind to you can set it in your **inventory host file** by using 'monitor_address' variable.
# Preference will go to monitor_address if both monitor_address and monitor_interface are defined.
-# To use an IPv6 address, use the monitor_address setting instead (and set ip_version to ipv6)
-monitor_interface: ens3
+#monitor_interface: "{{ ceph_mon_docker_interface if ceph_mon_docker_interface != 'interface' else 'interface' }}" # backward compatibility with stable-2.2, will disappear in stable 3.1
#monitor_address: 0.0.0.0
#monitor_address_block: subnet
# set to either ipv4 or ipv6, whichever your network is using
@@ -323,9 +346,9 @@ monitor_interface: ens3
## OSD options
#
-journal_size: 100 # OSD journal size in MB
-public_network: 100.64.128.40/24
-cluster_network: "{{ public_network }}"
+#journal_size: 5120 # OSD journal size in MB
+#public_network: "{{ ceph_mon_docker_subnet if ceph_mon_docker_subnet != '0.0.0.0/0' else '0.0.0.0/0' }}" # backward compatibility with stable-2.2, will disappear in stable 3.1
+#cluster_network: "{{ public_network | regex_replace(' ', '') }}"
#osd_mkfs_type: xfs
#osd_mkfs_options_xfs: -f -i size=2048
#osd_mount_options_xfs: noatime,largeio,inode64,swalloc
@@ -358,11 +381,10 @@ cluster_network: "{{ public_network }}"
# These variables must be defined at least in all.yml and overrided if needed (inventory host file or group_vars/*.yml).
# Eg. If you want to specify for each radosgw node which address the radosgw will bind to you can set it in your **inventory host file** by using 'radosgw_address' variable.
# Preference will go to radosgw_address if both radosgw_address and radosgw_interface are defined.
-# To use an IPv6 address, use the radosgw_address setting instead (and set ip_version to ipv6)
#radosgw_interface: interface
#radosgw_address: "{{ '0.0.0.0' if rgw_containerized_deployment else 'address' }}" # backward compatibility with stable-2.2, will disappear in stable 3.1
#radosgw_address_block: subnet
-#radosgw_keystone: false # activate OpenStack Keystone options full detail here: http://ceph.com/docs/master/radosgw/keystone/
+#radosgw_keystone_ssl: false # activate this when using keystone PKI keys
# Rados Gateway options
#email_address: foo@bar.com
@@ -475,8 +497,8 @@ cluster_network: "{{ public_network }}"
#ceph_docker_registry: docker.io
#ceph_docker_enable_centos_extra_repo: false
#ceph_docker_on_openstack: false
-#ceph_mon_docker_interface: "{{ monitor_interface }}" # backward compatibility with stable-2.2, will disappear in stable 3.1
-#ceph_mon_docker_subnet: "{{ public_network }}" # backward compatibility with stable-2.2, will disappear in stable 3.1
+#ceph_mon_docker_interface: "interface" # backward compatibility with stable-2.2, will disappear in stable 3.1
+#ceph_mon_docker_subnet: "0.0.0.0/0" # backward compatibility with stable-2.2, will disappear in stable 3.1
#mon_containerized_deployment: False # backward compatibility with stable-2.2, will disappear in stable 3.1
#osd_containerized_deployment: False # backward compatibility with stable-2.2, will disappear in stable 3.1
#mds_containerized_deployment: False # backward compatibility with stable-2.2, will disappear in stable 3.1
@@ -499,3 +521,8 @@ cluster_network: "{{ public_network }}"
#rolling_update: false
+#####################
+# Docker pull retry #
+#####################
+#docker_pull_retry: 3
+#docker_pull_timeout: "300s"
diff --git a/ci/ansible/group_vars/ceph/ceph.hosts b/ci/ansible/group_vars/ceph/ceph.hosts
index 34a7b26..ab200a3 100644
--- a/ci/ansible/group_vars/ceph/ceph.hosts
+++ b/ci/ansible/group_vars/ceph/ceph.hosts
@@ -1,3 +1,17 @@
+# Copyright (c) 2018 Huawei Technologies Co., Ltd. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
[mons]
localhost ansible_connection=local
diff --git a/ci/ansible/group_vars/ceph/ceph.yaml b/ci/ansible/group_vars/ceph/ceph.yaml
index 5e70724..68b87b6 100644
--- a/ci/ansible/group_vars/ceph/ceph.yaml
+++ b/ci/ansible/group_vars/ceph/ceph.yaml
@@ -1,8 +1,30 @@
+# Copyright (c) 2018 Huawei Technologies Co., Ltd. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
configFile: /etc/ceph/ceph.conf
pool:
- "rbd": # change pool name same to ceph pool, but don't change it if you choose lvm backend
- diskType: SSD
- AZ: default
- accessProtocol: rbd
- thinProvisioned: true
- compressed: false
+ rbd: # change the pool name to match the ceph pool, but don't change it if you choose the lvm backend
+ storageType: block
+ availabilityZone: default
+ extras:
+ dataStorage:
+ provisioningPolicy: Thin
+ isSpaceEfficient: true
+ ioConnectivity:
+ accessProtocol: rbd
+ maxIOPS: 6000000
+ maxBWS: 500
+ advanced:
+ diskType: SSD
+ latency: 5ms
diff --git a/ci/ansible/group_vars/ceph/osds.yml b/ci/ansible/group_vars/ceph/osds.yml
deleted file mode 100644
index 57cf581..0000000
--- a/ci/ansible/group_vars/ceph/osds.yml
+++ /dev/null
@@ -1,259 +0,0 @@
----
-# Variables here are applicable to all host groups NOT roles
-
-# This sample file generated by generate_group_vars_sample.sh
-
-# Dummy variable to avoid error because ansible does not recognize the
-# file as a good configuration file when no variable in it.
-dummy:
-
-# You can override default vars defined in defaults/main.yml here,
-# but I would advice to use host or group vars instead
-
-#raw_journal_devices: "{{ dedicated_devices }}" # backward compatibility with stable-2.2, will disappear in stable 3.1
-#journal_collocation: False # backward compatibility with stable-2.2, will disappear in stable 3.1
-#raw_multi_journal: False # backward compatibility with stable-2.2, will disappear in stable 3.1
-#dmcrytpt_journal_collocation: False # backward compatibility with stable-2.2, will disappear in stable 3.1
-#dmcrypt_dedicated_journal: False # backward compatibility with stable-2.2, will disappear in stable 3.1
-
-
-###########
-# GENERAL #
-###########
-
-# Even though OSD nodes should not have the admin key
-# at their disposal, some people might want to have it
-# distributed on OSD nodes. Setting 'copy_admin_key' to 'true'
-# will copy the admin key to the /etc/ceph/ directory
-#copy_admin_key: false
-
-
-####################
-# OSD CRUSH LOCATION
-####################
-
-# /!\
-#
-# BE EXTREMELY CAREFUL WITH THIS OPTION
-# DO NOT USE IT UNLESS YOU KNOW WHAT YOU ARE DOING
-#
-# /!\
-#
-# It is probably best to keep this option to 'false' as the default
-# suggests it. This option should only be used while doing some complex
-# CRUSH map. It allows you to force a specific location for a set of OSDs.
-#
-# The following options will build a ceph.conf with OSD sections
-# Example:
-# [osd.X]
-# osd crush location = "root=location"
-#
-# This works with your inventory file
-# To match the following 'osd_crush_location' option the inventory must look like:
-#
-# [osds]
-# osd0 ceph_crush_root=foo ceph_crush_rack=bar
-
-#crush_location: false
-#osd_crush_location: "\"root={{ ceph_crush_root }} rack={{ ceph_crush_rack }} host={{ ansible_hostname }}\""
-
-
-##############
-# CEPH OPTIONS
-##############
-
-# Devices to be used as OSDs
-# You can pre-provision disks that are not present yet.
-# Ansible will just skip them. Newly added disk will be
-# automatically configured during the next run.
-#
-
-
-# Declare devices to be used as OSDs
-# All scenario(except 3rd) inherit from the following device declaration
-
-devices:
-# - /dev/sda
-# - /dev/sdc
-# - /dev/sdd
-# - /dev/sde
-
-#devices: []
-
-
-#'osd_auto_discovery' mode prevents you from filling out the 'devices' variable above.
-# You can use this option with First and Forth and Fifth OSDS scenario.
-# Device discovery is based on the Ansible fact 'ansible_devices'
-# which reports all the devices on a system. If chosen all the disks
-# found will be passed to ceph-disk. You should not be worried on using
-# this option since ceph-disk has a built-in check which looks for empty devices.
-# Thus devices with existing partition tables will not be used.
-#
-#osd_auto_discovery: false
-
-# Encrypt your OSD device using dmcrypt
-# If set to True, no matter which osd_objecstore and osd_scenario you use the data will be encrypted
-#dmcrypt: "{{ True if dmcrytpt_journal_collocation or dmcrypt_dedicated_journal else False }}" # backward compatibility with stable-2.2, will disappear in stable 3.1
-
-
-# I. First scenario: collocated
-#
-# To enable this scenario do: osd_scenario: collocated
-#
-#
-# If osd_objectstore: filestore is enabled both 'ceph data' and 'ceph journal' partitions
-# will be stored on the same device.
-#
-# If osd_objectstore: bluestore is enabled 'ceph data', 'ceph block', 'ceph block.db', 'ceph block.wal' will be stored
-# on the same device. The device will get 2 partitions:
-# - One for 'data', called 'ceph data'
-# - One for 'ceph block', 'ceph block.db', 'ceph block.wal' called 'ceph block'
-#
-# Example of what you will get:
-# [root@ceph-osd0 ~]# blkid /dev/sda*
-# /dev/sda: PTTYPE="gpt"
-# /dev/sda1: UUID="9c43e346-dd6e-431f-92d8-cbed4ccb25f6" TYPE="xfs" PARTLABEL="ceph data" PARTUUID="749c71c9-ed8f-4930-82a7-a48a3bcdb1c7"
-# /dev/sda2: PARTLABEL="ceph block" PARTUUID="e6ca3e1d-4702-4569-abfa-e285de328e9d"
-#
-
-#osd_scenario: "{{ 'collocated' if journal_collocation or dmcrytpt_journal_collocation else 'non-collocated' if raw_multi_journal or dmcrypt_dedicated_journal else 'dummy' }}" # backward compatibility with stable-2.2, will disappear in stable 3.1
-#valid_osd_scenarios:
-# - collocated
-# - non-collocated
-# - lvm
-osd_scenario: collocated
-
-# II. Second scenario: non-collocated
-#
-# To enable this scenario do: osd_scenario: non-collocated
-#
-# If osd_objectstore: filestore is enabled 'ceph data' and 'ceph journal' partitions
-# will be stored on different devices:
-# - 'ceph data' will be stored on the device listed in 'devices'
-# - 'ceph journal' will be stored on the device listed in 'dedicated_devices'
-#
-# Let's take an example, imagine 'devices' was declared like this:
-#
-# devices:
-# - /dev/sda
-# - /dev/sdb
-# - /dev/sdc
-# - /dev/sdd
-#
-# And 'dedicated_devices' was declared like this:
-#
-# dedicated_devices:
-# - /dev/sdf
-# - /dev/sdf
-# - /dev/sdg
-# - /dev/sdg
-#
-# This will result in the following mapping:
-# - /dev/sda will have /dev/sdf1 as journal
-# - /dev/sdb will have /dev/sdf2 as a journal
-# - /dev/sdc will have /dev/sdg1 as a journal
-# - /dev/sdd will have /dev/sdg2 as a journal
-#
-#
-# If osd_objectstore: bluestore is enabled, both 'ceph block.db' and 'ceph block.wal' partitions will be stored
-# on a dedicated device.
-#
-# So the following will happen:
-# - The devices listed in 'devices' will get 2 partitions, one for 'block' and one for 'data'.
-# 'data' is only 100MB big and do not store any of your data, it's just a bunch of Ceph metadata.
-# 'block' will store all your actual data.
-# - The devices in 'dedicated_devices' will get 1 partition for RocksDB DB, called 'block.db'
-# and one for RocksDB WAL, called 'block.wal'
-#
-# By default dedicated_devices will represent block.db
-#
-# Example of what you will get:
-# [root@ceph-osd0 ~]# blkid /dev/sd*
-# /dev/sda: PTTYPE="gpt"
-# /dev/sda1: UUID="c6821801-2f21-4980-add0-b7fc8bd424d5" TYPE="xfs" PARTLABEL="ceph data" PARTUUID="f2cc6fa8-5b41-4428-8d3f-6187453464d0"
-# /dev/sda2: PARTLABEL="ceph block" PARTUUID="ea454807-983a-4cf2-899e-b2680643bc1c"
-# /dev/sdb: PTTYPE="gpt"
-# /dev/sdb1: PARTLABEL="ceph block.db" PARTUUID="af5b2d74-4c08-42cf-be57-7248c739e217"
-# /dev/sdb2: PARTLABEL="ceph block.wal" PARTUUID="af3f8327-9aa9-4c2b-a497-cf0fe96d126a"
-#dedicated_devices: []
-
-
-# More device granularity for Bluestore
-#
-# ONLY if osd_objectstore: bluestore is enabled.
-#
-# By default, if 'bluestore_wal_devices' is empty, it will get the content of 'dedicated_devices'.
-# If set, then you will have a dedicated partition on a specific device for block.wal.
-#
-# Example of what you will get:
-# [root@ceph-osd0 ~]# blkid /dev/sd*
-# /dev/sda: PTTYPE="gpt"
-# /dev/sda1: UUID="39241ae9-d119-4335-96b3-0898da8f45ce" TYPE="xfs" PARTLABEL="ceph data" PARTUUID="961e7313-bdb7-49e7-9ae7-077d65c4c669"
-# /dev/sda2: PARTLABEL="ceph block" PARTUUID="bff8e54e-b780-4ece-aa16-3b2f2b8eb699"
-# /dev/sdb: PTTYPE="gpt"
-# /dev/sdb1: PARTLABEL="ceph block.db" PARTUUID="0734f6b6-cc94-49e9-93de-ba7e1d5b79e3"
-# /dev/sdc: PTTYPE="gpt"
-# /dev/sdc1: PARTLABEL="ceph block.wal" PARTUUID="824b84ba-6777-4272-bbbd-bfe2a25cecf3"
-#bluestore_wal_devices: "{{ dedicated_devices }}"
-
-# III. Use ceph-volume to create OSDs from logical volumes.
-# Use 'osd_scenario: lvm' to enable this scenario. Currently we only support dedicated journals
-# when using lvm, not collocated journals.
-# lvm_volumes is a list of dictionaries. Each dictionary must contain a data, journal and vg_name
-# key. Any logical volume or logical group used must be a name and not a path.
-# data must be a logical volume
-# journal can be either a lv, device or partition. You can not use the same journal for many data lvs.
-# data_vg must be the volume group name of the data lv
-# journal_vg is optional and must be the volume group name of the journal lv, if applicable
-# For example:
-# lvm_volumes:
-# - data: data-lv1
-# data_vg: vg1
-# journal: journal-lv1
-# journal_vg: vg2
-# - data: data-lv2
-# journal: /dev/sda
-# data_vg: vg1
-# - data: data-lv3
-# journal: /dev/sdb1
-# data_vg: vg2
-#lvm_volumes: []
-
-
-##########
-# DOCKER #
-##########
-
-#ceph_config_keys: [] # DON'T TOUCH ME
-
-# Resource limitation
-# For the whole list of limits you can apply see: docs.docker.com/engine/admin/resource_constraints
-# Default values are based from: https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/2/html/red_hat_ceph_storage_hardware_guide/minimum_recommendations
-# These options can be passed using the 'ceph_osd_docker_extra_env' variable.
-#ceph_osd_docker_memory_limit: 1g
-#ceph_osd_docker_cpu_limit: 1
-
-# PREPARE DEVICE
-#
-# WARNING /!\ DMCRYPT scenario ONLY works with Docker version 1.12.5 and above
-#
-#ceph_osd_docker_devices: "{{ devices }}"
-#ceph_osd_docker_prepare_env: -e OSD_JOURNAL_SIZE={{ journal_size }}
-
-# ACTIVATE DEVICE
-#
-#ceph_osd_docker_extra_env:
-#ceph_osd_docker_run_script_path: "/usr/share" # script called by systemd to run the docker command
-
-
-###########
-# SYSTEMD #
-###########
-
-# ceph_osd_systemd_overrides will override the systemd settings
-# for the ceph-osd services.
-# For example,to set "PrivateDevices=false" you can specify:
-#ceph_osd_systemd_overrides:
-# Service:
-# PrivateDevices: False
-
diff --git a/ci/ansible/group_vars/cinder/cinder.yaml b/ci/ansible/group_vars/cinder/cinder.yaml
index e7971d0..ec2d5b1 100644
--- a/ci/ansible/group_vars/cinder/cinder.yaml
+++ b/ci/ansible/group_vars/cinder/cinder.yaml
@@ -1,3 +1,17 @@
+# Copyright (c) 2018 Huawei Technologies Co., Ltd. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
authOptions:
noAuth: true
endpoint: "http://127.0.0.1/identity"
@@ -10,8 +24,16 @@ authOptions:
tenantName: "myproject"
pool:
"cinder-lvm@lvm#lvm":
- AZ: nova
- thin: true
- accessProtocol: iscsi
- thinProvisioned: true
- compressed: true
+ storageType: block
+ availabilityZone: default
+ extras:
+ dataStorage:
+ provisioningPolicy: Thin
+ isSpaceEfficient: false
+ ioConnectivity:
+ accessProtocol: iscsi
+ maxIOPS: 7000000
+ maxBWS: 600
+ advanced:
+ diskType: SSD
+ latency: 3ms
diff --git a/ci/ansible/group_vars/common.yml b/ci/ansible/group_vars/common.yml
index cbdaaf6..50749e6 100644
--- a/ci/ansible/group_vars/common.yml
+++ b/ci/ansible/group_vars/common.yml
@@ -1,3 +1,17 @@
+# Copyright (c) 2018 Huawei Technologies Co., Ltd. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
---
# Dummy variable to avoid error because ansible does not recognize the
# file as a good configuration file when no variable in it.
@@ -8,32 +22,63 @@ dummy:
# GENERAL #
###########
-opensds_release: v0.1.4 # The version should be at least v0.1.4.
-nbp_release: v0.1.0 # The version should be at least v0.1.0.
-
-# These fields are not suggested to be modified
-opensds_download_url: https://github.com/opensds/opensds/releases/download/{{ opensds_release }}/opensds-{{ opensds_release }}-linux-amd64.tar.gz
-opensds_tarball_url: /opt/opensds-{{ opensds_release }}-linux-amd64.tar.gz
-opensds_dir: /opt/opensds-{{ opensds_release }}-linux-amd64
-nbp_download_url: https://github.com/opensds/nbp/releases/download/{{ nbp_release }}/opensds-k8s-{{ nbp_release }}-linux-amd64.tar.gz
-nbp_tarball_url: /opt/opensds-k8s-{{ nbp_release }}-linux-amd64.tar.gz
-nbp_dir: /opt/opensds-k8s-{{ nbp_release }}-linux-amd64
+# This field indicates which way user prefers to install, currently support
+# 'repository', 'release' and 'container'
+install_from: repository
+# These fields are NOT suggested to be modified
+opensds_work_dir: /opt/opensds-linux-amd64
+nbp_work_dir: /opt/opensds-k8s-linux-amd64
opensds_config_dir: /etc/opensds
+opensds_driver_config_dir: "{{ opensds_config_dir }}/driver"
opensds_log_dir: /var/log/opensds
+##############
+# REPOSITORY #
+##############
+
+# If the user specifies installing from repository, then they can choose the specific
+# repository branch
+opensds_repo_branch: master
+nbp_repo_branch: master
+
+# These fields are NOT suggested to be modified
+opensds_remote_url: https://github.com/opensds/opensds.git
+nbp_remote_url: https://github.com/opensds/nbp.git
+
+
###########
-# PLUGIN #
+# RELEASE #
###########
-nbp_plugin_type: standalone # standalone is the default integration way, but you can change it to 'csi', 'flexvolume'
+# If user specifies intalling from release,then he can choose the specific version
+opensds_release: v0.2.0 # The version should be at least v0.2.0
+nbp_release: v0.2.0 # The version should be at least v0.2.0
-flexvolume_plugin_dir: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/opensds.io~opensds
+# These fields are NOT suggested to be modified
+opensds_download_url: https://github.com/opensds/opensds/releases/download/{{ opensds_release }}/opensds-{{ opensds_release }}-linux-amd64.tar.gz
+opensds_tarball_dir: /tmp/opensds-{{ opensds_release }}-linux-amd64
+nbp_download_url: https://github.com/opensds/nbp/releases/download/{{ nbp_release }}/opensds-k8s-{{ nbp_release }}-linux-amd64.tar.gz
+nbp_tarball_dir: /tmp/opensds-k8s-{{ nbp_release }}-linux-amd64
+
+
+#############
+# CONTAINER #
+#############
+
+container_enabled: false
###########
-#CONTAINER#
+# PLUGIN #
###########
-container_enabled: false
+# 'hotpot_only' is the default integration way, but you can change it to 'csi'
+# or 'flexvolume'
+nbp_plugin_type: hotpot_only
+# The IP (127.0.0.1) should be replaced with the opensds actual endpoint IP
+opensds_endpoint: http://127.0.0.1:50040
+
+# These fields are NOT suggested to be modified
+flexvolume_plugin_dir: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/opensds.io~opensds
diff --git a/ci/ansible/group_vars/dashboard.yml b/ci/ansible/group_vars/dashboard.yml
new file mode 100644
index 0000000..b9ac8c4
--- /dev/null
+++ b/ci/ansible/group_vars/dashboard.yml
@@ -0,0 +1,33 @@
+# Copyright (c) 2018 Huawei Technologies Co., Ltd. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+---
+# Dummy variable to avoid error because ansible does not recognize the
+# file as a good configuration file when no variable in it.
+dummy:
+
+
+###########
+# GENERAL #
+###########
+
+# Dashboard installation types are: 'container', 'source_code'
+dashboard_installation_type: container
+
+
+###########
+# DOCKER #
+###########
+
+dashboard_docker_image: opensdsio/dashboard:latest
diff --git a/ci/ansible/group_vars/lvm/lvm.yaml b/ci/ansible/group_vars/lvm/lvm.yaml
index a360891..89c77b6 100644
--- a/ci/ansible/group_vars/lvm/lvm.yaml
+++ b/ci/ansible/group_vars/lvm/lvm.yaml
@@ -1,8 +1,31 @@
-tgtBindIp: 127.0.0.1
+# Copyright (c) 2018 Huawei Technologies Co., Ltd. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+tgtBindIp: 127.0.0.1 # change tgtBindIp to your real host ip, run 'ifconfig' to check
+tgtConfDir: /etc/tgt/conf.d
pool:
- "vg001": # change pool name same to vg_name, but don't change it if you choose ceph backend
- diskType: SSD
- AZ: default
- accessProtocol: iscsi
- thinProvisioned: false
- compressed: false
+ opensds-volumes: # change the pool name to match vg_name, but don't change it if you choose the ceph backend
+ storageType: block
+ availabilityZone: default
+ extras:
+ dataStorage:
+ provisioningPolicy: Thin
+ isSpaceEfficient: false
+ ioConnectivity:
+ accessProtocol: iscsi
+ maxIOPS: 7000000
+ maxBWS: 600
+ advanced:
+ diskType: SSD
+ latency: 5ms
diff --git a/ci/ansible/group_vars/osdsdb.yml b/ci/ansible/group_vars/osdsdb.yml
index 1b6b812..1ad176e 100644
--- a/ci/ansible/group_vars/osdsdb.yml
+++ b/ci/ansible/group_vars/osdsdb.yml
@@ -1,3 +1,17 @@
+# Copyright (c) 2018 Huawei Technologies Co., Ltd. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
---
# Dummy variable to avoid error because ansible does not recognize the
# file as a good configuration file when no variable in it.
@@ -9,8 +23,7 @@ dummy:
###########
db_driver: etcd
-db_endpoint: localhost:2379,localhost:2380
-#db_credential: opensds:password@127.0.0.1:3306/dbname
+db_endpoint: "{{ etcd_host }}:{{ etcd_port }},{{ etcd_host }}:{{ etcd_peer_port }}"
###########
@@ -22,7 +35,7 @@ etcd_host: 127.0.0.1
etcd_port: 2379
etcd_peer_port: 2380
-# These fields are not suggested to be modified
+# These fields are NOT suggested to be modified
etcd_tarball: etcd-{{ etcd_release }}-linux-amd64.tar.gz
etcd_download_url: https://github.com/coreos/etcd/releases/download/{{ etcd_release }}/{{ etcd_tarball }}
etcd_dir: /opt/etcd-{{ etcd_release }}-linux-amd64
diff --git a/ci/ansible/group_vars/osdsdock.yml b/ci/ansible/group_vars/osdsdock.yml
index 1544c65..c0f54f0 100644
--- a/ci/ansible/group_vars/osdsdock.yml
+++ b/ci/ansible/group_vars/osdsdock.yml
@@ -1,3 +1,17 @@
+# Copyright (c) 2018 Huawei Technologies Co., Ltd. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
---
# Dummy variable to avoid error because ansible does not recognize the
# file as a good configuration file when no variable in it.
@@ -10,25 +24,28 @@ dummy:
# Change it according to your backend, currently support 'lvm', 'ceph', 'cinder'
enabled_backend: lvm
+# Change it according to your node type (host or target), currently support
+# 'provisioner', 'attacher'
+dock_type: provisioner
# These fields are NOT suggested to be modified
dock_endpoint: localhost:50050
dock_log_file: "{{ opensds_log_dir }}/osdsdock.log"
+
###########
# LVM #
###########
-pv_devices: # Specify block devices and ensure them existed if you choose lvm
- #- /dev/sdc
- #- /dev/sdd
-vg_name: vg001 # Specify a name randomly
# These fields are NOT suggested to be modified
lvm_name: lvm backend
lvm_description: This is a lvm backend service
lvm_driver_name: lvm
-lvm_config_path: "{{ opensds_config_dir }}/driver/lvm.yaml"
+lvm_config_path: "{{ opensds_driver_config_dir }}/lvm.yaml"
+opensds_volume_group: opensds-volumes
+
+
###########
# CEPH #
@@ -43,7 +60,8 @@ ceph_pools: # Specify pool name randomly
ceph_name: ceph backend
ceph_description: This is a ceph backend service
ceph_driver_name: ceph
-ceph_config_path: "{{ opensds_config_dir }}/driver/ceph.yaml"
+ceph_config_path: "{{ opensds_driver_config_dir }}/ceph.yaml"
+
###########
# CINDER #
@@ -65,14 +83,14 @@ cinder_image_tag: debian-cinder
# removed when use ansible script clean environment.
cinder_volume_group: cinder-volumes
# All source code and volume group file will be placed in the cinder_data_dir:
-cinder_data_dir: "{{ workplace }}/cinder_data_dir"
-
+cinder_data_dir: "/opt/cinder_data_dir"
-# These fields are not suggested to be modified
+# These fields are NOT suggested to be modified
cinder_name: cinder backend
cinder_description: This is a cinder backend service
cinder_driver_name: cinder
-cinder_config_path: "{{ opensds_config_dir }}/driver/cinder.yaml"
+cinder_config_path: "{{ opensds_driver_config_dir }}/cinder.yaml"
+
###########
# DOCKER #
diff --git a/ci/ansible/group_vars/osdslet.yml b/ci/ansible/group_vars/osdslet.yml
index a872449..32c12fe 100644
--- a/ci/ansible/group_vars/osdslet.yml
+++ b/ci/ansible/group_vars/osdslet.yml
@@ -1,3 +1,17 @@
+# Copyright (c) 2018 Huawei Technologies Co., Ltd. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
---
# Dummy variable to avoid error because ansible does not recognize the
# file as a good configuration file when no variable in it.
diff --git a/ci/ansible/install_ansible.sh b/ci/ansible/install_ansible.sh
index b3f43bb..5e0cb6b 100644
--- a/ci/ansible/install_ansible.sh
+++ b/ci/ansible/install_ansible.sh
@@ -1,9 +1,25 @@
#!/bin/bash
-sudo add-apt-repository ppa:ansible/ansible # This step is needed to upgrade ansible to version 2.4.2 which is required for the ceph backend.
+# Copyright (c) 2018 Huawei Technologies Co., Ltd. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# This step is needed to upgrade ansible to version 2.4.2 which is required for
+# the ceph backend.
+sudo add-apt-repository ppa:ansible/ansible-2.4
sudo apt-get update
sudo apt-get install -y ansible
sleep 3
-ansible --version # Ansible version 2.4.2 or higher is required for ceph; 2.0.0.2 or higher is needed for other backends.
+ansible --version # Ansible version 2.4.2 is required for ceph; 2.0.2 or higher is needed for other backends.
diff --git a/ci/ansible/local.hosts b/ci/ansible/local.hosts
index afdd826..f9bcc60 100644
--- a/ci/ansible/local.hosts
+++ b/ci/ansible/local.hosts
@@ -1,3 +1,17 @@
+# Copyright (c) 2018 Huawei Technologies Co., Ltd. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
[controllers]
localhost ansible_connection=local
diff --git a/ci/ansible/roles/cleaner/scenarios/auth-keystone.yml b/ci/ansible/roles/cleaner/scenarios/auth-keystone.yml
new file mode 100644
index 0000000..bbac49b
--- /dev/null
+++ b/ci/ansible/roles/cleaner/scenarios/auth-keystone.yml
@@ -0,0 +1,30 @@
+# Copyright (c) 2018 Huawei Technologies Co., Ltd. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+---
+- name: uninstall keystone
+ shell: "{{ item }}"
+ with_items:
+ - bash ./script/keystone.sh uninstall
+ when: opensds_auth_strategy == "keystone" and uninstall_keystone == true
+ ignore_errors: yes
+ become: yes
+
+- name: cleanup keystone
+ shell: "{{ item }}"
+ with_items:
+ - bash ./script/keystone.sh cleanup
+ when: opensds_auth_strategy == "keystone" and cleanup_keystone == true
+ ignore_errors: yes
+ become: yes
diff --git a/ci/ansible/roles/cleaner/scenarios/backend.yml b/ci/ansible/roles/cleaner/scenarios/backend.yml
new file mode 100644
index 0000000..f929164
--- /dev/null
+++ b/ci/ansible/roles/cleaner/scenarios/backend.yml
@@ -0,0 +1,154 @@
+# Copyright (c) 2018 Huawei Technologies Co., Ltd. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+---
+- name: clean the volume group of lvm
+ shell:
+ _raw_params: |
+
+ # _clean_lvm_volume_group removes all default LVM volumes
+ #
+ # Usage: _clean_lvm_volume_group $vg
+ function _clean_lvm_volume_group {
+ local vg=$1
+
+ # Clean out existing volumes
+ sudo lvremove -f $vg
+ }
+
+ # _remove_lvm_volume_group removes the volume group
+ #
+ # Usage: _remove_lvm_volume_group $vg
+ function _remove_lvm_volume_group {
+ local vg=$1
+
+ # Remove the volume group
+ sudo vgremove -f $vg
+ }
+
+ # _clean_lvm_backing_file() removes the backing file of the
+ # volume group
+ #
+ # Usage: _clean_lvm_backing_file() $backing_file
+ function _clean_lvm_backing_file {
+ local backing_file=$1
+
+ # If the backing physical device is a loop device, it was probably setup by DevStack
+ if [[ -n "$backing_file" ]] && [[ -e "$backing_file" ]]; then
+ local vg_dev
+ vg_dev=$(sudo losetup -j $backing_file | awk -F':' '/'.img'/ { print $1}')
+ if [[ -n "$vg_dev" ]]; then
+ sudo losetup -d $vg_dev
+ fi
+ rm -f $backing_file
+ fi
+ }
+
+ # clean_lvm_volume_group() cleans up the volume group and removes the
+ # backing file
+ #
+ # Usage: clean_lvm_volume_group $vg
+ function clean_lvm_volume_group {
+ local vg=$1
+
+ _clean_lvm_volume_group $vg
+ _remove_lvm_volume_group $vg
+ # if there is no logical volume left, it's safe to attempt a cleanup
+ # of the backing file
+ if [[ -z "$(sudo lvs --noheadings -o lv_name $vg 2>/dev/null)" ]]; then
+ _clean_lvm_backing_file {{ opensds_work_dir }}/volumegroups/${vg}.img
+ fi
+ }
+
+ clean_lvm_volume_group {{opensds_volume_group}}
+
+ args:
+ executable: /bin/bash
+ become: true
+ when: enabled_backend == "lvm"
+ ignore_errors: yes
+
+- name: stop cinder-standalone service
+ shell: docker-compose down
+ become: true
+ args:
+ chdir: "{{ cinder_data_dir }}/cinder/contrib/block-box"
+ when: enabled_backend == "cinder"
+ ignore_errors: yes
+
+- name: clean the volume group of cinder
+ shell:
+ _raw_params: |
+
+ # _clean_lvm_volume_group removes all default LVM volumes
+ #
+ # Usage: _clean_lvm_volume_group $vg
+ function _clean_lvm_volume_group {
+ local vg=$1
+
+ # Clean out existing volumes
+ sudo lvremove -f $vg
+ }
+
+ # _remove_lvm_volume_group removes the volume group
+ #
+ # Usage: _remove_lvm_volume_group $vg
+ function _remove_lvm_volume_group {
+ local vg=$1
+
+ # Remove the volume group
+ sudo vgremove -f $vg
+ }
+
+ # _clean_lvm_backing_file() removes the backing file of the
+ # volume group
+ #
+ # Usage: _clean_lvm_backing_file() $backing_file
+ function _clean_lvm_backing_file {
+ local backing_file=$1
+
+ # If the backing physical device is a loop device, it was probably setup by DevStack
+ if [[ -n "$backing_file" ]] && [[ -e "$backing_file" ]]; then
+ local vg_dev
+ vg_dev=$(sudo losetup -j $backing_file | awk -F':' '/'.img'/ { print $1}')
+ if [[ -n "$vg_dev" ]]; then
+ sudo losetup -d $vg_dev
+ fi
+ rm -f $backing_file
+ fi
+ }
+
+ # clean_lvm_volume_group() cleans up the volume group and removes the
+ # backing file
+ #
+ # Usage: clean_lvm_volume_group $vg
+ function clean_lvm_volume_group {
+ local vg=$1
+
+ _clean_lvm_volume_group $vg
+ _remove_lvm_volume_group $vg
+ # if there is no logical volume left, it's safe to attempt a cleanup
+ # of the backing file
+ if [[ -z "$(sudo lvs --noheadings -o lv_name $vg 2>/dev/null)" ]]; then
+ _clean_lvm_backing_file {{ cinder_data_dir }}/${vg}.img
+ fi
+ }
+
+ clean_lvm_volume_group {{cinder_volume_group}}
+
+ args:
+ executable: /bin/bash
+ become: true
+ when: enabled_backend == "cinder"
+ ignore_errors: yes
diff --git a/ci/ansible/roles/cleaner/scenarios/release.yml b/ci/ansible/roles/cleaner/scenarios/release.yml
new file mode 100644
index 0000000..e365170
--- /dev/null
+++ b/ci/ansible/roles/cleaner/scenarios/release.yml
@@ -0,0 +1,24 @@
+# Copyright (c) 2018 Huawei Technologies Co., Ltd. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+---
+- name: clean up all release files if installed from release
+ file:
+ path: "{{ item }}"
+ state: absent
+ force: yes
+ with_items:
+ - "{{ opensds_tarball_dir }}"
+ - "{{ nbp_tarball_dir }}"
+ ignore_errors: yes
diff --git a/ci/ansible/roles/cleaner/scenarios/repository.yml b/ci/ansible/roles/cleaner/scenarios/repository.yml
new file mode 100644
index 0000000..a50689c
--- /dev/null
+++ b/ci/ansible/roles/cleaner/scenarios/repository.yml
@@ -0,0 +1,44 @@
+# Copyright (c) 2018 Huawei Technologies Co., Ltd. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+---
+- set_fact:
+ go_path: "{{ lookup('env', 'GOPATH') }}"
+
+- name: check go_path
+ shell: "{{ item }}"
+ with_items:
+ - echo "The environment variable GOPATH must be set and cannot be an empty string!"
+ - /bin/false
+ when: go_path == ""
+
+- name: clean opensds controller data
+ shell: make clean
+ args:
+ chdir: "{{ go_path }}/src/github.com/opensds/opensds"
+ when: install_from == "repository"
+
+- name: clean opensds northbound plugin data
+ shell: make clean
+ args:
+ chdir: "{{ go_path }}/src/github.com/opensds/nbp"
+ when: install_from == "repository" and nbp_plugin_type != "hotpot_only"
+
+- name: clean opensds dashboard data
+ shell: make clean
+ args:
+ chdir: "{{ go_path }}/src/github.com/opensds/opensds/dashboard"
+ when: dashboard_installation_type == "source_code"
+ become: yes
+ ignore_errors: yes
diff --git a/ci/ansible/roles/cleaner/tasks/main.yml b/ci/ansible/roles/cleaner/tasks/main.yml
index fcfb79b..8399b08 100644
--- a/ci/ansible/roles/cleaner/tasks/main.yml
+++ b/ci/ansible/roles/cleaner/tasks/main.yml
@@ -1,68 +1,44 @@
----
+# Copyright (c) 2018 Huawei Technologies Co., Ltd. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
- name: kill osdslet daemon service
- shell: killall osdslet
- ignore_errors: yes
+ shell: killall osdslet osdsdock
when: container_enabled == false
+ ignore_errors: true
- name: kill osdslet containerized service
- docker:
- image: opensdsio/opensds-controller:latest
+ docker_container:
+ name: osdslet
+ image: "{{ controller_docker_image }}"
state: stopped
when: container_enabled == true
-- name: kill osdsdock daemon service
- shell: killall osdsdock
- ignore_errors: yes
- when: container_enabled == false
-
- name: kill osdsdock containerized service
- docker:
- image: opensdsio/opensds-dock:latest
+ docker_container:
+ name: osdsdock
+ image: "{{ dock_docker_image }}"
state: stopped
when: container_enabled == true
-- name: kill etcd daemon service
- shell: killall etcd
- ignore_errors: yes
- when: db_driver == "etcd" and container_enabled == false
-
-- name: kill etcd containerized service
- docker:
- image: "{{ etcd_docker_image }}"
+- name: stop container where dashboard is located
+ docker_container:
+ name: dashboard
+ image: "{{ dashboard_docker_image }}"
state: stopped
- when: db_driver == "etcd" and container_enabled == true
-
-- name: remove etcd service data
- file:
- path: "{{ etcd_dir }}"
- state: absent
- force: yes
- ignore_errors: yes
- when: db_driver == "etcd"
-
-- name: remove etcd tarball
- file:
- path: "/opt/{{ etcd_tarball }}"
- state: absent
- force: yes
- ignore_errors: yes
- when: db_driver == "etcd"
-
-- name: clean opensds release files
- file:
- path: "{{ opensds_dir }}"
- state: absent
- force: yes
- ignore_errors: yes
+ when: dashboard_installation_type == "container"
-- name: clean opensds release tarball file
- file:
- path: "{{ opensds_tarball_url }}"
- state: absent
- force: yes
- ignore_errors: yes
-
-- name: clean opensds flexvolume plugins binary file
+- name: clean opensds flexvolume plugins binary file if flexvolume specified
file:
path: "{{ flexvolume_plugin_dir }}"
state: absent
@@ -70,119 +46,38 @@ ignore_errors: yes
when: nbp_plugin_type == "flexvolume"
-- name: clean nbp release files
- file:
- path: "{{ nbp_dir }}"
- state: absent
- force: yes
- ignore_errors: yes
-
-- name: clean nbp release tarball file
- file:
- path: "{{ nbp_tarball_url }}"
- state: absent
- force: yes
- ignore_errors: yes
-
-- name: clean all opensds configuration files
- file:
- path: "{{ opensds_config_dir }}"
- state: absent
- force: yes
+- name: clean opensds csi plugin if csi plugin specified
+ shell: |
+ . /etc/profile
+ kubectl delete -f deploy/kubernetes
+ args:
+ chdir: "{{ nbp_work_dir }}/csi"
ignore_errors: yes
+ when: nbp_plugin_type == "csi"
-- name: clean all opensds log files
+- name: clean all configuration and log files in opensds and nbp work directory
file:
- path: "{{ opensds_log_dir }}"
+ path: "{{ item }}"
state: absent
force: yes
+ with_items:
+ - "{{ opensds_work_dir }}"
+ - "{{ nbp_work_dir }}"
+ - "{{ opensds_config_dir }}"
+ - "{{ opensds_log_dir }}"
ignore_errors: yes
-- name: check if it existed before cleaning a volume group
- shell: vgdisplay {{ vg_name }}
- ignore_errors: yes
- register: vg_existed
- when: enabled_backend == "lvm"
-
-- name: remove a volume group if lvm backend specified
- lvg:
- vg: "{{ vg_name }}"
- state: absent
- when: enabled_backend == "lvm" and vg_existed.rc == 0
-
-- name: remove physical volumes if lvm backend specified
- shell: pvremove {{ item }}
- with_items: "{{ pv_devices }}"
- when: enabled_backend == "lvm"
-
-- name: stop cinder-standalone service
- shell: docker-compose down
- become: true
- args:
- chdir: "{{ cinder_data_dir }}/cinder/contrib/block-box"
- when: enabled_backend == "cinder"
-
-- name: clean the volume group of cinder
- shell:
- _raw_params: |
-
- # _clean_lvm_volume_group removes all default LVM volumes
- #
- # Usage: _clean_lvm_volume_group $vg
- function _clean_lvm_volume_group {
- local vg=$1
+- name: include scenarios/auth-keystone.yml when specifies keystone
+ include_tasks: scenarios/auth-keystone.yml
+ when: opensds_auth_strategy == "keystone"
- # Clean out existing volumes
- sudo lvremove -f $vg
- }
+- name: include scenarios/repository.yml if installed from repository
+ include_tasks: scenarios/repository.yml
+ when: install_from == "repository" or dashboard_installation_type == "source_code"
- # _remove_lvm_volume_group removes the volume group
- #
- # Usage: _remove_lvm_volume_group $vg
- function _remove_lvm_volume_group {
- local vg=$1
+- name: include scenarios/release.yml if installed from release
+ include_tasks: scenarios/release.yml
+ when: install_from == "release"
- # Remove the volume group
- sudo vgremove -f $vg
- }
-
- # _clean_lvm_backing_file() removes the backing file of the
- # volume group
- #
- # Usage: _clean_lvm_backing_file() $backing_file
- function _clean_lvm_backing_file {
- local backing_file=$1
-
- # If the backing physical device is a loop device, it was probably setup by DevStack
- if [[ -n "$backing_file" ]] && [[ -e "$backing_file" ]]; then
- local vg_dev
- vg_dev=$(sudo losetup -j $backing_file | awk -F':' '/'.img'/ { print $1}')
- if [[ -n "$vg_dev" ]]; then
- sudo losetup -d $vg_dev
- fi
- rm -f $backing_file
- fi
- }
-
- # clean_lvm_volume_group() cleans up the volume group and removes the
- # backing file
- #
- # Usage: clean_lvm_volume_group $vg
- function clean_lvm_volume_group {
- local vg=$1
-
- _clean_lvm_volume_group $vg
- _remove_lvm_volume_group $vg
- # if there is no logical volume left, it's safe to attempt a cleanup
- # of the backing file
- if [[ -z "$(sudo lvs --noheadings -o lv_name $vg 2>/dev/null)" ]]; then
- _clean_lvm_backing_file {{ cinder_data_dir }}/${vg}.img
- fi
- }
-
- clean_lvm_volume_group {{cinder_volume_group}}
-
- args:
- executable: /bin/bash
- become: true
- when: enabled_backend == "cinder"
+- name: include scenarios/backend.yml for cleaning up storage backend service
+ include_tasks: scenarios/backend.yml
diff --git a/ci/ansible/roles/common/scenarios/container.yml b/ci/ansible/roles/common/scenarios/container.yml new file mode 100644 index 0000000..0edbcef --- /dev/null +++ b/ci/ansible/roles/common/scenarios/container.yml @@ -0,0 +1,18 @@ +# Copyright (c) 2018 Huawei Technologies Co., Ltd. All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +--- +- name: install docker-py package with pip when enabling containerized deployment + pip: + name: docker-py diff --git a/ci/ansible/roles/common/scenarios/release.yml b/ci/ansible/roles/common/scenarios/release.yml new file mode 100644 index 0000000..500d82e --- /dev/null +++ b/ci/ansible/roles/common/scenarios/release.yml @@ -0,0 +1,38 @@ +# Copyright (c) 2018 Huawei Technologies Co., Ltd. All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +--- +- name: check for opensds release files existed + stat: + path: "{{ opensds_tarball_dir }}" + ignore_errors: yes + register: opensdsreleasesexisted + +- name: download and extract the opensds release tarball if not exists + unarchive: + src={{ opensds_download_url }} + dest=/tmp/ + when: + - opensdsreleasesexisted.stat.exists is undefined or opensdsreleasesexisted.stat.exists == false + +- name: change the mode of all binary files in opensds release + file: + path: "{{ opensds_tarball_dir }}/bin" + mode: 0755 + recurse: yes + +- name: copy opensds tarball into opensds work directory + copy: + src: "{{ opensds_tarball_dir }}/" + dest: "{{ opensds_work_dir }}" diff --git a/ci/ansible/roles/common/scenarios/repository.yml b/ci/ansible/roles/common/scenarios/repository.yml new file mode 100644 index 0000000..3cddd34 --- /dev/null +++ b/ci/ansible/roles/common/scenarios/repository.yml @@ -0,0 +1,57 @@ +# Copyright (c) 2018 Huawei Technologies Co., Ltd. All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+ +--- +- set_fact: + go_path: "{{ lookup('env', 'GOPATH') }}" + +- name: check go_path + shell: "{{ item }}" + with_items: + - echo "The environment variable GOPATH must be set and cannot be an empty string!" + - /bin/false + when: go_path == "" + +- name: check for opensds source code existed + stat: + path: "{{ go_path }}/src/github.com/opensds/opensds" + ignore_errors: yes + register: opensdsexisted + +- name: download opensds source code if not exists + git: + repo: "{{ opensds_remote_url }}" + dest: "{{ go_path }}/src/github.com/opensds/opensds" + version: "{{ opensds_repo_branch }}" + when: + - opensdsexisted.stat.exists is undefined or opensdsexisted.stat.exists == false + +- name: build opensds binary file + shell: make + environment: + GOPATH: "{{ go_path }}" + args: + chdir: "{{ go_path }}/src/github.com/opensds/opensds" + +- name: copy opensds binary files into opensds work directory + copy: + src: "{{ go_path }}/src/github.com/opensds/opensds/build/out/" + dest: "{{ opensds_work_dir }}" + +- name: change the permissions of opensds executable files + file: + path: "{{ opensds_work_dir }}/bin" + state: directory + mode: 0755 + recurse: yes diff --git a/ci/ansible/roles/common/tasks/main.yml b/ci/ansible/roles/common/tasks/main.yml index 7ae2234..daee059 100644 --- a/ci/ansible/roles/common/tasks/main.yml +++ b/ci/ansible/roles/common/tasks/main.yml @@ -1,84 +1,72 @@ +# Copyright (c) 2018 Huawei Technologies Co., Ltd. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
---
+- name: set script dir permissions
+ file:
+ path: ./script
+ mode: 0755
+ recurse: yes
+ ignore_errors: yes
+ become: yes
+
+- name: check ansible version
+ shell: "{{ item }}"
+ with_items:
+ - bash ./script/check_ansible_version.sh
+ become: yes
+
- name: run the equivalent of "apt-get update" as a separate step
apt:
update_cache: yes
-- name: install librados-dev and librbd-dev external packages
+- name: install make, gcc and pip external packages
apt:
name: "{{ item }}"
state: present
with_items:
- - librados-dev
- - librbd-dev
+ - make
+ - gcc
+ - python-pip
-- name: install docker-py package with pip when enabling containerized deployment
- pip:
- name: docker-py
- when: container_enabled == true
-
-- name: check for opensds release files existed
- stat:
- path: "{{ opensds_dir }}"
- ignore_errors: yes
- register: opensdsreleasesexisted
-
-- name: download opensds release files
- get_url:
- url={{ opensds_download_url }}
- dest={{ opensds_tarball_url }}
- when:
- - opensdsreleasesexisted.stat.exists is undefined or opensdsreleasesexisted.stat.exists == false
-
-- name: extract the opensds release tarball
- unarchive:
- src={{ opensds_tarball_url }}
- dest=/opt/
- when:
- - opensdsreleasesexisted.stat.exists is undefined or opensdsreleasesexisted.stat.exists == false
-
-- name: check for nbp release files existed
- stat:
- path: "{{ nbp_dir }}"
- ignore_errors: yes
- register: nbpreleasesexisted
-
-- name: download nbp release files
- get_url:
- url={{ nbp_download_url }}
- dest={{ nbp_tarball_url }}
- when:
- - nbpreleasesexisted.stat.exists is undefined or nbpreleasesexisted.stat.exists == false
-
-- name: extract the nbp release tarball
- unarchive:
- src={{ nbp_tarball_url }}
- dest=/opt/
- when:
- - nbpreleasesexisted.stat.exists is undefined or nbpreleasesexisted.stat.exists == false
-
-- name: change the mode of all binary files in opensds release
+- name: create opensds work directories if they don't exist
file:
- path: "{{ opensds_dir }}/bin"
+ path: "{{ item }}"
+ state: directory
mode: 0755
- recurse: yes
+ with_items:
+ - "{{ opensds_work_dir }}"
+ - "{{ opensds_config_dir }}"
+ - "{{ opensds_driver_config_dir }}"
+ - "{{ opensds_log_dir }}"
-- name: change the mode of all binary files in nbp release
- file:
- path: "{{ nbp_dir }}/flexvolume"
- mode: 0755
- recurse: yes
+- name: include scenarios/repository.yml when installing from repository
+ include: scenarios/repository.yml
+ when: install_from == "repository"
-- name: create opensds global config directory if it doesn't exist
- file:
- path: "{{ opensds_config_dir }}/driver"
- state: directory
- mode: 0755
+- name: include scenarios/release.yml when installing from release
+ include: scenarios/release.yml
+ when: install_from == "release"
-- name: create opensds log directory if it doesn't exist
- file:
- path: "{{ opensds_log_dir }}"
- state: directory
- mode: 0755
+- name: include scenarios/container.yml when installing from container
+ include: scenarios/container.yml
+ when: install_from == "container"
+
+- name: copy config templates into opensds global config folder
+ copy:
+ src: ../../../../conf/
+ dest: "{{ opensds_config_dir }}"
- name: configure opensds global info
shell: |
@@ -88,34 +76,24 @@ graceful = True
log_file = {{ controller_log_file }}
socket_order = inc
+ auth_strategy = {{ opensds_auth_strategy }}
[osdsdock]
api_endpoint = {{ dock_endpoint }}
log_file = {{ dock_log_file }}
+    # Choose the type of dock resource; only 'provisioner' and 'attacher' are supported.
+ dock_type = {{ dock_type }}
# Specify which backends should be enabled, sample,ceph,cinder,lvm and so on.
enabled_backends = {{ enabled_backend }}
- [lvm]
- name = {{ lvm_name }}
- description = {{ lvm_description }}
- driver_name = {{ lvm_driver_name }}
- config_path = {{ lvm_config_path }}
-
- [ceph]
- name = {{ ceph_name }}
- description = {{ ceph_description }}
- driver_name = {{ ceph_driver_name }}
- config_path = {{ ceph_config_path }}
-
- [cinder]
- name = {{ cinder_name }}
- description = {{ cinder_description }}
- driver_name = {{ cinder_driver_name }}
- config_path = {{ cinder_config_path }}
-
[database]
endpoint = {{ db_endpoint }}
driver = {{ db_driver }}
args:
chdir: "{{ opensds_config_dir }}"
ignore_errors: yes
+
+- name: include nbp-installer role if nbp_plugin_type != hotpot_only
+ include_role:
+ name: nbp-installer
+ when: nbp_plugin_type != "hotpot_only"
diff --git a/ci/ansible/roles/dashboard-installer/scenarios/container.yml b/ci/ansible/roles/dashboard-installer/scenarios/container.yml new file mode 100644 index 0000000..e25d90f --- /dev/null +++ b/ci/ansible/roles/dashboard-installer/scenarios/container.yml @@ -0,0 +1,26 @@ +# Copyright (c) 2018 Huawei Technologies Co., Ltd. All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +--- +- name: install docker-py package with pip when enabling containerized deployment + pip: + name: docker-py + +- name: run dashboard containerized service + docker_container: + name: dashboard + image: opensdsio/dashboard:latest + state: started + network_mode: host + restart_policy: always diff --git a/ci/ansible/roles/dashboard-installer/scenarios/source-code.yml b/ci/ansible/roles/dashboard-installer/scenarios/source-code.yml new file mode 100644 index 0000000..bd2b1ff --- /dev/null +++ b/ci/ansible/roles/dashboard-installer/scenarios/source-code.yml @@ -0,0 +1,58 @@ +# Copyright (c) 2018 Huawei Technologies Co., Ltd. All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +--- +- set_fact: + go_path: "{{ lookup('env', 'GOPATH') }}" + +- name: check go_path + shell: "{{ item }}" + with_items: + - echo "The environment variable GOPATH must be set and cannot be an empty string!" + - /bin/false + when: go_path == "" + +- name: check for opensds source code existed + stat: + path: "{{ go_path }}/src/github.com/opensds/opensds" + register: opensdsexisted + +- name: download opensds source code if not exists + git: + repo: "{{ opensds_remote_url }}" + dest: "{{ go_path }}/src/github.com/opensds/opensds" + version: "{{ opensds_repo_branch }}" + when: + - opensdsexisted.stat.exists is undefined or opensdsexisted.stat.exists == false + +- name: build and configure opensds dashboard + shell: "{{ item }}" + with_items: + - service apache2 stop + - make + - service apache2 start + args: + chdir: "{{ go_path }}/src/github.com/opensds/opensds/dashboard" + warn: false + become: yes + +- name: update nginx default config + become: yes + shell: bash ./script/set_nginx_config.sh + +- name: restart nginx + service: + name: nginx + state: restarted +
\ No newline at end of file diff --git a/ci/ansible/roles/dashboard-installer/tasks/main.yml b/ci/ansible/roles/dashboard-installer/tasks/main.yml new file mode 100644 index 0000000..ff221aa --- /dev/null +++ b/ci/ansible/roles/dashboard-installer/tasks/main.yml @@ -0,0 +1,22 @@ +# Copyright (c) 2018 Huawei Technologies Co., Ltd. All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +--- +- name: use container to install dashboard + include_tasks: scenarios/container.yml + when: dashboard_installation_type == "container" + +- name: use source code to install dashboard + include_tasks: scenarios/source-code.yml + when: dashboard_installation_type == "source_code" diff --git a/ci/ansible/roles/nbp-installer/scenarios/csi.yml b/ci/ansible/roles/nbp-installer/scenarios/csi.yml index e69de29..93fff88 100644 --- a/ci/ansible/roles/nbp-installer/scenarios/csi.yml +++ b/ci/ansible/roles/nbp-installer/scenarios/csi.yml @@ -0,0 +1,44 @@ +# Copyright (c) 2018 Huawei Technologies Co., Ltd. All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +--- +- name: Configure opensds endpoint IP in opensds csi plugin + lineinfile: + dest: "{{ nbp_work_dir }}/csi/deploy/kubernetes/csi-configmap-opensdsplugin.yaml" + regexp: '^ opensdsendpoint' + line: ' opensdsendpoint: {{ opensds_endpoint }}' + backup: yes + +- name: Configure opensds auth strategy in opensds csi plugin + lineinfile: + dest: "{{ nbp_work_dir }}/csi/deploy/kubernetes/csi-configmap-opensdsplugin.yaml" + regexp: '^ opensdsauthstrategy' + line: ' opensdsauthstrategy: {{ opensds_auth_strategy }}' + backup: yes + +- name: Configure keystone os auth url in opensds csi plugin + lineinfile: + dest: "{{ nbp_work_dir }}/csi/deploy/kubernetes/csi-configmap-opensdsplugin.yaml" + regexp: '^ osauthurl' + line: ' osauthurl: {{ keystone_os_auth_url }}' + backup: yes + when: opensds_auth_strategy == "keystone" + +- name: Prepare and deploy opensds csi plugin + shell: | + . /etc/profile + kubectl create -f deploy/kubernetes + args: + chdir: "{{ nbp_work_dir }}/csi" + ignore_errors: yes diff --git a/ci/ansible/roles/nbp-installer/scenarios/flexvolume.yml b/ci/ansible/roles/nbp-installer/scenarios/flexvolume.yml index 0bba93b..52ec16d 100644 --- a/ci/ansible/roles/nbp-installer/scenarios/flexvolume.yml +++ b/ci/ansible/roles/nbp-installer/scenarios/flexvolume.yml @@ -1,3 +1,17 @@ +# Copyright (c) 2018 Huawei Technologies Co., Ltd. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
---
- name: Create flexvolume plugin directory if not existed
file:
@@ -7,5 +21,5 @@ - name: Copy opensds flexvolume plugin binary file into flexvolume plugin dir
copy:
- src: "{{ nbp_dir }}/flexvolume/opensds"
+ src: "{{ nbp_work_dir }}/flexvolume/opensds"
dest: "{{ flexvolume_plugin_dir }}/opensds"
diff --git a/ci/ansible/roles/nbp-installer/scenarios/release.yml b/ci/ansible/roles/nbp-installer/scenarios/release.yml new file mode 100644 index 0000000..89429d7 --- /dev/null +++ b/ci/ansible/roles/nbp-installer/scenarios/release.yml @@ -0,0 +1,38 @@ +# Copyright (c) 2018 Huawei Technologies Co., Ltd. All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +--- +- name: check for nbp release files existed + stat: + path: "{{ nbp_tarball_dir }}" + ignore_errors: yes + register: nbpreleasesexisted + +- name: download and extract the nbp release tarball if not exists + unarchive: + src={{ nbp_download_url }} + dest=/tmp/ + when: + - nbpreleasesexisted.stat.exists is undefined or nbpreleasesexisted.stat.exists == false + +- name: change the mode of all binary files in nbp release + file: + path: "{{ nbp_tarball_dir }}/flexvolume" + mode: 0755 + recurse: yes + +- name: copy nbp tarball into nbp work directory + copy: + src: "{{ nbp_tarball_dir }}/" + dest: "{{ nbp_work_dir }}" diff --git a/ci/ansible/roles/nbp-installer/scenarios/repository.yml b/ci/ansible/roles/nbp-installer/scenarios/repository.yml new file mode 100644 index 0000000..fb8059b --- /dev/null +++ b/ci/ansible/roles/nbp-installer/scenarios/repository.yml @@ -0,0 +1,79 @@ +# Copyright (c) 2018 Huawei Technologies Co., Ltd. All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +--- +- set_fact: + go_path: "{{ lookup('env', 'GOPATH') }}" + +- name: check go_path + shell: "{{ item }}" + with_items: + - echo "The environment variable GOPATH must be set and cannot be an empty string!" 
+ - /bin/false + when: go_path == "" + +- name: check for nbp source code existed + stat: + path: "{{ go_path }}/src/github.com/opensds/nbp" + ignore_errors: yes + register: nbpexisted + +- name: download nbp source code if not exists + git: + repo: "{{ nbp_remote_url }}" + dest: "{{ go_path }}/src/github.com/opensds/nbp" + version: "{{ nbp_repo_branch }}" + when: + - nbpexisted.stat.exists is undefined or nbpexisted.stat.exists == false + +- name: build nbp binary file + shell: make + environment: + GOPATH: "{{ go_path }}" + args: + chdir: "{{ go_path }}/src/github.com/opensds/nbp" + +- name: create nbp install directory if it doesn't exist + file: + path: "{{ item }}" + state: directory + mode: 0755 + with_items: + - "{{ nbp_work_dir }}/csi" + - "{{ nbp_work_dir }}/flexvolume" + - "{{ nbp_work_dir }}/provisioner" + +- name: copy nbp csi deploy scripts into nbp work directory + copy: + src: "{{ item }}" + dest: "{{ nbp_work_dir }}/csi/" + directory_mode: yes + with_items: + - "{{ go_path }}/src/github.com/opensds/nbp/csi/server/deploy" + - "{{ go_path }}/src/github.com/opensds/nbp/csi/server/examples" + +- name: copy nbp flexvolume binary file into nbp work directory + copy: + src: "{{ go_path }}/src/github.com/opensds/nbp/.output/flexvolume.server.opensds" + dest: "{{ nbp_work_dir }}/flexvolume/opensds" + mode: 0755 + +- name: copy nbp provisioner deploy scripts into nbp work directory + copy: + src: "{{ item }}" + dest: "{{ nbp_work_dir }}/provisioner/" + directory_mode: yes + with_items: + - "{{ go_path }}/src/github.com/opensds/nbp/opensds-provisioner/deploy" + - "{{ go_path }}/src/github.com/opensds/nbp/opensds-provisioner/examples" diff --git a/ci/ansible/roles/nbp-installer/tasks/main.yml b/ci/ansible/roles/nbp-installer/tasks/main.yml index 58057f1..dd13195 100644 --- a/ci/ansible/roles/nbp-installer/tasks/main.yml +++ b/ci/ansible/roles/nbp-installer/tasks/main.yml @@ -1,8 +1,45 @@ +# Copyright (c) 2018 Huawei Technologies Co., Ltd. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
---
-- name: include scenarios/flexvolume.yml
+- name: install open-iscsi external package
+ apt:
+ name: "{{ item }}"
+ state: present
+ with_items:
+ - open-iscsi
+
+- name: create nbp work directory if it doesn't exist
+ file:
+ path: "{{ item }}"
+ state: directory
+ mode: 0755
+ with_items:
+ - "{{ nbp_work_dir }}"
+
+- name: include scenarios/repository.yml when installing from repository
+ include: scenarios/repository.yml
+ when: install_from == "repository"
+
+- name: include scenarios/release.yml when installing from release
+ include: scenarios/release.yml
+ when: install_from == "release"
+
+- name: include scenarios/flexvolume.yml when nbp plugin type is flexvolume
include: scenarios/flexvolume.yml
when: nbp_plugin_type == "flexvolume"
-- name: include scenarios/csi.yml
+- name: include scenarios/csi.yml when nbp plugin type is csi
include: scenarios/csi.yml
when: nbp_plugin_type == "csi"
diff --git a/ci/ansible/roles/osdsauth/tasks/main.yml b/ci/ansible/roles/osdsauth/tasks/main.yml new file mode 100644 index 0000000..2148bb0 --- /dev/null +++ b/ci/ansible/roles/osdsauth/tasks/main.yml @@ -0,0 +1,21 @@ +# Copyright (c) 2018 Huawei Technologies Co., Ltd. All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +--- +- name: install keystone + shell: "{{ item }}" + with_items: + - bash ./script/keystone.sh install + when: opensds_auth_strategy == "keystone" + become: yes diff --git a/ci/ansible/roles/osdsdb/scenarios/container.yml b/ci/ansible/roles/osdsdb/scenarios/container.yml index d32057d..0c7664c 100644 --- a/ci/ansible/roles/osdsdb/scenarios/container.yml +++ b/ci/ansible/roles/osdsdb/scenarios/container.yml @@ -1,10 +1,24 @@ +# Copyright (c) 2018 Huawei Technologies Co., Ltd. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
---
- name: run etcd containerized service
- docker:
+ docker_container:
name: myetcd
image: "{{ etcd_docker_image }}"
- command: /usr/local/bin/etcd --advertise-client-urls http://{{ etcd_host }}:{{ etcd_port }} --listen-client-urls http://{{ etcd_host }}:{{ etcd_port }} -advertise-client-urls http://{{ etcd_host }}:{{ etcd_peer_port }} -listen-peer-urls http://{{ etcd_host }}:{{ etcd_peer_port }}
+ command: /usr/local/bin/etcd --advertise-client-urls http://{{ etcd_host }}:{{ etcd_port }} --listen-client-urls http://{{ etcd_host }}:{{ etcd_port }} --listen-peer-urls http://{{ etcd_host }}:{{ etcd_peer_port }}
state: started
- net: host
+ network_mode: host
volumes:
- "/usr/share/ca-certificates/:/etc/ssl/certs"
diff --git a/ci/ansible/roles/osdsdb/scenarios/etcd.yml b/ci/ansible/roles/osdsdb/scenarios/etcd.yml index 62abfdb..acd4937 100644 --- a/ci/ansible/roles/osdsdb/scenarios/etcd.yml +++ b/ci/ansible/roles/osdsdb/scenarios/etcd.yml @@ -1,3 +1,17 @@ +# Copyright (c) 2018 Huawei Technologies Co., Ltd. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
---
- name: check for etcd existed
stat:
@@ -25,7 +39,7 @@ register: service_etcd_status
- name: run etcd daemon service
- shell: nohup ./etcd --advertise-client-urls http://{{ etcd_host }}:{{ etcd_port }} --listen-client-urls http://{{ etcd_host }}:{{ etcd_port }} -listen-peer-urls http://{{ etcd_host }}:{{ etcd_peer_port }} &>>etcd.log &
+ shell: nohup ./etcd --advertise-client-urls http://{{ etcd_host }}:{{ etcd_port }} --listen-client-urls http://{{ etcd_host }}:{{ etcd_port }} --listen-peer-urls http://{{ etcd_host }}:{{ etcd_peer_port }} &>>etcd.log &
become: true
args:
chdir: "{{ etcd_dir }}"
diff --git a/ci/ansible/roles/osdsdb/tasks/main.yml b/ci/ansible/roles/osdsdb/tasks/main.yml index 5b42eaf..a826d21 100644 --- a/ci/ansible/roles/osdsdb/tasks/main.yml +++ b/ci/ansible/roles/osdsdb/tasks/main.yml @@ -1,3 +1,17 @@ +# Copyright (c) 2018 Huawei Technologies Co., Ltd. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
---
- name: include scenarios/etcd.yml
include: "{{ item }}"
diff --git a/ci/ansible/roles/osdsdock/scenarios/ceph.yml b/ci/ansible/roles/osdsdock/scenarios/ceph.yml index b844a29..f5ea8ef 100644 --- a/ci/ansible/roles/osdsdock/scenarios/ceph.yml +++ b/ci/ansible/roles/osdsdock/scenarios/ceph.yml @@ -1,3 +1,17 @@ +# Copyright (c) 2018 Huawei Technologies Co., Ltd. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
---
- name: install ceph-common external package when ceph backend enabled
apt:
@@ -5,9 +19,21 @@ state: present
with_items:
- ceph-common
- when: enabled_backend == "ceph"
-- name: copy opensds ceph backend file if specify ceph backend
+- name: configure ceph section in opensds global info if ceph backend is specified
+ shell: |
+    cat >> opensds.conf <<OPENSDS_GLOBAL_CONFIG_DOC
+
+ [ceph]
+ name = {{ ceph_name }}
+ description = {{ ceph_description }}
+ driver_name = {{ ceph_driver_name }}
+ config_path = {{ ceph_config_path }}
+ args:
+ chdir: "{{ opensds_config_dir }}"
+ ignore_errors: yes
+
+- name: copy opensds ceph backend file to ceph config file if ceph backend is specified
copy:
src: ../../../group_vars/ceph/ceph.yaml
dest: "{{ ceph_config_path }}"
@@ -22,6 +48,7 @@ git:
repo: https://github.com/ceph/ceph-ansible.git
dest: /opt/ceph-ansible
+ version: stable-3.0
when:
- cephansibleexisted.stat.exists is undefined or cephansibleexisted.stat.exists == false
@@ -35,43 +62,44 @@ src: ../../../group_vars/ceph/all.yml
dest: /opt/ceph-ansible/group_vars/all.yml
-- name: copy ceph osds.yml file into ceph-ansible group_vars directory
- copy:
- src: ../../../group_vars/ceph/osds.yml
- dest: /opt/ceph-ansible/group_vars/osds.yml
-
- name: copy site.yml.sample to site.yml in ceph-ansible
copy:
src: /opt/ceph-ansible/site.yml.sample
dest: /opt/ceph-ansible/site.yml
-- name: ping all hosts
- shell: ansible all -m ping -i ceph.hosts
- become: true
- args:
- chdir: /opt/ceph-ansible
-
-- name: run ceph-ansible playbook
- shell: ansible-playbook site.yml -i ceph.hosts | tee /var/log/ceph_ansible.log
+- name: ping all hosts and run ceph-ansible playbook
+ shell: "{{ item }}"
become: true
+ with_items:
+ - ansible all -m ping -i ceph.hosts
+ - ansible-playbook site.yml -i ceph.hosts | tee /var/log/ceph_ansible.log
args:
chdir: /opt/ceph-ansible
-#- name: Check if ceph osd is running
-# shell: ps aux | grep ceph-osd | grep -v grep
-# ignore_errors: false
-# changed_when: false
-# register: service_ceph_osd_status
+- name: check if ceph osd is running
+ shell: ps aux | grep ceph-osd | grep -v grep
+ ignore_errors: false
+ changed_when: false
+ register: service_ceph_osd_status
-- name: Check if ceph mon is running
+- name: check if ceph mon is running
shell: ps aux | grep ceph-mon | grep -v grep
ignore_errors: false
changed_when: false
register: service_ceph_mon_status
-- name: Create specified pools and initialize them with default pool size.
+- name: configure crush profile and disable some ceph features for kernel compatibility
+ shell: "{{ item }}"
+ become: true
+ ignore_errors: yes
+ with_items:
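+    # the sed expression below appends 'rbd default features = 1' under the [global] section so that older kernel rbd clients can still map the images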
+ - ceph osd crush tunables hammer
+ - grep -q "^rbd default features" /etc/ceph/ceph.conf || sed -i '/\[global\]/arbd default features = 1' /etc/ceph/ceph.conf
+ when: service_ceph_mon_status.rc == 0 and service_ceph_osd_status.rc == 0
+
+- name: create specified pools and initialize them with default pool size.
shell: ceph osd pool create {{ item }} 100 && ceph osd pool set {{ item }} size 1
ignore_errors: yes
changed_when: false
with_items: "{{ ceph_pools }}"
- when: service_ceph_mon_status.rc == 0 # and service_ceph_osd_status.rc == 0
+ when: service_ceph_mon_status.rc == 0 and service_ceph_osd_status.rc == 0
diff --git a/ci/ansible/roles/osdsdock/scenarios/cinder.yml b/ci/ansible/roles/osdsdock/scenarios/cinder.yml index 6136f25..97ebe9d 100644 --- a/ci/ansible/roles/osdsdock/scenarios/cinder.yml +++ b/ci/ansible/roles/osdsdock/scenarios/cinder.yml @@ -1,5 +1,32 @@ +# Copyright (c) 2018 Huawei Technologies Co., Ltd. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
---
-- name: copy opensds cinder backend file if specify cinder backend
+- name: configure cinder section in opensds global info if cinder backend is specified
+ shell: |
+    cat >> opensds.conf <<OPENSDS_GLOBAL_CONFIG_DOC
+
+ [cinder]
+ name = {{ cinder_name }}
+ description = {{ cinder_description }}
+ driver_name = {{ cinder_driver_name }}
+ config_path = {{ cinder_config_path }}
+ args:
+ chdir: "{{ opensds_config_dir }}"
+ ignore_errors: yes
+
+- name: copy opensds cinder backend file to cinder config path if cinder backend is specified
copy:
src: ../../../group_vars/cinder/cinder.yaml
dest: "{{ cinder_config_path }}"
diff --git a/ci/ansible/roles/osdsdock/scenarios/cinder_standalone.yml b/ci/ansible/roles/osdsdock/scenarios/cinder_standalone.yml index 49f4063..b62d2d1 100644 --- a/ci/ansible/roles/osdsdock/scenarios/cinder_standalone.yml +++ b/ci/ansible/roles/osdsdock/scenarios/cinder_standalone.yml @@ -1,21 +1,42 @@ ----
-- name: install python-pip
- apt:
- name: python-pip
-
-- name: install lvm2
- apt:
- name: lvm2
+# Copyright (c) 2018 Huawei Technologies Co., Ltd. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
-- name: install thin-provisioning-tools
+---
+- name: install python-pip, lvm2, thin-provisioning-tools and docker-compose
apt:
- name: thin-provisioning-tools
+ name: "{{ item }}"
+ state: present
+ with_items:
+ - python-pip
+ - lvm2
+ - thin-provisioning-tools
+ - docker-compose
+
+- name: configure cinder section in opensds global info if cinder backend is specified
+ shell: |
+    cat >> opensds.conf <<OPENSDS_GLOBAL_CONFIG_DOC
-- name: install docker-compose
- pip:
- name: docker-compose
+ [cinder]
+ name = {{ cinder_name }}
+ description = {{ cinder_description }}
+ driver_name = {{ cinder_driver_name }}
+ config_path = {{ cinder_config_path }}
+ args:
+ chdir: "{{ opensds_config_dir }}"
+ ignore_errors: yes
-- name: copy opensds cinder backend file if specify cinder backend
+- name: copy opensds cinder backend file to cinder config path if cinder backend is specified
copy:
src: ../../../group_vars/cinder/cinder.yaml
dest: "{{ cinder_config_path }}"
@@ -40,6 +61,11 @@ local vg_dev
vg_dev=`sudo losetup -f --show $backing_file`
+ # Only create physical volume if it doesn't already exist
+ if ! sudo pvs $vg_dev; then
+ sudo pvcreate $vg_dev
+ fi
+
# Only create volume group if it doesn't already exist
if ! sudo vgs $vg; then
sudo vgcreate $vg $vg_dev
@@ -120,6 +146,8 @@ sed -i "s/image: debian-cinder/image: {{ cinder_image_tag }}/g" docker-compose.yml
sed -i "s/image: lvm-debian-cinder/image: lvm-{{ cinder_image_tag }}/g" docker-compose.yml
+ sed -i "s/3306:3306/3307:3306/g" docker-compose.yml
+
sed -i "s/volume_group = cinder-volumes /volume_group = {{ cinder_volume_group }}/g" etc/cinder.conf
become: true
args:
@@ -141,5 +169,5 @@ wait_for:
host: 127.0.0.1
port: 8776
- delay: 2
+ delay: 15
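+      # wait a bit longer before the first probe of port 8776 so the containerized cinder services have time to come up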
timeout: 120
diff --git a/ci/ansible/roles/osdsdock/scenarios/lvm.yml b/ci/ansible/roles/osdsdock/scenarios/lvm.yml index 743fe3b..a93f1e4 100644 --- a/ci/ansible/roles/osdsdock/scenarios/lvm.yml +++ b/ci/ansible/roles/osdsdock/scenarios/lvm.yml @@ -1,20 +1,78 @@ +# Copyright (c) 2018 Huawei Technologies Co., Ltd. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
---
-- name: install lvm2 external package when lvm backend enabled
+- name: install lvm2, tgt and thin-provisioning-tools external packages when lvm backend enabled
apt:
- name: lvm2
+ name: "{{ item }}"
+ state: present
+ with_items:
+ - lvm2
+ - tgt
+ - thin-provisioning-tools
+
+- name: configure lvm section in opensds global info if lvm backend is specified
+ shell: |
+    cat >> opensds.conf <<OPENSDS_GLOBAL_CONFIG_DOC
+
+ [lvm]
+ name = {{ lvm_name }}
+ description = {{ lvm_description }}
+ driver_name = {{ lvm_driver_name }}
+ config_path = {{ lvm_config_path }}
+ args:
+ chdir: "{{ opensds_config_dir }}"
+ ignore_errors: yes
-- name: copy opensds lvm backend file if specify lvm backend
+- name: copy opensds lvm backend file to lvm config path if lvm backend is specified
copy:
src: ../../../group_vars/lvm/lvm.yaml
dest: "{{ lvm_config_path }}"
-- name: check if volume group existed
- shell: vgdisplay {{ vg_name }}
- ignore_errors: yes
- register: vg_existed
+- name: create directory for the volume group backing file
+ file:
+ path: "{{ opensds_work_dir }}/volumegroups"
+ state: directory
+ recurse: yes
+
+- name: create volume group in thin mode
+ shell:
+ _raw_params: |
+ function _create_lvm_volume_group {
+ local vg=$1
+ local size=$2
+
+ local backing_file={{ opensds_work_dir }}/volumegroups/${vg}.img
+ if ! sudo vgs $vg; then
+        # Only create the backing file if it doesn't already exist
+ [[ -f $backing_file ]] || truncate -s $size $backing_file
+ local vg_dev
+ vg_dev=`sudo losetup -f --show $backing_file`
+
+ # Only create physical volume if it doesn't already exist
+ if ! sudo pvs $vg_dev; then
+ sudo pvcreate $vg_dev
+ fi
-- name: create a volume group and initialize it
- lvg:
- vg: "{{ vg_name }}"
- pvs: "{{ pv_devices }}"
- when: vg_existed is undefined or vg_existed.rc != 0
+ # Only create volume group if it doesn't already exist
+ if ! sudo vgs $vg; then
+ sudo vgcreate $vg $vg_dev
+ fi
+ fi
+ }
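+      # dm_thin_pool is needed for the thin-provisioned logical volumes that will be created in this volume group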
+ modprobe dm_thin_pool
+ _create_lvm_volume_group {{ opensds_volume_group }} 10G
+ args:
+ executable: /bin/bash
+ become: true
diff --git a/ci/ansible/roles/osdsdock/tasks/main.yml b/ci/ansible/roles/osdsdock/tasks/main.yml index 215cf00..8b18c8d 100644 --- a/ci/ansible/roles/osdsdock/tasks/main.yml +++ b/ci/ansible/roles/osdsdock/tasks/main.yml @@ -1,3 +1,17 @@ +# Copyright (c) 2018 Huawei Technologies Co., Ltd. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
---
- name: include scenarios/lvm.yml
include: scenarios/lvm.yml
@@ -28,15 +42,15 @@ ps aux | grep osdsdock | grep -v grep && break
done
args:
- chdir: "{{ opensds_dir }}"
+ chdir: "{{ opensds_work_dir }}"
when: container_enabled == false
- name: run osdsdock containerized service
- docker:
+ docker_container:
name: osdsdock
image: opensdsio/opensds-dock:latest
state: started
- net: host
+ network_mode: host
privileged: true
volumes:
- "/etc/opensds/:/etc/opensds"
diff --git a/ci/ansible/roles/osdslet/tasks/main.yml b/ci/ansible/roles/osdslet/tasks/main.yml index 02b71fc..87ea816 100644 --- a/ci/ansible/roles/osdslet/tasks/main.yml +++ b/ci/ansible/roles/osdslet/tasks/main.yml @@ -1,3 +1,17 @@ +# Copyright (c) 2018 Huawei Technologies Co., Ltd. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
---
- name: run osdslet daemon service
shell:
@@ -12,15 +26,15 @@ ps aux | grep osdslet | grep -v grep && break
done
args:
- chdir: "{{ opensds_dir }}"
+ chdir: "{{ opensds_work_dir }}"
when: container_enabled == false
- name: run osdslet containerized service
- docker:
+ docker_container:
name: osdslet
image: opensdsio/opensds-controller:latest
state: started
- net: host
+ network_mode: host
volumes:
- "/etc/opensds/:/etc/opensds"
when: container_enabled == true
diff --git a/ci/ansible/script/check_ansible_version.sh b/ci/ansible/script/check_ansible_version.sh new file mode 100644 index 0000000..e9e1c9b --- /dev/null +++ b/ci/ansible/script/check_ansible_version.sh @@ -0,0 +1,26 @@ +#!/usr/bin/env bash + +# Copyright (c) 2018 Huawei Technologies Co., Ltd. All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +ansiblever=$(ansible --version |grep -Eow '^ansible [^ ]+' |gawk '{ print $2 }') +echo "The actual version of ansible is $ansiblever" + +if [[ "$ansiblever" < '2.4.2' ]]; then + echo "Ansible version 2.4.2 or higher is required" + exit 1 +fi + +exit 0 + diff --git a/ci/ansible/script/keystone.sh b/ci/ansible/script/keystone.sh new file mode 100644 index 0000000..3de1e8b --- /dev/null +++ b/ci/ansible/script/keystone.sh @@ -0,0 +1,178 @@ +#!/usr/bin/env bash + +# Copyright (c) 2018 Huawei Technologies Co., Ltd. All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +# 'stack' user is just for install keystone through devstack + +create_user(){ + if id "${STACK_USER_NAME}" &> /dev/null; then + return + fi + sudo useradd -s /bin/bash -d "${STACK_HOME}" -m "${STACK_USER_NAME}" + echo "stack ALL=(ALL) NOPASSWD: ALL" | sudo tee /etc/sudoers.d/stack +} + + +remove_user(){ + userdel "${STACK_USER_NAME}" -f -r + rm /etc/sudoers.d/stack +} + +devstack_local_conf(){ +DEV_STACK_LOCAL_CONF=${DEV_STACK_DIR}/local.conf +cat > "$DEV_STACK_LOCAL_CONF" << DEV_STACK_LOCAL_CONF_DOCK +[[local|localrc]] +# use TryStack git mirror +GIT_BASE=$STACK_GIT_BASE + +# If the "*_PASSWORD" variables are not set here you will be prompted to enter +# values for them by "stack.sh" and they will be added to "local.conf". +ADMIN_PASSWORD=$STACK_PASSWORD +DATABASE_PASSWORD=$STACK_PASSWORD +RABBIT_PASSWORD=$STACK_PASSWORD +SERVICE_PASSWORD=$STACK_PASSWORD + +# Neither is set by default. +HOST_IP=$HOST_IP + +# path of the destination log file. A timestamp will be appended to the given name. +LOGFILE=\$DEST/logs/stack.sh.log + +# Old log files are automatically removed after 7 days to keep things neat. Change +# the number of days by setting "LOGDAYS". 
+LOGDAYS=2 + +ENABLED_SERVICES=mysql,key +# Using stable/queens branches +# --------------------------------- +KEYSTONE_BRANCH=$STACK_BRANCH +KEYSTONECLIENT_BRANCH=$STACK_BRANCH +DEV_STACK_LOCAL_CONF_DOCK +chown stack:stack "$DEV_STACK_LOCAL_CONF" +} + +opensds_conf() { +cat >> "$OPENSDS_CONFIG_DIR/opensds.conf" << OPENSDS_GLOBAL_CONFIG_DOC + + +[keystone_authtoken] +memcached_servers = $HOST_IP:11211 +signing_dir = /var/cache/opensds +cafile = /opt/stack/data/ca-bundle.pem +auth_uri = http://$HOST_IP/identity +project_domain_name = Default +project_name = service +user_domain_name = Default +password = $STACK_PASSWORD +username = $OPENSDS_SERVER_NAME +auth_url = http://$HOST_IP/identity +auth_type = password + +OPENSDS_GLOBAL_CONFIG_DOC + +cp "$OPENSDS_DIR/examples/policy.json" "$OPENSDS_CONFIG_DIR" +} + +create_user_and_endpoint(){ + . "$DEV_STACK_DIR/openrc" admin admin + openstack user create --domain default --password "$STACK_PASSWORD" "$OPENSDS_SERVER_NAME" + openstack role add --project service --user opensds admin + openstack group create service + openstack group add user service opensds + openstack role add service --project service --group service + openstack group add user admins admin + openstack service create --name "opensds$OPENSDS_VERSION" --description "OpenSDS Block Storage" "opensds$OPENSDS_VERSION" + openstack endpoint create --region RegionOne "opensds$OPENSDS_VERSION" public "http://$HOST_IP:50040/$OPENSDS_VERSION/%\(tenant_id\)s" + openstack endpoint create --region RegionOne "opensds$OPENSDS_VERSION" internal "http://$HOST_IP:50040/$OPENSDS_VERSION/%\(tenant_id\)s" + openstack endpoint create --region RegionOne "opensds$OPENSDS_VERSION" admin "http://$HOST_IP:50040/$OPENSDS_VERSION/%\(tenant_id\)s" +} + +delete_redundancy_data() { + . "$DEV_STACK_DIR/openrc" admin admin + openstack project delete demo + openstack project delete alt_demo + openstack project delete invisible_to_admin + openstack user delete demo + openstack user delete alt_demo +} + +download_code(){ + if [ ! -d "${DEV_STACK_DIR}" ];then + git clone "${STACK_GIT_BASE}/openstack-dev/devstack.git" -b "${STACK_BRANCH}" "${DEV_STACK_DIR}" + chown stack:stack -R "${DEV_STACK_DIR}" + fi +} + +install(){ + create_user + download_code + opensds_conf + + # If keystone is ready to start, there is no need continue next step. + if wait_for_url "http://$HOST_IP/identity" "keystone" 0.25 4; then + return + fi + devstack_local_conf + cd "${DEV_STACK_DIR}" + su "$STACK_USER_NAME" -c "${DEV_STACK_DIR}/stack.sh" >/dev/null + create_user_and_endpoint + delete_redundancy_data +} + +cleanup() { + su "$STACK_USER_NAME" -c "${DEV_STACK_DIR}/clean.sh" >/dev/null +} + +uninstall(){ + su "$STACK_USER_NAME" -c "${DEV_STACK_DIR}/unstack.sh" >/dev/null +} + +uninstall_purge(){ + rm "${STACK_HOME:?'STACK_HOME must be defined and cannot be empty'}/*" -rf + remove_user +} + +# *************************** +TOP_DIR=$(cd $(dirname "$0") && pwd) + +# OpenSDS configuration directory +OPENSDS_CONFIG_DIR=${OPENSDS_CONFIG_DIR:-/etc/opensds} + +source "$TOP_DIR/util.sh" +source "$TOP_DIR/sdsrc" + +case "$# $1" in + "1 install") + echo "Starting install keystone..." + install + ;; + "1 uninstall") + echo "Starting uninstall keystone..." + uninstall + ;; + "1 cleanup") + echo "Starting cleanup keystone..." + cleanup + ;; + "1 uninstall_purge") + echo "Starting uninstall purge keystone..." 
+ uninstall_purge + ;; + *) + echo "The value of the parameter can only be one of the following: install/uninstall/cleanup/uninstall_purge" + exit 1 + ;; +esac + diff --git a/ci/ansible/script/sdsrc b/ci/ansible/script/sdsrc new file mode 100644 index 0000000..d26083d --- /dev/null +++ b/ci/ansible/script/sdsrc @@ -0,0 +1,40 @@ +#!/usr/bin/env bash + +# Copyright (c) 2018 Huawei Technologies Co., Ltd. All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +# Global +HOST_IP=${HOST_IP:-} +HOST_IP=$(get_default_host_ip "$HOST_IP" "inet") +if [ "$HOST_IP" == "" ]; then + die $LINENO "Could not determine host ip address. See local.conf for suggestions on setting HOST_IP." +fi + +# OpenSDS configuration. +OPENSDS_VERSION=${OPENSDS_VERSION:-v1beta} + +# OpenSDS service name in keystone. +OPENSDS_SERVER_NAME=${OPENSDS_SERVER_NAME:-opensds} + +# devstack keystone configuration +STACK_GIT_BASE=${STACK_GIT_BASE:-https://git.openstack.org} +STACK_USER_NAME=${STACK_USER_NAME:-stack} +STACK_PASSWORD=${STACK_PASSWORD:-opensds@123} +STACK_HOME=${STACK_HOME:-/opt/stack} +STACK_BRANCH=${STACK_BRANCH:-stable/queens} +DEV_STACK_DIR=$STACK_HOME/devstack + +GOPATH=${GOPATH:-$HOME/gopath} +OPENSDS_DIR=${GOPATH}/src/github.com/opensds/opensds + diff --git a/ci/ansible/script/set_nginx_config.sh b/ci/ansible/script/set_nginx_config.sh new file mode 100644 index 0000000..3abbfcd --- /dev/null +++ b/ci/ansible/script/set_nginx_config.sh @@ -0,0 +1,37 @@ +#!/usr/bin/env bash + +# Copyright (c) 2018 Huawei Technologies Co., Ltd. All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +TOP_DIR=$(cd $(dirname "$0") && pwd) +source "$TOP_DIR/util.sh" +source "$TOP_DIR/sdsrc" + +cat > /etc/nginx/sites-available/default <<EOF + server { + listen 8088 default_server; + listen [::]:8088 default_server; + root /var/www/html; + index index.html index.htm index.nginx-debian.html; + server_name _; + location /v3/ { + proxy_pass http://$HOST_IP/identity/v3/; + } + location /v1beta/ { + proxy_pass http://$HOST_IP:50040/$OPENSDS_VERSION/; + } + } +EOF + + diff --git a/ci/ansible/script/util.sh b/ci/ansible/script/util.sh new file mode 100644 index 0000000..b0e30eb --- /dev/null +++ b/ci/ansible/script/util.sh @@ -0,0 +1,93 @@ +#!/bin/bash + +# Copyright (c) 2017 Huawei Technologies Co., Ltd. All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +# Echo text to the log file, summary log file and stdout +# echo_summary "something to say" +function echo_summary { + echo -e "$@" +} + +wait_for_url() { + local url=$1 + local prefix=${2:-} + local wait=${3:-1} + local times=${4:-30} + + which curl >/dev/null || { + echo_summary "curl must be installed" + exit 1 + } + + local i + for i in $(seq 1 "$times"); do + local out + if out=$(curl --max-time 1 -gkfs "$url" 2>/dev/null); then + echo_summary "On try ${i}, ${prefix}: ${out}" + return 0 + fi + sleep "${wait}" + done + echo_summary "Timed out waiting for ${prefix} to answer at ${url}; tried ${times} waiting ${wait} between each" + return 1 +} + +# Prints line number and "message" in error format +# err $LINENO "message" +err() { + local exitcode=$? + local xtrace + xtrace=$(set +o | grep xtrace) + set +o xtrace + local msg="[ERROR] ${BASH_SOURCE[2]}:$1 $2" + echo "$msg" + $xtrace + return $exitcode +} + +# Prints line number and "message" then exits +# die $LINENO "message" +die() { + local exitcode=$? + set +o xtrace + local line=$1; shift + if [ $exitcode == 0 ]; then + exitcode=1 + fi + err "$line" "$*" + # Give buffers a second to flush + sleep 1 + exit $exitcode +} + +get_default_host_ip() { + local host_ip=$1 + local af=$2 + # Search for an IP unless an explicit is set by ``HOST_IP`` environment variable + if [ -z "$host_ip" ]; then + host_ip="" + # Find the interface used for the default route + host_ip_iface=${host_ip_iface:-$(ip -f "$af" route | awk '/default/ {print $5}' | head -1)} + local host_ips + host_ips=$(LC_ALL=C ip -f "$af" addr show "${host_ip_iface}" | sed /temporary/d |awk /$af'/ {split($2,parts,"/"); print parts[1]}') + local ip + for ip in $host_ips; do + host_ip=$ip + break; + done + fi + echo "$host_ip" +} + diff --git a/ci/ansible/site.yml b/ci/ansible/site.yml index f0d2048..31ebaee 100644 --- a/ci/ansible/site.yml +++ b/ci/ansible/site.yml @@ -1,3 +1,17 @@ +# Copyright (c) 2018 Huawei Technologies Co., Ltd. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
---
# Defines deployment design and assigns role to server groups
@@ -6,14 +20,17 @@ remote_user: root
vars_files:
- group_vars/common.yml
+ - group_vars/auth.yml
- group_vars/osdsdb.yml
- group_vars/osdslet.yml
- group_vars/osdsdock.yml
+ - group_vars/dashboard.yml
gather_facts: false
become: True
roles:
- common
+ - osdsauth
- osdsdb
- osdslet
- osdsdock
- - nbp-installer
+ - dashboard-installer
diff --git a/ci/conf/policy.json b/ci/conf/policy.json new file mode 100644 index 0000000..781ee48 --- /dev/null +++ b/ci/conf/policy.json @@ -0,0 +1,49 @@
+{
+    "admin_or_owner": "is_admin:True or (role:admin and is_admin_project:True) or tenant_id:%(tenant_id)s",
+    "default": "rule:admin_or_owner",
+    "admin_api": "is_admin:True or (role:admin and is_admin_project:True)",
+
+
+    "profile:create": "rule:admin_api",
+    "profile:list": "",
+    "profile:get": "",
+    "profile:update": "rule:admin_api",
+    "profile:delete": "rule:admin_api",
+    "profile:add_extra_property": "rule:admin_api",
+    "profile:list_extra_properties": "",
+    "profile:remove_extra_property": "rule:admin_api",
+    "volume:create": "rule:admin_or_owner",
+    "volume:list": "rule:admin_or_owner",
+    "volume:get": "rule:admin_or_owner",
+    "volume:update": "rule:admin_or_owner",
+    "volume:extend": "rule:admin_or_owner",
+    "volume:delete": "rule:admin_or_owner",
+    "volume:create_attachment": "rule:admin_or_owner",
+    "volume:list_attachments": "rule:admin_or_owner",
+    "volume:get_attachment": "rule:admin_or_owner",
+    "volume:update_attachment": "rule:admin_or_owner",
+    "volume:delete_attachment": "rule:admin_or_owner",
+    "snapshot:create": "rule:admin_or_owner",
+    "snapshot:list": "rule:admin_or_owner",
+    "snapshot:get": "rule:admin_or_owner",
+    "snapshot:update": "rule:admin_or_owner",
+    "snapshot:delete": "rule:admin_or_owner",
+    "dock:list": "rule:admin_api",
+    "dock:get": "rule:admin_api",
+    "pool:list": "rule:admin_api",
+    "pool:get": "rule:admin_api",
+    "replication:create": "rule:admin_or_owner",
+    "replication:list": "rule:admin_or_owner",
+    "replication:list_detail": "rule:admin_or_owner",
+    "replication:get": "rule:admin_or_owner",
+    "replication:update": "rule:admin_or_owner",
+    "replication:delete": "rule:admin_or_owner",
+    "replication:action:enable": "rule:admin_or_owner",
+    "replication:action:disable": "rule:admin_or_owner",
+    "replication:action:failover": "rule:admin_or_owner",
+    "volume_group:create": "rule:admin_or_owner",
+    "volume_group:list": "rule:admin_or_owner",
+    "volume_group:get": "rule:admin_or_owner",
+    "volume_group:update": "rule:admin_or_owner",
+    "volume_group:delete": "rule:admin_or_owner"
+}
\ No newline at end of file
diff --git a/tutorials/csi-plugin.md b/tutorials/csi-plugin.md index 9750791..997f2d5 100644 --- a/tutorials/csi-plugin.md +++ b/tutorials/csi-plugin.md @@ -31,60 +31,29 @@ ```
### [kubernetes](https://github.com/kubernetes/kubernetes) local cluster
-* You can startup the v1.9.0 k8s local cluster by executing commands blow:
+* You can start up a `v1.10.0` k8s local cluster by executing the commands below:
```
cd $HOME
git clone https://github.com/kubernetes/kubernetes.git
cd $HOME/kubernetes
- git checkout v1.9.0
+ git checkout v1.10.0
make
echo alias kubectl='$HOME/kubernetes/cluster/kubectl.sh' >> /etc/profile
ALLOW_PRIVILEGED=true FEATURE_GATES=CSIPersistentVolume=true,MountPropagation=true RUNTIME_CONFIG="storage.k8s.io/v1alpha1=true" LOG_LEVEL=5 hack/local-up-cluster.sh
```
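Once the cluster has come up (`local-up-cluster.sh` stays in the foreground, so use a second terminal), a quick sanity check might look like the sketch below; the `kubectl` alias assumed here is the one written to `/etc/profile` above:
```bash
# Load the kubectl alias defined earlier and confirm the local node is registered.
source /etc/profile
kubectl get nodes
```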
### [opensds](https://github.com/opensds/opensds) local cluster
-* For testing purposes you can deploy OpenSDS refering to ```ansible/README.md```.
+* For testing purposes you can deploy OpenSDS referring to [OpenSDS Cluster Installation through Ansible](https://github.com/opensds/opensds/wiki/OpenSDS-Cluster-Installation-through-Ansible).
## Testing steps ##
* Change the workplace
```
- cd /opt/opensds-k8s-v0.1.0-linux-amd64
+ cd /opt/opensds-k8s-linux-amd64
```
-* Configure opensds endpoint IP
-
- ```
- vim csi/deploy/kubernetes/csi-configmap-opensdsplugin.yaml
- ```
-
- The IP (127.0.0.1) should be replaced with the opensds actual endpoint IP.
- ```yaml
- kind: ConfigMap
- apiVersion: v1
- metadata:
- name: csi-configmap-opensdsplugin
- data:
- opensdsendpoint: http://127.0.0.1:50040
- ```
-
-* Create opensds CSI pods.
-
- ```
- kubectl create -f csi/deploy/kubernetes
- ```
-
- After this three pods can be found by ```kubectl get pods``` like below:
-
- - csi-provisioner-opensdsplugin
- - csi-attacher-opensdsplugin
- - csi-nodeplugin-opensdsplugin
-
- You can find more design details from
- [CSI Volume Plugins in Kubernetes Design Doc](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/storage/container-storage-interface.md)
-
* Create example nginx application
```
diff --git a/tutorials/flexvolume-plugin.md b/tutorials/flexvolume-plugin.md index 269da4b..cb90316 100644 --- a/tutorials/flexvolume-plugin.md +++ b/tutorials/flexvolume-plugin.md @@ -1,17 +1,15 @@ ## Prerequisite ##
-
### ubuntu
* Version information
- ```
+ ```bash
root@proxy:~# cat /etc/issue
Ubuntu 16.04.2 LTS \n \l
```
-
### docker
* Version information
- ```
+ ```bash
root@proxy:~# docker version
Client:
Version: 1.12.6
@@ -20,7 +18,7 @@ Git commit: 78d1802
Built: Tue Jan 31 23:35:14 2017
OS/Arch: linux/amd64
-
+
Server:
Version: 1.12.6
API version: 1.24
@@ -30,16 +28,33 @@ OS/Arch: linux/amd64
```
-### [kubernetes](https://github.com/kubernetes/kubernetes) local cluster
+### [golang](https://redirector.gvt1.com/edgedl/go/go1.9.2.linux-amd64.tar.gz)
* Version information
+
+ ```bash
+ root@proxy:~# go version
+ go version go1.9.2 linux/amd64
```
+
+* You can install golang by executing the commands below:
+
+ ```bash
+ wget https://storage.googleapis.com/golang/go1.9.2.linux-amd64.tar.gz
+ tar -C /usr/local -xzf go1.9.2.linux-amd64.tar.gz
+ export PATH=$PATH:/usr/local/go/bin
+ export GOPATH=$HOME/gopath
+ ```
+
+### [kubernetes](https://github.com/kubernetes/kubernetes) local cluster
+* Version information
+ ```bash
root@proxy:~# kubectl version
Client Version: version.Info{Major:"1", Minor:"9+", GitVersion:"v1.9.0-beta.0-dirty", GitCommit:"a0fb3baa71f1559fd42d1acd9cbdd8a55ab4dfff", GitTreeState:"dirty", BuildDate:"2017-12-13T09:22:09Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"9+", GitVersion:"v1.9.0-beta.0-dirty", GitCommit:"a0fb3baa71f1559fd42d1acd9cbdd8a55ab4dfff", GitTreeState:"dirty", BuildDate:"2017-12-13T09:22:09Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}
```
* You can start up the k8s local cluster by executing the commands below:
- ```
+ ```bash
cd $HOME
git clone https://github.com/kubernetes/kubernetes.git
cd $HOME/kubernetes
@@ -48,23 +63,71 @@ echo alias kubectl='$HOME/kubernetes/cluster/kubectl.sh' >> /etc/profile
RUNTIME_CONFIG=settings.k8s.io/v1alpha1=true AUTHORIZATION_MODE=Node,RBAC hack/local-up-cluster.sh -O
```
-
+**NOTE**:
+<div> OpenSDS uses etcd as its database, just like Kubernetes, so you should start up Kubernetes first.
+</div>
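As a quick check (assuming etcd's default client/peer ports 2379 and 2380), you can see whether those ports are already in use before bringing up OpenSDS:
```bash
# Show any process already listening on the default etcd ports.
ss -tlnp | grep -E ':(2379|2380)\s' || echo "default etcd ports are free"
```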
### [opensds](https://github.com/opensds/opensds) local cluster
-* For testing purposes you can deploy OpenSDS local cluster referring to ```ansible/README.md```.
+* For testing purposes you can deploy OpenSDS referring the [Local Cluster Installation with LVM](https://github.com/opensds/opensds/wiki/Local-Cluster-Installation-with-LVM) wiki.
## Testing steps ##
+* Load the environment variables that were set earlier.
-* Create service account, role and bind them.
+ ```bash
+ source /etc/profile
+ ```
+* Download nbp source code.
+
+  Using git clone:
+ ```bash
+ git clone https://github.com/opensds/nbp.git $GOPATH/src/github.com/opensds/nbp
+ ```
+
+  Or using go get:
+ ```bash
+ go get -v github.com/opensds/nbp/...
+ ```
+
+* Build the FlexVolume.
+
+ ```bash
+ cd $GOPATH/src/github.com/opensds/nbp/flexvolume
+ go build -o opensds ./cmd/flex-plugin/
```
- cd /opt/opensds-k8s-{release version}-linux-amd64/provisioner
+
+  The FlexVolume plugin binary is generated in the current directory.
+
+
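To confirm the build output (the path below simply restates the build directory used above):
```bash
# The go build step above should have produced an executable named "opensds" here.
ls -l $GOPATH/src/github.com/opensds/nbp/flexvolume/opensds
```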
+* Copy the OpenSDS FlexVolume binary file to k8s kubelet `volume-plugin-dir`.
+  If you don't specify the `volume-plugin-dir`, you can execute the commands below:
+
+ ```bash
+ mkdir -p /usr/libexec/kubernetes/kubelet-plugins/volume/exec/opensds.io~opensds/
+ cp $GOPATH/src/github.com/opensds/nbp/flexvolume/opensds /usr/libexec/kubernetes/kubelet-plugins/volume/exec/opensds.io~opensds/
+ ```
+
+ **NOTE**:
+ <div>
+  The OpenSDS FlexVolume reads the opensds API endpoint from the environment variable `OPENSDS_ENDPOINT`. If you don't specify it, the FlexVolume uses the default value `http://127.0.0.1:50040`. To set `OPENSDS_ENDPOINT`, execute `export OPENSDS_ENDPOINT=http://ip:50040` and restart the k8s local cluster.
+</div>
+
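For example, reusing the sample osdslet IP that appears later in this tutorial (192.168.56.106 — replace it with your own host), the endpoint could be persisted the same way the other variables in this guide are written to `/etc/profile`:
```bash
# Hypothetical endpoint; point it at the host actually running osdslet.
echo 'export OPENSDS_ENDPOINT=http://192.168.56.106:50040' >> /etc/profile
source /etc/profile
```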
+* Build the provisioner docker image.
+
+ ```bash
+ cd $GOPATH/src/github.com/opensds/nbp/opensds-provisioner
+ make container
+ ```
+
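If the build succeeds, the image should be visible locally (the image name is taken from the pod spec used below):
```bash
# Check that the provisioner image was built.
docker images | grep opensds-provisioner
```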
+* Create service account, role and bind them.
+ ```bash
+ cd $GOPATH/src/github.com/opensds/nbp/opensds-provisioner/examples
kubectl create -f serviceaccount.yaml
kubectl create -f clusterrole.yaml
kubectl create -f clusterrolebinding.yaml
```
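To verify the objects were created (the service account name is taken from the pod spec below; the role and binding names depend on the example manifests, so the grep is only a rough filter):
```bash
kubectl get serviceaccount opensds-provisioner
kubectl get clusterrole,clusterrolebinding | grep -i opensds
```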
-* Change the opensds endpoint IP in pod-provisioner.yaml
-The IP ```192.168.56.106``` should be replaced with the OpenSDS osdslet actual endpoint IP.
+* Change the opensds endpoint IP in pod-provisioner.yaml
+The IP (192.168.56.106) should be replaced with the actual endpoint IP of the OpenSDS osdslet.
```yaml
kind: Pod
apiVersion: v1
@@ -74,7 +137,7 @@ The IP ```192.168.56.106``` should be replaced with the OpenSDS osdslet actual e serviceAccount: opensds-provisioner
containers:
- name: opensds-provisioner
- image: opensdsio/opensds-provisioner:latest
+ image: opensdsio/opensds-provisioner
securityContext:
args:
- "-endpoint=http://192.168.56.106:50040" # should be replaced
@@ -82,19 +145,54 @@ The IP ```192.168.56.106``` should be replaced with the OpenSDS osdslet actual e ```
* Create provisioner pod.
- ```
+ ```bash
kubectl create -f pod-provisioner.yaml
```
-
+
+ Execute `kubectl get pod` to check if the opensds-provisioner is ok.
+ ```bash
+ root@nbp:~/go/src/github.com/opensds/nbp/opensds-provisioner/examples# kubectl get pod
+ NAME READY STATUS RESTARTS AGE
+ opensds-provisioner 1/1 Running 0 42m
+ ```
* You can use the following commands to test the OpenSDS FlexVolume and Provisioner functions.
- ```
+ Create storage class.
+ ```bash
kubectl create -f sc.yaml # Create StorageClass
+ ```
+ Execute `kubectl get sc` to check if the storage class is ok.
+ ```bash
+ root@nbp:~/go/src/github.com/opensds/nbp/opensds-provisioner/examples# kubectl get sc
+ NAME PROVISIONER AGE
+ opensds opensds/nbp-provisioner 46m
+ standard (default) kubernetes.io/host-path 49m
+ ```
+ Create PVC.
+ ```bash
kubectl create -f pvc.yaml # Create PVC
- kubectl create -f pod-application.yaml # Create busybox pod and mount the block storage.
```
+ Execute `kubectl get pvc` to check if the pvc is ok.
+ ```bash
+ root@nbp:~/go/src/github.com/opensds/nbp/opensds-provisioner/examples# kubectl get pvc
+ NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
+ opensds-pvc Bound 731da41e-c9ee-4180-8fb3-d1f6c7f65378 1Gi RWO opensds 48m
+ ```
+ Create busybox pod.
+
+ ```bash
+ kubectl create -f pod-application.yaml # Create busybox pod and mount the block storage.
+ ```
+ Execute `kubectl get pod` to check if the busybox pod is ok.
+ ```bash
+ root@nbp:~/go/src/github.com/opensds/nbp/opensds-provisioner/examples# kubectl get pod
+ NAME READY STATUS RESTARTS AGE
+ busy-pod 1/1 Running 0 49m
+ opensds-provisioner 1/1 Running 0 50m
+ ```
Execute `findmnt | grep opensds` to confirm whether the volume has been provisioned.
+  If something goes wrong, you can check the log files in the `/var/log/opensds` directory.
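A minimal verification and troubleshooting sequence, assuming the default log directory mentioned above (log file names may differ by release):
```bash
# Confirm the volume shows up as a mount on the node.
findmnt | grep opensds

# If it does not, inspect the most recent OpenSDS logs.
tail -n 50 /var/log/opensds/*.log
```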
## Clean up steps ##
@@ -107,4 +205,4 @@ kubectl delete -f pod-provisioner.yaml kubectl delete -f clusterrolebinding.yaml
kubectl delete -f clusterrole.yaml
kubectl delete -f serviceaccount.yaml
-```
\ No newline at end of file
+```
diff --git a/tutorials/stor4nfv-only-scenario.md b/tutorials/stor4nfv-only-scenario.md new file mode 100644 index 0000000..3b097ad --- /dev/null +++ b/tutorials/stor4nfv-only-scenario.md @@ -0,0 +1,166 @@
+## 1. How to install an opensds local cluster
+### Pre-config (Ubuntu 16.04)
+All the installation work has been tested on `Ubuntu 16.04`, so please make sure you have installed the right one. Also, running as the `root` user is suggested before the installation work starts.
+
+* packages
+
+Install the following packages:
+```bash
+apt-get install -y git curl wget
+```
+* docker
+
+Install docker:
+```bash
+wget https://download.docker.com/linux/ubuntu/dists/xenial/pool/stable/amd64/docker-ce_18.03.1~ce-0~ubuntu_amd64.deb
+dpkg -i docker-ce_18.03.1~ce-0~ubuntu_amd64.deb
+```
+* golang
+
+Check golang version information:
+```bash
+root@proxy:~# go version
+go version go1.9.2 linux/amd64
+```
+You can install golang by executing the commands below:
+```bash
+wget https://storage.googleapis.com/golang/go1.9.2.linux-amd64.tar.gz
+tar -C /usr/local -xzf go1.9.2.linux-amd64.tar.gz
+echo 'export PATH=$PATH:/usr/local/go/bin' >> /etc/profile
+echo 'export GOPATH=$HOME/gopath' >> /etc/profile
+source /etc/profile
+```
+
+### Download opensds-installer code
+```bash
+git clone https://gerrit.opnfv.org/gerrit/stor4nfv
+cd stor4nfv/ci/ansible
+```
+
+### Install ansible tool
+To install ansible, run the commands below:
+```bash
+# This step is needed to upgrade ansible to version 2.4.2 which is required for the "include_tasks" ansible command.
+chmod +x ./install_ansible.sh && ./install_ansible.sh
+ansible --version # Ansible version 2.4.x is required.
+```
+
+### Configure opensds cluster variables:
+##### System environment:
+If you want to integrate stor4nfv with k8s csi, please modify `nbp_plugin_type` to `csi` and also change the `opensds_endpoint` field in `group_vars/common.yml`:
+```yaml
+# 'hotpot_only' is the default integration way, but you can change it to 'csi'
+# or 'flexvolume'
+nbp_plugin_type: hotpot_only
+# The IP (127.0.0.1) should be replaced with the opensds actual endpoint IP
+opensds_endpoint: http://127.0.0.1:50040
+```
+
+##### LVM
+If `lvm` is chosen as the storage backend, modify `group_vars/osdsdock.yml`:
+```yaml
+enabled_backend: lvm
+```
+
+Modify ```group_vars/lvm/lvm.yaml```, change `tgtBindIp` to your real host IP if needed:
+```yaml
+tgtBindIp: 127.0.0.1
+```
+
+##### Ceph
+If `ceph` is chosen as the storage backend, modify `group_vars/osdsdock.yml`:
+```yaml
+enabled_backend: ceph # Change it according to the chosen backend. Supported backends include 'lvm', 'ceph', and 'cinder'.
+```
+
+Configure ```group_vars/ceph/all.yml``` with the example below:
+```yml
+ceph_origin: repository
+ceph_repository: community
+ceph_stable_release: luminous # Choose luminous as default version
+public_network: "192.168.3.0/24" # Run 'ip -4 address' to check the ip address
+cluster_network: "{{ public_network }}"
+monitor_interface: eth1 # Change to the network interface on the target machine
+devices: # For ceph devices, append ONE or MULTIPLE devices like the example below:
+  - '/dev/sda' # Ensure this device exists and is available if ceph is chosen
+  #- '/dev/sdb' # Ensure this device exists and is available if ceph is chosen
+osd_scenario: collocated
+```
+
+##### Cinder
+If `cinder` is chosen as the storage backend, modify `group_vars/osdsdock.yml`:
+```yaml
+enabled_backend: cinder # Change it according to the chosen backend. Supported backends include 'lvm', 'ceph', and 'cinder'
+
+# Use block-box install cinder_standalone if true, see details in:
+use_cinder_standalone: true
+```
+
+Configure the auth and pool options to access cinder in `group_vars/cinder/cinder.yaml`. No additional configuration changes are needed if using cinder standalone.
+
+### Check if the hosts can be reached
+```bash
+ansible all -m ping -i local.hosts
+```
+
+### Run opensds-ansible playbook to start deployment
+```bash
+ansible-playbook site.yml -i local.hosts
+```
+
+## 2. How to test opensds cluster
+### OpenSDS CLI
+First, configure the opensds CLI tool:
+```bash
+sudo cp /opt/opensds-linux-amd64/bin/osdsctl /usr/local/bin/
+export OPENSDS_ENDPOINT=http://{your_real_host_ip}:50040
+export OPENSDS_AUTH_STRATEGY=keystone
+source /opt/stack/devstack/openrc admin admin
+
+osdsctl pool list # Check if the pool resource is available
+```
+
+Then create a default profile:
+```
+osdsctl profile create '{"name": "default", "description": "default policy"}'
+```
+
+Create a volume:
+```
+osdsctl volume create 1 --name=test-001
+```
+
+List all volumes:
+```
+osdsctl volume list
+```
+
+Delete the volume:
+```
+osdsctl volume delete <your_volume_id>
+```
+
+### OpenSDS UI
+The OpenSDS UI dashboard is available at `http://{your_host_ip}:8088`; please log in to the dashboard using the default admin credentials: `admin/opensds@123`. Create tenants, users, and profiles as admin.
+
+Log out of the dashboard as admin and log in again as a non-admin user to create volumes, snapshots, expand volumes, create volumes from snapshots, and create volume groups.
+
+## 3. How to purge and clean opensds cluster
+
+### Run opensds-ansible playbook to clean the environment
+```bash
+ansible-playbook clean.yml -i local.hosts
+```
+
+### Run ceph-ansible playbook to clean ceph cluster if ceph is deployed
+```bash
+cd /opt/ceph-ansible
+sudo ansible-playbook infrastructure-playbooks/purge-cluster.yml -i ceph.hosts
+```
+
+In addition, clean up the logical partition on the physical block device used by ceph, using the ```fdisk``` tool.
+
+### Remove ceph-ansible source code (optional)
+```bash
+sudo rm -rf /opt/ceph-ansible
+```
diff --git a/tutorials/stor4nfv-openstack-scenario.md b/tutorials/stor4nfv-openstack-scenario.md new file mode 100644 index 0000000..2b399ef --- /dev/null +++ b/tutorials/stor4nfv-openstack-scenario.md @@ -0,0 +1,120 @@
+# OpenSDS Integration with OpenStack on Ubuntu
+
+All the installation work has been tested on `Ubuntu 16.04`, so please make sure you have
+installed the right one.
+
+## Environment Preparation
+
+* OpenStack (assumed to be deployed already)
+```shell
+openstack endpoint list # Check the endpoint of the killed cinder service
+```
+
+* packages
+
+Install the following packages:
+```bash
+apt-get install -y git curl wget
+```
+* docker
+
+Install docker:
+```bash
+wget https://download.docker.com/linux/ubuntu/dists/xenial/pool/stable/amd64/docker-ce_18.03.1~ce-0~ubuntu_amd64.deb
+dpkg -i docker-ce_18.03.1~ce-0~ubuntu_amd64.deb
+```
+* golang
+
+Check golang version information:
+```bash
+root@proxy:~# go version
+go version go1.9.2 linux/amd64
+```
+You can install golang by executing the commands below:
+```bash
+wget https://storage.googleapis.com/golang/go1.9.2.linux-amd64.tar.gz
+tar -C /usr/local -xzf go1.9.2.linux-amd64.tar.gz
+echo 'export PATH=$PATH:/usr/local/go/bin' >> /etc/profile
+echo 'export GOPATH=$HOME/gopath' >> /etc/profile
+source /etc/profile
+```
+
+## Start deployment
+### Download opensds-installer code
+```bash
+git clone https://gerrit.opnfv.org/gerrit/stor4nfv
+cd stor4nfv/ci/ansible
+```
+
+### Install ansible tool
+To install ansible, run the commands below:
+```bash
+# This step is needed to upgrade ansible to version 2.4.2 which is required for the "include_tasks" ansible command.
+chmod +x ./install_ansible.sh && ./install_ansible.sh
+ansible --version # Ansible version 2.4.x is required.
+```
+
+### Configure opensds cluster variables:
+##### System environment:
+Change the `opensds_endpoint` field in `group_vars/common.yml`:
+```yaml
+# The IP (127.0.0.1) should be replaced with the opensds actual endpoint IP
+opensds_endpoint: http://127.0.0.1:50040
+```
+
+Change the `opensds_auth_strategy` field to `noauth` in `group_vars/auth.yml`:
+```yaml
+# OpenSDS authentication strategy, support 'noauth' and 'keystone'.
+opensds_auth_strategy: noauth
+```
+
+##### Ceph
+If `ceph` is chosen as the storage backend, modify `group_vars/osdsdock.yml`:
+```yaml
+enabled_backend: ceph # Change it according to the chosen backend. Supported backends include 'lvm', 'ceph', and 'cinder'.
+```
+
+Configure ```group_vars/ceph/all.yml``` with the example below:
+```yml
+ceph_origin: repository
+ceph_repository: community
+ceph_stable_release: luminous # Choose luminous as default version
+public_network: "192.168.3.0/24" # Run 'ip -4 address' to check the ip address
+cluster_network: "{{ public_network }}"
+monitor_interface: eth1 # Change to the network interface on the target machine
+devices: # For ceph devices, append ONE or MULTIPLE devices like the example below:
+  - '/dev/sda' # Ensure this device exists and is available if ceph is chosen
+  #- '/dev/sdb' # Ensure this device exists and is available if ceph is chosen
+osd_scenario: collocated
+```
+
+### Check if the hosts can be reached
+```bash
+ansible all -m ping -i local.hosts
+```
+
+### Run opensds-ansible playbook to start deployment
+```bash
+ansible-playbook site.yml -i local.hosts
+```
+
+Next, build and run the cindercompatibleapi module:
+```shell
+cd $GOPATH/src/github.com/opensds/opensds
+go build -o ./build/out/bin/cindercompatibleapi github.com/opensds/opensds/contrib/cindercompatibleapi
+```
+
+## Test
+```shell
+export CINDER_ENDPOINT=http://10.10.3.173:8776/v3 # Use endpoint shown above
+export OPENSDS_ENDPOINT=http://127.0.0.1:50040
+
+./build/out/bin/cindercompatibleapi
+```
+
+Then you can execute some cinder CLI commands to see if the result is correct;
+for example, if you execute the command `cinder type-list`, the result will show
+the profiles of opensds.
+
+For detailed test instructions, please refer to section 5.3 of the
+[OpenSDS Aruba PoC Plan](https://github.com/opensds/opensds/blob/development/docs/test-plans/OpenSDS_Aruba_POC_Plan.pdf).