author    leonwang <wanghui71@huawei.com>    2018-07-20 14:55:21 +0800
committer leonwang <wanghui71@huawei.com>    2018-07-20 14:55:31 +0800
commit    078bb837513f3b83fdd07f2e10f9abeb0bd485db (patch)
tree      9a87c67bfe3ef12683f0c6d2f8b833d6d942b155 /tutorials
parent    63ff6c6ec9ebbca90ac7304a27c0430dbcecb74f (diff)
Update stor4nfv install scripts according to opensds aruba release
Since OpenSDS has published its Aruba release and now also supports the OpenStack scenario, this patch updates the stor4nfv code to prepare for integrating the Compass4NFV and Apex installers in the OpenStack scenario. Besides the large code changes, some tutorial docs are also added or updated to make installing this project easier.

Change-Id: I1e4fec652c6c860028ef39448bc323839f3aa95c
Signed-off-by: leonwang <wanghui71@huawei.com>
Diffstat (limited to 'tutorials')
-rw-r--r--  tutorials/csi-plugin.md                   |  39
-rw-r--r--  tutorials/flexvolume-plugin.md            | 136
-rw-r--r--  tutorials/stor4nfv-only-scenario.md       | 166
-rw-r--r--  tutorials/stor4nfv-openstack-scenario.md  | 120
4 files changed, 407 insertions(+), 54 deletions(-)
diff --git a/tutorials/csi-plugin.md b/tutorials/csi-plugin.md
index 9750791..997f2d5 100644
--- a/tutorials/csi-plugin.md
+++ b/tutorials/csi-plugin.md
@@ -31,60 +31,29 @@
```
### [kubernetes](https://github.com/kubernetes/kubernetes) local cluster
-* You can startup the v1.9.0 k8s local cluster by executing commands blow:
+* You can start up a `v1.10.0` k8s local cluster by executing the commands below:
```
cd $HOME
git clone https://github.com/kubernetes/kubernetes.git
cd $HOME/kubernetes
- git checkout v1.9.0
+ git checkout v1.10.0
make
echo alias kubectl='$HOME/kubernetes/cluster/kubectl.sh' >> /etc/profile
ALLOW_PRIVILEGED=true FEATURE_GATES=CSIPersistentVolume=true,MountPropagation=true RUNTIME_CONFIG="storage.k8s.io/v1alpha1=true" LOG_LEVEL=5 hack/local-up-cluster.sh
```
### [opensds](https://github.com/opensds/opensds) local cluster
-* For testing purposes you can deploy OpenSDS refering to ```ansible/README.md```.
+* For testing purposes you can deploy OpenSDS by referring to [OpenSDS Cluster Installation through Ansible](https://github.com/opensds/opensds/wiki/OpenSDS-Cluster-Installation-through-Ansible).
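+
+ A minimal sketch of that flow (using the same ansible installer that the stor4nfv tutorials in this repo rely on; see the wiki for the full configuration steps):
+
+ ```bash
+ git clone https://gerrit.opnfv.org/gerrit/stor4nfv
+ cd stor4nfv/ci/ansible
+ chmod +x ./install_ansible.sh && ./install_ansible.sh
+ ansible-playbook site.yml -i local.hosts
+ ```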
## Testing steps ##
* Change the working directory
```
- cd /opt/opensds-k8s-v0.1.0-linux-amd64
+ cd /opt/opensds-k8s-linux-amd64
```
-* Configure opensds endpoint IP
-
- ```
- vim csi/deploy/kubernetes/csi-configmap-opensdsplugin.yaml
- ```
-
- The IP (127.0.0.1) should be replaced with the opensds actual endpoint IP.
- ```yaml
- kind: ConfigMap
- apiVersion: v1
- metadata:
- name: csi-configmap-opensdsplugin
- data:
- opensdsendpoint: http://127.0.0.1:50040
- ```
-
-* Create opensds CSI pods.
-
- ```
- kubectl create -f csi/deploy/kubernetes
- ```
-
- After this three pods can be found by ```kubectl get pods``` like below:
-
- - csi-provisioner-opensdsplugin
- - csi-attacher-opensdsplugin
- - csi-nodeplugin-opensdsplugin
-
- You can find more design details from
- [CSI Volume Plugins in Kubernetes Design Doc](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/storage/container-storage-interface.md)
-
* Create example nginx application
```
diff --git a/tutorials/flexvolume-plugin.md b/tutorials/flexvolume-plugin.md
index 269da4b..cb90316 100644
--- a/tutorials/flexvolume-plugin.md
+++ b/tutorials/flexvolume-plugin.md
@@ -1,17 +1,15 @@
## Prerequisite ##
-
### ubuntu
* Version information
- ```
+ ```bash
root@proxy:~# cat /etc/issue
Ubuntu 16.04.2 LTS \n \l
```
-
### docker
* Version information
- ```
+ ```bash
root@proxy:~# docker version
Client:
Version: 1.12.6
@@ -20,7 +18,7 @@
Git commit: 78d1802
Built: Tue Jan 31 23:35:14 2017
OS/Arch: linux/amd64
-
+
Server:
Version: 1.12.6
API version: 1.24
@@ -30,16 +28,33 @@
OS/Arch: linux/amd64
```
-### [kubernetes](https://github.com/kubernetes/kubernetes) local cluster
+### [golang](https://redirector.gvt1.com/edgedl/go/go1.9.2.linux-amd64.tar.gz)
* Version information
+
+ ```bash
+ root@proxy:~# go version
+ go version go1.9.2 linux/amd64
```
+
+* You can install golang by executing the commands below:
+
+ ```bash
+ wget https://storage.googleapis.com/golang/go1.9.2.linux-amd64.tar.gz
+ tar -C /usr/local -xzf go1.9.2.linux-amd64.tar.gz
+ export PATH=$PATH:/usr/local/go/bin
+ export GOPATH=$HOME/gopath
+ ```
+
+### [kubernetes](https://github.com/kubernetes/kubernetes) local cluster
+* Version information
+ ```bash
root@proxy:~# kubectl version
Client Version: version.Info{Major:"1", Minor:"9+", GitVersion:"v1.9.0-beta.0-dirty", GitCommit:"a0fb3baa71f1559fd42d1acd9cbdd8a55ab4dfff", GitTreeState:"dirty", BuildDate:"2017-12-13T09:22:09Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"9+", GitVersion:"v1.9.0-beta.0-dirty", GitCommit:"a0fb3baa71f1559fd42d1acd9cbdd8a55ab4dfff", GitTreeState:"dirty", BuildDate:"2017-12-13T09:22:09Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}
```
* You can start up the k8s local cluster by executing the commands below:
- ```
+ ```bash
cd $HOME
git clone https://github.com/kubernetes/kubernetes.git
cd $HOME/kubernetes
@@ -48,23 +63,71 @@
echo alias kubectl='$HOME/kubernetes/cluster/kubectl.sh' >> /etc/profile
RUNTIME_CONFIG=settings.k8s.io/v1alpha1=true AUTHORIZATION_MODE=Node,RBAC hack/local-up-cluster.sh -O
```
-
+**NOTE**:
+<div> OpenSDS uses etcd as its database, just like kubernetes, so you should start up kubernetes first.
+</div>
### [opensds](https://github.com/opensds/opensds) local cluster
-* For testing purposes you can deploy OpenSDS local cluster referring to ```ansible/README.md```.
+* For testing purposes you can deploy OpenSDS by referring to the [Local Cluster Installation with LVM](https://github.com/opensds/opensds/wiki/Local-Cluster-Installation-with-LVM) wiki.
## Testing steps ##
+* Load the environment variables that were set earlier.
-* Create service account, role and bind them.
+ ```bash
+ source /etc/profile
+ ```
+* Download nbp source code.
+
+ Using git clone:
+ ```bash
+ git clone https://github.com/opensds/nbp.git $GOPATH/src/github.com/opensds/nbp
+ ```
+
+ Or using go get:
+ ```bash
+ go get -v github.com/opensds/nbp/...
+ ```
+
+* Build the FlexVolume.
+
+ ```bash
+ cd $GOPATH/src/github.com/opensds/nbp/flexvolume
+ go build -o opensds ./cmd/flex-plugin/
```
- cd /opt/opensds-k8s-{release version}-linux-amd64/provisioner
+
+ The FlexVolume plugin binary is now in the current directory.
+
+
+* Copy the OpenSDS FlexVolume binary file to the k8s kubelet `volume-plugin-dir`.
+ If you don't specify the `volume-plugin-dir`, you can execute the commands below:
+
+ ```bash
+ mkdir -p /usr/libexec/kubernetes/kubelet-plugins/volume/exec/opensds.io~opensds/
+ cp $GOPATH/src/github.com/opensds/nbp/flexvolume/opensds /usr/libexec/kubernetes/kubelet-plugins/volume/exec/opensds.io~opensds/
+ ```
+
+ **NOTE**:
+ <div>
+ The OpenSDS FlexVolume plugin reads the opensds api endpoint from the environment variable `OPENSDS_ENDPOINT`; if you don't set it, the plugin falls back to the default value `http://127.0.0.1:50040`. To specify the endpoint, execute `export OPENSDS_ENDPOINT=http://ip:50040` and restart the k8s local cluster.
+</div>
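+
+ For example, assuming the osdslet endpoint IP used elsewhere in this tutorial (replace it with your own):
+
+ ```bash
+ export OPENSDS_ENDPOINT=http://192.168.56.106:50040
+ ```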
+
+* Build the provisioner docker image.
+
+ ```bash
+ cd $GOPATH/src/github.com/opensds/nbp/opensds-provisioner
+ make container
+ ```
+
+* Create service account, role and bind them.
+ ```bash
+ cd $GOPATH/src/github.com/opensds/nbp/opensds-provisioner/examples
kubectl create -f serviceaccount.yaml
kubectl create -f clusterrole.yaml
kubectl create -f clusterrolebinding.yaml
```
-* Change the opensds endpoint IP in pod-provisioner.yaml
-The IP ```192.168.56.106``` should be replaced with the OpenSDS osdslet actual endpoint IP.
+* Change the opensds endpoint IP in pod-provisioner.yaml
+The IP (192.168.56.106) should be replaced with the actual OpenSDS osdslet endpoint IP.
```yaml
kind: Pod
apiVersion: v1
@@ -74,7 +137,7 @@ The IP ```192.168.56.106``` should be replaced with the OpenSDS osdslet actual e
serviceAccount: opensds-provisioner
containers:
- name: opensds-provisioner
- image: opensdsio/opensds-provisioner:latest
+ image: opensdsio/opensds-provisioner
securityContext:
args:
- "-endpoint=http://192.168.56.106:50040" # should be replaced
@@ -82,19 +145,54 @@ The IP ```192.168.56.106``` should be replaced with the OpenSDS osdslet actual e
```
* Create provisioner pod.
- ```
+ ```bash
kubectl create -f pod-provisioner.yaml
```
-
+
+ Execute `kubectl get pod` to check if the opensds-provisioner is ok.
+ ```bash
+ root@nbp:~/go/src/github.com/opensds/nbp/opensds-provisioner/examples# kubectl get pod
+ NAME READY STATUS RESTARTS AGE
+ opensds-provisioner 1/1 Running 0 42m
+ ```
* You can use the following commands to test the OpenSDS FlexVolume and Provisioner functions.
- ```
+ Create storage class.
+ ```bash
kubectl create -f sc.yaml # Create StorageClass
+ ```
+ Execute `kubectl get sc` to check if the storage class is ok.
+ ```bash
+ root@nbp:~/go/src/github.com/opensds/nbp/opensds-provisioner/examples# kubectl get sc
+ NAME PROVISIONER AGE
+ opensds opensds/nbp-provisioner 46m
+ standard (default) kubernetes.io/host-path 49m
+ ```
+ Create PVC.
+ ```bash
kubectl create -f pvc.yaml # Create PVC
- kubectl create -f pod-application.yaml # Create busybox pod and mount the block storage.
```
+ Execute `kubectl get pvc` to check if the pvc is ok.
+ ```bash
+ root@nbp:~/go/src/github.com/opensds/nbp/opensds-provisioner/examples# kubectl get pvc
+ NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
+ opensds-pvc Bound 731da41e-c9ee-4180-8fb3-d1f6c7f65378 1Gi RWO opensds 48m
+ ```
+ Create busybox pod.
+
+ ```bash
+ kubectl create -f pod-application.yaml # Create busybox pod and mount the block storage.
+ ```
+ Execute `kubectl get pod` to check if the busybox pod is ok.
+ ```bash
+ root@nbp:~/go/src/github.com/opensds/nbp/opensds-provisioner/examples# kubectl get pod
+ NAME READY STATUS RESTARTS AGE
+ busy-pod 1/1 Running 0 49m
+ opensds-provisioner 1/1 Running 0 50m
+ ```
Execute `findmnt | grep opensds` to confirm whether the volume has been provisioned.
+ If something goes wrong, you can check the log files in the directory `/var/log/opensds`.
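+
+ For example, the checks might look like this (the exact log file names may vary):
+
+ ```bash
+ findmnt | grep opensds              # the provisioned volume should appear here
+ tail -n 50 /var/log/opensds/*.log   # inspect recent log entries on failure
+ ```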
## Clean up steps ##
@@ -107,4 +205,4 @@ kubectl delete -f pod-provisioner.yaml
kubectl delete -f clusterrolebinding.yaml
kubectl delete -f clusterrole.yaml
kubectl delete -f serviceaccount.yaml
-``` \ No newline at end of file
+```
diff --git a/tutorials/stor4nfv-only-scenario.md b/tutorials/stor4nfv-only-scenario.md
new file mode 100644
index 0000000..3b097ad
--- /dev/null
+++ b/tutorials/stor4nfv-only-scenario.md
@@ -0,0 +1,166 @@
+## 1. How to install an opensds local cluster
+### Pre-config (Ubuntu 16.04)
+All the installation work has been tested on `Ubuntu 16.04`; please make sure you have installed the right version. Running as the `root` user is also suggested before starting the installation.
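+
+For example, you can verify the distribution and switch to `root` before starting:
+
+```bash
+cat /etc/issue   # should report Ubuntu 16.04.x LTS
+sudo -i          # continue the installation as the root user
+```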
+
+* packages
+
+Install the following packages:
+```bash
+apt-get install -y git curl wget
+```
+* docker
+
+Install docker:
+```bash
+wget https://download.docker.com/linux/ubuntu/dists/xenial/pool/stable/amd64/docker-ce_18.03.1~ce-0~ubuntu_amd64.deb
+dpkg -i docker-ce_18.03.1~ce-0~ubuntu_amd64.deb
+```
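+
+You can verify the installation afterwards:
+```bash
+docker version   # both client and server versions should be reported
+```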
+* golang
+
+Check golang version information:
+```bash
+root@proxy:~# go version
+go version go1.9.2 linux/amd64
+```
+You can install golang by executing the commands below:
+```bash
+wget https://storage.googleapis.com/golang/go1.9.2.linux-amd64.tar.gz
+tar -C /usr/local -xzf go1.9.2.linux-amd64.tar.gz
+echo 'export PATH=$PATH:/usr/local/go/bin' >> /etc/profile
+echo 'export GOPATH=$HOME/gopath' >> /etc/profile
+source /etc/profile
+```
+
+### Download opensds-installer code
+```bash
+git clone https://gerrit.opnfv.org/gerrit/stor4nfv
+cd stor4nfv/ci/ansible
+```
+
+### Install ansible tool
+To install ansible, run the commands below:
+```bash
+# This step is needed to upgrade ansible to version 2.4.2, which is required for the "include_tasks" ansible command.
+chmod +x ./install_ansible.sh && ./install_ansible.sh
+ansible --version # Ansible version 2.4.x is required.
+```
+
+### Configure opensds cluster variables:
+##### System environment:
+If you want to integrate stor4nfv with k8s csi, please modify `nbp_plugin_type` to `csi` and also change the `opensds_endpoint` field in `group_vars/common.yml`:
+```yaml
+# 'hotpot_only' is the default integration way, but you can change it to 'csi'
+# or 'flexvolume'
+nbp_plugin_type: hotpot_only
+# The IP (127.0.0.1) should be replaced with the opensds actual endpoint IP
+opensds_endpoint: http://127.0.0.1:50040
+```
+
+##### LVM
+If `lvm` is chosen as storage backend, modify `group_vars/osdsdock.yml`:
+```yaml
+enabled_backend: lvm
+```
+
+Modify `group_vars/lvm/lvm.yaml` and change `tgtBindIp` to your real host IP if needed:
+```yaml
+tgtBindIp: 127.0.0.1
+```
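+
+To find the host IP to use, you can, for example, run:
+```bash
+ip -4 address   # pick the address of the interface that will serve iscsi traffic
+```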
+
+##### Ceph
+If `ceph` is chosen as storage backend, modify `group_vars/osdsdock.yml`:
+```yaml
+enabled_backend: ceph # Change it according to the chosen backend. Supported backends include 'lvm', 'ceph', and 'cinder'.
+```
+
+Configure `group_vars/ceph/all.yml` following the example below:
+```yml
+ceph_origin: repository
+ceph_repository: community
+ceph_stable_release: luminous # Choose luminous as default version
+public_network: "192.168.3.0/24" # Run 'ip -4 address' to check the ip address
+cluster_network: "{{ public_network }}"
+monitor_interface: eth1 # Change to the network interface on the target machine
+devices: # For ceph devices, append ONE or MULTIPLE devices like the example below:
+ - '/dev/sda' # Ensure this device exists and is available if ceph is chosen
+ #- '/dev/sdb' # Ensure this device exists and is available if ceph is chosen
+osd_scenario: collocated
+```
+
+##### Cinder
+If `cinder` is chosen as storage backend, modify `group_vars/osdsdock.yml`:
+```yaml
+enabled_backend: cinder # Change it according to the chosen backend. Supported backends include 'lvm', 'ceph', and 'cinder'
+
+# Use block-box to install cinder_standalone if true, see details in:
+use_cinder_standalone: true
+```
+
+Configure the auth and pool options for accessing cinder in `group_vars/cinder/cinder.yaml`. No additional configuration changes are needed if you are using cinder standalone.
+
+### Check if the hosts can be reached
+```bash
+ansible all -m ping -i local.hosts
+```
+
+### Run opensds-ansible playbook to start the deployment
+```bash
+ansible-playbook site.yml -i local.hosts
+```
+
+## 2. How to test opensds cluster
+### OpenSDS CLI
+Firstly configure opensds CLI tool:
+```bash
+sudo cp /opt/opensds-linux-amd64/bin/osdsctl /usr/local/bin/
+export OPENSDS_ENDPOINT=http://{your_real_host_ip}:50040
+export OPENSDS_AUTH_STRATEGY=keystone
+source /opt/stack/devstack/openrc admin admin
+
+osdsctl pool list # Check if the pool resource is available
+```
+
+Then create a default profile:
+```
+osdsctl profile create '{"name": "default", "description": "default policy"}'
+```
+
+Create a volume:
+```
+osdsctl volume create 1 --name=test-001
+```
+
+List all volumes:
+```
+osdsctl volume list
+```
+
+Delete the volume:
+```
+osdsctl volume delete <your_volume_id>
+```
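+
+Listing the volumes again should confirm the deletion:
+```
+osdsctl volume list
+```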
+
+### OpenSDS UI
+The OpenSDS UI dashboard is available at `http://{your_host_ip}:8088`; please log in to the dashboard using the default admin credentials: `admin/opensds@123`. Create tenants, users, and profiles as admin.
+
+Log out of the dashboard as admin and log in again as a non-admin user to create a volume or snapshot, expand a volume, create a volume from a snapshot, or create a volume group.
+
+## 3. How to purge and clean opensds cluster
+
+### Run opensds-ansible playbook to clean the environment
+```bash
+ansible-playbook clean.yml -i local.hosts
+```
+
+### Run ceph-ansible playbook to clean ceph cluster if ceph is deployed
+```bash
+cd /opt/ceph-ansible
+sudo ansible-playbook infrastructure-playbooks/purge-cluster.yml -i ceph.hosts
+```
+
+In addition, clean up the logical partitions on the physical block device used by ceph, using the `fdisk` tool.
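+
+For example, assuming `/dev/sda` was the ceph device configured above (adjust to your environment):
+```bash
+fdisk /dev/sda   # use 'd' to delete each ceph partition, then 'w' to write the table
+```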
+
+### Remove ceph-ansible source code (optional)
+```bash
+sudo rm -rf /opt/ceph-ansible
+```
diff --git a/tutorials/stor4nfv-openstack-scenario.md b/tutorials/stor4nfv-openstack-scenario.md
new file mode 100644
index 0000000..2b399ef
--- /dev/null
+++ b/tutorials/stor4nfv-openstack-scenario.md
@@ -0,0 +1,120 @@
+# OpenSDS Integration with OpenStack on Ubuntu
+
+All the installation work has been tested on `Ubuntu 16.04`; please make sure you have
+installed the right version.
+
+## Environment Prepare
+
+* OpenStack (assumed to be already deployed)
+```shell
+openstack endpoint list # Check the endpoint of the stopped cinder service
+```
+
+* packages
+
+Install the following packages:
+```bash
+apt-get install -y git curl wget
+```
+* docker
+
+Install docker:
+```bash
+wget https://download.docker.com/linux/ubuntu/dists/xenial/pool/stable/amd64/docker-ce_18.03.1~ce-0~ubuntu_amd64.deb
+dpkg -i docker-ce_18.03.1~ce-0~ubuntu_amd64.deb
+```
+* golang
+
+Check golang version information:
+```bash
+root@proxy:~# go version
+go version go1.9.2 linux/amd64
+```
+You can install golang by executing the commands below:
+```bash
+wget https://storage.googleapis.com/golang/go1.9.2.linux-amd64.tar.gz
+tar -C /usr/local -xzf go1.9.2.linux-amd64.tar.gz
+echo 'export PATH=$PATH:/usr/local/go/bin' >> /etc/profile
+echo 'export GOPATH=$HOME/gopath' >> /etc/profile
+source /etc/profile
+```
+
+## Start deployment
+### Download opensds-installer code
+```bash
+git clone https://gerrit.opnfv.org/gerrit/stor4nfv
+cd stor4nfv/ci/ansible
+```
+
+### Install ansible tool
+To install ansible, run the commands below:
+```bash
+# This step is needed to upgrade ansible to version 2.4.2, which is required for the "include_tasks" ansible command.
+chmod +x ./install_ansible.sh && ./install_ansible.sh
+ansible --version # Ansible version 2.4.x is required.
+```
+
+### Configure opensds cluster variables:
+##### System environment:
+Change the `opensds_endpoint` field in `group_vars/common.yml`:
+```yaml
+# The IP (127.0.0.1) should be replaced with the opensds actual endpoint IP
+opensds_endpoint: http://127.0.0.1:50040
+```
+
+Change the `opensds_auth_strategy` field to `noauth` in `group_vars/auth.yml`:
+```yaml
+# OpenSDS authentication strategy, support 'noauth' and 'keystone'.
+opensds_auth_strategy: noauth
+```
+
+##### Ceph
+If `ceph` is chosen as storage backend, modify `group_vars/osdsdock.yml`:
+```yaml
+enabled_backend: ceph # Change it according to the chosen backend. Supported backends include 'lvm', 'ceph', and 'cinder'.
+```
+
+Configure `group_vars/ceph/all.yml` following the example below:
+```yml
+ceph_origin: repository
+ceph_repository: community
+ceph_stable_release: luminous # Choose luminous as default version
+public_network: "192.168.3.0/24" # Run 'ip -4 address' to check the ip address
+cluster_network: "{{ public_network }}"
+monitor_interface: eth1 # Change to the network interface on the target machine
+devices: # For ceph devices, append ONE or MULTIPLE devices like the example below:
+ - '/dev/sda' # Ensure this device exists and is available if ceph is chosen
+ #- '/dev/sdb' # Ensure this device exists and is available if ceph is chosen
+osd_scenario: collocated
+```
+
+### Check if the hosts can be reached
+```bash
+ansible all -m ping -i local.hosts
+```
+
+### Run opensds-ansible playbook to start the deployment
+```bash
+ansible-playbook site.yml -i local.hosts
+```
+
+Next, build and run the cindercompatibleapi module:
+```shell
+cd $GOPATH/src/github.com/opensds/opensds
+go build -o ./build/out/bin/cindercompatibleapi github.com/opensds/opensds/contrib/cindercompatibleapi
+```
+
+## Test
+```shell
+export CINDER_ENDPOINT=http://10.10.3.173:8776/v3 # Use endpoint shown above
+export OPENSDS_ENDPOINT=http://127.0.0.1:50040
+
+./build/out/bin/cindercompatibleapi
+```
+
+Then you can execute some cinder CLI commands to see if the result is correct;
+for example, if you execute the command `cinder type-list`, the result will show
+the profiles of opensds.
+
+For detailed test instructions, please refer to section 5.3 of the
+[OpenSDS Aruba PoC Plan](https://github.com/opensds/opensds/blob/development/docs/test-plans/OpenSDS_Aruba_POC_Plan.pdf).