-rw-r--r--  docs/images/arch-layout-k8s-ha.png                                  bin 0 -> 56989 bytes
-rw-r--r--  docs/images/arch-layout-k8s-noha.png                                bin 0 -> 268126 bytes
-rw-r--r--  docs/xci-user-guide.rst                                             148
-rwxr-xr-x  xci/installer/kubespray/deploy.sh                                   43
-rw-r--r--  xci/installer/kubespray/files/k8s-cluster.yml                       7
-rw-r--r--  xci/installer/kubespray/playbooks/configure-opnfvhost.yml           7
-rw-r--r--  xci/installer/osa/files/ha/inventory                                4
-rw-r--r--  xci/installer/osa/files/mini/inventory                              4
-rw-r--r--  xci/installer/osa/files/noha/inventory                              4
-rw-r--r--  xci/installer/osa/playbooks/configure-opnfvhost.yml                 15
-rw-r--r--  xci/installer/osa/playbooks/configure-targethosts.yml               42
-rw-r--r--  xci/playbooks/configure-localhost.yml                               5
-rw-r--r--  xci/playbooks/roles/.gitignore                                      7
-rw-r--r--  xci/playbooks/roles/prepare-functest/templates/env.j2               2
-rw-r--r--  xci/playbooks/roles/prepare-functest/templates/run-functest.sh.j2   24
-rwxr-xr-x  xci/scripts/vm/start-new-vm.sh                                      6
16 files changed, 225 insertions(+), 93 deletions(-)
diff --git a/docs/images/arch-layout-k8s-ha.png b/docs/images/arch-layout-k8s-ha.png
new file mode 100644
index 00000000..e0870305
--- /dev/null
+++ b/docs/images/arch-layout-k8s-ha.png
Binary files differ
diff --git a/docs/images/arch-layout-k8s-noha.png b/docs/images/arch-layout-k8s-noha.png
new file mode 100644
index 00000000..0ee8bceb
--- /dev/null
+++ b/docs/images/arch-layout-k8s-noha.png
Binary files differ
diff --git a/docs/xci-user-guide.rst b/docs/xci-user-guide.rst
index 7a411257..8f506fc4 100644
--- a/docs/xci-user-guide.rst
+++ b/docs/xci-user-guide.rst
@@ -41,6 +41,7 @@ The sandbox provides
* multiple OPNFV scenarios to install
* ability to select different versions of upstream components to base the work on
* ability to enable additional OpenStack services or disable others
+* ability to install Kubernetes with different network plugins
One last point to highlight here is that the XCI itself uses the sandbox for
development and test purposes so it is continuously tested to ensure it works
@@ -50,10 +51,11 @@ purposes.
Components of the Sandbox
===================================
-The sandbox uses OpenStack projects for VM node creation, provisioning
-and OpenStack installation. XCI Team provides playbooks, roles, and scripts
-to ensure the components utilized by the sandbox work in a way that serves
-the users in the best possible way.
+The sandbox uses OpenStack tools for VM node creation and provisioning.
+OpenStack and Kubernetes installations are done using the tools from the
+corresponding upstream projects with no changes to them. The XCI Team provides playbooks,
+roles, and scripts to ensure the components utilized by the sandbox
+work in a way that serves the users in the best possible way.
* **openstack/bifrost:** Bifrost (pronounced bye-frost) is a set of Ansible
playbooks that automates the task of deploying a base image onto a set
@@ -70,6 +72,13 @@ the users in the best possible way.
More information about this project can be seen on
`OpenStack Ansible documentation <https://docs.openstack.org/developer/openstack-ansible/>`_.
+* **kubernetes-incubator/kubespray:** Kubespray is a composition of Ansible playbooks,
+ inventory, provisioning tools, and domain knowledge for generic Kubernetes
+ cluster configuration management tasks. The aim of Kubespray is to deploy a
+ production-ready Kubernetes cluster.
+ More information about this project can be seen on
+ `Kubespray documentation <https://kubernetes.io/docs/getting-started-guides/kubespray/>`_.
+
* **opnfv/releng-xci:** OPNFV Releng Project provides additional scripts, Ansible
playbooks and configuration options in order for developers to have an easy
way of using openstack/bifrost and openstack/openstack-ansible by just
@@ -85,29 +94,29 @@ deployed using VM nodes.
Available flavors are listed on the table below.
-+------------------+------------------------+---------------------+-------------------------+
-| Flavor           | Number of VM Nodes     | VM Specs Per Node   | Time Estimates          |
-+==================+========================+=====================+=========================+
-| All in One (aio) | | 1 VM Node            | | vCPUs: 8          | | Provisioning: 10 mins |
-|                  | | controller & compute | | RAM: 12GB         | | Deployment: 90 mins   |
-|                  | | on single/same node  | | Disk: 80GB        | | Total: 100 mins       |
-|                  | | 1 compute node       | | NICs: 1           | |                       |
-+------------------+------------------------+---------------------+-------------------------+
-| Mini             | | 3 VM Nodes           | | vCPUs: 6          | | Provisioning: 12 mins |
-|                  | | 1 deployment node    | | RAM: 12GB         | | Deployment: 65 mins   |
-|                  | | 1 controller node    | | Disk: 80GB        | | Total: 77 mins        |
-|                  | | 1 compute node       | | NICs: 1           | |                       |
-+------------------+------------------------+---------------------+-------------------------+
-| No HA            | | 4 VM Nodes           | | vCPUs: 6          | | Provisioning: 12 mins |
-|                  | | 1 deployment node    | | RAM: 12GB         | | Deployment: 70 mins   |
-|                  | | 1 controller node    | | Disk: 80GB        | | Total: 82 mins        |
-|                  | | 2 compute nodes      | | NICs: 1           | |                       |
-+------------------+------------------------+---------------------+-------------------------+
-| HA               | | 6 VM Nodes           | | vCPUs: 6          | | Provisioning: 15 mins |
-|                  | | 1 deployment node    | | RAM: 12GB         | | Deployment: 105 mins  |
-|                  | | 3 controller nodes   | | Disk: 80GB        | | Total: 120 mins       |
-|                  | | 2 compute nodes      | | NICs: 1           | |                       |
-+------------------+------------------------+---------------------+-------------------------+
++------------------+------------------------+---------------------+--------------------------+--------------------------+
+| Flavor           | Number of VM Nodes     | VM Specs Per Node   | Time Estimates OpenStack | Time Estimates Kubernetes|
++==================+========================+=====================+==========================+==========================+
+| All in One (aio) | | 1 VM Node            | | vCPUs: 8          | | Provisioning: 10 mins  | | Provisioning: 10 mins  |
+|                  | | controller & compute | | RAM: 12GB         | | Deployment: 90 mins    | | Deployment: 30 mins    |
+|                  | | on single/same node  | | Disk: 80GB        | | Total: 100 mins        | | Total: 40 mins         |
+|                  | | 1 compute node       | | NICs: 1           | |                        | |                        |
++------------------+------------------------+---------------------+--------------------------+--------------------------+
+| Mini             | | 3 VM Nodes           | | vCPUs: 6          | | Provisioning: 12 mins  | | Provisioning: 12 mins  |
+|                  | | 1 deployment node    | | RAM: 12GB         | | Deployment: 65 mins    | | Deployment: 35 mins    |
+|                  | | 1 controller node    | | Disk: 80GB        | | Total: 77 mins         | | Total: 47 mins         |
+|                  | | 1 compute node       | | NICs: 1           | |                        | |                        |
++------------------+------------------------+---------------------+--------------------------+--------------------------+
+| No HA            | | 4 VM Nodes           | | vCPUs: 6          | | Provisioning: 12 mins  | | Provisioning: 12 mins  |
+|                  | | 1 deployment node    | | RAM: 12GB         | | Deployment: 70 mins    | | Deployment: 35 mins    |
+|                  | | 1 controller node    | | Disk: 80GB        | | Total: 82 mins         | | Total: 47 mins         |
+|                  | | 2 compute nodes      | | NICs: 1           | |                        | |                        |
++------------------+------------------------+---------------------+--------------------------+--------------------------+
+| HA               | | 6 VM Nodes           | | vCPUs: 6          | | Provisioning: 15 mins  | | Provisioning: 15 mins  |
+|                  | | 1 deployment node    | | RAM: 12GB         | | Deployment: 105 mins   | | Deployment: 40 mins    |
+|                  | | 3 controller nodes   | | Disk: 80GB        | | Total: 120 mins        | | Total: 55 mins         |
+|                  | | 2 compute nodes      | | NICs: 1           | |                        | |                        |
++------------------+------------------------+---------------------+--------------------------+--------------------------+
The specs for VMs are configurable and the more vCPU/RAM the better.
@@ -122,8 +131,8 @@ depending on
* installed/activated OpenStack services
* internet connection bandwidth
-Flavor Layouts
---------------
+Flavor Layouts - OpenStack Based Deployments
+--------------------------------------------
All flavors are created and deployed based on the upstream OpenStack Ansible (OSA)
guidelines.
@@ -165,6 +174,44 @@ flavors.
.. image:: images/arch-layout-test.png
:scale: 75 %
+Flavor Layouts - Kubernetes Based Deployments
+---------------------------------------------
+
+All flavors are created and deployed based on the upstream Kubespray guidelines.
+
+Calico is used as the network plugin by default; flannel, weave, contiv, canal,
+and cilium are also currently supported.
+
+The differences between the flavors are documented below.
+
+**All in One**
+
+As shown in the table in the previous section, this flavor consists of a single
+node. All the Kubernetes services run on the same node, which acts as master
+and worker at the same time.
+
+**Mini/No HA/HA**
+
+These flavors consist of multiple nodes.
+
+* **opnfv**: This node is used for driving the installation towards target nodes
+ in order to ensure the deployment process is isolated from the physical host
+ and always done on a clean machine.
+* **master**: provides the Kubernetes cluster's control plane.
+* **node**: a worker machine in Kubernetes, previously known as a minion.
+
+The HA flavor has 3 master nodes, and a load balancer is set up as part of the
+deployment process. Access to the Kubernetes cluster goes through the load balancer.
+
+Please see the diagrams below for the host and service layout for these
+flavors.
+
+.. image:: images/arch-layout-k8s-noha.png
+ :scale: 75 %
+
+.. image:: images/arch-layout-k8s-ha.png
+ :scale: 75 %
+
User Guide
==========
@@ -200,7 +247,12 @@ How to Use
| ``cd releng-xci/xci``
-4. Execute the sandbox script
+4. If you want to deploy a Kubernetes based scenario, set the variables as below. Otherwise skip this step.
+
+ | ``export INSTALLER_TYPE=kubespray``
+ | ``export DEPLOY_SCENARIO=k8-nosdn-nofeature``
+
+5. Execute the sandbox script
| ``./xci-deploy.sh``
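
Putting steps 1-5 together, a minimal sketch of a Kubernetes deployment from a
fresh clone looks like the following (the scenario value is the one from step 4;
adjust it for other scenarios):

    cd releng-xci/xci
    export INSTALLER_TYPE=kubespray
    export DEPLOY_SCENARIO=k8-nosdn-nofeature
    ./xci-deploy.sh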
@@ -241,8 +293,14 @@ default.
5. Set the version to use for openstack-ansible
+ 1) if deploying an OpenStack based scenario
+
| ``export OPENSTACK_OSA_VERSION=master``
+ 2) if deploying a Kubernetes based scenario
+
+ | ``export KUBESPRAY_VERSION=master``
+
6. Set where the logs should be stored
| ``export LOG_PATH=/home/jenkins/xcilogs``
@@ -256,7 +314,7 @@ behaviors, especially if it is changed to ``master``. If you are not
sure about how good the version you intend to use is, it is advisable to
use the pinned versions instead.
-**Verifying the Basic Operation**
+**Verifying the OpenStack Basic Operation**
You can verify the basic operation using the commands below.
@@ -276,6 +334,23 @@ You can also access the Horizon UI by using the URL, username, and
the password displayed on your console upon the completion of the
deployment.
+**Verifying the Kubernetes Basic Operation**
+
+You can verify the basic operation using the commands below.
+
+1. Log in to the opnfv host
+
+ | ``ssh root@192.168.122.2``
+
+2. Issue kubectl commands
+
+ | ``kubectl get nodes``
+
+You can also access the Kubernetes Dashboard UI by using the URL,
+username, and the password displayed on your console upon the
+completion of the deployment.
+
+
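A slightly extended verification sketch; the commands beyond ``kubectl get
nodes`` are standard kubectl subcommands and are not taken from this guide:

    ssh root@192.168.122.2
    kubectl get nodes                    # every node should report Ready
    kubectl get pods --all-namespaces    # system pods should be Running
    kubectl cluster-info                 # API server and add-on endpoints
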
**Debugging Tips**
If ``xci-deploy.sh`` fails midway through and you happen to fix whatever
@@ -295,11 +370,12 @@ Here are steps that take place upon the execution of the sandbox script
2. Installs ansible on the host where sandbox script is executed.
3. Creates and provisions VM nodes based on the flavor chosen by the user.
4. Configures the host where the sandbox script is executed.
-5. Configures the deployment host which the OpenStack installation will
- be driven from.
-6. Configures the target hosts where OpenStack will be installed.
-7. Configures the target hosts as controller(s) and compute(s) nodes.
-8. Starts the OpenStack installation.
+5. Configures the deployment host which the OpenStack/Kubernetes
+ installation will be driven from.
+6. Configures the target hosts where OpenStack/Kubernetes will be installed.
+7. Configures the target hosts as controller(s)/compute(s) or master(s)/worker(s)
+ depending on the deployed scenario.
+8. Starts the OpenStack/Kubernetes installation.
.. image:: images/xci-basic-flow.png
:height: 640px
diff --git a/xci/installer/kubespray/deploy.sh b/xci/installer/kubespray/deploy.sh
index 548ed771..7695894b 100755
--- a/xci/installer/kubespray/deploy.sh
+++ b/xci/installer/kubespray/deploy.sh
@@ -7,11 +7,13 @@
# which accompanies this distribution, and is available at
# http://www.apache.org/licenses/LICENSE-2.0
##############################################################################
+set -o errexit
+set -o nounset
+set -o pipefail
K8_XCI_PLAYBOOKS="$(dirname $(realpath ${BASH_SOURCE[0]}))/playbooks"
export ANSIBLE_ROLES_PATH=$HOME/.ansible/roles:/etc/ansible/roles:${XCI_PATH}/xci/playbooks/roles
-
#-------------------------------------------------------------------------------
# Configure localhost
#-------------------------------------------------------------------------------
@@ -72,14 +74,41 @@ fi
echo "Info: Using kubespray to deploy the kubernetes cluster"
echo "-----------------------------------------------------------------------"
-ssh root@$OPNFV_HOST_IP "cd releng-xci/.cache/repos/kubespray;\
+ssh root@$OPNFV_HOST_IP "set -o pipefail; cd releng-xci/.cache/repos/kubespray;\
ansible-playbook ${XCI_ANSIBLE_PARAMS} \
-i opnfv_inventory/inventory.cfg cluster.yml -b | tee setup-kubernetes.log"
scp root@$OPNFV_HOST_IP:~/releng-xci/.cache/repos/kubespray/setup-kubernetes.log \
$LOG_PATH/setup-kubernetes.log
-# check the log to see if we have any error
-if grep -q 'failed=1\|unreachable=1' $LOG_PATH/setup-kubernetes.log; then
- echo "Error: Kubernetes cluster setup failed!"
- exit 1
-fi
+echo
+echo "-----------------------------------------------------------------------"
echo "Info: Kubernetes installation is successfully completed!"
+echo "-----------------------------------------------------------------------"
+
+# Configure the kubernetes authentication in opnfv host.
+ssh root@$OPNFV_HOST_IP "mkdir -p ~/.kube/;\
+ cp -f ~/admin.conf ~/.kube/config; \
+ cp -f ~/kubectl /usr/local/bin"
+
+echo "Login opnfv host ssh root@$OPNFV_HOST_IP
+according to the user-guide to create a service
+https://kubernetes.io/docs/user-guide/walkthrough/k8s201/"
+
+echo
+echo "-----------------------------------------------------------------------"
+echo "Info: Kubernetes login details"
+echo "-----------------------------------------------------------------------"
+
+# Get the dashboard URL
+DASHBOARD_SERVICE=$(ssh root@$OPNFV_HOST_IP "kubectl get service -n kube-system |grep kubernetes-dashboard")
+DASHBOARD_PORT=$(echo ${DASHBOARD_SERVICE} | awk '{print $5}' |awk -F "[:/]" '{print $2}')
+KUBER_SERVER_URL=$(ssh root@$OPNFV_HOST_IP "grep -r server ~/.kube/config")
+echo "Info: Kubernetes Dashboard URL:"
+echo $KUBER_SERVER_URL | awk '{print $2}'| sed -n "s#:[0-9]*\$#:$DASHBOARD_PORT#p"
+
+# Get the dashboard user and password
+MASTER_IP=$(echo ${KUBER_SERVER_URL} | awk '{print $2}' |awk -F "[:/]" '{print $4}')
+USER_CSV=$(ssh root@$MASTER_IP " cat /etc/kubernetes/users/known_users.csv")
+USERNAME=$(echo $USER_CSV |awk -F ',' '{print $2}')
+PASSWORD=$(echo $USER_CSV |awk -F ',' '{print $1}')
+echo "Info: Dashboard username: ${USERNAME}"
+echo "Info: Dashboard password: ${PASSWORD}"
diff --git a/xci/installer/kubespray/files/k8s-cluster.yml b/xci/installer/kubespray/files/k8s-cluster.yml
index aeee573a..20d3091d 100644
--- a/xci/installer/kubespray/files/k8s-cluster.yml
+++ b/xci/installer/kubespray/files/k8s-cluster.yml
@@ -157,7 +157,7 @@ kube_users:
## It is possible to activate / deactivate selected authentication methods (basic auth, static token auth)
#kube_oidc_auth: false
-#kube_basic_auth: false
+kube_basic_auth: true
#kube_token_auth: false
@@ -270,9 +270,10 @@ local_volumes_enabled: false
persistent_volumes_enabled: false
# Make a copy of kubeconfig on the host that runs Ansible in GITDIR/artifacts
-# kubeconfig_localhost: false
+kubeconfig_localhost: true
# Download kubectl onto the host that runs Ansible in GITDIR/artifacts
-# kubectl_localhost: false
+kubectl_localhost: true
+artifacts_dir: "{{ ansible_env.HOME }}"
# dnsmasq
# dnsmasq_upstream_dns_servers:
diff --git a/xci/installer/kubespray/playbooks/configure-opnfvhost.yml b/xci/installer/kubespray/playbooks/configure-opnfvhost.yml
index 4db9ac1a..d6e1d7b8 100644
--- a/xci/installer/kubespray/playbooks/configure-opnfvhost.yml
+++ b/xci/installer/kubespray/playbooks/configure-opnfvhost.yml
@@ -62,6 +62,13 @@
state: present
update_cache: "{{ (ansible_pkg_mgr == 'apt') | ternary('yes', omit) }}"
when: XCI_FLAVOR == 'aio'
+
+ - name: change dashboard service type to NodePort
+ lineinfile:
+ path: "{{ remote_xci_path }}/.cache/repos/kubespray/roles/kubernetes-apps/ansible/templates/dashboard.yml.j2"
+ insertafter: 'targetPort'
+ line: " type: NodePort"
+
- name: pip install ansible
pip:
name: ansible
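
Switching the dashboard service to ``NodePort`` exposes it on a port of the
node itself rather than only on a cluster-internal IP, which is what the
dashboard URL construction in deploy.sh above relies on. A hypothetical
post-deployment check (standard kubectl usage, not part of this patch):

    # PORT(S) should show a node-side mapping such as 443:3XXXX/TCP
    kubectl get service -n kube-system kubernetes-dashboard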
diff --git a/xci/installer/osa/files/ha/inventory b/xci/installer/osa/files/ha/inventory
index 1ef4502a..f5d882ef 100644
--- a/xci/installer/osa/files/ha/inventory
+++ b/xci/installer/osa/files/ha/inventory
@@ -9,3 +9,7 @@ controller02 ansible_ssh_host=192.168.122.5
[compute]
compute00 ansible_ssh_host=192.168.122.6
compute01 ansible_ssh_host=192.168.122.7
+
+[openstack:children]
+controller
+compute
diff --git a/xci/installer/osa/files/mini/inventory b/xci/installer/osa/files/mini/inventory
index 63a1bfab..4224131f 100644
--- a/xci/installer/osa/files/mini/inventory
+++ b/xci/installer/osa/files/mini/inventory
@@ -6,3 +6,7 @@ controller00 ansible_ssh_host=192.168.122.3
[compute]
compute00 ansible_ssh_host=192.168.122.4
+
+[openstack:children]
+controller
+compute
diff --git a/xci/installer/osa/files/noha/inventory b/xci/installer/osa/files/noha/inventory
index 90b31531..0e3b8d84 100644
--- a/xci/installer/osa/files/noha/inventory
+++ b/xci/installer/osa/files/noha/inventory
@@ -7,3 +7,7 @@ controller00 ansible_ssh_host=192.168.122.3
[compute]
compute00 ansible_ssh_host=192.168.122.4
compute01 ansible_ssh_host=192.168.122.5
+
+[openstack:children]
+controller
+compute
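
The new ``[openstack:children]`` group aggregates the controller and compute
groups, so the reworked configure-targethosts.yml play below can address all
OpenStack target hosts at once. A hypothetical way to see the effect (standard
ansible CLI usage, not part of this patch):

    ansible -i xci/installer/osa/files/noha/inventory openstack --list-hosts
    #   hosts (3):
    #     controller00
    #     compute00
    #     compute01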
diff --git a/xci/installer/osa/playbooks/configure-opnfvhost.yml b/xci/installer/osa/playbooks/configure-opnfvhost.yml
index 8b596b3c..4c30f4d1 100644
--- a/xci/installer/osa/playbooks/configure-opnfvhost.yml
+++ b/xci/installer/osa/playbooks/configure-opnfvhost.yml
@@ -75,12 +75,6 @@
- name: copy user_variables_ceph.yml
shell: "/bin/cp -rf {{ remote_xci_flavor_files }}/user_variables_ceph.yml {{OPENSTACK_OSA_ETC_PATH}}/user_variables_ceph.yml"
when: XCI_CEPH_ENABLED == "true"
- # TODO: We need to get rid of this as soon as the issue is fixed upstream
- - name: change the haproxy state from disable to enable
- replace:
- dest: "{{OPENSTACK_OSA_PATH}}/playbooks/os-keystone-install.yml"
- regexp: '(\s+)haproxy_state: disabled'
- replace: '\1haproxy_state: enabled'
- name: copy OPNFV OpenStack playbook
shell: "/bin/cp -rf {{ remote_xci_path }}/xci/installer/osa/files/setup-openstack.yml {{OPENSTACK_OSA_PATH}}/playbooks"
- name: copy pinned versions of OSA Roles and global requirements
@@ -136,6 +130,10 @@
content: "{{ xci_ssl_key }}"
dest: "/etc/ssl/private/xci.key"
become: true
+ - name: fetch xci environment
+ copy:
+ src: "{{ XCI_PATH }}/.cache/xci.env"
+ dest: /root/xci.env
- hosts: localhost
remote_user: root
@@ -173,3 +171,8 @@
src: "{{ ansible_env.HOME }}/openrc"
dest: "{{ XCI_PATH }}/.cache/openrc"
flat: true
+
+ - name: add public key to host
+ copy:
+ src: "{{ XCI_PATH }}/xci/files/authorized_keys"
+ dest: /root/.ssh/authorized_keys
diff --git a/xci/installer/osa/playbooks/configure-targethosts.yml b/xci/installer/osa/playbooks/configure-targethosts.yml
index 4341a884..31c3e02e 100644
--- a/xci/installer/osa/playbooks/configure-targethosts.yml
+++ b/xci/installer/osa/playbooks/configure-targethosts.yml
@@ -1,13 +1,5 @@
---
-- hosts: all
- remote_user: root
- tasks:
- - name: add public key to host
- copy:
- src: "{{ XCI_PATH }}/xci/files/authorized_keys"
- dest: /root/.ssh/authorized_keys
-
-- hosts: controller
+- hosts: openstack
remote_user: root
vars_files:
- "{{ XCI_PATH }}/xci/var/opnfv.yml"
@@ -21,25 +13,15 @@
- "{{ XCI_FLAVOR_ANSIBLE_FILE_PATH }}/flavor-vars.yml"
roles:
- role: bootstrap-host
-
-- hosts: compute
- remote_user: root
- vars_files:
- - "{{ XCI_PATH }}/xci/var/opnfv.yml"
-
- pre_tasks:
- - name: Load distribution variables
- include_vars:
- file: "{{ item }}"
- with_items:
- - "{{ XCI_PATH }}/xci/var/{{ ansible_os_family }}.yml"
- - "{{ XCI_FLAVOR_ANSIBLE_FILE_PATH }}/flavor-vars.yml"
- roles:
- - role: bootstrap-host
- - role: configure-ceph
- when: XCI_CEPH_ENABLED == "true"
-
-- hosts: compute00
- remote_user: root
- roles:
- role: configure-nfs
+ when:
+ - "'compute' in group_names"
+ - role: configure-ceph
+ when:
+ - XCI_CEPH_ENABLED == "true"
+ - "'compute' in group_names"
+ tasks:
+ - name: add public key to host
+ copy:
+ src: "{{ XCI_PATH }}/xci/files/authorized_keys"
+ dest: /root/.ssh/authorized_keys
diff --git a/xci/playbooks/configure-localhost.yml b/xci/playbooks/configure-localhost.yml
index a5b0e3fa..0e3cde6e 100644
--- a/xci/playbooks/configure-localhost.yml
+++ b/xci/playbooks/configure-localhost.yml
@@ -98,3 +98,8 @@
- OPENSTACK_OSA_DEV_PATH != ""
when:
- INSTALLER_TYPE == "osa"
+
+ - name: Dump XCI execution environment to a file
+ shell: env > "{{ XCI_PATH }}/.cache/xci.env"
+ args:
+ executable: /bin/bash
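
The dump is a plain ``KEY=value`` file, one variable per line, which the
functest template below greps selected variables out of. A hypothetical
excerpt of ``.cache/xci.env`` for a Kubernetes run (values illustrative):

    INSTALLER_TYPE=kubespray
    DEPLOY_SCENARIO=k8-nosdn-nofeature
    XCI_FLAVOR=mini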
diff --git a/xci/playbooks/roles/.gitignore b/xci/playbooks/roles/.gitignore
deleted file mode 100644
index 296233e0..00000000
--- a/xci/playbooks/roles/.gitignore
+++ /dev/null
@@ -1,7 +0,0 @@
-*
-!.gitignore
-!clone-repository/
-!bootstrap-host/
-!configure-nfs/
-!prepare-functest/
-!remote-folders/
diff --git a/xci/playbooks/roles/prepare-functest/templates/env.j2 b/xci/playbooks/roles/prepare-functest/templates/env.j2
index 43a581bd..af271ac7 100644
--- a/xci/playbooks/roles/prepare-functest/templates/env.j2
+++ b/xci/playbooks/roles/prepare-functest/templates/env.j2
@@ -1,7 +1,5 @@
-INSTALLER_TYPE=osa
INSTALLER_IP=192.168.122.2
EXTERNAL_NETWORK={{ external_network }}
-DEPLOY_SCENARIO="os-nosdn-nofeature-noha"
CI_LOOP=daily
TEST_DB_URL=http://testresults.opnfv.org/test/api/v1/results
ENERGY_RECORDER_API_URL=http://energy.opnfv.fr/resources
diff --git a/xci/playbooks/roles/prepare-functest/templates/run-functest.sh.j2 b/xci/playbooks/roles/prepare-functest/templates/run-functest.sh.j2
index c0b9bc88..a0ac9970 100644
--- a/xci/playbooks/roles/prepare-functest/templates/run-functest.sh.j2
+++ b/xci/playbooks/roles/prepare-functest/templates/run-functest.sh.j2
@@ -1,5 +1,8 @@
#!/bin/bash
+# Variables that we need to pass from XCI to functest
+XCI_ENV=(INSTALLER_TYPE XCI_FLAVOR)
+
source /root/openrc
openstack --insecure network create --external \
@@ -14,6 +17,27 @@ openstack --insecure subnet create --network {{ external_network }} \
mkdir ~/results/
mkdir ~/images && cd ~/images && wget -q http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img && cd ~
+# Extract variables from xci.env file
+if [[ -e /root/xci.env ]]; then
+ for x in ${XCI_ENV[@]}; do
+ grep "^${x}=" /root/xci.env >> /root/env
+ done
+ # Parse the XCI's DEPLOY_SCENARIO and XCI_FLAVOR variables and
+ # set the functest container's DEPLOY_SCENARIO variable in the
+ # following format <scenario>-<flavor>. Note that XCI's mini flavor
+ # is converted into noha.
+ DEPLOY_SCENARIO=`grep -Po '(?<=DEPLOY_SCENARIO=).*' /root/xci.env`
+ XCI_FLAVOR=`grep -Po '(?<=XCI_FLAVOR=).*' /root/xci.env`
+ XCI_FLAVOR=${XCI_FLAVOR/mini/noha}
+ echo "DEPLOY_SCENARIO=$DEPLOY_SCENARIO-$XCI_FLAVOR" >> /root/env
+fi
+
+# Dump the env file
+echo "------------------------------------------------------"
+echo "------------- functest environment file --------------"
+cat /root/env
+echo "------------------------------------------------------"
+
sudo docker run --env-file env \
-v $(pwd)/openrc:/home/opnfv/functest/conf/env_file \
-v $(pwd)/images:/home/opnfv/functest/images \
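
A worked example of the scenario/flavor conversion above, with illustrative
values: given DEPLOY_SCENARIO=os-nosdn-nofeature and XCI_FLAVOR=mini in
/root/xci.env, the shell parameter expansion maps mini to noha and the
container receives the combined name:

    XCI_FLAVOR=${XCI_FLAVOR/mini/noha}                    # mini -> noha
    echo "DEPLOY_SCENARIO=$DEPLOY_SCENARIO-$XCI_FLAVOR"   # os-nosdn-nofeature-noha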
diff --git a/xci/scripts/vm/start-new-vm.sh b/xci/scripts/vm/start-new-vm.sh
index ec0edeea..6e5c8194 100755
--- a/xci/scripts/vm/start-new-vm.sh
+++ b/xci/scripts/vm/start-new-vm.sh
@@ -120,7 +120,7 @@ COMMON_DISTRO_PKGS=(vim strace gdb htop dnsmasq docker iptables ebtables virt-ma
case ${ID,,} in
*suse)
pkg_mgr_cmd="sudo zypper -q -n ref"
- pkg_mgr_cmd+=" && sudo zypper -q -n install ${COMMON_DISTRO_PKGS[@]} qemu-kvm qemu-tools libvirt-daemon libvirt-client libvirt-daemon-driver-qemu"
+ pkg_mgr_cmd+=" && sudo zypper -q -n install ${COMMON_DISTRO_PKGS[@]} qemu-tools libvirt-daemon libvirt-client libvirt-daemon-driver-qemu"
;;
centos)
pkg_mgr_cmd="yum updateinfo"
@@ -188,6 +188,8 @@ sudo chown $uid:$gid -R $XCI_CACHE_DIR/clean_vm/images/
cp ${XCI_CACHE_DIR}/clean_vm/images/${OS}.qcow2* ${BASE_PATH}/
cp ${XCI_CACHE_DIR}/clean_vm/images/${OS}.qcow2.sha256.txt ${BASE_PATH}/deployment_image.qcow2.sha256.txt
cp ${XCI_CACHE_DIR}/clean_vm/images/${OS}.qcow2 ${BASE_PATH}/deployment_image.qcow2
+
+cd ${BASE_PATH}
declare -r OS_IMAGE_FILE=${OS}.qcow2
[[ ! -e ${OS_IMAGE_FILE} ]] && echo "${OS_IMAGE_FILE} not found! This should never happen!" && exit 1
@@ -355,7 +357,7 @@ echo "Verifying test script exists..."
$vm_ssh ${VM_NAME} "bash -c 'stat ~/releng-xci/run_jenkins_test.sh'"
if [[ $? != 0 ]]; then
echo "Failed to find a 'run_jenkins_test.sh' script..."
- if ${DEFAULT_XCI_TEST}; then
+ if [[ ${DEFAULT_XCI_TEST} == true ]]; then
echo "Creating a default test case to run xci-deploy.sh"
cat > ${BASE_PATH}/run_jenkins_test.sh <<EOF
#!/bin/bash