-rwxr-xr-x  bifrost/scripts/bifrost-provision.sh                    5
-rw-r--r--  docs/xci-user-guide.rst                                70
-rwxr-xr-x  xci/config/pinned-versions                              2
-rw-r--r--  xci/file/install-ansible.sh                             1
-rw-r--r--  xci/playbooks/configure-opnfvhost.yml                   4
-rw-r--r--  xci/playbooks/roles/prepare-functest/templates/env.j2   2
-rwxr-xr-x  xci/scripts/vm/build-dib-os.sh                          8
-rwxr-xr-x  xci/scripts/vm/start-new-vm.sh                         95
-rwxr-xr-x  xci/xci-deploy.sh                                      13
9 files changed, 122 insertions, 78 deletions
diff --git a/bifrost/scripts/bifrost-provision.sh b/bifrost/scripts/bifrost-provision.sh
index bd9493e6..0a2987dd 100755
--- a/bifrost/scripts/bifrost-provision.sh
+++ b/bifrost/scripts/bifrost-provision.sh
@@ -16,9 +16,12 @@ BIFROST_HOME=$SCRIPT_HOME/..
ANSIBLE_INSTALL_ROOT=${ANSIBLE_INSTALL_ROOT:-/opt/stack}
ENABLE_VENV="false"
USE_DHCP="false"
-USE_VENV="false"
+USE_VENV="true"
BUILD_IMAGE=true
PROVISION_WAIT_TIMEOUT=${PROVISION_WAIT_TIMEOUT:-3600}
+# This is normally exported by XCI env but we should initialize it here
+# in case we run this script on its own for debug purposes
+XCI_ANSIBLE_VERBOSITY=${XCI_ANSIBLE_VERBOSITY:-}
# Ensure the right inventory file is used based on branch
CURRENT_BIFROST_BRANCH=$(git rev-parse --abbrev-ref HEAD)
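Note: the `${XCI_ANSIBLE_VERBOSITY:-}` expansion above is what makes the standalone/debug
use case safe. Assuming the script runs under `set -u` (common in bifrost scripts, though
not shown in this hunk), referencing an unset variable would abort the run, while the
default expansion quietly yields an empty string. A minimal sketch of the pattern
(playbook name illustrative):

    set -u
    # Without the default, this reference would abort when the variable is unset:
    XCI_ANSIBLE_VERBOSITY=${XCI_ANSIBLE_VERBOSITY:-}
    # Expands to nothing when unset, or to a flag such as "-vvvv" when the XCI env exports one
    ansible-playbook ${XCI_ANSIBLE_VERBOSITY} -i inventory test.yml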
diff --git a/docs/xci-user-guide.rst b/docs/xci-user-guide.rst
index 3957e2c8..11321383 100644
--- a/docs/xci-user-guide.rst
+++ b/docs/xci-user-guide.rst
@@ -12,30 +12,30 @@ The Sandbox
===========
Users and developers need to have an easy way to bring up an environment that
-fits to their purpose in a simple way in order to spend time on features they
+fits their purpose in a simple way so they can spend time on features they
are developing, bugs they are fixing, trying things out, for learning purposes
-or just for fun rather than dealing with the tools and mechanisms used for
+or just for fun rather than dealing with tools and mechanisms used for
creating and provisioning nodes, installing different components they do not
-intend to touch, and so on.
+intend to touch, etc.
-We also have reality. For example, not all users and developers have full Pharos
-baremetal PODs or powerful machines waiting for them to use while doing their
-work or want to use different Linux distributions due to different reasons.
+However, we also have to deal with reality. For example, not all users and developers
+have full Pharos baremetal PODs or powerful machines available for their
+work or they may want to use different Linux distributions for different reasons.
It is important to take these into account and provide different configuration
-options for the sandbox based on the requirements the people have on the
+options for the sandbox based on the requirements that people have on the
environment they will be using.
Based on the observations we made and the feedback we received from the OPNFV
-users and the developers, XCI Team has created a sandbox that is highly
-configurable, simple and at the same time capable to provide a realistic
-environment for the people to do their work. The sandbox makes it possible to
-bring up the complete environment with a single command and offers variety of
+users and developers, XCI Team has created a sandbox that is highly
+configurable, simple and at the same time capable of providing a realistic
+environment for people to do their work. The sandbox makes it possible to
+bring up the complete environment with a single command and offers a variety of
options to change how the stack should be deployed. The configuration of the
-sandbox is as easy as setting few environment variables.
+sandbox is as easy as setting a few environment variables.
The sandbox provides
-* automated way to bring up and tear down complete stack
+* automated way to bring up and tear down a complete stack
* various flavors to pick and use
* support for different Linux distributions
* multiple OPNFV scenarios to install
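As a concrete illustration of the single-command bring-up described above, a minimal
session could look like the following (paths follow the releng-xci repo layout visible
in this change; treat this as a sketch rather than the canonical quick-start):

    git clone https://git.opnfv.org/releng-xci
    cd releng-xci/xci
    ./xci-deploy.sh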
@@ -44,7 +44,7 @@ The sandbox provides
One last point to highlight here is that XCI itself uses the sandbox for
development and test purposes so it is continuously tested to ensure it works
-for XCI and for the users and the developers who are using it for different
+for XCI and for users and developers who are using it for different
purposes.
Components of the Sandbox
@@ -53,11 +53,11 @@ Components of the Sandbox
The sandbox uses OpenStack projects for VM node creation, provisioning
and OpenStack installation. XCI Team provides playbooks, roles, and scripts
to ensure the components utilized by the sandbox work in a way that serves
-the users in best possible way.
+the users in the best possible way.
* **openstack/bifrost:** Bifrost (pronounced bye-frost) is a set of Ansible
playbooks that automates the task of deploying a base image onto a set
- of known hardware using ironic. It provides modular utility for one-off
+ of known hardware using Ironic. It provides modular utility for one-off
operating system deployment with as few operational requirements as
reasonably possible. Bifrost supports different operating systems such as
Ubuntu, CentOS, and openSUSE.
@@ -71,9 +71,9 @@ the users in best possible way.
`OpenStack Ansible documentation <https://docs.openstack.org/developer/openstack-ansible/>`_.
* **opnfv/releng-xci:** OPNFV Releng Project provides additional scripts, Ansible
- playbooks and configuration options in order for developers to have easy
+ playbooks and configuration options in order for developers to have an easy
way of using openstack/bifrost and openstack/openstack-ansible by just
- setting couple of environment variables and executing a single script.
+ setting a couple of environment variables and executing a single script.
More information about this project can be found in the
`OPNFV Releng documentation <https://wiki.opnfv.org/display/releng>`_.
@@ -112,7 +112,7 @@ Available flavors are listed on the table below.
The specs for VMs are configurable and the more vCPU/RAM the better.
-Estimated times listed above are provided as guidance and they might vary
+Estimated times listed above are provided as guidance only and might vary
depending on
* the physical (or virtual) host where the sandbox is run
@@ -133,8 +133,8 @@ The VMs are attached to the default libvirt network and have a single NIC where VLANs
are created. Different Linux bridges for management, storage and tunnel
networks are created on these VLANs.
-Use of more *production-like* network setup with multiple interfaces is in the
-backlog.
+Use of a more *production-like* network setup with multiple interfaces is in our
+backlog. Enabling OVS as the default is currently in progress.
For storage, Cinder with an NFS backend is used. Work to enable Ceph is currently
ongoing.
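For readers who want to see this layout on a deployed node, the bridges can be listed
with iproute2; the bridge names below follow the usual openstack-ansible convention and
are an assumption, not something this guide pins down:

    # List Linux bridges on a node (typical OSA names: br-mgmt, br-storage, br-vxlan)
    ip -o link show type bridge
    # Show details of the VLAN sub-interfaces the bridges sit on
    ip -d link show | grep -A1 vlan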
@@ -143,10 +143,10 @@ The differences between the flavors are documented below.
**All in One**
-As shown on the table in previous section, this flavor consists of single
+As shown on the table in the previous section, this flavor consists of a single
node. All the OpenStack services, including compute run on the same node.
-The flavor All in One (aio) is deployed based on the process described on
+The flavor All in One (aio) is deployed based on the process described in the
upstream documentation. Please check `OpenStack Ansible Developer Quick Start <https://docs.openstack.org/openstack-ansible/pike/contributor/quickstart-aio.html>`_ for details.
**Mini/No HA/HA**
@@ -155,7 +155,7 @@ These flavors consist of multiple nodes.
* **opnfv**: This node is used for driving the installation towards target nodes
in order to ensure the deployment process is isolated from the physical host
- and always done on clean machine.
+ and always done on a clean machine.
* **controller**: OpenStack control plane runs on this node.
* **compute**: OpenStack compute service runs on this node.
@@ -218,7 +218,7 @@ The openrc file will be available on ``opnfv`` host in ``$HOME``.
**Advanced Usage**
The flavor to deploy and the versions of upstream components to use can
-be configured by the users by setting certain environment variables.
+be configured by the users by setting certain environment variables.
The example below deploys the noha flavor using the latest openstack-ansible
master branch and stores logs in a different location than the
default.
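A sketch of what such an invocation could look like; OPENSTACK_OSA_VERSION and LOG_PATH
both appear elsewhere in this change, while XCI_FLAVOR is an assumption about the flavor
selector, so the exact variable set of the guide's elided example may differ:

    export XCI_FLAVOR=noha
    export OPENSTACK_OSA_VERSION=master
    export LOG_PATH=${HOME}/xci-logs
    cd releng-xci/xci && ./xci-deploy.sh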
@@ -253,7 +253,7 @@ default.
Please note that changing the version to use may result in unexpected
behaviors, especially if it is changed to ``master``. If you are not
-sure about how good the version you intend to use, it is advisable to
+sure about how good the version you intend to use is, it is advisable to
use the pinned versions instead.
**Verifying the Basic Operation**
@@ -272,14 +272,14 @@ You can verify the basic operation using the commands below.
| ``openstack service list``
-You can also access to the Horizon UI by using the URL, username, and
+You can also access the Horizon UI by using the URL, username, and
the password displayed on your console upon the completion of the
deployment.
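Since the openrc file lands in ``$HOME`` on the ``opnfv`` host, verification boils down
to sourcing it and talking to the APIs. A hedged sketch, assuming the ``opnfv`` host
answers on the 192.168.122.2 address used elsewhere in this change and that root login
is available (the user is an assumption):

    ssh root@192.168.122.2
    source ~/openrc
    openstack service list
    openstack compute service list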
**Debugging Tips**
If ``xci-deploy.sh`` fails midway through and you happen to fix whatever
-problem it was that caused the failure in the first place, please run
+problem caused the failure in the first place, please run
the script again. Do not attempt to continue the deployment using helper
scripts such as ``bifrost-provision.sh``.
@@ -288,7 +288,7 @@ Look at various logs in ``$LOG_PATH`` directory. (default one is /tmp/.xci-deplo
Behind the Scenes
-----------------
-Here are the steps that take place upon the execution of the sandbox script
+Here are steps that take place upon the execution of the sandbox script
``xci-deploy.sh``:
1. Sources environment variables in order to set things up properly.
@@ -329,8 +329,8 @@ provides. They can be seen from
`pinned-versions <https://git.opnfv.org/releng-xci/tree/xci/config/pinned-versions>`_.
OPNFV runs periodic jobs against upstream projects openstack/bifrost and
-openstack/openstack-ansible using latest on master branch, continuously
-chasing upstream to find well working version.
+openstack/openstack-ansible using the latest on the master branch, continuously
+chasing upstream to find a well-working version.
Once a working version is identified, the versions of the upstream components
are then bumped in releng-xci repo.
@@ -357,17 +357,17 @@ Testing
Sandbox is continuously tested by OPNFV XCI to ensure the changes do not impact
users. In fact, OPNFV XCI itself uses the sandbox to ensure it is always in
-working state..
+working state.
Support
=======
-OPNFV XCI issues are tracked on OPNFV JIRA Releng project. If you encounter
-and issue or identify a bug, please submit an issue to JIRA using
+OPNFV XCI issues are tracked in OPNFV JIRA Releng project. If you encounter
+an issue or identify a bug, please submit an issue to JIRA using
`this link <https://jira.opnfv.org/projects/RELENG>`_. Please label the issue
you are submitting with the ``xci`` label.
-If you have questions or comments, you can ask them on ``#opnfv-pharos`` IRC
+If you have questions or comments, you can ask them on the ``#opnfv-pharos`` IRC
channel on Freenode.
References
diff --git a/xci/config/pinned-versions b/xci/config/pinned-versions
index 4c760918..1e392132 100755
--- a/xci/config/pinned-versions
+++ b/xci/config/pinned-versions
@@ -26,6 +26,6 @@
# use releng-xci from master until the development work with the sandbox is complete
export OPNFV_RELENG_VERSION="master"
# HEAD of bifrost "master" as of 29.06.2017
-export OPENSTACK_BIFROST_VERSION=${OPENSTACK_BIFROST_VERSION:-"7c9bb5e07c6bc3b42c9a9e8457e5eef511075b38"}
+export OPENSTACK_BIFROST_VERSION=${OPENSTACK_BIFROST_VERSION:-"db9f2f556bf92558275c0422beafb5e68eff59f1"}
# HEAD of osa "master" as of 05.09.2017
export OPENSTACK_OSA_VERSION=${OPENSTACK_OSA_VERSION:-"d32bb257cbad2410711d6cdf54faff828605026e"}
diff --git a/xci/file/install-ansible.sh b/xci/file/install-ansible.sh
index ca7763ad..bc7bd1e4 100644
--- a/xci/file/install-ansible.sh
+++ b/xci/file/install-ansible.sh
@@ -44,6 +44,7 @@ case ${ID,,} in
ubuntu|debian)
OS_FAMILY="Debian"
+ export DEBIAN_FRONTEND=noninteractive
INSTALLER_CMD="sudo -H -E apt-get -y install"
CHECK_CMD="dpkg -l"
PKG_MAP=( [gcc]=gcc
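Exporting DEBIAN_FRONTEND=noninteractive matters here because INSTALLER_CMD runs apt-get
through ``sudo -E``, which forwards the caller's environment; without it, debconf can
stop an unattended install to ask configuration questions. A minimal sketch of the
interaction (package name illustrative):

    export DEBIAN_FRONTEND=noninteractive
    # -E makes sudo pass DEBIAN_FRONTEND through, so debconf stays quiet
    sudo -H -E apt-get -y install gcc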
diff --git a/xci/playbooks/configure-opnfvhost.yml b/xci/playbooks/configure-opnfvhost.yml
index daaddfbd..faae623f 100644
--- a/xci/playbooks/configure-opnfvhost.yml
+++ b/xci/playbooks/configure-opnfvhost.yml
@@ -75,8 +75,8 @@
shell: "/bin/cp -rf {{OPENSTACK_OSA_PATH}}/etc/openstack_deploy {{OPENSTACK_OSA_ETC_PATH}}"
- name: copy openstack_user_config.yml
shell: "/bin/cp -rf {{XCI_FLAVOR_ANSIBLE_FILE_PATH}}/openstack_user_config.yml {{OPENSTACK_OSA_ETC_PATH}}"
- - name: copy user_variables.yml
- shell: "/bin/cp -rf {{XCI_FLAVOR_ANSIBLE_FILE_PATH}}/user_variables.yml {{OPENSTACK_OSA_ETC_PATH}}"
+ - name: copy all user override files
+ shell: "/bin/cp -rf {{XCI_FLAVOR_ANSIBLE_FILE_PATH}}/user_*.yml {{OPENSTACK_OSA_ETC_PATH}}"
- name: copy cinder.yml
shell: "/bin/cp -rf {{OPNFV_RELENG_PATH}}/xci/file/cinder.yml {{OPENSTACK_OSA_ETC_PATH}}/env.d"
# TODO: We need to get rid of this as soon as the issue is fixed upstream
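Switching from a single file to a ``user_*.yml`` glob means every override file next to
the flavor definition gets copied in one task. One caveat of this pattern: if nothing
matches, the unexpanded glob is passed to cp literally and the command fails. A sketch
of the behavior, with hypothetical file names:

    mkdir -p /tmp/flavor /tmp/etc
    touch /tmp/flavor/user_variables.yml /tmp/flavor/user_ceph.yml  # hypothetical overrides
    /bin/cp -rf /tmp/flavor/user_*.yml /tmp/etc/   # copies both files
    /bin/cp -rf /tmp/flavor/user_*.json /tmp/etc/  # no match: cp sees the literal pattern and errors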
diff --git a/xci/playbooks/roles/prepare-functest/templates/env.j2 b/xci/playbooks/roles/prepare-functest/templates/env.j2
index b928accd..87093325 100644
--- a/xci/playbooks/roles/prepare-functest/templates/env.j2
+++ b/xci/playbooks/roles/prepare-functest/templates/env.j2
@@ -1,4 +1,4 @@
INSTALLER_TYPE=osa
INSTALLER_IP=192.168.122.2
-EXTERNAL_NETWORK="{{ external_network }}"
+EXTERNAL_NETWORK={{ external_network }}
DEPLOY_SCENARIO="os-nosdn-nofeature-noha"
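Dropping the quotes is significant if this file is consumed as a Docker ``--env-file``
(an assumption here, though functest env files are typically fed to the container that
way): ``--env-file`` performs no shell-style quote removal, so quoted values arrive
inside the container with literal quote characters. A quick way to see the difference:

    printf 'EXTERNAL_NETWORK="ext-net"\n' > /tmp/env.txt
    docker run --rm --env-file /tmp/env.txt alpine sh -c 'echo $EXTERNAL_NETWORK'
    # prints "ext-net" -- with the quote characters included in the value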
diff --git a/xci/scripts/vm/build-dib-os.sh b/xci/scripts/vm/build-dib-os.sh
index 7547d40e..7688ee6e 100755
--- a/xci/scripts/vm/build-dib-os.sh
+++ b/xci/scripts/vm/build-dib-os.sh
@@ -24,17 +24,21 @@ fi
# Prepare new working directory
dib_workdir="${XCI_CACHE_DIR:-${HOME}/.cache/opnfv_xci_deploy}/clean_vm/images"
[[ ! -d $dib_workdir ]] && mkdir -p $dib_workdir
-chmod 777 -R $dib_workdir
# Record our information
uid=$(id -u)
gid=$(id -g)
+sudo chmod 777 -R $dib_workdir
+sudo chown $uid:$gid -R $dib_workdir
+
echo "Getting the latest docker image..."
eval $docker_cmd pull hwoarang/docker-dib-xci:latest
# Get rid of stale files
-rm -rf $dib_workdir/*.qcow2 $dib_workdir/*.sha256.txt $dib_workdir/*.d
+rm -rf $dib_workdir/${ONE_DISTRO}.qcow2 \
+ $dib_workdir/${ONE_DISTRO}.sha256.txt \
+ $dib_workdir/${ONE_DISTRO}.d
echo "Initiating dib build..."
eval $docker_cmd run --name ${docker_name} \
--rm --privileged=true -e ONE_DISTRO=${ONE_DISTRO} \
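Scoping the stale-file cleanup to ${ONE_DISTRO} keeps cached images of the other distros
intact when several builds share the same workdir. Since an unset ONE_DISTRO would expand
to odd paths, a defensive guard is cheap insurance; a sketch, not part of the actual
script:

    [[ -n ${ONE_DISTRO:-} ]] || { echo "ONE_DISTRO must be set"; exit 1; }
    rm -rf "${dib_workdir}/${ONE_DISTRO}".{qcow2,sha256.txt,d}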
diff --git a/xci/scripts/vm/start-new-vm.sh b/xci/scripts/vm/start-new-vm.sh
index 4ad41f64..9b5cdd8e 100755
--- a/xci/scripts/vm/start-new-vm.sh
+++ b/xci/scripts/vm/start-new-vm.sh
@@ -15,6 +15,10 @@ set -e
# executed on a CI or not.
export JENKINS_HOME="${JENKINS_HOME:-${HOME}}"
+# Set this option to 'false' to destroy the VM on failures. This is helpful
+# when we do not want to preserve the VM for debugging purposes.
+export XCI_KEEP_CLEAN_VM_ON_FAILURES=${XCI_KEEP_CLEAN_VM_ON_FAILURES:-true}
+
export DEFAULT_XCI_TEST=${DEFAULT_XCI_TEST:-false}
# JIT Build of OS image to load on the clean VM
export XCI_BUILD_CLEAN_VM_OS=${XCI_BUILD_CLEAN_VM_OS:-true}
@@ -24,6 +28,15 @@ export XCI_UPDATE_CLEAN_VM_OS=${XCI_UPDATE_CLEAN_VM_OS:-false}
grep -q -i ^Y$ /sys/module/kvm_intel/parameters/nested || { echo "Nested virtualization is not enabled but it's needed for XCI to work"; exit 1; }
+destroy_vm_on_failures() {
+ local exit_err=${xci_error:-130}
+ if ! ${XCI_KEEP_CLEAN_VM_ON_FAILURES}; then
+ sudo virsh destroy ${VM_NAME}
+ sudo virsh undefine ${VM_NAME}
+ fi
+ exit $exit_err
+}
+
usage() {
echo """
$0 <distro>
@@ -32,6 +45,22 @@ usage() {
"""
}
+wait_for_pkg_mgr() {
+ local pkg_mgr=$1
+ local _retries=30
+ while [[ $_retries -gt 0 ]]; do
+ if pgrep -a $pkg_mgr &> /dev/null; then
+ echo "There is another $pkg_mgr process running... ($_retries retries left)"
+ sleep 30
+ (( _retries = _retries - 1 ))
+ else
+ return 0
+ fi
+ done
+ echo "$pkg_mgr still running... Maybe stuck?"
+ exit 1
+}
+
update_clean_vm_files() {
local opnfv_url="http://artifacts.opnfv.org/releng/xci/images"
local vm_cache=${XCI_CACHE_DIR}/clean_vm/images
@@ -82,6 +111,10 @@ declare -r XCI_CACHE_DIR=${HOME}/.cache/opnfv_xci_deploy
echo "Preparing new virtual machine '${VM_NAME}'..."
+echo "Destroying previous '${VM_NAME}' instances..."
+sudo virsh destroy ${VM_NAME} || true
+sudo virsh undefine ${VM_NAME} || true
+
source /etc/os-release
echo "Installing host (${ID,,}) dependencies..."
# check we can run sudo
@@ -95,13 +128,19 @@ if ! sudo -n "true"; then
exit 1
fi
case ${ID,,} in
- *suse) sudo zypper -q -n in virt-manager qemu-kvm qemu-tools libvirt-daemon docker libvirt-client libvirt-daemon-driver-qemu iptables ebtables dnsmasq
- ;;
- centos) sudo yum install -q -y epel-release
- sudo yum install -q -y in virt-manager qemu-kvm qemu-kvm-tools qemu-img libvirt-daemon-kvm docker iptables ebtables dnsmasq
- ;;
- ubuntu) sudo apt-get install -y -q=3 virt-manager qemu-kvm libvirt-bin qemu-utils docker.io docker iptables ebtables dnsmasq
- ;;
+ *suse)
+ wait_for_pkg_mgr zypper
+ sudo zypper -q -n in virt-manager qemu-kvm qemu-tools libvirt-daemon docker libvirt-client libvirt-daemon-driver-qemu iptables ebtables dnsmasq
+ ;;
+ centos)
+ wait_for_pkg_mgr yum
+ sudo yum install -q -y epel-release
+ sudo yum install -q -y virt-manager qemu-kvm qemu-kvm-tools qemu-img libvirt-daemon-kvm docker iptables ebtables dnsmasq
+ ;;
+ ubuntu)
+ wait_for_pkg_mgr apt-get
+ sudo apt-get install -y -q=3 virt-manager qemu-kvm libvirt-bin qemu-utils docker.io docker iptables ebtables dnsmasq
+ ;;
esac
echo "Ensuring libvirt and docker services are running..."
@@ -110,6 +149,11 @@ sudo systemctl -q start docker
echo "Preparing XCI cache..."
mkdir -p ${XCI_CACHE_DIR}/ ${XCI_CACHE_DIR}/clean_vm/images/
+# Record our information
+uid=$(id -u)
+gid=$(id -g)
+sudo chmod 777 -R $XCI_CACHE_DIR/clean_vm/images/
+sudo chown $uid:$gid -R $XCI_CACHE_DIR/clean_vm/images/
if ${XCI_BUILD_CLEAN_VM_OS}; then
echo "Building new ${OS} image..."
@@ -132,6 +176,9 @@ fi
# Doesn't matter if we just built an image or got one from artifacts. In both
# cases there should be a copy in the cache so copy it over.
sudo rm -f ${BASE_PATH}/${OS}.qcow2
+# Fix perms again...
+sudo chmod 777 -R $XCI_CACHE_DIR/clean_vm/images/
+sudo chown $uid:$gid -R $XCI_CACHE_DIR/clean_vm/images/
cp ${XCI_CACHE_DIR}/clean_vm/images/${OS}.qcow2 ${BASE_PATH}/
declare -r OS_IMAGE_FILE=${OS}.qcow2
@@ -164,15 +211,13 @@ fi
sudo virsh net-list --autostart | grep -q ${NETWORK} || sudo virsh net-autostart ${NETWORK}
sudo virsh net-list --inactive | grep -q ${NETWORK} && sudo virsh net-start ${NETWORK}
-echo "Destroying previous instances if necessary..."
-sudo virsh destroy ${VM_NAME} || true
-sudo virsh undefine ${VM_NAME} || true
-
echo "Installing virtual machine '${VM_NAME}'..."
sudo virt-install -n ${VM_NAME} --memory ${MEMORY} --vcpus ${NCPUS} --cpu ${CPU} \
--import --disk=${OS_IMAGE_FILE},cache=unsafe --network network=${NETWORK} \
--graphics none --hvm --noautoconsole
+trap destroy_vm_on_failures EXIT
+
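The EXIT trap pairs with XCI_KEEP_CLEAN_VM_ON_FAILURES defined earlier: from this point
on, any exit path runs destroy_vm_on_failures, which tears the VM down only when the
flag is false and then propagates the recorded exit code (defaulting to 130 when
xci_error was never set). A hedged usage sketch for CI-style runs where the VM should
not linger:

    # Throwaway run: tear the clean VM down even when the deployment fails
    XCI_KEEP_CLEAN_VM_ON_FAILURES=false ./xci/scripts/vm/start-new-vm.sh ubuntu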
_retries=30
while [[ $_retries -ne 0 ]]; do
_ip=$(sudo virsh domifaddr ${VM_NAME} | grep -o --colour=never 192.168.140.[[:digit:]]* | cat )
@@ -193,7 +238,8 @@ chmod 600 ${BASE_PATH}/xci/scripts/vm/id_rsa_for_dib*
ssh-keygen -R $_ip || true
ssh-keygen -R ${VM_NAME} || true
-declare -r vm_ssh="ssh -o StrictHostKeyChecking=no -i ${BASE_PATH}/xci/scripts/vm/id_rsa_for_dib -l devuser"
+# Initial ssh command until we setup everything
+vm_ssh="ssh -o StrictHostKeyChecking=no -i ${BASE_PATH}/xci/scripts/vm/id_rsa_for_dib -l devuser"
_retries=30
_ssh_exit=0
@@ -213,12 +259,12 @@ done
echo "Congratulations! Your shiny new '${VM_NAME}' virtual machine is fully operational! Enjoy!"
-echo "Adding ${VM_NAME}_xci_vm entry to /etc/hosts"
+echo "Adding ${VM_NAME} entry to /etc/hosts"
sudo sed -i "/.*${VM_NAME}.*/d" /etc/hosts
sudo bash -c "echo '${_ip} ${VM_NAME}' >> /etc/hosts"
echo "Dropping a minimal .ssh/config file"
-cat > $HOME/.ssh/config<<EOF
+cat > $HOME/.ssh/xci-vm-config<<EOF
Host *
StrictHostKeyChecking no
ServerAliveInterval 60
@@ -235,12 +281,15 @@ StrictHostKeyChecking no
ProxyCommand ssh -l devuser \$(echo %h | sed 's/_opnfv//') 'nc 192.168.122.2 %p'
EOF
+# Final ssh command which will also test the configuration file
+declare -r vm_ssh="ssh -F $HOME/.ssh/xci-vm-config"
+
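The generated config also makes manual access convenient; a sketch, where the exact host
aliases depend on the <distro> argument (ubuntu used here as a hypothetical):

    ssh -F ~/.ssh/xci-vm-config ubuntu_xci_vm        # the clean VM itself
    ssh -F ~/.ssh/xci-vm-config ubuntu_xci_vm_opnfv  # hops through the VM to 192.168.122.2 via the ProxyCommand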
echo "Preparing test environment..."
# *_xci_vm hostname is invalid. Let's just use the distro name
-$vm_ssh $_ip "sudo hostname ${VM_NAME/_xci*}"
+$vm_ssh ${VM_NAME} "sudo hostname ${VM_NAME/_xci*}"
# Start with good dns
-$vm_ssh $_ip 'sudo bash -c "echo nameserver 8.8.8.8 > /etc/resolv.conf"'
-$vm_ssh $_ip 'sudo bash -c "echo nameserver 8.8.4.4 >> /etc/resolv.conf"'
+$vm_ssh ${VM_NAME} 'sudo bash -c "echo nameserver 8.8.8.8 > /etc/resolv.conf"'
+$vm_ssh ${VM_NAME} 'sudo bash -c "echo nameserver 8.8.4.4 >> /etc/resolv.conf"'
cat > ${BASE_PATH}/vm_hosts.txt <<EOF
127.0.0.1 localhost ${VM_NAME/_xci*}
::1 localhost ipv6-localhost ipv6-loopback
@@ -257,22 +306,22 @@ do_copy() {
--exclude "${VM_NAME}*" \
--exclude "${OS}*" \
--exclude "build.log" \
- -e "$vm_ssh" ${BASE_PATH}/ $_ip:~/releng-xci/
+ -e "$vm_ssh" ${BASE_PATH}/ ${VM_NAME}:~/releng-xci/
}
do_copy
rm ${BASE_PATH}/vm_hosts.txt
# Copy keypair
-$vm_ssh $_ip "cp --preserve=all ~/releng-xci/xci/scripts/vm/id_rsa_for_dib /home/devuser/.ssh/id_rsa"
-$vm_ssh $_ip "cp --preserve=all ~/releng-xci/xci/scripts/vm/id_rsa_for_dib.pub /home/devuser/.ssh/id_rsa.pub"
-$vm_ssh $_ip "sudo mv /home/devuser/releng-xci/vm_hosts.txt /etc/hosts"
+$vm_ssh ${VM_NAME} "cp --preserve=all ~/releng-xci/xci/scripts/vm/id_rsa_for_dib /home/devuser/.ssh/id_rsa"
+$vm_ssh ${VM_NAME} "cp --preserve=all ~/releng-xci/xci/scripts/vm/id_rsa_for_dib.pub /home/devuser/.ssh/id_rsa.pub"
+$vm_ssh ${VM_NAME} "sudo mv /home/devuser/releng-xci/vm_hosts.txt /etc/hosts"
set +e
_has_test=true
echo "Verifying test script exists..."
-$vm_ssh $_ip "bash -c 'stat ~/releng-xci/run_jenkins_test.sh'"
+$vm_ssh ${VM_NAME} "bash -c 'stat ~/releng-xci/run_jenkins_test.sh'"
if [[ $? != 0 ]]; then
echo "Failed to find a 'run_jenkins_test.sh' script..."
if ${DEFAULT_XCI_TEST}; then
@@ -292,7 +341,7 @@ fi
if ${_has_test}; then
echo "Running test..."
- $vm_ssh $_ip "bash ~/releng-xci/run_jenkins_test.sh"
+ $vm_ssh ${VM_NAME} "bash ~/releng-xci/run_jenkins_test.sh"
xci_error=$?
else
echo "No jenkins test was found. The virtual machine will remain idle!"
diff --git a/xci/xci-deploy.sh b/xci/xci-deploy.sh
index c798135f..d73cf5cd 100755
--- a/xci/xci-deploy.sh
+++ b/xci/xci-deploy.sh
@@ -145,10 +145,6 @@ sudo sed -i "s/^Defaults.*env_reset/#&/" /etc/sudoers
cd $XCI_PATH/../bifrost/
sudo -E bash ./scripts/destroy-env.sh
cd $XCI_PATH/playbooks
-# NOTE(hwoarang) we need newer ansible to work on the following playbook
-sudo pip uninstall -y ansible || true
-sudo -H pip uninstall -y ansible || true
-sudo pip install ansible==${XCI_ANSIBLE_PIP_VERSION}
ansible-playbook ${XCI_ANSIBLE_VERBOSITY} -i inventory provision-vm-nodes.yml
cd ${OPENSTACK_BIFROST_PATH}
bash ./scripts/bifrost-provision.sh
@@ -169,15 +165,6 @@ echo
echo "Info: Configuring localhost for openstack-ansible"
echo "-----------------------------------------------------------------------"
-# NOTE(hwoarang) we need newer ansible to work on the OSA playbooks. Make sure
-# all installations are gone. This is ugly and has to be removed as soon as we
-# are able to deploy bifrost in vent or when bifrost start working with newest
-# ansible
-pip uninstall -y ansible || true
-sudo -H pip uninstall -y ansible || true
-sudo pip install --force-reinstall ansible==${XCI_ANSIBLE_PIP_VERSION}
-# Start fresh
-hash -r
cd $XCI_PATH/playbooks
ansible-playbook ${XCI_ANSIBLE_VERBOSITY} -i inventory configure-localhost.yml
echo "-----------------------------------------------------------------------"