path: root/foreman/ci
author    Jonas Bjurel <jonas.bjurel@ericsson.com>  2015-10-03 16:54:43 +0200
committer Jonas Bjurel <jonas.bjurel@ericsson.com>  2015-10-03 16:54:43 +0200
commit    11dbe27afb96c5b54b9f4f0a1c8b21194f59dc7b (patch)
tree      1ee6814c36c7af010cff9d67a1cf1643b233a378 /foreman/ci
parent    0d4a1f4143d71fc616f456a3708d5c8c2a24ec3f (diff)
Moving tag arno.2015.1.0 from genesis to fuel/stable/arno
Change-Id: I8bb3e28a814e04ad15e8a4b24b40bd7685600f46
Signed-off-by: Jonas Bjurel <jonas.bjurel@ericsson.com>
Diffstat (limited to 'foreman/ci')
-rw-r--r--  foreman/ci/README.md                              86
-rw-r--r--  foreman/ci/Vagrantfile                            93
-rwxr-xr-x  foreman/ci/bootstrap.sh                           55
-rwxr-xr-x  foreman/ci/build.sh                              398
-rwxr-xr-x  foreman/ci/clean.sh                              152
-rwxr-xr-x  foreman/ci/deploy.sh                             694
-rw-r--r--  foreman/ci/inventory/lf_pod2_ksgen_settings.yml  357
-rwxr-xr-x  foreman/ci/nat_setup.sh                           44
-rw-r--r--  foreman/ci/opnfv_ksgen_settings.yml              338
-rw-r--r--  foreman/ci/reload_playbook.yml                    16
-rwxr-xr-x  foreman/ci/vm_nodes_provision.sh                  91
11 files changed, 2324 insertions, 0 deletions
diff --git a/foreman/ci/README.md b/foreman/ci/README.md
new file mode 100644
index 000000000..9417ee55d
--- /dev/null
+++ b/foreman/ci/README.md
@@ -0,0 +1,86 @@
+# Foreman/QuickStack Automatic Deployment README
+
+A simple bash script (deploy.sh) provisions a Foreman/QuickStack VM server and 4-5 additional baremetal or VM nodes in an OpenStack HA + OpenDaylight environment.
+
+##Pre-Requisites
+####Baremetal:
+* At least 5 baremetal servers, with 3 interfaces minimum, all connected to separate VLANs
+* DHCP should not be running in any VLAN. Foreman will act as a DHCP server.
+* On the baremetal server that will be your JumpHost, you need to have the 3 interfaces configured with IP addresses
+* On the baremetal JumpHost you will need an RPM-based Linux (CentOS 7 will do) with an up-to-date kernel (yum update kernel) and at least 2GB of RAM
+* Nodes must be set to PXE boot first in priority, off their first NIC, which is connected to the same VLAN as NIC 1 of your JumpHost
+* Nodes need BMC/OOB management set up via IPMI
+* Internet access via first (Admin) or third interface (Public)
+* No other hypervisors should be running on JumpHost
+
+####VM Nodes:
+* JumpHost with 3 interfaces, configured with IP addresses and connected to separate VLANs
+* DHCP should not be running in any VLAN. Foreman will act as a DHCP Server
+* On the baremetal JumpHost you will need an RPM-based Linux (CentOS 7 will do) with an up-to-date kernel (yum update kernel) and at least 24GB of RAM
+* Internet access via the first (Admin) or third interface (Public)
+* No other hypervisors should be running on JumpHost
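As a quick sanity check of the interface requirement above, you can count usable NICs on the JumpHost much as deploy.sh later does (the exclusion list below mirrors the one deploy.sh uses; the threshold of 3 matches the prerequisite):

```shell
# Count NICs, excluding loopback/tunnel/libvirt/VirtualBox interfaces.
nic_count=$(ls /sys/class/net | grep -Ev '^(lo|tun|virbr|vboxnet)' | wc -l)
echo "usable interfaces: ${nic_count}"
if [ "${nic_count}" -lt 3 ]; then
    echo "WARNING: JumpHost needs at least 3 interfaces" >&2
fi
```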
+
+##How It Works
+
+###deploy.sh:
+
+* Detects your network configuration (3 or 4 usable interfaces)
+* Modifies a “ksgen.yml” settings file and Vagrantfile with necessary network info
+* Installs Vagrant and dependencies
+* Downloads the CentOS 7 Vagrant base box, and issues a “vagrant up” to start the VM
+* The Vagrantfile points to bootstrap.sh as the provisioner to take over the rest of the install
+
+###bootstrap.sh:
+
+* Is initiated inside of the VM once it is up
+* Installs Khaleesi, Ansible, and Python dependencies
+* Makes a call to Khaleesi to start a playbook: opnfv.yml + “ksgen.yml” settings file
+
+###Khaleesi (Ansible):
+
+* Runs through the playbook to install Foreman/QuickStack inside of the VM
+* Configures services needed for a JumpHost: DHCP, TFTP, DNS
+* Uses info from “ksgen.yml” file to add your nodes into Foreman and set them to Build mode
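The “ksgen.yml” settings are flattened into prefixed shell variables by a parse_yaml helper defined in deploy.sh (included later in this commit). Run stand-alone against a made-up two-line settings fragment, it behaves like this:

```shell
# parse_yaml, as defined in deploy.sh: flattens "a:\n  b: v" into prefix_a_b="v"
parse_yaml() {
    local prefix=$2
    local s='[[:space:]]*' w='[a-zA-Z0-9_]*' fs=$(echo @|tr @ '\034')
    sed -ne "s|^\($s\)\($w\)$s:$s\"\(.*\)\"$s\$|\1$fs\2$fs\3|p" \
        -e "s|^\($s\)\($w\)$s:$s\(.*\)$s\$|\1$fs\2$fs\3|p" $1 |
    awk -F$fs '{
        indent = length($1)/2;
        vname[indent] = $2;
        for (i in vname) {if (i > indent) {delete vname[i]}}
        if (length($3) > 0) {
            vn=""; for (i=0; i<indent; i++) {vn=(vn)(vname[i])("_")}
            printf("%s%s%s=\"%s\"\n", "'$prefix'",vn, $2, $3);
        }
    }'
}

# Hypothetical settings fragment (not a real ksgen file):
cat > /tmp/demo_settings.yml << 'EOF'
foreman:
  seed_ip: "10.4.1.2"
EOF

parse_yaml /tmp/demo_settings.yml "config_"
# → config_foreman_seed_ip="10.4.1.2"
```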
+
+####Baremetal Only:
+* Issues an API call to Foreman to rebuild all nodes
+* Ansible then waits to make sure nodes come back via ssh checks
+* Ansible then waits for puppet to run on each node and complete
+
+####VM Only:
+* deploy.sh then brings up 5 more Vagrant VMs
+* Checks into Foreman and tells Foreman nodes are built
+* Configures and starts puppet on each node
+
+##Execution Instructions
+
+* On your JumpHost, as root, run 'git clone https://github.com/trozet/bgs_vagrant.git' into /root/
+
+####Baremetal Only:
+* Edit opnfv_ksgen_settings.yml → “nodes” section:
+
+ * For each node (compute, controller1..3):
+ * mac_address - change to the mac_address of that node's Admin NIC (1st NIC)
+ * bmc_ip - change to the IP of the node's BMC (out-of-band management interface)
+ * bmc_mac - the MAC address of that same BMC interface
+ * bmc_user - IPMI username
+ * bmc_pass - IPMI password
+
+ * For each controller node:
+ * private_mac - change to mac_address of node's Private NIC (2nd NIC)
+
+* Execute deploy.sh via: ./deploy.sh -base_config /root/bgs_vagrant/opnfv_ksgen_settings.yml
+
+####VM Only:
+* Execute deploy.sh via: ./deploy.sh -virtual
+* Install directory for each VM will be in /tmp (for example /tmp/compute, /tmp/controller1)
+
+####Both Approaches:
+* Install directory for foreman-server is /tmp/bgs_vagrant/ - This is where vagrant will be launched from automatically
+* To access the VM you can 'cd /tmp/bgs_vagrant' and type 'vagrant ssh'
+* To access Foreman enter the IP address shown in 'cat /tmp/bgs_vagrant/opnfv_ksgen_settings.yml | grep foreman_url'
+* The default user/pass is admin/octopus
+
+##Redeploying
+Make sure you run ./clean.sh for the baremetal deployment with your opnfv_ksgen_settings.yml file as "-base_config". This will ensure that your nodes are powered off and that your VM is destroyed ("vagrant destroy" in the /tmp/bgs_vagrant directory).
+For VM redeployment, make sure you also "vagrant destroy" in each /tmp/<node> directory. To verify no VMs are still running on your JumpHost, use "vboxmanage list runningvms".
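A guarded version of the running-VM check above, safe to run even on hosts where VirtualBox has already been removed:

```shell
# List any VirtualBox VMs still running on the JumpHost; fall through
# gracefully if VirtualBox is not installed.
if command -v vboxmanage > /dev/null 2>&1; then
    vboxmanage list runningvms
else
    echo "vboxmanage not found; no VirtualBox VMs to check"
fi
```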
diff --git a/foreman/ci/Vagrantfile b/foreman/ci/Vagrantfile
new file mode 100644
index 000000000..100e12db0
--- /dev/null
+++ b/foreman/ci/Vagrantfile
@@ -0,0 +1,93 @@
+# -*- mode: ruby -*-
+# vi: set ft=ruby :
+
+# All Vagrant configuration is done below. The "2" in Vagrant.configure
+# configures the configuration version (we support older styles for
+# backwards compatibility). Please don't change it unless you know what
+# you're doing.
+Vagrant.configure(2) do |config|
+ # The most common configuration options are documented and commented below.
+ # For a complete reference, please see the online documentation at
+ # https://docs.vagrantup.com.
+
+ # Every Vagrant development environment requires a box. You can search for
+ # boxes at https://atlas.hashicorp.com/search.
+ config.vm.box = "chef/centos-7.0"
+
+ # Disable automatic box update checking. If you disable this, then
+ # boxes will only be checked for updates when the user runs
+ # `vagrant box outdated`. This is not recommended.
+ # config.vm.box_check_update = false
+
+ # Create a forwarded port mapping which allows access to a specific port
+ # within the machine from a port on the host machine. In the example below,
+ # accessing "localhost:8080" will access port 80 on the guest machine.
+ # config.vm.network "forwarded_port", guest: 80, host: 8080
+
+ # Create a private network, which allows host-only access to the machine
+ # using a specific IP.
+ # config.vm.network "private_network", ip: "192.168.33.10"
+
+ # Create a public network, which generally matches a bridged network.
+ # Bridged networks make the machine appear as another physical device on
+ # your network.
+ # config.vm.network "public_network"
+ config.vm.network "public_network", ip: "10.4.1.2", bridge: 'eth_replace0'
+ config.vm.network "public_network", ip: "10.4.9.2", bridge: 'eth_replace1'
+ config.vm.network "public_network", ip: "10.2.84.2", bridge: 'eth_replace2'
+ config.vm.network "public_network", ip: "10.3.84.2", bridge: 'eth_replace3'
+
+ # IP address of your LAN's router
+ default_gw = ""
+ nat_flag = false
+
+ # Share an additional folder to the guest VM. The first argument is
+ # the path on the host to the actual folder. The second argument is
+ # the path on the guest to mount the folder. And the optional third
+ # argument is a set of non-required options.
+ # config.vm.synced_folder "../data", "/vagrant_data"
+
+ # Provider-specific configuration so you can fine-tune various
+ # backing providers for Vagrant. These expose provider-specific options.
+ # Example for VirtualBox:
+ #
+ config.vm.provider "virtualbox" do |vb|
+ # # Display the VirtualBox GUI when booting the machine
+ # vb.gui = true
+ #
+ # # Customize the amount of memory on the VM:
+ vb.memory = 2048
+ vb.cpus = 2
+ end
+ #
+ # View the documentation for the provider you are using for more
+ # information on available options.
+
+ # Define a Vagrant Push strategy for pushing to Atlas. Other push strategies
+ # such as FTP and Heroku are also available. See the documentation at
+ # https://docs.vagrantup.com/v2/push/atlas.html for more information.
+ # config.push.define "atlas" do |push|
+ # push.app = "YOUR_ATLAS_USERNAME/YOUR_APPLICATION_NAME"
+ # end
+
+ # Enable provisioning with a shell script. Additional provisioners such as
+ # Puppet, Chef, Ansible, Salt, and Docker are also available. Please see the
+ # documentation for more information about their specific syntax and use.
+ # config.vm.provision "shell", inline: <<-SHELL
+ # sudo apt-get update
+ # sudo apt-get install -y apache2
+ # SHELL
+
+ config.ssh.username = 'root'
+ config.ssh.password = 'vagrant'
+ config.ssh.insert_key = true
+ config.vm.provision "ansible" do |ansible|
+ ansible.playbook = "reload_playbook.yml"
+ end
+ config.vm.provision :shell, :inline => "mount -t vboxsf vagrant /vagrant"
+ config.vm.provision :shell, :inline => "route add default gw #{default_gw}"
+ if nat_flag
+ config.vm.provision :shell, path: "nat_setup.sh"
+ end
+ config.vm.provision :shell, path: "bootstrap.sh"
+end
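The eth_replaceN bridge placeholders above are substituted by deploy.sh with the interface names it detects. A minimal stand-alone sketch of that substitution (the scratch file path and the NIC name enp0s3 are hypothetical; the exact sed expression deploy.sh uses may differ):

```shell
# One placeholder line from the Vagrantfile, copied to a scratch file:
cat > /tmp/demo_vagrantfile << 'EOF'
  config.vm.network "public_network", ip: "10.4.1.2", bridge: 'eth_replace0'
EOF

# Swap the placeholder for a detected interface name, e.g. enp0s3:
sed -i 's/eth_replace0/enp0s3/' /tmp/demo_vagrantfile
grep bridge /tmp/demo_vagrantfile
# the line now reads: ... bridge: 'enp0s3'
```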
diff --git a/foreman/ci/bootstrap.sh b/foreman/ci/bootstrap.sh
new file mode 100755
index 000000000..4bc22ed26
--- /dev/null
+++ b/foreman/ci/bootstrap.sh
@@ -0,0 +1,55 @@
+#!/usr/bin/env bash
+
+#bootstrap script for installing/running Khaleesi in Foreman/QuickStack VM
+#author: Tim Rozet (trozet@redhat.com)
+#
+#Uses Vagrant and VirtualBox
+#Vagrantfile uses bootstrap.sh which installs Khaleesi
+#Khaleesi will install and configure Foreman/QuickStack
+#
+#Pre-requisites:
+#Target system should be CentOS 7
+#Ensure the host's kernel is up to date (yum update)
+
+##VARS
+reset=`tput sgr0`
+blue=`tput setaf 4`
+red=`tput setaf 1`
+green=`tput setaf 2`
+
+##END VARS
+
+
+# Install EPEL repo for access to many other yum repos
+# Major version is pinned to force some consistency for Arno
+yum install -y epel-release-7*
+
+# Install other required packages
+# Major version is pinned to force some consistency for Arno
+if ! yum -y install python-pip-1* python-virtualenv-1* gcc-4* git-1* sshpass-1* ansible-1* python-requests-1*; then
+ printf '%s\n' 'bootstrap.sh: failed to install required packages' >&2
+ exit 1
+fi
+
+cd /opt
+
+echo "Cloning khaleesi to /opt"
+
+if [ ! -d khaleesi ]; then
+ if ! git clone -b v1.0 https://github.com/trozet/khaleesi.git; then
+ printf '%s\n' 'bootstrap.sh: Unable to git clone khaleesi' >&2
+ exit 1
+ fi
+fi
+
+cd khaleesi
+
+cp ansible.cfg.example ansible.cfg
+
+echo "Completed Installing Khaleesi"
+
+cd /opt/khaleesi/
+
+ansible localhost -m setup -i local_hosts
+
+./run.sh --no-logs --use /vagrant/opnfv_ksgen_settings.yml playbooks/opnfv.yml
diff --git a/foreman/ci/build.sh b/foreman/ci/build.sh
new file mode 100755
index 000000000..7a1ef523c
--- /dev/null
+++ b/foreman/ci/build.sh
@@ -0,0 +1,398 @@
+#!/bin/bash
+set -e
+##############################################################################
+# Copyright (c) 2015 Ericsson AB and others.
+# stefan.k.berg@ericsson.com
+# jonas.bjurel@ericsson.com
+# dradez@redhat.com
+# All rights reserved. This program and the accompanying materials
+# are made available under the terms of the Apache License, Version 2.0
+# which accompanies this distribution, and is available at
+# http://www.apache.org/licenses/LICENSE-2.0
+##############################################################################
+
+trap 'echo "Exiting ..."; \
+if [ -f ${LOCK_FILE} ]; then \
+ if [ $(cat ${LOCK_FILE}) -eq $$ ]; then \
+ rm -f ${LOCK_FILE}; \
+ fi; \
+fi;' EXIT
+
+############################################################################
+# BEGIN of usage description
+#
+usage ()
+{
+cat << EOF
+$0 Builds the Foreman OPNFV Deployment ISO
+
+usage: $0 [-s spec-file] [-c cache-URI] [-l log-file] [-f Flags] build-directory
+
+OPTIONS:
+ -s spec-file ($BUILD_SPEC), define the build-spec file, default ../build/config.mk
+ -c cache base URI ($BUILD_CACHE_URI), specifies the base URI of a build cache to be used/updated - the name is automatically generated from the md5sum of the spec-file; http://, ftp://, and file://[absolute path] are supported.
+
+ -l log-file ($BUILD_LOG), specifies the output log-file (stdout and stderr), if not specified logs are output to console as normal
+ -v version tag to be applied to the build result
+ -r alternative remote access method script/program. curl is default.
+ -t run small build-script unit test.
+ -T run large build-script unit test.
+ -f build flags ($BUILD_FLAGS):
+ o s: Do nothing, succeed
+ o f: Do nothing, fail
+ o t: run build unit tests
+ o i: run interactive (-t flag to docker run)
+ o P: Populate a new local cache and push it to the (-c cache-URI) cache artifactory if -c option is present, currently file://, http:// and ftp:// are supported
+ o d: Detach - NOT YET SUPPORTED
+
+ build-directory ($BUILD_DIR), specifies the directory for the output artifacts (.iso file).
+
+ -h help, prints this help text
+
+Description:
+build.sh builds opnfv .iso artifact.
+To reduce build time it uses a build cache at a local or remote location. The cache is rebuilt and uploaded if any of the conditions below is met:
+1) The P(opulate) flag is set and the -c cache-base-URI is provided, if -c is not provided the cache will stay local.
+2) If the cache is invalidated by one of the following conditions:
+ - The config spec md5sum does not match the md5sum of the spec the cache was built from.
+ - The git Commit-Id on the remote repos/HEAD defined in the spec file does not correspond to the Commit-Id the cache was built with.
+3) A valid cache does not exist on the specified -c cache-base-URI.
+
+The cache URI object name is foreman_cache-"md5sum(spec file)"
+
+Logging goes to the console by default, but can be directed elsewhere with the -l option, in which case both stdout and stderr are redirected to that destination.
+
+Built-in unit testing of components is enabled by adding the t(est) flag.
+
+Return codes:
+ - 0 Success!
+ - 1-99 Unspecified build error
+ - 100-199 Build system internal error (not the build itself)
+ o 101 Build system instance busy
+ - 200 Build failure
+
+Examples:
+build -c http://opnfv.org/artifactory/foreman/cache -d ~/jenkins/genesis/foreman/ci/output -f ti
+NOTE: Currently the build scope is set to the git root of the repository; -d destination locations outside that scope will not work
+EOF
+}
+#
+# END of usage description
+############################################################################
+
+############################################################################
+# BEGIN of variables to customize
+#
+BUILD_BASE=$(readlink -e ../build/)
+RESULT_DIR="${BUILD_BASE}/release"
+BUILD_SPEC="${BUILD_BASE}/config.mk"
+CACHE_DIR="cache"
+LOCAL_CACHE_ARCH_NAME="foreman-cache"
+REMOTE_CACHE_ARCH_NAME="foreman_cache-$(md5sum ${BUILD_SPEC}| cut -f1 -d " ")"
+REMOTE_ACCESS_METHD=curl
+INCLUDE_DIR=../include
+#
+# END of variables to customize
+############################################################################
+
+############################################################################
+# BEGIN of script assigned variables
+#
+SCRIPT_DIR=$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )
+LOCK_FILE="${SCRIPT_DIR}/.build.lck"
+CACHE_TMP="${SCRIPT_DIR}/tmp"
+TEST_SUCCEED=0
+TEST_FAIL=0
+UNIT_TEST=0
+UPDATE_CACHE=0
+POPULATE_CACHE=0
+RECURSIVE=0
+DETACH=0
+DEBUG=0
+INTEGRATION_TEST=0
+FULL_INTEGRATION_TEST=0
+INTERACTIVE=0
+BUILD_CACHE_URI=
+BUILD_SPEC=
+BUILD_DIR=
+BUILD_LOG=
+BUILD_VERSION=
+MAKE_ARGS=
+#
+# END of script assigned variables
+############################################################################
+
+############################################################################
+# BEGIN of include pragmas
+#
+source ${INCLUDE_DIR}/build.sh.debug
+#
+# END of include
+############################################################################
+
+############################################################################
+# BEGIN of main
+#
+while getopts "s:c:d:v:f:l:r:RtTh" OPTION
+do
+ case $OPTION in
+ h)
+ usage
+ rc=0
+ exit $rc
+ ;;
+
+ s)
+ BUILD_SPEC=${OPTARG}
+ ;;
+
+ c)
+ BUILD_CACHE_URI=${OPTARG}
+ ;;
+
+ d)
+ BUILD_DIR=${OPTARG}
+ ;;
+
+ l)
+ BUILD_LOG=${OPTARG}
+ ;;
+
+ v)
+ BUILD_VERSION=${OPTARG}
+ ;;
+
+ f)
+ BUILD_FLAGS=${OPTARG}
+ ;;
+
+ r) REMOTE_ACCESS_METHD=${OPTARG}
+ ;;
+
+ R)
+ RECURSIVE=1
+ ;;
+
+ t)
+ INTEGRATION_TEST=1
+ ;;
+
+ T)
+ INTEGRATION_TEST=1
+ FULL_INTEGRATION_TEST=1
+ ;;
+
+ *)
+ echo "${OPTION} is not a valid argument"
+ rc=100
+ exit $rc
+ ;;
+ esac
+done
+
+if [ -z $BUILD_DIR ]; then
+ BUILD_DIR=$(echo $@ | cut -d ' ' -f ${OPTIND})
+fi
+
+for ((i=0; i<${#BUILD_FLAGS};i++)); do
+ case ${BUILD_FLAGS:$i:1} in
+ s)
+ rc=0
+ exit $rc
+ ;;
+
+ f)
+ rc=1
+ exit $rc
+ ;;
+
+ t)
+ UNIT_TEST=1
+ ;;
+
+ i)
+ INTERACTIVE=1
+ ;;
+
+ P)
+ POPULATE_CACHE=1
+ ;;
+
+ d)
+ DETACH=1
+ echo "Detach is not yet supported - exiting ...."
+ rc=100
+ exit $rc
+ ;;
+
+ D)
+ DEBUG=1
+ ;;
+
+ *)
+ echo "${BUILD_FLAGS:$i:1} is not a valid build flag - exiting ...."
+ rc=100
+ exit $rc
+ ;;
+ esac
+done
+
+shift $((OPTIND-1))
+
+if [ ${INTEGRATION_TEST} -eq 1 ]; then
+ integration-test
+ rc=0
+ exit $rc
+fi
+
+if [ ! -f ${BUILD_SPEC} ]; then
+ echo "spec file does not exist: $BUILD_SPEC - exiting ...."
+ rc=100
+ exit $rc
+fi
+
+if [ -z ${BUILD_DIR} ]; then
+ echo "Missing build directory - exiting ...."
+ rc=100
+ exit $rc
+fi
+
+if [ ! -z ${BUILD_LOG} ]; then
+ if [[ ${RECURSIVE} -ne 1 ]]; then
+ set +e
+ eval $0 -R $@ > ${BUILD_LOG} 2>&1
+ rc=$?
+ set -e
+ if [ $rc -ne 0 ]; then
+ exit $rc
+ fi
+ fi
+fi
+
+if [ ${TEST_SUCCEED} -eq 1 ]; then
+ sleep 1
+ rc=0
+ exit $rc
+fi
+
+if [ ${TEST_FAIL} -eq 1 ]; then
+ sleep 1
+ rc=1
+ exit $rc
+fi
+
+if [ -e ${LOCK_FILE} ]; then
+ echo "A build job is already running, exiting....."
+ rc=101
+ exit $rc
+fi
+
+echo $$ > ${LOCK_FILE}
+
+if [ ! -z ${BUILD_CACHE_URI} ]; then
+ if [ ${POPULATE_CACHE} -ne 1 ]; then
+ rm -rf ${CACHE_TMP}/cache
+ mkdir -p ${CACHE_TMP}/cache
+ echo "Downloading cache file ${BUILD_CACHE_URI}/${REMOTE_CACHE_ARCH_NAME} ..."
+ set +e
+ ${REMOTE_ACCESS_METHD} -o ${CACHE_TMP}/cache/${LOCAL_CACHE_ARCH_NAME}.tgz ${BUILD_CACHE_URI}/${REMOTE_CACHE_ARCH_NAME}.tgz
+ rc=$?
+ set -e
+ if [ $rc -ne 0 ]; then
+ echo "Remote cache does not exist, or is not accessible - a new cache will be built ..."
+ POPULATE_CACHE=1
+ else
+ echo "Unpacking cache file ..."
+ tar -C ${CACHE_TMP}/cache -xvf ${CACHE_TMP}/cache/${LOCAL_CACHE_ARCH_NAME}.tgz
+ cp ${CACHE_TMP}/cache/cache/.versions ${BUILD_BASE}/.
+ set +e
+ make -C ${BUILD_BASE} validate-cache;
+ rc=$?
+ set -e
+
+ if [ $rc -ne 0 ]; then
+ echo "Cache invalid - a new cache will be built "
+ POPULATE_CACHE=1
+ else
+ cp -rf ${CACHE_TMP}/cache/cache/. ${BUILD_BASE}
+ fi
+ rm -rf ${CACHE_TMP}/cache
+ fi
+ fi
+fi
+
+if [ ${POPULATE_CACHE} -eq 1 ]; then
+ if [ ${DEBUG} -eq 0 ]; then
+ set +e
+ cd ${BUILD_BASE} && make clean
+ rc=$?
+ set -e
+ if [ $rc -ne 0 ]; then
+ echo "Build - make clean failed, exiting ..."
+ rc=100
+ exit $rc
+ fi
+ fi
+fi
+
+if [ ! -z ${BUILD_VERSION} ]; then
+ MAKE_ARGS+="REVSTATE=${BUILD_VERSION} "
+fi
+
+if [ ${UNIT_TEST} -eq 1 ]; then
+ MAKE_ARGS+="UNIT_TEST=TRUE "
+else
+ MAKE_ARGS+="UNIT_TEST=FALSE "
+fi
+
+if [ ${INTERACTIVE} -eq 1 ]; then
+ MAKE_ARGS+="INTERACTIVE=TRUE "
+else
+ MAKE_ARGS+="INTERACTIVE=FALSE "
+fi
+
+MAKE_ARGS+=all
+
+if [ ${DEBUG} -eq 0 ]; then
+ set +e
+ cd ${BUILD_BASE} && make ${MAKE_ARGS}
+ rc=$?
+ set -e
+ if [ $rc -gt 0 ]; then
+ echo "Build: make all failed, exiting ..."
+ rc=200
+ exit $rc
+ fi
+else
+debug_make
+fi
+set +e
+make -C ${BUILD_BASE} prepare-cache
+rc=$?
+set -e
+
+if [ $rc -gt 0 ]; then
+ echo "Build: make prepare-cache failed - exiting ..."
+ rc=100
+ exit $rc
+fi
+echo "Copying built OPNFV .iso file to target directory ${BUILD_DIR} ..."
+rm -rf ${BUILD_DIR}
+mkdir -p ${BUILD_DIR}
+cp ${BUILD_BASE}/.versions ${BUILD_DIR}
+cp ${RESULT_DIR}/*.iso* ${BUILD_DIR}
+
+if [ $POPULATE_CACHE -eq 1 ]; then
+ if [ ! -z ${BUILD_CACHE_URI} ]; then
+ echo "Building cache ..."
+ tar --dereference -C ${BUILD_BASE} -caf ${BUILD_BASE}/${LOCAL_CACHE_ARCH_NAME}.tgz ${CACHE_DIR}
+ echo "Uploading cache ${BUILD_CACHE_URI}/${REMOTE_CACHE_ARCH_NAME}"
+ ${REMOTE_ACCESS_METHD} -T ${BUILD_BASE}/${LOCAL_CACHE_ARCH_NAME}.tgz ${BUILD_CACHE_URI}/${REMOTE_CACHE_ARCH_NAME}.tgz
+ rm ${BUILD_BASE}/${LOCAL_CACHE_ARCH_NAME}.tgz
+ fi
+fi
+echo "Success!!!"
+exit 0
+#
+# END of main
+############################################################################
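The cache-object naming rule from the usage text above, foreman_cache-"md5sum(spec file)", can be reproduced on its own. The spec file below is a throwaway stand-in for ../build/config.mk:

```shell
# Derive the remote cache archive name exactly as build.sh does,
# from the md5sum of the build-spec file.
BUILD_SPEC=$(mktemp)                # stand-in for ../build/config.mk
echo "EXAMPLE_SPEC := 1" > "${BUILD_SPEC}"
REMOTE_CACHE_ARCH_NAME="foreman_cache-$(md5sum ${BUILD_SPEC} | cut -f1 -d ' ')"
echo "${REMOTE_CACHE_ARCH_NAME}"    # foreman_cache-<32 hex digits>
rm -f "${BUILD_SPEC}"
```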
diff --git a/foreman/ci/clean.sh b/foreman/ci/clean.sh
new file mode 100755
index 000000000..f61ac9372
--- /dev/null
+++ b/foreman/ci/clean.sh
@@ -0,0 +1,152 @@
+#!/usr/bin/env bash
+
+#Clean script to uninstall provisioning server for Foreman/QuickStack
+#author: Tim Rozet (trozet@redhat.com)
+#
+#Uses Vagrant and VirtualBox
+#
+#Destroys Vagrant VM running in /tmp/bgs_vagrant
+#Shuts down all nodes found in Khaleesi settings
+#Removes hypervisor kernel modules (VirtualBox)
+
+##VARS
+reset=`tput sgr0`
+blue=`tput setaf 4`
+red=`tput setaf 1`
+green=`tput setaf 2`
+##END VARS
+
+##FUNCTIONS
+display_usage() {
+ echo -e "\n\n${blue}This script is used to uninstall Foreman/QuickStack Installer and Clean OPNFV Target System${reset}\n\n"
+ echo -e "\nUsage:\n$0 [arguments] \n"
+ echo -e "\n -no_parse : No variable parsing into config. Flag. \n"
+ echo -e "\n -base_config : Full path of ksgen settings file to parse. Required. Will provide BMC info to shutdown hosts. Example: -base_config /opt/myinventory.yml \n"
+}
+
+##END FUNCTIONS
+
+if [[ ( $1 == "--help") || $1 == "-h" ]]; then
+ display_usage
+ exit 0
+fi
+
+echo -e "\n\n${blue}This script is used to uninstall Foreman/QuickStack Installer and Clean OPNFV Target System${reset}\n\n"
+echo "Use -h to display help"
+sleep 2
+
+while [ "`echo $1 | cut -c1`" = "-" ]
+do
+ echo $1
+ case "$1" in
+ -base_config)
+ base_config=$2
+ shift 2
+ ;;
+ *)
+ display_usage
+ exit 1
+ ;;
+esac
+done
+
+
+# Install ipmitool
+# Major version is pinned to force some consistency for Arno
+if ! yum list installed | grep -i ipmitool; then
+ if ! yum -y install ipmitool-1*; then
+ echo "${red}Unable to install ipmitool!${reset}"
+ exit 1
+ fi
+else
+ echo "${blue}Skipping ipmitool as it is already installed!${reset}"
+fi
+
+###find all the bmc IPs and number of nodes
+node_counter=0
+output=`grep bmc_ip $base_config | grep -Eo '[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+'`
+for line in ${output} ; do
+ bmc_ip[$node_counter]=$line
+ ((node_counter++))
+done
+
+max_nodes=$((node_counter-1))
+
+###find bmc_users per node
+node_counter=0
+output=`grep bmc_user $base_config | sed 's/\s*bmc_user:\s*//'`
+for line in ${output} ; do
+ bmc_user[$node_counter]=$line
+ ((node_counter++))
+done
+
+###find bmc_pass per node
+node_counter=0
+output=`grep bmc_pass $base_config | sed 's/\s*bmc_pass:\s*//'`
+for line in ${output} ; do
+ bmc_pass[$node_counter]=$line
+ ((node_counter++))
+done
+
+for mynode in `seq 0 $max_nodes`; do
+ echo "${blue}Node: ${bmc_ip[$mynode]} ${bmc_user[$mynode]} ${bmc_pass[$mynode]} ${reset}"
+ if ipmitool -I lanplus -P ${bmc_pass[$mynode]} -U ${bmc_user[$mynode]} -H ${bmc_ip[$mynode]} chassis power off; then
+ echo "${blue}Node: $mynode, ${bmc_ip[$mynode]} powered off!${reset}"
+ else
+ echo "${red}Error: Unable to power off $mynode, ${bmc_ip[$mynode]} ${reset}"
+ exit 1
+ fi
+done
+
+###check to see if vbox is installed
+vboxpkg=`rpm -qa | grep VirtualBox`
+if [ $? -eq 0 ]; then
+ skip_vagrant=0
+else
+ skip_vagrant=1
+fi
+
+###destroy vagrant
+if [ $skip_vagrant -eq 0 ]; then
+ cd /tmp/bgs_vagrant
+ if vagrant destroy -f; then
+ echo "${blue}Successfully destroyed Foreman VM ${reset}"
+ else
+ echo "${red}Unable to destroy Foreman VM ${reset}"
+ echo "${blue}Checking if vagrant was already destroyed and no process is active...${reset}"
+ if ps axf | grep vagrant; then
+ echo "${red}Vagrant VM still exists...exiting ${reset}"
+ exit 1
+ else
+ echo "${blue}Vagrant process doesn't exist. Moving on... ${reset}"
+ fi
+ fi
+
+ ###kill virtualbox
+ echo "${blue}Killing VirtualBox ${reset}"
+ killall virtualbox
+ killall VBoxHeadless
+
+ ###remove virtualbox
+ echo "${blue}Removing VirtualBox ${reset}"
+ yum -y remove $vboxpkg
+
+else
+ echo "${blue}Skipping Vagrant destroy + Vbox Removal as VirtualBox package is already removed ${reset}"
+fi
+
+
+###remove kernel modules
+echo "${blue}Removing kernel modules ${reset}"
+for kernel_mod in vboxnetadp vboxnetflt vboxpci vboxdrv; do
+ if ! rmmod $kernel_mod; then
+ if rmmod $kernel_mod 2>&1 | grep -i 'not currently loaded'; then
+ echo "${blue} $kernel_mod is not currently loaded! ${reset}"
+ else
+ echo "${red}Error trying to remove Kernel Module: $kernel_mod ${reset}"
+ exit 1
+ fi
+ else
+ echo "${blue}Removed Kernel Module: $kernel_mod ${reset}"
+ fi
+done
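The grep/sed parsing clean.sh uses above to pull BMC credentials out of the ksgen settings can be exercised against a made-up inventory fragment (the IP and credentials below are illustrative only):

```shell
# Hypothetical inventory fragment with one node's BMC settings:
cat > /tmp/demo_inventory.yml << 'EOF'
  bmc_ip: 10.4.17.2
  bmc_user: admin
  bmc_pass: octopus
EOF

# Extract the fields the same way clean.sh does:
grep bmc_ip /tmp/demo_inventory.yml | grep -Eo '[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+'
grep bmc_user /tmp/demo_inventory.yml | sed 's/\s*bmc_user:\s*//'
grep bmc_pass /tmp/demo_inventory.yml | sed 's/\s*bmc_pass:\s*//'
# → 10.4.17.2, admin, octopus (one per line)
```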
diff --git a/foreman/ci/deploy.sh b/foreman/ci/deploy.sh
new file mode 100755
index 000000000..86f03a743
--- /dev/null
+++ b/foreman/ci/deploy.sh
@@ -0,0 +1,694 @@
+#!/usr/bin/env bash
+
+#Deploy script to install provisioning server for Foreman/QuickStack
+#author: Tim Rozet (trozet@redhat.com)
+#
+#Uses Vagrant and VirtualBox
+#Vagrantfile uses bootstrap.sh which installs Khaleesi
+#Khaleesi will install and configure Foreman/QuickStack
+#
+#Pre-requisites:
+#Supports 3 or 4 network interface configuration
+#Target system must be RPM based
+#Ensure the host's kernel is up to date (yum update)
+#Provisioned nodes expected to have following order of network connections (note: not all have to exist, but order is maintained):
+#eth0- admin network
+#eth1- private network (+storage network in 3 NIC config)
+#eth2- public network
+#eth3- storage network
+#script assumes /24 subnet mask
+
+##VARS
+reset=`tput sgr0`
+blue=`tput setaf 4`
+red=`tput setaf 1`
+green=`tput setaf 2`
+
+declare -A interface_arr
+##END VARS
+
+##FUNCTIONS
+display_usage() {
+ echo -e "\n\n${blue}This script is used to deploy Foreman/QuickStack Installer and Provision OPNFV Target System${reset}\n\n"
+ echo -e "\n${green}Make sure you have the latest kernel installed before running this script! (yum update kernel +reboot)${reset}\n"
+ echo -e "\nUsage:\n$0 [arguments] \n"
+ echo -e "\n -no_parse : No variable parsing into config. Flag. \n"
+ echo -e "\n -base_config : Full path of settings file to parse. Optional. Will provide a new base settings file rather than the default. Example: -base_config /opt/myinventory.yml \n"
+ echo -e "\n -virtual : Node virtualization instead of baremetal. Flag. \n"
+}
+
+##find ip of interface
+##params: interface name
+function find_ip {
+ ip addr show $1 | grep -Eo '^\s+inet\s+[\.0-9]+' | awk '{print $2}'
+}
+
+##finds subnet of ip and netmask
+##params: ip, netmask
+function find_subnet {
+ IFS=. read -r i1 i2 i3 i4 <<< "$1"
+ IFS=. read -r m1 m2 m3 m4 <<< "$2"
+ printf "%d.%d.%d.%d\n" "$((i1 & m1))" "$((i2 & m2))" "$((i3 & m3))" "$((i4 & m4))"
+}
+
+##increments subnet by a value
+##params: ip, value
+##assumes low value
+function increment_subnet {
+ IFS=. read -r i1 i2 i3 i4 <<< "$1"
+ printf "%d.%d.%d.%d\n" "$i1" "$i2" "$i3" "$((i4 | $2))"
+}
+
+
+##finds netmask of interface
+##params: interface
+##returns long format 255.255.x.x
+function find_netmask {
+ ifconfig $1 | grep -Eo 'netmask\s+[\.0-9]+' | awk '{print $2}'
+}
+
+##finds short netmask of interface
+##params: interface
+##returns short format, ex: /21
+function find_short_netmask {
+ echo "/$(ip addr show $1 | grep -Eo '^\s+inet\s+[\/\.0-9]+' | awk '{print $2}' | cut -d / -f2)"
+}
+
+##increments next IP
+##params: ip
+##assumes a /24 subnet
+function next_ip {
+ baseaddr="$(echo $1 | cut -d. -f1-3)"
+ lsv="$(echo $1 | cut -d. -f4)"
+ if [ "$lsv" -ge 254 ]; then
+ return 1
+ fi
+ ((lsv++))
+ echo $baseaddr.$lsv
+}
+
+##removes the network interface config from Vagrantfile
+##params: interface
+##assumes you are in the directory of Vagrantfile
+function remove_vagrant_network {
+ sed -i 's/^.*'"$1"'.*$//' Vagrantfile
+}
+
+##check if IP is in use
+##params: ip
+##ping ip to get arp entry, then check arp
+function is_ip_used {
+ ping -c 5 $1 > /dev/null 2>&1
+ arp -n | grep "$1 " | grep -iv incomplete > /dev/null 2>&1
+}
+
+##find next usable IP
+##params: ip
+function next_usable_ip {
+ new_ip=$(next_ip $1)
+ while [ "$new_ip" ]; do
+ if ! is_ip_used $new_ip; then
+ echo $new_ip
+ return 0
+ fi
+ new_ip=$(next_ip $new_ip)
+ done
+ return 1
+}
+
+##increment ip by value
+##params: ip, amount to increment by
+##increment_ip $next_private_ip 10
+function increment_ip {
+ baseaddr="$(echo $1 | cut -d. -f1-3)"
+ lsv="$(echo $1 | cut -d. -f4)"
+ incrval=$2
+ lsv=$((lsv+incrval))
+ if [ "$lsv" -ge 254 ]; then
+ return 1
+ fi
+ echo $baseaddr.$lsv
+}
+
+##translates yaml into variables
+##params: filename, prefix (ex. "config_")
+##usage: parse_yaml opnfv_ksgen_settings.yml "config_"
+parse_yaml() {
+ local prefix=$2
+ local s='[[:space:]]*' w='[a-zA-Z0-9_]*' fs=$(echo @|tr @ '\034')
+ sed -ne "s|^\($s\)\($w\)$s:$s\"\(.*\)\"$s\$|\1$fs\2$fs\3|p" \
+ -e "s|^\($s\)\($w\)$s:$s\(.*\)$s\$|\1$fs\2$fs\3|p" $1 |
+ awk -F$fs '{
+ indent = length($1)/2;
+ vname[indent] = $2;
+ for (i in vname) {if (i > indent) {delete vname[i]}}
+ if (length($3) > 0) {
+ vn=""; for (i=0; i<indent; i++) {vn=(vn)(vname[i])("_")}
+ printf("%s%s%s=\"%s\"\n", "'$prefix'",vn, $2, $3);
+ }
+ }'
+}
+
+##END FUNCTIONS
+
+if [[ ( $1 == "--help") || $1 == "-h" ]]; then
+ display_usage
+ exit 0
+fi
+
+echo -e "\n\n${blue}This script is used to deploy Foreman/QuickStack Installer and Provision OPNFV Target System${reset}\n\n"
+echo "Use -h to display help"
+sleep 2
+
+while [ "`echo $1 | cut -c1`" = "-" ]
+do
+ echo $1
+ case "$1" in
+ -base_config)
+ base_config=$2
+ shift 2
+ ;;
+ -no_parse)
+ no_parse="TRUE"
+ shift 1
+ ;;
+ -virtual)
+ virtual="TRUE"
+ shift 1
+ ;;
+ *)
+ display_usage
+ exit 1
+ ;;
+esac
+done
+
+##disable selinux
+/sbin/setenforce 0
+
+# Install EPEL repo for access to many other yum repos
+# Major version is pinned to force some consistency for Arno
+yum install -y epel-release-7*
+
+# Install other required packages
+# Major versions are pinned to force some consistency for Arno
+if ! yum install -y binutils-2* gcc-4* make-3* patch-2* libgomp-4* glibc-headers-2* glibc-devel-2* kernel-headers-3* kernel-devel-3* dkms-2* psmisc-22*; then
+ printf '%s\n' 'deploy.sh: Unable to install dependency packages' >&2
+ exit 1
+fi
+
+##install VirtualBox repo
+if cat /etc/*release | grep -i "Fedora release"; then
+ vboxurl=http://download.virtualbox.org/virtualbox/rpm/fedora/\$releasever/\$basearch
+else
+ vboxurl=http://download.virtualbox.org/virtualbox/rpm/el/\$releasever/\$basearch
+fi
+
+cat > /etc/yum.repos.d/virtualbox.repo << EOM
+[virtualbox]
+name=Oracle Linux / RHEL / CentOS-\$releasever / \$basearch - VirtualBox
+baseurl=$vboxurl
+enabled=1
+gpgcheck=1
+gpgkey=https://www.virtualbox.org/download/oracle_vbox.asc
+skip_if_unavailable = 1
+keepcache = 0
+EOM
+
+##install VirtualBox
+if ! yum list installed | grep -i virtualbox; then
+ if ! yum -y install VirtualBox-4.3; then
+ printf '%s\n' 'deploy.sh: Unable to install virtualbox package' >&2
+ exit 1
+ fi
+fi
+
+##install kmod-VirtualBox
+if ! lsmod | grep vboxdrv; then
+ if ! sudo /etc/init.d/vboxdrv setup; then
+ printf '%s\n' 'deploy.sh: Unable to install kernel module for virtualbox' >&2
+ exit 1
+ fi
+else
+ printf '%s\n' 'deploy.sh: Skipping kernel module for virtualbox. Already Installed'
+fi
+
+##install Ansible
+if ! yum list installed | grep -i ansible; then
+ if ! yum -y install ansible-1*; then
+ printf '%s\n' 'deploy.sh: Unable to install Ansible package' >&2
+ exit 1
+ fi
+fi
+
+##install Vagrant
+if ! rpm -qa | grep vagrant; then
+ if ! rpm -Uvh https://dl.bintray.com/mitchellh/vagrant/vagrant_1.7.2_x86_64.rpm; then
+ printf '%s\n' 'deploy.sh: Unable to install vagrant package' >&2
+ exit 1
+ fi
+else
+ printf '%s\n' 'deploy.sh: Skipping Vagrant install as it is already installed.'
+fi
+
+##add centos 7 box to vagrant
+if ! vagrant box list | grep chef/centos-7.0; then
+ if ! vagrant box add chef/centos-7.0 --provider virtualbox; then
+ printf '%s\n' 'deploy.sh: Unable to download centos7 box for Vagrant' >&2
+ exit 1
+ fi
+else
+ printf '%s\n' 'deploy.sh: Skipping Vagrant box add as centos-7.0 is already installed.'
+fi
+
+##install workaround for centos7
+if ! vagrant plugin list | grep vagrant-centos7_fix; then
+ if ! vagrant plugin install vagrant-centos7_fix; then
+ printf '%s\n' 'deploy.sh: Warning: unable to install vagrant centos7 workaround' >&2
+ fi
+else
+ printf '%s\n' 'deploy.sh: Skipping Vagrant plugin as centos7 workaround is already installed.'
+fi
+
+cd /tmp/
+
+##remove bgs vagrant in case it wasn't cleaned up
+rm -rf /tmp/bgs_vagrant
+
+##clone bgs vagrant
+##will change this to be opnfv repo when commit is done
+if ! git clone -b v1.0 https://github.com/trozet/bgs_vagrant.git; then
+ printf '%s\n' 'deploy.sh: Unable to clone vagrant repo' >&2
+ exit 1
+fi
+
+cd bgs_vagrant
+
+echo "${blue}Detecting network configuration...${reset}"
+##detect host 1 or 3 interface configuration
+#output=`ip link show | grep -E "^[0-9]" | grep -Ev ": lo|tun|virbr|vboxnet" | awk '{print $2}' | sed 's/://'`
+output=`ifconfig | grep -E "^[a-zA-Z0-9]+:"| grep -Ev "lo|tun|virbr|vboxnet" | awk '{print $1}' | sed 's/://'`
+
+if [ ! "$output" ]; then
+ printf '%s\n' 'deploy.sh: Unable to detect interfaces to bridge to' >&2
+ exit 1
+fi
+
+##find number of interfaces with ip and substitute in Vagrantfile
+declare -A interface_arr
+if_counter=0
+for interface in ${output}; do
+
+ if [ "$if_counter" -ge 4 ]; then
+ break
+ fi
+ interface_ip=$(find_ip $interface)
+ if [ ! "$interface_ip" ]; then
+ continue
+ fi
+ new_ip=$(next_usable_ip $interface_ip)
+ if [ ! "$new_ip" ]; then
+ continue
+ fi
+ interface_arr[$interface]=$if_counter
+ interface_ip_arr[$if_counter]=$new_ip
+ subnet_mask=$(find_netmask $interface)
+ if [ "$if_counter" -eq 1 ]; then
+ private_subnet_mask=$subnet_mask
+ private_short_subnet_mask=$(find_short_netmask $interface)
+ fi
+ if [ "$if_counter" -eq 2 ]; then
+ public_subnet_mask=$subnet_mask
+ public_short_subnet_mask=$(find_short_netmask $interface)
+ fi
+ if [ "$if_counter" -eq 3 ]; then
+ storage_subnet_mask=$subnet_mask
+ fi
+ sed -i 's/^.*eth_replace'"$if_counter"'.*$/ config.vm.network "public_network", ip: '\""$new_ip"\"', bridge: '\'"$interface"\'', netmask: '\""$subnet_mask"\"'/' Vagrantfile
+ ((if_counter++))
+done
+
+##now remove interface config in Vagrantfile for 1 node
+##if 1, 3, or 4 interfaces set deployment type
+##if 2 interfaces remove 2nd interface and set deployment type
+if [ "$if_counter" == 1 ]; then
+ deployment_type="single_network"
+ remove_vagrant_network eth_replace1
+ remove_vagrant_network eth_replace2
+ remove_vagrant_network eth_replace3
+elif [ "$if_counter" == 2 ]; then
+ deployment_type="single_network"
+ second_interface=`echo $output | awk '{print $2}'`
+ remove_vagrant_network $second_interface
+ remove_vagrant_network eth_replace2
+elif [ "$if_counter" == 3 ]; then
+ deployment_type="three_network"
+ remove_vagrant_network eth_replace3
+else
+ deployment_type="multi_network"
+fi
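The interface loop above leans on helper functions (`find_ip`, `next_usable_ip`, `find_netmask`) defined elsewhere in deploy.sh. As a rough illustration only, hypothetical minimal versions of the two most-used helpers might look like:

```shell
# Hypothetical minimal versions of two helpers used by deploy.sh's
# interface-detection loop; the real definitions live elsewhere in the
# script and handle more edge cases.
find_ip() {
    # print the first IPv4 address configured on the given interface
    ip addr show "$1" | awk '/inet /{sub(/\/.*/, "", $2); print $2; exit}'
}

next_usable_ip() {
    # print the next host address, assuming we stay inside a /24
    local base=${1%.*} last=${1##*.}
    if [ "$last" -lt 254 ]; then
        echo "${base}.$((last + 1))"
    fi
}

next_usable_ip 192.168.1.10   # prints 192.168.1.11
```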
+
+echo "${blue}Network detected: ${deployment_type}! ${reset}"
+
+if route | grep default; then
+ echo "${blue}Default Gateway Detected ${reset}"
+ host_default_gw=$(ip route | grep default | awk '{print $3}')
+ echo "${blue}Default Gateway: $host_default_gw ${reset}"
+ default_gw_interface=$(ip route get $host_default_gw | awk '{print $3}')
+ case "${interface_arr[$default_gw_interface]}" in
+ 0)
+ echo "${blue}Default Gateway Detected on Admin Interface!${reset}"
+ sed -i 's/^.*default_gw =.*$/ default_gw = '\""$host_default_gw"\"'/' Vagrantfile
+ node_default_gw=$host_default_gw
+ ;;
+ 1)
+ echo "${red}Default Gateway Detected on Private Interface!${reset}"
+ echo "${red}Private subnet should be private and not have Internet access!${reset}"
+ exit 1
+ ;;
+ 2)
+ echo "${blue}Default Gateway Detected on Public Interface!${reset}"
+ sed -i 's/^.*default_gw =.*$/ default_gw = '\""$host_default_gw"\"'/' Vagrantfile
+ echo "${blue}Will setup NAT from Admin -> Public Network on VM!${reset}"
+ sed -i 's/^.*nat_flag =.*$/ nat_flag = true/' Vagrantfile
+ echo "${blue}Setting node gateway to be VM Admin IP${reset}"
+ node_default_gw=${interface_ip_arr[0]}
+      public_gateway=$host_default_gw
+ ;;
+ 3)
+ echo "${red}Default Gateway Detected on Storage Interface!${reset}"
+ echo "${red}Storage subnet should be private and not have Internet access!${reset}"
+ exit 1
+ ;;
+ *)
+      echo "${red}Unable to determine which interface default gateway is on. Exiting!${reset}"
+ exit 1
+ ;;
+ esac
+else
+ #assumes 24 bit mask
+ defaultgw=`echo ${interface_ip_arr[0]} | cut -d. -f1-3`
+ firstip=.1
+ defaultgw=$defaultgw$firstip
+ echo "${blue}Unable to find default gateway. Assuming it is $defaultgw ${reset}"
+ sed -i 's/^.*default_gw =.*$/ default_gw = '\""$defaultgw"\"'/' Vagrantfile
+ node_default_gw=$defaultgw
+fi
+
+if [ $base_config ]; then
+ if ! cp -f $base_config opnfv_ksgen_settings.yml; then
+    echo "${red}ERROR: Unable to copy $base_config to opnfv_ksgen_settings.yml${reset}"
+ exit 1
+ fi
+fi
+
+if [ $no_parse ]; then
+echo "${blue}Skipping parsing variables into settings file as no_parse flag is set${reset}"
+
+else
+
+echo "${blue}Gathering network parameters for Target System...this may take a few minutes${reset}"
+##Edit the ksgen settings appropriately
+##ksgen settings will be stored in /vagrant on the vagrant machine
+##if single node deployment all the variables will have the same ip
+##interface names will be enp0s3, enp0s8, enp0s9 in chef/centos7
+
+sed -i 's/^.*default_gw:.*$/default_gw:'" $node_default_gw"'/' opnfv_ksgen_settings.yml
+
+##replace private interface parameter
+##private interface will be of hosts, so we need to know the provisioned host interface name
+##we add biosdevname=0, net.ifnames=0 to the kickstart to use regular interface naming convention on hosts
+##replace IP for parameters with next IP that will be given to controller
+if [ "$deployment_type" == "single_network" ]; then
+ ##we also need to assign IP addresses to nodes
+ ##for single node, foreman is managing the single network, so we can't reserve them
+ ##not supporting single network anymore for now
+  echo "${blue}Single Network type is unsupported right now. Please check your interface configuration. Exiting. ${reset}"
+ exit 0
+
+elif [[ "$deployment_type" == "multi_network" || "$deployment_type" == "three_network" ]]; then
+
+ if [ "$deployment_type" == "three_network" ]; then
+ sed -i 's/^.*network_type:.*$/network_type: three_network/' opnfv_ksgen_settings.yml
+ fi
+
+ sed -i 's/^.*deployment_type:.*$/ deployment_type: '"$deployment_type"'/' opnfv_ksgen_settings.yml
+
+ ##get ip addresses for private network on controllers to make dhcp entries
+ ##required for controllers_ip_array global param
+ next_private_ip=${interface_ip_arr[1]}
+ type=_private
+ for node in controller1 controller2 controller3; do
+ next_private_ip=$(next_usable_ip $next_private_ip)
+ if [ ! "$next_private_ip" ]; then
+ printf '%s\n' 'deploy.sh: Unable to find next ip for private network for control nodes' >&2
+ exit 1
+ fi
+ sed -i 's/'"$node$type"'/'"$next_private_ip"'/g' opnfv_ksgen_settings.yml
+ controller_ip_array=$controller_ip_array$next_private_ip,
+ done
+
+  ##replace global param for controllers_ip_array
+ controller_ip_array=${controller_ip_array%?}
+ sed -i 's/^.*controllers_ip_array:.*$/ controllers_ip_array: '"$controller_ip_array"'/' opnfv_ksgen_settings.yml
+
+ ##now replace all the VIP variables. admin//private can be the same IP
+ ##we have to use IP's here that won't be allocated to hosts at provisioning time
+ ##therefore we increment the ip by 10 to make sure we have a safe buffer
+ next_private_ip=$(increment_ip $next_private_ip 10)
+
+  ##feed the loop via process substitution so that updates to
+  ##next_private_ip persist outside it (a piped while runs in a subshell)
+  while read -r line ; do
+  sed -i 's/^.*'"$line"'.*$/  '"$line $next_private_ip"'/' opnfv_ksgen_settings.yml
+  next_private_ip=$(next_usable_ip $next_private_ip)
+  if [ ! "$next_private_ip" ]; then
+    printf '%s\n' 'deploy.sh: Unable to find next ip for private network for vip replacement' >&2
+    exit 1
+  fi
+  done < <(grep -E 'private_vip|loadbalancer_vip|db_vip|amqp_vip|admin_vip' opnfv_ksgen_settings.yml)
+
+ ##replace foreman site
+ next_public_ip=${interface_ip_arr[2]}
+ sed -i 's/^.*foreman_url:.*$/ foreman_url:'" https:\/\/$next_public_ip"'\/api\/v2\//' opnfv_ksgen_settings.yml
+ ##replace public vips
+ next_public_ip=$(increment_ip $next_public_ip 10)
+  ##same process-substitution pattern so next_public_ip updates persist
+  while read -r line ; do
+  sed -i 's/^.*'"$line"'.*$/  '"$line $next_public_ip"'/' opnfv_ksgen_settings.yml
+  next_public_ip=$(next_usable_ip $next_public_ip)
+  if [ ! "$next_public_ip" ]; then
+    printf '%s\n' 'deploy.sh: Unable to find next ip for public network for vip replacement' >&2
+    exit 1
+  fi
+  done < <(grep -E 'public_vip' opnfv_ksgen_settings.yml)
+
+ ##replace public_network param
+ public_subnet=$(find_subnet $next_public_ip $public_subnet_mask)
+ sed -i 's/^.*public_network:.*$/ public_network:'" $public_subnet"'/' opnfv_ksgen_settings.yml
+ ##replace private_network param
+ private_subnet=$(find_subnet $next_private_ip $private_subnet_mask)
+ sed -i 's/^.*private_network:.*$/ private_network:'" $private_subnet"'/' opnfv_ksgen_settings.yml
+ ##replace storage_network
+ if [ "$deployment_type" == "three_network" ]; then
+ sed -i 's/^.*storage_network:.*$/ storage_network:'" $private_subnet"'/' opnfv_ksgen_settings.yml
+ else
+ next_storage_ip=${interface_ip_arr[3]}
+ storage_subnet=$(find_subnet $next_storage_ip $storage_subnet_mask)
+ sed -i 's/^.*storage_network:.*$/ storage_network:'" $storage_subnet"'/' opnfv_ksgen_settings.yml
+ fi
+
+ ##replace public_subnet param
+ public_subnet=$public_subnet'\'$public_short_subnet_mask
+ sed -i 's/^.*public_subnet:.*$/ public_subnet:'" $public_subnet"'/' opnfv_ksgen_settings.yml
+ ##replace private_subnet param
+ private_subnet=$private_subnet'\'$private_short_subnet_mask
+ sed -i 's/^.*private_subnet:.*$/ private_subnet:'" $private_subnet"'/' opnfv_ksgen_settings.yml
+
+ ##replace public_dns param to be foreman server
+ sed -i 's/^.*public_dns:.*$/ public_dns: '${interface_ip_arr[2]}'/' opnfv_ksgen_settings.yml
+
+ ##replace public_gateway
+ if [ -z "$public_gateway" ]; then
+ ##if unset then we assume its the first IP in the public subnet
+ public_subnet=$(find_subnet $next_public_ip $public_subnet_mask)
+ public_gateway=$(increment_subnet $public_subnet 1)
+ fi
+ sed -i 's/^.*public_gateway:.*$/ public_gateway:'" $public_gateway"'/' opnfv_ksgen_settings.yml
+
+ ##we have to define an allocation range of the public subnet to give
+ ##to neutron to use as floating IPs
+ ##we should control this subnet, so this range should work .150-200
+ ##but generally this is a bad idea and we are assuming at least a /24 subnet here
+ public_subnet=$(find_subnet $next_public_ip $public_subnet_mask)
+ public_allocation_start=$(increment_subnet $public_subnet 150)
+ public_allocation_end=$(increment_subnet $public_subnet 200)
+
+ sed -i 's/^.*public_allocation_start:.*$/ public_allocation_start:'" $public_allocation_start"'/' opnfv_ksgen_settings.yml
+ sed -i 's/^.*public_allocation_end:.*$/ public_allocation_end:'" $public_allocation_end"'/' opnfv_ksgen_settings.yml
+
+else
+  printf '%s\n' "deploy.sh: Unknown network type: $deployment_type" >&2
+ exit 1
+fi
+
+echo "${blue}Parameters Complete. Settings have been set for Foreman. ${reset}"
+
+fi
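A bash detail worth noting about the VIP-replacement loops: a `while read` loop fed by a pipeline runs in a subshell, so variables such as `next_private_ip` updated inside it are lost in the parent shell, while a loop fed by process substitution runs in the current shell and keeps its updates. A minimal standalone demonstration (not part of deploy.sh):

```shell
# Subshell pitfall: a while-read loop fed by a pipeline cannot update
# variables in the parent shell; one fed by process substitution can.
count=0
printf 'a\nb\n' | while read -r line; do count=$((count + 1)); done
echo "after pipeline: $count"              # prints 0

count=0
while read -r line; do count=$((count + 1)); done < <(printf 'a\nb\n')
echo "after process substitution: $count"  # prints 2
```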
+
+if [ $virtual ]; then
+ echo "${blue} Virtual flag detected, setting Khaleesi playbook to be opnfv-vm.yml ${reset}"
+ sed -i 's/opnfv.yml/opnfv-vm.yml/' bootstrap.sh
+fi
+
+echo "${blue}Starting Vagrant! ${reset}"
+
+##stand up vagrant
+if ! vagrant up; then
+ printf '%s\n' 'deploy.sh: Unable to start vagrant' >&2
+ exit 1
+else
+ echo "${blue}Foreman VM is up! ${reset}"
+fi
+
+if [ $virtual ]; then
+
+##Bring up VM nodes
+echo "${blue}Setting VMs up... ${reset}"
+nodes=`sed -nr '/nodes:/{:start /workaround/!{N;b start};//p}' opnfv_ksgen_settings.yml | sed -n '/^ [A-Za-z0-9]\+:$/p' | sed 's/\s*//g' | sed 's/://g'`
+##due to ODL Helium bug of OVS connecting to ODL too early, we need controllers to install first
+##this fix assumes more than I would like, but for now it should be OK as we always have
+##3 static controllers
+compute_nodes=`echo $nodes | tr " " "\n" | grep -v controller | tr "\n" " "`
+controller_nodes=`echo $nodes | tr " " "\n" | grep controller | tr "\n" " "`
+nodes=${controller_nodes}${compute_nodes}
+
+for node in ${nodes}; do
+ cd /tmp
+
+  ##remove VM nodes in case they weren't cleaned up
+ rm -rf /tmp/$node
+
+ ##clone bgs vagrant
+ ##will change this to be opnfv repo when commit is done
+ if ! git clone -b v1.0 https://github.com/trozet/bgs_vagrant.git $node; then
+ printf '%s\n' 'deploy.sh: Unable to clone vagrant repo' >&2
+ exit 1
+ fi
+
+ cd $node
+
+ if [ $base_config ]; then
+ if ! cp -f $base_config opnfv_ksgen_settings.yml; then
+      echo "${red}ERROR: Unable to copy $base_config to opnfv_ksgen_settings.yml${reset}"
+ exit 1
+ fi
+ fi
+
+ ##parse yaml into variables
+ eval $(parse_yaml opnfv_ksgen_settings.yml "config_")
+ ##find node type
+ node_type=config_nodes_${node}_type
+ node_type=$(eval echo \$$node_type)
+
+ ##find number of interfaces with ip and substitute in VagrantFile
+ output=`ifconfig | grep -E "^[a-zA-Z0-9]+:"| grep -Ev "lo|tun|virbr|vboxnet" | awk '{print $1}' | sed 's/://'`
+
+ if [ ! "$output" ]; then
+ printf '%s\n' 'deploy.sh: Unable to detect interfaces to bridge to' >&2
+ exit 1
+ fi
+
+
+ if_counter=0
+ for interface in ${output}; do
+
+ if [ "$if_counter" -ge 4 ]; then
+ break
+ fi
+ interface_ip=$(find_ip $interface)
+ if [ ! "$interface_ip" ]; then
+ continue
+ fi
+ case "${if_counter}" in
+ 0)
+ mac_string=config_nodes_${node}_mac_address
+ mac_addr=$(eval echo \$$mac_string)
+ mac_addr=$(echo $mac_addr | sed 's/:\|-//g')
+        if [ -z "$mac_addr" ]; then
+ echo "${red} Unable to find mac_address for $node! ${reset}"
+ exit 1
+ fi
+ ;;
+ 1)
+ if [ "$node_type" == "controller" ]; then
+ mac_string=config_nodes_${node}_private_mac
+ mac_addr=$(eval echo \$$mac_string)
+          if [ -z "$mac_addr" ]; then
+ echo "${red} Unable to find private_mac for $node! ${reset}"
+ exit 1
+ fi
+ else
+          ##generate random mac (urandom so dd never blocks on low entropy)
+          mac_addr=$(echo -n 00-60-2F; dd bs=1 count=3 if=/dev/urandom 2>/dev/null |hexdump -v -e '/1 "-%02X"')
+ fi
+ mac_addr=$(echo $mac_addr | sed 's/:\|-//g')
+ ;;
+ *)
+      mac_addr=$(echo -n 00-60-2F; dd bs=1 count=3 if=/dev/urandom 2>/dev/null |hexdump -v -e '/1 "-%02X"')
+ mac_addr=$(echo $mac_addr | sed 's/:\|-//g')
+ ;;
+ esac
+ sed -i 's/^.*eth_replace'"$if_counter"'.*$/ config.vm.network "public_network", bridge: '\'"$interface"\'', :mac => '\""$mac_addr"\"'/' Vagrantfile
+ ((if_counter++))
+ done
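The random MAC addresses above are built by prefixing `00-60-2F` and appending three random bytes read with `dd`. The same idea can be sketched as a standalone helper (`random_mac` is a hypothetical name; deploy.sh inlines the logic):

```shell
# random_mac is a hypothetical helper name; deploy.sh inlines this logic.
# /dev/urandom is used since /dev/random can block on low entropy.
random_mac() {
    printf '00602F'    # fixed 00-60-2F prefix, as in deploy.sh
    od -An -N3 -tx1 /dev/urandom | tr -d ' \n' | tr 'a-f' 'A-F'
}

mac=$(random_mac)
echo "$mac"    # e.g. 00602F9C4B21
```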
+
+ ##now remove interface config in Vagrantfile for 1 node
+ ##if 1, 3, or 4 interfaces set deployment type
+ ##if 2 interfaces remove 2nd interface and set deployment type
+ if [ "$if_counter" == 1 ]; then
+ deployment_type="single_network"
+ remove_vagrant_network eth_replace1
+ remove_vagrant_network eth_replace2
+ remove_vagrant_network eth_replace3
+ elif [ "$if_counter" == 2 ]; then
+ deployment_type="single_network"
+ second_interface=`echo $output | awk '{print $2}'`
+ remove_vagrant_network $second_interface
+ remove_vagrant_network eth_replace2
+ elif [ "$if_counter" == 3 ]; then
+ deployment_type="three_network"
+ remove_vagrant_network eth_replace3
+ else
+ deployment_type="multi_network"
+ fi
+
+ ##modify provisioning to do puppet install, config, and foreman check-in
+ ##substitute host_name and dns_server in the provisioning script
+ host_string=config_nodes_${node}_hostname
+ host_name=$(eval echo \$$host_string)
+ sed -i 's/^host_name=REPLACE/host_name='$host_name'/' vm_nodes_provision.sh
+ ##dns server should be the foreman server
+ sed -i 's/^dns_server=REPLACE/dns_server='${interface_ip_arr[0]}'/' vm_nodes_provision.sh
+
+ ## remove bootstrap and NAT provisioning
+ sed -i '/nat_setup.sh/d' Vagrantfile
+ sed -i 's/bootstrap.sh/vm_nodes_provision.sh/' Vagrantfile
+
+ ## modify default_gw to be node_default_gw
+ sed -i 's/^.*default_gw =.*$/ default_gw = '\""$node_default_gw"\"'/' Vagrantfile
+
+ ## modify VM memory to be 4gig
+ sed -i 's/^.*vb.memory =.*$/ vb.memory = 4096/' Vagrantfile
+
+ echo "${blue}Starting Vagrant Node $node! ${reset}"
+
+ ##stand up vagrant
+ if ! vagrant up; then
+ echo "${red} Unable to start $node ${reset}"
+ exit 1
+ else
+ echo "${blue} $node VM is up! ${reset}"
+ fi
+
+done
+
+ echo "${blue} All VMs are UP! ${reset}"
+
+fi
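The per-node loop relies on a `parse_yaml` helper (defined elsewhere in deploy.sh) that flattens the settings YAML into prefixed shell variables such as `config_nodes_<node>_type`, which are then read with `eval`. A minimal single-level sketch of that idiom — `parse_yaml_flat` is a hypothetical simplification, not the real helper, and it ignores nested keys:

```shell
# parse_yaml_flat is a hypothetical simplification of the parse_yaml
# helper used by deploy.sh; it only handles top-level scalar keys.
parse_yaml_flat() {
    # emit prefix_key="value" shell assignments for each top-level key
    sed -n 's/^\([A-Za-z0-9_]*\): *\(.*\)$/'"$2"'\1="\2"/p' "$1"
}

cat > /tmp/demo_settings.yml <<'EOF'
network_type: multi_network
default_gw: 10.4.1.1
EOF

eval "$(parse_yaml_flat /tmp/demo_settings.yml config_)"
echo "$config_network_type"   # prints multi_network
```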
diff --git a/foreman/ci/inventory/lf_pod2_ksgen_settings.yml b/foreman/ci/inventory/lf_pod2_ksgen_settings.yml
new file mode 100644
index 000000000..72935c9ad
--- /dev/null
+++ b/foreman/ci/inventory/lf_pod2_ksgen_settings.yml
@@ -0,0 +1,357 @@
+global_params:
+ admin_email: opnfv@opnfv.com
+ ha_flag: "true"
+ odl_flag: "true"
+ private_network:
+ storage_network:
+ controllers_hostnames_array: oscontroller1,oscontroller2,oscontroller3
+ controllers_ip_array:
+ amqp_vip:
+ private_subnet:
+ cinder_admin_vip:
+ cinder_private_vip:
+ cinder_public_vip:
+ db_vip:
+ glance_admin_vip:
+ glance_private_vip:
+ glance_public_vip:
+ heat_admin_vip:
+ heat_private_vip:
+ heat_public_vip:
+ heat_cfn_admin_vip:
+ heat_cfn_private_vip:
+ heat_cfn_public_vip:
+ horizon_admin_vip:
+ horizon_private_vip:
+ horizon_public_vip:
+ keystone_admin_vip:
+ keystone_private_vip:
+ keystone_public_vip:
+ loadbalancer_vip:
+ neutron_admin_vip:
+ neutron_private_vip:
+ neutron_public_vip:
+ nova_admin_vip:
+ nova_private_vip:
+ nova_public_vip:
+ external_network_flag: "true"
+ public_gateway:
+ public_dns:
+ public_network:
+ public_subnet:
+ public_allocation_start:
+ public_allocation_end:
+ deployment_type:
+network_type: multi_network
+default_gw:
+foreman:
+ seed_values:
+ - { name: heat_cfn, oldvalue: true, newvalue: false }
+workaround_puppet_version_lock: false
+opm_branch: master
+installer:
+ name: puppet
+ short_name: pupt
+ network:
+ auto_assign_floating_ip: false
+ variant:
+ short_name: m2vx
+ plugin:
+ name: neutron
+workaround_openstack_packstack_rpm: false
+tempest:
+ repo:
+ Fedora:
+ '19': http://REPLACE_ME/~REPLACE_ME/openstack-tempest-icehouse/fedora-19/
+ '20': http://REPLACE_ME/~REPLACE_ME/openstack-tempest-icehouse/fedora-20/
+ RedHat:
+ '7.0': https://repos.fedorapeople.org/repos/openstack/openstack-juno/epel-7/
+ use_virtual_env: false
+ public_allocation_end: 10.2.84.71
+ skip:
+ files: null
+ tests: null
+ public_allocation_start: 10.2.84.51
+ physnet: physnet1
+ use_custom_repo: false
+ public_subnet_cidr: 10.2.84.0/24
+ public_subnet_gateway: 10.2.84.1
+ additional_default_settings:
+ - section: compute
+ option: flavor_ref
+ value: 1
+ cirros_image_file: cirros-0.3.1-x86_64-disk.img
+ setup_method: tempest/rpm
+ test_name: all
+ rdo:
+ version: juno
+ rpm: http://repos.fedorapeople.org/repos/openstack/openstack-juno/rdo-release-juno-1.noarch.rpm
+ rpm:
+ version: 20141201
+ dir: ~{{ nodes.tempest.remote_user }}/tempest-dir
+tmp:
+ node_prefix: '{{ node.prefix | reject("none") | join("-") }}-'
+ anchors:
+ - https://repos.fedorapeople.org/repos/openstack/openstack-juno/rdo-release-juno-1.noarch.rpm
+ - http://repos.fedorapeople.org/repos/openstack/openstack-juno/
+opm_repo: https://github.com/redhat-openstack/openstack-puppet-modules.git
+workaround_vif_plugging: false
+openstack_packstack_rpm: http://REPLACE_ME/brewroot/packages/openstack-puppet-modules/2013.2/9.el6ost/noarch/openstack-puppet-modules-2013.2-9.el6ost.noarch.rpm
+nodes:
+ compute1:
+ name: oscompute11.opnfv.com
+ hostname: oscompute11.opnfv.com
+ short_name: oscompute11
+ type: compute
+ host_type: baremetal
+ hostgroup: Compute
+ mac_address: "00:25:b5:a0:00:5e"
+ bmc_ip: 172.30.8.74
+ bmc_mac: "74:a2:e6:a4:14:9c"
+ bmc_user: admin
+ bmc_pass: octopus
+ ansible_ssh_pass: "Op3nStack"
+ admin_password: ""
+ groups:
+ - compute
+ - foreman_nodes
+ - puppet
+ - rdo
+ - neutron
+ compute2:
+ name: oscompute12.opnfv.com
+ hostname: oscompute12.opnfv.com
+ short_name: oscompute12
+ type: compute
+ host_type: baremetal
+ hostgroup: Compute
+ mac_address: "00:25:b5:a0:00:3e"
+ bmc_ip: 172.30.8.73
+ bmc_mac: "a8:9d:21:a0:15:9c"
+ bmc_user: admin
+ bmc_pass: octopus
+ ansible_ssh_pass: "Op3nStack"
+ admin_password: ""
+ groups:
+ - compute
+ - foreman_nodes
+ - puppet
+ - rdo
+ - neutron
+ controller1:
+ name: oscontroller1.opnfv.com
+ hostname: oscontroller1.opnfv.com
+ short_name: oscontroller1
+ type: controller
+ host_type: baremetal
+ hostgroup: Controller_Network_ODL
+ mac_address: "00:25:b5:a0:00:af"
+ bmc_ip: 172.30.8.66
+ bmc_mac: "a8:9d:21:c9:8b:56"
+ bmc_user: admin
+ bmc_pass: octopus
+ private_ip: controller1_private
+ private_mac: "00:25:b5:b0:00:1f"
+ ansible_ssh_pass: "Op3nStack"
+ admin_password: "octopus"
+ groups:
+ - controller
+ - foreman_nodes
+ - puppet
+ - rdo
+ - neutron
+ controller2:
+ name: oscontroller2.opnfv.com
+ hostname: oscontroller2.opnfv.com
+ short_name: oscontroller2
+ type: controller
+ host_type: baremetal
+ hostgroup: Controller_Network
+ mac_address: "00:25:b5:a0:00:9e"
+ bmc_ip: 172.30.8.75
+ bmc_mac: "a8:9d:21:c9:4d:26"
+ bmc_user: admin
+ bmc_pass: octopus
+ private_ip: controller2_private
+ private_mac: "00:25:b5:b0:00:de"
+ ansible_ssh_pass: "Op3nStack"
+ admin_password: "octopus"
+ groups:
+ - controller
+ - foreman_nodes
+ - puppet
+ - rdo
+ - neutron
+ controller3:
+ name: oscontroller3.opnfv.com
+ hostname: oscontroller3.opnfv.com
+ short_name: oscontroller3
+ type: controller
+ host_type: baremetal
+ hostgroup: Controller_Network
+ mac_address: "00:25:b5:a0:00:7e"
+ bmc_ip: 172.30.8.65
+ bmc_mac: "a8:9d:21:c9:3a:92"
+ bmc_user: admin
+ bmc_pass: octopus
+ private_ip: controller3_private
+ private_mac: "00:25:b5:b0:00:be"
+ ansible_ssh_pass: "Op3nStack"
+ admin_password: "octopus"
+ groups:
+ - controller
+ - foreman_nodes
+ - puppet
+ - rdo
+ - neutron
+workaround_mysql_centos7: true
+distro:
+ name: centos
+ centos:
+ '7.0':
+ repos: []
+ short_name: c
+ short_version: 70
+ version: '7.0'
+ rhel:
+ '7.0':
+ kickstart_url: http://REPLACE_ME/released/RHEL-7/7.0/Server/x86_64/os/
+ repos:
+ - section: rhel7-server-rpms
+ name: Packages for RHEL 7 - $basearch
+ baseurl: http://REPLACE_ME/rel-eng/repos/rhel-7.0/x86_64/
+ gpgcheck: 0
+ - section: rhel-7-server-update-rpms
+ name: Update Packages for Enterprise Linux 7 - $basearch
+ baseurl: http://REPLACE_ME/rel-eng/repos/rhel-7.0-z/x86_64/
+ gpgcheck: 0
+ - section: rhel-7-server-optional-rpms
+ name: Optional Packages for Enterprise Linux 7 - $basearch
+ baseurl: http://REPLACE_ME/released/RHEL-7/7.0/Server-optional/x86_64/os/
+ gpgcheck: 0
+ - section: rhel-7-server-extras-rpms
+ name: Optional Packages for Enterprise Linux 7 - $basearch
+ baseurl: http://REPLACE_ME/rel-eng/EXTRAS-7.0-RHEL-7-20140610.0/compose/Server/x86_64/os/
+ gpgcheck: 0
+ '6.5':
+ kickstart_url: http://REPLACE_ME/released/RHEL-6/6.5/Server/x86_64/os/
+ repos:
+ - section: rhel6.5-server-rpms
+ name: Packages for RHEL 6.5 - $basearch
+ baseurl: http://REPLACE_ME.REPLACE_ME/released/RHEL-6/6.5/Server/$basearch/os/Server
+ gpgcheck: 0
+ - section: rhel-6.5-server-update-rpms
+ name: Update Packages for Enterprise Linux 6.5 - $basearch
+ baseurl: http://REPLACE_ME.REPLACE_ME/rel-eng/repos/RHEL-6.5-Z/$basearch/
+ gpgcheck: 0
+ - section: rhel-6.5-server-optional-rpms
+ name: Optional Packages for Enterprise Linux 6.5 - $basearch
+ baseurl: http://REPLACE_ME.REPLACE_ME/released/RHEL-6/6.5/Server/optional/$basearch/os
+ gpgcheck: 0
+ - section: rhel6.5-server-rpms-32bit
+ name: Packages for RHEL 6.5 - i386
+ baseurl: http://REPLACE_ME.REPLACE_ME/released/RHEL-6/6.5/Server/i386/os/Server
+ gpgcheck: 0
+ enabled: 1
+ - section: rhel-6.5-server-update-rpms-32bit
+ name: Update Packages for Enterprise Linux 6.5 - i686
+ baseurl: http://REPLACE_ME.REPLACE_ME/rel-eng/repos/RHEL-6.5-Z/i686/
+ gpgcheck: 0
+ enabled: 1
+ - section: rhel-6.5-server-optional-rpms-32bit
+ name: Optional Packages for Enterprise Linux 6.5 - i386
+ baseurl: http://REPLACE_ME.REPLACE_ME/released/RHEL-6/6.5/Server/optional/i386/os
+ gpgcheck: 0
+ enabled: 1
+ subscription:
+ username: REPLACE_ME
+ password: HWj8TE28Qi0eP2c
+ pool: 8a85f9823e3d5e43013e3ddd4e2a0977
+ config:
+ selinux: permissive
+ ntp_server: 0.pool.ntp.org
+ dns_servers:
+ - 10.4.1.1
+ - 10.4.0.2
+ reboot_delay: 1
+ initial_boot_timeout: 180
+node:
+ prefix:
+ - rdo
+ - pupt
+ - ffqiotcxz1
+ - null
+product:
+ repo_type: production
+ name: rdo
+ short_name: rdo
+ rpm:
+ CentOS: https://repos.fedorapeople.org/repos/openstack/openstack-juno/rdo-release-juno-1.noarch.rpm
+ Fedora: https://repos.fedorapeople.org/repos/openstack/openstack-juno/rdo-release-juno-1.noarch.rpm
+ RedHat: https://repos.fedorapeople.org/repos/openstack/openstack-juno/rdo-release-juno-1.noarch.rpm
+ short_version: ju
+ repo:
+ production:
+ CentOS:
+ 7.0.1406: http://repos.fedorapeople.org/repos/openstack/openstack-juno/epel-7
+ '6.5': http://repos.fedorapeople.org/repos/openstack/openstack-juno/epel-6
+ '7.0': http://repos.fedorapeople.org/repos/openstack/openstack-juno/epel-7
+ Fedora:
+ '20': http://repos.fedorapeople.org/repos/openstack/openstack-juno/fedora-20
+ '21': http://repos.fedorapeople.org/repos/openstack/openstack-juno/fedora-21
+ RedHat:
+ '6.6': http://repos.fedorapeople.org/repos/openstack/openstack-juno/epel-6
+ '6.5': http://repos.fedorapeople.org/repos/openstack/openstack-juno/epel-6
+ '7.0': http://repos.fedorapeople.org/repos/openstack/openstack-juno/epel-7
+ version: juno
+ config:
+ enable_epel: y
+ short_repo: prod
+tester:
+ name: tempest
+distro_reboot_options: '--no-wall '' Reboot is triggered by Ansible'' '
+job:
+ verbosity: 1
+ archive:
+ - '{{ tempest.dir }}/etc/tempest.conf'
+ - '{{ tempest.dir }}/etc/tempest.conf.sample'
+ - '{{ tempest.dir }}/*.log'
+ - '{{ tempest.dir }}/*.xml'
+ - /root/
+ - /var/log/
+ - /etc/nova
+ - /etc/ceilometer
+ - /etc/cinder
+ - /etc/glance
+ - /etc/keystone
+ - /etc/neutron
+ - /etc/ntp
+ - /etc/puppet
+ - /etc/qpid
+ - /etc/qpidd.conf
+ - /root
+ - /etc/yum.repos.d
+topology:
+ name: multinode
+ short_name: mt
+workaround_neutron_ovs_udev_loop: true
+workaround_glance_table_utf8: false
+verbosity:
+ debug: 0
+ info: 1
+ warning: 2
+ warn: 2
+ errors: 3
+provisioner:
+ username: admin
+ network:
+ type: nova
+ name: external
+ skip: skip_provision
+ foreman_url: https://10.2.84.2/api/v2/
+ password: octopus
+ type: foreman
+workaround_nova_compute_fix: false
+workarounds:
+ enabled: true
diff --git a/foreman/ci/nat_setup.sh b/foreman/ci/nat_setup.sh
new file mode 100755
index 000000000..349e416d6
--- /dev/null
+++ b/foreman/ci/nat_setup.sh
@@ -0,0 +1,44 @@
+#!/usr/bin/env bash
+
+#NAT setup script to setup NAT from Admin -> Public interface
+#on a Vagrant VM
+#Called by Vagrantfile in conjunction with deploy.sh
+#author: Tim Rozet (trozet@redhat.com)
+#
+#Uses Vagrant and VirtualBox
+#VagrantFile uses nat_setup.sh which sets up NAT
+#
+
+##make sure firewalld is stopped and disabled
+if ! systemctl stop firewalld; then
+ printf '%s\n' 'nat_setup.sh: Unable to stop firewalld' >&2
+ exit 1
+fi
+
+systemctl disable firewalld
+
+# Install iptables
+# Major version is pinned to force some consistency for Arno
+if ! yum -y install iptables-services-1*; then
+ printf '%s\n' 'nat_setup.sh: Unable to install iptables-services' >&2
+ exit 1
+fi
+
+##start and enable iptables service
+if ! systemctl start iptables; then
+ printf '%s\n' 'nat_setup.sh: Unable to start iptables-services' >&2
+ exit 1
+fi
+
+systemctl enable iptables
+
+##enable IP forwarding
+echo 1 > /proc/sys/net/ipv4/ip_forward
+
+##Configure iptables
+/sbin/iptables -t nat -I POSTROUTING -o enp0s10 -j MASQUERADE
+/sbin/iptables -I FORWARD 1 -i enp0s10 -o enp0s8 -m state --state RELATED,ESTABLISHED -j ACCEPT
+/sbin/iptables -I FORWARD 1 -i enp0s8 -o enp0s10 -j ACCEPT
+/sbin/iptables -I INPUT 1 -j ACCEPT
+/sbin/iptables -I OUTPUT 1 -j ACCEPT
+
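For reference, the rule set nat_setup.sh applies can be expressed as a parameterized helper — `nat_rules` is purely illustrative (the script hardcodes `enp0s10`/`enp0s8` as the public/admin pair) and only prints the commands rather than running them:

```shell
# nat_rules is purely illustrative; it prints (does not run) the same
# iptables commands nat_setup.sh applies, with the interfaces as arguments.
nat_rules() {
    pub=$1 adm=$2
    printf '%s\n' \
        "iptables -t nat -I POSTROUTING -o $pub -j MASQUERADE" \
        "iptables -I FORWARD 1 -i $pub -o $adm -m state --state RELATED,ESTABLISHED -j ACCEPT" \
        "iptables -I FORWARD 1 -i $adm -o $pub -j ACCEPT"
}

nat_rules enp0s10 enp0s8
```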
diff --git a/foreman/ci/opnfv_ksgen_settings.yml b/foreman/ci/opnfv_ksgen_settings.yml
new file mode 100644
index 000000000..21840ddf8
--- /dev/null
+++ b/foreman/ci/opnfv_ksgen_settings.yml
@@ -0,0 +1,338 @@
+global_params:
+ admin_email: opnfv@opnfv.com
+ ha_flag: "true"
+ odl_flag: "true"
+ private_network:
+ storage_network:
+ controllers_hostnames_array: oscontroller1,oscontroller2,oscontroller3
+ controllers_ip_array:
+ amqp_vip:
+ private_subnet:
+ cinder_admin_vip:
+ cinder_private_vip:
+ cinder_public_vip:
+ db_vip:
+ glance_admin_vip:
+ glance_private_vip:
+ glance_public_vip:
+ heat_admin_vip:
+ heat_private_vip:
+ heat_public_vip:
+ heat_cfn_admin_vip:
+ heat_cfn_private_vip:
+ heat_cfn_public_vip:
+ horizon_admin_vip:
+ horizon_private_vip:
+ horizon_public_vip:
+ keystone_admin_vip:
+ keystone_private_vip:
+ keystone_public_vip:
+ loadbalancer_vip:
+ neutron_admin_vip:
+ neutron_private_vip:
+ neutron_public_vip:
+ nova_admin_vip:
+ nova_private_vip:
+ nova_public_vip:
+ external_network_flag: "true"
+ public_gateway:
+ public_dns:
+ public_network:
+ public_subnet:
+ public_allocation_start:
+ public_allocation_end:
+ deployment_type:
+network_type: multi_network
+default_gw:
+foreman:
+ seed_values:
+ - { name: heat_cfn, oldvalue: true, newvalue: false }
+workaround_puppet_version_lock: false
+opm_branch: master
+installer:
+ name: puppet
+ short_name: pupt
+ network:
+ auto_assign_floating_ip: false
+ variant:
+ short_name: m2vx
+ plugin:
+ name: neutron
+workaround_openstack_packstack_rpm: false
+tempest:
+ repo:
+ Fedora:
+ '19': http://REPLACE_ME/~REPLACE_ME/openstack-tempest-icehouse/fedora-19/
+ '20': http://REPLACE_ME/~REPLACE_ME/openstack-tempest-icehouse/fedora-20/
+ RedHat:
+ '7.0': https://repos.fedorapeople.org/repos/openstack/openstack-juno/epel-7/
+ use_virtual_env: false
+ public_allocation_end: 10.2.84.71
+ skip:
+ files: null
+ tests: null
+ public_allocation_start: 10.2.84.51
+ physnet: physnet1
+ use_custom_repo: false
+ public_subnet_cidr: 10.2.84.0/24
+ public_subnet_gateway: 10.2.84.1
+ additional_default_settings:
+ - section: compute
+ option: flavor_ref
+ value: 1
+ cirros_image_file: cirros-0.3.1-x86_64-disk.img
+ setup_method: tempest/rpm
+ test_name: all
+ rdo:
+ version: juno
+ rpm: http://repos.fedorapeople.org/repos/openstack/openstack-juno/rdo-release-juno-1.noarch.rpm
+ rpm:
+ version: 20141201
+ dir: ~{{ nodes.tempest.remote_user }}/tempest-dir
+tmp:
+ node_prefix: '{{ node.prefix | reject("none") | join("-") }}-'
+ anchors:
+ - https://repos.fedorapeople.org/repos/openstack/openstack-juno/rdo-release-juno-1.noarch.rpm
+ - http://repos.fedorapeople.org/repos/openstack/openstack-juno/
+opm_repo: https://github.com/redhat-openstack/openstack-puppet-modules.git
+workaround_vif_plugging: false
+openstack_packstack_rpm: http://REPLACE_ME/brewroot/packages/openstack-puppet-modules/2013.2/9.el6ost/noarch/openstack-puppet-modules-2013.2-9.el6ost.noarch.rpm
+nodes:
+ compute:
+ name: oscompute11.opnfv.com
+ hostname: oscompute11.opnfv.com
+ short_name: oscompute11
+ type: compute
+ host_type: baremetal
+ hostgroup: Compute
+ mac_address: "10:23:45:67:89:AB"
+ bmc_ip: 10.4.17.2
+ bmc_mac: "10:23:45:67:88:AB"
+ bmc_user: root
+ bmc_pass: root
+ ansible_ssh_pass: "Op3nStack"
+ admin_password: ""
+ groups:
+ - compute
+ - foreman_nodes
+ - puppet
+ - rdo
+ - neutron
+ controller1:
+ name: oscontroller1.opnfv.com
+ hostname: oscontroller1.opnfv.com
+ short_name: oscontroller1
+ type: controller
+ host_type: baremetal
+ hostgroup: Controller_Network_ODL
+ mac_address: "10:23:45:67:89:AC"
+ bmc_ip: 10.4.17.3
+ bmc_mac: "10:23:45:67:88:AC"
+ bmc_user: root
+ bmc_pass: root
+ private_ip: controller1_private
+ private_mac: "10:23:45:67:87:AC"
+ ansible_ssh_pass: "Op3nStack"
+ admin_password: "octopus"
+ groups:
+ - controller
+ - foreman_nodes
+ - puppet
+ - rdo
+ - neutron
+ controller2:
+ name: oscontroller2.opnfv.com
+ hostname: oscontroller2.opnfv.com
+ short_name: oscontroller2
+ type: controller
+ host_type: baremetal
+ hostgroup: Controller_Network
+ mac_address: "10:23:45:67:89:AD"
+ bmc_ip: 10.4.17.4
+ bmc_mac: "10:23:45:67:88:AD"
+ bmc_user: root
+ bmc_pass: root
+ private_ip: controller2_private
+ private_mac: "10:23:45:67:87:AD"
+ ansible_ssh_pass: "Op3nStack"
+ admin_password: "octopus"
+ groups:
+ - controller
+ - foreman_nodes
+ - puppet
+ - rdo
+ - neutron
+ controller3:
+ name: oscontroller3.opnfv.com
+ hostname: oscontroller3.opnfv.com
+ short_name: oscontroller3
+ type: controller
+ host_type: baremetal
+ hostgroup: Controller_Network
+ mac_address: "10:23:45:67:89:AE"
+ bmc_ip: 10.4.17.5
+ bmc_mac: "10:23:45:67:88:AE"
+ bmc_user: root
+ bmc_pass: root
+ private_ip: controller3_private
+ private_mac: "10:23:45:67:87:AE"
+ ansible_ssh_pass: "Op3nStack"
+ admin_password: "octopus"
+ groups:
+ - controller
+ - foreman_nodes
+ - puppet
+ - rdo
+ - neutron
+workaround_mysql_centos7: true
+distro:
+ name: centos
+ centos:
+ '7.0':
+ repos: []
+ short_name: c
+ short_version: 70
+ version: '7.0'
+ rhel:
+ '7.0':
+ kickstart_url: http://REPLACE_ME/released/RHEL-7/7.0/Server/x86_64/os/
+ repos:
+ - section: rhel7-server-rpms
+ name: Packages for RHEL 7 - $basearch
+ baseurl: http://REPLACE_ME/rel-eng/repos/rhel-7.0/x86_64/
+ gpgcheck: 0
+ - section: rhel-7-server-update-rpms
+ name: Update Packages for Enterprise Linux 7 - $basearch
+ baseurl: http://REPLACE_ME/rel-eng/repos/rhel-7.0-z/x86_64/
+ gpgcheck: 0
+ - section: rhel-7-server-optional-rpms
+ name: Optional Packages for Enterprise Linux 7 - $basearch
+ baseurl: http://REPLACE_ME/released/RHEL-7/7.0/Server-optional/x86_64/os/
+ gpgcheck: 0
+ - section: rhel-7-server-extras-rpms
+ name: Extras Packages for Enterprise Linux 7 - $basearch
+ baseurl: http://REPLACE_ME/rel-eng/EXTRAS-7.0-RHEL-7-20140610.0/compose/Server/x86_64/os/
+ gpgcheck: 0
+ '6.5':
+ kickstart_url: http://REPLACE_ME/released/RHEL-6/6.5/Server/x86_64/os/
+ repos:
+ - section: rhel6.5-server-rpms
+ name: Packages for RHEL 6.5 - $basearch
+ baseurl: http://REPLACE_ME.REPLACE_ME/released/RHEL-6/6.5/Server/$basearch/os/Server
+ gpgcheck: 0
+ - section: rhel-6.5-server-update-rpms
+ name: Update Packages for Enterprise Linux 6.5 - $basearch
+ baseurl: http://REPLACE_ME.REPLACE_ME/rel-eng/repos/RHEL-6.5-Z/$basearch/
+ gpgcheck: 0
+ - section: rhel-6.5-server-optional-rpms
+ name: Optional Packages for Enterprise Linux 6.5 - $basearch
+ baseurl: http://REPLACE_ME.REPLACE_ME/released/RHEL-6/6.5/Server/optional/$basearch/os
+ gpgcheck: 0
+ - section: rhel6.5-server-rpms-32bit
+ name: Packages for RHEL 6.5 - i386
+ baseurl: http://REPLACE_ME.REPLACE_ME/released/RHEL-6/6.5/Server/i386/os/Server
+ gpgcheck: 0
+ enabled: 1
+ - section: rhel-6.5-server-update-rpms-32bit
+ name: Update Packages for Enterprise Linux 6.5 - i686
+ baseurl: http://REPLACE_ME.REPLACE_ME/rel-eng/repos/RHEL-6.5-Z/i686/
+ gpgcheck: 0
+ enabled: 1
+ - section: rhel-6.5-server-optional-rpms-32bit
+ name: Optional Packages for Enterprise Linux 6.5 - i386
+ baseurl: http://REPLACE_ME.REPLACE_ME/released/RHEL-6/6.5/Server/optional/i386/os
+ gpgcheck: 0
+ enabled: 1
+ subscription:
+ username: REPLACE_ME
+ password: REPLACE_ME
+ pool: REPLACE_ME
+ config:
+ selinux: permissive
+ ntp_server: 0.pool.ntp.org
+ dns_servers:
+ - 10.4.1.1
+ - 10.4.0.2
+ reboot_delay: 1
+ initial_boot_timeout: 180
+node:
+ prefix:
+ - rdo
+ - pupt
+ - ffqiotcxz1
+ - null
+product:
+ repo_type: production
+ name: rdo
+ short_name: rdo
+ rpm:
+ CentOS: https://repos.fedorapeople.org/repos/openstack/openstack-juno/rdo-release-juno-1.noarch.rpm
+ Fedora: https://repos.fedorapeople.org/repos/openstack/openstack-juno/rdo-release-juno-1.noarch.rpm
+ RedHat: https://repos.fedorapeople.org/repos/openstack/openstack-juno/rdo-release-juno-1.noarch.rpm
+ short_version: ju
+ repo:
+ production:
+ CentOS:
+ 7.0.1406: http://repos.fedorapeople.org/repos/openstack/openstack-juno/epel-7
+ '6.5': http://repos.fedorapeople.org/repos/openstack/openstack-juno/epel-6
+ '7.0': http://repos.fedorapeople.org/repos/openstack/openstack-juno/epel-7
+ Fedora:
+ '20': http://repos.fedorapeople.org/repos/openstack/openstack-juno/fedora-20
+ '21': http://repos.fedorapeople.org/repos/openstack/openstack-juno/fedora-21
+ RedHat:
+ '6.6': http://repos.fedorapeople.org/repos/openstack/openstack-juno/epel-6
+ '6.5': http://repos.fedorapeople.org/repos/openstack/openstack-juno/epel-6
+ '7.0': http://repos.fedorapeople.org/repos/openstack/openstack-juno/epel-7
+ version: juno
+ config:
+ enable_epel: y
+ short_repo: prod
+tester:
+ name: tempest
+distro_reboot_options: '--no-wall '' Reboot is triggered by Ansible'' '
+job:
+ verbosity: 1
+ archive:
+ - '{{ tempest.dir }}/etc/tempest.conf'
+ - '{{ tempest.dir }}/etc/tempest.conf.sample'
+ - '{{ tempest.dir }}/*.log'
+ - '{{ tempest.dir }}/*.xml'
+ - /root/
+ - /var/log/
+ - /etc/nova
+ - /etc/ceilometer
+ - /etc/cinder
+ - /etc/glance
+ - /etc/keystone
+ - /etc/neutron
+ - /etc/ntp
+ - /etc/puppet
+ - /etc/qpid
+ - /etc/qpidd.conf
+ - /etc/yum.repos.d
+topology:
+ name: multinode
+ short_name: mt
+workaround_neutron_ovs_udev_loop: true
+workaround_glance_table_utf8: false
+verbosity:
+ debug: 0
+ info: 1
+ warning: 2
+ warn: 2
+ errors: 3
+provisioner:
+ username: admin
+ network:
+ type: nova
+ name: external
+ skip: skip_provision
+ foreman_url: https://10.2.84.2/api/v2/
+ password: octopus
+ type: foreman
+workaround_nova_compute_fix: false
+workarounds:
+ enabled: true
+
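The `tmp.node_prefix` setting above relies on Jinja filters (`reject("none") | join("-")`) applied to the `node.prefix` list. As an illustration only (the function name and the `null` sentinel string are assumptions, not part of the deployment code), the same computation can be sketched in plain shell:

```shell
# Sketch of what the node_prefix template computes: drop "null" entries
# from the prefix list and join the rest with "-", keeping a trailing "-".
node_prefix() {
  out=""
  for p in "$@"; do
    [ "$p" = "null" ] && continue
    out="${out}${p}-"
  done
  printf '%s\n' "$out"
}

node_prefix rdo pupt ffqiotcxz1 null   # prints: rdo-pupt-ffqiotcxz1-
```

With the prefix list from this settings file (`rdo`, `pupt`, `ffqiotcxz1`, null), the rendered value is `rdo-pupt-ffqiotcxz1-`.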
diff --git a/foreman/ci/reload_playbook.yml b/foreman/ci/reload_playbook.yml
new file mode 100644
index 000000000..9e3d053b5
--- /dev/null
+++ b/foreman/ci/reload_playbook.yml
@@ -0,0 +1,16 @@
+---
+- hosts: all
+ tasks:
+ - name: restart machine
+ shell: sleep 2 && shutdown -r now "Ansible updates triggered"
+ async: 1
+ poll: 0
+ ignore_errors: true
+
+ - name: waiting for server to come back
+ local_action: wait_for host="{{ ansible_ssh_host }}"
+ port="{{ ansible_ssh_port }}"
+ state=started
+ delay=60
+ timeout=180
+ sudo: false
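The playbook above fires a reboot asynchronously (`async: 1`, `poll: 0`) and then polls the node's SSH port from the control machine until it reopens. A minimal shell sketch of that polling logic, assuming bash with `/dev/tcp` support (this is an illustration of the pattern, not code the playbook actually runs):

```shell
# Poll host:port until it accepts a TCP connection or a timeout expires,
# after an initial delay -- mirroring wait_for's delay/timeout semantics.
wait_for_port() {
  local host=$1 port=$2 timeout=${3:-180} delay=${4:-60}
  sleep "$delay"
  local start elapsed
  start=$(date +%s)
  until (exec 3<> "/dev/tcp/$host/$port") 2>/dev/null; do
    elapsed=$(( $(date +%s) - start ))
    [ "$elapsed" -ge "$timeout" ] && return 1
    sleep 2
  done
  return 0
}
```

For example, `wait_for_port 10.4.1.10 22 180 60` would wait a minute for the reboot to begin, then retry SSH connections for up to three minutes.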
diff --git a/foreman/ci/vm_nodes_provision.sh b/foreman/ci/vm_nodes_provision.sh
new file mode 100755
index 000000000..d0bba6452
--- /dev/null
+++ b/foreman/ci/vm_nodes_provision.sh
@@ -0,0 +1,91 @@
+#!/usr/bin/env bash
+
+# Bootstrap script for VM OPNFV nodes
+# Author: Tim Rozet (trozet@redhat.com)
+#
+# Uses Vagrant and VirtualBox
+# The Vagrantfile runs vm_nodes_provision.sh, which configures Linux on the nodes
+# Depends on Foreman being up in order to register and apply puppet
+#
+# Prerequisites:
+# Target system should be a CentOS 7 Vagrant VM
+
+##VARS
+reset=$(tput sgr0)
+blue=$(tput setaf 4)
+red=$(tput setaf 1)
+green=$(tput setaf 2)
+
+host_name=REPLACE
+dns_server=REPLACE
+##END VARS
+
+##set hostname
+echo "${blue} Setting Hostname ${reset}"
+hostnamectl set-hostname "$host_name"
+
+##remove NAT DNS
+echo "${blue} Removing DNS server on first interface ${reset}"
+if ! grep -q 'PEERDNS=no' /etc/sysconfig/network-scripts/ifcfg-enp0s3; then
+ echo "PEERDNS=no" >> /etc/sysconfig/network-scripts/ifcfg-enp0s3
+ systemctl restart NetworkManager
+fi
+
+if ! ping -c 5 www.google.com; then
+ echo "${red} No internet connection, check your route and DNS setup ${reset}"
+ exit 1
+fi
+
+# Install EPEL repo for access to many other yum repos
+# Major version is pinned to force some consistency for Arno
+yum install -y epel-release-7*
+
+# Update device-mapper-libs, needed for libvirtd on compute nodes
+# Major version is pinned to force some consistency for Arno
+if ! yum -y upgrade device-mapper-libs-1*; then
+ echo "${red} WARN: Unable to upgrade device-mapper-libs...nova-compute may not function ${reset}"
+fi
+
+# Install other required packages
+# Major version is pinned to force some consistency for Arno
+echo "${blue} Installing Puppet ${reset}"
+if ! yum install -y puppet-3*; then
+ printf '%s\n' 'vm_nodes_provision.sh: failed to install required packages' >&2
+ exit 1
+fi
+
+echo "${blue} Configuring puppet ${reset}"
+cat > /etc/puppet/puppet.conf << EOF
+
+[main]
+vardir = /var/lib/puppet
+logdir = /var/log/puppet
+rundir = /var/run/puppet
+ssldir = \$vardir/ssl
+
+[agent]
+pluginsync = true
+report = true
+ignoreschedules = true
+daemon = false
+ca_server = foreman-server.opnfv.com
+certname = $host_name
+environment = production
+server = foreman-server.opnfv.com
+runinterval = 600
+
+EOF
+
+# Enable puppet to run on system reboot (CentOS 7 is systemd-based)
+systemctl enable puppet
+
+/usr/bin/puppet agent --config /etc/puppet/puppet.conf -o --tags no_such_tag --server foreman-server.opnfv.com --no-daemonize
+
+sync
+
+# Inform the build system that we are done.
+echo "Informing Foreman that we are built"
+wget -q -O /dev/null --no-check-certificate http://foreman-server.opnfv.com:80/unattended/built
+
+echo "Starting puppet"
+systemctl start puppet
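The puppet.conf heredoc in the script above mixes shell-expanded variables (`$host_name`) with escaped ones (`\$vardir`) that must reach Puppet literally. A standalone demo of that expansion behavior (the hostname value here is hypothetical):

```shell
# Unescaped $host_name is expanded when the heredoc is read by the shell;
# escaped \$vardir is written out literally for Puppet to resolve later.
host_name=node1.example.com
conf=$(cat << EOF
certname = $host_name
ssldir = \$vardir/ssl
EOF
)
printf '%s\n' "$conf"
```

The printed config contains the expanded hostname on the `certname` line, while the `ssldir` line keeps the literal string `$vardir/ssl`.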