author    Daniel Farrell <dfarrell@redhat.com>  2015-06-01 10:06:36 -0400
committer Daniel Farrell <dfarrell@redhat.com>  2015-06-01 10:43:51 -0400
commit    220bcb74645f5beba93282a38bac0276be199a71 (patch)
tree      cdef237c5e8a806e5349d074ade2e0a0b6ca4273 /foreman/ci
parent    db9f29f35ef27bf9af45cb37661bfad8f1543f8b (diff)
Copy Foreman deploy logic from bgs_vagrant repo
This code was developed in a scratch space GitHub repo, mostly by Tim. As part
of the clean-up process for Arno, it should be migrated to Genesis and all
future work should be done via Genesis. This is trozet/bgs_vagrant as of f27548.

I didn't copy the clean.sh, deploy.sh and build.sh scripts from bgs_vagrant in
this commit. They differ from those in Genesis and need more attention for a
proper migration.

See: https://github.com/trozet/bgs_vagrant

JIRA: BGS-53

Change-Id: I512e0ea0d02f8d99048db771221abc88aa60e2d5
Signed-off-by: Daniel Farrell <dfarrell@redhat.com>
Diffstat (limited to 'foreman/ci')
-rw-r--r--  foreman/ci/README.md                |  86
-rw-r--r--  foreman/ci/Vagrantfile              |  93
-rwxr-xr-x  foreman/ci/bootstrap.sh             |  77
-rwxr-xr-x  foreman/ci/nat_setup.sh             |  43
-rw-r--r--  foreman/ci/opnfv_ksgen_settings.yml | 338
-rw-r--r--  foreman/ci/reload_playbook.yml      |  16
-rwxr-xr-x  foreman/ci/vm_nodes_provision.sh    |  98
7 files changed, 751 insertions, 0 deletions
diff --git a/foreman/ci/README.md b/foreman/ci/README.md
new file mode 100644
index 0000000..9417ee5
--- /dev/null
+++ b/foreman/ci/README.md
@@ -0,0 +1,86 @@
+# Foreman/QuickStack Automatic Deployment README
+
+A simple bash script (deploy.sh) provisions a Foreman/QuickStack VM server plus 4-5 additional baremetal or VM nodes into an OpenStack HA + OpenDaylight environment.
+
+##Pre-Requisites
+####Baremetal:
+* At least 5 baremetal servers, with 3 interfaces minimum, all connected to separate VLANs
+* DHCP should not be running in any VLAN. Foreman will act as a DHCP server.
+* On the baremetal server that will be your JumpHost, the 3 interfaces must be configured with IP addresses
+* The baremetal JumpHost needs an RPM-based Linux (CentOS 7 will do) with an up-to-date kernel (yum update kernel) and at least 2GB of RAM
+* Nodes must be set to PXE boot first in priority, off the first NIC, which is connected to the same VLAN as the first NIC of your JumpHost
+* Nodes need BMC/OOB management set up via IPMI
+* Internet access via first (Admin) or third interface (Public)
+* No other hypervisors should be running on JumpHost
+
+####VM Nodes:
+* JumpHost with 3 interfaces, configured with IP addresses, connected to separate VLANs
+* DHCP should not be running in any VLAN. Foreman will act as a DHCP server
+* The JumpHost needs an RPM-based Linux (CentOS 7 will do) with an up-to-date kernel (yum update kernel) and at least 24GB of RAM
+* Internet access via the first (Admin) or third interface (Public)
+* No other hypervisors should be running on JumpHost
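The JumpHost requirements above can be spot-checked before deploying; a minimal sketch, assuming a standard Linux /proc and sysfs layout (the function names are illustrative, not part of the repo):

```shell
#!/usr/bin/env bash
# Illustrative pre-flight checks for the JumpHost; not part of bgs_vagrant.

check_ram_gb() {
  # Succeeds if the host has at least $1 GB of RAM (per /proc/meminfo).
  local need_kb=$(( $1 * 1024 * 1024 ))
  local have_kb
  have_kb=$(awk '/MemTotal/ {print $2}' /proc/meminfo 2>/dev/null || echo 0)
  [ "${have_kb:-0}" -ge "$need_kb" ]
}

check_iface_count() {
  # Succeeds if at least $1 non-loopback interfaces are present.
  local count
  count=$(ls /sys/class/net 2>/dev/null | grep -vc '^lo$')
  [ "${count:-0}" -ge "$1" ]
}

check_ram_gb 2      || echo "WARN: less than 2GB RAM"
check_iface_count 3 || echo "WARN: fewer than 3 usable interfaces"
```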
+
+##How It Works
+
+###deploy.sh:
+
+* Detects your network configuration (3 or 4 usable interfaces)
+* Modifies a “ksgen.yml” settings file and Vagrantfile with necessary network info
+* Installs Vagrant and dependencies
+* Downloads the CentOS 7 Vagrant base box, then issues a “vagrant up” to start the VM
+* The Vagrantfile points to bootstrap.sh as the provisioner to take over the rest of the install
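The "modifies the Vagrantfile" step works by replacing the eth_replaceN placeholders (visible in the Vagrantfile in this commit) with the detected NIC names. A simplified sketch of that substitution, not the actual deploy.sh code:

```shell
# Sketch: substitute detected NIC names for the eth_replaceN
# placeholders in a Vagrantfile (simplified; the real deploy.sh also
# detects and validates the interfaces first).
patch_vagrantfile() {
  local file=$1; shift
  local i=0 nic
  for nic in "$@"; do
    sed -i "s/eth_replace${i}/${nic}/" "$file"
    i=$(( i + 1 ))
  done
}

# e.g. patch_vagrantfile Vagrantfile enp0s3 enp0s8 enp0s9 enp0s10
```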
+
+###bootstrap.sh:
+
+* Runs inside the VM once it is up
+* Installs Khaleesi, Ansible, and Python dependencies
+* Makes a call to Khaleesi to start a playbook: opnfv.yml + “ksgen.yml” settings file
+
+###Khaleesi (Ansible):
+
+* Runs through the playbook to install Foreman/QuickStack inside of the VM
+* Configures services needed for a JumpHost: DHCP, TFTP, DNS
+* Uses info from “ksgen.yml” file to add your nodes into Foreman and set them to Build mode
+
+####Baremetal Only:
+* Issues an API call to Foreman to rebuild all nodes
+* Ansible then waits to make sure nodes come back via ssh checks
+* Ansible then waits for puppet to run on each node and complete
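Foreman's v2 REST API accepts a PUT on /hosts/:id with build set to true, which is the kind of call the rebuild step makes per host. A sketch (the URL, credentials and host name are placeholders taken from the sample settings file, not a tested invocation):

```shell
# Sketch: print the curl command that flips one host into Build mode
# via Foreman's v2 API (placeholder URL, credentials and host name).
rebuild_request() {
  local api_url=$1 host=$2
  printf "curl -k -u admin:octopus -X PUT -H 'Content-Type: application/json' -d '{\"host\":{\"build\":true}}' %s/hosts/%s\n" \
    "$api_url" "$host"
}

rebuild_request https://10.2.84.2/api/v2 oscompute11.opnfv.com
# After the API call, the node is power-cycled (e.g. over IPMI) so it
# PXE-boots into the build.
```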
+
+####VM Only:
+* deploy.sh then brings up 5 more Vagrant VMs
+* Checks into Foreman and tells Foreman nodes are built
+* Configures and starts puppet on each node
+
+##Execution Instructions
+
+* On your JumpHost, as root, clone the repo into /root/: 'git clone https://github.com/trozet/bgs_vagrant.git'
+
+####Baremetal Only:
+* Edit opnfv_ksgen_settings.yml → “nodes” section:
+
+ * For each node (compute, controller1..controller3):
+   * mac_address - change to the MAC address of that node's Admin NIC (1st NIC)
+   * bmc_ip - change to the BMC (out-of-band) IP
+   * bmc_mac - change to the BMC MAC address
+   * bmc_user - IPMI username
+   * bmc_pass - IPMI password
+
+ * For each controller node:
+   * private_mac - change to the MAC address of the node's Private NIC (2nd NIC)
+
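For example, an edited compute entry might look like the fragment below (all values are placeholders; substitute your own MACs, BMC IPs and IPMI credentials):

```yaml
compute:
  mac_address: "52:54:00:aa:bb:01"   # Admin (1st) NIC of the node
  bmc_ip: 192.168.20.11              # BMC (out-of-band) IP
  bmc_mac: "52:54:00:aa:bb:02"       # BMC MAC address
  bmc_user: ipmi_admin               # IPMI username
  bmc_pass: ipmi_secret              # IPMI password
```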
+* Execute deploy.sh via: ./deploy.sh -base_config /root/bgs_vagrant/opnfv_ksgen_settings.yml
+
+####VM Only:
+* Execute deploy.sh via: ./deploy.sh -virtual
+* Install directory for each VM will be in /tmp (for example /tmp/compute, /tmp/controller1)
+
+####Both Approaches:
+* The install directory for the Foreman server is /tmp/bgs_vagrant/ - this is where Vagrant is launched from automatically
+* To access the VM you can 'cd /tmp/bgs_vagrant' and type 'vagrant ssh'
+* To access Foreman enter the IP address shown in 'cat /tmp/bgs_vagrant/opnfv_ksgen_settings.yml | grep foreman_url'
+* The default user/pass is admin/octopus
+
+##Redeploying
+Make sure you run ./clean.sh for the baremetal deployment with your opnfv_ksgen_settings.yml file as "-base_config". This will ensure that your nodes are turned off and that your VM is destroyed ("vagrant destroy" in the /tmp/bgs_vagrant directory).
+For VM redeployment, make sure you "vagrant destroy" in each /tmp/<node> as well if you want to redeploy. To check and make sure no VMs are still running on your Jumphost you can use "vboxmanage list runningvms".
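The manual cleanup for a VM redeploy can be scripted; a sketch assuming each node directory contains its own Vagrantfile, as described above (requires vagrant and VirtualBox on the JumpHost):

```shell
# Sketch: destroy any leftover Vagrant VMs before redeploying
# (directories as documented above).
destroy_vm_dirs() {
  local d
  for d in "$@"; do
    if [ -f "$d/Vagrantfile" ]; then
      ( cd "$d" && vagrant destroy -f )
    fi
  done
}

destroy_vm_dirs /tmp/bgs_vagrant /tmp/compute \
                /tmp/controller1 /tmp/controller2 /tmp/controller3
# Then confirm nothing is still running:
#   vboxmanage list runningvms
```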
diff --git a/foreman/ci/Vagrantfile b/foreman/ci/Vagrantfile
new file mode 100644
index 0000000..100e12d
--- /dev/null
+++ b/foreman/ci/Vagrantfile
@@ -0,0 +1,93 @@
+# -*- mode: ruby -*-
+# vi: set ft=ruby :
+
+# All Vagrant configuration is done below. The "2" in Vagrant.configure
+# configures the configuration version (we support older styles for
+# backwards compatibility). Please don't change it unless you know what
+# you're doing.
+Vagrant.configure(2) do |config|
+ # The most common configuration options are documented and commented below.
+ # For a complete reference, please see the online documentation at
+ # https://docs.vagrantup.com.
+
+ # Every Vagrant development environment requires a box. You can search for
+ # boxes at https://atlas.hashicorp.com/search.
+ config.vm.box = "chef/centos-7.0"
+
+ # Disable automatic box update checking. If you disable this, then
+ # boxes will only be checked for updates when the user runs
+ # `vagrant box outdated`. This is not recommended.
+ # config.vm.box_check_update = false
+
+ # Create a forwarded port mapping which allows access to a specific port
+ # within the machine from a port on the host machine. In the example below,
+ # accessing "localhost:8080" will access port 80 on the guest machine.
+ # config.vm.network "forwarded_port", guest: 80, host: 8080
+
+ # Create a private network, which allows host-only access to the machine
+ # using a specific IP.
+ # config.vm.network "private_network", ip: "192.168.33.10"
+
+  # Create a public network, which is generally matched to a bridged network.
+ # Bridged networks make the machine appear as another physical device on
+ # your network.
+ # config.vm.network "public_network"
+ config.vm.network "public_network", ip: "10.4.1.2", bridge: 'eth_replace0'
+ config.vm.network "public_network", ip: "10.4.9.2", bridge: 'eth_replace1'
+ config.vm.network "public_network", ip: "10.2.84.2", bridge: 'eth_replace2'
+ config.vm.network "public_network", ip: "10.3.84.2", bridge: 'eth_replace3'
+
+ # IP address of your LAN's router
+ default_gw = ""
+ nat_flag = false
+
+ # Share an additional folder to the guest VM. The first argument is
+ # the path on the host to the actual folder. The second argument is
+ # the path on the guest to mount the folder. And the optional third
+ # argument is a set of non-required options.
+ # config.vm.synced_folder "../data", "/vagrant_data"
+
+ # Provider-specific configuration so you can fine-tune various
+ # backing providers for Vagrant. These expose provider-specific options.
+ # Example for VirtualBox:
+ #
+ config.vm.provider "virtualbox" do |vb|
+ # # Display the VirtualBox GUI when booting the machine
+ # vb.gui = true
+ #
+ # # Customize the amount of memory on the VM:
+ vb.memory = 2048
+ vb.cpus = 2
+ end
+ #
+ # View the documentation for the provider you are using for more
+ # information on available options.
+
+ # Define a Vagrant Push strategy for pushing to Atlas. Other push strategies
+ # such as FTP and Heroku are also available. See the documentation at
+ # https://docs.vagrantup.com/v2/push/atlas.html for more information.
+ # config.push.define "atlas" do |push|
+ # push.app = "YOUR_ATLAS_USERNAME/YOUR_APPLICATION_NAME"
+ # end
+
+ # Enable provisioning with a shell script. Additional provisioners such as
+ # Puppet, Chef, Ansible, Salt, and Docker are also available. Please see the
+ # documentation for more information about their specific syntax and use.
+ # config.vm.provision "shell", inline: <<-SHELL
+ # sudo apt-get update
+ # sudo apt-get install -y apache2
+ # SHELL
+
+ config.ssh.username = 'root'
+ config.ssh.password = 'vagrant'
+ config.ssh.insert_key = 'true'
+ config.vm.provision "ansible" do |ansible|
+ ansible.playbook = "reload_playbook.yml"
+ end
+ config.vm.provision :shell, :inline => "mount -t vboxsf vagrant /vagrant"
+ config.vm.provision :shell, :inline => "route add default gw #{default_gw}"
+ if nat_flag
+ config.vm.provision :shell, path: "nat_setup.sh"
+ end
+ config.vm.provision :shell, path: "bootstrap.sh"
+end
diff --git a/foreman/ci/bootstrap.sh b/foreman/ci/bootstrap.sh
new file mode 100755
index 0000000..1b36478
--- /dev/null
+++ b/foreman/ci/bootstrap.sh
@@ -0,0 +1,77 @@
+#!/usr/bin/env bash
+
+#bootstrap script for installing/running Khaleesi in the Foreman/QuickStack VM
+#author: Tim Rozet (trozet@redhat.com)
+#
+#Uses Vagrant and VirtualBox
+#Vagrantfile uses bootstrap.sh, which installs Khaleesi
+#Khaleesi will install and configure Foreman/QuickStack
+#
+#Pre-requisites:
+#Target system should be CentOS 7
+#Ensure the host's kernel is up to date (yum update)
+
+##VARS
+reset=`tput sgr0`
+blue=`tput setaf 4`
+red=`tput setaf 1`
+green=`tput setaf 2`
+
+##END VARS
+
+
+##install EPEL
+if ! yum repolist | grep "epel/"; then
+ if ! rpm -Uvh http://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-5.noarch.rpm; then
+ printf '%s\n' 'bootstrap.sh: Unable to configure EPEL repo' >&2
+ exit 1
+ fi
+else
+ printf '%s\n' 'bootstrap.sh: Skipping EPEL repo as it is already configured.'
+fi
+
+##install python,gcc,git
+if ! yum -y install python-pip python-virtualenv gcc git; then
+ printf '%s\n' 'bootstrap.sh: Unable to install python,gcc,git packages' >&2
+ exit 1
+fi
+
+##Install sshpass
+if ! yum -y install sshpass; then
+ printf '%s\n' 'bootstrap.sh: Unable to install sshpass' >&2
+ exit 1
+fi
+
+cd /opt
+
+echo "Cloning khaleesi to /opt"
+
+if [ ! -d khaleesi ]; then
+ if ! git clone -b opnfv https://github.com/trozet/khaleesi.git; then
+ printf '%s\n' 'bootstrap.sh: Unable to git clone khaleesi' >&2
+ exit 1
+ fi
+fi
+
+if ! pip install ansible; then
+ printf '%s\n' 'bootstrap.sh: Unable to install ansible' >&2
+ exit 1
+fi
+
+if ! pip install requests; then
+ printf '%s\n' 'bootstrap.sh: Unable to install requests python package' >&2
+ exit 1
+fi
+
+
+cd khaleesi
+
+cp ansible.cfg.example ansible.cfg
+
+echo "Completed Installing Khaleesi"
+
+cd /opt/khaleesi/
+
+ansible localhost -m setup -i local_hosts
+
+./run.sh --no-logs --use /vagrant/opnfv_ksgen_settings.yml playbooks/opnfv.yml
diff --git a/foreman/ci/nat_setup.sh b/foreman/ci/nat_setup.sh
new file mode 100755
index 0000000..398a826
--- /dev/null
+++ b/foreman/ci/nat_setup.sh
@@ -0,0 +1,43 @@
+#!/usr/bin/env bash
+
+#NAT setup script to set up NAT from the Admin -> Public interface
+#on a Vagrant VM
+#Called by Vagrantfile in conjunction with deploy.sh
+#author: Tim Rozet (trozet@redhat.com)
+#
+#Uses Vagrant and VirtualBox
+#Vagrantfile uses nat_setup.sh, which sets up NAT
+#
+
+##make sure firewalld is stopped and disabled
+if ! systemctl stop firewalld; then
+ printf '%s\n' 'nat_setup.sh: Unable to stop firewalld' >&2
+ exit 1
+fi
+
+systemctl disable firewalld
+
+##install iptables
+if ! yum -y install iptables-services; then
+ printf '%s\n' 'nat_setup.sh: Unable to install iptables-services' >&2
+ exit 1
+fi
+
+##start and enable iptables service
+if ! systemctl start iptables; then
+  printf '%s\n' 'nat_setup.sh: Unable to start the iptables service' >&2
+ exit 1
+fi
+
+systemctl enable iptables
+
+##enable IP forwarding
+echo 1 > /proc/sys/net/ipv4/ip_forward
+
+##Configure iptables
+/sbin/iptables -t nat -I POSTROUTING -o enp0s10 -j MASQUERADE
+/sbin/iptables -I FORWARD 1 -i enp0s10 -o enp0s8 -m state --state RELATED,ESTABLISHED -j ACCEPT
+/sbin/iptables -I FORWARD 1 -i enp0s8 -o enp0s10 -j ACCEPT
+/sbin/iptables -I INPUT 1 -j ACCEPT
+/sbin/iptables -I OUTPUT 1 -j ACCEPT
+
diff --git a/foreman/ci/opnfv_ksgen_settings.yml b/foreman/ci/opnfv_ksgen_settings.yml
new file mode 100644
index 0000000..21840dd
--- /dev/null
+++ b/foreman/ci/opnfv_ksgen_settings.yml
@@ -0,0 +1,338 @@
+global_params:
+ admin_email: opnfv@opnfv.com
+ ha_flag: "true"
+ odl_flag: "true"
+ private_network:
+ storage_network:
+ controllers_hostnames_array: oscontroller1,oscontroller2,oscontroller3
+ controllers_ip_array:
+ amqp_vip:
+ private_subnet:
+ cinder_admin_vip:
+ cinder_private_vip:
+ cinder_public_vip:
+ db_vip:
+ glance_admin_vip:
+ glance_private_vip:
+ glance_public_vip:
+ heat_admin_vip:
+ heat_private_vip:
+ heat_public_vip:
+ heat_cfn_admin_vip:
+ heat_cfn_private_vip:
+ heat_cfn_public_vip:
+ horizon_admin_vip:
+ horizon_private_vip:
+ horizon_public_vip:
+ keystone_admin_vip:
+ keystone_private_vip:
+ keystone_public_vip:
+ loadbalancer_vip:
+ neutron_admin_vip:
+ neutron_private_vip:
+ neutron_public_vip:
+ nova_admin_vip:
+ nova_private_vip:
+ nova_public_vip:
+ external_network_flag: "true"
+ public_gateway:
+ public_dns:
+ public_network:
+ public_subnet:
+ public_allocation_start:
+ public_allocation_end:
+ deployment_type:
+network_type: multi_network
+default_gw:
+foreman:
+ seed_values:
+ - { name: heat_cfn, oldvalue: true, newvalue: false }
+workaround_puppet_version_lock: false
+opm_branch: master
+installer:
+ name: puppet
+ short_name: pupt
+ network:
+ auto_assign_floating_ip: false
+ variant:
+ short_name: m2vx
+ plugin:
+ name: neutron
+workaround_openstack_packstack_rpm: false
+tempest:
+ repo:
+ Fedora:
+ '19': http://REPLACE_ME/~REPLACE_ME/openstack-tempest-icehouse/fedora-19/
+ '20': http://REPLACE_ME/~REPLACE_ME/openstack-tempest-icehouse/fedora-20/
+ RedHat:
+ '7.0': https://repos.fedorapeople.org/repos/openstack/openstack-juno/epel-7/
+ use_virtual_env: false
+ public_allocation_end: 10.2.84.71
+ skip:
+ files: null
+ tests: null
+ public_allocation_start: 10.2.84.51
+ physnet: physnet1
+ use_custom_repo: false
+ public_subnet_cidr: 10.2.84.0/24
+ public_subnet_gateway: 10.2.84.1
+ additional_default_settings:
+ - section: compute
+ option: flavor_ref
+ value: 1
+ cirros_image_file: cirros-0.3.1-x86_64-disk.img
+ setup_method: tempest/rpm
+ test_name: all
+ rdo:
+ version: juno
+ rpm: http://repos.fedorapeople.org/repos/openstack/openstack-juno/rdo-release-juno-1.noarch.rpm
+ rpm:
+ version: 20141201
+ dir: ~{{ nodes.tempest.remote_user }}/tempest-dir
+tmp:
+ node_prefix: '{{ node.prefix | reject("none") | join("-") }}-'
+ anchors:
+ - https://repos.fedorapeople.org/repos/openstack/openstack-juno/rdo-release-juno-1.noarch.rpm
+ - http://repos.fedorapeople.org/repos/openstack/openstack-juno/
+opm_repo: https://github.com/redhat-openstack/openstack-puppet-modules.git
+workaround_vif_plugging: false
+openstack_packstack_rpm: http://REPLACE_ME/brewroot/packages/openstack-puppet-modules/2013.2/9.el6ost/noarch/openstack-puppet-modules-2013.2-9.el6ost.noarch.rpm
+nodes:
+ compute:
+ name: oscompute11.opnfv.com
+ hostname: oscompute11.opnfv.com
+ short_name: oscompute11
+ type: compute
+ host_type: baremetal
+ hostgroup: Compute
+ mac_address: "10:23:45:67:89:AB"
+ bmc_ip: 10.4.17.2
+ bmc_mac: "10:23:45:67:88:AB"
+ bmc_user: root
+ bmc_pass: root
+ ansible_ssh_pass: "Op3nStack"
+ admin_password: ""
+ groups:
+ - compute
+ - foreman_nodes
+ - puppet
+ - rdo
+ - neutron
+ controller1:
+ name: oscontroller1.opnfv.com
+ hostname: oscontroller1.opnfv.com
+ short_name: oscontroller1
+ type: controller
+ host_type: baremetal
+ hostgroup: Controller_Network_ODL
+ mac_address: "10:23:45:67:89:AC"
+ bmc_ip: 10.4.17.3
+ bmc_mac: "10:23:45:67:88:AC"
+ bmc_user: root
+ bmc_pass: root
+ private_ip: controller1_private
+ private_mac: "10:23:45:67:87:AC"
+ ansible_ssh_pass: "Op3nStack"
+ admin_password: "octopus"
+ groups:
+ - controller
+ - foreman_nodes
+ - puppet
+ - rdo
+ - neutron
+ controller2:
+ name: oscontroller2.opnfv.com
+ hostname: oscontroller2.opnfv.com
+ short_name: oscontroller2
+ type: controller
+ host_type: baremetal
+ hostgroup: Controller_Network
+ mac_address: "10:23:45:67:89:AD"
+ bmc_ip: 10.4.17.4
+ bmc_mac: "10:23:45:67:88:AD"
+ bmc_user: root
+ bmc_pass: root
+ private_ip: controller2_private
+ private_mac: "10:23:45:67:87:AD"
+ ansible_ssh_pass: "Op3nStack"
+ admin_password: "octopus"
+ groups:
+ - controller
+ - foreman_nodes
+ - puppet
+ - rdo
+ - neutron
+ controller3:
+ name: oscontroller3.opnfv.com
+ hostname: oscontroller3.opnfv.com
+ short_name: oscontroller3
+ type: controller
+ host_type: baremetal
+ hostgroup: Controller_Network
+ mac_address: "10:23:45:67:89:AE"
+ bmc_ip: 10.4.17.5
+ bmc_mac: "10:23:45:67:88:AE"
+ bmc_user: root
+ bmc_pass: root
+ private_ip: controller3_private
+ private_mac: "10:23:45:67:87:AE"
+ ansible_ssh_pass: "Op3nStack"
+ admin_password: "octopus"
+ groups:
+ - controller
+ - foreman_nodes
+ - puppet
+ - rdo
+ - neutron
+workaround_mysql_centos7: true
+distro:
+ name: centos
+ centos:
+ '7.0':
+ repos: []
+ short_name: c
+ short_version: 70
+ version: '7.0'
+ rhel:
+ '7.0':
+ kickstart_url: http://REPLACE_ME/released/RHEL-7/7.0/Server/x86_64/os/
+ repos:
+ - section: rhel7-server-rpms
+ name: Packages for RHEL 7 - $basearch
+ baseurl: http://REPLACE_ME/rel-eng/repos/rhel-7.0/x86_64/
+ gpgcheck: 0
+ - section: rhel-7-server-update-rpms
+ name: Update Packages for Enterprise Linux 7 - $basearch
+ baseurl: http://REPLACE_ME/rel-eng/repos/rhel-7.0-z/x86_64/
+ gpgcheck: 0
+ - section: rhel-7-server-optional-rpms
+ name: Optional Packages for Enterprise Linux 7 - $basearch
+ baseurl: http://REPLACE_ME/released/RHEL-7/7.0/Server-optional/x86_64/os/
+ gpgcheck: 0
+ - section: rhel-7-server-extras-rpms
+ name: Optional Packages for Enterprise Linux 7 - $basearch
+ baseurl: http://REPLACE_ME/rel-eng/EXTRAS-7.0-RHEL-7-20140610.0/compose/Server/x86_64/os/
+ gpgcheck: 0
+ '6.5':
+ kickstart_url: http://REPLACE_ME/released/RHEL-6/6.5/Server/x86_64/os/
+ repos:
+ - section: rhel6.5-server-rpms
+ name: Packages for RHEL 6.5 - $basearch
+ baseurl: http://REPLACE_ME.REPLACE_ME/released/RHEL-6/6.5/Server/$basearch/os/Server
+ gpgcheck: 0
+ - section: rhel-6.5-server-update-rpms
+ name: Update Packages for Enterprise Linux 6.5 - $basearch
+ baseurl: http://REPLACE_ME.REPLACE_ME/rel-eng/repos/RHEL-6.5-Z/$basearch/
+ gpgcheck: 0
+ - section: rhel-6.5-server-optional-rpms
+ name: Optional Packages for Enterprise Linux 6.5 - $basearch
+ baseurl: http://REPLACE_ME.REPLACE_ME/released/RHEL-6/6.5/Server/optional/$basearch/os
+ gpgcheck: 0
+ - section: rhel6.5-server-rpms-32bit
+ name: Packages for RHEL 6.5 - i386
+ baseurl: http://REPLACE_ME.REPLACE_ME/released/RHEL-6/6.5/Server/i386/os/Server
+ gpgcheck: 0
+ enabled: 1
+ - section: rhel-6.5-server-update-rpms-32bit
+ name: Update Packages for Enterprise Linux 6.5 - i686
+ baseurl: http://REPLACE_ME.REPLACE_ME/rel-eng/repos/RHEL-6.5-Z/i686/
+ gpgcheck: 0
+ enabled: 1
+ - section: rhel-6.5-server-optional-rpms-32bit
+ name: Optional Packages for Enterprise Linux 6.5 - i386
+ baseurl: http://REPLACE_ME.REPLACE_ME/released/RHEL-6/6.5/Server/optional/i386/os
+ gpgcheck: 0
+ enabled: 1
+ subscription:
+ username: REPLACE_ME
+ password: HWj8TE28Qi0eP2c
+ pool: 8a85f9823e3d5e43013e3ddd4e2a0977
+ config:
+ selinux: permissive
+ ntp_server: 0.pool.ntp.org
+ dns_servers:
+ - 10.4.1.1
+ - 10.4.0.2
+ reboot_delay: 1
+ initial_boot_timeout: 180
+node:
+ prefix:
+ - rdo
+ - pupt
+ - ffqiotcxz1
+ - null
+product:
+ repo_type: production
+ name: rdo
+ short_name: rdo
+ rpm:
+ CentOS: https://repos.fedorapeople.org/repos/openstack/openstack-juno/rdo-release-juno-1.noarch.rpm
+ Fedora: https://repos.fedorapeople.org/repos/openstack/openstack-juno/rdo-release-juno-1.noarch.rpm
+ RedHat: https://repos.fedorapeople.org/repos/openstack/openstack-juno/rdo-release-juno-1.noarch.rpm
+ short_version: ju
+ repo:
+ production:
+ CentOS:
+ 7.0.1406: http://repos.fedorapeople.org/repos/openstack/openstack-juno/epel-7
+ '6.5': http://repos.fedorapeople.org/repos/openstack/openstack-juno/epel-6
+ '7.0': http://repos.fedorapeople.org/repos/openstack/openstack-juno/epel-7
+ Fedora:
+ '20': http://repos.fedorapeople.org/repos/openstack/openstack-juno/fedora-20
+ '21': http://repos.fedorapeople.org/repos/openstack/openstack-juno/fedora-21
+ RedHat:
+ '6.6': http://repos.fedorapeople.org/repos/openstack/openstack-juno/epel-6
+ '6.5': http://repos.fedorapeople.org/repos/openstack/openstack-juno/epel-6
+ '7.0': http://repos.fedorapeople.org/repos/openstack/openstack-juno/epel-7
+ version: juno
+ config:
+ enable_epel: y
+ short_repo: prod
+tester:
+ name: tempest
+distro_reboot_options: '--no-wall '' Reboot is triggered by Ansible'' '
+job:
+ verbosity: 1
+ archive:
+ - '{{ tempest.dir }}/etc/tempest.conf'
+ - '{{ tempest.dir }}/etc/tempest.conf.sample'
+ - '{{ tempest.dir }}/*.log'
+ - '{{ tempest.dir }}/*.xml'
+ - /root/
+ - /var/log/
+ - /etc/nova
+ - /etc/ceilometer
+ - /etc/cinder
+ - /etc/glance
+ - /etc/keystone
+ - /etc/neutron
+ - /etc/ntp
+ - /etc/puppet
+ - /etc/qpid
+ - /etc/qpidd.conf
+ - /root
+ - /etc/yum.repos.d
+ - /etc/yum.repos.d
+topology:
+ name: multinode
+ short_name: mt
+workaround_neutron_ovs_udev_loop: true
+workaround_glance_table_utf8: false
+verbosity:
+ debug: 0
+ info: 1
+ warning: 2
+ warn: 2
+ errors: 3
+provisioner:
+ username: admin
+ network:
+ type: nova
+ name: external
+ skip: skip_provision
+ foreman_url: https://10.2.84.2/api/v2/
+ password: octopus
+ type: foreman
+workaround_nova_compute_fix: false
+workarounds:
+ enabled: true
+
diff --git a/foreman/ci/reload_playbook.yml b/foreman/ci/reload_playbook.yml
new file mode 100644
index 0000000..9e3d053
--- /dev/null
+++ b/foreman/ci/reload_playbook.yml
@@ -0,0 +1,16 @@
+---
+- hosts: all
+ tasks:
+ - name: restart machine
+ shell: sleep 2 && shutdown -r now "Ansible updates triggered"
+ async: 1
+ poll: 0
+ ignore_errors: true
+
+ - name: waiting for server to come back
+ local_action: wait_for host="{{ ansible_ssh_host }}"
+ port="{{ ansible_ssh_port }}"
+ state=started
+ delay=60
+ timeout=180
+ sudo: false
diff --git a/foreman/ci/vm_nodes_provision.sh b/foreman/ci/vm_nodes_provision.sh
new file mode 100755
index 0000000..8b24aed
--- /dev/null
+++ b/foreman/ci/vm_nodes_provision.sh
@@ -0,0 +1,98 @@
+#!/usr/bin/env bash
+
+#bootstrap script for VM OPNFV nodes
+#author: Tim Rozet (trozet@redhat.com)
+#
+#Uses Vagrant and VirtualBox
+#Vagrantfile uses vm_nodes_provision.sh, which configures Linux on the nodes
+#Depends on Foreman being up in order to register and apply puppet
+#
+#Pre-requisites:
+#Target system should be a CentOS 7 Vagrant VM
+
+##VARS
+reset=`tput sgr0`
+blue=`tput setaf 4`
+red=`tput setaf 1`
+green=`tput setaf 2`
+
+host_name=REPLACE
+dns_server=REPLACE
+##END VARS
+
+##set hostname
+echo "${blue} Setting Hostname ${reset}"
+hostnamectl set-hostname $host_name
+
+##remove NAT DNS
+echo "${blue} Removing DNS server on first interface ${reset}"
+if ! grep 'PEERDNS=no' /etc/sysconfig/network-scripts/ifcfg-enp0s3; then
+ echo "PEERDNS=no" >> /etc/sysconfig/network-scripts/ifcfg-enp0s3
+ systemctl restart NetworkManager
+fi
+
+if ! ping www.google.com -c 5; then
+ echo "${red} No internet connection, check your route and DNS setup ${reset}"
+ exit 1
+fi
+
+##install EPEL
+if ! yum repolist | grep "epel/"; then
+ if ! rpm -Uvh http://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-5.noarch.rpm; then
+    printf '%s\n' 'vm_nodes_provision.sh: Unable to configure EPEL repo' >&2
+ exit 1
+ fi
+else
+ printf '%s\n' 'vm_nodes_provision.sh: Skipping EPEL repo as it is already configured.'
+fi
+
+##install device-mapper-libs
+##needed for libvirtd on compute nodes
+if ! yum -y upgrade device-mapper-libs; then
+ echo "${red} WARN: Unable to upgrade device-mapper-libs...nova-compute may not function ${reset}"
+fi
+
+echo "${blue} Installing Puppet ${reset}"
+##install puppet
+if ! yum list installed | grep -i puppet; then
+ if ! yum -y install puppet; then
+ printf '%s\n' 'vm_nodes_provision.sh: Unable to install puppet package' >&2
+ exit 1
+ fi
+fi
+
+echo "${blue} Configuring puppet ${reset}"
+cat > /etc/puppet/puppet.conf << EOF
+
+[main]
+vardir = /var/lib/puppet
+logdir = /var/log/puppet
+rundir = /var/run/puppet
+ssldir = \$vardir/ssl
+
+[agent]
+pluginsync = true
+report = true
+ignoreschedules = true
+daemon = false
+ca_server = foreman-server.opnfv.com
+certname = $host_name
+environment = production
+server = foreman-server.opnfv.com
+runinterval = 600
+
+EOF
+
+# Setup puppet to run on system reboot
+/sbin/chkconfig --level 345 puppet on
+
+/usr/bin/puppet agent --config /etc/puppet/puppet.conf -o --tags no_such_tag --server foreman-server.opnfv.com --no-daemonize
+
+sync
+
+# Inform the build system that we are done.
+echo "Informing Foreman that we are built"
+wget -q -O /dev/null --no-check-certificate http://foreman-server.opnfv.com:80/unattended/built
+
+echo "Starting puppet"
+systemctl start puppet