path: root/jjb/ipv6
2017-03-15  jjb: Set disable-strict-forbidden-file-verification to 'true'  (Markos Chandras, 1 file changed, -0/+1)
Previously, if an upstream patchset contained a change for a file listed in 'forbidden-file-paths', the job would not be triggered. This is not desirable, since such a patchset may contain important changes, so we enable the 'disable-strict-forbidden-file-verification' option, which triggers the job unless the patchset only contains changes to the files listed in 'forbidden-file-paths'.

Note: The diff was generated using the following script:

    for i in $(grep -l -r forbidden-file-paths *); do
        sed -i "s/\(^.*\)forbidden-file-paths/\1disable-strict-forbidden-file-verification: \'true\'\n&/" $i
    done

Please double check that the changes look sensible for each team's project.

Change-Id: Ifa86d3a39b36375b2fd52b449e29c8dc757499b4
Signed-off-by: Markos Chandras <mchandras@suse.de>
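For context, a minimal sketch of a JJB gerrit-trigger fragment after this change might look like the following. The project name and file patterns are illustrative assumptions, not taken from this repository:

```yaml
# Hypothetical JJB gerrit trigger fragment; project and patterns are examples only
- trigger:
    name: 'example-verify-trigger'
    triggers:
      - gerrit:
          trigger-on:
            - patchset-created-event
          projects:
            - project-compare-type: 'ANT'
              project-pattern: 'example-project'
              branches:
                - branch-compare-type: 'ANT'
                  branch-pattern: '**/master'
              # Inserted on the line above forbidden-file-paths by the sed script
              disable-strict-forbidden-file-verification: 'true'
              forbidden-file-paths:
                - compare-type: ANT
                  pattern: 'docs/**'
```

With this option set, a patchset touching both `docs/**` and other files still triggers the job; only patchsets touching nothing but forbidden paths are skipped.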
2017-02-01  merge GIT_BRANCH and GERRIT_BRANCH into BRANCH  (Ryota MIBU, 1 file changed, -1/+0)
The GIT_BRANCH parameter differs between job types: it is 'master' or 'stable/danube' in daily jobs, but something like 'refs/changes/57/27657/6' in verify jobs. This breaks job builders that are triggered by both types of jobs. Verify jobs have the GERRIT_BRANCH parameter to identify the stream and the branch into which the patch will be merged after it gets +2 and is submitted. To avoid further confusion and to have common job builders for daily and verify jobs, this patch introduces the BRANCH parameter. GERRIT_BRANCH is now deprecated.

Change-Id: Ibcd42c1cd8a0be0f330878b21d3011f1ec97043b
Signed-off-by: Ryota MIBU <r-mibu@cq.jp.nec.com>
2017-01-09  Remove colorado jobs and create danube jobs  (Fatih Degirmenci, 1 file changed, -1/+1)
The Danube stream for the projects with daily jobs is disabled. The Danube stream for the projects without daily jobs is left enabled, as there will be no changes coming to the danube branch until the branch is created. Dovetail, Apex and Fuel jobs have not been updated yet.

Change-Id: Ice39826c8f829157fa864370557837290838f634
Signed-off-by: Fatih Degirmenci <fatih.degirmenci@ericsson.com>
2016-12-19  clean scm definitions  (Ryota MIBU, 1 file changed, -4/+1)
This patch makes sure we use 2 types of scm, and allows us to specify them by adding one line in job-templates.

- git-scm         # for daily jobs and merge jobs
- git-scm-gerrit  # for verify jobs [New]

Change-Id: Iddc8a5e0e115193c7081a6d6c53da209900e95c8
Signed-off-by: Ryota MIBU <r-mibu@cq.jp.nec.com>
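As a sketch, a job-template would then select one of the two scm macros with a single line; the template names below are hypothetical:

```yaml
# Hypothetical job-templates showing the two scm macros
- job-template:
    name: 'example-daily-{stream}'
    scm:
      - git-scm          # for daily jobs and merge jobs

- job-template:
    name: 'example-verify-{stream}'
    scm:
      - git-scm-gerrit   # for verify jobs
```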
2016-12-01  Restrict Gerrit Triggers to OPNFV Gerrit Server  (Trevor Bramwell, 1 file changed, -0/+1)
With the addition of ODL and OpenStack Gerrit servers, it's important we don't generate additional noise by accidentally building against these Gerrit servers when we don't intend to. JIRA: RELENG-179 Change-Id: Ia163c6c3eaa58e8e21dc6548a839062fcbde39ed Signed-off-by: Trevor Bramwell <tbramwell@linuxfoundation.org>
2016-08-22  Create project jobs for colorado branch  (Fatih Degirmenci, 1 file changed, -1/+5)
Daily jobs for Colorado branch for installer and test projects have not been created yet and it needs to be done via separate patches. Change-Id: I34517e89dfc502ce5741733e01bf8425d513df02 Signed-off-by: Fatih Degirmenci <fatih.degirmenci@ericsson.com>
2016-06-15  Tie all verify/merge/build to opnfv-build-ubuntu  (Fatih Degirmenci, 1 file changed, -1/+1)
We have now reconfigured the CentOS build server, and it is important to know which jobs require CentOS and which ones Ubuntu. The machines with Ubuntu were previously labelled with opnfv-build, preventing us from keeping track of which projects require which OS. This tries to solve that.

Change-Id: I1fb2912ec49f5bc2781853e500508d9992d59fbb
Signed-off-by: Fatih Degirmenci <fatih.degirmenci@ericsson.com>
2016-01-07  Enable verify and merge jobs for stable/brahmaputra branch  (Fatih Degirmenci, 1 file changed, -5/+4)
Change-Id: I5f811a0db6c1725e02b3bfd51d8c7c21b12633a2 Signed-off-by: Fatih Degirmenci <fatih.degirmenci@ericsson.com>
2015-12-14  Fix branch parameter and change stream name to brahmaputra  (Fatih Degirmenci, 1 file changed, -3/+3)
Change-Id: I9005cb7cee44873b37fb310e5850d85d887c958d Signed-off-by: Fatih Degirmenci <fatih.degirmenci@ericsson.com>
2015-12-11  Remove job_defaults from jobs  (Fatih Degirmenci, 1 file changed, -9/+2)
Change-Id: Id936700af4b842d9a79db9004ed02f5d571ed17a Signed-off-by: Fatih Degirmenci <fatih.degirmenci@ericsson.com>
2015-12-11  Cleanup jjb files  (Fatih Degirmenci, 1 file changed, -96/+28)
- Remove the jobs that do nothing, such as merge and daily jobs
- Leave only verify jobs for all the projects as placeholders
- Introduce stable/brahmaputra branch and keep it disabled

Please note that the "real" jobs for bottlenecks, compass4nfv, functest, and qtip are not deleted.

Change-Id: I80031f77a11c2bf5173fbb7be98294285e3cc2ef
Signed-off-by: Fatih Degirmenci <fatih.degirmenci@ericsson.com>
2015-12-07  jjb: use default logrotate setting in all projects  (Ryota MIBU, 1 file changed, -12/+0)
Many projects have their own logrotate definitions copied from the template. This patch makes sure all projects use the default logrotate setting, so that the infra admin can configure the values easily. This patch also fixes the logrotate rule for artifacts to keep them for the same duration as the console logs exist. Note, this won't affect the hold time of artifacts on artifact.opnfv.org.

Change-Id: I708a675c7e87e5f830ee36009f0c6913c003b2ed
Signed-off-by: Ryota MIBU <r-mibu@cq.jp.nec.com>
2015-12-01  jjb: add default logrotate to releng-defaults.yaml  (Ryota MIBU, 1 file changed, -12/+0)
Change-Id: I373d24be32e154b25d685df47e6d06ad352877c4 Signed-off-by: Ryota MIBU <r-mibu@cq.jp.nec.com>
2015-12-01  jjb: move project-style to releng-defaults.yaml  (Ryota MIBU, 1 file changed, -12/+0)
Change-Id: Iced99bd62a8a246984e67dc28be7d4dca149e22b Signed-off-by: Ryota MIBU <r-mibu@cq.jp.nec.com>
2015-11-27  jjb: move ssh wrappers to releng-defaults.yaml  (Ryota MIBU, 1 file changed, -12/+0)
Change-Id: I8c26ca0e0cc8d5e6a57c9cb05be663f84f2293d2 Signed-off-by: Ryota MIBU <r-mibu@cq.jp.nec.com>
2015-09-07  Remove swp files  (Ryota MIBU, 1 file changed, -0/+0)
Change-Id: I2c6d5afed15a86a41d6215c94b8560e0a04d0b3e Signed-off-by: Ryota MIBU <r-mibu@cq.jp.nec.com>
2015-04-08  Fix {branch} parameter not found  (Thanh Ha, 1 file changed, -2/+2)
JIRA: 0000 Change-Id: I784e48c181bc2c1fda7d52539e8775a253d0b128 Signed-off-by: Thanh Ha <thanh.ha@linuxfoundation.org>
2015-03-06  Add merge and verify jobs for all projects that lack said jobs  (Aric Gardner, 1 file changed, -1/+120)
Change-Id: Ib85d6e162d2ebb37d0df60738c16d678ebc5326e Signed-off-by: Aric Gardner <agardner@linuxfoundation.org>
2015-03-01  Remove unnecessary CFG files  (Thanh Ha, 1 file changed, -3/+0)
These files are used by templates from opendaylight/releng/builder's python scripts to generate JJB files automatically. Those scripts don't appear to exist here. Change-Id: I410188ea09221fbd5294121b6ebc15731e6bc794 Signed-off-by: Thanh Ha <thanh.ha@linuxfoundation.org>
2015-02-18  Initial commit for jenkins job builder  (Aric Gardner, 3 files changed, -0/+69)
Change-Id: I8c50158e55a6ddb46fd1f74dbc81e668402e089f Signed-off-by: Aric Gardner <agardner@linuxfoundation.org>
#!/usr/bin/python
#
# Copyright (c) 2017 Cable Television Laboratories, Inc. ("CableLabs")
#                    and others.  All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at:
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# This script is responsible for deploying virtual environments
import argparse
import logging
import os
import re

from snaps import file_utils
from snaps.openstack.create_flavor import FlavorSettings, OpenStackFlavor
from snaps.openstack.create_image import ImageSettings
from snaps.openstack.create_instance import VmInstanceSettings
from snaps.openstack.create_network import PortSettings, NetworkSettings
from snaps.openstack.create_router import RouterSettings
from snaps.openstack.create_keypairs import KeypairSettings
from snaps.openstack.os_credentials import OSCreds, ProxySettings
from snaps.openstack.utils import deploy_utils
from snaps.provisioning import ansible_utils

__author__ = 'spisarski'

logger = logging.getLogger('deploy_venv')

ARG_NOT_SET = "argument not set"


def __get_os_credentials(os_conn_config):
    """
    Returns an object containing all of the information required to access OpenStack APIs
    :param os_conn_config: The configuration holding the credentials
    :return: an OSCreds instance
    """
    proxy_settings = None
    http_proxy = os_conn_config.get('http_proxy')
    if http_proxy:
        tokens = re.split(':', http_proxy)
        ssh_proxy_cmd = os_conn_config.get('ssh_proxy_cmd')
        proxy_settings = ProxySettings(tokens[0], tokens[1], ssh_proxy_cmd)

    return OSCreds(username=os_conn_config.get('username'),
                   password=os_conn_config.get('password'),
                   auth_url=os_conn_config.get('auth_url'),
                   project_name=os_conn_config.get('project_name'),
                   proxy_settings=proxy_settings)


def __parse_ports_config(config):
    """
    Parses the "ports" configuration
    :param config: The dictionary to parse
    :return: a list of PortSettings objects
    """
    out = list()
    for port_config in config:
        out.append(PortSettings(config=port_config.get('port')))
    return out


def __create_flavors(os_conn_config, flavors_config, cleanup=False):
    """
    Returns a dictionary of flavors where the key is the flavor name and the value is the flavor creator object
    :param os_conn_config: The OpenStack connection credentials
    :param flavors_config: The list of flavor configurations
    :param cleanup: Denotes whether this is being called for cleanup
    :return: dictionary
    """
    flavors = {}

    if flavors_config:
        try:
            for flavor_config_dict in flavors_config:
                flavor_config = flavor_config_dict.get('flavor')
                if flavor_config and flavor_config.get('name'):
                    flavor_creator = OpenStackFlavor(__get_os_credentials(os_conn_config),
                                                     FlavorSettings(flavor_config))
                    flavor_creator.create(cleanup=cleanup)
                    flavors[flavor_config['name']] = flavor_creator
        except Exception as e:
            for key, flavor_creator in flavors.items():
                flavor_creator.clean()
            raise e
        logger.info('Created configured flavors')

    return flavors


def __create_images(os_conn_config, images_config, cleanup=False):
    """
    Returns a dictionary of images where the key is the image name and the value is the image creator object
    :param os_conn_config: The OpenStack connection credentials
    :param images_config: The list of image configurations
    :param cleanup: Denotes whether this is being called for cleanup
    :return: dictionary
    """
    images = {}

    if images_config:
        try:
            for image_config_dict in images_config:
                image_config = image_config_dict.get('image')
                if image_config and image_config.get('name'):
                    images[image_config['name']] = deploy_utils.create_image(__get_os_credentials(os_conn_config),
                                                                             ImageSettings(image_config), cleanup)
        except Exception as e:
            for key, image_creator in images.items():
                image_creator.clean()
            raise e
        logger.info('Created configured images')

    return images


def __create_networks(os_conn_config, network_confs, cleanup=False):
    """
    Returns a dictionary of networks where the key is the network name and the value is the network object
    :param os_conn_config: The OpenStack connection credentials
    :param network_confs: The list of network configurations
    :param cleanup: Denotes whether this is being called for cleanup
    :return: dictionary
    """
    network_dict = {}

    if network_confs:
        try:
            for network_conf in network_confs:
                net_name = network_conf['network']['name']
                os_creds = __get_os_credentials(os_conn_config)
                network_dict[net_name] = deploy_utils.create_network(
                    os_creds, NetworkSettings(config=network_conf['network']), cleanup)
        except Exception as e:
            for key, net_creator in network_dict.items():
                net_creator.clean()
            raise e

        logger.info('Created configured networks')

    return network_dict


def __create_routers(os_conn_config, router_confs, cleanup=False):
    """
    Returns a dictionary of routers where the key is the router name and the value is the router creator object
    :param os_conn_config: The OpenStack connection credentials
    :param router_confs: The list of router configurations
    :param cleanup: Denotes whether this is being called for cleanup
    :return: dictionary
    """
    router_dict = {}
    os_creds = __get_os_credentials(os_conn_config)

    if router_confs:
        try:
            for router_conf in router_confs:
                router_name = router_conf['router']['name']
                router_dict[router_name] = deploy_utils.create_router(
                    os_creds, RouterSettings(config=router_conf['router']), cleanup)
        except Exception as e:
            for key, router_creator in router_dict.items():
                router_creator.clean()
            raise e

        logger.info('Created configured routers')

    return router_dict


def __create_keypairs(os_conn_config, keypair_confs, cleanup=False):
    """
    Returns a dictionary of keypairs where the key is the keypair name and the value is the keypair object
    :param os_conn_config: The OpenStack connection credentials
    :param keypair_confs: The list of keypair configurations
    :param cleanup: Denotes whether this is being called for cleanup
    :return: dictionary
    """
    keypairs_dict = {}
    if keypair_confs:
        try:
            for keypair_dict in keypair_confs:
                keypair_config = keypair_dict['keypair']
                kp_settings = KeypairSettings(keypair_config)
                keypairs_dict[keypair_config['name']] = deploy_utils.create_keypair(
                    __get_os_credentials(os_conn_config), kp_settings, cleanup)
        except Exception as e:
            for key, keypair_creator in keypairs_dict.items():
                keypair_creator.clean()
            raise e

        logger.info('Created configured keypairs')

    return keypairs_dict


def __create_instances(os_conn_config, instances_config, image_dict, keypairs_dict, cleanup=False):
    """
    Returns a dictionary of instances where the key is the instance name and the value is the VM object
    :param os_conn_config: The OpenStack connection credentials
    :param instances_config: The list of VM instance configurations
    :param image_dict: A dictionary of images that may be used to instantiate the VM instance
    :param keypairs_dict: A dictionary of keypairs that may be used to instantiate the VM instance
    :param cleanup: Denotes whether this is being called for cleanup
    :return: dictionary
    """
    os_creds = __get_os_credentials(os_conn_config)

    vm_dict = {}

    if instances_config:
        try:
            for instance_config in instances_config:
                conf = instance_config.get('instance')
                if conf:
                    if image_dict:
                        image_creator = image_dict.get(conf.get('imageName'))
                        if image_creator:
                            instance_settings = VmInstanceSettings(config=instance_config['instance'])
                            kp_name = conf.get('keypair_name')
                            vm_dict[conf['name']] = deploy_utils.create_vm_instance(
                                os_creds, instance_settings, image_creator.image_settings,
                                keypair_creator=keypairs_dict[kp_name], cleanup=cleanup)
                        else:
                            raise Exception('Image creator instance not found. Cannot instantiate')
                    else:
                        raise Exception('Image dictionary is None. Cannot instantiate')
                else:
                    raise Exception('Instance configuration is None. Cannot instantiate')
        except Exception as e:
            logger.error('Unexpected error creating instances. Attempting to cleanup environment - ' + str(e))
            for key, inst_creator in vm_dict.items():
                inst_creator.clean()
            raise e

        logger.info('Created configured instances')
    # TODO: Should an error be raised if there is no instances config?
    return vm_dict


def __apply_ansible_playbooks(ansible_configs, os_conn_config, vm_dict, image_dict, flavor_dict, env_file):
    """
    Applies ansible playbooks to running VMs with floating IPs
    :param ansible_configs: a list of Ansible configurations
    :param os_conn_config: the OpenStack connection configuration used to create an OSCreds instance
    :param vm_dict: the dictionary of newly instantiated VMs where the name is the key
    :param image_dict: the dictionary of newly instantiated images where the name is the key
    :param flavor_dict: the dictionary of newly instantiated flavors where the name is the key
    :param env_file: the path of the environment for setting the CWD so playbook location is relative to the deployment
                     file
    :return: True if successful, False otherwise
    """
    logger.info("Applying Ansible Playbooks")
    if ansible_configs:
        # Ensure all hosts are accepting SSH session requests
        for vm_inst in list(vm_dict.values()):
            if not vm_inst.vm_ssh_active(block=True):
                logger.warning("Timeout waiting for instance to respond to SSH requests")
                return False

        # Set CWD so the deployment file's playbook location can leverage relative paths
        orig_cwd = os.getcwd()
        env_dir = os.path.dirname(env_file)
        os.chdir(env_dir)

        # Apply playbooks
        for ansible_config in ansible_configs:
            os_creds = __get_os_credentials(os_conn_config)
            __apply_ansible_playbook(ansible_config, os_creds, vm_dict, image_dict, flavor_dict)

        # Return to original directory
        os.chdir(orig_cwd)

    return True


def __apply_ansible_playbook(ansible_config, os_creds, vm_dict, image_dict, flavor_dict):
    """
    Applies an Ansible configuration setting
    :param ansible_config: the configuration settings
    :param os_creds: the OpenStack credentials object
    :param vm_dict: the dictionary of newly instantiated VMs where the name is the key
    :param image_dict: the dictionary of newly instantiated images where the name is the key
    :param flavor_dict: the dictionary of newly instantiated flavors where the name is the key
    """
    if ansible_config:
        remote_user, floating_ips, private_key_filepath, proxy_settings = __get_connection_info(ansible_config, vm_dict)
        if floating_ips:
            retval = ansible_utils.apply_playbook(
                ansible_config['playbook_location'], floating_ips, remote_user, private_key_filepath,
                variables=__get_variables(ansible_config.get('variables'), os_creds, vm_dict, image_dict, flavor_dict),
                proxy_setting=proxy_settings)
            if retval != 0:
                # Not a fatal type of event
                # Note: ansible_config is a dict; it must be indexed, not called
                logger.warning('Unable to apply playbook found at location - ' + ansible_config['playbook_location'])


def __get_connection_info(ansible_config, vm_dict):
    """
    Returns a tuple of data required for connecting to the running VMs
    (remote_user, [floating_ips], private_key_filepath, proxy_settings)
    :param ansible_config: the configuration settings
    :param vm_dict: the dictionary of VMs where the VM name is the key
    :return: tuple where the first element is the user and the second is a list of floating IPs and the third is the
    private key file location and the fourth is an instance of the snaps.ProxySettings class
    (note: in order to work, each of the hosts need to have the same sudo_user and private key file location values)
    """
    if ansible_config.get('hosts'):
        hosts = ansible_config['hosts']
        if len(hosts) > 0:
            floating_ips = list()
            remote_user = None
            private_key_filepath = None
            proxy_settings = None
            for host in hosts:
                vm = vm_dict.get(host)
                if vm:
                    fip = vm.get_floating_ip()
                    if fip:
                        remote_user = vm.get_image_user()
                        floating_ips.append(fip.ip)
                        private_key_filepath = vm.keypair_settings.private_filepath
                        proxy_settings = vm.get_os_creds().proxy_settings
                    else:
                        raise Exception('Could not find floating IP for VM - ' + vm.name)
                else:
                    logger.error('Could not locate VM with name - ' + host)

            return remote_user, floating_ips, private_key_filepath, proxy_settings
    # Return a 4-tuple so callers can always unpack the result
    return None, None, None, None


def __get_variables(var_config, os_creds, vm_dict, image_dict, flavor_dict):
    """
    Returns a dictionary of substitution variables to be used for Ansible templates
    :param var_config: the variable configuration settings
    :param os_creds: the OpenStack credentials object
    :param vm_dict: the dictionary of newly instantiated VMs where the name is the key
    :param image_dict: the dictionary of newly instantiated images where the name is the key
    :param flavor_dict: the dictionary of newly instantiated flavors where the name is the key
    :return: dictionary or None
    """
    if var_config and vm_dict and len(vm_dict) > 0:
        variables = dict()
        for key, value in var_config.items():
            value = __get_variable_value(value, os_creds, vm_dict, image_dict, flavor_dict)
            if key and value:
                variables[key] = value
                logger.info("Set Jinja2 variable with key [" + key + "] the value [" + value + ']')
            else:
                logger.warning('Key [' + str(key) + '] or Value [' + str(value) + '] must not be None')
        return variables
    return None


def __get_variable_value(var_config_values, os_creds, vm_dict, image_dict, flavor_dict):
    """
    Returns the associated variable value for use by Ansible for substitution purposes
    :param var_config_values: the configuration dictionary
    :param os_creds: the OpenStack credentials object
    :param vm_dict: the dictionary of newly instantiated VMs where the name is the key
    :param image_dict: the dictionary of newly instantiated images where the name is the key
    :param flavor_dict: the dictionary of newly instantiated flavors where the name is the key
    :return:
    """
    if var_config_values['type'] == 'string':
        return __get_string_variable_value(var_config_values)
    if var_config_values['type'] == 'vm-attr':
        return __get_vm_attr_variable_value(var_config_values, vm_dict)
    if var_config_values['type'] == 'os_creds':
        return __get_os_creds_variable_value(var_config_values, os_creds)
    if var_config_values['type'] == 'port':
        return __get_vm_port_variable_value(var_config_values, vm_dict)
    if var_config_values['type'] == 'image':
        return __get_image_variable_value(var_config_values, image_dict)
    if var_config_values['type'] == 'flavor':
        return __get_flavor_variable_value(var_config_values, flavor_dict)
    return None


def __get_string_variable_value(var_config_values):
    """
    Returns the associated string value
    :param var_config_values: the configuration dictionary
    :return: the value contained in the dictionary with the key 'value'
    """
    return var_config_values['value']


def __get_vm_attr_variable_value(var_config_values, vm_dict):
    """
    Returns the associated value contained on a VM instance
    :param var_config_values: the configuration dictionary
    :param vm_dict: the dictionary containing all VMs where the key is the VM's name
    :return: the value
    """
    vm = vm_dict.get(var_config_values['vm_name'])
    if vm:
        if var_config_values['value'] == 'floating_ip':
            return vm.get_floating_ip().ip
        if var_config_values['value'] == 'image_user':
            return vm.get_image_user()


def __get_os_creds_variable_value(var_config_values, os_creds):
    """
    Returns the associated OS credentials value
    :param var_config_values: the configuration dictionary
    :param os_creds: the credentials
    :return: the value
    """
    logger.info("Retrieving OS Credentials")
    if os_creds:
        if var_config_values['value'] == 'username':
            logger.info("Returning OS username")
            return os_creds.username
        elif var_config_values['value'] == 'password':
            logger.info("Returning OS password")
            return os_creds.password
        elif var_config_values['value'] == 'auth_url':
            logger.info("Returning OS auth_url")
            return os_creds.auth_url
        elif var_config_values['value'] == 'project_name':
            logger.info("Returning OS project_name")
            return os_creds.project_name

    logger.info("Returning none")
    return None


def __get_vm_port_variable_value(var_config_values, vm_dict):
    """
    Returns the associated VM port value
    :param var_config_values: the configuration dictionary
    :param vm_dict: the dictionary containing all VMs where the key is the VM's name
    :return: the value
    """
    port_name = var_config_values.get('port_name')
    vm_name = var_config_values.get('vm_name')

    if port_name and vm_name:
        vm = vm_dict.get(vm_name)
        if vm:
            port_value_id = var_config_values.get('port_value')
            if port_value_id:
                if port_value_id == 'mac_address':
                    return vm.get_port_mac(port_name)
                if port_value_id == 'ip_address':
                    return vm.get_port_ip(port_name)


def __get_image_variable_value(var_config_values, image_dict):
    """
    Returns the associated image value
    :param var_config_values: the configuration dictionary
    :param image_dict: the dictionary containing all images where the key is the name
    :return: the value
    """
    logger.info("Retrieving image values")

    if image_dict:
        if var_config_values.get('image_name'):
            image_creator = image_dict.get(var_config_values['image_name'])
            if image_creator:
                if var_config_values.get('value') and var_config_values['value'] == 'id':
                    return image_creator.get_image().id
                if var_config_values.get('value') and var_config_values['value'] == 'user':
                    return image_creator.image_settings.image_user

    logger.info("Returning none")
    return None


def __get_flavor_variable_value(var_config_values, flavor_dict):
    """
    Returns the associated flavor value
    :param var_config_values: the configuration dictionary
    :param flavor_dict: the dictionary containing all flavor creators where the key is the name
    :return: the value or None
    """
    logger.info("Retrieving flavor values")

    if flavor_dict:
        if var_config_values.get('flavor_name'):
            flavor_creator = flavor_dict.get(var_config_values['flavor_name'])
            if flavor_creator:
                if var_config_values.get('value') and var_config_values['value'] == 'id':
                    return flavor_creator.get_flavor().id

    logger.info("Returning none")
    return None


def main(arguments):
    """
    Will need to set the environment variable ANSIBLE_HOST_KEY_CHECKING=False or
    create a file located at /etc/ansible/ansible.cfg or ~/.ansible.cfg containing the following content:

    [defaults]
    host_key_checking = False

    The CWD must be the directory in which this script is located.

    :return: exit code to the OS
    """
    log_level = logging.INFO
    if arguments.log_level != 'INFO':
        log_level = logging.DEBUG
    logging.basicConfig(level=log_level)

    logger.info('Starting to Deploy')
    config = file_utils.read_yaml(arguments.environment)
    logger.debug('Read configuration file - ' + arguments.environment)

    if config:
        os_config = config.get('openstack')

        os_conn_config = None
        flavor_dict = {}
        image_dict = {}
        network_dict = {}
        router_dict = {}
        keypairs_dict = {}
        vm_dict = {}

        if os_config:
            try:
                os_conn_config = os_config.get('connection')

                # Create flavors
                flavor_dict = __create_flavors(os_conn_config, os_config.get('flavors'),
                                               arguments.clean is not ARG_NOT_SET)

                # Create images
                image_dict = __create_images(os_conn_config, os_config.get('images'),
                                             arguments.clean is not ARG_NOT_SET)

                # Create networks
                network_dict = __create_networks(os_conn_config, os_config.get('networks'),
                                                 arguments.clean is not ARG_NOT_SET)

                # Create routers
                router_dict = __create_routers(os_conn_config, os_config.get('routers'),
                                               arguments.clean is not ARG_NOT_SET)

                # Create keypairs
                keypairs_dict = __create_keypairs(os_conn_config, os_config.get('keypairs'),
                                                  arguments.clean is not ARG_NOT_SET)

                # Create instance
                vm_dict = __create_instances(os_conn_config, os_config.get('instances'), image_dict, keypairs_dict,
                                             arguments.clean is not ARG_NOT_SET)
                logger.info('Completed creating/retrieving all configured instances')
            except Exception as e:
                logger.error('Unexpected error deploying environment. Rolling back due to - ' + str(e))
                __cleanup(vm_dict, keypairs_dict, router_dict, network_dict, image_dict, flavor_dict, True)
                raise e

        # Must enter either block
        if arguments.clean is not ARG_NOT_SET:
            # Cleanup Environment
            __cleanup(vm_dict, keypairs_dict, router_dict, network_dict, image_dict, flavor_dict,
                      arguments.clean_image is not ARG_NOT_SET)
        elif arguments.deploy is not ARG_NOT_SET:
            logger.info('Configuring NICs where required')
            for vm in vm_dict.values():
                vm.config_nics()
            logger.info('Completed NIC configuration')

            # Provision VMs
            ansible_config = config.get('ansible')
            if ansible_config and vm_dict:
                if not __apply_ansible_playbooks(ansible_config, os_conn_config, vm_dict, image_dict, flavor_dict,
                                                 arguments.environment):
                    logger.error("Problem applying ansible playbooks")
    else:
        logger.error('Unable to read configuration file - ' + arguments.environment)
        exit(1)

    exit(0)


def __cleanup(vm_dict, keypairs_dict, router_dict, network_dict, image_dict, flavor_dict, clean_image=False):
    for key, vm_inst in vm_dict.items():
        vm_inst.clean()
    for key, kp_inst in keypairs_dict.items():
        kp_inst.clean()
    for key, router_inst in router_dict.items():
        try:
            router_inst.clean()
        except Exception:
            logger.warning("Router not found, continuing to next component")
    for key, net_inst in network_dict.items():
        try:
            net_inst.clean()
        except Exception:
            logger.warning("Network not found, continuing to next component")
    if clean_image:
        for key, image_inst in image_dict.items():
            image_inst.clean()
    for key, flavor_inst in flavor_dict.items():
        flavor_inst.clean()


if __name__ == '__main__':
    # To ensure any files referenced via a relative path will begin from the directory in which this file resides
    os.chdir(os.path.dirname(os.path.realpath(__file__)))

    parser = argparse.ArgumentParser()
    parser.add_argument('-d', '--deploy', dest='deploy', nargs='?', default=ARG_NOT_SET,
                        help='When used, environment will be deployed and provisioned')
    parser.add_argument('-c', '--clean', dest='clean', nargs='?', default=ARG_NOT_SET,
                        help='When used, the environment will be removed')
    parser.add_argument('-i', '--clean-image', dest='clean_image', nargs='?', default=ARG_NOT_SET,
                        help='When cleaning, if this is set, the image will be cleaned too')
    parser.add_argument('-e', '--env', dest='environment', required=True,
                        help='The environment configuration YAML file - REQUIRED')
    parser.add_argument('-l', '--log-level', dest='log_level', default='INFO', help='Logging Level (INFO|DEBUG)')
    args = parser.parse_args()

    if args.deploy is ARG_NOT_SET and args.clean is ARG_NOT_SET:
        print('Must enter either -d for deploy or -c for cleaning up an environment')
        exit(1)
    if args.deploy is not ARG_NOT_SET and args.clean is not ARG_NOT_SET:
        print('Cannot enter both options -d/--deploy and -c/--clean')
        exit(1)
    main(args)