23 files changed, 243 insertions, 176 deletions
diff --git a/docs/specs/infra_manager.rst b/docs/specs/infra_manager.rst new file mode 100644 index 00000000..a8ecb548 --- /dev/null +++ b/docs/specs/infra_manager.rst @@ -0,0 +1,130 @@ +PDF and IDF support in XCI +########################### +:date: 2018-04-30 + +This spec introduces the work required to adapt XCI to use the PDF and IDF, +which will drive both virtual and baremetal deployments + +Definition of Terms +=================== +* Baremetal deployment: Deployment on physical servers, as opposed to deploying +software on virtual machines or containers running on the same physical server + +* Virtual deployment: Deployment on virtual machines, i.e. the servers where +nodes will be deployed are virtualized. For example, in OpenStack, computes and +controllers will be virtual machines. This deployment is normally done on just +one physical server + +* PDF: POD Descriptor File, a document that lists the +hardware characteristics of the set of physical or virtual machines which form +the infrastructure. Example: + +https://git.opnfv.org/pharos/tree/config/pdf/pod1.yaml + +* IDF: Installer Descriptor File, a document that +provides the information installers need to accomplish a baremetal +deployment. Example: + +https://git.opnfv.org/fuel/tree/mcp/config/labs/local/idf-pod1.yaml + +Problem description +=================== + +Currently, XCI only supports virtualized deployments running on a single +server. This is convenient when resources are limited; however, baremetal is +the preferred way to deploy NFV platforms in lab or production environments. +It also greatly limits the scope of testing, because hardware-specific NFV +features such as SR-IOV cannot be exercised. + +Proposed change +=============== + +Introduce the infra_manager tool, which will prepare the infrastructure for XCI +to drive the deployment on a set of virtual or baremetal nodes.
This tool will +execute two tasks: + +1 - Creation of virtual nodes or initialization of the preparations for +baremetal nodes +2 - OS provisioning on nodes, both virtual and baremetal + +Once those steps are ready, XCI will continue with the deployment of the +scenario on the provisioned nodes. + +The infra_manager tool will consume the PDF and IDF files describing the +infrastructure as input. It will then use a <yet-to-be-created-tool> to do +step 1 and bifrost to boot the operating system on the nodes. + +Among other services, Bifrost uses: +- Disk image builder (dib) to generate the OS images +- dnsmasq as the DHCP server which provides the PXE boot mechanism +- ipmitool to manage the servers + +Bifrost will be deployed inside a VM on the jumphost. + +For the time being, we will create the infrastructure based on the defined XCI +flavors; however, the implementation should not hinder the possibility of +having one PDF and IDF per scenario, defining the characteristics and the +number of nodes to be deployed. + +Code impact +----------- + +The new code will be introduced in a new directory called infra_manager under +releng-xci/xci/prototypes + +Tentative User guide +-------------------- + +Assuming the user cloned releng-xci on the jumphost, the following should be +done: + +1 - Move the IDF and PDF files which describe the infrastructure to +releng-xci/xci/prototypes/infra_manager/var. There is an example under xci/var + +2 - Export the XCI_FLAVOR variable (e.g. export XCI_FLAVOR=noha) + +3 - Run the <yet-to-be-created-tool> to create the virtual nodes based on the +provided PDF information (cpu, ram, disk...)
or initialize the preparations for +baremetal nodes + +4 - Start the bifrost process to boot the nodes + +5 - Run the VIM deployer script: +releng-xci/xci/installer/$inst/deploy.sh + +where $inst = {osa, kubespray, kolla} + +In case of problems, the best way to debug is to access the bifrost VM and use: + +* bifrost-utils +* ipmitool +* the DHCP messages in /var/log/syslog + + +Implementation +============== + +Assignee(s) +----------- + +Primary assignee: + Manuel Buil (mbuil) + Jack Morgan (jmorgan1) + Somebody_else_please (niceperson) + +Work items +---------- + +1. Provide support for a dynamically generated inventory based on PDF and IDF. +This mechanism could be used for both baremetal and virtual deployments. + +2. Contribute the servers-prepare.sh script + +3. Contribute the nodes-deploy.sh script + +4. Integrate the three previous components correctly + +5. Provide support for the XCI supported operating systems (opensuse, Ubuntu, +centos) + +6. Allow one PDF and IDF per scenario diff --git a/xci/config/env-vars b/xci/config/env-vars index bf333bdf..fe75cb80 100755 --- a/xci/config/env-vars +++ b/xci/config/env-vars @@ -50,7 +50,7 @@ export ANSIBLE_HOST_KEY_CHECKING=False # subject of the certificate export XCI_SSL_SUBJECT=${XCI_SSL_SUBJECT:-"/C=US/ST=California/L=San Francisco/O=IT/CN=xci.releng.opnfv.org"} export DEPLOY_SCENARIO=${DEPLOY_SCENARIO:-"os-nosdn-nofeature"} -# Kubespray requires that ansible version is 2.4.0.0 -export XCI_KUBE_ANSIBLE_PIP_VERSION=2.4.0.0 +# Kubespray requires ansible version 2.4.4 +export XCI_KUBE_ANSIBLE_PIP_VERSION=2.4.4 # OpenStack global requirements version export OPENSTACK_REQUIREMENTS_VERSION=${OPENSTACK_REQUIREMENTS_VERSION:-$(awk '/requirements_git_install_branch:/ {print $2}' ${XCI_PATH}/xci/installer/osa/files/openstack_services.yml)} diff --git a/xci/config/pinned-versions b/xci/config/pinned-versions index 72a0ff61..ccfc2704 100755 --- a/xci/config/pinned-versions +++ b/xci/config/pinned-versions @@
-43,6 +43,5 @@ export KEEPALIVED_VERSION=$(grep -E '.*name: keepalived' -A 3 \ export HAPROXY_VERSION=$(grep -E '.*name: haproxy_server' -A 3 \ ${XCI_PATH}/xci/installer/osa/files/ansible-role-requirements.yml \ | tail -n1 | sed -n 's/\(^.*: \)\([0-9a-z].*$\)/\2/p') -# HEAD of kubspray "master" as of 27.02.2018 -# kubespray's bug Reference: https://github.com/kubernetes-incubator/kubespray/issues/2400 -export KUBESPRAY_VERSION=${KUBESPRAY_VERSION:-"5d9bb300d716880610c34dd680c167d2d728984d"} +# HEAD of kubespray "master" as of 16.05.2018 +export KUBESPRAY_VERSION=${KUBESPRAY_VERSION:-"38e727dbe1bdf5316fae8d645718cc8279fbda20"} diff --git a/xci/files/xci-destroy-env.sh b/xci/files/xci-destroy-env.sh index 97b76c7c..3de21795 100755 --- a/xci/files/xci-destroy-env.sh +++ b/xci/files/xci-destroy-env.sh @@ -21,6 +21,8 @@ rm -rf /opt/stack # HOME is normally set by sudo -H rm -rf ${HOME}/.config/openstack rm -rf ${HOME}/.ansible +# Wipe repos +rm -rf ${XCI_CACHE}/repos # bifrost installs everything on venv so we need to look there if virtualbmc is not installed on the host.
if which vbmc &>/dev/null || { [[ -e ${XCI_VENV}/bifrost/bin/activate ]] && source ${XCI_VENV}/bifrost/bin/activate; }; then diff --git a/xci/installer/kubespray/deploy.sh b/xci/installer/kubespray/deploy.sh index 02a9d430..bcd7dc1d 100755 --- a/xci/installer/kubespray/deploy.sh +++ b/xci/installer/kubespray/deploy.sh @@ -28,8 +28,7 @@ echo "Info: Configuring localhost for kubespray" echo "-----------------------------------------------------------------------" cd $XCI_PLAYBOOKS ansible-playbook ${XCI_ANSIBLE_PARAMS} -e XCI_PATH="${XCI_PATH}" \ - -i ${XCI_FLAVOR_ANSIBLE_FILE_PATH}/inventory/inventory.cfg \ - configure-localhost.yml + -i dynamic_inventory.py configure-localhost.yml echo "-----------------------------------------------------------------------" echo "Info: Configured localhost for kubespray" @@ -46,9 +45,8 @@ echo "Info: Configured localhost for kubespray" echo "Info: Configuring opnfv deployment host for kubespray" echo "-----------------------------------------------------------------------" cd $K8_XCI_PLAYBOOKS -ansible-playbook ${XCI_ANSIBLE_PARAMS} -e XCI_PATH="${XCI_PATH}" \ - -i ${XCI_FLAVOR_ANSIBLE_FILE_PATH}/inventory/inventory.cfg \ - configure-opnfvhost.yml +ansible-playbook ${XCI_ANSIBLE_PARAMS} \ + -i ${XCI_PLAYBOOKS}/dynamic_inventory.py configure-opnfvhost.yml echo "-----------------------------------------------------------------------" echo "Info: Configured opnfv deployment host for kubespray" @@ -65,25 +63,23 @@ if [ $XCI_FLAVOR != "aio" ]; then echo "Info: Configuring target hosts for kubespray" echo "-----------------------------------------------------------------------" cd $K8_XCI_PLAYBOOKS - ansible-playbook ${XCI_ANSIBLE_PARAMS} -e XCI_PATH="${XCI_PATH}" \ - -i ${XCI_FLAVOR_ANSIBLE_FILE_PATH}/inventory/inventory.cfg \ - configure-targethosts.yml + ansible-playbook ${XCI_ANSIBLE_PARAMS} \ + -i ${XCI_PLAYBOOKS}/dynamic_inventory.py configure-targethosts.yml echo 
"-----------------------------------------------------------------------" echo "Info: Configured target hosts for kubespray" fi echo "Info: Using kubespray to deploy the kubernetes cluster" echo "-----------------------------------------------------------------------" -ssh root@$OPNFV_HOST_IP "set -o pipefail; cd releng-xci/.cache/repos/kubespray;\ - ansible-playbook \ - -i opnfv_inventory/inventory.cfg cluster.yml -b | tee setup-kubernetes.log" +ssh root@$OPNFV_HOST_IP "set -o pipefail; export XCI_FLAVOR=$XCI_FLAVOR; export INSTALLER_TYPE=$INSTALLER_TYPE; \ + cd releng-xci/.cache/repos/kubespray/; ansible-playbook \ + -i opnfv_inventory/dynamic_inventory.py cluster.yml -b | tee setup-kubernetes.log" scp root@$OPNFV_HOST_IP:~/releng-xci/.cache/repos/kubespray/setup-kubernetes.log \ - $LOG_PATH/setup-kubernetes.log + $LOG_PATH/setup-kubernetes.log cd $K8_XCI_PLAYBOOKS ansible-playbook ${XCI_ANSIBLE_PARAMS} \ - -i ${XCI_FLAVOR_ANSIBLE_FILE_PATH}/inventory/inventory.cfg \ - configure-kubenet.yml + -i ${XCI_PLAYBOOKS}/dynamic_inventory.py configure-kubenet.yml echo echo "-----------------------------------------------------------------------" echo "Info: Kubernetes installation is successfully completed!" 
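The deploy.sh changes above switch every play from the static per-flavor inventory.cfg files to the shared xci/playbooks/dynamic_inventory.py. For readers unfamiliar with Ansible's dynamic inventory contract, a minimal standalone sketch of what such a script must emit could look like the following; the host data here is illustrative only, the real logic is driven by the PDF/IDF files and the XCI_FLAVOR/INSTALLER_TYPE variables:

```python
#!/usr/bin/env python
# Minimal sketch of an Ansible dynamic inventory script (hypothetical
# host data, not the actual XCI implementation). Ansible invokes the
# script with --list and reads a JSON inventory from stdout.
import argparse
import json


def build_inventory():
    # Group names map to hosts/children/vars; per-host variables live
    # under _meta.hostvars so Ansible needs no extra --host calls.
    return {
        '_meta': {'hostvars': {
            'opnfv': {'ansible_host': '192.168.122.2', 'ip': '192.168.122.2'},
        }},
        'kube-master': {'hosts': ['opnfv']},
        'kube-node': {'hosts': ['opnfv']},
        'k8s-cluster': {'children': ['kube-master', 'kube-node']},
    }


def main():
    parser = argparse.ArgumentParser()
    parser.add_argument('--list', action='store_true')
    parser.add_argument('--host')
    # parse_known_args keeps the sketch tolerant of extra argv
    args, _ = parser.parse_known_args()
    inventory = build_inventory()
    if args.host:
        print(json.dumps(inventory['_meta']['hostvars'].get(args.host, {})))
    else:
        print(json.dumps(inventory, indent=2, sort_keys=True))


if __name__ == '__main__':
    main()
```

Because the script is executable and self-describing, the same `-i dynamic_inventory.py` argument works for every flavor, which is what lets the per-flavor inventory.cfg files below be deleted.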
diff --git a/xci/installer/kubespray/files/aio/inventory/inventory.cfg b/xci/installer/kubespray/files/aio/inventory/inventory.cfg deleted file mode 100644 index a72d0fec..00000000 --- a/xci/installer/kubespray/files/aio/inventory/inventory.cfg +++ /dev/null @@ -1,20 +0,0 @@ -[all] -opnfv ansible_host=192.168.122.2 ip=192.168.122.2 - -[kube-master] -opnfv - -[kube-node] -opnfv - -[etcd] -opnfv - -[k8s-cluster:children] -kube-node -kube-master - -[calico-rr] - -[vault] -opnfv diff --git a/xci/installer/kubespray/files/ha/inventory/inventory.cfg b/xci/installer/kubespray/files/ha/inventory/inventory.cfg deleted file mode 100644 index aae36329..00000000 --- a/xci/installer/kubespray/files/ha/inventory/inventory.cfg +++ /dev/null @@ -1,32 +0,0 @@ -[all] -opnfv ansible_host=192.168.122.2 ip=192.168.122.2 -master1 ansible_host=192.168.122.3 ip=192.168.122.3 -master2 ansible_host=192.168.122.4 ip=192.168.122.4 -master3 ansible_host=192.168.122.5 ip=192.168.122.5 -node1 ansible_host=192.168.122.6 ip=192.168.122.6 -node2 ansible_host=192.168.122.7 ip=192.168.122.7 - -[kube-master] -master1 -master2 -master3 - -[kube-node] -node1 -node2 - -[etcd] -master1 -master2 -master3 - -[k8s-cluster:children] -kube-node -kube-master - -[calico-rr] - -[vault] -master1 -master2 -master3 diff --git a/xci/installer/kubespray/files/mini/inventory/inventory.cfg b/xci/installer/kubespray/files/mini/inventory/inventory.cfg deleted file mode 100644 index bf8bf19b..00000000 --- a/xci/installer/kubespray/files/mini/inventory/inventory.cfg +++ /dev/null @@ -1,22 +0,0 @@ -[all] -opnfv ansible_host=192.168.122.2 ip=192.168.122.2 -master1 ansible_host=192.168.122.3 ip=192.168.122.3 -node1 ansible_host=192.168.122.4 ip=192.168.122.4 - -[kube-master] -master1 - -[kube-node] -node1 - -[etcd] -master1 - -[k8s-cluster:children] -kube-node -kube-master - -[calico-rr] - -[vault] -master1 diff --git a/xci/installer/kubespray/files/noha/inventory/inventory.cfg 
b/xci/installer/kubespray/files/noha/inventory/inventory.cfg deleted file mode 100644 index 73c1e0a1..00000000 --- a/xci/installer/kubespray/files/noha/inventory/inventory.cfg +++ /dev/null @@ -1,24 +0,0 @@ -[all] -opnfv ansible_host=192.168.122.2 ip=192.168.122.2 -master1 ansible_host=192.168.122.3 ip=192.168.122.3 -node1 ansible_host=192.168.122.4 ip=192.168.122.4 -node2 ansible_host=192.168.122.5 ip=192.168.122.5 - -[kube-master] -master1 - -[kube-node] -node1 -node2 - -[etcd] -master1 - -[k8s-cluster:children] -kube-node -kube-master - -[calico-rr] - -[vault] -master1 diff --git a/xci/installer/kubespray/playbooks/configure-opnfvhost.yml b/xci/installer/kubespray/playbooks/configure-opnfvhost.yml index 01904ba3..c6b29dc0 100644 --- a/xci/installer/kubespray/playbooks/configure-opnfvhost.yml +++ b/xci/installer/kubespray/playbooks/configure-opnfvhost.yml @@ -34,16 +34,18 @@ file: path: "{{ remote_xci_path }}/.cache/repos/kubespray/opnfv_inventory" state: absent - - name: copy kubespray inventory directory - command: "cp -rf {{ remote_xci_flavor_files }}/inventory \ - {{ remote_xci_path }}/.cache/repos/kubespray/opnfv_inventory" - args: - creates: "{{ remote_xci_path }}/.cache/repos/kubespray/opnfv_inventory" + - name: make sure kubespray/opnfv_inventory/group_vars/ exist file: path: "{{ remote_xci_path }}/.cache/repos/kubespray/opnfv_inventory/group_vars" state: directory + - name: copy kubespray inventory directory + file: + src: "{{ remote_xci_playbooks }}/dynamic_inventory.py" + path: "{{ remote_xci_path }}/.cache/repos/kubespray/opnfv_inventory/dynamic_inventory.py" + state: link + - name: Reload XCI deployment host facts setup: filter: ansible_local @@ -74,6 +76,7 @@ with_items: - { name: 'ansible', version: "{{ xci_kube_ansible_pip_version }}" } - { name: 'netaddr' } + - { name: 'ansible-modules-hashivault' } - name: Configure SSL certificates include_tasks: "{{ xci_path }}/xci/playbooks/manage-ssl-certs.yml" diff --git 
a/xci/opnfv-scenario-requirements.yml b/xci/opnfv-scenario-requirements.yml index 9aa16824..2c34e5c6 100644 --- a/xci/opnfv-scenario-requirements.yml +++ b/xci/opnfv-scenario-requirements.yml @@ -29,7 +29,7 @@ - scenario: os-nosdn-nofeature scm: git src: https://gerrit.opnfv.org/gerrit/releng-xci-scenarios - version: master + version: 6.0.1 role: scenarios/os-nosdn-nofeature/role/os-nosdn-nofeature installers: - installer: osa @@ -45,7 +45,7 @@ - scenario: os-odl-nofeature scm: git src: https://gerrit.opnfv.org/gerrit/releng-xci-scenarios - version: master + version: 6.0.1 role: scenarios/os-odl-nofeature/role/os-odl-nofeature installers: - installer: osa @@ -60,7 +60,7 @@ - scenario: k8-nosdn-nofeature scm: git src: https://gerrit.opnfv.org/gerrit/releng-xci-scenarios - version: master + version: 6.0.1 role: scenarios/k8-nosdn-nofeature/role/k8-nosdn-nofeature installers: - installer: kubespray @@ -92,7 +92,7 @@ - scenario: k8-canal-nofeature scm: git src: https://gerrit.opnfv.org/gerrit/releng-xci-scenarios - version: master + version: 6.0.1 role: scenarios/k8-canal-nofeature/role/k8-canal-nofeature installers: - installer: kubespray @@ -109,7 +109,7 @@ - scenario: k8-calico-nofeature scm: git src: https://gerrit.opnfv.org/gerrit/releng-xci-scenarios - version: master + version: 6.0.1 role: scenarios/k8-calico-nofeature/role/k8-calico-nofeature installers: - installer: kubespray @@ -126,7 +126,7 @@ - scenario: k8-flannel-nofeature scm: git src: https://gerrit.opnfv.org/gerrit/releng-xci-scenarios - version: master + version: 6.0.1 role: scenarios/k8-flannel-nofeature/role/k8-flannel-nofeature installers: - installer: kubespray @@ -139,3 +139,20 @@ - ubuntu - centos - opensuse + +- scenario: k8-contiv-nofeature + scm: git + src: https://gerrit.opnfv.org/gerrit/releng-xci-scenarios + version: master + role: scenarios/k8-contiv-nofeature/role/k8-contiv-nofeature + installers: + - installer: kubespray + flavors: + - aio + - ha + - noha + - mini + distros: + - ubuntu 
+ - centos + - opensuse diff --git a/xci/playbooks/configure-localhost.yml b/xci/playbooks/configure-localhost.yml index 5f091c92..5b64c785 100644 --- a/xci/playbooks/configure-localhost.yml +++ b/xci/playbooks/configure-localhost.yml @@ -25,7 +25,6 @@ state: absent recurse: no with_items: - - "{{ xci_cache }}/repos" - "{{ log_path }} " - "{{ opnfv_ssh_host_keys_path }}" diff --git a/xci/playbooks/dynamic_inventory.py b/xci/playbooks/dynamic_inventory.py index 8f498742..6d9d217f 100755 --- a/xci/playbooks/dynamic_inventory.py +++ b/xci/playbooks/dynamic_inventory.py @@ -13,6 +13,7 @@ # Based on https://raw.githubusercontent.com/ansible/ansible/devel/contrib/inventory/cobbler.py import argparse +import glob import os import sys import yaml @@ -30,19 +31,26 @@ class XCIInventory(object): self.inventory['_meta']['hostvars'] = {} self.installer = os.environ.get('INSTALLER_TYPE', 'osa') self.flavor = os.environ.get('XCI_FLAVOR', 'mini') + self.flavor_files = os.path.dirname(os.path.realpath(__file__)) + "/../installer/" + self.installer + "/files/" + self.flavor # Static information for opnfv host for now self.add_host('opnfv') - self.add_hostvar('opnfv', 'ansible_ssh_host', '192.168.122.2') + self.add_hostvar('opnfv', 'ansible_host', '192.168.122.2') + self.add_hostvar('opnfv', 'ip', '192.168.122.2') self.add_to_group('deployment', 'opnfv') self.add_to_group('opnfv', 'opnfv') self.opnfv_networks = {} self.opnfv_networks['opnfv'] = {} - self.opnfv_networks['opnfv']['admin'] = '172.29.236.10' - self.opnfv_networks['opnfv']['public'] = '192.168.122.2' - self.opnfv_networks['opnfv']['private'] = '172.29.240.10' - self.opnfv_networks['opnfv']['storage'] = '172.29.244.10' + self.opnfv_networks['opnfv']['admin'] = {} + self.opnfv_networks['opnfv']['admin']['address'] = '172.29.236.10/22' + self.opnfv_networks['opnfv']['public'] = {} + self.opnfv_networks['opnfv']['public']['address'] = '192.168.122.2/24' + self.opnfv_networks['opnfv']['public']['gateway'] = '192.168.122.1' + 
self.opnfv_networks['opnfv']['private'] = {} + self.opnfv_networks['opnfv']['private']['address'] = '172.29.240.10/22' + self.opnfv_networks['opnfv']['storage'] = {} + self.opnfv_networks['opnfv']['storage']['address'] = '172.29.244.10/24' self.read_pdf_idf() @@ -93,11 +101,15 @@ class XCIInventory(object): pdf_host_info = filter(lambda x: x['name'] == host, pdf['nodes'])[0] native_vlan_if = filter(lambda x: x['vlan'] == 'native', pdf_host_info['interfaces']) self.add_hostvar(hostname, 'ansible_host', native_vlan_if[0]['address']) + self.add_hostvar(hostname, 'ip', native_vlan_if[0]['address']) host_networks[hostname] = {} # And now record the rest of the information - for network in idf['idf']['net_config'].keys(): + for network, ndata in idf['idf']['net_config'].items(): network_interface_num = idf['idf']['net_config'][network]['interface'] - host_networks[hostname][network] = pdf_host_info['interfaces'][int(network_interface_num)]['address'] + host_networks[hostname][network] = {} + host_networks[hostname][network]['address'] = pdf_host_info['interfaces'][int(network_interface_num)]['address'] + "/" + str(ndata['mask']) + if 'gateway' in ndata.keys(): + host_networks[hostname][network]['gateway'] = str(ndata['gateway']) + "/" + str(ndata['mask']) host_networks.update(self.opnfv_networks) @@ -107,6 +119,24 @@ class XCIInventory(object): for parent in idf['xci'][self.installer]['groups'].keys(): map(lambda x: self.add_group(x, parent), idf['xci'][self.installer]['groups'][parent]) + # Read additional group variables + self.read_additional_group_vars() + + def read_additional_group_vars(self): + if not os.path.exists(self.flavor_files + "/inventory/group_vars"): + return + group_dir = self.flavor_files + "/inventory/group_vars/*.yml" + group_file = glob.glob(group_dir) + for g in group_file: + with open(g) as f: + try: + group_vars = yaml.safe_load(f) + except yaml.YAMLError as e: + print(e) + sys.exit(1) + for k,v in group_vars.items(): + 
self.add_groupvar(os.path.basename(g.replace('.yml', '')), k, v) + def dump(self, data): print (json.dumps(data, sort_keys=True, indent=2)) @@ -134,8 +164,8 @@ class XCIInventory(object): self.inventory['_meta']['hostvars'][host].update({param: value}) def add_groupvar(self, group, param, value): - if group not in self.groupvars(group): - self.inventory[group]['vars'] = {} + if param not in self.groupvars(group): + self.inventory[group]['vars'][param] = {} self.inventory[group]['vars'].update({param: value}) def hostvars(self): diff --git a/xci/playbooks/get-opnfv-scenario-requirements.yml b/xci/playbooks/get-opnfv-scenario-requirements.yml index 945a7802..a9165709 100644 --- a/xci/playbooks/get-opnfv-scenario-requirements.yml +++ b/xci/playbooks/get-opnfv-scenario-requirements.yml @@ -86,8 +86,6 @@ state: directory become: true - # NOTE(hwoarang) We have to check all levels of the local fact before we add it - # otherwise Ansible will fail. - name: Record scenario information ini_file: create: yes @@ -97,10 +95,6 @@ value: "{{ xci_scenario.role | basename }}" path: "/etc/ansible/facts.d/xci.fact" become: true - when: ansible_local is not defined - or (ansible_local is defined and ansible_local.xci is not defined) - or (ansible_local is defined and ansible_local.xci is defined and ansible_local.xci.scenarios is not defined) - or (ansible_local is defined and ansible_local.xci is defined and ansible_local.xci.scenarios is defined and ansible_local.xci.scenarios.role is not defined) - name: Fail if {{ deploy_scenario }} is not supported fail: diff --git a/xci/playbooks/roles/bootstrap-host/tasks/network_debian.yml b/xci/playbooks/roles/bootstrap-host/tasks/network_debian.yml index 380e4c52..3cac1e22 100644 --- a/xci/playbooks/roles/bootstrap-host/tasks/network_debian.yml +++ b/xci/playbooks/roles/bootstrap-host/tasks/network_debian.yml @@ -45,10 +45,10 @@ - { name: "{{ ansible_local.xci.network.xci_interface }}.10", vlan_id: 10 } - { name: "{{ 
ansible_local.xci.network.xci_interface }}.30", vlan_id: 30 } - { name: "{{ ansible_local.xci.network.xci_interface }}.20", vlan_id: 20 } - - { name: "br-mgmt", bridge_ports: "{{ ansible_local.xci.network.xci_interface }}.10", ip: "{{ host_info[inventory_hostname].admin }}", prefix: "255.255.252.0" } - - { name: "br-vxlan", bridge_ports: "{{ ansible_local.xci.network.xci_interface }}.30", ip: "{{ host_info[inventory_hostname].private }}", prefix: "255.255.252.0" } - - { name: "br-vlan", bridge_ports: "{{ ansible_local.xci.network.xci_interface }}", ip: "{{ host_info[inventory_hostname].public }}", prefix: "255.255.255.0", gateway: "192.168.122.1" } - - { name: "br-storage", bridge_ports: "{{ ansible_local.xci.network.xci_interface }}.20", ip: "{{ host_info[inventory_hostname].storage }}", prefix: "255.255.252.0" } + - { name: "br-mgmt", bridge_ports: "{{ ansible_local.xci.network.xci_interface }}.10", network: "{{ host_info[inventory_hostname].admin }}" } + - { name: "br-vxlan", bridge_ports: "{{ ansible_local.xci.network.xci_interface }}.30", network: "{{ host_info[inventory_hostname].private }}" } + - { name: "br-vlan", bridge_ports: "{{ ansible_local.xci.network.xci_interface }}", network: "{{ host_info[inventory_hostname].public }}" } + - { name: "br-storage", bridge_ports: "{{ ansible_local.xci.network.xci_interface }}.20", network: "{{ host_info[inventory_hostname].storage }}" } loop_control: label: "{{ item.name }}" diff --git a/xci/playbooks/roles/bootstrap-host/tasks/network_redhat.yml b/xci/playbooks/roles/bootstrap-host/tasks/network_redhat.yml index 9dce50b6..b06a8695 100644 --- a/xci/playbooks/roles/bootstrap-host/tasks/network_redhat.yml +++ b/xci/playbooks/roles/bootstrap-host/tasks/network_redhat.yml @@ -17,17 +17,17 @@ - { name: "{{ ansible_local.xci.network.xci_interface }}.10", bridge: "br-mgmt" , vlan_id: 10 } - { name: "{{ ansible_local.xci.network.xci_interface }}.20", bridge: "br-storage", vlan_id: 20 } - { name: "{{ 
ansible_local.xci.network.xci_interface }}.30", bridge: "br-vxlan" , vlan_id: 30 } - - { name: "br-vlan" , ip: "{{ host_info[inventory_hostname].public }}", prefix: 24 } - - { name: "br-mgmt" , ip: "{{ host_info[inventory_hostname].admin }}", prefix: 22 } - - { name: "br-storage", ip: "{{ host_info[inventory_hostname].storage }}", prefix: 22 } - - { name: "br-vxlan" , ip: "{{ host_info[inventory_hostname].private}}", prefix: 22 } + - { name: "br-vlan" , network: "{{ host_info[inventory_hostname].public }}" } + - { name: "br-mgmt" , network: "{{ host_info[inventory_hostname].admin }}" } + - { name: "br-storage", network: "{{ host_info[inventory_hostname].storage }}" } + - { name: "br-vxlan" , network: "{{ host_info[inventory_hostname].private }}" } loop_control: label: "{{ item.name }}" - name: Add default route through br-vlan lineinfile: path: "/etc/sysconfig/network-scripts/ifcfg-br-vlan" - line: "GATEWAY=192.168.122.1" + line: "GATEWAY={{ host_info[inventory_hostname]['public']['gateway'] | ipaddr('address') }}" - name: restart network service service: diff --git a/xci/playbooks/roles/bootstrap-host/tasks/network_suse.yml b/xci/playbooks/roles/bootstrap-host/tasks/network_suse.yml index b1059c81..c9c9d83c 100644 --- a/xci/playbooks/roles/bootstrap-host/tasks/network_suse.yml +++ b/xci/playbooks/roles/bootstrap-host/tasks/network_suse.yml @@ -17,10 +17,10 @@ - { name: "{{ ansible_local.xci.network.xci_interface }}.10", vlan_id: 10 } - { name: "{{ ansible_local.xci.network.xci_interface }}.30", vlan_id: 30 } - { name: "{{ ansible_local.xci.network.xci_interface }}.20", vlan_id: 20 } - - { name: "br-mgmt", bridge_ports: "{{ ansible_local.xci.network.xci_interface }}.10", ip: "{{ host_info[inventory_hostname].admin }}/22" } - - { name: "br-vxlan", bridge_ports: "{{ ansible_local.xci.network.xci_interface }}.30", ip: "{{ host_info[inventory_hostname].private }}/22" } - - { name: "br-vlan", bridge_ports: "{{ ansible_local.xci.network.xci_interface }}", ip: "{{ 
host_info[inventory_hostname].public }}/24" } - - { name: "br-storage", bridge_ports: "{{ ansible_local.xci.network.xci_interface }}.20", ip: "{{ host_info[inventory_hostname].storage }}/22" } + - { name: "br-mgmt", bridge_ports: "{{ ansible_local.xci.network.xci_interface }}.10", network: "{{ host_info[inventory_hostname].admin }}" } + - { name: "br-vxlan", bridge_ports: "{{ ansible_local.xci.network.xci_interface }}.30", network: "{{ host_info[inventory_hostname].private }}" } + - { name: "br-vlan", bridge_ports: "{{ ansible_local.xci.network.xci_interface }}", network: "{{ host_info[inventory_hostname].public }}" } + - { name: "br-storage", bridge_ports: "{{ ansible_local.xci.network.xci_interface }}.20", network: "{{ host_info[inventory_hostname].storage }}" } loop_control: label: "{{ item.name }}" @@ -35,7 +35,7 @@ src: "{{ installer_type }}/{{ ansible_os_family | lower }}.routes.j2" dest: "/etc/sysconfig/network/ifroute-{{ item.name }}" with_items: - - { name: "br-vlan", gateway: "192.168.122.1", route: "default" } + - { name: "br-vlan", gateway: "{{ host_info[inventory_hostname]['public']['gateway'] }}", route: "default" } - name: restart network service service: diff --git a/xci/playbooks/roles/bootstrap-host/templates/osa/debian.interface.j2 b/xci/playbooks/roles/bootstrap-host/templates/osa/debian.interface.j2 index 56db509b..3eddce45 100644 --- a/xci/playbooks/roles/bootstrap-host/templates/osa/debian.interface.j2 +++ b/xci/playbooks/roles/bootstrap-host/templates/osa/debian.interface.j2 @@ -25,14 +25,12 @@ iface {{ item.name }} inet static post-down ip link del br-vlan-veth || true bridge_ports br-vlan-veth {% endif %} -{% if item.ip is defined %} - address {{ item.ip }} +{% if item.network is defined %} + address {{ item.network.address | ipaddr('address') }} + netmask {{ item.network.address | ipaddr('netmask') }} {% endif %} -{% if item.prefix is defined %} - netmask {{ item.prefix }} -{% endif %} -{% if item.gateway is defined %} - gateway {{ 
item.gateway }} +{% if item.network is defined and item.network.gateway is defined %} + gateway {{ item.network.gateway | ipaddr('address') }} {% endif %} {% endif %} diff --git a/xci/playbooks/roles/bootstrap-host/templates/osa/redhat.interface.j2 b/xci/playbooks/roles/bootstrap-host/templates/osa/redhat.interface.j2 index d3364385..fa957764 100644 --- a/xci/playbooks/roles/bootstrap-host/templates/osa/redhat.interface.j2 +++ b/xci/playbooks/roles/bootstrap-host/templates/osa/redhat.interface.j2 @@ -14,9 +14,6 @@ TYPE=Bridge DELAY=0 STP=off {% endif %} -{% if item.ip is defined %} -IPADDR={{ item.ip }} -{% endif %} -{% if item.prefix is defined %} -PREFIX={{ item.prefix }} +{% if item.network is defined %} +IPADDR={{ item.network.address }} {% endif %} diff --git a/xci/playbooks/roles/bootstrap-host/templates/osa/suse.interface.j2 b/xci/playbooks/roles/bootstrap-host/templates/osa/suse.interface.j2 index 27b01eb4..70811a09 100644 --- a/xci/playbooks/roles/bootstrap-host/templates/osa/suse.interface.j2 +++ b/xci/playbooks/roles/bootstrap-host/templates/osa/suse.interface.j2 @@ -10,8 +10,8 @@ BRIDGE_FORWARDDELAY='0' BRIDGE_STP=off BRIDGE_PORTS={{ item.bridge_ports }} {% endif %} -{% if item.ip is defined %} -IPADDR={{ item.ip }} +{% if item.network is defined %} +IPADDR={{ item.network.address }} {% endif %} PRE_UP_SCRIPT="compat:suse:network-config-suse" POST_DOWN_SCRIPT="compat:suse:network-config-suse" diff --git a/xci/playbooks/roles/bootstrap-host/templates/osa/suse.routes.j2 b/xci/playbooks/roles/bootstrap-host/templates/osa/suse.routes.j2 index 7c868447..93941fad 100644 --- a/xci/playbooks/roles/bootstrap-host/templates/osa/suse.routes.j2 +++ b/xci/playbooks/roles/bootstrap-host/templates/osa/suse.routes.j2 @@ -1 +1 @@ -{{ item.route }} {{ item.gateway }} +{{ item.route }} {{ item.gateway | ipaddr('address') }} diff --git a/xci/var/idf.yml b/xci/var/idf.yml index 148508d9..0238baed 100644 --- a/xci/var/idf.yml +++ b/xci/var/idf.yml @@ -83,7 +83,7 @@ xci: 
groups: k8s-cluster: - kube-node - - kude-master + - kube-master hostnames: opnfv: opnfv node1: master1 diff --git a/xci/xci-deploy.sh b/xci/xci-deploy.sh index f22f5560..c1654151 100755 --- a/xci/xci-deploy.sh +++ b/xci/xci-deploy.sh @@ -100,8 +100,8 @@ echo "-------------------------------------------------------------------------" # Get scenario variables overrides #------------------------------------------------------------------------------- -source $(find $XCI_PATH/xci/scenarios/${DEPLOY_SCENARIO} -name xci_overrides) &>/dev/null || \ - source $(find $XCI_SCENARIOS_CACHE/${DEPLOY_SCENARIO} -name xci_overrides) &>/dev/null || : +source $(find $XCI_SCENARIOS_CACHE/${DEPLOY_SCENARIO} -name xci_overrides) &>/dev/null && + echo "Sourced ${DEPLOY_SCENARIO} overrides files successfully!" || : #------------------------------------------------------------------------------- # Log info to console
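Across the inventory and bootstrap-host changes in this patch, the old separate ip/prefix fields are replaced by a single network entry whose address is a CIDR string (e.g. '192.168.122.2/24'), from which the Jinja ipaddr filter derives the address, netmask, and gateway parts. The same derivation can be sketched with Python's stdlib ipaddress module (illustrative only; the playbooks themselves use the ipaddr filter):

```python
# Sketch of the CIDR handling the reworked templates rely on: one
# "address/prefix" string replaces the old separate ip and prefix
# fields, mirroring what ipaddr('address') / ipaddr('netmask') extract.
# The example addresses come from dynamic_inventory.py.
import ipaddress


def split_network(cidr):
    """Return (address, netmask, prefixlen) for an 'a.b.c.d/len' string."""
    iface = ipaddress.ip_interface(cidr)
    return str(iface.ip), str(iface.network.netmask), iface.network.prefixlen


# The opnfv host's public and admin networks:
print(split_network('192.168.122.2/24'))   # ('192.168.122.2', '255.255.255.0', 24)
print(split_network('172.29.236.10/22'))   # ('172.29.236.10', '255.255.252.0', 22)
```

Carrying the prefix inside the address string is what lets the Debian, Red Hat, and SUSE interface templates drop their hard-coded netmask and gateway values.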