author      2017-05-04 11:36:05 -0400
committer   2017-05-19 14:26:56 -0400
commit      e3a14b510778a4562875600f7b393b27c1dc8eba (patch)
tree        38be0bc4c5277f37cefe01fafef54609c512f667 /installers/apex
parent      c8eba7272eb090642e2489a9dafab10060c1e238 (diff)
Hello, OPNFV installer projects
Firstly: this patchset looks a bit messy at the outset. The relevant
parts are:
installers/apex/*.j2
installers/joid/*.j2
installers/compass4nfv/*.j2
and the new verify job that runs check-jinja2.sh
If you look at installers/*/pod_config.yaml.j2 you will see the network
settings for the apex, joid, and compass4nfv installers. The hard-coded
values that could be templated have been replaced with Jinja2 values,
which are populated by reading one of labs/*/*/config/pod.yaml.
e.g.:

  nodes:
    - name: pod1-node1

becomes

  nodes:
    - name: {{ conf['nodes'][0]['name'] }}
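To make the substitution concrete, here is a minimal sketch of the
rendering step, assuming the Python jinja2 and PyYAML packages (the
actual verify job runs check-jinja2.sh; this is illustration only):

    # Parse a pod.yaml snippet, then expose it to the template as
    # 'conf', the variable name the .j2 files use.
    import yaml
    from jinja2 import Template

    conf = yaml.safe_load("nodes:\n  - name: pod1-node1\n")
    template = Template("- name: {{ conf['nodes'][0]['name'] }}")
    print(template.render(conf=conf))  # prints: - name: pod1-node1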
In my last patchset I had ignored the data already present in pod.yaml
(which is defined in the Pharos spec here:
https://gerrit.opnfv.org/gerrit/gitweb?p=pharos.git;a=blob;f=config/pod1.yaml )
and instead created my own YAML file in an attempt to figure out
everything the installers need to know to install on any given pod.
That was counterproductive.
I have included a script (securedlab/check-jinja2.sh) that will check all
securedlab/installers/*/pod_config.yaml.j2
against all
securedlab/labs/*/pod*.yaml
This is a first step towards having your installers run on any pod that
has a pod file created for it. (securedlab/labs/*/pod[pod-number].yaml)
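For illustration, the cross-check amounts to the loop below. This
Python sketch is not the script itself, just the same idea; it assumes
the jinja2 and PyYAML packages and is run from the securedlab root:

    # Render every installer template against every pod file.
    # StrictUndefined turns any key a template needs but a pod file
    # lacks into a hard failure, and the rendered output must still
    # parse as YAML.
    import glob
    import sys

    import yaml
    from jinja2 import Environment, StrictUndefined

    env = Environment(undefined=StrictUndefined)
    failures = 0
    for tmpl_path in glob.glob("installers/*/pod_config.yaml.j2"):
        with open(tmpl_path) as f:
            template = env.from_string(f.read())
        for pod_path in glob.glob("labs/*/pod*.yaml"):
            with open(pod_path) as f:
                conf = yaml.safe_load(f)
            try:
                yaml.safe_load(template.render(conf=conf))
            except Exception as exc:
                print("FAIL %s x %s: %s" % (tmpl_path, pod_path, exc))
                failures += 1
    sys.exit(1 if failures else 0)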
Moving forward, I would like your input on identifying variables in
your installers' configs that are needed for deployment but not covered
by securedlab/labs/*/pod*.yaml.
Thanks for your time and feedback.
Best Regards,
Aric
Change-Id: I5f2f2b403f219a1ec4b35e46a5bc49037a0a89cf
Signed-off-by: agardner <agardner@linuxfoundation.org>
Diffstat (limited to 'installers/apex')
-rwxr-xr-x  installers/apex/network_settings.jinja2 | 216 ----
-rw-r--r--  installers/apex/pod_config.yaml.j2      |  61 ++++
2 files changed, 61 insertions(+), 216 deletions(-)
diff --git a/installers/apex/network_settings.jinja2 b/installers/apex/network_settings.jinja2
deleted file mode 100755
index 4ef349c..0000000
--- a/installers/apex/network_settings.jinja2
+++ /dev/null
@@ -1,216 +0,0 @@
-# This configuration file defines Network Environment for a
-# Baremetal Deployment of OPNFV. It contains default values
-# for 5 following networks:
-#
-# - admin
-# - tenant*
-# - external*
-# - storage*
-# - api*
-# *) optional networks
-#
-# Optional networks will be consolidated with the admin network
-# if not explicitly configured.
-#
-# See short description of the networks in the comments below.
-#
-# "admin" is the short name for Control Plane Network.
-# This network should be IPv4 even it is an IPv6 deployment
-# IPv6 does not have PXE boot support.
-# During OPNFV deployment it is used for node provisioning which will require
-# PXE booting as well as running a DHCP server on this network. Be sure to
-# disable any other DHCP/TFTP server on this network.
-#
-# "tenant" is the network used for tenant traffic.
-#
-# "external" is the network which should have internet or external
-# connectivity. External OpenStack networks will be configured to egress this
-# network. There can be multiple external networks, but only one assigned as
-# "public" which OpenStack public API's will register.
-#
-# "storage" is the network for storage I/O.
-#
-# "api" is an optional network for splitting out OpenStack service API
-# communication. This should be used for IPv6 deployments.
-
-
-#Meta data for the network configuration
-network-config-metadata:
-  title: LF-POD-1 Network config
-  version: 0.1
-  created: Mon Dec 28 2015
-  comment: None
-
-# DNS Settings
-dns-domain: opnfvlf.org
-dns-search: opnfvlf.org
-dns_nameservers:
-  - 8.8.8.8
-  - 8.8.4.4
-# NTP servers
-ntp:
-  - 0.se.pool.ntp.org
-  - 1.se.pool.ntp.org
-# Syslog server
-syslog:
-  server: 10.128.1.24
-  transport: 'tcp'
-
-# Common network settings
-networks:
-  admin:
-    enabled: true
-    installer_vm:
-      nic_type: interface
-      members:
-        - enp6s0
-      vlan: native
-      ip: 192.30.9.1
-    usable_ip_range:
-      - 192.30.9.12
-      - 192.30.9.99
-    gateway: 192.30.9.1
-    cidr: 192.30.9.0/24
-    dhcp_range:
-      - 192.30.9.2
-      - 192.30.9.10
-    nic_mapping:
-      compute:
-        phys_type: interface
-        members:
-          - enp6s0
-      controller:
-        phys_type: interface
-        members:
-          - enp6s0
-
-  tenant:
-    enabled: true
-    cidr: 11.0.0.0/24
-    mtu: 1500
-    overlay_id_range: 2,65535
-
-    segmentation_type: vxlan
-
-    nic_mapping:
-      compute:
-        phys_type: interface
-        uio_driver: uio_pci_generic # UIO driver to use for DPDK scenarios. The value is ignored for non-DPDK scenarios.
-        vlan: native
-        members:
-          - enp7s0
-      controller:
-        phys_type: interface
-        vlan: native
-        members:
-          - enp7s0
-
-  external:
-    - public:
-        enabled: true
-        mtu: 1500
-        installer_vm:
-          nic_type: interface
-          vlan: native
-          members:
-            - enp8s0
-          ip: 172.30.9.67
-        cidr: 172.30.9.0/24
-        gateway: 172.30.9.1
-        floating_ip_range:
-          - 172.30.9.200
-          - 172.30.9.220
-        usable_ip_range:
-          - 172.30.9.70
-          - 172.30.9.199
-
-        nic_mapping:
-          compute:
-            phys_type: interface
-            vlan: native
-            members:
-              - enp8s0
-          controller:
-            phys_type: interface
-            vlan: native
-            members:
-              - enp8s0
-        external_overlay:
-          name: Public_internet
-          type: flat
-          gateway: 172.30.9.1
-    - private_cloud:
-        enabled: false
-        mtu: 1500
-        installer_vm:
-          nic_type: interface
-          vlan: 101
-          members:
-            - em1
-          ip: 192.168.38.1
-        cidr: 192.168.38.0/24
-        gateway: 192.168.38.1
-        floating_ip_range:
-          - 192.168.38.200
-          - 192.168.38.220
-        usable_ip_range:
-          - 192.168.38.10
-          - 192.168.38.199
-
-        nic_mapping:
-          compute:
-            phys_type: interface
-            vlan: 101
-            members:
-              - enp8s0
-          controller:
-            phys_type: interface
-            vlan: 101
-            members:
-              - enp8s0
-        external_overlay:
-          name: private_cloud
-          type: vlan
-          segmentation_id: 101
-          gateway: 192.168.38.1
-
-  storage:
-    enabled: true
-    cidr: 12.0.0.0/24
-    mtu: 1500
-    nic_mapping:
-      compute:
-        phys_type: interface
-        vlan: native
-        members:
-          - enp9s0
-      controller:
-        phys_type: interface
-        vlan: native
-        members:
-          - enp9s0
-
-  api:
-    enabled: false
-    cidr: fd00:fd00:fd00:4000::/64
-    vlan: 13
-    mtu: 1500
-    nic_mapping:
-      compute:
-        phys_type: interface
-        vlan: native
-        members:
-          - enp10s0
-      controller:
-        phys_type: interface
-        vlan: native
-        members:
-          - enp10s0
-
-# Apex specific settings
-apex:
-  networks:
-    admin:
-      introspection_range:
-        - 192.30.9.100
-        - 192.30.9.120
diff --git a/installers/apex/pod_config.yaml.j2 b/installers/apex/pod_config.yaml.j2
new file mode 100644
index 0000000..2554b47
--- /dev/null
+++ b/installers/apex/pod_config.yaml.j2
@@ -0,0 +1,61 @@
+nodes:
+  node1:
+    mac_address: "{{ conf['nodes'][0]['remote_mangement']['mac_address'] }}"
+    ipmi_ip: {{ conf['nodes'][0]['remote_mangement']['address'] }}
+    ipmi_user: {{ conf['jumphost']['remote_para']['user'] }}
+    ipmi_pass: {{ conf['jumphost']['remote_para']['pass'] }}
+    pm_type: "pxe_{{ conf['jumphost']['remote_para']['type'] }}tool"
+    cpus: {{ conf['nodes'][0]['node']['cpus'] }}
+    memory: {{ conf['nodes'][0]['node']['memory'] }}
+    disk: 40
+    disk_device: sdb
+    arch: "{{ conf['nodes'][0]['node']['arch'] }}"
+    capabilities: "profile:control"
+  node2:
+    mac_address: "{{ conf['nodes'][1]['remote_mangement']['mac_address'] }}"
+    ipmi_ip: {{ conf['nodes'][1]['remote_mangement']['address'] }}
+    ipmi_user: {{ conf['jumphost']['remote_para']['user'] }}
+    ipmi_pass: {{ conf['jumphost']['remote_para']['pass'] }}
+    pm_type: "pxe_{{ conf['jumphost']['remote_para']['type'] }}tool"
+    cpus: {{ conf['nodes'][1]['node']['cpus'] }}
+    memory: {{ conf['nodes'][1]['node']['memory'] }}
+    disk: 40
+    disk_device: sdb
+    arch: "{{ conf['nodes'][1]['node']['arch'] }}"
+    capabilities: "profile:control"
+  node3:
+    mac_address: "{{ conf['nodes'][2]['remote_mangement']['mac_address'] }}"
+    ipmi_ip: {{ conf['nodes'][2]['remote_mangement']['address'] }}
+    ipmi_user: {{ conf['jumphost']['remote_para']['user'] }}
+    ipmi_pass: {{ conf['jumphost']['remote_para']['pass'] }}
+    pm_type: "pxe_{{ conf['jumphost']['remote_para']['type'] }}tool"
+    cpus: {{ conf['nodes'][2]['node']['cpus'] }}
+    memory: {{ conf['nodes'][2]['node']['memory'] }}
+    disk: 40
+    disk_device: sdb
+    arch: "{{ conf['nodes'][2]['node']['arch'] }}"
+    capabilities: "profile:control"
+  node4:
+    mac_address: "{{ conf['nodes'][3]['remote_mangement']['mac_address'] }}"
+    ipmi_ip: {{ conf['nodes'][3]['remote_mangement']['address'] }}
+    ipmi_user: {{ conf['jumphost']['remote_para']['user'] }}
+    ipmi_pass: {{ conf['jumphost']['remote_para']['pass'] }}
+    pm_type: "pxe_{{ conf['jumphost']['remote_para']['type'] }}tool"
+    cpus: {{ conf['nodes'][3]['node']['cpus'] }}
+    memory: {{ conf['nodes'][3]['node']['memory'] }}
+    disk: 40
+    disk_device: sdb
+    arch: "{{ conf['nodes'][3]['node']['arch'] }}"
+    capabilities: "profile:compute"
+  node5:
+    mac_address: "{{ conf['nodes'][4]['remote_mangement']['mac_address'] }}"
+    ipmi_ip: {{ conf['nodes'][4]['remote_mangement']['address'] }}
+    ipmi_user: {{ conf['jumphost']['remote_para']['user'] }}
+    ipmi_pass: {{ conf['jumphost']['remote_para']['pass'] }}
+    pm_type: "pxe_{{ conf['jumphost']['remote_para']['type'] }}tool"
+    cpus: {{ conf['nodes'][4]['node']['cpus'] }}
+    memory: {{ conf['nodes'][4]['node']['memory'] }}
+    disk: 40
+    disk_device: sdb
+    arch: "{{ conf['nodes'][4]['node']['arch'] }}"
+    capabilities: "profile:compute"