author     Alexandru Avadanii <Alexandru.Avadanii@enea.com>   2017-08-01 22:18:41 +0200
committer  Alexandru Avadanii <Alexandru.Avadanii@enea.com>   2017-08-17 02:59:30 +0200
commit     5039d069265df15ed3d8e41f7a1c7f9457a9d58a (patch)
tree       18a9160f72be9a01ef0008e3aa9912e18262057d /mcp/reclass/classes/cluster/baremetal-mcp-ocata-ovs-ha/infra/kvm.yml
parent     9720ddf955b76d678a08dc7ea53684400c659ce3 (diff)
Bring in baremetal support
- ci/deploy.sh: fail if default scenario file is missing;
- start by copying reclass/classes/cluster/virtual-mcp-ocata-ovs as
classes/cluster/baremetal-mcp-ocata-ovs;
- add new state (maas) that will handle MaaS configuration;
- Split PXE network in two for baremetal:
* rename old "pxe" virtual network to "mcpcontrol", make it
non-configurable and identical for baremetal/virtual deploys;
* new "pxebr" bridge is dedicated for MaaS fabric network, which
comes with its own DHCP, TFTP etc.;
- Drop hardcoded PXE gateway & static IP for MaaS node, since
"mcpcontrol" remains a NAT-ed virtual network, with its own DHCP;
- Keep internet access available on first interfaces for cfg01/mas01;
- Align MaaS IP addresses (all x.y.z.3), add public IP for easy debugging
  via the MaaS dashboard;
- Add static IP in new network segment (192.168.11.3/24) on MaaS
node's PXE interface;
- Set MaaS PXE interface MTU 1500 (weird network errors with jumbo);
- MaaS node: Add NAT iptables traffic forward from "mcpcontrol" to
"pxebr" interfaces;
- MaaS: Add hardcoded lf-pod2 machine info (fixed indentation in v6);
- Switch our targeted scenario to HA;
* scenario: s/os-nosdn-nofeature-noha/os-nosdn-nofeature-ha/
- maas region: Use mcp.rsa.pub from ~ubuntu/.ssh/authorized_keys;
- add route for 192.168.11.0/24 via mas01 on cfg01;
- fix race condition on kvm nodes network setup:
* add "noifupdown" support in salt formula for linux.network;
* keep primary eth/br-mgmt unconfigured till reboot;
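The NAT forwarding and static-route items above could be sketched roughly as below. This is an illustrative fragment only: the uplink interface name (ens3) and the mas01 address on "mcpcontrol" (10.20.0.3) are assumptions, not values taken from this change; the actual rules live in the deploy scripts/states.

```shell
# On mas01: masquerade and forward traffic between the NAT-ed "mcpcontrol"
# network and the dedicated MaaS fabric bridge "pxebr".
iptables -t nat -A POSTROUTING -o ens3 -j MASQUERADE   # ens3 = "mcpcontrol" uplink (assumed name)
iptables -A FORWARD -i pxebr -o ens3 -j ACCEPT
iptables -A FORWARD -i ens3 -o pxebr -m state --state RELATED,ESTABLISHED -j ACCEPT

# On cfg01: route the new MaaS PXE segment (192.168.11.0/24) via mas01.
ip route add 192.168.11.0/24 via 10.20.0.3   # 10.20.0.3 = mas01 on "mcpcontrol" (assumed)
```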
TODO:
- Read all this info from PDF (Pod Descriptor File) later;
- investigate leftover references to eno2, eth3;
- add public network interfaces config, IPs;
- improve wait conditions for MaaS commission/deploy;
- report upstream breakage in system.single;
Change-Id: Ie8dd584b140991d2bd992acdfe47f5644bf51409
Signed-off-by: Michael Polenchuk <mpolenchuk@mirantis.com>
Signed-off-by: Guillermo Herrero <Guillermo.Herrero@enea.com>
Signed-off-by: Charalampos Kominos <Charalampos.Kominos@enea.com>
Signed-off-by: Alexandru Avadanii <Alexandru.Avadanii@enea.com>
Diffstat (limited to 'mcp/reclass/classes/cluster/baremetal-mcp-ocata-ovs-ha/infra/kvm.yml')
-rw-r--r--  mcp/reclass/classes/cluster/baremetal-mcp-ocata-ovs-ha/infra/kvm.yml  150
1 file changed, 150 insertions(+), 0 deletions(-)
diff --git a/mcp/reclass/classes/cluster/baremetal-mcp-ocata-ovs-ha/infra/kvm.yml b/mcp/reclass/classes/cluster/baremetal-mcp-ocata-ovs-ha/infra/kvm.yml
new file mode 100644
index 000000000..5c33f9ecd
--- /dev/null
+++ b/mcp/reclass/classes/cluster/baremetal-mcp-ocata-ovs-ha/infra/kvm.yml
@@ -0,0 +1,150 @@
+classes:
+- system.linux.system.repo.mcp.openstack
+- system.linux.system.repo.mcp.extra
+- system.linux.system.repo.saltstack.xenial
+- service.keepalived.cluster.single
+- system.glusterfs.server.volume.glance
+- system.glusterfs.server.volume.keystone
+- system.glusterfs.server.cluster
+- system.salt.control.virt
+- system.salt.control.cluster.openstack_control_cluster
+- system.salt.control.cluster.openstack_proxy_cluster
+- system.salt.control.cluster.openstack_database_cluster
+- system.salt.control.cluster.openstack_message_queue_cluster
+- system.salt.control.cluster.openstack_telemetry_cluster
+# - system.salt.control.cluster.stacklight_server_cluster
+# - system.salt.control.cluster.stacklight_log_cluster
+# - system.salt.control.cluster.stacklight_telemetry_cluster
+- cluster.baremetal-mcp-ocata-ovs-ha.infra
+parameters:
+  _param:
+    linux_system_codename: xenial
+    cluster_vip_address: ${_param:infra_kvm_address}
+    cluster_node01_address: ${_param:infra_kvm_node01_address}
+    cluster_node02_address: ${_param:infra_kvm_node02_address}
+    cluster_node03_address: ${_param:infra_kvm_node03_address}
+    keepalived_vip_interface: br-ctl
+    keepalived_vip_virtual_router_id: 69
+    deploy_nic: enp6s0
+  salt:
+    control:
+      size:  # RAM 4096,8192,16384,32768,65536
+        ## Default production sizing
+        openstack.control:
+          cpu: 6
+          ram: 8192
+          disk_profile: small
+          net_profile: default
+        openstack.database:
+          cpu: 6
+          ram: 8192
+          disk_profile: large
+          net_profile: default
+        openstack.message_queue:
+          cpu: 6
+          ram: 8192
+          disk_profile: small
+          net_profile: default
+        openstack.telemetry:
+          cpu: 4
+          ram: 4096
+          disk_profile: xxlarge
+          net_profile: default
+        openstack.proxy:
+          cpu: 4
+          ram: 4096
+          disk_profile: small
+          net_profile: default
+#        stacklight.log:
+#          cpu: 2
+#          ram: 4096
+#          disk_profile: xxlarge
+#          net_profile: default
+#        stacklight.server:
+#          cpu: 2
+#          ram: 4096
+#          disk_profile: small
+#          net_profile: default
+#        stacklight.telemetry:
+#          cpu: 2
+#          ram: 4096
+#          disk_profile: xxlarge
+#          net_profile: default
+      cluster:
+        internal:
+          node:
+            prx02:
+              provider: kvm03.${_param:cluster_domain}
+            mdb01:
+              image: ${_param:salt_control_xenial_image}
+            mdb02:
+              image: ${_param:salt_control_xenial_image}
+            mdb03:
+              image: ${_param:salt_control_xenial_image}
+            ctl01:
+              image: ${_param:salt_control_xenial_image}
+            ctl02:
+              image: ${_param:salt_control_xenial_image}
+            ctl03:
+              image: ${_param:salt_control_xenial_image}
+            dbs01:
+              image: ${_param:salt_control_xenial_image}
+            dbs02:
+              image: ${_param:salt_control_xenial_image}
+            dbs03:
+              image: ${_param:salt_control_xenial_image}
+            msg01:
+              image: ${_param:salt_control_xenial_image}
+            msg02:
+              image: ${_param:salt_control_xenial_image}
+            msg03:
+              image: ${_param:salt_control_xenial_image}
+            prx01:
+              image: ${_param:salt_control_xenial_image}
+            prx02:
+              image: ${_param:salt_control_xenial_image}
+  virt:
+    nic:
+      default:
+        eth0:
+          bridge: br-mgmt
+          model: virtio
+        eth1:
+          bridge: br-ctl
+          model: virtio
+  linux:
+    network:
+      interface:
+        eth3:
+          enabled: true
+          type: eth
+          proto: manual
+          address: 0.0.0.0
+          netmask: 255.255.255.0
+          name: ${_param:deploy_nic}
+          noifupdown: true
+        br-mgmt:
+          enabled: true
+          proto: dhcp
+          type: bridge
+          name_servers:
+          - 8.8.8.8
+          - 8.8.4.4
+          use_interfaces:
+          - ${_param:deploy_nic}
+          noifupdown: true
+        vlan300:
+          enabled: true
+          proto: manual
+          type: vlan
+          name: ${_param:deploy_nic}.300
+          use_interfaces:
+          - ${_param:deploy_nic}
+        br-ctl:
+          enabled: true
+          type: bridge
+          proto: static
+          address: ${_param:single_address}
+          netmask: 255.255.255.0
+          use_interfaces:
+          - ${_param:deploy_nic}.300
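As a usage sketch (minion targeting and orchestration differ per deployment; the glob 'kvm*' is an assumption), the network portion of this class would be rendered onto the kvm nodes via salt, with "noifupdown: true" keeping br-mgmt and the primary NIC unconfigured until the node reboots, as described in the commit message:

```shell
# Apply only the linux.network state on the kvm nodes; interfaces marked
# noifupdown are written to the config but not brought up immediately,
# avoiding the race condition during initial network setup.
salt 'kvm*' state.apply linux.network

# A reboot then activates the new bridge/VLAN configuration.
salt 'kvm*' system.reboot
```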