author    Alexandru Avadanii <Alexandru.Avadanii@enea.com>  2017-08-01 22:18:41 +0200
committer Alexandru Avadanii <Alexandru.Avadanii@enea.com>  2017-08-17 02:59:30 +0200
commit    5039d069265df15ed3d8e41f7a1c7f9457a9d58a (patch)
tree      18a9160f72be9a01ef0008e3aa9912e18262057d /mcp/reclass/classes/cluster/baremetal-mcp-ocata-common/haproxy_openstack_api.yml
parent    9720ddf955b76d678a08dc7ea53684400c659ce3 (diff)
Bring in baremetal support
- ci/deploy.sh: fail if the default scenario file is missing;
- start by copying reclass/classes/cluster/virtual-mcp-ocata-ovs as
  classes/cluster/baremetal-mcp-ocata-ovs;
- add a new state (maas) that handles MaaS configuration;
- split the PXE network in two for baremetal:
  * rename the old "pxe" virtual network to "mcpcontrol", make it
    non-configurable and identical for baremetal/virtual deploys;
  * the new "pxebr" bridge is dedicated to the MaaS fabric network, which
    comes with its own DHCP, TFTP etc.;
- drop the hardcoded PXE gateway & static IP for the MaaS node, since
  "mcpcontrol" remains a NAT-ed virtual network with its own DHCP;
- keep internet access available on the first interfaces of cfg01/mas01;
- align MaaS IP addresses (all x.y.z.3), add a public IP for easy debugging
  via the MaaS dashboard;
- add a static IP in a new network segment (192.168.11.3/24) on the MaaS
  node's PXE interface;
- set the MaaS PXE interface MTU to 1500 (weird network errors with jumbo
  frames);
- MaaS node: add NAT iptables traffic forwarding from the "mcpcontrol" to the
  "pxebr" interface;
- MaaS: add hardcoded lf-pod2 machine info (fixed indentation in v6);
- switch the targeted scenario to HA:
  * scenario: s/os-nosdn-nofeature-noha/os-nosdn-nofeature-ha/
- maas region: use mcp.rsa.pub from ~ubuntu/.ssh/authorized_keys;
- add a route for 192.168.11.0/24 via mas01 on cfg01;
- fix a race condition in the kvm nodes' network setup (a minimal pillar
  sketch follows after this message):
  * add "noifupdown" support in the salt formula for linux.network;
  * keep the primary eth/br-mgmt unconfigured until reboot;

TODO:
- read all this info from the PDF (Pod Descriptor File) later;
- investigate leftover references to eno2, eth3;
- add public network interface configuration, IPs;
- improve wait conditions for MaaS commission/deploy;
- report upstream breakage in system.single;

Change-Id: Ie8dd584b140991d2bd992acdfe47f5644bf51409
Signed-off-by: Michael Polenchuk <mpolenchuk@mirantis.com>
Signed-off-by: Guillermo Herrero <Guillermo.Herrero@enea.com>
Signed-off-by: Charalampos Kominos <Charalampos.Kominos@enea.com>
Signed-off-by: Alexandru Avadanii <Alexandru.Avadanii@enea.com>
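The static addressing, MTU and "noifupdown" changes described above all land in
the linux.network pillar consumed by salt-formula-linux. A minimal sketch of
what the MaaS node's PXE interface could look like under those assumptions
(the interface name enp1s0 is a placeholder; it is not taken from this commit):

parameters:
  linux:
    network:
      interface:
        enp1s0:                    # hypothetical name of the MaaS PXE interface
          enabled: true
          type: eth
          proto: static
          address: 192.168.11.3    # static IP in the new PXE segment
          netmask: 255.255.255.0
          mtu: 1500                # avoid jumbo-frame related network errors
          noifupdown: true         # leave the interface untouched until reboot

The same "noifupdown: true" flag is what keeps the kvm nodes' primary
eth/br-mgmt interfaces unconfigured until reboot, avoiding the race condition
mentioned in the commit message.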
Diffstat (limited to 'mcp/reclass/classes/cluster/baremetal-mcp-ocata-common/haproxy_openstack_api.yml')
-rw-r--r--  mcp/reclass/classes/cluster/baremetal-mcp-ocata-common/haproxy_openstack_api.yml  166
1 file changed, 166 insertions(+), 0 deletions(-)
diff --git a/mcp/reclass/classes/cluster/baremetal-mcp-ocata-common/haproxy_openstack_api.yml b/mcp/reclass/classes/cluster/baremetal-mcp-ocata-common/haproxy_openstack_api.yml
new file mode 100644
index 000000000..e63e9d5c9
--- /dev/null
+++ b/mcp/reclass/classes/cluster/baremetal-mcp-ocata-common/haproxy_openstack_api.yml
@@ -0,0 +1,166 @@
+parameters:
+ _param:
+ haproxy_check: check inter 15s fastinter 2s downinter 4s rise 3 fall 3
+ haproxy:
+ proxy:
+ listen:
+ cinder_api:
+ type: openstack-service
+ service_name: cinder
+ binds:
+ - address: ${_param:cluster_vip_address}
+ port: 8776
+ servers:
+ - name: ctl01
+ host: ${_param:cluster_node01_address}
+ port: 8776
+ params: ${_param:haproxy_check}
+ glance_api:
+ type: openstack-service
+ service_name: glance
+ binds:
+ - address: ${_param:cluster_vip_address}
+ port: 9292
+ servers:
+ - name: ctl01
+ host: ${_param:cluster_node01_address}
+ port: 9292
+ params: ${_param:haproxy_check}
+ glance_registry_api:
+ type: general-service
+ service_name: glance
+ binds:
+ - address: ${_param:cluster_vip_address}
+ port: 9191
+ servers:
+ - name: ctl01
+ host: ${_param:cluster_node01_address}
+ port: 9191
+ params: ${_param:haproxy_check}
+ glare:
+ type: general-service
+ service_name: glare
+ binds:
+ - address: ${_param:cluster_vip_address}
+ port: 9494
+ servers:
+ - name: ctl01
+ host: ${_param:cluster_node01_address}
+ port: 9494
+ params: ${_param:haproxy_check}
+ heat_cloudwatch_api:
+ type: openstack-service
+ service_name: heat
+ binds:
+ - address: ${_param:cluster_vip_address}
+ port: 8003
+ servers:
+ - name: ctl01
+ host: ${_param:cluster_node01_address}
+ port: 8003
+ params: ${_param:haproxy_check}
+ heat_api:
+ type: openstack-service
+ service_name: heat
+ binds:
+ - address: ${_param:cluster_vip_address}
+ port: 8004
+ servers:
+ - name: ctl01
+ host: ${_param:cluster_node01_address}
+ port: 8004
+ params: ${_param:haproxy_check}
+ heat_cfn_api:
+ type: openstack-service
+ service_name: heat
+ binds:
+ - address: ${_param:cluster_vip_address}
+ port: 8000
+ servers:
+ - name: ctl01
+ host: ${_param:cluster_node01_address}
+ port: 8000
+ params: ${_param:haproxy_check}
+ keystone_public_api:
+ type: openstack-service
+ service_name: keystone
+ binds:
+ - address: ${_param:cluster_vip_address}
+ port: 5000
+ servers:
+ - name: ctl01
+ host: ${_param:cluster_node01_address}
+ port: 5000
+ params: ${_param:haproxy_check}
+ keystone_admin_api:
+ type: openstack-service
+ service_name: keystone
+ binds:
+ - address: ${_param:cluster_vip_address}
+ port: 35357
+ servers:
+ - name: ctl01
+ host: ${_param:cluster_node01_address}
+ port: 35357
+ params: ${_param:haproxy_check}
+ neutron_api:
+ type: openstack-service
+ service_name: neutron
+ binds:
+ - address: ${_param:cluster_vip_address}
+ port: 9696
+ servers:
+ - name: ctl01
+ host: ${_param:cluster_node01_address}
+ port: 9696
+ params: ${_param:haproxy_check}
+ nova_placement_api:
+ mode: http
+ binds:
+ - address: ${_param:cluster_vip_address}
+ port: 8778
+ options:
+ - httpclose
+ - httplog
+ health-check:
+ http:
+ options:
+ - expect status 401
+ servers:
+ - name: ctl01
+ host: ${_param:cluster_node01_address}
+ port: 8778
+ params: ${_param:haproxy_check}
+ nova_ec2_api:
+ type: general-service
+ service_name: nova
+ check: false
+ binds:
+ - address: ${_param:cluster_vip_address}
+ port: 8773
+ servers:
+ - name: ctl01
+ host: ${_param:cluster_node01_address}
+ port: 8773
+ params: ${_param:haproxy_check}
+ nova_api:
+ type: openstack-service
+ service_name: nova
+ binds:
+ - address: ${_param:cluster_vip_address}
+ port: 8774
+ servers:
+ - name: ctl01
+ host: ${_param:cluster_node01_address}
+ port: 8774
+ params: ${_param:haproxy_check}
+ nova_metadata_api:
+ type: openstack-service
+ binds:
+ - address: ${_param:cluster_vip_address}
+ port: 8775
+ servers:
+ - name: ctl01
+ host: ${_param:cluster_node01_address}
+ port: 8775
+ params: ${_param:haproxy_check}
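Every listen entry in this new class points at a single backend (ctl01). Since
the commit switches the targeted scenario to HA, a cluster-specific class would
normally extend these server lists with the remaining controllers. A minimal
sketch, assuming cluster_node02_address and cluster_node03_address are defined
elsewhere in the reclass model (they are not part of this diff):

parameters:
  haproxy:
    proxy:
      listen:
        nova_api:
          type: openstack-service
          service_name: nova
          binds:
            - address: ${_param:cluster_vip_address}
              port: 8774
          servers:
            # ctl01 as in this file, plus two more controllers for HA
            - name: ctl01
              host: ${_param:cluster_node01_address}
              port: 8774
              params: ${_param:haproxy_check}
            - name: ctl02
              host: ${_param:cluster_node02_address}
              port: 8774
              params: ${_param:haproxy_check}
            - name: ctl03
              host: ${_param:cluster_node03_address}
              port: 8774
              params: ${_param:haproxy_check}

Because reclass merges pillar data, such an override only needs to restate the
servers list; the shared haproxy_check parameter defined at the top of this
file is reused via ${_param:haproxy_check}.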