author     Alexandru Avadanii <Alexandru.Avadanii@enea.com>  2018-02-01 00:28:17 +0100
committer  Alexandru Avadanii <Alexandru.Avadanii@enea.com>  2018-02-05 06:04:07 +0100
commit     6acecf3b104a072c60d071364344b9ff04994168 (patch)
tree       fa2e0bede90a83464f0818c07f38c06b77bdf42b /mcp/reclass/classes/cluster/mcp-pike-common-ha/infra/config.yml
parent     cefce150621699e9d9d3ac5c884a28ee4766c24d (diff)
[baremetal] Rename all to drop baremetal prefix
A few things differ between baremetal and virtual nodes:
- provisioning method;
- network setup.
Since we now support fully dynamic network configuration based on PDF +
IDF, dynamic provisioning of VMs on the jumpserver (as virtual cluster
nodes), and MaaS-driven baremetal provisioning, drop the 'baremetal-'
prefix from cluster model names and prepare for unified scenarios (see
the illustrative sketch below).
Note that some limitations still apply; for example, virtual nodes are
spawned only on the jumpserver (localhost) for now.
JIRA: FUEL-310
Change-Id: If20077ac37c6f15961468abc58db7e16f2c29260
Signed-off-by: Alexandru Avadanii <Alexandru.Avadanii@enea.com>
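As a hedged illustration of the rename (not part of this change-set): the model added below is addressable in reclass as cluster.mcp-pike-common-ha.infra.config, so a class that previously pulled in the prefixed model would now reference the unified name. The pre-rename name is assumed from the commit title.

classes:
  # assumed pre-rename reference (inferred from the commit title):
  #   - cluster.baremetal-mcp-pike-common-ha.infra.config
  # post-rename reference:
  - cluster.mcp-pike-common-ha.infra.config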
Diffstat (limited to 'mcp/reclass/classes/cluster/mcp-pike-common-ha/infra/config.yml')
-rw-r--r--  mcp/reclass/classes/cluster/mcp-pike-common-ha/infra/config.yml  163
1 file changed, 163 insertions, 0 deletions
diff --git a/mcp/reclass/classes/cluster/mcp-pike-common-ha/infra/config.yml b/mcp/reclass/classes/cluster/mcp-pike-common-ha/infra/config.yml
new file mode 100644
index 000000000..0fb8e6418
--- /dev/null
+++ b/mcp/reclass/classes/cluster/mcp-pike-common-ha/infra/config.yml
@@ -0,0 +1,163 @@
+##############################################################################
+# Copyright (c) 2017 Mirantis Inc., Enea AB and others.
+# All rights reserved. This program and the accompanying materials
+# are made available under the terms of the Apache License, Version 2.0
+# which accompanies this distribution, and is available at
+# http://www.apache.org/licenses/LICENSE-2.0
+##############################################################################
+---
+classes:
+  - service.git.client
+  - system.linux.system.single
+  - system.linux.system.repo.mcp.salt
+  - system.linux.system.repo.saltstack.xenial
+  - system.salt.master.api
+  - system.salt.master.pkg
+  - system.salt.minion.ca.salt_master
+  - system.reclass.storage.salt
+  - system.reclass.storage.system.physical_control_cluster
+  - system.reclass.storage.system.openstack_control_cluster
+  - system.reclass.storage.system.openstack_proxy_cluster
+  - system.reclass.storage.system.openstack_database_cluster
+  - system.reclass.storage.system.openstack_message_queue_cluster
+  - system.reclass.storage.system.openstack_telemetry_cluster
+  # - system.reclass.storage.system.stacklight_log_cluster
+  # - system.reclass.storage.system.stacklight_monitor_cluster
+  # - system.reclass.storage.system.stacklight_telemetry_cluster
+  - system.reclass.storage.system.infra_maas_single
+  - cluster.mcp-pike-common-ha.infra.lab_proxy_pdf
+parameters:
+  _param:
+    salt_master_base_environment: prd
+    reclass_data_repository: local
+    salt_master_environment_repository: "https://github.com/tcpcloud"
+    salt_master_environment_revision: master
+    single_address: ${_param:infra_config_address}
+    deploy_address: ${_param:infra_config_deploy_address}
+    pxe_address: ${_param:opnfv_infra_config_pxe_address}
+    salt_master_host: ${_param:infra_config_deploy_address}
+    # yamllint disable rule:line-length
+    salt_api_password_hash: "$6$sGnRlxGf$al5jMCetLP.vfI/fTl3Z0N7Za1aeiexL487jAtyRABVfT3NlwZxQGVhO7S1N8OwS/34VHYwZQA8lkXwKMN/GS1"
+    dhcp_nic: ${_param:opnfv_fn_vm_primary_interface}
+    single_nic: ${_param:opnfv_fn_vm_secondary_interface}
+    pxe_nic: ${_param:opnfv_fn_vm_tertiary_interface}
+  linux:
+    network:
+      interface:
+        dhcp:
+          enabled: true
+          type: eth
+          proto: dhcp
+          name: ${_param:dhcp_nic}
+        single:
+          enabled: true
+          type: eth
+          proto: static
+          name: ${_param:single_nic}
+          address: ${_param:single_address}
+          netmask: 255.255.255.0
+        pxe:
+          enabled: true
+          type: eth
+          proto: static
+          name: ${_param:pxe_nic}
+          address: ${_param:pxe_address}
+          netmask: 255.255.255.0
+  salt:
+    master:
+      accept_policy: open_mode
+      file_recv: true
+  reclass:
+    storage:
+      data_source:
+        engine: local
+      node:
+        infra_kvm_node01:
+          params:
+            keepalived_vip_priority: 100
+            linux_system_codename: xenial
+        infra_kvm_node02:
+          params:
+            keepalived_vip_priority: 101
+            linux_system_codename: xenial
+        infra_kvm_node03:
+          params:
+            keepalived_vip_priority: 102
+            linux_system_codename: xenial
+        openstack_telemetry_node01:
+          params:
+            linux_system_codename: xenial
+        openstack_telemetry_node02:
+          params:
+            linux_system_codename: xenial
+        openstack_telemetry_node03:
+          params:
+            linux_system_codename: xenial
+        openstack_message_queue_node01:
+          params:
+            linux_system_codename: xenial
+        openstack_message_queue_node02:
+          params:
+            linux_system_codename: xenial
+        openstack_message_queue_node03:
+          params:
+            linux_system_codename: xenial
+        openstack_proxy_node01:
+          params:
+            linux_system_codename: xenial
+        openstack_proxy_node02:
+          params:
+            linux_system_codename: xenial
+        # stacklight_log_node01:
+        #   classes:
+        #     - system.elasticsearch.client.single
+        # stacklight_monitor_node01:
+        #   classes:
+        #     - system.grafana.client.single
+        #     - system.kibana.client.single
+        openstack_control_node01:
+          classes:
+            - cluster.mcp-pike-common-ha.openstack_control_init
+          params:
+            linux_system_codename: xenial
+        openstack_control_node02:
+          params:
+            linux_system_codename: xenial
+        openstack_control_node03:
+          params:
+            linux_system_codename: xenial
+        openstack_database_node01:
+          classes:
+            - cluster.mcp-pike-common-ha.openstack_database_init
+          params:
+            linux_system_codename: xenial
+        openstack_database_node02:
+          params:
+            linux_system_codename: xenial
+        openstack_database_node03:
+          params:
+            linux_system_codename: xenial
+        openstack_compute_node01:
+          name: ${_param:openstack_compute_node01_hostname}
+          domain: ${_param:cluster_domain}
+          classes:
+            - cluster.${_param:cluster_name}.openstack.compute
+          params:
+            salt_master_host: ${_param:reclass_config_master}
+            linux_system_codename: xenial
+            control_address: ${_param:openstack_compute_node01_control_address}
+            single_address: ${_param:openstack_compute_node01_single_address}
+            tenant_address: ${_param:openstack_compute_node01_tenant_address}
+            external_address: ${_param:openstack_compute_node01_external_address}
+        openstack_compute_node02:
+          name: ${_param:openstack_compute_node02_hostname}
+          domain: ${_param:cluster_domain}
+          classes:
+            - cluster.${_param:cluster_name}.openstack.compute
+          params:
+            salt_master_host: ${_param:reclass_config_master}
+            linux_system_codename: xenial
+            control_address: ${_param:openstack_compute_node02_control_address}
+            single_address: ${_param:openstack_compute_node02_single_address}
+            tenant_address: ${_param:openstack_compute_node02_tenant_address}
+            external_address: ${_param:openstack_compute_node02_external_address}
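A note on the ${_param:...} placeholders above: reclass interpolates them from parameters defined elsewhere in the class hierarchy, and per the commit message those values are now derived dynamically from the PDF/IDF. A minimal sketch, using hypothetical example values (none of these come from this commit), of what such an upstream definition could look like:

parameters:
  _param:
    # hypothetical example values only -- the real ones are generated from PDF/IDF
    infra_config_address: 10.167.4.100
    infra_config_deploy_address: 192.168.11.100
    opnfv_infra_config_pxe_address: 192.168.20.100
    opnfv_fn_vm_primary_interface: ens3
    opnfv_fn_vm_secondary_interface: ens4
    opnfv_fn_vm_tertiary_interface: ens5

With definitions like these in scope, a reference such as single_address: ${_param:infra_config_address} in the file above would resolve to 10.167.4.100 on the node that includes this class.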