Diffstat (limited to 'fuel/prototypes/auto-deploy/documentation')
-rw-r--r--  fuel/prototypes/auto-deploy/documentation/1-introduction.txt      36
-rw-r--r--  fuel/prototypes/auto-deploy/documentation/2-dea.txt             1082
-rw-r--r--  fuel/prototypes/auto-deploy/documentation/3-dha.txt              65
-rw-r--r--  fuel/prototypes/auto-deploy/documentation/4-dha-adapter-api.txt  128
-rw-r--r--  fuel/prototypes/auto-deploy/documentation/5-dea-api.txt           47
5 files changed, 1358 insertions, 0 deletions
diff --git a/fuel/prototypes/auto-deploy/documentation/1-introduction.txt b/fuel/prototypes/auto-deploy/documentation/1-introduction.txt
new file mode 100644
index 000000000..c4efed5a8
--- /dev/null
+++ b/fuel/prototypes/auto-deploy/documentation/1-introduction.txt
@@ -0,0 +1,36 @@
+The structure is being reworked. This page is an introduction to DEA
+and DHA.
+
+Introduction
+
+The aim of the deployment prototype is to try out a (hopefully)
+logical setup to support Fuel deployment on a variety of different
+hardware platforms using a common data format to describe the
+deployment itself and another data format to describe the hardware in
+question.
+
+DEA.yaml The DEA.yaml file describes a Fuel deployment, complete with
+ all settings. The easiest way to create this file is to use
+ the "create_templates.sh" script in an existing deployment to
+ copy its configuration to the DEA.yaml file.
+
+DHA.yaml The DHA.yaml file describes the hardware setup for an
+ installation. This file denotes among other things which DHA
+ adapter to use when deploying Fuel on this hardware setup.
+
+DHA adapter interface: The DHA adapter interface contains a number of
+  function calls available to the automatic Fuel deployer script
+  (deploy.sh). Each adapter provides an implementation of this
+  interface so that the deployer can orchestrate the installation.
+  There is currently an example DHA adapter, "libvirt", which is able
+  to deploy Fuel in a nested KVM environment. Future adapters could
+  support HP C7000, Dell R620 or other types of hardware.
+
+  It is important to note that a given DHA adapter may implement the
+  dha_fuelCustomInstall() function, which could for instance install
+  the Fuel master as a VM or via PXE.
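+
+As an illustration only, a minimal adapter for a libvirt based lab
+could implement a couple of the interface functions roughly like this
+(the function names come from the DHA adapter API; the "node<id>"
+domain naming is just an assumption):
+
+dha_getAdapterName () {
+    echo "libvirt"
+}
+
+# Power on node <id>, assuming the libvirt domain is named node<id>
+dha_nodePowerOn () {
+    local nodeid=$1
+    virsh start "node${nodeid}"
+}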
+
+A typical installation would be kicked off by the following command:
+
+./deploy.sh <isofile to deploy> <dea.yaml> <dha.yaml>
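+
+For example, a small libvirt deployment could be kicked off like this
+(the ISO file name is only illustrative):
+
+./deploy.sh opnfv.iso dea.yaml dha.yaml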
diff --git a/fuel/prototypes/auto-deploy/documentation/2-dea.txt b/fuel/prototypes/auto-deploy/documentation/2-dea.txt
new file mode 100644
index 000000000..36f805c26
--- /dev/null
+++ b/fuel/prototypes/auto-deploy/documentation/2-dea.txt
@@ -0,0 +1,1082 @@
+The structure is being reworked. This page describes the DEA.yaml
+file.
+
+The DEA.yaml file describes an actual Fuel deployment. This YAML file
+can either be edited from an existing template or created from an
+existing deployment by running the "create_templates.sh" script.
+
+The top level fields and their origin
+
+compute: Network translations for the compute nodes (from astute.yaml).
+Hoping that this is sufficient and we don't need to be more granular!
+
+controller: Network translations for the controller nodes (from
+astute.yaml). Hoping that this is sufficient and we don't need to be
+more granular!
+
+created: Creation time for this DEA file.
+
+environment_mode: Environment mode from "fuel env" (ha_compact,
+multinode, ...)
+
+fuel: The networking, DNS and NTP information from the Fuel node
+astute.yaml.
+
+network: The "fuel network" part.
+
+nodes: A data structure describing the role and network configuration
+for all nodes.
+
+opnfv: This structure contains two sub structures "controller" and
+"compute" containing the "opnfv" namespace from their respective
+astute.yaml.
+
+settings: The "fuel settings" part. This is the complete settings,
+thinking it can come in handy for future modifications. I think that
+the "pre_deploy.sh" should be replaced by us customising these
+settings instead (way into the future though).
+
+title: Deployment Environment Adapter (DEA)
+
+version: DEA API to be used for parsing this file. Currently 1.1.
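+
+For orientation, the top level layout of the live example below boils
+down to the following skeleton (values elided):
+
+version: 1.1
+created: <timestamp>
+comment: <free text>
+nodes: ...
+environment_mode: ...
+fuel: ...
+controller: ...
+compute: ...
+opnfv: ...
+network: ...
+interfaces: ...
+settings: ...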
+
+Live example (looooong!)
+
+# DEA API version supported
+version: 1.1
+created: Wed Apr 22 09:43:22 UTC 2015
+comment: Small libvirt deployment
+nodes:
+- id: 1
+ interfaces:
+ eth0:
+ - fuelweb_admin
+ - management
+ eth1:
+ - storage
+ eth2:
+ - private
+ eth3:
+ - public
+ role: compute
+- id: 2
+ interfaces:
+ eth0:
+ - fuelweb_admin
+ - management
+ eth1:
+ - storage
+ eth2:
+ - private
+ eth3:
+ - public
+ role: controller
+environment_mode: multinode
+fuel:
+ ADMIN_NETWORK:
+ dhcp_pool_end: 10.20.0.254
+ dhcp_pool_start: 10.20.0.3
+ ipaddress: 10.20.0.2
+ netmask: 255.255.255.0
+ DNS_DOMAIN: domain.tld
+ DNS_SEARCH: domain.tld
+ DNS_UPSTREAM: 8.8.8.8
+ FUEL_ACCESS:
+ password: admin
+ user: admin
+ HOSTNAME: fuel
+ NTP1: 0.pool.ntp.org
+ NTP2: 1.pool.ntp.org
+ NTP3: 2.pool.ntp.org
+controller:
+- action: add-br
+ name: br-eth0
+- action: add-port
+ bridge: br-eth0
+ name: eth0
+- action: add-br
+ name: br-eth1
+- action: add-port
+ bridge: br-eth1
+ name: eth1
+- action: add-br
+ name: br-eth2
+- action: add-port
+ bridge: br-eth2
+ name: eth2
+- action: add-br
+ name: br-eth3
+- action: add-port
+ bridge: br-eth3
+ name: eth3
+- action: add-br
+ name: br-ex
+- action: add-br
+ name: br-mgmt
+- action: add-br
+ name: br-storage
+- action: add-br
+ name: br-fw-admin
+- action: add-patch
+ bridges:
+ - br-eth1
+ - br-storage
+ tags:
+ - 102
+ - 0
+ vlan_ids:
+ - 102
+ - 0
+- action: add-patch
+ bridges:
+ - br-eth0
+ - br-mgmt
+ tags:
+ - 101
+ - 0
+ vlan_ids:
+ - 101
+ - 0
+- action: add-patch
+ bridges:
+ - br-eth0
+ - br-fw-admin
+ trunks:
+ - 0
+- action: add-patch
+ bridges:
+ - br-eth3
+ - br-ex
+ trunks:
+ - 0
+- action: add-br
+ name: br-prv
+- action: add-patch
+ bridges:
+ - br-eth2
+ - br-prv
+compute:
+- action: add-br
+ name: br-eth0
+- action: add-port
+ bridge: br-eth0
+ name: eth0
+- action: add-br
+ name: br-eth1
+- action: add-port
+ bridge: br-eth1
+ name: eth1
+- action: add-br
+ name: br-eth2
+- action: add-port
+ bridge: br-eth2
+ name: eth2
+- action: add-br
+ name: br-eth3
+- action: add-port
+ bridge: br-eth3
+ name: eth3
+- action: add-br
+ name: br-mgmt
+- action: add-br
+ name: br-storage
+- action: add-br
+ name: br-fw-admin
+- action: add-patch
+ bridges:
+ - br-eth1
+ - br-storage
+ tags:
+ - 102
+ - 0
+ vlan_ids:
+ - 102
+ - 0
+- action: add-patch
+ bridges:
+ - br-eth0
+ - br-mgmt
+ tags:
+ - 101
+ - 0
+ vlan_ids:
+ - 101
+ - 0
+- action: add-patch
+ bridges:
+ - br-eth0
+ - br-fw-admin
+ trunks:
+ - 0
+- action: add-br
+ name: br-prv
+- action: add-patch
+ bridges:
+ - br-eth2
+ - br-prv
+opnfv:
+ compute:
+ dns:
+ compute:
+ - 8.8.8.8
+ - 8.8.4.4
+ controller:
+ - 8.8.8.8
+ - 8.8.4.4
+ hosts:
+ - address: 46.253.206.181
+ fqdn: tor.e1.se
+ name: tor
+ ntp:
+ compute: 'server node-4.domain.tld
+
+ '
+ controller: 'server 0.ubuntu.pool.ntp.org
+
+ server 1.ubuntu.pool.ntp.org
+
+ server 2.ubuntu.pool.ntp.org
+
+ server 3.ubuntu.pool.ntp.org
+
+ '
+ controller:
+ dns:
+ compute:
+ - 8.8.8.8
+ - 8.8.4.4
+ controller:
+ - 8.8.8.8
+ - 8.8.4.4
+ hosts:
+ - address: 46.253.206.181
+ fqdn: tor.e1.se
+ name: tor
+ ntp:
+ compute: 'server node-4.domain.tld
+
+ '
+ controller: 'server 0.ubuntu.pool.ntp.org
+
+ server 1.ubuntu.pool.ntp.org
+
+ server 2.ubuntu.pool.ntp.org
+
+ server 3.ubuntu.pool.ntp.org
+
+ '
+network:
+ networking_parameters:
+ base_mac: fa:16:3e:00:00:00
+ dns_nameservers:
+ - 8.8.4.4
+ - 8.8.8.8
+ floating_ranges:
+ - - 172.16.0.130
+ - 172.16.0.254
+ gre_id_range:
+ - 2
+ - 65535
+ internal_cidr: 192.168.111.0/24
+ internal_gateway: 192.168.111.1
+ net_l23_provider: ovs
+ segmentation_type: vlan
+ vlan_range:
+ - 1000
+ - 1200
+ networks:
+ - cidr: 172.16.0.0/24
+ gateway: 172.16.0.1
+ ip_ranges:
+ - - 172.16.0.2
+ - 172.16.0.126
+ meta:
+ assign_vip: true
+ cidr: 172.16.0.0/24
+ configurable: true
+ floating_range_var: floating_ranges
+ ip_range:
+ - 172.16.0.2
+ - 172.16.0.126
+ map_priority: 1
+ name: public
+ notation: ip_ranges
+ render_addr_mask: public
+ render_type: null
+ use_gateway: true
+ vlan_start: null
+ name: public
+ vlan_start: null
+ - cidr: 192.168.0.0/24
+ gateway: null
+ ip_ranges:
+ - - 192.168.0.2
+ - 192.168.0.254
+ meta:
+ assign_vip: true
+ cidr: 192.168.0.0/24
+ configurable: true
+ map_priority: 2
+ name: management
+ notation: cidr
+ render_addr_mask: internal
+ render_type: cidr
+ use_gateway: false
+ vlan_start: 101
+ name: management
+ vlan_start: 101
+ - cidr: 192.168.1.0/24
+ gateway: null
+ ip_ranges:
+ - - 192.168.1.2
+ - 192.168.1.254
+ meta:
+ assign_vip: false
+ cidr: 192.168.1.0/24
+ configurable: true
+ map_priority: 2
+ name: storage
+ notation: cidr
+ render_addr_mask: storage
+ render_type: cidr
+ use_gateway: false
+ vlan_start: 102
+ name: storage
+ vlan_start: 102
+ - cidr: null
+ gateway: null
+ ip_ranges: []
+ meta:
+ assign_vip: false
+ configurable: false
+ map_priority: 2
+ name: private
+ neutron_vlan_range: true
+ notation: null
+ render_addr_mask: null
+ render_type: null
+ seg_type: vlan
+ use_gateway: false
+ vlan_start: null
+ name: private
+ vlan_start: null
+ - cidr: 10.20.0.0/24
+ gateway: null
+ ip_ranges:
+ - - 10.20.0.3
+ - 10.20.0.254
+ meta:
+ assign_vip: false
+ configurable: false
+ map_priority: 0
+ notation: ip_ranges
+ render_addr_mask: null
+ render_type: null
+ unmovable: true
+ use_gateway: true
+ name: fuelweb_admin
+ vlan_start: null
+interfaces:
+ eth0:
+ - fuelweb_admin
+ - management
+ eth1:
+ - storage
+ eth2:
+ - private
+ eth3:
+ - public
+settings:
+ editable:
+ access:
+ email:
+ description: Email address for Administrator
+ label: email
+ type: text
+ value: admin@localhost
+ weight: 40
+ metadata:
+ label: Access
+ weight: 10
+ password:
+ description: Password for Administrator
+ label: password
+ type: password
+ value: admin
+ weight: 20
+ tenant:
+ description: Tenant (project) name for Administrator
+ label: tenant
+ regex:
+ error: Invalid tenant name
+          source: ^(?!services$)(?!nova$)(?!glance$)(?!keystone$)(?!neutron$)(?!cinder$)(?!swift$)(?!ceph$)(?![Gg]uest$).*
+ type: text
+ value: admin
+ weight: 30
+ user:
+ description: Username for Administrator
+ label: username
+ regex:
+ error: Invalid username
+          source: ^(?!services$)(?!nova$)(?!glance$)(?!keystone$)(?!neutron$)(?!cinder$)(?!swift$)(?!ceph$)(?![Gg]uest$).*
+ type: text
+ value: admin
+ weight: 10
+ additional_components:
+ ceilometer:
+ description: If selected, Ceilometer component will be installed
+ label: Install Ceilometer
+ type: checkbox
+ value: false
+ weight: 40
+ heat:
+ description: ''
+ label: ''
+ type: hidden
+ value: true
+ weight: 30
+ metadata:
+ label: Additional Components
+ weight: 20
+ murano:
+ description: If selected, Murano component will be installed
+ label: Install Murano
+ restrictions:
+ - cluster:net_provider != 'neutron'
+ type: checkbox
+ value: false
+ weight: 20
+ sahara:
+ description: If selected, Sahara component will be installed
+ label: Install Sahara
+ type: checkbox
+ value: false
+ weight: 10
+ common:
+ auth_key:
+ description: Public key(s) to include in authorized_keys on deployed nodes
+ label: Public Key
+ type: text
+ value: ''
+ weight: 70
+ auto_assign_floating_ip:
+ description: If selected, OpenStack will automatically assign a floating IP
+ to a new instance
+ label: Auto assign floating IP
+ restrictions:
+ - cluster:net_provider == 'neutron'
+ type: checkbox
+ value: false
+ weight: 40
+ compute_scheduler_driver:
+ label: Scheduler driver
+ type: radio
+ value: nova.scheduler.filter_scheduler.FilterScheduler
+ values:
+ - data: nova.scheduler.filter_scheduler.FilterScheduler
+ description: Currently the most advanced OpenStack scheduler. See the OpenStack
+ documentation for details.
+ label: Filter scheduler
+ - data: nova.scheduler.simple.SimpleScheduler
+ description: This is 'naive' scheduler which tries to find the least loaded
+ host
+ label: Simple scheduler
+ weight: 40
+ debug:
+ description: Debug logging mode provides more information, but requires more
+ disk space.
+ label: OpenStack debug logging
+ type: checkbox
+ value: false
+ weight: 20
+ disable_offload:
+ description: If set, generic segmentation offload (gso) and generic receive
+ offload (gro) on physical nics will be disabled. See ethtool man.
+ label: Disable generic offload on physical nics
+ restrictions:
+ - action: hide
+          condition: cluster:net_provider == 'neutron' and networking_parameters:segmentation_type == 'gre'
+ type: checkbox
+ value: true
+ weight: 80
+ libvirt_type:
+ label: Hypervisor type
+ type: radio
+ value: kvm
+ values:
+ - data: kvm
+ description: Choose this type of hypervisor if you run OpenStack on hardware
+ label: KVM
+ restrictions:
+ - settings:common.libvirt_type.value == 'vcenter'
+ - data: qemu
+ description: Choose this type of hypervisor if you run OpenStack on virtual
+ hosts.
+ label: QEMU
+ restrictions:
+          - settings:common.libvirt_type.value == 'vcenter'
+ - data: vcenter
+ description: Choose this type of hypervisor if you run OpenStack in a vCenter
+ environment.
+ label: vCenter
+ restrictions:
+ - settings:common.libvirt_type.value != 'vcenter' or cluster:net_provider
+ == 'neutron'
+ weight: 30
+ metadata:
+ label: Common
+ weight: 30
+ nova_quota:
+ description: Quotas are used to limit CPU and memory usage for tenants. Enabling
+ quotas will increase load on the Nova database.
+ label: Nova quotas
+ type: checkbox
+ value: false
+ weight: 25
+ resume_guests_state_on_host_boot:
+ description: Whether to resume previous guests state when the host reboots.
+ If enabled, this option causes guests assigned to the host to resume their
+ previous state. If the guest was running a restart will be attempted when
+ nova-compute starts. If the guest was not running previously, a restart
+ will not be attempted.
+ label: Resume guests state on host boot
+ type: checkbox
+ value: false
+ weight: 60
+ use_cow_images:
+ description: For most cases you will want qcow format. If it's disabled, raw
+ image format will be used to run VMs. OpenStack with raw format currently
+ does not support snapshotting.
+ label: Use qcow format for images
+ type: checkbox
+ value: true
+ weight: 50
+ corosync:
+ group:
+ description: ''
+ label: Group
+ type: text
+ value: 226.94.1.1
+ weight: 10
+ metadata:
+ label: Corosync
+ restrictions:
+ - action: hide
+ condition: 'true'
+ weight: 50
+ port:
+ description: ''
+ label: Port
+ type: text
+ value: '12000'
+ weight: 20
+ verified:
+ description: Set True only if multicast is configured correctly on router.
+ label: Need to pass network verification.
+ type: checkbox
+ value: false
+ weight: 10
+ external_dns:
+ dns_list:
+ description: List of upstream DNS servers, separated by comma
+ label: DNS list
+ type: text
+ value: 8.8.8.8, 8.8.4.4
+ weight: 10
+ metadata:
+ label: Upstream DNS
+ weight: 90
+ external_ntp:
+ metadata:
+ label: Upstream NTP
+ weight: 100
+ ntp_list:
+ description: List of upstream NTP servers, separated by comma
+ label: NTP servers list
+ type: text
+ value: 0.pool.ntp.org, 1.pool.ntp.org
+ weight: 10
+ kernel_params:
+ kernel:
+ description: Default kernel parameters
+ label: Initial parameters
+ type: text
+ value: console=ttyS0,9600 console=tty0 rootdelay=90 nomodeset
+ weight: 45
+ metadata:
+ label: Kernel parameters
+ weight: 40
+ neutron_mellanox:
+ metadata:
+ enabled: true
+ label: Mellanox Neutron components
+ toggleable: false
+ weight: 50
+ plugin:
+ label: Mellanox drivers and SR-IOV plugin
+ type: radio
+ value: disabled
+ values:
+ - data: disabled
+ description: If selected, Mellanox drivers, Neutron and Cinder plugin will
+ not be installed.
+ label: Mellanox drivers and plugins disabled
+ restrictions:
+ - settings:storage.iser.value == true
+ - data: drivers_only
+ description: If selected, Mellanox Ethernet drivers will be installed to
+ support networking over Mellanox NIC. Mellanox Neutron plugin will not
+ be installed.
+ label: Install only Mellanox drivers
+ restrictions:
+ - settings:common.libvirt_type.value != 'kvm'
+ - data: ethernet
+ description: If selected, both Mellanox Ethernet drivers and Mellanox network
+ acceleration (Neutron) plugin will be installed.
+ label: Install Mellanox drivers and SR-IOV plugin
+ restrictions:
+ - settings:common.libvirt_type.value != 'kvm' or not (cluster:net_provider
+ == 'neutron' and networking_parameters:segmentation_type == 'vlan')
+ weight: 60
+ vf_num:
+ description: Note that one virtual function will be reserved to the storage
+ network, in case of choosing iSER.
+ label: Number of virtual NICs
+ restrictions:
+ - settings:neutron_mellanox.plugin.value != 'ethernet'
+ type: text
+ value: '16'
+ weight: 70
+ nsx_plugin:
+ connector_type:
+ description: Default network transport type to use
+ label: NSX connector type
+ type: select
+ value: stt
+ values:
+ - data: gre
+ label: GRE
+ - data: ipsec_gre
+ label: GRE over IPSec
+ - data: stt
+ label: STT
+ - data: ipsec_stt
+ label: STT over IPSec
+ - data: bridge
+ label: Bridge
+ weight: 80
+ l3_gw_service_uuid:
+ description: UUID for the default L3 gateway service to use with this cluster
+ label: L3 service UUID
+ regex:
+ error: Invalid L3 gateway service UUID
+ source: '[a-f\d]{8}-[a-f\d]{4}-[a-f\d]{4}-[a-f\d]{4}-[a-f\d]{12}'
+ type: text
+ value: ''
+ weight: 50
+ metadata:
+ enabled: false
+ label: VMware NSX
+ restrictions:
+ - action: hide
+ condition: cluster:net_provider != 'neutron' or networking_parameters:net_l23_provider
+ != 'nsx'
+ weight: 20
+ nsx_controllers:
+ description: One or more IPv4[:port] addresses of NSX controller node, separated
+ by comma (e.g. 10.30.30.2,192.168.110.254:443)
+ label: NSX controller endpoint
+ regex:
+ error: Invalid controller endpoints, specify valid IPv4[:port] pair
+          source: ^(([\d]|[1-9][\d]|1[\d]{2}|2[0-4][\d]|25[0-5])\.){3}([\d]|[1-9][\d]|1[\d]{2}|2[0-4][\d]|25[0-5])(:(6553[0-5]|655[0-2][\d]|65[0-4][\d]{2}|6[0-4][\d]{3}|5[\d]{4}|[\d][\d]{0,3}))?(,(([\d]|[1-9][\d]|1[\d]{2}|2[0-4][\d]|25[0-5])\.){3}([\d]|[1-9][\d]|1[\d]{2}|2[0-4][\d]|25[0-5])(:(6553[0-5]|655[0-2][\d]|65[0-4][\d]{2}|6[0-4][\d]{3}|5[\d]{4}|[\d][\d]{0,3}))?)*$
+ type: text
+ value: ''
+ weight: 60
+ nsx_password:
+ description: Password for Administrator
+ label: NSX password
+ regex:
+ error: Empty password
+ source: \S
+ type: password
+ value: ''
+ weight: 30
+ nsx_username:
+ description: NSX administrator's username
+ label: NSX username
+ regex:
+ error: Empty username
+ source: \S
+ type: text
+ value: admin
+ weight: 20
+ packages_url:
+ description: URL to NSX specific packages
+ label: URL to NSX bits
+ regex:
+ error: Invalid URL, specify valid HTTP/HTTPS URL with IPv4 address (e.g.
+ http://10.20.0.2/nsx)
+          source: ^https?://(([\d]|[1-9][\d]|1[\d]{2}|2[0-4][\d]|25[0-5])\.){3}([\d]|[1-9][\d]|1[\d]{2}|2[0-4][\d]|25[0-5])(:(6553[0-5]|655[0-2][\d]|65[0-4][\d]{2}|6[0-4][\d]{3}|5[\d]{4}|[\d][\d]{0,3}))?(/.*)?$
+ type: text
+ value: ''
+ weight: 70
+ replication_mode:
+ description: ''
+ label: NSX cluster has Service nodes
+ type: checkbox
+ value: true
+ weight: 90
+ transport_zone_uuid:
+ description: UUID of the pre-existing default NSX Transport zone
+ label: Transport zone UUID
+ regex:
+ error: Invalid transport zone UUID
+ source: '[a-f\d]{8}-[a-f\d]{4}-[a-f\d]{4}-[a-f\d]{4}-[a-f\d]{12}'
+ type: text
+ value: ''
+ weight: 40
+ provision:
+ metadata:
+ label: Provision
+ restrictions:
+ - action: hide
+ condition: not ('experimental' in version:feature_groups)
+ weight: 80
+ method:
+ description: Which provision method to use for this cluster.
+ label: Provision method
+ type: radio
+ value: cobbler
+ values:
+ - data: image
+ description: Copying pre-built images on a disk.
+ label: Image
+ - data: cobbler
+ description: Install from scratch using anaconda or debian-installer.
+ label: Classic (use anaconda or debian-installer)
+ public_network_assignment:
+ assign_to_all_nodes:
+ description: When disabled, public network will be assigned to controllers
+ and zabbix-server only
+ label: Assign public network to all nodes
+ type: checkbox
+ value: false
+ weight: 10
+ metadata:
+ label: Public network assignment
+ restrictions:
+ - action: hide
+ condition: cluster:net_provider != 'neutron'
+ weight: 50
+ storage:
+ ephemeral_ceph:
+ description: Configures Nova to store ephemeral volumes in RBD. This works
+ best if Ceph is enabled for volumes and images, too. Enables live migration
+ of all types of Ceph backed VMs (without this option, live migration will
+ only work with VMs launched from Cinder volumes).
+ label: Ceph RBD for ephemeral volumes (Nova)
+ restrictions:
+ - settings:common.libvirt_type.value == 'vcenter'
+ type: checkbox
+ value: false
+ weight: 75
+ images_ceph:
+ description: Configures Glance to use the Ceph RBD backend to store images.
+ If enabled, this option will prevent Swift from installing.
+ label: Ceph RBD for images (Glance)
+ type: checkbox
+ value: false
+ weight: 30
+ images_vcenter:
+ description: Configures Glance to use the vCenter/ESXi backend to store images.
+ If enabled, this option will prevent Swift from installing.
+ label: VMWare vCenter/ESXi datastore for images (Glance)
+ restrictions:
+ - settings:common.libvirt_type.value != 'vcenter'
+ type: checkbox
+ value: false
+ weight: 35
+ iser:
+ description: 'High performance block storage: Cinder volumes over iSER protocol
+ (iSCSI over RDMA). This feature requires SR-IOV capabilities in the NIC,
+ and will use a dedicated virtual function for the storage network.'
+ label: iSER protocol for volumes (Cinder)
+ restrictions:
+ - settings:storage.volumes_lvm.value != true or settings:common.libvirt_type.value
+ != 'kvm'
+ type: checkbox
+ value: false
+ weight: 11
+ metadata:
+ label: Storage
+ weight: 60
+ objects_ceph:
+ description: Configures RadosGW front end for Ceph RBD. This exposes S3 and
+ Swift API Interfaces. If enabled, this option will prevent Swift from installing.
+ label: Ceph RadosGW for objects (Swift API)
+ restrictions:
+ - settings:storage.images_ceph.value == false
+ type: checkbox
+ value: false
+ weight: 80
+ osd_pool_size:
+ description: Configures the default number of object replicas in Ceph. This
+ number must be equal to or lower than the number of deployed 'Storage -
+ Ceph OSD' nodes.
+ label: Ceph object replication factor
+ regex:
+ error: Invalid number
+ source: ^[1-9]\d*$
+ restrictions:
+ - settings:common.libvirt_type.value == 'vcenter'
+ type: text
+ value: '2'
+ weight: 85
+ vc_datacenter:
+ description: Inventory path to a datacenter. If you want to use ESXi host
+ as datastore, it should be "ha-datacenter".
+ label: Datacenter name
+ regex:
+ error: Empty datacenter
+ source: \S
+ restrictions:
+ - action: hide
+          condition: settings:storage.images_vcenter.value == false or settings:common.libvirt_type.value != 'vcenter'
+ type: text
+ value: ''
+ weight: 65
+ vc_datastore:
+ description: Datastore associated with the datacenter.
+ label: Datastore name
+ regex:
+ error: Empty datastore
+ source: \S
+ restrictions:
+ - action: hide
+          condition: settings:storage.images_vcenter.value == false or settings:common.libvirt_type.value != 'vcenter'
+ type: text
+ value: ''
+ weight: 60
+ vc_host:
+ description: IP Address of vCenter/ESXi
+ label: vCenter/ESXi IP
+ regex:
+ error: Specify valid IPv4 address
+          source: ^(([\d]|[1-9][\d]|1[\d]{2}|2[0-4][\d]|25[0-5])\.){3}([\d]|[1-9][\d]|1[\d]{2}|2[0-4][\d]|25[0-5])$
+ restrictions:
+ - action: hide
+          condition: settings:storage.images_vcenter.value == false or settings:common.libvirt_type.value != 'vcenter'
+ type: text
+ value: ''
+ weight: 45
+ vc_image_dir:
+ description: The name of the directory where the glance images will be stored
+ in the VMware datastore.
+ label: Datastore Images directory
+ regex:
+ error: Empty images directory
+ source: \S
+ restrictions:
+ - action: hide
+          condition: settings:storage.images_vcenter.value == false or settings:common.libvirt_type.value != 'vcenter'
+ type: text
+ value: /openstack_glance
+ weight: 70
+ vc_password:
+ description: vCenter/ESXi admin password
+ label: Password
+ regex:
+ error: Empty password
+ source: \S
+ restrictions:
+ - action: hide
+          condition: settings:storage.images_vcenter.value == false or settings:common.libvirt_type.value != 'vcenter'
+ type: password
+ value: ''
+ weight: 55
+ vc_user:
+ description: vCenter/ESXi admin username
+ label: Username
+ regex:
+ error: Empty username
+ source: \S
+ restrictions:
+ - action: hide
+          condition: settings:storage.images_vcenter.value == false or settings:common.libvirt_type.value != 'vcenter'
+ type: text
+ value: ''
+ weight: 50
+ volumes_ceph:
+ description: Configures Cinder to store volumes in Ceph RBD images.
+ label: Ceph RBD for volumes (Cinder)
+ restrictions:
+ - settings:storage.volumes_lvm.value == true or settings:common.libvirt_type.value
+ == 'vcenter'
+ type: checkbox
+ value: false
+ weight: 20
+ volumes_lvm:
+ description: Requires at least one Storage - Cinder LVM node.
+ label: Cinder LVM over iSCSI for volumes
+ restrictions:
+ - settings:storage.volumes_ceph.value == true
+ type: checkbox
+ value: false
+ weight: 10
+ volumes_vmdk:
+ description: Configures Cinder to store volumes via VMware vCenter.
+ label: VMware vCenter for volumes (Cinder)
+ restrictions:
+ - settings:common.libvirt_type.value != 'vcenter' or settings:storage.volumes_lvm.value
+ == true
+ type: checkbox
+ value: false
+ weight: 15
+ syslog:
+ metadata:
+ label: Syslog
+ weight: 50
+ syslog_port:
+ description: Remote syslog port
+ label: Port
+ regex:
+ error: Invalid Syslog port
+          source: ^([1-9][0-9]{0,3}|[1-5][0-9]{4}|6[0-4][0-9]{3}|65[0-4][0-9]{2}|655[0-2][0-9]|6553[0-5])$
+ type: text
+ value: '514'
+ weight: 20
+ syslog_server:
+ description: Remote syslog hostname
+ label: Hostname
+ type: text
+ value: ''
+ weight: 10
+ syslog_transport:
+ label: Syslog transport protocol
+ type: radio
+ value: tcp
+ values:
+ - data: udp
+ description: ''
+ label: UDP
+ - data: tcp
+ description: ''
+ label: TCP
+ weight: 30
+ vcenter:
+ cluster:
+ description: vCenter cluster name. If you have multiple clusters, use comma
+ to separate names
+ label: Cluster
+ regex:
+ error: Invalid cluster list
+ source: ^([^,\ ]+([\ ]*[^,\ ])*)(,[^,\ ]+([\ ]*[^,\ ])*)*$
+ type: text
+ value: ''
+ weight: 40
+ datastore_regex:
+ description: The Datastore regexp setting specifies the data stores to use
+ with Compute. For example, "nas.*". If you want to use all available datastores,
+ leave this field blank
+ label: Datastore regexp
+ regex:
+ error: Invalid datastore regexp
+ source: ^(\S.*\S|\S|)$
+ type: text
+ value: ''
+ weight: 50
+ host_ip:
+ description: IP Address of vCenter
+ label: vCenter IP
+ regex:
+ error: Specify valid IPv4 address
+          source: ^(([\d]|[1-9][\d]|1[\d]{2}|2[0-4][\d]|25[0-5])\.){3}([\d]|[1-9][\d]|1[\d]{2}|2[0-4][\d]|25[0-5])$
+ type: text
+ value: ''
+ weight: 10
+ metadata:
+ label: vCenter
+ restrictions:
+ - action: hide
+ condition: settings:common.libvirt_type.value != 'vcenter'
+ weight: 20
+ use_vcenter:
+ description: ''
+ label: ''
+ type: hidden
+ value: true
+ weight: 5
+ vc_password:
+ description: vCenter admin password
+ label: Password
+ regex:
+ error: Empty password
+ source: \S
+ type: password
+ value: admin
+ weight: 30
+ vc_user:
+ description: vCenter admin username
+ label: Username
+ regex:
+ error: Empty username
+ source: \S
+ type: text
+ value: admin
+ weight: 20
+ vlan_interface:
+ description: Physical ESXi host ethernet adapter for VLAN networking (e.g.
+ vmnic1). If empty "vmnic0" is used by default
+ label: ESXi VLAN interface
+ restrictions:
+ - action: hide
+ condition: cluster:net_provider != 'nova_network' or networking_parameters:net_manager
+ != 'VlanManager'
+ type: text
+ value: ''
+ weight: 60
+ zabbix:
+ metadata:
+ label: Zabbix Access
+ restrictions:
+ - action: hide
+ condition: not ('experimental' in version:feature_groups)
+ weight: 70
+ password:
+ description: Password for Zabbix Administrator
+ label: password
+ type: password
+ value: zabbix
+ weight: 20
+ username:
+ description: Username for Zabbix Administrator
+ label: username
+ type: text
+ value: admin
+ weight: 10
+
+
diff --git a/fuel/prototypes/auto-deploy/documentation/3-dha.txt b/fuel/prototypes/auto-deploy/documentation/3-dha.txt
new file mode 100644
index 000000000..d38b6d00b
--- /dev/null
+++ b/fuel/prototypes/auto-deploy/documentation/3-dha.txt
@@ -0,0 +1,65 @@
+The structure is being reworked. This page describes the DHA.yaml file.
+
+Below is an example DHA for a libvirt deployment. An actual hardware deployment
+could for instance add additional data fields to the node list, such as:
+
+nodes:
+- id: 1
+ pxeMac: 52:54:00:9c:c2:c9
+ ipmiIp: 192.168.220.1
+ ipmiUser: admin
+  ipmiPassword: ericsson
+ isFuel: true
+
+The important thing is to keep the mandatory fields and add additional
+ones to map to the DHA adapter implementation for the hardware in
+question.
+
+The following example for libvirt is based on what's created by
+create_templates.sh.
+
+Example DHA.yaml file for a libvirt adapter
+
+# DHA API version supported
+version: 1.1
+created: Wed Apr 22 11:34:14 UTC 2015
+comment: Small libvirt deployment
+
+# Adapter to use for this definition
+adapter: libvirt
+
+# Node list.
+# Mandatory fields are id, role and, for the Fuel node, the
+# "isFuel: true" property, unless fuelCustomInstall is set, in
+# which case isFuel is optional.
+# The MAC address of the PXE boot interface does not have to be
+# set, but the field must be present.
+# All other fields are adapter specific.
+
+nodes:
+- id: 1
+ pxeMac: 52:54:00:38:c7:8e
+- id: 2
+ pxeMac: 52:54:00:9c:c2:c9
+- id: 3
+ pxeMac: 11:11:11:11:11:11
+ isFuel: true
+
+# Deployment power on strategy
+# all: Turn on all nodes at once. If MAC addresses are set, these
+#      will be used for connecting roles to physical nodes; otherwise
+#      the installation order will be arbitrary.
+# sequence: Turn on the nodes in sequence starting with the lowest order
+# node and wait for the node to be detected by Fuel. Not until
+# the node has been detected and assigned a role will the next
+# node be turned on.
+powerOnStrategy: all
+
+# If fuelCustomInstall is set to true, Fuel is assumed to be installed by
+# calling the DHA adapter function "dha_fuelCustomInstall()" with two
+# arguments: node ID and the ISO file name to deploy. The custom install
+# function is then to handle all necessary logic to boot the Fuel master
+# from the ISO and then return.
+# Allowed values: true, false
+
+fuelCustomInstall: false
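+
+To illustrate how such adapter specific fields (the ipmiIp, ipmiUser
+and ipmiPassword fields in the hardware example at the top of this
+page) could be consumed, an IPMI based adapter might implement its
+power on function roughly like this (a sketch only, assuming ipmitool
+is available and using the dha_getNodeProperty() call from the DHA
+adapter API):
+
+dha_nodePowerOn () {
+    local nodeid=$1
+    local ip user password
+
+    ip=$(dha_getNodeProperty "$nodeid" ipmiIp)
+    user=$(dha_getNodeProperty "$nodeid" ipmiUser)
+    password=$(dha_getNodeProperty "$nodeid" ipmiPassword)
+
+    # Power on the node via IPMI over LAN
+    ipmitool -I lanplus -H "$ip" -U "$user" -P "$password" \
+        chassis power on
+}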
diff --git a/fuel/prototypes/auto-deploy/documentation/4-dha-adapter-api.txt b/fuel/prototypes/auto-deploy/documentation/4-dha-adapter-api.txt
new file mode 100644
index 000000000..917d17cf5
--- /dev/null
+++ b/fuel/prototypes/auto-deploy/documentation/4-dha-adapter-api.txt
@@ -0,0 +1,128 @@
+The structure is being reworked. This page describes the DHA adapter interface.
+
+
+This is the beginning of the documentation of the DHA adapter
+interface. It is auto-generated from the bash implementation of the
+libvirt DHA adapter, so it is to some extent a work in progress.
+
+An example run of the ./verify_adapter.sh tool:
+
+sfb@blackbox:~/git/toolbox/opnfv/production/deploy$ ./verify_adapter.sh libvirt.sh dha.yaml
+Adapter init
+dha.yaml
+DHAPARSE: /home/sfb/git/toolbox/opnfv/production/deploy/dha-adapters/dhaParse.py
+DHAFILE: dha.yaml
+Adapter API version: 1.0
+Adapter name: libvirt
+All PXE MAC addresses:
+1: 52:54:00:38:c7:8e
+2: 52:54:00:9c:c2:c9
+Using Fuel custom install: no
+Can set boot order live: no
+Can operate on ISO media: yes
+Can insert/eject ISO without power toggle: yes
+Can erase the boot disk MBR: yes
+Done
+
+
+*** DHA API definition version 1.1 ***
+
+# Get the DHA API version supported by this adapter
+dha_getApiVersion ()
+
+# Get the name of this adapter
+dha_getAdapterName ()
+
+# ### Node identity functions ###
+# Node numbering is sequential.
+# Get a list of all defined node ids, sorted in ascending order
+dha_getAllNodeIds()
+
+# Get the node id of the Fuel node
+dha_getFuelNodeId()
+
+# Get node property
+# Argument 1: node id
+# Argument 2: Property
+dha_getNodeProperty()
+
+# Get MAC address for the PXE interface of this node. If not
+# defined, an empty string will be returned.
+# Argument 1: Node id
+dha_getNodePxeMac()
+
+# Use custom installation method for Fuel master?
+# Returns 0 if true, 1 if false
+dha_useFuelCustomInstall()
+
+# Fuel custom installation method
+# Leaves the Fuel master powered on and booting from the ISO at exit
+# Argument 1: Full path to ISO file to install
+dha_fuelCustomInstall()
+
+# Get power on strategy from DHA
+# Returns one of two values:
+# all: Power on all nodes simultaneously
+# sequence: Power on node by node, wait for Fuel detection
+dha_getPowerOnStrategy()
+
+# Power on node
+# Argument 1: node id
+dha_nodePowerOn()
+
+# Power off node
+# Argument 1: node id
+dha_nodePowerOff()
+
+# Reset node
+# Argument 1: node id
+dha_nodeReset()
+
+# Is the node able to commit boot order without power toggle?
+# Argument 1: node id
+# Returns 0 if true, 1 if false
+dha_nodeCanSetBootOrderLive()
+
+# Set node boot order
+# Argument 1: node id
+# Argument 2: Space separated line of boot order - boot ids are "pxe", "disk" and "iso"
+dha_nodeSetBootOrder()
+
+# Is the node able to operate on ISO media?
+# Argument 1: node id
+# Returns 0 if true, 1 if false
+dha_nodeCanSetIso()
+
+# Is the node able to insert and eject ISO files without power toggle?
+# Argument 1: node id
+# Returns 0 if true, 1 if false
+dha_nodeCanHandeIsoLive()
+
+# Insert ISO into virtualDVD
+# Argument 1: node id
+# Argument 2: iso file
+dha_nodeInsertIso()
+
+# Eject ISO from virtual DVD
+# Argument 1: node id
+dha_nodeEjectIso()
+
+# Wait until a suitable time to change the boot order to
+# "disk iso" when ISO has been booted. Can't be too long, nor
+# too short...
+# We should make a smart trigger for this somehow...
+dha_waitForIsoBoot()
+
+# Is the node able to reset its MBR?
+# Returns 0 if true, 1 if false
+dha_nodeCanZeroMBR()
+
+# Reset the node's MBR
+dha_nodeZeroMBR()
+
+# Entry point for dha functions
+# Typically do not call "dha_node_zeroMBR" but "dha node_ZeroMBR"
+# Before calling dha, the adapter file must gave been sourced with
+# the DHA file name as argument
+dha()
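+
+As a usage sketch (the adapter and file names are only examples), a
+deployer could drive an adapter through the dha() entry point like
+this, following the "all" power on strategy:
+
+# Source the adapter with the DHA file name as argument
+. dha-adapters/libvirt.sh dha.yaml
+
+echo "Using adapter: $(dha getAdapterName)"
+
+# Power on every defined node
+for id in $(dha getAllNodeIds); do
+    dha nodePowerOn "$id"
+done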
+
diff --git a/fuel/prototypes/auto-deploy/documentation/5-dea-api.txt b/fuel/prototypes/auto-deploy/documentation/5-dea-api.txt
new file mode 100644
index 000000000..d5c6f5c4e
--- /dev/null
+++ b/fuel/prototypes/auto-deploy/documentation/5-dea-api.txt
@@ -0,0 +1,47 @@
+The structure is being reworked. This page describes the DEA interface.
+
+The DEA API is internal to the deployer, but documented here for information.
+
+*** DEA API definition version 1.1 ***
+
+# Get the DEA API version supported by this adapter
+dea_getApiVersion ()
+
+# Node numbering is sequential.
+# Get the role for this node
+# Argument 1: node id
+dea_getNodeRole()
+
+# Get IP address of Fuel master
+dea_getFuelIp()
+
+# Get netmask of Fuel master
+dea_getFuelNetmask()
+
+# Get gateway address of Fuel master
+dea_getFuelGateway()
+
+# Get hostname of Fuel master
+dea_getFuelHostname()
+
+# Get DNS address of Fuel master
+dea_getFuelDns()
+
+# Convert a normal MAC to a Fuel short mac for --node-id
+dea_convertMacToShortMac()
+
+# Get property from DEA file
+# Argument 1: search path, as e.g. "fuel ADMIN_NETWORK ipaddress"
+dea_getProperty()
+
+# Convert DHA node id to Fuel cluster node id
+# Look for lowest Fuel node number, this will be DHA node 1
+# Argument: node id
+dea_getClusterNodeId()
+
+# Entry point for dea functions
+# Typically do not call "dea_node_zeroMBR" but "dea node_ZeroMBR"
+# Before calling dea, the adapter file must gave been sourced with
+# the DEA file name as argument
+dea()
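+
+A corresponding usage sketch (the DEA API file name is only an
+example):
+
+# Source the DEA functions with the DEA file name as argument
+. dea-api.sh dea.yaml
+
+echo "Fuel master IP: $(dea getFuelIp)"
+
+# Generic property lookup, e.g. the admin network netmask
+dea getProperty "fuel ADMIN_NETWORK netmask"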
+