path: root/overcloud-resource-registry-puppet.yaml
2015-07-06  Wire in Controller pre-deployment extraconfig  (Steven Hardy, 1 file, -4/+7)
The recently added cinder-netapp extraconfig contains some additional hieradata which needs to be applied during the initial pre-deployment phase, e.g. in controller-puppet.yaml (before the manifests are applied), so wire in a new OS::TripleO::ControllerExtraConfigPre provider resource which allows passing in a nested stack (empty by default) containing any required "pre deployment" extraconfig, such as applying this hieradata. Some changes were required to the cinder-netapp extraconfig and environment so that the hieradata is now actually applied and the parameter_defaults specified are correctly mapped into the StructuredDeployment. Change-Id: I8838a71db9447466cc84283b0b257bdb70353ffd
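A minimal environment-file sketch of how such a pre-deployment hook could be wired up; the extraconfig template path and the backend parameter name are illustrative assumptions, not taken from the commit:

    resource_registry:
      # Point the pre-deployment hook at the cinder-netapp extraconfig
      # (path shown is an assumption for illustration)
      OS::TripleO::ControllerExtraConfigPre: extraconfig/pre_deploy/controller/cinder-netapp.yaml

    parameter_defaults:
      # Hieradata consumed by the nested stack (illustrative parameter)
      CinderNetappBackendName: tripleo_netapp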
2015-06-17  Merge "Remove Redis VirtualIP from params and build it from Neutron::Port"  (Jenkins, 1 file, -0/+3)
2015-06-13  Remove Redis VirtualIP from params and build it from Neutron::Port  (Giulio Fidente, 1 file, -0/+3)
The redis_vip should come from a Neutron Port, as its CIDR depends on the Neutron network configuration. This change adds 2 new files and modifies 1 in the network/ports directory:
- noop.yaml - Passes through the ctlplane Controller IP (modified)
- ctlplane_vip.yaml - Creates a new VIP on the control plane
- vip.yaml - Creates a VIP on the named network (for isolated nets)
Also, changes to overcloud-without-mergepy.yaml create the Redis Virtual IP. The standard resource registry was modified to use noop.yaml for the new Redis VIP. The Puppet resource registry was modified to use ctlplane_vip.yaml by default, but it can be made to use vip.yaml when network isolation is in use by supplying an environment file. vip.yaml will place the VIP according to the ServiceNetMap, which can also be overridden. We use this new VIP port definition to assign a VIP to Redis; follow-up patches will assign VIPs to the rest of the services in a similar fashion. Co-Authored-By: Dan Sneddon <dsneddon@redhat.com> Change-Id: I2cb44ea7a057c4064d0e1999702623618ee3390c
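A sketch of the environment-file override described above, assuming the Redis VIP port resource is named OS::TripleO::Network::Ports::RedisVipPort (the name is an assumption, not given in the commit):

    resource_registry:
      # With network isolation, place the Redis VIP on the network the
      # ServiceNetMap selects instead of the control plane
      OS::TripleO::Network::Ports::RedisVipPort: network/ports/vip.yaml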
2015-06-11  Remove external bridge from Compute nodes  (Dan Sneddon, 1 file, -1/+1)
This change modifies overcloud-resource-registry-puppet.yaml to use net-config-noop.yaml as the default os-net-config template for compute nodes. The current default of net-config-bridge.yaml sets up a br-ex bridge on the compute nodes; since we are not using DVR, that is not needed. Change-Id: I4e149a4f5a6d19e94e8c0245f52677f92f22d3ec
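For deployments that do want br-ex on the compute nodes (e.g. for DVR), a hedged sketch of mapping the compute network config back to the bridge template; the resource name is an assumption for illustration:

    resource_registry:
      # Restore the bridge-based os-net-config template on compute nodes
      OS::TripleO::Compute::Net::SoftwareConfig: net-config-bridge.yaml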
2015-06-08  Merge "Enable NetApp Backends in Cinder"  (Jenkins, 1 file, -0/+3)
2015-06-08  Config & deployments to update overcloud packages  (Steve Baker, 1 file, -0/+1)
This change adds config and deployment resources to trigger package updates on nodes. The deployments are triggered by doing a stack-update and setting one of the parameters to a unique value. The intent is that rolling update will be controlled by setting breakpoints on all of the UpdateDeployment resources inside the role resource groups. Change-Id: I56bbf944ecd6cbdbf116021b8a53f9f9111c134f
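A rough sketch of how such an update could be triggered from an environment file; the parameter name UpdateIdentifier is an assumption and is not named in the commit:

    parameter_defaults:
      # Setting this to a new, unique value on a stack-update triggers
      # the package-update deployments on the nodes
      UpdateIdentifier: '2015-06-08T12:00:00'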
2015-06-05  Enable NetApp Backends in Cinder  (Ryan Hefner, 1 file, -0/+3)
Enables support for configuring Cinder with a NetApp backend. This change adds all relevant parameters for:
- Clustered Data ONTAP (NFS, iSCSI, FC)
- Data ONTAP 7-Mode (NFS, iSCSI, FC)
- E-Series (iSCSI)
Change-Id: If6c6e511ef2d26c4794e3b37c61e5318485ff4db
2015-06-03  Add virtual IPs for split out networks  (Dan Prince, 1 file, -0/+2)
This patch adds VIPs for the internal_api, storage, and storage management networks. For puppet these are persisted into a local vip-config hieradata file which is then used by puppet-tripleo's loadbalancer module to apply per-service VIP settings. Change-Id: I909c3bdc9d17a8e15351f4797287769e3f76c849
2015-06-03  Make all-nodes Ip networks configurable  (Dan Prince, 1 file, -0/+1)
This patch adds a new NetIpListMap abstraction which we can use to make the all-nodes-config IP list network assignments configurable. IP address lists for all overcloud services which require IPs were added to all-nodes-config so that puppet manifests can be directly supplied with the correct network list for each service. Change-Id: I209f2b4f97a4bb78648c54813dad8615770bcf1a
2015-06-03  Wire ServiceNetMap as a top level parameter  (Dan Prince, 1 file, -24/+0)
This patch makes ServiceNetMap a top level parameter. This is helpful to tools like Tuskar which don't support Heat environments that contain both a resource_registry and default_parameters. ServiceNetMap will in fact be utilized at the top level in some of the VIP related patches that follow. Change-Id: I375063dacf5f3fc68e6df93e11c3e88f48aa3c3a
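With ServiceNetMap exposed as a top-level parameter it can be overridden like any other parameter; a minimal sketch follows, with the map keys shown being illustrative assumptions about its contents:

    parameters:
      ServiceNetMap:
        # Map each service to the network whose IP it should bind to
        # (a real override supplies the complete map, since a json
        # parameter is replaced wholesale, not merged)
        NeutronApiNetwork: internal_api
        GlanceApiNetwork: storage
        SwiftMgmtNetwork: storage_mgmt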
2015-05-28  Map Mysql to isolated networks  (Dan Prince, 1 file, -0/+1)
This change adds parameters to specify which networks the MySQL service will use. If the internal_api network exists the MySQL service will bind to the IP address on that network, otherwise the service will default to the IP on the Undercloud 'ctlplane' network. This patch also drops the old 'controller_host' variable from the puppet controller template since it is no longer in use. Change-Id: I4fba2c957f7db47e916bc269fb4bd32ccc99bd4c
2015-05-27  Map Horizon, Redis, Rabbit, memcached to isolated nets  (Dan Sneddon, 1 file, -0/+4)
This change adds parameters to select the networks for Horizon, Redis, Rabbit MQ, and memcached services. Horizon is often used for administration from outside the cloud, so if the external network exists, Horizon will bind to that IP, otherwise it will default to the Undercloud 'ctlplane' network. Redis, Rabbit MQ, and memcached will bind to IPs on the internal_api network if it exists, else they will default to the 'ctlplane' network as well. Any of these network assignments can be overridden with an environment file. Change-Id: Ie0aa46b4a3c00d3826866796b4ec3b14f71f987c
2015-05-27  Map Swift services to isolated networks  (Dan Sneddon, 1 file, -0/+2)
This change adds parameters to specify which networks the Swift API services will use. If the storage network exists, it will be used for the Swift API, otherwise the Undercloud 'ctlplane' network will be used. If the storage_mgmt network exists, it will be used for the back-end storage services, otherwise the 'ctlplane' will be used by default. Change-Id: I1d5e966a16416c52935c22efe2d4783cd2192c32
2015-05-27  Map Nova services to isolated networks  (Dan Sneddon, 1 file, -0/+2)
This change adds parameters to specify which networks the Nova API and metadata services will use. If the internal_api network exists, it will be used for the bind IP for Nova API and metadata servers, otherwise the Undercloud 'ctlplane' IP will be used by default. Change-Id: Ie420274c7fba80abf9cf2b599431acc47e28fc7a
2015-05-27  Map Heat services to isolated networks  (Dan Sneddon, 1 file, -0/+3)
This change adds parameters to specify which networks the Heat services will use. If the internal_api network exists, the Heat API, Heat Cloud Formations, and Heat Cloudwatch services will bind to the IP address on that network, otherwise the services will default to the IP on the Undercloud 'ctlplane' network. Change-Id: I5febe1b9071600b43fa76c6cf415db83cad472ab
2015-05-26  Map Neutron services to isolated networks  (Dan Sneddon, 1 file, -0/+1)
This change adds parameters to specify which network the Neutron API should use. If the internal_api network exists, Neutron will bind to the IP on that network, otherwise the Undercloud 'ctlplane' network will be used. The network that the Neutron API is bound to can be overridden in an environment file. Change-Id: I11bcebba3a22e8850095250a2ddfaf972339476b
2015-05-26  Map Keystone services to isolated networks  (Dan Sneddon, 1 file, -0/+2)
This change adds parameters to specify which networks the Keystone API services will use. If the external network exists, Keystone will bind to the IP on that network for the public API, otherwise it will default to the IP on the Undercloud 'ctlplane' network. If the internal_api network exists it will be used for the Keystone Admin API, otherwise it will default to the 'ctlplane' IP. The networks these APIs are bound to can be overridden in an environment file. Change-Id: I6694ef6ca3b9b7afbde5d4f9d173723b9ce71b20
2015-05-26  Map Glance services to isolated networks  (Dan Sneddon, 1 file, -0/+2)
This change adds parameters to specify which networks the Glance services will use. If the internal_api network exists, Glance Registry will bind to the IP on that network, otherwise it will default to the Undercloud 'ctlplane' network. If the storage network exists, Glance API will bind to the IP on that network, otherwise it will default to 'ctlplane'. The networks that these services use can be overridden with an environment file. Change-Id: I6114b2d898c5a0ba4cdb26a3da2dbf669666ba99
2015-05-26  Map Cinder services to isolated networks  (Dan Sneddon, 1 file, -0/+2)
This change adds parameters to specify which networks the Cinder API and Cinder iSCSI services will listen on. If the internal_api network exists, Cinder API will be bound to the IP on that network, otherwise it will default to the Undercloud 'ctlplane' network. The Cinder iSCSI service will bind to the storage network if it exists, otherwise will also default to using the Undercloud 'ctlplane' network. Change-Id: I98149f108baf28d46eb199b69a72d0f6914486fd
2015-05-26  Map Ceilometer services to isolated networks  (Dan Sneddon, 1 file, -0/+2)
This change adds the parameters to specify which networks the Ceilometer and MongoDB servers listen on. It is set to the internal_api network if present, and reverts to the default Undercloud 'ctlplane' network if not. Change-Id: Ib646e4a34496966f9b1d454f04d07bf95543517f
2015-05-26  Update neutron local_ip to use the tenant network  (Dan Prince, 1 file, -0/+3)
This patch uses the new NetIpMap and ServiceMap abstractions to assign the Neutron tenant tunneling network addresses. By default this is associated with the tenant network. If no tenant network is activated this will still default to the control plane IP address. Change-Id: I9db7dd0c282af4e5f24947f31da2b89f231e6ae4
2015-05-26  Add a network ports IP mapping resource  (Dan Prince, 1 file, -0/+2)
This patch adds a resource which constructs a JSON output parameter called net_ip_map which will allow us to easily extract arbitrary IP addresses for each network using the get_attr function in Heat. The goal is to use this data construct in each role template to obtain the correct IP address on each network. Change-Id: I1a8c382651f8096f606ad38f78bbd76314fbae5f
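A hedged sketch of how a role template could consume the map; the resource name NetIpMap and the output layout are assumptions based on the description above:

    outputs:
      internal_api_ip:
        description: This node's IP on the internal_api network
        value: {get_attr: [NetIpMap, net_ip_map, internal_api]}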
2015-05-26  Add isolated network ports to block storage roles  (Dan Prince, 1 file, -0/+5)
This patch updates the cinder block storage roles so that they can optionally make use of isolated network ports on the storage, storage management, and internal_api networks.
- Multiple networks are created based upon settings in the heat resource registry. These nets will either use the noop network (the control plane pass-thru default) or create a custom Neutron port on each of the configured networks.
- The ipaddress/subnet of each network is passed into the NetworkConfig resource which drives os-net-config. This allows the deployer to define a custom network template for static IPs, etc. on each of the networks.
- The ipaddress is exposed as an output parameter. By exposing the individual addresses as outputs we allow Heat to construct collections of ports for various services.
Change-Id: I4e18cd4763455f815a8f8b82c93a598c99cc3842
2015-05-26  Add isolated network ports to swift roles  (Dan Prince, 1 file, -0/+5)
This patch updates the swift roles so that they can optionally make use of isolated network ports on the storage, storage management, and internal API networks.
- Multiple networks are created based upon settings in the heat resource registry. These nets will either use the noop network (the control plane pass-thru default) or create a custom Neutron port on each of the configured networks.
- The ipaddress/subnet of each network is passed into the NetworkConfig resource which drives os-net-config. This allows the deployer to define a custom network template for static IPs, etc. on each of the networks.
- The ipaddress is exposed as an output parameter. By exposing the individual addresses as outputs we allow Heat to construct collections of ports for various services.
Change-Id: I9984404331705f6ce569fb54a38b2838a8142faa
2015-05-26  Add isolated network ports to ceph roles  (Dan Prince, 1 file, -0/+4)
This patch updates the ceph roles so that they can optionally make use of isolated network ports on the storage and storage management networks.
- Multiple networks are created based upon settings in the heat resource registry. These nets will either use the noop network (the control plane pass-thru default) or create a custom Neutron port on each of the configured networks.
- The ipaddress/subnet of each network is passed into the NetworkConfig resource which drives os-net-config. This allows the deployer to define a custom network template for static IPs, etc. on each of the networks.
- The ipaddress is exposed as an output parameter. By exposing the individual addresses as outputs we allow Heat to construct collections of ports for various services.
Change-Id: I35cb8e7812202f8a7bc0379067bf33d483cd2aec
2015-05-26  Add isolated network ports to compute roles  (Dan Prince, 1 file, -0/+5)
This patch updates the compute roles so that they can optionally make use of isolated network ports on the tenant, storage, and internal_api networks.
- Multiple networks are created based upon settings in the heat resource registry. These nets will either use the noop network (the control plane pass-thru default) or create a custom Neutron port on each of the configured networks.
- The ipaddress/subnet of each network is passed into the NetworkConfig resource which drives os-net-config. This allows the deployer to define a custom network template for static IPs, etc. on each of the networks.
- The ipaddress is exposed as an output parameter. By exposing the individual addresses as outputs we allow Heat to construct collections of ports for various services.
Change-Id: Ib07b4b7256ede7fb47ecc4eb5abe64b9144b9aa1
2015-05-26  Add isolated network ports to controller roles  (Dan Prince, 1 file, -0/+7)
This patch updates the controller roles so that they can optionally make use of isolated network ports on each of the 5 available overcloud networks.
- Multiple networks are created based upon settings in the heat resource registry. These nets will either use the noop network (the control plane pass-thru default) or create a custom Neutron port on each of the configured networks.
- The ipaddress/subnet of each network is passed into the NetworkConfig resource which drives os-net-config. This allows the deployer to define a custom network template for static IPs, etc. on each of the networks.
- The ipaddress is exposed as an output parameter. By exposing the individual addresses as outputs we allow Heat to construct collections of ports for various services.
Change-Id: I9bbd6c8f5b9697ab605bcdb5f84280bed74a8d66
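A sketch of the registry pattern described in these port patches; the resource name and template paths are assumptions used only for illustration:

    resource_registry:
      # Default: the port falls through to the control plane address (noop)
      OS::TripleO::Controller::Ports::InternalApiPort: network/ports/noop.yaml
      # A network-isolation environment would instead map the port to a
      # template that creates a real Neutron port on that network, e.g.
      # OS::TripleO::Controller::Ports::InternalApiPort: network/ports/internal_api.yaml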
2015-05-22  Wire in optional network creation for overcloud  (Dan Prince, 1 file, -0/+9)
This patch enables users to selectively enable the creation of split out networks for the overcloud traffic. These networks will be created on the undercloud's neutron instance. By default a noop network is used so that no extra networks are created. This allows our default to continue being all traffic on the control plane. Change-Id: Ied49d9458c2d94e9d8e7d760d5b2d971c7c7ed2d
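An illustrative registry fragment for this opt-in pattern; the resource name and template paths are assumptions:

    resource_registry:
      # No-op by default, so no extra network is created on the undercloud
      OS::TripleO::Network::InternalApi: network/noop.yaml
      # An environment file can opt in to creating the real network:
      # OS::TripleO::Network::InternalApi: network/internal_api.yaml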
2015-05-11  Puppet: Split out controller pacemaker manifest  (Dan Prince, 1 file, -0/+2)
This patch adds support for using the Heat resource registry so that end users can enable pacemaker. Using this approach allows us to isolate all of the pacemaker logic for the controller in a single template rather than use conditionals for every service that must support it. Change-Id: Ibefb80d0d8f98404133e4c31cf078d729b64dac3
2015-04-29  Merge "Add hooks for extra post-deployment config"  (Jenkins, 1 file, -0/+1)
2015-04-24  Separate the network configuration per flavor.  (Dan Sneddon, 1 file, -1/+5)
This change allows a different network config for each family of hosts. For instance, the controller may have a different network configuration than a block storage node. This change adds a declaration for each family in the overcloud-resource-registry.yaml & overcloud-resource-registry-puppet.yaml. Change-Id: I083df7ebbb535f97d8ddec2ac0e06281c55986cd
2015-04-24  Add hooks for extra post-deployment config  (Steven Hardy, 1 file, -0/+1)
Adds optional hooks which can run operator defined additional config on nodes after the application deployment has completed. Change-Id: I3f99e648efad82ce2cd51e2d5168c716f0cee8fe
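A hedged sketch of pointing such a hook at an operator-provided template; the hook resource name is an assumption, not taken from the commit:

    resource_registry:
      # Run operator-defined extra configuration after application deployment
      OS::TripleO::NodeExtraConfigPost: extraconfig/post_deploy/my_extra_config.yaml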
2015-04-24  Enable passing optional first-boot user-data  (Steven Hardy, 1 file, -0/+1)
Currently none of the OS::Nova::Server resources created pass any user-data. It's possible to pass user-data in addition to using heat SoftwareConfig/SoftwareDeployment resources, and this can be useful when you have simple "first boot" tasks which are possible either via cloud-init or via simple run-once scripts. This enables passing such data by implementing a new provider resource, OS::TripleO::NodeUserData, which defaults to passing an empty MIME archive (thus it's a no-op). An example of non no-op usage is also provided. Change-Id: Id0caba69768630e3a10439ba1fc2547a609c0cfe
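A sketch of overriding the default no-op user-data with a custom first-boot template; the template path is illustrative:

    resource_registry:
      # Replace the empty MIME archive with operator first-boot user-data
      OS::TripleO::NodeUserData: firstboot/my_userdata.yaml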
2015-03-06  Correct the parameter_defaults section name.  (Dan Prince, 1 file, -3/+2)
Also, we can actually uncomment this now that heatclient 0.3 has been released. Change-Id: I0b4ce13f1426c364ea7921596022e5165e025fdb
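For contrast, a small sketch of the two environment sections; the parameter names are illustrative:

    # parameter_defaults overrides a parameter's default wherever it is
    # defined, including in nested stacks:
    parameter_defaults:
      EnablePackageInstall: false

    # parameters applies only to parameters of the top-level stack:
    parameters:
      ControllerCount: 3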
2015-03-05  Puppet: First support Ceph  (Emilien Macchi, 1 file, -0/+2)
This is a first implementation of Ceph support in TripleO with Puppet:
* Install ceph-mon on controller node
* Install ceph-osd on cephstorage node
Co-Authored-By: Giulio Fidente <gfidente@redhat.com> Change-Id: I48488cbe950047fae5e746e458106d6edb9a6183
2015-02-23  BlockStore: Exec puppet after all configuration  (Dan Prince, 1 file, -0/+1)
This patch adds a new BlockStoreNodesPostDeployment resource which can be used along with the environment file to specify a nested stack which is guaranteed to execute after all the BlockStore config deployments have executed. This is really useful for Puppet in that Heat actually controls where puppet executes in the deployment process and we want to ensure puppet runs after all hiera configuration data has been deployed to the nodes. With the previous approach some of the data would be there, but allNodes data would not be guaranteed to be there in time. As os-apply-config (tripleo-image-elements) has its ordering controlled within the elements themselves, an empty stubbed-in nested stack has been added so that we don't break that implementation. Change-Id: I29b3574e341eecd53b2867788f415bff153cfa9f
2015-02-23  ObjectStore: Exec puppet after all configuration  (Dan Prince, 1 file, -0/+1)
This patch adds a new ObjectStoreNodesPostDeployment resource which can be used along with the environment file to specify a nested stack which is guaranteed to execute after all the ObjectStore config deployments have executed. This is really useful for Puppet in that Heat actually controls where puppet executes in the deployment process and we want to ensure puppet runs after all hiera configuration data has been deployed to the nodes. With the previous approach some of the data would be there, but allNodes data would not be guaranteed to be there in time. As os-apply-config (tripleo-image-elements) has its ordering controlled within the elements themselves, an empty stubbed-in nested stack has been added so that we don't break that implementation. Change-Id: I778b87a17d5e6824233fdf9957c76549c36b3f78
2015-02-23  Compute: Exec puppet after all configuration  (Dan Prince, 1 file, -0/+1)
This patch adds a new ComputeNodesPostDeployment resource which can be used along with the environment file to specify a nested stack which is guaranteed to execute after all the Compute config deployments have executed. This is really useful for Puppet in that Heat actually controls where puppet executes in the deployment process and we want to ensure puppet runs after all hiera configuration data has been deployed to the nodes. With the previous approach some of the data would be there, but allNodes data would not be guaranteed to be there in time. As os-apply-config (tripleo-image-elements) has its ordering controlled within the elements themselves, an empty stubbed-in nested stack has been added so that we don't break that implementation. Change-Id: I80bccd692e45393f8250607073d1fe7beb0d7396
2015-02-19  Split out BootstrapNode SoftwareConfig  (Dan Prince, 1 file, -0/+1)
This patch splits out the BootstrapNode config such that alternate implementations (puppet for example) can implement their own SoftwareConfigs via a nested stack. This is controlled by the standard overcloud heat environment. For os-apply-config deployments the implementation should work the same as before. For puppet deployments the implementation uses hiera metadata to configure bootstrap_nodeid. Change-Id: I691a9d7c474866038a5d47beab295899b5479d03
2015-02-13  Split out allNodesConfig SoftwareConfig  (Dan Prince, 1 file, -0/+1)
This patch splits out the allNodesConfig config such that alternate implementations (puppet for example) can implement their own SoftwareConfigs via a nested stack. This is controlled by the standard overcloud heat environment. For os-apply-config deployments the implementation should work the same as before. For puppet deployments the implementation uses hiera metadata to configure rabbit_nodes. The puppet deployment doesn't support hosts or freeform sysctl metadata yet, so those are the same for now as well. Change-Id: I34ae30b1f37aca8b39586f7e350511462d66f694
2015-02-12  Split out SwiftDevicesAndProxy SoftwareConfig  (Dan Prince, 1 file, -0/+1)
This patch splits out the SwiftDevicesAndProxy config such that alternate implementations (puppet for example) can implement their own SoftwareConfigs via a nested stack. This is controlled by the standard overcloud heat environment. For os-apply-config deployments the implementation should work the same as before. For puppet deployments the implementation uses hiera metadata to configure swift devices. Partial-bug: 1418805 Change-Id: Ibf6038460f36279ad51a04947589d4a03a553f66
2015-02-12  Controller: Exec puppet after all configuration  (Dan Prince, 1 file, -0/+1)
This patch adds a new ControllerNodesPostDeployment resource which can be used along with the environment file to specify a nested stack which is guaranteed to execute after all the Controller config deployments (HA or otherwise) have executed. This is really useful for Puppet in that Heat actually controls where puppet executes in the deployment process and we want to ensure puppet runs after all hiera configuration data has been deployed to the nodes. With the previous approach some of the data would be there, but most of the HA data, which actually gets composed outside of the controller-puppet.yaml nested stack, would not be guaranteed to be there in time. As os-apply-config (tripleo-image-elements) has its ordering controlled within the elements themselves, an empty stubbed-in nested stack has been added so that we don't break that implementation. Partial-bug: 1418805 Change-Id: Icd6b2c9c1f9b057c28649ee3bdce0039f3fd8422
2015-02-12  Move all puppet templates into puppet directory.  (Dan Prince, 1 file, -5/+5)
This cleans up the top level tree by moving all the puppet related bits into the puppet directory. The only exception is overcloud-resource-registry-puppet.yaml which is the puppet environment file and is used externally. Change-Id: Idb65a7143b0f29e5579d4e9d1642e4cda6f65d50
2015-02-09  Add Ceph related templates needed to configure Cinder with Ceph  (Giulio Fidente, 1 file, -0/+1)
The new ceph-source.yaml file provides the config settings needed by the elements which configure Ceph on controllers (monitors) and storage nodes (OSDs) as well as the Cinder backend which uses it. There is also a without-mergepy copy named ceph-storage.yaml Change-Id: I954861536c41b2a7e6cbd86a0f0b55004eed4c70
2015-02-05  puppet: Add EnablePackageInstall option  (Dan Prince, 1 file, -0/+4)
This adds an option which enables package installation via Yum when Puppet executes. Users might want to disable Yum installation of packages via puppet when using pre-installed images. The option is off by default, meaning that Puppet will no longer install packages by default; users will need to enable EnablePackageInstall in order to get the previous behavior. The intent is to use the default_parameters section of the Heat environment to allow users to cleanly enable this feature without wiring it into the top level. This is because the new parameter is Puppet specific and doesn't really apply to other implementations. Kilo Heat already has support for default_parameters and so does python-heatclient. NOTE: most TripleO users do not yet have the heatclient features because setup-clienttools in tripleo-incubator only installs releases via pip. It is for these reasons that the default_parameters section in overcloud-resource-registry-puppet.yaml is commented out for now. Change-Id: I3af71b801b87d080b367d9e4a1fb44c1bfea6e87
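A minimal sketch of enabling the option in an environment file, using the default_parameters section this commit refers to (later renamed parameter_defaults):

    default_parameters:
      # Allow Puppet to install packages via Yum (off by default)
      EnablePackageInstall: true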
2015-01-27  Puppet: Cinder common block storage support  (Dan Prince, 1 file, -1/+1)
This patch implements the required changes to configure common Cinder block storage nodes via Puppet. Change-Id: Iac8b4679a00f58d5faac4a1d08b7a830f0360ba5
2015-01-27  Puppet: Swift Storage node support  (Dan Prince, 1 file, -1/+1)
This patch implements the required changes to configure swift storage nodes via Puppet. Similar to the overcloud we generate the rings on each node (with the same seed). Change-Id: I677c85b09b6e656b3ac1f938a4bd6bc7daae1755
2015-01-27  Compute: consolidated nested stack  (Dan Prince, 1 file, -2/+1)
In I250dc1a8c02626cf7d1a5d2ce92706504ec0c7de we split out just the Controller software config in an effort to provide hooks for alternate implementations (puppet). This sort of worked but caused quirky ordering issues with signal handling. It also causes problems for Tuskar, which would prefer to think of these nested stacks and not have us split out just the software configs like this. This patch moves all the compute related stuff for our two implementations:
compute.yaml: is used by os-apply-config (uses the tripleo-image-elements)
compute-puppet.yaml: uses stackforge puppet-* modules for configuration
By duplicating the entire compute in this manner we make it much easier to create dependencies and implement proper signal handling. The only (temporary) downside is the duplication of parameters, most of which will eventually go away when we move to using the global parameters via Heat environment files instead. Change-Id: I49175d1843520abc80fefe9528442e5dda151f5d
2015-01-27  Controller: consolidated nested stack  (Dan Prince, 1 file, -2/+1)
In I228216a0b55ff2d384b281d9ad2a61b93d58dab9 we split out just the Controller software config in an effort to provide hooks for alternate implementations (puppet). This sort of worked but caused quirky ordering issues with signal handling. It also causes problems for Tuskar, which would prefer to think of these nested stacks and not have us split out just the software configs like this. This patch moves all the controller related stuff for our two implementations:
controller.yaml: is used by os-apply-config (uses the tripleo-image-elements)
controller-puppet.yaml: uses stackforge puppet-* modules for configuration
By duplicating the entire controller in this manner we make it much easier to create dependencies and implement proper signal handling. The only (temporary) downside is the duplication of parameters, most of which will eventually go away when we move towards using the global parameters via Heat environment files instead. Change-Id: Iaf3c889d7c8815f862308cd8e15ce1010059f5c6
2015-01-08  Puppet: overcloud controller config  (Dan Prince, 1 file, -0/+1)
This patch provides an alternate implementation of the OS::TripleO::Controller::SoftwareConfig which uses Puppet to drive the configuration. Using this it is possible to create a fully functional overcloud controller instance which has the controller node configured via Puppet stackforge modules. Initially this includes only the following services:
MySQL
RabbitMQ
Keepalived/HAProxy (HA is not yet fully supported however)
Nova
Neutron
Keystone
Glance (file backend)
Cinder
Using these services it is possible to run devtest_overcloud.sh to completion. The idea is that we can quickly add more services once we have CI in place. In order to test this you'll want to build your images with these elements:
os-net-config
heat-config-puppet
puppet-modules
hiera
None of the OpenStack specific TripleO elements should be used with this approach (the nova/neutron elements were NOT used to build the controller image). Also, rather than use neutron-openvswitch-agent to configure low level networking, it is recommended that os-net-config be configured directly via heat modeling rather than parameter passing to init-neutron-ovs. This allows us to configure the physical network while avoiding the coupling to the neutron-openvswitch-element that our standard parameter driven networking currently uses. (We still need to move init-neutron-ovs so that it isn't coupled and/or deprecate its use entirely because the heat driven stuff is more flexible.) Packages may optionally be pre-installed via DIB using the -p option (-p openstack-neutron,openstack-nova) etc. Change-Id: If8462e4eacb08eced61a8b03fd7c3c4257e0b5b8
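A sketch of how the Puppet implementation could be selected through the resource registry; the template path is illustrative (at the time of this commit the puppet templates had not yet moved into the puppet/ directory):

    resource_registry:
      # Use the Puppet-driven controller configuration instead of the
      # os-apply-config (element-based) implementation
      OS::TripleO::Controller::SoftwareConfig: controller-puppet.yaml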