Currently the Cinder iSCSI backend is configured within the DEFAULT section.
Since we aim to support multiple backends, this commit puts the iSCSI backend
in its own section and enables it by default, configuring it properly.
It also adds a parameter which can be used to disable the default backend.
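As a rough illustration only (the parameter name below is assumed, not
taken from this change), the new switch would be used from a Heat
environment file along these lines:
  parameters:
    # Assumed parameter name for disabling the default iSCSI backend:
    CinderEnableIscsiBackend: false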
Co-Authored-By: Giulio Fidente <gfidente@redhat.com>
Change-Id: I05fb44b59829c0afa8a6588956a48320f2f65159
|
|
This is a first implementation of Ceph support in TripleO with Puppet:
* Install ceph-mon on the controller nodes
* Install ceph-osd on the cephstorage nodes
Co-Authored-By: Giulio Fidente <gfidente@redhat.com>
Change-Id: I48488cbe950047fae5e746e458106d6edb9a6183
|
|
This patch applies the allNodesConfig data to the Swift storage
nodes. This contains hosts information, which could be useful.
Change-Id: Iaccfdc698e371d6618d561c33f256ccc3c166fb7
|
|
This patch adds a new BlockStoreNodesPostDeployment resource
which can be used along with the environment file to
specify a nested stack which is guaranteed to execute
after all the BlockStore config deployments have executed.
This is really useful for Puppet in that Heat actually
controls where puppet executes in the deployment
process and we want to ensure puppet runs after
all hiera configuration data has been deployed to
the nodes. With the previous approach some of the
data would be there, but allNodes data would not be
guaranteed to be there in time.
As os-apply-config (tripleo-image-elements) deployments have their
ordering controlled within the elements themselves, an empty stubbed-in
nested stack has been added so that we don't break that
implementation.
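As a sketch of the mechanism (resource alias and file name assumed),
the overcloud environment would map the new hook to a Puppet-specific
nested stack roughly like this:
  resource_registry:
    # Executed only after all BlockStore config deployments have run:
    OS::TripleO::BlockStoreNodesPostDeployment: puppet/cinder-storage-post.yaml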
Change-Id: I29b3574e341eecd53b2867788f415bff153cfa9f
|
|
This patch adds a new ObjectStoreNodesPostDeployment resource
which can be used along with the environment file to
specify a nested stack which is guaranteed to execute
after all the ObjectStore config deployments have executed.
This is really useful for Puppet in that Heat actually
controls where puppet executes in the deployment
process and we want to ensure puppet runs after
all hiera configuration data has been deployed to
the nodes. With the previous approach some of the
data would be there, but allNodes data would not be
guaranteed to be there in time.
As os-apply-config (tripleo-image-elements) deployments have their
ordering controlled within the elements themselves, an empty stubbed-in
nested stack has been added so that we don't break that
implementation.
Change-Id: I778b87a17d5e6824233fdf9957c76549c36b3f78
|
|
This patch adds a new ComputeNodesPostDeployment resource
which can be used along with the environment file to
specify a nested stack which is guaranteed to execute
after all the Compute config deployments have executed.
This is really useful for Puppet in that Heat actually
controls where puppet executes in the deployment
process and we want to ensure puppet runs after
all hiera configuration data has been deployed to
the nodes. With the previous approach some of the
data would be there, but allNodes data would not be
guaranteed to be there in time.
As os-apply-config (tripleo-image-elements) deployments have their
ordering controlled within the elements themselves, an empty stubbed-in
nested stack has been added so that we don't break that
implementation.
Change-Id: I80bccd692e45393f8250607073d1fe7beb0d7396
|
|
This patch splits out the BootstrapNode config
such that alternate implementations (Puppet, for example)
can provide their own SoftwareConfigs via a nested stack.
This is controlled by the standard overcloud heat environment.
For os-apply-config deployments the implementation should work the
same as before.
For puppet deployments the implementation uses hiera metadata
to configure bootstrap_nodeid.
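A minimal sketch of the puppet variant (the key name comes from this
message; the value is an assumed example): the nested stack would emit
hiera data such as:
  bootstrap_nodeid: overcloud-controller-0   # assumed example value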
Change-Id: I691a9d7c474866038a5d47beab295899b5479d03
|
|
This patch splits out the allNodesConfig config
such that alternate implementations (Puppet, for example)
can provide their own SoftwareConfigs via a nested stack.
This is controlled by the standard overcloud heat environment.
For os-apply-config deployments the implementation should work the
same as before.
For puppet deployments the implementation uses hiera metadata
to configure rabbit_nodes. The puppet deployment doesn't support
hosts or freeform sysctl metadata yet, so those are handled the
same as before for now.
Change-Id: I34ae30b1f37aca8b39586f7e350511462d66f694
|
|
This patch splits out the SwiftDevicesAndProxy config
such that alternate implementations (Puppet, for example)
can provide their own SoftwareConfigs via a nested stack.
This is controlled by the standard overcloud heat environment.
For os-apply-config deployments the implementation should work the
same as before.
For puppet deployments the implementation uses hiera metadata
to configure swift devices.
Partial-bug: 1418805
Change-Id: Ibf6038460f36279ad51a04947589d4a03a553f66
|
|
This patch adds a new ControllerNodesPostDeployment resource
which can be used along with the environment file to
specify a nested stack which is guaranteed to execute
after all the Controller config deployments (HA or otherwise) have
executed.
This is really useful for Puppet in that Heat actually
controls where puppet executes in the deployment
process and we want to ensure puppet runs after
all hiera configuration data has been deployed to
the nodes. With the previous approach some of the
data would be there, but most of the HA data which
actually gets composed outside of the controller-puppet.yaml
nested stack would not be guaranteed to be there in time.
As os-apply-config (tripleo-image-elements) deployments have their
ordering controlled within the elements themselves, an empty stubbed-in
nested stack has been added so that we don't break that
implementation.
Partial-bug: 1418805
Change-Id: Icd6b2c9c1f9b057c28649ee3bdce0039f3fd8422
|
|
The new ceph-source.yaml file provides the config settings needed
by the elements which configure Ceph on controllers (monitors) and
storage nodes (OSDs), as well as the Cinder backend which uses it.
There is also a without-mergepy copy named ceph-storage.yaml.
Change-Id: I954861536c41b2a7e6cbd86a0f0b55004eed4c70
|
|
This patch adds NTP support to all roles.
As part of this change, overcloud-without-mergepy.yaml has
also been updated so that it passes the NtpServer parameter into
the Swift and Cinder storage node templates so that NTP can
be configured on those machines as well.
NOTE: The puppet support here uses the puppetlabs-ntp module,
which we add in Ib10ccbfdb3140b19f40049707548c6655d250e1c.
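For example (server value assumed), the NTP server can then be set once
in the overcloud environment and is passed through to all roles:
  parameters:
    NtpServer: pool.ntp.org   # assumed example server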
Change-Id: If2ef236fa42a714e84c6944eee5fe4daddf3fedf
|
|
Cinder block storage nodes shouldn't need to know the
AdminPassword and CinderPassword values. There are
no services which require Keystone related passwords
on the block storage nodes.
Change-Id: I4aa89347c60ec6258bd66725a895f6fd2b4844f6
|
|
Our existing default (replicas == 1) means that no data
(or copies) is being replicated in a multi-node Swift
environment. This seems like a bad production default
setting and could easily slip by if not set.
Setting it to 3 shouldn't hurt anything and seems to follow
suit with what several production installers (based around Puppet)
actually use. If using an installation with fewer than 3 Swift
nodes, I believe Swift will do its best and still work fine.
FWIW I noticed this when testing a multi-node Puppet swift
installation and was surprised when I didn't see any *data
files getting replicated across the storage cluster.
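As a sketch (parameter name assumed from context), small test
environments can still override the new default:
  parameters:
    SwiftReplicas: 1   # single-node test setup; production default is now 3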
Change-Id: I44bdfff7aae6bdf845b79ca1f8f450c22113caed
|
|
In doing the Puppet version of the Swift role I noticed
4 parameters which we apply to storage nodes but which should
not be required. This patch drops the following parameters
from the swift-storage and swift-storage-puppet nested
stacks.
1) ControllerIP: There is no reason a storage node should need
the IP address of the controller. The swift proxy would need
this information in order to be able to contact keystone.
Since the swift proxy is not installed on storage nodes, we can
drop the parameter here.
2) NeutronEnableTunnelling: There is no reason for Neutron
to be installed on Swift storage nodes. No need to create
an OVS bridge either.
3) NeutronNetworkType: Similar to above. No neutron requirements
exist here so this parameter is not required.
4) Password: This only applies to the swift-proxy, which is
currently part of our controller role. Storage nodes shouldn't need
the keystone service-password for any reason.
Change-Id: Icbf05363475c388fc722277da3d3d00a7355b19a
|
|
|
This change allows enabling HA for Neutron routers
via the new NeutronL3HA parameter.
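For illustration, enabling it is then a one-line environment override
(assuming the boolean form shown here):
  parameters:
    NeutronL3HA: true   # create HA routers by default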
Change-Id: Ia5f7c0b4e89159456482e840c50d166ec5f25d4c
Implements: blueprint tripleo-icehouse-ha-production-configuration
|
|
This was added in I36fece56bafa9fe9c4883b572687b3fc819eeae1
and is missing from overcloud-without-mergepy.
Change-Id: I5c2566cc77247574f8d687eaab8094de481a8c67
|
|
This was added in Icc5e431a7e2884b3ca3a255b6fd901619bc98460
and is missing from overcloud-without-mergepy.
Change-Id: I1273b646c04783712fd3f8baccafead11817689c
|
|
|
|
We have never created these additional storage nodes by default with
the old templates; we agreed to add a job for this in CI [1], so
we will override the default value in the specific CI job.
1. https://github.com/openstack-infra/tripleo-ci/blob/master/docs/wanted_ci_jobs.csv
Change-Id: Iaec38807bc209fc28d83e3d6922269e803110053
|
|
|
The Neutron tunnel type settings were missing from the Controller
section of the without-mergepy template, which made it impossible
to configure any tunnel type other than GRE.
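A rough sketch (parameter names assumed) of switching the overcloud to
VXLAN once these settings are exposed on the Controller:
  parameters:
    NeutronNetworkType: vxlan
    NeutronTunnelTypes: vxlan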
Change-Id: Ia2579ed39a16d2b9826ce8406cb97fc116e3d595
|
|
|
We want to customize the default kernel keepalive timings and
make them more aggressive to work around the lack of heartbeat support
in the Oslo RabbitMQ client; see:
https://bugs.launchpad.net/oslo.messaging/+bug/856764/comments/19
and
https://bugs.launchpad.net/oslo.messaging/+bug/856764/comments/70
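For illustration (key names and values assumed, not taken from this
log), the more aggressive settings would look roughly like the
following sysctl data applied to the nodes:
  sysctl_settings:
    net.ipv4.tcp_keepalive_time: 5     # start probing after 5s idle
    net.ipv4.tcp_keepalive_intvl: 1    # probe every second
    net.ipv4.tcp_keepalive_probes: 5   # give up after 5 failed probes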
Change-Id: Ieac08f595086acb8dd336e33efc705ee0b8a3a87
Closes-Bug: 1301431
Closes-Bug: 1385240
Closes-Bug: 1385234
|
|
|
|
This patch removes all references to the Ceilometer DSN parameter
in the overcloud compute templates. These credentials
are not required in order to run the required Ceilometer
service/agents.
Change-Id: I421ce4fca87ac87dd65ab8bbb20e7ea9be8d9c5d
|
|
This patch removes all references to the Neutron DSN parameter
in the overcloud compute templates. These credentials
are not required in order to run the required Neutron
services.
Change-Id: I0691f43bd2ce85bec0d68ab979136414f0610c61
|
|
Remove NovaDSN from overcloud compute.
When using the Conductor, the Nova compute service
does not need access to the database. This patch
removes all references to the Nova DSN in the overcloud
compute templates.
Change-Id: If75f480489b84002dd061c183dbee3572a8b63f1
|
|
In I00af10e07feed6c9c97ee6cad545dbff88cd6afc we removed the
Neutron* parameters from cinder-storage.yaml but we forgot to
also remove them from overcloud-without-mergepy.yaml.
Change-Id: I09f2eb278fa0eba1dff80884f12b6f682c7b0484
|
|
This patch adds the missing parameters to
overcloud-without-mergepy.yaml.
These parameters were added to overcloud-source.yaml
in I422c65e7d941593083d52ad7fdf0dfd1d2fb3155. Due to
the concurrent review window they never made it
into the new overcloud-without-mergepy.yaml
implementation.
Change-Id: If54dc111aec852f906c9e7ac1bf56f9dcaf678ea
|
|
This patch adds the missing KeystoneSSLCertificate and
KeystoneSSLCertificateKey to overcloud-without-mergepy.yaml.
These parameters were added to overcloud-source.yaml
in Icf46132230512a31b6dec3c07164c95b13dd8f73. Due to
the concurrent review window they never made it
into the new overcloud-without-mergepy.yaml
implementation.
Change-Id: I8b1155ca0a28392e5d5ade57d53bf810d8b5f053
|
|
This patch adds the missing RabbitClientUseSSL and
RabbitClientPort to overcloud-without-mergepy.yaml.
These parameters were added to overcloud-source.yaml
in I7b7613cb60b9095ba5665c335c496fea4514391a. Due to
the concurrent review window they never made it
into the new overcloud-without-mergepy.yaml
implementation.
Change-Id: I182671b84d0a21d7018eb136003968f101384716
|
|
Now that we are using os-net-config we can make use of
the nic naming abstraction layer where the actual physical
nic name is mapped automatically.
This change removes all the eth0 references and replaces
them with nic1, which should make it more likely
that these default values would actually work on
some distributions.
It also removes the single instance of eth2 in the
undercloud-bm-nova-deploy.yaml template and replaces
it with nic1 as well. Underclouds aren't a special case
in this regard (I run my bare metal undercloud on em1)
so there is no good reason to default to the second nic.
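A minimal os-net-config style sketch (surrounding structure assumed)
showing the abstract name in place of eth0:
  network_config:
    - type: ovs_bridge
      name: br-ex
      members:
        - type: interface
          name: nic1   # mapped to the first physical NIC, was eth0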
Change-Id: I3ea92a502bc4b8789f74913f232ac8bc6b843008
|
|
|
|
Change-Id: I00af10e07feed6c9c97ee6cad545dbff88cd6afc
|
|
The params were added in I2997d23c584055c40034827e9beb58e6542ea11c
as a means to pass undercloud image data to overcloud instances
so they could perform an update via takeovernode. We've
never actually made use of them via takeovernode... furthermore
these params are a bit stale in that they haven't been applied
to other instance types (storage, etc.).
I propose we remove them entirely and start with a fresh plan for
how these would get used (perhaps a blueprint). As is these don't
appear to have ever been fully wired up to do anything, so removing
them should have no effect on end users.
Change-Id: I96f91fb0d67e7fe203d3767c8ab89ce82adbe331
|
|
With the push to using the new setup-flavors provided by
os-cloud-config, the default flavor will no longer be called
'baremetal', and Heat will always validate the default even if it
is overridden. To that end, remove the default flavor from every
flavor definition. Just to be certain, also add a custom_constraint
to every flavor definition that was missing it.
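A sketch of what a flavor parameter looks like after this change
(parameter name assumed): no default value, plus a constraint that is
validated against Nova:
  OvercloudControlFlavor:
    type: string
    description: Flavor for controller nodes
    constraints:
      - custom_constraint: nova.flavor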
Change-Id: I24251e73be4e86738857f73b89499f592c4908de
|
|
|
|
Due to an unusual interface to OS::Neutron::Port resources,
it's necessary to specify replacement_policy: AUTO, or the
resource is unconditionally replaced on every stack update.
I've started a discussion about possibly changing the default in
Heat, but right now, we need this or we have the bad outcome
of replacing all (!) compute and controller nodes on every
stack-update, even if the templates are unmodified.
Passing the AUTO value should be safe regardless of any
potential change of default value in Heat.
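A minimal sketch of a port with the workaround applied (other property
values assumed):
  ControllerPort:
    type: OS::Neutron::Port
    properties:
      network: ctlplane            # assumed network name
      replacement_policy: AUTO     # keep the port (and server) on updates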
Change-Id: I6dd02ae17407f8f4c81ae418e5027f4f38ae4e9b
Closes-Bug: #1383709
|
|
If you provide a wrong (or missing) image, KeyName,
or flavor, we fail at some later point (not always early,
depending on what's wrong).
Since Icehouse, Heat has had a "custom constraints" method
of dynamically validating parameter values, by comparing the
value provided with a list from the underlying service.
Despite the name, there's nothing "custom" about these constraints;
they are included in Heat by default (though they are pluggable,
which is where the name comes from).
See the docs for more info:
http://docs.openstack.org/developer/heat/template_guide/hot_spec.html#custom-constraint
Note: I've not considered network validation here; this could
possibly be added in a subsequent patch.
These constraints are evaluated via any of the following:
- heat template-validate -f <template>
- heat stack-preview <arguments given to create>
- heat stack-create <arguments, fails fast before creating anything>
- heat stack-update <arguments, fails fast before updating anything>
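For illustration (parameter names assumed), the validated parameters
end up looking roughly like:
  parameters:
    Image:
      type: string
      constraints:
        - custom_constraint: glance.image   # must exist in Glance
    KeyName:
      type: string
      constraints:
        - custom_constraint: nova.keypair   # must exist in Nova
    Flavor:
      type: string
      constraints:
        - custom_constraint: nova.flavor    # must exist in Nova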
Change-Id: I3a6374ce5421575cdde893c62aa97c750a07acd8
|
|
This patch extends the previous 'Don't use merge.py for overcloud'
commit with the cinder-storage.yaml and swift-storage.yaml templates.
Requirements for this to deploy:
1. Block and object storage images have to be built
(overcloud-cinder-volume and overcloud-swift-storage)
2. The images have to be loaded by devtest_overcloud.sh
OVERCLOUD_CINDER_ID=$(load-image -d $TRIPLEO_ROOT/overcloud-cinder-volume.qcow2)
OVERCLOUD_SWIFT_ID=$(load-image -d $TRIPLEO_ROOT/overcloud-swift-storage.qcow2)
Change-Id: I45f9d9f051970a83e26c0fd924d7c98276958113
|
|
This provides three templates: overcloud-without-mergepy.yaml,
compute.yaml and controller.yaml. These can be used in combination with
overcloud-resource-registry.yaml to deploy the overcloud on their own --
without having to do any pre-processing (via merge.py).
To test these you have to add the resource registry environment (in
addition to the existing `-e` option) and use the new overcloud template
in the Heat call in devtest_overcloud.sh (line 374):
heat $HEAT_OP -e $TRIPLEO_ROOT/overcloud-env.json \
-e "$TRIPLEO_ROOT/tripleo-heat-templates/overcloud-resource-registry.yaml" \
-t 360 \
-f $TRIPLEO_ROOT/tripleo-heat-templates/overcloud-without-mergepy.yaml \
-P "ExtraConfig=${OVERCLOUD_EXTRA_CONFIG}" \
$STACKNAME
The existing overcloud Heat environment
($TRIPLEO_ROOT/overcloud-env.json) should keep on working. Scaling is
now being controlled by the `ControllerCount` and `ComputeCount`
template parameters, though.
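As a sketch (values assumed), the scaling parameters can be set in the
environment, for example:
  parameters:
    ControllerCount: 1
    ComputeCount: 2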
NOTE: the changes here depend on a fairly recent Heat build (commit
e5f285f6cb from ~7th September, 2014). In other words, this requires
Juno Heat.
Also, passing more than one environment file to Heat requires
python-heatclient version 0.2.11.
Change-Id: I687a00c7dc164ba044f9f2dfca96a02401427855
|