This patch provides an alternate implementation of
OS::TripleO::Compute::SoftwareConfig which uses Puppet
to drive the configuration. Using this it is possible
to create a fully functional overcloud compute instance
whose compute node is configured via the Stackforge Puppet
modules. This includes all of the Nova, Neutron,
and Ceilometer configuration required to make things work.
In order to test this you'll want to build your images
with these elements:
- os-net-config
- heat-config-puppet
- puppet-modules
- hiera
None of the OpenStack specific TripleO elements
should be used with this approach (the nova/neutron/ceilometer
elements were NOT used to build the compute image).
Also, rather than using neutron-openvswitch-agent to configure
low-level networking, it is recommended that os-net-config
be configured directly via Heat modeling rather than by
parameter passing to init-neutron-ovs. This allows us to
configure the physical network while avoiding the coupling to
the neutron-openvswitch-agent element that our standard
parameter-driven networking currently uses. (We still need
to move init-neutron-ovs so that it isn't coupled, and/or deprecate
its use entirely, because the Heat-driven approach is more flexible.)
Packages may optionally be pre-installed via DIB using the
-p option (-p openstack-neutron,openstack-nova).
Change-Id: Ic36be25d70f0a94ca07ffda6e0005669b81c1ac7
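For illustration, a minimal sketch of how the Puppet-based
implementation could be selected through a Heat environment file;
the compute-config-puppet.yaml file name is an assumption, not
necessarily the name introduced by this patch:
  resource_registry:
    # Assumed file name for the Puppet-based software configuration.
    OS::TripleO::Compute::SoftwareConfig: compute-config-puppet.yaml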
This example extends the compute software configuration
so that heat metadata is used to model the os-net-config
YAML (ultimately JSON) directly. The existing
os-net-config element already supports this format.
Configuring the physical network layer in this manner
would supplant the ever-growing list of Heat parameters
that we have and is something that could be automatically
generated via Tuskar.
The default is to use net-config-noop.yaml, which passes
no config metadata into the os-net-config element and
essentially disables it in favor of using parameters
with init-neutron-ovs.
Change-Id: I30f325b1751caaef5624537e63ee27c2e418d5c8
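For reference, a minimal sketch of the kind of os-net-config data
being modeled here; the bridge and interface names are illustrative:
  network_config:
    - type: ovs_bridge
      name: br-ex           # illustrative bridge name
      use_dhcp: true
      members:
        - type: interface
          name: nic1        # abstract nic name mapped by os-net-config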
We want to customize the default kernel keepalive timings and
make them more aggressive to work around the lack of heartbeat
support in the Oslo RabbitMQ client, see:
https://bugs.launchpad.net/oslo.messaging/+bug/856764/comments/19
and
https://bugs.launchpad.net/oslo.messaging/+bug/856764/comments/70
Change-Id: Ieac08f595086acb8dd336e33efc705ee0b8a3a87
Closes-Bug: 1301431
Closes-Bug: 1385240
Closes-Bug: 1385234
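The kernel settings in question are the TCP keepalive sysctls; a
sketch of the relevant keys as a plain YAML map, with illustrative
values rather than the committed ones (see the bug comments linked
above for the tuning rationale):
  # Illustrative values only.
  net.ipv4.tcp_keepalive_time: 30
  net.ipv4.tcp_keepalive_intvl: 1
  net.ipv4.tcp_keepalive_probes: 5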
This patch removes all references to the Neutron DSN parameter
in the overcloud compute templates. These credentials
are not required in order to run the relevant Neutron
services.
Change-Id: I0691f43bd2ce85bec0d68ab979136414f0610c61
Remove NovaDSN from overcloud compute.
When using the Conductor, the Nova compute service
does not need access to the database. This patch
removes all references to the Nova DSN in the overcloud
compute templates.
Change-Id: If75f480489b84002dd061c183dbee3572a8b63f1
In I00af10e07feed6c9c97ee6cad545dbff88cd6afc we removed the
Neutron* parameters from cinder-storage.yaml but we forgot to
also remove them from overcloud-without-mergepy.yaml.
Change-Id: I09f2eb278fa0eba1dff80884f12b6f682c7b0484
This patch adds the missing HAProxy novncproxy parameters to
controller.yaml.
These parameters were added to overcloud-source.yaml
in I0c6a3d6a8fd10da71abbf568633b28bdb5e56aa2.
Change-Id: Icff2f17a301e5e95fa43549ec1566c0c0d5b5353
This patch adds the missing parameters to controller.yaml.
These parameters were added to overcloud-source.yaml
in I1581c091b996422fb1374ea4c024d0a88453e10b.
Change-Id: I3e4e0e1feb521dded2679fed508fa97e8dd27661
This patch adds the missing parameters to
overcloud-without-mergepy.yaml.
These parameters were added to overcloud-source.yaml
in I422c65e7d941593083d52ad7fdf0dfd1d2fb3155. Due to
the concurrent review window they never made it
into the new overcloud-without-mergepy.yaml
implementation.
Change-Id: If54dc111aec852f906c9e7ac1bf56f9dcaf678ea
In I422c65e7d941593083d52ad7fdf0dfd1d2fb3155
(Enable Neutron DVR support in TripleO installation)
we added duplicate parameters for NeutronPublicInterfaceRawDevice
and NeutronNetworkType.
In preparation for syncing with overcloud-without-mergepy.yaml,
let's remove these duplicates.
Change-Id: Ib4888bc91f30aeb3aba590b69e4919a93f577143
This patch adds the missing KeystoneSSLCertificate and
KeystoneSSLCertificateKey to overcloud-without-mergepy.yaml.
These parameters were added to overcloud-source.yaml
in Icf46132230512a31b6dec3c07164c95b13dd8f73. Due to
the concurrent review window they never made it
into the new overcloud-without-mergepy.yaml
implementation.
Change-Id: I8b1155ca0a28392e5d5ade57d53bf810d8b5f053
This patch adds the missing RabbitClientUseSSL and
RabbitClientPort to overcloud-without-mergepy.yaml.
These parameters were added to overcloud-source.yaml
in I7b7613cb60b9095ba5665c335c496fea4514391a. Due to
the concurrent review window they never made it
into the new overcloud-without-mergepy.yaml
implementation.
Change-Id: I182671b84d0a21d7018eb136003968f101384716
Now that we are using os-net-config we can make use of
the nic naming abstraction layer where the actual physical
nic name is mapped automatically.
This change removes all the eth0 references and replaces
them with nic1, which should make it more likely
that these default values would actually work on
some distributions.
It also removes the single instance of eth2 in the
undercloud-bm-nova-deploy.yaml template and replaces
it with nic1 as well. Underclouds aren't a special case
in this regard (I run my bare metal undercloud on em1)
so there is no good reason to default to the second nic.
Change-Id: I3ea92a502bc4b8789f74913f232ac8bc6b843008
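A sketch of the kind of default this changes; NeutronPublicInterface
is used here as an example of such a parameter, and the description
text is paraphrased:
  NeutronPublicInterface:
    description: Interface to attach to the public bridge.  # paraphrased
    type: string
    default: nic1   # was eth0; os-net-config maps nicN to the real device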
Change-Id: I00af10e07feed6c9c97ee6cad545dbff88cd6afc
The params were added in I2997d23c584055c40034827e9beb58e6542ea11c
as a means to pass undercloud image data to overcloud instances
so they could perform an update via takeovernode. We've
never actually made use of them via takeovernode... furthermore
these params are a bit stale in that they haven't been applied
to other instance types (storage, etc.).
I propose we remove them entirely and start with a fresh plan for
how these would get used (perhaps a blueprint). As is, these don't
appear to have ever been fully wired up to do anything, so removing
them should have no effect on end users.
Change-Id: I96f91fb0d67e7fe203d3767c8ab89ce82adbe331
The default maxconn is only 150, which may be good for API services
but is not enough for the RabbitMQ sessions in a cluster as small as
15 nodes. So bump the number up to 1500 for RabbitMQ to allow for
100 nodes. In the long run this number should be calculated based on
the scale numbers.
Closes-bug: #1386406
Change-Id: Ieb707b31022a6fc9ade32ed2a332b67bf4dc0311
With the push to using the new setup-flavors provided by
os-cloud-config, the default flavor will no longer be called
'baremetal', and Heat will always validate the default even if it
is overridden. To that end, remove the default flavor from every
flavor definition. Just to be certain, also add a custom_constraint
to every flavor definition that was missing it.
Change-Id: I24251e73be4e86738857f73b89499f592c4908de
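A sketch of what a flavor parameter looks like after this change;
the parameter name is illustrative, while nova.flavor is the
constraint name Heat ships:
  OvercloudComputeFlavor:       # illustrative parameter name
    type: string
    description: Flavor to request for compute nodes.
    # No hard-coded 'baremetal' default; Heat validates the supplied value.
    constraints:
      - custom_constraint: nova.flavor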
An empty local_ip in ml2_conf.ini makes neutron-openvswitch-agent
fail to start, which in turn prevents bridging DHCP onto br-ctlplane
and PXE booting an overcloud, so provide the value in
undercloud-source.yaml.
Related-Bug: #1394956
Change-Id: If3a94b9c2b971ceb7601f91a2db64989960fb5d3
This is a step towards supporting pluggable software configurations
in the heat templates. By moving compute-config out of compute.yaml
we make it possible to define alternate implementations by
changing the OS::TripleO::Compute::SoftwareConfig value in the
overcloud-resource-registry.yaml heat environment file.
Co-Authored-By: Steve Hardy <shardy@redhat.com>
Change-Id: I250dc1a8c02626cf7d1a5d2ce92706504ec0c7de
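A sketch of the mapping this makes pluggable in the
overcloud-resource-registry.yaml environment; the compute-config.yaml
target file name is assumed from the description above:
  resource_registry:
    # Point this at an alternate template to swap software configurations.
    OS::TripleO::Compute::SoftwareConfig: compute-config.yaml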
At present connect_host is specified by each port, individually, as
the same value. Move connect_host to be a direct child of the stunnel
element so it is only specified once.
Although previously we could theoretically specify a different
connect_host for each service, in practice they were the same and
that never would have worked.
This change means Mustache references like
{{#stunnel.connect_host}} will work.
Change-Id: I25c4bb09cf28a3728e959d4dd583af26a602ad90
Partial-Bug: #1391926
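A sketch of the resulting metadata layout, based only on the
description above; the per-port fields and all values are
illustrative, not the stunnel element's actual schema:
  stunnel:
    connect_host: 192.0.2.10   # specified once, as a direct child
    ports:
      - name: keystone         # illustrative per-service entry,
        accept: 13357          # with no per-port connect_host any more
        connect: 5000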
We've submitted a patch (https://review.openstack.org/#/c/130172/)
to set the value of mount_check to swift.mount-check if it exists,
and otherwise to set mount_check to false. By default TripleO
deployments set mount_check to false since they do not use mounted
disks to store data. However we (HP) and others are now using
TripleO to deploy Swift servers with mounted drives for data, in
which case mount_check should be set to True. This change adds
swift.mount-check data and sets it to the value of the
SwiftMountCheck parameter, which has a default value of False.
Change-Id: I36fece56bafa9fe9c4883b572687b3fc819eeae1
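A sketch of the metadata being added, as described above; the
surrounding structure and the HOT get_param form are shown for
illustration:
  swift:
    mount-check: {get_param: SwiftMountCheck}   # defaults to False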
This change is congruent with I6dd02ae17407f8f4c81ae418e5027f4f38ae4e9b
but applies to undercloud configs rather than overcloud configs.
I've listed this as closing 1383709 even though that bug didn't talk
about the undercloud, as this seems to be another instance of the
same issue seen there.
Change-Id: I3ee80043bb455460991e78525fa4310934df4697
Closes-Bug: #1383709
Instead of the default TCP connection check, use the HTTP check. This
provides a more reliable way to tell whether a service is up or not;
only 2xx and 3xx response codes will signal a healthy service. This
check can also be used in conjunction with check-ssl to enable checks
for services running SSL/TLS in the overcloud.
Change-Id: I1581c091b996422fb1374ea4c024d0a88453e10b
Due to an unusual interface to OS::Neutron::Port resources,
it's necessary to specify replacement_policy: AUTO, or the
resource is unconditionally replaced on every stack update.
I've started a discussion about possibly changing the default in
Heat, but right now we need this, or we have the bad outcome
of replacing all (!) compute and controller nodes on every
stack-update, even if the templates are unmodified.
Passing the AUTO value should be safe regardless of any
potential change of default value in Heat.
Change-Id: I6dd02ae17407f8f4c81ae418e5027f4f38ae4e9b
Closes-Bug: #1383709
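A sketch of the property on a port resource; the resource and network
names are illustrative, while replacement_policy is the real
OS::Neutron::Port property:
  ControllerPort:                  # illustrative resource name
    type: OS::Neutron::Port
    properties:
      network: ctlplane            # illustrative network
      replacement_policy: AUTO     # avoids unconditional replacement on update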
Adds configuration options for Rabbit port and use_ssl settings using a shared
RabbitMQ parameter.
Change-Id: I7b7613cb60b9095ba5665c335c496fea4514391a
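A sketch of the shared parameters involved (the names match those
referenced elsewhere in this log; types and defaults are
illustrative, 5672 being the standard RabbitMQ port):
  RabbitClientPort:
    type: number
    default: 5672
    description: RabbitMQ port for client connections.
  RabbitClientUseSSL:
    type: string
    default: 'false'
    description: Whether client connections to RabbitMQ use SSL.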
If you provide the wrong image, KeyName, or flavor (or one that
doesn't exist), we fail at some later point (not always early,
depending on what's wrong).
Since Icehouse, Heat has had a "custom constraints" method
of dynamically validating parameter values, by comparing the
value provided with a list from the underlying service.
Despite the name, there's nothing "custom" about these constraints;
they are included in Heat by default (though they are pluggable,
which is where the name comes from).
See the docs for more info:
http://docs.openstack.org/developer/heat/template_guide/hot_spec.html#custom-constraint
Note: I've not considered network validation here; it could
possibly be added in a subsequent patch.
These constraints are evaluated via any of the following:
- heat template-validate -f <template>
- heat stack-preview <arguments given to create>
- heat stack-create <arguments, fails fast before creating anything>
- heat stack-update <arguments, fails fast before updating anything>
Change-Id: I3a6374ce5421575cdde893c62aa97c750a07acd8
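A sketch of the constrained parameters; KeyName is named above, the
image parameter name is illustrative, and glance.image / nova.keypair
are constraint names provided by Heat:
  KeyName:
    type: string
    description: Name of an existing Nova key pair to enable SSH access.
    constraints:
      - custom_constraint: nova.keypair
  Image:                          # illustrative parameter name
    type: string
    constraints:
      - custom_constraint: glance.image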
This change adds the necessary elements to the overcloud-source.yaml,
nova-compute-config.yaml and nova-compute-instance.yaml to allow Neutron
Distributed Virtual Routers (DVR) to be enabled. The added elements
default to values that leave DVR disabled, in keeping with
backwards compatibility.
Change-Id: I422c65e7d941593083d52ad7fdf0dfd1d2fb3155
blueprint: support-neutron-dvr
To implement the SSL PKI spec we need to change the keystone ssl cert
and cert key properties to be more generalizable. We also need to
support the old properties for backwards compatibility.
Change-Id: Icf46132230512a31b6dec3c07164c95b13dd8f73
Make the net binds simpler to maintain.
Change-Id: I7c7f2cde38a88976afe33097cdfe4a93d62a6417
This patch extends the previous 'Don't use merge.py for overcloud'
commit with the cinder-storage.yaml and swift-storage.yaml templates.
Requirements for this to deploy:
1. Block and object storage images have to be built
(overcloud-cinder-volume and overcloud-swift-storage)
2. The images have to be loaded by devtest_overcloud.sh
OVERCLOUD_CINDER_ID=$(load-image -d $TRIPLEO_ROOT/overcloud-cinder-volume.qcow2)
OVERCLOUD_SWIFT_ID=$(load-image -d $TRIPLEO_ROOT/overcloud-swift-storage.qcow2)
Change-Id: I45f9d9f051970a83e26c0fd924d7c98276958113
In I973d197245ed32612bde9209479e6ae3a443fc69, the signal_transport was
set to NO_SIGNAL to prevent the resource staying CREATE_IN_PROGRESS
forever. This means that Heat reports the stack is configured before it
actually is.
The correct fix was to add completion-signal to BlockStorageConfig.
However, now that there is a BlockStorage0AllNodesDeployment, we
simply have to receive the signal from allNodesConfig by setting
the deployment signal_transport.
Change-Id: I1f6408ca39fddd146e7aae140f61d265bbf563ec
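A sketch of the deployment described above; the resource layout
follows the names in the text, while the property values (including
the choice of transport) are illustrative:
  BlockStorage0AllNodesDeployment:
    type: OS::Heat::StructuredDeployment
    properties:
      config: {get_resource: allNodesConfig}
      server: {get_resource: BlockStorage0}
      signal_transport: CFN_SIGNAL   # illustrative; the point is not NO_SIGNAL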
This provides three templates: overcloud-without-mergepy.yaml,
compute.yaml and controller.yaml. These can be used in combination with
overcloud-resource-registry.yaml to deploy the overcloud on their own --
without having to do any pre-processing (via merge.py).
To test these you have to add the resource registry environment (in
addition to the existing `-e` option) and use the new overcloud template
in the Heat call in devtest_overcloud.sh (line 374):
heat $HEAT_OP -e $TRIPLEO_ROOT/overcloud-env.json \
-e "$TRIPLEO_ROOT/tripleo-heat-templates/overcloud-resource-registry.yaml" \
-t 360 \
-f $TRIPLEO_ROOT/tripleo-heat-templates/overcloud-without-mergepy.yaml \
-P "ExtraConfig=${OVERCLOUD_EXTRA_CONFIG}" \
$STACKNAME
The existing overcloud Heat environment
($TRIPLEO_ROOT/overcloud-env.json) should keep on working. Scaling is
now being controlled by the `ControllerCount` and `ComputeCount`
template parameters, though.
NOTE: the changes here depend on a fairly recent Heat build (commit
e5f285f6cb from ~7th September, 2014). In other words, this requires
Juno Heat.
Also, passing more than one environment file to Heat requires
python-heatclient version 0.2.11.
Change-Id: I687a00c7dc164ba044f9f2dfca96a02401427855
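A sketch of the scaling parameters in the overcloud environment file,
shown as YAML with illustrative counts:
  parameters:
    ControllerCount: 1
    ComputeCount: 1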