Remove the hardcoding of gre as the Neutron tenant network type for the
Overcloud. This enables deploying an Overcloud that uses vxlan instead
of gre tunnels. A new parameter, NeutronTunnelTypes, is added to allow
configuring the tunnel_types parameter in the Neutron ML2
configuration.
This change is required by https://review.openstack.org/#/c/92913
Change-Id: I2c2e2153a61349e58ada28c87aa2338c9f00e7bd
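A sketch of how the new parameter might be declared in the overcloud template (HOT syntax; the description text is assumed):

    NeutronTunnelTypes:
      type: string
      default: 'gre'
      description: >
        Tunnel types to enable on the tenant network, e.g. gre or vxlan;
        feeds the tunnel_types setting in the Neutron ML2 configuration.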
The 'show' attribute results in a nova API call, which has
performance overhead even with attribute memoization.
The 'name' attribute was added to expose the name without needing
an API call, since the resource already knows it. This change
switches from 'show' to 'name' throughout.
Change-Id: I1e83dd008cd02e5cec97868db0d5a695f07b7199
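Illustratively (resource name and attribute path hypothetical), the switch is:

    # before: resolving 'show' makes a nova API call
    name: {get_attr: [controller0, show, name]}
    # after: 'name' is returned from data the resource already holds
    name: {get_attr: [controller0, name]}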
The existing examples for the overcloud ExtraConfig options
use an Ironic setting that would likely never apply (Ironic
isn't deployed in the overcloud).
This patch changes the default example to use the Nova
force_config_drive option instead.
Change-Id: Ieb893552fe9466b90b9d9a831a676d114efb6db1
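The updated example would read roughly like this (a sketch; the exact wording and parameter shape are assumed):

    ExtraConfig:
      type: json
      default: {}
      description: >
        Additional configuration to inject into the cluster, e.g.
        {"nova": {"force_config_drive": "always"}}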
Supplement ExtraConfig with role-specific versions - ControllerExtraConfig
and NovaComputeExtraConfig. This allows the user to specify different
configurations for each role.
Change-Id: Ieaee80e414130504a5e40e878a5a4ca1c196ca2b
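A sketch of the role-specific parameters (types and defaults assumed):

    ControllerExtraConfig:
      type: json
      default: {}
      description: Controller-specific configuration, applied on top of ExtraConfig.
    NovaComputeExtraConfig:
      type: json
      default: {}
      description: Compute-specific configuration, applied on top of ExtraConfig.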
Clint pointed out that the YAML literal block indicator '|' prevents
text from rendering properly on arbitrarily wide screens. For most
things that makes sense, but IMO it doesn't for the JSON examples,
so I didn't alter those.
Change-Id: Ifb7dcc265c225b000bd5d26500212d41ea0233c8
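For reference, the two YAML block scalar styles in question:

    description: >
      a folded block: lines are joined with spaces, so the text
      reflows to whatever width the reader's screen offers
    example: |
      a literal block: newlines are preserved exactly, which keeps
      JSON examples formatted as the author wrote them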
Proper VLAN support requires adding the IP address to a new device,
rather than br-ex/br-ctlplane. This is added in the
tripleo-image-elements change https://review.openstack.org/103449
(I3f77f72ac623792e844dbb4d501b6ab269141f8e) and here we just expose
it with appropriate glue to get the IP address from Neutron.
With this we can now describe a VLAN public interface scenario
to the undercloud and overcloud control planes.
Change-Id: I4d2194fc813aebb0708d6fddf4f05bae5f091fd8
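A hypothetical sketch of the exposed knob (parameter name and wording assumed):

    NeutronPublicInterfaceTag:
      type: string
      default: ''
      description: >
        VLAN tag for creating a public VLAN. The IP address for the VLAN
        device is taken from Neutron, rather than being set on
        br-ex/br-ctlplane directly.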
We can obviously use passthrough for this, but I rather suspect that
"OMFG something is broken, get me debug" will be a common phrase.
Change-Id: I62539630a4737bbbe6883ed71929f38c819ceed4
With the default 60 second timeout, many services will periodically
log "MySQL server has gone away" because HAProxy has closed the connection.
Change-Id: Ied67344fbabcd77def4483be37a4706190ab28a0
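Conceptually, the fix is to keep HAProxy's idle timeouts well above 60 seconds so idle database connections are not cut mid-session; an illustrative haproxy.cfg fragment, not the literal change:

    defaults
        # keep idle MySQL connections open rather than dropping them at 60s
        timeout client  300s
        timeout server  300s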
The address for the vnc proxy is incorrectly configured in the nova
configuration file.
The correct IP address is the Public Virtual IP address of the
controller node as created by:
I9649ee74ebaf62b6b929b28243a07c789a08867c
The nova image_element nova.conf already has:
novncproxy_base_url=http://{{nova.public_ip}}:6080/vnc_auto.html
but nothing was setting nova.public_ip - until now
Closes-Bug: #1332554
Change-Id: I41214834511680170393dd4325b510f549373141
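A sketch of the wiring, following the usual Neutron port attribute path (resource name assumed):

    nova:
      public_ip: {get_attr: [PublicVirtualIP, fixed_ips, 0, ip_address]}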
There may be times when an update needs to change this without changing
the template, such as when updates will be done by something other than
Heat (e.g. Ansible).
Change-Id: I89d1153acab697b64468f841b3f2d17c169da649
When change I6730ffe1e27d952d563c16a9480298fbef9f61fe was merged, we
introduced some occurrences of list_join which should have been
migrated to Fn::Join (change I039f57ab39c1fcfc319a7a34265ba4fabf4ccd08).
This caused overcloud CI jobs to fail with:
Property error : allNodesConfig: config Items to join must be strings
This change fixes that by replacing the newly introduced occurrences
of list_join with Fn::Join.
Change-Id: Ibac193781d31d6f81e955e7b9381e13cfdd0ab1d
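The two spellings are equivalent; for example:

    # requires heat_template_version 2014-10-16:
    {list_join: [',', ['one', 'two']]}
    # accepted by heat_template_version 2013-05-23:
    {'Fn::Join': [',', ['one', 'two']]}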
Set the MySQL root password to a random string
for the undercloud and overcloud
Change-Id: I6d38ca82c77a4aa8f58089c50aa5bf320ec0ecc6
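One way to express this in a template is OS::Heat::RandomString (a sketch; the actual mechanism used may differ):

    MysqlRootPassword:
      type: OS::Heat::RandomString
      properties:
        length: 10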
To use a VLAN based public network we need the ext-net network to be a
VLAN with a segmentation id - but we can't do this unless we also have
the datacentre physical network marked as allowing vlans.
We could make this strictly opt-in, but as this doesn't affect the
switch configuration (and thus actual machine capabilities) having it
on by default seems reasonable. OTOH we can't force it on, because
high-security environments may well want a defense-in-depth setup
where neutron admins cannot configure VLANs that they are not meant
to have access to (consider that the cloud machine admins may be
separate from the folk running the services on top of them...)
Change-Id: I9687751753f810896c6d065750910da40132c9fa
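The effect on the ML2 plugin configuration is along these lines (illustrative ini; the VLAN range values are assumptions):

    [ml2_type_vlan]
    # mark the datacentre physical network as allowing VLANs
    network_vlan_ranges = datacentre:1:1000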
We currently make the external network a single-node gre network but
this is not at all correct for HA environments - we need a provider
network, which means having a bridge mapping, a flat network
specified, and then because we run the same ovs config everywhere we
need br-ex on the hypervisors too. This is entirely reasonable since
DVR will require this as well (and solve lots of scaling issues...).
Change-Id: I8b63ab51e7e20b235430fad8d786d8da005d84a1
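The pieces named above, sketched as the resulting OVS/ML2 configuration:

    [ovs]
    bridge_mappings = datacentre:br-ex

    [ml2_type_flat]
    flat_networks = datacentre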
To support underclouds and seeds running Heat versions older than the
very latest. The 2013-05-23 template version lacks the list_join
function, so this change reverts to using the equivalent function
Fn::Join.
Change-Id: I039f57ab39c1fcfc319a7a34265ba4fabf4ccd08
Closes-Bug: #1354305
This change sets applications to use the VIP address for database
connectivity and places HAProxy between the applications and MySQL.
Depends upon tripleo-image-elements changes:
Ia6f26305f8e744e4ff938dff85de1193183ecd8f
Iac1274cc52014f25887d696261b32146afc926dd
I5af70abb96021146c098f788db349808d806a348
Related to blueprint tripleo-icehouse-ha-production-configuration
Change-Id: Ia9d6ed2771f756d2a97ae5df7ed737a062a59cf2
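An illustrative haproxy listener for the database path (addresses hypothetical):

    listen mysql
        bind 192.0.2.21:3306    # the VIP applications now connect to
        server controller0 192.0.2.10:3306 check
        server controller1 192.0.2.11:3306 check backup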
To balance load over the rabbit cluster we want to route access
to it via haproxy.
As an additional benefit, this also helps work around bug #856764.
This change sets rabbit.host to the ControlVirtualIP (to be used by
the elements) and adds an haproxy listener for the rabbit nodes.
Related to blueprint tripleo-icehouse-ha-production-configuration
Depends on I3ff37ec18b9191ca8e861519bed142cbdbd5faa2
Change-Id: I49b622a604542f456bd9a37da8dae3353218e640
Related-Bug: 856764
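On the metadata side this reduces to pointing clients at the VIP (a sketch; the address is hypothetical):

    rabbit:
      # the ControlVirtualIP; haproxy balances over the rabbit nodes
      host: 192.0.2.21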
Controller scaling was broken by commit
02772ba2877b9f6d427c6fd760bf19d6334c68a8: merge.py raises an exception
when it tries to scale the default value "controller0" of the
`BootstrapNodeResource` parameter.
This reverts back to using Fn::Select for specifying the bootstrap host,
the rest of the Fn::Select -> get_attr changes are kept.
Change-Id: I0cdebf75d4752a35f547d4fbb81545ece3172405
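For reference, Fn::Select picks an element out of a list by index, which avoids giving the parameter a scalar default that merge.py would try to scale:

    {'Fn::Select': [0, ['controller0', 'controller1']]}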
This change renames a few NovaCompute resources so that the naming
is consistent with the controller resources naming choice.
Change-Id: I8c22867b208c5e16fd52bb3157f838f762b71470
With this we populate the hosts key (needed for /etc/hosts editing)
with the BlockStorage and SwiftStorage nodes too.
Change-Id: I6730ffe1e27d952d563c16a9480298fbef9f61fe
Overcloud bootstrap_nodeid is now specified by the parameter
BootstrapNodeResource, with default value controller0.
This avoids the need to use Fn::Select on the merge.py-built
list of controllers to specify the first controller.
Change-Id: Id9cfeab50b90ceeeae51ea0e35997b7495b28cc4
Partial-Blueprint: tripleo-juno-remove-mergepy
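A sketch of the parameter (description text assumed):

    BootstrapNodeResource:
      type: string
      default: controller0
      description: Name of the resource that acts as the bootstrap node.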
This change was generated and validated by running the following:
make hot clean all validate-all
This converts all templates to be valid HOT.
Fn::Select is not converted in this change but this will actually
work with heat_template_version 2013-05-23. Fn::Select is converted
manually in the next change in this series.
This change also sets the heat_template_version to 2014-10-16, which
includes the list_join intrinsic function used throughout these
templates.
Partial-Blueprint: tripleo-juno-remove-mergepy
Change-Id: Ib3cbb83f6ae94adb7b793ab1b662bd5c55cbb5b3
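The mechanical shape of the conversion, roughly:

    # before (cfn style):
    #   Resources:
    #     controller0:
    #       Type: OS::Nova::Server
    #       Properties:
    #         image: {Ref: controllerImage}
    # after (HOT):
    heat_template_version: 2014-10-16
    resources:
      controller0:
        type: OS::Nova::Server
        properties:
          image: {get_param: controllerImage}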
Currently there is very weak (no) ordering of StructuredDeployments during
heat stack creation (and, importantly, update) on the overcloud. This can
cause the deployment which sends the completion signal back to Heat to
happen before all others have completed, which in turn leads Heat to state
the stack is ready while ORC (os-refresh-config) is still configuring
services. The only workaround is to wait an unknown amount of time after
the heat stack completes before the system is usable.
This patch prevents the completion signal from being returned early, by
ensuring these are strictly ordered:
controller0Deploy
controller0Passthrough
controller0AllNodesDeploy
NovaCompute0Deploy
NovaCompute0Passthrough
NovaCompute0AllNodesDeploy
Change-Id: I0a549370b7aca55b1145de521ad51218428deaf5
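In template terms, that strict ordering is what depends_on expresses; a sketch using the names from the list above:

    controller0Passthrough:
      type: OS::Heat::StructuredDeployment
      depends_on: controller0Deploy
      # ...
    controller0AllNodesDeploy:
      type: OS::Heat::StructuredDeployment
      depends_on: controller0Passthrough
      # ...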
Inherit passthrough from nova-compute-instance.yaml, rather than
having an exact copy in overcloud-source.yaml.
Change-Id: I4f5a4a7be5835cb68755734aa72f8d9670cba0d4
Rename NovaCompute0Config to NovaCompute0Deploy as this makes
the structured deployment name match the one in
nova-compute-instance.yaml.
Change-Id: I79f66c09006aa7f7118af1f48e1f6f10b87daec6
Rename all occurrences of controller0AllNodesConfig to
controller0AllNodes as this is in line with compute node
deployments. Also, the current naming is confusing, as this is a
deployment step, not a configuration step.
Change-Id: I8efa3b6a64a099e1e8ee43009472152aed5f8ad8
Prior to this change our heat templates define one virtual IP, which all
the services are bound to.
We wish to be able to segregate these endpoints: some need to be
accessible to "the public"; some are only intended to be accessed within
the cloud; some are only for admin use.
This change adds a second VIP which we can use for binding only the
endpoints that are intended to be publicly accessible, leaving the older
VIP to be used for internal endpoints.
HAProxy is told to also listen on the new VIP so that we can expose
selected services via it, and keepalived is in charge of assigning the
VIP to control plane nodes.
This change has a proposed split of services between control-only and
control+public interfaces. Assuming our yaml parsers (in merge.py and
Heat) understand YAML anchors/aliases, and assuming I've got the syntax
right, this should get expanded so that all the control+public services
get their config defined from the same block without needing to repeat
it for each service. (AFAICT both merge.py and heat use pyyaml, which
does support aliases/anchors)
The default is left at binding to only the controlplane interface, so
that new services added to this map will default to being internal-only.
This patchset partially completes a spec which will one day live at
https://blueprints.launchpad.net/tripleo/+specs/tripleo-juno-virtual-public-ips
but for now can be seen in Id9addc65f0d2ed519ce4b3edbd561ed660a2786e
Implements: blueprint tripleo-juno-virtual-public-ips
Change-Id: I9649ee74ebaf62b6b929b28243a07c789a08867c
Co-Authored-By: Robert Collins <rbtcollins@hp.com>
Partial-Bug: #1325114
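The anchor/alias arrangement in miniature (key names and addresses assumed):

    control_only: &CONTROL_ONLY
      - ip: 192.0.2.21      # ControlVirtualIP
    control_public: &CONTROL_PUBLIC
      - ip: 192.0.2.21      # ControlVirtualIP
      - ip: 198.51.100.5    # PublicVirtualIP
    services:
      - name: mysql
        net_binds: *CONTROL_ONLY
      - name: horizon
        net_binds: *CONTROL_PUBLIC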
The current configuration of services is that if SSL is in use (signaled by
stunnel.connect_ip) we bind to 127.0.0.1 - which is great, but it breaks
simultaneous non-SSL access due to there being no pass-through stunnel
equivalent on all the nodes. As an interim measure, teach stunnel to
connect to the ctlplane address instead. We will need this flexibility in
future anyway to deal with mixed-mode configurations, but we don't yet
have an SSL-only configuration.
The change will permit SSL only by altering the Deployment object only - the
SSL config object should now be flexible enough to run in either mode (but as
yet on an all-one-way-or-the-other basis).
Change-Id: Ibac3dec1fe7b573029482fdd9ad2d2f6223fbce0
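An illustrative stunnel service entry after the change (addresses hypothetical):

    [horizon]
    ; SSL endpoint on the public side
    accept = 198.51.100.5:443
    ; connect to the ctlplane address rather than 127.0.0.1
    connect = 192.0.2.10:80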
This change adds into the overcloud-source template a structure
named horizon.caches, meant to define the Horizon caches backend.
It defaults to using memcached and provides a list of the
memcached nodes in horizon.caches.memcached.nodes
Related to blueprint tripleo-icehouse-ha-production-configuration
Change-Id: I728e05926f2de0e867fb8e8c74c63947da7d987a
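The structure as described (node addresses illustrative):

    horizon:
      caches:
        memcached:
          nodes:
            - 192.0.2.10
            - 192.0.2.11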
Previously glance.host was pointing to the local controller_host,
which would cause requests to glance from other services to fail
if the local glance daemon was unavailable.
Change-Id: Ifd4f4b12cd51e23313826288797cc00ba3cd1754
Previously keystone.host was pointing to the local controller_host,
which would have caused all local services to become unavailable
if keystone went down.
Closes-Bug: #1339986
Change-Id: I9b73595d3e0ae6e872aa6b7e0f93354ff04f2956