|
Prior to this change our Heat templates define one virtual IP, to which
all the services are bound.
We wish to be able to segregate these endpoints: some need to be
accessible to "the public"; some are only intended to be accessed within
the cloud; some are only for admin use.
This change adds a second VIP which we can use to bind only the
endpoints that are intended to be publicly accessible, leaving the older
VIP for internal endpoints.
HAProxy is told to also listen on the new VIP so that we can expose selected
services through it, and keepalived is in charge of assigning the VIP to
control plane nodes.
This change includes a proposed split of services between control-only and
control+public interfaces. Assuming our YAML parsers (in merge.py and
Heat) understand YAML anchors/aliases, and assuming I've got the syntax
right, this should be expanded so that all the control+public services
get their config defined from the same block without needing to repeat
it for each service. (AFAICT both merge.py and Heat use PyYAML, which
does support aliases/anchors.)
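A rough sketch of the intended anchor/alias usage (illustrative only; the service names, ports, and addresses are examples, not the actual template contents):
    haproxy:
      services:
        - name: horizon
          port: 80
          net_binds: &public_binds     # anchor: bind to both the control and public VIPs
            - ip: 192.0.2.21           # control plane VIP
            - ip: 203.0.113.21         # public VIP
        - name: neutron
          port: 9696
          net_binds: *public_binds     # alias: reuse the same block without repeating it
        - name: keystone_admin
          port: 35357                  # no alias: stays internal-only on the control plane VIP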
The default is left at binding to only the controlplane interface, so
that new services added to this map will default to being internal-only.
This patchset partially completes a spec which will one day live at
https://blueprints.launchpad.net/tripleo/+specs/tripleo-juno-virtual-public-ips
but for now can be seen in Id9addc65f0d2ed519ce4b3edbd561ed660a2786e
Implements: blueprint tripleo-juno-virtual-public-ips
Change-Id: I9649ee74ebaf62b6b929b28243a07c789a08867c
Co-Authored-By: Robert Collins <rbtcollins@hp.com>
Partial-Bug: #1325114
|
This change adds a structure named horizon.caches to the
overcloud-source template, meant to define the Horizon cache backend.
It defaults to using memcached and provides a list of the
memcached nodes in horizon.caches.memcached.nodes.
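A minimal sketch of the structure (the node names are illustrative):
    horizon:
      caches:
        memcached:
          nodes:
            - overcloud-controller0
            - overcloud-controller1
            - overcloud-controller2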
Related to blueprint tripleo-icehouse-ha-production-configuration
Change-Id: I728e05926f2de0e867fb8e8c74c63947da7d987a
|
Previously glance.host was pointing to the local controller_host,
which would cause requests to Glance from other services to fail
if the local Glance daemon was unavailable.
Change-Id: Ifd4f4b12cd51e23313826288797cc00ba3cd1754
|
Previously keystone.host was pointing to the local controller_host,
which would have caused all local services to become unavailable
if Keystone went down.
Closes-Bug: #1339986
Change-Id: I9b73595d3e0ae6e872aa6b7e0f93354ff04f2956
|
Change keepalived.keepalive_interface so that it uses the actual
ControlVirtualInterface (bridge) for VRRP rather than the bridged
interface (NeutronPublicInterface).
Fixes the issue which caused keepalived to bring up the VIP on
all control nodes.
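The resulting metadata looks something like this (a sketch; only keepalive_interface is defined by this change, and the get_param reference is an assumption):
    keepalived:
      keepalive_interface: {get_param: ControlVirtualInterface}   # e.g. br-ex, rather than NeutronPublicInterface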
Change-Id: Ifc484d6a6086d9872210aa576f21d326f60b7d35
|
Pacemaker will be used to manage the ceilometer central agent, so
we need basic metadata to set up corosync and pacemaker.
Related to: Ifa83d62c2132bcdcb40d0b7c80ce3adadc0b5587
Change-Id: I44909005d9bc653c3e7c2de1c12fe4ffecf6bede
|
Without this, when there are multiple admin networks (e.g. a VLAN)
Nova will refuse to guess and we'll fail to deploy.
Change-Id: Id1dca43ef287fda2adcfdf5b5d30145b055dbe76
|
Previously the completion signal was just based on the first run of
os-refresh-config. But in this case, we actually need to wait until it
runs successfully with all hosts computed. That way we can know that
services aren't in an unstable state while that configuration rolls out.
Change-Id: I3b965c19c92b366df3069cb8e1daffa18252c884
Closes-Bug: #1337230
|
This change ensures that:
* rabbit.nodes is a list of all control nodes
* rabbit_hosts in OpenStack config files points to all nodes in the
RabbitMQ cluster
* overcloud control nodes are joined into a cluster
This works for both single and multiple control nodes and is needed
for scaling out control nodes.
The rabbit.nodes property is generated in much the same way as the list
of all hosts, so it uses the same StructuredConfig block. This block (and
a couple of references to it) is renamed to allNodesConfig to make it more general.
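A sketch of the resulting block (illustrative; the hostnames are examples):
    allNodesConfig:
      type: OS::Heat::StructuredConfig
      properties:
        config:
          rabbit:
            nodes:
              - overcloud-controller0
              - overcloud-controller1
              - overcloud-controller2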
Related to blueprint tripleo-icehouse-ha-production-configuration
Change-Id: Ice1a34ba7a52c41c1bb0c63350438971c651e7b6
|
The data is then fed in through separate deployments. This reduces the exponential
growth that comes from calculating the entire list for every server.
Change-Id: Ib1187eabeb91b46e29ddcf5065056e43a69bb2a0
|
Adding nodes and cluster_name properties for mysql in order to enable
Galera clustering.
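Roughly, the new properties look like this (a sketch; the values are illustrative assumptions):
    mysql:
      cluster_name: tripleo-galera      # any cluster-wide name; this value is an assumption
      nodes:
        - overcloud-controller0
        - overcloud-controller1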
Change-Id: I522b7324460469c59f49983ca3becd9ea914cdc0
|
Added several sections that are required for HAProxy configuration:
1. haproxy.services - standard OpenStack service ports
2. haproxy.nodes - OpenStack controllers
3. haproxy.net_binds - virtual IPs, which will also act as the public endpoint
The controller_nodes input scales with OVERCLOUD_CONTROLSCALE > 1.
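Put together, the new sections look roughly like this (a sketch; the node names, addresses, and service list are examples):
    haproxy:
      nodes:
        - name: overcloud-controller0
          ip: 192.0.2.11
        - name: overcloud-controller1
          ip: 192.0.2.12
      net_binds:
        - ip: 192.0.2.21        # virtual IP, also serving as the public endpoint
      services:
        - name: keystone
          port: 5000
        - name: glance_api
          port: 9292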
Related tripleo-image-elements change: I641fa90c4a34c26e5699cf7f5a6f9643792c7b16
Implements blueprint tripleo-haproxy-configuration
Related to blueprint tripleo-icehouse-ha-production-configuration
Change-Id: I9c70812ee1b3e8c8c072705fc5123da88ecc8f9f
|
Since the wrong one is a bad idea :)
Change-Id: I7ed40078f487459dee9055ef41f10a9b60a0e674
|
This will allow us to distribute identical keys/certs to all
control nodes in HA mode.
CAKey was removed because it's not required by Keystone.
Change-Id: I187492d5fac448e57f8cd687f9cb751520df5921
|
This change makes the Glance protocol and port configurable via the
Heat template. Presently the port is hard-coded in the element's
nova.conf file, and the protocol is assumed to be the default (http).
This change will allow glance_api_servers
to be set (in nova.conf) using the constituent parts:
glance_protocol://glance_host:glance_port
The change to nova.conf to read this value is:
Idccc0d60c9f6b17a853c6de1bbea64bfc7e028b2
The default port value is set to the nova default (9292), which is
currently hard-coded in the element's nova.conf file.
The default protocol value is set to the nova default (http).
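The new parameters would look roughly like this (a sketch; the parameter names are assumptions based on the description above):
    parameters:
      GlanceProtocol:
        type: string
        default: http
        description: Protocol used to reach the Glance API (assembled into glance_api_servers).
      GlancePort:
        type: string
        default: '9292'
        description: Port on which the Glance API listens.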
Change-Id: I3c7218292797c62c36e2aaab4f325bf053ef140b
|
The VIP should be used when pointing one OpenStack service at
another in config files (the most typical case is
setting Keystone's host IP, but the Glance and Neutron
hosts also need to be set in Nova's config file).
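For example (a sketch; the attribute path assumes the VIP is an OS::Neutron::Port resource named ControlVirtualIP):
    keystone:
      host: {get_attr: [ControlVirtualIP, fixed_ips, 0, ip_address]}
    glance:
      host: {get_attr: [ControlVirtualIP, fixed_ips, 0, ip_address]}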
Change-Id: Id91e6ef2747981f17a43afd279d4eebaad01fe4d
|
We have had a change of opinion and are moving the bootstrap_host properties
out of boot-stack in order to prevent mysql / rabbit from depending on
boot-stack.
Change-Id: I85399019c5fc448e98362ef832988abc8d9d459d
|
These provide a single, consistent interface for checking whether
a given node is or is not the bootstrap node, for database
initialisation etc.
Change-Id: I7c5a09cb3477b61c4050e4a47a680ffc9aee97d8
|
This will allow us to distribute identical keys/certs to all
control nodes in HA mode.
Change-Id: Ie84f3897717c02e196a405746865996c0a929977
|
This change is required to resolve a scaling issue for
OVERCLOUD_CONTROLSCALE > 1.
Basically, the change affects all the places where endpoints
were configured to use the controller0 ctlplane IP address.
Change-Id: I76eb9d2b81d3ef5e9fae408f2432515f4de13e12
|
Add SSLCACertificate to the overcloud YAML.
This allows a CA certificate to be specified in cases where the certificate
does not come from a CA in the system bundle.
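A sketch of the parameter (illustrative; only the parameter name comes from this change):
    parameters:
      SSLCACertificate:
        type: string
        default: ''
        description: CA certificate to install when the SSL certificate is not signed by a CA in the system bundle.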
Partially implements: blueprint tripleo-ssl-overcloud
Full set of blueprint changes:
https://review.openstack.org/#/c/85098
https://review.openstack.org/#/c/85099
https://review.openstack.org/#/c/85100
Change-Id: I67d7c1362df323762023be5c74fbe75b1583570c
|
Added a ControlVirtualIP resource of type OS::Neutron::Port.
Added ControlVirtualInterface (by default br-ex).
To specify the IP address to use as the ControlVirtualIP,
or for any other custom needs, you can provide:
-P 'ControlFixedIPs=[{"ip_address": "192.0.2.251"}]'
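A sketch of the resource (illustrative; the network_id reference is an assumption, the other names come from this change):
    ControlVirtualIP:
      type: OS::Neutron::Port
      properties:
        network_id: {get_param: NeutronControlPlaneID}   # hypothetical parameter holding the ctlplane network UUID
        fixed_ips: {get_param: ControlFixedIPs}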
Related to blueprint tripleo-icehouse-ha-production-configuration
Change-Id: Ie82750ac1537c80311a869880f636bda59ca5c58
|
Choosing 100MB here is not a production default. We also don't need two
places with the default value set. The closer a default is to the actual
usage of it, the better, so we'll set 0 here, which will defer to the
default in the element.
Change-Id: I1b41b604286245c2fb83249778db835253c02fc5
|
Creation of an OS::Neutron::Port requires the network_id parameter.
OS::Neutron::Port will be used for VIP creation.
Creating a port for a network by name, e.g.:
neutron port-create ctlplane
works only with the neutron CLI.
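For illustration, the network would instead be passed in by UUID through a parameter, something like this (the parameter name is a hypothetical example):
    parameters:
      NeutronControlPlaneID:
        type: string
        description: UUID of the ctlplane network; Heat needs the UUID because ports cannot be created by network name.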
Change-Id: Ia8bd6f700a4897efd277fd67189d2e04ad716b87
|
This will indicate to os-collect-config that this config
resource represents os-apply-config configuration data,
so it can only write out top-level config files for
os-apply-config data (or Heat::Ungrouped for backwards
compatibility).
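Concretely, the config resource gains a group property, roughly like this (a sketch; the surrounding resource name and config values are assumptions):
    controllerConfig:
      type: OS::Heat::StructuredConfig
      properties:
        group: os-apply-config      # marks this as os-apply-config configuration data
        config:
          keystone:
            host: 192.0.2.21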
Change-Id: I3552fdd6df8106ab83cfd17d5f4b137cf33fbc36
Related-Bug: #1299109
|
Being able to figure out the hypervisors from the control nodes seems
useful; equally, all the hypervisors should know about all the
control nodes (at least until we have virtual IPs all in place); and
lastly, the control plane nodes need to know each other by hostname.
Change-Id: I92877501c58d8c210e7b2c94935e107355271fb9
|
- Undercloud Ceilometer has to have access to the SNMPd credentials
so it can poll the Overcloud nodes.
- On every Overcloud node, we need to set the same credentials
in snmpd.conf.
Change-Id: Icf7c0c1772b6380b7136108e61c15cafe17274ba
|
This was hard-coded to 5 GB, which is useless for anything other
than Tempest runs and smoke testing.
block-storage-nfs.yaml has intentionally not been changed, since
volume_size_mb is not used in that setup. Cleaning up that code will
be done separately.
Change-Id: I476b906a8d439d3e6643dd0c214965c5862418e8
|
Adds a new parameter, NeutronDnsmasqOptions, to the overcloud template.
This allows dnsmasq options to be set for the Neutron DHCP agent, which
will let us configure the MTU to be 1400 for tenant instances on the
overcloud. This should help with poor network performance and VMs that
are just plain unreachable via SSH due to the GRE tunnel overhead.
The default here has been set to:
dhcp-option-force=26,1400
This is the recommended way to configure OpenStack with the Open vSwitch
plugin per:
http://docs.openstack.org/admin-guide-cloud/content/openvswitch_plugin.html
All the documentation I can find on the web (openstack-dev,
ask.openstack.org, etc.) recommends applying this setting. Others have
reported slow VM performance as well, and this resolves that issue
(apparently anyway... we'd need to test).
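The parameter looks roughly like this (the name and default come from this change; the rest is an illustrative sketch):
    parameters:
      NeutronDnsmasqOptions:
        type: string
        default: 'dhcp-option-force=26,1400'
        description: Options passed to the dnsmasq instance run by the Neutron DHCP agent (here forcing a 1400-byte MTU).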
Change-Id: If24326045987b5a484ba2f71f591092987966536
Partial-Bug: #1270646
|
This provides a means for users to pass configuration through to the
machines they are deploying without us modelling that.
Change-Id: I7134eb0c6be2d5cb1795b2f03cfba4afb69dc837
blueprint: passthrough-config
|
This migrates the overcloud to using OS::Heat::StructuredConfig and
OS::Heat::StructuredDeployment. With those tools, we can decouple
servers from software configuration and begin to deprecate features in
tripleo_heat_merge.
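The pattern looks roughly like this (a sketch; resource names, inputs, and values are assumptions):
    controllerConfig:
      type: OS::Heat::StructuredConfig
      properties:
        config:
          keystone:
            host: {get_input: keystone_host}
    controllerDeployment:
      type: OS::Heat::StructuredDeployment
      properties:
        config: {get_resource: controllerConfig}
        server: {get_resource: controller0}
        input_values:
          keystone_host: 192.0.2.21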
Change-Id: Ice85f0711e90d0fabf1d1bc4698201c4d6758508
|
Updates all references for notCompute and notcompute
to use 'controller' instead.
Change-Id: I70ef83f35064ab388bdc7e1a6da62b6585580010
Partial-bug: #1300324
|