Currently you always have to pass the ctlplane ID because we're still
using the deprecated network_id property for the neutron port resource.
Since Juno, heat has supported a "network" property, which is used
elsewhere, e.g. in the nested port stacks, so switch to using it in the
overcloud-without-mergepy template, and change the default from an empty
string to the more useful "ctlplane".
This means the stack create should just work on commonly documented
deployments without requiring any parameter.
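As a rough sketch of the change (resource and parameter names here are
illustrative, not a quote of the template):

  NeutronControlPlaneID:
    default: ctlplane    # was: ''
    description: Neutron ID or name for the ctlplane network
    type: string

  some_port:
    type: OS::Neutron::Port
    properties:
      network: {get_param: NeutronControlPlaneID}  # "network" accepts a name or ID; "network_id" required an ID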
Change-Id: Ifcea36d26b795c5e8b80accd8112e23b254127be
Currently there's a vague list of services in the description, so instead
describe the roles supported for deployment, and encode in the parameter
constraints the minimum allowed deployment of one Controller and one
Compute node with zero Storage nodes.
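Such a minimum can be expressed with a range constraint, e.g. (a minimal
sketch; the parameter name is illustrative):

  ControllerCount:
    type: number
    default: 1
    constraints:
      - range: {min: 1}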
Change-Id: Ib4917843f3e4770f0260db72719ed6af0ee8dc13
Currently only Glance and Heat have their virtual IP passed to the
controller directly.
This commit adds the same feature for:
* Ceilometer
* Cinder
* Nova
* Swift
Change-Id: I295d15d7a0aa33175a5530e3b155b0c61983b6ae
Together with [1], this change makes it possible to parameterize the file
descriptor limit for RabbitMQ, for both the systemd startup script
and the Pacemaker resource agent.
1. https://github.com/puppetlabs/puppetlabs-rabbitmq/commit/20325325b977c508b151ef8036107dcfefdf990b
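For example, assuming the new parameter is named RabbitFDLimit, an
environment file could raise the limit like so:

  parameters:
    RabbitFDLimit: 65536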
Closes-Bug: 1474586
Change-Id: I62d31e483641ccb5cf489df81146ecb31d0c423f
This commit allows a deployer to specify where to send haproxy's
logs. It is backward compatible with what is already in place and by
default sends the logs to the UNIX socket /dev/log.
The value specified here will be written to the haproxy.cfg file with
the following behavior:
HAProxySyslogAddress: 127.0.0.1 -> log 127.0.0.1 local0
HAProxySyslogAddress: ::1 -> log ::1 local0
HAProxySyslogAddress: /dev/log -> log /dev/log local0 (default)
Change-Id: I46c489a1f424e2219d129f332e64c64019aef850
Depends-On: If7f7c8154e544e5d8a49f79f642e1ad01644a66d
This patch allows the case where Ceph is not hosting persistent
storage (volumes) but only ephemeral storage (VMs).
Previously, ephemeral storage was allowed on Ceph only when
persistent storage was also using Ceph.
Change-Id: I03b775326e4424de413452f4453d4d88de0083bc
Change-Id: Ieb27729c6b33ffc849d07200ec0d42508214956e
Closes-Bug: #1399793
If horizon is running in production (DEBUG is False), it will answer
only to the IPs/hostnames specified in the ALLOWED_HOSTS variable in the
local_settings.py configuration file.
The puppet-horizon module offers the means to customize that;
tripleo-heat-templates was missing the link between the top-level
parameter and the puppet parameter, hence this commit.
More info:
* https://docs.djangoproject.com/en/dev/ref/settings/#allowed-hosts
* https://github.com/openstack/puppet-horizon/blob/master/templates/local_settings.py.erb#L14-L24
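For example, assuming the top-level parameter is named
HorizonAllowedHosts, an environment file could set:

  parameters:
    HorizonAllowedHosts:
      - horizon.example.com
      - 192.0.2.10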
Change-Id: I5faede8b74a0318e15baa761dc502b95b051ae0d
Previously the Registry service was reached using the local IP.
Change-Id: I8f2b7275cd39d8a5358d8ce69f4f7e5bc7758b62
Make the core_plugin, type_drivers and service_plugins parameters in
neutron configurable through heat.
Also change the type_drivers order to "vxlan,vlan,flat,gre".
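An environment file could then override the defaults, e.g. (parameter
names assumed to mirror the neutron option names):

  parameters:
    NeutronCorePlugin: neutron.plugins.ml2.plugin.Ml2Plugin
    NeutronTypeDrivers: vxlan,vlan,flat,gre
    NeutronServicePlugins: router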
Change-Id: Iba895ed5897bdaf7bb772ffc063c424abb6e1638
Adds hook to enable additional "AllNodes" config to be performed prior
to applying puppet - this is useful when you need to build
configuration data which requires knowledge of all nodes in a cluster,
or of the entire deployment.
As an example, there is a sample config template which collects the
hostname and MAC addresses for all nodes in the deployment, then writes
the data to all Controller nodes. Something similar to this may be
required to enable creation of the nexus_config in
https://review.openstack.org/#/c/198754/
There's also another, simpler, example which shows how you could share
the output of an OS::Heat::RandomString between nodes.
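To use the hook, point the resource_registry at your own template, e.g.
(assuming the hook is registered as OS::TripleO::AllNodesExtraConfig):

  resource_registry:
    OS::TripleO::AllNodesExtraConfig: /path/to/my_all_nodes_config.yaml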
Change-Id: I8342a238f50142d8c7426f2b96f4ef1635775509
Moves the default KeystoneAdminApiNetwork setting to the ctlplane
so that the undercloud will always have easy access for
configuring endpoints.
Change-Id: I1f6aba62b98820b678cce1ca16e72a0c3d045720
This patch adds explicit nested stack parameters to
help manage use of the Keystone Admin API vs. the
Keystone Public API.
We also add a new output parameter specifically for the Keystone admin
API VIP. This can be useful when configuring keystone endpoints
with network isolation.
Change-Id: I2bd3e61570151e2faeee14ee09b03ad0b3208cc1
When using network isolation you might want to selectively
move one of the services back to the default ctlplane network
by simply using the ServiceNetMap parameter. This patch
adds ctlplane to the output parameters for both
the net_ip_map and net_ip_list_map nested stacks so that
this is possible.
As part of this patch we also split out the NetIpSubnetMap
into its own unique nested stack so that the Heat input
parameters for this stack are more clearly named.
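For example, a single service could be pinned back to ctlplane from an
environment file (the map key is illustrative, and since ServiceNetMap
is a map, the full mapping may need to be supplied depending on how
defaults are merged):

  parameters:
    ServiceNetMap:
      MongoDbNetwork: ctlplane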
Change-Id: Iaa2dcaebeac896404e87ec0c635688b2a59a9e0f
VXLAN has better performance (20-25% better), and
NICs with VXLAN offload are more common.
Change-Id: If57c79a1309ae178b3e82d54bb101dde584c86cc
Related: rhbz#1244864
This change enables Keystone notifications and adds two parameters
to control the notification driver and format.
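For example (parameter names assumed):

  parameters:
    KeystoneNotificationDriver: ['messaging']
    KeystoneNotificationFormat: basic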
Change-Id: I23ac3c46ee9eb49523d3b8dab027ef21fc6e42df
This patch adds support for using an externally managed Ceph
cluster with the TripleO Heat templates.
For an externally managed Ceph cluster we initially
only deploy the Ceph client tools, install the 'openstack' user
keyring, and generate the ceph.conf. This matches what we do
for managed Ceph installations and is a good first step.
No other Ceph related services are installed or managed.
To enable use of an external Ceph cluster, simply add
the custom Heat environment file environments/puppet-ceph-external.yaml
to your heat stack create/update command and make sure to
set the required CephClientKey, CephExternalMonHost, and CephClusterFSID
variables.
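A minimal sketch of such an environment file (all values are
placeholders):

  parameters:
    CephClusterFSID: 4b5c8c0a-ff60-454b-a1b4-9747aa737d19
    CephClientKey: AQC+vYNXgDAgAhAAc8UoYt+OTz5uhV7ItLdwUw==
    CephExternalMonHost: 172.16.1.7,172.16.1.8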
Change-Id: I0a8b213ce9dfa2fc4e62ae1e7631466e5179fc2b
This patch wires in a new "all nodes" validation resource
that can be used to add validations that occur early on
during the deployment process. This occurs after the nodes
have been brought online and the initial networks
have been configured but before any "post" (puppet, etc.)
sort of configuration has been executed.
An initial validation script has been added to ping test network IPs
on each network. When using network isolation, this will ensure
network connectivity (vlans, etc.) is working on each
node; if not, the heat stack will fail early, allowing
time to fix the network connections and retry the
stack creation via an update.
Change-Id: I63cf95b27e8ad2aed48718cf84df5f324780e597
Co-Authored-By: Ian Main <imain@redhat.com>
Co-Authored-By: Ryan Hallisey <rhallise@redhat.com>
This change brings PublicVirtualIP in line with the rest of the
VIPs in how it is created. This allows the network where
PublicVirtualIP is instantiated to be on ctlplane when network
isolation is not used, and on the external network when network
isolation is used. This change removes the PublicVirtualNetwork
parameter, since it is no longer used. In order to continue to
support the PublicVirtualFixedIPs parameter, which is used to
provide a specific IP for the PublicVirtualIP, the FixedIP
parameter was added to ctlplane_vip.yaml, vip.yaml, and
noop.yaml. The value of PublicVirtualIP is passed to FixedIP
in the VIP templates. This change also moves the default
network for keystone public api to the external net (which will
fallback to ctlplane if network isolation isn't used).
Change-Id: I3f5d35cbe55d3a148e95cf49dfbaad4874df960b
Adds support for global (ExtraConfig) and role-specific
(CephStorageExtraConfig) hiera overrides, similar to those added
for the Controller, NovaCompute, BlockStorage, ObjectStorage roles.
Change-Id: Idbe73b86a772491cd3c55ba69b5a95cc291d2598
Adds support for global (ExtraConfig) and role-specific
(ObjectStorageExtraConfig) hiera overrides, similar to those added
for the Controller, NovaCompute and BlockStorage roles.
Change-Id: I7dd0d8003017e2738366983cb5d8e08b3f3fa334
Adds support for global (ExtraConfig) and role-specific
(BlockStorageExtraConfig) hiera overrides, similar to those added
for the Controller and NovaCompute roles.
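For example, role-specific hieradata can be injected from an
environment file (the hiera key here is just illustrative):

  parameters:
    BlockStorageExtraConfig:
      cinder::debug: true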
Change-Id: Iaf9665b53407e6a657f56d6516469f2c88bafbdd
The default limit of 1024 connections can easily be reached with
modern hardware, see:
https://bugzilla.redhat.com/show_bug.cgi?id=1240824
Change-Id: I194a0dd725907350ca16ea3c41f3ed4f68a11bcf
Wires in the ControllerExtraConfig and ExtraConfig parameters so
that they may be used to specify overrides of the default hieradata.
Note if this is used to override values specified via parameters,
rather than hard-coded values in puppet/hieradata, caution should
be used, as the overridden values will always take precedence
regardless of the parameter input, unless the parameter is provided
directly to the Deployment resource applying the manifest (i.e.
not the pattern currently employed in most of t-h-t).
Also note that ControllerExtraConfig takes precedence over the
deployment-wide ExtraConfig.
For example, here's how you would pass a value which disables the
heat-api-cfn service on all controllers. This would be put into an
environment file, then passed to the heat stack-create via an extra
-e option:
  parameters:
    controllerExtraConfig:
      heat::api_cfn::enabled: false
Note the parameter capitalization is different in the top-level
overcloud-without-mergepy template for some reason.
Change-Id: I6d6e3e78460308134d95c01892bb242aba70e9ca
This adds the NeutronTunnelIdRanges and NeutronVniRanges parameters
which govern the GRE or VXLAN tunnel IDs (respectively) that are to
be made available for overcloud tenant networks.
These both default to "1:1000", to retain the current behaviour.
They are propagated to the hiera data for puppet deploys and there
is a separate change to support passing these into the config via
the neutron tripleo-image-element at
https://review.openstack.org/#/c/199592/
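An environment file can then widen the ranges, e.g.:

  parameters:
    NeutronTunnelIdRanges: ['1:1000', '2000:3000']
    NeutronVniRanges: ['10000:20000']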
Change-Id: I967a8cae218a31e888abc438e9de5756ae627adb
Related-Bug: https://bugzilla.redhat.com/show_bug.cgi?id=1240631
By default MongoDB enables a journaling system that prevents loss of
data in case of an unexpected shut-down. When journaling is enabled,
MongoDB will create the journal files before actually starting the
daemon[1].
The journaling feature is useful in production environments, but not
really on a CI-like system, where we only want to make sure MongoDB is
set up correctly and running; hence, here we allow a user to
enable/disable MongoDB journaling.
[1] http://docs.mongodb.org/manual/core/journaling/
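For a CI-style deployment, journaling could then be disabled via an
environment file (assuming the parameter is named MongoDbNoJournal):

  parameters:
    MongoDbNoJournal: true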
Change-Id: I0e4e65af9f650c10fdf5155ff709b4eb984cf4e1
Closes-bug: #1468246
The number of connections created to the database depends on the
number of running processes, which is a factor of both the node
count and the core count. We make it configurable so it can be
increased when needed.
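For example (parameter name assumed):

  parameters:
    MysqlMaxConnections: 4096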
Change-Id: I41d511bde95d0942706bf7c28cd913498ea165fb
Currently for both puppet and image-elements based deploys we set
the dhcp_agents_per_network in neutron.conf to 2 and there is no
control over that number (in the hieradata for the former and the
image element for the latter). This change adds the
NeutronDhcpAgentsPerNetwork parameter and also changes the default
to 3 when not explicitly set.
In the puppet case, propagate this parameter in the hieradata for
the neutron class; in the non-puppet case, expose a new item in
the neutron config to be consumed by the neutron image element
(that change will point here).
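An environment file can override the new default, e.g. to keep the old
behaviour of two agents per network:

  parameters:
    NeutronDhcpAgentsPerNetwork: 2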
Change-Id: Id97c7796db7231b636f2001e28412452cf89562b
The *HostnameResolveNetwork parameters define the network against
which the hostnames in /etc/hosts should be resolved. The default
is 'internal_api' for all roles except CephStorage, which uses
'storage', as CephStorage nodes have no connectivity to 'internal_api'.
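The defaults can be overridden per role following the same naming
pattern, e.g.:

  parameters:
    CephStorageHostnameResolveNetwork: storage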
Closes-Bug: 1471179
Change-Id: Ia8971f8a63016966236e7975ac2d97921a314255
This allows specifying particular nodes to remove when scaling down
the number of nodes in a resource group.
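A hedged sketch, assuming the change exposes Heat's removal_policies
through per-role parameters such as ComputeRemovalPolicies:

  parameters:
    ComputeRemovalPolicies: [{'resource_list': ['1', '3']}]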
Change-Id: Idc3682ed430f351d533b990b44e8038866434e42
Seeding of overcloud keystone endpoints is currently done via a script
that is external to the overcloud heat stack. Previously the script
had no way to determine which IP addresses it should
use for internal service endpoints. This patch adds those IP addresses
into the stack outputs so that the script can properly configure
internal endpoints.
Change-Id: I9ae4fc4413a79d6b7e2dce1571fd7083c23348ca