Currently you always have to pass the ctlplane ID because we're still
using the deprecated network_id property for the neutron port resource.
Since Juno, heat has supported a "network" property, which is used
elsewhere, e.g. in the nested port stacks, so switch to using it in the
overcloud-without-mergepy template, and flip the default from an empty
string to the more useful "ctlplane".
This means stack creation should just work on commonly documented
deployments without requiring any parameters.
Change-Id: Ifcea36d26b795c5e8b80accd8112e23b254127be
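A minimal sketch of the switch, assuming a port definition like the ones in the nested port stacks (the resource and parameter names here are illustrative):

```yaml
heat_template_version: 2015-04-30

parameters:
  ControlPlaneNetwork:
    type: string
    default: ctlplane   # the new default; an empty string previously forced callers to pass an ID
    description: Neutron network to create the port on

resources:
  ControlPlanePort:
    type: OS::Neutron::Port
    properties:
      # "network" (supported since Juno) accepts a name or an ID and
      # replaces the deprecated "network_id" property
      network: {get_param: ControlPlaneNetwork}
```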
Currently there's a vague list of services in the description, so instead
describe the roles supported for deployment, and encode in the parameter
constraints the minimum allowed deployment of one Controller and one
Compute node, with zero Storage nodes.
Change-Id: Ib4917843f3e4770f0260db72719ed6af0ee8dc13
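A hedged sketch of how such minimums can be encoded in the template's parameter constraints (the parameter names follow the overcloud templates; the defaults are illustrative):

```yaml
parameters:
  ControllerCount:
    type: number
    default: 1
    constraints:
      - range: {min: 1}   # at least one Controller is required
  BlockStorageCount:
    type: number
    default: 0
    constraints:
      - range: {min: 0}   # Storage roles may be scaled to zero
```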
In nova.conf, set cinder/catalog_info to 'volumev2:cinderv2:internalURL'
instead of 'volumev2:cinderv2:publicURL', so that Nova reaches the volume
API through the internal Cinder endpoint on the internal network.
Depends-On: Id9e579ca31364d5207d0c1b892d0f7aa7f20f7a8
Change-Id: Ia34f0fe59f662c3ad29ca0178c01ef1570759d57
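Expressed as hieradata, assuming puppet-nova exposes this option as nova::cinder_catalog_info (the hiera key is an assumption; the rendered nova.conf value comes from the commit):

```yaml
# renders into nova.conf as:
#   [cinder]
#   catalog_info = volumev2:cinderv2:internalURL
nova::cinder_catalog_info: 'volumev2:cinderv2:internalURL'
```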
Moves the vhost_params out of the manifest and into static hieradata;
also removes the unneeded server_alias parameter, as it matched the
vhost's servername anyway.
Change-Id: I4b5971b23ef3be9529a59075fa93ccc64af75b9c
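A sketch of the equivalent static hieradata, assuming the values go through puppet-horizon's vhost_extra_params (the key and values are assumptions, not the verbatim change):

```yaml
# extra settings fed to the horizon apache vhost definition
horizon::vhost_extra_params:
  add_listen: false
  priority: 10
```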
Change-Id: Ia2079fc3e350cc677811ebb970cd2b306d6e7040
Currently only Glance and Heat have their virtual IPs passed to the
controller directly.
This commit adds the same for:
* Ceilometer
* Cinder
* Nova
* Swift
Change-Id: I295d15d7a0aa33175a5530e3b155b0c61983b6ae
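A hedged sketch of the wiring in the overcloud template, mirroring the existing Glance/Heat plumbing (the property names and the VIP resource are assumptions):

```yaml
  Controller:
    type: OS::TripleO::Controller
    properties:
      # each service VIP is resolved from the control virtual IP
      # and handed to the role, as was already done for Glance/Heat
      CeilometerApiVirtualIP: {get_attr: [ControlVirtualIP, fixed_ips, 0, ip_address]}
      CinderApiVirtualIP: {get_attr: [ControlVirtualIP, fixed_ips, 0, ip_address]}
      NovaApiVirtualIP: {get_attr: [ControlVirtualIP, fixed_ips, 0, ip_address]}
      SwiftProxyVirtualIP: {get_attr: [ControlVirtualIP, fixed_ips, 0, ip_address]}
```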
Together with [1], this change makes it possible to parameterize the
RabbitMQ file descriptor limit for both the systemd startup script
and the Pacemaker resource agent.
1. https://github.com/puppetlabs/puppetlabs-rabbitmq/commit/20325325b977c508b151ef8036107dcfefdf990b
Closes-Bug: 1474586
Change-Id: I62d31e483641ccb5cf489df81146ecb31d0c423f
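The resulting knob, sketched as a heat parameter (the name follows the tripleo fix for this bug; the default is illustrative):

```yaml
parameters:
  RabbitFDLimit:
    type: number
    default: 16384   # illustrative; passed to both systemd and the resource agent
    description: Configures the RabbitMQ file descriptor limit
```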
This commit allows a deployer to specify where haproxy sends its logs.
It is backward compatible with what was already in place: by default,
logs are sent to the UNIX socket /dev/log.
The value specified is written into the haproxy.cfg file with the
following behavior:
HAProxySyslogAddress: 127.0.0.1 -> log 127.0.0.1 local0
HAProxySyslogAddress: ::1 -> log ::1 local0
HAProxySyslogAddress: /dev/log -> log /dev/log local0 (default)
Change-Id: I46c489a1f424e2219d129f332e64c64019aef850
Depends-On: If7f7c8154e544e5d8a49f79f642e1ad01644a66d
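A minimal environment-file sketch using the new parameter (the parameter name comes from this commit; the address is an example):

```yaml
# sends haproxy logs to a local syslog listener instead of the
# default /dev/log socket, yielding "log 127.0.0.1 local0"
parameter_defaults:
  HAProxySyslogAddress: 127.0.0.1
```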
This patch allows the case where Ceph is not hosting persistent storage
(volumes) but only ephemeral storage (VMs).
Previously, ephemeral storage on Ceph was only allowed when persistent
storage was also on Ceph.
Change-Id: I03b775326e4424de413452f4453d4d88de0083bc
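An environment sketch of this split, using the RBD backend switches from the tripleo templates (names believed correct for this era; verify against your tree):

```yaml
parameter_defaults:
  NovaEnableRbdBackend: true     # ephemeral storage (VM disks) on Ceph
  CinderEnableRbdBackend: false  # volumes stay on the default Cinder backend
```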
Change-Id: Ieb27729c6b33ffc849d07200ec0d42508214956e
Closes-Bug: #1399793
If horizon is running in production (DEBUG is False), it only answers
requests for the IPs/hostnames specified in the ALLOWED_HOSTS variable in
the local_settings.py configuration file.
The puppet-horizon module offers a way to customize that;
tripleo-heat-templates was missing the link between a top-level
parameter and the puppet parameter, hence this commit.
More info:
* https://docs.djangoproject.com/en/dev/ref/settings/#allowed-hosts
* https://github.com/openstack/puppet-horizon/blob/master/templates/local_settings.py.erb#L14-L24
Change-Id: I5faede8b74a0318e15baa761dc502b95b051ae0d
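A sketch of the new top-level parameter in use (HorizonAllowedHosts is assumed as the name wired through to ALLOWED_HOSTS; the values are examples):

```yaml
parameter_defaults:
  # rendered into ALLOWED_HOSTS in horizon's local_settings.py
  HorizonAllowedHosts:
    - overcloud.example.com
    - 192.0.2.10
```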
The httpd daemon will be started and managed by Pacemaker, so it should
not be enabled by puppet. Ideally, it shouldn't be started by puppet
either, but that appears impossible with horizon and apache mod_wsgi [1].
1. https://bugzilla.redhat.com/show_bug.cgi?id=1247547
Change-Id: I8a1b23c4ea27ac86385314f6cfde8c49d0879969
Co-Authored-By: marios andreou (marios@redhat.com)
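Sketched as hieradata, assuming the stock puppet-apache service parameters carry the change (an assumption, not the verbatim patch):

```yaml
# Pacemaker owns the httpd lifecycle: puppet must not enable it at
# boot, though it still starts it once for horizon/mod_wsgi
apache::service_enable: false
apache::service_ensure: running
```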
This commit renames and updates the rather outdated README
for this project.
Change-Id: Ibd1531dc14a2c04d8d91a3339c1df47a41c94790
Previously the Glance Registry service was reached using the local IP.
Change-Id: I8f2b7275cd39d8a5358d8ce69f4f7e5bc7758b62
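A hedged hieradata sketch of the fix, assuming it points glance-api at the registry's virtual IP via puppet-glance's registry_host parameter (both that this is the mechanism used and the address shown are assumptions):

```yaml
# reach glance-registry via the virtual IP rather than the
# node-local address; the VIP shown is illustrative
glance::api::registry_host: 192.0.2.11
```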
This change adds a containerized version of the overcloud compute node for
TripleO. Configuration files are generated via the OpenStack Puppet modules
and are then used to externally configure the kolla containers for each
OpenStack service.
See the README-containers.md file for more information on how to set this up.
This uses AtomicOS as the base operating system and requires bootstrapping
the image with a container that provides the os-collect-config agent hooks
needed to run puppet, shell scripts, and docker compose.
Change-Id: Ic8331f52b20a041803a9d74cdf0eb81266d4e03c
Co-Authored-By: Ian Main <imain@redhat.com>
Co-Authored-By: Ryan Hallisey <rhallise@redhat.com>
In the current neutron-* services constraints chain, the ovs and
netns cleanup services are re-run after a neutron-server restart.
As discussed at [1], this may not be desirable, since it leaves some
neutron services down and any tenant routers without IPs.
This review introduces a second constraints chain, so we now have:
neutron-server-->openvswitch-->dhcp-->l3-->metadata
and
ovs-cleanup-->netns-cleanup-->openvswitch
instead of the single chain:
neutron-server-->ovs-cleanup-->netns-cleanup-->openvswitch-->
dhcp-->l3-->metadata
[1] https://bugzilla.redhat.com/show_bug.cgi?id=1266910#c12
Related-Bug: 1501378
Change-Id: I4096704257aff74ff5bd37d8d01d8a776c6c6a76
The removal of default MariaDB accounts was being triggered at roughly
the same time on all controllers, causing a race condition: multiple
nodes found an account present and attempted deletion, but only one
succeeded and the others failed.
The HA controller now deletes the accounts only on the bootstrap node,
which fixes the issue.
Change-Id: Ieacd10a6ce26da50f6a37eaa3221d866c24353fa
This patch moves all of the os-apply-config (tripleo-image-elements)
specific templates into a common directory. This matches what
we do for puppet and should help new users more easily
understand the project layout.
Change-Id: I7dce2a770d56795f3ea22c8a464595c4a0c60832
This patch removes a couple of (top-level) templates that
are no longer used.
Change-Id: I71ba379b0d026e04fbcd45aaa2a0b587ba457c8c
This patch removes the examples directory which hasn't been
maintained for some time. The best examples for heat templates
now live in the heat-templates project.
Change-Id: Ia875cb8910418409d2335b5fb18c6df00b876e8c
By including ::ceilometer::config on the controller and compute roles,
we allow anyone to tweak any ceilometer.conf parameter using Hiera.
Change-Id: Ie6698d5e6900ecaaf7f19ed79e9c44b39ced0559
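The hiera pattern the *::config classes consume, with an illustrative option:

```yaml
# any section/option pair can be injected into ceilometer.conf;
# the option below is illustrative
ceilometer::config::ceilometer_config:
  DEFAULT/http_timeout:
    value: 600
```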
This patch moves the undercloud templates into the deprecated
directory. The Makefile still builds the resulting templates
at the top level so users should not be broken by this
change.
Change-Id: Ibcb87fe31a6894552a5e445b5495e69fdcc2d382
Currently, we have a problem because the unregistration happens in the
"post deploy" phase. This works fine when the top-level stack is being
deleted, but not when the ResourceGroup of servers is being scaled down:
in that case the normal "post deploy" update ordering is respected, and
we try to unregister after the corresponding server has already been
deleted.
So, instead, register/unregister each node inside the unit of scale,
i.e. the role template being scaled down, which is possible via the new
NodesExtraConfig interface. This means unregistration takes place at
the right time both on stack delete and on scale-down.
Change-Id: I8f117a49fd128f268659525dd03ad46ba3daa1bc
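A resource_registry sketch of hooking registration in through the per-node interface (the mapping name follows the in-tree rhel-registration example; the exact path is illustrative):

```yaml
resource_registry:
  # runs inside the unit of scale, so unregistration is ordered
  # correctly on both stack delete and scale-down
  OS::TripleO::NodeExtraConfig: extraconfig/pre_deploy/rhel-registration/rhel-registration.yaml
```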
It's become apparent that some actions are required in the pre-deploy
phase for all nodes, for example applying common hieradata overrides,
or hooking in logic that must run on every node prior to its removal
on scale-down (such as unregistration from a satellite server, which
currently doesn't work via the *NodesPostDeployment for scale-down
usage).
So, add a new interface that enables ExtraConfig per node (inside the
scaled unit, vs. AllNodes, which is used for cluster-wide config
outside of the ResourceGroup).
Change-Id: Ic865908e97483753e58bc18e360ebe50557ab93c
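A minimal sketch of a template that could back the new per-node interface (the hook and parameter names are assumptions for illustration):

```yaml
heat_template_version: 2015-04-30
description: Illustrative per-node ExtraConfig hook

parameters:
  server:
    type: string   # nova server ID of the node being configured

resources:
  NodeConfig:
    type: OS::Heat::SoftwareConfig
    properties:
      group: script
      config: |
        #!/bin/bash
        echo "per-node pre-deploy hook on $(hostname)"

  NodeDeployment:
    type: OS::Heat::SoftwareDeployment
    properties:
      config: {get_resource: NodeConfig}
      server: {get_param: server}
```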