Age | Commit message | Author | Files | Lines
|
|
|
|
|
Moves the vhost_params out of the manifest and into static hiera;
also removes the unneeded server_alias parameter, as it matched the
vhost servername anyway.
Change-Id: I4b5971b23ef3be9529a59075fa93ccc64af75b9c
|
|
Due to a limitation in the puppet version used in RHEL7 there is no simple
way to scope a 2nd level hiera hash key with the create_resources + defined
types pattern. Lack of the .each method support prior to puppet 4.0 is the
problem here. This template change works around the problem by explicitly
adding the hostname to the hieradata for a server under a nexus switch.
The duplicate server names under different switches are needed for vPC
config scenarios.
Closes-bug: #1506546
Change-Id: I03b866fb440e968c9f86ae93942b687e7165a065
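For illustration only, the extra hieradata might look roughly like the sketch below; the switch names, hostname and port value are hypothetical and the exact hash layout depends on the Nexus plugin hieradata in use:
# Hypothetical hieradata: the same server hostname is listed under two
# switches (vPC), and the hostname key is written out explicitly so the
# template can reference it without needing .each (puppet < 4.0).
cat >> /etc/puppet/hieradata/neutron_cisco.yaml <<'EOF'
nexus_config:
  switch-a:
    servers:
      overcloud-compute-0: "port-channel:10"
  switch-b:
    servers:
      overcloud-compute-0: "port-channel:10"
EOF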
|
|
Change-Id: Ia2079fc3e350cc677811ebb970cd2b306d6e7040
|
|
This commit aims to allow a user to specify several NTP servers and not
just one.
Example:
openstack overcloud deploy --templates --ntp-server
0.centos.pool.org,1.centos.pool.org
Change-Id: I4925ef1cf1e565d789981e609c88a07b6e9b28de
|
|
|
|
|
|
Currently only Glance and Heat have their virtual IP passed to the
controller directly.
This commit adds the same feature for:
* Ceilometer
* Cinder
* Nova
* Swift
Change-Id: I295d15d7a0aa33175a5530e3b155b0c61983b6ae
|
|
Together with [1], this change makes it possible to parameterize the file
descriptor limit for RabbitMQ for both the Systemd startup script
and the Pacemaker resource agent.
1. https://github.com/puppetlabs/puppetlabs-rabbitmq/commit/20325325b977c508b151ef8036107dcfefdf990b
Closes-Bug: 1474586
Change-Id: I62d31e483641ccb5cf489df81146ecb31d0c423f
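A minimal usage sketch, assuming the limit is exposed as a top-level RabbitFDLimit parameter (the name and value below are illustrative):
# Illustrative environment file; RabbitFDLimit is assumed to be the
# parameter wired through to the systemd unit and the resource agent.
cat > rabbit-fd-limit.yaml <<'EOF'
parameter_defaults:
  RabbitFDLimit: 65436
EOF
openstack overcloud deploy --templates -e rabbit-fd-limit.yaml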
|
|
This commit aims to allow a deployer to specify where to send haproxy's
logs. It is backward compatible with what is already in place and sends
the logs to the UNIX socket /dev/log by default.
The value specified here will be written to the haproxy.cfg file with
the following behavior:
HAProxySyslogAddress: 127.0.0.1 -> log 127.0.0.1 local0
HAProxySyslogAddress: ::1 -> log ::1 local0
HAProxySyslogAddress: /dev/log -> log /dev/log local0 (default)
Change-Id: I46c489a1f424e2219d129f332e64c64019aef850
Depends-On: If7f7c8154e544e5d8a49f79f642e1ad01644a66d
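For example, pointing the logs at a syslog address could look like the following sketch (the parameter name comes from the examples above; the address is illustrative):
# Illustrative environment file overriding the default /dev/log target.
cat > haproxy-syslog.yaml <<'EOF'
parameter_defaults:
  HAProxySyslogAddress: 127.0.0.1
EOF
openstack overcloud deploy --templates -e haproxy-syslog.yaml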
|
|
This patch allows the case where Ceph is not used to host persistent
storage (volumes) but only ephemeral storage (VMs).
Previously, ephemeral storage on Ceph was only allowed when persistent
storage was also using Ceph.
Change-Id: I03b775326e4424de413452f4453d4d88de0083bc
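A hedged sketch of an ephemeral-only Ceph deployment, assuming the usual NovaEnableRbdBackend / CinderEnableRbdBackend switches are the knobs involved:
# Illustrative environment file: VMs (ephemeral) on Ceph RBD, while
# volumes (persistent) stay on the default Cinder backend.
cat > ceph-ephemeral-only.yaml <<'EOF'
parameter_defaults:
  NovaEnableRbdBackend: true
  CinderEnableRbdBackend: false
EOF
openstack overcloud deploy --templates -e ceph-ephemeral-only.yaml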
|
|
Change-Id: Ieb27729c6b33ffc849d07200ec0d42508214956e
Closes-Bug: #1399793
|
|
|
|
If horizon is running in production (DEBUG is False), it will answer
only to the IPs/hostnames specified in the ALLOWED_HOSTS variable in the
local_settings.py configuration file.
The puppet-horizon module offers the ability to customize that;
tripleo-heat-templates was missing the link between the top-level
parameter and the puppet parameter, hence this commit.
More info:
* https://docs.djangoproject.com/en/dev/ref/settings/#allowed-hosts
* https://github.com/openstack/puppet-horizon/blob/master/templates/local_settings.py.erb#L14-L24
Change-Id: I5faede8b74a0318e15baa761dc502b95b051ae0d
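A minimal sketch of how the override might be passed, assuming the top-level parameter is named HorizonAllowedHosts:
# Illustrative environment file; the value ends up in ALLOWED_HOSTS in
# local_settings.py via the puppet-horizon parameter.
cat > horizon-allowed-hosts.yaml <<'EOF'
parameter_defaults:
  HorizonAllowedHosts: "horizon.example.com,192.0.2.10"
EOF
openstack overcloud deploy --templates -e horizon-allowed-hosts.yaml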
|
|
|
|
|
|
|
|
|
|
The httpd daemon will be started and managed by Pacemaker, so it should
not be enabled by puppet. Ideally, it shouldn't be started by puppet either,
but that doesn't seem possible with horizon and apache mod_wsgi [1].
1. https://bugzilla.redhat.com/show_bug.cgi?id=1247547
Change-Id: I8a1b23c4ea27ac86385314f6cfde8c49d0879969
Co-Authored-By: marios andreou (marios@redhat.com)
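The end state is roughly that Pacemaker owns a cloned httpd resource while puppet leaves the service disabled; a sketch of the equivalent pcs commands (not the exact resource definition used by the manifests):
# Sketch: Pacemaker managing httpd as a cloned systemd resource.
pcs resource create httpd systemd:httpd --clone
pcs status | grep -A 2 httpd-clone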
|
|
|
|
|
|
|
|
|
|
|
|
This commit renames and updates the rather outdated README
for this project.
Change-Id: Ibd1531dc14a2c04d8d91a3339c1df47a41c94790
|
|
Previously the Registry service was reached using the local IP.
Change-Id: I8f2b7275cd39d8a5358d8ce69f4f7e5bc7758b62
|
|
This change adds a containerized version of the overcloud compute node for
TripleO. Configuration files are generated via OpenStack Puppet modules
which are then used to externally configure kolla containers for
each OpenStack service.
See the README-containers.md file for more information on how to set this up.
This uses AtomicOS as a base operating system and requires that we bootstrap
the image with a container which contains the required os-collect-config agent
hooks to support running puppet, shell scripts, and docker compose.
Change-Id: Ic8331f52b20a041803a9d74cdf0eb81266d4e03c
Co-Authored-By: Ian Main <imain@redhat.com>
Co-Authored-By: Ryan Hallisey <rhallise@redhat.com>
|
|
|
|
|
|
|
|
|
|
|
|
In the current neutron-* services constraints chain, the ovs and
netns cleanup services are re-run after a neutron-server restart.
As discussed at [1], this may not be desirable, as it can leave some neutron
services down and tenant routers without an IP.
This review introduces a second constraints chain so we now have:
neutron-server-->openvswitch-->dhcp-->l3-->metadata
and
ovs-cleanup-->netns-cleanup-->openvswitch
Instead of a single chain like
neutron-server-->ovs-cleanup-->netns-cleanup-->openvswitch-->
dhcp-->l3-->metadata
[1] https://bugzilla.redhat.com/show_bug.cgi?id=1266910#c12
Related-Bug: 1501378
Change-Id: I4096704257aff74ff5bd37d8d01d8a776c6c6a76
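Expressed as pcs ordering constraints, the two chains are roughly the following (the *-clone resource names are shown for illustration and may differ from the exact names used in the manifests):
# Chain 1: server and agents.
pcs constraint order start neutron-server-clone then start neutron-openvswitch-agent-clone
pcs constraint order start neutron-openvswitch-agent-clone then start neutron-dhcp-agent-clone
pcs constraint order start neutron-dhcp-agent-clone then start neutron-l3-agent-clone
pcs constraint order start neutron-l3-agent-clone then start neutron-metadata-agent-clone
# Chain 2: the cleanup services only feed into openvswitch.
pcs constraint order start neutron-ovs-cleanup-clone then start neutron-netns-cleanup-clone
pcs constraint order start neutron-netns-cleanup-clone then start neutron-openvswitch-agent-clone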
|
|
The removal of default MariaDB accounts was being triggered at roughly
the same time on all controllers, causing a race condition -- multiple
nodes found an account present and attempted deletion, but only one
succeeded; the others failed.
The HA controller now deletes the accounts only on the bootstrap node,
which fixes the issue.
Change-Id: Ieacd10a6ce26da50f6a37eaa3221d866c24353fa
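The guard amounts to something like the following check, shown here as shell for illustration (the actual change lives in the puppet manifests; the bootstrap_nodeid hiera key is an assumption):
# Sketch: only the bootstrap node removes the default accounts, so the
# deletion no longer races between controllers.
if [ "$(hiera bootstrap_nodeid)" = "$(hostname -s)" ]; then
  mysql -e "DELETE FROM mysql.user WHERE user = ''; FLUSH PRIVILEGES;"
fi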
|
|
This patch moves all of the os-apply-config (tripleo-image-elements)
specific templates into a common directory. This matches what
we do for puppet and should help new users more easily
understand the project layout.
Change-Id: I7dce2a770d56795f3ea22c8a464595c4a0c60832
|
|
This patch removes a couple (top-level) templates that
are no longer used.
Change-Id: I71ba379b0d026e04fbcd45aaa2a0b587ba457c8c
|
|
This patch removes the examples directory which hasn't been
maintained for some time. The best examples for heat templates
now live in the heat-templates project.
Change-Id: Ia875cb8910418409d2335b5fb18c6df00b876e8c
|
|
By including ::ceilometer::config on the controller & compute roles, we allow
anyone to tweak ceilometer.conf with any parameter, using Hiera.
Change-Id: Ie6698d5e6900ecaaf7f19ed79e9c44b39ced0559
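For example, with ::ceilometer::config included, an arbitrary ceilometer.conf option could be driven from Hiera along these lines (a sketch; the option shown is illustrative and ExtraConfig is assumed as the pass-through):
# Illustrative environment file: any section/option pair can be set.
cat > ceilometer-overrides.yaml <<'EOF'
parameter_defaults:
  ExtraConfig:
    ceilometer::config::ceilometer_config:
      DEFAULT/http_timeout:
        value: 300
EOF
openstack overcloud deploy --templates -e ceilometer-overrides.yaml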
|
|
|
|
|
|
|
|
This patch moves the undercloud templates into the deprecated
directory. The Makefile still builds the resulting templates
at the top level so users should not be broken by this
change.
Change-Id: Ibcb87fe31a6894552a5e445b5495e69fdcc2d382
|
|
Currently, we have a problem because the unregistration happens in the
"post deploy" phase, which works fine when the top-level stack is being
deleted, but not when the ResourceGroup of servers is being scaled down,
because then the normal "post deploy" update ordering is respected and
we try to unregister after the corresponding server has been deleted.
So, instead, register/unregister each node inside the unit of scale,
e.g. the role template being scaled down, which is possible via the new
NodesExtraConfig interface, which means unregistration will take
place at the right time both on stack delete and on scale-down.
Change-Id: I8f117a49fd128f268659525dd03ad46ba3daa1bc
|
|
It's become apparent that some actions are required in the pre-deploy
phase for all nodes, for example applying common hieradata overrides,
or also as a place to hook in logic which must happen for all nodes
prior to their removal on scale down (such as unregistration from
a satellite server, which currently doesn't work via the
*NodesPostDeployment for scale-down usage).
So, add a new interface that enables ExtraConfig per-node (inside the
scaled unit, vs AllNodes which is used for the cluster-wide config
outside of the ResourceGroup).
Change-Id: Ic865908e97483753e58bc18e360ebe50557ab93c
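A hedged sketch of how such a per-node hook could be wired up from an environment file (the registry key and template path below are assumptions for illustration):
# Illustrative environment file mapping the per-node hook to a custom
# template that runs inside the scaled unit, before post-deployment.
cat > my-node-extraconfig.yaml <<'EOF'
resource_registry:
  OS::TripleO::NodeExtraConfig: /home/stack/templates/my_node_config.yaml
EOF
openstack overcloud deploy --templates -e my-node-extraconfig.yaml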
|
|
Currently package updates won't occur on a single node
non-HA pacemaker managed Controller because stopping
the node loses the quorum of 1.
This change gets the count of current nodes in the cluster and,
if the count is 1, specifies --force when doing a pcs cluster stop.
Change-Id: I0de2488e24f1ef53a935dbc90ec6de6142bb4264
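Roughly, the logic looks like this sketch (not the exact script; the node count check via crm_node is illustrative):
# Sketch: force the stop when this is the only node, since stopping it
# would otherwise break the quorum of 1 and abort.
node_count=$(crm_node -l | wc -l)
if [ "$node_count" -eq 1 ]; then
  pcs cluster stop --force
else
  pcs cluster stop
fi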
|
|
This change adds alternative logic for handling package updates
on a pacemaker managed node.
"yum list updates" is now run and this script exits early if
there are no packages to update.
If the pacemaker service is not running then the previous puppet
logic remains, so a package update is performed which excludes packages
managed by puppet, and a flag is set to indicate that puppet should
perform an ensure=>latest on all packages it manages.
However if the pacemaker service is running, the following occurs:
- pcs cluster stop is run for this node
- a full yum update is performed
- pcs cluster start is run for this node
- pcs status is run until the hostname for this node appears in the
Online list
This means that puppet is not involved in the package update process when
the node is managed by pacemaker.
Change-Id: I5ad118552d053dbda280978751167d9fd9da9874
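In outline, the pacemaker path behaves like this sketch (simplified; the real script has more error handling):
# Simplified outline of the pacemaker-managed update path.
yum -q list updates 2>/dev/null | grep -q . || exit 0   # nothing to update
pcs cluster stop               # take this node out of the cluster
yum -y update                  # full update, puppet is not involved
pcs cluster start              # rejoin the cluster
until pcs status | grep -q "Online:.*$(hostname -s)"; do
  sleep 10                     # wait until this node is listed as Online
done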
|
|
This change updates yum_update.sh so that we set a boolean
output when "managed" packages should get updated. The
output is named 'update_managed_packages' and for the
puppet implementation it is wired up so that it
directly sets tripleo::packages::enable_upgrade to
control whether packages are updated.
It also modifies yum_update.sh to build a yum update excludes list for
packages managed by puppet. The exclude lists are
generated via puppet-tripleo as well, via the new 'write_package_names'
function that is now wired into all the role manifests.
This change does not actually trigger the puppet apply. The fix for
Related-Bug: #1463092 will be used to trigger the puppet run when the
hiera changes. As a minor tweak to this logic we append the
UpdateIdentifier to the config_identifier so that we ensure
puppet gets executed on an update where other (non-related)
hiera changes also occur.
Co-Authored-By: Dan Prince <dprince@redhat.com>
Change-Id: I343c3959517eae38bbcd43648ed56f610272864d
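The exclude handling amounts to something along these lines (a sketch; it reads the package-name files from /var/lib/puppet-tripleo/installed-packages/ described in the next entry, everything else is illustrative):
# Sketch: build --exclude arguments from the package names that
# puppet-tripleo writes out, then update everything else.
EXCLUDES=""
for pkg in $(cat /var/lib/puppet-tripleo/installed-packages/* 2>/dev/null | sort -u); do
  EXCLUDES="$EXCLUDES --exclude $pkg"
done
yum -y update $EXCLUDES
# The script also emits an update_managed_packages=true output, which is
# wired to tripleo::packages::enable_upgrade so the next puppet run
# upgrades the excluded (puppet-managed) packages.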
|
|
This patch updates all of the overcloud manifests so that
we write out flat files containing lists of the packages
managed by Puppet in each manifest.
The flat files all get written to
/var/lib/puppet-tripleo/installed-packages/ where they can
be easily parsed by external tools. Example format from
the flat files looks like (for the controller step 1):
cat /var/lib/puppet-tripleo/installed-packages/overcloud_controller1
keepalived
haproxy
Depends-On: If3e03b1983fed47082fac8ce63f975557dbc503c
Change-Id: Ia324a08711796aa664f9c0273a051f4f2e3e92c9
|
|
This patch adds a new optional DnsServers parameter
which can be used to provide a custom list of DNS
resolvers which will be configured in resolv.conf.
Change-Id: I2bb7259ebc09d786dc56da18694c862f802091b1
Depends-On: I9edecfdd4e1d0f39883b72be554cd92c5685881d
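Typical usage might look like this sketch (the resolver addresses are illustrative):
# Illustrative environment file providing custom resolvers.
cat > dns-servers.yaml <<'EOF'
parameter_defaults:
  DnsServers: ["192.0.2.1", "192.0.2.2"]
EOF
openstack overcloud deploy --templates -e dns-servers.yaml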
|