If Horizon is running in production (DEBUG is False), it will answer
only to the IPs/hostnames specified in the ALLOWED_HOSTS variable in the
local_settings.py configuration file.
The puppet-horizon module offers a way to customize that setting, but
tripleo-heat-templates was missing the link between the top-level
parameter and the puppet parameter, hence this commit.
More info:
* https://docs.djangoproject.com/en/dev/ref/settings/#allowed-hosts
* https://github.com/openstack/puppet-horizon/blob/master/templates/local_settings.py.erb#L14-L24
Change-Id: I5faede8b74a0318e15baa761dc502b95b051ae0d
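For illustration (not part of the original commit), a minimal environment-file sketch of how such a top-level parameter might be set; the parameter name HorizonAllowedHosts is an assumption based on the commit's description:
parameter_defaults:
  # hosts/IPs Horizon should answer to; ends up in ALLOWED_HOSTS
  HorizonAllowedHosts:
    - overcloud.example.com
    - 10.0.0.4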
The httpd daemon will be started and managed by Pacemaker, so it should
not be enabled by puppet. Ideally it shouldn't be started by puppet
either, but that doesn't seem possible with horizon and apache mod_wsgi [1].
1. https://bugzilla.redhat.com/show_bug.cgi?id=1247547
Change-Id: I8a1b23c4ea27ac86385314f6cfde8c49d0879969
Co-Authored-By: marios andreou (marios@redhat.com)
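A minimal hieradata sketch of the idea, assuming the puppetlabs-apache service_enable parameter is the knob in play (an assumption, not confirmed by the commit):
# let Pacemaker own enabling/starting httpd
apache::service_enable: false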
This commit renames and updates the rather outdated README
for this project.
Change-Id: Ibd1531dc14a2c04d8d91a3339c1df47a41c94790
Previously the Registry service was reached using the local IP.
Change-Id: I8f2b7275cd39d8a5358d8ce69f4f7e5bc7758b62
This change adds a containerized version of the overcloud compute node for
TripleO. Configuration files are generated via OpenStack Puppet modules
which are then used to externally configure kolla containers for
each OpenStack service.
See the README-containers.md file for more information on how to set this up.
This uses AtomicOS as a base operating system and requires that we bootstrap
the image with a container which contains the required os-collect-config agent
hooks to support running puppet, shell scripts, and docker compose.
Change-Id: Ic8331f52b20a041803a9d74cdf0eb81266d4e03c
Co-Authored-By: Ian Main <imain@redhat.com>
Co-Authored-By: Ryan Hallisey <rhallise@redhat.com>
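As a purely hypothetical illustration of the compose side, a docker-compose v1 sketch for one kolla service container (image name and mounts are invented, not taken from the commit):
# hypothetical docker-compose entry for a kolla container
nova-compute:
  image: kolla/centos-rdo-nova-compute
  privileged: true
  volumes:
    # generated config mounted into the container
    - /var/lib/etc-data:/etc/kolla:ro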
In the current neutron-* services constraints chain, the ovs and
netns cleanup services are re-run after a neutron-server restart.
As discussed at [1], this may not be desirable, as it can leave some
neutron services down and tenant routers without IPs.
This review introduces a second constraints chain, so we now have:
neutron-server-->openvswitch-->dhcp-->l3-->metadata
and
ovs-cleanup-->netns-cleanup-->openvswitch
instead of a single chain like
neutron-server-->ovs-cleanup-->netns-cleanup-->openvswitch-->
dhcp-->l3-->metadata
[1] https://bugzilla.redhat.com/show_bug.cgi?id=1266910#c12
Related-Bug: 1501378
Change-Id: I4096704257aff74ff5bd37d8d01d8a776c6c6a76
The removal of default MariaDB accounts was being triggered at roughly
the same time on all controllers, causing a race condition: multiple
nodes found an account present and attempted deletion, but only one
succeeded and the others failed.
The HA controller now deletes the accounts only on the bootstrap node,
which fixes the issue.
Change-Id: Ieacd10a6ce26da50f6a37eaa3221d866c24353fa
This patch moves all of the os-apply-config (tripleo-image-elements)
specific templates into a common directory. This matches what
we do for puppet and should help new users more easily
understand the project layout.
Change-Id: I7dce2a770d56795f3ea22c8a464595c4a0c60832
This patch removes a couple of (top-level) templates that
are no longer used.
Change-Id: I71ba379b0d026e04fbcd45aaa2a0b587ba457c8c
This patch removes the examples directory which hasn't been
maintained for some time. The best examples for heat templates
now live in the heat-templates project.
Change-Id: Ia875cb8910418409d2335b5fb18c6df00b876e8c
By including ::ceilometer::config on the controller and compute roles,
we allow any ceilometer.conf parameter to be tuned via Hiera.
Change-Id: Ie6698d5e6900ecaaf7f19ed79e9c44b39ced0559
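For illustration (not from the commit), the kind of Hiera override this enables, using puppet-ceilometer's ceilometer_config hash; the option shown is just an example:
ceilometer::config::ceilometer_config:
  DEFAULT/http_timeout:
    value: 600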
This patch moves the undercloud templates into the deprecated
directory. The Makefile still builds the resulting templates
at the top level, so users should not be broken by this
change.
Change-Id: Ibcb87fe31a6894552a5e445b5495e69fdcc2d382
Currently, we have a problem because unregistration happens in the
"post deploy" phase, which works fine when the top-level stack is being
deleted, but not when the ResourceGroup of servers is being scaled down:
in that case the normal "post deploy" update ordering is respected, and
we try to unregister after the corresponding server has already been
deleted.
So, instead, register/unregister each node inside the unit of scale,
i.e. the role template being scaled down, which is possible via the new
NodesExtraConfig interface. This means unregistration will take place
at the right time both on stack delete and on scale-down.
Change-Id: I8f117a49fd128f268659525dd03ad46ba3daa1bc
It's become apparent that some actions are required in the pre-deploy
phase for all nodes, for example applying common hieradata overrides,
or hooking in logic which must happen for all nodes prior to their
removal on scale-down (such as unregistration from a satellite server,
which currently doesn't work via the *NodesPostDeployment for
scale-down usage).
So, add a new interface that enables ExtraConfig per node (inside the
scaled unit, vs AllNodes, which is used for the cluster-wide config
outside of the ResourceGroup).
Change-Id: Ic865908e97483753e58bc18e360ebe50557ab93c
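A sketch of wiring such a hook in an environment file; the resource name matches the role-specific form shown in a later commit here, while the template path is hypothetical:
resource_registry:
  # per-node pre-deploy hook, applied inside the scaled unit
  OS::TripleO::ControllerExtraConfigPre: puppet/extraconfig/pre_deploy/my_hook.yaml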
Currently package updates won't occur on a single-node
non-HA pacemaker-managed Controller, because stopping
the node loses the quorum of 1.
This change gets the count of current nodes in the cluster and,
if the count is 1, specifies --force when doing a pcs cluster stop.
Change-Id: I0de2488e24f1ef53a935dbc90ec6de6142bb4264
This change adds alternative logic for handling package updates
on a pacemaker-managed node.
"yum list updates" is now run first, and the script exits early if
there are no packages to update.
If the pacemaker service is not running, the previous puppet
logic remains: a package update is performed which excludes packages
managed by puppet, and a flag is set to indicate that puppet should
perform an ensure=>latest on all packages it manages.
However, if the pacemaker service is running, the following occurs:
- pcs cluster stop is run for this node
- a full yum update is performed
- pcs cluster start is run for this node
- pcs status is run until the hostname for this node appears in the
Online list
This means that puppet is not involved in the package update process when
the node is managed by pacemaker.
Change-Id: I5ad118552d053dbda280978751167d9fd9da9874
This change updates yum_update.sh so that we set a boolean
output when "managed" packages should get updated. The
output is named 'update_managed_packages', and for the
puppet implementation it is wired up so that it
directly sets tripleo::packages::enable_upgrade to
control whether packages are updated.
It also modifies yum_update.sh to build a yum update excludes list for
packages managed by puppet. The exclude lists are generated via
puppet-tripleo, using the new 'write_package_names'
function that is now wired into all the role manifests.
This change does not actually trigger the puppet apply. The fix for
Related-Bug: #1463092 will be used to trigger the puppet run when the
hiera changes. As a minor tweak to this logic, we append the
UpdateIdentifier to the config_identifier so that we ensure
puppet gets executed on an update where other (non-related)
hiera changes also occur.
Co-Authored-By: Dan Prince <dprince@redhat.com>
Change-Id: I343c3959517eae38bbcd43648ed56f610272864d
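For reference, the hiera key named above could be toggled like this in hieradata (a sketch; the key itself is taken from the commit text):
# when true, puppet performs ensure=>latest on the packages it manages
tripleo::packages::enable_upgrade: true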
This patch updates all of the overcloud manifests so that
we write out flat files containing lists of the Puppet-managed
packages for each manifest.
The flat files all get written to
/var/lib/puppet-tripleo/installed-packages/ where they can
be easily parsed by external tools. Example content of a
flat file (for controller step 1):
cat /var/lib/puppet-tripleo/installed-packages/overcloud_controller1
keepalived
haproxy
Depends-On: If3e03b1983fed47082fac8ce63f975557dbc503c
Change-Id: Ia324a08711796aa664f9c0273a051f4f2e3e92c9
This patch adds a new optional DnsServers parameter
which can be used to provide a custom list of DNS
resolvers that will be configured in resolv.conf.
Change-Id: I2bb7259ebc09d786dc56da18694c862f802091b1
Depends-On: I9edecfdd4e1d0f39883b72be554cd92c5685881d
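Example usage in an environment file (the resolver addresses are placeholders):
parameter_defaults:
  # custom resolvers written into resolv.conf
  DnsServers: ["8.8.8.8", "8.8.4.4"]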
Also adds an environment file which can be passed to heat stack-create
to enable debugging.
Change-Id: I9758e2ca3de6a0bed6d20c37ea19e48f47220721
Depends-On: Ie92d1714a8d7e59d347474039be999bd3a2b542f
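A sketch of what such a debug environment file might contain; the Debug parameter name is an assumption, not taken from the commit:
parameter_defaults:
  # enable debug logging across overcloud services
  Debug: true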
This enables support for the Cisco N1kv driver for the ML2 plugin.
It also configures the Nexus 1000v switch.
Co-Authored-By: Steven Hillman <sthillma@cisco.com>
Depends-On: I02dda0685c7df9013693db5eeacb2f47745d05b5
Depends-On: I3f14cdce9b9bf278aa9b107b2d313e1e82a20709
Change-Id: Idf23ed11a53509c00aa5fea4c87a515f42ad744f
Make the core_plugin, type_drivers and service_plugins parameters in
neutron configurable through heat.
Also change the type_drivers order to "vxlan,vlan,flat,gre".
Change-Id: Iba895ed5897bdaf7bb772ffc063c424abb6e1638
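Example overrides in an environment file; the heat parameter names shown (NeutronCorePlugin, NeutronTypeDrivers, NeutronServicePlugins) are assumptions based on the commit's description:
parameter_defaults:
  NeutronCorePlugin: 'ml2'
  NeutronTypeDrivers: 'vxlan,vlan,flat,gre'
  NeutronServicePlugins: 'router'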
This change adds a CephStorageExtraConfigPre which can be used
to distribute hooks for the CephStorage nodes.
Change-Id: Id0023d8ffddb3ee5e855d5dcc32c76bc41ce4c63
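A sketch of registering such a hook (the template path is hypothetical):
resource_registry:
  # distribute a pre-deploy hook to the CephStorage nodes
  OS::TripleO::CephStorageExtraConfigPre: puppet/extraconfig/pre_deploy/my_ceph_hook.yaml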
It is currently not possible to specify settings per host, only per
type of host.
One example of the problem this causes: if node0 has devices /dev/sdb
and /dev/sdc while node1 has devices /dev/sda and /dev/sdd, there is
currently no simple way to express that.
The idea here is to add a top-priority file to the hiera lookup that
matches the UUID of the System Information section in the output of
the dmidecode command.
The file could be provided via the firstboot/rsync stack, for example.
Change-Id: I3ab082c8ebd2567bd1d914fc0b924e19b1eff7d0
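For illustration, such a UUID-named hieradata file might look like this (the file path and keys are hypothetical; the UUID echoes the example in the following commit):
# /etc/puppet/hieradata/AB4114B1-9C9D-409A-BEFB-D88C151BF2C3.yaml
# node-specific disk layout for this exact machine
ceph::profile::params::osds:
  '/dev/sdb': {}
  '/dev/sdc': {}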
Shows one method of passing a map of data into the pre_deploy extraconfig
interface, such that it could be used in combination with
https://review.openstack.org/#/c/215013/ to create a node-uuid-specific
hieradata file, or to perform some other non-puppet per-node configuration.
This would be used by specifying an environment file like:
resource_registry:
  OS::TripleO::ControllerExtraConfigPre: puppet/extraconfig/pre_deploy/per_node.yaml
parameter_defaults:
  NodeDataLookup: |
    {"AB4114B1-9C9D-409A-BEFB-D88C151BF2C3": {"foo": "bar"},
     "8CF1A7EA-7B4B-4433-AC83-17675514B1B8": {"foo2": "bar2"}}
Change-Id: I62e344669e0ca781dd93d3f7d2190b70299877c2
The collection of hostname-to-MAC mappings done in AllNodesPostDeploy
uses 'hostname -f' to get the FQDN for each node. This form
of the command causes a nameserver lookup for the domain name, and a
timing issue has been seen where the hostname lookup fails because
the nameserver does not have the mapping yet. The solution is to
hardcode the domain to 'localdomain', as is done in a few other
patches (see controller-puppet.yaml).
Change-Id: Ibea50fcc6b9f22ca163ff063e0dc9ca69dff5f34
The staticweb middleware needs to be placed after the authentication
middlewares to ensure correct functionality, as documented in
http://docs.openstack.org/developer/swift/middleware.html#staticweb
Without this, Swift sends an HTML response even if the request was made
using an X-Auth-Token. This can result in faulty handling of the response
on the client side; for example, "swift stat containername" would report
an empty, private container, while the container might actually be
publicly readable with data stored in it.
Closes-Bug: #1494896
Change-Id: Id48840e0041f8d272e08def292fbedfaf76bbfbb
Co-Authored-By: Christian Schwede <cschwede@redhat.com>