As of I54a75652efd5e91464b84adf84004400b343c3a5 for rbd,
this is being done by the cinder puppet module.
Change-Id: I109e139fcbb859a0d9ed99054656be94975d33b5
We don't have swap space enabled on overcloud-full deploys,
as discussed at https://bugs.launchpad.net/tripleo/+bug/1491335.
The default virtual-RAM-to-physical-RAM allocation ratio is 1.5,
so configure it to 1:1 so we don't allow memory overcommit.
Related-Bug: 1491335
Change-Id: I58cfe6dc68e8615a5519428412dec8c653bd6093
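A hieradata sketch of the kind of override involved, using the
nova::scheduler::filter::ram_allocation_ratio parameter mentioned
later in this log (the exact hiera key set by this change is an
assumption):
  # illustrative hieradata; the exact key used by the change may differ
  nova::scheduler::filter::ram_allocation_ratio: '1.0'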
This is passed from the heat templates as hiera data (defaulting
to 'openvswitch') but was never actually applied, meaning we got
the puppet module default.
Change-Id: I3f14cdce9b9bf278aa9b107b2d313e1e82a20709
Closes-Bug: 1488176
This patch adds support for using an externally managed Ceph
cluster with the TripleO Heat templates.
For an externally managed Ceph cluster we initially only deploy
the Ceph client tools, install the 'openstack' user keyring, and
generate the ceph.conf. This matches what we do for managed Ceph
installations and is a good starting point. No other Ceph-related
services are installed or managed.
To enable use of an external Ceph cluster, simply add the custom
Heat environment file environments/puppet-ceph-external.yaml to
your heat stack create/update command and make sure to set the
required CephClientKey, CephExternalMonHost, and CephClusterFSID
parameters.
Change-Id: I0a8b213ce9dfa2fc4e62ae1e7631466e5179fc2b
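For illustration, a minimal sketch of an environment file carrying
those values (placeholder values; assuming they are set via
parameter_defaults), passed to the stack create/update command
alongside environments/puppet-ceph-external.yaml:
  # sketch only; the key and addresses below are placeholders
  parameter_defaults:
    CephClusterFSID: '4b5c8c0a-ff60-454b-a1b4-9747aa737d19'
    CephClientKey: 'AQC+vYNXgDAgAhAAc8UoYt+OTz5uhV7ItLdwUw=='
    CephExternalMonHost: '172.16.1.7,172.16.1.8,172.16.1.9'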
Set up a cron job to flush expired keystone tokens periodically. The
job runs once a day near midnight per the puppet-keystone defaults,
and we pass a maxdelay of 3600, which means each controller will wait
a random delay of up to 1 hour before running the task.
Change-Id: I351f0273c61106c182aa3945b7ad1ce8f5c7d12b
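As a sketch, assuming puppet-keystone's keystone::cron::token_flush
class is what provides the cron job, the hieradata would look roughly
like:
  # illustrative hieradata; the class name is an assumption
  keystone::cron::token_flush::maxdelay: 3600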
The default for default_floating_pool in nova.conf is 'nova', which
is confusing given that, to make the Tempest tests pass, one has to
create a public network with that very name.
Change-Id: I148222a9f276309ede062ee5292993898ff899d6
Memcached is used by novnc to share the auth tokens.
Change-Id: I18415b6ae38b46e3c92e4ce84b858a014ef8398b
This patch moves most of the ::db::mysql parameter initialization
into a new database.yaml Hiera file. This cleans up the controller
manifests and allows us to define things in a single location across
the two implementations (HA and non-HA).
Change-Id: I895b753b329097a96a6c6f3a03a5fcebefe32dd4
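The sort of keys such a Hiera file carries, sketched with puppet-nova's
nova::db::mysql parameters (placeholder values):
  # sketch of database.yaml-style hieradata (placeholder values)
  nova::db::mysql::user: nova
  nova::db::mysql::dbname: nova
  nova::db::mysql::allowed_hosts: ['%']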
On slow environments the start operation of some services can take
longer than 20s, so we increase the default timeout for the start
operation to 90s; systemd defaults to 90s as well. More info can be
found at:
https://bugzilla.redhat.com/show_bug.cgi?id=1242052
Change-Id: Ie4652bad518075be77937d47830f263034eda79c
This wires in use of a new puppet-tripleo class which
encapsulates the logic to enable/disable package
installation and upgrades.
By using the new class we can remove the global
Package provider declaration at the top of each
module.
Change-Id: I5c6e5fd8600031bd8fb6195649721607c560f9d5
Depends-On: Ie8fbc344149bc8c9977e127de77636903607617a
It was incorrectly assumed that Hiera values bound to a defined
type (as seen in cinder-netapp.yaml) would be applied to any
resources created with that type, the way automatic parameter
lookup works for classes. This is not how Puppet works.
The full range of configuration parameters to cinder::backend::netapp
has been added back in. They still pull from Hiera as originally
intended, but it needs to be a little more explicit for Puppet to
be happy.
Change-Id: I2e00eae829713b2dbb1e4a5f296b6d08d0c21100
By default Cinder will get the publicURL for Nova and Swift, which
is not reachable by the CinderStorage nodes.
Change-Id: I25b7900c9ab261e0f706257ffdf6844533b63b94
By default Nova will get the publicURL instead, which is not
reachable by the compute nodes.
Change-Id: I57b6a7a7eddb0ffaf6d2d152d932f390c48f908e
Currently we build the overcloud image with the selinux-permissive
element in CI. However, even in environments where the
selinux-permissive element is not used, it should be ensured that
SELinux is set to permissive mode on nodes with Ceph OSDs [1].
We have no nice way to manage SELinux status via Puppet at the moment,
so I'm resorting to execs, but with proper "onlyif" guards.
[1] https://bugzilla.redhat.com/show_bug.cgi?id=1241422
Change-Id: I31bd685ad4800261fd317eef759bcfd285f2ba80
Currently the bootstrap of the neutron server happens with the use of
a start/sleep/stop pattern.
Since Pacemaker doesn't mind if the service is already started, let's
simply start the neutron server on the $pacemaker_master node and wait
for 5 seconds.
Change-Id: I894dc3305f7d6685ebcc6828e690c718a63f32bd
Closes-Bug: #1473410
Change-Id: Ib945b07dd93f9bdc613f464211745094c4c72836
The number of connections created to the database depends on the
number of running processes, which is a function of both the node
count and the core count. We make the limit configurable so it can
be increased when needed.
Change-Id: I41d511bde95d0942706bf7c28cd913498ea165fb
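A sketch of how such a limit might be raised from an environment file
(the MysqlMaxConnections parameter name is an assumption):
  # illustrative override; the parameter name is an assumption
  parameter_defaults:
    MysqlMaxConnections: 4096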
Adds support for an NFS backend for Cinder; it remains disabled by
default.
Change-Id: I9ebef072ed115efe980fa4904ea80f02384522af
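The kind of environment overrides that would enable it, as a sketch
(parameter names and values are assumptions/placeholders):
  # illustrative only; parameter names are assumptions
  parameter_defaults:
    CinderEnableNfsBackend: true
    CinderNfsServers: ['192.168.122.1:/export/cinder']
    CinderNfsMountOptions: 'rw,sync'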
Allows inclusion of additional arbitrary puppet classes by the
manifests if defined in the *_classes hieradata.
Example: to specify the Nova RAM allocation ratio there is a param
in nova::scheduler::filter but we do not include it by default; if
needed one can use:
  nova::scheduler::filter::ram_allocation_ratio: 1.8
  controller_classes:
    - nova::scheduler::filter
Change-Id: I61d64d2498bed5c49376dee917d106598392db51
Without the constraint the VIP could get assigned to a node without
an active haproxy instance, which ultimately means everything stops
working.
kind=Optional allows a VIP to relocate to a healthy haproxy instance
in the event of a failure without tearing down the entire stack in the
process.
Change-Id: I44d44952fb42cf91a2a248250a4063e3034d119e
As reported in https://bugzilla.redhat.com/show_bug.cgi?id=1238117
and https://bugzilla.redhat.com/show_bug.cgi?id=1236578, the
NeutronScale resource is causing problems during post-deploy
configuration of the overcloud (a momentary inconsistency in the
host name for the neutron agents, given what NeutronScale does;
see the discussion in BZ 1238117).
As discussed in the bugs, we may not need NeutronScale at all,
since our host names should be safe enough for scaling. This change
removes the NeutronScale resource completely and links startup of
neutron-server directly to neutron-ovs-cleanup.
Change-Id: Ib43a2d60b85fd9bb48eff5919602bb74dc463905
In 88b278f510b0c9351c58dfe67513f3902d415ab6 we dropped
the swift ceilometer middleware but we forgot to do it
for the overcloud pacemaker manifest.
Change-Id: If9fcc5d029492554472edbe3be98a44942f94d20
This maps the template param to the actual class param which optionally
configures Ceph as a backend for the ephemeral storage or for the
persistent storage only. See I4ae0fd605c5a57aa23bea83b06530a50844d24a0
Change-Id: Ic7007da8317e98d450b1362864e65093a184cb25
To download a glance image from a webserver, you need to enable the
HTTP backend store.
This patch merges the configured backend with the HTTP store backend
so that the latter is always enabled.
Change-Id: Ie769831f8d491c1b7fe08b8fc7df9ebea493f9e8
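A rough hieradata sketch of the intended result for a file-backed
deployment (the glance::api::known_stores key and the store class
names are assumptions):
  # illustrative only; key and store names are assumptions
  glance::api::known_stores:
    - glance.store.filesystem.Store
    - glance.store.http.Store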
Add two new parameters: EnableFencing and FencingConfig.
FencingConfig is JSON whose expected structure is documented in the
templates. It is passed on to puppet-tripleo, which configures the
fencing devices.
Fencing is configured and enabled in the last step, after all pacemaker
resources and constraints have been created, which should be a more
stable approach than the other way round.
Change-Id: Ifd432bfd2443b6d13e7efa006d4120bb0eaa2554
Depends-On: I819fc8c126ec47cd207c59b3dcf92ff699649c5a
Depends-On: I8b7adff6f05f864115071c51810b41efad887584
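A sketch of what such an environment might look like (the
devices/agent/params structure shown is an assumption; the templates
document the authoritative format):
  # illustrative only; see the templates for the expected structure
  parameter_defaults:
    EnableFencing: true
    FencingConfig:
      devices:
        - agent: fence_ipmilan
          host_mac: '00:11:22:33:44:55'
          params:
            ipaddr: '10.0.0.101'
            login: 'admin'
            passwd: 'secret'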
We do not want to delay the start of the Redis VIP until promotion of
the Redis master; HAProxy will take care of validating the backends.
We do not need to force colocation of the Redis VIP with the Redis
master.
We do not want to restart the Ceilometer central agent when the VIP
moves, as this can cause unwanted cascading restarts due to other
constraints between services.
More details can be read on the BZ at:
https://bugzilla.redhat.com/show_bug.cgi?id=1236374
Change-Id: I594984cd23db7de57746c3e1018181d61b020f46