Configures all services to send notifications to rabbit. The puppet
modules are not consistent regarding how this is done: some expose
notification config as a top-level param, others require setting it
through a *_config structure, and cinder provides a separate class
dedicated to enabling ceilometer notifications.
Change-Id: I23e2ddad3c59a06cfbfe5d896a16e6bad2abd943
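As a rough illustration of those three styles (class and resource names from the
respective puppet modules; the driver value and exact options are assumptions, not
taken from this change):
  # Top-level parameter style (puppet-nova):
  class { '::nova':
    notification_driver => 'messaging',
  }
  # *_config resource style (puppet-glance):
  glance_api_config { 'DEFAULT/notification_driver':
    value => 'messaging',
  }
  # Dedicated class style (puppet-cinder):
  include ::cinder::ceilometer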
See RHBZ 1311005 and 1247303. In short: sometimes when a controller
node gets fenced, rabbitmq is unable to rejoin the cluster. To fix this
we need two steps:
1) The fix for the RA in BZ 1247303
2) Add notify=true to the meta parameters of the rabbitmq resource on
fresh installs and updates
Note that if this change is applied on systems that do not
have the fix for the rabbitmq resource agent, no action is taken.
So once the resource agent is updated, the notify
operation will start to work as soon as the first monitor
action takes place.
Fixes RH Bug #1311005
Change-Id: I513daf6d45e1a13d43d3c404cfd6e49d64e51d5a
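A minimal sketch of the resulting resource definition, assuming TripleO's
pacemaker::resource::ocf wrapper (other parameters omitted):
  pacemaker::resource::ocf { 'rabbitmq':
    ocf_agent_name => 'heartbeat:rabbitmq-cluster',
    clone_params   => 'ordered=true interleave=true',
    meta_params    => 'notify=true',  # harmless until the fixed resource agent is installed
  }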
This change adds extra config yaml files for the Big Switch agent
and Big Switch LLDP.
This change is mainly for compute nodes. The changes related
to controller nodes landed in e78e1c8d9b5a7ebf327987b22091bff3ed42d1c1.
This change also removes the neutron_enable_bigswitch_ml2 flag. Instead,
users need to specify NeutronMechanismDrivers: bsn_ml2 in an environment
file.
Previous discussion about this change can be found in the abandoned
review https://review.openstack.org/#/c/271940/
Depends-On: Iefcfe698691234490504b6747ced7bb9147118de
Change-Id: I81341a4b123dc4a8312a9a00f4b663c7cca63d7c
As per the conversation in [1], these settings should probably never
have been there.
1. https://bugzilla.redhat.com/show_bug.cgi?id=1262409
Change-Id: I116f825ba0fe3e4faac8dd347bb087e1b4c70e57
This enables the creation of the nova_api database, which is mandatory
since https://review.openstack.org/#/c/245828/
Change-Id: Ia8242f23864ebb14ccf858a77ba754059e9c2d4a
Related-Bug: #1539793
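One plausible way to wire this with puppet-nova's nova::db::mysql_api class
(the hiera keys shown below are illustrative, not taken from this change):
  class { '::nova::db::mysql_api':
    password      => hiera('nova::db::mysql_api::password'),  # hypothetical hiera key
    host          => hiera('mysql_bind_host'),
    allowed_hosts => ['%', hiera('mysql_bind_host')],
  }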
For both HA & non-HA scenarios, switch the puppet-keystone configuration
to run in a WSGI process instead of eventlet.
WSGI is the way to go for scaling Keystone; moreover, eventlet won't be
supported in upcoming OpenStack releases.
Co-Authored-By: Dan Prince <dprince@redhat.com>
Depends-On: I22a348c298ff44f616b2e898f4872eddea040239
Change-Id: I862b4a68f43347564ec3c0ddc4ec9e1d1c755cf2
Signed-off-by: Jason Guiditta <jguiditt@redhat.com>
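Roughly, the puppet side looks like the sketch below (class names from
puppet-keystone; the usual admin_token/database parameters are omitted):
  class { '::keystone':
    service_name => 'httpd',   # stop puppet-keystone from managing the eventlet service
  }
  include ::apache
  class { '::keystone::wsgi::apache':
    ssl => false,              # TLS termination handled elsewhere in this sketch
  }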
During high load, the default limit of the kernel connection tracking
table (65536) is often too low, resulting in error messages such as:
kernel: nf_conntrack: table full, dropping packet
This patch increases the limit to 500,000.
Since the nf_conntrack kernel module is not always loaded by default, it also
adds a mechanism to load kernel modules via hieradata using the kmod puppet
module. To express the needed puppet dependency that kernel modules are loaded
before sysctl settings are applied, the Exec resources tagged with 'kmod::load'
are selected in a resource collector and ordered to run before Sysctl
resources.
Depends-On: I59cc2280ebae315af38fb5008e6ee0073195ae51
Change-Id: Iffa0a77852729786b69945c1e72bc90ad57ce3bb
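The ordering trick looks roughly like this (resource names from the kmod and
sysctl puppet modules; in practice the kmod::load entries come from hieradata):
  kmod::load { 'nf_conntrack': }
  sysctl::value { 'net.nf_conntrack_max':
    value => '500000',
  }
  # Run every Exec tagged by kmod::load before any Sysctl resource:
  Exec <| tag == 'kmod::load' |> -> Sysctl <| |>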
Updated the setting for the Dell Storage Center
API port to use the correct variable name, ::dell_sc_api_port.
Change-Id: I67a7533469947355629b6cb54b79759e21e0ec55
This change will set a common value for 'host' across all
controllers. We previously missed doing so for the NFS backend.
It will still be possible to set a different per-backend 'host'
value by providing it via ExtraData.
Change-Id: I00fd05660a15be3611e1a394650be6ab713670f9
The name of the variable ::eqlx_pool had a typo. This fixes it.
Change-Id: I83a94d4bccf9c9a60c7b37473ae8a64ac050671c
An empty wsrep_notify_cmd was being silently ignored by the mysql puppet
module prior to this commit.[1] However, now that empty values are
allowed, the overcloud deploy fails because the option
--wsrep_notify_cmd requires an argument.
This is not currently failing on master because we are
pinned to an old puppet-mysql. We will need to remove that
pin in order to get onto a newer delorean repo though. Also,
this is breaking the stable/liberty HA job because we use the
packaged OPM there.
[1] https://github.com/puppetlabs/puppetlabs-mysql/commit/e30e0bc958761890ea4f06cdd3f1fc7242a00fe2
Change-Id: I9e07efe1650831e81e9a783428554578874aa765
Closes-Bug: 1537720
Change-Id: Ifd750e634812dae2b7945cbe2f35f98d8a82695e
Depends-On: If88dcdf9f4905e2a792b2fdc656eab51c85f637e
Including ::neutron::config on the controller and compute roles
will allow ad-hoc (non-puppet managed) settings to be made in all
the various neutron config files using Hiera.
Change-Id: Ifadc77cdcb60b7075d091d778cb92b0dd75bd949
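For example, an ad-hoc setting could end up in neutron.conf roughly like the
sketch below (the option shown is purely illustrative; in TripleO the hash
actually arrives via Hiera rather than an explicit class declaration):
  class { '::neutron::config':
    neutron_config => {
      'DEFAULT/dhcp_lease_duration' => { 'value' => '86400' },
    },
  }
The ::cinder::config, ::heat::config, ::glance::config and ::ceph::conf changes
below follow the same pattern for their respective config files.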
Including ::cinder::config on controller and volume roles
will allow ad-hoc (non-puppet managed) settings to be
made in the cinder.conf using Hiera.
Change-Id: I519aff02e3cfb7fbf57e89c7a139564df42f8967
Including ::heat::config on the controller roles will allow
ad-hoc (non-puppet managed) settings to be made in the
heat config file using Hiera.
Change-Id: I80a39b798869ac330ea8a4d01699f5db47c93d47
Including ::glance::config on glance roles will allow ad-hoc
(non-puppet managed) settings to be made in the
glance config files using Hiera.
Change-Id: I7c86ae0e8f1a0a2b46d526598964454cb80319a6
Including ::ceph::conf on ceph roles will allow ad-hoc
(non-puppet managed) settings to be made in the
ceph.conf using Hiera.
Change-Id: I656a0ecde465023d7afad9371aa3c5c270078a67
Some assignments must be fixed in order to make midonet run properly
with HA pacemaker and when network isolation is enabled.
Change-Id: I69fb3a1911cfe3baea3349da8f3e185dddf60a95
Due to a bug [1] in Galera we can't pass an IPv6 address as the
bind-address, so we pass a hostname instead.
1. https://bugzilla.redhat.com/show_bug.cgi?id=1298671
Change-Id: Ia5a5b66dd3e94d3dfb6588550fcfe34382897c27
If the X-Forwarded-Proto header is received by keystone, this option
will make the service properly handle it. This is useful, for instance,
if TLS is enabled for the admin endpoint.
Change-Id: I31a1f51591e8423367e61eafc3af9b2d61278468
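A sketch using puppet-keystone's keystone_config provider; the option name and
section below are my assumption of the setting being toggled and may differ
between releases:
  keystone_config { 'DEFAULT/secure_proxy_ssl_header':
    value => 'HTTP_X_FORWARDED_PROTO',
  }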
Integration of OpenStack data processing service (sahara) with
TripleO.
- Deploys sahara in distributed mode (separate api and engine
processes on each controller node)
- Load balancing w/haproxy
- RabbitMQ/MySQL supported per current TripleO standard
- Minimal configurability at this time
Change-Id: I77a6a69ed5691e3b1ba34e9ebb4d88c80019642c
Partially-implements: blueprint sahara-integration
Depends-On: I0f0a1dc2eaa57d8226bad8cfb250110296ab9614
Depends-On: Ib84cc59667616ec94e7edce2715cbd7dd944f4ae
Depends-On: I9fe321fd4284f7bfd55bd2e69dcfe623ed6f8a2a
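The distributed layout maps onto puppet-sahara roughly as below (class names
assumed from puppet-sahara; all configuration parameters omitted):
  include ::sahara                    # common config: RabbitMQ transport, MySQL database
  include ::sahara::service::api      # sahara-api, load balanced by HAProxy
  include ::sahara::service::engine   # sahara-engine, one per controller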
- Adds a parameter to enable switching off the token flush cron job.
- Sets the destination for deleted rows to /dev/null
Change-Id: I9e8aed969e81595d8a1d0a5300da17da6ba15c03
Partial-bug: rhbz#1249106
Depends-On: I5e51562338f68b4ba1b2e942907e6f6a0ab7a61e
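A minimal sketch, assuming the destination parameter of
keystone::cron::token_flush and a hypothetical hiera flag for switching the
job off:
  if hiera('keystone_enable_token_flush_cron', true) {  # hypothetical flag name
    class { '::keystone::cron::token_flush':
      destination => '/dev/null',   # discard the flush output
    }
  }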
Enables support for configuring Cinder with a Dell
Storage Center iSCSI storage backend.
This change adds all relevant parameters for:
- Dell Storage Center SC Series (iSCSI)
Change-Id: I3b1a4346f494139ab123c7dc1a62f81d03c9e728
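The resulting backend definition looks roughly like this sketch of
puppet-cinder's cinder::backend::dellsc_iscsi (all values are placeholders):
  cinder::backend::dellsc_iscsi { 'tripleo_dellsc':
    san_ip           => '192.0.2.10',
    san_login        => 'Admin',
    san_password     => 'secret',
    dell_sc_ssn      => '64702',
    iscsi_ip_address => '192.0.2.11',
    iscsi_port       => '3260',
    dell_sc_api_port => '3033',
  }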
Creates a cron job that runs "cinder-manage db purge"
every 24 hours.
Partial-bug: rhbz#1249106
Change-Id: I9156e0bf1401eda49a7c9a2921dc3a8723af026d
Depends-On: I677f2ef3d9ca81fff0f672c8e34b6e4278674a96
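In puppet terms this is approximately the following, assuming puppet-cinder's
cinder::cron::db_purge class (parameter values illustrative):
  class { '::cinder::cron::db_purge':
    minute      => '1',
    hour        => '0',           # once a day, i.e. every 24 hours
    age         => '30',          # purge rows soft-deleted more than 30 days ago
    destination => '/dev/null',
  }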
- keeping ceph enabled based on the ceph node count being greater than 0
- additionally enabling it if ControllerEnableCephStorage is true
The intention here is to be able to run ceph without having dedicated
nodes for it. Enabling Ceph alternatively via the ControllerEnableCephStorage
parameter allows ceph to be colocated on the controllers without
having to run any dedicated ceph nodes.
Change-Id: I71062d37226c679156380c0f4e194b51cb586bcf
Signed-off-by: Dan Radez <dradez@redhat.com>
Based on observed timeouts during updates, bump the stop and start
timeouts for pacemaker service resources (via op_params) to 200s.
The reasoning is that the full timeout may be as long as two elapsed
timeout intervals: after an initial timeout, the sigterm that follows
is allowed another DefaultTimeoutStopSec seconds. The 200s is produced
by allowing for 2 x DefaultTimeoutStopSec (90s for systemd) plus some
scheduling delta. Many thanks to Michele Baldessari.
Closes-Bug: 1531204
Change-Id: If6b43982c958f63bc78ad997400bf1279c23df7e
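For a given service resource this amounts to something like the sketch below
(the resource name is illustrative, using puppet-pacemaker's service wrapper):
  pacemaker::resource::service { 'openstack-cinder-api':
    op_params    => 'start timeout=200s stop timeout=200s',
    clone_params => 'interleave=true',
  }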
Adds a TimeZone parameter for node types and the top level
stack. Defaults to UTC.
Change-Id: I98123d894ce429c34744233fe3e631cbdd7c12b5
Depends-On: Icf7c681f359e3e48b653ea4648db6a73b532d45e
Creates a cron job that runs "nova-manage db archive_deleted_rows"
every twelve hours.
Partial-bug: rhbz#1249106
Depends-On: Ic674f4d39bc88f89abfeb0ce99a571c2534e57e4
Change-Id: I4740cc02aa9714f48798521fe9918ac3487db031
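Sketched with puppet-nova's nova::cron::archive_deleted_rows class (parameter
values illustrative):
  class { '::nova::cron::archive_deleted_rows':
    hour        => '*/12',      # every twelve hours
    destination => '/dev/null',
  }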
Deploy a TripleO overcloud with MidoNet networking. MidoNet is a
monolithic plugin and significant changes to the puppet manifests are
required.
Depends-On: I72f21036fda795b54312a7d39f04c30bbf16c41b
Depends-On: I6f1ac659297b8cf6671e11ad23284f8f543568b0
Depends-On: Icea9bd96e4c80a26b9e813d383f84099c736d7bf
Change-Id: I9692e2ef566ea37e0235a6059b1ae1ceeb9725ba
Wires the following as arrays to the neutron module:
- mechanism_drivers
- flat_networks
- tenant_network_types
- tunnel_types
- bridge_mappings
Also updates the template version to use a Liberty feature which
allows serialization of comma_delimited_list into JSON.
Tidies up the manifests by removing the class declarations since
config is passed by the puppet/controller+compute hiera mapped_data.
Change-Id: Ie9f85fb827099f897ef750e267bc3ed3a864fe59
Co-Authored-By: Steven Hardy <shardy@redhat.com>
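On the receiving end, puppet-neutron consumes these as native arrays; in
TripleO the values arrive via the hiera mapped_data rather than explicit class
declarations, so the sketch below (with illustrative values) only shows the
array form:
  class { '::neutron::plugins::ml2':
    mechanism_drivers    => ['openvswitch', 'l2population'],
    flat_networks        => ['datacentre'],
    tenant_network_types => ['vxlan', 'vlan'],
  }
  class { '::neutron::agents::ml2::ovs':
    bridge_mappings => ['datacentre:br-ex'],
    tunnel_types    => ['vxlan'],
  }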
neutron-server-start-wait-stop is a dangerous Exec that is exposed to
race conditions, because it has no "onlyif" or "unless" statements.
That means that during a deployment this Exec can run in the wrong
order, at Step 5 and/or 6, while it was supposed to run at Step 4 only.
If that happens, the Exec fails because puppet tries to start
neutron-server while Pacemaker has already started the resource. In
that case systemd returns 1 to Puppet, which returns 6 to the overcloud
deployment, and the deployment fails to finish correctly.
This patch prevents that scenario by making sure we run the
Exec only during Step 4.
Also, in order to secure it a bit more, we add an 'unless' statement to
this Exec, so the Puppet run stays idempotent and the Exec runs
successfully only once.
https://bugzilla.redhat.com/show_bug.cgi?id=1290582
Change-Id: I42813c5cff6c525c15c9c24baad4e355f88af672
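The shape of the fix, sketched (the command and the 'unless' guard below are
illustrative, not the exact ones used):
  if hiera('step') == 4 {
    exec { 'neutron-server-start-wait-stop':
      command => 'systemctl start neutron-server && sleep 5 && systemctl stop neutron-server',
      path    => ['/usr/bin', '/usr/sbin', '/bin', '/sbin'],
      unless  => 'systemctl is-active neutron-server',  # skip once Pacemaker already runs it
    }
  }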