Increases max_connections, since it is currently set to 151, which
causes problems in a baremetal environment with multiple CPUs. A
related change for haproxy is at
https://review.openstack.org/#/c/183046/2. There is also a bug report
at https://bugzilla.redhat.com/show_bug.cgi?id=1218322
Change-Id: I9b4690191616cc04c4edc7b2402bd9ec54a7d17d
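
A minimal sketch of how such an override can be expressed with the
puppetlabs-mysql module; the hiera key and the value shown are
illustrative assumptions, not taken from the change itself:

    # Sketch: raise max_connections via override_options (value assumed).
    class { '::mysql::server':
      override_options => {
        'mysqld' => {
          'max_connections' => hiera('mysql_max_connections', 1024),
        },
      },
    }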
This commit aims to support the creation of the Galera cluster via
Pacemaker. With this commit in, three use cases are supported.
* Non-HA setup / non-Pacemaker setup: the deployment proceeds as it
currently does in f20puppet-nonha. Nothing changes.
* Non-HA setup / Pacemaker setup: even though it is a non-HA setup, a
Galera cluster is deployed via Pacemaker with a cluster size of 1.
* HA setup / non-Pacemaker setup: N/A
* HA setup / Pacemaker setup: an HA setup is assumed to always use
Pacemaker, so in this situation Pacemaker deploys a cluster of three
Galera master nodes.
Depends-On: I7aed9acec11486e0f4f67e4d522727476c767d83
Change-Id: If0c37a86fa8b5aa6d452129bccf7341a3a3ba667
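
A hedged sketch of what a Pacemaker-managed Galera resource can look
like, using the pacemaker::resource::ocf define from puppet-pacemaker;
the define name, parameter set, and $galera_nodes are assumptions for
illustration, not the change's exact code:

    # Sketch: Galera as a Pacemaker-managed OCF master resource.
    # $galera_nodes and all parameter values are assumed.
    pacemaker::resource::ocf { 'galera':
      ocf_agent_name  => 'heartbeat:galera',
      master_params   => '',
      meta_params     => 'master-max=3 ordered=true',
      resource_params => "enable_creation=true wsrep_cluster_address='gcomm://${galera_nodes}'",
      require         => Class['::mysql::server'],
    }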
Use optimized configuration settings for RabbitMQ when clustered. The
settings are ported from Astapor.
Change-Id: If54aff5654dbe75e68197588be12cb3995c77ec7
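
A sketch of clustered RabbitMQ settings via the puppetlabs-rabbitmq
class; the parameter values here are illustrative, the actual
Astapor-derived settings live in the change itself:

    # Sketch: clustered RabbitMQ configuration (values assumed).
    class { '::rabbitmq':
      config_cluster           => true,
      cluster_nodes            => hiera('rabbit_nodes'),
      cluster_node_type        => 'disc',
      wipe_db_on_cookie_change => true,
    }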
The puppet-pacemaker module already provides an abstraction over the
different service types in ::service.
Change-Id: Icd897e18fda01b1bf4722a975c991e26341ac129
Closes-Bug: 1449988
This patch adds support for using the Heat resource registry so that
end users can enable Pacemaker. This approach allows us to isolate all
of the Pacemaker logic for the controller in a single template, rather
than using conditionals in every service that must support it.
Change-Id: Ibefb80d0d8f98404133e4c31cf078d729b64dac3
This commit allows one to configure MongoDB as a Pacemaker resource
when EnablePacemaker is set to true.
Change-Id: Iedfba3eb851442d0ca3b8c0a7163a63285ab6071
This patch adds support for a new GlanceBackend setting
which can be set to one of swift, rbd, or file to control
which Glance backend is configured for use by default.
Change-Id: Id6a3fbc3477e85e8e2446e3dc13d424f9535d0ff
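
A sketch of how such a backend switch is typically wired up in the
manifest; the store class names are assumed from the Glance of that
era:

    # Sketch: select the Glance store from a GlanceBackend-style value.
    $glance_backend = downcase(hiera('glance_backend', 'swift'))
    case $glance_backend {
      'swift': { $backend_store = 'glance.store.swift.Store' }
      'file':  { $backend_store = 'glance.store.filesystem.Store' }
      'rbd':   { $backend_store = 'glance.store.rbd.Store' }
      default: { fail("Unrecognized glance_backend: ${glance_backend}") }
    }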
This reverts commit 7313930c22b9f18d67e630de084ffcc6fad5ebe7.
Seeing errors when trying to create the Keystone admin role with
packages (ImportError: No module named os_client_config).
Change-Id: I78796598ccb8d2ffd6bfca85dce7d18dc0fd768e
Related-bug: #1450786
Ceilometer can use different backends. A recent change moved backend
support for Ceilometer from MySQL to MongoDB. This commit introduces
greater flexibility, letting the deployer choose whether MySQL or
MongoDB should be used as the backend for Ceilometer.
Change-Id: I0d5bfb0763cbcee234df7ab13574d866743d5ddf
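
A sketch of the kind of conditional this enables; the hiera keys and
connection strings are illustrative assumptions:

    # Sketch: pick the Ceilometer database connection by backend.
    $ceilometer_backend = downcase(hiera('ceilometer_backend', 'mongodb'))
    if $ceilometer_backend == 'mysql' {
      $ceilometer_database_connection = hiera('ceilometer_mysql_conn_string')
    } else {
      $mongo_node_string = join(hiera('mongo_node_ips'), ',')
      $ceilometer_database_connection = "mongodb://${mongo_node_string}/ceilometer"
    }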
Change-Id: I43a74c1db324144d33e96a94cb718db30e0fd243
Change-Id: I6bf5ada5a5298f4079594f3cc8b01ac0ef85876e
Change-Id: I45511569fda6b00ca35b1e590537a29271e56ce0
Depends-On: I98b9b3dbc48009ce255d964ac580e1a31f279f1e
Install the OpenStack Dashboard (Horizon) on the Overcloud Controller
with Puppet.
Co-Authored-By: Giulio Fidente <gfidente@redhat.com>
Depends-On: If9b12d373e407be8be8428d77145f131eb450e88
Change-Id: I254e895014f58a51dade3dcdc63eabbb5dc458ac
This patch adds support for configuring a Keystone domain for Heat via
the heat-keystone-setup-domain script. It should be reverted as soon as
Keystone v3 is fully functional.
Change-Id: I7397f49fac17c30262d02b70021d613aef5c6cad
Adds a new ControllerEnableSwiftStorage parameter that can be used to
enable/disable use of the controller node as a Swift storage node.
Change-Id: Ic54144f4a46a671818c2f12e419cfa619b0dc1f9
This patch adds a new ControllerEnableCephStorage option
which can be used to install and configure Ceph storage
(OSD) on the controller node.
This is disabled by default (probably the more production-like
setting).
The motivation for this change is to help facilitate CI
jobs which actually use Ceph. Right now we have an issue
where once the Heat stack finishes Ceph is configured
and ready, but Cinder volume (required by our CI
devtest_overcloud.sh test) may or may not have had
enough time to recognize the amount of storage
on the remote Ceph storage nodes. Waiting another
periodic cycle for Cinder volume to recognize the
actual amount of storage on the remote OSD nodes
would work but there isn't a good way to do this
ATM. The right solution here is probably to
implement Heat breakpoints in our CI. As we haven't quite
landed that change, another option is to simply
make the controller node also be a Ceph storage node.
Since this runs as "step 2" within the controller
it ensures that the OSD will be available and thus
Cinder volume will register the correct amount of
storage on startup.
Enabling this feature also matches what we do with Swift
storage on the Controller (although we should provide
an option to actually disable this as well).
Change-Id: Ic47d028591edbaab83a52d7f38283d7805b63042
Depends-On: Ia1bbf53c674e34ba7c70249895b106ec0af3c249
Change-Id: Ifa9f579d26a3cba9f8705226984c7b987ae0ad1c
Add support for Redis configuration on the overcloud controller role.
Change-Id: I917ff1e7c0abf9d76b9939a97978e858268deac2
Depends-On: I80a6c284af9eceb6b669a03c5d93256261523331
On the Controller node, we also need to include ::glance if we want the
common Glance bits (packaging included) to be managed. This is a Puppet
best practice.
Change-Id: I967c06b2c78d8f3aa5fa984b518d34c813426a2e
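
The pattern in question, roughly: include the common class before the
service-specific ones, so the shared bits are managed once.

    # Include common Glance bits (packaging etc.) before the services.
    include ::glance
    include ::glance::api
    include ::glance::registry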
This patch configures Ceilometer to use the MongoDB backend.
Change-Id: I22be0e22e7a3991ebd2d3aa7d14c518418a2458a
Currently a replset parameter is set in mongodb.conf regardless of
whether we are in an HA or non-HA setup. This installs fine, but in a
non-HA setup it prevents any program from using MongoDB, since no
replica set has been initialized. A program trying to use MongoDB then
gets the following error:
not master and slaveOk=false
To prevent this issue, a replica set is initialized in both HA and
non-HA setups; this way, if another MongoDB node is added to the pool,
it can attach automatically.
Change-Id: I65e3f1ad35cb0cd31f6771444a0cffdf7569222f
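
A sketch of the initialization with the puppetlabs-mongodb module; the
replica set name and hiera keys are assumptions:

    # Sketch: always initialize a replica set, even with a single member,
    # so clients can connect and extra members can join later.
    $mongodb_replset = hiera('mongodb::server::replset', 'tripleo')
    mongodb_replset { $mongodb_replset:
      ensure  => present,
      members => hiera('mongo_node_ips'),
    }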
Change-Id: I0655b7cae2c436944833894bf9837877b3a69878
This patch aims to configure the MongoDB server on controller nodes
with Puppet.
It also creates a default replica set for Ceilometer, so MongoDB can be
highly available when multiple controllers are run.
Change-Id: I3c1ff06ebc3c9dac44fc790caaea711d0eba4bb7
Change-Id: Ia2e4eae619ca95c0f417f713676732eb4f01304b
Depends-On: I9563eec0a2266deb2ebef2e3d76ae89d39b2be29
Despite passing bind-address for MariaDB correctly in
overcloud_controller.pp, it always tried to bind on 0.0.0.0. The
problem is caused by Galera's config file (we install Galera into the
image even though we don't use it yet). Galera's default config file
overrides the bind-address value to 0.0.0.0, and the setting from
galera.cnf took precedence over what was in server.cnf.
The mariadb-galera-server package assumes that the main configuration
happens in galera.cnf and ships an almost empty server.cnf. We now have
an EnableGalera param; when it is set to true, the mysql module manages
galera.cnf instead of server.cnf, overriding the default values from
galera.cnf and fixing the issue.
Change-Id: I7c2fd41d41dcf5eb4ee8b1dbd74d60cc2cabeed9
Closes-Bug: #1442256
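
A sketch of the shape of the fix with puppetlabs-mysql; the file paths
and hiera keys are assumptions:

    # Sketch: manage galera.cnf when EnableGalera is set, so our values
    # override the package's bind-address=0.0.0.0 default.
    $enable_galera = hiera('enable_galera', true)
    $mysql_config_file = $enable_galera ? {
      true    => '/etc/my.cnf.d/galera.cnf',
      default => '/etc/my.cnf.d/server.cnf',
    }
    class { '::mysql::server':
      config_file      => $mysql_config_file,
      override_options => {
        'mysqld' => { 'bind-address' => hiera('controller_host') },
      },
    }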
Passing the key explicitly into nova::compute::rbd means that Puppet
will not attempt to fetch the key using `ceph auth get-key <keyring>`,
which has these effects:
* One reason for the compute node to have access to the client.admin
key is gone (in the current implementation it does have access to the
key, but this change is a step towards removing it).
* The Ceph cluster doesn't have to be running at the time Puppet runs
on the compute node, meaning we don't have to serialize things more
than we do now.
Also adds ComputeCephDeployment as a dependency of
ComputePostDeployment; otherwise the hiera file it creates might be
written *after* Puppet configuration happens on the compute nodes, and
the values it provides would be missing during the Puppet run.
Change-Id: Id3166e6d5f01d18ec8a5033398bb511f4321a5e8
Depends-On: I70da06159c0d3c6fa204b5f7a468909ffab4d633
Partial-Bug: #1439949
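
A sketch of passing the key explicitly; the parameter names follow the
puppet-nova module of that era and, like the hiera keys and values,
should be treated as assumptions:

    # Sketch: hand the key to Puppet directly so it never shells out to
    # 'ceph auth get-key' and needs no live cluster at apply time.
    class { '::nova::compute::rbd':
      libvirt_rbd_user        => 'openstack',
      libvirt_rbd_secret_uuid => hiera('ceph::profile::params::fsid'),
      libvirt_rbd_secret_key  => hiera('nova_rbd_secret_key'),
      libvirt_images_rbd_pool => 'vms',
    }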
This should have been removed with change
I1bb8ee15d361638d77c5df7f8c03561c34f4c88f
Change-Id: I20d4099aabe5ae9f89db45fd3db585067cab01f5
This updates all of the Puppet roles to use an optional osfamily
hieradata file which can be used to provide distro-specific settings.
It also updates the controller role to make use of this new file for
setting the rabbitmq package_provider parameter.
Change-Id: I46417db51b87b82bf276dfcef5647a90c37fb07d
Currently tripleo::loadbalancer allows a controller to have only itself
as a backend for a service, regardless of the number of controller
nodes. This patch fixes that by using all available controller nodes.
Change-Id: Ic8fc022b84850c669b19d37da7f275d9c811e694
Depends-On: I2a46c250bc3325eef9c3128cac2ab45c88b1ae75
This resolves a formatting issue with the Cinder enabled_backends
config setting. Previously we could construct an array with an undef
value at the end if iscsi was enabled but ceph was not (the case for
our current CI job). When an array like ['tripleo_iscsi', undef] is
then passed to join() in puppet-cinder to construct a string, it leaves
an extra ',' at the end of the string. This causes cinder-volume to
load an extra (system default) volume process, which is not expected.
Because Fedora uses LIO as the default, this caused about half of our
CI runs to fail when the tgtadm cinder-volume process wasn't chosen by
the scheduler.
Closes-bug: #1437708
Change-Id: I3383012cb43792f334fdf789dc13147a3cb5ad63
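
The shape of the fix, sketched with the delete_undef_values() function
from puppetlabs-stdlib; the variable names are assumptions:

    # Sketch: drop undef entries before the array reaches join() in
    # puppet-cinder, so enabled_backends has no trailing comma.
    $cinder_enabled_backends = delete_undef_values([
      $cinder_iscsi_backend_name, # e.g. 'tripleo_iscsi', undef if disabled
      $cinder_rbd_backend_name,   # undef when the Ceph backend is off
    ])
    class { '::cinder::backends':
      enabled_backends => $cinder_enabled_backends,
    }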
A change [1] in puppet-ceph offers more flexibility but breaks
backwards compatibility, so we had to update our composition layer as
well; in exchange, we gain control of the cephx keyring in the
template.
1. Ie6adbd601388ab52c37037004bd0ceef9fc41942
Change-Id: Ia8196849afce2969daa608828cec81ebe3ac96e1
Compute nodes run libvirt, which automatically creates a default
network with the same address space (192.168.122.*) as the libvirt
default network on the host machine where devtest is running. Because
of this overlap, when a compute node sends a packet to the host machine
(192.168.122.1), it is incorrectly routed through the compute node's
own virbr0 instead of br-ex. The current solution does not seem to be
enough, because libvirt is started and creates the default network
before Puppet is triggered on the compute nodes. Making sure the
libvirt default network is destroyed on the compute node fixes the
issue.
We don't have any Puppet modules in OPM that deal with libvirt
networks, and it's probably not worth exploring and adding one for this
small issue (I don't expect another use case for managing libvirt
networks directly), so I'm using an exec with a proper idempotency
check.
Change-Id: Icde12aa204ed1f7fa35b0525875ce07db34dc42c
Closes-Bug: #1436822
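
The exec in question looks roughly like this; the onlyif guard is the
idempotency check mentioned above:

    # Destroy libvirt's default network on compute nodes; onlyif keeps
    # the exec idempotent across Puppet runs.
    exec { 'libvirt-default-net-destroy':
      command => '/usr/bin/virsh net-destroy default',
      onlyif  => '/usr/bin/virsh net-info default | /bin/grep -i "^active:\s*yes"',
      before  => Service['libvirt'],
    }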