Adds a new ControllerEnableSwiftStorage parameter that
can be used to enable or disable use of the controller node
as a Swift storage node.
Change-Id: Ic54144f4a46a671818c2f12e419cfa619b0dc1f9
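A rough sketch of how such a flag can be consumed on the Puppet side; the
hiera key and the swift class shown are illustrative assumptions, not the
actual template wiring:

  # Hypothetical composition-layer guard for the Swift storage flag.
  if str2bool(hiera('controller_enable_swift_storage', 'true')) {
    include ::swift::storage::all
  }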
This patch adds a new ControllerEnableCephStorage option
which can be used to install and configure Ceph storage
(OSD) on the controller node.
This is disabled by default (which is probably the more
production-like setting).
The motivation for this change is to help facilitate CI
jobs which actually use Ceph. Right now we have an issue
where, once the Heat stack finishes, Ceph is configured
and ready, but Cinder volume (required by our CI
devtest_overcloud.sh test) may or may not have had
enough time to recognize the amount of storage
on the remote Ceph storage nodes. Waiting another
periodic cycle for Cinder volume to recognize the
actual amount of storage on the remote OSD nodes
would work, but there isn't a good way to do this
ATM. The right solution here is probably to
implement Heat breakpoints in our CI. As we haven't quite
landed that change, another option is to simply
make the controller node also be a Ceph storage node.
Since this runs as "step 2" within the controller
it ensures that the OSD will be available and thus
Cinder volume will register the correct amount of
storage on startup.
Enabling this feature also matches what we do with Swift
storage on the Controller (although we should provide
an option to actually disable this as well).
Change-Id: Ic47d028591edbaab83a52d7f38283d7805b63042
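A hedged sketch of the gating this describes; the hiera key and the
puppet-ceph profile class are assumptions, and the real templates run this
as part of "step 2" on the controller:

  # Hypothetical guard: only set up an OSD on the controller when the
  # ControllerEnableCephStorage flag is enabled.
  if str2bool(hiera('controller_enable_ceph_storage', 'false')) {
    include ::ceph::profile::osd
  }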
Depends-On: Ia1bbf53c674e34ba7c70249895b106ec0af3c249
Change-Id: Ifa9f579d26a3cba9f8705226984c7b987ae0ad1c
These appear in the Tuskar UI and CLI, so they are worth keeping
consistent with those of the controller/compute nodes.
Change-Id: I7cdd3a67d6f190f43e279fad0c4bf5f409d1e161
Add support for Redis configuration on the overcloud controller role.
Change-Id: I917ff1e7c0abf9d76b9939a97978e858268deac2
Depends-On: I80a6c284af9eceb6b669a03c5d93256261523331
On the controller node, we also need to include ::glance if we want the
common Glance bits (packaging included) to be managed.
This is a Puppet best practice.
Change-Id: I967c06b2c78d8f3aa5fa984b518d34c813426a2e
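A minimal sketch of the pattern being described, using the usual
puppet-glance service classes:

  # Include the base class so shared resources (packaging, common config)
  # are managed, alongside the per-service classes.
  include ::glance
  include ::glance::api
  include ::glance::registry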
This patch configures Ceilometer to use the MongoDB backend.
Change-Id: I22be0e22e7a3991ebd2d3aa7d14c518418a2458a
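A hedged sketch of what that configuration amounts to, assuming
puppet-ceilometer's db class and a placeholder MongoDB address:

  # Point Ceilometer at MongoDB instead of a SQL backend; the host below
  # is a stand-in for the controller's MongoDB endpoint.
  class { '::ceilometer::db':
    database_connection => 'mongodb://192.0.2.10:27017/ceilometer',
  }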
Currently a replset parameter is set in mongodb.conf whether we are in
an HA or non-HA setup. This installs fine, but in a non-HA setup it
prevents any program from using MongoDB, since no replica set has been
initialized. A program that tries to use it gets the following error:
not master and slaveOk=false
To prevent this issue, a replica set is initialized in both the HA and
non-HA setups; this way, if another MongoDB node is added to the pool,
it will be able to attach automatically.
Change-Id: I65e3f1ad35cb0cd31f6771444a0cffdf7569222f
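A rough sketch of the approach with puppetlabs-mongodb; the replica set
name and member address are placeholders:

  # Declare the replset name on the server and initialize it even with a
  # single member, so clients can write (avoids "not master and slaveOk=false").
  class { '::mongodb::server':
    replset => 'tripleo',
  }
  mongodb_replset { 'tripleo':
    members => ['192.0.2.10:27017'],
  }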
Change-Id: I0655b7cae2c436944833894bf9837877b3a69878
This patch aims to configure the MongoDB server on controller nodes
with Puppet.
It also creates a default replica set for Ceilometer, so MongoDB can be
highly available when multiple controllers are run.
Change-Id: I3c1ff06ebc3c9dac44fc790caaea711d0eba4bb7
Change-Id: Ia2e4eae619ca95c0f417f713676732eb4f01304b
Depends-On: I9563eec0a2266deb2ebef2e3d76ae89d39b2be29
Despite passing bind-address for MariaDB in overcloud_controller.pp
correctly, it was always trying to bind on 0.0.0.0. The problem is
caused by Galera's config file (we install Galera into the image even
though we don't use it yet). Galera's default config file contains
override of the bind-address value to 0.0.0.0, and the setting from
galera.cnf took precedence over what was in server.cnf.
The mariadb-galera-server package assumes that the main config happens
in galera.cnf and it ships an almost empty server.cnf. We now have an
EnableGalera param; when it is set to true, the mysql module will manage
galera.cnf instead of server.cnf, overriding the default values from
galera.cnf and fixing the issue.
Change-Id: I7c2fd41d41dcf5eb4ee8b1dbd74d60cc2cabeed9
Closes-Bug: #1442256
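A sketch of the idea with puppetlabs-mysql; the hiera keys and file paths
below are assumptions, not the exact template wiring:

  # Let the mysql module own galera.cnf when Galera is enabled, so its
  # bind-address override can no longer shadow server.cnf.
  $enable_galera = str2bool(hiera('enable_galera', 'true'))
  $mysql_config_file = $enable_galera ? {
    true    => '/etc/my.cnf.d/galera.cnf',
    default => '/etc/my.cnf.d/server.cnf',
  }
  class { '::mysql::server':
    config_file      => $mysql_config_file,
    override_options => {
      'mysqld' => { 'bind-address' => hiera('controller_host') },
    },
  }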
It's very confusing for them to be different, especially in the case of
comparing Tuskar vs non-Tuskar deployments where the parameters are read
from different files.
Note: NeutronPhysicalBridge is named differently in the overcloud
template (HypervisorNeutronPhysicalBridge). This is the only parameter
checked that isn't named exactly the same; hopefully there aren't any
others.
(Checked controller, compute, ceph, cinder, and swift for both puppet
and non-puppet templates)
Change-Id: I48ce1eb40d2d080c589ce619c50eddff17efe882
Passing the key explicitly into nova::compute::rbd means that Puppet
will not attempt to fetch the key using `ceph auth get-key <keyring>`,
which has these effects:
* One reason for the compute node to have access to the client.admin key
is gone (in the current implementation it does have access to the key,
but this change is a step towards removing it).
* The Ceph cluster doesn't have to be running at the time when Puppet
runs on the compute node, meaning we don't have to serialize things
more than we do now.
Also adding the ComputeCephDeployment as a dependency of
ComputePostDeployment, otherwise the hiera file it creates might be
created *after* Puppet configuration happens on compute nodes, and the
values it provides would be missing during the Puppet run on the compute
nodes.
Change-Id: Id3166e6d5f01d18ec8a5033398bb511f4321a5e8
Depends-On: I70da06159c0d3c6fa204b5f7a468909ffab4d633
Partial-Bug: #1439949
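A hedged sketch of what passing the key explicitly looks like; the
parameter and hiera names follow my reading of puppet-nova's rbd class
and should be treated as assumptions:

  # With the key supplied directly, Puppet has no reason to shell out to
  # `ceph auth get-key`, so the Ceph cluster need not be up during the run.
  class { '::nova::compute::rbd':
    libvirt_rbd_user        => 'openstack',
    libvirt_rbd_secret_uuid => hiera('ceph_cluster_fsid'),
    libvirt_rbd_secret_key  => hiera('ceph_client_key'),
  }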
Change-Id: I353cffc13f56b54ce2d2aeb1468b9a7c51765d7c
Change-Id: I06f7066bf9eacf3ef0f5d73c0cfa65eaf4f74cff
Change-Id: Id193f8c13e3ad3e05bd884be5ba65621b9369d0e
This should have been removed with change
I1bb8ee15d361638d77c5df7f8c03561c34f4c88f
Change-Id: I20d4099aabe5ae9f89db45fd3db585067cab01f5
Ceph will not be supported in the (already) deprecated with-mergepy
templates.
Change-Id: If6482b4ac03899ea552442edf01ebfeb4fb97a7a
When trying out Ceph functionally, the CephClusterFSID parameter
must be a UUID.
Additionally, the MonKey and AdminKey parameters should be
generated via ceph-authtool (or equivalently generated) to
ensure they work properly with the Ceph configuration.
Change-Id: I0c327843ef225d330d1c668f53324973c78d3505
Currently it is possible to know the hostname of the bootstrap node but
not its IP. Since, depending on the use case, the IP might be needed, a
way to access this information should be provided.
Change-Id: I9d0a7ee7de2088ddb87e0d8a8ae2b3ac75b0e78d
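A small sketch of how a manifest might consume both values; the hiera key
for the IP is an assumption mirroring the existing bootstrap_nodeid key:

  # Run one-off bootstrap tasks only on the bootstrap node, and hand its
  # IP to services that need an address rather than a name.
  $bootstrap_node   = hiera('bootstrap_nodeid')
  $bootstrap_nodeip = hiera('bootstrap_nodeid_ip')
  if downcase($::hostname) == downcase($bootstrap_node) {
    notice("This node (${bootstrap_nodeip}) is the bootstrap node")
  }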
This updates all of the puppet roles to use an optional
osfamily hieradata file which can be used to provide
distro-specific settings.
It also updates the controller role to make use of this
new file for setting the rabbitmq package_provider
parameter.
Change-Id: I46417db51b87b82bf276dfcef5647a90c37fb07d
Propagate the top-level Debug parameter wherever it makes sense.
Swift doesn't have this kind of debug setting; it only allows log
levels to be configured, so we'll need a different approach there.
Change-Id: I15332315a2fbaeaf924cde4e748fb0e064a778b7
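As a sketch, the propagation typically ends up as the same class parameter
on each service; the hiera key name here is an assumption:

  # Every service that understands a debug flag gets the top-level value.
  class { '::nova':
    debug => hiera('debug', false),
  }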
Currently tripleo::loadbalancer allows a controller to have only itself
as a backend for a service, no matter the number of controller nodes.
This patch fixes that by using all available controller nodes.
Change-Id: Ic8fc022b84850c669b19d37da7f275d9c811e694
Depends-On: I2a46c250bc3325eef9c3128cac2ab45c88b1ae75
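A sketch of the composition-layer side; the controller_hosts parameter and
the comma-separated controller_node_ips hiera key are assumptions based on
the description above:

  # Feed every controller into the load balancer instead of only the local node.
  class { '::tripleo::loadbalancer':
    controller_hosts => split(hiera('controller_node_ips'), ','),
  }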
This resolves a formatting issue with the Cinder enabled_backends
config file setting. Previously we would potentially construct
an array with an undef value at the end if iscsi was enabled
but ceph was not (this is the case for our current CI job).
When an array formatted like ['tripleo_iscsi', undef] is then
passed to join() in puppet-cinder to construct a string, it leaves
us with an extra ',' on the end of the string. This causes
problems in that cinder-volume loads an extra (system default)
cinder volume process which is not expected.
Because Fedora uses LIO as the default, this was causing about half
of our CI runs to fail when the tgtadm cinder-volume process
wasn't chosen by the scheduler.
Closes-bug: #1437708
Change-Id: I3383012cb43792f334fdf789dc13147a3cb5ad63
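A minimal sketch of the fix as described, using stdlib's
delete_undef_values; the backend variables are placeholders:

  # Drop undef entries before the list reaches join(), so no trailing comma
  # and no accidental extra cinder-volume backend.
  $backends = delete_undef_values([$iscsi_backend_name, $ceph_backend_name])
  class { '::cinder::backends':
    enabled_backends => $backends,
  }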
A change [1] in puppet-ceph offers more flexibility but breaks
backwards compatibility, so we had to update our composition layer as
well; we gain control of the cephx keyring in the template though.
1. Ie6adbd601388ab52c37037004bd0ceef9fc41942
Change-Id: Ia8196849afce2969daa608828cec81ebe3ac96e1
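For illustration, controlling the cephx keyring from the template roughly
means declaring it ourselves; the key name and capabilities below are
placeholders:

  # The keyring is declared by our composition layer rather than implied
  # by puppet-ceph defaults.
  ceph::key { 'client.openstack':
    secret  => hiera('ceph_client_key'),
    cap_mon => 'allow r',
    cap_osd => 'allow rwx',
  }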
Compute nodes run libvirt, which automatically creates a default network
which has the same address space (192.168.122.*) as the libvirt default
network on the host machine where devtest is running. This overlap
means that when a compute node wants to send a packet to the host
machine (192.168.122.1), it gets incorrectly routed through the compute
node's own virbr0 instead of br-ex. The current solution does not seem
to be enough because libvirt gets started and creates the default
network before Puppet is triggered on compute nodes. Making sure the
libvirt default network is destroyed on the compute node fixes the
issue.
We don't have any puppet modules in OPM that would deal with libvirt
networks and it's probably not worth exploring and adding one because of
this small issue (I don't expect another use case for managing libvirt
networks directly), so I'm using an exec with a proper idempotency
check.
Change-Id: Icde12aa204ed1f7fa35b0525875ce07db34dc42c
Closes-Bug: #1436822
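A sketch of that exec with its idempotency guard; the resource title,
paths, and grep pattern are assumptions:

  # Destroy libvirt's default network only if it is currently active, so
  # the exec is a no-op on subsequent Puppet runs.
  exec { 'libvirt-default-net-destroy':
    command => '/usr/bin/virsh net-destroy default',
    onlyif  => '/usr/bin/virsh net-info default | /bin/grep -qi "^active:.*yes"',
  }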
We need a list of hosts where MongoDB is supposed to run (as a list of
IP addresses, not names) to implement MongoDB support in the overcloud.
Change-Id: I4b80f13be7e50630314d0642fa32b7763b6a2921
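A small sketch of how a manifest can consume such a list; the hiera key
and port handling are assumptions:

  # Turn the list of MongoDB host IPs into host:port pairs for clients.
  $mongo_node_ips  = hiera('mongo_node_ips')
  $mongodb_servers = suffix($mongo_node_ips, ':27017')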
* Create hiera file 'all_nodes' instead of 'rabbit' -- we'll want
allNodesConfig to create keys for more services (e.g. mongo_node_ips)
and it's not necessary to create a separate hiera file for each.
* Rename rabbit_nodes to mongo_node_names -- we'll have more node lists,
some services will need hostnames, some services will need IPs, some
might need both, so we shouldn't have ambiguity in the hiera key
names.
Change-Id: If80f9c9b2849ae893e1ab78f1c4d246a2468665c
The purpose of this change is to enable, on the server
side, the ha-mode policy for all queues when nodes
are clustered.
Change-Id: I16e3d375aabac9dbcdc198c71069086951e40fc0
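A sketch using puppet-rabbitmq's policy resource; the queue pattern and
vhost are illustrative:

  # Mirror every non-amq.* queue across all nodes in the cluster.
  rabbitmq_policy { 'ha-all@/':
    pattern    => '^(?!amq\.).*',
    definition => { 'ha-mode' => 'all' },
  }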