Implement NovaConductor service using nova-base for common parameters.
* Move the rabbitmq parameters from controller.yaml to the nova-base service,
  as an example; more parameters will move in the future.
* Move the nova-conductor bits from the monolithic manifests to the new
  service, using the new profiles from puppet-tripleo.
Depends-On: Iaaf3a3c2528d9747e41f360a1fe55f95ed37b2d1
Implements: blueprint composable-services-within-roles
Change-Id: I178f092b74ae12f2cb6f006db7cb00e4d6bddfd8
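For context, a composable service template in this layout exposes its hiera settings and puppet step config through a role_data output, pulling shared settings from nova-base. The sketch below is illustrative only; the parameter set, values, and nova-base wiring are assumptions, not the contents of the real nova-conductor.yaml:

```yaml
heat_template_version: 2016-04-08

description: Illustrative composable-service sketch (not the real nova-conductor.yaml)

parameters:
  EndpointMap:
    default: {}
    type: json
    description: Mapping of service endpoint -> protocol/host/port

resources:
  # Shared nova settings (database, RabbitMQ, ...) come from the nova-base template
  NovaBase:
    type: ./nova-base.yaml
    properties:
      EndpointMap: {get_param: EndpointMap}

outputs:
  role_data:
    description: Role data for the nova-conductor service
    value:
      config_settings:
        map_merge:
          - get_attr: [NovaBase, role_data, config_settings]
          - nova::conductor::workers: 2  # example-only value
      step_config: |
        include ::tripleo::profile::base::nova::conductor
```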
|
Split Loadbalancer into HAProxy & Keepalived roles.
Depends-On: I8aa9045fc80205485abab723968b26084f60bf71
Change-Id: If2723358099e78052c351a4a45fdf01d116a89df
|
Uses a shared cinder-base resource to handle the database
and messaging configuration for all three services.
Depends-On: I3c6d5226eed5f0f852b0ad9476c7cd9a959fda69
Change-Id: I47c5fd190efca5f02e73fd22aba6cda573daf5cc
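Roughly, the shared template centralizes settings such as the database connection and RabbitMQ credentials, so the per-service templates only add their own keys. A hedged sketch of what such shared config_settings might look like; key names and parameters are assumptions, not the exact contents of cinder-base.yaml:

```yaml
outputs:
  role_data:
    value:
      config_settings:
        cinder::database_connection:
          list_join:
            - ''
            - - 'mysql://cinder:'
              - {get_param: CinderPassword}
              - '@'
              - {get_param: MysqlVirtualIP}
              - '/cinder'
        cinder::rabbit_userid: {get_param: RabbitUserName}
        cinder::rabbit_password: {get_param: RabbitPassword}
        cinder::rabbit_port: {get_param: RabbitClientPort}
```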
|
Co-Authored-By: Carlos Camacho <ccamacho@redhat.com>
Change-Id: I0d9332f7f4f9116c5435d338a9c35d4fb3f512c6
Implements: blueprint composable-services-within-roles
Depends-On: I60493a3aa64e5136b763e8e2084d728f5f812f8a
|
Adds new Puppet and Puppet Pacemaker-specific services for Sahara API
and Sahara Engine.
The Pacemaker templates extend the default Sahara services and swap
in the Pacemaker-specific puppet-tripleo profile.
Change-Id: I1adda514e9592d149a3d45743a9a00b59c28ca38
Depends-On: I0c8bd68f9a98626e9d67ef713c72c9dd05b7cc12
Implements: blueprint composable-services-within-roles
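In practice the swap happens through an environment's resource_registry, which points the service resource types at the Pacemaker variants instead of the defaults. An illustrative sketch; the resource names and paths follow the usual puppet/services layout but are assumptions here:

```yaml
resource_registry:
  # The default (non-HA) mappings would point at puppet/services/sahara-api.yaml, etc.
  OS::TripleO::Services::SaharaApi: ../puppet/services/pacemaker/sahara-api.yaml
  OS::TripleO::Services::SaharaEngine: ../puppet/services/pacemaker/sahara-engine.yaml
```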
|
Nova uses the http_proxy_to_wsgi middleware [1][2], which parses the
headers provided by the proxy and helps us properly use TLS for
keystone discovery. An option was introduced in this middleware that
leaves it disabled by default, and this change enables it.
[1] Ia78f73e96585ab33a379a0b0be6d9682f7fbd810
[2] I808469f24066d382decf55b9dad5312d6e068da7
Change-Id: I3918f24c0c87cb626a28645b46e3df6360d5f924
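Concretely, the middleware is toggled by oslo.middleware's enable_proxy_headers_parsing option; in hiera terms the change boils down to something like the line below. The exact puppet-nova parameter name is an assumption for illustration:

```yaml
# Assumed puppet-nova knob mapping to [oslo_middleware]/enable_proxy_headers_parsing
nova::api::enable_proxy_headers_parsing: true
```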
|
The 'host' parameter was recently added to the neutron manifest, so we
no longer need to add it manually to the configuration.
Change-Id: I6cb73c6d5da8b99680dec97e03ac4805451835fb
Depends-On: I81b86208826e99beccafd2871ce2afd45394e37f
|
The 'host' parameter was recently added to the nova manifest, so we
no longer need to add it manually to the configuration.
Change-Id: I6f3dc50ea8737e5e7cd859685a9308edff976f31
Depends-On: Icce3ebc401442651942f8de3eabffadaad812377
|
Some puppet parameters were deprecated and some were removed.
This patch reduces the number of warnings to a few; the remaining
warnings correspond to bugs being worked on by the Puppet OpenStack team.
This patch is mostly cleanup so we don't have useless warnings in the
Puppet catalog.
Changes:
* Update Ceilometer auth params
* Update Neutron auth params
* Update Heat auth params
* Update Swift hash suffix param
* Remove neutron::server::notifications::nova_url, which is no longer needed.
Change-Id: Ie32681a1fe32735f70ba372630da09f91227298c
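As one example of the kind of rename involved, the Swift hash suffix setting moves to the newer parameter name. A hedged before/after sketch; the exact key names are assumptions based on the puppet-swift deprecation:

```yaml
# deprecated (emitted a warning):
#   swift::swift_hash_suffix: {get_param: SwiftHashSuffix}
# replacement:
swift::swift_hash_path_suffix: {get_param: SwiftHashSuffix}
```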
|
The default journal size is 5 GB. This change stops us from
overriding it with 1 GB, which is too small for production.
The config value is used by Ceph only when it creates the
journal, so this does not affect upgrades.
Change-Id: I4bfd2ab47e131d8fcdd5dc75a5a56cfae8b22d5a
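For reference, the dropped override looked roughly like the hiera line below (the parameter name is an assumption); with it gone, Ceph's own default of a 5 GB (5120 MB) journal applies when the journal is created:

```yaml
# Removed override; 1024 MB is too small for production journals
ceph::profile::params::osd_journal_size: 1024
```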
|
Similar to https://review.openstack.org/#/c/259568, which added support
for the composable-services StepConfig and ServiceConfigSettings parameters
so that the Controller role supports composable services, this adds those
interfaces for the ObjectStorage role.
Note that at this time the ObjectStorage post config only supports steps
2, 3, and 4, not all of the steps described in services/README.rst.
Partially-Implements: blueprint composable-services-within-roles
Change-Id: I22ffaa68a6ccd4be29d51674871268179bcddcbc
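The new interface amounts to two extra parameters on the role template, fed from the service resource chain. A minimal sketch of what they look like; descriptions and defaults are assumptions:

```yaml
parameters:
  ServiceConfigSettings:
    type: json
    default: {}
    description: Hiera settings aggregated from the role's composable services
  StepConfig:
    type: string
    default: ''
    description: Puppet manifest snippets to include at each deployment step
```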
|
Currently, when deploying swift on the Controller nodes, we do the
ringbuilder config during step 3 and the swift-storage config during
step 4, but this order is reversed on the ObjectStorage nodes.
We also include the base swift class inconsistently: during step 2
on Controller nodes, and via the overcloud-object manifest on
ObjectStorage nodes.
To fix this inconsistency, as a precursor to converting the
ObjectStorage role to the composable services interfaces, we rework
the post config so that the ObjectStorage config is applied in steps
2, 3, and 4. This should get us much closer to the process used on
the Controller role, and thus make it easier to decompose in a
compatible way.
Partially-Implements: blueprint composable-services-within-roles
Change-Id: Ic9d0ed8584a12d681a8f4d4742d39b96c15e531a
|
Switch the swift proxy service to use the new composable
services format.
Change-Id: Idc9ac64818882e73836ac99bbad56eec184c9a5d
Partially-Implements: blueprint composable-services-within-roles
Depends-On: I6bd72284911f3f449157a6fc00b76682dd53bd8c
|
In puppet-tripleo, we split loadbalancer.pp into two classes, haproxy
and keepalived, to be more composable.
This patch updates all hiera parameters related to HAProxy and
Keepalived accordingly.
Depends-On: I46ed8348dc990d9aa0d896e1abea3b30a8292634
Change-Id: Ibf56184cd10af1d0dcae773c02b0f31a6204badf
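The renames follow the class split, so hiera keys move from the old tripleo::loadbalancer namespace to the new per-class ones. An illustrative pair; the parameter names and values are assumptions:

```yaml
# previously under tripleo::loadbalancer::*
tripleo::haproxy::controller_virtual_ip: 192.0.2.10      # example value
tripleo::keepalived::controller_virtual_ip: 192.0.2.10   # example value
```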
|
Use the new interface in puppet-nova to configure this parameter.
Depends-On: I3498076b292e9dff88b9ad9d5c65c99a2a98cd7f
Change-Id: Id9f253e942f6373f77acc9239d79f62103b39904
|
By passing the MysqlVirtualIP via the EndpointMap, we no longer need
to provide it as a parameter to the services.
This follows what is already happening for the glance registry
service with I9186e56cd4746a60e65dc5ac12e6595ac56505f0.
Change-Id: Iad2ab389bf64d0fc8b06eb0e7d29b5370ff27dff
Co-Authored-By: Juan Antonio Osorio Robles <jaosorior@redhat.com>
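The pattern, following the glance registry change referenced above, is to build the database connection string from the EndpointMap entry rather than from a MysqlVirtualIP parameter. A hedged sketch; service and key names are illustrative:

```yaml
glance::registry::database_connection:
  list_join:
    - ''
    - - {get_param: [EndpointMap, MysqlInternal, protocol]}
      - '://glance:'
      - {get_param: GlancePassword}
      - '@'
      - {get_param: [EndpointMap, MysqlInternal, host]}
      - '/glance'
```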
|
This patch wires in a Heat resource chain as the mechanism for
configuring services.
Additional patches will then be able to configure
compute services using composable services.
Change-Id: Ib4fd8bffde51902aa19f9673a389600fc467fc45
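The underlying Heat feature is OS::Heat::ResourceChain, which instantiates one nested stack per entry in a list of resource types. A minimal sketch of how a role's service list might be wired through it; the parameter names are illustrative:

```yaml
resources:
  ComputeServiceChain:
    type: OS::Heat::ResourceChain
    properties:
      resources: {get_param: ComputeServices}   # e.g. a list of OS::TripleO::Services::* types
      concurrent: true
      resource_properties:
        EndpointMap: {get_param: EndpointMap}
```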
|
Now that cinder includes http_proxy_to_wsgi by default [1], our puppet
resource that included it in cinder's api-paste config is no longer
needed.
[1] If5aab9cc25a2e7c66a0bb13b5f7488a667b30309
Change-Id: I6141b6caf9b04ee73fae3ae2b94b3001b21b9999
|
Cinder uses the http_proxy_to_wsgi middleware, which parses the
headers provided by the proxy and helps us properly use TLS for
keystone discovery. An option was introduced in this middleware that
leaves it disabled by default, and this change enables it.
Change-Id: Ia33b3fa04d71eab10effd0b33eb2c194282cd15b
|
To handle the X-Forwarded-Proto header, heat uses the http_proxy_to_wsgi
middleware from oslo.middleware. It used to work by default, but
configuration is now required to enable it. We need it since we are
effectively behind a proxy (HAProxy).
Change-Id: I256f27ec6a3f66316ff6aa3f78b2f1ec1472f097
|
With change 648099e1925e7d0d3f6906e5e8d15f3871e88460 and the replacement
of ceilometer-alarm with aodh, the delay resource became a leaf in the
ordering graph and serves no real purpose any longer.
It can now be removed without affecting anything else.
Change-Id: Ib86e609821b9f0b7b0d99c49aead20f9a177f63d
|
In Fedora/RHEL land we carry a patch that sets the loopback_users
config explicitly to []. Since this patch diverges from upstream
and sometimes gets dropped by mistake during rebases, let's set
this value explicitly in our config files instead of relying
on a distro-specific patch.
The patch is here:
http://pkgs.fedoraproject.org/cgit/rpms/rabbitmq-server.git/tree/rabbitmq-server-0004-Allow-guest-login-from-non-loopback-connections.patch
Change-Id: If9ca05b38a8bd2a6834c08336a816bbd0ae1ea94
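In hiera terms this amounts to setting the loopback_users entry explicitly. A hedged sketch, assuming puppet-rabbitmq's config_variables hash (which is rendered into rabbitmq.config) is the mechanism used:

```yaml
rabbitmq::config_variables:
  # rendered as {loopback_users, []} in rabbitmq.config, matching what the
  # distro-specific patch used to do
  loopback_users: '[]'
```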
|
Also wires the steps into the CephStorage role.
Change-Id: Ib472f1279478ad7792349cc32bb3c5f510ba69fe
|
The ceph_keyring value is expected to be the full path
to the keyring, but we currently only pass in
client.<cephuser>. This patch fixes the value
to be the full path.
Closes-Bug: #1586010
Change-Id: I5666c44bb35b6ae109c68506704eff776f5dceda
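For illustration, with the default client user the value changes from the bare user name to Ceph's usual /etc/ceph/&lt;cluster&gt;.&lt;user&gt;.keyring path; the user name below is an example, not necessarily the deployed default:

```yaml
# was:
#   ceph_keyring: client.openstack
ceph_keyring: /etc/ceph/ceph.client.openstack.keyring
```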
|
Implements: blueprint composable-services-within-roles
Depends-On: Icd504aef7dda144582c286c56c925a78566af72c
Change-Id: I8802c2a0cf1e5fa1a6d1fab5e87f6014bea2f517
|
Adds new Puppet and Puppet Pacemaker-specific services for
Heat API, Heat API CFN, Heat API Cloudwatch, and Heat Engine.
The Pacemaker templates extend the default heat services and
swap in the Pacemaker-specific puppet-tripleo profile.
Change-Id: I387b6bfd763d2d86cad68a3119b0edd0caa237b0
Partially-implements: blueprint composable-services-within-roles
Depends-On: I194cbb6aa307c2331597147545cf10299cab132f
Depends-On: I14dc923ac8ee8d5d538e7f4cf8138ccee8805b53
|
Deploy the loadbalancer service using puppet-tripleo, and drop the corresponding puppet code.
Implements: blueprint refactor-puppet-manifests
Depends-On: I9b106dcc1a4d446ab5dea8430ed295e6ec209cbd
Change-Id: I9ca50a4bc822ec17d89988894af9bdf07e4bd1a9
|
Set a password for the 'root' db user and add an additional
'clustercheck' user to be used only by the resource agent.
The password for this 'clustercheck' user is randomly generated
via a heat parameter.
Before this change, the workflow to set up the database in the
manifest was the following:
- Step 1 -> Install all the basic galera packages and basic configuration
- Step 2.a -> Create /etc/sysconfig/clustercheck with the root user and an
  empty password
- Step 2.b -> Start up the galera-monitor xinetd service
- Step 2.c -> Start the pacemaker ocf resource (no root user has been
  created, so the password is empty by default)
- Step 2.d -> Wait for /bin/clustercheck to return success and then
  proceed with the other steps
After this change, the workflow is slightly more complex because there
is a bit of a chicken-and-egg problem:
- Step 1 -> Install all the basic galera packages and basic configuration
- Step 2.a -> Create /etc/sysconfig/clustercheck with the root user and an
  empty password, unless the file already exists and has a clustercheck
  user configured
- Step 2.b -> Start up the galera-monitor xinetd service
- Step 2.c -> Start the pacemaker ocf resource (no root user has been
  created yet, so the password is empty by default)
- Step 2.d -> Wait for /bin/clustercheck to return success and then proceed
  with the other steps
- Step 2.e -> Create the clustercheck db user
- Step 3/4 -> Create /etc/sysconfig/clustercheck with the clustercheck user
  credentials
- Step 5.a -> Update the sql root password on each node
- Step 5.b -> Create /root/.my.cnf with proper credentials on all nodes
Note that we cannot really create the root/clustercheck users right at
step 1 because the db is not running yet (an approach that spawned
mysqld on each node, created the users, and shut it down was tried, but
it was much more complex and cannot work when updating existing setups).
Given the new way of solving the root password issue, we also need to
make sure that Step 1 and Step 2 are run on updates.
Closes-bug: #1581677
Depends-On: I83eed8885503043e881db34411616f9726e00352
Change-Id: If3d6e7253af6195b96129be7ea3348d697e4bae1
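The random password itself can come from a Heat-generated value. A minimal sketch of that piece, assuming an OS::Heat::RandomString resource feeds the clustercheck password; the resource name is illustrative:

```yaml
resources:
  MysqlClustercheckPassword:
    type: OS::Heat::RandomString
    properties:
      length: 10
```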
|
Change the way RabbitMQ is implemented, making it a composable role.
Implements: blueprint refactor-puppet-manifests
Change-Id: I5fed5c437ad492af75791a9163f99ae292f58895
|