Uses a shared cinder-base resource to do the database
and messaging configuration for all three services.
Depends-On: I3c6d5226eed5f0f852b0ad9476c7cd9a959fda69
Change-Id: I47c5fd190efca5f02e73fd22aba6cda573daf5cc
|
|
Co-Authored-By: Carlos Camacho <ccamacho@redhat.com>
Change-Id: I0d9332f7f4f9116c5435d338a9c35d4fb3f512c6
Implements: blueprint composable-services-within-roles
Depends-On: I60493a3aa64e5136b763e8e2084d728f5f812f8a
|
|
Adds new puppet and puppet pacemaker specific services for Sahara API
and Sahara Engine.
The Pacemaker templates extend the default Sahara services and swap
in the pacemaker specific puppet-tripleo profile instead.
Change-Id: I1adda514e9592d149a3d45743a9a00b59c28ca38
Depends-On: I0c8bd68f9a98626e9d67ef713c72c9dd05b7cc12
Implements: blueprint composable-services-within-roles
|
|
Recently the 'host' parameter was added to the neutron manifest, so we
no longer need to set it manually in the configuration.
Change-Id: I6cb73c6d5da8b99680dec97e03ac4805451835fb
Depends-On: I81b86208826e99beccafd2871ce2afd45394e37f
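For illustration, the change amounts to dropping a raw neutron_config
resource in favour of the new class parameter (a minimal sketch; the
value shown is illustrative, not the exact t-h-t code):

    # Previously the manifest set the option directly:
    # neutron_config { 'DEFAULT/host': value => $::fqdn }
    # Now puppet-neutron exposes it as a class parameter:
    class { '::neutron':
      host => $::fqdn,
    }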
|
|
Recently the 'host' parameter was added to the nova manifest, so we
no longer need to set it manually in the configuration.
Change-Id: I6f3dc50ea8737e5e7cd859685a9308edff976f31
Depends-On: Icce3ebc401442651942f8de3eabffadaad812377
|
|
Some puppet parameters were deprecated and some were removed.
This patch reduces the number of warnings to a few; the remaining
warnings are bugs that the Puppet OpenStack team is working on.
This patch is mostly cleanup so we don't have useless warnings in the
Puppet catalog.
Changes:
* Update Ceilometer auth params
* Update Neutron auth params
* Update Heat auth params
* Update Swift hash suffix param
* Remove neutron::server::notifications::nova_url, which is unused.
Change-Id: Ie32681a1fe32735f70ba372630da09f91227298c
|
|
Currently when deploying swift on the Controller nodes, we do the
ringbuilder config during step 3 and the swift-storage config during
step 4, but this order is reversed on the ObjectStorage nodes.
Also, we include the base swift class inconsistently: during step 2
on Controller nodes, and via the overcloud-object manifest on
ObjectStorage nodes.
To fix this inconsistency, as a precursor to converting the
ObjectStorage role to the composable services interfaces, we rework
the post config so that the ObjectStorage config is applied in steps
2, 3 and 4. This should bring it much closer to the process used on
the Controller role and thus make it easier to decompose in a
compatible way.
Partially-Implements: blueprint composable-services-within-roles
Change-Id: Ic9d0ed8584a12d681a8f4d4742d39b96c15e531a
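For context, the step mechanism this relies on looks roughly like the
sketch below: the same manifest is applied once per step, with the
current step exposed via hiera, so ordering is expressed as step
guards (the class names come from puppet-swift and are illustrative
here, not copied from the actual manifest):

    $step = hiera('step')

    if $step >= 2 {
      include ::swift                 # base swift class, now at step 2
    }
    if $step >= 3 {
      include ::swift::ringbuilder    # ringbuilder config at step 3
    }
    if $step >= 4 {
      include ::swift::storage::all   # storage config at step 4
    }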
|
|
Switch the swift proxy service to use the new composable
services format.
Change-Id: Idc9ac64818882e73836ac99bbad56eec184c9a5d
Partially-Implements: blueprint composable-services-within-roles
Depends-On: I6bd72284911f3f449157a6fc00b76682dd53bd8c
|
|
In puppet-tripleo, we split loadbalancer.pp into two classes, haproxy
and keepalived, to make it more composable.
This patch updates all hiera parameters related to HAProxy and
keepalived accordingly.
Depends-On: I46ed8348dc990d9aa0d896e1abea3b30a8292634
Change-Id: Ibf56184cd10af1d0dcae773c02b0f31a6204badf
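A minimal sketch of what the split enables on the manifest side (the
class names follow the new puppet-tripleo layout; the hiera flag is
illustrative):

    if hiera('enable_load_balancer', true) {
      # loadbalancer.pp used to configure both; they are now separate
      # profiles that can be composed independently.
      include ::tripleo::haproxy
      include ::tripleo::keepalived
    }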
|
|
Use the new interface in puppet-nova to configure this parameter.
Depends-On: I3498076b292e9dff88b9ad9d5c65c99a2a98cd7f
Change-Id: Id9f253e942f6373f77acc9239d79f62103b39904
|
|
This patch wires in a Heat feature to configure services via a Heat
resource chain.
Follow-up patches will be able to configure compute services using
composable services.
Change-Id: Ib4fd8bffde51902aa19f9673a389600fc467fc45
|
|
Now that cinder includes http_proxy_to_wsgi by default[1], our puppet
resource that included it in cinder's api-paste config is no longer
needed.
[1] If5aab9cc25a2e7c66a0bb13b5f7488a667b30309
Change-Id: I6141b6caf9b04ee73fae3ae2b94b3001b21b9999
|
|
With change 648099e1925e7d0d3f6906e5e8d15f3871e88460 and the replacement
of ceilometer-alarm with aodh, the delay resource became a leaf in the
ordering graph and serves no real purpose any longer.
It can now be removed without affecting anything else.
Change-Id: Ib86e609821b9f0b7b0d99c49aead20f9a177f63d
|
|
Also wires the steps into the CephStorage role.
Change-Id: Ib472f1279478ad7792349cc32bb3c5f510ba69fe
|
|
Implements: blueprint composable-services-within-roles
Depends-On: Icd504aef7dda144582c286c56c925a78566af72c
Change-Id: I8802c2a0cf1e5fa1a6d1fab5e87f6014bea2f517
|
|
Adds new puppet and puppet pacemaker specific services for
Heat API, Heat API CFN, Heat API Cloudwatch, and Heat Engine.
The Pacemaker templates extend the default heat services and
swap in the pacemaker specific puppet-tripleo profile instead.
Change-Id: I387b6bfd763d2d86cad68a3119b0edd0caa237b0
Partially-implements: blueprint composable-services-within-roles
Depends-On: I194cbb6aa307c2331597147545cf10299cab132f
Depends-On: I14dc923ac8ee8d5d538e7f4cf8138ccee8805b53
|
|
Deploy loadbalancer service using puppet-tripleo, and drop puppet code.
Implements: blueprint refactor-puppet-manifests
Depends-On: I9b106dcc1a4d446ab5dea8430ed295e6ec209cbd
Change-Id: I9ca50a4bc822ec17d89988894af9bdf07e4bd1a9
|
|
Set a password for the 'root' db user and add an additional
'clustercheck' user to be used only by the resource agent.
The password for this 'clustercheck' user is randomly generated
via a heat parameter.
Before this change the workflow to set up the database in the
manifest is the following:
- Step 1 -> Install all the basic galera packages and basic configuration
- Step 2.a -> Create /etc/sysconfig/clustercheck with root and empty password
- Step 2.b -> Start up galera-monitor xinetd service
- Step 2.c -> Start pacemaker ocf resource (no root user has been created
so there will be an empty password per default)
- Step 2.d -> Wait for /bin/clustercheck to return success and then
proceed with the other steps
After this change the workflow is slightly more complex because there
is a bit of a chicken and egg problem:
- Step 1 -> Install all the basic galera packages and basic configuration
- Step 2.a -> Create /etc/sysconfig/clustercheck with root and empty
password, unless the file already exists and has a clustercheck user
configured
- Step 2.b -> Start up galera-monitor xinetd service
- Step 2.c -> Start pacemaker ocf resource (no root user has been created
yet, so there will be an empty password per default)
- Step 2.d -> Wait for /bin/clustercheck to return success and then proceed
with the other steps
- Step 2.e -> Create clustercheck db user
- Step 3/4 -> Create /etc/sysconfig/clustercheck with clustercheck user credentials
- Step 5.a -> Update the sql root password on each node
- Step 5.b -> Create /root/.my.cnf with proper credentials on all nodes
Note that we cannot really create the root/clustercheck users right at
step 1 because the db is not running yet. (An approach that spawned
mysqld on each node, created the users and shut it down was tried, but
it was much more complex and does not work when updating existing
setups.)
Given the new way of solving the root password issue, we also need to
make sure that Step 1 and Step 2 run on updates.
Closes-bug: #1581677
Depends-On: I83eed8885503043e881db34411616f9726e00352
Change-Id: If3d6e7253af6195b96129be7ea3348d697e4bae1
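A condensed sketch of how the two-phase clustercheck configuration can
be expressed (steps simplified; the file contents and the hiera key
are illustrative, not the exact manifest code):

    $step = hiera('step')

    if $step >= 3 {
      # The clustercheck db user exists by now; switch to its credentials.
      $clustercheck_password = hiera('mysql_clustercheck_password')
      file { '/etc/sysconfig/clustercheck':
        content => "MYSQL_USERNAME=clustercheck\nMYSQL_PASSWORD=${clustercheck_password}\n",
      }
    } elsif $step >= 2 {
      # Bootstrap: the galera OCF resource must start before any db user
      # can be created, so clustercheck initially runs as root with an
      # empty password. replace => false keeps real credentials in place
      # on updates.
      file { '/etc/sysconfig/clustercheck':
        content => "MYSQL_USERNAME=root\nMYSQL_PASSWORD=''\n",
        replace => false,
      }
    }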
|
|
Change the way RabbitMQ is implemented, making it a composable role.
Implements: blueprint refactor-puppet-manifests
Change-Id: I5fed5c437ad492af75791a9163f99ae292f58895
|
|
Adds new puppet and puppet pacemaker specific services for
the Neutron Metadata agent.
Partially-implements: blueprint composable-services-within-roles
Change-Id: I25f026507e78f18594599b3621613a54f246545d
|
|
Adds new puppet and puppet pacemaker specific services for
the Neutron L3 agent.
Partially-implements: blueprint composable-services-within-roles
Change-Id: I0316043efe357a41ef3b4088a55d98dbb6d25963
|
|
Adds new puppet and puppet pacemaker specific services for
the Neutron DHCP agent.
Depends-On: Ibbfd79421f871e41f870745a593cca65e8c0e58a
Partially-implements: blueprint composable-services-within-roles
Change-Id: Ia61295943e67efe354a51a26fe4540f288ff6ede
|
|
Step6 was just about configuring fencing after creating all Pacemaker
resources.
It was created by this patch:
https://review.openstack.org/#q,1787fbc7ca58f9965cd5d64b685c1f9beed4cb9b,n,z
A bit of Puppet orchestration can help us to not require an extra step.
This patch:
* configure & enable fencing at step5
* make sure we don't configure fencing before creating Pacemaker
resources and constraints.
* remove step6 from deployment workflow.
* depends on a patch in puppet-tripleo that moves keystone resources
(endpoints, roles) to step 5.
Change-Id: Iae33149e4a03cd64c5831e689be8189ad0cf034b
Depends-On: Icea7537cea330da59fe108c9b874c04f2b94d062
Depends-On: I079e65f535af069312b602e8ff58be80ab2f2226
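A sketch of the resulting step 5 ordering (class names as in
puppet-tripleo and puppet-pacemaker; treat the exact wiring as
illustrative):

    $step = hiera('step')

    if $step >= 5 and hiera('enable_fencing', false) {
      include ::tripleo::fencing
      # Enable stonith only once every fence device has been created.
      class { '::pacemaker::stonith':
        disable => false,
      }
      Class['tripleo::fencing'] -> Class['pacemaker::stonith']
    }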
|
|
Change-Id: I511052dc765788336ffd32dee2118d787fce725d
|
|
The database will be created by the roles so we don't need to call
::mysql from the manifest.
Change-Id: I2b137cbd6597222a72cf46830f34a93f002c70ef
Depends-On: Id065a9180f1f1a41ab225ec5f755498ec7d9a827
|
|
When using the Nova RBD driver for the ephemeral storage, the Ceph RBD
OpenStack guide [1] suggests optimizing certain settings; this change
sets disk_cachemodes and hw_disk_discard according to the guide.
1. http://docs.ceph.com/docs/master/rbd/rbd-openstack/
Change-Id: I8d2ee89ca4ff5458d1888cc037e2e91d19025ad4
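For reference, a sketch of the resulting settings expressed against
puppet-nova's libvirt class (parameter names as in puppet-nova of that
era; treat them as illustrative, with the values recommended by the
guide):

    class { '::nova::compute::libvirt':
      # 'network=writeback' enables the rbd cache for network disks.
      libvirt_disk_cachemodes => ['network=writeback'],
      # 'unmap' lets guests pass TRIM/discard requests down to Ceph.
      libvirt_hw_disk_discard => 'unmap',
    }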
|
|
This will configure the openstack services and run the initial
db sync in step 3 (instead of step 4) for the node for which
$sync_db is true.
Closes-Bug: #1572952
Change-Id: I29012ee0a8b281e4472353ee7d9d44912e8a9b6c
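A sketch of the pattern (keystone is used as the example service; the
bootstrap-node detection follows the usual t-h-t idiom and is
illustrative):

    $step    = hiera('step')
    $sync_db = downcase($::hostname) == downcase(hiera('bootstrap_nodeid'))

    if $step >= 3 {
      class { '::keystone':
        admin_token => hiera('keystone::admin_token'),
        sync_db     => $sync_db,  # only the bootstrap node runs db sync
        # ...rest of the service configuration, now also in step 3...
      }
    }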
|
|
Adds new puppet and puppet pacemaker specific services for
Glance API and Glance Registry.
The Pacemaker templates extend the default glance services and
swap in the pacemaker specific puppet-tripleo profile instead.
In the case of pacemaker glance-registry there is no separate puppet
manifest, so only the configuration parameters are maintained there.
(Due to the way the pacemaker glance constraints are written, the
pacemaker variants of this service can't be split out...)
Depends-On: Ifc388f7058ccfff2818f531bcbc00c7179874bbc
Change-Id: I00a8c916129af43cda225754eb10370289bb4b41
|
|
Horizon's backends (httpd) see the IP address of the haproxy in the
logs instead of the client address.
This patch:
- Installs the remoteip httpd module [1].
- Uses the X-Forwarded-For HTTP header to override the haproxy address.
- Configures Horizon's logs with the client address via the httpd
LogFormat [2].
[1] https://httpd.apache.org/docs/2.4/mod/mod_remoteip.html
[2] https://httpd.apache.org/docs/2.4/mod/mod_log_config.html#logformat
Change-Id: Ib2f215913065426848b48f6293f33a75aff3d328
Depends-On: I54f0f5549d64768dacca71539c71a28cc99d9d95
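A sketch of the httpd side using puppetlabs-apache (the proxy address
is illustrative; %a logs the client address restored by mod_remoteip):

    class { '::apache::mod::remoteip':
      header    => 'X-Forwarded-For',
      proxy_ips => ['10.0.0.10'],  # the haproxy frontend address(es)
    }
    # Horizon's vhost can then use a LogFormat based on %a instead of %h.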
|
|
This change fixes a logic error when the L3 agent is disabled: a
Pacemaker constraint (neutron-dhcp-agent-to-l3-agent-constraint) was
still looking for l3_agent_service in the Puppet catalog, but could not
find it because the L3 agent was disabled.
Puppet reported this error:
Error: Could not find dependency
Pacemaker::Resource::Service[neutron-l3-agent] for
Pacemaker::Constraint::Base[neutron-dhcp-agent-to-l3-agent-constraint]
Change-Id: I0e5d24d844810c58a3205303399d1c20773af3dd
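The shape of the fix, sketched below (the defined type and service
names are the ones from the error above; the enable flag is
illustrative):

    if hiera('neutron::enable_l3_agent', true) {
      # Only declare the ordering constraint when the L3 agent resource
      # is actually managed, so the dependency can always be resolved.
      pacemaker::constraint::base { 'neutron-dhcp-agent-to-l3-agent-constraint':
        constraint_type => 'order',
        first_resource  => "${::neutron::params::dhcp_agent_service}-clone",
        second_resource => "${::neutron::params::l3_agent_service}-clone",
        first_action    => 'start',
        second_action   => 'start',
      }
    }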
|
|
There are backwards-incompatible patches [1][2] in puppet-cinder which
break upgrade scenarios, but at this point they have made it into
liberty and mitaka, so a workaround in t-h-t is easier than a revert.
[1] https://review.openstack.org/#/c/209412/
[2] https://review.openstack.org/#/c/231068/
Change-Id: Ic82258bf0893ebd4e595e5df73ffbc4c6443f9e8
Closes-Bug: #1570265
|
|
This part of overcloud_controller_pacemaker.pp has a lot of duplicate
code defining haproxy and vip creation. This is an attempt to refactor
it.
Change-Id: Icbd560de08999e48cfb54c6f3c94f8b96cddd6ba
Depends-On: I4cc6711911c1bfa1bc6063979e2b2a7ab5b8d37b
|
|
Previously ceilometer-notification, aodh-listener and sahara-engine
didn't have constraints that would anchor them under openstack-core
dummy resource. Such constraints are added now. (sahara-engine starting
after sahara-api, aodh-listener after aodh-evaluator, and
ceilometer-notification after openstack-core.) Openstack-core ->
heat-api constraint has been removed because heat-api depends on
ceilometer-notification, so there's a transitive dependency on
openstack-core already.
Change-Id: Ided7321ebbf2c3556726343b4bb466fd8759b43a
Closes-Bug: #1569444
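One of the added anchors, as a sketch (the pattern matches the other
ordering constraints in the pacemaker manifest; resource names follow
the clone naming used there):

    pacemaker::constraint::base { 'openstack-core-then-ceilometer-notification-constraint':
      constraint_type => 'order',
      first_resource  => 'openstack-core-clone',
      second_resource => 'openstack-ceilometer-notification-clone',
      first_action    => 'start',
      second_action   => 'start',
    }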
|
|
* Deploy Gnocchi API.
* Storage backends: swift, rbd and file.
* Indexer backend defaults to mysql.
* Configure Ceilometer to send metrics data to Gnocchi.
* Pacemaker config
Depends-On: Ic8778a3104e0ed0460423e4bf857682220dc5802
Depends-On: I7d2eb9405e0171fc54fa0b616122f69db5f51ce2
Co-Authored-By: Pradeep Kilambi <pkilambi@redhat.com>
Change-Id: Ifde17b1ab8fa2b30544633e455e1c7eb475705aa
|
|
Adds new puppet and puppet pacemaker specific services for
Keystone.
The puppet manifests for keystone now live in puppet-tripleo.
Hiera settings are driven by the nested stack heat templates
and used to control puppet-keystone and puppet-tripleo
directly.
The Pacemaker template extends the default keystone service and
swaps in the pacemaker specific puppet-tripleo profile instead.
Change-Id: I8b30438a27e9d5ec4e7d335e0bd1a931a20b03a2
Depends-On: I2faf5a78db802549053ec41678bf83bf28108189
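On the role manifest side this reduces to including the profile, with
all configuration coming from the hiera data written by the nested
stack (a minimal sketch; class names from puppet-tripleo):

    # The base profile reads its step and settings from hiera.
    include ::tripleo::profile::base::keystone
    # The Pacemaker template swaps in a different profile instead:
    # include ::tripleo::profile::pacemaker::keystone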
|
|
Create the glance-fs Pacemaker resource on one node (the pacemaker
master) instead of all nodes, and set verify_on_create to True.
* This avoids a race condition if Puppet is applied on 2 nodes at the
same time, so the filesystem creation is only attempted once.
* Verify with pcs that the resource has been correctly created.
The full context of the bug is described here:
https://bugzilla.redhat.com/show_bug.cgi?id=1319384
Change-Id: I625f0879ae56e814664d1433ae47e27148779f12
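Sketched against puppet-pacemaker's defined type (the mount parameters
are illustrative; verify_on_create makes the creation be checked via
pcs):

    if $pacemaker_master {
      # Declared on the bootstrap node only, so creation happens once.
      pacemaker::resource::filesystem { 'glance-fs':
        device           => hiera('glance_file_pcmk_device'),
        directory        => '/var/lib/glance/images',
        fstype           => hiera('glance_file_pcmk_fstype'),
        clone_params     => '',
        verify_on_create => true,
      }
    }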
|