|
Currently we have a special controller-only deployment which writes
the name/IP of the "bootstrap node", e.g. the cluster master, which
defaults to the first node in the Controller ResourceGroup.
Now that we're moving to fully composable services/roles, it's possible
folks will want to deploy services that expect to detect the bootstrap
node (e.g. so only one node does a DB sync) for non-controller roles.
So, take this opportunity to combine the bootstrap node deployment with
the "all nodes" data, such that we deploy the same data for all roles.
Because the bootstrap node data is per role cluster, rather than truly
global, we pass it via input_values into each per-role Deployment.
At some future point we might consider renaming this, e.g. to
something which describes per-cluster config vs "all nodes",
but as a first step let's just rationalize the resources.
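A minimal sketch of how this can look in a per-role template, assuming a
Controller ResourceGroup (resource and attribute names here are
illustrative, not taken from this change):

  ControllerAllNodesDeployment:
    type: OS::Heat::StructuredDeployments
    properties:
      config: {get_resource: allNodesConfig}
      servers: {get_param: controller_servers}
      input_values:
        # the first node of the role's ResourceGroup acts as the bootstrap node
        bootstrap_nodeid: {get_attr: [Controller, resource.0.hostname]}
        bootstrap_nodeid_ip: {get_attr: [Controller, resource.0.ip_address]}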
Change-Id: I4011526a13c51b3d0f95c17fe8ed38115b4fdce4
|
|
|
|
This change uses the new os-refresh-config --timeout argument to set a
kill timeout for stalled os-refresh-config runs.
4 hours is a reasonably conservative value since it matches the stack
timeout, but it can be set shorter in the future based on actual run
times.
Change-Id: I433f558515df24736263ec0d50de08ad8f78498f
Closes-Bug: #1595722
Depends-On: Ibcbb2090aed126abec8dac49efa53ecbdb2b9b2c
|
|
The ConfigCommand parameter overrides the server resource metadata to
specify what command os-collect-config runs whenever any configuration
data changes.
The default is already 'os-refresh-config', so this change has no
effect, but it allows a future change to specify an
os-refresh-config --timeout argument to fix bug #1595722.
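A hedged example of what such an override could look like in an
environment file, assuming the --timeout value is given in seconds
(14400s = 4 hours, matching the stack timeout; this snippet is
illustrative, not part of this change):

  parameter_defaults:
    # assumes os-refresh-config --timeout takes seconds
    ConfigCommand: os-refresh-config --timeout 14400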
Change-Id: I8dd35b6724d8c00e5495faca84ee8fee77641b82
Partial-Bug: #1595722
|
|
I think this depends_on is bogus: the Controller ResourceGroup does
depend on Networks, but the ControllerServiceChain does not need to.
This needs to be consistent with the other ServiceChain definitions for
the custom-roles work.
Change-Id: I0159968719f5d21c8f216ad69af047fa141d54e9
|
|
We added NodeConfigIdentifiers to trigger config to be re-applied on
update, but then later added DeployIdentifier which forces config to
*always* be applied on update, so we can simplify things by just
referencing the DeployIdentifier directly.
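A minimal sketch of referencing DeployIdentifier directly from a
deployment resource (names are illustrative):

  NodeExtraConfig:
    type: OS::Heat::SoftwareDeployment
    properties:
      config: {get_resource: NodeExtraConfigImpl}
      server: {get_param: server}
      input_values:
        # DeployIdentifier changes on every deploy, forcing config to be re-applied
        deploy_identifier: {get_param: DeployIdentifier}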
Change-Id: I79212def1936740825b714419dcb4952bc586a39
|
|
overcloud_volume contains some code that is already deployed by the
OS::TripleO::Services::CinderVolume service.
Change-Id: I3446883cb89dcf179a854e2adef81b899117f66a
|
|
Change-Id: Ibc9abf7043c37104d03cd72d882e10cdb53fe6a2
|
|
Change-Id: I1921115cb6218c7554348636c404245c79937673
Depends-On: I7ac096feb9f5655003becd79d2eea355a047c90b
Depends-On: I871ef420700e6d0ee5c1e444e019d58b3a9a45a6
|
|
|
|
|
|
Nova & Neutron services are already managed by puppet-tripleo in
profiles, and we already override Service resource for some services to
manage them with Pacemaker.
Dropping this code here will allow us to run TripleO in AIO setup as we
don't manage everything with Pacemaker.
Change-Id: Idcfc6ea26ca41534ce407be0eb3dafe7bcd2ef2d
|
|
|
|
|
|
|
|
|
|
They moved to puppet-tripleo.
Change-Id: Idd4488fc4b1e8e8024d47f6e3d83ac4f3cecd088
Depends-On: I75d68cc766ad274b16b22f43b7d34d02ab26de13
|
|
|
|
Adds an example of providing a mapping file for all nodes, then
extracting the data for each node based on a lookup of the mac address.
Some assumptions are made (e.g. the hard-coded reference to eth0), but
it should be easily modified to suit specific environments.
Usage via an environment file will look like:

resource_registry:
  OS::TripleO::NodeUserData: os-net-config-mappings.yaml

parameter_defaults:
  NetConfigDataLookup:
    host1:
      nic1: "00:c8:7c:e6:f0:2e"
    host2:
      nic1: "00:18:7d:99:0c:b6"

Note this version requires Liberty Heat in the undercloud due to the
use of a new str_replace feature to serialize the json parameter.
Change-Id: I7da9c9d8805e676a383e888a7d77f05d3432ab12
|
|
After we fixed bug #1567384 and bug #1567385, we no longer need to no-op
the PackageUpdate resource on upgrades. I removed the no-op in change
Ie14ddbff15e7ed21aaa3fcdacf36e0040f912382 from
major-upgrade-pacemaker-converge.yaml but didn't recall we had the no-op
in major-upgrade-pacemaker{,-init}.yaml too.
Change-Id: I24b913c790eae79e3b207729e0b22378075fb282
|
|
This patch modifies the interface for the -post stacks so
that we pass RoleData instead of just the StepConfig
into each -post.yaml template.
This will facilitate creating other types of -post.yaml scripts
that require more data than just 'step_config'. Things like
containers, etc. will require this.
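A rough sketch of the changed -post interface for one role (resource and
property names are illustrative, not confirmed by this change):

  ControllerPostDeployment:
    type: OS::TripleO::ControllerPostDeployment
    properties:
      servers: {get_attr: [ControllerServers, value]}
      # previously only step_config was passed; now the whole role_data map is
      RoleData: {get_attr: [ControllerServiceChain, role_data]}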
Change-Id: I2527fc0098192f092f5e9046033a04bc71be2cae
|
|
This patch updates puppet/services/services.yaml (currently the only
interface for 'services' in t-h-t) so that we return a more generic
'role_data' Heat output.
This is a move towards making the services themselves a bit more generic
so we can accommodate other deployment types (containers, etc.)
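A simplified sketch of what such a generic output could look like in
puppet/services/services.yaml (the exact fields are assumptions made for
illustration):

  outputs:
    role_data:
      description: Combined role data for this set of services.
      value:
        service_names: {get_attr: [ServiceChain, role_data, service_name]}
        config_settings:
          map_merge: {get_attr: [ServiceChain, role_data, config_settings]}
        step_config:
          list_join:
            - "\n"
            - {get_attr: [ServiceChain, role_data, step_config]}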
Change-Id: I8bc32c59a48e6d5f0caa2f26fab394d5d992a4a5
|
|
This commit adds the epmd port 4369 to the firewall configuration for
the rabbitmq service. This is necessary for HA setups to work,
since without this port the rabbitmq cloned resource starts only on one
node and the others are not able to complete the rabbit cluster
creation.
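In the rabbitmq service template this could be expressed roughly as
follows (the rule name and the neighbouring ports are assumptions; only
4369 comes from this change):

      config_settings:
        tripleo.rabbitmq.firewall_rules:
          '109 rabbitmq':
            dport:
              - 4369   # epmd, needed for cluster formation
              - 5672   # AMQP
              - 25672  # inter-node communication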
Change-Id: Iae042dd60a578e158b75539dc3998fc40185b343
|
|
Gnocchi 2.1 introduces a change where legacy resource types
needed by ceilometer are not created by default. Instead a
new flag is exposed to create these. We should use this by
default. Note that this is an optional flag and is only
needed if you want to create legacy resource types.
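If wired through hieradata, this might look roughly like the snippet
below; the gnocchi::db::sync::extra_opts key is an assumption, not
confirmed by this change:

      config_settings:
        # hypothetical hiera key passing the optional flag to the db sync
        gnocchi::db::sync::extra_opts: '--create-legacy-resource-types'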
Change-Id: I95ccccb40ce4a8319d0776c4d62c2890cf1fd970
Closes-bug: #1592449
|
|
This is a first iteration of implementing libvirt and nova compute as
composable services.
Note: some parameters are still in puppet/compute.yaml -- we'll move
them in a later iteration.
Implements: blueprint composable-services-within-roles
Depends-On: I0b765f8cb08633005c1fc5a5a2a8e5658ff44302
Change-Id: I752198cdf231ef13062ba96c3877e5defd618c3a
|
|
Change-Id: Ia70688cfc333dc6536b5372cdb2eedb987ab61f8
|
|
Add timezone as a composable service
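Such a composable timezone service template might look roughly like this
minimal sketch (parameter and hiera names are assumptions):

  heat_template_version: 2016-04-08
  description: Composable timezone service (illustrative sketch).
  parameters:
    TimeZone:
      default: 'UTC'
      description: The timezone to be set on the nodes.
      type: string
  outputs:
    role_data:
      description: Role data for the timezone service.
      value:
        config_settings:
          timezone::timezone: {get_param: TimeZone}
        step_config: |
          include ::timezone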
Change-Id: I5bed49e1f8b803fb6d9d0b06165a38f61b132431
Partially-implements: blueprint composable-services-within-roles
|
|
Add timezone as a composable service
Change-Id: I1569b2ebdca8e67c0e92a5c0e3fadd12006cc02a
Partially-implements: blueprint composable-services-within-roles
|
|
Add timezone as a composable service
Change-Id: I6e0e9cef3703cd186eab15d76e611d00c1da4a4e
Partially-implements: blueprint composable-services-within-roles
|
|
Add timezone as a composable service
Change-Id: I99f0b0727a10ee74ea1de0499c8bc3ba37ab3345
Partially-implements: blueprint composable-services-within-roles
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
Wires the steps into the BlockStorage role and ensures
the installed-packages list is written on a per-step basis on
all roles, as already happens on the controllers.
Change-Id: Iaec8ad3b2afbef6d586b7b46abaa1434cdb62f41
|
|
When the overcloud is upgraded we do a yum update of the packages.
This step might introduce a newer galera version. In such a situation
we need to dump the db and restore it. The high-level workflow should
be the following:
1) During the main upgrade step, before shutting down the cluster
we need to dump the db
2) We upgrade the packages
3) We briefly start mysql on a single node, while making sure that
/root/.my.cnf is briefly moved out of the way (because it contains
a password), and import the data. After the import we shut down this
mysql instance
4) We let the cluster start up normally
The above steps will take place in the following scenarios.
Given a locally installed mariadb version X.Y.Z and release R,
we will dump and restore the DB under the following conditions:
A) The MySqlMajorUpgrade template parameter is set to 'auto' and
the upgraded package differs in X, Y *or* Z; that is, we do not
dump automatically when only the release field changes.
B) The MySqlMajorUpgrade template parameter is set to 'yes'.
When MySqlMajorUpgrade is set to 'no', no dumping will be performed.
Note that this will give a non-functional upgrade if a major mariadb
upgrade is taking place.
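As a sketch, the controlling template parameter could be declared along
these lines (the description text is paraphrased, not copied from the
change):

  MySqlMajorUpgrade:
    type: string
    description: >
      Whether to dump and restore the database during a major mariadb
      upgrade; 'auto' decides based on the packaged version, 'yes' always
      dumps, 'no' never dumps.
    default: 'auto'
    constraints:
      - allowed_values: ['auto', 'yes', 'no']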
Partial-Bug: #1587449
Co-Author: Damien Ciabrin <dciabrin@redhat.com>
Co-Author: Mike Bayer <mbayer@redhat.com>
Depends-On: I8cb4cb3193e6b823aad48ad7dbbbb227364d2a58
Depends-On: I38dcacfabc44539aab1f7da85168fe44a1b43a51
Change-Id: I374628547aed091129d0deaa29764bfc998d76ea
|
|
Since the Liberty release, the number of services managed by pacemaker
on HA Overcloud has increased. This has an impact on
major_upgrade_controller_pacemaker_1.sh, where the cluster sync timeout
value tuned for older releases has become too low.
Raise the cluster sync timeout value to a sensible limit to
give pacemaker enough time to stop the cluster during major upgrade.
Change-Id: I821d354ba30ce39134982ba12a82c429faa3ce62
Closes-Bug: #1597506
|
|
This patch drops a bunch of unused VIP parameters
from controller.yaml
Depends-On: I5e2feff7e5dc900849c9535f2b7ac05d3c8f93e1
Change-Id: I5c94f55ac4f2ec1103d5916942fb14e8b5595d01
|
|
Change-Id: I7265b0781acefd4a0de687b0465144e57bcc079f
Partially-Implements: blueprint composable-services-within-roles
|
|
|
|
|
|
Note that this change is not yet enough to deploy bare metal instances;
it only deploys the Ironic services themselves and makes sure they work.
It also does not support HA for now.
Co-Authored-By: Dmitry Tantsur <dtansur@redhat.com>
Partially-implements: blueprint ironic-integration
Change-Id: I541be905022264e2d4828e7c46338f2e300df540
|
|
|
|
Change-Id: I469f2bd429eba23b2010b7380e794c67b18e7a47
Depends-On: I1aa46086f69e7c3efd2782da62fd18ade8343fde
Partial-Bug: 1595518
|
|
The recently added Management network is missing from this example
Change-Id: Id2010e92b8c27188ed153243d0e54ec50bfdcffb
|
|
Depends-On: Ie68d7eccf4938bdbdea93327af0638b3fd002b3e
Change-Id: I1eb68d0cd5f8bf4bf954dd9f12941bc493345708
Partially-Implements: blueprint composable-services-within-roles
|
|
This avoids creating an empty nested stack.
Change-Id: Icce0bfab005a69fce42f58956dcc81acea805e74
|