|
Use the network/network.network.j2.yaml template to render these files,
instead of relying on the hard-coded versions.
Note this doesn't currently consider the _v6 templates, as we may want
to deprecate these and instead rely on an IPv6-specific network_data file,
or perhaps make network/network.network.j2.yaml generic so it can
detect the IP version from the CIDR.
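For illustration only, a minimal j2 sketch of the kind of version
detection meant here, assuming the rendered context exposes a
network.cidr string (all names are hypothetical):
  {#- hypothetical sketch: branch on the address family of network.cidr #}
  {%- if ':' in network.cidr %}
  # IPv6 handling for {{network.name}} would go here
  {%- else %}
  # existing IPv4 handling for {{network.name}} stays as-is
  {%- endif %}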
Change-Id: I662e8d0b3737c7807d18c8917bfce1e25baa3d8a
Partially-Implements: blueprint composable-networks
|
|
It seems UpdateIdentifier is an overloaded parameter - it is used
both to trigger package updates in the minor update case and to
trigger the upgrade steps during a major upgrade. I'm not sure
it's appropriate to change either of the descriptions as a result,
so for the moment it is added to the exclusion list.
Change-Id: Ied36cf259f6a6e5c8cfa7a01722fb7fda6900976
Partial-Bug: 1700664
|
|
This should be handled in puppet-tripleo, as is done for some other
services, e.g. ceph. This has also been identified as a possible
performance problem due to the nested get_attr calls.
Change-Id: I7e14f0219c28c023c4e8e1d4693f0bfa9674d801
Related-Bug: #1684272
Depends-On: Iccb9089db4b382db3adb9340f18f6d2364ca7f58
|
|
The list that was passed contained repeated services, which was
problematic if we wanted to use this list in puppet. So instead we pass
a list of unique names.
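One way to express the de-duplication in the templates, as a hedged
sketch using heat's yaql function (resource and parameter names here
are hypothetical):
  ExampleUniqueServiceNames:
    type: OS::Heat::Value
    properties:
      value:
        yaql:
          # drop repeated entries before handing the list to puppet
          expression: coalesce($.data, []).distinct()
          data: {get_param: ServiceNames}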
Change-Id: Ib5eb0c5b59a9a50344d22c258ca461e8f1e52c86
|
|
The bug that prevented it from being a comma delimited list was fixed.
Change-Id: Ia5296140763849bdeac481c812f70a42d907c214
|
|
Master is now the development branch for pike, so change
the release alias name accordingly.
Change-Id: I938e4a983e361aefcaa0bd9a4226c296c5823127
|
|
This will enable those consuming the stack_update_type hieradata
set by this parameter to differentiate an update from a major upgrade.
Change-Id: I38469f4b7d04165ea5371aeb0cbd2e9349d70c79
|
|
This needs to handle a ServiceNetMap containing non-default
network names when they are overridden via the *NetName parameters.
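For example, an environment overriding the *NetName parameters might
look like this (a sketch; the exact parameter names depend on the
networks defined for the deployment):
  parameter_defaults:
    InternalApiNetName: internal_api_cloud_0
    StorageNetName: storage_cloud_0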
Closes-Bug: #1651541
Change-Id: I95d808444642a37612a495e822e50449a7e7da63
|
|
Heat now supports release name aliases, so we can replace
the inconsistent mix of date-related versions with one consistent
version that aligns with the supported version of heat for this
t-h-t branch.
This should also help new users who sometimes copy/paste old templates
and discover that intrinsic functions shown in the t-h-t docs don't work
because their template version is too old.
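In practice this means templates switch from a date to a release alias,
e.g. (assuming ocata is the alias matching this branch):
  heat_template_version: ocata   # instead of e.g. 2016-10-14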
Change-Id: Ib415e7290fea27447460baa280291492df197e54
|
|
In order to call commands that need to be run on a single node, we
create a new per-service variable that will contain the first node of
each role containing the service.
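A rough sketch of how such a variable could be derived, assuming a
hypothetical per-service list of node names is already available
(all names below are illustrative, not the actual implementation):
  ExampleServiceBootstrapNode:
    type: OS::Heat::Value
    properties:
      value:
        yaql:
          # pick the first node of the role that hosts this service
          expression: $.data.first()
          data: {get_attr: [ExampleServiceNodeNames, value]}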
Change-Id: I03e8685f939e8ae1fcd8b16883b559615042505d
Partial-Bug: #1615983
|
|
This patch optimizes how we deploy hiera by using a new
heat hook specifically designed to help compose hiera
within heat templates. As part of this change:
- we update all the 'hiera' software configurations to set the group to hiera
instead of os-apply-config.
- The new format uses JSON instead of YAML. The hook actually writes
out the hiera JSON directly so no conversion takes place. Arrays,
Strings, Booleans all stay in their native formats. As such we can avoid
having to do many of the awkward string and list conversions in t-h-t to
support the previous YAML formatting.
- The new hook prefers JSON over YAML, so upgrading users will have the
new files preferred. (We will post a cleanup routine for the old files
soon, but this isn't a new behavior; JSON is now simply preferred.)
- A lot of services required edits to account for default settings that
worked in YAML but no longer work correctly in the native JSON
format. In almost all these cases I think the resulting code looks
cleaner and is more explicit with regards to what is getting
configured in hiera on the actual nodes.
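As an illustration (service and key names are hypothetical, assuming
the hook's datafiles layout), a config in the new format might look like:
  ExampleHieraConfig:
    type: OS::Heat::StructuredConfig
    properties:
      group: hiera        # was os-apply-config
      config:
        datafiles:
          example_service:
            # values are written out as JSON verbatim, no YAML conversion
            example_debug: true
            example_workers: 4
            example_backends: ['node-0', 'node-1']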
Depends-On: I6a383b1ad4ec29458569763bd3f56fd3f2bd726b
Closes-bug: #1596373
Change-Id: Ibe7e2044e200e2c947223286fdf4fd5bcf98c2e1
|
|
This patch moves the hosts configuration into its own deployment.
It will continue to use os-apply-config as something that is
required early on in the bootstrapping (it needs to be
configured before puppet runs for example).
The motivation here is so we can refactor all-nodes-config.yaml to use a
new hiera hook that avoids os-apply-config entirely.
Change-Id: Ib3e4380f205358b27d22a1102b663cf300b1ed86
Partial-bug: #1596373
|
|
|
|
|
|
In the great rebase following the JINJA ALL THE THINGS changes we lost
critical functionality in the fluentd client service. This review
restores the missing features.
Change-Id: I7c23f16f81e75f3da6a24587b2eb8385b3e920a4
Closes-bug: 1630692
|
|
Co-Authored-By: Juan Antonio Osorio Robles <jaosorior@redhat.com>
Depends-On: Ic6fec1057439ed9122d44ef294be890d3ff8a8ee
Change-Id: I754c4a41d8a294a4c7c18bd282ae014efd4b9b16
Closes-Bug: #1628521
|
|
|
|
These hard-coded references to the Controller role mean that
things won't work if the keystone service is moved to any other
role, so we need to generate the lists dynamically based on the
enabled services for each role.
Change-Id: I5f1250a8a1a38cb3909feeb7d4c1000fd0fabd14
Closes-Bug: #1629096
|
|
This sets up a flag that tells the profiles to use TLS (this will happen
in the internal network).
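Assuming the flag is surfaced as a parameter along the lines of
EnableInternalTLS, turning it on would be a one-line environment setting:
  parameter_defaults:
    EnableInternalTLS: true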
bp tls-via-certmonger
Change-Id: If47febb5b38b1c65f60f9de87a34cb31936a7c0d
|
|
This will be used for internal (or even public) TLS, for when
certmonger is generating the certificates. This same setting is used
for the undercloud with the generate_service_certificate option.
Change-Id: Ic54fe512b9ed5c71417a66491b7954e653f660b6
|
|
|
|
|
|
|
|
The management network does not have a VIP, so it's been wrong to
generate a cloud name and hieradata for this. Instead, the network
that actually needs a name and a hosts entry is the ctlplane network,
which has a VIP and services that use it.
bp tls-via-certmonger
Closes-Bug: #1621742
Change-Id: I163b2c7b5684da6dc290636f54eefe3f2b0c3e3f
|
|
Keystone doesn't provide different flags to indicate that both of its
endpoints are enabled. So currently we have to manually add its
network to all-nodes-config.
bp tls-via-certmonger
Change-Id: Ibecd78706e84853107f698ba411a0c05e6f5be52
|
|
The nodes need to be aware of the FQDNs for the specific endpoints
in the cloud. This could be either to set the entries in /etc/hosts
or to select an appropriate hostname for a certificate to be
generated.
bp tls-via-certmonger
Change-Id: I9b4645b937a344f46ec18a9a68c5afa2bc5206d0
|
|
Previously we weren't creating Redis VIP in keepalived, causing Redis to
be unusable in non-HA deployments. This is now fixed.
Depends-On: I0bb37f6fb3eed022288b2dcfc7a88e8ff88a7ace
Change-Id: I0ecfda1e6ad5567f6f58d60bf418bc91761833ab
Closes-Bug: #1618510
|
|
Move Redis VIP from controller-only to all nodes so that we don't assume
where Redis is deployed.
Change-Id: I55f8d48e3e077951fbcc88158dd6f21a2fe5f457
Related-Bug: #1618510
Partially-Implements: blueprint custom-roles
|
|
This adds a mapping of which service is on which network. This
information can be used to fetch a certificate depending on the
network (since they use different hostnames).
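The mapping has roughly the shape of the existing ServiceNetMap, e.g.
(the keys shown are illustrative, not an exhaustive or exact list):
  parameter_defaults:
    ServiceNetMap:
      KeystonePublicApiNetwork: internal_api
      MysqlNetwork: internal_api
      CephPublicNetwork: storage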
Change-Id: I176245da591bea28aeabf3d2b552f24456c98c43
|
|
|
|
|
|
This makes it easier to access the VIP data for other node types and
decouples it from the controller role.
Change-Id: I71125576ec93889fed134b92fb59f7e7dc9920c4
|
|
To avoid the hard-coded references which won't work with
composable roles, we instead default to the rabbitmq_node_ips
list in the per-service puppet-tripleo profiles.
Change-Id: I76b7e06781fdd5d969503b6d73423bb3f5f7a41f
Depends-On: Ie53c93456529420588eb1927703ea91b54095d87
Partially-Implements: blueprint custom-roles
|
|
Some puppet interfaces require a comma separated list of hostnames
where a service is running, so generate it in a similar way to the
service IPs.
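A hedged sketch of generating such a list in the templates, using
list_join over a per-service hostname list (resource names hypothetical):
  ExampleServiceHostsList:
    type: OS::Heat::Value
    properties:
      value:
        list_join:
        - ','
        - {get_attr: [ExampleServiceHostnames, value]}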
Change-Id: Icdf5d993d089dc94035194bdbd52299fcbc793be
Partially-Implements: blueprint custom-roles
|
|
Until we fix the bug where, at validation time, heat doesn't know
whether a parent passes a value into the nested template, this may
be a workaround for validation failing when no default is found.
Change-Id: I02b0764ac29700cd29584e356ac0cfebcda09a36
Closes-Bug: #1619352
|
|
Pass the list of ceph nodes to the ceph_mon profile via
the service template - this requires some fixup to the
profile to handle the ipv6 case.
Note this also aligns the ServiceNetMap keys so that the
composable node_ips logic will generate the lists when
the ceph_mon service is enabled.
Change-Id: If8a5c65f17e677fe62243b3aa746fd642f72d2b0
Depends-On: I481dd2cd2cde7f1491080e6d9c7dcb7047c22de1
Partially-Implements: blueprint custom-roles
|
|
Currently we have a hard-coded list of IPs for various services that
run on the controller. Instead we can dynamically generate that list
of per-service IPs, initially only for the controller, but this approach
can be extended so it works for any role.
Change-Id: I3c8a946e439539d239ad7281a1395414df0893eb
Partially-Implements: blueprint custom-roles
|
|
This adds a list of all enabled service_names in the
enabled_services key, and also generates some boolean
values, e.g. service_name_enabled, which is more convenient
for some usage (such as haproxy, where we need an easy way to
set a flag saying whether a given service is enabled).
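The resulting hieradata would look something like this (key names
are illustrative):
  enabled_services:
  - haproxy
  - keystone
  - nova_api
  haproxy_enabled: true
  keystone_enabled: true
  nova_api_enabled: true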
Partially-Implements: blueprint custom-roles
Change-Id: I62273f403838893602816204d9bc50d516c0057f
|
|
Introduces environment files for deploying OpenDaylight in two ways:
- ODL only managing L2 as an ML2 plugin
- ODL managing L2 and L3 DVR, by replacing NeutronL3Agent
Two services are added: one to install ODL and configure OVS on the
Controllers, and another to configure OVS only on compute nodes.
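Each environment file essentially just registers the new services
onto the relevant roles; a hypothetical sketch (template paths and
registry names are assumptions, not the exact ones added here):
  resource_registry:
    OS::TripleO::Services::OpenDaylightApi: ../puppet/services/opendaylight-api.yaml
    OS::TripleO::Services::OpenDaylightOvs: ../puppet/services/opendaylight-ovs.yaml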
Partially-Implements: blueprint opendaylight-integration
Depends-On: I666dc0874f1d11a72a62d796f4f6d41f7aa87a3f
Change-Id: Ide69e20cbf2ec6151953cb23e51478b770aca17f
Signed-off-by: Tim Rozet <trozet@redhat.com>
|
|
This aligns with the new naming conventions in puppet-tripleo, so
the keys can be more easily generated from the service_names.
Change-Id: Idb4a740e70257e3c69d8ec7d0c88594cc091b6a7
Partially-Implements: blueprint custom-roles
Depends-On: I423b544df174254ac511b906b0c570e701678022
|
|
To enable composable generation of this switch the key names
to align with the service_name of each service.
Note that this should depend on I423b544df174254ac511b906b0c570e701678022
and previously passed CI with that defined, but because we now run
gate validation jobs on puppet-tripleo it's impossible to land, so
this now contains both old and new hiera keys temporarily, which will
be removed when the puppet-tripleo patch lands.
Change-Id: I7febf28bf409e25e8e5961ab551b6d56bb11e0c6
Partially-Implements: blueprint custom-roles
|
|
|
|
Allows the installation and configuration of Manila.
Supports the generic driver only. This has a dependency on the
puppet-tripleo classes for manila, where the puppet-specific
config now lives.
The review at https://review.openstack.org/#/c/315658/ has been
merged into this one, as of v68, so manila lands as a composable
service. This was brought up on the mailing list at [1]
[1] http://lists.openstack.org/pipermail/openstack-dev/2016-May/096126.html
Co-Authored-By: Marios Andreou <marios@redhat.com>
Implements: blueprint composable-services-within-roles
Depends-On: I444916d60a67bf730bf4089323dba1c1429e2e71
Depends-On: I9eda4b3364e5c59342761a1ec71b0eb567c69cf1
Depends-On: I571b65a5402c1028418476a573ebeb9450ed00c9
Change-Id: I7acebac4354fca1f8d7ff6c343c1346bf29b81c6
|
|
Currently we have hard-coded parameters for each role, but to enable
custom roles, we need to pass a generic hosts list that can be joined
for all enabled roles.
Change-Id: I0606f462ff61c3a541342b63fee7d46ebfd1f4e0
Partially-Implements: blueprint custom-roles
|
|
Migrate puppet/hieradata/*.yaml parameters to puppet/services/*.yaml
except for some services that are not composable yet.
Co-Authored-By: Juan Antonio Osorio Robles <jaosorior@redhat.com>
Change-Id: I7e5f8b18ee9aa63a1dffc6facaf88315b07d5fd7
|
|
Currently we have a special controller-only deployment which writes
the name/ip of the "bootstrap node", e.g the cluster master, which
defaults to the first node in the Controller ResourceGroup.
Now we're moving to fully composable services/roles, it's possible
folks will want to deploy services that expect to detect the bootstrap
node (e.g. so only one node does a DB sync) for non-controller roles.
So, take this opportunity to combine the bootstrap node deployment with
the "all nodes" data, such that we deploy the same data for all roles.
Because the bootstrap node data is per-role cluster, rather than truly
global, we pass it via input_values into each per-role Deployment.
At some future point we might consider renaming this, e.g to
something which describes per-cluster config vs "all nodes",
but as a first step let's just rationalize the resources.
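A rough sketch of passing the per-role bootstrap data into each
Deployment via input_values (all names hypothetical, and assuming the
role's nested server template exposes hostname/ip_address attributes):
  ExampleRoleAllNodesDeployment:
    type: OS::Heat::StructuredDeployments
    properties:
      servers: {get_param: example_role_servers}
      config: {get_resource: allNodesConfig}
      input_values:
        # first node of this role's cluster acts as the bootstrap node
        bootstrap_nodeid: {get_attr: [ExampleRole, resource.0.hostname]}
        bootstrap_nodeid_ip: {get_attr: [ExampleRole, resource.0.ip_address]}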
Change-Id: I4011526a13c51b3d0f95c17fe8ed38115b4fdce4
|
|
Change-Id: I1921115cb6218c7554348636c404245c79937673
Depends-On: I7ac096feb9f5655003becd79d2eea355a047c90b
Depends-On: I871ef420700e6d0ee5c1e444e019d58b3a9a45a6
|
|
Note that this change is not enough yet to deploy bare metal instances,
it only deploys Ironic services themselves and makes sure they work.
Also it does not support HA for now.
Co-Authored-By: Dmitry Tantsur <dtansur@redhat.com>
Partially-implements: blueprint ironic-integration
Change-Id: I541be905022264e2d4828e7c46338f2e300df540
|
|
|
|
* Deploy Gnocchi API.
* Storage backends: swift, rbd and file.
* Indexer backend defaults to mysql
* Configure Ceilometer to send metric data to Gnocchi
* Pacemaker config
Depends-On: Ic8778a3104e0ed0460423e4bf857682220dc5802
Depends-On: I7d2eb9405e0171fc54fa0b616122f69db5f51ce2
Co-Authored-By: Pradeep Kilambi <pkilambi@redhat.com>
Change-Id: Ifde17b1ab8fa2b30544633e455e1c7eb475705aa
|