In the great rebase following the JINJA ALL THE THINGS changes, we lost
critical functionality in the fluentd client service. This review
restores the missing features.
Change-Id: I7c23f16f81e75f3da6a24587b2eb8385b3e920a4
Closes-Bug: #1630692
Co-Authored-By: Juan Antonio Osorio Robles <jaosorior@redhat.com>
Depends-On: Ic6fec1057439ed9122d44ef294be890d3ff8a8ee
Change-Id: I754c4a41d8a294a4c7c18bd282ae014efd4b9b16
Closes-Bug: #1628521
These hard-coded references to the Controller role mean that
things won't work if the keystone service is moved to any other
role, so we need to generate the lists dynamically based on the
enabled services for each role.
Change-Id: I5f1250a8a1a38cb3909feeb7d4c1000fd0fabd14
Closes-Bug: #1629096
This will be used for internal (or even public) TLS, for when
certmonger is generating the certificates. This same setting is used
for the undercloud with the generate_service_certificate option.
Change-Id: Ic54fe512b9ed5c71417a66491b7954e653f660b6
The management network does not have a VIP, so it was wrong to
generate a cloud name and hieradata for it. The network that
actually needs a name and a hosts entry is the ctlplane network,
which does have a VIP and services that use it.
bp tls-via-certmonger
Closes-Bug: #1621742
Change-Id: I163b2c7b5684da6dc290636f54eefe3f2b0c3e3f
Keystone doesn't provide separate flags to indicate that each of its
endpoints is enabled, so currently we have to manually add its
network to all-nodes-config.
bp tls-via-certmonger
Change-Id: Ibecd78706e84853107f698ba411a0c05e6f5be52
The nodes need to be aware of the FQDNs for the specific endpoints
in the cloud, either to set the entries in /etc/hosts or to select
an appropriate hostname for a certificate to be generated.
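As a sketch of the idea (the key names here are illustrative, not
necessarily the exact ones this change introduces), each node could
receive per-network FQDNs as hieradata such as:

    fqdn_internal_api: overcloud-controller-0.internalapi.example.com
    fqdn_storage: overcloud-controller-0.storage.example.com

Entries of this shape can then populate /etc/hosts or serve as the
subject name in a certificate request.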
bp tls-via-certmonger
Change-Id: I9b4645b937a344f46ec18a9a68c5afa2bc5206d0
Previously we weren't creating the Redis VIP in keepalived, causing Redis
to be unusable in non-HA deployments. This is now fixed.
Depends-On: I0bb37f6fb3eed022288b2dcfc7a88e8ff88a7ace
Change-Id: I0ecfda1e6ad5567f6f58d60bf418bc91761833ab
Closes-Bug: #1618510
Move Redis VIP from controller-only to all nodes so that we don't assume
where Redis is deployed.
Change-Id: I55f8d48e3e077951fbcc88158dd6f21a2fe5f457
Related-Bug: #1618510
Partially-Implements: blueprint custom-roles
This adds a mapping of which service runs on which network. This
information can be used to fetch a certificate depending on the
network (since each network uses a different hostname).
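For illustration, the mapping has this shape (service and network
names are examples, not an exact excerpt):

    ServiceNetMap:
      KeystonePublicApiNetwork: internal_api
      GlanceApiNetwork: storage

A consumer can then look up a service's network and derive the
hostname (and thus the certificate) to use on it.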
Change-Id: I176245da591bea28aeabf3d2b552f24456c98c43
This makes it easier to access the VIP data for other node types and
decouples it from the controller role.
Change-Id: I71125576ec93889fed134b92fb59f7e7dc9920c4
To avoid the hard-coded references which won't work with
composable roles, we instead default to the rabbitmq_node_ips
list in the per-service puppet-tripleo profiles.
Change-Id: I76b7e06781fdd5d969503b6d73423bb3f5f7a41f
Depends-On: Ie53c93456529420588eb1927703ea91b54095d87
Partially-Implements: blueprint custom-roles
Some puppet interfaces require a comma-separated list of hostnames
where a service is running, so generate it in a similar way to the
service IPs.
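A minimal sketch of the kind of Heat intrinsic involved (the
parameter name is invented for illustration):

    outputs:
      service_hostnames:
        description: Comma-separated list of hostnames running the service
        value:
          list_join:
            - ','
            - {get_param: ServiceHostnameList}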
Change-Id: Icdf5d993d089dc94035194bdbd52299fcbc793be
Partially-Implements: blueprint custom-roles
Until we fix the bug where, at validation time, Heat doesn't know
whether a parent passes a value into the nested template, this may
serve as a workaround for validation failing when no default is found.
Change-Id: I02b0764ac29700cd29584e356ac0cfebcda09a36
Closes-Bug: #1619352
Pass the list of ceph nodes to the ceph_mon profile via
the service template - this requires some fixup to the
profile to handle the ipv6 case.
Note this also aligns the ServiceNetMap keys so that the
composable node_ips logic will generate the lists when
the ceph_mon service is enabled.
Change-Id: If8a5c65f17e677fe62243b3aa746fd642f72d2b0
Depends-On: I481dd2cd2cde7f1491080e6d9c7dcb7047c22de1
Partially-Implements: blueprint custom-roles
Currently we have a hard-coded list of IPs for the various services that
run on the controller. Instead, we can dynamically generate that list
of per-service IPs; initially this covers only the controller, but the
approach can be extended to work for any role.
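The generated hieradata would look something like this (key names
follow the per-service pattern; addresses are invented for the
example):

    keystone_node_ips:
      - 172.16.2.10
      - 172.16.2.11
    glance_api_node_ips:
      - 172.16.2.12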
Change-Id: I3c8a946e439539d239ad7281a1395414df0893eb
Partially-Implements: blueprint custom-roles
This adds a list of all enabled service_names in the
enabled_services key, and also generates some boolean
values, e.g. service_name_enabled, which is more convenient
for some usage (such as haproxy, where we need an easy way to
set a flag saying whether a given service is enabled).
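For example, the resulting hieradata might have this shape (service
names chosen for illustration):

    enabled_services:
      - keystone
      - glance_api
    keystone_enabled: true
    glance_api_enabled: true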
Partially-Implements: blueprint custom-roles
Change-Id: I62273f403838893602816204d9bc50d516c0057f
Introduces environment files for deploying OpenDaylight in two ways:
- ODL only managing L2 as an ML2 plugin
- ODL managing L2 and L3 DVR, by replacing NeutronL3Agent
Two services are added: one to install ODL and configure OVS on the
Controllers, and another to only configure OVS on compute nodes.
Partially-Implements: blueprint opendaylight-integration
Depends-On: I666dc0874f1d11a72a62d796f4f6d41f7aa87a3f
Change-Id: Ide69e20cbf2ec6151953cb23e51478b770aca17f
Signed-off-by: Tim Rozet <trozet@redhat.com>
This aligns with the new naming conventions in puppet-tripleo, so
the keys can be more easily generated from the service_names.
Change-Id: Idb4a740e70257e3c69d8ec7d0c88594cc091b6a7
Partially-Implements: blueprint custom-roles
Depends-On: I423b544df174254ac511b906b0c570e701678022
To enable composable generation of this, switch the key names
to align with the service_name of each service.
Note that this should depend on I423b544df174254ac511b906b0c570e701678022
and previously passed CI with that defined, but because we now run
gate validation jobs on puppet-tripleo it's impossible to land, so
this now temporarily contains both the old and new hiera keys; the
old ones will be removed when the puppet-tripleo patch lands.
Change-Id: I7febf28bf409e25e8e5961ab551b6d56bb11e0c6
Partially-Implements: blueprint custom-roles
Allows the installation and configuration of Manila.
Supports the generic driver only. This has a dependency on the
puppet-tripleo classes for manila where the puppet specific
config now lives.
The review at https://review.openstack.org/#/c/315658/ has been
merged into this one as of v68, so manila lands as a composable
service. This was brought up on the mailing list at [1]
[1] http://lists.openstack.org/pipermail/openstack-dev/2016-May/096126.html
Co-Authored-By: Marios Andreou <marios@redhat.com>
Implements: blueprint composable-services-within-roles
Depends-On: I444916d60a67bf730bf4089323dba1c1429e2e71
Depends-On: I9eda4b3364e5c59342761a1ec71b0eb567c69cf1
Depends-On: I571b65a5402c1028418476a573ebeb9450ed00c9
Change-Id: I7acebac4354fca1f8d7ff6c343c1346bf29b81c6
Currently we have hard-coded parameters for each role, but to enable
custom roles, we need to pass a generic hosts list that can be joined
for all enabled roles.
Change-Id: I0606f462ff61c3a541342b63fee7d46ebfd1f4e0
Partially-Implements: blueprint custom-roles
Migrate puppet/hieradata/*.yaml parameters to puppet/services/*.yaml
except for some services that are not composable yet.
Co-Authored-By: Juan Antonio Osorio Robles <jaosorior@redhat.com>
Change-Id: I7e5f8b18ee9aa63a1dffc6facaf88315b07d5fd7
Currently we have a special controller-only deployment which writes
the name/IP of the "bootstrap node", e.g. the cluster master, which
defaults to the first node in the Controller ResourceGroup.
Now that we're moving to fully composable services/roles, it's possible
folks will want to deploy services that expect to detect the bootstrap
node (e.g. so only one node does a DB sync) for non-controller roles.
So, take this opportunity to combine the bootstrap node deployment with
the "all nodes" data, such that we deploy the same data for all roles.
Because the bootstrap node data is per role cluster, rather than truly
global, we pass it via input_values into each per-role Deployment.
At some future point we might consider renaming this, e.g. to
something which describes per-cluster config vs "all nodes",
but as a first step let's just rationalize the resources.
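A rough sketch of the wiring (resource and attribute names invented
for illustration):

    ControllerAllNodesDeployment:
      type: OS::Heat::StructuredDeployments
      properties:
        config: {get_resource: allNodesConfig}
        servers: {get_param: controller_servers}
        input_values:
          bootstrap_nodeid: {get_attr: [Controller, resource.0.hostname]}
          bootstrap_nodeid_ip: {get_attr: [Controller, resource.0.ip_address]}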
Change-Id: I4011526a13c51b3d0f95c17fe8ed38115b4fdce4
Change-Id: I1921115cb6218c7554348636c404245c79937673
Depends-On: I7ac096feb9f5655003becd79d2eea355a047c90b
Depends-On: I871ef420700e6d0ee5c1e444e019d58b3a9a45a6
Note that this change is not enough yet to deploy bare metal instances;
it only deploys the Ironic services themselves and makes sure they work.
It also does not support HA for now.
Co-Authored-By: Dmitry Tantsur <dtansur@redhat.com>
Partially-implements: blueprint ironic-integration
Change-Id: I541be905022264e2d4828e7c46338f2e300df540
* Deploy Gnocchi API.
* Storage backends: swift, rbd and file.
* Indexer backend defaults to mysql.
* Configure Ceilometer to send metrics data to Gnocchi.
* Pacemaker config.
Depends-On: Ic8778a3104e0ed0460423e4bf857682220dc5802
Depends-On: I7d2eb9405e0171fc54fa0b616122f69db5f51ce2
Co-Authored-By: Pradeep Kilambi <pkilambi@redhat.com>
Change-Id: Ifde17b1ab8fa2b30544633e455e1c7eb475705aa
Previously we tried to use UpdateIdentifier for two different things:
to tell whether to perform a package update, and also to tell whether
the top-level stack is being created or updated (which was incorrect
and resulted in bug 1567384; an attempt to work around that bug resulted
in bug 1567385).
We cannot use Heat's "action" conditionals in some cases, because they
refer to the direct parent stack, which can yield undesirable results
when introducing new nested stacks or temporarily no-opping something
and then adding it back (in both these cases, "action" would be
considered "CREATE", even though the top-level stack is in "UPDATE").
So tripleoclient passes a new parameter StackAction to tell whether the
top-level stack is being created or updated, and we make use of
that. (It seems there's no better way of getting this info from within
the nested Heat stacks.)
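The parameter could be declared along these lines (a sketch, not the
exact template text):

    parameters:
      StackAction:
        type: string
        description: Whether the top-level stack is being created or updated.
        constraints:
          - allowed_values: [CREATE, UPDATE]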
Change-Id: Ie14ddbff15e7ed21aaa3fcdacf36e0040f912382
Depends-On: I9dc3b4cd8a6a71df34d8babf0e4c6505041f5311
Closes-Bug: #1567384
Related-Bug: #1567385
Ceilometer Alarm is deprecated in Liberty in favor of Aodh.
This patch:
* manages Aodh Keystone resources
* deploys Aodh API under WSGI, plus the Notifier, Listener and Evaluator
* manages new parameters to customize the Aodh deployment
* uses the ceilometer DB for the upgrade path
* handles the pacemaker config
* adds migration logic to remove pcs resources
Depends-On: I5333faa72e52d2aa2a622ac2d4b60825aadc52b5
Depends-On: Ib6c9c4c35da3fb55e0ca8e2d5a58ebaf4204d792
Co-Authored-By: Emilien Macchi <emilien@redhat.com>
Change-Id: Ib47a22884afb032ebc1655e1a4a06bfe70249134
As discussed at https://bugzilla.redhat.com/show_bug.cgi?id=1299265,
when providing a list of IPv6 addresses as the memcache_node_ips,
the resulting nova.conf entry can't be parsed properly.
This adds a memcache_node_ips_v6 entry, which has the required format, e.g.
inet6:[ADDR1],inet6:[ADDR2],inet6:[ADDR3]
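Illustrative hieradata with both forms (addresses invented for the
example):

    memcache_node_ips:
      - 172.16.2.10
      - 172.16.2.11
    memcache_node_ips_v6:
      - inet6:[fd00:fd00:fd00:2000::10]
      - inet6:[fd00:fd00:fd00:2000::11]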
Closes-Bug: #1536103
Change-Id: I7f95fa063cbba279c4c2e270841f0a279d2be2f6
This is just a revert to see if reverting this gets back to a normal CI run time.
This reverts commit f72aed85594f223b6f888e6d0af3c880ea581a66.
Change-Id: I04a0893f6cf69f547a4db26261005e580e1fc90b
Previously we were always appending the :port suffix to the list
of rabbitmq nodes, but the resulting syntax was invalid for IPv6.
This change wires rabbit_hosts from the templates, as already
happens for the other services. The port can be customized using
rabbit_port.
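The difference at a glance (a sketch; addresses invented for the
example):

    # IPv4: appending :port works
    rabbit_hosts: 172.16.2.10:5672,172.16.2.11:5672
    # IPv6: the address must be bracketed, or the colons are ambiguous
    rabbit_hosts: '[fd00::10]:5672,[fd00::11]:5672'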
Change-Id: Iecc7a97d46d7de17e85398c57996c104c9125b0e
Ceilometer Alarm is deprecated in Liberty in favor of Aodh.
This patch:
* manages Aodh Keystone resources
* deploys Aodh API under WSGI, plus the Notifier, Listener and Evaluator
* manages new parameters to customize the Aodh deployment
* uses the ceilometer DB for the upgrade path
* handles the pacemaker config
Depends-On: I9e34485285829884d9c954b804e3bdd5d6e31635
Depends-On: I891985da9248a88c6ce2df1dd186881f582605ee
Depends-On: Ied8ba5985f43a5c5b3be5b35a091aef6ed86572f
Co-Authored-By: Pradeep Kilambi <pkilambi@redhat.com>
Change-Id: I58d419173e80d2462accf7324c987c71420fd5f6
Some assignments must be fixed in order to run midonet properly with
HA pacemaker and when network isolation is enabled.
Change-Id: I69fb3a1911cfe3baea3349da8f3e185dddf60a95
Integration of OpenStack data processing service (sahara) with
TripleO.
- Deploys sahara in distributed mode (separate api and engine
processes on each controller node)
- Load balancing w/haproxy
- RabbitMQ/MySQL supported per current TripleO standard
- Minimal configurability at this time
Change-Id: I77a6a69ed5691e3b1ba34e9ebb4d88c80019642c
Partially-implements: blueprint sahara-integration
Depends-On: I0f0a1dc2eaa57d8226bad8cfb250110296ab9614
Depends-On: Ib84cc59667616ec94e7edce2715cbd7dd944f4ae
Depends-On: I9fe321fd4284f7bfd55bd2e69dcfe623ed6f8a2a
The completion-signal input is no longer needed because, for some
time, 99-refresh-completed has supported using per-deployment
signal URLs instead, provided the config group is correctly set
to os-apply-config.
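A minimal sketch of a config using that group (resource name and
config body invented for illustration):

    ExampleConfig:
      type: OS::Heat::StructuredConfig
      properties:
        group: os-apply-config
        config:
          example_key: {get_input: example_value}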
Change-Id: I76cb5331917ff54e978bd22b9dea0c1a2c65a928