Change-Id: I035c26e0f50e4b3fc0f6085fa5a4bf524e4852b7
These are now passed via the heat profiles in t-h-t (via
heat-base.yaml and heat-engine.yaml) and use the actual names of
keystone parameters instead.
Change-Id: Id0f5dd03b6757df989339c93b58a5b7eac3402a2
Depends-On: I0e5124d57fdc519262fdec2dbeaaac85afaeebdf
This optionally enables TLS for Barbican API in the internal network.
If internal TLS is enabled, each node that is serving the Barbican API
service will use certmonger to request its certificate.
bp tls-via-certmonger
Change-Id: I1c1d3dab9bba7bec6296a55747e9ade242c47bd9
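
A minimal sketch of such a per-node request, using the certmonger_certificate
type; the CA name, file paths, and $internal_fqdn variable are assumptions for
illustration, not the exact profile code:

  # Request an internal TLS certificate for the Barbican API via certmonger
  # (illustrative names only).
  if hiera('enable_internal_tls', false) {
    certmonger_certificate { 'barbican-api':
      ensure   => 'present',
      ca       => 'local',
      certfile => '/etc/pki/tls/certs/barbican-api.crt',
      keyfile  => '/etc/pki/tls/private/barbican-api.key',
      hostname => $internal_fqdn,
    }
  }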
The civetweb binding format is IP:PORT; this change ensures the IP
is enclosed in brackets if IPv6.
To do so we add the bind_ip and bind_port parameters to the
rgw service class.
Change-Id: Ib84fa3479c2598bff7e89ad60a1c7d5f2c22c18c
Co-Authored-By: Lukas Bezdicka <social@v3.sk>
Related-Bug: #1636515
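
A rough sketch of the bracket handling (the variable names mirror the new
parameters but are otherwise illustrative):

  # civetweb binds as IP:PORT, so an IPv6 address must be wrapped in
  # brackets before the port is appended.
  if is_ipv6_address($bind_ip) {
    $civetweb_bind_ip = "[${bind_ip}]"
  } else {
    $civetweb_bind_ip = $bind_ip
  }
  $rgw_frontends = "civetweb port=${civetweb_bind_ip}:${bind_port}"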
I'm not sure how this merged, but it's causing failures in other
patches to puppet-tripleo.
Change-Id: Ib20d349fa9abd6347739190bb29a02b6e3eb839d
This patch changes the rabbit_hosts config generation to work properly
with IPv6 addresses.
Closes-Bug: #1639881
Change-Id: I07cd983880a4a75a051e081dcb96134cb5c6f5e8
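
A sketch of the intended generation, assuming $rabbit_node_ips and
$rabbit_port hold the node list and client port:

  # Bracket IPv6 addresses so each entry parses as host:port.
  $rabbit_hosts = $rabbit_node_ips.map |$ip| {
    if is_ipv6_address($ip) {
      "[${ip}]:${rabbit_port}"
    } else {
      "${ip}:${rabbit_port}"
    }
  }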
Use rabbitmq_node_ips to find out where the rabbitmq nodes are, and use
the correct IPv6 syntax if required.
Closes-Bug: 1637443
Change-Id: Ibc0ed642931dd3ada7ee594bb8c70a1c3462206d
Rather than use the heat::keystone::domain class, which also includes the
configuration options, we should just create the heat user in keystone
independently of the configuration.
Change-Id: I7d42d04ef0c53dc1e62d684d8edacfed9fd28fbe
Related-Bug: #1638350
Closes-Bug: #1638626
This optionally enables TLS for Cinder API in the internal network.
If internal TLS is enabled, each node that is serving the Cinder API
service will use certmonger to request its certificate.
bp tls-via-certmonger
Change-Id: Ib4a9c8d3ca57f1b02e1bb0d150f333db501e9863
This optionally enables TLS for Nova API in the internal network.
If internal TLS is enabled, each node that is serving the Nova API
service will use certmonger to request its certificate.
Note that this doesn't enable internal TLS for the nova metadata
service since it doesn't run over httpd. This will be handled in
a later commit.
bp tls-via-certmonger
Change-Id: I88380a1ed8fd597a1a80488cbc6ce357f133bd70
Since the service_name is now being passed from t-h-t, we can drop it
from the puppet profile.
Change-Id: I724af8c355c3077be64cf472cedbca80af55da01
Depends-On: I13638cd1af52537bef8540f0d5fa5f5f7decd392
In order to make the zaqar service fully composable, the mongo ips need
to be calculated without assuming that mongo and zaqar are on the same
node.
Change-Id: I0b077e85ba5fcd9fdfd33956cf33ce2403fcb088
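
A minimal sketch of the decoupled lookup; the hiera key and port are
assumptions for the example:

  # Build the mongo connection list from the mongodb nodes' IPs instead
  # of assuming mongo is colocated with zaqar.
  $mongo_nodes = suffix(hiera('mongodb_node_ips'), ':27017')
  $mongo_uri   = sprintf('mongodb://%s', join($mongo_nodes, ','))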
In some cases, for instance when updating HAProxy from a non-SSL setup to
an SSL setup, haproxy's configuration is not reloaded. This is
problematic since we need HAProxy to serve the certificates and the new
endpoints.
This change forces the reload when puppet notices changes.
Change-Id: Ie1dd809e6beef33fadad48de55e488219fb7d686
Closes-Bug: #1636921
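
The forced reload can be sketched as a refreshonly exec subscribed to the
rendered config (resource names are illustrative, assuming the config is
built via concat):

  # Any change puppet makes to haproxy.cfg triggers a reload.
  exec { 'haproxy-reload':
    command     => 'systemctl reload haproxy',
    path        => ['/usr/bin', '/usr/sbin', '/bin', '/sbin'],
    refreshonly => true,
    subscribe   => Concat['/etc/haproxy/haproxy.cfg'],
  }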
Previously we did this with Pacemaker, but with the move to the NG HA
architecture we lost the ability to use NFS mounts as image storage for
Glance. This reimplements the mounting without utilizing Pacemaker. The
mount is by default also written to /etc/fstab so that it persists over
reboots, but this behavior can be disabled.
This could also go to puppet-glance eventually, but not yet -- we need
this backported to Newton because it's a TripleO regression. I don't
think puppet-glance would allow backporting this to Newton, because from
their point of view it would be an RFE rather than a regression.
Change-Id: I45ad34c36587a8d695069368cf791f1efb68256c
Related-Bug: #1635606
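
A sketch of the Pacemaker-free mount, with assumed parameter names; note
that ensure => 'mounted' both mounts the share and records it in
/etc/fstab:

  # Mount the Glance image store over NFS without Pacemaker.
  mount { '/var/lib/glance/images':
    ensure  => 'mounted',
    device  => $glance_nfs_share,
    fstype  => 'nfs',
    options => $glance_nfs_options,
  }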
To be able to monitor during deployment, we need the sensu clients and
fluentd collectors to be deployed as soon as possible.
Change-Id: I952f0d6de6f6327d5c923b8f1d7a5979758dbc59
In change I35921652bd84d1d6be0727051294983d4a0dde10 we want to remove
all those duplicate tcp_listen_options entries. One consequence of that
is that we need to set rabbitmq::tcp_keepalive to true via hiera
(as opposed to forcing it via the tcp_listen_options hash).
For this to work we need to remove this forced parameter override.
Note that even if I35921652bd84d1d6be0727051294983d4a0dde10 and this
change don't merge at exactly the same time, it is still okay, because
we still force tcp_keepalive to true via the tcp_listen_options.
Change-Id: I608477d5714a5081b3b4ab3b9fc2932bdd598301
This optionally enables TLS for gnocchi in the internal network.
If internal TLS is enabled, each node that is serving the gnocchi
service will use certmonger to request its certificate.
bp tls-via-certmonger
Change-Id: Ie983933e062ac6a7f0af4d88b32634e6ce17838b
This optionally enables TLS for aodh in the internal network.
If internal TLS is enabled, each node that is serving the aodh
service will use certmonger to request its certificate.
This, in turn, also configures a command to be run when the
certificate is refreshed (which requires the service to be
restarted).
bp tls-via-certmonger
Change-Id: I50ef0c8fbecb19d6597a28290daa61a91f3b13fc
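
The refresh hook can be sketched as a postsave command on the certmonger
request (resource and service names are illustrative):

  # certmonger runs this command after renewing the certificate,
  # restarting the vhost so the new certificate is picked up.
  certmonger_certificate { 'httpd-internal':
    ensure       => 'present',
    certfile     => '/etc/pki/tls/certs/httpd-internal.crt',
    keyfile      => '/etc/pki/tls/private/httpd-internal.key',
    postsave_cmd => 'systemctl restart httpd',
  }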
This optionally enables TLS for ceilometer in the internal network.
If internal TLS is enabled, each node that is serving the ceilometer
service will use certmonger to request its certificate.
This, in turn, also configures a command to be run when the
certificate is refreshed (which requires the service to be
restarted).
bp tls-via-certmonger
Change-Id: Ib5609f77a31b17ed12baea419ecfab5d5f676496
This optionally enables TLS for keystone in the internal network.
If internal TLS is enabled, each node that is serving the keystone
service will use certmonger to request its certificate.
This, in turn, also configures a command to be run when the
certificate is refreshed (which requires the service to be
restarted).
bp tls-via-certmonger
Change-Id: I303f6cf47859284785c0cdc65284a7eb89a4e039
Co-Authored-By: Juan Antonio Osorio Robles <jaosorior@redhat.com>
Change-Id: If2804b469eb3ee08f3f194c7dd3290d23a245a7a
Without this, neutron-server fails to start and communication to the
ODL REST interface will not work.
Partial-Bug: 1633630
Change-Id: Ifd906db4e6062ac271c2147fe1149b1009d06ae2
Signed-off-by: Tim Rozet <trozet@redhat.com>
This patch updates the Nova profile so that we set memcached
servers correctly for the Nova keystone auth_token middleware.
Most of the hiera settings for ::nova::keystone::authtoken are
already included in the t-h-t nova-api service.
Change-Id: I3b7ff02abbd0d5e0c38232d02b33e4c7bc411120
Closes-bug: #1633595
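
A minimal sketch of the setting, assuming the memcached node IPs come
from hiera and the default port:

  # Point the auth_token middleware at the memcached servers.
  class { '::nova::keystone::authtoken':
    memcached_servers => suffix(any2array(hiera('memcached_node_ips')), ':11211'),
  }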
When deploying via IPv6, rabbitmqctl commands have the following
issues:
- `rabbitmqctl cluster_status` shows nodedown alerts
- list_queues / list_connections hang
- `rabbitmqctl node_health_check` fails with an error
Note that there is no issue when performing activity on the RHOS setup
(from horizon/CLI), i.e. the RHOS environment functions as expected.
For example:
  sudo rabbitmqctl node_health_check -n rabbit@node1
  Checking health of node 'rabbit@node1' ...
  Health check failed:
  health check of node 'rabbit@node1' fails: nodedown
The problem is that we are missing the following in
/etc/rabbitmq/rabbitmq-env.conf:
  RABBITMQ_CTL_ERL_ARGS="-proto_dist inet6_tcp"
Fix these by setting the appropriate RABBITMQ_CTL_ERL_ARGS when
deploying IPv6.
Closes-Bug: #1633693
Change-Id: I53f4e76e687b3966fbb74fd0c2d83f05176630de
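
A sketch of the fix via the rabbitmq module's environment handling
(assuming puppetlabs-rabbitmq's environment_variables parameter):

  # Written to /etc/rabbitmq/rabbitmq-env.conf so rabbitmqctl uses
  # inet6_tcp to reach the node.
  class { '::rabbitmq':
    environment_variables => {
      'RABBITMQ_CTL_ERL_ARGS' => '"-proto_dist inet6_tcp"',
    },
  }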
We use the rabbit_hosts configuration for most of our services, but we
haven't been adding the configured port. This patch appends the port
provided to the service's heat template to each IP in the list.
Note: while we could use the value set for the rabbitmq server in
rabbitmq::port, that doesn't allow for dealing with SSL. This is also
backwards compatible with the RabbitClientPort parameters used in the
heat templates.
Change-Id: I0000f039144a6b0e98c0a148dc69324f60db3d8b
Closes-Bug: #1633580
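
The append itself is a one-liner with stdlib's suffix(), sketched here
with assumed variable names:

  # Append the heat-provided client port to every rabbit host entry.
  $rabbit_hosts = suffix(any2array($rabbit_nodes), ":${rabbit_port}")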
Since moving to composable services/roles, there was some logic here
that relied on a variable to enable ODL, rather than letting the service
itself decide where ODL is enabled. Now that the ODL and ODL OVS
configuration are split into two different services, we can make these
truly composable.
Partial-Bug: 1633625
Change-Id: Ia55c05e12d5d434111a13e1ed795da530e3ff4a5
Signed-off-by: Tim Rozet <trozet@redhat.com>