|
Found this today when rebasing the undercloud installer.
The puppet/role.role.j2.yaml template has an invalid get_resource
reference that causes a cryptic heat stack failure.
Change-Id: Icfb7d73a1c4d02213b23a427605f2b0d5eaa984f
|
Clean up legacy params for the WSGI config.
Change-Id: Ic775de171c95d43d9273e1a29db2ab685fdf7706
Depends-On: I59b3b36be33268fa6e261a7db3c4aa8e8e712ffb
|
puppet-nova renamed nova::wsgi::apache to nova::wsgi::apache_api to
make room for nova::wsgi::apache_placement (for the nova placement API).
This patch adds the required parameters before we make the switch in
puppet-tripleo.
Legacy parameters will be removed once the switch is done in
puppet-tripleo.
Change-Id: I5fc99062d349597393e2248c66f2d863029c7730
|
Tried to use the heat-engine composable service in the Undercloud and
discovered that my software deployments (when spinning up an overcloud)
weren't getting signals from my t-h-t-configured undercloud heat.
This patch resolves the issue by configuring the metadata URLs
for Heat.
Change-Id: I57c9e7010bfe4afc6e62fb4c3406716d11cdfa28
Closes-bug: #1653985
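For reference, a minimal sketch of the hiera settings involved, assuming
the puppet-heat parameter names and using placeholder endpoint addresses:

    heat::engine::heat_metadata_server_url: http://192.0.2.1:8000
    heat::engine::heat_waitcondition_server_url: http://192.0.2.1:8000/v1/waitcondition
    heat::engine::heat_watch_server_url: http://192.0.2.1:8003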
|
For the cache monitoring technology (CMT) feature to work, the libvirt
settings in the nova config should have the perf events enabled so that
nova emits them and telemetry can capture them.
Depends-On: Ia27e6831f3f6e9cdeaacb650039be5c81b90cb40
Change-Id: I92c318008b965a6527acbce85b41a545eda7ee18
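A rough sketch of enabling this in an environment file; the parameter
name LibvirtEnabledPerfEvents is assumed here to be what wires into
nova's libvirt enabled_perf_events option:

    parameter_defaults:
      # assumed parameter name; events listed are the CMT-related perf events
      LibvirtEnabledPerfEvents:
        - cmt
        - mbmt
        - mbml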
|
This change pulls the hard-coded value out of puppet-tripleo to later
allow people to skip the cell0 creation if they want a more complex cell
v2 setup for nova.
Change-Id: I08119d781ef60750cc19753bc03190e413159925
Related-Bug: #1649341
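As a sketch of the idea, the value in question is the cell0 database
connection; a hypothetical override could look like this (the parameter
name and URI are illustrative, not the exact names introduced):

    parameter_defaults:
      # hypothetical parameter; cell0 conventionally reuses the nova DB host
      NovaCell0DatabaseConnection: mysql+pymysql://nova:secret@192.0.2.10/nova_cell0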
|
When a service connects to the database VIP from the node hosting this
VIP, the resulting TCP socket has a src address which is by default
bound to the VIP as well. If the VIP is failed over to another node
while the socket's Send-Q is not empty, TCP keepalive won't engage and
the service will become unavailable for a very long time (more than
10 minutes by default).
To prevent failover issues, DB connections should have the src address
of their TCP socket bound to the IP of the network interface used for
MySQL traffic. This is achieved by passing a new option to the
database connection URIs. This option is available starting from
PyMySQL 0.7.9-2.
We use a new intermediate variable in hiera to hold the IP to be used
as a source address for all DB connections. All services adapt their
database URI accordingly.
Moreover, a new YAML validation check is added to guarantee that new
services will construct their database URI appropriately.
Change-Id: Ic69de63acbfb992314ea30a3a9b17c0b5341c035
Closes-Bug: #1643487
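A sketch of the resulting URI shape, assuming the PyMySQL bind_address
query option, with 192.0.2.10 as the DB VIP and 192.0.2.11 as the IP of
the local interface used for MySQL traffic:

    # bind_address pins the socket's source address to the local IP
    nova::database_connection: mysql+pymysql://nova:secret@192.0.2.10/nova?bind_address=192.0.2.11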
|
Heat now supports release name aliases, so we can replace
the inconsistent mix of date-based versions with one consistent
version that aligns with the supported version of heat for this
t-h-t branch.
This should also help new users who sometimes copy/paste old templates
and discover intrinsic functions in the t-h-t docs don't work because
their template version is too old.
Change-Id: Ib415e7290fea27447460baa280291492df197e54
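For example, a template header changes from a date-based version to a
release alias along these lines:

    # before
    heat_template_version: 2016-10-14
    # after
    heat_template_version: ocata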
|
The cell v2 setup requires the transport URL for nova. We need to
provide mysql with the rabbit connection information so that it can use
it when setting up the cell information.
Change-Id: I43ba77cd4c8da7c6dc117ab0bd53e5cd330dc3de
Related-Bug: #1649341
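As an illustration, the transport URL follows the oslo.messaging rabbit
format; the hiera key and the credentials below are placeholders:

    # hypothetical key; the value shape is the standard rabbit transport URL
    nova::transport_url: rabbit://guest:secret@192.0.2.5:5672/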
|
This enables the deployer to dynamically add nova metadata to the
servers based on the output of service profiles that implement the
metadata_settings key in the profiles' role_data output.
One can set an implementation via the OS::TripleO::ServerMetadataHook
resource, which is currently set to OS::Heat::None; with that default
implementation, if left untouched, it does nothing.
Besides the metadata_settings list, this hook also takes the name of
the node it's setting the metadata for. This is useful for nova
vendordata plugins that can parse said metadata.
Change-Id: I8a937f711f0b90156fbb6c4632760435ef846474
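A sketch of how a deployer could wire in an implementation through the
resource registry (the hook template path is a placeholder):

    resource_registry:
      OS::TripleO::ServerMetadataHook: /path/to/my-metadata-hook.yaml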
|
In order to call commands that need to be run on a single node, we
create a new per-service variable that will contain the first node of
each role containing the service.
Change-Id: I03e8685f939e8ae1fcd8b16883b559615042505d
Partial-Bug: #1615983
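As a sketch, the variable can then be consumed from hiera to gate
single-node commands; the key shape below is an assumption, not the
exact name introduced:

    # hypothetical per-service key, resolving to the first node of the
    # first role containing the service
    mysql_bootstrap_node_name: overcloud-controller-0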
|
Custom role deployments were not working when the ODL API was on a
different node, due to firewall rules blocking traffic. This patch adds
the missing rules for REST communication to ODL (8081 by default), the
OVSDB connection (6640), and the OpenFlow protocol (6653).
Closes-Bug: 1651476
Depends-On: I1f2af2793d040fda17bf73252afe59434d99f31f
Change-Id: Ic0119c783d01e864c49fa06a66fdd68c059a726b
Signed-off-by: Tim Rozet <trozet@redhat.com>
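A sketch of the added rules in the tripleo firewall_rules hiera format;
the rule number and service key are illustrative:

    tripleo.opendaylight_api.firewall_rules:
      '137 opendaylight api':
        dport:
          - 8081
          - 6640
          - 6653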
|
The ODL username and password are already present in the OpenDaylightApi
service. However, when moving the OpenDaylightApi service to its own
custom role, the Controller/Compute nodes no longer have access to these
hiera values. This patch adds them to the OpenDaylightOvs service as well.
Closes-Bug: 1651499
Depends-On: I418643810ee6b8a2c17a4754c83453140ebe39c7
Change-Id: I169fdad4c94bd6dfc1fe7cde3d6b19b36d916af7
Signed-off-by: Tim Rozet <trozet@redhat.com>
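A sketch of the idea in the OpenDaylightOvs service's config_settings,
assuming the puppet-opendaylight hiera key names:

    config_settings:
      # assumed hiera keys; the parameters come from the service template
      opendaylight::username: {get_param: OpenDaylightUsername}
      opendaylight::password: {get_param: OpenDaylightPassword}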
|
Depends-On: Ice921f0fdd4bec6de50e62c39c447ee40dc0e8f5
Change-Id: I4109ac83c32ee2365695611009579a8b117134ff
|
Depends-On: I53b156505e08625d56ed6a302cf5b5c30e8e288c
Change-Id: Id9791d8a19a74c1f0855e794170f66542f88a548
|
Since we have aodh enabled for alarms, we should set the
notifier to the default queue alarm.all.
Closes-bug: #1590473
Change-Id: Ibcb5076424ac2ddcd18ff717d82da1aec4c035cb
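A sketch of the corresponding setting, assuming an
EventPipelinePublishers-style parameter controlling the ceilometer event
pipeline publisher:

    parameter_defaults:
      # assumed parameter name; the publisher string targets the alarm.all queue
      EventPipelinePublishers:
        - notifier://?topic=alarm.all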
|
In freeipa-enroll.yaml, the node may already have been enrolled (via a
cloud-init script); in that case, the OTP and the FreeIPA server are
optional. However, we still need to get a Kerberos ticket, which is the
last step of this script, since that ticket is what certmonger will use
to request the certificates in subsequent steps.
Change-Id: I7e9d6a747cdcbe81c9a74a17db5e91aa9d459f65
|
It turns out the puppet-mistral change this depends on broke
introspection, so we need to back it out for now.
This reverts commit ed029e5bf279945e82bff8766af4093856a7ac6a.
Change-Id: I828478267935cdc68aa24de8c9dc2d12fcadb631
|
Currently, when the docker environments are invoked, every node runs the
boot script which replaces os-collect-config with the heat-agents
container. This should only happen on Compute nodes for now, and each
role will be converted to heat-agents one at a time.
This change implements a role-specific NodeUserData resource and uses
that mechanism to run docker/firstboot/install_docker_agents.yaml only
on Compute nodes.
Change-Id: Id81811dbcaf0e661c3980aa25f3ca80db5ef0954
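A sketch of the resulting mapping in a docker environment file, assuming
the per-role resource naming:

    resource_registry:
      # only Compute gets the heat-agents firstboot script
      OS::TripleO::Compute::NodeUserData: ../docker/firstboot/install_docker_agents.yaml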
|
We can't run this during the upgrade steps, because some things need to
happen before any role configuration happens, e.g. installing the new
hiera heat-config hook, which must be done before e.g.
"ControllerDeployment" runs, or the stack update hangs.
Partially-Implements: blueprint overcloud-upgrades-per-service
Change-Id: I365b57513590662c3f78a33dc625747f457c48c5
|
This allows us to take advantage of the composable roles hiera
settings to connect the plugin to the northd/ovndb API without
needing to hard-code the IP of the node running the service.
Change-Id: I2508d48f81c1819ae3521fff271c0bdc50724604
Depends-On: I9af7bd837c340c3df016fc7ad4238b2941ba7a95
Closes-Bug: #1634171
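As an illustration, the plugin endpoint can be derived from
composable-service hiera instead of a hard-coded IP; the key names below
are assumptions:

    # hypothetical: resolve the OVN NB DB host from service-generated hiera
    neutron::plugins::ml2::ovn::ovn_nb_connection: "tcp:%{hiera('ovn_dbs_vip')}:6641"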
|