I tried to use the heat-engine composable service in the undercloud and
discovered that my software deployments (when spinning up an overcloud)
weren't getting signals from the t-h-t configured undercloud Heat.
This patch resolves the issue by configuring the metadata URLs
for Heat.
Change-Id: I57c9e7010bfe4afc6e62fb4c3406716d11cdfa28
Closes-bug: #1653985
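
A minimal sketch of the kind of undercloud hieradata this implies,
assuming the puppet-heat parameters heat::engine::heat_metadata_server_url
and heat::engine::heat_waitcondition_server_url (the addresses below are
illustrative, not taken from this patch):

    # illustrative values; the URLs point at the undercloud heat-api-cfn endpoint
    heat::engine::heat_metadata_server_url: 'http://192.0.2.1:8000'
    heat::engine::heat_waitcondition_server_url: 'http://192.0.2.1:8000/v1/waitcondition'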
For the cache monitoring technology (CMT) feature to work, the libvirt
settings in the nova config must have the relevant perf events enabled,
so that nova emits them and telemetry can capture them.
Depends-On: Ia27e6831f3f6e9cdeaacb650039be5c81b90cb40
Change-Id: I92c318008b965a6527acbce85b41a545eda7ee18
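
A rough sketch of how this could be tuned per deployment via ExtraConfig,
assuming puppet-nova exposes the option as
nova::compute::libvirt::enabled_perf_events (the key name and event list
are assumptions, not part of this patch):

    parameter_defaults:
      ComputeExtraConfig:
        # assumed key; maps to enabled_perf_events in nova.conf [libvirt]
        nova::compute::libvirt::enabled_perf_events:
          - cmt
          - mbmt
          - mbml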
This change pulls the hard-coded value out of puppet-tripleo so that
people can later skip the cell0 creation if they want a more complex
cell v2 setup for nova.
Change-Id: I08119d781ef60750cc19753bc03190e413159925
Related-Bug: #1649341
Previously we had to override the location of the manifest file pushed
to the controller nodes when deploying with Pacemaker; this is no longer
needed with composable roles.
Change-Id: I1e010099263a325d06483f498c70582b008c31e2
When a service connects to the database VIP from the node hosting this
VIP, the resulting TCP socket has a src address which is by default
bound to the VIP as well. If the VIP is failed over to another node
while the socket's Send-Q is not empty, TCP keepalive won't engage and
the service will become unavailable for a very long time (more than
10 minutes by default).
To prevent failover issues, DB connections should have the src address
of their TCP socket bound to the IP of the network interface used for
MySQL traffic. This is achieved by passing a new option to the
database connection URIs. This option is available starting from
PyMySQL 0.7.9-2.
We use a new intermediate variable in hiera to hold the IP to be used
as a source address for all DB connections. All services adapt their
database URI accordingly.
Moreover, a new YAML validation check is added to guarantee that new
services will construct their database URI appropriately.
Change-Id: Ic69de63acbfb992314ea30a3a9b17c0b5341c035
Closes-Bug: #1643487
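
For illustration, a database URI built this way would look roughly like
the one below, with bind_address pointing at the node's own IP on the
MySQL network (names and addresses are made up); the new hiera variable
simply holds that local IP so every service appends it consistently:

    mysql+pymysql://nova:SECRET@172.16.2.10/nova?bind_address=172.16.2.21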
With this change we export ERL_EPMD_ADDRESS set to the address rabbitmq
is listening on. We need to explicitly export it so that epmd can pick
it up and bind to that address.
Closes-Bug: #1645898
Change-Id: Iacb2ee262da419f61ec3511f42b395f69f5d14da
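
Conceptually the variable ends up in rabbitmq's environment along the
lines of the sketch below (the exact hiera key consumed by the rabbitmq
puppet profile is an assumption here):

    rabbitmq_environment:
      # assumed key; the value is the address rabbitmq listens on
      ERL_EPMD_ADDRESS: '172.16.2.10'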
Heat now supports release name aliases, so we can replace the
inconsistent mix of date-based versions with one consistent version that
aligns with the version of heat supported by this t-h-t branch.
This should also help new users who sometimes copy/paste old templates
and then discover that intrinsic functions described in the t-h-t docs
don't work because their template version is too old.
Change-Id: Ib415e7290fea27447460baa280291492df197e54
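
For example, a template header that previously pinned a date-based
version can now use the release alias (ocata is shown as an example of
such an alias):

    heat_template_version: ocata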
The cell v2 setup requires the transport URL for nova. We need to
provide mysql with the rabbit connection information so that it can use
it when setting up the cell information.
Change-Id: I43ba77cd4c8da7c6dc117ab0bd53e5cd330dc3de
Related-Bug: #1649341
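
The transport URL follows the usual oslo.messaging form; an illustrative
(made-up) value would be:

    rabbit://guest:SECRET@172.16.2.10:5672/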
This enables the deployer to dynamically add nova metadata to the
servers, based on the output of service profiles that implement the
metadata_settings key in their role_data output.
An implementation can be set via the OS::TripleO::ServerMetadataHook
resource, which currently maps to OS::Heat::None; with that default
implementation, if left untouched, it does nothing.
Besides the metadata_settings list, this hook also takes the name of the
node that it's setting the metadata for.
This is useful for nova vendordata plugins that can parse said metadata.
Change-Id: I8a937f711f0b90156fbb6c4632760435ef846474
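
To plug in an implementation, a deployer would override the registry
entry in an environment file, for example (the path is hypothetical):

    resource_registry:
      OS::TripleO::ServerMetadataHook: /home/stack/templates/my_metadata_hook.yaml

The hook template then receives the metadata_settings list and the node
name as parameters, as described above.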
In order to call commands that need to be run on a single node, we
create a new per-service variable that will contain the first node of
each role containing the service.
Change-Id: I03e8685f939e8ae1fcd8b16883b559615042505d
Partial-Bug: #1615983
Custom role deployments were not working when the ODL API was on a
different node, due to firewall rules blocking traffic. This patch adds
the missing rules for REST communication to ODL (8081 by default), the
OVSDB connection (6640), and the OpenFlow protocol (6653).
Closes-Bug: 1651476
Depends-On: I1f2af2793d040fda17bf73252afe59434d99f31f
Change-Id: Ic0119c783d01e864c49fa06a66fdd68c059a726b
Signed-off-by: Tim Rozet <trozet@redhat.com>
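
In t-h-t such rules are normally expressed as firewall hiera in the
service's config_settings; a rough sketch (the rule name/number is
illustrative):

    tripleo.opendaylight_api.firewall_rules:
      '137 opendaylight api':
        dport:
          - 8081
          - 6640
          - 6653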
The ODL username and password are already present in the OpenDaylightApi
service. However, when moving the OpenDaylightApi service to its own
custom role, the Controller/Compute nodes no longer have access to these
hiera values. This patch adds them to the OpenDaylightOvs service as well.
Closes-Bug: 1651499
Depends-On: I418643810ee6b8a2c17a4754c83453140ebe39c7
Change-Id: I169fdad4c94bd6dfc1fe7cde3d6b19b36d916af7
Signed-off-by: Tim Rozet <trozet@redhat.com>
Depends-On: Ice921f0fdd4bec6de50e62c39c447ee40dc0e8f5
Change-Id: I4109ac83c32ee2365695611009579a8b117134ff
Depends-On: I53b156505e08625d56ed6a302cf5b5c30e8e288c
Change-Id: Id9791d8a19a74c1f0855e794170f66542f88a548
Since we have aodh enabled for alarms, we should set the
notifier to the default queue alarm.all.
Closes-bug: #1590473
Change-Id: Ibcb5076424ac2ddcd18ff717d82da1aec4c035cb
In freeipa-enroll.yaml, the node may already have been enrolled (via a
cloud-init script); in that case, the OTP and the FreeIPA server are
optional. However, we still need to get a Kerberos ticket, which is the
last step of this script, since that ticket is what certmonger will use
to request the certificates in subsequent steps.
Change-Id: I7e9d6a747cdcbe81c9a74a17db5e91aa9d459f65
It turns out the puppet-mistral change this depends on broke
introspection, so we need to back it out for now.
This reverts commit ed029e5bf279945e82bff8766af4093856a7ac6a.
Change-Id: I828478267935cdc68aa24de8c9dc2d12fcadb631
Currently, when the docker environments are invoked, every node runs the
boot script that replaces os-collect-config with the heat-agents
container. For now this should only happen on Compute nodes, and each
role will be converted to heat-agents one at a time.
This change implements a role-specific NodeUserData resource and uses
that mechanism to run docker/firstboot/install_docker_agents.yaml only
on Compute nodes.
Change-Id: Id81811dbcaf0e661c3980aa25f3ca80db5ef0954
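
With a role-specific resource, a docker environment can map the
firstboot script for the Compute role only, roughly like this (assuming
the per-role resource is named OS::TripleO::Compute::NodeUserData):

    resource_registry:
      OS::TripleO::Compute::NodeUserData: ../docker/firstboot/install_docker_agents.yaml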
We can't run this during the upgrade steps, because some things need to
happen before any role configuration happens, e.g. installing the new
hiera heat-config hook, which must be done before "ControllerDeployment"
runs or the stack update hangs.
Partially-Implements: blueprint overcloud-upgrades-per-service
Change-Id: I365b57513590662c3f78a33dc625747f457c48c5
This allows us to take advantage of the composable roles hiera
settings to connect the plugin to the northd/ovndb API without
needing to hard-code the IP of the node running the service.
Change-Id: I2508d48f81c1819ae3521fff271c0bdc50724604
Depends-On: I9af7bd837c340c3df016fc7ad4238b2941ba7a95
Closes-Bug: #1634171
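
The effect is that the plugin's OVN DB connection can be composed from
hiera rather than a fixed IP; a hedged sketch (the exact puppet-neutron
keys are assumptions, 6641/6642 are the usual OVN NB/SB ports):

    # assumed key names and illustrative address
    neutron::plugins::ml2::ovn::ovn_nb_connection: tcp:172.16.2.15:6641
    neutron::plugins::ml2::ovn::ovn_sb_connection: tcp:172.16.2.15:6642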
When Nova and/or Cinder use Ceph as a backend, qemu needs to open a
connection and two threads for each and every Ceph OSD.
This change raises max_files (1024 by default) to 32768 and
max_processes (4096 by default) to 131072. The max number of FDs is
per-process, while the max number of processes is per-user. The values
can be overridden via ExtraConfig; no parameters are added to the
templates.
A more detailed description of how these values were chosen can be
found at: https://access.redhat.com/solutions/1602683
Change-Id: I1e79675f6aac1b0fe6cc7269550fa6bc8586e1fb
Depends-On: I258afd3ee6633e4b2ebc45aa8611be652476be0c
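
Since no template parameters are added, overriding the defaults would go
through ExtraConfig hiera, along these lines (the key names assume the
qemu limits exposed by the puppet-nova change above; values are
illustrative):

    parameter_defaults:
      ComputeExtraConfig:
        # assumed puppet-nova keys for the qemu limits described above
        nova::compute::libvirt::qemu::max_files: 65536
        nova::compute::libvirt::qemu::max_processes: 262144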
We could already pass metadata to the nova server instances (on
creation) via the ServerMetadata parameter; however, there was no way of
doing this per role. This change introduces that capability by adding a
{{role}}ServerMetadata parameter for each role, which gets merged with
the ServerMetadata parameter.
Note that both default to {}, and so does the result of merging those
parameters with their default values, so nothing changes for the default
settings.
Change-Id: I334edcc51ce7ee82fc13b6cf4c0d74ccb7db099c
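
For example, metadata common to all roles and metadata specific to the
Compute role could be combined like this (keys and values are invented
for illustration):

    parameter_defaults:
      ServerMetadata:
        owner: cloud-team
      ComputeServerMetadata:
        tier: compute

Compute servers end up with both keys, while other roles only get the
common ServerMetadata.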
There are some requirements for early configuration that involve, for
example, setting kernel parameters and then rebooting. Currently this
can be done via cloud-init, e.g. firstboot templates, but there's been
discussion around enabling a SoftwareDeployment approach instead.
The main advantage of doing it this way is that there's an error path if
something goes wrong with the config (except for triggering the reboot,
as we have to use NO_SIGNAL for that).
Change-Id: Ia54ee654f755631b8062eb5c209a60c6f9161500
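
A minimal sketch of what such a deployment could look like, with the
reboot left unsignalled as described (resource names and the script body
are purely illustrative):

    KernelArgsConfig:
      type: OS::Heat::SoftwareConfig
      properties:
        group: script
        config: |
          #!/bin/bash
          # illustrative only: persist a kernel setting, then reboot
          echo "vm.nr_hugepages=1024" > /etc/sysctl.d/99-hugepages.conf
          reboot
    KernelArgsDeployment:
      type: OS::Heat::SoftwareDeployment
      properties:
        config: {get_resource: KernelArgsConfig}
        server: {get_param: server}
        # no signal is possible once the node reboots
        signal_transport: NO_SIGNAL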
RabbitMQ's puppet manifest configures the node's IP and port through
environment variables. While this would usually be fine, it doesn't
allow us to use TLS-only, since it will always try to start a TCP
listener. By setting these values through the config file instead, they
are effectively discarded when ssl_only is set for rabbitmq, which
allows us to use an SSL listener on the same port.
Change-Id: I33d051a8c740baf69b99517378e1f9b0f3cc1681
Depends-On: Iac4a260af6738ed6afd4bcb107221a736d07c1b5
Change-Id: I279f6080b3cd7cf6be8513d94171bf9ff94a4698
Partial-Bug: #1644784