The cell v2 setup requires the transport URL for nova. We need to
provide mysql with the rabbit connection information so that it can use
it when setting up the cell information.
Change-Id: I43ba77cd4c8da7c6dc117ab0bd53e5cd330dc3de
Related-Bug: #1649341
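For illustration, the cell setup that consumes these values boils down to
nova-manage calls along these lines; a minimal sketch with placeholder
credentials and hosts (USER, PASSWORD, DB_HOST and RABBIT_HOST are not
values from the templates):

    # Map cell0, which uses the database but no message queue
    nova-manage cell_v2 map_cell0 \
        --database_connection "mysql+pymysql://nova:PASSWORD@DB_HOST/nova_cell0"
    # Create the main cell, passing the rabbit transport URL
    nova-manage cell_v2 create_cell --name default \
        --transport-url "rabbit://USER:PASSWORD@RABBIT_HOST:5672/"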
run-os-net-config.sh only allows for limited customization of the
network configuration in config.json. Namely, it only customizes the
bridge_name and interface_name.
This will likely not be sufficient for all use cases. This patch adds a
generic network_config_hook bash function that will be called if it is
defined. The function is an entry point for deployers to write custom
code to further influence run-os-net-config.sh.
A possible alternative approach would be to pass the server resource
into the NetworkConfig template. That would allow running arbitrary
SoftwareDeployments on the server before NetworkDeployment is executed.
However, the interface of NetworkDeployment is likely still not as
flexible as this approach, because the inputs are hardcoded in the role
template files (role.role.j2.yaml), which are not meant to be modified
by deployers.
The immediate use case for this work is using os-net-config in our
multinode CI jobs where we need to create vxlan tunnels between the
nodes and we need to know the local private IP of each node for the
tunnel endpoint. As the IP is different for each node, it's not a
parameter we could specify in the templates.
Change-Id: I26d0ebdaba6fcd3fe885e41ed234eb79a2405228
Implements: blueprint multinode-ci-os-net-config
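To illustrate, a deployer-defined hook might look like the following; a
minimal sketch based on the multinode CI use case above (the interface
name and the LOCAL_IP placeholder in config.json are assumptions for
illustration, not part of the patch):

    # Defined by the deployer; run-os-net-config.sh calls this function
    # if it is defined, before applying the network configuration.
    network_config_hook() {
        # Discover this node's private IP for use as the vxlan endpoint
        local ip
        ip=$(ip -4 -o addr show dev eth0 | awk '{print $4}' | cut -d/ -f1)
        # Substitute a placeholder left in the rendered config.json
        sed -i "s/LOCAL_IP/$ip/" /etc/os-net-config/config.json
    }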
Currently the description of the CI matrix is defined in tripleo-ci,
but the services for each scenario now live in THT. This submission
moves the table to the repo in which the configuration is defined.
Change-Id: I9ef1acefc6e1f347528a48edcb4d997a9628fcf6
This enables the deployer to dynamically add nova metadata to the
servers based on the output of service profiles that implement the
metadata_settings key in the role_data output for the profiles.
An implementation can be set via the OS::TripleO::ServerMetadataHook
resource, which currently defaults to OS::Heat::None, so if left
untouched it does nothing.
Besides the metadata_settings list, the hook also takes the name of the
node that it's setting the metadata for.
This is useful for nova vendordata plugins that can parse said metadata.
Change-Id: I8a937f711f0b90156fbb6c4632760435ef846474
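For context, metadata added this way typically surfaces through the Nova
metadata service; from inside a server, a vendordata v2 plugin's output
can be inspected roughly like this (illustrative only; the path is the
standard vendordata v2 location, not something this change adds):

    # vendor_data2.json is the standard path for Nova vendordata v2
    curl -s http://169.254.169.254/openstack/latest/vendor_data2.json \
        | python -m json.tool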
In order to call commands that need to be run on a single node, we
create a new per-service variable that will contain the first node of
each role containing the service.
Change-Id: I03e8685f939e8ae1fcd8b16883b559615042505d
Partial-Bug: #1615983
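A sketch of the intended consumption pattern, using a hypothetical hiera
key name (the actual per-service variable names are defined by the
services and are not shown here):

    # Only run one-time actions (e.g. a db sync) on the service's first node
    FIRST_NODE=$(hiera -c /etc/puppet/hiera.yaml mysql_bootstrap_node_name)
    if [ "$(hostname -s)" = "$FIRST_NODE" ]; then
        nova-manage db sync
    fi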
ODL username and password are already present in the OpenDaylightApi
service. However, when moving the OpenDaylightApi service to its own
custom role, the Controller/Compute nodes no longer have access to these
hiera values. This patch therefore also adds them to the OpenDaylightOvs
service.
Closes-Bug: 1651499
Depends-On: I418643810ee6b8a2c17a4754c83453140ebe39c7
Change-Id: I169fdad4c94bd6dfc1fe7cde3d6b19b36d916af7
Signed-off-by: Tim Rozet <trozet@redhat.com>
Since we have aodh enabled for alarms, we should set the
notifier to the default queue alarm.all.
Closes-bug: #1590473
Change-Id: Ibcb5076424ac2ddcd18ff717d82da1aec4c035cb
This patch updates the endpoint map for Zaqar websockets
so that we use ws (or wss for SSL) instead of the http variants.
This should help resolve protocol issues when trying to make
connections to the websocket API.
Change-Id: Iea88d1e30299cb621424740a39d498defa371ca4
In freeipa-enroll.yaml, the node may already have been enrolled (via a
cloud-init script); in that case, the OTP and the FreeIPA server are
optional. However, we still need to get a kerberos ticket, which is the
last step of this script, since this ticket is what certmonger will use
to request the certificates in subsequent steps.
Change-Id: I7e9d6a747cdcbe81c9a74a17db5e91aa9d459f65
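That final step amounts to obtaining a ticket from the host keytab,
roughly as follows (the principal form and keytab path are the usual
defaults for IPA-enrolled hosts, assumed here for illustration):

    # Acquire a kerberos ticket using the host keytab laid down by enrollment
    kinit -kt /etc/krb5.keytab "host/$(hostname -f)"
    klist   # verify the ticket certmonger will rely on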
It turns out the puppet-mistral change this depends on broke
introspection, so we need to back it out for now.
This reverts commit ed029e5bf279945e82bff8766af4093856a7ac6a.
Change-Id: I828478267935cdc68aa24de8c9dc2d12fcadb631
This switches to using overcloud-full as the OS image for
containerized compute. It includes the following changes:
- install docker, until change I1eab2a6de721c8f3c21c7df0019f2d4d1cc3775f
  lands
- the agent image pull has been removed. This avoids a race between
  docker starting and the current call to pull, and instead relies on
  "docker run" to do the initial pull (see the sketch after this
  message), leaving open the option of some other prefetch mechanism
- rely on unit Conflicts= to ensure heat-docker-agents and
  os-collect-config do not run at the same time
- tweaks to host bind mounts
- removal of commands that only apply to atomic
Co-Authored-By: Martin André <m.andre@redhat.com>
Change-Id: I2e82634785834a877a4dbdbdcd788a9ac1c14a9d
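The implicit pull mentioned above works because "docker run" fetches the
image when it is not present locally; roughly as follows (the image name
and flags are illustrative assumptions, not taken from the patch):

    # No separate "docker pull" needed; run pulls the image on first use
    docker run --detach --name heat-docker-agents \
        --net host --privileged \
        tripleoupstream/heat-docker-agents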
Currently when the docker environments are invoked, every node runs the
boot script that replaces os-collect-config with the heat-agents
container. This should only happen on Compute nodes for now, and each
role will be converted to heat-agents one at a time.
This change implements a role-specific NodeUserData resource and uses
that mechanism to run docker/firstboot/install_docker_agents.yaml only
on Compute nodes.
Change-Id: Id81811dbcaf0e661c3980aa25f3ca80db5ef0954
These ensure that software configuration tasks are not re-run when the
heat-agents container is restarted.
Change-Id: Ieb84fe1f6dd849737ff22f51daa12ddc467dcdde
We can't run this during the upgrade steps, because some things need to
happen before any role configuration does, e.g. installing the new hiera
heat-config hook, which must be done before e.g. "ControllerDeployment"
runs, or the stack update hangs.
Partially-Implements: blueprint overcloud-upgrades-per-service
Change-Id: I365b57513590662c3f78a33dc625747f457c48c5
For some upgrade scenarios, e.g. all-in-one deployments, it may be
possible to run the upgrade steps and then apply puppet in one stack
update, so the order is reversed here. For normal deployments the
upgrade steps are mapped to OS::Heat::None, so this has no effect.
Partially-Implements: blueprint overcloud-upgrades-per-service
Change-Id: I3c78751349a6ac2bc5dff82f67bffe13750ac21c
This allows us to take advantage of the composable roles hiera
settings to connect the plugin to the northd/ovndb API without
needing to hard-code the IP of the node running the service.
Change-Id: I2508d48f81c1819ae3521fff271c0bdc50724604
Depends-On: I9af7bd837c340c3df016fc7ad4238b2941ba7a95
Closes-Bug: #1634171
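An illustration of what the hiera settings enable, with a hypothetical
key name (neither the key nor the port is taken from this change; 6641
is only the conventional OVN northbound DB port):

    # Resolve the ovndb host from hiera instead of a hard-coded IP
    OVN_DBS_HOST=$(hiera -c /etc/puppet/hiera.yaml ovn_dbs_vip)
    ovn-nbctl --db "tcp:${OVN_DBS_HOST}:6641" show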