run-os-net-config.sh only allows for limited customization of the
network configuration in config.json. Namely, it only customizes the
bridge_name and interface_name.
This will likely not be sufficient for all use cases. This patch adds a
generic network_config_hook bash function that will be called if it is
defined. The function is an entry point for deployers to write custom
code to further influence run-os-net-config.sh.
A possible alternative approach would be to pass the server resource
into the NetworkConfig template. That would allow running arbitrary
SoftwareDeployments on the server before NetworkDeployment is executed.
However, the interface of NetworkDeployment would likely still be less
flexible than this approach, because its inputs are hardcoded in the
role template files (role.role.j2.yaml), which are not meant to be
modified by deployers.
The immediate use case for this work is using os-net-config in our
multinode CI jobs where we need to create vxlan tunnels between the
nodes and we need to know the local private IP of each node for the
tunnel endpoint. As the IP is different for each node, it's not a
parameter we could specify in the templates.
Change-Id: I26d0ebdaba6fcd3fe885e41ed234eb79a2405228
Implements: blueprint multinode-ci-os-net-config
This enables the deployer to dynamically add nova metadata to the
servers based on the output of service profiles that implement the
metadata_settings key in their role_data output.
An implementation can be set via the OS::TripleO::ServerMetadataHook
resource, which defaults to OS::Heat::None; left untouched, it does
nothing.
Besides the metadata_settings list, the hook also takes the name of
the node whose metadata is being set.
This is useful for nova vendordata plugins that can parse said
metadata.
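For example, a custom implementation could be registered in an
environment file (the template path is illustrative):

    # Point the hook at a deployer-provided template instead of the
    # OS::Heat::None default
    cat > ~/server-metadata-hook-env.yaml <<'EOF'
    resource_registry:
      OS::TripleO::ServerMetadataHook: /home/stack/templates/metadata-hook.yaml
    EOF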
Change-Id: I8a937f711f0b90156fbb6c4632760435ef846474
In order to call commands that need to be run on a single node, we
create a new per-service variable that will contain the first node of
each role containing the service.
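As a sketch of the intended use (the hiera key name below is
hypothetical; consult the generated hieradata for the real one):

    # Run a one-time command on the service's bootstrap node only
    if [ "$(hiera -c /etc/puppet/hiera.yaml keystone_bootstrap_node_name)" = "$(hostname -s)" ]; then
        keystone-manage bootstrap --bootstrap-password "$PASSWORD"
    fi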
Change-Id: I03e8685f939e8ae1fcd8b16883b559615042505d
Partial-Bug: #1615983
Since we have aodh enabled for alarms, we should set the notifier to
the default queue, alarm.all.
Closes-bug: #1590473
Change-Id: Ibcb5076424ac2ddcd18ff717d82da1aec4c035cb
It turns out the puppet-mistral change this depends on broke
introspection, so we need to back it out for now.
This reverts commit ed029e5bf279945e82bff8766af4093856a7ac6a.
Change-Id: I828478267935cdc68aa24de8c9dc2d12fcadb631
We can't run this during the upgrade steps, because there are things
which need to happen before any role configuration happens, e.g.
installing the new hiera heat-config hook, which must be done before
e.g. "ControllerDeployment" runs, or the stack update hangs.
Partially-Implements: blueprint overcloud-upgrades-per-service
Change-Id: I365b57513590662c3f78a33dc625747f457c48c5
For some upgrade scenarios, e.g. all-in-one deployments, it may be
possible to run the upgrade steps, then apply puppet in one stack
update, so reverse the order here. For normal deployments the upgrade
steps are mapped to OS::Heat::None, so this will have no effect.
Partially-Implements: blueprint overcloud-upgrades-per-service
Change-Id: I3c78751349a6ac2bc5dff82f67bffe13750ac21c
This patch swaps out the noop ctlplane port for a proper fake Neutron
port stack. The stack is a drop-in replacement for the
OS::Neutron::Port heat resource and can be controlled via the
DeployedServerPortMap parameter.
By relying on <hostname>-<network> naming conventions in the
map we can map IPs to specific servers without using the
Neutron API. This will allow us to inject IP information
into the Heat stack within the new t-h-t undercloud installer
which currently does not run a Neutron service.
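As a sketch, an environment file using this convention might look
like the following (hostnames and addresses are illustrative, and the
exact map format may differ):

    cat > ~/deployed-server-port-map.yaml <<'EOF'
    parameter_defaults:
      DeployedServerPortMap:
        controller0-ctlplane:          # <hostname>-<network> key
          fixed_ips:
            - ip_address: 192.168.24.9
    EOF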
Change-Id: I29fbc720c3d582cbb94385e65e4b64b101f7eac9
Update run-os-net-config.sh to make the bridge_name and
interface_name parameters (supplied by the SoftwareConfig) optional.
This allows operators to create custom network templates for roles
other than compute and controller, which appear to be the only roles
that set bridge_name and interface_name.
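A minimal sketch of the idea, assuming the script substitutes values
into config.json with sed (an assumption about its internals):

    # Only substitute parameters the SoftwareConfig actually supplied
    if [ -n "$bridge_name" ]; then
        sed -i "s/bridge_name/$bridge_name/g" /etc/os-net-config/config.json
    fi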
Change-Id: I8997cf8177c1bf0e1f19de5f93dc4e81da1a951f
We could already pass metadata to the nova server instances (on
creation) via the ServerMetadata parameter; however, there was no way
of doing this per-role. This patch adds a {{role}}ServerMetadata
parameter for each role, which gets merged with the ServerMetadata
parameter to provide this functionality.
Note that both default to {}, and so does the result of merging those
parameters with their default values. So nothing changes for the
default settings.
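For illustration (keys and values are made up):

    cat > ~/metadata-env.yaml <<'EOF'
    parameter_defaults:
      ServerMetadata:            # applied to every role
        owner: infra
      ControllerServerMetadata:  # merged on top, Controller nodes only
        tier: control-plane
    EOF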
Change-Id: I334edcc51ce7ee82fc13b6cf4c0d74ccb7db099c
Without this, the Zaqar API will fail to run due to a missing bind IP
address in the config file.
Change-Id: Icd0a6e85b7455e89f37f05399146d5e743359da8
Closes-bug: #1650307
This patch updates the deployed-server interface to use a simple
hostname -s. The previously used hostnamectl --transient can pick up
extra domain name configuration in some cases, which can cause very
odd hostname generation when used with the tripleo-heat-templates
host file generation.
This would actually break the new undercloud t-h-t installer
in that some of the /etc/hosts entries would be invalid
(no IP address) due to substring replacements failing in
a variety of odd hostname situations. Simplifying the
hostname of deployed servers to just the short version seems
the most sensible way to avoid all this.
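Roughly, the difference is:

    hostnamectl --transient   # may include domain parts, e.g. node-0.localdomain
    hostname -s               # always just the short name, e.g. node-0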
Change-Id: Ia7e636d021f948ea5234475cef02f666d8ce6999
In I9b1f0eaa0d36a28e20b507bec6a4e9b3af1781ae and
I11fcf688982ceda5eef7afc8904afae44300c2d9 we added a manual step
for upgrading openvswitch in order to specify the --nopostun
as discussed in the bug below.
This change adds a minor update to make this workaround more
robust. It removes any existing rpms that may be around from
an earlier run, and also checks that the rpms installed are
at least newer than the version we are on.
This also refactors the code into a common definition in the
pacemaker_common_functions.sh which is included even for the
heredocs generating upgrade scripts during init. Thanks
Sofer Athlan-Guyot and Jirka Stransky for help with that.
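The shape of the workaround, roughly (the download method and paths
are illustrative):

    # Remove rpms left over from an earlier run, fetch fresh ones, and
    # upgrade without the postun scriptlet that disrupts networking.
    rm -f /root/OVS_UPGRADE/*.rpm
    yumdownloader --destdir /root/OVS_UPGRADE openvswitch
    rpm -U --replacepkgs --nopostun /root/OVS_UPGRADE/openvswitch-*.rpm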
Change-Id: Idc863de7b5a8c116c990ee8c1472cfe377836d37
Related-Bug: #1635205
RabbitMQ's puppet manifest configures the node's IP and port through
environment variables. While this would usually be fine, it doesn't
allow us to go TLS-only, since RabbitMQ will then always try to start
a TCP listener. By setting these values through the config file
instead, they can effectively be discarded when ssl_only is set for
RabbitMQ, which allows using an SSL listener on the same port.
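Roughly, instead of exporting RABBITMQ_NODE_IP_ADDRESS and
RABBITMQ_NODE_PORT, the listeners end up in rabbitmq.config (values
illustrative):

    cat > /etc/rabbitmq/rabbitmq.config <<'EOF'
    [
      {rabbit, [
        {tcp_listeners, []},                        %% no plain TCP listener
        {ssl_listeners, [{"192.168.24.10", 5672}]}
      ]}
    ].
    EOF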
Change-Id: I33d051a8c740baf69b99517378e1f9b0f3cc1681
This makes Django take the X-Forwarded-Proto header into account
when forming URLs.
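The relevant Django setting (file path illustrative, e.g. Horizon's
local_settings):

    cat >> /etc/openstack-dashboard/local_settings <<'EOF'
    SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https')
    EOF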
Change-Id: Ice64de9a11d7819ae7f380279ff356342d9b6673
Depends-On: Ifed7d4c3409419c01c5b20c707221c1fc76ea09e
The inputs on the NetworkDeployment SoftwareDeployment resource were
not the same for generic roles as they were for the default roles
(role.role.j2.yaml vs. controller-role.yaml).
This patch synchronizes the inputs between the two so that the
interface is the same for deployers.
Change-Id: Id14cf7ca219aee61f5b9d21171a5c41dea765f98
Implements: blueprint multinode-ci-os-net-config
The new DeployedServer resource in Heat will provide a native
resource for Server resources that are not orchestrated via Nova.
This allows associating SoftwareDeployments directly with servers
that have not been launched with Nova.
With the new resource, all of the SoftwareConfigTransport methods are
available, including POLL_TEMP_URL. This patch also updates the
get-occ-config.sh script to configure the requests collector in
os-collect-config.conf on the deployed servers.
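A sketch of the resulting collector configuration on a deployed
server (the temp URL is elided):

    cat > /etc/os-collect-config.conf <<'EOF'
    [DEFAULT]
    collectors = request
    command = os-refresh-config

    [request]
    metadata_url = <POLL_TEMP_URL for this server's deployment data>
    EOF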
Change-Id: I4b80421088acca709fe3f92741c5c052be483131
Partially-implements: blueprint split-stack-software-configuration
Depends-On: I07b9a053ecd3ef4411b602bbc6ef985224834cf8
There are scenarios in which findmnt will return a list of all
mounted filesystems, which causes the upgrade script to fail to
recognize whether the Ceph OSD is backed by ext4.
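For example, restricting findmnt to the mountpoint in question avoids
having to parse the full list (path illustrative):

    # Prints only the filesystem type backing the given path
    if [ "$(findmnt -n -o FSTYPE --target /var/lib/ceph/osd/ceph-0)" = "ext4" ]; then
        echo "ext4-backed OSD detected"
    fi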
Change-Id: Iadebdc32b523c05216202b782ceb54bec4389413
Closes-Bug: #1649407
This patch adds a new type:
OS::TripleO::Network::Ports::ControlPlaneVipPort
This defaults to a normal OS::Neutron::Port object but can be mocked
out for implementations where Neutron doesn't exist, such as when
installing the undercloud.
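e.g. in an environment file for the undercloud case (template path
illustrative):

    cat > ~/noop-vip-port-env.yaml <<'EOF'
    resource_registry:
      # Replace the default OS::Neutron::Port with a mock implementation
      OS::TripleO::Network::Ports::ControlPlaneVipPort: /home/stack/templates/noop-port.yaml
    EOF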
Change-Id: Iebf2428432a98a9d789b206ce973599adbc0af8f
The upstream puppet module is adding the proper keystone authtoken
middleware support. This change updates THT to use the keystone
authtoken class rather than the deprecated settings. This also allows
for proper keystone v3 integration.
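Roughly, the authtoken class manages a [keystone_authtoken] section
like the following (service config file and values illustrative):

    cat >> /etc/aodh/aodh.conf <<'EOF'
    [keystone_authtoken]
    auth_type = password
    auth_uri = http://192.168.24.2:5000
    auth_url = http://192.168.24.2:35357
    project_name = service
    username = aodh
    password = secret
    EOF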
Change-Id: Iaf82716122a25e3e0785de1250d24edaaa5e4d04
Depends-On: I71969ef09018f9daa5f81c4f3bcbdb0b0974446c
Change-Id: I75815a4bcbf421597abb86226238b74a9afffc0d
Depends-On: Iffb8c2cfed53d8b29e777c35cee44921194239e9
This is based on previous work [1] and it's what I've been using to
test the TLS-everywhere work.
This introduces a template that will run on every node to enroll
them to FreeIPA and acquire a ticket (authenticate) in order to be
able to request certificates.
Enrollment is done via the ipa-client-install command and it does
the following:
* Get FreeIPA's CA certificate and trust it.
* Authenticate to FreeIPA using an OTP and get a kerberos keytab.
* Set up several configurations that are needed for FreeIPA (sssd,
kerberos, certmonger)
The keytab is then used to authenticate and get an actual TGT
(Ticket-Granting-Ticket) from Kerberos.
The previous implementation used a PreConfig hook; here it was
modified to use NodeTLSCAData. This has the advantage of running on
every node, whereas with the PreConfig hook we had to specify the
role type, so it's a usability improvement. It also sets up what is
necessary to use FreeIPA as a CA, such as getting the certificate
and enrolling with the CA.
[1] https://github.com/JAORMX/freeipa-tripleo-incubator
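The enrollment roughly boils down to (server, domain, and OTP are
illustrative):

    # Enroll: trusts the CA certificate, retrieves a host keytab, and
    # configures sssd/kerberos/certmonger
    ipa-client-install --server ipa.example.com --domain example.com \
        --password "$OTP" --unattended
    # Use the keytab to obtain an actual TGT
    kinit -kt /etc/krb5.keytab "host/$(hostname -f)"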
bp tls-via-certmonger
Change-Id: Iac94b3b047dca1bcabd464ea8eed6f1220c844f1
Using lsb_release is problematic for the containerised heat-agents:
it has to be bind-mounted in, and Atomic Host doesn't even have
lsb_release installed.
Instead just write to every /etc/cloud/templates/hosts.*.tmpl file.
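i.e. something like (entry illustrative):

    # Write the same entries into every template rather than picking
    # one distro-specific file via lsb_release
    for tmpl in /etc/cloud/templates/hosts.*.tmpl; do
        echo "192.168.24.9 overcloud-controller-0.localdomain overcloud-controller-0" >> "$tmpl"
    done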
Change-Id: If2aab7e9b1e03aa657baf1c33aa4392ef7044075
The script run-os-net-config[1] copies in ifcfg-* from the host before
running os-net-config. Apparently it was done this way because the
other scripts in /etc/sysconfig/network-scripts/ differed between host
and agent container. This should be less of an issue now that host and
heat-agents run centos-7 (even when the host is atomic).
tripleo-heat-templates recently changed to running os-net-config in
a deployment script instead of an os-refresh-config script [2]. This
means the run-os-net-config approach currently results in
os-net-config being executed twice.
Another issue with run-os-net-config is that it copies ifcfg-* from
host to container, but not back again. This means that rebooting the
server will result in unconfigured interfaces until os-net-config is
somehow run again.
This change bind mounts /etc/sysconfig/network-scripts/ from the host
and uses the conventional approach to running os-refresh-config.
This may fix the issue where compute nodes lose network
connectivity.
Closes-Bug: #1646897
[1] http://git.openstack.org/cgit/openstack/tripleo-common/tree/heat_docker_agent/run-os-net-config
[2] I0ed08332cfc49a579de2e83960f0d8047690b97a
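A sketch of the bind mount (image name and remaining flags are
illustrative):

    # Share the host's ifcfg-* files with the heat-agents container so
    # changes made by os-net-config persist on the host across reboots
    docker run --net=host --privileged \
        -v /etc/sysconfig/network-scripts:/etc/sysconfig/network-scripts \
        tripleoupstream/heat-docker-agents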
Change-Id: I763fc8d8e3eb10ac64d33e46c92888d211003e72