This service allows configuring and deploying manila-share
containers in an HA overcloud managed by pacemaker.
The containers are managed and run by pacemaker. Pacemaker runs the
standard Kolla image but overrides the initial command so that it
explicitly calls manila-share. This way, we shield ourselves from any
unexpected future change in Kolla.
This container needs to use the 'docker_config' section to invoke
puppet (as opposed to 'docker_puppet_tasks') because, due to HA
composability, each resource creation needs to happen on the bootstrap
node of that service, whereas 'docker_puppet_tasks' will only run on
the controller/primary role.
Based on work done in fdb233e64e3d78014dd7e351abfed5aec5035866
Partial-Bug: #1668922
Change-Id: Ifa94c506db5eb667690a19d594115a93d2a790b2
Depends-On: I797eea2f7788f65411964ccb852b5707e916416f
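As a hedged illustration, such a service is normally enabled through a
resource_registry override pointing at the pacemaker-managed container
template; the file path below is an assumption, not taken verbatim from
this change:

  resource_registry:
    # map the manila-share service to its pacemaker-managed container template
    OS::TripleO::Services::ManilaShare: ../docker/services/pacemaker/manila-share.yaml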
|
|
Change-Id: Ib892f54781e568fb267a34390fec1a7e0323de2c
|
|
Pre-existing Ceph clusters are migrated to containers using a
playbook in ceph-ansible, which requires setting an 'ireallymeanit'
variable [1].
1. https://github.com/ceph/ceph-ansible/issues/1758
Change-Id: I5c2f46b91cf032913931275ce62315f293f21c8b
Closes-Bug: #1711159
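As a sketch only (how the variable is actually supplied is not shown in
this change and is an assumption here), the confirmation variable the
migration playbook expects would be passed as an extra var, e.g.:

  # illustrative extra-vars file for the ceph-ansible migration playbook
  ireallymeanit: 'yes'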
|
|
Based on puppet/services/ceph-mds.yaml. Nodes in the CephMds role
will already be in the Ansible inventory, but this change provides
a way to pass their parameters to ceph-ansible.
Co-Authored-By: Giulio Fidente <gfidente@redhat.com>
Change-Id: Ia3ef9e9a2b159dacea01e38762145ff2bcc7ba27
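As a rough sketch (the template path and role excerpt are assumptions
for illustration), the service would be enabled on a dedicated role
like this:

  resource_registry:
    OS::TripleO::Services::CephMds: ../docker/services/ceph-ansible/ceph-mds.yaml

  # roles_data.yaml excerpt for a dedicated CephMds role
  - name: CephMds
    ServicesDefault:
      - OS::TripleO::Services::CephMds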
|
|
This change renders the network IP maps and hostname maps for
all networks defined in network_data.yaml. This should make it
possible to create custom networks that will be rendered for
all applicable roles.
Note that at this time all networks will be rendered whether
they are enabled or not. All networks will be present in all
roles, but ports will be associated with noop.yaml in roles
that do not use the network. This is in accordance with
previous behavior, although we may wish to change this in
the future to limit the size of the role definitions and
reduce the number of placeholder resources in deployments
with many networks.
Note that this patch is a replacement for the original patch
https://review.openstack.org/#/c/486280, which I was having
trouble rebasing to current.
Change-Id: I445b008fc1240af57c2b76a5dbb6c751a05b7a2a
Depends-on: I662e8d0b3737c7807d18c8917bfce1e25baa3d8a
Partially-implements: blueprint composable-networks
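For reference, a custom network entry in network_data.yaml looks
roughly like the following (the name and values are illustrative):

  - name: StorageNFS
    name_lower: storage_nfs
    vip: true
    enabled: true
    ip_subnet: '172.16.5.0/24'
    allocation_pools: [{'start': '172.16.5.4', 'end': '172.16.5.250'}]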
|
|
When the OSD pool size is unset it defaults to 3, while we only
have a single OSD in CI so the pools are created but not writable.
We did set the default pool size to 1 in the non-containerized
scenarios but apparently missed it in the containerized version.
Change-Id: I1ac1fe5c2effd72a2385ab43d27abafba5c45d4d
Closes-Bug: #1710773
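A minimal sketch of pinning the pool size for a single-OSD CI node;
the parameter name is an assumption based on the non-containerized
scenarios:

  parameter_defaults:
    # with only one OSD, the default replica count of 3 leaves pools unwritable
    CephPoolDefaultSize: 1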
|
|
Ensure both the older compute and newer generic polling host services
are stopped during a compute upgrade.
Closes-Bug: #1710866
Change-Id: I2c63d6d50977eed112707c3c8aa6d46d8b796679
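The upgrade tasks amount to stopping both agents; roughly (task layout
follows the usual upgrade_tasks convention, service names hedged):

  upgrade_tasks:
    - name: Stop openstack-ceilometer-compute
      tags: step1
      service: name=openstack-ceilometer-compute state=stopped
    - name: Stop openstack-ceilometer-polling
      tags: step1
      service: name=openstack-ceilometer-polling state=stopped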
|
|
This patch adds a NeutronOverlayIPVersion parameter to configure the
neutron ML2 overlay_ip_version option from T-H-T. puppet-neutron
already supports configuring this option; we are just exposing it
from T-H-T. This parameter needs to be set to '6' when
IPv6 vxlan tunnel endpoints are desired.
Closes-Bug: #1691213
Change-Id: I056afa25f67a3b6857bdfef14e6d582b0a9e5e93
Signed-off-by: Feng Pan <fpan@redhat.com>
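For IPv6 VXLAN tunnel endpoints an environment file would therefore
set (parameter name taken from this change):

  parameter_defaults:
    NeutronOverlayIPVersion: 6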
|
|
To get this to work, upgrade_tasks need to be rewritten with 'when'
statements, like the update tasks (in the parent review from shardy).
So that we don't break the existing upgrades workflow, we add these
as part of the config download; see the Depends-On below.
Related-Bug: 1708115
Depends-On: Ief593dc758a2ffe33c1cbcbda9289393fcf023e4
Change-Id: Ib01b96a2c26721747d81d98e3d57c4c388663004
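A sketch of the rewritten form (the task itself is illustrative; only
the 'when: step' pattern is the convention referred to above):

  upgrade_tasks:
    - name: Stop the service before upgrading   # illustrative task
      when: step|int == 2
      service: name=openstack-example state=stopped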
|
|
Use the network.network.j2.yaml to render these files, instead
of relying on the hard-coded versions.
Note this doesn't currently consider the _v6 templates, as we may want
to deprecate these and instead rely on an IPv6-specific network_data
file, or perhaps make network/network.network.j2.yaml generic and able
to detect the IP version from the CIDR?
Change-Id: I662e8d0b3737c7807d18c8917bfce1e25baa3d8a
Partially-Implements: blueprint composable-networks
|
|
This file is generated but needs to be manually maintained. It
would be better for users who want to deploy the latest images
directly from Docker Hub to generate it locally by running:
  openstack overcloud container image prepare \
    --namespace tripleoupstream \
    --tag latest \
    --env-file docker-centos-tripleoupstream.yaml
The documentation and CI are being updated to use prepare.
Change-Id: I86503f1076459ae9d84a34e649a6097cba10fa3c
Closes-Bug: #1696598
|
|
Pass a mode parameter to ceph-ansible in place of the ACLs parameter,
because the UID the ACLs target inside the container differs from the
UID on the container host, and because ACLs are not passed through by
kolla_config.
Change-Id: I7e3433eab8e2a62963b623531f223d5abd301d16
Closes-Bug: #1709683
|
|
Bind-mounts the cert and key used for TLS and sets the appropriate
permissions on them.
bp tls-via-certmonger-containers
Change-Id: I7fae4083604c7dc89ca04141080a228ebfc44ac9
|
|
Per the attached bug, if a large number of instances are colocated
on a single compute node it is possible to exhaust the allowed VNC
ports. This change extends the range to include 1024 ports, which
with the default 16x overcommit ratio in Nova means we could handle
a fully loaded 64 core server. That's _probably_ overkill, but I
think it makes sense to overshoot a bit on this and ensure nobody
runs into weird problems because their VNC ports weren't allowed
through the firewall.
Change-Id: Ia48602e82b8e0fbb585371ea514eea3c2334dab0
Closes-Bug: 1678025
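In firewall terms the extended range looks roughly like the following
(the rule key is illustrative; the port range follows from 5900 plus
1024 ports):

  firewall_rules:
    '200 nova vnc ports':        # illustrative rule name
      proto: tcp
      dport: '5900-6923'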
|
|
This bind mounts the certificates if TLS is enabled in the internal
network. It also disables the CRL usage since we can't restart haproxy
at the rate that the CRL is updated. This will be addressed later and
is a known limitation of using containerized haproxy (there's the same
issue in the HA scenario). To address the different UID that the certs
and keys will have, I added an extra step that changes the ownership
of these files; though this only gets included if TLS in the internal
network is enabled.
bp tls-via-certmonger-containers
Depends-On: I2078da7757ff3af1d05d36315fcebd54bb4ca3ec
Change-Id: Ic6ca88ee7b6b256ae6182e60e07498a8a793d66a
|
|
Don't unregister systems from the portal/satellite
when deleting from Heat. There are several reasons why
it's compelling to fix this behavior. See
https://bugs.launchpad.net/tripleo/+bug/1710144
for full information. The previous behavior can be triggered
by setting the DeleteOnRHELUnregistration parameter to "true".
Closes-Bug: #1710144
Change-Id: I909a6f7a049dc23fc27f2231a4893d428f06a1f1
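To opt back into unregistering on delete, set the parameter named
above in an environment file:

  parameter_defaults:
    DeleteOnRHELUnregistration: "true"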
|
|
There were two problems with this condition that left the
rhel-registration.yaml template broken:
- "conditions" should be "condition"
- the condition should refer just to a condition name defined in the
  "conditions:" section of the template
Change-Id: I14d5c72cf86423808e81f1d8406098d5fd635e66
Closes-Bug: #1709916
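For reference, the corrected Heat pattern looks like this (the
condition and resource names are illustrative):

  conditions:
    unregister_on_delete:
      equals: [{get_param: DeleteOnRHELUnregistration}, "true"]

  resources:
    RHELUnregistrationDeployment:
      type: OS::Heat::None
      # singular 'condition', referring to the name defined above
      condition: unregister_on_delete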
|
|
The containerized version of the mongodb service omits the
metadata_settings definition [1], which confuses certmonger when
internal TLS is enabled and makes the generation of certificates fail.
Use the right setting from the non-containerized profile.
[1] https://review.openstack.org/#/c/461780/
Change-Id: I50a9a3a822ba5ef5d2657a12c359b51b7a3a42f2
Closes-Bug: #1709553
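The fix amounts to passing through the base profile's value instead of
omitting it, roughly (the base resource name is an assumption):

  metadata_settings:
    get_attr: [MongodbBase, role_data, metadata_settings]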
|
|
Various containerized services (e.g. nova, neutron, heat) run initial
setup steps with ephemeral containers that don't use kolla_start. The
tripleo.cnf file is not copied into /etc/my.cnf.d, which can break some
deployments (e.g. when using internal TLS, services lack SSL settings).
Fix the configuration of transient containers by bind mounting the
tripleo.cnf file when kolla_start is not used.
Change-Id: I5246f9d52fcf8c8af81de7a0dd8281169c971577
Closes-Bug: #1710127
Co-Authored-By: Juan Antonio Osorio Robles <jaosorior@redhat.com>
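The bind mount added to such ephemeral containers looks roughly like
this (the source path is an assumption for illustration):

  volumes:
    # expose the generated tripleo.cnf to containers that bypass kolla_start
    - /var/lib/config-data/mysql/etc/my.cnf.d/tripleo.cnf:/etc/my.cnf.d/tripleo.cnf:ro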
|
|
So far we've been using virtlogd running on the host; we should now be
using virtlogd from a container.
Co-Authored-By: Martin André <m.andre@redhat.com>
Co-Authored-By: Jiri Stransky <jistr@redhat.com>
Change-Id: I998c69ea1f7480ebb90afb44d6006953a84a1c04
|
|
After commit 483293 merged, the major-upgrade-composable-steps.yaml
file points to the wrong location for the deployment, which is now
under the common/ folder.
Change-Id: Ic6784533d1c21b5b8fcb422bccd820af72e499d9
|
|
Since commit I77650be5f04775a72e2bdf694f93988825a84b72 the neutron OVS
mechanism driver can bind direct ports with the OVS SR-IOV hardware
offload feature. Currently both features can't coexist. To allow ovs
and sriovnicswitch to still work together, sriovnicswitch should be
listed before openvswitch.
Change-Id: Id19d65715d40d64f041bfe219afff98876fd7766
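In an environment file that ordering would be expressed as (only the
two driver names above are taken from this change):

  parameter_defaults:
    NeutronMechanismDrivers: ['sriovnicswitch', 'openvswitch']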
|
|
Splitting on colons using the native str_split function did not work
well because we needed a right split.
This change replaces the str_split calls with yaql rightSplit().
Change-Id: Iab2f69a5fadc6b02e2eacf3c9d1a9024b0212ac6
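For example, where a 'host:port' value must be split on the last colon
(so that IPv6 addresses stay intact), the yaql form is roughly:

  yaql:
    expression: $.data.rightSplit(':', 1)
    data: '[2001:db8::1]:6379'   # illustrative host:port value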
|
|
The IP address which clients and other nodes use to connect to the
monitors is derived from the monitor_interface parameter unless
a monitor_address or monitor_address_block is given (to set mon_host
in ceph.conf); this change sets monitor_address_block to match the
public_network so that clients attempt to connect to the mons on the
appropriate network.
Change-Id: I7187e739e9f777eab724fbc09e8b2c8ddedc552d
Closes-Bug: #1709485
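In practice ceph-ansible ends up with something like the following
(the CIDR is illustrative; both variable names appear above):

  public_network: 172.16.1.0/24
  monitor_address_block: 172.16.1.0/24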
|