In Ocata, all live migration over ssh is performed on the default ssh port (22).
In Pike, containerized live migration over ssh uses port 2022, since the
docker host's sshd is using port 22.
To allow live migration during the upgrade we need to temporarily pin the Pike
computes to port 22; in the final converge we can switch over to port 2022.
This also changes the default port to 2022 for baremetal computes in Pike, to
enable live migration between baremetal and containerized computes.
Change-Id: Icb9bfdd9a99dc1dce28eb95c50a9a36bffa621b1
Depends-On: I0b80b81711f683be539939e7d084365ff63546d3
Closes-Bug: 1714171
(cherry picked from commit 17fd16b9f266e1aa67bf03ebdf309e89d668ada2)
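As a rough illustration of the pinning described above, an upgrade environment
file could force the migration ssh port back to 22 until the final converge.
The parameter name below (MigrationSshPort) is an assumption shown for
illustration only, not confirmed by this log:

    # Hypothetical upgrade environment snippet: keep Pike computes on the
    # Ocata ssh port for the duration of the upgrade; dropping this file at
    # converge time lets the new default (2022) take effect.
    parameter_defaults:
      MigrationSshPort: 22   # assumed parameter name, for illustration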
The node_private file doesn't exist anymore; use sub_nodes_private
instead.
Change-Id: Ifb3af18733c0e1fd6895c270bb39199acaa98968
Closes-Bug: 1720823
Change-Id: I239cc9f827fe99a553f9c18b80336bc6ce0b1d14
Signed-off-by: Tim Rozet <trozet@redhat.com>
(cherry picked from commit ba5436099d37898e418406f8b4376923e14f4c89)
Change-Id: I122a246a559e07ed74c69e3eb172a4bbb801aeb7
Closes-Bug: #1721239
(cherry picked from commit 3e8de70bd5a8c43389432d484189d4de5fc0ae2f)
With the dynamic Jinja2 rendering for networks, the heat resource for the
Internal API network was accidentally being renamed to:
OS::TripleO::Network::Internal
when it should keep the same name as in previous versions:
OS::TripleO::Network::InternalApi
This patch removes the 'compat_name' entry that was overriding the network
name when rendering the resource. It also removes the compat_name
functionality from the network/networks.j2.yaml file, since it is no longer
needed.
Closes-Bug: 1718764
Change-Id: If756cddd91933edb303cc056515d98b941a3eb14
Signed-off-by: Tim Rozet <trozet@redhat.com>
(cherry picked from commit 97244b942d29d2b5acd7a3eb07acdba0d9b99677)
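For context, the resource name in question is the one that network isolation
environments map to a network template; a minimal sketch of the expected
mapping (the file path here is illustrative) looks like:

    resource_registry:
      # The rendered resource must keep the InternalApi suffix so that
      # existing environment files continue to resolve it.
      OS::TripleO::Network::InternalApi: ../network/internal_api.yaml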
Since each dnsmasq process consumes one inotify socket, the default
value of fs.inotify.max_user_instances, which is 128, lets us scale to
only around 116 neutron subnets (a few other sockets are used by other
processes on the system). Since we need to provide better defaults,
this patch proposes to bump this value to 1024 by default, while giving
the user a way to change it. Based on
https://unix.stackexchange.com/a/13757, each inotify watch takes 1KB of
memory, and we have fs.inotify.max_user_watches set to 8192 by default.
This means that even in the worst case we won't be using more than 8MB
of memory. Bumping the fs.inotify.max_user_instances value to 1024 is
safe because fs.inotify.max_user_watches caps the total
number of files that can be watched by all the inotify instances a user
has.
Related Bugs:
https://bugzilla.redhat.com/show_bug.cgi?id=1474515
https://bugzilla.redhat.com/show_bug.cgi?id=1491505
Change-Id: I39664312bf6cf06f1e1ca2e86ffd86fb9a4582ad
Closes-Bug: 1718266
(cherry picked from commit d2d0c3ff00de9b62382193d942239d543aa9499f)
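A minimal sketch of how an operator could apply the same override through the
kernel service's sysctl settings; the ExtraSysctlSettings parameter and its
value-map format are assumptions here, not taken from this change:

    parameter_defaults:
      ExtraSysctlSettings:
        fs.inotify.max_user_instances:
          value: 1024   # default bumped from 128; total inotify memory is
                        # still capped by fs.inotify.max_user_watches (8192)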
During the controlplane upgrade, the host_prep_tasks are also being
executed on the disable_upgrade_deployment roles.
This sets the role-specific host_prep_tasks to an empty list for
those roles during an upgrade, as executing them during the
controlplane upgrade (i.e. with -e
major-upgrade-composable-steps-docker.yaml) causes problems.
They will instead be executed as part of the non-controller upgrade, as they
are written to the stack outputs to be used as ansible playbooks
(see bug 1708115 for more info on this).
Change-Id: I42c963440b9b1e8222097c3d4e83ffcbe820886c
Closes-Bug: 1719604
(cherry picked from commit 684267a7a4fbff489f6324020289afbdcaaca8f5)
Previously it was mistakenly replacing the contents because we
do not do a deep merge.
Change-Id: I145feb0208f135da7c71694ebcecd937244d66b1
Closes-Bug: #1719919
(cherry picked from commit 17416dcfc56c5148ccc9ab40297f99adfdcd085b)
This was needed to make the Ocata->Pike upgrade job pass, and we
now need to remove it to improve the argument order in OOOQ for
deployments with scenarios.
This shouldn't be backported to Ocata (at least not before we make the
split between deploy scenario and upgrade scenario).
Change-Id: Ie08bbe08530bd48a0ca58667f0704f360e0a4dd7
Co-Authored-By: Martin André <m.andre@redhat.com>
Related-Bug: #1714905
Related-Bug: #1712070
(cherry picked from commit 31550b42027588d82f01db6956c1efaf02d58558)
This commit brings the scenario004 file closer to its baremetal (BM)
counterpart. We need to start with this one to address a chicken-and-egg
issue with the featureset files.
Change-Id: Ia5c0cefb7051ca42b4d470f5a000eb446d18be30
Co-Authored-By: Martin André <m.andre@redhat.com>
Related-Bug: #1714905
Related-Bug: #1712070
(cherry picked from commit b4d0a81e55ad51ecdaf2e923f794418ac77cfc57)
Closes-Bug: 1718997
Change-Id: I2b347cbc4595e6651b0d4be032cb862fde72e15f
Signed-off-by: Tim Rozet <trozet@redhat.com>
(cherry picked from commit 253d9b9107aa158af5bcdafe510ecd96658ef137)
Upgrades from older versions that use the Management network fail.
This patch enables the Management network even though it is not
enabled in any of the role definitions. This allows upgrades
to complete using existing network environment files, without
requiring operators to switch to the new method of defining
which networks are attached to roles. Eventually these older
environment files will be removed.
Change-Id: Iadd12a559f0ad6918958a1355f189187fd327363
Closes-bug: 1717123
(cherry picked from commit 5b9fbc2b2bfa00de2fe0f437f21e05e3fc09a53d)
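For reference, a network_data.yaml entry along these lines keeps the
Management network defined even when no role lists it explicitly; the field
values below are illustrative assumptions, not taken from this change:

    - name: Management
      name_lower: management
      enabled: true              # kept enabled so older environment files still work
      vip: false
      ip_subnet: '10.0.1.0/24'   # illustrative subnet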
There is an extra RedisVipPort defined in network-isolation.j2.yaml
which is unused. This will waste an IP address, and can lead to
confusion if there are multiple ports named RedisVipPort.
This patch removes the extra (unneeded) instance of the VIP.
Change-Id: I222873859af1b4ed1050cfffe55687b2f8d4c528
Closes-bug: 1717017
(cherry picked from commit f543752da6e1df3537ffa68d86806e11ac380375)
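After this cleanup the Redis VIP should be defined in exactly one place; a
sketch of the intended single mapping (the template path is illustrative):

    resource_registry:
      # Exactly one RedisVipPort mapping; a duplicate definition would
      # allocate a second, unused IP address.
      OS::TripleO::Network::Ports::RedisVipPort: ../network/ports/vip.yaml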
Change-Id: Icb58d47a3911e83e2650b2c74b33eae522c84651
Closes-Bug: #1718451
(cherry picked from commit edc02b3352d53bdf460a495f689db55944eab432)
The Networker role should not have the API services running on it. Instead,
these services should run as part of the ControllerOpenstack role, which is
meant to be used together with the Networker role.
Change-Id: Iabfe276fe700843f3a8da0b9e9220b2f82e20ec9
Closes-Bug: #1718299
(cherry picked from commit 964a5d738b8dbb6beb077d76448c6f3a84be2500)
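A trimmed-down sketch of the intent: the Networker role carries only the
neutron agents, while the API services live on ControllerOpenstack. The
service list below is abbreviated and illustrative:

    - name: Networker
      ServicesDefault:
        - OS::TripleO::Services::NeutronDhcpAgent
        - OS::TripleO::Services::NeutronL3Agent
        - OS::TripleO::Services::NeutronMetadataAgent
        - OS::TripleO::Services::NeutronOvsAgent
        # API services such as OS::TripleO::Services::NeutronApi belong on
        # the ControllerOpenstack role instead.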
We missed setting the pgp_num default in ceph.conf, causing WARNING
messages like:
pool default.rgw.buckets.data pg_num 32 > pgp_num 8
This also increases the default pg_num to 128, which is the recommended
value for fewer than 5 OSDs [1].
1. http://docs.ceph.com/docs/master/rados/operations/placement-groups/
Change-Id: Ibd9fb23e04576e95e24af58f856663397886a947
Closes-Bug: #1718173
(cherry picked from commit 58e6f6533a04eddd2dc897d890737bbccde4ea7b)
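As an illustration, the equivalent override from an environment file would set
the pool default consistently; the parameter name is an assumption, not
confirmed by this log:

    parameter_defaults:
      CephPoolDefaultPgNum: 128   # recommended for fewer than 5 OSDs; the
                                  # matching pgp_num default must be set too,
                                  # otherwise ceph warns: pg_num 32 > pgp_num 8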
The existing network-isolation-no-tunneling.yaml contains
references to missing files. This patch generates the file
with jinja to include custom networks and make it work
with composable networks.
Closes-Bug: #1718797
Change-Id: Ibcab2f6b5ac880a6b3d7dd5126bd24facfa17322
Signed-off-by: Antoni Segura Puimedon <antonisp@celebdor.com>
Co-authored-by: Dan Sneddon <dsneddon@redhat.com>
(cherry picked from commit 47185342bdd247a2e2735ef96c777ecec663086d)
After landing https://review.openstack.org/#/c/503484/ we run the
puppet host configuration steps twice. This change removes the
deploy_steps_tasks.yaml playbook in order to run the puppet steps
only once.
Closes-bug: 1717244
Change-Id: I09461094618124915841c8390c8bce8daf64d029
(cherry picked from commit e471c67aab6a8f91011aa2330b3cf80f4427f443)
Change-Id: Iafe17a91c4695e442881e6fe813a6499f812f4b4
(cherry picked from commit 96667edee266bf2a64f7c8e2488c0eba105eaa8f)
This wrapper binary spawns the HAProxy daemon and implements a
coordinated HAProxy restart on SIGHUP.
From a service's perspective, this allows reloading the HAProxy
configuration with minimal service disruption, i.e. without stopping
and restarting the HAProxy container.
Closes-Bug: #1717521
Change-Id: Ib3ef0c0bcf1a8151e179ff4d7509cf0d6b3ac5a1
(cherry picked from commit 91cd44cd7266c15ce07fafbee9d2e33f226096ba)
During the bootstrap of the mariadb database, galera replication
must be disabled while the user credentials are being set up. This
is done by setting wsrep-provider=none when starting mysqld_safe.
Icf67fd2fbf520e8a62405b4d49e8d5169ff3925b already disabled it
when the clustercheck credentials are being set up, but Kolla also
starts a temporary server for setting up the root password.
Disable the setting directly at the end of the mysql.cnf in the
running container. That way, the default setting from galera.cnf will
be overridden, all mysqld_safe calls will disable WSREP, and the setting
will stay ephemeral.
Change-Id: If14e22992b46a35a05a16a9db5ecb360ea13df8f
Closes-Bug: #1717250
(cherry picked from commit b0f50db80b10e9cd6263c4d6b3ca8dd818b658ba)
This adds a new config/deployment per role that will come after any
post-deploy steps. It drives the same ansible config as the
upgrade_tasks, but instead collects the post_upgrade_tasks for any
service in the given role.
The workflow is upgrade_tasks, then the post-deploy steps (either
puppet/ or docker/ depending on the env), and then the
post_upgrade_tasks added here.
For now this is added to the pacemaker/cinder-volume.yaml service;
see the bug below for more info.
Change-Id: Iced34fecf02ebddc91df9302de54d2f4c2cab680
Closes-Bug: 1706951
(cherry picked from commit 2e182bffeeb099cb5e0b1747086fb0e0f57b7b5d)
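A minimal sketch of what a service template gains with this change; the task
bodies are illustrative placeholders, the point is only the post_upgrade_tasks
key and its ordering after the post-deploy steps:

    outputs:
      role_data:
        value:
          service_name: cinder_volume
          upgrade_tasks:
            - name: illustrative upgrade task (runs before the post-deploy steps)
              when: step|int == 1
              debug:
                msg: upgrade_tasks step
          post_upgrade_tasks:
            - name: illustrative post-upgrade task (runs after the post-deploy steps)
              when: step|int == 1
              debug:
                msg: post_upgrade_tasks step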
Running these daemons at step 5 instead of step 4 should avoid the error
messages previously seen at startup in the gnocchi-statsd log files.
Change-Id: Idb82f864a2e1c623dab7a2a87054443036670453
Closes-bug: #1713182
(cherry picked from commit 9d8e496f3e8a825d48d9eba9aab540001bb780ea)
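In the docker service templates the step is just the key under docker_config,
so the move amounts to the following; the image parameter name and container
options are assumptions shown for illustration:

    docker_config:
      step_5:                 # previously step_4; statsd now starts after
        gnocchi_statsd:       # the API services are up
          image: {get_param: DockerGnocchiStatsdImage}   # assumed parameter name
          net: host
          restart: always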
Some boolean params are set to string type. Although this works, it is
better to use the boolean type for better validation. This patch
changes them to boolean type.
Change-Id: I9f1d223619ea14fbab26033b24eb1144796e5ef2
Closes-Bug: #1715209
(cherry picked from commit cab8ab1d342c6ffada3f2adea5834b4549240af5)
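The change is purely about the parameter type in the heat templates; the
pattern is as follows, with a hypothetical parameter name for illustration:

    parameters:
      EnableSomeFeature:      # hypothetical name; previously type: string
        type: boolean         # heat now validates the value instead of
        default: false        # accepting arbitrary strings like 'True'/'false'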
This change allows running the major upgrade composable docker
steps multiple times by not trying to delete the pacemaker resources
if they're not reported as started or in master state.
Closes-bug: 1716031
Depends-On: I8da03f5c4a6d442617b81be5793a9724cc8842bf
Change-Id: Ifcf9de8c82550a90a9fb118052d43fdbcdc6ca7e
(cherry picked from commit 64d7be1e3d4552e06cbc53f788572e530cc5c3bb)
Nova patching parameters are available in nova.conf but are not
configurable from tripleo-heat-templates. This patch exposes these
parameters through the Nuage composable services to make them
configurable, enabling the patching parameters to be set in environment
files. This change depends on the addition of the nova patching
configuration parameters.
Change-Id: Iacad25da044f2bac83ee5f577ddcd70650eb61e5
Depends-On: I51ef3e19daff1d98cfe5c2c16475c16e6a3e3e0f
(cherry picked from commit f0041153eca8d82bb7f72dc68676cab8448ef037)
Using the service_ prefix seems inconsistent with its use in
service_config_settings (vs config_settings).
Change-Id: Ia39f181415bee0071409dabddfa0c5c312915e1f
(cherry picked from commit 09137304b98a02ed024c0288da907cfe35ca5fe1)
Add a retry for when the pacemaker_resource command
wasn't applied correctly; more info here:
https://bugzilla.redhat.com/show_bug.cgi?id=1482116
This is the same approach puppet-pacemaker uses
and provides eventual consistency when multiple
nodes change the cluster CIB concurrently.
This change depends on
https://review.gerrithub.io/375982
as the return code is not available in the current
ansible-pacemaker package.
Change-Id: I8da03f5c4a6d442617b81be5793a9724cc8842bf
(cherry picked from commit e92430d8d03fc2ce2d0ce192b96209f2c5c04169)
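A sketch of the retry pattern in the generated upgrade tasks, assuming the
pacemaker_resource module from the ansible-pacemaker package accepts these
arguments and reports a return code once the gerrithub change lands; the
resource name and retry count are illustrative:

    - name: Disable the openstack-cinder-volume cluster resource
      pacemaker_resource:          # module from ansible-pacemaker (assumed interface)
        resource: openstack-cinder-volume
        state: disable
        wait_for_resource: true
      register: output
      retries: 5
      until: output.rc == 0        # needs the return code exposed by the module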