According to [1] we need os_region_name, not region_name. In addition,
os_interface is now configured as well. The hard check on this
parameter was introduced in ocata[2], explaining why the newton version
did not choke on it.
[1] https://docs.openstack.org/ocata/config-reference/compute/config-options.html
[2] https://github.com/openstack/nova/commit/d486315e0
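A minimal sketch of the kind of change involved, assuming an ansible
ini_file task (the region and interface values are illustrative):

  # Illustrative only: set the options nova now hard-checks, using
  # os_region_name (not region_name) plus os_interface.
  - name: Set placement region and interface for nova-compute
    ini_file:
      path: /etc/nova/nova.conf
      section: placement
      option: "{{ item.option }}"
      value: "{{ item.value }}"
    with_items:
      - {option: os_region_name, value: regionOne}
      - {option: os_interface, value: internal}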
Closes-Bug: #1684058
Change-Id: If6118bf03e832fe3fa5ea4fcb1b436afd2adf80a
|
|
If we really want upgrade_batch_tasks to run before the upgrade_tasks,
as described in the README, then we should enforce the ordering.
This was noticed while working on bug 1671504, where upgrade tasks
were being executed before the batch upgrade tasks.
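A minimal HOT sketch of the intended ordering (resource names and the
number of batch steps are illustrative):

  # The first upgrade step waits on the last batch step, so
  # upgrade_batch_tasks always finish before upgrade_tasks start.
  ComputeUpgradeDeployment_Step0:
    type: OS::Heat::SoftwareDeploymentGroup
    depends_on: ComputeUpgradeBatchDeployment_Step2
    properties:
      servers: {get_param: [servers, Compute]}
      config: {get_resource: ComputeUpgradeConfig_Step0}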
Closes-Bug: 1678101
Change-Id: Iaa1bce960a37c072b5f8441132705a6bb6eb6ede
|
|
Currently we don't enforce step ordering across roles, only within a
role. With custom roles, we can reach step5 on one role while the
cluster is still at step3, breaking the contract announced in the
README[1] where each step has a guaranteed cluster state.
We also have to remove the conditional here, as jinja has no way to
access this information, but we need jinja to iterate over all enabled
roles to create the orchestration.
This deals only with Upgrade tasks, there is another review to deal
with UpgradeBatch tasks.
[1] https://github.com/openstack/tripleo-heat-templates/blob/master/puppet/services/README.rst
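A sketch of the j2 pattern (variable and resource names are
illustrative): every role's step N depends on every role's step N-1,
so no role can run ahead of the rest of the cluster:

  {% for step in range(1, upgrade_steps_max) %}
  {% for role in roles %}
    {{role.name}}Upgrade_Step{{step}}:
      type: OS::Heat::SoftwareDeploymentGroup
      depends_on:
  {% for dep_role in roles %}
        - {{dep_role.name}}Upgrade_Step{{step - 1}}
  {% endfor %}
  {% endfor %}
  {% endfor %}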
Closes-Bug: #1679486
Change-Id: Ibc6b64424cde56419fe82f984d3cc3620f7eb028
|
|
The restart of openstack-nova-compute takes place before crudini sets
the password, user_domain and project_name.
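A minimal sketch of the intended ordering, assuming ansible-style
tasks (the variable name is hypothetical):

  # Write the placement credentials first, restart only afterwards.
  - name: Set the placement password for nova-compute
    command: crudini --set /etc/nova/nova.conf placement password {{ nova_placement_password }}
  - name: Restart nova-compute once the config is in place
    service: name=openstack-nova-compute state=restarted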
Change-Id: I57b54d5f59d5803d7ad4e399d598f699785a5825
Closes-Bug: #1675739
Co-Authored-By: Oliver Walsh <owalsh@redhat.com>
|
|
We want to apply a puppet manifest for the non-controller role, but we
need to apply it in stages. By loading the proper hieradata we get the
needed step configuration.
Change-Id: I07bfeee7b7d9a9b8c2c20e5d5c9ed735d0bfc842
Closes-Bug: #1664304
|
|
We want to run puppet on each role which has the
disable_upgrade_deployment flag set to true. It will run after the
upgrade of the role and before the whole converge step.
Change-Id: Ia85be688d070dfb5b8337e8ef3c4bc439fb6052e
|
|
This delivers a /root/tripleo_upgrade_node.sh to those nodes
that have the disable_upgrade_deployment flag set to true.
They will later be upgraded manually by the operator who will
invoke the script delivered here using upgrade-non-controller.sh
We can also deliver any service specific upgrade configuration,
such as configuring nova-compute to use the placement API as this
is required in order for placement to be configured and installed
during the subsequent upgrade steps for controller services.
This removes the compute and swift specific upgrade scripts as
they are now merged into the common tripleo_upgrade_node.sh,
removing any hard-coded reference to a particular role name
(compute/objectstorage) and relying only on the
disable_upgrade_deployment flag in roles_data.yaml.
Change-Id: I4531a4038b78087ef4a1a62c35f1328822427817
Co-Authored-By: Mathieu Bultel <mbultel@redhat.com>
|
|
Make UpgradeBatch depend on BatchConfig for step0, avoid creating
the UpgradeBatchConfig_stepX resources prior to UpgradeBatchConfig
step0, and add a condition.
Change-Id: I852beee65590270422cfbc9abe02111d88442f2e
|
|
Currently we don't correctly disable the batch_upgrade_tasks, so
rework the loops to ensure we only create the batch deployments
for roles which have upgrades enabled.
Note this modifies some loop whitespace too which cleans up the
rendered output and makes it a bit more readable/compact.
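A sketch of the reworked loop (names illustrative): the inline filter
skips roles with upgrades disabled, and the "-" in the tags trims the
whitespace the loop would otherwise leave in the rendered output:

  {%- for role in roles if not role.disable_upgrade_deployment|default(false) %}
    {{role.name}}UpgradeBatchDeployment_Step{{step}}:
      type: OS::Heat::SoftwareDeploymentGroup
  {%- endfor %}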
Change-Id: I1c257dcc351e99efa54f9cae4b3009287908756e
Partially-Implements: blueprint overcloud-upgrades-per-service
|
|
We don't need all the steps currently enabled for either batched
or concurrent updates, so decrease them. In future we can perhaps
introspect the task tags during plan creation and set these
dynamically.
Change-Id: I0358886a332dfbecd03bc4a67086b08d25756c22
Partially-Implements: blueprint overcloud-upgrades-per-service
|
|
We should enable each kind of upgrade per role, not per step,
so rework the conditions, and also only apply them to the deployment
(to save the round-trip to the nodes applying an empty config),
but don't disable the *Config resources, as the overhead of these
is small and we reference the Step1 config in the outputs, even
if it's empty.
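A sketch of the resulting per-role layout (condition and resource
names are illustrative, the condition being defined elsewhere): the
condition guards only the deployment, while the cheap config resource
is always created because the outputs reference its Step1 config:

  ComputeUpgradeConfig_Step1:
    type: OS::TripleO::UpgradeConfig
  ComputeUpgradeDeployment_Step1:
    type: OS::Heat::SoftwareDeploymentGroup
    condition: ComputeUpgradeTasksEnabled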
Change-Id: Iee2f1fb5b1d8b0b6001c6ab0f2a4ef2858cef281
Partially-Implements: blueprint overcloud-upgrades-per-service
|
|
Where the role has disabled upgrades, we need to skip both the ansible and
puppet steps. To do this we refactor the post.j2.yaml so that it can be
included in the upgrade template with an adjusted list of roles.
Note this requires https://review.openstack.org/#/c/425220/ - this
change will be required for local testing of this patch
(run mistral-db-manage populate after updating tripleo-common
and restart the mistral services, or update your repos and re-run
openstack undercloud install).
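A sketch of the include pattern (the file and filter names are
assumptions, not the actual refactor): the upgrade template re-uses
the post-deploy steps with the disabled roles filtered out:

  {% set roles = roles|rejectattr('disable_upgrade_deployment')|list %}
  {% include 'puppet-steps.j2' %}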
Partially-Implements: blueprint overcloud-upgrades-per-service
Change-Id: Ie7d0fa6fef3528bd93e6cde076b964ea8de3185a
|
|
Use heat conditions to skip resources (conditionally create them)
when there are no tasks to deploy.
This requires the heat fix Iefae1fcea720bee4ed69ad1a5fe403d52d54433c
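A minimal HOT sketch (condition name and parameter path illustrative)
of creating a resource only when the role actually has tasks:

  conditions:
    ComputeUpgradeBatchTasksNonEmpty:
      not: {equals: [{get_param: [role_data, Compute, upgrade_batch_tasks]}, []]}
  resources:
    ComputeUpgradeBatchConfig_Step0:
      type: OS::TripleO::UpgradeConfig
      condition: ComputeUpgradeBatchTasksNonEmpty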
Partially-Implements: blueprint overcloud-upgrades-per-service
Change-Id: I2f43fb922d122ffade20e35738f0ba3bb56a4492
|
|
Some services (e.g ceph mon) require upgrading in batches (the old
upgrade architecture did the ceph mon upgrade one controller at a
time). This interface enables doing the same, and over time we
can probably move more services into this interface (e.g when
services support rolling upgrades) to reduce downtime.
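An illustrative service snippet (the task itself is an assumption):
batch tasks are tagged per step like regular upgrade tasks, but are
applied to a batch of nodes at a time:

  upgrade_batch_tasks:
    - name: Update the ceph packages on this node
      tags: step1
      yum: name=ceph state=latest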
Change-Id: If581f301a5493ef33ac1386bdc22f9fca4f2544e
Partially-Implements: blueprint overcloud-upgrades-per-service
|
|
As part of the composable upgrades, the current plan is to disable
the composable upgrade steps running on a particular role
(e.g. all compute nodes) in favor of a later operator-driven
upgrade process, as has previously been the case.
This adds the disable_upgrade_deployment flag to roles_data as
a first step. Thanks to shardy for his help with this.
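A roles_data.yaml sketch (service list abridged) showing the new flag
on a role whose upgrade will be operator-driven:

  - name: Compute
    CountDefault: 1
    disable_upgrade_deployment: True
    ServicesDefault:
      - OS::TripleO::Services::NovaCompute
      - OS::TripleO::Services::ComputeNeutronOvsAgent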
Change-Id: Ice845742a043b34917e61f662885786c73e955fd
|
|
Adds a step0 for any pre-upgrade checks. This migrates
some of the checks we have at the top of
extraconfig/tasks/major_upgrade_controller_pacemaker_1.sh
Checks for other services (and for the cluster) will follow
in separate commits.
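An illustrative step0 entry (the command and tags are assumptions): a
check that fails early, before any service is touched:

  upgrade_tasks:
    - name: Check pacemaker cluster status before upgrade
      tags: step0,validation
      shell: pcs status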
Partially-Implements: blueprint overcloud-upgrades-per-service
Change-Id: I607f1fed68d7f11773484c3d7cb3e5af67465d57
|
|
Heat now supports release name aliases, so we can replace
the inconsistent mix of date related versions with one consistent
version that aligns with the supported version of heat for this
t-h-t branch.
This should also help new users who sometimes copy/paste old templates
and discover intrinsic functions in the t-h-t docs don't work because
their template version is too old.
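For example, assuming the ocata alias for this branch:

  # release-name alias instead of a date-based version
  heat_template_version: ocata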
Change-Id: Ib415e7290fea27447460baa280291492df197e54
|
|
We can't run this during the upgrade steps, because there are things
which need to happen before any role configuration happens, e.g
installing the new hiera heat-config hook, which must be done before
e.g "ControllerDeployment" runs or the stack update hangs.
Partially-Implements: blueprint overcloud-upgrades-per-service
Change-Id: I365b57513590662c3f78a33dc625747f457c48c5
|
|
This shows how we could wire in the upgrade steps using Ansible
as was previously proposed e.g in https://review.openstack.org/#/c/321416/
but it's more closely integrated with the new composable services
architecture.
It's also very similar to the approach taken by SpinalStack where
ansible snippets per-service were combined then run in a series of
steps using Ansible tags.
This patch just enables upgrade of keystone - we'll add support for
other services in subsequent patches.
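An illustrative per-service snippet (the keystone task detail is an
assumption): each service contributes tagged tasks, and the combined
playbook is then run step by step by selecting tags:

  upgrade_tasks:
    - name: Stop keystone (running under httpd)
      tags: step2
      service: name=httpd state=stopped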
Partially-Implements: blueprint overcloud-upgrades-per-service
Change-Id: I39f5426cb9da0b40bec4a7a3a4a353f69319bdf9
|