|
Closes-Bug:1686619
Change-Id: I7c32ca39a456de9833d30c31d41fcb727d2b0a34
|
|
The [Pre|Post]Puppet resources were renamed in
https://review.openstack.org/#/c/365763.
The intent was to give the pre/post deployment steps a
technology-agnostic name instead of tying them to a specific
technology.
The renaming was unintentionally reverted in
https://review.openstack.org/#/c/393644/ and
https://review.openstack.org/#/c/434451.
This submission merges both resources into one and removes the old
pre|post hooks.
Closes-bug: #1669756
Change-Id: Ic9d97f172efd2db74255363679b60f1d2dc4e064
|
|
This reverts commit b323f8a16035549d84cdec4718380bde3d23d6c3 and uses
the new logic in puppet-tripleo (see
Ifd6fa5b398d98e8998630ea0c9a2ce9867ceba2b), which does essentially the
same thing.
Closes-Bug: 1665641
Change-Id: Ib5cb0578be2993af0a0b8675005d838640bdb139
|
|
We used to have this in mitaka:
https://github.com/openstack/tripleo-heat-templates/blob/stable/mitaka/puppet/controller-post.yaml#L45
but we lost it along the way. The problem without this change is that we
are open to the following race:
1) ControllerDeployment_Step1 is started and manages to do a successful
"systemctl start pacemaker"
2) PrePuppet gets called and in the HA deployment calls
pacemaker_maintenance_mode.sh
3) pacemaker_maintenance_mode.sh will set the maintenance-mode=true
property because the pacemaker service is already up:
https://github.com/openstack/tripleo-heat-templates/blob/master/extraconfig/tasks/pacemaker_maintenance_mode.sh#L8-L9
4) If the maintenance property is set to true at this stage, any
resources will be created but they won't actually start (see the
ordering sketch below).
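A rough sketch of the ordering this change restores, based on the
mitaka template linked above (resource names are illustrative): the
first deployment step depends on the pre-deployment task, so the
maintenance-mode handling runs before pacemaker can be started.

  ControllerDeployment_Step1:
    type: OS::Heat::StructuredDeploymentGroup
    # The PrePuppet/PreConfig task must finish before this step may
    # run "systemctl start pacemaker".
    depends_on: ControllerPrePuppet
    properties:
      servers: {get_param: servers}
      config: {get_resource: ControllerConfig_Step1}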
Change-Id: Icb7495edd00385b2975dd42f63085d20292ef9a9
Closes-Bug: #1673795
Co-Authored-By: Jiri Stransky <jstransk@redhat.com>
|
|
We need to generate the Pre and Post Puppet Tasks for all roles, not
just the Controller role. Otherwise, you have to have a role
specifically named Controller running your pacemaker services, or
pacemaker won't be properly handled on stack updates.
When using deployed-servers it's actually not possible to have a role
called Controller, since we need to use all custom roles so that we
can set disable_constraints on each role. Further, it is not possible
to redefine the Controller role since puppet/controller-role.yaml is
listed in the excludes file.
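A sketch of the jinja2 loop this implies in post.j2.yaml (names are
illustrative, not the exact rendered template): generate the task
resources for every role instead of hard-coding Controller.

  {% for role in roles %}
    {{role.name}}PreConfig:
      type: OS::TripleO::Tasks::{{role.name}}PreConfig
      properties:
        servers: {get_param: [servers, {{role.name}}]}
        input_values:
          update_identifier: {get_param: DeployIdentifier}
  {% endfor %}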
Change-Id: I737b24db90932e292b50b122640f66385f2d1c23
Partial-Bug: #1665060
|
|
|
|
This patch implements a new docker deployment architecture that
should allow us to install docker services in a stepwise manner
alongside the baremetal puppet services. This works by using Yaql to
select the docker-specific services (docker/services/*.yaml) vs. the
puppet-specific ones, and then applying the selected JSON to the
relevant Heat software deployments for docker and baremetal puppet in
a stepwise fashion.
Additionally, the new architecture leverages the new composable
services interfaces from Newton to allow configuration of per-service
container configuration sets (directories that are bind-mounted into
kolla containers) by using the Kolla containers themselves. It does
this by spinning up a throwaway "configuration only" version of the
container being configured, running puppet apply in that container,
and copying the generated config files into /var/lib/config-data.
This avoids having to install all of the OpenStack dependency
packages in the heat-agent container itself (our previous approach)
and should allow us to configure a much wider variety of container
config files than would otherwise be possible with the shared
approach.
Combined, the new approach should allow us to configure containers in
both the undercloud and overcloud and to incrementally add CI
coverage to services as we containerize them.
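An illustrative sketch of the selection step (the yaql expression and
attribute names are assumptions, not the exact template): keep only
the services that define a docker_config section and hand that JSON
to the docker deployment steps.

  DockerConfig:
    type: OS::Heat::Value
    properties:
      type: json
      value:
        yaql:
          # Filter the composable-service role_data down to the
          # docker-specific services.
          expression: $.data.where($ != null and $.get('docker_config') != null)
          data: {get_attr: [ServiceChain, role_data]}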
Co-Authored-By: Martin André <m.andre@redhat.com>
Co-Authored-By: Ian Main <imain@redhat.com>
Co-Authored-By: Flavio Percoco <flavio@redhat.com>
Change-Id: Ibcff99f03e6751fbf3197adefd5d344178b71fc2
|
|
Swift rings created or updated on the overcloud nodes will now be
stored on the undercloud at the end of the deployment. An
additional consistency check is executed before storing them,
ensuring all rings within the cluster are identical.
These rings will be retrieved (before Puppet runs) by every node
when an UPDATE is executed, which keeps them in a consistent state
across the cluster.
This makes it possible to add, remove or replace nodes in an
existing cluster without manual operator interaction.
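A purely illustrative sketch of the retrieval side (resource and
config names are assumptions): a deployment ordered before the Puppet
steps that pulls the stored rings from the undercloud, so every node
starts from the same ring state.

  SwiftRingGetDeployment:
    type: OS::Heat::SoftwareDeploymentGroup
    properties:
      name: SwiftRingGetDeployment
      config: {get_resource: SwiftRingGetConfig}
      servers: {get_param: servers}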
Closes-Bug: 1609421
Depends-On: Ic3da38cffdd993c768bdb137c17d625dff1aa372
Change-Id: I758179182265da5160c06bb95f4c6258dc0edcd6
|
|
Where a role has disabled upgrades, we need to skip both the ansible
and puppet steps. To do this we refactor post.j2.yaml so that it can
be included in the upgrade template with an adjusted list of roles.
Note this requires https://review.openstack.org/#/c/425220/ - that
change is needed for local testing of this patch (run
mistral-db-manage populate after updating tripleo-common and restart
the mistral services, or update your repos and re-run openstack
undercloud install).
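An illustrative fragment of how the adjusted role list could be built
when post.j2.yaml is included from the upgrade template (the
attribute name is an assumption):

  {% set upgrade_roles = roles|rejectattr('disable_upgrade_deployment')|list %}
  {% for role in upgrade_roles %}
    # ... render the ansible and puppet steps for {{role.name}} here ...
  {% endfor %}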
Partially-Implements: blueprint overcloud-upgrades-per-service
Change-Id: Ie7d0fa6fef3528bd93e6cde076b964ea8de3185a
|