This is necessary to keep creating the Default domain.
Change-Id: Ib9911819e89f30270d4f7597639b33f30ad2e3a6
Closes-Bug: #1549867
|
This change adds a new set of network templates with IPv6 subnets
that can be used instead of the existing IPv4 networks. Each network
can use either the IPv4 or IPv6 template, and the Neutron subnet will
be created with the specified IP version.
The IPv6 networks default to the fd00::/8 prefix for the internal
isolated networks (this range is reserved for private use, similar to
10.0.0.0/8), and 2001:db8:fd00:1000::/64 is used as an example default
for the External network (2001:db8::/32 is the documentation prefix
[RFC3849]), although in practice this would ordinarily be a globally
addressable subnet. These parameters may be overridden in an
environment file.
This change will require updates to the OpenStack Puppet
Modules (OPM) to support IPv6 addresses in some of the hieradata
values. Many of the OPM modules already have IPv6 support, added for
IPv6 deployments in Packstack, but some OPM packages that apply only
to Instack/TripleO deployments need to be updated.
IPv6 addresses used in URLs need to be surrounded by brackets in
order to differentiate the IP address from the port number. This
change adds a new ip_address_uri output to the network/ports
resources, which is the IP address with brackets in the case of
IPv6, and the raw IP address without brackets for IPv4 ports.
This change also updates some URLs which are constructed in Heat.
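As an illustration of the bracketing rule (a hypothetical shell
helper, not the Heat output added by this change):
    # hypothetical helper: wrap the address in brackets only when it is IPv6
    ip_address_uri() {
        local ip=$1
        if [[ $ip == *:* ]]; then
            echo "[${ip}]"
        else
            echo "${ip}"
        fi
    }
    # "http://$(ip_address_uri fd00::10):8080/" -> "http://[fd00::10]:8080/"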
Testing uncovered problems with Puppet not accepting IPv6
addresses; these are addressed in the latest Puppet.
Additional changes were required to make this work with Ceph.
IPv6 tunnel endpoints with Open vSwitch are not yet supported
(although support is coming soon), so this review leaves the
Tenant network as an isolated IPv4 network for the time being.
Change-Id: Ie7a742bdf1db533edda2998a53d28528f80ef8e2
|
Id3d4f12235501ae77200430a2dc022f378dce336 added support for pre-allocated
IPs on the other overlay networks, but because the patch adding the
management network (I0813a13f60a4f797be04b34258a2cffa9ea7e84f) was
under review around the same time, we missed adding the from_pool
capability to the ManagementNetwork.
Change-Id: If99f37634d5da7e7fb7cfc31232e926bd5ff074a
|
Fixed the heat_template_version of these YAML files to the Liberty
release version according to the HOT template specs.
Change-Id: Ic5e0d843f7e164c59fb1737e52ef4cf6ad4df77f
|
Ceilometer Alarm is deprecated in Liberty in favour of Aodh.
This patch:
* manages Aodh Keystone resources
* deploys the Aodh API under WSGI, plus the Notifier, Listener and Evaluator
* manages new parameters to customize the Aodh deployment
* uses the Ceilometer DB for the upgrade path
* adds the Pacemaker configuration
Depends-On: I9e34485285829884d9c954b804e3bdd5d6e31635
Depends-On: I891985da9248a88c6ce2df1dd186881f582605ee
Depends-On: Ied8ba5985f43a5c5b3be5b35a091aef6ed86572f
Co-Authored-By: Pradeep Kilambi <pkilambi@redhat.com>
Change-Id: I58d419173e80d2462accf7324c987c71420fd5f6
|
This commit introduces a bash file to be sourced into the major upgrade
scripts. Specific pieces of migration logic can be put into this file in
the form of bash functions, which can then be called from the upgrade
scripts.
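As a rough sketch of the pattern (the file and function names here are
hypothetical, not the ones added by this commit):
    # major_upgrade_functions.sh -- sourced by the per-role upgrade scripts
    function example_migration_step {
        # one self-contained piece of migration logic per function
        echo "running example migration step"
    }

    # in an upgrade script:
    #     source major_upgrade_functions.sh
    #     example_migration_step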
Change-Id: Ibf7aa84d3880e9218c488dec9d707300e1784744
|
This splits the upgrade script delivery out of the UpgradeWorkflow
and into a new task which delivers the upgrade script to compute and
object-storage nodes. This is intended to be the first part of the
upgrade process, since we need to upgrade the swift nodes before the
controllers, and then only one at a time. The delivered upgrade
script can later be invoked by the operator using the existing
'upgrade-non-controller.sh' script in tripleo-common.
The delivery step is enabled by passing -e
environments/major-upgrade-script-delivery.yaml (added here) to the
openstack overcloud deploy command.
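For example (along with any other -e environments used for the
original deployment):
    openstack overcloud deploy --templates \
      -e environments/major-upgrade-script-delivery.yaml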
Change-Id: I20a0d4978e907111404f8108c502ab53b69a3296
|
This introduces upgrades for Cinder block storage nodes. Currently
Cinder doesn't support upgrade level pinning and cannot safely deal with
version skew. This means that we have to upgrade Cinder storage nodes in
sync with the controller nodes (after the controllers have been taken
down for the upgrade and before they are brought back up) to ensure that
Cinder services perform AMQP communication only within the same major
version of Cinder.
According to our current knowledge, Cinder block storage nodes are the
only node type that will have to be upgraded in sync with controllers.
Change-Id: Icec913c015eff744b0f31b513176b4b657df43af
|
Since swift isn't managed by pacemaker, we need to stop and start the
swift services manually via systemctl. This moves the duplicated
start/stop blocks into a common function (we already include
pacemaker_common_functions.sh here, so we may as well use it).
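A simplified sketch of such a common function (the helper name and the
exact list of swift services are assumptions for illustration):
    function swift_services_action {
        local action=$1   # "start" or "stop"
        for unit in openstack-swift-account openstack-swift-container \
                    openstack-swift-object openstack-swift-proxy; do
            systemctl "$action" "$unit"
        done
    }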
Change-Id: Ic4f23212594c1bf9edc39143bf60c7f6d648fd1d
|
Old overcloud images don't have python-zaqarclient installed, and new
overclouds' os-collect-config is configured with Zaqar support. Taken
together, this means that on upgrade we need to install
python-zaqarclient; otherwise os-collect-config will be restarted
during the yum update and crash trying to import the missing Python
module from zaqarclient.
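In upgrade-script terms this amounts to something like the following
sketch (not the exact snippet added by the commit):
    # make sure the module is importable before os-collect-config is restarted
    yum -y install python-zaqarclient
    yum -y update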
Change-Id: I3e875e14cb60b1b78aec0d9ddc412ccf865abd01
|
Quiet down yum during major upgrades to reduce the output size. This is
consistent with what was introduced into minor updates in change
I517271e8465885421a78b73c5af756816c37a977.
Change-Id: Ie6b470e383fdf42870ac6f60ca43e44b4c446ebe
|
Create a new SoftwareDeployment that can be used to add a swap file to
all nodes. The amount of swap and the location of the swap file can be
customized via parameter_defaults and the swap_size_megabytes/swap_path
parameters.
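The delivered script would boil down to something like this sketch
(parameter names taken from the commit; the defaults and the exact
commands are illustrative assumptions):
    swap_path=${swap_path:-/swap}
    swap_size_megabytes=${swap_size_megabytes:-4096}

    dd if=/dev/zero of="$swap_path" count="$swap_size_megabytes" bs=1M
    chmod 0600 "$swap_path"
    mkswap "$swap_path"
    swapon "$swap_path"
    echo "$swap_path swap swap defaults 0 0" >> /etc/fstab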
Change-Id: I1fb14c0fab2255410fceb26c3a7d5cfe0ba57b3b
|
Nova v2.1 exposes the same API as 2.0 but with microversion support,
which is the recommended way to discover the latest API version
supported by the cloud.
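Discovery works by querying the versioned endpoint, for example
(the endpoint URL here is illustrative):
    # the response lists "min_version" and "version", i.e. the supported
    # microversion range
    curl -s http://controller:8774/v2.1/ | python -m json.tool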
Change-Id: Id011de03d883001fd48dbbcfed53cb821607c7f3
|
With the move of keystone under wsgi and httpd, all openstack services
received an ordering constraint on the 'httpd' service (which now exposes
keystone and horizon). Since this is not only a layering violation but
also removes the ability to restart keystone (httpd) without restarting
all dependent services, we introduce a dummy 'openstack-core' service
which all other services depend on, and we also make keystone (httpd)
depend on it.
The previous constraint ordering graph can be found here:
http://acksyn.org/files/tripleo/wsgi-2016-02-24-cib.pdf
Whereas after this change we have the following ordering graph:
http://acksyn.org/files/tripleo/wsgi-openstack-core.pdf
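In pacemaker terms the pattern is roughly the following (illustrative
commands; the resource names other than openstack-core are assumptions,
not taken from the change):
    # a no-op resource whose only job is to anchor ordering constraints
    pcs resource create openstack-core ocf:heartbeat:Dummy --clone
    # keystone (httpd) and the other OpenStack services start after it
    pcs constraint order start openstack-core-clone then httpd-clone
    pcs constraint order start openstack-core-clone then openstack-nova-api-clone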
Once this is agreed upon, we can start working on fixing the upgrade path
from Liberty.
This fixes RHBZ#1290121
Closes bug: 1537885
Change-Id: Ie26908ac9bfc0b84b6b65ae3bda711236b03d9d4
|
Because Overcloud Keystone resources are not managed by puppet-keystone
but by os-cloud-config, we need to let os-cloud-config manage the
keystone bootstrap; otherwise the Exec will fail since some data is
already in place.
Later, when Keystone resources are managed by Puppet, this parameter can
be dropped, because puppet-keystone is able to manage the bootstrap
itself.
Change-Id: I027deaae5cf90c27a6b5e9d236ae61145cab3c3f
Closes-Bug: #1551501
|
Due to the fix for https://bugs.launchpad.net/networking-cisco/+bug/1469839,
the replay count parameter is no longer used. This needs to be
reflected in the TripleO templates.
Change-Id: I666c4394108287adcb4989e897ab3936667a602b
Closes-bug: #1551387
|
Add Satellite 5 support to the RHEL registration environment and
resources. The registration script is updated to support both Satellite
versions in place, given the similarity of the options for the two
scenarios.
The satellite version is detected based on $REG_SAT_URL, and that
determines whether subscription-manager or rhnreg_ks is used.
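The detection can be pictured roughly as follows (the probe shown here
is an illustrative assumption; the actual script's check may differ):
    # pick a registration tool based on what the Satellite URL serves
    if curl -L -k -s "$REG_SAT_URL" | grep -qi katello; then
        satellite_version=6    # register with subscription-manager
    else
        satellite_version=5    # register with rhnreg_ks
    fi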
Change-Id: Ic261c8a16a7d6d3978f8bfc6e53f75dbe1b716db
|
Adds missing configuration which allows overcloud nodes to be
polled by the undercloud node.
One would expect the snmp::snmpv3_user call to create the missing
configuration line, but as noted in this bug, it does not:
https://github.com/razorsedge/puppet-snmp/issues/9
Fixes BZ: https://bugzilla.redhat.com/show_bug.cgi?id=1223278
Change-Id: Ieb2d612a27a938b45056bd37176f44cb55543d75
Closes-Bug: 1532700
|
Update this script to use 'set -e' for all commands except the ping
checks themselves, which we allow to fail so that we can give
enhanced output.
This resolves an issue where some commands could fail and cause an
undetectable error (the script exits with success), leaving Heat
unable to detect any network errors at all. This was recently the
case with git commit 45be848, where we switched to using
python -c 'import ipaddr', which wasn't even installed as a library
on our overcloud nodes, causing all network validations to silently
fail.
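The resulting structure of the script is roughly (a sketch of the
pattern, not the actual validation script):
    set -e    # any unguarded command failure now aborts the script

    ping_check() {
        # the ping itself is allowed to fail so we can report which
        # address was unreachable instead of exiting silently
        ping -c 1 -w 5 "$1" &> /dev/null || echo "FAILURE: $1 is not pingable."
    }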
Change-Id: I40ed6a537136e866357cc0d9304e905afdd28522
Depends-On: Ia617f44b4673b89202e5e5cdcac9b50f46b3e6c8
Related-bug: #1551048
|
Adds a new nested stack deployment which allows operators to opt in
to deploying tarballs and RPM packages by setting DeployArtifactURLs
as a parameter_default in a Heat environment.
The intent is to use this setting to allow t-h-t to
transparently deploy things like tarballs of puppet modules
via a Swift Temp URL.
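On the node side the behaviour amounts to roughly the following (an
illustrative sketch; the variable name and the actual deployment
script's details are assumptions):
    for url in $DEPLOY_ARTIFACT_URLS; do
        artifact="/tmp/$(basename "${url%%\?*}")"   # strip any Temp URL query string
        curl -L -s -o "$artifact" "$url"
        case "$artifact" in
            *.rpm)           yum -y install "$artifact" ;;
            *.tar.gz|*.tgz)  tar -xzf "$artifact" -C / ;;
        esac
    done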
Change-Id: I1bad4a4a79cf297f5b6e439e0657269738b5f326
Implements: blueprint puppet-modules-deployment-via-swift
|
As part of the major upgrade workflow, non-controller nodes are to be
updated by the operator, out-of-band and only after an initial heat
stack-update that invokes the upgrade of the controller nodes.
This review adds a ComputeDeliverUpgradeConfigDeployment_Step3
SoftwareDeploymentGroup to be applied only to compute nodes, which
depends on the controllers having been upgraded after
ControllerPacemakerUpgradeConfig_Step2.
Its purpose is to deliver, but not invoke, the upgrade script to
/root/tripleo_upgrade_node.sh on compute nodes.
The non-controller nodes will then be upgraded later by an operator
who will run the script provided for that purpose, as in
https://review.openstack.org/#/c/284722/1 for example.
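For example, once the stack update has delivered the script, the
operator would run something like the following (the flag and node
name are assumptions; check the tripleo-common script's usage text):
    upgrade-non-controller.sh --upgrade overcloud-novacompute-0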
Change-Id: Ic6115fc8cf5320abfcf500112ff563bde8b88661
|
Due to an incorrect rebase, d0dcb9401c868786df58f5801a431392b8e89df8
dropped the changes made in dd7602ad82100617126be26d80a6d3f67cb739ac to
add a vncproxy to the endpoint map. This change restores them.
Change-Id: Ifef7f955481405d5fe39ba48c8b1a79aa9c170f2
|
Currently, since nova compute is not configured to send notifications
to ceilometer, tempest tests fail in
tempest.api.telemetry.test_telemetry_notification_api.
Change-Id: I763b7d246ae3f5955b6f555c8fd107d2cac89787
|
Configures all services to send notifications to rabbit. The puppet
modules are not consistent regarding how this is done: some expose
notification config as a top-level param, for others it must be set
through a *_config structure, and cinder provides a separate class
dedicated to enabling ceilometer notifications.
Change-Id: I23e2ddad3c59a06cfbfe5d896a16e6bad2abd943
|