Create a new SoftwareDeployment that can be used to add a swap file to
all nodes. The amount of swap and the location of the swap file can be
customized via parameter_defaults and the swap_size_megabytes/swap_path
parameters.
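As a rough sketch, an operator might override these in an environment
file; the values below are illustrative only, and the environment would
also need to wire in the swap deployment itself, which this message
does not spell out:

  parameter_defaults:
    # illustrative values, not defaults from this change
    swap_size_megabytes: 4096
    swap_path: /swapfile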
Change-Id: I1fb14c0fab2255410fceb26c3a7d5cfe0ba57b3b
|
|
Nova v2.1 allows using the same API as 2.0 but with microversion
support, which is the recommended way to discover the latest API
version supported in the cloud.
Change-Id: Id011de03d883001fd48dbbcfed53cb821607c7f3
|
|
With the move of keystone under wsgi and httpd, all OpenStack services
received an ordering constraint on the 'httpd' service (which now
exposes keystone and horizon). Since this is not only a layering
violation but also removes the ability to restart keystone (httpd)
without having to restart all dependent services, we introduce a dummy
'openstack-core' service which all other services depend on, and we
also make keystone (httpd) depend on it.
The previous constraint ordering graph can be found here:
http://acksyn.org/files/tripleo/wsgi-2016-02-24-cib.pdf
Whereas after this change we have the following ordering graph:
http://acksyn.org/files/tripleo/wsgi-openstack-core.pdf
Once this is agreed upon, we can start working on fixing the upgrade path
from Liberty.
This fixes RHBZ#1290121
Closes-Bug: #1537885
Change-Id: Ie26908ac9bfc0b84b6b65ae3bda711236b03d9d4
|
|
Because Overcloud Keystone resources are not managed by puppet-keystone
but by os-cloud-config, we need to let os-cloud-config manage the
Keystone bootstrap; otherwise the Exec will fail since some data is
already in place.
Later, when Keystone resources are managed by Puppet, this parameter
can be dropped, because puppet-keystone is able to manage the bootstrap
itself.
Change-Id: I027deaae5cf90c27a6b5e9d236ae61145cab3c3f
Closes-Bug: #1551501
|
|
Due to the fix for https://bugs.launchpad.net/networking-cisco/+bug/1469839,
the replay count parameter is no longer used. This needs to be
reflected in the TripleO templates.
Change-Id: I666c4394108287adcb4989e897ab3936667a602b
Closes-bug: #1551387
|
|
Add Satellite 5 support to the RHEL registration environment and
resources. The registration script is updated in place to support
both Satellite versions, given the similarity of the options in both
scenarios.
The Satellite version is detected based on $REG_SAT_URL, and that
determines whether subscription-manager or rhnreg_ks is used.
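As a hedged sketch, registration against a Satellite 5 host could be
driven by an environment like the one below; the rhel_reg_* parameter
names and values are assumptions recalled from the registration
environment, not quoted from this change:

  parameter_defaults:
    # parameter names and values assumed, not taken from this change
    rhel_reg_sat_url: 'https://satellite5.example.com'
    rhel_reg_activation_key: 'my-activation-key'
    rhel_reg_method: 'satellite'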
Change-Id: Ic261c8a16a7d6d3978f8bfc6e53f75dbe1b716db
|
|
Adds missing configuration which allows overcloud nodes to be
polled by the undercloud node.
One would expect the snmp::snmpv3_user call to create the missing
configuration line, but as noted in this bug, it does not:
https://github.com/razorsedge/puppet-snmp/issues/9
Fixes BZ: https://bugzilla.redhat.com/show_bug.cgi?id=1223278
Change-Id: Ieb2d612a27a938b45056bd37176f44cb55543d75
Closes-Bug: 1532700
|
|
Update this script to use 'set -e' for all commands except
the ping checks themselves, which we allow to fail so we
can give enhanced output.
This resolves an issue where some commands could fail and cause
an undetectable error (the script exits with success), leaving Heat
unable to detect any network errors at all. This was recently the
case with git commit 45be848, where we switched to using
python -c 'import ipaddr' even though the ipaddr library wasn't
installed on our overcloud nodes, causing all network validations
to silently fail.
Change-Id: I40ed6a537136e866357cc0d9304e905afdd28522
Depends-On: Ia617f44b4673b89202e5e5cdcac9b50f46b3e6c8
Related-bug: #1551048
|
|
Adds a new nested stack deployment which allows operators to
opt in to deploying tarballs and RPM packages by setting
DeployArtifactURLs as a parameter_default in a Heat
environment.
The intent is to use this setting to allow t-h-t to
transparently deploy things like tarballs of puppet modules
via a Swift Temp URL.
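For instance, a minimal environment sketch could look like the
following, assuming the parameter accepts a list of URLs; the URL is a
placeholder, not a value from this change:

  parameter_defaults:
    DeployArtifactURLs:
      # placeholder URL for illustration only
      - 'http://example.com/artifacts/puppet-modules.tar.gz'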
Change-Id: I1bad4a4a79cf297f5b6e439e0657269738b5f326
Implements: blueprint puppet-modules-deployment-via-swift
|
|
As part of the major upgrade workflow, non-controller nodes are to
be updated by the operator, out-of-band and only after an initial
heat stack-update that invokes the upgrade of the controller nodes.
This review adds a ComputeDeliverUpgradeConfigDeployment_Step3
SoftwareDeploymentGroup to be applied only to compute nodes, which
depends on the controllers having been upgraded after
ControllerPacemakerUpgradeConfig_Step2.
Its purpose is to deliver, but not invoke, the upgrade script to
/root/tripleo_upgrade_node.sh on the compute nodes.
The non-controller nodes will then be upgraded later by an
operator who runs the script provided for that purpose, for
example the one at https://review.openstack.org/#/c/284722/1.
Change-Id: Ic6115fc8cf5320abfcf500112ff563bde8b88661
|
|
Due to an incorrect rebase, d0dcb9401c868786df58f5801a431392b8e89df8
dropped the changes made in dd7602ad82100617126be26d80a6d3f67cb739ac to
add a vncproxy to the endpoint map. This change restores them.
Change-Id: Ifef7f955481405d5fe39ba48c8b1a79aa9c170f2
|
|
Currently, since nova compute is not configured to
send notifications to ceilometer, tempest tests
fail on tempest.api.telemetry.test_telemetry_notification_api.
Change-Id: I763b7d246ae3f5955b6f555c8fd107d2cac89787
|
|
Configures all services to send notifications to rabbit. The puppet
modules are not consistent in how this is done: some expose
notification config as a top-level parameter, others require setting
it through a *_config structure, and cinder provides a separate class
dedicated to enabling ceilometer notifications.
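As an illustration of those three styles, a hieradata sketch might
look like the following; the key names are recalled from the puppet
modules of that era and should be treated as assumptions rather than
the exact keys touched by this change:

  # key names below are assumptions, shown only to illustrate the styles
  # top-level class parameter
  nova::notification_driver: messagingv2
  # via a *_config structure
  heat::config::heat_config:
    DEFAULT/notification_driver:
      value: messagingv2
  # cinder's dedicated class
  cinder::ceilometer::notification_driver: messagingv2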
Change-Id: I23e2ddad3c59a06cfbfe5d896a16e6bad2abd943
|
|
This change adds a sample network-environment.yaml file to the
environments. This sample includes pointers to NIC config files,
as well as default network subnets and allocation pools.
This is meant to be a demonstration of the default settings for
a virtual deployment. In a real deployment, the operator would
customize the settings here and point to custom NIC config
templates.
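A heavily trimmed sketch of what such a sample contains; the NIC
config paths and address ranges below are illustrative, not the
committed defaults:

  resource_registry:
    # illustrative NIC config mappings
    OS::TripleO::Compute::Net::SoftwareConfig: ../network/config/single-nic-vlans/compute.yaml
    OS::TripleO::Controller::Net::SoftwareConfig: ../network/config/single-nic-vlans/controller.yaml
  parameter_defaults:
    # illustrative subnet and allocation pool
    InternalApiNetCidr: 172.17.0.0/24
    InternalApiAllocationPools: [{'start': '172.17.0.10', 'end': '172.17.0.200'}]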
Change-Id: I0288c0680effea06b5f805a0d955e8bbf6152ba6
|
|
Populates /etc/hosts with an entry for each IP address the node
has, which will be useful for migrating service configuration from
using IPs to using hostnames.
This is what the lines look like on a host which doesn't have all ports:
172.16.2.6 overcloud-novacompute-0.localdomain overcloud-novacompute-0
192.0.2.9 overcloud-novacompute-0-external
172.16.2.6 overcloud-novacompute-0-internalapi
172.16.1.6 overcloud-novacompute-0-storage
192.0.2.9 overcloud-novacompute-0-storagemgmt
172.16.0.4 overcloud-novacompute-0-tenant
192.0.2.9 overcloud-novacompute-0-management
The network against which the default (or primary) name is resolved
can be configured (for computes) via ComputeHostnameResolveNetwork.
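For example, a deployment that wants compute hostnames to resolve on
the Internal API network might set the following; the network name is
an assumption, not quoted from this change:

  parameter_defaults:
    # network name assumed for illustration
    ComputeHostnameResolveNetwork: internal_api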
Change-Id: Id480207c68e5d68967d67e2091cd081c17ab5dd7
|
|
This patch adds a new BondInterfaceOvsOptions parameter to bond.yaml.
Sometimes there is no need to use VLANs, and therefore no need for the
bond-with-vlans template files. This approach allows setting up things
like LACP or the bonding mode via a nested stack Heat parameter.
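A rough example of overriding it from an environment file; the OVS
option string below is illustrative only:

  parameter_defaults:
    # illustrative OVS bond options, not a recommended default
    BondInterfaceOvsOptions: 'bond_mode=balance-slb lacp=active'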
Change-Id: I2d318aa738ab609bc76212bef49b2c5986d6dcdf
|
|
During upgrades, we run Puppet on the whole deployment to converge
the state only after the upgrade workflow itself has been fully
completed. That is an opportunity to utilize Puppet to make sure Nova
Compute RPC doesn't remain pinned to the older version.
Change-Id: I6ebc813a80dfd9dfbbb213c38724487e044507b8
|
|
A stack is an extremely heavyweight abstraction in Heat. Particularly in
TripleO, every stack includes a copy of all the template and environment
data for all of the stacks in the tree, all of which must be stored anew
in the database.
The EndpointMap abstraction created no fewer than 30 nested stacks, none
of which contained any resources but which existed purely for the
purpose of abstracting out some intrinsic functions used to calculate
the endpoint URLs for the various services. This likely adds several GB
to the memory requirements of the undercloud, and can cause things to
slow to a crawl since all 30 nested stacks need to be queried whenever
we need data from any one of them.
This change eliminates the nested stacks and instead generates the
endpoint map statically. This can be done offline in less than 250ms,
allows the input data to be expressed in an even more human-readable
form, and reduces the runtime overhead of the endpoint map by a factor
of 31, all with no loss of functionality, compatibility or flexibility.
Since we don't run a setup script to generate the tarball, the
endpoint_map.yaml output is checked in to source control. The build
script offers a --check option that can be used to make sure that the
output file is up-to-date with the input data.
Change-Id: I2df8f5569d81c1bde417ff5b12b06b7f1e19c336
|
|
When set to True, this parameter leads to a non-operational state in
a Nexus VXLAN topology, where VNIs are created but the NVE peers do
not get discovered on the Nexus.
It is a time-consuming process to debug the configuration and find out
that this parameter should be changed to False. To prevent problems
for future deployments, we want to default this parameter to False.
Change-Id: I685ad7d212af0d9e568acbf1ccf1607d120c195e
|
|
It is currently possible to provide arbitrary config settings for
Cinder using the "cinder::config::cinder_config:" hiera key. To add a
particular backend, though, one also has to edit the list of enabled
backends in Cinder, which isn't possible. This change makes it
possible via a user-customizable array of backends to be enabled.
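A hieradata sketch of the intended usage; the cinder::config::cinder_config
key is quoted from this message, while the backend name, driver and the
name of the enabled-backends list are assumptions:

  cinder::config::cinder_config:
    my_backend/volume_backend_name:
      value: my_backend
    my_backend/volume_driver:
      value: cinder.volume.drivers.rbd.RBDDriver
  # the key name below is an assumption, not quoted from this change
  cinder_user_enabled_backends:
    - my_backend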
Change-Id: Ic664c1c2b0f7b1b4b6be8b5064a38650694d4857
|
|
This parameter can be used for pinning (and later unpinning) the Nova
Compute RPC version.
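A sketch of how an operator might set such a pin in an environment
file; the parameter name below is an assumption based on later
tripleo-heat-templates, not quoted from this message:

  parameter_defaults:
    # parameter name assumed for illustration
    UpgradeLevelNovaCompute: 'liberty'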
Change-Id: I2f181f3b01f0b8059566d01db0152a12bbbd1c3e
|
|
Change-Id: I7226070aa87416e79f25625647f8e3076c9e2c9a
|
|
Add Heat software deployments to be used to upgrade major versions of
OpenStack on the controller nodes. All controller services are taken
down while the upgrade is in progress.
The new, updated yum repositories should be configured by another
process, e.g. the deployment artifacts transfer via Swift.
Change-Id: Ia0a04e4a11d67e7a5acc53c1f8a8f01ed5ca8675
Co-Authored-By: Giulio Fidente <gfidente@redhat.com>
Co-Authored-By: Jiri Stransky <jistr@redhat.com>
|
|
Our current nova-neutron configuration does not work with the latest
puppet-nova, in particular with this patch [1].
This commit adds keystone v3 endpoints to the map and gets the
nova::network::neutron configuration to use them.
[1] https://github.com/openstack/puppet-nova/commit/d09868a59c451932d67c66101b725182d7066a14
Change-Id: Ifb8c23c81c665c2732fa5cd757760668b06a449a
|
|
See RHBZ 1311005 and 1247303. In short: sometimes when a controller
node gets fenced, rabbitmq is unable to rejoin the cluster. To fix this
we need two steps:
1) The fix for the RA in BZ 1247303
2) Add notify=true to the meta parameters of the rabbitmq resource on
fresh installs and updates
Note that if this change is applied on systems that do not
have the fix for the rabbitmq resource agent, no action is taken.
So when the resource agent is updated, the notify
operation will start to work as soon as the first monitor
action takes place.
Fixes RH Bug #1311005
Change-Id: I513daf6d45e1a13d43d3c404cfd6e49d64e51d5a
|
|
This change adds extra config yaml files for the Big Switch agent
and Big Switch LLDP.
This change is mainly for compute nodes; the changes related to
controller nodes landed in e78e1c8d9b5a7ebf327987b22091bff3ed42d1c1.
This change also removes the neutron_enable_bigswitch_ml2 flag.
Instead, users need to specify NeutronMechanismDrivers: bsn_ml2 in an
environment file.
Previous discussion about this change can be found in the abandoned
review request https://review.openstack.org/#/c/271940/
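For example, a minimal environment sketch; only the
NeutronMechanismDrivers name and the bsn_ml2 value are taken from this
message:

  parameter_defaults:
    NeutronMechanismDrivers: bsn_ml2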
Depends-On: Iefcfe698691234490504b6747ced7bb9147118de
Change-Id: I81341a4b123dc4a8312a9a00f4b663c7cca63d7c
|
|
This commit ensures we are not using any deprecated parameters for
nova::network::neutron and are using the right variable names.
Change-Id: Ic1b41e2cdbb6b180496822cc363c433e9388aa02
|