There are quite a few cases where it is useful to disable ring building on the
nodes. For example:
- using different weights, regions, and zones
- replacing a node in an existing Swift cluster
- adding a new node to an existing cluster
- using storage policies and therefore multiple rings
- using different nodes and disks for account, container and object servers
This patch makes it possible to disable ring building. Rings then need to be
maintained manually and copied to all storage and proxy nodes within the cluster.
This patch is similar to I01311ec3ca265b151f8740bf7dc57cdf0cf0df6f, except that
it uses the current templates.
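A minimal sketch of how a deployer might turn this off in an environment file;
the parameter name below is an assumption used for illustration, the actual
name is defined by this patch:

  parameter_defaults:
    # Assumed parameter name, for illustration only. With ring building
    # disabled, rings must be built manually and copied to every storage
    # and proxy node.
    SwiftRingBuild: false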
Change-Id: I56978b15823dd6eaf4b6fd3440df2f895e89611a
This is set by tripleoclient; remove it from here so it doesn't
override the user-provided value.
Change-Id: I6110b71e484af749838f91dc5c6c4982b0c83074
Since the password is now autogenerated by tripleoclient,
there is no need to keep the default value here.
Change-Id: If41cb56134966456f8590da04f392faffe5c62a1
Closes-Bug: #1557688
In a previous patch [1], we added support for VIR_MIGRATE_TUNNELLED when
using shared storage for VMs.
Nova Mitaka [2] [3] now has a parameter called 'live_migration_tunnelled'
that controls whether or not to use tunnelled migration.
It replaces 'block_migration_flag' and 'live_migration_flag', which are
both deprecated.
[1] https://review.openstack.org/#/c/286584/
[2] https://review.openstack.org/#/c/263436/
[3] https://review.openstack.org/#/c/263434/
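As a rough sketch (illustrative Hiera data; the exact keys wired up by the
templates may differ), the new option boils down to:

  # Illustrative only: use the new Mitaka option instead of the deprecated
  # *_migration_flag settings.
  nova::migration::libvirt::live_migration_tunnelled: true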
Change-Id: I8b199b6e72c80b2df7b679e0a20e39f8400d0478
This patch makes sure that:
* When using shared storage, Nova is configured with the VIR_MIGRATE_TUNNELLED
  flag appended to both block_migration_flag and live_migration_flag, for
  security improvements (sketched below).
* When not using shared storage, Nova is not configured with the
  VIR_MIGRATE_TUNNELLED flag because it is not supported by QEMU yet. We need
  to make sure the flag is unset, otherwise live migration will fail when VMs
  are not on shared storage.
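For illustration only (assumed Hiera keys; the actual wiring is done by the
templates and puppet profiles), the shared-storage case ends up with something
like:

  # Illustrative Hiera data; note VIR_MIGRATE_TUNNELLED appended to both flags.
  nova::migration::libvirt::live_migration_flag: 'VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_TUNNELLED'
  nova::migration::libvirt::block_migration_flag: 'VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_NON_SHARED_INC,VIR_MIGRATE_TUNNELLED'

In the non-shared-storage case, VIR_MIGRATE_TUNNELLED is simply left out of
both values.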
Note: this patch will be backported to stable branches. In a later
iteration, we will probably use the new live_migration_tunnelled Nova
parameter, which is a simpler way to manage this feature.
Co-Authored-By: Kashyap Chamarthy <kchamart@redhat.com>
Change-Id: I557c1624ee944a32b1831d504f7b189308cd1961
To deploy Ceph on IPv6, we need to enable ms_bind_ipv6 in addition
to passing the list of MON IPs in brackets.
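A hedged Hiera sketch (key names assumed, for illustration; the addresses are
examples):

  # Illustrative only: bind the Ceph messengers to IPv6 and pass the MON IPs
  # in brackets.
  ceph::profile::params::ms_bind_ipv6: true
  ceph::profile::params::mon_host: '[fd00:fd00:fd00:3000::10],[fd00:fd00:fd00:3000::11],[fd00:fd00:fd00:3000::12]'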
Change-Id: I3644b8fc06458e68574afa5573f07442f0a09190
https://review.openstack.org/268356 can cause issues in IPv6
environments. It generates the following Hiera data:
  nova::vncproxy::common::vncproxy_host: [2001:db8:fd00:1000::10]
which fails because of the brackets. Making sure there are no brackets
in nova_vncproxy_host makes it work both for IP addresses and for
DNS names.
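What we want instead is the bare address (or hostname), for example:

  nova::vncproxy::common::vncproxy_host: 2001:db8:fd00:1000::10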
Change-Id: Iafe18f042725eb9419d97cd674c4b9a1a895b187
Currently the VNC server on the compute nodes binds to 0.0.0.0,
which only works with IPv4 addresses and breaks connectivity when
IPv6 addressing is used.
This fixes https://bugzilla.redhat.com/show_bug.cgi?id=1300678.
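A hedged sketch of the intended result (the Hiera key and address below are
assumptions for illustration): bind the VNC server to the node's own internal
API address instead of the wildcard:

  # Illustrative only: listen on the compute node's internal_api address.
  nova::compute::libvirt::vncserver_listen: 'fd00:fd00:fd00:2000::15'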
Change-Id: Id642d224fb3c62f786453dc684634adca1c2c09d
Co-Authored-By: Giulio Fidente <gfidente@redhat.com>
Change-Id: I9ed917f32b3de95beb234ade4819a8b96affe3e9
The yum update on cinder nodes should be quiet, as it is on controllers,
because the results of these updates are sent back to Heat. I mistakenly left
this out in the first patch because I used one of the standalone node
upgrade scripts as a copy/paste base for the cinder node upgrade script.
Change-Id: Id13190dc4d242317829c7994088183f52d21461d
This patch adds support for configuring a Keystone domain for Heat
via the heat-keystone-setup-domain script. It should be reverted
as soon as Keystone v3 is fully functional.
This patch won't be fully functional without either the
python-keystoneclient fix [1] or the workaround [2].
[1] https://bugs.launchpad.net/python-keystoneclient/+bug/1452298
[2] https://review.openstack.org/180563
Change-Id: Ie9cdd518b299c141f0fdbb3441a7761c27321a88
Co-Authored-By: Jiri Stransky <jistr@redhat.com>
Depends-On: Ic541f11978908f9344e5590f3961f0d31c04bb0c
Change-Id: I26b7a1cd1b7b6520db1df49c60a86c2bb5bce1b0
Depends-On: I12e835964a0370de73e45ef0a8603656ecb02d0c
Depends-On: I8a5844e89bd81a99d5101ab6bce7a8d79e069565
For the external load balancer work, we added the ability to specify
fixed IPs for controller nodes on all network isolation networks.
To give users full control over the placement and IP addresses of
deployed nodes, we need to be able to do the same for the other
node types.
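For illustration, the kind of environment data this enables for non-controller
roles (parameter names assumed here, following the existing controller pattern;
addresses are examples):

  parameter_defaults:
    NovaComputeIPs:
      internal_api:
        - 172.16.2.251
        - 172.16.2.252
      storage:
        - 172.16.1.251
        - 172.16.1.252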
Change-Id: I3ea91768b2ea3a40287f2f3cdb823c23533cf290
The Neutron agents are currently not used. Refactor the Heat templates
to accommodate this change.
Change-Id: Ice3c5ce723fa16cfb66c2b0afbe51d7b282c3210
Atomic's root partition and logical volume default to 3G.
In order to launch larger VMs, we need to enlarge the root
logical volume and scale down the docker_pool logical volume.
We allocate 80% of the disk space for VM data and the
remaining 20% for Docker images.
Change-Id: If3fff78f476de23c7c51741a49bae227f2cdfe3e
Co-authored-by: Ian Main <imain@redhat.com>
Co-authored-by: Jeff Peeler <jpeeler@redhat.com>
Depends-On: I1a8741b9e00775763911222cbe0af677b59e03a1
Change-Id: I373f97ada4e4101700a12b42dfb8ee4b2ff701f2
Combined with a fix in puppetlabs-rabbitmq, we can lift the forced
conversion of rabbitmq::file_limit to a string in Hiera. See the
referenced puppetlabs-rabbitmq pull request for an explanation of the
issue.
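Illustrative Hiera data: once the referenced pull request is in place, the
value no longer has to be force-quoted as a string:

  rabbitmq::file_limit: 65536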
Change-Id: I0ec720b5e06763e86ea93f59cfe05842b3d13269
Depends-On: https://github.com/puppetlabs/puppetlabs-rabbitmq/pull/401
The variables in the heredoc should be escaped because they should be
evaluated only when the inner script runs, not when the outer "writer"
script runs.
Python-zaqarclient is installed so that os-collect-config works, as we do
on the other node types.
Swift-proxy is removed from the list of services to stop/start, as
swift-proxy isn't supposed to run on the Swift storage nodes.
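A simplified sketch of the escaping issue (hypothetical key and path, not the
actual template text): the outer "writer" script generates the node upgrade
script via a heredoc, so anything meant for the inner script is escaped with a
backslash:

  config: |
    cat > /root/upgrade_node.sh <<ENDOFCAT
    yum -y update
    echo "upgraded at \$(date)"  # escaped: expands only when the inner script runs
    ENDOFCAT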
Change-Id: I8426b859d11378ebdc3da94dcc090133dab0c628
During the controller upgrade in
major_upgrade_controller_pacemaker_1.sh we use systemctl to stop
all Swift services and then start them again in _pacemaker_2.sh.
In the case of stand-alone Swift nodes, the deployer may have set
ControllerEnableSwiftStorage: false so that, as far as Swift is
concerned, only the swift-proxy service is left on the controllers.
The systemctl_swift function used during upgrades is changed to take
this into account.
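For reference, such a deployment sets the following in an environment file
(parameter name taken from this commit message):

  parameter_defaults:
    ControllerEnableSwiftStorage: false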
Change-Id: Ib22005123429f250324df389855d0dccd2343feb
This allows running a command or a script snippet on all overcloud nodes
at the beginning of the upgrade. The intended use is to switch to a new
set of repositories on the overcloud. This is done differently in
different contexts (e.g. upstream vs. downstream), but generally it
should be simple enough not to warrant the creation of a switchable
"UpgradeInit" resource in the resource registry; a string
command/snippet parameter should suffice.
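A hedged sketch of the intended usage (the parameter name and repository
details below are assumptions for illustration):

  parameter_defaults:
    # Assumed parameter name; run on every overcloud node before the upgrade
    # steps to switch to the new set of repositories.
    UpgradeInitCommand: |
      set -eu
      rm -f /etc/yum.repos.d/old-release.repo
      curl -o /etc/yum.repos.d/new-release.repo http://example.com/new-release.repo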
Change-Id: I72271170d3f53a5179b3212ec9bae9a6204e29e6