Age | Commit message | Author | Files | Lines |
|
Some user feedback indicates we need better examples showing how
we can run an extraconfig template, e.g. on post-deploy, so that it
runs on every update, regardless of whether a change to the config has
occurred. We can leverage DeployIdentifier for this, just like with
the puppet deployments.
This can be tested with an example like this:
resource_registry:
  OS::TripleO::NodeExtraConfigPost: tripleo-heat-templates/extraconfig/post_deploy/example_run_on_update.yaml
Change-Id: I45d8f8093ab45c03238ec56651c437128661cb95
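For reference, a minimal sketch of what such a template can look like (resource and input names here are illustrative, not necessarily those of example_run_on_update.yaml): since DeployIdentifier gets a new value on every overcloud deploy, wiring it into the deployment's inputs makes Heat re-run the script on each update.
heat_template_version: 2014-10-16
parameters:
  servers:
    type: json
  DeployIdentifier:
    type: string
    default: ''
resources:
  ExtraConfig:
    type: OS::Heat::SoftwareConfig
    properties:
      group: script
      inputs:
        - name: deploy_identifier
      config: |
        #!/bin/bash
        echo "extraconfig ran for update ${deploy_identifier}"
  ExtraDeployments:
    type: OS::Heat::SoftwareDeployments
    properties:
      servers: {get_param: servers}
      config: {get_resource: ExtraConfig}
      input_values:
        # A new value per stack update forces the deployment to re-run.
        deploy_identifier: {get_param: DeployIdentifier}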
|
The coordination URL (the connection string to Redis)
is incorrectly formatted when a password is set.
More details: https://bugzilla.redhat.com/show_bug.cgi?id=1320036
Change-Id: I93f5e93dfce4ba2629aa57534e8d33d5d1e6d77b
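For context, a tooz Redis coordination URL carries the password in the auth part of the URL. An illustrative hieradata value (host and password are placeholders):
# Illustrative only: 'redis://:<password>@<host>:<port>/'
ceilometer::agent::central::coordination_url: 'redis://:SECRET@192.0.2.10:6379/'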
|
There are quite a few cases where it is useful to disable ring building on the
nodes. For example:
- using different weights, regions, and zones
- replacing a node in an existing Swift cluster
- adding a new node to an existing cluster
- using storage policies and therefore multiple rings
- using different nodes and disks for account, container and object servers
This patch makes it possible to disable ring building. Rings then need to be
maintained manually and copied to all storage and proxy nodes within the cluster.
This patch is similar to I01311ec3ca265b151f8740bf7dc57cdf0cf0df6f, except that
it uses the current templates.
Change-Id: I56978b15823dd6eaf4b6fd3440df2f895e89611a
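A minimal environment-file sketch, assuming the patch exposes this as a boolean parameter (the name SwiftRingBuild is an assumption here):
parameter_defaults:
  # Assumed parameter name: disables automatic ring building, so rings
  # must be built by hand and copied to every storage and proxy node.
  SwiftRingBuild: false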
|
This is set by tripleoclient; remove it from here so it doesn't
override the user-provided value.
Change-Id: I6110b71e484af749838f91dc5c6c4982b0c83074
|
The Management network is optional and disabled by default.
This change preserves backward compatibility and fixes
https://bugzilla.redhat.com/show_bug.cgi?id=1317594
Change-Id: I73cf51154c9ee7c05938e2cadf0c5ac107840bad
|
We don't need an endpoint for the glance-registry service; it is
used by glance-api when needed and is not meant to be user-facing.
Change-Id: Ia6c9dd6164d3b91adbc937d70fa74d5fbbfb28a3
|
Since the password is now autogenerated by tripleoclient,
there is no need to keep the default value here.
Change-Id: If41cb56134966456f8590da04f392faffe5c62a1
Closes-Bug: #1557688
|
In a previous patch [1], we added support for VIR_MIGRATE_TUNNELLED when
VMs use shared storage.
In Nova Mitaka [2] [3], we now have a parameter called
'live_migration_tunnelled' to control whether or not to use tunnelled
migration. It replaces 'block_migration_flag' and 'live_migration_flag',
which are both deprecated.
[1] https://review.openstack.org/#/c/286584/
[2] https://review.openstack.org/#/c/263436/
[3] https://review.openstack.org/#/c/263434/
Change-Id: I8b199b6e72c80b2df7b679e0a20e39f8400d0478
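The end result in nova.conf is a single switch (illustrative excerpt; in Mitaka the option lives in the [libvirt] section):
[libvirt]
# Replaces the deprecated live_migration_flag/block_migration_flag lists.
live_migration_tunnelled = True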
|
This patch makes sure:
* When doing shared storage, Nova is configured with the
VIR_MIGRATE_TUNNELLED flag appended to both block_migration_flag and
live_migration_flag, for security improvements.
* When not doing shared storage, Nova is not configured with the
VIR_MIGRATE_TUNNELLED flag because it's not yet supported by Qemu in
that case. We need to make sure the value is unset, otherwise live
migration will fail when VMs are not on shared storage.
Note: this patch will be backported to stable branches. In a further
iteration, we'll probably use the new Nova parameter
live_migration_tunnelled, which is a simpler way to manage this feature.
Co-Authored-By: Kashyap Chamarthy <kchamart@redhat.com>
Change-Id: I557c1624ee944a32b1831d504f7b189308cd1961
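An illustrative nova.conf excerpt for the shared-storage case (the flag lists shown are the common defaults and may differ per deployment):
[libvirt]
# VIR_MIGRATE_TUNNELLED is appended only when VMs are on shared storage.
live_migration_flag = VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_TUNNELLED
block_migration_flag = VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_NON_SHARED_INC,VIR_MIGRATE_TUNNELLED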
|
To deploy Ceph on IPv6, we need to enable ms_bind_ipv6 in addition
to passing the list of MON IPs in brackets.
Change-Id: I3644b8fc06458e68574afa5573f07442f0a09190
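An illustrative ceph.conf excerpt for such a deployment (addresses are placeholders):
[global]
ms_bind_ipv6 = true
# IPv6 MON addresses must be bracketed.
mon_host = [2001:db8:fd00:1000::10],[2001:db8:fd00:1000::11],[2001:db8:fd00:1000::12]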
|
https://review.openstack.org/268356 can cause issues in IPv6
environments. It generates the following Hiera data:
nova::vncproxy::common::vncproxy_host: [2001:db8:fd00:1000::10]
which fails due to the brackets. Making sure there are no brackets
in nova_vncproxy_host makes it work for both the IP case and when
using DNS names.
Change-Id: Iafe18f042725eb9419d97cd674c4b9a1a895b187
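With the fix, the generated Hiera data carries the bare address (illustrative; brackets are added later, where the URL is actually built):
nova::vncproxy::common::vncproxy_host: 2001:db8:fd00:1000::10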
|
Currently the vnc server on the compute nodes binds on 0.0.0.0,
which only works with IPv4 addresses; it breaks connectivity with
IPv6 addressing.
This fixes https://bugzilla.redhat.com/show_bug.cgi?id=1300678.
Change-Id: Id642d224fb3c62f786453dc684634adca1c2c09d
Co-Authored-By: Giulio Fidente <gfidente@redhat.com>
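An illustrative nova.conf sketch, assuming the listen address is switched from the IPv4 wildcard to the node's own (possibly IPv6) address:
[DEFAULT]
# Before: vncserver_listen = 0.0.0.0 (IPv4-only wildcard)
vncserver_listen = 2001:db8:fd00:1000::10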
|
Change-Id: I9ed917f32b3de95beb234ade4819a8b96affe3e9
|
Yum update on cinder nodes should be quiet, as it is on controllers,
because the results of these updates are sent to Heat. I mistakenly left
this out in the first patch because I used one of the standalone node
upgrade scripts as a copy/paste base for the cinder node upgrade script.
Change-Id: Id13190dc4d242317829c7994088183f52d21461d
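For illustration, the quiet form keeps the transcript small enough to be reported back through Heat signalling (a sketch, not the exact script line):
# -q suppresses per-package output; the result is sent back to Heat.
yum -y -q update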
|
This patch adds support for configuring a Keystone domain for Heat
via the heat-keystone-setup-domain script. It should be reverted
as soon as Keystone v3 is fully functional.
This patch won't be fully functional without either the
python-keystoneclient fix [1] or the workaround [2].
[1] https://bugs.launchpad.net/python-keystoneclient/+bug/1452298
[2] https://review.openstack.org/180563
Change-Id: Ie9cdd518b299c141f0fdbb3441a7761c27321a88
Co-Authored-By: Jiri Stransky <jistr@redhat.com>
Depends-On: Ic541f11978908f9344e5590f3961f0d31c04bb0c
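An illustrative invocation of the script (domain and user names follow the usual Heat conventions; the password is a placeholder):
heat-keystone-setup-domain \
  --stack-user-domain-name heat_stack \
  --stack-domain-admin heat_stack_domain_admin \
  --stack-domain-admin-password SECRET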
|
Change-Id: I26b7a1cd1b7b6520db1df49c60a86c2bb5bce1b0
Depends-On: I12e835964a0370de73e45ef0a8603656ecb02d0c
Depends-On: I8a5844e89bd81a99d5101ab6bce7a8d79e069565
|
For the external loadbalancer work, we added the ability to specify
fixed IPs for controller nodes on all network isolation networks.
In order to allow users full control over the placement and IP
addresses of deployed nodes, we need to be able to do the same thing
for the other node types.
Change-Id: I3ea91768b2ea3a40287f2f3cdb823c23533cf290
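An illustrative environment excerpt, assuming the other roles mirror the controller syntax (the parameter name ComputeIPs and the addresses are placeholders):
parameter_defaults:
  # One list per isolated network, one entry per node of the role.
  ComputeIPs:
    internal_api:
      - 172.16.2.251
    storage:
      - 172.16.1.251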
|
The Neutron Agents component is currently not used. Refactor the heat
templates to accommodate this change.
Change-Id: Ice3c5ce723fa16cfb66c2b0afbe51d7b282c3210
|
Atomic's root partition and logical volume default to 3G.
In order to launch larger VMs, we need to enlarge the root
logical volume and scale down the docker_pool logical volume.
We are allocating 80% of the disk space for VM data and the
remaining 20% for docker images.
Change-Id: If3fff78f476de23c7c51741a49bae227f2cdfe3e
Co-authored-by: Ian Main <imain@redhat.com>
Co-authored-by: Jeff Peeler <jpeeler@redhat.com>
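A rough sketch of the resize with plain LVM commands (the volume group and LV names assume the Atomic Host defaults; the template itself drives this from first-boot configuration):
# Assumed Atomic defaults: VG 'atomicos', root LV 'root' (XFS).
# Give root 80% of the VG's free space; the rest stays for docker_pool.
lvextend -l +80%FREE /dev/atomicos/root
xfs_growfs /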
|