There were two problems with this condition that left the
rhel-registration.yaml template broken:
1) "conditions" should be "condition".
2) The condition should refer to just a condition name defined in the
"conditions:" section of the template.
Change-Id: I14d5c72cf86423808e81f1d8406098d5fd635e66
Closes-Bug: #1709916
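
For reference, the corrected pattern looks roughly like the sketch below
(the resource and parameter names are only an approximation of the ones in
rhel-registration.yaml): a condition is declared once under the top-level
"conditions:" section, and a resource then references it by name through the
singular "condition" key.

    conditions:
      update_requested:
        equals: [{get_param: UpdateOnRHELRegistration}, true]

    resources:
      RHELRegistrationUpdate:
        type: OS::Heat::None
        # singular key, referencing only the condition name defined above
        condition: update_requested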
The containerized version of the mongodb service omits the
metadata_settings definition [1], which confuses certmonger when
internal TLS is enabled and makes certificate generation fail.
Use the right setting from the non-containerized profile.
[1] https://review.openstack.org/#/c/461780/
Change-Id: I50a9a3a822ba5ef5d2657a12c359b51b7a3a42f2
Closes-Bug: #1709553
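
A minimal sketch of the pass-through, assuming MongodbBase stands for
whatever resource includes the non-containerized mongodb profile:

    outputs:
      role_data:
        description: Role data for the containerized mongodb service
        value:
          service_name: {get_attr: [MongodbBase, role_data, service_name]}
          # hand certmonger the same metadata the non-containerized profile defines
          metadata_settings: {get_attr: [MongodbBase, role_data, metadata_settings]}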
The IP address that clients and other nodes use to connect to the
monitors is derived from the monitor_interface parameter unless
monitor_address or monitor_address_block is given (to set mon_host
in ceph.conf). This change sets monitor_address_block to match the
public_network, so that clients attempt to connect to the mons on
the appropriate network.
Change-Id: I7187e739e9f777eab724fbc09e8b2c8ddedc552d
Closes-Bug: #1709485
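
In ceph-ansible terms the intent is simply that monitor_address_block
carries the same CIDR as public_network; the CIDR below is only an example:

    # vars handed to ceph-ansible
    public_network: 172.16.1.0/24
    # mon_host entries in ceph.conf are picked from this block, i.e. the
    # same network clients and other nodes use to reach the mons
    monitor_address_block: 172.16.1.0/24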
This enables either deploying without configuring any services, or
temporarily disabling the deploy steps, as will be required for minor
updates where we want to re-run the rolling update outside of heat.
To deploy directly via ansible-playbook you can do, e.g.:
openstack overcloud config download --config-dir tmpconfig
cd tmpconfig/tripleo-6b02U7-config
ansible-playbook -vvv -b -i /usr/bin/tripleo-ansible-inventory deploy_steps_playbook.yaml
This will run the same ansible steps that we normally run via heat.
Change-Id: I59947b67523dfcc43d454d4ac7d82b06804cf71d
These work the same way as upgrade_tasks *but* they use a step variable
instead of tags, so we can iterate over a count/sequence, which isn't
possible via a wrapper playbook with tags (we may want to align upgrade
tasks with the same approach if this works out well).
Note the tasks can be run via ansible-playbook on the undercloud, like:
openstack overcloud config download --config-dir tmpconfig
cd tmpconfig/tripleo-HCrDA6-config
ansible-playbook -b -i /usr/bin/tripleo-ansible-inventory update_steps_playbook.yaml --limit controller
The above does a rolling update for the Controller role, because we specify
serial: 1 in the playbook (note the inconsistent capitalization of the group
name; we probably need to fix the group naming in tripleo-ansible-inventory).
You can also trigger an update explicitly on one node like this, which is useful for debugging:
ansible-playbook -vvv -b -i /usr/bin/tripleo-ansible-inventory update_steps_playbook.yaml --limit overcloud-controller-0
Change-Id: I20bb3e26ab9d9cadf1a31fd304de8a014a901aa9
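
For illustration, an update_tasks entry in a service template gates on the
step variable rather than on a tag, along these lines (the service and step
number here are made up):

    update_tasks:
      - name: Stop example-service before packages are updated
        when: step|int == 1
        service:
          name: example-service
          state: stopped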
This exposes the deploy workflow for all roles from deploy-steps
via overcloud.j2.yaml, which means we can write it via the new
openstack overcloud config download command and/or run the workflow
outside of heat via mistral.
With https://review.openstack.org/#/c/485732/ applied to
tripleoclient it becomes possible to do:
openstack overcloud config download --config-dir tmpconfig
cd tmpconfig/tripleo-EvEZk0-config
ansible-playbook -b -i /usr/bin/tripleo-ansible-inventory deploy_steps_playbook.yaml
This runs the deploy steps via ansible-playbook for all overcloud nodes,
exactly as they are normally run via heat (--limit can be used to restrict
to specific nodes/roles).
Change-Id: I96ec09bc788836584c4b39dcce5bf9b80e914c71
This isn't set unless the playbook is run via heat, so default it to false
to enable easier use via ansible-playbook combined with
tripleo-ansible-inventory.
Change-Id: I9705e4533831a019dd0051e5522d4b7958682506
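
The usual Ansible idiom for this is to apply the default filter wherever the
variable is consumed, so the playbook also works when heat never passes the
flag; the variable name below is purely illustrative:

    - name: Only run when the caller explicitly set the flag
      debug:
        msg: "flag was set"
      when: (some_heat_provided_flag | default(false)) | bool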
So that we can more easily iterate over an include in an output
Change-Id: Idd5bb47589e5c37123caafcded1afbff8881aa33
After merging commit 488796, single quotation marks were missed.
This causes the upgrade to fail, as the --sacks-number flag is
treated as a flag to the su command.
Also mount the Ceph config data into the container, which seems to be
needed by the gnocchi-upgrade command when configured to use Ceph.
Also move the gnocchi db sync to step 4, so Ceph is ready, and add a
retry loop to the ceilometer-upgrade command so it doesn't fail while
apache is restarted.
Closes-Bug: #1709322
Change-Id: I62f3a5fa2d43a2cd579f72286661d503e9f08b90
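
To illustrate the quoting problem (the container entry and sack count below
are placeholders, not the exact template contents): when the command handed
to "su -c" is left unquoted, su tries to parse --sacks-number itself; quoting
the whole command lets the flag reach gnocchi-upgrade.

    docker_config:
      step_4:
        gnocchi_db_sync:
          # broken: su parses --sacks-number as one of its own options
          #   command: su gnocchi -s /bin/bash -c /usr/bin/gnocchi-upgrade --sacks-number 128
          # fixed: the inner single quotes keep the whole command together
          command: "su gnocchi -s /bin/bash -c '/usr/bin/gnocchi-upgrade --sacks-number 128'"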
If we consolidate these, we can focus on one implementation (the new
ansible-based one used for docker-steps).
Change-Id: Iec0ad2278d62040bf03613fc9556b199c6a80546
Depends-On: Ifa2afa915e0fee368fb2506c02de75bf5efe82d5
Add some special-casing for backwards compatibility, such that the
CephStorage role can be rendered via j2 for support of composable networks.
Change-Id: Iee92bb6ee94963717d3a8ef400e7970f62576a0d
Partially-Implements: blueprint composable-networks
Add some special-casing for backwards compatibility, such that the
BlockStorage role can be rendered via j2 for support of composable networks.
Change-Id: Ia5fb5ff6dbe218710e95a69583ac289cf7b4af9e
Partially-Implements: blueprint composable-networks
Add some special-casing for backwards compatibility, such that the
ObjectStorage role can be rendered via j2 for support of composable networks.
Change-Id: I52abbefe2f5035059ccbed925990faab020c6c89
Partially-Implements: blueprint composable-networks
Add some special-casing for backwards compatibility, such that the
Compute role can be rendered via j2 for support of composable networks.
Change-Id: Ieee446583f77bb9423609d444c576788cf930121
Partially-Implements: blueprint composable-networks
Add deprecated role-specific parameters to the role definition, in
order to special-case some parameters for backwards compatibility,
such that the Controller role can be rendered via j2 for support
of composable networks.
Co-Authored-By: Dan Sneddon <dsneddon@redhat.com>
Change-Id: I5983f03ae1b7f0b6add793914540b8ca405f9b2b
Partially-Implements: blueprint composable-networks
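
In the rendered roles data this special-casing shows up as deprecated_* keys
on the role entry, roughly as sketched below (treat the exact key names and
values as illustrative):

    - name: Controller
      CountDefault: 1
      uses_deprecated_params: True
      deprecated_param_extraconfig: 'controllerExtraConfig'
      deprecated_param_flavor: 'OvercloudControlFlavor'
      deprecated_param_image: 'controllerImage'
      ServicesDefault:
        - OS::TripleO::Services::Keystone
        - OS::TripleO::Services::HAproxy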
It wasn't being configured, thus making mongodb fail.
Change-Id: If0d7513aacfa74493a9747440fb97f915a77db84
Closes-Bug: #1710162
With these two services running over httpd in the containers, we can now
enable TLS for them.
bp tls-via-certmonger-containers
Change-Id: Ib8fc37a391e3b32feef0ac6492492c0088866d21
The non-containerized version will run over httpd [1], and the
containerized TLS work needs the container version to do the same.
[1] Iac35b7ddcd8a800901548c75ca8d5083ad17e4d3
bp tls-via-certmonger-containers
Depends-On: I1c5f13039414f17312f91a5e0fd02019aa08e00e
Change-Id: I2c39a2957fd95dd261b5b8c4df5e66e00a68d2f7
In non-containerized deployments, Galera can be configured to use TLS
for gcomm group communication when enable_internal_tls is set to true.
Fix the metadata service definition and update the Kolla configuration
to make gcomm use TLS in containers, if configured.
bp tls-via-certmonger-containers
Change-Id: Ibead27be81910f946d64b8e5421bcc41210d7430
Co-Authored-By: Juan Antonio Osorio Robles <jaosorior@redhat.com>
Closes-Bug: #1708135
Depends-On: If845baa7b0a437c28148c817b7f94d540ca15814
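
At the Galera level, TLS for gcomm ultimately comes down to wsrep provider
socket.ssl options pointing at the certmonger-provided certificate and key;
a rough sketch of the resulting settings (the top-level key and the file
paths are illustrative):

    galera_tls_settings:
      wsrep_provider_options: >-
        socket.ssl=yes;socket.ssl_key=/etc/pki/tls/private/mysql.key;socket.ssl_cert=/etc/pki/tls/certs/mysql.crt;socket.ssl_ca=/etc/ipa/ca.crt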
This de-couples public TLS from the controllers, so that it now runs
wherever HAProxy is deployed.
Partially-Implements: blueprint composable-networks
Change-Id: I9e84a25a363899acf103015527787bdd8248949f