Horizon provides a password validation check, which OpenStack cloud
operators can use to enforce password complexity for users within
Horizon.
A dictionary containing a regular expression can be used for
password validation, along with help text that is displayed if the
password does not pass validation.
HORIZON_CONFIG["password_validator"] = {
    "regex": '.*',
    "help_text": _("Your password does not meet the requirements."),
}
This change allows injection of the regex into Horizon's
local_settings file from a TripleO heat template.
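For illustration, a minimal sketch of such an injection via an
environment file; the parameter names HorizonPasswordValidator and
HorizonPasswordValidatorHelp are assumptions, not confirmed by this
change:

parameter_defaults:
  # Hypothetical parameter names; check the horizon service template
  HorizonPasswordValidator: '^(?=.*\d)(?=.*[A-Za-z]).{8,}$'
  HorizonPasswordValidatorHelp: 'Password must be at least 8 characters and contain letters and numbers.'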
Change-Id: Ib6517c8f96148bea002b0e3442a26367b236928f
Depends-On: If82a80ed6a8e6e65aecc2a25ee6d60640ae03c9a
Closes-Bug: #1640800
This nested for loop is wrong, as it generates all steps for all
roles twice. It only works because YAML parsing ignores the duplicate
resources, but it's a big waste of space in Swift (this fix reduces
the rendered file size by over 2000 lines with the default roles!)
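For illustration, a sketch of the shape of the bug and the fix in a
j2.yaml template; the loop bounds and resource names here are
hypothetical, not the actual template:

# Before (illustrative): the redundant outer loop renders every
# step for every role once per role, duplicating each resource
{%- for role in roles %}
{%- for step in range(0, upgrade_steps_max) %}
{%- for role in roles %}
  {{role.name}}UpgradeConfig_Step{{step}}:
    type: OS::TripleO::UpgradeConfig
{%- endfor %}
{%- endfor %}
{%- endfor %}

# After: a single pass over steps and roles
{%- for step in range(0, upgrade_steps_max) %}
{%- for role in roles %}
  {{role.name}}UpgradeConfig_Step{{step}}:
    type: OS::TripleO::UpgradeConfig
{%- endfor %}
{%- endfor %}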
Change-Id: Ifaf860020839390147c92848d52b1a59e355dc50
Closes-Bug: #1659139
These are only used for TLS-everywhere, and fill in the Kerberos
principals that will need to be created for the certs used by the
overcloud. With this, the metadata hook will format these principals
correctly and pass them on to the nova metadata service, where they
can be used if there's a plugin enabled.
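A minimal sketch of the metadata_settings a service template might
emit; the keys and values are assumptions based on this description,
not the confirmed format:

metadata_settings:
  # Each entry describes a kerberos principal to create for a cert
  - service: haproxy      # hypothetical service name
    network: internal_api
    type: vip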
bp tls-via-certmonger
bp novajoin
Change-Id: I873094bb69200052febda629fda698a7a782c031
We've broken the upgrade job, because anyone upgrading with the
glance registry deployed (and defined in their *Services parameters)
will try to deploy with the old glance-registry.yaml defined in Heat.
Instead, we define a template which stops and disables the service on
upgrade.
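A minimal sketch of such a "disabled" service template, following the
upgrade_tasks convention these templates use elsewhere (task details
are illustrative):

heat_template_version: ocata

outputs:
  role_data:
    description: Role data for the disabled glance registry
    value:
      service_name: glance_registry_disabled
      upgrade_tasks:
        - name: Stop and disable the glance registry service on upgrade
          tags: step2
          service: name=openstack-glance-registry state=stopped enabled=no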
Closes-Bug: #1659079
Change-Id: I03561954d794afae2be06811375d16611fa45973
Clean up some TODOs.
Change-Id: I84e369a9797359fea124e00e2007ae745a96847a
If TLS in the internal network is enabled, we run glance-api behind a
TLS proxy (which is actually httpd's mod_proxy). This passes the
necessary hieradata.
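A minimal sketch of the kind of hieradata involved; these key names
are assumptions modeled on the tripleo glance-api profile, not
confirmed by this change:

# Hypothetical hiera keys; check puppet-tripleo's glance api profile
tripleo::profile::base::glance::api::enable_internal_tls: true
tripleo::profile::base::glance::api::tls_proxy_bind_ip: 172.16.2.10
tripleo::profile::base::glance::api::tls_proxy_fqdn: overcloud.internalapi.example.com
tripleo::profile::base::glance::api::tls_proxy_port: 9292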
bp tls-via-certmonger
Change-Id: I693213a1f35021b540202240e512d121cc1cd0eb
Depends-On: Id35a846d43ecae8903a0d58306d9803d5ea00bee
This change adds the ec2api service using the
tripleo::profile::base::nova::ec2api profile.
The deprecated nova-cert service is not supported, and therefore the
RegisterImage action is not supported either.
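To use the service, it would be mapped in an environment file; a
minimal sketch, with a hypothetical template path:

resource_registry:
  # Hypothetical path; check the services directory for the real template
  OS::TripleO::Services::Ec2Api: ../puppet/services/nova-ec2api.yaml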
Change-Id: I2510fd4ed935d8423216fff9ce3adf2d69c9c804
Depends-On: If4b091e1ca02f43aa9c65392baf8ceea007b7cfb
This adds a pacemaker_remote puppet service so that an operator
can automatically deploy pacemaker-remote on nodes of their choice.
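Enabling it might look like this in an environment file; the template
path and the PacemakerRemoteAuthkey parameter are assumptions:

resource_registry:
  # Hypothetical path; check the services directory for the real template
  OS::TripleO::Services::PacemakerRemote: ../puppet/services/pacemaker-remote.yaml

parameter_defaults:
  # Hypothetical parameter; the authkey is shared by cluster and remote nodes
  PacemakerRemoteAuthkey: 'a-long-random-shared-secret'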
Change-Id: I9678606b3de9b9f4c03014b33c1dd27fcba67513
Depends-On: I581552dfa64160e2f82f6a9b8f2ae521c3d6da8d
Depends-On: I92953afcc7d536d387381f08164cae8b52f41605
This patch adds support for using Keystone V3 authentication
with Ceph/RGW, removing the usage of the admin_token.
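A minimal sketch of the kind of hieradata this implies; the key names
are assumptions modeled on puppet-ceph's RGW keystone parameters, not
confirmed here:

# Hypothetical hiera keys; check puppet-ceph for the real parameter names
ceph::profile::params::rgw_keystone_version: v3
ceph::profile::params::rgw_keystone_admin_domain: default
ceph::profile::params::rgw_keystone_admin_project: service
ceph::profile::params::rgw_keystone_admin_user: swift
ceph::profile::params::rgw_keystone_admin_password: secret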
Change-Id: I3265b787ed1f059f86fdc80a91d0f7ed498c1e16
Depends-On: I42861afcac221478dcb68be13b6dbc2533a7f158
As part of composable upgrades, the current plan is to disable
the composable upgrade steps running on a particular role
(e.g. all compute nodes) in favor of a later operator-driven
upgrade process, as has previously been the case.
This adds the disable_upgrade_deployment flag to roles_data as
a first step. Thanks to shardy for his help with this.
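A minimal sketch of the flag in roles_data.yaml (the role definition
is abbreviated):

- name: Compute
  disable_upgrade_deployment: True
  ServicesDefault:
    - OS::TripleO::Services::NovaCompute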
Change-Id: Ice845742a043b34917e61f662885786c73e955fd
The Manila default_share_type config option is unset by default.
This option is used by Manila when a user creates a new share
and doesn't specify a share type explicitly.
Although it's not a hard requirement to have this option set in
order to run the Manila service, it's convenient to set a default
share type, and it also seems to be the general community opinion
that this option should be set.
Note that setting this option does not create the share type itself
(this still has to be done manually, which is probably for the best,
because admins may want to customize the default type settings
according to their needs).
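A minimal sketch of setting it from an environment file; the
parameter name is an assumption:

parameter_defaults:
  # Hypothetical parameter name; check the manila service template
  ManilaDefaultShareType: default

The type itself would then still be created by hand, e.g. with
'manila type-create default False'.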
Change-Id: Iab60e42c7f347bbf074d60eb91dd4a1f6a94d3a6
Closes-Bug: #1654204
Nova placement hiera parameters need to be common across all nova
services, because they are used in more than one place.
This patch moves them to nova-base, so nova-compute and other services
that need them will be able to run correctly.
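A minimal sketch of the sort of placement hieradata meant here; the
key names are assumptions modeled on puppet-nova's nova::placement
class:

# Hypothetical hiera keys; check puppet-nova for the real ones
nova::placement::auth_url: http://192.0.2.1:35357/v3
nova::placement::project_name: service
nova::placement::password: secret
nova::placement::os_region_name: regionOne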
Change-Id: Ibccc55fc9d045487fb7e47bd1c2ebe9cf788765e
Depends-On: Iada8e9fcccec7dbfe7ac0ec0f9ec6eac1581290e
Glance params are also used by cinder-volume. This patch moves
cinder::glance into the common cinder roles, so we can split cinder
and cinder-volume.
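For illustration, the kind of shared hieradata in question (a
hypothetical key from puppet-cinder's cinder::glance class):

# Hypothetical hiera key; check puppet-cinder for the real parameter
cinder::glance::glance_api_servers: http://192.0.2.10:9292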
Change-Id: Id81c029318016068481dd614ed62cc4bfaf0f3e8
Adding to THT the capability of configuring until_complete
in the archive job.
This will be a boolean flag to clean all the deleted instances;
it will run in batches of max_rows until empty.
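A minimal sketch of how this might be set; the parameter name is an
assumption, and the resulting cron command is illustrative:

parameter_defaults:
  # Hypothetical parameter name; check nova-base for the real one
  NovaCronArchiveDeleteRowsUntilComplete: true

# The cron job would then run something like:
#   nova-manage db archive_deleted_rows --max_rows 100 --until-complete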
Change-Id: I087bc66729fef4f33122a7633c154d5a66613d6f
Depends-On: I927b75adb0fc3251f3734d41f4393590294c1c9b
Closes-Bug: 1650680
Introduce THT for the fossw ML2 plugin in networking-fujitsu.
networking-fujitsu is a neutron ML2 plugin which enables several
FUJITSU switch products in an OpenStack environment. These templates
deploy the overcloud with FOS switches.
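Enabling the plugin might look like this; the resource name, template
path, and mechanism driver name are assumptions:

resource_registry:
  # Hypothetical mapping; check the environments directory for the real one
  OS::TripleO::Services::NeutronML2FujitsuFossw: ../puppet/services/neutron-plugin-ml2-fujitsu-fossw.yaml

parameter_defaults:
  NeutronMechanismDrivers: ['openvswitch', 'fujitsu_fossw']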
Change-Id: I977dbecbf9f6f9725f7fb5ca4745b537a73975ff
Implements: blueprint integration-fossw-networking-fujitsu
Depends-On: I044c5812bbc5cd3de4bc33556cffbe5bad8e64cf
Depends-On: I79df6b6a27d95f0c0e2c87207ab80235a4efccfc
Change-Id: Icf8e215935bdf299cb792abb29bb5d58c5c312c5
Partially-Implements: blueprint overcloud-upgrades-per-service
Co-Authored-By: Sofer Athlan-Guyot <sathlang@redhat.com>
Partially-Implements: blueprint overcloud-upgrades-per-service
Closes-Bug: #1655651
Change-Id: I83134f51d152f3b97f9a570bbd9a67c753982810
puppet-swift has hard-coded sections which expect these to be
*_quotas; without matching the pipeline to the sections, the swift
proxy fails to start.
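A minimal sketch of the matching pipeline hieradata (the surrounding
middleware entries are illustrative):

swift::proxy::pipeline:
  - catch_errors
  - healthcheck
  - account_quotas    # must match puppet-swift's [filter:account_quotas]
  - container_quotas  # must match puppet-swift's [filter:container_quotas]
  - proxy-server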
Change-Id: I3ee94a9bc4b046051e5d814e82a69f759bea1296
Closes-Bug: #1657167
Currently we start all OpenStack services in step6, but puppet
already does this, and sometimes services require configuration
to account for the new version after the yum update before they
will start.
So instead of reimplementing that configuration management in
ansible, we just defer starting the services until puppet has run,
which happens right after the ansible upgrade steps complete.
Note there are some DB sync operations etc. that we may also be able
to remove and let puppet handle, but I've left those in for now,
as we know there are some actions during that phase, e.g. nova cells
setup, which aren't yet handled by puppet.
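For illustration, the shape of the change in an upgrade step (task
and service names are hypothetical):

# Before (illustrative): step6 started each service explicitly
- name: Start openstack-nova-scheduler
  tags: step6
  service: name=openstack-nova-scheduler state=started

# After: no explicit start in ansible; puppet brings the services
# up once the upgrade steps have completed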
Change-Id: Idc8e253167a4bc74b086830cfabf28d4aab97d28
Change-Id: I447ce74cca93fcae87ca608ecc8eeb2721fecefb
Deploy NTP by using the puppet-tripleo profile, so we can re-use the
bits on the undercloud.
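A minimal sketch of the service template's step_config after this
change; the profile class name is an assumption:

outputs:
  role_data:
    value:
      service_name: ntp
      # Hypothetical class name; check puppet-tripleo for the real profile
      step_config: |
        include ::tripleo::profile::base::time::ntp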
Depends-On: If3cf7d9690001b051465ea25cf8a8c3bc6f7c33a
Change-Id: I8c13fbc9267ff28065f0de97424a4eac78c370fb