We now fetch the name argument from the correctly named SwiftStorage
object.
Change-Id: I885505eadfc778ab57793c97af4d1c6739ec9614
Closes-Bug: #1647716
|
The script tries to download all artifact URLs with a single
request, instead of downloading each URL on its own, when
multiple DeployArtifactURLs were given.
Change-Id: I6a8be699aff7023a67702bb1d3ddc2273984cd08
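For illustration, a minimal sketch of an environment file passing
multiple artifact URLs (URLs hypothetical); after the fix, each one
is fetched with its own request:

    parameter_defaults:
      # Each URL is downloaded separately instead of being
      # concatenated into a single request.
      DeployArtifactURLs:
        - http://example.com/artifacts/one.tar.gz
        - http://example.com/artifacts/two.tar.gz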
|
This seems to have broken the updates job, causing it to fail
with the following error:
Can't set long node name!\nPlease check your configuration\n
Related-Bug: 1646873
This reverts commit 3e9fcfd09320ace07bc1bd4cb57feb98cd057332.
Change-Id: I72ba891cd9cd8c4f1bc204144f46aaabbdfd3647
|
There were several instances where the short names/FQDNs were being
retrieved in the same way in the role templates, so this introduces a
mapping to get these values in order to reduce clutter.
Change-Id: Ie7df360bb69d56655f3e0fcbbf4d297db39b7a26
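As a rough sketch of the idea (map and attribute names hypothetical,
not the exact ones from the patch), the template builds the map once
and the services index into it:

    # Build the per-network hostname map once in the role template...
    fqdn_map:
      internal_api: {get_attr: [NetHostMap, value, internal_api, fqdn]}
      storage: {get_attr: [NetHostMap, value, storage, fqdn]}
    # ...so each service looks up its value instead of repeating the
    # full get_attr expression.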
|
'user' is required or puppet-ceph will complain that the Keystone_user
has no title:
Evaluation Error: Missing title. The title expression resulted in undef
at /etc/puppet/modules/ceph/manifests/rgw/keystone/auth.pp
The value is set to Swift, as we use the same credentials as the
Swift service.
Closes-Bug: #1642524
Change-Id: Ib4a7c07086b0b3354c8e589612f330ecdffdc637
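In hiera terms the fix amounts to something like the following
sketch (key names approximate, not verified against puppet-ceph):

    config_settings:
      # Titles the Keystone_user resource by setting 'user',
      # reusing the Swift service credentials.
      ceph::rgw::keystone::auth::user: 'swift'
      ceph::rgw::keystone::auth::password: {get_param: SwiftPassword}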
|
This shows how we could wire in the upgrade steps using Ansible,
as was previously proposed e.g. in https://review.openstack.org/#/c/321416/,
but more closely integrated with the new composable services
architecture.
It's also very similar to the approach taken by SpinalStack, where
per-service ansible snippets were combined and then run in a series of
steps using Ansible tags.
This patch just enables upgrade of keystone - we'll add support for
other services in subsequent patches.
Partially-Implements: blueprint overcloud-upgrades-per-service
Change-Id: I39f5426cb9da0b40bec4a7a3a4a353f69319bdf9
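A minimal sketch of the shape this takes (task content hypothetical):
each service template contributes tagged snippets, which are combined
and run one step at a time:

    upgrade_tasks:
      # Hypothetical keystone snippet; tasks from all services are
      # merged and executed step by step via their tags.
      - name: Stop keystone service
        tags: step1
        service: name=openstack-keystone state=stopped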
|
Change-Id: Iee1afeced0b210a46b273aafc0d40e99d6ee6d4e
|
This changes how we get the network-based FQDNs for the specific
services: from the custom fact to the new hiera entries.
Change-Id: Iae668a5d89fb7bee091db4a761aa6c91d369b276
|
Currently, one can get the network-based FQDNs via a custom puppet
fact. This is unreliable, as it's based on the ::hostname fact, which
we assume is set correctly by nova. However, this is not necessarily
the case (for instance, if you use pre-deployed servers such as we do
with the multinode jobs). In those cases, the ::hostname fact will
return something other than what we specified in nova, which
effectively breaks the configuration if we rely too much on the
network-based FQDN facts.
By using hiera instead, we avoid this issue, as we set those values to
be exactly what we expect (we set them in the OS::TripleO::Server
resource).
Change-Id: I6ce31237098f57bdc0adfd3c42feef0073c224fb
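A sketch of what consuming the hiera entries looks like instead of
the fact (key and class names approximate):

    # The server resource seeds per-network keys such as
    # fqdn_internal_api, which service hieradata can interpolate:
    keystone::wsgi::apache::servername: "%{hiera('fqdn_internal_api')}"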
|
This patch optimizes how we deploy hiera by using a new
heat hook specifically designed to help compose hiera
within heat templates. As part of this change:
- we update all the 'hiera' software configurations to set the group to hiera
instead of os-apply-config.
- The new format uses JSON instead of YAML. The hook actually writes
out the hiera JSON directly so no conversion takes place. Arrays,
Strings, Booleans all stay in their native formats. As such we can avoid
having to do many of the awkward string and list conversions in t-h-t to
support the previous YAML formatting.
- The new hook prefers JSON over YAML, so upgrading users will have the
new files preferred. (We will post a cleanup routine for the old files
soon, but this isn't a new behavior; JSON is now simply preferred.)
- A lot of services required edits to account for default settings that
worked in YAML but no longer work correctly in the native JSON
format. In almost all these cases I think the resulting code looks
cleaner and is more explicit with regards to what is getting
configured in hiera on the actual nodes.
Depends-On: I6a383b1ad4ec29458569763bd3f56fd3f2bd726b
Closes-bug: #1596373
Change-Id: Ibe7e2044e200e2c947223286fdf4fd5bcf98c2e1
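A simplified sketch of a software config using the new group
(resource and datafile names hypothetical, structure approximate):

    MyServiceConfig:
      type: OS::Heat::StructuredConfig
      properties:
        group: hiera          # previously: os-apply-config
        config:
          datafiles:
            my_service:
              # Written out as JSON directly by the hook; lists and
              # booleans keep their native types, no conversion.
              my_service::debug: {get_param: Debug}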
|
This patch adds a local version of our template processing
routine so that developers can more quickly view the templates
that are actually getting generated. I've noticed multiple developers
now do a full deployment with 'overcloud deploy' only to download
the swift container with the generated templates. This simple task
avoids that step by allowing developers to generate it locally.
It also aims to preserve the ability to use t-h-t templates directly
with Heat (instead of going through Mistral) should users wish to do that.
The new undercloud heat installer requires the ability to generate
templates without requiring Mistral and Swift to do so.
Ideally the Mistral API workflow would use this same code
so perhaps in the future we might modify that routine to:
- download the swift tarball containing the templates
- run this local routine that lives in t-h-t
- re-upload the tarball of templates to the swift container
Change-Id: Ie664c9c5f455b7320a58a26f35bc403355408d9b
|
Currently we only support one dispatcher at a time, but the ceilometer
config supports dispatching data to multiple destinations at the same
time. Update the param to support this.
Change-Id: Ie7d854928513239a5903862623df12af1d02b642
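The param change amounts to something like the following sketch
(parameter name from the ceilometer service template; the default
shown is illustrative):

    CeilometerMeterDispatcher:
      # Was type: string; a list allows dispatching to several
      # destinations (e.g. gnocchi and database) at once.
      type: comma_delimited_list
      default: ['gnocchi']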
|
The parameter type is invalid, making it impossible to enable the
monitoring environment.
Change-Id: I835d1e82480edb0b6d082a7496d7ceebb1781728
Closes-Bug: #1641080
Closes-Bug: rhbz#1392473
|
This patch drops use of the vip-hosts.yaml service which can
cause issues during deployment because puppet 'hosts' resources
overwrite the data in /etc/hosts. The only reason things seem to work
at all at the moment is because our hosts element in t-i-e runs
on each os-refresh-config iteration and re-adds the dropped hosts
entries.
To work around the issue we add a conditional which selectively
adds the extra hosts entries only if AddVipsToEtcHosts is set
to true.
Closes-bug: 1645123
Change-Id: Ic6aaeb249a127df83894f32a704219683a6382b2
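Opting out then looks like this in an environment file:

    parameter_defaults:
      # Skip the extra VIP hosts entries so puppet 'hosts' resources
      # keep full control of /etc/hosts.
      AddVipsToEtcHosts: false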
|
We removed Step 6 in Iae33149e4a03cd64c5831e689be8189ad0cf034b
but forgot to update the README. Similarly we made all roles
use the same steps in Ia2ea559e8eeb64763908f75705e3728ee90b5744
so the comment is no longer true.
Change-Id: If5482ebd22a2547ed2165199992840a0dcacb04c
|
This adds the necessary hieradata for enabling TLS for MySQL (which
happens to run on the internal network). It also adds a template so
this can be done via certmonger. As with other services, this will
fill the necessary specs for the certificate to be requested in a
hash that will be consumed in puppet-tripleo.
Note that this only enables the use of TLS; we still need to
configure the services (or limit the users the services use) to only
connect via SSL. That will be done in another patch, as there are
some things that need to land before we can do this (changes in
puppetlabs-mysql and puppet-openstacklib).
Change-Id: I71e1d4e54f2be845f131bad7b8db83498e21c118
Depends-On: I7275e5afb3a6550cf2abbb9a8007dedb62ada4b4
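Roughly, the hieradata carries a spec hash for certmonger along
these lines (key names approximate, values illustrative):

    tripleo::profile::base::database::mysql::certificate_specs:
      service_certificate: '/etc/pki/tls/certs/mysql.crt'
      service_key: '/etc/pki/tls/private/mysql.key'
      hostname: "%{hiera('cloud_name_internal_api')}"
      principal: "mysql/%{hiera('cloud_name_internal_api')}"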
|
Not having the default easily accessible is causing issues for the UI,
as it cannot guess at it and can accidentally overwrite the value with
an empty string (the expected default when unset). The default is
already helpfully spelled out in the doc string for each file; this
updates the parameter to match it.
Change-Id: Ic284f9904e8f1d01cc717d59a0759f679d94106d
Closes-Bug: #1643670
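I.e., where the doc string already stated the default, the parameter
now declares it explicitly (hypothetical example):

    ExampleParam:
      type: string
      description: Some setting. Defaults to 'foo' when unset.
      default: 'foo'  # now stated explicitly rather than only in the docs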
|
This change modifies the template interface to support containers and
converts the compute services to composable roles.
Co-Authored-By: Dan Prince <dprince@redhat.com>
Co-Authored-By: Flavio Percoco <flavio@redhat.com>
Co-Authored-By: Martin André <m.andre@redhat.com>
Co-Authored-By: Steve Baker <sbaker@redhat.com>
Change-Id: I82fa58e19de94ec78ca242154bc6ecc592112d1b
|
If barbican is enabled, it will configure cinder and nova-compute with
the necessary parameters so that encrypted volumes can be created if
requested.
Change-Id: Id13811cf8e090706c590ffff46c237ff8131efd9
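A sketch of wiring the service in via the resource registry (path
approximate):

    resource_registry:
      # Enabling barbican also feeds cinder and nova-compute the key
      # manager settings needed for encrypted volumes.
      OS::TripleO::Services::BarbicanApi: ../puppet/services/barbican-api.yaml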
|
Ceilometer notifications can be sent in a background thread, unblocking
the Swift proxy in case RabbitMQ is not processing notifications
quickly enough or is unavailable.
There is a default queue size of 1000 notifications. If more messages
are added to a full queue, they will be discarded and a warning log
entry will be emitted.
Change-Id: I98022dcbf661a5bb7425f49ba8525225d61212dc
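In hiera terms, something like (key name approximate):

    # Send notifications from a background thread with a bounded
    # queue (1000 entries); overflow is dropped with a warning.
    swift::proxy::ceilometer::nonblocking_notify: true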
|
Currently this is disabled via a conditional in the keepalived
profile in puppet-tripleo, but this will be incompatible with
the planned composable upgrades implementation. Instead we should
disable the service template by mapping to OS::Heat::None, and
ensure the haproxy manifest uses the t-h-t generated hiera value
keepalived_enabled instead of hard-coding a hiera override in the
haproxy template.
Change-Id: I85a8b1cca7268506de22adfb3a8ce7faa4f157ef
Partial-Bug: #1642936
Depends-On: I90faf51881bd05920067c1e1d82baf5d7586af23
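Disabling the service via the registry then looks like:

    resource_registry:
      # Map keepalived to the no-op resource; haproxy reads the
      # generated keepalived_enabled hiera value instead.
      OS::TripleO::Services::Keepalived: OS::Heat::None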
|
Security scanners complain that directory listings are enabled in horizon.
Change-Id: I1d7cfcb3521e8235a99bc452f1b7b92c20ce72ac
Closes-Bug: #1637576