This patch adds a new type:
OS::TripleO::Network::Ports::ControlPlaneVipPort
It defaults to a normal OS::Neutron::Port resource, but can be
mocked out for implementations where Neutron doesn't exist, such as
when installing the undercloud.
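As a rough illustration (the template paths here are assumptions), an
environment's resource_registry could map the new type either to a real
port or to a noop implementation:

  resource_registry:
    # Default: create a real Neutron port for the control plane VIP
    OS::TripleO::Network::Ports::ControlPlaneVipPort: network/ports/ctlplane_vip.yaml
    # Undercloud / no-Neutron case: swap in a noop port template instead
    # OS::TripleO::Network::Ports::ControlPlaneVipPort: network/ports/noop.yaml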
Change-Id: Iebf2428432a98a9d789b206ce973599adbc0af8f
For usability, and to reduce the number of environment files that need
to be passed when enabling TLS on the internal network, enable TLS for
HAProxy's internal front-ends here rather than in a separate
environment file.
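A minimal sketch of what this means for operators, assuming the
existing EnableInternalTLS parameter is the switch being defaulted
here:

  parameter_defaults:
    # Turns on TLS for HAProxy's internal-network front-ends
    EnableInternalTLS: true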
bp tls-via-certmonger
Change-Id: Icef0c70b4b166ce2108315d5cf0763d4e8585ae1
It's no longer available in Neutron (removed in Mitaka). See:
I2a879213c3b095a007a4531f430a33cea9fdf1bd
Change-Id: I044c648eb8c4933667b8ea2c9159a30e5ebb7df3
We now fetch the name argument from the correctly named SwiftStorage
object.
Change-Id: I885505eadfc778ab57793c97af4d1c6739ec9614
Closes-Bug: #1647716
This change adds a NIC config for a compute node running DVR to the
multiple-nics sample NIC config templates. For DVR to work on the
compute nodes, they must share an external bridge with the
controllers. All of the other sample NIC configs already include an
external bridge (defaulting to 'br-ex'), but the multiple-nics compute
role does not, so the new compute-dvr.yaml NIC template now
demonstrates DVR with multiple NICs.
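A minimal sketch of the kind of bridge stanza involved (the NIC
numbering and exact wiring are illustrative, following the other
multiple-nics samples):

  network_config:
    - type: ovs_bridge
      name: {get_input: bridge_name}   # defaults to 'br-ex'
      use_dhcp: false
      members:
        - type: interface
          name: nic6
          primary: true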
Change-Id: I80fe2e5842a67984e1d4d8aa295c7607c4f340ad
If multiple DeployArtifactURLs are given, the script tries to download
all of the artifact URLs with a single request instead of downloading
each URL on its own.
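For context, multiple artifact URLs are supplied as a list via the
DeployArtifactURLs parameter; the values below are placeholders:

  parameter_defaults:
    DeployArtifactURLs:
      - "http://example.com/puppet-modules.tar.gz"
      - "http://example.com/config-overrides.rpm"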
Change-Id: I6a8be699aff7023a67702bb1d3ddc2273984cd08
This seems to have broken the updates job, causing it to fail
with the following error:
Can't set long node name!\nPlease check your configuration\n
Related-Bug: 1646873
This reverts commit 3e9fcfd09320ace07bc1bd4cb57feb98cd057332.
Change-Id: I72ba891cd9cd8c4f1bc204144f46aaabbdfd3647
Change-Id: Iecafa7878fec20c707e94bdaca55f1489f3e338a
There were several places in the roles' templates where the short
names/FQDNs were being retrieved in the same way, so this introduces a
mapping to look up these values and reduce clutter.
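A hypothetical shape for such a mapping (the resource name, parameter
names and network key are illustrative), using an OS::Heat::Value
resource that templates can read instead of recomputing the names each
time:

  NetHostMap:
    type: OS::Heat::Value
    properties:
      value:
        internal_api:
          fqdn: {list_join: ['.', [{get_param: Hostname}, 'internalapi', {get_param: CloudDomain}]]}
          short: {list_join: ['.', [{get_param: Hostname}, 'internalapi']]}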
Change-Id: Ie7df360bb69d56655f3e0fcbbf4d297db39b7a26
Updates the get-occ-config.sh script used with the deployed-server
environment to support custom roles. Any custom role name and a
corresponding set of hosts (IP addresses or hostnames) can now be
passed to the script; it will query for the proper nested stack UUIDs
and configure os-collect-config appropriately on the respective nodes.
Change-Id: I8fc39e6d18cd70ff881e2a284234b26261018d67
Improve scenario001 with Cinder + RBD coverage.
Also remove the Barbican bits; we don't deploy Barbican in
scenario001, but in scenario002.
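A minimal sketch of the settings this implies for the scenario
environment, assuming the usual Cinder backend switches are used:

  parameter_defaults:
    CinderEnableRbdBackend: true
    CinderEnableIscsiBackend: false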
Change-Id: Ib9cadbefcb3ddcdb4812f47ff5496e74b2bd888d
Improve scenario003 to configure Keystone tokens with the Fernet
provider. Scenario001 and scenario002 will still deploy the uuid
provider for now.
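Assuming the existing KeystoneTokenProvider parameter is what the
scenario flips, the change amounts to something like:

  parameter_defaults:
    KeystoneTokenProvider: 'fernet'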
Change-Id: I8c671d0371b2c3590b58b9623bb0df0b0c625a5b
Like Puppet OpenStack CI, implement scenario004 with a Ceph RGW
scenario, where Glance uses it as an image storage backend.
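A rough sketch of what the scenario environment needs (the service
template path and backend value are assumptions; RGW is consumed
through its Swift-compatible API):

  resource_registry:
    OS::TripleO::Services::CephRgw: ../../puppet/services/ceph-rgw.yaml
  parameter_defaults:
    GlanceBackend: swift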
Change-Id: If055ca225c456a738c5726ef1e76a4a4f9c566a8
'user' is required or puppet-ceph will complain that the Keystone_user
has no title:
Evaluation Error: Missing title. The title expression resulted in undef
at /etc/puppet/modules/ceph/manifests/rgw/keystone/auth.pp
The value is set to swift, as we use the same credentials as the Swift
service.
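A hedged sketch of the kind of setting this adds; the exact hiera key
wiring into puppet-ceph's rgw keystone auth class is an assumption
here:

  service_config_settings:
    keystone:
      ceph::rgw::keystone::auth::user: 'swift'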
Closes-Bug: #1642524
Change-Id: Ib4a7c07086b0b3354c8e589612f330ecdffdc637
Add Ceph to scenario001 and use it as a backend for Nova, Glance and
Gnocchi.
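A minimal sketch of the backend settings this implies (assuming the
usual NovaEnableRbdBackend / GlanceBackend / GnocchiBackend
parameters):

  parameter_defaults:
    NovaEnableRbdBackend: true
    GlanceBackend: rbd
    GnocchiBackend: rbd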
Change-Id: I29065d4b2ac39db40984873fda550d7adbe904fe
The resource is failing and prevents us from adding more coverage.
Until we figure out what's wrong with it, let's disable it.
Change-Id: If89775bf67d686327d0d27222e0c9179be74a668
This shows how we could wire in the upgrade steps using Ansible,
as was previously proposed, e.g. in https://review.openstack.org/#/c/321416/,
but more closely integrated with the new composable services
architecture.
It's also very similar to the approach taken by SpinalStack, where
per-service Ansible snippets were combined and then run in a series of
steps using Ansible tags.
This patch just enables the upgrade of keystone; we'll add support for
other services in subsequent patches.
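A hedged sketch of what a per-service snippet looks like in this
scheme; the task content and step tag below are illustrative, not the
exact keystone tasks:

  upgrade_tasks:
    # Snippets like this are collected from each service and run by
    # Ansible in a series of tagged steps
    - name: Stop keystone service (running under httpd)
      tags: step2
      service: name=httpd state=stopped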
Partially-Implements: blueprint overcloud-upgrades-per-service
Change-Id: I39f5426cb9da0b40bec4a7a3a4a353f69319bdf9
Change-Id: Iee1afeced0b210a46b273aafc0d40e99d6ee6d4e
This changes how we get the network-based FQDNs for specific services:
instead of using the custom fact, we now use the new hiera entries.
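As an illustration of the consumer side (the exact setting shown is an
example), a service's config_settings can now interpolate the hiera
entry rather than relying on a fact:

  config_settings:
    keystone::wsgi::apache::servername: "%{hiera('fqdn_internal_api')}"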
Change-Id: Iae668a5d89fb7bee091db4a761aa6c91d369b276
Currently, one can get the network-based FQDNs via a custom puppet
fact. This is unreliable, as it's based on the ::hostname fact, which
we assume is set correctly by Nova. However, this is not necessarily
the case (for instance, if you use pre-deployed servers, as we do in
the multinode jobs). In these cases, the ::hostname fact will return
something other than what we specified in Nova, which effectively
breaks the configuration if we rely too much on the network-based
FQDN facts.
By using hiera instead, we avoid this issue, as we set those values
to be exactly what we expect (since we set them in the
OS::TripleO::Server resource).
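Sketch of what the rendered hieradata might contain; the key names
follow the fqdn_<network> pattern and the values below are
placeholders:

  fqdn_internal_api: overcloud-controller-0.internalapi.localdomain
  fqdn_storage: overcloud-controller-0.storage.localdomain
  fqdn_external: overcloud-controller-0.external.localdomain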
Change-Id: I6ce31237098f57bdc0adfd3c42feef0073c224fb