https://review.openstack.org/#/c/318840/ decomposed the Sahara services,
but they were not added to the ControllerServices list, so they are now
disabled. Since we shipped Mitaka with Sahara enabled by default, we
should add them back so the behavior stays consistent when folks upgrade.
This also fixes a couple of issues we missed when landing the initial
service templates (partly because CI didn't test them).
For each service to operate independently when used with Pacemaker, the
roles needed to be separated; this commit does that as well.
Depends-On: Id61eb15b1e2366f5b73c6e7d47941651e40651b1
Change-Id: I0846b328e9d938275e373d58f0b99219b19b326c
Closes-Bug: #1592284
Co-Authored-By: Brad P. Crochet <brad@redhat.com>
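As a minimal sketch, assuming the decomposed templates register the services as SaharaApi and SaharaEngine, re-enabling them means listing those names in the ControllerServices parameter default in overcloud.yaml:

    ControllerServices:
      type: comma_delimited_list
      default:
        # ... existing controller services ...
        - OS::TripleO::Services::SaharaApi
        - OS::TripleO::Services::SaharaEngine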
Implements: blueprint composable-services-within-roles
Depends-On: Ie48a123cc5bc402aee635a5daf118b158c6f3b6a
Closes-Bug: #1601850
Change-Id: Ifcfe0e3937fa8577635d803d46c3dfc2e873e553
This patch adds support for conditionally enabling DVR by deploying the
L3 and metadata agents on the compute node and setting the proper
configuration values throughout.
Implements: blueprint neutron-dvr-support
Change-Id: I24099795e76ecd520c990ba49d3511288dec7a12
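As a rough deployer-facing sketch (the parameter name and template path are assumptions, not taken from this patch), the conditional enablement could look like:

    parameter_defaults:
      NeutronEnableDVR: true    # assumed toggle parameter
    resource_registry:
      # deploy the L3 and metadata agents on compute nodes
      OS::TripleO::Services::ComputeNeutronL3Agent: puppet/services/neutron-l3-compute-dvr.yaml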
Allows the installation and configuration of Manila.
Supports the generic driver only. This has a dependency on the
puppet-tripleo classes for manila where the puppet specific
config now lives.
The review at https://review.openstack.org/#/c/315658/ has been
merged into this one, as of v68, so Manila lands as a composable
service. This was brought up on the mailing list at [1].
[1] http://lists.openstack.org/pipermail/openstack-dev/2016-May/096126.html
Co-Authored-By: Marios Andreou <marios@redhat.com>
Implements: blueprint composable-services-within-roles
Depends-On: I444916d60a67bf730bf4089323dba1c1429e2e71
Depends-On: I9eda4b3364e5c59342761a1ec71b0eb567c69cf1
Depends-On: I571b65a5402c1028418476a573ebeb9450ed00c9
Change-Id: I7acebac4354fca1f8d7ff6c343c1346bf29b81c6
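A minimal sketch of enabling the new composable service from an environment file, assuming registry entries and template paths along these lines (names illustrative):

    resource_registry:
      OS::TripleO::Services::ManilaApi: puppet/services/manila-api.yaml
      OS::TripleO::Services::ManilaScheduler: puppet/services/manila-scheduler.yaml
      OS::TripleO::Services::ManilaShare: puppet/services/manila-share.yaml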
Currently we have hard-coded parameters for each role, but to enable
custom roles, we need to pass a generic hosts list that can be joined
for all enabled roles.
Change-Id: I0606f462ff61c3a541342b63fee7d46ebfd1f4e0
Partially-Implements: blueprint custom-roles
This is essentially the same data as defined in the *Services parameter,
but it shows what is enabled for all roles in the format output by the
service templates, so it is useful for debugging, and possibly for things
like conditional endpoint generation in the future.
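For example, after deployment this data could be inspected with something like "openstack stack output show overcloud EnabledServices" (the output name here is an assumption for illustration).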
Change-Id: Ia4b1694e419533b05d2757d2925471cae75fb5b6
Change-Id: Iacd94294b8a66bc082bb2b3e8d3364ec1bf053b8
Depends-On: I16a786ce167c57848551c7245f4344c382c55b3d
We have some inconsistent naming here, but we move the parameters with
their current names for backwards compatibility; we can address deprecating
the inconsistent names at a future time.
This is required to enable jinja templating of roles in overcloud.yaml.
Change-Id: I2ea673d9bc52967f9b7c25555059b964abf66966
Partially-Implements: blueprint custom-roles
We've got some inconsistent naming here, but I'm not attempting to
fix that yet, only to move the current parameters inside each role template.
This should be backwards compatible because the parameter names
don't change, while also enabling progress on custom-roles. We can
figure out a strategy for deprecating these and aligning per-role
parameter naming in a subsequent patch.
Also moves ImageUpdatePolicy, which wasn't consistently passed to
all roles anyway, and aligns the default image and constraints
for each role.
Change-Id: I85ec979934df220acbab9f7c3a6055f23e3bfc29
Partially-Implements: blueprint custom-roles
This is already defined in all the per-role templates and is passed
via parameter_defaults.
Change-Id: Ifde54d3d29a3f0754f0f05740d6b6030aa871d38
Partially-Implements: blueprint custom-roles
To enable custom roles, move these into the role templates, where
they can be passed via parameter_defaults. Because the Compute
role uses the inconsistent NovaCompute naming, these parameters
cannot be generated in overcloud.yaml, so moving them allows
backwards compatibility to be maintained when we move to a
fully jinja-generated overcloud (e.g. including the role
ResourceGroup resources).
Change-Id: I3f9b2275f2b1daeb8b83f09548a089dadcfe9eee
Partially-Implements: blueprint custom-roles
Remove the parameters that simply shadow parameters defined in
puppet/controller.yaml; these can be passed via parameter_defaults,
which is the default mechanism. The remaining properties are trickier,
so they will be handled in subsequent patches.
Partially-Implements: blueprint custom-roles
Change-Id: I9bbbd12631de8cb1ad83e265f6ddc9e942dff9ab
Removes the old CephCluster configuration and deployment from the
templates; before composable roles, these distributed the shared
settings for the Ceph cluster configuration.
Change-Id: Ia704f5d7add85e52dd477f4bc758aa0a02e4b39b
This moves the ringbuilder puppet code to puppet-tripleo
and migrates to the composable services format.
Closes-Bug: #1601857
Change-Id: I0ea2230072d3ff61a4047ffff1f4187951370f67
Depends-On: I427f0b5ee93a0870d43419009178e0690ac66bd6
This patch adds a new service_name section to each composable
service. There is now an explicit check in tools/yaml-validate.py
to ensure that service_name exists in every service template.
This patch also wires service_names into the hieradata on each
of the roles, so that tools can discover the locally deployed services
during deployment and upgrades.
Change-Id: I60861c5aa760534db3e314bba16a13b90ea72f0c
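A minimal sketch of the new section in a service template (the service and profile names here are illustrative):

    outputs:
      role_data:
        description: Role data for the Glance API service.
        value:
          service_name: glance_api
          config_settings:
            glance::api::debug: {get_param: Debug}
          step_config: |
            include ::tripleo::profile::base::glance::api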
Some deployments want to deploy zero controllers, and this will become
a more likely configuration when support for custom roles lands (e.g. a
set of decomposed custom roles may be deployed instead of the monolithic
controller role).
Change-Id: Idb21779f3ad9d875576bdb5e6502ed27a72600ba
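With this change, a minimal sketch of such a deployment is simply:

    parameter_defaults:
      ControllerCount: 0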
This patch just moves the Puppet code into puppet-tripleo.
A future iteration will move the parameters into the service
template.
Closes-Bug: #1601853
Depends-On: I7ddae28a6affd55c5bffc15d72226a18c708850e
Change-Id: I51a05dbf53f516b200c146b35529ce563ce9ac7b
Deploy Pacemaker using composable services.
Change-Id: I038514812af5a9f30260a81ea3366d46bee4ee4e
Depends-On: I46215f82480854b5e04aef1ac1609dd99455181b
Closes-Bug: #1601970
Implement the composable service for the Ceilometer compute agent.
Change-Id: I5ab3887832588ce26e2d258d05f725d87d2c103d
Create a new resource registry entry for a Neutron "compute plugin".
For ML2 this may be the same as the NeutronComputePlugin, but patches
for other vendors will follow that require extra bits on nodes
where VMs will be created.
This patch removes the ML2 code from the compute role and instead
uses the existing composable services.
NOTE: we are able to remove the puppet resource chain that forced OVS
to get restarted, thanks to puppet-neutron commit
Idb1332dd498bb3065720f2ccaf68e6b0e9fa80c3, which resolves that
issue.
Co-Authored-By: Emilien Macchi <emilien@redhat.com>
Depends-On: I95b9188607ab6c599ad4cde6faa1deb081618f3e
Change-Id: I2496372ca6e6ba9f52e9a8bb6e8dc731c125af13
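A sketch of the resulting registry wiring (the mapping name follows the pattern described above; the exact path is illustrative):

    resource_registry:
      # for ML2 the compute-side plugin can map to the same service template
      # as the controller side; vendor plugins can override this entry
      OS::TripleO::Services::ComputeNeutronCorePlugin: puppet/services/neutron-plugin-ml2.yaml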
Implements: blueprint composable-services-within-roles
Co-Authored-By: Juan Antonio Osorio Robles <jaosorior@redhat.com>
Co-Authored-By: Carlos Camacho <ccamacho@redhat.com>
Depends-On: Id728aae79442c45ab48fe0914c065f1807e8890d
Closes-Bug: #1601846
Change-Id: I40a3815923099d00a0f3fc1d88a942784e7c6fb9
Add horizon as a composable service
Depends-On: Iff6508972edfd5f330b239719bc5eb14d3f71944
Change-Id: I734c3e0784c25f30adff2e13faf1155a3e45cefd
Partially-Implements: blueprint composable-services-within-roles
This patch brings back the Ceilometer composable roles for the
controller, modulo some adjustments to make them work.
Fixes four issues in the Ceilometer composable services:
1) Fixes the hiera maps in the pacemaker ceilometer*
templates. These were lists but should be maps.
2) Fixes a critical issue in ceilometer-base.yaml where the
password was incorrectly coded in the YAML using get_param on
a string which wasn't actually a parameter.
3) Fixes the ceilometer_coordination_url so that it uses a YAML anchor,
as was implied, instead of get_param on a string which wasn't a
parameter (see the sketch below).
4) Fixes the default database connection to use mongodb, configured
appropriately in the puppet-tripleo profile.
Co-Authored-By: Dan Prince <dprince@redhat.com>
Co-Authored-By: Pradeep Kilambi <pkilambi@redhat.com>
Closes-Bug: #1601844
Change-Id: Ia0a59121b9ffd5e07647f66137ce53870bc6b5d6
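To illustrate issues 2) and 3), a minimal sketch of the broken vs. fixed pattern (the keys and parameter names are hypothetical):

    # broken: "redis_vip" is a literal string here, not a parameter,
    # so get_param cannot resolve it
    ceilometer_coordination_url: {get_param: redis_vip}

    # fixed: reuse the value via a YAML anchor/alias
    redis_vip: &redis_vip {get_param: RedisVirtualIP}
    ceilometer_coordination_url: *redis_vip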
While the endpoints do need brackets around IPv6 addresses, these
are not wanted by some of the puppet classes, so we must pass the
non-bracketed version as well.
It will also allow us to remove the MysqlVirtualIP param passed to
the controller role once hieradata/database.yaml is emptied.
Change-Id: If264b02a134b96368035f032e05d02e84f6499ed
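For example (the connection string and hiera keys here are hypothetical), an endpoint URI needs the bracketed form, while a puppet class binding to the address needs the bare one:

    # endpoint URIs require brackets around IPv6 addresses
    nova::database_connection: mysql+pymysql://nova:secret@[2001:db8::10]/nova
    # ...but a class binding to the address wants it bare (hypothetical key)
    tripleo::database::bind_address: 2001:db8::10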
Since Mitaka, Neutron and Nova do the right thing for MTU, correctly
calculating and applying MTU per network, considering its network type
and underlying physical network MTU (1500 by default). Neutron now also
correctly advertises the proper MTU to instances through DHCP and RA
mechanisms. With that, there is no reason to keep those MTU hacks in
deployment tools. Actually, they not only do no good, but they break
some setups (Jumbo-frame-aware infrastructure), or at least make them
non-optimal (lowering instance MTU to 1400 when it's not needed, or when
tunnel overhead does not require 100 bytes).
Note that Neutron still has a set of configuration options to allow for
custom physical network MTUs (global_physnet_mtu, path_mtu,
physical_network_mtus). Those options define the underlying
infrastructure, though, not tenant MTUs. To support Jumbo frames,
TripleO should allow setting those options. That said, it's not the
immediate goal of this patch, and such an effort would require a
separate patch.
Mitaka+ documentation on MTU configuration for Neutron:
http://docs.openstack.org/mitaka/networking-guide/adv-config-mtu.html
Change-Id: I540ba5dc69d0506f71b59746efcce94c73f9317f
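For reference, a sketch of what exposing those options might look like as hieradata, assuming puppet-neutron class parameters of the same names:

    neutron::global_physnet_mtu: 9000
    neutron::plugins::ml2::path_mtu: 9000
    neutron::plugins::ml2::physical_network_mtus: ['datacentre:9000']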
Add a new service that will load and configure kernel modules.
Depends-On: If4f1047ff8c193a14b821d8b826f637872cf62bd
Change-Id: I8f771712595d0f4826858b855985f65d3621c3f1
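A sketch of how a deployer might use the new service, assuming it loads a kernel_modules hash from hieradata (the key name is illustrative):

    parameter_defaults:
      ExtraConfig:
        kernel_modules:
          nf_conntrack_proto_sctp: {}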
Currently we have a special controller-only deployment which writes
the name/IP of the "bootstrap node", e.g. the cluster master, which
defaults to the first node in the Controller ResourceGroup.
Now that we're moving to fully composable services/roles, it's possible
folks will want to deploy services that expect to detect the bootstrap
node (e.g. so only one node does a DB sync) on non-controller roles.
So, take this opportunity to combine the bootstrap node deployment with
the "all nodes" data, such that we deploy the same data for all roles.
Because the bootstrap node data is per role cluster, rather than truly
global, we pass it via input_values into each per-role Deployment.
At some future point we might consider renaming this, e.g. to
something which describes per-cluster config vs "all nodes",
but as a first step let's just rationalize the resources.
Change-Id: I4011526a13c51b3d0f95c17fe8ed38115b4fdce4
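A minimal sketch of the per-role wiring described above (resource, attribute, and input names illustrative):

    ComputeAllNodesDeployment:
      type: OS::Heat::StructuredDeployments
      properties:
        servers: {get_attr: [Compute, attributes, nova_server_resource]}
        input_values:
          # the first node in this role's ResourceGroup acts as the
          # role cluster's bootstrap node
          bootstrap_nodeid: {get_attr: [Compute, resource.0.name]}
          bootstrap_nodeid_ip: {get_attr: [Compute, resource.0.ip_address]}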
I think this depends_on is bogus: the Controller ResourceGroup does
depend on Networks, but the ControllerServiceChain does not. This
needs to be consistent with the other ServiceChain definitions for the
custom-roles work.
Change-Id: I0159968719f5d21c8f216ad69af047fa141d54e9
We added NodeConfigIdentifiers to trigger config to be re-applied on
update, but then later added DeployIdentifier which forces config to
*always* be applied on update, so we can simplify things by just
referencing the DeployIdentifier directly.
Change-Id: I79212def1936740825b714419dcb4952bc586a39
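The simplification amounts to wiring the existing parameter straight into the deployment inputs, roughly (a sketch):

    parameters:
      DeployIdentifier:
        type: string
        default: ''
        description: >
          Setting this to a unique value will re-run any deployment
          tasks which perform configuration on a Heat stack-update.

    # ...then, in each config deployment:
    input_values:
      deploy_identifier: {get_param: DeployIdentifier}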
Change-Id: Ibc9abf7043c37104d03cd72d882e10cdb53fe6a2
Change-Id: I1921115cb6218c7554348636c404245c79937673
Depends-On: I7ac096feb9f5655003becd79d2eea355a047c90b
Depends-On: I871ef420700e6d0ee5c1e444e019d58b3a9a45a6