Implements: blueprint composable-services-within-roles
Depends-On: Ie48a123cc5bc402aee635a5daf118b158c6f3b6a
Closes-Bug: #1601850
Change-Id: Ifcfe0e3937fa8577635d803d46c3dfc2e873e553
|
|
We've got some inconsistent naming here, but I'm not attempting to
fix that yet, only move the current parameters inside each role template.
This should be backwards compatible because the parameter names
don't change, but also enable progress on custom-roles. We can
figure out a strategy for deprecating these and aligning per-role
parameter naming in a subsequent patch.
Also moves ImageUpdatePolicy, which wasn't consistently passed to
all roles anyway, and aligns the default image and constraints
for each role.
Change-Id: I85ec979934df220acbab9f7c3a6055f23e3bfc29
Partially-Implements: blueprint custom-roles
|
|
To enable custom roles, move these into the role templates where
they can be passed via parameter defaults. Because the Compute
role uses an inconsistent NovaCompute naming, these parameters
cannot be generated in overcloud.yaml, so moving them enables
backwards compatibility to be maintained when we move to a
fully jinja generated overcloud (e.g including the role
ResourceGroup resources).
Change-Id: I3f9b2275f2b1daeb8b83f09548a089dadcfe9eee
Partially-Implements: blueprint custom-roles
|
|
This moves the ringbuilder puppet code to puppet-tripleo
and migrates to the composable services format.
Closes-Bug: #1601857
Change-Id: I0ea2230072d3ff61a4047ffff1f4187951370f67
Depends-On: I427f0b5ee93a0870d43419009178e0690ac66bd6
|
|
This patch adds a new service_name section to each composable
service. We now have an explicit unit test check to ensure that
service_name exists in tools/yaml-validate.py.
This patch also wires service_names into hieradata on each
of the roles so that tools can access the deployed services locally
during deployment and upgrades.
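A role's hieradata might then contain an entry along these lines
(the exact list depends on the services enabled for the role; the
names here are illustrative):
  service_names:
    - ntp
    - timezone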
Change-Id: I60861c5aa760534db3e314bba16a13b90ea72f0c
|
|
Currently we have a special controller-only deployment which writes
the name/ip of the "bootstrap node", e.g the cluster master, which
defaults to the first node in the Controller ResourceGroup.
Now that we're moving to fully composable services/roles, it's possible
folks will want to deploy services that expect to detect the bootstrap
node (e.g so only one node does a DB sync) for non-controller roles.
So, take this opportunity to combine the bootstrap node deployment with
the "all nodes" data, such that we deploy the same data for all roles.
Because the bootstrap node data is per role cluster, rather than truly
global, we pass it via input_values into each per-role Deployment.
At some future point we might consider renaming this, e.g to
something which describes per-cluster config vs "all nodes",
but as a first step let's just rationalize the resources.
Change-Id: I4011526a13c51b3d0f95c17fe8ed38115b4fdce4
|
|
This change uses the new os-refresh-config --timeout argument to set a
kill timeout for stalled os-refresh-config runs.
Four hours is a reasonably conservative value since it matches the
stack timeout, but it can be set shorter in the future based on
actual run
times.
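For reference, the resulting wrapper invocation would look
something like:
  os-refresh-config --timeout 14400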
Change-Id: I433f558515df24736263ec0d50de08ad8f78498f
Closes-Bug: #1595722
DependsOn: Ibcbb2090aed126abec8dac49efa53ecbdb2b9b2c
|
|
The ConfigCommand parameter overrides the server resource metadata to
specify what command os-collect-config runs whenever any configuration
data changes.
The default is already 'os-refresh-config' so this change has no
effect but it allows a future change to specify an
os-refresh-config --timeout argument to fix bug #1595722.
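A sketch of that follow-up override via an environment file (the
timeout value here is illustrative):
  parameter_defaults:
    ConfigCommand: os-refresh-config --timeout 14400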
Change-Id: I8dd35b6724d8c00e5495faca84ee8fee77641b82
Partial-Bug: #1595722
|
|
We added NodeConfigIdentifiers to trigger config to be re-applied on
update, but then later added DeployIdentifier which forces config to
*always* be applied on update, so we can simplify things by just
referencing the DeployIdentifier directly.
Change-Id: I79212def1936740825b714419dcb4952bc586a39
|
|
Add timezone as a composable service
Change-Id: I5bed49e1f8b803fb6d9d0b06165a38f61b132431
Partially-implements: blueprint composable-services-within-roles
|
|
Depends-On: Ie68d7eccf4938bdbdea93327af0638b3fd002b3e
Change-Id: I1eb68d0cd5f8bf4bf954dd9f12941bc493345708
Partially-Implements: blueprint composable-services-within-roles
|
|
Add NTP as a composable service for ObjectStorage.
Partially-implements: blueprint composable-services-within-roles
Change-Id: I6315abc7955c9dc1df9f211c1c5b7332b5e01d5a
|
|
Some puppet parameters were deprecated, and some were removed.
This patch reduces the number of warnings to a few; the remaining
warnings are bugs that the Puppet OpenStack team is working on.
This patch is mostly cleanup so we don't have useless warnings in
the Puppet catalog.
Changes:
* Update Ceilometer auth params
* Update Neutron auth params
* Update Heat auth params
* Update Swift hash suffix param
* Remove neutron::server::notifications::nova_url, which is no longer needed.
Change-Id: Ie32681a1fe32735f70ba372630da09f91227298c
|
|
Similar to https://review.openstack.org/#/c/259568, which added
support for the composable services StepConfig and
ServiceConfigSettings parameters so that the Controller role supports
composable services, this adds those interfaces for the
ObjectStorage role.
Note that at this time the ObjectStorage post config only supports steps
2, 3 and 4, not all those in services/README.rst
Partially-Implements: blueprint composable-services-within-roles
Change-Id: I22ffaa68a6ccd4be29d51674871268179bcddcbc
|
|
This might be useful if we switch to %{hiera()} calls to look up
the bind address from within a service.
Also gets rid of NetIpSubnetMap and provides same output from
NetIpMap instead.
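For example, a service could then resolve its bind address with an
interpolation like this (the hiera key is illustrative):
  bind_host: "%{hiera('internal_api_ip')}"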
Change-Id: I328a417d1f1fff9c31e9ad7b2b5083ac19bc7329
|
|
This change configures the hiera merge behavior to 'deeper' [1],
which is useful to merge values when the same hiera key is found
in multiple datafiles.
The hiera default 'native' only picks the value from the key with
the highest priority in the hierarchy.
1. https://docs.puppetlabs.com/hiera/1/lookup_types.html#deep-merging-in-hiera--120
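With hiera 1.x this corresponds to the following hiera.yaml setting:
  :merge_behavior: deeper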
Change-Id: I88c764d9af510ffbbad9fcaa4b747655e38255c2
|
|
Right now, the service-related IPs associated with the machine are
registered in /etc/hosts with different hostnames. This is fine,
except if you need to register those hostnames in a third party
service (such as FreeIPA), since the current configuration is not
assigning a domain to those IP addresses. So the current
implementation requires DNS to be properly working, which is not
ideal for testing purposes. Since the current hostnames are not yet
being used, it's still trivial to change this mapping and their
format. Instead of having entries such as:
<INTERNAL IP> <node>-internalapi
<STORAGE IP> <node>-storage
...
in /etc/hosts, this changes the format to:
<INTERNAL IP> <node>.internalapi.<domain> <node>.internalapi
<STORAGE IP> <node>.storage.<domain> <node>.storage
...
So the network (external, internal, storage, etc...) is now
represented as a subdomain. For simplicity, the format without the
domain is still available through an alias.
Change-Id: I6502959a974546e5de757935acea15df6326acda
|
|
There are quite a few cases where it is useful to disable ring building on the
nodes. For example:
- using different weights, regions, and zones
- replacing a node in an existing Swift cluster
- adding a new node to an existing cluster
- using storage policies and therefore multiple rings
- using different nodes and disks for account, container and object servers
This patch makes it possible to disable ring building. Rings then
need to be maintained manually and copied to all storage and proxy
nodes within a cluster.
This patch is similar to I01311ec3ca265b151f8740bf7dc57cdf0cf0df6f, except that
it uses the current templates.
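Disabling could then look something like this (assuming the
parameter is named SwiftRingBuild):
  parameter_defaults:
    SwiftRingBuild: false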
Change-Id: I56978b15823dd6eaf4b6fd3440df2f895e89611a
|
|
For the external loadbalancer work, we added the ability to specify
fixed ips for controller nodes on all network isolation networks.
In order to allow users full control over the placement and ip
addresses of deployed nodes, we need to be able to do the same thing
for the other node types.
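e.g an environment along these lines (parameter name and addresses
illustrative):
  parameter_defaults:
    NovaComputeIPs:
      internal_api:
        - 172.16.2.100
        - 172.16.2.101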
Change-Id: I3ea91768b2ea3a40287f2f3cdb823c23533cf290
|
|
The swift device string is formatted in the outputs of the
controller template and swift-storage templates.
For ipv6 we need to delimit the address with [] as discussed in
https://bugzilla.redhat.com/show_bug.cgi?id=1296701#c0
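For illustration, a device string would change from something like:
  r1z1-172.16.1.6:%PORT%/d1
to:
  r1z1-[fd00:fd00:fd00:3000::10]:%PORT%/d1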
Change-Id: Ie611d62c3668a65a7be52777a613d265682c6a8b
Co-Authored-By: Jiri Stransky <jistr@redhat.com>
Closes-Bug: 1534135
|
|
This change adds a new set of network templates with IPv6 subnets
that can be used instead of the existing IPv4 networks. Each network
can use either the IPv4 or IPv6 template, and the Neutron subnet will
be created with the specified IP version.
The default addresses used for the IPv6 networks use the fd00::/8
prefix for the internal isolated networks (this range is reserved
for private use similar to 10.0.0.0/8), and 2001:db8:fd00:1000::/64
is used as an example default for the External network
(2001:db8::/32 are the documentation addresses [RFC3849]), but this
would ordinarily be a globally addressable subnet. These
parameters may be overridden in an environment file.
This change will require updates to the OpenStack Puppet
Modules to support IPv6 addresses in some of the hieradata values.
Many of the OPM modules already have IPv6 support to support IPv6
deployments in Packstack, but some OPM packages that apply only to
Instack/TripleO deployments need to be updated.
IPv6 addresses used in URLs need to be surrounded by brackets in
order to differentiate IP address from port number. This change
adds a new output to the network/ports resources for
ip_address_uri, which is an IP address with brackets in the case
of IPv6, and a raw IP address without brackets for IPv4 ports.
This change also updates some URLs which are constructed in Heat.
This has been tested and problems were found with Puppet not
accepting IPv6 addresses. This is addressed in the latest Puppet.
Additional changes were required to make this work with Ceph.
IPv6 tunnel endpoints with Open vSwitch are not yet supported
(although support is coming soon), so this review leaves the
Tenant network as an isolated IPv4 network for the time being.
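The defaults can be overridden in an environment file, e.g
(parameter names follow the existing *NetCidr convention):
  parameter_defaults:
    InternalApiNetCidr: fd00:fd00:fd00:2000::/64
    ExternalNetCidr: 2001:db8:fd00:1000::/64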
Change-Id: Ie7a742bdf1db533edda2998a53d28528f80ef8e2
|
|
Populates /etc/hosts with an entry for each IP address the node
is on, which will be useful to migrate services configuration from
using IPs into using hostnames.
This is how the lines look on a host which doesn't have all ports:
172.16.2.6 overcloud-novacompute-0.localdomain overcloud-novacompute-0
192.0.2.9 overcloud-novacompute-0-external
172.16.2.6 overcloud-novacompute-0-internalapi
172.16.1.6 overcloud-novacompute-0-storage
192.0.2.9 overcloud-novacompute-0-storagemgmt
172.16.0.4 overcloud-novacompute-0-tenant
192.0.2.9 overcloud-novacompute-0-management
The network against which the default (or primary) name is resolved
can be configured (for computes) via ComputeHostnameResolveNetwork.
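e.g (network name illustrative):
  parameter_defaults:
    ComputeHostnameResolveNetwork: internal_api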
Change-Id: Id480207c68e5d68967d67e2091cd081c17ab5dd7
|
|
Some operators desire more granular control of hostnames than is
currently possible via the *HostnameFormat parameters, in particular
mapping nodes to explicit IDs (such as inventory references) is not
easily possible.
So, add a HostnameMap parameter, which is optional and allows
explicit overriding of the default hostnames.
E.g pass an environment like this:
parameter_defaults:
HostnameMap:
overcloud-controller-0: overcloud-controller-prod-123-0
overcloud-controller-1: overcloud-controller-prod-456-0
overcloud-controller-2: overcloud-controller-prod-789-0
Note this mapping is global (for all roles), because we
expect the keys to be unique given that they include the
role name and index by default.
Note that this depends on a fix for heat bug #1539737
Change-Id: Ib4d3d40e9523903ebccc06c3e14b2d71d924afa3
Depends-On: Ib934f443a8b8e4f75335a9d8b992e7f86791aa45
|
|
Adds a TimeZone parameter for node types and the top level
stack. Defaults to UTC.
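It can be overridden per deployment, e.g (value illustrative):
  parameter_defaults:
    TimeZone: 'Europe/Prague'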
Change-Id: I98123d894ce429c34744233fe3e631cbdd7c12b5
Depends-On: Icf7c681f359e3e48b653ea4648db6a73b532d45e
|
|
This change allows every overcloud node to optionally participate in
any of the isolated networks. The optional networks are not enabled
by default, but allow additional flexibility. Since the new networks
are not enabled by default, the standard deployment is unchanged.
This change was originally requested for OpenDaylight support.
There are several use cases for using non-standard networks.
For instance, one example might be adding the Internal API network
to the Ceph nodes, in order to use that network for administrative
functions. Another example would be adding the Storage Management
network to the compute nodes, in order to use it for backup. Without
this change, any deviation from the standard set of roles that use a
network is a custom change to the Heat templates, which makes
upgrades much more difficult.
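With this change, such a network can be enabled via a
resource_registry entry along these lines (the exact registry key
and path are illustrative):
  resource_registry:
    OS::TripleO::CephStorage::Ports::InternalApiPort: ../network/ports/internal_api.yaml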
Change-Id: Ia386c964aa0ef79e457821d8d96ebb8ac2847231
|
|
This change adds a system management network to all overcloud
nodes. The purpose of this network is for system administration,
for access to infrastructure services like DNS or NTP, or for
monitoring. This allows the management network to be placed on a
bond for redundancy, or for the system management network to be
an out-of-band network with no routing in or out. The management
network might also be configured as a default route instead of the
provisioning 'ctlplane' network.
This change does not enable the management network by default. An
environment file named network-management.yaml may be included to
enable the network and ports for each role. The included NIC config
templates have been updated with a block that may be uncommented
when the management network is enabled.
This change also contains some minor cleanup to the NIC templates,
particularly the multiple nic templates.
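e.g:
  openstack overcloud deploy --templates \
    -e environments/network-management.yaml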
Change-Id: I0813a13f60a4f797be04b34258a2cffa9ea7e84f
|
|
This change adds a SoftwareConfigTransport parameter to role templates
so that the transport can be changed via a parameter_defaults entry.
This change will have no effect on an existing overcloud as the current
default POLL_SERVER_CFN is now explicit in the parameter default.
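e.g to switch to one of heat's other supported transports:
  parameter_defaults:
    SoftwareConfigTransport: POLL_SERVER_HEAT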
Change-Id: I5c2a2d2170714093c5757282cba12ac65f8738a4
|
|
The parameters have nothing to do with EC2 keypairs; they are used
to specify Nova SSH key pairs.
Change-Id: Ia8d37cb5c443812d02133747cb54fcaf0110d091
|
|
There are two reasons the name property should always be set for deployment
resources:
- The name often shows up in logs, files and API calls, the default
derived name is long and unhelpful
- Sorting by name determines the merge order of os-apply-config, and the
execution order of puppet/shell scripts (note this is different to
resource dependency order) so leaving the default name results in an
undetermined order which could lead to unpredictable deployment of
configs
This change simply sets the name to the resource name, but a future change
should prepend each name with a run-parts style 2 digit prefix so that the
order is explicitly stated. Documentation for extraconfig needs to clearly
state what prefix is needed to override which merge/execution order.
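A minimal sketch of the pattern (resource names illustrative):
  ControllerDeployment:
    type: OS::Heat::StructuredDeployment
    properties:
      name: ControllerDeployment
      config: {get_resource: ControllerConfig}
      server: {get_resource: Controller}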
For existing overcloud stacks, heat currently replaces deployment resources
when the name changes, so this change
Depends-On: I95037191915ccd32b2efb72203b146897a4edbc9
Change-Id: Ic4bcd56aa65b981275c3d4214588bfc4de63b3b0
|
|
All of our sensitive parameters default to easily predictable
values, which is very bad from a security perspective: we don't
force clients to make sane choices and thus risk deploying with the
predictable default values. tripleoclient supports generating random
values for all of these, so remove the defaults; for non-tripleoclient
usage we can create a developer-only environment with defaults.
Related-Bug: #1516027
Change-Id: Ia0cf3b7e2de1aa42cf179cba195fb7770a1fc21c
Depends-On: Ifb34b43fdedc55ad220df358c3ccc31e3c2e7c14
|
|
This adds a parameter for each role, where optional scheduler hints
may be passed to nova. One potential use-case for this is using
the ComputeCapabilities to pin deployment to a specific node (not
just a specific role/profile mapping to a pool of nodes like we
have currently documented in the ahc-match docs).
This could work as follows:
1. Tag a specific node as "node:controller-0" in Ironic:
ironic node-update <id> replace properties/capabilities='node:controller-0,boot_option:local'
2. Create a heat environment file which uses %index%
parameters:
ControllerSchedulerHints:
'capabilities:node': 'controller-%index%'
Change-Id: I79251dde719b4bb5c3b0cce90d0c9d1581ae66f2
|
|
Some Nova hooks might require custom properties/metadata set for the
servers deployed in the overcloud, and this would enable us to inject
such information.
For FreeIPA (IdM) integration, there is effectively a Nova hook that
requires such data.
Currently this inserts metadata for all servers, but a subsequent CR
will introduce per-role metadata. That was not added here because it
would require the use of map_merge, which would block these changes
from being backported. This change, however, has no such problem.
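e.g (key and value illustrative):
  parameter_defaults:
    ServerMetadata:
      custom_key: custom_value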
Change-Id: I98b15406525eda8dff704360d443590260430ff0
|
|
Introduce configuration of the nodes' domains through a parameter.
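e.g (assuming the parameter is named CloudDomain):
  parameter_defaults:
    CloudDomain: example.com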
Change-Id: Ie012f9f2a402b0333bebecb5b59565c26a654297
|
|
This commit enables the injection of a trust anchor or root
certificate into every node in the overcloud. This is useful in case
the TLS certificates for the controllers are signed with a
self-signed CA, or if the deployer would like to inject a relevant
root certificate for other purposes. In either case the other nodes
might need to have the root certificate in their trust chain in
order to do proper validation.
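A sketch of how this could be passed (assuming the parameter is
named SSLRootCertificate):
  parameter_defaults:
    SSLRootCertificate: |
      -----BEGIN CERTIFICATE-----
      ...
      -----END CERTIFICATE-----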
Change-Id: Ia45180fe0bb979cf12d19f039dbfd22e26fb4856
|
|
We don't necessarily want the network configuration to be reapplied
with every template update so we add a param to configure on which
action the NetworkDeployment resource should be executed.
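e.g to only apply network configuration on stack create (assuming
the parameter is named NetworkDeploymentActions):
  parameter_defaults:
    NetworkDeploymentActions: ['CREATE']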
Change-Id: I0e86318eb5521e540cc567ce9d77e1060086d48b
Co-Authored-By: Dan Sneddon <dsneddon@redhat.com>
Co-Authored-By: James Slagle <jslagle@redhat.com>
Co-Authored-By: Jiri Stransky <jstransk@redhat.com>
Co-Authored-By: Steven Hardy <shardy@redhat.com>
|
|
This commit aims to allow a user to specify several NTP servers,
not just one.
Example:
openstack overcloud deploy --templates --ntp-server
0.centos.pool.org,1.centos.pool.org
Change-Id: I4925ef1cf1e565d789981e609c88a07b6e9b28de
|
|
It's become apparent that some actions are required in the pre-deploy
phase for all nodes, for example applying common hieradata overrides,
or also as a place to hook in logic which must happen for all nodes
prior to their removal on scale down (such as unregistration from
a satellite server, which currently doesn't work via the
*NodesPostDeployment for scale-down usage).
So, add a new interface that enables ExtraConfig per-node (inside the
scaled unit, vs AllNodes which is used for the cluster-wide config
outside of the ResourceGroup).
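e.g a per-role pre-deploy hook could be registered like this
(template path illustrative):
  resource_registry:
    OS::TripleO::ComputeExtraConfigPre: /path/to/compute_extraconfig.yaml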
Change-Id: Ic865908e97483753e58bc18e360ebe50557ab93c
|