|
The compute service list is polled until all expected hosts are reported or a
timeout occurs (600s).
Adds a cellv2_discovery flag to puppet services. Used to generate a list of
hosts that should have cellv2 host mappings.
Adds a canonical fqdn that should match the fqdn reported by a host.
Adds the ability to upload a config script for docker config instead of using
complex bash one-liners.
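A rough sketch of how the new flag might look in a puppet service template's
role_data output (the service shown is only an illustration):
  outputs:
    role_data:
      description: Role data for this service
      value:
        service_name: nova_compute  # illustrative service
        cellv2_discovery: true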
Closes-bug: 1720821
Change-Id: I33e2f296526c957cb5f96dff19682a4e60c6a0f0
(cherry picked from commit 61fcfca045aeb5be1ee280d8dd9c260fb39b9084)
|
|
Currently, when a network in network_data is disabled, no port definitions
for that network are created per role. This results in no fallback to the
ctlplane IP, because overriding a port type in network-isolation to noop.yaml
does nothing when the port does not exist for the role.
This patch makes the IPs for a disabled network fall back to the same IPs as
ctlplane, which fixes the issue and removes the need to use the noop.yaml
override for (non-VIP) ports.
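For illustration, a disabled network in network_data.yaml looks roughly like
the entry below (name and subnet are only examples); with this change the
per-role IPs for such a network simply resolve to the ctlplane IPs:
  - name: StorageMgmt        # example network
    name_lower: storage_mgmt
    enabled: false
    vip: true
    ip_subnet: '172.16.3.0/24'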
Closes-Bug: 1721542
Change-Id: I301370fbf47a71291614dd60e4c64adc7b5ebb42
Signed-off-by: Tim Rozet <trozet@redhat.com>
(cherry picked from commit 9285cb5fc99331ca63ff09df59f26b6018bc781b)
|
|
fluentd hiera elements were being set in all_nodes.json, but then were
overwritten by values in <role>.json (e.g., controller.json). This
commit removes the values from all-nodes.json and ensures that they
are set correctly in <role>.json.
Closes-Bug: #1713240
Change-Id: I2b4c74c2a807f8e2fed57112f06b3791701bbe95
(cherry picked from commit d9db0c5f4f0fb07832e54b1c7fd7f5c8bfd4134e)
|
|
These were missed in the previous refactor of role.role.j2.yaml;
we shouldn't reference these via hard-coded values or they become
mandatory in roles_data.yaml.
Change-Id: I014e7d6679c5733b17243d647eaad228c276585a
Closes-Bug: #1711656
(cherry picked from commit 4a4f6783081d9c5b74cda5149bef7655102fcfd8)
|
|
The changes in puppet/role.role.j2.yaml should have been made
to overcloud.j2.yaml, because we don't want the hard-coded reference
to the deprecated name in the parent template. Note that we need to
pass this value from the parent template so the %index% substitution
works, which is required for predictable placement via *SchedulerHints.
Partial-Bug: #1711656
Change-Id: Ided1802daac48d737f53caa7093df814ba101dd0
(cherry picked from commit c6207379db07544240b699ba000537b58d9fb68f)
|
|
|
|
This exposes the deploy workflow for all roles from deploy-steps
via overcloud.j2.yaml, which means we can write it out via the new
openstack overcloud config download command and/or run the workflow
outside of heat via mistral.
With https://review.openstack.org/#/c/485732/ applied to
tripleoclient it becomes possible to do:
openstack overcloud config download --config-dir tmpconfig
cd tmpconfig/tripleo-EvEZk0-config
ansible-playbook -b -i /usr/bin/tripleo-ansible-inventory deploy_steps_playbook.yaml
This runs the deploy steps exactly as they are normally run via heat,
using ansible-playbook for all overcloud nodes (--limit can be used to
restrict the run to specific nodes/roles).
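For example, to restrict the run to a single node (the hostname shown is illustrative):
ansible-playbook -b -i /usr/bin/tripleo-ansible-inventory --limit overcloud-novacompute-0 deploy_steps_playbook.yaml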
Change-Id: I96ec09bc788836584c4b39dcce5bf9b80e914c71
|
|
Add some special-casing for backwards compatibility, such that the
Compute role can be rendered via j2 for support of composable networks.
Change-Id: Ieee446583f77bb9423609d444c576788cf930121
Partially-Implements: blueprint composable-networks
|
|
This change modifies the templates to dynamically define the VIPs
based on network_data.yaml. If a network is defined and marked
with "vip: true" in network_data.yaml, it will be included in the
overcloud.yaml which defines the deployment-level resources.
This should make it possible to create custom networks and
use them for services which use high-availability through VIPs.
Also, extraconfig/nova_metadata/krb-service-principals.yaml
was modified to dynamically produce the FQDN map for VIPs on
isolated networks, to match overcloud.j2.yaml.
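As a sketch, a network entry in network_data.yaml that should get a VIP looks
roughly like this (values are illustrative):
  - name: InternalApi
    name_lower: internal_api
    vip: true                # a Virtual IP will be created on this network
    ip_subnet: '172.16.2.0/24'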
Depends-On: If074f87494a46305c990a0ea332c7b576d3c6ed8
Depends-On: Iab8aca2f1fcaba0c8f109717a4b3068f629c9aab
Partially-implements: blueprint composable-networks
Closes-bug: 1667104
Change-Id: I71339a6ac41133e95dbc3f93abb7a9fdeb0f2da0
|
|
|
|
These are mostly the low-hanging fruit that only required a few
minor changes to fix. There are more that require a lot of changes
or might be more controversial; those will be done later.
Change-Id: I55cebc92ef37a3bb167f5fae0debe77339395e62
Partial-Bug: 1700664
|
|
Just setting CloudDomain won't make the domains used consistent.
There are a number of CloudName parameters that must be set as well.
This change adds a sample environment that includes all of those
parameters so it is easy to set everything consistently.
Also fixes the description of CloudNameCtlplane to reflect the
actual use for that parameter.
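A hedged sketch of the kind of settings such an environment groups together
(values are purely illustrative):
  parameter_defaults:
    CloudDomain: localdomain             # example values
    CloudName: overcloud.localdomain
    CloudNameInternal: overcloud.internalapi.localdomain
    CloudNameStorage: overcloud.storage.localdomain
    CloudNameStorageManagement: overcloud.storagemgmt.localdomain
    CloudNameCtlplane: overcloud.ctlplane.localdomain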
Change-Id: I56d1c1c5619f83c16c4e8350aa84fccc3d748425
|
|
We missed parsing and merging {controller,NovaCompute}ExtraConfig data
in change [1].
Also fixes whitespace handling in docker-steps.j2 and
puppet-steps.j2, previously updated by [2].
1. Id37de5864138edd5476c097a8a1f0763faeaf768
2. I36a642fbc2076ad9e4a10ffc56d6d16f3ed6f27a
Change-Id: Ia9983bc991eb79e479855993c1c8819ddfb52e38
|
|
Merges per-role config settings into merged_config_settings, which
is wired into the workflow execution environment.
This is useful for consuming role config settings from within a workflow.
Change-Id: Id37de5864138edd5476c097a8a1f0763faeaf768
|
|
Makes it possible to resolve network subnets within a service
template; the data is transported in a new ServiceData property
wired into every service, which hopefully is generic enough to
be extended in the future to transport more data.
The data can be consumed in service templates to set config values
which need to know the subnet where a daemon operates (for
example the Ceph public vs. cluster network).
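A rough sketch of how a service template might consume this (the net_cidr_map
key and the hiera key are assumptions for illustration):
  config_settings:
    # hiera key and ServiceData layout shown here are illustrative
    ceph::profile::params::cluster_network:
      get_param: [ServiceData, net_cidr_map, storage_mgmt]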
Change-Id: I28e21c46f1ef609517175f7e7ee19e28d1c0cba2
|
|
This should be handled in puppet-tripleo, as is done for some other
services, e.g. ceph. This has also been identified as a possible
performance problem due to the nested get_attr calls.
Change-Id: I7e14f0219c28c023c4e8e1d4693f0bfa9674d801
Related-Bug: #1684272
Depends-On: Iccb9089db4b382db3adb9340f18f6d2364ca7f58
|
|
|
|
Starting with Pike, Heat will do attribute resolution in a single pass. A
consequence of this is that when the result of a get_attr is passed to
another get_attr call, there must be a dependency relationship between the
resources so that the inner attribute is resolved first before we try to
determine which attributes are required from the resource in the outer
call.
There are two uses of nested get_attr in the overcloud template. One (which
hopefully can be removed soon) is in the allNodesConfig resource. In this
case, the {{primary_role_name}}IpListMap already depends on the
ServiceNetMap.
The second is in the KeystoneAdminVip output. This patch makes the VipMap
depend on the ServiceNetMap so that attributes can be resolved in a single
pass in that case.
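For context, a rough sketch of the pattern involved (attribute and key names
are best-effort recollections, not exact): the inner get_attr on ServiceNetMap
must be resolvable before Heat can determine which VipMap attribute the outer
call needs, hence the added dependency.
  KeystoneAdminVip:
    value:
      get_attr:
        - VipMap
        - net_ip_map
        # inner lookup must resolve before the outer attribute is known
        - get_attr: [ServiceNetMap, service_net_map, keystone_admin_api_network]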
Change-Id: I438a79748b9b408ec1101271d96c60d84028b57e
|
|
Just use the value from the ServerOsCollectConfigData resource in the
output instead of recalculating the value for each role via jinja.
Change-Id: I4e3bf4f25c9a8f677d5d177eb409594193a86405
|
|
Add a new output, DeployedServerEnvironmentOutput, that can be used as
the contents of an environment file to input into a services-only stack
when using split-stack. This output simplifies the manual steps needed
to deploy split-stack.
By default, the resource that generates the output is mapped to
OS::Heat::None.
implements blueprint split-stack-default
Change-Id: I6004cd3f56778f078a69a20e93a0eba0c574b3db
|
|
This exposes the nova server IDs for each role, and the bootstrap node
so that we can add this data to the tripleo dynamic ansible inventory
Change-Id: I2fc48eec77210805c0139fa4abcbf4dd721e7c37
|
|
|
|
Adds to the execution environment of the workflow steps a list of
per-service network IPs. This can be used by the workflows to
execute actions against the nodes hosting a given service.
Change-Id: Id7c735d53f04f6ad848b2f9f1adaa3c84ecd2fcd
Implements: blueprint tripleo-ceph-ansible
|
|
Introduces a general mechanism meant to allow for the execution
of workflows during the deployment steps.
Services can define workflow actions to be triggered during a step
in the newly added service_workflow_tasks section. The syntax is:
service_workflow_tasks:
  step2:
    - name: my_action_name
      action: std.echo
      input:
        output: 'hello world'
Implements: blueprint tripleo-ceph-ansible
Depends-On: If02799e7457ca017cc119317dfb2db7198a3559f
Depends-On: Ibc5707f9f06266fe84ad1dd91dcb984157871d30
Change-Id: I36a642fbc2076ad9e4a10ffc56d6d16f3ed6f27a
|
|
Add VipMap output to the top level stack output. VipMap is a mapping
from each network to the VIP address on that network. Also includes the
Redis VIP.
This output facilitates deploying split-stack so you can feed the VIP
addresses from VipMap as inputs into the services stack.
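Once deployed, the mapping can be inspected via the stack outputs, e.g.
(assuming the stack is named overcloud):
openstack stack output show overcloud VipMap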
implements blueprint split-stack-default
Change-Id: I245920994613c9bd10801c25fa545267aa49b239
|
|
Add 2 new environments to facilitate deploying split-stack:
environments/overcloud-baremetal.j2.yaml
environments/overcloud-services.j2.yaml
The environments are used to deploy 2 separate Heat stacks, one for just
the baremetal+network configuration and one for the service
configuration.
In order to keep Heat's view of the server hostnames consistent across
the 2 stacks, the 2 environments set the same HostnameFormat, with
"overcloud" as the stack name.
implements blueprint split-stack-default
Change-Id: I0b3f282c08af6fecea8f136908b806db70bada46
|
|
Adds a new output, ServerOsCollectConfigData, which is the
os-collect-config configuration associated with each server resource.
This can be used to [pre]configure the os-collect-config agents on
deployed-servers.
Having the data available as a stack output is more user friendly than
having to query several nested levels of stack resources, and then
inspect resource metadata.
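For example, rather than walking the nested stacks, something like the
following should return the data directly (stack name assumed to be overcloud):
openstack stack output show overcloud ServerOsCollectConfigData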
implements blueprint split-stack-default
Change-Id: Iaf062f1a72e2a9e4d97f84c67f72408a6b5cebfc
Depends-On: I8acfd67cd8138d587cc362184c84a08134bf3157
|
|
First, this parameter must match what is configured on the
undercloud, so strengthen that language.
There is also now an undercloud.conf parameter that can be used to
set the requisite options on the undercloud services, so just point
users at that rather than trying to explain how to configure the
services manually (which is error-prone and doesn't survive
undercloud updates).
Change-Id: I002cce176e3430473a29e79efde3464bddb24cc7
|
|
The existing host_config_and_reboot.role.j2.yaml was added in Ocata to
configure kernel args. This can be enhanced with the use of role-specific
parameters, which is done in the current patch. The earlier method is
deprecated and will be removed in the Q release.
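A hedged sketch of the new role-specific style (the role name and the
KernelArgs key are assumptions for illustration):
  parameter_defaults:
    ComputeOvsDpdkParameters:  # illustrative role name
      KernelArgs: 'default_hugepagesz=1GB hugepagesz=1G hugepages=32 iommu=pt intel_iommu=on'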
Implements: blueprint ovs-2-6-dpdk
Change-Id: Ib864f065527167a49a0f60812d7ad4ad12c836d1
|
|
Adds the ability to blacklist servers from all SoftwareDeployment
resources. The servers are specified in a new list parameter,
DeploymentServerBlacklist, by the Heat-assigned name
(overcloud-compute-0, etc.).
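Example usage (the server name is illustrative):
  parameter_defaults:
    DeploymentServerBlacklist:
      - overcloud-compute-0   # Heat-assigned server name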
implements blueprint disable-deployments
Change-Id: I46941e54a476c7cc8645cd1aff391c9c6c5434de
|
|
|
|
This exposes a list of hostnames, similar to the RoleNetIpMap; this
will be consumed by the dynamic inventory, ref
https://review.openstack.org/465558
Change-Id: I61efac5634e9b6fbb820e693c71a0adae5fa8b6a
|
|
Master is now the development branch for Pike; change the release
alias name accordingly.
Change-Id: I938e4a983e361aefcaa0bd9a4226c296c5823127
|
|
Looking up role_data is very slow, particularly when referencing the
RoleData output, as it re-resolves every output for all the (many) nested
stacks in the *ResourceChain resources.
There is work ongoing to optimize this in heat, but this approach improves
performance considerably (my local output-show for RoleData is 10x faster)
so we can consider including RoleData in the tripleo dynamic ansible inventory,
which may be needed for validations and minor updates in future.
Change-Id: I5e6665703e859dc1ec6b60dece70f858c9afaf66
|
|
When a service is enabled on multiple roles, the parameters for the
service will be global. This change adds an option to provide
role-specific parameters to services and other templates.
Two new parameters, RoleName and RoleParameters, are added to the
service template. RoleName provides the name of the role on which the
current instance of the service is being applied. RoleParameters
provides the list of parameters which are configured specifically for
that role in the environment file, like below:
  parameter_defaults:
    # Default value applied to all roles
    NovaReservedHostMemory: 2048
    ComputeDpdkParameters:
      # Applied only to the ComputeDpdk role
      NovaReservedHostMemory: 4096
In the above sample, the cluster contains 2 roles, Compute and ComputeDpdk.
The values of ComputeDpdkParameters will be passed to the templates
as RoleParameters when creating the stack for the ComputeDpdk role. A
parameter which supports role-specific configuration should be looked up
first in the RoleParameters list; if not found, the default (for all
roles) should be used.
Implements: blueprint tripleo-derive-parameters
Change-Id: I72376a803ec6b2ed93903cc0c95a6ffce718b6dc
|
|
When implementing custom roles, we lost an implicit dependency that
ensured AllNodesExtraConfig is applied before AllNodesDeploySteps,
which causes problems if you need to write hieradata via the
AllNodesExtraConfig hook (some cisco integrations we have in tree
do this, and are now broken because the ordering is no longer ensured).
Change-Id: Ie78ecbb4e135ab7f196867ef9d8d271049a9cd10
Closes-Bug: #1687597
|
|
|
|
Fetch the host public keys from each node, combine them all and write to the
system-wide ssh known hosts. The alternative of disabling host key
verification is vulnerable to a MITM attack.
Change-Id: Ib572b5910720b1991812256e68c975f7fbe2239c
|
|
Prior to Ocata, the Controller role was hardcoded for various lookups.
When we switched to having the primary role name being dynamically
pulled from the roles_data.yaml using the first role as the primary
role as part of I36df7fa86c2ff40026d59f02248af529a4a81861, it
introduced a regression for folks who had previously been using
a custom roles file without the Controller being listed first.
Instead of relying on the position of the role in the roles data, this
change adds the concept of tags to the role data that can be used when
looking for specific functionality within the deployment process. If
no role is tagged as a 'primary' 'controller', it will fall back to
using the first role listed in the roles data as the primary role.
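A sketch of how a role might be tagged in roles_data.yaml:
  - name: Controller
    tags:
      - primary
      - controller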
Change-Id: Id3377e7d7dcc88ba9a61ca9ef1fb669949714f65
Closes-Bug: #1677374
|
|
To enable easier detection of the IPs associated with nodes (such
as to enable the tripleo-validations ansible inventory to work with
custom roles more easily) expose the data we already have about the
nodes/roles and the list of IPs for each network.
Change-Id: I5667a142f47fbff120c703bedadd8b6e163c9480
|
|
This change adds two files which demonstrate manipulation of the
VIP IP addresses without using an external load balancer. This
allows the configuration of DNS, or allows for continuity when
replacing an existing environment.
The fixed IPs for the virtual IPs are set using the new parameters,
and this change also adds a RedisVirtualFixedIPs parameter for
setting the Redis VIP.
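A hedged sketch of the kind of settings the new files demonstrate (parameter
names other than RedisVirtualFixedIPs, and all addresses, are assumptions):
  parameter_defaults:
    # addresses and non-Redis parameter names are illustrative
    ControlFixedIPs: [{'ip_address': '192.168.24.9'}]
    PublicVirtualFixedIPs: [{'ip_address': '10.0.0.9'}]
    RedisVirtualFixedIPs: [{'ip_address': '172.16.2.9'}]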
Partial-Bug: https://bugs.launchpad.net/tripleo/+bug/1604946
Change-Id: I4e926f1c6b30d4009d24a307bc21e07e1731b387
|
|
|