Introduces a general mechanism to allow the execution of workflows
during the deployment steps.
Services can define workflow actions to be triggered during a step
in the newly added service_workflow_tasks section. The syntax is:
  service_workflow_tasks:
    step2:
      - name: my_action_name
        action: std.echo
        input:
          output: 'hello world'
Implements: blueprint tripleo-ceph-ansible
Depends-On: If02799e7457ca017cc119317dfb2db7198a3559f
Depends-On: Ibc5707f9f06266fe84ad1dd91dcb984157871d30
Change-Id: I36a642fbc2076ad9e4a10ffc56d6d16f3ed6f27a
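
A minimal sketch of where this section might live, assuming (as the text
above suggests) that a service template exposes it as part of its role_data
output next to config_settings and step_config; the service name is
hypothetical:
  outputs:
    role_data:
      value:
        service_name: my_example_service
        config_settings: {}
        step_config: ''
        service_workflow_tasks:
          step2:
            - name: my_action_name
              action: std.echo
              input:
                output: 'hello world'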
This was made configurable in a recent commit [1]. This flag makes it
easier for deployers to use that functionality.
[1] Ic68266eaf39d6803f7c3e299095578bbcfd63b88
Change-Id: Iffff20dcda53bc7237586dd240e581bcb0282844
Starting with the Ocata release, bare metal nodes are no longer recognized
by nova automatically. To avoid forcing users to run a nova-manage command
each time they enroll a node, we have to allow enabling the periodic task
that does this.
Change-Id: I8b0afac54dc9bd51dbe2ae4f237e4de50459be0f
Closes-Bug: #1697724
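
A hedged sketch of how a deployer might tune this; the TripleO parameter
name below is an assumption, while the underlying nova option is
[scheduler]/discover_hosts_in_cells_interval (a positive value runs
discovery periodically, -1 disables it):
  parameter_defaults:
    # Assumed parameter name; maps to nova.conf [scheduler]/discover_hosts_in_cells_interval
    NovaSchedulerDiscoverHostsInCellsInterval: 300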
In order to deploy OpenDaylight with DPDK we need to copy the DPDK
config for OVS done in the neutron-ovs-dpdk service template, without
enabling the OVS agent for compute nodes. To do this correctly, we should
inherit an openvswitch service which is a common place to set OVS
configuration and parameters. Note: the vswitch::dpdk config will be called
in the prenetwork setup with ovs_dpdk_config.yaml, so there is no need to
include it in the step config for the neutron-ovs-dpdk-agent service or
opendaylight-ovs-dpdk.
Changes Include:
- Creates a common openvswitch service template, which in the future
will migrate to be its own service.
- Renames and fixes OVS DPDK configuration heat parameters in the
openvswitch template.
- neutron-ovs-dpdk-agent now inherits the common openvswitch template.
- Adds opendaylight-ovs-dpdk template which also inherits common ovs
template.
- Uses OVS DPDK config script to allow configuring OVS DPDK in
prenetwork config (before os-net-config runs). This has an issue
where hieradata is not present yet, so we have to redefine the heat
parameters and pass them via bash. In the future this should be
corrected.
- Adds an opendaylight-dpdk environment file used for ODL + DPDK
deployments.
- Updates neutron-ovs-dpdk environment file.
Closes-Bug: 1656097
Partial-Bug: 1656096
Depends-On: I3227189691df85f265cf84bd4115d8d4c9f979f3
Change-Id: Ie80e38c2a9605d85cdf867a31b6888bfcae69e29
Signed-off-by: Tim Rozet <trozet@redhat.com>
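
A minimal sketch of the inheritance pattern described above; the file name
of the common template and the service name are assumptions, but the
nested-resource plus get_attr approach is the usual way one THT service
template reuses another's config_settings:
  resources:
    Ovs:
      type: ./openvswitch.yaml   # assumed path of the common OVS service template
      properties:
        EndpointMap: {get_param: EndpointMap}
        ServiceNetMap: {get_param: ServiceNetMap}
        DefaultPasswords: {get_param: DefaultPasswords}
  outputs:
    role_data:
      value:
        service_name: opendaylight_ovs_dpdk   # hypothetical name
        config_settings:
          get_attr: [Ovs, role_data, config_settings]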
DPDK has to be enabled in openvswitch at boot, before the network is
configured, because when the network uses DPDK ports OvS must already
be ready to handle DPDK. DPDK is enabled via PreNetworkConfig by
checking whether ServiceNames contains the DPDK service.
Implements: blueprint ovs-2-6-dpdk
Closes-Bug: #1654975
Depends-On: I83a540336c01a696780621fb2b39486a6abf0917
Change-Id: I7af4534d91e67c94ba559b78b9ac6a001e639db3
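
A hedged sketch of the kind of check described, using Heat's yaql function
to test whether ServiceNames contains the DPDK agent service (the exact
service name string and where the result is consumed are assumptions):
  enable_dpdk:
    yaql:
      expression: $.data.service_names.contains('neutron-ovs-dpdk-agent')
      data:
        service_names: {get_param: ServiceNames}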
The glance API network was being set to storage when it should be
internal_api.
Closes-Bug: 1699535
Change-Id: I75bc05aeab999f0e3eb3f4ebaceb276e888addc9
Signed-off-by: Tim Rozet <trozet@redhat.com>
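
For illustration, the mapping in question can also be overridden per
deployment via ServiceNetMap in an environment file; this sketch simply
pins the value the patch corrects the default to:
  parameter_defaults:
    ServiceNetMap:
      GlanceApiNetwork: internal_api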
This was missed in the original patch series, but this is a
significant enough change that it needs to be communicated to users.
Change-Id: Ibcfdde3cc544d152e78529fc57a63c7dc6592c4f
This will set the max_active_keys setting in keystone.conf; subsequently
tripleo-common will read this value to purge keys if necessary.
bp keystone-fernet-rotation
Change-Id: I9c6b0708c2c03ad9918222599f8b6aad397d8089
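
A hedged example of setting this from an environment file; the parameter
name here is an assumption, while the keystone.conf option it feeds is
[fernet_tokens]/max_active_keys:
  parameter_defaults:
    # Assumed parameter name; ends up as [fernet_tokens]/max_active_keys in keystone.conf
    KeystoneFernetMaxActiveKeys: 5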
Add VipMap output to the top level stack output. VipMap is a mapping
from each network to the VIP address on that network. Also includes the
Redis VIP.
This output facilitates deploying split-stack so you can feed the VIP
addresses from VipMap as inputs into the services stack.
implements blueprint split-stack-default
Change-Id: I245920994613c9bd10801c25fa545267aa49b239
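
Roughly, the new output maps each network name to its VIP; the shape would
be something like the following, with purely illustrative addresses:
  VipMap:
    ctlplane: 192.168.24.10
    external: 10.0.0.4
    internal_api: 172.16.2.10
    storage: 172.16.1.10
    storage_mgmt: 172.16.3.10
    redis: 172.16.2.11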
Add 2 new environments to facilitate deploying split-stack:
environments/overcloud-baremetal.j2.yaml
environments/overcloud-services.j2.yaml
The environments are used to deploy 2 separate Heat stacks, one for just
the baremetal+network configuration and one for the service
configuration.
In order to keep Heat's view of the server hostnames consistent across
the 2 stacks, the 2 environments set the same HostnameFormat with
"overcloud" as the stack name.
implements blueprint split-stack-default
Change-Id: I0b3f282c08af6fecea8f136908b806db70bada46
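
A hedged sketch of the idea: both environments pin the hostname format
with a literal "overcloud" prefix rather than relying on %stackname%,
e.g. (the exact per-role formats are assumptions):
  parameter_defaults:
    ControllerHostnameFormat: overcloud-controller-%index%
    ComputeHostnameFormat: overcloud-novacompute-%index%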
The DeploymentSwiftDataMap parameter is used to set the
deployment_swift_data property on the Server resources. The parameter is
a map of role names and node indexes to the Swift container and object
names to be used for storing deployment data.
The parameter allows for using predefined Swift objects for storing
deployment data instead of container/object names with UUIDs generated
by Heat.
implements blueprint split-stack-default
Depends-On: Ia07e9374a4b95bd0e74fc47fb9df4bf6ad096715
Change-Id: I471037de35e7f349d900462ec3ffb16fe2d6ebd9
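
An illustrative sketch of the map's shape, assuming container/object keys
per role name and node index (all names here are placeholders):
  parameter_defaults:
    DeploymentSwiftDataMap:
      ControllerDeployedServer:
        0:
          container: overcloud-controller-0
          object: deployment-data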
Adds a new output, ServerOsCollectConfigData, which is the
os-collect-config configuration associated with each server resource.
This can be used to [pre]configure the os-collect-config agents on
deployed-servers.
Having the data available as a stack output is more user friendly than
having to query several nested levels of stack resources and then
inspect resource metadata.
implements blueprint split-stack-default
Change-Id: Iaf062f1a72e2a9e4d97f84c67f72408a6b5cebfc
Depends-On: I8acfd67cd8138d587cc362184c84a08134bf3157
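
For illustration, once the stack is created the data can be fetched with
"openstack stack output show overcloud ServerOsCollectConfigData"; the
output is keyed by server name, roughly:
  ServerOsCollectConfigData:
    overcloud-controller-0:
      # os-collect-config settings for this server (collectors, metadata URLs, credentials)
    overcloud-novacompute-0:
      # likewise for each other server resource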
Change-Id: I8dca09372a58b6dacbb8e65602e1b0bdb6c01ae7
Related-Blueprint: example-custom-role-environments
The current port conflicts with trove. This is updated in the puppet
module. See the related change: https://review.openstack.org/#/c/471551/
Change-Id: Iefacb98320eef0bca782055e3da5d243993828d7
With the addition of the KeystoneFernetKeys parameter, it's now possible
to do fernet key rotations by modifying the KeystoneFernetKeys variable
in mistral; subsequently a rotation can happen when doing a stack update.
So this re-enables puppet's management of the key files. However,
this is left configurable, as folks might want to manage those files
out-of-band.
bp keystone-fernet-rotation
Change-Id: Ic82fb8b8a76481a6e588047acf33a036cf444d7d
This uses the newly introduced dict with the keys and paths instead of
the individual keys. This has the advantage that rotation will be
possible on stack update, as we no longer have a limit on how many keys
we can pass (as we did with the individual parameters).
bp keystone-fernet-rotation
Change-Id: I7d224595b731d9f3390fce5a9d002282b2b4b8f2
Depends-On: I63ae158fa8cb33ac857dcf9434e9fbef07ecb68d
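
An illustrative sketch of the dict's shape as described (key path to key
content), with placeholder key material:
  parameter_defaults:
    KeystoneFernetKeys:
      /etc/keystone/fernet-keys/0:
        content: PLACEHOLDER_FERNET_KEY_0
      /etc/keystone/fernet-keys/1:
        content: PLACEHOLDER_FERNET_KEY_1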
The existing host_config_and_reboot.role.j2.yaml was added in Ocata to
configure kernel args. It can be enhanced with the use of role-specific
parameters, which is done in the current patch. The earlier method is
deprecated and will be removed in the Queens release.
Implements: blueprint ovs-2-6-dpdk
Change-Id: Ib864f065527167a49a0f60812d7ad4ad12c836d1
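
A hedged sketch of the role-specific parameters approach; the role name
and kernel arguments below are illustrative only:
  parameter_defaults:
    ComputeOvsDpdkParameters:
      KernelArgs: "default_hugepagesz=1GB hugepagesz=1G hugepages=32 iommu=pt intel_iommu=on"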
Gnocchi 4 supports storage sacks during upgrade. Let's make this
configurable in case we want to use more metricd workers.
Change-Id: Ibb2ee885e59d43c1ae20887ec1026786d58c6b9e
Add new parameters that control the NAS security settings in Cinder's
NFS and NetApp back end drivers. The settings are disabled by default.
Partial-Bug: #1688332
Depends-On: I76e2ce10acf7b671be6a2785829ebb3012b79308
Change-Id: I306a8378dc1685132f7ea3ed91d345eaae70046f
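
A hedged example of turning the settings on from an environment file; the
parameter names are assumptions, while the underlying Cinder driver options
are nas_secure_file_operations and nas_secure_file_permissions (accepting
'auto', 'true' or 'false'):
  parameter_defaults:
    # Assumed parameter names for the NFS back end
    CinderNasSecureFileOperations: auto
    CinderNasSecureFilePermissions: auto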
This patch adds the templates required to enable the OVN DB servers
to be started in master/slave mode in the pacemaker cluster.
For the OVN DBs base profile, ::tripleo::haproxy expects the parameter
'ovn_dbs_manage_lb' set to true in order for it to configure OVN DBs
for load balancing (please see this commit [1]). So this patch sets
'ovn_dbs_manage_lb' to true.
[1] - I9dc366002ef5919339961e5deebbf8aa815c73db
Co-authored-by: Babu Shanmugam (babu.shanmugam@gmail.com)
Depends-on: I94d3960e6c5406e3af309cc8c787ac0a6c9b1756
Change-Id: I60c55abfc523973aa926d8a12ec77f198d885916
Closes-bug: #1670564
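
A hedged sketch of opting into the pacemaker-managed variant through the
resource_registry; the template path is an assumption based on the usual
layout of pacemaker service templates:
  resource_registry:
    OS::TripleO::Services::OVNDBs: ../puppet/services/pacemaker/ovn-dbs.yaml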
Parameters which are not part of the heat environment template
are required by workflows like derive parameters. In order to
separate them from the heat environment parameters, the workflow-only
parameters will be provided via the plan-environment section
workflow_parameters.
Implements: blueprint tripleo-derive-parameters
Change-Id: I36d295223c28afff1e0996b4885b8a81c00842f0
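
An illustrative sketch of the plan-environment section described above;
the workflow name and its inputs are examples of what might be passed,
not an authoritative list:
  workflow_parameters:
    tripleo.derive_params.v1.derive_parameters:
      num_phy_cores_per_numa_node_for_pmd: 2
      huge_page_allocation_percentage: 50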
The deploy-artifacts.sh script is supposed to support installing rpms
when provided by DeployArtifactUrls. The problem is that it uses yum to
install them, which does not actually work unless the filename ends with
.rpm. This change updates the script to rename the downloaded file to end
with .rpm if it is an rpm, so that it is properly installed.
Change-Id: I048d2b4474f9efe424e98e3868f325704e9c352f
Closes-Bug: #1697102
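
For context, a minimal example of handing an rpm to the script via the
parameter mentioned above (spelling taken from this commit message; the
URL is illustrative):
  parameter_defaults:
    DeployArtifactUrls:
      - http://example.com/packages/my-package.rpm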
Implements: blueprint ironic-inspector-composable-service
Co-Authored-By: Dmitry Tantsur <dtantsur@redhat.com>
Change-Id: I825516f9f5c2b0c03a3f497d6954022714aab988
This reverts commit a915b150018bf306a5942782bf93c5faadcd7cde.
The argument was renamed and is causing promotions to fail.
Change-Id: I7e1674cff75b606c20956edddf70eee2990fca78
As we create new standard roles, we should include them from a single
location for ease of use and to reduce the duplication of the role
definitions elsewhere. This change adds a roles folder to the THT that
can be used with the new roles commands in python-tripleoclient by the
end user to generate a roles_data.yaml from a standard set of roles.
Depends-On: I326bae5bdee088e03aa89128d253612ef89e5c0c
Change-Id: Iad3e9b215c6f21ba761c8360bb7ed531e34520e6
Related-Blueprint: example-custom-role-environments
Doing this via puppet has the consequence of including the step_config
and getting it included in the host manifest. Let's disable it via an
ansible upgrade task instead.
Change-Id: I5f1a4019dd635dea67db4313bd06a228ae7bacd4
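
A hedged sketch of the pattern, with a hypothetical service name: in a
service template's role_data, an upgrade task stops and disables the unit
directly instead of relying on puppet:
  upgrade_tasks:
    - name: Stop and disable the example service before the upgrade
      tags: step2
      service: name=example-service state=stopped enabled=no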
Gnocchi 4 supports storage sacks during upgrade. Let's make this
configurable in case we want to use more metricd workers.
Change-Id: I27390b8babf8c4ef35f4c9b8a2e5be69fb9a54ee
Add ServiceDebug parameters for each service that will allow operators
to enable/disable Debug for specific services.
We keep the Debug parameter for backward compatibility.
Operators who want to enable Debug everywhere:
  Debug: true
Operators who want to disable Debug everywhere:
  Debug: false
Operators who want to disable Debug everywhere except Glance:
  GlanceDebug: true
Operators who want to enable Debug everywhere except Glance:
  Debug: true
  GlanceDebug: false
New parameters: AodhDebug, BarbicanDebug, CeilometerDebug, CinderDebug,
CongressDebug, GlanceDebug, GnocchiDebug, HeatDebug, HorizonDebug,
IronicDebug, KeystoneDebug, ManilaDebug, MistralDebug, NeutronDebug,
NovaDebug, OctaviaDebug, PankoDebug, SaharaDebug, TackerDebug,
ZaqarDebug.
Note: for backward compatibility in Horizon, HorizonDebug is set to
false, so we maintain previous behavior.
Change-Id: Icbf4a38afcdbd8471d1afc11743df9705451db52
Implement-blueprint: composable-debug
Closes-Bug: #1634567
This helps with processing the backlog, so let's update
the default out of the box.
Change-Id: I06d4ca95f4a1da2864f4845ef3e7a74a1bce9e41
Idle compute nodes are found to already consume ~1.5GB of memory, so
2GB is a bit tight. Increasing to 4GB to be on the safe side. Also
see https://bugzilla.redhat.com/show_bug.cgi?id=1341178
Change-Id: Ic95984b62a748593992446271b197439fa12b376
Adds the ability to blacklist servers from all SoftwareDeployment
resources. The servers are specified in a new list parameter,
DeploymentServerBlacklist, by their Heat-assigned names
(overcloud-compute-0, etc.).
implements blueprint disable-deployments
Change-Id: I46941e54a476c7cc8645cd1aff391c9c6c5434de
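
A minimal example of the new parameter, using the Heat-assigned name
from the text above:
  parameter_defaults:
    DeploymentServerBlacklist:
      - overcloud-compute-0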