Ceilometer Alarm is deprecated in Liberty in favor of Aodh.
This patch:
* manages the Aodh Keystone resources
* deploys the Aodh API under WSGI, plus the Notifier, Listener and Evaluator services
* adds new parameters to customize the Aodh deployment (sketched below)
* uses the Ceilometer DB for the upgrade path
* adds the Pacemaker configuration
Depends-On: I9e34485285829884d9c954b804e3bdd5d6e31635
Depends-On: I891985da9248a88c6ce2df1dd186881f582605ee
Depends-On: Ied8ba5985f43a5c5b3be5b35a091aef6ed86572f
Co-Authored-By: Pradeep Kilambi <pkilambi@redhat.com>
Change-Id: I58d419173e80d2462accf7324c987c71420fd5f6
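As an illustration only, a minimal sketch of how one of the new settings might be overridden in an environment file; AodhPassword is the only parameter assumed here, and other Aodh options would follow the same parameter_defaults pattern.
  # Minimal sketch of an environment file overriding an Aodh setting.
  # AodhPassword is an assumption; other Aodh options would be set the
  # same way under parameter_defaults.
  parameter_defaults:
    AodhPassword: 'secret-service-password'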
|
This splits the upgrade script delivery out of the UpgradeWorkflow
and into a new task which delivers the upgrade script for
compute and object-storage nodes. This is intended to be the first
part of the upgrade process, since the swift nodes need to be
upgraded before the controllers, and only one at a time. The
delivered upgrade script can then be invoked by the operator using
the existing script in tripleo-common,
'upgrade-non-controller.sh'.
This is enabled by passing -e
environments/major-upgrade-script-delivery.yaml (added here) to
the openstack overcloud deploy command.
Change-Id: I20a0d4978e907111404f8108c502ab53b69a3296
|
This change adds a sample network-environment.yaml file to the
environments. This sample includes pointers to NIC config files,
as well as default network subnets and allocation pools.
This is meant to be a demonstration of the default settings for
a virtual deployment. In a real deployment, the operator would
customize the settings here and point to custom NIC config
templates.
Change-Id: I0288c0680effea06b5f805a0d955e8bbf6152ba6
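An abbreviated sketch of what such a sample might contain; the registry keys, template paths and parameter names follow the usual TripleO layout and should be treated as assumptions here.
  # Abbreviated sketch of a sample network-environment.yaml
  # (assumed registry keys, paths and parameter names).
  resource_registry:
    # Point each role at a NIC config template; paths are illustrative.
    OS::TripleO::Compute::Net::SoftwareConfig: ../network/config/single-nic-vlans/compute.yaml
    OS::TripleO::Controller::Net::SoftwareConfig: ../network/config/single-nic-vlans/controller.yaml

  parameter_defaults:
    # Default subnets and allocation pools for a virtual deployment.
    InternalApiNetCidr: 172.17.0.0/24
    InternalApiAllocationPools: [{'start': '172.17.0.10', 'end': '172.17.0.200'}]
    ExternalNetCidr: 10.0.0.0/24
    ExternalAllocationPools: [{'start': '10.0.0.10', 'end': '10.0.0.50'}]
    ExternalInterfaceDefaultRoute: 10.0.0.1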
|
During upgrades, we run Puppet across the whole deployment to converge
the state only after the upgrade workflow itself has fully
completed. That is an opportunity to use Puppet to make sure Nova
Compute RPC doesn't remain pinned to the older version.
Change-Id: I6ebc813a80dfd9dfbbb213c38724487e044507b8
|
When set to True, this parameter leads to a non-operational state
in a Nexus VXLAN topology: the VNIs are created,
but the NVE peers do not get discovered on the Nexus.
Debugging the configuration and finding out that this parameter
should be changed to False is a time-consuming process.
To prevent this problem in future deployments,
we default the parameter to False.
Change-Id: I685ad7d212af0d9e568acbf1ccf1607d120c195e
|
This parameter can be used for pinning (and later unpinning) the Nova
Compute RPC version.
Change-Id: I2f181f3b01f0b8059566d01db0152a12bbbd1c3e
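A hedged example of the usage, assuming the parameter is named UpgradeLevelNovaCompute; it would be set to the release being upgraded from to pin the RPC version, and cleared again to unpin.
  # Sketch: pin Nova Compute RPC to the old release during an upgrade.
  # The parameter name is an assumption; set it back to '' to unpin.
  parameter_defaults:
    UpgradeLevelNovaCompute: 'liberty'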
|
Change-Id: I7226070aa87416e79f25625647f8e3076c9e2c9a
|
Add Heat software deployments to be used to upgrade major versions of
OpenStack on the controller nodes. All controller services are taken
down while the upgrade is in progress.
The new yum repositories should be configured by another process,
e.g. the deployment artifacts transfer via Swift.
Change-Id: Ia0a04e4a11d67e7a5acc53c1f8a8f01ed5ca8675
Co-Authored-By: Giulio Fidente <gfidente@redhat.com>
Co-Authored-By: Jiri Stransky <jistr@redhat.com>
|
This change adds extra config yaml files for the Big Switch agent
and Big Switch LLDP.
This change is mainly for compute nodes; the changes related
to controller nodes landed in e78e1c8d9b5a7ebf327987b22091bff3ed42d1c1.
This change also removes the neutron_enable_bigswitch_ml2 flag. Instead,
the user needs to specify NeutronMechanismDrivers: bsn_ml2 in an
environment file (see the sketch below).
Previous discussion about this change can be found in the abandoned
review request https://review.openstack.org/#/c/271940/
Depends-On: Iefcfe698691234490504b6747ced7bb9147118de
Change-Id: I81341a4b123dc4a8312a9a00f4b663c7cca63d7c
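For example, the mechanism driver can be selected in an environment file like this; only the NeutronMechanismDrivers value comes from the commit itself, the surrounding layout is the usual parameter_defaults pattern.
  # Select the Big Switch ML2 mechanism driver now that the flag is gone.
  parameter_defaults:
    NeutronMechanismDrivers: bsn_ml2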
|
Changed the heat-docker-agents namespace to use the namespacing
specified in the environment file, which reduces the modifications
required of the user when using a local registry.
Changed the start-agents script to handle using a local registry both
with a namespace and without.
Change-Id: I16cc96b7ecddeeda07de45f50ffc6a880dabbba6
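A sketch of the environment-file side of this, assuming parameters along the lines of DockerNamespace and DockerNamespaceIsRegistry (both names are assumptions here).
  # Assumed parameters: point the heat-docker-agents image at a local
  # registry, with or without an extra namespace component.
  parameter_defaults:
    DockerNamespace: '192.168.0.1:8787/tripleoupstream'
    DockerNamespaceIsRegistry: true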
|
Deploy a TripleO overcloud with OpenContrail Vrouter plugin configured
to interact with an existing OpenContrail Server Manager.
OpenContrail is an Apache 2.0-licensed project that is built using
standards-based protocols and provides all the necessary components for
network virtualization: an SDN controller, a virtual router, an
analytics engine, and published northbound APIs. It has an extensive
REST API to configure the system and gather operational and analytics
data from it.
Co-Authored-By: Jiri Stransky <jistr@redhat.com>
Change-Id: I699a7c4ea09d024fe4d70c6a507c524f0a7aafd5
|
Define environments that create VLANs attached to a single physical NIC,
as 'single-nic-vlans' does, but using linux_bridge instead of ovs_bridge.
Change-Id: I8c6fe9ec7028178f783e7d9c0a1cc67a1517eb3d
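A sketch of how such an environment might remap the NIC configs, assuming a single-nic-linux-bridge-vlans directory mirroring the OVS-based one; keys and paths are assumptions.
  # Assumed registry keys/paths: use Linux bridge variants of the
  # single-NIC VLAN configs instead of the OVS bridge ones.
  resource_registry:
    OS::TripleO::Controller::Net::SoftwareConfig: ../network/config/single-nic-linux-bridge-vlans/controller.yaml
    OS::TripleO::Compute::Net::SoftwareConfig: ../network/config/single-nic-linux-bridge-vlans/compute.yaml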
|
Right now our vncproxy settings are hard-coded to http and the
non-SSL port. This change adds a vncproxy entry to the endpoint
map and uses those values to configure the proxy correctly on
compute nodes. This is sufficient to get it working in my
environment with SSL enabled.
Change-Id: I9d69b088eef4700959b33c7e0eb44932949d7b71
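A sketch of what the new entries inside the EndpointMap parameter might look like; the key names, ports and the CLOUDNAME/IP_ADDRESS placeholders are assumptions based on the existing EndpointMap format.
  # Assumed shape of the vncproxy entries in the EndpointMap; with SSL
  # enabled the public entry switches to https and the SSL port.
  parameter_defaults:
    EndpointMap:
      NovaVNCProxyInternal: {protocol: 'http', port: '6080', host: 'IP_ADDRESS'}
      NovaVNCProxyPublic: {protocol: 'https', port: '13080', host: 'CLOUDNAME'}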
|
Enables support for configuring Cinder with a Dell
Storage Center iSCSI storage backend.
This change adds all relevant parameters for:
- Dell Storage Center SC Series (iSCSI)
Change-Id: I3b1a4346f494139ab123c7dc1a62f81d03c9e728
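A sketch of the relevant settings; the parameter names follow the usual Cinder backend naming and are assumptions here.
  # Assumed parameter names for the Dell Storage Center iSCSI backend.
  parameter_defaults:
    CinderEnableDellScBackend: true
    CinderDellScSanIp: 192.168.0.50
    CinderDellScSanLogin: Admin
    CinderDellScSanPassword: secret
    CinderDellScSsn: 64702
    CinderDellScIscsiIpAddress: 192.168.1.50
    CinderDellScIscsiPort: 3260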
|
- keep Ceph enabled when the Ceph node count is greater than 0
- also enable Ceph if ControllerEnableCephStorage is true
The intention here is to be able to run Ceph without having dedicated
nodes for it. Enabling Ceph via the ControllerEnableCephStorage
parameter instead allows Ceph to be colocated on the controllers without
having to run any dedicated Ceph nodes (see the sketch below).
Change-Id: I71062d37226c679156380c0f4e194b51cb586bcf
Signed-off-by: Dan Radez <dradez@redhat.com>
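A sketch of such a deployment; ControllerEnableCephStorage is taken from the message above, while CephStorageCount as the node-count parameter is an assumption.
  # Colocate Ceph on the controllers instead of dedicated Ceph nodes.
  # CephStorageCount as the scale parameter is an assumption.
  parameter_defaults:
    ControllerEnableCephStorage: true
    CephStorageCount: 0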
|
The template will allow the neutron agents to be configured so that the
network isolation templates can run on the containerized compute node.
Co-Authored-By: Dan Prince <dpince@redhat.com>
Change-Id: I7837ed7ed3e807ec5c1276904893695918bef293
|
Deploy a TripleO overcloud with networking-midonet. MidoNet is a
monolithic plugin, so quite a few changes to the puppet manifests are needed.
Depends-On: I72f21036fda795b54312a7d39f04c30bbf16c41b
Depends-On: I6f1ac659297b8cf6671e11ad23284f8f543568b0
Depends-On: Icea9bd96e4c80a26b9e813d383f84099c736d7bf
Change-Id: I9692e2ef566ea37e0235a6059b1ae1ceeb9725ba
|
This change allows every overcloud node to optionally participate in
any of the isolated networks. The optional networks are not enabled
by default, but allow additional flexibility. Since the new networks
are not enabled by default, the standard deployment is unchanged.
This change was originally requested for OpenDaylight support.
There are several use cases for using non-standard networks.
For instance, one example might be adding the Internal API network
to the Ceph nodes, in order to use that network for administrative
functions. Another example would be adding the Storage Management
network to the compute nodes, in order to use it for backup. Without
this change, any deviation from the standard set of roles that use a
network is a custom change to the Heat templates, which makes
upgrades much more difficult.
Change-Id: Ia386c964aa0ef79e457821d8d96ebb8ac2847231
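A sketch of how two of the examples above might be enabled, following the per-role resource_registry port pattern; the exact keys and paths are assumptions.
  # Assumed registry keys: attach the Internal API network to the Ceph
  # nodes and the Storage Management network to the compute nodes.
  resource_registry:
    OS::TripleO::CephStorage::Ports::InternalApiPort: ../network/ports/internal_api.yaml
    OS::TripleO::Compute::Ports::StorageMgmtPort: ../network/ports/storage_mgmt.yaml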
|
This change adds a system management network to all overcloud
nodes. The purpose of this network is for system administration,
for access to infrastructure services like DNS or NTP, or for
monitoring. This allows the management network to be placed on a
bond for redundancy, or for the system management network to be
an out-of-band network with no routing in or out. The management
network might also be configured as a default route instead of the
provisioning 'ctlplane' network.
This change does not enable the management network by default. An
environment file named network-management.yaml may be included to
enable the network and ports for each role. The included NIC config
templates have been updated with a block that may be uncommented
when the management network is enabled.
This change also contains some minor cleanup of the NIC templates,
particularly the multiple-NIC templates.
Change-Id: I0813a13f60a4f797be04b34258a2cffa9ea7e84f
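A sketch of what network-management.yaml might enable; the registry keys, template paths and the Management* parameters are assumptions modeled on the existing network environments.
  # Assumed contents of environments/network-management.yaml: enable the
  # Management network and a management port for each role.
  resource_registry:
    OS::TripleO::Network::Management: ../network/management.yaml
    OS::TripleO::Controller::Ports::ManagementPort: ../network/ports/management.yaml
    OS::TripleO::Compute::Ports::ManagementPort: ../network/ports/management.yaml

  parameter_defaults:
    ManagementNetCidr: 10.0.1.0/24
    ManagementAllocationPools: [{'start': '10.0.1.10', 'end': '10.0.1.50'}]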
|
In previous releases, when not using network isolation, we used to create
two different VIPs, the ControlVirtualIP and the PublicVirtualIP, both on
the ctlplane network. Later we moved to a configuration with a single
VIP instead, so we need a compatibility yaml for those updating from old
versions which preserves both IPs; otherwise one of the two is deleted.
Also updates README.md with a short description of the use case.
Change-Id: Iae08b938a255bf563d3df2fdc0748944a9868f8e
|
This change adds a sample environment file which documents how to
assign controllers a predictable IP on each network.
Change-Id: I5be21428c66c82488af8e0240c1614ac3b9b55f0
|
This change adds a new *_from_pool.yaml port template meant to return an
IP from a list instead of allocating a Neutron port. This is useful for
picking IPs from a pre-defined list, making it possible to configure, for
example, an external balancer (or DNS) in advance with the future IPs of
the controller nodes.
The list of IPs is provided via parameter_defaults, in the ControllerIPs
struct (see the sketch below).
Also some additional VipPort types are created for the *VirtualIP
resources. The VIPs were previously created using the same port
resource used by the nodes, but when deploying with an external
balancer we want the VIP resources to be nooped instead.
Change-Id: Id3d4f12235501ae77200430a2dc022f378dce336
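A sketch of the usage for the Internal API network; the ControllerIPs struct comes from the message above, while the registry key and template path are assumptions.
  # Take the controller Internal API IPs from a pre-defined list instead
  # of allocating Neutron ports (assumed registry key and path).
  resource_registry:
    OS::TripleO::Controller::Ports::InternalApiPort: ../network/ports/internal_api_from_pool.yaml

  parameter_defaults:
    ControllerIPs:
      # One entry per controller node, in index order.
      internal_api:
        - 172.16.2.251
        - 172.16.2.252
        - 172.16.2.253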
|
This enables pacemaker maintenance mode when running Puppet on stack
update. Puppet can try to restart some overcloud services, which
pacemaker tries to prevent, and this can result in a failed Puppet run.
At the end of the puppet run, certain pacemaker resources are restarted
in an additional SoftwareDeployment to make sure that any config changes
have been fully applied. This is only done on stack updates (when
UpdateIdentifier is set to something), because the assumption is that on
stack create services already come up with the correct config.
(Change I9556085424fa3008d7f596578b58e7c33a336f75 has been squashed into
this one.)
Change-Id: I4d40358c511fc1f95b78a859e943082aaea17899
Co-Authored-By: Jiri Stransky <jistr@redhat.com>
Co-Authored-By: James Slagle <jslagle@redhat.com>
|
If there is a value for the certificate path (which should only happen
if the environment for enabling TLS is used), the loadbalancer will
detect it and configure its frontends accordingly. A proper override
for the example environment is also provided, since we want to pass
the hosts and protocols correctly so that tripleoclient will pick them
up and pass them to os-cloud-config.
Change-Id: Ifba51495f0c99398291cfd29d10c04ec33b8fc34
Depends-On: Ie2428093b270ab8bc19fcb2130bb16a41ca0ce09
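A sketch of the enable-TLS style override; the SSLCertificate/SSLKey parameter names and the single https EndpointMap entry shown here are assumptions.
  # Assumed parameter names for the certificate/key consumed by the
  # loadbalancer, plus one public endpoint switched to https.
  parameter_defaults:
    SSLCertificate: |
      -----BEGIN CERTIFICATE-----
      ...
      -----END CERTIFICATE-----
    SSLKey: |
      -----BEGIN RSA PRIVATE KEY-----
      ...
      -----END RSA PRIVATE KEY-----
    EndpointMap:
      NovaPublic: {protocol: 'https', port: '13774', host: 'CLOUDNAME'}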
|
Added a parameter to the Nuage ExtraConfig template for setting the
use_forwarded_for value required by the Nuage metadata agent.
Change-Id: I02c15311272126c5e530f118fbfb4a8f6e11a620
|
Added ExtraConfig templates and environment files for Nuage-specific
parameters. Modified overcloud_compute.pp and overcloud_controller.pp to
conditionally include the Nuage plugin and agents.
Change-Id: I95510c753b0a262c73566481f9e94279970f4a4f
|
This commit enables the injection of a trust anchor or root
certificate into every node in the overcloud. This is useful in case
the TLS certificates for the controllers are signed by a self-signed CA,
or if the deployer would like to inject a relevant root certificate
for other purposes. In either case the other nodes might need to have
the root certificate in their trust chain in order to do proper
validation.
Change-Id: Ia45180fe0bb979cf12d19f039dbfd22e26fb4856
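A sketch of how the root certificate might be supplied; the SSLRootCertificate parameter name and the NodeTLSCAData registry mapping are assumptions.
  # Assumed environment for injecting a trust anchor into every node.
  resource_registry:
    OS::TripleO::NodeTLSCAData: ../puppet/extraconfig/tls/ca-inject.yaml

  parameter_defaults:
    SSLRootCertificate: |
      -----BEGIN CERTIFICATE-----
      ...
      -----END CERTIFICATE-----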
|