|
|
|
This change adds config and deployment resources to trigger package
updates on nodes. The deployments are triggered by doing a stack-update
and setting one of the parameters to a unique value.
The intent is that rolling update will be controlled by setting
breakpoints on all of the UpdateDeployment resources inside the
role resource groups.
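As a rough illustration of the pattern (resource and parameter names
here are a sketch, not necessarily the ones this change introduces):

    parameters:
      UpdateIdentifier:
        type: string
        default: ''
        description: Set to a new unique value to trigger package updates on nodes.

    resources:
      UpdateDeployment:
        type: OS::Heat::SoftwareDeployment
        properties:
          config: {get_resource: UpdateConfig}   # the package-update SoftwareConfig (assumed)
          server: {get_resource: Controller}     # a server in the role resource group (assumed)
          input_values:
            update_identifier: {get_param: UpdateIdentifier}

Re-running stack-update with a new UpdateIdentifier value changes the
deployment's inputs, which re-triggers it on the nodes.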
Change-Id: I56bbf944ecd6cbdbf116021b8a53f9f9111c134f
|
|
Change-Id: I731b408f24da01c1bc897bfffe8fd4d5638932ed
|
|
Turns NeutronNetworkVLANRanges into a list and makes it consumable by
neutron::plugins::ml2::network_vlan_ranges as an array. Previously,
using VLANs was impossible because puppet-neutron failed to
join() network_vlan_ranges.
Also fixes the wiring of network_vlan_ranges on computes and adds a
sample environment file to test the use of VLANs for tenant networks.
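For illustration, a sample environment could look something like this
(values are an assumed example, not copied from the change):

    parameter_defaults:
      NeutronNetworkType: vlan
      NeutronNetworkVLANRanges:
        - 'datacentre:100:200'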
Change-Id: I8725cdb9591dd8d0b7125fdacbefdc9138703266
|
|
This patch refactors the puppet controller role so that it
makes use of per-service VIP settings.
Previously the VIP for the ctlplane was hard-wired into
many of the controller services. With this patch we have
the ability to isolate traffic for services that previously
made use of the ctlplane and public VIPs for their
settings.
The implementation includes:
* Stops the use of the VirtualIP and PublicVirtualIP within the
controller role. These parameters have now been replaced with
per-service heat parameters for the controller nested stack, which
are determined via VipMap based on per-service settings in the heat
environment.
* All VIP configuration is now moved into puppet/vip-config.yaml.
This made sense so we could deprecate the use of the VirtualIP
and PublicVirtualIP settings above.
* The puppet manifests for the controller were cleaned up in several
places to use Hiera directly instead of constructing URLs based on the
static controller and public network VIPs. This improvement
was something we wanted to do anyway and made the implementation
cleaner.
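For illustration, the resulting per-service wiring in the top-level
template looks roughly like this (names are illustrative):

    Controller:
      type: OS::TripleO::Controller
      properties:
        KeystonePublicApiVirtualIP:
          {get_attr: [VipMap, net_ip_map, {get_param: [ServiceNetMap, KeystonePublicApiNetwork]}]}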
Change-Id: I9b9a15be67f74bec97366408f7047acfd6ea0ec6
|
|
This patch makes ServiceNetMap a top level parameter.
This is helpful for tools like Tuskar which don't support Heat
environments that contain both a resource_registry and default_parameters.
ServiceNetMap will in fact be utilized at the top level in some of
the VIP-related patches that follow.
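For reference, the top-level parameter looks roughly like this (the
specific entries shown are illustrative):

    parameters:
      ServiceNetMap:
        type: json
        default:
          NeutronTenantNetwork: tenant
          MysqlNetwork: internal_api
          CeilometerApiNetwork: internal_api
        description: Mapping of service_name_network -> network name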
Change-Id: I375063dacf5f3fc68e6df93e11c3e88f48aa3c3a
|
|
|
|
This patch removes the custom config_id outputs and replaces
them with OS::stack_id, which allows us to just call get_resource
in the parent stack.
The motivation for this change is we'll be adding more os-net-config
templates and it would be nice to take advantage of this newer
template feature.
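Roughly, instead of a custom output (resource name illustrative):

    outputs:
      config_id:
        value: {get_resource: OsNetConfigImpl}

the nested template now declares:

    outputs:
      OS::stack_id:
        value: {get_resource: OsNetConfigImpl}

so the parent stack can refer to the provider resource with a plain
get_resource.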
Change-Id: I6fcb26024b94420779b86766e16d8a24210c4f8e
|
|
This patch updates the controller roles so that
they can optionally make use of isolated network
ports on each of the 5 available overcloud networks.
-Multiple networks are created based upon settings in the heat
resource registry. These nets will either use the noop network (the
control plane pass-thru default) or create a custom Neutron port on
each of the configured networks.
-The ipaddress/subnet of each network is passed into the
NetworkConfig resource which drives os-net-config. This allows the
deployer to define a custom network template for static IPs, etc.,
on each of the networks.
-The ipaddress is exposed as an output parameter. By exposing
the individual addresses as outputs we allow Heat to construct
collections of ports for various services.
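For illustration, the per-network choice is made in the resource
registry along these lines (keys and paths are approximate):

    resource_registry:
      # control plane pass-thru (default):
      OS::TripleO::Controller::Ports::InternalApiPort: network/ports/noop.yaml
      # or create a real Neutron port on the isolated network instead:
      # OS::TripleO::Controller::Ports::InternalApiPort: network/ports/internal_api.yaml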
Change-Id: I9bbd6c8f5b9697ab605bcdb5f84280bed74a8d66
|
|
|
|
This patch bumps the HOT version for the overcloud
to Kilo 2015-04-30. We should have already done this
since we are making use of OS::stack_id (a Kilo feature)
in some of the nested stacks. This will also give us access to
the new repeat function.
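Templates therefore now start with:

    heat_template_version: 2015-04-30

and repeat becomes usable; a generic (purely illustrative) example of
the function:

    rules:
      repeat:
        for_each:
          <%port%>: [80, 443]
        template:
          protocol: tcp
          port_range_min: <%port%>
          port_range_max: <%port%>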
Change-Id: Ic534e5aeb03bd53296dc4d98c2ac5971464d7fe4
|
|
The exec timeout/attempts are configured so that the command is
left running for up to 30 minutes if it runs but is
unsuccessful, and up to 2 hours if it times out.
Change-Id: I4b6b77e878017bf92d7c59c868d393e74405a355
|
|
This commit aims to support the creation of the Galera cluster via
Pacemaker. With this commit in, three use-cases will be supported:
* Non-HA setup / non-Pacemaker setup: the deployment will take place
as it currently does in f20puppet-nonha. Nothing changes.
* Non-HA setup / Pacemaker setup: even though it is a non-HA setup,
a Galera cluster will be deployed via Pacemaker with a cluster size of 1.
* HA setup / non-Pacemaker setup: N/A.
* HA setup / Pacemaker setup: it is assumed that an HA setup will
always be with Pacemaker, so in this situation Pacemaker will deploy a
cluster of 3 Galera master nodes.
Depends-On: I7aed9acec11486e0f4f67e4d522727476c767d83
Change-Id: If0c37a86fa8b5aa6d452129bccf7341a3a3ba667
|
|
|
|
|
|
This patch adds support for a new GlanceBackend setting
which can be set to one of swift, rbd, or file to control
which Glance backend is configured for use by default.
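For example, selecting the Ceph-backed store could look like this in
an environment file (environment mechanics assumed; the parameter name
and values come from this change):

    parameter_defaults:
      GlanceBackend: rbd   # one of: swift, rbd, file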
Change-Id: Id6a3fbc3477e85e8e2446e3dc13d424f9535d0ff
|
|
We need to stop using "unset" as the password for all databases. Ideally we
would add an "XxxxDSN" parameter (e.g. KeystoneDSN) but this won't work because
we don't know the VirtualIP to pass in.
Until we can come up with a better solution we should at least get rid of
the "unset" passwords.
Change-Id: I31f45912fa9c116ccdee010a2c5d91ea43a25671
Depends-On: I8ffe1eb481f615b0fbe127cd8107f1e70794c839
|
|
|
|
Ceilometer can use different backends. A recent change moved backend
support for Ceilometer from MySQL to MongoDB. This commit introduces
greater flexibility, letting the deployer choose whether MySQL or MongoDB
should be used as the backend for Ceilometer.
Change-Id: I0d5bfb0763cbcee234df7ab13574d866743d5ddf
|
|
Remove references to the .novalocal domain part in the hosts file.
Change-Id: Idf14907adaf2f35440b6f28870fe18434eadd1be
Depends-On: Iadfdf4120c4d1c9b6976321753957fd4eecf301c
|
|
|
|
Install the OpenStack Dashboard (Horizon) on the Overcloud Controller with
Puppet.
Co-Authored-By: Giulio Fidente <gfidente@redhat.com>
Depends-On: If9b12d373e407be8be8428d77145f131eb450e88
Change-Id: I254e895014f58a51dade3dcdc63eabbb5dc458ac
|
|
This change allows a different network config for each family of hosts. For
instance, the controller may have a different network configuration than a
block storage node. It adds a declaration for each family in
overcloud-resource-registry.yaml and overcloud-resource-registry-puppet.yaml.
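Illustratively, the registry gains one entry per family, along these
lines (key names and the noop template are approximate):

    resource_registry:
      OS::TripleO::Controller::Net::SoftwareConfig: net-config-noop.yaml
      OS::TripleO::BlockStorage::Net::SoftwareConfig: net-config-noop.yaml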
Change-Id: I083df7ebbb535f97d8ddec2ac0e06281c55986cd
|
|
Currently none of the OS::Nova::Server resources we create pass any
user-data. It's possible to pass user-data in addition to using heat
SoftwareConfig/SoftwareDeployment resources, and this can be useful
when you have simple "first boot" tasks which can be handled either via
cloud-init or via simple run-once scripts.
This enables passing such data by implementing a new provider resource,
OS::TripleO::NodeUserData, which defaults to passing an empty mime
archive (thus it's a no-op). An example of non-no-op usage is also
provided.
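A minimal non-no-op first-boot template might look roughly like this
(file layout and resource names are illustrative; the key point is
returning the mime archive via OS::stack_id):

    heat_template_version: 2014-10-16

    resources:
      userdata:
        type: OS::Heat::MultipartMime
        properties:
          parts:
            - config: {get_resource: boot_config}

      boot_config:
        type: OS::Heat::CloudConfig
        properties:
          cloud_config:
            runcmd:
              - echo "ran at first boot" > /root/firstboot.log

    outputs:
      OS::stack_id:
        value: {get_resource: userdata}

The resource registry would then map OS::TripleO::NodeUserData to this
template instead of the empty default.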
Change-Id: Id0caba69768630e3a10439ba1fc2547a609c0cfe
|
|
|
|
Pacemaker is a new feature and should probably be disabled
by default.
Change-Id: I840d08c9e0563aeb7128eb2b21929612b7a5bf7a
|
|
Adds a new ControllerEnableSwiftStorage parameter that
can be used to enable/disable use of the controller node
as a Swift storage node.
Change-Id: Ic54144f4a46a671818c2f12e419cfa619b0dc1f9
|
|
This patch adds a new ControllerEnableCephStorage option
which can be used to install and configure Ceph storage
(OSD) on the controller node.
This is disabled by default (which is probably a more
production-like setting).
The motivation for this change is to help facilitate CI
jobs which actually use Ceph. Right now we have an issue
where once the Heat stack finishes Ceph is configured
and ready, but Cinder volume (required by our CI
devtest_overcloud.sh test) may or may not have had
enough time to recognize the amount of storage
on the remote Ceph storage nodes. Waiting another
periodic cycle for Cinder volume to recognize the
actual amount of storage on the remote OSD nodes
would work but there isn't a good way to do this
ATM. The right solution here is probably to
implement Heat breakpoints in our CI. As we haven't quite
landed that change, another option is to simply
make the controller node also be a Ceph storage node.
Since this runs as "step 2" within the controller
it ensures that the OSD will be available and thus
Cinder volume will register the correct amount of
storage on startup.
Enabling this feature also matches what we do with Swift
storage on the Controller (although we should provide
an option to actually disable this as well).
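In a CI job this could be enabled with something like (environment
mechanics assumed; the parameter name comes from this change):

    parameter_defaults:
      ControllerEnableCephStorage: true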
Change-Id: Ic47d028591edbaab83a52d7f38283d7805b63042
|
|
Depends-On: Ia1bbf53c674e34ba7c70249895b106ec0af3c249
Change-Id: Ifa9f579d26a3cba9f8705226984c7b987ae0ad1c
|
|
Change-Id: Ia2e4eae619ca95c0f417f713676732eb4f01304b
Depends-On: I9563eec0a2266deb2ebef2e3d76ae89d39b2be29
|
|
Despite passing bind-address for MariaDB in overcloud_controller.pp
correctly, it was always trying to bind on 0.0.0.0. The problem is
caused by Galera's config file (we install Galera into the image even
though we don't use it yet). Galera's default config file contains an
override of the bind-address value to 0.0.0.0, and the setting from
galera.cnf took precedence over what was in server.cnf.
The mariadb-galera-server package assumes that the main config happens
in galera.cnf and it ships an almost empty server.cnf. We now have an
EnableGalera param; when it's set to true, the mysql module manages
galera.cnf instead of server.cnf, overriding the default values from
galera.cnf and fixing the issue.
Change-Id: I7c2fd41d41dcf5eb4ee8b1dbd74d60cc2cabeed9
Closes-Bug: #1442256
|
|
It's very confusing for them to be different, especially in the case of
comparing Tuskar vs non-Tuskar deployments where the parameters are read
from different files.
Note: NeutronPhysicalBridge is named differently in the overcloud
template (HypervisorNeutronPhysicalBridge). This is the only parameter
checked that isn't named exactly the same, hopefully there aren't any
others.
(Checked controller, compute, ceph, cinder, and swift for both puppet
and non-puppet templates)
Change-Id: I48ce1eb40d2d080c589ce619c50eddff17efe882
|
|
This commit aims to add support for Ceph as a Cinder and a Nova backend.
* Allows creation of Ceph pools from heat (default: volumes, vms)
* Creates the proper Ceph user and injects the keys
* Applies the proper configuration in cinder.conf and nova.conf
* Enables the backend out of the box
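Illustratively, the backends would then be toggled via parameters
along these lines (names assumed, not verified against this change):

    parameter_defaults:
      CinderEnableRbdBackend: true
      NovaEnableRbdBackend: true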
Co-Authored-By: Giulio Fidente <gfidente@redhat.com>
Change-Id: Ic17d7a665de81a8bab5e34035abe90eda4bc889f
|
|
|
|
Currently we have a hard-coded default for auth_encryption_key,
which isn't ideal as it's used as a salt for the DB encryption.
Instead, reference an OS::Heat::RandomString resource so we create
a random key for each deployment.
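A sketch of the approach (resource name illustrative):

    resources:
      HeatAuthEncryptionKey:
        type: OS::Heat::RandomString

with the generated value consumed via
{get_attr: [HeatAuthEncryptionKey, value]} wherever the hard-coded
default was previously used.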
Change-Id: Ic76b89db17603c114d98d28c01f75cc287fb2e90
|
|
Currently the Cinder iSCSI backend is configured within the DEFAULT section.
Since we aim to support multiple backends, this commit puts the iSCSI backend
in its own section and enables it by default, configuring it properly.
It also adds a parameter which can be used to disable the default backend.
Co-Authored-By: Giulio Fidente <gfidente@redhat.com>
Change-Id: I05fb44b59829c0afa8a6588956a48320f2f65159
|
|
Our existing default (replicas == 1) means that no data
(or copies) is being replicated in a multi-node Swift
environment. This seems like a bad production default
setting and could easily slip by if not set.
Setting it to 3 shouldn't hurt anything and seems to follow
suit with what several production installers (based around Puppet)
actually use. If using an installation with fewer than 3 Swift
nodes, I believe Swift will do its best and still work fine.
FWIW I noticed this when testing a multi-node Puppet swift
installation and was surprised when I didn't see any *data
files getting replicated across the storage cluster.
Change-Id: I44bdfff7aae6bdf845b79ca1f8f450c22113caed
|
|
In I228216a0b55ff2d384b281d9ad2a61b93d58dab9 we split
out just the Controller software config in an effort
to provide hooks for alternate implementations (puppet).
This sort of worked but caused quirky ordering issues
with signal handling. It also caused problems for Tuskar, which
would prefer to think in terms of these nested stacks and
not have us split out just the software configs like this.
This patch moves all the controller related stuff for
our two implementations:
controller.yaml: used by os-apply-config (uses the
tripleo-image-elements)
controller-puppet.yaml: uses stackforge puppet-* modules for
configuration
By duplicating the entire controller in this manner we make
it much easier to create dependencies and implement proper
signal handling. The only (temporary) downside is the duplication
of parameters, most of which will eventually go away when we move towards
using the global parameters via Heat environment files instead.
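Selecting one implementation or the other then becomes a one-line
change in the resource registry, roughly (paths illustrative):

    resource_registry:
      # element-based implementation:
      OS::TripleO::Controller: controller.yaml
      # puppet-based implementation:
      # OS::TripleO::Controller: controller-puppet.yaml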
Change-Id: Iaf3c889d7c8815f862308cd8e15ce1010059f5c6
|
|
|
|
This change allows enabling HA for Neutron routers
via the new NeutronL3HA parameter.
Change-Id: Ia5f7c0b4e89159456482e840c50d166ec5f25d4c
Implements: blueprint tripleo-icehouse-ha-production-configuration
|
|
This was added in I36fece56bafa9fe9c4883b572687b3fc819eeae1
and is missing from overcloud-without-mergepy.
Change-Id: I5c2566cc77247574f8d687eaab8094de481a8c67
|
|
This was added in Icc5e431a7e2884b3ca3a255b6fd901619bc98460
and is missing from overcloud-without-mergepy.
Change-Id: I1273b646c04783712fd3f8baccafead11817689c
|
|
|
|
|
|
This example extends the controller software configuration
so that heat metadata is used to model the os-net-config
YAML (ultimately JSON) directly. The existing
os-net-config element already supports this format.
Configuring the physical network layer in this manner
would supplant the ever-growing list of Heat parameters
that we have, and is something that could be automatically
generated via Tuskar.
The default is to use net-config-noop.yaml, which
passes no config metadata into the os-net-config
element and thus essentially disables it in favor
of using parameters with init-neutron-ovs.
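A rough sketch of the metadata-driven form (contents illustrative;
the network_config schema is os-net-config's own):

    OsNetConfig:
      type: OS::Heat::StructuredConfig
      properties:
        group: os-apply-config
        config:
          os_net_config:
            network_config:
              - type: ovs_bridge
                name: br-ex
                use_dhcp: true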
Change-Id: Ifba60454ee11222173a9762882e767a836a4545c
|
|
This is a step towards supporting pluggable software configurations
in the heat templates. By moving controller-config out of controller.yaml
we make it possible to define alternate implementations by
changing the OS::TripleO::ControllerConfig value in the
overcloud-resource-registry.yaml heat environment file.
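In other words, the registry entry can simply be repointed, roughly
(file names illustrative):

    resource_registry:
      OS::TripleO::ControllerConfig: controller-config.yaml
      # a puppet-based alternative maps the same key to a different template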
Change-Id: I228216a0b55ff2d384b281d9ad2a61b93d58dab9
|
|
Trying to use overcloud-without-mergepy currently fails with
a validation error around MysqlClusterUniquePart. This
works around the issue by temporarily dropping the validation.
Change-Id: If93945a2c3396b07b592d08efb1f66e11d6194dd
Partial-bug: #1405446
|
|
|
|
|
|
|