|
Adds new Puppet and Puppet Pacemaker-specific services for
Heat API, Heat API CFN, Heat API Cloudwatch, and Heat Engine.
The Pacemaker templates extend the default Heat services and
swap in the Pacemaker-specific puppet-tripleo profile instead.
Change-Id: I387b6bfd763d2d86cad68a3119b0edd0caa237b0
Partially-implements: blueprint composable-services-within-roles
Depends-On: I194cbb6aa307c2331597147545cf10299cab132f
Depends-On: I14dc923ac8ee8d5d538e7f4cf8138ccee8805b53
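For illustration only, a composable service of this kind is wired up
through the resource registry; the template paths and service alias
below are a sketch, not taken from this change:
    # hypothetical environment snippet: map the Heat engine service to its
    # default template, or to the Pacemaker-managed variant instead
    resource_registry:
      OS::TripleO::Services::HeatEngine: puppet/services/heat-engine.yaml
      # for Pacemaker deployments the same alias would point at e.g.:
      # OS::TripleO::Services::HeatEngine: puppet/services/pacemaker/heat-engine.yaml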
|
|
Deploy loadbalancer service using puppet-tripleo, and drop puppet code.
Implements: blueprint refactor-puppet-manifests
Depends-On: I9b106dcc1a4d446ab5dea8430ed295e6ec209cbd
Change-Id: I9ca50a4bc822ec17d89988894af9bdf07e4bd1a9
|
|
Set a password for the 'root' db user and add an additional
'clustercheck' user to be used only by the resource agent.
The password for this 'clustercheck' user is randomly generated
via a heat parameter.
Before this change the workflow to set up the database in the
manifest is the following:
- Step 1 -> Install all the basic galera packages and basic configuration
- Step 2.a -> Create /etc/sysconfig/clustercheck with root and empty password
- Step 2.b -> Start up galera-monitor xinetd service
- Step 2.c -> Start pacemaker ocf resource (no root user has been created
so there will be an empty password per default)
- Step 2.d -> Wait for /bin/clustercheck to return success and then
proceed with the other steps
After this change the workflow is slightly more complex because there
is a bit of a chicken and egg problem:
- Step 1 -> Install all the basic galera packages and basic configuration
- Step 2.a -> Create /etc/sysconfig/clustercheck with root and empty
password, unless the file already exists and has a clustercheck user
configured
- Step 2.b -> Start up galera-monitor xinetd service
- Step 2.c -> Start pacemaker ocf resource (no root user has been created
yet, so there will be an empty password per default)
- Step 2.d -> Wait for /bin/clustercheck to return success and then proceed
with the other steps
- Step 2.e -> Create clustercheck db user
- Step 3/4 -> Create /etc/sysconfig/clustercheck with clustercheck user credentials
- Step 5.a -> Update the sql root password on each node (at this stage
- Step 5.b -> Create /root/.my.cnf with proper credentials on all nodes
Note that we cannot really create the root/clustercheck users right at
step 1 because the db is not running yet (an approach that spawned
mysqld on each node, created the users and shut it down was tried, but
it was much more complex and cannot work when updating existing setups).
Given the new way of solving the root password issue, we also need to
make sure that Step 1 and Step 2 are run on updates.
Closes-bug: #1581677
Depends-On: I83eed8885503043e881db34411616f9726e00352
Change-Id: If3d6e7253af6195b96129be7ea3348d697e4bae1
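As a rough sketch of the "randomly generated via a heat parameter" part,
a Heat template can mint such a secret with OS::Heat::RandomString; the
resource name below is hypothetical:
    # hypothetical template snippet: generate a random clustercheck password
    resources:
      MysqlClustercheckPassword:
        type: OS::Heat::RandomString
        properties:
          length: 16
      # the value would then be consumed elsewhere via:
      #   {get_attr: [MysqlClustercheckPassword, value]}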
|
|
Change the way to implement RabbitMQ, as a composable role.
Implements: blueprint refactor-puppet-manifests
Change-Id: I5fed5c437ad492af75791a9163f99ae292f58895
|
|
This might be useful if we switch to %{hiera()} calls to look up
the bind address from within a service.
Also gets rid of NetIpSubnetMap and provides the same output from
NetIpMap instead.
Change-Id: I328a417d1f1fff9c31e9ad7b2b5083ac19bc7329
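For context, a %{hiera()} lookup of that kind might look like the
following hieradata sketch; the class parameter and key names are
assumptions, not part of this change:
    # hypothetical hieradata: resolve the bind address from the IP map data
    # published by the templates instead of wiring it through a parameter
    heat::api::bind_host: "%{hiera('heat_api_network')}"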
|
|
We can drive enabled and manage_service from the Pacemaker-specific
templates, so change I8ef7bb94e048b998712b3534ceb51a7d10d016e9
removed that logic from the TripleO-specific manifests. This change
adds them to the Pacemaker template for the Neutron L3 agent.
Change-Id: I1146a8bae4938577e9927d1ce74009e066cb58dd
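For illustration, the hieradata such a Pacemaker template typically sets
looks roughly like this; the exact puppet-neutron parameter names are an
assumption here:
    # hypothetical hieradata from the Pacemaker variant: Puppet installs and
    # configures the agent but leaves the service for Pacemaker to manage
    neutron::agents::l3::manage_service: false
    neutron::agents::l3::enabled: false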
|
|
https://review.openstack.org/#/c/236243 added a new conditional
for the controller steps, but we don't pass any step for the
ObjectStorage nodes, so the deployment fails. This passes a
step that enables the ringbuilder again, although it does end
up inconsistent with the deployment Step name.
Change-Id: I506961f4a22dba9960d819d7376a39e7ccbcdece
Closes-Bug: #1583225
|
|
Adds new Puppet and Puppet Pacemaker-specific services for
the Neutron Metadata agent.
Partially-implements: blueprint composable-services-within-roles
Change-Id: I25f026507e78f18594599b3621613a54f246545d
|
|
Adds new Puppet and Puppet Pacemaker-specific services for
the Neutron L3 agent.
Partially-implements: blueprint composable-services-within-roles
Change-Id: I0316043efe357a41ef3b4088a55d98dbb6d25963
|
|
Change Ia61295943e67efe354a51a26fe4540f288ff6ede added support for
composable Neutron DHCP agent services. However, in the sample
environments for OpenContrail and Plumgrid, where the DHCP agent is
disabled, the mapping to OS::Heat::None was under parameter_defaults
instead of resource_registry.
Change-Id: I0aedbbc3720783d4208d524cd28c7eed4fc5d1d7
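In other words, the disabling entry belongs under resource_registry,
roughly as in the sketch below; the service alias name is an assumption:
    # hypothetical environment file: OS::Heat::None mappings must live under
    # resource_registry, not under parameter_defaults
    resource_registry:
      OS::TripleO::Services::NeutronDhcpAgent: OS::Heat::None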
|
|
AFAICS this isn't actually used anywhere; I assume it's left over
from the older element-based implementation.
Change-Id: Ie95628bd7af1bcd50a6e331531b2987e434c7136
|
|
Per the nova devs on [1], this is not necessary.
Change-Id: I11974432c995b22b3c98ef9ae2adc3508d9cc536
1: https://review.openstack.org/#/c/316241/1/manifests/keystone/auth.pp
|
|
Nova EC2 does not exist anymore since Mitaka; its parameters were already
deprecated in Mitaka and send warnings to the Puppet catalog.
The service has been replaced by the ec2api project, for which the Puppet
OpenStack team is currently writing a module.
Until we add support for it in TripleO, this patch removes all
occurrences of Nova EC2 configuration, which are useless and send
warnings for nothing.
Change-Id: Ief2d0e5c77b5ac58560606fee930fbd66c40ffc3
|
|
We can now control the manage_service and enabled booleans from
the Pacemaker-specific template.
Change-Id: I91a4267f0fc230f63df3333747d28463c7ae55fe
|
|
Change-Id: I8f98ce92fc387d2263fda738c1c8a209e3cbbb85
|
|
Adds new Puppet and Puppet Pacemaker-specific services for
the Neutron DHCP agent.
Depends-On: Ibbfd79421f871e41f870745a593cca65e8c0e58a
Partially-implements: blueprint composable-services-within-roles
Change-Id: Ia61295943e67efe354a51a26fe4540f288ff6ede
|
|
Step 6 was just about configuring fencing after creating all Pacemaker
resources.
It was created by this patch:
https://review.openstack.org/#q,1787fbc7ca58f9965cd5d64b685c1f9beed4cb9b,n,z
A bit of Puppet orchestration can help us avoid requiring an extra step.
This patch:
* configures & enables fencing at step 5
* makes sure we don't configure fencing before creating Pacemaker
resources and constraints.
* removes step 6 from the deployment workflow.
* depends on a patch in puppet-tripleo that moves keystone resources
(endpoints, roles) to step 5.
Change-Id: Iae33149e4a03cd64c5831e689be8189ad0cf034b
Depends-On: Icea7537cea330da59fe108c9b874c04f2b94d062
Depends-On: I079e65f535af069312b602e8ff58be80ab2f2226
|
|
Step 7 was created when we incremented the step of ringbuilder, by
https://review.openstack.org/#q,9988bd25aa4bac1375ef4783d636c7adecedee92,n,z
But step 7 is not used anywhere and just consumes time for nothing.
This patch removes the step, so deployments and upgrades will be faster.
Change-Id: I77af9126abc61ace227cf1a69c2d3b5ceb735276
|
|
Puppet-nova recently changed the default neutron auth setting
in I3416ae594e972e40ff0336779258a887987e46b1 to 'password'.
This single setting seems to break the tripleo upgrades job.
Setting it here manually for now and following up in puppet-nova.
Closes-bug: #1580076
Change-Id: I3f38a3e1ef3378a272a51ecbc1e8a801c8d3608a
|
|
This is an optimization of the ping command. It changes the ping test
from waiting 300 seconds for a single sent packet to waiting
to receive a reply to any number of sent packets. The current
implementation waits the full 300 seconds before retrying if we
do not get a reply to the first packet sent. By using the -w flag,
we keep sending ICMP packets until a reply is received, making the
ping test more responsive to connectivity changes.
Change-Id: I01ab374ae44718c8d56e2d7f35812dfb5bb2ce5a
Signed-off-by: Feng Pan <fpan@redhat.com>
|
|
This patch switches to use docker-cmd without changing the heat
templates.
Change-Id: I4a6a42819e83e3b70bf1e37c09d155c5cf8a7ee4
|
|
The parameters field was changed to parameter_defaults
everywhere. Combine both parameter_defaults fields.
Change-Id: Ia1874463cdd6bf81be5739363e639bd11312abec
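A minimal sketch of the intended result, with parameter names invented
purely for illustration:
    # hypothetical merged environment: a single parameter_defaults section
    # instead of a legacy parameters section plus a parameter_defaults one
    parameter_defaults:
      ControllerCount: 3
      NeutronNetworkType: vxlan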
|
|
This commit passes the necessary hieradata in order to create
the endpoints, users and roles of the services in keystone via
puppet.
Change-Id: I2470dfa4661be7ba8218f6035fffa05f547214f0
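As a rough sketch, hieradata of this kind usually feeds a service's
::keystone::auth class; the keys and values below are illustrative
assumptions:
    # hypothetical hieradata consumed to create a service's keystone user,
    # role and endpoint via puppet
    glance::keystone::auth::public_url: 'http://192.0.2.10:9292'
    glance::keystone::auth::region: 'regionOne'
    glance::keystone::auth::tenant: 'service'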
|
|
Change-Id: I511052dc765788336ffd32dee2118d787fce725d
|
|
The database will be created by the roles so we don't need to call
::mysql from the manifest.
Change-Id: I2b137cbd6597222a72cf46830f34a93f002c70ef
Depends-On: Id065a9180f1f1a41ab225ec5f755498ec7d9a827
|
|
When using the Nova RBD driver for ephemeral storage, the Ceph RBD
OpenStack guide [1] suggests optimizing certain settings; this change
sets disk_cachemodes and hw_disk_discard according to the guide.
1. http://docs.ceph.com/docs/master/rbd/rbd-openstack/
Change-Id: I8d2ee89ca4ff5458d1888cc037e2e91d19025ad4
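For reference, the guide's recommendations roughly translate to hieradata
along these lines; the exact puppet-nova parameter names used here are an
assumption:
    # hypothetical hieradata: writeback caching and discard (TRIM) support
    # for RBD-backed ephemeral disks
    nova::compute::libvirt::disk_cachemodes:
      - 'network=writeback'
    nova::compute::libvirt::hw_disk_discard: 'unmap'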
|
|
If "pcs cluster stop --all" is executed on a controller that
happens to have a VIP on the internal network, pcs may use the
VIP as the source address for communication with another cluster
node. When pacemaker is stopped this VIP goes away, and pcs never
receives a response from the other node. This causes pcs to hang
indefinitely; eventually the upgrade times out and fails.
Disabling the VIPs before stopping the cluster avoids this
situation.
Change-Id: I6bc59120211af28456018640033ce3763c373bbb
Closes-Bug: 1577570
|
|
Change-Id: I282dbc025500b1628d4f08a49b54a2adefd38b5f
|
|