author    Michele Baldessari <michele@acksyn.org>  2016-05-10 19:14:54 +0000
committer Giulio Fidente <gfidente@redhat.com>     2016-05-19 06:38:55 +0000
commit    e734d752d4f37e93a1637560f9f515320bbe68c5 (patch)
tree      eda203fe3ed87c4d52956c067017bd6935f0f1b3 /puppet/swift-storage-post.yaml
parent    aeb9482f4b1ec9d78dc3ac44ee3c0b180cd27574 (diff)
Tighten the access rules for galera
Set a password for the 'root' db user and add an additional
'clustercheck' user to be used only by the resource agent.
The password for this 'clustercheck' user is randomly generated
via a Heat parameter.
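As an illustration, a minimal Puppet sketch of what creating that user
could look like, assuming the puppetlabs-mysql module and a hypothetical
hiera key 'mysql_clustercheck_password' fed from the generated Heat
parameter (the real manifest may differ):

    # Sketch only: 'mysql_clustercheck_password' is a hypothetical hiera
    # key carrying the randomly generated Heat parameter value.
    $clustercheck_password = hiera('mysql_clustercheck_password')

    mysql_user { 'clustercheck@localhost':
      ensure        => present,
      password_hash => mysql_password($clustercheck_password),
    }

    # The monitoring user only needs to inspect cluster state, not touch
    # any data.
    mysql_grant { 'clustercheck@localhost/*.*':
      ensure     => present,
      privileges => ['PROCESS'],
      table      => '*.*',
      user       => 'clustercheck@localhost',
    }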
Before this change, the workflow to set up the database in the
manifest was the following:
- Step 1 -> Install all the basic galera packages and basic configuration
- Step 2.a -> Create /etc/sysconfig/clustercheck with the root user and an
  empty password (sketched after this list)
- Step 2.b -> Start up the galera-monitor xinetd service
- Step 2.c -> Start the pacemaker OCF resource (no root user has been
  created, so the password is empty by default)
- Step 2.d -> Wait for /bin/clustercheck to return success and then
  proceed with the other steps
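A rough Puppet sketch of steps 2.a and 2.b, assuming the puppet xinetd
module (file contents and parameters are illustrative, not the literal
manifest code):

    # Step 2.a (sketch): bootstrap config pointing clustercheck at the
    # root user with an empty password.
    file { '/etc/sysconfig/clustercheck':
      ensure  => file,
      mode    => '0600',
      content => "MYSQL_USERNAME=root\nMYSQL_PASSWORD=''\nMYSQL_HOST=localhost\n",
    }

    # Step 2.b (sketch): expose /bin/clustercheck via xinetd so pacemaker
    # can poll galera health over TCP.
    xinetd::service { 'galera-monitor':
      port         => '9200',
      server       => '/bin/clustercheck',
      user         => 'root',
      group        => 'root',
      flags        => 'REUSE',
      service_type => 'UNLISTED',
    }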
After this change the workflow is slightly more complex because there
is a bit of a chicken and egg problem:
- Step 1 -> Install all the basic galera packages and basic configuration
- Step 2.a -> Create /etc/sysconfig/clustercheck with the root user and an
  empty password, unless the file already exists and has a clustercheck
  user configured
- Step 2.b -> Start up the galera-monitor xinetd service
- Step 2.c -> Start the pacemaker OCF resource (no root user has been
  created yet, so the password is empty by default)
- Step 2.d -> Wait for /bin/clustercheck to return success and then
  proceed with the other steps
- Step 2.e -> Create the clustercheck db user
- Step 3/4 -> Recreate /etc/sysconfig/clustercheck with the clustercheck
  user credentials
- Step 5.a -> Update the sql root password on each node (at this stage
  the root password is still the empty default)
- Step 5.b -> Create /root/.my.cnf with proper credentials on all nodes
  (steps 3 through 5.b are sketched after this list)
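A rough Puppet sketch of those later steps ($clustercheck_password and
$mysql_root_password stand in for the real values; exact file contents
may differ):

    # Step 3/4 (sketch): switch the monitor over to the dedicated user
    # once it exists in the database.
    file { '/etc/sysconfig/clustercheck':
      ensure  => file,
      mode    => '0600',
      content => "MYSQL_USERNAME=clustercheck\nMYSQL_PASSWORD='${clustercheck_password}'\nMYSQL_HOST=localhost\n",
    }

    # Step 5.b (sketch): store root credentials so later mysql calls on
    # the node authenticate non-interactively.
    file { '/root/.my.cnf':
      ensure  => file,
      mode    => '0600',
      content => "[client]\nuser=root\npassword='${mysql_root_password}'\n",
    }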
Note that we cannot really create the root/clustercheck users right at
step 1 because the db is not running yet (an approach that spawned
mysqld on each node, created the users and shut it down was tried, but
it was much more complex and cannot work when updating existing setups).
Given the new way of solving the root password issue, we also need to
make sure that Step 1 and Step 2 run on updates.
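For example, the update-safe guard for step 2.a could look roughly like
this (sketch; resource name, command and check are illustrative):

    # Only write the bootstrap root/empty config if the file does not
    # already carry real clustercheck credentials, so re-running steps
    # 1-2 on an update cannot clobber them.
    exec { 'create-clustercheck':
      path     => ['/bin', '/usr/bin'],
      provider => shell,
      command  => 'printf "MYSQL_USERNAME=root\nMYSQL_PASSWORD=\nMYSQL_HOST=localhost\n" > /etc/sysconfig/clustercheck',
      unless   => 'grep -q MYSQL_USERNAME=clustercheck /etc/sysconfig/clustercheck',
    }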
Closes-bug: #1581677
Depends-On: I83eed8885503043e881db34411616f9726e00352
Change-Id: If3d6e7253af6195b96129be7ea3348d697e4bae1