Set a password for the 'root' db user and add an additional
'clustercheck' user to be used only by the resource agent.
The password for this 'clustercheck' user is randomly generated
via a heat parameter.
Before this change the workflow to set up the database in the
manifest is the following:
- Step 1 -> Install all the basic galera packages and basic configuration
- Step 2.a -> Create /etc/sysconfig/clustercheck with root and empty password
- Step 2.b -> Start up galera-monitor xinetd service
- Step 2.c -> Start pacemaker ocf resource (no root user has been created
  so there will be an empty password by default)
- Step 2.d -> Wait for /bin/clustercheck to return success and then
proceed with the other steps
After this change the workflow is slightly more complex because there
is a bit of a chicken-and-egg problem:
- Step 1 -> Install all the basic galera packages and basic configuration
- Step 2.a -> Create /etc/sysconfig/clustercheck with root and empty
  password, unless the file already exists and has a clustercheck user
  configured
- Step 2.b -> Start up galera-monitor xinetd service
- Step 2.c -> Start pacemaker ocf resource (no root user has been created
  yet, so there will be an empty password by default)
- Step 2.d -> Wait for /bin/clustercheck to return success and then proceed
with the other steps
- Step 2.e -> Create clustercheck db user
- Step 3/4 -> Create /etc/sysconfig/clustercheck with clustercheck user credentials
- Step 5.a -> Update the sql root password on each node (at this
  stage the root password is still the empty default)
- Step 5.b -> Create /root/.my.cnf with proper credentials on all nodes
Note that we cannot really create the root/clustercheck users right at
step 1 because the db is not running yet (an approach that spawned
mysqld on each node, created the users and shut it down was tried, but
it was much more complex and could not work when updating existing
setups).
Given the new way of solving the root password issue, we also need to
make sure that Step 1 and Step 2 run on updates.
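As a rough sketch, the generated password can be wired in as a hidden
Heat parameter and handed to the manifests via hieradata; the parameter
and hiera key names below are assumptions for illustration, not
necessarily the ones used by this patch:

    parameters:
      MysqlClustercheckPassword:
        type: string
        hidden: true
        description: Password for the clustercheck MySQL user.

    # ... and, in the hieradata passed to each controller:
    #   mysql_clustercheck_password: {get_param: MysqlClustercheckPassword}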
Closes-bug: #1581677
Depends-On: I83eed8885503043e881db34411616f9726e00352
Change-Id: If3d6e7253af6195b96129be7ea3348d697e4bae1
|
|
Nova EC2 has not existed since Mitaka; its parameters were already
deprecated in Mitaka and emit warnings in the Puppet catalog.
The service has been replaced by the ec2api project, for which the
Puppet OpenStack team is currently writing a module.
Until we add support for it in TripleO, this patch removes all
occurrences of Nova EC2 configuration, which are useless and emit
warnings for nothing.
Change-Id: Ief2d0e5c77b5ac58560606fee930fbd66c40ffc3
|
|
Adds new puppet and puppet pacemaker-specific services for
the Neutron DHCP agent.
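A composable service template of this kind boils down to a role_data
output; a trimmed sketch, where the profile and parameter names are
assumptions for illustration:

    outputs:
      role_data:
        description: Role data for the Neutron DHCP agent service.
        value:
          config_settings:
            neutron::agents::dhcp::enable_isolated_metadata:
              get_param: NeutronEnableIsolatedMetadata
          step_config: |
            include ::tripleo::profile::base::neutron::dhcp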
Depends-On: Ibbfd79421f871e41f870745a593cca65e8c0e58a
Partially-implements: blueprint composable-services-within-roles
Change-Id: Ia61295943e67efe354a51a26fe4540f288ff6ede
|
|
Step6 was just about configuring fencing after creating all Pacemaker
resources.
It was created by this patch:
https://review.openstack.org/#q,1787fbc7ca58f9965cd5d64b685c1f9beed4cb9b,n,z
A bit of Puppet orchestration can help us avoid requiring an extra step.
This patch:
* configures & enables fencing at step 5 (see the sketch below)
* makes sure we don't configure fencing before creating Pacemaker
  resources and constraints
* removes step 6 from the deployment workflow
* depends on a patch in puppet-tripleo that moves keystone resources
  (endpoints, roles) to step 5
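For reference, fencing is driven from an environment file along these
lines; the FencingConfig device layout and the values here are
illustrative assumptions:

    parameter_defaults:
      EnableFencing: true
      FencingConfig:
        devices:
          - agent: fence_ipmilan
            host_mac: "11:22:33:44:55:66"   # illustrative
            params:
              ipaddr: 192.0.2.100
              login: admin
              passwd: secret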
Change-Id: Iae33149e4a03cd64c5831e689be8189ad0cf034b
Depends-On: Icea7537cea330da59fe108c9b874c04f2b94d062
Depends-On: I079e65f535af069312b602e8ff58be80ab2f2226
|
|
Step7 was created when we incremented the step of ringbuilder, by
https://review.openstack.org/#q,9988bd25aa4bac1375ef4783d636c7adecedee92,n,z
But step7 is not used anywhere and consumes time for nothing.
This patch removes the step, so deployments and upgrades will be faster.
Change-Id: I77af9126abc61ace227cf1a69c2d3b5ceb735276
|
|
Puppet-nova recently changed the default neutron auth setting
in I3416ae594e972e40ff0336779258a887987e46b1 to 'password'.
This single setting seems to break the tripleo upgrades job.
Setting it here manually for now and following up in puppet-nova.
Closes-bug: #1580076
Change-Id: I3f38a3e1ef3378a272a51ecbc1e8a801c8d3608a
|
|
This commit passes the necessary hieradata in order to create
the endpoints, users and roles of the services in keystone via
puppet.
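For example, the hieradata for a service ends up looking roughly like
this (the keys come from each module's keystone::auth class; the
values here are illustrative):

    glance::keystone::auth::public_url: http://192.0.2.10:9292
    glance::keystone::auth::internal_url: http://192.0.2.11:9292
    glance::keystone::auth::tenant: service
    glance::keystone::auth::password: '<GlancePassword>'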
Change-Id: I2470dfa4661be7ba8218f6035fffa05f547214f0
|
|
Change-Id: I511052dc765788336ffd32dee2118d787fce725d
|
|
The database will be created by the roles so we don't need to call
::mysql from the manifest.
Change-Id: I2b137cbd6597222a72cf46830f34a93f002c70ef
Depends-On: Id065a9180f1f1a41ab225ec5f755498ec7d9a827
|
|
If "pcs cluster stop --all" is executed on a controller that
happens to have a VIP on the internal network, pcs may use the
VIP as the source address for communication with another cluster
node. When pacemaker is stopped this VIP goes away, and pcs never
receives a response from the other node. This causes pcs to hang
indefinitely; eventually the upgrade times out and fails.
Disabling the VIPs before stopping the cluster avoids this
situation.
Change-Id: I6bc59120211af28456018640033ce3763c373bbb
Closes-Bug: 1577570
|
|
Change-Id: I282dbc025500b1628d4f08a49b54a2adefd38b5f
|
|
Change-Id: I0ebb5a1e504dd3ffef8ec15c721cf9a9bce6f05b
|
|
Add additional parameters, specifically:
* set core plugin to Nuage
* disable service plugins
* disable OVS, l3, metadata and DHCP agent
* rename OVS bridge to alubr0
* include Nuage API extensions
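Sketched as an environment file; all parameter names and the plugin
path below are assumptions based on the list above:

    parameter_defaults:
      NeutronCorePlugin: neutron.plugins.nuage.plugin.NuagePlugin
      NeutronServicePlugins: []
      NeutronEnableDHCPAgent: false
      NeutronEnableL3Agent: false
      NeutronEnableMetadataAgent: false
      NeutronEnableOVSAgent: false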
Change-Id: Ia0c201fd3b01cd524e096e6f246d707c6e643944
|
|
The default OVS bonding options parameter BondInterfaceOvsOptions was
wrongly set to "mode=active-backup", which makes os-net-config fail.
This fix changes it to "bond_mode=active-backup".
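For example, to set it explicitly in an environment file:

    parameter_defaults:
      # os-net-config expects the bond_mode key, not mode
      BondInterfaceOvsOptions: "bond_mode=active-backup"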
Change-Id: If3eaac6558c1a15ac09b198f90f05f1a77cf794b
Closes-Bug: #1576137
|
|
Change-Id: Iff287b9ea46100800e386efb98371be7ab48361f
|
|
This will configure the openstack services and run the initial
db sync in step 3 (instead of step 4) for the node for which
$sync_db is true.
Closes-Bug: #1572952
Change-Id: I29012ee0a8b281e4472353ee7d9d44912e8a9b6c
|
|
The ManagementNetValueSpecs param type is currently set to string.
This change sets the param to the correct type, json, allowing the
network value specs to be parsed correctly.
Example Management Network value spec:
{'provider:physical_network': 'management', 'provider:network_type': 'flat'}
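With the json type, the parameter can be declared and supplied roughly
like this (a sketch):

    parameters:
      ManagementNetValueSpecs:
        type: json
        default: {}
        description: Value specs for the management network.

    # environment file:
    parameter_defaults:
      ManagementNetValueSpecs:
        'provider:physical_network': management
        'provider:network_type': flat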
Change-Id: I5b12c7251690368d79a4d00725a9d6e0d5e75af8
Closes-Bug: #1573649
|
|
For some reason the controller-no-external.yaml template is configured
for DHCP on the control plane interface. We switched to static control
plane IPs before the controller-no-external.yaml was created (IIRC), so
I'm not sure how that happened. This change brings the
controller-no-external.yaml in line with the rest of the bonded NIC
templates.
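In other words, the control plane interface entry moves from DHCP to a
static address, matching the other bonded NIC templates (a trimmed
sketch):

    network_config:
      - type: interface
        name: nic1
        use_dhcp: false
        addresses:
          - ip_netmask:
              list_join:
                - /
                - - {get_param: ControlPlaneIp}
                  - {get_param: ControlPlaneSubnetCidr}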
Change-Id: I2ac929e241707db72a0beabf9d5cd7fc14b90f76
|
|
Change-Id: I0cab3cdb2189dab3844f2eda52b8697d05ad3447
|
|
This change configures the hiera merge behavior to 'deeper' [1],
which is useful for merging values when the same hiera key is found
in multiple datafiles.
The hiera default 'native' only picks the value from the key with
the highest priority in the hierarchy.
1. https://docs.puppetlabs.com/hiera/1/lookup_types.html#deep-merging-in-hiera--120
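In hiera 1.x terms this amounts to a hiera.yaml along these lines
(datafile names are illustrative; the 'deeper' behavior also needs the
deep_merge gem installed):

    ---
    :backends:
      - yaml
    :hierarchy:
      - controller   # illustrative datafile names
      - common
    :merge_behavior: deeper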
Change-Id: I88c764d9af510ffbbad9fcaa4b747655e38255c2
|
|
Adds new puppet and puppet pacemaker-specific services for
Glance API and Glance Registry.
The Pacemaker templates extend the default glance services and
swap in the pacemaker-specific puppet-tripleo profile instead.
In the case of pacemaker glance-registry there is no separate
puppet manifest, so only the configuration parameters are maintained
there. (Due to the way the pacemaker glance constraints are written,
the pacemaker variants of this service can't be split out...)
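The swap itself happens in the resource registry; a sketch, where the
exact template paths are assumptions:

    resource_registry:
      # the pacemaker environment overrides the default service mappings
      OS::TripleO::Services::GlanceApi: puppet/services/pacemaker/glance-api.yaml
      OS::TripleO::Services::GlanceRegistry: puppet/services/pacemaker/glance-registry.yaml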
Depends-On: Ifc388f7058ccfff2818f531bcbc00c7179874bbc
Change-Id: I00a8c916129af43cda225754eb10370289bb4b41
|
|
For a while we've had a typo: a parameter named "controllerExtraConfig"
with a lowercase c, which can be quite confusing for users because the
other similar parameters (e.g. NovaComputeExtraConfig) consistently
start with an upper case letter.
We'll support both variants from now on, marking the typoed variant as
deprecated.
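Both spellings now behave the same way in an environment file; the
hiera key below is purely illustrative:

    parameter_defaults:
      # deprecated spelling, still honoured:
      # controllerExtraConfig:
      #   heat::api::workers: 2
      # preferred spelling:
      ControllerExtraConfig:
        heat::api::workers: 2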
Change-Id: Ic67a4297e7fa08308889b95ba35389a01f70f5a4
|
|
Horizon's backends (httpd) see the IP address of the haproxy in the
logs instead of the client address.
This patch:
- Installs the remoteip httpd module [1].
- Uses the X-Forwarded-For HTTP header to override the haproxy address.
- Configures Horizon's logs with the client address via the httpd
  logformat [2].
[1] https://httpd.apache.org/docs/2.4/mod/mod_remoteip.html
[2] https://httpd.apache.org/docs/2.4/mod/mod_log_config.html#logformat
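A guess at the wiring, expressed as hieradata; puppetlabs-apache's
remoteip class takes parameters like these, and the proxy address is
illustrative:

    apache::mod::remoteip::header: 'X-Forwarded-For'
    apache::mod::remoteip::proxy_ips:
      - 192.0.2.10   # the haproxy / internal VIP address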
Change-Id: Ib2f215913065426848b48f6293f33a75aff3d328
Depends-On: I54f0f5549d64768dacca71539c71a28cc99d9d95
|
|
With the addition of the num_engine_workers param in puppet-heat, we
can now wire the HeatWorkers parameter to also set that value, as
with the other workers.
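In the template's hieradata this amounts to mapping the one parameter
onto each puppet-heat worker setting, roughly:

    heat::api::workers: {get_param: HeatWorkers}
    heat::api_cfn::workers: {get_param: HeatWorkers}
    heat::engine::num_engine_workers: {get_param: HeatWorkers}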
Change-Id: I169b648737f797ccd45d12b66705d111e7b95bb7
Depends-On: I53a4333ca16df516294c733f85546c0b0a6a0b88
|
|
This change fixes a logic error when the L3 agent is disabled, where
a Pacemaker constraint (neutron-dhcp-agent-to-l3-agent-constraint) was
still looking for l3_agent_service in the Puppet catalog, but could not
find it because the L3 agent was disabled.
Puppet was reporting this error:
Error: Could not find dependency
Pacemaker::Resource::Service[neutron-l3-agent] for
Pacemaker::Constraint::Base[neutron-dhcp-agent-to-l3-agent-constraint]
Change-Id: I0e5d24d844810c58a3205303399d1c20773af3dd
|