NeutronL3HA used to be enabled by tripleoclient whenever the controller
count was greater than 1. That logic has been moved into the relevant
heat template, so the parameter is rarely needed for general use. If
necessary, deployers can still override the automatic behavior through
extra config.
Change-Id: Id5bb5070b9627fd545357acc9ef51bdc69d10551
Related-Bug: #1623155
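For deployers who do need to force a value, a minimal override
environment could look like the sketch below (the file name and the
value are illustrative, not part of the change):

  # neutron-l3-ha-override.yaml -- pass with -e at deploy time
  parameter_defaults:
    NeutronL3HA: true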
The previous logic left out the default Count entirely when it was
zero, which breaks nested validation, and similar problems are likely
with the other optional defaults. Rework it so the defaulting happens
in the jinja2 logic, and document the interfaces better in
roles_data.yaml.
Change-Id: I7f2eb4a3a0b43c5d2cd0d001ed3c73f783c95c74
Closes-Bug: #1625760
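A rough sketch of the jinja2-side defaulting in overcloud.j2.yaml (the
description text is illustrative; the point is the |default() filter,
which lets a role omit CountDefault entirely):

  {{role.name}}Count:
    description: Number of {{role.name}} nodes to deploy
    type: number
    default: {{role.CountDefault|default(0)}}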
The neutron metadata agent's metadata_ip field is meant to refer to the
nova metadata service, not the local address on the NeutronApiNetwork.
Change-Id: Ibb25a80ea3e66ab3f5cf63c197460d495939778d
Closes-Bug: #1625504
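An illustrative config_settings fragment for the metadata agent
profile; the hiera key used to resolve the nova metadata address is an
assumption here, not a quote from the patch:

  config_settings:
    neutron::agents::metadata::metadata_ip: "%{hiera('nova_metadata_vip')}"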
This is needed because we currently don't generate nova_metadata_vip
or nova_metadata_nodes_ip, and a service profile is required for that.
Unfortunately, puppet-nova currently deploys osapi and metadata through
the same manifest, so this profile doesn't really inject any puppet
code. We can make this more elegant later.
Change-Id: Id7112111f16d0c749a6203b90e29e6d9f1e4d57e
Closes-Bug: #1625543
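A minimal sketch of what such a mostly-empty service profile looks
like; the template version and parameter list are assumptions, not a
copy of the merged file:

  heat_template_version: 2016-04-08
  description: >
    Nova Metadata service (placeholder profile so the *_vip and
    *_node_ips hiera data gets generated)
  parameters:
    EndpointMap:
      type: json
      default: {}
      description: Mapping of service endpoint -> protocol.
  outputs:
    role_data:
      description: Role data for the Nova Metadata service.
      value:
        service_name: nova_metadata
        config_settings: {}
        step_config: ''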
Currently in puppet/services/rabbitmq.yaml we hardcode the thread pool
size to 30 (via the +A30 snippet):
  rabbitmq_environment:
    RABBITMQ_SERVER_ERL_ARGS: '"+K true +A30 +P 1048576 -kernel inet_default_connect_options [{nodelay,true},{raw,6,18,<<5000:64/native>>}] -kernel inet_default_listen_options [{raw,6,18,<<5000:64/native>>}]"'
Upstream rabbit gained the ability to dynamically configure the number
of threads in 3.6.2, via the following commit:
https://github.com/rabbitmq/rabbitmq-server/commit/41ce5ad808863944cd6d62ce7f7e2271f1010582
Given that the default was hardcoded in rabbit from at least 3.4.0 up
until 3.6.2 (see the LP bug associated with this commit), we can remove
this hardcoded value as it overrides a sane default.
Before the change:
  /usr/lib64/erlang/erts-7.3.1/bin/beam.smp -W w -A 64 -K true -A30 -P 1048576 ...
After the change:
  /usr/lib64/erlang/erts-7.3.1/bin/beam.smp -W w -A 64 -K true -P 1048576 ...
So effectively with this change we will have the following:
- With older rabbitmq versions we keep the +A30 default
- With rabbitmq versions >= 3.6.2 the thread count is dynamically
  computed as nr_cpus * 16
Change-Id: I8d30c7d141c29fcc439d40fc767498520be7966e
Closes-Bug: #1625486
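For reference, the resulting setting with +A30 dropped would look
roughly like this (a sketch of the value, not necessarily the exact
merged line):

  rabbitmq_environment:
    RABBITMQ_SERVER_ERL_ARGS: '"+K true +P 1048576 -kernel inet_default_connect_options [{nodelay,true},{raw,6,18,<<5000:64/native>>}] -kernel inet_default_listen_options [{raw,6,18,<<5000:64/native>>}]"'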
This patch conditionally enables Neutron L3 HA when there are multiple
controllers and DVR has not been enabled. If those conditions do not
hold, the value of the NeutronL3HA parameter is used.
Change-Id: If1ebeaf417c0da99d833450e394b71cabff2c800
Closes-Bug: #1623155
This is the initial work to have a function that migrates a full HA
architecture as deployed in Mitaka to the HA architecture as deployed
in Newton, where only a few resources are managed by pacemaker.
The sequence is the following:
1) We remove the desired services from pacemaker's control. At this
point the services are still running normally, via the systemd units
that pacemaker invoked.
2) We do a "systemctl stop <service>" on all controllers for all the
services that were removed from pacemaker's control. We do this to make
sure that during the yum upgrade, the %post sections that call
"systemctl try-restart" do not take ages, because at this point in the
upgrade rabbit is down. The only exceptions are "openstack-core" and
"delay", which are dummy pacemaker resources that do not exist on the
system.
3) We do a "systemctl start <service>" on all nodes for all the
services mentioned above.
We should probably merge this patch only once newton has branched, as
it is very specific to the M/N upgrade.
Closes-Bug: 1617520
Change-Id: I4c409ce58c1a57b6e0decc3cf168b62698b32e39
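A rough sketch of those three steps as an embedded upgrade script, in
the style this repo uses for upgrade tasks; the resource wrapper, the
service name and the exact pcs invocations are illustrative, not the
code from this patch:

  ControllerPacemakerMigrationConfig:
    type: OS::Heat::SoftwareConfig
    properties:
      group: script
      config: |
        #!/bin/bash
        set -eu
        # 1) take the service out of pacemaker's control; the unit that
        #    pacemaker started keeps running
        pcs resource unmanage openstack-sahara-api
        pcs resource delete openstack-sahara-api --force
        # 2) stop it via systemd on every controller so the %post
        #    "systemctl try-restart" calls during yum upgrade return quickly
        systemctl stop openstack-sahara-api
        # ... yum upgrade runs here ...
        # 3) start it again via systemd on all nodes once the packages
        #    are updated
        systemctl start openstack-sahara-api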
While it is possible to override pg_num, pgp_num and size for each
pool, the defaults are hardcoded. This patch instead defaults to the
values given via the ceph::profile::params::osd_pool_default_*
parameters, if any are set.
Closes-Bug: 1623590
Change-Id: Iecde772e7f72fd9abedb54cff4b8f2605df8fedd
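For example, the cluster-wide defaults can be provided as hieradata;
the parameter names are the standard ceph::profile ones, while the
values below are purely illustrative:

  parameter_defaults:
    ExtraConfig:
      ceph::profile::params::osd_pool_default_pg_num: 128
      ceph::profile::params::osd_pool_default_pgp_num: 128
      ceph::profile::params::osd_pool_default_size: 3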
Change-Id: I7a041dab8b1b1edc9c80248e1eef3ce7ab272292
Closes-Bug: 1615056
These are needed so the computes can advertise the VNC URL correctly.
Change-Id: Ic3eba9fe929ce396b584249eb84415de09ab1b62
Closes-Bug: #1623607
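The kind of hieradata the computes need, sketched with puppet-nova
parameter names; the EndpointMap entry used here is an assumption:

  config_settings:
    nova::compute::vncproxy_host: {get_param: [EndpointMap, NovaVNCProxyPublic, host]}
    nova::compute::vncproxy_protocol: {get_param: [EndpointMap, NovaVNCProxyPublic, protocol]}
    nova::compute::vncproxy_port: {get_param: [EndpointMap, NovaVNCProxyPublic, port]}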
For N we cannot assume services are managed by pacemaker.
This adds functions to check whether a service is systemd- or
pcmk-managed and to start/stop it accordingly. For pcmk-managed
services we only stop/disable on the bootstrap node, for example,
whereas systemd-managed services should be stopped/started on all
controllers.
There is also an equivalent change to the check_resource function,
which has been reworked to handle both pcmk and systemd.
Implements: blueprint overcloud-upgrades-workflow-mitaka-to-newton
Change-Id: Ic8252736781dc906b3aef8fc756eb8b2f3bb1f02
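A sketch of the idea as shell helpers, shown embedded the way this
repo carries upgrade scripts; the function names and the pcs/systemctl
checks are illustrative assumptions:

  UpgradeHelpersConfig:
    type: OS::Heat::SoftwareConfig
    properties:
      group: script
      config: |
        #!/bin/bash
        # succeeds if pacemaker knows about the resource
        is_pacemaker_managed() {
          pcs resource show "$1" > /dev/null 2>&1
        }
        # stop a service the appropriate way for how it is managed
        stop_service() {
          if is_pacemaker_managed "$1"; then
            pcs resource disable "$1"  # bootstrap node only in the real scripts
          else
            systemctl stop "$1"        # systemd-managed: act on all controllers
          fi
        }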
This implements support for installing fluentd agents as a composable
service on the overcloud.
Depends-On: I2e1abe4d8c8359e56ff626255ee50c9cacca1940
Implements: tripleo-opstools-centralized-logging
Change-Id: I23b0e23881b742158fcfb6b8c145a3211d45086e
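Enabling it would then be a matter of an environment file along these
lines; the registry key, template path and parameters are assumptions
for illustration:

  resource_registry:
    OS::TripleO::Services::FluentdClient: ../puppet/services/logging/fluentd-client.yaml
  parameter_defaults:
    LoggingServers:
      - host: log.example.com
        port: 24224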
Currently the RabbitMQ cluster uses a predefined port, 35672, for
clustering. This port belongs to the so-called ephemeral port range.
Ephemeral ports are the ports the kernel assigns to an application when
it doesn't specify which port to open, so there is a small chance that
an application started before RabbitMQ itself could grab this port.
While rather unlikely, we did see this happen.
The SELinux change should already be in place. On my CentOS 7 system we
have:
  rabbitmq_port_t tcp 25672
  corenet_tcp_bind_rabbitmq_port(rabbitmq_t)
  corenet_tcp_connect_rabbitmq_port(rabbitmq_t)
First noted via:
https://bugzilla.redhat.com/show_bug.cgi?id=1357522
Closes-Bug: #1623818
Depends-On: I0bcd0d063a7a766483426fdd5ea81cbe1dfaa348
Change-Id: I995bd96c2a17614e954ea5bbae4d58998ef420dc
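One way to pin the clustering listener to the SELinux-registered port
is through the rabbitmq kernel variables; the key names below follow
the puppet-rabbitmq module and their placement is a sketch:

  rabbitmq_kernel_variables:
    inet_dist_listen_min: 25672
    inet_dist_listen_max: 25672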
In a scenario where mongo and the collector are on separate nodes, as
indicated in the bug, the collector should still be able to access the
mongo replset and other hiera data.
Closes-bug: #1620468
Depends-On: I0bcd0d063a7a766483426fdd5ea81cbe1dfaa348
Change-Id: Iadf4c78fb03da183d19e93c30f78817a3cfed425
Previously [1] we updated from_pool_v6 to use str_split, but
mistakenly copy/pasted lines referencing an attribute which isn't
created in these templates.
1. I282dbc025500b1628d4f08a49b54a2adefd38b5f
Closes-Bug: 1624412
Change-Id: I409ff5b36eab2a791db4d352dea5b68096c2dc21
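The corrected pattern looks roughly like the fragment below: the
prefix length comes from splitting a CIDR parameter rather than from a
port attribute the from_pool templates never define (the parameter
names here are illustrative):

  ip_subnet:
    value:
      list_join:
        - ''
        - - {get_param: [IPPool, {get_param: ExternalNetName}, {get_param: NodeIndex}]}
          - '/'
          - {str_split: ['/', {get_param: ExternalNetCidr}, 1]}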
The batch_create and rolling_update keys were incorrectly defined
as properties of the resource instead of update policies.
Change-Id: I19261adc78e4cdc3616f16221e85490a6b48d47b
Closes-Bug: 1623506
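Where these keys belong, sketched on one role's ResourceGroup; the
batch sizes shown are illustrative values, not the ones used in the
template:

  Compute:
    type: OS::Heat::ResourceGroup
    update_policy:
      batch_create:
        max_batch_size: 30
      rolling_update:
        max_batch_size: 1
        min_in_service: 1
    properties:
      count: {get_param: ComputeCount}
      resource_def:
        type: OS::TripleO::Compute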
CephRgw defaults to None in the registry; it seems we missed it in
roles_data after a rebase.
Change-Id: I4ce8b160edfb193f5f6226f8295861e6625ef37b
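The two places that have to agree, sketched below; this mirrors how
similar optional services are wired rather than quoting the patch:

  # resource registry default
  OS::TripleO::Services::CephRgw: OS::Heat::None

  # roles_data.yaml, under the role's ServicesDefault list
  - OS::TripleO::Services::CephRgw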
The Ceph upgrade script was failing on the following:
1. a syntax error in an if condition
2. an attempt to read a possibly unbound variable
3. an attempt to chown a directory which might not exist
This change fixes all of the above.
Closes-Bug: 1623942
Change-Id: I9e9d63d4ab7626893aaf2a25dccfcafbb97ccbdf
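The kinds of fixes involved, illustrated as a fragment of an embedded
upgrade script; the version check, variable and path are made up for
the example:

  CephUpgradeConfig:
    type: OS::Heat::SoftwareConfig
    properties:
      group: script
      config: |
        #!/bin/bash
        set -eu
        CEPH_VERSION=$(ceph --version | awk '{print $3}')
        # 1. valid test syntax in the if condition
        if [[ "${CEPH_VERSION}" =~ ^0\.94 ]]; then
          echo "upgrading from hammer"
        fi
        # 2. guard possibly unbound variables when running under set -u
        echo "osd ids: ${OSD_IDS:-}"
        # 3. only chown the directory if it actually exists
        if [ -d /var/lib/ceph/radosgw ]; then
          chown -R ceph:ceph /var/lib/ceph/radosgw
        fi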
This adjusts the interface to OS::TripleO::AllNodesExtraConfig so it
supports custom/composable/optional roles.
Note this does break backwards compatibility, and I can't see any way
to avoid that. I've converted the in-tree templates, and we'll have to
document carefully and/or provide a script (or automated conversion via
mistral, perhaps?) to allow folks to easily adjust any out-of-tree
templates to the new format.
Basically you just have to:
1. Remove all the *_servers parameters and replace them with one
   "servers" json parameter
2. Replace references to e.g. "controller_servers" with
   "servers, Controller", which does a path-based lookup into the json
   map provided by overcloud.yaml
Change-Id: I5eebf853646b2f6300d6b542fcd4f43e82d3b413
Partially-Implements: blueprint custom-roles
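A minimal out-of-tree skeleton using the new interface might look like
this; the deployment resource and the script body are placeholders to
keep the sketch self-contained:

  heat_template_version: 2016-04-08
  parameters:
    servers:
      type: json
  resources:
    ExtraConfig:
      type: OS::Heat::SoftwareConfig
      properties:
        group: script
        config: |
          #!/bin/bash
          echo "all-nodes extra config placeholder"
    ControllerExtraDeployments:
      type: OS::Heat::SoftwareDeployments
      properties:
        config: {get_resource: ExtraConfig}
        servers: {get_param: [servers, Controller]}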
We need to remove the hard-coded roles from overcloud.j2.yaml, as it
is now valid to e.g. remove BlockStorage completely.
The previous behavior for the per-role upgrade scripts is maintained,
but we'll need to rework this for newton->ocata upgrades, where we can
no longer be sure the servers mapping will contain all roles.
Change-Id: I25e6c84757e3c00fba2aae834cd8206c62e44acf
Partially-Implements: blueprint custom-roles
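Instead of named Controller/Compute/BlockStorage resources, the
template now loops over the roles list, roughly like this sketch (the
resource body is trimmed down for illustration):

  {% for role in roles %}
    {{role.name}}:
      type: OS::Heat::ResourceGroup
      depends_on: Networks
      properties:
        count: {get_param: {{role.name}}Count}
        resource_def:
          type: OS::TripleO::{{role.name}}
  {% endfor %}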
Refactor so the post-deploy steps recently moved into puppet/post.yaml
are generated by jinja2 instead of being hard-coded.
Change-Id: I488e46aaa449c95571bd3d1de9513c3d0730baf3
Partially-Implements: blueprint custom-roles
To communicate with the glance registry, glance API has several
parameters that it uses to form the URI. Right now we default to http;
once we enable TLS everywhere, this will break. Setting the value from
the endpoint map fixes that.
Closes-Bug: #1623477
Change-Id: Id86787cbaa6f87fdcf9c26111c228fd59fbba012
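A sketch of the config_settings side of this, using puppet-glance
parameter names; the exact EndpointMap entry is an assumption:

  config_settings:
    glance::api::registry_host: {get_param: [EndpointMap, GlanceRegistryInternal, host]}
    glance::api::registry_port: {get_param: [EndpointMap, GlanceRegistryInternal, port]}
    glance::api::registry_client_protocol: {get_param: [EndpointMap, GlanceRegistryInternal, protocol]}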
The corresponding puppet-tripleo change,
I9220b7d020dc8ed45dd6ca83ea9647efd67ea648, has merged.
Change-Id: Ic5309ada98c78a15aa3a47dd94acb9e68eb25295
To support custom roles we need to generate these lists of
role-specific data.
Change-Id: Ide97cd57d1c07f7f7ff260ff7a6bbe2b71753bd0
Partially-Implements: blueprint custom-roles
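One example of the kind of per-role data this implies, sketched as the
jinja2 that emits a service chain per role; the property wiring follows
the in-tree pattern from memory and is not a verbatim copy:

  {% for role in roles %}
    {{role.name}}ServiceChain:
      type: OS::TripleO::Services
      properties:
        Services: {get_param: {{role.name}}Services}
        ServiceNetMap: {get_param: ServiceNetMap}
        EndpointMap: {get_attr: [EndpointMap, endpoint_map]}
  {% endfor %}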